Matches in SemOpenAlex for { <https://semopenalex.org/work/W3203310587> ?p ?o ?g. }
Showing items 1 to 75 of 75.
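As a minimal sketch, and assuming SemOpenAlex exposes its public SPARQL endpoint at https://semopenalex.org/sparql (the endpoint URL is not stated on this page), the listing below could be reproduced with a Python query such as the following; the named-graph variable ?g from the pattern above is dropped for brevity.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Assumption: SemOpenAlex serves a public SPARQL endpoint at this URL.
ENDPOINT = "https://semopenalex.org/sparql"

# Same subject as the pattern in the page header; ?g (named graph) is omitted.
QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W3203310587> ?p ?o .
}
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for row in results["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])
```

Each returned binding corresponds to one of the 75 predicate/object rows listed below.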
- W3203310587 endingPage "657" @default.
- W3203310587 startingPage "642" @default.
- W3203310587 abstract "In evolutionary games where a group works on the same task, changes in game rules, personal interests, crowd size, and external supervision have uncertain effects on individual decision-making and game results. Within the Markov decision framework, a single-task multi-decision evolutionary game model based on multi-agent reinforcement learning is proposed to explore the evolutionary rules that emerge during a game. The model can improve the result of an evolutionary game and facilitate the completion of the task. First, based on multi-agent theory and to address the shortcomings of the original model, a negative-feedback tax penalty mechanism is proposed to guide the strategy selection of individuals in the group. In addition, to evaluate the group's evolutionary game results in the model, a calculation method for the group intelligence level is defined. Second, the Q-learning algorithm is used to improve the guiding effect of the negative-feedback tax penalty mechanism. In the model, the selection strategy of the Q-learning algorithm is improved, and a bounded-rationality evolutionary game strategy is proposed based on the rules of evolutionary games and the bounded rationality of individuals. Finally, simulation results show that the proposed model can effectively guide individuals to choose cooperation strategies that are beneficial to task completion and stability under different negative feedback factor values and different group sizes, thereby improving the group intelligence level." @default.
- W3203310587 created "2021-10-11" @default.
- W3203310587 creator A5022014664 @default.
- W3203310587 creator A5028525418 @default.
- W3203310587 creator A5045921163 @default.
- W3203310587 date "2021-06-01" @default.
- W3203310587 modified "2023-10-14" @default.
- W3203310587 title "A single-task and multi-decision evolutionary game model based on multi-agent reinforcement learning" @default.
- W3203310587 doi "https://doi.org/10.23919/jsee.2021.000055" @default.
- W3203310587 hasPublicationYear "2021" @default.
- W3203310587 type Work @default.
- W3203310587 sameAs 3203310587 @default.
- W3203310587 citedByCount "8" @default.
- W3203310587 countsByYear W32033105872022 @default.
- W3203310587 countsByYear W32033105872023 @default.
- W3203310587 crossrefType "journal-article" @default.
- W3203310587 hasAuthorship W3203310587A5022014664 @default.
- W3203310587 hasAuthorship W3203310587A5028525418 @default.
- W3203310587 hasAuthorship W3203310587A5045921163 @default.
- W3203310587 hasBestOaLocation W32033105871 @default.
- W3203310587 hasConcept C105795698 @default.
- W3203310587 hasConcept C106189395 @default.
- W3203310587 hasConcept C112972136 @default.
- W3203310587 hasConcept C119857082 @default.
- W3203310587 hasConcept C144237770 @default.
- W3203310587 hasConcept C154945302 @default.
- W3203310587 hasConcept C159886148 @default.
- W3203310587 hasConcept C162324750 @default.
- W3203310587 hasConcept C177142836 @default.
- W3203310587 hasConcept C187736073 @default.
- W3203310587 hasConcept C20249471 @default.
- W3203310587 hasConcept C2780451532 @default.
- W3203310587 hasConcept C33923547 @default.
- W3203310587 hasConcept C41008148 @default.
- W3203310587 hasConcept C58694771 @default.
- W3203310587 hasConcept C81917197 @default.
- W3203310587 hasConcept C97541855 @default.
- W3203310587 hasConceptScore W3203310587C105795698 @default.
- W3203310587 hasConceptScore W3203310587C106189395 @default.
- W3203310587 hasConceptScore W3203310587C112972136 @default.
- W3203310587 hasConceptScore W3203310587C119857082 @default.
- W3203310587 hasConceptScore W3203310587C144237770 @default.
- W3203310587 hasConceptScore W3203310587C154945302 @default.
- W3203310587 hasConceptScore W3203310587C159886148 @default.
- W3203310587 hasConceptScore W3203310587C162324750 @default.
- W3203310587 hasConceptScore W3203310587C177142836 @default.
- W3203310587 hasConceptScore W3203310587C187736073 @default.
- W3203310587 hasConceptScore W3203310587C20249471 @default.
- W3203310587 hasConceptScore W3203310587C2780451532 @default.
- W3203310587 hasConceptScore W3203310587C33923547 @default.
- W3203310587 hasConceptScore W3203310587C41008148 @default.
- W3203310587 hasConceptScore W3203310587C58694771 @default.
- W3203310587 hasConceptScore W3203310587C81917197 @default.
- W3203310587 hasConceptScore W3203310587C97541855 @default.
- W3203310587 hasIssue "3" @default.
- W3203310587 hasLocation W32033105871 @default.
- W3203310587 hasOpenAccess W3203310587 @default.
- W3203310587 hasPrimaryLocation W32033105871 @default.
- W3203310587 hasRelatedWork W2057550885 @default.
- W3203310587 hasRelatedWork W2355089623 @default.
- W3203310587 hasRelatedWork W2381024141 @default.
- W3203310587 hasRelatedWork W2384572063 @default.
- W3203310587 hasRelatedWork W2390784065 @default.
- W3203310587 hasRelatedWork W2949964922 @default.
- W3203310587 hasRelatedWork W2952448454 @default.
- W3203310587 hasRelatedWork W4226437174 @default.
- W3203310587 hasRelatedWork W4313038809 @default.
- W3203310587 hasRelatedWork W4319083788 @default.
- W3203310587 hasVolume "32" @default.
- W3203310587 isParatext "false" @default.
- W3203310587 isRetracted "false" @default.
- W3203310587 magId "3203310587" @default.
- W3203310587 workType "article" @default.
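The abstract quoted in the triples above describes three ingredients: a negative-feedback tax penalty that steers strategy selection, a group intelligence measure, and a Q-learning rule with a bounded-rationality selection strategy. Purely as an illustration of how such a setup could fit together, and not as the paper's actual formulation, the following Python sketch uses an assumed payoff, an assumed tax form, and epsilon-greedy selection as a stand-in for the bounded-rationality strategy; every name and parameter value here is hypothetical.

```python
import random

# Illustration only: the payoff values, the form of the tax penalty, and all
# parameters below are assumptions for demonstration, not the paper's model.
N_AGENTS = 20                          # group size
ACTIONS = ("cooperate", "defect")
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration
TAX = 0.6                              # assumed negative-feedback factor
BONUS = 0.2                            # assumed free-riding bonus for defectors

# One Q-table per agent; the single shared task is modelled as a single state.
q_tables = [{a: 0.0 for a in ACTIONS} for _ in range(N_AGENTS)]

def choose(q):
    """Epsilon-greedy choice, standing in for the bounded-rationality strategy."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(q, key=q.get)

coop_rate = 0.0
for step in range(2000):
    actions = [choose(q) for q in q_tables]
    coop_rate = actions.count("cooperate") / N_AGENTS
    for q, action in zip(q_tables, actions):
        if action == "cooperate":
            reward = coop_rate                     # share of the task benefit
        else:
            # Defectors free-ride but pay a tax that grows as cooperation
            # drops, i.e. the assumed negative-feedback penalty.
            reward = coop_rate + BONUS - TAX * (1.0 - coop_rate)
        q[action] += ALPHA * (reward + GAMMA * max(q.values()) - q[action])

print(f"cooperation rate after learning: {coop_rate:.2f}")
```

With these toy values the penalty outweighs the free-riding bonus whenever cooperation falls below roughly two thirds of the group, which is the kind of negative-feedback pressure toward cooperation that the abstract attributes to the tax mechanism.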