Matches in SemOpenAlex for { <https://semopenalex.org/work/W4223411650> ?p ?o ?g. }
Showing items 1 to 71 of 71, with 100 items per page.
- W4223411650 endingPage "219" @default.
- W4223411650 startingPage "210" @default.
- W4223411650 abstract "Ground-to-air confrontation involves large-scale task assignment and must handle many concurrent assignments and random events. Existing task assignment methods applied to ground-to-air confrontation suffer from low efficiency on complex tasks and from interaction conflicts in multiagent systems. This study proposes a multiagent architecture based on one general agent with multiple narrow agents (OGMN) to reduce task assignment conflicts. To address the slow speed of traditional dynamic task assignment algorithms, this paper proposes the proximal policy optimization for task assignment of general and narrow agents (PPO-TAGNA) algorithm. Building on the idea of the optimal assignment strategy algorithm and the training framework of deep reinforcement learning (DRL), it adds a multihead attention mechanism and a stage reward mechanism to the bilateral band clipping PPO algorithm to improve training efficiency. Finally, simulation experiments are carried out on a digital battlefield. The OGMN-based multiagent architecture combined with the PPO-TAGNA algorithm obtains higher rewards faster and achieves a higher win ratio. Analysis of agent behavior verifies the efficiency, superiority and rationality of the method's resource utilization." @default.
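The abstract above references a PPO-based algorithm (PPO-TAGNA). As a minimal illustration of the clipped surrogate objective that vanilla PPO uses, here is a short sketch; the paper's "bilateral band clipping" variant and its attention/stage-reward mechanisms are not detailed in this record, so only the standard clipping that PPO-TAGNA builds on is shown, and the function name and `eps` default are illustrative assumptions.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Per-sample standard PPO clipped surrogate objective.

    ratio: pi_new(a|s) / pi_old(a|s) for the sampled actions.
    advantage: estimated advantage for those actions.
    eps: clipping half-width (0.2 is a common default, assumed here).
    """
    unclipped = ratio * advantage
    # Clipping the probability ratio to [1-eps, 1+eps] bounds the
    # incentive to move the policy far from the old one in one update.
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the minimum makes the objective a pessimistic bound.
    return np.minimum(unclipped, clipped)

# A large ratio with positive advantage is capped at (1 + eps) * advantage:
print(ppo_clip_objective(np.array([1.5]), np.array([2.0])))  # → [2.4]
```

The objective would be averaged over a minibatch and maximized by gradient ascent; the record itself gives no further implementation detail.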
- W4223411650 created "2022-04-14" @default.
- W4223411650 creator A5002528386 @default.
- W4223411650 creator A5030921116 @default.
- W4223411650 creator A5032254756 @default.
- W4223411650 creator A5032894412 @default.
- W4223411650 creator A5075969066 @default.
- W4223411650 date "2023-01-01" @default.
- W4223411650 modified "2023-10-13" @default.
- W4223411650 title "Task assignment in ground-to-air confrontation based on multiagent deep reinforcement learning" @default.
- W4223411650 cites W2205857407 @default.
- W4223411650 cites W2772007490 @default.
- W4223411650 cites W2789490856 @default.
- W4223411650 cites W2899192399 @default.
- W4223411650 cites W2969419629 @default.
- W4223411650 cites W3130936605 @default.
- W4223411650 cites W3140715751 @default.
- W4223411650 doi "https://doi.org/10.1016/j.dt.2022.04.001" @default.
- W4223411650 hasPublicationYear "2023" @default.
- W4223411650 type Work @default.
- W4223411650 citedByCount "6" @default.
- W4223411650 countsByYear W42234116502022 @default.
- W4223411650 countsByYear W42234116502023 @default.
- W4223411650 crossrefType "journal-article" @default.
- W4223411650 hasAuthorship W4223411650A5002528386 @default.
- W4223411650 hasAuthorship W4223411650A5030921116 @default.
- W4223411650 hasAuthorship W4223411650A5032254756 @default.
- W4223411650 hasAuthorship W4223411650A5032894412 @default.
- W4223411650 hasAuthorship W4223411650A5075969066 @default.
- W4223411650 hasBestOaLocation W42234116501 @default.
- W4223411650 hasConcept C120314980 @default.
- W4223411650 hasConcept C127413603 @default.
- W4223411650 hasConcept C154945302 @default.
- W4223411650 hasConcept C195244886 @default.
- W4223411650 hasConcept C201995342 @default.
- W4223411650 hasConcept C2779669469 @default.
- W4223411650 hasConcept C2780451532 @default.
- W4223411650 hasConcept C41008148 @default.
- W4223411650 hasConcept C95457728 @default.
- W4223411650 hasConcept C97541855 @default.
- W4223411650 hasConceptScore W4223411650C120314980 @default.
- W4223411650 hasConceptScore W4223411650C127413603 @default.
- W4223411650 hasConceptScore W4223411650C154945302 @default.
- W4223411650 hasConceptScore W4223411650C195244886 @default.
- W4223411650 hasConceptScore W4223411650C201995342 @default.
- W4223411650 hasConceptScore W4223411650C2779669469 @default.
- W4223411650 hasConceptScore W4223411650C2780451532 @default.
- W4223411650 hasConceptScore W4223411650C41008148 @default.
- W4223411650 hasConceptScore W4223411650C95457728 @default.
- W4223411650 hasConceptScore W4223411650C97541855 @default.
- W4223411650 hasFunder F4320321001 @default.
- W4223411650 hasFunder F4320324173 @default.
- W4223411650 hasLocation W42234116501 @default.
- W4223411650 hasOpenAccess W4223411650 @default.
- W4223411650 hasPrimaryLocation W42234116501 @default.
- W4223411650 hasRelatedWork W1984179778 @default.
- W4223411650 hasRelatedWork W2044153644 @default.
- W4223411650 hasRelatedWork W2354389285 @default.
- W4223411650 hasRelatedWork W2362261179 @default.
- W4223411650 hasRelatedWork W2363573290 @default.
- W4223411650 hasRelatedWork W2383940993 @default.
- W4223411650 hasRelatedWork W2387459700 @default.
- W4223411650 hasRelatedWork W2387678935 @default.
- W4223411650 hasRelatedWork W2390277087 @default.
- W4223411650 hasRelatedWork W4375930838 @default.
- W4223411650 hasVolume "19" @default.
- W4223411650 isParatext "false" @default.
- W4223411650 isRetracted "false" @default.
- W4223411650 workType "article" @default.