Matches in SemOpenAlex for { <https://semopenalex.org/work/W2418374443> ?p ?o ?g. }
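The listing below was presumably produced by a query like the pattern above. A minimal Python sketch for reproducing it against SemOpenAlex's public SPARQL endpoint might look as follows; the endpoint URL and the SPARQL JSON result shape are assumptions taken from common SPARQL service conventions, so check the service documentation before relying on them.

```python
# Sketch: fetch all predicate/object pairs for a SemOpenAlex work via SPARQL.
# The endpoint URL below is an assumption; verify it against the service docs.
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint URL


def build_query(work_id: str) -> str:
    """Return a SPARQL query for all ?p ?o pairs of the given work IRI."""
    return (
        "SELECT ?p ?o WHERE { "
        f"<https://semopenalex.org/work/{work_id}> ?p ?o . "
        "}"
    )


def fetch_triples(work_id: str) -> list[tuple[str, str]]:
    """POST the query and parse SPARQL JSON results (requires network)."""
    data = urllib.parse.urlencode({"query": build_query(work_id)}).encode()
    req = urllib.request.Request(
        ENDPOINT,
        data=data,
        headers={"Accept": "application/sparql-results+json"},
    )
    with urllib.request.urlopen(req) as resp:
        bindings = json.load(resp)["results"]["bindings"]
    return [(b["p"]["value"], b["o"]["value"]) for b in bindings]


if __name__ == "__main__":
    print(build_query("W2418374443"))
```

Each returned pair corresponds to one line of the listing below (predicate, object).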
- W2418374443 abstract "In agreement problems, agents must assign options to variables, typically via negotiation, in order to gain reward. For an assignment to occur, agents with different individual preferences must agree on an option. We provide a formalism for agreement problems and negotiation, which encompasses important real-world problems faced by people daily, e.g., selecting a meeting time, deciding on a location for a group lunch, and assigning roles in a joint task. We focus on the challenge of designing algorithms to improve agent negotiation. We formalize the negotiation process as a series of rounds, where the agents each make an offer until they reach agreement. The offers agents make, and hence the outcome of a negotiation, are influenced by the agents' negotiation strategies, preferences, and utility functions. As an agent negotiates with other agents, it can keep a record of the offers the agents made, and the negotiation round in which they were made. We address the challenge of designing algorithms to improve agent negotiation by providing learning algorithms that agents can use to inform, and thereby improve, their negotiation. In particular, we show how an agent can learn complex models of its own user's preferences by observing the outcomes of negotiations. We also address the problem of modeling the properties of other agents from negotiation histories. This problem is particularly challenging, because negotiation histories do not provide sufficient information to reconstruct a complete model of an agent. As such, we have developed a domain-independent approach to designing and learning abstractions of the properties of other agents. We demonstrate that an agent can learn these abstractions online while negotiating, and use the learned abstractions to increase the efficiency of its negotiation without sacrificing its individual preferences.
We also provide an algorithm for agents to model the reward they receive from different negotiation strategies. When an agent is faced with a wide variety of negotiation behavior in other agents, we show that it can improve its negotiation performance by using experts algorithms to adapt its selection of negotiation strategies according to their performance. We observe that agreement problems are neither fully cooperative (agents have preferences), nor fully competitive (agents must agree to receive reward). We define semi-cooperative agents to capture this space between cooperation and self-interest. The utility function of a semi-cooperative agent trades off preference satisfaction and the time taken to reach agreement. The degree of the trade-off depends on the individual agent. We show how semi-cooperative agents can estimate the utility of different negotiation outcomes by learning online about other agents' behavior. We also provide an in-depth analysis of two different strategies for using the learned information. The approaches presented apply to a wide range of agreement problems. We provide analytical and experimental analysis to demonstrate their effectiveness in representative domains. Keywords: Multiagent Negotiation, Multiagent Learning, Machine Learning, Personal Assistant Agents" @default.
- W2418374443 created "2016-06-24" @default.
- W2418374443 creator A5012709476 @default.
- W2418374443 creator A5076061157 @default.
- W2418374443 date "2009-01-01" @default.
- W2418374443 modified "2023-09-26" @default.
- W2418374443 title "Learning to improve negotiation in semi-cooperative agreement problems" @default.
- W2418374443 cites W1486300940 @default.
- W2418374443 cites W1552645109 @default.
- W2418374443 cites W1553897744 @default.
- W2418374443 cites W1554229428 @default.
- W2418374443 cites W1559785148 @default.
- W2418374443 cites W1564160698 @default.
- W2418374443 cites W169201220 @default.
- W2418374443 cites W1754744892 @default.
- W2418374443 cites W1955370508 @default.
- W2418374443 cites W1978639627 @default.
- W2418374443 cites W1988889793 @default.
- W2418374443 cites W1990442869 @default.
- W2418374443 cites W2018286849 @default.
- W2418374443 cites W2033868663 @default.
- W2418374443 cites W2046934276 @default.
- W2418374443 cites W2053827988 @default.
- W2418374443 cites W2065213477 @default.
- W2418374443 cites W2070104046 @default.
- W2418374443 cites W2086226441 @default.
- W2418374443 cites W2093825590 @default.
- W2418374443 cites W2094123267 @default.
- W2418374443 cites W2102039174 @default.
- W2418374443 cites W2105507006 @default.
- W2418374443 cites W2105581656 @default.
- W2418374443 cites W2110872636 @default.
- W2418374443 cites W2112223472 @default.
- W2418374443 cites W2112374372 @default.
- W2418374443 cites W2113600625 @default.
- W2418374443 cites W2116067849 @default.
- W2418374443 cites W2116515842 @default.
- W2418374443 cites W2118383892 @default.
- W2418374443 cites W2121353282 @default.
- W2418374443 cites W2133857047 @default.
- W2418374443 cites W2135601601 @default.
- W2418374443 cites W2138362680 @default.
- W2418374443 cites W2145340246 @default.
- W2418374443 cites W2152662904 @default.
- W2418374443 cites W2160458341 @default.
- W2418374443 cites W2343987118 @default.
- W2418374443 cites W2402768771 @default.
- W2418374443 cites W2546087207 @default.
- W2418374443 cites W2610496362 @default.
- W2418374443 cites W3123461587 @default.
- W2418374443 cites W67623224 @default.
- W2418374443 cites W98783214 @default.
- W2418374443 hasPublicationYear "2009" @default.
- W2418374443 type Work @default.
- W2418374443 sameAs 2418374443 @default.
- W2418374443 citedByCount "3" @default.
- W2418374443 countsByYear W24183744432013 @default.
- W2418374443 countsByYear W24183744432017 @default.
- W2418374443 crossrefType "journal-article" @default.
- W2418374443 hasAuthorship W2418374443A5012709476 @default.
- W2418374443 hasAuthorship W2418374443A5076061157 @default.
- W2418374443 hasConcept C10138342 @default.
- W2418374443 hasConcept C107457646 @default.
- W2418374443 hasConcept C111919701 @default.
- W2418374443 hasConcept C127413603 @default.
- W2418374443 hasConcept C142362112 @default.
- W2418374443 hasConcept C144133560 @default.
- W2418374443 hasConcept C144237770 @default.
- W2418374443 hasConcept C148220186 @default.
- W2418374443 hasConcept C153349607 @default.
- W2418374443 hasConcept C154945302 @default.
- W2418374443 hasConcept C17744445 @default.
- W2418374443 hasConcept C182306322 @default.
- W2418374443 hasConcept C199539241 @default.
- W2418374443 hasConcept C199776023 @default.
- W2418374443 hasConcept C201995342 @default.
- W2418374443 hasConcept C2780451532 @default.
- W2418374443 hasConcept C33923547 @default.
- W2418374443 hasConcept C41008148 @default.
- W2418374443 hasConcept C41550386 @default.
- W2418374443 hasConcept C558565934 @default.
- W2418374443 hasConcept C56739046 @default.
- W2418374443 hasConcept C73301696 @default.
- W2418374443 hasConcept C98045186 @default.
- W2418374443 hasConceptScore W2418374443C10138342 @default.
- W2418374443 hasConceptScore W2418374443C107457646 @default.
- W2418374443 hasConceptScore W2418374443C111919701 @default.
- W2418374443 hasConceptScore W2418374443C127413603 @default.
- W2418374443 hasConceptScore W2418374443C142362112 @default.
- W2418374443 hasConceptScore W2418374443C144133560 @default.
- W2418374443 hasConceptScore W2418374443C144237770 @default.
- W2418374443 hasConceptScore W2418374443C148220186 @default.
- W2418374443 hasConceptScore W2418374443C153349607 @default.
- W2418374443 hasConceptScore W2418374443C154945302 @default.
- W2418374443 hasConceptScore W2418374443C17744445 @default.
- W2418374443 hasConceptScore W2418374443C182306322 @default.
- W2418374443 hasConceptScore W2418374443C199539241 @default.
- W2418374443 hasConceptScore W2418374443C199776023 @default.
- W2418374443 hasConceptScore W2418374443C201995342 @default.
- W2418374443 hasConceptScore W2418374443C2780451532 @default.