Matches in SemOpenAlex for { <https://semopenalex.org/work/W1589774673> ?p ?o ?g. }
Showing items 1 to 53 of 53, with 100 items per page.
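For reproducibility, a query of the shape shown in the header can be run against SemOpenAlex's public SPARQL endpoint. This is a sketch, not the original query: the endpoint URL and the use of `GRAPH` to bind the `?g` variable are assumptions inferred from the pattern above.

```sparql
# Sketch of the query behind this listing.
# Assumed endpoint: https://semopenalex.org/sparql
# ?g is bound via GRAPH, since the pattern above requests the named
# graph alongside each predicate/object pair for the work.
SELECT ?p ?o ?g
WHERE {
  GRAPH ?g {
    <https://semopenalex.org/work/W1589774673> ?p ?o .
  }
}
LIMIT 100
```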
- W1589774673 abstract "The act of bluffing confounds game designers to this day. The very nature of bluffing is even open to debate, adding further complication to the process of creating intelligent virtual players that can bluff, and hence play, realistically. Through the use of intelligent, learning agents, and carefully designed agent outlooks, an agent can in fact learn to predict its opponents’ reactions based not only on its own cards, but on the actions of those around it. With this wider scope of understanding, an agent can learn to bluff its opponents, with the action representing not an “illogical” move, as bluffing is often viewed, but rather an act of maximising returns through effective statistical optimisation. By using a Temporal Difference lambda (TD(λ)) reinforcement learning algorithm (Sutton, 1988; Sutton, 1989) to continuously adapt the intelligence of neural network agents, agents are shown, in this chapter, to be able to learn to bluff without outside prompting, and even to learn to call each other’s bluffs in free competitive play. While many card games involve an element of bluffing, simulating and fully understanding bluffing remains one of the most elusive tasks presented to the game design engineer (Hurwitz & Marwala, 2005, 2007a,b). The entire process of bluffing relies on performing an action that is unexpected, and is thus misinterpreted by one’s opponent. For this reason, static rules are doomed to failure: once they become predictable, they cannot be misinterpreted. In order to create an artificially intelligent agent that can bluff, one must first create an agent that is capable of learning. Many learning algorithms have been developed and successfully implemented, including neural networks (Mohamed et al., 2005), support vector machines (Msiza et al., 2007) and neuro-fuzzy systems (Tettey & Marwala, 2006). These learning algorithms have been applied to areas as diverse as civil engineering (Marwala, 2000), mechanical engineering (Marwala & Hunt, 1999), aerospace engineering (Marwala, 2001) and biomedical engineering (Leke et al., 2006). The agent must be able to learn not only the inherent nature of the game it is playing, but also the trends emerging from its opponents’ behaviour, since bluffing is only plausible when one can anticipate the opponent’s reactions to one’s own actions. Firstly, the game to be modelled is detailed, with the rationale for its choice explained. This chapter then details the system and agent architecture, which is of paramount importance since it not only ensures that the correct information is available to the agent, but also has a direct impact on the efficiency of the learning algorithms utilised. Once the system is fully illustrated, the actual learning of the agents is demonstrated, with the appropriate findings detailed." @default.
- W1589774673 created "2016-06-24" @default.
- W1589774673 creator A5029034465 @default.
- W1589774673 creator A5066130528 @default.
- W1589774673 date "2009-01-01" @default.
- W1589774673 modified "2023-10-14" @default.
- W1589774673 title "A Multi-Agent Approach to Bluffing" @default.
- W1589774673 cites W1484980274 @default.
- W1589774673 cites W1489238133 @default.
- W1589774673 cites W1754549361 @default.
- W1589774673 cites W1922303085 @default.
- W1589774673 cites W2026635907 @default.
- W1589774673 cites W2041678420 @default.
- W1589774673 cites W2100677568 @default.
- W1589774673 cites W2117155555 @default.
- W1589774673 cites W2121863487 @default.
- W1589774673 cites W2133083074 @default.
- W1589774673 cites W2145945775 @default.
- W1589774673 cites W2146268548 @default.
- W1589774673 cites W2164300250 @default.
- W1589774673 cites W2237036 @default.
- W1589774673 cites W2258148307 @default.
- W1589774673 doi "https://doi.org/10.5772/6603" @default.
- W1589774673 hasPublicationYear "2009" @default.
- W1589774673 type Work @default.
- W1589774673 sameAs 1589774673 @default.
- W1589774673 citedByCount "2" @default.
- W1589774673 countsByYear W15897746732022 @default.
- W1589774673 countsByYear W15897746732023 @default.
- W1589774673 crossrefType "book-chapter" @default.
- W1589774673 hasAuthorship W1589774673A5029034465 @default.
- W1589774673 hasAuthorship W1589774673A5066130528 @default.
- W1589774673 hasBestOaLocation W15897746731 @default.
- W1589774673 hasConcept C41008148 @default.
- W1589774673 hasConceptScore W1589774673C41008148 @default.
- W1589774673 hasLocation W15897746731 @default.
- W1589774673 hasLocation W15897746732 @default.
- W1589774673 hasOpenAccess W1589774673 @default.
- W1589774673 hasPrimaryLocation W15897746731 @default.
- W1589774673 hasRelatedWork W1596801655 @default.
- W1589774673 hasRelatedWork W2049775471 @default.
- W1589774673 hasRelatedWork W2350741829 @default.
- W1589774673 hasRelatedWork W2358668433 @default.
- W1589774673 hasRelatedWork W2376932109 @default.
- W1589774673 hasRelatedWork W2382290278 @default.
- W1589774673 hasRelatedWork W2390279801 @default.
- W1589774673 hasRelatedWork W2748952813 @default.
- W1589774673 hasRelatedWork W2899084033 @default.
- W1589774673 hasRelatedWork W2530322880 @default.
- W1589774673 isParatext "false" @default.
- W1589774673 isRetracted "false" @default.
- W1589774673 magId "1589774673" @default.
- W1589774673 workType "book-chapter" @default.
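The abstract above names TD(λ) (Sutton, 1988) as the rule used to train the neural-network agents. As a reference point only, since the chapter's exact variant is not recoverable from this record, the standard TD(λ) update with eligibility traces for a parameterised value function V(s; θ) is:

```latex
\delta_t = r_{t+1} + \gamma\, V(s_{t+1};\theta) - V(s_t;\theta) % TD error
e_t = \gamma \lambda\, e_{t-1} + \nabla_{\theta} V(s_t;\theta)  % eligibility trace
\theta \leftarrow \theta + \alpha\, \delta_t\, e_t              % weight update
```

Here α is the learning rate, γ the discount factor, and λ the trace-decay parameter; λ = 0 recovers one-step TD, while larger λ assigns credit further back along the hand, which is what lets a delayed payoff reinforce an earlier bluff.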