description: solution concept of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy
63 results
by David Easley and Jon Kleinberg · 15 Nov 2010 · 1,535pp · 337,071 words
6.1 What is a Game? 166
6.2 Reasoning about Behavior in a Game 168
6.3 Best Responses and Dominant Strategies 173
6.4 Nash Equilibrium 176
6.5 Multiple Equilibria: Coordination Games 178
6.6 Multiple Equilibria: The Hawk-Dove Game 182
6.7 Mixed Strategies 183
6.8
…
Dilemma, where strictly dominant strategies for each player imply a particular course of action regardless of what the other player is doing. 6.4 Nash Equilibrium When neither player in a two-player game has a strictly dominant strategy, we need some other way of predicting what is likely to happen
…
best response to S. Nash shared the 1994 Nobel Prize in Economics for his development and analysis of this idea. To understand the idea of Nash equilibrium, we should first ask why a pair of strategies that are not best responses to each other would not constitute an equilibrium. The answer
…
the other player, and then find strategies that are mutual best responses. 6.5 Multiple Equilibria: Coordination Games For a game with a single Nash equilibrium, such as the Three-Client Game in the previous section, it seems reasonable to predict that the players will play the strategies in this equilibrium
…
one player will not be using a best response to what the other is doing. Some natural games, however, can have more than one Nash equilibrium, and in this case it becomes difficult to predict how rational players will actually behave in the game. We consider some fundamental examples of this
…
individual strategies in these pairs are not best responses to each other. So as in the coordination games we looked at earlier, the concept of Nash equilibrium helps to narrow down the set of reasonable predictions, but it does not provide a unique prediction. The Hawk-
…
point for Player 1 to put any probability on her weaker pure strategy. But we already established that pure strategies cannot be part of any Nash equilibrium for Matching Pennies, and because pure strategies are the best responses whenever 1 − 2q ≠ 2q − 1, probabilities that make these two expectations unequal cannot
…
be part of a Nash equilibrium either. So we’ve concluded that in any Nash equilibrium for the mixed-strategy version of Matching Pennies, we must have 1 − 2q = 2q − 1, or in other words, q = 1/2.
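The indifference argument in these snippets is easy to verify mechanically. A minimal sketch, assuming the usual ±1 Matching Pennies payoffs (the function name is ours):

```python
from fractions import Fraction

def p1_expected_payoffs(q):
    """Player 2 plays Heads with probability q. Player 1 (the matcher)
    gets +1 on a match and -1 on a mismatch."""
    from_heads = q * 1 + (1 - q) * (-1)   # = 2q - 1
    from_tails = q * (-1) + (1 - q) * 1   # = 1 - 2q
    return from_heads, from_tails

# Player 1 is indifferent exactly when 2q - 1 = 1 - 2q, i.e. q = 1/2.
assert p1_expected_payoffs(Fraction(1, 2)) == (0, 0)

# Any other q makes the two expectations unequal, so the best response
# would be a pure strategy -- ruling that q out of any equilibrium.
h, t = p1_expected_payoffs(Fraction(3, 5))
assert h != t
```

Exact rational arithmetic (`Fraction`) avoids any floating-point ambiguity in the indifference check.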
…
the highly symmetric structure of Matching Pennies; as we will see in subsequent examples in the next section, when the payoffs are less symmetric, the Nash equilibrium can consist of unequal probabilities. This notion of indifference is a general principle behind the computation of mixed-strategy equilibria, in two-player, two-
…
has a stronger option (pass) and a weaker option (run).) Just as in Matching Pennies, it’s easy to check that there is no Nash equilibrium where either player uses a pure strategy: both have to make their behavior unpredictable by randomizing. So let’s work out a mixed-strategy equilibrium
…
be indifferent between the two options, and will get the same expected payoff however you choose. 6.9 Pareto-Optimality and Social Optimality In a Nash equilibrium, each player’s strategy is a best response to the other player’s strategies. In other words, the players are optimizing individually. But this
…
idea that the players can construct a binding agreement to actually play the superior pair choice of strategies: if this alternate choice is not a Nash equilibrium, then absent a binding agreement, at least one player would want to switch to a different strategy. As an illustration of why this is
…
the Prisoner’s Dilemma — are examples of games in which the only outcome that is not Pareto-optimal is the one corresponding to the unique Nash equilibrium. Social Optimality. A stronger condition that is even simpler to state is social optimality. A choice of strategies — one by each player — is a
…
version of the Exam-or-Presentation Game with an easier exam, yielding the payoff matrix that we saw earlier in Figure 6.4, the unique Nash equilibrium is also the unique social optimum. 6.10 Advanced Material: Dominated Strategies and Dynamic Games In this final section, we consider two further issues
…
+1, . . . , Sn) for all other possible strategies Si′ available to player i. Finally, an outcome consisting of strategies (S1, S2, . . . , Sn) is a Nash equilibrium if each strategy it contains is a best response to all the others. B. Dominated Strategies and their Role in Strategic Reasoning In Sections 6
…
predicted using the structure of the dominated strategies. In this way, reasoning based on dominated strategies forms an intriguing intermediate approach between dominant strategies and Nash equilibrium: on the one hand, it can be more powerful than reasoning based solely on dominant strategies; but on the other hand, it still relies
…
it’s worth making some observations about the example of the Facility Location Game. First, the pair of strategies (C, D) is indeed the unique Nash equilibrium in the game, and when we discuss the iterated deletion of strictly dominated strategies in general, we will see that it
…
search for Nash equilibria. But beyond this, it is also an effective way to justify the Nash equilibria that one finds. When we first introduced Nash equilibrium, we observed that it couldn’t be derived purely from an assumption of rationality on the part of the players; rather, we had to
…
of the game would be found at an equilibrium from which neither player had an incentive to deviate. On the other hand, when a unique Nash equilibrium emerges from the iterated deletion of strictly dominated strategies, it is in fact a prediction made purely based on the assumptions of the players’
…
steps of such reasoning, we’ll have a game in which only the 500th and 501st nodes have survived as strategies. This is the unique Nash equilibrium for the game, and this unique prediction can be justified by a very long sequence of deletions of dominated strategies. It’s also interesting
…
incentive to deviate from Si to Si′, and Si′ is still present in the reduced game, contradicting our assumption that E is a Nash equilibrium of the reduced game. This establishes that the game we end up with, after iterated deletion of strictly dominated strategies, still has all the
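The iterated-deletion procedure these snippets describe can be sketched in a few lines for a two-player game (the matrix encoding and function name are ours):

```python
def iterated_deletion(payoffs_1, payoffs_2):
    """Iteratively delete strictly dominated pure strategies.
    payoffs_i[r][c] is player i's payoff when player 1 plays row r
    and player 2 plays column c. Returns the surviving rows/columns."""
    rows = list(range(len(payoffs_1)))
    cols = list(range(len(payoffs_1[0])))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:
            # r is strictly dominated if some other surviving row does
            # strictly better against every surviving column
            if any(all(payoffs_1[r2][c] > payoffs_1[r][c] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r); changed = True
        for c in cols[:]:
            if any(all(payoffs_2[r][c2] > payoffs_2[r][c] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c); changed = True
    return rows, cols

# Prisoner's Dilemma: Defect (index 1) strictly dominates for both,
# so only the (Defect, Defect) profile survives.
p1 = [[-1, -10], [0, -5]]
p2 = [[-1, 0], [-10, -5]]
assert iterated_deletion(p1, p2) == ([1], [1])
```

As the surrounding text argues, when a single profile survives, it is the game's unique Nash equilibrium.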
…
at least as well, and sometimes strictly better, by playing Hunt Hare. Nevertheless, the outcome in which both players choose Hunt Stag is a Nash equilibrium, since each is playing a best response to the other’s strategy. Thus, deleting weakly dominated strategies is not in general a safe thing to
…
will discuss an alternate equilibrium concept known as evolutionary stability that in fact does eliminate weakly dominated strategies in a principled way. The relationship between Nash equilibrium, evolutionary stability and weakly dominated strategies is considered in the exercises at the end of this part of the book. C. Dynamic Games Our
…
player A plays sA and player B plays a best response to sA. 2. Consider the following statement: In a Nash equilibrium of a two-player game each player is playing an optimal strategy, so the two players’ strategies are social-welfare maximizing. Is this statement
…
[payoff-matrix fragment: strategies A, D; entries 0, 0 and 4, 4] (a) Find all pure-strategy Nash equilibria for this game. (b) This game also has a mixed-strategy Nash equilibrium; find the probabilities the players use in this equilibrium, together with an explanation for your answer. (c) Keeping in mind Schelling’s focal point idea
…
do you expect player 3 to do and why? What triple of strategies would you expect to see played? Is this list of strategies a Nash equilibrium of the simultaneous move game between the three players? Chapter 7 Evolutionary Game Theory In Chapter 6, we developed the
…
section, we can write down the condition for (S, S) (i.e. the choice of S by both players) to be a Nash equilibrium. (S, S) is a Nash equilibrium when S is a best response to the choice of S by the other player: this translates into the simple condition a ≥ c
…
Stag Hunt: A version with added benefit from hunting hare alone In this case, the choice of strategies (Hunt Stag, Hunt Stag) is still a Nash equilibrium: if each player expects the other to hunt stag, then hunting stag is a best response. But Hunt Stag is not an evolutionarily stable strategy
…
let’s see how to apply these ideas to the Hawk-Dove Game. First, since any evolutionarily stable mixed strategy must correspond to a mixed Nash equilibrium of the game, this gives us a way to search for possible evolutionarily stable strategies: we first
…
strategies. (b) Your answers to part (a) should suggest that the difference between the predictions of evolutionary stability and Nash equilibrium arises when a Nash equilibrium uses a weakly dominated strategy. We say that a strategy s∗i is weakly dominated if player i has another strategy s′i with the
…
claim that makes a connection between evolutionarily stable strategies and weakly dominated strategies. Claim: Suppose that in the game below, (X, X) is a Nash equilibrium and that strategy X is weakly dominated. Then X is not an evolutionarily stable strategy. Player B X Y X a, a b, c Player
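The claim relating weak dominance and evolutionary stability can be checked against the standard ESS condition for a symmetric 2×2 game (a = payoff of X against X, b = X against Y, c = Y against X, d = Y against Y). The payoff numbers for the Stag Hunt variant below are illustrative:

```python
def is_ess(a, b, c, d):
    """X is evolutionarily stable iff a > c, or a == c and b > d:
    a small invading population of Y must do strictly worse."""
    return a > c or (a == c and b > d)

def is_symmetric_ne(a, c):
    """(X, X) is a Nash equilibrium iff X is a best response to X,
    i.e. a >= c (the condition a >= c quoted in the text above)."""
    return a >= c

# Stag Hunt variant with an added benefit from hunting hare alone
# (illustrative payoffs): a hare hunter earns 4 even against a stag hunter.
a, b, c, d = 4, 0, 4, 3
assert is_symmetric_ne(a, c)      # (Stag, Stag) is still a Nash equilibrium
assert not is_ess(a, b, c, d)     # ...but Stag is not evolutionarily stable

# In the plain Stag Hunt (hare worth 3 regardless), Stag is an ESS.
assert is_ess(4, 0, 3, 3)
```

This matches the claim: when a = c with b ≤ d, the strategy is weakly dominated and (X, X) remains a Nash equilibrium, yet X fails the ESS test.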
…
each player, so that each player’s strategy is a best response to all the others. The notions of dominant strategies, mixed strategies and Nash equilibrium with mixed strategies all have direct parallels with their definitions for two-player games. In this traffic game, there is generally not a dominant strategy
…
it was before the new road was built. That is, the total travel time of the population can be reduced (below that in the original Nash equilibrium from part (b)) by assigning travelers to routes. There are many assignments of routes that will accomplish this. Find one. Explain why your reassignment
…
transactions in such situations. This discussion motivates the equilibrium concept we will use for this game, which is a generalization of Nash equilibrium. As in the standard notion of Nash equilibrium from Chapter 6, it will be based on a set of strategies such that each player is choosing a best response to
…
) and the strategies the other traders use (what bids and asks they post). So everyone is employing a best response just as in any Nash equilibrium. The one difference here is that since the sellers and buyers move second, they are required to choose optimally given whatever prices the traders have
…
posted, and the traders know this. This equilibrium is called a subgame perfect Nash equilibrium; in this chapter, we will simply refer to it as an equilibrium.2 The two-stage nature of our game here is particularly easy
…
squares, the buyer and the seller as circles, and edges representing pairs of people who are able to transact directly. Then describe what the possible Nash equilibrium outcomes are, together with an explanation for your answer. 5. Consider a trading network with intermediaries in which there is one seller S, two
…
profit do the traders make? (c) Suppose now that we add edges representing the idea that each buyer can trade with each trader. Find a Nash equilibrium in this new trading game. What happens to trader profits? Why? 6. Consider a trading network with intermediaries in which there are three sellers
…
related to the stream-of-consciousness way in which one mentally free-associates between different ideas. For example, suppose you’ve just been reading about Nash equilibrium in a book, and while thinking about it during a walk home your mind wanders, and you suddenly notice that you’ve shifted to
…
behaving.4 First, we’ll see that GSP has a number of pathologies that VCG was designed to avoid: truth-telling might not constitute a Nash equilibrium; there can in fact be multiple possible equilibria; and some of these may produce assignments of advertisers to slots that do not maximize total
…
advertiser valuation. On the positive side, we show in the next section that there is always at least one Nash equilibrium set of bids for GSP, and that among the (possibly multiple) equilibria, there is always one that does maximize total advertiser valuation. The analysis
…
15.6 as a matching market, with advertiser valuations for the full set of clicks associated with each slot. how far from social optimality a Nash equilibrium of GSP can be. The Revenue of GSP and VCG. The existence of multiple equilibria also adds to the difficulty in reasoning about the
…
matching market of advertisers and slots, one can always construct a set of bids in Nash equilibrium — and moreover one that produces a socially optimal assignment of advertisers to slots. As a consequence, there always exists a set of socially optimal
…
market-clearing prices to guide us toward a set of bids, we now use the market-clearing property to verify that these bids form a Nash equilibrium. There are several cases to consider, but the overall reasoning will form the general principles that extend beyond just this example. First, let’s
…
set of market-clearing prices, together with the same socially optimal matching of advertisers to slots. Then, we will show that these bids form a Nash equilibrium. Constructing the bids. For the first step, we start by considering the prices per click that we get from the market-clearing prices: p∗ =
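Before constructing equilibrium bids, it helps to see concretely why truth-telling can fail under GSP, as the earlier snippet notes. A minimal sketch: the auction rule below is standard GSP (slot j to the j-th highest bidder at the next bid's price per click), while the clickrates and valuations are hypothetical:

```python
def gsp_payoffs(bids, values, clickrates):
    """Generalized Second Price: slot j goes to the j-th highest bidder,
    who pays (per click) the next-highest bid. Ties are not handled."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    payoffs = [0] * len(bids)
    for slot, i in enumerate(order):
        if slot < len(clickrates):
            price = bids[order[slot + 1]] if slot + 1 < len(order) else 0
            payoffs[i] = clickrates[slot] * (values[i] - price)
    return payoffs

# Hypothetical instance: two slots with 10 and 4 clicks, three
# advertisers valuing a click at 7, 6 and 1.
values, rates = [7, 6, 1], [10, 4]
truthful = gsp_payoffs([7, 6, 1], values, rates)
deviating = gsp_payoffs([5, 6, 1], values, rates)  # advertiser 0 shades its bid
assert truthful[0] == 10 * (7 - 6)   # payoff 10 under truth-telling
assert deviating[0] == 4 * (7 - 1)   # payoff 24 after shading down to slot 2
assert deviating[0] > truthful[0]    # so truth-telling is not an equilibrium
```

The profitable downward deviation illustrates the pathology the text attributes to GSP; the equilibrium bids derived from market-clearing prices are designed to remove exactly this incentive.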
…
and the two worlds of business. Harvard Business Review, 74(4):100–109, July–August 1996. [28] Robert Aumann and Adam Brandenburger. Epistemic conditions for Nash equilibrium. Econometrica, 63(5):1161–1180, 1995. [29] Robert J. Aumann. Agreeing to disagree. Annals of Statistics, 4:1236–1239, 1976. [30] David Austen-Smith
…
. Incentive compatibility and the bargaining problem. Econometrica, 47:61–73, 1979. [306] John Nash. The bargaining problem. Econometrica, 18:155–162, 1950. [307] John Nash. Equilibrium points in n-person games. Proc. Natl. Acad. Sci. USA, 36:48–49, 1950. [308] John Nash. Non-cooperative games. Annals of Mathematics, 54:286
…
Handbook of Statistical Genetics, pages 239–270. John Wiley & Sons, 2001. [349] Matthew C. Rousu. A football play-calling experiment to illustrate the mixed strategy Nash equilibrium. Journal of the Academy of Business Education, pages 79–89, Summer 2008. [350] Ariel Rubinstein. Perfect equilibrium in a bargaining model. Econometrica, 50:97–109
by Stuart Russell and Peter Norvig · 14 Jul 2019 · 2,466pp · 668,761 words
possible counterpart strategies. The next solution concept we consider is weaker than dominant strategy equilibrium, but it is much more widely applicable. It is called Nash equilibrium, and is named for John Forbes Nash, Jr. (1928–2015), who studied it in his 1950 Ph.D. thesis—work for which he was
…
awarded a Nobel Prize in 1994. A strategy profile is a Nash equilibrium if no player could unilaterally change their strategy and as a consequence receive a higher payoff, under the assumption that the other players stayed with
…
their strategy choices. Thus, in a Nash equilibrium, every player is simultaneously playing a best response to the choices of their counterparts. A Nash equilibrium represents a stable point in a game: stable in the sense that there is no rational incentive
…
equilibria. Since a dominant strategy is a best response to all counterpart strategies, it follows that any dominant strategy equilibrium must also be a Nash equilibrium (Exercise 17.EQIB). In the prisoner’s dilemma, therefore, there is a unique dominant strategy equilibrium, which is also the unique
…
Nash equilibrium. The following example game demonstrates, first, that sometimes games have no dominant strategies, and second, that some games have multiple Nash equilibria.
…
(t, l) and (b, r) are both Nash equilibria. Now, clearly it is in the interests of both agents to aim for the same Nash equilibrium—either (t, l) or (b, r)—but since we are in the domain of non-cooperative game theory, players must make their choices independently, without
…
We invite the reader to check that the game contains no dominant strategies, and that no outcome is a Nash equilibrium in pure strategies: in every outcome, one player regrets their choice, and would rather have chosen differently, given the choice of the other player. To
…
find a Nash equilibrium, the trick is to use mixed strategies—to allow players to randomize over their choices. Nash proved that every game has at least one
…
Nash equilibrium in mixed strategies. This explains why Nash equilibrium is such an important solution concept: other solution concepts, such as dominant strategy equilibrium, are not guaranteed to exist for every
…
, but we always get a solution if we look for Nash equilibria with mixed strategies. In the case of matching pennies, we have a Nash equilibrium in mixed strategies if both players choose heads and tails with equal probability. To see that this outcome is indeed a
…
Nash equilibrium, suppose one of the players chose an outcome with a probability other than 0.5. Then the other player would be able to exploit that
…
to play heads with certainty. It is then easy to see that Bo playing heads with probability 0.6 could not form part of any Nash equilibrium. 17.2.2 Social welfare The main perspective in game theory is that of players within the game, trying to obtain the best outcomes for themselves
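The exploitation argument is a one-line expected-value computation. A small sketch (the function name is ours; the matcher convention, +1 on a match and −1 otherwise, is the usual one):

```python
def matcher_expected(q, play_heads):
    """Expected payoff to the matcher in Matching Pennies when the
    opponent plays heads with probability q."""
    return (q - (1 - q)) if play_heads else ((1 - q) - q)

# If the opponent plays heads with probability 0.6, the matcher's best
# response is the pure strategy heads, worth +0.2 per round:
q = 0.6
assert matcher_expected(q, True) > matcher_expected(q, False)
assert abs(matcher_expected(q, True) - 0.2) < 1e-12

# The opponent is then strictly exploited, so q = 0.6 cannot be part of
# any Nash equilibrium; only q = 0.5 leaves the matcher with no edge.
assert matcher_expected(0.5, True) == matcher_expected(0.5, False) == 0.0
```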
…
prisoner’s dilemma game, introduced above, explains why it is called a dilemma. Recall that (testify, testify) is a dominant strategy equilibrium, and the only Nash equilibrium. However, this is the only outcome that is not Pareto optimal. The outcome (refuse, refuse) maximizes both utilitarian and egalitarian social welfare. The dilemma in
…
equilibria: iterate through each possible strategy profile, and check whether any player has a beneficial deviation from that profile; if not, then it is a Nash equilibrium in pure strategies. Dominant strategies and dominant strategy equilibria can be computed by similar algorithms. Unfortunately, the number of possible strategy profiles for n players
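The iterate-and-check procedure described in this snippet is straightforward to write down for two players (the matrix encoding and names are ours):

```python
from itertools import product

def pure_nash_equilibria(p1, p2):
    """Brute-force search for pure-strategy Nash equilibria of a
    two-player game given as payoff matrices p1[r][c], p2[r][c]:
    keep profiles from which no player has a beneficial deviation."""
    rows, cols = len(p1), len(p1[0])
    equilibria = []
    for r, c in product(range(rows), range(cols)):
        best_r = all(p1[r][c] >= p1[r2][c] for r2 in range(rows))
        best_c = all(p2[r][c] >= p2[r][c2] for c2 in range(cols))
        if best_r and best_c:
            equilibria.append((r, c))
    return equilibria

# Coordination game: (t, l) and (b, r) are both equilibria.
coord = [[2, 0], [0, 1]]
assert pure_nash_equilibria(coord, coord) == [(0, 0), (1, 1)]

# Matching Pennies has no pure-strategy equilibrium.
mp1 = [[1, -1], [-1, 1]]
mp2 = [[-1, 1], [1, -1]]
assert pure_nash_equilibria(mp1, mp2) == []
```

As the snippet warns, this enumeration is exponential in the number of players, since every strategy profile must be checked.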
…
will converge if it leads to a strategy profile in which every player is making an optimal choice, given the choices of the others—a Nash equilibrium, in other words. For some games, myopic best response does not converge, but for some important classes of games, it is guaranteed to converge.
…
[7/12: one; 5/12: two], which should be played by both players. This strategy is called the maximin equilibrium of the game, and is a Nash equilibrium. Note that each component strategy in an equilibrium mixed strategy has the same expected utility. In this case, both one and two have the
…
example of the general result by von Neumann: every two-player zero-sum game has a maximin equilibrium when you allow mixed strategies. Furthermore, every Nash equilibrium in a zero–sum game is a maximin for both players. A player who adopts the maximin strategy has two guarantees: First, no other strategy
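For a 2×2 zero-sum game like two-finger Morra, the maximin mixture can be computed exactly from the indifference condition. A sketch with exact rational arithmetic (the payoff entries follow the usual Morra rules: the even player wins the total when the fingers match):

```python
from fractions import Fraction

# Payoffs to the even player E: both show one -> +2, mismatch -> -3,
# both show two -> +4.
A = [[Fraction(2), Fraction(-3)],
     [Fraction(-3), Fraction(4)]]

# With no saddle point, the maximin mixture makes the opponent
# indifferent between her two columns:
#   p*A[0][0] + (1-p)*A[1][0] = p*A[0][1] + (1-p)*A[1][1]
p = (A[1][1] - A[1][0]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
value = p * A[0][0] + (1 - p) * A[1][0]

assert p == Fraction(7, 12)        # show one finger with probability 7/12
assert value == Fraction(-1, 12)   # the game slightly favors the odd player
```

This reproduces the [7/12: one; 5/12: two] equilibrium mixture quoted in the text, and shows why each component strategy has the same expected utility at equilibrium.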
…
Figure 17.3. First, suppose Ali and Bo both pick DOVE. It is not hard to see that this strategy pair does not form a Nash equilibrium: either player would have done better to alter their choice to HAWK. So, suppose Ali switches to HAWK: This is the worst possible outcome
…
for Bo; and this strategy pair is again not a Nash equilibrium. Bo would have done better by also choosing HAWK: This strategy pair does form a Nash equilibrium, but not a very interesting one—it takes us more or less back to where we started
…
the two players get the same utility as if they had both played HAWK. But here is the thing: these strategies do not form a Nash equilibrium because this time, Ali has a beneficial deviation—to GRIM. If both players choose GRIM, then this is what happens: The outcomes and payoffs are
…
the same as if both players had chosen DOVE, but unlike that case, GRIM playing against GRIM forms a Nash equilibrium, and Ali and Bo are able to rationally achieve an outcome that is impossible in the one-shot version of the game. To see that
…
these strategies form a Nash equilibrium, suppose for the sake of contradiction that they do not. Then one player—assume without loss of generality that it is Ali—has a
…
a payoff of no more than –5: worse than the –1 she would have received by choosing GRIM. Thus, both players choosing GRIM forms a Nash equilibrium in the infinitely repeated prisoner’s dilemma, giving a rationally sustained outcome that is impossible in the one-shot version of the game. This is
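The GRIM argument in these snippets can be checked by simulation. A sketch under the usual Prisoner's Dilemma payoffs used in this discussion (mutual refuse −1, mutual testify −5, lone testifier 0 against −10); the strategy and function names are ours:

```python
def grim(my_history, their_history):
    """GRIM: refuse until the opponent testifies once, then testify forever."""
    return 'testify' if 'testify' in their_history else 'refuse'

def defect_once(my_history, their_history):
    """Deviation: testify in round 0, then revert to GRIM."""
    return 'testify' if not my_history else grim(my_history, their_history)

PAYOFF = {('refuse', 'refuse'): (-1, -1), ('refuse', 'testify'): (-10, 0),
          ('testify', 'refuse'): (0, -10), ('testify', 'testify'): (-5, -5)}

def average_payoff(s1, s2, rounds=1000):
    """Player 1's per-round average payoff over a long repeated game."""
    h1, h2, total = [], [], 0
    for _ in range(rounds):
        a1, a2 = s1(h1, h2), s2(h2, h1)
        total += PAYOFF[(a1, a2)][0]
        h1.append(a1); h2.append(a2)
    return total / rounds

# GRIM against GRIM sustains mutual refusal at -1 per round; a single
# deviation triggers permanent punishment and drags the average toward -5.
assert average_payoff(grim, grim) == -1.0
assert average_payoff(defect_once, grim) < -1.0
```

The deviator gains 0 in one round but is punished forever after, which is exactly why GRIM against GRIM is a Nash equilibrium of the infinitely repeated game.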
…
of the Nash folk theorems is roughly that every outcome in which every player receives at least their security value can be sustained as a Nash equilibrium in an infinitely repeated game. GRIM strategies are the key to the folk theorems: the mutual threat of punishment if any agent fails to
…
it traces out strategies for each player. As it turns out, these strategies are Nash equilibrium strategies, and the payoff profile labeling the initial state is a payoff profile that would be obtained by playing Nash equilibrium strategies. Thus, Nash equilibrium strategies for extensive‑form games can be computed in polynomial time using backward induction; and
…
since the algorithm is guaranteed to label the initial state with a payoff profile, it follows that every extensive‑form game has at least one Nash equilibrium in pure strategies. These are attractive results, but there are several caveats. Game trees very quickly get very large, so polynomial running time should
…
be understood in that context. But more problematically, Nash equilibrium itself has some limitations when it is applied to extensive‑form games. Consider the game in Figure 17.4. Player 1 has two moves available
…
Figure 17.4 An extensive‑form game with a counterintuitive Nash equilibrium. Backward induction immediately tells us that (above, up) is a Nash equilibrium, resulting in both players receiving a payoff of 1. However, (below, down) is also a Nash equilibrium, which would result in both players receiving a payoff of 0. Player
…
not a credible threat, because if player 2 is actually called upon to make the choice, then she will choose up. A refinement of Nash equilibrium called subgame perfect Nash equilibrium deals with this problem. To define it, we need the idea of a subgame. Every decision state in a game tree (including the
…
1’s decision state, one rooted at player 2’s decision state. A profile of strategies then forms a subgame perfect Nash equilibrium in a game G if it is a Nash equilibrium in every subgame of G. Applying this definition to the game of Figure 17.4, we find that (above, up)
…
is subgame perfect, but (below, down) is not, because choosing down is not a Nash equilibrium of the subgame rooted at player 2’s decision state. Although we needed some new terminology to define subgame perfect
…
Nash equilibrium, we don’t need any new algorithms. The strategies computed through backward induction will be subgame perfect Nash equilibria, and it follows that every
…
extensive-form game of perfect information has a subgame perfect Nash equilibrium, which can be computed in time polynomial in the size of the game tree. Chance and simultaneous moves To represent stochastic games, such as
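Backward induction as described here is a short recursive computation. The sketch below uses a tree with the same shape as the example discussed; the payoff (−1, −1) after (above, down) is an illustrative stand-in for the non-credible threat, and the node names are ours:

```python
# Game tree as nested dicts: internal nodes name the player to move and
# map actions to subtrees; leaves are payoff tuples (player 1, player 2).
TREE = {'name': 'P1-root', 'player': 0,
        'moves': {'above': {'name': 'P2-node', 'player': 1,
                            'moves': {'up': (1, 1), 'down': (-1, -1)}},
                  'below': (0, 0)}}

def solve(node):
    """Backward induction: label each node with the payoff profile the
    player to move can secure; record the chosen move at every node."""
    if isinstance(node, tuple):          # leaf: payoffs are given
        return node, {}
    values, choices = {}, {}
    for move, child in node['moves'].items():
        payoffs, sub = solve(child)
        values[move] = payoffs
        choices.update(sub)
    best = max(values, key=lambda m: values[m][node['player']])
    choices[node['name']] = best
    return values[best], choices

payoffs, strategy = solve(TREE)
assert payoffs == (1, 1)
assert strategy == {'P2-node': 'up', 'P1-root': 'above'}
```

Because a choice is recorded at every decision node, the computed strategies are optimal in every subgame, which is why backward induction yields a subgame perfect Nash equilibrium: player 2's threat of "down" is discarded at her own node.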
…
agents are rational. The theory does not say what to do when the other players are less than fully rational. The notion of a Bayes–Nash equilibrium partially addresses this point: it is an equilibrium with respect to a player’s prior probability distribution over the other players’ strategies—in other words
…
choice. How does Harriet make her choice? That depends on how Robbie is going to interpret it. We can resolve this circularity by finding a Nash equilibrium. In this case, it is unique and can be found by applying myopic best response: pick any strategy for Harriet; pick the best strategy for
…
A1 cannot do better than getting the whole pie. Thus, these two strategies—A1 proposes to get the whole pie, and A2 accepts—form a Nash equilibrium. Now consider the case where we permit exactly two rounds of negotiation. Now the power has shifted: A2 can simply reject the first offer, thereby
…
for A2 (as well as for A1). So A2 can do no better than accepting the first proposal that A1 makes. Again, this is a Nash equilibrium. But what if A1 uses the strategy: Always propose (0.8,0.2), and always reject any offer. By a similar argument we can
…
see that for this offer or for any possible deal (x, 1 – x) in the negotiation set, there is a Nash equilibrium pair of negotiation strategies such that the outcome will be agreement on the deal in the first time period. Impatient agents This analysis tells us
…
be for some small value of ϵ.) So, the two strategies of A1 offering (1 − γ2, γ2), and A2 accepting that offer are in Nash equilibrium. Patient players (those with a larger γ2) will be able to obtain larger pieces of the pie under this protocol: in this setting, patience truly
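The discounting argument can be made concrete with a two-round sketch (pie normalized to size 1; the function name is ours):

```python
def equilibrium_offer(gamma2):
    """Two rounds of alternating offers over a pie of size 1: if A2
    rejects A1's opening offer, A2 can claim the whole pie in round 2,
    but it is then worth only gamma2 to her. A1 therefore offers A2
    exactly her continuation value, and A2 accepts immediately."""
    a2_continuation_value = gamma2 * 1.0   # whole pie, one round later
    return 1.0 - a2_continuation_value, a2_continuation_value

a1_share, a2_share = equilibrium_offer(0.5)
assert (a1_share, a2_share) == (0.5, 0.5)

# A more patient A2 (larger gamma2) extracts a larger share immediately.
assert equilibrium_offer(0.9)[1] > equilibrium_offer(0.5)[1]
```

This reproduces the (1 − γ2, γ2) split: patience raises A2's continuation value and hence her equilibrium share.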
…
may require O(2^|T|) computations of the cost function at each negotiation step. Finally, the Zeuthen strategy (with the coin flipping rule) is in Nash equilibrium. Summary • Multiagent planning is necessary when there are other agents in the environment with which to cooperate or compete. Joint plans can be constructed, but
…
occur if every agent acted rationally. • Non-cooperative game theory assumes that agents must make their decisions independently. Nash equilibrium is the most important solution concept in non-cooperative game theory. A Nash equilibrium is a strategy profile in which no agent has an incentive to deviate from its specified strategy. We have
…
concerning equilibria in general (non-zero-sum) games. His definition of an equilibrium solution, although anticipated in the work of Cournot (1838), became known as Nash equilibrium. After a long delay because of the schizophrenia he suffered from 1959 onward, Nash was awarded the Nobel Memorial Prize in Economics (along with Reinhard
…
Selten and John Harsanyi) in 1994. The Bayes–Nash equilibrium is described by Harsanyi (1967) and discussed by Kadane and Larkey (1982). Some issues in the use of game theory for agent control are covered
…
first RL algorithms for zero-sum Markov games. Hu and Wellman (2003) present a Q-learning algorithm for general-sum games that converges when the Nash equilibrium is unique; when there are multiple equilibria, the notion of convergence is not so easy to define (Shoham et al., 2004). Assistance games were
…
University. Abramson, B. (1990). Expected-outcome: A general model of static evaluation. PAMI, 12, 182–193. Abreu, D. and Rubinstein, A. (1988). The structure of Nash equilibrium in repeated games with finite automata. Econometrica, 56, 1259–1281. Achlioptas, D. (2009). Random satisfiability. In Biere, A., Heule, M., van Maaren, H., and Walsh
…
Fischer, P. (2002). Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47, 235–256. Aumann, R. and Brandenburger, A. (1995). Epistemic conditions for Nash equilibrium. Econometrica, 63, 1161–1180. Axelrod, R. (1985). The Evolution of Cooperation. Basic Books. Ba, J. L., Kiros, J. R., and Hinton, G. E. (2016).
…
668, 1103 Bayen, A. M., 516, 1099 Bayerl, S., 330, 1103 Bayes’ rule, 26, 417, 417–418, 426 Bayes, T., 417, 428, 798, 1086 Bayes-Nash equilibrium, 613 Bayesian, 427 Bayesian classifier, 420 Bayesian learning, 719, 773, 773–774, 797 Bayesian network, 43, 430, 430–478, 799 continuous-time, 516 dynamic, 503
…
NAS (neural architecture search), 821, 838 NASA, 47, 360, 373, 402, 473 Nash’s theorem, 599 Nash, J., 598, 637, 1107 Nash, P., 587, 1107 Nash equilibrium, 598, 635 Nash folk theorems, 607 NATACHATA (chatbot), 1035 Natarajan, S., 551, 1094 Natarajan, V., 48, 1104 naturalism, 24 natural kind, 338 natural language inference
…
1090 Stutzle, T., 160, 190, 1093, 1099 style transfer, 1022 Su, H., 837, 1111 Su, Y., 125, 1093 subcategory, 335, 892 subgame, 609 subgame perfect Nash equilibrium, 609 subgoal independence, 374 subjective case, 892 subjectivism, 427 subproblem, 118 Subrahmanian, V. S., 222, 1108 Subramanian, D., 360, 1071, 1111, 1114 substance, 339–340
by Sylvia Nasar · 11 Jun 1998 · 998pp · 211,235 words
hung around the math building might be on the short list for a Nobel prize in economics. “You don’t mean the Nash of the Nash equilibrium?” I asked. He told me to call a couple of people in the math department to learn more. By the time I put down the
…
observations of his hometown to his focus on the logical strategy necessary for the individual to maximize his own advantage and minimize his disadvantages. The Nash equilibrium, once it is explained, sounds obvious, but by formulating the problem of economic competition in the way that he did, Nash showed that a decentralized
…
1950s than as an adjective for concepts too universally accepted, too familiar a part of the foundation of many subjects to require a particular reference: “Nash equilibrium,” “Nash bargaining solution,” “Nash program,” “De Giorgi-Nash result,” “Nash embedding,” “Nash-Moser theorem,” “Nash blowing-up.”54 When a massive new encyclopedia of economics
…
to crystallize and mature. That October, he started to experience a virtual storm of ideas. Among them was his brilliant insight into human behavior: the Nash equilibrium. Nash went to see von Neumann a few days after he passed his generals.9 He wanted, he had told the secretary cockily, to discuss
…
every game. By contrast, Nash proved on page six of his thesis that every noncooperative game with any number of players has at least one Nash equilibrium point. To understand the beauty of Nash’s result, write Avinash Dixit and Barry Nalebuff in Thinking Strategically, one begins with the notion that interdependence
…
Selten, the German economist who shared the 1994 Nobel with Nash and John C. Harsanyi, said: “Nobody would have foretold the great impact of the Nash equilibrium on economics and social science in general. It was even less expected that Nash’s equilibrium point concept would ever have any significance for biological
…
National Academy of Sciences proceedings — swept through the white stucco building at Fourth and Broadway like a California brushfire.1 The biggest appeal of the Nash equilibrium concept was its promise of liberation from the two-person zero-sum game. The mathematicians, military strategists, and economists at RAND had focused almost exclusively
…
both better off. Though Williams and Alchian didn’t always cooperate, the results hardly resembled a Nash equilibrium. Dresher and Flood argued, and von Neumann apparently agreed, that their experiment showed that players tended not to choose Nash equilibrium strategies and instead were likely to “split the difference.” As it turns out, Williams and
…
Newman, an economist at Johns Hopkins, was editing a volume of important contributions to mathematical economics. He wanted to include Nash’s NAS note on Nash equilibrium. The first problem was finding him. I found him teaching or something at a small women’s college near Roanoke. I wrote to him there
…
is not what he was.” So I had to give it up. Later, when the book was reviewed, reviewers chided me for not including the Nash equilibrium.34 • • • Nash was constantly fearful that Martha and Virginia would hospitalize him again. As he said in one letter, “It is the mechanism of how
…
. Weibull, a gentle, soft-spoken man, asked Nash questions about his work. Sometimes the conversation took odd turns. When Weibull asked Nash about refining the Nash equilibrium concept by, perhaps, taking into account irrational moves by players, Nash answered him by talking, not about irrationality, but about immortality. But on the whole
…
graduate student. “The most damning thing,” Stahl repeated later, was something Martin Shubik wrote in one of his books: that “you can only understand the Nash equilibrium if you have met Nash. It’s a game and it’s played alone.”61 He brought up Nash’s work for RAND: “These guys
…
realization of the true potential of the revolution launched by von Neumann and Morgenstern.”11 And because most economic applications of game theory use the Nash equilibrium concept, “Nash is the point of departure.”12 The revolution has gone far beyond research journals, experimental laboratories at Caltech and the University of Pittsburgh
…
help evaluate the effect of every proposed rule. According to Milgrom, “Game theory played a central role in the analysis of the rules. Ideas of Nash equilibrium, rationalizability, backward induction, and incomplete information, though rarely named explicitly, were the real basis of daily decisions about the details of the auction process.”23
…
. Dixit and Barry J. Nalebuff, Thinking Strategically (New York: Norton, 1991). 17. Robert J. Leonard, “Reading Cournot, Reading Nash: The Creation and Stabilization of the Nash Equilibrium,” The Economic Journal (May 1994), pp. 492–511; Martin Shubik, “Antoine Augustin Cournot,” in Eatwell, Milgate, and Newman, op. cit., pp. 117–28. 18. Joseph
…
Seminar: The Work of John Nash in Game Theory,” in Les Prix Nobel 1994 (Stockholm: Norstedts Tryckeri, 1995). For a reader-friendly exposition of the Nash equilibrium, see Avinash Dixit and Susan Skeath, Games of Strategy (New York: Norton, 1997). 20. See, for example, Anthony Storr, Solitude: A Return to the Self
…
. After historian Robert Leonard published the established version of the origins of the paper in “Reading Cournot, Reading Nash: The Creation and Stabilisation of the Nash Equilibrium,” The Economic Journal, no. 164 (May 1994), p. 497, Nash corrected the record at a lunch with Harold Kuhn and Roger Myerson, 5.96, Kuhn
…
Neumann, Morgenstern and the Creation of Game Theory, 1928–1944.” Journal of Economic Literature (1995). ———. “Reading Cournot, Reading Nash: The Creation and Stabilization of the Nash Equilibrium.” The Economic Journal (May 1994), pp. 492–511. Lindbeck, Assar. “The Prize in Economic Science in Memory of Alfred Nobel.” Journal of Economic Literature, vol
…
, 93 Fuck Your Buddy, 102 Fukuda, Hiroshi, 75 Fulbright program, 236 Galbraith, John Kenneth, 116 Gale, David, 62, 64, 77, 78, 83, 100, 308–9 Nash equilibrium and, 95 Gallagher, Chicky, 193 Galmarino, Alberto, 240–41 games: non-zero-sum, 87 two-person zero-sum, 14, 87, 95, 96, 115, 116, 119
…
, 92, 93–94, 95, 96–97, 98, 100, 111, 115, 116, 117–18, 119, 128, 149, 150, 362, 363 see also bargaining; min-max theorem; Nash equilibrium Gangolli, Ramesh, 240–41 Garabedian, Paul, 219–20 Garber, Robert, 292, 293, 294, 305, 307, 310 Gårding, Lars, 219, 368 Garsia, Adriano, 237, 257, 258
…
, 33 Nash, Martha (sister), see Legg, Martha Nash Nash, Martha Smith (grandmother), 26 Nash, Richard (cousin), 293, 320–21 Mary Nash College for Women, 26 Nash equilibrium, 115, 118, 119, 329, 339, 361–62, 375 assessment of, 96–98 dominant vs. dominated strategies in, 97 elaboration of, 93–96 see also Nobel
by Ananyo Bhattacharya · 6 Oct 2021 · 476pp · 121,460 words
set free. If A and B both remain silent, both of them will serve only one year in prison (on the lesser charge). The only Nash equilibrium of the dilemma is for the prisoners to rat on each other. To see why, imagine you are prisoner A. If you betray B, and
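The elimination argument this excerpt begins can be checked mechanically. The sentence lengths below are illustrative stand-ins, not any one book's exact table; lower is better for each prisoner:

```python
# Prisoner's dilemma: years in prison for (A, B), keyed by
# (A's move, B's move). "silent" = stay quiet, "betray" = rat.
# Illustrative numbers: lower is better for each prisoner.
YEARS = {
    ("silent", "silent"): (1, 1),
    ("silent", "betray"): (10, 0),
    ("betray", "silent"): (0, 10),
    ("betray", "betray"): (5, 5),
}

def is_nash(a, b):
    """True if neither prisoner can cut their own sentence by
    unilaterally switching moves."""
    moves = ("silent", "betray")
    ya, yb = YEARS[(a, b)]
    a_ok = all(YEARS[(a2, b)][0] >= ya for a2 in moves)
    b_ok = all(YEARS[(a, b2)][1] >= yb for b2 in moves)
    return a_ok and b_ok

equilibria = [(a, b) for a in ("silent", "betray")
              for b in ("silent", "betray") if is_nash(a, b)]
print(equilibria)  # [('betray', 'betray')]
```

Mutual silence fails the test because either prisoner can walk free by betraying; mutual betrayal is the only cell that survives, which is what makes it the dilemma's sole Nash equilibrium.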
…
cent, player W gets a cent. If they both choose to defect, player A gets nothing, but player W still wins a half cent. The Nash equilibrium is in the bottom-left corner square – both players should defect. Had they played this strategy throughout the match, Williams would have ended the game
…
won 65 cents. The two cooperated in sixty out of 100 plays – much more often than a ‘rational’ player should. ‘It seems unlikely that the Nash equilibrium is in any realistic sense the correct solution,’ Flood notes.49 Though the participants were prohibited from reaching an understanding on dividing up the winnings
…
thought them more rational.’ Nash was right that the experimental conditions are far from an ideal test of his theory. The problem is that the Nash equilibrium for the 100-move game is for players to defect every time. To see why, imagine that the players are about to play the last
…
not do this. Flood recalls that von Neumann was quite tickled by their experiment. As he had predicted, players did not naturally gravitate towards the Nash equilibrium.52 Beyond that, however, he seems to have taken remarkably little interest. The Prisoner’s Dilemma is often portrayed as a paradox of rationality because
…
Static Economics’, Journal of Political Economy, 54 (1946), pp. 97–115. 67. Robert J. Leonard, ‘Reading Cournot, Reading Nash: The Creation and Stabilisation of the Nash Equilibrium’, Economic Journal, 104(424) (1994), pp. 492–511. 68. Ibid. 69. William F. Lucas, ‘A Game with No Solution’, Bulletin of the American Mathematical Society
by William Poundstone · 2 Jan 1993 · 323pp · 100,772 words
player.” No longer can we assume that one player’s gain is another’s loss. Some cells have a higher combined payoff than others. The Nash equilibrium solution is for both players to choose their strategy 2 (lower right cell, boldface). Obviously the row player is satisfied with this, for he wins
…
the equilibrium point solution clearly makes sense. Other times, equilibrium point solutions appear less inevitable than the solutions of zero-sum games. In fact, sometimes Nash equilibriums appear to be distinctly irrational. We will explore the consequences of this in the following chapters. 1. “The Rand Hymn” words and music by Malvina
…
in everyday life. He asked the people involved how they had decided what to do. Were they (unconsciously!) using the von Neumann-Morgenstern theory, the Nash equilibrium theory, or something else entirely? Flood also accumulated data on how departing RAND colleagues sold or gave away their belongings (many stayed just for the
…
. In the RAND experiment, Alchian and Williams played the game one hundred times in succession. There was no evidence of any instinctive preference for the Nash equilibrium—if anything, the reverse. Alchian chose his nonequilibrium strategy (cooperation; his strategy 1) sixty-eight times out of a hundred, and Williams picked his nonequilibrium
…
“fair” playoff table, the cooperation rate might have been higher yet. Flood and Dresher wondered what John Nash would make of this. Mutual defection, the Nash equilibrium, occurred only fourteen times. When they showed their results to Nash, he objected that “the flaw in the experiment as a test of equilibrium point
…
much interaction, which is obvious in the results of the experiment.” This is true enough. However, if you work it out, you find that the Nash equilibrium strategy for the multi-move “supergame” is for both players to defect in each of the hundred trials. They didn’t do that. TUCKER’S
…
expectant about its implications for game theory.” Flood recalls that von Neumann thought the game was provocative, in a general way, as a challenge to Nash equilibrium theory, but he didn’t take their informal experiment entirely seriously. Dresher showed the game to another RAND consultant, Albert Tucker. Tucker was a distinguished
…
the paradigms of our time. Flood says he wasn’t thinking specifically of the nuclear situation when he and Dresher formulated their game—rather, of Nash equilibriums. Of course, it quickly became apparent that there were parallels, and defense in the nuclear age was the underlying purpose of all RAND’s research
…
going to a different dealer. In game theory you generally commit to a strategy on the basis of a single potential outcome (a maximin or Nash equilibrium). If your opponent doesn’t do as game theory advocates, you may find that you could have done better with a different strategy. One of
…
would want to swerve—better chicken than dead. When both players want to be contrary, how do you decide? The game of chicken has two Nash equilibriums (boldface, lower left and upper right cells). This is another case where the Nash theory leaves something to be desired. You don’t want two
…
outcome may not be an equilibrium point at all. Each player can choose to drive straight—on grounds that it is consistent with a rational, Nash-equilibrium solution—and rationally crash. Consider these variations: (a) As you speed toward possible doom, you are informed that the approaching driver is your long-lost
…
could do with cooperation. Deadlock is not properly a dilemma at all. There is no reason for wavering: you should defect. Mutual defection is a Nash equilibrium. Deadlock occurs when two parties fail to cooperate because neither really wants to—they just want the other guy to cooperate. Not all failures to
…
points, look like this:

             Hunt stag   Chase hare
Hunt stag    3, 3        0, 2
Chase hare   2, 0        1, 1

Obviously, mutual cooperation is a Nash equilibrium. The players can’t do any better no matter what. Temptation to defect arises only when you believe that others will defect. For this reason
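Poundstone's stag hunt payoffs can be fed to a tiny best-response checker to confirm that the game has two pure equilibria, not just the cooperative one:

```python
# Stag hunt payoffs (row player, column player); higher is better.
# Index 0 = hunt stag, index 1 = chase hare.
payoff = [[(3, 3), (0, 2)],
          [(2, 0), (1, 1)]]

def pure_equilibria(payoff):
    """Return all cells where each player's move is a best response
    to the other's move (i.e., pure-strategy Nash equilibria)."""
    n = len(payoff)
    eq = []
    for r in range(n):
        for c in range(n):
            row_best = all(payoff[r][c][0] >= payoff[r2][c][0] for r2 in range(n))
            col_best = all(payoff[r][c][1] >= payoff[r][c2][1] for c2 in range(n))
            if row_best and col_best:
                eq.append((r, c))
    return eq

print(pure_equilibria(payoff))  # [(0, 0), (1, 1)]: stag-stag and hare-hare
```

Both mutual stag hunting and mutual hare chasing pass the test, which is exactly why the excerpt says temptation to defect arises only from the belief that others will defect.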
by Brian Christian and Tom Griffiths · 4 Apr 2016 · 523pp · 143,139 words
1994 (and lead to the book and film A Beautiful Mind, about Nash’s life). Such an equilibrium is now often spoken of as the “Nash equilibrium”—the “Nash” that Dan Smith always tries to keep track of. On the face of it, the fact that a
…
Nash equilibrium always exists in two-player games would seem to bring us some relief from the hall-of-mirrors recursions that characterize poker and many other
…
throw next may not be worthwhile, if you know that simply throwing at random is an unbeatable strategy in the long run. More generally, the Nash equilibrium offers a prediction of the stable long-term outcome of any set of rules or incentives. As such, it provides an invaluable tool for both
…
predicting and shaping economic policy, as well as social policy in general. As Nobel laureate economist Roger Myerson puts it, the Nash equilibrium “has had a fundamental and pervasive impact in economics and the social sciences which is comparable to that of the discovery of the DNA double
…
committed, and more dedicated (hence more promotion-worthy). Everyone looks to the others for a baseline, and will take just slightly less than that. The Nash equilibrium of this game is zero. As the CEO of software company Travis CI, Mathias Meyer, writes, “People will hesitate to take a vacation as they
…
decides to open his shop seven days a week, he’ll draw extra customers—taking them away from his competitor and threatening his livelihood. The Nash equilibrium, again, is for everyone to work all the time. This exact issue became a flash point in the United States during the 2014 holiday season
…
interests of the players are pitted directly against one another. every two-player game has at least one equilibrium: Nash, “Equilibrium Points in N-Person Games”; Nash, “Non-Cooperative Games.” the fact that a Nash equilibrium always exists: To be more precise, ibid. proved that every game with a finite number of players and
…
a finite number of strategies has at least one mixed-strategy equilibrium. “has had a fundamental and pervasive impact”: Myerson, “Nash Equilibrium and the History of Economic Theory.” “a computer scientist’s foremost concern”: Papadimitriou, “Foreword.” “Give us something we can use”: Tim Roughgarden, “Algorithmic Game Theory
…
) is widely believed to be intractable. The link between Nash equilibria and PPAD was established in Daskalakis, Goldberg, and Papadimitriou, “The Complexity of Computing a Nash Equilibrium” and Goldberg and Papadimitriou, “Reducibility Between Equilibrium Problems,” which was then extended to two-player games by Chen and Deng, “Settling the Complexity of Two
…
-Player Nash Equilibrium,” and then further generalized in Daskalakis, Goldberg, and Papadimitriou, “The Complexity of Computing a Nash Equilibrium.” PPAD stands for “Polynomial Parity Arguments on Directed graphs”; Papadimitriou, who named this class of problems in “On
…
format with n players, you should bid exactly (n−1)⁄n times what you think the item is worth. Note that this strategy is the Nash equilibrium but is not a dominant strategy; that is to say, nothing is better if everyone else is doing it, too, but isn’t necessarily optimal
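The shading rule quoted above can be sketched in a couple of lines. As an assumption worth flagging, the (n−1)/n formula holds for a first-price sealed-bid auction with valuations drawn independently and uniformly:

```python
def equilibrium_bid(value, n):
    """Symmetric equilibrium bid in a first-price sealed-bid auction
    with n bidders and (by assumption) i.i.d. uniform valuations:
    shade your estimate of the item's worth by (n - 1) / n."""
    return (n - 1) / n * value

# With more rivals, you shade less: the bid creeps toward full value.
for n in (2, 4, 10, 100):
    print(n, equilibrium_bid(100.0, n))
```

With two bidders you bid half your valuation; with a hundred, ninety-nine percent of it, since heavier competition leaves less room to shade.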
…
of the 34th Annual Meeting of the Association for Computational Linguistics, 1996, 310–318. Chen, Xi, and Xiaotie Deng. “Settling the Complexity of Two-Player Nash Equilibrium.” In Foundations of Computer Science, 2006, 261–272. Chow, Y. S., and Herbert Robbins. “A Martingale System Theorem and Applications.” In Proceedings of the Fourth
…
: Cambridge University Press, 1987. Daskalakis, Constantinos, Paul W. Goldberg, and Christos H. Papadimitriou. “The Complexity of Computing a Nash Equilibrium.” ACM Symposium on Theory of Computing, 2006, 71–78. ______. “The Complexity of Computing a Nash Equilibrium.” SIAM Journal on Computing 39, no. 1 (2009): 195–259. Davis, Lydia. Almost No Memory: Stories. New York
…
Probability 1 (1973): 417–427. Murray, David. Chapters in the History of Bookkeeping, Accountancy and Commercial Arithmetic. Glasgow, UK: Jackson, Wylie, 1930. Myerson, Roger B. “Nash Equilibrium and the History of Economic Theory.” Journal of Economic Literature 1999, 1067–1082. ______. “Optimal Auction Design.” Mathematics of Operations Research 6, no. 1 (1981): 58
…
) ear Earliest Due Date Early Stopping Eat That Frog! (Tracy) Ebbinghaus, Hermann ECMO (extracorporeal membrane oxygenation) economics. See also auctions; investment strategies; market behavior bubbles Nash equilibrium and tragedy of commons and Economist Edmonds, Jack educational evaluation Edwards, Ward efficient algorithm efficient or tractable problem, defined Egyptian pharaohs’ reigns electrical memory organ
…
times and sequels and Mozart, Wolfgang Amadeus multi-armed bandits Multiplicative Rule multitasking murder rate Murphy, Tom Myerson, Roger myopic algorithm Nakamura, Hikaru Nash, John Nash equilibrium National Library Sorting Champion Nature NBA NCAA nervous system Netflix networking. See also Internet network queues Neumann, Christof neural networks news reports Newton, Isaac New
by Daron Acemoğlu and James A. Robinson · 28 Sep 2001
is between a dictator and the disenfranchised citizens. Once we have taken this step, looking for determinate social choices is equivalent to looking for the Nash equilibrium of the relevant games. 3. Single-Peaked Preferences and the Median Voter Theorem 3.1 Single-Peaked Preferences Let’s first be more specific about
…
a decision.3 A strategy here is simply how to vote in different pairwise comparisons. The basic solution concept for such a game is a Nash equilibrium, which is a set of n strategies, one for each player, such that no player can increase his payoff by unilaterally changing strategy. Another way
…
to say this is that players’ strategies have to be mutual best responses. We also extensively use a refinement of Nash equilibrium – the concept of subgame perfect Nash equilibrium – in which players’ strategies have to be mutual best responses on every proper subgame, not just the whole game. (The relationship between
…
have to choose an action q j ∈ Q for j = A, B, and citizens again have to vote. Thus, in this model, a subgame perfect Nash equilibrium would be a set of n + 2 strategies, one for each of the political parties and one for each of the n voters, which would
…
offering a different policy for parties or voting differently for citizens). In the present model, however, we can simplify the description of a subgame perfect Nash equilibrium because, given a policy vector (q A , q B ) ∈ Q × Q, voters simply vote for the party offering the policy closest to their ideal point
…
party wins and, considering this at the initial stage of the game, parties choose policies to maximize (4.1). This implies that a subgame perfect Nash equilibrium in this game reduces to a pair of policies (q A∗ , q B∗ ) such that q A∗ maximizes P (q A , q B∗ )R, taking
…
its payoff by choosing an alternative policy (or, in the language of game theory, by “deviating”). Formally, the following theorem characterizes the unique subgame perfect Nash equilibrium of this game: Proposition 4.2 (Downsian Policy Convergence Theorem): Consider a vector of policy choices (q A , q B ) ∈ Q × Q where Q ⊂ R
…
M be the median voter, with ideal point q M . If all individuals have single-peaked preferences over Q, then in the unique subgame perfect Nash equilibrium, both parties will choose the platforms q A∗ = q B∗ = q M . 98 Democratic Politics Stated differently, both parties converge to offer exactly the ideal
…
only be created in certain circumstances, and we want to know what makes such circumstances more likely. We can now think of a game, the (Nash) equilibrium of which will determine the level of redistributive taxation. We can do this in the context of either a direct democracy or a representative democracy
…
poor. To characterize the equilibrium, we can again think of the model as a game in which two political parties propose policy platforms. The unique Nash equilibrium involves both parties offering the ideal point of the poor. To see what this ideal point is, note that a poor agent clearly does not
…
in this chapter, where all issues are voted on simultaneously, then because the model has a three-dimensional policy space, it may not possess a Nash equilibrium. To circumvent this problem in a simple way, we formulate the game by assuming that the tax rate and the transfers are voted on sequentially
…
the transfers to be used to redistribute income. We solve this game by backward induction and show that there is always a unique subgame perfect Nash equilibrium. We focus on two types of equilibria. In the first, when δ_X^p > 1/2, so that poor type Xs form an absolute majority
…
must be the ideal one for poor type Xs, τ_X^p. Therefore, in this case, there is a unique subgame perfect Nash equilibrium, which we denote (τ_X^p, T_Z = 0, T_X = (τ_X^p − C(τ_X^p)) ȳ/δ_X). In the second case, where poor Xs are not
…
MVT implies that τ_X^r will be the tax rate determined at the first stage. Therefore, in this case, there is a unique subgame perfect Nash equilibrium (τ_X^r, T_Z = 0, T_X = (τ_X^r − C(τ_X^r)) ȳ/δ_X). The equilibrium of this game does not depend on the timing of
…
act for the group. In terms of the more reduced-form condition in (5.4), this is similar to a higher µ. 2 There is another Nash equilibrium where, even though (5.5) is satisfied, there is a “coordination failure,” so that no agent takes part in revolution because they all believe that
…
equilibrium is in such a game, we would have to describe the payoff functions and strategies for all the elites and all the citizens. A Nash equilibrium would then entail a specification of strategies, one for each player, such that no member of the elite and no citizen could increase their payoff
…
to in Chapter 4, is useful because it characterizes the subgame perfect Nash equilibria of the game. Subgame perfection is a refinement of the original Nash equilibrium concept, useful in games with sequential moves and in dynamic games. The key feature of such an equilibrium, noted originally by Selten (1975), is that
…
the world, including themselves. Faced with this threat, it is optimal for the elite to give the citizens all of their money. This is one Nash equilibrium. However, it rests on the threat that if the elite refuses, the citizens will blow up the world. This threat is off the equilibrium path
…
to get nothing from the elite than to kill themselves. Therefore, their threat is not credible, and the Nash equilibrium supported by this noncredible threat is not appealing. Fortunately, there is another more plausible Nash equilibrium in which the elite refuses to give the citizens anything and the citizens do not blow up the
…
world. This second Nash equilibrium is indeed subgame perfect, whereas the first is not because it rests on noncredible threats. Given the importance in this book of the credibility of
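The excerpt's argument is backward induction, and it can be sketched directly. The payoffs below are illustrative (elite, citizens), with −1000 standing in for mutual annihilation:

```python
# The "blow up the world" game, solved by backward induction.
# Stage 1: the elite gives away its wealth or refuses.
# Stage 2: after a refusal, the citizens acquiesce or blow up the world.
PAYOFFS = {
    ("give", None):          (0, 100),
    ("refuse", "acquiesce"): (100, 0),
    ("refuse", "blow up"):   (-1000, -1000),
}

def backward_induction():
    # Citizens move last: their credible reply to a refusal is whichever
    # action pays *them* more once the refusal has already happened.
    reply = max(("acquiesce", "blow up"),
                key=lambda c: PAYOFFS[("refuse", c)][1])
    # The elite anticipates that reply and picks its better branch.
    elite = max(("give", "refuse"),
                key=lambda e: PAYOFFS[(e, reply if e == "refuse" else None)][0])
    return elite, reply

print(backward_induction())  # ('refuse', 'acquiesce')
```

Solving from the last move backward is exactly what rules out the noncredible threat: acquiescing (0) beats self-destruction (−1000), so the elite can safely refuse, and only the second Nash equilibrium survives as subgame perfect.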
…
elite deviated and tried to get away with less redistribution, it would be optimal for the citizens to undertake revolution. The concept of subgame perfect Nash equilibrium explicitly imposes that such threats have to be credible. Summarizing this analysis, we have the following: Proposition 5.1: There is a unique subgame perfect
…
.6) n=1 Party B faces a symmetric problem, which can be thought of as minimizing π A . Equilibrium policies then are determined as the Nash equilibrium of a game in 1 In Chapter 4, the parties’ objectives function was to come to power; thus, they simply wanted their vote share to
…
lobbying game described above, contribution functions for groups n = 1, 2, …, L, {γ̂_n(·)}_{n=1,…,L}, and policy q∗ constitute a subgame perfect Nash equilibrium if: 1. γ̂_n(·) is feasible in the sense that 0 ≤ γ̂_n(q) ≤ V_i(q). 2. The politician chooses the policy that maximizes
…
.e., policy platforms) simultaneously. Therefore, the predictions of this model can be summarized by the corresponding Nash equilibrium, in which each party chooses the policy that maximizes its utility given the policy of the other party. Nash equilibrium policy platforms, (q A∗ , q B∗ ), satisfy the following conditions: q A∗ = arg max {P (q
…
A∗ , q B )) (R + WB (q B )) + P (q A∗ , q B )WB (q A∗ )} q B ∈Q Intuitively, these conditions state that in a Nash equilibrium, taking q B∗ as given, q A∗ should maximize party A’s expected utility. At the same time, it must be true that taking q
…
A∗ as given, q B∗ should maximize B’s expected utility. The problem in characterizing this Nash equilibrium is that the function P (q A , q B ), as shown by (12.1), is not differentiable. Nevertheless, it is possible to Partisan Politics and
…
we make this point which maximizes winning probabilities the median voter’s ideal point is simply a normalization without any consequences). In that case, the Nash equilibrium of the policy competition game between the two parties is a pair of policies (q A∗ , q B∗ ) such that the following first-order conditions
…
are equal to each other, each party is playing its best response. When both parties are playing their best responses, we have a Nash equilibrium. Although (12.17) characterizes the Nash equilibrium implicitly for any function P (q A , q B ), it is not informative unless we put more structure on this function. To
…
’Donnell’s attack of, 77 Moore, Barrington, 38, 76, 255, 307 bourgeoisie emphasized by, 79 MVT. See Median Voter Theorem Myanmar. See Burma Napoleon, 67 Nash equilibrium, 95, 103 subgame perfect, 97, 132 uniqueness of, 108 nature decision by, 157, 181 taxation reset by, 305 networks, buyer/supplier, 297 Nicaragua dictatorship of
by Jonathan Aldred · 5 Jun 2019 · 453pp · 111,010 words
than two players and which were not zero-sum. In 1950 Nash published the simple, elegant idea that made his name, nowadays known as the Nash equilibrium. Barely 300 words long, it had been accepted by the prestigious journal Proceedings of the National Academy of Sciences, a great achievement for a doctoral
…
to change their behaviour. And that must mean everyone has already adopted the best possible strategy given the strategies adopted by others. This is a Nash equilibrium. Even though, when making their decisions, no one knew what anyone else would do, it is as if everyone correctly guessed the strategy adopted by
…
, a solution, comprising a prescription of how to play, or a prediction of what will be played, or both. Ever since Nash’s 1950 paper, Nash equilibrium has been the basis of that answer: simultaneously a prediction of what a stable outcome must look like and a prescription of how to play
…
. Nash equilibrium bears the mark of a real intellectual breakthrough – an idea that had not occurred to anyone before Nash yet one that with hindsight seems entirely
…
cooperative game theory, Nash was a loner. Indeed, he argued (in another path-breaking paper published just a year after his paper setting out the Nash equilibrium idea) that von Neumann’s cooperative game theory was redundant. All cooperative games, Nash argued, should be understood as in fact non-cooperative: the seemingly
…
had remained seated. In the original Prisoner’s Dilemma story, the reasoning described earlier implies that both players should confess. And this outcome is a Nash equilibrium: if your partner confesses, you do best by confessing too. So the blame for the damaging non-cooperation nurtured by this reasoning may seem to
…
– although millions of students in social science, philosophy, law and biology are today introduced to game theory via the Prisoner’s Dilemma and its Nash equilibrium ‘solution’ – the Nash equilibrium idea is not driving the outcome here. There is a more basic logic at work: regardless of what the other player does, your best
…
missiles on Cuba. It was obvious to both sides that Chicken was the game being played. However, what they both wanted to know was: which Nash equilibrium? In other words, who would swerve first? A mistake could mean annihilation. Most historians agree that the world has never come closer to full-scale
…
are to provide a prediction about how the players will behave, and/or a prescription about how they should. In games with more than one Nash equilibrium, like Chicken, game theory seemed to fail on both counts. Even game theorists began to ask: what’s the point? Worse still, over the coming
…
locked into non-cooperative situations disappears. Put another way, game theory says we will end up in a Nash equilibrium, but it does not explain which equilibrium – cooperative, non-cooperative or otherwise. It is a Nash equilibrium that everyone drives on the same side of the road, and there are two equilibria: everyone drives
…
. Game theory has little to say about which equilibrium will emerge, and why it differs across countries. Likewise, the QWERTY layout for keyboards is a Nash equilibrium: if everyone is using QWERTY to type, and almost all keyboards are manufactured with QWERTY, then you should learn to type using QWERTY too, and
…
not explain why we got stuck in this slow equilibrium, with the slow QWERTY layout. The key question, then, is often less about why a Nash equilibrium persists once the players are playing their equilibrium strategies, and more about whether we will reach that equilibrium in the first place: a question of
…
keys that were liable to jam when used at speed.) Most troubling of all for game-theoretic orthodoxy, even if a game has only one Nash equilibrium, it does not follow that we will reach it – that it will be the outcome when the game is actually played. Playing the
…
Nash equilibrium strategy is only the best way for you to play the game if everyone else is playing the Nash equilibrium strategy too. But as we have seen, there are several good reasons why you might think
…
–3, 24–32 and interdependence, 23 limitations of, 32, 33–4, 37–40, 41–3 minimax solution, 22 multiplicity problem, 33–4, 35–7, 38 Nash equilibrium, 22–3, 24, 25, 27–8, 33–4, 41–2 the Nash program, 25 and nature of trust, 28–31, 41 the Prisoner’s Dilemma
…
Morgenstern, Oskar, 20–22, 24–5, 28, 35, 124, 129, 189, 190 Mozart, Wolfgang Amadeus, 91, 92–3 Murphy, Kevin, 229 Mussolini, Benito, 216, 219 Nash equilibrium, 22–3, 24, 25, 27–8, 33–4, 41–2 Nash, John, 17–18, 22–3, 24, 25–6, 27–8, 33–4, 41–2
by Nate Silver · 12 Aug 2024 · 848pp · 227,015 words
ties are deeper than I expected when I began working on this project. They speak one another’s language with terms such as expected value, Nash equilibriums, and Bayesian priors. I think of the River as having several subregions. Let’s start with the one that will require the most explanation: Upriver
…
’s some truth to this: these programs are trained by essentially playing against themselves. And they’re designed to achieve a Nash equilibrium or game-theory optimal (GTO) style of play. “Nash equilibrium” is named after the American mathematician John Nash, a discovery for which he shared the Nobel Prize. (Nash is also famous
…
the movie A Beautiful Mind.) I’ll have a lot more to say about game theory later in this chapter, but the idea of a Nash equilibrium is that it’s a defensive approach, one that’s impossible to beat over the long run because it prevents your opponents from exploiting your
…
the game rock paper scissors (you know the one: rock crushes scissors, scissors cut paper, paper covers rock). Since no move dominates any other, the Nash equilibrium strategy for this game is simply to randomize and make each play one-third of the time. Humans are so predictable and so poor at
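Why uniform randomization is unbeatable can be verified by brute force: against the one-third mix, every reply the opponent owns earns exactly zero on average.

```python
from fractions import Fraction

# Row player's rock paper scissors payoffs: +1 win, -1 lose, 0 tie.
# Rows and columns ordered rock, paper, scissors.
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

third = Fraction(1, 3)
uniform = [third, third, third]

# The opponent's expected winnings from each pure reply to the mix.
opponent_values = []
for reply in range(3):
    v = sum(uniform[a] * -PAYOFF[a][reply] for a in range(3))
    opponent_values.append(v)
print(opponent_values)  # [Fraction(0, 1), Fraction(0, 1), Fraction(0, 1)]
```

No reply does better than breaking even, so nothing exploits the mix in the long run, which is the sense in which random play is the game's Nash equilibrium.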
…
only they’d been able to coordinate, they could have kept their sentences to two years. The prisoner’s dilemma is one example of a Nash equilibrium. No player can improve their position by unilaterally changing their strategy. No matter what Isabella does, for instance, Wyatt is better off snitching. That term
…
a hearty $10,000 profit ($2 profit margin x 5,000 slices). This is a nice situation for the brothers, but it isn’t a Nash equilibrium. Why not? Well, either store can unilaterally improve its outcome by changing its strategy. For instance, what if Lupo’s decides to undercut Francisco’s
…
cost—and it leads to two grumpy brothers barely eking out a living and lots of happy stoners munching on cheap pizza. This is a Nash equilibrium—and not coincidentally, it’s what the economics of the real world look like: the average restaurant only makes a profit margin of 3 to
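The undercutting spiral in the pizza story is Bertrand-style price competition, and a toy simulation (illustrative numbers, in integer cents) shows where it bottoms out:

```python
# Bertrand-style undercutting: the pricier shop loses every customer, so
# each brother shaves a cent off the other's price until doing so would
# mean selling below cost. Prices are integer cents (illustrative).
def undercut_until_equilibrium(price_cents, cost_cents, step=1):
    rounds = 0
    while price_cents - step >= cost_cents:
        price_cents -= step
        rounds += 1
    return price_cents, rounds

# Starting from $3.00 a slice with a $1.00 marginal cost:
print(undercut_until_equilibrium(300, 100))  # (100, 200): price = cost
```

At price = cost neither brother can gain by moving alone: cutting further loses money, raising loses every customer. That standstill is the grim equilibrium the excerpt describes.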
…
hand (call, fold, or raise) a few moments later. Has the player had an “Aha!” moment? Good guess, but no. She’s probably randomizing. The Nash equilibrium solution for poker—yes, there is a solution to poker, though as you’ll soon see it’s an exceptionally complicated one—involves lots of
…
Average in Batter vs. Pitcher Battle

                          Verlander throws fastball   Verlander throws curveball
Betts guesses fastball    .260                         .240
Betts guesses curveball   .180                         .350

What’s the Nash equilibrium? It can be solved with a little algebra—and as you’ve probably guessed, it involves randomization. It turns out that Verlander should throw the
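Silver's "little algebra" is the standard indifference calculation: each player randomizes so that the other gains nothing from either pure choice. With his numbers:

```python
from fractions import Fraction as F

# Each entry is Betts's batting average, which Betts wants high and
# Verlander wants low (a zero-sum tug of war). Keys are
# (Betts's guess, Verlander's pitch): "fb" fastball, "cb" curveball.
AVG = {("fb", "fb"): F(260, 1000), ("fb", "cb"): F(240, 1000),
       ("cb", "fb"): F(180, 1000), ("cb", "cb"): F(350, 1000)}

a, b = AVG[("fb", "fb")], AVG[("fb", "cb")]
c, d = AVG[("cb", "fb")], AVG[("cb", "cb")]

# Verlander throws fastballs with probability p that leaves Betts
# indifferent between his guesses: a*p + b*(1-p) = c*p + d*(1-p).
p = (d - b) / ((a - c) + (d - b))

# Betts guesses fastball with probability q that leaves Verlander
# indifferent between his pitches: a*q + c*(1-q) = b*q + d*(1-q).
q = (d - c) / ((a - b) + (d - c))

value = a * p + b * (1 - p)  # Betts's equilibrium batting average
print(p, q, float(value))   # 11/19, 17/19, ~0.252
```

By this calculation Verlander throws the fastball 11/19 ≈ 58 percent of the time, Betts guesses fastball 17/19 ≈ 89 percent of the time, and Betts bats about .252 at equilibrium.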
…
mediocre programmer, he decided to give it a try himself. Lopusiewicz endeavored to create a poker “solver”: a computer program that would literally find the Nash equilibrium for poker. This was an ambitious goal; even von Neumann had thought that poker was too complicated to solve from first principles. “It seemed that
…
only very minor changes, like raising with A♦Q♦ 77 percent instead of 76 percent of the time—the solution converges on a Nash equilibrium, since the definition of a Nash equilibrium is when there are no further unilateral strategic improvements. In principle, it’s an elegant approach, although I’m leaving out a
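A minimal sketch of that iterate-until-no-improvement loop: real solvers run counterfactual regret minimization over enormous poker game trees, but its simpler cousin, regret matching on rock paper scissors, shows the same convergence toward a point with no further unilateral improvements:

```python
# Regret matching (a stand-in for poker solvers at toy scale): each
# round, both players lean toward the moves they regret not playing.
# The *average* strategies drift toward the 1/3-1/3-1/3 equilibrium.
PAYOFF = [[0, -1, 1],    # row player's payoff; order: rock, paper, scissors
          [1, 0, -1],
          [-1, 1, 0]]

def mix(regrets):
    """Play in proportion to positive regret; uniform if none."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [x / total for x in pos] if total > 0 else [1 / 3] * 3

def train(iters=100_000):
    regrets = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]  # asymmetric start
    sums = [[0.0] * 3, [0.0] * 3]
    for _ in range(iters):
        s0, s1 = mix(regrets[0]), mix(regrets[1])
        for a in range(3):
            sums[0][a] += s0[a]
            sums[1][a] += s1[a]
        # Expected payoff of each pure move against the opponent's mix.
        u0 = [sum(s1[b] * PAYOFF[a][b] for b in range(3)) for a in range(3)]
        u1 = [sum(s0[a] * -PAYOFF[a][b] for a in range(3)) for b in range(3)]
        v0 = sum(s0[a] * u0[a] for a in range(3))
        v1 = sum(s1[b] * u1[b] for b in range(3))
        for a in range(3):
            regrets[0][a] += u0[a] - v0
            regrets[1][a] += u1[a] - v1
    return [[s / iters for s in sums[0]], [s / iters for s in sums[1]]]

avg0, avg1 = train()
print([round(x, 3) for x in avg0], [round(x, 3) for x in avg1])
```

The printed averages land close to one-third each: once there, no tiny unilateral adjustment raises either player's expected payoff, which is the solver's stopping condition.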
…
as a competition, but it is—there are thousands of other drivers facing the same problem and using the same software. The result is a Nash equilibrium where you’re indifferent between the various logical routes. So it’s probably not worth stressing out too much about finding a shortcut. Just sit
…
been quiet for several hours, he suddenly goes all-in. What do you do? You sure as hell don’t consult some solver on the Nash equilibrium. This player is doing everything in his power to signal that he wants to hang around a little longer, which means playing his best hands
…
. Raised with my crappy pair. Believe it or not, this is a play the solver likes. Why? Sometimes offense is the best defense. Remember, the Nash equilibrium is trying to prevent me from being exploited by Hendrix. If Hendrix can make a cheap bet on the flop and win every time we
…
, he still had a good hand, losing only to an ace-high flush or a full house. His play was a massive deviation from the Nash equilibrium—according to a solver, it was roughly a $20,000 mistake if Hendrix had been playing against a computer. But I don’t mean to
…
matter most to them. Play the long game. Sure, other people sometimes give you opportunities to take advantage of them. But remember that, in a Nash equilibrium, any attempt to exploit your opponent runs the risk of being exploited in return. Avoid “noble lies” and positions taken out of naked political expediency
…
for the actions of the other players. Game-theory optimal (GTO): In poker, play in accordance with that recommended by game theory, such as a Nash equilibrium derived by a solver. Informally, GTO refers to a style of play that is perceived as mathematically precise but rigid, in contrast to feel players
…
theory, when two or more strategies (such as raising or calling in poker) have the same expected value and you should randomize between them. A Nash equilibrium often requires the use of mixed strategies. Model: A simplified representation of a complex system designed to replicate its essential features accurately enough to make
…
nuclear deterrence that holds that nuclear states won’t attack one another because they’d be assured of a devastating retaliatory strike if they did. Nash equilibrium: After the Princeton mathematician John Nash, a game theory solution in which all participants have optimized their EV and there are no further gains from
…
(decentralized autonomous organizations, a form of self-governance structure) are examples of smart contracts. Solver: A poker computer program that calculates an approximation of the Nash equilibrium and thereby can advise you on the correct play. Spectrumy: On the autism spectrum. Splash zone*: My term for the vicinity near whales or degens
…
Something to Chance,” RAND, rand.org/pubs/historical_documents/HDA1631-1.html. Nash’s most famous paper: John F. Nash, “Equilibrium Points in N-Person Games,” Proceedings of the National Academy of Sciences 36, no. 1 (January 1950): 48–49, doi.org/10.1073/pnas.36
…
, 469, 472, 484, 488 envy-based economy, 329, 484 Enzer, Sam, 382–83, 385–86 epistemic humility, 484 epistemic rationality, 372–73, 495 equilibrium. See Nash equilibrium equity (poker), 484 error bar, 484 estimation ability, 237–38 Ethereum, 109, 323–24, 326–27, 484 Everydays (Winkelmann), 331 EV maximization AI existential risk
…
–60, 63, 64, 426 rationality and, 379 reciprocity, 130, 367–68, 471–72, 495 solvers and, 22, 60–61, 62–65, 71, 74 See also Nash equilibrium; prisoner’s dilemma game trees, 61, 486, 508n gaming, 486 See also gambling Gates, Bill, 344 Gay, Claudine, 295 Gebru, Timnit, 380n gender casinos and
…
, 486 group selection, 429n GTO (game theory optimal) strategies (poker), 47, 62, 63, 65–67, 71–72, 485–86, 508n, 509n See also EV maximizing; Nash equilibrium Gurley, Bill, 259, 269, 282 H Habryka, Oliver, 374, 378–79, 402 Haidt, Jonathan, 522n half-Kelly, 488 Hall, Cate, 122, 370 Hall, Galen, 240
…
, 267n, 295 secular stagnation and, 467 mutually assured destruction (MAD), 58, 421, 424–27, 488, 490 N Nakamoto, Satoshi, 322–23, 496 narcissism, 274–75 Nash equilibrium defined, 47, 490 dominant strategies and, 55 everyday randomization and, 64 in poker, 57–58, 60, 61, 62 prisoner’s dilemma as, 54 reciprocity and
…
line shopping, 202–4, 488 manual trading, 175, 176–77 market makers vs. retail bookmakers, 186–90, 187, 489, 518n models and, 179–80, 182 Nash equilibrium in, 58–60, 508n networking and, 191, 197 obsession and, 196 online legalization, 184–85, 198–99n patience and, 259 probabilistic thinking and, 16–17
by Adam Kucharski · 23 Feb 2016 · 360pp · 85,321 words
other firms. Economists refer to such a situation—where each person is making the best decision possible given the choices made by others—as a “Nash equilibrium.” Spending would rise further and further until this costly game stopped. Or somebody forced it to stop. Congress finally banned tobacco ads from television in
…
is always better to talk: if your partner stays silent, you get off; if your partner talks, you receive two years rather than three. The Nash equilibrium for the prisoner’s dilemma game therefore has both players talking. Although they will end up suffering two years in prison rather than one, neither
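That "both talk" is the equilibrium can be checked mechanically: test every strategy pair for a profitable unilateral deviation. A short sketch using the sentences implied by the text (zero years if you talk while your partner stays silent, one each for mutual silence, two each for mutual talking, three for staying silent against a talker):

```python
# years[(mine, partner)] = my prison sentence; lower is better.
years = {
    ("silent", "silent"): 1, ("silent", "talk"): 3,
    ("talk", "silent"): 0, ("talk", "talk"): 2,
}
actions = ["silent", "talk"]

def is_nash(a, b):
    """True if neither prisoner can cut their own sentence by switching."""
    return (years[(a, b)] <= min(years[(x, b)] for x in actions) and
            years[(b, a)] <= min(years[(y, a)] for y in actions))

equilibria = [(a, b) for a in actions for b in actions if is_nash(a, b)]
print(equilibria)  # [('talk', 'talk')]
```

Mutual silence fails the check because either prisoner can drop from one year to zero by talking; mutual talking is the only pair from which no one-sided switch helps, even though both would prefer the mutual-silence outcome.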
…
context. While the fiery debate between von Neumann and Fréchet sparked and crackled, John Nash was busy finishing his doctorate at Princeton. By establishing the Nash equilibrium, he had managed to extend von Neumann’s work, making it applicable to a wider number of situations. Whereas von Neumann had looked at zero
…
as complicated as Texas hold’em poker. Because a huge number of different possible situations could arise, it is very difficult to compute the ideal Nash equilibrium strategy. One way around the problem is to simplify things, creating an abstract version of the game. Just as stripped-down versions of poker helped
…
in straightforward games in which all information is known. Tic-tac-toe is a good example: after a few games, most people work out the Nash equilibrium. This is because there aren’t many ways in which a game can progress: if a player gets three in a row, the game is
…
-toe or the prisoner’s dilemma, it’s easy to make sense of the possible options, which means players’ strategies almost always end up in Nash equilibrium. But what happens when games are too complicated to fully grasp? The complexity of chess and many forms of poker means that players, be they
…
attraction,” preferring past actions that were successful to those that were not. Galla and Farmer wondered whether this process of learning helps players find the Nash equilibrium when games are difficult. They were also curious to see what happens if the game doesn’t settle down to an optimal outcome. What sort
…
particularly important in games like poker, which can have more than two players. Recall that, in game theory, optimal strategies are said to be in Nash equilibrium: no single player will gain anything by picking a different strategy. Neil Burch, one of the researchers in the University of Alberta poker group, points
…
if you have a single opponent. If the game is zero-sum—with everything you lose going to your opponent, and vice versa—then a Nash equilibrium strategy will limit your losses. What’s more, if your opponent deviates from an equilibrium strategy, your opponent will lose out. “In two player games
…
that are zero-sum, there’s a really good reason to say that a Nash equilibrium is the correct thing to play,” Burch said. However, it isn’t necessarily the best option when more players join the game. “In a three
…
: coalitions don’t always have to be deliberate. They might just result from the strategies players choose. In many situations, there is more than one Nash equilibrium. Take driving a car. There are two equilibrium strategies: if everyone drives on the left, you will lose out if you make a unilateral decision
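The driving example can be verified the same mechanical way: score a pair of choices 1 if the drivers match sides and 0 if they collide (made-up 0/1 payoffs for illustration), then test each profile for a profitable unilateral switch. Both "everyone left" and "everyone right" survive, which is the multiplicity the text describes:

```python
# Symmetric coordination game: payoff 1 to each driver if sides match.
sides = ["left", "right"]
payoff = {(a, b): int(a == b) for a in sides for b in sides}

def is_nash(a, b):
    """True if neither driver gains by unilaterally switching sides."""
    return (payoff[(a, b)] >= max(payoff[(x, b)] for x in sides) and
            payoff[(b, a)] >= max(payoff[(y, a)] for y in sides))

print([(a, b) for a in sides for b in sides if is_nash(a, b)])
# [('left', 'left'), ('right', 'right')]
```

Unlike the prisoner's dilemma, nothing inside the game selects between the two equilibria; which one a society lands on is a matter of convention, not strategy.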
…
with the situation. The same problem crops up in poker. As well as causing inconvenience, it can cost players money. Three poker players could choose Nash equilibrium strategies, and when these strategies are put together, it may turn out that two players have selected tactics that just so happen to pick on
…
point of view. Not only is the game far more complicated, with more potential moves to analyze, it’s not clear that hunting for the Nash equilibrium is always the best approach. “Even if you could compute one,” Michael Johanson said, “it wouldn’t necessarily be useful.” There are other drawbacks, too
…
. But if your opponent has flaws—or if there are more than two players in the game—you might want to deviate from the “optimal” Nash equilibrium strategy and instead take advantage of weaknesses. One way to do this would be to start off with an equilibrium strategy, and then gradually tweak
…
to exploit, taking as much as possible from weak opponents, but not be exploitable, and come unstuck against strong players. Defensive strategies—such as the Nash equilibrium, and the tactics employed by Dahl’s poker bot—are not very exploitable. Strong players will struggle to beat them. However, this comes at the
…
bot had made completely the wrong assumptions about its opponent. Along with PhD student Sam Ganzfried, Sandholm has been developing “hybrid” bots, which combine defensive Nash equilibrium tactics with opponent modeling. “We would like to only attempt to exploit weak opponents,” they said, “while playing the equilibrium against strong opponents.” IT’S
…
poker games. Polaris was designed to be hard to beat. Rather than attempting to exploit its opponents, it employed a strategy that was close to Nash equilibrium. At the time, some of the poker community thought Laak and Eslami were strange choices for the match. Laak had a reputation for being hyperactive
…
, the next challenge for the Alberta team was how to make a bot that was truly unbeatable. Their existing bots could only compute an approximate Nash equilibrium, which meant there might have been a strategy out there that could beat them. Bowling and his colleagues therefore set out to find a set
…
, 181–182 mortgage loan crisis, 96–97 multiple regression, 49 Munchkin, Richard, 72, 214 Nadal, Rafael, 110 NASCAR, 107 Nash, John, 137, 148–149, 158 Nash equilibrium, 137–138, 148–149, 151, 154, 160, 161, 181, 183, 184, 185, 187 National Academy of Sciences, 133 Nature (journal), 48, 49, 51 NBA, 85
by Scott E. Page · 27 Nov 2018 · 543pp · 153,550 words
by J. Doyne Farmer · 24 Apr 2024 · 406pp · 114,438 words
by Jim Jansen · 25 Jul 2011 · 298pp · 43,745 words
by Niall Kishtainy · 15 Jan 2017 · 272pp · 83,798 words
by Camilla Pang · 12 Mar 2020 · 256pp · 67,563 words
by Michael Wooldridge · 2 Nov 2018 · 346pp · 97,890 words
by Lawrence Freedman · 31 Oct 2013 · 1,073pp · 314,528 words
by Richard H. Thaler · 10 May 2015 · 500pp · 145,005 words
by Stuart Russell · 7 Oct 2019 · 416pp · 112,268 words
by John von Neumann and Oskar Morgenstern · 19 Mar 2007
by William Poundstone · 5 Feb 2008
by Philip Mirowski · 24 Jun 2013 · 662pp · 180,546 words
by Gabriel Weinberg and Lauren McCann · 17 Jun 2019
by Bruce C. Greenwald · 31 Aug 2016 · 482pp · 125,973 words
by Michael Kearns and Aaron Roth · 3 Oct 2019
by Andrew McAfee · 14 Nov 2023 · 381pp · 113,173 words
by John Kay · 24 May 2004 · 436pp · 76 words
by Kenneth Payne · 16 Jun 2021 · 339pp · 92,785 words
by Steven Pinker · 14 Oct 2021 · 533pp · 125,495 words
by Michael W. Covel · 19 Mar 2007 · 467pp · 154,960 words
by Peter L. Bernstein · 23 Aug 1996 · 415pp · 125,089 words
by George Dyson · 28 Mar 2012 · 463pp · 118,936 words
by Diane Coyle · 11 Oct 2021 · 305pp · 75,697 words
by Joseph Henrich · 27 Oct 2015 · 631pp · 177,227 words
by Tim Sullivan · 6 Jun 2016 · 252pp · 73,131 words
by Luciano Floridi · 25 Feb 2010 · 137pp · 36,231 words
by Clay Shirky · 28 Feb 2008 · 313pp · 95,077 words
by Michael J. Mauboussin · 1 Jan 2006 · 348pp · 83,490 words
by Hannah Fry · 3 Feb 2015 · 88pp · 25,047 words
by John Allen Paulos · 1 Jan 2003 · 295pp · 66,824 words
by Jonathan Tepper · 20 Nov 2018 · 417pp · 97,577 words
by Michel Aglietta · 23 Oct 2018 · 665pp · 146,542 words
by Robin Dunbar and Robin Ian MacDonald Dunbar · 2 Nov 2010 · 255pp · 79,514 words
by Michele Boldrin and David K. Levine · 6 Jul 2008 · 607pp · 133,452 words
by John Kay · 30 Apr 2010 · 237pp · 50,758 words
by Eliezer Yudkowsky · 11 Mar 2015 · 1,737pp · 491,616 words
by Tom Vanderbilt · 28 Jul 2008 · 512pp · 165,704 words
by Douglas B. Laney · 4 Sep 2017 · 374pp · 94,508 words
by William Poundstone · 1 Jan 2010 · 519pp · 104,396 words
by Branko Milanovic · 10 Apr 2016 · 312pp · 91,835 words
by Sean McFate · 22 Jan 2019 · 330pp · 83,319 words
by Bruce Schneier · 2 Mar 2015 · 598pp · 134,339 words
by Cal Newport · 2 Mar 2021 · 350pp · 90,898 words
by Mervyn King and John Kay · 5 Mar 2020 · 807pp · 154,435 words
by Natasha Dow Schüll · 15 Jan 2012 · 632pp · 166,729 words
by Robin Hanson · 31 Mar 2016 · 589pp · 147,053 words
by Nick Bostrom · 3 Jun 2014 · 574pp · 164,509 words
by Toby Ord · 24 Mar 2020 · 513pp · 152,381 words
by Mario Livio · 23 Sep 2003
by David Kerrigan · 18 Jun 2017 · 472pp · 80,835 words
by Nouriel Roubini · 17 Oct 2022 · 328pp · 96,678 words
by Philip Tetlock and Dan Gardner · 14 Sep 2015 · 317pp · 100,414 words