Nash equilibrium (1950)

Named after mathematician John Nash, and central to game theory, Nash equilibrium refers to a situation in which each individual participating in a game pursues the best possible strategy given knowledge of the strategies of the other players.

It rests on the premise that no player can improve his/her position by unilaterally changing strategy, given the other players’ strategies.

Nash equilibrium is sometimes referred to as the non-co-operative equilibrium because each player chooses his/her own strategy believing it is the best one possible, without collusion, and without considering the interests of either his/her opponents or the society in which he/she lives.

Also see: co-operative games theory, collusion theory, oligopoly theory, Allais paradox

Source:
J. F. Nash, ‘Equilibrium Points in n-Person Games’, Proceedings of the National Academy of Sciences, USA, vol. 36 (1950), 48–49

Applications

Game theorists use Nash equilibrium to analyze the outcome of the strategic interaction of several decision makers. In a strategic interaction, the outcome for each decision maker depends on the decisions of the others as well as his/her own. The simple insight underlying Nash’s idea is that one cannot predict the choices of multiple decision makers by analyzing those decisions in isolation. Instead, one must ask what each player would do taking into account what he/she expects the others to do. Nash equilibrium requires that the players’ choices be consistent: no player wishes to undo his/her decision given what the others are deciding.

The concept has been used to analyze hostile situations such as wars and arms races[3] (see prisoner’s dilemma), and also how conflict may be mitigated by repeated interaction (see tit-for-tat). It has also been used to study to what extent people with different preferences can cooperate (see battle of the sexes), and whether they will take risks to achieve a cooperative outcome (see stag hunt). It has been used to study the adoption of technical standards, and also the occurrence of bank runs and currency crises (see coordination game). Other applications include traffic flow (see Wardrop’s principle), how to organize auctions (see auction theory), the outcome of efforts exerted by multiple parties in the education process,[4] regulatory legislation such as environmental regulations (see tragedy of the commons),[5] natural resource management,[6] analysing strategies in marketing,[7] and even penalty kicks in football (see matching pennies),[8] as well as energy systems, transportation systems, evacuation problems[9] and wireless communications.[10]

History

Nash equilibrium is named after American mathematician John Forbes Nash, Jr. The same idea was used in a particular application in 1838 by Antoine Augustin Cournot in his theory of oligopoly.[11] In Cournot’s theory, each of several firms chooses how much output to produce to maximize its profit. The best output for one firm depends on the outputs of the others. A Cournot equilibrium occurs when each firm’s output maximizes its profits given the output of the other firms, which is a pure-strategy Nash equilibrium. Cournot also introduced the concept of best response dynamics in his analysis of the stability of equilibrium. However, Cournot did not use the idea in any other applications or define it generally.

The modern concept of Nash equilibrium is instead defined in terms of mixed strategies, where players choose a probability distribution over possible pure strategies (which might put 100% of the probability on one pure strategy; such pure strategies are a subset of mixed strategies). The concept of a mixed-strategy equilibrium was introduced by John von Neumann and Oskar Morgenstern in their 1944 book Theory of Games and Economic Behavior, but their analysis was restricted to the special case of zero-sum games. They showed that a mixed-strategy Nash equilibrium will exist for any zero-sum game with a finite set of actions.[12] The contribution of Nash in his 1951 article “Non-Cooperative Games” was to define a mixed-strategy Nash equilibrium for any game with a finite set of actions and to prove that at least one (mixed-strategy) Nash equilibrium must exist in such a game. The key to Nash’s ability to prove existence far more generally than von Neumann lay in his definition of equilibrium. According to Nash, “an equilibrium point is an n-tuple such that each player’s mixed strategy maximizes his payoff if the strategies of the others are held fixed. Thus each player’s strategy is optimal against those of the others.” Putting the problem in this framework allowed Nash to employ the Kakutani fixed-point theorem in his 1950 paper to prove the existence of equilibria. His 1951 paper used the simpler Brouwer fixed-point theorem for the same purpose.[13]

Game theorists have discovered that in some circumstances Nash equilibrium makes invalid predictions or fails to make a unique prediction. They have proposed many solution concepts (‘refinements’ of Nash equilibria) designed to rule out implausible Nash equilibria. One particularly important issue is that some Nash equilibria may be based on threats that are not ‘credible’. In 1965 Reinhard Selten proposed subgame perfect equilibrium as a refinement that eliminates equilibria which depend on non-credible threats. Other extensions of the Nash equilibrium concept have addressed what happens if a game is repeated, or what happens if a game is played in the absence of complete information. However, subsequent refinements and extensions of Nash equilibrium share the main insight on which Nash’s concept rests: the equilibrium is a set of strategies such that each player’s strategy is optimal given the choices of the others.

Definitions

Nash Equilibrium

A strategy profile is a set of strategies, one for each player. Informally, a strategy profile is a Nash equilibrium if no player can do better by unilaterally changing his strategy. To see what this means, imagine that each player is told the strategies of the others. Suppose then that each player asks himself: “Knowing the strategies of the other players, and treating the strategies of the other players as set in stone, can I benefit by changing my strategy?”

If any player could answer “Yes”, then that set of strategies is not a Nash equilibrium. But if every player prefers not to switch (or is indifferent between switching and not) then the strategy profile is a Nash equilibrium. Thus, each strategy in a Nash equilibrium is a best response to the other players’ strategies in that equilibrium.[14]
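As a concrete illustration of this deviation test, the short Python sketch below (not part of the original article; the payoff numbers are the standard prisoner’s dilemma and are purely illustrative) checks whether a given pure-strategy profile of a two-player game is a Nash equilibrium by asking whether either player could gain from a unilateral switch.

```python
# A minimal sketch: test whether a pure-strategy profile of a two-player game
# is a Nash equilibrium by checking every unilateral deviation.
# Payoffs are indexed as payoff[row_strategy][column_strategy]; the prisoner's
# dilemma numbers below are illustrative, not taken from the article.

def is_nash_equilibrium(payoff_row, payoff_col, row, col):
    """Return True if (row, col) is a Nash equilibrium of the bimatrix game."""
    # Can the row player gain by deviating, holding the column fixed?
    if any(payoff_row[r][col] > payoff_row[row][col] for r in range(len(payoff_row))):
        return False
    # Can the column player gain by deviating, holding the row fixed?
    if any(payoff_col[row][c] > payoff_col[row][col] for c in range(len(payoff_col[0]))):
        return False
    return True

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
row_payoffs = [[-1, -3],
               [ 0, -2]]
col_payoffs = [[-1,  0],
               [-3, -2]]

print(is_nash_equilibrium(row_payoffs, col_payoffs, 1, 1))  # True: (defect, defect)
print(is_nash_equilibrium(row_payoffs, col_payoffs, 0, 0))  # False: each player gains by defecting
```

Here mutual defection passes the test even though both players would prefer mutual cooperation, which previews the point made below that a Nash equilibrium need not be Pareto optimal.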

Formally, let $S_i$ be the set of all possible strategies for player $i$, where $i = 1, \ldots, N$. Let $s^* = (s_i^*, s_{-i}^*)$ be a strategy profile, a set consisting of one strategy for each player, where $s_{-i}^*$ denotes the $N-1$ strategies of all the players except $i$. Let $u_i(s_i, s_{-i}^*)$ be player $i$’s payoff as a function of the strategies. The strategy profile $s^*$ is a Nash equilibrium if[15]

$$u_i(s_i^*, s_{-i}^*) \geq u_i(s_i, s_{-i}^*) \quad \text{for all } s_i \in S_i.$$

A game can have more than one Nash equilibrium. Even if the equilibrium is unique, it might be weak: a player might be indifferent among several strategies given the other players’ choices. The equilibrium is called a strict Nash equilibrium if the inequality is strict, so that each player’s equilibrium strategy is his/her unique best response:

$$u_i(s_i^*, s_{-i}^*) > u_i(s_i, s_{-i}^*) \quad \text{for all } s_i \in S_i,\ s_i \neq s_i^*.$$

Note that the strategy set $S_i$ can be different for different players, and its elements can be a variety of mathematical objects. Most simply, a player might choose between two strategies, e.g. $S_i = \{\text{Yes}, \text{No}\}$. Or the strategy set might be a finite set of conditional strategies responding to other players, e.g. $S_i = \{\text{Yes} \mid p = \text{Low},\ \text{No} \mid p = \text{High}\}$. Or it might be an infinite set, a continuum or unbounded, e.g. $S_i = \{\text{Price}\}$ such that $\text{Price}$ is a non-negative real number. Nash’s existence proofs assume a finite strategy set, but the concept of Nash equilibrium does not require it.
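To make the continuum case concrete, the following Python sketch (illustrative only; the linear demand curve and the cost number are assumptions, not taken from the article) treats each firm’s strategy as a non-negative output level in a Cournot duopoly. Iterating the best-response dynamics that Cournot studied converges to the pure-strategy Nash (Cournot) equilibrium.

```python
# A minimal sketch of a continuum strategy set: Cournot duopoly with assumed
# linear inverse demand P(Q) = a - b*Q and a common constant marginal cost c.
# Firm i's best response to the rival's output q_other maximizes
# (a - b*(q + q_other) - c) * q, i.e. q = (a - c - b*q_other) / (2*b).
# The Nash (Cournot) equilibrium output for each firm is (a - c) / (3*b).

a, b, c = 100.0, 1.0, 10.0           # illustrative demand and cost parameters

def best_response(q_other):
    """Profit-maximizing output given the rival's output (kept non-negative)."""
    return max(0.0, (a - c - b * q_other) / (2 * b))

q1 = q2 = 0.0
for _ in range(100):                 # simultaneous best-response dynamics
    q1, q2 = best_response(q2), best_response(q1)

print(q1, q2)                        # both converge to (a - c) / (3*b) = 30.0
```

At the fixed point neither firm can raise its profit by changing its own output, which is exactly the equilibrium condition above applied to an infinite strategy set.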

The Nash equilibrium may sometimes appear non-rational from a third-person perspective. This is because a Nash equilibrium is not necessarily Pareto optimal.

Nash equilibrium may also have non-rational consequences in sequential games, because players may make threats they would not actually carry out. For such games the subgame perfect Nash equilibrium may be more meaningful as a tool of analysis.

Strict/Weak Equilibrium

Suppose that in the Nash equilibrium, each player asks themselves: “Knowing the strategies of the other players, and treating the strategies of the other players as set in stone, would I suffer a loss by changing my strategy?”

If every player’s answer is “Yes”, then the equilibrium is classified as a strict Nash equilibrium.[16]

If instead, for some player, the equilibrium strategy and some other strategy give exactly the same payoff (i.e. this player is indifferent between switching and not), then the equilibrium is classified as a weak Nash equilibrium.

A game can have a pure-strategy or a mixed-strategy Nash equilibrium. (In the latter a pure strategy is chosen stochastically with a fixed probability).

Nash’s Existence Theorem

Nash proved that if mixed strategies (where a player chooses probabilities of using various pure strategies) are allowed, then every game with a finite number of players in which each player can choose from finitely many pure strategies has at least one Nash equilibrium, which might be a pure strategy for each player or might be a probability distribution over strategies for each player.
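As an illustration of the theorem (again a sketch, not from the article), matching pennies has no pure-strategy equilibrium, yet a mixed-strategy equilibrium exists: each player randomizes exactly so that the opponent is indifferent between his/her two pure strategies. The small Python helper below computes such a fully mixed equilibrium of a 2x2 game from these indifference conditions; it assumes the game actually has an interior mixed equilibrium.

```python
# A minimal sketch: the fully mixed equilibrium of a 2x2 bimatrix game,
# found by making each opponent indifferent between his/her two pure strategies.
# A[i][j] and B[i][j] are the row and column player's payoffs when the row
# player plays i and the column player plays j. Matching pennies is illustrative.

def mixed_equilibrium_2x2(A, B):
    """Return (p, q): prob. the row player plays row 0 and the column player plays column 0."""
    # The row player's mix p makes the column player indifferent between columns.
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    # The column player's mix q makes the row player indifferent between rows.
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Matching pennies: the row player wins on a match, the column player on a mismatch.
A = [[ 1, -1],
     [-1,  1]]
B = [[-1,  1],
     [ 1, -1]]

print(mixed_equilibrium_2x2(A, B))   # (0.5, 0.5): each side plays heads half the time
```

This is the equilibrium guaranteed by Nash’s theorem for this game, even though no pure-strategy profile survives the deviation test described earlier.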

Nash equilibria need not exist if the set of choices is infinite and non-compact. An example is a game where two players simultaneously name a number and the player naming the larger number wins. Another example is where each of two players chooses a real number strictly less than 5 and the winner is whoever has the largest number; no largest number strictly less than 5 exists (if the number could equal 5, the Nash equilibrium would have both players choosing 5 and tying the game). However, a Nash equilibrium exists if the set of choices is compact and each player’s payoff is continuous in the strategies of all the players.
