Concept in systems theory.

In a political system there are 'gatekeepers': individuals or institutions that control access to positions of power and regulate the flow of information and political influence.

Feminists have adapted this theory to explain male control of language and knowledge.

Source:

Geoffrey Roberts and Alistair Edwards, *A New Dictionary of Political Analysis* (London, 1991)

## History

Discussions of two-person games began long before the rise of modern, mathematical game theory. In 1713, a letter attributed to Charles Waldegrave, an active Jacobite and uncle to James Waldegrave, a British diplomat, analyzed a game called "le her".^{[2]} The true identity of the original correspondent remains elusive, given the limited details and evidence available and the subjective nature of their interpretation. One theory postulates Francis Waldegrave as the true correspondent, but this has yet to be proven.^{[3]} In this letter, Waldegrave provided a minimax mixed-strategy solution to a two-person version of the card game le Her; the problem is now known as the Waldegrave problem. In his 1838 *Recherches sur les principes mathématiques de la théorie des richesses* (*Researches into the Mathematical Principles of the Theory of Wealth*), Antoine Augustin Cournot considered a duopoly and presented a solution that is the Nash equilibrium of the game.

In 1913, Ernst Zermelo published *Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels* (*On an Application of Set Theory to the Theory of the Game of Chess*), which proved that the optimal chess strategy is strictly determined. This paved the way for more general theorems.^{[4]}

In 1938, the Danish mathematical economist Frederik Zeuthen proved that the mathematical model had a winning strategy by using Brouwer's fixed-point theorem.^{[5]} In his 1938 book *Applications aux Jeux de Hasard* and earlier notes, Émile Borel proved a minimax theorem for two-person zero-sum matrix games only when the pay-off matrix was symmetric, and provided a solution to a non-trivial infinite game (known in English as the Blotto game). Borel conjectured the non-existence of mixed-strategy equilibria in finite two-person zero-sum games, a conjecture that was proved false by von Neumann.

Game theory did not exist as a unique field until John von Neumann published the paper *On the Theory of Games of Strategy* in 1928.^{[6]}^{[7]} Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by his 1944 book *Theory of Games and Economic Behavior*, co-authored with Oskar Morgenstern, in which his work in game theory culminated.^{[8]} The second edition of this book provided an axiomatic theory of utility, which reincarnated Daniel Bernoulli's old theory of utility (of money) as an independent discipline. This foundational work contains the method for finding mutually consistent solutions for two-person zero-sum games. Subsequent work focused primarily on cooperative game theory, which analyzes optimal strategies for groups of individuals, presuming that they can enforce agreements between them about proper strategies.^{[9]}

In 1950, the first mathematical discussion of the prisoner’s dilemma appeared, and an experiment was undertaken by notable mathematicians Merrill M. Flood and Melvin Dresher, as part of the RAND Corporation’s investigations into game theory. RAND pursued the studies because of possible applications to global nuclear strategy.^{[10]} Around this same time, John Nash developed a criterion for mutual consistency of players’ strategies known as the Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von Neumann and Morgenstern. Nash proved that every finite n-player, non-zero-sum (not just two-player zero-sum) non-cooperative game has what is now known as a Nash equilibrium in mixed strategies.

Game theory experienced a flurry of activity in the 1950s, during which the concepts of the core, the extensive form game, fictitious play, repeated games, and the Shapley value were developed. The 1950s also saw the first applications of game theory to philosophy and political science.

In 1979 Robert Axelrod tried setting up computer programs as players and found that in tournaments between them the winner was often a simple "tit-for-tat" program (submitted by Anatol Rapoport) that cooperates on the first step and then, on every subsequent step, does whatever its opponent did on the previous step. The same winner was also often obtained by natural selection, a fact that is widely taken to explain cooperation phenomena in evolutionary biology and the social sciences.^{[11]}
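The tit-for-tat rule described above is simple enough to sketch directly. The payoff values and strategy names below are illustrative conventions for the iterated prisoner's dilemma, not taken from Axelrod's tournament code.

```python
# Iterated prisoner's dilemma with a tit-for-tat player (illustrative payoffs).
PAYOFF = {  # (my move, opponent's move) -> (my score, opponent's score)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(opponent_history):
    """Cooperate on the first step, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run both strategies against each other and return total scores."""
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each side's record of the *opponent's* moves
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited once, then retaliates: (9, 14)
```

Against itself, tit-for-tat cooperates on every round; against a pure defector it loses only the first round, which is why it performed well in round-robin tournaments.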

### Prize-winning achievements

In 1965, Reinhard Selten introduced his solution concept of subgame perfect equilibria, which further refined the Nash equilibrium. Later he would introduce trembling hand perfection as well. In 1994 Nash, Selten and Harsanyi became Economics Nobel Laureates for their contributions to economic game theory.

In the 1970s, game theory was extensively applied in biology, largely as a result of the work of John Maynard Smith and his evolutionarily stable strategy. In addition, the concepts of correlated equilibrium, trembling hand perfection, and common knowledge^{[a]} were introduced and analyzed.

In 2005, game theorists Thomas Schelling and Robert Aumann followed Nash, Selten, and Harsanyi as Nobel Laureates. Schelling worked on dynamic models, early examples of evolutionary game theory. Aumann contributed more to the equilibrium school, introducing correlated equilibrium (a coarsening of Nash equilibrium) and developing an extensive formal analysis of the assumption of common knowledge and of its consequences.

In 2007, Leonid Hurwicz, Eric Maskin, and Roger Myerson were awarded the Nobel Prize in Economics "for having laid the foundations of mechanism design theory". Myerson's contributions include the notion of proper equilibrium and an important graduate text: *Game Theory: Analysis of Conflict*.^{[1]} Hurwicz introduced and formalized the concept of incentive compatibility.

In 2012, Alvin E. Roth and Lloyd S. Shapley were awarded the Nobel Prize in Economics “for the theory of stable allocations and the practice of market design”. In 2014, the Nobel went to game theorist Jean Tirole.

## Game types

### Cooperative / non-cooperative

A game is *cooperative* if the players are able to form binding commitments externally enforced (e.g. through contract law). A game is *non-cooperative* if players cannot form alliances or if all agreements need to be self-enforcing (e.g. through credible threats).^{[12]}

Cooperative games are often analyzed through the framework of *cooperative game theory*, which focuses on predicting which coalitions will form, the joint actions that groups take, and the resulting collective payoffs. It is opposed to the traditional *non-cooperative game theory*, which focuses on predicting individual players' actions and payoffs and analyzing Nash equilibria.^{[13]}^{[14]}

Cooperative game theory provides a high-level approach, as it describes only the structure, strategies, and payoffs of coalitions, whereas non-cooperative game theory also looks at how bargaining procedures will affect the distribution of payoffs within each coalition. As non-cooperative game theory is more general, cooperative games can be analyzed through the approach of non-cooperative game theory (the converse does not hold), provided that sufficient assumptions are made to encompass all the strategies available to players due to the possibility of external enforcement of cooperation. While it would thus be optimal to have all games expressed under a non-cooperative framework, in many instances insufficient information is available to accurately model the formal procedures available during the strategic bargaining process, or the resulting model would be too complex to offer a practical tool in the real world. In such cases, cooperative game theory provides a simplified approach that allows analysis of the game at large without having to make any assumptions about bargaining power.
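One concrete instance of the cooperative viewpoint is the Shapley value mentioned earlier: given a characteristic function assigning a worth to every coalition, it divides the grand coalition's payoff according to each player's average marginal contribution. A minimal sketch, using a made-up three-player "glove game" (player 1 owns a left glove, players 2 and 3 each own a right glove, and only a matched pair has value):

```python
from itertools import permutations
from math import factorial

def shapley_values(players, v):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += v(with_p) - v(coalition)  # marginal contribution of p
            coalition = with_p
    return {p: totals[p] / factorial(len(players)) for p in players}

def v(coalition):
    """Glove game: a left glove (player 1) plus any right glove is worth 1."""
    return 1.0 if 1 in coalition and (2 in coalition or 3 in coalition) else 0.0

print(shapley_values([1, 2, 3], v))  # player 1 gets 2/3; players 2 and 3 get 1/6 each
```

The scarce left glove earns player 1 the larger share, which is the kind of coalition-level prediction cooperative game theory makes without modeling any explicit bargaining procedure.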

### Symmetric / asymmetric

|       | E    | F    |
|-------|------|------|
| **E** | 1, 2 | 0, 0 |
| **F** | 0, 0 | 1, 2 |

*An asymmetric game*

A symmetric game is a game where the payoffs for playing a particular strategy depend only on the other strategies employed, not on who is playing them. That is, if the identities of the players can be changed without changing the payoff to the strategies, then a game is symmetric. Many of the commonly studied 2×2 games are symmetric. The standard representations of chicken, the prisoner’s dilemma, and the stag hunt are all symmetric games. Some^{[who?]} scholars would consider certain asymmetric games as examples of these games as well. However, the most common payoffs for each of these games are symmetric.

The most commonly studied asymmetric games are games where there are not identical strategy sets for both players. For instance, the ultimatum game and similarly the dictator game have different strategies for each player. It is possible, however, for a game to have identical strategies for both players, yet be asymmetric. For example, the game pictured to the right is asymmetric despite having identical strategy sets for both players.
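The symmetry condition can be stated concretely for a bimatrix game: with payoff matrices A for the row player and B for the column player, the game is symmetric exactly when B is the transpose of A, so that a payoff depends only on the strategies played and not on who plays them. A small check, using the E/F game from the table above and, for contrast, a standard prisoner's dilemma:

```python
def is_symmetric(A, B):
    """True iff B is the transpose of A, i.e. payoffs are player-independent."""
    n = len(A)
    return all(B[j][i] == A[i][j] for i in range(n) for j in range(n))

# The E/F game pictured above: identical strategy sets, yet asymmetric.
A = [[1, 0],
     [0, 1]]  # row player's payoffs
B = [[2, 0],
     [0, 2]]  # column player's payoffs
print(is_symmetric(A, B))  # False

# Prisoner's dilemma payoffs, by contrast, are symmetric.
PD_A = [[3, 0],
        [5, 1]]
PD_B = [[3, 5],
        [0, 1]]
print(is_symmetric(PD_A, PD_B))  # True
```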

### Zero-sum / non-zero-sum

|       | A     | B     |
|-------|-------|-------|
| **A** | –1, 1 | 3, –3 |
| **B** | 0, 0  | –2, 2 |

*A zero-sum game*

Zero-sum games are a special case of constant-sum games in which choices by players can neither increase nor decrease the available resources. In zero-sum games, the total benefit to all players, for every combination of strategies, always adds to zero (more informally, a player benefits only at the equal expense of others).^{[15]} Poker exemplifies a zero-sum game (ignoring the possibility of the house's cut), because one wins exactly the amount one's opponents lose. Other zero-sum games include matching pennies and most classical board games, including Go and chess.

Many games studied by game theorists (including the famed prisoner’s dilemma) are non-zero-sum games, because the outcome has net results greater or less than zero. Informally, in non-zero-sum games, a gain by one player does not necessarily correspond with a loss by another.

Constant-sum games correspond to activities like theft and gambling, but not to the fundamental economic situation in which there are potential gains from trade. It is possible to transform any game into a (possibly asymmetric) zero-sum game by adding a dummy player (often called “the board”) whose losses compensate the players’ net winnings.
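Both the zero-sum property and the dummy-player construction described above are easy to state in code. A sketch, representing a game as a dictionary from strategy profiles to payoff tuples (the zero-sum payoffs are the A/B game from the table above; the prisoner's dilemma payoffs are the conventional ones):

```python
def is_zero_sum(payoffs):
    """True iff every strategy profile's payoffs sum to zero."""
    return all(sum(p) == 0 for p in payoffs.values())

zero_sum_game = {  # the A/B game pictured above
    ("A", "A"): (-1, 1),
    ("A", "B"): (3, -3),
    ("B", "A"): (0, 0),
    ("B", "B"): (-2, 2),
}

pd = {  # prisoner's dilemma: a non-zero-sum game
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def add_dummy_player(payoffs):
    """Append a 'board' player whose losses offset the others' net winnings."""
    return {profile: p + (-sum(p),) for profile, p in payoffs.items()}

print(is_zero_sum(zero_sum_game))         # True
print(is_zero_sum(pd))                    # False
print(is_zero_sum(add_dummy_player(pd)))  # True: zero-sum once the board absorbs the surplus
```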

### Simultaneous / sequential

Simultaneous games are games where both players move simultaneously, or if they do not move simultaneously, the later players are unaware of the earlier players’ actions (making them *effectively* simultaneous). Sequential games (or dynamic games) are games where later players have some knowledge about earlier actions. This need not be perfect information about every action of earlier players; it might be very little knowledge. For instance, a player may know that an earlier player did not perform one particular action, while they do not know which of the other available actions the first player actually performed.

The difference between simultaneous and sequential games is captured in the different representations discussed above. Often, normal form is used to represent simultaneous games, while extensive form is used to represent sequential ones. The transformation from extensive to normal form is one-way: multiple extensive-form games can correspond to the same normal form. Consequently, notions of equilibrium for simultaneous games are insufficient for reasoning about sequential games; see subgame perfection.
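The standard way to solve a finite sequential game of perfect information is backward induction, which underlies subgame perfection. A minimal sketch on a toy ultimatum-style tree (the game, player numbering, and payoffs are invented for illustration):

```python
def backward_induction(node):
    """Solve a game tree by backward induction.

    A node is either a payoff tuple (a leaf) or a pair
    (player, {action: subtree}); each player maximizes their own payoff
    given optimal play in every subgame.
    """
    if isinstance(node, tuple) and all(isinstance(x, (int, float)) for x in node):
        return [], node  # leaf: no further moves, just the payoffs
    player, actions = node
    best = None
    for action, subtree in actions.items():
        path, payoffs = backward_induction(subtree)
        if best is None or payoffs[player] > best[1][player]:
            best = ([action] + path, payoffs)
    return best

# Player 0 proposes a fair or unfair split; player 1 accepts or rejects.
game = (0, {
    "fair":   (1, {"accept": (5, 5), "reject": (0, 0)}),
    "unfair": (1, {"accept": (8, 2), "reject": (0, 0)}),
})

path, payoffs = backward_induction(game)
print(path, payoffs)  # ['unfair', 'accept'] (8, 2)
```

Because player 1 accepts any positive offer in every subgame, player 0 proposes the unfair split: the subgame-perfect outcome, which a purely normal-form analysis of the same game would not single out.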
