Focal points and risk dominance

This appendix introduces some of the concepts mentioned in Chapter 8 and discusses related issues for the convenience of readers who are not well acquainted with game theory.

1. Salience and precedence

Let us, for a moment, dispense with the assumptions concerning the driving game of Table 8.1 and think about how real individuals (in contrast to the model-theoretic individuals who are inclined to use a mixed strategy) would behave. The first thing that comes to mind is to look for the specific pieces of information that the individuals may benefit from. For example, we may speculate that if the steering wheels of the two cars are on the left, they may find it convenient to drive on the right-hand side of the road. Or we may contemplate that, since a majority of individuals are right-handed, they may use this as a coordination device; that is, they may expect others to use this piece of information in order to increase their chances of coordination. We may also speculate that right and left have different connotations in the culture to which players belong. For example, if they believe that doing something from the left (e.g. getting out of bed on the left-hand side) causes ‘bad’ things, then they would not drive on the left. That is, they would consider ‘right’ as the salient or prominent option. These examples illustrate the way in which salience may work: given their particular environment, agents might think that one of the alternatives (e.g. driving on the right) stands out and expect others to use this alternative as a coordination device. ‘Salience in general is uniqueness of a coordination equilibrium in a pre-eminently conspicuous respect’ (Lewis 1969: 38; also see Sugden 1986: 47–52).

Alternatively, it might be the case that individuals always walk on the right-hand side of the pavement. That is, they avoid hitting other people by walking on the right. If this is the case, then it is possible that they might consider the driving game as being analogous to their ‘walking game’ and expect others to adopt the convention of walking on the right in the driving game. If they could coordinate by using this analogy, then we may say that driving on the right would emerge as a convention by precedence. Another form of precedence may be the following: let us assume that agents had previously been able to coordinate in the driving game many times by driving on the right. Then, when they have to play again, they might choose driving on the right just because they were successful in coordination in the past by driving on the right. It should be noted here that although salience and precedence are different notions, they are closely related. In fact, ‘precedence is merely the source of one important kind of salience: conspicuous uniqueness of an equilibrium because we reached it last time’ (Lewis 1969: 36).

2. Schelling games

In the following games, players will win a prize if they choose the same alternative or do the same thing. They cannot communicate and they make their choices simultaneously (reproduced from Schelling 1960: 56–57).

  • Name ‘heads’ or ‘tails’.
  • Circle one of the numbers listed below:

7     100        13         261        99          555

  • You are to meet somebody in New York City. You have not been instructed where to meet; you have no prior understanding about where to meet; and you cannot communicate with each other. Where would you go to meet the other person?
  • You were told the date (and the meeting place) but not the hour of the meeting in the previous problem. At what time will you appear at the meeting place?
  • Write a positive number.
  • Name an amount of money.
  • Divide $100 into two piles.

In Schelling’s informal experiments, the most frequently chosen answers were:

  1. Heads
  2. Number 7
  3. Grand Central Station
  4. 12 noon
  5. 1
  6. $1,000,000
  7. $50

3. Ultimatum games and predictions of game theory

The gap between the ‘predictions’ of game theory and the actual behaviour of individuals may be observed in a number of other experiments that concern other games, such as the ultimatum game. For example, suppose that as a participant in a TV show you are asked to divide 100 euros between yourself and your co-player. The other participant is asked to do the same and you may not communicate in any way. If you and your co-player independently agree on the division, each of you will receive the amount of money you specified for yourself. If not, you will get nothing.

In this problem there are many equilibrium points: every division that sums to 100 is an equilibrium. For example, you may want to get 60 per cent of the money and give away 40 per cent to the other participant. If the other participant independently makes the same offer – that is, 40 per cent for herself and 60 per cent for you – you will both get the money you specified. There are many options you may choose from and standard game theory will not help you a lot. However, you may just think that a fair division is the first thing that would occur to your co-player, and for that reason you would divide the money in half. You may be right in that people, at least in Western countries, may follow this ‘fairness norm’ to coordinate in such games.
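
The claim that every compatible pair of demands is an equilibrium can be checked mechanically. The following sketch (in Python; the book itself contains no code, and demands are assumed here to be in whole euros) verifies that neither player can gain by unilaterally changing a compatible demand:

```python
def payoffs(x, y, total=100):
    """Each player independently demands a share of `total`; they are paid
    only if the two demands are compatible (i.e. sum exactly to `total`)."""
    return (x, y) if x + y == total else (0, 0)

def is_nash(x, y, total=100):
    """Neither player can gain by unilaterally changing his or her demand."""
    u1, u2 = payoffs(x, y, total)
    best1 = max(payoffs(d, y, total)[0] for d in range(total + 1))
    best2 = max(payoffs(x, d, total)[1] for d in range(total + 1))
    return u1 >= best1 and u2 >= best2

# Every compatible division, including very unequal ones, is a Nash equilibrium.
print(all(is_nash(x, 100 - x) for x in range(101)))   # True
```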

But let us make the problem a little bit more interesting. Let us say that in the same game you are going to make the offer and your co-player has the chance to accept or reject it. If she accepts, you both get the amounts you have specified in your offer; if she rejects, you both get nothing. (This game is known as the ultimatum game. There is a huge experimental and theoretical literature concerning the ultimatum game in its different forms. Some of the papers that analyse ultimatum games can be listed as follows: Cameron (1995); Fehr and Tougareva (1995); Forsythe et al. (1994); Güth et al. (1982); Hoffmann et al. (1994); Roth et al. (1991).) Game theory predicts that as a rational player your co-player would accept any positive offer. The intuition behind this is that she is getting money out of nowhere and, hence, she should take what she can get. Your rational decision should then be to offer the minimum possible amount to your co-player and keep the rest for yourself – that is, you could keep 99 euros and give away 1 euro to the other participant. However, experiments have shown that this is not what real people do in such situations. For example, Güth et al. (1982) found that a fair offer (50 per cent, 50 per cent) is a good coordinating device among real individuals. That is, existing norms and conventions (e.g. the fairness norm) help real individuals solve this coordination problem.
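
A small backward-induction sketch makes the contrast concrete. The `fairness_threshold` parameter is purely illustrative (it is not part of the models cited above): it stands in for the lowest offer the responder is willing to accept.

```python
def responder_accepts(offer, fairness_threshold=0):
    """A purely money-maximising responder accepts any positive offer
    (threshold 0); experimental subjects behave as if the threshold were
    much higher, frequently rejecting offers below 30-40 per cent."""
    return offer > fairness_threshold

def best_proposal(total=100, fairness_threshold=0):
    """Backward induction: the proposer keeps the largest share that the
    responder will still accept."""
    acceptable = [o for o in range(total + 1)
                  if responder_accepts(o, fairness_threshold)]
    offer = min(acceptable)            # the smallest acceptable offer
    return total - offer, offer        # (proposer's share, responder's share)

print(best_proposal())                         # (99, 1): the game-theoretic prediction
print(best_proposal(fairness_threshold=40))    # (59, 41): with a strong fairness norm
```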

A striking experiment that reports how institutions and peculiarities of the particular environment matter is a recent study on fifteen small-scale societies (Henrich et al. 2001, 2005). It shows that predictions of game theory do not hold and that there is a wide variety of ways in which individuals coordinate their behaviour. Henrich et al. (2001, 2005) demonstrate that specific characteristics of different communities (e.g. economic organisation, structure of social interactions) influence the way in which individuals act in such games. The point is that existing institutions (i.e. conventions, norms, regularities in behaviour) matter and they are relevant if we want to understand how people coordinate and how new institutions evolve.

Also note that Schelling argues:

the mathematical structure of the payoff function should not be permitted to dominate the analysis. [. . .] there is a danger in too much abstractness: we change the character of the game when we drastically alter the amount of contextual detail [. . .]. It is often contextual detail that can guide the players to the discovery of a stable or, at least, mutually non-destructive outcome. [. . .] This corner of game theory is inherently dependent on empirical evidence.

(Schelling 1958: 252, emphasis added)

4. Pareto dominance and risk dominance

Pareto dominance

In a game where one equilibrium Pareto dominates the other equilibria, agents may consider the Pareto-dominant equilibrium as a focal point (Harsanyi and Selten 1988). Table AIV.1 represents a coordination game where two individuals have to choose an integer between 1 and 100. If they simultaneously choose the same number, x, they will each be paid x euros. If they fail to coordinate, they will get nothing.

In this game there are 100 pure strategy Nash equilibria; that is, every successful coordination counts as one. However, one of them is superior to the others. The argument is that individuals prefer 100 euros to other outcomes and, since players may expect the other player to reason in a similar fashion, the Pareto-dominant equilibrium (100, 100) may be considered as a focal point of the game. Nevertheless, since individuals do not know how the other player thinks, they cannot really know whether the other player will consider the Pareto-dominant equilibrium as a focal point or not.
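
Because Table AIV.1 is fully determined by the verbal description above, the equilibrium count and the Pareto ranking can be verified with a brute-force sketch:

```python
def payoff(i, j):
    """Both players are paid i euros if they pick the same number i; otherwise nothing."""
    return (i, i) if i == j else (0, 0)

numbers = range(1, 101)
equilibria = [(i, j) for i in numbers for j in numbers
              if payoff(i, j)[0] >= max(payoff(k, j)[0] for k in numbers)
              and payoff(i, j)[1] >= max(payoff(i, k)[1] for k in numbers)]

print(len(equilibria))                               # 100 pure-strategy Nash equilibria
print(max(equilibria, key=lambda e: payoff(*e)[0]))  # (100, 100) Pareto dominates the rest
```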

Risk dominance

If a11 > a21, b11 > b12, a22 > a12 and b22 > b21, then the game presented in Table AIV.2 is a coordination game with (D, D) and (Q, Q) as pure strategy Nash equilibria.

If the following condition holds, we say that the pure strategy Nash equilibrium (D, D) is risk dominant:

(a11 − a21)(b11 − b12) ≥ (a22 − a12)(b22 − b21)
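
The condition is easy to apply mechanically. A minimal helper (the payoff labels follow Table AIV.2, with a the row player’s and b the column player’s payoffs, D the first strategy and Q the second):

```python
def risk_dominant_equilibrium(a11, a12, a21, a22, b11, b12, b21, b22):
    """Harsanyi-Selten test for the 2x2 coordination game of Table AIV.2:
    (D, D) is risk dominant iff the product of the deviation losses at (D, D)
    is at least as large as the product of the deviation losses at (Q, Q)."""
    if (a11 - a21) * (b11 - b12) >= (a22 - a12) * (b22 - b21):
        return "(D, D)"
    return "(Q, Q)"
```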

To see the intuitive idea behind risk dominance, consider the stag hunt game presented in Table AIV.3.

First, let us show that this is a coordination game with two pure strategy Nash equilibria:

a11 = 10 > a21 = 8, b11 = 10 > b12 = 8, a22 = 7 > a12 = 0, b22 = 7 > b21 = 0

Let us assume that I expects II to play D and for this reason I plays his part in the (D, D) equilibrium. If I’s expectation is correct, both players’ payoff is 10 euros (assuming that payoffs are expressed in euros). Yet if I’s expectation does not hold and II chooses to play Q, then while II gets 8 euros, I receives nothing. That is, by choosing to play D, I takes the risk of losing 10 euros. Now assume that I expects II to choose Q, and for this reason plays his part in the (Q, Q) equilibrium. If his expectation holds, then both players get 7 euros. Yet if I’s expectation does not hold and II chooses to play D, then while I gets 8 euros, II gets nothing. That is, if I chooses to play Q he does not lose anything even if his expectations turn out to be incorrect. Since the same argument holds for II as well, we say that the equilibrium (Q, Q) is less risky; that is, (Q, Q) is the risk-dominant equilibrium.

That is, since (10 − 8)(10 − 8) = 4 < 49 = (7 − 0)(7 − 0), (Q, Q) is the risk-dominant equilibrium. On the other hand, both players prefer the (D, D) equilibrium to (Q, Q), and for this reason it is the Pareto-dominant equilibrium.
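
Plugging the Table AIV.3 payoffs into the helper defined above reproduces this calculation:

```python
# Table AIV.3 (stag hunt): a11 = b11 = 10, a21 = b12 = 8, a12 = b21 = 0, a22 = b22 = 7
print(risk_dominant_equilibrium(a11=10, a12=0, a21=8, a22=7,
                                b11=10, b12=8, b21=0, b22=7))   # (Q, Q), since 4 < 49
```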

5. Theory of focal points of Bacharach and Bernasconi

Bacharach and Bernasconi (1997) try to formalise the different ways in which strategies may be framed. They generalise Bacharach’s (1991) idea that players’ options are acts under descriptions and that they are distinguished by the concepts the players use to specify them. This model permits us to conceptualise the possible differences in agents’ perceptions and, for this reason, it is a step further in understanding how these differences may influence the outcome of a coordination game. Like Janssen (2001b), this model focuses on the attributes of the alternative strategies and how players of the game perceive these attributes. Yet, unfortunately, the model is only able to ‘predict’ the outcome of simple coordination games.

Consider the games in Figure AIV.1. In these coordination games individuals are supposed to pick the same object from a set of objects. Bacharach and Bernasconi (1997) predict that in Game I individuals would select the black circle. The principle is that individuals (ceteris paribus) prefer to pick an object that is rarer (the principle of rarity preference). It is more difficult to coordinate in Game II. Here, the principle of symmetry disqualification is needed. In order to find the odd alternative, one has to disqualify the symmetrical or similar objects. It is asserted that players, unable to discriminate among the symmetrical alternatives, will choose the one with different attributes, that is, ‘U’, which is the only vowel. Game III is more problematic, as the ‘odd’ option is not easily available. It is argued that in such cases there is a trade-off between availability and rarity and that, ceteris paribus, agents are more inclined to pick an attribute which is more available (the principle of availability preference). That is, agents would not base their reasoning on an attribute which is less likely to come to their co-player’s mind. In this game position, shape and size are the most obviously available attributes, and according to this principle, agents should limit their thinking to these. If they do, with some effort, they will see that one of the diamonds is slightly smaller than the others and that it should be picked (i.e. according to the principles of rarity preference, symmetry disqualification and availability preference).
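
The rarity-preference principle, at least, is simple enough to sketch in code. Since the figures are not reproduced here, the object descriptions below are merely illustrative stand-ins for Game I:

```python
from collections import Counter

def pick_by_rarity(objects):
    """A rough sketch of rarity preference: pick the object whose description
    occurs least often in the set (ties broken by first occurrence)."""
    counts = Counter(objects)
    return min(objects, key=lambda obj: counts[obj])

# Illustrative stand-in for Game I: several plain circles and a single black one.
game_one = [('circle', 'plain')] * 5 + [('circle', 'black')]
print(pick_by_rarity(game_one))   # ('circle', 'black')
```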

Bacharach and Bernasconi capture the idea that ‘in the pure co-ordination game, the player’s objective is to make contact with the other player through some imaginative process of introspection, of searching of shared clues’ (Schelling 1958: 211). However, from the point of view of explaining the emergence of conventions, their analysis is still in its infancy. Consider the picking game in Figure AIV.2. In this game, Bacharach and Bernasconi’s model ‘predicts’ that ‘the arrow’ should be picked, that is, according to the principle of rarity preference. Yet it seems reasonable to expect agents to choose the circle indicated by the arrow because of the connotations of ‘the arrow’. In fact, when I ask my students to play this game they choose the circle which is indicated by the arrow in order to coordinate with their co-players. Although ‘the arrow’ is a simple object, we do not perceive it as such, and the conventions concerning the arrow (e.g. the traffic convention that we follow the road which is indicated by the arrow) may influence the way we behave in this particular game. (Bacharach and Bernasconi also mention this problem.) It is evident that physical attributes of the objects cannot be all there is to focal points and coordination. Existing conventions might matter even in the context of simple objects. It may then be argued that the static models we have examined above fail to explain the possibility of successful coordination.

Figure AIV.1 Simple games of Bacharach and Bernasconi (1997).

Figure AIV.2 A coordination game.

It should be noted here that, from a formal point of view, there might be many ways in which a particular set of available strategies may be conceived or framed from the perspective of the agents. For this reason, formalising focal points is not an easy task. It should be evident that the above argument is not meant to dismiss the existing models of focal points; rather, it tries to explicate the limits within which these models should be evaluated and criticised.

6. Evolutionary stability and replicator dynamics

Let us, for a moment, assume that the driving game (Table 8.1) is played among deer in a certain area. A large population of deer uses a limited number of narrow deer paths. Every day every deer meets at least one other deer coming from the opposite direction. When they meet they have to simultaneously ‘decide’ whether to use the right- or left-hand side of the path. Since they do not ‘want’ to reduce speed they have an ‘incentive’ to coordinate. Simply, they are playing a game similar to the driving game. Should we expect them to bring about a deer-traffic convention?

To examine this question, let us further assume that every deer is predisposed to play a certain pure strategy. That is, a percentage of the deer population always plays ‘right’, while the rest is predisposed to play ‘left’. The deer that are successful in coordination are supposed to have more reproductive success and those that fail to coordinate will become extinct in time. Now consider the following scenario: in time the deer population somehow reaches a state where every deer is predisposed to play the same strategy (e.g. ‘right’). Could we say that this equilibrium point (right, right) would be stable? This scenario brings us to the evolutionary analysis of Maynard Smith and Price (1973) and Maynard Smith (1974, 1982) (see Weibull 1995; Mailath 1992, 1998; Michihiro 1997; Friedman 1991; Hofbauer and Sigmund 1988 for a general discussion of evolutionary game theory). They argued that the evolutionary stability of an equilibrium of this sort depends on whether the population may be invaded by a mutant strategy or not. An evolutionarily stable strategy (ESS) cannot be invaded by mutant strategies and for this reason evolutionarily stable equilibria of a coordination game may be considered as conventions in that they are self-supporting equilibria (for the notion of evolutionary stability see Binmore and Samuelson 1992; Blume, Kim and Sobel 1993; Hofbauer, Schuster and Sigmund 1979; Samuelson and Zhang 1992; Taylor and Jonker 1978; Wärneryd 1991).

In our case, it is easy to see that a mutant deer which is predisposed to play ‘left’ cannot invade a population of ‘right’-playing deer, since its average success against other deer will be less than that of a ‘right’-playing deer. For this reason, evolutionary game theorists argue that the two pure-strategy Nash equilibria of the driving game are evolutionarily stable.

(Note that in the standard analysis, it is assumed that no individual has an incentive to deviate from his Nash strategy. However, it is still possible that one may be indifferent between his Nash strategy and another strategy, given others’ strategies. Given other players’ actions, if a player strictly prefers his Nash strategy to other actions then this equilibrium is said to be a strict Nash equilibrium. In other words, if an individual has no alternative strategy that does as well as his equilibrium strategy (i.e. has no alternative best reply) given others’ strategies, then the resulting equilibrium is a strict Nash equilibrium. In fact, the Nash equilibria of the driving game are strict Nash equilibria, and every strict Nash equilibrium is an ESS. See Maynard Smith and Price 1973; Selten 1980.)
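
Since Table 8.1 is not reproduced in this appendix, the sketch below assumes the usual symmetric form of the driving game (payoff 1 when both drivers pick the same side, 0 otherwise) and checks that both pure equilibria are strict, and hence evolutionarily stable:

```python
STRATEGIES = ('left', 'right')

def u(mine, other):
    """Row player's payoff in the (assumed) driving game: 1 if both pick the
    same side, 0 otherwise.  The game is symmetric, so the column player's
    payoff at (s1, s2) is u(s2, s1)."""
    return 1 if mine == other else 0

def is_strict_nash(s1, s2):
    """Strict Nash: every unilateral deviation makes the deviator strictly worse off."""
    return (all(u(d, s2) < u(s1, s2) for d in STRATEGIES if d != s1)
            and all(u(d, s1) < u(s2, s1) for d in STRATEGIES if d != s2))

print(is_strict_nash('left', 'left'), is_strict_nash('right', 'right'))  # True True
print(is_strict_nash('left', 'right'))                                   # False
```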

However, the notion of ESS does not provide any basis for arguing that the deer population will be able to reach one of these evolutionarily stable equilibrium points. It does not help us solve the equilibrium selection problem in the context of the driving game; rather, it justifies the idea that if a population of agents (human beings, or deer) is somehow able to coordinate their actions, this equilibrium would be self-fulfilling. Although it tries to capture the dynamics of an evolutionary process, ESS is a static concept and does not explicate why a certain ESS would be selected among others (Mailath 1992: 267). The dynamics behind ESS and its refinements (e.g. stochastically stable strategies) have been studied with different models that vary in their formulations of the evolutionary dynamics. Only a few of these are examined here. First we will focus on the replicator dynamics, which works at the population level, and then we will discuss learning models.

Replicator dynamics

The study of natural selection is usually associated with a mechanism known as replicator dynamics. The term ‘replicator’ comes from Dawkins (1976). For the original statement of ‘replicator dynamics’ see Taylor and Jonker (1978). The replicator dynamics captures the idea that the reproductive success of a certain strategy is a function of its success in games. For example, concerning the deer population, this means that if a certain strategy does better than the population average, the type of deer that is predisposed to play this strategy will grow at a greater rate than the others. Replicator dynamics has various alternative formulations that need not be examined here (see Fudenberg and Levine 1998: 51–99; Samuelson 1998: 63–75; Weibull 1995: 69–119). Considered from the point of view of pure coordination games, the replicator dynamics produces two interesting results. To discuss these results, consider the telephone game in Table AIV.4. The telephone game has the following scenario: Player I and Player II are having a telephone conversation. In the middle of their conversation the telephone line is cut off for no specified reason. They have to decide whether to wait for the other to call or to call back. If they both call they cannot communicate, for the lines will be busy. If they both wait they cannot continue their conversation either. This is an asymmetric coordination game, for they have to choose different strategies in order to coordinate. Now we may ask whether any convention may emerge out of this coordination problem if a finite population of agents is randomly matched to play this game repeatedly. The first interesting result of the replicator dynamics approach is that if the population of agents is homogeneous then the only stable equilibrium is the mixed-strategy equilibrium, where agents randomise (Fudenberg and Levine 1998: 56–58). That is, if two players who are randomly matched are from the same population, that is, are identical, then repeated play will not bring about a convention. The second interesting result concerns replicator models with distinct populations. If the players of the game are drawn from distinct populations, then the pure strategy Nash equilibria of this game ((wait, call back), (call back, wait)) become stable, but the mixed-strategy equilibrium is not. (It should also be noted here that the outcome of the replicator dynamics depends on the initial state of the population in the context of the stag hunt game (see Appendix VI.6). That is, if most agents are initially playing the Pareto-dominant strategy, the replicator dynamics converges to the Pareto-dominant equilibrium. Yet if most of the agents are playing the risk-dominant strategy, then the process converges to the risk-dominant equilibrium. See Samuelson 1998: 79–80.)
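
The payoffs of Table AIV.4 are not reproduced here, so the sketch below assumes the simplest version of the telephone game: a payoff of 1 when the two matched players choose different actions (one waits, one calls back) and 0 otherwise. A crude Euler discretisation of the replicator equation is enough to illustrate both results:

```python
def replicator_one_population(x, steps=2000, dt=0.01):
    """Single population; x is the share playing 'wait'.  The payoff of 'wait'
    is the chance of meeting a caller (1 - x); the payoff of 'call back' is x."""
    for _ in range(steps):
        f_wait, f_call = 1 - x, x
        f_bar = x * f_wait + (1 - x) * f_call
        x += dt * x * (f_wait - f_bar)        # replicator equation, Euler step
    return x

def replicator_two_populations(x, y, steps=2000, dt=0.01):
    """Distinct 'caller' and 'receiver' populations; x and y are the shares
    playing 'wait' in each population."""
    for _ in range(steps):
        fx_wait, fx_bar = 1 - y, x * (1 - y) + (1 - x) * y
        fy_wait, fy_bar = 1 - x, y * (1 - x) + (1 - y) * x
        x += dt * x * (fx_wait - fx_bar)
        y += dt * y * (fy_wait - fy_bar)
    return x, y

print(round(replicator_one_population(0.6), 2))                     # ~0.5: only the mixed equilibrium is stable
print([round(v, 2) for v in replicator_two_populations(0.6, 0.4)])  # ~[1.0, 0.0]: a pure convention emerges
```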

The distinct population model may be considered as an approximation to a situation where players can distinguish between ‘the caller’ and ‘the receiver’. Although replicator dynamics is not exactly designed to represent such cases, it implies that if players can recognise each other in some way (e.g. label each other) then the telephone coordination problem may bring about one of the alternative conventions: original caller calls back (call back, wait); or receiver calls back (wait, call back). Note that the model remains silent concerning equilibrium selection.

Replicator dynamics works at the population level and ostensibly leaves no role for the individual players, as suggested by the ‘pre-programmed behaviour’ assumption. For this reason, replicator dynamics is generally considered to be inappropriate in economic contexts, and it is argued that the dynamics of an economic process should instead be based on a model of individual learning (Fudenberg and Levine 1998; Kandori, Mailath and Rob 1993; Young 1998). If successful strategies grow faster, then stable coordination equilibria may be reached and conventions may emerge. Yet models with replicator dynamics do not explain why successful strategies grow faster. Such models study the dynamics at the aggregate level and are silent about which individual mechanisms bring about the consequences at the aggregate level (Fudenberg and Levine 1998: 52). Although we may still argue that replicator dynamics points to the possibility that there are certain mechanisms that may bring about conventions, the need to study individual mechanisms is obvious. There could be many mechanisms that are consistent with replicator dynamics, such as learning and imitation. In the following pages we focus on models that study individual learning.

7. Local interaction

Local interaction raises another question about conventions. Since any of the alternative conventions is likely to emerge in the short run, local interaction may result in the emergence of different conventions in different localities. From a real-world perspective, this is quite reasonable for very large populations that are spatially separated, yet it is not always reasonable to argue that alternative conventions are likely to exist within areas such as cities or countries. While countries may abide by different conventions, within a certain country people usually abide by a uniform convention. Yet in some countries different conventions co-exist in different areas. Could these models give any insight concerning the global diversity of conventions while retaining the argument that a single convention is very likely to emerge in local areas?

Young (1996, 2001) analyses the issue of co-existence of conventions by extending his model of the driving game to consider the interactions among different countries that use different driving conventions (e.g. (left, left) or (right, right)). The model is a spatial model similar in some respects to Schelling’s residential segregation model (see Chapter 4). Every country has a limited number of neighbouring countries and there is constant traffic across the borders. For this reason, conflicting driving conventions are costly. Let us say that in each time period one country considers whether to switch to the other driving convention by observing the existing conventions of its neighbours. For example, if country A abides by the left-convention and all its neighbours abide by the right-convention, it is ‘rational’ to switch to the right-convention. Yet if its neighbours abide by different conventions then it has to ‘assess’ the costs of switching. Note here that Young abstracts from the technological costs of switching a convention and argues that this would not change the argument. If we consider the countries as nodes in a network, then this model implies that the number of connections one country has with other nodes is the decisive factor, given the costs of abiding by different conventions. If, for example, two different parts of the network are densely connected internally but have only a weak link with each other, then the model says that two different conventions may be adopted in these parts of the network. Yet if all countries are densely connected then we may expect a uniform convention. It is also possible to bring random shocks into this model. Let us assume that one or more countries switch conventions occasionally, whatever the costs (e.g. France switched to the ‘right’ convention after the revolution and imposed this convention on some of the countries it occupied; see Young 1996). If there are such idiosyncratic shocks, then in any connected network of countries (i.e. one in which it is possible to drive from any country to any other) it is more likely that all countries adopt the same convention. Moreover, if it exists, the risk-dominant convention is likely to drive other conventions out.
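
This is not Young’s model itself (switching costs and stochastic shocks are left out), but a bare-bones sketch of the network intuition: each period a randomly chosen country adopts the convention used by most of its neighbours. Two densely connected blocs joined by a single weak link can then sustain different conventions indefinitely:

```python
import random

def majority_dynamics(graph, convention, rounds=200, seed=0):
    """graph maps each country to its neighbours; convention maps each country
    to 'left' or 'right'.  A country keeps its convention when its
    neighbourhood is evenly split."""
    rng = random.Random(seed)
    convention = dict(convention)
    for _ in range(rounds):
        country = rng.choice(list(graph))
        votes = [convention[n] for n in graph[country]]
        if votes.count('right') > votes.count('left'):
            convention[country] = 'right'
        elif votes.count('left') > votes.count('right'):
            convention[country] = 'left'
    return convention

# Two tightly knit blocs, {0, 1, 2} and {3, 4, 5}, joined only by the link 2-3.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
         3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
start = {0: 'left', 1: 'left', 2: 'left', 3: 'right', 4: 'right', 5: 'right'}
print(majority_dynamics(graph, start))   # each bloc keeps its own convention
```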

While the above model analyses exclusive conventions (in the sense that individuals cannot adopt two or more conventions at a time), not all conventions are like this. Consider the telephone game. It might be that one country adopts the caller-calls-back convention while another adopts the receiver-calls-back convention. For these types of conventions, individuals may switch between conventions at some cost (e.g. learning) when they travel. Goyal and Janssen (1997) analyse this type of non-exclusive convention and examine equilibrium selection in such environments. (They use a deterministic model with no learning. In this model individuals form their expectations given the existing local information. Yet they argue that their model is robust to several learning dynamics.) The model implies that, given that the costs of adopting both conventions are not very high, both conventions might prevail and co-exist. In a symmetric coordination game the result of their analysis is that both conventions co-exist under all conditions. They further argue that if one convention is better than the other, then, whatever the costs, the Pareto-optimal convention is adopted in the long run (note here that this is the result for the case where the Pareto-dominant convention is also risk dominant). But if Pareto dominance and risk dominance are in conflict then the following results hold: if the costs are low then the Pareto-dominant convention will eventually be adopted in both countries; if costs are high then both countries will eventually adopt the risk-dominant convention. Yet if costs are at an intermediate level then the two conventions will co-exist. That is, while different conventions will be adopted in different countries, some individuals will undertake the costs of adopting both conventions. Goyal and Janssen’s model indicates that the cost of adopting a certain form of behaviour may be relevant in explaining the emergence and persistence of conventions.

8. Rationality and learning (Goyal and Janssen 1996)

An important question is whether rational individuals who learn from experience might arrive at a coordination equilibrium, or whether we could better explain the emergence of a convention without restricting individuals’ expectations and learning behaviour with a certain form of common knowledge assumption. Goyal and Janssen (1996), who study similar questions, argue that rationality alone does not suffice to explain coordination even if individuals are able to learn. Consider a setting where individuals are randomly paired to play the driving game repeatedly and are able to observe the whole history of play, including the payoffs. Assume that every time an individual is confronted with another driver he evaluates the previous actions of the other players and bases his expectations on this. The question is whether he and the other players could achieve concordant mutual expectations by evaluating the information gathered from previous plays.

A similar model was examined in Chapter 6. Schotter assumed that when individuals are close to the absorbing points (e.g. 95 per cent plays ‘left’) they will consider this as an indication that everybody will choose a certain strategy (e.g. ‘left’) with unit probability. The need for this assumption indicates that rational (Bayesian) learning alone does not guarantee coordination; additional assumptions concerning learning behaviour have to be imposed on the model. Moreover, agents are assumed to base their expectations on what happened in the past. They expect that what happened in the past is likely to happen in the future. Hence, there is an implicit assumption that agents know that the other agents are forming their expectations in a similar way. Otherwise, there is no rational reason for them to expect that what happened in the past will happen in the future. Similar things can be said for other learning models. Goyal and Janssen (1996) discuss Crawford and Haller (1990) and Kalai and Lehrer (1993a,b), who argue that rational individuals can learn to coordinate. While Crawford and Haller assume that there are optimal rules for learning, Kalai and Lehrer put certain restrictions on individuals’ prior beliefs. Both assumptions imply that these models put additional restrictions on learning behaviour and that, for this reason, rationality alone cannot ensure coordination. More specifically, Goyal and Janssen (1996) argue that even if there are optimal rules for learning how to coordinate, these rules are not unique. That is, if agents’ learning behaviour is not coordinated at the outset they might not be able to coordinate. Similarly, Kalai and Lehrer’s model indicates that agents’ prior beliefs have to be coordinated to ensure their success in coordination. Both of these models imply that pre-existing conventions are necessary for individuals’ success in coordination.
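
To make the kind of restriction at issue concrete, here is a sketch of one such learning rule (a best reply to the empirical frequency of the opponent’s past play, i.e. fictitious play, with random tie-breaking). Note that it already builds in the assumption that past behaviour predicts future behaviour, which is exactly the sort of restriction discussed above:

```python
import random

def fictitious_play_driving(rounds=50, seed=1):
    """Two players repeatedly play the driving game; each best-responds to the
    empirical frequency of the other's past choices, breaking ties at random."""
    rng = random.Random(seed)
    counts = {1: {'left': 0, 'right': 0}, 2: {'left': 0, 'right': 0}}
    history = []
    for _ in range(rounds):
        choices = {}
        for me, other in ((1, 2), (2, 1)):
            seen = counts[other]
            if seen['left'] != seen['right']:
                choices[me] = max(seen, key=seen.get)   # match the other's more frequent choice
            else:
                choices[me] = rng.choice(('left', 'right'))
        for player, choice in choices.items():
            counts[player][choice] += 1
        history.append((choices[1], choices[2]))
    return history

print(fictitious_play_driving()[-5:])   # play typically locks in on one side after a few rounds
```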

Goyal and Janssen generalise this argument by employing a more sophisticated learning model. The idea behind their model is the following: in order to ensure coordination in the next period, every agent has to take into account the previous plays of the other players. However, since every player knows that the other players are using the information gathered in previous plays to form their expectations for the next period, in order to ensure coordination every player has to know how the others are forming their expectations. The problem is that the outcome of the previous encounters does not restrict the types of hypotheses they might entertain about each other. In other words, as Goyal and Janssen argue, at any point in time one may entertain an infinite number of hypotheses about others that are consistent with one’s existing information (also see Foster and Young 2001). Thus, unless the modeller restricts the number of these hypotheses, rationality does not ensure that players learn how to coordinate. Again, from the perspective of explaining real-world coordination problems, this means that knowledge of pre-existing conventions is necessary to explain how coordination is achieved.

Source: Aydinonat N. Emrah (2008), The Invisible Hand in Economics: How Economists Explain Unintended Social Consequences, Routledge; 1st edition.
