Even though incentive theory has been developed under the standard assumption that all players are rational, it can take into account whatever bounded rationality assumption one may wish to choose. However, there is an infinity of possible theories of bounded rationality and, in each case, the modeller must derive specific optimal contracts. Let us consider a few examples that allow us to introduce bounded rationality without perturbing the basic lessons of incentive theory too much.

### 1. Trembling-Hand Behavior

Let us come back to the canonical model of chapter 2. We will assume that the agent is *ex ante* rational when he accepts the contract but makes a mistake with some probability when he chooses the contract *ex post*. *Ex ante* rationality implies that the agent anticipates the impact of these future errors on his expected utility at the time of acceptance.

This possibility of an *ex post* irrational behavior only matters for the efficient type when the size of the mistakes is small enough. Recall that, in the standard solution of chapter 2, only the efficient type is indifferent between taking his contract and taking the contract of the inefficient type. The inefficient agent strictly prefers his contract and will continue to do so as long as mistakes are small enough.

Let us denote by $\varepsilon$ the error term in the efficient agent's choice. The latter agent chooses the contract $(\underline{t}, \underline{q})$ when

$$\underline{t} - \underline{\theta}\,\underline{q} \geq \bar{t} - \underline{\theta}\bar{q} + \varepsilon, \qquad (9.91)$$

i.e., with probability $G(\underline{t} - \underline{\theta}\,\underline{q} - \bar{t} + \underline{\theta}\bar{q})$, where $G(\cdot)$ is the cumulative distribution of $\varepsilon$ on some centered interval $[-\bar{\varepsilon}, \bar{\varepsilon}]$. The density of this random variable is denoted by $g(\cdot)$, with $g(0) > 0$. Moreover, we will assume that the monotone hazard rate property is satisfied, i.e., $\frac{G(\varepsilon)}{g(\varepsilon)}$ is increasing in $\varepsilon$. When $\bar{\varepsilon}$ is less than $\Delta\theta\,\underline{q}$, the inefficient agent does not make any error and chooses the right contract with probability one. His acceptance is thus ensured when

$$\bar{t} - \bar{\theta}\bar{q} \geq 0. \qquad (9.92)$$
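As a concrete specification of the error distribution (our own illustration, not from the text), take $\varepsilon$ uniform on $[-\bar{\varepsilon}, \bar{\varepsilon}]$:

$$G(\varepsilon) = \frac{\varepsilon + \bar{\varepsilon}}{2\bar{\varepsilon}}, \qquad g(\varepsilon) = \frac{1}{2\bar{\varepsilon}}, \qquad \frac{G(\varepsilon)}{g(\varepsilon)} = \varepsilon + \bar{\varepsilon},$$

so that $g(0) = \frac{1}{2\bar{\varepsilon}} > 0$ and the ratio $G/g$ is strictly increasing, as the monotone hazard rate property requires.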
The principal's problem then becomes

$$\max_{\{(\underline{t}, \underline{q}); (\bar{t}, \bar{q})\}} \nu\left[G(\cdot)\,(S(\underline{q}) - \underline{t}) + (1 - G(\cdot))\,(S(\bar{q}) - \bar{t})\right] + (1 - \nu)(S(\bar{q}) - \bar{t}) \qquad (9.93)$$

subject to (9.92), where $G(\cdot)$ is evaluated at $\underline{t} - \underline{\theta}\,\underline{q} - \bar{t} + \underline{\theta}\bar{q}$.

Introducing $\hat{\varepsilon}$ as the greatest value of $\varepsilon$ such that the efficient type's incentive constraint (9.91) is satisfied, i.e., $\hat{\varepsilon} = \underline{t} - \underline{\theta}\,\underline{q} - (\bar{t} - \underline{\theta}\bar{q})$, this problem is rewritten as

$$\max_{\{\underline{q}, \bar{q}, \hat{\varepsilon}\}} \nu G(\hat{\varepsilon})\left(S(\underline{q}) - \underline{\theta}\,\underline{q} - \Delta\theta\bar{q} - \hat{\varepsilon}\right) + \left(1 - \nu G(\hat{\varepsilon})\right)\left(S(\bar{q}) - \bar{\theta}\bar{q}\right),$$
since (9.92) is necessarily binding at the optimum.

We index the optimal contract by a superscript *BR*, meaning *bounded rationality*.

**Proposition 9.8:** *With a trembling-hand behavior, the optimal contract entails no output distortion for the efficient type, $\underline{q}^{BR} = \underline{q}^*$, and a downward distortion for the inefficient type, $\bar{q}^{BR} < \bar{q}^*$, such that*

$$S'(\bar{q}^{BR}) = \bar{\theta} + \frac{\nu G(\hat{\varepsilon})}{1 - \nu G(\hat{\varepsilon})}\,\Delta\theta, \qquad (9.94)$$

*where $\hat{\varepsilon}$ is given by*

$$S(\underline{q}^*) - \underline{\theta}\,\underline{q}^* - \left(S(\bar{q}^{BR}) - \underline{\theta}\bar{q}^{BR}\right) = \hat{\varepsilon} + \frac{G(\hat{\varepsilon})}{g(\hat{\varepsilon})}. \qquad (9.95)$$

The left-hand side of (9.95) is strictly positive, because it is the difference between the first-best surplus and what would be obtained if the efficient agent had made a mistake and taken the contract of an inefficient one. Since $G(\cdot)$ satisfies the monotone hazard rate property, $\varepsilon + \frac{G(\varepsilon)}{g(\varepsilon)}$ is increasing and $\hat{\varepsilon}$ is uniquely defined. For an interior solution such that $\hat{\varepsilon} < \bar{\varepsilon}$, everything happens as if the efficient type was less likely, since he selects his own contract only with probability $G(\hat{\varepsilon}) < 1$. The rent differential given up to the efficient agent is less costly than in a model with no mistake. Hence, $\bar{q}^{BR} > \bar{q}^{SB}$, and the output distortion is less important than without the mistake.
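To see these optimality conditions at work, here is a numerical sketch that is entirely our own: the surplus function $S(q) = 2\sqrt{q}$, the parameter values, and the uniform error distribution are illustrative assumptions, not from the text. It solves by fixed-point iteration the distorted marginal-surplus condition for the inefficient type together with the condition equating the first-best/mistake surplus gap to $\hat{\varepsilon} + G(\hat{\varepsilon})/g(\hat{\varepsilon})$, and checks that the distortion is milder than in the standard second best.

```python
import math

# Illustrative primitives (our own choices, not from the text):
# surplus S(q) = 2*sqrt(q), efficient cost theta = 1, inefficient cost
# theta_bar = 2, probability of the efficient type nu = 0.5, and a
# uniform error on [-eps_bar, eps_bar], so G(e) = (e + eps_bar)/(2*eps_bar)
# and G(e)/g(e) = e + eps_bar (monotone hazard rate property holds).
theta, theta_bar, nu = 1.0, 2.0, 0.5
d_theta = theta_bar - theta
eps_bar = 0.3

S = lambda q: 2.0 * math.sqrt(q)

# First-best output of the efficient type: S'(q) = 1/sqrt(q) = theta.
q_star = 1.0 / theta ** 2
first_best_surplus = S(q_star) - theta * q_star

# Standard second best (no mistakes): S'(q_sb) = theta_bar + nu/(1-nu)*d_theta.
q_sb = 1.0 / (theta_bar + nu / (1.0 - nu) * d_theta) ** 2

# Bounded-rationality contract: solve jointly
#   S'(q_br) = theta_bar + nu*G(e)/(1 - nu*G(e)) * d_theta
#   first_best_surplus - (S(q_br) - theta*q_br) = e + G(e)/g(e)
# by damped fixed-point iteration on e = eps_hat.
eps_hat = eps_bar / 2.0
for _ in range(500):
    G = min(1.0, max(0.0, (eps_hat + eps_bar) / (2.0 * eps_bar)))
    q_br = 1.0 / (theta_bar + nu * G / (1.0 - nu * G) * d_theta) ** 2
    mistake_surplus = S(q_br) - theta * q_br
    # with uniform errors, e + G(e)/g(e) = 2e + eps_bar
    eps_new = (first_best_surplus - mistake_surplus - eps_bar) / 2.0
    eps_hat += 0.5 * (eps_new - eps_hat)  # damped update

print(f"q*      = {q_star:.4f}")
print(f"q_sb    = {q_sb:.4f}")
print(f"q_br    = {q_br:.4f}")
print(f"eps_hat = {eps_hat:.4f}")
```

With these numbers the iteration settles at an interior $\hat{\varepsilon}$ and an inefficient output strictly above the second-best level, illustrating that the output distortion shrinks when the efficient type may tremble.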

**Remark:** The reader will have recognized the similarity of this section with the model of section 3.4. There, random decisions did not affect the efficient type's incentive constraint, but instead they affected the inefficient type's participation constraint.

### 2. Satisficing Behavior

Consider a three-type example along the lines of section 3.1, with types $\underline{\theta} < \hat{\theta} < \bar{\theta}$ and a general cost function $C(q, \theta)$. The incentive constraints of the three types are written respectively as

$$\underline{t} - C(\underline{q}, \underline{\theta}) \geq \hat{t} - C(\hat{q}, \underline{\theta}) \quad \text{and} \quad \underline{t} - C(\underline{q}, \underline{\theta}) \geq \bar{t} - C(\bar{q}, \underline{\theta}),$$
$$\hat{t} - C(\hat{q}, \hat{\theta}) \geq \underline{t} - C(\underline{q}, \hat{\theta}) \quad \text{and} \quad \hat{t} - C(\hat{q}, \hat{\theta}) \geq \bar{t} - C(\bar{q}, \hat{\theta}),$$
$$\bar{t} - C(\bar{q}, \bar{\theta}) \geq \underline{t} - C(\underline{q}, \bar{\theta}) \quad \text{and} \quad \bar{t} - C(\bar{q}, \bar{\theta}) \geq \hat{t} - C(\hat{q}, \bar{\theta}).$$

Suppose that the agent has a *satisficing behavior* and only looks at the nearby contracts, which are ordered as $(\underline{t}, \underline{q})$, $(\hat{t}, \hat{q})$, $(\bar{t}, \bar{q})$. Starting from an initial contract choice that may be suboptimal, the agent moves to another contract choice if the nearby contract yields a higher payoff.

Then it is immediately apparent that, if the Spence-Mirrlees property is satisfied, the agent will discover the optimal contract for him, and, neglecting temporary misallocations, the theory can proceed as if the agent was fully rational. Indeed, whatever his initial choice in the menu, he will move in the right direction in this set.

For example, let us take the case where $C(q, \theta) = \theta q$. If the agent has type $\bar{\theta}$ and starts from the contract $(\underline{t}, \underline{q})$, he moves to $(\hat{t}, \hat{q})$ if and only if $\hat{t} - \bar{\theta}\hat{q} \geq \underline{t} - \bar{\theta}\,\underline{q}$, which can be rewritten as $\hat{t} - \underline{t} \geq \bar{\theta}(\hat{q} - \underline{q})$. This last inequality holds, because both $\hat{q} \leq \underline{q}$ and the $\hat{\theta}$-incentive compatibility constraint $\hat{t} - \underline{t} \geq \hat{\theta}(\hat{q} - \underline{q})$ are satisfied. In a second step of the tâtonnement process, the $\bar{\theta}$-agent will move to contract $(\bar{t}, \bar{q})$, because by the incentive compatibility of the contract the following inequality holds: $\bar{t} - \bar{\theta}\bar{q} \geq \hat{t} - \bar{\theta}\hat{q}$.
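This tâtonnement can be simulated directly. The sketch below is our own: the menu transfers and outputs, and the type costs, are illustrative numbers chosen so that all incentive constraints of the menu hold; the `satisfice` routine implements the local-search behavior described above.

```python
# A minimal simulation of satisficing contract choice (our own
# illustrative menu; the numbers are not from the text). Types
# 1 < 2 < 3 with cost C(q, theta) = theta*q, and a menu (t, q)
# ordered by decreasing output, built so incentive compatibility holds.
menu = [(1.7, 1.0), (1.2, 0.5), (0.6, 0.2)]  # [(t, q), (t_hat, q_hat), (t_bar, q_bar)]
types = [1.0, 2.0, 3.0]

def payoff(contract, theta):
    t, q = contract
    return t - theta * q

def satisfice(start, theta):
    """Greedy local search: move to a neighboring contract in the
    ordered menu whenever it yields a strictly higher payoff."""
    i = start
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(menu)]
        best = max(neighbors, key=lambda j: payoff(menu[j], theta))
        if payoff(menu[best], theta) > payoff(menu[i], theta):
            i = best
        else:
            return i

# With the Spence-Mirrlees property satisfied (marginal cost increasing
# in theta), every type ends at a contract giving him his maximal payoff
# in the menu, whatever his starting point.
for theta in types:
    best_payoff = max(payoff(c, theta) for c in menu)
    for start in range(len(menu)):
        rest = satisfice(start, theta)
        assert abs(payoff(menu[rest], theta) - best_payoff) < 1e-9
print("every type reaches an optimal contract from any starting point")
```

Movement requires a strict improvement, so an indifferent agent stops; he may then rest at a contract that merely ties with his optimal one, which is payoff-equivalent.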

However, if the Spence-Mirrlees property is not satisfied, the agent may get stuck at a nonoptimal contract at some point in the tâtonnement process. The principal might then want to take those potential inefficiencies into account (which depend on the starting choices) when structuring a menu. In an extreme case, he might choose a single bunching contract that gives up screening but avoids these temporary inefficiencies.
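By contrast, when the Spence-Mirrlees property fails, the agent's payoffs along the ordered menu need not be single-peaked, and the same local search can stall. A stylized sketch (the payoff values are posited directly for illustration, not derived from a particular cost function):

```python
# When Spence-Mirrlees fails, local search can get stuck. We posit
# directly (a stylized illustration, not from the text) a type whose
# payoffs across the three ordered contracts are non-monotone around
# the optimum: best at position 2, but with a local peak at position 0.
payoffs = [0.5, 0.2, 0.6]

def satisfice_path(start):
    """Greedy local search over neighboring menu positions."""
    i = start
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(payoffs)]
        best = max(neighbors, key=lambda j: payoffs[j])
        if payoffs[best] > payoffs[i]:
            i = best
        else:
            return i

print(satisfice_path(0))  # stuck at contract 0, although contract 2 is best
print(satisfice_path(2))  # an agent starting at 2 stays at the optimum
```

An agent starting at the first contract never crosses the payoff dip at the middle contract, which is exactly the kind of temporary inefficiency the principal may wish to anticipate when structuring the menu.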

There are many examples where an approach that takes the agent’s bounded rationality into account could be fruitful. An obvious case is when the choice is made by a group of agents (a family, a firm, or an organization) that does not reach an efficient collective decision mechanism.

### 3. Costly Communication and Complexity

The complexity of information places some limits on the possibility of its full communication and utilization. Costs of transmission, storage, and information processing are among the factors that could cause a principal to limit the potential for information flows between the agent and himself.

An earlier trend in the mechanism design literature dealt with the size of communication spaces needed to implement a particular allocation while ignoring incentive constraints (see Mount and Reiter 1974, and Hurwicz 1977, among others). The analysis of the interaction of incentive and communication constraints is a difficult topic. Green and Laffont (1986a; 1987) introduced data compression techniques and dimensionality restrictions in adverse selection environments. See also Reichelstein and Reiter (1988). Green and Laffont (1986b) analyzed how exogenous constraints on communication may invalidate the revelation principle. Legros and Newman (1999) analyzed incentive problems where agents have to secure their communication channels with the principal. For multiagent settings, see the work of Melumad, Mookherjee, and Reichelstein (1997) and the references therein. Various papers also explicitly introduced the cost of including multiple contingencies in contracts (see Dye 1985, Allen and Gale 1992, and Anderlini and Felli 2000 for a recent synthesis).

Source: Laffont Jean-Jacques, Martimort David (2002), *The Theory of Incentives: The Principal-Agent Model*, Princeton University Press.