# Moral Hazard: More than Two Levels of Performance

We now extend our previous 2 × 2 model to allow for more than two levels of performance. We consider a production process where $n$ possible outcomes can be realized. Those performances can be ordered so that $q_1 < q_2 < \cdots < q_i < \cdots < q_n$. We denote the principal's return in each of those states of nature by $S_i = S(q_i)$. In this context, a contract is an $n$-tuple of payments $\{(t_1,\ldots,t_n)\}$. Also, let $\pi_{ik}$ be the probability that production $q_i$ takes place when the effort level is $e_k$.

We assume that $\pi_{ik} > 0$ for all pairs $(i,k)$ with $i \in \{1,\ldots,n\}$ and $k \in \{0,1\}$. Finally, we keep the assumption that only two levels of effort are feasible, i.e., $e \in \{0, 1\}$. We still denote $\Delta\pi_i = \pi_{i1} - \pi_{i0}$.

### 1. Limited Liability

Consider first the limited liability model of section 4.3. If the optimal contract induces a positive effort, it solves the following program:

$$(P): \quad \max_{\{(t_1,\ldots,t_n)\}} \sum_{i=1}^{n} \pi_{i1}(S_i - t_i)$$

subject to

$$\sum_{i=1}^{n} \pi_{i1} t_i - \psi \geq 0, \qquad (4.44)$$

$$\sum_{i=1}^{n} (\pi_{i1} - \pi_{i0}) t_i \geq \psi, \qquad (4.45)$$

$$t_i \geq 0 \quad \text{for all } i \in \{1,\ldots,n\}, \qquad (4.46)$$

where $\psi$ denotes the agent's disutility of effort.

(4.44) is the agent's participation constraint. (4.45) is his incentive constraint. (4.46) are all the limited liability constraints, which we simplify, with respect to section 4.3, by assuming that the agent cannot be given a negative payment, i.e., the agent has no assets of his own before starting the relationship with the principal.

First, note that the participation constraint (4.44) is implied by the incentive constraint (4.45) and the limited liability constraints (4.46). Indeed, we have

$$\sum_{i=1}^{n} \pi_{i1} t_i - \psi \geq \sum_{i=1}^{n} \pi_{i0} t_i \geq 0,$$

where the first inequality follows from (4.45) and the second from (4.46).

Hence, we can neglect the participation constraint (4.44) in the optimization of problem (P).

Denoting the multiplier of (4.45) by $\lambda$ and the respective multipliers of (4.46) by $\xi_i$, the first-order conditions of program (P) lead to

$$-\pi_{i1} + \lambda \Delta\pi_i + \xi_i = 0 \quad \text{for all } i \in \{1,\ldots,n\}, \qquad (4.47)$$

with the slackness conditions $\xi_i t_i = 0$ for each $i$ in $\{1,\ldots,n\}$.

For $i$ such that the second-best transfer $t_i^{SB}$ is strictly positive, $\xi_i = 0$, and we must have $\lambda = \frac{\pi_{i1}}{\Delta\pi_i}$ for any such $i$. If the ratios $\frac{\Delta\pi_i}{\pi_{i1}}$ are all different, there exists a single index $j$ such that $\frac{\Delta\pi_j}{\pi_{j1}}$ is the highest possible ratio. Then, the structure of the optimal payments is bang-bang. The agent receives a strictly positive transfer only in this particular state of nature $j$, and this payment is such that the incentive constraint (4.45) is binding, i.e., $t_j^{SB} = \frac{\psi}{\Delta\pi_j}$. In all other states, the agent receives no transfer, and $t_i^{SB} = 0$ for all $i \neq j$. Finally, the agent gets a strictly positive ex ante limited liability rent that is worth $\pi_{j1} t_j^{SB} - \psi = \frac{\pi_{j0}}{\Delta\pi_j}\psi$.
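This bang-bang structure lends itself to a direct computation: find the state with the highest ratio $\frac{\Delta\pi_i}{\pi_{i1}}$, pay $\frac{\psi}{\Delta\pi_j}$ there, and pay zero elsewhere. A minimal Python sketch (the function name and the example probabilities are illustrative assumptions, not taken from the text):

```python
def limited_liability_contract(pi0, pi1, psi):
    """Second-best transfers under limited liability (risk-neutral agent).

    pi0, pi1: output probabilities under low and high effort.
    psi: disutility of the high effort.
    Pays t_j = psi / dpi_j only in the state j that maximizes the
    likelihood ratio dpi_i / pi_i1, and zero in every other state.
    """
    ratios = [(p1 - p0) / p1 for p0, p1 in zip(pi0, pi1)]
    j = max(range(len(pi1)), key=lambda i: ratios[i])   # most informative state
    t = [0.0] * len(pi1)
    t[j] = psi / (pi1[j] - pi0[j])                      # binding incentive constraint
    rent = sum(p * x for p, x in zip(pi1, t)) - psi     # ex ante limited liability rent
    return t, rent

# Illustrative numbers: effort shifts probability mass toward high output.
pi0, pi1, psi = [0.5, 0.3, 0.2], [0.2, 0.3, 0.5], 1.0
t, rent = limited_liability_contract(pi0, pi1, psi)
print(t, rent)   # reward only in state 3; rent = psi * pi30 / dpi3 = 2/3
```

The computed rent equals $\frac{\pi_{j0}}{\Delta\pi_j}\psi$, as derived above.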

The important point here is that the agent is rewarded in the state of nature that is the most informative about the fact that he has exerted a positive effort.

Indeed, $\frac{\Delta\pi_i}{\pi_{i1}}$ can be interpreted as a likelihood ratio. The principal therefore uses a maximum likelihood ratio criterion to reward the agent. The agent is only rewarded when this likelihood ratio is maximal. Like an econometrician, the principal tries to infer from the observed output what has been the parameter (effort) underlying this distribution. But here the parameter is endogenous and affected by the incentive contract.

Definition 4.2: The probabilities of success satisfy the monotone likelihood ratio property (MLRP) if $\frac{\Delta\pi_i}{\pi_{i1}}$ is nondecreasing in $i$.

When this monotonicity property holds, the structure of the agent’s rewards is quite intuitive and is described in proposition 4.6.

Proposition 4.6: If the probability of success satisfies MLRP, the second-best payment $t_i^{SB}$ received by the agent may be chosen to be nondecreasing with the level of production $q_i$.

To understand this result intuitively, let us consider the case of $n = 3$. Then, MLRP means

$$\frac{\Delta\pi_1}{\pi_{11}} \leq \frac{\Delta\pi_2}{\pi_{21}} \leq \frac{\Delta\pi_3}{\pi_{31}}. \qquad (4.48)$$

Observe that MLRP is stronger than first-order stochastic dominance, which amounts here to

$$\pi_{11} \leq \pi_{10} \qquad (4.49)$$

and

$$\pi_{11} + \pi_{21} \leq \pi_{10} + \pi_{20}. \qquad (4.50)$$

Suppose (4.49) is false when MLRP holds. Then $\pi_{11} > \pi_{10}$, which implies $\pi_{21} + \pi_{31} < \pi_{20} + \pi_{30}$. Then, we necessarily have either $\pi_{21} - \pi_{20} < 0$ or $\pi_{31} - \pi_{30} < 0$. But $\Delta\pi_1 > 0$ together with (4.48) requires that $\Delta\pi_2$ and $\Delta\pi_3$ also be positive, a contradiction.

Suppose (4.50) is false when MLRP holds and $\pi_{10} \geq \pi_{11}$. Then $\pi_{10} + \pi_{20} < \pi_{11} + \pi_{21}$, which implies $\pi_{30} > \pi_{31}$ and, since $\Delta\pi_1 \leq 0$, also $\pi_{21} - \pi_{20} > 0$. Thus $\frac{\Delta\pi_2}{\pi_{21}} > 0 > \frac{\Delta\pi_3}{\pi_{31}}$, which again contradicts (4.48).

First-order stochastic dominance ensures that an increase of effort is good for the principal in a very strong sense, namely that any principal with a utility function increasing in q favors a higher effort level. However, this is not enough to reward the agent with a transfer increasing in q. It must also be the case that a higher production level is clear evidence that the agent has made a higher effort. MLRP provides this additional information. As (4.48) shows, a higher effort level increases the likelihood of a high production level more than the likelihood of a low production level.
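Both properties can be checked mechanically. A small Python sketch (helper names and the example distributions are assumptions for illustration): MLRP requires the ratios $\frac{\Delta\pi_i}{\pi_{i1}}$ to be nondecreasing in $i$, while first-order stochastic dominance only requires the cumulative distribution under high effort to lie below that under low effort.

```python
def mlrp(pi0, pi1):
    """Monotone likelihood ratio property: dpi_i / pi_i1 nondecreasing in i."""
    r = [(p1 - p0) / p1 for p0, p1 in zip(pi0, pi1)]
    return all(r[i] <= r[i + 1] for i in range(len(r) - 1))

def fosd(pi0, pi1):
    """First-order stochastic dominance of pi1 over pi0: CDF under e=1 <= CDF under e=0."""
    c0 = c1 = 0.0
    for p0, p1 in zip(pi0[:-1], pi1[:-1]):   # last CDF value is 1 for both
        c0 += p0
        c1 += p1
        if c1 > c0 + 1e-12:
            return False
    return True

# A distribution pair satisfying MLRP also satisfies FOSD...
print(mlrp([0.5, 0.3, 0.2], [0.2, 0.3, 0.5]))    # True
print(fosd([0.5, 0.3, 0.2], [0.2, 0.3, 0.5]))    # True
# ...but not conversely: this pair satisfies FOSD yet fails MLRP.
print(fosd([0.4, 0.3, 0.3], [0.35, 0.25, 0.4]))  # True
print(mlrp([0.4, 0.3, 0.3], [0.35, 0.25, 0.4]))  # False
```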

Innes (1990) characterizes optimal contracts in a model with a risk-neutral principal and a risk-neutral agent, both with limited liability constraints, using the first-order approach described below for concave utility functions (see also Park 1995). Milgrom (1981) proposes an extensive discussion of the MLRP assumption.

### 2. Risk Aversion

Suppose now that the agent is strictly risk-averse. The optimal contract that induces effort must solve the program below:

$$(P): \quad \max_{\{(t_1,\ldots,t_n)\}} \sum_{i=1}^{n} \pi_{i1}(S_i - t_i)$$

subject to

$$\sum_{i=1}^{n} \pi_{i1} u(t_i) - \psi \geq \sum_{i=1}^{n} \pi_{i0} u(t_i), \qquad (4.51)$$

$$\sum_{i=1}^{n} \pi_{i1} u(t_i) - \psi \geq 0, \qquad (4.52)$$

where (4.51) is the agent's incentive constraint and the latter constraint (4.52) is his participation constraint.

Using the same change of variables as in section 4.4, it should be clear that (P) is again a concave problem with respect to the new variables $u_i = u(t_i)$. Using the same notations as in section 4.4, with $\lambda$ and $\mu$ denoting the respective multipliers of (4.51) and (4.52), the first-order conditions of program (P) are written as

$$\frac{1}{u'(t_i^{SB})} = \mu + \lambda \frac{\Delta\pi_i}{\pi_{i1}} \quad \text{for all } i \in \{1,\ldots,n\}. \qquad (4.53)$$

Multiplying each of these equations by $\pi_{i1}$ and summing over $i$ yields, since $\sum_{i=1}^{n} \Delta\pi_i = 0$,

$$\mu = E_q\left(\frac{1}{u'(t^{SB})}\right) > 0,$$

where $E_q(\cdot)$ denotes the expectation operator with respect to the distribution of outputs induced by effort $e = 1$.

Multiplying (4.53) by $\pi_{i1} u(t_i^{SB})$, summing all these equations over $i$, and taking into account the expression of $\mu$ obtained above yields

$$\lambda \sum_{i=1}^{n} \Delta\pi_i u(t_i^{SB}) = E_q\left(\frac{u(t^{SB})}{u'(t^{SB})}\right) - E_q\left(u(t^{SB})\right) E_q\left(\frac{1}{u'(t^{SB})}\right). \qquad (4.54)$$

Using the slackness condition $\lambda\left(\sum_{i=1}^{n} \Delta\pi_i u(t_i^{SB}) - \psi\right) = 0$ to simplify the left-hand side of (4.54), we finally get

$$\lambda\psi = \mathrm{cov}\left(u(t^{SB}), \frac{1}{u'(t^{SB})}\right). \qquad (4.55)$$

By assumption, $u(\cdot)$ and $u'(\cdot)$ vary in opposite directions, so that $u(\cdot)$ and $\frac{1}{u'(\cdot)}$ covary positively. Moreover, a constant wage $t_i^{SB} = t^{SB}$ for all $i$ does not satisfy the incentive constraint (4.51), and thus $t_i^{SB}$ cannot be constant everywhere. Hence, the right-hand side of (4.55) is necessarily strictly positive. Thus we have $\lambda > 0$, and the incentive constraint (4.51) is binding.

Coming back to (4.53), we observe that the left-hand side is increasing in $t_i^{SB}$ since $u(\cdot)$ is concave. For $t_i^{SB}$ to be nondecreasing with $i$, MLRP must again hold. Then higher outputs are also the more informative ones about the realization of a high effort. Hence, the agent should be rewarded more as output increases.
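To illustrate how (4.53) pins down the schedule, one can take the particular utility $u(t) = \ln t$ (an assumption made here for tractability, since then $\frac{1}{u'(t)} = t$ and the transfer is linear in the likelihood ratio) and solve the two binding constraints for the multipliers numerically. A sketch with assumed probabilities satisfying MLRP:

```python
from math import exp, log

def risk_averse_contract(pi0, pi1, psi, tol=1e-10):
    """Second-best transfers when the agent has utility u(t) = ln t.

    With log utility 1/u'(t) = t, so (4.53) reads t_i = mu + lam * dpi_i / pi_i1.
    We solve the binding incentive and participation constraints for the
    multipliers (mu, lam) with a damped Newton iteration.
    """
    n = len(pi1)
    dpi = [pi1[i] - pi0[i] for i in range(n)]
    lr = [dpi[i] / pi1[i] for i in range(n)]        # likelihood ratios

    def residuals(mu, lam):
        t = [mu + lam * li for li in lr]
        ic = sum(dpi[i] * log(t[i]) for i in range(n)) - psi   # incentive (4.51)
        ir = sum(pi1[i] * log(t[i]) for i in range(n)) - psi   # participation (4.52)
        return ic, ir, t

    mu, lam = exp(psi), 0.5        # start from the full-insurance wage, small lam
    for _ in range(200):
        ic, ir, t = residuals(mu, lam)
        if abs(ic) < tol and abs(ir) < tol:
            break
        # analytic Jacobian of (ic, ir) with respect to (mu, lam)
        j11 = sum(dpi[i] / t[i] for i in range(n))
        j12 = sum(dpi[i] * lr[i] / t[i] for i in range(n))
        j21 = sum(pi1[i] / t[i] for i in range(n))
        j22 = sum(pi1[i] * lr[i] / t[i] for i in range(n))
        det = j11 * j22 - j12 * j21
        dmu = (j22 * ic - j12 * ir) / det
        dlam = (j11 * ir - j21 * ic) / det
        step = 1.0                 # damp the step to keep every t_i positive
        while any(mu - step * dmu + (lam - step * dlam) * li <= 0 for li in lr):
            step *= 0.5
        mu, lam = mu - step * dmu, lam - step * dlam
    t = [mu + lam * li for li in lr]
    return t, mu, lam

# Assumed probabilities satisfying MLRP: ratios (-0.6, 0, 0.375) are nondecreasing.
t, mu, lam = risk_averse_contract([0.4, 0.35, 0.25], [0.25, 0.35, 0.4], 0.5)
print([round(x, 3) for x in t])    # transfers nondecreasing in output
```

Because $t_i = \mu + \lambda \frac{\Delta\pi_i}{\pi_{i1}}$ with $\lambda > 0$, monotone likelihood ratios translate directly into monotone transfers.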

The benefit of offering a schedule of rewards to the agent that increases with the level of production is that such a scheme does not create any incentive for the agent to sabotage or destroy production to increase his payment. However, only the rather strong assumption of a monotone likelihood ratio ensures this intuitive property. To show why, consider a simple example where MLRP does not hold. Let the probabilities in the different states of nature be $(\pi_{10}, \pi_{20}, \pi_{30}) = \left(\frac{1}{4}, \frac{1}{2}, \frac{1}{4}\right)$ when the agent exerts no effort and $(\pi_{11}, \pi_{21}, \pi_{31}) = \left(\frac{1}{3}, \frac{1}{3}, \frac{1}{3}\right)$ when he exerts an effort. Then we have

$$\frac{\Delta\pi_1}{\pi_{11}} = \frac{\Delta\pi_3}{\pi_{31}} = \frac{1}{4} > \frac{\Delta\pi_2}{\pi_{21}} = -\frac{1}{2},$$

and thus MLRP fails. Of course, when the principal's benefits are such that $S_3$ is much larger than $S_2$ and $S_1$ (with $q_3 > q_2 > q_1$ and $S_3 > S_2 > S_1$), the principal would like to implement a positive effort in order to increase the probability that state of nature 3 is realized. Since outputs $q_1$ and $q_3$ are equally informative of the fact that the agent has exerted a positive effort, the agent must receive the same transfer in both states 1 and 3 from (4.53). Since output $q_2$ is also particularly informative of the fact that the agent has exerted no effort, the second-best payment should be lower in this state of nature. Hence, the non-monotonic schedule reduces the agent's incentives to shirk and therefore reduces the probability that state 2, which is bad from the principal's point of view, is realized.
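Taking $\pi_0 = \left(\frac{1}{4}, \frac{1}{2}, \frac{1}{4}\right)$ and $\pi_1 = \left(\frac{1}{3}, \frac{1}{3}, \frac{1}{3}\right)$ as an illustrative non-MLRP specification, the non-monotone schedule can be computed explicitly under the assumed utility $u(t) = \ln t$ (the text does not fix the utility function): (4.53) then forces $t_1 = t_3$, and the binding constraints (4.51) and (4.52) become linear in $\ln t_1$ and $\ln t_2$.

```python
from math import exp, log

# Illustrative non-MLRP specification: pi0 = (1/4, 1/2, 1/4), pi1 = (1/3, 1/3, 1/3),
# so the likelihood ratios dpi_i / pi_i1 are (1/4, -1/2, 1/4).
# Under the assumed utility u(t) = ln t, (4.53) gives t_i = mu + lam * dpi_i / pi_i1,
# hence t1 = t3. The binding constraints (4.51)-(4.52) reduce to
#   (ln t1 - ln t2) / 6 = psi        (incentive)
#   (2 ln t1 + ln t2) / 3 = psi      (participation)
# whose solution is ln t1 = 3 * psi and ln t2 = -3 * psi.
psi = 0.1                          # assumed disutility of effort
t1 = t3 = exp(3 * psi)
t2 = exp(-3 * psi)
print(round(t1, 4), round(t2, 4))  # 1.3499 0.7408 -> t1 = t3 > t2, non-monotone in output
```

The agent is paid the same in the lowest and highest states and strictly less in the middle state, exactly the pattern described above.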

Source: Laffont Jean-Jacques, Martimort David (2002), The Theory of Incentives: The Principal-Agent Model, Princeton University Press.