We now extend our previous 2 × 2 model to allow for more than two levels of performance.8 We consider a production process where *n* possible outcomes can be realized. Those performances can be ordered so that *q*_{1} < *q*_{2} < ··· < *q*_{i} < ··· < *q*_{n}. We denote the principal's return in each of those states of nature by *S*_{i} = *S*(*q*_{i}). In this context, a contract is an *n*-uple of payments {(*t*_{1}, . . . , *t*_{n})}. Also, let *π*_{ik} be the probability that production *q*_{i} takes place when the effort level is *e*_{k}. We assume that *π*_{ik} > 0 for all pairs (*i*, *k*), with Σ_{i=1}^{n} *π*_{ik} = 1. Finally, we keep the assumption that only two levels of effort are feasible, i.e., *e*_{k} in {0, 1}. We still denote Δ*π*_{i} = *π*_{i1} − *π*_{i0}.

### 1. Limited Liability

Consider first the limited liability model of section 4.4. If the optimal contract induces a positive effort, it solves the following program (*P*), where *ψ* denotes the agent's disutility of effort:

(*P*): max_{(*t*_{1}, . . . , *t*_{n})} Σ_{i=1}^{n} *π*_{i1}(*S*_{i} − *t*_{i})

subject to

(4.44) Σ_{i=1}^{n} *π*_{i1}*t*_{i} − *ψ* ≥ 0;

(4.45) Σ_{i=1}^{n} Δ*π*_{i}*t*_{i} ≥ *ψ*;

(4.46) *t*_{i} ≥ 0, for all *i* in {1, . . . , *n*}.

(4.44) is the agent's participation constraint. (4.45) is his incentive constraint. (4.46) are all the limited liability constraints, which we simplify, with respect to section 4.3, by assuming that the agent cannot be given a negative payment, i.e., the agent has no asset of his own before starting the relationship with the principal.

First, note that the participation constraint (4.44) is implied by the incentive (4.45) and the limited liability (4.46) constraints. Indeed, we have

Σ_{i=1}^{n} *π*_{i1}*t*_{i} − *ψ* ≥ Σ_{i=1}^{n} *π*_{i0}*t*_{i} ≥ 0,

where the first inequality is the incentive constraint (4.45) and the second follows from the limited liability constraints (4.46).

Hence, we can neglect the participation constraint (4.44) in the optimization of problem (*P*).

Denoting the multiplier of (4.45) by *λ* and the respective multipliers of (4.46) by *ξ*_{i}, the first-order conditions of program (*P*) lead to

(4.47) −*π*_{i1} + *λ*Δ*π*_{i} + *ξ*_{i} = 0,

with the slackness conditions *ξ*_{i}*t*_{i} = 0 for each *i* in {1, . . . , *n*}.

For *i* such that the second-best transfer *t*_{i}^{SB} is strictly positive, *ξ*_{i} = 0, and we must have *λ* = *π*_{i1}/Δ*π*_{i} for any such *i*. If the ratios Δ*π*_{i}/*π*_{i1} are all different, there exists a single index *j* such that Δ*π*_{j}/*π*_{j1} is the highest possible ratio. Then, the structure of the optimal payments is *bang-bang*. The agent receives a strictly positive transfer only in this particular state of nature *j*, and this payment is such that the incentive constraint (4.45) is binding, i.e., *t*_{j}^{SB} = *ψ*/Δ*π*_{j}. In all other states, the agent receives no transfer and *t*_{i}^{SB} = 0 for all *i* ≠ *j*. Finally, the agent gets a strictly positive *ex ante* limited liability rent that is worth *π*_{j1}*t*_{j}^{SB} − *ψ* = *ψπ*_{j0}/Δ*π*_{j}.

The important point here is that the agent is rewarded in the state of nature that is the most informative about the fact that he has exerted a positive effort.

Indeed, Δ*π*_{i}/*π*_{i1} = 1 − *π*_{i0}/*π*_{i1} can be interpreted as a *likelihood ratio*. The principal therefore uses a *maximum likelihood ratio criterion* to reward the agent. The agent is rewarded only when this likelihood ratio is maximal. Like an econometrician, the principal tries to infer from the observed output what the *parameter* (effort) underlying this distribution was. But here the *parameter* is endogenous and affected by the incentive contract.
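As an illustration (not from the original text), the maximum likelihood ratio criterion can be sketched in a few lines of Python; the probability vectors and the disutility of effort `psi` below are made-up numbers chosen only so the ratios differ across states.

```python
def bang_bang_contract(pi0, pi1, psi):
    """Limited-liability optimum: pay only in the state j with the
    highest likelihood ratio (pi_j1 - pi_j0)/pi_j1, and choose the
    payment so that the incentive constraint (4.45) binds."""
    ratios = [(p1 - p0) / p1 for p0, p1 in zip(pi0, pi1)]
    j = max(range(len(pi1)), key=lambda i: ratios[i])
    t = [0.0] * len(pi1)
    t[j] = psi / (pi1[j] - pi0[j])               # t_j = psi / dpi_j
    # ex ante limited liability rent = E_1[t] - psi = psi*pi_j0/dpi_j
    rent = sum(p * x for p, x in zip(pi1, t)) - psi
    return t, rent
```

With the hypothetical numbers used in the test, the highest ratio is attained in the top state, so the agent is paid *ψ*/Δ*π*_{j} there and nothing elsewhere, and the rent equals *ψπ*_{j0}/Δ*π*_{j}.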

**Definition 4.2: ***The probabilities of success satisfy the monotone likelihood ratio property*11 (*MLRP*) *if* Δ*π*_{i}/*π*_{i1} *is nondecreasing in* *i*.

When this monotonicity property holds, the structure of the agent’s rewards is quite intuitive and is described in proposition 4.6.

**Proposition 4.6: ***If the probability of success satisfies MLRP, the second-best payment* *t*_{i}^{SB} *received by the agent may be chosen to be nondecreasing with the level of production* *q*_{i}.

To understand this result intuitively, let us consider the case of *n* = 3. Then, MLRP means

(4.48) Δ*π*_{1}/*π*_{11} ≤ Δ*π*_{2}/*π*_{21} ≤ Δ*π*_{3}/*π*_{31}.

Observe that MLRP is stronger than first-order stochastic dominance, which amounts here to

(4.49) *π*_{11} ≤ *π*_{10}

and

(4.50) *π*_{11} + *π*_{21} ≤ *π*_{10} + *π*_{20}.

Suppose (4.49) is false when MLRP holds. Then *π*_{11} > *π*_{10}, i.e., Δ*π*_{1} > 0, which implies *π*_{21} + *π*_{31} < *π*_{20} + *π*_{30} because probabilities sum to one. Then, we necessarily have either *π*_{21} − *π*_{20} < 0 or *π*_{31} − *π*_{30} < 0, i.e., a negative likelihood ratio following the strictly positive ratio Δ*π*_{1}/*π*_{11}, which contradicts (4.48).

Suppose (4.50) is false when MLRP holds and *π*_{10} ≥ *π*_{11}. Then *π*_{10} + *π*_{20} < *π*_{11} + *π*_{21}, which implies *π*_{30} > *π*_{31} and *π*_{21} − *π*_{20} > 0. Again, this contradicts (4.48), since then Δ*π*_{2}/*π*_{21} > 0 > Δ*π*_{3}/*π*_{31}.

First-order stochastic dominance ensures that an increase of effort is good for the principal in a very strong sense, namely that any principal with a utility function increasing in *q *favors a higher effort level. However, this is not enough to reward the agent with a transfer increasing in *q*. It must also be the case that a higher production level is clear evidence that the agent has made a higher effort. MLRP provides this additional information. As (4.48) shows, a higher effort level increases the likelihood of a high production level *more *than the likelihood of a low production level.
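This implication is easy to check numerically (a sketch with helper names of our own choosing): the functions below test MLRP and first-order stochastic dominance for discrete output distributions, and a brute-force search over three-outcome distributions on a grid finds no MLRP pair that violates dominance.

```python
def satisfies_mlrp(pi0, pi1):
    """MLRP: the likelihood ratios (pi_i1 - pi_i0)/pi_i1 are
    nondecreasing in i, as in (4.48)."""
    r = [(b - a) / b for a, b in zip(pi0, pi1)]
    return all(x <= y + 1e-12 for x, y in zip(r, r[1:]))

def satisfies_fosd(pi0, pi1):
    """First-order stochastic dominance of e=1 over e=0: the
    cumulative probability under effort never exceeds that under
    no effort at any output level, as in (4.49)-(4.50)."""
    c0 = c1 = 0.0
    for a, b in zip(pi0, pi1):
        c0 += a
        c1 += b
        if c1 > c0 + 1e-12:
            return False
    return True
```

The exhaustive check in the test below runs over all pairs of strictly positive three-outcome distributions with probabilities in multiples of 1/12.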

Innes (1990) characterizes optimal contracts in a model with a risk-neutral principal and a risk-neutral agent, both with limited liability constraints, using the first-order approach described below for concave utility functions (see also Park 1995). Milgrom (1981) proposes an extensive discussion of the MLRP assumption.

### 2. Risk Aversion

Suppose now that the agent is strictly risk-averse. The optimal contract that induces effort must solve the program below:

(*P*): max_{(*t*_{1}, . . . , *t*_{n})} Σ_{i=1}^{n} *π*_{i1}(*S*_{i} − *t*_{i})

subject to

(4.51) Σ_{i=1}^{n} Δ*π*_{i}*u*(*t*_{i}) ≥ *ψ*;

(4.52) Σ_{i=1}^{n} *π*_{i1}*u*(*t*_{i}) − *ψ* ≥ 0,

where the latter constraint is the agent’s participation constraint.

Using the same change of variables as in section 4.4, it should be clear that (*P*) is again a concave problem with respect to the new variables *u*_{i} = *u*(*t*_{i}). Using the same notations as in section 4.4, with *μ* and *λ* denoting the respective multipliers of (4.52) and (4.51), the first-order conditions of program (*P*) are written as

(4.53) 1/*u*′(*t*_{i}^{SB}) = *μ* + *λ*(Δ*π*_{i}/*π*_{i1}), for all *i* in {1, . . . , *n*}.

Multiplying each of these equations by *π*_{i1} and summing over *i* yields *μ* = *E*_{q}(1/*u*′(*t*_{i}^{SB})), where *E*_{q}(·) denotes the expectation operator with respect to the distribution of outputs induced by effort *e* = 1, and where we use the fact that Σ_{i=1}^{n} Δ*π*_{i} = 0.

Multiplying (4.53) by *π*_{i1}*u*(*t*_{i}^{SB}), summing all these equations over *i*, and taking into account the expression of *μ* obtained above yields

(4.54) *λ*(Σ_{i=1}^{n} Δ*π*_{i}*u*(*t*_{i}^{SB})) = *E*_{q}(*u*(*t*_{i}^{SB})/*u*′(*t*_{i}^{SB})) − *E*_{q}(*u*(*t*_{i}^{SB}))*E*_{q}(1/*u*′(*t*_{i}^{SB})) = cov(*u*(*t*_{i}^{SB}), 1/*u*′(*t*_{i}^{SB})).

Using the slackness condition *λ*(Σ_{i=1}^{n} Δ*π*_{i}*u*(*t*_{i}^{SB}) − *ψ*) = 0 to simplify the left-hand side of (4.54), we finally get

(4.55) *λψ* = cov(*u*(*t*_{i}^{SB}), 1/*u*′(*t*_{i}^{SB})).

By assumption, *u*(·) and *u*′(·) covary in opposite directions. Moreover, a constant wage *t*_{i}^{SB} = *t*^{SB} for all *i* does not satisfy the incentive constraint, and thus *t*_{i}^{SB} cannot be constant everywhere. Hence, the right-hand side of (4.55) is necessarily strictly positive. Thus we have *λ* > 0, and the incentive constraint (4.51) is binding.
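The sign claim behind (4.55) — that *u*(*t*) and 1/*u*′(*t*) covary positively whenever the transfer schedule is not constant — can be checked numerically. Below is a small sketch using an illustrative utility *u*(*t*) = √*t* (our choice, not the book's), for which 1/*u*′(*t*) = 2√*t*; the distribution and transfers are made-up numbers.

```python
import math

def cov_q(pi1, f, g, t):
    """Covariance of f(t_i) and g(t_i) under the output
    distribution pi1 induced by effort e = 1."""
    Ef = sum(p * f(x) for p, x in zip(pi1, t))
    Eg = sum(p * g(x) for p, x in zip(pi1, t))
    Efg = sum(p * f(x) * g(x) for p, x in zip(pi1, t))
    return Efg - Ef * Eg

u = math.sqrt                                # u(t) = sqrt(t), concave
inv_uprime = lambda t: 2.0 * math.sqrt(t)    # 1/u'(t) = 2*sqrt(t)

pi1 = [0.2, 0.3, 0.5]                        # made-up distribution
t_nonconstant = [1.0, 4.0, 9.0]              # non-constant transfers
t_constant = [4.0, 4.0, 4.0]                 # constant wage
```

For this utility 1/*u*′ is itself an increasing transform of *u*, so the covariance is a positive multiple of the variance of *u*(*t*): strictly positive for the non-constant schedule, zero for the constant one.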

Coming back to (4.53), we observe that the left-hand side is increasing in *t*_{i}^{SB} since *u*(·) is concave. For *t*_{i}^{SB} to be nondecreasing with *i*, MLRP must again hold. Then higher outputs are also those that are the most informative about the realization of a high effort. Hence, the agent should be rewarded more as output increases.

The benefit of offering a schedule of rewards to the agent that increases with the level of production is that such a scheme does not create any incentive for the agent to sabotage or destroy production to increase his payment.13 However, only the rather strong assumption of a monotone likelihood ratio ensures this intuitive property. To show why, consider a simple example where MLRP does not hold. Let the probabilities in the different states of nature be *π*_{10} = *π*_{30} = 1/6 and *π*_{20} = 2/3 when the agent exerts no effort, and *π*_{11} = *π*_{21} = *π*_{31} = 1/3 when he exerts an effort.

Then we have

Δ*π*_{1}/*π*_{11} = 1/2 > Δ*π*_{2}/*π*_{21} = −1 < Δ*π*_{3}/*π*_{31} = 1/2,

and thus MLRP fails. Of course, when the principal's benefits are such that *S*_{3} is much larger than *S*_{2} and *S*_{1} (with *q*_{3} > *q*_{2} > *q*_{1} and *S*_{3} > *S*_{2} > *S*_{1}), the principal would like to implement a positive effort in order to increase the probability that the state of nature 3 is realized. Since outputs *q*_{1} and *q*_{3} are equally informative of the fact that the agent has exerted a positive effort, the agent must receive the same transfer in both states 1 and 3 from (4.53). Since output *q*_{2} is also particularly informative of the fact that the agent has exerted no effort, the second-best payment should be lower in this state of nature. Hence, the non-monotonic schedule reduces the agent's incentives to shirk and therefore reduces the probability that state 2, which is bad from the principal's point of view, is realized.
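The failure of MLRP in this example is immediate to verify (a sketch; the snippet just recomputes the likelihood ratios from the probabilities quoted above):

```python
# Probabilities from the example: effort raises the chance of the
# extreme outputs q_1 and q_3 and lowers that of q_2.
pi0 = [1 / 6, 2 / 3, 1 / 6]   # no effort (e = 0)
pi1 = [1 / 3, 1 / 3, 1 / 3]   # effort (e = 1)

ratios = [(b - a) / b for a, b in zip(pi0, pi1)]
# States 1 and 3 share the highest likelihood ratio while state 2's
# ratio is negative, so the sequence is not nondecreasing: MLRP fails,
# and the optimal payment schedule is non-monotonic in output.
```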

Source: Laffont Jean-Jacques, Martimort David (2002), *The Theory of Incentives: The Principal-Agent Model*, Princeton University Press.