As in the case of adverse selection analyzed in section 2.14, various verifiable signals can be used by the principal to improve the provision of incentives to the agent in a moral hazard framework. These pieces of information can be gathered by different kinds of information systems, which may be internal to the organization, as in the case of monitoring and supervision, or may be obtained by comparing the agent's performance with that of other related agents in the marketplace, when such public information is available. These practices are sometimes called "benchmarking" or "yardstick competition."
1. Informativeness of Signals
The framework of section 4.5, with multiple levels of performance, is extremely useful when assessing the principal's benefit from sources of information other than the agent's sole performance. To assess the role of improved information structures, let us still assume that there are only two levels of production, q̄ and q, and that the principal also learns a binary signal σ̃ belonging to the set Σ = {σ₀, σ₁}, which depends directly on the agent's effort. More precisely, the matrix in figure 4.5 gives the probabilities of each signal σᵢ, for i in {0, 1}, as a function of the agent's effort.
Figure 4.5: Information Structure
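One reading consistent with the discussion below is that νₑ denotes the probability of observing σ₁ when the agent exerts effort e, so that the matrix of figure 4.5 is

$$\begin{array}{c|cc} & \tilde\sigma = \sigma_1 & \tilde\sigma = \sigma_0 \\ \hline e = 1 & \nu_1 & 1-\nu_1 \\ e = 0 & \nu_0 & 1-\nu_0 \end{array}$$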
Note that the signal σ₁ (resp. σ₀) is good news (resp. bad news) that the agent has exerted a high level of effort. The signal is uninformative about the agent's effort when ν₀ = ν₁.
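To see the sense in which σ₁ is good news, suppose (purely as an illustration; this prior x is not part of the model) that the principal holds a prior probability x that the high effort has been exerted. Bayes' rule yields

$$\Pr(e = 1 \mid \sigma_1) = \frac{x\,\nu_1}{x\,\nu_1 + (1 - x)\,\nu_0} \;\ge\; x \quad\Longleftrightarrow\quad \nu_1 \ge \nu_0,$$

so observing σ₁ raises the posterior probability that the high effort was exerted precisely when ν₁ > ν₀.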
The signal σ̃ being verifiable, the principal now has the ability to condition the agent's compensation on four possible states of nature yᵢ, for i in {1, …, 4}. Each of these states is defined in table 4.1.
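Given that states 1 and 2 carry the high surplus S̄ and states 3 and 4 the low surplus (see below), a labeling of the four states consistent with what follows is y₁ = (q̄, σ₁), y₂ = (q̄, σ₀), y₃ = (q, σ₁), and y₄ = (q, σ₀).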
The signal σ˜ is not related to output, but only to effort. We assume that it does not affect the principal’s return from the relationship, and we have S1 = S2 = S¯ and S3 = S4 = S.
Denoting the respective multipliers of the agent’s incentive and participation constraints by λ and μ, the first-order conditions (4.53) now become
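Writing π₁ and π₀ for the respective probabilities of the high output q̄ under high and low effort, as in section 4.5, and assuming that σ̃ and q̃ are independently distributed conditionally on the effort level (so that, for instance, state y₁ has probability π₁ν₁ when e = 1 and π₀ν₀ when e = 0), these first-order conditions (equations (4.57) through (4.60), referred to below) read

$$\frac{1}{u'(t_1^{SB})} = \mu + \lambda\left(1 - \frac{\pi_0\,\nu_0}{\pi_1\,\nu_1}\right), \qquad
\frac{1}{u'(t_2^{SB})} = \mu + \lambda\left(1 - \frac{\pi_0\,(1-\nu_0)}{\pi_1\,(1-\nu_1)}\right),$$

$$\frac{1}{u'(t_3^{SB})} = \mu + \lambda\left(1 - \frac{(1-\pi_0)\,\nu_0}{(1-\pi_1)\,\nu_1}\right), \qquad
\frac{1}{u'(t_4^{SB})} = \mu + \lambda\left(1 - \frac{(1-\pi_0)\,(1-\nu_0)}{(1-\pi_1)\,(1-\nu_1)}\right).$$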
Note that $t_1^{SB} = t_2^{SB}$ and $t_3^{SB} = t_4^{SB}$ only when ν₁ = ν₀, i.e., when σ̃ is not informative about the agent's effort. In this case, conditioning the agent's compensation on a risk σ̃ that is unrelated to the agent's effort is of no value to the principal.
This situation can only increase the risk borne by the agent without any incentive benefit. Indeed, any compensation t(σ̃, q̃) yielding utility u(t(σ̃, q̃)) to the agent can be replaced by a new scheme t̄(q̃) that is independent of σ̃ and such that $u(\bar t(\tilde q)) = E_{\tilde\sigma}\big(u(t(\tilde\sigma, \tilde q)) \mid \tilde q\big)$ for any q̃, without changing the agent's incentive and participation constraints (those constraints depend on the compensation only through the agent's expected utility, and the distribution of σ̃ does not depend on effort when ν₀ = ν₁). Furthermore, this new scheme is also less costly to the principal, because $E_{\tilde\sigma,\tilde q}\big(t(\tilde\sigma, \tilde q)\big) \ge E_{\tilde q}\big(\bar t(\tilde q)\big)$. As proof of this latter inequality, note that, using the definition of t̄(q̃), we have $\bar t(\tilde q) = h\big(E_{\tilde\sigma}(u(t(\tilde\sigma, \tilde q)) \mid \tilde q)\big)$, where h = u⁻¹, and thus

$$E_{\tilde q}\big(\bar t(\tilde q)\big) = E_{\tilde q}\Big(h\big(E_{\tilde\sigma}\big(u(t(\tilde\sigma, \tilde q)) \mid \tilde q\big)\big)\Big) \le E_{\tilde q}\Big(E_{\tilde\sigma}\big(h\big(u(t(\tilde\sigma, \tilde q))\big) \mid \tilde q\big)\Big) = E_{\tilde\sigma, \tilde q}\big(t(\tilde\sigma, \tilde q)\big),$$
where the first inequality comes from using Jensen’s inequality for h(·) convex, and the second equality is the Law of Iterated Expectations.
Instead, when σ˜ is informative of the agent’s effort, conditioning the agent’s reward on the realization of σ˜ has some positive incentive value as shown in equations (4.57) through (4.60). We state this as a proposition:
Proposition 4.7: Any signal σ˜ that is informative of the agent’s effort should be used to condition the agent’s compensation scheme.
This result is known as Holmström's Sufficient Statistic Theorem (1979). It was initially proved in a model with a continuum of outcomes and a continuum of effort levels, but its logic is the same as above. The most spectacular applications of the Sufficient Statistic Theorem arise in multiagent environments. In such environments, it has been shown that the performance of an agent can be used to incentivize another agent if their performances are correlated, even if their efforts are technologically unrelated. On this, see Mookherjee (1984) and the tournament literature (Nalebuff and Stiglitz 1983, and Green and Stokey 1983).
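As a purely numerical illustration of proposition 4.7, the following sketch (not from the text) computes the second-best cost of implementing the high effort with and without the signal, under assumptions chosen only for concreteness: log utility u(t) = ln t, so that h = u⁻¹ = exp is convex, conditional independence of σ̃ and q̃ given effort, and made-up parameter values.

```python
# Numerical sketch of proposition 4.7 (illustrative assumptions, not the book's):
# agent utility u(t) = ln(t), hence h = u^{-1} = exp; sigma and q independent
# conditionally on effort; parameter values are made up.
import numpy as np
from scipy.optimize import minimize

psi = 1.0                      # disutility of the high effort
pi1, pi0 = 0.6, 0.4            # Pr(q_bar | e = 1), Pr(q_bar | e = 0)
nu1, nu0 = 0.7, 0.3            # Pr(sigma_1 | e = 1), Pr(sigma_1 | e = 0)

def second_best_cost(p1, p0):
    """Minimal expected transfer implementing e = 1: choose one utility level per
    state subject to the participation and incentive compatibility constraints."""
    constraints = [
        {"type": "ineq", "fun": lambda u: p1 @ u - psi},          # participation
        {"type": "ineq", "fun": lambda u: (p1 - p0) @ u - psi},   # incentive
    ]
    # Expected transfer is sum_i p1_i * h(u_i), with h = exp.
    res = minimize(lambda u: p1 @ np.exp(u),
                   x0=psi + 10.0 * (1.0 - p0 / p1),   # feasible start for these numbers
                   constraints=constraints)
    return res.fun

# Information structure 1: output alone (two states).
cost_q = second_best_cost(np.array([pi1, 1 - pi1]), np.array([pi0, 1 - pi0]))

# Information structure 2: output and signal (four states y1, ..., y4).
p1 = np.array([pi1 * nu1, pi1 * (1 - nu1), (1 - pi1) * nu1, (1 - pi1) * (1 - nu1)])
p0 = np.array([pi0 * nu0, pi0 * (1 - nu0), (1 - pi0) * nu0, (1 - pi0) * (1 - nu0)])
cost_q_sigma = second_best_cost(p1, p0)

print(f"C^SB with q alone   : {cost_q:.4f}")
print(f"C^SB with (q, sigma): {cost_q_sigma:.4f}")   # weakly (here strictly) lower
```

Setting ν₁ = ν₀ in this script makes the two costs coincide (up to solver tolerance), in line with the discussion of (4.57) through (4.60) above.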
2. More Comparisons Among Information Structures
The previous section has shown how the principal can strictly prefer a given information structure {q̃, σ̃} to another structure {q̃} as soon as the signal σ̃ is informative about the agent's effort. More generally, the choice between various information structures will trade off the costs and benefits of these systems. The costs may increase as the principal uses more informative signals of the agent's performance. The possible benefits come from reducing the agency costs.
Let us thus define an information structure π(e) as an n-uple {π₁(e), …, πₙ(e)} such that πᵢ(e) ≥ 0 for all i and $\sum_{i=1}^{n} \pi_i(e) = 1$ for each value of e. Again, we assume that e can be either 0 or 1, and to simplify we denote π(1) = π.
A natural ordering of information systems is provided by Blackwell’s condition stated in definition 4.3.
Definition 4.3: The information structure π(e) is sufficient, in the sense of Blackwell, for the information structure π̂(e) if and only if there exists a transition matrix P = (pᵢⱼ), (i, j) ∈ {1, …, n}², that is independent of e, with pᵢⱼ ≥ 0 and $\sum_{i=1}^{n} p_{ij} = 1$ for each j, and such that $\hat\pi_i(e) = \sum_{j=1}^{n} p_{ij}\,\pi_j(e)$ for all i and for all e in {0, 1}.
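For instance (a purely illustrative example with made-up numbers), take n = 2 and

$$P = \begin{pmatrix} 0.8 & 0.3 \\ 0.2 & 0.7 \end{pmatrix}, \qquad \text{so that} \qquad \hat\pi_1(e) = 0.8\,\pi_1(e) + 0.3\,\pi_2(e), \quad \hat\pi_2(e) = 0.2\,\pi_1(e) + 0.7\,\pi_2(e)$$

for e in {0, 1}. Each signal of the original structure is randomly relabeled, independently of the effort, so that π̂(e) mixes the probabilities of π(e); this is a simple instance of the garbling discussed next.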
An intuitive example of this ordering is given by the garbling of an information structure. Then, each signal of information structure 1 is transformed by a purely random information mechanism (independent of the agent's effort) into a vector of final signals. The new information structure, say structure 2, is such that information structure 1 is sufficient for information structure 2. The ordering implied by the Blackwell condition is an interesting expression of dominance, because it is a necessary and sufficient condition for any decision-maker to prefer information structure 1 to information structure 2.

We want to understand whether this natural statistical ordering among information structures also ranks the agency costs in the incentive problems associated with these information structures. To see that, let us define $C^{SB}(\pi)$ as the second-best cost of implementing a positive effort when the information structure is π. By definition, we have $C^{SB}(\pi) = \sum_{i=1}^{n} \pi_i(1)\, t_i^{SB}(\pi)$, where $t_i^{SB}(\pi)$ is given by (4.53).
Note that we make the dependence of these transfers on the information system explicit, because different information systems certainly do not yield the same second-best transfers and implementation costs.
We are interested in comparing information structures according to their agency costs. Let us first state definition 4.4.
Definition 4.4: The information structure π is weakly more efficient than the information structure π̂ if and only if $C^{SB}(\pi) \le C^{SB}(\hat\pi)$.
We can then obtain the comparison outlined in proposition 4.8.
Proposition 4.8: If the information structure π is sufficient for the information structure π̂ in the sense of Blackwell, then π is weakly more efficient than π̂.
Proof: To prove this result, denote by $u_i^{SB}(\hat\pi) = u(t_i^{SB}(\hat\pi))$ the second-best utility level granted to the agent in state i when the information structure is π̂, and note first that the definition of the information structure π̂ implies that

$$\begin{aligned}
C^{SB}(\hat\pi) &= \sum_{i=1}^{n} \hat\pi_i(1)\, h\big(u_i^{SB}(\hat\pi)\big) = \sum_{j=1}^{n} \pi_j(1) \left(\sum_{i=1}^{n} p_{ij}\, h\big(u_i^{SB}(\hat\pi)\big)\right) \\
&\ge \sum_{j=1}^{n} \pi_j(1)\, h\!\left(\sum_{i=1}^{n} p_{ij}\, u_i^{SB}(\hat\pi)\right), \qquad (4.62)
\end{aligned}$$
where the second equality uses the definition of π̂ and the last line is obtained from Jensen's inequality.
However, the n-uple $(u_i^{SB}(\hat\pi))_{1 \le i \le n}$ implements a positive effort at a minimal cost when the information structure is π̂. Hence, the agent's incentive compatibility constraint, $\sum_{i=1}^{n} \big(\hat\pi_i(1) - \hat\pi_i(0)\big) u_i^{SB}(\hat\pi) = \psi$ (ψ being the disutility of effort), and his participation constraint, $\sum_{i=1}^{n} \hat\pi_i(1)\, u_i^{SB}(\hat\pi) - \psi = 0$, are both binding. Using the definition of π̂ again, those two last equations are written, respectively, as

$$\psi = \sum_{i=1}^{n} \big(\hat\pi_i(1) - \hat\pi_i(0)\big)\, u_i^{SB}(\hat\pi) = \sum_{j=1}^{n} \big(\pi_j(1) - \pi_j(0)\big) \left(\sum_{i=1}^{n} p_{ij}\, u_i^{SB}(\hat\pi)\right), \qquad (4.63)$$

$$\sum_{j=1}^{n} \pi_j(1) \left(\sum_{i=1}^{n} p_{ij}\, u_i^{SB}(\hat\pi)\right) - \psi = 0. \qquad (4.64)$$
Let us now define the ex post utility levels $\bar u_j = \sum_{i=1}^{n} p_{ij}\, u_i^{SB}(\hat\pi)$ for all j in {1, …, n}. These new utility levels implement the high level of effort for the information structure π (from the right-hand equality of (4.63)) and make the agent's participation constraint binding (from (4.64)). By definition of $C^{SB}(\pi)$, we have $C^{SB}(\pi) \le \sum_{j=1}^{n} \pi_j(1)\, h(\bar u_j)$.
Finally, using (4.62) we obtain

$$C^{SB}(\pi) \le \sum_{j=1}^{n} \pi_j(1)\, h(\bar u_j) \le C^{SB}(\hat\pi),$$

so that π is weakly more efficient than π̂.
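A numerical check of proposition 4.8, in the same illustrative setting as the earlier sketch (log utility and made-up parameters, none of which come from the text): garbling the four-state structure (q̃, σ̃) with an effort-independent transition matrix weakly raises the second-best cost of implementing the high effort.

```python
# Numerical check of proposition 4.8 (same illustrative assumptions as before):
# a garbled information structure is weakly less efficient.
import numpy as np
from scipy.optimize import minimize

psi = 1.0
pi1, pi0, nu1, nu0 = 0.6, 0.4, 0.7, 0.3

def second_best_cost(p1, p0):
    """Minimal expected transfer implementing e = 1 when the agent's utility is ln t."""
    constraints = [
        {"type": "ineq", "fun": lambda u: p1 @ u - psi},          # participation
        {"type": "ineq", "fun": lambda u: (p1 - p0) @ u - psi},   # incentive
    ]
    res = minimize(lambda u: p1 @ np.exp(u),
                   x0=psi + 10.0 * (1.0 - p0 / p1), constraints=constraints)
    return res.fun

# Original structure pi(e): the four states built from (q, sigma).
p1 = np.array([pi1 * nu1, pi1 * (1 - nu1), (1 - pi1) * nu1, (1 - pi1) * (1 - nu1)])
p0 = np.array([pi0 * nu0, pi0 * (1 - nu0), (1 - pi0) * nu0, (1 - pi0) * (1 - nu0)])

# Garbling: state j is reported as state i with probability P[i, j], independently
# of the effort (each column of P sums to one, as in definition 4.3).
P = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.1, 0.7, 0.1, 0.1],
              [0.1, 0.1, 0.7, 0.1],
              [0.1, 0.1, 0.1, 0.7]])
p1_hat, p0_hat = P @ p1, P @ p0

print(f"C^SB(pi)     = {second_best_cost(p1, p0):.4f}")
print(f"C^SB(pi_hat) = {second_best_cost(p1_hat, p0_hat):.4f}")   # weakly higher
```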
Source: Laffont Jean-Jacques, Martimort David (2002), The Theory of Incentives: The Principal-Agent Model, Princeton University Press.