The Rent Extraction-Efficiency Trade-Off: The Revelation Principle

In the above analysis, we have restricted the principal to offer a menu of contracts, one for each possible type. First, one may wonder if a better outcome could be achieved with a more complex contract allowing the agent possibly to choose among more options. Second, one may also wonder whether some sort of communication device between the agent and the principal could be used to transmit information to the principal so that the latter can recommend outputs and payments as a function of transmitted information. The revelation principle ensures that there is no loss of generality in restricting the principal to offer simple menus having at most as many options as the cardinality of the type space. Those simple menus are actually examples of direct revelation mechanisms for which we now give a couple of definitions.

Definition 2.3: A direct revelation mechanism is a mapping g(·) from Θ to A which writes as g(θ) = (q(θ), t(θ)) for all θ belonging to Θ. The principal commits to offer the transfer t(θ̃) and the production level q(θ̃) if the agent announces the value θ̃, for any θ̃ belonging to Θ.

Definition 2.4: A direct revelation mechanism g(·) is truthful if it is incentive compatible for the agent to announce his true type for any type, i.e., if the direct revelation mechanism satisfies the following incentive compatibility constraints:

t(θ) − θq(θ) ≥ t(θ̃) − θq(θ̃)   for all (θ, θ̃) ∈ Θ².

Denoting transfer and output for each possible report respectively as t(θ̲) = t̲, q(θ̲) = q̲, t(θ̄) = t̄, and q(θ̄) = q̄, we get back to the notations of the previous sections and in particular to the incentive constraints (2.9) and (2.10).
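As a quick check of how the general truthfulness condition specializes, the following sketch writes out the two-type constraints (2.9) and (2.10), assuming the agent's quasi-linear utility U = t − θq used throughout this chapter (the display below is a reconstruction, not a quote of the original equations):

```latex
% Truthfulness constraints in the two-type case, Theta = {underline(theta), bar(theta)},
% assuming agent utility U = t - theta q.
\begin{aligned}
% Efficient type underline(theta) prefers its own contract:
\underline{t} - \underline{\theta}\,\underline{q}
  &\ge \bar{t} - \underline{\theta}\,\bar{q} \tag{2.9} \\
% Inefficient type bar(theta) prefers its own contract:
\bar{t} - \bar{\theta}\,\bar{q}
  &\ge \underline{t} - \bar{\theta}\,\underline{q} \tag{2.10}
\end{aligned}
```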

A more general mechanism can be obtained when communication between the principal and the agent is more complex than simply having the agent report his type to the principal. Let M be the message space offered to the agent by a more general mechanism. This message space can be very complex. Conditionally on a given message m received from the agent, the principal requests a production level q̃(m) and provides a corresponding payment t̃(m).

Definition 2.5: A mechanism is a message space M and a mapping g̃(·) from M to A which writes as g̃(m) = (q̃(m), t̃(m)) for all m belonging to M.

When facing such a mechanism, the agent with type θ chooses a best message m*(θ) that is implicitly defined as

t̃(m*(θ)) − θq̃(m*(θ)) ≥ t̃(m̃) − θq̃(m̃)   for all m̃ ∈ M.   (2.37)

The mechanism (M, g̃(·)) therefore induces an allocation rule a(θ) = (q̃(m*(θ)), t̃(m*(θ))) mapping the set of types Θ into the set of allocations A. Then we are ready to state the revelation principle in the one-agent case.

Proposition 2.2: The Revelation Principle. Any allocation rule a(θ) obtained with a mechanism (M, g̃(·)) can also be implemented with a truthful direct revelation mechanism.

Figure 2.6: The Revelation Principle

Proof: The indirect mechanism (M, g̃(·)) induces an allocation rule a(θ) = (q̃(m*(θ)), t̃(m*(θ))) from Θ into A. By composition of g̃(·) and m*(·), we can construct a direct revelation mechanism g(·) mapping Θ into A, namely g = g̃ ∘ m*, or more precisely g(θ) = (q(θ), t(θ)) = g̃(m*(θ)) = (q̃(m*(θ)), t̃(m*(θ))) for all θ in Θ.

Figure 2.6 illustrates this construction, which is at the core of the revelation principle.

We check now that the direct revelation mechanism g(·) is truthful. Indeed, since (2.37) is true for all m̃, it holds in particular for m̃ = m*(θ′) for all θ′ in Θ. Thus we have

t̃(m*(θ)) − θq̃(m*(θ)) ≥ t̃(m*(θ′)) − θq̃(m*(θ′))   for all (θ, θ′) ∈ Θ².   (2.38)

Finally, using the definition of g(·), we get

t(θ) − θq(θ) ≥ t(θ′) − θq(θ′)   for all (θ, θ′) ∈ Θ².   (2.39)

Hence, the direct revelation mechanism g(·) is truthful.
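The proof's construction can be checked numerically on a toy finite example. The sketch below is purely illustrative (the type space, message space, and payoff numbers are invented): it takes a quasi-linear agent with utility t − θq, computes the best message m*(θ) for each type as in (2.37), composes the direct mechanism g = g̃ ∘ m*, and verifies that truth-telling is optimal in the composed mechanism.

```python
# Illustrative sketch of the revelation principle on a finite example.
# The types, message space, and the indirect mechanism below are invented.

THETAS = [1.0, 2.0]                 # type space Θ (marginal costs)
MESSAGES = ["a", "b", "c"]          # arbitrary message space M

# Indirect mechanism g~: message -> (output q~(m), transfer t~(m))
G_TILDE = {"a": (4.0, 7.0), "b": (2.0, 4.5), "c": (1.0, 2.0)}

def utility(theta, q, t):
    """Quasi-linear agent utility U = t - theta * q."""
    return t - theta * q

def best_message(theta):
    """m*(theta): the agent's utility-maximizing message, cf. (2.37)."""
    return max(MESSAGES, key=lambda m: utility(theta, *G_TILDE[m]))

# Direct revelation mechanism g = g~ ∘ m*: report -> (q(θ), t(θ))
G_DIRECT = {theta: G_TILDE[best_message(theta)] for theta in THETAS}

def is_truthful(g):
    """Check incentive compatibility: truth-telling beats any misreport."""
    return all(
        utility(th, *g[th]) >= utility(th, *g[rep])
        for th in THETAS for rep in THETAS
    )

# The composed direct mechanism reproduces the allocation rule of the
# indirect one, and truth-telling is optimal by construction.
assert all(G_DIRECT[th] == G_TILDE[best_message(th)] for th in THETAS)
assert is_truthful(G_DIRECT)
print("direct mechanism:", G_DIRECT)
```

Because each type's reported allocation is, by construction, the one it already preferred among all messages, misreporting can only select an allocation the type had already rejected, which is exactly the logic of (2.38) and (2.39).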

Importantly, the revelation principle provides a considerable simplification of contract theory. It enables us to restrict the analysis to a simple and well-defined family of functions, the truthful direct revelation mechanisms.

Earlier analyses of the set of incentive compatible mechanisms took place in multiagent environments because their focus was the provision of public goods, bargaining, or voting problems. It is out of the scope of this volume to discuss multiagent models, but let us briefly mention that dominant strategy implementation requires that each agent's best strategy is to reveal his type truthfully whatever the reports made by the other agents. Gibbard (1973) characterized the dominant strategy (nonrandom) mechanisms (mappings from arbitrary strategy spaces into allocations) when feasible allocations belong to a finite set and when there is no a priori information on the players' preferences (which are strict orderings). He showed that such mechanisms had to be dictatorial, i.e., they had to correspond to the optimal choice of a single agent. As a corollary he showed that any voting mechanism (i.e., direct revelation mechanism) for which the truth was a dominant strategy was also dictatorial. In this environment, anything achievable by a dominant strategy mechanism can also be achieved by a truthful direct revelation mechanism. So, Gibbard proved one version of the revelation principle indirectly. For the case of quasi-linear preferences, Green and Laffont (1977) defined dominant-strategy, truthful direct revelation mechanisms and proved directly that, for any other dominant strategy mechanism, there is an equivalent truthful direct revelation mechanism (and they characterized the class of these mechanisms). Dasgupta, Hammond, and Maskin (1979) extended this direct proof to any family of preferences. The revelation principle can be extended to settings where several agents are privately informed about their own types with the less demanding concept of Bayesian-Nash equilibrium. In such a Bayesian context, the revelation principle requires that each agent's truthful report of his own type is a Bayesian-Nash equilibrium (Myerson 1979).
The expression “the revelation principle” finally appeared in Myerson (1981).

Source: Laffont Jean-Jacques, Martimort David (2002), The Theory of Incentives: The Principal-Agent Model, Princeton University Press.
