Bernoulli’s hypothesis (18TH CENTURY)

Proposed by Swiss mathematician Daniel Bernoulli (1700-1782), Bernoulli’s hypothesis suggests added dimensions to the evaluation of risk.

Acceptance of a risk depends not only on the nominal value of what may be lost but also on the intrinsic value, or utility, of it to the person accepting the risk.

In economics, game theory, and decision theory, the expected utility hypothesis—concerning people’s preferences with regard to choices that have uncertain outcomes (gambles)—states that the subjective value associated with an individual’s gamble is the statistical expectation of that individual’s valuations of the outcomes of that gamble, where these valuations may differ from the dollar values of those outcomes. Daniel Bernoulli’s treatment of the St. Petersburg paradox in 1738 is considered the beginning of the hypothesis. The hypothesis has proven useful in explaining some popular choices that seem to contradict the expected value criterion (which takes into account only the sizes of the payouts and the probabilities of occurrence), such as occur in the contexts of gambling and insurance.

The von Neumann–Morgenstern utility theorem provides necessary and sufficient conditions under which the expected utility hypothesis holds. From relatively early on, it was accepted that some of these conditions would be violated by real decision-makers in practice but that the conditions could be interpreted nonetheless as ‘axioms’ of rational choice.

Until the mid-twentieth century, the standard term for expected utility was “moral expectation”, contrasted with “mathematical expectation” for the expected value.

Bernoulli arrived at expected utility through the St. Petersburg paradox. In this game, a fair coin is flipped until it first lands heads; if heads first appears on the nth flip, the player receives $2^n. The game highlights the gap between what people are actually willing to pay to play and what the expected-value criterion says the game is worth.
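As a quick illustration (a minimal Python sketch, not part of the original discussion), the game can be simulated to see how modest typical payouts are:

```python
import random

def st_petersburg_payout():
    """Play one round: flip a fair coin until heads; pay $2^n,
    where n is the number of flips taken."""
    n = 1
    while random.random() < 0.5:  # tails with probability 1/2
        n += 1
    return 2 ** n

# Average payout over many simulated rounds stays modest,
# even though the game's theoretical expected value is infinite.
rounds = 100_000
average = sum(st_petersburg_payout() for _ in range(rounds)) / rounds
print(f"Average payout over {rounds} rounds: ${average:.2f}")
```

Sample averages grow only roughly logarithmically with the number of rounds played, which is one way of seeing why nobody would pay a very large fee to enter.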

Formula for expected utility

When the entity $x$ whose value $x_i$ affects a person’s utility takes on one of a set of discrete values, the formula for expected utility, which is assumed to be maximized, is

$$E[u(x)] = p_1 \cdot u(x_1) + p_2 \cdot u(x_2) + \cdots$$

where the left side is the subjective valuation of the gamble as a whole, $x_i$ is the ith possible outcome, $u(x_i)$ is its valuation, and $p_i$ is its probability. There could be either a finite set of possible values $x_i$, in which case the right side of this equation has a finite number of terms, or an infinite set of discrete values, in which case the right side has an infinite number of terms.
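As an illustration, the discrete formula translates directly into a few lines of Python (a minimal sketch; the outcomes, probabilities, and the choice of log utility are assumptions for the example):

```python
import math

def expected_utility(outcomes, probabilities, u=math.log):
    """E[u(x)] = p1*u(x1) + p2*u(x2) + ... for a discrete gamble."""
    return sum(p * u(x) for x, p in zip(outcomes, probabilities))

# A gamble paying $50 or $150 with equal probability,
# evaluated with Bernoulli's logarithmic utility.
print(expected_utility([50, 150], [0.5, 0.5]))  # ~4.46, the utility of ~$86.6
```

Note that the certainty equivalent here (about $86.6) is below the $100 expected value, which already reflects risk aversion.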

When $x$ can take on any of a continuous range of values, the expected utility is given by

$$E[u(x)] = \int_{-\infty}^{\infty} u(x) f(x)\,dx,$$

where $f(x)$ is the probability density function of $x$.
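The continuous case can be approximated numerically. Below is a sketch using SciPy quadrature, assuming (purely for illustration) normally distributed wealth and the CARA utility discussed later in this article:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

a = 0.01                           # risk-aversion parameter (assumed)
u = lambda x: -np.exp(-a * x)      # CARA utility (see the examples section below)
f = norm(loc=100, scale=20).pdf    # assumed wealth density: N(100, 20^2)

# E[u(x)] = integral of u(x) f(x) dx over the real line
eu, _ = quad(lambda x: u(x) * f(x), -np.inf, np.inf)
print(eu)  # close to -exp(-a*100 + a**2 * 20**2 / 2) by the normal MGF
```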

Expected value and choice under risk

In the presence of risky outcomes, a human decision maker does not always choose the option with the higher expected value. For example, suppose there is a choice between a guaranteed payment of $1.00 and a gamble in which the probability of getting a $100 payment is 1 in 80 and the alternative, far more likely outcome (79 out of 80) is receiving $0. The expected value of the first alternative is $1.00 and the expected value of the second alternative is $1.25. According to expected value theory, people should choose the $100-or-nothing gamble; however, as stressed by expected utility theory, some people are risk averse enough to prefer the sure thing, despite its lower expected value. People with less risk aversion would choose the riskier, higher-expected-value gamble. This observation is the starting point for utility theory.
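To make the comparison concrete, here is a sketch using square-root utility (my choice for the example, since log utility is undefined at $0):

```python
def u(w):
    return w ** 0.5  # concave: diminishing marginal utility

# Option 1: guaranteed $1.00
eu_sure = u(1.00)                             # = 1.0

# Option 2: $100 with probability 1/80, else $0
eu_gamble = (1/80) * u(100) + (79/80) * u(0)  # = 0.125

# Expected values: $1.00 vs $1.25, yet the sure thing wins on utility.
print(eu_sure > eu_gamble)  # True: this risk-averse agent takes the $1.00
```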

Bernoulli’s formulation

Nicolas Bernoulli described the St. Petersburg paradox (involving infinite expected values) in 1713, prompting two Swiss mathematicians to develop expected utility theory as a solution. The theory can also more accurately describe more realistic scenarios (where expected values are finite) than expected value alone. In 1728, Gabriel Cramer, in a letter to Nicolas Bernoulli, wrote, “the mathematicians estimate money in proportion to its quantity, and men of good sense in proportion to the usage that they may make of it.”

In 1738, Nicolas’ cousin Daniel Bernoulli published the canonical 18th-century description of this solution in Specimen theoriae novae de mensura sortis (Exposition of a New Theory on the Measurement of Risk). Daniel Bernoulli proposed that a nonlinear function of the utility of an outcome should be used instead of the expected value of an outcome, accounting for risk aversion, where the risk premium is higher for low-probability events than the difference between the payout level of a particular outcome and its expected value. Bernoulli further proposed that the goal of the gambler was not to maximize his expected gain but rather to maximize the logarithm of his gain.

Bernoulli’s paper was the first formalization of marginal utility, which has broad application in economics in addition to expected utility theory. He used this concept to formalize the idea that the same amount of additional money was less useful to an already-wealthy person than it would be to a poor person.

Infinite expected value—St. Petersburg paradox

The St. Petersburg paradox (named after the journal in which Bernoulli’s paper was published) arises when there is no upper bound on the potential rewards from very low probability events. Because some probability distribution functions have an infinite expected value, an expected-wealth maximizing person would pay an arbitrarily large finite amount to take this gamble. In real life, people do not do this.

Bernoulli proposed a solution to this paradox in his paper: the utility function used in real life means that the expected utility of the gamble is finite, even if its expected value is infinite. (Thus he hypothesized diminishing marginal utility of increasingly larger amounts of money.) It has also been resolved differently by other economists by proposing that very low probability events are neglected, by taking into account the finite resources of the participants, or by noting that one simply cannot buy that which is not sold (and that sellers would not produce a lottery whose expected loss to them were unacceptable).
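Bernoulli’s resolution is easy to verify numerically. A sketch, assuming log utility and the $2^k payout scheme described above:

```python
import math

# St. Petersburg gamble: payout 2^k with probability 2^-k, k = 1, 2, ...
# Expected value: sum of 2^-k * 2^k = 1 + 1 + 1 + ...  (diverges).
# Expected log utility: sum of 2^-k * log(2^k) = log(2) * sum(k/2^k) = 2*log(2).
eu = sum(2 ** -k * math.log(2 ** k) for k in range(1, 200))
print(eu, 2 * math.log(2))  # both ~1.3863: the series converges

# Certainty equivalent: the sure amount with the same utility.
print(math.exp(eu))  # ~$4, a finite fair price for the gamble
```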

Savage’s framework

In the 1950s, the American statistician Leonard Jimmie Savage derived a framework for expected utility that was, at the time, considered the first and most thorough foundation of the concept. Savage’s framework involved proving that expected utility could be used to make an optimal choice among several acts, on the basis of seven postulates (notated P1–P7).

Savage’s framework has since been used in neo-Bayesian statistics (see Bayesian probability) and the field of applied statistics.

Von Neumann–Morgenstern formulation

The von Neumann–Morgenstern axioms

There are four axioms of the expected utility theory that define a rational decision maker. They are completeness, transitivity, independence and continuity.

Completeness assumes that an individual has well-defined preferences and can always decide between any two alternatives.

  • Axiom (Completeness): For every A and B, either $A \succeq B$ or $A \preceq B$.

This means that the individual either prefers A to B, or is indifferent between A and B, or prefers B to A.

Transitivity assumes that, as an individual decides according to the completeness axiom, the individual also decides consistently.

  • Axiom (Transitivity): For every A, B and C with $A \succeq B$ and $B \succeq C$, we must have $A \succeq C$.

Independence of irrelevant alternatives pertains to well-defined preferences as well. It assumes that two gambles mixed with an irrelevant third one will maintain the same order of preference as when the two are presented independently of the third one. The independence axiom is the most controversial of the four.

  • Axiom (Independence of irrelevant alternatives): Let A, B, and C be three lotteries with $A \succeq B$, and let $t \in (0,1]$;
    then $tA + (1-t)C \succeq tB + (1-t)C$: the mixing lottery C is irrelevant, and the order of preference for A over B holds independently of the presence of C.

Continuity assumes that when there are three lotteries (A, B and C) and the individual prefers A to B and B to C, then there should be a possible combination of A and C in which the individual is then indifferent between this mix and the lottery B.

  • Axiom (Continuity): Let A, B and C be lotteries with $A \succeq B \succeq C$; then there exists a probability p such that B is equally good as $pA + (1-p)C$.

If all these axioms are satisfied, then the individual is said to be rational and the preferences can be represented by a utility function, i.e. one can assign numbers (utilities) to each outcome of the lottery such that choosing the best lottery according to the preference $\succeq$ amounts to choosing the lottery with the highest expected utility. This result is called the von Neumann–Morgenstern utility representation theorem.

In other words, if an individual’s behavior always satisfies the above axioms, then there is a utility function such that the individual will choose one gamble over another if and only if the expected utility of one exceeds that of the other. The expected utility of any gamble may be expressed as a linear combination of the utilities of the outcomes, with the weights being the respective probabilities. Utility functions are also normally continuous functions. Such utility functions are also referred to as von Neumann–Morgenstern (vNM) utility functions. This is a central theme of the expected utility hypothesis in which an individual chooses not the highest expected value, but rather the highest expected utility. The expected utility maximizing individual makes decisions rationally based on the axioms of the theory.
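A small sketch of what the representation theorem buys in practice: once a vNM utility function is fixed, ranking gambles reduces to comparing expected utilities, and any positive affine transformation of the utility yields the same ranking (the lotteries and utility below are invented for illustration):

```python
import math

def expected_utility(lottery, u):
    """lottery: list of (probability, outcome) pairs."""
    return sum(p * u(x) for p, x in lottery)

A = [(1.0, 100)]                   # sure $100
B = [(0.5, 0.0001), (0.5, 250)]    # coin flip: ~$0 or $250 (0.0001 keeps log defined)

u = math.log
v = lambda x: 3 * math.log(x) + 7  # positive affine transformation of u

# The ranking is identical under u and v, as the theorem guarantees.
print(expected_utility(A, u) > expected_utility(B, u))  # True
print(expected_utility(A, v) > expected_utility(B, v))  # True
```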

The von Neumann–Morgenstern formulation is important in the application of set theory to economics because it was developed shortly after the Hicks–Allen “ordinal revolution” of the 1930s, and it revived the idea of cardinal utility in economic theory. However, while in this context the utility function is cardinal, in that implied behavior would be altered by a non-linear monotonic transformation of utility, the expected utility function is ordinal because any monotonic increasing transformation of expected utility gives the same behavior.

Risk aversion

Expected utility theory takes into account that individuals may be risk-averse, meaning that the individual would refuse a fair gamble (a fair gamble has an expected value of zero). Risk aversion implies that their utility functions are concave and show diminishing marginal utility of wealth. The risk attitude is directly related to the curvature of the utility function: risk neutral individuals have linear utility functions, while risk seeking individuals have convex utility functions and risk averse individuals have concave utility functions. The degree of risk aversion can be measured by the curvature of the utility function.
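The connection between concavity and refusing a fair gamble is an instance of Jensen’s inequality; a quick numerical sketch (the utility function and stakes are arbitrary choices):

```python
import math

u = math.sqrt   # concave utility
w = 100         # current wealth

# A fair gamble: win or lose $50 with equal probability (expected value 0).
eu_gamble = 0.5 * u(w + 50) + 0.5 * u(w - 50)
eu_refuse = u(w)

# Jensen: E[u(w + gamble)] < u(E[w + gamble]) = u(w) for strictly concave u.
print(eu_gamble, eu_refuse)   # ~9.66 vs 10.0
print(eu_refuse > eu_gamble)  # True: the risk-averse agent declines
```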

Since risk attitudes are unchanged under affine transformations of u, the second derivative $u''$ is not an adequate measure of the risk aversion of a utility function. Instead, it needs to be normalized. This leads to the definition of the Arrow–Pratt measure of absolute risk aversion:

$$\mathit{ARA}(w) = -\frac{u''(w)}{u'(w)},$$

where $w$ is wealth.

The Arrow–Pratt measure of relative risk aversion is:

$$\mathit{RRA}(w) = -\frac{w u''(w)}{u'(w)}$$

Special classes of utility functions are the CRRA (constant relative risk aversion) functions, where RRA(w) is constant, and the CARA (constant absolute risk aversion) functions, where ARA(w) is constant. They are often used in economics for simplification.
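These definitions are straightforward to check symbolically. A sketch with SymPy verifying that $-e^{-aw}$ is CARA while $\log(w)$ and $w^{\alpha}$ are CRRA:

```python
import sympy as sp

w, a, alpha = sp.symbols('w a alpha', positive=True)

def ARA(u):
    return sp.simplify(-sp.diff(u, w, 2) / sp.diff(u, w))

def RRA(u):
    return sp.simplify(-w * sp.diff(u, w, 2) / sp.diff(u, w))

print(ARA(-sp.exp(-a * w)))  # a          -> constant absolute risk aversion
print(RRA(sp.log(w)))        # 1          -> constant relative risk aversion
print(RRA(w ** alpha))       # 1 - alpha  -> constant relative risk aversion
```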

A decision that maximizes expected utility also maximizes the probability of the decision’s consequences being preferable to some uncertain threshold (Castagnoli and LiCalzi, 1996; Bordley and LiCalzi, 2000; Bordley and Kirkwood). In the absence of uncertainty about the threshold, expected utility maximization simplifies to maximizing the probability of achieving some fixed target. If the uncertainty is uniformly distributed, then expected utility maximization becomes expected value maximization. Intermediate cases lead to increasing risk aversion above some fixed threshold and increasing risk seeking below a fixed threshold.

Examples of von Neumann–Morgenstern utility functions

The utility function $u(w) = \log(w)$ was originally suggested by Bernoulli (see above). It has relative risk aversion constant and equal to one, and is still sometimes assumed in economic analyses. The utility function

$$u(w) = -e^{-aw}$$

exhibits constant absolute risk aversion, and for this reason is often avoided, although it has the advantage of offering substantial mathematical tractability when asset returns are normally distributed. Note that, as per the affine transformation property alluded to above, the utility function $K - e^{-aw}$ gives exactly the same preference orderings as does $-e^{-aw}$; thus it is irrelevant that the values of $-e^{-aw}$ and its expected value are always negative: what matters for preference ordering is which of two gambles gives the higher expected utility, not the numerical values of those expected utilities.

The class of constant relative risk aversion utility functions contains three categories. Bernoulli’s utility function

$$u(w) = \log(w)$$

has relative risk aversion equal to 1. The functions

$$u(w) = w^{\alpha}$$

for $\alpha \in (0,1)$ have relative risk aversion equal to $1 - \alpha \in (0,1)$. And the functions

$$u(w) = -w^{\alpha}$$

for $\alpha < 0$ have relative risk aversion equal to $1 - \alpha > 1$.

See also the discussion of utility functions having hyperbolic absolute risk aversion (HARA).

Measuring risk in the expected utility context

Often people refer to “risk” in the sense of a potentially quantifiable entity. In the context of mean-variance analysis, variance is used as a risk measure for portfolio return; however, this is only valid if returns are normally distributed or otherwise jointly elliptically distributed, or in the unlikely case in which the utility function has a quadratic form. David E. Bell, however, proposed a measure of risk which follows naturally from a certain class of von Neumann–Morgenstern utility functions. Let utility of wealth be given by

$$u(w) = w - be^{-aw}$$

for individual-specific positive parameters a and b. Then expected utility is given by

$$\begin{aligned}
\operatorname{E}[u(w)] &= \operatorname{E}[w] - b\operatorname{E}[e^{-aw}] \\
&= \operatorname{E}[w] - b\operatorname{E}\!\left[e^{-a\operatorname{E}[w] - a(w - \operatorname{E}[w])}\right] \\
&= \operatorname{E}[w] - be^{-a\operatorname{E}[w]}\operatorname{E}\!\left[e^{-a(w - \operatorname{E}[w])}\right] \\
&= \text{Expected wealth} - b \cdot e^{-a \cdot \text{Expected wealth}} \cdot \text{Risk}.
\end{aligned}$$

Thus the risk measure is $\operatorname{E}\!\left[e^{-a(w - \operatorname{E}[w])}\right]$, which differs between two individuals if they have different values of the parameter $a$, allowing different people to disagree about the degree of risk associated with any given portfolio. Individuals sharing a given risk measure (based on a given value of a) may choose different portfolios because they may have different values of b. See also Entropic risk measure.
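The decomposition can be confirmed by simulation; a sketch in which the wealth distribution and the parameters a and b are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 0.05, 10.0                        # individual-specific parameters
w = rng.normal(100, 15, size=1_000_000)  # simulated portfolio wealth

expected_wealth = w.mean()
risk = np.mean(np.exp(-a * (w - expected_wealth)))  # Bell's risk measure

eu_direct = np.mean(w - b * np.exp(-a * w))
eu_decomposed = expected_wealth - b * np.exp(-a * expected_wealth) * risk
print(eu_direct, eu_decomposed)  # identical up to floating-point error
```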

For general utility functions, however, expected utility analysis does not permit the expression of preferences to be separated into two parameters with one representing the expected value of the variable in question and the other representing its risk.

Criticism

Expected utility theory is a theory about how to make optimal decisions under risk. It has a normative interpretation, which economists used to think applies in all situations to rational agents but now tend to regard as a useful and insightful first-order approximation. In empirical applications, a number of violations have been shown to be systematic, and these falsifications have deepened understanding of how people actually decide. In 1979, Daniel Kahneman and Amos Tversky presented their prospect theory, which showed empirically, among other things, how individuals’ preferences between the same choices are inconsistent, depending on how those choices are presented.

Like any mathematical model, expected utility theory is an abstraction and simplification of reality. The mathematical correctness of expected utility theory and the salience of its primitive concepts do not guarantee that expected utility theory is a reliable guide to human behavior or optimal practice.

The mathematical clarity of expected utility theory has helped scientists design experiments to test its adequacy, and to distinguish systematic departures from its predictions. This has led to the field of behavioral finance, which has produced deviations from expected utility theory to account for the empirical facts.

Conservatism in updating beliefs

It is well established that humans find logic hard, mathematics harder, and probability even more challenging. Psychologists have documented systematic departures of human judgment from the rules of probability. Consider, for example, the Monty Hall problem.

In updating probability distributions using evidence, a standard method uses conditional probability, namely the rule of Bayes. An experiment on belief revision has suggested that humans change their beliefs faster when using Bayesian methods than when using informal judgment.
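For concreteness, here is the Bayes-rule computation for a two-urn setup of the kind used in classic conservatism experiments (the urn compositions and draw counts are illustrative):

```python
# Two urns: urn A is 70% red chips, urn B is 30% red.
# One urn is picked at random; we draw 8 red and 4 blue chips with replacement.
# What is the probability the chosen urn is A?

prior_A = 0.5
likelihood_A = 0.7 ** 8 * 0.3 ** 4
likelihood_B = 0.3 ** 8 * 0.7 ** 4

posterior_A = (prior_A * likelihood_A /
               (prior_A * likelihood_A + (1 - prior_A) * likelihood_B))
print(posterior_A)  # ~0.967; subjects in such experiments typically report far less
```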

Irrational deviations

Behavioral finance has produced several generalized expected utility theories to account for instances where people’s choices deviate from those predicted by expected utility theory. These deviations are described as “irrational” because they can depend on the way the problem is presented, not on the actual costs, rewards, or probabilities involved.

Particular theories include prospect theory, rank-dependent expected utility, cumulative prospect theory, and SP/A theory.

Preference reversals over uncertain outcomes

Starting with studies such as Lichtenstein & Slovic (1971), it was discovered that subjects sometimes exhibit signs of preference reversals with regard to their certainty equivalents of different lotteries. Specifically, when eliciting certainty equivalents, subjects tend to value “p bets” (lotteries with a high chance of winning a low prize) lower than “$ bets” (lotteries with a small chance of winning a large prize). When subjects are asked which lotteries they prefer in direct comparison, however, they frequently prefer the “p bets” over “$ bets”. Many studies have examined this “preference reversal”, from both an experimental (e.g., Plott & Grether, 1979) and theoretical (e.g., Holt, 1986) standpoint, indicating that this behavior can be brought into accordance with neoclassical economic theory under specific assumptions.

Uncertain probabilities

If one is using the frequentist notion of probability, where probabilities are considered to be fixed values, then applying expected value and expected utility to decision-making requires knowing the probabilities of various outcomes. However, in practice there will be many situations where the probabilities are unknown, and one is operating under uncertainty. In economics, Knightian uncertainty or ambiguity may occur. Thus one must make assumptions about the probabilities, but then the expected values of various decisions can be very sensitive to the assumptions. This is particularly a problem when the expectation is dominated by rare extreme events, as in a long-tailed distribution.
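This sensitivity is easy to exhibit. A sketch comparing expected values of a tail-risk-exposed position under slightly different assumed disaster probabilities (all numbers invented for illustration):

```python
def expected_value(p_disaster, loss=-1_000_000, gain=100):
    """Small steady gain, rare large loss."""
    return (1 - p_disaster) * gain + p_disaster * loss

# A tenfold change in an already-tiny probability flips the sign of the EV.
for p in (1e-5, 1e-4, 1e-3):
    print(f"p = {p:.0e}: EV = {expected_value(p):+.2f}")
```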

Alternative decision techniques are robust to uncertainty of probability of outcomes, either not depending on probabilities of outcomes and only requiring scenario analysis (as in minimax or minimax regret), or being less sensitive to assumptions.
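Minimax regret, for instance, needs only a payoff table and no probabilities at all; a sketch on an invented three-act, three-scenario table:

```python
# payoffs[act][scenario]; the probabilities of the scenarios are unknown.
payoffs = {
    "bonds":  [80, 80, 80],
    "stocks": [200, 60, -50],
    "mixed":  [140, 70, 20],
}
scenarios = range(3)

# Regret = best achievable payoff in that scenario minus the act's payoff.
best = [max(payoffs[a][s] for a in payoffs) for s in scenarios]
max_regret = {a: max(best[s] - payoffs[a][s] for s in scenarios) for a in payoffs}

# Pick the act whose worst-case regret is smallest.
print(min(max_regret, key=max_regret.get), max_regret)  # "mixed" wins here
```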

Bayesian approaches to probability treat it as a degree of belief and thus they do not draw a distinction between risk and a wider concept of uncertainty: they deny the existence of Knightian uncertainty. They would model uncertain probabilities with hierarchical models, i.e. where the uncertain probabilities are modelled as distributions whose parameters are themselves drawn from a higher-level distribution (hyperpriors).
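A minimal sketch of the hierarchical idea: rather than fixing a success probability, draw it from a hyperprior and average the expected utility over that uncertainty (the distributions and payoffs here are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hyperprior: the unknown success probability p ~ Beta(2, 2).
p_samples = rng.beta(2, 2, size=100_000)

# Gamble: win $100 with probability p, else $0; utility u(w) = sqrt(w + 1).
u = lambda w: np.sqrt(w + 1)
eu_given_p = p_samples * u(100) + (1 - p_samples) * u(0)

print(eu_given_p.mean())  # expected utility averaged over the hyperprior
```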

See also: bounded rationality, St. Petersburg paradox, uncertainty
