# Propensity theory of probability

Theory mainly associated with Karl Raimund Popper (1902-1994), though it goes back to Charles Sanders Peirce (1839-1914). Popper introduced it to replace the frequency theory of probability, in light of an objection he raised against that theory.

Probabilities are propensities, not of the objects under study but of the experimental arrangements which we keep constant during repeated experiments. Though propensities are not directly observable, Popper claims that they are no more mysterious than forces or fields, and that their existence can be falsified (see falsificationism). This, however, may be doubted if the run of relevant events is potentially infinite; we may be forced to say that probably the propensity exists, or (if the propensity is simply identified with the evidence for it) that it is rational to act on it because probably it will continue.

In neither case is ‘probably’ explained, and the theory seems open to most of the objections to the frequency theory, except the rather recherché one that Popper introduced it to avoid.

Source:
C S Peirce, Collected Papers, 2 (1932), 404-14;
K R Popper, ‘The Propensity Interpretation of Probability’, British Journal for the Philosophy of Science (1959)

The propensity theory of probability is one interpretation of the concept of probability. Theorists who adopt this interpretation think of probability as a physical propensity, or disposition, or tendency of a given type of physical situation to yield an outcome of a certain kind, or to yield a long-run relative frequency of such an outcome.[1]

Propensities are not relative frequencies, but purported causes of the observed stable relative frequencies. Propensities are invoked to explain why repeating a certain kind of experiment will generate a given outcome type at a persistent rate. A central aspect of this explanation is the law of large numbers. This law, which is a consequence of the axioms of probability, says that if (for example) a coin is tossed repeatedly many times, in such a way that its probability of landing heads is the same on each toss, and the outcomes are probabilistically independent, then the relative frequency of heads will (with high probability) be close to the probability of heads on each single toss. This law suggests that stable long-run frequencies are a manifestation of invariant single-case probabilities. Frequentists are unable to take this approach, since relative frequencies do not exist for single tosses of a coin, but only for large ensembles or collectives. Hence, these single-case probabilities are known as propensities or chances.
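The law of large numbers described above can be illustrated with a short simulation. The sketch below (an illustrative construction, not from the text) models a coin whose single-case probability of heads, the propensity value `p = 0.3`, is held fixed across independent tosses, and shows the observed relative frequency stabilising near that value as the number of tosses grows.

```python
import random

def relative_frequency(p: float, n: int, seed: int = 0) -> float:
    """Simulate n independent tosses of a coin whose single-case
    probability (propensity) of heads is p, and return the observed
    relative frequency of heads."""
    rng = random.Random(seed)  # fixed seed for a reproducible run
    heads = sum(1 for _ in range(n) if rng.random() < p)
    return heads / n

# As n grows, the relative frequency settles near the fixed
# single-case probability p = 0.3 (an illustrative value).
for n in (10, 1_000, 100_000):
    print(n, relative_frequency(0.3, n))
```

On the propensity reading, the stable frequency that emerges in the long run is explained by the invariant single-case parameter `p`; the frequentist, by contrast, has no probability to assign to any one toss taken on its own.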

In addition to explaining the emergence of stable relative frequencies, the idea of propensity is motivated by the desire to make sense of single-case probability attributions in quantum mechanics, such as the probability of decay of a particular atom at a particular moment.

The main challenge facing propensity theories is to say exactly what propensity means, and to show that propensity thus defined has the required properties.