Alternative name for a weak version of the verifiability principle, whereby in order to be meaningful a statement must, if not a tautology, be confirmable or disconfirmable by observation.
Origins
Although verificationist principles of a general sort, grounding scientific theory in some verifiable experience, can be found retrospectively even in the American pragmatist C. S. Peirce and in the French conventionalist Pierre Duhem,[2] who fostered instrumentalism,[3] the vigorous program termed verificationism was launched by the logical positivists who, emerging from the Berlin Circle and the Vienna Circle in the 1920s, sought an epistemology whereby philosophical discourse would be, in their view, as authoritative and meaningful as empirical science.
Logical positivists garnered the verifiability criterion of cognitive meaningfulness from the young Ludwig Wittgenstein’s philosophy of language, posed in his 1921 book Tractatus Logico-Philosophicus,[4] and, following Bertrand Russell, sought to reformulate the analytic–synthetic distinction in a way that would reduce mathematics and logic to semantical conventions. This was pivotal to verificationism: otherwise, logic and mathematics would be classified as synthetic a priori knowledge and thus rendered “meaningless” under the verifiability criterion.
Seeking grounding in the empiricism of David Hume,[5] Auguste Comte, and Ernst Mach, along with the positivism of the latter two, they borrowed some perspectives from Immanuel Kant and found their exemplar of science in Albert Einstein’s general theory of relativity.
Revisions
Logical positivists within the Vienna Circle quickly recognized that the verifiability criterion was too stringent. Notably, all universal generalizations are empirically unverifiable, since no finite set of observations can conclusively establish a claim about every case. Under verificationism, then, vast domains of science and reason, including scientific hypotheses themselves, would be rendered meaningless.[6]
Rudolf Carnap, Otto Neurath, Hans Hahn, and Philipp Frank led a faction seeking to make the verifiability criterion more inclusive, beginning a movement they referred to as the “liberalization of empiricism”. Moritz Schlick and Friedrich Waismann led a “conservative wing” that maintained a strict verificationism. Whereas Schlick sought to reduce universal generalizations to frameworks of ‘rules’ from which verifiable statements can be derived,[7] Hahn argued that the verifiability criterion should accommodate less-than-conclusive verifiability.[8] Among other ideas espoused by the liberalization movement were physicalism over Mach’s phenomenalism, coherentism over foundationalism, as well as pragmatism and fallibilism.[6][9]
In 1936, Carnap sought a switch from verification to confirmation.[6] His confirmability criterion (confirmationism) would not require conclusive verification, thus accommodating universal generalizations, but would allow partial testability to establish “degrees of confirmation” on a probabilistic basis. Despite employing abundant logical and mathematical tools for the purpose, Carnap never succeeded in formalizing his thesis; in all of his formulations, a universal law’s degree of confirmation is zero.[10]
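A toy calculation suggests why universal laws fare so badly on probabilistic measures (a simplification for illustration, not Carnap’s actual system of confirmation functions; the predicate F and the confirmation measure c here are hypothetical): if a law ranges over infinitely many instances, each supported by the evidence only to some degree p < 1 and treated as independent, then

$$
c\big(\forall x\, F(x)\big) \;\le\; \lim_{n \to \infty} p^{n} \;=\; 0, \qquad 0 \le p < 1,
$$

so no finite body of evidence can raise the law’s degree of confirmation above zero.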
That same year saw the publication of A. J. Ayer’s work, Language, Truth and Logic, in which he proposed two types of verification: strong and weak. Strong verification required that a statement’s truth be conclusively established by observation, while weak verification accommodated statements that observation could only render probable. Ayer also distinguished between practical and theoretical verifiability: under the latter, propositions that cannot be verified in practice would still be meaningful if they could be verified in principle.
Karl Popper’s The Logic of Scientific Discovery proposed falsificationism as a criterion by which scientific hypotheses would be tenable. Falsificationism would allow hypotheses expressed as universal generalizations, such as “all swans are white”, to be provisionally tenable until falsified by evidence, in contrast to verificationism, under which they would be disqualified immediately as meaningless.
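The asymmetry Popper exploited admits a standard first-order sketch (textbook notation, not Popper’s own formalism): the universal hypothesis

$$
\forall x\,\big(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\big)
$$

is entailed by no finite conjunction of positive instances, yet a single observation statement of the form

$$
\exists x\,\big(\mathrm{Swan}(x) \wedge \neg\mathrm{White}(x)\big)
$$

deductively refutes it; such hypotheses are falsifiable though never conclusively verifiable.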
Though generally considered a revision of verificationism,[4][11] Popper intended falsificationism as a methodological standard specific to the sciences rather than as a theory of meaning.[4] Popper regarded scientific hypotheses as unverifiable, as well as not “confirmable” under Rudolf Carnap’s thesis.[4][12] He also found non-scientific, metaphysical, ethical, and aesthetic statements often rich in meaning and important in the origination of scientific theories.[4][13]
Decline
The 1951 article “Two Dogmas of Empiricism”, by Willard Van Orman Quine, attacked the analytic/synthetic division and apparently rendered the verificationist program untenable. Carl Hempel, one of verificationism’s greatest internal critics, had recently concluded the same as to the verifiability criterion.[14] In 1958, Norwood Hanson explained that even direct observations must be collected, sorted, and reported under the guidance and constraint of theory, which sets a horizon of expectation and interpretation; observational reports, then, are never neutral but are laden with theory.[15]
Thomas Kuhn’s landmark 1962 book, The Structure of Scientific Revolutions, which traced paradigms of science being overturned by revolutionary science within fundamental physics, critically destabilized confidence in scientific foundationalism,[16] commonly if erroneously attributed to verificationism.[17] Popper, who had long claimed to have killed verificationism, but recognized that some would mistake his falsificationism for a continuation of it,[11] was knighted in 1965. In 1967, John Passmore, a leading historian of 20th-century philosophy, wrote, “Logical positivism is dead, or as dead as a philosophical movement ever becomes”, a view shared generally among philosophers.[18] Logical positivism’s fall heralded postpositivism, in which Popper’s view of human knowledge as hypothetical, continually growing, and open to change ascended,[11] and verificationism became mostly maligned.[2]
Legacy
Although Karl Popper’s falsificationism has been widely criticized by philosophers,[19] Popper is often the only philosopher of science praised by scientists.[12] Verificationists, by contrast, have been likened to economists of the 19th century who took circuitous, protracted measures to evade refutation of their preconceived principles.[20] Still, the logical positivists practiced Popper’s principles of conjecturing and refuting until they ran their course, catapulting Popper, initially a contentious misfit, to carry the richest philosophy out of interwar Vienna.[11] And his falsificationism, like verificationism, poses a criterion, falsifiability, to ensure that empiricism anchors scientific theory.[2]
In a 1979 TV interview, A. J. Ayer, who had introduced logical positivism to the English-speaking world in the 1930s, was asked what he saw as its main defects, and answered that “nearly all of it was false”.[18] However, he soon admitted to still holding “the same general approach”.[18] The “general approach” of empiricism and reductionism—whereby mental phenomena resolve to the material or physical, and philosophical questions largely resolve to ones of language and meaning—has run through Western philosophy since the 17th century and lived beyond logical positivism’s fall.[18]
In 1977, Ayer had noted, “The verification principle is seldom mentioned and when it is mentioned it is usually scorned; it continues, however, to be put to work. The attitude of many philosophers reminds me of the relationship between Pip and Magwitch in Dickens’s Great Expectations. They have lived on the money, but are ashamed to acknowledge its source”.[2] In the late 20th and early 21st centuries, the general concept of verification criteria—in forms that differed from those of the logical positivists—was defended by Bas van Fraassen, Michael Dummett, Crispin Wright, Christopher Peacocke, David Wiggins, Richard Rorty, and others.