# Philosophy of Probability

Considerations on Philosophical Interpretations of Probability written for my senior year Statistics class.


Probability theory, the study of uncertainty and variability, exists everywhere in our lives. It is the fundamental underpinning that enables our daily decision making, our assessment of a historical text as reliable evidence, our belief in scientific observations through statistical testing, our ability to plan and predict for the future with confidence intervals, and much more. Within the fields of epistemology, metaphysics, and ethics, understanding and applying probability is central to justifying our perceived knowledge, interpreting correlation as it possibly relates to causation, establishing laws of nature, and evaluating decision theory. As a result, it is essential that we understand, on a deeper level, this fundamental theory that we use so regularly to answer questions across various fields. This essay explores three interpretations of probability – classical, frequentist, and subjective – and, by examining the critiques and limitations each encounters, evaluates whether we are justified in trusting the conceptual power of these interpretations to inform us about the nature of our actions and the world around us.

First introduced by Pierre-Simon Laplace, the classical interpretation of probability is built upon the notion of probability shared fractionally across all outcomes assumed to be equally likely, a notion that arose from the study of games of chance. As the name suggests, this interpretation takes a traditional, theoretical, a priori view of probability theory, basing its main tenets on preconceived notions of a knowable equality between different outcomes without relying on experimental evidence. More specifically, in A Philosophical Essay on Probabilities, Laplace explains that the classical probability of an event is "the ratio of the number of cases favorable to it, to the number of all cases possible when nothing leads us to expect that any one of these cases should occur more than any other, which renders them, for us, equally possible" (6–7). To demonstrate this interpretation, take the example of choosing a single card from a standard deck of fifty-two cards. According to classical probability, the chance of choosing a specific card from the deck, regardless of which number or suit is specified, is 1/52. We arrive at this value by observing that there exists only one favorable case, when that single specified card is chosen, against what we presume to be the fifty-two equally likely outcomes of the act of choosing a card. We can write this expression more generally as P(A) = O / N, where P(A) is the probability of event A, O is the number of successful outcomes corresponding to the occurrence of A, and N is the total number of mutually exclusive and equally likely outcomes.
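The classical ratio P(A) = O / N can be sketched directly in a few lines of code; this is a minimal illustration of the formula above, using the fifty-two-card deck example (the function name is my own, not Laplace's notation):

```python
from fractions import Fraction

def classical_probability(favorable: int, total: int) -> Fraction:
    """Classical (Laplacian) probability: the ratio of favorable cases
    to the total number of equally likely possible cases."""
    return Fraction(favorable, total)

# Drawing one specified card from a standard 52-card deck:
print(classical_probability(1, 52))   # 1/52

# Drawing any card of a specified suit (13 favorable cases):
print(classical_probability(13, 52))  # 1/4
```

Using `Fraction` keeps the result as an exact ratio of cases, which matches the spirit of the classical definition better than a rounded decimal would.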

In examining this approach, it becomes readily apparent that applying the classical definition of probability poses severe limitations in terms of the conditions each situation is assumed to meet. First and foremost, a natural question arises: what does it mean for outcomes to be equal or "of the same kind," given that Laplace builds the term "equally possible" into his description of classical probability, and who designates such classifications of a priori equality? Consider the situation of flipping a coin. Traditionally, we think of the possibility that the coin lands heads and the possibility that it lands tails as equal outcomes. However, such a presumption fails to acknowledge exceptional situations in which the coin could land on its edge, skewing the assumed equality of outcomes taken into account when calculating the probability of a coin toss. It is unclear how we can justify the claim that we know all possible outcomes of a coin toss, and that we know these outcomes (which we believe to be two) are equally likely, unless we already have some existing knowledge of the probability of a coin toss. Following this line of reasoning only leads us to the circular reasoning on which classical probability seemingly depends. The principle of indifference attempts to defuse this criticism by clarifying that equal probability is knowable when there is "no evidence favoring one possibility over another" (Hájek).
Still, while this principle quells concerns about begging the question when a mutual lack of evidence for either outcome is taken to indicate equal likelihood, the same cannot be said for situations in which evidence must be examined. There, the determination of how evidence is weighed and comparatively favored relative to the evidence for another outcome itself depends on an interpretation of probability to evaluate the confidence and credibility of that evidence, leaving us with the remaining issue of circularity (Hájek).

Next, we take a closer look at the frequentist perspective, an alternative approach to interpreting probability that relies on empirical, a posteriori evidence to assign probabilities, providing more substantive justification in ways the classical interpretation fails to do. Rather than building probabilities upon an assumed understanding of possible outcomes, the frequency interpretation strictly constructs probabilities from the outcomes of repeated trials, extrapolating a probability as the number of trials approaches the limit of infinity. This interpretation can be seen in action by tossing a die over many repeated trials and recording the number on the upward face after each roll. We then take the observed fraction of results over many trials to converge upon the probability as the number of trials approaches infinity, such that when a die is thrown repeatedly, the relative frequency of observing a two converges towards 1/6. In this way, frequency probability is essentially reduced to a proportion of recorded outcomes. It is capable of reconciling not only the finite-count limitations of classical probability theory, but also the difficulty of qualifying instances of supposedly equal outcomes and of evidence revealing favoritism towards one outcome over another, since with frequency probability the determination of likelihood relies only on what is actually observed to have happened.
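The convergence the frequentist appeals to can be illustrated with a small simulation; this is a sketch under the assumption that a fair die is adequately modeled by Python's pseudorandom number generator (the seed and function name are illustrative choices):

```python
import random

def relative_frequency(trials: int, seed: int = 0) -> float:
    """Roll a simulated fair six-sided die `trials` times and return
    the observed relative frequency of rolling a two."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    twos = sum(1 for _ in range(trials) if rng.randint(1, 6) == 2)
    return twos / trials

# As the number of trials grows, the observed frequency tends
# towards the limiting value 1/6 ≈ 0.1667:
for n in (10, 1_000, 100_000):
    print(n, relative_frequency(n))
```

Note that the simulation can only ever run finitely many trials, which is precisely the gap between observed relative frequency and the limiting probability that the next paragraph presses on.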

The same features that let frequency probability address many of the issues with classical probability emerge as weaknesses when the theory is applied and put into practice, in the form of inconsistency across repeated and random trials. In other words, when we attempt to apply the frequentist definition of probability, we repeat trials finitely many times and determine relative frequencies with respect to this finite count. Yet, as our number of trials changes, so do the possible relative frequencies, leading to different relative frequency values each time. Returning to the die example, imagine that the die is rolled twelve times and that on ten of those twelve rolls it lands on a three. It seems unsatisfying to then conclude that the probability of rolling a three is 5/6. Furthermore, how does this assigned probability change when, the next time we roll the die repeatedly, a different proportion of threes emerges instead of 5/6? Perhaps even more troubling is that frequency probability is incapable of dealing with a single-case scenario where only one trial is administered. If a die is rolled once and lands on a five, a frequentist would have to conclude that the probability of landing a five is one and the probability of every other number is zero. Given this situation, only two probabilities are possible, zero or one; no other proportions can arise from the single trial. This difficulty extends to situations with a greater number N of trials, where the proportion can still only represent probabilities expressible with denominator N, resulting in bias and possible error (Hájek).
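The single-case problem and the run-to-run instability of small samples can both be made concrete with a sketch; the seeds and counts here are arbitrary illustrative choices, assuming the same simulated-die setup as before:

```python
import random

def observed_frequency(face: int, trials: int, seed: int) -> float:
    """Observed relative frequency of `face` over `trials` simulated
    rolls of a fair six-sided die."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if rng.randint(1, 6) == face)
    return hits / trials

# A single trial can only yield a frequency of 0.0 or 1.0 -- never 1/6:
print(observed_frequency(5, trials=1, seed=1))

# Twelve-roll experiments disagree with one another from run to run,
# and can only produce multiples of 1/12:
for seed in range(3):
    print(observed_frequency(3, trials=12, seed=seed))
```

The printed values vary with the seed, mirroring the essay's point that finite relative frequencies are unstable and constrained to fractions with denominator N.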
These proportional differences, and the degree to which they deviate from a true infinite-limit value, also become impossible to evaluate without appealing to a predetermined interpretation of probability to calculate error or expected deviation from the limit. This again amounts to circular reasoning, since a developed approach to probability capable of making these assessments is the very interpretation we are attempting to demonstrate as reliable. The unreliability of frequency probability at low trial counts makes this interpretation especially faulty for predicting weather, earthquakes, and other events that occur rarely and at random.

Last but not least, subjective probability provides yet another formulation of probability theory, an interpretation most commonly found in epistemology. Subjective probability relies on extrapolating from prior outcomes and experiences to denote one's degree of belief in an outcome, and thus its probability. In other words, the probability of event A occurring is represented by P(A) = degree of belief that A is true, based on reason, intuition, and estimates. At first glance, this idea may appear quite straightforward, since reason and intuition should come innately to us, but concerns arise in distinguishing more correct and more justifiable lines of reasoning from those that are not, even when conceding the possibility of subjectively accurate arguments. Even the most fundamental form of inductive reasoning, which many consider intuitively true, relies on an underlying belief that the future resembles the past, and epistemological questions emerge about how we can have confidence in these methods of acquiring new information on which to base our trust, our beliefs, and in turn our probabilities.
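Degrees of belief are commonly formalized through Bayesian updating, where a prior credence is revised in light of evidence. The following is a minimal sketch of that formalism, not something the essay itself derives; all the numbers are purely illustrative:

```python
def bayes_update(prior: float, likelihood: float, evidence_prob: float) -> float:
    """Posterior degree of belief via Bayes' theorem:
    P(A|E) = P(E|A) * P(A) / P(E)."""
    return likelihood * prior / evidence_prob

# Illustrative numbers: a prior credence of 0.5 in hypothesis A,
# evidence E with P(E|A) = 0.8 and overall P(E) = 0.65:
posterior = bayes_update(prior=0.5, likelihood=0.8, evidence_prob=0.65)
print(round(posterior, 3))  # 0.615
```

The formula mechanizes the revision of a degree of belief, but, as the essay argues, it cannot by itself justify the inductive assumption that past evidence bears on future outcomes.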

In particular, we see this presumption that we can use past events to predict future events permeating the sciences wherever laws and principles are present, as well as the field of probability itself. Taking physics as an example, the equations we have formed to predict and describe the movement of an object when it encounters a force are developed around the observed behavior of objects around us. While we take these laws to be the rules of nature, we ultimately lack the means to claim that the laws of physics will be preserved exactly as we presently know them, such that we may claim total confidence that a ball on earth will accelerate downwards at 9.8 m/s² simply because every ball we have examined thus far has behaved this way. Herein lies the fundamental flaw of subjective probability, and to some capacity of frequency probability as well: its dependence on past empirical information to determine probability values, because we are unjustified in the assumed belief that the future will necessarily resemble the past exactly, and that the next time we drop a ball it must necessarily experience the same force. For all we know, the ball might float upwards the next time we attempt this experiment. Furthermore, like the interpretations before it, subjective probability is unable to escape from assuming this unprovable premise that the future resembles the past. Attempts to prove that the past is indeed an accurate predictor of the future rely on an appeal to the past experiences in which we have observed a consistency between present and past, which yields an unproductive circular form of reasoning, since the very point of contention is that the past is not a reliable source from which to construct knowledge of the future (Salmon).

Overall, three interpretations of probability were examined – classical, frequency, and subjective – all of which had their own set of flaws tending towards an end result of begging the question. While no airtight defense of probability theory emerged from examining the difficulties with each formulation, this should not demand that we completely rid ourselves of the concept of probability on account of its pockets of failure, given all the fields that rely on its very fundamentals. Instead, the truth that emerges from this assessment is that each interpretation requires knowledge of how and when to apply it, choosing for each situation the interpretation that minimizes the severity of its flaws as best as possible. By acquiring a clearer conception of the strengths and weaknesses of each interpretation, we become better equipped to make conscious decisions about the most appropriate probabilistic interpretation for each situation we encounter, whether it is one that enables an appeal to symmetry coherent with classical probability or one that permits rigorous experimental testing consistent with frequency probability.

Works Cited

Hájek, Alan. "Interpretations of Probability." The Stanford Encyclopedia of Philosophy (Winter 2012 Edition), Edward N. Zalta (ed.).

Laplace, P. S., 1814, English edition 1951, A Philosophical Essay on Probabilities, New York: Dover Publications Inc.

Salmon, W. C. "The Problem of Induction." N.p., n.d. Web. 26 May 2018.
