Probability

Probability has to do with the likeliness of an event occurring given an initial event.

Probability concerns events. Events are what are probable or improbable. In asking what the probability is that a fair coin turns up heads, we are asking what the probability is for that event.

The event that is the subject matter of probability is sometimes called the desired outcome, but I think target event is a better name, since it should have nothing to do with whether or not heads, or any other event, is actually desired. The target event is typically explicit. It is this event’s likeliness of occurring that we are after when making a probability calculation.

The initial event, on the other hand, is the event that is assumed to occur whenever measuring the target event. When we want to know the probability of turning up heads, the initial event is the flipping of the coin.

If the target event never occurs given the initial event, then the probability of the target event’s occurring is 0. If the target event always occurs given the initial event, then the probability of the target event’s occurring is 1. Everything in between is put in terms of either a decimal or fraction less than 1 and greater than 0, which signifies a greater or lesser likeliness of occurring. So what is the likeliness of the target event given the initial event? How is that known or measured?

A good answer is that likeliness has to do with frequency, or how frequent the target event is given the initial event. A more accurate calculation of the frequency is made over a greater number of “trials” or “experiments”, which are just reinstantiations of the initial event. The calculation itself is merely one of parts and wholes: given the trials that are the initial event (the whole), how many of them also involve the target event (the part)?
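To make that parts-and-wholes calculation concrete, here is a minimal Python sketch (the names `estimate_frequency` and `flip` and the trial count are my own illustrative choices, not anything from the text above): it re-instantiates the initial event some number of times and reports what fraction of those trials also involve the target event.

```python
import random

def estimate_frequency(trials, initial_event, is_target):
    """Frequency of the target event over repeated trials of the initial event.

    initial_event: re-instantiates the initial event (e.g. one coin flip).
    is_target: reports whether the target event occurred in that trial.
    """
    hits = sum(1 for _ in range(trials) if is_target(initial_event()))
    return hits / trials  # the part (target occurrences) over the whole (all trials)

# The initial event: one flip of a fair coin. The target event: heads.
flip = lambda: random.choice(["heads", "tails"])
print(estimate_frequency(10_000, flip, lambda outcome: outcome == "heads"))
```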

However, casting likeliness as mere frequency may not be sufficient to explain either how we come to understand probability or what, exactly, the probability is measuring. To illustrate, let’s say that from 100 flips of a coin, I get 52 heads. The calculation is straightforward: the frequency is 52 out of 100, or .52. Yet of course the probability that a fair coin turns up heads is not .52. So where did I go wrong? Well, if I did go wrong, then perhaps it was in the number of trials that I made. A greater number of trials may be more accurate, since the frequency over more trials would be thought to converge on the true probability. Yet couldn’t that ratio still fail to reflect the true probability? If I did 1,000 flips, it is nonetheless possible that I land 520 heads. It’s possible, just unlikely. Yet to say that it is unlikely here is to say that it will not happen with great frequency, or not with as great a frequency as that of flipping 500 heads, which is true. Yet how is this truth established, particularly if we continue to get unlucky in our results?
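For the arithmetic behind “possible, just unlikely,” a standard binomial calculation (my own addition for illustration; the post itself gives no formula) shows how infrequent exactly 520 heads in 1,000 fair flips is compared with the most frequent count, 500:

```python
from math import comb

def prob_exact_heads(n, k, p=0.5):
    """Probability of exactly k heads in n independent flips of a coin
    that lands heads with probability p (the binomial distribution)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(prob_exact_heads(1000, 520))  # roughly 0.011
print(prob_exact_heads(1000, 500))  # roughly 0.025
```

So 520 heads does happen, but in the long run it happens less than half as often as 500 heads, and the bulk of the results cluster near the .5 frequency.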

No doubt, the likeliness of an event may be established by completing trials and recording the results. However, this should not be the only method to establish likelihood. Another way, one that may go hand-in-hand with frequency trials, is simply to have a grasp of the underlying features of the objects and forces of the event. To illustrate, a coin is two-sided, highly symmetrical, and evenly weighted. When it is flicked, it quickly spins, and any micro-adjustments in speed, momentum, and height over different trials all make a clear difference to the position of the coin as it lands. Crucially, there is a range of forces involved (including the thumb and its contribution) that individually and together are understood to apply equally to heads and tails. It is the understanding that the forces involved discriminate equally between the two options which leads to the notion of their being equally likely. And since the two options are equally likely, they split the probability in half.

From a flip alone, there is no reason to think that the forces involved will favor one side rather than the other. Even if the outcome is already deterministically fixed and could in principle come to be known, the entire range of the forces themselves is understood to be spread evenly enough between heads and tails for .5 to nonetheless be the accurate measure. The objective feature that is measured is precisely the tendency of the objects and the range of forces of the initial event to causally favor the target event. To show this, consider a weighted coin. The subsequent change in the likeliness of one side over another is precisely a change in the force of weight toward one side. Without such an objective change, there would be no difference in likeliness.
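A small simulation makes that contrast vivid, under the simplifying assumption (mine, not the post’s) that the weighting can be modeled as a single parameter `p_heads` giving how strongly the forces favor heads:

```python
import random

def biased_flip(p_heads):
    """One flip of a coin whose weighting favors heads with probability p_heads."""
    return "heads" if random.random() < p_heads else "tails"

def observed_frequency(trials, p_heads):
    """Fraction of trials that turn up heads."""
    return sum(biased_flip(p_heads) == "heads" for _ in range(trials)) / trials

# An objective change in the forces (the weighting) shows up as a change in frequency.
print(observed_frequency(100_000, 0.5))  # fair coin: close to .5
print(observed_frequency(100_000, 0.6))  # weighted coin: close to .6
```

Without changing `p_heads`, the observed frequency does not drift; a difference in likeliness tracks an objective difference in the weighting.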

Even if the forces involved in a fair coin came to be understood to favor one side, because of a micro-difference in one side’s weight for example, this would simply A) fall within an acceptable bound of accuracy (e.g. the chance of heads is close to .5, or closer to .5 than it is to .51, a perfectly acceptable fact), or B) be incorporated into a further, more accurate measure of the probability.

So frequency is important in measuring and establishing probability, but it is not the only tool that does so. More basic is an understanding of the range of forces and objects involved in the initial event, and it is these that establish how likely the target event is.
