4.3.3 The Subjective Approach
The subjective approach interprets probability as the experimenter’s degree of belief that the event will occur. The estimate of the probability of an event is based on the totality of the individual’s knowledge at the time. As new information becomes available, the estimate is modified accordingly to best reflect his/her current knowledge. The method by which the probabilities are updated is commonly done with Bayes’ Rule, discussed in Section 4.8.
So for the coin toss example, a person may have P(Heads) = 1/2 in the absence of additional information. But perhaps the observer knows additional information about the coin or the thrower that would shift the probability in a certain direction. For instance, parlor magicians may be trained to be quite skilled at tossing coins, and some are so skilled that they may toss a fair coin and get nothing but Heads, indefinitely. I have seen this. It was similarly claimed in Bringing Down the House [65] that MIT students were accomplished enough with cards to be able to cut a deck to the same location, every single time. In such cases, one clearly should use the additional information to assign P(Heads) away from the symmetry value of 1/2.
This approach works well in situations that cannot be repeated indefinitely, for example, to assign your probability that you will get an A in this class, the chances of a devastating nuclear war, or the likelihood that a cure for the common cold will be discovered. The roots of subjective probability reach back a long time. See
http://en.wikipedia.org/wiki/Subjective_probability
for a short discussion and links to references about the subjective approach.
4.3.4 Equally Likely Model (ELM)
We have seen several approaches to the assignment of a probability model to a given random experiment, and they are very different in their underlying interpretation. But they all cross paths when it comes to the equally likely model, which assigns equal probability to all elementary outcomes of the experiment.
The ELM appears in the measure theory approach when the experiment boasts symmetry of some kind. If symmetry guarantees that all outcomes have equal “size”, and if outcomes with equal “size” should get the same probability, then the ELM is a logical objective choice for the experimenter. Consider the balanced 6-sided die, the fair coin, or the dart board with equal-sized wedges.
The ELM appears in the subjective approach when the experimenter resorts to indifference or ignorance with respect to his/her knowledge of the outcome of the experiment. If the experimenter has no prior knowledge to suggest that (s)he prefer Heads over Tails, then it is reasonable for him/her to assign equal subjective probability to both possible outcomes.
The ELM appears in the relative frequency approach as a fascinating fact of Nature: when we flip balanced coins over and over again, we observe that the proportion of times the coin comes up Heads tends to 1/2. Of course, if we assume that the measure theory applies, then we can prove that the sample proportion must tend to 1/2 as expected, but that is putting the cart before the horse, in a manner of speaking.
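This empirical tendency is easy to check by simulation. The following sketch (base R only; the seed and the number of flips are arbitrary choices for illustration) flips a balanced coin 10,000 times and computes the observed proportion of Heads:

> set.seed(42)
> flips <- sample(c("H", "T"), size = 10000, replace = TRUE)
> mean(flips == "H")

The printed proportion will be close to, though typically not exactly, 1/2, and on average it draws nearer to 1/2 as the number of flips grows.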
The ELM is only available when there are finitely many elements in the sample space.
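When the ELM does apply, say with a sample space S having N = #(S) equally likely outcomes, computing the probability of an event A reduces to counting:

P(A) = #(A)/#(S) = #(A)/N.

For example, the probability of rolling an even number with a balanced 6-sided die is #{2, 4, 6}/6 = 3/6 = 1/2.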
4.3.5 How to do it with R
In the prob package, a probability space is an object of outcomes S and a vector of probabilities (called probs) with entries that correspond to each outcome in S. When S is a data frame, we may simply add a column called probs to S and we will be finished; the probability space will simply be a data frame which we may call S. In the case that S is a list, we may combine the outcomes and probs into a larger list, space; it will have two components: outcomes and probs. The only requirements we need are for the entries of probs to be nonnegative and for sum(probs) to be one.
To accomplish this in R, we may use the probspace function. The general syntax is probspace(x, probs), where x is a sample space of outcomes and probs is a vector (of the same length as the number of outcomes in x). The specific choice of probs depends on the context of the problem, and some examples follow to demonstrate some of the more common choices.
Example 4.4. The Equally Likely Model asserts that every outcome of the sample space has the same probability; thus, if a sample space has n outcomes, then probs would be a vector of length n with identical entries 1/n. The quickest way to generate probs is with the rep function. We will start with the experiment of rolling a die, so that n = 6. We will construct the sample space, generate the probs vector, and put them together with probspace.
> outcomes <- rolldie(1)
> p <- rep(1/6, times = 6)
> probspace(outcomes, probs = p)
  X1     probs
1  1 0.1666667
2  2 0.1666667
3  3 0.1666667
4  4 0.1666667
5  5 0.1666667
6  6 0.1666667
The probspace function is designed to save us some time in many of the most common situations. For example, due to the especial simplicity of the sample space in this case, we could have achieved the same result with only (note the name change for the first column)
> probspace(1:6, probs = p)
  X     probs
1 1 0.1666667
2 2 0.1666667
3 3 0.1666667
4 4 0.1666667
5 5 0.1666667
6 6 0.1666667
Further, since the equally likely model plays such a fundamental role in the study of probability, the probspace function will assume that the equally likely model is desired if no probs are specified. Thus, we get the same answer with only
> probspace(1:6)
  X     probs
1 1 0.1666667
2 2 0.1666667
3 3 0.1666667
4 4 0.1666667
5 5 0.1666667
6 6 0.1666667
And finally, since rolling dice is such a common experiment in probability classes, the rolldie function has an additional logical argument makespace that will add a column of equally likely probs to the generated sample space:
> rolldie(1, makespace = TRUE)
  X1     probs
1  1 0.1666667
2  2 0.1666667
3  3 0.1666667
4  4 0.1666667
5  5 0.1666667
6  6 0.1666667
or just rolldie(1, TRUE). Many of the other sample space functions (tosscoin, cards, roulette, etc.) have similar makespace arguments. Check the documentation for details.
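For instance, tossing a single fair coin works just like the rolldie example above:

> tosscoin(1, makespace = TRUE)
  toss1 probs
1     H   0.5
2     T   0.5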
One sample space function that does NOT have a makespace option is the urnsamples function. This was intentional. The reason is that under the varied sampling assumptions the outcomes in the respective sample spaces are NOT, in general, equally likely. It is important for the user to carefully consider the experiment to decide whether or not the outcomes are equally likely and then use probspace to assign the model.
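As a sketch of that workflow (the urn and sample size here are chosen just for illustration): when we draw 2 balls from an urn of 3, without replacement and unordered, the three unordered pairs are equally likely, so the ELM is reasonable and we assign it explicitly with probspace:

> S <- urnsamples(1:3, size = 2)
> probspace(S)
  X1 X2     probs
1  1  2 0.3333333
2  1  3 0.3333333
3  2  3 0.3333333

Under other sampling assumptions (ordered, or with replacement) the appropriate probs would need to be reconsidered.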
Example 4.5. An unbalanced coin. While the makespace argument to tosscoin is useful to represent the tossing of a fair coin, it is not always appropriate. For example, suppose our coin is not perfectly balanced; for instance, maybe the H side is somewhat heavier such that the chances of H appearing in a single toss is 0.70 instead of 0.5. We may set up the probability space with
> probspace(tosscoin(1), probs = c(0.7, 0.3))
  toss1 probs
1     H   0.7
2     T   0.3
The same procedure can be used to represent an unbalanced die, roulette wheel, etc.
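For example, a loaded die weighted toward 6 (the particular weights below are made up for illustration; any nonnegative vector summing to one will do) could be set up with:

> probspace(rolldie(1), probs = c(0.05, 0.05, 0.05, 0.05, 0.05, 0.75))
  X1 probs
1  1  0.05
2  2  0.05
3  3  0.05
4  4  0.05
5  5  0.05
6  6  0.75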
4.3.6 Words of Warning
It should be mentioned that while the splendour of R is uncontested, it, like everything else, has limits both with respect to the sample/probability spaces it can manage and with respect to the finite accuracy of the representation of most numbers (see the R FAQ 7.31). When playing around with probability, one may be tempted to set up a probability space for tossing 100 coins or rolling 50 dice in an attempt to answer some scintillating question. (Bear in mind: rolling a die just 9 times has a sample space with over 10 million outcomes.)
Alas, even if there were enough RAM to barely hold the sample space (and there were enough time to wait for it to be generated), the infinitesimal probabilities that are associated with so many outcomes make it difficult for the underlying machinery to handle reliably. In some cases, special algorithms need to be called just to give something that holds asymptotically. User beware.