Finite mathematics topic summary: probability


Subtopics: Sample Space and Events | Combining Events | Relative Frequency | Some Properties of Relative Frequency | Modeled Probability | Computing Modeled Probability for Equally Likely Outcomes | Probability Distributions | Addition Principle | Further Properties of Probability | Conditional Probability | Independent Events | Bayes' Theorem

Sample Space and Events

An experiment is an occurrence whose result, or outcome, is uncertain.

The set of all possible outcomes is called the sample space for the experiment.

Given a sample space S, an event E is a subset of S. The outcomes in E are called the favorable outcomes.

We say that E occurs in a particular experiment if the outcome of that experiment is one of the elements of E, that is, if the outcome of the experiment is favorable.

On-Line Tutorial Beginning With This Topic

Example

1. Experiment: Cast a die and observe the number facing up.
    Outcomes: 1, 2, 3, 4, 5, 6
    Sample Space: S = {1, 2, 3, 4, 5, 6}
    An Event: E: the outcome is even; E = {2, 4, 6}

2. Here is an experiment that simulates tossing three fair distinguishable coins. To run the experiment, press the "Toss Coins" button and record the occurrences of heads and tails. The sample space is the set of eight possible outcomes:

    S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}

Let E be the event that heads comes up at least twice.

    E = {HHH, HHT, HTH, THH}
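
These definitions translate directly into code. Below is a short Python sketch (an illustration, not part of the original page) that builds the three-coin sample space and the event E as sets:

```python
from itertools import product

# Sample space for tossing three distinguishable coins:
# all strings of three letters drawn from {H, T}.
S = {"".join(faces) for faces in product("HT", repeat=3)}

# Event E: heads comes up at least twice.
E = {outcome for outcome in S if outcome.count("H") >= 2}

print(sorted(S))  # all 8 outcomes
print(sorted(E))  # ['HHH', 'HHT', 'HTH', 'THH']
```

Representing events as Python sets makes the later operations (complement, union, intersection) one-liners.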

Combining Events

If E and F are events in an experiment, then:

E' is the event that E does not occur.

E ∪ F is the event that either E occurs or F occurs (or both).

E ∩ F is the event that both E and F occur.

E and F are said to be disjoint or mutually exclusive if E ∩ F is empty.

Example

Let S be the sample space for the coin tossing experiment in the box above, so that

   S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}.

Let E be the event that heads comes up at least twice;

   E = {HHH, HHT, HTH, THH},

and let F be the event that tails comes up at least once;

   F = {HHT, HTH, HTT, THH, THT, TTH, TTT}.

Then:

   E' = {HTT, THT, TTH, TTT}
   E ∪ F = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT} = S
   E ∩ F = {HHT, HTH, THH}

E and F are not mutually exclusive, since E ∩ F is not empty.
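
The set operations above map directly onto Python's set operators; here is a small sketch (not part of the original page) using the same E and F:

```python
# Sample space and events from the coin-tossing example.
S = {"HHH", "HHT", "HTH", "HTT", "THH", "THT", "TTH", "TTT"}
E = {o for o in S if o.count("H") >= 2}   # at least two heads
F = {o for o in S if "T" in o}            # at least one tail

E_complement = S - E   # E'
union = E | F          # E ∪ F
intersection = E & F   # E ∩ F

assert E_complement == {"HTT", "THT", "TTH", "TTT"}
assert union == S
assert intersection == {"HHT", "HTH", "THH"}
assert intersection != set()   # E and F are not mutually exclusive
```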

Relative Frequency

If an experiment is performed N times, and the event E occurs fr(E) times, then the ratio

    P(E) = fr(E)/N

is called the relative frequency or estimated probability of E.

The number fr(E) is called the frequency of E. N, the number of times that the experiment is performed, is called the number of trials or the sample size.

If E consists of a single outcome s, then we refer to P(E) as the relative frequency of the outcome s, and write P(s).
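
Relative frequency can be estimated by simulating the experiment. The following Python sketch (an illustration, not part of the original page) tosses three fair coins N times and records how often E, "at least two heads," occurs:

```python
import random

def toss_three(rng):
    """Simulate one toss of three fair coins, e.g. 'HTH'."""
    return "".join(rng.choice("HT") for _ in range(3))

rng = random.Random(1)  # seeded so the run is reproducible
N = 10_000              # number of trials (sample size)

# fr(E): number of trials in which heads comes up at least twice.
freq = sum(1 for _ in range(N) if toss_three(rng).count("H") >= 2)
rel_freq = freq / N     # P(E) = fr(E)/N

print(rel_freq)  # close to 0.5 for large N
```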

On-Line Tutorial on Relative Frequency

Example

In the above experiment (toss three coins) take E to be the event that heads comes up at least twice. To compute its relative frequency, let us use the simulated experiment below.

Every time you press "Toss Coins" the web page will compute both fr(E) and P(E).

Some Properties of Relative Frequency

Let S = {s1, s2, ... , sn} be a sample space and let P(si) be the relative frequency of the event {si}. Then

(a) 0 ≤ P(si) ≤ 1
(b) P(s1) + P(s2) + ... + P(sn) = 1
(c) If E = {e1, e2, ..., er}, then P(E) = P(e1) + P(e2) + ... + P(er).

In words:

(a) The relative frequency of each outcome is a number between 0 and 1.
(b) The relative frequencies of all the outcomes add up to 1.
(c) The relative frequency of an event E is the sum of the relative frequencies of the individual outcomes in E.

Modeled Probability

The modeled probability, P(E), of an event E is a mathematical model whose purpose is to predict the relative frequency based on the nature of the experiment rather than through actual experimentation.

The relative frequency should approach the modeled probability as the number of trials gets larger and larger.

Notes

1. We sometimes write P(E) for both relative frequency and modeled probability. Which one we are referring to should always be clear from the context.
2. Modeled probability satisfies the same properties (shown above) as relative frequency.

Example

In the above experiment, there are eight outcomes in S, and half of them are in E. Therefore, we expect E to occur half the time. In other words, the modeled probability of E is 0.5.

If you "toss the coins" in the simulated experiment a large number of times, the relative frequency should approach the modeled probability of 0.5. In the following simulation, you can toss the coins 50 times with each click on the button.


Modeled Probability for Equally Likely Outcomes

In an experiment in which all outcomes are equally likely, we model the probability of an event E by

    P(E) = number of favorable outcomes / total number of outcomes = n(E)/n(S),

where n(E) is the number of elements in E, and n(S) is the number of elements in S.

Example

In the above experiment (toss three coins) there are eight equally likely outcomes in S, and half of them are in E (the event that heads comes up at least twice). Therefore,

    P(E) = n(E)/n(S) = 4/8 = 1/2.
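
The count-and-divide computation is easy to check in code; a minimal Python version (not part of the original page):

```python
from itertools import product

# Equally likely outcomes: all 2^3 = 8 strings of H and T.
S = ["".join(t) for t in product("HT", repeat=3)]

# Favorable outcomes: heads comes up at least twice.
E = [o for o in S if o.count("H") >= 2]

p = len(E) / len(S)   # P(E) = n(E)/n(S)
print(p)  # 0.5
```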

Probability Distributions

Relative frequency and modeled probability have in common the idea of a probability distribution:

A finite sample space is just a finite set S. A probability distribution is an assignment of a number P(si) to each outcome si in a sample space S ={s1, s2, ... , sn} so that

  (a) 0 ≤ P(si) ≤ 1
  (b) P(s1) + P(s2) + ... + P(sn) = 1.

P(si) is called the probability of si. Given a probability distribution, we obtain the probability of an event E by adding up the probabilities of the outcomes in E.

If P(E) = 0, we call E an impossible event. The empty event ∅ is always impossible, since something must happen.

Notes

1. All the above properties apply to both relative frequency and modeled probability. Thus, when we speak only of "probability," we could mean either, depending on the context.

On-Line Tutorial on Probability Distributions

Example

1. All the examples of relative frequency and modeled probability above give examples of probability distributions.

2. Let us take S = {H, T} and make the assignments P(H) = 0.2, P(T) = 0.8. Since these numbers are between 0 and 1, and add to 1, they specify a probability distribution.

3. With S = {H, T} again, we could also take P(H) = 1, P(T) = 0, so that T is an impossible event.

4. The following table gives a probability distribution for the sample space S = {1, 2, 3, 4, 5, 6}.

Outcome        1    2    3    4    5    6
Probability   0.3  0.3   0   0.1  0.2  0.1

It follows that

P({1, 6}) = 0.3 + 0.1 = 0.4
P({2, 3}) = 0.3 + 0 = 0.3
P(3) = 0         An impossible event

5. Suppose we toss three coins as above, but this time, we only look at the number of heads that come up. In other words, S = {0, 1, 2, 3}. The (modeled) probability distribution is given by counting the number of combinations that give 0, 1, 2, and 3 heads:

Outcome        0      1      2      3
Probability  0.125  0.375  0.375  0.125
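
This distribution can be derived by counting, as the following Python sketch (not part of the original page) shows:

```python
from itertools import product
from collections import Counter

# All 8 equally likely outcomes of tossing three fair coins.
outcomes = ["".join(t) for t in product("HT", repeat=3)]

# Count how many outcomes give 0, 1, 2, and 3 heads,
# then divide by n(S) = 8 to get the modeled probabilities.
counts = Counter(o.count("H") for o in outcomes)
dist = {k: counts[k] / len(outcomes) for k in sorted(counts)}

print(dist)  # {0: 0.125, 1: 0.375, 2: 0.375, 3: 0.125}
```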

The following simulation computes the relative frequency distribution of the above experiment. You will find that, after many coin tosses, these relative frequencies converge to the modeled probabilities shown above.


Addition Principle

Mutually Exclusive Events
If E and F are mutually exclusive events, then

P(E ∪ F) = P(E) + P(F).
This also holds for more than two events: if E1, E2, . . . , En are mutually exclusive events (that is, the intersection of any pair of them is empty) and E is the union of E1, E2, . . . , En, then
P(E) = P(E1) + P(E2) + . . . + P(En).

General Addition Principle
If E and F are any two events, then

P(E ∪ F) = P(E) + P(F) - P(E ∩ F).

Example

Let S be the sample space for the coin-tossing experiment above;

S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}

Let E be the event that heads comes up exactly once;

E = {HTT, THT, TTH}
and let F be the event that heads comes up exactly twice;
F = {HHT, HTH, THH}.

Then E and F are mutually exclusive, and

P(E ∪ F) = P(E) + P(F) = 3/8 + 3/8 = 6/8.

Now let E be as above, and let F be the event that heads comes up at most once:

F = {HTT, THT, TTH, TTT}
Then E and F are not mutually exclusive, and E ∩ F is the event that heads comes up exactly once (= E). Therefore,
P(E ∪ F) = P(E) + P(F) - P(E ∩ F)
      = 3/8 + 4/8 - 3/8 = 4/8.
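
Both cases of the addition principle can be verified by counting; here is a Python check (not part of the original page) for the second, non-mutually-exclusive case:

```python
S = {"HHH", "HHT", "HTH", "HTT", "THH", "THT", "TTH", "TTT"}
E = {o for o in S if o.count("H") == 1}   # exactly one head
F = {o for o in S if o.count("H") <= 1}   # at most one head

def P(event):
    """Probability for equally likely outcomes: n(event)/n(S)."""
    return len(event) / len(S)

# General addition principle: P(E ∪ F) = P(E) + P(F) - P(E ∩ F)
lhs = P(E | F)
rhs = P(E) + P(F) - P(E & F)
assert lhs == rhs == 4 / 8
```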

Further Properties of Probability

The following are true for any sample space S and any event E.

P(S) = 1             The probability of something happening is 1.
P(∅) = 0             The probability of nothing happening is 0.
P(E') = 1 - P(E)     The probability of E not happening is 1 minus the probability of E.

Example

Continuing with

S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}

Let E be the event that heads comes up exactly once;

E = {HTT, THT, TTH}.
Then E' is the event that heads does not come up exactly once, and
P(E') = 1 - P(E) = 1 - 3/8 = 5/8.

Conditional Probability

If E and F are two events, then the conditional probability, P(E | F), is the probability that E occurs, given that F occurs, and is defined by

    P(E | F) = P(E ∩ F)/P(F).

We can rewrite this formula in a form known as the multiplication principle:

    P(E ∩ F) = P(F)P(E | F).

Conditional Estimated Probability
If E and F are events and P is estimated probability, then

    P(E | F) = fr(E ∩ F)/fr(F).

Conditional Probability for Equally Likely Outcomes
If all the outcomes in S are equally likely, then

    P(E | F) = n(E ∩ F)/n(F).

On-Line Tutorial Beginning With This Topic

Example

Let S be the original sample space for the experiment above;

S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}

Let E be the event that heads comes up exactly once;

E = {HTT, THT, TTH}
and let F be the event that the first coin comes up heads;
F = {HHH, HHT, HTH, HTT}.

Then the probability that heads comes up exactly once, given that the first coin comes up heads, is

    P(E | F) = P(E ∩ F)/P(F) = P({HTT})/P(F) = (1/8)/(1/2) = 1/4.

Since the outcomes in this experiment are equally likely, we could also use the formula P(E | F) = n(E ∩ F)/n(F) = 1/4.
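
The equally-likely formula for conditional probability is straightforward to verify in code; a small Python sketch (not part of the original page):

```python
S = {"HHH", "HHT", "HTH", "HTT", "THH", "THT", "TTH", "TTT"}
E = {o for o in S if o.count("H") == 1}   # exactly one head
F = {o for o in S if o[0] == "H"}         # first coin is heads

# For equally likely outcomes: P(E | F) = n(E ∩ F)/n(F).
p_E_given_F = len(E & F) / len(F)

print(p_E_given_F)  # 0.25
```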

Independent Events

The events E and F are independent if

P(E | F) = P(E)
or, equivalently (assuming that P(F) is not 0), we have the following:

Test for Independence

The events E and F are independent if and only if

P(E ∩ F) = P(E)P(F).

If two events E and F are not independent, then they are dependent.

Given any number of mutually independent events (that is, each one of them is independent of the intersection of any combination of the others), the probability of their intersection is the product of the probabilities of the individual events.

Example

As in the example immediately above, let S be the original sample space for the experiment above;

S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}

Let E be the event that heads comes up exactly once;

E = {HTT, THT, TTH}
and let F be the event that the first coin comes up heads;
F = {HHH, HHT, HTH, HTT}.

To test these two events for independence, we check the formula

P(E ∩ F) = P(E)P(F).
Here,
    P(E) = 3/8, P(F) = 1/2, and
    E ∩ F = {HTT}, so that P(E ∩ F) = 1/8.
Since
    (3/8)(1/2) ≠ 1/8,
the events E and F are not independent.
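
The independence test amounts to comparing two numbers; in Python (an illustration, not part of the original page):

```python
S = {"HHH", "HHT", "HTH", "HTT", "THH", "THT", "TTH", "TTT"}
E = {o for o in S if o.count("H") == 1}   # exactly one head
F = {o for o in S if o[0] == "H"}         # first coin is heads

def P(event):
    """Probability for equally likely outcomes: n(event)/n(S)."""
    return len(event) / len(S)

# Test for independence: compare P(E ∩ F) with P(E)P(F).
p_intersection = P(E & F)   # 1/8
p_product = P(E) * P(F)     # (3/8)(1/2) = 3/16

assert p_intersection != p_product   # so E and F are dependent
```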

Bayes' Theorem

The short form of Bayes' Theorem states that if E and F are events, then

    P(F | E) = P(E | F)P(F) / [P(E | F)P(F) + P(E | F')P(F')].

We can often compute P(F | E) by instead constructing a probability tree. (To see how, go to the tutorial by following the link below.)

An expanded form of Bayes' Theorem states that if E is an event, and if F1, F2, and F3 form a partition of the sample space S, then

    P(F1 | E) = P(E | F1)P(F1) / [P(E | F1)P(F1) + P(E | F2)P(F2) + P(E | F3)P(F3)].

A similar formula works for a partition of S into four or more events.

On-Line Tutorial Beginning With This Topic

Example

If P(E | F) = 0.95, P(E | F') = 0.15, P(F) = 0.1, and P(F') = 0.9, then

P(F | E) = P(E | F)P(F) / [P(E | F)P(F) + P(E | F')P(F')]
         = (0.95)(0.1) / [(0.95)(0.1) + (0.15)(0.9)]
         ≈ 0.4130.

This example comes from a scenario discussed in the tutorial (link in the adjacent box).
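
The short form of Bayes' Theorem is a one-line computation. The Python function below (an illustration, not part of the original page) reproduces the calculation:

```python
def bayes(p_E_given_F, p_E_given_notF, p_F):
    """Short form of Bayes' Theorem: returns P(F | E)."""
    p_notF = 1 - p_F
    numerator = p_E_given_F * p_F
    denominator = numerator + p_E_given_notF * p_notF
    return numerator / denominator

# Values from the example above.
p = bayes(0.95, 0.15, 0.1)
print(round(p, 4))  # 0.413
```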

Last Updated: April 2010
Copyright © Stefan Waner
