A Bayesian Primer

Topic 15:  Learning About Models Using Bayes' Rule

Diagnostic Testing

We introduce Bayes' rule by considering a basic problem in diagnostic testing.  Suppose you are concerned that you may have a disease which is known to affect 10% of your community.  You take a blood test that is designed to detect whether you have the disease.  If the blood test result is positive, there is some indication that you have the disease.  Otherwise, a negative test result is evidence that you are disease-free.  These blood tests are not completely accurate.  If you really have the disease, you will get an incorrect negative result with probability .05.  Also, if you don't have the disease, you will get an incorrect positive blood test result with probability .2.

You take the test and it is positive.  Should you be concerned that you really have the disease?

A Model

A model is a description of some unknown phenomenon.

Here the model is your disease condition.  There are two possible models:

"have disease"        "don't have disease"

One generally is interested in learning about the truth of the different models after observing data.

Subjective Probability

From the Bayesian viewpoint, uncertainty about models is expressed using the language of subjective probability.  A subjective probability states how strongly you believe that an uncertain event is true.  For example, consider the possibility that you have the disease.  Certainly you have an opinion on how likely it is that you have the disease.  You express this opinion about your disease status by a probability -- a number between 0 and 1.  If you are sure you have the disease, you would assign a probability close to 1 to the event "have the disease".  On the other hand, if you are indifferent between the events "have the disease" and "don't have the disease", presumably you would assign a probability close to .5 to "have the disease".

Prior Probabilities

Prior probabilities are subjective probabilities that you assign to the various models which reflect your beliefs before, or prior to, observing any data.  In our example we wish to assign probabilities to the two models

"have disease"        "don't have disease"

which reflect your beliefs before you have the blood test.  There are many possible beliefs about your disease status, and each belief corresponds to a different set of subjective probabilities for these two models.  Here we consider one plausible set of prior probabilities.  Suppose that you think that your risk of the disease is similar to that of a typical person in your community.  Then, since you know the disease has a 10% incidence rate in your community, this belief corresponds to the probabilities

P("have disease") = .10

P("don't have disease") = .90

Data

Data is information you collect that sheds light on the truth of the various models.

Here the data is the result of the blood test: either the test is + (positive) or - (negative).  In this case you got a positive test result.  This result should shed some light on the truth of the two possible models.  We find the new or updated probabilities of the models "have disease" and "don't have disease" by use of a special probability formula called Bayes' rule.

Bayes' Rule

Bayes' rule tells us how to compute our new model probabilities given two ingredients: the prior probabilities of the models and the observed data.

We call these new model probabilities posterior probabilities, since they reflect our beliefs about the models after, or posterior to, the data.

Bayes' rule says that

Prob(Model given Data) is proportional to Prob(Model) x Prob(Data given Model)

or

POSTERIOR is proportional to PRIOR times LIKELIHOOD

Let's discuss the three components of Bayes' rule -- the PRIOR, the LIKELIHOOD, and the POSTERIOR.

  1. The PRIOR is simply the prior probabilities that we assigned to the models.  Here the prior is the probabilities .1, .9 that we assigned respectively to "have disease" and "don't have disease".

  2. The LIKELIHOOD connects the data and the models.  For each model, we compute

                                                    Prob(Data given Model),

    the probability of our data result if that particular model is true.

    Here we observe the data "positive test result" or "+".  We find the probability of "+" if you really "have disease", and find the probability of "+" if you "don't have the disease".  That is, you compute

                                                Prob(+ given "have disease") = .95

                                                Prob(+ given "don't have disease") = .2

    (These numbers, .95 and .2, come from the error rates that are given in the initial statement of the problem.  If you have the disease, we are told that you get an incorrect negative result with probability .05, so the probability of a correct + result is 1-.05 = .95.  Also the probability of an incorrect + result if you really don't have the disease is .2.)

  3. The POSTERIOR is the set of probabilities of the two models after observing the positive blood test.  We compute these probabilities using Bayes' rule.

We illustrate the Bayes' rule computation using two different methods.
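
Here is a minimal Python sketch of this computation before we turn to the two tabular methods.  The function and variable names are our own, chosen for illustration:

    # Bayes' rule: posterior is proportional to prior times likelihood.
    def posterior_probs(priors, likelihoods):
        """Multiply each prior by its likelihood, then normalize."""
        products = [p * l for p, l in zip(priors, likelihoods)]
        total = sum(products)
        return [prod / total for prod in products]

    # Models: "have disease", "don't have disease"
    priors = [0.10, 0.90]

    # Likelihoods of a + result, from the stated error rates:
    # P(+ given disease) = 1 - .05 = .95, P(+ given no disease) = .2.
    likelihoods = [0.95, 0.20]

    print(posterior_probs(priors, likelihoods))  # [0.345..., 0.654...]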

The Bayes' Table

The Bayes' table shows the POSTERIOR = PRIOR x LIKELIHOOD computation using a tabular format.

MODEL                   PRIOR   LIKELIHOOD   PRODUCT   POSTERIOR
"have disease"           .10        .95        .095       .345
"don't have disease"     .90        .20        .180       .655
SUM                     1.00                   .275      1.000

We put the two possible disease states in the MODEL column, the prior probabilities in the PRIOR column, and the P(data given model) values in the LIKELIHOOD column.  Then we compute the posterior probabilities by multiplying each prior by its likelihood to get the PRODUCT column, summing the products to get .275, and dividing each product by this sum.
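
The same arithmetic can be printed in the layout of the Bayes' table.  This is a sketch only, using our own variable names:

    # Reproduce the Bayes' table: PRODUCT = PRIOR x LIKELIHOOD,
    # POSTERIOR = PRODUCT / SUM of the products.
    models = ["have disease", "don't have disease"]
    prior = [0.10, 0.90]
    likelihood = [0.95, 0.20]

    product = [p * l for p, l in zip(prior, likelihood)]  # .095, .180
    total = sum(product)                                  # .275 (SUM row)

    for m, p, l, pr in zip(models, prior, likelihood, product):
        print(f"{m:22s} {p:5.2f} {l:5.2f} {pr:6.3f} {pr/total:6.3f}")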

The Bayes' Box

The Bayes' box is an alternative way of computing the posterior probabilities by Bayes' rule.  One constructs a Bayes' box by classifying a hypothetical group of people by MODEL and DATA RESULT.  One implements Bayes' rule by computing proportions conditional on a particular column of the table.

We illustrate the construction of a Bayes' box for this blood testing example.

We start with a table where the rows are the two models and the columns are the data results.  We start with 1000 representative people from the community.  (Actually, you can start with any whole number -- we usually start with a large number which will result in whole number calculations for the inside cells of the table.)

                            DATA
MODEL                      +      -    TOTAL
"have disease"
"don't have disease"
TOTAL                                   1000

We first put counts in the TOTAL column.  Since your probability of having the disease is .10, we expect 10% of 1000 = 100 people to actually have the disease.  So 1000 - 100 = 900 people don't have the disease.

                            DATA
MODEL                      +      -    TOTAL
"have disease"                           100
"don't have disease"                     900
TOTAL                                   1000

Next, you fill in the middle cells of the table using the likelihoods.  Of the 100 people with the disease, 95% (95 people) will test positive and the remaining 5 will test negative.  Of the 900 people without the disease, 20% (180 people) will test positive and the remaining 720 will test negative:

                            DATA
MODEL                      +      -    TOTAL
"have disease"            95      5      100
"don't have disease"     180    720      900
TOTAL                    275    725     1000

Now we are ready to do our inference.  You observed a + test result, and so we focus on the + column of the Bayes' box.

                            DATA
MODEL                      +    PROPORTION
"have disease"            95     95/275 = .345
"don't have disease"     180    180/275 = .655
TOTAL                    275    1.000

There were 275 people in our population who received a positive test result.  Of these 275 people, 95 (34.5%) actually had the disease and 180 (65.5%) didn't have the disease.  These percentages represent the posterior probabilities of the two models.
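
The Bayes' box arithmetic is easy to mirror in a few lines of Python.  This sketch uses our own variable names and follows the counts above:

    # The Bayes' box with whole-number counts.
    population = 1000
    diseased = round(0.10 * population)    # 100 people have the disease
    healthy = population - diseased        # 900 people do not

    pos_diseased = round(0.95 * diseased)  # 95 true positive results
    pos_healthy = round(0.20 * healthy)    # 180 false positive results

    total_pos = pos_diseased + pos_healthy # 275 positive results in all
    print(pos_diseased / total_pos)        # 0.345..., P(disease given +)
    print(pos_healthy / total_pos)         # 0.654..., P(no disease given +)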

Interpretation

Bayes' rule tells us how we change our model probabilities after obtaining new information (data).  It is instructive to compare the prior and posterior probabilities to see if the change in probabilities makes sense.

The prior and posterior probabilities for the disease states are shown below.

MODEL                   PRIOR   POSTERIOR (after observing +)
"have disease"            .1       .345
"don't have disease"      .9       .655

Before the blood test was taken, you thought that your chances of having the disease (.1) were similar to those of a typical person in the community.  Now that you have gotten a positive blood test result, your chances of having the disease have increased to .345.  This makes sense -- one would expect the probability of "have disease" to increase.  But most people are surprised that the new probability of disease is only .345.  Actually, it is still more likely that you don't have the disease after getting this positive test result.


Page Author: Jim Albert (c) 
albert@math.bgsu.edu 
Document: http://math-80.bgsu.edu/nsf_web/main.htm/primer/topic15.htm 
Last Modified: October 17, 2000