MCB111: Mathematics in Biology (Spring 2018)
week 02: Probability and inference
 An example: bacterial mutation wait times
 Probabilities used to quantify degrees of belief
 Forward probabilities and inverse probabilities
 Notation: likelihood and priors
 Using posterior probabilities to make subsequent predictions
 Parameter’s best estimates and confidence intervals
 Best estimates and confidence intervals for the bacterial mutation wait times
For this topic, MacKay’s Chapter 3 is a good source, as are MacKay’s lecture 10 and Sivia’s Chapter 2. I would also read the short article ``What is Bayesian statistics?’’ by S. R. Eddy.
An example: bacterial mutation wait times
In a large bacterial colony, each bacterium can independently mutate after some wait time t.
In a particular experiment, in which we observe mutation events that occur in a window of time from 0 to 20 minutes, we observe N = 6 bacteria that mutated at times t = 1.2, 2.1, 3.4, 4.1, 7 and 12 minutes.
Let’s assume that the probability that a bacterium mutates after time t follows an exponential decay,
\begin{equation} P(t\mid \lambda) \propto e^{-t/\lambda}, \end{equation}
where we call λ the mutation rate.
What can we say about the mutation parameter λ (which has dimensions of time)?
If the observation time were infinite, we recall that the exponential distribution has mean λ, and we could use the sample mean as a proxy for the value of the mutation rate λ.
The sample mean is
\begin{equation} \frac{1.2+2.1+3.4+4.1+7+12}{6} = 4.97. \end{equation}
This estimation does not tell us anything about mutations that occur after 20 mins, which can happen, so we know this is an underestimation of λ, but by how much?
Let’s try to do some inference on the mutation parameter λ. We can write down the probability (density) of a bacterium mutating at time t as
\begin{equation} P(t\mid \lambda) = \frac{e^{-t/\lambda}}{Z(\lambda)}, \end{equation}
where normalization imposes \begin{equation} Z(\lambda) = \int_0^{20} e^{-t/\lambda}\, dt = \left. -\lambda\, e^{-t/\lambda}\right|_{t=0}^{t=20} = \lambda\left(1 - e^{-20/\lambda}\right). \end{equation}
Figure 1. The probability density as a function of t, for different values of the mutation parameter.
If we observe N individual bacteria that have mutated at times t_1, …, t_N, since they are all independent,
\begin{equation} P(t_1,\ldots, t_N\mid \lambda) = \prod_i P(t_i\mid \lambda) = \frac{e^{-\sum_i t_i/\lambda}}{Z^N(\lambda)}. \end{equation}
Using Bayes’ theorem, we have \begin{equation} P(\lambda\mid t_1,\ldots, t_N) = \frac{P(t_1,\ldots, t_N\mid \lambda)\, P(\lambda)}{P(t_1,\ldots, t_N)} \propto \frac{e^{-\sum_i t_i/\lambda}}{Z^N(\lambda)}\, P(\lambda). \end{equation}
Now, we have turned around what was the probability of the data given the parameter into a probability for the parameter itself, given the data.
What does this function of λ tell us?
We can plot P(t|λ) as a function of time for different values of λ, which is the standard way to look at the exponential distribution (Figure 1). But we can also plot P(t_1,…,t_N|λ) as a function of λ for our particular example (Figure 2).
Figure 2. The probability of the data given the model (aka. the likelihood) as a function of the mutation parameter. The maximal probability corresponds to a mutation rate of 5.5.
This distribution already gives a clear picture of which values of the parameter λ are most favored. From it you can calculate, for instance, the value of λ that maximizes the probability of the data, which is λ = 5.5 (larger than the sample mean, which is 4.97).
This is what we can do using Bayes’ theorem: obtain information about the parameters of the model (in this case λ) based on what the data tell us.
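This maximization can also be carried out numerically. Below is a minimal sketch (not from the notes): it assumes the six wait times and the 20-minute observation window from the example, and finds the maximum of the log probability of the data by a brute-force grid search.

```python
import math

# Observed mutation times in minutes (assumed from the example in the text)
times = [1.2, 2.1, 3.4, 4.1, 7.0, 12.0]
N = len(times)
T = 20.0             # observation window in minutes
S = sum(times)       # sum of observed wait times

def log_likelihood(lam):
    """log P(t_1..t_N | lam) for the exponential truncated to [0, T]."""
    Z = lam * (1.0 - math.exp(-T / lam))   # normalization Z(lambda)
    return -S / lam - N * math.log(Z)

# Brute-force grid search for the lambda that maximizes the probability
# of the data.
grid = [0.5 + 0.001 * i for i in range(30001)]
lam_star = max(grid, key=log_likelihood)
print(round(lam_star, 2))   # about 5.5, larger than the sample mean ~4.97
```

Note that the maximum sits above the sample mean precisely because the truncated normalization Z(λ) accounts for mutations we could not observe past 20 minutes.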
Probabilities used to quantify degrees of belief
The mutation wait time λ has a unique value (within a fixed bacterial environment) that correctly describes the process. We use a probability distribution to describe λ, but that does not mean that we think the value of λ is the outcome of a stochastic process. Different values of λ represent mutually exclusive alternatives, of which only one is true. P(λ | D, H) represents our beliefs (or state of knowledge) about each one of those alternative values given the data and the assumptions. We call this the posterior probability of the parameter, given the data and the assumptions.
There is an interesting historical precedent. Pierre-Simon Laplace (1749-1827), astronomer and mathematician, used his probability theory (Laplace rediscovered Bayes’ theorem on his own) to determine a probability distribution for the mass of Saturn, based on different observations of Saturn’s orbit from different observatories. Obviously, Saturn’s mass is not a random variable from which we could sample. Laplace’s probability distribution (which is Gaussian-like) is a posterior probability for the mass of Saturn based on the existing knowledge. Laplace’s estimate of Saturn’s mass based on that posterior distribution differs from the modern value by only about 0.5%.
Forward probabilities and inverse probabilities
Given the assumptions listed above (bacterium mutation times follow an exponential distribution), which we represent by H, and the data (the six-point dataset), which we represent by D, the posterior probability of the parameter λ is \begin{equation} P(\lambda\mid D, H) = \frac{P(D\mid \lambda, H)\, P(\lambda) }{P(D\mid H)}. \end{equation}
This result shows that probabilities can be used in two different ways:

Forward probabilities. They describe frequencies of outcomes in random experiments. They require a generative (probabilistic) model, from which we can calculate the probabilities of quantities produced by the process. These quantities can be sampled.

Inverse probabilities. They also require a generative (probabilistic) model, but an inverse probability refers to a quantity not directly produced by the process. For any such derived quantity, we calculate its conditional probability given the observed quantities. Inverse probabilities require the use of Bayes’ theorem.
Sivia represents these two situations with a graph in his book (Figure 1.1) that I reproduce here.
Notation: likelihood and priors
If λ denotes the unknown parameters, D denotes the data, and H the overall hypothesis, the equation
\begin{equation} P(\lambda\mid D, H) = \frac{P(D\mid \lambda, H) P(\lambda\mid H) }{P(D\mid H)} \end{equation}
is written as
\begin{equation} \mbox{posterior}\quad = \quad \frac{\mbox{likelihood}\,\times\,\mbox{prior}}{\mbox{evidence}} \end{equation}
Some important points:

Priors are not an ``initial guess’’ of the value of the parameters. Specifying a prior means providing a whole probability distribution over all values of the parameter(s), not singling out one particular value.
Priors are the most subjective part of the inference process. If you have no other input, the maximum entropy principle tells you that you should use a uniform distribution as the prior distribution.
In our previous example, a uniform prior would mean P(λ | H) = constant, so that P(λ | D, H) ∝ P(D | λ, H).

The value of the evidence is not important. The evidence, P(D | H), does not depend on the parameters, and oftentimes it is left uncalculated if you are only interested in the relative posterior probabilities of the parameters. The evidence can be calculated by marginalization as \begin{equation} P(D\mid H) = \int_{\lambda} P(D\mid \lambda, H)\ P(\lambda\mid H)\ d \lambda. \end{equation} In week 03’s lectures, we will use the evidence when comparing different models.
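As a sketch of how this marginalization can be done in practice for the bacterial example, the integral can be approximated on a grid. The uniform prior bounds on λ of [0.5, 50] below are an illustrative assumption, not part of the notes.

```python
import math

# Data assumed from the bacterial example in the text
times = [1.2, 2.1, 3.4, 4.1, 7.0, 12.0]
N, T, S = len(times), 20.0, sum(times)

def likelihood(lam):
    """P(D | lam, H) for the truncated exponential model."""
    Z = lam * (1.0 - math.exp(-T / lam))
    return math.exp(-S / lam) / Z ** N

# Uniform prior over an assumed range [lo, hi]; the evidence is the
# integral of likelihood * prior, approximated by the midpoint rule.
lo, hi, n_steps = 0.5, 50.0, 20000
dlam = (hi - lo) / n_steps
prior = 1.0 / (hi - lo)
evidence = sum(likelihood(lo + (i + 0.5) * dlam) * prior * dlam
               for i in range(n_steps))

# With the evidence in hand, the posterior density is properly normalized:
def posterior(lam):
    return likelihood(lam) * prior / evidence

total = sum(posterior(lo + (i + 0.5) * dlam) * dlam for i in range(n_steps))
print(evidence > 0, round(total, 3))  # the posterior integrates to 1
```

If you only care about relative posterior probabilities (e.g. finding the maximum), you can skip the evidence entirely, as the text says.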

Never say ``the likelihood of the data’’. You can refer to P(D | λ, H) as the likelihood of the parameters, or more correctly the likelihood of the parameters given the data. ``Likelihood’’ signals that, as a function of the parameters, it is NOT a probability distribution (it does not sum to one over all values of the parameters, although for a given value of the parameters it does sum to one over all data).
I prefer to always refer to P(D | λ, H) as the probability of the data given the parameters.

The likelihood principle. Given a generative model P(D | λ, H) for data D given parameters λ, and having observed a particular outcome D, all inferences and predictions should be based only on the function P(D | λ, H).
This looks deceivingly simple, but many classical statistical tests fail to obey it, as they introduce additional and obscure assumptions that are not part of the generative process.
Using posterior probabilities to make subsequent predictions
Another example: the effectiveness of a new vaccine
A new malaria vaccine is tested on a group of volunteers. From a group of N = 10 subjects, n = 6 are malaria free a year after their vaccination.
What is the probability that the vaccine is effective?
The probability, given the vaccine effectiveness f, that n of the N subjects are malaria free is given by the binomial distribution \begin{equation} P(n \mid N, f) = {N\choose n} f^{n} (1-f)^{N-n}. \end{equation}
The posterior probability of f is \begin{equation} P(f\mid n,N) = \frac{P(n\mid N, f)\, P(f)}{P(n\mid N)}. \end{equation}
The evidence (which does not depend on f) is given by \begin{equation} P(n\mid N)= \int_0^1 P(n\mid N, g)\, P(g)\,dg. \end{equation}
If we assume a uniform prior P(f) = 1, \begin{equation} P(f\mid n,N) = \frac{f^{n} (1-f)^{N-n}}{\int_0^1 g^{n} (1-g)^{N-n}\,dg}. \end{equation}
The denominator is a beta function and has a nice analytical expression that is important to know, \begin{equation} \int_0^1 g^{n} (1-g)^{N-n} \, dg = \frac{n!\,(N-n)!}{(N+1)!}. \end{equation}
Our inference for f is then \begin{equation} P(f\mid n,N) = \frac{(N+1)!}{n!\,(N-n)!}\,f^{n} (1-f)^{N-n}. \end{equation}
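The beta-function identity used for the normalization above can be checked against a direct numerical integration of the unnormalized posterior; a minimal sketch for the N = 10, n = 6 case:

```python
import math

N, n = 10, 6

# Closed-form normalization from the beta-function identity:
# integral over [0,1] of f^n (1-f)^(N-n) df = n!(N-n)!/(N+1)!
analytic = (math.factorial(n) * math.factorial(N - n)
            / math.factorial(N + 1))

# Numerical integral of the same quantity by the midpoint rule.
steps = 100000
df = 1.0 / steps
numeric = 0.0
for i in range(steps):
    f = (i + 0.5) * df
    numeric += f ** n * (1 - f) ** (N - n) * df

print(round(analytic, 8), round(numeric, 8))  # both agree (1/2310 here)
```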
For the case N = 10 and n = 6, this posterior probability of f is given in Figure 3.
Figure 3. Posterior probability density for the malaria vaccine success frequency given the data of N=10 total subjects and n=6 malaria free subjects.
We can calculate the most probable value of f (i.e. the value that maximizes the posterior probability density) and the mean value of f. This is going to be part of your homework this week.
Now, we can use the posterior probability distribution of f to make predictions.
Given the pilot test we have done so far for this malaria vaccine, you would like to estimate the probability that a new subject would be malaria free after treatment with the vaccine.
The probability of finding one malaria-free subject, for a given effectiveness f, is f; the mean of f with respect to the posterior distribution is then
\begin{equation} P(\mbox{next subject malaria free}\mid n,N) = \int_0^1 f \times P(f\mid n, N)\, df. \end{equation}
Notice that we are not putting our bets on one particular value of the effectiveness parameter; instead we integrate over all possible values of f. This has the effect of taking our uncertainty about f into account when predicting. This concept is at the heart of Bayesian inference.
The solution is
\begin{equation} P(\mbox{next subject malaria free}\mid n,N) = \frac{n+1}{N+2}. \end{equation}
This result is known as Laplace’s rule.
For N = 10 subjects and n = 6, the probability that the next subject is malaria free is 7/12 ≈ 0.58.
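Laplace’s rule can be verified numerically: the posterior mean of f should come out to (n+1)/(N+2). A quick sketch for N = 10, n = 6:

```python
import math

N, n = 10, 6

# Posterior density P(f | n, N) with a uniform prior, as in the text.
coef = math.factorial(N + 1) / (math.factorial(n) * math.factorial(N - n))
def posterior(f):
    return coef * f ** n * (1 - f) ** (N - n)

# Posterior mean of f by the midpoint rule; Laplace's rule says this
# should equal (n + 1) / (N + 2).
steps = 100000
df = 1.0 / steps
mean = sum(f * posterior(f) * df
           for f in ((i + 0.5) * df for i in range(steps)))

print(round(mean, 4), round((n + 1) / (N + 2), 4))  # both ~ 0.5833
```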
This is a way in which Bayesian statistics differs from classical statistics. In classical statistics, once you run a ``test’’ that accepts a model at some significance level, you then use exclusively that model to make predictions. Here, we have made our next prediction by considering (integrating over) all possible values of the effectiveness parameter.
Figure 4. Posterior probability distribution for the vaccine effectiveness frequency f for different data. As the number of data points increases, the probability distribution narrows around the optimal value of the parameter.
It is interesting to see how the posterior probability distribution changes as the amount of data increases. In Figure 4, I show a few examples for our vaccine example. Notice that with just 2 data points, I can already say a lot about the value of f. As the amount of data increases, the posterior distribution narrows around the true value.
Note on marginalization:
We have found an example of marginalization when calculating the evidence, that is, the probability of the data given the hypothesis H that the results follow a binomial distribution, where the data D is that n out of N individuals suffer no malaria after one year.
The hypothesis is characterized by the effectiveness of the vaccine f, but if all we care about is the hypothesis itself, and not the actual value of f, then the effectiveness is a variable that we integrate out,
\begin{equation} P(D\mid H) = \int_0^1 P(n\mid N, f)\ P(f\mid H)\ df. \end{equation}
Assuming a uniform distribution for the prior, P(f | H) = 1, we have
\begin{equation} P(D\mid H) = {N\choose n} \int_0^1 f^{n} (1-f)^{N-n}\, df = {N\choose n}\, \frac{n!\,(N-n)!}{(N+1)!} = \frac{1}{N+1}, \end{equation}
where we have used the result for the beta function given earlier.
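With a uniform prior this marginalization has a neat consequence: the evidence simplifies to 1/(N+1), independent of n. A quick numerical check of that simplification:

```python
import math

# Evidence C(N,n) * n!(N-n)!/(N+1)! for the binomial model with a
# uniform prior; it should reduce to 1/(N+1) for every n.
def evidence(n, N):
    return (math.comb(N, n) * math.factorial(n) * math.factorial(N - n)
            / math.factorial(N + 1))

N = 10
vals = [evidence(n, N) for n in range(N + 1)]
print(all(abs(v - 1 / (N + 1)) < 1e-12 for v in vals))  # True
```

Intuitively, before seeing any data, every outcome n = 0, 1, …, N is equally probable under this hypothesis, and there are N+1 of them.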
Note on probability densities.
Figure 5. Posterior probability density and cumulative probability for the effectiveness of a malaria vaccine. For N=10 subjects, and n=6 malaria free after one year.
We have calculated the posterior probability density for the effectiveness f of the malaria vaccine as
\begin{equation} P(f\mid n,N) = \frac{(N+1)!}{n!\,(N-n)!}\, f^{n} (1-f)^{N-n}. \end{equation}
If we plot this probability density function (PDF) (for N = 10 and n = 6), we see that for many values of f the PDF is larger than one! (see Figure 5).
Yet the cumulative distribution function (CDF), defined as the probability that the effectiveness is less than or equal to f,
\begin{equation} \mbox{CDF}(f) = \int_0^f P(g\mid n,N)\, dg, \end{equation}
is properly bounded (see Figure 5). That is because of the other term in the integrand (dg), which corresponds to the width that defines the infinitesimal area to sum. The product of the two terms is such that the CDF is bounded by one for any value of f.
So, to interpret the actual value of the posterior probability for the effectiveness parameter, as with sampling, you want to look at the CDF. One can ask, for instance, what is the probability that the effectiveness is at least some value f_0?
That is given by
\begin{equation} P(f \geq f_0\mid n,N) = 1 - \mbox{CDF}(f_0). \end{equation}
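A short numerical sketch of this PDF/CDF distinction, using the N = 10, n = 6 posterior (the threshold value 0.5 below is illustrative, not from the notes):

```python
import math

N, n = 10, 6
coef = math.factorial(N + 1) / (math.factorial(n) * math.factorial(N - n))

def pdf(f):
    """Posterior density P(f | n, N) with a uniform prior."""
    return coef * f ** n * (1 - f) ** (N - n)

def cdf(f0, steps=100000):
    """P(f <= f0): midpoint-rule integral of the density from 0 to f0."""
    df = f0 / steps
    return sum(pdf((i + 0.5) * df) * df for i in range(steps))

print(round(pdf(0.6), 2))    # density near the mode: well above 1
print(round(cdf(1.0), 3))    # total probability: 1.0
# e.g. the probability that the effectiveness exceeds an assumed
# threshold of 0.5:
print(round(1 - cdf(0.5), 3))
```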
Parameter’s best estimates and confidence intervals
We have had a taste of how posterior probabilities convey information about the value of the parameters given the data. Often, we would like to summarize the information in the posterior probability distribution into a best estimate (the maximum likelihood value) and its reliability (the standard deviation around the best estimate).
We can tell a lot about confidence intervals by doing a Taylor expansion around the maximum likelihood estimate.
We have calculated a posterior probability density for our parameters given the data and the hypothesis, \begin{equation} P(\lambda\mid D, H). \end{equation}
The best estimate of λ is the value λ* that satisfies
\begin{equation} \left.\frac{d P(\lambda\mid D, H)}{d\lambda}\right|_{\lambda_\ast} = 0. \end{equation}
Let’s now consider the function L = log P(λ | D, H) instead of the probability itself. Because the logarithm is monotonically increasing, an optimal value of P is also an optimal value of L, and it is easier to work with L than with P. A Taylor expansion around λ* tells us
\begin{equation} L(\lambda) = L(\lambda_\ast) + \left.\frac{d L}{d\lambda}\right|_{\lambda_\ast} (\lambda-\lambda_\ast) + \frac{1}{2} \left.\frac{d^2 L}{d\lambda^2}\right|_{\lambda_\ast} (\lambda-\lambda_\ast)^2 + \ldots \end{equation}
The linear term is zero, because we are expanding around a maximum. Next comes the quadratic term. Thus, ignoring higher orders, we have a Gaussian distribution to approximate the posterior,
\begin{equation} P(\lambda\mid D, H) \approx P(\lambda_\ast\mid D, H)\ e^{-\frac{(\lambda-\lambda_\ast)^2}{2\sigma^2}}, \end{equation}
where σ is given by
\begin{equation} \sigma = \left(-\left.\frac{d^2 L}{d\lambda^2}\right|_{\lambda_\ast}\right)^{-1/2}, \end{equation}
which is well defined because, as λ* is a maximum, the second derivative is negative.
Best estimates and confidence intervals for the bacterial mutation wait times
Let’s calculate the best estimate and confidence interval for the posterior distribution of the exponential problem we started with: the time between mutations in a bacterial colony.
The posterior probability density for the time parameter λ was \begin{equation} P(\lambda\mid t_1,\ldots,t_N) = A\, \frac{e^{-\mu N/\lambda}}{\lambda^N\ \left(1-e^{-20/\lambda}\right)^N}, \end{equation} where μ is the mean of the given waiting times, and the normalization constant A is independent of λ.
The log of the posterior is given by \begin{equation} L = \log A - \frac{\mu N}{\lambda} - N\ \log{\lambda} - N\ \log\left(1-e^{-20/\lambda}\right). \end{equation}
For simplicity’s sake, we are going to ignore the last term. If instead of measuring for 20 minutes we had done it for a much longer period of time T → ∞, the last term vanishes, and then \begin{equation} L = \log A - \frac{\mu N}{\lambda} - N\ \log{\lambda}. \end{equation}
The derivative with respect to λ is
\begin{equation} \frac{d L}{d\lambda} = \frac{N\mu}{\lambda^2} - \frac{N}{\lambda}. \end{equation}
The value of λ that maximizes the log probability is given by
\begin{equation} \left.\frac{d L}{d\lambda}\right|_{\lambda_\ast} = 0, \end{equation}
or
\begin{equation} \lambda_\ast = \mu. \end{equation}
The second derivative is
\begin{equation} \frac{d^2 L}{d\lambda^2} = -\frac{2N\mu}{\lambda^3} +\frac{N}{\lambda^2} = \frac{N}{\lambda^2}\left(1-2\frac{\mu}{\lambda}\right), \end{equation}
which evaluated at λ* = μ gives −N/μ²,
and the standard deviation is given by
\begin{equation} \sigma = \left(-\left.\frac{d^2 L}{d\lambda^2}\right|_{\lambda_\ast=\mu}\right)^{-1/2} = \left(\frac{N}{\mu^2}\right)^{-1/2}, \end{equation}
or
\begin{equation} \sigma = \frac{\mu}{\sqrt{N}}. \end{equation}
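These closed forms can be checked numerically against the simplified log posterior. A minimal sketch, assuming the sample mean μ = 4.97 and N = 6 from the example, that finds the maximum on a grid and estimates the curvature by a finite difference:

```python
import math

mu, N = 4.97, 6   # sample mean and number of observations (from the text)

def L(lam):
    """Log posterior (up to an additive constant) in the large-T limit."""
    return -mu * N / lam - N * math.log(lam)

# The maximum should sit at lambda = mu ...
grid = [0.5 + 0.0001 * i for i in range(200000)]
lam_star = max(grid, key=L)

# ... and the curvature there should give sigma = mu / sqrt(N).
h = 1e-4
second = (L(lam_star + h) - 2 * L(lam_star) + L(lam_star - h)) / h ** 2
sigma = 1.0 / math.sqrt(-second)

print(round(lam_star, 2), round(sigma, 2))  # ~ 4.97 and ~ 2.03
```

This reproduces the 4.97 ± 2.03 estimate quoted in the Figure 6 caption.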
Figure 6. Posterior probability of the mutation wait time (for a slightly simplified case in which the observation time is very large) and its Gaussian approximation around the maximal value that estimates the value of the parameter to be 4.97 +/ 2.03.
Thus our estimate of the parameter is given by
\begin{equation} \lambda = \mu \pm \frac{\mu}{\sqrt{N}}, \end{equation}
where μ is the sample mean.
This result is much more general than how we have deduced it here:
the error in our parameter estimates is always proportional to the inverse of the square root of the amount of data.
So, whichever experiment you are running, always be mindful of this quantity,
\begin{equation} \frac{1}{\sqrt{N}}. \end{equation}
In Figure 6, I present the Gaussian approximation and the standard deviation compared with the actual posterior distribution for the bacterial mutation wait time.