MCB111: Mathematics in Biology (Fall 2024)
week 09:
Random Walks in Biology
Preliminaries
Present all your reasoning, derivations, plots and code as part of the homework. Imagine that you are writing a short paper that anyone in class should be able to understand. If you are stuck at some point, please describe the issue and how far you got. If you are working in Python, a Jupyter notebook is not required, but it is recommended.
A more realistic Brownian motion that includes random steps in time
You are going to implement a more sophisticated 1D random walk than the one we discussed in class.
Particles are emitted from a fixed point, which we define as \(x=0\). When the particle moves, it moves either to the left with probability \(p=0.5\) or to the right with probability \(q = 1-p = 0.5\). Nothing new so far.
The difference is that instead of moving with the tick of a clock, the particle can wait at a given position for a while before making the next move. We are going to assume that the time a particle waits at position \(i\), \(t_w^i\), follows an exponential distribution, that is,
\[P(t^i_w = t) = \frac{e^{-t/\lambda}}{\lambda}\]where \(\lambda\) is the mean wait time.
- Simulate 10 random walks assuming that \(\lambda = 0.2\ s\) up to \(t = 200\ s\). To do this simulation, you may want to first sample the wait time from the exponential distribution, and then decide on the direction to take. (See the hint below on how to do inverse sampling from the exponential distribution.)
- Now simulate 100 walks, but instead of looking at the individual trajectories, collect the positions at two different time points, \(t_1 = 200\ s\) and \(t_2 = 800\ s\). Calculate the sample mean \(\bar{x} = \langle x\rangle\) and variance \(\langle (x-\bar{x})^2\rangle\) for each of those times.
- Repeat the simulation, but change the wait time constant to \(\lambda = 20.0\ s\). Again report the sample mean and variance. Can you infer the expressions of the mean and variance as a function of \(t\) and \(\lambda\)? (A sketch of one possible implementation is given after this list.)
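A minimal sketch of one possible implementation in Python with NumPy; the function names, the seed, and the output format are illustrative choices, not part of the assignment:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_walk(lam, t_max, rng):
    """One 1D walk: exponential wait times (mean lam, in s), unbiased +/-1 steps."""
    t, x = 0.0, 0
    times, positions = [0.0], [0]
    while True:
        t += rng.exponential(lam)      # sample the wait time before the next move
        if t > t_max:
            break
        x += rng.choice([-1, 1])       # then decide the direction, p = q = 0.5
        times.append(t)
        positions.append(x)
    return np.array(times), np.array(positions)

def final_positions(lam, t_obs, n_walks, rng):
    """Positions of n_walks independent walks observed at time t_obs."""
    return np.array([simulate_walk(lam, t_obs, rng)[1][-1] for _ in range(n_walks)])

# Individual trajectories for the first task can be drawn with plt.step(times, positions).
# Sample means and variances for both wait time constants and observation times:
for lam in (0.2, 20.0):
    for t_obs in (200.0, 800.0):
        x = final_positions(lam, t_obs, 100, rng)
        print(f"lam = {lam:4.1f} s, t = {t_obs:5.0f} s: "
              f"mean = {x.mean():7.3f}, var = {x.var():9.3f}")
```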
Hints
- Hint about inverse sampling from the exponential distribution
The probability density function (PDF) is
\[PDF(t) = \frac{1}{\lambda}\ e^{-t/\lambda}.\]In order to inverse sample a time \(t\), we use the cumulative distribution function (CDF)
\[CDF(t) = \int_0^{t} \frac{1}{\lambda}\ e^{-t^\prime/\lambda}\ dt^\prime\]such that for a random number \(0\leq x\leq 1\), the sampled time \(t\) is given by
\[x = CDF(t)\quad \mbox{or}\quad t = CDF^{-1}(x).\]The CDF of the exponential distribution has a closed form given by
\[CDF(t) = \int_0^{t} \frac{1}{\lambda}\ e^{-t^\prime/\lambda}\ dt^\prime = 1 - e^{-t/\lambda}.\]Then, the sampled time is given by
\[t = -\lambda\ \log(1-x).\]Since, if \(x\) is a random number in \([0,1]\), then \(1-x\) is also a random number in the same range, we can equivalently take the sampled time to be
\[t = -\lambda\ \log(x).\]
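As a quick sanity check of this formula, a short sketch comparing the inverse-transform sampler with NumPy's built-in exponential sampler (the sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
lam = 0.2                          # mean wait time, in s

# Inverse-transform sampling: t = -lam * log(x) with x ~ Uniform[0, 1)
x = rng.random(100_000)
t = -lam * np.log(1.0 - x)         # 1 - x avoids log(0); same distribution as -lam*log(x)

print(f"inverse-transform sample mean: {t.mean():.4f} (expect {lam})")
print(f"numpy built-in sample mean:    {rng.exponential(lam, 100_000).mean():.4f}")
```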
- A hint about the last question
For the simple random walks we described in class, every time the clock ticked there was a move to the left or to the right. The total number of (left or right) moves \(N\) is therefore always identical to the number of time steps, that is, \(N = t/\tau\). In other words, at a given time \(t\), all sampled trajectories have experienced the same number \(N=t/\tau\) of left/right moves.
Then the probability of being at position \(x\) after time \(t/\tau=N\) is (as we discussed in class) given by
\[P(x\mid t/\tau=N) = \frac{N!}{\left(\frac{N-x}{2}\right)!\left(\frac{N+x}{2}\right)!} \left(\frac{1}{2}\right)^N\]where, of the \(N\) total moves, \(\frac{N-x}{2}\) were in the left direction and \(\frac{N+x}{2}\) were in the right direction.
We also derived that the mean and variance of this unbiased random walk are given by
\[\begin{aligned} \langle x\rangle &= N (p-q) = 0\\ \langle x^2\rangle &= 4 N pq = N. \end{aligned}\]
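As a quick numerical check of these class results (the step and walk counts below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=2)
N, n_walks = 1_000, 10_000

# Each walk is the sum of N unbiased +/-1 steps; expect mean 0 and variance N.
x = rng.choice([-1, 1], size=(n_walks, N)).sum(axis=1)
print(f"mean = {x.mean():+.3f} (expect 0), var = {x.var():.1f} (expect {N})")
```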
That property no longer holds once we introduce a wait time: at a given time, each sampled trajectory could have experienced a different total number of moves, depending on the actual wait times it experienced. That is, the number of left/right moves a particle has made by a given time is a “hidden variable”, so we use marginalization once again!
The probability of being at position \(x\) at time \(t\), \(P(x\mid t)\), could be the result of an arbitrary number \(N\) of moves,
\[P(x\mid t) = \sum_{N=0}^{\infty} P(x, N\mid t) = \sum_N P(x\mid N)\ P(N\mid t).\]The first term is the same as in the simpler case: a Binomial distribution that gives the probability of being at position \(x\) after \(N\) total moves.
The second term gives the probability that, by time \(t\), the particle has experienced \(N\) left/right moves.
In the simpler case we studied in class, with no wait time,
\[P(N\mid t) = \delta \left( N=\frac{t}{\tau}\right),\]where the delta function indicates that the probability takes value one for \(N=\frac{t}{\tau}\), and zero otherwise.
For our case, in which the wait times follow an exponential distribution, that probability is given by a Poisson distribution with parameter \(t/\lambda\) (not an obvious result; you can find it derived, for instance, here, at the bottom of page 2).
\[P(N\mid t) = \left(\frac{t}{\lambda}\right)^N \frac{e^{-t/\lambda}}{N!}.\]
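A quick numerical check of this claim: count how many exponential wait times have been completed by time \(t\), and compare with a Poisson of mean \(t/\lambda\) (the parameter values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=3)
lam, t_obs, n_trials = 0.2, 20.0, 10_000    # expect t_obs/lam = 100 moves on average

# For each trial, count how many exponential waits are completed by time t_obs.
counts = np.empty(n_trials, dtype=int)
for i in range(n_trials):
    total, n = 0.0, 0
    while total <= t_obs:
        total += rng.exponential(lam)
        n += 1
    counts[i] = n - 1                       # the last wait overshot t_obs
print(f"empirical mean, var of N:  {counts.mean():.1f}, {counts.var():.1f}")
print(f"Poisson(t/lam) mean = var: {t_obs/lam:.1f}")
```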
Combining the two together, we have
\[P(x\mid t) = \sum_{N=0}^{\infty} \frac{N!}{\left(\frac{N-x}{2}\right)!\left(\frac{N+x}{2}\right)!} \left(\frac{1}{2}\right)^N \left(\frac{t}{\lambda}\right)^N \frac{e^{-t/\lambda}}{N!},\]where only terms with \(N \geq |x|\) and \(N\) of the same parity as \(x\) contribute. Using this distribution, you can calculate the analytic expressions for the mean and variance and compare them with your simulation results.
It may look intimidating, but think about it: the dependence on \(x\) appears only in the “binomial” component, so you can exchange the order in which you perform the sum over \(x\) and the sum over \(N\).
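For instance, carrying out that reordering and using the class results \(\sum_x x\ P(x\mid N) = 0\) and \(\sum_x x^2\ P(x\mid N) = N\), the moments of \(x\) reduce to moments of the Poisson distribution:
\[\begin{aligned} \langle x\rangle_t &= \sum_{N=0}^{\infty} P(N\mid t) \sum_x x\ P(x\mid N) = 0,\\ \langle x^2\rangle_t &= \sum_{N=0}^{\infty} P(N\mid t) \sum_x x^2\ P(x\mid N) = \sum_{N=0}^{\infty} N\ P(N\mid t) = \langle N\rangle = \frac{t}{\lambda}, \end{aligned}\]so the variance grows linearly with \(t\) and inversely with the mean wait time, which you can compare against your sample variances.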
The type of Brownian motion described here, and even more general cases with different wait time distributions, are referred to as continuous time random walks (CTRW).