*David A. Tanzer, August 24, 2020, in unit Epidemic Models 2*.

# Continuous vs. Discrete Flow

With the advent of the rate equation, we have shifted into a model of reaction networks consisting of “pumps” that move at continuous rates, transferring “mathematical fluid” between containers. Yet, as we have indicated, the real activity proceeds by discrete steps: an individual recovers; two molecules collide to form a compound.

How are these views to be reconciled?

For concreteness, let’s recall our reaction for the cooking of popcorn:

$$\mathrm{Unpopped} \xrightarrow{\mathrm{Cooking}} \mathrm{Popped}$$

Solving the rate equations gives us formulas for Unpopped(t) and Popped(t) as functions of time $t$. For now the details don’t matter, so we’ll write

- Unpopped(t) = f(t), for some mathematical function $f$.

Let’s now measure the progress of cooking by the *percentages* of Unpopped and Popped kernels, at time points $t$. So Unpopped(0) = 100%, and Unpopped(t) approaches 0% as $t$ gets larger and larger.
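As a concrete sketch of what $f$ could look like: assuming mass-action kinetics with a popping rate constant $r$ (a parameter not specified above), the rate equation for this one-step reaction and its solution would be

$$\frac{d\,\mathrm{Unpopped}}{dt} = -r\,\mathrm{Unpopped}(t), \qquad \mathrm{Unpopped}(t) = 100\% \cdot e^{-rt},$$

which indeed starts at 100% and approaches 0% as $t$ grows.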

This is a smooth, continuous flow, which effectively assumes that the pot contains a homogeneous mixture of unpopped kernels, not discrete kernels, rather like a mixture of two colors of paint (whose composition changes over time to produce a changing color).

But now let’s look at the actual situation, with discrete kernels.

Suppose the pot just contained four kernels.

Let $t_1$ be the time the first kernel pops, $t_2$ the time the second kernel pops, etc., so we’ll have:

$$0 \le t_1 \le t_2 \le t_3 \le t_4$$

In the interval from 0 up to $t_1$, 100% of the kernels are unpopped, between $t_1$ and $t_2$ 75% are unpopped, …, and after $t_4$ 0% are unpopped. So the actual curve for Unpopped(t) looks like this:

*[Figure: Unpopped(t) as a downward step function, with the continuous rate-equation solution overlaid.]*

It’s a downward-moving step function. We’ve also depicted the continuous solution produced by the rate equation.
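As a small illustrative sketch (the four pop times below are made up for the example), the step function for one run can be computed directly from the pop times:

```python
# Hypothetical pop times t1 <= t2 <= t3 <= t4 for one run with four kernels.
pop_times = [1.2, 2.0, 3.5, 5.1]

def unpopped_percent(t, pop_times):
    """Percentage of kernels still unpopped at time t: a downward step function."""
    still_unpopped = sum(1 for tk in pop_times if tk > t)
    return 100.0 * still_unpopped / len(pop_times)

# Before the first pop, 100% are unpopped; between t1 and t2, 75%; after t4, 0%.
print(unpopped_percent(0.0, pop_times))   # 100.0
print(unpopped_percent(1.5, pop_times))   # 75.0
print(unpopped_percent(6.0, pop_times))   # 0.0
```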

Now if we run the experiment again, with four new kernels, we’ll get different values for $t_1$, $t_2$, $t_3$ and $t_4$, and a different downward step function.

We may think of the step function as a kind of “random variable” — in this case a curve — that gets selected on every run of an experiment.

Returning to the question of the relationship between the continuous and discrete models, here is the key to the answer: the continuous model gives the curve of the *expected value*, which can be understood as the average of the step functions obtained over a larger and larger number of trials.

But there’s an important and common case when the continuous curve becomes a good approximation to the actual behavior: when the population counts are large.

We see this in hydraulics, where a system of pumps and containers treats the water moving through it as a continuous fluid. In fact this is how we think of water in practice. But as we know, the treatment of water as a continuous fluid is an idealization of the reality, in which the fluid is really a vast assemblage of colliding molecules.

At the other extreme, we had our popcorn pot with just four kernels, which displays a marked difference from the behavior of a continuous fluid.

Well what if there were 40 kernels?

Now each run of the experiment gives you a step curve that goes down from 100% to 0% in forty steps. And you will observe that the curves tend to hew more closely to the average behavior.

With 4 million kernels, there would be four million steps, and the curves would look very close to the expected curve. There would still be random variations in each specific case, but the deviations from the continuous curve would be hard to spot.
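A quick simulation makes this concrete. As a sketch (with assumptions not fixed in the text above): suppose each kernel’s pop time is exponentially distributed with a rate constant $r$, so the rate equation predicts that a fraction $e^{-rt}$ remains unpopped at time $t$. With a large kernel count, the empirical fraction hugs that prediction:

```python
import math
import random

random.seed(0)

r = 1.0        # hypothetical popping rate constant
t = 1.0        # observation time
N = 200_000    # number of kernels

# Assume each kernel pops after an exponentially distributed waiting time,
# consistent with a constant per-kernel popping rate r.
still_unpopped = sum(1 for _ in range(N) if random.expovariate(r) > t)
empirical = still_unpopped / N

# The continuous rate equation predicts the fraction exp(-r * t).
predicted = math.exp(-r * t)

print(f"empirical fraction unpopped: {empirical:.4f}")
print(f"rate-equation prediction:    {predicted:.4f}")
```

Rerunning with N = 40 instead of 200,000 shows noticeably larger run-to-run deviations from the prediction.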

By the way, besides being *empirical* observations about the behavior of large vats of cooking popcorn, these claims can also be established from first principles; in particular, they follow from the mathematical theorem known as the Law of Large Numbers.

Next time we will close this series by commenting on the difference between discrete and continuous models for epidemic reaction networks.

*Copyright © 2020, David A. Tanzer. All Rights Reserved.*