
Monte Carlo Radiation Transport

Figure 12.1: A random walk is executed by a particle with
variable-length steps between individual scatterings or
collisions. Eventually it is permanently absorbed. The statistical
distribution of the angle of
scattering is determined by the details of the collision
process.

A particle executes a random walk through the
matter, illustrated in Fig. 12.1. It travels a certain
distance in a straight line, then collides. After the collision it has
a different direction and speed. It takes another step in the new
direction, generally with a different distance, to the next
collision. Eventually the particle has an absorbing collision, or
leaves the system, or becomes so degraded (for example in energy) that
it need no longer be tracked. The walk ends.
The total attenuation coefficient of the matter is the sum over the species present, of density $n_j$ and cross-section $\sigma_j$:
$$\Sigma_t = \sum_j n_j\,\sigma_j. \tag{12.1}$$

Figure 12.2: The probability, $\bar{P}(l)$, of survival without a
collision to position $l$ decreases by a multiplicative factor
$1-\Sigma_t\,dl$ in an increment $dl$.

Therefore (refer to Fig. 12.2), if the probability that the particle survives without a collision at least as far as $l$ is $\bar{P}(l)$, then
$$\bar{P}(l+dl) = \bar{P}(l)\,(1-\Sigma_t\,dl), \tag{12.2}$$
so that

$$\frac{\bar{P}(l+dl)-\bar{P}(l)}{dl} \;\xrightarrow[\;dl\to 0\;]{}\; \frac{d\bar{P}}{dl} = -\Sigma_t\,\bar{P}. \tag{12.3}$$

Integrating, with the initial condition $\bar{P}(0)=1$,
$$\bar{P}(l) = \exp\!\left(-\int_0^l \Sigma_t\,dl\right). \tag{12.4}$$

The probability density for the first collision to occur at $l$ is then
$$p(l) = \frac{dP}{dl} = -\frac{d\bar{P}}{dl} = \Sigma_t\,\bar{P} = \Sigma_t\exp\!\left(-\int_0^l \Sigma_t\,dl\right). \tag{12.5}$$
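For a uniform medium (constant $\Sigma_t$), the distribution (12.5) can be sampled by the inverse-transform method: if $u$ is a uniform deviate, then $l = -\ln(1-u)/\Sigma_t$ has exactly this exponential distribution. A minimal sketch (the function name and interface are illustrative, not from the text):

```python
import math
import random

def sample_free_path(sigma_t, rng):
    """Draw a distance to the next collision from
    p(l) = sigma_t * exp(-sigma_t * l), i.e. Eq. (12.5) in a uniform medium."""
    u = rng.random()                     # uniform deviate in [0, 1)
    return -math.log(1.0 - u) / sigma_t  # 1 - u avoids log(0)

rng = random.Random(1)
paths = [sample_free_path(2.0, rng) for _ in range(100_000)]
mean_path = sum(paths) / len(paths)      # approaches 1/sigma_t = 0.5
```

The sample mean approaching the mean free path $1/\Sigma_t$ is a quick check that the sampling is correct.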

When a collision occurs, the species $j$ with which the particle collided can be selected by where a second uniform random deviate falls relative to the cumulative boundaries
$$x_j = \sum_{i=1}^{j-1} n_i\,\sigma_i \Big/ \Sigma_t. \tag{12.6}$$
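The boundaries of Eq. (12.6) partition the unit interval in proportion to each species' contribution $n_j\sigma_j/\Sigma_t$, so a single uniform deviate selects the colliding species. A sketch under these assumptions (names are illustrative):

```python
import random
from bisect import bisect

def choose_species(densities, cross_sections, rng):
    """Select the colliding species j with probability n_j*sigma_j/Sigma_t,
    by locating a uniform deviate among the cumulative boundaries of Eq. (12.6)."""
    partial = [n * s for n, s in zip(densities, cross_sections)]
    sigma_t = sum(partial)
    boundaries, acc = [], 0.0
    for p in partial:
        acc += p / sigma_t
        boundaries.append(acc)               # upper boundary of each species' slot
    return bisect(boundaries, rng.random())  # index of the slot the deviate falls in

rng = random.Random(7)
counts = [0, 0, 0]
for _ in range(30_000):
    counts[choose_species([1.0, 2.0, 1.0], [0.5, 0.5, 1.0], rng)] += 1
# selection fractions approach 0.2, 0.4, 0.4
```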

Figure 12.4: "Stimulated" emission is a source of new particles caused
by the collisions of the old ones. It can multiply the particles
through succeeding generations, as the new particles (different
line styles) give rise themselves to additional
emissions.

Any additional particles produced by collisions must be tracked by the
same process as used for the original particle. They become new
tracks. In a serial code, the secondary particles must be initialized
and then set aside during the collision step. Then when the original
track ends by absorption or escape, the code must take up the job of
tracking the secondary particles. The secondary particles may generate
tertiary particles, which will also eventually be tracked. And so on.
A parallel code might offload the secondary and tertiary tracks to other
processors (or computational threads).
One might be concerned that we'll never get to the end if particles
are generated faster than they are lost. That's true; we won't. But
particle generation faster than loss corresponds to a runaway physical
situation, for example a prompt supercritical reactor. So the
real world will have problems much more troublesome than our
computational ones!
Figure 12.5: Tallying is done in discrete volumes. One can tally every
collision (green), but if the steps are larger than the
volumes, it gives better statistics to tally every volume the
particle passes through (blue).

A further statistical advantage is obtained by tallying more than just
collisions. If we want particle flux properties, as well as the
effects on the matter, we need to tally not just collisions, but also
the average flux density in all the discrete volumes. That requires us
to determine, for every volume and every passage $i$ of a particle through
it, the time $\Delta t_i$ that the particle spends in the volume and
the speed $v_i$ at which it passed through. See Fig. 12.5. When a
sufficient number of particles has been tracked, the density of
particles in the volume
is proportional to the sum $\sum_i \Delta t_i$ and the scalar flux
density${}^{79}$ to $\sum_i v_i\,\Delta t_i$. If collisional lengths (the lengths of the random walk steps) are
substantially larger than the size of the discrete volumes, then there
are more contributions to the flux property tallies than there are
walk steps, i.e. modelled collisions. The statistical accuracy of the
flux density measurement is then better than the collision tallies
(even when all collision types are tallied for every collision) by a
factor approximately equal to the ratio of collision step length to
discrete volume side-length. Therefore it may be worth the effort to
keep a tally of the total contribution to collisions of type $j$ that
we care about from every track that passes through every volume. In
other words, for each volume, to obtain the sum over all passages $i$:
$\sum _{i}\mathit{\Delta}{t}_{i}{v}_{i}{n}_{j}{\mathit{\sigma}}_{j}$. The extra cost of this process
is the geometric task of determining the length of a line during which
it is inside a particular volume. But if that task is necessary
anyway, because flux properties are desired, then it is no more work
to tally the collisional probabilities in this way.
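The geometric task just mentioned, finding the length of a straight step inside each volume, can be sketched for a one-dimensional grid of equal cells; the chord length inside a cell, divided by the speed, gives that passage's contribution $\Delta t_i$. This routine is a simplified illustration (a uniform 1-D grid, names of my own choosing), not the text's code:

```python
def tally_track(x0, x1, v, dx, time_tally):
    """Accumulate Delta t_i = (chord length)/v for every cell of a uniform
    1-D grid (cell k spans [k*dx, (k+1)*dx)) crossed by a straight step
    from x0 to x1 at speed v."""
    a, b = sorted((x0, x1))
    k = int(a // dx)                 # first cell the step touches
    while k * dx < b:
        chord = min(b, (k + 1) * dx) - max(a, k * dx)
        if chord > 0.0:
            time_tally[k] = time_tally.get(k, 0.0) + chord / v
        k += 1
    return time_tally

tally = {}
tally_track(0.3, 2.2, v=2.0, dx=1.0, time_tally=tally)
# this single step contributes to cells 0, 1, 2: times 0.35, 0.5, 0.1
```

Notice that one walk step longer than the cell size contributes several tallies, which is exactly the statistical advantage described above.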
Figure 12.6: If all particles have the same weight, there are many more
representing bulk velocities (${v}_{1}$) than tail velocities
(${v}_{2}$). Improved statistical representation of the tail is
obtained by giving the tail velocities lower weight ${w}_{2}$. If the
weights are proportional to $f$ then equal numbers of particles
are assigned to equal velocity ranges.

It is possible to compensate for this problem by assigning the particles unequal statistical weights, for example updating the weight at the $n$th step in accordance with the probability:
$$w_n = w_{n-1}\,p(v_n). \tag{12.7}$$
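One way to realize the scheme of Fig. 12.6 is to launch equal numbers of marker particles in every velocity range and give each a weight proportional to the distribution $f$ there, so that weighted tallies still reproduce $f$ while the tail retains good sample counts. A hypothetical illustration (the helper and its interface are my own, assuming a Maxwellian-like $f$):

```python
import math

def weighted_markers(f, v_edges, markers_per_bin):
    """Place equal numbers of marker particles in each velocity bin, each
    carrying weight f(v) * (bin width) / markers_per_bin, so the summed
    weights in a bin approximate the integral of f over it while tail
    bins keep as many markers as bulk bins."""
    particles = []
    for lo, hi in zip(v_edges[:-1], v_edges[1:]):
        width = hi - lo
        for m in range(markers_per_bin):
            v = lo + (m + 0.5) * width / markers_per_bin  # bin midpoints
            particles.append((v, f(v) * width / markers_per_bin))
    return particles

# Maxwellian-like f(v): tail markers get proportionally smaller weights.
f = lambda v: math.exp(-v * v)
parts = weighted_markers(f, [0.0, 1.0, 2.0, 3.0], markers_per_bin=50)
total_weight = sum(w for _, w in parts)   # approximates integral of f on [0, 3]
```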

Suppose a steady source emits particles at rate $R$, and a Monte Carlo calculation tracking $N$ source particles accumulates, in a particular cell, the tally sums
$$S_n = \sum_i \Delta t_i, \qquad S_\phi = \sum_i v_i\,\Delta t_i.$$

Deduce quantitative physical formulas for what the particle and flux densities are (not just that they are proportional to these sums), and the uncertainties in them, giving careful rationales.

A total of $N$ particles tracked from the source (not including the other particles arising from collisions that we also had to track) is the number of particles that would be emitted in a time $T=N/R$. Suppose the typical physical duration of a particle "flight" (the time from launch to absorption/loss of the particle and all of its descendants) is $\tau$. If $T \gg \tau$, then it is clear that the calculation is equivalent to treating a physical duration $T$. In that case almost all particle flights are completed within the same physical duration $T$. Only those that start within $\tau$ of the end of $T$ would be physically uncompleted; and only those that started a time less than $\tau$ before the start of $T$ would be still in flight when $T$ starts. The fraction affected is $\sim\tau/T$, which is small.

But in fact, even if $T$ is not much longer than $\tau$, the calculation is still equivalent to simulating a duration $T$. In an actual duration $T$ there would then be many flights that are unfinished at its end, and others that are part way through at its start. But on average the different types of partial flights add up to represent a number of total flights equal to $N$. The fact that in the physical case the partial flights are of different particles, whereas in the Monte Carlo calculation they are of the same particle, does not affect the average tally. Given, then, that the calculation is effectively simulating a duration $T$, the physical formulas for the particle and flux densities in a cell of volume $V$ are

$$n = S_n/TV = S_n R/NV, \qquad\text{and}\qquad \phi = S_\phi/TV = S_\phi R/NV.$$

To obtain the uncertainty, we require additional sample sums. The total number of tallies in the cell of interest we shall call $S_1 = \sum_i 1$. We also need the sums of the squared tally contributions, $S_{n^2} = \sum_i \Delta t_i^2$ and $S_{\phi^2} = \sum_i (v_i\,\Delta t_i)^2$. Finding $S_n$ and $S_\phi$ can then be considered to be a process of making $S_1$ random selections of typical transits of the cell, from probability distributions whose mean contributions per transit are $S_n/S_1$ and $S_\phi/S_1$ respectively. Of course the probability distributions aren't actually known; they are represented indirectly by the Monte Carlo calculation. But we don't need to know what the distributions are, because we can invoke the fact that the variance of the mean of a large number ($S_1$) of samples from a population of variance $\sigma^2$ is just $\sigma^2/S_1$. We need an estimate of the population variance for the density and the flux density. That estimate is provided by the standard expressions for sample variance:

$$\sigma_n^2 = \frac{1}{S_1-1}\left[S_{n^2} - S_n^2/S_1\right], \qquad \sigma_\phi^2 = \frac{1}{S_1-1}\left[S_{\phi^2} - S_\phi^2/S_1\right].$$

So the uncertainties in the mean contributions per transit, when a fixed number $S_1$ (taken $\approx S_1-1$, since $S_1$ is large) of tallies occurs, are

$$\sigma_n/\sqrt{S_1}, \qquad\text{and}\qquad \sigma_\phi/\sqrt{S_1}.$$

However, there is also uncertainty arising from the fact that the number of tallies $S_1$ is itself statistically variable. It obeys approximately Poisson statistics, with standard deviation $\sqrt{S_1}$. Adding the two contributions in quadrature, the uncertainties in the tally sums are

$$\delta S_n = \sqrt{S_1\,\sigma_n^2 + S_n^2/S_1}, \qquad \delta S_\phi = \sqrt{S_1\,\sigma_\phi^2 + S_\phi^2/S_1}.$$

Usually, $\sigma_n \lesssim S_n/S_1$ and $\sigma_\phi \lesssim S_\phi/S_1$, in which case we can (within a factor of $\sqrt{2}$) ignore the $\sigma_n$ and $\sigma_\phi$ contributions, and approximate the uncertainties as $\delta S_n \approx S_n/\sqrt{S_1}$ and $\delta S_\phi \approx S_\phi/\sqrt{S_1}$: the fractional uncertainty of each tally sum, and hence of the densities derived from it, is approximately $1/\sqrt{S_1}$.
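Under this approximation, the worked solution's density and flux estimates with their roughly $1/\sqrt{S_1}$ fractional uncertainties reduce to a few accumulated sums per cell. A sketch whose symbol names follow the text ($\Delta t_i$, $v_i$, $R$, $N$, $V$), but whose interface is my own:

```python
import math

def cell_estimates(dts, vs, R, N, V):
    """From the transit-time tallies dts (Delta t_i) and speeds vs (v_i)
    of one cell, with source rate R, N tracked source particles, and cell
    volume V, return (n, delta_n) and (phi, delta_phi)."""
    S1 = len(dts)                   # number of tallies
    S_n = sum(dts)
    S_phi = sum(dt * v for dt, v in zip(dts, vs))
    T = N / R                       # equivalent simulated duration
    n, phi = S_n / (T * V), S_phi / (T * V)
    frac = 1.0 / math.sqrt(S1)      # approximate fractional uncertainty
    return (n, n * frac), (phi, phi * frac)

(n, dn), (phi, dphi) = cell_estimates([1.0] * 100, [2.0] * 100,
                                      R=1.0, N=10, V=1.0)
```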

2. Consider a model transport problem represented by two particle energy ranges, low and high: $1, 2$. Suppose on average there are $n_1$ and $n_2$ particles in each range, and $n_1+n_2=n$ is fixed. The particles in these ranges react with the material background at overall average rates $r_1$ and $r_2$. (a) A Monte Carlo determination of the reaction rate is based on a random draw for each particle determining whether or not it has reacted during a time $T$ (chosen such that $r_1T, r_2T \ll 1$). Estimate the fractional statistical uncertainty in the reaction rate determination after drawing $n_1$ and $n_2$ times respectively. (b) Now consider a determination using the same $r_{1,2}$, $T$, and total number of particles $n$, but distributed differently, so that the numbers of particles (and hence numbers of random draws) in the two ranges are artificially adjusted to $n_1'$, $n_2'$ (keeping $n_1'+n_2'=n$), and the reaction rate contributions are appropriately scaled by $n_1/n_1'$ and $n_2/n_2'$. What is now the fractional statistical uncertainty in the reaction rate determination? What is the optimum value of $n_1'$ (and $n_2'$) that minimizes the uncertainty?

3. Build a program to sample randomly from an exponential probability distribution $p(x)=\exp(-x)$, using a built-in or library uniform random deviate routine. Code the ability to form a series of $M$ independent samples labelled $j$, each sample consisting of $N$ independent random values $x_i$ from the distribution $p(x)$. The samples are to be assigned to $K$ bins depending on the sample mean $\mu_j = \sum_{i=1}^N x_i/N$. Bin $k$ contains all samples for which $x_{k-1} \le \mu_j < x_k$, where $x_k = k\Delta$. Together they form a distribution $n_k$, for $k=1,\dots,K$, which is the number of samples with $\mu_j$ in bin $k$. Find the mean $\mu_n = \sum_{k=1}^K n_k (k-1/2)\Delta/M$ and variance $\sigma_n^2 = \sum_{k=1}^K n_k[(k-1/2)\Delta-\mu_n]^2/(M-1)$ of this distribution $n_k$, and compare them with the prediction of the Central Limit Theorem, for $N=100$, $K=30$, $\Delta=0.2$, and two cases: $M=100$ and $M=1000$. Submit the following as your solution:

- Your code in a computer format that is capable of being executed.
- Plots of ${n}_{k}$ versus ${x}_{k}$, for the two cases.
- Your numerical comparison of the mean and variance, and comments as to whether it is consistent with theoretical expectations.
