
Fitting Functions to Data

$$f(x_i) = y_i, \qquad \text{for all } i = 1, \dots, N. \tag{1.1}$$

$$f(x) = c_1 + c_2 x + c_3 x^2 + \dots + c_N x^{N-1} \tag{1.2}$$

$$\begin{pmatrix}
1 & x_1 & x_1^2 & \dots & x_1^{N-1} \\
1 & x_2 & x_2^2 & \dots & x_2^{N-1} \\
\vdots & & & & \vdots \\
1 & x_N & x_N^2 & \dots & x_N^{N-1}
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_N \end{pmatrix}
=
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix} \tag{1.3}$$
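As a concrete illustration (not from the text, and in NumPy rather than the chapter's Octave/MATLAB), the square Vandermonde system of Eq. (1.3) can be built and solved directly; the data values here are invented:

```python
import numpy as np

# Invented sample data: N points with distinct x values.
x = np.array([0.0, 0.5, 1.0, 1.5])
y = np.array([1.0, 0.8, 1.3, 2.0])
N = len(x)

# Build the N x N Vandermonde matrix with columns 1, x, x^2, ..., x^(N-1).
S = np.vander(x, N, increasing=True)

# Solve S c = y exactly for the N coefficients.
c = np.linalg.solve(S, y)

# The interpolating polynomial passes through every data point.
print(np.allclose(S @ c, y))  # True
```

Because the $x_i$ are distinct, the Vandermonde matrix is invertible and the fit reproduces every $y_i$ exactly.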

$$f(x) = c_1 f_1(x) + c_2 f_2(x) + c_3 f_3(x) + \dots + c_N f_N(x) \tag{1.4}$$

$$\begin{pmatrix}
f_1(x_1) & f_2(x_1) & f_3(x_1) & \dots & f_N(x_1) \\
f_1(x_2) & f_2(x_2) & f_3(x_2) & \dots & f_N(x_2) \\
\vdots & & & & \vdots \\
f_1(x_N) & f_2(x_N) & f_3(x_N) & \dots & f_N(x_N)
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_N \end{pmatrix}
=
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix} \tag{1.5}$$

$$\mathbf{S}^{-1}\mathbf{S}\mathbf{c} = \mathbf{c} = \mathbf{S}^{-1}\mathbf{y}. \tag{1.6}$$

$$f(x) = c_1 f_1(x) + c_2 f_2(x) + c_3 f_3(x) + \dots + c_M f_M(x) \tag{1.7}$$

$$\begin{pmatrix}
f_1(x_1) & f_2(x_1) & f_3(x_1) & \dots & f_M(x_1) \\
f_1(x_2) & f_2(x_2) & f_3(x_2) & \dots & f_M(x_2) \\
\vdots & & & & \vdots \\
f_1(x_N) & f_2(x_N) & f_3(x_N) & \dots & f_M(x_N)
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_M \end{pmatrix}
=
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix} \tag{1.8}$$

$$\chi^2 = \sum_{i=1}^{N} \bigl(y_i - f(x_i)\bigr)^2. \tag{1.9}$$
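When $M < N$ the system of Eq. (1.8) is overdetermined, and the coefficients are chosen to minimize the $\chi^2$ of Eq. (1.9). A minimal NumPy sketch (the data are invented, and a two-function basis $f_1 = 1$, $f_2 = x$ is used for illustration):

```python
import numpy as np

# Invented noisy data: N = 8 points, fitted with M = 2 basis functions
# (f1 = 1, f2 = x), so the system is overdetermined.
x = np.array([0., 1., 2., 3., 4., 5., 6., 7.])
y = np.array([0.9, 2.1, 2.9, 4.2, 5.0, 5.8, 7.1, 8.0])

# N x M design matrix with S[i, j] = f_j(x_i).
S = np.column_stack([np.ones_like(x), x])

# lstsq returns the c minimizing chi^2 = sum_i (y_i - f(x_i))^2.
c, residual, rank, sv = np.linalg.lstsq(S, y, rcond=None)

chi2 = np.sum((y - S @ c) ** 2)
```

No other choice of the two coefficients can give a smaller `chi2`; that is precisely the least-squares property.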

$$\mathbf{S} = \mathbf{U}\mathbf{D}\mathbf{V}^T, \tag{1.10}$$

- $\mathbf{U}$ is an $N\times N$ orthonormal matrix
- $\mathbf{V}$ is an $M\times M$ orthonormal matrix
- $\mathbf{D}$ is an $N\times M$ diagonal matrix

$$\underbrace{\mathbf{U}^T}_{N\times N}\,\underbrace{\mathbf{U}}_{N\times N} = \underbrace{\mathbf{I}}_{N\times N} \qquad\text{and}\qquad \underbrace{\mathbf{V}^T}_{M\times M}\,\underbrace{\mathbf{V}}_{M\times M} = \underbrace{\mathbf{I}}_{M\times M} \tag{1.11}$$

$$\mathbf{D} = \begin{pmatrix}
d_1 & 0 & 0 & \dots & 0 \\
0 & d_2 & 0 & & \\
\vdots & & \ddots & & \vdots \\
0 & & 0 & & d_M \\
0 & 0 & 0 & \dots & 0
\end{pmatrix} \tag{1.12}$$

$$\mathbf{S}^{-1} = \mathbf{V}\mathbf{D}^{-1}\mathbf{U}^T. \tag{1.13}$$

$$\mathbf{D}^{-1} = \begin{pmatrix}
1/d_1 & 0 & 0 & \dots & 0 & 0 \\
0 & 1/d_2 & 0 & & & 0 \\
\vdots & & \ddots & & & \vdots \\
0 & \dots & 0 & 0 & 1/d_M & 0
\end{pmatrix}. \tag{1.14}$$

$$\mathbf{S}^{-1}\mathbf{S} = (\mathbf{V}\mathbf{D}^{-1}\mathbf{U}^T)(\mathbf{U}\mathbf{D}\mathbf{V}^T) = \mathbf{V}\mathbf{D}^{-1}\mathbf{D}\mathbf{V}^T = \mathbf{V}\mathbf{V}^T = \mathbf{I}. \tag{1.15}$$

$$\underbrace{\mathbf{D}^{-1}}_{M\times N}\,\underbrace{\mathbf{D}}_{N\times M} = \underbrace{\mathbf{I}}_{M\times M}. \tag{1.16}$$
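These identities are easy to check numerically. A NumPy sketch with an arbitrary random full-rank matrix; note that `np.linalg.svd` in reduced mode returns $\mathbf{U}$ as $N\times M$ (dropping the columns that multiply the zero rows of $\mathbf{D}$) and returns $\mathbf{V}^T$ directly, which does not change the result:

```python
import numpy as np

# Arbitrary full-rank N x M matrix (N = 5 observations, M = 3 basis functions).
rng = np.random.default_rng(0)
S = rng.standard_normal((5, 3))
N, M = S.shape

# Reduced SVD: U is N x M, d holds the M singular values, Vt is M x M.
U, d, Vt = np.linalg.svd(S, full_matrices=False)

# Build the pseudo-inverse S^{-1} = V D^{-1} U^T.
Sinv = Vt.T @ np.diag(1.0 / d) @ U.T

# S^{-1} S is the M x M identity (but S S^{-1} is generally not I).
print(np.allclose(Sinv @ S, np.eye(M)))          # True
print(np.allclose(Sinv, np.linalg.pinv(S)))      # agrees with numpy's pinv
```

The second check confirms that this construction is exactly what library pseudo-inverse routines compute.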

$$\begin{pmatrix} \mathbf{S} \\ \lambda\mathbf{R} \end{pmatrix}\mathbf{c} = \begin{pmatrix} \mathbf{y} \\ 0 \end{pmatrix}. \tag{1.17}$$

$$\chi^2 = \sum_{i=1}^{N} \bigl(y_i - f(x_i)\bigr)^2 + \lambda^2 \sum_{k=1}^{N_R} \Bigl(\sum_j R_{kj} c_j\Bigr)^2 \tag{1.18}$$

$$R_{kj} = \left.\frac{d^2 f_j}{dx^2}\right|_{x_k}. \tag{1.19}$$
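A sketch of this augmented-system regularization in NumPy, using a monomial basis $f_j = x^{\,j}$ (for which the second derivatives of Eq. (1.19) are analytic) and invented noisy data; the choice of basis and $\lambda$ here are illustrative assumptions, not the text's:

```python
import numpy as np

# Invented noisy data: 12 points from a sine curve, fitted with a flexible
# degree-9 polynomial basis that oscillates wildly if left unsmoothed.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 12)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(12)

M = 10
S = np.vander(x, M, increasing=True)     # S[i, j] = x_i**j

# R[k, j] = d^2(x**j)/dx^2 at x_k = j*(j-1)*x_k**(j-2), per Eq. (1.19).
j = np.arange(M)
R = np.where(j >= 2, j * (j - 1) * x[:, None] ** np.clip(j - 2, 0, None), 0.0)

def fit(lam):
    # Stack S on top of lambda*R and solve the augmented system, Eq. (1.17).
    A = np.vstack([S, lam * R])
    b = np.concatenate([y, np.zeros(R.shape[0])])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c

def chi2(c):
    return np.sum((y - S @ c) ** 2)      # misfit term of Eq. (1.18)

def roughness(c):
    return np.sum((R @ c) ** 2)          # penalty term of Eq. (1.18)

c_rough = fit(0.0)    # unregularized least squares
c_smooth = fit(1.0)   # lambda = 1 trades a little misfit for smoothness
```

Since `fit(0.0)` minimizes the misfit alone, `chi2(c_rough)` can never exceed `chi2(c_smooth)`; conversely the regularized solution is guaranteed the smaller roughness penalty.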

$$\rho(x,y) = \sum_{j=1}^{M} c_j \rho_j(x,y), \tag{1.20}$$

Figure 1.5: Illustrative layout of tomographic reconstruction of density in a plane using multiple fans of chordal observations.

Each chord along which measurements are made passes through the basis
functions (e.g. the pixels), so for a particular set of coefficients
$c_j$ we get a chordal measurement value

$$v_i = \int_{l_i} \rho \, d\ell = \int_{l_i} \sum_{j=1}^{M} c_j \rho_j(x,y)\, d\ell = \sum_{j=1}^{M} \biggl( \int_{l_i} \rho_j(x,y)\, d\ell \biggr) c_j = (\mathbf{S}\mathbf{c})_i, \tag{1.21}$$

$$\mathbf{S}\mathbf{c} = \mathbf{v}, \tag{1.22}$$
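A toy NumPy version of this setup (all geometry invented): each $\rho_j$ is a pixel indicator function on a grid, and the chordal integrals $\int_{l_i}\rho_j\,d\ell$ forming the rows of $\mathbf{S}$ are approximated by sampling points along each chord:

```python
import numpy as np

n = 8            # n*n = 64 pixel basis functions on the unit square
M = n * n

def chord_row(p0, p1, samples=400):
    """One row of S: approximate integrals of every pixel function along p0->p1."""
    t = np.linspace(0.0, 1.0, samples)
    pts = np.outer(1 - t, p0) + np.outer(t, p1)
    dl = np.linalg.norm(p1 - p0) / samples        # path length per sample point
    ix = np.clip((pts[:, 0] * n).astype(int), 0, n - 1)
    iy = np.clip((pts[:, 1] * n).astype(int), 0, n - 1)
    row = np.zeros(M)
    np.add.at(row, iy * n + ix, dl)               # accumulate length in each pixel
    return row

# An invented set of views: 10 horizontal plus 10 vertical chords.
chords = []
for s in np.linspace(0.05, 0.95, 10):
    chords.append((np.array([0.0, s]), np.array([1.0, s])))
    chords.append((np.array([s, 0.0]), np.array([s, 1.0])))
S = np.array([chord_row(p0, p1) for p0, p1 in chords])

# A smooth test density, its simulated chordal data v, and the
# minimum-norm least-squares coefficients recovered from v.
xc, yc = np.meshgrid((np.arange(n) + 0.5) / n, (np.arange(n) + 0.5) / n)
rho_true = np.exp(-20 * ((xc - 0.5) ** 2 + (yc - 0.5) ** 2)).ravel()
v = S @ rho_true
c, *_ = np.linalg.lstsq(S, v, rcond=None)
```

With only 20 chords and 64 pixels the system is underdetermined: the recovered `c` reproduces the data ($\mathbf{S}\mathbf{c} = \mathbf{v}$) but need not resemble $\rho$ itself, which is exactly why the regularization of Eq. (1.17) matters in practice.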

Figure 1.6: Contour plots of the initial test $\mathit{\rho}$-function (left) used to
calculate the chordal integrals, and its reconstruction based upon
inversion of the chordal data (right). The number of pixels (100)
exceeds the number of views (49), and the number of singular
values used in the pseudo inverse is restricted to 30. Still they
do not agree well, because various artifacts appear. Reducing the
number of singular values does not help.

We then almost certainly want to smooth the representation; otherwise all
sorts of meaningless artifacts that have no physical existence will appear
in our reconstruction. If we try to do this by forming a pseudo-inverse in
which a smaller number of singular values is retained and the others are
set to zero, there is no guarantee that this will get rid of the roughness;
Fig. 1.6 gives an example. If we instead smooth the reconstruction by
regularization, using as our measure of roughness the discrete (2-D)
Laplacian ($\nabla^2\rho$) evaluated at each pixel, we get a far better
result, as shown in Fig. 1.7. It turns out that this good result is rather
insensitive to the value of $\lambda^2$ over two or three orders of magnitude.

Figure 1.7: Reconstruction using a regularization smoothing based upon
$\nabla^2\rho$. The contours are much nearer to reality.

$$\theta = \pi(x-a)/(b-a), \tag{1.23}$$

$$f(x) = c_1\sin(\theta) + c_2\sin(2\theta) + c_3\sin(3\theta) + \dots + c_M\sin(M\theta). \tag{1.24}$$

$$\mathbf{S}\mathbf{c} = \begin{pmatrix}
\sin(1\theta_1) & \sin(2\theta_1) & \dots & \sin(M\theta_1) \\
\sin(1\theta_2) & \sin(2\theta_2) & \dots & \sin(M\theta_2) \\
\vdots & & & \vdots \\
\sin(1\theta_N) & \sin(2\theta_N) & \dots & \sin(M\theta_N)
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_M \end{pmatrix}
=
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix} = \mathbf{y} \tag{1.25}$$

```matlab
% Suppose x and y exist as column vectors of length N. (Nx1 matrices)
j = [1:M];               % Create a 1xM matrix containing numbers 1 to M.
theta = pi*(x-a)/(b-a);  % Scale x to obtain the column vector theta.
S = sin(theta*j);        % Construct the matrix S using an outer product.
Sinv = pinv(S);          % Pseudo invert it.
c = Sinv*y;              % Matrix multiply y to find the coefficients c.
```

The fit can then be evaluated for any $x$ value (or array)

```matlab
yfit = sin(pi*(xfit-a)/(b-a)*j)*c;  % Evaluate the yfit at any xfit
```

An example is shown in Fig. 1.8.

Figure 1.8: The result of the fit of sinusoids up to $M=5$ to a noisy
dataset of size $N=20$. The points are the input data. The curve is
constructed by using the `yfit` expression on an
`xfit` array of some convenient length spanning the $x$-range, and
then simply plotting `yfit` versus
`xfit`.

- Your code in a computer format that is capable of being executed.
- The numeric values of your coefficients $c_j,\ j=1,\dots,N$.
- Your plot.
- Brief commentary ($<300$ words) on what problems you faced and how you solved them.

2. Save your code from part 1. Make a copy of it with a new name and change the new code as needed to fit (in the linear least squares sense) a polynomial of order possibly lower than $N-1$ to a set of data ${x}_{i}$, ${y}_{i}$ (for which the points are in no particular order). Obtain a pair of data sets of length $(N=)$ 20 numbers ${x}_{i}$, ${y}_{i}$ from the same URL by changing the entry in the "Number of Numbers" box. (Or if that is inaccessible, generate your own data set from random numbers added to a line.) Run your code on that data to produce the fitting coefficients ${c}_{j}$ when the number of coefficients of the polynomial is ($M=$) (a) 1, (b) 2, (c) 3. That is: constant, linear, quadratic. Plot the fitted curves and the original data points on the same plot(s) for all three cases. Submit the following as your solution:

- Your code in a computer format that is capable of being executed.
- Your coefficients $c_j,\ j=1,\dots,M$, for the three cases (a), (b), (c).
- Your plot(s).
- Very brief remarks on the extent to which the coefficients are the same for the three cases.
- Can your code from this part also solve the problem of part 1?
