# Auto-zero/Auto-calibration


## Motivation

In instrumentation, both in a supporting role and as a prime objective, measurements are taken that are subject to systematic errors. Routes to minimizing the effects of these errors are:

• Spend more money on the hardware. This is valid but quickly reaches diminishing returns; the price rises disproportionately with respect to increased accuracy.
• Apparently, in the industrial processing industry, redundant measurement points are installed and regressed to find "subspaces" on which the process must be operating; this is called "data reconciliation". Due to lack of experience I (RR) will not be covering that here, although others are welcome to (and may replace this statement).
• Calibrations are done and incorporated into the instrument. This can be done by analog adjustments or written into storage media for subsequent use by operators or software.
• Runtime auto-calibrations done at regular intervals, anywhere from every 0.01 seconds to every 30 minutes. I can speak to these most directly, but I consider the static "Calibrations" above to be a special case.

## Mathematical Formulation

Nomenclature: It will be presumed, unless otherwise stated, that collected variables compose a (topological) manifold; i.e. a collection designated by a single symbol and id, not necessarily possessing a differential-geometric metric. This means that there is no intrinsic rank-two tensor, $g_{ij} \,$, allowing an identification of contravariant vectors with covariant ones. These terms are presented to provide a mathematically precise context for distinguishing contravariant and covariant vectors, and tangent spaces from underlying coordinate spaces. Typically they can be ignored.

• The quintessential example of a covariant tensor is the differential of a scalar, although the vector space formed by differentials is more extensive than the differentials of scalars.
• The quintessential contravariant vector is the derivative of a path $p \,$ with component values $e^i = f_i(s) \,$ on the manifold, with $(p)^j=\frac{\partial e^j}{\partial s}\cdot \frac {\partial}{\partial e^j} \,$ being a component of the contravariant vector along $p \,$ parametrized by $s$.
• Using $e \,$ (see directly below) as an example:
  • $e_{id} \,$ refers to the collection of variables identified by "id"
    • Although a collection does not have the properties of a vector space, in some cases we will assume (restrict) it to have those properties. In particular this seems to be needed to state that the $Q() \,$ functions are convex.
  • $e_{id}^i \,$ refers to the $i$'th component of $e_{id} \,$
  • $(e_{id})_j^i\,$ refers to the tangent/cotangent bundle, with $i \,$ selecting a contravariant component and $j \,$ selecting a covariant component
• $(expr)_{|x\leftarrow c} \,$ refers to an expression "expr" evaluated with $x=c$
• $(expr)_{|x\rightarrow c} \,$ refers to an expression "expr" evaluated as the limit of $x$ as it approaches the value $c$

Definitions:

• $x\,$ a collection of environmental or control variables that need to be estimated
• $x_c\,$ a collection of calibration points
• $\bar{x}\,$ the PDF of $x\,$; this is a representation of a function
• $\bar{x}(x_c)\,$ the evaluation of the PDF $\bar{x}\,$ at $x_c$
• $\hat{x}$ the estimate of $x\,$
• $p \,$ a collection of parameters that are constant during operation but selected at design time; the "real" system values during operation are typically $p+e \,$, although other modifications are possible
• $e\,$ errors: assumed to vary, but constant during intervals between calibrations and real measurements
• $\bar{e}\,$ the PDF of $e\,$; this is a representation of a function
• $\bar{e}(x)\,$ the evaluation of the PDF $\bar{e}\,$ at $x$
• $y\,$ the result of a measurement process attempting to measure $x\,$
• $y=Y(x;p,e)\,$ where $e\,$ might be additive, multiplicative, or some other form
• $y_c=Y(x_c;p,e)\,$ the reading values at the calibration points for an instantiation of $e\,$
• $\hat{e}$ estimates of $e\,$ derived from $x, y\,$
• $\hat{e}=Est(x_c,y_c)\,$
• $Q(x,\hat{x})$ a quality measure of the resulting estimation; for example $\sum{(x_i-\hat{x_i})^2}$
  • where $x\,$ is allowed to vary over a domain for fixed $\hat{x}$
  • the example is oversimplified, as will be demonstrated below
• $\hat{x} \,$ is typically decomposed into a chain using $\hat{e}\,$

Then the problem can be formulated as:

• Given $x_c,y_c\,$
• Find a formula or process $\hat{X}\,$ to select $\hat{X}\left(x_c,y_c\right)\rightarrow\hat{x} \,$ so as to minimize $Q(x,\hat{x})$
  • The reason for the process term $\hat{X} \,$ is that many correction schemes are feedback controlled; $\hat{X} \,$ is never computed internally, although it might be necessary in design or analysis.
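As a minimal sketch of this formulation, consider a hypothetical linear reading model $Y(x;e)=(1+e_{gain})x+e_{off}$ with two calibration points. Here $Est$ recovers $\hat{e}$ exactly from $(x_c, y_c)$ and $\hat{X}$ inverts the model, so $Q(x,\hat{x})$ vanishes. The error form, the calibration points, and all numeric values are illustrative assumptions, not the document's model.

```python
def Y(x, e_gain, e_off):
    """Measurement process Y(x; e) with hypothetical gain and offset errors."""
    return (1.0 + e_gain) * x + e_off

def est(xc, yc):
    """Est(x_c, y_c): solve the two calibration equations for the error estimates."""
    (x1, x2), (y1, y2) = xc, yc
    gain = (y2 - y1) / (x2 - x1)          # estimate of 1 + e_gain
    return gain - 1.0, y1 - gain * x1     # (e_gain_hat, e_off_hat)

def x_hat(y, e_gain_hat, e_off_hat):
    """X_hat: invert the reading model using the estimated errors."""
    return (y - e_off_hat) / (1.0 + e_gain_hat)

e = (0.02, -0.005)                        # instantiated (but unknown) errors
xc = (0.0, 100.0)                         # calibration points x_c
yc = tuple(Y(x, *e) for x in xc)          # calibration readings y_c
e_hat = est(xc, yc)

x_true = 37.0
q = (x_true - x_hat(Y(x_true, *e), *e_hat)) ** 2   # Q(x, x_hat), ~0 up to float
print(q)
```

With a richer error form (or noise), $Est$ can no longer cancel the errors exactly and the choice of $Q$ starts to matter, which is the subject of the rest of the document.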

### Algebra of Distributions of Random Variables

This section discusses algebraic combinations of variables realized from distributions.

• $\mathcal{F}(g(x))\,$ is the Fourier transform of $g(x)$.
• $\mathcal{M}(g(x))\,$ is the Mellin transform of $g(x)$.
• $\oplus\,$: Let
  • $c=a+b,\ a,b,c\in \mathbb{R}\,$,
  • $\overline{a},\overline{b},\overline{c}\,$ the PDFs of $a,b,c \,$; then
  • $\overline{c} = \overline{a}\oplus \overline{b}=\mathcal{F}^{-1}( \mathcal{F}(\overline{a})\cdot \mathcal{F}\left(\overline{b} \right))\,$
• $\ominus \,$: Let
  • $c=a-b=a+(-b),\ a,b,c\in \mathbb{R} \,$,
  • $\overline{a},\overline{b}(-x),\overline{c}\,$ the PDFs of $a,-b,c\,$; then
  • $\overline{c} = \overline{a}\ominus \overline{b}=\mathcal{F}^{-1}( \mathcal{F}(\overline{a})\cdot \mathcal{F}\left(\overline{b}(-x) \right))\,$
• $\otimes \,$: Let
  • $c=a\cdot b,\ a,b,c\in (0,\infty)\,$
    • Springer extends this to $\mathbb{R} \,$
  • $\overline{a},\overline{b},\overline{c}\,$ the PDFs of $a,b,c\,$; then
  • $\overline{c} = \overline{a}\otimes \overline{b}=\mathcal{M}^{-1}( \mathcal{M}(\overline{a})\cdot \mathcal{M}\left( \overline{b} \right))\,$
• $\oslash$: Let
  • $c= \frac{a}{b},\ a,b,c\in (0,\infty) \,$
    • Springer extends this to $\mathbb{R} \,$
  • $\overline{a},\overline{b}(x),\overline{c}\,$ the PDFs of $a,b,c\,$; then
  • $\overline{c} = \overline{a}\oslash \overline{b}=\mathcal{M}^{-1}( \mathcal{M}(\overline{a})\cdot \mathcal{M}\left(\overline{b}(\frac{1}{x}) \right))\,$

Reference: Springer, M.D., "The Algebra of Random Variables", Wiley, New York, 1979.
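The $\oplus$ rule can be sketched numerically: the PDF of $c=a+b$ is the convolution of the two PDFs, and the $\mathcal{F}^{-1}(\mathcal{F}(\overline{a})\cdot\mathcal{F}(\overline{b}))$ identity is just the convolution theorem. The grid step and the two uniform densities below are illustrative assumptions.

```python
def oplus(p, q, dx):
    """Discrete a_bar (+) b_bar: direct convolution of sampled PDFs.
    Equals the Fourier route F^-1(F(a_bar) * F(b_bar)) by the convolution theorem."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj * dx
    return out

dx = 0.01
a_bar = [1.0] * 100          # uniform PDF on [0, 1)
b_bar = [0.5] * 200          # uniform PDF on [0, 2)
c_bar = oplus(a_bar, b_bar, dx)

mass = sum(c_bar) * dx                              # should integrate to 1
mean = sum(k * dx * v * dx for k, v in enumerate(c_bar))
print(round(mass, 6), round(mean, 6))  # mass 1; mean near 0.5 + 1.0 (left-endpoint grid)
```

$\ominus$, $\otimes$, and $\oslash$ follow the same pattern after reflecting ($\overline{b}(-x)$) or substituting ($\overline{b}(1/x)$, with the Mellin transform playing the role of $\mathcal{F}$).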

## Examples

### Biochemical temperature control

where multiple temperature sensors are multiplexed into a data stream and one or more channels are set aside for auto-calibration. Expected final system accuracies of 0.05 °C are needed because mammalian temperature regulation has resulted in processes and diseases that are "tuned" to particular temperatures.

• A simplified example, evaluating one calibration channel and one reading channel. To make things more obvious, the unknown and calibration readings are designated separately instead of by the convention given above. This is clearer in a simple case, but in more complicated cases it is unsystematic.
• $V_x=v_{off}+\frac{V_{ref}R_x}{R_x + R_b}$
• $R_x \,$ can be either the calibration resistor $R_c \,$ or the unknown resistance $R_t \,$ of the thermistor
• $V_x \,$ is the corresponding voltage read: $V_c\,$ or $V_t\,$
• $v_{off} \,$ is the reading offset value, an error
• $\overline{v_{off}}\,$ the PDF of $v_{off} \,$
• $V_{ref}, e_{ref} \,$ the nominal bias voltage and bias voltage error
• $\overline{e_{ref}}\,$ the PDF of $e_{ref} \,$
• $R_b, e_b \,$ the nominal bias resistor and bias resistor error
• $\overline{e_b}\,$ the PDF of $e_b\,$
• $e_{com} \,$ an unknown constant resistance in series with $R_c, R_t \,$ for all readings
• $\overline{e_{com}}\,$ the PDF of $e_{com} \,$
• $\overline{e_{com}'}\,$ the restricted form of $\overline{e_{com}}\,$
  • Hereafter $x' \,$ will signify a restricted form of $x\,$
• With errors:
  • $V_x=v_{off}+\frac{(V_{ref} +e_{ref})(R_x +e_{com})}{(R_x+e_{com} + R_b+e_b)}$
• Implicit formula for the PDF distributions. Generally this formula has two unknown distributions, $\overline{V_{t}},\overline{R_{t}}\,$, but in any particular application one or the other is known. In addition the constants $1, V_{ref}, R_b \,$ in the formula stand for Dirac delta distributions positioned at the constant.
  • $\overline{V_{t}}\ominus\overline{v_{off}}\ominus\left(\left(V_{ref}\oplus\overline{e_{ref}}\right)\oslash\left(1\oplus\left(\left(R_{b}\oplus\overline{e_{b}}\right)\oslash\left(\overline{R_{t}}\oplus\overline{e_{com}}\right)\right)\right)\right)=0\,$
• Calibration reading
  • $V_c=\left.V_x\right|_{x\leftarrow c}$
• Thermistor (real) reading
  • $V_t=\left.V_x\right|_{x\leftarrow t}$
• The problem is to optimally estimate $R_t\,$ based upon $V_t \,$ and $V_c \,$.
• The direct inversion formula illustrates the utility of mathematically using the error space $[v_{off}\,,e_{com}\,,e_b\,,e_{ref}] \,$ during design and analysis. The direct inversion of $V_t \,$ for $R_t \,$ naturally invokes the error space as a link to $V_c \,$.
• Inversion for $R_t \,$ (solving the "with errors" reading model above for $R_t \,$)
  • $R_t=\frac{(V_t-v_{off})(e_{com}+R_b+e_b)-e_{com}(V_{ref}+e_{ref})}{V_{ref}+e_{ref}-V_t+v_{off}}$
• Inversion in terms of estimates
  • $\widehat{R_t}=\frac{(V_t-\widehat{v_{off}}) (\widehat{e_{com}}+R_b+\widehat{e_b})-\widehat{e_{com}}(V_{ref}+\widehat{e_{ref}})}{V_{ref}+\widehat{e_{ref}}-V_{t}+\widehat{v_{off}}}$
• Mode estimate: find the maximum value of $\overline{R_{t}'} \,$. A necessary condition for differentiable functions is $\frac{d\overline{R_{t}'}_{|{p'}}}{dp'}=0 \,$. In polynomial cases this can theoretically be solved via Gröbner bases; but even given the "exact" solutions, one is forced into sub-optimal/approximate estimates.
• Mean estimate: find the expectation, that is $\widehat{R_t}=\int R'\cdot \overline{R_{t}'}_{|{R'}} dR'$
• Worst case: consider the points where the constraint meets some boundary, say ±0.01%.
• Any of the above, extended to cover a range of $R_t \,$ as well as the range of errors.
• It should be mentioned that in this case $(R_t - \widehat{R_t})$ is not a good (or natural) error function. A better function, for both results and calculations, is $(1-\frac{\widehat{R_t}}{R_t})\,$. The natural form of the errors varies from problem to problem and should be accommodated in any organized procedure.
• Sensitivities are needed during design to determine which errors are tight and how much improvement can be had by spending more money on individual parts, and during analysis to determine the most likely cause of failures.
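The forward model and the inversion can be checked numerically. Note that solving the reading model for $R_t$ yields the denominator $V_{ref}+e_{ref}-V_t+v_{off}$. The component values ($V_{ref}$, $R_b$) and the instantiated errors below are illustrative assumptions; the last line evaluates the $(1-\widehat{R_t}/R_t)$ quality measure from the text.

```python
R_b, V_ref = 10_000.0, 2.5               # assumed bias resistor and reference

def v_read(R_x, v_off, e_ref, e_com, e_b):
    """Forward model: V_x = v_off + (V_ref+e_ref)(R_x+e_com)/(R_x+e_com+R_b+e_b)."""
    return v_off + (V_ref + e_ref) * (R_x + e_com) / (R_x + e_com + R_b + e_b)

def r_t_hat(V_t, v_off, e_ref, e_com, e_b):
    """Inversion for R_t; the algebra gives denominator V_ref + e_ref - V_t + v_off."""
    num = (V_t - v_off) * (e_com + R_b + e_b) - e_com * (V_ref + e_ref)
    return num / (V_ref + e_ref - V_t + v_off)

errs = dict(v_off=0.002, e_ref=-0.01, e_com=3.0, e_b=-12.0)   # illustrative errors
R_t = 8_200.0
V_t = v_read(R_t, **errs)                # simulate a thermistor reading
rel_err = 1.0 - r_t_hat(V_t, **errs) / R_t   # the (1 - R_hat/R) quality measure
print(abs(rel_err) < 1e-9)               # exact round trip with perfect estimates
```

With perfect error estimates the round trip is exact; the estimation problem is that the real $\widehat{v_{off}}, \widehat{e_{ref}}, \ldots$ are only known through their distributions and the calibration reading.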

#### Optimization Format

• The setpoint case with a full optimizer available at runtime. The purpose is to choose error-space values that minimize some likelihood-based penalty. The selected error values can then be reused to calculate $\widehat{R_t} \,$ for different values of $V_t \,$. It is presumed that we choose likelihoods such that surfaces of constant likelihood form a series of nested (n-1)-dimensional convex sets/shells around a center.
• Absolute-value case, choosing error values for subsequent calculations:
minimize   $\frac{|\widehat{v_{off}}|}{L_{off}}\,+\frac{|\widehat{e_{com}}\,-e_{com-center}|}{L_{com}}+\frac{|\widehat{e_{ref}}|}{L_{ref}}\,+\frac{| \widehat{e_b}\,|}{L_b}$

w.r.t.     $\widehat{v_{off}}\,,\widehat{e_{com}}\,,\widehat{e_{ref}}\,,\widehat{e_b}\,$

subject to $[|\widehat{v_{off}}|\,,\widehat{e_{com}}\,,-\widehat{e_{com}}\,,|\widehat{e_{ref}}|\,,|\widehat{e_b}|]\,\leq\,[L_{off},L_{com},0,L_{ref}\,,L_{b}]$
$V_c=\widehat{v_{off}}+\frac{(V_{ref} +\widehat{e_{ref}})(R_c +\widehat{e_{com}})}{(R_c+\widehat{e_{com}} + R_b+\widehat{e_b})}$
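A crude sketch of this problem: the calibration equality fixes $\widehat{v_{off}}$ once the other three errors are chosen, so a scan over $(\widehat{e_{com}},\widehat{e_{ref}},\widehat{e_b})$ plus the box constraints finds an approximate minimizer of the weighted-L1 objective. A real implementation would hand this to an LP/convex solver; the component values, limits, and the reading $V_c$ are illustrative assumptions.

```python
import itertools

R_b, R_c, V_ref = 10_000.0, 10_000.0, 2.5       # assumed component values
L_off, L_com, L_ref, L_b = 0.005, 5.0, 0.02, 20.0
e_com_center = L_com / 2.0
V_c = 1.2507                                     # assumed calibration reading

def objective(v_off, e_com, e_ref, e_b):
    """Weighted-L1 penalty from the format above."""
    return (abs(v_off) / L_off + abs(e_com - e_com_center) / L_com
            + abs(e_ref) / L_ref + abs(e_b) / L_b)

def grid(lo, hi, n):
    return [lo + (hi - lo) * k / (n - 1) for k in range(n)]

best = None
for e_com, e_ref, e_b in itertools.product(
        grid(0.0, L_com, 21), grid(-L_ref, L_ref, 21), grid(-L_b, L_b, 21)):
    # The equality constraint fixes v_off once the other errors are chosen.
    v_off = V_c - (V_ref + e_ref) * (R_c + e_com) / (R_c + e_com + R_b + e_b)
    if abs(v_off) <= L_off:                      # remaining box constraint
        cand = (objective(v_off, e_com, e_ref, e_b), v_off, e_com, e_ref, e_b)
        if best is None or cand < best:
            best = cand
print(best is not None)                          # a feasible minimizer was found
```

The selected $(\widehat{v_{off}},\widehat{e_{com}},\widehat{e_{ref}},\widehat{e_b})$ would then be frozen and reused in the $\widehat{R_t}$ inversion for subsequent readings $V_t$.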


#### Optimization Format 2

• Reformulation in extended matrix format. A derivation is in the discussion under /* Optimization Format */.
• We introduce a new variable, $\widehat{I_c}\,$, to produce coupled equations. This is meant to simplify the nonlinearities.
minimize   $\frac{|\widehat{v_{off}}|}{L_{off}}\,+\frac{|\widehat{e_{com}}\,-e_{com-center}|}{L_{com}}+\frac{|\widehat{e_{ref}}|}{L_{ref}}\,+\frac{| \widehat{e_b}\,|}{L_b}$

w.r.t.     $\widehat{v_{off}}\,,\widehat{e_{com}}\,,\widehat{e_{ref}}\,,\widehat{e_b}\,,\widehat{I_c}$

subject to $[|\widehat{v_{off}}|\,,\widehat{e_{com}}\,,-\widehat{e_{com}}\,,|\widehat{e_{ref}}|\,,|\widehat{e_b}|]\,\leq\,[L_{off}\,,L_{com}\,,0,L_{ref}\,,L_{b}]$
$\begin{array}{lcl} \left[\begin{array}{c} V_{ref}\\ V_{c}\end{array}\right] & = & \left[\begin{array}{ccccc} 1 & (R_{b}+R_{c}) & 1 & 1 & 0\\ 0 & R_{c} & 0 & 1 & 1\end{array}\right]\left[\begin{array}{c} \widehat{e_{ref}}\\ \widehat{I_{c}}\\ \widehat{I_{c}}\,\widehat{e_{b}}\\ \widehat{I_{c}}\,\widehat{e_{com}}\\ \widehat{v_{off}}\end{array}\right]\end{array}$
$\begin{array}{lcl} |\widehat{e_{ref}}| & < & V_{ref}\\ \widehat{I_{c}} & > & 0\\ V_{c} & < & V_{ref}+\widehat{e_{ref}}\end{array}$
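A numeric check of the matrix structure: the bilinear products $\widehat{I_c}\,\widehat{e_b}$ and $\widehat{I_c}\,\widehat{e_{com}}$ are lifted into their own slots of the extended unknown vector, so both equations become linear in that vector. Sign conventions follow the matrix exactly as written; the candidate values are illustrative assumptions.

```python
# Candidate values for the extended unknowns (illustrative only).
I_c, e_ref, e_b, e_com, v_off = 1.2e-4, -0.003, 8.0, 2.0, 0.001
R_b, R_c = 10_000.0, 10_000.0

x = [e_ref, I_c, I_c * e_b, I_c * e_com, v_off]   # extended unknown vector
A = [[1.0, R_b + R_c, 1.0, 1.0, 0.0],
     [0.0, R_c,       0.0, 1.0, 1.0]]
V_ref_model, V_c_model = (sum(a * v for a, v in zip(row, x)) for row in A)

# Row 1 spells out V_ref = e_ref + I_c*(R_b + R_c + e_b + e_com);
# row 2 spells out V_c  = I_c*(R_c + e_com) + v_off.
assert abs(V_ref_model - (e_ref + I_c * (R_b + R_c + e_b + e_com))) < 1e-12
assert abs(V_c_model - (I_c * (R_c + e_com) + v_off)) < 1e-12
print(V_ref_model, V_c_model)
```

The nonlinearity has not disappeared: it is concentrated in the consistency requirement between the slots (slot 3 must equal slot 2 times $\widehat{e_b}$, etc.), which is the usual trade made by bilinear lifting.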


#### Optimization Format 3

• Working directly on choosing the most likely value of $\widehat{R_{t}}$.
minimize   $r$

w.r.t.     $\widehat{R_{t}}\,$
$\widehat{v_{off}}\,,\widehat{e_{com}}\,,\widehat{e_{ref}}\,,\widehat{e_b}\,$

subject to
$V_c=\widehat{v_{off}}+\frac{(V_{ref} +\widehat{e_{ref}})(R_c +\widehat{e_{com}})}{(R_c+\widehat{e_{com}} + R_b+\widehat{e_b})}$
$V_t=\widehat{v_{off}}+\frac{(V_{ref} +\widehat{e_{ref}})(\widehat{R_t} +\widehat{e_{com}})}{(\widehat{R_t}+\widehat{e_{com}} + R_b+\widehat{e_b})}$
$\left(\frac{{\widehat{v_{off}}-\mu_{off}}}{\sigma_{off}}\right)^2+\left(\frac{{\widehat{e_{com}}-\mu_{com}}}{\sigma_{com}}\right)^2+\left(\frac{{\widehat{e_{ref}}-\mu_{ref}}}{\sigma_{ref}}\right)^2+\left(\frac{{\widehat{e_{b}}-\mu_{b}}}{\sigma_{b}}\right)^2 =r$
$\widehat{R_{t}} > 0$
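This format can be sketched by elimination: the $V_c$ equation fixes $\widehat{v_{off}}$ given the other errors, the $V_t$ equation then fixes $\widehat{R_t}$ by inversion, and the remaining three error coordinates are searched for the smallest ellipsoidal radius $r$. A real implementation would use a constrained solver; all numeric values ($\mu$, $\sigma$, readings, components) are illustrative assumptions.

```python
import itertools

R_b, R_c, V_ref = 10_000.0, 10_000.0, 2.5     # assumed component values
V_c, V_t = 1.2504, 1.1408                     # assumed readings
mu = {"off": 0.0, "com": 2.0, "ref": 0.0, "b": 0.0}
sg = {"off": 0.002, "com": 1.0, "ref": 0.01, "b": 10.0}

def radius(v_off, e_com, e_ref, e_b):
    """The ellipsoidal constraint value r for a candidate error vector."""
    return (((v_off - mu["off"]) / sg["off"]) ** 2
            + ((e_com - mu["com"]) / sg["com"]) ** 2
            + ((e_ref - mu["ref"]) / sg["ref"]) ** 2
            + ((e_b - mu["b"]) / sg["b"]) ** 2)

def grid(m, s, n):                            # n points across mu +/- 3 sigma
    return [m + s * 3.0 * (2 * k / (n - 1) - 1) for k in range(n)]

best = None
for e_com, e_ref, e_b in itertools.product(
        grid(mu["com"], sg["com"], 15), grid(mu["ref"], sg["ref"], 15),
        grid(mu["b"], sg["b"], 15)):
    # V_c equation fixes v_off; V_t equation then fixes R_t by inversion.
    v_off = V_c - (V_ref + e_ref) * (R_c + e_com) / (R_c + e_com + R_b + e_b)
    num = (V_t - v_off) * (e_com + R_b + e_b) - e_com * (V_ref + e_ref)
    R_t = num / (V_ref + e_ref - V_t + v_off)
    if R_t > 0:                               # the positivity constraint
        r = radius(v_off, e_com, e_ref, e_b)
        if best is None or r < best[0]:
            best = (r, R_t)
print(best is not None and best[1] > 0)
```

The minimizing $\widehat{R_t}$ is the estimate; minimizing $r$ corresponds to picking the error vector of highest prior likelihood consistent with the two readings.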


#### Optimization Format 4

• An alternate selection criterion; probably not usable.
minimize   $r$

w.r.t.     $\bold{E}(\widehat{R_{t}})\,$
$\widehat{v_{off}}\,,\widehat{e_{com}}\,,\widehat{e_{ref}}\,,\widehat{e_b}\,$

subject to
$V_c=\widehat{v_{off}}+\frac{(V_{ref} +\widehat{e_{ref}})(R_c +\widehat{e_{com}})}{(R_c+\widehat{e_{com}} + R_b+\widehat{e_b})}$
$V_t=\widehat{v_{off}}+\frac{(V_{ref} +\widehat{e_{ref}})(\widehat{R_t} +\widehat{e_{com}})}{(\widehat{R_t}+\widehat{e_{com}} + R_b+\widehat{e_b})}$
$\left(\frac{{\widehat{v_{off}}-\mu_{off}}}{\sigma_{off}}\right)^2+\left(\frac{{\widehat{e_{com}}-\mu_{com}}}{\sigma_{com}}\right)^2+\left(\frac{{\widehat{e_{ref}}-\mu_{ref}}}{\sigma_{ref}}\right)^2+\left(\frac{{\widehat{e_{b}}-\mu_{b}}}{\sigma_{b}}\right)^2 =r$
$\widehat{R_{t}} > 0$


### Infrared Gas analysers

These use either multiple stationary filters or a rotating filter wheel. In either case the components, sensors, and physical structures are subject to significant variation.

## Various forms of $Q()\,$

• Weighted least squares of $Q()\,$ over the range of $x\,$
• Minimize the mode of $\hat{e}$ with respect to the range of $e\,$ and the measurements $\bar{y}=Y(\bar{x};p,e)$
• Minimize the mean of $\hat{e}$ with respect to the range of $e\,$ and the measurements $\bar{y}=Y(\bar{x};p,e)$
• Minimize the worst case of $Q()\,$ over the range of $x\,$
• Some weighting of the error interval with respect to $Q()\,$

## Areas of optimization

• Design
• Runtime
• Calibration usage