# Auto-zero/Auto-calibration


## Motivation

In instrumentation, both in a supporting role and as a prime objective, measurements are taken that are subject to systematic errors. Routes to minimizing the effects of these errors are:

• Spend more money on the hardware. This is valid but runs into diminishing returns: the price rises disproportionately with increased accuracy.
• Apparently, in the industrial process industry, redundant measurement points are implemented and regressed to find "subspaces" that the process has to be operating on; this is called "data reconciliation". Due to lack of experience I (RR) will not be covering it here, although others are welcome to (and may replace this statement).
• Calibrations are done and incorporated into the instrument. This can be done by analog adjustments or written into storage media for subsequent use by operators or software.
• Runtime auto-calibrations done at regular intervals, anywhere from every 0.01 seconds to every 30 minutes. I can speak to these most directly, but I consider the "Calibrations" above to be a special case.

## Mathematical Formulation

Nomenclature: It will be presumed, unless otherwise stated, that collected variables compose a (topological) manifold; i.e. a collection designated by a single symbol and id, not necessarily possessing a differential-geometry metric. This means that there is no intrinsic rank-two tensor, $g_{ij} \,$, allowing an identification of contravariant vectors with covariant ones. These terms are presented to provide a mathematically precise context to distinguish contravariant and covariant vectors, tangent spaces, and underlying coordinate spaces. Typically they can be ignored.

• The quintessential example of a covariant tensor is the differential of a scalar, although the vector space formed by differentials is more extensive than the differentials of scalars.
• The quintessential contravariant vector is the derivative of a path $p \,$ with component values $e^i =f_i(s) \,$ on the manifold, with $(p)^j=\frac{\partial e^j}{\partial s}\cdot \frac {\partial}{\partial e^j} \,$ being a component of the contravariant vector along $p \,$ parametrized by $s$.
• Using $e \,$ (see directly below) as an example:
• $e_{id} \,$ refers to the collection of variables identified by "id"
• Although a collection does not have the properties of a vector space, in some cases we will assume (restrict) it to have those properties. In particular this seems to be needed to state that the Q() functions are convex.
• $e_{id}^i \,$ refers to the $i$'th component of $e_{id} \,$
• $(e_{id})_j^i\,$ refers to the tangent/cotangent bundle with $i \,$ selecting a contravariant component and $j \,$ selecting a covariant component
• $(expr)_{|x\leftarrow c} \,$ refers to an expression, "expr", evaluated with $x=c$
• $(expr)_{|x\rightarrow c} \,$ refers to an expression, "expr", evaluated as the limit as $x$ approaches the value $c$

Definitions:

• $x\,$ a collection of environmental or control variables that need to be estimated
• $\bar{x}$ a collection of calibration points
• $\hat{x}$ the estimate of $x\,$
• $p \,$ a collection of parameters that are constant during operations but selected at design time. The "real" system values during operation are typically $p+e \,$, although other modifications $E(p,e) \,$ are possible, indicating variance of parameters from nominal. $p$ is mostly included in symbolic formulas to allow sensitivity calculations or completeness in symbolic expressions.
• $e\,$ the errors: assumed to vary, but constant during intervals between calibrations and real measurements
• $y\,$ the results of a measurement process attempting to measure $x\,$
• $y=Y(x;p,e)\,$ where $e\,$ might be additive, multiplicative, or some other form
• $\bar{y}=Y(\bar{x};p,e)$ the reading values at the calibration points
• $\hat{e}$ estimates of $e\,$ derived from $\bar{x}, \bar{y}$
• $\hat{e}=E(\bar{x},\bar{y})$
• $Q(x,\hat{x})$ a quality measure of the resulting estimation; for example $\sum{(x_i-\hat{x_i})^2}$
• Where $x\,$ is allowed to vary over a domain for fixed $\hat{x}$
• The example is oversimplified, as will be demonstrated below.
• $\hat{x} \,$ is typically decomposed into a chain using $\hat{e}$

Then the problem can be formulated as:

• Given $\bar{x},\bar{y}$
• Find a formula or process to select $(\bar{x},\bar{y})\xrightarrow{\hat{X}}\hat{x} \,$ so as to minimize $Q(x,\hat{x})$
• The reason for the process term $\hat{X} \,$ is that many correction schemes are feedback controlled; $\hat{X} \,$ is never computed internally, although it might be necessary in design or analysis.
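As a minimal concrete instance of the formulation above (a sketch: the additive-offset form of $Y$ and all numbers are illustrative assumptions, not from a particular instrument):

```python
# Toy instance of the formulation: Y(x; p, e) = x + e (additive offset error),
# one calibration point (x_bar, y_bar); the estimator corrects the raw reading.

def Y(x, e):
    """Measurement process y = Y(x; p, e), here purely additive."""
    return x + e

e_true = 0.37              # unknown systematic error, constant between calibrations
x_bar = 0.0                # calibration point: true value known by construction
y_bar = Y(x_bar, e_true)   # reading at the calibration point

e_hat = y_bar - x_bar      # E(x_bar, y_bar): exact for the additive model

def X_hat(y):
    """Correct a raw reading using the calibration-derived error estimate."""
    return y - e_hat

def Q(x, x_hat):
    """Quality measure of the resulting estimate: squared error."""
    return (x - x_hat) ** 2

x = 2.5                    # true value being measured
y = Y(x, e_true)           # raw reading, contaminated by e_true
print(Q(x, y), Q(x, X_hat(y)))  # calibration drives Q to ~0 here
```

For the additive model a single calibration point determines $\hat{e}$ exactly; the estimation schemes discussed below exist because a realistic $Y$ mixes several errors into each reading.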

## Examples

### Biochemical temperature control

where multiple temperature sensors are multiplexed into a data stream and one or more channels are set aside for auto-calibration. Final system accuracies of .05 degC are needed because mammalian temperature regulation has resulted in processes and diseases that are "tuned" to particular temperatures.

• A simplified example, evaluating one calibration channel and one reading channel. For clarity, the unknown and calibration readings are designated separately instead of following the convention given above; this is clearer in a simple case but becomes unsystematic in more complicated ones.
• $V_x=v_{off}+\frac{V_{ref}R_x}{R_x + R_b}$
• $R_x \,$ can be either the calibration resistor $R_c \,$ or the unknown resistance $R_t \,$ of the thermistor
• $V_x \,$ is the corresponding voltage read: $V_c\,$ or $V_t\,$
• $v_{off} \,$ is the reading offset value, an error
• $V_{ref}, e_{ref} \,$ the nominal bias voltage and bias voltage error
• $R_b, e_b \,$ the nominal bias resistor and bias resistor error
• $e_{com} \,$ an unknown constant resistance in series with $R_c, R_t \,$ for all readings
• With errors
• $V_x=v_{off}+\frac{(V_{ref} +e_{ref})(R_x +e_{com})}{(R_x+e_{com} + R_b+e_b)}$
• $V_c=\left.V_x\right|_{x\leftarrow c}$
• $V_t=\left.V_x\right|_{x\leftarrow t}$
• The problem is to optimally estimate $R_t\,$ based upon $V_t \,$ and $V_c \,$
• The direct inversion formula illustrates the utility of working in the error space $[v_{off}\,,e_{com}\,,e_b\,,e_{ref}] \,$ during design and analysis. The direct inversion of $V_t \,$ for $R_t \,$ naturally invokes the error space as a link to $V_c \,$.
• Inversion for $R_t \,$
• $R_t=\frac{(v_{off}-V_t)(e_{com}+R_b+e_b)+e_{com}(V_{ref}+e_{ref})}{V_t-v_{off}-V_{ref}-e_{ref}}$
• Inversion in terms of estimates
• $\widehat{R_t}=\frac{(\widehat{v_{off}}-V_t)(\widehat{e_{com}}+R_b+\widehat{e_b})+\widehat{e_{com}}(V_{ref}+\widehat{e_{ref}})}{V_t-\widehat{v_{off}}-V_{ref}-\widehat{e_{ref}}}$
• Setpoint estimate: when $R_t=R_c \,$, minimize $(1-\frac{\widehat{R_t}}{R_c}) \,$. This might seem a little strange, but it applies when one is trying to set $R_t \,$ to a setpoint $R_c \,$ while errors occur during measurement. The division arises because $R_t=ke^{b/t} \,$ and $t \,$ is the real variable of interest.
• Mode estimate: Maximize the probability of an estimate, treating the calibration measurement $V_c \,$ as a constraint hypersurface in the error space; this requires a defined PDF. It can be done via the KKT conditions, and also extended to more calibration readings. In polynomial cases it can theoretically be solved via Groebner bases, but even given the "exact" solutions one is forced into sub-optimal/approximate estimates.
• Mean estimate: Using the same model, minimize the expected error of an estimate over all possible values on the constraint surface, weighted by a PDF on that surface. The projection of the original PDF on n-space to the constraint surface can be done via differential geometry. There are probably statistical methods, but the statistics descriptions seem to take a cavalier attitude towards some transformations involving integrals.
• Worst case: only the points where the constraint meets some boundary, say ±.01%, are considered.
• Any of the above extended to cover a range of $R_t \,$ as well as the range of errors.
• It should be mentioned that, in this case, $(R_t - \widehat{R_t})$ is not a good (or natural) error function. A better function for both results and calculations is $(1-\frac{\widehat{R_t}}{R_t})\,$. The form of the errors varies naturally from problem to problem and should be accommodated in any organized procedure.
• Sensitivities are needed during design, to determine which error budgets are tight and how much improvement can be had by spending more money on individual parts; and during analysis, to determine the most likely cause of failures.
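The forward model and the inversion above can be checked numerically. The sketch below (error values are arbitrary choices within the bounds used in the next section) confirms that inverting with the true errors recovers $R_t \,$ exactly, while inverting with zeroed estimates leaves a residual error:

```python
V_REF, R_B = 2.0, 1.0   # nominal bias voltage and bias resistor

def v_read(R_x, v_off, e_com, e_ref, e_b):
    """Forward model: V_x = v_off + (V_ref+e_ref)(R_x+e_com)/(R_x+e_com+R_b+e_b)."""
    return v_off + (V_REF + e_ref) * (R_x + e_com) / (R_x + e_com + R_B + e_b)

def r_hat(V_t, v_off, e_com, e_ref, e_b):
    """Direct inversion of the forward model for R_t, given error estimates."""
    num = (v_off - V_t) * (e_com + R_B + e_b) + e_com * (V_REF + e_ref)
    den = V_t - v_off - V_REF - e_ref
    return num / den

errs = dict(v_off=5e-4, e_com=5e-4, e_ref=2e-4, e_b=-8e-4)  # arbitrary in-bound errors
R_t = 1.0
V_t = v_read(R_t, **errs)

exact = r_hat(V_t, **errs)                            # true errors: exact recovery
naive = r_hat(V_t, v_off=0, e_com=0, e_ref=0, e_b=0)  # zero estimates: residual error
print(exact, naive)
```

With these error values the uncorrected estimate is off by a few tenths of a percent, which is why the estimation schemes above matter at the .05 degC level.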

#### Optimization Format

• The setpoint case with a full optimizer available at runtime.
minimize   $\left(1-\frac{\widehat{R_t}}{R_t}\right)^2\,$
w.r.t.     $\widehat{v_{off}}\,,\widehat{e_{com}}\,,\widehat{e_{ref}}\,,\widehat{e_b}\,$
subject to $[|\widehat{v_{off}}|\,,\widehat{e_{com}}\,,-\widehat{e_{com}}\,,|\widehat{e_{ref}}|\,,|\widehat{e_b}|]\leq [.001,.001,0,.0001V_{ref}\,,.001R_b]\,$
$[V_{ref}\,,R_b\,,R_t]=[2,1,1]\,$
$[V_c\,,R_c]=[.998,1] \,$
$V_c=\widehat{v_{off}}+\frac{(V_{ref} +\widehat{e_{ref}})(R_c +\widehat{e_{com}})}{(R_c+\widehat{e_{com}} + R_b+\widehat{e_b})} \,$
$\widehat{R_t}=\frac{(\widehat{v_{off}}-V_t)(\widehat{e_{com}}+R_b+\widehat{e_b})+\widehat{e_{com}}(V_{ref}+\widehat{e_{ref}})}{V_t-\widehat{v_{off}}-V_{ref}-\widehat{e_{ref}}}\,$
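A dependency-free sketch of this optimization: the equality constraint in $V_c \,$ is used to eliminate $\widehat{v_{off}} \,$, and the remaining three estimates are grid-searched. Two assumptions are made: $V_c=.999 \,$ is used because $.998 \,$ is not reachable within the stated bounds (the smallest attainable calibration reading is roughly $.9984 \,$), and $V_t=V_c \,$ since $R_t=R_c \,$ at the setpoint.

```python
# Setpoint-case optimization without an optimizer library: the calibration
# equality fixes v_off once the other three estimates are chosen, so we
# grid-search (e_com, e_ref, e_b) and keep only feasible v_off values.
V_REF, R_B, R_T, R_C = 2.0, 1.0, 1.0, 1.0
V_C = 0.999   # assumption: a feasible reading (the stated .998 is out of bounds)
V_T = V_C     # assumption: at the setpoint R_t = R_c, so both channels read alike

def linspace(lo, hi, n):
    step = (hi - lo) / (n - 1)
    return [lo + i * step for i in range(n)]

def r_hat(v_off, e_com, e_ref, e_b):
    num = (v_off - V_T) * (e_com + R_B + e_b) + e_com * (V_REF + e_ref)
    den = V_T - v_off - V_REF - e_ref
    return num / den

best = (float("inf"), None)
for e_com in linspace(0.0, 0.001, 21):
    for e_ref in linspace(-0.0001 * V_REF, 0.0001 * V_REF, 21):
        for e_b in linspace(-0.001 * R_B, 0.001 * R_B, 21):
            # eliminate v_off via the calibration equality constraint
            v_off = V_C - (V_REF + e_ref) * (R_C + e_com) / (R_C + e_com + R_B + e_b)
            if abs(v_off) > 0.001 + 1e-12:   # bound check, with float tolerance
                continue
            obj = (1.0 - r_hat(v_off, e_com, e_ref, e_b) / R_T) ** 2
            if obj < best[0]:
                best = (obj, (v_off, e_com, e_ref, e_b))

print(best)
```

A real implementation would use a constrained solver; the grid is only to keep the sketch self-contained.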

• Proposed: Change the w.r.t. to coefficients, like so. This corresponds to the case where the run-time processor is limited and linear corrections are to be used. This is unfinished, since $V_c \,$ needs to be moved to a range; i.e. robust approximation.
minimize   $\left(1-\frac{\widehat{R_t}}{R_t}\right)^2\,$
w.r.t.     $[\widehat{a_{off}},\widehat{a_{com}},\widehat{a_{ref}},\widehat{a_b}],[\widehat{b_{off}},\widehat{b_{com}},\widehat{b_{ref}},\widehat{b_b}] \,$
subject to $[|\widehat{v_{off}}|\,,\widehat{e_{com}}\,,-\widehat{e_{com}}\,,|\widehat{e_{ref}}|\,,|\widehat{e_b}|]\leq [.001,.001,0,.0001V_{ref}\,,.001R_b]\,$
$[V_{ref},R_b,R_t]=[2,1,1] \,$
$[V_c,R_c]=[.998,1] \,$
$V_c=\widehat{v_{off}}+\frac{(V_{ref} +\widehat{e_{ref}})(R_c +\widehat{e_{com}})}{(R_c+\widehat{e_{com}} + R_b+\widehat{e_b})}\,$
$\widehat{R_t}=\frac{(\widehat{v_{off}}-V_t)(\widehat{e_{com}}+R_b+\widehat{e_b})+\widehat{e_{com}}(V_{ref}+\widehat{e_{ref}})}{V_t-\widehat{v_{off}}-V_{ref}-\widehat{e_{ref}}}$
$[\widehat{v_{off}},\widehat{e_{com}},\widehat{e_{ref}},\widehat{e_b}]=[\widehat{a_{off}},\widehat{a_{com}},\widehat{a_{ref}},\widehat{a_b}]+V_c\cdot[\widehat{b_{off}},\widehat{b_{com}},\widehat{b_{ref}},\widehat{b_b}] \,$
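One way to obtain such coefficients at design time is sketched below; the sample-then-regress approach is an assumption, not something prescribed above. Error vectors are drawn within the stated bounds, the calibration reading each would produce is computed, and each $(\widehat{a},\widehat{b}) \,$ pair is least-squares fitted so that $\widehat{e}=\widehat{a}+\widehat{b}V_c \,$ predicts that error component from the single reading $V_c \,$:

```python
import random

random.seed(1)
V_REF, R_B, R_C = 2.0, 1.0, 1.0

def v_c(v_off, e_com, e_ref, e_b):
    """Calibration reading produced by a given true error vector."""
    return v_off + (V_REF + e_ref) * (R_C + e_com) / (R_C + e_com + R_B + e_b)

# Sample true error vectors uniformly within the stated bounds.
samples = []
for _ in range(2000):
    e = (random.uniform(-0.001, 0.001),    # v_off
         random.uniform(0.0, 0.001),       # e_com
         random.uniform(-0.0002, 0.0002),  # e_ref bound = .0001 * V_ref
         random.uniform(-0.001, 0.001))    # e_b   bound = .001 * R_b
    samples.append((v_c(*e), e))

def fit_affine(xs, ys):
    """Closed-form least-squares a, b for y ~ a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

vcs = [s[0] for s in samples]
coeffs = [fit_affine(vcs, [s[1][i] for s in samples]) for i in range(4)]

def e_hat(Vc, i):
    """Runtime correction: linear in the single measured V_c."""
    a, b = coeffs[i]
    return a + b * Vc

# v_off dominates the variance of V_c, so its linear estimate should beat the
# zero estimate by a wide margin (roughly R^2 ~ 0.75 for these bounds).
mse_zero = sum(s[1][0] ** 2 for s in samples) / len(samples)
mse_fit = sum((s[1][0] - e_hat(s[0], 0)) ** 2 for s in samples) / len(samples)
print(mse_zero, mse_fit)
```

This sidesteps the runtime optimizer entirely, at the cost of only capturing whatever correlation a single $V_c \,$ reading carries about each error; moving $V_c \,$ to a range, as noted above, is the remaining robust-approximation step.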


### Infrared Gas analysers

These use either multiple stationary filters or a rotating filter wheel; in either case the components, sensors, and physical structures are subject to significant variation.

## Various forms of $Q()\,$

• Weighted least squares of $Q()\,$ over the range of $x\,$
• Minimize the mode of $\hat{e}$ with respect to the range of $e\,$ and the measurements $\bar{y}=Y(\bar{x};p,e)$
• Minimize the mean of $\hat{e}$ with respect to the range of $e\,$ and the measurements $\bar{y}=Y(\bar{x};p,e)$
• Minimize the worst case of $Q()\,$ over the range of $x\,$
• Some weighting of the error interval with respect to $Q()\,$

## Areas of optimization

• Design
• Runtime
• Calibration usage