Auto-zero/Auto-calibration

Motivation

In instrumentation, both in a supporting role and as a prime objective, measurements are taken that are subject to systematic errors. Routes to minimizing the effects of these errors are:

  • Spend more money on the hardware. This is valid but hits diminishing returns; the price rises disproportionately with respect to the increase in accuracy.
  • Apparently, in the process industries, multiple measurement points are implemented and regressed to find "subspaces" that the process has to be operating on. Due to lack of experience I (RR) will not be covering that here, although others are welcome to (and may replace this statement). This is apparently called "data reconciliation".
  • Calibrations are done and incorporated into the instrument. This can be done by analog adjustments or written into storage media for subsequent use by operators or software.
  • Runtime auto-calibrations done at regular intervals. These are done at a variety of time intervals, from every 0.01 seconds to every 30 minutes. I can speak to these most directly, but I consider the "Calibrations" above to be a special case.

Mathematical Formulation

Nomenclature: It will be presumed, unless otherwise stated, that collected variables compose a (topological) manifold; i.e. a collection designated by a single symbol and id, not necessarily possessing a differential-geometry metric. This means that there is no intrinsic rank-two tensor, g_{ij}, allowing an identification of contravariant vectors with covariant ones. These terms are presented to provide a mathematically precise context to distinguish contravariant and covariant vectors, tangent spaces, and underlying coordinate spaces. Typically they can be ignored.

  • The quintessential example of a covariant tensor is the differential of a scalar, although the vector space formed by differentials is more extensive than the differentials of scalars.
  • The quintessential contravariant vector is the derivative of a path p with component values e^i = f_i(s) on the manifold, with (p)^j=\frac{\partial e^j}{\partial s}\cdot \frac{\partial}{\partial e^j} being a component of the contravariant vector along p parametrized by "s".
  • Using e (see directly below) as an example
    • e_{id} refers to the collection of variables identified by "id"
      • Although a collection does not have the properties of a vector space, in some cases we will assume (restrict) it to have those properties. In particular this seems to be needed to state that the Q() functions are convex.
    • e_{id}^i refers to the i'th component of e_{id}
    • (e_{id})_j^i refers to the tangent/cotangent bundle with i selecting a contravariant component and j selecting a covariant component
    • (expr)_{|x\leftarrow c} refers to an expression, "expr", where (expr) is evaluated with x=c
    • (expr)_{|x\rightarrow c} refers to an expression, "expr", where (expr) is evaluated as the limit of "x" as it approaches the value "c"

Definitions:

  • x, a collection of some environmental or control variables that need to be estimated
  • x_c, a collection of calibration points
  • \bar{x}, the PDF of x. This is a representation of a function.
  • \bar{x}(x_c), the evaluation of the PDF \bar{x} at x_c.
  • \hat{x}, the estimate of x
  • p, a collection of parameters that are constant during operation but selected at design time. The system's "real" values during operation are typically p+e, although other modifications are possible.
  • e, the errors: assumed to vary, but constant during intervals between calibrations and real measurements
  • \bar{e}, the PDF of e. This is a representation of a function.
  • \bar{e}(x), the evaluation of the PDF \bar{e} at x.
  • y, the results of a measurement process attempting to measure x
    • y=Y(x;p,e), where e might be additive, multiplicative, or some other form.
    • y_c=Y(x_c;p,e), the reading values at the calibration points for an instantiation of e
  • \hat{e}, estimates of e derived from x, y
    • \hat{e}=Est(x_c,y_c)
  • Q(x,\hat{x}), a quality measure of the resulting estimation; for example \sum{(x_i-\hat{x_i})^2}
    • Where x is allowed to vary over a domain for fixed \hat{x}
    • The example is oversimplified, as will be demonstrated below.
    • \hat{x} is typically decomposed into a chain using \hat{e}


Then the problem can be formulated as:

  • Given x_c, y_c
  • Find a formula or process \hat{X} to select \hat{X}\left(x_c,y_c\right)\rightarrow\hat{x} so as to minimize Q(x,\hat{x})
    • The reason for the process term \hat{X} is that many correction schemes are feedback controlled; \hat{X} is never computed internally, although it might be needed in design or analysis.
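As a concrete illustration of the formulation above, the following sketch works a hypothetical two-point calibration with an affine reading model. The model y = (1 + e_gain)x + e_off, the calibration points, and all names and numbers are assumptions made only for this example.

# A minimal sketch of the formulation above, under an assumed affine reading
# model y = (1 + e_gain)*x + e_off with two calibration points.
import numpy as np

rng = np.random.default_rng(0)

def Y(x, e_gain, e_off):
    # Measurement process y = Y(x; p, e) for this toy model.
    return (1.0 + e_gain) * x + e_off

# One instantiation of the errors e (constant between calibrations).
e_gain, e_off = rng.normal(0.0, 0.01), rng.normal(0.0, 0.05)

# Calibration points x_c and their readings y_c.
x_c = np.array([0.0, 100.0])
y_c = Y(x_c, e_gain, e_off)

# Estimation process X_hat: invert the calibration readings for e_hat,
# then map any later reading y back to an estimate x_hat.
e_off_hat = y_c[0]
e_gain_hat = (y_c[1] - y_c[0]) / (x_c[1] - x_c[0]) - 1.0

def X_hat(y):
    return (y - e_off_hat) / (1.0 + e_gain_hat)

# Quality measure Q(x, x_hat): squared error over a range of x.
x = np.linspace(0.0, 100.0, 11)
Q = np.sum((x - X_hat(Y(x, e_gain, e_off))) ** 2)
print(Q)   # ~0 because this toy model is exactly invertible from two points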

Algebra of Distributions of Random Variables

This section discusses algebraic combinations of variables realized from distributions.

  • \mathcal{F}(g(x)) is the Fourier transform of g(x).
  • \mathcal{M}(g(x)) is the Mellin transform of g(x).
  • \oplus: Let
    • c=a+b,\ a,b,c\in \mathbb{R},
    • \overline{c},\overline{a},\overline{b} be the PDFs of c,a,b.
    • \overline{c} = \overline{a}\oplus \overline{b}=\mathcal{F}^{-1}( \mathcal{F}(\overline{a})\cdot \mathcal{F}(\overline{b}))
  • \ominus: Let
    • c=a-b=a+(-b),\ a,b,c\in \mathbb{R},
    • \overline{c},\overline{a},\overline{b}(-x) be the PDFs of c,a,-b.
    • \overline{c} = \overline{a}\ominus \overline{b}=\mathcal{F}^{-1}( \mathcal{F}(\overline{a})\cdot \mathcal{F}(\overline{b}(-x)))
  • \otimes: Let
    • c=a\cdot b,\ a,b,c\in (0,\infty)
      • Springer gives an extension to \mathbb{R}
    • \overline{c},\overline{a},\overline{b} be the PDFs of c,a,b.
    • \overline{c} = \overline{a}\otimes \overline{b}=\mathcal{M}^{-1}( \mathcal{M}(\overline{a})\cdot \mathcal{M}(\overline{b}))
  • \oslash: Let
    • c= \frac{a}{b},\ a,b,c\in (0,\infty)
      • Springer gives an extension to \mathbb{R}
    • \overline{c},\overline{a},\overline{b}(x) be the PDFs of c,a,b.
    • \overline{c} = \overline{a}\oslash \overline{b}=\mathcal{M}^{-1}( \mathcal{M}(\overline{a})\cdot \mathcal{M}(\overline{b}(\frac{1}{x})))

Reference: M.D. Springer, The Algebra of Random Variables, Wiley, New York, 1979.
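The ⊕ and ⊖ operations above can be computed numerically as grid convolutions, which is exactly the inverse-transform-of-a-product form given. The sketch below is a minimal illustration; the particular distributions, grid, and spacing are assumptions for the example. The multiplicative operations ⊗ and ⊘ can be handled the same way after a change of variables to log a and log b, which turns products and quotients into sums and differences.

# A numerical sketch of the (+) and (-) operations on PDFs defined above.
# The PDF of c = a + b is the convolution of the PDFs of a and b, computed
# here via FFTs (scipy.signal.fftconvolve), matching F^-1(F(a_bar)*F(b_bar)).
import numpy as np
from scipy.signal import fftconvolve

dx = 0.001
x = np.arange(-2.0, 2.0, dx)

# PDF of a ~ Normal(0, 0.1) and PDF of b ~ Uniform(-0.2, 0.2), on the grid.
pdf_a = np.exp(-0.5 * (x / 0.1) ** 2) / (0.1 * np.sqrt(2.0 * np.pi))
pdf_b = np.where(np.abs(x) <= 0.2, 1.0 / 0.4, 0.0)

# c = a (+) b: convolution, scaled by dx so the result is still a density.
pdf_sum = fftconvolve(pdf_a, pdf_b, mode="full") * dx
x_sum = 2.0 * x[0] + dx * np.arange(pdf_sum.size)   # support of the result

# c = a (-) b: convolve with the reflected PDF of b, i.e. b_bar(-x).
pdf_diff = fftconvolve(pdf_a, pdf_b[::-1], mode="full") * dx

print(pdf_sum.sum() * dx, pdf_diff.sum() * dx)      # both ~1.0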

Examples

Biochemical temperature control

where multiple temperature sensors are multiplexed into a data stream and one or more channels are set aside for auto-calibration. Final system accuracies of 0.05 °C are needed because mammalian temperature regulation has resulted in processes and diseases that are "tuned" to particular temperatures.

  • A simplified example, evaluating one calibration channel and one reading channel. To be more transparent, the unknown and calibration readings are designated separately, instead of following the convention given above. This is clearer in a simple case, but in more complicated cases it is unsystematic.
    • V_x=v_{off}+\frac{V_{ref}R_x}{R_x + R_b}
    • R_x can be either the calibration resistor R_c or the unknown resistance R_t of the thermistor
    • V_x is the corresponding voltage read: V_c or V_t
    • v_{off} is the reading offset value, an error
    • \overline{v_{off}}, the PDF of v_{off}
    • V_{ref}, e_{ref}, the nominal bias voltage and bias voltage error
    • \overline{e_{ref}}, the PDF of e_{ref}
    • R_b, e_b, the nominal bias resistor and bias resistor error
    • \overline{e_b}, the PDF of e_b
    • e_{com}, an unknown constant resistance in series with R_c, R_t for all readings
    • \overline{e_{com}}, the PDF of e_{com}
    • \overline{e_{com}'}, the restricted form of \overline{e_{com}}
      • Hereafter x' will signify a restricted form of x
  • With errors
    • V_x=v_{off}+\frac{(V_{ref} +e_{ref})(R_x +e_{com})}{(R_x+e_{com} + R_b+e_b)}
  • Implicit formula for PDF distributions. Generally this form has two unknown distributions, \overline{V_{t}} and \overline{R_{t}}, but in any particular application one or the other is known. In addition the constants 1, V_{ref}, R_b in the formula stand for the Dirac delta distribution positioned at the constant.
    • \overline{V_{t}}\ominus\overline{v_{off}}\ominus\left(\left(V_{ref}\oplus\overline{e_{ref}}\right)\oslash\left(1\oplus\left(\left(R_{b}\oplus\overline{e_{b}}\right)\oslash\left(\overline{R_{t}}\oplus\overline{e_{com}}\right)\right)\right)\right)=0
  • Calibration reading
    • V_c=\left.V_x\right|_{x\leftarrow c}
  • Thermistor (real) reading
    • V_t=\left.V_x\right|_{x\leftarrow t}
  • The problem is to optimally estimate R_t based upon V_t and V_c
  • The direct inversion formula illustrates the utility of mathematically using the error space [v_{off},e_{com},e_b,e_{ref}] during design and analysis. The direct inversion of V_t for R_t naturally invokes the error space as a link to V_c.
    • Inversion for R_t
      • R_t=\frac{(V_t-v_{off})(e_{com}+R_b+e_b)-e_{com}(V_{ref}+e_{ref})}{V_{ref}+e_{ref}+v_{off}-V_t}
    • Inversion in terms of estimates (a numerical sketch follows this list)
      • \widehat{R_t}=\frac{(V_t-\widehat{v_{off}}) (\widehat{e_{com}}+R_b+\widehat{e_b})-\widehat{e_{com}}(V_{ref}+\widehat{e_{ref}})}{V_{ref}+\widehat{e_{ref}}+\widehat{v_{off}}-V_{t}}
    • Mode estimate: Find the maximum value of \overline{R_{t}'}. A necessary condition for differentiable functions is \frac{d\overline{R_{t}'}_{|{p'}}}{dp'}=0. In polynomial cases this can theoretically be solved via a Groebner basis; but even given the "exact" solutions, one is forced into sub-optimal/approximate estimates.
    • Mean estimate: Find the expectation, that is \widehat{R_t}=\int R'\cdot \overline{R_{t}'}_{|{R'}} dR'
    • Worst case: consider the points where the error constraints meet some boundary, say ±0.01%
    • Any of the above extended to cover a range of R_t as well as the range of errors.
  • It should be mentioned that, in this case, (R_t - \widehat{R_t}) is not a good (or natural) error function. A better function for both results and calculations is (1-\frac{\widehat{R_t}}{R_t}). The form of the errors varies naturally from problem to problem and should be accommodated in any organized procedure.
  • Sensitivities are needed during design in order to determine which errors are tight and to find out how much improvement can be had by spending more money on individual parts; and during analysis to determine the most likely cause of failures.
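
The sketch below illustrates the direct inversion above together with a simple Monte Carlo over the error space. The component values, tolerances, and the "true" R_t are assumptions made for the example, not values from the article.

# Direct-inversion estimate of R_t and a Monte Carlo over the error space,
# following the formulas above.
import numpy as np

rng = np.random.default_rng(1)

V_ref, R_b = 2.5, 10_000.0          # nominal bias voltage and bias resistor
R_t_true = 12_000.0                 # assumed true thermistor resistance

def V_reading(R_x, v_off, e_ref, e_b, e_com):
    # V_x = v_off + (V_ref+e_ref)(R_x+e_com) / (R_x+e_com + R_b+e_b)
    return v_off + (V_ref + e_ref) * (R_x + e_com) / (R_x + e_com + R_b + e_b)

def R_t_hat(V_t, v_off, e_ref, e_b, e_com):
    # Direct inversion of V_t for R_t, using estimates of the errors.
    num = (V_t - v_off) * (e_com + R_b + e_b) - e_com * (V_ref + e_ref)
    den = V_ref + e_ref + v_off - V_t
    return num / den

# One "real" instantiation of the errors, drawn from assumed tolerances.
v_off, e_ref = rng.uniform(-1e-3, 1e-3), rng.uniform(-2.5e-3, 2.5e-3)
e_b, e_com = rng.uniform(-10.0, 10.0), rng.uniform(0.0, 5.0)

V_t = V_reading(R_t_true, v_off, e_ref, e_b, e_com)

# Naive estimate: assume all errors are zero.
print(1.0 - R_t_hat(V_t, 0.0, 0.0, 0.0, 0.0) / R_t_true)

# Monte Carlo over the error PDFs: the induced distribution of (1 - R_hat/R_t),
# from which a mode, mean, or worst-case estimate can be read off.
N = 100_000
errs = (rng.uniform(-1e-3, 1e-3, N), rng.uniform(-2.5e-3, 2.5e-3, N),
        rng.uniform(-10.0, 10.0, N), rng.uniform(0.0, 5.0, N))
samples = 1.0 - R_t_hat(V_t, *errs) / R_t_true
print(samples.mean(), np.percentile(samples, [0.5, 99.5]))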

Optimization Format

  • The setpoint case with a full optimizer available at runtime. The purpose is to choose error-space values that minimize some likelihood function. The selected error values can then be reused to calculate \widehat{R_t} for different values of V_t. It is presumed that we choose likelihoods such that surfaces of constant likelihood form a series of nested (n-1)-dimensional convex sets/shells around a center.
  • Absolute value case, choosing error values for subsequent calculations.
minimize   \frac{|\widehat{v_{off}}|}{L_{off}}+\frac{|\widehat{e_{com}}-e_{com-center}|}{L_{com}}+\frac{|\widehat{e_{ref}}|}{L_{ref}}+\frac{|\widehat{e_b}|}{L_b}

w.r.t.     \widehat{v_{off}},\widehat{e_{com}},\widehat{e_{ref}},\widehat{e_b}

subject to [|\widehat{v_{off}}|,\widehat{e_{com}},-\widehat{e_{com}},|\widehat{e_{ref}}|,|\widehat{e_b}|]\leq[L_{off},L_{com},0,L_{ref},L_{b}]

           V_c=\widehat{v_{off}}+\frac{(V_{ref}+\widehat{e_{ref}})(R_c+\widehat{e_{com}})}{(R_c+\widehat{e_{com}}+R_b+\widehat{e_b})}
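
A minimal sketch of this absolute-value formulation, with scipy's SLSQP standing in for "a full optimizer". The measured V_c, R_c, and the limits L_* are illustrative assumptions; a production formulation would split the absolute values into positive/negative parts to keep the objective smooth.

# Choose error-space values that satisfy the calibration equation while
# minimizing the weighted absolute-value objective above.
import numpy as np
from scipy.optimize import minimize

V_ref, R_b, R_c = 2.5, 10_000.0, 12_000.0
V_c_meas = 1.3660                          # assumed calibration reading
L_off, L_com, L_ref, L_b = 1e-3, 5.0, 2.5e-3, 10.0
e_com_center = 2.5

def objective(z):
    v_off, e_com, e_ref, e_b = z
    return (abs(v_off) / L_off + abs(e_com - e_com_center) / L_com
            + abs(e_ref) / L_ref + abs(e_b) / L_b)

def calib_residual(z):
    v_off, e_com, e_ref, e_b = z
    V_c = v_off + (V_ref + e_ref) * (R_c + e_com) / (R_c + e_com + R_b + e_b)
    return V_c - V_c_meas                  # equality constraint: residual = 0

# Box constraints from "subject to": |v_off|<=L_off, 0<=e_com<=L_com, etc.
bounds = [(-L_off, L_off), (0.0, L_com), (-L_ref, L_ref), (-L_b, L_b)]
z0 = np.array([0.0, e_com_center, 0.0, 0.0])

res = minimize(objective, z0, method="SLSQP", bounds=bounds,
               constraints=[{"type": "eq", "fun": calib_residual}])
v_off_hat, e_com_hat, e_ref_hat, e_b_hat = res.x
print(res.success, res.x)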

Optimization Format 2

  • Reformulation in an extended matrix format. A derivation is in the discussion under /* Optimization Format */.
  • We introduce a new variable, \widehat{I_c}, to produce coupled equations. This is meant to simplify the nonlinearities.
minimize   \frac{|\widehat{v_{off}}|}{L_{off}}+\frac{|\widehat{e_{com}}-e_{com-center}|}{L_{com}}+\frac{|\widehat{e_{ref}}|}{L_{ref}}+\frac{|\widehat{e_b}|}{L_b}

w.r.t.     \widehat{v_{off}},\widehat{e_{com}},\widehat{e_{ref}},\widehat{e_b},\widehat{I_c}

subject to [|\widehat{v_{off}}|,\widehat{e_{com}},-\widehat{e_{com}},|\widehat{e_{ref}}|,|\widehat{e_b}|]\leq[L_{off},L_{com},0,L_{ref},L_{b}]
           
           \left[\begin{array}{c}
           V_{ref}\\
           V_{c}\end{array}\right]=\left[\begin{array}{ccccc}
           1 & (R_{b}+R_{c}) & 1 & 1 & 0\\
           0 & R_{c} & 0 & 1 & 1\end{array}\right]\left[\begin{array}{c}
           \widehat{e_{ref}}\\
           \widehat{I_{c}}\\
           \widehat{I_{c}}\,\widehat{e_{b}}\\
           \widehat{I_{c}}\,\widehat{e_{com}}\\
           \widehat{v_{off}}\end{array}\right]

           |\widehat{e_{ref}}| < V_{ref}
           \widehat{I_{c}} > 0
           V_{c} < V_{ref}+\widehat{e_{ref}}
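
The sketch below illustrates the lifted ("extended matrix") constraint structure: the calibration equations become linear in the vector [ê_ref, Î_c, Î_c ê_b, Î_c ê_com, v̂_off], at the cost of a recovery step for ê_b and ê_com. The numeric values are illustrative assumptions; the full problem also carries the weighted absolute-value objective and the box/sign constraints.

# Build the linear equality constraints of Optimization Format 2 and show the
# lift/recover steps an optimizer would use around them.
import numpy as np

V_ref, R_b, R_c = 2.5, 10_000.0, 12_000.0
V_c_meas = 1.3660                       # assumed calibration reading

# Constraint matrix from the formulation above, acting on the lifted vector.
A = np.array([[1.0, R_b + R_c, 1.0, 1.0, 0.0],
              [0.0, R_c,       0.0, 1.0, 1.0]])
b = np.array([V_ref, V_c_meas])

def lift(e_ref, I_c, e_b, e_com, v_off):
    # Map physical estimates to the lifted vector the linear system uses.
    return np.array([e_ref, I_c, I_c * e_b, I_c * e_com, v_off])

def residual(z):
    # Equality residuals A z - b that the optimizer drives to zero.
    return A @ z - b

# Trial estimates (an optimizer would adjust these subject to the residuals).
z = lift(e_ref=0.0, I_c=V_ref / (R_b + R_c), e_b=0.0, e_com=0.0, v_off=0.0)
print(residual(z))

# Recovering the physical estimates from a lifted solution z.
e_ref_hat, I_c_hat, k1, k2, v_off_hat = z
e_b_hat, e_com_hat = k1 / I_c_hat, k2 / I_c_hat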

Optimization Format 3

  • Working directly on choosing the most likely value of \widehat{R_{t}}.
minimize   r

w.r.t.     \widehat{R_{t}}
           \widehat{v_{off}},\widehat{e_{com}},\widehat{e_{ref}},\widehat{e_b}

subject to
           V_c=\widehat{v_{off}}+\frac{(V_{ref}+\widehat{e_{ref}})(R_c+\widehat{e_{com}})}{(R_c+\widehat{e_{com}}+R_b+\widehat{e_b})}

           V_t=\widehat{v_{off}}+\frac{(V_{ref}+\widehat{e_{ref}})(\widehat{R_t}+\widehat{e_{com}})}{(\widehat{R_t}+\widehat{e_{com}}+R_b+\widehat{e_b})}

           \left(\frac{\widehat{v_{off}}-\mu_{off}}{\sigma_{off}}\right)^2+\left(\frac{\widehat{e_{com}}-\mu_{com}}{\sigma_{com}}\right)^2+\left(\frac{\widehat{e_{ref}}-\mu_{ref}}{\sigma_{ref}}\right)^2+\left(\frac{\widehat{e_{b}}-\mu_{b}}{\sigma_{b}}\right)^2 = r

           \widehat{R_{t}} > 0
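
A minimal sketch of this format: minimize the squared radius r subject to the two measurement equations, solving jointly for R̂_t and the error estimates. The readings, sigmas, and nominal values are assumptions, and scipy's SLSQP again stands in for the runtime optimizer.

# Solve Optimization Format 3 numerically for an assumed pair of readings.
import numpy as np
from scipy.optimize import minimize

V_ref, R_b, R_c = 2.5, 10_000.0, 12_000.0
V_c_meas, V_t_meas = 1.3660, 1.2500           # assumed readings
mu = np.zeros(4)                              # centers for [off, com, ref, b]
sigma = np.array([1e-3, 2.0, 2.5e-3, 5.0])    # assumed standard deviations

def divider(R_x, v_off, e_com, e_ref, e_b):
    return v_off + (V_ref + e_ref) * (R_x + e_com) / (R_x + e_com + R_b + e_b)

def r_of(z):                                  # z = [R_t, v_off, e_com, e_ref, e_b]
    return np.sum(((z[1:] - mu) / sigma) ** 2)

constraints = [
    {"type": "eq", "fun": lambda z: divider(R_c, *z[1:]) - V_c_meas},
    {"type": "eq", "fun": lambda z: divider(z[0], *z[1:]) - V_t_meas},
]
bounds = [(1e-6, None)] + [(None, None)] * 4  # enforce R_t_hat > 0

z0 = np.array([10_000.0, 0.0, 0.0, 0.0, 0.0])
res = minimize(r_of, z0, method="SLSQP", bounds=bounds, constraints=constraints)
print(res.success, res.x[0])                  # most likely R_t_hat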

Optimization Format 4

  • An alternate selection criterion; probably not usable.
minimize   r

w.r.t.     \mathbf{E}(\widehat{R_{t}})
           \widehat{v_{off}},\widehat{e_{com}},\widehat{e_{ref}},\widehat{e_b}

subject to
           V_c=\widehat{v_{off}}+\frac{(V_{ref}+\widehat{e_{ref}})(R_c+\widehat{e_{com}})}{(R_c+\widehat{e_{com}}+R_b+\widehat{e_b})}

           V_t=\widehat{v_{off}}+\frac{(V_{ref}+\widehat{e_{ref}})(\widehat{R_t}+\widehat{e_{com}})}{(\widehat{R_t}+\widehat{e_{com}}+R_b+\widehat{e_b})}

           \left(\frac{\widehat{v_{off}}-\mu_{off}}{\sigma_{off}}\right)^2+\left(\frac{\widehat{e_{com}}-\mu_{com}}{\sigma_{com}}\right)^2+\left(\frac{\widehat{e_{ref}}-\mu_{ref}}{\sigma_{ref}}\right)^2+\left(\frac{\widehat{e_{b}}-\mu_{b}}{\sigma_{b}}\right)^2 = r

           \widehat{R_{t}} > 0

Infrared Gas analysers

With either multiple stationary filters or a rotating filter wheel. In either case the components, sensors, and physical structures are subject to significant variation.

Various forms of Q()

  • Weighted least squares of Q() over the range of x
  • Minimize the mode of \hat{e} with respect to the range of e and the measurements \bar{y}=Y(\bar{x};p,e)
  • Minimize the mean of \hat{e} with respect to the range of e and the measurements \bar{y}=Y(\bar{x};p,e)
  • Minimize the worst case of Q() over the range of x
  • Some weighting of the error interval with respect to Q()
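
A small sketch comparing two of the Q() forms above, a weighted least-squares Q and a worst-case Q, over a grid of x. The reading model, correction process, and weights are illustrative assumptions reused from the earlier affine sketch.

# Evaluate two quality measures Q(x, x_hat) over a range of x.
import numpy as np

def Y(x, e_gain=0.004, e_off=0.02):           # assumed reading model y = Y(x; p, e)
    return (1.0 + e_gain) * x + e_off

def X_hat(y, e_gain_hat=0.0, e_off_hat=0.03): # an (imperfect) correction process
    return (y - e_off_hat) / (1.0 + e_gain_hat)

x = np.linspace(0.0, 100.0, 201)              # range of x to evaluate over
w = 1.0 / (1.0 + x)                           # assumed weights favouring small x
err = x - X_hat(Y(x))

Q_wls = np.sum(w * err ** 2) / np.sum(w)      # weighted least-squares form
Q_worst = np.max(np.abs(err))                 # worst case over the range of x
print(Q_wls, Q_worst)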


Areas of optimization

Design

Runtime

Calibration usage
