Talk:Auto-zero/Auto-calibration

I have been working on creating a robust design structure for Auto-Zero/Auto-Calibration implementations. I have a lot of moving parts in my head, but I believe I need outside viewpoints and knowledge in order to construct a general approach. If anybody is interested, please respond here. It is a bit more complicated than it would seem on the surface, IMHO. I sometimes think it falls within convex optimization; at other times I think it doesn't. I do have a particular example that illustrates the various problems that can arise. Although the ideas should be applicable to scientific measurements, the applications I have in mind relate to autonomous embedded software and hardware implementations.

Ray

Note on the examples: I think that, due to the physically meaningful restrictions on the problem (R > 0 and errors less than 100%), a conversion process using logs and affine transforms will generate posynomial equations for the objective and constraints. I tried geometric programming before but didn't put the proper (I hope) restrictions in place. Perhaps I gave up too early? The equations might not exactly fit geometric programming, but they might fit convex programming. Ray
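To make the geometric-programming direction concrete, here is a minimal sketch (not tied to the circuit example below; the variables and numbers are placeholders) of a posynomial objective with posynomial constraints over positive variables. The log change of variables described above is exactly what the modeling layer performs internally when the problem is solved as a geometric program.

<pre>
import cvxpy as cp

# Positive variables -- the R > 0 and error < 100% restrictions play this role
# in the circuit problem; x, y, z here are just placeholders.
x = cp.Variable(pos=True)
y = cp.Variable(pos=True)
z = cp.Variable(pos=True)

# Posynomial/monomial constraints and a monomial objective: a valid geometric program.
constraints = [
    4 * x * y * z + 2 * x * z <= 10,   # posynomial <= constant
    x <= 2 * y,                        # monomial <= monomial
    y <= 2 * x,
    z >= 1,
]
problem = cp.Problem(cp.Maximize(x * y * z), constraints)

# gp=True applies the log-variable / log-sum-exp transform and solves the convex form.
problem.solve(gp=True)
print(problem.value, x.value, y.value, z.value)
</pre>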

Trial: Posynomial expressions

LaTeX: e^{\psi_{x}}=\left(1-\frac{v_{off}}{V_{x}}\right),e^{\psi_{c}}=\left(1-\frac{v_{off}}{V_{c}}\right),e^{\psi_{t}}=\left(1-\frac{v_{off}}{V_{t}}\right),e^{\mathcal{V}_{x}}=V_{x},e^{\mathcal{V}_{c}}=V_{c},e^{\mathcal{V}_{t}}=V_{t}

LaTeX: e^{\mathcal{R}_{d}}=R_{x}+e_{com}+R_{b}+e_{b},e^{\mathcal{R}_{x}}=R_{x},e^{\epsilon_{com}}=e_{com},e^{\mathcal{R}_{b}}=R_{b},e^{\epsilon_{b}}=\left(1-\frac{e_{b}}{R_{t}}\right)

LaTeX: e^{\mathcal{R}_{t}}=R_{t}

LaTeX: e^{\mathcal{V}_{ref}}=V_{ref},e^{\psi_{ref}}=\left(1-\frac{v_{off}}{V_{ref}}\right)

Thus the expression for LaTeX: V_{x} is

LaTeX: e^{\mathcal{V}_{x}}e^{\psi_{x}}=e^{\mathcal{V}_{ref}}e^{\psi_{ref}}\cdot\left(e^{\mathcal{R}_{x}}+e^{\epsilon_{com}}\right)\cdot e^{-\mathcal{R}_{d}}

Keeping the new variable LaTeX: e^{\mathcal{R}_{d}} we have the following constraint

LaTeX: e^{\mathcal{R}_{d}}=e^{\mathcal{R}_{x}}+e^{\epsilon_{com}}+e^{\mathcal{R}_{b}}e^{\epsilon_{b}}
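For reference, expanding the definitions above (and using the first definition of LaTeX: e^{\mathcal{R}_{d}}), the expression for LaTeX: V_{x} is just the offset-corrected divider relation

LaTeX: V_{x}-v_{off}=\left(V_{ref}-v_{off}\right)\cdot\frac{R_{x}+e_{com}}{R_{x}+e_{com}+R_{b}+e_{b}}

assuming I have carried the substitutions through correctly.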

The denominator of LaTeX: R_{t} can be expressed as

LaTeX: e^{\mathcal{\eta}_{e}}=V_{ref}+e_{ref}-V_{t}+v_{off}

Note there is a sign change; this is complemented in the denominator.

Note that, due to the circuit physics, LaTeX: V_{ref}>V_{x} for all errors.

The expression for LaTeX: R_{t} is

LaTeX: e^{\mathcal{R}_{t}}=\left(\left(e^{\mathcal{V}_{t}}e^{\psi_{t}}\right)\left(e^{\epsilon_{com}}+e^{\epsilon_{b}}e^{\mathcal{R}_{b}}\right)+e^{\epsilon_{com}}e^{\mathcal{V}_{ref}}e^{\psi_{ref}}\right)e^{-\mathcal{\eta}_{e}}

With the constraint

LaTeX: e^{\mathcal{\eta}_{e}}=e^{\mathcal{V}_{t}}e^{\psi_{t}}+e^{\mathcal{V}_{ref}}e^{\psi_{ref}}

Unfortunately, applying the constraint algebraically leads to some negative terms. Perhaps collecting the positive and negative terms into separate conditions and imposing the sum constraint would avoid this?
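As a sketch of that idea (notation only, no claim that it stays a geometric program): if the algebra yields LaTeX: e^{\mathcal{\eta}_{e}}=A-B with LaTeX: A\, and LaTeX: B\, collecting the positive and negative posynomial terms, the negative terms can be moved across,

LaTeX: e^{\mathcal{\eta}_{e}}+B=A

so that each side is a posynomial. An equality between two posynomials is not a standard geometric-programming constraint, though, so it would presumably still need a signomial or successive-approximation treatment.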

Abstract form of Likelihood case

Setup (obsolete?):

LaTeX: P\, an n-dimensional collection of Gaussian probability distribution functions. This is a condition I want to relax to any convex PDF (and in some sense to a uniform PDF).

LaTeX: p\in P\, ; LaTeX: \bar{p}\, PDF of LaTeX: p\,

General formula: LaTeX: y=f(x;p)\,

Calibration pair LaTeX: [y_{c},x_{c}]\, constraining LaTeX: p\,  : LaTeX: y_{c}=f(x_{c};p)\, and consequently forming a new PDF LaTeX: \bar{p'}\, with LaTeX: p'\, the constrained LaTeX: p\,.

LaTeX: y_{t}=f(x_{t};p')\,

Clearly, with LaTeX: y_{t}\, fixed, LaTeX: x_{t}\, has a PDF: LaTeX: \bar{x}_{t}=\bar{x}_{t}\left(y_{t},y_{c},x_{c},\bar{p}\right)=\bar{x}_{t}\left(y_{t},\bar{p'}\right)\,
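As a hypothetical scalar instance (a placeholder model, not the circuit): take LaTeX: f(x;p)=p\,x\, with LaTeX: p\, scalar, so LaTeX: x_{t}=y_{t}/p'\, for fixed LaTeX: y_{t}\,. The induced PDF then follows by a change of variables, for LaTeX: x\neq0\,:

LaTeX: \bar{x}_{t}(x)=\bar{p'}\!\left(\frac{y_{t}}{x}\right)\frac{\left|y_{t}\right|}{x^{2}}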

Problem 1: mode

maximize: the most likely value of LaTeX: x_{t}\, (the mode of LaTeX: \bar{x}_{t}\,)

(i.e. LaTeX: \frac{dx_{t}}{dp'}=0\, for differentiable functions)

wrt : LaTeX: p'\,

given LaTeX: y_{t},y_{c},x_{c},\bar{x}_{t},y=f(x;p)\, or alternatively LaTeX: y_{t},\bar{x}_{t},y_{t}=f(x_{t};p')\,
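A minimal numeric sketch of Problem 1, assuming the placeholder scalar model LaTeX: f(x;p)=p\,x\, above, a Gaussian prior on LaTeX: p\,, and Gaussian calibration noise (all of these are my assumptions, not part of the original problem): constrain LaTeX: p\, with the calibration pair, form the induced PDF of LaTeX: x_{t}\,, and locate its mode numerically.

<pre>
import numpy as np
from scipy import optimize, stats

# Placeholder scalar model y = f(x; p) = p * x; all numbers are assumptions.
mu0, s0 = 1.0, 0.2           # prior on p: Gaussian with mean mu0, std s0
sn = 0.05                    # assumed calibration measurement noise (std)
x_c, y_c = 2.0, 2.1          # calibration pair [y_c, x_c]
y_t = 3.0                    # test measurement

# Constrain p with the calibration pair (conjugate Gaussian update) -> p'
prec1 = 1.0 / s0**2 + x_c**2 / sn**2
mu1 = (mu0 / s0**2 + x_c * y_c / sn**2) / prec1
s1 = np.sqrt(1.0 / prec1)

# Induced density of x_t = y_t / p' by change of variables (restricted to x > 0)
def neg_log_density(x):
    return -(stats.norm.logpdf(y_t / x, mu1, s1) + np.log(abs(y_t)) - 2.0 * np.log(x))

# Problem 1: the most likely value (mode) of x_t
res = optimize.minimize_scalar(neg_log_density, bounds=(1e-3, 100.0), method="bounded")
print("mode of x_t ~", res.x, "; compare y_t / E[p'] =", y_t / mu1)
</pre>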

Optimization Format

Since the example equations are not obviously deterministic, much less convex, I will backtrack here to the original problem and put it in matrix form. This is possible because the underlying physical problem is a linear electrical circuit. Some work is necessary to disentangle the unknowns, which turn out not to be linear.

The fundamental equation is LaTeX: V=I\cdot R\,. We introduce three new variables to produce coupled equations: LaTeX: \widehat{I_{c}},\;\widehat{k_{1}}=\widehat{I_{c}}\widehat{e_{b}},\;\widehat{k_{2}}=\widehat{I_{c}}\widehat{e_{com}}\,

This hides the nonlinearity.

Consider the calibration example where only the component errors are to be estimated.

Starting from the original circuit:

LaTeX: 
\begin{array}{lcl}
V_{ref}-\widehat{e_{ref}}-\widehat{I_{c}}R_{b}-\widehat{I_{c}}\widehat{e_{b}}-\widehat{I_{c}}\widehat{e_{com}}-\widehat{I_{c}}R_{c} & = & 0\\
V_{c}-\widehat{v_{off}}-\widehat{I_{c}}\widehat{e_{com}}-\widehat{I_{c}}R_{c} & = & 0\end{array}
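Written out, the substitution LaTeX: \widehat{k_{1}}=\widehat{I_{c}}\widehat{e_{b}},\;\widehat{k_{2}}=\widehat{I_{c}}\widehat{e_{com}}\, turns these into

LaTeX: \begin{array}{lcl}
\widehat{e_{ref}}+\widehat{I_{c}}\left(R_{b}+R_{c}\right)+\widehat{k_{1}}+\widehat{k_{2}} & = & V_{ref}\\
\widehat{I_{c}}R_{c}+\widehat{k_{2}}+\widehat{v_{off}} & = & V_{c}\end{array}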

We start to separate constants from variables.

LaTeX: \begin{array}{lcl}
\left[\begin{array}{c}
V_{ref}\\
V_{c}\end{array}\right] & = & \left[\begin{array}{ccccc}
1 & (R_{b}+R_{c}) & 1 & 1 & 0\\
0 & R_{c} & 0 & 1 & 1\end{array}\right]\left[\begin{array}{c}
\widehat{e_{ref}}\\
\widehat{I_{c}}\\
\widehat{k_{1}}\\
\widehat{k_{2}}\\
\widehat{v_{off}}\end{array}\right]\end{array}

LaTeX: \begin{array}{lcl}
\widehat{e_{ref}} & < & V_{ref}\\
\widehat{I_{c}} & > & 0\\
V_{c} & < & V_{ref}\end{array}

Now, once this is solved, we can recover the component errors from:

LaTeX: \begin{array}{lcl}
\widehat{k_{1}} & = & \widehat{I_{c}'}\,\widehat{e_{b}}\\
\widehat{k_{2}} & = & \widehat{I_{c}'}\,\widehat{e_{com}}\end{array}
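A minimal numeric sketch of this last stage, with placeholder component values and readings (none of these numbers come from the original example): the 2-by-5 system is underdetermined, so purely as an illustration the sketch picks the minimum-norm point satisfying the sign constraints, then divides out LaTeX: \widehat{I_{c}}\, to recover the component errors.

<pre>
import cvxpy as cp
import numpy as np

# Placeholder component values and measured voltages (assumptions only).
R_b, R_c = 1000.0, 2000.0
V_ref, V_c = 5.0, 3.2

# Unknown vector u = [e_ref, I_c, k1, k2, v_off], with k1 = I_c*e_b, k2 = I_c*e_com.
A = np.array([[1.0, R_b + R_c, 1.0, 1.0, 0.0],
              [0.0, R_c,       0.0, 1.0, 1.0]])
b = np.array([V_ref, V_c])

u = cp.Variable(5)
constraints = [A @ u == b,
               u[0] <= V_ref,    # e_ref < V_ref (solvers need non-strict inequalities)
               u[1] >= 1e-9]     # I_c > 0
# Two equations in five unknowns: minimum-norm feasible point, illustration only;
# a real calibration would supply additional measurement equations.
prob = cp.Problem(cp.Minimize(cp.sum_squares(u)), constraints)
prob.solve()

e_ref, I_c, k1, k2, v_off = u.value
e_b, e_com = k1 / I_c, k2 / I_c   # undo the substitution that hid the nonlinearity
print("I_c =", I_c, " e_b =", e_b, " e_com =", e_com, " v_off =", v_off)
</pre>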
