Linear matrix inequality


In convex optimization, a linear matrix inequality (LMI) is an expression of the form

$$LMI(y) := A_0 + y_1 A_1 + y_2 A_2 + \ldots + y_m A_m \succeq 0$$

where

  • $y = [y_i\,,~i\!=\!1\ldots m]$ is a real vector,
  • $A_0, A_1, A_2, \ldots, A_m$ are symmetric matrices in the subspace of $n\times n$ symmetric matrices $\mathbb{S}^n$,
  • $B\succeq0$ is a generalized inequality meaning that $B$ is a positive semidefinite matrix belonging to the positive semidefinite cone $\mathbb{S}_+$ in the subspace of symmetric matrices $\mathbb{S}$.

This linear matrix inequality specifies a convex constraint on $y$.
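To see the constraint in action, here is a minimal numerical sketch in Python (NumPy assumed; the matrices A0, A1, A2 are illustrative examples, not from the text) that evaluates $LMI(y)$ and tests positive semidefiniteness via the smallest eigenvalue:

    import numpy as np

    # Illustrative symmetric data matrices (hypothetical example).
    A0 = np.array([[1.0, 0.0], [0.0, 1.0]])
    A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
    A2 = np.array([[0.0, 1.0], [1.0, 0.0]])

    def lmi(y, As=(A0, A1, A2)):
        """Evaluate LMI(y) = A_0 + y_1 A_1 + ... + y_m A_m."""
        return As[0] + sum(yj * Aj for yj, Aj in zip(y, As[1:]))

    def is_psd(B, tol=1e-9):
        """B is PSD iff every eigenvalue of symmetric B is nonnegative."""
        return np.linalg.eigvalsh(B).min() >= -tol

    print(is_psd(lmi([0.5, 0.2])))   # True: this y satisfies the LMI
    print(is_psd(lmi([2.0, 0.0])))   # False: A0 + 2*A1 has eigenvalue -1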


Convexity of the LMI constraint

$LMI(y)\succeq 0$ is a convex constraint on $y$, which means membership of $y$ in a dual (convex) cone, as we now explain (Dattorro, Example 2.13.5.1.1).

Consider a peculiar vertex-description for a convex cone defined over the positive semidefinite cone (instead of the more common nonnegative orthant, $x\succeq0$): for $X\in\mathbb{S}^n$, given $A_j\in\mathbb{S}^n$, $j\!=\!1\ldots m$,

$$\begin{array}{rl}
\mathcal{K} &= \left\{\left[\begin{array}{c}\langle A_1\,,X\rangle\\ \vdots\\ \langle A_m\,,X\rangle\end{array}\right] ~\middle|~ X\succeq0\right\}\subseteq\mathbb{R}^m\\\\
&= \left\{\left[\begin{array}{c}\mathrm{svec}(A_1)^T\\ \vdots\\ \mathrm{svec}(A_m)^T\end{array}\right]\mathrm{svec}\,X ~\middle|~ X\succeq0\right\}\\\\
&:= \{A\,\mathrm{svec}\,X ~|~ X\succeq0\}
\end{array}$$

where

  • $A\in\mathbb{R}^{m\times n(n+1)/2}$,
  • symmetric vectorization $\mathrm{svec}$ is a stacking of columns defined in (Dattorro, ch.2.2.2.1); a sketch follows this list,
  • $A_0 = \mathbf{0}$ is assumed without loss of generality.
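Below is a sketch of $\mathrm{svec}$ and of building the matrix $A$, assuming the $\sqrt2$-scaled convention (off-diagonal entries multiplied by $\sqrt2$ so that $\langle X\,,Y\rangle = \mathrm{svec}(X)^T\mathrm{svec}(Y)$); the data matrices are illustrative:

    import numpy as np

    def svec(X):
        """Stack the lower triangle of symmetric X columnwise,
        scaling off-diagonal entries by sqrt(2)."""
        n = X.shape[0]
        return np.array([X[i, j] if i == j else np.sqrt(2) * X[i, j]
                         for j in range(n) for i in range(j, n)])

    # Rows of A are svec(A_j)^T, so A svec X collects the inner products <A_j, X>.
    A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
    A2 = np.array([[0.0, 1.0], [1.0, 0.0]])
    A = np.vstack([svec(A1), svec(A2)])   # A in R^{m x n(n+1)/2}, here 2 x 3

    X = np.array([[2.0, 0.5], [0.5, 1.0]])
    print(np.allclose(A @ svec(X),
                      [np.trace(A1 @ X), np.trace(A2 @ X)]))   # True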

$\mathcal{K}$ is a convex cone because

$$A\,\mathrm{svec}\,X_{\rm p_1},\; A\,\mathrm{svec}\,X_{\rm p_2}\in\mathcal{K} ~\Rightarrow~ A(\zeta\,\mathrm{svec}\,X_{\rm p_1} + \xi\,\mathrm{svec}\,X_{\rm p_2})\in\mathcal{K} \quad\textrm{for all}~\zeta,\xi\geq0$$

since a nonnegatively weighted sum of positive semidefinite matrices must be positive semidefinite.
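A quick numeric check of this closure under nonnegative combination (random illustrative data):

    import numpy as np

    rng = np.random.default_rng(0)

    def random_psd(n):
        M = rng.standard_normal((n, n))
        return M @ M.T                    # M M^T is always PSD

    Xp1, Xp2 = random_psd(3), random_psd(3)
    zeta, xi = 0.7, 2.5                   # arbitrary nonnegative weights
    combo = zeta * Xp1 + xi * Xp2
    print(np.linalg.eigvalsh(combo).min() >= -1e-9)   # True: still PSD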

Now consider the (closed convex) dual cone:

$$\begin{array}{rl}
\mathcal{K}^* &= \{y ~|~ \langle z\,,y\rangle\geq0~~\textrm{for all}~z\in\mathcal{K}\}\subseteq\mathbb{R}^m\\
&= \{y ~|~ \langle z\,,y\rangle\geq0~~\textrm{for all}~z=A\,\mathrm{svec}\,X\,,~X\succeq0\}\\
&= \{y ~|~ \langle A\,\mathrm{svec}\,X\,,y\rangle\geq0~~\textrm{for all}~X\succeq0\}\\
&= \{y ~|~ \langle \mathrm{svec}\,X\,,A^Ty\rangle\geq0~~\textrm{for all}~X\succeq0\}\\
&= \{y ~|~ \mathrm{svec}^{-1}(A^Ty)\succeq0\}
\end{array}$$

which follows from Fejér's dual generalized inequality for the positive semidefinite cone:

  • $Y\succeq0 ~\Leftrightarrow~ \langle Y\,,X\rangle\geq0~~\textrm{for all}~X\succeq0$

This leads directly to an equally peculiar halfspace-description

$$\mathcal{K}^* = \Big\{y\in\mathbb{R}^m ~\Big|~ \sum_{j=1}^m y_j A_j\succeq0\Big\}$$

The summation inequality with respect to the positive semidefinite cone is known as a linear matrix inequality.
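The halfspace-description can be spot-checked numerically (illustrative matrices): when $\sum_j y_j A_j\succeq0$, the inner product $\langle A\,\mathrm{svec}\,X\,,y\rangle = \langle\sum_j y_j A_j\,,X\rangle$ is nonnegative for every PSD $X$, placing $y$ in $\mathcal{K}^*$:

    import numpy as np

    A1 = np.array([[1.0, 0.0], [0.0, 0.0]])
    A2 = np.array([[0.0, 0.0], [0.0, 1.0]])
    y = np.array([1.0, 2.0])
    S = y[0] * A1 + y[1] * A2            # sum_j y_j A_j, PSD by construction

    rng = np.random.default_rng(1)
    for _ in range(1000):
        M = rng.standard_normal((2, 2))
        X = M @ M.T                       # a random PSD matrix
        assert np.trace(S @ X) >= -1e-9   # Fejer: <S, X> >= 0 for all X >= 0
    print("y is in K*")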

LMI Geometry

Although matrix $A$ is finite-dimensional, $\mathcal{K}$ is generally not a polyhedral cone (unless $m$ equals 1 or 2) simply because $X\in\mathbb{S}_+^n$.

Relative interior of $\mathcal{K}$ may always be expressed $\mathrm{rel\,int}\,\mathcal{K} = \{A\,\mathrm{svec}\,X ~|~ X\succ0\}$.

Provided the $A_j$ matrices are linearly independent, then $\mathrm{rel\,int}\,\mathcal{K} = \mathrm{int}\,\mathcal{K}$; meaning, the interior of cone $\mathcal{K}$ is nonempty, which implies that dual cone $\mathcal{K}^*$ is pointed (Dattorro, ch.2).

If matrix $A$ has no nullspace, then $A\,\mathrm{svec}\,X$ is an isomorphism in $X$ between the positive semidefinite cone $\mathbb{S}_+^n$ and the range $\mathcal{R}(A)$ of matrix $A$.

That is sufficient for convex cone $\mathcal{K}$ to be closed, and necessary for it to have relative boundary $\mathrm{rel}\,\partial\mathcal{K} = \{A\,\mathrm{svec}\,X ~|~ X\succeq0\,,~X\not\succ0\}$.


Relative interior of the dual cone may always be expressed $\mathrm{rel\,int}\,\mathcal{K}^* = \{y\in\mathbb{R}^m ~|~ \sum_{j=1}^m y_j A_j\succ0\}$.

When the $A_j$ matrices are linearly independent, the function $g(y) := \sum y_j A_j$ is an injective linear map, hence a bijection between $\mathbb{R}^m$ and its range.

The inverse image of the positive semidefinite cone under $g(y)$ must therefore have dimension equal to $\dim\!\left(\mathcal{R}(A^T)\cap\mathrm{svec}\,\mathbb{S}_+^n\right)$

and relative boundary $\mathrm{rel\,}\partial\mathcal{K}^* = \{y\in\mathbb{R}^m ~|~ \sum_{j=1}^m y_j A_j\succeq0\,,~\sum_{j=1}^m y_j A_j\not\succ0\}$.

When this dimension equals $m$, the dual cone interior is nonempty, $\mathrm{rel\,int}\,\mathcal{K}^* = \mathrm{int}\,\mathcal{K}^*$, and the closure of convex cone $\mathcal{K}$ is pointed.
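These descriptions suggest a simple numerical classification of a point $y$ relative to $\mathcal{K}^*$ (a sketch with illustrative matrices): $g(y)$ positive definite places $y$ in the interior, positive semidefinite but singular on the relative boundary:

    import numpy as np

    A1 = np.array([[1.0, 0.0], [0.0, 0.0]])
    A2 = np.array([[0.0, 0.0], [0.0, 1.0]])

    def classify(y, tol=1e-9):
        eigs = np.linalg.eigvalsh(y[0] * A1 + y[1] * A2)   # eigenvalues of g(y)
        if eigs.min() > tol:
            return "interior of K*"
        if eigs.min() >= -tol:
            return "relative boundary of K*"
        return "outside K*"

    print(classify([1.0, 1.0]))    # interior of K*
    print(classify([1.0, 0.0]))    # relative boundary of K*
    print(classify([1.0, -1.0]))   # outside K*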

Applications

There are efficient numerical methods (interior-point methods, for example) to determine whether an LMI is feasible (i.e., whether there exists a vector $y$ such that $LMI(y)\succeq0$) or to solve a convex optimization problem having LMI constraints. Many optimization problems in control theory, system identification, and signal processing can be formulated using LMIs. The prototypical primal and dual semidefinite programs minimize a real linear function subject, respectively, to the primal and dual convex cones governing this LMI.
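As a hedged sketch of such a semidefinite program (assuming the CVXPY modeling package is installed; the matrices and objective are illustrative, not from the text), one can minimize a linear function of $y$ subject to the LMI:

    import cvxpy as cp
    import numpy as np

    A0 = np.array([[1.0, 0.0], [0.0, 1.0]])
    A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
    A2 = np.array([[0.0, 1.0], [1.0, 0.0]])

    y = cp.Variable(2)
    lmi = A0 + y[0] * A1 + y[1] * A2     # affine, matrix-valued in y
    prob = cp.Problem(cp.Minimize(y[0] + y[1]), [lmi >> 0])
    prob.solve()
    # For this data the feasible set is the unit disk, so the optimum
    # lands at roughly y = (-1/sqrt(2), -1/sqrt(2)).
    print(prob.status, y.value)

CVXPY hands the problem to whichever installed conic solver supports semidefinite constraints.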
