Euclidean distance matrix completion via semidefinite facial reduction
From Wikimization
== Euclidean distance matrix completions (EDMC), background ==
EDMC is a ''fundamental problem of distance geometry, (FPDG)''.
Let <math>\mathcal S^n</math> be the vector space of symmetric matrices equipped with the trace inner product, <math>\langle S, T \rangle = \operatorname{trace}(ST)</math>; let <math>D \in \mathcal E^n</math>, the set of <math>n \times n</math> Euclidean distance matrices, i.e. <math>D_{ij}=\| p_i-p_j\|^2</math>, for points <math>p_i\in \mathbb{R}^r</math>, <math>i=1,\ldots,n</math>, with embedding dimension <math>r</math>; and let <math>\mathcal S^n_+</math> be the set (convex cone) of (symmetric) positive semidefinite matrices. Defining <math>\mathcal K : \mathcal S^n \rightarrow \mathcal S^n</math> by

<center><math>\mathcal K(Y) := \operatorname{diag}(Y)\,e^{\rm T} + e\,\operatorname{diag}(Y)^{\rm T} - 2Y,</math></center>

we have that <math>\mathcal K(\mathcal S^n_+) = \mathcal E^n</math>. Note that <math>\mathcal K</math> is a one-to-one linear transformation between the ''centered symmetric matrices'' <math>\mathcal S_c</math> and the ''hollow matrices'' <math>\mathcal S_H</math>, where centered means row (and column) sums are all zero, and hollow means that the diagonal is zero.
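The map <math>\mathcal K</math> can be checked numerically. The following NumPy sketch (illustrative, not part of the original article) implements <math>\mathcal K(Y) = \operatorname{diag}(Y)e^{\rm T} + e\operatorname{diag}(Y)^{\rm T} - 2Y</math> and verifies that it sends a Gram matrix <math>PP^{\rm T}</math> to the matrix of squared pairwise distances:

```python
import numpy as np

def K(Y):
    """K(Y) = diag(Y) e^T + e diag(Y)^T - 2Y, mapping Gram matrices to EDMs."""
    d = np.diag(Y).reshape(-1, 1)
    return d + d.T - 2 * Y

# n points in R^r; their Gram matrix PP^T maps to the EDM under K.
rng = np.random.default_rng(0)
n, r = 5, 2
P = rng.standard_normal((n, r))
D = K(P @ P.T)

# D_ij should equal ||p_i - p_j||^2.
D_check = np.array([[np.sum((P[i] - P[j])**2) for j in range(n)]
                    for i in range(n)])
assert np.allclose(D, D_check)
```

Note that <math>D</math> comes out symmetric and hollow, as every Euclidean distance matrix must.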
<p>
A matrix <math>D \in \mathcal S^n</math> is a Euclidean distance matrix with embedding dimension <math>r</math> if and only if there exists <math>P \in \mathbb{R}^{n \times r}</math> such that <math>D = \mathcal K(PP^{\rm T})</math>.
Suppose <math>D</math> is a partial Euclidean distance matrix with embedding dimension <math>r</math>. The ''low-dimensional Euclidean distance matrix completion'' problem is

<center><math>\begin{array}{rl} \text{find} & P \in \mathbb{R}^{n \times r} \\ \text{s.t.} & H \circ \mathcal K(PP^{\rm T}) = H \circ D \\ & P^{\rm T}e = 0, \end{array}</math></center>

where <math>e \in \mathbb{R}^n</math> is the vector of all ones, and <math>H</math> is the adjacency matrix of the graph <math>G = (V,E)</math> associated with the partial Euclidean distance matrix <math>D</math>.
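The feasibility formulation can be illustrated concretely. In the sketch below (an assumption-laden toy example, not from the article), a random centered configuration generates a full EDM, a random symmetric mask <math>H</math> plays the role of the graph's adjacency matrix, and the true configuration is confirmed to satisfy both constraints:

```python
import numpy as np

def K(Y):
    d = np.diag(Y).reshape(-1, 1)
    return d + d.T - 2 * Y

rng = np.random.default_rng(1)
n, r = 6, 2
P = rng.standard_normal((n, r))
P -= P.mean(axis=0)            # center the configuration: P^T e = 0
D = K(P @ P.T)                 # the full EDM

# Adjacency matrix H of the graph of known distances (hypothetical pattern):
H = (rng.random((n, n)) < 0.7).astype(float)
H = np.triu(H, 1)
H = H + H.T                    # symmetric with zero diagonal

# The centered true configuration is feasible for the completion problem:
assert np.allclose(H * K(P @ P.T), H * D)
assert np.allclose(P.T @ np.ones(n), 0)
```

Here the Hadamard product <math>H \circ \cdot</math> simply zeroes out the unknown entries, so only the distances on edges of <math>G</math> are constrained.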
===Properties of <math>\mathcal K</math> [http://orion.math.uwaterloo.ca/~hwolkowi/henry/reports/edmapr04.pdf]===
Let <math>\mathcal D_e(Y) := \operatorname{diag}(Y)\,e^{\rm T} + e\,\operatorname{diag}(Y)^{\rm T}</math>, where diag denotes the diagonal of the argument. Then, alternatively, we can write

<center><math>\mathcal K(Y) = \mathcal D_e(Y) - 2Y.</math></center>
The adjoints of these linear transformations are

<center><math>
\mathcal D_e^*(D) = 2\operatorname{Diag}(De),
\qquad
\mathcal K^*(D) = 2\,(\operatorname{Diag}(De) - D),
</math></center>
where Diag is the diagonal matrix formed from its argument and is the adjoint of diag. The Moore–Penrose generalized inverse is

<center><math>\mathcal K^\dagger(D) = -\frac 12 J \operatorname{offDiag}(D) J,</math></center>

where the matrix <math>J := I - \frac 1n ee^{\rm T}</math> and the linear operator (orthogonal projection) <math>\operatorname{offDiag}(D) := D - \operatorname{Diag}(\operatorname{diag}(D))</math>. The orthogonal projections onto the range of <math>\mathcal K</math> and the range of <math>\mathcal K^*</math> are given by
<center><math>\mathcal K\mathcal K^\dagger (D)=\operatorname{offDiag}(D), \qquad
\mathcal K^\dagger \mathcal K (Y)=JYJ,
</math></center>

respectively. These ranges are the hollow and centered subspaces of <math>\mathcal S^n</math>, respectively. The nullspaces of <math>\mathcal K</math> and <math>\mathcal K^\dagger</math> are the ranges of Diag and <math>\mathcal D_e</math>, respectively.
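These operator identities are easy to test numerically. The following NumPy sketch (illustrative, with all operators implemented from the definitions above) verifies the adjoint relation <math>\langle \mathcal K(Y), D\rangle = \langle Y, \mathcal K^*(D)\rangle</math> and both projection formulas:

```python
import numpy as np

n = 6
rng = np.random.default_rng(2)
e = np.ones(n)
J = np.eye(n) - np.outer(e, e) / n          # J = I - (1/n) e e^T

def K(Y):
    d = np.diag(Y).reshape(-1, 1)
    return d + d.T - 2 * Y

def K_star(D):
    return 2 * (np.diag(D @ e) - D)         # K*(D) = 2(Diag(De) - D)

def offdiag(D):
    return D - np.diag(np.diag(D))

def K_dagger(D):
    return -0.5 * J @ offdiag(D) @ J        # Moore-Penrose inverse of K

# Random symmetric test matrices:
Y = rng.standard_normal((n, n)); Y = (Y + Y.T) / 2
D = rng.standard_normal((n, n)); D = (D + D.T) / 2

# Adjoint identity in the trace inner product:
assert np.isclose(np.trace(K(Y) @ D), np.trace(Y @ K_star(D)))
# Projections onto range(K) (hollow) and range(K*) (centered):
assert np.allclose(K(K_dagger(D)), offdiag(D))
assert np.allclose(K_dagger(K(Y)), J @ Y @ J)
```

The last two assertions confirm that <math>\mathcal K\mathcal K^\dagger</math> strips the diagonal (projection onto the hollow subspace) while <math>\mathcal K^\dagger\mathcal K</math> centers (projection onto the centered subspace).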
== Semidefinite programming relaxation of the low-dimensional Euclidean distance matrix completion problem ==
Using the substitution <math>Y = PP^{\rm T}</math>, and relaxing the (NP-hard) condition that <math>\operatorname{rank}(Y) = r</math>, we obtain the semidefinite programming relaxation

<center><math>\begin{array}{rl} \text{find} & Y \in \mathcal S^n_+ \\ \text{s.t.} & H \circ \mathcal K(Y) = H \circ D \\ & Ye = 0. \end{array}</math></center>
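If the relaxation happens to return a rank-<math>r</math> solution <math>Y</math>, a configuration <math>P</math> is recovered by factoring <math>Y = PP^{\rm T}</math>, e.g. via an eigendecomposition. The sketch below (a toy illustration assuming a rank-<math>r</math> feasible <math>Y</math> is already in hand, not an SDP solver call) shows this rounding step:

```python
import numpy as np

def K(Y):
    d = np.diag(Y).reshape(-1, 1)
    return d + d.T - 2 * Y

rng = np.random.default_rng(3)
n, r = 6, 2
P = rng.standard_normal((n, r))
P -= P.mean(axis=0)
Y = P @ P.T                    # a rank-r feasible point of the SDP relaxation

# Recover a configuration from Y = Q diag(w) Q^T (eigenvalues ascending):
w, Q = np.linalg.eigh(Y)
P_rec = Q[:, -r:] * np.sqrt(np.maximum(w[-r:], 0))

# P_rec reproduces the same EDM (configurations agree up to an orthogonal map):
assert np.allclose(K(P_rec @ P_rec.T), K(Y))
```

In general the SDP returns a feasible <math>Y</math> of higher rank; the facial-reduction machinery below is what drives the rank down toward <math>r</math>.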
== Single Clique Facial Reduction Theorem [http://www.optimization-online.org/DB_HTML/2009/05/2297.html] ==
Let <math>C \subseteq V</math> be a clique in the graph <math>G</math> such that the embedding dimension of <math>D[C]</math> is <math>r</math>. Then there exists <math>U \in \mathbb{R}^{n \times (n-|C|+r)}</math> such that

<center><math> \mathbf{face} \left\{ Y \in \mathcal S^n_+ : \mathcal{K}(Y[C]) = D[C],\; Ye = 0 \right\} = U \mathcal S^{n-|C|+r}_+ U^{\rm T}. </math></center>
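The theorem's hypothesis can be tested on data: the embedding dimension of a clique's distance submatrix <math>D[C]</math> equals the rank of the centered Gram matrix <math>\mathcal K^\dagger(D[C])</math>, whose eigenvectors (together with the all-ones vector) would then be used to build the subspace matrix <math>U</math>. The following NumPy sketch (illustrative, using a hypothetical clique <math>C</math> in a random configuration) checks that rank condition:

```python
import numpy as np

def K_dagger(D):
    """Moore-Penrose inverse of K: recovers a centered Gram matrix from an EDM."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    off = D - np.diag(np.diag(D))
    return -0.5 * J @ off @ J

rng = np.random.default_rng(4)
n, r = 8, 2
P = rng.standard_normal((n, r))
diff = P[:, None, :] - P[None, :, :]
D = np.sum(diff**2, axis=-1)         # full EDM of the points

C = [0, 1, 2, 3]                     # a clique: all pairwise distances known
B = K_dagger(D[np.ix_(C, C)])        # centered Gram matrix of the clique

# Embedding dimension of D[C] equals rank of K^dagger(D[C]):
assert np.linalg.matrix_rank(B, tol=1e-9) == r
```

Intuitively, the clique pins down its members' relative positions, so every feasible <math>Y</math> is confined to the face <math>U \mathcal S^{n-|C|+r}_+ U^{\rm T}</math>, reducing the SDP's dimension by <math>|C| - r</math>.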