Complementarity problem
From Wikimization
Revision as of 17:50, 16 August 2009
Sándor Zoltán Németh
== Fixed point problems ==

Let <math>\mathcal A</math> be a set and <math>f:\mathcal A\to\mathcal A</math> a mapping. The '''fixed point problem''' defined by <math>f\,</math> is the problem

<center><math>Fix(f):\qquad\textrm{find}\;\,x\in\mathcal A\;\,\textrm{such that}\;\,x=f(x).</math></center>
== Nonlinear complementarity problems ==

Let <math>\mathcal K</math> be a closed convex cone in the Hilbert space <math>(\mathbb H,\langle\cdot,\cdot\rangle)</math> and <math>F:\mathbb H\to\mathbb H</math> a mapping. Recall that the '''dual cone''' of <math>\mathcal K</math> is the closed convex cone <math>\mathcal K^*=-\mathcal K^\circ,</math> where

<center><math>\mathcal K^\circ=\{y\in\mathbb H\mid\langle x,y\rangle\leq0,\;\forall x\in\mathcal K\}</math></center>

is the '''polar''' of <math>\mathcal K.</math> The '''nonlinear complementarity problem''' defined by <math>\mathcal K</math> and <math>F\,</math> is the problem

<center><math>NCP(F,\mathcal K):\qquad\textrm{find}\;\,x\in\mathcal K\;\,\textrm{such that}\;\,F(x)\in\mathcal K^*\;\,\textrm{and}\;\,\langle x,F(x)\rangle=0.</math></center>
== Every nonlinear complementarity problem is equivalent to a fixed point problem ==

Let <math>\mathcal K</math> be a closed convex cone in the Hilbert space <math>(\mathbb H,\langle\cdot,\cdot\rangle)</math> and <math>F:\mathbb H\to\mathbb H</math> a mapping. Then, the nonlinear complementarity problem <math>NCP(F,\mathcal K)</math> is equivalent to the fixed point problem <math>Fix(P_{\mathcal K}\circ(I-F)),</math> where <math>I:\mathbb H\to\mathbb H</math> is the identity mapping defined by <math>I(x)=x\,</math> and <math>P_{\mathcal K}</math> is the projection onto <math>\mathcal K.</math>
=== Proof ===

For all <math>x\in\mathbb H</math> denote <math>z=x-F(x)\,</math> and <math>y=-F(x).\,</math> Then, <math>z=x+y.\,</math>

Suppose that <math>x\,</math> is a solution of <math>NCP(F,\mathcal K).</math> Then, <math>z=x+y\,</math> with <math>x\in\mathcal K,</math> <math>y\in\mathcal K^\circ</math> and <math>\langle x,y\rangle=0.</math> Hence, by using Moreau's theorem, we get <math>x=P_{\mathcal K}(z)=P_{\mathcal K}(x-F(x)).</math> Therefore, <math>x\,</math> is a solution of <math>Fix(P_{\mathcal K}\circ(I-F)).</math>

Conversely, suppose that <math>x\,</math> is a solution of <math>Fix(P_{\mathcal K}\circ(I-F)).</math> Then, <math>x=P_{\mathcal K}(z)\in\mathcal K,</math> and by using Moreau's theorem, <math>P_{\mathcal K^\circ}(z)=z-P_{\mathcal K}(z)=z-x=y.</math> Hence, <math>y\in\mathcal K^\circ</math>. Thus, <math>F(x)=-y\in\mathcal K^*</math>. Moreau's theorem also implies that <math>\langle x,y\rangle=\langle P_{\mathcal K}(z),P_{\mathcal K^\circ}(z)\rangle=0.</math> In conclusion, <math>x\in\mathcal K,</math> <math>F(x)\in\mathcal K^*</math> and <math>\langle x,F(x)\rangle=0.</math> Therefore, <math>x\,</math> is a solution of <math>NCP(F,\mathcal K).</math>
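Both Moreau's theorem and the fixed-point characterization above can be checked numerically when <math>\mathcal K</math> is the nonnegative orthant <math>\mathbb R^n_+,</math> for which <math>P_{\mathcal K}</math> is the componentwise positive part. In the sketch below, the affine map <math>F(x)=x-a</math> is an illustrative choice (not from the article); for it the solution of the complementarity problem can be written down by hand.

```python
def proj_K(z):
    # Projection onto the nonnegative orthant K = R^n_+: componentwise positive part
    return [max(zi, 0.0) for zi in z]

def proj_K_polar(z):
    # Projection onto the polar cone K° = R^n_-: componentwise negative part
    return [min(zi, 0.0) for zi in z]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Moreau's theorem: z = P_K(z) + P_K°(z) with <P_K(z), P_K°(z)> = 0.
z = [1.5, -2.0, 0.3, -0.7]
assert all(abs(zi - (p + q)) < 1e-12
           for zi, p, q in zip(z, proj_K(z), proj_K_polar(z)))
assert abs(dot(proj_K(z), proj_K_polar(z))) < 1e-12

# NCP(F, K) for the illustrative affine map F(x) = x - a has solution
# x = max(a, 0), which must be a fixed point of P_K ∘ (I - F).
a = [1.0, -1.0, 2.0, -0.5]
F = lambda x: [xi - ai for xi, ai in zip(x, a)]
x = proj_K(a)                                             # candidate solution
assert x == proj_K([xi - Fi for xi, Fi in zip(x, F(x))])  # x = P_K(x - F(x))
assert all(xi >= 0 for xi in x)                           # x in K
assert all(Fi >= 0 for Fi in F(x))                        # F(x) in K* = K
assert abs(dot(x, F(x))) < 1e-12                          # <x, F(x)> = 0
```

For the orthant, <math>\mathcal K^*=\mathcal K</math> and <math>\mathcal K^\circ=-\mathcal K,</math> which is why the two projections are the positive and negative parts.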
=== An alternative proof without Moreau's theorem ===

An alternative proof, which avoids Moreau's theorem, proceeds through variational inequalities; it is developed in the next sections and completed in ''Concluding the alternative proof''.
== Variational inequalities ==

Let <math>\mathcal C</math> be a closed convex set in the Hilbert space <math>(\mathbb H,\langle\cdot,\cdot\rangle)</math> and <math>F:\mathbb H\to\mathbb H</math> a mapping. The '''variational inequality''' defined by <math>\mathcal C</math> and <math>F\,</math> is the problem

<center><math>VI(F,\mathcal C):\qquad\textrm{find}\;\,x\in\mathcal C\;\,\textrm{such that}\;\,\langle y-x,F(x)\rangle\geq0,\;\,\forall y\in\mathcal C.</math></center>
== Every variational inequality is equivalent to a fixed point problem ==

Let <math>\mathcal C</math> be a closed convex set in the Hilbert space <math>(\mathbb H,\langle\cdot,\cdot\rangle)</math> and <math>F:\mathbb H\to\mathbb H</math> a mapping. Then the variational inequality <math>VI(F,\mathcal C)</math> is equivalent to the fixed point problem <math>Fix(P_{\mathcal C}\circ(I-F)).</math>
=== Proof ===

<math>x\,</math> is a solution of <math>Fix(P_{\mathcal C}\circ(I-F))</math> if and only if <math>x=P_{\mathcal C}(x-F(x)).</math> By using the characterization of the projection, the latter equation is equivalent to

<center><math>\langle x-F(x)-x,y-x\rangle\leq0,</math></center>

for all <math>y\in\mathcal C,</math> that is, to <math>\langle y-x,F(x)\rangle\geq0</math> for all <math>y\in\mathcal C.</math> But this holds if and only if <math>x\,</math> is a solution of <math>VI(F,\mathcal C).</math>
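This equivalence has a computational use: solving <math>VI(F,\mathcal C)</math> amounts to finding a fixed point of <math>P_{\mathcal C}\circ(I-F),</math> which invites the Picard iteration <math>x\leftarrow P_{\mathcal C}(x-F(x))</math> whenever that map is contractive. The sketch below uses the box <math>\mathcal C=[0,1]^2</math> and the illustrative choice <math>F(x)=x-a</math> (an assumption for the example), for which <math>x-F(x)</math> is constant and the iteration reaches the solution immediately.

```python
def proj_box(x, lo=0.0, hi=1.0):
    # Projection onto the box C = [lo, hi]^n, a closed convex set
    return [min(max(xi, lo), hi) for xi in x]

# Illustrative choice: F(x) = x - a. Then x - F(x) = a is constant, so the
# Picard iteration x <- P_C(x - F(x)) = P_C(a) hits the VI solution at once.
a = [1.7, -0.4]
F = lambda x: [xi - ai for xi, ai in zip(x, a)]

x = [0.5, 0.5]                      # arbitrary starting point in C
for _ in range(5):
    x = proj_box([xi - Fi for xi, Fi in zip(x, F(x))])

# x solves VI(F, C): <y - x, F(x)> >= 0 for all y in C.  Checking the
# corners suffices here, since y -> <y - x, F(x)> is affine and C is a box.
for y in [[0, 0], [0, 1], [1, 0], [1, 1]]:
    assert sum((yi - xi) * Fi for yi, xi, Fi in zip(y, x, F(x))) >= -1e-12
```

For general monotone <math>F\,</math> the plain iteration need not converge; it is only a sketch of how the fixed-point formulation is used.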
=== Remark ===

The next section shows that the equivalence of variational inequalities and fixed point problems is much stronger than the equivalence of nonlinear complementarity problems and fixed point problems, because each nonlinear complementarity problem is a variational inequality defined on a closed convex cone.
== Every variational inequality defined on a closed convex cone is equivalent to a complementarity problem ==

Let <math>\mathcal K</math> be a closed convex cone in the Hilbert space <math>(\mathbb H,\langle\cdot,\cdot\rangle)</math> and <math>F:\mathbb H\to\mathbb H</math> a mapping. Then, the nonlinear complementarity problem <math>NCP(F,\mathcal K)</math> is equivalent to the variational inequality <math>VI(F,\mathcal K).</math>
=== Proof ===

Suppose that <math>x\,</math> is a solution of <math>NCP(F,\mathcal K).</math> Then, <math>x\in\mathcal K,</math> <math>F(x)\in\mathcal K^*</math> and <math>\langle x,F(x)\rangle=0.</math> Hence,

<center><math>\langle y-x,F(x)\rangle=\langle y,F(x)\rangle-\langle x,F(x)\rangle=\langle y,F(x)\rangle\geq0,</math></center>

for all <math>y\in\mathcal K.</math> Therefore, <math>x\,</math> is a solution of <math>VI(F,\mathcal K).</math>

Conversely, suppose that <math>x\,</math> is a solution of <math>VI(F,\mathcal K).</math> Then, <math>x\in\mathcal K</math> and <math>\langle y-x,F(x)\rangle\geq0,</math> for all <math>y\in\mathcal K.</math> Particularly, taking <math>y=0\,</math> and <math>y=2x\,</math>, respectively, we get <math>\langle x,F(x)\rangle\leq0</math> and <math>\langle x,F(x)\rangle\geq0.</math> Thus, <math>\langle x,F(x)\rangle=0.</math> Hence, <math>\langle y,F(x)\rangle\geq0,</math> for all <math>y\in\mathcal K,</math> or equivalently <math>F(x)\in\mathcal K^*.</math> In conclusion, <math>x\in\mathcal K,</math> <math>F(x)\in\mathcal K^*</math> and <math>\langle x,F(x)\rangle=0.</math> Therefore, <math>x\,</math> is a solution of <math>NCP(F,\mathcal K).</math>
=== Concluding the alternative proof ===

Since <math>\mathcal K</math> is a closed convex cone, the nonlinear complementarity problem <math>NCP(F,\mathcal K)</math> is equivalent to the variational inequality <math>VI(F,\mathcal K),</math> which is equivalent to the fixed point problem <math>Fix(P_{\mathcal K}\circ(I-F)).</math>
== Implicit complementarity problems ==

Let <math>\mathcal K</math> be a closed convex cone in the Hilbert space <math>(\mathbb H,\langle\cdot,\cdot\rangle)</math> and <math>F,G:\mathbb H\to\mathbb H</math> two mappings. Recall that the '''dual cone''' of <math>\mathcal K</math> is the closed convex cone <math>\mathcal K^*=-\mathcal K^\circ,</math> where

<center><math>\mathcal K^\circ=\{y\in\mathbb H\mid\langle x,y\rangle\leq0,\;\forall x\in\mathcal K\}</math></center>

is the '''polar''' of <math>\mathcal K.</math> The '''implicit complementarity problem''' defined by <math>\mathcal K</math> and the ordered pair of mappings <math>(F,G)\,</math> is the problem

<center><math>ICP(F,G,\mathcal K):\qquad\textrm{find}\;\,u\in\mathbb H\;\,\textrm{such that}\;\,G(u)\in\mathcal K,\;\,F(u)\in\mathcal K^*\;\,\textrm{and}\;\,\langle G(u),F(u)\rangle=0.</math></center>
== Every implicit complementarity problem is equivalent to a fixed point problem ==

Let <math>\mathcal K</math> be a closed convex cone in the Hilbert space <math>(\mathbb H,\langle\cdot,\cdot\rangle)</math> and <math>F,G:\mathbb H\to\mathbb H</math> two mappings. Then, the implicit complementarity problem <math>ICP(F,G,\mathcal K)</math> is equivalent to the fixed point problem <math>Fix(I-G+P_{\mathcal K}\circ(G-F)),</math> where <math>I:\mathbb H\to\mathbb H</math> is the identity mapping defined by <math>I(x)=x.\,</math>
=== Proof ===

For all <math>u\in\mathbb H</math> denote <math>z=G(u)-F(u),\,</math> <math>x=G(u)\,</math> and <math>y=-F(u).\,</math> Then, <math>z=x+y.\,</math>

Suppose that <math>u\,</math> is a solution of <math>ICP(F,G,\mathcal K).</math> Then, <math>z=x+y\,</math> with <math>x\in\mathcal K,</math> <math>y\in\mathcal K^\circ</math> and <math>\langle x,y\rangle=0.</math> Hence, by using Moreau's theorem, we get <math>x=P_{\mathcal K}(z),</math> that is, <math>G(u)=P_{\mathcal K}(G(u)-F(u)).</math> Therefore, <math>u=u-G(u)+P_{\mathcal K}(G(u)-F(u)),</math> so <math>u\,</math> is a solution of <math>Fix(I-G+P_{\mathcal K}\circ(G-F)).</math>

Conversely, suppose that <math>u\,</math> is a solution of <math>Fix(I-G+P_{\mathcal K}\circ(G-F)).</math> Then, <math>G(u)=P_{\mathcal K}(G(u)-F(u))=P_{\mathcal K}(z),</math> and by using Moreau's theorem, <math>P_{\mathcal K^\circ}(z)=z-P_{\mathcal K}(z)=z-x=y.</math> Hence, <math>y\in\mathcal K^\circ</math>. Thus, <math>x=P_{\mathcal K}(z)\in\mathcal K</math>. Moreau's theorem also implies that <math>\langle x,y\rangle=\langle P_{\mathcal K}(z),P_{\mathcal K^\circ}(z)\rangle=0.</math> In conclusion, <math>G(u)=x\in\mathcal K,</math> <math>F(u)=-y\in\mathcal K^*</math> and <math>\langle G(u),F(u)\rangle=0.</math> Therefore, <math>u\,</math> is a solution of <math>ICP(F,G,\mathcal K).</math>
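The fixed-point map <math>T=I-G+P_{\mathcal K}\circ(G-F)</math> can likewise be checked numerically. The mappings <math>G(u)=2u\,</math> and <math>F(u)=u-a\,</math> below are illustrative choices on <math>\mathbb H=\mathbb R^2</math> with <math>\mathcal K=\mathbb R^2_+</math> (not from the article); for them <math>u=P_{\mathcal K}(a)</math> solves the implicit complementarity problem by direct inspection.

```python
def proj_K(z):
    # Projection onto K = R^n_+ (componentwise positive part)
    return [max(zi, 0.0) for zi in z]

# Illustrative mappings on H = R^2 with K = R^2_+:  G(u) = 2u, F(u) = u - a.
a = [1.0, -1.0]
G = lambda u: [2 * ui for ui in u]
F = lambda u: [ui - ai for ui, ai in zip(u, a)]

def T(u):
    # The fixed-point map T = I - G + P_K ∘ (G - F)
    Gu, Fu = G(u), F(u)
    Pu = proj_K([gi - fi for gi, fi in zip(Gu, Fu)])
    return [ui - gi + pi for ui, gi, pi in zip(u, Gu, Pu)]

# For this choice, u = max(a, 0) solves ICP(F, G, K); it must be a fixed point.
u = proj_K(a)
assert T(u) == u                     # u = T(u)
Gu, Fu = G(u), F(u)
assert all(gi >= 0 for gi in Gu)     # G(u) in K
assert all(fi >= 0 for fi in Fu)     # F(u) in K* = K
assert abs(sum(gi * fi for gi, fi in zip(Gu, Fu))) < 1e-12  # <G(u), F(u)> = 0
```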
=== Remark ===

In particular, if <math>G=I,</math> we obtain the result [[Complementarity_problem#Every_nonlinear_complementarity_problem_is_equivalent_to_a_fixed_point_problem | ''Every nonlinear complementarity problem is equivalent to a fixed point problem'']], but the more general result ''Every implicit complementarity problem is equivalent to a fixed point problem'' has no known connection with variational inequalities. Therefore, using Moreau's theorem is essential for proving the latter result.
== Nonlinear optimization problems ==

Let <math>\mathcal C</math> be a closed convex set in the Hilbert space <math>(\mathbb H,\langle\cdot,\cdot\rangle)</math> and <math>f:\mathbb H\to\mathbb R</math> a function. The '''nonlinear optimization problem''' defined by <math>\mathcal C</math> and <math>f\,</math> is the problem

<center><math>NOPT(f,\mathcal C):\qquad\textrm{find}\;\,x\in\mathcal C\;\,\textrm{such that}\;\,f(x)\leq f(y),\;\,\forall y\in\mathcal C.</math></center>
== Any solution of a nonlinear optimization problem is a solution of a variational inequality ==

Let <math>\mathcal C</math> be a closed convex set in the Hilbert space <math>(\mathbb H,\langle\cdot,\cdot\rangle)</math> and <math>f:\mathbb H\to\mathbb R</math> a differentiable function. Then, any solution of the [[Complementarity_problem#Nonlinear_optimization_problems | nonlinear optimization problem]] <math>NOPT(f,\mathcal C)</math> is a solution of the [[Complementarity_problem#Variational_inequalities | variational inequality]] <math>VI(F,\mathcal C),</math> where <math>F=\nabla f</math> is the gradient of <math>f.\,</math>
=== Proof ===

Let <math>x\in\mathcal C</math> be a solution of <math>NOPT(f,\mathcal C)</math> and <math>y\in\mathcal C</math> an arbitrary point. Then, by the convexity of <math>\mathcal C,</math> we have <math>x+t(y-x)\in\mathcal C,</math> for all <math>t\in[0,1].</math> Hence, <math>f(x+t(y-x))-f(x)\geq0,</math> for all <math>t\in(0,1],</math> and therefore

<center><math>\langle y-x,\nabla f(x)\rangle=\lim_{t\searrow0}\frac{f(x+t(y-x))-f(x)}{t}\geq0.</math></center>

Therefore, <math>x\,</math> is a solution of <math>VI(F,\mathcal C).</math>
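A concrete instance makes the theorem tangible. The function <math>f\,</math> and the box <math>\mathcal C=[0,1]^2</math> below are assumptions for the example, not from the article; the minimizer is found by inspection, and the variational inequality is verified at the corners of the box, which suffices because <math>y\mapsto\langle y-x,\nabla f(x)\rangle</math> is affine.

```python
import math

# Illustrative NOPT on the box C = [0,1]^2:
#   f(x) = exp(x1) + (x2 - 2)^2.
# Its minimizer over C is x* = (0, 1): exp is increasing in x1, and
# (x2 - 2)^2 is decreasing for x2 in [0, 1].
x_star = (0.0, 1.0)
grad_f = lambda x: (math.exp(x[0]), 2 * (x[1] - 2))

# The theorem asserts <y - x*, grad f(x*)> >= 0 for all y in C.
# Since this expression is affine in y, checking the box corners suffices.
g = grad_f(x_star)
for y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert sum((yi - xi) * gi for yi, xi, gi in zip(y, x_star, g)) >= -1e-12
```

Note that <math>x^*\,</math> sits on the boundary of <math>\mathcal C,</math> so <math>\nabla f(x^*)\neq0</math>; the variational inequality is exactly the right first-order condition there.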
== A convex optimization problem is equivalent to a variational inequality ==

Let <math>\mathcal C</math> be a closed convex set in the Hilbert space <math>(\mathbb H,\langle\cdot,\cdot\rangle)</math> and <math>f:\mathbb H\to\mathbb R</math> a differentiable convex function. Then, the [[Complementarity_problem#Nonlinear_optimization_problems | nonlinear optimization problem]] <math>NOPT(f,\mathcal C)</math> is equivalent to the [[Complementarity_problem#Variational_inequalities | variational inequality]] <math>VI(F,\mathcal C),</math> where <math>F=\nabla f</math> is the gradient of <math>f.\,</math>
=== Proof ===

Any solution of <math>NOPT(f,\mathcal C)</math> [[Complementarity_problem#Any_solution_of_a_nonlinear_optimization_problem_is_a_solution_of_a_variational_inequality | is a solution of]] <math>VI(F,\mathcal C).</math>

Conversely, suppose that <math>x\,</math> is a solution of <math>VI(F,\mathcal C).</math> Hence, by using the convexity of <math>f,\,</math> we have

<center><math>f(y)-f(x)\geq\langle y-x,\nabla f(x)\rangle\geq0,</math></center>

for all <math>y\in\mathcal C.</math> Therefore, <math>x\,</math> is a solution of <math>NOPT(f,\mathcal C).</math>
== Any solution of a nonlinear optimization problem on a closed convex cone is a solution of a nonlinear complementarity problem ==

Let <math>\mathcal K</math> be a closed convex cone in the Hilbert space <math>(\mathbb H,\langle\cdot,\cdot\rangle)</math> and <math>f:\mathbb H\to\mathbb R</math> a differentiable function. Then any solution of the [[Complementarity_problem#Nonlinear_optimization_problems | nonlinear optimization problem]] <math>NOPT(f,\mathcal K)</math> is a solution of the [[Complementarity_problem#Nonlinear_complementarity_problems | nonlinear complementarity problem]] <math>NCP(F,\mathcal K),</math> where <math>F=\nabla f</math> is the gradient of <math>f.\,</math>
=== Proof ===

Any solution of <math>NOPT(f,\mathcal K)</math> [[Complementarity_problem#Any_solution_of_a_nonlinear_optimization_problem_is_a_solution_of_a_variational_inequality | is a solution of]] <math>VI(F,\mathcal K),</math> which [[Complementarity_problem#Every_variational_inequality_defined_on_a_closed_convex_cone_is_equivalent_to_a_complementarity_problem | is equivalent to]] <math>NCP(F,\mathcal K).</math>
== A convex optimization problem on a closed convex cone is equivalent to a nonlinear complementarity problem ==

Let <math>\mathcal K</math> be a closed convex cone in the Hilbert space <math>(\mathbb H,\langle\cdot,\cdot\rangle)</math> and <math>f:\mathbb H\to\mathbb R</math> a differentiable convex function. Then, the [[Complementarity_problem#Nonlinear_optimization_problems | nonlinear optimization problem]] <math>NOPT(f,\mathcal K)</math> is equivalent to the [[Complementarity_problem#Nonlinear_complementarity_problems | nonlinear complementarity problem]] <math>NCP(F,\mathcal K),</math> where <math>F=\nabla f</math> is the gradient of <math>f.\,</math>
=== Proof ===

<math>NOPT(f,\mathcal K)</math> [[Complementarity_problem#A_convex_optimization_problem_is_equivalent_to_a_variational_inequality | is equivalent to]] <math>VI(F,\mathcal K),</math> which [[Complementarity_problem#Every_variational_inequality_defined_on_a_closed_convex_cone_is_equivalent_to_a_complementarity_problem | is equivalent to]] <math>NCP(F,\mathcal K).</math>
== Fat nonlinear programming problem ==

Let <math>f:\mathbb R^n\to\mathbb R</math> be a function, <math>b\in\mathbb R^m,</math> and <math>A\in\mathbb R^{m\times n}</math> a matrix of full rank <math>m,\,</math> where <math>m\leq n.</math> Then, the problem

<center><math>NP(f,A,b):\qquad\textrm{find}\;\,x\in\mathbb R^n\;\,\textrm{such that}\;\,Ax\geq b\;\,\textrm{and}\;\,f(x)\leq f(y),\;\,\forall y\in\mathbb R^n\;\,\textrm{with}\;\,Ay\geq b</math></center>

is called a '''fat nonlinear programming problem''' (''fat'' because <math>A\,</math> has at least as many columns as rows).
== Any solution of a fat nonlinear programming problem is a solution of a nonlinear complementarity problem defined by a polyhedral cone ==

Let <math>f:\mathbb R^n\to\mathbb R</math> be a differentiable function, <math>b\in\mathbb R^m,</math> and <math>A\in\mathbb R^{m\times n}</math> a matrix of full rank <math>m,\,</math> where <math>m\leq n.</math> If <math>x\,</math> is a solution of the fat nonlinear programming problem <math>NP(f,A,b),\,</math> then <math>x-x_0\,</math> is a solution of the nonlinear complementarity problem <math>NCP(G,\mathcal K),</math> where <math>x_0\,</math> is a particular solution of the linear system of equations <math>Ax=b,\,</math>

<center><math>\mathcal K=\{x\in\mathbb R^n\mid Ax\geq0\}</math></center>

is the polyhedral cone defined by <math>A,\,</math> and <math>G:\mathbb R^n\to\mathbb R^n</math> is defined by

<center><math>G(x)=\nabla f(x+x_0).</math></center>
=== Proof ===

Let <math>x\in\mathbb R^n</math> be a solution of <math>NP(f,A,b).\,</math> Then, it is easy to see that <math>x-x_0\,</math> is a solution of <math>NP(g,A,0),\,</math> where <math>g:\mathbb R^n\to\mathbb R</math> is defined by <math>g(x)=f(x+x_0).\,</math> Since the feasible set of <math>NP(g,A,0)\,</math> is exactly <math>\mathcal K,</math> this problem is <math>NOPT(g,\mathcal K).</math> [[Complementarity_problem#Any_solution_of_a_nonlinear_optimization_problem_on_a_closed_convex_cone_is_a_solution_of_a_nonlinear_complementarity_problem | It follows that]] <math>x-x_0\,</math> is a solution of <math>NCP(G,\mathcal K),</math> because <math>G(x)=\nabla f(x+x_0)=\nabla g(x).</math>
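The reduction can be checked on a small instance. Everything below is an illustrative assumption: <math>A=I\,</math> (so <math>m=n=2,</math> the constraint <math>Ax\geq b</math> reads <math>x\geq b,</math> and the polyhedral cone <math>\mathcal K</math> is the nonnegative orthant) and a quadratic <math>f,\,</math> so the solution is computable by hand.

```python
# Illustrative fat NP with A = I (m = n = 2), so K = {x : Ax >= 0} = R^2_+:
#   minimize f(x) = 0.5 * ||x - c||^2   subject to   x >= b.
b = [1.0, 2.0]
c = [3.0, 0.0]

# The minimizer of f over {x >= b} is the componentwise maximum of b and c.
x_sol = [max(bi, ci) for bi, ci in zip(b, c)]

# Translate by the particular solution x0 = b of Ax = b, and form
# G(u) = grad f(u + x0) = u + x0 - c.
x0 = b
u = [xi - x0i for xi, x0i in zip(x_sol, x0)]
Gu = [ui + x0i - ci for ui, x0i, ci in zip(u, x0, c)]

# u = x_sol - x0 solves NCP(G, K): u in K, G(u) in K* = K, <u, G(u)> = 0.
assert all(ui >= 0 for ui in u)
assert all(gi >= 0 for gi in Gu)
assert abs(sum(ui * gi for ui, gi in zip(u, Gu))) < 1e-12
```

Here the complementarity structure is visible componentwise: wherever the constraint is inactive (<math>u_i>0</math>) the gradient component vanishes, and wherever the gradient component is positive the constraint is active.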
=== Remark ===

If <math>f\,</math> is convex, then the converse of the above result also holds. We note that there are also many nonlinear programming problems defined by skinny matrices (i.e., <math>m>n\,</math>) which can be reduced to complementarity problems. Since a very large class of nonlinear programming problems can be reduced to nonlinear complementarity problems, the importance of nonlinear complementarity problems on polyhedral cones is obvious from both a theoretical and a practical point of view.