# Introduction

Presolving conventionally means quick elimination of some variables and constraints prior to numerical solution of an optimization problem. Presented with constraints $LaTeX: a^{\rm T}x=0\,,~x\succeq0$ for example, a good presolver is likely to check whether constant vector $LaTeX: \,a$ is positive; for if it is, then variable $LaTeX: \,x$ can have only the trivial solution $LaTeX: \,x=0$. The overall effect of such tests is to make a problem dimensionally smaller.
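That test can be stated concretely. The following is a minimal NumPy sketch, assuming the constraints $LaTeX: a^{\rm T}x=0\,,~x\succeq0$ just described; the function name and interface are illustrative, not part of any particular solver.

```python
import numpy as np

def trivial_solution_test(a):
    """Presolve test sketched above: given constraints a^T x = 0 and
    x >= 0, if every entry of a is strictly positive (or, symmetrically,
    strictly negative), then x = 0 is the only feasible point and all
    variables in x can be eliminated.  (Illustrative helper only.)
    """
    a = np.asarray(a, dtype=float)
    if np.all(a > 0) or np.all(a < 0):
        return True   # only the trivial solution x = 0 is feasible
    return False      # inconclusive; x may have nonzero feasible entries

# a strictly positive: a weighted sum of nonnegative x equal to 0 forces x = 0
print(trivial_solution_test([1.0, 2.0, 3.0]))   # True
print(trivial_solution_test([1.0, -2.0, 3.0]))  # False
```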

Most commercial optimization problem solvers incorporate presolving. The particular reductions performed can be proprietary or invisible, or some control over their selection may be given to the user. But all presolvers have the same motivation: to make an optimization problem smaller. There is profit in reducing problem size because, then, a solver can compete more effectively in the marketplace for large-scale problems.

We present a method for reducing variable dimension based upon geometry of constraints in the problem statement:

$LaTeX: \begin{array}{rl}\mbox{minimize}_{x\in_{}\mathbb{R}^{^n}}&f(x)\\
\mbox{subject to}&A\,x=b\\
&x\succeq0\\
&x_j\!\in\mathbb{Z}\,,\quad j\in\mathcal{J}\end{array}$

where $LaTeX: \,A$ is a matrix of predetermined dimension, $LaTeX: \,b$ is a vector, $LaTeX: \mathbb{Z}$ represents the integers, $LaTeX: \mathbb{R}$ the real numbers, and $LaTeX: \mathcal{J}$ is some possibly empty index set.
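The introductory test extends naturally to this problem statement: any row of $LaTeX: \,A$ paired with a zero entry of $LaTeX: \,b$, whose nonzero entries all share one sign, forces every variable in that row's support to zero. The sketch below illustrates that single reduction in NumPy; it is an assumption-laden illustration (a real presolver would iterate many such tests), and the function name is hypothetical.

```python
import numpy as np

def zero_row_reduction(A, b):
    """For the problem: minimize f(x) subject to A x = b, x >= 0,
    any row i with b_i = 0 whose nonzero entries are all positive
    (or all negative) forces x_j = 0 for each column j in that row's
    support, shrinking the variable dimension.  Returns the set of
    variable indices fixed to zero.  (Illustrative sketch only.)
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    fixed = set()
    for i in range(A.shape[0]):
        support = np.flatnonzero(A[i])           # columns with nonzero entries
        if b[i] == 0 and support.size > 0:
            entries = A[i, support]
            if np.all(entries > 0) or np.all(entries < 0):
                fixed.update(int(j) for j in support)
    return fixed

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])
b = np.array([0.0, 3.0])
print(zero_row_reduction(A, b))  # row 0 has b=0 and positive entries: {0, 2}
```

Fixing a variable at zero is also compatible with any integrality constraint $LaTeX: x_j\!\in\mathbb{Z}$, since zero is an integer.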