

15.6 Systems of Equations with More Variables than Equations

Basic questions:

What happens to Gaussian elimination when the determinant is 0, so that there is no inverse and no unique solution?
What can or should you do with systems of equations for which there are more variables than equations?

Gaussian elimination works fine even when the determinant is zero. What happens is that when a row is a linear combination of the others, after you subtract multiples of those rows to make certain of its entries zero, the whole row becomes zero. In terms of eliminating variables from equations: on substituting for x and y, you find that z drops out of the equation as well. All this means, of course, that your original equations were redundant, and you really had more variables than independent equations.

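For instance, consider the system

x + y + z = 3
2x + y - z = 4
3x + 2y = 7

in which the third equation is the sum of the first two. Eliminating x (subtract twice the first equation from the second, and three times the first from the third) leaves the same equation -y - 3z = -2 twice over; subtracting one copy from the other turns the last row into 0 = 0, and z drops out along with everything else.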

If there are more variables than equations, you cannot find a unique solution, because there isn't one. However, you can eliminate some of the variables in terms of the others. In other words, you can start the Gaussian elimination process and continue until you run out of rows. If you have n + m variables and only m independent equations, you can solve for m of the variables in terms of the others. From a matrix point of view, this means turning each column corresponding to a variable to be eliminated into all zeros except for a single 1.

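Here is a minimal sketch of this partial reduction in Python, using sympy's rref (reduced row echelon form) routine; the particular system is an illustration, not one from the text.

from sympy import Matrix

# Augmented matrix of a system of 2 equations in 4 variables:
#    x + 2y + z - w = 4
#   2x +  y - z + w = 2
M = Matrix([[1, 2, 1, -1, 4],
            [2, 1, -1, 1, 2]])

R, pivots = M.rref()   # row reduce until we run out of rows
print(pivots)          # (0, 1): the x and y columns each hold a single 1
print(R)               # [1, 0, -1,  1, 0]
                       # [0, 1,  1, -1, 2]

The reduced rows say x = z - w and y = 2 - z + w: two of the variables have been solved for in terms of the other two.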

The effect of the partial row reduction that we are able to perform is to divide the variables into two groups: those that are solved for in terms of the others, and those that are not. Those that are not solved for then form what is called a basis of the solution space of your system of equations.

The general solution of the system of equations is then: assign arbitrary values to the basis variables, and read off the others from the equations for them. In other words, the basis variables serve as parameters in a parametric representation of the solution space.

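In the sketch above, z and w are the basis variables. Assigning them arbitrary values and reading off x = z - w, y = 2 - z + w gives the general solution

(x, y, z, w) = (0, 2, 0, 0) + z (1, -1, 1, 0) + w (-1, 1, 0, 1),

a two-parameter family of solutions: a plane in four-dimensional space.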

Application: Linear Programming

The term linear programming refers to a kind of optimization problem: one in which you have linear constraints and seek to maximize a linear objective function.

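A typical small instance (we will come back to it below):

maximize 3x + 2y subject to x + y ≤ 4, x + 3y ≤ 6, x ≥ 0, y ≥ 0.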

There is a standard form for such linear programs. We write every inequality constraint in the form v ≥ 0, introducing a slack variable to take up the slack in any inequality constraint that looks different.

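In the instance above, x + y ≤ 4 becomes x + y + s1 = 4 with slack variable s1 ≥ 0, and x + 3y ≤ 6 becomes x + 3y + s2 = 6 with s2 ≥ 0. The constraints are now two equations in the four nonnegative variables x, y, s1, s2.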

Notice that this is exactly the kind of underdetermined system of linear equations we have been discussing.

The standard approach to solving such problems, which works remarkably well even with thousands of variables and constraints, is first to find a set of basis variables such that all the inequalities are obeyed when you set every one of them equal to 0, so that the origin in terms of the basis variables is a feasible point: it obeys all the constraints. Then you try to switch one new variable into the basis, removing another, so that the new basis origin is also feasible and the objective function is bigger at the new basis origin than at the old one. If you cannot do this, your origin is an optimizing solution, if one exists. If you can switch, you do so, and then try to do it again. This method is called the simplex algorithm, and it is the basic tool of the subject called operations research.

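What follows is a minimal sketch of the simplex algorithm in Python with numpy, written for problems of the form: maximize c·x subject to Ax ≤ b and x ≥ 0 with b ≥ 0, so that setting x = 0 gives a feasible starting origin. Note that the code uses "basis" in the standard operations-research sense, the variables currently solved for, which is the complement of the free variables called the basis above. It ignores degeneracy, which can make a naive implementation cycle, so it is an illustration rather than a production solver.

import numpy as np

def simplex(c, A, b):
    # Maximize c @ x subject to A @ x <= b and x >= 0, assuming b >= 0
    # so that x = 0 (all slacks basic) is a feasible starting point.
    m, n = A.shape
    # Tableau [A | I | b] with objective row [-c | 0 | 0] underneath.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c
    basis = list(range(n, n + m))       # the slack variables start out solved for
    while True:
        j = int(np.argmin(T[-1, :-1]))  # entering column: most negative cost
        if T[-1, j] >= 0:
            break                       # no improving switch: this origin is optimal
        col = T[:m, j]
        if np.all(col <= 0):
            raise ValueError("objective is unbounded")
        # Leaving row by the minimum-ratio test: the one choice keeping b >= 0.
        ratios = np.where(col > 0, T[:m, -1] / np.where(col > 0, col, 1.0), np.inf)
        i = int(np.argmin(ratios))
        T[i] /= T[i, j]                 # pivot: make column j all zeros except one 1
        for k in range(m + 1):
            if k != i:
                T[k] -= T[k, j] * T[i]
        basis[i] = j
    x = np.zeros(n + m)
    for row, var in enumerate(basis):
        x[var] = T[row, -1]
    return x[:n], T[-1, -1]

# The example above: maximize 3x + 2y with x + y <= 4, x + 3y <= 6.
x, best = simplex(np.array([3.0, 2.0]),
                  np.array([[1.0, 1.0], [1.0, 3.0]]),
                  np.array([4.0, 6.0]))
print(x, best)   # [4. 0.] 12.0

On the example linear program it reports the optimum x = 4, y = 0 with objective value 12, which you can confirm by checking the corner points of the feasible region.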

There are simple rules for determining which variable can be taken out of the basis (it should have a positive coefficient in the objective function), which variable should replace it (usually only one choice will maintain feasibility), and when you should stop: either all the coefficients in the objective function are negative, or there is a positive one whose variable has the wrong sign in all the equations, in which case the objective is unbounded.
There is also an easy procedure (similar to the one just described) for finding a feasible basis origin, if there is one, or showing that there are no feasible points if there is no feasible basis origin.
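One standard version of that procedure, the artificial-variable device (an illustration of the idea; the text does not spell it out), goes as follows. Take the constraints x + y ≤ 4 and x + y ≥ 6, i.e. x + y + s1 = 4 and x + y - s2 = 6 with every variable ≥ 0; setting x = y = 0 would force s2 = -6 < 0, so the origin is not feasible. Introduce an artificial variable a ≥ 0 into the offending equation, x + y - s2 + a = 6, and run the simplex method on the auxiliary objective: maximize -a. Here a = 6 - (x + y) + s2, and the first constraint forces x + y ≤ 4, so a ≥ 2 at every auxiliary feasible point: the auxiliary optimum is -2 rather than 0, which shows the original program has no feasible points at all. Had the auxiliary optimum been 0, the optimizing point would have served as a feasible basis origin for the original program.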