




1. In performing the elementary row operations of Gaussian elimination to make
the coefficient matrix into an identity matrix, you must perform the same operations
simultaneously on the right-hand side of the equation, on v in the example.
In doing so, you may write v as I_{3} v, leave v
fixed, and perform the row operations on I_{3} as you do them
on C.
When you are done you will transform I_{3} into some new matrix,
call it D.
r = I_{3} r = D v
while we also have
C r = v
D is therefore the inverse of C:
D = C^{-1}
Gaussian elimination provides a relatively efficient way of constructing the inverse of a matrix.
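The recipe in point 1 can be sketched in code: augment C with the identity, row-reduce the left half to I, and read off D = C^{-1} from the right half. This is a minimal illustration, not the text's worked example; the 2 by 2 matrix and the function name are made up for the demonstration.

```python
# Invert a matrix by Gauss-Jordan elimination: row-reduce C to the identity
# while applying the same row operations to I, which turns I into C^{-1}.
# Assumes C is square and invertible (illustrative sketch, no error handling).

def invert(C):
    n = len(C)
    # Augment C with the identity: [C | I]
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(C)]
    for col in range(n):
        # Partial pivoting: bring up the row with the largest entry in this column
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        # Divide the pivot row so the pivot entry becomes 1
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        # Subtract multiples of the pivot row to clear the column elsewhere
        for r in range(n):
            if r != col:
                factor = A[r][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
    # The right half of the augmented matrix is now C^{-1}
    return [row[n:] for row in A]

C = [[2.0, 1.0], [1.0, 1.0]]
D = invert(C)   # D is [[1.0, -1.0], [-1.0, 2.0]], and C D = I
```

Multiplying C by D confirms the result is the identity, just as the argument with I_{3} above predicts.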
2. Exactly the same results hold with any number of variables and equations.
Gaussian elimination is practical, under most circumstances, for finding the
inverses of matrices involving thousands of equations and variables.
Gaussian elimination is, however, an extremely boring and repetitive procedure,
not well suited to human beings. Fortunately, computers never seem to get bored
with performing elementary row operations, and they can easily be trained to
do Gaussian elimination.
3. Gaussian elimination provides a straightforward way to evaluate the determinant
of a matrix: the product of all the quantities you divide by in the row reduction
is the magnitude of the determinant of the matrix. It is the determinant itself
if there are no rearrangements of rows in the row reduction process; otherwise
each simple interchange of two rows changes the sign.
Thus in the example above, steps 1, 3, and 5 involved division by 2, 5/2, and
1/5, respectively. Their product is 1, which is the determinant of this matrix.
This is among the easiest ways to compute determinants in general.
Please realize that the way you may have learned to evaluate determinants
(multiply along diagonals with appropriate signs) works only for 2 by 2 and
3 by 3 arrays. It is dead wrong for larger determinants.
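The determinant rule in point 3 can be sketched directly: row-reduce, multiply together the quantities you divide by, and flip the sign once for each row interchange. The 3 by 3 test matrix below is made up for illustration (chosen, like the text's example, to have determinant 1).

```python
# Determinant via Gaussian elimination: the product of the pivots divided by,
# with a sign flip for each row interchange (see point 3 above).

def determinant(M):
    A = [row[:] for row in M]   # work on a copy
    n = len(A)
    det = 1.0
    for col in range(n):
        # Choose the row with the largest entry in this column as the pivot row
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        if A[pivot][col] == 0.0:
            return 0.0          # a zero column below the diagonal: singular matrix
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            det = -det          # each simple interchange of two rows changes the sign
        p = A[col][col]
        det *= p                # record the quantity divided by in this step
        A[col] = [x / p for x in A[col]]
        # Clear the entries below the pivot
        for r in range(col + 1, n):
            factor = A[r][col]
            A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
    return det

M = [[2.0, 1.0, 0.0],
     [1.0, 1.0, 1.0],
     [0.0, 1.0, 3.0]]
d = determinant(M)   # d is 1.0
```

Note that this needs only the forward half of the elimination; the determinant is fixed once the matrix is triangular.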
An interesting question is: what is the fastest way to compute determinants,
inverses, and products of matrices? Gaussian elimination takes on the order
of n^{3} operations for an n by n matrix; taking products in the obvious
way takes the same order of time. There are other clever ways of doing these
things which take on the order of n^{5/2} steps for very large n, but
they are not very useful in practice, since they seem to lack numerical stability,
so that small errors can grow horribly; also they do not lend themselves to
parallel computation as well as the standard methods.
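The "obvious way" of taking a matrix product, mentioned above, is three nested loops, which is where the order n^{3} count comes from: n^{2} entries, each costing n multiply-adds. A minimal sketch (the matrices are illustrative):

```python
# The obvious matrix product: for each of the n^2 entries of the result,
# sum n products, giving on the order of n^3 operations in all.

def matmul(A, B):
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):              # n^3 multiply-adds in total
                C[i][j] += A[i][k] * B[k][j]
    return C

P = matmul([[1.0, 2.0], [3.0, 4.0]],
           [[5.0, 6.0], [7.0, 8.0]])       # P is [[19.0, 22.0], [43.0, 50.0]]
```

The clever sub-n^{3} methods alluded to above rearrange this computation to save multiplications at the cost of extra additions and bookkeeping, which is part of why they trouble numerical stability.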