

17.1 Determinants

The determinant of a matrix or transformation can be defined in many ways. Here is perhaps the simplest definition:

1. For a diagonal matrix it is the product of the diagonal elements.

2. It is unchanged by adding a multiple of one row to another.

It is readily seen that these two properties are shared by the n-dimensional volume of the n-dimensional parallelepiped determined by the rows (or columns) considered as vectors, when the basis vectors are orthonormal. (They are mutually orthogonal and each of length 1.) Such addition merely tilts the sides of the parallelepiped without changing its thickness.
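As a concrete illustration, here is a minimal Python sketch that computes a determinant using only the two properties above: row additions leave the determinant unchanged, so we can reduce the matrix to triangular form and take the product of the diagonal (continuing the elimination upward would make the matrix diagonal without changing those diagonal entries). The function name and the plain list-of-lists representation are just choices made for the sketch, not anything from the notes.

```python
def det_by_row_reduction(a):
    """Determinant via the two defining properties:
    (1) for a triangular/diagonal matrix it is the product of the diagonal,
    (2) adding a multiple of one row to another leaves it unchanged.
    `a` is a square matrix given as a list of lists of numbers."""
    m = [row[:] for row in a]          # work on a copy
    n = len(m)
    det = 1.0
    for i in range(n):
        # If the pivot is zero, add a lower row with a nonzero entry in this
        # column (property 2: this does not change the determinant).
        if m[i][i] == 0:
            for r in range(i + 1, n):
                if m[r][i] != 0:
                    for c in range(n):
                        m[i][c] += m[r][c]
                    break
            else:
                return 0.0             # the whole column is zero: determinant is 0
        # Clear the entries below the pivot by row additions.
        for r in range(i + 1, n):
            factor = m[r][i] / m[i][i]
            for c in range(n):
                m[r][c] -= factor * m[i][c]
        det *= m[i][i]
    return det

print(det_by_row_reduction([[2, 1], [1, 3]]))   # 5.0
```

(The exact zero test and the lack of pivoting are simplifications; a numerical implementation would swap rows and track the resulting sign changes.)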
The determinant also is linear in each row, and changes sign when any two rows are interchanged.
The fact that it is linear in each row means that you can pick any row and write the determinant as a sum of contributions, each one linear in one element of that row. The coefficient of that element in this expression for the determinant is called the cofactor of the element. Since the determinant is linear in the $i$th row and in the $j$th column, and the $(i, j)$th element itself already supplies that dependence, its cofactor can contain no other contribution from that row or column. It is in fact the $(n-1)$-dimensional determinant of the matrix obtained by omitting that row and column from the original matrix, multiplied by $(-1)^{i+j}$.
We get, expanding along any row $j$:

$$\det A = \sum_{k} a_{jk}\, C_{jk},$$

where $C_{jk}$ denotes the cofactor of $a_{jk}$.
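This expansion translates directly into a recursive computation: expand along the first row, each cofactor being an $(n-1)$-dimensional determinant with an alternating sign. Below is a minimal Python sketch; the function name and the test matrix are chosen only for the example.

```python
def det_by_cofactors(a):
    """Determinant by cofactor expansion along the first row:
    det A = sum over columns k of a[0][k] * (-1)**k * det(minor of (0, k)).
    (With 0-based indices, (-1)**k carries the usual alternating sign.)"""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for k in range(n):
        # The minor: delete row 0 and column k.
        minor = [row[:k] + row[k + 1:] for row in a[1:]]
        total += a[0][k] * (-1) ** k * det_by_cofactors(minor)
    return total

print(det_by_cofactors([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # -3
```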

If $A$ has two identical rows then its determinant is 0. This means that substituting the elements of any other row for the $a_{jk}$ here (replacing $a_{jk}$ by $a_{sk}$ for $s$ different from $j$ in this equation) gives the determinant of a matrix with two identical rows, which is 0: $\sum_k a_{sk}\, C_{jk} = 0$ for $s \ne j$. All this implies that we have an expression for the inverse of the matrix $A$ with elements $a_{jk}$.
We get that $AB = I$ with $B$ given by the matrix whose $(k, j)$th element is the cofactor of the $(j, k)$th element of $A$, divided by the determinant of $A$.
In other words we have:

$$(A^{-1})_{kj} = \frac{C_{jk}}{\det A}.$$
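A sketch of this construction of the inverse in Python follows; the helper `det` repeats the cofactor expansion above, and the names and the test matrix are made up for the example.

```python
def det(a):
    """Cofactor expansion along the first row, as in the sketch above."""
    if len(a) == 1:
        return a[0][0]
    return sum(a[0][k] * (-1) ** k * det([r[:k] + r[k + 1:] for r in a[1:]])
               for k in range(len(a)))

def inverse_by_cofactors(a):
    """(A^{-1})_{kj} = C_{jk} / det A: the transposed matrix of cofactors
    (the adjugate), divided by the determinant."""
    n = len(a)
    d = det(a)
    inv = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for k in range(n):
            # Cofactor of the (j, k)th element: delete row j and column k.
            minor = [row[:k] + row[k + 1:] for i, row in enumerate(a) if i != j]
            cofactor_jk = (-1) ** (j + k) * det(minor)
            inv[k][j] = cofactor_jk / d   # note the transpose: entry (k, j) gets the (j, k) cofactor
    return inv

print(inverse_by_cofactors([[2.0, 1.0], [1.0, 3.0]]))
# [[0.6, -0.2], [-0.2, 0.4]]
```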

Another interesting fact is that if we take any vector $v$ with components $v_k$ and form $\sum_k v_k\, C_{jk}$, we get the determinant of the matrix obtained by replacing the $j$th row of the original matrix by the components of the vector $v$. By identical reasoning, $\sum_k v_k\, C_{kj}$ is the determinant of the matrix obtained by replacing the $j$th column of $A$ by the components of $v$.
But this latter sum divided by $\det A$ is the $j$th component of the result of applying $A^{-1}$ to $v$:

$$(A^{-1}v)_j = \sum_k \frac{C_{kj}}{\det A}\, v_k = \frac{\det A_j(v)}{\det A},$$

where $A_j(v)$ is the matrix $A$ with its $j$th column replaced by $v$.
This statement is called Cramer's Rule.
If we start with equations representable as $Ar = v$, then the solution has the form $r = A^{-1}v$. Cramer's rule tells us that the $j$th component of $r$, the $j$th unknown in our equations, is the determinant of the matrix obtained by replacing the $j$th column of $A$ by $v$, divided by the determinant of $A$ itself.
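Here is a small self-contained Python sketch of this recipe. The system solved and the function name are invented for the example; for large systems elimination is far more efficient, but the code mirrors the statement above exactly.

```python
def cramer_solve(a, v):
    """Solve A r = v by Cramer's rule:
    r_j = det(A with column j replaced by v) / det(A)."""
    def det(m):
        if len(m) == 1:
            return m[0][0]
        return sum(m[0][k] * (-1) ** k * det([r[:k] + r[k + 1:] for r in m[1:]])
                   for k in range(len(m)))
    n = len(a)
    d = det(a)
    r = []
    for j in range(n):
        # Replace column j of A by the vector v.
        a_j = [row[:j] + [v[i]] + row[j + 1:] for i, row in enumerate(a)]
        r.append(det(a_j) / d)
    return r

print(cramer_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))   # [1.0, 3.0]
```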

The fact that the determinant is linear in each row or column separately implies that it can be written as a sum of terms, each term being a product containing one factor from each row and column, multiplied by a constant. Since the determinant of a diagonal matrix is the product of the diagonal elements, this constant is 1 when the factors are all on the diagonal. If we switch two rows, the determinant changes sign, which means that the constant is always 1 or -1: it is -1 when the factors require an odd number of row switches to put them all on the main diagonal, and 1 when this can be done with an even number of switches.
A single term in the determinant can be characterized by the pairs of row-column indices of the various factors in it. Since each row and column index must occur in every term in the determinant, we can describe it by listing in order the row indices paired with columns 1,2,....
Thus in three dimensions there are six terms, which can be described by the row index lists: 123, 231, 312, 213, 132, 321. Lists like this are called permutations of the numbers from 1 to n, here for n = 3.
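This expansion over permutations can be written out directly. The Python sketch below sums over all permutations of the row indices, taking one factor from each row and column and attaching the sign determined by the number of pairwise switches (counted here as inversions, whose parity equals the parity of the number of switches). The names and the test matrix are chosen for the example.

```python
from itertools import permutations

def det_by_permutations(a):
    """Sum over all permutations p of the row indices of
    sign(p) * a[p[0]][0] * a[p[1]][1] * ...  (one factor per row and column).
    Indices are 0-based here; the text's lists 123, 231, ... are the 1-based versions."""
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        # Parity of the permutation = parity of its number of inversions.
        inversions = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        sign = -1 if inversions % 2 else 1
        term = 1
        for col in range(n):
            term *= a[p[col]][col]
        total += sign * term
    return total

print(det_by_permutations([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # -3
```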
We can imagine each permutation as a mapping which takes the indices 1, 2, 3, ... respectively to its list: thus 132 can be imagined as the mapping taking 1 to 1, 2 to 3, and 3 to 2. This provides us with another notation for describing permutations: we can write down the cycles that the numbers go through under the permutation mapping. Under 132, 1 stays fixed and is in a cycle by itself, while 2 goes to 3, which goes back to 2; we describe this by the notation (1)(23). The permutation 231 corresponds to the single cycle (123), while the identity mapping 123 has cycle representation (1)(2)(3).
One nice thing about this notation is that from it you can immediately see whether the corresponding term in the determinant gets a plus or a minus sign. Each cycle of even length (like (23)) causes a sign change, so the sign is positive if there are an even number of even-length cycles in this representation of the index permutation corresponding to the term, and negative if there are an odd number of even-length cycles.
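The sign rule via cycles is also easy to compute. Below is a Python sketch that traces the cycles of a permutation, given as the 1-based row-index list described above, and flips the sign once for every cycle of even length; the function name is just for the example.

```python
def sign_from_cycles(p):
    """Sign of a permutation p given as a tuple like (1, 3, 2) on {1, ..., n}:
    find its cycles and flip the sign once for each cycle of even length."""
    n = len(p)
    seen = [False] * n
    sign = 1
    for start in range(n):
        if seen[start]:
            continue
        length = 0
        i = start
        while not seen[i]:
            seen[i] = True
            i = p[i] - 1        # the permutation maps i+1 to p[i] (1-based, as in the text)
            length += 1
        if length % 2 == 0:     # each even-length cycle contributes a factor of -1
            sign = -sign
    return sign

print(sign_from_cycles((1, 3, 2)))   # -1: cycles (1)(23), one even-length cycle
print(sign_from_cycles((2, 3, 1)))   # +1: one cycle (123) of odd length
```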