

15.3 Matrix Addition, Multiplication and Multiplication by a Number

1. To multiply a matrix by a number, you multiply every element of it by that number.

Example
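For instance (with entries chosen only for illustration), multiplying a 2 by 2 matrix by 3 multiplies each of its four entries by 3:

\[
3 \begin{pmatrix} 1 & 2 \\ 0 & -4 \end{pmatrix}
= \begin{pmatrix} 3 & 6 \\ 0 & -12 \end{pmatrix}.
\]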

2. You may add two matrices which have the same shape: to do so, add corresponding elements to get the corresponding element of the sum.

Example
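For instance, adding two 2 by 2 matrices entry by entry:

\[
\begin{pmatrix} 1 & 2 \\ 0 & -4 \end{pmatrix}
+ \begin{pmatrix} 5 & -2 \\ 1 & 3 \end{pmatrix}
= \begin{pmatrix} 6 & 0 \\ 1 & -1 \end{pmatrix}.
\]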

3. You may form the product of matrices A and B if the number of columns of A is the same as the number of rows of B. There is an entry in the product AB for each row of A and each column of B: it is the dot product of that row with that column, that is, the sum of the products of their corresponding terms.

Example
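For instance, a 2 by 3 matrix times a 3 by 2 matrix is a 2 by 2 matrix; each entry is the dot product of a row of the first factor with a column of the second:

\[
\begin{pmatrix} 1 & 2 & 0 \\ -1 & 3 & 1 \end{pmatrix}
\begin{pmatrix} 4 & 1 \\ 0 & 2 \\ 5 & -1 \end{pmatrix}
= \begin{pmatrix} 1 \cdot 4 + 2 \cdot 0 + 0 \cdot 5 & 1 \cdot 1 + 2 \cdot 2 + 0 \cdot (-1) \\ (-1) \cdot 4 + 3 \cdot 0 + 1 \cdot 5 & (-1) \cdot 1 + 3 \cdot 2 + 1 \cdot (-1) \end{pmatrix}
= \begin{pmatrix} 4 & 5 \\ 1 & 4 \end{pmatrix}.
\]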

Some definitions:

A matrix having only one row is called a row vector; a matrix having only one column is called a column vector.
The transpose of a matrix A is the matrix, written as A^T, obtained by switching its rows and columns.
If A is n by m then A^T is m by n. (n by m always means n rows and m columns.)
A matrix which is its own transpose is said to be symmetric.
If transposition changes the sign of every element of A (that is, A^T = -A), then A is said to be antisymmetric.
The square matrix having n ones on its diagonal and zeroes elsewhere is called the n dimensional identity matrix, written as I_n or as I when its dimension is obvious. Multiplying I_n on the right by any matrix A that has n rows yields I_n A = A.
The transpose of a column vector is a row vector.
A square matrix whose determinant is zero is said to be singular.
An n dimensional vector space is a space containing n linearly independent vectors but no set of more than n linearly independent vectors. Any set of n linearly independent vectors in such a space forms a basis for it. Any other vector in it is linearly dependent on the vectors in the basis.
The kth basis vector (row or column) is the row or column vector whose components are all zeros except for a 1 in its kth place. Thus the 3rd basis row vector in 4 dimensions is (0, 0, 1, 0).
A matrix defines the linear transformation which takes the kth basis column vector into the kth column of the matrix. (This determines the transformation completely by its linearity: its value on a linear combination of basis vectors is the same linear combination of the columns of the matrix.)
The inverse of an n by n matrix A is the matrix A^{-1} which obeys A A^{-1} = I_n and A^{-1} A = I_n.
When n is finite, either one of these conditions implies the other (a small worked 2 by 2 instance is given below).

Proof
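As a small illustration of the definitions above (the entries are chosen only for the example): for a 2 by 2 matrix A, the columns of A are the images of the two basis column vectors, A^T switches rows with columns, and A^{-1} can be checked against both conditions directly:

\[
A = \begin{pmatrix} 1 & 2 \\ 1 & 3 \end{pmatrix}, \qquad
A \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad
A \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 2 \\ 3 \end{pmatrix}, \qquad
A^T = \begin{pmatrix} 1 & 1 \\ 2 & 3 \end{pmatrix},
\]
\[
A^{-1} = \begin{pmatrix} 3 & -2 \\ -1 & 1 \end{pmatrix}, \qquad
A A^{-1} = A^{-1} A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I_2.
\]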

This is not always true in infinite dimensions: consider the transformation T which takes the ith component of a vector and makes it the (i + 1)st, with first component 0. Its inverse would have to make the (i + 1)st component into the ith. But what does this inverse do in acting on the vector v with a 1 as first component and all others 0? Since this vector is not in the range of T, there is no way that applying T to T^{-1} v can give this vector back.
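Written out concretely on infinite sequences (using S for the would-be inverse, a name introduced only for this sketch), T shifts components to the right and S shifts them to the left; S T is the identity, but T S is not, since the vector v above is lost:

\[
T(x_1, x_2, x_3, \dots) = (0, x_1, x_2, \dots), \qquad
S(x_1, x_2, x_3, \dots) = (x_2, x_3, x_4, \dots),
\]
\[
S T = I, \qquad \text{but} \qquad T S (1, 0, 0, \dots) = (0, 0, 0, \dots) \neq (1, 0, 0, \dots).
\]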