Since we can both add and multiply transformations from a vector space to itself, and hence can do the same with square matrices, we can combine these operations to form arbitrary polynomial functions of them, and even infinite series of them.
It is quite easy to make such definitions, but matrix multiplication, while reasonably easy to do, is rather cumbersome, and it is not very easy to visualize what such constructs amount to.
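For instance, here is a minimal numerical sketch (the matrix and the polynomial are arbitrary choices made only for illustration) of forming a polynomial in a square matrix by repeated matrix multiplication and addition, using NumPy:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [0.0, 3.0]])      # an arbitrary 2 by 2 matrix chosen for illustration
    I = np.eye(2)

    # p(A) = A^2 + 3A + 2I, built term by term from matrix products and sums
    p_of_A = A @ A + 3 * A + 2 * I
    print(p_of_A)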
On the other hand, if the matrix A is diagonal, then it is much easier to see
what a complicated function of A actually means. Such a function will itself be a diagonal
matrix whose diagonal elements are the same function of the corresponding diagonal
elements of A.
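To check this numerically (a sketch only; the diagonal entries and the choice of the exponential series are arbitrary), a partial sum of the series for exp(D) applied to a diagonal matrix D agrees with simply applying exp to each diagonal entry:

    import numpy as np
    from math import factorial

    D = np.diag([1.0, 2.0, -0.5])   # a diagonal matrix with arbitrary entries

    # Partial sum of the series I + D + D^2/2! + D^3/3! + ... approximating exp(D)
    exp_series = sum(np.linalg.matrix_power(D, k) / factorial(k) for k in range(20))

    # Applying exp to each diagonal entry directly gives the same diagonal matrix
    exp_entrywise = np.diag(np.exp(np.diag(D)))
    print(np.allclose(exp_series, exp_entrywise))   # True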
Some of the central questions in the study of transformations on vector spaces are: for which T can you find a basis with respect to which T will be diagonal? How close can you come in general to doing so? And how do you go about finding such basis vectors?
The matrix of T will be diagonal when each basis vector has the property that
the action of T on it produces a multiple of itself, and that multiple will
be the corresponding diagonal element.
A vector v such that Tv = λv is called an eigenvector of T, and λ is
called the eigenvalue of T corresponding to v. Thus a matrix
for T will be diagonal if every basis vector is an eigenvector of T; its diagonal
elements are then the corresponding eigenvalues.
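As a sketch of what this means concretely (the matrix here is an arbitrary symmetric example, and numpy.linalg.eig is used to find the eigenvectors), changing to a basis of eigenvectors turns the matrix into a diagonal one whose entries are the eigenvalues:

    import numpy as np

    M = np.array([[2.0, 1.0],
                  [1.0, 2.0]])          # an arbitrary symmetric matrix; its eigenvectors form a basis

    eigenvalues, V = np.linalg.eig(M)   # the columns of V are eigenvectors of M

    # In the eigenvector basis the matrix becomes V^{-1} M V, which is diagonal
    # with the eigenvalues as its diagonal entries.
    D = np.linalg.inv(V) @ M @ V
    print(np.round(D, 10))
    print(eigenvalues)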
The concepts of eigenvalue and eigenvector can be extended to matrices also.
Thus a vector v is an eigenvector of a matrix M if multiplying it by M produces
a multiple of itself; that multiple is the corresponding eigenvalue: Mv = λv.
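For example (a small check with an arbitrarily chosen matrix), the vector (1, 1) is an eigenvector of the matrix with rows (2, 1) and (1, 2), with eigenvalue 3, since multiplying produces (3, 3):

    import numpy as np

    M = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    v = np.array([1.0, 1.0])

    # M v = (3, 3) = 3 v, so v is an eigenvector of M with eigenvalue 3
    print(M @ v)                       # [3. 3.]
    print(np.allclose(M @ v, 3 * v))   # True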