1. Theorem
Let $A$ be a matrix with dimension $n \times n$, and $x$, $b \in \mathbb{R}^n$ be vectors. Then, consider the system of linear equations given by

$$Ax = b.$$

Let $a_{jk}$ represent the elements of $A$, and similarly $x_j$, $b_j$ represent the elements of $x$ and $b$. Finally, let $A_i$ be the matrix $A$, except where column $i$ (which will be a column vector) of the matrix $A$ is replaced with the vector $b$. Then,

$$x_i = \frac{\det(A_i)}{\det(A)}.$$
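As a concrete illustration, here is a minimal sketch of the theorem in code. The function name cramer_solve and the example system are my own choices for the sketch, not part of the original:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i)/det(A),
    where A_i is A with column i replaced by b."""
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("A is singular; Cramer's rule does not apply.")
    n = A.shape[0]
    x = np.empty(n)
    for i in range(n):
        A_i = A.copy()
        A_i[:, i] = b              # replace column i with b
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b))          # [0.8 1.4]
print(np.linalg.solve(A, b))       # same answer
```

Note that this evaluates $n + 1$ determinants, which is far more expensive than Gaussian elimination, so it is illustrative rather than practical.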
2. Proof
Let the columns of $A$ be denoted by $a_j$ for $j = 1, \dots, n$. Then, we can write

$$A = (a_1, a_2, \dots, a_n).$$
The key point here is that

$$b = Ax = \sum_{j=1}^{n} x_j a_j.$$

Furthermore, we can write the matrix $A_i$ column by column. The matrix $A_i$, where $a_i$ is replaced with the vector $b$, can be written as

$$A_i = (\tilde{a}_1, \tilde{a}_2, \dots, \tilde{a}_n),$$

where $\tilde{a}_j = a_j$ if $j \neq i$, or $\tilde{a}_j = b$ if $j = i$. The determinant of the matrix $A_i$ is therefore of the form above, ie

$$\det(A_i) = \det(a_1, \dots, a_{i-1}, b, a_{i+1}, \dots, a_n).$$

But, recall that

$$b = \sum_{j=1}^{n} x_j a_j.$$

So, substituting back into $\det(A_i)$, we end up with

$$\det(A_i) = \det\!\left(a_1, \dots, a_{i-1}, \sum_{j=1}^{n} x_j a_j, a_{i+1}, \dots, a_n\right).$$
But, from the laws of determinants, we have that adding multiples of one column to another does not change the value of the determinant, and that multiplying one column by a constant, say $c$, multiplies the determinant by $c$. Subtracting $x_j a_j$ from column $i$ for each $j \neq i$ leaves $x_i a_i$ in column $i$, and factoring $x_i$ out of that column then gives

$$\det(A_i) = x_i \det(a_1, \dots, a_n) = x_i \det(A).$$
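The identity $\det(A_i) = x_i \det(A)$ is easy to check numerically; a quick sketch, with a random test system of my own invention:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)
b = A @ x                          # by construction, x solves Ax = b

for i in range(n):
    A_i = A.copy()
    A_i[:, i] = b                  # A with column i replaced by b
    assert np.isclose(np.linalg.det(A_i), x[i] * np.linalg.det(A))
print("det(A_i) = x_i * det(A) holds for every i")
```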
Since, by assumption, $A$ is not singular (and hence $\det(A) \neq 0$), we have

$$x_i = \frac{\det(A_i)}{\det(A)}.$$
And this proves the theorem.
NB – Computing the inverse of a matrix is a special use of Cramer's rule, where $b$ is taken in turn to be each column of the identity matrix (a column of zeros with a single one); the resulting solutions form the columns of $A^{-1}$.
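A sketch of that observation, again with a hypothetical function name (cramer_inverse is not from the original): column $k$ of $A^{-1}$ solves $Ax = e_k$, so applying Cramer's rule to each identity column assembles the inverse.

```python
import numpy as np

def cramer_inverse(A):
    """Build A^{-1} column by column: column k of A^{-1} solves
    A x = e_k, and each entry of that solution comes from Cramer's rule."""
    n = A.shape[0]
    det_A = np.linalg.det(A)
    inv = np.empty((n, n))
    for k in range(n):
        e_k = np.zeros(n)
        e_k[k] = 1.0               # column k of the identity: zeros and a one
        for i in range(n):
            A_i = A.copy()
            A_i[:, i] = e_k        # replace column i with e_k
            inv[i, k] = np.linalg.det(A_i) / det_A
    return inv

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(cramer_inverse(A))           # [[ 0.6 -0.2]
                                   #  [-0.2  0.4]]
print(np.linalg.inv(A))            # matches
```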