Inverse Matrices

In this post, we introduce the idea of the inverse of a matrix, which undoes the transformation of that matrix. For example, it’s straightforward that the inverse of the rescaling matrix

\begin{align*} \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix} \end{align*}

is obtained as the rescaling matrix that rescales each dimension by the reciprocal amount.

\begin{align*} \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}^{-1} = \begin{pmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{3} \end{pmatrix} \end{align*}

We can verify this by multiplying the matrix by its inverse and observing that the inverse takes the matrix back to the unit square.

\begin{align*} \begin{pmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{3} \end{pmatrix} \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \end{align*}
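As a quick numerical check, here is a sketch using NumPy (the variable names are our own, not part of the post):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])      # the rescaling matrix
A_inv = np.array([[1/2, 0.0],
                  [0.0, 1/3]])  # rescale each dimension by the reciprocal amount

# The product should be the 2x2 identity matrix.
print(A_inv @ A)
```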

But when we consider a more general matrix like the one below, it’s less straightforward how to find the inverse.

\begin{align*} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \end{align*}

We could try inverting each of the components separately, as we did with the diagonal of the rescaling matrix, but the resulting matrix doesn’t take the original matrix back to the unit square – so it can’t be the inverse.

\begin{align*} \begin{pmatrix} 1 & \frac{1}{2} \\ \frac{1}{3} & \frac{1}{4} \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} \frac{5}{2} & 4 \\ \frac{13}{12} & \frac{5}{3} \end{pmatrix} \end{align*}

Here is another idea: since we want to end up with the unit square, let’s left-multiply our matrix by other matrices representing row operations until we get to the unit square.

\begin{align*} \begin{pmatrix} 1 & 0 \\ −3 & 1 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} &= \begin{pmatrix} 1 & 2 \\ 0 & −2 \end{pmatrix} \\ \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ −3 & 1 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} &= \begin{pmatrix} 1 & 0 \\ 0 & −2 \end{pmatrix} \\ \begin{pmatrix} 1 & 0 \\ 0 & −\frac{1}{2} \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ −3 & 1 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} &= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \end{align*}

Then, let’s take all the matrices we multiplied by and compute their product, keeping them in the same order. That product will be our inverse matrix.

\begin{align*} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}^{−1} &= \begin{pmatrix} 1 & 0 \\ 0 & −\frac{1}{2} \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ −3 & 1 \end{pmatrix} \\ &= \begin{pmatrix} −2 & 1 \\ \frac{3}{2} & −\frac{1}{2} \end{pmatrix} \end{align*}
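We can sanity-check this product numerically, assuming NumPy is available; `E1`, `E2`, `E3` are our own names for the three row-operation matrices above:

```python
import numpy as np

# Elementary matrices for the three row operations used above.
E1 = np.array([[1.0, 0.0], [-3.0, 1.0]])  # subtract 3 times row 1 from row 2
E2 = np.array([[1.0, 1.0], [0.0, 1.0]])   # add row 2 to row 1
E3 = np.array([[1.0, 0.0], [0.0, -0.5]])  # rescale row 2 by -1/2

# Their product, in the same order, is the inverse: [[-2, 1], [1.5, -0.5]]
print(E3 @ E2 @ E1)
```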

We can verify that indeed, this is the correct inverse matrix.

\begin{align*} \begin{pmatrix} −2 & 1 \\ \frac{3}{2} & −\frac{1}{2} \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \end{align*}

Since we computed the inverse by left-multiplying, we might expect it to work only for left-multiplication. Interestingly, it works for right-multiplication as well!

\begin{align*} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} −2 & 1 \\ \frac{3}{2} & −\frac{1}{2} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \end{align*}
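Both verifications can be reproduced in a few lines of NumPy (a sketch; names are our own):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
A_inv = np.array([[-2.0, 1.0],
                  [1.5, -0.5]])  # the inverse found via row operations

# Multiplying on either side gives the identity matrix.
print(A_inv @ A)
print(A @ A_inv)
```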

This result is general to any inverse matrix – regardless of whether we multiply a matrix by its inverse on the left or the right, the result will be the identity matrix. To see why, we’ll need to do a bit of simple algebra. To ease notation, we’ll denote the identity matrix by $I$ – the name comes from the fact that $AI = IA = A$ for any matrix $A$, so multiplying by $I$ leaves any matrix unchanged. Since $A^{-1}A=I$ for an inverse matrix obtained by left-multiplication, and since matrix multiplication is associative, we have

\begin{align*} A=AI=A(A^{-1}A) = (AA^{-1})A. \end{align*}

But if we left-multiply $A$ by a matrix and maintain a result of $A$, that matrix must be the identity – since $A$ has an inverse, its outputs fill the whole space, so a matrix that fixes every output of $A$ must fix every vector. That is, if $A=(AA^{-1})A$, then we must have $AA^{-1}=I$. Hence, left and right inverses are one and the same.

Now, let’s try to find the inverse of the matrix below. Something weird will happen.

\begin{align*} \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix} \end{align*}

This is simply a rescaling matrix with the rescaling quantities $2$ and $0$ on the diagonal. With rescaling matrices, we’re used to finding the inverse by inverting the diagonal entries. We can invert $2$ and get $\frac{1}{2}$, but we can’t invert $0$ – the fraction $\frac{1}{0}$ is undefined.

It turns out that this matrix has no inverse. In general, any matrix having a $0$ rescaling has no inverse, because once a vector is rescaled by a factor of $0$, it’s impossible to recover the original length of the vector – as far as we know, it could be any length, because $0$ times any number results in $0$.

By the same token, any matrix whose rescalings are all nonzero has an inverse. Once a vector is rescaled by a factor of $r \neq 0$, we can recover the original length of the vector by simply rescaling again by $\frac{1}{r}$. Since the determinant of a matrix is the product of its rescalings, we can put all this together into an elegant statement: a matrix is invertible if and only if its determinant is nonzero.
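This invertibility test is easy to demonstrate numerically; below is a sketch using NumPy’s `det` and `inv`, applied to the singular rescaling matrix from above:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 0.0]])  # rescaling by 2 and by 0

print(np.linalg.det(A))     # 0.0, so no inverse exists
try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as e:
    print("not invertible:", e)
```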

This statement gives another perspective on why a linear system with nonzero determinant has exactly one solution, whereas a linear system with zero determinant has either no solutions or infinitely many. Any linear system can be written as a matrix equation $Ax=b$, and if $\det(A) \neq 0$, then $A^{-1}$ exists, resulting in a single solution given by $x = A^{-1}b$. On the other hand, if $\det(A)=0$, then $A$ contains some zero rescaling, and thus if there is any solution at all, there must be infinitely many solutions, because multiplication by zero gives the same result for infinitely many numbers.
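As a sketch of the invertible case, using NumPy and a made-up system $Ax=b$ (the numbers here are our own):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 6.0])

print(np.linalg.det(A))   # approximately -2, nonzero, so a unique solution exists

x = np.linalg.inv(A) @ b  # x = A^{-1} b
print(A @ x)              # recovers b, confirming the solution
```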

Lastly, let’s discuss a faster method for computing inverse matrices, based on the technique of reduction. We already know how to use reduction to keep track of coefficients when solving linear systems by elimination – but we’ll introduce a more compact augmented matrix notation that will allow us to compute inverse matrices.

To solve the linear system below, we first convert it to an augmented matrix.

\begin{align*} \begin{cases} 2x_1+x_2=4 \\ x_1+x_2=3 \end{cases} \rightarrow \begin{pmatrix} 2 & 1 & | & 4 \\ 1 & 1 & | & 3 \end{pmatrix} \end{align*}

Then, we perform row operations on the augmented matrix until we have reduced the left-hand side to the identity matrix.

\begin{align*} \begin{pmatrix} 2 & 1 & | & 4 \\ 1 & 1 & | & 3 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 0 & | & 1 \\ 1 & 1 & | & 3 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 0 & | & 1 \\ 0 & 1 & | & 2 \end{pmatrix} \end{align*}

Finally, the solutions are displayed on the right-hand side: $x_1=1$ and $x_2=2$.

This process is familiar – we’re just left-multiplying by matrices corresponding to row operations until we get to the identity matrix, at which point we have effectively multiplied the original left-hand side matrix by its inverse. Since we perform those same operations on the right-hand side vector, we are effectively multiplying the vector by the inverse matrix as well, which yields the solution.

If we want to find the actual inverse matrix, rather than just using it to solve the system, we can modify this process slightly by replacing the original right-hand side vector with the identity matrix. Then, once the left-hand side matrix is taken to the identity matrix, the right-hand side identity matrix will be taken to the inverse matrix.

To find the actual inverse matrix in the previous example, we replace the right-hand side with the identity matrix and perform the same row operations to reduce the left-hand side.

\begin{align*} \begin{pmatrix} 2 & 1 & | & 1 & 0 \\ 1 & 1 & | & 0 & 1 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 0 & | & 1 & −1 \\ 1 & 1 & | & 0 & 1 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 0 & | & 1 & −1 \\ 0 & 1 & | & −1 & 2 \end{pmatrix} \end{align*}

Thus, we have the inverse matrix:

\begin{align*} \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}^{−1} = \begin{pmatrix} 1 & −1 \\ −1 & 2 \end{pmatrix} \end{align*}
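The augmented-matrix procedure is mechanical enough to sketch in code. Below is a minimal Gauss–Jordan routine (our own `invert` helper, not from the post), which row-reduces $[A \mid I]$ and reads the inverse off the right-hand side:

```python
import numpy as np

def invert(A):
    """Invert a square matrix by row-reducing the augmented matrix [A | I]."""
    A = np.array(A, dtype=float)
    n = len(A)
    aug = np.hstack([A, np.eye(n)])      # the augmented matrix [A | I]
    for col in range(n):
        # Choose a pivot row with the largest entry in this column.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("matrix is not invertible")
        aug[[col, pivot]] = aug[[pivot, col]]  # swap the pivot row into place
        aug[col] /= aug[col, col]              # rescale so the pivot is 1
        for row in range(n):                   # clear the rest of the column
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]                    # right-hand side now holds the inverse

print(invert([[2, 1], [1, 1]]))          # should match [[1, -1], [-1, 2]]
```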

There is a nice general formula for the inverse of a $2 \times 2$ matrix, given below. It’s worth memorizing: it speeds up manipulations with $2 \times 2$ matrices, and the whole point of working with $2 \times 2$ examples is that they are relatively simple and fast.

\begin{align*} \begin{pmatrix} a & b \\ c & d \end{pmatrix}^{−1} = \frac{1}{ad−bc} \begin{pmatrix} d & −b \\ −c & a \end{pmatrix} \end{align*}
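The formula translates directly into a small helper function (a sketch; `inv2x2` is our own name):

```python
def inv2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the closed-form 2x2 formula."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("determinant is zero: not invertible")
    return [[d / det, -b / det],
            [-c / det, a / det]]

print(inv2x2(1, 2, 3, 4))   # [[-2.0, 1.0], [1.5, -0.5]]
```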

Practice Problems

For each given matrix $A$, compute $\det(A)$ to tell whether $A$ is invertible. If it is, then compute $A^{-1}$, and verify that $A^{-1}A=I$ and $AA^{-1}=I$.

\begin{align*} 1) \hspace{.5cm} \begin{pmatrix} 3 & 1 \\ 2 & 1 \end{pmatrix} \end{align*}
Solution:
\begin{align*} \begin{pmatrix} 1 & −1 \\ −2 & 3 \end{pmatrix}\end{align*}

\begin{align*} 2) \hspace{.5cm} \begin{pmatrix} 2 & 1 \\ 4 & 3 \end{pmatrix} \end{align*}
Solution:
\begin{align*}\begin{pmatrix} \frac{3}{2} & −\frac{1}{2} \\ −2 & 1 \end{pmatrix} \end{align*}

\begin{align*} 3) \hspace{.5cm} \begin{pmatrix} 0 & 1 \\ 5 & 7 \end{pmatrix} \end{align*}
Solution:
\begin{align*} \begin{pmatrix} −\frac{7}{5} & \frac{1}{5} \\ 1 & 0 \end{pmatrix}\end{align*}

\begin{align*} 4) \hspace{.5cm} \begin{pmatrix} 2 & 1 \\ 6 & 3 \end{pmatrix} \end{align*}
Solution:
\begin{align*} \text{not invertible} \end{align*}

\begin{align*} 5) \hspace{.5cm} \begin{pmatrix} 1 & −2 & 0 \\ 0 & 1 & 0 \\ 0 & −1 & 1 \end{pmatrix} \end{align*}
Solution:
\begin{align*} \begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}\end{align*}

\begin{align*} 6) \hspace{.5cm} \begin{pmatrix} 1 & 0 & 1 \\ 2 & 3 & 4 \\ −1 & 1 & 0 \end{pmatrix} \end{align*}
Solution:
\begin{align*} \begin{pmatrix} −4 & 1 & −3 \\ −4 & 1 & −2 \\ 5 & −1 & 3 \end{pmatrix}\end{align*}

\begin{align*} 7) \hspace{.5cm} \begin{pmatrix} 2 & 1 & 1 \\ 1 & 1 & 7 \\ 5 & 2 & −4 \end{pmatrix} \end{align*}
Solution:
\begin{align*} \text{not invertible} \end{align*}

\begin{align*} 8) \hspace{.5cm} \begin{pmatrix} 3 & 2 & 1 \\ 0 & 2 & 4 \\ −1 & 1 & −1 \end{pmatrix} \end{align*}
Solution:
\begin{align*} \frac{1}{24} \begin{pmatrix} 6 & −3 & −6 \\ 4 & 2 & 12 \\ −2 & 5 & −6 \end{pmatrix} \end{align*}

\begin{align*} 9) \hspace{.5cm} \begin{pmatrix} 1 & 2 & 1 & −2 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & −1 \\ 2 & 1 & 0 & 2 \end{pmatrix} \end{align*}
Solution:
\begin{align*} \begin{pmatrix} 0 & 3 & −2 & −1 \\ 0 & −2 & 2 & 1 \\ 1 & −3 & 0 & 1 \\ 0 & −2 & 1 & 1 \end{pmatrix} \end{align*}

\begin{align*} 10) \hspace{.5cm} \begin{pmatrix} 2 & 3 & 1 & 0 \\ −1 & 0 & 3 & 1 \\ 2 & 2 & 4 & 1 \\ 2 & −1 & 2 & −1 \end{pmatrix} \end{align*}
Solution:
\begin{align*} \frac{1}{25} \begin{pmatrix} −11 & −17 & 16 & −1 \\ 14 & 8 & −9 & −1 \\ 5 & 10 & −5 & 5 \\ −26 & −22 & 31 & −16 \end{pmatrix}\end{align*}

For each equation $Ax=b$, tell whether $A^{-1}$ exists. If it does, then compute the solution $x=A^{-1}b$.

\begin{align*} 11) \hspace{.5cm} \begin{pmatrix} 1 & 4 \\ 2 & 3 \end{pmatrix} x = \begin{pmatrix} 5 \\ 6 \end{pmatrix} \end{align*}
Solution:
\begin{align*} x = \frac{1}{5} \begin{pmatrix} 9 \\ 4 \end{pmatrix} \end{align*}

\begin{align*} 12) \hspace{.5cm} \begin{pmatrix} −1 & 5 \\ 1 & 1 \end{pmatrix} x = \begin{pmatrix} 2 \\ −1 \end{pmatrix} \end{align*}
Solution:
\begin{align*} x = \frac{1}{6} \begin{pmatrix} −7 \\ 1 \end{pmatrix} \end{align*}

\begin{align*} 13) \hspace{.5cm} \begin{pmatrix} 3 & −7 \\ −9 & 21 \end{pmatrix} x = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \end{align*}
Solution:
\begin{align*} \text{no inverse} \end{align*}

\begin{align*} 14) \hspace{.5cm} \begin{pmatrix} 2 & 7 \\ 5 & −1 \end{pmatrix} x = \begin{pmatrix} 3 \\ 4 \end{pmatrix} \end{align*}
Solution:
\begin{align*} x = \frac{1}{37} \begin{pmatrix} 31 \\ 7 \end{pmatrix} \end{align*}

\begin{align*} 15) \hspace{.5cm} \begin{pmatrix} 1 & 0 & 1 \\ 0 & −1 & 1 \\ 1 & 2 & 2 \end{pmatrix} x = \begin{pmatrix} 1 \\ −2 \\ −1 \end{pmatrix} \end{align*}
Solution:
\begin{align*} x = \begin{pmatrix} 3 \\ 0 \\ −2 \end{pmatrix}\end{align*}

\begin{align*} 16) \hspace{.5cm} \begin{pmatrix} 7 & 3 & 4 \\ 1 & 2 & 3 \\ 4 & −3 & −5 \end{pmatrix} x = \begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix} \end{align*}
Solution:
\begin{align*} \text{no inverse} \end{align*}

\begin{align*} 17) \hspace{.5cm} \begin{pmatrix} 3 & 4 & 1 \\ −2 & 3 & 1 \\ 0 & 3 & 2 \end{pmatrix} x = \begin{pmatrix} 5 \\ 1 \\ 2 \end{pmatrix} \end{align*}
Solution:
\begin{align*} x = \frac{1}{19} \begin{pmatrix} 12 \\ 16 \\ −5 \end{pmatrix} \end{align*}

\begin{align*} 18) \hspace{.5cm} \begin{pmatrix} 3 & 1 & 1 \\ −1 & 1 & 2 \\ 4 & 3 & 5 \end{pmatrix} x = \begin{pmatrix} −3 \\ 3 \\ 4 \end{pmatrix} \end{align*}
Solution:
\begin{align*} x = \frac{1}{3} \begin{pmatrix} 1 \\ −34 \\ 22 \end{pmatrix} \end{align*}

\begin{align*} 19) \hspace{.5cm} \begin{pmatrix} 1 & 2 & −1 & −2 \\ 3 & 0 & 4 & 1 \\ 1 & 5 & 0 & 0 \\ 0 & 1 & 0 & 1 \end{pmatrix} x = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 2 \end{pmatrix} \end{align*}
Solution:
\begin{align*} x = \frac{1}{10} \begin{pmatrix} 45 \\ −9 \\ −41 \\ 29 \end{pmatrix} \end{align*}

\begin{align*} 20) \hspace{.5cm} \begin{pmatrix} 0 & 7 & 1 & 0 \\ 1 & 1 & 1 & 0 \\ 1 & −1 & 1 & −1 \\ 0 & 1 & 2 & 3 \end{pmatrix} x = \begin{pmatrix} 1 \\ 0 \\ 2 \\ −2 \end{pmatrix} \end{align*}
Solution:
\begin{align*} x &=\frac{1}{19} \begin{pmatrix} −31 \\ −2 \\ 33 \\ −34 \end{pmatrix} \end{align*}
