Eigendecomposition Explained

In other words, how to rip matrices apart.

Look, let's face it. Matrices are annoying. If your motor skills are as bad as mine, drawing those lines on the sides can get really frustrating, really fast.

Fortunately, most mathematicians are in the same boat as us (there's a reason we failed art class, you know), so we have a workaround: eigendecomposition.

As I seem to be mentioning in all my articles, this is not 100% rigorous. Please do consult a more formal guide on linear algebra if you want to get the complete picture. With that said, let's dig in.

To start with, we have a matrix:

$$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$

Since matrices represent linear transformations (if you didn't know this, check out 3Blue1Brown's Essence of Linear Algebra playlist), let's take a look at what our matrix does to a particular vector:

$$ \begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix}\quad =\quad \lambda \begin{bmatrix} x \\ y \end{bmatrix} $$

Now, this doesn't work for every vector. But the idea is that for certain special vectors, applying the matrix's linear transformation has the same effect as scaling the components of that vector by some number $\lambda$. A mouthful, I know. The idea is probably better illustrated through subbing in some letters:

$$ A \vec{v} = \lambda \vec{v} $$

What the above equation tells you is that the matrix $A$ does the same thing to $ \vec{v} $ as the scalar $\lambda$ does: they both just scale it. And again, I will shamelessly reference 3Blue1Brown's brilliant explanation of eigenstuffs:

I'm serious, watch this.
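Just to make this concrete before moving on, here's a quick example of my own. Take this matrix and vector:

$$ \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix}\quad =\quad \begin{bmatrix} 3 \\ 3 \end{bmatrix}\quad =\quad 3\begin{bmatrix} 1 \\ 1 \end{bmatrix} $$

No rotation, no shearing: the matrix just stretched this particular vector by $\lambda = 3$. We'll reuse this matrix below.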

Now that we know our idea works, we need to figure out a way of implementing it.

Our main task is to find $\lambda$ and $\vec{v}$. Recall that we only said the matrix is equivalent in function to a scalar $\lambda$ with respect to a vector $\vec{v}$. Rearranging $ A \vec{v} = \lambda \vec{v} $ gives $ (A - \lambda I)\vec{v} = \vec{0} $, and a nonzero $\vec{v}$ can only satisfy that if the matrix $A - \lambda I$ squashes space flat, i.e. if it's singular. A matrix is singular exactly when its determinant is zero, so to find $\lambda$, solve:

$$ \text{det}(A - \lambda I) = 0$$

If you want to, go ahead and try this out for a few matrices of your choice. What you'll get is a polynomial (which, in our case, is called a characteristic polynomial). The roots of the polynomial give the possible values of the scalar (which, in our case, are called the eigenvalues).
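For example, with the matrix from earlier:

$$ \text{det}\left( \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} - \lambda \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \right) = (2-\lambda)^2 - 1 = \lambda^2 - 4\lambda + 3 = (\lambda - 1)(\lambda - 3) $$

Setting that to zero gives the eigenvalues $\lambda_1 = 1$ and $\lambda_2 = 3$. Notice that $3$ is exactly the stretch factor we saw earlier.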

Once you have the values of $\lambda$ (the eigenvalues), you can substitute them back into $ A \vec{v} = \lambda \vec{v} $ to get a system of linear equations. Solving the system yields $x$ and $y$, which are the components of the vector $ \vec{v} $. By the way, $ \vec{v} $ is called an eigenvector.
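Sticking with the same matrix, let's find the eigenvector for $\lambda = 3$:

$$ \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix}\quad =\quad 3\begin{bmatrix} x \\ y \end{bmatrix} \quad \Rightarrow \quad \begin{cases} 2x + y = 3x \\ x + 2y = 3y \end{cases} $$

Both equations boil down to $x = y$, so $ \begin{bmatrix} 1 \\ 1 \end{bmatrix} $ works (as does any scaled copy of it; eigenvectors are only pinned down up to scale). Running the same process with $\lambda = 1$ gives $y = -x$, so $ \begin{bmatrix} 1 \\ -1 \end{bmatrix} $ is the other eigenvector.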

So as it turns out, the eigenvectors and their corresponding eigenvalues can give you everything you need to reconstruct the original matrix, without going through all the steps in reverse!

Here's the shortcut to making a matrix from its eigenvalues ($ \lambda_1, \lambda_2, \lambda_3, ...$) and the corresponding eigenvectors ($ \vec{v_1}, \vec{v_2}, \vec{v_3}, ...$ ), which works as long as the eigenvectors are linearly independent (so that $P$ below is invertible):

$$ P\quad = \quad \begin{bmatrix} | & | & | \\ { v_1 } & { v_2 } & { v_3 } \\ | & | & | \end{bmatrix} $$

$$ D\quad =\quad \begin{bmatrix} { \lambda_1 } & 0 & 0 \\ 0 & { \lambda_2 } & 0 \\ 0 & 0 & { \lambda_3 } \end{bmatrix} $$

$$ A = PDP^{-1} $$

The matrix $A$ is the reconstructed matrix. But truth be told, none of this would be clear without a little practice, so pull out a matrix, and get to work!
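If you'd rather have a computer check your work, here's a minimal sketch using NumPy (my own illustration, not part of the recipe above) that runs the whole pipeline on the example matrix and verifies that $PDP^{-1}$ really rebuilds $A$:

```python
import numpy as np

# The example matrix from earlier
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix
# whose columns are the corresponding eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)

P = eigenvectors          # columns are v_1, v_2
D = np.diag(eigenvalues)  # lambda_1, lambda_2 on the diagonal

# Reconstruct A = P D P^{-1}
A_rebuilt = P @ D @ np.linalg.inv(P)

print(eigenvalues)               # [3. 1.] (order isn't guaranteed)
print(np.allclose(A, A_rebuilt)) # True
```

One thing to be aware of: NumPy normalizes its eigenvectors to unit length, so the columns of $P$ won't be exactly $ \begin{bmatrix} 1 \\ 1 \end{bmatrix} $ and $ \begin{bmatrix} 1 \\ -1 \end{bmatrix} $, just scaled copies of them. Since eigenvectors are only defined up to scale, the reconstruction works all the same.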
