
I tried to get the eigenvectors via singular value decomposition using scipy.linalg.svd(), which returns three outputs: the left matrix U, the singular values S, and the right matrix Vh. For a symmetric positive definite matrix, the columns of U are supposed to be eigenvectors.
However, the eigenvectors obtained from scipy.linalg.eig() differ from U, the left matrix. They differ in the sense that they have the same absolute values but opposite signs for some of the entries.

import numpy as np
from scipy import linalg  # import the submodule explicitly

mat = np.array([[3, 2], [2, 3]])
U, S, Vh = linalg.svd(mat)
eigenvalues, eigenvectors = linalg.eig(mat)

print(U)
>>> [[-0.70710678 -0.70710678]
     [-0.70710678  0.70710678]]
print(eigenvectors)
>>> [[ 0.70710678 -0.70710678]
     [ 0.70710678  0.70710678]]
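Despite the sign differences, both sets of columns satisfy the eigenvalue equation. A small sanity check (not from the original post, but using the same matrix) verifies mat @ v = λ v column by column for both outputs:

```python
import numpy as np
from scipy import linalg

mat = np.array([[3, 2], [2, 3]])
U, S, Vh = linalg.svd(mat)
eigenvalues, eigenvectors = linalg.eig(mat)

# For this symmetric positive definite matrix the singular values
# coincide with the eigenvalues, so each column of U is an eigenvector.
for i in range(2):
    u = U[:, i]
    assert np.allclose(mat @ u, S[i] * u)

# The columns returned by eig() satisfy the same equation,
# paired index-wise with the returned eigenvalues.
for i in range(2):
    v = eigenvectors[:, i]
    assert np.allclose(mat @ v, eigenvalues[i].real * v)
```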

Maybe I am missing some mathematical point. It would be very helpful if someone could explain why this happens.

1 Answer


If $\vec{x}$ is an eigenvector of a matrix $A$ with eigenvalue $\lambda$, then $\alpha \vec{x}$ is also an eigenvector with the same eigenvalue for any nonzero scalar $\alpha$. Equivalently, $A\vec{x} = \lambda \vec{x} \iff A (\alpha \vec{x}) = \lambda (\alpha \vec{x})$.
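This scaling freedom is easy to confirm numerically. The sketch below (using the matrix from the question and an arbitrarily chosen multiple $\alpha = -3$) checks that both $\vec{x}$ and $\alpha \vec{x}$ satisfy the eigenvalue equation:

```python
import numpy as np

A = np.array([[3, 2], [2, 3]])
x = np.array([1.0, 1.0])  # eigenvector of A with eigenvalue 5
lam = 5.0

# A x = λ x
assert np.allclose(A @ x, lam * x)
# A (αx) = λ (αx) for any nonzero α, here α = -3
assert np.allclose(A @ (-3 * x), lam * (-3 * x))
```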

It is common to normalize eigenvectors to have norm $1$, but this still leaves the choice of sign.

Your question is an example of this indeterminacy. The matrix $\left( \begin{smallmatrix} 3 & 2 \\ 2 & 3 \end{smallmatrix} \right)$ has eigenvalues $1$ and $5$ with corresponding (unnormalized) eigenvectors $(-1, 1)$ and $(1, 1)$. If we insist that these eigenvectors have norm one, then we can write them as $$\pm \left(\frac{-1}{\sqrt 2}, \frac{1}{\sqrt 2}\right) \qquad \text{and} \qquad \pm \left(\frac{1}{\sqrt 2}, \frac{1}{\sqrt 2}\right). $$ It happens that svd() and eig() make different choices for the order of the eigenvectors and for the signs. But they are the "same" eigenvectors, in the sense that they differ only by a factor of $-1$.
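One way to compare the two outputs while ignoring both sign and column order (a sketch, not part of the original answer) is to check that each column of U matches some column of the eig() result up to a factor of $\pm 1$:

```python
import numpy as np
from scipy import linalg

mat = np.array([[3, 2], [2, 3]])
U, _, _ = linalg.svd(mat)
_, eigenvectors = linalg.eig(mat)
eigenvectors = eigenvectors.real  # symmetric matrix: eigenvectors are real

# Each column of U should equal some column of `eigenvectors`
# up to an overall sign.
for i in range(U.shape[1]):
    u = U[:, i]
    match = any(
        np.allclose(u, eigenvectors[:, j]) or np.allclose(u, -eigenvectors[:, j])
        for j in range(eigenvectors.shape[1])
    )
    assert match
```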