
I became confused about how singular value decomposition can be used to find the generalized inverse of a singular matrix.

Specifically, I am dealing with the matrix $G=\begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & \sqrt{2} & \sqrt{2} & 0 \\ \sqrt{2} & 0 & 0 & \sqrt{2} \end{pmatrix}$. Calculating $A=G^{T}G$ gives $A=\begin{pmatrix} 3 & 0 & 1 & 2 \\ 0 & 3 & 2 & 1 \\ 1 & 2 & 3 & 0 \\ 2 & 1 & 0 & 3 \end{pmatrix}$, whose eigenvalues are $6,4,2,0$, so $A$ is singular. However, my textbook says that the generalized inverse of $G$ can be computed from the so-called Lanczos decomposition $G=U_{p}\Lambda_{p} V_p^{T}$, and that the generalized inverse is then $G^{-p}=V_{p} \Lambda_{p}^{-1} U_{p}^{T}$.

Now, the matrices $U_{p}$, $\Lambda_{p}$, and $V_{p}$ are defined as follows: $\Lambda_{p}$ is the diagonal matrix whose diagonal entries are the $p$ nonzero singular values of $G$, i.e. the square roots of the nonzero eigenvalues of $G^{T}G$ (here $p=3$ because $\operatorname{rank}(G)=3$). It is also written that $U_{p}$ and $V_{p}$ can be obtained from the singular value decompositions of $GG^{T}$ and $G^{T}G$, respectively.
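(For reference, both the decomposition and the inverse formula can be sanity-checked in NumPy by taking all three factors from a single SVD of $G$; this is a minimal sketch, not the textbook's two-step procedure:)

import numpy as np

G = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, np.sqrt(2), np.sqrt(2), 0],
              [np.sqrt(2), 0, 0, np.sqrt(2)]])

print(np.linalg.eigvalsh(G.T @ G))  # eigenvalues of A in ascending order: ~[0, 2, 4, 6]

U, s, Vh = np.linalg.svd(G)  # keep the p = 3 nonzero singular values
Up, Lp, Vp = U[:, :3], np.diag(s[:3]), Vh[:3, :].T

print(np.allclose(Up @ Lp @ Vp.T, G))                                 # True: G = U_p Λ_p V_p^T
print(np.allclose(Vp @ np.linalg.inv(Lp) @ Up.T, np.linalg.pinv(G)))  # True: G^{-p} is the pseudoinverse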

I understand the $\Lambda_{p}$ part, but I want to know how exactly to calculate $U_{p}$ and $V_{p}$. I experimented with the NumPy module, but I think I am misunderstanding something because the result differs from the expected one.

Here is the Python code I used for the computation.

import numpy as np
from numpy.linalg import svd

G = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, np.sqrt(2), np.sqrt(2), 0],
              [np.sqrt(2), 0, 0, np.sqrt(2)]])

V, S, Vh = svd(G.T @ G)
Vp = V[:, :3]  # eigenvectors corresponding to the 3 nonzero eigenvalues

U, S, Uh = svd(G @ G.T)
Up = U[:, :3]  # eigenvectors corresponding to the 3 nonzero eigenvalues

D = np.sqrt(np.diag(S[:3]))  # square roots of the eigenvalues of G.T @ G

print(Up @ D @ Vp.T)
print(G)

[image: calculation result]

According to the theory the two results should be equal, but they are not. Specifically, it looks as if the first two rows of $U_{p}\Lambda_{p}V_{p}^{T}$ are swapped relative to $G$, but I don't see why. I suspect I solved for $V_{p}$ and $U_{p}$ incorrectly. What am I misunderstanding?

Senna
  • You don't take the SVD. You take the eigen-decomposition of the covariance matrices. If $A = U \Sigma V^T$ then $A^TA = V \Sigma^2 V^T$ and $AA^T = U \Sigma^2 U^T$. The way the SVD is computed is by bi-orthogonalization. – Ryan Howe Dec 07 '20 at 02:52
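(A small numerical illustration of this comment; the up-to-sign matching below assumes distinct eigenvalues, which holds here since they are $6, 4, 2, 0$:)

import numpy as np

G = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, np.sqrt(2), np.sqrt(2), 0],
              [np.sqrt(2), 0, 0, np.sqrt(2)]])

U, s, Vh = np.linalg.svd(G)
w, W = np.linalg.eigh(G.T @ G)  # eigh returns eigenvalues in ascending order

print(np.allclose(w[::-1], s**2))                     # True: eigenvalues of G^T G are the squared singular values
print(np.allclose(np.abs(W[:, ::-1]), np.abs(Vh.T)))  # True: eigenvectors match columns of V only up to sign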

1 Answer


When you take the SVD of $GG^T = U_p\Lambda_p^2 U_p^T$, the sign of each column of $U_p$ is arbitrary. Similarly, the sign of each column of $V_p$ is arbitrary when you take the SVD of $G^T G$.

When calculating $U$ and $V$ directly from the SVD of $G$, you can multiply corresponding columns of both $U$ and $V$ by the same arbitrary sign, but you can't flip a column of $U$ alone or a column of $V$ alone. In your calculation, however, the signs of the columns of $U_p$ and $V_p$ are chosen independently of one another, which gives an incorrect result when you reconstruct $G$. Computing the two factors separately in this manner breaks the sign coupling between corresponding columns of $U_p$ and $V_p$.
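Here is a minimal sketch of that coupling, using the $G$ from the question:

import numpy as np

G = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, np.sqrt(2), np.sqrt(2), 0],
              [np.sqrt(2), 0, 0, np.sqrt(2)]])

U, s, Vh = np.linalg.svd(G)
V, L = Vh.T, np.diag(s)
D = np.diag([1.0, -1.0, 1.0, 1.0])  # flip the sign of the second column

print(np.allclose((U @ D) @ L @ (V @ D).T, G))  # True: flipping the pair together is harmless
print(np.allclose((U @ D) @ L @ V.T, G))        # False: flipping U's column alone breaks G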

If you take your code and follow it up by computing $U$ and $V$ directly from the SVD of $G$, and compare them to the $U_p$ and $V_p$ you computed, you will see that this is exactly what happened. The third column of $U_p$ is the negative of the third column of $U$, but the first three columns of $V_p$ match those of $V$.
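A quick diagnostic along these lines (the particular sign pattern depends on the LAPACK routines behind NumPy, so it may differ between machines):

import numpy as np
from numpy.linalg import svd

G = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, np.sqrt(2), np.sqrt(2), 0],
              [np.sqrt(2), 0, 0, np.sqrt(2)]])

Up = svd(G @ G.T)[0][:, :3]  # U_p computed separately, as in the question
Vp = svd(G.T @ G)[0][:, :3]  # V_p computed separately
U, s, Vh = svd(G)            # reference factors from the SVD of G itself
V = Vh.T

# the nonzero singular values are distinct, so each column can only match up to a sign
for k in range(3):
    su = '+' if np.allclose(Up[:, k], U[:, k]) else '-'
    sv = '+' if np.allclose(Vp[:, k], V[:, k]) else '-'
    print(f"column {k}: U_p sign {su}, V_p sign {sv}")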

rpm2718