There are several methods available for computing the SVD of a general matrix. I am interested in the best approach for computing the SVD of an upper-triangular matrix. Please suggest an algorithm that can be optimized for this special class of matrices.
This might be interesting – user66081 Apr 24 '16 at 21:06
2 Answers
My recommendation would be to use the Cholesky decomposition of the matrix $AA^T$. For an upper-triangular matrix $A$ this decomposition is trivial: $AA^T$ is already expressed as a triangular matrix times its transpose, so $A$ itself serves as the triangular factor.
Several algorithms exist for finding the singular values given the Cholesky decomposition. I was trying to devise one myself, but apparently someone has already written a paper about it, which I imagine will be of more use to you:
http://arxiv.org/pdf/1202.1490.pdf
In any case, the Cholesky decomposition seems like the best framework for exploiting the structure of $AA^T$ when $A$ is upper-triangular.
More references:
http://www.maths.manchester.ac.uk/~nstrabic/talks/chol_en.pdf
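As a concrete illustration of this framework (a minimal NumPy sketch, not the implicit algorithm from the linked paper; the test matrix is an arbitrary choice): for upper-triangular $A$, no Cholesky factorization of $AA^T$ needs to be computed, and the singular values of $A$ are the square roots of the eigenvalues of $AA^T$.

```python
import numpy as np

# A fixed upper-triangular test matrix with well-separated singular values
# (arbitrary choice for illustration).
A = np.triu(0.1 * np.ones((5, 5)), 1) + np.diag([1.0, 2.0, 3.0, 4.0, 5.0])

# For upper-triangular A, A @ A.T is already a "triangular factor times its
# transpose" product, so no factorization work is required.
M = A @ A.T

# Singular values of A are the square roots of the eigenvalues of A A^T.
# eigvalsh returns eigenvalues in ascending order; reverse for descending.
sigma = np.sqrt(np.linalg.eigvalsh(M))[::-1]

print(sigma)
print(np.linalg.svd(A, compute_uv=False))  # reference values
```

Note that forming the normal matrix $AA^T$ explicitly squares the condition number, so the smallest singular values can lose accuracy; the implicit algorithms in the references above are designed to avoid exactly this.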
(+1) for the original idea, which seemed full of promise. Yet the fact that $A$ is triangular provides only the first decomposition $A=LL^*$. After that, we consider the standard iteration $A_k=L_kL_k^*$, $A_{k+1}=L_k^*L_k,\cdots$ that converges to the diagonal of singular values. In the end the gain relative to a general matrix is very small. – Apr 25 '16 at 19:37
Do we still have that problem for the implicit Cholesky algorithm mentioned in the second reference? – Chill2Macht Apr 25 '16 at 21:02
The implicit algorithm is similar. We consider the first decomposition $AA^*=LL^*$ (and not $A=LL^*$); then $L_0:=L$, $U:=I$, $V:=I$, and we apply the QR iteration: $L_k=Q_kR_k$; if $k$ is even then $U:=UQ_k$, else $V:=VQ_k$; finally $L_{k+1}:=R_k^*$. – Apr 25 '16 at 21:25
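For concreteness, the iteration described in this comment can be sketched in NumPy as follows (real case, so $^*$ becomes $^T$; the test matrix and the iteration count are arbitrary choices):

```python
import numpy as np

# Fixed upper-triangular test matrix with well-separated singular values.
A = np.triu(0.1 * np.ones((5, 5)), 1) + np.diag([1.0, 2.0, 3.0, 4.0, 5.0])

# First decomposition: A A^T = L L^T with L lower triangular (Cholesky).
L = np.linalg.cholesky(A @ A.T)
L0 = L.copy()
U = np.eye(5)
V = np.eye(5)

# QR iteration: L_k = Q_k R_k, accumulate Q_k into U (k even) or V (k odd),
# then set L_{k+1} := R_k^T.
for k in range(100):
    Q, R = np.linalg.qr(L)
    if k % 2 == 0:
        U = U @ Q
    else:
        V = V @ Q
    L = R.T

# L converges to a diagonal matrix whose entries are (up to sign) the
# singular values of A, and L0 = U L V^T holds at every even step.
print(np.sort(np.abs(np.diag(L)))[::-1])
print(np.linalg.svd(A, compute_uv=False))  # reference values
```

Note that $U$ and $V$ here are the singular-vector factors of $L_0$ (so $L_0 = U L_k V^T$ at even $k$), not directly those of $A$.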
This is a very bad method for computing the SVD. Since the normal matrix is formed, the smallest singular values can be very inaccurate, and convergence is very slow: for a random triangular matrix of size 100×100 the algorithm in the first reference requires hundreds of iterations. For an $n \times n$ matrix one iteration costs $\frac{2}{3}n^3$ flops, whereas bidiagonalization costs $\frac{8}{3}n^3$ (LAPACK's algorithm); since computing the singular values of a bidiagonal matrix costs only $O(n^2)$, the cost of bidiagonalization dominates. Thus this algorithm is not only much less accurate than the standard one, but also much slower. – Pawel Kowal Jun 18 '16 at 08:26
Standard algorithms for computing the SVD factorization of an $m \times n$ matrix $A$ proceed in two steps:
- bidiagonalization of $A$,
- SVD factorization of the bidiagonal matrix using the QR method or divide-and-conquer.
The bidiagonalization procedure cannot exploit the triangular structure of $A$, and therefore standard SVD solvers cannot be optimized for triangular matrices.
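For example, SciPy exposes both standard LAPACK drivers, and neither has a specialized path for triangular input; a triangular matrix is processed like any dense matrix (a small sketch; the matrix is an arbitrary random example):

```python
import numpy as np
from scipy.linalg import svd

rng = np.random.default_rng(0)
A = np.triu(rng.standard_normal((100, 100)))  # random upper-triangular input

# Both LAPACK drivers bidiagonalize A first; the triangular structure
# is not exploited by either path.
s_dc = svd(A, compute_uv=False, lapack_driver='gesdd')  # divide-and-conquer
s_qr = svd(A, compute_uv=False, lapack_driver='gesvd')  # QR iteration

print(np.allclose(s_dc, s_qr))
```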
However, the Jacobi method can be optimized for triangular matrices; see Drmač, Zlatko, and Krešimir Veselić, "New fast and accurate Jacobi SVD algorithm. II," SIAM Journal on Matrix Analysis and Applications 29.4 (2008): 1343-1362; also LAPACK Working Note 170.
To my knowledge, this optimization has not been implemented, and it is hard to say whether such a method would be faster than methods for general matrices. The Jacobi method is slower than divide-and-conquer for general matrices, so it may still be slower even for triangular ones.
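The Drmač-Veselić algorithm itself is much more sophisticated, but the basic one-sided (Hestenes) Jacobi iteration it builds on is easy to sketch in plain NumPy (no triangular optimization; the test matrix, tolerance, and sweep limit are arbitrary choices):

```python
import numpy as np

def jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided (Hestenes) Jacobi SVD sketch: rotate column pairs of A
    until all columns are mutually orthogonal; the column norms are then
    the singular values. Not the optimized Drmac-Veselic algorithm."""
    A = A.astype(float).copy()
    n = A.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = A[:, p] @ A[:, p]
                beta = A[:, q] @ A[:, q]
                gamma = A[:, p] @ A[:, q]
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue  # columns p, q already orthogonal enough
                converged = False
                # Rotation angle that orthogonalizes columns p and q.
                zeta = (beta - alpha) / (2.0 * gamma)
                sgn = 1.0 if zeta >= 0 else -1.0
                t = sgn / (abs(zeta) + np.hypot(1.0, zeta))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                G = np.array([[c, s], [-s, c]])
                A[:, [p, q]] = A[:, [p, q]] @ G
                V[:, [p, q]] = V[:, [p, q]] @ G
        if converged:
            break
    sigma = np.linalg.norm(A, axis=0)   # singular values = column norms
    U = A / sigma                       # normalized columns = left vectors
    order = np.argsort(sigma)[::-1]     # sort descending
    return U[:, order], sigma[order], V[:, order]

# Upper-triangular test matrix with well-separated singular values.
A = np.triu(0.1 * np.ones((5, 5)), 1) + np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
U, s, V = jacobi_svd(A)
print(s)
print(np.linalg.svd(A, compute_uv=False))  # reference values
```

The triangular optimization in the paper concerns, among other things, how the rotations are applied and pivoted; this sketch uses the plain cyclic-by-rows ordering.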