[The OP is answered, but I'd like to include a quick, to-the-point reminder of sorts... or an observation on the symmetry between the two null spaces.]
1. Left null space:
$A= \begin{bmatrix}
1&3\\
1&2\\
1&-1\\
2&1\\
\end{bmatrix}$
In the R statistical language:
A = matrix(c(1,1,1,2,3,2,-1,1), ncol = 2)   # entries are filled column-wise
r = qr(A)$rank                              # rank 2
SVD.A = svd(A, nu = nrow(A))                # full SVD: keep all nrow(A) = 4 left singular vectors
SVD.A$u                                     # extracting the matrix U from it...
$U=\begin{bmatrix}-0.73038560& -0.27428549& \color{blue}{-0.1764270}& \color{blue}{-0.6001482}\\
-0.52378089& -0.03187309& \color{blue}{0.7303387}& \color{blue}{0.4373135}\\
0.09603322& 0.69536411& \color{blue}{0.4362937}& \color{blue}{-0.5629335}\\
-0.42774767& 0.66349102& \color{blue}{-0.4951027}& \color{blue}{0.3628841}
\end{bmatrix}$
t.U.A = t(SVD.A$u)
(left_null = t.U.A[(r + 1):nrow(t.U.A), ])  # the last nrow(A) - r rows of U' span the left null space
[,1] [,2] [,3] [,4]
[1,] -0.1764270 0.7303387 0.4362937 -0.4951027
[2,] -0.6001482 0.4373135 -0.5629335 0.3628841
colSums(left_null) %*% A                    # (sum of the two basis vectors)' A ~ 0
Therefore,
$\left[\alpha\begin{bmatrix}-0.1764270\\ 0.7303387\\ 0.4362937\\ -0.4951027\end{bmatrix}^\top
+\beta\begin{bmatrix}-0.6001482\\ 0.4373135\\ -0.5629335\\ 0.3628841\end{bmatrix}^\top\right]\; \begin{bmatrix}
1&3\\
1&2\\
1&-1\\
2&1\\
\end{bmatrix} = \mathbf 0$
with $\alpha$ and $\beta$ being scalars.
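As a quick sanity check in R (the scalars below are arbitrary values chosen purely for illustration), any such combination is annihilated by $A$ up to floating-point error:

alpha = 2.5; beta = -1.3                    # arbitrary scalars, purely illustrative
combo = alpha * left_null[1, ] + beta * left_null[2, ]
combo %*% A                                 # ~ 0, i.e. entries on the order of machine precision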
2. Right null space:
Defining matrix $B$ as the transpose of $A$,
$B= \begin{bmatrix}1&1&1&2\\3&2&-1&1\end{bmatrix}$
B = t(A)
r = qr(B)$rank                              # naturally it also has rank 2
SVD.B = svd(B, nv = ncol(B))                # full SVD: keep all ncol(B) = 4 right singular vectors
SVD.B$v                                     # extracting the matrix V from it...
$V = \begin{bmatrix}-0.73038560& -0.27428549& \color{blue}{-0.1764270}& \color{blue}{-0.6001482}\\
-0.52378089& -0.03187309& \color{blue}{0.7303387}& \color{blue}{0.4373135}\\
0.09603322& 0.69536411& \color{blue}{0.4362937}& \color{blue}{-0.5629335}\\
-0.42774767& 0.66349102& \color{blue}{-0.4951027}& \color{blue}{0.3628841}
\end{bmatrix}$
(right_null = SVD.B$v[, (r + 1):ncol(B)])   # the last ncol(B) - r columns of V span the null space of B
[,1] [,2]
[1,] -0.1764270 -0.6001482
[2,] 0.7303387 0.4373135
[3,] 0.4362937 -0.5629335
[4,] -0.4951027 0.3628841
B %*% rowSums(right_null)                   # B (sum of the two basis vectors) ~ 0
Therefore,
$\begin{bmatrix}1&1&1&2\\3&2&-1&1\end{bmatrix}\;\left[\alpha\begin{bmatrix}-0.1764270\\ 0.7303387\\ 0.4362937\\ -0.4951027\end{bmatrix}
+\beta\begin{bmatrix}-0.6001482\\ 0.4373135\\ -0.5629335\\ 0.3628841\end{bmatrix}\right] = \mathbf 0,$
again with $\alpha$ and $\beta$ being scalars.
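This makes the symmetry explicit: the right null space of $B = A^\top$ is the same subspace as the left null space of $A$. A quick check in R (in this run the two SVD calls return identical basis vectors, so the comparison is exact; in general the bases could differ by sign or by a rotation within the null space):

all.equal(t(left_null), right_null)         # TRUE here: same basis vectors
B %*% t(left_null)                          # ~ 0: the left-null rows of A are (right) null vectors of B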
In MATLAB:
% Left null space:
A = [1 3; 1 2; 1 -1; 2 1];
r = rank(A);                                  % rank 2
[U, S, V] = svd(A);                           % full SVD: U is 4-by-4
left_null_A = transpose(U);
rows = (r + 1):size(left_null_A, 1);          % rows 3:4 of U' span the left null space
left_null_A = left_null_A(rows, :)
(left_null_A(1,:) + left_null_A(2,:)) * A     % ~ 0
% Right null space:
B = transpose(A);
r = rank(B);                                  % also rank 2
[U, S, V] = svd(B);                           % full SVD: V is 4-by-4
right_null_B = transpose(V);
rows = (r + 1):size(right_null_B, 1);
right_null_B = transpose(right_null_B(rows, :))   % columns span the null space of B
B * (right_null_B(:,1) + right_null_B(:,2))       % ~ 0
---
In Python:
# Left null space:
import numpy as np
A = np.array([[1, 3], [1, 2], [1, -1], [2, 1]])
rank = np.linalg.matrix_rank(A)                      # rank 2
U, s, Vh = np.linalg.svd(A, full_matrices = True)    # numpy returns the third factor already transposed (Vh = V')
t_U_A = np.transpose(U)
left_null_A = t_U_A[rank:, :]                        # last (4 - rank) rows of U' span the left null space
left_null_A
np.dot(left_null_A[0, :] + left_null_A[1, :], A)     # ~ 0

# Right null space:
B = np.transpose(A)
rank = np.linalg.matrix_rank(B)
U, s, Vh = np.linalg.svd(B, full_matrices = True)
t_V_B = np.transpose(Vh)                             # undo numpy's transpose to recover V
right_null_B = t_V_B[:, rank:]                       # last columns of V span the null space of B
right_null_B
np.dot(B, right_null_B[:, 0] + right_null_B[:, 1])   # ~ 0