
In this previous question it is stated that, given a weighted undirected Laplacian $L$ corresponding to a connected graph, it is well known that if you add a small positive (resp. negative) amount to any diagonal element of $L$, the zero eigenvalue is pushed into the right (resp. left) half plane.

I tried to find the result in the literature, and even though I found quite a lot of papers on perturbed Laplacian matrices, I still did not find a way to prove the statement in italics above. The definition of a perturbed Laplacian matrix is not related to 'typical perturbation theory', in case this creates any confusion; you can refer here for a definition of the perturbed Laplacian.

Is the property stated in the question true in general for any matrix with zero-sum rows? Is the property valid for any singular matrix? How would I approach such a problem in general? Also, does it hold for any positive value, or does the value have to be sufficiently small?

Thank you!

---------------------- UPDATE -------------------

What happens if, instead of $L$, we consider $L+\gamma I$, where $I$ is the identity matrix? How can I quantify the effect of perturbing just one diagonal entry? In the case $\gamma=0$ (the question above) it is enough to know that we can 'move' the $0$ eigenvalue to the left or to the right with a minimal diagonal perturbation. But how can we quantify this effect?
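The highlighted statement is easy to sanity-check numerically. Here is a minimal sketch (the particular weighted graph is an arbitrary choice of mine, and NumPy is assumed), perturbing each diagonal entry in turn by an amount that is deliberately not small:

```python
import numpy as np

# Arbitrary weighted connected graph on 4 nodes (chosen just for illustration).
W = np.array([[0., 2., 0., 0.],
              [2., 0., 1., 0.],
              [0., 1., 0., 3.],
              [0., 0., 3., 0.]])
L = np.diag(W.sum(axis=1)) - W       # Laplacian: L @ ones == 0

h = 0.5                              # a positive amount, deliberately not "small"
for i in range(L.shape[0]):
    Lp = L.copy(); Lp[i, i] += h     # adding pushes the zero eigenvalue right
    Lm = L.copy(); Lm[i, i] -= h     # subtracting pushes it left
    print(np.linalg.eigvalsh(Lp).min() > 0,   # True
          np.linalg.eigvalsh(Lm).min() < 0)   # True
```

In every case the smallest eigenvalue lands strictly on the expected side of zero, consistent with the highlighted statement.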

giangian
  • The amount you're adding is said to be small so use perturbation theory. – AHusain Apr 07 '22 at 15:49
  • Yeah, I don't really mean to use perturbation theory. More for any positive constant rather than 'small'. I added a note at the end, thank you! – giangian Apr 07 '22 at 16:32
  • Is there an implicit assumption that the underlying graph is connected here? ("The" zero eigenvalue doesn't completely make sense otherwise.) – Ian Apr 07 '22 at 16:35
  • Yes, will add it! – giangian Apr 07 '22 at 16:36
  • This result isn't really perturbation theory related (e.g. there is no need for the amount being added to be 'small'). This result follows from Perron-Frobenius theory – user8675309 Apr 07 '22 at 16:40
  • Yes, will remove the word 'perturbation' to avoid any misunderstanding – giangian Apr 07 '22 at 16:41
  • Assuming $L=D-A$, it seems like the positive case is easy: the modified $L$ is positive semidefinite (symmetry + Gerschgorin theorem) and irreducibly diagonally dominant, hence nonsingular, hence positive definite. In the negative case I guess you need to bring out Perron-Frobenius theory. (Flip all the signs around if your convention is $L=A-D$ obviously.) – Ian Apr 07 '22 at 16:42

3 Answers


Let us write $\mathbf{L}=\sum_k \lambda_k \mathbf{u}_k \mathbf{u}_k^T$ with orthonormal eigenvectors $\mathbf{u}_k$. It follows that $\lambda_i = \mathbf{u}_i^T \mathbf{L} \mathbf{u}_i$ and $d\lambda_i = \mathbf{u}_i\mathbf{u}_i^T:d\mathbf{L}$. The colon operator denotes the Frobenius inner product here.

The eigenvector associated with the zero eigenvalue is the normalized vector of ones, $\mathbf{u}_0=\mathbf{e}/\sqrt{n}$, since $\mathbf{L}\mathbf{e}=0\cdot \mathbf{e}$.

For this particular eigenvalue, $d\lambda_0 = \frac{1}{n}\,\mathbf{J}:d\mathbf{L}$, where $\mathbf{J}=\mathbf{e}\mathbf{e}^T$ is populated with ones.

If you perturb one element of the diagonal of $\mathbf{L}$ by $h$, we can conclude that $d\lambda_0 = h/n$. Thus if $h>0$, the (zero) eigenvalue will increase, and if $h<0$ it will decrease.
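As a quick numerical sanity check of this first-order formula (the small graph below is an arbitrary choice, and NumPy is assumed): with the null eigenvector normalized to unit length, $\mathbf{u}_0=\mathbf{e}/\sqrt{n}$, the predicted shift from perturbing one diagonal entry by a small $h$ is $h/n$.

```python
import numpy as np

# Arbitrary weighted connected graph (chosen just for illustration).
W = np.array([[0., 2., 0., 0.],
              [2., 0., 1., 0.],
              [0., 1., 0., 3.],
              [0., 0., 3., 0.]])
L = np.diag(W.sum(axis=1)) - W
n = L.shape[0]

h = 1e-6                             # small perturbation of one diagonal entry
Lp = L.copy()
Lp[0, 0] += h
lam0 = np.linalg.eigvalsh(Lp).min()  # the perturbed zero eigenvalue
# First-order prediction with unit null eigenvector e/sqrt(n): d(lambda_0) = h/n
print(abs(lam0 - h / n) < 1e-9)      # True
```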

Steph
  • This technically only covers the perturbation case right? – Ian Apr 07 '22 at 16:53
  • What do you mean by $d\lambda$, and in general every time you use the operator $d$? If you could, could you give me a reference on the background theory/operators you used? I could use it to learn a bit more for next time! – giangian Apr 07 '22 at 17:12
  • d means differential. The proposed approach indeed covers the perturbation case, which was perhaps not what you were asking for. – Steph Apr 08 '22 at 06:05

Steph's approach is the way I would think about quantifying how matrix perturbations affect the matrix eigenvalues. I want to point out, though, that your specific highlighted statement is a consequence of just elementary properties of semidefinite matrices; namely, if $A$ and $B$ are positive-semidefinite then $$\ker (A+B) = \ker A \cap \ker B$$ which you can see by probing $v^T(A+B)v$.

Assuming you know that the kernel of $L$ is one-dimensional (spanned by the constant vector $\mathbf{1}$) for a connected graph, it follows immediately that $L+D$ is (strictly) positive definite for any nonzero, non-negative diagonal matrix $D$, and so the zero eigenvalue becomes positive. Also, since $$\mathbf{1}^T(L-D)\mathbf{1} = -\mathbf{1}^T D\mathbf{1} < 0,$$ you know that $L-D$ has at least one negative eigenvalue. Note, though, that this simple analysis isn't enough to tell you that there is only one negative eigenvalue when $D$ is a sufficiently small perturbation.
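Both facts are cheap to verify numerically; here is a minimal sketch (the example graph is an arbitrary choice, NumPy assumed), taking $D$ to perturb a single diagonal entry:

```python
import numpy as np

# Arbitrary weighted connected graph (chosen just for illustration).
W = np.array([[0., 2., 0., 0.],
              [2., 0., 1., 0.],
              [0., 1., 0., 3.],
              [0., 0., 3., 0.]])
L = np.diag(W.sum(axis=1)) - W
D = np.diag([0.7, 0., 0., 0.])       # nonzero, non-negative diagonal matrix
ones = np.ones(L.shape[0])

print(np.linalg.eigvalsh(L + D).min() > 0)   # True: L + D is positive definite
print(ones @ (L - D) @ ones < 0)             # True: L - D has a negative eigenvalue
```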

user7530
  • Oh, that is quite interesting how trivial it was with simple matrix algebra properties! Maybe it will help me answer the 'updated' question as well! Thanks for pointing it out even though the answer above helped me to understand and study a bit more about Perron Frobenius theory. At the moment the 'small perturbation' argument is not the main focus. – giangian Jul 02 '22 at 09:20
  • Yeah it actually solves the issue for other cases as well, thank you so much! – giangian Jul 02 '22 at 10:01

Idea: take $L$ and use very simple (i.e. diagonal-matrix) congruence and similarity transforms to turn this into a result about stochastic matrices, which are especially easy to work with; e.g. multiplication by a positive diagonal matrix does not change irreducibility. The same technique can be used to prove, e.g., that $\dim \ker L$ gives the number of connected components (because the algebraic multiplicity of the Perron root of a stochastic matrix counts exactly that).


Write $L=D-A$, where $A$ is the adjacency matrix of your connected graph and $D$ is the diagonal matrix of (weighted) degrees. Now effect a congruence transform with $D^{-1/2}$:
$$D^{-1/2}\big(D-A\big)D^{-1/2}=I-D^{-1/2}AD^{-1/2} = I-B,$$
and congruence of course preserves rank.

$B$ is an irreducible, real, non-negative matrix with single Perron root $\lambda =1$ (by congruence with $L$, $I-B$ has exactly one zero eigenvalue; see the comments below). Letting $\Gamma := \operatorname{diag}(\mathbf v)$, where $\mathbf v$ is the Perron vector of $B$, define $S:= \Gamma^{-1} B \Gamma$; then $S$ is a stochastic matrix.

Now if you change some diagonal component of $D$ (WLOG the first component) by any amount, so long as the diagonal stays positive, this is equivalent to multiplying $D$ by
$$\Sigma =\begin{bmatrix} \alpha & \mathbf 0\\ \mathbf 0 & I_{n-1} \end{bmatrix},$$
with $\alpha \in (0,1)$ to decrease the entry and $\alpha \gt 1$ to increase it.


Side note: if for some reason we wanted to consider the case where $\Sigma D$ had a non-positive diagonal element, it would immediately follow that the 'Laplacian' was no longer PSD, giving the result.


Essentially running through the same steps as before:
$$L\to L'=\Sigma D - A,$$
and the congruence transform now gives
$$I -\Sigma^{-1/2} D^{-1/2}AD^{-1/2}\Sigma^{-1/2} = I-\Sigma^{-1/2} B\Sigma^{-1/2}=I-B'.$$
Then
$$S':= \Gamma^{-1}B' \Gamma= \Gamma^{-1}\Big(\Sigma^{-1/2} B\Sigma^{-1/2}\Big) \Gamma=\Sigma^{-1/2}\Big( \Gamma^{-1} B \Gamma\Big)\Sigma^{-1/2} = \Sigma^{-1/2} S \Sigma^{-1/2}$$
(since diagonal matrices commute). By design $S'$ is no longer stochastic, and we can exploit this. In particular, if $\alpha \in (0,1)$ then the first row sum increases and no other row sum decreases; and if $\alpha \gt 1$ then the first row sum decreases and no other row sum increases.

Finally, Perron-Frobenius theory tells us that for an irreducible non-negative matrix,
$$\text{min row sum of }S' \leq \text{Perron root of }S' \leq \text{max row sum of }S',$$
and both inequalities are strict unless $\text{min row sum} = \text{max row sum}$ (which was the case for the stochastic $S$ and is not the case for $S'$). Thus we recover Perron root $\lambda \gt 1$ for $\alpha \in(0,1)$ and Perron root $\lambda \in (0,1)$ for $\alpha \gt 1$. This is preserved under the similarity transform back to $B'$; the mapping to $I-B'$ sends $\lambda \to 1-\lambda$; and congruent matrices have the same signature, which gives the result for $L'$.

user8675309
  • Why is the Perron root equal to 1 for B? Is it because we can scale everything so that it turns to 1 anyway? – giangian Apr 07 '22 at 18:18
  • Suppose Perron root of $B$ is $\neq 1$ -- then what are the eigenvalues of $I-B$? Note: by congruence to $L$ there need to be $n-1$ positive eigenvalues and exactly $1$ eigenvalue equal to zero (and the Perron root of $B$ is necessarily the largest of $B$'s eigenvalues, hence the smallest for $I-B$). – user8675309 Apr 07 '22 at 18:24
  • I see, so it follows from the fact that I-B has one (and only one) eigenvalue at 0, same as the 'original' L? – giangian Apr 07 '22 at 18:28
  • That's right. There is a direct correspondence between $\dim \ker L$ and the algebraic multiplicity of the Perron root of $B$, which is what I was alluding to in the first paragraph when I mentioned counting connected components as another application of this technique. – user8675309 Apr 07 '22 at 18:33
  • Do you have some reference on the theory of Perron-Frobenious? Or any book (like Horn & Johnson is fine) – giangian Apr 07 '22 at 18:38
  • Probably the easiest is the chapter on it in Meyer's Matrix Analysis, but it depends on what you already know (e.g. the theory of countable-state Markov chains and/or renewal theory actually implies most of these results, but this is a very different approach). I would strongly suggest focusing on Perron theory first -- i.e. the theory for positive matrices; the generalization to Perron-Frobenius theory, which covers non-negative matrices, is fairly straightforward (except perhaps for periodicity issues). – user8675309 Apr 07 '22 at 19:00
  • I cannot find the statement about the strict inequality in the case where the min and max row sums are different. Is it trivial, or does it depend on the fact that the matrices are irreducible? – giangian Apr 25 '22 at 15:33
  • If you are not focusing on learning the positive matrix case first, which I strongly recommended, then yes, you must insist on irreducibility (and then do some more work to show it is implied by the positive matrix case). The positive matrix case is ex 8.2.7 in Meyer's Matrix Analysis, and if you solve it carefully you should be able to figure out the equality conditions. There are many texts covering Perron-Frobenius theory, so if you are looking for a reference that doesn't require you to solve any exercises, then you should probably choose something other than Meyer. – user8675309 Apr 25 '22 at 16:22
  • Makes sense, thanks! – giangian Apr 25 '22 at 17:08
  • I almost forgot, Meyer's book comes with a complete set of solutions, so that is an option as well here. His solution with Collatz-Wielandt is a little different than how I think about it but probably the more common approach. Again, positive matrices are where the insight is. – user8675309 Apr 25 '22 at 20:52
  • I formulated a complete proof, also with the help of this question/answer https://math.stackexchange.com/questions/36828/substochastic-matrix-spectral-radius . I updated the question though as I thought this solved my original problem, but it actually doesn't. – giangian Jul 02 '22 at 08:27