
Let $M$ be an irreducible, symmetric matrix with some negative entries such that $M^k>0$ for all $k>k_0$, with row sums $\sum_j m_{ij}=1$, $m_{ij} \in \mathbb{R}$, and spectral radius $\rho(M)=1$. Multiply it by a diagonal matrix $D={\rm diag}(d_{ii})$ where $0<d_{ii}\leq 1$ with at least one $d_{ii}<1$. Is there an easy way to show $\rho(DM)<1$? I know that this result is true as stated.

Similar questions have been asked for matrices $M$, but only with nonnegative entries, e.g., Substochastic matrix spectral radius.
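For concreteness, here is a small numerical sketch of the claim. The matrix construction below is my own hypothetical example (not from the question): a symmetric $M$ with row sums $1$, one negative entry, $\rho(M)=1$, and $M^2>0$ entrywise, built as $M=\frac{1}{3}J + 0.8\,\frac{vv^T}{2}$ with $v\perp\mathbf 1$.

```python
import numpy as np

# Hypothetical example (not from the post): symmetric, row sums 1,
# some negative entries, rho(M) = 1, and M^2 entrywise positive.
n = 3
J = np.ones((n, n)) / n              # rank-1 projection onto the all-ones vector
v = np.array([1.0, -1.0, 0.0])       # orthogonal to the all-ones vector
M = J + 0.8 * np.outer(v, v) / 2.0   # eigenvalues: 1, 0.8, 0

assert np.allclose(M, M.T)                        # symmetric
assert np.allclose(M.sum(axis=1), 1.0)            # row sums equal 1
assert (M < 0).any()                              # has negative entries
assert (np.linalg.matrix_power(M, 2) > 0).all()   # eventually positive

D = np.diag([1.0, 1.0, 0.5])                      # 0 < d_ii <= 1, one d_ii < 1
rho = max(abs(np.linalg.eigvals(D @ M)))
print(rho)  # strictly less than 1 (here 5/6)
```

Since $M$ has eigenvalues $1, 0.8, 0$, its powers converge to $\frac{1}{3}J > 0$; for this particular $D$ the spectral radius of $DM$ comes out to exactly $5/6$, consistent with the claimed strict inequality.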

  • How can a stochastic matrix have negative entries? – Rodrigo de Azevedo May 09 '23 at 10:50
  • @charmd It is true because of the property of irreducibility: the multiplication of a row by a value less than 1 is propagated throughout the matrix. The matrix $DM$ is a nilpotent matrix. I don't have a rigorous proof for this. – Desperado May 09 '23 at 11:02
  • The question as stated is unclear. You have to define "eventually stochastic matrix". – parsiad May 09 '23 at 15:45
  • To be clear: does "eventually positive" actually mean positive or does it mean non-negative in entries? – user8675309 May 09 '23 at 16:37
  • @Ahsan why did you delete your response to my above question? For this question to make sense it is vital that people see your prior response that $M^k$ has strictly positive entries for some $k$. Also, your newest edit to the original post doesn't make sense, as there is a single standard definition of a stochastic matrix and your allowance of negative entries violates it. The information in the version of the original post prior to this edit and your now-deleted comments were both necessary to make this a complete post. – user8675309 May 10 '23 at 16:19
  • I tried to make a concise question and ended up making it bad. I have tried to improve the statement. – Desperado May 24 '23 at 11:48

1 Answer


I use the OP's definition that $M^k$ is a positive matrix for all $k$ large enough. This means $M$ has a single eigenvalue on the unit circle, which is simple and equal to $1$, with an associated Perron vector that is strictly positive (apply Perron theory to $M^k$, which thus has a single, simple eigenvalue $1$ on the unit circle, and work backwards).

WLOG suppose $d_n \in(0,1)$. Now, using the operator 2-norm, write
$\big \Vert DM\big\Vert_2 = \max_{\mathbf x , \mathbf y \in S^{n-1}}\big\vert \mathbf x^TD M\mathbf y\big\vert \leq \max_{\mathbf x , \mathbf y \in S^{n-1}}\big\Vert D\mathbf x\big \Vert_2\big\Vert M\mathbf y\big \Vert_2\leq 1 \cdot 1 = 1,$
where the first inequality is Cauchy–Schwarz, met with equality iff $ \alpha \cdot D\mathbf x = M\mathbf y$ for some scalar $\alpha$, and the second is met with equality iff $ D\mathbf x =\mathbf x\implies x_n=0$ and $M\mathbf y = \mathbf y \implies \mathbf y$ is a positive vector (the Perron vector). So the upper bound being met with equality would imply $\alpha \cdot \mathbf x =\mathbf y$, where the left-hand side has a zero in its $n$th component and the vector on the right is positive. This is impossible.

Conclude $\lambda_\text{max modulus}(DM)\leq \big \Vert DM\big\Vert_2\lt 1$, i.e., $\rho(DM)<1$.
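As a numerical sanity check on this inequality chain, one can compare the spectral radius of $DM$ with its operator 2-norm. The example matrix below is a hypothetical construction of my own, chosen to satisfy the hypotheses (symmetric, row sums 1, $\rho(M)=1$, eventually positive powers):

```python
import numpy as np

# Hypothetical symmetric test matrix: M = J/3 + 0.8 * vv^T / 2,
# eigenvalues 1, 0.8, 0; Perron vector is the all-ones vector.
J = np.ones((3, 3)) / 3
v = np.array([1.0, -1.0, 0.0])
M = J + 0.8 * np.outer(v, v) / 2.0
D = np.diag([1.0, 1.0, 0.5])          # d_33 < 1 plays the role of d_n

rho = max(abs(np.linalg.eigvals(D @ M)))
op_norm = np.linalg.norm(D @ M, 2)    # largest singular value of DM

# The answer's chain: rho(DM) <= ||DM||_2 < 1.
assert rho <= op_norm + 1e-12
assert op_norm < 1
print(rho, op_norm)
```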

user8675309
  • Hi, you assume $d_n \in (0,1)$, but $d_n$ can be 1. If $d_n \in (0,1)$ is assumed for every diagonal entry, then the proof is a lot simpler: $\rho(DM) \leq \rho(D)\rho(M)$ with $\rho(M)=1$ and $\rho(D)<1$, where $\rho(\cdot)$ is the spectral radius of a matrix. – Desperado May 09 '23 at 18:56
  • No. By $d_n$ I meant the $n$th diagonal component of $D$ -- this would be $d_{n,n}$ in your notation. The spectral radius of $D$ is assumed to be $1$. If the Without Loss of Generality is confusing, you can replace that $n$ with "$r$ for some $r\in \{1,2,\dots,n\}$" in the two places it comes up. – user8675309 May 09 '23 at 19:14
  • Hi, sorry, I have one more question. I know I said $M$ is symmetric, but it looks like your answer works for any $M$ with a positive left eigenvector. However, I have an example where $M=\begin{bmatrix} 0.6683 & -0.5264 & 0.2627 & 0.5954\\ 0.0580 & 0 & 0.3073 & 0.6347\\ 0.1093 & 0.1965 & 0.9058 & -0.2115\\ 0.5888 & -0.4422 & 0.1360 & 0.7173 \end{bmatrix}$ and $D={\rm diag}([0.9413, 0.5038, 0.6997, 0.9123])$ result in $\rho(DM)>1$. So what is the catch here? Is there any restriction on $M$? – Desperado May 09 '23 at 20:55
  • In your proof, I think you need the simplicity of the eigenvalue $1$ to justify that $\Vert M\mathbf y\Vert_2=1\implies M\mathbf y=\mathbf y$, because in general, a singular vector of a symmetric matrix is not necessarily an eigenvector. It seems that the proof can be made simpler if instead of $\Vert DM\Vert_2$ you consider $\mathbf v^TDM\mathbf v$ for an eigenvector $\mathbf v$ of $DM$ corresponding to a dominant eigenvalue. – user1551 May 09 '23 at 21:20
  • There are 2 things. (1) we need the only eigenvalue on unit circle to be $=1$ and simple-- this is why I explicitly asked about entries becoming positive vs non-negative in a comment before answering – user8675309 May 09 '23 at 22:32
  • (2) We can do a diagonal similarity transform to symmetrize the Perron vector (see the end of here: https://math.stackexchange.com/questions/3863953/lambda-max-geq-n-for-a-positive-reciprocal-matrix/3864520#3864520 ), which does not affect $D$ -- i.e., one could WLOG assume the Perron vector is a singular vector (I think this is related to @user1551's comment), but this doesn't say whether there are other $\sigma_i \geq 1$... we need more structure to make that conclusion, and typically what this means is (i) $M$ is normal [including symmetric] or (ii) $M$ is a positive matrix. – user8675309 May 09 '23 at 22:35
  • @user8675309 About the proof: for $D\mathbf{x}=\mathbf{x}$, suppose $D={\rm diag}([1, 0.5, 1, 1])$; I can choose $\mathbf{x}=[1, 0, 1, 1]/\sqrt{3}$ or simply $\mathbf{x}=[0, 0, 0, 1]$ such that $D\mathbf{x}=\mathbf{x}$ holds. – Desperado May 24 '23 at 09:12
  • Yes. As I said two weeks ago in the above: we have $D\mathbf x = \mathbf x$ implies $x_n=0$, since $D$ has some diagonal component $\in (0,1)$ and I had assumed Without Loss of Generality that it was the $n$th component. In my original comment to this answer I further explained how to interpret the Without Loss of Generality assumption in case you were unfamiliar. Please re-visit that comment. – user8675309 May 24 '23 at 16:06
  • Is there anything I can say about $\rho(DM)$ if I know that $\rho(DM_{\rm sym})<1$, where $M_{\rm sym}=\frac{M+M^T}{2}$? I tried thinking of it as $(DM_{\rm sym}+DM_{\rm skewsym})^t=\sum_{k=0}^{t} \binom{t}{k} \left(DM_{\rm sym}\right)^{t-k}\left(DM_{\rm skewsym}\right)^{k}$, but I can't say anything about $DM_{\rm skewsym}$. The idea is to let $t\rightarrow \infty$ and check whether $(DM_{\rm sym}+DM_{\rm skewsym})^t\rightarrow 0$. – Desperado Jun 22 '23 at 08:37
  • This is 6 weeks old, so the nuances involved when I wrote this up have faded. But at a high level, the structure is this: we know how eigenvalues behave under multiplication, but not so for singular values; since we have $M^k$ being positive for some $k$, we have powerful information on the spectrum of $M$. As I said in the comment that begins with '(2)', the issue is: without normality of $M$ you're going to have trouble relating singular values of $M$ with its eigenvalues, and I don't see any of that in your most recent comments. Your binomial expansion is wrong BTW unless those 2 matrices commute. – user8675309 Jun 22 '23 at 16:28
  • for clarity: the prior comment should have said "we know how eigenvalues behave under self multiplication " – user8675309 Jun 22 '23 at 17:25
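The non-symmetric counterexample given in the comments above can be checked numerically. In this sketch the entries of $M$ and $D$ are copied from that comment, so the conclusion that $\rho(DM)>1$ rests on the commenter's reported values:

```python
import numpy as np

# Matrix and D from the comment: M is NOT symmetric, so the answer's
# argument does not apply, and the product can have spectral radius > 1.
M = np.array([
    [0.6683, -0.5264, 0.2627,  0.5954],
    [0.0580,  0.0000, 0.3073,  0.6347],
    [0.1093,  0.1965, 0.9058, -0.2115],
    [0.5888, -0.4422, 0.1360,  0.7173],
])
D = np.diag([0.9413, 0.5038, 0.6997, 0.9123])

assert np.allclose(M.sum(axis=1), 1.0, atol=1e-3)  # row sums ~ 1
assert not np.allclose(M, M.T)                     # not symmetric

rho = max(abs(np.linalg.eigvals(D @ M)))
print(rho)  # exceeds 1, as the comment reports
```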
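The final point in the thread, that the binomial expansion of $(A+B)^t$ is only valid when $AB=BA$, is easy to see concretely with two non-commuting matrices:

```python
import numpy as np

# The binomial identity (A+B)^2 = A^2 + 2AB + B^2 requires AB = BA.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0]])

lhs = (A + B) @ (A + B)           # equals the identity matrix here
rhs = A @ A + 2 * A @ B + B @ B   # the (invalid) binomial expansion

assert not np.allclose(A @ B, B @ A)   # A and B do not commute
assert not np.allclose(lhs, rhs)       # so the expansion fails
print(lhs)
print(rhs)
```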