
Consider two iid $N \times N$ GOE random matrices¹ $A$ and $B$. Let $W = A^2 + g B^2$ for $g>0$, and let $\lambda_\min(W)$ denote the smallest eigenvalue of $W$.

What is the expected value of the smallest eigenvalue of $W$ in the limit of large $N$, i.e. $\overline{\lambda_\min} := \lim_{N \to \infty} \langle\lambda_\min(W)\rangle$?

Numerics indicate that the limit is finite and satisfies $\overline{\lambda_\min} = O(\min(g,1))$.

Additionally, a subcase of interest is the asymptotic scaling of $\overline{\lambda_\min}$ in the limit of small $g$. In this limit we have the upper bound $\overline{\lambda_\min} \leq g \langle v^T B^2 v \rangle = g$, where $v$ is the normalised eigenvector associated with the smallest eigenvalue of $A^2$. I suspect this bound may be tight, i.e. $\overline{\lambda_\min} \sim g$, but it is not clear to me.
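For concreteness, here is a minimal sketch of the kind of numerics referred to above: a Monte Carlo estimate of $\langle\lambda_\min(A^2 + gB^2)\rangle$ at finite $N$. The function names are my own, and the GOE normalisation matches the covariance in the footnote below.

```python
import numpy as np

def goe(n, rng):
    """Sample an n x n GOE matrix with covariance
    <A_ij A_nm> = (delta_in delta_jm + delta_im delta_jn) / n,
    i.e. off-diagonal variance 1/n and diagonal variance 2/n."""
    m = rng.standard_normal((n, n))
    return (m + m.T) / np.sqrt(2 * n)

def mean_lambda_min(n, g, trials, seed=0):
    """Monte Carlo estimate of <lambda_min(A^2 + g B^2)> over iid GOE pairs."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(trials):
        a, b = goe(n, rng), goe(n, rng)
        w = a @ a + g * (b @ b)
        samples.append(np.linalg.eigvalsh(w)[0])  # eigvalsh sorts ascending
    return float(np.mean(samples))
```

Sweeping $g$ over several decades at fixed $N$ (and increasing $N$ as $g$ shrinks, since the small-$g$ and large-$N$ limits need not commute at finite size) is how one would probe the conjectured $\overline{\lambda_\min} \sim g$ scaling. Note that $W$ is a sum of two positive semidefinite matrices, so the estimate is always nonnegative, and pathwise $\lambda_\min$ is nondecreasing in $g$ by Weyl's inequality.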


1. Specifically, the matrix elements $A_{ij}$ are jointly Gaussian real random variables with mean $\langle A_{ij} \rangle = 0$ and covariance $\langle A_{ij} A_{nm} \rangle = N^{-1}(\delta_{in}\delta_{jm} + \delta_{im}\delta_{jn})$, and similarly for $B$.

  • Related: https://math.stackexchange.com/questions/1700408/expected-value-of-the-smallest-eigenvalue – ComptonScattering Oct 02 '21 at 02:07
  • I don't think it's right to call this a Wishart matrix. That is a fairly specific family of random matrices, of which I don't believe this is one. – Pax Oct 03 '21 at 17:29
  • @Pax is that correct? I've seen examples in the literature of sums of Wishart matrices referred to as Wishart matrices. But it may be that this is non-standard and not helpful in the present context. – ComptonScattering Oct 03 '21 at 17:31
  • I think the difference is more between a Wishart matrix and the square of a GOE. – Pax Oct 03 '21 at 17:40
  • Oh, you mean the matrix elements of $A$ are not independent, and so $A^2$ is not Wishart? That is probably correct, so I have removed the reference to Wishart. It is maybe a shame, as I suspect the reason $\overline{\lambda_\min} > 0$ here is the same as the reason the Marchenko–Pastur distribution has a finite lower bound $\lambda_- > 0$, so the analogy may have been useful nevertheless. – ComptonScattering Oct 03 '21 at 17:47
  • Unless I am missing something, computation of this requires a lot of random matrix machinery. The limiting spectral measure of $A^2$ can be computed explicitly from the semicircle law for $A$. The limiting spectral measure of $A^2+gB^2$ can be computed using the free convolution, which you can read about in Chapter 5 of this book. Using some known results, for example this, you can derive that the expected smallest eigenvalue, in this case, is at the bottom of the support. – Pax Oct 03 '21 at 17:53
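For what it's worth, the first step of the outlined computation can be made explicit: with the normalisation above, the semicircle law for $A$ is supported on $[-2,2]$, and its push-forward under $x \mapsto x^2$ is the Marchenko–Pastur law with ratio $1$, with density $\sqrt{4-y}/(2\pi\sqrt{y})$ on $(0,4]$. A rough numerical check of this claim (function names are my own; the substitution $y = 4\sin^2\theta$ removes the endpoint singularity from the CDF integral):

```python
import numpy as np

def mp1_cdf_at(t, k=200_000):
    """CDF at t of the density sqrt(4 - y) / (2*pi*sqrt(y)) on (0, 4].
    Substituting y = 4 sin^2(theta) gives the smooth integrand (4/pi) cos^2(theta)
    over [0, arcsin(sqrt(t)/2)]; integrate it with a trapezoid rule."""
    theta = np.linspace(0.0, np.arcsin(np.sqrt(t) / 2.0), k)
    f = (4.0 / np.pi) * np.cos(theta) ** 2
    return float(np.sum(0.5 * (f[1:] + f[:-1])) * (theta[1] - theta[0]))

def empirical_cdf_at(t, n=400, trials=5, seed=1):
    """Fraction of eigenvalues of A^2 below t, for A an n x n GOE matrix."""
    rng = np.random.default_rng(seed)
    count, total = 0, 0
    for _ in range(trials):
        m = rng.standard_normal((n, n))
        a = (m + m.T) / np.sqrt(2 * n)   # GOE with semicircle support [-2, 2]
        count += int((np.linalg.eigvalsh(a @ a) < t).sum())
        total += n
    return count / total
```

At moderate $n$ the empirical and limiting CDFs already agree to a couple of percent; the density diverges like $y^{-1/2}$ at the origin, which is why eigenvalues of $A^2$ accumulate near zero and the $g B^2$ perturbation controls how far $\lambda_\min$ is lifted.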

0 Answers