
A sequence of real numbers $\{m_k\}$ is the list of moments of some real random variable if and only if the infinite Hankel matrix $$\left(\begin{matrix} m_0 & m_1 & m_2 & \cdots \\ m_1 & m_2 & m_3 & \cdots \\ m_2 & m_3 & m_4 & \cdots \\ \vdots & \vdots & \vdots & \ddots \\ \end{matrix}\right)$$ is positive definite. (Source: https://en.wikipedia.org/wiki/Hamburger_moment_problem)

My question is: given only the first $k$ moments, is it sufficient that the top-left $k \times k$ submatrix of the Hankel matrix be positive definite for there to exist a real random variable with those first $k$ moments?

In other words, can a $k \times k$ positive definite Hankel matrix always be extended to an infinite positive definite Hankel matrix?

keej

2 Answers


A less high-brow approach is to note that a $k\times k$ (strictly) positive definite Hankel matrix $A$ can be extended to a $(k+1)\times(k+1)$ one by appending a $(k+1)$st row and column while preserving positive definiteness. The Hankel structure determines all the entries of the extension except $a_{k,k+1} = a_{k+1,k}$ and $a_{k+1,k+1}$, which I'll denote by $w$ and $z$, respectively. By Sylvester's criterion, it suffices to choose $(w,z)$ so that the determinant $D(w,z)$ of the new matrix is positive (the first $k$ leading principal minors of the extended matrix are those of $A$, hence already positive). Expanding the determinant along the last column, say, we see that $D$ is a linear function of $z$ plus an inhomogeneous quadratic function of $w$. Moreover, the coefficient of $z$ is the determinant of the original matrix, which is positive by assumption. So any choice of $w$ followed by a sufficiently large choice of $z$ makes $D(w,z)>0$.
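As a sanity check, this greedy construction is easy to run numerically. Here is a minimal NumPy sketch (the helper name `extend_hankel` and the choice $w=0$ are my own illustration, not part of the argument): it doubles $z$ until the extended matrix passes a positive-definiteness test.

```python
import numpy as np

def extend_hankel(m):
    """Given moments m_0, ..., m_{2k-2} whose k x k Hankel matrix is
    positive definite, pick m_{2k-1} = 0 and a large enough m_{2k}
    so that the (k+1) x (k+1) Hankel matrix is positive definite."""
    k = (len(m) + 1) // 2   # len(m) = 2k - 1
    w = 0.0                 # free choice of m_{2k-1}
    z = 1.0                 # candidate m_{2k}; doubled until it works
    while True:
        ext = list(m) + [w, z]
        H = np.array([[ext[i + j] for j in range(k + 1)]
                      for i in range(k + 1)])
        # symmetric matrix is positive definite iff all eigenvalues > 0
        if np.linalg.eigvalsh(H).min() > 1e-9:
            return w, z
        z *= 2.0

# Moments m_0, m_1, m_2 of the standard normal: the 2 x 2 Hankel
# matrix [[1, 0], [0, 1]] is positive definite.
w, z = extend_hankel([1.0, 0.0, 1.0])
```

The loop is guaranteed to terminate by the determinant argument above: once $z$ exceeds a threshold depending on $w$, the matrix is positive definite.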

(Thanks to @tristan for pointing out a flaw and its fix in an earlier version of this answer.)

kimchi lover
  • The Hankel structure determines all the entries except for $3$ of them : the entries at positions $(k,k+1)$, $(k+1,k)$ and $(k+1,k+1)$. But i think using your argument one can just set $m_{2k+1}=0$ anyway. – tristan Jul 11 '17 at 11:18
  • Thanks, @tristan, for finding a fix to my blooper. – kimchi lover Jul 11 '17 at 11:20

Yes, this works. There should really be an easy direct argument, but all I can think of right now is the following: The finite moment problem $\int x^n\, d\mu(x)=m_n$, $n=0,1,\ldots , k$, can be solved in the same way as the full problem $n\ge 0$. Namely, run Gram-Schmidt on $1,x,\ldots , x^N$ (with $2N=k$); the orthogonal polynomials will satisfy a three term recurrence $$ a_n p_{n+1} + a_{n-1}p_{n-1} + b_n p_n = xp_n , $$ and the spectral measures of the associated Jacobi matrix will solve the moment problem.

In particular, you can extend to the half line $n\ge 1$ by just making up coefficients $a_n,b_n$ for $n\ge N$ at will, and any such measure will have the given moments $m_0,\ldots , m_k$ (because these only depend on the first coefficients). Its subsequent moments will give you the desired extension.

In fact, there is a description of all solutions to a finite moment problem (sometimes called the Nevanlinna parametrization), which more or less works like this.
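To make the Gram–Schmidt step concrete: the inner products $\langle x^i, x^j\rangle = m_{i+j}$ are read off from the moments, so orthonormalizing the monomials amounts to a Cholesky factorization of the Hankel (Gram) matrix. A small NumPy sketch (the function name and the normal-distribution example are my own illustration):

```python
import numpy as np

def orthonormal_polys(m):
    """Gram-Schmidt on 1, x, ..., x^N under the inner product
    <x^i, x^j> = m_{i+j}, given moments m_0, ..., m_{2N}.
    Returns a matrix P whose row n holds the coefficients of p_n
    (constant term first)."""
    N = (len(m) - 1) // 2
    # Gram matrix of the monomials is exactly the Hankel matrix
    G = np.array([[m[i + j] for j in range(N + 1)]
                  for i in range(N + 1)])
    # G = L L^T with L lower triangular; the rows of inv(L) are the
    # coefficient vectors of the orthonormal polynomials, since
    # inv(L) G inv(L)^T = I.
    L = np.linalg.cholesky(G)
    return np.linalg.inv(L)

# Moments m_0, ..., m_4 of the standard normal: 1, 0, 1, 0, 3.
P = orthonormal_polys([1.0, 0.0, 1.0, 0.0, 3.0])
# Rows recover the orthonormalized Hermite polynomials:
# p_0 = 1, p_1 = x, p_2 = (x^2 - 1)/sqrt(2).
```

This also answers the practical point raised in the comments: no measure is needed to run Gram–Schmidt, only the moments.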

  • Thanks, can you provide an introductory reference for some of this stuff? What does it mean to run Gram-Schmidt when we don't know what the measure is? – keej Jul 11 '17 at 18:42
  • Some standard sources are Simon's review of the moment problem and Teschl's book (both from a spectral theory point of view, there are also more classical moment problem books that I'm less familiar with). –  Jul 11 '17 at 18:45
  • @keej: As for G-S, I only need the scalar products of two polynomials (monomials, even), and I get these from the moments. –  Jul 11 '17 at 18:47
  • Of course, thanks. – keej Jul 11 '17 at 19:06