
I am reading An Introduction to Infinite-Dimensional Analysis by Da Prato. Let $H$ be a $d$-dimensional Hilbert space with $d < \infty$, and let $L^+(H)$ be the set of symmetric, positive linear operators on $H$. In the section on finite-dimensional Hilbert spaces the author says:

We are going to define a Gaussian measure $N_{a, Q}$ for any $a \in H$ and any $Q \in L^+(H)$. Let $Q \in L^+(H)$ and let $(e_1, \ldots, e_d)$ be an orthonormal basis of $H$ such that $Qe_k = \lambda_k e_k$, $k = 1, \ldots, d$, for some $\lambda_k \geq 0$. We set $$x_k = \langle x, e_k \rangle, \quad x \in H, \; k = 1, \ldots, d,$$ and we identify $H$ with $\mathbb{R}^d$ through the isomorphism $\gamma$, $$\gamma: H \rightarrow \mathbb{R}^d, \quad x \mapsto \gamma(x) = (x_1, \ldots, x_d).$$ Now we define a probability measure $N_{a,Q}$ on $(\mathbb{R}^d, \mathscr{B}(\mathbb{R}^d))$ by setting $$N_{a, Q} = \prod_{k=1}^d N_{a_k, \lambda_k}.$$

Here $\mathscr{B}(\mathbb{R}^d)$ is the Borel $\sigma$-algebra on $\mathbb{R}^d$.
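Concretely (if I am reading the definition correctly), when every $\lambda_k > 0$ this product measure is the one on $\mathbb{R}^d$ with density $$N_{a,Q}(dy) = \frac{1}{\sqrt{(2\pi)^d \lambda_1 \cdots \lambda_d}}\, \exp\left( -\frac{1}{2} \sum_{k=1}^d \frac{(y_k - a_k)^2}{\lambda_k} \right) dy, \qquad a_k = \langle a, e_k \rangle,$$ i.e. the product of the one-dimensional Gaussian densities of the $N_{a_k, \lambda_k}$.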

Then in a proposition he states that $a \in H$ is the mean of $N_{a,Q}$, in the sense that $$\int_H x\,N_{a,Q}(dx) = a. \tag{1}$$

I am trying to make sense of the integral (1) before moving on to measures on infinite-dimensional Hilbert spaces. My questions are:

  1. What is the $\sigma$-algebra on $H$? Is it the one induced by $\mathscr{B}(\mathbb{R}^d)$ and the isomorphism $\gamma$, i.e. the $\sigma$-algebra generated by sets of the form $\gamma^{-1}(B)$ for $B \in \mathscr{B}(\mathbb{R}^d)$? Or is it the Borel $\sigma$-algebra induced by the inner product on $H$?
  2. The measure $N_{a, Q}$ is defined on $\mathbb{R}^d$ and not on $H$; does that mean that in (1) the measure should actually be $N_{a,Q}(\gamma(dx))$?
  3. How is the integral (1) constructed? When the measure is over an ordered field like $\mathbb{R}$, the usual way to define the Lebesgue integral of a function $f$ is to partition the range and use simple functions. However, how exactly does one partition the range in a Hilbert space, where there is not necessarily an ordering? Based on the answer to another question (Why is the Lebesgue Integral defined through integrals of simple functions?) it seems the ordering is not necessary and one uses simple functions built from indicator functions of the form $\mathbf{1}_{f^{-1}(x_i)}$ where $x_i \in H$. But how does one construct a sequence of simple functions of this form that converges to $f$ without an ordering?
CBBAM

1 Answer

  1. Either; the two coincide, since $\gamma$ is an isometric isomorphism of Hilbert spaces, hence a homeomorphism, so $B \in \mathscr{B}(\mathbb{R}^d)$ if and only if $\gamma^{-1}(B) \in \mathscr{B}(H)$.
  2. Yes, something like that. More precisely, for measurable $f : H \to [0, \infty]$ one defines $\int_{H}f(x)\,N(dx) := \int_{\mathbb{R}^d}f(\gamma^{-1}(y))\,N(dy)$.
  3. The problem is not the measure; the problem is that you are integrating $\int f(x)\,N(dx)$ where $f$ takes values in $H$, which you might not know how to do. If $f$ takes values in $\mathbb{R}^d$, then you just integrate component-wise. You can do that here by defining $\int f(x)\,N(dx) := \gamma^{-1}\left(\int \gamma(f(x))\,N(dx)\right)$. But the systematic way is the Bochner integral, which allows you to integrate $f : \Omega \to V$ when $\Omega$ is a $\sigma$-finite measure space and $V$ is a separable Banach space. See https://mtaylor.web.unc.edu/wp-content/uploads/sites/16915/2018/04/vecint.pdf. If you view the integral as a Bochner integral, then the equality $\int f(x)\,N(dx) = \gamma^{-1}\left(\int \gamma(f(x))\,N(dx)\right)$ is a theorem rather than a definition, since $\gamma$ is a bounded linear map; applying this to $f(x) = x$ is sketched below.
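For instance, taking $f(x) = x$ in this recipe recovers (1) directly (a sketch; it uses that bounded linear maps commute with the Bochner integral, Fubini for the product measure, and the one-dimensional fact $\int_{\mathbb{R}} t\, N_{a_k,\lambda_k}(dt) = a_k$): $$\left\langle \int_H x\, N_{a,Q}(dx),\, e_k \right\rangle = \int_{\mathbb{R}^d} y_k \, \prod_{j=1}^d N_{a_j, \lambda_j}(dy_j) = \int_{\mathbb{R}} y_k\, N_{a_k, \lambda_k}(dy_k) = a_k, \qquad k = 1, \ldots, d,$$ hence $\int_H x\, N_{a,Q}(dx) = \sum_{k=1}^d a_k e_k = a$.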
Mason
  • Thank you, those notes by Taylor seem to be exactly what I was looking for! – CBBAM Jun 05 '24 at 02:00
  • I have been reading those notes of Taylor and I cannot get a good intuition for how he defines the simple functions when constructing the Bochner integral. In particular, I don't understand his choice of coefficient for each indicator function. Should I ask this as a separate question? – CBBAM Jun 05 '24 at 06:26
  • @CBBAM If you use the construction in $V = \mathbb{R}^2$, then you partition $\mathbb{R}^2$ into disjoint squares $I_j = [a_j, b_j) \times [c_j, d_j)$ of side length $\varepsilon$, and you set $f_{\varepsilon} = \operatorname{argmin}_{v \in \overline{I_j}}|v|$ on $f^{-1}(I_j)$. On $V = \mathbb{R}^+$, this is exactly how the approximation is presented in measure theory books (the $\mathbb{R}^+$ case is spelled out below). Obviously $|f(x) - f_{\varepsilon}(x)| \leq \sqrt{2}\varepsilon$ and $|f_{\varepsilon}(x)| \leq |f(x)|$ for all $x \in \mathbb{R}^2$. – Mason Jun 05 '24 at 16:36
  • Thank you very much! – CBBAM Jun 05 '24 at 18:47
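To spell out the $V = \mathbb{R}^+$ case from that comment (a sketch, assuming a measurable $f \geq 0$): the squares become the intervals $I_j = [j\varepsilon, (j+1)\varepsilon)$, the minimizer of $|v|$ over $\overline{I_j}$ is the left endpoint $j\varepsilon$, and the construction reduces to the familiar countably-valued approximation $$f_\varepsilon = \sum_{j=0}^{\infty} j\varepsilon\,\mathbf{1}_{\{j\varepsilon \leq f < (j+1)\varepsilon\}}, \qquad 0 \leq f(x) - f_\varepsilon(x) < \varepsilon \quad \text{for all } x,$$ so no ordering of the target space is used beyond the norm $|\cdot|$.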