it is known that, for any $\alpha > 0$ and any vector sequence $\left\{\mathbf{v}_i\right\}_{i=1}^{n} \subset \mathbb{R}^d$,
$$ \sum_{i=1}^{n} \mathbf{v}_i^\mathsf{T} \left( \alpha \mathbf{I} + \sum_{k=1}^{i} \mathbf{v}_k \mathbf{v}_k^\mathsf{T} \right)^{-1} \mathbf{v}_i \leq \log \det \left( \alpha \mathbf{I} + \sum_{i=1}^{n} \mathbf{v}_i \mathbf{v}_i^\mathsf{T} \right) - \log \det \left( \alpha \mathbf{I} \right) $$
due to the concavity of $\log \det \left(\cdot\right)$ on the cone of symmetric positive definite $d \times d$ matrices.
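the inequality above is easy to sanity-check numerically; here is a quick numpy sketch (my own check, not from any reference) that draws random vectors and compares the two sides for one instance:

```python
# numerical sanity check of the log-det potential bound above
import numpy as np

rng = np.random.default_rng(0)
d, n, alpha = 5, 200, 0.1
V = rng.normal(size=(n, d))

A = alpha * np.eye(d)                    # A_0 = alpha I
lhs = 0.0
for v in V:
    A += np.outer(v, v)                  # A_i = A_{i-1} + v_i v_i^T
    lhs += v @ np.linalg.solve(A, v)     # v_i^T A_i^{-1} v_i

# rhs = log det(A_n) - log det(alpha I)
rhs = np.linalg.slogdet(A)[1] - d * np.log(alpha)
print(f"lhs = {lhs:.4f} <= rhs = {rhs:.4f}: {lhs <= rhs}")
```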
proof sketch:
- let $\mathbf{A}_i := \alpha \mathbf{I} + \sum_{k=1}^{i} \mathbf{v}_k \mathbf{v}_k^\mathsf{T}$. note that $\mathbf{A}_{i} = \mathbf{A}_{i-1} + \mathbf{v}_i \mathbf{v}_i^\mathsf{T}$.
- for any invertible matrix $\mathbf{M}$, $\nabla \log \det \left( \mathbf{M} \right) = \mathbf{M}^{-\mathsf{T}}$ (so $\mathbf{M}^{-1}$ here, since every $\mathbf{A}_i$ is symmetric), which can be proved by the chain rule, the definition of the adjugate matrix, and the relation between the adjugate matrix and the inverse.
- for any differentiable concave function $f$, $f(\mathbf{y}) - f(\mathbf{x}) \leq \left\langle \nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x} \right \rangle$.
- $\mathbf{v}_i^\mathsf{T} \mathbf{A}_i^{-1} \mathbf{v}_i = \left\langle \mathbf{A}_i^{-1}, \mathbf{v}_i \mathbf{v}_i^\mathsf{T} \right \rangle _{\text{Frobenius}} = \left\langle \nabla \log \det \left( \mathbf{A}_i \right), \mathbf{A}_i - \mathbf{A}_{i-1} \right \rangle _{\text{Frobenius}} \leq \log \det \left( \mathbf{A}_i \right) - \log \det \left( \mathbf{A}_{i-1} \right)$.
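the per-round inequality in the last bullet can also be checked numerically; a small numpy sketch (my own) that verifies $\mathbf{v}_i^\mathsf{T} \mathbf{A}_i^{-1} \mathbf{v}_i \leq \log \det \left( \mathbf{A}_i \right) - \log \det \left( \mathbf{A}_{i-1} \right)$ over random updates:

```python
import numpy as np

rng = np.random.default_rng(1)
d, alpha = 4, 0.5
A_prev = alpha * np.eye(d)               # A_0 = alpha I
violations = 0
for _ in range(100):
    v = rng.normal(size=d)
    A = A_prev + np.outer(v, v)          # A_i
    step = v @ np.linalg.solve(A, v)     # v_i^T A_i^{-1} v_i
    gain = np.linalg.slogdet(A)[1] - np.linalg.slogdet(A_prev)[1]
    if step > gain + 1e-9:               # concavity step should never fail
        violations += 1
    A_prev = A
print(f"violations: {violations}")
```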
is it possible to prove a similar bound for $\sum_{i=1}^{n} \mathbf{v}_i^\mathsf{T} \left( \left( \alpha \mathbf{I} + \sum_{k=1}^{i} \mathbf{v}_k \mathbf{v}_k^\mathsf{T} \right)^{-1} \right)^{2} \mathbf{v}_i$?
intuitively, $\sum_{i=1}^{n} \mathbf{v}_i^\mathsf{T} \left( \alpha \mathbf{I} + \sum_{k=1}^{i} \mathbf{v}_k \mathbf{v}_k^\mathsf{T} \right)^{-1} \mathbf{v}_i$ looks like $\sum_{i=1}^{n} \frac{x_i^2}{\alpha + \sum_{k=1}^{i} x_k^2}$, which looks like $\sum_{i=1}^{n} \frac{1}{\alpha + i} = O\left( \log n \right)$.
similarly, $\sum_{i=1}^{n} \mathbf{v}_i^\mathsf{T} \left( \left( \alpha \mathbf{I} + \sum_{k=1}^{i} \mathbf{v}_k \mathbf{v}_k^\mathsf{T} \right)^{-1} \right)^{2} \mathbf{v}_i$ looks like $\sum_{i=1}^{n} \frac{x_i^2}{\left( \alpha + \sum_{k=1}^{i} x_k^2 \right)^2}$, which looks like $\sum_{i=1}^{n} \frac{1}{\left( \alpha + i \right)^2} = O\left( 1 \right)$. (when $\alpha \approx 0$, the last summation converges to about $\frac{\pi^2}{6}$ as $n \to \infty$.)
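in the scalar case the second intuition can be made exact: writing $S_i = \alpha + \sum_{k=1}^{i} x_k^2$, we have $\frac{x_i^2}{S_i^2} \leq \frac{S_i - S_{i-1}}{S_i S_{i-1}} = \frac{1}{S_{i-1}} - \frac{1}{S_i}$, so the scalar second-order sum telescopes to at most $\frac{1}{\alpha}$. a quick numpy illustration (my own sketch) of both scalar sums:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, n = 1.0, 100_000
x2 = rng.exponential(size=n)          # plays the role of x_i^2
S = alpha + np.cumsum(x2)             # S_i = alpha + sum_{k<=i} x_k^2
first_order = np.sum(x2 / S)          # grows like log n
second_order = np.sum(x2 / S**2)      # telescopes: at most 1/alpha
print(f"first-order sum:  {first_order:.3f}")
print(f"second-order sum: {second_order:.3f} (bound 1/alpha = {1/alpha:.3f})")
```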
my attempt:
by Sherman–Morrison formula / Woodbury matrix identity, $\mathbf{A}_i^{-1} \mathbf{v}_i = \frac{1}{1 + \mathbf{v}_i^\mathsf{T} \mathbf{A}_{i-1}^{-1} \mathbf{v}_i} \mathbf{A}_{i-1}^{-1} \mathbf{v}_i$, $\mathbf{v}_i^\mathsf{T} \left( \mathbf{A}_i^{-1} \right)^{2} \mathbf{v}_i = \left\| \mathbf{A}_i^{-1} \mathbf{v}_i \right\|_2^2 = \left( \frac{1}{1 + \mathbf{v}_i^\mathsf{T} \mathbf{A}_{i-1}^{-1} \mathbf{v}_i} \right)^2 \left\| \mathbf{A}_{i-1}^{-1} \mathbf{v}_i \right\|_2^2$.
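the rank-one-update identity used here is easy to verify directly; a numpy check (my own sketch) that $\mathbf{A}_i^{-1} \mathbf{v}_i = \frac{1}{1 + \mathbf{v}_i^\mathsf{T} \mathbf{A}_{i-1}^{-1} \mathbf{v}_i} \mathbf{A}_{i-1}^{-1} \mathbf{v}_i$ for a random positive definite $\mathbf{A}_{i-1}$:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 6
B = rng.normal(size=(d, d))
A_prev = 0.2 * np.eye(d) + B @ B.T    # a random positive definite A_{i-1}
v = rng.normal(size=d)

lhs = np.linalg.solve(A_prev + np.outer(v, v), v)    # A_i^{-1} v
u = v @ np.linalg.solve(A_prev, v)                   # v^T A_{i-1}^{-1} v
rhs = np.linalg.solve(A_prev, v) / (1.0 + u)         # Sherman–Morrison form
print(f"max abs difference: {np.max(np.abs(lhs - rhs)):.2e}")
```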
i don't think this will work as-is: the leftover $\mathbf{A}_{i-1}^{-1} \mathbf{v}_i$ factor mixes consecutive indices, so the sum no longer telescopes.
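for what it's worth, a quick experiment (my own numpy sketch, not a proof) is consistent with the $O(1)$ intuition above: in random trials the squared sum plateaus and stays below $d / \alpha$, which would mirror the $1/\alpha$ behaviour of the scalar analogue:

```python
import numpy as np

rng = np.random.default_rng(4)
d, n, alpha = 5, 2000, 0.5
A = alpha * np.eye(d)                 # A_0 = alpha I
total = 0.0
for _ in range(n):
    v = rng.normal(size=d)
    A += np.outer(v, v)               # A_i
    w = np.linalg.solve(A, v)         # A_i^{-1} v_i
    total += w @ w                    # v_i^T (A_i^{-1})^2 v_i
print(f"squared sum after n={n}: {total:.4f} (d/alpha = {d/alpha:.1f})")
```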