Let's start with an example:

Example: Consider the PDFs of a multivariate normal distribution and a multivariate Student $t$-distribution. How can one prove that they are linearly independent as functions from $\mathbb R^n$ to $\mathbb R$?
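For this particular pair, here is the tail-comparison sketch I have in mind, written out for standard parameters (zero location, identity scale, $\nu$ degrees of freedom); I would hope that only the decay rates matter. Suppose $\alpha\, p_N + \beta\, p_T \equiv 0$, where
$$p_N(x) \propto e^{-\|x\|^2/2}, \qquad p_T(x) \propto \left(1 + \frac{\|x\|^2}{\nu}\right)^{-(\nu+n)/2}.$$
Dividing by $p_T(x)$ gives $\alpha\, p_N(x)/p_T(x) + \beta = 0$ for all $x$, and since $p_N(x)/p_T(x) \to 0$ as $\|x\| \to \infty$ (Gaussian decay beats any polynomial decay), letting $\|x\| \to \infty$ forces $\beta = 0$, and then $\alpha = 0$ as well.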
More generally, I'm interested in general proof strategies (or references!) for the following:
Question: How can one prove that a given set of PDFs is linearly independent, possibly under additional assumptions such as unimodality with distinct modes?
To provide some motivation: I'm trying to better understand strict linear independence of probability measures. Probability measures $P_1, \dotsc, P_K$ defined on a space $\mathcal X$ are strictly linearly independent if for every vector $\lambda \neq 0$ there exists a measurable set $A$ such that $$\lambda_1 P_1(A) + \cdots + \lambda_K P_K(A) \neq 0,$$ or, equivalently, $|\lambda_1 P_1 + \cdots + \lambda_K P_K|(\mathcal X) > 0$, where $|\cdot|$ denotes the total variation measure.
In the case of $\mathcal X = \mathbb R^n$ and continuous PDFs $p_1, \dotsc, p_K$, this condition is equivalent to linear independence of the PDFs themselves.
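A sketch of the equivalence, assuming continuity: if $f = \lambda_1 p_1 + \cdots + \lambda_K p_K$ is not identically zero, pick $x_0$ with $f(x_0) \neq 0$; by continuity, $f$ has constant sign on some small ball $A$ around $x_0$, so
$$\lambda_1 P_1(A) + \cdots + \lambda_K P_K(A) = \int_A f(x)\, dx \neq 0.$$
Conversely, if $f$ vanishes identically, then $\int_A f\, dx = 0$ for every measurable $A$.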
A proof for exponential distributions with distinct rates is here, and there exist proof strategies based on asymptotics for normal distributions (e.g., this one or that one). However, I think more general results should follow, perhaps based on the notion of strict linear independence of measures. Intuitively, if I have several distinct unimodal distributions and a given $\lambda \neq 0$, then trying small balls centred around the different modes as candidate sets $A$ may work. However, I have not been able to formalise this argument.
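As a sanity check on specific finite families (not a general proof strategy), one sufficient criterion is easy to automate: if the matrix of density values $M_{ij} = p_j(x_i)$ at some points $x_1, \dotsc, x_K$ is nonsingular, then $M\lambda = 0$ has only the trivial solution, so no nontrivial combination of the densities can vanish everywhere. A minimal sketch in Python with SciPy, using a normal, a wider normal, and a Student $t$ density (the evaluation points are arbitrary choices, and the $2$ in $N(x \mid 0, 2)$ is read as a variance, hence `scale=np.sqrt(2)`):

```python
import numpy as np
from scipy import stats

# Candidate densities: N(0, 1), N(0, 2) (variance 2, so scale = sqrt(2)),
# and a Student t with 3 degrees of freedom.
pdfs = [
    lambda x: stats.norm.pdf(x, loc=0.0, scale=1.0),
    lambda x: stats.norm.pdf(x, loc=0.0, scale=np.sqrt(2.0)),
    lambda x: stats.t.pdf(x, df=3),
]

# Evaluation points; any points giving a nonsingular matrix will do.
points = [0.0, 1.0, 3.0]

# M[i, j] = p_j(x_i). If some nonzero lambda satisfied
# sum_j lambda_j * p_j == 0 everywhere, it would in particular
# satisfy M @ lambda = 0, contradicting nonsingularity.
M = np.array([[p(x) for p in pdfs] for x in points])

# A clearly nonzero determinant certifies linear independence
# (up to floating-point error; a rigorous certificate would bound
# the entries in exact arithmetic).
print(np.linalg.det(M))
```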
For example, let $f(x) = \lambda_1 N(x\mid 0, 1) + \lambda_2 N(x\mid 0, 2)$ be a linear combination of normal PDFs. This $f$ need not correspond to a random variable (e.g., there may be regions where $f(x) < 0$), and I don't know how to define a characteristic function in this case. Could you recommend a reference or show an example in an answer, so that I can understand it better?
– Paweł Czyż May 16 '24 at 13:53
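A sketch addressing the comment above: the characteristic-function approach extends to signed combinations because the Fourier transform is defined for every $f \in L^1(\mathbb R)$, not only for probability densities, and it is injective on $L^1$ (see, e.g., the uniqueness theorem for Fourier transforms in Rudin's Real and Complex Analysis). Reading the $2$ in $N(x \mid 0, 2)$ as a variance,
$$\hat f(t) = \int e^{itx} f(x)\, dx = \lambda_1 e^{-t^2/2} + \lambda_2 e^{-t^2},$$
so if $f \equiv 0$ then $e^{t^2/2} \hat f(t) = \lambda_1 + \lambda_2 e^{-t^2/2} \equiv 0$; letting $t \to \infty$ gives $\lambda_1 = 0$, and hence $\lambda_2 = 0$.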