Questions tagged [covariance]

Questions about covariance, a measure of (linear) association between two random variables.

Covariance is a measure of how much two random variables vary together, i.e. of their linear association. If the variables are independent, their covariance is zero; the converse is not true in general, because covariance detects only linear dependence. The following identity makes this concrete:

The covariance of the random variables $X$ and $Y$ is the difference between the expected value of their product, $E(XY)$, and the product of their expected values, $E(X)E(Y)$:

\begin{align*} \sigma(X,Y) = E(XY)-E(X)E(Y) \end{align*}

If $X$ and $Y$ are independent, then $E(XY)=E(X)E(Y)$, so the covariance is zero. The more strongly the variables move together linearly, the larger the magnitude of this difference; its sign indicates whether they tend to move in the same direction (positive) or in opposite directions (negative).
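As a minimal numerical sketch of this identity (Python with NumPy is our choice here, not something the tag wiki prescribes), one can estimate $E(XY)-E(X)E(Y)$ from simulated samples:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Independent X and Y: the estimated covariance should be near zero.
x = rng.normal(size=n)
y = rng.normal(size=n)
print(np.mean(x * y) - np.mean(x) * np.mean(y))  # ~ 0

# Dependent pair: Y = X + noise, so Cov(X, Y) = Var(X) = 1.
y_dep = x + 0.5 * rng.normal(size=n)
print(np.mean(x * y_dep) - np.mean(x) * np.mean(y_dep))  # ~ 1
```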

Though the defining formula for covariance is \begin{align*} \sigma(X,Y) = E \left[ \left(X-E(X)\right) \left(Y-E(Y)\right) \right] \end{align*}

it reduces to the formula above by linearity of expectation (whenever the expectations involved exist):

\begin{align*} \sigma(X,Y) &= E \left[ \left(X-E(X)\right) \left(Y-E(Y)\right) \right] \\ &= E \left[ X Y - X E(Y) - E(X) Y + E(X) E(Y) \right] \\ &= E (X Y) - E(X) E(Y) - E(X) E(Y) + E(X) E(Y) \\ &= E (X Y) - E(X) E(Y). \end{align*}
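As a quick sanity check (again a NumPy sketch added for illustration, not part of the original wiki), both expressions yield the same sample covariance up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)
y = 2 * x + rng.normal(size=10_000)  # Cov(X, Y) = 2 Var(X) = 2

# Definition: E[(X - E(X))(Y - E(Y))]
cov_def = np.mean((x - x.mean()) * (y - y.mean()))
# Expanded form: E(XY) - E(X)E(Y)
cov_alt = np.mean(x * y) - x.mean() * y.mean()

assert np.isclose(cov_def, cov_alt)
print(cov_def, cov_alt)  # both ~ 2
```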

Also, for two random vectors $\mathbb{X}$ and $\mathbb{Y}$, the (cross-)covariance matrix is defined as the matrix whose $(i,j)$ entry is the covariance $\sigma(X_i, Y_j)$; compactly, $\sigma(\mathbb{X},\mathbb{Y}) = E\left[\left(\mathbb{X}-E(\mathbb{X})\right)\left(\mathbb{Y}-E(\mathbb{Y})\right)^T\right]$.
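To illustrate (a sketch under the same NumPy assumption), the cross-covariance matrix can be estimated by centering the sample vectors and averaging the outer products:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
X = rng.normal(size=(3, n))           # samples of a 3-dimensional random vector
Y = X[:2] + rng.normal(size=(2, n))   # a 2-dimensional vector correlated with X

# E[(X - E(X))(Y - E(Y))^T]: entry (i, j) estimates Cov(X_i, Y_j).
Xc = X - X.mean(axis=1, keepdims=True)
Yc = Y - Y.mean(axis=1, keepdims=True)
cross_cov = Xc @ Yc.T / n             # shape (3, 2)
print(cross_cov)
```

Since $Y_j$ was built from $X_j$ for $j = 1, 2$, the top $2\times 2$ block of the result is close to the identity and the last row is close to zero.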


2116 questions
167
votes
3 answers

Why is the eigenvector of a covariance matrix equal to a principal component?

If I have a covariance matrix for a data set and I multiply it by one of its eigenvectors, say the eigenvector with the highest eigenvalue, the result is the eigenvector or a scaled version of it. What does this really…
44
votes
4 answers

What does Determinant of Covariance Matrix give?

I am representing my 3D data using its sample covariance matrix. I want to know what the determinant of the covariance matrix represents. If the determinant is positive, zero, negative, high positive, or high negative, what does it mean or…
orange14
43
votes
4 answers

Weak Law of Large Numbers for Dependent Random Variables with Bounded Covariance

I'm currently stuck on the following problem which involves proving the weak law of large numbers for a sequence of dependent but identically distributed random variables. Here's the full statement: Let $(X_n)$ be a sequence of dependent…
34
votes
4 answers

When does the inverse of a covariance matrix exist?

We know that a square matrix is a covariance matrix of some random vector if and only if it is symmetric and positive semi-definite (see Covariance matrix). We also know that every symmetric positive definite matrix is invertible (see Positive…
23
votes
4 answers

unbiased estimate of the covariance

How can I prove that $$ \frac 1 {n-1} \sum_{i=1}^n (X_i - \bar X)(Y_i-\bar Y) $$ is an unbiased estimate of the covariance $\operatorname{Cov}(X, Y)$ where $\bar X = \dfrac 1 n \sum_{i=1}^n X_i$ and $\bar Y = \dfrac 1 n \sum_{i=1}^n Y_i$ and $(X_1,…
21
votes
4 answers

Is the variance of the mean of a set of possibly dependent random variables less than the average of their respective variances?

Is the variance of the mean of a set of possibly dependent random variables less than or equal to the average of their respective variances? Mathematically, given random variables $X_1, X_2, ..., X_n$ that may be dependent: Let $\bar{X} =…
19
votes
1 answer

Show that the multinomial distribution has covariances ${\rm Cov}(X_i,X_j)=-r p_i p_j$

If $(X_1,\cdots, X_n)$ is a vector with a multinomial distribution, prove that $\text{Cov}(X_i,X_j)=-rp_ip_j$, $i\neq j$, where $r$ is the number of trials of the experiment and $p_i$ is the probability of success for the variable…
17
votes
1 answer

Why are the eigenvalues of a covariance matrix equal to the variance of its eigenvectors?

This assertion came up in a Deep Learning course I am taking. I understand intuitively that the eigenvector with the largest eigenvalue will be the direction in which the most variance occurs. I understand why we use the covariance matrix's…
16
votes
1 answer

"we note that the matrix Σ can be taken to be symmetric, without loss of generality"

I'm reading the book Pattern Recognition and Machine Learning by Christopher Bishop, and on page 80, with regard to the multivariate gaussian distribution: $$ \mathcal{N}(\mathbf{x} | \boldsymbol{\mu}, \boldsymbol{\Sigma}) = …
16
votes
1 answer

Understanding the definition of the covariance operator

Let $\mathbb H$ be an arbitrary separable Hilbert space. The covariance operator $C:\mathbb H\to\mathbb H$ between two $\mathbb H$-valued zero mean random elements $X$ and $Y$ with $\operatorname E\|X\|^2<\infty$ and $\operatorname E\|Y\|^2<\infty$…
Cm7F7Bb
13
votes
1 answer

Covariance of increasing functions of random variables

Let $X$ be a random variable and $f, g: \mathbb{R} \rightarrow \mathbb{R}$ be increasing functions. Show that $cov(f(X), g(X)) \ge 0$. The following hint was also provided: Assume $X$, $Y$ are independent and identically distributed, then show…
elbarto
13
votes
1 answer

Is a symmetric positive definite matrix always diagonally dominant?

A Hermitian diagonally dominant matrix $A$ with real non-negative diagonal entries is positive semidefinite. Is it possible to have a Hermitian matrix be positive semidefinite/definite and not be diagonally dominant? In other words, if I know that…
muon
13
votes
1 answer

Uncorrelated successive differences of martingale

I read somewhere that given a martingale $(X_n)$, the successive differences of the martingale series are uncorrelated, namely $X_i −X_{i−1}$ is uncorrelated with $X_j −X_{j−1}$ for $i \neq j$. I tried showing this with the law of total covariance…
user19164
12
votes
1 answer

How to tell if a matrix is a covariance matrix?

How can we know that these matrices are valid covariance matrices? $$ C= \begin{pmatrix} 1 & -1 & 2 \\ -1 & 2 & -1 \\ 2 & -1 & 1 \\ \end{pmatrix} \\ C= \begin{pmatrix} 4 & -1 & 1 \\ …
sara
10
votes
1 answer

Constructing a probability measure on the Hypercube with given moments

Let $H = [-1, 1]^d$ be the $d$-dimensional hypercube, and let $\mu \in \text{int} H$. Under these conditions, I can explicitly construct a tractable probability measure $P$, supported on $H$, which has $\mu$ as its mean. For my purposes,…