
In a previous question I wondered how to make a stochastic matrix out of the inverse of a stochastic matrix.


Having had only limited success building such a distribution, I came across a curiosity regarding the absolute value of ${\bf P}^{-1}$:

Consider the matrix

$${\bf P} = \frac 1 4 \left[\begin{array}{cccc} 2&2&0&0\\0&4&0&0\\0&0&4&0\\0&0&3&1 \end{array}\right] \text{ and its inverse: } {\bf P}^{-1} = \left[\begin{array}{rrrr} 2&-1&0&0\\0&1&0&0\\0&0&1&0\\0&0&-3&4 \end{array}\right]$$

Now if we calculate (where $\log_2$ denotes the matrix logarithm, taken in base $2$) $$\log_2(|{\bf P}^{-1}|)\,{\bf P}^{T} = \left[\begin{array}{cccc} 1&1&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&2&2 \end{array}\right]$$

  1. State 1 "leaks" 1 bit of information to itself and state 2.
  2. State 4 "leaks" 2 bits of information to itself and state 3.
  3. Neither state 2 nor state 3 leaks anything, as each sends 100% to itself.
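
To make the computation above concrete, here is a minimal numerical sketch in NumPy/SciPy (not from the original post; it assumes that $\log_2$ of a matrix means the matrix logarithm divided by $\ln 2$):

```python
import numpy as np
from scipy.linalg import logm

# The stochastic matrix P and its inverse from the question
P = np.array([[2, 2, 0, 0],
              [0, 4, 0, 0],
              [0, 0, 4, 0],
              [0, 0, 3, 1]]) / 4.0
P_inv = np.linalg.inv(P)

# Base-2 matrix logarithm of the elementwise absolute value of P^{-1}
L = logm(np.abs(P_inv)) / np.log(2)

print(np.round(L.real, 6))          # [[1 1 0 0], [0 0 0 0], [0 0 0 0], [0 0 2 2]]
print(np.round((L @ P.T).real, 6))  # same matrix: right-multiplying by P^T changes nothing in this example
```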

Do these observations make any sense, or am I just blabbering?


1 Answer


It seems I was mistaken in assuming this should be an expected value.

Anyway, I did not find a way to make sense of it.

If we instead look directly at ${\bf P}^{-1}$, we don't need to do anything particularly strange:

$${\bf P} = \frac 1 4 \left[\begin{array}{rrrr}4&0&0&0\\1&2&1&0\\0&0&4&0\\0&0&2&2\end{array}\right], \hspace{1cm} {\bf P}^{-1} = \left[\begin{array}{rrrr}1&0&0&0\\-0.5&2&-0.5&0\\0&0&1&0\\0&0&-1&2 \end{array}\right]$$ Now instead take the elementwise logarithm of a diagonally rescaled ${\bf P}^{-1}$:

$$\log_2\!\left(\epsilon + {\bf P}^{-1}\backslash\operatorname{diag}(\operatorname{diag}({\bf P}^{-1})) - {\bf I}\right) \approx \left[\begin{array}{rrrr} 0&0&0&0\\-2&0&-2&0\\0&0&0&0\\0&0&-1&0\end{array}\right],$$ where the $\log_2(\epsilon)$ entries have been truncated to zero at some suitable dB threshold.

  1. The second state leaks $-2$ to states 1 and 3.
  2. The fourth state leaks $-1$ to state 3.
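
A minimal sketch of this computation in NumPy (assuming the backslash is MATLAB-style left division, so ${\bf P}^{-1}\backslash{\bf D}$ means solving ${\bf P}^{-1}{\bf X} = {\bf D}$; the $\epsilon$ and the truncation threshold below are illustrative choices):

```python
import numpy as np

# The second example: P and its inverse
P = np.array([[4, 0, 0, 0],
              [1, 2, 1, 0],
              [0, 0, 4, 0],
              [0, 0, 2, 2]]) / 4.0
P_inv = np.linalg.inv(P)

# MATLAB-style left division P^{-1} \ diag(diag(P^{-1})):
# solve P^{-1} X = D, which is the same as X = P @ D
D = np.diag(np.diag(P_inv))
X = np.linalg.solve(P_inv, D)

eps = 1e-12                      # keeps log2 finite at the zero entries
L = np.log2(eps + X - np.eye(4))

L[L < -30] = 0.0                 # crude truncation of the log2(eps) "noise floor"
print(np.round(L, 3))            # [[0 0 0 0], [-2 0 -2 0], [0 0 0 0], [0 0 -1 0]]
```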