
The following inverts the statements made for the maximum entropy principle in order to posit a pseudo "minimum entropy principle" that is simply the polar opposite of the former.

A continuous random variable that has only one certain outcome is said to have a Shannon entropy of 0, because its outcome is fully certain.

In turn, this zero-entropy variable is said to have maximum information in the sense that unexpected outcomes (all outcomes other than the one certain outcome) carry high information a priori. Distributions with such low entropy are undesirable because they represent the case where strong assumptions are made a priori about the variable's outcomes. Here, we have strongly assumed that the variable will only ever generate the certain outcome and never deviate from it, based perhaps purely on historical observations of the variable.

But what does a zero/low-entropy variable mean ex post, after the empirical outcomes, unexpected or not, are actually realized? Does high information still exist in the zero-entropy distribution? Has its information content somehow dissipated? Has its entropy changed from $0$? How and why?
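
For concreteness, here is a minimal sketch (in Python with numpy; the one-outcome and four-outcome distributions are just hypothetical examples) of the Shannon entropy calculation behind the zero-entropy claim above:

    import numpy as np

    def shannon_entropy(p):
        # Shannon entropy H = -sum p*ln(p) in nats, treating 0*ln(0) as 0
        p = np.asarray(p, dtype=float)
        nz = p[p > 0]
        return max(0.0, float(-np.sum(nz * np.log(nz))))  # clamp away -0.0

    print(shannon_entropy([1.0]))       # 0.0: a single certain outcome
    print(shannon_entropy([0.25] * 4))  # ~1.386 = ln(4): maximum for 4 outcomes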

develarist
  • The first paragraph is just wrong. Where did you get that from? – leonbloy Oct 08 '20 at 14:45
  • The second paragraph is incomprehensible. Where did you read that a "zero-entropy variable" has "maximum information" ?? "Distributions with such low entropy are undesirable" ??? – leonbloy Oct 08 '20 at 14:47
  • The first paragraph is saying $H(X) = -\sum p(x) \ln p(x) = -(1 \times \ln 1) = 0$ (since it's just one outcome, I instead use discretized entropy here for readability purposes). The second paragraph builds off (is the polar opposite of) the first comment received under the following question https://stats.stackexchange.com/questions/479627/why-do-we-want-a-maximum-entropy-distribution-if-it-has-the-lowest-information – develarist Oct 08 '20 at 15:02
  • That does not apply to differential entropy, which is not an actual (Shannon) entropy. The differential entropy of a Dirac delta (a constant) is $-\infty$. See e.g. https://math.stackexchange.com/questions/2552895/uniformly-random-number-on-0-1-has-zero-entropy and https://math.stackexchange.com/questions/1398438/differential-entropy/1398471 (see also the sketch after these comments). – leonbloy Oct 08 '20 at 15:21
  • I will change the question from differential entropy to Shannon entropy then – develarist Oct 08 '20 at 15:24
  • @develarist: this isn't my field at all (although I've always wanted to learn more about it), but I think the idea is that one can only think of Shannon entropy BEFORE the outcome is realized. Once the outcome is realized, there is no more entropy. I think it's a "before the outcome is realized" concept. – mark leeds Oct 09 '20 at 03:00
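
As a rough illustration of the differential-entropy point raised in the comments above (the sigma values below are arbitrary examples): the differential entropy of a Gaussian, $\frac{1}{2}\ln(2\pi e\sigma^2)$, drops toward $-\infty$ as $\sigma \to 0$, i.e. as the distribution approaches a Dirac delta, unlike discrete Shannon entropy, which is bounded below by 0.

    import numpy as np

    # Differential entropy of a Gaussian N(mu, sigma^2) is 0.5 * ln(2*pi*e*sigma^2);
    # it diverges to -infinity as sigma -> 0 (the Dirac-delta limit).
    for sigma in [1.0, 0.1, 0.01, 1e-6]:
        h = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
        print(f"sigma = {sigma:g}: differential entropy = {h:.3f} nats")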

0 Answers