The (differential) entropy of the multivariate normal distribution is given by:
$$H(\underline{X}) = \frac12 \ln(|2 \pi e \Sigma|)$$
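(Since $|2 \pi e \Sigma| = (2 \pi e)^d \, |\Sigma|$ for a $d$-dimensional covariance $\Sigma$, this is easy to evaluate numerically. A minimal Python sketch, where `mvn_differential_entropy` is just my own helper name:)

```python
import numpy as np

def mvn_differential_entropy(cov):
    """0.5 * ln|2*pi*e*cov| in nats, for a d x d covariance matrix cov."""
    d = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)   # numerically stable ln|cov|
    assert sign > 0, "covariance must be positive definite"
    return 0.5 * (d * np.log(2 * np.pi * np.e) + logdet)
```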
Does the Shannon entropy of a discretization of $\underline{X}$,
$$ H(\underline{X}) = -\sum_{\underline{x}\in \underline{X}} p(\underline{x}) \log_2 p(\underline{x}), $$
approach this in the limit of ever finer binning, i.e. as $p(\underline{x}) \rightarrow 0$ for every bin?
If not, are they related or comparable at all? From personal coding experiments, they do appear to become similar, but only for the standard normal: $$ \underline{X} \sim \mathcal{N}(0,1)$$
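For reference, here is a minimal version of that experiment (Python; the bin width `delta` of the quantization grid and the $\pm 8$ truncation are my own choices):

```python
import numpy as np
from scipy.stats import norm

# Differential entropy of N(0, 1), converted to bits: (1/2) * log2(2*pi*e)
h_bits = 0.5 * np.log2(2 * np.pi * np.e)

for delta in [2.0, 1.0, 0.5, 0.1]:
    edges = np.arange(-8.0, 8.0 + delta, delta)  # bin edges; tails beyond +/-8 are negligible
    p = np.diff(norm.cdf(edges))                 # probability mass in each bin
    p = p[p > 0]
    H_bits = -np.sum(p * np.log2(p))             # Shannon entropy of the binned distribution
    print(f"delta = {delta:4.1f}   H = {H_bits:.4f} bits   h = {h_bits:.4f} bits")
```

With unit bin width the two numbers come out close to each other, which is the similarity I am referring to.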
But I am not aware of any derivation explaining the relationship. Does one exist?