
I am currently working on implementing the model EEG_DMNet. For pre-processing it calls for using differential entropy,

$$ h(X) = -\int_{-\infty}^{\infty} p(x) \log p(x) \, dx $$

Assuming the data I am using is Gaussian-distributed, the formula simplifies to

$$ h(X) = \frac{1}{2} \log(2\pi e \sigma^2) $$

My problem is that I do not understand what "using differential entropy" means here. The shape of the input is

$$ x \in \mathbb{R}^{n_C \times n_T} $$

and the output shape should be the same. When I try to add a pre-processing layer in Keras, the output shape is not correct:

import numpy as np
import tensorflow as tf

def call(self, inputs):
    # Variance over the last (time) axis -> shape (..., 1), not the input shape
    variance = tf.math.reduce_variance(inputs, axis=-1, keepdims=True)
    differential_entropy = 0.5 * tf.math.log(2 * np.pi * np.e * variance)
    return differential_entropy

1 Answer


Maybe they take a sliding-window approach like this paper? For each point in the EEG (for a given channel), crop a window of data and output a scalar DE. If you slide the window over all of the data, you end up with an output shape close to the input shape (or exactly the same shape, if you handle the edges). The papers your study references when describing the DE part might explain this in more detail. A rough version of the idea is sketched below.
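Here is a minimal sketch of that idea, assuming the Gaussian DE formula per window, a hypothetical window length `win`, and reflect padding at the edges (the paper may use different choices for all three):

    import numpy as np
    import tensorflow as tf

    def sliding_window_de(x, win=64):
        # x: (trials, channels, time); win is a hypothetical window length
        pad_l, pad_r = win // 2, (win - 1) // 2
        # Reflect-pad the time axis so the output keeps the input length
        x = tf.pad(x, [[0, 0], [0, 0], [pad_l, pad_r]], mode="REFLECT")
        # One window of length `win` per original time step
        frames = tf.signal.frame(x, frame_length=win, frame_step=1, axis=-1)
        variance = tf.math.reduce_variance(frames, axis=-1)
        return 0.5 * tf.math.log(2.0 * np.pi * np.e * variance)

    x = tf.random.normal((8, 22, 256))  # e.g. 8 trials, 22 channels, 256 samples
    print(sliding_window_de(x).shape)   # (8, 22, 256) -- same shape as the input

Because the padding adds `win - 1` samples in total, every original time step gets a full window, so the output time length matches the input.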

Alternatively, could they compute DE over the whole time series per channel? That would result in a vector of $n_C$ values which could be used to normalise $x$ per channel, as in the second sketch below. Not sure if that makes sense in this domain, as I am not familiar with EEG preprocessing.
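For completeness, a rough sketch of that per-channel variant, again assuming the Gaussian formula; the final scaling step is only a guess at how such a vector might be applied:

    import numpy as np
    import tensorflow as tf

    def per_channel_de(x):
        # x: (trials, channels, time) -> one DE value per channel
        variance = tf.math.reduce_variance(x, axis=-1, keepdims=True)
        return 0.5 * tf.math.log(2.0 * np.pi * np.e * variance)

    x = tf.random.normal((8, 22, 256))
    de = per_channel_de(x)  # shape (8, 22, 1)
    # Hypothetical per-channel normalisation; broadcasts over the time axis.
    # Purely illustrative: DE can be zero or negative for low-variance channels.
    x_scaled = x / de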