
I wrote a program that incrementally computes the mean and variance of values generated by the function $a\cos(\omega t)+\mu$.

The program will have to compute these for sums of cosines and sines of the same frequency (the same $\omega$) but with various phases. The result of such a sum is still a cosine or a sine of the same frequency; only the amplitude and phase vary.

I tried changing $\omega$ and the number of iterations, but it did not improve the precision much; the error stays around $10^{-3}$, at best down to $10^{-5}$.

Is there a way I could increase the precision of the mean value, which is used to compute the variance? The true mean and variance are unknown while processing.

I have noticed that the computed mean oscillates around the true mean. Could I use this property to help increase the precision?

In the final application, the variance and mean will change over time and the program will have to update the computed values. The faster it is able to adjust to the new mean and variance values, the better.

I was thinking of simply summing the values over whole periods of the cosine and averaging these sums.
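To make that idea concrete, here is a minimal sketch in Python of the period-averaging approach. For the demo the frequency is chosen so that one period spans exactly 100 samples; in the real problem $\omega$ is unknown, so this only illustrates why whole-period sums cancel the oscillation:

```python
import math

# Demo of the "sum over whole periods" idea.  The frequency is chosen so
# that one period is exactly 100 samples; in the real problem omega is
# unknown and the period need not be an integer number of samples.
A, phi, mu = 1.3, 0.7, 2.0
samples_per_period = 100
omega = 2 * math.pi / samples_per_period

running_sum = 0.0
period_means = []
for t in range(10 * samples_per_period):           # ten whole periods
    x = A * math.cos(omega * t + phi) + mu
    running_sum += x
    if (t + 1) % samples_per_period == 0:           # a full period elapsed
        period_means.append(running_sum / samples_per_period)
        running_sum = 0.0

print(period_means)   # every entry equals mu = 2.0 up to rounding error
```

Over a whole period the cosine term sums to exactly zero, so each per-period mean equals $\mu$ up to floating-point rounding.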

Edit:

To rephrase the problem in a more general and accurate form: I have a sequence of values $x_t$ that follows the function below, where $t$ is a discrete, increasing index representing time.

$$x_t = A\cos(\omega t + \phi) + \mu = a\cos(\omega t) + b\sin(\omega t) + \mu$$

The parameters $A$, $\omega$, $\phi$, and $\mu$ are unknown.

I have to normalize the mean and standard deviation of $x_t$, and for that I need to determine its mean and standard deviation.

A naive mean and variance computation shows that they oscillate, which is expected due to the cosine. I'm looking for a way to increase the precision of the mean and variance, and thus counterbalance the oscillations.
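For reference, averaging over an integer number of whole periods makes the target values explicit: the sinusoidal part averages to zero and $\cos^2$ averages to $\tfrac12$, so

$$\bar{x} = \mu, \qquad \operatorname{Var}(x_t) = \overline{A^2\cos^2(\omega t + \phi)} = \frac{A^2}{2} = \frac{a^2 + b^2}{2},$$

where the bar denotes the average over whole periods. The oscillation of the naive running estimates is essentially the contribution of the current, incomplete period.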

An incremental algorithm able to adapt to changes of the parameters $A$ and $\mu$ would be optimal. The parameter $\omega$ is expected to change only very slowly or to be constant.

I asked the same question on Stack Overflow and got an interesting solution. I also found a way myself to determine the parameter $\omega$ (the angular frequency) with simple arithmetic.

chmike
  • (1) I suppose you know not to use the naive formula for variance. (2) For the sums involved in all the formulas, you could use Kahan summation. – NDB Jul 08 '23 at 13:27
  • @NDB for the variance I use the expression suggested here, which is indeed much faster. I just tried Kahan summation, but it doesn't make any difference. – chmike Jul 08 '23 at 14:00
  • I also tried an iterative mean computation $m_{new} = m_{old} + \alpha(x - m_{old})$ with $\alpha=1/n$ until $n$ reaches 1000, and then kept $\alpha$ constant. But it doesn't solve the problem. It makes it worse and it doesn't converge. – chmike Jul 08 '23 at 14:22
  • That is precisely the formula that you should not use to compute variance. Those squares subtracted often produce catastrophic cancellation. See here for a better choice that is still one-pass. – NDB Jul 08 '23 at 14:59
  • @NDB I tried both methods and don't see any significant difference in the precision. The error has another origin. – chmike Jul 09 '23 at 09:56
  • @NDB you can test the code and the influence of various parameters on the mean here – chmike Jul 09 '23 at 12:17
  • Reminder: $\cos$ typesets the cosine function better than $cos$. – David K Jul 09 '23 at 21:52
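As a concrete illustration of the one-pass update discussed in the comments above, here is a sketch of Welford's algorithm in Python (an illustrative sketch, not the code from the question):

```python
# One-pass (Welford) mean/variance update, as opposed to accumulating
# sum(x) and sum(x*x) and subtracting, which can cancel catastrophically
# when the variance is small compared to the mean.
def welford_update(count, mean, m2, x):
    """Fold one new sample x into the running statistics."""
    count += 1
    delta = x - mean
    mean += delta / count
    m2 += delta * (x - mean)       # uses the *updated* mean
    return count, mean, m2

count, mean, m2 = 0, 0.0, 0.0
for x in (2.1, 1.9, 2.4, 1.6, 2.0):
    count, mean, m2 = welford_update(count, mean, m2, x)
print(mean, m2 / count)            # mean and population variance
```

The naive alternative, computing $\frac{1}{n}\sum x^2 - \bar{x}^2$, subtracts two nearly equal quantities when the mean is large relative to the standard deviation, which is the catastrophic cancellation mentioned above.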

1 Answer


I think that rather than considering the function as a sum of a sine and a cosine, you should just consider it a cosine with a phase shift. Then one formula, $a \cos(\omega t + \phi)+\mu$, covers all cases for a sinusoid and requires only three unknown parameters (since you know $\omega$).

If you have a sequence of values that are consistent with this formula to a high precision, then I suppose it would not take many consecutive values to estimate all three unknown parameters. Once the parameters have been estimated you can estimate the variance. (The mean is one of the parameters, so it also has been estimated at that point.)
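For instance, since $x_t = a\cos(\omega t) + b\sin(\omega t) + \mu$ is linear in $(a, b, \mu)$, three noise-free samples already pin down those parameters when $\omega$ is known, and the amplitude and phase then follow from $(a, b)$. A minimal sketch in Python/NumPy (illustrative only; with noisy data a least-squares fit over more samples would be preferable):

```python
import numpy as np

# With omega known, three samples determine (a, b, mu) exactly by solving
# a 3x3 linear system; amplitude A and phase phi then follow from (a, b).
omega = 0.3
t = np.array([0.0, 1.0, 2.0])                 # any three distinct times
a_true, b_true, mu_true = 0.8, -0.5, 2.0
x = a_true * np.cos(omega * t) + b_true * np.sin(omega * t) + mu_true

M = np.column_stack([np.cos(omega * t), np.sin(omega * t), np.ones_like(t)])
a, b, mu = np.linalg.solve(M, x)

A = np.hypot(a, b)                            # since a = A*cos(phi), b = -A*sin(phi)
phi = np.arctan2(-b, a)
print(a, b, mu, A, phi)
```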

Now if you say that $a$, $\phi$, and $\mu$ might slowly change over time, you no longer have a sinusoid, but the sinusoidal model might still be useful. You just have to use methods to estimate the parameters that are not too sensitive to the precision and accuracy of the observed values. Perhaps a Kalman filter might solve your problem.
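As one concrete stand-in for that idea (not necessarily a full Kalman filter), here is a sketch of recursive least squares with a forgetting factor on the regressor $[\cos\omega t,\ \sin\omega t,\ 1]$. It assumes $\omega$ is known or estimated separately; the forgetting factor $\lambda < 1$ discounts old samples so that slow drifts in $a$, $b$, and $\mu$ can be tracked:

```python
import numpy as np

# Recursive least squares with exponential forgetting, tracking
# theta = (a, b, mu) in the model x_t = a*cos(w t) + b*sin(w t) + mu.
def rls_step(theta, P, h, x, lam=0.99):
    h = h.reshape(-1, 1)
    k = P @ h / (lam + h.T @ P @ h)            # gain vector
    theta = theta + (k * (x - h.T @ theta)).ravel()
    P = (P - k @ h.T @ P) / lam                # covariance-like matrix
    return theta, P

omega = 0.3                                    # assumed known here
theta = np.zeros(3)                            # initial guess for (a, b, mu)
P = np.eye(3) * 1e3                            # large initial uncertainty
for t in range(500):
    x = 0.8 * np.cos(omega * t) - 0.5 * np.sin(omega * t) + 2.0
    h = np.array([np.cos(omega * t), np.sin(omega * t), 1.0])
    theta, P = rls_step(theta, P, h, x)
print(theta)                                   # approaches (0.8, -0.5, 2.0)
```

Smaller $\lambda$ tracks parameter changes faster but is more sensitive to noise.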

David K
  • Fully agree with your generalization. Could you please explain how I can determine the four parameters? I failed to find a solution. – chmike Jul 10 '23 at 07:26
  • A web search for "fitting a sinusoidal function to data" turns up an embarrassment of riches. – David K Jul 10 '23 at 11:44
  • $\omega$ is constant or nearly constant but unknown. I have edited my question to rephrase the problem and provided a link to the same question asked on stackoverflow. One answer uses a formula given by Maple to find the mean value from the values $x_t$. The second answer is from me and gives a method to compute $\omega$ from the values $x_t$. The other parameters can be obtained by solving a linear system. But an incremental method would be preferred. – chmike Jul 11 '23 at 12:35