aewallin/allantools

Validate magnitude of noise against ADev plot

Opened this issue · 1 comment

Hello,

Are there any tests in the test suite that validate the stochastically generated noise against the ADEV plots?

For example (half-pseudocode):

import numpy as np
import allantools
from allantools import noise

samples = int(1e6)
noise_magnitude = 1e-9
data = noise.white(samples) * noise_magnitude   # white FM frequency noise
tau, dev, _, _ = allantools.adev(data, rate=1.0, data_type="freq")

# fit a line of slope -1/2 to log(tau), log(dev), ignoring the largest
# (poorly averaged) taus; white FM noise gives ADEV proportional to tau**-0.5
mask = tau < samples / 100.0
slope, intercept = np.polyfit(np.log10(tau[mask]), np.log10(dev[mask]), 1)

assert abs(slope + 0.5) < 0.1
# intercept (ADEV at tau = 1 s) scales with noise_magnitude, up to the normalization inside noise.white()
assert np.isclose(10**intercept, noise_magnitude, rtol=0.5)

IIRC all current tests use fixed datasets and compare against xDEV values computed by Stable32 or another reference program.
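For context, a minimal sketch of what such a fixed-dataset test looks like (the file name and the reference numbers below are hypothetical, just to show the pattern, not actual values from the test suite):

import numpy as np
import allantools

def test_adev_against_stable32_reference():
    # fixed dataset shipped with the tests (hypothetical file name)
    phase = np.loadtxt("tests/example_phase_data.txt")
    taus_ref = [1.0, 2.0, 4.0]                      # averaging times checked in Stable32
    adev_ref = [1.234e-09, 8.76e-10, 6.21e-10]      # hypothetical reference values
    taus, ad, ade, ns = allantools.adev(phase, rate=1.0, data_type="phase", taus=taus_ref)
    np.testing.assert_allclose(ad, adev_ref, rtol=1e-4)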

Testing with random data against theoretical predictions is a good idea, but note that:

  • You are also testing the noise generator and the underlying random-number source at the same time. Is it clear that different platforms (win, mac, linux, x86, arm, etc.) produce the same random numbers?
  • The assertion needs to be "soft" to some extent, i.e. accept a confidence interval at some given confidence level. How do you set that level? Would it be OK if one in a thousand, or one in a million, tests failed even though the algorithm is correct?
  • Testing with random data and a new seed every time could be used to validate the confidence intervals produced by allantools. This would be a new and useful addition: generate many new datasets, produce an xDEV histogram, and check that the confidence interval agrees with the width of the histogram (a rough sketch of this follows below).
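A rough sketch of that last idea, using the error estimate (ade) that adev() already returns; the noise level, sample count, trial count, fixed seed, and choice of tau below are arbitrary assumptions, and a real test would compare against proper chi-squared/Greenhall intervals once those are finished:

import numpy as np
import allantools
from allantools import noise

np.random.seed(42)          # fixed seed: repeatable on one platform (see the portability caveat above)
n_trials, samples = 500, 10000
adevs, errs = [], []
for _ in range(n_trials):
    y = noise.white(samples)                                    # fresh white-FM dataset each trial
    taus, ad, ade, ns = allantools.adev(y, rate=1.0, data_type="freq", taus=[1.0])
    adevs.append(ad[0])
    errs.append(ade[0])

empirical_width = np.std(adevs)     # width of the ADEV histogram at tau = 1 s
reported_error = np.mean(errs)      # the error estimate returned by adev()
print(empirical_width, reported_error)
# How closely these must agree, and at what confidence level, is exactly the
# open question raised above.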

The confidence-interval work is unfinished. I worked on the noise-ID algorithms in January, but didn't settle on how (and in what order) the noise-ID should be done:
http://www.anderswallin.net/2018/01/power-law-noise-identification-for-allantools/
In principle the Stable32 source is now available, if one wanted the same behavior as that "golden" code.