gudrunhe/secdec

Error estimation


During some calculations I encountered an odd behaviour of the results. I had an integral, calculated in the Euclidean region with QMC using 5*10^8 points, and I wanted to check how much accuracy I would lose when it is calculated with 10^7 points instead. It turned out that the reported error for the lower number of points is actually smaller, by roughly 2 orders of magnitude. Below are the numbers:

5*10^8 points:
eps^3: 59.1237525347561359 ± 8.41099093687659705e-12
eps^2: 17.5004972241645049 ± 2.53390845876766492e-12
eps^1: 2.91139159728811814 ± 4.08511507325681193e-13
eps^0: 0.643415535874408695 ± 7.59117669767990508e-14

10^7 points:
eps^3: 59.1237525348148694 ± 7.20756582442995630e-14
eps^2: 17.5004972241919177 ± 2.07670306502320253e-14
eps^1: 2.91139159729018981 ± 3.22539358168042113e-15
eps^0: 0.643415535875348721 ± 9.26925711209401115e-16

The question arose what the origin of this is, and the only explanation we had was that with fewer points the error estimation is worse and gives a falsely small error. Do you have any other ideas why this could happen? I enclose the gen and int files; the only thing I changed between the calculations is 'minn', other than that it was all the same. If it really is error underestimation, how can we know the real error?
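
For reference, schematically the only difference between the two runs is the minn argument passed to the QMC integrator; roughly like this (the library name and kinematic point below are placeholders, the actual files are attached):

```python
from pySecDec.integral_interface import IntegralLibrary

# placeholder name; the real gen/int files are attached to this issue
lib = IntegralLibrary('myintegral/myintegral_pylink.so')

# minn is the minimum QMC lattice size -- the only setting changed
# between the two runs (10**7 vs 5 * 10**8)
lib.use_Qmc(minn=10**7, transform='korobov3')

# evaluate at a (placeholder) Euclidean kinematic point
_, _, result = lib(real_parameters=[1.0])
print(result)
```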

A few things that could be going on here:

  1. Your error with 10^7 points is very close to double precision. If that is the case and you keep adding points, it is natural to expect the error to increase: essentially, once you have a result that is accurate to working precision, throwing more points at it will slowly add numerical noise.
  2. If you look at https://arxiv.org/pdf/1811.11720.pdf, Fig. 14, left panel, for lattice sizes ~10^8, you can see that the error estimate is very “noisy”, in the sense that a much smaller lattice can sometimes give a lower error than a larger one. In cases where the analytic result was known, we found that the error estimate is nevertheless reliable; there are just “good” and “bad” lattices, and we do not know how to tell a priori which is which. (A toy illustration of the shift-based error estimate behind these numbers is sketched after this list.)
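
To make point 2 concrete, here is a toy sketch (not the qmc library's actual code) of how a randomly shifted rank-1 lattice rule produces its error estimate: the same lattice is evaluated with m independent random shifts, and the error is the standard deviation of the m estimates divided by sqrt(m). The generating vector below is arbitrary and purely illustrative:

```python
import numpy as np

def shifted_lattice_estimate(f, z, n, m, rng):
    """Rank-1 lattice rule with n points and m random shifts.

    Returns (estimate, error): the mean over the m shifted estimates
    and their standard deviation divided by sqrt(m).
    """
    i = np.arange(n).reshape(-1, 1)       # lattice point indices, shape (n, 1)
    base = i * np.asarray(z) / n          # unshifted lattice, shape (n, d)
    estimates = []
    for _ in range(m):
        shift = rng.random(len(z))        # independent random shift per replica
        points = (base + shift) % 1.0     # shifted lattice, back in [0, 1)^d
        estimates.append(f(points).mean())
    estimates = np.asarray(estimates)
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(m)

# Toy integrand: the integral of x*y over the unit square is exactly 1/4.
rng = np.random.default_rng(0)
val, err = shifted_lattice_estimate(lambda p: p[:, 0] * p[:, 1],
                                    z=[1, 25087], n=2**16, m=32, rng=rng)
print(val, err)  # more shifts (larger m) make the error estimate itself less noisy
```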

My suggestions would be:

  1. If any relative error is smaller than about 10^-14, i.e. better than 14 digits, it’s best to artificially increase that error to the 14-digit level: double precision is usually only reliable for about that many digits. (See the first sketch after this list.)
  2. If you really want to check your error estimate, the following things can help (see the second sketch after this list):
    • Increase minm (the number of random shifts used to estimate the error); the default is 32 shifts. Maybe try minm = 64 or even minm = 128 shifts with minn = 10^7. With this many shifts I would really believe the error estimate.
    • Try a different transform, e.g. Korobov3 -> Korobov4. This alters the integrand a little and provides another way to estimate it. It may spoil the scaling, though, so don’t expect equally good convergence, but any digits that agree with your current result can probably be trusted.
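
A minimal sketch of suggestion 1 (clamp_rel_error is our name for it, not a pySecDec function):

```python
def clamp_rel_error(value, error, rel_floor=1e-14):
    """Never report a relative error below the double-precision floor."""
    return max(error, rel_floor * abs(value))

# e.g. for the eps^0 coefficient of the 10^7-point run:
print(clamp_rel_error(0.643415535875348721, 9.26925711209401115e-16))
# prints ~6.43e-15, since the reported 9.3e-16 is below 1e-14 * |value|
```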
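
And suggestion 2 amounts to different keyword arguments to use_Qmc; a rough sketch, again with a placeholder library name and kinematic point:

```python
from pySecDec.integral_interface import IntegralLibrary

lib = IntegralLibrary('myintegral/myintegral_pylink.so')  # placeholder name

# cross-check 1: same lattice size, four times as many random shifts;
# cross-check 2: same lattice size, a different periodizing transform
for settings in ({'minn': 10**7, 'minm': 128, 'transform': 'korobov3'},
                 {'minn': 10**7, 'minm': 32,  'transform': 'korobov4'}):
    lib.use_Qmc(**settings)
    _, _, result = lib(real_parameters=[1.0])  # placeholder kinematics
    print(settings, result)
```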

I would guess that you can trust 14 digits of your 10^7 result.
Compute it with 128 shifts and you should see whether this guess is correct.

I tried both suggested ways out, and below is what I got:

ref value: minm=32, transform=korobov3, 10^7 points
eps^3: 59.1237525348148694 ± 7.20756582442995630e-14
eps^2: 17.5004972241919177 ± 2.07670306502320253e-14
eps^1: 2.91139159729018981 ± 3.22539358168042113e-15
eps^0: 0.643415535875348721 ± 9.26925711209401115e-16

minm=128, transform=korobov3, 10^7 points
eps^3: 59.1237525348148978 ± 3.69044372201279583e-14
eps^2: 17.5004972241919319 ± 1.00136021147393009e-14
eps^1: 2.91139159729019825 ± 1.90789909251233134e-15
eps^0: 0.643415535875346278 ± 5.18842989102919527e-16

minm=32, transform=korobov4, 10^7 points
eps^3: 59.1237525348141801 ± 1.82443771467467448e-13
eps^2: 17.5004972241913528 ± 5.79396868890196557e-14
eps^1: 2.91139159729016583 ± 1.02079658772716189e-14
eps^0: 0.643415535875318856 ± 3.23332393310878283e-15

It looks like your guess was correct: the accuracy of the 10^7-point result is not spoiled in any way, and the 5*10^8-point result is just noisy.