algorithmica-org/algorithmica

Self-contradiction

AlexanderNenninger opened this issue · 1 comment

> Of course, a more general approach would be to switch to a more precise data type, like `double`, either way effectively squaring the machine epsilon. It can sort of be scaled by bundling two `double` variables together: one for storing the value, and another for its non-representable errors, so that they actually represent $a+b$. This approach is known as *double-double* arithmetic, and it can be similarly generalized to define quad-double and higher precision arithmetic.
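For reference, a minimal sketch of what that bundling looks like (illustrative code, not taken from the article; `two_sum` is the standard error-free transformation, and `dd_add` is a simplified double-double addition):

```cpp
#include <utility>

// Error-free transformation (Knuth's TwoSum): returns (s, e) such that
// a + b == s + e exactly, where s = fl(a + b) and e is the rounding error.
std::pair<double, double> two_sum(double a, double b) {
    double s = a + b;
    double bv = s - a;                       // the part of b that made it into s
    double e = (a - (s - bv)) + (b - bv);    // what was lost from a and from b
    return {s, e};
}

// A double-double: the represented value is hi + lo, with |lo| much smaller than |hi|.
struct dd {
    double hi, lo;
};

// Simplified ("sloppy") double-double addition: add the high parts exactly,
// fold the rounding error and the low parts together, then renormalize.
dd dd_add(dd a, dd b) {
    auto [s, e] = two_sum(a.hi, b.hi);
    e += a.lo + b.lo;
    double hi = s + e;
    double lo = e - (hi - s);  // quick renormalization (assumes |s| >= |e|)
    return {hi, lo};
}
```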

You are contradicting yourself: on the one hand, you say that switching to higher precision is not scalable; on the other hand, that is exactly what you recommend here.

Poor choice of words on my part. I meant that for other types of problems you should switch to larger types, use double-double arithmetic, and so on. The Kahan summation algorithm only applies to summation, but the underlying principle — keeping track of the rounding error in a separate variable — can be used elsewhere.
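For completeness, that principle in textbook Kahan summation looks roughly like this (an illustrative sketch, not necessarily the exact code from the article): the rounding error lost at each step is stored in a separate compensation variable and fed back into the next addition.

```cpp
// Kahan (compensated) summation: `c` accumulates the low-order bits
// that the running sum `s` cannot represent.
double kahan_sum(const double *a, int n) {
    double s = 0, c = 0;
    for (int i = 0; i < n; i++) {
        double y = a[i] - c;  // correct the next term by the error so far
        double t = s + y;     // low-order bits of y are lost here...
        c = (t - s) - y;      // ...and recovered here (algebraically zero)
        s = t;
    }
    return s;
}
```

The same trick of carrying the error in a second variable is what double-double arithmetic does, just applied to every operation rather than only to a running sum.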