LouisDesdoigts/dLux

Calculate `MAX_DIFF` at `__init__` Time.

Closed this issue · 1 comment

Hi all,
`MAX_DIFF` is required for the basis generation (in `__init__`-time code) to be compilable. It may be worth removing this constant to make the code more readable, since that code does not run at run time. However, it is plausible that someone may want a large basis of ~10-100 vectors, in which case being able to compile this code segment could be useful. Regardless, the minimum `MAX_DIFF` can be calculated at `__init__` time from the `noll_indices`. This could reduce memory usage (and hence time as well) by reducing `MAX_DIFF` for smaller, more typical cases.
Regards
Jordan.
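
The per-instance calculation suggested above could be sketched roughly as follows. This is a hypothetical illustration, assuming `MAX_DIFF` is bounded by the largest Zernike radial order implied by the requested Noll indices; the actual relationship depends on dLux's basis-generation internals:

```python
def noll_to_zernike(j: int) -> tuple[int, int]:
    """Convert a 1-based Noll index j to Zernike (n, m) indices
    using the standard Noll ordering convention."""
    n, j1 = 0, j - 1
    while j1 > n:
        n += 1
        j1 -= n
    m = (-1) ** j * ((n % 2) + 2 * ((j1 + (n + 1) % 2) // 2))
    return n, m

def min_max_diff(noll_indices: list[int]) -> int:
    # Hypothetical bound: take the largest radial order actually
    # requested, instead of a fixed global worst-case constant.
    return max(noll_to_zernike(j)[0] for j in noll_indices)

# The first 6 Noll terms only reach radial order n = 2, so a much
# smaller bound suffices for typical small bases.
print(min_max_diff([1, 2, 3, 4, 5, 6]))  # -> 2
```

Since `noll_indices` is static at `__init__` time, this bound is known before any traced/compiled code runs, which is what makes the per-instance sizing possible.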

So I decided to abandon `MAX_DIFF` entirely, because doing so was faster and made the code much simpler.