JohannesBuchner/bexvar

UltraNest allocation issues on Apple OSX-arm64

Closed this issue · 5 comments

I recently switched from an Intel-based MacBook Pro to a MacBook Pro with an ARM-based chip. On the previous laptop, I ran the Stingray implementation of bexvar and got the output without any issues. On the new laptop, however, there seems to be a numerical or logical issue that causes the UltraNest library to try to allocate an array far beyond any practical size.

I'm not sure how to solve this problem. I've attached bexvar_input.txt with an example I tested on both laptops, and bexvar_output.txt with the error I obtained on the ARM-based MacBook Pro. Thanks for the help.

bexvar_input.txt
bexvar_output.txt

Since you are using the Stingray implementation, I think you need to create an issue for Stingray. At a glance, it looks like there are no time data points used in the time-series analysis, and therefore all points have the same loglikelihood (-1e300). Not sure what "Sim is 0.0" means here. Or could it be that there are only zero-count time bins?
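If it helps to check that last possibility, here is a minimal sketch (not part of bexvar or Stingray) for inspecting the input light curve. It assumes bexvar_input.txt contains whitespace-separated columns with the counts per time bin in the second column, which may not match the actual file layout:

```python
# Hypothetical sanity check on the light curve fed to bexvar; the column
# layout assumed here (time in column 0, counts in column 1) is a guess.
import numpy as np

data = np.loadtxt("bexvar_input.txt")
counts = data[:, 1]

print("number of time bins:", len(counts))
print("bins with zero counts:", int(np.sum(counts == 0)))
if counts.size == 0 or np.all(counts == 0):
    print("all bins are empty, so every sample would get the same floor loglikelihood")
```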

Then there is an UltraNest iteration which tries to improve the uncertainty by getting more live points, but asks for an unreasonably large number of additional points. This could be filed as an UltraNest bug / request for improvement there, but it probably only occurs with buggy likelihoods. Probably bexvar (and Stingray) should pass max_num_improvement_loops=0 to UltraNest's run() function, and then this would not occur.
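For illustration, a minimal sketch of that workaround with a toy likelihood (not the bexvar one; the only point here is the run() argument):

```python
# Toy example of the suggested workaround: disable UltraNest's post-run
# improvement loops so it never requests additional live points.
import numpy as np
import ultranest

param_names = ["mean"]  # single placeholder parameter for illustration

def loglike(params):
    # toy Gaussian log-likelihood; the real bexvar likelihood is different
    return -0.5 * float(np.sum((params - 0.5) ** 2))

def transform(cube):
    # identity map from the unit cube to the parameter space [0, 1]
    return cube

sampler = ultranest.ReactiveNestedSampler(param_names, loglike, transform)
# max_num_improvement_loops=0 skips the refinement stage that, with a
# degenerate (flat) likelihood, can ask for an absurd number of live points.
result = sampler.run(max_num_improvement_loops=0)
```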

Apologies for posting on the wrong page. The error occurs when UltraNest attempts to sample an extremely large number of live points (9223372036854775407), close to the upper limit of a signed 64-bit integer (2^63-1). I'm uncertain whether this stems from an issue within the bexvar function or points to a bug or misconfiguration in how UltraNest handles or interprets parameters on the ARM architecture (I started here to seek clarity on the matter).

Both. It's a stingray bug and an odd behaviour of ultranest (but under conditions that shouldn't occur normally).

I'm not entirely sure this is a Stingray bug, because I encountered the exact same error when running a Bayesian analysis on some X-ray spectra (only in some cases; in most instances the analysis completes successfully). If it's better, I can close the issue here and open a new one on the UltraNest GitHub page.

Interesting! Yes, please open an issue there, ideally one that can be reproduced (specifically, we need to know what the likelihood function returns).