ranieremenezes/easyfermi

Problem creating light curve

Closed this issue · 20 comments

Hello,
Thank you so much, the tool is really helpful!
I was trying to create light curve data for NGC 253; the ltcube ran very smoothly and the plot quality is 3, so it should be no problem?
But when creating the light curve I encountered a ValueError that says "zero-size array to reduction operation maximum which has no identity.", and then the light curve cannot be assembled.
This happens specifically when I manually change the model to LogPar or PLEC; with the default PowerLaw model it works fine.

Here's the error code:
Error.md

and the fermipy log file:
fermipy .log

If you can provide any help, I'd be very grateful.

Dear @StateAtol , thank you for pointing out this bug. The problem was happening because all of the lightcurve bins were upper limits. I just fixed it, and I think that now everything is working. Could you please update your version of easyFermi by following the steps below?

  1. open the Fermi environment in the terminal with the command "conda activate fermi"
  2. update easyFermi with the command "pip install easyFermi==1.0.11"

After that, easyFermi should work properly. Please let me know if you have any further issues.
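
Just to illustrate what was going on, here is a simplified numpy sketch (not the actual easyFermi code, and with made-up flux values): when every light-curve bin is an upper limit, the array of detected fluxes is empty, and taking its maximum raises exactly the ValueError you saw.

```python
import numpy as np

# Minimal sketch of the failure mode (not the actual easyFermi code):
# if every light-curve bin is an upper limit, the array of detected
# fluxes is empty, and calling max() on it raises exactly this ValueError.
flux = np.array([1.2e-9, 3.4e-9, 2.1e-9])      # hypothetical bin fluxes
is_upper_limit = np.array([True, True, True])  # every bin is an upper limit

detected = flux[~is_upper_limit]               # empty array in this case
# np.max(detected)  # ValueError: zero-size array to reduction operation maximum ...

# A guard of this kind avoids the crash when no bin is a detection:
ymax = np.max(detected) if detected.size > 0 else np.max(flux)
print(ymax)
```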

@ranieremenezes Sorry my feedback is a bit late, and thanks for the effort! Now it works like a charm :)
My apologies for asking again, but I need advice on something else. Sometimes when the number of bins is too large, e.g. 60 bins for an SED or LC of six years of data, I get an error saying:

ValueError: Parameter scan points for 4FGL J0509.4+0542::Prefactor include infinite value.

What could be the cause of this error, and is there a way to get around it without reducing the number of bins?
Any advice would be greatly appreciated, sorry to bother you again.

@ranieremenezes During further testing, another error occurred:

ValueError: cannot convert float NaN to integer

For NGC 1068 the light curve can now be assembled normally (it couldn't before the update), but for NGC 253 this new error appeared and it stopped working.
Here's the error code:
ErrorCode2.md
And the fermipy log file:
fermipy (2).log
Please check it.

Regards

Hi @StateAtol , good that the updated version is working! On your new questions:

> sometimes when the number of bins is too large, e.g. 60 bins for an SED or LC of six years of data, I get an error saying:
> ValueError: Parameter scan points for 4FGL J0509.4+0542::Prefactor include infinite value.
> What could be the cause of this error, and is there a way to get around it without reducing the number of bins?

  1. This error probably occurs only in the SED computation, but let's discuss both cases.
  2. For the SED: there are a few ways this error can occur, but the most likely is that you have asked for too many energy bins (i.e. 60), such that some of them simply do not have enough photons (in some cases even zero) to properly perform the likelihood fit. Another possibility is that you are asking easyFermi to cut the data in energy/time intervals beyond the downloaded data (see the next bullet point).
  3. For the LC: I would expect it to occur only if you are working at very high energies (let's say only with photons above 80-100 GeV) or if the data cuts that you set in easyFermi are different from the data cuts applied when downloading the data. E.g. maybe the time window over which you are asking easyFermi to compute the LC is longer than the time window of the downloaded data, such that some bins will have zero photons.

To get around this error without reducing the number of bins, I would recommend working only at low energies, let's say 100 MeV -- 10 GeV (or down to the minimum energy available to you). Let me know if this helps. To help you more precisely I would need more details, e.g. to take a look at your configuration file (config.yaml) and have at least a rough idea of your goal.
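
As a quick sanity check of the second possibility (cuts beyond the downloaded data), something along these lines can be used. The file name and cut values below are just placeholders; in standard Fermi-LAT photon (FT1) files the ENERGY column is in MeV and TIME is in mission elapsed time (MET) seconds.

```python
from astropy.io import fits

# Placeholder file name and cuts; adjust to your own analysis.
photon_file = "L230101_PH00.fits"
emin_mev, emax_mev = 100.0, 10000.0             # requested energy range (MeV)
tmin_met, tmax_met = 239557417.0, 428859819.0   # requested time window (MET s)

with fits.open(photon_file) as f:
    events = f["EVENTS"].data
    energy = events["ENERGY"]   # MeV
    time = events["TIME"]       # MET seconds

# The requested cuts should lie inside the downloaded data,
# otherwise some SED/LC bins will contain zero photons.
print("Downloaded energy range (MeV):", energy.min(), "-", energy.max())
print("Downloaded time range (MET s):", time.min(), "-", time.max())
print("Energy cut inside data:", energy.min() <= emin_mev and emax_mev <= energy.max())
print("Time cut inside data:  ", time.min() <= tmin_met and tmax_met <= time.max())
```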

> for NGC 253 this new error appeared and it stopped working.

Ok, according to the log file the error is "ValueError: cannot convert float NaN to integer", and it is associated with the array of upper limits "lc['eflux_ul95']", meaning that one or more upper limits could not be computed. This error is very likely due to low statistics, just like the error discussed above.
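
For reference, this is how such a NaN turns into that ValueError in plain Python/numpy, and how bins without a computed upper limit could be skipped. This is only an illustrative sketch with made-up numbers, not the easyFermi internals:

```python
import numpy as np

# Hypothetical 95% energy-flux upper limits; one bin failed and is NaN.
eflux_ul95 = np.array([2.3e-12, np.nan, 1.8e-12])

# Converting a NaN to an integer is what ultimately raises
# "ValueError: cannot convert float NaN to integer".
try:
    int(eflux_ul95[1])
except ValueError as err:
    print(err)

# Keeping only the bins with a computed upper limit avoids the crash:
valid = ~np.isnan(eflux_ul95)
print(eflux_ul95[valid])
```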

@ranieremenezes Thank you for the detailed explanation! I think I've got a rough idea of what the problem is and why the error occurred. So basically the data cuts don't work well at very high energies, and I should pay attention to make sure the LC matches the time window of the data.

The error codes and logs actually belong to another person in my group; she was trying to analyze several star-forming galaxies when the error appeared. After the update and lowering of the energy range, she managed to create 50-60 bin LCs without problems.

On the other hand, my goal is to analyze 3 AGN blazars related to the IceCube observatory's neutrino flare events:
TXS 0506+056, GB6 J1542+6129, and PKS 1424+240, with energies ranging from 1 GeV to 500 GeV.
I'm trying to:

  1. Plot LCs with as many bins as possible to compare the gamma-ray flux periods with the neutrino event timing.
  2. Plot SEDs to compare their energy distributions and then find some shared properties of their spectra (there are several reports saying they all belong to one subcategory of blazar).

So I've tried to lower my energy range to 1 GeV to 10 GeV for a 10-year analysis of TXS 0506+056, yet the error message

ValueError: zero-size array to reduction operation maximum which has no identity

kept occurring, and it seems anything above 30 bins is not achievable. I think this is also due to low statistics when computing the upper limits array, but I have no idea how to improve the result of my research. So please share some thoughts when you have time.
fermipy.log
config.txt
ERROR.md
srcmdl_00 .txt
Target_results.txt

Hi @StateAtol ,

> So after the update & lowering the energy range, she managed to create 50-60bin LCs without problem.

  • Great! I am really glad to know about this.

> Plot LCs with as many bins as possible to compare the gamma ray flux periods with neutrino event timing.

  • I see. In this case, the usage of easyFermi should be as I said before: the size of each bin must be large enough that each one contains a reasonable number of photons for the likelihood fit to converge (see the rough estimate sketched below).
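
As a rough rule of thumb (nothing built into easyFermi, just a back-of-the-envelope check with placeholder numbers), you can compare the total number of photons in your region of interest with the number of requested bins:

```python
# Rough rule of thumb (not part of easyFermi): average photons per LC bin.
n_events = 45000      # hypothetical number of photons in the ROI after cuts
n_bins = 60           # requested number of light-curve bins

mean_per_bin = n_events / n_bins
print(f"~{mean_per_bin:.0f} photons per bin on average")

# If this number is very small (or the source is strongly variable, so that
# quiet bins get far fewer photons), some bins may fail the likelihood fit
# and end up as upper limits or NaN.
if mean_per_bin < 100:   # arbitrary illustrative threshold
    print("Binning may be too fine for the fit to converge in every bin.")
```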

> So I've tried to lower my energy range to 1 GeV to 10 GeV for a 10-year analysis of TXS 0506+056, yet the error message
> ValueError: zero-size array to reduction operation maximum which has no identity

  • Thank you for pointing this out. This was a tiny bug happening when easyFermi was trying to plot the LC. I've fixed it and ran a series of tests this afternoon. I think everything is working now. I will ask you to do the following:
  • open the Fermi environment in the terminal with the command "conda activate fermi"
  • update easyFermi with the command "pip install easyFermi==1.0.12"
  • and then try your analysis again. Please let me know if it works.

Hello @ranieremenezes,
I've run several analyses with various bin numbers after the update, and I can confirm that everything is working very well.
I think I can proceed with my research without problems now, thanks to your explanations and debugging.
Thank you for all the help!

Great! I am glad everything is working.

Hello! I am a bachelor's student trying to use easyFermi. I installed it successfully and tried to test it using the parameters that you used in the YouTube video. While it was running, I opened the ipython terminal and saw that an error had occurred. It says:
ls: cannot access 'Data/3.12': No such file or directory
ls: cannot access 'easyfermi1/photon/PH.fits': No such file or directory
ls: cannot access 'Data/3.12': No such file or directory
ls: cannot access 'easyfermi1/list.txt': No such file or directory

Also, should the isotropic emission model .txt file be saved where the .fits file of the Galactic emission model is saved?

I'd be glad if you could please help me sort this out.

Dear Krittika,

To better understand this problem, I need to see the "fermipy.log" file generated when you ran easyFermi and a screenshot of the easyFermi window. Could you please attach them here?

Judging only from the error messages above, it seems that easyFermi did not find the photon files. If this is the case, the solution is simple: try putting the photon data in a separate directory, let's say "./data/", and then select this directory in the easyFermi window option "Dir. of photon files". Do the same for the spacecraft file and the background models. Let me know if this procedure fixes the problem.

> Also, should the isotropic emission model .txt file be saved where the .fits file of the Galactic emission model is saved?

  • Yes, both must be in the same directory.
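
For concreteness, an organization along these lines is what I mean (the directory and file names below are only examples), and a few lines of Python can confirm that the files are where easyFermi will look for them:

```python
from pathlib import Path

# Example layout (names are placeholders):
#   ./data/photon/      -> photon (PH) .fits files + list.txt
#   ./data/spacecraft/  -> spacecraft (SC) .fits file
#   ./data/background/  -> Galactic diffuse .fits + isotropic .txt model
for subdir, pattern in [("data/photon", "*PH*.fits"),
                        ("data/spacecraft", "*SC*.fits"),
                        ("data/background", "*")]:
    files = sorted(Path(subdir).glob(pattern))
    print(subdir, "->", [f.name for f in files] if files else "NOTHING FOUND")
```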

Hi Krittika,

I cannot see the attached files. Could you please upload them again or send them to me via email?

Hi Krittika,

Thanks for the files. There are only two small issues.

  1. Please update easyFermi to version 1.1.4. You can do that by activating the fermi environment in the terminal and typing:
    pip install easyFermi==1.1.4

  2. The main problem is just the data path: easyFermi cannot access the data on your machine.
    You can try the following. Instead of using a path like this:
    /home/kritzzz/Desktop/Fermi sample1/3.12 easyfermi1/list.txt
    you can try:
    /home/kritzzz/Desktop/Fermi_sample1/3.12_easyfermi1/list.txt

I mean, try to remove the blank spaces from all paths.
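
If renaming everything by hand is tedious, a small script like the one below can do it (this is just a sketch; adjust the root path to your own setup and back up the data first). It replaces blank spaces with underscores in all file and directory names under the given folder:

```python
from pathlib import Path

root = Path("/home/kritzzz/Desktop/Fermi sample1")   # adjust to your own setup

# Rename the deepest paths first so parent directories stay valid while renaming.
for p in sorted(root.rglob("*"), key=lambda q: len(q.parts), reverse=True):
    if " " in p.name:
        p.rename(p.with_name(p.name.replace(" ", "_")))

# Finally rename the top directory itself if its own name contains a space.
if " " in root.name:
    root.rename(root.with_name(root.name.replace(" ", "_")))
```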

Hmmm... I see...

How are you organizing the data files? Do you have one directory with only the photon data, one with only the spacecraft file, and one with only the background models? Did you rename these files?

From the error in your terminal, I see that the data path is understood by easyFermi as easyfermi1/photon/PH.fits, which is different from the path listed in your config file.

I will ask you to follow the tutorial again, but doing exactly what is shown there, including the directory organization.

I hope this can help you.

Hi Krittika,
We recently updated easyfermi (take a look at the new installation tutorial), and we also ran several checks on Windows with WSL. Furthermore, stay tuned to the easyfermi YouTube channel; we are going to update some tutorials there in the coming weeks.