Optical spec and total SED spec mismatch
yasmeenasali opened this issue · 6 comments
Hello,
I'm trying to get the full best-fit SED from a fit with photometry (GALEX + DECam + WISE) plus an optical spectrum (SDSS). I followed the suggestion in Issue #197 to create a copy of the `obs` dict in order to return the full SED. When I generate the model spectrum across the whole wavelength range vs. just the optical range, I get something slightly different for the same parameters `theta_max`:

[plot: the two model spectra, full SED vs. optical-only]

The two model spectra in that plot are generated using `mean_model`, with the only difference being the `obs` dictionary (see code below). I am concerned about the difference in relative emission line strengths between the two models. The difference in overall flux density between the two matches the difference in flux density between the observed photometry and the optical spectrum (maybe an aperture effect or an error in the spectrophotometric calibration). I am using `PolySpecModel` and `TemplateLibrary["optimize_speccal"]` as in the psb_params example.
```python
# Model spectrum over the observed optical wavelength range
mspec, _, _ = model.mean_model(theta_max, obs, sps=sps)

# Model spectrum over the full SED: removing the observed spectrum and
# wavelength grid makes mean_model predict across the full range
obs_copy = obs.copy()
obs_copy['spectrum'] = None
obs_copy['wavelength'] = None
mspec_copy, _, _ = model.mean_model(theta_max, obs_copy, sps=sps)
```
Thanks!
Yasmeen
Hi, yes, this is because `PolySpecModel` fits a polynomial to the ratio of the model and the spectral data, if the latter exists, and applies that calibration polynomial to the model (yielding the red curve). But if there's no spectral data, then no polynomial is computed or applied (blue curve). You can look at `model._sed` after the calls to `model.mean_model` to get the prediction without the polynomial calibration factor applied.
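For example (a sketch reusing the variable names from the snippet above; `_sed` is an internal attribute, so its exact behavior may differ between prospector versions):

```python
# Predict with the real obs: the calibration polynomial is fit to the
# spectral data and applied, because obs contains a spectrum.
mspec_cal, _, _ = model.mean_model(theta_max, obs, sps=sps)

# The same prediction *before* the calibration vector was applied is
# cached on the model after the call.
mspec_nocal = model._sed
```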
Hi Yasmeen!
Following up on Ben's comment (since I wrote this already!). This normalization difference is an expected outcome of using the `optimize_speccal` option. This option fits a high-order polynomial to the ratio of the observed spectrum and the model spectrum, and re-normalizes the model spectrum to match the normalization and slope of the observed spectrum. It effectively "ignores" the flux calibration of the observed spectrum, fitting only the emission/absorption line equivalent widths.
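Schematically, that calibration step looks something like the sketch below. This is not prospector's actual code, just an illustration assuming a Chebyshev basis; `wave`, `obs_spec`, and `model_spec` are placeholder arrays and the order (here 12) is arbitrary:

```python
import numpy as np

# Fit a high-order polynomial to the data/model ratio and apply it to
# the model as a multiplicative calibration vector.
x = 2 * (wave - wave.min()) / (wave.max() - wave.min()) - 1  # map to [-1, 1]
coeffs = np.polynomial.chebyshev.chebfit(x, obs_spec / model_spec, deg=12)
speccal = np.polynomial.chebyshev.chebval(x, coeffs)
model_calibrated = model_spec * speccal  # now matches the data's normalization/slope
```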
Your red model spectrum above should therefore match the observed spectroscopy (as you note), while the blue spectrum matches the observed photometry (as you also note). Which one is 'correct'? That depends on your situation, but 99% of the time the photometry will have a better flux calibration; in particular, as you say above, SDSS fiber spectra often have a missing aperture correction, which is my first suspicion here. I would suggest measuring the line luminosities from the blue spectrum, which is scaled to be consistent with the photometry. You could investigate further by asking what fraction of the light you'd expect the fiber to collect, based on the size and surface-brightness profile of the object, and checking that this is consistent with the normalization difference you find above.
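One quick way to quantify that normalization offset, sketched against the snippet above (`mspec` and `mspec_copy` as defined there; `wave_full` is an assumption about the wavelength grid your full-SED prediction is returned on, e.g. `sps.wavelengths`, so check it for your setup):

```python
import numpy as np

# Interpolate the photometry-anchored full-SED model onto the optical
# wavelength grid, then take the median ratio of the two predictions.
# If the offset is an aperture effect, this is roughly the fraction of
# the galaxy's light falling in the SDSS fiber.
wave_full = sps.wavelengths  # assumption: check the output grid/frame
full_on_optical = np.interp(obs['wavelength'], wave_full, mspec_copy)
scale = np.median(mspec / full_on_optical)
print(f"calibrated / full-SED normalization: {scale:.3f}")
```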
Best,
Joel
It also looks like there might be emission line marginalization turned on? I ask because the emission line ratios appear to change between the two models. With marginalization, the emission line strengths are determined from a fit to any spectral data, but if there is no spectral data they come straight from cloudy. This will require a bit more care to get consistent emission line flux measurements between the two predictions.
Thanks so much, this is very helpful! And yes, emission line marginalization is turned on!
In that case I suggest you use `model._sed` after the call with `obs` for the part where you have spectroscopic data, and the result of `model.predict()` or `model.mean_model()` with `obs_copy` for the rest of the spectrum. That way the emission line fluxes will include the on-the-fly adjustments computed with your spectroscopic data.
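A minimal sketch of that stitching, reusing the names from the snippet above; `wave_full` (here taken as `sps.wavelengths`) is an assumption about the grid the full-SED prediction comes out on, so verify it against your prospector version:

```python
import numpy as np

# (1) Inside the optical window: predict with the real obs so the
# marginalized emission-line amplitudes are fit to the data, then read
# the pre-calibration spectrum from model._sed.
model.mean_model(theta_max, obs, sps=sps)
spec_optical = np.copy(model._sed)
wave_optical = obs['wavelength']

# (2) Outside the window: the full-SED prediction, where the emission
# lines come straight from the cloudy grid.
obs_copy = obs.copy()
obs_copy['spectrum'] = None
obs_copy['wavelength'] = None
spec_full, _, _ = model.mean_model(theta_max, obs_copy, sps=sps)
wave_full = sps.wavelengths  # assumption: check the output grid/frame

# (3) Stitch the two, keeping the data-informed piece in the window.
outside = (wave_full < wave_optical.min()) | (wave_full > wave_optical.max())
wave_combined = np.concatenate([wave_full[outside], wave_optical])
spec_combined = np.concatenate([spec_full[outside], spec_optical])
order = np.argsort(wave_combined)
wave_combined = wave_combined[order]
spec_combined = spec_combined[order]
```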
Assuming this is resolved, but please reopen if there is still an issue.