Let ScopeSim simulate raw exposures as unsigned integers, or floats with specified precision.
Closed this issue · 4 comments
Detectors detect individual photons, so the raw data should be integers; or ScopeSim should at least have the (default?) option to make it so. Some telescopes might have a mode to only store averaged frames; those are usually floating point (typically float32).
ShotNoise already has this line:
data = np.floor(data)
so that's good.
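Worth noting: np.floor only rounds the values; the array's dtype stays float, so an explicit cast would still be needed to get actual integers. A minimal illustration (the uint16 choice is purely illustrative, not what ScopeSim does):

```python
import numpy as np

data = np.array([1.7, 2.2, 3.9])
data = np.floor(data)   # values are now whole numbers...
print(data.dtype)       # ...but the dtype is still float64

# An explicit cast is needed for an actual integer array
# (uint16 is only an example choice here):
data_int = data.astype(np.uint16)
print(data_int)         # [1 2 3]
```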
Turning off both SummedExposure and AutoExposure and limiting DIT to an integer number of seconds will already ensure no fractional flux. But at some point an actual conversion to an int of a specific precision should be made. And it should work for non-integer DITs.
SummedExposure essentially mimics the effect of averaging (except that it sums), so that should keep the data float (or make it float).

Do you have an idea, @teutoburg?
Maybe it would make some sense to convert the pixel values to an integer at the very end of OpticalTrain.observe(). In between source and detector, we might have some floats due to things like filter curves or resampling. We could keep floats during the observe loop for precision, as that's all inside the "black box", but then convert to "whole photons" afterwards. Everything in OpticalTrain.readout() (aka detector effects) could be configured to use integers, as there shouldn't be much that requires floats, unless, as you mentioned, we're doing things like averaging...
The ShotNoise should act on the floating-point values, I think, because it is essentially a property of the source. But the observe() / readout() split forces the shot noise to occur in readout(). So I think the line mentioned above (the one that already does the rounding) would be the place to convert to integer. But it also needs to be done when ShotNoise is not applied, so maybe both the rounding and the conversion to integers should be done in a separate effect.

Some of the other effects should also be performed before the quantization, I think, like the dark current. Not sure about non-linearity; it is currently after the ShotNoise in METIS.
Also, different instruments would have different settings; maybe some are 8 or 32 bit, which would be easiest to configure through the IRDB by introducing a new Effect. I was thinking that either the SummedExposure (to sum or average) or this new effect (to quantize) should be used, depending on mode.
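Such a configurable effect might look roughly like this. The class name, the apply_to signature, and the dtype default are all assumptions for illustration, not the actual ScopeSim Effect API:

```python
import numpy as np

class Quantization:
    """Hypothetical sketch of a quantization effect (not ScopeSim API).

    The dtype would be configured per instrument via the IRDB,
    e.g. uint8, uint16 or uint32.
    """

    def __init__(self, dtype=np.uint32):
        self.dtype = dtype

    def apply_to(self, data):
        # Round down to whole photons, then cast to the configured
        # integer dtype, mirroring the np.floor line in ShotNoise.
        return np.floor(data).astype(self.dtype)

effect = Quantization(dtype=np.uint16)
print(effect.apply_to(np.array([1.2, 3.9])))   # [1 3]
```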
Yeah, all of this should be in #308.
I'm wondering whether we should close this issue now, or whether we should first try to use it in the IRDB. It feels like there is still some work to do, but I have a difficult time articulating what. One of the issues is the interplay with AutoExposure (which is apparently only used in METIS) and SummedExposure (which is used everywhere).
Infrared instruments often have the option to average or stack multiple readouts before writing them to disk. Usually this is indicated by an NDIT>1, and the result is usually a FITS file with a float type. That behaviour is emulated by SummedExposure, which simply multiplies the fluxes by DIT and NDIT. (There should also be an AverageExposure effect, that does not multiply by NDIT.)
So the flux should be float in cases where NDIT > 1. In reality it should be the average (or sum) of NDIT integer values; however, it is probably sufficient to emulate that by simply not quantizing. If NDIT > 1, then the Quantization effect should not be included.
However, it would be preferable if we don't have to explicitly include or exclude the Quantization effect depending on the value of NDIT, especially since, when AutoExposure is used, it is unknown until readout() what the value of NDIT actually is.
Perhaps the Quantization effect should therefore only quantize if NDIT == 1, and not if NDIT > 1?
Another, perhaps better, approach would be to say that if one uses AutoExposure, then one doesn't particularly care about a specific DIT or NDIT. This implies (to me) that it would be expected that the resulting pixel type is always float if AutoExposure is used, and that thus the Quantization effect should not be used in conjunction with AutoExposure at all.
Vice versa, simulating actual raw observations requires forcing NDIT to a specific number (e.g. 1). But this is not possible with the current METIS effect list, because it would require explicitly turning AutoExposure off.
Perhaps we can kill two birds with one stone. I propose that:
- By default, AutoExposure, SummedExposure, and Quantization are included (or at least in METIS).
- Optionally: Allow the user to give either a DIT + NDIT combination, or an exposure time. This is already possible, but the means are different: DIT and NDIT can be given as input through the properties dictionary, e.g. https://github.com/AstarVienna/irdb/blob/dev_master/MICADO/docs/example_notebooks/1_scopesim_MCAO_4mas_galaxy.ipynb , and the exposure time as an argument to readout(), e.g. https://github.com/AstarVienna/irdb/blob/dev_master/METIS/docs/example_notebooks/Introduction_to_Scopesim_for_METIS.ipynb . So we don't necessarily have to do anything, but maybe we want to harmonize the mechanisms if the current situation is deemed confusing. That change might be considered out of scope for this issue, though.
- AutoExposure does nothing if DIT and NDIT are already explicitly given by the user. Otherwise it determines DIT and NDIT as it does now.
- Optionally: SummedExposure has an option to average instead. That means not multiplying by NDIT, but instead dividing the exposure time by NDIT. Or maybe an AveragedExposure effect should be added. But once again, this change might be out of scope for the issue at hand.
- Quantization does nothing if NDIT > 1, or when NDIT and DIT were calculated by AutoExposure. Maybe we need some flag to determine where the value of NDIT came from.
This way, we only need one set of effects, for all three cases:
- NDIT and DIT are explicitly set and NDIT == 1
- NDIT and DIT are explicitly set and NDIT > 1
- NDIT and DIT are automatically derived
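The combined behaviour for these three cases could be sketched as follows; the function name and the ndit_from_autoexposure flag are hypothetical, just to make the proposed logic concrete:

```python
import numpy as np

def maybe_quantize(data, ndit, ndit_from_autoexposure, dtype=np.uint32):
    """Quantize only explicitly requested single exposures (sketch, not ScopeSim API)."""
    if ndit_from_autoexposure:
        # Case 3: the user gave an exposure time, not DIT/NDIT -> keep float
        return data
    if ndit > 1:
        # Case 2: summed/averaged readouts stay float
        return data
    # Case 1: NDIT == 1, explicitly set -> whole photons at a fixed bit depth
    return np.floor(data).astype(dtype)
```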