
sift discretises 1.38 µm too much


When investigating the 1.38 µm channel reflectance, it appears SIFT discretises it in steps of 0.01 reflectance units. However, the total range of the area under investigation spans only 0.002 to 0.015, so a discretisation this coarse loses nearly all of the detail.

SIFT screenshot when looking at the 1.38 µm channel:

[screenshot]

Corresponding histogram for the values:

[histogram]

So, by default, this is not an optimal visualisation.

Looking at the channel in radiance values is even worse. There the range is between 0.05 and 0.30 mW·m⁻²·sr⁻¹·cm, but when selecting those limits the image shows as monochrome pink.

Is it possible for the user to affect this discretisation?

This algebraic layer, stretched between 0.9 and 1.0, is all black, despite the histogram showing plenty of values in that range:

[screenshot]

I'm trying to understand where this is happening based on my knowledge of the old pre-EUM SIFT (just in case something has changed about this that I wouldn't know of).

In your first histogram, there are very clear gaps between the vertical bars. Is this showing the issue you're seeing, or is this expected for the data? The histogram should be using the real data, which should be 32-bit floats, or maybe 64-bit depending on the version of the Satpy reader used or whether SIFT forces 32-bit. Either way, the data that is read with Satpy, cached by SIFT, and checked here should not be hitting anything but 32-bit float limitations.
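For reference, 32-bit float resolution near these reflectance values is orders of magnitude finer than the steps in the screenshot; a quick check with numpy:

```python
import numpy as np

# Distance to the next representable float32 value ("ulp") near the
# reflectances in question -- far below the ~0.01 steps seen in SIFT.
print(np.spacing(np.float32(0.01)))   # ~9.3e-10
print(np.spacing(np.float32(0.002)))  # ~2.3e-10
```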

The "raw" floating point band data should be sent to the GPU and then the colormap applied there. If we're hitting quantization issues there I would have expected them at smaller decimal values. If the issue is only in the visualization then it could also be the colormap if we're "cheating" and only sending 255 colors instead of telling the GPU to interpolate continuously between two colors.

The quantisation step due to the L1c integer encoding for the 1.38 µm channel is ~0.0002 reflectance units (scale_factor · π / solar_irradiance = 0.005 · 3.14 / 69.7), which is at the level of the gaps in the histogram, so the histogram looks plausible to me.
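Spelled out, the same arithmetic (values taken from the comment above):

```python
import math

scale_factor = 0.005      # L1c integer encoding scale factor
solar_irradiance = 69.7   # solar irradiance for the 1.38 um channel

refl_step = scale_factor * math.pi / solar_irradiance
print(refl_step)  # ~0.000225 -> matches the gaps between histogram bars
```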

Seems to me that the issue is only in the visualisation, so an issue with the handling of the colors (steps) sounds plausible... is it just a configuration issue or is this more complicated to fix?

I can't remember all of the configuration options added as part of the EUM updates, but if the general idea of putting the floating point data into the GPU via the ImageVisual still holds then this will be harder to figure out or will be outside our control to some extent.

For what it's worth, I'd be curious whether changing colormaps has different effects on the visualized result. That said, the "grays" colormap is something I would expect to work as-is, because it is very, very simple:

https://github.com/vispy/vispy/blob/b72a201ed7e00c34d2ec676d23d45e3c62f618ed/vispy/color/colormap.py#L546-L557

The main point is that it takes a normalized (0 to 1) data value and repeats it in the R, G, and B channels of the output color. It doesn't get simpler than that.
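In Python terms, that GLSL map boils down to roughly this (a sketch for illustration, not the actual vispy code):

```python
def grays(t):
    """Map a normalized data value t in [0, 1] to an RGBA gray:
    the value is simply repeated in R, G and B, with opaque alpha."""
    return (t, t, t, 1.0)
```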

Then we get into how the color limits are actually applied to floating point data, which happens here:

https://github.com/vispy/vispy/blob/b72a201ed7e00c34d2ec676d23d45e3c62f618ed/vispy/visuals/image.py#L91-L101

Which is pretty basic too.
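In Python terms, that color-limit step is just a linear stretch (a sketch of the behaviour, not the actual shader code):

```python
import numpy as np

def apply_clim(data, clim):
    """Stretch data linearly so clim[0] -> 0 and clim[1] -> 1,
    which is the normalized value the colormap then receives."""
    lo, hi = clim
    return (np.asarray(data, dtype=np.float32) - lo) / (hi - lo)

# The 1.38 um case from above:
print(apply_clim([0.002, 0.0085, 0.015], clim=(0.002, 0.015)))
# -> [0.   0.5  1. ]
```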

So we might be talking about GPU floating point precision (which is less than 32-bit in many cases, I think). If the color limits are being rounded before being handed to the GPU (by the SIFT GUI?), maybe that could be doing something here?
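One hypothetical mechanism that would reproduce the symptom (not confirmed to be what SIFT/vispy actually does): if the data were quantized to 8 bits over the full 0–1 reflectance range *before* the color limits are applied, the 0.002–0.015 sub-range would collapse to a handful of levels:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.uniform(0.002, 0.015, 100_000).astype(np.float32)

# Hypothetical 8-bit quantization over the full 0..1 range, as an
# 8-bit normalized texture upload would do.
quantized = np.round(data * 255) / 255

print(np.unique(quantized).size)  # 4 distinct levels left in 0.002..0.015
```

Four levels over that sub-range is suspiciously close to the handful of colors seen in the screenshots, though this is only one plausible explanation.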

Lastly, if this is something GPU level that isn't standard for all OpenGL implementations, then I'm wondering if doing this on a different GPU gives different results.

Changing colormap doesn't help; indeed, the spatial patterns repeat 1:1 between colormaps. Below is an example where both colormaps show just 4 colors for the selected area, even though there are many different values between 0.01 and 0.02 in the histogram:
[two screenshots: the same area rendered with two different colormaps, each showing only 4 colors]