imshow in 2.1: color resolution depends on outliers?
Closed this issue · 16 comments
Since 2.1, the effective color resolution of imshow(x, vmin=-1, vmax=1) seems to depend on large outliers in x.
Is this intended behavior?
Thanks for clarifying!
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

# Baseline image: a horizontal gradient from -1 to 1.
x = np.linspace(-1, 1, 500)
x = np.ones([20, 1]) * x[np.newaxis, :]

# Copies with a single outlier of increasing magnitude.
x1 = x.copy()
x1[0, 0] = 1e16
x2 = x.copy()
x2[0, 0] = 1e17

# The three images should look identical except for one pixel.
_, axes = plt.subplots(nrows=3)
axes[0].imshow(x, vmin=-1, vmax=1)
axes[1].imshow(x1, vmin=-1, vmax=1)
axes[2].imshow(x2, vmin=-1, vmax=1)
axes[0].set_title(mpl.__version__)
plt.show()
Matplotlib version
- Operating system: Linux
- Matplotlib version: 2.1.0 (expected behavior in 2.0.2)
- Matplotlib backend: Qt5Agg
- Python version: 3.6.4
- Other libraries: numpy 1.13.3
- via conda
Hmm, I'm not sure whether this is intended or not; it bisects to 12c27f3, so maybe @tacaswell will know more?
We reworked all of this for 2.1.2 in #10133. Without looking carefully, does that fix this?
Pretty sure it does: it was a round-off error that caused this, and it's now been fixed for large floats.
It didn't seem fixed when I ran the above test on the master branch.
Agreed. The image is stored as float64, so you lose resolution above 2^53 ≈ 9e15. I'm not sure how this worked in 2.1, but that's the problem here.
I tried to set the dtype to float128 for big numbers, but I was told there is no such thing...
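The float64 resolution limit mentioned above can be checked directly with numpy (a quick sketch, nothing matplotlib-specific):

```python
import numpy as np

# float64 carries a 53-bit significand, so above 2**53 (~9.0e15)
# consecutive integers can no longer be distinguished.
big = 2.0 ** 53
print(big + 1.0 == big)   # True: the +1 is rounded away
print(np.spacing(1e16))   # 2.0: adjacent float64 values at 1e16 differ by 2
print(np.spacing(1e17))   # 16.0: the gap grows with magnitude
```

At 1e17 the representable values are 16 apart, so any sub-unit detail near the outlier is unrepresentable once it enters a float64 computation.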
Image interpolation is the bug that just keeps giving!
The chain of bugs and fixes here goes:
- While doing 2.0 we noticed that we were interpolating after doing norm + RGB mapping, which leads to colors not in the colormap in the final image, so we moved interpolation to the normed, but not color-mapped, data.
- This introduced #8012, so we masked 'invalid' pixels more carefully when color mapping (#8024).
- As a consequence, over/under/invalid pixels effectively poisoned everything in their kernel window (#8631), so we moved interpolation to the raw data (#8966).
- We now have numerics issues for data with large dynamic range (this bug plus #10072, which was fixed by #10133).
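The kernel-window poisoning in the third step can be illustrated with a toy 1D example, using a 3-tap averaging kernel as a stand-in for the real interpolation (not matplotlib's actual code path):

```python
import numpy as np

# One huge sample surrounded by well-behaved data.
data = np.array([0.0, 0.5, 1e16, 0.5, 0.0])
kernel = np.array([0.25, 0.5, 0.25])

# Interpolating the raw data: the outlier leaks into every pixel
# its kernel window touches.
raw = np.convolve(data, kernel, mode="same")

# Interpolating after clipping to [vmin, vmax]: the outlier only
# saturates its own pixel.
clipped = np.convolve(np.clip(data, -1, 1), kernel, mode="same")

print(raw)      # neighbors of the outlier are blown out to ~1e15
print(clipped)  # neighbors keep sensible values
```

This is the tradeoff between interpolating raw versus normed data: clip first and outliers stay local but their true magnitude is discarded; interpolate first and the numerics are faithful but outliers dominate their neighborhood.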
The ringing / saturation in the lower left of the test is correct. If you normalize first, you are effectively clipping the impact of the outliers.
Well, we could do something like pre-clip the data to some suitably large window around vmin/vmax, but not one 17 orders of magnitude wider. That should preserve both behaviors.
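A sketch of what such a pre-clip might look like; the helper name and the pad factor are made up for illustration, not a proposed matplotlib API:

```python
import numpy as np

def preclip(data, vmin, vmax, pad=8.0):
    """Hypothetical pre-clip: limit data to a window a few spans wide
    around [vmin, vmax], so outliers still over/under-saturate but
    cannot destroy float64 resolution inside the range of interest."""
    span = vmax - vmin
    return np.clip(data, vmin - pad * span, vmax + pad * span)

x = np.array([-0.5, 0.3, 1e17])
print(preclip(x, -1.0, 1.0))  # clipped to [-17, 17] -> [-0.5, 0.3, 17.0]
```

In-range values pass through untouched, while the 1e17 outlier is reduced to something float64 interpolation can handle without swamping its neighbors.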
Or folks could mask their invalid data instead of passing in huge (I assume invalid) values.
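For reference, masking could look like the sketch below; the 1e3 cutoff is an arbitrary illustrative threshold, not a matplotlib default:

```python
import numpy as np

# Mask non-finite and out-of-range samples instead of leaving huge
# sentinel values in the array; imshow draws masked pixels in the
# colormap's "bad" color, and they no longer enter the interpolation.
x = np.ones((20, 1)) * np.linspace(-1, 1, 500)
x[0, 0] = 1e16
masked = np.ma.masked_outside(np.ma.masked_invalid(x), -1e3, 1e3)
print(masked.mask[0, 0])  # True: the outlier is masked out
# axes.imshow(masked, vmin=-1, vmax=1) then behaves like the clean case
```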
I would not assume that huge data points are automatically invalid. In our application, most of the image has values between -1 and 1, but it can include regions with values approaching inf.
I would find it strange to have to mask some of the data before plotting it. I'm also not aware that you have to do anything like this in Matlab or gnuplot.