poor conversion between lchab and srgb
Closed this issue · 5 comments
lchab may be converted to an srgb value that is not optimal. For example, LCHabColor(89, 7, 75) is converted to rgb(231, 222, 211), but rgb(233, 222, 211) would be a better target.
rgb(231, 222, 211) converted back to lchab is (88.9, 6.8, 79.8)
rgb(233, 222, 211) converted back to lchab is (88.7, 6.9, 74.2), which is closer to (89, 7, 75)
It is also possible that the rgb to lchab conversion is broken: judging by http://www.brucelindbloom.com/index.html?Math.html, the lchab -> srgb conversion works and srgb -> lchab is the broken one.
Curiously, http://www.colorhexa.com/ gives different results for the lchab -> srgb conversion but the same ones for srgb -> lchab.
According to brucelindbloom lchab(89, 7, 75) = rgb(231.3, 222.1, 210.7)
According to colormath it is (231, 222, 211)
According to colorhexa.com it is (233, 222, 211).
According to brucelindbloom rgb(231, 222, 211) = lchab(89.0, 6.8, 74.7)
According to colormath it is lchab(88.6, 6.7, 79.5)
According to colorhexa.com it is lchab(88.9, 6.6, 79.5)
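For what it's worth, the forward leg can be checked by hand against Bruce Lindbloom's published formulas. The sketch below is my own (not taken from any of these tools) and assumes sRGB's native D65 white with no chromatic adaptation; under that assumption it lands exactly on colorhexa's (233, 222, 211):

```python
# Hand-rolled lchab -> srgb per Bruce Lindbloom's formulas,
# assuming sRGB's native D65 white point and no chromatic adaptation.
import math

def lchab_to_srgb_d65(L, C, H):
    # LCHab -> Lab
    a = C * math.cos(math.radians(H))
    b = C * math.sin(math.radians(H))
    # Lab -> XYZ (D65 reference white)
    fy = (L + 16) / 116
    fx = fy + a / 500
    fz = fy - b / 200
    eps, kappa = 216 / 24389, 24389 / 27
    def f_inv(t):
        t3 = t ** 3
        return t3 if t3 > eps else (116 * t - 16) / kappa
    Xn, Yn, Zn = 0.95047, 1.0, 1.08883  # D65 white
    X, Y, Z = f_inv(fx) * Xn, f_inv(fy) * Yn, f_inv(fz) * Zn
    # XYZ -> linear sRGB (sRGB D65 matrix)
    r = 3.2404542 * X - 1.5371385 * Y - 0.4985314 * Z
    g = -0.9692660 * X + 1.8760108 * Y + 0.0415560 * Z
    bl = 0.0556434 * X - 0.2040259 * Y + 1.0572252 * Z
    # sRGB gamma encoding, scaled to 8-bit
    def enc(v):
        return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055
    return tuple(round(enc(v) * 255) for v in (r, g, bl))

print(lchab_to_srgb_d65(89, 7, 75))  # -> (233, 222, 211)
```

So colorhexa's lchab -> srgb result is consistent with plain D65 and no adaptation.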
Either something is broken somewhere, or I am missing a parameter that is set differently in these tools. I see no way to set the reference white and adaptation in python-colormath or colorhexa, so it seems that D50 and Bradford may be assumed to be the defaults. sRGB can almost certainly be assumed to be the default in colorhexa.
from colormath.color_conversions import convert_color
from colormath.color_objects import LCHabColor, sRGBColor

lch = LCHabColor(89, 7, 75)
rgb = convert_color(lch, sRGBColor)
print(rgb.get_upscaled_value_tuple())

lch = convert_color(rgb, LCHabColor)
print(lch.get_value_tuple())

# Upscaled channels are in 0-255, so divide by 255, not 256.
other_rgb = sRGBColor(233.0 / 255, 222.0 / 255, 211.0 / 255)
lch = convert_color(other_rgb, LCHabColor)
print(lch.get_value_tuple())

other_rgb = sRGBColor(231.0 / 255, 222.0 / 255, 211.0 / 255)
lch = convert_color(other_rgb, LCHabColor)
print(lch.get_value_tuple())
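The reverse leg can be cross-checked the same way without any library. This is a hand-rolled sketch of Lindbloom's srgb -> lchab math (my own, again assuming D65 and no adaptation); it comes out near (88.9, 6.6, 79.5), i.e. the colorhexa/colormath ballpark above:

```python
# Hand-rolled srgb -> lchab per Bruce Lindbloom's formulas,
# assuming D65 (sRGB's native white) and no chromatic adaptation.
import math

def srgb_to_lchab_d65(r8, g8, b8):
    # inverse sRGB gamma, 8-bit input
    def lin(v):
        v /= 255
        return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r8), lin(g8), lin(b8)
    # linear sRGB -> XYZ (sRGB D65 matrix)
    X = 0.4124564 * r + 0.3575761 * g + 0.1804375 * b
    Y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
    Z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * b
    # XYZ -> Lab against the D65 white
    Xn, Yn, Zn = 0.95047, 1.0, 1.08883
    eps, kappa = 216 / 24389, 24389 / 27
    def f(t):
        return t ** (1 / 3) if t > eps else (kappa * t + 16) / 116
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    L, a, bb = 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
    # Lab -> LCHab
    C = math.hypot(a, bb)
    H = math.degrees(math.atan2(bb, a)) % 360
    return L, C, H

L, C, H = srgb_to_lchab_d65(231, 222, 211)
print(round(L, 2), round(C, 2), round(H, 2))  # ~ 88.89 6.59 79.54
```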
Current python-colormath seems correct overall; the differences are probably just related to whitepoint assumptions and the CAT (chromatic adaptation transform) being used:
import numpy as np

import colour
# Assuming linear integer values.
sRGB = np.array([231., 222., 211.])
sRGB /= 255.
D50 = colour.ILLUMINANTS['cie_2_1931']['D50']
# CAT02 CAT by default in Colour.
print(colour.Lab_to_LCHab(colour.XYZ_to_Lab(colour.sRGB_to_XYZ(sRGB, D50), D50)))
# [ 88.98067605 6.90235558 75.13893553]
print(colour.Lab_to_LCHab(colour.XYZ_to_Lab(colour.sRGB_to_XYZ(sRGB, D50, 'Bradford'), D50)))
# [ 88.96687738 6.8042839 74.74557839]
D65 = colour.ILLUMINANTS['cie_2_1931']['D65']
print(colour.Lab_to_LCHab(colour.XYZ_to_Lab(colour.sRGB_to_XYZ(sRGB), D65)))
# [ 88.89177036 6.59010903 79.54149464]
So the problem is that the lchab -> srgb and srgb -> lchab conversions assume different default whitepoints?
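To illustrate, the D50 + Bradford figures can also be reproduced without either library. This sketch is my own; the Bradford matrices and reference whites are Lindbloom's published values, and the starting XYZ is precomputed from rgb(231, 222, 211) via the sRGB D65 matrix:

```python
# Sketch of the D50 + Bradford path: take XYZ of rgb(231, 222, 211)
# under D65, Bradford-adapt D65 -> D50, then compute Lab/LCHab
# against the D50 white. Matrices/whites per Bruce Lindbloom.
import math

M_B = [[0.8951, 0.2664, -0.1614],
       [-0.7502, 1.7135, 0.0367],
       [0.0389, -0.0685, 1.0296]]
M_B_INV = [[0.9869929, -0.1470543, 0.1599627],
           [0.4323053, 0.5183603, 0.0492912],
           [-0.0085287, 0.0400428, 0.9684867]]
D65 = [0.95047, 1.0, 1.08883]
D50 = [0.96422, 1.0, 0.82521]

def mul(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def bradford_d65_to_d50(xyz):
    src, dst = mul(M_B, D65), mul(M_B, D50)
    cone = mul(M_B, xyz)
    scaled = [c * d / s for c, d, s in zip(cone, dst, src)]
    return mul(M_B_INV, scaled)

# Precomputed XYZ of rgb(231, 222, 211): inverse sRGB gamma + D65 matrix.
xyz_d65 = [0.708328, 0.739353, 0.721548]
X, Y, Z = bradford_d65_to_d50(xyz_d65)

def f(t):
    return t ** (1 / 3) if t > 216 / 24389 else (24389 / 27 * t + 16) / 116

fx, fy, fz = f(X / D50[0]), f(Y / D50[1]), f(Z / D50[2])
L, a, b = 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
C, H = math.hypot(a, b), math.degrees(math.atan2(b, a)) % 360
print(round(L, 1), round(C, 1), round(H, 1))  # close to [88.97, 6.80, 74.75]
```

That matches the Bradford result from colour above, so the H of ~74.7 versus ~79.5 really does come down to the D50-plus-Bradford versus plain-D65 assumption.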
@KelSolaar How can I run this code? Even with import numpy as np added, it still crashes with AttributeError: 'module' object has no attribute 'ILLUMINANTS'
on
D50 = colour.ILLUMINANTS['cie_2_1931']['D50']
It uses an alternative colour science package for Python that I'm contributing to: Colour
I'm watching Python-Colormath closely though :)
python-colormath's behaviour is correct here; the discrepancies are probably a mix of different CAT and precision settings, as per #56 (comment)