gtaylor/python-colormath

Incompatibility with autograd package.

ThomasAuzinger opened this issue · 3 comments

I am using the colormath package together with the autograd package, a combination that does not work. So far, I have identified two problems that seem to be easily fixable:

  1. Float conversions are not supported by autograd:
    The colormath package uses float casts, e.g., self.spec_340nm = float(spec_340nm) (link); however, these fail when autograd substitutes its ArrayNode data type for conventional numbers (a minimal reproduction follows this list).

  2. The builtin math package is not supported by autograd:
    The colormath package uses explicit calls to math.pow, e.g., nonlinear_channels[channel] = 1.055 * math.pow(v, 1 / 2.4) - 0.055 (link); however, the math package is not supported by autograd.
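
A minimal reproduction of problem 1, assuming autograd is installed; store_spectrum is a hypothetical stand-in for a colormath constructor, not the library's actual code:

    from autograd import grad

    def store_spectrum(spec_340nm):
        # colormath does: self.spec_340nm = float(spec_340nm)
        # float() asks the argument for a real float, which autograd's tracing
        # type (ArrayNode in older releases, ArrayBox in newer ones) cannot
        # provide without discarding its gradient history.
        stored = float(spec_340nm)
        return stored * 2.0

    grad(store_spectrum)(1.0)   # fails (typically with a TypeError) at the float() cast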

I am aware that it is not the job of the maintainers of colormath to keep it compatible with every other package; in this case, however, I think the required changes are trivial and should not cause problems. Thus, I wanted to know if there is any reason not to do the following:

  1. Remove explicit float conversions in colormath.

  2. Replace the math.pow function with the corresponding ** operator, which is more portable (a sketch of the change follows this list).
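
A sketch of change 2 applied to the line quoted above; nonlinear_channels, v, and channel stand in for colormath's local variables:

    import math

    v = 0.5
    channel = 'r'
    nonlinear_channels = {}

    # current form: math.pow coerces v to a plain float
    nonlinear_channels[channel] = 1.055 * math.pow(v, 1 / 2.4) - 0.055

    # proposed form: ** dispatches to v's own __pow__, so wrapper types such as
    # autograd's ArrayNode keep working, while plain floats behave identically
    nonlinear_channels[channel] = 1.055 * v ** (1 / 2.4) - 0.055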

If these changes are acceptable, I would prepare a corresponding pull request.

Cheers,
Thomas

Hello @ThomasAuzinger,

We'd need to switch to Python's future division (from __future__ import division) to drop those casts safely under Python 2.x, which would be a pretty major undertaking at this point. I'm not super motivated to do all of the verification work to ensure such a big change doesn't break our math. With that said, I'm happy to change my tune if enough people request this.
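
To illustrate the concern with an assumed example (these functions are not from colormath): without the casts, integer inputs fall into floor division on 2.x.

    # from __future__ import division   # the module-level change referred to above

    def scale(spec_340nm):
        value = float(spec_340nm)   # current colormath style: inputs become floats
        return value / 2            # 1.5 on both Python 2 and Python 3

    def scale_without_cast(spec_340nm):
        return spec_340nm / 2       # Python 2 with an int argument: floor division, i.e. 1

    print(scale(3), scale_without_cast(3))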

I would accept a math.pow -> ** PR, though.

I admit to being confused as to how autograd just doesn't flat out support Python primitives like these, though. That sounds all kinds of weird to me.

Hi @gtaylor,

Thanks for the quick reply. I will prepare the ** pull request.

I admit to being confused as to how autograd just doesn't flat out support Python primitives like these, though. That sounds all kinds of weird to me.

I am not completely sure what you mean by 'flat out support Python primitives' (i.e., is 'flat out' the verb or 'support'?). Anyhow, I wrote an issue for autograd in this respect that can be found here.
In summary, I think the issue is that a __float__(self) overload must return a float type, whereas autograd uses its own data type, ArrayNode, which tracks every function applied to its values in order to compute the gradient. These two requirements seem incompatible.
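
A small sketch of that constraint, using a hypothetical tracing class rather than autograd's real ArrayNode:

    class TracedValue:
        def __init__(self, value):
            self.value = value
            self.tape = []          # records applied operations for the gradient

        def __float__(self):
            # CPython requires __float__ to return an actual float; returning
            # another TracedValue raises "TypeError: __float__ returned non-float".
            return float(self.value)

    x = TracedValue(2.0)
    y = float(x)    # y is a plain float; whatever happens to y afterwards is
                    # invisible to x.tape, so the gradient trace is broken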

Going to go ahead and close this as not actionable.