planetlabs/color_balance

Why scale bands?

Closed this issue · 2 comments

@jreiberkyle Can you explain the purpose of the lines below (from colorimage.py)?

Attempting to bump this to 16 bits, each band gets scaled by 16 / 2 ** 8. Under integer division, the entire band turns into an array of zeros. Even when float division is enforced, the resulting array is not useful for the purpose of color balance.

# If 16-bit, convert to 8-bit, assuming 16-bit image is using the
# entire bit depth unless bit depth is given
if band.dtype == numpy.uint16:
    if bit_depth is not None:
        band = (band / (bit_depth / 2 ** 8))
    else:
        band = (band / 2 ** 8)
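A minimal sketch of the failure mode described above, assuming hypothetical 12-bit data in a uint16 band and a caller who passes the bit count (12) where the code expects something larger:

```python
import numpy as np

# Hypothetical 12-bit data stored in a uint16 container (max value 4095).
band = np.array([0, 1024, 4095], dtype=np.uint16)

# Caller passes the intensity bit depth (12), per the documentation.
bit_depth = 12

# Under floor (integer) division the divisor collapses to zero, and
# numpy integer division by zero then zeroes the whole band.
divisor = bit_depth // 2 ** 8
assert divisor == 0

# Even with true (float) division the scale is nonsense: the band is
# multiplied by ~21 instead of being compressed into the 0-255 range.
scaled = band / (bit_depth / 2 ** 8)
assert scaled.max() > 255  # not a valid 8-bit image
```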

This code is for dealing with 16-bit images that only have an intensity bit depth of 12 bits. It looks like there is a discrepancy between the way the code handles bit_depth and the way it's documented. The code handles it as if it were the maximum intensity value (2 ** 12 for a 12-bit image), while the documentation and naming indicate that it should be treated as the intensity bit depth (12 for a 12-bit image).
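Under that reading, passing the maximum intensity value rather than the bit count makes the scaling behave; a sketch with hypothetical band values:

```python
import numpy as np

# Hypothetical 12-bit data stored in a uint16 container.
band = np.array([0, 2048, 4095], dtype=np.uint16)

# The code treats bit_depth as the maximum intensity value
# (2 ** 12 for 12-bit data), not the bit count.
bit_depth = 2 ** 12

# Divisor is 4096 / 256 = 16.0, compressing 0-4095 into 0-255.
scaled = (band / (bit_depth / 2 ** 8)).astype(np.uint8)
assert scaled.max() == 255  # full 8-bit range recovered
```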

I see, thanks for the clarification. bit_depth should be renamed to something like dtype_max to reflect that it holds the maximum intensity value.