google-research/augmix

Augmentations used in AugMix

kiranchari opened this issue · 3 comments

Hi,

I have a couple of questions about the augmentations used in AugMix:

  1. The AugMix paper mentions that the contrast augmentation was removed from AugMix because it would overlap with one of the tested corruptions (Contrast), yet I see that AutoContrast is still used in the code: https://github.com/google-research/augmix/blob/master/augmentations.py#L141

  2. I am curious how or why the augmentations in AugMix improve performance on these corruptions, since the connection between them is not immediately clear. Do you have a take on this, perhaps from an ablation study of the augmentations in AugMix?

Thank you.

Both AutoContrast and histogram equalization (augmentations.equalize) can vary the distribution of contrast across images. Perhaps this provides some contrast robustness?

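For concreteness, a minimal sketch of the PIL operations that AugMix's augmentations.py wraps ('example.jpg' is a placeholder path):

from PIL import Image, ImageOps

img = Image.open('example.jpg').convert('RGB')

# AutoContrast: remap each channel's histogram so the darkest pixel
# maps to 0 and the lightest to 255, stretching the value range.
stretched = ImageOps.autocontrast(img)

# Equalize: flatten the histogram so pixel values are spread roughly
# uniformly over 0..255, which also changes the apparent contrast.
flattened = ImageOps.equalize(img)
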
AutoContrast in PIL is a complicated histogram-based method that scales the image to take up the full range from min to max values, while the "contrast" corruption in ImageNet-C merely squishes inputs closer to their mean.

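A toy NumPy comparison of the two behaviors (ignoring PIL's per-channel histogram and cutoff handling):

import numpy as np

x = np.array([60., 100., 140.])  # toy pixel values

# AutoContrast-style stretch: min -> 0, max -> 255 (range expands).
print((x - x.min()) / (x.max() - x.min()) * 255)  # [  0.  127.5 255. ]

# ImageNet-C-style squish with c = 0.4: deviations from the mean shrink.
print((x - x.mean()) * 0.4 + x.mean())  # [ 84. 100. 116.]
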
See, for example, the TensorFlow implementation in the AutoAugment code:

https://github.com/tensorflow/tpu/blob/8462d083dd89489a79e3200bcc8d4063bf362186/models/official/efficientnet/autoaugment.py#L285

For reference, the ImageNet-C contrast corruption, which interpolates each pixel toward the per-channel mean:

import numpy as np

def contrast(x, severity=1):
    c = [0.4, 0.3, 0.2, 0.1, 0.05][severity - 1]

    x = np.array(x) / 255.0
    # Per-channel mean over the spatial dimensions (H, W).
    means = np.mean(x, axis=(0, 1), keepdims=True)
    # Shrink deviations from the mean by factor c, clip, and rescale to 0..255.
    return np.clip((x - means) * c + means, 0, 1) * 255
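
Usage on a PIL image ('example.jpg' is again a placeholder; the function returns a float array in [0, 255]):

from PIL import Image

img = Image.open('example.jpg').convert('RGB')
corrupted = contrast(img, severity=3)
Image.fromarray(corrupted.astype(np.uint8)).save('contrast_s3.jpg')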