automl/trivialaugment

Which yaml file to use for TA (Wide)

zhengyu998 opened this issue

Hello! Which config files did you use to train WRN-28-10 on CIFAR-10/100 for TA (Wide), with and without an augmented batch?

Hey, good point. There are two versions of the paper: in the first we focused on the RA search space, while in the second we focus on the wide search space. I did not update the configs after updating the paper. I will add the relevant configs; for now you can use this one: https://github.com/automl/trivialaugment/blob/master/confs/wresnet28x10_cifar10_b128_maxlr.1_weighteduniaug_w%3D0%2C1_widesesp_nowarmup_8timesbatch_200epochs.yaml

To make it a standard training run, remove all_workers_use_the_same_batches and reduce the number of GPUs to 1.
To switch to CIFAR-100, replace cifar10 with cifar100 in the config; see the sketch below.
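As a rough illustration of those two edits, here is a minimal sketch of what the modified config could look like. The key names (`model`, `dataset`, `batch`, `epoch`, `lr`) are assumptions based on the style of the files in the repo's confs/ directory; the authoritative key names and all remaining settings should be taken from the linked YAML file, and the GPU count is changed wherever your launch setup configures it, not in this snippet.

```yaml
# Hypothetical excerpt, for illustration only.
# Keep every other setting from the linked config unchanged.
model:
  type: wresnet28_10
dataset: cifar100   # was cifar10; swap the dataset name here for CIFAR-100
batch: 128
epoch: 200
lr: 0.1
# all_workers_use_the_same_batches: True   # delete this key for a standard (non-augmented-batch) run
```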

I have now updated the configs to include wide-search-space variants for all experiments in the main table (Table 2).