Hyperparameter tuning
AhmedHussKhalifa opened this issue · 3 comments
Hey,
Thank you for your great effort in creating this tool.
Is there a way to tune hyperparameters using your current framework, or should I add Ray Tune to your framework?
Hi @AhmedHussKhalifa
Thank you for the kind words!
It's a good question; Ray Tune looks like a good option for hyperparameter tuning in general, but I feel it is difficult for torchdistill to officially support the package (or integrate it into the existing example code) at least right now, because it would first need to support Ray.
On top of that, I'd like to keep the mapping of 1 yaml config file -> 1 trial (fixed hyperparameters), which will enable others to reproduce the reported results quickly and keep the training log file compact. So at this time, my recommendation is to create multiple yaml config files (one set of hyperparameters -> one yaml config file) and run them with a shell script, or distribute the jobs to different nodes if you are using HPC.
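For example, something like the following could loop over configs (a minimal sketch, equivalent to a shell loop; the example script name, config paths, and the `--config` flag are placeholders to adapt to whichever example script you actually use):

```python
import subprocess

# Hypothetical config files, one per hyperparameter set
config_paths = [
    './configs/sample/kd-temperature4.yaml',
    './configs/sample/kd-temperature8.yaml',
]

for config_path in config_paths:
    # One yaml config file -> one trial
    subprocess.run(
        ['python', 'examples/image_classification.py', '--config', config_path],
        check=True,
    )
```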
Hey,
Thank you for your reply.
I have a question: if I want to increase the batch size, one way to achieve it is to decrease the complexity of how you run the teacher, right?
I was thinking of generating the logits from the teacher model and saving them. I know the training process needs random sampling, so I would save them as one pickle file, load these vectors with a customized dataloader, and use another one responsible for loading the images.
I would like to have your input on this modification.
> I have a question: if I want to increase the batch size, one way to achieve it is to decrease the complexity of how you run the teacher, right?
Unless you hit the limit of your computing resources, e.g., GPU memory, RAM, etc., it is not always necessary to reduce the complexity of extracting the teacher's output(s).
One way to stay within your resource limits without decreasing the effective batch size is to increase `grad_accum_step`; e.g., `grad_accum_step: 2` means gradients will be accumulated for 2 iterations, and then the optimizer will update its parameters.
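For reference, this is roughly what gradient accumulation does in a plain PyTorch training loop (a generic, self-contained sketch of the idea, not torchdistill's internal code; the toy model and data are placeholders):

```python
import torch
from torch import nn, optim

# Toy setup just to make the sketch runnable; replace with your student model and data loader
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
data_loader = [(torch.randn(4, 10), torch.randint(0, 2, (4,))) for _ in range(8)]

grad_accum_step = 2  # corresponds to grad_accum_step: 2 in the yaml config

optimizer.zero_grad()
for i, (inputs, targets) in enumerate(data_loader):
    # Scale the loss so the accumulated gradient matches one larger batch
    loss = criterion(model(inputs), targets) / grad_accum_step
    loss.backward()
    if (i + 1) % grad_accum_step == 0:
        optimizer.step()       # update parameters every grad_accum_step iterations
        optimizer.zero_grad()
```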
> I know the training process needs random sampling, so I would save them as one pickle file, load these vectors with a customized dataloader, and use another one responsible for loading the images.
Does it mean saving the teacher's output given an input? If so, it's already implemented in torchdistill.
By specifying a cache directory in your yaml file, e.g., `cache_dir: './cache/'` (or another directory path), the outputs from the teacher model will be saved during the first epoch.
From the second epoch on, the saved outputs will be loaded instead of running the teacher model. Note that this approach is not effective when you use data augmentation, e.g., random crop, horizontal flip, etc., when transforming inputs, as the saved outputs are associated with the corresponding input indices defined in the Dataset module.
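To illustrate the idea of index-keyed caching (a generic sketch only, not torchdistill's actual implementation; the file layout and function name are assumptions):

```python
import os
import torch

def teacher_forward_with_cache(teacher_model, inputs, indices, cache_dir='./cache/'):
    """Return teacher outputs keyed by dataset index, running the teacher only on cache misses."""
    os.makedirs(cache_dir, exist_ok=True)
    outputs = []
    for x, idx in zip(inputs, indices):
        cache_path = os.path.join(cache_dir, f'{int(idx)}.pt')
        if os.path.isfile(cache_path):
            # Reuse the output saved during a previous epoch
            outputs.append(torch.load(cache_path))
        else:
            with torch.no_grad():
                out = teacher_model(x.unsqueeze(0)).squeeze(0)
            torch.save(out, cache_path)
            outputs.append(out)
    return torch.stack(outputs)
```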
FYI, torchdistill's default pipeline will apply `torch.no_grad` to the teacher model unless it has any updatable parameters during training.
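In generic PyTorch terms, that behavior looks roughly like this (a sketch of the idea, not torchdistill's actual code):

```python
import torch

def run_teacher(teacher_model, inputs):
    # Skip building the autograd graph when no teacher parameter needs updating
    if any(p.requires_grad for p in teacher_model.parameters()):
        return teacher_model(inputs)
    with torch.no_grad():
        return teacher_model(inputs)
```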