keras-team/keras-tuner

During distributed tuning, specify which hyperparameter sets can be used on a "KERASTUNER_TUNER_ID"


I am running distributed tuning using the chief/tuner# method. I am tuning on 8x NVIDIA A100s, 8x Tesla V100s, and 2x RTX 8000s. One epoch takes about an hour, and I get convergence around 10 epochs, so tuning will take a while. However, I also have access to almost a hundred Tesla M1060s; the problem is that they only have 4 GB of RAM each, and depending on the hyperparameters my models vary from 3 GB to 8 GB.

It would be helpful if I could run only the smaller models on the older M1060s while running the larger ones (more layers and filters) on the newer GPUs. If I could somehow limit the search space of a given trial or tuner based on its KERASTUNER_TUNER_ID, that would solve this.
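For illustration, here is a minimal sketch of the kind of per-worker conditioning I mean, written as a workaround attempt: the `build` function reads the `KERASTUNER_TUNER_ID` environment variable (which the chief/tuner setup already uses) and caps the memory-heavy hyperparameters on workers assumed to be the 4 GB M1060s. The `SMALL_GPU_TUNERS` set, the tuner-ID naming, the model architecture, and the specific caps are all assumptions for the example, not a real configuration.

```python
import os

import keras_tuner
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical split: assume the 4 GB M1060 workers were launched with
# KERASTUNER_TUNER_ID set to "tuner0".."tuner9" and the larger GPUs use
# higher IDs. The IDs and the cutoff are illustrative assumptions.
SMALL_GPU_TUNERS = {f"tuner{i}" for i in range(10)}


class SizedHyperModel(keras_tuner.HyperModel):
    def build(self, hp):
        tuner_id = os.environ.get("KERASTUNER_TUNER_ID", "")
        on_small_gpu = tuner_id in SMALL_GPU_TUNERS

        # Cap the memory-hungry hyperparameters on low-memory workers.
        max_filters = 64 if on_small_gpu else 256
        max_layers = 3 if on_small_gpu else 8

        model = keras.Sequential()
        model.add(layers.Input(shape=(224, 224, 3)))
        for i in range(hp.Int("num_conv_layers", 1, max_layers)):
            model.add(
                layers.Conv2D(
                    hp.Int(f"filters_{i}", 16, max_filters, step=16),
                    kernel_size=3,
                    activation="relu",
                )
            )
        model.add(layers.GlobalAveragePooling2D())
        model.add(layers.Dense(10, activation="softmax"))
        model.compile(
            optimizer="adam", loss="sparse_categorical_crossentropy"
        )
        return model
```

The catch, as far as I can tell, is that in distributed mode the oracle on the chief generates the hyperparameter values centrally, so per-worker bounds like this don't appear to actually restrict what a given worker receives. That is why built-in support for mapping hyperparameter subsets to tuner IDs would be useful.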