Inquiry about "CacheableDataset from wrapper"
AhmedHussKhalifa opened this issue · 3 comments
Hey,
I am trying to tailor your framework to accommodate my experiments on ImageNet. One of my experiments is to implement the paper "Knowledge distillation: A good teacher is patient and consistent", where data augmentation is applied on both the teacher and the student side, so each model receives different input data. I expect I would need to create a separate dataloader for each of them. I also want to keep the augmentation for each image consistent so that I can store the teacher's output vectors on my hard drive (SSD) using the default_idx2subpath function. However, I found that saving the data to the SSD, compared to just loading the images, made the training runs slower.
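To make that last point concrete, here is a rough sketch of the kind of wrapper I have in mind (the class name is hypothetical and not part of torchdistill): it re-seeds the random transforms with the sample index, so the teacher sees the same augmented view of a given image every epoch and its cached outputs stay valid.

```python
import random

import torch
from torch.utils.data import Dataset


class DeterministicAugDataset(Dataset):
    """Hypothetical wrapper: re-seeds the RNGs with the sample index so the
    random augmentation applied to a given image is identical every epoch,
    which keeps cached teacher outputs valid."""

    def __init__(self, base_dataset, transform):
        self.base_dataset = base_dataset
        self.transform = transform

    def __len__(self):
        return len(self.base_dataset)

    def __getitem__(self, index):
        image, label = self.base_dataset[index]
        # Seed both Python's and PyTorch's RNGs so torchvision transforms
        # draw the same random parameters for this index every time.
        random.seed(index)
        torch.manual_seed(index)
        return self.transform(image), label
```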
I am trying to train a student model (ResNet-18) and a teacher model (ResNet-34) on ImageNet using 2 GPUs. Could you recommend the best setup for this pipeline to make the code run faster given the current resources?
Another question: if I want to train with a cosine scheduler (CosineAnnealingLR) and the Adam optimizer, which method should I change?
Thank you for the inquiry.
> Another question: if I want to train with a cosine scheduler (CosineAnnealingLR) and the Adam optimizer, which method should I change?
For instance, you can replace the following `optimizer` and `scheduler` entries in a YAML file
```yaml
optimizer:
  type: 'SGD'
  params:
    lr: 0.1
    momentum: 0.9
    weight_decay: 0.0005
scheduler:
  type: 'MultiStepLR'
  params:
    milestones: [5, 15]
    gamma: 0.1
```
with
```yaml
optimizer:
  type: 'Adam'
  params:
    lr: 0.1
scheduler:
  type: 'CosineAnnealingLR'
  params:
    T_max: 100
    eta_min: 0
    last_epoch: -1
    verbose: False
```
following the PyTorch official documentation (assuming you set `num_epochs: 100` in the YAML file, hence `T_max: 100`).
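If it helps, the following is roughly what those entries correspond to in plain PyTorch (a minimal sketch with a toy model; torchdistill builds the optimizer and scheduler from the YAML for you, so this is only for illustration):

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import CosineAnnealingLR

# Toy model standing in for the student; the real one comes from the YAML config.
model = nn.Linear(10, 2)

# Equivalent of the 'Adam' optimizer entry above.
optimizer = optim.Adam(model.parameters(), lr=0.1)

# Equivalent of the 'CosineAnnealingLR' scheduler entry (T_max matches num_epochs).
scheduler = CosineAnnealingLR(optimizer, T_max=100, eta_min=0, last_epoch=-1)

for epoch in range(100):
    # ... train for one epoch ...
    scheduler.step()  # step the scheduler once per epoch
```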
For reimplementing the method you mentioned, I'll need some time to get the big picture of the procedure.
In the meantime, could you close this issue (and #154) and start a new discussion in Discussions? As explained in the README, I would like to gather questions/feature requests in Discussions and bugs in Issues for now.
Thank you
@AhmedHussKhalifa Closing this issue as I haven't seen any follow-up for a while.
Open a new Discussion (not Issue) if you still have questions.
Thank you @yoshitomo-matsubara.