HarukiYqM/Non-Local-Sparse-Attention

Which parameter should be set to continue training after a break in training?

cheun726 opened this issue · 12 comments

I stopped training. Which parameter should I set to resume training from where I left off?

`--resume`, and you may have to manually adjust the epoch number and learning rate.

`--resume`? Is this the one? How do I set the value of this parameter and the learning rate? Can you be more specific? Thank you.

For example, suppose you want to train for 100 epochs with learning rate 1e-4 for the first 50 epochs and 5e-5 for the rest, and your training stops at epoch 53. To resume, add `--resume 53`, change `--epoch` from 100 to 47, and set `--lr` to 5e-5.
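The arithmetic in that answer can be sketched as a small helper. This is a hypothetical function (not part of this repository) that, for a two-stage learning-rate schedule, works out the remaining epoch count and the learning rate to pass on the command line:

```python
def resume_settings(total_epochs, decay_epoch, lr_initial, lr_decayed, stopped_at):
    """Return (epochs_left, lr) to use when restarting after `stopped_at` epochs.

    Assumes a single lr drop at `decay_epoch`; adapt for other schedules.
    """
    epochs_left = total_epochs - stopped_at
    lr = lr_initial if stopped_at < decay_epoch else lr_decayed
    return epochs_left, lr

# 100 epochs total, lr drops from 1e-4 to 5e-5 after epoch 50, stopped at 53:
print(resume_settings(100, 50, 1e-4, 5e-5, 53))  # (47, 5e-05)
```

The returned values correspond to `--epoch 47 --lr 5e-5` in the example above.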

Thank you so much.


How do I set the value of `--resume` to load the latest model (`model_latest`)?

Why should the `--data_range` parameter be set to 801-900 during testing?

To recover from the latest checkpoint, set `--resume` to the latest epoch number.
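If you are unsure which epoch number is the latest, you can read it off the saved checkpoint file names. This is a hypothetical sketch only; the file-naming pattern (`model_<epoch>.pt`) is assumed here and should be adapted to the repository's actual checkpoint naming:

```python
import re

def latest_epoch(filenames):
    """Return the highest epoch number found among checkpoint file names,
    or None if no numbered checkpoint matches the assumed pattern."""
    epochs = [int(m.group(1)) for f in filenames
              if (m := re.match(r"model_(\d+)\.pt$", f))]
    return max(epochs) if epochs else None

# 'model_latest.pt' has no epoch number, so it is ignored:
print(latest_epoch(["model_51.pt", "model_52.pt", "model_53.pt", "model_latest.pt"]))  # 53
```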

This parameter does nothing when testing on benchmarks. Feel free to remove it.

Thank you very much.

During training, why did the model with fewer parameters report that GPU memory was insufficient, while the model with more parameters did not?

Memory usage and parameter count are not directly related. For example, a larger input requires more memory to store activation maps. `--patch_size` is the output size, so if you want to train the X4 model, set the patch size to 192 (48*4) to make the input the same size as for X2.
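To make the patch-size arithmetic concrete: since `--patch_size` is the output patch size, the low-resolution input patch the network actually processes is `patch_size // scale`. Keeping that input size fixed across scales keeps activation-map memory comparable. A minimal sketch (the function name is illustrative, not from the repository):

```python
def lr_input_size(patch_size, scale):
    """Low-resolution input patch side length, given the output
    patch size (--patch_size) and the upscaling factor."""
    return patch_size // scale

print(lr_input_size(96, 2))   # X2 model with --patch_size 96:  48x48 input
print(lr_input_size(192, 4))  # X4 model with --patch_size 192: also 48x48 input
```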

Hi, is there any code in the program to compute FLOPs? Thanks.
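The repository does not appear to ship a FLOPs counter, but the standard per-layer formula is easy to apply by hand, and tools such as thop or fvcore automate it for whole PyTorch models. Below is a minimal sketch for a single conv layer, counting each multiply-add as 2 FLOPs (a common convention; some tools report multiply-adds instead, i.e. half this number):

```python
def conv2d_flops(c_in, c_out, kernel, h_out, w_out):
    """FLOPs of one Conv2d layer (bias ignored): each of the
    c_out*h_out*w_out output elements needs c_in*kernel*kernel
    multiply-adds, counted as 2 FLOPs each."""
    return 2 * c_in * c_out * kernel * kernel * h_out * w_out

# Example: a 64->64 channel 3x3 conv on a 48x48 feature map
print(conv2d_flops(64, 64, 3, 48, 48))  # 169869312, i.e. ~0.17 GFLOPs
```

Summing this over all layers (plus the attention blocks, which such simple formulas do not cover) gives the model total; for an exact count on the real network, a profiler-based tool is more reliable.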