Memory problem while testing nuScenes
freshwk opened this issue · 6 comments
In order to speed up training, the code preloads all the training samples into memory to avoid slow IO operations. We are considering adding an option to disable the preloading to reduce the memory cost. The update will be pushed very soon.
Thank you! Could you please tell me your memory size? I will expand my memory.
Hello, I've pushed the update to the repo. Now you can train/test the model without preloading the datasets (the preloading is disabled by default now).
I use a cloud server to run my code, and I have 768 GB of memory in total, which is definitely enough for the preloading. Note that since multiple processes are used when you train on multiple GPUs, there will be multiple copies of the preloaded dataset buffered in memory. Another way to reduce memory usage is simply to use fewer GPUs.
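To illustrate the trade-off being discussed, here is a minimal sketch (hypothetical, not the repo's actual code) of a dataset with an opt-in preload flag: with `preload=False` (the new default), samples are read from disk on each access; with `preload=True`, everything is deserialized once and held in memory. Note that each dataloader worker process would hold its own copy of the cache, which is why multi-GPU training multiplies the memory cost.

```python
# Hypothetical sketch of opt-in preloading; class and parameter
# names are illustrative, not from the repository.
import os
import pickle
import tempfile

class SampleDataset:
    def __init__(self, sample_paths, preload=False):
        self.sample_paths = list(sample_paths)
        # With preloading enabled, every sample is deserialized once
        # and kept resident in memory for the dataset's lifetime.
        self.cache = [self._load(p) for p in self.sample_paths] if preload else None

    @staticmethod
    def _load(path):
        with open(path, "rb") as f:
            return pickle.load(f)

    def __len__(self):
        return len(self.sample_paths)

    def __getitem__(self, idx):
        if self.cache is not None:
            return self.cache[idx]                 # in-memory, no IO
        return self._load(self.sample_paths[idx])  # lazy disk read

# Usage: write a few samples to disk, then compare the two modes.
tmpdir = tempfile.mkdtemp()
paths = []
for i in range(3):
    p = os.path.join(tmpdir, f"sample_{i}.pkl")
    with open(p, "wb") as f:
        pickle.dump({"id": i}, f)
    paths.append(p)

lazy = SampleDataset(paths, preload=False)   # low memory, IO per access
cached = SampleDataset(paths, preload=True)  # high memory, IO once
```

In a multi-process setup (e.g. one worker per GPU), each process constructing `SampleDataset(..., preload=True)` would allocate its own full copy of `cache`, so peak memory scales with the number of processes.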
After retraining your code on nuScenes, I got much better results than those reported in your paper. For example, I got 58.02/49.08 for the trailer category, but the paper reports only 31.8/30.5. What causes this discrepancy?
The results we reported in our original paper were obtained using a different setup of the testing data (where more frames are dropped). We will update our main paper on arXiv very soon; please use that version for comparison if you are going to compare against our method in a future publication.
OK, thank you for your reply.