Minimum GPU memory required for training
okina13070358 opened this issue
Hi, thank you for your great model.
I'm now trying to implement it on my own dataset, so these days I'm practicing on the sample datasets (e.g., S3DIS, STPLS3D). I trained the backbone, but I could not train the entire model because of a CUDA out-of-memory error.
So I tried these approaches:
- switched from fp16=False to fp16=True (mixed-precision training; see the sketch after this list)
- reduced the number of epochs, the batch size, and the crop size
But the CUDA out-of-memory error happened with every approach.
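For reference, here is a minimal sketch of what fp16=True roughly corresponds to in PyTorch, assuming a generic model, optimizer, and criterion rather than ISBNet's actual training loop:

```python
import torch

# Minimal mixed-precision (fp16) training step. `model`, `optimizer`,
# `criterion`, `batch`, and `target` are placeholders, not ISBNet code.
scaler = torch.cuda.amp.GradScaler()

def train_step(model, optimizer, criterion, batch, target):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():   # run the forward pass in fp16 where safe
        loss = criterion(model(batch), target)
    scaler.scale(loss).backward()     # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```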
I suspect my hardware lacks the required specs. What is the minimum GPU memory required to use the full-spec ISBNet?
P.S. I'm using a GeForce RTX 3060 Ti (8GB memory). Is my hardware not enough?
Regards.
I have tried similar things, but I also encountered a CUDA out-of-memory error.
I am not the author of this paper, but the paper states that the model was trained on a single V100 GPU.
My GPU has 24GB of memory, and since I still ran into the CUDA out-of-memory error, I think 8GB may not be enough for training.
It would be more accurate to get an answer from the author, but I hope this helps.
Hi jongmin4422, thank you for your reply. I'm surprised a CUDA out-of-memory error happens even with 24GB. I agree with your opinion.
I understand my situation now, but I would still like to hear from the author, so I'll leave this issue open for a while (one or two weeks).
We conducted our experiments on a V100 GPU with 32GB of memory and occasionally encountered out-of-memory (OOM) issues when running datasets with a large number of points, such as S3DIS. Unfortunately, a GPU with 8GB of memory is insufficient to train our ISBNet, even with adjustments like using FP16 or reducing the batch size and the number of points. One solution is to create a mini version of ScanNetV2 by using farthest point sampling to subsample around 20K~40K points per scene (a rough sketch is below), and to try our lightweight backbone with a small num_queries.
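As an illustration of that subsampling step, here is a minimal NumPy sketch of farthest point sampling for building such a mini dataset; the array names and the 40K target are assumptions, not part of the ISBNet codebase:

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, num_samples: int) -> np.ndarray:
    """Greedy FPS: iteratively pick the point farthest from those chosen so far.

    points: (N, 3) array of xyz coordinates.
    Returns the indices of the selected points, shape (num_samples,).
    """
    n = points.shape[0]
    if num_samples >= n:
        return np.arange(n)
    selected = np.zeros(num_samples, dtype=np.int64)
    min_dist = np.full(n, np.inf)        # distance to the nearest selected point
    selected[0] = np.random.randint(n)   # random seed point
    for i in range(1, num_samples):
        diff = points - points[selected[i - 1]]
        min_dist = np.minimum(min_dist, np.einsum("ij,ij->i", diff, diff))
        selected[i] = int(np.argmax(min_dist))
    return selected

# Hypothetical usage: subsample one scene to ~40K points for the mini dataset.
# xyz, rgb, label = ...  # per-scene arrays loaded from a ScanNetV2 scene
# idx = farthest_point_sampling(xyz, 40_000)
# xyz, rgb, label = xyz[idx], rgb[idx], label[idx]
```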
When testing with a 24GB memory GPU and facing OOM, I suspect the bottleneck is in the mask_heads_forward() function, specifically in the dot-product operation between numerous queries and thousands of points. To address this, we devised a solution outlined in our repository (line 762 at commit ae02fcb).
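The actual fix is at the line referenced above; purely as a rough illustration of the chunking idea, here is a sketch assuming generic (Q, C) instance kernels and (N, C) point-wise mask features (all names hypothetical):

```python
import torch

@torch.no_grad()  # the OOM described above occurs at test time
def chunked_instance_masks(kernels: torch.Tensor,     # (Q, C) per-instance kernels
                           mask_feats: torch.Tensor,  # (N, C) per-point features
                           chunk_size: int = 100_000,
                           threshold: float = 0.5) -> torch.Tensor:
    """Compute (Q, N) binary instance masks chunk by chunk.

    Instead of one huge (Q, N) float matmul followed by sigmoid/threshold,
    each chunk's float logits are binarized immediately, so only a bool
    matrix (1 byte per entry instead of 4) accumulates in memory.
    """
    masks = []
    for start in range(0, mask_feats.shape[0], chunk_size):
        chunk = mask_feats[start:start + chunk_size]   # (n, C), n <= chunk_size
        logits = kernels @ chunk.t()                   # (Q, n) floats, freed per chunk
        masks.append(logits.sigmoid() > threshold)     # (Q, n) bool
    return torch.cat(masks, dim=1)                     # (Q, N) bool
```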
Hi ngoductuanlhp. Thank you for your reply. I'll try your advice.
I got the answer about my situation, so I will close this issue.
And thank you, jongmin4422. Your reply helped me.