THU-luvision/OmniSeg3D

It is not working and it is consuming huge amounts of memory

Closed this issue · 2 comments

Hello,

This code is consuming tens of gigabytes of RAM and VRAM (more than 40GB of RAM and 35GB of VRAM) even though I only have one image in the folder.

Any reason why?

Then I get the following error:

(gaussian_grouping) C:\nerf\OmniSeg3D-GS>python run_sam.py --ckpt_path segment-anything\sam_ckpt\sam_vit_h_4b8939.pth --file_path data\teknikrum\images_2
Traceback (most recent call last):
  File "run_sam.py", line 81, in <module>
    process_images(ckpt_path, files)
  File "C:\Users\hu\.conda\envs\gaussian_grouping\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "run_sam.py", line 62, in process_images
    imwrite(paths[1], cm[indices].view_as(image).add_(image).cpu().numpy())
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)

Any suggestions?

Hi, thanks for your interest in our work! I hope the following tips help.

  1. When running run_sam.py, high-resolution images commonly lead to high memory consumption. Try downsampling the input images to 1920x1080 or a similar resolution to see if the memory problem is alleviated.
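As a starting point, the downsampling step could be sketched like this. This is a minimal, hypothetical helper, not part of the repo: the folder paths and the 1920x1080 target are taken from this thread, and `fit_within`/`downsample_folder` are names I made up for illustration.

```python
def fit_within(width, height, max_w=1920, max_h=1080):
    """Return a (w, h) no larger than max_w x max_h, preserving aspect ratio.
    Images already small enough are left at their original size."""
    scale = min(max_w / width, max_h / height, 1.0)
    return max(1, round(width * scale)), max(1, round(height * scale))

def downsample_folder(src="data/teknikrum/images_2",
                      dst="data/teknikrum/images_2_small"):
    # Pillow is imported lazily so the size math above stays dependency-free.
    import os
    from PIL import Image
    os.makedirs(dst, exist_ok=True)
    for name in os.listdir(src):
        with Image.open(os.path.join(src, name)) as im:
            im.resize(fit_within(*im.size), Image.LANCZOS) \
              .save(os.path.join(dst, name))

# Example: a 4K frame gets scaled to exactly 1080p.
print(fit_within(3840, 2160))  # (1920, 1080)
```

After running something like `downsample_folder()` (with the paths adjusted to your setup), point `--file_path` at the smaller folder.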

  2. The device-related RuntimeError can be fixed by changing line 62 as shown below. I have corrected this in the main branch.

# Before the change
imwrite(paths[1], cm[indices].view_as(image).add_(image).cpu().numpy())

# After the change
imwrite(paths[1], cm[indices.cpu()].view_as(image).add_(image).cpu().numpy())
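The failure mode behind that one-character fix can be reproduced in isolation. This is a toy sketch with made-up shapes, not the repo's actual data: PyTorch requires an index tensor to be on the CPU or on the same device as the tensor being indexed.

```python
import torch

# cm stands in for the colormap tensor, indices for the selected rows.
cm = torch.arange(12.0).reshape(4, 3)   # lives on the CPU
indices = torch.tensor([0, 2])          # also on the CPU here

# If indices lived on a CUDA device while cm stayed on the CPU, cm[indices]
# would raise the RuntimeError from the traceback. Calling .cpu() on the
# index tensor (a no-op when it is already on the CPU) makes the devices
# match in either case, which is exactly what the fixed line 62 does.
out = cm[indices.cpu()]
print(out.shape)  # torch.Size([2, 3])
```

In run_sam.py the indices come off the GPU while the colormap stays on the CPU, hence the mismatch.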

Hope this solves the bug! I will close this issue now.