MrNeRF/gaussian-splatting-cuda

Config to limit CUDA memory usage


MrNeRF commented

Discussed in #20

Originally posted by BennetLeff August 22, 2023
Howdy! I got everything building and running on my 3080 Ti (so there's a new card confirmed working).

The truck demo scene worked fine. Later I tried to import my own scene, and torch reserved too much GPU memory. I don't have this problem with the same dataset in the original Python implementation. This project would be more accessible if a config option existed to control this! I don't have time at the moment to fix it, but might soon :)

TODO:

  • Reduce the CUDA requirement to 11.8
  • CMake should detect the CUDA architecture automatically (see the sketch after this list)
  • Load images asynchronously to the GPU
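
For the architecture item, a minimal CMake sketch (an illustration, not the project's actual build script): since CMake 3.24, the `native` value asks the toolchain to target whatever GPU is present at configure time, so no architecture needs to be hard-coded.

```cmake
# Sketch: target the GPU found at configure time instead of a hard-coded arch.
# "native" requires CMake >= 3.24; older versions need an explicit list,
# e.g. set(CMAKE_CUDA_ARCHITECTURES 86) for a 3080 Ti.
cmake_minimum_required(VERSION 3.24)
if(NOT DEFINED CMAKE_CUDA_ARCHITECTURES)
  set(CMAKE_CUDA_ARCHITECTURES native)
endif()
project(gaussian_splatting_cuda LANGUAGES CXX CUDA)
```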

For the async item: maybe make sure that it works synchronously first. It might be a hassle to make it asynchronous without blocking or introducing bugs; a sketch of a non-blocking upload follows below.
See #20
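
A minimal libtorch sketch of a non-blocking host-to-device image upload (an illustration, not the project's actual loader; `upload_image_async` is a hypothetical name). The key point is that the copy can only overlap with compute if the source tensor lives in pinned (page-locked) host memory.

```cpp
#include <torch/torch.h>

// Sketch: enqueue a host-to-device copy without blocking the host thread.
torch::Tensor upload_image_async(const torch::Tensor& cpu_image) {
    // pin_memory() produces a page-locked copy, which is what lets the
    // underlying cudaMemcpyAsync run truly asynchronously.
    auto pinned = cpu_image.pin_memory();
    // non_blocking=true enqueues the copy on the current CUDA stream and
    // returns immediately; the result is safe to use from work ordered on
    // the same stream, otherwise synchronize before consuming it.
    return pinned.to(torch::kCUDA, /*non_blocking=*/true);
}
```

The "blocking or bugs" concern above is exactly the caveat here: the tensor must not be consumed before the copy completes, which stream ordering handles only when the consumer runs on the same CUDA stream.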

MrNeRF commented

#23

Only loading images asynchronously to the GPU is left.

MrNeRF commented

The memory problem should be fixed with the `--empty-gpu-cache` flag.
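
A plausible reading of what such a flag does (the exact hook point is an assumption; `c10::cuda::CUDACachingAllocator::emptyCache()` itself is a real libtorch call): release cached-but-unused blocks back to the CUDA driver between iterations, lowering the reserved-memory high-water mark at the cost of slower reallocation.

```cpp
#include <torch/torch.h>
#include <c10/cuda/CUDACachingAllocator.h>

// Hypothetical end-of-iteration hook: when the flag is set, hand cached
// but currently unused blocks back to the driver so torch's reserved
// memory shrinks instead of growing monotonically.
void end_of_iteration(bool empty_gpu_cache /* parsed from --empty-gpu-cache */) {
    if (empty_gpu_cache) {
        c10::cuda::CUDACachingAllocator::emptyCache();
    }
}
```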