Running with custom dataset
bmikaili opened this issue · 4 comments
Hey, do you have pointers on how to run this with a custom dataset?
The NeRF Guru provides an excellent tutorial on this topic, which you can find at this link:
To summarize the process:
1) Extract Frames from Video: use the command ffmpeg -i {path to video} -qscale:v 1 -qmin 1 -vf fps={frame extraction rate} %04d.jpg to sample images from your video.
2) Obtain Camera Poses: run convert.py to estimate camera poses. Note that this step requires having COLMAP installed (a rough sketch of steps 1 and 2 follows below).
3) Start Training :D
For additional details, refer to the "Processing your own Scenes" section in the Gaussian-Splatting repository.
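A rough sketch of steps 1 and 2, assuming the dataset layout that convert.py in the Gaussian-Splatting repo expects (raw frames under {dataset}/input) and an extraction rate of 2 fps; the paths and the fps value are placeholders you should adjust:

# Hypothetical paths; adjust to your own video and dataset location.
DATASET=/path/to/my_scene
mkdir -p "$DATASET/input"

# Step 1: sample frames from the video (here at 2 frames per second).
ffmpeg -i /path/to/video.mp4 -qscale:v 1 -qmin 1 -vf fps=2 "$DATASET/input/%04d.jpg"

# Step 2: estimate camera poses with COLMAP via convert.py.
python convert.py -s "$DATASET"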
Ah I see, so it's like in the original paper.
But how would I train with pruning? Like with bash scripts/run_train_densify_prune.sh?
Yeah, you can modify the path and pruning ratio in the sh file.
Or you can directly run the Python command with the default setting:
python train_densify_prune.py -s "/PATH/TO/DATASET/" -m "OUTPUT/PATH/"
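If you prefer editing the shell script, a minimal wrapper might look like the sketch below; the variable names are hypothetical, and the actual scripts/run_train_densify_prune.sh may expose additional pruning options you can tweak:

# Hypothetical minimal wrapper; the real script may add pruning-ratio flags.
DATASET="/PATH/TO/DATASET/"
OUTPUT="OUTPUT/PATH/"
python train_densify_prune.py -s "$DATASET" -m "$OUTPUT"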
BTW, if your primary interest is the final result, I suggest setting the --save_iterations and --checkpoint_iterations arguments to a smaller number of iterations, so fewer intermediate snapshots are written. The current defaults save additional data points, primarily for experimental purposes.
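For example, something along these lines keeps only a final snapshot and checkpoint; the iteration value 30000 is an assumption and should match the script's actual --iterations setting:

# 30000 is an assumed final iteration; set it to the --iterations value used for training.
python train_densify_prune.py -s "/PATH/TO/DATASET/" -m "OUTPUT/PATH/" --save_iterations 30000 --checkpoint_iterations 30000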
Thanks for the pointers, I'll try that out! Awesome work btw :)