Issues
Could you release the dataset used to fine-tune the video diffusion model for image-to-3D?
#27 opened by 2hiTee - 1
Will the training code be released?
#14 opened by jly0810 - 1
Any example to run sparse-view scene generation?
#22 opened by rocksat - 0
Meaning of these parameters?
#26 opened by Yhc-777 - 2
instant-nsr-pl requires transforms_train.json
#21 opened by jclarkk - 0
Unknown './tmp/points3d.ply'?
#16 opened by Mlosser - 0
Multi-view image reconstruction failed.
#23 opened by qixuanwang-233 - 3
Unknown import
#7 opened by fffh1 - 0
How to use the multi-view PixelNeRF model for inference?
#19 opened by Mlosser - 0
How to export point clouds with colors?
#17 opened by Mlosser - 0
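Not an answer from the maintainers, but as a general reference: a minimal sketch of writing a colored point cloud to PLY with Open3D. Open3D, the placeholder arrays, and the output filename are assumptions, not parts of this repository.

```python
# Hedged sketch, not code from this repository: one common way to write a
# colored point cloud to PLY with Open3D. The arrays below are placeholders;
# in practice you would pass the reconstructed XYZ positions and per-point
# RGB values (floats in [0, 1]).
import numpy as np
import open3d as o3d

points = np.random.rand(1000, 3)   # placeholder Nx3 XYZ positions
colors = np.random.rand(1000, 3)   # placeholder Nx3 RGB colors in [0, 1]

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.colors = o3d.utility.Vector3dVector(colors)

# write_point_cloud keeps the color attribute when it is set on the cloud
o3d.io.write_point_cloud("points3d_colored.ply", pcd)
```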
Training object IDs
#15 opened by za-cheng - 0
No response for a long time at "Run the V3D Video diffusion to generate dense multi-views"
#13 opened by luojin - 2
Python version requirement?
#3 opened by forrest-lam - 0
Does this algorithm need to compute camera_pose with COLMAP for every frame during 3D GS reconstruction?
#11 opened by yuedajiong - 0
Is the model video double-sided?
#5 opened by aimarxjg - 0
Trying the code on my own PNG failed
#6 opened by fffh1 - 0
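A general note, not a diagnosis of this issue: arbitrary PNGs often fail in image-to-3D pipelines because of an alpha channel or an unexpected resolution. The sketch below is generic Pillow preprocessing under those assumptions; the filenames and the 512x512 size are hypothetical, not taken from this repository.

```python
# Hedged sketch, not this repo's loader: generic preprocessing that often
# helps when a custom PNG fails (transparency, non-square size).
from PIL import Image

img = Image.open("my_input.png").convert("RGBA")

# Composite onto a white background to drop transparency, then convert to RGB.
background = Image.new("RGBA", img.size, (255, 255, 255, 255))
img = Image.alpha_composite(background, img).convert("RGB")

# Resize to a square resolution (512 is an assumption, not a repo setting).
img = img.resize((512, 512))
img.save("my_input_rgb.png")
```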
VRAM requirements?
#2 opened by DenisKochetov
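The actual requirement depends on the checkpoint and resolution and is not stated in this list. As a quick sanity check, a minimal PyTorch sketch (an assumption, not repo code) that reports the VRAM visible on the first GPU:

```python
# Hedged sketch: report total and reserved memory on the first CUDA device,
# useful for comparing against out-of-memory errors. It does not state the
# pipeline's actual requirement.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gib = props.total_memory / 1024**3
    reserved_gib = torch.cuda.memory_reserved(0) / 1024**3
    print(f"GPU: {props.name}, total: {total_gib:.1f} GiB, reserved: {reserved_gib:.1f} GiB")
else:
    print("No CUDA device visible to PyTorch.")
```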