About pseudo ground truth point cloud P
Closed this issue · 1 comments
Hi, in your paper, you first render depth maps using Eq. (3), then back-project these depth maps to obtain 3D Gaussians that lie on the surface, and finally apply farthest point sampling to extract the pseudo ground truth point cloud P. However, in your first training stage I couldn't find any code related to back-projection; instead, all 3D Gaussians are forwarded to the farthest point sampling algorithm. Could you help clarify this?
Also, to force Gaussians to lie on the surface, if I understand correctly, all we need is the 0-1 opacity regularization term, the binary pruning strategy, and flattening the Gaussians by resetting the scaling value of the shortest axis, right?
Hi,
Actually, there are two point clouds used for UV-mapping learning. The first one is obtained by applying FPS to all 3D Gaussians, as described in extract_pcd.py, and is used for the Chamfer distance loss. The second one consists of partial point clouds back-projected from each depth map, as described here, and is used for the 3D consistency loss.
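A minimal sketch of the two constructions, assuming PyTorch. The function names, the pinhole back-projection, and the simple O(N·n) FPS loop are illustrative, not the repository's actual implementation:

```python
import torch

def backproject_depth(depth, K, c2w):
    # Lift a rendered depth map to a world-space point cloud.
    # depth: (H, W); K: (3, 3) pinhole intrinsics; c2w: (4, 4) camera-to-world pose.
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    z = depth.reshape(-1)
    x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
    y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
    pts_cam = torch.stack([x, y, z], dim=-1)
    # Transform camera-frame points into world coordinates.
    pts_world = pts_cam @ c2w[:3, :3].T + c2w[:3, 3]
    return pts_world[z > 0]  # drop pixels with no valid depth

def farthest_point_sampling(points, n):
    # Naive FPS: greedily pick the point farthest from the selected set.
    N = points.shape[0]
    sel = torch.zeros(n, dtype=torch.long)  # start from point 0
    dist = torch.full((N,), float("inf"))
    for i in range(1, n):
        d = ((points - points[sel[i - 1]]) ** 2).sum(-1)
        dist = torch.minimum(dist, d)
        sel[i] = dist.argmax()
    return points[sel]
```

In this picture, FPS over the Gaussian centers yields the point cloud for the Chamfer loss, while `backproject_depth` applied per view yields the partial clouds for the 3D consistency loss.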
Your understanding about the regularization terms is correct.
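For concreteness, a minimal sketch of those regularization terms, assuming PyTorch. The exact loss form, threshold, and function names are illustrative guesses, not the paper's precise formulation:

```python
import torch

def opacity_binary_loss(opacity):
    # Push opacities toward 0 or 1; zero exactly at the two extremes.
    return (opacity * (1.0 - opacity)).mean()

def binary_prune_mask(opacity, threshold=0.5):
    # Binary pruning: keep only Gaussians whose opacity is (near) 1.
    return opacity > threshold

def flatten_scales(scales, eps=1e-6):
    # Flatten each Gaussian onto a disc by collapsing its shortest scaling axis.
    idx = scales.argmin(dim=1, keepdim=True)
    flat = scales.clone()
    flat.scatter_(1, idx, eps)
    return flat
```

Together these encourage near-binary, surface-aligned, disc-like Gaussians, which is what makes FPS over the centers a reasonable surface point cloud.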