Inconsistent packed_info returned by traverse_grids(over_allocate=True)
#197 implements the over-allocation mode, but it seems that `samples.packed_info` does not match `samples.ray_indices` at https://github.com/KAIR-BAIR/nerfacc/blob/10315043bb6abd5a132deee39c2807afb684e13b/nerfacc/grid.py#L190.
`samples.ray_indices` contains many padding 0s, because its chunks are aligned to `traverse_step_limit` there. That is reasonable for parallelizing `traverse_grids_kernel`, but the re-calculated `chunk_starts` at https://github.com/KAIR-BAIR/nerfacc/blob/10315043bb6abd5a132deee39c2807afb684e13b/nerfacc/cuda/csrc/grid.cu#L402-L404 ignores this redundancy.
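To make the mismatch concrete, here is a minimal NumPy sketch (the counts and layout are made up for illustration; nerfacc's real buffers are torch tensors filled on the GPU): each ray's chunk is over-allocated to `traverse_step_limit` slots and padded slots stay 0, while `packed_info` is rebuilt from the *actual* per-ray counts, so the two disagree on where a ray's samples start.

```python
import numpy as np

# Hypothetical example: 3 rays with 2, 1, and 3 valid samples, but each
# ray's chunk is over-allocated to traverse_step_limit entries.
traverse_step_limit = 4
n_samples = np.array([2, 1, 3])

# Over-allocated layout: each ray owns a fixed-size chunk; unused slots
# keep their initialized value 0, which is also a valid ray index.
ray_indices = np.zeros(traverse_step_limit * len(n_samples), dtype=np.int64)
for ray, count in enumerate(n_samples):
    start = ray * traverse_step_limit
    ray_indices[start:start + count] = ray

# packed_info rebuilt from the actual counts (roughly what the
# re-calculated chunk_starts does): an exclusive cumsum of n_samples.
chunk_starts = np.concatenate([[0], np.cumsum(n_samples)[:-1]])
packed_info = np.stack([chunk_starts, n_samples], axis=-1)

# The mismatch: packed_info claims ray 1's samples start at offset 2,
# but in the over-allocated buffer they actually start at offset 4.
print(packed_info[1, 0])                   # 2
print(np.nonzero(ray_indices == 1)[0][0])  # 4
```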
I don't want to fix this by simply deleting `compute_chunk_start()`, because that leaves strange packed tensors, and `ray_indices` and `t_starts` (`t_ends`) then have mismatched shapes.
I think the best way is to delete the redundant 0s in `intervals` and `samples` inside `grid.cu`, but I don't know an efficient way to do that.
I also noticed `RaySamples.is_valid`. Sorry for bothering you.
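For what it's worth, the compaction I have in mind can be sketched on the host with NumPy, using a validity mask in the spirit of `RaySamples.is_valid` (the names and sizes here are hypothetical; an efficient version would need to live in `grid.cu` as a stream-compaction pass):

```python
import numpy as np

# Same hypothetical over-allocated layout as in the issue: each ray owns
# a fixed-size chunk of traverse_step_limit slots, padding left at 0.
traverse_step_limit = 4
n_samples = np.array([2, 1, 3])
n_rays = len(n_samples)

ray_indices = np.zeros(traverse_step_limit * n_rays, dtype=np.int64)
is_valid = np.zeros(traverse_step_limit * n_rays, dtype=bool)
for ray, count in enumerate(n_samples):
    start = ray * traverse_step_limit
    ray_indices[start:start + count] = ray
    is_valid[start:start + count] = True

# Compaction: boolean-mask out the padding slots ...
ray_indices = ray_indices[is_valid]

# ... and rebuild packed_info by an exclusive cumsum over the counts,
# so offsets now describe the compacted buffer.
chunk_starts = np.concatenate([[0], np.cumsum(n_samples)[:-1]])
packed_info = np.stack([chunk_starts, n_samples], axis=-1)

# After compaction, ray 1's samples really do start where packed_info says.
print(packed_info[1, 0])                   # 2
print(np.nonzero(ray_indices == 1)[0][0])  # 2
```

The same mask would have to be applied to `intervals` and the sample attributes (`t_starts`, `t_ends`) so all packed tensors keep matching shapes.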