IndexError: tensors used as indices must be long, byte or bool tensors
StarsTesla opened this issue · 2 comments
StarsTesla commented
[Taichi] version 1.6.0, llvm 15.0.4, commit f1c6fbbd, linux, python 3.9.18
[Taichi] Starting on arch=cuda
0%| | 0/30001 [00:00<?, ?it/s]/data/anaconda3/envs/3dgs/lib/python3.9/site-packages/taichi/lang/expr.py:101: DeprecationWarning: In future, it will be an error for 'np.bool_' scalars to be interpreted as an index
return Expr(_ti_core.make_const_expr_int(constant_dtype, val))
0%| | 0/30001 [00:06<?, ?it/s]
Traceback (most recent call last):
File "/data/zxc/code/git/taichi_3d_gaussian_splatting/gaussian_point_train.py", line 20, in <module>
trainer.train()
File "/data/zxc/code/git/taichi_3d_gaussian_splatting/taichi_3d_gaussian_splatting/GaussianPointTrainer.py", line 176, in train
loss.backward()
File "/data/anaconda3/envs/3dgs/lib/python3.9/site-packages/torch/_tensor.py", line 488, in backward
torch.autograd.backward(
File "/data/anaconda3/envs/3dgs/lib/python3.9/site-packages/torch/autograd/__init__.py", line 197, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/data/anaconda3/envs/3dgs/lib/python3.9/site-packages/torch/autograd/function.py", line 267, in apply
return user_fn(self, *args)
File "/data/zxc/code/git/taichi_3d_gaussian_splatting/taichi_3d_gaussian_splatting/GaussianPointCloudRasterisation.py", line 1133, in backward
grad_point_in_camera=grad_pointcloud[point_id_in_camera_list],
IndexError: tensors used as indices must be long, byte or bool tensors
wanmeihuali commented
It seems PyTorch started to support int32 tensors as indices in later versions (e.g. 2.0), but earlier versions sometimes report this error. You can try upgrading PyTorch to 2.0, or just change that line to
grad_point_in_camera=grad_pointcloud[point_id_in_camera_list.long()],
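As a minimal sketch of the workaround outside the repository code (the tensor names here are only illustrative), indexing with an int32 tensor triggers the IndexError on older PyTorch, while casting the index to int64 works:

import torch

grad_pointcloud = torch.randn(10, 3)
point_id_in_camera_list = torch.tensor([0, 2, 5], dtype=torch.int32)

# On PyTorch < 2.0 this may raise:
# IndexError: tensors used as indices must be long, byte or bool tensors
# grad_point_in_camera = grad_pointcloud[point_id_in_camera_list]

# Casting the index tensor to int64 (long) works on all versions:
grad_point_in_camera = grad_pointcloud[point_id_in_camera_list.long()]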
StarsTesla commented
OK, I successfully ran the code, but I noticed that training takes over 30 minutes on my 3090. Is there any plan to improve that, or any explanation for it? Another thing: more datasets and rendering scripts are needed as well.