limhoyeon/ToothGroupNetwork

Error while running inference_mid.py

Opened this issue · 3 comments

I'm running inference_mid.py from the challenge branch on an .obj file without a ground-truth .json file. Here's the error I get. Any advice on how to resolve this would be greatly appreciated!

Traceback (most recent call last):
File "inference_mid.py", line 65, in <module>
pred_obj.process(stl_path_ls[i], os.path.join(args.save_path, os.path.basename(stl_path_ls[i]).replace(".obj", ".json")))
File "/scratch/madhavkris/ToothGroupNetwork-challenge_branch/predict_utils.py", line 140, in process
labels, instances, jaw = self.predict([input_path])
File "/scratch/madhavkris/ToothGroupNetwork-challenge_branch/predict_utils.py", line 101, in predict
pred_result = self.chl_pipeline(scan_path)
File "/scratch/madhavkris/ToothGroupNetwork-challenge_branch/inference_pipeline_mid.py", line 52, in __call__
first_results = self.get_first_module_results(input_cuda_feats, self.first_module)
File "/scratch/madhavkris/ToothGroupNetwork-challenge_branch/inference_pipeline_mid.py", line 212, in get_first_module_results
fg_points_labels_ls = tu.get_clustering_labels(moved_points_cpu, results["sem_2"]["full_masked_points"][:,3])
File "/scratch/madhavkris/ToothGroupNetwork-challenge_branch/tsg_utils.py", line 115, in get_clustering_labels
if eg_values_first_axis[i] / eg_values_first_axis[3:].mean() > 8:
IndexError: index 2 is out of bounds for axis 0 with size 2
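For context, the failing line in tsg_utils.py compares each leading eigenvalue against the mean of the remaining ones (`eg_values_first_axis[3:]`). If the clustered points are degenerate (for example, when the scan is in an unexpected orientation), the eigenvalue array can end up with fewer than three entries, so the loop index runs past the end. A minimal sketch of the failure pattern (the array values are illustrative):

```python
import numpy as np

# Illustrative: only two eigenvalues instead of the expected three or more
eg_values_first_axis = np.array([5.0, 1.0])

try:
    for i in range(3):  # the loop assumes at least 3 leading eigenvalues
        # eg_values_first_axis[3:] is empty here, so .mean() is NaN,
        # and eg_values_first_axis[2] is out of bounds
        if eg_values_first_axis[i] / eg_values_first_axis[3:].mean() > 8:
            pass
except IndexError as e:
    print(e)  # index 2 is out of bounds for axis 0 with size 2
```

This matches the reported `IndexError`: with an array of size 2, indexing position 2 fails before the ratio check is ever meaningful.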

The problem is likely due to the orientation of the data you are using. In my experience, the data should be in the same orientation as shown in the readme file.
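If the scan needs reorienting, the vertices can be rotated before running inference. A minimal NumPy sketch, assuming (for illustration only) that a rotation about the x-axis is what's needed; the actual axis and angle depend on how your scan differs from the orientation shown in the readme:

```python
import numpy as np

def rotate_x(vertices, degrees):
    """Rotate an (N, 3) vertex array about the x-axis by `degrees`."""
    t = np.deg2rad(degrees)
    rot = np.array([
        [1.0, 0.0, 0.0],
        [0.0, np.cos(t), -np.sin(t)],
        [0.0, np.sin(t), np.cos(t)],
    ])
    return vertices @ rot.T

# Example: a 90-degree rotation about x maps (0, 1, 0) to (0, 0, 1)
point = np.array([[0.0, 1.0, 0.0]])
print(rotate_x(point, 90))
```

After rotating, save the vertices back to the .obj file and rerun inference_mid.py.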

I just checked the obj file of the 3D Teeth challenge data and it doesn't follow the same orientation as seen in the readme file.

In any case, my new obj file follows a different orientation. Now, when I run inference_mid.py, I no longer get an error, but I see the following:

/scratch/madhavkris/ToothGroupNetwork-challenge_branch/inference_pipeline_mid.py:25: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
self.first_module.load_state_dict(torch.load(self.config["fps_model_info"]["load_ckpt_path"]+".h5"))
/scratch/madhavkris/ToothGroupNetwork-challenge_branch/inference_pipeline_mid.py:29: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
self.bdl_module.load_state_dict(torch.load(self.config["boundary_model_info"]["load_ckpt_path"]+".h5"))

But I don't see any results saved in the folder that's supposed to contain the segmented output. Any idea why this is the case? @limhoyeon

I think these logs are just warnings, so they may not be related to your problem.

Have you checked that the output directory exists and that the input directory is set properly?
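As a quick sanity check (the paths below are illustrative, not the script's actual arguments), you can verify that the input pattern actually matches files and that the save directory exists before running inference:

```python
import glob
import os

input_dir = "data/scans"  # illustrative; use your actual input path
save_path = "results"     # illustrative; use your actual save path

# If this prints 0, the input path or pattern is wrong and the
# script will silently have nothing to process
obj_files = sorted(glob.glob(os.path.join(input_dir, "*.obj")))
print(f"matched {len(obj_files)} .obj files")

# Ensure the output folder exists so saving the .json results can't fail
os.makedirs(save_path, exist_ok=True)
```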