export kitti error
Closed this issue · 4 comments
Hello, when I execute the export KITTI command, it shows
FileNotFoundError: [Errno 2] No such file or directory: 'logs/magicpoint_base_homoAdapt_kitti/predictions/train/2011_10_03_drive_0027_sync_02/0000004074.npz'
and then it gets stuck. All the previous steps ran fine. My magicpoint_kitti_export.yaml is:
dataset: 'Kitti_inh' # 'coco' 'hpatches', 'Kitti', ''
export_folder: 'train'
alteration: 'all' # 'all' 'i' 'v'
root: 'datasets/kitti_wVal' # root for dataset
root_split_txt: 'datasets/kitti_split' # split file provided in datasets/kitti_split
Have you encountered this situation before? I'm looking forward to your reply.
Hi @Y-pandaman,
Sorry for the late reply. I tested it and found a bug.
Please refer to this branch.
https://github.com/eric-yyjau/pytorch-superpoint/tree/test_20201212
Let me know if that works.
Thanks for your reply. I tried it, but it did not work; it shows a new error:
(sp1) user@robot:~/catkin_ws/src/pytorch-superpoint$ python export.py export_detector_homoAdapt configs/magicpoint_kitti_export.yaml magicpoint_base_homoAdapt_kitti_200000
export.py:400: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
config = yaml.load(f)
check config!! {'data': {'dataset': 'Kitti_inh', 'export_folder': 'train', 'alteration': 'all', 'root': 'datasets/kitti_wVal', 'root_split_txt': 'datasets/kitti_split', 'preprocessing': {'resize': [375, 1242]}, 'gaussian_label': {'enable': False, 'sigma': 1.0}, 'homography_adaptation': {'enable': True, 'num': 20, 'aggregation': 'sum', 'filter_counts': 0, 'homographies': {'params': {'translation': True, 'rotation': True, 'scaling': True, 'perspective': True, 'scaling_amplitude': 0.2, 'perspective_amplitude_x': 0.2, 'perspective_amplitude_y': 0.2, 'allow_artifacts': True, 'patch_ratio': 0.85}}}}, 'model': {'name': 'SuperPointNet_gauss2', 'batch_size': 1, 'detection_threshold': 0.015, 'nms': 4, 'top_k': 600, 'params': {}, 'subpixel': {'enable': False}}, 'pretrained': 'logs/magicpoint_synth_t2/checkpoints/superPointNet_100000_checkpoint.pth.tar'}
[12/13/2020 11:01:50 INFO] Running command EXPORT_DETECTOR_HOMOADAPT
[12/13/2020 11:01:50 INFO] train on device: cuda:0
[12/13/2020 11:01:50 INFO] => will save everything to logs/magicpoint_base_homoAdapt_kitti_200000/checkpoints
[12/13/2020 11:01:50 INFO] workers_test: 1
[12/13/2020 11:01:50 INFO] load dataset from : Kitti_inh
root_split_txt: datasets/kitti_split
[12/13/2020 11:01:50 INFO] Finished crawl_folders for KITTI.
==> Loading pre-trained network.
path: logs/magicpoint_synth_t2/checkpoints/superPointNet_100000_checkpoint.pth.tar
model: SuperPointNet_gauss2
[12/13/2020 11:01:50 INFO] => creating model: SuperPointNet_gauss2
==> Successfully loaded pre-trained network.
=== Let's use 2 GPUs!
logs/magicpoint_synth_t2/checkpoints/superPointNet_100000_checkpoint.pth.tar
0it [00:00, ?it/s]
Traceback (most recent call last):
File "export.py", line 408, in <module>
args.func(config, output_dir, args)
File "/home/user/anaconda3/envs/sp1/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "export.py", line 283, in export_detector_homoAdapt_gpu
for i, sample in tqdm(enumerate(test_loader)):
File "/home/user/anaconda3/envs/sp1/lib/python3.6/site-packages/tqdm/std.py", line 1167, in __iter__
for obj in iterable:
File "/home/user/anaconda3/envs/sp1/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
data = self._next_data()
File "/home/user/anaconda3/envs/sp1/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 856, in _next_data
return self._process_data(data)
File "/home/user/anaconda3/envs/sp1/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
data.reraise()
File "/home/user/anaconda3/envs/sp1/lib/python3.6/site-packages/torch/_utils.py", line 395, in reraise
raise self.exc_type(msg)
cv2.error: Caught cv2.error in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/user/anaconda3/envs/sp1/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/home/user/anaconda3/envs/sp1/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/user/anaconda3/envs/sp1/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/user/catkin_ws/src/pytorch-superpoint/datasets/Coco.py", line 247, in __getitem__
img_o = _read_image(sample['image'])
File "/home/user/catkin_ws/src/pytorch-superpoint/datasets/Coco.py", line 171, in _read_image
interpolation=cv2.INTER_AREA)
cv2.error: OpenCV(3.4.2) /io/opencv/modules/imgproc/src/resize.cpp:4044: error: (-215:Assertion failed) !ssize.empty() in function 'resize'
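The `!ssize.empty()` assertion usually means `cv2.imread` silently returned an empty result for a missing or unreadable file, and `cv2.resize` was then called on it. A minimal pre-check that fails early with a clear message (a sketch using only the standard library; the function name is my own, not part of the repo):

```python
import os

def check_image_path(path):
    """Validate an image path before handing it to cv2.imread/cv2.resize.

    cv2.imread() returns an empty image (None) instead of raising when the
    file is missing or unreadable; a later cv2.resize() then fails with the
    opaque '!ssize.empty()' assertion. Checking the path up front surfaces
    the offending sample directly.
    """
    if not os.path.isfile(path):
        raise FileNotFoundError(f"image file missing: {path}")
    if os.path.getsize(path) == 0:
        raise OSError(f"image file is empty: {path}")
    return path
```

Calling this on each `sample['image']` inside `_read_image` (datasets/Coco.py) would pinpoint which KITTI frame is missing instead of crashing inside the resize.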
Hi @Y-pandaman ,
From the error message, I think the image is not being loaded correctly.
Can you print out the image path `sample['image']`
and check whether the files are actually there?
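One quick way to do that check before launching the full export (a sketch; the assumption that the dataset class builds a list of dicts with the image path under the `"image"` key follows the pattern visible in the traceback, but the exact attribute holding that list may differ):

```python
import os

def report_missing(samples, key="image"):
    """Print every sample whose image file is absent and return the count.

    `samples` is assumed to be the list of per-frame dicts the KITTI
    dataset class builds during crawl_folders(), each storing the image
    path under `key` (as used by _read_image in datasets/Coco.py).
    """
    missing = 0
    for s in samples:
        path = str(s[key])
        if not os.path.isfile(path):
            print("missing:", path)
            missing += 1
    print(f"{missing} of {len(samples)} image files missing")
    return missing
```

Running this over the dataset's sample list (e.g. `report_missing(test_set.samples)`, attribute name assumed) would confirm whether the crash is caused by paths that never existed on disk, e.g. a wrong `root` in the YAML.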
Closed due to no follow-up.