KeyError: 'feat_map' in dataset.py
Closed this issue · 9 comments
Thanks for releasing your wonderful work!
When I run the training command, I get this error. I printed all keys of info and found that there is no feat_map item. I'm sure the program loads info from the db_trainval_with_pool.pkl file. Is there anything wrong with my setup?
File "/home/x/Desktop/wsl/_HOI/IDN/dataset.py", line 290, in __getitem__
uni_vec.append(f['R'][info['feat_map'][cand_id], :])
KeyError: 'feat_map'
All keys in info:
dict_keys(['obj_gt_classes', 'obj_scores', 'is_gt', 'labels_ro', 'cand_id', 'pair_ids', 'obj_id', 'labels_r', 'height', 'width', 'boxes', 'is_gt_pair', 'pair_iou', 'obj_classes', 'filename', 'labels_sr', 'pool', 'H_mapping'])
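For reference, the keys were printed with something like this (a sketch; the pickle's exact layout is assumed):

    import pickle

    # Assumed layout: db_trainval_with_pool.pkl holds per-image info dicts
    # like the one indexed in dataset.py's __getitem__.
    with open('db_trainval_with_pool.pkl', 'rb') as fp:
        db = pickle.load(fp)

    info = db[0]          # first entry, assuming a list of dicts
    print(info.keys())    # 'feat_map' does not appear in the output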
Looking forward to your reply!
BTW, when I run eval.py, it told me:
RuntimeError: exp/IDN_IPT_hico/epoch_30.pth is a zip archive (did you mean to use torch.jit.load()?)
According to the web, this error is due to the PyTorch version. However, my environment has torch==1.3.1, the same as requirements.txt. How can I deal with this situation? Thanks a lot!
We have fixed the missing feat_map key in the latest commit.
As for the eval.py error: we will update the requirements.txt.
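For anyone hitting this before the update: checkpoints saved with PyTorch >= 1.6 use a zip-based format that torch 1.3.1 cannot read. A workaround (a sketch, assuming a torch >= 1.6 environment is available; the output filename is hypothetical) is to re-save the checkpoint in the legacy format:

    import torch  # run this under torch >= 1.6, which understands the zip format

    # Load the zip-format checkpoint and re-save it in the legacy format
    # so that torch 1.3.1 can read it.
    state = torch.load('exp/IDN_IPT_hico/epoch_30.pth', map_location='cpu')
    torch.save(state, 'exp/IDN_IPT_hico/epoch_30_legacy.pth',
               _use_new_zipfile_serialization=False)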
The semi-hard loss did sometimes cause the loss to explode. Using the probability instead helps a lot in making the loss more stable, with little performance drop (less than 0.1 in our experiments). We have updated the code to the more stable version.
This could be fine, though tuning down the loss scale by weighting the semi-hard loss might bring better performance.
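A minimal sketch of that weighting idea (base_loss, semi_hard_loss, and the weight value are hypothetical stand-ins, not the project's actual code):

    import torch

    # Hypothetical stand-ins for the actual loss terms.
    base_loss = torch.tensor(1.0)       # the main training objective
    semi_hard_loss = torch.tensor(5.0)  # the semi-hard term, prone to spiking

    # Assumed weight; scaling the semi-hard term down keeps it from
    # dominating the total loss and destabilizing training.
    semi_hard_weight = 0.1
    total_loss = base_loss + semi_hard_weight * semi_hard_loss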
Thank you very much for your reply. We look forward to further improvements to this project, which is very enlightening!
Thanks for your support! We will keep working to enrich this project.