DirtyHarryLYL/HAKE-Action-Torch

How can I test it on ordinary images and videos?

ahong007007 opened this issue · 3 comments

I want to test the HOI/HAKE results on some movie and TV video clips, but I get the error:
No such file or directory: 'HAKE-Action-Torch/Data/metadata/data_path.json'
After some investigation, it seems the metadata files are all for the HOI-annotated test sets. As with general object detection, the demo should be runnable on ordinary images to check generalization and practical performance.

Thank you.

python -u tools/demo.py --cfg configs/a2v/a2v.yaml --input pics/1707510278359900_1_pic/ --mode image --show-res

hwfan commented

Thanks for playing with Activity2Vec!
There are two ways to solve this problem:

  1. Follow the instructions in DATASET.md to download and extract metadata.tar.gz (only this file is needed).
  2. Comment out L148 of activity2vec/ult/config.py (the line that loads data_path.json), but this will affect the training and testing procedures.

We will move this line to the initialization stage of training/testing in our next version.
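If you take the second route, a safer variant than simply commenting the line out is to guard the load so demo-only runs skip it while training/testing still pick up the file when it exists. This is a hypothetical sketch, not the actual code in activity2vec/ult/config.py; the file path and the variable name data_path are assumptions for illustration:

```python
import json
import os

# Hypothetical sketch: make loading data_path.json optional so the demo
# can run without the dataset metadata. Variable names are illustrative
# and may differ from the real activity2vec/ult/config.py.
DATA_PATH_JSON = 'HAKE-Action-Torch/Data/metadata/data_path.json'

if os.path.exists(DATA_PATH_JSON):
    with open(DATA_PATH_JSON) as f:
        data_path = json.load(f)
else:
    # Demo inference does not need dataset paths; training/testing does,
    # so those entry points should still require the real file.
    data_path = {}
```

With this guard, the demo command above runs without metadata.tar.gz, and downloading the archive later restores the full training/testing behavior without further code changes.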

Thank you for your work!

I tested it: only Activity2Vec can run on ordinary images, but the output contains only human pose and action labels, not the HOI-DET triplets mentioned in the paper. Could that part be released as well?

The other branches (Alpha-HOI, IDN, etc.) are related to HOI-DET, but they have no demo for inference on ordinary images, so their performance cannot be evaluated in practical applications. Is there any plan to open-source this? Among all the HOI-DET codebases I have seen, only iCAN has released triplet inference code; it would be great if you could update yours as well.

Thanks for the reply, and best wishes for your work. My English is too poor, so I won't bother with Google Translate, ^_^

hwfan commented

Compared to the original implementation of PaStaNet, A2V simplifies the output format to <human-interaction> to adapt to more general action understanding scenarios.

For the HOI-based version, please follow HAKE-Action.