IQIYI_VID competition
nttstar opened this issue · 17 comments
Recently www.iqiyi.com released a great video person dataset called IQIYI_VID and launched a person search competition on it. Our team (WitcheR) finished in 1st place using ArcFace models. It is a very large, real-world dataset and is worth trying if you want to verify your face model's accuracy precisely.
Our solution will be open-sourced here soon, after the PRCV2018 conference.
You can download our code here
The development leaderboard is now available, so anyone can submit predictions and get their mAP. Leave a message here if you have any problems.
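For readers unfamiliar with the metric: mAP here is the mean, over identities, of the average precision of each ranked retrieval list. A minimal illustrative sketch (not the competition's official scorer, whose exact normalization may differ, e.g. dividing by the total number of ground-truth positives):

```python
def average_precision(ranked_relevance):
    """AP for one ranked list: ranked_relevance[i] is True if the
    i-th ranked prediction is a correct match for the query identity."""
    hits = 0
    precisions = []
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)  # precision at each hit
    return sum(precisions) / hits if hits else 0.0

def mean_average_precision(relevance_lists):
    """mAP: mean of per-identity average precisions."""
    aps = [average_precision(r) for r in relevance_lists]
    return sum(aps) / len(aps)
```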
May I ask: some companies can now reach 99.9 on MegaFace. Can that be achieved with this project?
Can I ask whether only frame-based information is used in the IQIYI task? Audio and temporal features were not used?
@gogo00007 right.
Is it possible to share your pretrained model for the IQIYI task?
Hi, another request for your slides
@nttstar Hi, may I ask what models a, c, d, e and h respectively represent? Where can I download them?
@nttstar Hi, can you share the PPT or any other reference materials about this project? Some of the details are really hard to understand.
Hi, may I ask whether the detection quality-control model is the same as the recognition model, and whether the norm of the output vector is used to judge quality?
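For context, one common trick in face recognition pipelines (not confirmed to be what this repo does) is to take the L2 norm of the pre-normalization embedding as a quality score, since low-norm features often correspond to blurry or hard faces. A hypothetical sketch:

```python
import numpy as np

def quality_scores(features):
    """Hypothetical quality scoring: the L2 norm of each
    un-normalized embedding serves as a proxy for face quality."""
    return np.linalg.norm(features, axis=1)

def filter_frames(features, threshold):
    """Keep only frames whose feature norm exceeds the threshold,
    e.g. to drop low-quality detections before pooling a video track."""
    return features[quality_scores(features) >= threshold]
```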
Can somebody share it in google drive or dropbox?
Thanks!!
Has anyone used this code to test mAP on the IQIYI_VID dataset? What result did you get?
In addition, I found that this code uses the original softmax layer instead of the ArcFace layer. Why?
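For reference, the only difference between plain softmax and ArcFace is how the target-class logit is computed: both use scaled cosine similarities s·cos(θ), but ArcFace adds an angular margin m to the target class, giving s·cos(θ + m). A minimal NumPy sketch, assuming L2-normalized embeddings and class weights (illustrative only, not the repo's training code):

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=64.0, m=0.5):
    """Plain softmax would use logits = s * cos(theta) for every class;
    ArcFace replaces the target-class logit with s * cos(theta + m)."""
    # L2-normalize embeddings (rows) and class weights (columns)
    # so that dot products are cosine similarities
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = emb @ w                                  # (batch, num_classes)
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    idx = np.arange(len(labels))
    logits = cos.copy()
    # additive angular margin on the target class only
    logits[idx, labels] = np.cos(theta[idx, labels] + m)
    return s * logits
```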
@nttstar, thank you for sharing the code for the 2018 iQIYI-VID challenge. Would you mind sharing the pretrained model?