Requesting to add four papers published in 2022 and 2023
Closed this issue · 1 comment
ShramanPramanick commented
Please add the following four papers, all of which use transformer backbones:
- Egocentric video-language pre-training, addressing video-text retrieval, video classification, text-guided video grounding, text-guided video summarization, video question answering, etc.:
  - EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone (ICCV 2023) [Paper] [Code] [Project] [Poster]
  - Egocentric Video-Language Pretraining (NeurIPS 2022) [Paper] [Code] [Project] [Poster]
- Image-language pre-training, addressing image captioning, image-text retrieval, object detection, segmentation, and referring expression comprehension:
  - VoLTA: Vision-Language Transformer with Weakly-Supervised Local-Feature Alignment (TMLR 2023) [Paper] [Code] [Project]
- Video temporal grounding, unifying diverse temporal annotations to power moment retrieval (interval), highlight detection (curve), and video summarization (point).
cmhungsteve commented
Thank you for sharing, @ShramanPramanick.
I have updated the repo with the papers above.