cmhungsteve/Awesome-Transformer-Attention

Requesting to add four papers published in 2022 and 2023

Closed this issue · 1 comment

Please add the following four papers which use transformer backbones:

  1. Egocentric video-language pre-training that solves video-text retrieval, video classification, text-guided video grounding, text-guided video summarization, video question answering, etc.
  1. Image-language pre-training that solves image captioning, image-text retrieval, object detection, segmentation, and referring expression comprehension.
  • VoLTA: Vision-Language Transformer with Weakly-Supervised Local-Feature Alignment (TMLR 2023) [Paper] [Code] [Project]
  1. Video temporal grounding, unifying diverse temporal annotations to power moment retrieval (interval), highlight detection (curve), and video summarization (point).
  • UniVTG: Towards Unified Video-Language Temporal Grounding (ICCV 2023) [Paper] [Code]

Thank you for sharing, @ShramanPramanick.
I have updated the repo with the papers above.