vqa
There are 234 repositories under the vqa topic.
facebookresearch/mmf
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
OpenGVLab/InternGPT
InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It now supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, and more. Try it at igpt.opengvlab.com (an online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM).
BDBC-KG-NLP/QA-Survey-CN
A survey of intelligent question answering research and applications by the Natural Language Processing team at the Big Data and Brain Computing Center, Beihang University. It covers knowledge-base question answering (KBQA), text-based question answering (TextQA), table-based question answering (TableQA), visual question answering (VisualQA), and machine reading comprehension (MRC), summarizing both academic and industrial work for each task.
peteanderson80/bottom-up-attention
Bottom-up attention model for image captioning and VQA, based on Faster R-CNN and Visual Genome
NVlabs/prismer
The implementation of "Prismer: A Vision-Language Model with Multi-Task Experts".
microsoft/Oscar
Oscar and VinVL
hengyuan-hu/bottom-up-attention-vqa
An efficient PyTorch implementation of the winning entry of the 2017 VQA Challenge.
hila-chefer/Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.
Cadene/vqa.pytorch
Visual Question Answering in PyTorch
jayleicn/ClipBERT
[CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning on image-text and video-text tasks.
jokieleung/awesome-visual-question-answering
A curated list of Visual Question Answering (VQA, including image/video question answering), Visual Question Generation, Visual Dialog, Visual Commonsense Reasoning, and related areas.
open-compass/VLMEvalKit
An open-source evaluation toolkit for large vision-language models (LVLMs); supports GPT-4V, Gemini, QwenVLPlus, 50+ Hugging Face models, and 20+ benchmarks.
stanfordnlp/mac-network
Implementation for the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018)
chingyaoc/awesome-vqa
Visual Q&A reading list
vacancy/NSCL-PyTorch-Release
PyTorch implementation for the Neuro-Symbolic Concept Learner (NS-CL).
OpenGVLab/Multi-Modality-Arena
Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more!
davidmascharka/tbd-nets
PyTorch implementation of "Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning"
MILVLG/openvqa
A lightweight, scalable, and general framework for visual question answering research
Cyanogenoid/pytorch-vqa
Strong baseline for visual question answering
FuxiaoLiu/LRV-Instruction
[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
AIRI-Institute/OmniFusion
OmniFusion: a multimodal model for communicating via text and images
X-PLUG/mPLUG-2
mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023)
abachaa/Existing-Medical-QA-Datasets
Multimodal Question Answering in the Medical Domain: A summary of Existing Datasets and Systems
linjieli222/VQA_ReGAT
Research Code for ICCV 2019 paper "Relation-aware Graph Attention Network for Visual Question Answering"
antoyang/FrozenBiLM
[NeurIPS 2022] Zero-Shot Video Question Answering via Frozen Bidirectional Language Models
thaolmk54/hcrn-videoqa
Implementation for the paper "Hierarchical Conditional Relation Networks for Video Question Answering" (Le et al., CVPR 2020, Oral)
wangleihitcs/Papers
Notes on computer vision papers I have read, covering image captioning, weakly supervised segmentation, and related topics.
vztu/VIDEVAL
[IEEE TIP'2021] "UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content", Zhengzhong Tu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, Alan C. Bovik
antoyang/just-ask
[ICCV 2021 Oral + TPAMI] Just Ask: Learning to Answer Questions from Millions of Narrated Videos
yuleiniu/cfvqa
[CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias
cvlab-tohoku/Dense-CoAttention-Network
Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering
chingyaoc/VQA-tensorflow
TensorFlow implementation of Deeper LSTM + Normalized CNN for Visual Question Answering
pairlab/SlotFormer
Code release for ICLR 2023 paper: SlotFormer on object-centric dynamics models
TheoCoombes/ClipCap
Using pretrained encoders and language models to generate captions from multimedia inputs.
asdf0982/vqa-mfb.pytorch
This project is out of date; the author no longer remembers its details.