Pinned Repositories
abstract_binary_VQA
abstract_scenes_v002
The second version of the interface for the Abstract Scenes research project.
GuessWhich
Evaluating Visual Conversational Agents via Cooperative Human-AI Games
torch-utilities
Utility functions for neural network implementations in Torch
vision_language_in_the_wild
VQA
VQA-Website
Visual Question Answering Website
vqa_browser
The VQA dataset browser back-end code, using nginx, Django, and PostgreSQL (running in Docker containers).
VQA_LSTM_CNN
Train a deeper LSTM and normalized CNN Visual Question Answering model. The current code achieves 58.16 on Open-Ended and 63.09 on Multiple-Choice on the test-standard split.
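The VQA_LSTM_CNN entry refers to the "deeper LSTM question + normalized CNN image" baseline. Below is a minimal conceptual sketch of that architecture in PyTorch, assuming pre-extracted VGG fc7 image features, a 2-layer LSTM question encoder, element-wise fusion, and a 1000-answer classifier; the actual repository is implemented in Torch/Lua, so this is an illustration rather than the repository's code.

```python
# Minimal PyTorch sketch of a "deeper LSTM question + normalized CNN image" VQA model.
# Layer sizes, VGG fc7 features, and the 1000-answer vocabulary are assumptions
# for illustration; the real VQA_LSTM_CNN code is written in Torch/Lua.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeeperLSTMNormCNNVQA(nn.Module):
    def __init__(self, vocab_size, num_answers=1000,
                 embed_dim=200, lstm_dim=512, common_dim=1024, img_dim=4096):
        super().__init__()
        # Question channel: word embedding followed by a 2-layer ("deeper") LSTM.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, lstm_dim, num_layers=2, batch_first=True)
        # Final hidden and cell states of both layers are concatenated: 4 * lstm_dim.
        self.q_proj = nn.Linear(4 * lstm_dim, common_dim)
        # Image channel: pre-extracted CNN features (e.g. VGG fc7), L2-normalized.
        self.i_proj = nn.Linear(img_dim, common_dim)
        # Classifier over the most frequent answers.
        self.classifier = nn.Linear(common_dim, num_answers)

    def forward(self, question_tokens, image_features):
        # question_tokens: (batch, seq_len) word indices
        # image_features:  (batch, img_dim) CNN activations
        _, (h, c) = self.lstm(self.embed(question_tokens))
        q = torch.cat([h, c], dim=0).permute(1, 0, 2).reshape(question_tokens.size(0), -1)
        q = torch.tanh(self.q_proj(q))
        # "Normalized CNN": L2-normalize the image features before projecting.
        i = torch.tanh(self.i_proj(F.normalize(image_features, p=2, dim=1)))
        # Fuse the two modalities by element-wise multiplication, then classify.
        return self.classifier(q * i)
```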
Georgia Tech Visual Intelligence Lab's Repositories
GT-Vision-Lab/VQA
GT-Vision-Lab/VQA_LSTM_CNN
Train a deeper LSTM and normalized CNN Visual Question Answering model. The current code achieves 58.16 on Open-Ended and 63.09 on Multiple-Choice on the test-standard split.
GT-Vision-Lab/abstract_scenes_v002
The second version of the interface for the Abstract Scenes research project.
GT-Vision-Lab/GuessWhich
Evaluating Visual Conversational Agents via Cooperative Human-AI Games
GT-Vision-Lab/vision_language_in_the_wild
GT-Vision-Lab/VQA-Website
Visual Question Answering Website
GT-Vision-Lab/vqa_browser
The VQA dataset browser back-end code, using nginx, Django, and PostgreSQL (running in Docker containers); a minimal configuration sketch follows this list.
GT-Vision-Lab/abstract_binary_VQA
GT-Vision-Lab/torch-utilities
Utility functions for neural network implementations in Torch
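For the vqa_browser stack (Django talking to PostgreSQL across Docker containers), the sketch below shows the database portion of a Django settings module for that kind of setup. The service name "db", the environment variable names, and the default values are illustrative assumptions, not values taken from the repository.

```python
# Sketch of the database settings for a Django + PostgreSQL + Docker setup
# like the one vqa_browser describes. The "db" hostname, environment variable
# names, and defaults are assumptions for illustration only.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        # In a Docker Compose setup, the hostname is typically the name of the
        # PostgreSQL service on the shared network (assumed to be "db" here).
        "HOST": os.environ.get("POSTGRES_HOST", "db"),
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
        "NAME": os.environ.get("POSTGRES_DB", "vqa"),
        "USER": os.environ.get("POSTGRES_USER", "vqa"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
    }
}
```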