This repo contains the code for "Self-Critical Reasoning for Robust Visual Question Answering" (NeurIPS 2019), which uses VQA-X human textual explanations. The code is modified from here; many thanks!
Python 3.7.1
PyTorch 1.1.0
spaCy (we use the en_core_web_lg model)
h5py, pickle, json, cv2
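If the en_core_web_lg model is not installed yet, `python -m spacy download en_core_web_lg` fetches it; a quick sanity check:

```python
import spacy

# Loads the large English model; raises OSError if it is not installed.
nlp = spacy.load('en_core_web_lg')
print(nlp('sanity check'))
```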
Please download the detection features from this Google Drive and put them in the 'data' folder.
Please run `bash tools/download.sh` to download other useful data files, including the VQA question-answer pairs and GloVe embeddings.
Please run `bash tools/preprocess.sh` to preprocess the data.
Please run `mkdir saved_models` to create the folder where checkpoints are stored.
The training process is split into three stages (or two stages):
Three-stage version (pretrain on VQA-CP, fine-tune using the influential strengthening loss, then fine-tune with both objectives):
(1) Pretrain on the VQA-CP train dataset by running:
```
CUDA_VISIBLE_DEVICES=0 python main.py --load_hint -1 --use_all 1 --learning_rate 0.001 --split v2cp_train --split_test v2cp_test --max_epochs 40
```
After pretraining, you will have a saved model in saved_models, named by the training start time.
Alternatively, you can directly download a model from here.
(2) Fine-tune the pretrained model using the influential strengthening loss.
Here, please replace the checkpoint path on line 86 of train.py with the path to your VQA-CP pretrained model.
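What that line looks like varies with the repo version; conceptually, it is just a checkpoint load. A hypothetical sketch (the path is a placeholder for the folder named by your start time):

```python
import torch

# train.py, around line 86: point the load at your stage-1 checkpoint.
# `model` is the VQA model constructed earlier in train.py; the path below
# is a placeholder -- substitute your own saved_models folder.
model.load_state_dict(torch.load('saved_models/<start_time>/model.pth'))
```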
Then, please run the following command to strengthen the influence of the most influential objects:
```
CUDA_VISIBLE_DEVICES=0 python main.py --load_hint 0 --use_all 0 --learning_rate 0.00001 --split v2cp_train_vqx --split_test v2cp_test --max_epochs 12 --hint_loss_weight 20
```
After this stage, you will have another saved model in saved_models, named by the training start time.
Alternatively, you can directly download a model from here.
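For intuition, the influential strengthening loss builds on a gradient-based influence score: how much each detected object contributes to the ground-truth answer's probability. Below is a minimal sketch of that idea, not the repo's exact implementation; the function names, the hinge form, and the `influential_mask` argument (derived from the VQA-X explanations) are assumptions for illustration:

```python
import torch

def influence(answer_score, obj_feats):
    # Influence of each object on a scalar answer score: the gradient of the
    # score w.r.t. the object features, summed over the feature dimension.
    grads = torch.autograd.grad(answer_score, obj_feats, create_graph=True)[0]
    return grads.sum(dim=-1)  # shape: [num_objects]

def strengthening_loss(gt_score, obj_feats, influential_mask):
    # Hinge loss pushing the human-annotated influential objects to be at
    # least as influential on the ground-truth answer as the single most
    # influential object overall.
    infl = influence(gt_score, obj_feats)
    return torch.relu(infl.max() - infl[influential_mask].max())

# Toy usage: 36 object features, a stand-in for P(a_gt | V, Q), and a mask
# marking one annotated object.
obj_feats = torch.randn(36, 2048, requires_grad=True)
gt_score = obj_feats.tanh().mean()
mask = torch.zeros(36, dtype=torch.bool)
mask[3] = True
loss = strengthening_loss(gt_score, obj_feats, mask)
```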
(3) Train with the self-critical objective.
Here, please replace the checkpoint path on line 82 of train.py with the path to your influence-strengthened model.
Then, please run the following command for training:
```
CUDA_VISIBLE_DEVICES=0 python main.py --load_hint 1 --use_all 0 --learning_rate 0.00001 --split v2cp_train_vqx --split_test v2cp_test --max_epochs 5 --hint_loss_weight 20 --compare_loss_weight 1500
```
Alternatively, you can directly download a model from here.
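For completeness, a sketch of the general shape of the self-critical objective: on the most influential object, compare the influence of incorrect but competitive answers against that of the ground-truth answer, and penalize whenever a wrong answer relies on that object more. Again a hedged illustration rather than the repo's code; how `competitive_idxs` is chosen (e.g. the top predicted answers) is assumed:

```python
import torch

def self_critical_loss(answer_scores, gt_idx, competitive_idxs,
                       obj_feats, influential_idx):
    # Influence of the key (most influential) object on one answer's score.
    def key_influence(score):
        grads = torch.autograd.grad(score, obj_feats, create_graph=True)[0]
        return grads.sum(dim=-1)[influential_idx]

    gt_infl = key_influence(answer_scores[gt_idx])
    # Hinge: a competitive wrong answer should not be more sensitive to the
    # key object than the ground-truth answer is.
    return sum(torch.relu(key_influence(answer_scores[a]) - gt_infl)
               for a in competitive_idxs)

# Toy usage: per-answer scores computed from the object features.
obj_feats = torch.randn(36, 2048, requires_grad=True)
answer_scores = obj_feats.tanh().mean(dim=1)[:10]  # stand-in for P(a | V, Q)
loss = self_critical_loss(answer_scores, gt_idx=0,
                          competitive_idxs=[1, 2], obj_feats=obj_feats,
                          influential_idx=3)
```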