Show, Translate and Tell

This repo contains code for training and evaluating a multi-task model which performs image captioning, cross-modal retrieval, and sentence paraphrasing. The paper and results can be found at Show, Translate and Tell. This work has been accepted at ICIP 2019. The proposed architecture is shown in the figure below.

Generate Data

In the data folder, you can find scripts for generating TF-records for the MS-COCO dataset. Update (7/31/2019): prepare_mscoco_pairs.py has been added to the repo. It can be used as a reference for generating training_enc.txt and training_dec.txt, which are essentially paraphrase pairs. From the 5 captions of each image, it creates 20 permutations of paraphrases and writes them to the TF record along with the associated image (see the sketch below). This script is not cleaned up and should only be used for reference (it might not be the final script that we used). Check the command-line arguments in the scripts for setting paths.
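As a rough illustration of the pairing step, here is a minimal sketch: with the 5 MS-COCO captions per image, taking all ordered (source, target) pairs gives 5 × 4 = 20 permutations. The function name and caption strings are hypothetical and only mirror the idea, not the exact logic of prepare_mscoco_pairs.py.

from itertools import permutations

def make_paraphrase_pairs(captions):
    # All ordered (source, target) pairs; 5 captions -> 5 * 4 = 20 pairs.
    return list(permutations(captions, 2))

# Hypothetical captions for a single image.
captions = [
    "a dog runs on the beach",
    "a brown dog running along the shore",
    "a dog plays near the ocean",
    "a puppy runs across the sand",
    "a dog sprinting on a sandy beach",
]
pairs = make_paraphrase_pairs(captions)
assert len(pairs) == 20

# Sources would go to training_enc.txt and targets to training_dec.txt.
enc_lines = [src for src, _ in pairs]
dec_lines = [tgt for _, tgt in pairs]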

  • To generate TF-records for MS-COCO
python -m data.coco_data_loader --num 10000

Args:

  • --num : Number of images to be written to the TF record. Do not specify this unless you want to generate a subset of the entire dataset.

  • Generate TF-records with image, predicted caption, and ground-truth caption (see the sketch after the command below)


python -m data.coco_data_loader --precompute \
                                --record_path para_att_pred.tfrecord \
                                --feature_path coco_precomp/testall_ims.npy \
                                --captions_path <path_to_coco_captions>
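For reference, below is a minimal sketch (TensorFlow >= 1.13) of how one such record could be assembled with tf.train.Example. The feature keys "image", "gt_caption", and "pred_caption" and the caption strings are assumptions made for illustration; they may not match the names coco_data_loader actually writes.

import numpy as np
import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _float_feature(values):
    return tf.train.Feature(float_list=tf.train.FloatList(value=values))

# One precomputed image feature vector plus hypothetical caption strings.
image_feature = np.load("coco_precomp/testall_ims.npy")[0]
gt_caption = b"a man riding a wave on top of a surfboard"
pred_caption = b"a surfer rides a large wave in the ocean"

example = tf.train.Example(features=tf.train.Features(feature={
    "image": _float_feature(image_feature.tolist()),
    "gt_caption": _bytes_feature(gt_caption),
    "pred_caption": _bytes_feature(pred_caption),
}))

with tf.io.TFRecordWriter("para_att_pred.tfrecord") as writer:
    writer.write(example.SerializeToString())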

Training on COCO dataset

sh scripts/train_coco.sh

Evaluation on COCO dataset

sh scripts/eval_coco.sh

If you find this research or codebase useful in your experiments, please consider citing:

@article{stt2019,
  title={Show, Translate and Tell},
  author={Peri, Dheeraj and Sah, Shagan and Ptucha, Raymond},
  journal={arXiv preprint arXiv:1903.06275},
  year={2019}
}