
Semantics-Assisted Video Captioning Model Trained with Scheduled Sampling Strategy


Table of Contents

  1. Description
  2. Dependencies
  3. Manual
  4. Results
    1. Comparison on Youtube2Text
    2. Comparison on MSR-VTT
  5. Data
  6. Citation

Description

This repo contains the code of the Semantics-Assisted Video Captioning Model described in the paper "Semantics-Assisted Video Captioning Model Trained with Scheduled Sampling Strategy", which is under review at Frontiers in Robotics and AI.

We propose three ways to improve the video captioning model. First, we utilize both spatial features and dynamic spatio-temporal features as inputs to the semantic detection network in order to generate meaningful semantic features for videos. Second, we propose a scheduled sampling strategy that gradually shifts training from a teacher-guided manner towards a more self-teaching manner. Finally, the ordinary log-probability loss function is scaled by sentence length so that the model's inclination towards short sentences is alleviated. Our model achieves state-of-the-art results on the Youtube2Text dataset and is competitive with state-of-the-art models on the MSR-VTT dataset.
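To make the last two points concrete, the sketch below shows one common way of implementing a scheduled-sampling probability schedule and a length-normalized log-probability loss. It is a minimal NumPy illustration, not the code used in this repo; the function names and the linear decay schedule are our own assumptions.

```python
import numpy as np

def teacher_forcing_prob(step, total_steps, start=1.0, end=0.0):
    """Linearly decay the probability of feeding the ground-truth token.

    Early in training the decoder is mostly teacher-guided; later it mostly
    feeds back its own predictions (self-teaching). The linear schedule here
    is only one possible choice.
    """
    frac = min(step / float(total_steps), 1.0)
    return start + (end - start) * frac

def length_normalized_loss(token_logprobs, mask):
    """Average negative log-probability per valid token of each sentence.

    token_logprobs: [batch, max_len] log p(w_t | ...) of the reference words.
    mask:           [batch, max_len] 1.0 for real tokens, 0.0 for padding.
    Dividing by sentence length removes the bias towards short captions that
    a plain summed log-probability loss would have.
    """
    per_sentence = -(token_logprobs * mask).sum(axis=1) / np.maximum(mask.sum(axis=1), 1.0)
    return per_sentence.mean()

# Example: at step 30k of 100k, the ground truth is fed roughly 70% of the time.
print(teacher_forcing_prob(step=30000, total_steps=100000))
```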

The overall structure of our model is shown in the overall structure figure, and some captions generated by our model are shown in the captions figure.


If you need a newer and more powerful model, please refer to Delving-Deeper-into-the-Decoder-for-Video-Captioning.


Dependencies

  • Python 3.6
  • TensorFlow 1.13
  • NumPy
  • scikit-learn
  • pycocoevalcap (Python 3)
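The list above maps onto a fairly standard pip setup; the sketch below is one way to install it, with version pins that are our own assumption rather than something pinned by the repo. pycocoevalcap is obtained separately, as described in the Manual.

```bash
# Assumed versions; use tensorflow==1.13.1 (CPU) or tensorflow-gpu==1.13.1 to match your setup.
pip install "tensorflow-gpu==1.13.1" numpy scikit-learn
# pycocoevalcap (Python 3) is downloaded manually -- see step 2 of the Manual.
```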

Manual

  1. Make sure you have installed all the required packages.
  2. Download pycocoevalcap and place it alongside the msrvtt, msvd, and tagging folders.
  3. Download the files listed in the Data section.
  4. cd path_to_directory_of_model; mkdir saves
  5. run_model.sh is used for training models and test_model.sh for testing them. Set CUDA_VISIBLE_DEVICES to the GPU you want to use, and point corpus, ecores, tag and ref to the corresponding data files. Words are sampled with the argmax strategy if argmax is 1 and with the multinomial strategy if argmax is 0. name is the name you give to the model, and test is the path of the saved model to be tested; leave test empty when training. A configuration sketch is given after this list.
  6. After configuring the bash file, run bash run_model.sh to train or bash test_model.sh to test.
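As a rough guide (and as referenced in step 5), the variable block at the top of run_model.sh could look like the sketch below. Only the variable names (CUDA_VISIBLE_DEVICES, corpus, ecores, tag, ref, argmax, name, test) come from the scripts themselves; every path and value shown is a placeholder of our own.

```bash
# Hypothetical configuration -- point the paths at wherever you stored the downloaded data.
CUDA_VISIBLE_DEVICES=0                # id of the GPU to use

corpus=data/msvd_corpus.pkl           # preprocessed corpus file
ecores=data/msvd_eco_features.npy     # ECO spatio-temporal features
tag=data/msvd_semantic_tags.npy       # semantic (tagging) features
ref=data/msvd_references.pkl          # ground-truth captions for evaluation

argmax=1                              # 1: argmax sampling, 0: multinomial sampling
name=msvd_run1                        # a name for this training run
test=                                 # leave empty to train; set to a saved model
                                      # checkpoint path only when testing
```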

Results

Comparison on Youtube2Text

| Model | BLEU-4 | CIDEr | METEOR | ROUGE-L | Overall |
|-------|--------|-------|--------|---------|---------|
| LSTM-E | 45.3 | - | 31.0 | - | - |
| h-RNN | 49.9 | 65.8 | 32.6 | - | - |
| aLSTMs | 50.8 | 74.8 | 33.3 | - | - |
| SCN | 51.1 | 77.7 | 33.5 | - | - |
| MTVC | 54.5 | 92.4 | 36.0 | 72.8 | 0.9198 |
| ECO | 53.5 | 85.8 | 35.0 | - | - |
| SibNet | 54.2 | 88.2 | 34.8 | 71.7 | 0.8969 |
| Our Model | 61.8 | 103.0 | 37.8 | 76.8 | 1.0000 |

Comparison on MSR-VTT

| Model | BLEU-4 | CIDEr | METEOR | ROUGE-L | Overall |
|-------|--------|-------|--------|---------|---------|
| v2t_navigator | 40.8 | 44.8 | 28.2 | 60.9 | 0.9325 |
| Aalto | 39.8 | 45.7 | 26.9 | 59.8 | 0.9157 |
| VideoLAB | 39.1 | 44.1 | 27.7 | 60.6 | 0.9140 |
| MTVC | 40.8 | 47.1 | 28.8 | 60.2 | 0.9459 |
| CIDEnt-RL | 40.5 | 51.7 | 28.4 | 61.4 | 0.9678 |
| SibNet | 40.9 | 47.5 | 27.5 | 60.2 | 0.9374 |
| HACA | 43.4 | 49.7 | 29.5 | 61.8 | 0.9856 |
| TAMoE | 42.2 | 48.9 | 29.4 | 62.0 | 0.9749 |
| Our Model | 43.8 | 51.4 | 28.9 | 62.4 | 0.9935 |

Data

  • MSVD
  • MSRVTT
  • ECO


Citation

@ARTICLE{2019arXiv190900121C,
       author = {{Chen}, Haoran and {Lin}, Ke and {Maye}, Alexander and {Li}, Jianming and
         {Hu}, Xiaolin},
        title = "{A Semantics-Assisted Video Captioning Model Trained with Scheduled Sampling}",
      journal = {arXiv e-prints},
     keywords = {Computer Science - Computer Vision and Pattern Recognition, Computer Science - Computation and Language},
         year = "2019",
        month = "Aug",
          eid = {arXiv:1909.00121},
        pages = {arXiv:1909.00121},
archivePrefix = {arXiv},
       eprint = {1909.00121},
 primaryClass = {cs.CV},
       adsurl = {https://ui.adsabs.harvard.edu/abs/2019arXiv190900121C},
      adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}