One-shot Scene Graph Generation

This repository contains the data and code for the paper “One-shot Scene Graph Generation”. The code is built on top of neural-motifs.

Framework

Abstract

As a structured representation of the image content, the visual scene graph (visual relationship) acts as a bridge between computer vision and natural language processing. Existing models on the scene graph generation task notoriously require tens or hundreds of labeled samples. By contrast, human beings can learn visual relationships from a few or even one example. Inspired by this, we design a task named One-Shot Scene Graph Generation, where each relationship triplet (e.g., "dog-has-head") comes from only one labeled example. The key insight is that rather than learning from scratch, one can utilize rich prior knowledge. In this paper, we propose Multiple Structured Knowledge (Relational Knowledge and Commonsense Knowledge) for the one-shot scene graph generation task. Specifically, the Relational Knowledge represents the prior knowledge of relationships between entities extracted from the visual content, e.g., the visual relationships "standing in", "sitting in", and "lying in" may exist between "dog" and "yard", while the Commonsense Knowledge encodes "sense-making" knowledge like "dog can guard yard". By organizing these two kinds of knowledge in a graph structure, Graph Convolution Networks (GCNs) are used to extract knowledge-embedded semantic features of the entities. Besides, instead of extracting isolated visual features from each entity generated by Faster R-CNN, we utilize an Instance Relation Transformer encoder to fully explore their context information. Based on a constructed one-shot dataset, the experimental results show that our method significantly outperforms existing state-of-the-art methods by a large margin. Ablation studies also verify the effectiveness of the Instance Relation Transformer encoder and the Multiple Structured Knowledge.
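
To make the knowledge-embedding step concrete, the snippet below sketches a single graph-convolution layer that mixes entity word embeddings with a normalized knowledge-graph adjacency. It is an illustrative sketch only, written against current PyTorch: the layer, shapes, and adjacency construction are assumptions for exposition, not the implementation in lib/models/.

```python
# Minimal sketch (not the repository's code): one GCN step that propagates
# knowledge-graph structure into entity semantic features.
import torch
import torch.nn as nn

class KnowledgeGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        # x:     (num_entities, in_dim) semantic features, e.g. word embeddings
        # a_hat: (num_entities, num_entities) normalized adjacency built from
        #        relational / commonsense knowledge (e.g. ConceptNet edges)
        return torch.relu(self.linear(a_hat @ x))

# Toy usage with 5 entity classes and 300-d embeddings (random numbers for illustration).
x = torch.randn(5, 300)
adj = torch.eye(5) + torch.rand(5, 5)        # self-loops plus dummy knowledge edges
a_hat = adj / adj.sum(dim=1, keepdim=True)   # row-normalize
layer = KnowledgeGCNLayer(300, 300)
print(layer(x, a_hat).shape)                 # torch.Size([5, 300])
```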

Setup

  1. Install Python 3.6 and PyTorch 0.3. I recommend the Anaconda distribution. To install PyTorch if you haven't already, use conda install pytorch=0.3.0 torchvision=0.2.0 cuda90 -c pytorch.

  2. Update the config file with the dataset paths (a hedged example is sketched after this list). Specifically:

    • Visual Genome (the VG_100K folder, image_data.json, VG-SGG.h5, and VG-SGG-dicts.json). See data/stanford_filtered/README.md for the steps I used to download these.
    • ConceptNet. Some files extracted from ConceptNet are required; they can be downloaded from BaiduYun (password: hj2c).
    • Fix your PYTHONPATH: export PYTHONPATH=***/OS-SGG, or change the environment variables in the scripts.
  3. Compile everything. Run make in the main directory; this compiles the bilinear interpolation operation for the RoIs.

  4. Pretrain VG detection. The old version involved pretraining on COCO as well, but we got rid of that for simplicity. Run ./scripts/pretrain_detector.sh, or download the pretrained model from Google Drive.

  5. Train OS-SGG: refer to the script ./scripts/train_models_sgcls.sh, or download the trained model from BaiduYun (password: hj2c). For the standard scene graph generation task, replace the load_graphs_one_shot call with load_graphs in dataloaders/visual_genome.py (see the data-loading sketch after this list).

  6. Evaluate: refer to the script ./scripts/eval_models_sgcls.sh.

  7. The model implementations are in lib/models/.
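
For step 2, a rough illustration of the dataset-path entries is below. The constant names mirror the neural-motifs config.py this code builds on, so treat them as assumptions and verify them against config.py in this repository; DATA_PATH should point at wherever you keep the data.

```python
# Sketch of the dataset-path entries (constant names assumed from neural-motifs;
# check config.py in this repository for the authoritative names).
import os

DATA_PATH = '/path/to/data'  # adjust to your local data directory

VG_IMAGES = os.path.join(DATA_PATH, 'visual_genome/VG_100K')
IM_DATA_FN = os.path.join(DATA_PATH, 'visual_genome/image_data.json')
VG_SGG_FN = os.path.join(DATA_PATH, 'stanford_filtered/VG-SGG.h5')
VG_SGG_DICT_FN = os.path.join(DATA_PATH, 'stanford_filtered/VG-SGG-dicts.json')
```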
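
For step 5, the snippet below shows where one-shot and standard data loading diverge. The module path and the two function names come from the step above; the ONE_SHOT flag and the helper are hypothetical, included only to illustrate the swap.

```python
# Hypothetical illustration of the swap described in step 5; ONE_SHOT and
# select_graph_loader() are not part of this repository.
from dataloaders.visual_genome import load_graphs, load_graphs_one_shot

ONE_SHOT = True  # set to False for the standard scene graph generation task

def select_graph_loader(one_shot=ONE_SHOT):
    """Return the graph-loading function for the VG dataset class to call."""
    return load_graphs_one_shot if one_shot else load_graphs
```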

Help

Feel free to ping me if you encounter trouble getting it to work!

Bibtex

@inproceedings{sgg:oneshot,
  author    = {Yuyu Guo and
               Jingkuan Song and
               Lianli Gao and
               Heng Tao Shen},
  title     = {One-shot Scene Graph Generation},
  booktitle = {ACM MM},
  pages     = {3090--3098},
  year      = {2020}
}