# Rethinking Knowledge Graph Propagation for Zero-Shot Learning

This is the code for the paper *Rethinking Knowledge Graph Propagation for Zero-Shot Learning*.

### Citation
```
@ARTICLE{2018arXiv180511724K,
    author = {{Kampffmeyer}, M. and {Chen}, Y. and {Liang}, X. and {Wang}, H. and
              {Zhang}, Y. and {Xing}, E.~P.},
    title = "{Rethinking Knowledge Graph Propagation for Zero-Shot Learning}",
    journal = {ArXiv e-prints},
    archivePrefix = "arXiv",
    eprint = {1805.11724},
    primaryClass = "cs.CV",
    keywords = {Computer Science - Computer Vision and Pattern Recognition},
    year = 2018,
    month = may,
    adsurl = {http://adsabs.harvard.edu/abs/2018arXiv180511724K},
    adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
```
### Requirements

- python 3
- pytorch 0.4.0
- nltk
### Instructions

#### Materials

The folder `materials/` already contains some metadata and programs.

- Download: http://nlp.stanford.edu/data/glove.6B.zip
- Unzip it, find `glove.6B.300d.txt`, and put it into `materials/`.
Then, in `materials/`:

- Run `python make_induced_graph.py` to get `imagenet-induced-graph.json`
- Run `python make_dense_graph.py` to get `imagenet-dense-graph.json`
- Run `python make_dense_grouped_graph.py` to get `imagenet-dense-grouped-graph.json`

A quick way to sanity-check the generated files is sketched below.
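A minimal sketch for verifying that a graph file was generated; the exact JSON schema produced by the scripts is not documented here, so the check only loads the file and lists its top-level keys:

```python
import json

# Load one of the generated graph files and list its top-level keys.
# (The schema comes from make_induced_graph.py and is not documented here,
# so this only confirms the file is valid JSON.)
with open('materials/imagenet-induced-graph.json') as f:
    graph = json.load(f)

print(type(graph))
if isinstance(graph, dict):
    print(list(graph.keys()))
```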
#### Pretrained ResNet-50

- Download: https://download.pytorch.org/models/resnet50-19c8e357.pth
- Rename it and put it at `materials/resnet50-raw.pth`
- In `materials/`, run `python process_resnet.py` to get `fc-weights.json` and `resnet50-base.pth` (a quick check of these outputs is sketched below)
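A hedged way to confirm the two outputs load correctly; it assumes `resnet50-base.pth` is a plain PyTorch state dict and `fc-weights.json` is ordinary JSON, which may not match the actual formats exactly:

```python
import json

import torch

# Assumption: resnet50-base.pth is a state dict saved by process_resnet.py.
state = torch.load('materials/resnet50-base.pth', map_location='cpu')
print(len(state), 'entries in the state dict')

# Assumption: fc-weights.json holds the original fc weights as JSON.
with open('materials/fc-weights.json') as f:
    fc = json.load(f)
print(type(fc))
```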
#### Datasets

Download ImageNet and AwA2, then create the softlinks `materials/datasets/imagenet` and `materials/datasets/awa2` (with `ln -s`), each pointing to the root directory of the corresponding dataset.

- An ImageNet root directory should contain image folders, each named by the WordNet ID of its class.
- An AwA2 root directory should contain the folder `JPEGImages`.

A layout check is sketched below.
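A minimal sketch for checking the expected layout; the WordNet-ID pattern (`n` followed by eight digits) is an assumption based on standard ImageNet naming:

```python
import os
import re

# ImageNet root: every entry should be a class folder named by its
# WordNet ID (assumed pattern: 'n' followed by eight digits).
imagenet_root = 'materials/datasets/imagenet'
for name in os.listdir(imagenet_root):
    if not re.fullmatch(r'n\d{8}', name):
        print('unexpected entry:', name)

# AwA2 root: it should contain the folder JPEGImages.
awa2_root = 'materials/datasets/awa2'
print('JPEGImages found:', os.path.isdir(os.path.join(awa2_root, 'JPEGImages')))
```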
Make a directory `save/` for saving models.

In most programs, use `--gpu` to specify the devices to run the code (default: GPU 0).
### Train Graph Networks

- GPM: run `python train_gcn_basic.py`; results are saved in `save/gcn-basic`
- DGPM: run `python train_gcn_dense.py`; results are saved in `save/gcn-dense`
- ADGPM: run `python train_gcn_dense_att.py`; results are saved in `save/gcn-dense-att`
In the results folder:

- `*.pth` is the state dict of the GCN model.
- `*.pred` is the prediction file, which can be loaded with `torch.load()`. It is a Python dict with two keys: `wnids` (the WordNet IDs of the predicted classes) and `pred` (the predicted fc weights). A loading example follows.
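For example (the file name is a placeholder; `pred` is assumed here to be a tensor with one row per class):

```python
import torch

# Load a prediction file produced by training (the path is a placeholder).
pred_file = torch.load('save/gcn-dense-att/xxx.pred', map_location='cpu')

wnids = pred_file['wnids']  # WordNet IDs of the predicted classes
pred = pred_file['pred']    # predicted fc weights, one row per class (assumed)

print(len(wnids), 'classes')
print(pred.shape)
```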
### Finetune ResNet

Run `python train_resnet_fit.py` with the args:

- `--pred`: the `.pred` file for finetuning
- `--train-dir`: the directory containing the 1K ImageNet training classes, each class in a folder named by its WordNet ID
- `--save-path`: the folder to save the results to, e.g. `save/resnet-fit-xxx`

(In the paper's setting, `--train-dir` is the folder composed of the 1K classes from fall2011.tar, with the missing class "teddy bear" taken from ILSVRC2012.)
### Testing

#### ImageNet

Run `python evaluate_imagenet.py` with the args:

- `--cnn`: path to the ResNet-50 weights, e.g. `materials/resnet50-base.pth` or `save/resnet-fit-xxx/x.pth`
- `--pred`: the `.pred` file for testing
- `--test-set`: the test set to load from `materials/imagenet-testsets.json`; choices: `2-hops`, `3-hops`, `all`
- (optional) `--keep-ratio` sets the ratio of testing data, `--consider-trains` includes the training classes' classifiers, and `--test-train` tests with training-class images only

A rough sketch of the underlying scoring idea follows.
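As an illustration only (not the repository's evaluation code): the predicted fc weights act as per-class classifiers, so an image's CNN feature is scored against each class row. This sketch assumes the rows of `pred` match the CNN feature dimension; the real file may also carry bias terms.

```python
import torch

# Hypothetical zero-shot scoring with predicted classifiers.
pred_file = torch.load('save/gcn-dense-att/xxx.pred', map_location='cpu')
weights = pred_file['pred']  # (num_classes, feature_dim), assumed

feature = torch.randn(weights.shape[1])  # stand-in for a ResNet-50 feature
scores = weights @ feature               # one score per candidate class
best = pred_file['wnids'][scores.argmax().item()]
print('predicted class:', best)
```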
#### AwA2

Run `python evaluate_awa2.py` with the args:

- `--cnn`: path to the ResNet-50 weights, e.g. `materials/resnet50-base.pth` or `save/resnet-fit-xxx/x.pth`
- `--pred`: the `.pred` file for testing
- (optional) `--consider-trains` includes the training classes' classifiers