Code release for "HashNet: Deep Learning to Hash by Continuation" (ICCV 2017)
We use the ImageNet, NUS-WIDE, and COCO datasets in our experiments. You can download the ImageNet and NUS-WIDE datasets here. For COCO, we use COCO 2014, which can be downloaded here. In case the COCO dataset changes in the future, we also provide a download link here on Google Drive. After downloading, move imagenet.tar.gz to ./data/imagenet and extract it there.
mv imagenet.tar.gz ./data/imagenet
cd ./data/imagenet
tar -zxvf imagenet.tar.gz
Likewise, for NUS-WIDE, move nus_wide.tar.gz to ./data/nuswide_81 and extract it there.
mv nus_wide.tar.gz ./data/nuswide_81
cd ./data/nuswide_81
tar -zxvf nus_wide.tar.gz
For the COCO dataset, extract both the train and val archives into ./data/coco. If you download from the COCO download page:
mv train2014.zip ./data/coco
mv val2014.zip ./data/coco
cd ./data/coco
unzip train2014.zip
unzip val2014.zip
If you use our shared link:
mv coco.tar.gz ./data/coco
cd ./data/coco
tar -zxvf coco.tar.gz
unzip train2014.zip
unzip val2014.zip
You can also modify the list files (txt format) in ./data as you like. Each line in a list file has the following format:
<image path><space><label vector (0/1 for each class)>
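As a concrete illustration of this format, here is a minimal Python sketch that parses such a list file into (image path, label vector) pairs. The function name and the sample file contents are hypothetical, not part of the released code; note that for the multi-label datasets (NUS-WIDE, COCO) more than one label bit can be set per line.

```python
def parse_list_file(path):
    """Parse a HashNet-style list file.

    Each non-empty line is "<image path> <0/1 label vector>",
    e.g. "images/val2014/some_image.jpg 0 1 0 1".
    Returns a list of (image_path, label_vector) pairs.
    """
    samples = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue  # skip blank lines
            image_path = parts[0]
            labels = [int(x) for x in parts[1:]]
            samples.append((image_path, labels))
    return samples
```

A parser like this is also a quick way to sanity-check a list file you edited, e.g. that every line has the same number of label entries.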
The compilation process is the same as for Caffe. You can refer to the Caffe installation instructions here.
You can train the model for each dataset using the following command.
dataset_name = imagenet, nuswide_81 or coco
./build/tools/caffe train -solver models/train/dataset_name/solver.prototxt -weights ./models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel -gpu gpu_id
For more instructions about training and parameter setting, see the instructions in the training directory.
You can evaluate the Mean Average Precision (MAP) on each dataset using the following command.
dataset_name = imagenet, nuswide_81 or coco
python models/predict/dataset_name/predict_parallel.py --gpu gpu_id --model_path your_caffemodel_path --save_path the_path_to_save_your_code
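For reference, the MAP metric the script reports can be sketched in a few lines of plain Python: rank database items by Hamming distance to each query's binary code, take the precision at each position where a relevant item appears, and average. This is only an illustration of the metric, not the released predict_parallel.py (which may, for instance, compute MAP over only the top-ranked items); all names below are hypothetical.

```python
def hamming_distance(a, b):
    """Number of positions where two binary codes (lists of 0/1) differ."""
    return sum(x != y for x, y in zip(a, b))

def mean_average_precision(query_codes, db_codes, relevance):
    """MAP over Hamming ranking.

    relevance[i][j] is True if database item j is relevant to query i
    (e.g. the two images share at least one class label).
    """
    aps = []
    for i, q in enumerate(query_codes):
        # Rank database items by Hamming distance to the query code.
        order = sorted(range(len(db_codes)),
                       key=lambda j: hamming_distance(q, db_codes[j]))
        hits, precisions = 0, []
        for rank, j in enumerate(order, start=1):
            if relevance[i][j]:
                hits += 1
                precisions.append(hits / rank)  # precision at this position
        aps.append(sum(precisions) / len(precisions) if precisions else 0.0)
    return sum(aps) / len(aps)
```

With one query whose only relevant item is ranked second, the average precision is 0.5; a perfect ranking gives 1.0.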
We provide trained models for each dataset and each code length used in our experiments. You can download them here if you want to use them for evaluation.
For more instructions about evaluation and parameter setting, see the instructions in the predicting directory.
If you use this code for your research, please consider citing:
@article{cao2017hashnet,
title={HashNet: Deep Learning to Hash by Continuation},
author={Cao, Zhangjie and Long, Mingsheng and Wang, Jianmin and Yu, Philip S},
journal={arXiv preprint arXiv:1702.00758},
year={2017}
}
If you have any problems with our code, feel free to contact caozhangjie14@gmail.com or describe your problem in Issues.