
DeepHash-Paddle

Implementations of some deep hashing algorithm baselines with PaddlePaddle

How to run

My environment is:

python==3.7.0  paddle==2.2.1  

You can easily train and test any algorithm just by running

python DSHSD.py
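
Each script reads its settings from a config dict (config["dataset"] is referenced throughout this README). Below is a hypothetical sketch of the kind of options you might edit before running; every key except "dataset" is an assumption, so check the actual script:

config = {
    "dataset": "cifar10-1",  # see the Dataset section below
    "bit_list": [48],        # hypothetical: hash code length(s) to train
    "epoch": 150,            # hypothetical: number of training epochs
    "batch_size": 64,        # hypothetical: batch size
}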

Precision Recall Curve

I added some code to DSH.py:

if "cifar10-1" == config["dataset"] and epoch > 29:
    P, R = pr_curve(trn_binary.numpy(), tst_binary.numpy(), trn_label.numpy(), tst_label.numpy())
    print(f'Precision Recall Curve data:\n"DSH":[{P},{R}],')
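
pr_curve ranks the whole database by Hamming distance to each query and then averages precision and recall over all queries at a series of ranking cutoffs. A minimal NumPy sketch of what it plausibly computes (the log-spaced cutoffs and argument names are assumptions):

import numpy as np

def pr_curve(rB, qB, retrieval_labels, query_labels, n_points=50):
    bits = qB.shape[1]
    dist = 0.5 * (bits - qB @ rB.T)              # Hamming distance for codes in {-1, +1}
    rank = np.argsort(dist, axis=1)              # database indices, nearest first
    gnd = (query_labels @ retrieval_labels.T > 0).astype(np.float32)  # shares a label?
    gnd = np.take_along_axis(gnd, rank, axis=1)  # relevance flags in ranked order
    hits = np.cumsum(gnd, axis=1)                # relevant items inside the top-k
    total = np.maximum(gnd.sum(axis=1), 1)       # relevant items per query
    cutoffs = np.unique(np.logspace(0, np.log10(rB.shape[0]), n_points).astype(int))
    P = [float(np.mean(hits[:, k - 1] / k)) for k in cutoffs]
    R = [float(np.mean(hits[:, k - 1] / total)) for k in cutoffs]
    return P, R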

To plot the Precision-Recall curve, copy the data generated by the code above into utils/precision_recall_curve.py and run that file:

cd utils
python precision_recall_curve.py
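
precision_recall_curve.py presumably does little more than plot the pasted (P, R) lists with matplotlib; a minimal sketch with placeholder data:

import matplotlib.pyplot as plt

# placeholder data in the format printed above: "method": [P list, R list]
pr_data = {
    "DSH": [[0.95, 0.88, 0.80, 0.70], [0.05, 0.20, 0.45, 0.80]],
}
for method, (P, R) in pr_data.items():
    plt.plot(R, P, label=method)
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.legend()
plt.grid(True)
plt.show()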

Dataset

There are three different configurations for cifar10:

  • config["dataset"]="cifar10" uses 1,000 images (100 per class) as the query set and 5,000 images (500 per class) as the training set; the remaining 54,000 images are used as the database.
  • config["dataset"]="cifar10-1" uses 1,000 images (100 per class) as the query set; the remaining 59,000 images are used as the database, from which 5,000 images (500 per class) are randomly sampled as the training set (a sketch of this split follows the list).
  • config["dataset"]="cifar10-2" uses 10,000 images (1,000 per class) as the query set and the remaining 50,000 images (5,000 per class) as both the training set and the database.

You can download NUS-WIDE here.
Use data/nus-wide/code.py to randomly select 100 images per class as the query set (2,100 images in total). The remaining images are used as the database set, from which we randomly sample 500 images per class as the training set (10,500 images in total).

You can download the ImageNet, NUS-WIDE-m, and COCO datasets here (which is where the data splits are copied from), or from Baidu Pan (password: hash).

NUS-WIDE-m is different from NUS-WIDE, so I distinguish between them.

NUS-WIDE contains 269,648 images, of which 195,834 are associated with the 21 most frequent concepts.

NUS-WIDE-m has 223,496 images and is the version used in HashNet (ICCV 2017); see the HashNet caffe and pytorch code.

Download mirflickr, and use ./data/mirflickr/code.py to randomly select 1,000 images as the test query set and 4,000 images as the training set.
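
A hypothetical sketch of what a split script like ./data/mirflickr/code.py might do; the file names here are assumptions. It shuffles an image-list file (one "path label ..." line per image) and writes out the query and training splits, keeping the remainder as the database:

import random

random.seed(0)
with open("img.txt") as f:            # assumed input list, one image per line
    lines = f.readlines()
random.shuffle(lines)
with open("test.txt", "w") as f:      # 1,000 query images
    f.writelines(lines[:1000])
with open("train.txt", "w") as f:     # 4,000 training images
    f.writelines(lines[1000:5000])
with open("database.txt", "w") as f:  # the rest as the database (assumed)
    f.writelines(lines[5000:])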

Paper And Code

It is difficult to implement everything by myself, so I made some modifications based on the following code:
DSH(CVPR2016)
paper Deep Supervised Hashing for Fast Image Retrieval
code DSH-pytorch

DPSH(IJCAI2016)
paper Feature Learning based Deep Supervised Hashing with Pairwise Labels
code DPSH-pytorch

DHN(AAAI2016)
paper Deep Hashing Network for Efficient Similarity Retrieval
code DeepHash-tensorflow

HashNet(ICCV2017)
paper HashNet: Deep Learning to Hash by Continuation
code HashNet caffe and pytorch

DSDH(NIPS2017)
paper Deep Supervised Discrete Hashing
code DSDH_PyTorch

LCDSH(IJCAI2017)
paper Locality-Constrained Deep Supervised Hashing for Image Retrieval

GreedyHash(NIPS2018)
paper Greedy Hash: Towards Fast Optimization for Accurate Hash Coding in CNN
code GreedyHash

DSHSD(IEEE ACCESS 2019)
paper Deep Supervised Hashing Based on Stable Distribution

Deep Unsupervised Image Hashing by Maximizing Bit Entropy(AAAI2021)
paper Deep Unsupervised Image Hashing by Maximizing Bit Entropy
code Deep-Unsupervised-Image-Hashing

Mean Average Precision, 48 bits [AlexNet].

Algorithms  dataset      this impl.  paper
DSH         cifar10-1    0.800       0.6755
            nus_wide_21  0.798       0.5621
            ms coco      0.655       -
            imagenet     0.576       -
            mirflickr    0.735       -
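
The mAP above comes from Hamming ranking; a minimal NumPy sketch of mAP@topK under that convention (function and argument names are assumptions):

import numpy as np

def calc_map(qB, rB, query_labels, retrieval_labels, topK=None):
    n_query, bits = qB.shape
    topK = rB.shape[0] if topK is None else topK
    ap_sum = 0.0
    for i in range(n_query):
        gnd = (retrieval_labels @ query_labels[i] > 0).astype(np.float32)
        hamm = 0.5 * (bits - rB @ qB[i])     # Hamming distance to query i
        gnd = gnd[np.argsort(hamm)][:topK]   # relevance flags in ranked order
        n_rel = gnd.sum()
        if n_rel == 0:
            continue                         # no relevant item retrieved
        pos = np.arange(1, topK + 1)         # rank positions 1..topK
        ap_sum += np.sum(gnd * np.cumsum(gnd) / pos) / n_rel
    return ap_sum / n_query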

If you have any problems, feel free to contact me by email (451685052@qq.com) or raise an issue.