
The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.


PyTorch Metric Learning


Documentation

View the documentation here

Google Colab Example

See this notebook for an example of a complete training and testing workflow, and view other examples in the examples folder.

Benefits of this library

  1. Ease of use
    • Add metric learning to your application with just 2 lines of code in your training loop.
    • Mine pairs and triplets with a single function call.
  2. Flexibility
    • Mix and match losses, miners, and trainers in ways that other libraries don't allow.

Installation

Pip:

pip install pytorch-metric-learning

To get the latest dev version:

pip install pytorch-metric-learning==0.9.84

To install on Windows:

pip install torch===1.4.0 torchvision===0.5.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install pytorch-metric-learning

Conda:

conda install pytorch-metric-learning -c metric-learning

We have recently noticed some sporadic issues with the conda installation, so we recommend installing with pip. You can use pip inside of conda:

conda install pip
pip install pytorch-metric-learning

If you run into problems during installation, please post in this issue.

Benchmark results

See powerful-benchmarker to view benchmark results and to use the benchmarking tool.

Library contents

The library provides losses, miners, trainers, and testers, along with supporting base classes, mixins, and wrappers; see the documentation for the full list.

Overview

Let’s try the vanilla triplet margin loss. In all examples, embeddings is assumed to be of size (N, embedding_size), and labels is of size (N).

from pytorch_metric_learning import losses
loss_func = losses.TripletMarginLoss(margin=0.1)
loss = loss_func(embeddings, labels)
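For intuition, the triplet margin loss penalizes an anchor that is not at least `margin` closer to a positive than to a negative. A tiny pure-Python sketch of the formula for a single triplet (not the library's implementation, which is vectorized and batch-aware):

```python
import math

def euclidean(a, b):
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_margin_loss(anchor, positive, negative, margin=0.1):
    # Loss is zero once the negative is at least `margin`
    # farther from the anchor than the positive is.
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

anchor, positive, negative = [0.0, 0.0], [0.0, 1.0], [3.0, 4.0]
print(triplet_margin_loss(anchor, positive, negative))  # 0.0: triplet already satisfied
```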

Loss functions typically come with a variety of parameters. For example, with the TripletMarginLoss, you can control how many triplets per sample to use in each batch. You can also use all possible triplets within each batch:

loss_func = losses.TripletMarginLoss(triplets_per_anchor="all")
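To see what "all possible triplets" means, here is a simplified pure-Python sketch (not the library's actual code) that enumerates every valid (anchor, positive, negative) index triple for a batch of labels:

```python
def all_triplets(labels):
    # A valid triplet is (a, p, n) where a != p, the anchor and positive
    # share a label, and the negative has a different label.
    triplets = []
    for a, la in enumerate(labels):
        for p, lp in enumerate(labels):
            if p == a or lp != la:
                continue
            for n, ln in enumerate(labels):
                if ln != la:
                    triplets.append((a, p, n))
    return triplets

labels = [0, 0, 1, 1]
print(len(all_triplets(labels)))  # 8 valid triplets in this batch
```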

Sometimes it can help to add a mining function:

from pytorch_metric_learning import miners, losses
miner = miners.MultiSimilarityMiner(epsilon=0.1)
loss_func = losses.TripletMarginLoss(margin=0.1)
hard_pairs = miner(embeddings, labels)
loss = loss_func(embeddings, labels, hard_pairs)

In the above code, the miner finds positive and negative pairs that it thinks are particularly difficult. Note that even though the TripletMarginLoss operates on triplets, it’s still possible to pass in pairs. This is because the library automatically converts pairs to triplets and triplets to pairs, when necessary.
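The pair-to-triplet conversion can be pictured like this: any positive pair (anchor, positive) can be joined with any negative pair (anchor, negative) that shares the same anchor index. A simplified pure-Python sketch of the idea (not the library's actual conversion code):

```python
def pairs_to_triplets(pos_pairs, neg_pairs):
    # Join positive pairs (anchor, positive) with negative pairs
    # (anchor, negative) on the shared anchor index.
    negatives_by_anchor = {}
    for a, n in neg_pairs:
        negatives_by_anchor.setdefault(a, []).append(n)
    triplets = []
    for a, p in pos_pairs:
        for n in negatives_by_anchor.get(a, []):
            triplets.append((a, p, n))
    return triplets

pos_pairs = [(0, 1)]          # indices of (anchor, positive)
neg_pairs = [(0, 2), (0, 3)]  # indices of (anchor, negative)
print(pairs_to_triplets(pos_pairs, neg_pairs))  # [(0, 1, 2), (0, 1, 3)]
```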

In general, all loss functions take in embeddings and labels, with an optional indices_tuple argument (i.e. the output of a miner):

# From BaseMetricLossFunction
def forward(self, embeddings, labels, indices_tuple=None):

And (almost) all mining functions take in embeddings and labels:

# From BaseMiner
def forward(self, embeddings, labels):

For more complex approaches, like deep adversarial metric learning, use one of the trainers.

To check the accuracy of your model, use one of the testers. Which tester should you use? Almost definitely GlobalEmbeddingSpaceTester, because it does what most metric-learning papers do.
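As a rough illustration of what such a tester measures: metric-learning papers typically embed the entire evaluation set and check nearest-neighbor accuracy, e.g. precision@1. A pure-Python sketch of precision@1 (a simplification; the real tester computes several k-nn-based metrics over the full embedding space):

```python
import math

def precision_at_1(embeddings, labels):
    # For each embedding, find its nearest neighbor (excluding itself)
    # and check whether that neighbor shares its label.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    correct = 0
    for i, e in enumerate(embeddings):
        nearest = min((j for j in range(len(embeddings)) if j != i),
                      key=lambda j: dist(e, embeddings[j]))
        correct += labels[nearest] == labels[i]
    return correct / len(embeddings)

embeddings = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
labels = [0, 0, 1, 1]
print(precision_at_1(embeddings, labels))  # 1.0: every nearest neighbor shares its label
```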

Also check out the example Google Colab notebooks.

To learn more about all of the above, see the documentation.

Development

To run the unit tests:

pip install -e .[dev]
pytest tests

The first command may fail initially on Windows. In that case, install torch by following the official guide, then run pip install -e .[dev] again.

Acknowledgements

Facebook AI

Thank you to Ser-Nam Lim at Facebook AI, and my research advisor, Professor Serge Belongie. This project began during my internship at Facebook AI where I received valuable feedback from Ser-Nam, and his team of computer vision and machine learning engineers and research scientists. In particular, thanks to Ashish Shah and Austin Reiter for reviewing my code during its early stages of development.

Open-source repos

This library contains code that has been adapted and modified from several great open-source repos.

Contributors

Thanks to the contributors who made pull requests, including algorithm implementations, general improvements, and bug fixes!
Citing this library

If you'd like to cite pytorch-metric-learning in your paper, you can use this bibtex:

@misc{Musgrave2019,
  author = {Musgrave, Kevin and Lim, Ser-Nam and Belongie, Serge},
  title = {PyTorch Metric Learning},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/KevinMusgrave/pytorch-metric-learning}},
}