CrossCLR - ICCV 2021

This is the official implementation of paper:

CrossCLR: Cross-modal Contrastive Learning For Multi-modal Video Representations [Paper]

Authors: Mohammadreza Zolfaghari, Yi Zhu, Peter Gehler, Thomas Brox

Update

[Dec 2021] CrossCLR-onlyIntraModality released

Loss Function

The CrossCLR loss in trainer/loss.py takes video features and text features as input and returns the loss.

Usage:

from trainer.loss import CrossCLR_onlyIntraModality

# define loss with a temperature `temp` and weights for negative samples `w`
criterion = CrossCLR_onlyIntraModality(temperature=temp, negative_weight=w)

# features: [bsz, f_dim]
video_features = ...
text_features = ...

# CrossCLR
loss = criterion(video_features, text_features)

...
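As a rough illustration, the sketch below shows one way the criterion could be plugged into a training step. The projection heads, feature dimensions, and hyperparameter values are assumptions made for this example and are not taken from this repository; only the criterion call itself follows the usage above.

import torch
import torch.nn as nn
from trainer.loss import CrossCLR_onlyIntraModality

# Illustrative projection heads standing in for real video/text encoders
# (assumptions for this sketch, not part of the repository).
video_proj = nn.Linear(2048, 256)
text_proj = nn.Linear(768, 256)

# Hyperparameter values here are placeholders, not the paper's settings.
criterion = CrossCLR_onlyIntraModality(temperature=0.03, negative_weight=0.8)
optimizer = torch.optim.Adam(
    list(video_proj.parameters()) + list(text_proj.parameters()), lr=1e-4
)

# Dummy pre-extracted features for one batch: [bsz, input_dim].
video_batch = torch.randn(64, 2048)
text_batch = torch.randn(64, 768)

video_features = video_proj(video_batch)  # [bsz, f_dim]
text_features = text_proj(text_batch)     # [bsz, f_dim]

# CrossCLR intra-modality loss on the paired embeddings.
loss = criterion(video_features, text_features)

optimizer.zero_grad()
loss.backward()
optimizer.step()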

Qualitative samples

Reference

@inproceedings{crossclr_aws_21,
  author    = {Mohammadreza Zolfaghari and
               Yi Zhu and
               Peter V. Gehler and
               Thomas Brox},
  title     = {CrossCLR: Cross-modal Contrastive Learning For Multi-modal Video Representations},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  url       = {https://arxiv.org/abs/2109.14910},
  eprinttype = {arXiv},
}

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.