Header


Find the following information about this article:

Title: Large scale metric learning from equivalence constraints
Authors: Köstinger, Hirzer, Wohlhart, Roth, Bischof
Journal/Conference: CVPR
Year: 2012

Graphical Abstract

![Graphical Abstract 1](images/D8_Graphical Abstract.jpg) ![Graphical Abstract 2](images/D9_Graphical Abstract.jpg)


Highlights

  • A different distance measure?
  • The Mahalanobis distance?
  • What does the "learning" in the title correspond to?
  • What are the main differences and similarities between the proposed method and PCA/LDA?
  • What can be achieved from the learned distance?
  • Regularization to avoid overfitting
  • Maximum likelihood --> minimum distance
  • How to minimize? (Do we need an iterative optimization method, or can we solve it in a much easier, closed-form way with this method?)

Discussions


Preliminary

Definition of metric learning

  • Definition of the Mahalanobis distance
  • The Euclidean distance as a special case
  • What the Mahalanobis distance and the covariance matrix imply

Mahalanobis vs. Euclidean distance and the covariance matrix

![What is interesting in this paper](images/D2_Motivation_Minimization M_sans complex Alg.jpg)
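The relation between the two distances can be sketched numerically. This is a minimal illustration (variable names and toy values are my own, not from the paper): with M equal to the identity, the Mahalanobis distance reduces to the squared Euclidean distance, while with M set to an inverse covariance matrix, high-variance directions are down-weighted.

```python
import numpy as np

def mahalanobis(x, y, M):
    """Squared Mahalanobis distance d_M(x, y) = (x - y)^T M (x - y)."""
    diff = x - y
    return diff @ M @ diff

x = np.array([2.0, 0.0])
y = np.array([0.0, 0.0])

# With M = I, the distance reduces to the squared Euclidean distance.
d_euclidean = mahalanobis(x, y, np.eye(2))          # 4.0

# With M = Sigma^{-1}, directions of high variance count for less:
Sigma = np.diag([4.0, 1.0])                         # large variance along axis 0
d_mahal = mahalanobis(x, y, np.linalg.inv(Sigma))   # 1.0
```

The same displacement of 2 units thus costs 4.0 under the Euclidean metric but only 1.0 once the large variance along the first axis is taken into account.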

State-of-the-art

LMNN

See the following notebook for a full description of LMNN (Large Margin Nearest Neighbor). ![missing image](images/D3_state of the art.jpg)
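As a reminder of what LMNN optimizes, the objective combines a pull term over target-neighbor pairs with a hinge penalty for impostors that invade a unit margin. The sketch below only evaluates that objective for a fixed metric M on toy data (function and variable names are my own, and this is not the authors' solver):

```python
import numpy as np

def lmnn_objective(M, X, y, target_neighbors, mu=0.5):
    """Evaluate the LMNN objective for a fixed Mahalanobis matrix M:
    (1 - mu) * pull term + mu * hinge push term over impostors."""
    def d(a, b):
        diff = a - b
        return diff @ M @ diff  # squared Mahalanobis distance

    pull = push = 0.0
    for i, j in target_neighbors:
        dij = d(X[i], X[j])
        pull += dij  # pull target neighbors close
        for l in range(len(X)):
            if y[l] != y[i]:
                # impostor l must stay a unit margin behind neighbor j
                push += max(0.0, 1.0 + dij - d(X[i], X[l]))
    return (1 - mu) * pull + mu * push

# Toy check with the Euclidean metric M = I:
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 0.0]])
y = np.array([0, 0, 1])
obj = lmnn_objective(np.eye(2), X, y, target_neighbors=[(0, 1)])
```

Minimizing this objective over M (a semidefinite program in the original paper) requires iterative optimization, which is exactly the cost KISS ML avoids.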

ITML

![missing image](images/D4_state of the art.jpg) Entropy-driven optimisation (Information-Theoretic Metric Learning).

KISS ML

![KISS formulation](images/D5_KISS_Equations and Formulations.jpg) ![KISS formulation](images/D6_KISS_Equations and_Formulations.jpg)
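The core of the KISS formulation is that the metric is obtained in closed form, M = Sigma_S^{-1} - Sigma_D^{-1}, from the covariance matrices of pairwise differences over similar and dissimilar pairs, with no iterative optimization. A minimal numpy sketch, assuming the pairwise differences have already been collected (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def kissme(similar_diffs, dissimilar_diffs):
    """KISS metric in one shot: M = Sigma_S^{-1} - Sigma_D^{-1},
    where Sigma_S / Sigma_D are covariance matrices of the pairwise
    differences x_i - x_j over similar (y_ij = 1) and dissimilar
    (y_ij = 0) pairs. No iterative optimization is needed."""
    def cov(diffs):
        D = np.asarray(diffs)
        return D.T @ D / len(D)  # differences are zero-mean by pair symmetry
    return np.linalg.inv(cov(similar_diffs)) - np.linalg.inv(cov(dissimilar_diffs))

# Similar pairs differ by little, dissimilar pairs by a lot:
sim = [[0.1, 0.0], [0.0, 0.1], [-0.1, 0.05]]
dis = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
M = kissme(sim, dis)
```

Note that M obtained this way is symmetric but not guaranteed positive semidefinite; the paper re-projects it onto the cone of PSD matrices to obtain a valid metric.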

![missing image](images/D7_Supervised Learning_Where to apply.jpg)