Torch-metrics: model evaluation metrics for PyTorch

As summarized in this issue, PyTorch does not have a built-in torch.metrics library for model evaluation. This Python library provides common evaluation metrics for PyTorch models, in the spirit of tf.keras.metrics and of the metrics module in PyTorch Lightning.

Usage

Install from PyPI or from source:

  • pip install --upgrade torch-metrics, or
  • git clone https://github.com/enochkan/torch-metrics.git

Then import a metric and call it on predictions and targets:

import torch
from torch_metrics import Accuracy

## define metric ##
# y_pred holds predicted class labels directly, so logits=False
metric = Accuracy(logits=False)
y_pred = torch.tensor([1, 2, 3, 4])
y_true = torch.tensor([0, 2, 3, 4])

print(metric(y_pred, y_true))

Accuracy also accepts per-class scores (e.g. logits or softmax probabilities) for each sample:

import torch
from torch_metrics import Accuracy

## define metric ##
metric = Accuracy()

y_true = torch.tensor([0, 2, 3, 4])
# y_pred holds scores over 5 classes for each of the 4 samples
y_pred = torch.tensor([[0.2, 0.6, 0.1, 0.05, 0.05],
                       [0.2, 0.1, 0.6, 0.05, 0.05],
                       [0.2, 0.05, 0.1, 0.6, 0.05],
                       [0.2, 0.05, 0.05, 0.05, 0.65]])
print(metric(y_pred, y_true))
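
The other metric classes listed below appear to follow the same callable pattern. The snippet below is only a rough sketch: it assumes that MeanSquaredError is already implemented and takes the same (y_pred, y_true) call as Accuracy, which the examples above do not show.

import torch
from torch_metrics import MeanSquaredError  # assumption: same callable interface as Accuracy

metric = MeanSquaredError()
y_pred = torch.tensor([2.5, 0.0, 2.0, 8.0])
y_true = torch.tensor([3.0, -0.5, 2.0, 7.0])
# squared errors are 0.25, 0.25, 0.0, 1.0, so the mean should be 0.375
print(metric(y_pred, y_true))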

Implementation

Metrics modeled after tf.keras.metrics, plus other commonly used metrics; some are already implemented and the rest are to-do. A plain-PyTorch sketch of one of these definitions follows the list.

  • MeanSquaredError class
  • RootMeanSquaredError class
  • MeanAbsoluteError class
  • Precision class
  • Recall class
  • MeanIoU class
  • DSC class (Dice Similarity Coefficient)
  • F1Score class
  • RSquared class
  • Hinge class
  • SquaredHinge class
  • LogCoshError class
  • Accuracy class
  • KLDivergence class
  • BinaryAccuracy class
  • CosineSimilarity class
  • AUC class
  • BinaryCrossEntropy class
  • CategoricalCrossEntropy class
  • SparseCategoricalCrossentropy class
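
For reference, the R-squared (coefficient of determination) listed above can be written in plain PyTorch as below. This is only a sketch of the standard formula, not necessarily how the RSquared class is implemented in this library:

import torch

def r_squared(y_pred, y_true):
    # R^2 = 1 - SS_res / SS_tot (coefficient of determination)
    ss_res = torch.sum((y_true - y_pred) ** 2)           # residual sum of squares
    ss_tot = torch.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    return 1 - ss_res / ss_tot

y_true = torch.tensor([3.0, -0.5, 2.0, 7.0])
y_pred = torch.tensor([2.5, 0.0, 2.0, 8.0])
print(r_squared(y_pred, y_true))  # approximately 0.9486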

Please raise issues or feature requests on the GitHub repository's issue tracker.