TextClassificationTransformer should log torchmetrics object instead of computed Tensors
stefan-schroedl opened this issue · 1 comment
🐛 Bug
In line 58 of the TextClassificationTransformer.common_step() method (https://github.com/Lightning-AI/lightning-transformers/blob/master/lightning_transformers/task/nlp/text_classification/model.py#L58), logging is called with a dictionary of metric values computed for the current batch. I am new to this package, but I believe the logger has to be passed the torchmetrics.Metric object itself whenever the epoch-level value differs from the average of the per-batch values, as is the case for Precision; otherwise the aggregation gives wrong results. Similar code exists in TokenClassificationTransformer and ImageClassificationTransformer as well.
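For reference, here is a minimal sketch of the pattern the torchmetrics/Lightning docs recommend: keep the metric as a module attribute, call update() in the step, and pass the Metric object itself to self.log so that .compute() runs once over the accumulated epoch state. Class and attribute names below (PatchedClassifier, val_precision, etc.) are illustrative only, not the actual lightning-transformers code.

```python
import torch
import torchmetrics
import pytorch_lightning as pl


class PatchedClassifier(pl.LightningModule):
    """Illustrative sketch only; not the actual lightning-transformers class."""

    def __init__(self, model, num_classes: int):
        super().__init__()
        self.model = model
        # Constructor arguments depend on the installed torchmetrics version;
        # the `task=...` argument is the torchmetrics >= 0.11 form.
        self.val_precision = torchmetrics.Precision(task="multiclass", num_classes=num_classes)
        self.val_recall = torchmetrics.Recall(task="multiclass", num_classes=num_classes)

    def validation_step(self, batch, batch_idx):
        # Assumes a HuggingFace-style model returning (loss, logits, ...).
        loss, logits = self.model(**batch)[:2]
        preds = torch.argmax(logits, dim=1)
        # Accumulate metric state for this batch.
        self.val_precision.update(preds, batch["labels"])
        self.val_recall.update(preds, batch["labels"])
        # Log the Metric objects, not the per-batch tensors: Lightning then
        # calls .compute() on the accumulated state at epoch end instead of
        # averaging batch-level values.
        self.log("val_precision", self.val_precision, on_step=False, on_epoch=True)
        self.log("val_recall", self.val_recall, on_step=False, on_epoch=True)
        self.log("val_loss", loss, prog_bar=True, sync_dist=True)
        return loss
```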
To Reproduce
Using TextClassificationTransformer with the Precision and Recall metrics (as configured by default) results in inaccurate per-epoch values for validation and testing.
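To see why averaging per-batch values is not equivalent to computing the metric over the whole epoch, consider this small made-up example (not taken from the repository; the `task="binary"` constructor form assumes torchmetrics >= 0.11):

```python
import torch
import torchmetrics

metric = torchmetrics.Precision(task="binary")

# Two validation "batches" of equal size for a binary problem.
batch1_preds = torch.tensor([1, 0, 0, 0])
batch1_target = torch.tensor([1, 0, 0, 0])   # per-batch precision: 1/1 = 1.0
batch2_preds = torch.tensor([1, 1, 1, 1])
batch2_target = torch.tensor([0, 0, 0, 1])   # per-batch precision: 1/4 = 0.25

# Averaging the per-batch tensors (what logging computed values amounts to here,
# since the batches have equal size):
avg_of_batches = (1.0 + 0.25) / 2            # 0.625

# Accumulating state and computing once at "epoch end":
metric.update(batch1_preds, batch1_target)
metric.update(batch2_preds, batch2_target)
epoch_value = metric.compute()               # 2 TP / 5 predicted positives = 0.4

print(avg_of_batches, epoch_value)           # 0.625 vs. tensor(0.4000)
```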
Environment
- PyTorch Version: 1.12.1
- OS: Linux
- How you installed PyTorch (conda, pip, source): pip
- Build command you used (if compiling from source):
- Python version: 3.7
- CUDA/cuDNN version: