huggingface/evaluate
🤗 Evaluate: A library for easily evaluating machine learning models and datasets.
Python · Apache-2.0
Issues
Add geometric mean of per-token perplexities (#551, 2 comments)
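For context, the geometric mean of per-token perplexities reduces to the exponential of the mean token-level negative log-likelihood. A minimal sketch of that computation with `transformers` (the model name and text are placeholders, not taken from the issue):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

enc = tokenizer("the quick brown fox jumps over the lazy dog", return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits[:, :-1, :]          # predictions for tokens 1..n
targets = enc["input_ids"][:, 1:]
nll = torch.nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), targets.reshape(-1), reduction="none"
)

per_token_ppl = torch.exp(nll)            # one perplexity per predicted token
geo_mean_ppl = torch.exp(nll.mean())      # geometric mean == exp of the mean NLL
print(per_token_ppl.tolist(), geo_mean_ppl.item())
```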
Several breakages due to recent `datasets` (#542, 0 comments)
- 3
- 5
Is TextGenerationEvaluator incomplete? (#537, 2 comments)
How should I cite this repository? (#534, 1 comment)
Import error for the evaluator module (#518, 0 comments)
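The evaluator sits behind optional dependencies, so a missing package is a plausible cause of this kind of import error (an assumption, since the issue body is not shown here). A quick check, after installing `evaluate` together with `transformers`:

```python
# pip install evaluate transformers torch
# The evaluator refuses to construct without `transformers`, and most task
# pipelines additionally need `torch`.
import evaluate

task_evaluator = evaluate.evaluator("text-classification")
print(type(task_evaluator).__name__)
```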
Cannot list evaluation module types (#517, 2 comments)
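For reference, the documented way to enumerate modules by type is `evaluate.list_evaluation_modules`; the three recognised types are `"metric"`, `"comparison"`, and `"measurement"`:

```python
import evaluate

# Enumerate the hub-hosted evaluation modules grouped by module type.
for module_type in ("metric", "comparison", "measurement"):
    names = evaluate.list_evaluation_modules(module_type=module_type)
    print(f"{module_type}: {len(names)} modules")
```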
Can't calculate combined metric (WER + CER) (#516, 0 comments)
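Until the combined path works, a workaround is to load the two modules separately and merge the scores by hand; a sketch with placeholder transcripts:

```python
import evaluate

predictions = ["hello world", "good night moon"]   # placeholder ASR outputs
references = ["hello world", "good night moon"]

wer = evaluate.load("wer")
cer = evaluate.load("cer")

# Both modules return a single float, so the results are merged manually
# instead of going through `evaluate.combine`.
results = {
    "wer": wer.compute(predictions=predictions, references=references),
    "cer": cer.compute(predictions=predictions, references=references),
}
print(results)
```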
Allow BLEURT optimisation parameters (#515, 2 comments)
Question about evaluate/perplexity (#512, 2 comments)
Compatibility issue with the `datasets` library (#504, 0 comments)
Perplexity Space is broken (#502, 0 comments)
Wrong accuracy on the validation set (#500, 2 comments)
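One frequent source of implausible accuracy numbers (an assumption here, since the report itself is not quoted) is passing raw logits instead of class ids to the metric; taking the argmax first avoids it:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

# Placeholder outputs: `logits` has shape (n_samples, n_classes), `labels` is 1-D.
logits = np.array([[0.1, 2.3], [1.7, 0.2], [0.4, 0.9]])
labels = np.array([1, 0, 0])

predictions = logits.argmax(axis=-1)   # class ids, not raw scores
print(accuracy.compute(predictions=predictions, references=labels))
```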
Multi-label with f1 (#497, 1 comment)
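The `f1` module expects 1-D label ids, so multi-label targets have to be handled around it; one workaround is to score each label column as a binary problem and macro-average, sketched below with placeholder indicator matrices:

```python
import numpy as np
import evaluate

f1 = evaluate.load("f1")

# Placeholder multi-label data: binary indicator matrices of shape (n_samples, n_labels).
y_true = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 1], [1, 0, 0]])

# Score each label column as its own binary problem, then macro-average.
per_label = [
    f1.compute(predictions=y_pred[:, j], references=y_true[:, j])["f1"]
    for j in range(y_true.shape[1])
]
print(per_label, float(np.mean(per_label)))
```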
CER.py error in class CERConfig (#495, 7 comments)
Evaluate a quantized model (#487, 3 comments)
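Since the evaluator accepts a ready-made pipeline, one way to evaluate a quantized model is to quantize it first and wrap it in a pipeline; a sketch using dynamic quantization (the checkpoint, dataset, and label mapping are illustrative choices, not from the issue):

```python
import evaluate
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # placeholder checkpoint
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Dynamic quantization of the linear layers, one common way to obtain a quantized model.
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
pipe = pipeline("text-classification", model=quantized, tokenizer=tokenizer)

data = load_dataset("sst2", split="validation[:200]")
task_evaluator = evaluate.evaluator("text-classification")
results = task_evaluator.compute(
    model_or_pipeline=pipe,
    data=data,
    input_column="sentence",
    label_column="label",
    metric="accuracy",
    label_mapping={"NEGATIVE": 0, "POSITIVE": 1},
)
print(results)
```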
meteor doesn't work with the latest `datasets` (#480, 2 comments)
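For reference, the call affected by the breakage is the standard one below; it assumes an `evaluate` / `datasets` pair that are compatible with each other, which is exactly what the issue reports is not the case:

```python
import evaluate

meteor = evaluate.load("meteor")   # fetches the required nltk data on first use
results = meteor.compute(
    predictions=["the cat sat on the mat"],
    references=["the cat is sitting on the mat"],
)
print(results["meteor"])
```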