Issues
Scrollable dropdown
#1304 opened - 2
Leaderboard - Overview Issue
#1303 opened - 3
arXiv paper submission
#1299 opened - 4
Standardize language codes for Chinese
#1298 opened - 2
Serialization and validation are slow
#1296 opened - 1
Add older MMTEB baselines
#1295 opened - 0
Hacktoberfest 🎉
#1279 opened - 2
New benchmark interface
#1272 opened - 3
Add FinMTEB
#1267 opened - 3
Add documentation for custom rerankers
#1266 opened - 5
Add PIRB
#1265 opened - 1
Allow NumPy 2.0
#1263 opened - 1
Sort benchmarks by name
#1257 opened - 0
Add benchmark CLI options
#1250 opened - 1
Massive Multimodal Extension of MTEB
#1249 opened - 0
SWE-Bench Retrieval
#1238 opened - 6
Add proper metadata to models
#1234 opened - 4
Allow aggregated tasks within benchmarks
#1231 opened - 2
Reranker eval
#1230 opened - 2
Unable to reproduce MTEB leaderboard results on TRECCOVID using `Snowflake/snowflake-arctic-embed-m-v1.5`
#1227 opened - 3
PreTrainedTokenizerFast._batch_encode_plus() got an unexpected keyword argument 'prompt_name'
#1224 opened - 2
Error on MTOPDomainClassification task
#1216 opened - 0
Add benchmark overview table
#1209 opened - 2
BUG: if the model defines a similarity score, STS Pearson and Spearman are written as arrays, not as scores.
#1206 opened - 3
Evaluating only on English
#1205 opened - 8
Main Example in the README fails
#1204 opened - 13
Sequence Length information in the leaderboard
#1202 opened - 1
Error while downloading
#1176 opened - 3
Add updated Touche-2020
#1170 opened - 2
Standardize license in task metadata
#1165 opened - 5
How to output incorrect retrieval results?
#1162 opened - 6
SummarizationEvaluator mean score issue
#1156 opened - 0
Add CRAG
#1151 opened - 8
[New dataset request] Please add MKQA
#1149 opened - 3
Missing import for SadeemQuestionRetrieval
#1143 opened