embeddings-benchmark/mteb

[mieb] EVA on CVBenchCount fails

Closed this issue · 2 comments

Possibly the same as #1393.

  with torch.no_grad(), torch.cuda.amp.autocast():
  0%|                                                                                | 0/788 [00:00<?, ?it/s]
ERROR:mteb.evaluation.MTEB:Error while evaluating CVBenchCount: not enough values to unpack (expected 4, got 3)
Traceback (most recent call last):
  File "/data/niklas/mieb/mteb/scripts/run_mieb.py", line 90, in <module>
    results = evaluation.run(model, output_folder="/data/niklas/mieb/results-mieb-final", batch_size=1)
  File "/data/niklas/mieb/mteb/mteb/evaluation/MTEB.py", line 464, in run
    raise e
  File "/data/niklas/mieb/mteb/mteb/evaluation/MTEB.py", line 425, in run
    results, tick, tock = self._run_eval(
  File "/data/niklas/mieb/mteb/mteb/evaluation/MTEB.py", line 300, in _run_eval
    results = task.evaluate(
  File "/data/niklas/mieb/mteb/mteb/abstasks/AbsTask.py", line 126, in evaluate
    scores[hf_subset] = self._evaluate_subset(
  File "/data/niklas/mieb/mteb/mteb/abstasks/Image/AbsTaskAny2TextMultipleChoice.py", line 62, in _evaluate_subset
    scores = evaluator(model, encode_kwargs=encode_kwargs)
  File "/data/niklas/mieb/mteb/mteb/evaluation/evaluators/Image/Any2TextMultipleChoiceEvaluator.py", line 78, in __call__
    query_embeddings = model.get_fused_embeddings(
  File "/data/niklas/mieb/mteb/mteb/models/evaclip_models.py", line 128, in get_fused_embeddings
    image_embeddings = self.get_image_embeddings(images, batch_size)
  File "/data/niklas/mieb/mteb/mteb/models/evaclip_models.py", line 94, in get_image_embeddings
    image_outputs = self.model.encode_image(inputs.to(self.device))
  File "/data/niklas/mieb/mteb/EVA/EVA-CLIP/rei/eva_clip/model.py", line 302, in encode_image
    features = self.visual(image)
  File "/env/lib/conda/gritkto4/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/env/lib/conda/gritkto4/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/niklas/mieb/mteb/EVA/EVA-CLIP/rei/eva_clip/eva_vit_model.py", line 529, in forward
    x = self.forward_features(x)
  File "/data/niklas/mieb/mteb/EVA/EVA-CLIP/rei/eva_clip/eva_vit_model.py", line 491, in forward_features
    x = self.patch_embed(x)
  File "/env/lib/conda/gritkto4/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/env/lib/conda/gritkto4/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/niklas/mieb/mteb/EVA/EVA-CLIP/rei/eva_clip/eva_vit_model.py", line 321, in forward
    B, C, H, W = x.shape
ValueError: not enough values to unpack (expected 4, got 3)
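The failing line in `eva_vit_model.py` unpacks `B, C, H, W = x.shape`, so the model received a 3-D tensor — a single image without a batch dimension — where a 4-D batch was expected (consistent with the run using `batch_size=1`). Below is a minimal sketch reproducing the unpack error and a hypothetical workaround that adds the missing batch dimension before calling into the vision tower; `ensure_batched` is an illustrative helper, not mteb's or EVA-CLIP's actual fix.

```python
import torch

def ensure_batched(images: torch.Tensor) -> torch.Tensor:
    """Hypothetical guard: add a leading batch dim to a single (C, H, W) image.

    EVA's patch embedding does `B, C, H, W = x.shape`, so an unbatched
    3-D tensor raises `ValueError: not enough values to unpack
    (expected 4, got 3)`.
    """
    if images.dim() == 3:             # single image: (C, H, W)
        images = images.unsqueeze(0)  # -> (1, C, H, W)
    return images

# Reproduce the error with a dummy unbatched image:
x = torch.rand(3, 224, 224)
try:
    B, C, H, W = x.shape              # mirrors eva_vit_model.py line 321
except ValueError as e:
    print(e)                          # not enough values to unpack (expected 4, got 3)

# With the guard applied, unpacking succeeds:
B, C, H, W = ensure_batched(x).shape
print(B, C, H, W)                     # 1 3 224 224
```

If #1393 is indeed the same root cause, the real fix likely belongs in `get_image_embeddings` in `evaclip_models.py`, where the preprocessed `inputs` tensor is built, rather than at each call site.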

I think this is the same as #1393.

Closing this now. Feel free to reopen if the issue persists.