chigkim/Ollama-MMLU-Pro

Multiple Model Support

notasquid1938 opened this issue · 1 comment

This is a great set of scripts! I was wondering if there was a way to modify the config.toml to list multiple models for benchmarking. That way the script doesn't have to be rerun for every new model.

kth8 commented

My solution so far has just been to use a while loop. For example:

# Scrape quantized tags for llama3.2 from the Ollama library page, skipping base/text/fp16 variants and the legacy q4_0/q4_1/q5_0/q5_1 quants (a heuristic tied to the page's current HTML)
curl -s https://ollama.com/library/llama3.2 | awk -F'["/]' '/700/ && $4 ~ /:/ && $4 ~ /q/ && $4 !~ /base|text|fp16|q4_0|q4_1|q5_0|q5_1/ { print $4 }' > models_list.txt
# Pull each tag and, if the pull succeeds, benchmark it
while read -r line; do ollama pull "$line" && pipenv run python run_openai.py --model "$line"; done < models_list.txt
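
If you'd rather maintain the model list by hand than scrape the tags page, the same idea works with a checked-in file. Here is a minimal sketch, assuming a hypothetical models.txt with one Ollama tag per line (run_openai.py and --model come from the loop above):

#!/usr/bin/env bash
set -euo pipefail
# models.txt is a hypothetical hand-maintained file, one Ollama tag per line
while read -r model; do
    [ -n "$model" ] || continue                       # skip blank lines
    ollama pull "$model"
    pipenv run python run_openai.py --model "$model"
done < models.txt

Keeping the list in a file you control makes reruns reproducible, whereas the scraped list changes whenever new tags are pushed to the library page.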