VILA-Lab/ATLAS

Extend the repository for self-benchmark


Hello!

I've found your paper very helpful for my studies, and I'd like to run some of these benchmarks myself. Out of the box, the only script provided seems to expect principle files in plain text that are absent from the repository (rather than the provided JSON files), and it depends on the OpenAI library. It would be great to have resources for running these benchmarks against a variety of models (such as the Phi models), which could be implemented with a model-agnostic library like litellm or something similar.
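As a rough illustration of what I mean, something like the sketch below would let the benchmark target any litellm-supported backend. To be clear, this is only a hypothetical sketch: the file name `principles.json`, its structure, and the helper names are my assumptions, not the repo's actual layout, and would need to be adapted to the real JSON schema.

```python
# Hypothetical sketch of a model-agnostic benchmark runner.
# Assumptions (not from the repo): the principles live in a JSON file
# that parses to a list of strings, and questions are plain strings.
import json


def load_principles(path="principles.json"):
    # Load principle prompts from the repo's JSON file (assumed schema:
    # a JSON array of principle strings).
    with open(path, encoding="utf-8") as f:
        return json.load(f)


def build_messages(principle, question):
    # Combine one principle with one benchmark question into a
    # chat-style message list, as expected by litellm/OpenAI APIs.
    return [{"role": "user", "content": f"{principle}\n\n{question}"}]


def run_benchmark(model, principles, questions):
    # Query any litellm-supported model (OpenAI, Azure-hosted Phi,
    # local Ollama models, etc.) and collect the raw responses.
    from litellm import completion  # pip install litellm

    results = []
    for principle in principles:
        for question in questions:
            resp = completion(model=model, messages=build_messages(principle, question))
            results.append(resp.choices[0].message.content)
    return results
```

Since litellm mirrors the OpenAI response shape, switching models would just be a matter of changing the `model` string (e.g. `"gpt-4o"` vs. an Ollama or Azure deployment name).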