togethercomputer/MoA
Together Mixture-Of-Agents (MoA) – 65.1% on AlpacaEval with OSS models
Python · Apache-2.0
Issues
- seems that MoA does not work on MATH and QA with both weak and strong LLMs (#41, opened by yananchen1989, 2 comments)
- ollama support (#20, opened by win4r, 1 comment)
- Evaluation on Objective Benchmarks (#40, opened by jingmingzhuo, 1 comment)
- Error occur after all 805 tests (#28, opened by URRealHero, 2 comments)
- You have been rate limited. (#27, opened by carrt123, 1 comment)
- Does the agent support GPT, Gemini, Claude, etc. (#16, opened by zsqdx, 0 comments)
- --rounds 2 seems broken (#30, opened by tijszwinkels, 0 comments)
- why the implement of `moa.py` isn't consistent with `inject_references_to_messages` (#26, opened by better629, 1 comment)
- how to deploy this locally with ollama UIs like `Open WebUI` and `Lobe Chat`? (#14, opened by hemangjoshi37a, 9 comments)
- Run locally? (#10, opened by CHesketh76, 3 comments)
- Agents? (#7, opened by logan-markewich, 0 comments)
- Missing AlpacaEval gpt4 reference results - results/gpt4_1106_preview/model_outputs.json (#21, opened by morganmcg1, 4 comments)
- does it support `ollama`? (#15, opened by hemangjoshi37a, 3 comments)
- questions about the intermediate layers (#9, opened by yananchen1989)