Add a per-model checkbox to "remove" the max response limit (omit that parameter from the LLM call), so the model can produce unrestricted output, which is the default behavior I would expect.
enricoros commented
Add a checkbox in the model settings to "remove" the max response limit: when checked, the max-tokens variable is simply not provided when making the LLM call, so you get whatever output the model produces without any restriction. This is what I would expect from a model by default.