ur-whitelab/chemcrow-public

Wondering if there is a token-limit-exceeded problem

zhaoyiCC opened this issue · 1 comment

Thanks for your excellent work, very insightful. But I'm wondering: with 13 tools and local knowledge sources, it seems very easy to hit the 16k token limit (turbo-0613). How do you avoid this problem?

Hi! Thanks for raising the issue :)
The 16k context window has not been an issue so far in the experiments we've run. If anything, newer models (gpt-4-1106-preview) offer a longer context window (128k tokens), so you could also work with one of those.
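
For reference, here is a minimal sketch of pointing ChemCrow at the longer-context model. It assumes the `ChemCrow` constructor accepts `model` and `temp` keyword arguments and exposes a `run` method, as shown in the repository README; adjust the names if the API differs in your version.

```python
from chemcrow.agents import ChemCrow

# Assumption: ChemCrow takes the OpenAI model name via `model`.
# gpt-4-1106-preview has a 128k-token context window, which leaves
# ample room for the 13 tool descriptions and intermediate agent steps.
chem_model = ChemCrow(model="gpt-4-1106-preview", temp=0.1)

result = chem_model.run("What is the molecular weight of tylenol?")
print(result)
```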