8k context limit for 16k models with .txt attachments
arash-ashra opened this issue · 9 comments
Looks like turbo-16k only has an 8k token limit for output, so you'll need to use a gpt-4 model like 32k or 1106-preview.
Did you try it on the bot on the server?
The file is just too big. I don't understand your issue here, @ashra-academy; past a certain token count it simply won't work.
@ashra-academy It is certainly not a bug in the bot: the bot's summarization threshold is set to 5k, and your file is bigger than that, so it won't work. I've set the threshold to 100k; you can try again now with the bot on the server.
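The check described above can be sketched as follows. This is a hypothetical illustration, not the bot's actual code: the names `SUMMARIZE_THRESHOLD`, `estimate_tokens`, and `can_summarize` are invented for this sketch, and the 4-characters-per-token ratio is a rough heuristic (a real implementation would use a proper tokenizer such as tiktoken).

```python
# Hypothetical sketch of a pre-summarization size check.
# All names and the chars-per-token ratio are assumptions for illustration.

SUMMARIZE_THRESHOLD = 100_000  # raised from the 5k default mentioned above


def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English text."""
    return max(1, len(text) // 4)


def can_summarize(attachment_text: str, threshold: int = SUMMARIZE_THRESHOLD) -> bool:
    """True if the attachment fits under the summarization threshold."""
    return estimate_tokens(attachment_text) <= threshold
```

With the old 5k threshold, a .txt attachment of a few hundred kilobytes would fail this check, which matches the behavior reported in this issue.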
Oh, I didn't know about that threshold parameter. It should probably be above 128k, since the latest gpt-4-preview supports that much context. Also, how can I set this parameter on my own bot?
Above 128k doesn't make sense: the request would hit the token limit before summarization even begins, and it would leave no room for the summary afterwards. Something like 110k is more reasonable. You can set it with /system settings summarize_threshold.
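The headroom argument above can be made concrete with a small calculation. The 128k figure is the context window of gpt-4-1106-preview; the reserve and overhead numbers below are assumptions for illustration, not values from the bot.

```python
# Why the threshold must sit well below the context window:
# input text + prompt overhead + summary output must all fit in 128k.
# SUMMARY_RESERVE and PROMPT_OVERHEAD are assumed values for this sketch.

CONTEXT_WINDOW = 128_000  # gpt-4-1106-preview context size
SUMMARY_RESERVE = 4_096   # assumed space reserved for the summary output
PROMPT_OVERHEAD = 2_000   # assumed space for system prompt and instructions


def safe_threshold(context: int = CONTEXT_WINDOW,
                   reserve: int = SUMMARY_RESERVE,
                   overhead: int = PROMPT_OVERHEAD) -> int:
    """Largest input size that still leaves room for prompt and summary."""
    return context - reserve - overhead
```

Under these assumptions the ceiling is 128,000 - 4,096 - 2,000 = 121,904 tokens, so rounding the threshold down to ~110k leaves extra safety margin, as suggested above.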
Ooh, I see. Thanks!