vdeny opened this issue a year ago · 0 comments
Can you please add support for the gpt-3.5-turbo-16k model, which supports four times the context length of the 4k base model?
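To illustrate the request, here is a minimal sketch of how a caller might fall back to the larger-context variant when a prompt exceeds the base model's window. The model names are OpenAI's published identifiers; the helper function and the exact token limits used are assumptions for illustration only, not part of this project's API.

```python
def pick_model(prompt_tokens: int) -> str:
    """Return the smallest gpt-3.5-turbo variant whose context window
    fits the given prompt size (a hypothetical helper, for illustration)."""
    # gpt-3.5-turbo has a 4,096-token context window;
    # gpt-3.5-turbo-16k raises that to 16,384 tokens.
    if prompt_tokens <= 4096:
        return "gpt-3.5-turbo"
    if prompt_tokens <= 16384:
        return "gpt-3.5-turbo-16k"
    raise ValueError("prompt exceeds the 16k context window")

print(pick_model(3000))   # gpt-3.5-turbo
print(pick_model(12000))  # gpt-3.5-turbo-16k
```

The model string would simply be passed through to the chat completions endpoint wherever the library currently hardcodes "gpt-3.5-turbo".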