Clean up max_token selection
Kav-K opened this issue · 0 comments
Kav-K commented
Currently, when setting `max_tokens` for a conversation buffer memory within langchain, we use simple string matching to set the token limit: 29,000 if the model is a gpt-4 model, and 100,000 if the model is one of the preview models (which have 128k context).
It would be nicer to have some sort of `get_max_conversation_tokens` helper that returns the correct bound for a conversation buffer memory given the model name.
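A minimal sketch of what such a helper could look like (the function name, the fallback value, and the substring checks are assumptions, not the actual implementation):

```python
def get_max_conversation_tokens(model_name: str) -> int:
    """Return the max_tokens bound for a conversation buffer memory.

    Thresholds mirror the currently hard-coded values: 100,000 for the
    128k-context preview models, 29,000 for other gpt-4 models.
    """
    # Preview models (128k context) get the larger budget.
    if "preview" in model_name:
        return 100_000
    # All other gpt-4 variants use the smaller budget.
    if model_name.startswith("gpt-4"):
        return 29_000
    # Fallback for unrecognized models (assumed default, not from the issue).
    return 4_000


print(get_max_conversation_tokens("gpt-4-1106-preview"))  # 100000
print(get_max_conversation_tokens("gpt-4"))               # 29000
```

Centralizing the bound in one function would also make it easy to add new models or adjust limits in a single place instead of scattering string checks around the codebase.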