Currently the entire conversation context is sent to the OpenAI API, which causes high costs per GPT-4 query. How can we limit the context to only the last two messages?
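
For example, I imagine something like the sketch below: keep the system prompt plus only the last two messages of the history before each call. (This assumes the official `openai` Python client; `trimmed_context` is just an illustrative helper, not something that exists in this repo.)

```python
from openai import OpenAI  # assumes the official openai Python package (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def trimmed_context(history, keep_last=2):
    """Keep any leading system message plus the last `keep_last` messages."""
    system = [m for m in history if m["role"] == "system"][:1]
    rest = [m for m in history if m["role"] != "system"]
    return system + rest[-keep_last:]


history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "First question..."},
    {"role": "assistant", "content": "First answer..."},
    {"role": "user", "content": "Follow-up question..."},
]

# Only the system prompt and the last two messages are sent,
# instead of the full conversation history.
response = client.chat.completions.create(
    model="gpt-4",
    messages=trimmed_context(history, keep_last=2),
)
print(response.choices[0].message.content)
```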
If you want to avoid sending the entire context, what about just starting a new conversation?