seratch/ChatGPT-in-Slack

High Input Token Count


The input token count is far too high compared to the output: ~28K input tokens across just 37 requests (only 1.7K output tokens). Is there a way to reduce the number of input tokens being sent?

Modifying the initial `OPENAI_SYSTEM_TEXT` (set via `export`) doesn't seem to have much effect.


This app sends not only the prompt but also a few past messages from the same DM, so that ChatGPT can give a better answer. This context is crucial for response quality, but if you don't need it, you can fork the project and customize that behavior.
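For anyone forking the project, trimming the conversation context is straightforward in principle: keep the system prompt but cap how many past messages are replayed. A minimal sketch in Python (the function name, cap, and message layout are illustrative assumptions, not the app's actual code):

```python
# Hypothetical helper: cap the DM history sent to the OpenAI API.
# The messages list follows the OpenAI chat format:
# [{"role": "system"|"user"|"assistant", "content": "..."}]

def trim_history(messages, max_past_messages=3):
    """Keep the system prompt plus only the most recent messages."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    # Dropping older turns reduces input tokens at the cost of context,
    # which may hurt answer quality on multi-turn conversations.
    return system + rest[-max_past_messages:]


history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "first question"},
    {"role": "assistant", "content": "first answer"},
    {"role": "user", "content": "second question"},
    {"role": "assistant", "content": "second answer"},
    {"role": "user", "content": "latest question"},
]
trimmed = trim_history(history, max_past_messages=3)
print(len(trimmed))  # system prompt + the 3 most recent messages
```

The trade-off is exactly the one described above: fewer past messages means fewer input tokens, but also less context for ChatGPT to work with.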

Hope this was helpful. Since this issue has been inactive for a while, let me close it now.