machinewrapped/gpt-subtrans

Setting an upper limit on the previous context

Closed this issue · 3 comments

I am translating some documentaries where there are no distinct scenes, and I noticed that it is costing more than I expected.

I believe the translator sends the summaries of all previous batches in the current scene. This can get pretty expensive: the prompt grows linearly with the number of batches, so the total cost grows quadratically with the length of the documentary.

My proposed solution is to only send a limited number of previous batches. This keeps the cost of a long scene down.
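To make the cost argument concrete, here is a rough back-of-the-envelope sketch. The batch count and token figures are hypothetical, not measured from gpt-subtrans:

```python
def context_tokens(num_batches, summary_tokens, max_summaries=None):
    """Total summary tokens sent across all batch requests (illustrative only)."""
    total = 0
    for batch_index in range(num_batches):
        # Without a cap, every previous summary is included; with a cap, at most max_summaries.
        summaries_sent = batch_index if max_summaries is None else min(batch_index, max_summaries)
        total += summaries_sent * summary_tokens
    return total

# 100 batches, ~80 tokens per summary (made-up numbers):
print(context_tokens(100, 80))                    # 396,000 tokens - grows quadratically
print(context_tokens(100, 80, max_summaries=10))  # 75,600 tokens - grows linearly
```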

There should already be an upper limit: max_context_summaries in Options.py. It defaults to 10 and is applied in GetBatchContext in SubtitleFile.py. Try a lower number if the summaries are getting long (GPT-4 tends to write much more detailed summaries than GPT-3.5).
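Conceptually, the cap just means only the most recent batch summaries are carried forward. A minimal sketch of the idea, with a hypothetical helper name (this is not the actual GetBatchContext code):

```python
def build_batch_context(previous_summaries, max_context_summaries=10):
    """Return context built from at most the last N batch summaries (illustrative only)."""
    if max_context_summaries:
        previous_summaries = previous_summaries[-max_context_summaries:]
    return "\n".join(previous_summaries)
```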

There's no command-line option to set it at the moment (it would be easy to add one), but it can be set in the GUI (advanced settings) or as an environment variable in a .env file.
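For example, a .env entry might look like the line below; the exact key name is an assumption based on the option name in Options.py, so check how the project actually reads its environment variables:

```
max_context_summaries=5
```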

Thanks for the reply. Perhaps this could be added as a command-line option.

Sure, easily done!
30fa3a7
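For anyone wanting to wire up such a flag in their own script, a minimal argparse sketch follows. The flag name --maxsummaries is hypothetical and not necessarily what the linked commit uses:

```python
import argparse

parser = argparse.ArgumentParser(description="Translate subtitles (illustrative snippet only)")
parser.add_argument("--maxsummaries", type=int, default=10,
                    help="Maximum number of previous batch summaries to include as context")
args = parser.parse_args()

# Merge the value into whatever options structure the program uses.
options = {"max_context_summaries": args.maxsummaries}
```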