ChatGPT has also demonstrated its capabilities as a robust translator, capable of handling not just common languages, but also unconventional forms of writing like emojis and word scrambling. However, it does not always produce deterministic output or adhere to line-to-line correlation, which can disrupt the timing of subtitles, even when given precise instructions and with the model `temperature` parameter set to `0`.
This utility uses the OpenAI ChatGPT API to translate text, with a specific focus on line-based translation, especially for SRT subtitles. It optimizes token usage by removing SRT overhead and grouping text into batches, allowing translations of arbitrary length without excessive token consumption while ensuring a one-to-one match between input and output lines.
Web Interface: https://cerlancism.github.io/chatgpt-subtitle-translator
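To illustrate the SRT overhead stripping described above, here is a minimal sketch (not the project's actual implementation; cue parsing is simplified to index/timestamp/text blocks):

```js
// Minimal sketch of SRT overhead stripping: only the text lines are sent
// for translation; numbering and timestamps are re-attached afterwards.
const srt = `1
00:00:00,000 --> 00:00:02,000
おはようございます。

2
00:00:02,000 --> 00:00:05,000
お元気ですか?`

// Each SRT cue block: index line, timestamp line, then text line(s).
const cues = srt.split(/\r?\n\r?\n/).map(block => {
  const [index, timing, ...text] = block.split(/\r?\n/)
  return { index, timing, text: text.join(' ') }
})

// Only these lines cost translation tokens.
const linesToTranslate = cues.map(cue => cue.text)

// Zip the translated lines back into the SRT structure, one to one.
const rebuildSrt = (translatedLines) => cues
  .map((cue, i) => [cue.index, cue.timing, translatedLines[i]].join('\n'))
  .join('\n\n')

console.log(rebuildSrt(['Good morning.', 'How are you?']))
```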
- New work in progress: Web UI
- Line-based batching: avoids the token limit per request, reduces overhead token wastage, and maintains translation context to a certain extent
- Checking with the free OpenAI Moderation tool: prevents token wastage if the model is highly likely to refuse to translate (see the sketch after this list)
- Streaming process output
- Request per minute (RPM) rate limits
- TODO: Tokens per minute (TPM) rate limits
- Progress resumption (CLI only): mitigation for frequent API gateway errors and downtime
- TODO: Retry translation parts
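The moderation pre-check above can be sketched as follows (illustrative only, using the official `openai` npm package in its v4 style; the project's actual wiring may differ):

```js
// Sketch: check a batch against the free Moderation endpoint first, so
// tokens are not wasted on a chat request the model would likely refuse.
// Assumes the `openai` npm package and OPENAI_API_KEY in the environment.
import OpenAI from 'openai'

const client = new OpenAI()

async function isLikelyRefused(text) {
  const moderation = await client.moderations.create({ input: text })
  return moderation.results[0].flagged
}

if (await isLikelyRefused('some batch of subtitle lines')) {
  console.warn('Flagged by moderation, skipping translation request')
}
```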
Reference: https://github.com/openai/openai-quickstart-node#setup
- Node.js version `>= 16.13.0` required. This README assumes a `bash` shell environment.
- Clone this repository and navigate into the directory

  ```bash
  git clone https://github.com/Cerlancism/chatgpt-subtitle-translator && cd chatgpt-subtitle-translator
  ```

- Install the requirements

  ```bash
  npm install
  ```

- Give executable permission

  ```bash
  chmod +x cli/translator.mjs
  ```

- Copy `.env.example` to `.env`

  ```bash
  cp .env.example .env
  ```

- Add your API key to the newly created `.env` file
- (Optional) Set rate limits: https://platform.openai.com/docs/guides/rate-limits/overview
```bash
cli/translator.mjs --help
```
Usage: translator [options]

Translation tool based on ChatGPT API

Options:

- `-f, --from <language>` Source language (default: `""`)
- `-t, --to <language>` Target language (default: `"English"`)
- `-f, --file <file>` Input source text with the content of this file, in `.srt` format or plain text
- `-p, --plain-text <text>` Input source text with this plain text argument
- `-s, --system-instruction <instruction>` Override the prompt system instruction template `Translate ${from} to ${to}` with this plain text, ignoring the `--from` and `--to` options
- `--initial-prompts <prompts>` Initial prompts for the translation in JSON (default: `"[]"`)
- `--no-use-moderator` Don't use the OpenAI API Moderation endpoint
- `--no-prefix-number` Don't prefix lines with numerical indices
- `--no-line-matching` Don't enforce one-to-one line quantity input-output matching
- `-l, --history-prompt-length <length>` Length of prompt history to retain for the next request batch (default: `10`)
- `-b, --batch-sizes <sizes>` Batch sizes of increasing order for translation prompt slices in JSON array (default: `"[10, 100]"`)

  This sets the number of lines to include in each translation prompt, provided that they are estimated to be within the token limit. In case of mismatched output line quantities, this number is decreased step by step according to the values in the array, ultimately reaching one. Larger batch sizes generally lead to more efficient token utilization and potentially better contextual translation; however, mismatched output line quantities or exceeding the token limit cause token wastage, requiring resubmission of the batch with a smaller batch size. A sketch of this fallback appears after this options list.

Additional options for the ChatGPT API:

- `-m, --model <model>` (default: `"gpt-3.5-turbo"`) https://platform.openai.com/docs/api-reference/chat/create#chat/create-model
- `--stream` Stream progress output to terminal https://platform.openai.com/docs/api-reference/chat/create#chat/create-stream
- `-t, --temperature <temperature>` Sampling temperature to use; set a low value below `0.3` to be more deterministic for translation (default: `1`) https://platform.openai.com/docs/api-reference/chat/create#chat/create-temperature
- `--top_p <top_p>` Nucleus sampling parameter, top_p probability mass https://platform.openai.com/docs/api-reference/chat/create#chat/create-top_p
- `--presence_penalty <presence_penalty>` Penalty for new tokens based on their presence in the text so far https://platform.openai.com/docs/api-reference/chat/create#chat/create-presence_penalty
- `--frequency_penalty <frequency_penalty>` Penalty for new tokens based on their frequency in the text so far https://platform.openai.com/docs/api-reference/chat/create#chat/create-frequency_penalty
- `--logit_bias <logit_bias>` Modify the likelihood of specified tokens appearing in the completion https://platform.openai.com/docs/api-reference/chat/create#chat/create-logit_bias
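The batch-size fallback described under `--batch-sizes` can be sketched as follows (illustrative only; `translateBatch` is a hypothetical helper returning an array of translated lines, and the real CLI's control flow may differ):

```js
// Sketch: translate in large batches first; on a line-count mismatch,
// resubmit that batch at the next smaller size, down to one line each.
async function translateLines(lines, batchSizes = [10, 100]) {
  const sizes = [...batchSizes].sort((a, b) => b - a).concat(1) // e.g. [100, 10, 1]
  return translateSlice(lines, sizes)
}

async function translateSlice(lines, sizes) {
  const [size, ...smaller] = sizes
  const output = []
  for (let i = 0; i < lines.length; i += size) {
    const batch = lines.slice(i, i + size)
    const translated = await translateBatch(batch) // hypothetical API call
    if (translated.length === batch.length || size === 1) {
      output.push(...translated)
    } else {
      // Mismatched line quantities: the batch tokens are wasted and the
      // same lines are resubmitted at a smaller batch size.
      output.push(...await translateSlice(batch, smaller))
    }
  }
  return output
}
```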
```bash
cli/translator.mjs --plain-text "你好"
```

Standard Output

```
Hello.
```
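For reference, a request like the one above is essentially a single chat completion call built from the `Translate ${from} to ${to}` system instruction template; a minimal stand-alone equivalent (using the official `openai` npm package, not the CLI's actual code) looks like:

```js
import OpenAI from 'openai'

const client = new OpenAI() // reads OPENAI_API_KEY from the environment

const completion = await client.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [
    // System instruction built from the `Translate ${from} to ${to}` template
    { role: 'system', content: 'Translate to English' },
    { role: 'user', content: '你好' },
  ],
})

console.log(completion.choices[0].message.content) // "Hello."
```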
```bash
cli/translator.mjs --stream --to "Emojis" --temperature 0 --plain-text "$(curl 'https://api.chucknorris.io/jokes/0ECUwLDTTYSaeFCq6YMa5A' | jq .value)"
```

Input Argument

```
Chuck Norris can walk with the animals, talk with the animals; grunt and squeak and squawk with the animals... and the animals, without fail, always say 'yessir Mr. Norris'.
```

Standard Output

```
👨🦰💪🚶♂️🦜🐒🐘🐅🐆🐎🐖🐄🐑🦏🐊🐢🐍🐿️🐇🐿️❗️🌳💬😲👉🤵👨🦰👊=🐕🐑🐐🦌🐘🦏🦍🦧🦓🐅🦌🦌🦌🐆🦍🐘🐘🐗🦓=👍🤵.
```
```bash
cli/translator.mjs --stream --system-instruction "Scramble characters of words while only keeping the start and end letter" --no-prefix-number --no-line-matching --temperature 0 --plain-text "Chuck Norris can walk with the animals, talk with the animals;"
```

Standard Output

```
Cuhck Nroris can wakl wtih the aiamnls, talk wtih the aiamnls;
```
```bash
cli/translator.mjs --stream --system-instruction "Unscramble characters back to English" --no-prefix-number --no-line-matching --temperature 0 --plain-text "Cuhck Nroris can wakl wtih the aiamnls, talk wtih the aiamnls;"
```

Standard Output

```
Chuck Norris can walk with the animals, talk with the animals;
```
```bash
cli/translator.mjs --stream --temperature 0 --file test/data/test_cn.txt
```

Input file: test/data/test_cn.txt

```
你好。
拜拜!
```

Standard Output

```
Hello.
Goodbye!
```
```bash
cli/translator.mjs --stream --temperature 0 --file test/data/test_ja_small.srt
```

Input file: test/data/test_ja_small.srt

```
1
00:00:00,000 --> 00:00:02,000
おはようございます。

2
00:00:02,000 --> 00:00:05,000
お元気ですか?

3
00:00:05,000 --> 00:00:07,000
はい、元気です。

4
00:00:08,000 --> 00:00:12,000
今日は天気がいいですね。

5
00:00:12,000 --> 00:00:16,000
はい、とてもいい天気です。
```

Output file: test/data/test_ja_small.srt.out_English.srt

```
1
00:00:00,000 --> 00:00:02,000
Good morning.

2
00:00:02,000 --> 00:00:05,000
How are you?

3
00:00:05,000 --> 00:00:07,000
Yes, I'm doing well.

4
00:00:08,000 --> 00:00:12,000
The weather is nice today, isn't it?

5
00:00:12,000 --> 00:00:16,000
Yes, it's very nice weather.
```
System Instruction (Tokens: 5)

```
Translate Japanese to English
```

| Input | Prompt | Transform | Output |
|---|---|---|---|
| Tokens: | Tokens: | Tokens: | Tokens: |
| 1<br>00:00:00,000 --> 00:00:02,000<br>おはようございます。<br>2<br>00:00:02,000 --> 00:00:05,000<br>お元気ですか?<br>3<br>00:00:05,000 --> 00:00:07,000<br>はい、元気です。<br>4<br>00:00:08,000 --> 00:00:12,000<br>今日は天気がいいですね。<br>5<br>00:00:12,000 --> 00:00:16,000<br>はい、とてもいい天気です。 |  |  | 1<br>00:00:00,000 --> 00:00:02,000<br>Good morning.<br>2<br>00:00:02,000 --> 00:00:05,000<br>How are you?<br>3<br>00:00:05,000 --> 00:00:07,000<br>Yes, I'm doing well.<br>4<br>00:00:08,000 --> 00:00:12,000<br>The weather is nice today, isn't it?<br>5<br>00:00:12,000 --> 00:00:16,000<br>Yes, it's very nice weather. |
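The `--no-prefix-number` and `--no-line-matching` options above imply that the default pipeline prefixes each line with an index and verifies the output count; a sketch of that idea (assuming a simple `N. text` format, which may not be the project's exact format):

```js
// Sketch: prefix each line with its index before sending, then verify and
// strip the indices on the way back so merged or dropped lines are caught.
const prefixLines = (lines) => lines.map((line, i) => `${i + 1}. ${line}`)

function unprefixLines(outputLines, expectedCount) {
  const matches = outputLines
    .map(line => line.match(/^(\d+)\.\s*(.*)$/))
    .filter(Boolean)
  if (matches.length !== expectedCount) {
    // Triggers the batch-size fallback described earlier.
    throw new Error('Line quantity mismatch, retry with a smaller batch')
  }
  return matches.map(match => match[2])
}

console.log(prefixLines(['おはようございます。', 'お元気ですか?']))
console.log(unprefixLines(['1. Good morning.', '2. How are you?'], 2))
```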
TODO: More analysis

5 SRT lines: `test/data/test_ja_small.srt`

- None (plain text SRT input and output): Tokens: 299
- No batching, with SRT stripping but one line per prompt with System Instruction overhead, including up to 10 historical prompt context: Tokens: 362
- SRT stripping and line batching of 2: Tokens: 276

30 SRT lines: `test/data/test_ja.srt`

- None (plain text SRT input and output): Tokens: 1625
- No batching, with SRT stripping but one line per prompt with System Instruction overhead, including up to 10 historical prompt context: Tokens: 6719
- SRT stripping and line batching of `[5, 10]`, including up to 10 historical prompt context: Tokens: 1036
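To reproduce this kind of measurement yourself, counting tokens before and after SRT stripping might look like this (a sketch assuming the `tiktoken` npm package; not the project's actual measurement code):

```js
// Sketch: compare token counts of the raw SRT against stripped text lines.
import { readFileSync } from 'node:fs'
import { encoding_for_model } from 'tiktoken' // assumed tokenizer package

const encoder = encoding_for_model('gpt-3.5-turbo')
const srt = readFileSync('test/data/test_ja_small.srt', 'utf8')

// Keep only the text line(s) of each cue: drop index and timestamp lines.
const textOnly = srt
  .trim()
  .split(/\r?\n\r?\n/)
  .map(block => block.split(/\r?\n/).slice(2).join(' '))
  .join('\n')

console.log('raw SRT tokens:', encoder.encode(srt).length)
console.log('stripped text tokens:', encoder.encode(textOnly).length)
encoder.free()
```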