anonymousException/renpy-translator

Use a System Prompt when translating with an LLM

Closed this issue · 3 comments

Using a System Prompt improves the model's ability to follow instructions, which in turn improves translation quality.

Actually, the AI translation call already has a built-in prompt; see the code: https://github.com/anonymousException/renpy-translator/blob/main/src/openai_translate.py#L145

    source_lang_setup = f'You will receive a piece of {source} text in JSON dictionary format'
    role_setup = 'You are a translation API that receives dictionary-type data in JSON format and returns dictionary-type results in JSON format'
    format_requirement = 'do not consider that we are chatting or greeting me and simply reply to me with the translation in the same format as the original'
    prompt = f'{role_setup}. {source_lang_setup}, where the key is the line number and the value is the content of the corresponding line. Please translate it into {target} according to the following requirements: \n' + \
             '1. first read through the whole text, determine the type of text content and select the appropriate translation style before starting the translation; \n' + \
             '2. use the homophonic translation for names of people and places consistently; \n' + \
             '3. polish translation results to make them accurate and natural; \n' + \
             '4. do not change or convert punctuation marks; \n' + \
             f'5. {format_requirement}, which is an example of the format: \n' + \
             'Me: {"1": "Contents of line 1", "2": "Contents of line 2", "3": "Contents of line 3"} \n' + \
             'You: {"1": "Translation result for line 1", "2": "Translation result for line 2", "3": "Translation result for line 3"} \n' + \
             f'Next you will receive the text that needs to be translated into {target}. \n' + \
             f'{js}'
    chat_completion = client.with_options(timeout=self.timeout, max_retries=2).chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": prompt,
            }
        ],

However, as this shows, the System Prompt is not actually being used, which somewhat weakens the model's instruction-following. I suggest separating the game-text input from the instructions, for example:

    prompt_sys = f'{role_setup}. {source_lang_setup}, where the key is the line number and the value is the content of the corresponding line. Please translate it into {target} according to the following requirements: \n' + \
                 '1. first read through the whole text, determine the type of text content and select the appropriate translation style before starting the translation; \n' + \
                 '2. use the homophonic translation for names of people and places consistently; \n' + \
                 '3. polish translation results to make them accurate and natural; \n' + \
                 '4. do not change or convert punctuation marks; \n' + \
                 f'5. {format_requirement}, which is an example of the format: \n' + \
                 'Me: {"1": "Contents of line 1", "2": "Contents of line 2", "3": "Contents of line 3"} \n' + \
                 'You: {"1": "Translation result for line 1", "2": "Translation result for line 2", "3": "Translation result for line 3"}'
    prompt_user = f'Next you will receive the text that needs to be translated into {target}. \n' + \
                  f'{js}'
    chat_completion = client.with_options(timeout=self.timeout, max_retries=2).chat.completions.create(
        messages=[
            {
                "role": "system",
                "content": prompt_sys,
            },
            {
                "role": "user",
                "content": prompt_user,
            }
        ],
        model=self.model,
        # response_format={"type": "json_object"},
    )
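For reference, a minimal self-contained sketch of the split payload is below. Variable names (`source`, `target`, `js`) mirror the snippets above, but their sample values are hypothetical, and the actual API call is omitted: the model's reply is mocked so the JSON round-trip can be shown without a key.

```python
import json

# Hypothetical sample inputs, mirroring the variable names in the snippets above.
source, target = 'English', 'Chinese'
js = json.dumps({"1": "Hello", "2": "Goodbye"})

# The translation instructions live in the system message (abbreviated here);
# only the payload to translate goes in the user message.
prompt_sys = (
    'You are a translation API that receives dictionary-type data in JSON format '
    'and returns dictionary-type results in JSON format. '
    f'You will receive a piece of {source} text in JSON dictionary format, '
    'where the key is the line number and the value is the content of the '
    f'corresponding line. Please translate it into {target}.'
)
prompt_user = f'Next you will receive the text that needs to be translated into {target}. \n{js}'

messages = [
    {"role": "system", "content": prompt_sys},
    {"role": "user", "content": prompt_user},
]

# Mock of what the model would return; with response_format={"type": "json_object"}
# the reply content can be parsed directly with json.loads.
mock_reply = '{"1": "你好", "2": "再见"}'
translated = json.loads(mock_reply)
```

Keeping the game text out of the system message also means later batches can reuse the same `prompt_sys` while only `prompt_user` changes.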

Adjusted, thanks for the suggestion.