Aedial/novelai-api

how to make the generated text always end with a complete sentence?

cwyuu opened this issue · 3 comments

cwyuu commented

I find that when I generate text on the web page, the output always ends with a complete sentence, terminated with ".", but when I generate through the API it always ends with an incomplete sentence. I have set "global_settings.generate_until_sentence = True" in the API, because the request sent by the web page has it set to "True" while the API defaults to "False", but that didn't change anything. So I would like to know: how can I make the API generate text that always ends with a complete sentence? Can anyone answer my question? Thanks
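For reference, this is roughly what I changed; a minimal sketch, assuming the library's GlobalSettings object and import path:

```python
from novelai_api.GlobalSettings import GlobalSettings

global_settings = GlobalSettings()
# The web frontend sends this as True, but the API wrapper defaults it to False,
# so it has to be enabled explicitly before generating.
global_settings.generate_until_sentence = True
```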


Aedial commented

I see no bug here. Setting generate_until_sentence to True is reflected in the request sent to the server when the settings are fed to high_level.generate or high_level.generate_stream.
Note that it will only continue the sentence if a period is actually found within the 20 tokens after the end, which means it is affected by context, preset, biases, bans, etc.
If you see an issue, it is likely an incomplete copy of the settings on your side. For a deterministic comparison, set top_k to 1 on both; you should then see exactly the same content if both sets of settings match.
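As a minimal sketch of such a deterministic comparison (the login flow, model choice, and preset handling here are assumptions based on typical usage of the library, not something taken from this thread):

```python
import asyncio

from novelai_api import NovelAIAPI
from novelai_api.GlobalSettings import GlobalSettings
from novelai_api.Preset import Model, Preset


async def main():
    api = NovelAIAPI()
    await api.high_level.login("<username>", "<password>")  # credentials are placeholders

    model = Model.Euterpe                # assumed model; use whichever one you compare against
    preset = Preset.from_default(model)  # start from the model's default preset
    preset.top_k = 1                     # deterministic sampling, for comparing against the web UI

    global_settings = GlobalSettings()
    global_settings.generate_until_sentence = True  # the same flag discussed above

    result = await api.high_level.generate("Your prompt here.", model, preset, global_settings)
    print(result)  # raw response; see the library's examples for decoding the output


asyncio.run(main())
```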

cwyuu commented

> I see no bug here. Setting generate_until_sentence to True is reflected in the request sent to the server when the settings are fed to high_level.generate or high_level.generate_stream. Note that it will only continue the sentence if a period is actually found within the 20 tokens after the end, which means it is affected by context, preset, biases, bans, etc. If you see an issue, it is likely an incomplete copy of the settings on your side. For a deterministic comparison, set top_k to 1 on both; you should then see exactly the same content if both sets of settings match.

Thanks for the answer! My problem is solved. I double-checked the "preset" parameters in the API against the parameters in the web request and found a difference between them. It turned out that "repetition_penalty" was what was affecting the output; I used to think it had no effect. When I changed "repetition_penalty" from the default "2.25" to "1.148125", the output text ended with a complete sentence.

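In code, the override looks roughly like this; the attribute name is my assumption about the library's preset object, and 1.148125 is simply the value that worked for me:

```python
# Override the preset's repetition penalty before generating (attribute name assumed).
# 2.25 was the preset default mentioned above; 1.148125 is the value that produced
# complete sentences.
preset.repetition_penalty = 1.148125
```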

Aedial commented

> Thanks for the answer! My problem is solved. I double-checked the "preset" parameters in the API against the parameters in the web request and found a difference between them. It turned out that "repetition_penalty" was what was affecting the output; I used to think it had no effect. When I changed "repetition_penalty" from the default "2.25" to "1.148125", the output text ended with a complete sentence.

It seems there is an obscure scaling applied to the repetition penalty, adjusting the value as 0.525 * (X - 1) / 7 + 1 (where X is the previous repetition penalty; formula extracted from the minified JavaScript code). It seems kind of weird for it to exist frontend-side and not backend-side.
A fix is coming, along with new sanity tests that check compliance instead of just "nothing is broken".
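Written out as a small helper, the scaling connects the numbers in this thread; this is only an illustration of the formula quoted above, not the library's actual code, and the 2.975 input is inferred from the arithmetic rather than taken from the thread:

```python
def scale_rep_pen(x: float) -> float:
    # Frontend-side repetition penalty scaling, per the formula extracted from the minified JS.
    return 0.525 * (x - 1) / 7 + 1


# A frontend value of 2.975 maps exactly onto the 1.148125 that produced
# complete sentences in the comment above.
print(scale_rep_pen(2.975))  # 1.148125
```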