Aedial/novelai-api

Some questions I had:

Closed this issue · 9 comments

  • Why am I getting this error even though the generation is working as it should?

(Screenshot attached: Screenshot 2023-10-23 173424)

  • For the text adventure module, how can I control the 'do' or 'say' options? There seems to be no such parameter; there is only prompt.

  • Is there an example of managing context size that I can look at? I wanted to see how to delete context line by line. Currently my prompt is just plain text that I append to, as you suggested. Should I make it something like an array of sentences instead, so that I can remove entries by index?

  • How does the memory context that you provide in the NovelAI web app work? Is it any different?

So.

  1. This is about asyncio. You might be doing something wrong somewhere, or it could be normal if you get it when the program ends (a known asyncio issue).

  2. Do and Say are shortcuts for '>You ...' and '>You say "..."'. I don't really remember the exact behavior, but you can see it easily in the context window when using the website (right panel, Advanced, Current Context).
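Since the library only exposes a raw prompt, the Do/Say behavior described above can be approximated client-side. This is a sketch based solely on the description in this thread; the exact strings (spacing, punctuation) are an assumption and should be verified against the website's Current Context panel:

```python
def format_action(action: str, kind: str) -> str:
    """Format a player input roughly the way the web UI's buttons do.

    NOTE: the '>You ...' / '>You say "..."' prefixes are taken from the
    maintainer's comment above; check them against Current Context before
    relying on them.
    """
    if kind == "do":
        return f"\n>You {action}"
    if kind == "say":
        return f'\n>You say "{action}"'
    return f"\n{action}"  # "story" input: appended as-is
```

The returned string is then simply appended to the running prompt before generating.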

  3. Yes, you can have something like an array for Memory, one for Story, and one for AN. Then remove the first lines of Story until Memory + Story + AN fits in the context (max context size - generation size - 20 if generate_until_sentence is enabled). That's the easiest way of doing context building.
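A minimal sketch of that trimming loop, assuming a `tokenize` callable (e.g. the tokenizer bundled with novelai-api) and the budget formula above:

```python
def build_context(memory: str, story_lines: list, an: str,
                  max_context: int, generation_size: int,
                  generate_until_sentence: bool, tokenize) -> str:
    """Drop the oldest Story lines until Memory + Story + AN fits the budget.

    `tokenize` is any callable returning a token list; the budget formula
    follows the maintainer's comment above.
    """
    budget = max_context - generation_size
    if generate_until_sentence:
        budget -= 20

    def total_tokens(lines):
        text = "\n".join(p for p in (memory, *lines, an) if p)
        return len(tokenize(text))

    lines = list(story_lines)
    while lines and total_tokens(lines) > budget:
        lines.pop(0)  # remove the oldest Story line first
    return "\n".join(p for p in (memory, *lines, an) if p)
```

Retokenizing the whole context on every iteration is wasteful but simple; caching per-line token counts is an easy optimization.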

  4. On the website? The website handles the context building as a whole before sending it to the API (the API knows nothing about the story outside of the context), and its complexity is unfathomable. Most of that complexity comes from Lorebook handling: what to cut, when, and how. I tried to replicate this behavior (still WIP), but it's a daunting task and it will never be full-featured.

Thanks for answering. I am not getting good responses from the text adventure module, so I probably won't end up using it.

This issue is stale because it has been open for 30 days with no activity. It will be closed if no activity happens within the next 7 days.

Thank you for your response on this. I thought I could revive this thread with some follow-up.

So it is true that the API itself doesn't know anything about Memory, Author's Note, etc. It's just the web UI placing these text snippets at specific positions in the prompt, which is only one big string.

try to remove the first lines of Story until Memory + Story + AN fits in context (max context size - generation size - 20 if generate_until_sentence is enabled)

Right now I'm using api.high_level.generate where I pass the prompt string. I saw that the method also takes a token array.
So a good approach (in my case, I don't have an Author's Note) would be:

  • Tokenize the Memory as memory[int]
  • Tokenize the Story as story[int]
  • Remove items from the start of story[int] so that len(memory[int]) + len(story[int]) fits the budget (at most 8192 minus the generation size, if I have the 8192 token limit).

Is that approach correct?

Yes. I would even recommend trimming the story by paragraph instead of by sentence, word, or token, getting rid of the oldest first.
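For the token-array approach discussed above, here is a sketch that trims by pre-tokenized paragraph, oldest first. The paragraph splitting itself is assumed to happen before tokenization (e.g. splitting the story text on blank lines, then tokenizing each chunk):

```python
def trim_story(memory_tokens: list, paragraphs: list, limit: int = 8192) -> list:
    """Drop the oldest paragraphs (each a token list) until
    memory + story fits within `limit` tokens, then return the
    concatenated token array to pass to the generate call."""
    paras = list(paragraphs)
    while paras and len(memory_tokens) + sum(map(len, paras)) > limit:
        paras.pop(0)  # oldest paragraph first
    story: list = []
    for p in paras:
        story.extend(p)
    return memory_tokens + story
```

In practice `limit` should be the budget from earlier in the thread (max context size minus generation size, minus 20 with generate_until_sentence), not the full 8192.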

I think I can use that approach. I am currently just storing the string in a .txt file and reading from there before using it as prompt.

Do you know the specific place where the web UI inserts the memory string? I want to place the [Title: ; Tags: ; Genre ;] line that you usually put in memory without hampering the generation.

Also, is there a way to steer the responses into a specific structure? For example, if I want to generate an item, I want it to come out as "Name: '', class: '', description: ''". Or I want to generate a specific line like "Health: +20". I want to read these strings and update the database in my game (inventory, stats, etc.) accordingly.

Do you know the specific place where the web UI inserts the memory string? I want to place the [Title: ; Tags: ; Genre ;] line that you usually put in memory without hampering the generation.

Memory is at the very top, AN is 3 lines from the bottom, and the Lorebook depends on insertion order and position.
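A rough sketch of that layout, with the Lorebook omitted. The exact 3-line offset for the AN is taken from the comment above and is worth double-checking against the Current Context panel:

```python
def assemble_context(memory: str, story: str, an: str) -> str:
    """Place Memory at the very top and the Author's Note 3 lines
    from the bottom of the story, per the maintainer's description."""
    lines = story.split("\n")
    cut = max(len(lines) - 3, 0)  # AN goes above the last 3 story lines
    body = lines[:cut] + [an] + lines[cut:] if an else lines
    return "\n".join(([memory] if memory else []) + body)
```

So a [Title: ; Tags: ; Genre ;] line placed in Memory ends up as the first line of the context, well away from the recent story text the model continues from.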

Also, is there a way to steer the responses into a specific structure? For example, if I want to generate an item, I want it to come out as "Name: '', class: '', description: ''". Or I want to generate a specific line like "Health: +20". I want to read these strings and update the database in my game (inventory, stats, etc.) accordingly.

You could use a "generator": a few-shot example of what you want (there are examples in the official prompts). However, NAI is not very good at math or at keeping track of state such as stats. I would recommend handling that externally.
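One way to handle it externally, as suggested: parse structured lines such as "Health: +20" out of the model's output with a regex and apply them to your own game state. The stat-line format here is hypothetical, matching the example in the question:

```python
import re

# Matches lines like "Health: +20" or "Mana: -5" (hypothetical format).
STAT_RE = re.compile(r"^(?P<stat>\w+):\s*(?P<delta>[+-]\d+)\s*$")

def apply_stats(output: str, stats: dict) -> dict:
    """Scan generated text for stat-change lines and return an updated
    copy of the stats dict; all bookkeeping stays outside the model."""
    stats = dict(stats)
    for line in output.splitlines():
        m = STAT_RE.match(line.strip())
        if m:
            stats[m["stat"]] = stats.get(m["stat"], 0) + int(m["delta"])
    return stats
```

The same idea extends to "Name: '', class: '', description: ''" item blocks: a few-shot generator prompt nudges the model toward the format, and a parser like this validates and extracts it, discarding malformed output.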

This issue is stale because it has been open for 30 days with no activity. It will be closed if no activity happens within the next 7 days.

This issue was closed because it has been stale for 7 days with no activity. Reopen it if relevant or open a new issue, further discussion on closed issues might not be seen.