KeyError: 'add_automatic_return'
Closed this issue · 1 comment
Checking discussions database...
llama_model_load: loading model from './models/gpt4all-lora-quantized-ggml.bin' - please wait ...
llama_model_load: n_vocab = 32001
llama_model_load: n_ctx = 512
llama_model_load: n_embd = 4096
llama_model_load: n_mult = 256
llama_model_load: n_head = 32
llama_model_load: n_layer = 32
llama_model_load: n_rot = 128
llama_model_load: f16 = 2
llama_model_load: n_ff = 11008
llama_model_load: n_parts = 1
llama_model_load: type = 1
llama_model_load: ggml map size = 4017.70 MB
llama_model_load: ggml ctx size = 81.25 KB
llama_model_load: mem required = 5809.78 MB (+ 2052.00 MB per state)
llama_model_load: loading tensors from './models/gpt4all-lora-quantized-ggml.bin'
llama_model_load: model size = 4017.27 MB / num tensors = 291
llama_init_from_file: kv self size = 512.00 MB
Chatbot created successfully
* Serving Flask app 'GPT4All-WebUI'
* Debug mode: off
[2023-04-16 14:42:34,117] {_internal.py:224} INFO - WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
[2023-04-16 14:42:34,117] {_internal.py:224} INFO - * Running on http://localhost:9600
[2023-04-16 14:42:34,117] {_internal.py:224} INFO - Press CTRL+C to quit
Received message : hi
[2023-04-16 14:42:47,260] {_internal.py:224} INFO - 127.0.0.1 - - [16/Apr/2023 14:42:47] "POST /bot HTTP/1.1" 200 -
[2023-04-16 14:42:47,405] {_internal.py:224} ERROR - Error on request:
Traceback (most recent call last):
File "/mnt/d/WSL/NAT-WORKDIR/chatgpt/NAT_WORKOUT/GPT4ALL/gpt4all-ui/env/lib/python3.11/site-packages/werkzeug/serving.py", line 333, in run_wsgi
execute(self.server.app)
File "/mnt/d/WSL/NAT-WORKDIR/chatgpt/NAT_WORKOUT/GPT4ALL/gpt4all-ui/env/lib/python3.11/site-packages/werkzeug/serving.py", line 322, in execute
for data in application_iter:
File "/mnt/d/WSL/NAT-WORKDIR/chatgpt/NAT_WORKOUT/GPT4ALL/gpt4all-ui/env/lib/python3.11/site-packages/werkzeug/wsgi.py", line 500, in __next__
return self._next()
^^^^^^^^^^^^
File "/mnt/d/WSL/NAT-WORKDIR/chatgpt/NAT_WORKOUT/GPT4ALL/gpt4all-ui/env/lib/python3.11/site-packages/werkzeug/wrappers/response.py", line 50, in _iter_encoded
for item in iterable:
File "/mnt/d/WSL/NAT-WORKDIR/chatgpt/NAT_WORKOUT/GPT4ALL/gpt4all-ui/env/lib/python3.11/site-packages/flask/helpers.py", line 149, in generator
yield from gen
File "/mnt/d/WSL/NAT-WORKDIR/chatgpt/NAT_WORKOUT/GPT4ALL/gpt4all-ui/env/lib/python3.11/site-packages/flask/helpers.py", line 149, in generator
yield from gen
File "/mnt/d/WSL/NAT-WORKDIR/chatgpt/NAT_WORKOUT/GPT4ALL/gpt4all-ui/app.py", line 207, in parse_to_prompt_stream
self.discussion_messages = self.prepare_query(message_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/d/WSL/NAT-WORKDIR/chatgpt/NAT_WORKOUT/GPT4ALL/gpt4all-ui/pyGpt4All/api.py", line 125, in prepare_query
if self.personality["add_automatic_return"]:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: 'add_automatic_return'
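For context, the crash happens because `prepare_query` indexes the personality dict directly, and a personality config created before this key existed simply doesn't contain it. A minimal sketch of the failure mode, and of the defensive `dict.get()` lookup that would tolerate an older config (the `personality` contents here are stand-ins, not the project's actual config):

```python
# Stand-in for a personality dict loaded from an older config file
# that predates the "add_automatic_return" key.
personality = {"name": "gpt4all"}

# Direct indexing raises KeyError when the key is absent --
# this is exactly the crash shown in the traceback above:
try:
    flag = personality["add_automatic_return"]
except KeyError:
    flag = None  # reached, because the key is missing

# A defensive lookup with dict.get() falls back to a default
# instead of crashing when the config lacks the key:
flag = personality.get("add_automatic_return", False)
```

This only papers over the mismatch between the code and old config files; the actual fix, per the reply below the traceback, was to drop the key entirely.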
Please pull the repo.
I have completely removed this part. You can now specify a custom separator between AI messages and user messages.