Exception: No existing llama_index.core.vector_stores
I've started receiving this error in version 76, and it has popped up again in 78. It happens any time the AI starts to look something up online.
Exception: No existing llama_index.core.vector_stores.simple found at /home/mike/.config/pygpt-net/idx/base/vector_store.json, skipping load.

Type: ValueError
Message: No existing llama_index.core.vector_stores.simple found at /home/mike/.config/pygpt-net/idx/base/vector_store.json, skipping load.

Traceback:

```
  File "/home/mike/.cache/pypoetry/virtualenvs/pygpt-net-sJoELmaO-py3.10/lib/python3.10/site-packages/llama_index/core/storage/storage_context.py", line 122, in from_defaults
    vector_stores = SimpleVectorStore.from_namespaced_persist_dir(
  File "/home/mike/.cache/pypoetry/virtualenvs/pygpt-net-sJoELmaO-py3.10/lib/python3.10/site-packages/llama_index/core/vector_stores/simple.py", line 160, in from_namespaced_persist_dir
    vector_stores[DEFAULT_VECTOR_STORE] = cls.from_persist_dir(
  File "/home/mike/.cache/pypoetry/virtualenvs/pygpt-net-sJoELmaO-py3.10/lib/python3.10/site-packages/llama_index/core/vector_stores/simple.py", line 125, in from_persist_dir
    return cls.from_persist_path(persist_path, fs=fs)
  File "/home/mike/.cache/pypoetry/virtualenvs/pygpt-net-sJoELmaO-py3.10/lib/python3.10/site-packages/llama_index/core/vector_stores/simple.py", line 305, in from_persist_path
    raise ValueError(
```
I'm on Fedora 40 x86_64. I run the app from a clone of the repository and use Poetry to set it up and run it.
If I'm understanding the documentation correctly, this vector store is internal, and nothing needs to be passed to it.
Did you change any parameters, such as the embedding model or other custom arguments, in Config -> Llama-index after creating the index? If so, try recreating the index by clearing the ~/.config/pygpt-net/idx/base directory (after you delete all of its files, or the entire base directory, the index should re-create itself).
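
A rough sketch of that cleanup in Python, assuming the default config location from your traceback (adjust the path if yours differs, and close the app first):

```python
# Rough sketch: clear the pygpt-net index so it re-creates itself.
# Assumes the location from the traceback above; adjust if needed.
# Close pygpt-net before running this.
import os
import shutil

idx_dir = os.path.expanduser("~/.config/pygpt-net/idx/base")

if os.path.isdir(idx_dir):
    shutil.rmtree(idx_dir)  # delete the stale index files
    print(f"Removed {idx_dir}; the index will be re-created on next use.")
else:
    print(f"Nothing to remove at {idx_dir}.")
```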
From the LlamaIndex documentation:
Important: if you had initialized your index with a custom transformations, embed_model, etc., you will need to pass in the same options during load_index_from_storage, or have it set as the global settings.
https://docs.llamaindex.ai/en/stable/understanding/storing/storing/
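
For reference, a rough sketch of what that means in practice. The embedding model below is only a placeholder assumption; it must match whatever the index was originally built with (and OpenAIEmbedding comes from the separate llama-index-embeddings-openai package):

```python
# Rough sketch: load a persisted index with the same options it was built with.
# The embedding model here is a placeholder; use whatever was configured
# when the index was created.
from llama_index.core import Settings, StorageContext, load_index_from_storage
from llama_index.embeddings.openai import OpenAIEmbedding

# Set the embed model globally so load_index_from_storage picks it up,
# matching the model used when the index was persisted.
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")

storage_context = StorageContext.from_defaults(
    persist_dir="/home/mike/.config/pygpt-net/idx/base"
)
index = load_index_from_storage(storage_context)
```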