Receiving ValidationError during Graph creation with llama3.1
Closed this issue · 4 comments
I used the same Jupyter notebook on my VM and on Google Colab. On both machines I received the same error while creating the graphs from the dummytext.txt file.
Error
ValidationError Traceback (most recent call last)
<ipython-input-16-5fb40075ac50> in <cell line: 8>()
6 llm_transformer = LLMGraphTransformer(llm=llm)
7
----> 8 graph_documents = llm_transformer.convert_to_graph_documents(documents)
4 frames
/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py in __init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 2 validation errors for Node
id
none is not an allowed value (type=type_error.none.not_allowed)
type
none is not an allowed value (type=type_error.none.not_allowed)
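The two validation errors say the LLM emitted a node whose `id` and `type` came back as null. This is not the real pydantic `Node` model from langchain, just a plain-Python stand-in to illustrate the same failure mode: both fields are required, so a `None` in either one aborts graph construction.

```python
# Plain-Python stand-in for the pydantic v1 Node model used by
# LLMGraphTransformer: both fields are required and may not be None,
# which is what "none is not an allowed value" complains about.
class Node:
    def __init__(self, id, type):
        missing = [name for name, value in (("id", id), ("type", type))
                   if value is None]
        if missing:
            raise ValueError(
                f"{len(missing)} validation errors for Node: {missing}")
        self.id = id
        self.type = type

Node(id="Alice", type="Person")   # a well-formed node passes validation
try:
    Node(id=None, type=None)      # what the 8B model's output produces
except ValueError as e:
    print(e)                      # → 2 validation errors for Node: ['id', 'type']
```

The real fix has to happen upstream: the model must be coaxed (or swapped) into always emitting non-null `id` and `type` fields.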
Code Section
import os

from langchain_community.chat_models import ChatOllama
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_openai import ChatOpenAI

llm_type = os.getenv("LLM_TYPE", "ollama")
if llm_type == "ollama":
    llm = ChatOllama(model="llama3.1:8b", temperature=0)
else:
    llm = ChatOpenAI(temperature=0, model="gpt-4o-mini")

llm_transformer = LLMGraphTransformer(llm=llm)
graph_documents = llm_transformer.convert_to_graph_documents(documents)
I could use some help at this point.
Thanks in advance!
It's probably because llama3.1:8b is not able to produce the graph in the format expected by Neo4j. For me this snippet works with GPT-4 Turbo.
So you could try a bigger local model such as llama3.1:70b, or use OpenAI models.
I could run this code under WSL2 (Ubuntu 22.04), but I changed it to the following to use Ollama only:
llm = ChatOllama(model="llama3.1", temperature=0)
graph_documents = LLMGraphTransformer(llm=llm).convert_to_graph_documents(documents)
Note that llm = ChatOllama(model="llama3.1", temperature=0)
will still use llama3.1:8b, as that is the default tag. It's possible that different runs produce different results.
I don't have a solution for this, sorry. It comes down to the model: the 8B version is not good enough to create the documents as expected. I sometimes hit the same issue, sometimes not. With more capable models I did not run into it.
The only solution I have is to run the 70B model or just an OpenAI model; gpt-4o-mini was able to create correct docs every time.
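Since the failure is intermittent (some runs of the 8B model produce valid nodes, some don't), one pragmatic workaround is to retry the conversion a few times and only give up if every attempt fails. This is a minimal sketch, not langchain code: `convert_with_retry` and `flaky_convert` are hypothetical names, and `ValueError` stands in for pydantic's `ValidationError` so the example is self-contained.

```python
# Hedged sketch: retry the graph conversion a few times, because the
# 8B model only sometimes emits nodes with a missing id/type.
def convert_with_retry(convert, documents, attempts=3, error=ValueError):
    """Call convert(documents), retrying on `error` up to `attempts` times.

    In the real code, `convert` would be
    llm_transformer.convert_to_graph_documents and `error` would be
    pydantic's ValidationError.
    """
    last_exc = None
    for _ in range(attempts):
        try:
            return convert(documents)
        except error as exc:
            last_exc = exc   # remember the failure and try again
    raise last_exc           # every attempt failed: re-raise the last error

# Toy converter that fails on the first call and succeeds on the second,
# mimicking the run-to-run variability described above.
calls = {"n": 0}
def flaky_convert(docs):
    calls["n"] += 1
    if calls["n"] < 2:
        raise ValueError("none is not an allowed value")
    return [f"graph_document({d})" for d in docs]

print(convert_with_retry(flaky_convert, ["dummytext"]))
# → ['graph_document(dummytext)']
```

This only papers over the problem, of course; with a model that reliably fills in `id` and `type` (70B, gpt-4o-mini), no retries are needed.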