OSU-NLP-Group/HippoRAG

Passage NER exception

bupterlxp opened this issue · 7 comments

When I run this:
DATA=sample
LLM=qwen2:7b
SYNONYM_THRESH=0.8
GPUS=0
LLM_API=ollama
bash src/setup_hipporag_colbert.sh $DATA $LLM $GPUS $SYNONYM_THRESH $LLM_API
an error occurs:
[screenshot of the error]
Why is this happening, and how can I solve it?

Hello. Could you show how you added support for Qwen models? I think you may need to use langchain to add support for those models in the current HippoRAG framework: https://github.com/OSU-NLP-Group/HippoRAG/blob/main/src/langchain_util.py
Thanks!
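For reference, the dispatch in src/langchain_util.py could be extended along these lines. This is a minimal sketch, assuming langchain-community is installed and a local Ollama server is running with the model already pulled; the function name init_langchain_model is illustrative, not the repo's actual API.

```python
def init_langchain_model(llm_api: str, model_name: str):
    """Illustrative dispatcher: pick a langchain chat model by backend name."""
    if llm_api == 'ollama':
        # ChatOllama (from langchain-community) talks to a local Ollama
        # server, so `ollama serve` must be running and the model pulled
        # (e.g. `ollama run qwen2:7b`). Imported lazily so other backends
        # still work without the dependency installed.
        from langchain_community.chat_models import ChatOllama
        return ChatOllama(model=model_name, temperature=0)
    raise NotImplementedError(f"LLM API '{llm_api}' is not wired up in this sketch")
```

Called as init_langchain_model('ollama', 'qwen2:7b'), this would return a chat model object usable wherever the framework expects a langchain model.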

I pulled the qwen2:7b model by running "ollama run qwen2:7b" in Ollama. It looks like I only need to select ollama and enter the model name. How do I use langchain to add support for it?

Please reply in English so that all our maintainers and users can understand your issue.
We have not yet tested Ollama's models one by one.
To speed up finding this error, I suggest you locate the exception where the Passage NER step failed and print the specific exception information. The current error message makes it difficult for us to help you directly.
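One way to surface that specific exception is to wrap the NER call so the full traceback is printed instead of a generic message. A sketch, where ner_fn is a placeholder for whatever callable actually performs Passage NER:

```python
import traceback

def run_ner_verbose(ner_fn, passage):
    """Call a NER function and print the full traceback if it fails.

    ner_fn stands in for the Passage NER callable; returning None on
    failure mirrors the behaviour seen later in this thread, where a
    None value reaches the ColBERT step.
    """
    try:
        return ner_fn(passage)
    except Exception:
        traceback.print_exc()  # print the real exception, don't swallow it
        return None
```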

Thanks for your answer! That error no longer occurs, but a new one has appeared:
[screenshots of the new error]
I'm not sure whether this information is sufficient.

Are the contents of output/sample_queries.named_entity_output.tsv correct? I doubt that this step finished correctly.
From the second screenshot, you can see something is already wrong before ColBERT is called, because the value passed in is None.
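A quick way to check that file is to scan it for empty or None-like cells. This is a sketch, assuming a tab-separated file with one row per query; the exact column layout is an assumption:

```python
import csv

def find_bad_ner_rows(path):
    """Return 0-based indices of TSV rows with empty or 'None' cells."""
    bad = []
    with open(path, newline='', encoding='utf-8') as f:
        for i, row in enumerate(csv.reader(f, delimiter='\t')):
            if not row or any(cell.strip() in ('', 'None') for cell in row):
                bad.append(i)
    return bad
```

Any index this returns points to a query whose NER output never made it to disk correctly, which would explain a None reaching the ColBERT call.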

Hi, I also submitted a PR to support llama.cpp, which supports Qwen2 models as well. You can try it out after it's merged.
Ollama seems to require sudo privileges to install, whereas llama.cpp can be installed without sudo and supports many open-source models.

I'm hitting the same issue. Could you please share how you solved the "expected string or bytes-like object" error?
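That TypeError is what Python's re module raises when it is given None instead of a string, which typically happens when an LLM call fails and its raw output is passed straight into entity extraction. A hedged sketch of a guard; the function name and regex are illustrative, not HippoRAG's actual code:

```python
import re

def extract_quoted_entities(llm_output):
    """Pull double-quoted entity names out of raw LLM output.

    re.findall raises TypeError('expected string or bytes-like object')
    if llm_output is None, so guard first and treat a failed LLM call
    as 'no entities found' instead of crashing.
    """
    if not isinstance(llm_output, str):
        return []
    return re.findall(r'"([^"]+)"', llm_output)
```

If the guard fires often, the real fix is upstream: check why the LLM call is returning None in the first place.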