keldenl/gpt-llama.cpp

run with llama_index

shengkaixuan opened this issue · 2 comments

It can run with langchain (support for it was added a couple of days ago), so I'm assuming llama_index should work too. Let me see if I can get it working.

Do you have an example available with llama_index embeddings? It would be of interest here too. Thanks!
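Not an official example, but since gpt-llama.cpp exposes an OpenAI-compatible API, a common pattern is to redirect the OpenAI client (which llama_index and langchain both use under the hood) to the local server via environment variables. This is only a sketch: the port, the model path, and the convention of passing the model path through the API key field are assumptions from my setup, so adjust them to yours.

```python
# Sketch (assumptions: port 443, model path, and the convention that
# gpt-llama.cpp reads the llama.cpp model path from the API key field).
import os

def openai_env_for_local_server(api_base: str, model_path: str) -> dict:
    """Environment variables that point OpenAI-based clients (llama_index,
    langchain) at a local OpenAI-compatible server instead of api.openai.com."""
    return {
        "OPENAI_API_BASE": api_base,          # base URL of the local server
        "OPENAI_API_KEY": model_path,         # assumption: model path as the key
    }

env = openai_env_for_local_server(
    "http://localhost:443/v1",                          # adjust to your port
    "../llama.cpp/models/7B/ggml-model-q4_0.bin",       # adjust to your model
)
os.environ.update(env)
# After this, constructing llama_index's default OpenAI embeddings / LLM
# should send requests to the local gpt-llama.cpp server.
```

Once the environment is set, llama_index can be used as in its normal quickstart; no llama_index-specific code changes should be needed if the embeddings endpoint is supported server-side.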