RAG support
sak12009cb opened this issue · 1 comments
I would like to implement Retrieval Augmented Generation (RAG) using Llama deployed through Truss. I would also like to understand whether connecting to vector databases and providing the retrieved context to the LLM is supported.
Hi @sak12009cb -- you certainly can! The way to think about it: Truss lets you deploy a model (like Llama) and get back an API endpoint you can use for inference. You can then integrate that endpoint into your RAG workflow.
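For illustration, here's a minimal sketch of how a RAG pipeline might call a deployed model's endpoint. The URL, model ID, and input schema below are placeholders (check your model's page for the actual endpoint and expected payload), and the request is only built here, not sent:

```python
import json
import urllib.request

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build an HTTP POST request for a deployed model's predict endpoint.

    The URL and payload shape are placeholders -- your deployed model's
    exact endpoint and input schema may differ.
    """
    url = "https://model-XXXXXXX.api.baseten.co/production/predict"  # placeholder model ID
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Api-Key {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# In a RAG workflow you'd first retrieve context from your vector DB,
# prepend it to the user's question, then send the request, e.g.:
# with urllib.request.urlopen(build_request(API_KEY, augmented_prompt)) as resp:
#     answer = json.load(resp)
```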
This post doesn't cover the vector DB aspect, but it should get you started using LangChain with models on Baseten: https://www.baseten.co/blog/build-a-chatbot-with-llama-2-and-langchain/
Truss also supports writing arbitrary Python code, so you could certainly do parts of this (e.g., connecting to your vector DB) inside your Truss if you wanted to (see the docs for more on how to write Trusses: https://truss.baseten.co/learn/intro)
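To make that concrete, here's a sketch of what retrieval inside a Truss `model.py` could look like. The `load`/`predict` structure follows the Truss model interface; the "vector DB" is a toy in-memory store with a placeholder character-frequency embedding and cosine similarity, standing in for a real vector database client you'd connect to in `load()`:

```python
import math

class Model:
    """Sketch of a Truss model that retrieves context before building a prompt.

    The in-memory corpus and the trivial embedding below are placeholders --
    in practice you'd initialize your vector DB client in load() and query it
    in predict().
    """

    def __init__(self, **kwargs):
        self._docs = None

    def load(self):
        # Connect to your vector DB here; this toy version just embeds a
        # couple of documents in memory.
        corpus = [
            "Truss packages models for deployment.",
            "Llama is a family of open LLMs.",
        ]
        self._docs = [(text, self._embed(text)) for text in corpus]

    @staticmethod
    def _embed(text):
        # Placeholder embedding: character-frequency vector over a-z.
        vec = [0.0] * 26
        for ch in text.lower():
            if "a" <= ch <= "z":
                vec[ord(ch) - ord("a")] += 1.0
        return vec

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def predict(self, model_input):
        # Retrieve the most similar document and prepend it as context.
        query = model_input["query"]
        q = self._embed(query)
        context = max(self._docs, key=lambda d: self._cosine(q, d[1]))[0]
        return {"prompt": f"Context: {context}\n\nQuestion: {query}"}
```

From here, `predict` could pass the augmented prompt on to the LLM rather than returning it.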