This repo contains resources, demos, and recipes for working with LLMs on OpenShift, using OpenShift AI or Open Data Hub.
The following Inference Servers for LLMs can be deployed standalone on OpenShift:
- vLLM: how to deploy vLLM, the "Easy, fast, and cheap LLM serving for everyone".
- Hugging Face TGI: how to deploy the Text Generation Inference server from Hugging Face.
- Caikit-TGIS-Serving (external): how to deploy the Caikit-TGIS-Serving stack from Open Data Hub.
- Ollama: how to deploy Ollama using CPU only for inference.
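Once deployed, these servers expose an HTTP endpoint you can call from any client; vLLM, for instance, serves an OpenAI-compatible API. A minimal sketch of building a completion request for such an endpoint (the route and model name below are placeholders, not values from this repo):

```python
import json

# Hypothetical values: replace with your OpenShift Route and deployed model.
VLLM_ROUTE = "https://vllm-demo.apps.example.com"
MODEL = "mistralai/Mistral-7B-Instruct-v0.2"

def build_completion_request(prompt, max_tokens=256, temperature=0.7):
    """Build the JSON body for an OpenAI-compatible /v1/completions endpoint."""
    return {
        "model": MODEL,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_completion_request("What is OpenShift?")
print(json.dumps(payload, indent=2))
# To send it, e.g.:
#   requests.post(f"{VLLM_ROUTE}/v1/completions", json=payload, timeout=60).json()
```

The same request shape works against any of the servers that implement the OpenAI completions API; the others (TGI, Caikit-TGIS, Ollama) each have their own request format, documented in their respective recipes.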
The following Runtimes can be imported into the Single-Model Serving stack of Open Data Hub or OpenShift AI.
The following Databases can be used as a Vector Store for Retrieval Augmented Generation (RAG) applications:
- Milvus: Full recipe to deploy the Milvus vector store, in standalone or cluster mode.
- PostgreSQL+pgvector: Full recipe to create an instance of PostgreSQL with the pgvector extension, making it usable as a vector store.
- Redis: Full recipe to deploy Redis, create a Cluster and a suitable Database for a Vector Store.
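Whichever store you pick, the RAG retrieval step is the same: embed the documents, store the vectors, and return the nearest neighbors to the query embedding at question time. A minimal, database-free sketch of that step using cosine similarity on toy vectors (real embeddings have hundreds of dimensions, and the store does the search for you):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, store, k=2):
    """Return the ids of the k documents whose vectors are closest to the query."""
    scored = sorted(store.items(),
                    key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy 3-dimensional "embeddings" keyed by document id.
store = {
    "doc-a": [1.0, 0.0, 0.0],
    "doc-b": [0.9, 0.1, 0.0],
    "doc-c": [0.0, 1.0, 0.0],
}
print(top_k([1.0, 0.05, 0.0], store))  # doc-a and doc-b are closest
```

Milvus, pgvector, and Redis all implement this nearest-neighbor search natively (with approximate indexes for scale), so application code only issues a query vector and gets back the matching documents.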
- Caikit: Basic example demonstrating how to work with Caikit+TGIS for LLM serving.
- LangChain examples: Various notebooks demonstrating how to work with LangChain. Examples are provided for different types of LLM servers (standalone or using the Single-Model Serving stack of Open Data Hub or OpenShift AI) and different vector databases.
- Langflow examples: Various examples demonstrating how to work with Langflow.
- UI examples: Various examples showing how to create and deploy a UI to interact with your LLM.
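A step common to all of these examples is assembling the final prompt: retrieved context is stitched into a template together with the user's question before the LLM call. A minimal sketch of that step (the template wording is illustrative, not taken from the notebooks):

```python
# Illustrative RAG prompt template; frameworks like LangChain provide
# equivalent PromptTemplate abstractions.
TEMPLATE = """Answer the question using only the context below.

Context:
{context}

Question: {question}
Answer:"""

def build_prompt(question, retrieved_chunks):
    """Join retrieved text chunks and fill the template with them."""
    context = "\n---\n".join(retrieved_chunks)
    return TEMPLATE.format(context=context, question=question)

prompt = build_prompt(
    "How do I deploy vLLM?",
    ["vLLM can be deployed standalone on OpenShift.",
     "A Single-Model Serving runtime is also available."],
)
print(prompt)
```

The resulting string is what gets sent to the inference server; the UI examples wrap this loop in a chat front end.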