AIaaS (AI as a Service) for everyone. Create agents (projects) and consume them using a simple REST API.
Demo: https://ai.ince.pt (username: `demo`, password: `demo`)
- Projects: There are multiple types of agents (projects), each with its own features: rag, ragsql, inference, and vision.
- Users: A user account is used for authentication and authorization (basic auth). Each user may have access to multiple projects.
- LLMs: Supports any public LLM supported by LlamaIndex and any local LLM supported by Ollama.
- VRAM: Automatic VRAM management. RestAI manages VRAM usage, automatically loading and unloading models as needed.
- API: The API is a first-class citizen of RestAI. All endpoints are documented using Swagger.
- Frontend: There is a frontend available at restai-frontend.
- Embeddings: You may use any embeddings model supported by LlamaIndex. Check the embeddings definition.
- Vectorstore: Two vector stores are supported: Chroma and Redis.
- Retrieval: Features an embeddings search and score evaluator, which lets you evaluate the quality of your embeddings and simulate the RAG retrieval step before calling the LLM. Reranking is also supported (ColBERT and LLM-based).
- Loaders: You may use any loader supported by LlamaIndex.
- Sandboxed mode: RAG agents (projects) have a "sandboxed" mode, in which a locked default answer is given when there are no embeddings matching the provided question. This is useful for chatbots, where you want to provide a default answer when the LLM doesn't know how to answer, reducing hallucination.
- Evaluation: You may evaluate your RAG agent using deepeval, via the `eval` property in the RAG endpoint.
- Connection: Supply a MySQL or PostgreSQL connection string and RestAI will automatically crawl the DB schema; using table and column names, it figures out how to translate the question into SQL and then writes a response.
- text2img: RestAI supports local Stable Diffusion and Dall-E. It features prompt boosting: an LLM is used internally to enrich the user's prompt with more detail.
- img2text: RestAI supports LLaVA and BakLLaVA by default.
- img2img: RestAI supports InstantID and Qwen-VL by default.
- You may use any LLM supported by Ollama and/or LlamaIndex.
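Since agents are consumed over a plain REST API with basic auth, a client call can be sketched as below. This is a minimal sketch only: the `/projects/<name>/question` route, the project name, and the payload shape are assumptions for illustration; check the Swagger docs of your RestAI instance for the actual endpoints.

```python
import base64
import json

# Minimal sketch of calling a RestAI RAG agent over its REST API.
# NOTE: the route "/projects/<name>/question", the project name, and the
# payload shape are hypothetical; consult your instance's Swagger docs.

BASE_URL = "https://ai.ince.pt"  # the demo instance mentioned above
PROJECT = "myproject"            # hypothetical project name


def basic_auth_header(user: str, password: str) -> str:
    """Build the HTTP Basic Auth header that RestAI user accounts use."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"


headers = {
    "Authorization": basic_auth_header("demo", "demo"),
    "Content-Type": "application/json",
}
body = json.dumps({"question": "What is RestAI?"})

# With the `requests` package installed, the call itself would look like:
#   requests.post(f"{BASE_URL}/projects/{PROJECT}/question",
#                 headers=headers, data=body)
print(headers["Authorization"])  # → Basic ZGVtbzpkZW1v
```

The same header works for every endpoint, since authentication is plain HTTP Basic Auth tied to the user accounts described above.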
- RestAI uses Poetry to manage dependencies. Install it with `pip install poetry`.
- Development: `make install`, then `make dev` (starts RestAI in development mode) and `make devfrontend` (starts RestAI's frontend in development mode).
- Production: `make install`, then `make prod`.
- Endpoints: All API endpoints are documented and available via Swagger.
- Source code at https://github.com/apocas/restai-frontend.
- `make install` automatically installs the frontend.
- Tests are implemented using pytest. Run them with `make test`.
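Tests collected by `make test` follow the standard pytest conventions. The sketch below is purely illustrative: `project_types` is a hypothetical helper, not part of RestAI's actual suite.

```python
# Illustrative pytest-style test, the kind `make test` would collect.
# `project_types` is a hypothetical helper used only for this example.

def project_types() -> set:
    """Return the agent (project) types this README documents."""
    return {"rag", "ragsql", "inference", "vision"}


def test_project_types_include_rag():
    # pytest discovers functions prefixed with `test_` and runs bare asserts.
    assert "rag" in project_types()


if __name__ == "__main__":
    test_project_types_include_rag()
    print("ok")
```

Saving such a file as `test_something.py` anywhere pytest looks is enough for it to be discovered; no registration or boilerplate is needed.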
Pedro Dias - @pedromdias
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at:
http://www.apache.org/licenses/LICENSE-2.0.html
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.