An open platform for operating large language models (LLMs) in production.
Fine-tune, serve, deploy, and monitor any LLMs with ease.
With OpenLLM, you can run inference on any open-source large language model (LLM), deploy to the cloud or on-premises, and build powerful AI apps.
- SOTA LLMs: built-in support for a wide range of open-source LLMs and model runtimes, including StableLM, Falcon, Dolly, Flan-T5, ChatGLM, StarCoder, and more.
- Flexible APIs: serve LLMs over a RESTful API or gRPC with one command; query via the Web UI, CLI, our Python/JavaScript clients, or any HTTP client.
- Freedom To Build: first-class support for LangChain and BentoML lets you easily create your own AI apps by composing LLMs with other models and services.
- Streamline Deployment: automatically generate Docker images for your LLM server, or deploy as a serverless endpoint via BentoCloud.
- Bring your own LLM: fine-tune any LLM to suit your needs with LLM.tuning(). (Coming soon)
To use OpenLLM, you need Python 3.8 (or newer) and pip installed on your system. We highly recommend using a virtual environment to prevent package conflicts.
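Before installing, you can confirm that your interpreter meets the version requirement. This minimal check uses only the Python standard library:

```python
import sys

# OpenLLM requires Python 3.8 or newer.
assert sys.version_info >= (3, 8), (
    f"Python 3.8+ required, found {sys.version.split()[0]}"
)
print("Python version OK:", sys.version.split()[0])
```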
You can install OpenLLM using pip as follows:
pip install openllm
To verify that it installed correctly, run:
$ openllm -h
Usage: openllm [OPTIONS] COMMAND [ARGS]...
[OpenLLM ASCII art banner]
An open platform for operating large language models in production.
Fine-tune, serve, deploy, and monitor any LLMs with ease.
To start an LLM server, use openllm start. For example, to start a dolly-v2 server, run:

openllm start dolly-v2
Following this, a Web UI will be accessible at http://0.0.0.0:3000, where you can experiment with the endpoints and sample input prompts.
OpenLLM provides a built-in Python client, allowing you to interact with the model. In a different terminal window or a Jupyter notebook, create a client to start interacting with the model:
>>> import openllm
>>> client = openllm.client.HTTPClient('http://localhost:3000')
>>> client.query('Explain to me the difference between "further" and "farther"')
You can also use the openllm query command to query the model from the terminal:
export OPENLLM_ENDPOINT=http://localhost:3000
openllm query 'Explain to me the difference between "further" and "farther"'
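Both the Python client and the CLI ultimately issue HTTP requests to the server. As a rough sketch of what such a request looks like (the /v1/generate path and the JSON payload schema are assumptions for illustration and may differ across OpenLLM versions), one can be built with only the standard library:

```python
import json
import os
import urllib.request

# Endpoint defaults to the local server started by `openllm start`.
endpoint = os.environ.get("OPENLLM_ENDPOINT", "http://localhost:3000")

# NOTE: path and payload shape are illustrative assumptions.
payload = json.dumps({"prompt": "What is a large language model?"}).encode()
request = urllib.request.Request(
    f"{endpoint}/v1/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
# With a running server, the request would be sent with:
# response = urllib.request.urlopen(request)
print(request.full_url)
```

Check http://0.0.0.0:3000/docs.json on a running server for the actual endpoint paths and schemas.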
Visit http://0.0.0.0:3000/docs.json for OpenLLM's API specification.
Users can also specify different variants of the model to be served by providing the --model-id argument, e.g.:
openllm start flan-t5 --model-id google/flan-t5-large
Use the openllm models command to see the list of models and their variants supported in OpenLLM.
The following models are currently supported in OpenLLM. By default, OpenLLM doesn't include dependencies to run all models. The extra model-specific dependencies can be installed with the instructions below:
| Model | CPU | GPU | Installation | Model IDs |
|---|---|---|---|---|
| flan-t5 | ✅ | ✅ | pip install "openllm[flan-t5]" | google/flan-t5-small, google/flan-t5-base, google/flan-t5-large, google/flan-t5-xl, google/flan-t5-xxl |
| dolly-v2 | ✅ | ✅ | pip install openllm | databricks/dolly-v2-3b, databricks/dolly-v2-7b, databricks/dolly-v2-12b |
| chatglm | ❌ | ✅ | pip install "openllm[chatglm]" | thudm/chatglm-6b, thudm/chatglm-6b-int8, thudm/chatglm-6b-int4 |
| starcoder | ❌ | ✅ | pip install "openllm[starcoder]" | bigcode/starcoder, bigcode/starcoderbase |
| falcon | ❌ | ✅ | pip install "openllm[falcon]" | tiiuae/falcon-7b, tiiuae/falcon-40b, tiiuae/falcon-7b-instruct, tiiuae/falcon-40b-instruct |
| stablelm | ✅ | ✅ | pip install openllm | stabilityai/stablelm-tuned-alpha-3b, stabilityai/stablelm-tuned-alpha-7b, stabilityai/stablelm-base-alpha-3b, stabilityai/stablelm-base-alpha-7b |
Different LLMs may have multiple runtime implementations. For instance, they might use PyTorch (pt), TensorFlow (tf), or Flax (flax).
If you wish to specify a particular runtime for a model, set the OPENLLM_{MODEL_NAME}_FRAMEWORK={runtime} environment variable before running openllm start.
For example, to use the TensorFlow (tf) implementation for the flan-t5 model, use the following command:
OPENLLM_FLAN_T5_FRAMEWORK=tf openllm start flan-t5
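The variable name is derived from the model name by upper-casing it and replacing hyphens with underscores. A hypothetical helper (not part of OpenLLM's API, shown only to illustrate the naming convention):

```python
def framework_env_var(model_name: str) -> str:
    """Build the OPENLLM_{MODEL_NAME}_FRAMEWORK variable name for a model."""
    return f"OPENLLM_{model_name.upper().replace('-', '_')}_FRAMEWORK"

print(framework_env_var("flan-t5"))   # OPENLLM_FLAN_T5_FRAMEWORK
print(framework_env_var("dolly-v2"))  # OPENLLM_DOLLY_V2_FRAMEWORK
```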
For GPU support on Flax, refer to JAX's installation guide to make sure you have JAX support for the corresponding CUDA version.
OpenLLM encourages contributions by welcoming users to incorporate their custom LLMs into the ecosystem. Check out Adding a New Model Guide to see how you can do it yourself.
OpenLLM is not just a standalone product; it's a building block designed to easily integrate with other powerful tools. We currently offer integration with BentoML and LangChain.
OpenLLM models can be integrated as a Runner in your BentoML service. These runners have a generate method that takes a string prompt and returns the corresponding output string. This allows you to plug and play any OpenLLM models with your existing ML workflow.
import bentoml
import openllm
from bentoml.io import Text  # IO descriptors for the API signature

model = "dolly-v2"

llm_config = openllm.AutoConfig.for_model(model)
llm_runner = openllm.Runner(model, llm_config=llm_config)

svc = bentoml.Service(name="llm-dolly-v2-service", runners=[llm_runner])

@svc.api(input=Text(), output=Text())
async def prompt(input_text: str) -> str:
    answer = await llm_runner.generate(input_text)
    return answer
In future LangChain releases, you'll be able to effortlessly invoke OpenLLM models, like so:
from langchain.llms import OpenLLM
llm = OpenLLM.for_model(model_name='flan-t5')
llm("What is the difference between a duck and a goose?")
If you have an OpenLLM server deployed elsewhere, you can connect to it by specifying its URL:
from langchain.llms import OpenLLM
llm = OpenLLM.for_model(server_url='http://localhost:8000', server_type='http')
llm("What is the difference between a duck and a goose?")
To deploy your LLMs into production:

1. Build a Bento. With OpenLLM, you can easily build a Bento for a specific model, such as dolly-v2, using the build command:

openllm build dolly-v2

A Bento, in BentoML, is the unit of distribution. It packages your program's source code, models, files, artifacts, and dependencies.

2. Containerize your Bento:

bentoml containerize <name:version>

BentoML offers a comprehensive set of options for deploying and hosting online ML services in production. To learn more, check out the Deploying a Bento guide.
OpenLLM collects usage data to enhance user experience and improve the product. We only report OpenLLM's internal API calls and ensure maximum privacy by excluding sensitive information. We will never collect user code, model data, or stack traces. For usage tracking, check out the code.
You can opt out of usage tracking by using the --do-not-track CLI option:
openllm [command] --do-not-track
Or by setting the environment variable OPENLLM_DO_NOT_TRACK=True:
export OPENLLM_DO_NOT_TRACK=True
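As an illustration of how such an opt-out flag is typically honored (a hypothetical sketch, not OpenLLM's actual implementation), a program can check the variable before emitting any telemetry:

```python
import os

def telemetry_enabled() -> bool:
    """Return False when the user has opted out via OPENLLM_DO_NOT_TRACK."""
    # Hypothetical check for illustration; treat "true"/"1" as opted out.
    return os.environ.get("OPENLLM_DO_NOT_TRACK", "").lower() not in ("true", "1")

os.environ["OPENLLM_DO_NOT_TRACK"] = "True"
print(telemetry_enabled())  # False
```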
Engage with like-minded individuals passionate about LLMs, AI, and more on our Discord!
OpenLLM is actively maintained by the BentoML team. Feel free to reach out and join us in our pursuit to make LLMs more accessible and easy to use. Join our Slack community!
We welcome contributions! If you're interested in enhancing OpenLLM's capabilities or have any questions, don't hesitate to reach out on our Discord channel.
Check out our Developer Guide if you wish to contribute to OpenLLM's codebase.