| NPM Package | Get Started | Examples | Documentation | MLC LLM | Discord |
WebLLM is a modular and customizable JavaScript package that brings language model chats directly onto web browsers with hardware acceleration. Everything runs inside the browser with no server support and is accelerated with WebGPU.
WebLLM is fully compatible with the OpenAI API. That is, you can use the same OpenAI API on any open-source model locally, with functionalities including json-mode, function-calling, streaming, etc.
This opens up a lot of fun opportunities to build AI assistants for everyone and enables privacy while enjoying GPU acceleration.
Check out our demo webpage to try it out! You can use WebLLM as a base npm package and build your own web application on top of it by following the documentation and checking out Get Started. This project is a companion project of MLC LLM, which runs LLMs natively on iPhone and other native local environments.
WebLLM offers a minimalist and modular interface to access the chatbot in the browser. The WebLLM package itself does not come with UI, and is designed in a modular way to hook into any UI components. The following code snippet demonstrates a simple example that generates a streaming response on a webpage. You can check out examples/get-started to see the complete example.
import * as webllm from "@mlc-ai/web-llm";

async function main() {
  const initProgressCallback = (report: webllm.InitProgressReport) => {
    const label = document.getElementById("init-label");
    label.innerText = report.text;
  };
  const selectedModel = "Llama-3-8B-Instruct-q4f32_1";
  const engine: webllm.EngineInterface = await webllm.CreateEngine(
    selectedModel,
    /*engineConfig=*/ { initProgressCallback: initProgressCallback },
  );

  const reply0 = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Tell me about Pittsburgh." }],
  });
  console.log(reply0);

  console.log(await engine.runtimeStatsText());
}

main();
Note that if you need to separate the instantiation of webllm.Engine
from loading a model, you could substitute
const engine: webllm.EngineInterface = await webllm.CreateEngine(
  selectedModel,
  /*engineConfig=*/ { initProgressCallback: initProgressCallback },
);
with the equivalent
const engine: webllm.EngineInterface = new webllm.Engine();
engine.setInitProgressCallback(initProgressCallback);
await engine.reload(selectedModel, chatConfig, appConfig);
WebLLM comes with API support for WebWorker, so you can hook the generation process into a separate worker thread and keep the computation in the web worker from disrupting the UI.
We first create a worker script that creates an Engine and hooks it up to a handler that handles requests.
// worker.ts
import { EngineWorkerHandler, Engine } from "@mlc-ai/web-llm";

// Hook up an Engine to a worker handler
const engine = new Engine();
const handler = new EngineWorkerHandler(engine);
self.onmessage = (msg: MessageEvent) => {
  handler.onmessage(msg);
};
Then in the main logic, we create a WebWorkerEngine that implements the same EngineInterface. The rest of the logic remains the same.
// main.ts
import * as webllm from "@mlc-ai/web-llm";

async function main() {
  // Use a WebWorkerEngine instead of Engine here
  const engine: webllm.EngineInterface = await webllm.CreateWebWorkerEngine(
    /*worker=*/ new Worker(new URL("./worker.ts", import.meta.url), {
      type: "module",
    }),
    /*modelId=*/ selectedModel,
    /*engineConfig=*/ { initProgressCallback: initProgressCallback },
  );
  // everything else remains the same
}
You can find a complete chat app example in examples/simple-chat.
You can also find examples of building a Chrome extension with WebLLM in examples/chrome-extension and examples/chrome-extension-webgpu-service-worker. The latter leverages a service worker, so the extension is persistent in the background.
WebLLM is designed to be fully compatible with the OpenAI API. Thus, besides building a simple chatbot, you can also use the following functionalities with WebLLM:
- streaming: return output as chunks in real time in the form of an AsyncGenerator (see the sketch after this list).
- json-mode: efficiently ensure output is in JSON format; see OpenAI Reference for more.
- function-calling: function calling with fields tools and tool_choice.
- seed-to-reproduce: use seeding to ensure reproducible output with field seed.
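For example, streaming uses the same chat.completions.create call with stream: true. The following is a minimal sketch, assuming an engine created as in the Get Started example above; the prompt and seed value are illustrative:

// Sketch: stream a reply chunk by chunk, with an optional fixed seed.
const chunks = await engine.chat.completions.create({
  stream: true, // request chunked output as an AsyncGenerator
  seed: 42, // illustrative seed for reproducible output
  messages: [{ role: "user", content: "Write a haiku about WebGPU." }],
});
let reply = "";
for await (const chunk of chunks) {
  // Each chunk follows the OpenAI chunk format; delta carries the new text.
  reply += chunk.choices[0]?.delta?.content ?? "";
}
console.log(reply);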
We export all supported models in webllm.prebuiltAppConfig, where you can see the list of models that you can simply call const engine: webllm.EngineInterface = await webllm.CreateEngine(anyModel) with.
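As a quick way to see what is available at runtime, you can inspect prebuiltAppConfig directly. The sketch below assumes it exposes the same model_list shape as the appConfig example later in this document:

import * as webllm from "@mlc-ai/web-llm";

// Print the model_id of every prebuilt model (assumes prebuiltAppConfig
// contains a model_list array of records, as in the appConfig example below).
const modelIds = webllm.prebuiltAppConfig.model_list.map(
  (record) => record.model_id,
);
console.log(modelIds);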
Prebuilt models include:
- Llama-2
- Gemma
- Phi-1.5 and Phi-2
- Mistral-7B-Instruct
- OpenHermes-2.5-Mistral-7B
- NeuralHermes-2.5-Mistral-7B
- TinyLlama
- RedPajama
Alternatively, you can compile your own model and weights as described below.
WebLLM works as a companion project of MLC LLM. It reuses the model artifacts and build flow of MLC LLM; please check out the MLC LLM documentation on how to add new model weights and libraries to WebLLM.
Here, we go over the high-level idea. There are two elements of the WebLLM package that enable new models and weight variants.
- model_url: Contains a URL to model artifacts, such as weights and metadata.
- model_lib_url: A URL to the WebAssembly library (i.e., the wasm file) that contains the executables to accelerate the model computations.
Both are customizable in WebLLM.
async function main() {
  const appConfig = {
    "model_list": [
      {
        "model_url": "/url/to/my/llama",
        "model_id": "MyLlama-3b-v1-q4f32_0",
        "model_lib_url": "/url/to/myllama3b.wasm",
      },
    ],
  };
  // override default chat options
  const chatOpts = {
    "repetition_penalty": 1.01,
  };

  // Load the custom model with a chat option override and app config.
  // Under the hood, the engine loads the model weights from model_url
  // and caches them in the browser cache. It also loads the model library
  // from "/url/to/myllama3b.wasm", assuming that it is compatible with the
  // model weights.
  const engine = await webllm.CreateEngine(
    "MyLlama-3b-v1-q4f32_0",
    /*engineConfig=*/ { chatOpts: chatOpts, appConfig: appConfig },
  );
}
In many cases, we only want to supply the model weight variant, but not necessarily a new model (e.g. NeuralHermes-Mistral can reuse Mistral's model library). For examples of how a model library can be shared by different model variants, see prebuiltAppConfig.
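For illustration only, a hypothetical appConfig in which two weight variants keep separate model_url entries but share a single model_lib_url might look like this (all URLs and model ids below are placeholders, not real artifacts):

const appConfig = {
  "model_list": [
    {
      "model_url": "/url/to/mistral-7b-instruct-weights",
      "model_id": "Mistral-7B-Instruct-q4f32_1",
      "model_lib_url": "/url/to/shared/mistral-7b-q4f32_1.wasm",
    },
    {
      // A fine-tuned weight variant that reuses the same model library.
      "model_url": "/url/to/neuralhermes-2.5-mistral-7b-weights",
      "model_id": "NeuralHermes-2.5-Mistral-7B-q4f32_1",
      "model_lib_url": "/url/to/shared/mistral-7b-q4f32_1.wasm",
    },
  ],
};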
NOTE: you don't need to build from source yourself unless you would like to change the WebLLM package. To simply use the npm package, follow Get Started or any of the examples instead.
The WebLLM package is a web runtime designed for MLC LLM.
- Install all the prerequisites for compilation:
  - emscripten. It is an LLVM-based compiler that compiles C/C++ source code to WebAssembly.
    - Follow the installation instructions to install the latest emsdk.
    - Source emsdk_env.sh by source path/to/emsdk_env.sh, so that emcc is reachable from PATH and the command emcc works.
  - Install jekyll by following the official guides. It is the package we use for the website. This is not needed if you're using nextjs (see next-simple-chat in the examples).
  - Install jekyll-remote-theme with the command below. Try gem mirror if the install is blocked.
    gem install jekyll-remote-theme
  We can verify the successful installation by trying out emcc and jekyll in the terminal, respectively.
- Setup necessary environment
  Prepare all the necessary dependencies for the web build:
  ./scripts/prep_deps.sh
- Build WebLLM Package
  npm run build
- Validate some of the sub-packages
  You can then go to the subfolders in examples to validate some of the sub-packages. We use Parcel v2 for bundling, though Parcel sometimes does not track parent-directory changes well. When you make a change in the WebLLM package, try editing the package.json of the subfolder and saving it, which will trigger Parcel to rebuild.
- Demo page
- If you want to run LLMs in a native runtime, check out MLC-LLM
- You might also be interested in Web Stable Diffusion.
This project is initiated by members from CMU catalyst, UW SAMPL, SJTU, OctoML and the MLC community. We would love to continue developing and supporting the open-source ML community.
This project is only possible thanks to the shoulders of the open-source ecosystems that we stand on. We want to thank the Apache TVM community and developers of the TVM Unity effort. The open-source ML community members made these models publicly available, and the PyTorch and Hugging Face communities make them accessible. We would like to thank the teams behind Vicuna, SentencePiece, LLaMA, and Alpaca. We also would like to thank the WebAssembly, Emscripten, and WebGPU communities. Finally, thanks to Dawn and WebGPU developers.