Looking for the JS/TS library? Check out AgentsJS
We're partnering with OpenAI on a new MultimodalAgent API in the Agents framework. This class wraps OpenAI's Realtime API, abstracts away the raw wire protocol, and provides an ultra-low-latency WebRTC transport between GPT-4o and your users' devices. This is the same stack that powers Advanced Voice in the ChatGPT app.
- Try the Realtime API in our playground [code]
- Check out our guide to building your first app with this new API
The Agents framework allows you to build AI-driven server programs that can see, hear, and speak in realtime. Your agent connects to end-user devices through a LiveKit session. During that session, it can process text, audio, images, or video streamed from a user's device, have an AI model generate any combination of those same modalities as output, and stream the results back to the user.
- Plugins for popular LLMs, transcription and text-to-speech services, and RAG databases
- High-level abstractions for building voice agents or assistants with automatic turn detection, interruption handling, function calling, and transcriptions
- Compatible with LiveKit's telephony stack, allowing your agent to make calls to or receive calls from phones
- Integrated load balancing system that manages pools of agents with edge-based dispatch, monitoring, and transparent failover
- Running your agents is identical across localhost, self-hosted, and LiveKit Cloud environments
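In the framework, turn detection and interruption handling are handled for you by VAD plugins (e.g. livekit-plugins-silero, which uses a neural VAD). To illustrate the underlying idea, here is a deliberately simplified, stdlib-only sketch; the `EnergyTurnDetector` class, frame size, and thresholds are hypothetical stand-ins, not part of the framework's API:

```python
import math
from dataclasses import dataclass

@dataclass
class EnergyTurnDetector:
    """Toy end-of-turn detector (hypothetical, NOT the framework's VAD):
    a turn ends after `silence_frames` consecutive low-energy frames
    once speech has been heard."""
    threshold: float = 0.01   # RMS level above which a frame counts as speech
    silence_frames: int = 25  # ~0.5s of silence at 20ms frames
    _speaking: bool = False
    _quiet: int = 0

    def push_frame(self, samples: list[float]) -> bool:
        """Feed one frame of normalized samples; return True when the
        user's turn is judged complete."""
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        if rms >= self.threshold:
            self._speaking, self._quiet = True, 0
        elif self._speaking:
            self._quiet += 1
            if self._quiet >= self.silence_frames:
                self._speaking, self._quiet = False, 0
                return True
        return False

# Simulated stream: 10 loud frames of "speech", then 30 quiet frames.
det = EnergyTurnDetector()
frames = [[0.2] * 160] * 10 + [[0.0] * 160] * 30
events = [det.push_frame(f) for f in frames]
# The single end-of-turn event fires after 25 consecutive quiet frames.
```

A production agent would feed real audio frames from the LiveKit session into a neural VAD instead, which is far more robust to background noise than a fixed energy threshold.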
To install the core Agents library:
pip install livekit-agents
The framework includes a variety of plugins that make it easy to process streaming input or generate output. For example, there are plugins for converting text to speech or running inference with popular LLMs. Here's how you can install a plugin:
pip install livekit-plugins-openai
The following plugins are available today:
Plugin | Features |
---|---|
livekit-plugins-anthropic | LLM |
livekit-plugins-azure | STT, TTS |
livekit-plugins-deepgram | STT |
livekit-plugins-cartesia | TTS |
livekit-plugins-elevenlabs | TTS |
livekit-plugins-playht | TTS |
livekit-plugins-google | STT, TTS |
livekit-plugins-nltk | Utilities for working with text |
livekit-plugins-rag | Utilities for performing RAG |
livekit-plugins-openai | LLM, STT, TTS, Assistants API, Realtime API |
livekit-plugins-silero | VAD |
Documentation on the framework and how to use it can be found here
- A basic voice agent using a pipeline of STT, LLM, and TTS [demo | code]
- Voice agent using the new OpenAI Realtime API [demo | code]
- Super fast voice agent using Cerebras-hosted Llama 3.1 [demo | code]
- Voice agent using Cartesia's Sonic model [demo]
- Agent that looks up the current weather via function call [code]
- Voice agent that performs a RAG-based lookup [code]
- Video agent that publishes a stream of RGB frames [code]
- Transcription agent that generates text captions from a user's speech [code]
- A chat agent you can text that will respond with generated speech [code]
- Localhost multi-agent conference call [code]
- Moderation agent that uses Hive to detect spam/abusive video [code]
The Agents framework is under active development in a rapidly evolving field. We welcome and appreciate contributions of any kind, be it feedback, bugfixes, features, new plugins and tools, or better documentation. You can file issues in this repo, open a PR, or chat with us in LiveKit's Slack community.
LiveKit Ecosystem | |
---|---|
Realtime SDKs | Browser · iOS/macOS/visionOS · Android · Flutter · React Native · Rust · Node.js · Python · Unity · Unity (WebGL) |
Server APIs | Node.js · Golang · Ruby · Java/Kotlin · Python · Rust · PHP (community) |
UI Components | React · Android Compose · SwiftUI |
Agents Frameworks | Python · Node.js · Playground |
Services | LiveKit server · Egress · Ingress · SIP |
Resources | Docs · Example apps · Cloud · Self-hosting · CLI |