Langroid
is an intuitive, lightweight, extensible and principled
Python framework to easily build LLM-powered applications.
You set up Agents, equip them with optional components (LLM,
vector-store and methods), assign them tasks, and have them
collaboratively solve a problem by exchanging messages.
This Multi-Agent paradigm is inspired by the
Actor Framework
(but you do not need to know anything about this!).
Langroid is a fresh take on LLM app-development, where considerable thought has gone
into simplifying the developer experience. It does not use Langchain or Llama-Index.
We welcome contributions -- See the contributions document for ideas on what to contribute.
Questions, Feedback, Ideas? Join us on Discord!
🔥 Updates/Releases
- Sep 2023:
  - Use with local Llama models: see tutorial here.
  - Langroid Blog/Newsletter launched! First post is here -- please subscribe to stay updated.
  - 0.1.56: Support Azure OpenAI.
  - 0.1.55: Improved `SQLChatAgent` that efficiently retrieves relevant schema info when translating natural language to SQL.
- Aug 2023:
  - Hierarchical computation example using Langroid agents and task orchestration.
  - 0.1.51: Support for global state; see test_global_state.py.
  - 🐳 Langroid Docker image available; see instructions below.
  - RecipientTool enables (and enforces) the LLM to specify an intended recipient when talking to 2 or more agents. See this test for example usage.
  - Example: Answer questions using Google Search + vecdb-retrieval from URL contents.
  - 0.1.39: `GoogleSearchTool` to enable Agents (their LLM) to do Google searches via function-calling/tools. See this chat example for how easy it is to add this tool to an agent.
  - Colab notebook to try the quick-start examples.
  - 0.1.37: Added `SQLChatAgent` -- thanks to our latest contributor Rithwik Babu!
  - Multi-agent Example: Autocorrect chat.
- July 2023:
  - 0.1.30: Added `TableChatAgent` to chat with tabular datasets (dataframes, files, URLs): LLM generates Pandas code, and the code is executed using Langroid's tool/function-call mechanism.
  - Demo: 3-agent system for Audience Targeting.
  - 0.1.27: Added support for Momento Serverless Cache as an alternative to Redis.
  - 0.1.24: `DocChatAgent` now accepts PDF files or URLs.
Suppose you want to extract structured information about the key terms of a commercial lease document. You can easily do this with Langroid using a two-agent system, as we show in the langroid-examples repo. The demo showcases just a few of the many features of Langroid, such as:
- Multi-agent collaboration: `LeaseExtractor` is in charge of the task, and its LLM (GPT4) generates questions to be answered by the `DocAgent`.
- Retrieval-augmented question-answering, with source citation: the `DocAgent` LLM (GPT4) uses retrieval from a vector-store to answer the `LeaseExtractor`'s questions, citing the specific excerpt supporting each answer.
- Function-calling (also known as tool/plugin): when it has all the information it needs, the `LeaseExtractor` LLM presents the information in a structured format using a function-call.
- Agents as first-class citizens: The Agent class encapsulates LLM conversation state, and optionally a vector-store and tools. Agents are a core abstraction in Langroid; Agents act as message transformers, and by default provide 3 responder methods, one corresponding to each entity: LLM, Agent, User.
- Tasks: A Task class wraps an Agent, gives the agent instructions (or roles, or goals), manages iteration over an Agent's responder methods, and orchestrates multi-agent interactions via hierarchical, recursive task-delegation. The `Task.run()` method has the same type-signature as an Agent's responder methods, and this is key to how a task of an agent can delegate to other sub-tasks: from the point of view of a Task, sub-tasks are simply additional responders, to be used in a round-robin fashion after the agent's own responders.
- Modularity, Reusability, Loose coupling: The `Agent` and `Task` abstractions allow users to design Agents with specific skills, wrap them in Tasks, and combine tasks in a flexible way.
- LLM Support: Langroid supports OpenAI LLMs, including GPT-3.5-Turbo and GPT-4.
- Caching of LLM responses: Langroid supports Redis and Momento to cache LLM responses.
- Vector-stores: Qdrant and Chroma are currently supported. Vector stores allow for Retrieval-Augmented-Generation (RAG).
- Grounding and source-citation: Access to external documents via vector-stores allows for grounding and source-citation.
- Observability, Logging, Lineage: Langroid generates detailed logs of multi-agent interactions and maintains provenance/lineage of messages, so that you can trace back the origin of a message.
- Tools/Plugins/Function-calling: Langroid supports OpenAI's recently released function calling feature. In addition, Langroid has its own native equivalent, which we call tools (also known as "plugins" in other contexts). Function calling and tools have the same developer-facing interface, implemented using Pydantic, which makes it very easy to define tools/functions and enable agents to use them. Benefits of using Pydantic are that you never have to write complex JSON specs for function calling, and when the LLM hallucinates malformed JSON, the Pydantic error message is sent back to the LLM so it can fix it!
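To give a concrete flavor of this mechanism, here is a minimal sketch using plain Pydantic (the `CityTemp` model and its fields are made up for illustration); the validation error printed at the end is the kind of message Langroid relays back to the LLM:

```python
from pydantic import BaseModel, ValidationError

class CityTemp(BaseModel):
    # hypothetical tool-argument schema: no hand-written JSON spec needed
    city: str
    temp_c: float

try:
    # simulate an LLM hallucinating a malformed argument value
    CityTemp.parse_raw('{"city": "Paris", "temp_c": "warm"}')
except ValidationError as e:
    # a (suitably sanitized) message like this is sent back to the LLM,
    # giving it a chance to fix its output
    print(e)
```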
🐳 For a simpler setup, see the Docker section below, which lets you get started just by setting up environment variables in a `.env` file.
Langroid requires Python 3.11+. We recommend using a virtual environment.
Use `pip` to install `langroid` (from PyPI) into your virtual environment:
pip install langroid
The core Langroid package lets you use OpenAI Embeddings models via their API.
If you instead want to use the `all-MiniLM-L6-v2` embeddings model from HuggingFace, install Langroid like this:
pip install langroid[hf-embeddings]
Note that this will install the `torch` and `sentence-transformers` libraries.
Optional Installs for using SQL Chat with a PostgreSQL DB
If you are using `SQLChatAgent` (e.g. the script `examples/data-qa/sql-chat/sql_chat.py`) with a PostgreSQL DB, you will need to:
- Install PostgreSQL dev libraries for your platform, e.g. `sudo apt-get install libpq-dev` on Ubuntu, `brew install postgresql` on Mac, etc.
- Install langroid with the postgres extra, e.g. `pip install langroid[postgres]` or `poetry add langroid[postgres]` or `poetry install -E postgres`. If this gives you an error, try `pip install psycopg2-binary` in your virtualenv.
To get started, all you need is an OpenAI API Key. If you don't have one, see this OpenAI Page. Currently only OpenAI models are supported. Others will be added later (Pull Requests welcome!).
In the root of the repo, copy the `.env-template` file to a new file `.env`:
cp .env-template .env
Then insert your OpenAI API Key. Your `.env` file should look like this:
OPENAI_API_KEY=your-key-here-without-quotes
Alternatively, you can set this as an environment variable in your shell (you will need to do this every time you open a new shell):
export OPENAI_API_KEY=your-key-here-without-quotes
Optional Setup Instructions (click to expand)
All of the following environment variable settings are optional, and some are only needed to use specific features (as noted below).
- Qdrant Vector Store API Key, URL. This is only required if you want to use Qdrant cloud. You can sign up for a free 1GB account at Qdrant cloud. If you skip setting these up, Langroid will use Qdrant in local-storage mode. Alternatively, Chroma is also currently supported; we use the local-storage version of Chroma, so there is no need for an API key. Langroid uses Qdrant by default.
- Redis Password, host, port: This is optional, and only needed to cache LLM API responses using Redis Cloud. Redis offers a free 30MB account, which is more than sufficient to try out Langroid and even beyond. If you don't set these up, Langroid will use a pure-Python in-memory Redis cache via the Fakeredis library.
- Momento Serverless Caching of LLM API responses (as an alternative to Redis). To use Momento instead of Redis:
  - enter your Momento Token in the `.env` file, as the value of `MOMENTO_AUTH_TOKEN` (see example file below);
  - in the `.env` file, set `CACHE_TYPE=momento` (instead of `CACHE_TYPE=redis`, which is the default).
- GitHub Personal Access Token (required for apps that need to analyze git repos; token-based API calls are less rate-limited). See this GitHub page.
- Google Custom Search API Credentials: Only needed to enable an Agent to use the `GoogleSearchTool`. To use Google Search as an LLM Tool/Plugin/function-call, you'll need to set up a Google API key, then set up a Google Custom Search Engine (CSE) and get the CSE ID. (Documentation for these can be challenging; we suggest asking GPT4 for a step-by-step guide.) After obtaining these credentials, store them as values of `GOOGLE_API_KEY` and `GOOGLE_CSE_ID` in your `.env` file. Full documentation on using this (and other such "stateless" tools) is coming soon, but in the meantime take a peek at this chat example, which shows how you can easily equip an Agent with a `GoogleSearchTool` (see also the sketch right after this list).
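For instance, once these credentials are in your `.env` file, equipping an agent could look like this minimal sketch (the import path follows the Langroid repo layout and may differ across versions; `agent` is assumed to be a `ChatAgent` you have already created, as in the examples further below):

```python
from langroid.agent.tools.google_search_tool import GoogleSearchTool

# enable the agent's LLM to generate Google Search tool-calls,
# and the agent to handle the results
agent.enable_message(GoogleSearchTool)
```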
If you add all of these optional variables, your `.env` file should look like this:
OPENAI_API_KEY=your-key-here-without-quotes
GITHUB_ACCESS_TOKEN=your-personal-access-token-no-quotes
CACHE_TYPE=redis # or momento
REDIS_PASSWORD=your-redis-password-no-quotes
REDIS_HOST=your-redis-hostname-no-quotes
REDIS_PORT=your-redis-port-no-quotes
MOMENTO_AUTH_TOKEN=your-momento-token-no-quotes # instead of REDIS* variables
QDRANT_API_KEY=your-key
QDRANT_API_URL=https://your.url.here:6333 # note port number must be included
GOOGLE_API_KEY=your-key
GOOGLE_CSE_ID=your-cse-id
Optional setup instructions for Microsoft Azure OpenAI (click to expand)
When using Azure OpenAI, additional environment variables are required in the `.env` file. This Microsoft Azure OpenAI page provides more information, and you can set each environment variable as follows:

- `AZURE_API_KEY`: from the value of `API_KEY`.
- `AZURE_OPENAI_API_BASE`: from the value of `ENDPOINT`; typically looks like `https://your.domain.azure.com`.
- `AZURE_OPENAI_API_VERSION`: you can use the default value in `.env-template`, and the latest version can be found here.
- `AZURE_OPENAI_DEPLOYMENT_NAME`: the name of the deployed model, which is defined by the user during the model setup.
- `AZURE_GPT_MODEL_NAME`: the GPT-3.5-Turbo or GPT-4 model name that you chose when you set up your Azure OpenAI account.
We provide a containerized version of the `langroid-examples` repository via this Docker Image. All you need to do is set up environment variables in the `.env` file. Please follow these steps to set up the container:
# get the .env file template from the `langroid` repo
wget -O .env https://raw.githubusercontent.com/langroid/langroid/main/.env-template
# Edit the .env file with your favorite editor (here nano),
# and add API keys as explained above
nano .env
# launch the container
docker run -it -v ./.env:/.env langroid/langroid
# Use this command to run any of the scripts in the `examples` directory
python examples/<Path/To/Example.py>
These are quick teasers to give a glimpse of what you can do with Langroid and how your code would look. For full working code, see the `langroid-examples` repository.
ℹ️ The various LLM prompts and instructions in Langroid have been tested to work well with GPT4. Switching to GPT3.5-Turbo is easy via a config flag (e.g., `cfg = OpenAIGPTConfig(chat_model=OpenAIChatModel.GPT3_5_TURBO)`), and may suffice for some applications, but in general you may see inferior results.
📖 Also see the
Getting Started Guide
for a detailed tutorial.
Click to expand any of the code examples below. All of these can be run in a Colab notebook:
Direct interaction with OpenAI LLM
from langroid.language_models.openai_gpt import (
OpenAIGPTConfig, OpenAIChatModel, OpenAIGPT,
)
from langroid.language_models.base import LLMMessage, Role
cfg = OpenAIGPTConfig(chat_model=OpenAIChatModel.GPT4)
mdl = OpenAIGPT(cfg)
messages = [
LLMMessage(content="You are a helpful assistant", role=Role.SYSTEM),
LLMMessage(content="What is the capital of Ontario?", role=Role.USER),
]
response = mdl.chat(messages, max_tokens=200)
print(response.message)
Define an agent, set up a task, and run it
from langroid.agent.chat_agent import ChatAgent, ChatAgentConfig
from langroid.agent.task import Task
from langroid.language_models.openai_gpt import OpenAIChatModel, OpenAIGPTConfig
config = ChatAgentConfig(
llm = OpenAIGPTConfig(
chat_model=OpenAIChatModel.GPT4,
),
vecdb=None, # no vector store
)
agent = ChatAgent(config)
# get response from agent's LLM, and put this in an interactive loop...
# answer = agent.llm_response("What is the capital of Ontario?")
# ... OR instead, set up a task (which has a built-in loop) and run it
task = Task(agent, name="Bot")
task.run() # ... a loop seeking response from LLM or User at each turn
Three communicating agents
A toy numbers game, where, given a number `n`:

- `repeater_agent`'s LLM simply returns `n`,
- `even_agent`'s LLM returns `n/2` if `n` is even, else says "DO-NOT-KNOW",
- `odd_agent`'s LLM returns `3*n+1` if `n` is odd, else says "DO-NOT-KNOW".
First define the 3 agents, and set up their tasks with instructions:
from langroid.utils.constants import NO_ANSWER
from langroid.agent.chat_agent import ChatAgent, ChatAgentConfig
from langroid.agent.task import Task
from langroid.language_models.openai_gpt import OpenAIChatModel, OpenAIGPTConfig
config = ChatAgentConfig(
llm = OpenAIGPTConfig(
chat_model=OpenAIChatModel.GPT4,
),
vecdb = None,
)
repeater_agent = ChatAgent(config)
repeater_task = Task(
repeater_agent,
name = "Repeater",
system_message="""
Your job is to repeat whatever number you receive.
""",
llm_delegate=True, # LLM takes charge of task
single_round=False,
)
even_agent = ChatAgent(config)
even_task = Task(
even_agent,
name = "EvenHandler",
system_message=f"""
You will be given a number.
If it is even, divide by 2 and say the result, nothing else.
If it is odd, say {NO_ANSWER}
""",
single_round=True, # task done after 1 step() with valid response
)
odd_agent = ChatAgent(config)
odd_task = Task(
odd_agent,
name = "OddHandler",
system_message=f"""
You will be given a number n.
If it is odd, return (n*3+1), say nothing else.
If it is even, say {NO_ANSWER}
""",
single_round=True, # task done after 1 step() with valid response
)
Then add the `even_task` and `odd_task` as sub-tasks of `repeater_task`, and run the `repeater_task`, kicking it off with a number as input:
repeater_task.add_sub_task([even_task, odd_task])
repeater_task.run("3")
Simple Tool/Function-calling example
Langroid leverages Pydantic to support OpenAI's function-calling API as well as its own native tools. The benefits are that you don't have to write any JSON to specify the schema, and if the LLM hallucinates malformed tool syntax, Langroid sends the Pydantic validation error (suitably sanitized) to the LLM so it can fix it!
Simple example: Say the agent has a secret list of numbers, and we want the LLM to find the smallest number in the list. We want to give the LLM a `probe` tool/function which takes a single number `n` as argument. The tool handler method in the agent returns how many numbers in its list are at most `n`.

First define the tool using Langroid's `ToolMessage` class:
from langroid.agent.tool_message import ToolMessage
class ProbeTool(ToolMessage):
request: str = "probe" # specifies which agent method handles this tool
purpose: str = """
To find how many numbers in my list are less than or equal to
the <number> you specify.
""" # description used to instruct the LLM on when/how to use the tool
number: int # required argument to the tool
Then define a `SpyGameAgent` as a subclass of `ChatAgent`, with a method `probe` that handles this tool:
from langroid.agent.chat_agent import ChatAgent, ChatAgentConfig
class SpyGameAgent(ChatAgent):
def __init__(self, config: ChatAgentConfig):
super().__init__(config)
self.numbers = [3, 4, 8, 11, 15, 25, 40, 80, 90]
def probe(self, msg: ProbeTool) -> str:
# return how many numbers in self.numbers are less or equal to msg.number
return str(len([n for n in self.numbers if n <= msg.number]))
We then instantiate the agent and enable it to use and respond to the tool:
from langroid.language_models.openai_gpt import OpenAIChatModel, OpenAIGPTConfig
spy_game_agent = SpyGameAgent(
ChatAgentConfig(
name="Spy",
llm = OpenAIGPTConfig(
chat_model=OpenAIChatModel.GPT4,
),
vecdb=None,
use_tools=False, # don't use Langroid native tool
use_functions_api=True, # use OpenAI function-call API
)
)
spy_game_agent.enable_message(ProbeTool)
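To see the tool in action, you could wrap the agent in a Task with instructions to play the game; a minimal sketch (this system message is our own paraphrase, not the one in the full script):

```python
from langroid.agent.task import Task

task = Task(
    spy_game_agent,
    name="SpyGame",
    system_message="""
    I have a secret list of numbers. Your job is to find the smallest
    number in my list. To do this you can use the `probe` tool/function,
    which tells you how many numbers in my list are at most the number
    you specify.
    """,
)
task.run()
```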
For a full working example see the `chat-agent-tool.py` script in the `langroid-examples` repo.
Tool/Function-calling to extract structured information from text
Suppose you want an agent to extract the key terms of a lease, from a lease document, as a nested JSON structure. First define the desired structure via Pydantic models:
from pydantic import BaseModel
class LeasePeriod(BaseModel):
start_date: str
end_date: str
class LeaseFinancials(BaseModel):
monthly_rent: str
deposit: str
class Lease(BaseModel):
period: LeasePeriod
financials: LeaseFinancials
address: str
Then define the `LeaseMessage` tool as a subclass of Langroid's `ToolMessage`. Note the tool has a required argument `terms` of type `Lease`:
class LeaseMessage(ToolMessage):
request: str = "lease_info"
purpose: str = """
Collect information about a Commercial Lease.
"""
terms: Lease
Then define a `LeaseExtractorAgent` with a method `lease_info` that handles this tool, instantiate the agent, and enable it to use and respond to this tool:
import json

class LeaseExtractorAgent(ChatAgent):
    def lease_info(self, message: LeaseMessage) -> str:
        print(
            f"""
            DONE! Successfully extracted Lease Info:
            {message.terms}
            """
        )
        return json.dumps(message.terms.dict())
lease_extractor_agent = LeaseExtractorAgent(
ChatAgentConfig(
llm=OpenAIGPTConfig(),
use_functions_api=False,
use_tools=True,
)
)
lease_extractor_agent.enable_message(LeaseMessage)
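With the agent set up, one way to exercise the tool is a single-agent task that is handed the lease text directly; a minimal sketch (the system message and input text here are illustrative, unlike the two-agent retrieval setup in the full example):

```python
from langroid.agent.task import Task

lease_task = Task(
    lease_extractor_agent,
    name="LeaseExtractor",
    system_message="""
    Extract the terms of the commercial lease shown to you, and present
    them using the `lease_info` function/tool.
    """,
)
lease_task.run(
    "Lease at 123 Main St: term 2021-04-01 to 2024-03-31, "
    "monthly rent $2000, deposit $4000."
)
```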
See the `chat_multi_extract.py` script in the `langroid-examples` repo for a full working example.
Chat with documents (file paths, URLs, etc)
Langroid provides a specialized agent class `DocChatAgent` for this purpose. It incorporates document sharding, embedding, storage in a vector-DB, and retrieval-augmented query-answer generation. Using this class to chat with a collection of documents is easy. First create a `DocChatAgentConfig` instance, with a `doc_paths` field that specifies the documents to chat with.
# import paths may vary slightly across Langroid versions
from langroid.agent.special.doc_chat_agent import DocChatAgent, DocChatAgentConfig
from langroid.language_models.openai_gpt import OpenAIChatModel, OpenAIGPTConfig
from langroid.vector_store.base import VectorStoreConfig

config = DocChatAgentConfig(
    doc_paths=[
        "https://en.wikipedia.org/wiki/Language_model",
        "https://en.wikipedia.org/wiki/N-gram_language_model",
        "/path/to/my/notes-on-language-models.txt",
    ],
    llm=OpenAIGPTConfig(
        chat_model=OpenAIChatModel.GPT4,
    ),
    vecdb=VectorStoreConfig(
        type="qdrant",
    ),
)
Then instantiate the `DocChatAgent` (this ingests the docs into the vector-store):
agent = DocChatAgent(config)
Then we can either ask the agent one-off questions,
agent.chat("What is a language model?")
or wrap it in a `Task` and run an interactive loop with the user:
from langroid.agent.task import Task
task = Task(agent)
task.run()
See full working scripts in the `docqa` folder of the `langroid-examples` repo.
🔥 Chat with tabular data (file paths, URLs, dataframes)
Using Langroid you can set up a `TableChatAgent` with a dataset (file path, URL or dataframe), and query it. The Agent's LLM generates Pandas code to answer the query, via function-calling (or tool/plugin), and the Agent's function-handling method executes the code and returns the answer.
Here is how you can do this:
from langroid.agent.special.table_chat_agent import TableChatAgent, TableChatAgentConfig
from langroid.agent.task import Task
from langroid.language_models.openai_gpt import OpenAIChatModel, OpenAIGPTConfig
Set up a `TableChatAgent` for a data file, URL or dataframe (ensure the data table has a header row; the delimiter/separator is auto-detected):
dataset = "https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv"
# or dataset = "/path/to/my/data.csv"
# or dataset = pd.read_csv("/path/to/my/data.csv")
agent = TableChatAgent(
config=TableChatAgentConfig(
data=dataset,
llm=OpenAIGPTConfig(
chat_model=OpenAIChatModel.GPT4,
),
)
)
Set up a task, and ask one-off questions like this:
task = Task(
agent,
name = "DataAssistant",
default_human_response="", # to avoid waiting for user input
)
result = task.run(
"What is the average alcohol content of wines with a quality rating above 7?",
turns=2 # return after user question, LLM fun-call/tool response, Agent code-exec result
)
print(result.content)
Alternatively, set up a task and run it in an interactive loop with the user:
task = Task(agent, name="DataAssistant")
task.run()
For a full working example see the `table_chat.py` script in the `langroid-examples` repo.
❤️ Thank you to our supporters
If you like this project, please give it a star ⭐ and 📢 spread the word in your network or social media:
Your support will help build Langroid's momentum and community.
- Prasad Chalasani (IIT BTech/CS, CMU PhD/ML; Independent ML Consultant)
- Somesh Jha (IIT BTech/CS, CMU PhD/CS; Professor of CS, U Wisc at Madison)