eastworld is an open-source, language-agnostic framework for adding Generative Agents to your video games, visual novels, and other forms of interactive media.
This framework has two goals:
- To abstract away the complexities of prompt-engineering detailed Agents and elaborate Storylines using an easy-to-use, no-code dashboard
- To enable a variety of user-agent interactions out of the box beyond just chat - Agent Actions, Emotion Queries, Player Guardrails, etc. - and expose them in a simple, small API
[Video: gameplay.mp4] A playable murder mystery game whose Agents were made with eastworld
See how you can add an agent to your game in ~5 minutes
- Agents can perform user-defined actions, not just chat:
- e.g. Player: "I'm going to attack you!" -> Agent: RunAway(speed=fast)
- includes guardrails to ask players to stay in character
- i.e. block players from jailbreaking or from anachronistic behaviour, like asking for a phone in a medieval game
- query an Agent's inner thoughts and emotions mid-conversation
- e.g. (to agent) "How suspicious are you that {player} suspects you as the murderer?" -> very
- can trigger events in your game based on this
- set manner of speech, dialect, and accents
- e.g. Peasant: "Just workin', yer Majesty. Fields ain't gonna plow 'emselves, are they?"
- selective memory to cut down on LLM inference costs
- i.e. vector-embedding-based retrieval of memories
- and more!
No-code tool to simplify Agent and Story prompt-engineering.
- construct characters' biographies, core beliefs, dialects, etc
- manage who knows which aspects of your world's Shared Lore to keep storylines consistent
- define Actions (function completions) that Agents can take
- use the chatbox with built-in debugging tools to quickly iterate on Agents
NOTE: not prod ready yet - lacks client authentication
- exposes OpenAPI spec so high quality clients can be autogenerated in any language
- blazing fast with FastAPI and async LLM completions
- supports local models out of the box with LocalAI
- simple deploy - only requires Redis
The framework and server require Python 3.10+, the PDM package manager, and Redis.
The Agent Studio tool requires Node 19+.
macOS:
brew install redis pdm node
If you later get SSL certificate issues with OpenAI, see this
Linux:
- Install Redis, if you don't already have it. Most distros ship it in their package manager.
- Install our package manager, PDM
- Install Node (see the example commands after this list)
Windows:
- Install Redis
- Install our package manager, PDM
- Install Node for Windows
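For reference, a typical Debian/Ubuntu setup might look like the commands below. These are only illustrative: package names differ between distros, your distro's Node may be older than the required 19+ (use nvm or NodeSource in that case), and pipx is just one convenient way to install PDM.
# Debian/Ubuntu example - adjust package names for your distro
sudo apt install redis-server nodejs npm
# keep PDM isolated from your system Python
pipx install pdm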
Enter the repo and run:
pdm install
Install the frontend tooling:
cd app && npm install
IMPORTANT: Copy the example configuration file to config.ini
In the main folder:
cp example_config.ini config.ini
(Easier) Setting up an OpenAI model:
In config.ini, make sure the following is set (especially the openai_api_key!):
[llm]
use_local_llm = false
openai_api_key = sk-my_openai_key
# Takes either {gpt-3.5-turbo, gpt-4} (or timestamped versions thereof)
# gpt-3.5-turbo is enough to produce very believable characters
# gpt-4 is amazing, but extremely expensive right now
chat_model = gpt-3.5-turbo
# 1536 is the embedding dimension of text-embedding-ada-002
embedding_size = 1536
(Harder) To connect to a locally running model, see below.
For the backend, in separate terminal windows, run:
redis-server
pdm run uvicorn server.main:app --reload
By default, the server runs on http://localhost:8000
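To confirm the backend came up correctly, you can hit the routes FastAPI serves by default:
# raw OpenAPI spec (the same file client generators consume)
curl http://localhost:8000/openapi.json
# interactive Swagger UI (use xdg-open or your browser on Linux)
open http://localhost:8000/docs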
For the Agent Studio tool:
cd app && npm start
This runs by default on http://localhost:3000
We have an example game that you can play to get your bearings and see what the framework is capable of.
A demo game is also included with the Agent Studio when you run it for the first time. Look through it and experiment with it to understand the framework.
We recommend watching this video to understand the Agent Studio workflow.
- Generate a client for your language. You can install the OpenAPI Generator or a language-specific generator (see the example command after this list).
- Point the client at your server (during development this should be http://localhost:8000).
- The core API consists of:
createSession() // call it to initiate an instance of the game
startChat() // starts a new chat and clears old conversation
chat() // Agent says something
interact() // Agent may chat or perform an Action
action() // ask Agent to perform an Action
query() // emotional queries into Agent's inner thoughts
guardrail() // make sure player respects tone/time period/etc of game
- Read the more detailed Swagger documentation at http://localhost:8000/docs#/Game%20Sessions. The Game Sessions API is what you need for your game.
- See Recipes for examples.
Have a request for one in particular? Ask in the Discord.
- we use prettier and eslint for app/
- we use ruff and the black formatter for Python code (example invocations below)
- if you change a Pydantic schema, you need to run cd app && npm run generate-client to reflect those changes in the frontend client
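Typical invocations might look like the following, assuming the tools are available via npm install and pdm install; check the repo's own configs and scripts for the canonical commands:
# frontend formatting and linting
cd app && npx prettier --write . && npx eslint .
# backend linting and formatting
pdm run ruff check .
pdm run black .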
Note that as of writing, agents are of much higher quality using GPT-3.5 or GPT-4 than any other model we tested.
- Install docker-compose (recommended) or docker
- Install LocalAI and follow the instructions
- You will need two models that are compatible with LocalAI. Most GGML models are compatible. If you want Agents to take actions, you need a function-calling compatible model
- you need a chat-tuned LLM - e.g. WizardLM 13b uncensored
- you need an embedding model - follow the guide to create a config
- NOTE: follow the instructions to set name: text-embedding-ada-002
- Change config.ini:
[llm]
use_local_llm = true
openai_api_key = dummy_value
# I'm jealous of people with enough compute to run local models!
chat_model = my_local_model_name
embedding_size = dims_of_my_embedding_model
- Restart the server to test it out!
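Because LocalAI speaks the OpenAI API, a quick sanity check before restarting eastworld is to list the models it serves (8080 is LocalAI's default port; adjust if you changed it). Both your chat model and the embedding model registered as text-embedding-ada-002 should appear:
curl http://localhost:8080/v1/models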
We used this generator for TypeScript.
// in app.tsx
import { OpenAPI } from "client";
...
OpenAPI.BASE = "http://localhost:8000";
// in interact.tsx
const sessUuid = await GameSessionsService.createSession(
  params.gameUuid!,
);
...
const emptyChat = { conversation: { correspondent: MyCharacter }, history: [] };
await GameSessionsService.startChat(
  sessionUuid!,
  params.agentUuid!,
  emptyChat,
);
...
const interact = await GameSessionsService.interact(
  sessionUuid!,
  params.agentUuid!,
  text,
);
if (isAction(interact)) {
  // Character.actions[...]()
} else {
  // render message
}
We used this generator for Python.
from game_client import Client
# `create` and `chat` below are operation modules produced by the generator;
# import them from wherever your generated package places them.

api_client = Client(base_url="http://localhost:8000")
# ...
session_uuid = create.sync(
    game_uuid=game_uuid,
    client=api_client,
)
# ...
response = chat.sync(
    session_uuid=session_uuid,
    client=api_client,
    agent="Agent Name",
    message=message,
)
# do something with response