ICE is a Python library and trace visualizer for language model programs.
*Execution trace visualized in ICE*
- Run language model recipes in different modes: humans, human+LM, LM
- Inspect the execution traces in your browser for debugging
- Define and use new language model agents, e.g. chain-of-thought agents
- Run recipes quickly by parallelizing language model calls
- Reuse component recipes such as question-answering, ranking, and verification
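The parallelization point can be sketched with plain `asyncio`. This is an illustration only: `fake_lm_call` and `answer_all` are hypothetical names standing in for an agent's completion calls, not part of the ICE API.

```python
import asyncio

# Hypothetical stand-in for a language model call; a real recipe would
# await an agent's completion method instead.
async def fake_lm_call(prompt: str) -> str:
    await asyncio.sleep(0.01)  # simulate network latency
    return prompt.upper()

async def answer_all(prompts: list[str]) -> list[str]:
    # Issuing the calls concurrently, rather than one after another,
    # is what makes recipes with many LM calls fast.
    return await asyncio.gather(*(fake_lm_call(p) for p in prompts))

print(asyncio.run(answer_all(["what is a recipe?", "what is an agent?"])))
```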
- Install Docker Desktop.
- Clone the repository: `git clone https://github.com/oughtinc/ice.git && cd ice`
- Add required secrets to `.env`. See `.env.example` for the format.
- Start ICE in its own terminal and leave it running: `scripts/run-local.sh`
- Go through the Primer.
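For illustration, a `.env` file is a list of `KEY=value` lines. The key and value below are placeholders, not the actual required secrets; consult `.env.example` for the real list.

```shell
# .env — one KEY=value per line; this key is illustrative only.
OPENAI_API_KEY=sk-placeholder
```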
- Recipes are decompositions of a task into subtasks. The meaning of a recipe is: if a human executed these steps and did a good job at each workspace in isolation, the overall answer would be good. This decomposition may be informed by what we think ML can do at this point, but the recipe itself (as an abstraction) doesn't know about specific agents.
- Agents perform atomic subtasks of predefined shapes, like completion, scoring, or classification. Agents don't know which recipe is calling them. Agents don't maintain state between subtasks. Agents generally try to complete all subtasks they're asked to complete (however badly), but some will not have implementations for certain task types.
- The mode in which a recipe runs is a global setting that can affect every agent call; for instance, it determines whether subtasks go to humans or to automated agents. Recipes can also run with certain `RecipeSettings`, which can map a task type to a specific `agent_name`, modifying which agent is used for that specific type of task.
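A self-contained sketch of how these three ideas fit together: a recipe decomposes a task into subtasks, stateless agents perform them, and a task-type-to-agent mapping decides who handles each one. Every name here (`qa_recipe`, `LMAgent`, `HumanAgent`, the `SETTINGS` dict) is a hypothetical illustration, not the real ICE API.

```python
# Hypothetical sketch, not the ICE API.

class LMAgent:
    """Stub standing in for a language model agent."""

    def complete(self, prompt: str) -> str:
        # Stateless: the output depends only on the prompt.
        return f"[lm answer to: {prompt}]"

class HumanAgent:
    """Stub standing in for a human answering in a workspace."""

    def complete(self, prompt: str) -> str:
        return f"[human answer to: {prompt}]"

# In the spirit of RecipeSettings: map task types to agent names,
# falling back to the mode's default agent for everything else.
AGENTS = {"lm": LMAgent(), "human": HumanAgent()}
SETTINGS = {"verify": "human"}  # override: humans do verification
DEFAULT_AGENT_NAME = "lm"

def agent_for(task_type: str):
    return AGENTS[SETTINGS.get(task_type, DEFAULT_AGENT_NAME)]

def qa_recipe(question: str) -> str:
    """Recipe: draft an answer, then have the draft verified.

    The recipe only names task types; it doesn't know which kind of
    agent (human or LM) ends up performing each subtask.
    """
    draft = agent_for("draft").complete(question)
    verdict = agent_for("verify").complete(f"Check this answer: {draft}")
    return f"{draft} | verified: {verdict}"

print(qa_recipe("What is ICE?"))
```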
- Join the ICE Slack channel to collaborate with other people composing language model tasks. You can also use it to ask questions about using ICE.
- Watch the recording of Ought's Lab Meeting to understand the high-level goals for ICE, how it interacts with Ought's other work, and how it contributes to alignment research.
- Read the ICE announcement post for another introduction.
ICE is an open-source project by Ought. We're an applied ML lab building the AI research assistant Elicit.
We welcome community contributions:
- If you're a developer, you can dive into the codebase and help us fix bugs, improve code quality and performance, or add new features.
- If you're a language model researcher, you can help us add new agents or improve existing ones, and refine or create new recipes and recipe components.
For larger contributions, make an issue for discussion before submitting a PR.
And for even larger contributions, join us - we're hiring!
To release a new version of ICE, follow these steps:
- Update the version number in:
  - `docker-compose*.yml`
  - `pyproject.toml`
  - `package.json`
  - `scripts/run-local.sh`
- Regenerate the `poetry.lock` file: `docker compose exec ice poetry lock --no-update`
- Regenerate the `package-lock.json` file: `docker compose exec ice npm --prefix ui install --package-lock-only`
- Update `CHANGELOG.md`.
- Commit the changes.
- Tag the commit with the version number: `git tag <version>`
- Open a PR and verify that CI passes.
- Build and push the Docker images:

  ```shell
  # TODO: Script this, sharing code with scripts/run-local.sh.
  docker buildx bake -f docker-compose.yml -f docker-compose.build.yml --push
  docker buildx bake -f docker-compose.yml -f docker-compose.streamlit.yml -f docker-compose.build-streamlit.yml --push
  docker buildx bake -f docker-compose.yml -f docker-compose.torch.yml -f docker-compose.build-torch.yml --push
  ```
- Push the tag: `git push --tags`
- Merge the PR.