
Multi-modal Agent Starter

Open in a VS Code Dev Container

Create a cloud-hosted LLM Agent with custom personality, multi-modal tools, and memory.

This repository is designed to pair with this Agent Building Guidebook.

Getting Started

You can be up and running in under a minute. A full setup walk-through is here.

For localhost development with your own IDE:

Clone this repository, then set up a Python virtual environment with:

python3.8 -m venv .venv
source .venv/bin/activate
python3.8 -m pip install -r requirements.txt

To use a GitHub Dev Container in your browser:

Visit https://github.dev/steamship-core/multimodal-agent-starter, then click the "Cloud Container" icon at the lower left and re-open in a new Docker container.

To use a GitHub Dev Container on localhost, with Docker:

Just click here: Open in a VS Code Dev Container

Running your agent

A full guide to running is here.

With the proper Python environment set up and your STEAMSHIP_API_KEY environment variable set, just run:

PYTHONPATH=src python3.8 src/api.py
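If you want to confirm the key is visible before launching, a quick check from Python can help. This is a minimal sketch: it assumes the steamship package from requirements.txt is installed in the active virtual environment and that the client reads the key from the STEAMSHIP_API_KEY environment variable.

import os
from steamship import Steamship

# Quick sanity check before running src/api.py.
# Assumption: `steamship` (from requirements.txt) is installed in the active virtualenv.
if not os.environ.get("STEAMSHIP_API_KEY"):
    raise SystemExit("STEAMSHIP_API_KEY is not set")

client = Steamship()  # the client picks the key up from the environment
print("Steamship client created; key ends in", os.environ["STEAMSHIP_API_KEY"][-4:])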

Deploying your agent

A full guide to deploying is here.

This project can be deployed straight to the cloud. Simply type:

ship deploy

and follow the prompts.
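Once the deploy finishes, you can talk to the hosted agent from any Python environment with the Steamship client. The snippet below is a sketch, not the guide's exact flow: the package and instance handles are hypothetical placeholders (you choose the package handle during ship deploy), and it assumes the starter's default prompt endpoint is unchanged.

from steamship import Steamship

# Hypothetical handles: substitute the package handle you chose during `ship deploy`.
instance = Steamship.use("my-agent-package", instance_handle="my-agent-instance")

# Assumes the starter's default `prompt` endpoint.
response = instance.invoke("prompt", prompt="Hello! What can you do?")
print(response)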

What tools can I use with my agent?

Tools help your agent perform actions or fetch information from the outside world. The Steamship SDK includes a large set of multi-modal & memory-aware tools you can use right away.

Your starter project already has a few tools in src/example_tools.

You can also import more open-source tools from the Steamship SDK.
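For example, a web-search tool and an image-generation tool can be handed to a function-calling agent. The import paths and class names below (SearchTool, StableDiffusionTool, FunctionsBasedAgent, ChatOpenAI) are assumptions based on one layout of the SDK; check the version pinned in requirements.txt for the tools it actually ships.

from steamship import Steamship
from steamship.agents.functional import FunctionsBasedAgent
from steamship.agents.llms.openai import ChatOpenAI
from steamship.agents.tools.search.search import SearchTool
from steamship.agents.tools.image_generation.stable_diffusion import StableDiffusionTool

# Sketch: wire two SDK tools into a function-calling agent.
client = Steamship()  # in the starter this is normally self.client inside your AgentService
agent = FunctionsBasedAgent(
    llm=ChatOpenAI(client),
    tools=[SearchTool(), StableDiffusionTool()],
)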