This project uses Restack, the open-source framework for orchestrating AI agents at enterprise scale.
Restack has helped enterprise companies build AI agents at scale, where product teams often sit between domain experts (customer service, marketing, sales) and engineering teams.
- The challenge: product teams want to iterate quickly with domain experts to craft agent behavior and experience, but every change requires engineering coordination, creating bottlenecks that slow innovation.
- The Restack approach: give product teams full autonomy from engineering. Product collaborates directly with domain experts to refine agent behavior, while engineering focuses on what it does best: building reliable integrations with 99.99% SLAs.

Built on Python + Kubernetes because enterprises already run AI workloads this way. It works with your existing infrastructure and team expertise.
```bash
cp .env.example .env
```

- Set `OPENAI_API_KEY` with a valid OpenAI API key
- Set `RESTACK_ENGINE_MCP_ADDRESS` for the ngrok tunnel started with `ngrok http 112233`

```bash
pnpm localsetup
```

This installs dependencies, starts infrastructure (PostgreSQL, ClickHouse, Restack), runs migrations, and inserts demo data.

```bash
pnpm localdev
```

This starts infrastructure and all dev servers with hot reloading, without resetting your database.
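Before running `pnpm localsetup`, it can help to confirm the two required variables are actually set. The snippet below is a minimal sketch, not part of the boilerplate; it assumes a `.env` file in the repository root and the `python-dotenv` package installed.

```python
# check_env.py - hypothetical helper, not included in the boilerplate
import os

from dotenv import load_dotenv  # assumes `pip install python-dotenv`

REQUIRED = ["OPENAI_API_KEY", "RESTACK_ENGINE_MCP_ADDRESS"]

load_dotenv()  # reads .env from the current directory

missing = [name for name in REQUIRED if not os.getenv(name)]
if missing:
    raise SystemExit(f"Missing required variables: {', '.join(missing)}")
print("Environment looks good, run `pnpm localsetup` next.")
```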
Manual setup (if you prefer step-by-step)
```bash
# Install dependencies
pnpm install

# Start infrastructure (PostgreSQL, ClickHouse, Restack)
pnpm infra:start

# Wait for services to be ready, then run migrations
pnpm db:migrate

# Insert demo data
pnpm db:demo:insert

# Start all dev servers
pnpm dev
```

- Agent Orchestration: http://localhost:3000
- Developer Tracing: http://localhost:5233
- API: http://localhost:8000
- ClickHouse: http://localhost:8123 (metrics and analytics)
Performance tip: development mode uses hot reloading. For faster page loads, use `pnpm build && pnpm start` instead of `pnpm localdev`.
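Once the dev servers are up, you can sanity-check that each service listed above is responding. This is a minimal sketch using only the Python standard library; the URLs are the local defaults above, and the assumption that each root path answers a plain HTTP request (rather than a dedicated health endpoint) is mine.

```python
# check_services.py - hypothetical helper, not part of the boilerplate
from urllib.error import URLError
from urllib.request import urlopen

SERVICES = {
    "Agent Orchestration": "http://localhost:3000",
    "Developer Tracing": "http://localhost:5233",
    "API": "http://localhost:8000",
    "ClickHouse": "http://localhost:8123",
}

for name, url in SERVICES.items():
    try:
        with urlopen(url, timeout=5) as response:
            print(f"{name}: reachable (HTTP {response.status})")
    except URLError as error:
        print(f"{name}: not reachable ({error})")
```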
- Customer Support Agents (Zendesk, Intercom, Slack): Engineering connects support platforms. Product teams iterate on escalation rules, response tone, and handoff triggers.
- Product Intelligence Agents (PostHog, Linear, Slack): Engineering builds analytics and project management pipelines. Product teams adjust feature prioritization logic and user feedback analysis.
- DevOps Monitoring Agents (Sentry, Datadog, Kubernetes, GitHub, OpenAI Codex): Engineering integrates monitoring and development tools. Product teams define alert thresholds, incident response workflows, and automated troubleshooting.
- Performance Marketing Agents (Google Ads, Facebook Ads, PostHog, Slack): Engineering establishes advertising and analytics connections. Product teams optimize campaign strategies, bidding algorithms, and performance reporting.
- Sales Intelligence Agents (Salesforce, HubSpot, Slack): Engineering connects CRM and communication platforms. Product teams refine lead scoring, follow-up sequences, and sales forecasting models.
- Product interface: web-based agent management with version control, testing playground, and deployment controls. Product teams change agent behavior without code dependencies.
- Engineering infrastructure: Python-based integration layer with Temporal workflow orchestration, deployed on Kubernetes with enterprise-grade reliability and observability.
- Integration protocol: the Model Context Protocol automatically exposes Python functions as agent tools, enabling seamless tool discovery and use across agent workflows.
```
┌─────────────────────────────────────┐
│          Agent Orchestration        │
│  ┌─────────────┐   ┌─────────────┐  │      ┌─────────────────┐
│  │  Frontend   │◄──│   Backend   │◄─┼──────┤   MCP Server    │
│  │  (Next.js)  │   │ (Restack.py)│  │      │  (Integrations) │
│  └─────────────┘   └─────────────┘  │      └─────────────────┘
└─────────────────────────────────────┘               │
                                                      ▼
                                             ┌─────────────────┐
                                             │  External APIs  │
                                             │ (Zendesk, etc.) │
                                             └─────────────────┘
```
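The "automatically exposes Python functions as agent tools" part works because each function's Pydantic input model already carries a machine-readable schema. As a rough illustration only (not the MCP server's actual discovery code), the sketch below shows how a tool definition could be derived from a Pydantic model; the `SearchTicketsInput` name anticipates the Zendesk example later in this README.

```python
# Illustrative only: how a Pydantic model can become a tool schema.
from pydantic import BaseModel

class SearchTicketsInput(BaseModel):
    query: str

def to_tool_definition(name: str, description: str, model: type[BaseModel]) -> dict:
    """Build a JSON-schema tool description like an MCP server might advertise."""
    return {
        "name": name,
        "description": description,
        "inputSchema": model.model_json_schema(),  # Pydantic v2 API
    }

print(to_tool_definition(
    "search_zendesk_tickets",
    "Search Zendesk tickets by query",
    SearchTicketsInput,
))
```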
This boilerplate uses OpenAI's Responses API for tool execution; a sketch of how the pieces fit together follows the list below. You'll need:
- OpenAI API Key - Get one at platform.openai.com
- Public MCP URL - For OpenAI to call your local tools
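To make the relationship concrete, here is a minimal sketch (not the boilerplate's backend code) of pointing the Responses API at a remote MCP server. It assumes the official `openai` Python SDK and uses the public ngrok URL configured below (`RESTACK_ENGINE_MCP_ADDRESS`) as the server address.

```python
# Sketch only: calling the OpenAI Responses API with a remote MCP server.
import os

from openai import OpenAI  # assumes `pip install openai`

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-4.1",  # any Responses-capable model
    tools=[
        {
            "type": "mcp",
            "server_label": "restack-integrations",
            # the public URL exposed via ngrok (RESTACK_ENGINE_MCP_ADDRESS)
            "server_url": os.environ["RESTACK_ENGINE_MCP_ADDRESS"],
            "require_approval": "never",
        }
    ],
    input="Customer can't log in to the mobile app. Check recent tickets.",
)
print(response.output_text)
```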
```bash
cp env.development.example .env

# Add your OpenAI API key:
OPENAI_API_KEY=sk-your-key-here
```

OpenAI needs a public URL to call your MCP server tools:
```bash
# Install ngrok (if not installed)
brew install ngrok  # or download from https://ngrok.com

# Expose MCP server
ngrok http 112233

# Add the ngrok URL to .env:
RESTACK_ENGINE_MCP_ADDRESS=https://your-ngrok-url.ngrok-free.app
```

- Open Agent Orchestration at http://localhost:3000
- Login with demo credentials: `demo@example.com` / `password`
- Navigate to "Tasks" and select any completed task to see agent conversations
- Go to "Agents" to see 5 pre-configured agents across different teams
- Select the "Customer Support" agent
- Click "Create Task" and describe an issue: "Customer can't log in to mobile app"
- Watch the agent analyze the problem and suggest solutions
- See how it uses tools like Zendesk and knowledge base
- Click "Edit Agent" to change instructions
- Try: "Always ask for the customer's device type before troubleshooting"
- Open the Playground to test your changes
- Send the same login issue and see the improved response
- Click "Publish Version" to make it live
This workflow demonstrates the product-engineering partnership: product teams can iterate on agent behavior without touching code.
Build integrations in the MCP server using Restack workflows with Pydantic types. Each integration needs both a function and a matching workflow definition to become an agent tool.
```python
# apps/mcp_server/src/functions/zendesk.py
from pydantic import BaseModel

class SearchTicketsInput(BaseModel):
    query: str

class TicketResult(BaseModel):
    id: str
    subject: str
    status: str

async def search_zendesk_tickets(input: SearchTicketsInput) -> list[TicketResult]:
    """Search Zendesk tickets by query"""
    # Mock implementation included for demo
    return [
        TicketResult(id="12345", subject="Login issues", status="open"),
        TicketResult(id="12346", subject="Mobile app crash", status="pending"),
    ]
```

```python
# apps/mcp_server/src/workflows/zendesk.py
from restack_ai import workflow

# Import the function and its Pydantic types from the functions module above
from src.functions.zendesk import SearchTicketsInput, TicketResult, search_zendesk_tickets

@workflow.defn(name="search_zendesk_tickets")
class SearchZendeskTicketsWorkflow:
    @workflow.run
    async def run(self, input: SearchTicketsInput) -> list[TicketResult]:
        return await workflow.step(search_zendesk_tickets, input)
```

- Create a function with Pydantic types in `apps/mcp_server/src/functions/`
- Create a matching workflow in `apps/mcp_server/src/workflows/`
- The MCP server auto-discovers workflows as agent tools
- Test in the playground, no restart needed
See apps/mcp_server/README.md for more integration examples.
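Before exercising a new integration through the playground, it can be handy to call the function directly. The following is a hypothetical smoke test, not shipped with the boilerplate; it assumes the module layout shown above and runs the mocked `search_zendesk_tickets` with plain `asyncio`.

```python
# test_zendesk_smoke.py - hypothetical, assumes the module layout shown above
import asyncio

from src.functions.zendesk import SearchTicketsInput, search_zendesk_tickets

async def main() -> None:
    # Call the function directly, bypassing the workflow and MCP layers
    results = await search_zendesk_tickets(SearchTicketsInput(query="login"))
    for ticket in results:
        print(ticket.id, ticket.subject, ticket.status)

if __name__ == "__main__":
    asyncio.run(main())
```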
Deploy on your own Kubernetes cluster:
```bash
# Add Restack Helm repository
helm repo add restack https://github.com/restackio/helm

# Deploy with your configuration
helm install restack restack/restack -f values.yaml
```

See Restack Helm Charts for full configuration options.
Fully managed infrastructure:
- Sign up at console.restack.io
- Deploy your agent workflows
- Connect your frontend to the managed backend
Frontend (Vercel)
- Connect your GitHub repo to Vercel
- Set build settings:
  - Root Directory: `apps/frontend`
  - Build Command: `turbo run build --filter=boilerplate-frontend`
Backend (Restack Cloud)
- Deploy backend and MCP server to Restack Cloud
- Update frontend environment variables to point to cloud endpoints
```bash
# Production with Docker Compose
pnpm prod:up

# Or build and run locally
pnpm build
pnpm start
```

```bash
# Quick commands
pnpm localsetup         # First time setup (install, infra, migrations, demo data)
pnpm localdev           # Start infrastructure + dev servers
pnpm dev                # Start all dev servers with hot reloading (infra must be running)

# Infrastructure management
pnpm infra:start        # Start infrastructure (PostgreSQL, ClickHouse, Restack)
pnpm infra:stop         # Stop infrastructure
pnpm infra:restart      # Restart infrastructure services
pnpm infra:logs         # View container logs
pnpm infra:ps           # Check service status
pnpm infra:reset        # Complete infrastructure reset (⚠️ destroys data)

# Database operations
pnpm db:migrate         # Run database migrations (uses localhost by default)
pnpm db:demo:insert     # Insert demo data (uses localhost by default)
pnpm postgres:connect   # Connect to PostgreSQL
pnpm clickhouse:connect # Connect to ClickHouse

# Production (self-hosted Docker)
pnpm build              # Build for production
pnpm prod:up            # Start production services
pnpm prod:down          # Stop production services
pnpm prod:logs          # View production logs
pnpm prod:restart       # Restart production services (backend, mcp, webhook)
pnpm prod:reset         # Full production reset (⚠️ destroys data)
```

Want to contribute or change the platform? See CONTRIBUTING.md for:
- Development setup with hot reloading
- Architecture deep-dive
- Testing and debugging
- Code contribution guidelines
Licensed under the Apache License, Version 2.0.