Seer is a service that provides AI capabilities to Sentry by running inference on Sentry issues and providing user insights.
📣 Seer is currently in early development and not yet compatible with self-hosted Sentry instances. Stay tuned for updates!
These instructions require access to internal Sentry resources and are intended for internal Sentry employees.
- Install direnv or a similar tool
- Install pyenv and configure Python 3.11
- Install Docker
- Install Google Cloud SDK
- Clone the repository and navigate to the project root
- Run `direnv allow` to set up the Python environment
- Create a `.env` file based on `.env.example` and set the required values (see the sketch after this list)
- (Optional) Add `SENTRY_AUTH_TOKEN=<your token>` to your `.env` file
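As a rough illustration, a local `.env` might pull together the values mentioned elsewhere in this README. Treat this as a sketch rather than the required set; `.env.example` is the source of truth for which values you actually need:

```
# Hypothetical .env assembled from values mentioned in this README
SENTRY_AUTH_TOKEN=<your token>   # optional, token for your Sentry org
NO_SENTRY_INTEGRATION=1          # optional, skip the local Sentry integration
LANGFUSE_SECRET_KEY=...          # optional, only needed for Langfuse tracing
LANGFUSE_PUBLIC_KEY=...
LANGFUSE_HOST=...
```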
Download model artifacts:

```bash
gsutil cp -r gs://sentry-ml/seer/models ./models
```
- Start the development environment: `make dev`
- If you encounter database errors, run: `make update`
- Expose port 9091 in your local Sentry configuration
- Add the following to `~/.sentry/sentry.conf.py`:

  ```python
  SEER_RPC_SHARED_SECRET = ["seers-also-very-long-value-haha"]
  SENTRY_FEATURES['projects:ai-autofix'] = True
  SENTRY_FEATURES['organizations:issue-details-autofix-ui'] = True
  ```

- For local development, you may need to bypass certain checks in the Sentry codebase
- Restart both Sentry and Seer
Note: Set `NO_SENTRY_INTEGRATION=1` in `.env` to skip the local Sentry integration.
- Apply database migrations: `make update`
- Create new migrations: `make migration`
- Run the type checker: `make mypy`
- Run tests: `make test`
- Open a shell: `make shell`
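As a rough sketch of how these targets fit together (an assumed workflow, not one mandated by the Makefile), a change to a database model might go through:

```bash
# Hypothetical workflow after editing a database model
make migration   # generate a new migration for the model changes
make update      # apply migrations to the local database
make test        # run the test suite to confirm nothing regressed
```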
To start fresh:

```bash
docker compose down --volumes
make update && make dev
```
To enable Langfuse tracing, set these environment variables:

```
LANGFUSE_SECRET_KEY=...
LANGFUSE_PUBLIC_KEY=...
LANGFUSE_HOST=...
```
Autofix is an AI agent that identifies root causes of Sentry issues and suggests fixes.
Send a POST request to `/v1/automation/autofix/evaluations/start` with the following JSON body:

```
{
  "dataset_name": "string", // Name of the dataset to run on (currently only internal datasets are available)
  "run_name": "string", // Custom name for your evaluation run
  "run_description": "string", // Description of your evaluation run
  "run_type": "full | root_cause | execution", // Type of evaluation to perform
  "test": boolean, // Set to true to run on a single item (for testing)
  "random_for_test": boolean, // Set to true to use a random item when testing (requires "test": true)
  "run_on_item_id": "string", // Specific item ID to run on (optional)
  "n_runs_per_item": int // Number of runs to perform per item (optional, default 1)
}
```

Note: Currently, only internal datasets are available.
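For example, a single-item smoke test might be started with a request like the one below. This is only a sketch: the dataset and run names are placeholders, and it assumes Seer is reachable on `localhost:9091` (the port exposed above).

```bash
# Hypothetical example request; substitute your own dataset and run names.
curl -X POST http://localhost:9091/v1/automation/autofix/evaluations/start \
  -H "Content-Type: application/json" \
  -d '{
    "dataset_name": "example-internal-dataset",
    "run_name": "autofix-eval-smoke-test",
    "run_description": "Single-item smoke test of the root cause step",
    "run_type": "root_cause",
    "test": true,
    "random_for_test": true
  }'
```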