An orchestration platform for the development, production, and observation of data assets.
Dagster lets you define jobs in terms of the data flow between reusable, logical components, then test locally and run anywhere. With a unified view of jobs and the assets they produce, Dagster can schedule and orchestrate Pandas, Spark, SQL, or anything else that Python can invoke.
Dagster is designed for data platform engineers, data engineers, and full-stack data scientists. Building a data platform with Dagster makes your stakeholders more independent and your systems more robust. Developing data pipelines with Dagster makes testing easier and deploying faster.
With Dagster’s pluggable execution, the same computations can run in-process against your local file system or on a distributed work queue against your production data lake. You can set up Dagster’s web interface in a minute on your laptop, deploy it on-premises, or run it in any cloud.
Dagster models data dependencies between steps in your orchestration graph and handles passing data between them. Optional typing on inputs and outputs helps catch bugs early.
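For example, here is a minimal runnable sketch (the op and job names are illustrative) of a job whose ops pass typed data between steps; the annotations become Dagster types that are checked on every input and output at runtime:

```python
from typing import List

from dagster import job, op


@op
def extract() -> List[int]:
    # Stand-in for reading from a real source.
    return [1, 2, 3]


@op
def total(numbers: List[int]) -> int:
    # Dagster type-checks `numbers` against List[int] before this runs.
    return sum(numbers)


@job
def serial_pipeline():
    # Calling one op on another's output declares the data dependency;
    # Dagster handles passing the value between the steps.
    total(extract())
```

Running `serial_pipeline.execute_in_process()` executes the whole graph in a single process, which keeps local testing fast.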
Dagster’s Asset Manager tracks the data sets and ML models produced by your jobs, so you can understand how they were generated and trace issues when they don’t look how you expect.
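For illustration, an op can report the asset it produced by yielding a materialization event alongside its output; the asset key, description, and path below are invented:

```python
from dagster import AssetMaterialization, Output, op


@op
def build_training_set():
    path = "training_set.csv"  # hypothetical output location
    # ... write the data set to `path` here ...
    yield AssetMaterialization(
        asset_key="training_set",
        description="Features and labels for the nightly model run.",
    )
    yield Output(path)
```

Each materialization shows up in Dagit's asset catalog, linked to the run that produced it.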
Dagster helps platform teams build systems for data practitioners. Jobs are built from shared, reusable, configurable data processing and infrastructure components. Dagit, Dagster’s web interface, lets anyone inspect these objects and discover how to use them.
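As a sketch of such a component, a configurable resource can wrap shared infrastructure so that any op can request it by key; the client class and query here are invented for illustration:

```python
from dagster import job, op, resource


class WarehouseClient:
    """Hypothetical stand-in for a real database client."""

    def __init__(self, conn_string: str):
        self.conn_string = conn_string

    def execute(self, query: str) -> None:
        print(f"[{self.conn_string}] {query}")


@resource(config_schema={"conn_string": str})
def warehouse_resource(init_context):
    # Build the client from validated, user-supplied config.
    return WarehouseClient(init_context.resource_config["conn_string"])


@op(required_resource_keys={"warehouse"})
def trim_events(context):
    context.resources.warehouse.execute("DELETE FROM events WHERE stale")


@job(resource_defs={"warehouse": warehouse_resource})
def maintenance_job():
    trim_events()
```

The `config_schema` is what lets Dagit render a type-aware editor for the resource's settings, and swapping in a mock client for tests requires no changes to op code.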
Dagster’s repository model lets you isolate codebases so that problems in one job don’t bring down the rest. Each job can have its own package dependencies and Python version. Jobs are run in isolated processes so user code issues can't bring the system down.
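A repository is a decorated function that returns the jobs (and schedules or sensors) it contains; a minimal sketch with an invented job:

```python
from dagster import job, op, repository


@op
def ping():
    return "pong"


@job
def healthcheck():
    ping()


@repository
def team_analytics():
    # Each repository can be loaded from its own code location, with its
    # own package dependencies and Python version, isolated from others.
    return [healthcheck]
```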
Dagit, Dagster’s web interface, includes expansive facilities for understanding the jobs it orchestrates. When inspecting a run of your job, you can query over logs, discover the most time-consuming tasks via a Gantt chart, re-execute subsets of steps, and more.
Dagster is available on PyPI, and officially supports Python 3.6+.
```shell
$ pip install dagster dagit
```
This installs two modules:
- Dagster: the core programming model and abstraction stack; stateless, single-node, single-process, and multi-process execution engines; and a CLI tool for driving those engines (exercised in the smoke test below).
- Dagit: the UI for developing and operating Dagster pipelines, including a DAG browser, a type-aware config editor, and a live execution interface.
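As a quick smoke test after installing, assuming the `serial_pipeline` sketch above is saved in a file named `hello_dagster.py` (the file name is illustrative, and the flags assume a recent Dagster release), you can exercise both modules from the command line:

```shell
# Execute the job once via the Dagster CLI
$ dagster job execute -f hello_dagster.py -j serial_pipeline

# Or browse and launch it from Dagit at http://localhost:3000
$ dagit -f hello_dagster.py
```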
Next, jump right into our tutorial, read our complete documentation, or check out our GitHub Discussions. If you're actively using Dagster or have questions on getting started, we'd love to hear from you.
For details on contributing or running the project for development, check out our contributing guide.
Dagster works with the tools and systems that you're already using with your data, including:
| Integration | Dagster Library |
| ----------- | --------------- |
| Apache Airflow | `dagster-airflow` Allows Dagster pipelines to be scheduled and executed, either containerized or uncontainerized, as Apache Airflow DAGs. |
| Apache Spark | `dagster-spark` · `dagster-pyspark` Libraries for interacting with Apache Spark and PySpark. |
| Dask | `dagster-dask` Provides a Dagster integration with Dask / Dask.Distributed. |
| Datadog | `dagster-datadog` Provides a Dagster resource for publishing metrics to Datadog. |
| Jupyter / Papermill | `dagstermill` Built on the papermill library, dagstermill is meant for integrating productionized Jupyter notebooks into Dagster pipelines. |
| PagerDuty | `dagster-pagerduty` A library for creating PagerDuty alerts from Dagster workflows. |
| Snowflake | `dagster-snowflake` A library for interacting with the Snowflake Data Warehouse. |
| **Cloud Providers** | |
| AWS | `dagster-aws` A library for interacting with Amazon Web Services. Provides integrations with CloudWatch, S3, EMR, and Redshift. |
| Azure | `dagster-azure` A library for interacting with Microsoft Azure. |
| GCP | `dagster-gcp` A library for interacting with Google Cloud Platform. Provides integrations with GCS, BigQuery, and Cloud Dataproc. |
This list is growing as we actively build more integrations, and we welcome contributions!