This repo contains Python code to accompany the videos that show how to use Prefect for Data Engineering. We will create ETL workflows to extract, transform, and load your data.
We will use Postgres and GCP's Google Cloud Storage and BigQuery.
Prefect helps you observe and orchestrate your dataflows.
Clone the repo locally.
In a conda environment, install all package dependencies with
pip install -r requirements.txt
Open another terminal window and activate your conda environment. Start the Orion API server locally with
prefect orion start
- Log in to GCP
- Create a Project
- Set up Cloud Storage
- Set up BigQuery
- Create a service account with the required policies to interact with both services
Register the blocks from the prefect-gcp collection so they are available to the server and UI:
prefect block register -m prefect_gcp
Create a GCP Credentials block in the UI.
Paste the contents of your service account JSON key file into the block's Service Account Info field.
Create a GCS Bucket block in the UI.
Alternatively, create these blocks using code by following the templates in the blocks folder.
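For example, a minimal sketch of creating both blocks in code (the block names, bucket name, and key-file path below are placeholders, not the repo's actual values):

```python
from prefect_gcp import GcpCredentials
from prefect_gcp.cloud_storage import GcsBucket

# Placeholder path and block names -- swap in your own.
credentials = GcpCredentials(
    service_account_file="path/to/service-account.json"  # or service_account_info={...}
)
credentials.save("zoom-gcp-creds", overwrite=True)

bucket = GcsBucket(
    bucket="my-prefect-bucket",  # the bucket you created above
    gcp_credentials=GcpCredentials.load("zoom-gcp-creds"),
)
bucket.save("zoom-gcs", overwrite=True)
```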
Write your Python functions and add @flow and @task decorators.
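As a rough illustration (not one of the repo's actual flows), a decorated ETL skeleton might look like this:

```python
from prefect import flow, task

@task(retries=3)
def extract() -> list[int]:
    # Stand-in for pulling data from a source (CSV, API, Postgres, ...)
    return [1, 2, 3]

@task()
def transform(rows: list[int]) -> list[int]:
    # Stand-in for cleaning / reshaping the data
    return [row * 10 for row in rows]

@task()
def load(rows: list[int]) -> None:
    # Stand-in for writing to GCS, BigQuery, or Postgres
    print(f"Loaded {len(rows)} rows")

@flow(name="etl-example")
def etl_flow() -> None:
    rows = extract()
    load(transform(rows))

if __name__ == "__main__":
    etl_flow()
```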
Note: all code should be run from the top level of your folder to keep file paths consistent.
Create and apply your deployments.
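If you want to script this step, one option (the module path and names below are hypothetical, not the repo's actual files) is Deployment.build_from_flow; you can also build deployments from the CLI with prefect deployment build and apply the generated YAML with prefect deployment apply.

```python
from prefect.deployments import Deployment

from etl_flow import etl_flow  # hypothetical module holding the flow from the example above

if __name__ == "__main__":
    deployment = Deployment.build_from_flow(
        flow=etl_flow,
        name="etl-local",
        work_queue_name="default",
    )
    deployment.apply()
```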
Run a deployment ad hoc from the CLI or UI.
Or create a schedule from the UI or when you create your deployment.
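To attach a schedule in code instead, a sketch under the same assumptions (the CronSchedule import path shown matches Orion-era Prefect 2 and may differ in later releases):

```python
from prefect.deployments import Deployment
from prefect.orion.schemas.schedules import CronSchedule

from etl_flow import etl_flow  # hypothetical flow from the examples above

if __name__ == "__main__":
    deployment = Deployment.build_from_flow(
        flow=etl_flow,
        name="etl-nightly",
        schedule=CronSchedule(cron="0 0 * * *", timezone="UTC"),  # daily at midnight UTC
    )
    deployment.apply()
```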
Make sure your agent is set up to poll the work queue you created when you made your deployment (the default queue if you didn't specify one). For example, prefect agent start -q default starts an agent that polls the default work queue.
Bake your flow code into a Docker image, create a DockerContainer block, and run your flow code in a Docker container.
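A rough sketch of that setup (the image tag, block name, and flow import are placeholders; it assumes the image already contains your flow code and dependencies):

```python
from prefect.deployments import Deployment
from prefect.infrastructure.docker import DockerContainer

from etl_flow import etl_flow  # hypothetical flow from the examples above

# Create (or later load) a DockerContainer block pointing at your image
docker_block = DockerContainer(
    image="yourname/prefect-etl:latest",  # placeholder image tag
    image_pull_policy="ALWAYS",
    auto_remove=True,
)
docker_block.save("etl-docker", overwrite=True)

if __name__ == "__main__":
    deployment = Deployment.build_from_flow(
        flow=etl_flow,
        name="etl-docker",
        infrastructure=docker_block,
    )
    deployment.apply()
```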
Sign up and use Prefect Cloud for free at https://app.prefect.cloud