Materialize on Google Cloud Platform

Terraform module for deploying Materialize on Google Cloud Platform (GCP) with all required infrastructure components.

This module sets up:

  • GKE cluster for Materialize workloads
  • Cloud SQL PostgreSQL instance for metadata storage
  • Cloud Storage bucket for persistence
  • Required networking and security configurations
  • Service accounts with proper IAM permissions
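
A minimal invocation might look like the following sketch. The source address uses the usual GitHub shorthand; the project ID, password variable, and CIDR ranges are placeholder assumptions, and the required inputs match the Inputs table further down.

  variable "database_password" {
    type      = string
    sensitive = true
  }

  module "materialize" {
    source = "github.com/MaterializeInc/terraform-google-materialize"

    project_id = "my-gcp-project"   # assumption: replace with your project ID
    region     = "us-central1"

    # Only the password is required; tier, version, username, and db_name
    # fall back to the defaults documented in the Inputs table.
    database_config = {
      password = var.database_password
    }

    # Assumption: example, non-overlapping CIDR ranges for the GKE cluster.
    network_config = {
      subnet_cidr   = "10.0.0.0/20"
      pods_cidr     = "10.48.0.0/14"
      services_cidr = "10.52.0.0/20"
    }
  }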

Warning

This module is intended for demonstration and evaluation purposes, and as a template for building your own production deployment of Materialize.

This module should not be relied upon directly for production deployments: future releases of the module will contain breaking changes. Instead, to use it as a starting point for your own production deployment, either:

  • Fork this repo and pin to a specific version (see the sketch after this list), or
  • Use the code as a reference when developing your own deployment.
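
For example, pinning a fork of the module to a tag (the organization and tag shown are hypothetical):

  module "materialize" {
    # Assumption: replace vX.Y.Z with a real release tag from your fork.
    source = "github.com/your-org/terraform-google-materialize?ref=vX.Y.Z"
    # ...
  }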

The module has been tested with:

  • GKE version 1.28
  • PostgreSQL 15
  • Materialize Operator v0.1.0

Requirements

Name Version
terraform >= 1.0
google >= 6.0
helm ~> 2.0
kubernetes ~> 2.0

Providers

No providers.

Modules

Name Source Version
certificates ./modules/certificates n/a
database ./modules/database n/a
gke ./modules/gke n/a
load_balancers ./modules/load_balancers n/a
networking ./modules/networking n/a
operator github.com/MaterializeInc/terraform-helm-materialize v0.1.9
storage ./modules/storage n/a

Resources

No resources.

Inputs

Name Description Type Default Required
cert_manager_chart_version Version of the cert-manager helm chart to install. string "v1.17.1" no
cert_manager_install_timeout Timeout for installing the cert-manager helm chart, in seconds. number 300 no
cert_manager_namespace The name of the namespace in which cert-manager is or will be installed. string "cert-manager" no
database_config Cloud SQL configuration
    object({
      tier     = optional(string, "db-custom-2-4096")
      version  = optional(string, "POSTGRES_15")
      password = string
      username = optional(string, "materialize")
      db_name  = optional(string, "materialize")
    })
    n/a yes
gke_config GKE cluster configuration. Make sure to use large enough machine types for your Materialize instances.
    object({
      node_count   = number
      machine_type = string
      disk_size_gb = number
      min_nodes    = number
      max_nodes    = number
    })
    {
      "disk_size_gb": 50,
      "machine_type": "e2-standard-4",
      "max_nodes": 2,
      "min_nodes": 1,
      "node_count": 1
    }
    no
helm_chart Chart name from repository or local path to chart. For local charts, set the path to the chart directory. string "materialize-operator" no
helm_values Values to pass to the Helm chart any {} no
install_cert_manager Whether to install cert-manager. bool false no
install_materialize_operator Whether to install the Materialize operator bool true no
install_metrics_server Whether to install the metrics-server for the Materialize Console. Defaults to false, since GKE installs one by default in the kube-system namespace. Only set to true if the GKE cluster was deployed with monitoring explicitly turned off. Refer to the GKE docs for more information, including the impact on GKE customer support efforts. bool false no
labels Labels to apply to all resources map(string) {} no
materialize_instances Configuration for Materialize instances (see the example after this table)
    list(object({
      name                    = string
      namespace               = optional(string)
      database_name           = string
      create_database         = optional(bool, true)
      create_load_balancer    = optional(bool, true)
      internal_load_balancer  = optional(bool, true)
      environmentd_version    = optional(string)
      cpu_request             = optional(string, "1")
      memory_request          = optional(string, "1Gi")
      memory_limit            = optional(string, "1Gi")
      in_place_rollout        = optional(bool, false)
      request_rollout         = optional(string)
      force_rollout           = optional(string)
      balancer_memory_request = optional(string, "256Mi")
      balancer_memory_limit   = optional(string, "256Mi")
      balancer_cpu_request    = optional(string, "100m")
    }))
    [] no
namespace Kubernetes namespace for Materialize string "materialize" no
network_config Network configuration for the GKE cluster
    object({
      subnet_cidr   = string
      pods_cidr     = string
      services_cidr = string
    })
    n/a yes
operator_namespace Namespace for the Materialize operator string "materialize" no
operator_version Version of the Materialize operator to install string null no
orchestratord_version Version of the Materialize orchestrator to install string null no
prefix Prefix to be used for resource names string "materialize" no
project_id The ID of the project where resources will be created string n/a yes
region The region where resources will be created string "us-central1" no
use_local_chart Whether to use a local chart instead of one from a repository bool false no
use_self_signed_cluster_issuer Whether to install and use a self-signed ClusterIssuer for TLS. Due to limitations in Terraform, this cannot be enabled until the cert-manager CRDs have been installed. bool false no
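
As a sketch, a single instance configured through materialize_instances might look like the following; the instance name and resource sizes are assumptions:

  materialize_instances = [
    {
      name           = "analytics"   # assumption: example instance name
      database_name  = "analytics"
      cpu_request    = "2"
      memory_request = "4Gi"
      memory_limit   = "4Gi"
    }
  ]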

Outputs

Name Description
connection_strings Formatted connection strings for Materialize
database Cloud SQL instance details
gke_cluster GKE cluster details
load_balancer_details Details of the Materialize instance load balancers.
network Network details
operator Materialize operator details
service_accounts Service account details
storage GCS bucket details

Connecting to Materialize instances

Access to the database is through the balancerd pods on:

  • Port 6875 for SQL connections.
  • Port 6876 for HTTP(S) connections.

Access to the web console is through the console pods on port 8080.
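
For example, assuming a load balancer reachable at 10.0.0.10 (see the load_balancer_details output), a SQL connection could look like the line below; the address, user, and database names here are placeholder assumptions:

  psql "postgres://materialize@10.0.0.10:6875/materialize"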

TLS support

For demonstration purposes, optional TLS support is provided using cert-manager and a self-signed ClusterIssuer.

More advanced TLS support, such as user-provided CAs or per-Materialize Issuers, is out of scope for this Terraform module. Please refer to the cert-manager documentation for detailed guidance on more advanced usage.

To enable installation of cert-manager and configuration of the self-signed ClusterIssuer:
  1. Set install_cert_manager to true.
  2. Run terraform apply.
  3. Set use_self_signed_cluster_issuer to true.
  4. Run terraform apply.

Due to limitations in Terraform, it cannot plan Kubernetes resources whose CRDs do not yet exist. The first terraform apply therefore installs cert-manager and its CRDs; the ClusterIssuer and Certificate resources are then created in the second terraform apply.
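
A sketch of the two-phase rollout, expressed as module inputs (the variable names come from the Inputs table above):

  # First terraform apply: install cert-manager and its CRDs only.
  install_cert_manager           = true
  use_self_signed_cluster_issuer = false

  # Second terraform apply: flip the issuer flag once the CRDs exist.
  # install_cert_manager           = true
  # use_self_signed_cluster_issuer = true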