Terraform module for deploying Materialize on Google Cloud Platform (GCP) with all required infrastructure components.
This module sets up:
- GKE cluster for Materialize workloads
- Cloud SQL PostgreSQL instance for metadata storage
- Cloud Storage bucket for persistence
- Required networking and security configurations
- Service accounts with proper IAM permissions
Warning
This module is intended for demonstration and evaluation purposes, and as a template for building your own production deployment of Materialize.
This module should not be relied on directly for production deployments: future releases will contain breaking changes. Instead, to use it as a starting point for your own production deployment, either:
- Fork this repo and pin to a specific version, or
- Use the code as a reference when developing your own deployment.
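If you fork and pin, the module reference might look roughly like the following. This is an illustrative sketch: the repository path and version tag shown are assumptions, and the required variables are elided.

```hcl
module "materialize" {
  # Pin to a specific tag so future breaking changes don't affect you.
  # The repository path and tag shown here are illustrative.
  source = "github.com/MaterializeInc/terraform-google-materialize?ref=v0.8.1"

  project_id = "my-gcp-project"
  region     = "us-central1"
  # ... remaining required variables (database_config, network_config, etc.)
}
```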
The module has been tested with:
- GKE version 1.28
- PostgreSQL 15
- terraform-helm-materialize v0.1.12 (Materialize Operator v25.1.7)
The materialize_instances variable is a list of objects that define the configuration for each Materialize instance.
Optional list of additional command-line arguments to pass to the environmentd container. This can be used to override default system parameters or enable specific features.
```hcl
environmentd_extra_args = [
  "--system-parameter-default=max_clusters=1000",
  "--system-parameter-default=max_connections=1000",
  "--system-parameter-default=max_tables=1000",
]
```

These flags configure default limits for clusters, connections, and tables. You can provide any supported arguments here.
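As a sketch, a minimal `materialize_instances` entry might look like the following. The field names shown (`name`, `namespace`, `database_name`, `cpu_request`, `memory_request`, `memory_limit`) are assumptions; consult the module's `variables.tf` for the authoritative object schema.

```hcl
# Hypothetical sketch: field names are assumptions; see variables.tf
# in this module for the authoritative object schema.
materialize_instances = [
  {
    name           = "analytics"
    namespace      = "materialize-environment"
    database_name  = "analytics_db"
    cpu_request    = "2"
    memory_request = "4Gi"
    memory_limit   = "4Gi"
    environmentd_extra_args = [
      "--system-parameter-default=max_connections=1000",
    ]
  }
]
```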
| Name | Version |
|---|---|
| terraform | >= 1.0 |
| deepmerge | ~> 1.0 |
| google | >= 6.0 |
| helm | ~> 2.0 |
| kubernetes | ~> 2.0 |
No providers.
| Name | Source | Version |
|---|---|---|
| certificates | ./modules/certificates | n/a |
| database | ./modules/database | n/a |
| gke | ./modules/gke | n/a |
| load_balancers | ./modules/load_balancers | n/a |
| materialize_nodepool | ./modules/nodepool | n/a |
| networking | ./modules/networking | n/a |
| operator | github.com/MaterializeInc/terraform-helm-materialize | v0.1.36 |
| storage | ./modules/storage | n/a |
No resources.
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| cert_manager_chart_version | Version of the cert-manager helm chart to install. | string | "v1.17.1" | no |
| cert_manager_install_timeout | Timeout for installing the cert-manager helm chart, in seconds. | number | 300 | no |
| cert_manager_namespace | The name of the namespace in which cert-manager is or will be installed. | string | "cert-manager" | no |
| database_config | Cloud SQL configuration | object({…}) | n/a | yes |
| helm_chart | Chart name from repository or local path to chart. For local charts, set the path to the chart directory. | string | "materialize-operator" | no |
| helm_values | Values to pass to the Helm chart | any | {} | no |
| install_cert_manager | Whether to install cert-manager. | bool | true | no |
| install_materialize_operator | Whether to install the Materialize operator | bool | true | no |
| install_metrics_server | Whether to install the metrics-server for the Materialize Console. Defaults to false since GKE installs one by default in the kube-system namespace. Only set to true if the GKE cluster was deployed with monitoring explicitly turned off. Refer to the GKE docs for more information, including impact to GKE customer support efforts. | bool | false | no |
| labels | Labels to apply to all resources | map(string) | {} | no |
| materialize_instances | Configuration for Materialize instances | list(object({…})) | [] | no |
| materialize_node_group_disk_size_gb | Size of the disk attached to each Materialize worker node | number | 100 | no |
| materialize_node_group_local_ssd_count | Number of local NVMe SSDs to attach to each Materialize node. In GCP, each disk is 375 GB. | number | 1 | no |
| materialize_node_group_machine_type | Machine type for Materialize worker nodes | string | "n2-highmem-8" | no |
| materialize_node_group_max_nodes | Maximum number of Materialize worker nodes | number | 2 | no |
| materialize_node_group_min_nodes | Minimum number of Materialize worker nodes | number | 1 | no |
| namespace | Kubernetes namespace for Materialize | string | "materialize" | no |
| network_config | Network configuration for the GKE cluster | object({…}) | n/a | yes |
| operator_namespace | Namespace for the Materialize operator | string | "materialize" | no |
| operator_version | Version of the Materialize operator to install | string | null | no |
| orchestratord_version | Version of the Materialize orchestrator to install | string | null | no |
| prefix | Prefix to be used for resource names | string | "materialize" | no |
| project_id | The ID of the project where resources will be created | string | n/a | yes |
| region | The region where resources will be created | string | "us-central1" | no |
| storage_bucket_version_ttl | Sets the TTL (in days) on noncurrent storage bucket objects. This must be set if storage_bucket_versioning is turned on. | number | 7 | no |
| storage_bucket_versioning | Enable bucket versioning. This should be enabled for production deployments. | bool | false | no |
| system_node_group_disk_size_gb | Size of the disk attached to each system node | number | 100 | no |
| system_node_group_machine_type | Machine type for system nodes | string | "n2-highmem-8" | no |
| system_node_group_max_nodes | Maximum number of system nodes | number | 2 | no |
| system_node_group_min_nodes | Minimum number of system nodes | number | 1 | no |
| system_node_group_node_count | Number of nodes in the system node group | number | 1 | no |
| use_local_chart | Whether to use a local chart instead of one from a repository | bool | false | no |
| use_self_signed_cluster_issuer | Whether to install and use a self-signed ClusterIssuer for TLS. To work around limitations in Terraform, this will be treated as false if no Materialize instances are defined. | bool | true | no |
| Name | Description |
|---|---|
| connection_strings | Formatted connection strings for Materialize |
| database | Cloud SQL instance details |
| gke_cluster | GKE cluster details |
| load_balancer_details | Details of the Materialize instance load balancers. |
| network | Network details |
| operator | Materialize operator details |
| service_accounts | Service account details |
| storage | GCS bucket details |
Access to the database is through the balancerd pods on:
- Port 6875 for SQL connections.
- Port 6876 for HTTP(S) connections.
Access to the web console is through the console pods on port 8080.
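For example, once you have an address from the `load_balancer_details` output, you might connect with `psql`. The user and database names below are assumptions; use the values from the `connection_strings` output.

```
# Illustrative only: substitute the address from the load_balancer_details
# output, and the user/database from the connection_strings output.
psql "postgres://materialize@<BALANCERD_ADDRESS>:6875/materialize"
```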
TLS support is provided by using cert-manager and a self-signed ClusterIssuer.
More advanced TLS support, such as user-provided CAs or per-Materialize Issuers, is out of scope for this Terraform module. Please refer to the cert-manager documentation for detailed guidance on more advanced usage.
You must upgrade to at least v0.7.x before upgrading to v0.8.x of this Terraform code.
Breaking changes:
- The system node group is renamed and significantly modified, forcing a recreation.
- Both node groups are now locked to consistent configurations and ON_DEMAND scheduling.
- OpenEBS is removed, and with it all support for lgalloc, our legacy spill-to-disk mechanism.
This is an intermediate version to handle some changes that must be applied in stages. It is recommended to upgrade to v0.8.x after upgrading to this version.
Breaking changes:
- Swap is enabled by default.
- Support for lgalloc, our legacy spill-to-disk mechanism, is deprecated and will be removed in the next version.
- We now always use two node groups, one for system workloads and one for Materialize workloads.
- Variables for configuring these node groups have been renamed, so they may be configured separately.
To avoid downtime when upgrading to future versions, you must perform a rollout at this version.
- Ensure your `environmentd_version` is at least `v26.0.0`.
- Update your `request_rollout` (and `force_rollout` if already at the correct `environmentd_version`).
- Run `terraform apply`.
You must upgrade to at least v0.6.x before upgrading to v0.7.0 of this Terraform code.
It is strongly recommended to enable swap on v0.6.x before upgrading to v0.7.0 or higher.
We now have some initial support for swap.
We recommend upgrading to at least v0.5.10 before upgrading to v0.6.x of this Terraform code.
To use swap:
- Set `swap_enabled` to `true`.
- Ensure your `environmentd_version` is at least `v26.0.0`.
- Update your `request_rollout` (and `force_rollout` if already at the correct `environmentd_version`).
- Run `terraform apply`.
This will create a new node group configured for swap, and migrate your clusterd pods there.
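Under these assumptions (the instance field names follow the module's instance schema; the rollout id is an arbitrary value you bump to trigger a rollout), the swap upgrade might look like:

```hcl
# Illustrative sketch: instance field names are assumptions; see variables.tf.
swap_enabled = true

materialize_instances = [
  {
    name                 = "analytics"
    environmentd_version = "v26.0.0"
    # Change this value to trigger a rollout onto the new swap-enabled nodes.
    request_rollout      = "22222222-2222-2222-2222-222222222222"
  }
]
```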
This version is missing the updated helm chart. Skip this version, go to v0.6.1.
We now install cert-manager and configure a self-signed ClusterIssuer by default.
Due to limitations in Terraform, it cannot plan Kubernetes resources using CRDs that do not exist yet. We have worked around this for new users by only generating the certificate resources when creating Materialize instances that use them, which also cannot be created on the first run.
For existing users upgrading Materialize instances not previously configured for TLS:
- Leave `install_cert_manager` at its default of `true`.
- Set `use_self_signed_cluster_issuer` to `false`.
- Run `terraform apply`. This will install cert-manager and its CRDs.
- Set `use_self_signed_cluster_issuer` back to `true` (the default).
- Update the `request_rollout` field of the Materialize instance.
- Run `terraform apply`. This will generate the certificates and configure your Materialize instance to use them.
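For the first apply in that sequence, the relevant settings would look roughly like this (variable names match the inputs table above):

```hcl
# First apply: install cert-manager and its CRDs, but do not yet
# generate certificate resources that depend on those CRDs.
install_cert_manager           = true  # the default
use_self_signed_cluster_issuer = false

# After this apply succeeds, set use_self_signed_cluster_issuer back to
# true, bump request_rollout on each instance, and apply again.
```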
By default storage bucket versioning is turned off. This both reduces
costs and allows for easier cleanup of resources for testing. When running in
production, versioning should be turned on with a sufficient TTL to meet any
data-recovery requirements. See storage_bucket_versioning and storage_bucket_version_ttl.
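For a production deployment, this might look like the following (the 30-day TTL is an illustrative choice, not a recommendation):

```hcl
# Enable versioning in production; keep noncurrent object versions for
# 30 days (choose a TTL that meets your data-recovery requirements).
storage_bucket_versioning  = true
storage_bucket_version_ttl = 30
```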