To apply changes and kick off the Materialize instance upgrade, you must update the `requestRollout` field in the Materialize custom resource spec to a new UUID.

Be sure to consult the [Rollout Configurations](#rollout-configuration) to ensure you've selected the correct rollout behavior. The behavior of the new version rollout follows your `rolloutStrategy` setting:

- `WaitUntilReady` (default): New instances are created, and all dataflows are determined to be ready, before cutting over and terminating the old version. This temporarily requires twice the resources during the transition.

- `ImmediatelyPromoteCausingDowntime`: Tears down the prior version before creating and promoting the new version. This causes downtime equal to the time it takes for dataflows to hydrate, but does not require additional resources.
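The update described above can be sketched as a single `kubectl patch` command. This is a sketch, not the documented procedure: the instance name `my-materialize` is a placeholder, and generating the UUID with `uuidgen` assumes that tool is available on your system:

```shell
# Trigger a rollout by setting spec.requestRollout to a fresh UUID.
# "my-materialize" is a hypothetical instance name -- substitute your own.
kubectl patch materialize my-materialize -n materialize-environment \
  --type merge -p "{\"spec\":{\"requestRollout\":\"$(uuidgen)\"}}"
```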
##### In Place Rollout

The `inPlaceRollout` setting has been deprecated and will be ignored.
### Verifying the Upgrade

After initiating the rollout, you can monitor the `status` field of the Materialize custom resource to check on the upgrade:

```shell
# Watch the status of your Materialize environment
kubectl get materialize -n materialize-environment -w
```
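For more detail than the watch output shows, `kubectl describe` prints the full status block and recent events for the instance (again, `my-materialize` is a placeholder name):

```shell
# Show conditions, status fields, and events for the Materialize instance
kubectl describe materialize my-materialize -n materialize-environment
```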
**`doc/user/content/installation/install-on-aws/appendix-deployment-guidelines.md`**
As a general guideline, we recommend:

- ARM-based CPU
- A 1:8 ratio of vCPU to GiB memory.
- When using swap, an 8:1 ratio of GiB local instance storage to GiB memory.
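As a worked example of the ratios above, consider a hypothetical 8-vCPU node (similar in shape to an r7gd.2xlarge):

```shell
# Sizing sketch for a hypothetical 8-vCPU node following the recommended ratios
VCPU=8
RAM_GIB=$((VCPU * 8))      # 1:8 vCPU-to-memory ratio -> 64 GiB memory
SWAP_GIB=$((RAM_GIB * 8))  # 8:1 storage-to-memory ratio -> 512 GiB local NVMe
echo "vCPU=${VCPU} memory=${RAM_GIB}GiB local-storage=${SWAP_GIB}GiB"
```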
{{% self-managed/aws-recommended-instances %}}

## Locally-attached NVMe storage
Configuring swap on nodes to use locally-attached NVMe storage allows Materialize to spill to disk when operating on datasets larger than main memory. This setup can provide significant cost savings and provides a more graceful degradation rather than OOMing. Network-attached storage (like EBS volumes) can be dramatically slower and is not recommended for swap.
See [Upgrade Notes](https://github.com/MaterializeInc/terraform-aws-materialize?tab=readme-ov-file#v061).

{{< note >}}
If deploying `v25.2`, Materialize clusters will not automatically use swap unless they are configured with a `memory_request` less than their `memory_limit`. In `v26`, this will be handled automatically.
{{< /note >}}
For details on the module structure and customization, see the [top level](https://github.com/MaterializeInc/materialize-terraform-self-managed/tree/main) and [cloud specific](https://github.com/MaterializeInc/materialize-terraform-self-managed/tree/main/aws) documentation.

Also check out the [AWS deployment guide](/installation/install-on-aws/appendix-deployment-guidelines/) for details on recommended instance sizing and configuration.
This example provisions the following infrastructure:

### Networking

| Resource | Description |
|----------|-------------|
| VPC | 10.0.0.0/16 with DNS hostnames and support enabled |
| Subnets | 3 private subnets (10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24) and 3 public subnets (10.0.101.0/24, 10.0.102.0/24, 10.0.103.0/24) across availability zones us-east-1a, us-east-1b, us-east-1c |
| NAT Gateway | Single NAT Gateway for all private subnets |
| Internet Gateway | For public subnet connectivity |
### Compute

| Resource | Description |
|----------|-------------|
| EKS Cluster | Version 1.32 with CloudWatch logging (API, audit) |
| Base Node Group | 2 nodes (t4g.medium) for Karpenter and CoreDNS |
| Karpenter | Auto-scaling controller with two node classes: Generic nodepool (t4g.xlarge instances for general workloads) and Materialize nodepool (r7gd.2xlarge instances with swap enabled and dedicated taints to run Materialize instance workloads) |
### Database

| Resource | Description |
|----------|-------------|
| RDS PostgreSQL | Version 15, db.t3.large instance |
| Storage | 50GB allocated, autoscaling up to 100GB |

| Resource | Description |
|----------|-------------|
| IAM Role | IRSA role for Kubernetes service account access |
### Kubernetes Add-ons

| Resource | Description |
|----------|-------------|
| AWS Load Balancer Controller | For managing Network Load Balancers |
| cert-manager | Certificate management controller for Kubernetes that automates TLS certificate provisioning and renewal |
| Self-signed ClusterIssuer | Provides self-signed TLS certificates for Materialize instance internal communication (balancerd, console). Used by the Materialize instance for secure inter-component communication. |
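The Terraform example creates the self-signed ClusterIssuer for you; for reference only, a minimal self-signed ClusterIssuer in cert-manager looks roughly like the following (the `metadata.name` here is illustrative, not necessarily what the module uses):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer   # illustrative name
spec:
  selfSigned: {}
```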
### Materialize

| Resource | Description |
|----------|-------------|
| Operator | Materialize Kubernetes operator |
| Instance | Single Materialize instance in `materialize-environment` namespace |
**`doc/user/content/installation/install-on-azure/appendix-deployment-guidelines.md`**
As a general guideline, we recommend:

- ARM-based CPU
- A 1:8 ratio of vCPU to GiB memory.
- When using swap, an 8:1 ratio of GiB local instance storage to GiB memory.
### Recommended Azure VM Types with Local NVMe Disks

## Locally-attached NVMe storage

Configuring swap on nodes to use locally-attached NVMe storage allows Materialize to spill to disk when operating on datasets larger than main memory. This setup can provide significant cost savings and provides a more graceful degradation rather than OOMing. Network-attached storage (like EBS volumes) can be dramatically slower and is not recommended for swap.
See [Upgrade Notes](https://github.com/MaterializeInc/terraform-azurerm-materialize?tab=readme-ov-file#v061).

{{< note >}}
If deploying `v25.2`, Materialize clusters will not automatically use swap unless they are configured with a `memory_request` less than their `memory_limit`. In `v26`, this will be handled automatically.
{{< /note >}}