- If you are on a version <= v3.24.2 and want to update to >= v3.25.0, you should take a look at the bin/module-update/ utility. There are changes related to the EFS mount_point which may cause downtime if not addressed properly.
- If you are on a version >= v3.0.0 and want to update to >= v3.5.0, you should take a look at the bin/module-update/ utility.
- If you are on a version earlier than v3.0.0, you will need to migrate your state before updating the module to versions >= v3.0.0. See state-migration for more details.
- examples/:
  - Purpose: Acts as an intuitive guide for module users.
  - Contents: Features end-user Terraform configurations along with illustrative tfvars samples, showcasing potential setups.
- modules/:
  - Purpose: Houses the primary modules that orchestrate the provisioning process.
  - Contents: These modules act as the main entry points for the setup of the underlying infrastructure beneath the EKS cluster and its associated components.
- tests/:
  - Purpose: Ensures the integrity and functionality of the module.
  - Contents: Contains automation-driven tests intended for validation and continuous integration (CI) checks.
- bin/state-migration/:
  - Purpose: Contains automation to perform Terraform state migration from a monolithic module to a multi-module structure.
  - Contents: Script and documentation to perform the Terraform state migration.
- bin/module-update/:
  - Purpose: A helper script to update the module version. Compatible only with modules on version >= v3.0.0.
  - Contents: The script and associated documentation for performing module updates.
Always refer to each section's respective README or documentation for detailed information and usage guidelines.
- A host with ssh-keygen installed
- awscli >= 2.23.6
- terraform >= v1.10.5
- kubectl cli >= 1.32.0
- helm >= 3.17
- hcledit
- bash >= 4.0
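If you want to confirm the tools are present before starting, a quick check along these lines can help (a sketch; the version flags shown are the common ones for each CLI, adjust to your environment):

```bash
# Print the installed version of each required tool; a missing command indicates an unmet prerequisite.
aws --version
terraform version
kubectl version --client
helm version --short
hcledit version
bash --version
ssh-keygen --help 2>&1 | head -n 1   # ssh-keygen has no version flag; this just confirms it is on the PATH
```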
We first need to set up the module structure.
Set the following environment variables, updating the values as needed (using v3.0.0 and domino-deploy as an example):

```bash
MOD_VERSION='v3.0.0'
DEPLOY_DIR='domino-deploy'
```

Ensure that DEPLOY_DIR does not exist or is currently empty.
Create the DEPLOY_DIR and use terraform to bootstrap the module from source.
```bash
mkdir -p "$DEPLOY_DIR"
terraform -chdir="$DEPLOY_DIR" init -backend=false -from-module="github.com/dominodatalab/terraform-aws-eks.git//examples/deploy?ref=${MOD_VERSION}"
```

Ignore this message:

```
Terraform initialized in an empty directory!

The directory has no Terraform configuration files. You may begin working
with Terraform immediately by creating Terraform configuration files.
```
✅ If successful, you should get a structure similar to this:
```
domino-deploy
├── README.md
├── meta.sh
├── set-mod-version.sh
├── terraform
│   ├── cluster
│   │   ├── README.md
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── cluster.tfvars
│   ├── infra
│   │   ├── README.md
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── infra.tfvars
│   ├── nodes
│   │   ├── README.md
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── nodes.tfvars
└── tf.sh
```

Note: It's recommended to go through the README.md within the DEPLOY_DIR for further details.
You can update the modules' version using a script or manually.

To use the script, run set-mod-version.sh with the desired version:

```bash
./set-mod-version.sh "$MOD_VERSION"
```

To update manually, change each module's source to use the MOD_VERSION value (a sketch of the result follows this list). For example, if MOD_VERSION=v3.0.0:

- infra/main.tf: update module.infra.source from "./../../../../modules/infra" to github.com/dominodatalab/terraform-aws-eks.git//modules/infra?ref=v3.0.0
- cluster/main.tf: update module.eks.source from "./../../../../modules/eks" to github.com/dominodatalab/terraform-aws-eks.git//modules/eks?ref=v3.0.0
- nodes/main.tf: update module.nodes.source from "./../../../../modules/nodes" to github.com/dominodatalab/terraform-aws-eks.git//modules/nodes?ref=v3.0.0
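For illustration, after the manual update the module block in infra/main.tf would reference the remote source; a minimal sketch, with all other arguments left exactly as they were:

```hcl
# infra/main.tf (illustrative; only the source line changes)
module "infra" {
  source = "github.com/dominodatalab/terraform-aws-eks.git//modules/infra?ref=v3.0.0"
  # ... existing inputs unchanged ...
}
```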
Consult the available variables within each module's variables.tf:

- domino-deploy/terraform/infra/variables.tf: deploy_id, region, tags, network, default_node_groups, additional_node_groups, storage, kms, eks, ssh_pvt_key_path, route53_hosted_zone_name, bastion
- domino-deploy/terraform/cluster/variables.tf: eks, kms_info (⚠️ this variable is only intended for migrating infrastructure; it is not recommended to set it)
- domino-deploy/terraform/nodes/variables.tf: default_node_groups, additional_node_groups

Configure the Terraform variables at:

- domino-deploy/terraform/infra.tfvars
- domino-deploy/terraform/cluster.tfvars
- domino-deploy/terraform/nodes.tfvars
NOTE: The eks configuration is required in both the infra and cluster modules because the Kubernetes version is used for installing the kubectl binary on the bastion host. Similarly, default_node_groups and additional_node_groups must be defined in both the infra and nodes modules, as the availability zones for the nodes are necessary for setting up the network infrastructure.
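To make this duplication concrete, here is a minimal sketch; the attribute and node group names below are illustrative assumptions, so consult each variables.tf for the actual schema:

```hcl
# infra.tfvars and cluster.tfvars would both carry the eks settings, for example:
eks = {
  k8s_version = "1.32"   # attribute name is an assumption; check variables.tf
}

# infra.tfvars and nodes.tfvars would both carry the node group definitions, for example:
default_node_groups = {
  compute = {
    availability_zone_ids = ["usw2-az1", "usw2-az2"]   # group shape is illustrative only
  }
}
```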
The deployment requires an SSH key. Update the ssh_pvt_key_path variable in domino-deploy/terraform/infra.tfvars with the full path of your key (we recommend you place your key under the domino-deploy/terraform directory).
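For example, the entry in infra.tfvars might look like this (the path is a placeholder; use the actual full path to your key):

```hcl
ssh_pvt_key_path = "/path/to/domino-deploy/terraform/domino.pem"
```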
If you don't have an SSH key, you can create one using:
```bash
ssh-keygen -q -P '' -t rsa -b 4096 -m PEM -f domino.pem && chmod 600 domino.pem
```

Verify your AWS credentials are set:

```bash
aws sts get-caller-identity
```

Change into the deployment directory:

```bash
cd domino-deploy
```

Familiarize yourself with the tf.sh usage.
At this point all requirements should be set to provision the infrastructure.
For each of the modules, run init, plan, inspect the plan, then apply in the following order:
1. infra
2. cluster
3. nodes
Note: You can use all in place of a module name, but it is recommended to plan and apply one module at a time so that each plan can be carefully examined.
- Init all modules:
  ```bash
  ./tf.sh all init
  ```
- infra plan:
  ```bash
  ./tf.sh infra plan
  ```
  ❗ Carefully inspect the actions detailed in the infra plan for correctness before proceeding.
- infra apply:
  ```bash
  ./tf.sh infra apply
  ```
- cluster plan:
  ```bash
  ./tf.sh cluster plan
  ```
  ❗ Carefully inspect the actions detailed in the cluster plan for correctness before proceeding.
- cluster apply:
  ```bash
  ./tf.sh cluster apply
  ```
- nodes plan:
  ```bash
  ./tf.sh nodes plan
  ```
  ❗ Carefully inspect the actions detailed in the nodes plan for correctness before proceeding.
- nodes apply:
  ```bash
  ./tf.sh nodes apply
  ```

To interact with the EKS Control Plane using kubectl or helm commands, you'll need to set up both the appropriate AWS credentials and the KUBECONFIG environment variable. If your EKS cluster is private, you can use mechanisms provided by this module to establish an SSH tunnel through a Bastion host. However, if your EKS endpoint is publicly accessible, you only need to follow steps 1-3 below.
For ease of setup, use the k8s-functions.sh script, which contains helper functions for cluster configuration.
1. Verify AWS credentials: ensure your AWS credentials are properly configured by running:
   ```bash
   aws sts get-caller-identity
   ```
2. Import functions: source the k8s-functions.sh script to import its functions into your current shell:
   ```bash
   source k8s-functions.sh
   ```
3. Set KUBECONFIG: use the check_kubeconfig function to set the KUBECONFIG environment variable appropriately:
   ```bash
   check_kubeconfig
   ```
4. Open SSH tunnel (optional): if your EKS cluster is private, open an SSH tunnel through the Bastion host by executing:
   ```bash
   open_ssh_tunnel_to_k8s_api
   ```
5. Close SSH tunnel: to close the SSH tunnel, run:
   ```bash
   close_ssh_tunnel_to_k8s_api
   ```

Run the command below to generate a list of infrastructure values. These values are necessary for configuring the domino.yaml file, which is in turn used for installing the Domino product.

```bash
./tf.sh infra output domino_config_values
```

This command will output a set of key-value pairs, extracted from the infrastructure setup, that can be used as inputs in the domino.yaml configuration file.
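If you want to keep these values at hand while filling in domino.yaml, you could also capture the output to a file (the file name is arbitrary, and any wrapper output from tf.sh will be captured as well):

```bash
# Show the config values and save a copy for later reference.
./tf.sh infra output domino_config_values | tee domino_config_values.txt
```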
If you would like to increase the safety of data stored in AWS S3 and EFS by backing them up into another account (accounts under the same AWS Organization), use the terraform-aws-domino-backup module:

- Define another provider for the backup account in main.tf for the infra module.
Location:

```
domino-deploy
├── terraform
│   ├── infra
│   │   ├── main.tf
```

Content:

```hcl
provider "aws" {
  alias  = "domino-backup"
  region = <<Backup Account Region>>
}
```
- Add the following content:

```hcl
module "backups" {
  count  = 1
  source = "github.com/dominodatalab/terraform-aws-domino-backup.git?ref=v1.0.12"
  providers = {
    aws.dst = aws.domino-backup
  }
}
```
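Because this introduces a new module and provider, the infra module needs to be re-initialized and applied; assuming the tf.sh wrapper shown earlier, something along these lines should work:

```bash
# Re-initialize the infra module so Terraform picks up the new backup module and provider,
# then review and apply the resulting plan.
./tf.sh infra init
./tf.sh infra plan
./tf.sh infra apply
```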
This module supports running workloads on Karpenter instead of EKS managed node groups.
If you have an existing deployment and want to migrate to using Karpenter, see karpenter-migration.
To create a new deployment that uses Karpenter:

- Complete all standard steps.
- Add the following to your infra.tfvars and nodes.tfvars (see the note on selecting AZs in karpenter-availability_zone_ids):
```hcl
karpenter_node_groups = {
  karpenter = {
    availability_zone_ids = ["usw2-az1", "usw2-az2", "usw2-az3", "usw2-az4"]
    single_nodegroup      = true
  }
}

default_node_groups    = null
additional_node_groups = null
```

- Add the following to the cluster.tfvars:

```hcl
# Consult the karpenter variable for additional options.
karpenter = {
  enabled = true
}
```

- Plan and apply the changes:

```bash
./tf.sh all plan
./tf.sh all apply
```

- See karpenter-configurations on how to configure ec2nodeclasses and nodepools.
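Once the cluster is up and kubectl is configured as described above, you can check which Karpenter resources exist in the cluster; this assumes the Karpenter CRDs have been installed as part of enabling Karpenter:

```bash
# List the EC2NodeClass and NodePool resources (both are cluster-scoped).
kubectl get ec2nodeclasses,nodepools
```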