
Commit a60fb23 (parent: 7cccb19)

Author: Abdullah Khawer

feat: refactor the whole Terraform code and update the README accordingly. In particular:

- Set the threshold for CPU, memory, and disk space utilization alarms to 85%.
- Create locals to define the AWS VPC private subnets along with their length.
- Select the correct AWS VPC private subnet ID even if there are fewer subnets than the number of AWS EC2 instances.
- Select the correct private AWS Route 53 hosted zone if both public and private hosted zones exist with the same name/domain.
- Set the correct AWS ECS cluster name under dimensions for AWS CloudWatch metric alarms.
- Fix the Terraform code with respect to the AWS Terraform provider v4.65.0.
- Update the backups AWS S3 bucket's lifecycle policy rules to set a rule for INTELLIGENT_TIERING.
- Add code to wait for the first AWS EC2 instance to be running and complete status checks.

19 files changed: +681 −633 lines
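The subnet-selection fix described in the commit message (locals holding the private subnet IDs and their count, so the module still works when there are fewer subnets than instances) can be sketched roughly as follows. This is an illustrative sketch, not the commit's actual code; the data source and variable names are assumptions:

```hcl
# Hypothetical sketch of the subnet-selection locals; names are illustrative.
locals {
  private_subnet_ids    = data.aws_subnets.private.ids
  private_subnets_count = length(local.private_subnet_ids)

  # element() wraps around the list, so instance index 2 still resolves to a
  # valid subnet ID when only 1 or 2 private subnets exist.
  instance_subnet_ids = [
    for i in range(var.number_of_instances) :
    element(local.private_subnet_ids, i)
  ]
}
```

The wrap-around behavior of `element()` is what makes this safe for clusters with more instances than subnets, at the cost of co-locating some instances in the same subnet.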

README.md: 22 additions & 20 deletions

````diff
@@ -1,20 +1,21 @@
-# MongoDB cluster on AWS ECS
+# MongoDB Cluster on AWS ECS - Terraform Module
 
 - Founder: Abdullah Khawer (LinkedIn: https://www.linkedin.com/in/abdullah-khawer/)
 
 ## Introduction
 
-A Terraform module developed to quickly deploy a secure, persistent, highly available, self healing, efficient and cost effective single-node or multi-node MongoDB NoSQL document database cluster on AWS ECS cluster as there is no managed service available for MongoDB on AWS with such features.
+A Terraform module developed to quickly deploy a secure, persistent, highly available, self healing, efficient and cost effective single-node or multi-node MongoDB NoSQL document database cluster on AWS ECS cluster with monitoring and alerting enabled as there is no managed service available for MongoDB on AWS with such features.
 
 ## Key Highlights
 
-- A single-node or multi-node MongoDB cluster under AWS Auto Scaling group to launch multiple MongoDB nodes as replicas to make it highly available, efficient and self healing with a help of bootstrapping script with some customizations.
-- Using AWS ECS service registry with awsvpc as network mode instead of AWS ELB to save cost on networking side and make it more secure. AWS ECS task IPs are updated by the bootstrapping script on an AWS Route 53 hosted zone.
+- A single-node (1 node) or multi-node (2 or 3 nodes) MongoDB cluster under AWS Auto Scaling group to launch multiple MongoDB nodes as replicas to make it highly available, efficient and self healing with a help of bootstrapping script with some customizations.
+- Using AWS Route 53 private hosted zone for AWS ECS services with `awsvpc` as the network mode instead of AWS ELB to save cost on networking side and make it more secure. AWS ECS services' task IPs are updated on the AWS Route 53 private hosted zone by the bootstrapping script that runs on each AWS EC2 instance node as user data.
 - Persistent and encrypted AWS EBS volumes of type gp3 using rexray/ebs Docker plugin so the data stays secure and reliable.
-- AWS S3 bucket for backups storage for disaster recovery along with lifecycle rules for data archival and deletion.
-- Custom backup and restore scripts for data migration and disaster recovery capabilities available on each AWS EC2 instance due to a bootstrapping script.
-- Each AWS EC2 instance is configured with various customizations like pre-installed wget, unzip, awscli, Docker, ECS agent, MongoDB, Mongosh, MongoDB database tools, key file for MongoDB Cluster, custom agent for AWS EBS volumes disk usage monitoring and cronjobs to take a backup at 03:00 AM daily and to send disk usage metrics to AWS CloudWatch at every minute.
-- Each AWS EC2 instance is configured with soft rlimits and ulimits defined and transparent huge pages disabled to make MongoDB database more efficient.
+- AWS S3 bucket for backups storage for disaster recovery along with a lifecycle rule with Intelligent-Tiering as storage class for objects to save data storage cost.
+- Custom backup and restore scripts for data migration and disaster recovery capabilities are available on each AWS EC2 instance node by the bootstrapping script running as user data.
+- Each AWS EC2 instance node is configured with various customizations like pre-installed wget, unzip, awscli, Docker, ECS agent, MongoDB, Mongosh, MongoDB database tools, key file for MongoDB Cluster, custom agent for AWS EBS volumes disk usage monitoring and cronjobs to take a backup at 03:00 AM UTC daily and to send disk usage metrics to AWS CloudWatch at every minute.
+- Each AWS EC2 instance node is configured with soft rlimits and ulimits defined and transparent huge pages disabled to make MongoDB database more efficient.
+- AWS CloudWatch alarms to send alerts when the utilization of CPU, Memory and Disk Space goes beyond 85%.
 
 ## Usage Notes
 
@@ -28,49 +29,50 @@ Following are the resources that should exist already before starting the deploy
 - `openssl rand -base64 756 > mongodb.key`
 - `chmod 400 mongodb.key`
 - 1 key pair named `[PROJECT]-[ENVIRONMENT_NAME]-mongodb` under **AWS EC2 Key Pairs**.
-- 1 private hosted zone under **AWS Route53** with any working domain.
-- 1 vpc under **AWS VPC** having at least 1 private subnet or ideally, 3 private and 3 public subnets with name tags (e.g., Private-1-Subnet, Private-2-Subnet, etc).
+- 1 private hosted zone under **AWS Route53** with a working domain.
+- 1 vpc under **AWS VPC** having at least 1, 2 or 3 private subnets having a name tag on each (e.g., Private-1-Subnet, Private-2-Subnet, etc).
+- 1 topic under **AWS SNS** to send notifications via AWS CloudWatch alarms.
 
 ## Deployment Instructions
 
 Simply deploy it from the terraform directory directly or either as a Terraform module by specifying the desired values for the variables. You can check `terraform-usage-example.tf` file as an example.
 
 ## Post Deployment Replica Set Configuration
 
-Once the deployment is done, log into the MongoDB cluster via its 1st AWS EC2 instance node using AWS SSM Session Manager using the following command: `mongosh "mongodb://[USERNAME]:[PASSWORD]@mongodb1.[ENVIRONMENT_NAME]-local:27017/admin?&retryWrites=false"`
+Once the deployment is done, log into the MongoDB cluster via its 1st AWS EC2 instance node using AWS SSM Session Manager using the following command after replacing `[USERNAME]`, `[PASSWORD]`, `[ENVIRONMENT_NAME]` and `[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]` in it: `mongosh "mongodb://[USERNAME]:[PASSWORD]@[ENVIRONMENT_NAME]-mongodb1.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017/admin?&retryWrites=false"`
 
-Then initiate the replica set using the following command:
+Then initiate the replica set using the following command after replacing `[ENVIRONMENT_NAME]` and `[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]` in it:
 
 ```
 rs.initiate({
 _id: "rs0",
 members: [
-{ _id: 0, host: "mongodb1.[ENVIRONMENT_NAME]-local:27017" },
-{ _id: 1, host: "mongodb2.[ENVIRONMENT_NAME]-local:27017" },
-{ _id: 2, host: "mongodb3.[ENVIRONMENT_NAME]-local:27017" }
+{ _id: 0, host: "[ENVIRONMENT_NAME]-mongodb1.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017" },
+{ _id: 1, host: "[ENVIRONMENT_NAME]-mongodb2.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017" },
+{ _id: 2, host: "[ENVIRONMENT_NAME]-mongodb3.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017" }
 ]
 })
 ```
 
-You can now connect to the replica set using the following command: `mongosh "mongodb://[USERNAME]:[PASSWORD]@mongodb1.[ENVIRONMENT_NAME]-local:27017,mongodb2.[ENVIRONMENT_NAME]-local:27017,mongodb3.[ENVIRONMENT_NAME]-local:27017/admin?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=true"`
+You can now connect to the replica set using the following command after replacing `[USERNAME]`, `[PASSWORD]`, `[ENVIRONMENT_NAME]` and `[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]` in it: `mongosh "mongodb://[USERNAME]:[PASSWORD]@[ENVIRONMENT_NAME]-mongodb1.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017,[ENVIRONMENT_NAME]-mongodb2.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017,[ENVIRONMENT_NAME]-mongodb3.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017/admin?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=true"`
 
 *Note: The sample commands in the above example assumes that the cluster has 3 nodes.*
 
 ## Replica Set Recovery
 
-If you lost the replica set, you can reconfigure it using the following commands:
+If you lost the replica set, you can reconfigure it using the following commands after replacing `[ENVIRONMENT_NAME]` and `[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]` in them:
 
 ```
 rs.reconfig({
 _id: "rs0",
 members: [
-{ _id: 0, host: "mongodb1.stage-local:27017" }
+{ _id: 0, host: "[ENVIRONMENT_NAME]-mongodb1.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017" }
 ]
 }, {"force":true})
 
-rs.add({ _id: 1, host: "mongodb2.stage-local:27017" })
+rs.add({ _id: 1, host: "[ENVIRONMENT_NAME]-mongodb2.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017" })
 
-rs.add({ _id: 2, host: "mongodb3.stage-local:27017" })
+rs.add({ _id: 2, host: "[ENVIRONMENT_NAME]-mongodb3.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017" })
 ```
 
 *Note: The sample commands in the above example assumes that the cluster has 3 nodes.*
````
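The 85% alarm highlight added to the README above pairs with the commit's note about setting the correct AWS ECS cluster name under the alarm dimensions. A minimal provider-v4-style alarm along those lines might look like this sketch; the resource names, metric choice, and periods are illustrative assumptions, not the commit's actual code:

```hcl
# Illustrative sketch only; names, periods, and dimensions are assumptions.
resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  alarm_name          = "${var.environment_name}-mongodb-cpu-utilization-high"
  namespace           = "AWS/ECS"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  period              = 60
  evaluation_periods  = 5
  comparison_operator = "GreaterThanThreshold"
  threshold           = 85
  treat_missing_data  = var.alarm_treat_missing_data
  alarm_actions       = [var.aws_sns_topic]

  dimensions = {
    # The commit notes fixing this value to the actual ECS cluster name.
    ClusterName = aws_ecs_cluster.mongodb.name
  }
}
```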

terraform-usage-example.tf: 3 additions & 3 deletions

```diff
@@ -20,12 +20,12 @@ module "mongodb-cluster-on-aws-ecs" {
 image = "docker.io/mongo:5.0.6"
 hosted_zone_name = "project.net" # dummy value
 ec2_key_pair_name = "project-dev-mongodb" # dummy value
-number_of_instances = 3
-private_subnet_tag_name = "Private-1*" # dummy value
+number_of_instances = 3 # Minimum 1, Maximum 3
+private_subnet_tag_name = "Private-1*" # dummy value
 
 # If you want to enable disk usage monitoring
 monitoring_enabled = true
-alarm_treat_missing_data = "missing"
+alarm_treat_missing_data = "ignore"
 aws_sns_topic = "arn:aws:sns:eu-west-1:012345678910:AWS_SNS_TOPIC_NAME"
 
 # If you want to enable backups
```
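The `hosted_zone_name` input in this example relates to another fix in the commit: selecting the private hosted zone when a public zone exists with the same name. With the AWS provider's `aws_route53_zone` data source this is typically done via the `private_zone` argument; a minimal sketch (data source label assumed):

```hcl
# Sketch: private_zone = true disambiguates when a public hosted zone
# shares the same domain name as the private one.
data "aws_route53_zone" "private" {
  name         = var.hosted_zone_name
  private_zone = true
}
```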

Deleted files:

- terraform/cloudwatch.tf (43 deletions)
- terraform/data.tf (34 deletions)
- terraform/ec2_asg.tf (83 deletions)
- terraform/ecs_cluster.tf (8 deletions)
- terraform/ecs_service.tf (12 deletions)
- terraform/ecs_task_definition.tf (82 deletions)
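The backups bucket change noted in the commit message (an INTELLIGENT_TIERING lifecycle rule, expressed in the split-resource style that AWS provider v4 requires) might look like the following sketch; the resource labels are assumptions, not the commit's actual code:

```hcl
# Sketch only; the bucket resource label is an assumption. Provider v4 moved
# lifecycle rules out of aws_s3_bucket into this dedicated resource.
resource "aws_s3_bucket_lifecycle_configuration" "backups" {
  bucket = aws_s3_bucket.backups.id

  rule {
    id     = "intelligent-tiering"
    status = "Enabled"

    # Move backup objects to the Intelligent-Tiering storage class
    # immediately to reduce storage cost.
    transition {
      days          = 0
      storage_class = "INTELLIGENT_TIERING"
    }
  }
}
```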

0 commit comments
