
Helm #1


Open · wants to merge 24 commits into base: main
34 changes: 34 additions & 0 deletions .github/actions/setup/action.yml
@@ -0,0 +1,34 @@
name: 'Setup'
runs:
using: "composite"
steps:
- name: Install aws iam authenticator
shell: bash
run: brew install aws-iam-authenticator

- name: Install awscli
shell: bash
run: brew install awscli

- name: Install kubectl
shell: bash
run: brew install kubernetes-cli

- name: Install wget
shell: bash
run: brew install wget

- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: '^1.18.0'

- name: Setup terraform
uses: hashicorp/setup-terraform@v1
with:
terraform_version: ">= 1.1.2"
terraform_wrapper: false

- name: Terraform init
shell: bash
run: terraform init
27 changes: 27 additions & 0 deletions .github/workflows/terraform-deploy.yml
@@ -0,0 +1,27 @@
name: Terraform Deploy

env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

on:
push: {}

jobs:
deploy-terraform:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2

- name: Setup
uses: ./.github/actions/setup

- name: Terraform test
run: cd test && go test -v -timeout 2000s infra_test.go

- name: Terraform deploy
run: terraform apply --auto-approve

- name: Terraform destroy
run: terraform destroy --auto-approve
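
The README notes that this workflow will eventually be "locked only for main". One way to sketch that restriction (an assumption on my part, not part of this PR) is a branch filter on the trigger, or a step-level guard:

```yaml
# Sketch only: restrict deploys to main. Both variants below are
# illustrative assumptions, not part of the PR as written.
on:
  push:
    branches: [main]

# ...or guard individual steps instead:
#   - name: Terraform deploy
#     if: github.ref == 'refs/heads/main'
#     run: terraform apply --auto-approve
```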
26 changes: 26 additions & 0 deletions .github/workflows/terraform-validate.yml
@@ -0,0 +1,26 @@
name: Terraform Validate

env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

on: pull_request

jobs:
deploy-validate:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2

- name: Setup
uses: ./.github/actions/setup

- name: Terraform format
run: terraform fmt -check

- name: Terraform validate
run: terraform validate -no-color

- name: Terraform plan
run: terraform plan -no-color
99 changes: 99 additions & 0 deletions .terraform.lock.hcl

(generated dependency lock file; contents not rendered)

41 changes: 40 additions & 1 deletion README.md
@@ -1 +1,40 @@
# schedule-nginx-deployment
# deploy-nginx
deploy-nginx deploys four replicas of nginx on an EKS cluster using a Helm release, distributed across three nodes

## Setup
These environment variables are required:

- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
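
For local runs, these can be exported in the shell before invoking terraform; the values below are placeholders, not real credentials:

```shell
# Placeholder values only; substitute your own IAM credentials.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLE"
export AWS_SECRET_ACCESS_KEY="examplesecretkey"
```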

```shell
$ terraform init
```

## Test

```shell
cd test && go test -v -timeout 2000s infra_test.go
```

## Deploy
```shell
$ terraform apply
```

## Destroy
```shell
$ terraform destroy
```

## GitHub Workflows

Every push on any branch triggers ```deploy-terraform```, which tests, deploys, then destroys (this will later be restricted to main only). \
Pull requests into main and develop trigger ```validate-terraform```, which runs format, validate, and plan.

Environment variables are wired in as secrets, as follows:
```yaml
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```
58 changes: 58 additions & 0 deletions eks-cluster.tf
@@ -0,0 +1,58 @@
data "aws_eks_cluster" "cluster" {
name = aws_eks_cluster.this.id
}

data "aws_eks_cluster_auth" "cluster" {
name = aws_eks_cluster.this.id
}

locals {
cluster_name = "nginx-cluster${random_string.suffix.result}"
}

resource "random_string" "suffix" {
length = 8
special = false
}

resource "aws_eks_cluster" "this" {
name = local.cluster_name
version = var.kubernetes_version
role_arn = aws_iam_role.eks.arn

vpc_config {
subnet_ids = aws_subnet.this[*].id
}

depends_on = [
aws_iam_role_policy_attachment.eks_AmazonEKSClusterPolicy,
]
}

resource "aws_eks_node_group" "this" {
cluster_name = aws_eks_cluster.this.name
node_group_name = local.cluster_name
node_role_arn = aws_iam_role.eks_node.arn
subnet_ids = aws_subnet.this[*].id
instance_types = ["t2.micro"]

# https://stackoverflow.com/questions/72161772/k8s-deployment-is-not-scaling-on-eks-cluster-too-many-pods
scaling_config {
desired_size = 3
max_size = 4
min_size = 2
}

# Optional: Allow external changes without Terraform plan difference
lifecycle {
ignore_changes = [scaling_config[0].desired_size]
}

depends_on = [
aws_iam_role_policy_attachment.eks_AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.eks_AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.eks_AmazonEC2ContainerRegistryReadOnly,
]
}
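
Once terraform apply succeeds, kubectl can be pointed at the new cluster using the module outputs; a usage sketch (assumes the AWS CLI is installed and credentials are configured, and requires an applied state):

```shell
# Sketch: wire kubectl to the freshly created cluster.
aws eks update-kubeconfig \
  --name   "$(terraform output -raw cluster_name)" \
  --region "$(terraform output -raw region)"
kubectl get nodes
```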


53 changes: 53 additions & 0 deletions iam.tf
@@ -0,0 +1,53 @@
resource "aws_iam_role" "eks" {
name = local.cluster_name

assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "eks_AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.eks.name
}

resource "aws_iam_role" "eks_node" {
name = "${local.cluster_name}-node"

assume_role_policy = jsonencode({
Statement = [{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "ec2.amazonaws.com"
}
}]
Version = "2012-10-17"
})
}

resource "aws_iam_role_policy_attachment" "eks_AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.eks_node.name
}

resource "aws_iam_role_policy_attachment" "eks_AmazonEKS_CNI_Policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.eks_node.name
}

resource "aws_iam_role_policy_attachment" "eks_AmazonEC2ContainerRegistryReadOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.eks_node.name
}
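
The two roles above express the same trust-policy shape in different styles (a heredoc vs jsonencode). For consistency, the cluster role could also use jsonencode; a behavior-equivalent sketch, not part of the PR:

```hcl
resource "aws_iam_role" "eks" {
  name = local.cluster_name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "eks.amazonaws.com" }
    }]
  })
}
```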
18 changes: 18 additions & 0 deletions nginx.tf
@@ -0,0 +1,18 @@
resource "helm_release" "nginx" {
# Kubernetes 1.19+
# Helm 3.2.0+
create_namespace = true
chart = var.nginx_chart
name = var.nginx_name
namespace = var.namespace
repository = var.nginx_repository

set {
name = "replicaCount"
value = var.replica
}

depends_on = [
aws_eks_cluster.this
]
}
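
The release references several variables (`var.nginx_chart`, `var.nginx_name`, `var.namespace`, `var.nginx_repository`, `var.replica`) whose definitions are not part of this diff. A hypothetical `variables.tf` shape; the defaults are illustrative assumptions, except `replica = 4`, which matches the README:

```hcl
# Hypothetical variable definitions; the defaults are assumptions,
# not taken from the PR (replica = 4 matches the README).
variable "nginx_chart" {
  default = "nginx"
}
variable "nginx_name" {
  default = "nginx"
}
variable "namespace" {
  default = "nginx"
}
variable "nginx_repository" {
  default = "https://charts.bitnami.com/bitnami"
}
variable "replica" {
  default = 4
}
```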
21 changes: 21 additions & 0 deletions output.tf
@@ -0,0 +1,21 @@
output "cluster_endpoint" {
description = "eks cluster endpoint"
value = data.aws_eks_cluster.cluster.endpoint
}

output "cluster_name" {
description = "eks cluster name"
value = aws_eks_cluster.this.id
}


output "region" {
description = "aws region"
value = var.region
}


output "nginx_release_namespace" {
description = "nginx release namespace"
value = helm_release.nginx.namespace
}
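
After an apply, individual outputs can be read back for scripting; a small usage sketch (requires an applied state):

```shell
# Read single output values without quotes or formatting.
terraform output -raw cluster_endpoint
terraform output -raw nginx_release_namespace
```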