The Ingress Controller on Azure Kubernetes Service challenge is designed to foster learning of cloud-native routing practices through a series of steps. The sample solution is based on the NGINX and Apache HTTP web servers, deployed as containerized services on Azure Kubernetes Service and exposed securely through an ingress controller with cert-manager.
The solution is split into the following services:
- NGINX (a web server that can also be used as a reverse proxy, load balancer, mail proxy and HTTP cache)
- Apache HTTP Server (a web server notable for playing a key role in the initial growth of the World Wide Web)
Azure Cloud Services:
- Kubernetes Service
Prerequisites:
- An active Azure subscription (there are several ways you can procure one)
- VS Code or Visual Studio 2019 Community
- Docker Desktop (https://www.docker.com/get-started). For older Mac and Windows systems that do not meet the requirements of Docker Desktop for Mac and Docker Desktop for Windows, you can use Docker Toolbox.
Is this your first time using Docker? Review the following links:
Is this your first time using Kubernetes Service? Review the following links:
Is this your first time using an Ingress Controller? Review the following links:
Is this your first time using Cert-Manager? Review the following links:
Success criteria:
- The NGINX web server must be running and exposed on a specific cluster IP address.
- The Apache HTTP Server must be running and exposed on a specific cluster IP address.
- The ingress controller must be installed in the cluster and replicated across at least two pods.
- A production certificate must be issued correctly to expose a secure DNS name for the ingress controller.
- The Kubernetes Service must be up and running, ready for monitoring through the dashboard.
- NGINX and Apache HTTP Server must be accessible through the secure ingress controller DNS name.
Step 1:
- Sign in to the Azure Portal and open the Cloud Shell (Bash), or use the standalone Azure Cloud Shell.
Having issues? Review the cheat link!
Step 2:
- Using the Cloud Shell (Bash), clone the GitHub repo.
git clone https://github.com/robece/microservices-aks.git
Having issues? Review the cheat link!
Step 3:
- Using the Cloud Shell (Bash), create a new resource group to hold all the resources for the challenge.
az group create --name [ResourceGroupName] --location [ResourceGroupLocation]
[ResourceGroupName] = name of the resource group, e.g. myuniqueresourcegroup01
[ResourceGroupLocation] = location of the resource group, e.g. westus
Important: Take note of the [ResourceGroupName] and [ResourceGroupLocation]; you may need them later.
Having issues? Review the cheat link!
Step 4:
- Create a new Kubernetes Service resource. For this step you have two options: do it manually or execute the Cloud Shell bash script located in challenge-02/configuration/kubernetes-service/config.sh (a sketch of the manual option is shown after the notes below).
Important: In this script you will need to provide:
1. The name of the Kubernetes cluster
2. The name of the resource group previously created for the challenge resources
3. The number of nodes in the Kubernetes cluster
4. The VM size of the nodes in the Kubernetes cluster
After the script execution, remember to take note of the CALL TO ACTION annotation:
1. KubernetesCluster
2. KubernetesClusterResourceGroup
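If you prefer the manual option, the script most likely wraps a single az aks create call. A minimal sketch of an equivalent command is shown below; the node count and VM size are illustrative values, and the actual config.sh in the repo may use different flags.
# Hypothetical sketch; the repo script config.sh may differ.
az aks create \
  --name [KubernetesCluster] \
  --resource-group [ResourceGroupName] \
  --node-count 2 \
  --node-vm-size Standard_DS2_v2 \
  --generate-ssh-keys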
Having issues? Review the cheat link!
Step 5:
- At this point you will need to get the cluster credentials and validate the current context you will be working in.
Get the cluster context locally:
az aks get-credentials --name [KubernetesCluster] --resource-group [KubernetesClusterResourceGroup]
[KubernetesCluster] = kubernetes cluster name
[KubernetesClusterResourceGroup] = kubernetes cluster resource group
Validate the current context:
kubectl config current-context
Having issues? Review the cheat link!
Step 6:
- Create the Kubernetes namespaces.
Challenge-02 namespace:
kubectl create namespace challenge-02
Cert-Manager namespace:
kubectl create namespace cert-manager
Having issues? Review the cheat link!
Step 7:
- Install Helm 3 by following the instructions here: https://helm.sh/docs/intro/install/.
To validate that Helm was installed successfully, run the command: helm version.
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.15"}
Optional: If you are using Windows 10, use any Linux bash shell or the Cloud Shell (Bash) directly in the Azure Portal.
Having issues? Review the cheat link!
Step 8:
- Go to challenge-02/source and deploy the NGINX manifest directly in the cluster.
kubectl apply -f deployment-nginx.yml
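For reference, a deployment manifest like deployment-nginx.yml typically follows the shape sketched below; the actual file in the repo may use different labels, image tags or replica counts, and deployment-httpd.yml in Step 10 follows the same pattern with the httpd image.
# Hypothetical sketch; the actual deployment-nginx.yml in the repo may differ.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: challenge-02
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx          # official NGINX image
          ports:
            - containerPort: 80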
Having issues? Review the cheat link!
Step 9:
- In the same path, challenge-02/source, expose the deployment internally.
kubectl expose deployment nginx --namespace challenge-02
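Note that kubectl expose without a --type flag creates a ClusterIP service, which is what the success criteria ask for: NGINX exposed on a cluster IP address reachable only from inside the cluster. You can confirm the assigned cluster IP with:
kubectl get service nginx --namespace challenge-02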
Having issues? Review the cheat link!
Step 10:
- Go to challenge-02/source and deploy the Apache HTTP Server (httpd) manifest directly in the cluster.
kubectl apply -f deployment-httpd.yml
Having issues? Review the cheat link!
Step 11:
- In the same path, challenge-02/source, expose the deployment internally.
kubectl expose deployment httpd --namespace challenge-02
Having issues? Review the cheat link!
Step 12:
- Let's deploy the ingress controller chart directly in the cluster using Helm 3.
- Add the Kubernetes charts repository.
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
- Update the Helm repositories.
helm repo update
- Install the ingress controller in the cluster.
helm install --generate-name stable/nginx-ingress --set controller.replicaCount=2 --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux --version=1.26.2 --namespace challenge-02
- Get the ingress controller service external IP.
kubectl get service -l app=nginx-ingress --namespace challenge-02
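The EXTERNAL-IP column may show <pending> for a minute or two while Azure provisions the public IP; re-run the command (or append --watch) until an address appears, and take note of it for Step 14.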
- To validate the chart deployment, use the following commands.
To get the deployed pods in the challenge-02 namespace: kubectl get pods --namespace challenge-02
To get the deployed services in the challenge-02 namespace: kubectl get services --namespace challenge-02
To see the charts deployed using Helm in the challenge-02 namespace: helm list --namespace challenge-02
To remove a chart deployed using Helm in the challenge-02 namespace: helm uninstall [chartName] --namespace challenge-02
Having issues? Review the cheat link!
Step 13:
- Let's deploy the cert-manager chart directly in the cluster using Helm 3.
- Install the CustomResourceDefinition resources separately.
kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml
- Label the cert-manager namespace to disable resource validation.
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
- Add the Jetstack charts repository.
helm repo add jetstack https://charts.jetstack.io
- Update helm repositories.
helm repo update
- Install the cert-manager in the cluster.
helm install --generate-name jetstack/cert-manager --set nodeSelector."beta\.kubernetes\.io/os"=linux --set webhook.nodeSelector."beta\.kubernetes\.io/os"=linux --set cainjector.nodeSelector."beta\.kubernetes\.io/os"=linux --set ingressShim.defaultIssuerName=letsencrypt-prod --set ingressShim.defaultIssuerKind=ClusterIssuer --version v0.12.0 --namespace cert-manager
- To validate the chart deployment use the following commands.
To monitor the certificate in the cert-manager namespace: kubectl describe certificate certificate --namespace cert-manager
To monitor the cluster issuer in the cert-manager namespace: kubectl describe clusterissuer letsencrypt-prod --namespace cert-manager
To get the deployed pods in cert-manager namespace: kubectl get pods --namespace cert-manager
To get the deployed services in cert-manager namespace: kubectl get services --namespace cert-manager
To see the charts deployed using helm in cert-manager namespace: helm list --namespace cert-manager
To remove a chart deployed using helm in cert-manager namespace: helm uninstall [chartName] --namespace cert-manager
Having issues? Review the cheat link!
Step 14:
- Go to challenge-02/configuration/scripts, then modify and run the following scripts.
- Get the current external IP address of the NGINX ingress controller service and replace the tag [NGINX_INGRESS_PUBLIC_IP] in the dns-config.sh script.
- Create a unique DNS label (e.g. robecedns01) and replace the tag [NGINX_INGRESS_DNS] with the new value in the dns-config.sh script.
- Run the bash script:
sh dns-config.sh
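For reference, dns-config.sh most likely assigns the DNS label to the public IP resource that backs the ingress controller service, so the FQDN becomes [NGINX_INGRESS_DNS].[CLUSTER_LOCATION].cloudapp.azure.com. A minimal sketch of that logic with the Azure CLI is shown below; the actual script in the repo may differ.
# Hypothetical sketch; the actual dns-config.sh in the repo may differ.
# Find the public IP resource that matches the ingress controller's external IP.
PUBLIC_IP_ID=$(az network public-ip list --query "[?ipAddress=='[NGINX_INGRESS_PUBLIC_IP]'].id" --output tsv)
# Assign the DNS label so Azure generates the cloudapp.azure.com FQDN.
az network public-ip update --ids $PUBLIC_IP_ID --dns-name [NGINX_INGRESS_DNS]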
- Edit the cluster-issuer.yml file and replace the tag [CUSTOM_EMAIL_ADDRESS] with a valid email address.
- Run the command:
kubectl apply -f cluster-issuer.yml
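For reference, a cluster-issuer.yml for Let's Encrypt production with cert-manager v0.12 typically looks like the sketch below; the actual file in the repo may differ, but the issuer name should be letsencrypt-prod to match the ingressShim settings used when installing the chart in Step 13.
# Hypothetical sketch; the actual cluster-issuer.yml in the repo may differ.
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [CUSTOM_EMAIL_ADDRESS]
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx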
- Edit the certificate.yml file and replace the tag [NGINX_INGRESS_DNS] with the DNS label previously defined in the bash script.
- Edit the certificate.yml file and replace the tag [CLUSTER_LOCATION] with the cluster location (e.g. westus | eastus).
- Run the command:
kubectl apply -f certificate.yml
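For reference, and consistent with the validation output shown later in this step, certificate.yml is essentially a Certificate resource that points at the ClusterIssuer. A hedged sketch follows; the actual file in the repo may differ.
# Hypothetical sketch; the actual certificate.yml in the repo may differ.
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: certificate
  namespace: cert-manager
spec:
  secretName: certificate
  dnsNames:
    - [NGINX_INGRESS_DNS].[CLUSTER_LOCATION].cloudapp.azure.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer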
- To validate that your certificate has been issued correctly by Let's Encrypt, run the command:
kubectl describe certificate certificate --namespace cert-manager
You should see output similar to the following:
Name:         certificate
Namespace:    cert-manager
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"cert-manager.io/v1alpha2","kind":"Certificate","metadata":{"annotations":{},"name":"certificate","namespace":"cert-manager"...
API Version:  cert-manager.io/v1alpha2
Kind:         Certificate
Metadata:
  Creation Timestamp:  2019-12-07T01:13:51Z
  Generation:          1
  Resource Version:    64154
  Self Link:           /apis/cert-manager.io/v1alpha2/namespaces/cert-manager/certificates/certificate
  UID:                 d49f9eaf-188e-11ea-b185-8ab8eabd5da6
Spec:
  Dns Names:
    robece-challenge02.westus.cloudapp.azure.com
  Issuer Ref:
    Kind:       ClusterIssuer
    Name:       letsencrypt-prod
  Secret Name:  certificate
Status:
  Conditions:
    Last Transition Time:  2019-12-07T01:14:17Z
    Message:               Certificate is up to date and has not expired
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2020-03-06T00:14:16Z
Events:
  Type    Reason        Age   From          Message
  ----    ------        ----  ----          -------
  Normal  GeneratedKey  27s   cert-manager  Generated a new private key
  Normal  Requested     27s   cert-manager  Created new CertificateRequest resource "certificate-574162192"
  Normal  Issued        2s    cert-manager  Certificate issued successfully
- Edit the ingress-route.yml file and replace the tag [NGINX_INGRESS_DNS] with the DNS label previously defined in the bash script.
- Edit the ingress-route.yml file and replace the tag [CLUSTER_LOCATION] with the cluster location (e.g. westus | eastus).
- Run the command:
kubectl apply -f ingress-route.yml
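For reference, ingress-route.yml is an Ingress resource that terminates TLS with the issued certificate and routes traffic to the nginx and httpd services. A hedged sketch is shown below; the API version, routing paths (/nginx and /httpd are assumptions here) and secret reference in the actual repo file may differ.
# Hypothetical sketch; the actual ingress-route.yml in the repo may differ.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-route
  namespace: challenge-02
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - [NGINX_INGRESS_DNS].[CLUSTER_LOCATION].cloudapp.azure.com
      secretName: certificate
  rules:
    - host: [NGINX_INGRESS_DNS].[CLUSTER_LOCATION].cloudapp.azure.com
      http:
        paths:
          - path: /nginx
            backend:
              serviceName: nginx
              servicePort: 80
          - path: /httpd
            backend:
              serviceName: httpd
              servicePort: 80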
Having issues? Review the cheat link!
Step 15:
- Congratulations! You can now navigate through your exposed, secured ingress controller to reach the internal services in the cluster for NGINX and Apache HTTP Server.
You now have the knowledge to create and configure a secured ingress controller to protect your internal applications.