diff --git a/.gitignore b/.gitignore
index a60260e..3be752c 100644
--- a/.gitignore
+++ b/.gitignore
@@ -2,6 +2,7 @@
############################
*.crt
*.key
+*.jwt
*~
\#*
@@ -14,3 +15,16 @@ Thumbs.db
# Logs #
########
*.log
+/.vs
+
+# Misc Dir #
+############
+.apc/
+
+#Kubernetes-ingress#
+####################
+kubernetes-ingress
+
+n4a-configs-staging/
+*.tar.gz
+
diff --git a/CHANGELOG.md b/CHANGELOG.md
index da02d36..59a78e4 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,5 +1,5 @@
# Changelog
-## 1.0.0 (Month Date, Year)
+## 1.0.0 (April 1, 2024)
-Initial release of the NGINX template repository.
+Initial release of the NGINXaaS for Azure Workshop.
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 24dec40..b5dbd3e 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -8,8 +8,6 @@ The following is a set of guidelines for contributing to this project. We really
[Contributing](#contributing)
-[Code Guidelines](#code-guidelines)
-
[Code of Conduct](https://github.com/nginxinc/nginx-azure-workshops/blob/main/CODE_OF_CONDUCT.md)
## Getting Started
@@ -24,28 +22,6 @@ Follow our [Getting Started Guide](https://github.com/nginxinc/nginx-azure-works
To report a bug, open an issue on GitHub with the label `bug` using the available bug report issue template. Please ensure the bug has not already been reported. **If the bug is a potential security vulnerability, please report it using our [security policy](https://github.com/nginxinc/nginx-azure-workshops/blob/main/SECURITY.md).**
-### Suggest a Feature or Enhancement
-
-To suggest a feature or enhancement, please create an issue on GitHub with the label `enhancement` using the available [feature request template](https://github.com/nginxinc/nginx-azure-workshops/blob/main/.github/feature_request_template.md). Please ensure the feature or enhancement has not already been suggested.
-
-### Open a Pull Request
-
-- Fork the repo, create a branch, implement your changes, add any relevant tests, submit a PR when your changes are **tested** and ready for review.
-- Fill in [our pull request template](https://github.com/nginxinc/nginx-azure-workshops/blob/main/.github/pull_request_template.md).
-
-Note: if you'd like to implement a new feature, please consider creating a [feature request issue](https://github.com/nginxinc/nginx-azure-workshops/blob/main/.github/feature_request_template.md) first to start a discussion about the feature.
-
-## Code Guidelines
-
-
-
-### Git Guidelines
+### Provide Feedback on a Lab Exercise
-- Keep a clean, concise and meaningful git commit history on your branch (within reason), rebasing locally and squashing before submitting a PR.
-- If possible and/or relevant, use the [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/) format when writing a commit message, so that changelogs can be automatically generated
-- Follow the guidelines of writing a good commit message as described here and summarised in the next few points:
- - In the subject line, use the present tense ("Add feature" not "Added feature").
- - In the subject line, use the imperative mood ("Move cursor to..." not "Moves cursor to...").
- - Limit the subject line to 72 characters or less.
- - Reference issues and pull requests liberally after the subject line.
- - Add more detailed description in the body of the git message (`git commit -a` to give you more space and time in your text editor to write a good message instead of `git commit -am`).
+To send us feedback, please create an issue on GitHub with the label `feedback` using the available feedback template.
diff --git a/README.md b/README.md
index 1db4832..c2d629d 100644
--- a/README.md
+++ b/README.md
@@ -1,68 +1,80 @@
-[](https://www.repostatus.org/#active)
-[](https://securityscorecards.dev/viewer/?uri=github.com/nginxinc/template-repository)
-[](https://github.com/nginxinc/template-repository/blob/main/SUPPORT.md)
-[](https://github.com/nginxinc/template-repository/main/CODE_OF_CONDUCT.md)
+# NGINXaaS for Azure Workshop 301
-# NGINX Template Repository
+
-## What is included on this template?
+
-This template includes all the scaffolding you need to get started on an OSS repository that meets the required NGINX criteria:
+This Repo is for learning **`NGINX as a Service in Azure`**, with instructor-led and hands-on Lab Exercises and Lab Guides that teach students, through real-world scenarios, how to use NGINX in front of Azure Resources.
-- [Apache License 2.0](https://github.com/nginxinc/template-repository/blob/main/LICENSE) (required for all NGINX OSS projects)
-- [`.gitignore`](https://github.com/nginxinc/template-repository/blob/main/.gitignore) with some minimal sensible defaults
-- [Issue](https://github.com/nginxinc/template-repository/blob/main/.github/ISSUE_TEMPLATE) and [PR](https://github.com/nginxinc/template-repository/blob/main/pull_request_template.md) templates
-- [Contributing](https://github.com/nginxinc/template-repository/blob/main/CONTRIBUTING.md) guidelines
-- [Support](https://github.com/nginxinc/template-repository/blob/main/SUPPORT.md) guidelines for either community and/or commercial support
-- [Security](https://github.com/nginxinc/template-repository/blob/main/SECURITY.md) guidelines for reporting major vulnerabilities
-- [Code of Conduct](https://github.com/nginxinc/template-repository/blob/main/CODE_OF_CONDUCT.md)
-- Open Source Security Foundation (OSSF) Scorecard [(implemented via a GitHub Action)](https://github.com/nginxinc/template-repository/blob/main/.github/workflows/ossf_scorecard.yml)
-- [README](https://github.com/nginxinc/template-repository/blob/main/README.md) placeholder. How you structure the README is up to you (although the template provides placeholder sections), but you will need to include:
- - A [repostatus](https://www.repostatus.org/) badge
- - An OSSF Scorecard badge. (Optional -- Some projects will by their nature have low scores. In such a case you might want to remove this badge!)
- - A community and/or commercial support badge. Include the latter -- and replace the commented out badge/URL placeholder with the relevant support URL -- if this repository contains a commercially supported project. You can find a commented out example below the community badge in this README.
- - A contributor covenant/code of conduct badge. (Optional -- If you already have multiple badges and want to reduce clutter, simply including the actual code of conduct is enough!)
- - An explicit link back to the [Apache License 2.0](https://github.com/nginxinc/template-repository/blob/main/LICENSE)
- - An up to date copyright notice
-- [Changelog](https://github.com/nginxinc/template-repository/blob/main/CHANGELOG.md) placeholder. (Optional -- A changelog is recommended, but it is not required and can diverge in format from the placeholder here included.)
-- [Codeowners](https://github.com/nginxinc/template-repository/blob/main/.github/CODEOWNERS) placeholder. (Optional -- Codeowners is a useful feature, but not all repositories require them.)
+
-**Note:** If you created a public repository before this template became available (or you didn't know about it's existence), please include any missing files found here in your repository. There is no need if you have a private repository, but we still recommend you include all of the above scaffolding should the repository ever become public.
+**This is an Advanced, 300-Level Workshop.**
-## How do I use this template?
+## Audience
-**DO NOT FORK** -- this template is meant to be used from the **[`Use this template`](https://github.com/nginxinc/template-repository/generate)** feature.
+This Workshop is meant for Cloud and Application Architects, Modern Application Developers, DevOps, Platform Ops, and SRE engineers working with NGINX, Azure, Docker, Kubernetes, and Ingress Controllers who want to learn and understand how NGINX for Azure works: how it is configured, deployed, monitored, and managed. Using various Azure Resources such as VMs, containers, AKS Clusters, and Azure networking, you will deploy real applications for external access using NGINX for Azure.
-1. Click on **[`Use this template`](https://github.com/nginxinc/template-repository/generate)**
-2. Give a name to your project
-3. Wait until the first run of CI finishes (GitHub Actions will process the template and commit to your new repo)
-4. Clone your new project and tweak any of the placeholders if necessary. Pay special attention to the README!
-5. Happy coding!
+**Students taking this Advanced Workshop must have intermediate skills and knowledge in the following areas:**
-**NOTE**: **WAIT** until the first CI run on GitHub Actions finishes before cloning your new project.
+- Azure Cloud, Portal and Azure CLI
+- NGINX Webserver, Reverse Proxy, Load Balancing
+- NGINX Ingress Controller
+- Kubernetes Administration
+- Redis In-Memory Cache and Redis Tools
+- TCP, HTTP/S, DNS, Redis protocols and traffic
+- Chrome or browser diagnostic tools
+- Linux OS commands and tools
+- Container / Docker administration
+- Visual Studio Code
----
+You should be proficient with the following technologies and concepts (a sample NGINX configuration of the kind used in the labs appears after these lists).
-
+- NGINX Webserver and Reverse Proxy
+- NGINX Ingress Controller
+- Kubernetes: nodes, pods, deployments, services, ingress, NodePort
+- Azure Cloud: subscriptions, networking, VMs, AKS Clusters
+- Various Desktop tools: Visual Studio Code, Linux, Terminal, Chrome
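+
+As a quick self-check, you should be able to read a basic NGINX reverse proxy and upstream configuration. The sketch below is illustrative only; the upstream name, backend addresses, and hostname are hypothetical placeholders, not workshop values.
+
+```nginx
+# Illustrative sketch: reverse proxy with load balancing (hypothetical values)
+upstream cafe_backend {
+    least_time header;                   # NGINX Plus least-time algorithm
+    server 10.0.1.4:80;                  # e.g. a VM or AKS NodePort backend
+    server 10.0.1.5:80;
+    keepalive 16;                        # reuse upstream connections
+}
+
+server {
+    listen 80;
+    server_name cafe.example.com;
+
+    location / {
+        proxy_pass http://cafe_backend;
+        proxy_http_version 1.1;          # keepalive requires HTTP/1.1
+        proxy_set_header Connection "";  # clear the client Connection header
+        proxy_set_header Host $host;
+    }
+}
+```
+
+If a configuration like this is hard to follow, complete the NGINX Basics (101) Workshop first.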
-[](https://www.repostatus.org/#concept)
-[](https://securityscorecards.dev/viewer/?uri=github.com/nginxinc/nginx-azure-workshops)
-[](https://github.com/nginxinc/nginx-azure-workshops/blob/main/SUPPORT.md)
-[](https://github.com/nginxinc/nginx-azure-workshops/main/CODE_OF_CONDUCT.md)
+
-# nginx_azure_workshops
+## Knowledge and Skills Requirements
-## Requirements
+
-Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam elit turpis, varius et arcu elementum, viverra rhoncus sem. Aliquam nec sodales magna, et egestas enim. Mauris lobortis ultrices euismod. Pellentesque in arcu lacus. Mauris cursus laoreet nulla, ac vehicula est. Vestibulum eu mauris quis lorem consectetur aliquam ac nec quam. Vestibulum commodo pharetra mi, at bibendum neque faucibus ut. Mauris et tortor sed sem consectetur eleifend ut non magna. Praesent feugiat placerat nibh, varius viverra orci bibendum sed. Vestibulum dapibus ex ut pulvinar facilisis. Quisque sodales enim et augue tempor mattis. Suspendisse finibus congue felis, ac blandit ligula. Praesent condimentum ultrices odio quis semper. Nunc ultrices, nibh quis mattis pellentesque, elit nulla bibendum felis, quis dapibus erat turpis ac urna.
+NGINXaaS for Azure | Hands-On Labs
+:-------------------------:|:-------------------------:
+
+
+
+To meet the prerequisite skills requirement, there are other Workshops from NGINX and Azure Learning to help you prepare. Students must have completed the previous two NGINX Workshops (or have equivalent knowledge) prior to taking this Workshop.
+
+- [NGINX Basics Workshop - 101](https://github.com/nginxinc/nginx-basics-workshops/tree/master/labs)
+- [NGINX Plus Ingress Workshop - 201](https://github.com/nginxinc/nginx-ingress-workshops/tree/main/Plus/labs)
+- [Azure Portal and Azure CLI training from Microsoft Learn](https://learn.microsoft.com/en-us/training/azure/)
+
+See [Lab0 Readme](/labs/lab0/readme.md) for the Hardware/Software and Skills Prerequisites for taking this Workshop and completing the Lab Exercises.
+
+
## Getting Started
-Duis sit amet sapien vel velit ornare vulputate. Nulla rutrum euismod risus ac efficitur. Curabitur in sagittis elit, a semper leo. Suspendisse malesuada aliquam velit, eu suscipit lorem vehicula at. Proin turpis lacus, semper in placerat in, accumsan non ipsum. Cras euismod, elit eget pretium laoreet, tortor nulla finibus tortor, nec hendrerit elit turpis ut eros. Quisque congue nisi id mauris molestie, eu condimentum dolor rutrum. Nullam eleifend elit ac lobortis tristique. Pellentesque nec tellus non mauris aliquet commodo a eu elit. Ut at feugiat metus, at tristique mauris. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia curae;
+Review the GitHub Repo content for the NGINX Basics and NGINX Plus Ingress Workshops. If you have taken these Workshops and understand the content, you can successfully complete the Lab Exercises in this NGINX for Azure Workshop. It is HIGHLY recommended that you complete the 101 and 201 Workshops first.
+
+It is HIGHLY recommended that you complete Azure training from https://learn.microsoft.com, so you are familiar with the Azure Portal, menus, and various resources and components.
+
+It will take approximately 4 hours to complete the NGINX for Azure Workshop.
+
+
## How to Use
-Maecenas at vehicula justo. Suspendisse posuere elementum elit vel posuere. Etiam quis pulvinar massa. Integer tempor semper risus, vitae maximus eros ullamcorper vitae. In egestas, ex vitae gravida sodales, ipsum dolor varius est, et cursus lorem dui a mi. Morbi faucibus ut nisi id faucibus. Sed quis ullamcorper ex. In et dolor id nunc interdum suscipit.
+The content and Lab Exercises are presented in sequence: you build on and add NGINX and Azure features and functionality as you progress. It is essential that the Lab Exercises are completed in the order provided. This content is provided for example purposes only and is not intended for production workloads. The user of this information assumes all risks.
+
+- Click [LabGuide](labs/readme.md) to begin the Lab Exercises.
+- Click [Lab0 Readme](labs/lab0/readme.md) to review the Lab0 Prerequisites - "Know before you Go".
+
+
## Contributing
diff --git a/SECURITY.md b/SECURITY.md
index fe15251..79cdb14 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -2,13 +2,13 @@
## Latest Versions
-We advise users to run or update to the most recent release of this project. Older versions of this project may not have all enhancements and/or bug fixes applied to them.
+We advise users to run the most recent release of the nginx-azure-workshops repository. Older versions may not have all enhancements and/or bug fixes applied to them.
## Reporting a Vulnerability
The F5 Security Incident Response Team (F5 SIRT) has an email alias that makes it easy to report potential security vulnerabilities.
- If you’re an F5 customer with an active support contract, please contact [F5 Technical Support](https://www.f5.com/services/support).
-- If you aren’t an F5 customer, please report any potential or current instances of security vulnerabilities with any F5 product to the F5 Security Incident Response Team at F5SIRT@f5.com
+- If you aren’t an F5 customer, please report any potential or current instances of security vulnerabilities with any F5 product to the F5 Security Incident Response Team at F5SIRT@f5.com.
For more information visit [https://www.f5.com/services/support/report-a-vulnerability](https://www.f5.com/services/support/report-a-vulnerability)
diff --git a/SUPPORT.md b/SUPPORT.md
index c17cd65..8332a0e 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -4,34 +4,56 @@
We use GitHub for tracking bugs and feature requests related to this project.
-Don't know how something in this project works? Curious if this project can achieve your desired functionality? Please open an issue on GitHub with the label `question`.
+Don't know how something in the nginx-azure-workshops repository works? Curious whether it can achieve your desired functionality? Please open an issue on GitHub with the label `question`.
+
+
## NGINX Specific Questions and/or Issues
This isn't the right place to get support for NGINX specific questions, but the following resources are available below. Thanks for your understanding!
-### Community Slack
+
-We have a community [Slack](https://nginxcommunity.slack.com/)!
+## F5 Support
-If you are not a member, click [here](https://community.nginx.org/joinslack) to sign up (and let us know if the link does not seem to be working!)
+If you’re an F5 customer with NGINX Plus and an active support contract, please contact [F5 Technical Support](https://www.f5.com/services/support).
-Once you join, check out the `#beginner-questions` and `nginx-users` channels :)
+
-### Documentation
+## Documentation
-For a comprehensive list of all NGINX directives, check out .
+For a comprehensive list of all NGINX directives, check out <https://nginx.org/en/docs/dirindex.html>.
+
+For a comprehensive list of all NGINX variables, check out <https://nginx.org/en/docs/varindex.html>.
+
+For a comprehensive list of admin and deployment guides for NGINX Plus, check out <https://docs.nginx.com/nginx/admin-guide/>.
 For a comprehensive list of admin and deployment guides for all NGINX products, check out <https://docs.nginx.com/>.
+
+
### Mailing List
 Want to get in touch with the NGINX development team directly? Try using the relevant mailing list found at <https://mailman.nginx.org/mailman/listinfo>!
+
+
## Contributing
Please see the [contributing guide](https://github.com/nginxinc/nginx-azure-workshops/blob/main/CONTRIBUTING.md) for guidelines on how to best contribute to this project.
+
+
## Commercial Support
Commercial support for this project may be available. Please get in touch with [NGINX sales](https://www.nginx.com/contact-sales/) or check your contract details for more info!
+
+
+
+### Community Slack
+
+We have a community [Slack](https://nginxcommunity.slack.com/)!
+
+If you are not a member, click [here](https://community.nginx.org/joinslack) to sign up (and let us know if the link does not seem to be working!)
+
+Once you join, check out the `#beginner-questions` and `nginx-users` channels :)
diff --git a/ca-notes/LabOutline.md b/ca-notes/LabOutline.md
deleted file mode 100644
index 7543cce..0000000
--- a/ca-notes/LabOutline.md
+++ /dev/null
@@ -1,184 +0,0 @@
-# Nginx for Azure Workshop Outline / Summary
-
-## Lab 0 - Prequesites - Subscription / Resources
-## Lab 1 - Azure VNet/Subnet / Network Security Group / Nginx for Azure Overview
-## Lab 2 - UbuntuVM/Docker / Windows VM / Cafe Demo Deployment
-## Lab 3 - AKS / ACR / Nginx Ingress Controller Deployment
-## Lab 4 - NIC Dashboard / Cafe Demo / Redis Deployment
-## Lab 5 - Nginx for Azure Load Balancing / Reverse Proxy
-## Lab 6 - Azure Key Vault / TLS Essentials
-## Lab 7 - Azure Monitoring / Logging Analytics
-## Lab 8 - Nginx Garage or Azure Petshop
-## Lab 9 - Nginx Caching / Rate Limits / Juiceshop
-## Lab 10 - Grafana for Azure
-## Lab 11 - Optional Exercises - Windows VM
-## Summary and Wrap-up
-
-
-
-### Lab 0 - Prequesites - Subscription / Resources
-
-- Overview
-In this Lab, the Prerequisite Requirements for both the Student and the Azure environment will be detailed. It is imperative that you have the appropriate computer, tools, skills, and Azure access to successfully complete the workshop. The Lab exercises must be done sequentially to build the environment as described. This is an intermediate level class, you must be proficient in several areas to successfully complete the workshop. Beginner level workshops are available from Nginx, to help prepare you for this workshop - see the References section below.
-
-- Learning Objectives
-Verify you have the proper computer requirements - hardware and software.
-- Hardware: Laptop, Admin rights, Internet connection
-- Software: Visual Studio, Terminal, Chrome, Docker, AKS and AZ CLI, Redis-CLI.
-Verify you have proper computer skills.
-- Computer skills: Linux CLI, file mgmt, SSH/Terminal, Docker/Compose, Azure Portal, HTTP/S, Kubernetes Nodes/Pods/Services skills, Load Balancing concepts
-- Optional: TLS, DNS, HTTP caching, Prometheus, Grafana, Redis
-Verify you have the proper access to Azure resources.
-- Azure subscription, list of Azure Roles/permissions here
-
-- Nginx for Azure Workshop has the following REQUIRED Nginx Skills
-Students must be familiar with Nginx basic operations, configurations, and concepts for HTTP traffic.
--- The Nginx Basics Workshop is HIGHLY recommended, students should have taken this workshop prior.
--- The Nginx Plus Ingress Controller workshop is also HIGHLY recommended, students should have taken this workshop prior.
--- Previous training on Azure Resource Groups, VMs, Azure Networking, AKS, ACR, and NSG is HIGHLY recommended.
-
-
-
-### Lab 1 - Azure VNet/Subnet / Network Security Group / Nginx for Azure Overview
-
-- Overview
-In this lab, you will be adding and configuring the Azure Networking components needed for this workshop. This will require a few network resources, and a Network Security Group to allow incoming traffic to your Nginx for Azure workshop resources. Then you will explore the Nginx for Azure product, as a quick Overview of what it is and how to deploy it.
-
-- Learning Objectives
-Setup your Azure Vnet and Subnets
-Setup your Azure Network Security Group for inbound traffic
-Explore Nginx for Azure
-Deploy an Nginx for Azure instance / enable logging
-Test Nginx for Azure welcome page
-
-
-
-### Lab 2 - Ubuntu VM/Docker / Windows VM / Cafe Demo Deployment
-
-- Overview
-In this lab, you will deploy an Ubuntu VM, and configure it for a Legacy web application. You will deploy a Windows VM. You will configure Nginx for Azure to proxy and load balance these backends.
-
-- Learning Objectives
-Deploy Ubuntu VM
-Install Docker and Docker-compose
-Run Legacy docker container apps - Cafe Demo
-Optional Exercise: Deploy Windows VM
-Configure Nginx Load Balancing for these apps
-
-
-
-### Lab 3 - AKS / ACR / Nginx Ingress Controller Deployment
-
-- Overview
-In this lab, you will deploy 2 AKS clusters, with Nginx Ingress Controllers. You will also deploy a private Container Registry.
-
-- Learning Objectives
-Deploy 1 AKS cluster using the Azure AZ CLI.
-Deploy 2nd AKS cluster with a bash script.
-Deploy Nginx Plus Ingress Controller with F5 Private Registry, to both the Clusters.
-Configure Nginx Plus Ingress Controller Dashboards.
-Expose the NIC Plus Dashboards externally for Live Monitoring.
-
-
-
-### 4 - Cafe Demo / Redis Deployment / Plus Dashboard
-
-- Overview
-In this lab, you will deploy 2 AKS clusters, with Nginx Ingress Controllers, a Redis cluster, and a Modern Web Application.
-
-- Learning Objectives
-Deploy a demo web application in the clusters.
-Deploy and test a Redis In Memory Cache to the AKS cluster.
-Configure Nginx Ingress for Cafe Demo.
-Configure Nginx Ingress for Redis Leader.
-Configure Nginx for Azure for Cafe and Redis applications.
-
-
-
-### Lab 5 - Nginx Load Balancing / Reverse Proxy
-
-- Overview
-In this lab, you will configure Nginx for Azure to Load Balance various workloads running in Azure. After successful configuration and adding Best Practice Nginx parameters, you will Load Test these applications, and test multiple load balancing and request routing parameters to suit different use cases.
-
-- Learning Objectives
-Configure Nginx for Azure, to Load Balance traffic to both AKS Nginx Ingress Controllers.
-Configure HTTP Split Clients, and route traffic to all 3 backend systems.
-Load test the Legacy and Modern web applications.
-
-
-
-### Lab 6 - Azure Key Vault / TLS Essentials
-
-- Overview
-In this lab, you use Azure Key Vault for TLS certificates and keys. You will configure Nginx for Azure to use these Azure resources to terminate TLS.
-
-- Learning Objectives
-Create a sample Azure Key Vault
-Create a TLS cert/key
-Configure and test Nginx for Azure to use the Azure Keys
-Update the previous Nginx configurations to use TLS for apps
-Update NSGs for TLS inbound traffic
-
-
-
-### Lab 7 - Azure Montoring / Logging Analytics
-
-- Overview
-Enable and configure Azure Monitoring for Nginx for Azure. Create custom Azure Dashboards for your applications. Gain experience using Azure Logs and logging tools.
-
-- Learning Objectives
-Enable, configure, and test Azure Monitoring for Nginx for Azure.
-Create a couple custom dashboards for your load balanced applications.
-Explore the Azure logging and Analytics tools available.
-
-
-
-### Lab 8 - Nginx Garage or Azure Petshop
-
-- Overview
-In this lab, you will deploy a modern application in your AKS cluster. You will expose it with Nginx Ingress Controller and Nginx for Azure.
-
-- Learning Objectives
-Deploy the modern app in AKS
-Test and Verify the app is working correctly
-Expose this application outside the cluster with Nginx Ingress Controller
-Configure Nginx for Azure for this new application
-
-
-
-### Lab 9 - Nginx Caching / Rate Limits / Juiceshop
-
-- Overview
-In this lab, you will deploy an image rich application, and use Nginx Caching to cache images to improve performance.
-
-- Learning Objectives
-Deploy JuiceShop in AKS cluster.
-Expose JuiceShop with Nginx Ingress Controller.
-Configure Nginx for Azure for load balancing JuiceShop.
-Add Nginx Caching to improve delivery of images.
-
-
-
-### Lab 10 - Grafana for Azure
-
-- Overview
-In this lab, you will explore the Nginx and Grafana for Azure integration.
-
-- Learning Objectives
-Deploy Grafana for Azure.
-Configure the Datasource
-Explore a sample Grafana Dashboard for Nginx for Azure
-
-
-
-
-### Lab 11 - Optional Exercises
-
-
-
-
-
-
-### Summary and Wrap-up
-
-- Summary and Wrap-up comments here
\ No newline at end of file
diff --git a/ca-notes/N4A Reference Arch.md b/ca-notes/N4A Reference Arch.md
deleted file mode 100644
index ee85d0d..0000000
--- a/ca-notes/N4A Reference Arch.md
+++ /dev/null
@@ -1,44 +0,0 @@
-N4A Reference Arch
-
-Plus features:
-
-Least time algo
-Active HC - not working
-Dashboard - not available
-metrics
-Prometheus
-KV store - no API access
-Zone synch
-Active Active nodes -
-OIDC-jwt - no
-Split clients
-NTLM - no app
-Caching
-FIPS - AKS/NIC only
-
-NIC - Deep Insight
-NLK
-NAP WAF - not available
-
-Azure Integrations
-Azure Console
-AzureAD - not available
-Azure Mon
-Azure DNS
-Azure Log Analisys
-Azure Key Vault - certs/keys
-Azure HSM - not possible
-
-
-Infrastructure
-FQDN Reg
-Public IPs
-
-2 Demo Apps,
-One for VMs with NTLM
-One for AKS with NIC
-And cafe to start
-
-****
-
-N4A Feedback / Issues, feedback, suggestions
diff --git a/ca-notes/N4A-feedback.md b/ca-notes/N4A-feedback.md
deleted file mode 100644
index 4f73adf..0000000
--- a/ca-notes/N4A-feedback.md
+++ /dev/null
@@ -1,154 +0,0 @@
-# N4A Feedback / Issues, feedback, suggestions
-
-
-
-## Azure Metrics are blank, text box is malformed, see screenshot
-
-Nginx Default config `nginx.conf` should have two status_zone directives included. One for server{} block, one for / location block. **This will allow metrics to show up immediately, without the user having to find, understand, and configure status_zones in their nginx.conf file, or included files in /etc/nginx/conf.d.**
-
-```nginx
-
-user nginx;
-worker_processes auto;
-worker_rlimit_nofile 8192;
-pid /run/nginx/nginx.pid;
-
-events {
- worker_connections 4000;
-}
-
-error_log /var/log/nginx/error.log error;
-
-http {
- access_log off;
- server_tokens "";
- server {
- listen 80 default_server;
- status_zone default; # Add something here
- server_name localhost;
- location / {
- status_zone /; # Add something here
- # Points to a directory with a basic html index file with
- # a "Welcome to NGINX as a Service for Azure!" page
- root /var/www;
- index index.html;
- }
- }
-}
-
-```
-
-There should be a step by step config guide for getting the Metrics to show up, and create a basic Dashboard for Nginx, including the Prerequisites.
-
-## Nginx Standards and Best Practice Violations/Issues
-
-The Nginx default HTML folder/files are missing. This should be included, `/usr/share/nginx/html`, with all the Nginx Error Pages, and other Nginx primitives. Consult a new installation of NginxPlus-R3x to match files.
-
-The usage of the `/var/www` folder is an Apache/Microsoft standard, not an Nginx standard. It should be replaced with `/usr/share/nginx/html` for Nginx users.
-
-The usage of the `/var/cache` folder for caching content is inconsistent with Nginx standards and docs. Most Nginx documentation for caching refers to the `/data/nginx/cache` folder location, and should be changed for Nginx users.
-
-Missing standard nginx.conf `include` directive, for including files in /etc/nginx/conf.d folder.
-
-## NGINX configuration issues
-
-Upload Config Package overwrites the existing nginx.conf, this is a terrible idea. Config package upload should be modified to only allow uploads to the /etc/nginx/conf.d folder, the Nginx standard location for http config files. Perhaps also allow uploads to `/etc/nginx/stream`, the Nginx standard for L4 config files. Even better, allow uploads to a dedicated folder that won't overwrite standard Nginx folders/files, but the user would have to manually copy/paste to move them into the correct folder. Lots of room for discussion and improvement here.
-
-## Caching
-
-From the docs: NGINXaaS for Azure only supports caching to /var/cache/nginx. This is because data at /var/cache/nginx will be stored in a separate Temporary Disk. The size of the temporary disk is 4GB.
-
-This is too small, and there should be an option to use other Azure storage options besides a Temporary disk.
-
-Caching Configuration example is incomplete. It only set up the cache_path location:
-
-http {
- # ...
- proxy_cache_path /var/cache/nginx keys_zone=mycache:10m;
-}
-
-It is `missing` all the other parameters needed for caching to work. A link to Nginx Content Caching is provided, but that is not very helpful.
-
-A complete Caching config example should be provided, perhaps with an include file, pre-configured ?
-
-```nginx
-
-http {
- ...
-
- proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=mycache:10m max_size=500m use_temp_path=off;
-
- ...
-
- server {
- ...
- server_name localhost;
- location /images {
- ...
- proxy_cache mycache; # Use the cache
- proxy_cache_key "$host$request_uri$cookie_user"; # Cache Key
- proxy_cache_min_uses 2; # Cache after 2 reqs
- proxy_cache_valid 200 30m; # Cache for 30m
- proxy_cache_valid 404 1m;
-
- # Required caching headers
- proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie;
- add_header Cache-Control "public";
- add_header X-Cache-Status $upstream_cache_status; # Add Cache status header
-
-
- }
- }
-}
-
-```
-
-## Default 'includes' Directive is missing
-
-## Can't see the Nginx Upstreams
-
-Without the Plus realtime dash board, there is no way to know if the Upstreams defined are correct or working, because
-
-## No access to Nginx Access or Error logs
-
-There is no realtime access to either the Error or Access Logs from Nginx. It makes it virtually impossible to "see what's going on" with Nginx without these logs.
-
-Using the Azure Logging services is complicated, you can't see the original Access or Error logs. An Azure Logging Workspace that shows the error and Access log should be included when the user Deploys N4A. It should be there by default. The lack of this feature will frustrate a large number of Nginx users, especially if they are new to Azure Monitoring / Logging Workspaces.
-
-
-## Nginx Keepalive for HTTP1.1 settings should be included, missing Best Practice.
-
-```nginx
-# Default is HTTP/1, keepalive is only enabled in HTTP/1.1
-proxy_http_version 1.1;
-
-# Remove the Connection header if the client sends it,
-# it could be "close" to close a keepalive connection
-proxy_set_header Connection "";
-
-# Host request header field, or the server name matching a request
-proxy_set_header Host $host;
-
-```
-
-## Nginx Azure Monitor - can't used Saved Dashboards.
-
-If you create and save a dashboard, it does not work. Looks like you have Share a Dashboard.
-
-I can see the name in the list, but Azure Monitor does not let me "load it and use it". It starts with a new, blank dashboard instead. If you refresh the browser page, all customizations are lost and you start at the beginning.
-
-nginx requests and responses, plus.http.status.4xx are reporting incorrectly. looks like 2xx and 4xx metrics are swapped!
-
-Unique Server, Location, and Upstream block metrics are not available, everything is aggregated in to a Total, no metrics with fine grain resolution.
-
-Very difficult to even see the Upstream IP addresses, this is critical for a Proxy configuration.
-
-
-***********
-
-## Engineering Areas to investigate
-
-Adam - AzureAD/DNS/Grafana - ingress-demo container update
-Chris - Plus LB, AZcompute, Cafe Demo, aks/nic/cafe
-Shouvik - AZ monitor, KeyVault, new repo
-Steve - Demo app, Redis
\ No newline at end of file
diff --git a/ca-notes/R30Plus-dashboard.html b/ca-notes/R30Plus-dashboard.html
deleted file mode 100644
index c80db68..0000000
--- a/ca-notes/R30Plus-dashboard.html
+++ /dev/null
@@ -1,1928 +0,0 @@
-NGINX Plus Dashboard
\ No newline at end of file
diff --git a/ca-notes/aks/aks-deployment.md b/ca-notes/aks/aks-deployment.md
deleted file mode 100644
index b191bbf..0000000
--- a/ca-notes/aks/aks-deployment.md
+++ /dev/null
@@ -1,13 +0,0 @@
-akker# az aks create --resource-group $MY_RESOURCEGROUP --name $MY_AKS --location $MY_LOCATION --node-count 3 --node-vm-size $AKS_NODE_VM --kubernetes-version $K8S_VERSION --tags owner=$MY_NAME --enable-addons monitoring --generate-ssh-keys --enable-fips-image
-
-
---vnet-subnet-id
---vnet-subnet-id /subscriptions/7a0bb4ab-c5a7-46b3-b4ad-c10376166020/resourceGroups/cakker/providers/Microsoft.Network/virtualNetworks/demo1-vnet/subnets/aks
-
-/subscriptions/7a0bb4ab-c5a7-46b3-b4ad-c10376166020/resourceGroups/cakker/providers/Microsoft.Network/virtualNetworks/demo1-vnet/subnets/aks2
-
-
-Second cluster, using azure CNI and new "aks2" subnet, as $MY_SUBNET :
-
-az aks create --resource-group $MY_RESOURCEGROUP --name $MY_AKS --location $MY_LOCATION --node-count 3 --node-vm-size $AKS_NODE_VM --kubernetes-version $K8S_VERSION --tags owner=$MY_NAME --vnet-subnet-id=$MY_SUBNET --network-plugin option: azure --enable-addons monitoring --generate-ssh-keys --enable-fips-image
-
diff --git a/ca-notes/aks/aks-store/aks-store-all-in-one.yaml b/ca-notes/aks/aks-store/aks-store-all-in-one.yaml
deleted file mode 100644
index c3f72cd..0000000
--- a/ca-notes/aks/aks-store/aks-store-all-in-one.yaml
+++ /dev/null
@@ -1,539 +0,0 @@
-apiVersion: apps/v1
-kind: StatefulSet
-metadata:
- name: mongodb
-spec:
- serviceName: mongodb
- replicas: 1
- selector:
- matchLabels:
- app: mongodb
- template:
- metadata:
- labels:
- app: mongodb
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - name: mongodb
- image: mcr.microsoft.com/mirror/docker/library/mongo:4.2
- ports:
- - containerPort: 27017
- name: mongodb
- resources:
- requests:
- cpu: 5m
- memory: 75Mi
- limits:
- cpu: 25m
- memory: 1024Mi
- livenessProbe:
- exec:
- command:
- - mongosh
- - --eval
- - db.runCommand('ping').ok
- initialDelaySeconds: 5
- periodSeconds: 5
----
-apiVersion: v1
-kind: Service
-metadata:
- name: mongodb
-spec:
- ports:
- - port: 27017
- selector:
- app: mongodb
- type: ClusterIP
----
-apiVersion: v1
-data:
- rabbitmq_enabled_plugins: |
- [rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0].
-kind: ConfigMap
-metadata:
- name: rabbitmq-enabled-plugins
----
-apiVersion: apps/v1
-kind: StatefulSet
-metadata:
- name: rabbitmq
-spec:
- serviceName: rabbitmq
- replicas: 1
- selector:
- matchLabels:
- app: rabbitmq
- template:
- metadata:
- labels:
- app: rabbitmq
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - name: rabbitmq
- image: mcr.microsoft.com/mirror/docker/library/rabbitmq:3.10-management-alpine
- ports:
- - containerPort: 5672
- name: rabbitmq-amqp
- - containerPort: 15672
- name: rabbitmq-http
- env:
- - name: RABBITMQ_DEFAULT_USER
- value: "username"
- - name: RABBITMQ_DEFAULT_PASS
- value: "password"
- resources:
- requests:
- cpu: 10m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - name: rabbitmq-enabled-plugins
- mountPath: /etc/rabbitmq/enabled_plugins
- subPath: enabled_plugins
- volumes:
- - name: rabbitmq-enabled-plugins
- configMap:
- name: rabbitmq-enabled-plugins
- items:
- - key: rabbitmq_enabled_plugins
- path: enabled_plugins
----
-apiVersion: v1
-kind: Service
-metadata:
- name: rabbitmq
-spec:
- selector:
- app: rabbitmq
- ports:
- - name: rabbitmq-amqp
- port: 5672
- targetPort: 5672
- - name: rabbitmq-http
- port: 15672
- targetPort: 15672
- type: ClusterIP
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: order-service
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: order-service
- template:
- metadata:
- labels:
- app: order-service
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - name: order-service
- image: ghcr.io/azure-samples/aks-store-demo/order-service:latest
- ports:
- - containerPort: 3000
- env:
- - name: ORDER_QUEUE_HOSTNAME
- value: "rabbitmq"
- - name: ORDER_QUEUE_PORT
- value: "5672"
- - name: ORDER_QUEUE_USERNAME
- value: "username"
- - name: ORDER_QUEUE_PASSWORD
- value: "password"
- - name: ORDER_QUEUE_NAME
- value: "orders"
- - name: FASTIFY_ADDRESS
- value: "0.0.0.0"
- resources:
- requests:
- cpu: 1m
- memory: 50Mi
- limits:
- cpu: 75m
- memory: 128Mi
- startupProbe:
- httpGet:
- path: /health
- port: 3000
- failureThreshold: 3
- initialDelaySeconds: 15
- periodSeconds: 5
- readinessProbe:
- httpGet:
- path: /health
- port: 3000
- failureThreshold: 3
- initialDelaySeconds: 3
- periodSeconds: 5
- livenessProbe:
- httpGet:
- path: /health
- port: 3000
- failureThreshold: 5
- initialDelaySeconds: 3
- periodSeconds: 3
- initContainers:
- - name: wait-for-rabbitmq
- image: busybox
- command: ['sh', '-c', 'until nc -zv rabbitmq 5672; do echo waiting for rabbitmq; sleep 2; done;']
- resources:
- requests:
- cpu: 1m
- memory: 50Mi
- limits:
- cpu: 75m
- memory: 128Mi
----
-apiVersion: v1
-kind: Service
-metadata:
- name: order-service
-spec:
- type: ClusterIP
- ports:
- - name: http
- port: 3000
- targetPort: 3000
- selector:
- app: order-service
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: makeline-service
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: makeline-service
- template:
- metadata:
- labels:
- app: makeline-service
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - name: makeline-service
- image: ghcr.io/azure-samples/aks-store-demo/makeline-service:latest
- ports:
- - containerPort: 3001
- env:
- - name: ORDER_QUEUE_URI
- value: "amqp://rabbitmq:5672"
- - name: ORDER_QUEUE_USERNAME
- value: "username"
- - name: ORDER_QUEUE_PASSWORD
- value: "password"
- - name: ORDER_QUEUE_NAME
- value: "orders"
- - name: ORDER_DB_URI
- value: "mongodb://mongodb:27017"
- - name: ORDER_DB_NAME
- value: "orderdb"
- - name: ORDER_DB_COLLECTION_NAME
- value: "orders"
- resources:
- requests:
- cpu: 1m
- memory: 6Mi
- limits:
- cpu: 5m
- memory: 20Mi
- readinessProbe:
- httpGet:
- path: /health
- port: 3001
- failureThreshold: 3
- initialDelaySeconds: 3
- periodSeconds: 5
- livenessProbe:
- httpGet:
- path: /health
- port: 3001
- failureThreshold: 5
- initialDelaySeconds: 3
- periodSeconds: 3
----
-apiVersion: v1
-kind: Service
-metadata:
- name: makeline-service
-spec:
- type: ClusterIP
- ports:
- - name: http
- port: 3001
- targetPort: 3001
- selector:
- app: makeline-service
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: product-service
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: product-service
- template:
- metadata:
- labels:
- app: product-service
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - name: product-service
- image: ghcr.io/azure-samples/aks-store-demo/product-service:latest
- ports:
- - containerPort: 3002
- env:
- - name: AI_SERVICE_URL
- value: "http://ai-service:5001/"
- resources:
- requests:
- cpu: 1m
- memory: 1Mi
- limits:
- cpu: 2m
- memory: 10Mi
- readinessProbe:
- httpGet:
- path: /health
- port: 3002
- failureThreshold: 3
- initialDelaySeconds: 3
- periodSeconds: 5
- livenessProbe:
- httpGet:
- path: /health
- port: 3002
- failureThreshold: 5
- initialDelaySeconds: 3
- periodSeconds: 3
----
-apiVersion: v1
-kind: Service
-metadata:
- name: product-service
-spec:
- type: ClusterIP
- ports:
- - name: http
- port: 3002
- targetPort: 3002
- selector:
- app: product-service
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: store-front
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: store-front
- template:
- metadata:
- labels:
- app: store-front
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - name: store-front
- image: ghcr.io/azure-samples/aks-store-demo/store-front:latest
- ports:
- - containerPort: 8080
- name: store-front
- env:
- - name: VUE_APP_ORDER_SERVICE_URL
- value: "http://order-service:3000/"
- - name: VUE_APP_PRODUCT_SERVICE_URL
- value: "http://product-service:3002/"
- resources:
- requests:
- cpu: 1m
- memory: 200Mi
- limits:
- cpu: 1000m
- memory: 512Mi
- startupProbe:
- httpGet:
- path: /health
- port: 8080
- failureThreshold: 3
- initialDelaySeconds: 5
- periodSeconds: 5
- readinessProbe:
- httpGet:
- path: /health
- port: 8080
- failureThreshold: 3
- initialDelaySeconds: 3
- periodSeconds: 3
- livenessProbe:
- httpGet:
- path: /health
- port: 8080
- failureThreshold: 5
- initialDelaySeconds: 3
- periodSeconds: 3
----
-apiVersion: v1
-kind: Service
-metadata:
- name: store-front
-spec:
- ports:
- - port: 80
- targetPort: 8080
- selector:
- app: store-front
- type: LoadBalancer
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: store-admin
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: store-admin
- template:
- metadata:
- labels:
- app: store-admin
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - name: store-admin
- image: ghcr.io/azure-samples/aks-store-demo/store-admin:latest
- ports:
- - containerPort: 8081
- name: store-admin
- env:
- - name: VUE_APP_PRODUCT_SERVICE_URL
- value: "http://product-service:3002/"
- - name: VUE_APP_MAKELINE_SERVICE_URL
- value: "http://makeline-service:3001/"
- resources:
- requests:
- cpu: 1m
- memory: 200Mi
- limits:
- cpu: 1000m
- memory: 512Mi
- startupProbe:
- httpGet:
- path: /health
- port: 8081
- failureThreshold: 3
- initialDelaySeconds: 5
- periodSeconds: 5
- readinessProbe:
- httpGet:
- path: /health
- port: 8081
- failureThreshold: 3
- initialDelaySeconds: 3
- periodSeconds: 5
- livenessProbe:
- httpGet:
- path: /health
- port: 8081
- failureThreshold: 5
- initialDelaySeconds: 3
- periodSeconds: 3
----
-apiVersion: v1
-kind: Service
-metadata:
- name: store-admin
-spec:
- ports:
- - port: 80
- targetPort: 8081
- selector:
- app: store-admin
- type: LoadBalancer
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: virtual-customer
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: virtual-customer
- template:
- metadata:
- labels:
- app: virtual-customer
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - name: virtual-customer
- image: ghcr.io/azure-samples/aks-store-demo/virtual-customer:latest
- env:
- - name: ORDER_SERVICE_URL
- value: http://order-service:3000/
- - name: ORDERS_PER_HOUR
- value: "100"
- resources:
- requests:
- cpu: 1m
- memory: 1Mi
- limits:
- cpu: 1m
- memory: 7Mi
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: virtual-worker
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: virtual-worker
- template:
- metadata:
- labels:
- app: virtual-worker
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - name: virtual-worker
- image: ghcr.io/azure-samples/aks-store-demo/virtual-worker:latest
- env:
- - name: MAKELINE_SERVICE_URL
- value: http://makeline-service:3001
- - name: ORDERS_PER_HOUR
- value: "100"
- resources:
- requests:
- cpu: 1m
- memory: 1Mi
- limits:
- cpu: 1m
- memory: 7Mi
\ No newline at end of file
diff --git a/ca-notes/aks/cafe/cafe-secret.yaml b/ca-notes/aks/cafe/cafe-secret.yaml
deleted file mode 100644
index 86648b7..0000000
--- a/ca-notes/aks/cafe/cafe-secret.yaml
+++ /dev/null
@@ -1,9 +0,0 @@
-apiVersion: v1
-kind: Secret
-metadata:
- name: cafe-secret
-type: kubernetes.io/tls
-data:
- tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURMakNDQWhZQ0NRREFPRjl0THNhWFdqQU5CZ2txaGtpRzl3MEJBUXNGQURCYU1Rc3dDUVlEVlFRR0V3SlYKVXpFTE1Ba0dBMVVFQ0F3Q1EwRXhJVEFmQmdOVkJBb01HRWx1ZEdWeWJtVjBJRmRwWkdkcGRITWdVSFI1SUV4MApaREViTUJrR0ExVUVBd3dTWTJGbVpTNWxlR0Z0Y0d4bExtTnZiU0FnTUI0WERURTRNRGt4TWpFMk1UVXpOVm9YCkRUSXpNRGt4TVRFMk1UVXpOVm93V0RFTE1Ba0dBMVVFQmhNQ1ZWTXhDekFKQmdOVkJBZ01Ba05CTVNFd0h3WUQKVlFRS0RCaEpiblJsY201bGRDQlhhV1JuYVhSeklGQjBlU0JNZEdReEdUQVhCZ05WQkFNTUVHTmhabVV1WlhoaApiWEJzWlM1amIyMHdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDcDZLbjdzeTgxCnAwanVKL2N5ayt2Q0FtbHNmanRGTTJtdVpOSzBLdGVjcUcyZmpXUWI1NXhRMVlGQTJYT1N3SEFZdlNkd0kyaloKcnVXOHFYWENMMnJiNENaQ0Z4d3BWRUNyY3hkam0zdGVWaVJYVnNZSW1tSkhQUFN5UWdwaW9iczl4N0RsTGM2SQpCQTBaalVPeWwwUHFHOVNKZXhNVjczV0lJYTVyRFZTRjJyNGtTa2JBajREY2o3TFhlRmxWWEgySTVYd1hDcHRDCm42N0pDZzQyZitrOHdnemNSVnA4WFprWldaVmp3cTlSVUtEWG1GQjJZeU4xWEVXZFowZXdSdUtZVUpsc202OTIKc2tPcktRajB2a29QbjQxRUUvK1RhVkVwcUxUUm9VWTNyemc3RGtkemZkQml6Rk8yZHNQTkZ4MkNXMGpYa05MdgpLbzI1Q1pyT2hYQUhBZ01CQUFFd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFLSEZDY3lPalp2b0hzd1VCTWRMClJkSEliMzgzcFdGeW5acS9MdVVvdnNWQTU4QjBDZzdCRWZ5NXZXVlZycTVSSWt2NGxaODFOMjl4MjFkMUpINnIKalNuUXgrRFhDTy9USkVWNWxTQ1VwSUd6RVVZYVVQZ1J5anNNL05VZENKOHVIVmhaSitTNkZBK0NuT0Q5cm4yaQpaQmVQQ0k1ckh3RVh3bm5sOHl3aWozdnZRNXpISXV5QmdsV3IvUXl1aTlmalBwd1dVdlVtNG52NVNNRzl6Q1Y3ClBwdXd2dWF0cWpPMTIwOEJqZkUvY1pISWc4SHc5bXZXOXg5QytJUU1JTURFN2IvZzZPY0s3TEdUTHdsRnh2QTgKN1dqRWVxdW5heUlwaE1oS1JYVmYxTjM0OWVOOThFejM4Zk9USFRQYmRKakZBL1BjQytHeW1lK2lHdDVPUWRGaAp5UkU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
- tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBcWVpcCs3TXZOYWRJN2lmM01wUHJ3Z0pwYkg0N1JUTnBybVRTdENyWG5LaHRuNDFrCkcrZWNVTldCUU5semtzQndHTDBuY0NObzJhN2x2S2wxd2k5cTIrQW1RaGNjS1ZSQXEzTVhZNXQ3WGxZa1YxYkcKQ0pwaVJ6ejBza0lLWXFHN1BjZXc1UzNPaUFRTkdZMURzcGRENmh2VWlYc1RGZTkxaUNHdWF3MVVoZHErSkVwRwp3SStBM0kreTEzaFpWVng5aU9WOEZ3cWJRcCt1eVFvT05uL3BQTUlNM0VWYWZGMlpHVm1WWThLdlVWQ2cxNWhRCmRtTWpkVnhGbldkSHNFYmltRkNaYkp1dmRySkRxeWtJOUw1S0Q1K05SQlAvazJsUkthaTAwYUZHTjY4NE93NUgKYzMzUVlzeFR0bmJEelJjZGdsdEkxNURTN3lxTnVRbWF6b1Z3QndJREFRQUJBb0lCQVFDUFNkU1luUXRTUHlxbApGZlZGcFRPc29PWVJoZjhzSStpYkZ4SU91UmF1V2VoaEp4ZG01Uk9ScEF6bUNMeUw1VmhqdEptZTIyM2dMcncyCk45OUVqVUtiL1ZPbVp1RHNCYzZvQ0Y2UU5SNThkejhjbk9SVGV3Y290c0pSMXBuMWhobG5SNUhxSkpCSmFzazEKWkVuVVFmY1hackw5NGxvOUpIM0UrVXFqbzFGRnM4eHhFOHdvUEJxalpzVjdwUlVaZ0MzTGh4bndMU0V4eUZvNApjeGI5U09HNU9tQUpvelN0Rm9RMkdKT2VzOHJKNXFmZHZ5dGdnOXhiTGFRTC94MGtwUTYyQm9GTUJEZHFPZVBXCktmUDV6WjYvMDcvdnBqNDh5QTFRMzJQem9idWJzQkxkM0tjbjMyamZtMUU3cHJ0V2wrSmVPRmlPem5CUUZKYk4KNHFQVlJ6NWhBb0dCQU50V3l4aE5DU0x1NFArWGdLeWNrbGpKNkY1NjY4Zk5qNUN6Z0ZScUowOXpuMFRsc05ybwpGVExaY3hEcW5SM0hQWU00MkpFUmgySi9xREZaeW5SUW8zY2czb2VpdlVkQlZHWTgrRkkxVzBxZHViL0w5K3l1CmVkT1pUUTVYbUdHcDZyNmpleHltY0ppbS9Pc0IzWm5ZT3BPcmxEN1NQbUJ2ek5MazRNRjZneGJYQW9HQkFNWk8KMHA2SGJCbWNQMHRqRlhmY0tFNzdJbUxtMHNBRzR1SG9VeDBlUGovMnFyblRuT0JCTkU0TXZnRHVUSnp5K2NhVQprOFJxbWRIQ2JIelRlNmZ6WXEvOWl0OHNaNzdLVk4xcWtiSWN1YytSVHhBOW5OaDFUanNSbmU3NFowajFGQ0xrCmhIY3FIMHJpN1BZU0tIVEU4RnZGQ3haWWRidUI4NENtWmlodnhicFJBb0dBSWJqcWFNWVBUWXVrbENkYTVTNzkKWVNGSjFKelplMUtqYS8vdER3MXpGY2dWQ0thMzFqQXdjaXowZi9sU1JxM0hTMUdHR21lemhQVlRpcUxmZVpxYwpSMGlLYmhnYk9jVlZrSkozSzB5QXlLd1BUdW14S0haNnpJbVpTMGMwYW0rUlk5WUdxNVQ3WXJ6cHpjZnZwaU9VCmZmZTNSeUZUN2NmQ21mb09oREN0enVrQ2dZQjMwb0xDMVJMRk9ycW40M3ZDUzUxemM1em9ZNDR1QnpzcHd3WU4KVHd2UC9FeFdNZjNWSnJEakJDSCtULzZzeXNlUGJKRUltbHpNK0l3eXRGcEFOZmlJWEV0LzQ4WGY2ME54OGdXTQp1SHl4Wlp4L05LdER3MFY4dlgxUE9ucTJBNWVpS2ErOGpSQVJZS0pMWU5kZkR1d29seHZHNmJaaGtQaS80RXRUCjNZMThzUUtCZ0h0S2JrKzdsTkpWZXN3WEU1Y1VHNkVEVXNEZS8yVWE3ZlhwN0ZjanFCRW9hcDFMU3crNlRYcDAKWmdybUtFOEFSek00NytFSkhVdmlpcS9udXBFMTVnMGtKVzNzeWhwVTl6WkxPN2x0QjBLSWtPOVpSY21Vam84UQpjcExsSE1BcWJMSjhXWUdKQ2toaVd4eWFsNmhZVHlXWTRjVmtDMHh0VGwvaFVFOUllTktvCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
-
diff --git a/ca-notes/aks/nic/nginx-config-lastbyte.yaml b/ca-notes/aks/nic/nginx-config-lastbyte.yaml
deleted file mode 100644
index 5c1e1cb..0000000
--- a/ca-notes/aks/nic/nginx-config-lastbyte.yaml
+++ /dev/null
@@ -1,8 +0,0 @@
-kind: ConfigMap
-apiVersion: v1
-metadata:
- name: nginx-config
- namespace: nginx-ingress
-data:
- lb-method: "least_time last_byte"
-
\ No newline at end of file
diff --git a/ca-notes/aks/nic/nginx-config-roundrobin.yaml b/ca-notes/aks/nic/nginx-config-roundrobin.yaml
deleted file mode 100644
index fe81601..0000000
--- a/ca-notes/aks/nic/nginx-config-roundrobin.yaml
+++ /dev/null
@@ -1,8 +0,0 @@
-kind: ConfigMap
-apiVersion: v1
-metadata:
- name: nginx-config
- namespace: nginx-ingress
-data:
- lb-method: "round_robin"
-
\ No newline at end of file
diff --git a/ca-notes/grafana/n4a-dashboard.json b/ca-notes/grafana/n4a-dashboard.json
deleted file mode 100644
index e70278a..0000000
--- a/ca-notes/grafana/n4a-dashboard.json
+++ /dev/null
@@ -1,1061 +0,0 @@
-{
- "annotations": {
- "list": [
- {
- "builtIn": 1,
- "datasource": {
- "type": "grafana",
- "uid": "-- Grafana --"
- },
- "enable": true,
- "hide": true,
- "iconColor": "rgba(0, 211, 255, 1)",
- "name": "Annotations & Alerts",
- "type": "dashboard"
- }
- ]
- },
- "editable": true,
- "fiscalYearStartMonth": 0,
- "graphTooltip": 0,
- "id": 32,
- "links": [],
- "liveNow": false,
- "panels": [
- {
- "datasource": {
- "type": "grafana-azure-monitor-datasource",
- "uid": "azure-monitor-oob"
- },
- "description": "",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "palette-classic"
- },
- "custom": {
- "axisBorderShow": false,
- "axisCenteredZero": false,
- "axisColorMode": "text",
- "axisLabel": "",
- "axisPlacement": "auto",
- "barAlignment": 0,
- "drawStyle": "line",
- "fillOpacity": 0,
- "gradientMode": "none",
- "hideFrom": {
- "legend": false,
- "tooltip": false,
- "viz": false
- },
- "insertNulls": false,
- "lineInterpolation": "linear",
- "lineWidth": 3,
- "pointSize": 5,
- "scaleDistribution": {
- "type": "linear"
- },
- "showPoints": "auto",
- "spanNulls": false,
- "stacking": {
- "group": "A",
- "mode": "none"
- },
- "thresholdsStyle": {
- "mode": "off"
- }
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 80
- }
- ]
- },
- "unitScale": true
- },
- "overrides": []
- },
- "gridPos": {
- "h": 9,
- "w": 12,
- "x": 0,
- "y": 0
- },
- "id": 3,
- "options": {
- "legend": {
- "calcs": [],
- "displayMode": "list",
- "placement": "bottom",
- "showLegend": true
- },
- "tooltip": {
- "mode": "single",
- "sort": "none"
- }
- },
- "targets": [
- {
- "azureMonitor": {
- "aggregation": "Total",
- "allowedTimeGrainsMs": [
- 60000,
- 300000,
- 900000,
- 1800000,
- 3600000,
- 21600000,
- 43200000,
- 86400000
- ],
- "customNamespace": "NGINX Requests and Response Statistics",
- "dimensionFilters": [
- {
- "dimension": "server_zone",
- "filters": [],
- "operator": "eq"
- },
- {
- "dimension": "",
- "filters": [],
- "operator": "eq"
- }
- ],
- "metricName": "plus.http.request.count",
- "metricNamespace": "nginx.nginxplus/nginxdeployments",
- "region": "westus2",
- "resources": [
- {
- "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
- "region": "westus2",
- "resourceGroup": "cakker",
- "resourceName": "nginx1",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- }
- ],
- "timeGrain": "auto"
- },
- "datasource": {
- "type": "grafana-azure-monitor-datasource",
- "uid": "azure-monitor-oob"
- },
- "queryType": "Azure Monitor",
- "refId": "A",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- }
- ],
- "title": "HTTP Requests",
- "type": "timeseries"
- },
- {
- "datasource": {
- "type": "grafana-azure-monitor-datasource",
- "uid": "azure-monitor-oob"
- },
- "description": "",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "palette-classic"
- },
- "custom": {
- "axisBorderShow": true,
- "axisCenteredZero": false,
- "axisColorMode": "text",
- "axisLabel": "",
- "axisPlacement": "auto",
- "barAlignment": 0,
- "drawStyle": "line",
- "fillOpacity": 0,
- "gradientMode": "none",
- "hideFrom": {
- "legend": false,
- "tooltip": false,
- "viz": false
- },
- "insertNulls": false,
- "lineInterpolation": "linear",
- "lineWidth": 3,
- "pointSize": 5,
- "scaleDistribution": {
- "type": "linear"
- },
- "showPoints": "auto",
- "spanNulls": false,
- "stacking": {
- "group": "A",
- "mode": "none"
- },
- "thresholdsStyle": {
- "mode": "off"
- }
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 80
- }
- ]
- },
- "unitScale": true
- },
- "overrides": []
- },
- "gridPos": {
- "h": 9,
- "w": 12,
- "x": 12,
- "y": 0
- },
- "id": 2,
- "options": {
- "legend": {
- "calcs": [],
- "displayMode": "table",
- "placement": "right",
- "showLegend": true
- },
- "tooltip": {
- "mode": "single",
- "sort": "none"
- }
- },
- "targets": [
- {
- "azureMonitor": {
- "aggregation": "Average",
- "allowedTimeGrainsMs": [
- 60000,
- 300000,
- 900000,
- 1800000,
- 3600000,
- 21600000,
- 43200000,
- 86400000
- ],
- "customNamespace": "NGINX Requests and Response Statistics",
- "dimensionFilters": [],
- "metricName": "plus.http.status.2xx",
- "metricNamespace": "nginx.nginxplus/nginxdeployments",
- "region": "westus2",
- "resources": [
- {
- "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
- "region": "westus2",
- "resourceGroup": "cakker",
- "resourceName": "nginx1",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- }
- ],
- "timeGrain": "auto"
- },
- "datasource": {
- "type": "grafana-azure-monitor-datasource",
- "uid": "azure-monitor-oob"
- },
- "queryType": "Azure Monitor",
- "refId": "A",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- },
- {
- "azureMonitor": {
- "aggregation": "Average",
- "allowedTimeGrainsMs": [
- 60000,
- 300000,
- 900000,
- 1800000,
- 3600000,
- 21600000,
- 43200000,
- 86400000
- ],
- "customNamespace": "NGINX Requests and Response Statistics",
- "dimensionFilters": [],
- "metricName": "plus.http.status.3xx",
- "metricNamespace": "nginx.nginxplus/nginxdeployments",
- "region": "westus2",
- "resources": [
- {
- "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
- "region": "westus2",
- "resourceGroup": "cakker",
- "resourceName": "nginx1",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- }
- ],
- "timeGrain": "auto"
- },
- "datasource": {
- "type": "grafana-azure-monitor-datasource",
- "uid": "azure-monitor-oob"
- },
- "hide": false,
- "queryType": "Azure Monitor",
- "refId": "C",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- },
- {
- "azureMonitor": {
- "aggregation": "Average",
- "allowedTimeGrainsMs": [
- 60000,
- 300000,
- 900000,
- 1800000,
- 3600000,
- 21600000,
- 43200000,
- 86400000
- ],
- "customNamespace": "NGINX Requests and Response Statistics",
- "dimensionFilters": [],
- "metricName": "plus.http.status.4xx",
- "metricNamespace": "nginx.nginxplus/nginxdeployments",
- "region": "westus2",
- "resources": [
- {
- "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
- "region": "westus2",
- "resourceGroup": "cakker",
- "resourceName": "nginx1",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- }
- ],
- "timeGrain": "auto"
- },
- "datasource": {
- "type": "grafana-azure-monitor-datasource",
- "uid": "azure-monitor-oob"
- },
- "hide": false,
- "queryType": "Azure Monitor",
- "refId": "B",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- },
- {
- "azureMonitor": {
- "aggregation": "Average",
- "allowedTimeGrainsMs": [
- 60000,
- 300000,
- 900000,
- 1800000,
- 3600000,
- 21600000,
- 43200000,
- 86400000
- ],
- "customNamespace": "NGINX Requests and Response Statistics",
- "dimensionFilters": [],
- "metricName": "plus.http.status.5xx",
- "metricNamespace": "nginx.nginxplus/nginxdeployments",
- "region": "westus2",
- "resources": [
- {
- "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
- "region": "westus2",
- "resourceGroup": "cakker",
- "resourceName": "nginx1",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- }
- ],
- "timeGrain": "auto"
- },
- "datasource": {
- "type": "grafana-azure-monitor-datasource",
- "uid": "azure-monitor-oob"
- },
- "hide": false,
- "queryType": "Azure Monitor",
- "refId": "D",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- }
- ],
- "title": "HTTP Metrics",
- "type": "timeseries"
- },
- {
- "datasource": {
- "type": "grafana-azure-monitor-datasource",
- "uid": "azure-monitor-oob"
- },
- "description": "",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "palette-classic"
- },
- "custom": {
- "axisBorderShow": false,
- "axisCenteredZero": false,
- "axisColorMode": "text",
- "axisLabel": "",
- "axisPlacement": "auto",
- "barAlignment": 0,
- "drawStyle": "line",
- "fillOpacity": 0,
- "gradientMode": "none",
- "hideFrom": {
- "legend": false,
- "tooltip": false,
- "viz": false
- },
- "insertNulls": false,
- "lineInterpolation": "linear",
- "lineWidth": 3,
- "pointSize": 5,
- "scaleDistribution": {
- "type": "linear"
- },
- "showPoints": "auto",
- "spanNulls": false,
- "stacking": {
- "group": "A",
- "mode": "none"
- },
- "thresholdsStyle": {
- "mode": "off"
- }
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 80
- }
- ]
- },
- "unitScale": true
- },
- "overrides": []
- },
- "gridPos": {
- "h": 9,
- "w": 12,
- "x": 0,
- "y": 9
- },
- "id": 1,
- "options": {
- "legend": {
- "calcs": [],
- "displayMode": "list",
- "placement": "bottom",
- "showLegend": true
- },
- "tooltip": {
- "mode": "single",
- "sort": "none"
- }
- },
- "targets": [
- {
- "azureMonitor": {
- "aggregation": "Average",
- "allowedTimeGrainsMs": [
- 60000,
- 300000,
- 900000,
- 1800000,
- 3600000,
- 21600000,
- 43200000,
- 86400000
- ],
- "customNamespace": "NGINX Cache Statistics",
- "dimensionFilters": [
- {
- "dimension": "cache_zone",
- "filters": [
- "image_cache"
- ],
- "operator": "eq"
- }
- ],
- "metricName": "plus.cache.hit.ratio",
- "metricNamespace": "nginx.nginxplus/nginxdeployments",
- "region": "westus2",
- "resources": [
- {
- "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
- "region": "westus2",
- "resourceGroup": "cakker",
- "resourceName": "nginx1",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- }
- ],
- "timeGrain": "auto"
- },
- "datasource": {
- "type": "grafana-azure-monitor-datasource",
- "uid": "azure-monitor-oob"
- },
- "queryType": "Azure Monitor",
- "refId": "A",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- }
- ],
- "title": "Cache Hit Ratio",
- "type": "timeseries"
- },
- {
- "datasource": {
- "type": "grafana-azure-monitor-datasource",
- "uid": "azure-monitor-oob"
- },
- "description": "",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "palette-classic"
- },
- "custom": {
- "axisBorderShow": false,
- "axisCenteredZero": false,
- "axisColorMode": "text",
- "axisLabel": "",
- "axisPlacement": "auto",
- "barAlignment": 0,
- "drawStyle": "line",
- "fillOpacity": 0,
- "gradientMode": "none",
- "hideFrom": {
- "legend": false,
- "tooltip": false,
- "viz": false
- },
- "insertNulls": false,
- "lineInterpolation": "linear",
- "lineWidth": 1,
- "pointSize": 5,
- "scaleDistribution": {
- "type": "linear"
- },
- "showPoints": "auto",
- "spanNulls": false,
- "stacking": {
- "group": "A",
- "mode": "none"
- },
- "thresholdsStyle": {
- "mode": "off"
- }
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 80
- }
- ]
- },
- "unitScale": true
- },
- "overrides": []
- },
- "gridPos": {
- "h": 9,
- "w": 12,
- "x": 12,
- "y": 9
- },
- "id": 4,
- "options": {
- "legend": {
- "calcs": [],
- "displayMode": "list",
- "placement": "bottom",
- "showLegend": true
- },
- "tooltip": {
- "mode": "single",
- "sort": "none"
- }
- },
- "targets": [
- {
- "azureMonitor": {
- "aggregation": "Count",
- "allowedTimeGrainsMs": [
- 60000,
- 300000,
- 900000,
- 1800000,
- 3600000,
- 21600000,
- 43200000,
- 86400000
- ],
- "customNamespace": "NGINX SSL Statistics",
- "dimensionFilters": [
- {
- "dimension": "server_zone",
- "filters": [],
- "operator": "eq"
- }
- ],
- "metricName": "plus.http.ssl.handshakes",
- "metricNamespace": "nginx.nginxplus/nginxdeployments",
- "region": "westus2",
- "resources": [
- {
- "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
- "region": "westus2",
- "resourceGroup": "cakker",
- "resourceName": "nginx1",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- }
- ],
- "timeGrain": "auto"
- },
- "datasource": {
- "type": "grafana-azure-monitor-datasource",
- "uid": "azure-monitor-oob"
- },
- "queryType": "Azure Monitor",
- "refId": "A",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- },
- {
- "azureMonitor": {
- "aggregation": "Count",
- "allowedTimeGrainsMs": [
- 60000,
- 300000,
- 900000,
- 1800000,
- 3600000,
- 21600000,
- 43200000,
- 86400000
- ],
- "customNamespace": "NGINX SSL Statistics",
- "dimensionFilters": [
- {
- "dimension": "server_zone",
- "filters": [],
- "operator": "eq"
- }
- ],
- "metricName": "plus.http.ssl.session.reuses",
- "metricNamespace": "nginx.nginxplus/nginxdeployments",
- "region": "westus2",
- "resources": [
- {
- "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
- "region": "westus2",
- "resourceGroup": "cakker",
- "resourceName": "nginx1",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- }
- ],
- "timeGrain": "auto"
- },
- "datasource": {
- "type": "grafana-azure-monitor-datasource",
- "uid": "azure-monitor-oob"
- },
- "hide": false,
- "queryType": "Azure Monitor",
- "refId": "B",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- },
- {
- "azureMonitor": {
- "aggregation": "Count",
- "allowedTimeGrainsMs": [
- 60000,
- 300000,
- 900000,
- 1800000,
- 3600000,
- 21600000,
- 43200000,
- 86400000
- ],
- "customNamespace": "NGINX SSL Statistics",
- "dimensionFilters": [
- {
- "dimension": "server_zone",
- "filters": [],
- "operator": "eq"
- }
- ],
- "metricName": "plus.http.ssl.handshakes.failed",
- "metricNamespace": "nginx.nginxplus/nginxdeployments",
- "region": "westus2",
- "resources": [
- {
- "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
- "region": "westus2",
- "resourceGroup": "cakker",
- "resourceName": "nginx1",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- }
- ],
- "timeGrain": "auto"
- },
- "datasource": {
- "type": "grafana-azure-monitor-datasource",
- "uid": "azure-monitor-oob"
- },
- "hide": false,
- "queryType": "Azure Monitor",
- "refId": "C",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- }
- ],
- "title": "SSL Metrics",
- "type": "timeseries"
- },
- {
- "datasource": {
- "type": "grafana-azure-monitor-datasource",
- "uid": "azure-monitor-oob"
- },
- "description": "",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "palette-classic"
- },
- "custom": {
- "axisBorderShow": false,
- "axisCenteredZero": false,
- "axisColorMode": "text",
- "axisLabel": "",
- "axisPlacement": "auto",
- "barAlignment": 0,
- "drawStyle": "line",
- "fillOpacity": 0,
- "gradientMode": "none",
- "hideFrom": {
- "legend": false,
- "tooltip": false,
- "viz": false
- },
- "insertNulls": false,
- "lineInterpolation": "linear",
- "lineWidth": 3,
- "pointSize": 5,
- "scaleDistribution": {
- "type": "linear"
- },
- "showPoints": "auto",
- "spanNulls": false,
- "stacking": {
- "group": "A",
- "mode": "none"
- },
- "thresholdsStyle": {
- "mode": "off"
- }
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 80
- }
- ]
- },
- "unitScale": true
- },
- "overrides": []
- },
- "gridPos": {
- "h": 10,
- "w": 12,
- "x": 0,
- "y": 18
- },
- "id": 5,
- "options": {
- "legend": {
- "calcs": [],
- "displayMode": "list",
- "placement": "bottom",
- "showLegend": true
- },
- "tooltip": {
- "mode": "single",
- "sort": "none"
- }
- },
- "targets": [
- {
- "azureMonitor": {
- "aggregation": "Average",
- "allowedTimeGrainsMs": [
- 60000,
- 300000,
- 900000,
- 1800000,
- 3600000,
- 21600000,
- 43200000,
- 86400000
- ],
- "customNamespace": "NGINX Upstream Statistics",
- "dimensionFilters": [
- {
- "dimension": "upstream",
- "filters": [],
- "operator": "eq"
- }
- ],
- "metricName": "plus.http.upstream.peers.response.time",
- "metricNamespace": "nginx.nginxplus/nginxdeployments",
- "region": "westus2",
- "resources": [
- {
- "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
- "region": "westus2",
- "resourceGroup": "cakker",
- "resourceName": "nginx1",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- }
- ],
- "timeGrain": "auto"
- },
- "datasource": {
- "type": "grafana-azure-monitor-datasource",
- "uid": "azure-monitor-oob"
- },
- "queryType": "Azure Monitor",
- "refId": "A",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- }
- ],
- "title": "Upstream Response Time",
- "type": "timeseries"
- },
- {
- "datasource": {
- "type": "grafana-azure-monitor-datasource",
- "uid": "azure-monitor-oob"
- },
- "description": "",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "palette-classic"
- },
- "custom": {
- "axisBorderShow": false,
- "axisCenteredZero": false,
- "axisColorMode": "text",
- "axisLabel": "",
- "axisPlacement": "auto",
- "barAlignment": 0,
- "drawStyle": "line",
- "fillOpacity": 0,
- "gradientMode": "none",
- "hideFrom": {
- "legend": false,
- "tooltip": false,
- "viz": false
- },
- "insertNulls": false,
- "lineInterpolation": "linear",
- "lineWidth": 3,
- "pointSize": 5,
- "scaleDistribution": {
- "type": "linear"
- },
- "showPoints": "auto",
- "spanNulls": false,
- "stacking": {
- "group": "A",
- "mode": "none"
- },
- "thresholdsStyle": {
- "mode": "off"
- }
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 80
- }
- ]
- },
- "unitScale": true
- },
- "overrides": []
- },
- "gridPos": {
- "h": 10,
- "w": 12,
- "x": 12,
- "y": 18
- },
- "id": 6,
- "options": {
- "legend": {
- "calcs": [],
- "displayMode": "list",
- "placement": "bottom",
- "showLegend": true
- },
- "tooltip": {
- "mode": "single",
- "sort": "none"
- }
- },
- "targets": [
- {
- "azureMonitor": {
- "aggregation": "Count",
- "allowedTimeGrainsMs": [
- 60000,
- 300000,
- 900000,
- 1800000,
- 3600000,
- 21600000,
- 43200000,
- 86400000
- ],
- "customNamespace": "NGINX Upstream Statistics",
- "dimensionFilters": [
- {
- "dimension": "upstream",
- "filters": [],
- "operator": "eq"
- }
- ],
- "metricName": "plus.http.upstream.peers.health_checks.checks",
- "metricNamespace": "nginx.nginxplus/nginxdeployments",
- "region": "westus2",
- "resources": [
- {
- "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
- "region": "westus2",
- "resourceGroup": "cakker",
- "resourceName": "nginx1",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- }
- ],
- "timeGrain": "auto"
- },
- "datasource": {
- "type": "grafana-azure-monitor-datasource",
- "uid": "azure-monitor-oob"
- },
- "queryType": "Azure Monitor",
- "refId": "A",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- },
- {
- "azureMonitor": {
- "aggregation": "Count",
- "allowedTimeGrainsMs": [
- 60000,
- 300000,
- 900000,
- 1800000,
- 3600000,
- 21600000,
- 43200000,
- 86400000
- ],
- "customNamespace": "NGINX Upstream Statistics",
- "dimensionFilters": [
- {
- "dimension": "upstream",
- "filters": [],
- "operator": "eq"
- }
- ],
- "metricName": "plus.http.upstream.peers.health_checks.unhealthy",
- "metricNamespace": "nginx.nginxplus/nginxdeployments",
- "region": "westus2",
- "resources": [
- {
- "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
- "region": "westus2",
- "resourceGroup": "cakker",
- "resourceName": "nginx1",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- }
- ],
- "timeGrain": "auto"
- },
- "datasource": {
- "type": "grafana-azure-monitor-datasource",
- "uid": "azure-monitor-oob"
- },
- "hide": false,
- "queryType": "Azure Monitor",
- "refId": "B",
- "subscription": "7a0bb4ab-c5a7-46b3-b4ad-c10376166020"
- }
- ],
- "title": "Health Checks",
- "type": "timeseries"
- }
- ],
- "refresh": "5s",
- "schemaVersion": 39,
- "tags": [],
- "templating": {
- "list": []
- },
- "time": {
- "from": "now-5m",
- "to": "now"
- },
- "timeRangeUpdatedDuringEditOrView": false,
- "timepicker": {
- "hidden": false
- },
- "timezone": "",
- "title": "Nginx4AzureDashboard",
- "uid": "b70b2c11-a815-4506-a10a-3f15b6970797",
- "version": 14,
- "weekStart": ""
- }
-
\ No newline at end of file
diff --git a/ca-notes/n4a-configs/aks1-dashboard-upstreams.conf b/ca-notes/n4a-configs/aks1-dashboard-upstreams.conf
deleted file mode 100644
index 0dbcd7d..0000000
--- a/ca-notes/n4a-configs/aks1-dashboard-upstreams.conf
+++ /dev/null
@@ -1,15 +0,0 @@
-# Nginx 4 Azure to NIC, AKS Node for Upstreams
-# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
-#
-# nginx ingress dashboard
-#
-upstream nic1_dashboard {
- zone nic1_dashboard 256k;
-
- # from nginx-ingress NodePort Service / aks Node IPs
- server aks-userpool-76919110-vmss000002:32090; #aks node1:
- server aks-userpool-76919110-vmss000003:32090; #aks node2:
-
- keepalive 8;
-
-}
diff --git a/ca-notes/n4a-configs/aks1-juice-headless.conf b/ca-notes/n4a-configs/aks1-juice-headless.conf
deleted file mode 100644
index 171fc60..0000000
--- a/ca-notes/n4a-configs/aks1-juice-headless.conf
+++ /dev/null
@@ -1,22 +0,0 @@
-# Nginx 4 Azure direct to Juice for Upstreams
-# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
-#
-# direct to juiceshop-svc Headless Service ( no NodePort )
-#
-upstream aks1_juice_direct {
- zone aks1_juice_direct 256k;
-
- least_time last_byte;
-
- # direct to nginx-ingress Headless Service Endpoints
- # Resolvers set to kube-dns Endpoints List
- resolver 172.16.4.64 172.16.4.224 valid=10s ipv6=off status_zone=kube-dns; # 172.16.4.64;
-
- # Server name must follow this format
- # server <service-name>.<namespace>.svc.cluster.local:port
- server juiceshop-svc.juice.svc.cluster.local:3000 resolve;
- # server 172.16.4.74:80;
-
- keepalive 32;
-
-}
diff --git a/ca-notes/n4a-configs/aks1-nic-headless.conf b/ca-notes/n4a-configs/aks1-nic-headless.conf
deleted file mode 100644
index 0a80dec..0000000
--- a/ca-notes/n4a-configs/aks1-nic-headless.conf
+++ /dev/null
@@ -1,22 +0,0 @@
-# Nginx 4 Azure direct to NIC for Upstreams
-# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
-#
-# direct to nginx ingress Headless Service ( no NodePort )
-#
-upstream aks1_nic_direct {
- zone aks1_nic_direct 256k;
-
- least_time last_byte;
-
- # direct to nginx-ingress Headless Service Endpoints
- # Resolvers set to kube-dns Endpoints List
- resolver 172.16.4.64 172.16.4.224 valid=10s ipv6=off status_zone=kube-dns; # 172.16.4.64;
-
- # Server name must follow this format
- # server <service-name>.<namespace>.svc.cluster.local
- server nginx-ingress-headless.nginx-ingress.svc.cluster.local:80 resolve;
- # server 172.16.4.74:80;
-
- keepalive 32;
-
-}
diff --git a/ca-notes/n4a-configs/aks1-upstreams.conf b/ca-notes/n4a-configs/aks1-upstreams.conf
deleted file mode 100644
index fe689c0..0000000
--- a/ca-notes/n4a-configs/aks1-upstreams.conf
+++ /dev/null
@@ -1,17 +0,0 @@
-# Nginx 4 Azure to NIC, AKS Nodes for Upstreams
-# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
-#
-# AKS1 nginx ingress upstreams
-#
-upstream aks1_ingress {
- zone aks1_ingress 256k;
-
- least_time last_byte;
-
- # from nginx-ingress NodePort Service / aks Node IPs
- server aks-userpool-76919110-vmss000002:32080; #aks node1:
- server aks-userpool-76919110-vmss000003:32080; #aks node2:
-
- keepalive 32;
-
-}
diff --git a/ca-notes/n4a-configs/aks2-upstreams.conf b/ca-notes/n4a-configs/aks2-upstreams.conf
deleted file mode 100644
index 5f78a4b..0000000
--- a/ca-notes/n4a-configs/aks2-upstreams.conf
+++ /dev/null
@@ -1,19 +0,0 @@
-# Nginx 4 Azure to NIC, AKS Node for Upstreams
-# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
-#
-# nginx ingress upstreams
-#
-upstream aks2_ingress {
- zone aks2_ingress 256k;
-
- least_time last_byte;
-
- # from nginx-ingress NodePort Service / aks Node IPs
- #server aks-nodepool1-19485366:32080; #aks2 cluster nodes:
- server aks-nodepool1-19485366-vmss00000h:32080; #aks node1:
- server aks-nodepool1-19485366-vmss00000i:32080; #aks node2:
- server aks-nodepool1-19485366-vmss00000j:32080; #aks node3:
-
- keepalive 32;
-
-}
diff --git a/ca-notes/n4a-configs/cafe-docker-upstreams.conf b/ca-notes/n4a-configs/cafe-docker-upstreams.conf
deleted file mode 100644
index 7158929..0000000
--- a/ca-notes/n4a-configs/cafe-docker-upstreams.conf
+++ /dev/null
@@ -1,16 +0,0 @@
-# Nginx 4 Azure, Cafe Nginx Demo Upstreams
-# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
-#
-# cafe-nginx servers
-#
-upstream cafe_nginx {
- zone cafe_nginx 256k;
-
- # from docker compose
- server n4avm1:81;
- server n4avm1:82;
- server n4avm1:83;
-
- keepalive 32;
-
-}
diff --git a/ca-notes/n4a-configs/cafe.nginxazure.build.conf b/ca-notes/n4a-configs/cafe.nginxazure.build.conf
deleted file mode 100644
index cce1698..0000000
--- a/ca-notes/n4a-configs/cafe.nginxazure.build.conf
+++ /dev/null
@@ -1,62 +0,0 @@
-# Nginx 4 Azure - Cafe Nginx HTTP
-# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
-#
-server {
-
- listen 80; # Listening on port 80 on all IP addresses on this machine
-
- server_name cafe.nginxazure.build; # Set hostname to match in request
- status_zone cafe.nginxazure.build;
-
- access_log /var/log/nginx/cafe.example.com.log main;
- # access_log /var/log/nginx/cafe.example.com.log main_ext; Extended Logging
- error_log /var/log/nginx/cafe.example.com_error.log info;
-
- location / {
- #
- # Uncomment to enable HTTP keep-alives
- # include /etc/nginx/includes/keepalive.conf;
-
- # return 200 "You have reached cafe.example.com, location /\n";
-
- # proxy_pass http://n4avm1:32779; # Proxy to another server
- # proxy_pass http://nginx.org; # Proxy to another website
- # proxy_pass http://cafe_nginx; # Proxy AND load balance to a list of servers
- # proxy_pass http://aks1_ingress; # Proxy to AKS Nginx Ingress Controllers NodePort
- # proxy_pass http://aks1_nic_direct; # Proxy to AKS Nginx Ingress Controllers Direct
- proxy_pass http://$upstream; # Use Split Clients config
-
- add_header X-Proxy-Pass SplitClient; # Custom Header
-
- }
-
- # healthchecks for UbuntuVM cafe docker containers
- #location @nginx_cafe_healthcheck {
-
- # health_check;
- # proxy_set_header Host: cafe.nginxazure.build;
- # proxy_pass http://cafe_nginx;
-
- #}
-
- # healthchecks for AKS Cluster 1 Cafe
- #location @aks1_healthcheck {
-
- # health_check;
- # proxy_http_version 1.1;
- # proxy_set_header Connection: "";
- # proxy_set_header Host: cafe.nginxazure.build;
- # proxy_pass http://aks1_ingress;
-
- #}
-
- # healthchecks for AKS Cluster 2 Dashboard
- #location @aks2_healthcheck {
-
- # health_check interval=10 fails=3 passes=1;
- # proxy_set_header Host: cafe.nginxazure.build;
- # proxy_pass http://aks2_ingress;
-
- #}
-
-}
diff --git a/ca-notes/n4a-configs/mygarage-docker-upstreams.conf b/ca-notes/n4a-configs/mygarage-docker-upstreams.conf
deleted file mode 100644
index d6745de..0000000
--- a/ca-notes/n4a-configs/mygarage-docker-upstreams.conf
+++ /dev/null
@@ -1,14 +0,0 @@
-# Nginx 4 Azure, My Garage Demo Upstreams
-# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
-#
-# my garage servers
-upstream mygarage {
- zone mygarage 256k;
-
- # from docker compose
-
- server n4avm1:80;
-
- keepalive 32;
-
-}
diff --git a/ca-notes/n4a-configs/mygarage.nginxazure.build.conf b/ca-notes/n4a-configs/mygarage.nginxazure.build.conf
deleted file mode 100644
index f375c52..0000000
--- a/ca-notes/n4a-configs/mygarage.nginxazure.build.conf
+++ /dev/null
@@ -1,24 +0,0 @@
-# Nginx 4 Azure - My Garage HTTP
-# Chris Akker, Shouvik Dutta, Adam Currier, Steve Wagner - Mar 2024
-#
-server {
-
- listen 80; # Listening on port 80 on all IP addresses on this machine
-
- server_name mygarage.nginxazure.build; # Set hostname to match in request
- status_zone mygarage.nginxazure.build;
-
- access_log /var/log/nginx/mygarage.nginxazure.build.log main;
- # access_log /var/log/nginx/cafe.example.com.log main_ext; Extended Logging
- error_log /var/log/nginx/mygarage.nginxazure.build_error.log info;
-
- location / {
- #
- # return 200 "You have reached mygarage.example.com, location /\n";
-
- proxy_pass http://mygarage; # Proxy to My Garage
-
- add_header X-Proxy-Pass mygarage;
-
- }
-}
diff --git a/ca-notes/n4a-configs/new-dashboard.conf b/ca-notes/n4a-configs/new-dashboard.conf
deleted file mode 100644
index a66a219..0000000
--- a/ca-notes/n4a-configs/new-dashboard.conf
+++ /dev/null
@@ -1,31 +0,0 @@
-# Nginx 4 Azure - Nginx Ingress Dashboards HTTP
-# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
-#
-server {
-
- listen 9000; # Listening on port 9000 on all IP addresses on this machine
-
- server_name dashboard.nginxazure.build; # Set hostname to match in request
- status_zone dashboard.nginxazure.build; # Metrics zone name
-
- access_log /var/log/nginx/dashboard.example.com.log main;
- error_log /var/log/nginx/dashboard.example.com_error.log info;
-
-
- location = /aks1/dashboard.html {
- proxy_pass http://nic1_dashboard/dashboard.html;
- }
-
- location /aks1/api/ {
- proxy_pass http://nic1_dashboard/api/;
- }
-
- location = /aks2/dashboard.html {
- proxy_pass http://nic2_dashboard/dashboard.html;
- }
-
- location /aks2/api/ {
- proxy_pass http://nic2_dashboard/api/;
- }
-
-}
diff --git a/ca-notes/n4a-configs/stream/redis-leader-upstreams.conf b/ca-notes/n4a-configs/stream/redis-leader-upstreams.conf
deleted file mode 100644
index 6716a73..0000000
--- a/ca-notes/n4a-configs/stream/redis-leader-upstreams.conf
+++ /dev/null
@@ -1,17 +0,0 @@
-# Nginx 4 Azure to NIC, AKS Node for Upstreams
-# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
-#
-# nginx ingress upstreams
-#
-upstream aks2_redis_leader {
- zone aks2_redis_leader 256k;
-
- least_time last_byte;
-
- # from nginx-ingress NodePort Service / aks Node IPs
- #server aks-nodepool1-19485366:32080; #aks2 cluster nodes:
- server aks-nodepool1-19485366-vmss00000h:32379; #aks node1:
- server aks-nodepool1-19485366-vmss00000i:32379; #aks node2:
- server aks-nodepool1-19485366-vmss00000j:32379; #aks node3:
-
-}
diff --git a/ca-notes/n4a-configs/windowsvm-upstream.conf b/ca-notes/n4a-configs/windowsvm-upstream.conf
deleted file mode 100644
index 97619e2..0000000
--- a/ca-notes/n4a-configs/windowsvm-upstream.conf
+++ /dev/null
@@ -1,16 +0,0 @@
-# Nginx 4 Azure to Windows VM for Upstreams
-# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
-#
-# Windows VM upstreams
-#
-upstream windowsvm {
- zone windowsvm 256k;
-
- least_time last_byte;
-
- # windows VM upstreams
- server 172.16.5.120:80;
-
- keepalive 32;
-
-}
diff --git a/ca-notes/ubuntuvm/cafe/docker-compose.yml b/ca-notes/ubuntuvm/cafe/docker-compose.yml
deleted file mode 100644
index 5222e37..0000000
--- a/ca-notes/ubuntuvm/cafe/docker-compose.yml
+++ /dev/null
@@ -1,30 +0,0 @@
-# NGINX webservers with ingress-demo pages
-# NGINX for Azure, Mar 2024
-# Chris Akker, Shouvik Dutta, Adam Currier
-#
-version: '3.3'
-services:
- web1:
- hostname: docker-web1
- container_name: docker-web1
- image: nginxinc/ingress-demo # Image from Docker Hub
- restart: always
- ports:
- - "81:80" # Open for HTTP
- - "4431:443" # Open for HTTPS
- web2:
- hostname: docker-web2
- container_name: docker-web2
- image: nginxinc/ingress-demo
- restart: always
- ports:
- - "82:80"
- - "4432:443"
- web3:
- hostname: docker-web3
- container_name: docker-web3
- image: nginxinc/ingress-demo
- restart: always
- ports:
- - "83:80"
- - "4433:443"
\ No newline at end of file
diff --git a/ca-notes/ubuntuvm/mygarage/docker-compose.yml b/ca-notes/ubuntuvm/mygarage/docker-compose.yml
deleted file mode 100644
index 29a61f0..0000000
--- a/ca-notes/ubuntuvm/mygarage/docker-compose.yml
+++ /dev/null
@@ -1,14 +0,0 @@
-version: '3.8'
-
-services:
- the-garage:
- image: ghcr.io/ciroque/the-garage:${IMAGE_SHA:-latest}
- ports:
- - "8080:8080"
-
- my-garage:
- image: ghcr.io/ciroque/my-garage:${IMAGE_SHA:-latest}
- ports:
- - "80:80"
- depends_on:
- - the-garage
diff --git a/ca-notes/ubuntuvm/ubuntu-build-notes.md b/ca-notes/ubuntuvm/ubuntu-build-notes.md
deleted file mode 100644
index eace350..0000000
--- a/ca-notes/ubuntuvm/ubuntu-build-notes.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Notes for building the Ubuntu VM for N4A workshop
-
-Create a new VM, Ubuntu 22.02, 8GB ram, 128GB disk.
-Use the same Vnet Subnet as your AKS cluster for the Networking.
-
-Modify the NSG for SSH access, to allow your IP address to SSH to the VM.
-
-Install docker-ce
-
- 14 sudo apt update
- 15 sudo apt install apt-transport-https ca-certificates curl software-properties-common
- 16 curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
- 17 echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
- 18 sudo apt update
- 19 apt-cache policy docker-ce
- 20 sudo apt install docker-ce
-
-Install docker-compose
-
- 39 sudo apt install docker-compose
- 40 mkdir -p ~/.docker/cli-plugins/
- 41 curl -SL https://github.com/docker/compose/releases/download/v2.3.3/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
- 42 chmod +x ~/.docker/cli-plugins/docker-compose
- 43 docker compose version
-
-
-Test docker works
-
-sudo docker run hello-world
-
-Install net-tools
-
- 48 sudo apt-get update
- 49 sudo apt-get install net-tools
-
-Create folder /home/azureuser/cafe
-
-azureuser@n4avm1: mkdir cafe
-
-Create docker-compose.yml file
-
-Start three Cafe demo containers
-
-docker-compose up -d
diff --git a/labs/lab0/media/docker-icon.png b/labs/lab0/media/docker-icon.png
new file mode 100644
index 0000000..02ee3f1
Binary files /dev/null and b/labs/lab0/media/docker-icon.png differ
diff --git a/labs/lab0/media/kubernetes-icon.png b/labs/lab0/media/kubernetes-icon.png
new file mode 100644
index 0000000..63c679e
Binary files /dev/null and b/labs/lab0/media/kubernetes-icon.png differ
diff --git a/labs/lab0/media/n4a-workshop-diagram-r7.png b/labs/lab0/media/n4a-workshop-diagram-r7.png
new file mode 100644
index 0000000..31a84e6
Binary files /dev/null and b/labs/lab0/media/n4a-workshop-diagram-r7.png differ
diff --git a/labs/lab0/media/nginx-azure-icon.png b/labs/lab0/media/nginx-azure-icon.png
new file mode 100644
index 0000000..70ab132
Binary files /dev/null and b/labs/lab0/media/nginx-azure-icon.png differ
diff --git a/labs/lab0/media/nginx-plus-icon.png b/labs/lab0/media/nginx-plus-icon.png
new file mode 100644
index 0000000..21f0e25
Binary files /dev/null and b/labs/lab0/media/nginx-plus-icon.png differ
diff --git a/labs/lab0/media/redis-icon.png b/labs/lab0/media/redis-icon.png
new file mode 100644
index 0000000..9d34aff
Binary files /dev/null and b/labs/lab0/media/redis-icon.png differ
diff --git a/labs/lab0/readme.md b/labs/lab0/readme.md
index f113c03..379940e 100644
--- a/labs/lab0/readme.md
+++ b/labs/lab0/readme.md
@@ -2,50 +2,52 @@
## Introduction
-In this lab, you will build ( x,y,x ).
+In this Workshop, you will build a working Lab environment in Azure and use Nginx for Azure to control traffic to these Azure resources. The architecture you will build is shown in this diagram:
-< Lab specific Images here, in the /media sub-folder >
-
-NGINX aaS | Docker
-:-------------------------:|:-------------------------:
- |
+
-## Learning Objectives
+To build this environment, your computer hardware, software, and applications must be properly installed and functional. Below is the list of prerequisites needed to successfully complete this Workshop as a Student.
+
+>It is `highly recommended` for Students attending this Workshop to be proficient with NGINX and Azure, and to have some experience with Kubernetes and Docker administration, networking tools, and Load Balancing concepts. An `Azure Subscription` and Admin-level access to the Azure Portal are required. Previous experience with Visual Studio Code and Redis Tools is also recommended.
-By the end of the lab you will be able to:
+
-- Introduction to `xx`
-- Build an `yyy` Nginx configuration
-- Test access to your lab enviroment with Curl and Chrome
-- Investigate `zzz`
+## Prerequisites
+In Lab0, the requirements for both the Student and the Azure environment are described. *It is imperative that you have the appropriate computer, tools, and Azure access to successfully complete the Workshop.*
-## Pre-Requisites
+
-- You must have `aaaa` installed and running
-- You must have `bbbbb` installed
-- See `Lab0` for instructions on setting up your system for this Workshop
-- Familiarity with basic Linux commands and commandline tools
-- Familiarity with basic Docker concepts and commands
-- Familiarity with basic HTTP protocol
+NGINXaaS for Azure | NGINX Plus | Kubernetes | Docker | Redis
+:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:
+ |  |  |  | 
-### Lab exercise 1
+### Student Hardware/Software/Azure Requirements
-
+Verify you have the proper computer requirements - hardware and software (a quick verification sketch follows this list).
+- Hardware: Laptop, Admin rights, Internet connection
+- Software: Visual Studio Code, Terminal, Chrome, Docker, and the AKS and AZ (Azure) CLI tools.
+- Verify you have proper computer skills: Linux CLI, files, SSH/Terminal, Docker/Compose, Azure Portal, Load Balancing concepts, Linux tools, Azure CLI
+- Verify you have the proper access to Azure resources: Azure Subscription with Admin level access
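+
+A minimal sketch for checking that the core tools are installed and on your PATH (the exact versions are not critical; these are standard CLI version checks - adjust for your setup):
+
+```bash
+# Verify the core Workshop tools respond from your terminal
+az version                  # Azure CLI
+docker --version            # Docker Engine
+kubectl version --client    # Kubernetes CLI, used with the AKS clusters
+code --version              # Visual Studio Code
+```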
-### Lab exercise 2
-
+
+
+## Required Skills
-### Lab exercise 3
+- The Nginx for Azure NGINXperts Workshop has minimum REQUIRED Nginx skills: Students must be familiar with Nginx operation, configuration, and concepts for HTTP traffic.
+- The NGINXperts Basics Workshop is HIGHLY recommended; students should have taken that workshop prior to this one.
+- The NGINXperts Plus Ingress Controller Workshop is also HIGHLY recommended; students should have taken that workshop prior to this one.
+- Azure admin skills are needed; previous training from Microsoft Learn is HIGHLY recommended.
+- Recommended: TLS, DNS, HTTP caching, Grafana, Redis
-
+
-### << more exercises/steps>>
+[NGINXperts Basics Workshop](https://github.com/nginxinc/nginx-basics-workshops)
-
+[NGINXperts Nginx Plus Ingress Controller Workshop](https://github.com/nginxinc/nginx-ingress-workshops/tree/main/Plus/labs)
@@ -71,7 +73,8 @@ By the end of the lab you will be able to:
- Chris Akker - Solutions Architect - Community and Alliances @ F5, Inc.
- Shouvik Dutta - Solutions Architect - Community and Alliances @ F5, Inc.
- Adam Currier - Solutions Architect - Community and Alliances @ F5, Inc.
+- Steve Wagner - Solutions Architect - Community and Alliances @ F5, Inc.
-------------
-Navigate to ([Lab1](../lab1/readme.md) | [LabX](../labX/readme.md))
+Navigate to ([Lab1](../lab1/readme.md) | [LabGuide](../readme.md))
diff --git a/labs/lab1/logs.json b/labs/lab1/logs.json
deleted file mode 100644
index b4dffc0..0000000
--- a/labs/lab1/logs.json
+++ /dev/null
@@ -1 +0,0 @@
-[{category:NginxLogs,enabled:true,retention-policy:{enabled:false,days:0}}]
\ No newline at end of file
diff --git a/labs/lab1/media/enable_system_identity.png b/labs/lab1/media/enable_system_identity.png
deleted file mode 100644
index 066d873..0000000
Binary files a/labs/lab1/media/enable_system_identity.png and /dev/null differ
diff --git a/labs/lab1/media/enable_system_identity_success.png b/labs/lab1/media/enable_system_identity_success.png
deleted file mode 100644
index 3c8184a..0000000
Binary files a/labs/lab1/media/enable_system_identity_success.png and /dev/null differ
diff --git a/labs/lab1/media/lab1_azure-network.png b/labs/lab1/media/lab1_azure-network.png
new file mode 100644
index 0000000..4f44498
Binary files /dev/null and b/labs/lab1/media/lab1_azure-network.png differ
diff --git a/labs/lab1/media/lab1_azure-subnets.png b/labs/lab1/media/lab1_azure-subnets.png
new file mode 100644
index 0000000..44add21
Binary files /dev/null and b/labs/lab1/media/lab1_azure-subnets.png differ
diff --git a/labs/lab1/media/lab1_copy_ip_address.png b/labs/lab1/media/lab1_copy_ip_address.png
new file mode 100644
index 0000000..37d32a2
Binary files /dev/null and b/labs/lab1/media/lab1_copy_ip_address.png differ
diff --git a/labs/lab1/media/lab1_diagram.png b/labs/lab1/media/lab1_diagram.png
new file mode 100644
index 0000000..80ad694
Binary files /dev/null and b/labs/lab1/media/lab1_diagram.png differ
diff --git a/labs/lab1/media/n4a_index_page.png b/labs/lab1/media/lab1_n4a_index_page.png
similarity index 100%
rename from labs/lab1/media/n4a_index_page.png
rename to labs/lab1/media/lab1_n4a_index_page.png
diff --git a/labs/lab1/media/nginx_conf_editor.png b/labs/lab1/media/lab1_nginx_conf_editor.png
similarity index 100%
rename from labs/lab1/media/nginx_conf_editor.png
rename to labs/lab1/media/lab1_nginx_conf_editor.png
diff --git a/labs/lab1/media/nginx_conf_populate.png b/labs/lab1/media/lab1_nginx_conf_populate.png
similarity index 100%
rename from labs/lab1/media/nginx_conf_populate.png
rename to labs/lab1/media/lab1_nginx_conf_populate.png
diff --git a/labs/lab1/media/nginx_conf_submit_success.png b/labs/lab1/media/lab1_nginx_conf_submit_success.png
similarity index 100%
rename from labs/lab1/media/nginx_conf_submit_success.png
rename to labs/lab1/media/lab1_nginx_conf_submit_success.png
diff --git a/labs/lab1/media/portal_n4a_home.png b/labs/lab1/media/lab1_portal_n4a_home.png
similarity index 100%
rename from labs/lab1/media/portal_n4a_home.png
rename to labs/lab1/media/lab1_portal_n4a_home.png
diff --git a/labs/lab1/media/portal_rg_home.png b/labs/lab1/media/lab1_portal_rg_home.png
similarity index 100%
rename from labs/lab1/media/portal_rg_home.png
rename to labs/lab1/media/lab1_portal_rg_home.png
diff --git a/labs/lab1/media/nginx-azure-icon.png b/labs/lab1/media/nginx-azure-icon.png
new file mode 100644
index 0000000..70ab132
Binary files /dev/null and b/labs/lab1/media/nginx-azure-icon.png differ
diff --git a/labs/lab1/readme.md b/labs/lab1/readme.md
index 345820c..3488ab2 100644
--- a/labs/lab1/readme.md
+++ b/labs/lab1/readme.md
@@ -1,28 +1,30 @@
-# Azure Network Build and Nginx for Azure Overview
+# Azure Network Build and NGINX for Azure Overview
## Introduction
-In this lab, you will be adding and configuring the Azure Networking components needed for this workshop. This will require a few network resources, and a Network Security Group to allow incoming traffic to your Nginx for Azure workshop resources. Then you will explore the Nginx for Azure product, as a quick Overview of what it is and how to deploy it.
+In this lab, you will add and configure the Azure components needed for this workshop. This requires a few network resources, a Network Security Group, and a Public IP to allow incoming traffic to your NGINX for Azure workshop resource. You will also deploy an NGINX for Azure resource. Then you will explore the Nginx for Azure product, as a quick overview of what it is and how to deploy it.
-< Lab specific Images here, in the /media sub-folder >
+
-NGINX aaS | Docker
-:-------------------------:|:-------------------------:
- |
+NGINX aaS for Azure |
+:-------------------------:|
+|
+
+
## Learning Objectives
By the end of the lab you will be able to:
-- Setup your Azure resource group for this workshop
+- Setup your Azure Resource Group for this workshop
- Setup your Azure Virtual Network, Subnets and Network Security Group for inbound traffic
-- Create Public IP and user assigned managed identity to access NGINX for Azure
+- Create a Public IP and user assigned managed identity to access NGINX for Azure
- Deploy an Nginx for Azure resource
- Explore Nginx for Azure
- Create an initial Nginx configuration for testing
-- Create Log Analytics workspace to collect NGINX error and access logs from NGINX for azure
+- Create Log Analytics workspace to collect NGINX error and access logs from Nginx for Azure
-## Pre-Requisites
+## Prerequisites
- You must have an Azure account
- You must have the Azure CLI software installed on your local system
@@ -33,7 +35,11 @@ By the end of the lab you will be able to:
-### Setup your Azure resource group for this workshop
+
+
+
+
+### Setup your Azure Resource Group for this workshop
1. On your local machine, open a terminal and make sure you have the Azure Command Line Interface (CLI) installed by running the command below.
@@ -45,7 +51,9 @@ By the end of the lab you will be able to:
1. Create a new Azure Resource Group called `<name>-workshop`, where `<name>` is your last name (or any unique value). This will hold all the Azure resources that you create for this workshop.
- Also you need to specify a Azure location while creating the resource group. Check out the [Azure Latency Test](https://www.azurespeed.com/Azure/Latency) and select a region that provides the lowest latency.
+ Check out the available [Datacenter regions](https://azure.microsoft.com/en-us/explore/global-infrastructure/geographies/#geographies) and decide on a region that is closest to you and meets your needs.
+
+ You can make use of [Azure Latency Test](https://www.azurespeed.com/Azure/Latency) to select a region that provides the lowest latency.
I am located in Chicago, Illinois so I will opt to use `Central US` as my Azure location.
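+
+   For reference, a minimal sketch of creating such a resource group (illustrative name and location - substitute your own values):
+
+   ```bash
+   az group create \
+   --name s.dutta-workshop \
+   --location centralus
+   ```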
@@ -62,14 +70,29 @@ By the end of the lab you will be able to:
az group list -o table | grep workshop
```
+
+
### Setup your Azure Virtual Network, Subnets and Network Security Group
+
+
+You will create an Azure Vnet for this Workshop. Inside this Vnet are four different subnets, representing various backend networks for Azure resources like Nginx for Azure, VMs, and Kubernetes clusters.
+
+Name | Subnet | Assignment
+:---:|:----:|:---:
+n4a-subnet | 172.16.1.0/24 | Nginx for Azure
+vm-subnet | 172.16.2.0/24 | Virtual Machines
+aks1-subnet | 172.16.10.0/23 | AKS Cluster #1
+aks2-subnet | 172.16.20.0/23 | AKS Cluster #2
+
+
+
1. Create a virtual network (vnet) named `n4a-vnet` using the command below.
```bash
## Set environment variables
- MY_RESOURCEGROUP=s.dutta-workshop
- MY_PUBLICIP=$(curl -4 ifconfig.co)
+ export MY_RESOURCEGROUP=s.dutta-workshop
+ export MY_PUBLICIP=$(curl ipinfo.io/ip)
```
```bash
@@ -103,7 +126,9 @@ By the end of the lab you will be able to:
}
```
-1. Create a network security group(NSG) named `n4a-nsg` using below command.
+ > **NOTE:** Within the output JSON you should see a `"provisioningState": "Succeeded"` field, which confirms that the command successfully provisioned the resource.
+
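+   If you prefer checking this from the CLI instead of reading the JSON, a one-line sketch (assuming the vnet name used in this lab):
+
+   ```bash
+   az network vnet show \
+   --resource-group $MY_RESOURCEGROUP \
+   --name n4a-vnet \
+   --query provisioningState \
+   --output tsv
+   ```
+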
+2. Create a network security group (NSG) named `n4a-nsg` using the command below.
```bash
az network nsg create \
@@ -111,7 +136,7 @@ By the end of the lab you will be able to:
--name n4a-nsg
```
-1. Add two NSG rules to allow any traffic on port 80 and 443 from your system's public IP. Run below command to create the two rules.
+3. Add two NSG rules to allow any traffic on ports 80 and 443 from your system's public IP. Run the commands below to create the two rules.
```bash
## Rule 1 for HTTP traffic
@@ -199,7 +224,9 @@ By the end of the lab you will be able to:
}
```
-1. Create a subnet that you will use with NGINX for Azure resource. You will also attach the NSG that you just created to this subnet.
+ > **NOTE:** Within the output JSON you should see a `"provisioningState": "Succeeded"` field, which confirms that the command successfully provisioned the resource.
+
+4. Create a subnet that you will use with the NGINX for Azure resource. You will also attach the NSG that you just created to this subnet.
```bash
az network vnet subnet create \
@@ -244,9 +271,21 @@ By the end of the lab you will be able to:
}
```
-1. In similar fashion create two more subnets that would be used with AKS cluster in later labs.
+ > **NOTE:** Within the output JSON you should see a `"provisioningState": "Succeeded"` field, which confirms that the command successfully provisioned the resource.
+
+5. In a similar fashion, create three more subnets that will be used with the Docker virtual machines and AKS clusters in later labs.
+
+ ```bash
+ # VM Subnet
+ az network vnet subnet create \
+ --resource-group $MY_RESOURCEGROUP \
+ --name vm-subnet \
+ --vnet-name n4a-vnet \
+ --address-prefixes 172.16.2.0/24
+ ```
```bash
+ # AKS1 Subnet
az network vnet subnet create \
--resource-group $MY_RESOURCEGROUP \
--name aks1-subnet \
@@ -255,6 +294,7 @@ By the end of the lab you will be able to:
```
```bash
+ # AKS2 Subnet
az network vnet subnet create \
--resource-group $MY_RESOURCEGROUP \
--name aks2-subnet \
@@ -262,6 +302,12 @@ By the end of the lab you will be able to:
--address-prefixes 172.16.20.0/23
```
+Your completed Vnet/Subnets should look similar to this (a CLI check follows the screenshot):
+
+
+
+
+
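+If you prefer the CLI over the portal, a quick sketch to list the subnets (assuming the vnet name used above):
+
+```bash
+az network vnet subnet list \
+--resource-group $MY_RESOURCEGROUP \
+--vnet-name n4a-vnet \
+--output table
+```
+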
### Create Public IP and user assigned managed identity to access NGINX for Azure
1. Create a Public IP that you will attach to NGINX for Azure. You will use this public IP to access NGINX for Azure from outside the Azure network. Use below command to create a new Public IP.
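+
+   A representative sketch of such a command (illustrative - NGINXaaS typically requires a Standard SKU, statically allocated public IP, and the name `n4a-publicIP` matches the deployment command used later in this lab):
+
+   ```bash
+   az network public-ip create \
+   --resource-group $MY_RESOURCEGROUP \
+   --name n4a-publicIP \
+   --sku Standard \
+   --allocation-method Static
+   ```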
@@ -302,7 +348,9 @@ By the end of the lab you will be able to:
}
```
-1. Create a user assigned managed identity that would be tied to the NGINX for Azure resource. This managed identity would be used to read certificates and keys from Azure keyvault in later labs.
+ > **NOTE:** Within the output JSON you should see a `"provisioningState": "Succeeded"` field, which confirms that the command successfully provisioned the resource.
+
+1. Create a user assigned managed identity that will be tied to the NGINX for Azure resource. This managed identity will be used to read certificates and keys from Azure key vault in later labs.
```bash
az identity create \
@@ -326,24 +374,27 @@ By the end of the lab you will be able to:
}
```
-### Deploy an Nginx for Azure resource
+
-1. Once all the previous Azure resources have been created, you will then create the NGINX for Azure resource using below commands (This would take couple of minutes to finish)
+## Deploy an Nginx for Azure Resource
+
+
+
+1. Once all the previous Azure resources have been created, you will create the NGINX for Azure resource using the commands below (this will take a couple of minutes to finish):
```bash
## Set environment variables
- MY_RESOURCEGROUP=s.dutta-workshop
- MY_SUBSCRIPTIONID=$(az account show --query id -o tsv)
+ export MY_RESOURCEGROUP=s.dutta-workshop
+ export MY_SUBSCRIPTIONID=$(az account show --query id -o tsv)
```
```bash
az nginx deployment create \
--resource-group $MY_RESOURCEGROUP \
--name nginx4a \
- --location centralus \
--sku name="standard_Monthly" \
--network-profile front-end-ip-configuration="{public-ip-addresses:[{id:/subscriptions/$MY_SUBSCRIPTIONID/resourceGroups/$MY_RESOURCEGROUP/providers/Microsoft.Network/publicIPAddresses/n4a-publicIP}]}" network-interface-configuration="{subnet-id:/subscriptions/$MY_SUBSCRIPTIONID/resourceGroups/$MY_RESOURCEGROUP/providers/Microsoft.Network/virtualNetworks/n4a-vnet/subnets/n4a-subnet}" \
- --identity="{type:UserAssigned,userAssignedIdentities:{/subscriptions/$MY_SUBSCRIPTIONID/resourceGroups/$MY_RESOURCEGROUP/providers/Microsoft.ManagedIdentity/userAssignedIdentities/n4a-useridentity:{}}}"
+ --identity="{type:'SystemAssigned, UserAssigned',userAssignedIdentities:{/subscriptions/$MY_SUBSCRIPTIONID/resourceGroups/$MY_RESOURCEGROUP/providers/Microsoft.ManagedIdentity/userAssignedIdentities/n4a-useridentity:{}}}"
```
```bash
@@ -351,7 +402,9 @@ By the end of the lab you will be able to:
{
"id": "/subscriptions//resourceGroups/s.dutta-workshop/providers/Nginx.NginxPlus/nginxDeployments/nginx4a",
"identity": {
- "type": "UserAssigned",
+ "principalId": "xxxx-xxxx-xxxx-xxxx-xxxx",
+ "tenantId": "xxxx-xxxx-xxxx-xxxx-xxxx",
+ "type": "SystemAssigned, UserAssigned",
"userAssignedIdentities": {
"/subscriptions//resourceGroups/s.dutta-workshop/providers/Microsoft.ManagedIdentity/userAssignedIdentities/n4a-useridentity": {
"clientId": "xxxx-xxxx-xxxx-xxxx-xxxx",
@@ -404,48 +457,15 @@ By the end of the lab you will be able to:
}
```
-### Explore Nginx for Azure
-
-In this section you will be looking at all the resources that you created within Azure portal.
-
-1. Open Azure portal within your browser and then open your resource group.
- 
-
-1. Click on your NGINX for Azure resource (nginx4a) which should open the Overview section of your resource. You can see useful information like Status, NGINX for Azure resource's public IP, which nginx version is running, which vnet/subnet it is tied to etc.
- 
-
-1. From the left pane click on `NGINX Configuration`. As you are opening this resource for first time and you donot have any configuration present, Azure would prompt you to "Get started with a Configuration example". Click on `Populate now` button to start with a sample configuration example.
- 
+ > **NOTE:** Within the output JSON you should see a `"provisioningState": "Succeeded"` field, which confirms that the command successfully provisioned the resource.
-1. Once you click on the `Populate now` button you will see the configuration editor section has been populated with `nginx.conf` and an `index.html` page. Click on the `Submit` button to deploy this sample config files to the NGINX for Azure resource.
- 
-
-1. Once you have submitted the configuration, you watch its progress in the notification tab present in right top corner. Intially status would be "Updating NGINX configuration" which would change to "Updated NGINX configuration successfully".
- 
-
-1. Navigate back to Overview section and copy the public IP address of NGINX for Azure resource.
-
-1. In a new browser window, paste the public IP into address bar. You will notice the sample index page gets rendered into your browser.
- 
-
-1. This completes the validation of all the resources that you created using Azure CLI. In the upcoming labs you would be modifying the configuration files and exploring various features of NGINX for Azure resources.
+
### Create Log Analytics workspace to collect NGINX error and access logs from NGINX for Azure
In this section you will create a Log Analytics resource that will collect Nginx logs from your Nginx for Azure resource. As this resource takes time to get provisioned and attached to the NGINX for Azure resource, you are building it here.
-1. Within the NGINX for Azure resource (nginx4a), navigate to managed identity section by clicking on `Identity` from the left menu. Within this section, inside the `System assigned` tab, enable system managed identity by changing the status to `on`. Click on `Save` to save your changes. Press `Yes` for the "Enable system assigned managed identity" prompt.
- 
-
-2. If you open up the Notifications pane, you should see a success status as shown below.
- 
-
-3. Now open a terminal and create a Log Analytics workspace resource that you will attach to NGINX for Azure using Azure CLI. This resource would be used to capture and store NGINX error and access logs. Use below command to create this resource.
-
- ```bash
- ## Set environment variables
- MY_RESOURCEGROUP=s.dutta-workshop
- ```
+1. Create a Log Analytics workspace resource that you will attach to NGINX for Azure using the Azure CLI. This resource will be used to capture and store NGINX error and access logs. Use the command below to create this resource.
```bash
az monitor log-analytics workspace create \
@@ -453,7 +473,39 @@ In this section you will create a Log Analytics resource that would collect Ngin
--name n4a-loganalytics
```
-4. Next you will update your NGINX for Azure resource to enable sending metrics to Azure monitor by setting the `--enable-diagnostics` flag to `true` using below command.
+ ```bash
+ {
+ "createdDate": "2024-04-17T20:42:48.2028783Z",
+ "customerId": "xxxx-xxxx-xxxx-xxxx-xxxx",
+ "etag": "\"98028759-0000-0500-0000-662fc5330000\"",
+ "features": {
+ "enableLogAccessUsingOnlyResourcePermissions": true
+ },
+ "id": "/subscriptions//resourceGroups/s.dutta-workshop/providers/Microsoft.OperationalInsights/workspaces/n4a-loganalytics",
+ "location": "centralus",
+ "modifiedDate": "2024-04-29T16:05:07.3687572Z",
+ "name": "n4a-loganalytics",
+ "provisioningState": "Succeeded",
+ "publicNetworkAccessForIngestion": "Enabled",
+ "publicNetworkAccessForQuery": "Enabled",
+ "resourceGroup": "s.dutta-workshop",
+ "retentionInDays": 30,
+ "sku": {
+ "lastSkuUpdate": "2024-04-17T20:42:48.2028783Z",
+ "name": "PerGB2018"
+ },
+ "type": "Microsoft.OperationalInsights/workspaces",
+ "workspaceCapping": {
+ "dailyQuotaGb": -1.0,
+ "dataIngestionStatus": "RespectQuota",
+ "quotaNextResetTime": "2024-04-30T09:00:00Z"
+ }
+ }
+ ```
+
+ > **NOTE:** Within the output JSON you should see a `"provisioningState": "Succeeded"` field, which confirms that the command successfully provisioned the resource.
+
+2. Next you will update your NGINX for Azure resource to enable sending metrics to Azure Monitor by setting the `--enable-diagnostics` flag to `true`, using the command below.
```bash
az nginx deployment update \
@@ -462,41 +514,127 @@ In this section you will create a Log Analytics resource that would collect Ngin
--enable-diagnostics true
```
-5. The last step that you need to perform to start collecting NGINX logs is to create an Azure diagnostic settings resource that will stream the NGINX logs to the log-analytics workspace that you created in previous step. Run below commands to create this resource.
+ ```bash
+ ##Sample Output##
+ {
+ "id": "/subscriptions//resourceGroups/s.dutta-workshop/providers/Nginx.NginxPlus/nginxDeployments/nginx4a",
+ "identity": {
+ "principalId": "xxxx-xxxx-xxxx-xxxx-xxxx",
+ "tenantId": "xxxx-xxxx-xxxx-xxxx-xxxx",
+ "type": "SystemAssigned, UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions//resourceGroups/s.dutta-workshop/providers/Microsoft.ManagedIdentity/userAssignedIdentities/n4a-useridentity": {
+ "clientId": "xxxx-xxxx-xxxx-xxxx-xxxx",
+ "principalId": "xxxx-xxxx-xxxx-xxxx-xxxx"
+ }
+ }
+ },
+ "location": "centralus",
+ "name": "nginx4a",
+ "properties": {
+ "autoUpgradeProfile": {
+ "upgradeChannel": "stable"
+ },
+ "enableDiagnosticsSupport": true,
+
+ ...
+ },
+
+ ...
+ }
+ ```
+
+3. The last step that you need to perform to start collecting NGINX logs is to create an Azure diagnostic settings resource that will stream the NGINX logs to the log-analytics workspace that you created in the previous step. Run the commands below to create this resource.
```bash
## Set environment variables
- N4A_ID=$(az nginx deployment show \
+ export MY_N4A_ID=$(az nginx deployment show \
--resource-group $MY_RESOURCEGROUP \
--name nginx4a \
--query id \
--output tsv)
- LOG_ANALYTICS_ID=$(az monitor log-analytics workspace show \
+ export MY_LOG_ANALYTICS_ID=$(az monitor log-analytics workspace show \
--resource-group $MY_RESOURCEGROUP \
--name n4a-loganalytics \
--query id \
--output tsv)
```
- Below command is throwing a Bad request as it doesn't recognize `NginxLogs` as a valid category. Working with Azure devs to see what is wrong.
-
```bash
az monitor diagnostic-settings create \
- --resource $N4A_ID \
+ --resource $MY_N4A_ID \
--name n4a-nginxlogs \
--resource-group $MY_RESOURCEGROUP \
- --workspace $LOG_ANALYTICS_ID \
+ --workspace $MY_LOG_ANALYTICS_ID \
--logs "[{category:NginxLogs,enabled:true,retention-policy:{enabled:false,days:0}}]"
```
```bash
- az nginx deployment update -n $NAME -g $MY_RESOURCEGROUP --identity {type:SystemAssignedUserAssigned, user-assigned-identities:'/subscriptions/7a0bb4ab-c5a7-46b3-b4ad-c10376166020/resourceGroups/s.dutta-workshop/providers/Microsoft.ManagedIdentity/userAssignedIdentities/n4a-useridentity:{}'}
-
- az nginx deployment update -n nginx4a -g s.dutta-workshop --identity{type:SystemAssignedUserAssigned,userAssignedIdentities:{/subscriptions/7a0bb4ab-c5a7-46b3-b4ad-c10376166020/resourceGroups/s.dutta-workshop/providers/Microsoft.ManagedIdentity/userAssignedIdentities/n4a-useridentity:{}}}
+ ##Sample Output##
+ {
+ "id": "/subscriptions//resourcegroups/s.dutta-workshop/providers/nginx.nginxplus/nginxdeployments/nginx4a/providers/microsoft.insights/diagnosticSettings/n4a-nginxlogs",
+ "logs": [
+ {
+ "category": "NginxLogs",
+ "enabled": true,
+ "retentionPolicy": {
+ "days": 0,
+ "enabled": false
+ }
+ }
+ ],
+ "metrics": [],
+ "name": "n4a-nginxlogs",
+ "resourceGroup": "s.dutta-workshop",
+ "type": "Microsoft.Insights/diagnosticSettings",
+ "workspaceId": "/subscriptions//resourceGroups/s.dutta-workshop/providers/Microsoft.OperationalInsights/workspaces/n4a-loganalytics"
+ }
```
-6. In lab7, you will explore and learn more about NGINX logs and make use of these resources that you built in this section.
+4. In upcoming labs, you will explore and learn more about NGINX logs and make use of these resources that you built in this section.
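+
+   Once logs start flowing, here is a query sketch you could try from the CLI (an assumption: with this diagnostic setting, log entries typically land in the `AzureDiagnostics` table under the `NginxLogs` category - adjust the table name if your workspace differs):
+
+   ```bash
+   # Fetch the workspace GUID, then pull a few recent NGINX log entries
+   az monitor log-analytics query \
+   --workspace $(az monitor log-analytics workspace show \
+   --resource-group $MY_RESOURCEGROUP \
+   --name n4a-loganalytics \
+   --query customerId --output tsv) \
+   --analytics-query 'AzureDiagnostics | where Category == "NginxLogs" | take 10'
+   ```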
+
+
+
+### Explore Nginx for Azure
+
+
+
+NGINX as a Service for Azure is a service offering that is tightly integrated into Microsoft Azure public cloud and its ecosystem, making applications fast, efficient, and reliable with full lifecycle management of advanced NGINX traffic services. NGINXaaS for Azure is available in the Azure Marketplace.
+
+NGINXaaS for Azure is powered by NGINX Plus, which extends NGINX Open Source with advanced functionality and provides customers with a complete application delivery solution. Initial use cases covered by NGINXaaS include L4 TCP and L7 HTTP load balancing and reverse proxy, which can be managed through various Azure management tools. NGINXaaS allows you to provision distinct deployments as per your business or technical requirements.
+
+In this section you will look at the NGINX for Azure resource that you created, within the Azure portal.
+
+1. Open the Azure portal in your browser and then open your resource group.
+
+ 
+
+2. Click on your NGINX for Azure resource (nginx4a), which should open the Overview section of your resource. You can see useful information like its Status, the resource's public IP, which Nginx version is running, which vnet/subnet it is using, etc.
+
+ 
+
+3. From the left pane click on `Settings > NGINX Configuration`. As you are opening this resource for the first time and do not have any configuration present, Azure will prompt you to "Get started with a Configuration example". Click on the `Populate now` button to start with a sample configuration example.
+
+ 
+
+4. Once you click on the `Populate now` button, you will see the configuration editor section has been populated with `nginx.conf` and an `index.html` page. Click on the `Submit` button to deploy these sample config files to the NGINX for Azure resource.
+
+ 
+
+5. Once you have submitted the configuration, you can watch its progress in the notification tab in the top right corner. Initially the status will be "Updating NGINX configuration", which will change to "Updated NGINX configuration successfully".
+
+ 
+
+6. Navigate back to the Overview section and copy the public IP address of the NGINX for Azure resource.
+
+ 
+
+7. In a new browser window, paste the public IP into the address bar. You will notice the sample index page is rendered in your browser.
+
+ 
+
+8. This completes the validation of all the resources that you created using the Azure CLI. In the upcoming labs you will be modifying the configuration files and exploring various features of the NGINX for Azure resource.
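+
+   If you also want to validate from the command line, a quick sketch (the `N4A_IP` variable name here is just for illustration):
+
+   ```bash
+   # Look up the NGINX for Azure public IP, then request the sample index page
+   export N4A_IP=$(az network public-ip show \
+   --resource-group $MY_RESOURCEGROUP \
+   --name n4a-publicIP \
+   --query ipAddress --output tsv)
+
+   curl -I http://$N4A_IP
+   ```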
@@ -522,4 +660,4 @@ In this section you will create a Log Analytics resource that would collect Ngin
-------------
-Navigate to ([Lab2](../lab2/readme.md) | [LabX](../labX/readme.md))
+Navigate to ([Lab2](../lab2/readme.md) | [LabGuide](../readme.md))
diff --git a/labs/lab10/N4A-Dashboard.json b/labs/lab10/N4A-Dashboard.json
new file mode 100644
index 0000000..0cf0a46
--- /dev/null
+++ b/labs/lab10/N4A-Dashboard.json
@@ -0,0 +1,3511 @@
+{
+ "annotations": {
+ "list": [
+ {
+ "builtIn": 1,
+ "datasource": {
+ "type": "grafana",
+ "uid": "-- Grafana --"
+ },
+ "enable": true,
+ "hide": true,
+ "iconColor": "rgba(0, 211, 255, 1)",
+ "name": "Annotations & Alerts",
+ "type": "dashboard"
+ }
+ ]
+ },
+ "editable": true,
+ "fiscalYearStartMonth": 0,
+ "graphTooltip": 0,
+ "id": 19,
+ "links": [],
+ "liveNow": false,
+ "panels": [
+ {
+ "collapsed": false,
+ "gridPos": {
+ "h": 1,
+ "w": 24,
+ "x": 0,
+ "y": 0
+ },
+ "id": 10,
+ "panels": [],
+ "title": "NGINXaaS",
+ "type": "row"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "thresholds"
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ }
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 9,
+ "w": 12,
+ "x": 0,
+ "y": 1
+ },
+ "id": 13,
+ "options": {
+ "minVizHeight": 75,
+ "minVizWidth": 75,
+ "orientation": "auto",
+ "reduceOptions": {
+ "calcs": [
+ "lastNotNull"
+ ],
+ "fields": "",
+ "values": false
+ },
+ "showThresholdLabels": false,
+ "showThresholdMarkers": true,
+ "sizing": "auto"
+ },
+ "pluginVersion": "10.4.1",
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Count",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "nginxaas statistics",
+ "dimensionFilters": [],
+ "metricName": "ncu.provisioned",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Count",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+        "customNamespace": "NGINXaaS Statistics",
+ "dimensionFilters": [],
+ "metricName": "ncu.requested",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "B"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Count",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINXaaS Statistics",
+ "dimensionFilters": [],
+ "metricName": "ncu.consumed",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "C"
+ }
+ ],
+ "title": "NCU Stats",
+ "type": "gauge"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "thresholds"
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ }
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 5,
+ "w": 12,
+ "x": 12,
+ "y": 1
+ },
+ "id": 9,
+ "options": {
+ "colorMode": "value",
+ "graphMode": "area",
+ "justifyMode": "auto",
+ "orientation": "auto",
+ "reduceOptions": {
+ "calcs": [
+ "lastNotNull"
+ ],
+ "fields": "",
+ "values": false
+ },
+ "showPercentChange": false,
+ "textMode": "auto",
+ "wideLayout": true
+ },
+ "pluginVersion": "10.4.1",
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Count",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Connections Statistics",
+ "dimensionFilters": [],
+ "metricName": "nginx.conn.accepted",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "C"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Count",
+ "alias": "",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Connections Statistics",
+ "dimensionFilters": [],
+ "metricName": "nginx.conn.dropped",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto",
+ "top": ""
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "B"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Connections Statistics",
+ "dimensionFilters": [],
+ "metricName": "nginx.conn.active",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Connections Statistics",
+ "dimensionFilters": [],
+ "metricName": "nginx.conn.idle",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "D"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Connections Statistics",
+ "dimensionFilters": [],
+ "metricName": "nginx.conn.current",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "E"
+ }
+ ],
+ "title": "NGINX connections statistics",
+ "type": "stat"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "fieldConfig": {
+ "defaults": {
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "short"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 4,
+ "w": 12,
+ "x": 12,
+ "y": 6
+ },
+ "id": 14,
+ "options": {
+ "colorMode": "background",
+ "graphMode": "none",
+ "justifyMode": "auto",
+ "orientation": "auto",
+ "reduceOptions": {
+ "calcs": [
+ "lastNotNull"
+ ],
+ "fields": "",
+ "values": false
+ },
+ "showPercentChange": false,
+ "textMode": "auto",
+ "wideLayout": true
+ },
+ "pluginVersion": "10.4.1",
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Count",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Worker Statistics",
+ "dimensionFilters": [],
+ "metricName": "plus.worker.conn.accepted",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Count",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Worker Statistics",
+ "dimensionFilters": [],
+ "metricName": "plus.worker.conn.dropped",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "B"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Worker Statistics",
+ "dimensionFilters": [],
+ "metricName": "plus.worker.conn.active",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "C"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Worker Statistics",
+ "dimensionFilters": [],
+ "metricName": "plus.worker.conn.idle",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "D"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Count",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Worker Statistics",
+ "dimensionFilters": [],
+ "metricName": "plus.worker.http.request.total",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "E"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Worker Statistics",
+ "dimensionFilters": [],
+ "metricName": "plus.worker.http.request.current",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "F"
+ }
+ ],
+ "title": "NGINX worker statistics",
+ "type": "stat"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 3,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ }
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 10,
+ "w": 12,
+ "x": 0,
+ "y": 10
+ },
+ "id": 5,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Upstream Statistics",
+ "dimensionFilters": [
+ {
+ "dimension": "upstream",
+ "filters": [],
+ "operator": "eq"
+ }
+ ],
+ "metricName": "plus.http.upstream.peers.response.time",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ }
+ ],
+ "title": "Upstream Response Time",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ }
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 10,
+ "w": 12,
+ "x": 12,
+ "y": 10
+ },
+ "id": 2,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "pluginVersion": "10.4.1",
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Requests and Response Statistics",
+ "dimensionFilters": [],
+ "metricName": "plus.http.status.2xx",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Requests and Response Statistics",
+ "dimensionFilters": [],
+ "metricName": "plus.http.status.3xx",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "C"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Requests and Response Statistics",
+ "dimensionFilters": [],
+ "metricName": "plus.http.status.4xx",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "B"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Requests and Response Statistics",
+ "dimensionFilters": [],
+ "metricName": "plus.http.status.5xx",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "D"
+ }
+ ],
+ "title": "HTTP Metrics",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "fieldConfig": {
+ "defaults": {
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "short"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 8,
+ "w": 12,
+ "x": 0,
+ "y": 20
+ },
+ "id": 15,
+ "options": {
+ "colorMode": "value",
+ "graphMode": "none",
+ "justifyMode": "auto",
+ "orientation": "auto",
+ "reduceOptions": {
+ "calcs": [
+ "lastNotNull"
+ ],
+ "fields": "",
+ "values": false
+ },
+ "showPercentChange": false,
+ "textMode": "auto",
+ "wideLayout": true
+ },
+ "pluginVersion": "10.4.1",
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "nginx system statistics",
+ "dimensionFilters": [],
+ "metricName": "system.cpu",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "nginx system statistics",
+ "dimensionFilters": [],
+ "metricName": "system.interface.bytes_rcvd",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "B"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "nginx system statistics",
+ "dimensionFilters": [],
+ "metricName": "system.interface.bytes_sent",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "C"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "nginx system statistics",
+ "dimensionFilters": [],
+ "metricName": "system.interface.packets_rcvd",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "D"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "nginx system statistics",
+ "dimensionFilters": [],
+ "metricName": "system.interface.packets_sent",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "E"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Total",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "nginx system statistics",
+ "dimensionFilters": [],
+ "metricName": "system.interface.total_bytes",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "F"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "nginx system statistics",
+ "dimensionFilters": [],
+ "metricName": "system.interface.egress_throughput",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Nginx.NginxPlus/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "G"
+ }
+ ],
+ "title": "NGINX system statistics",
+ "type": "stat"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ }
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 8,
+ "w": 12,
+ "x": 12,
+ "y": 20
+ },
+ "id": 4,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Count",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX SSL Statistics",
+ "dimensionFilters": [
+ {
+ "dimension": "server_zone",
+ "filters": [],
+ "operator": "eq"
+ }
+ ],
+ "metricName": "plus.http.ssl.handshakes",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Count",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX SSL Statistics",
+ "dimensionFilters": [
+ {
+ "dimension": "server_zone",
+ "filters": [],
+ "operator": "eq"
+ }
+ ],
+ "metricName": "plus.http.ssl.session.reuses",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "B"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Count",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX SSL Statistics",
+ "dimensionFilters": [
+ {
+ "dimension": "server_zone",
+ "filters": [],
+ "operator": "eq"
+ }
+ ],
+ "metricName": "plus.http.ssl.handshakes.failed",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "C"
+ }
+ ],
+ "title": "SSL Metrics",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 3,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ }
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 10,
+ "w": 12,
+ "x": 0,
+ "y": 28
+ },
+ "id": 1,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Cache Statistics",
+ "dimensionFilters": [
+ {
+ "dimension": "cache_zone",
+ "filters": [
+ "image_cache"
+ ],
+ "operator": "eq"
+ }
+ ],
+ "metricName": "plus.cache.hit.ratio",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ }
+ ],
+ "title": "Cache Hit Ratio",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 3,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ }
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 10,
+ "w": 12,
+ "x": 12,
+ "y": 28
+ },
+ "id": 6,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Count",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Upstream Statistics",
+ "dimensionFilters": [
+ {
+ "dimension": "upstream",
+ "filters": [],
+ "operator": "eq"
+ }
+ ],
+ "metricName": "plus.http.upstream.peers.health_checks.checks",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Count",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "customNamespace": "NGINX Upstream Statistics",
+ "dimensionFilters": [
+ {
+ "dimension": "upstream",
+ "filters": [],
+ "operator": "eq"
+ }
+ ],
+ "metricName": "plus.http.upstream.peers.health_checks.unhealthy",
+ "metricNamespace": "nginx.nginxplus/nginxdeployments",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "NGINX.NGINXPLUS/nginxDeployments",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$MY_RESOURCENAME"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "B"
+ }
+ ],
+ "title": "Health Checks",
+ "type": "timeseries"
+ },
+ {
+ "collapsed": true,
+ "gridPos": {
+ "h": 1,
+ "w": 24,
+ "x": 0,
+ "y": 38
+ },
+ "id": 11,
+ "panels": [
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "thresholds"
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ }
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 8,
+ "w": 12,
+ "x": 0,
+ "y": 2
+ },
+ "id": 7,
+ "options": {
+ "colorMode": "value",
+ "graphMode": "area",
+ "justifyMode": "auto",
+ "orientation": "auto",
+ "reduceOptions": {
+ "calcs": [
+ "lastNotNull"
+ ],
+ "fields": "",
+ "values": false
+ },
+ "showPercentChange": false,
+ "textMode": "auto",
+ "wideLayout": true
+ },
+ "pluginVersion": "10.4.1",
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000
+ ],
+ "dimensionFilters": [],
+ "metricName": "apiserver_current_inflight_requests",
+ "metricNamespace": "microsoft.containerservice/managedclusters",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.ContainerService/managedClusters",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$AKSCluster1"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ }
+ ],
+ "title": "AKS1 Inflight Requests",
+ "type": "stat"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "thresholds"
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ }
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 8,
+ "w": 12,
+ "x": 12,
+ "y": 2
+ },
+ "id": 8,
+ "options": {
+ "colorMode": "value",
+ "graphMode": "area",
+ "justifyMode": "auto",
+ "orientation": "auto",
+ "reduceOptions": {
+ "calcs": [
+ "lastNotNull"
+ ],
+ "fields": "",
+ "values": false
+ },
+ "showPercentChange": false,
+ "textMode": "auto",
+ "wideLayout": true
+ },
+ "pluginVersion": "10.4.1",
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000
+ ],
+ "dimensionFilters": [],
+ "metricName": "apiserver_current_inflight_requests",
+ "metricNamespace": "microsoft.containerservice/managedclusters",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.ContainerService/managedClusters",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$AKSCluster2"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ }
+ ],
+ "title": "AKS2 Inflight Requests",
+ "type": "stat"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "fieldConfig": {
+ "defaults": {
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "short"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 4,
+ "w": 12,
+ "x": 0,
+ "y": 10
+ },
+ "id": 16,
+ "options": {
+ "colorMode": "background",
+ "graphMode": "none",
+ "justifyMode": "auto",
+ "orientation": "auto",
+ "reduceOptions": {
+ "calcs": [
+ "lastNotNull"
+ ],
+ "fields": "",
+ "values": false
+ },
+ "showPercentChange": false,
+ "textMode": "auto",
+ "wideLayout": true
+ },
+ "pluginVersion": "10.4.1",
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000
+ ],
+ "dimensionFilters": [],
+ "metricName": "node_network_in_bytes",
+ "metricNamespace": "microsoft.containerservice/managedclusters",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.ContainerService/managedClusters",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$AKSCluster1"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "B"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000
+ ],
+ "dimensionFilters": [],
+ "metricName": "node_network_out_bytes",
+ "metricNamespace": "microsoft.containerservice/managedclusters",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.ContainerService/managedClusters",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$AKSCluster1"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ }
+ ],
+ "title": "AKS 1 Network Stats",
+ "type": "stat"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "fieldConfig": {
+ "defaults": {
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "short"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 4,
+ "w": 12,
+ "x": 12,
+ "y": 10
+ },
+ "id": 17,
+ "options": {
+ "colorMode": "background",
+ "graphMode": "none",
+ "justifyMode": "auto",
+ "orientation": "auto",
+ "reduceOptions": {
+ "calcs": [
+ "lastNotNull"
+ ],
+ "fields": "",
+ "values": false
+ },
+ "showPercentChange": false,
+ "textMode": "auto",
+ "wideLayout": true
+ },
+ "pluginVersion": "10.4.1",
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000
+ ],
+ "dimensionFilters": [],
+ "metricName": "node_network_in_bytes",
+ "metricNamespace": "microsoft.containerservice/managedclusters",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.ContainerService/managedClusters",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$AKSCluster2"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000
+ ],
+ "dimensionFilters": [],
+ "metricName": "node_network_out_bytes",
+ "metricNamespace": "microsoft.containerservice/managedclusters",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.ContainerService/managedClusters",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$AKSCluster2"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "B"
+ }
+ ],
+ "title": "AKS 2 Network Stats",
+ "type": "stat"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "continuous-GrYlRd"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 20,
+ "gradientMode": "scheme",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "smooth",
+ "lineWidth": 3,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ }
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 8,
+ "w": 12,
+ "x": 0,
+ "y": 14
+ },
+ "id": 18,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "hidden",
+ "placement": "right",
+ "showLegend": false
+ },
+ "tooltip": {
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000
+ ],
+ "dimensionFilters": [],
+ "metricName": "node_cpu_usage_percentage",
+ "metricNamespace": "microsoft.containerservice/managedclusters",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.ContainerService/managedClusters",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$AKSCluster1"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ }
+ ],
+ "title": "AKS1 CPU Usage",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "continuous-GrYlRd"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 20,
+ "gradientMode": "scheme",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "smooth",
+ "lineWidth": 3,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ }
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 8,
+ "w": 12,
+ "x": 12,
+ "y": 14
+ },
+ "id": 19,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "hidden",
+ "placement": "right",
+ "showLegend": false
+ },
+ "tooltip": {
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000
+ ],
+ "dimensionFilters": [],
+ "metricName": "node_cpu_usage_percentage",
+ "metricNamespace": "microsoft.containerservice/managedclusters",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.ContainerService/managedClusters",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$AKSCluster2"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ }
+ ],
+      "title": "AKS2 CPU Usage",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "continuous-GrYlRd"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "bars",
+ "fillOpacity": 90,
+ "gradientMode": "scheme",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ }
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 8,
+ "w": 12,
+ "x": 0,
+ "y": 22
+ },
+ "id": 20,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "hidden",
+ "placement": "right",
+ "showLegend": false
+ },
+ "tooltip": {
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000
+ ],
+ "dimensionFilters": [],
+ "metricName": "node_memory_working_set_percentage",
+ "metricNamespace": "microsoft.containerservice/managedclusters",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.ContainerService/managedClusters",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$AKSCluster1"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ }
+ ],
+ "title": "AKS1 Memory Working Set %",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "continuous-GrYlRd"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "bars",
+ "fillOpacity": 90,
+ "gradientMode": "scheme",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ }
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 8,
+ "w": 12,
+ "x": 12,
+ "y": 22
+ },
+ "id": 21,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "hidden",
+ "placement": "right",
+ "showLegend": false
+ },
+ "tooltip": {
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000
+ ],
+ "dimensionFilters": [],
+ "metricName": "node_memory_working_set_percentage",
+ "metricNamespace": "microsoft.containerservice/managedclusters",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.ContainerService/managedClusters",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+              "resourceName": "$AKSCluster2"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ }
+ ],
+ "title": "AKS2 Memory Working Set %",
+ "type": "timeseries"
+ }
+ ],
+ "title": "K8s Clusters",
+ "type": "row"
+ },
+ {
+ "collapsed": true,
+ "gridPos": {
+ "h": 1,
+ "w": 24,
+ "x": 0,
+ "y": 39
+ },
+ "id": 22,
+ "panels": [
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "continuous-GrYlRd"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 20,
+ "gradientMode": "scheme",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "smooth",
+ "lineWidth": 3,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ }
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 8,
+ "w": 12,
+ "x": 0,
+ "y": 3
+ },
+ "id": 23,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "hidden",
+ "placement": "right",
+ "showLegend": false
+ },
+ "tooltip": {
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "dimensionFilters": [],
+ "metricName": "Inbound Flows",
+ "metricNamespace": "microsoft.compute/virtualmachines",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.Compute/virtualMachines",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$ubuntuvm"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ }
+ ],
+ "title": "Ubuntu VM Inbound Flow",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "continuous-GrYlRd"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 20,
+ "gradientMode": "scheme",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "smooth",
+ "lineWidth": 3,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ }
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 8,
+ "w": 12,
+ "x": 12,
+ "y": 3
+ },
+ "id": 24,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "hidden",
+ "placement": "right",
+ "showLegend": false
+ },
+ "tooltip": {
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "dimensionFilters": [],
+ "metricName": "Inbound Flows",
+ "metricNamespace": "microsoft.compute/virtualmachines",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.Compute/virtualMachines",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$windowsvm"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ }
+ ],
+ "title": "Windows VM Inbound Flow",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "fieldConfig": {
+ "defaults": {
+ "mappings": [],
+ "thresholds": {
+ "mode": "percentage",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "orange",
+ "value": 70
+ },
+ {
+ "color": "red",
+ "value": 85
+ }
+ ]
+ }
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 8,
+ "w": 12,
+ "x": 0,
+ "y": 11
+ },
+ "id": 25,
+ "options": {
+ "minVizHeight": 75,
+ "minVizWidth": 75,
+ "orientation": "auto",
+ "reduceOptions": {
+ "calcs": [
+ "lastNotNull"
+ ],
+ "fields": "",
+ "values": false
+ },
+ "showThresholdLabels": false,
+ "showThresholdMarkers": true,
+ "sizing": "auto"
+ },
+ "pluginVersion": "10.4.1",
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "dimensionFilters": [],
+ "metricName": "Disk Read Bytes",
+ "metricNamespace": "microsoft.compute/virtualmachines",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.Compute/virtualMachines",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$ubuntuvm"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "dimensionFilters": [],
+ "metricName": "Disk Write Bytes",
+ "metricNamespace": "microsoft.compute/virtualmachines",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.Compute/virtualMachines",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$ubuntuvm"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "B"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Count",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "dimensionFilters": [],
+ "metricName": "Percentage CPU",
+ "metricNamespace": "microsoft.compute/virtualmachines",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.Compute/virtualMachines",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$ubuntuvm"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "C"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Total",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "dimensionFilters": [],
+ "metricName": "Available Memory Bytes",
+ "metricNamespace": "microsoft.compute/virtualmachines",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.Compute/virtualMachines",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$ubuntuvm"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "D"
+ }
+ ],
+ "title": "Ubuntu System Stats",
+ "type": "gauge"
+ },
+ {
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "fieldConfig": {
+ "defaults": {
+ "mappings": [],
+ "thresholds": {
+ "mode": "percentage",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "orange",
+ "value": 70
+ },
+ {
+ "color": "red",
+ "value": 85
+ }
+ ]
+ }
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 8,
+ "w": 12,
+ "x": 12,
+ "y": 11
+ },
+ "id": 26,
+ "options": {
+ "minVizHeight": 75,
+ "minVizWidth": 75,
+ "orientation": "auto",
+ "reduceOptions": {
+ "calcs": [
+ "lastNotNull"
+ ],
+ "fields": "",
+ "values": false
+ },
+ "showThresholdLabels": false,
+ "showThresholdMarkers": true,
+ "sizing": "auto"
+ },
+ "pluginVersion": "10.4.1",
+ "targets": [
+ {
+ "azureMonitor": {
+ "aggregation": "Average",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "dimensionFilters": [],
+ "metricName": "Disk Read Bytes",
+ "metricNamespace": "microsoft.compute/virtualmachines",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.Compute/virtualMachines",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$windowsvm"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "queryType": "Azure Monitor",
+ "refId": "A"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Total",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "dimensionFilters": [],
+ "metricName": "Disk Write Bytes",
+ "metricNamespace": "microsoft.compute/virtualmachines",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.Compute/virtualMachines",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$windowsvm"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "B"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Count",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "dimensionFilters": [],
+ "metricName": "Percentage CPU",
+ "metricNamespace": "microsoft.compute/virtualmachines",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.Compute/virtualMachines",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$windowsvm"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "C"
+ },
+ {
+ "azureMonitor": {
+ "aggregation": "Total",
+ "allowedTimeGrainsMs": [
+ 60000,
+ 300000,
+ 900000,
+ 1800000,
+ 3600000,
+ 21600000,
+ 43200000,
+ 86400000
+ ],
+ "dimensionFilters": [],
+ "metricName": "Available Memory Bytes",
+ "metricNamespace": "microsoft.compute/virtualmachines",
+ "region": "$MY_LOCATION",
+ "resources": [
+ {
+ "metricNamespace": "Microsoft.Compute/virtualMachines",
+ "region": "$MY_LOCATION",
+ "resourceGroup": "$MY_RESOURCEGROUP",
+ "resourceName": "$windowsvm"
+ }
+ ],
+ "timeGrain": "auto"
+ },
+ "datasource": {
+ "type": "grafana-azure-monitor-datasource",
+ "uid": "azure-monitor-oob"
+ },
+ "hide": false,
+ "queryType": "Azure Monitor",
+ "refId": "D"
+ }
+ ],
+ "title": "Windows System Stats",
+ "type": "gauge"
+ }
+ ],
+ "title": "Virtual Machines",
+ "type": "row"
+ }
+ ],
+ "refresh": "5s",
+ "schemaVersion": 39,
+ "tags": [],
+ "templating": {
+ "list": [
+ {
+ "current": {
+ "selected": false,
+ "text": "eastus",
+ "value": "eastus"
+ },
+ "hide": 0,
+ "name": "MY_LOCATION",
+ "options": [],
+ "query": "eastus",
+ "skipUrlSync": false,
+ "type": "textbox"
+ },
+ {
+ "current": {
+ "selected": false,
+ "text": "a.currier-workshop",
+ "value": "a.currier-workshop"
+ },
+ "hide": 0,
+ "name": "MY_RESOURCEGROUP",
+ "options": [],
+ "query": "a.currier-workshop",
+ "skipUrlSync": false,
+ "type": "textbox"
+ },
+ {
+ "current": {
+ "selected": false,
+ "text": "nginx4a",
+ "value": "nginx4a"
+ },
+ "hide": 0,
+ "name": "MY_RESOURCENAME",
+ "options": [],
+ "query": "nginx4a",
+ "skipUrlSync": false,
+ "type": "textbox"
+ },
+ {
+ "current": {
+ "selected": false,
+ "text": "n4a-aks1",
+ "value": "n4a-aks1"
+ },
+ "hide": 0,
+ "label": "AKS Cluster 1",
+ "name": "AKSCluster1",
+ "options": [],
+ "query": "n4a-aks1",
+ "skipUrlSync": false,
+ "type": "textbox"
+ },
+ {
+ "current": {
+ "selected": false,
+ "text": "n4a-aks2",
+ "value": "n4a-aks2"
+ },
+ "hide": 0,
+ "label": "AKS Cluster 2",
+ "name": "AKSCluster2",
+ "options": [],
+ "query": "n4a-aks2",
+ "skipUrlSync": false,
+ "type": "textbox"
+ },
+ {
+ "current": {
+ "selected": false,
+ "text": "windowsvm",
+ "value": "windowsvm"
+ },
+ "hide": 0,
+ "label": "Windows VM",
+ "name": "windowsvm",
+ "options": [
+ {
+ "selected": true,
+ "text": "windowsvm",
+ "value": "windowsvm"
+ }
+ ],
+ "query": "windowsvm",
+ "skipUrlSync": false,
+ "type": "textbox"
+ },
+ {
+ "current": {
+ "selected": false,
+ "text": "ubuntuvm",
+ "value": "ubuntuvm"
+ },
+ "hide": 0,
+ "label": "Ubuntu VM",
+ "name": "ubuntuvm",
+ "options": [],
+ "query": "ubuntuvm",
+ "skipUrlSync": false,
+ "type": "textbox"
+ }
+ ]
+ },
+ "time": {
+ "from": "now-2d",
+ "to": "now"
+ },
+ "timepicker": {
+ "hidden": false
+ },
+ "timezone": "",
+ "title": "Nginx 4 Azure Workshop",
+ "uid": "b70b2c11-a815-4506-a10a-3f15b6970797",
+ "version": 20,
+ "weekStart": ""
+}
+
diff --git a/labs/lab10/media/EntraID-sign_in.png b/labs/lab10/media/EntraID-sign_in.png
new file mode 100644
index 0000000..690776e
Binary files /dev/null and b/labs/lab10/media/EntraID-sign_in.png differ
diff --git a/labs/lab10/media/NGINXaaS-icon.png b/labs/lab10/media/NGINXaaS-icon.png
new file mode 100644
index 0000000..1998d39
Binary files /dev/null and b/labs/lab10/media/NGINXaaS-icon.png differ
diff --git a/labs/lab10/media/grafana-dashboards-import.png b/labs/lab10/media/grafana-dashboards-import.png
new file mode 100644
index 0000000..f9e60f3
Binary files /dev/null and b/labs/lab10/media/grafana-dashboards-import.png differ
diff --git a/labs/lab10/media/grafana-dashboards-import2.png b/labs/lab10/media/grafana-dashboards-import2.png
new file mode 100644
index 0000000..0272b64
Binary files /dev/null and b/labs/lab10/media/grafana-dashboards-import2.png differ
diff --git a/labs/lab10/media/grafana-dashboards-json.png b/labs/lab10/media/grafana-dashboards-json.png
new file mode 100644
index 0000000..994ba83
Binary files /dev/null and b/labs/lab10/media/grafana-dashboards-json.png differ
diff --git a/labs/lab10/media/grafana-dashboards-k8s-vm.png b/labs/lab10/media/grafana-dashboards-k8s-vm.png
new file mode 100644
index 0000000..cde2cb0
Binary files /dev/null and b/labs/lab10/media/grafana-dashboards-k8s-vm.png differ
diff --git a/labs/lab10/media/grafana-dashboards-n4a.png b/labs/lab10/media/grafana-dashboards-n4a.png
new file mode 100644
index 0000000..ea01b2a
Binary files /dev/null and b/labs/lab10/media/grafana-dashboards-n4a.png differ
diff --git a/labs/lab10/media/grafana-dashboards-new.png b/labs/lab10/media/grafana-dashboards-new.png
new file mode 100644
index 0000000..8f03448
Binary files /dev/null and b/labs/lab10/media/grafana-dashboards-new.png differ
diff --git a/labs/lab10/media/grafana-dashboards.png b/labs/lab10/media/grafana-dashboards.png
new file mode 100644
index 0000000..4e1f63e
Binary files /dev/null and b/labs/lab10/media/grafana-dashboards.png differ
diff --git a/labs/lab10/media/grafana-icon.png b/labs/lab10/media/grafana-icon.png
new file mode 100644
index 0000000..49a13fb
Binary files /dev/null and b/labs/lab10/media/grafana-icon.png differ
diff --git a/labs/lab10/media/grafana-landing-page.png b/labs/lab10/media/grafana-landing-page.png
new file mode 100644
index 0000000..3ac7bb8
Binary files /dev/null and b/labs/lab10/media/grafana-landing-page.png differ
diff --git a/labs/lab10/media/grafana-variables.png b/labs/lab10/media/grafana-variables.png
new file mode 100644
index 0000000..8a3c81d
Binary files /dev/null and b/labs/lab10/media/grafana-variables.png differ
diff --git a/labs/lab10/media/managed-grafana.png b/labs/lab10/media/managed-grafana.png
new file mode 100644
index 0000000..4cd2781
Binary files /dev/null and b/labs/lab10/media/managed-grafana.png differ
diff --git a/labs/lab10/readme.md b/labs/lab10/readme.md
index 5063452..f51c685 100644
--- a/labs/lab10/readme.md
+++ b/labs/lab10/readme.md
@@ -1,68 +1,165 @@
-# NGINX Caching / Juiceshop or Garage
+# Monitoring NGINXaaS for Azure with Grafana
## Introduction
-In this lab, you will build ( x,y,x ).
+In this lab, you will explore the integration between NGINXaaS for Azure and Grafana for monitoring the service. NGINX as a Service for Azure is a service offering that is tightly integrated into the Microsoft Azure public cloud and its ecosystem, making applications fast, efficient, and reliable with full lifecycle management of advanced NGINX traffic services.
-< Lab specific Images here, in the /media sub-folder >
+NGINXaaS is powered by NGINX Plus, so much of the configuration is similar to what you are already used to. We will use Grafana to create a dashboard that monitors:
+- HTTP requests
+- HTTP metrics
+- Cache hit ratio
+- SSL metrics
+- Upstream response time
+- Health checks
-NGINX aaS | Docker
+The data for these will be based on the work done in previous labs.
+
+
+
+NGINXaaS for Azure | Grafana
:-------------------------:|:-------------------------:
- |
+ | 
## Learning Objectives
By the end of the lab you will be able to:
-- Introduction to `xx`
-- Build an `yyy` Nginx configuration
-- Test access to your lab enviroment with Curl and Chrome
-- Investigate `zzz`
-
+- Create a Grafana managed instance in Azure
+- Create a Dashboard to monitor metrics in NGINXaaS for Azure
+- Test the Grafana Server
+- View the Grafana Dashboard
## Pre-Requisites
-- You must have `aaaa` installed and running
-- You must have `bbbbb` installed
+- You must be using NGINXaaS for Azure
- See `Lab0` for instructions on setting up your system for this Workshop
+- Have Docker installed to run workloads (for graph data)
- Familiarity with basic Linux commands and command-line tools
-- Familiarity with basic Docker concepts and commands
- Familiarity with basic HTTP protocol
+- Familiarity with Grafana
+
+
+1. Ensure you are in the `lab10` folder. We will set two environment variables and then use them to create the Grafana instance via the Azure CLI.
+
+> Please note: there is a charge associated with standing up a Managed Grafana instance in Azure, so be sure to delete the resources when you are finished exploring the lab.
+
+The resource group should be the same one you have been using for the whole workshop. If it is not set, set it here. The MY_GRAFANA variable is the name the resource will have when you look for it in Azure.
+
+```bash
+export MY_RESOURCEGROUP=a.currier-workshop
+export MY_GRAFANA=grafanaworkshop
+
+az grafana create --name $MY_GRAFANA --resource-group $MY_RESOURCEGROUP
+```
+
+2. In the output of the above command, take note of the endpoint that has been created for you. It should be found in a key labelled *endpoint*.
+
+
+
+Using the endpoint URL, you can log into the Managed Grafana instance with the Microsoft Entra ID you have been using for these labs. If you forgot to grab the endpoint URL, you can retrieve it with the Azure CLI:
+```bash
+az grafana show --name $MY_GRAFANA --resource-group $MY_RESOURCEGROUP --query "properties.endpoint" --output tsv
+```
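+
+If you prefer to capture the endpoint in a shell variable for later use (the variable name here is just an example), something like this works:
+
+```bash
+export MY_GRAFANA_URL=$(az grafana show --name $MY_GRAFANA --resource-group $MY_RESOURCEGROUP --query "properties.endpoint" --output tsv)
+echo $MY_GRAFANA_URL
+```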
+
+Open a web browser and go to the endpoint address listed. You should see an Entra ID login which may or may not have your credentials pre-populated:
+
+
+
+
+Once signed in, you will be taken to the default Grafana landing page.
+
+
+
+From here, click on the Dashboards menu on the left-hand side.
+
+
+
+In the upper right of the page is a blue drop-down button. Select *Import*:
+
+
+
+In Visual Studio Code, navigate to the `lab10` folder. Open the `N4A-Dashboard.json` file and inspect it.
+
+This template file makes use of Grafana variables to make it easier to customize for your environment. Let's retrieve the values we will need for the dashboard by running the following commands in the VS Code terminal:
+
+```bash
+export MY_RESOURCEGROUP=$(az resource list --resource-group a.currier-workshop --resource-type Nginx.NginxPlus/nginxDeployments --query "[].resourceGroup" -o tsv)
+export MY_RESOURCENAME=$(az resource list --resource-group a.currier-workshop --resource-type Nginx.NginxPlus/nginxDeployments --query "[].name" -o tsv)
+export MY_LOCATION=$(az resource list --resource-group a.currier-workshop --resource-type Nginx.NginxPlus/nginxDeployments --query "[].location" -o tsv)
+export MY_AKSCluster1=n4a-aks1
+export MY_AKSCluster2=n4a-aks2
+export MY_WindowsVM=windowsvm
+export MY_UbuntuVM=ubuntuvm
+```
+
+Confirm the values were set:
+```bash
+set | grep MY
+MY_AKSCluster1=n4a-aks1
+MY_AKSCluster2=n4a-aks2
+MY_LOCATION=eastus
+MY_RESOURCEGROUP=a.currier-workshop
+MY_RESOURCENAME=nginx4a
+MY_UbuntuVM=ubuntuvm
+MY_WindowsVM=windowsvm
+```
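+
+For reference, this is how the dashboard template consumes those values. Each panel references `$MY_*` style placeholders, which are resolved from the dashboard variable fields you will fill in after the import (an excerpt like this one appears throughout `N4A-Dashboard.json`):
+
+```json
+"resources": [
+  {
+    "metricNamespace": "Microsoft.Compute/virtualMachines",
+    "region": "$MY_LOCATION",
+    "resourceGroup": "$MY_RESOURCEGROUP",
+    "resourceName": "$ubuntuvm"
+  }
+]
+```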
+
+Now that we have these seven values, we can use them in the dashboard template.
+
+Copy the code from the `N4A-Dashboard.json` file. In the Grafana import window, paste the code into the import field and then click the blue *Load* button.
+
+
+
+To get the dashboards to load, replace each variable field at the top (see image) with the values you retrieved for your lab:
+
+
+
+### Generate a workload
+
+1. Start the WRK load generation tool. This will provide some traffic to the NGINXaaS for Azure instance, so that the statistics will increase.
+
+```bash
+docker run --rm williamyeh/wrk -t20 -d600s -c1000 https://cafe.example.com/
+```
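+
+As a rough guide to the `wrk` flags: `-t20` runs 20 threads, `-c1000` holds 1000 connections open, and `-d600s` runs the test for 600 seconds, giving the dashboard ten minutes of steady traffic to graph. Scale these down if your client machine is small.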
-### Lab exercise 1
-
+### Grafana
+
+We now have a working dashboard displaying some key metrics of the NGINX for Azure service. As with most dashboards, you can adjust the time intervals and other settings to get a better look at the data. Feel free to explore each of the data panels.
+
+
-### Lab exercise 2
+
-
+There are many different metrics available, and you can build dashboards to suit your needs. For these pre-built ones, we added three sections. The first section highlights metrics for NGINXaaS, taken directly from the NGINXaaS Metrics page linked below. The next section monitors the Kubernetes clusters that you built in the previous labs. The final section adds a few metrics for the Virtual Machines that were previously created. Feel free to review each of these panels and explore adding panels of your own.
-### Lab exercise 3
+To delete the Managed Grafana instance, use this CLI command:
-
+```bash
+az grafana delete --name $MY_GRAFANA --resource-group $MY_RESOURCEGROUP --yes
+```
+
+> If the `wrk` load generation tool is still running, then you can stop it by pressing `ctrl + c`.
-### << more exercises/steps>>
-
-**This completes Lab10.**
+**This completes Lab 10.**
## References:
-- [NGINX As A Service for Azure](https://docs.nginx.com/nginxaas/azure/)
-- [NGINX Plus Product Page](https://docs.nginx.com/nginx/)
-- [NGINX Ingress Controller](https://docs.nginx.com//nginx-ingress-controller/)
-- [NGINX on Docker](https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-docker/)
-- [NGINX Directives Index](https://nginx.org/en/docs/dirindex.html)
-- [NGINX Variables Index](https://nginx.org/en/docs/varindex.html)
+- [NGINX For Azure Metrics Catalog](https://docs.nginx.com/nginxaas/azure/monitoring/metrics-catalog/)
+- [Azure Managed Grafana Docs](https://learn.microsoft.com/en-us/azure/managed-grafana/)
+- [Build a Grafana Dashboard](https://grafana.com/docs/grafana/latest/getting-started/build-first-dashboard/)
+- [NGINX Admin Guide](https://docs.nginx.com/nginx/admin-guide/)
- [NGINX Technical Specs](https://docs.nginx.com/nginx/technical-specs/)
-- [NGINX - Join Community Slack](https://community.nginx.org/joinslack)
+
+
@@ -74,4 +171,6 @@ By the end of the lab you will be able to:
-------------
-Navigate to ([LabX](../labX/readme.md) | [LabX](../labX/readme.md))
+Navigate to ([Lab Guide](../readme.md))
+
+
diff --git a/labs/lab2/cafe-docker-upstreams.conf b/labs/lab2/cafe-docker-upstreams.conf
index 5f55ac9..dd8d2b2 100644
--- a/labs/lab2/cafe-docker-upstreams.conf
+++ b/labs/lab2/cafe-docker-upstreams.conf
@@ -7,9 +7,9 @@ upstream cafe_nginx {
zone cafe_nginx 256k;
# from docker compose
- server ubuntuvm:81;
- server ubuntuvm:82;
- server ubuntuvm:83;
+ server n4a-ubuntuvm:81;
+ server n4a-ubuntuvm:82;
+ server n4a-ubuntuvm:83;
keepalive 32;
diff --git a/labs/lab2/docker-compose.yml b/labs/lab2/docker-compose.yml
index 5222e37..9286d15 100644
--- a/labs/lab2/docker-compose.yml
+++ b/labs/lab2/docker-compose.yml
@@ -2,7 +2,6 @@
# NGINX for Azure, Mar 2024
# Chris Akker, Shouvik Dutta, Adam Currier
#
-version: '3.3'
services:
web1:
hostname: docker-web1
diff --git a/labs/lab2/init.sh b/labs/lab2/init.sh
new file mode 100644
index 0000000..30700f7
--- /dev/null
+++ b/labs/lab2/init.sh
@@ -0,0 +1,13 @@
+#!/bin/bash
+
+sudo apt-get update
+
+# Install Linux Networking Tools
+sudo apt install -y net-tools
+
+# Install Docker
+sudo apt-get install -y docker.io
+
+# Install Docker Compose
+sudo curl -L "https://github.com/docker/compose/releases/download/v2.27.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
+sudo chmod +x /usr/local/bin/docker-compose
\ No newline at end of file
diff --git a/labs/lab2/media/cafe-icon.png b/labs/lab2/media/cafe-icon.png
new file mode 100644
index 0000000..bd93a41
Binary files /dev/null and b/labs/lab2/media/cafe-icon.png differ
diff --git a/labs/lab2/media/docker-icon.png b/labs/lab2/media/docker-icon.png
new file mode 100644
index 0000000..02ee3f1
Binary files /dev/null and b/labs/lab2/media/docker-icon.png differ
diff --git a/labs/lab2/media/lab2-cloudshell.png b/labs/lab2/media/lab2-cloudshell.png
new file mode 100644
index 0000000..4458504
Binary files /dev/null and b/labs/lab2/media/lab2-cloudshell.png differ
diff --git a/labs/lab2/media/lab2_cafe-diagram.png b/labs/lab2/media/lab2_cafe-diagram.png
new file mode 100644
index 0000000..44a730f
Binary files /dev/null and b/labs/lab2/media/lab2_cafe-diagram.png differ
diff --git a/labs/lab2/media/lab2_cafe-docker-upstreams.png b/labs/lab2/media/lab2_cafe-docker-upstreams.png
new file mode 100644
index 0000000..0ca49f6
Binary files /dev/null and b/labs/lab2/media/lab2_cafe-docker-upstreams.png differ
diff --git a/labs/lab2/media/lab2_cafe-example-com-conf.png b/labs/lab2/media/lab2_cafe-example-com-conf.png
new file mode 100644
index 0000000..813ec4e
Binary files /dev/null and b/labs/lab2/media/lab2_cafe-example-com-conf.png differ
diff --git a/labs/lab2/media/lab2_cafe-inspect.png b/labs/lab2/media/lab2_cafe-inspect.png
new file mode 100644
index 0000000..6352302
Binary files /dev/null and b/labs/lab2/media/lab2_cafe-inspect.png differ
diff --git a/labs/lab2/media/lab2_cafe-out-of-stock.png b/labs/lab2/media/lab2_cafe-out-of-stock.png
new file mode 100644
index 0000000..1b6aaa5
Binary files /dev/null and b/labs/lab2/media/lab2_cafe-out-of-stock.png differ
diff --git a/labs/lab2/media/lab2_cafe-windows-iis.png b/labs/lab2/media/lab2_cafe-windows-iis.png
new file mode 100644
index 0000000..5a9aee0
Binary files /dev/null and b/labs/lab2/media/lab2_cafe-windows-iis.png differ
diff --git a/labs/lab2/media/lab2_diagram.png b/labs/lab2/media/lab2_diagram.png
new file mode 100644
index 0000000..79ae0d2
Binary files /dev/null and b/labs/lab2/media/lab2_diagram.png differ
diff --git a/labs/lab2/media/lab2_windows-upstreams.png b/labs/lab2/media/lab2_windows-upstreams.png
new file mode 100644
index 0000000..ca137e6
Binary files /dev/null and b/labs/lab2/media/lab2_windows-upstreams.png differ
diff --git a/labs/lab2/media/nginx-azure-icon.png b/labs/lab2/media/nginx-azure-icon.png
new file mode 100644
index 0000000..70ab132
Binary files /dev/null and b/labs/lab2/media/nginx-azure-icon.png differ
diff --git a/labs/lab2/media/ubuntu-icon.png b/labs/lab2/media/ubuntu-icon.png
new file mode 100644
index 0000000..cd4fa72
Binary files /dev/null and b/labs/lab2/media/ubuntu-icon.png differ
diff --git a/labs/lab2/media/windows-icon.png b/labs/lab2/media/windows-icon.png
new file mode 100644
index 0000000..05ac958
Binary files /dev/null and b/labs/lab2/media/windows-icon.png differ
diff --git a/labs/lab2/nginx.conf b/labs/lab2/nginx.conf
new file mode 100644
index 0000000..29eeb45
--- /dev/null
+++ b/labs/lab2/nginx.conf
@@ -0,0 +1,42 @@
+# Nginx 4 Azure - Default - Updated Nginx.conf
+# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+#
+user nginx;
+worker_processes auto;
+worker_rlimit_nofile 8192;
+pid /run/nginx/nginx.pid;
+
+events {
+ worker_connections 4000;
+}
+
+error_log /var/log/nginx/error.log error;
+
+http {
+ log_format main '$remote_addr - $remote_user [$time_local] "$request" '
+ '$status $body_bytes_sent "$http_referer" '
+ '"$http_user_agent" "$http_x_forwarded_for"';
+
+ access_log off;
+ server_tokens "";
+ server {
+ listen 80 default_server;
+ server_name localhost;
+ location / {
+ # Points to a directory with a basic html index file with
+ # a "Welcome to NGINX as a Service for Azure!" page
+ root /var/www;
+ index index.html;
+ }
+ }
+
+ include /etc/nginx/conf.d/*.conf;
+ # include /etc/nginx/includes/*.conf; # shared files
+
+}
+
+# stream {
+
+# include /etc/nginx/stream/*.conf; # Stream TCP nginx files
+
+# }
diff --git a/labs/lab2/readme.md b/labs/lab2/readme.md
index 8370f2a..f5516a7 100644
--- a/labs/lab2/readme.md
+++ b/labs/lab2/readme.md
@@ -1,4 +1,4 @@
-# UbuntuVM/Docker / Windows VM / Cafe Demo Deployment
+# Ubuntu VM / Docker / Windows VM / Cafe Demo Deployment
## Introduction
@@ -6,30 +6,32 @@ In this lab, you will be creating various application backend resources. You wi
Your completed Ubuntu and Windows VM deployment will look like this:
-< Lab specific Images here, in the /media sub-folder >
+
NGINX aaS | Ubuntu | Docker | Windows
-:---------------------:|:---------------------:|:---------------------:
- | | |
-
+:---------------------:|:---------------------:|:---------------------:|:---------------------:
+ | | |
+
+
+
## Learning Objectives
By the end of the lab you will be able to:
-- Deploy Ubuntu VM with Azure CLI
-- Install Docker and Docker Compose
+- Deploy Ubuntu VM with Docker and Docker-Compose preinstalled using Azure CLI
- Run Nginx demo application containers
-- Deploy Windows VM with Azure CLI
+- Configure Nginx for Azure to Load Balance Docker containers
- Test and validate your lab environment
-- Configure Nginx for Azure to load balance these resources
+- Deploy Windows VM with Azure CLI
+- Configure Nginx for Azure to proxy to the Windows VM
- Test your Nginx for Azure configs
## Pre-Requisites
- You must have Azure Networking configured for this Workshop
-- You must have access to Azure VMs
+- You must have proper access to create Azure VMs
- You must have Azure CLI tool installed on your local system
- You must have an SSH client software installed on your local system
- You must have your Nginx for Azure instance deployed and running
@@ -40,624 +42,628 @@ By the end of the lab you will be able to:
-## Deploy UbuntuVM with Azure CLI
-
-After logging onto your Azure tenant, set the following Environment variables needed for this lab:
-
-```bash
-export MY_RESOURCEGROUP=myResourceGroup
-export REGION=CentralUS
-export MY_VM_NAME=ubuntuvm
-export MY_USERNAME=azureuser
-export MY_VM_IMAGE="Canonical:0001-com-ubuntu-server-jammy:server-22_04-lts-gen2:latest"
-
-```
-
-To see a list of UbuntuVMs available to you, try:
+## Deploy Ubuntu VM with Docker and Docker-Compose preinstalled using Azure CLI
+
+1. On your local machine, open a terminal and make sure you are logged into your Azure tenant. Set the following environment variable, which points to your resource group:
+
+ ```bash
+ ## Set environment variables
+ export MY_RESOURCEGROUP=s.dutta-workshop
+ ```
+
+    >*Make sure your Terminal is in the `nginx-azure-workshops/labs` directory for all commands during this Workshop.*
+
+1. Create the Ubuntu VM that will act as your backend application server, using the command below:
+
+ ```bash
+ az vm create \
+ --resource-group $MY_RESOURCEGROUP \
+ --name n4a-ubuntuvm \
+ --image Ubuntu2204 \
+ --admin-username azureuser \
+ --vnet-name n4a-vnet \
+ --subnet vm-subnet \
+ --assign-identity \
+ --generate-ssh-keys \
+ --public-ip-sku Standard \
+ --custom-data lab2/init.sh
+ ```
+
+ ```bash
+ ##Sample Output##
+ {
+ "fqdns": "",
+ "id": "/subscriptions//resourceGroups/s.dutta-workshop/providers/Microsoft.Compute/virtualMachines/n4a-ubuntuvm",
+ "identity": {
+ "systemAssignedIdentity": "xxxx-xxxx-xxxx-xxxx-xxxx",
+ "userAssignedIdentities": {}
+ },
+ "location": "centralus",
+ "macAddress": "00-22-48-4A-3B-1E",
+ "powerState": "VM running",
+ "privateIpAddress": "172.16.2.4",
+ "publicIpAddress": "",
+ "resourceGroup": "s.dutta-workshop",
+ "zones": ""
+ }
+ ```
-```bash
-az vm image list --location $MY_LOCATION --publisher Canonical --output table
+    Make a note of the `publicIpAddress`; you will need this IP later to access your VM remotely with SSH.
-```
+    The above command creates the following resources within your resource group:
+    - **n4a-ubuntuvm:** This is your virtual machine (VM) resource.
+    - **n4a-ubuntuvm_OsDisk_1_:** This is the OS disk resource tied to your VM.
+    - **n4a-ubuntuvmVMNic:** This is the network interface resource tied to your VM.
+    - **n4a-ubuntuvmNSG:** This is the network security group resource tied to the network interface of your VM.
+    - **n4a-ubuntuvmPublicIP:** This is the public IP resource tied to your VM.
+
+    This command will also generate an SSH key file named `id_rsa` under the `~/.ssh` folder if you don't have one already.
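+
+    If you want to confirm the key pair is in place, a quick check (assuming the default key path) looks like this:
+
+    ```bash
+    ls -l ~/.ssh/id_rsa ~/.ssh/id_rsa.pub
+    ```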
-```bash
-#Sample output
-You are viewing an offline list of images, use --all to retrieve an up-to-date list
-Architecture Offer Publisher Sku Urn UrnAlias Version
--------------- ---------------------------- ----------- -------------- ------------------------------------------------------------ ---------- ---------
-x64 0001-com-ubuntu-server-jammy Canonical 22_04-lts-gen2 Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest Ubuntu2204 latest
+ **SECURITY WARNING:** This new VM has SSH/port22 open to the entire Internet, and is only using an SSH Key file for security. Take appropriate steps to secure your VM if you will be using it for more than a couple hours!
-```
+1. **(Optional Step):** You can lock down your Network Security Group by allowing SSH/port22 access only from your public IP, using the commands below.
-Create the Ubuntu VM:
+ ```bash
+ ##Set environment variable
+ export MY_PUBLICIP=$(curl ipinfo.io/ip)
+ ```
-```bash
-az vm create \
+ ```bash
+ az network nsg rule update \
--resource-group $MY_RESOURCEGROUP \
- --location $MY_LOCATION \
- --tags owner=$MY_NAME \
- --name $MY_VM_NAME \
- --image $MY_VM_IMAGE \
- --admin-username $MY_USERNAME \
- --vnet-name $MY_VNET \
- --subnet $MY_SUBNET \
- --assign-identity \
- --generate-ssh-keys \
- --public-ip-sku Standard
-
-```
-
-```bash
-#Sample output
-{
- "fqdns": "",
- "id": "/subscriptions/7a0bb4ab-c5a7-46b3-b4ad-c10376166020/resourceGroups/cakker/providers/Microsoft.Compute/virtualMachines/ubuntuvm",
- "identity": {
- "systemAssignedIdentity": "39f7bca4-830b-402a-b773-b381a65de685",
- "userAssignedIdentities": {}
- },
- "location": "westus2",
- "macAddress": "00-22-48-79-32-B6",
- "powerState": "VM running",
- "privateIpAddress": "172.16.24.4",
- "publicIpAddress": "52.137.81.233",
- "resourceGroup": "cakker",
- "zones": ""
-}
-
-```
-
-Make a Note of the `publicIpAddress`, this is how you will access your VM remotely, with SSH.
-
-**SECURITY Warning** This new VM has SSH/port22 open to the entire Internet, and is not using an SSH Key file or NSG for security. Take appropriate steps to secure your VM if you will be using it for more than a couple hours!
-
-**Note:** If you cannot connect, you likely have a networking issue. Most often, you need to add your local public IP address to the Network Security Group for SSH access to the VM. You will find the NSG in the Azure Portal, under your Resource Group, called `ubuntuvm-nsg`. Use `whatsmyip.org` or other tool to display what your local public IP is using across the Internet. Update the NSG rule, to allow your Public IP inbound SSH access to the Ubuntu VM.
-
-Verify you have SSH access to the Ubuntu VM that was deployed previously. Open a Terminal, and using your `ubuntuvm.pem` SSH key file, connect to the public `ubuntuvm-ip`, and log in. For example:
-
-```bash
-ssh -i ubuntuvm_key.pem azureuser@52.247.231.156
-```
-
-Where:
-`ssh` - is the local command to start an SSH session, or use another applcation of your choosing.
-`-i ~/ubuntuvm.key.pem` - is your local ssh key file in your Home directory, it must be in a path where the SSH program can read it. You should have saved this file when you created the VM earlier. If you don't have it, you will find it in the Azure Portal, under your Resource Group, called `unbuntuvm_key`.
-`azureuser` is the default user for Azure hosted Linux VMs
-`@52.247.231.156` is the Public IP Addresses assinged to your Ubuntu VM. You will find this in Azure Portal, under your Resource Group, `ubuntuvm-ip`.
-
-<< OPTIONAL >>
-Using your local system's SSH program, connect to your `ubuntuvm`, and login with `azureuser`:
-
-```bash
-ssh azureuser@52.137.81.233
-
-```
-<< END OPTIOINAL >>
-
-Install some Linux Networking Tools, needed for lab exercises:
-
-```bash
-sudo apt install net-tools
-
-```
-
-Install Docker Community Edition, run these commands:
-
-```bash
-sudo apt update
-sudo apt install apt-transport-https ca-certificates curl software-properties-common
-curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
-echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
-sudo apt update
-apt-cache policy docker-ce
-sudo apt install docker-ce
-
-```
-
-Install Docker Compose, run these commands:
-
-```bash
-sudo apt install docker-compose
-mkdir -p ~/.docker/cli-plugins/
-curl -SL https://github.com/docker/compose/releases/download/v2.3.3/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
-chmod +x ~/.docker/cli-plugins/docker-compose
-
-```
-
-After installation, check the versions:
-
-```bash
-sudo docker version
-sudo docker compose version
-
-```
-
-```bash
-#Sample output
-Client: Docker Engine - Community
- Version: 26.0.0
- API version: 1.45
- Go version: go1.21.8
- Git commit: 2ae903e
- Built: Wed Mar 20 15:17:48 2024
- OS/Arch: linux/amd64
- Context: default
+ --nsg-name n4a-ubuntuvmNSG \
+ --name default-allow-ssh \
+ --source-address-prefix $MY_PUBLICIP
+ ```
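+
+    To verify the rule was updated, you can read back its source prefix (a quick sanity check using the same rule name):
+
+    ```bash
+    az network nsg rule show \
+        --resource-group $MY_RESOURCEGROUP \
+        --nsg-name n4a-ubuntuvmNSG \
+        --name default-allow-ssh \
+        --query "sourceAddressPrefix" \
+        --output tsv
+    ```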
-Server: Docker Engine - Community
- Engine:
- Version: 26.0.0
- API version: 1.45 (minimum version 1.24)
- Go version: go1.21.8
- Git commit: 8b79278
- Built: Wed Mar 20 15:17:48 2024
- OS/Arch: linux/amd64
- Experimental: false
- containerd:
- Version: 1.6.28
- GitCommit: ae07eda36dd25f8a1b98dfbf587313b99c0190bb
- runc:
- Version: 1.1.12
- GitCommit: v1.1.12-0-g51d5e94
- docker-init:
- Version: 0.19.0
- GitCommit: de40ad0
-azureuser@ubuntuvm:~$ sudo docker compose version
-Docker Compose version v2.25.0
+1. Verify you have SSH access to the Ubuntu VM that you deployed in the previous steps. Open a terminal, and use the public IP tied to your Ubuntu VM to start a new SSH session.
-```
+ ```bash
+ ssh azureuser@
-Test and see if Docker will run the `Hello-World` container:
+ #eg
+ ssh azureuser@11.22.33.44
+ ```
-```bash
-sudo docker run hello-world
+    Where:
+    - `ssh` - is the local command to start an SSH session; you can also use another application of your choosing.
+    - `azureuser` is the admin user for the Azure VM that you created.
+    - `@11.22.33.44` is the Public IP Address assigned to your Ubuntu VM.
-```
+    **Note:** If you cannot connect using your local machine, you likely have SSH client issues. You can make use of Azure Cloud Shell to access your VM, which will create an `id_rsa` SSH key file within the `~/.ssh` directory of your Azure Cloud Shell.
-```bash
-#Sample output
-Unable to find image 'hello-world:latest' locally
-latest: Pulling from library/hello-world
-c1ec31eb5944: Pull complete
-Digest: sha256:53641cd209a4fecfc68e21a99871ce8c6920b2e7502df0a20671c6fccc73a7c6
-Status: Downloaded newer image for hello-world:latest
+ 
-Hello from Docker!
-This message shows that your installation appears to be working correctly.
+1. Within the Ubuntu VM, run the commands below to validate that Docker and Docker Compose were installed by the `init.sh` script that you passed (via `--custom-data`) to the `az vm create` command:
-To generate this message, Docker took the following steps:
- 1. The Docker client contacted the Docker daemon.
- 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
- (amd64)
- 3. The Docker daemon created a new container from that image which runs the
- executable that produces the output you are currently reading.
- 4. The Docker daemon streamed that output to the Docker client, which sent it
- to your terminal.
+ ```bash
+ docker version
+ docker-compose version
+ ```
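+
+    Since the `init.sh` script pins Docker Compose to v2.27.0 (see the `curl` download line in `lab2/init.sh`), `docker-compose version` should report that release.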
-To try something more ambitious, you can run an Ubuntu container with:
- $ docker run -it ubuntu bash
+1. Test and see if Docker will run the `Hello-World` container:
-Share images, automate workflows, and more with a free Docker ID:
- https://hub.docker.com/
+ ```bash
+ sudo docker run hello-world
+ ```
-For more examples and ideas, visit:
- https://docs.docker.com/get-started/
-
-```
-
-Checkout a few Docker things:
-
-```bash
-sudo docker image list
-sudo docker ps -a
+ ```bash
+ ##Sample Output##
+ Unable to find image 'hello-world:latest' locally
+ latest: Pulling from library/hello-world
+ c1ec31eb5944: Pull complete
+ Digest: sha256:53641cd209a4fecfc68e21a99871ce8c6920b2e7502df0a20671c6fccc73a7c6
+ Status: Downloaded newer image for hello-world:latest
-```
+ Hello from Docker!
+ This message shows that your installation appears to be working correctly.
-You should find the hello-world image was pulled, and that the container ran and exited.
+ To generate this message, Docker took the following steps:
+ 1. The Docker client contacted the Docker daemon.
+ 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
+ (amd64)
+ 3. The Docker daemon created a new container from that image which runs the
+ executable that produces the output you are currently reading.
+ 4. The Docker daemon streamed that output to the Docker client, which sent it
+ to your terminal.
-Success! You have an Ubuntu VM with Docker that can run various containers needed for future Lab exercises. Reminder: Don't forget to shutdown this VM when you are finished with it later, or set an Auto Shutdown policy using Azure Portal.
+ To try something more ambitious, you can run an Ubuntu container with:
+ $ docker run -it ubuntu bash
-Leave your SSH Terminal running, you will use it in the next Exercise.
+ Share images, automate workflows, and more with a free Docker ID:
+ https://hub.docker.com/
-### Deploy Nginx Demo containers
-
-You will now use Docker Compose to create and deploy three Nginx `ingress-demo` containers. These will be your first group of `backends` that will be used for load balancing with Nginx for Azure.
+ For more examples and ideas, visit:
+ https://docs.docker.com/get-started/
+ ```
-On the Ubuntu VM, create a new folder in the `/home/azureuser` directory, call it `cafe`.
+1. Check out a few Docker things:
-```bash
-azureuser@ubuntuvm: cd $HOME
-azureuser@ubuntuvm: sudo mkdir cafe
-azureuser@ubuntuvm: cd cafe
-azureuser@ubuntuvm: sudo vi docker-compose.yml
+ ```bash
+ sudo docker images
+ sudo docker ps -a
+ ```
-```
+ You should find the hello-world image was pulled, and that the container ran and exited.
-Inspect the `lab3/docker-compose.yml` file. Notice you are pulling the `nginxinc/ingress-demo` image, and starting three containers. The three containers are configured as follows:
+    >Success! You have an Ubuntu VM with Docker that can run various containers needed for future lab exercises. Reminder: Don't forget to shut down this VM when you are finished with it later, or set an Auto Shutdown policy using the Azure Portal.
-Container Name | Name:port
-:-------------:|:------------:
-docker-web1 | ubuntuvm:81
-docker-web2 | ubuntuvm:82
-docker-web3 | ubuntuvm:83
+    Leave your SSH Terminal running; you will use it in the next section.
+
-Copy the contents from the `lab3/docker-compose.yml` file, into the same filename on the Ubuntu VM. Save the file and exit VI.
-
-Start up the three Nginx demo containers. This tells Docker to read the compose file and start the three containers:
-
-```bash
-sudo docker-compose up -d
-
-```
-
-Check the containers are running:
-
-```bash
-sudo docker ps -a
-
-```
+### Deploy Nginx Demo containers
-It should look similar to this. Notice that each container is listening on a unique TCP port on the Docker host - Ports 81, 82, and 83 for web1, web2 and web3, respectively.
+You will now use Docker Compose to create and deploy three Nginx `ingress-demo` containers. These will be your first group of `backends` that will be used for load balancing with Nginx for Azure.
-```bash
-#Sample output
-CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
-33ca8329cece nginxinc/ingress-demo "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 0.0.0.0:82->80/tcp, :::82->80/tcp, 0.0.0.0:4432->443/tcp, :::4432->443/tcp docker-web2
-d3bf38f7b575 nginxinc/ingress-demo "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 0.0.0.0:83->80/tcp, :::83->80/tcp, 0.0.0.0:4433->443/tcp, :::4433->443/tcp docker-web3
-1982b1a4356d nginxinc/ingress-demo "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 0.0.0.0:81->80/tcp, :::81->80/tcp, 0.0.0.0:4431->443/tcp, :::4431->443/tcp docker-web1
+1. Inspect the `lab2/docker-compose.yml` file. Notice you are pulling the `nginxinc/ingress-demo` image, and starting three containers. The three containers are configured as follows:
-```
+ Container Name | Name:port
+ :-------------:|:------------:
+ docker-web1 | ubuntuvm:81
+ docker-web2 | ubuntuvm:82
+ docker-web3 | ubuntuvm:83
+
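+    For reference, the pattern in that compose file looks roughly like this (an abbreviated sketch; the actual `lab2/docker-compose.yml` file is the source of truth):
+
+    ```yaml
+    services:
+      web1:
+        hostname: docker-web1
+        image: nginxinc/ingress-demo
+        ports:
+          - "81:80"      # HTTP exposed on host port 81
+          - "4431:443"   # HTTPS exposed on host port 4431
+      # web2 and web3 repeat the pattern on ports 82/4432 and 83/4433
+    ```
+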
+1. On the Ubuntu VM, create a new sub-directory in the `/home/azureuser` directory; call it `cafe`.
+
+ ```bash
+ cd $HOME
+ mkdir cafe
+ ```
+
+1. Within the `cafe` sub-directory, you will now add `docker-compose.yml`. You can do this in two ways. One way is to create a new `docker-compose.yml` file and copy into it the contents of the `lab2/docker-compose.yml` file, using an editor of your choice on the Ubuntu VM (the example below uses vi).
+
+ ```bash
+ cd cafe
+ vi docker-compose.yml
+ ```
-Verify that all THREE containers have their TCP ports exposed on the Ubuntu VM host:
+ Alternatively, you can get this file by running the `wget` command as shown below:
-```bash
-azureuser@ubuntuvm:~/cafe$ netstat -tnl
+ ```bash
+ cd cafe
+ wget https://raw.githubusercontent.com/nginxinc/nginx-azure-workshops/main/labs/lab2/docker-compose.yml
+ ```
-```
+1. Start up the three Nginx demo containers using the command below. This instructs Docker to read the compose file and start the three containers:
-```bash
-#Sample output
-Active Internet connections (only servers)
-Proto Recv-Q Send-Q Local Address Foreign Address State
-tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN
-tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
-tcp 0 0 0.0.0.0:81 0.0.0.0:* LISTEN
-tcp 0 0 0.0.0.0:83 0.0.0.0:* LISTEN
-tcp 0 0 0.0.0.0:82 0.0.0.0:* LISTEN
-tcp 0 0 0.0.0.0:4433 0.0.0.0:* LISTEN
-tcp 0 0 0.0.0.0:4432 0.0.0.0:* LISTEN
-tcp 0 0 0.0.0.0:4431 0.0.0.0:* LISTEN
-tcp6 0 0 :::22 :::* LISTEN
-tcp6 0 0 :::81 :::* LISTEN
-tcp6 0 0 :::83 :::* LISTEN
-tcp6 0 0 :::82 :::* LISTEN
-tcp6 0 0 :::4433 :::* LISTEN
-tcp6 0 0 :::4432 :::* LISTEN
-tcp6 0 0 :::4431 :::* LISTEN
+ ```bash
+ sudo docker-compose up -d
+ ```
-```
+1. Check the containers are running:
-Yes, looks like ports 81, 82, and 83 are Listening. Note: If you used a different VM, you may need some host Firewall rules to allow traffic to the containers.
+ ```bash
+ sudo docker ps
+ ```
-Test all three containers with curl:
+ ```bash
+ ##Sample Output##
+ CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+ 33ca8329cece nginxinc/ingress-demo "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 0.0.0.0:82->80/tcp, :::82->80/tcp, 0.0.0.0:4432->443/tcp, :::4432->443/tcp docker-web2
+ d3bf38f7b575 nginxinc/ingress-demo "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 0.0.0.0:83->80/tcp, :::83->80/tcp, 0.0.0.0:4433->443/tcp, :::4433->443/tcp docker-web3
+ 1982b1a4356d nginxinc/ingress-demo "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 0.0.0.0:81->80/tcp, :::81->80/tcp, 0.0.0.0:4431->443/tcp, :::4431->443/tcp docker-web1
+ ```
-```bash
-azureuser@ubuntuvm:~/cafe$ curl -s localhost:81 |grep Server
+ Notice that each container is listening on a unique TCP port on the Docker host - Ports 81, 82, and 83 for docker-web1, docker-web2 and docker-web3, respectively.
-```
+1. Verify that all THREE containers have their TCP ports exposed on the Ubuntu VM host:
-Gives you the 1st Container Name as `Server Name`, and Container's IP address as `Server Address`:
+ ```bash
+ netstat -tnl
+ ```
-```bash
-#Sample output
-
Server Name:docker-web1
-
Server Address:172.18.0.2:80
+ ```bash
+ #Sample output
+ Active Internet connections (only servers)
+ Proto Recv-Q Send-Q Local Address Foreign Address State
+ tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN
+ tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
+ tcp 0 0 0.0.0.0:81 0.0.0.0:* LISTEN
+ tcp 0 0 0.0.0.0:83 0.0.0.0:* LISTEN
+ tcp 0 0 0.0.0.0:82 0.0.0.0:* LISTEN
+ tcp 0 0 0.0.0.0:4433 0.0.0.0:* LISTEN
+ tcp 0 0 0.0.0.0:4432 0.0.0.0:* LISTEN
+ tcp 0 0 0.0.0.0:4431 0.0.0.0:* LISTEN
+ tcp6 0 0 :::22 :::* LISTEN
+ tcp6 0 0 :::81 :::* LISTEN
+ tcp6 0 0 :::83 :::* LISTEN
+ tcp6 0 0 :::82 :::* LISTEN
+ tcp6 0 0 :::4433 :::* LISTEN
+ tcp6 0 0 :::4432 :::* LISTEN
+ tcp6 0 0 :::4431 :::* LISTEN
+ ```
-```
+    Yes, it looks like ports 81, 82, and 83 are listening. Note: If you used a different VM, you may need to update the VM host firewall rules to allow traffic to the containers.
-```bash
-azureuser@ubuntuvm:~/cafe$ curl -s localhost:82 |grep Server
+1. Test all three containers by running the following curl commands within the Ubuntu VM:
-```
+ ```bash
+ curl -s localhost:81 |grep Server
+ ```
-Gives you the 2nd Container Name as `Server Name`, and Container's IP address as `Server Address`:
+ Gives you the 1st Container Name as `Server Name`, and Container's IP address as `Server Address`:
-```bash
-#Sample output
-
Server Name:docker-web2
-
Server Address:172.18.0.3:80
+ ```bash
+ ##Sample Output##
+
Server Name:docker-web1
+
Server Address:172.18.0.2:80
+ ```
-```
+ ```bash
+ curl -s localhost:82 |grep Server
+ ```
-```bash
-azureuser@ubuntuvm:~/cafe$ curl -s localhost:83 |grep Server
+ Gives you the 2nd Container Name as `Server Name`, and Container's IP address as `Server Address`:
+
+ ```bash
+ ##Sample Output##
+
Server Name:docker-web2
+
Server Address:172.18.0.3:80
+ ```
-```
+ ```bash
+ curl -s localhost:83 |grep Server
+ ```
+
+ Gives you the 3rd Container Name as `Server Name`, and Container's IP address as `Server Address`:
+
+ ```bash
+ ##Sample Output##
+
Server Name:docker-web3
+
Server Address:172.18.0.4:80
+ ```
+
+    If you are able to see responses from all THREE containers, you can continue.
-Gives you the 3rd Container Name as `Server Name`, and Container's IP address as `Server Address`:
+
-```bash
-#Sample output
-
Server Name:docker-web3
-
Server Address:172.18.0.4:80
+## Configure Nginx for Azure to Load Balance Docker containers
-```
+
-If you able to see Responses from all THREE containers, you can continue.
+In this exercise, you will create your first Nginx config files, for the Nginx Server, Location, and Upstream blocks, to load balance your three Docker containers running on the Ubuntu VM.
-## Configure Nginx for Azure to Load Balance Docker containers
-In this exercise, you will create your first Nginx config files, for the Nginx Server, Location, and Upstream blocks, to load balance your three Docker containers running on the Ubuntu VM.
+
-< diagram here >
+
NGINX aaS | Docker | Cafe Demo
:-------------------------:|:-------------------------:|:-------------------------:
 | |
-Open the Azure Portal, your Resource Group, then Nginx for Azure, Settings, and then the NGINX Configuration panel.
+1. Open the Azure portal in your browser and then open your Resource Group. Click on your NGINX for Azure resource (nginx4a), which should open the Overview section of your resource. From the left pane, click on `NGINX Configuration` under Settings.
-Click on `+ New File`, to create a new Nginx config file.
+1. Click on `+ New File`, to create a new Nginx config file. Name the new file `/etc/nginx/conf.d/cafe-docker-upstreams.conf`.
-Name the new file `/etc/nginx/conf.d/cafe-docker-upstreams.conf`.
+ **Important:** You must use the full Linux /directory/filename path for every Nginx config file, for it to be properly created and placed in the correct directory. If you forget, you can delete it and must re-create it. The Azure Portal Text Edit panels do not let you move, or drag-n-drop files or directories. You can `rename` a file by clicking the Pencil icon, and `delete` a file by clicking the Trashcan icon at the top.
-**Important:** You must use the full Linux /folder/filename path for every Nginx config file, for it to be properly created and placed in the correct folder. If you forget, you can delete it and must re-create it. The Azure Portal Text Edit panels do not let you move, or drag-n-drop files or folders. You can `rename` a file by clicking the Pencil icon, and `delete` a file by clicking the Trashcan icon at the top.
+1. Copy and paste the contents of the matching file in the `lab2` directory on GitHub into the Configuration Edit window, shown here:
-Copy and paste the contents from the matching file from Github, into the Configuration Edit window, shown here:
+ ```nginx
+ # Nginx 4 Azure, Cafe Nginx Demo Upstreams
+ # Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+ #
+ # cafe-nginx servers
+ #
+ upstream cafe_nginx {
+ zone cafe_nginx 256k;
+
+ # from docker compose
+ server n4a-ubuntuvm:81;
+ server n4a-ubuntuvm:82;
+ server n4a-ubuntuvm:83;
-```nginx
-# Nginx 4 Azure, Cafe Nginx Demo Upstreams
-# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
-#
-# cafe-nginx servers
-#
-upstream cafe_nginx {
- zone cafe_nginx 256k;
-
- # from docker compose
- server ubuntuvm:81;
- server ubuntuvm:82;
- server ubuntuvm:83;
+ keepalive 32;
- keepalive 32;
+ }
+ ```
-}
+ 
-```
+ This creates an Nginx Upstream Block, which defines the backend server group that Nginx will load balance traffic to.
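+
+    Two details worth noting in this block: the `zone` directive allocates shared memory so all Nginx worker processes share the upstream group's state and metrics, and `keepalive 32` keeps a pool of idle connections open to the backends, avoiding a new TCP handshake for every proxied request.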
-<< ss here >>
+ Click `Submit` to save your Nginx configuration.
-This creates an Nginx Upstream Block, which defines the backend server group that Nginx will load balance traffic to.
+1. Click the ` + New File` again, and create a second Nginx config file, using the same Nginx for Azure Configuration editor tool. Name the second file `/etc/nginx/conf.d/cafe.example.com.conf`.
-Click the ` + New File` again, and create a second Nginx config file, using the same Nginx for Azure Configuration editor tool.
+1. Copy and paste the contents of the matching file in the `lab2` directory on GitHub into the Configuration Edit window, shown here:
-Name the second file `/etc/nginx/conf.d/cafe.example.com.conf`.
+ ```nginx
+ # Nginx 4 Azure - Cafe Nginx HTTP
+ # Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+ #
+ server {
+
+ listen 80; # Listening on port 80 on all IP addresses on this machine
-Copy, then paste the contents of the matching file from Github, into the Configuration Edit window, shown here:
+ server_name cafe.example.com; # Set hostname to match in request
+ status_zone cafe.example.com; # Metrics zone name
-```nginx
-# Nginx 4 Azure - Cafe Nginx HTTP
-# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
-#
-server {
-
- listen 80; # Listening on port 80 on all IP addresses on this machine
+ access_log /var/log/nginx/cafe.example.com.log main;
+ error_log /var/log/nginx/cafe.example.com_error.log info;
- server_name cafe.example.com; # Set hostname to match in request
- status_zone cafe.example.com; # Metrics zone name
+ location / {
+ #
+ # return 200 "You have reached cafe.example.com, location /\n";
+
+ proxy_pass http://cafe_nginx; # Proxy AND load balance to a list of servers
+ add_header X-Proxy-Pass cafe_nginx; # Custom Header
- access_log /var/log/nginx/cafe.example.com.log main;
- error_log /var/log/nginx/cafe.example.com_error.log info;
+ # proxy_pass http://windowsvm; # Proxy AND load balance to a list of servers
+ # add_header X-Proxy-Pass windowsvm; # Custom Header
- location / {
- #
- # return 200 "You have reached cafe.example.com, location /\n";
-
- proxy_pass http://cafe_nginx; # Proxy AND load balance to a list of servers
- add_header X-Proxy-Pass cafe_nginx; # Custom Header
+ }
}
-}
-
-```
+ ```
-Click the `Submit` Button above the Editor. Nginx will validate your configuration, and if successfull, will reload Nginx with your new configuration. If you receive an error, you will need to fix it before you proceed.
+ Click `Submit` to save your Nginx configuration.
-### Update your local system's DNS /etc/host file
+1. Now you need to include these new files in your main `nginx.conf` file within your `nginx4a` resource. Copy and paste the contents of the `nginx.conf` file in the `lab2` directory on GitHub into the `nginx.conf` file using the Configuration Edit window, shown here:
-For easy access your new website, you will need to add the hostname `cafe.example.com` and the Nginx4Azure Public IP address, to your local system DNS hosts file for name resolution. Your N4A Public IP address can be found in your Azure Portal, under `nginx1-ip`. Use VI or other text editor to add the entry to `/etc/hosts`:
+ ```nginx
+ # Nginx 4 Azure - Default - Updated Nginx.conf
+ # Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+ #
+ user nginx;
+ worker_processes auto;
+ worker_rlimit_nofile 8192;
+ pid /run/nginx/nginx.pid;
-```bash
-cat /etc/hosts
+ events {
+ worker_connections 4000;
+ }
-127.0.0.1 localhost
-...
-# Nginx for Azure testing
-20.3.16.67 cafe.example.com
-...
+ error_log /var/log/nginx/error.log error;
+
+ http {
+ log_format main '$remote_addr - $remote_user [$time_local] "$request" '
+ '$status $body_bytes_sent "$http_referer" '
+ '"$http_user_agent" "$http_x_forwarded_for"';
+
+ access_log off;
+ server_tokens "";
+ server {
+ listen 80 default_server;
+ server_name localhost;
+ location / {
+ # Points to a directory with a basic html index file with
+ # a "Welcome to NGINX as a Service for Azure!" page
+ root /var/www;
+ index index.html;
+ }
+ }
+
+ include /etc/nginx/conf.d/*.conf;
+ # include /etc/nginx/includes/*.conf; # shared files
+
+ }
+ ```
-```
+    Notice that the Nginx standard best practice of placing HTTP context config files in the `/etc/nginx/conf.d` folder is being followed, with the `include` directive reading these files at Nginx configuration load time.
-Save your /etc/hosts file, and quit VI.
+1. Click the `Submit` button above the editor. Nginx will validate your configuration and, if successful, reload Nginx with your new configuration. If you receive an error, you will need to fix it before you proceed.
-### Update your Azure Network Security Group
+
-You likely have one, or more, Azure Network Security Groups that need to updated to allow port 80 HTTP traffic inbound to your Resources. Check and verify that your Source IP is allowed access to both your VNet, and your `nginx1` instance.
+### Test your Nginx for Azure configuration
-### Test your Nginx4Azure configuration
+1. For easy access to your new website, update your local system's DNS `/etc/hosts` file. You will add the hostname `cafe.example.com` and the Nginx for Azure public IP address to your local system's hosts file for name resolution. Your Nginx for Azure public IP address can be found in the Azure Portal, under `n4a-publicIP`. Use vi or any other text editor to add the entry to `/etc/hosts` as shown below:
-Using a new Terminal, send a curl command to `http://cafe.example.com`, what do you see ?
+ ```bash
+ cat /etc/hosts
-```bash
-curl -I http://cafe.example.com
-```
+ 127.0.0.1 localhost
+ ...
-```bash
-#Sample output
-HTTP/1.1 200 OK
-Server: N4A-1.25.1
-Date: Thu, 04 Apr 2024 21:36:30 GMT
-Content-Type: text/html; charset=utf-8
-Connection: keep-alive
-Expires: Thu, 04 Apr 2024 21:36:29 GMT
-Cache-Control: no-cache
-X-Proxy-Pass: cafe-nginx
+ # Nginx for Azure testing
+ 11.22.33.44 cafe.example.com
-```
+ ...
+ ```
-Try the coffee and tea URLs, at http://cafe.example.com/coffee and /tea.
+    where
+    - `11.22.33.44` should be replaced with your `n4a-publicIP` resource IP address.
-You should see a 200 OK Response. Did you see the `X-Proxy-Pass` header - set to the Upstream block name.
+1. Once you have updated your `/etc/hosts` file, save it and quit the editor.
-Did you notice the `Server` header? This is the Nginx Server Token. Optional - Change the Server token to your name, and Submit your configuration. The server_tokens directive is found in the `nginx.conf` file. Change it from `N4A-$nginx_version`, to `N4A-$nginx_version-myname`, and click Submit.
+1. Using a new terminal, send a curl command to `http://cafe.example.com`. What do you see?
-Try the curl again. See the change ? Set it back if you like, the Server token is usually hidden for Security reasons, but you can use it as a quick identity tool temporarily. (Which server did I hit?)
+ ```bash
+ curl -I http://cafe.example.com
+ ```
-```bash
-#Sample output
-HTTP/1.1 200 OK
-Server: N4A-1.25.1-cakker # appended a name
-Date: Thu, 04 Apr 2024 21:41:04 GMT
-Content-Type: text/html; charset=utf-8
-Connection: keep-alive
-Expires: Thu, 04 Apr 2024 21:41:03 GMT
-Cache-Control: no-cache
-X-Proxy-Pass: cafe-nginx
+ ```bash
+ ##Sample Output##
+ HTTP/1.1 200 OK
+ Date: Thu, 04 Apr 2024 21:36:30 GMT
+ Content-Type: text/html; charset=utf-8
+ Connection: keep-alive
+ Expires: Thu, 04 Apr 2024 21:36:29 GMT
+ Cache-Control: no-cache
+ X-Proxy-Pass: cafe-nginx
+ ```
-```
+ Try the coffee and tea URLs, at http://cafe.example.com/coffee and http://cafe.example.com/tea.
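+
+    For example, the same header check works for both paths:
+
+    ```bash
+    curl -I http://cafe.example.com/coffee
+    curl -I http://cafe.example.com/tea
+    ```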
-### Test Nginx 4 Azure to Docker
+    You should see a 200 OK response. Did you see the `X-Proxy-Pass` header, set to the upstream block name?
-Try access to your website with a Browser. Open Chrome, and nagivate to `http://cafe.example.com`. You should see an `Out of Stock` image, with a gray metadata panel, filled with names, IP addresses, URLs, etc. This panel comes from the Docker container, using Nginx $variables to populate the gray panel fields. If you open Chrome Developer Tools, and look at the Response Headers, you should be able to see the Server and X-Proxy-Pass Headers set respectively.
+1. Now try accessing your cafe application with a browser. Open Chrome, and navigate to `http://cafe.example.com`. You should see an `Out of Stock` image, with a gray metadata panel filled with names, IP addresses, URLs, etc. This panel comes from the Docker container, using Nginx $variables to populate the gray panel fields. If you Right+Click and choose Inspect to open Chrome Developer Tools and look at the Response Headers, you should be able to see the `Server` and `X-Proxy-Pass` headers set respectively.
-<< out of stock ss here >>
+
Click Refresh several times. You will notice the `Server Name` and `Server Ip` fields changing, as N4A round-robin load balances the three Docker containers - docker-web1, 2, and 3 respectively.
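+
+You can also watch the rotation from a terminal. A quick loop using the same `curl` and `grep` pattern as earlier should show the three container names cycling:
+
+```bash
+for i in {1..6}; do curl -s http://cafe.example.com | grep "Server Name"; done
+```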
-Congratulations!! You have just completed launching a simple web application with Nginx for Azure, running on the Internet, with just a VM, Docker, and 2 config files for Nginx for Azure. That wasn't so hard now, was it?
-
-
+
-## Deploy Windows VM with Azure CLI
+Try http://cafe.example.com/coffee and http://cafe.example.com/tea in Chrome, refreshing several times. You should find Nginx for Azure is load balancing these Docker web containers as expected.
-After logging onto your Azure tenant, set the following Environment variables needed for this lab:
+>**Congratulations!!** You have just completed launching a simple web application with Nginx for Azure, running on the Internet, with just a VM, Docker, and 2 config files for Nginx for Azure. That was pretty easy, wasn't it?
-```bash
-export MY_RESOURCEGROUP=myResourceGroup
-export REGION=CentralUS
-export MY_VM_NAME=windowsvm
-export MY_USERNAME=azureuser
-export MY_VM_IMAGE="Windows Server 20xx"
+
-```
+### Deploy Windows VM with Azure CLI
+
+Similar to how you deployed an Ubuntu VM, you will now deploy a Windows VM.
+
+1. On your local machine, open a terminal and make sure you are logged into your Azure tenant. Set the following Environment variables:
+
+ ```bash
+ export MY_RESOURCEGROUP=s.dutta-workshop
+ export MY_VM_IMAGE=cognosys:iis-on-windows-server-2016:iis-on-windows-server-2016:1.2019.1009
+ ```
+
+1. Create the Windows VM (This will take some time to deploy):
+
+ ```bash
+ az vm create \
+ --resource-group $MY_RESOURCEGROUP \
+ --name n4a-windowsvm \
+ --image $MY_VM_IMAGE \
+ --vnet-name n4a-vnet \
+ --subnet vm-subnet \
+ --admin-username azureuser \
+ --public-ip-sku Standard
+ ```
+
+ ```bash
+ ##Sample Output##
+ Admin Password:
+ Confirm Admin Password:
+ Consider upgrading security for your workloads using Azure Trusted Launch VMs. To know more about Trusted Launch, please visit https://aka.ms/TrustedLaunch.
+ {
+ "fqdns": "",
+ "id": "/subscriptions//resourceGroups/s.dutta-workshop/providers/Microsoft.Compute/virtualMachines/n4a-windowsvm",
+ "location": "centralus",
+ "macAddress": "00-0D-3A-96-C5-F1",
+ "powerState": "VM running",
+ "privateIpAddress": "172.16.2.5",
+ "publicIpAddress": "",
+ "resourceGroup": "s.dutta-workshop",
+ "zones": ""
+ }
+ ```
-Create the Windows VM:
+   The above command creates the below resources within your resource group:
+   - **n4a-windowsvm:** This is your virtual machine (VM) resource.
+   - **n4a-windowsvm_OsDisk_1_:** This is your OS Disk resource tied to your VM.
+   - **n4a-windowsvmVMNic:** This is your network interface resource tied to your VM.
+   - **n4a-windowsvmNSG:** This is your network security group resource tied to the network interface of your VM.
+   - **n4a-windowsvmPublicIP:** This is your public IP resource tied to your VM.
-```bash
-az vm create \
- --resource-group $MY_RESOURCEGROUP \
- --location $MY_LOCATION \
- --tags owner=$MY_NAME \
- --name $MY_VM_NAME \
- --image $MY_VM_IMAGE \
- --admin-username $MY_USERNAME \
- --vnet-name $MY_VNET \
- --subnet $MY_SUBNET \
- --assign-identity \
- --generate-ssh-keys \
- --public-ip-sku Standard
+   **SECURITY WARNING:** This new VM has rdp/port3389 open to the entire Internet. Take appropriate steps to secure your VM if you will be using it for more than a couple of hours!
-```
+1. **(Optional Step):** You can lock down your Network Security Group by allowing rdp/port3389 access only from your public IP, using the below commands, with a quick verification sketch afterward.
-```bash
-#Sample output
+ ```bash
+ ##Set environment variable
+ export MY_PUBLICIP=$(curl ipinfo.io/ip)
+ ```
+ ```bash
+ az network nsg rule update \
+ --resource-group $MY_RESOURCEGROUP \
+ --nsg-name n4a-windowsvmNSG \
+ --name rdp \
+ --source-address-prefix $MY_PUBLICIP
+ ```
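+
+   If you want to confirm the rule now allows only your public IP, a quick check (a sketch, using the same `rdp` rule name as above):
+
+   ```bash
+   az network nsg rule show \
+     --resource-group $MY_RESOURCEGROUP \
+     --nsg-name n4a-windowsvmNSG \
+     --name rdp \
+     --query sourceAddressPrefix
+   ```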
-```
+
## Configure Nginx for Azure to proxy the Windows VM
-In this exercise, you will create your second Nginx config file, for the Nginx Server, Location, and Upstream blocks, to proxy your IIS Server running on the Windows VM.
+In this exercise, you will create another Nginx config file, for the Windows VM Upstream block, to proxy your IIS Server running on the Windows VM.
-< diagram here >
+
-NGINX aaS | Windows | ? Which Demo Pages
-:-------------------------:|:-------------------------:|:-------------------------:
- | |
+NGINX aaS | Windows VM / IIS
+:-------------------------:|:-------------------------:
+ | 
-Open the Azure Portal, your Resource Group, then Nginx for Azure, Settings, and then the NGINX Configuration panel.
+
-Click on `+ New File`, to create a new Nginx config file.
+1. Open the Azure portal in your browser and then open your Resource Group. Click on your NGINX for Azure resource (nginx4a) which should open the Overview section of your resource. From the left pane click on `NGINX Configuration` under Settings.
-Name the new file `/etc/nginx/conf.d/windows-upstreams.conf`.
+1. Click on `+ New File`, to create a new Nginx config file. Name the new file `/etc/nginx/conf.d/windows-upstreams.conf`.
-**Important:** You must use the full Linux /folder/filename path for every Nginx config file, for it to be properly created and placed in the correct folder. If you forget, you can delete it and must re-create it. The Azure Portal Text Edit panels do not let you move, or drag-n-drop files or folders. You can `rename` a file by clicking the Pencil icon, and `delete` a file by clicking the Trashcan icon at the top.
+ **Important:** You must use the full Linux /folder/filename path for every Nginx config file, for it to be properly created and placed in the correct folder. If you forget, you can delete it and must re-create it. The Azure Portal Text Edit panels do not let you move, or drag-n-drop files or folders. You can `rename` a file by clicking the Pencil icon, and `delete` a file by clicking the Trashcan icon at the top.
-Copy and paste the contents from the matching file from Github, into the Configuration Edit window, shown here:
+1. Copy and paste the contents from the matching file in the `lab2` directory on Github, into the Configuration Edit window, shown here:
-```nginx
-# Nginx 4 Azure, Windows IIS Upstreams
-# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
-#
-# windows IIS server
-#
-upstream windowsvm {
- zone windowsvm 256k;
-
- server windowsvm:80; # IIS Server
+ ```nginx
+ # Nginx 4 Azure, Windows IIS Upstreams
+ # Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+ #
+ # windows IIS server
+ #
+ upstream windowsvm {
+ zone windowsvm 256k;
+
+ server n4a-windowsvm:80; # IIS Server
- keepalive 32;
+ keepalive 32;
-}
+ }
+ ```
-```
+ 
-<< ss here >>
+ Click `Submit` to save your Nginx configuration.
+
+ This creates a new Nginx Upstream Block, which defines the Windows IIS backend server group that Nginx will load balance traffic to.
-This creates a new Nginx Upstream Block, which defines the Windows IIS backend server group that Nginx will load balance traffic to.
+1. Edit the comment characters in `/etc/nginx/conf.d/cafe.example.com.conf`, to enable the `proxy_pass` to the `windowsvm`, and disable it for the `cafe-nginx`, as follows:
-Edit the comment characters in `/etc/nginx/conf.d/cafe.example.com.conf`, to enable the `proxy_pass` to the `windowsvm`, and disable it for the `cafe-nginx`, as follows:
+ ```nginx
+ # Nginx 4 Azure - Cafe Nginx and Windows IIS HTTP
+ # Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+ #
+ server {
+
+ listen 80; # Listening on port 80 on all IP addresses on this machine
-```nginx
-# Nginx 4 Azure - Cafe Nginx and Windows IIS HTTP
-# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
-#
-server {
-
- listen 80; # Listening on port 80 on all IP addresses on this machine
+ server_name cafe.example.com; # Set hostname to match in request
+ status_zone cafe.example.com; # Metrics zone name
- server_name cafe.example.com; # Set hostname to match in request
- status_zone cafe.example.com; # Metrics zone name
+ access_log /var/log/nginx/cafe.example.com.log main;
+ error_log /var/log/nginx/cafe.example.com_error.log info;
- access_log /var/log/nginx/cafe.example.com.log main;
- error_log /var/log/nginx/cafe.example.com_error.log info;
+ location / {
+ #
+ # return 200 "You have reached cafe.example.com, location /\n";
+
+ # proxy_pass http://cafe_nginx; # Proxy AND load balance to a list of servers
+ # add_header X-Proxy-Pass cafe_nginx; # Custom Header
- location / {
- #
- # return 200 "You have reached cafe.example.com, location /\n";
-
- # proxy_pass http://cafe_nginx; # Proxy AND load balance to a list of servers
- # add_header X-Proxy-Pass cafe_nginx; # Custom Header
+ proxy_pass http://windowsvm; # Proxy AND load balance to a list of servers
+ add_header X-Proxy-Pass windowsvm; # Custom Header
- proxy_pass http://windowsvm; # Proxy AND load balance to a list of servers
- add_header X-Proxy-Pass windowsvm; # Custom Header
+ }
}
+ ```
-}
+1. Click the `Submit` Button above the Editor. Nginx will validate your configuration, and if successful, will reload Nginx with your new configuration. If you receive an error, you will need to fix it before you proceed.
+
+
-```
+### Test your Nginx for Azure configs
-Click the `Submit` Button above the Editor. Nginx will validate your configuration, and if successfull, will reload Nginx with your new configuration. If you receive an error, you will need to fix it before you proceed.
+1. Test access again to http://cafe.example.com. You will now see the IIS default server page, instead of the Cafe Out of Stock page. If you check Chrome Dev Tools, the X-Proxy-Pass Header should now show `windowsvm`.
-Test access again to http://cafe.example.com. You will now see the IIS default server page, instead of the Out of Stock page. If you check Chrome Dev Tools, the X-Proxy-Pass header should now show `windowsvm`.
+ 
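+
+   You can also verify the switch from the command line; the same curl check as before should now show the new custom Header:
+
+   ```bash
+   curl -I http://cafe.example.com
+   # Expect a 200 OK, with "X-Proxy-Pass: windowsvm"
+   ```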
-Notice how easy it was, to create a new backend server, and then tell Nginx to proxy_pass to a different Upstream ? You used the same Hostname, DNS record, and Nginx Server block, but you just told Nginx to switch backends.
+   >Notice how easy it was to create a new backend server, and then tell Nginx to `proxy_pass` to a different Upstream. You used the same Hostname, DNS record, and Nginx Server block, but you just told Nginx to switch backends with a different `proxy_pass` directive.
-Edit the `cafe.example.com.conf` file again, and change the comments to enable the `proxy_pass` for `cafe_nginx`, as you will use it again in a future lab exercise.
+1. Edit the `cafe.example.com.conf` file again, and change the comments to disable `windowsvm`, and re-enable the `proxy_pass` for `cafe_nginx`, as you will use it again in a future lab exercise.
-Submit your changes, and re-test to verify that http://cafe.example.com works again for Cafe Nginx. Don't forget to change the custom header as well.
+1. Submit your Nginx changes, and re-test to verify that http://cafe.example.com works again for Cafe Nginx. Don't forget to change the custom Header as well.
+
+
**This completes Lab2.**
@@ -666,8 +672,9 @@ Submit your changes, and re-test to verify that http://cafe.example.com works ag
## References:
- [NGINX As A Service for Azure](https://docs.nginx.com/nginxaas/azure/)
+
- [NGINX Plus Product Page](https://docs.nginx.com/nginx/)
-- [NGINX on Docker](https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-docker/)
+
- [NGINX Directives Index](https://nginx.org/en/docs/dirindex.html)
- [NGINX Variables Index](https://nginx.org/en/docs/varindex.html)
- [NGINX Technical Specs](https://docs.nginx.com/nginx/technical-specs/)
@@ -683,4 +690,4 @@ Submit your changes, and re-test to verify that http://cafe.example.com works ag
-------------
-Navigate to ([Lab3](../lab3/readme.md) | [LabX](../labX/readme.md))
+Navigate to ([Lab3](../lab3/readme.md) | [LabGuide](../readme.md))
diff --git a/labs/lab2/windows-upstreams.conf b/labs/lab2/windows-upstreams.conf
index 8478bfe..0ffb181 100644
--- a/labs/lab2/windows-upstreams.conf
+++ b/labs/lab2/windows-upstreams.conf
@@ -6,7 +6,7 @@
upstream windowsvm {
zone windowsvm 256k;
- server windowsvm:80; # IIS Server
+ server n4a-windowsvm:80; # IIS Server
keepalive 32;
diff --git a/labs/lab3/PUT-NGINXplus-REPO-JWT-HERE b/labs/lab3/PUT-NGINXplus-REPO-JWT-HERE
new file mode 100644
index 0000000..e69de29
diff --git a/ca-notes/aks/nic/dashboard-vs.yaml b/labs/lab3/dashboard-vs.yaml
similarity index 89%
rename from ca-notes/aks/nic/dashboard-vs.yaml
rename to labs/lab3/dashboard-vs.yaml
index 24151a0..040e92a 100644
--- a/ca-notes/aks/nic/dashboard-vs.yaml
+++ b/labs/lab3/dashboard-vs.yaml
@@ -18,7 +18,7 @@ metadata:
name: dashboard-vs
namespace: nginx-ingress
spec:
- host: dashboard.nginxazure.build
+ host: dashboard.example.com
upstreams:
- name: dashboard
service: dashboard-svc
@@ -29,4 +29,5 @@ spec:
pass: dashboard
- path: /api
action:
- pass: dashboard
\ No newline at end of file
+ pass: dashboard
+
\ No newline at end of file
diff --git a/labs/lab3/media/aks-icon.png b/labs/lab3/media/aks-icon.png
new file mode 100644
index 0000000..0d4ff61
Binary files /dev/null and b/labs/lab3/media/aks-icon.png differ
diff --git a/labs/lab3/media/aks-icon2.png b/labs/lab3/media/aks-icon2.png
new file mode 100644
index 0000000..37de29c
Binary files /dev/null and b/labs/lab3/media/aks-icon2.png differ
diff --git a/labs/lab3/media/lab3_diagram.png b/labs/lab3/media/lab3_diagram.png
new file mode 100644
index 0000000..30eeac8
Binary files /dev/null and b/labs/lab3/media/lab3_diagram.png differ
diff --git a/labs/lab3/media/lab3_nic-dashboards-diagram.png b/labs/lab3/media/lab3_nic-dashboards-diagram.png
new file mode 100644
index 0000000..83111a4
Binary files /dev/null and b/labs/lab3/media/lab3_nic-dashboards-diagram.png differ
diff --git a/labs/lab3/media/nginx-azure-icon.png b/labs/lab3/media/nginx-azure-icon.png
new file mode 100644
index 0000000..70ab132
Binary files /dev/null and b/labs/lab3/media/nginx-azure-icon.png differ
diff --git a/labs/lab3/media/nginx-ingress-icon.png b/labs/lab3/media/nginx-ingress-icon.png
new file mode 100644
index 0000000..0196a5a
Binary files /dev/null and b/labs/lab3/media/nginx-ingress-icon.png differ
diff --git a/ca-notes/aks/nic/nginx-plus-ingress.yaml b/labs/lab3/nginx-plus-ingress.yaml
similarity index 96%
rename from ca-notes/aks/nic/nginx-plus-ingress.yaml
rename to labs/lab3/nginx-plus-ingress.yaml
index a3cde18..51a0990 100644
--- a/ca-notes/aks/nic/nginx-plus-ingress.yaml
+++ b/labs/lab3/nginx-plus-ingress.yaml
@@ -19,6 +19,8 @@ spec:
prometheus.io/scheme: http
spec:
serviceAccountName: nginx-ingress
+ imagePullSecrets:
+ - name: regcred
automountServiceAccountToken: true
securityContext:
seccompProfile:
@@ -33,7 +35,7 @@ spec:
# - name: nginx-log
# emptyDir: {}
containers:
- - image: acrakker.azurecr.io/nginx-plus-ingress:3.2.1-alpine-fips
+ - image: private-registry.nginx.com/nginx-ic/nginx-plus-ingress:3.3.2
imagePullPolicy: IfNotPresent
name: nginx-plus-ingress
ports:
diff --git a/labs/lab3/nic1-dashboard-upstreams.conf b/labs/lab3/nic1-dashboard-upstreams.conf
new file mode 100644
index 0000000..0018d19
--- /dev/null
+++ b/labs/lab3/nic1-dashboard-upstreams.conf
@@ -0,0 +1,16 @@
+# Nginx 4 Azure to NIC, AKS Node for Upstreams
+# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+#
+# nginx ingress dashboard
+#
+upstream nic1_dashboard {
+ zone nic1_dashboard 256k;
+
+ # from nginx-ingress NodePort Service / aks1 Node IPs
+ server aks-nodepool1-19055428-vmss000003:32090; #aks1 node1
+ server aks-nodepool1-19055428-vmss000004:32090; #aks1 node2
+ server aks-nodepool1-19055428-vmss000005:32090; #aks1 node3
+
+ keepalive 8;
+
+}
diff --git a/ca-notes/n4a-configs/nic1-dashboard.conf b/labs/lab3/nic1-dashboard.conf
similarity index 77%
rename from ca-notes/n4a-configs/nic1-dashboard.conf
rename to labs/lab3/nic1-dashboard.conf
index a3a1dd0..8393b0e 100644
--- a/ca-notes/n4a-configs/nic1-dashboard.conf
+++ b/labs/lab3/nic1-dashboard.conf
@@ -1,8 +1,8 @@
-# N4A NIC Dashboard config
+# N4A NIC Dashboard config for AKS1
#
server {
listen 9001;
- server_name dashboard.nginxazure.build;
+ server_name dashboard.example.com;
access_log off;
location = /dashboard.html {
diff --git a/ca-notes/n4a-configs/aks2-dashboard-upstreams.conf b/labs/lab3/nic2-dashboard-upstreams.conf
similarity index 50%
rename from ca-notes/n4a-configs/aks2-dashboard-upstreams.conf
rename to labs/lab3/nic2-dashboard-upstreams.conf
index 99370d6..263deca 100644
--- a/ca-notes/n4a-configs/aks2-dashboard-upstreams.conf
+++ b/labs/lab3/nic2-dashboard-upstreams.conf
@@ -7,10 +7,10 @@ upstream nic2_dashboard {
zone nic2_dashboard 256k;
# from nginx-ingress NodePort Service / aks Node IPs
- #server aks-nodepool1-19485366:32090; #aks2 cluster nodes:
- server aks-nodepool1-19485366-vmss00000h:32090; #aks node1:
- server aks-nodepool1-19485366-vmss00000i:32090; #aks node2:
- server aks-nodepool1-19485366-vmss00000j:32090; #aks node3:
+ server aks-nodepool1-29147198-vmss000000:32090; #aks2 node1
+ server aks-nodepool1-29147198-vmss000001:32090; #aks2 node2
+ server aks-nodepool1-29147198-vmss000002:32090; #aks2 node3
+ server aks-nodepool1-29147198-vmss000003:32090; #aks2 node4
keepalive 8;
diff --git a/ca-notes/n4a-configs/nic2-dashboard.conf b/labs/lab3/nic2-dashboard.conf
similarity index 77%
rename from ca-notes/n4a-configs/nic2-dashboard.conf
rename to labs/lab3/nic2-dashboard.conf
index e07551b..76cc874 100644
--- a/ca-notes/n4a-configs/nic2-dashboard.conf
+++ b/labs/lab3/nic2-dashboard.conf
@@ -1,8 +1,8 @@
-# N4A NIC Dashboard config
+# N4A NIC Dashboard config for AKS2
#
server {
listen 9002;
- server_name dashboard.nginxazure.build;
+ server_name dashboard.example.com;
access_log off;
location = /dashboard.html {
diff --git a/labs/lab3/nodeport-static.yaml b/labs/lab3/nodeport-static.yaml
new file mode 100644
index 0000000..3c47ee4
--- /dev/null
+++ b/labs/lab3/nodeport-static.yaml
@@ -0,0 +1,23 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: nginx-ingress
+ namespace: nginx-ingress
+spec:
+ type: NodePort
+ ports:
+ - port: 80
+ nodePort: 32080
+ protocol: TCP
+ name: http
+ - port: 443
+ nodePort: 32443
+ protocol: TCP
+ name: https
+ - port: 9000
+ nodePort: 32090
+ protocol: TCP
+ name: dashboard
+ selector:
+ app: nginx-ingress
+
\ No newline at end of file
diff --git a/labs/lab3/readme.md b/labs/lab3/readme.md
index 963ceff..b447c1a 100644
--- a/labs/lab3/readme.md
+++ b/labs/lab3/readme.md
@@ -1,725 +1,996 @@
-# AKS / NGINX Ingress Controller / Cafe or Garage Demo Deployment
+# AKS / Nginx Ingress Controller Deployment
## Introduction
-In this lab, you will explore how Nginx for Azure can route and load balance traffic to backend Kubernetes applications, pods, and services. You will create 2 AKS Kubernetes clusters, install NGINX Plus Ingress Controllers, and several demo applications. This will be your testing platform for Nginx for Azure with AKS - deploying and managing applications, networking, and using both NGINX for Azure and NGINX Ingress features to control traffic to your Modern Apps running in the clusters. Then you will pull an NGINX Plus Ingress Controller Image from the F5 NGINX Private Registry. Then you will deploy the NGINX Ingress Controllers, and configure it to route traffic to the demo app.
+In this lab, you will expand your test environment by adding Azure Kubernetes Service (AKS) resources. You will create 2 new AKS Kubernetes clusters, and deploy NGINX Plus Ingress Controller. This will be your testing platform for Nginx for Azure with AKS - deploying and managing applications, networking, and using both NGINX for Azure and NGINX Plus Ingress features to control traffic to your Modern Apps running in the clusters. You will use a Kubernetes Service to access the Nginx Plus Ingress Dashboard, and expose it with Nginx for Azure, so you can see in real time what is happening inside both AKS clusters.
+
+
+
+NGINX aaS | AKS | Nginx Plus Ingress
+:---------------------:|:---------------------:|:---------------------:
+ | |
## Learning Objectives
+- Understand what Azure AKS is.
- Deploy 2 Kubernetes clusters using Azure CLI.
-- Pulling and deploying the NGINX Plus Ingress Controller image.
-- Deploying the Nginx Ingress Dashboard
-- Deploying the Cafe Demo application
-- Test and verify proper operation on both AKS clusters.
-- Expose the Cafe Demo app and Nginx Ingress Dashboards
+- Test and verify proper operation of both AKS clusters.
+- Deploy the NGINX Plus Ingress Controller image.
+- Deploy the Nginx Plus Ingress Dashboard.
+- Expose the Nginx Ingress Dashboards with Nginx for Azure.
+
+## Prerequisites
+
+- You must have Azure Networking configured for this Workshop
+- You must have Azure CLI tool installed on your local system
+- You must have Kubectl installed on your local system
+- You must have Git installed on your local system
+- You must have Docker Desktop or Docker client tools installed on your local system
+- You must have your Nginx for Azure instance deployed and running
+- Familiarity with Azure Resource types - Resource Groups, VMs, NSG, AKS, etc.
+- Familiarity with basic Linux commands and command-line tools
+- Familiarity with Kubernetes / AKS concepts and commands
+- Familiarity with basic HTTP protocol
+- Familiarity with Ingress Controller concepts
+- See `Lab0` for instructions on setting up your system for this Workshop
-## What is Azure AKS?
+
-Azure Kubernetes Service is a service provided for Kubernetes on Azure
-infrastructure. The Kubernetes resources will be fully managed by Microsoft Azure, which offloads the burden of maintaining the infrastructure, and makes sure these resources are highly available and reliable at all times. This is often a good choice for Modern Applications running as containers, and using Kubernetes Services to control them.
+Your new Lab Diagram will look similar to this:
-## Azure Regions and naming convention suggestions
+
-1. Check out the available [Azure Regions](https://learn.microsoft.com/en-us/azure/reliability/availability-zones-overview).
-Decide on a [Datacenter region](https://azure.microsoft.com/en-us/explore/global-infrastructure/geographies/#geographies) that is closest to you and meets your needs.
-Check out the [Azure latency test](https://www.azurespeed.com/Azure/Latency)! We will need to choose one and input a region name in the following steps.
+
-2. Consider a naming and tagging convention to organize your cloud assets to support user identification of shared subscriptions.
+### What is Azure AKS?
-**Example:**
+Azure Kubernetes Service is a service provided for Kubernetes on Azure infrastructure. The Kubernetes resources will be fully managed by Microsoft Azure, which offloads the burden of maintaining the infrastructure, and makes sure these resources are highly available and reliable at all times. This is often a good choice for Modern Applications running as containers, and using Kubernetes Services to control them.
-You are located in Chicago, Illinois. You choose the Datacenter region
-`Central US`. These labs will use the following naming convention:
+### Deploy first Kubernetes Cluster with Azure CLI
-```bash
---
+1. With a single Azure CLI command, you will deploy a production-ready AKS cluster with some additional options. (**This will take a while**).
-```
+   First, initialize the Environment variables based on your setup; they are passed to the Azure CLI command as shown below:
-So for the 2 AKS Clusters you will deploy in `Central US`, and
-will name your Clusters `aks-shouvik-centralus` and `aks2-shouvik-centralus`.
+ ```bash
+ ## Set environment variables
+ export MY_RESOURCEGROUP=s.dutta-workshop
+ export MY_AKS=n4a-aks1
+ export MY_NAME=s.dutta
+ export K8S_VERSION=1.27
+ export MY_SUBNET=$(az network vnet subnet show -g $MY_RESOURCEGROUP -n aks1-subnet --vnet-name n4a-vnet --query id -o tsv)
+ ```
-You will also use the Owner tag `owner=shouvik` to further identify your assets in a shared account.
+ ```bash
+ # Create First AKS Cluster
+ az aks create \
+ --resource-group $MY_RESOURCEGROUP \
+ --name $MY_AKS \
+ --node-count 3 \
+ --node-vm-size Standard_B2s \
+ --kubernetes-version $K8S_VERSION \
+ --tags owner=$MY_NAME \
+ --vnet-subnet-id=$MY_SUBNET \
+ --enable-addons monitoring \
+ --generate-ssh-keys
+ ```
-## Azure CLI Basic Configuration Setting
+ >**Note**: At the time of this writing, 1.27 is the latest kubernetes long-term supported (LTS) version available in Azure AKS.
-You will need the Azure Command Line Interface (CLI) tool installed on your client machine to manage your Azure services. See [How to install the Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli)
+1. **(Optional Step)**: If the `kubectl` utility tool is not installed on your workstation, you can install it locally using the below command:
-If you do not have Azure CLI installed, you will need to install it to continue the lab exercises. To check Azure CLI version run below command:
+ ```bash
+ az aks install-cli
+ ```
-```bash
-az --version
+1. Configure `kubectl` to connect to your Azure AKS cluster using below command.
-```
+ ```bash
+ az aks get-credentials \
+ --resource-group $MY_RESOURCEGROUP \
+ --name n4a-aks1
+ ```
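+
+   As an optional sanity check, you can confirm that `kubectl` is now pointed at the new cluster (by default, the context is named after the cluster):
+
+   ```bash
+   kubectl config current-context
+
+   # Expected: n4a-aks1
+   ```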
+
+### Install NGINX Plus Ingress Controller to first cluster
+
+
+
+In this section, you will be installing NGINX Plus Ingress Controller in the first AKS cluster using manifest files. You will then check and verify that the Ingress Controller is running.
+
+1. Make sure your AKS cluster is running. Check the Nodes using below command.
+
+ ```bash
+ kubectl get nodes
+ ```
+
+ ```bash
+ ##Sample Output##
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-19055428-vmss000003 Ready agent 3m52s v1.27.9
+ aks-nodepool1-19055428-vmss000004 Ready agent 3m12s v1.27.9
+ aks-nodepool1-19055428-vmss000005 Ready agent 3m37s v1.27.9
+ ```
+
+1. Ensure that you are in the `/labs` directory of the workshop repository within your terminal.
+
+ ```bash
+ # Check your current directory
+ pwd
+ ```
-1. Sign in with Azure CLI using your preferred method listed [here](https://learn.microsoft.com/en-us/cli/azure/authenticate-azure-cli).
-
- >**Note:** We made use of Sign in interactively method for this workshop
```bash
- az login
+ ##Sample Output##
+ /nginx-azure-workshops/labs
+ ```
+
+1. Git Clone the Nginx Ingress Controller repo and navigate into the `/deployments` directory to make it your working directory for installing NGINX Ingress Controller:
+
+ ```bash
+ git clone https://github.com/nginxinc/kubernetes-ingress.git --branch v3.3.2
+ cd kubernetes-ingress/deployments
+ ```
+
+ >**Note**: At the time of this writing `3.3.2` is the latest NGINX Plus Ingress version that is available. Please feel free to use the latest version of NGINX Plus Ingress Controller. Look into [references](#references) for the latest Ingress images.
- ```
+1. Create necessary Kubernetes objects needed for Ingress Controller:
-1. Once you have logged in you can run below command to validate your tenant and subscription ID and name.
```bash
- az account show
+ # Create namespace and a service account
+ kubectl apply -f common/ns-and-sa.yaml
+
+ # Create cluster role and cluster role bindings
+ kubectl apply -f rbac/rbac.yaml
+
+ # Create default server secret with TLS certificate and a key
+ kubectl apply -f ../examples/shared-examples/default-server-secret/default-server-secret.yaml
+ # Create config map
+ kubectl apply -f common/nginx-config.yaml
+
+ # Create IngressClass resource
+ kubectl apply -f common/ingress-class.yaml
+
+ # Create CRDs
+ kubectl apply -f common/crds/k8s.nginx.org_virtualservers.yaml
+ kubectl apply -f common/crds/k8s.nginx.org_virtualserverroutes.yaml
+ kubectl apply -f common/crds/k8s.nginx.org_transportservers.yaml
+ kubectl apply -f common/crds/k8s.nginx.org_policies.yaml
+
+ # Create GlobalConfiguration resource
+ kubectl apply -f common/crds/k8s.nginx.org_globalconfigurations.yaml
```
-2. Optional: If you have multiple subscriptions and would like to change the current subscription to another then run below command.
```bash
- # change the active subscription using the subscription name
- az account set --subcription "{subscription name}"
+ ##Sample Output##
+ namespace/nginx-ingress created
+ serviceaccount/nginx-ingress created
+ clusterrole.rbac.authorization.k8s.io/nginx-ingress created
+ clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress created
+ secret/default-server-secret created
+ configmap/nginx-config created
+ ingressclass.networking.k8s.io/nginx created
+ customresourcedefinition.apiextensions.k8s.io/virtualservers.k8s.nginx.org created
+ customresourcedefinition.apiextensions.k8s.io/virtualserverroutes.k8s.nginx.org created
+ customresourcedefinition.apiextensions.k8s.io/transportservers.k8s.nginx.org created
+ customresourcedefinition.apiextensions.k8s.io/policies.k8s.nginx.org created
+ customresourcedefinition.apiextensions.k8s.io/globalconfigurations.k8s.nginx.org created
+ ```
+
+1. To deploy NGINX Plus Ingress Controller, you must have a software subscription license – download the NGINX Plus Ingress Controller license JWT Token file (`nginx-repo.jwt`) from your account on [MyF5](https://my.f5.com/).
+
+ **NOTE:** If you do not have a license, you can request a 30-day Trial key from [here](https://www.nginx.com/free-trial-connectivity-stack-kubernetes/).
+ An email will arrive in your Inbox in a few minutes, with links to download the license files.
+
+ However, for this workshop, a Trial License will be provided to you, so you can pull and run the Nginx Plus Commercial version of the Ingress Controller. This is NOT the same Ingress Controller provided by the Kubernetes Community. (If you are unsure which Ingress Controller you are using in your other Kubernetes environments, you can find a link to the Blog from Nginx that explains the differences).
+
+1. Once your Workshop Instructor has provided the JWT file, follow these instructions to create a Kubernetes Secret named `regcred`, of type `docker-registry`. You will need to create the Secret in both of your AKS clusters.
+
+1. Copy the provided `nginx-repo.jwt` file into the `/labs/lab3` directory within your cloned workshop repository.
+
+1. Navigate back to the `/labs` directory and export the contents of the JWT file to an environment variable.
+
+ ```bash
+ cd /nginx-azure-workshops/labs
+
+ export JWT=$(cat lab3/nginx-repo.jwt)
+ ```
+
+ ```bash
+ # Check $JWT
+ echo $JWT
+ ```
+
+1. Create a Kubernetes `docker-registry` Secret on your first cluster, using the JWT token as the username, and `none` for the password (as the password is not used). The name of the docker server is `private-registry.nginx.com`. Notice how the `$JWT` variable, which holds the contents of the `nginx-repo.jwt` file, is passed to the `--docker-username` flag:
+
+ ```bash
+ kubectl create secret docker-registry regcred \
+ --docker-server=private-registry.nginx.com \
+ --docker-username=$JWT \
+ --docker-password=none \
+ -n nginx-ingress
+ ```
+
+   > **Note:** It is important to note that `--docker-username` contains the contents of the token, and is not pointing to the token file itself. Ensure that when you copy the contents of the JWT token, there are no additional characters or extra whitespace. This can invalidate the token and cause 401 errors when trying to authenticate to the registry.
+
+1. Confirm the Secret was created successfully by running:
+
+ ```bash
+ kubectl get secret regcred -n nginx-ingress -o yaml
+ ```
+
+ ```bash
+ ##Sample Output##
+ apiVersion: v1
+ data:
+ .dockerconfigjson:
+ kind: Secret
+ metadata:
+ creationTimestamp: "2024-04-16T19:21:09Z"
+ name: regcred
+ namespace: nginx-ingress
+ resourceVersion: "5838852"
+ uid: 30c60523-6b89-41b3-84d8-d22ec60d30a5
+ type: kubernetes.io/dockerconfigjson
+ ```
- # OR
+1. Once you have created the `regcred` kubernetes secret, you are ready to deploy the Ingress Controller as a Deployment:
- # change the active subscription using the subscription ID
- az account set --subscription "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+   You will find the sample deployment file (`nginx-plus-ingress.yaml`) in the `deployment` sub-directory within your `kubernetes-ingress` directory that was added when you ran the git clone command. ***Do not use this file; use the updated one provided in the `/lab3` directory.***
+   You will use the `nginx-plus-ingress.yaml` manifest file provided in the `/lab3` directory, which has the following changes highlighted below:
+
+ - Change Image Pull to Nginx Private Repo with Docker Secret
+ - Enable Prometheus
+ - Add port and name for dashboard
+ - Change Dashboard Port to 9000
+ - Allow all IPs to access dashboard
+ - Make use of default TLS certificate
+ - Enable Global Configuration for Transport Server
+
+1. Inspect `lab3/nginx-plus-ingress.yaml`, looking at these changes (a quick spot-check sketch follows the list):
+
+ - On lines #16-19, we have enabled `Prometheus` related annotations.
+ - On Lines #22-23, the ImagePullSecret is set to the Docker Config Secret `regcred` you created previously.
+ - On line #38, the `nginx-plus-ingress:3.3.2` placeholder is changed to the Nginx Private Registry image.
+ - On lines #52-53, we have added TCP port 9000 for the Plus Dashboard.
+ - On line #97, uncomment to make use of default TLS secret
+ - On lines #98-99, we have enabled the Dashboard and set the IP access controls to the Dashboard.
+ - On line #108, we have enabled Prometheus to collect metrics from the NGINX Plus stats API.
+ - On line #111, uncomment to enable the use of Global Configurations.
+
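+   If you would like to spot-check a couple of these settings without opening an editor, a quick grep sketch (run from the `/labs` directory):
+
+   ```bash
+   grep -n -E 'regcred|private-registry.nginx.com|9000' lab3/nginx-plus-ingress.yaml
+   ```
+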
+1. Now deploy NGINX Ingress Controller as a Deployment using your updated manifest file.
+
+ ```bash
+ kubectl apply -f lab3/nginx-plus-ingress.yaml
```
-3. Create a new Azure Resource Group called `-workshop` , where `` is your last name. This will hold all the Azure resources that you will create for this workshop.
```bash
- az group create --name -workshop --location centralus
+ ##Sample Output##
+ deployment.apps/nginx-ingress created
+ ```
+
+### Check your NGINX Ingress Controller within first cluster
+
+1. Verify the NGINX Plus Ingress controller is up and running correctly in the Kubernetes cluster:
+ ```bash
+ kubectl get pods -n nginx-ingress
```
-## Deploy 1st Kubernetes Cluster with Azure CLI
+ ```bash
+ ##Sample Output##
+ NAME READY STATUS RESTARTS AGE
+ nginx-ingress-5764ddfd78-ldqcs 1/1 Running 0 17s
+ ```
+
+   **Note**: You must use the kubectl `-n` namespace flag, followed by the namespace name, to see pods that are not in the default namespace.
+
+2. Instead of remembering the unique pod name, `nginx-ingress-xxxxxx-yyyyy`, you can store the Ingress Controller pod name into the `$AKS1_NIC` variable to be used throughout the lab.
+
+ >**Note:** This variable is stored for the duration of the Terminal session, and so if you close the Terminal it will be lost. At any time you can refer back to this step to create the `$AKS1_NIC` variable again.
-1. With the use of Azure CLI, you can deploy a production-ready AKS cluster with some options using a single command (**This will take a while**).
```bash
- MY_RESOURCEGROUP=s.dutta
- MY_LOCATION=centralus
- MY_AKS=aks-shouvik
- MY_NAME=shouvik
- AKS_NODE_VM=Standard_B2s
- K8S_VERSION=1.27
+ export AKS1_NIC=$(kubectl get pods -n nginx-ingress -o jsonpath='{.items[0].metadata.name}')
- # Create AKS Cluster
- az aks create \
- --resource-group $MY_RESOURCEGROUP \
- --name $MY_AKS \
- --location $MY_LOCATION \
- --node-count 3 \
- --node-vm-size $AKS_NODE_VM \
- --kubernetes-version $K8S_VERSION \
- --tags owner=$MY_NAME \
- --enable-addons monitoring \
- --generate-ssh-keys
```
- >**Note**:
- >1. At the time of this writing, 1.27 is the latest kubernetes version available in Azure AKS.
- >2. To list all possible VM sizes that an AKS node can use, run below command:
- > ```bash
- > az vm list-sizes --location centralus --output table
- > ```
+ Verify the variable is set correctly.
-2. **(Optional Step)**: If kubectl ultility tool is not installed in your workstation then you can install `kubectl` locally using below command:
```bash
- az aks install-cli
+ echo $AKS1_NIC
```
-3. Configure `kubectl`` to connect to your Azure AKS cluster using below command.
+   **Note:** If this command doesn't show the name of the pod, run the previous command again.
+
+### Test Access to the Nginx Plus Ingress Dashboard within first cluster
+
+Just a quick test: is your Nginx Plus Ingress Controller running, and can you see the Dashboard? Let's try it:
+
+1. Using Kubernetes Port-Forward, connect to the $AKS1_NIC pod:
+
```bash
- MY_RESOURCEGROUP=s.dutta
- MY_AKS=aks-shouvik
+ kubectl port-forward $AKS1_NIC -n nginx-ingress 9000:9000
- az aks get-credentials --resource-group $MY_RESOURCEGROUP --name $MY_AKS
```
-## Deploy 2nd Kubernetes Cluster with Azure CLI
+1. Open your browser to http://localhost:9000/dashboard.html.
-1. Open a second Terminal, log into to Azure, and repeat the Steps above for the Second AKS Cluster, this one has 4 nodes and a different name.
+   You should see the Nginx Plus Dashboard. This dashboard will show more metrics as you progress through the workshop.
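+
+   While the Port-Forward is running, you can also query the NGINX Plus API that backs this dashboard from a second Terminal (a quick sketch; the root endpoint simply lists the supported API versions):
+
+   ```bash
+   curl http://localhost:9000/api/
+   ```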
+
+ Type `Ctrl+C` within your terminal to stop the Port-Forward when you are finished.
+
+### Deploy second Kubernetes Cluster with Azure CLI
+
+In this section, similar to how you deployed the first AKS cluster, you will deploy a second AKS cluster named `n4a-aks2` which has 4 nodes.
+
+1. Run the below commands to deploy your second AKS cluster (**This will take a while**).
```bash
- MY_RESOURCEGROUP=s.dutta
- MY_LOCATION=centralus
- MY_AKS=aks2-shouvik # Change name to aks2
- MY_NAME=shouvik
- AKS_NODE_VM=Standard_B2s
- K8S_VERSION=1.27
+ ## Set environment variables
+ export MY_RESOURCEGROUP=s.dutta-workshop
+ export MY_AKS=n4a-aks2
+ export MY_NAME=s.dutta
+ export K8S_VERSION=1.27
+ export MY_SUBNET=$(az network vnet subnet show -g $MY_RESOURCEGROUP -n aks2-subnet --vnet-name n4a-vnet --query id -o tsv)
+
+ ```
+ ```bash
# Create Second AKS Cluster
az aks create \
--resource-group $MY_RESOURCEGROUP \
--name $MY_AKS \
- --location $MY_LOCATION \
--node-count 4 \
- --node-vm-size $AKS_NODE_VM \
+ --node-vm-size Standard_B2s \
--kubernetes-version $K8S_VERSION \
--tags owner=$MY_NAME \
+ --vnet-subnet-id=$MY_SUBNET \
+ --network-plugin azure \
--enable-addons monitoring \
--generate-ssh-keys
```
-1. **Managing Both Clusters:** As you are managing multiple Kubernetes clusters, you can easily change between Contexts using the `kubectl config set-context` command:
-
- ```bash
- # Get a list of kubernetes clusters in your local .kube config file:
- kubectl config get-clusters
- ```
- ```bash
- ###Sample Output###
- NAME
- local-k8s-cluster
- aks-development
- minikube
- aks-shouvik
- aks2-shouvik
- ```
- ```bash
- # Set context
- kubectl config set-context aks-shouvik
- ```
- ```bash
- # Check which context you are currently targeting
- kubectl config current-context
- ```
- ```bash
- ###Sample Output###
- aks-shouvik
- ```
+ >**Note**: At the time of this writing, 1.27 is the latest kubernetes long-term supported (LTS) version available in Azure AKS.
+
+2. Configure `kubectl` to connect to your Azure AKS cluster using below command.
+
```bash
- # Allows you to switch between contexts using their name
- kubectl config use-context
+ az aks get-credentials \
+ --resource-group $MY_RESOURCEGROUP \
+ --name n4a-aks2
```
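+
+   With both clusters now in your kubeconfig, you can list your contexts at any time; the `*` marks the currently active one:
+
+   ```bash
+   kubectl config get-contexts
+   ```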
-1. Test if you are able to access your newly created AKS cluster.
+
+### Install NGINX Plus Ingress Controller to second cluster
+
+
+
+In this section, similar to how you installed NGINX Plus Ingress Controller in the first AKS cluster using manifest files, you will install it on the second AKS cluster.
+
+1. Make sure your current context is pointing to the second AKS cluster and the cluster is running. Run the below commands to check both.
+
```bash
+ # Set context to 2nd cluster(n4a-aks2)
+ kubectl config use-context n4a-aks2
+
# Get Nodes in the target kubernetes cluster
kubectl get nodes
```
+
```bash
- ###Sample Output###
- NAME STATUS ROLES AGE VERSION
- aks-nodepool1-76910942-vmss000000 Ready agent 9m23s v1.27.3
- aks-nodepool1-76910942-vmss000001 Ready agent 9m32s v1.27.3
- aks-nodepool1-76910942-vmss000002 Ready agent 9m30s v1.27.3
+ ##Sample Output##
+ Switched to context "n4a-aks2".
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-29147198-vmss000000 Ready agent 21h v1.27.9
+ aks-nodepool1-29147198-vmss000001 Ready agent 21h v1.27.9
+ aks-nodepool1-29147198-vmss000002 Ready agent 21h v1.27.9
+ aks-nodepool1-29147198-vmss000003 Ready agent 21h v1.27.9
```
-1. Finally to stop a running AKS cluster use this command.
+1. Navigate to the `/kubernetes-ingress/deployments` directory within the `/labs` directory:
+
+ ```bash
+ cd /nginx-azure-workshops/labs/kubernetes-ingress/deployments
+ ```
+
+1. Create necessary Kubernetes objects needed for Ingress Controller:
+
```bash
- MY_RESOURCEGROUP=s.dutta
- MY_AKS=aks-shouvik
+ # Create namespace and a service account
+ kubectl apply -f common/ns-and-sa.yaml
+
+ # Create cluster role and cluster role bindings
+ kubectl apply -f rbac/rbac.yaml
+
+ # Create default server secret with TLS certificate and a key
+ kubectl apply -f ../examples/shared-examples/default-server-secret/default-server-secret.yaml
- az aks stop --resource-group $MY_RESOURCEGROUP --name $MY_AKS
+ # Create config map
+ kubectl apply -f common/nginx-config.yaml
+
+ # Create IngressClass resource
+ kubectl apply -f common/ingress-class.yaml
+
+ # Create CRDs
+ kubectl apply -f common/crds/k8s.nginx.org_virtualservers.yaml
+ kubectl apply -f common/crds/k8s.nginx.org_virtualserverroutes.yaml
+ kubectl apply -f common/crds/k8s.nginx.org_transportservers.yaml
+ kubectl apply -f common/crds/k8s.nginx.org_policies.yaml
+
+ # Create GlobalConfiguration resource
+ kubectl apply -f common/crds/k8s.nginx.org_globalconfigurations.yaml
```
-1. To start an already deployed AKS cluster use this command.
```bash
- MY_RESOURCEGROUP=s.dutta
- MY_AKS=aks-shouvik
+ ##Sample Output##
+ namespace/nginx-ingress created
+ serviceaccount/nginx-ingress created
+ clusterrole.rbac.authorization.k8s.io/nginx-ingress created
+ clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress created
+ secret/default-server-secret created
+ configmap/nginx-config created
+ ingressclass.networking.k8s.io/nginx created
+ customresourcedefinition.apiextensions.k8s.io/virtualservers.k8s.nginx.org created
+ customresourcedefinition.apiextensions.k8s.io/virtualserverroutes.k8s.nginx.org created
+ customresourcedefinition.apiextensions.k8s.io/transportservers.k8s.nginx.org created
+ customresourcedefinition.apiextensions.k8s.io/policies.k8s.nginx.org created
+ customresourcedefinition.apiextensions.k8s.io/globalconfigurations.k8s.nginx.org created
+ ```
+
+1. Use the same JWT file that you used in the first cluster to create a Kubernetes Secret named `regcred`, of type `docker-registry`.
+
+1. Navigate back to the `/labs` directory and make sure to export the contents of the JWT file to an environment variable.
+
+ ```bash
+ cd /nginx-azure-workshops/labs
+
+ export JWT=$(cat lab3/nginx-repo.jwt)
+ ```
- az aks start --resource-group $MY_RESOURCEGROUP --name $MY_AKS
+ ```bash
+ # Check $JWT
+ echo $JWT
```
-## Create an Azure Container Registry (ACR)
+1. Create a Kubernetes `docker-registry` Secret on your second cluster, similar to how you did in the first cluster.
-1. Create a container registry using the `az acr create` command. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters
```bash
- MY_RESOURCEGROUP=s.dutta
- MY_ACR=acrshouvik
-
- az acr create \
- --resource-group $MY_RESOURCEGROUP \
- --name $MY_ACR \
- --sku Basic
+ kubectl create secret docker-registry regcred \
+ --docker-server=private-registry.nginx.com \
+ --docker-username=$JWT \
+ --docker-password=none \
+ -n nginx-ingress
```
-2. From the output of the `az acr create` command, make a note of the `loginServer`. The value of `loginServer` key is the fully qualified registry name. In our example the registry name is `acrshouvik` and the login server name is `acrshouvik.azurecr.io`.
+1. Confirm the Secret was created successfully by running:
-3. Login to the registry using below command. Make sure your local Docker daemon is up and running.
```bash
- MY_ACR=acrshouvik
+ kubectl get secret regcred -n nginx-ingress -o yaml
+ ```
- az acr login --name $MY_ACR
+ ```bash
+ ##Sample Output##
+ apiVersion: v1
+ data:
+ .dockerconfigjson:
+ kind: Secret
+ metadata:
+ creationTimestamp: "2024-04-16T19:21:09Z"
+ name: regcred
+ namespace: nginx-ingress
+ resourceVersion: "5838852"
+ uid: 30c60523-6b89-41b3-84d8-d22ec60d30a5
+ type: kubernetes.io/dockerconfigjson
```
- At the end of the output you should see `Login Succeeded`!
-### Test access to your Azure ACR
+1. Once you have created the `regcred` kubernetes secret, you are ready to deploy the Ingress Controller as a Deployment within the second cluster.
-We can quickly test the ability to push images to our Private ACR from our client machine.
+1. Deploy NGINX Plus Ingress Controller as a Deployment using the same manifest file that you used with the first cluster.
-1. If you do not have a test container image to push to ACR, you can use a simple container for testing, e.g.[nginxinc/ingress-demo](https://hub.docker.com/r/nginxinc/ingress-demo). You will use this same container for the lab exercises.
+ ```bash
+ kubectl apply -f lab3/nginx-plus-ingress.yaml
+ ```
```bash
- az acr import --name $MY_ACR --source docker.io/nginxinc/ingress-demo:latest --image nginxinc/ingress-demo:v1
+ ##Sample Output##
+ deployment.apps/nginx-ingress created
```
- The above command pulls the `nginxinc/ingress-demo` image from docker hub and pushes it to Azure ACR.
-2. Check if the image was successfully pushed to ACR using the azure cli command below:
+### Check your NGINX Ingress Controller within second cluster
+
+1. Verify the NGINX Plus Ingress controller is up and running correctly in the Kubernetes cluster:
```bash
- MY_ACR=acrshouvik
- az acr repository list --name $MY_ACR --output table
+ kubectl get pods -n nginx-ingress
```
+
```bash
- ###Sample Output###
- Result
- ---------------------
- nginxinc/ingress-demo
+ ##Sample Output##
+ NAME READY STATUS RESTARTS AGE
+ nginx-ingress-5764ddfd78-ldqcs 1/1 Running 0 17s
```
-### Attach an Azure Container Registry (ACR) to Azure Kubernetes cluster (AKS)
+   **Note**: You must use the kubectl `-n` namespace flag, followed by the namespace name, to see pods that are not in the default namespace.
+
+2. Instead of remembering the unique pod name, `nginx-ingress-xxxxxx-yyyyy`, you can store the Ingress Controller pod name into the `$AKS2_NIC` variable to be used throughout the lab.
+
+ >**Note:** This variable is stored for the duration of the Terminal session, and so if you close the Terminal it will be lost. At any time you can refer back to this step to create the `$AKS2_NIC` variable again.
-1. You will attach the newly created ACR to both AKS clusters. This will enable you to pull private images within AKS clusters directly from your ACR. Run below command to attach ACR to 1st AKS cluster:
```bash
- MY_RESOURCEGROUP=s.dutta
- MY_AKS=aks-shouvik # first cluster
- MY_ACR=acrshouvik
+ export AKS2_NIC=$(kubectl get pods -n nginx-ingress -o jsonpath='{.items[0].metadata.name}')
- az aks update -n $MY_AKS -g $MY_RESOURCEGROUP --attach-acr $MY_ACR
```
-1. Change the $MY_AKS environment variable, so you can attach your ACR to your second Cluster:
- ```bash
- MY_RESOURCEGROUP=s.dutta
- MY_AKS=aks2-shouvik # change to second cluster
- MY_ACR=acrshouvik
+ Verify the variable is set correctly.
- az aks update -n $MY_AKS -g $MY_RESOURCEGROUP --attach-acr $MY_ACR
+ ```bash
+ echo $AKS2_NIC
```
- **NOTE:** You need the Owner, Azure account administrator, or Azure co-administrator role on your Azure subscription. To avoid needing one of these roles, you can instead use an existing managed identity to authenticate ACR from AKS. See [references](#references) for more details.
+   **Note:** If this command doesn't show the name of the pod, run the previous command again.
+### Test Access to the Nginx Plus Ingress Dashboard within second cluster
-## Pulling NGINX Plus Ingress Controller Image using F5 Private Registry
+Just a quick test: is your Nginx Plus Ingress Controller running, and can you see the Dashboard? Let's try it:
-<< can we change the following NIC Pull process to use the JWT token instead ?? >>
+1. Using Kubernetes Port-Forward, connect to the $AKS2_NIC pod:
-Yes, plz change
+ ```bash
+ kubectl port-forward $AKS2_NIC -n nginx-ingress 9000:9000
-1. For NGINX Ingress Controller, you must have the NGINX Ingress Controller subscription – download the NGINX Plus Ingress Controller (per instance) certificate (nginx-repo.crt) and the key (nginx-repo.key) from [MyF5](https://my.f5.com/). You can also request for a 30-day trial key from [here](https://www.nginx.com/free-trial-connectivity-stack-kubernetes/).
-
-2. Once you have the certificate and key, you need to configure the Docker environment to use certificate-based client-server authentication with F5 private container registry `private-registry.nginx.com`.
-To do so create a `private-registry.nginx.com` directory under below paths based on your operating system. (See [references](#references) section for more details)
- - **linux** : `/etc/docker/certs.d`
- - **mac** : `~/.docker/certs.d`
- - **windows** : `~/.docker/certs.d`
-
-3. Copy your `nginx-repo.crt` and `nginx-repo.key` file in the newly created directory.
- - Below are the commands for mac/windows based systems
- ```bash
- mkdir -p ~/.docker/certs.d/private-registry.nginx.com
- cp nginx-repo.crt ~/.docker/certs.d/private-registry.nginx.com/client.cert
- cp nginx-repo.key ~/.docker/certs.d/private-registry.nginx.com/client.key
- ```
-
-4. ***Optional** Step only for Mac and Windows system
- - Restart Docker Desktop so that it copies the `~/.docker/certs.d` directory from your Mac or Windows system to the `/etc/docker/certs.d` directory on **Moby** (the Docker Desktop `xhyve` virtual machine).
-
-5. Once Docker Desktop has restarted, run below command to pull the NGINX Plus Ingress Controller image from F5 private container registry.
- ```bash
- docker pull private-registry.nginx.com/nginx-ic/nginx-plus-ingress:3.2.1-alpine
- ```
- >**Note**: At the time of this writing `3.2.1-alpine` is the latest NGINX Plus Ingress version that is available. Please feel free to use the latest version of NGINX Plus Ingress Controller. Look into [references](#references) for the latest Ingress images.
+ ```
-6. Set below variables to tag and push image to Azure ACR
- ```bash
- MY_ACR=acrshouvik
- MY_REPO=nginxinc/nginx-plus-ingress
- MY_TAG=3.2.1-alpine
- MY_IMAGE_ID=$(docker images private-registry.nginx.com/nginx-ic/nginx-plus-ingress:$MY_TAG --format "{{.ID}}")
- ```
- Check all variables have been set properly by running below command:
- ```bash
- set | grep MY_
- ```
+1. Open your browser to http://localhost:9000/dashboard.html.
+
+   You should see the Nginx Plus Dashboard. This dashboard will show more metrics as you progress through the workshop.
+
+ Type `Ctrl+C` within your terminal to stop the Port-Forward when you are finished.
+
+### Expose NGINX Plus Ingress Controller Dashboard using Virtual Server
+
+In this section, you are going to expose the NGINX Plus Dashboard to monitor both NGINX Ingress Controller as well as your backend applications as Upstreams. This is a great Plus feature to allow you to watch and triage any potential issues with NGINX Plus Ingress controller as well as any issues with your backend applications in real time.
+
+You will deploy a `Service` and a `VirtualServer` resource to provide access to the NGINX Plus Dashboard for live monitoring. NGINX Ingress [`VirtualServer`](https://docs.nginx.com/nginx-ingress-controller/configuration/virtualserver-and-virtualserverroute-resources/) is a [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) used by NGINX to configure NGINX Server and Location blocks for NGINX configurations.
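+
+If you are curious which fields a `VirtualServer` supports, the CRDs you installed earlier publish a schema you can browse with `kubectl explain` (a quick sketch; `vs` is the short name for the VirtualServer resource):
+
+```bash
+kubectl explain vs.spec | head -25
+```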
+
+1. Switch your context to point to the first AKS cluster using the below command:
-7. After setting the variables, tag the pulled NGINX Plus Ingress image using below command
- ```bash
- docker tag $MY_IMAGE_ID $MY_ACR.azurecr.io/$MY_REPO:$MY_TAG
- ```
-8. Login to the ACR registry using below command.
```bash
- az acr login --name $MY_ACR
+ # Set context to 1st cluster(n4a-aks1)
+ kubectl config use-context n4a-aks1
```
-9. Push your tagged image to ACR registry
```bash
- docker push $MY_ACR.azurecr.io/$MY_REPO:$MY_TAG
+ ##Sample Output##
+ Switched to context "n4a-aks1".
```
-10. Once pushed you can check the image by running below command
- ```bash
- az acr repository list --name $MY_ACR --output table
- ```
+1. Inspect the `lab3/dashboard-vs.yaml` manifest. This will deploy a `Service` and a `VirtualServer` resource that will be used to expose the NGINX Plus Ingress Controller Dashboard outside the cluster, so you can see what it is doing.
-<< we need the output here, to show the ingress-demo and Nic images exist >>
+ ```bash
+ kubectl apply -f lab3/dashboard-vs.yaml
+ ```
-
+ ```bash
+ ##Sample Output##
+ service/dashboard-svc created
+ virtualserver.k8s.nginx.org/dashboard-vs created
+ ```
-## Deploy Nginx Plus Ingress Controller to both clusters
+1. Verify the Service and Virtual Server were created in the first cluster and are Valid:
-< NIC deployment steps here - use nodeport-static Manifest at the end >
+ ```bash
+ kubectl get svc,vs -n nginx-ingress
+ ```
-## Introduction
+ ```bash
+ ##Sample Output##
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ service/dashboard-svc ClusterIP 10.0.197.220 9000/TCP 4m55s
-In this section, you will be installing NGINX Ingress Controller in both AKS clusters using manifest files. You will be then checking and verifying the Ingress Controller is running.
+ NAME STATE HOST IP PORTS AGE
+ virtualserver.k8s.nginx.org/dashboard-vs Valid dashboard.example.com 4m54s
+ ```
-Finally, you are going to use the NGINX Plus Dashboard to monitor both NGINX Ingress Controller as well as our backend applications. This is a great feature to allow you to watch and triage any potential issues with NGINX Plus Ingress controller as well as any issues with your backend applications.
+1. Now change the context to point to the second AKS cluster, and apply the `dashboard-vs.yaml` file in similar fashion to expose the second cluster's NGINX Plus Ingress Controller Dashboard, using the below commands:
-
+ ```bash
+ # Set context to 2nd cluster(n4a-aks2)
+ kubectl config use-context n4a-aks2
-## Learning Objectives
+ kubectl apply -f lab3/dashboard-vs.yaml
+ ```
-- Install NGINX Ingress Controller using manifest files
-- Check your NGINX Ingress Controller
-- Deploy the NGINX Ingress Controller Dashboard
-- (Optional Section): Look "under the hood" of NGINX Ingress Controller
+ ```bash
+ ##Sample Output##
+ Switched to context "n4a-aks2".
+ service/dashboard-svc created
+ virtualserver.k8s.nginx.org/dashboard-vs created
+ ```
-## Install NGINX Ingress Controller using Manifest files
+1. Verify the Service and Virtual Server were created in the second cluster and are Valid:
-1. Make sure your AKS cluster is running. If it is in stopped state then you can start it using below command.
```bash
- MY_RESOURCEGROUP=s.dutta
- MY_AKS=aks-shouvik
-
- az aks start --resource-group $MY_RESOURCEGROUP --name $MY_AKS
+ kubectl get svc,vs -n nginx-ingress
```
- >**Note**: The FQDN for API server for AKS might change on restart of the cluster which would result in errors running `kubectl` commands from your workstation. To update the FQDN re-import the credentials again using below command. This command would prompt about overwriting old objects. Enter "y" to overwrite the existing objects.
- >```bash
- >az aks get-credentials --resource-group $MY_RESOURCEGROUP --name $MY_AKS
- >```
- >```bash
- >###Sample Output###
- >A different object named aks-shouvik already exists in your kubeconfig file.
- >Overwrite? (y/n): y
- >A different object named clusterUser_s.dutta_aks-shouvik already exists in your kubeconfig file.
- >Overwrite? (y/n): y
- >Merged "aks-shouvik" as current context in /Users/shodutta/.kube/config
- >```
-2. Clone the Nginx Ingress Controller repo and navigate into the /deployments folder to make it your working directory:
```bash
- git clone https://github.com/nginxinc/kubernetes-ingress.git --branch v3.2.1
- cd kubernetes-ingress/deployments
+ ##Sample Output##
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ service/dashboard-svc ClusterIP 10.0.145.255 9000/TCP 110s
+
+ NAME STATE HOST IP PORTS AGE
+ virtualserver.k8s.nginx.org/dashboard-vs Valid dashboard.example.com 110s
```
-3. Create a namespace and a service account for the Ingress Controller
- ```bash
- kubectl apply -f common/ns-and-sa.yaml
- ```
-4. Create a cluster role and cluster role binding for the service account
- ```bash
- kubectl apply -f rbac/rbac.yaml
- ```
+### Expose your Nginx Ingress Controller with NodePort
-5. Create Common Resources:
- 1. Create a secret with TLS certificate and a key for the default server in NGINX.
- ```bash
- cd ..
- kubectl apply -f examples/shared-examples/default-server-secret/default-server-secret.yaml
- cd deployments
- ```
- 2. Create a config map for customizing NGINX configuration.
- ```bash
- kubectl apply -f common/nginx-config.yaml
- ```
- 3. Create an IngressClass resource.
-
- >**Note:** If you would like to set the NGINX Ingress Controller as the default one, uncomment the annotation `ingressclass.kubernetes.io/is-default-class` within the below file.
- ```bash
- kubectl apply -f common/ingress-class.yaml
- ```
-
-6. Create Custom Resources
- 1. Create custom resource definitions for VirtualServer and VirtualServerRoute, TransportServer and Policy resources:
- ```bash
- kubectl apply -f common/crds/k8s.nginx.org_virtualservers.yaml
- kubectl apply -f common/crds/k8s.nginx.org_virtualserverroutes.yaml
- kubectl apply -f common/crds/k8s.nginx.org_transportservers.yaml
- kubectl apply -f common/crds/k8s.nginx.org_policies.yaml
- ```
-
- 2. Create a custom resource for GlobalConfiguration resource:
- ```bash
- kubectl apply -f common/crds/k8s.nginx.org_globalconfigurations.yaml
- ```
-7. Deploy the Ingress Controller as a Deployment:
+1. Inspect the `lab3/nodeport-static.yaml` manifest. This is a NodePort Service definition that will open high-numbered ports on the Kubernetes nodes, to expose several Services that are running on the Nginx Ingress. The NodePorts are intentionally defined as static, because you will be using these port numbers with Nginx for Azure, and you don't want them to change. (Note: If you use ephemeral NodePorts, you will see **HTTP 502 Errors** when they change!) We are using the following table to expose different Services on different Ports:
- The sample deployment file(`nginx-plus-ingress.yaml`) can be found within `deployment` sub-directory within your present working directory.
+   | Service Port | External NodePort | Name      |
+   |:------------:|:-----------------:|:---------:|
+   | 80           | 32080             | http      |
+   | 443          | 32443             | https     |
+   | 9000         | 32090             | dashboard |
- Highlighted below are some of the parameters that would be changed in the sample `nginx-plus-ingress.yaml` file.
- - Change Image Pull to Private Repo
- - Enable Prometheus
- - Add port and name for dashboard
- - Change Dashboard Port to 9000
- - Allow all IPs to access dashboard
- - Make use of default TLS certificate
- - Enable Global Configuration for Transport Server
-
-
+1. Deploy a NodePort Service within the first cluster to expose the NGINX Plus Ingress Controller outside the cluster.
- Navigate back to the Workshop's `labs` directory
- ```bash
- cd ../../labs
- ```
-
- Observe the `lab3/nginx-plus-ingress.yaml` looking at below details:
- - On line #36, the `nginx-plus-ingress:3.2.1` placeholder is changed to the workshop image that you pushed to your private ACR registry as instructed in a previous step.
-
- >**Note:** Make sure you replace the image with the appropriate image that you pushed in your ACR registry.
- - On lines #50-51, we have added TCP port 9000 for the Plus Dashboard.
- - On lines #96-97, we have enabled the Dashboard and set the IP access controls to the Dashboard.
- - On lines #16-19, we have enabled Prometheus related annotations.
- - On line #106, we have enabled Prometheus to collect metrics from the NGINX Plus stats API.
- - On line #95, uncomment to make use of default TLS secret.
- - On line #109, uncomment to enable the use of Global Configurations.
-
- Now deploy NGINX Ingress Controller as a Deployment using your updated manifest file.
- ```bash
- kubectl apply -f lab3/nginx-plus-ingress.yaml
- ```
+ ```bash
+ # Set context to 1st cluster(n4a-aks1)
+ kubectl config use-context n4a-aks1
-## Check your NGINX Ingress Controller
+ kubectl apply -f lab3/nodeport-static.yaml
+ ```
-1. Verify the NGINX Plus Ingress controller is up and running correctly in the Kubernetes cluster:
+1. Verify the NodePort Service was created within the first cluster:
```bash
- kubectl get pods -n nginx-ingress
+ kubectl get svc nginx-ingress -n nginx-ingress
```
```bash
- ###Sample Output###
- NAME READY STATUS RESTARTS AGE
- nginx-ingress-5764ddfd78-ldqcs 1/1 Running 0 17s
+ ##Sample Output##
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+   nginx-ingress   NodePort   10.0.211.17   <none>        80:32080/TCP,443:32443/TCP,9000:32090/TCP   14s
```
- >**Note**: You must use the `kubectl` "`-n`", namespace switch, followed by namespace name, to see pods that are not in the default namespace.
+1. Similarly, deploy a NodePort Service within the second cluster to expose the NGINX Plus Ingress Controller outside the cluster.
-2. Instead of remembering the unique pod name, `nginx-ingress-xxxxxx-yyyyy`, we can store the Ingress Controller pod name into the `$NIC` variable to be used throughout the lab.
+ ```bash
+ # Set context to 2nd cluster(n4a-aks2)
+ kubectl config use-context n4a-aks2
+
+ kubectl apply -f lab3/nodeport-static.yaml
+ ```
- >**Note:** This variable is stored for the duration of the terminal session, and so if you close the terminal it will be lost. At any time you can refer back to this step to save the `$NIC` variable again.
+1. Verify the NodePort Service was created within the second cluster:
```bash
- export NIC=$(kubectl get pods -n nginx-ingress -o jsonpath='{.items[0].metadata.name}')
+ kubectl get svc nginx-ingress -n nginx-ingress
```
- Verify the variable is set correctly.
```bash
- echo $NIC
+ ##Sample Output##
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+   nginx-ingress   NodePort   10.0.14.247   <none>        80:32080/TCP,443:32443/TCP,9000:32090/TCP   11s
```
- >**Note:** If this command doesn't show the name of the pod then run the previous command again.
-## Deploy the NGINX Ingress Controller Dashboard
+   Note there are THREE NodePorts open to the Ingress Controller - port 80 for HTTP traffic, port 443 for HTTPS traffic, and port 9000 for the Plus Dashboard.
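+
+   If you want to see just the port mappings, an optional jsonpath one-liner (an assumption, not a lab file):
+
+   ```bash
+   # Print "name: nodePort" for each port on the nginx-ingress Service
+   kubectl get svc nginx-ingress -n nginx-ingress \
+     -o jsonpath='{range .spec.ports[*]}{.name}{": "}{.nodePort}{"\n"}{end}'
+   ```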
-We will deploy a `Service` and a `VirtualServer` resource to provide access to the NGINX Plus Dashboard for live monitoring. NGINX Ingress [`VirtualServer`](https://docs.nginx.com/nginx-ingress-controller/configuration/virtualserver-and-virtualserverroute-resources/) is a [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) used by NGINX to configure NGINX Server and Location blocks for NGINX configurations.
+**QUESTION?** You are probably asking: why not use the AKS/Azure LoadBalancer Service to expose the Ingress Controllers? It would automatically give you an External-IP, right? You can certainly do that. But if you do, you will need two additional Public external IP addresses, one for each NIC, that you have to manage. Instead, you will be using your Nginx for Azure instance for your Public External-IP, thereby `simplifying your Architecture`, running on Nginx! Nginx will use Host, Port, and Path-based routing to forward the requests to the appropriate backends, including VMs, Docker containers, both AKS clusters, Ingress Controllers, Services, and Pods. You will do ALL of this in the next few labs.
+## Expose the NGINX Plus Ingress Dashboards with Nginx for Azure
-1. In the `lab3` folder, apply the `dashboard-vs.yaml` file to deploy a `Service` and a `VirtualServer` resource to provide access to the NGINX Plus Dashboard for live monitoring:
+Being able to see your NGINX Plus Ingress Dashboards remotely will be a big help in observing your traffic metrics and patterns within each AKS cluster. It will require only two Nginx for Azure configuration items for each cluster - a new Nginx Server block and a new Upstream block.
- ```bash
- kubectl apply -f lab3/dashboard-vs.yaml
- ```
- ```bash
- ###Sample output###
- service/dashboard-svc created
- virtualserver.k8s.nginx.org/dashboard-vs created
- ```
+This will be the logical network diagram for accessing the Nginx Ingress Dashboards.
-## Deploy the Nginx CAFE Demo app
+So why use ports 9001 and 9002 for the NIC Dashboards? Would this work on port 80/443? Yes, it would, but separating this type of monitoring traffic from production traffic is generally considered a Best Practice. It also shows you that Nginx for Azure can use any port for Port-based routing; it is not limited to just ports 80 and 443 like some cloud load balancers.
-In this section, you will deploy the "Cafe Nginx" Ingress Demo, which represents a Coffee Shop website with Coffee and Tea applications. You will be adding the following components to your Kubernetes Cluster: Coffee and Tea pods, matching coffee and tea services, and a Cafe VirtualServer.
+
-The Cafe application that you will deploy looks like the following diagram below. Coffee and Tea pods and services, with NGINX Ingress routing the traffic for /coffee and /tea routes, using the `cafe.example.com` Hostname. There is also a hidden third service - more on that later!
+1. First, create the Upstream server block for AKS cluster #1. You will need the AKS1 Node Names from the Node Pool. Make sure your Kube Context is n4a-aks1:
-< cafe diagram here >
+ ```bash
+ kubectl config use-context n4a-aks1
+ kubectl get nodes
+ ```
-1. Inspect the `lab3/cafe.yaml` manifest. You will see we are deploying 3 replicas of each the coffee and tea Pods, and create a matching Service for each.
+ ```bash
+ ##Sample Output##
+ Switched to context "n4a-aks1".
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-19055428-vmss000003 Ready agent 4h32m v1.27.9
+ aks-nodepool1-19055428-vmss000004 Ready agent 4h31m v1.27.9
+ aks-nodepool1-19055428-vmss000005 Ready agent 4h32m v1.27.9
+ ```
-1. Deploy the Cafe application by applying these two manifests:
+1. Use the 3 Node Names as your Upstream Servers, and add `:32090` as your port number. This matches the NodePort-Static Service that you configured in the previous section.
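+
+   If you'd rather not copy the Node Names by hand, here is a small optional helper (an assumption, not part of the lab files) that prints ready-made upstream `server` lines from your current Kube Context:
+
+   ```bash
+   # Print one "server <node>:32090;" line per AKS node
+   kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' \
+     | awk '{printf "    server %s:32090;\n", $1}'
+   ```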
-```bash
-kubectl apply -f lab3/cafe.yaml
-kubectl apply -f lab3/cafe-virtualserver.yaml
+1. Open the Azure portal in your browser and navigate to your resource group. Click on your NGINX for Azure resource (nginx4a), which should open the Overview section of the resource. From the left pane, click `NGINX Configuration` under Settings.
-```
+1. Click on `+ New File` to create a new Nginx config file. Name the new file `/etc/nginx/conf.d/nic1-dashboard-upstreams.conf`. You can use the example provided; just edit the Node Names to match your cluster:
-```bash
-###Sample output###
-deployment.apps/coffee created
-service/coffee-svc created
-deployment.apps/tea created
-service/tea-svc created
-virtualserver.k8s.nginx.org/cafe-vs created
+ ```nginx
+ # Nginx 4 Azure to NIC, AKS Node for Upstreams
+ # Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+ #
+ # nginx ingress dashboard
+ #
+ upstream nic1_dashboard {
+ zone nic1_dashboard 256k;
+
+ # from nginx-ingress NodePort Service / aks1 Node IPs
+ server aks-nodepool1-19055428-vmss000003:32090; #aks1 node1
+ server aks-nodepool1-19055428-vmss000004:32090; #aks1 node2
+ server aks-nodepool1-19055428-vmss000005:32090; #aks1 node3
-```
+ keepalive 8;
-1. Check that all pods are running, you should see three Coffee and three Tea pods:
+ }
+ ```
-```bash
-kubectl get pods
-###Sample output###
-NAME READY STATUS RESTARTS AGE
-coffee-56b7b9b46f-9ks7w 1/1 Running 0 28s
-coffee-56b7b9b46f-mp9gs 1/1 Running 0 28s
-coffee-56b7b9b46f-v7xxp 1/1 Running 0 28s
-tea-568647dfc7-54r7k 1/1 Running 0 27s
-tea-568647dfc7-9h75w 1/1 Running 0 27s
-tea-568647dfc7-zqtzq 1/1 Running 0 27s
+1. Click the `Submit` Button above the Editor. Nginx will validate your configuration and, if successful, reload Nginx with it. If you receive an error, you will need to fix it before you proceed.
-```
+1. Repeat Steps 1-5, but for your Second AKS Cluster:
-1. In AKS1 cluster, you will run only 2 Replicas of the coffee and tea pods, so Scale both deployments down:
+ Using the NGINX for Azure **NGINX Configuration** pane, create a new Nginx config file called `/etc/nginx/conf.d/nic2-dashboard-upstreams.conf`. You can use the example provided, just edit the Node Names to match your cluster:
-```bash
-kubectl scale deployment coffee --replicas=2
-kubectl scale deployment tea --replicas=2
+ ```nginx
+ # Nginx 4 Azure to NIC, AKS Node for Upstreams
+ # Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+ #
+ # nginx ingress dashboard
+ #
+ upstream nic2_dashboard {
+ zone nic2_dashboard 256k;
+
+ # from nginx-ingress NodePort Service / aks Node IPs
+ server aks-nodepool1-29147198-vmss000000:32090; #aks2 node1
+ server aks-nodepool1-29147198-vmss000001:32090; #aks2 node2
+ server aks-nodepool1-29147198-vmss000002:32090; #aks2 node3
+ server aks-nodepool1-29147198-vmss000003:32090; #aks2 node4
-```
+ keepalive 8;
-Now there should be only 2 of each running:
+ }
+ ```
+
+ > Notice, there are 3 upstreams for Cluster1, and 4 upstreams for Cluster2, matching the Node count for each cluster. This was intentional so you can see the differences.
+
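+   To confirm the Node count difference yourself, an optional check using kubectl's `--context` flag:
+
+   ```bash
+   # Count the Nodes in each cluster - expect 3 in n4a-aks1 and 4 in n4a-aks2
+   kubectl --context n4a-aks1 get nodes --no-headers | wc -l
+   kubectl --context n4a-aks2 get nodes --no-headers | wc -l
+   ```
+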
+1. Again using the NGINX for Azure **NGINX Configuration** pane, create a new file called `/etc/nginx/conf.d/nic1-dashboard.conf`, using the example provided; just copy and paste the config content. This is the new Nginx Server block, with a hostname, port number 9001, and the proxy_pass directive needed to route requests for the Dashboard to the AKS Cluster1 NodePort where the Ingress Dashboard is listening:
+
+ ```nginx
+ # N4A NIC Dashboard config for AKS1
+ #
+ server {
+ listen 9001;
+ server_name dashboard.example.com;
+ access_log off;
+
+ location = /dashboard.html {
+ #return 200 "You have reached /nic1dashboard.";
+
+ proxy_pass http://nic1_dashboard;
+
+ }
+
+ location /api/ {
+
+ proxy_pass http://nic1_dashboard;
+ }
+
+ }
+
+ ```
-```bash
-kubectl get pods
-###Sample output###
-NAME READY STATUS RESTARTS AGE
-coffee-56b7b9b46f-9ks7w 1/1 Running 0 28s
-coffee-56b7b9b46f-mp9gs 1/1 Running 0 28s
-tea-568647dfc7-54r7k 1/1 Running 0 27s
-tea-568647dfc7-9h75w 1/1 Running 0 27s
+1. Click the `Submit` Button above the Editor to save and reload your new Nginx for Azure configuration.
-```
+1. Repeat Steps 7-8, for the NIC Dashboard in AKS2:
-1. Check that the Cafe `VirtualServer`, **cafe-vs**, is running:
+   Using the NGINX for Azure **NGINX Configuration** pane, create a new file called `/etc/nginx/conf.d/nic2-dashboard.conf`, using the example provided; just copy and paste the config content. This is the second new Nginx Server block, with the same hostname, but using `port number 9002`, and the proxy_pass directives needed to route requests for the Dashboard to the AKS Cluster2 NodePort where the Ingress Dashboard is listening:
-```bash
-kubectl get virtualserver cafe-vs
+ ```nginx
+ # N4A NIC Dashboard config for AKS2
+ #
+ server {
+ listen 9002;
+ server_name dashboard.example.com;
+ access_log off;
+
+ location = /dashboard.html {
+ #return 200 "You have reached /nic2dashboard.";
-```
-```bash
-###Sample output###
-NAME STATE HOST IP PORTS AGE
-cafe-vs Valid cafe.example.com 4m6s
+ proxy_pass http://nic2_dashboard;
-```
+ }
-**Note:** The `STATE` should be `Valid`. If it is not, then there is an issue with your yaml manifest file (cafe-vs.yaml). You could also use `kubectl describe vs cafe-vs` to get more information about the VirtualServer you just created.
+ location /api/ {
+
+ proxy_pass http://nic2_dashboard;
+ }
-### Deploy the Nginx Ingress Dashboard
+ }
-1. Inspect the `lab3/dashboard-vs` manifest. This will create an `nginx-ingress` Service and a VirtualServer that will expose the Nginx Ingress Controller's Plus Dashboard outside the cluster, so you can see what Nginx Ingress Controller is doing.
+ ```
-```bash
-kubectl apply -f lab3/dashboard-vs.yaml
+1. Click the `Submit` Button above the Editor to save and reload your new Nginx for Azure configuration.
-```
+   You have just configured `Port-based routing with NGINX for Azure`, sending traffic on port 9001 to the AKS1 NIC Dashboard, and traffic on port 9002 to the AKS2 NIC Dashboard.
-1. Test access to the NIC's Plus Dashboard. Using Kubernetes Port-Forward utility, connect to the NIC pod in cluster #1.
+1. Using the Azure CLI, add ports `9001-9002` to the NSG (`n4a-nsg`) for your VNET (`n4a-vnet`):
-```bash
-# Set Kube Context to cluster 1:
-kubectl config use-context aks1-
+ ```bash
+ ## Set environment variables
+ export MY_RESOURCEGROUP=s.dutta-workshop
+ export MY_PUBLICIP=$(curl ipinfo.io/ip)
+ ```
-```
+ ```bash
+ az network nsg rule create \
+ --resource-group $MY_RESOURCEGROUP \
+ --nsg-name n4a-nsg \
+ --name NIC_Dashboards \
+ --priority 330 \
+ --source-address-prefix $MY_PUBLICIP \
+ --source-port-range '*' \
+ --destination-address-prefix '*' \
+ --destination-port-range 9001-9002 \
+ --direction Inbound \
+ --access Allow \
+ --protocol Tcp \
+ --description "Allow traffic to NIC Dashboards"
+ ```
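+
+   To confirm the rule from the CLI, an optional check (a sketch, using the same variables):
+
+   ```bash
+   # Show the new NSG rule - ports, source, and access
+   az network nsg rule show \
+     --resource-group $MY_RESOURCEGROUP \
+     --nsg-name n4a-nsg \
+     --name NIC_Dashboards \
+     --query "{ports:destinationPortRange, source:sourceAddressPrefix, access:access}" \
+     --output table
+   ```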
-Use the $NIC Nginx Ingress Controller pod name variable:
+ >**Security Warning!** These Nginx Ingress Dashboards are now exposed to the open Internet, with only your Network Security Group for protection. This is probably fine for a few hours during the Workshop, but do NOT do this in Production, use appropriate Security measures to protect them (not covered in this Workshop).
-Port-forward to the NIC Pod on port 9000:
-```bash
-kubectl port-forward $NIC -n nginx-ingress 9000:9000
-```
+1. Update your local system's `/etc/hosts` file to add `dashboard.example.com`, using the same public IP as your N4A instance.
-Open your local browser to http://localhost:9000/dashboard.html. You should see the Plus dashboard. It should have the `HTTP Zones` cafe.example.com and dashboard.example.com - these are your VirtualServers / Hostnames. If you check the `HTTP Upstreams` tab, it should have 2 coffee and 2 tea pods.
+ ```bash
+ cat /etc/hosts
-When you are done checking out the Dashboard, type `Ctrl+C` to quit the Kubectl Port-Forward.
+ 127.0.0.1 localhost
+ ...
+ # Nginx for Azure testing
+ 11.22.33.44 cafe.example.com dashboard.example.com
+ ...
+ ```
-1. Change your `Kube Context` to your second AKS cluster, and check access to the Dashboard using the steps as above. You should find the exact same output, the Nginx Ingress Plus Dashboard running, with Zones and Upstreams of similar. However, the IP addresses of the Upstreams `WILL` be different between the clusters, because each cluster assigns IPs to it's Pods.
+   where:
+   - `11.22.33.44` is replaced with your `n4a-publicIP` resource IP address.
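+
+   If you prefer to script the edit on Linux/macOS, one possible approach (replace the IP with your own):
+
+   ```bash
+   # Append the workshop hostnames to /etc/hosts (requires sudo)
+   echo "11.22.33.44 cafe.example.com dashboard.example.com" | sudo tee -a /etc/hosts
+   ```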
-1. Optional Exercise: If you want to see both NIC Dashboards at the same time, you can use 2 Terminals, each with a different Kube Context, and different Port-Forward commands. In Terminal#1, try port-forward 9001:9000 for cluster1, and in Terminal#2, try port-forward 9002:9000 for cluster2. Then two browser windows side by side for comparison.
+1. Use Chrome or another browser to test remote access to your NIC Dashboards. Create a new Tab or Window for each Dashboard.
-Try scaling the number of coffee pods in one cluster, and see what happens.
+ http://dashboard.example.com:9001/dashboard.html > AKS1-NIC
-```bash
-kubectl scale deployment coffee --replicas=8
-```
+ http://dashboard.example.com:9002/dashboard.html > AKS2-NIC
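+
+   If you'd like a quick sanity check from the terminal first, a curl sketch (assumes the `/etc/hosts` entry above is in place):
+
+   ```bash
+   # Expect an HTTP 200 response from each Dashboard
+   curl -I http://dashboard.example.com:9001/dashboard.html
+   curl -I http://dashboard.example.com:9002/dashboard.html
+   ```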
-> Pretty cool - Nginx Ingress picks up the new Pods, health-checks them first, and brings them online for load balancing just a few seconds after Kubernetes spins them up. Scale them up and down as you choose, while watching the Dashboard, Nginx will track them accordingly.
+   Bookmark these pages, and leave both of these browser Tabs or Windows open during the Workshop. You will be using them often in the next Lab Exercises, watching what Nginx Ingress is doing in each Cluster.
-### Expose your Nginx Ingress Controller
+ If you are not familiar with the Nginx Plus Dashboard, you can find a link to more information in the References Section.
-1. Inspect the `lab4/nodeport-static.yaml` manifest. This is a NodePort Service defintion that will open high-numbered ports on the Kubernetes nodes, to expose several Services that are running in the cluster. The NodePorts are defined as static, because you will be using these port numbers with N4A, and you don't them to change. We are using the following table to expose different Services on different Ports:
+### (Optional) Managing Both Clusters using Azure CLI
-Service Port | External NodePort | Name
-|:--------:|:------:|:-------:|
-80 | 32080 | http
-443 | 32443 | https
-9000 | 32090 | dashboard
+1. As you are managing multiple Kubernetes clusters, you can easily change between Kubectl Admin Contexts using the `kubectl config use-context` command:
+ ```bash
+ # Set context to 1st cluster(n4a-aks1)
+ kubectl config use-context n4a-aks1
+ ```
-1. Deploy a NodePort Service to expose the Nginx Ingress Controller outside the cluster.
+ ```bash
+ ##Sample Output##
+ Switched to context "n4a-aks1".
+ ```
-```bash
-kubectl apply -f lab3/nodeport-static.yaml
+1. To display all the Kubernetes contexts present in your local `.kube` config file, run the command below.
-```
+ ```bash
+ # List all kubernetes contexts
+ kubectl config get-contexts
+ ```
-1. Verify the NodePort Service was created:
+ ```bash
+ ##Sample Output##
+ CURRENT NAME CLUSTER AUTHINFO NAMESPACE
+ aks-shouvik-fips aks-shouvik-fips clusterUser_s.dutta_aks-shouvik-fips
+ * n4a-aks1 n4a-aks1 clusterUser_s.dutta-workshop_n4a-aks1
+ n4a-aks2 n4a-aks2 clusterUser_s.dutta-workshop_n4a-aks2
+ rancher-desktop rancher-desktop rancher-desktop
+ ```
-```bash
-kubectl get svc nginx-ingress -n nginx-ingress
+1. To display the current Kubernetes context, run the command below.
-```
+ ```bash
+ # List current kubernetes context
+ kubectl config current-context
+ ```
-```bash
-#Sample output
+ ```bash
+ ##Sample Output##
+ n4a-aks1
+ ```
+1. Test if you are able to access your first AKS cluster (`n4a-aks1`).
-```
+ ```bash
+ # Get Nodes in the target kubernetes cluster
+ kubectl get nodes
+ ```
-## Deploy the Nginx CAFE Demo app in the 2nd cluster
+ ```bash
+ ##Sample Output##
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-19055428-vmss000000 Ready agent 46m v1.27.9
+ aks-nodepool1-19055428-vmss000001 Ready agent 46m v1.27.9
+ aks-nodepool1-19055428-vmss000002 Ready agent 46m v1.27.9
+ ```
-1. Repeat the previous section to deploy the CAFE Demo app in your second cluster. Do not Scale the coffee and tea replicas down, leave three of each pod running.
-1. Report the same NodePort deployment, to expose the Nginx Ingress Controller outside the cluster.
+1. Test if you are able to access your second AKS cluster (`n4a-aks2`).
-## Update local DNS
+ ```bash
+ # Set context to 2nd cluster(n4a-aks2)
+ kubectl config use-context n4a-aks2
+ ```
-We will be using FQDN hostnames for the labs, and you will need to update your local computer's `/etc/hosts` file, to use these names with N4A and Nginx Ingress Controller.
+ ```bash
+ # Get Nodes in the target kubernetes cluster
+ kubectl get nodes
+ ```
-Edit your local hosts file, adding the FQDNs as shown below. Use the `External-IP` Address of Nginx for Azure:
+ ```bash
+ ##Sample Output##
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-29147198-vmss000000 Ready agent 27m v1.27.9
+ aks-nodepool1-29147198-vmss000001 Ready agent 27m v1.27.9
+ aks-nodepool1-29147198-vmss000002 Ready agent 27m v1.27.9
+ aks-nodepool1-29147198-vmss000003 Ready agent 27m v1.27.9
+ ```
-```bash
-vi /etc/hosts
+1. Finally, to stop a running AKS cluster, use the command below.
-13.86.100.10 cafe.example.com dashboard.example.com # Added for N4A Workshop
-```
+ ```bash
+   export MY_RESOURCEGROUP=s.dutta-workshop
+ export MY_AKS=n4a-aks1
->**Note:** Both hostnames are mapped to the same N4A External-IP. You will use the NGINX Ingress Controller to route the traffic correctly in the upcoming labs.
-Your N4A External-IP address will be different than the example.
+ az aks stop \
+ --resource-group $MY_RESOURCEGROUP \
+ --name $MY_AKS
+ ```
-## Test Access to Nginx Cafe, and Nginx Ingress Dashboards
+1. To start a stopped AKS cluster, use this command.
+ ```bash
+   export MY_RESOURCEGROUP=s.dutta-workshop
+ export MY_AKS=n4a-aks1
+ az aks start \
+ --resource-group $MY_RESOURCEGROUP \
+ --name $MY_AKS
+ ```
-**This completes the Lab.**
+**This completes Lab3.**
-## References:
+## References:
- [Deploy AKS cluster using Azure CLI](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-cli)
- [Azure CLI command list for AKS](https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest)
-- [Create private container registry using Azure CLI](https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-azure-cli)
-- [Azure CLI command list for ACR](https://learn.microsoft.com/en-us/cli/azure/acr?view=azure-cli-latest)
-- [Authenticate with ACR from AKS cluster](https://learn.microsoft.com/en-us/azure/container-registry/authenticate-kubernetes-options#scenarios)
-- [Pulling NGINX Plus Ingress Controller Image](https://docs.nginx.com/nginx-ingress-controller/installation/pulling-ingress-controller-image)
-- [Add Client Certificate Mac](https://docs.docker.com/desktop/faqs/macfaqs/#add-client-certificates)
-- [Add Client Certificate Windows](https://docs.docker.com/desktop/faqs/windowsfaqs/#how-do-i-add-client-certificates)
-- [Docker Engine Security Documentation](https://docs.docker.com/engine/security/certificates/)
+- [Installing NGINX Plus Ingress Controller Image](https://docs.nginx.com/nginx-ingress-controller/installation/nic-images/using-the-jwt-token-docker-secret/)
- [Latest NGINX Plus Ingress Images](https://docs.nginx.com/nginx-ingress-controller/technical-specifications/#images-with-nginx-plus)
+- [Nginx Live Monitoring Dashboard](https://docs.nginx.com/nginx/admin-guide/monitoring/live-activity-monitoring/)
+- [Nginx Ingress Controller Product](https://docs.nginx.com/nginx-ingress-controller/)
+- [Nginx Ingress Installation - the REAL Nginx](https://docs.nginx.com/nginx-ingress-controller/installation/installing-nic/installation-with-manifests/)
+- [Nginx Blog - Which Ingress AM I RUNNING?](https://www.nginx.com/blog/guide-to-choosing-ingress-controller-part-1-identify-requirements/)
-
### Authors
- Chris Akker - Solutions Architect - Community and Alliances @ F5, Inc.
- Shouvik Dutta - Solutions Architect - Community and Alliances @ F5, Inc.
-- Adam Currier - Solutions Architect - Community and Alliances @ F5, Inc.
-------------
-Navigate to ([Lab5](../lab5/readme.md) | [LabX](../labX/readme.md))
+Navigate to ([Lab4](../lab4/readme.md) | [LabGuide](../readme.md))
diff --git a/ca-notes/aks/cafe/cafe-vs.yaml b/labs/lab4/cafe-vs.yaml
similarity index 76%
rename from ca-notes/aks/cafe/cafe-vs.yaml
rename to labs/lab4/cafe-vs.yaml
index 0d109bd..99ec01a 100644
--- a/ca-notes/aks/cafe/cafe-vs.yaml
+++ b/labs/lab4/cafe-vs.yaml
@@ -5,7 +5,7 @@ kind: VirtualServer
metadata:
name: cafe-vs
spec:
- host: cafe.nginxazure.build
+ host: cafe.example.com
#tls:
#secret: cafe-secret
#redirect:
@@ -15,8 +15,6 @@ spec:
- name: tea
service: tea-svc
port: 80
- slow-start: 20s
- # lb-method: least_time last_byte
healthCheck:
enable: true
path: /tea
@@ -29,8 +27,6 @@ spec:
- name: coffee
service: coffee-svc
port: 80
- slow-start: 20s
- # lb-method: least_time last_byte
healthCheck:
enable: true
path: /coffee
@@ -44,7 +40,7 @@ spec:
- path: /
action:
redirect:
- url: http://cafe.nginxazure.build/coffee
+ url: http://cafe.example.com/coffee
code: 302
- path: /tea
action:
@@ -52,10 +48,10 @@ spec:
- path: /coffee
action:
pass: coffee
- - path: /n4a
+ - path: /workshop
action:
return:
code: 200
type: text/html
- body: "Welcome to Nginx4Azure & NGINX Plus Ingress Controller !!"
+ body: "Welcome to Nginx4Azure Workshop !!"
diff --git a/ca-notes/aks/cafe/cafe.yaml b/labs/lab4/cafe.yaml
similarity index 100%
rename from ca-notes/aks/cafe/cafe.yaml
rename to labs/lab4/cafe.yaml
diff --git a/ca-notes/aks/nic/global-configuration-redis.yaml b/labs/lab4/global-configuration-redis.yaml
similarity index 95%
rename from ca-notes/aks/nic/global-configuration-redis.yaml
rename to labs/lab4/global-configuration-redis.yaml
index d822c4c..0fd4c7c 100644
--- a/ca-notes/aks/nic/global-configuration-redis.yaml
+++ b/labs/lab4/global-configuration-redis.yaml
@@ -1,4 +1,4 @@
-
+# Nginx For Azure
# NIC Global Config manifest for custom TCP ports for Redis
# Chris Akker Jan 2024
#
diff --git a/labs/lab4/media/azure-icon.png b/labs/lab4/media/azure-icon.png
new file mode 100644
index 0000000..51ffec5
Binary files /dev/null and b/labs/lab4/media/azure-icon.png differ
diff --git a/labs/lab4/media/cafe-icon.png b/labs/lab4/media/cafe-icon.png
new file mode 100644
index 0000000..0ec53be
Binary files /dev/null and b/labs/lab4/media/cafe-icon.png differ
diff --git a/labs/lab4/media/lab4_cafe-upstreams-2.png b/labs/lab4/media/lab4_cafe-upstreams-2.png
new file mode 100644
index 0000000..6d3f77d
Binary files /dev/null and b/labs/lab4/media/lab4_cafe-upstreams-2.png differ
diff --git a/labs/lab4/media/lab4_cafe-upstreams-3.png b/labs/lab4/media/lab4_cafe-upstreams-3.png
new file mode 100644
index 0000000..ac47b85
Binary files /dev/null and b/labs/lab4/media/lab4_cafe-upstreams-3.png differ
diff --git a/labs/lab4/media/lab4_diagram.png b/labs/lab4/media/lab4_diagram.png
new file mode 100644
index 0000000..5aff4f0
Binary files /dev/null and b/labs/lab4/media/lab4_diagram.png differ
diff --git a/labs/lab4/media/lab4_http-zones.png b/labs/lab4/media/lab4_http-zones.png
new file mode 100644
index 0000000..302a2e5
Binary files /dev/null and b/labs/lab4/media/lab4_http-zones.png differ
diff --git a/labs/lab4/media/lab4_redis-upstreams.png b/labs/lab4/media/lab4_redis-upstreams.png
new file mode 100644
index 0000000..f17353f
Binary files /dev/null and b/labs/lab4/media/lab4_redis-upstreams.png differ
diff --git a/labs/lab4/media/lab4_redis-zones.png b/labs/lab4/media/lab4_redis-zones.png
new file mode 100644
index 0000000..dbe2828
Binary files /dev/null and b/labs/lab4/media/lab4_redis-zones.png differ
diff --git a/labs/lab4/media/nginx-ingress-icon.png b/labs/lab4/media/nginx-ingress-icon.png
new file mode 100644
index 0000000..0196a5a
Binary files /dev/null and b/labs/lab4/media/nginx-ingress-icon.png differ
diff --git a/labs/lab4/media/readme.md b/labs/lab4/media/readme.md
new file mode 100644
index 0000000..a20b986
--- /dev/null
+++ b/labs/lab4/media/readme.md
@@ -0,0 +1,666 @@
+# Cafe Demo / Redis Deployment
+
+## Introduction
+
+In this lab, you will deploy the Nginx Cafe Demo and Redis In-Memory Cache applications. You will configure Nginx for Azure to expose these applications to the Internet. Then you will test and load test them to make sure they perform as expected. You will use the Nginx Plus Dashboard and Azure Monitoring to watch the metrics about your traffic.
+
+NGINX aaS | Cafe | Redis
+:-------------------:|:-------------------:|:-------------------:
+ | |
+
+## Learning Objectives
+
+By the end of the lab you will be able to:
+
+- Deploy the Cafe Demo application
+- Deploy the Redis In-Memory Cache
+- Expose the Cafe Demo app with Nginx for Azure
+- Expose the Redis Cache with Nginx for Azure
+
+
+## Pre-Requisites
+
+- You must have both AKS clusters up and running
+- You must have both Nginx Ingress Controllers running
+- You must have the NIC Dashboard available
+- See `Lab0` for instructions on setting up your system for this Workshop
+- Familiarity with basic Linux commands and commandline tools
+- Familiarity with basic Kubernetes concepts and commands
+- Familiarity with basic HTTP protocol
+
+
+
+## Deploy the Nginx CAFE Demo app
+
+In this section, you will deploy the "Cafe Nginx" Ingress Demo, which represents a Coffee Shop website with Coffee and Tea applications. You will be adding the following components to your Kubernetes Cluster: Coffee and Tea pods, matching coffee and tea services, and a Cafe VirtualServer.
+
+The Cafe application that you will deploy looks like the following diagram below. Coffee and Tea pods and services, with NGINX Ingress routing the traffic for /coffee and /tea routes, using the `cafe.example.com` Hostname. There is also a hidden third service - more on that later!
+
+< cafe diagram here >
+
+1. Inspect the `lab4/cafe.yaml` manifest. You will see we are deploying 3 replicas each of the coffee and tea Pods, and creating a matching Service for each.
+
+1. Inspect the `lab4/cafe-vs.yaml` manifest. This is the VirtualServer CRD used by Nginx Ingress to expose these apps, using the `cafe.example.com` Hostname.
+
+1. Deploy the Cafe application by applying these two manifests:
+
+```bash
+kubectl apply -f lab4/cafe.yaml
+kubectl apply -f lab4/cafe-vs.yaml
+
+```
+
+```bash
+###Sample output###
+deployment.apps/coffee created
+service/coffee-svc created
+deployment.apps/tea created
+service/tea-svc created
+virtualserver.k8s.nginx.org/cafe-vs created
+
+```
+
+1. Check that all pods and services are running; you should see three Coffee and three Tea pods:
+
+```bash
+kubectl get pods,svc
+###Sample output###
+NAME READY STATUS RESTARTS AGE
+coffee-56b7b9b46f-9ks7w 1/1 Running 0 28s
+coffee-56b7b9b46f-mp9gs 1/1 Running 0 28s
+coffee-56b7b9b46f-v7xxp 1/1 Running 0 28s
+tea-568647dfc7-54r7k 1/1 Running 0 27s
+tea-568647dfc7-9h75w 1/1 Running 0 27s
+tea-568647dfc7-zqtzq 1/1 Running 0 27s
+
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+service/kubernetes   ClusterIP   10.0.0.1     <none>   443/TCP   34d
+service/coffee-svc   ClusterIP   None         <none>   80/TCP    34d
+service/tea-svc      ClusterIP   None         <none>   80/TCP    34d
+
+```
+
+1. In your AKS1 cluster, you will run only 2 Replicas of the coffee and tea pods, so Scale down both deployments:
+
+```bash
+kubectl scale deployment coffee --replicas=2
+kubectl scale deployment tea --replicas=2
+
+```
+
+Now there should be only 2 of each running:
+
+```bash
+kubectl get pods
+###Sample output###
+NAME READY STATUS RESTARTS AGE
+coffee-56b7b9b46f-9ks7w 1/1 Running 0 28s
+coffee-56b7b9b46f-mp9gs 1/1 Running 0 28s
+tea-568647dfc7-54r7k 1/1 Running 0 27s
+tea-568647dfc7-9h75w 1/1 Running 0 27s
+
+```
+
+1. Check that the Cafe `VirtualServer`, **cafe-vs**, is running and the STATE is Valid:
+
+```bash
+kubectl get virtualserver cafe-vs
+
+```
+```bash
+###Sample output###
+NAME STATE HOST IP PORTS AGE
+cafe-vs Valid cafe.example.com 4m6s
+
+```
+
+**Note:** The `STATE` should be `Valid`. If it is not, then there is an issue with your yaml manifest file (cafe-vs.yaml). You could also use `kubectl describe vs cafe-vs` to get more information about the VirtualServer you just created.
+
+1. Check your Nginx Plus Ingress Controller Dashboard for Cluster1, http://dashboard.example.com:9001/dashboard.html. You should now see `cafe.example.com` in the HTTP Zones tab, and 2 each of the coffee and tea Pods in the HTTP Upstreams tab. Nginx is health checking the Pods, so they should show a Green status.
+
+< cafe dashboard ss here >
+
+### Deploy the Nginx CAFE Demo app in the 2nd cluster
+
+1. Repeat the previous section to deploy the CAFE Demo app in your second AKS2 cluster; don't forget to change your Kubectl Context first.
+
+1. Use the same /lab4 `cafe` and `cafe-vs` manifests. However, do not Scale down the coffee and tea replicas; leave three of each pod running.
+
+1. Check your Second Nginx Ingress Controller Dashboard, at http://dashboard.example.com:9002/dashboard.html. You should find the same HTTP Zones, and 3 each of the coffee and tea pods for HTTP Upstreams.
+
+## Configure Nginx for Azure for Cafe Demo
+
+In this exercise, you will create the Nginx config files needed for access to the Cafe Demo application running in both AKS clusters. You will need an Upstream block for each cluster/NodePort, and you will reuse the existing `cafe.example.com.conf` file, changing only the `proxy_pass` directive to tell Nginx to send requests to the AKS cluster Ingress instead of the Docker containers.
+
+1. Using the Nginx for Azure Configuration Console, create a new file called `/etc/nginx/conf.d/aks1-upstreams.conf`. You will again need your AKS Node Names for the server names in this config file. Add `:32080` for your port number - this matches your previous NodePort-Static manifest.
+
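+If you'd rather generate these lines than type them, the same optional jsonpath helper idea works here with the HTTP NodePort (an assumption, not a lab file):
+
+```bash
+# Print one "server <node>:32080;" line per Node in the current Kube Context
+kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' \
+  | awk '{printf "    server %s:32080;\n", $1}'
+```
+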
+Use the example provided, just update the names as before:
+
+```nginx
+# Nginx 4 Azure to NIC, AKS Nodes for Upstreams
+# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+#
+# AKS1 nginx ingress upstreams
+#
+upstream aks1_ingress {
+ zone aks1_ingress 256k;
+
+ least_time last_byte;
+
+ # to nginx-ingress NodePort Service / aks Node IPs
+ server aks-userpool-76919110-vmss000001:32080; #aks1 node1:
+ server aks-userpool-76919110-vmss000002:32080; #aks1 node2:
+
+ keepalive 32;
+
+}
+
+```
+
+Submit your Nginx Configuration.
+
+> Important: You are creating an Upstream to send all traffic to the Nginx Ingress Controller, not to the Cafe Pods! That is why the Upstream is named `aks1_ingress` or `aks2_ingress`. It will then be the Ingress Controller's responsibility to route/load balance traffic to the correct Pods inside the cluster.
+
+>>There are TWO layers of Nginx load balancing here, one outside the Cluster using N4A, one inside the Cluster using Nginx Ingress. You must configure both for traffic to be routed and flow correctly.
+
+1. Create another Nginx conf file for your second AKS2 cluster, named `/etc/nginx/conf.d/aks2-upstreams.conf`. Change the server names to the AKS2 Node names, add port :32080.
+
+Use the example provided, just update the server names as before:
+
+```nginx
+# Nginx 4 Azure to NIC, AKS Node for Upstreams
+# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+#
+# nginx ingress upstreams
+#
+upstream aks2_ingress {
+ zone aks2_ingress 256k;
+
+ least_time last_byte;
+
+ # to nginx-ingress NodePort Service / aks Node IPs
+ server aks-nodepool1-19485366-vmss000003:32080; #aks2 node1:
+ server aks-nodepool1-19485366-vmss000004:32080; #aks2 node2:
+ server aks-nodepool1-19485366-vmss000005:32080; #aks2 node3:
+
+ keepalive 32;
+
+}
+
+```
+
+### Test access to Cafe Demo in AKS1 Cluster / Nginx Ingress.
+
+1. Modify the `proxy_pass` directive in your `cafe.example.com.conf` file to use `aks1_ingress` for the backend. As shown, just comment out the current `cafe_nginx` proxy_pass and Header, and add new ones for `aks1_ingress`. That way, if you want to go back and test Docker again, it's a quick edit.
+
+```nginx
+# Nginx 4 Azure - Cafe Nginx to AKS1 NIC
+# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+#
+server {
+
+ listen 80; # Listening on port 80
+
+ server_name cafe.example.com; # Set hostname to match in request
+ status_zone cafe.example.com; # Metrics zone name
+
+ access_log /var/log/nginx/cafe.example.com.log main;
+ error_log /var/log/nginx/cafe.example.com_error.log info;
+
+ location / {
+ #
+ # return 200 "You have reached cafe.example.com, location /\n";
+
+## Comment out cafe_nginx ##
+
+ #proxy_pass http://cafe_nginx; # Proxy AND load balance to Docker VM
+ #add_header X-Proxy-Pass cafe_nginx; # Custom Header
+
+    proxy_pass http://aks1_ingress;          # Proxy AND load balance to AKS1 Nginx Ingress
+ add_header X-Proxy-Pass aks1_ingress; # Custom Header
+
+ }
+
+}
+
+```
+
+Submit your Nginx Configuration.
+
+1. Test it with Chrome or another browser, http://cafe.example.com/coffee. Refresh several times. What do you see?
+
+- Nginx 4 Azure is sending requests to the Ingress, and the Ingress is loadbalancing the coffee pods in Cluster1.
+- Look at the Server Name and Server IP fields in the grey box. What do they correlate to?
+-- Those are the Pod Names and Pod IPs.
+- Check your Nginx Plus Ingress NIC Dashboard, what does it show while you Refresh the browser ?
+-- The Request Metrics for the Pods should be increasing, and the HTTP Zone counter should also increase.
+- Right click in Chrome, and Inspect. Refresh again ... can you find the custom HTTP Header? What does it say?
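+
+If you prefer a terminal check for that custom Header, a curl sketch (assumes your `/etc/hosts` entry for cafe.example.com):
+
+```bash
+# Print only the response headers, and look for X-Proxy-Pass
+curl -s -D - -o /dev/null http://cafe.example.com/coffee | grep -i x-proxy-pass
+```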
+
+### Test access to Cafe Demo in AKS2 Cluster / Nginx Ingress.
+
+This is the exact same test as the previous step, but for AKS2.
+
+1. Using the Nginx for Azure Console, again modify the `proxy_pass directive` in your `cafe.example.com.conf` file. Just change it to the `aks2_ingress` Upstream.
+
+```nginx
+# Nginx 4 Azure - Cafe Nginx to AKS2 NIC
+# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+#
+server {
+
+ listen 80; # Listening on port 80
+
+ server_name cafe.example.com; # Set hostname to match in request
+ status_zone cafe.example.com; # Metrics zone name
+
+ access_log /var/log/nginx/cafe.example.com.log main;
+ error_log /var/log/nginx/cafe.example.com_error.log info;
+
+ location / {
+ #
+ # return 200 "You have reached cafe.example.com, location /\n";
+
+ #proxy_pass http://cafe_nginx; # Proxy AND load balance to Docker VM
+ #add_header X-Proxy-Pass cafe_nginx; # Custom Header
+
+ #proxy_pass http://aks1_ingress; # Proxy AND load balance to AKS1 Nginx Ingress
+ #add_header X-Proxy-Pass aks1_ingress; # Custom Header
+
+ proxy_pass http://aks2_ingress; # Proxy AND load balance to AKS2 Nginx Ingress
+      add_header X-Proxy-Pass aks2_ingress;      # Custom Header
+
+ }
+
+}
+
+```
+
+Submit your Nginx Configuration.
+
+1. Try it with Chrome or other browser, http://cafe.example.com/coffee. Refresh several times, what do you see ?
+
+- Nginx is sending requests to the Ingress, and the Ingress is loadbalancing the coffee pods in Cluster2.
+- Look at the Server Name and Server IP fields in the grey box. Do they correlate to the coffee Pods in Cluster2?
+-- Check your Nginx Plus Ingress NIC Dashboard in Cluster2, what does it show while you Refresh the browser ?
+-- The Request Metrics for the Pods should be increasing, and the HTTP Zone counter should also increase.
+- Right click on Chrome, and Inspect. Refresh again ... can you find the custom HTTP Header - what does it say ?
+
+> See how easy that was, to create a couple of new Upstream configurations, representing your 2 new clusters, and then just change the Proxy_Pass to use the new Resources? This is just the tip of the iceberg of what you can do with Nginx for Azure.
+
+## Deploy Redis In Memory Caching in AKS#2
+
+In this exercise, you will deploy Redis in your second AKS2 Cluster, and use both Nginx Ingress and Nginx for Azure to expose this Redis Cache to the Internet. Similar to the Cafe Demo deployment, we start with AKS pods and services, add an Nginx Ingress Transport Server for TCP, expose it with NodePort, create Upstreams, and then finally add a new Server block for `redis.example.com`. As Redis operates at the TCP level, you will be using the `Nginx stream` context for your configurations, not the HTTP context.
+
+1. Inspect the Redis Leader and Follower manifests. `Thank You to our friends at Google` for this sample Redis Kubernetes configuration; it seems to work well.
+
+1. Deploy Redis Leader and Follower to your AKS2 Cluster.
+
+ ```bash
+ kubectl config use-context n4a-aks2
+ kubectl apply -f lab4/redis-leader.yaml
+ kubectl apply -f lab4/redis-follower.yaml
+
+ ```
+
+1. Check they are running:
+
+ ```bash
+ kubectl get pods,svc
+
+ ```
+
+ ```bash
+ #Sample Output / Coffee and Tea removed for clarity
+ NAME READY STATUS RESTARTS AGE
+ pod/redis-follower-847b67dd4f-f8ct5 1/1 Running 0 22h
+ pod/redis-follower-847b67dd4f-rt5hg 1/1 Running 0 22h
+ pod/redis-leader-58b566dc8b-8q55p 1/1 Running 0 22h
+
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+    service/redis-follower   ClusterIP   10.0.222.46   <none>        6379/TCP   24m
+    service/redis-leader     ClusterIP   10.0.125.35   <none>        6379/TCP   24m
+
+ ```
+
+1. Configure Nginx Ingress Controller to enable traffic to Redis. Use the following manifests to Open the Redis TCP Ports, and create a Transport Server for TCP traffic.
+
+ Inspect the `lab4/global-configuration-redis.yaml` manifest. This configures Nginx Ingress for new Stream Server blocks and listen on two more ports:
+
+ ```yaml
+ # NIC Global Config manifest for custom TCP ports for Redis
+ # Chris Akker Jan 2024
+ #
+ apiVersion: k8s.nginx.org/v1alpha1
+ kind: GlobalConfiguration
+ metadata:
+ name: nginx-configuration
+ namespace: nginx-ingress
+ spec:
+ listeners:
+ - name: redis-leader-listener
+ port: 6379
+ protocol: TCP
+ - name: redis-follower-listener
+ port: 6380
+ protocol: TCP
+
+ ```
+
+1. Create the Global Configuration:
+
+ ```bash
+ kubectl apply -f lab4/global-configuration-redis.yaml
+
+ ```
+
+ ```bash
+ #Sample output
+ globalconfiguration.k8s.nginx.org/nginx-configuration created
+
+ ```
+
+1. Check and inspect the Global Configuration:
+
+ ```bash
+ kubectl describe gc nginx-configuration -n nginx-ingress
+
+ ```
+ ```bash
+ #Sample output
+ Name: nginx-configuration
+ Namespace: nginx-ingress
+    Labels:            <none>
+    Annotations:       <none>
+ API Version: k8s.nginx.org/v1alpha1
+ Kind: GlobalConfiguration
+ Metadata:
+ Creation Timestamp: 2024-03-25T21:12:27Z
+ Generation: 1
+ Resource Version: 980829
+ UID: 7afbed08-364c-43bc-acc4-dcbeab3afee8
+ Spec:
+ Listeners:
+ Name: redis-leader-listener
+ Port: 6379
+ Protocol: TCP
+ Name: redis-follower-listener
+ Port: 6380
+ Protocol: TCP
+    Events:            <none>
+
+ ```
+
+1. Create the Nginx Ingress Transport Servers, for Redis Leader and Follow traffic:
+
+ ```bash
+ kubectl apply -f lab4/redis-leader-ts.yaml
+ kubectl apply -f lab4/redis-follower-ts.yaml
+
+ ```
+
+1. Verify the Nginx Ingress Controller is now running 2 Transport Servers for Redis traffic, the STATE should be Valid:
+
+ ```bash
+ kubectl get transportserver
+
+ ```
+
+ ```bash
+ #Sample output
+ NAME STATE REASON AGE
+ redis-follower-ts Valid AddedOrUpdated 24m
+ redis-leader-ts Valid AddedOrUpdated 24m
+
+ ```
+
+1. Do a quick check of your Nginx Ingress Dashboard for AKS2; you should now see TCP Zones and TCP Upstreams. These are the Transport Servers and Pods that NIC will use for Redis traffic.
+
+ << NIC Redis SS here >>
+
+1. Inspect the `lab4/nodeport-static-redis.yaml` manifest. This will update the NodePort definitions to include ports for Redis Leader and Follower. Once again, these are static NodePorts.
+
+ ```yaml
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: nginx-ingress
+ namespace: nginx-ingress
+ spec:
+ type: NodePort
+ ports:
+ - port: 80
+ nodePort: 32080
+ protocol: TCP
+ name: http
+ - port: 443
+ nodePort: 32443
+ protocol: TCP
+ name: https
+ - port: 6379
+ nodePort: 32379
+ protocol: TCP
+ name: redis-leader
+ - port: 6380
+ nodePort: 32380
+ protocol: TCP
+ name: redis-follower
+ - port: 9000
+ nodePort: 32090
+ protocol: TCP
+ name: dashboard
+ selector:
+ app: nginx-ingress
+
+ ```
+
+1. Apply the new NodePort manifest:
+
+ ```bash
+ kubectl apply -f lab4/nodeport-static-redis.yaml
+
+ ```
+
+1. Verify there are now 5 Open Nginx Ingress NodePorts on your AKS2 cluster:
+
+ ```bash
+ kubectl get svc -n nginx-ingress
+
+ ```
+
+ ```bash
+ #Sample output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+    dashboard-svc   ClusterIP   10.0.226.36   <none>        9000/TCP                                                                  28d
+    nginx-ingress   NodePort    10.0.84.8     <none>        80:32080/TCP,443:32443/TCP,6379:32379/TCP,6380:32380/TCP,9000:32090/TCP   28m
+
+ ```
+
+### Configure Nginx for Azure for Redis traffic
+
+Following Nginx Best Practices, and the standard Nginx folder/file layout on disk, the `TCP Stream context` configuration files will be created in a new folder, called `/etc/nginx/stream/`.
+
+1. Using the Nginx for Azure Console, modify the `nginx.conf` file, to enable the Stream Context, and to include the config files. Place this stanza at the bottom of your nginx.conf file:
+
+ ```nginx
+
+ stream {
+
+ include /etc/nginx/stream/*.conf; # Stream Context nginx files
+
+ }
+ ```
+
+ Submit your Nginx Config.
+
+1. Using the Nginx for Azure Console, create a new Nginx conf file called `/etc/nginx/stream/redis-leader-upstreams.conf`. Use your AKS2 Node names for server names, and add `:32379` for your port number, matching the NodePort for Redis Leader. Use the example provided, just change the server names:
+
+ ```nginx
+ # Nginx 4 Azure to NIC, AKS Node for Upstreams
+ # Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+ #
+ # nginx ingress upstreams for Redis Leader
+ #
+ upstream aks2_redis_leader {
+ zone aks2_redis_leader 256k;
+
+ least_time last_byte;
+
+ # from nginx-ingress NodePort Service / aks Node IPs
+ server aks-nodepool1-19485366-vmss000003:32379; #aks2 node1:
+ server aks-nodepool1-19485366-vmss000004:32379; #aks2 node2:
+ server aks-nodepool1-19485366-vmss000005:32379; #aks2 node3:
+
+ }
+
+ ```
+
+ Submit your Nginx Config.
+
+1. Using the Nginx for Azure Console, create a new Nginx conf file called `/etc/nginx/stream/redis.example.com.conf`. Use the example provided:
+
+ ```nginx
+ # Nginx 4 Azure to NIC, AKS Node for Upstreams
+ # Stream for Redis Leader
+ # Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+ #
+ server {
+
+ listen 6379;
+ status_zone redis-leader;
+
+ proxy_pass aks2_redis_leader;
+
+ }
+
+ ```
+
+ Submit your Nginx Config.
+
+1. Update your Nginx for Azure NSG to allow port 6379 inbound, so you can connect to the Redis Leader:
+
+<< TODO - add NSG update here >>
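+
+Until that section is written, a sketch that mirrors the NSG rule you created earlier for the NIC Dashboards should be close (the rule name and priority are assumptions):
+
+```bash
+export MY_RESOURCEGROUP=s.dutta-workshop
+export MY_PUBLICIP=$(curl ipinfo.io/ip)
+
+az network nsg rule create \
+  --resource-group $MY_RESOURCEGROUP \
+  --nsg-name n4a-nsg \
+  --name Redis_Leader \
+  --priority 340 \
+  --source-address-prefix $MY_PUBLICIP \
+  --source-port-range '*' \
+  --destination-address-prefix '*' \
+  --destination-port-range 6379 \
+  --direction Inbound \
+  --access Allow \
+  --protocol Tcp \
+  --description "Allow traffic to Redis Leader"
+```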
+
+## Update local DNS
+
+You will be using FQDN hostnames for the labs, and you will need to update your local computer's `/etc/hosts` file, to use these names with Nginx for Azure.
+
+Edit your local hosts file, adding the FQDNs as shown below. Use the `External-IP` Address of your Nginx for Azure instance:
+
+ ```bash
+ cat /etc/hosts
+
+ # Added for N4A Workshop
+ 13.86.100.10 cafe.example.com dashboard.example.com redis.example.com
+
+ ```
+ >**Note:** All hostnames are mapped to the same N4A External-IP. Your N4A External-IP address will be different than the example.
+
+### Test Access to the Redis Leader with Redis Tools
+
+1. Using the `Redis-cli` tool, see if you can connect/ping to the Redis Leader:
+
+ ```bash
+ redis-cli -h redis.example.com PING
+
+ ```
+ ```bash
+ #Response
+ PONG
+ ```
+ ```bash
+ redis-cli -h redis.example.com HELLO 2
+
+ ```
+ ```bash
+ #Response
+ 1) "server"
+ 2) "redis"
+ 3) "version"
+ 4) "6.0.5"
+ 5) "proto"
+ 6) (integer) 2
+ 7) "id"
+ 8) (integer) 7590
+ 9) "mode"
+ 10) "standalone"
+ 11) "role"
+ 12) "master"
+ 13) "modules"
+ 14) (empty array)
+ ```
+
+Now how cool is that? A Redis Cluster running in AKS, exposed with NIC and NodePort, and access provided by Nginx for Azure on the Internet, using a standard hostname and port to connect to.
+
+**Optional:** Run Redis-benchmark on your new Leader and see what performance you can get. Watch your Nginx Ingress Dashboard to see the traffic inside the cluster. Watch your Nginx for Azure with Azure Monitoring as well.
+
+ ```bash
+    redis-benchmark -h redis.example.com -c 100 -q
+
+ ```
+ ```bash
+ #Sample output
+ PING_INLINE: 1585.84 requests per second, p50=61.855 msec
+ PING_MBULK: 1604.57 requests per second, p50=61.343 msec
+ SET: 1596.37 requests per second, p50=61.759 msec
+ GET: 1596.12 requests per second, p50=61.567 msec
+ INCR: 1594.44 requests per second, p50=61.663 msec
+ LPUSH: 1592.66 requests per second, p50=61.855 msec
+ RPUSH: 1577.39 requests per second, p50=62.111 msec
+ LPOP: 1603.69 requests per second, p50=61.503 msec
+ RPOP: 1610.72 requests per second, p50=61.279 msec
+ SADD: 1596.63 requests per second, p50=61.567 msec
+ HSET: 1522.12 requests per second, p50=61.951 msec
+ SPOP: 1414.31 requests per second, p50=61.791 msec
+ ZADD: 1587.96 requests per second, p50=61.759 msec
+ ZPOPMIN: 1578.38 requests per second, p50=61.887 msec
+ LPUSH (needed to benchmark LRANGE): 1581.40 requests per second, p50=62.207 msec
+ LRANGE_100 (first 100 elements): 1552.14 requests per second, p50=62.175 msec
+ LRANGE_300 (first 300 elements): 1380.80 requests per second, p50=68.991 msec
+ LRANGE_500 (first 500 elements): 1047.39 requests per second, p50=90.175 msec
+ LRANGE_600 (first 600 elements): 1014.97 requests per second, p50=91.903 msec
+ MSET (10 keys): 1559.36 requests per second, p50=62.783 msec
+ XADD: 1581.40 requests per second, p50=61.983 msec
+
+ ```
+
+ Some screenshots for you:
+
+ << Redis Benchmark SS here >>
+
+You will likely find that Redis performance is diminished by the round-trip latency of your Internet and Cloud network path. Redis performance/latency is directly related to network performance. However, the ability to run a Redis cluster in any Kubernetes cluster you like, and access it from anywhere in the world, could be a possible Solution for you.
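+
+To see how much of that is pure network round-trip, `redis-cli` has a built-in latency mode you can try (optional):
+
+```bash
+# Continuously samples PING round-trip time in msec; Ctrl+C to stop
+redis-cli -h redis.example.com --latency
+```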
+
+>**Security Warning!** There is no Redis Authentication, or other protections in this Redis configuration, just your Azure NSG IP/port filters. Do NOT use this configuration for Production workloads. The example provided in the Workshop is to show that running Redis is easy, and Nginx makes it easy to access. Take appropriate measures to secure Redis data as needed.
+
+
+
+**This completes Lab4.**
+
+
+
+## References:
+
+- [NGINX As A Service for Azure](https://docs.nginx.com/nginxaas/azure/)
+- [NGINX Plus Product Page](https://docs.nginx.com/nginx/)
+- [NGINX Ingress Controller](https://docs.nginx.com//nginx-ingress-controller/)
+- [NGINX Directives Index](https://nginx.org/en/docs/dirindex.html)
+- [NGINX Variables Index](https://nginx.org/en/docs/varindex.html)
+- [NGINX Technical Specs](https://docs.nginx.com/nginx/technical-specs/)
+- [NGINX - Join Community Slack](https://community.nginx.org/joinslack)
+
+
+
+### Authors
+
+- Chris Akker - Solutions Architect - Community and Alliances @ F5, Inc.
+- Shouvik Dutta - Solutions Architect - Community and Alliances @ F5, Inc.
+- Adam Currier - Solutions Architect - Community and Alliances @ F5, Inc.
+
+-------------
+
+Navigate to ([Lab5](../lab5/readme.md) | [LabX](../labX/readme.md))
diff --git a/labs/lab4/media/readme.md.old b/labs/lab4/media/readme.md.old
deleted file mode 100644
index 27aa48e..0000000
--- a/labs/lab4/media/readme.md.old
+++ /dev/null
@@ -1,77 +0,0 @@
-# NGINX for Azure Overview and Deployment
-
-## Introduction
-
-In this lab, you will build ( x,y,x ).
-
-< Lab specific Images here, in the /media sub-folder >
-
-NGINX aaS | Docker
-:-------------------------:|:-------------------------:
- |
-
-## Learning Objectives
-
-By the end of the lab you will be able to:
-
-- Introduction to `xx`
-- Build an `yyy` Nginx configuration
-- Test access to your lab enviroment with Curl and Chrome
-- Investigate `zzz`
-
-
-## Pre-Requisites
-
-- You must have `aaaa` installed and running
-- You must have `bbbbb` installed
-- See `Lab0` for instructions on setting up your system for this Workshop
-- Familiarity with basic Linux commands and commandline tools
-- Familiarity with basic Docker concepts and commands
-- Familiarity with basic HTTP protocol
-
-
-
-### Lab exercise 1
-
-
-
-### Lab exercise 2
-
-
-
-### Lab exercise 3
-
-
-
-### << more exercises/steps>>
-
-
-
-
-
-**This completes Lab2.**
-
-
-
-## References:
-
-- [NGINX As A Service for Azure](https://docs.nginx.com/nginxaas/azure/)
-- [NGINX Plus Product Page](https://docs.nginx.com/nginx/)
-- [NGINX Ingress Controller](https://docs.nginx.com//nginx-ingress-controller/)
-- [NGINX on Docker](https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-docker/)
-- [NGINX Directives Index](https://nginx.org/en/docs/dirindex.html)
-- [NGINX Variables Index](https://nginx.org/en/docs/varindex.html)
-- [NGINX Technical Specs](https://docs.nginx.com/nginx/technical-specs/)
-- [NGINX - Join Community Slack](https://community.nginx.org/joinslack)
-
-
-
-### Authors
-
-- Chris Akker - Solutions Architect - Community and Alliances @ F5, Inc.
-- Shouvik Dutta - Solutions Architect - Community and Alliances @ F5, Inc.
-- Adam Currier - Solutions Architect - Community and Alliances @ F5, Inc.
-
--------------
-
-Navigate to ([Lab3](../lab3/readme.md) | [LabX](../labX/readme.md))
diff --git a/labs/lab4/media/redis-icon.png b/labs/lab4/media/redis-icon.png
new file mode 100644
index 0000000..9d34aff
Binary files /dev/null and b/labs/lab4/media/redis-icon.png differ
diff --git a/ca-notes/aks/nic/nodeport-static.yaml b/labs/lab4/nodeport-static-redis.yaml
similarity index 83%
rename from ca-notes/aks/nic/nodeport-static.yaml
rename to labs/lab4/nodeport-static-redis.yaml
index e0d60d5..a23806f 100644
--- a/ca-notes/aks/nic/nodeport-static.yaml
+++ b/labs/lab4/nodeport-static-redis.yaml
@@ -1,3 +1,6 @@
+# Nginx 4 Azure, AKS2 NIC NodePort for Redis
+# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+#
apiVersion: v1
kind: Service
metadata:
diff --git a/labs/lab4/readme.md b/labs/lab4/readme.md
new file mode 100644
index 0000000..06f9c17
--- /dev/null
+++ b/labs/lab4/readme.md
@@ -0,0 +1,442 @@
+# Cafe Demo / Redis Deployment
+
+## Introduction
+
+In this lab, you will deploy the Nginx Cafe Demo and Redis In-Memory Cache applications to your AKS Clusters. You will configure Nginx Ingress to expose these applications outside the Clusters. You will use the Nginx Plus Dashboard to watch the Ingress Resources.
+
+
+
+Nginx Ingress | Cafe | Redis
+:--------------:|:--------------:|:--------------:
+ | |
+
+
+
+## Learning Objectives
+
+By the end of the lab you will be able to:
+
+- Deploy the Cafe Demo application
+- Deploy the Redis In Memory Cache
+- Expose the Cafe Demo app with NodePort
+- Expose the Redis Cache with NodePort
+- Monitor with Nginx Plus Ingress dashboard
+
+## Pre-Requisites
+
+- You must have both AKS Clusters up and running
+- You must have both Nginx Ingress Controllers running
+- You must have both the NIC Dashboards available
+- Familiarity with basic Linux commands and commandline tools
+- Familiarity with basic Kubernetes concepts and commands
+- Familiarity with Kubernetes NodePort
+- Familiarity with Nginx Ingress Controller CRDs - Custom Resource Definitions
+- Familiarity with Nginx Ingress VirtualServers and TransportServers
+- See `Lab0` for instructions on setting up your system for this Workshop
+
+
+
+## Deploy the Nginx Cafe Demo app
+
+
+
+In this section, you will deploy the "Cafe Nginx" Ingress Demo, which represents a Coffee Shop website with Coffee and Tea applications. You will be adding the following components to your Kubernetes Clusters:
+
+- Coffee and Tea pods
+- Matching coffee and tea services
+- Cafe VirtualServer
+
+The Cafe application that you will deploy looks like the following diagram below. *BOTH* AKS clusters will have the Coffee and Tea pods and services, with NGINX Ingress routing the traffic for /coffee and /tea routes, using the `cafe.example.com` Hostname. There is also a third hidden service, more on that later!
+
+
+
+1. Inspect the `lab4/cafe.yaml` manifest. You will see we are deploying 3 replicas each of the coffee and tea Pods, and creating a matching Service for each.
+
+2. Inspect the `lab4/cafe-vs.yaml` manifest. This is the Nginx Ingress VirtualServer CRD (Custom Resource Definition) used by Nginx Ingress to expose these apps, using the `cafe.example.com` Hostname. You will also see that active healthchecks are enabled, and the /coffee and /tea routes are being used. (NOTE: The VirtualServer CRD from Nginx is an `upgrade` to the standard Kubernetes Ingress object).
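+
+    If you are curious what such a VirtualServer looks like, here is a hedged sketch based on the public NIC VirtualServer docs - the provided `lab4/cafe-vs.yaml` is the authoritative version and may differ in detail. The `--dry-run=client` flag lets you validate the shape without changing anything in the cluster:
+
+    ```bash
+    # Hedged sketch only - deploy the workshop's lab4/cafe-vs.yaml, not this.
+    kubectl apply --dry-run=client -f - <<'EOF'
+    apiVersion: k8s.nginx.org/v1
+    kind: VirtualServer
+    metadata:
+      name: cafe-vs
+    spec:
+      host: cafe.example.com
+      upstreams:
+      - name: coffee
+        service: coffee-svc
+        port: 80
+        healthCheck:
+          enable: true        # active health checks, as noted above
+      - name: tea
+        service: tea-svc
+        port: 80
+        healthCheck:
+          enable: true
+      routes:
+      - path: /coffee
+        action:
+          pass: coffee
+      - path: /tea
+        action:
+          pass: tea
+    EOF
+    ```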
+
+3. Deploy the Cafe application by applying these two manifests in the first cluster:
+
+    > Make sure your Terminal is in the `nginx-azure-workshops/labs` directory for all commands during this Workshop.
+
+ ```bash
+ # Set context to 1st cluster(n4a-aks1)
+ kubectl config use-context n4a-aks1
+
+ kubectl apply -f lab4/cafe.yaml
+ kubectl apply -f lab4/cafe-vs.yaml
+
+ ```
+
+ ```bash
+ ##Sample Output##
+ Switched to context "n4a-aks1".
+ deployment.apps/coffee created
+ service/coffee-svc created
+ deployment.apps/tea created
+ service/tea-svc created
+ virtualserver.k8s.nginx.org/cafe-vs created
+ ```
+
+4. Check that all pods and services are running in the first cluster; you should see three Coffee and three Tea pods, and the coffee-svc and tea-svc Services.
+
+ ```bash
+ kubectl get pods,svc
+ ```
+
+ ```bash
+ ##Sample Output##
+ NAME READY STATUS RESTARTS AGE
+ coffee-56b7b9b46f-9ks7w 1/1 Running 0 28s
+ coffee-56b7b9b46f-mp9gs 1/1 Running 0 28s
+ coffee-56b7b9b46f-v7xxp 1/1 Running 0 28s
+ tea-568647dfc7-54r7k 1/1 Running 0 27s
+ tea-568647dfc7-9h75w 1/1 Running 0 27s
+ tea-568647dfc7-zqtzq 1/1 Running 0 27s
+
+    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
+    service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   34d
+    service/coffee-svc   ClusterIP   None         <none>        80/TCP    34d
+    service/tea-svc      ClusterIP   None         <none>        80/TCP    34d
+ ```
+
+5. *For your first cluster (`n4a-aks1`) only*, you will run `2 Replicas` of the coffee and tea pods, so scale down both deployments:
+
+ ```bash
+ kubectl scale deployment coffee --replicas=2
+ kubectl scale deployment tea --replicas=2
+ ```
+
+ ```bash
+ deployment.apps/coffee scaled
+ deployment.apps/tea scaled
+ ```
+
+ Now there should be only 2 of each Pod running:
+
+ ```bash
+ kubectl get pods
+ ```
+
+ ```bash
+ ##Sample Output##
+ NAME READY STATUS RESTARTS AGE
+ coffee-56b7b9b46f-9ks7w 1/1 Running 0 28s
+ coffee-56b7b9b46f-mp9gs 1/1 Running 0 28s
+ tea-568647dfc7-54r7k 1/1 Running 0 27s
+ tea-568647dfc7-9h75w 1/1 Running 0 27s
+ ```
+
+6. Check that the Cafe VirtualServer (`cafe-vs`) is running and the STATE is `Valid`:
+
+ ```bash
+ kubectl get virtualserver cafe-vs
+ ```
+
+ ```bash
+ ##Sample Output##
+ NAME STATE HOST IP PORTS AGE
+ cafe-vs Valid cafe.example.com 4m6s
+ ```
+
+ >**NOTE:** The `STATE` should be `Valid`. If it is not, then there is an issue with your yaml manifest file (cafe-vs.yaml). You could also use `kubectl describe vs cafe-vs` to get more information about the VirtualServer you just created.
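+
+    Before moving on to the dashboard, you can optionally verify the pods are serving traffic from inside the cluster. This is a hedged example - the `tmp-shell` pod name and busybox image are illustrative, not part of the lab files:
+
+    ```bash
+    # One-shot busybox pod; coffee-svc is headless, so its DNS name
+    # resolves directly to the Pod IPs.
+    kubectl run tmp-shell --rm -it --image=busybox --restart=Never -- \
+      wget -qO- http://coffee-svc.default.svc.cluster.local/
+    ```
+
+    You should get back the Cafe Demo HTML page, which echoes the serving Pod's name and IP address.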
+
+7. Check your Nginx Plus Ingress Controller Dashboard for the first cluster (`n4a-aks1`), at http://dashboard.example.com:9001/dashboard.html. You should now see `cafe.example.com` in the **HTTP Zones** tab, and 2 each of the coffee and tea Pods in the **HTTP Upstreams** tab. Nginx is health checking the Pods, so they should show a Green status.
+
+ 
+
+ 
+
+ >**NOTE:** You should see two Coffee/Tea pods in Cluster 1.
+
+## Deploy the Nginx Cafe Demo app in the 2nd cluster
+
+1. Repeat the previous section to deploy the Cafe Demo app in your second cluster (`n4a-aks2`). Don't forget to change your kubectl context using the command below.
+
+ ```bash
+ kubectl config use-context n4a-aks2
+ ```
+
+ ```bash
+ ##Sample Output##
+ Switched to context "n4a-aks2".
+ ```
+
+2. Use the same /lab4 `cafe` and `cafe-vs` manifests.
+
+    >*However - do not scale down the coffee and tea replicas; leave three of each pod running in AKS2.*
+
+3. Check your Second Nginx Plus Ingress Controller Dashboard, at http://dashboard.example.com:9002/dashboard.html. You should find the same HTTP Zones, and 3 each of the coffee and tea pods for HTTP Upstreams.
+
+
+
+
+
+## Deploy Redis In Memory Caching in AKS Cluster 2 (n4a-aks2)
+
+Azure | Redis
+:--------------:|:--------------:
+ | 
+
+
+
+In this exercise, you will deploy Redis in your second cluster (`n4a-aks2`), and use both Nginx Ingress and Nginx for Azure to expose this Redis Cache to the Internet. Similar to the Cafe Demo deployment, you will:
+
+- Deploy `Redis Leader and Follower` pods and services in the n4a-aks2 cluster
+- Add an Nginx Ingress `Transport Server` for TCP traffic
+- Expose Redis with NodePorts
+
+>**NOTE:** As Redis operates at the TCP level, you will be using the `Nginx stream` context in your Nginx Ingress configurations, not the HTTP context.
+
+### Deploy Redis Leader and Follower in AKS2
+
+1. Inspect the Redis-Leader and Redis-Follower manifests. `Shout-out: Thank You to our friends at Google` for this sample Redis for Kubernetes configuration - it works well. You will see a single Leader pod and 2 Follower pods, with matching services.
+
+1. Deploy Redis Leader and Follower to your AKS2 Cluster.
+
+ ```bash
+ kubectl config use-context n4a-aks2
+ kubectl apply -f lab4/redis-leader.yaml
+ kubectl apply -f lab4/redis-follower.yaml
+ ```
+
+ ```bash
+ ##Sample Output##
+ Switched to context "n4a-aks2".
+ deployment.apps/redis-leader created
+ service/redis-leader created
+ deployment.apps/redis-follower created
+ service/redis-follower created
+ ```
+
+1. Check they are running:
+
+ ```bash
+ kubectl get pods,svc -l app=redis
+ ```
+
+ ```bash
+ ##Sample Output##
+ NAME READY STATUS RESTARTS AGE
+ pod/redis-follower-847b67dd4f-f8ct5 1/1 Running 0 22h
+ pod/redis-follower-847b67dd4f-rt5hg 1/1 Running 0 22h
+ pod/redis-leader-58b566dc8b-8q55p 1/1 Running 0 22h
+
+    NAME                     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
+    service/redis-follower   ClusterIP   10.0.222.46   <none>        6379/TCP   24m
+    service/redis-leader     ClusterIP   10.0.125.35   <none>        6379/TCP   24m
+ ```
+
+1. Configure Nginx Ingress Controller to enable traffic to Redis. This requires three things:
+
+ - Open the TCP ports on Nginx Ingress
+ - Create a TransportServer for Redis Leader
+ - Create a TransportServer for Redis Follower
+
+1. Use the following manifests to open the Redis Leader and Follower TCP ports, using the Nginx Ingress Global Configuration CRD:
+
+ Inspect the `lab4/global-configuration-redis.yaml` manifest. This configures Nginx Ingress for new `Stream context` Server blocks and listens on two additional ports for Redis. Take note that you are using the Redis standard `6379` port for the Leader, and port `6380` for the Follower. (If you are unfamiliar with Redis, you can find a link in the References section to read more about it).
+
+ ```yaml
+ # NIC Global Config manifest for custom TCP ports for Redis
+ # Chris Akker Jan 2024
+ #
+ apiVersion: k8s.nginx.org/v1alpha1
+ kind: GlobalConfiguration
+ metadata:
+ name: nginx-configuration
+ namespace: nginx-ingress
+ spec:
+ listeners:
+ - name: redis-leader-listener
+ port: 6379
+ protocol: TCP
+ - name: redis-follower-listener
+ port: 6380
+ protocol: TCP
+ ```
+
+1. Create the Global Configuration:
+
+ ```bash
+ kubectl apply -f lab4/global-configuration-redis.yaml
+ ```
+
+ ```bash
+ ##Sample Output##
+ globalconfiguration.k8s.nginx.org/nginx-configuration created
+ ```
+
+1. Check and inspect the Global Configuration:
+
+ ```bash
+ kubectl describe gc nginx-configuration -n nginx-ingress
+ ```
+
+ ```bash
+ ##Sample Output##
+ Name: nginx-configuration
+ Namespace: nginx-ingress
+    Labels:            <none>
+    Annotations:       <none>
+ API Version: k8s.nginx.org/v1alpha1
+ Kind: GlobalConfiguration
+ Metadata:
+ Creation Timestamp: 2024-03-25T21:12:27Z
+ Generation: 1
+ Resource Version: 980829
+ UID: 7afbed08-364c-43bc-acc4-dcbeab3afee8
+ Spec:
+ Listeners:
+ Name: redis-leader-listener
+ Port: 6379
+ Protocol: TCP
+ Name: redis-follower-listener
+ Port: 6380
+ Protocol: TCP
+    Events:            <none>
+
+ ```
+
+1. Create the Nginx Ingress Transport Servers, for Redis Leader and Follower traffic, using the Transport Server CRD:
+
+ ```bash
+ kubectl apply -f lab4/redis-leader-ts.yaml
+ kubectl apply -f lab4/redis-follower-ts.yaml
+ ```
+
+ ```bash
+ ##Sample Output##
+ transportserver.k8s.nginx.org/redis-leader-ts created
+ transportserver.k8s.nginx.org/redis-follower-ts created
+ ```
+
+1. Verify the Nginx Ingress Controller is now running 2 Transport Servers for Redis traffic; the STATE should be `Valid`:
+
+ ```bash
+ kubectl get transportserver
+ ```
+
+ ```bash
+ ##Sample Output##
+ NAME STATE REASON AGE
+ redis-follower-ts Valid AddedOrUpdated 24m
+ redis-leader-ts Valid AddedOrUpdated 24m
+
+ ```
+
+ >**NOTE:** The Nginx Ingress Controller uses `VirtualServer CRD` for HTTP context/traffic, and uses `TransportServer CRD` for TCP stream context/traffic.
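+
+    The TransportServer manifests are provided for you in the lab4 folder. As a hedged sketch (field names follow the public NIC TransportServer docs; the actual `lab4/redis-leader-ts.yaml` may differ in detail), the Leader one ties the `redis-leader-listener` from the GlobalConfiguration to the `redis-leader` Service, roughly like this:
+
+    ```bash
+    # Hedged illustration - you already applied the real lab4/redis-leader-ts.yaml.
+    # --dry-run=client validates the shape without touching the cluster.
+    kubectl apply --dry-run=client -f - <<'EOF'
+    apiVersion: k8s.nginx.org/v1alpha1
+    kind: TransportServer
+    metadata:
+      name: redis-leader-ts
+    spec:
+      listener:
+        name: redis-leader-listener   # must match the GlobalConfiguration listener
+        protocol: TCP
+      upstreams:
+      - name: redis-leader-upstream
+        service: redis-leader
+        port: 6379
+      action:
+        pass: redis-leader-upstream
+    EOF
+    ```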
+
+1. Do a quick check of your Nginx Plus Ingress Dashboard for AKS2. You should now see `TCP Zones` and `TCP Upstreams`. These are the Transport Servers and Pods that Nginx Ingress will use for Redis traffic.
+
+ 
+ 
+
+1. Inspect the `lab4/nodeport-static-redis.yaml` manifest. This will update the previous `nginx-ingress` NodePort definitions to include the ports for Redis Leader and Follower. Once again, these are static NodePorts.
+
+ ```yaml
+ # Nginx 4 Azure, AKS2 NIC NodePort for Redis
+ # Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+ #
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: nginx-ingress
+ namespace: nginx-ingress
+ spec:
+ type: NodePort
+ ports:
+ - port: 80
+ nodePort: 32080
+ protocol: TCP
+ name: http
+ - port: 443
+ nodePort: 32443
+ protocol: TCP
+ name: https
+ - port: 6379
+ nodePort: 32379
+ protocol: TCP
+ name: redis-leader
+ - port: 6380
+ nodePort: 32380
+ protocol: TCP
+ name: redis-follower
+ - port: 9000
+ nodePort: 32090
+ protocol: TCP
+ name: dashboard
+ selector:
+ app: nginx-ingress
+
+ ```
+
+1. Apply the new NodePort manifest (n4a-aks2 cluster only - Redis is not running in n4a-aks1 cluster!):
+
+ ```bash
+ kubectl config use-context n4a-aks2
+ kubectl apply -f lab4/nodeport-static-redis.yaml
+ ```
+
+ ```bash
+ ##Sample Output##
+ Switched to context "n4a-aks2".
+    service/nginx-ingress configured
+ ```
+
+1. Verify there are now `5 Open Nginx Ingress NodePorts` on your AKS2 cluster:
+
+ ```bash
+ kubectl get svc -n nginx-ingress
+ ```
+
+ ```bash
+ ##Sample Output##
+    NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                                                                   AGE
+    dashboard-svc   ClusterIP   10.0.226.36   <none>        9000/TCP                                                                  28d
+    nginx-ingress   NodePort    10.0.84.8     <none>        80:32080/TCP,443:32443/TCP,6379:32379/TCP,6380:32380/TCP,9000:32090/TCP   28m
+
+ ```
+
+To recap, the 5 open port mappings for `nginx-ingress` are as follows:
+
+Service Port | External NodePort | Name
+:--------:|:------:|:-------:
+80 | 32080 | http
+443 | 32443 | https
+6379 | 32379 | redis leader
+6380 | 32380 | redis follower
+9000 | 32090 | dashboard
+
+You will use these new Redis NodePorts for your Nginx for Azure upstreams in the next Lab.
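+
+Before moving on, you can optionally sanity-check Redis itself from inside the cluster. This is a hedged example - the `redis-test` pod name is illustrative, not part of the lab files:
+
+```bash
+# One-shot redis-cli pod in AKS2, pinging the redis-leader Service directly.
+kubectl config use-context n4a-aks2
+kubectl run redis-test --rm -it --image=redis:alpine --restart=Never -- \
+  redis-cli -h redis-leader -p 6379 ping
+# Expected reply: PONG
+```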
+
+
+
+**This completes Lab4.**
+
+
+
+## References:
+
+- [NGINX As A Service for Azure](https://docs.nginx.com/nginxaas/azure/)
+- [NGINX Cafe Demo](https://hub.docker.com/r/nginxinc/ingress-demo)
+- [Redis Product Page](https://redis.io/)
+- [Redis with Nginx Ingress Lab](https://github.com/nginxinc/nginx-ingress-workshops/tree/main/AdvancedNIC/labs/lab9)
+- [NGINX Plus Product Page](https://docs.nginx.com/nginx/)
+- [NGINX Ingress Controller](https://docs.nginx.com//nginx-ingress-controller/)
+- [NGINX Ingress Transport Server CRD](https://docs.nginx.com/nginx-ingress-controller/configuration/transportserver-resource/)
+- [NGINX Directives Index](https://nginx.org/en/docs/dirindex.html)
+- [NGINX Variables Index](https://nginx.org/en/docs/varindex.html)
+- [NGINX Technical Specs](https://docs.nginx.com/nginx/technical-specs/)
+- [NGINX - Join Community Slack](https://community.nginx.org/joinslack)
+
+
+
+### Authors
+
+- Chris Akker - Solutions Architect - Community and Alliances @ F5, Inc.
+- Shouvik Dutta - Solutions Architect - Community and Alliances @ F5, Inc.
+- Adam Currier - Solutions Architect - Community and Alliances @ F5, Inc.
+
+-------------
+
+Navigate to ([Lab5](../lab5/readme.md) | [LabGuide](../readme.md))
diff --git a/ca-notes/aks/redis/redis-follower-ts.yaml b/labs/lab4/redis-follower-ts.yaml
similarity index 92%
rename from ca-notes/aks/redis/redis-follower-ts.yaml
rename to labs/lab4/redis-follower-ts.yaml
index b846333..6e7f559 100644
--- a/ca-notes/aks/redis/redis-follower-ts.yaml
+++ b/labs/lab4/redis-follower-ts.yaml
@@ -1,5 +1,5 @@
# NIC Plus TransportServer file
-# Add ports 6380 for Redis Follower
+# Add port 6380 for Redis Follower
# Chris Akker, Jan 2024
#
apiVersion: k8s.nginx.org/v1alpha1
diff --git a/ca-notes/aks/redis/redis-follower.yaml b/labs/lab4/redis-follower.yaml
similarity index 100%
rename from ca-notes/aks/redis/redis-follower.yaml
rename to labs/lab4/redis-follower.yaml
diff --git a/ca-notes/aks/redis/redis-leader-ts.yaml b/labs/lab4/redis-leader-ts.yaml
similarity index 100%
rename from ca-notes/aks/redis/redis-leader-ts.yaml
rename to labs/lab4/redis-leader-ts.yaml
diff --git a/ca-notes/aks/redis/redis-leader.yaml b/labs/lab4/redis-leader.yaml
similarity index 100%
rename from ca-notes/aks/redis/redis-leader.yaml
rename to labs/lab4/redis-leader.yaml
diff --git a/labs/lab5/aks1-upstreams.conf b/labs/lab5/aks1-upstreams.conf
index 4497ea9..4791c22 100644
--- a/labs/lab5/aks1-upstreams.conf
+++ b/labs/lab5/aks1-upstreams.conf
@@ -8,11 +8,12 @@ upstream aks1_ingress {
least_time last_byte;
- # from nginx-ingress NodePort Service / aks Node names
+ # from nginx-ingress NodePort Service / aks1 Node names
# Note: change servers to match
#
- server aks-userpool-76919110-vmss000002:32080; #aks1 node1:
- server aks-userpool-76919110-vmss000003:32080; #aks1 node2:
+ server aks-userpool-76919110-vmss000001:32080; #aks1 node1
+ server aks-userpool-76919110-vmss000002:32080; #aks1 node2
+ server aks-userpool-76919110-vmss000003:32080; #aks1 node3
keepalive 32;
diff --git a/labs/lab5/aks2-nic-headless.conf b/labs/lab5/aks2-nic-headless.conf
new file mode 100644
index 0000000..f333f91
--- /dev/null
+++ b/labs/lab5/aks2-nic-headless.conf
@@ -0,0 +1,24 @@
+# Nginx 4 Azure direct to NIC for Upstreams
+# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+#
+# direct to nginx ingress Headless Service ( no NodePort )
+#
+upstream aks2_nic_headless {
+ zone aks2_nic_headless 256k;
+
+ least_time last_byte;
+
+ # direct to nginx-ingress Headless Service Endpoint Cluster IP
+ # Resolvers set to kube-dns Endpoints List
+
+ resolver 172.16.4.64 172.16.4.224 valid=10s ipv6=off status_zone=kube-dns;
+
+ # Server name must follow this Kubernetes Service Name format
+# server <service-name>.<namespace>.svc.cluster.local
+
+ server nginx-ingress-headless.nginx-ingress.svc.cluster.local:80 resolve;
+ # server 172.16.4.74:80;
+
+ keepalive 32;
+
+}
diff --git a/labs/lab5/ask2-upstreams.conf b/labs/lab5/ask2-upstreams.conf
index 3c523c7..242d51e 100644
--- a/labs/lab5/ask2-upstreams.conf
+++ b/labs/lab5/ask2-upstreams.conf
@@ -8,12 +8,13 @@ upstream aks2_ingress {
least_time last_byte;
- # from nginx-ingress NodePort Service / aks Node names
+ # from nginx-ingress NodePort Service / aks2 Node names
# Note: change servers to match
#
- server aks-nodepool1-19485366-vmss00000h:32080; #aks node1:
- server aks-nodepool1-19485366-vmss00000i:32080; #aks node2:
- server aks-nodepool1-19485366-vmss00000j:32080; #aks node3:
+ server aks-nodepool1-19485366-vmss000003:32080; #aks2 node1
+ server aks-nodepool1-19485366-vmss000004:32080; #aks2 node2
+ server aks-nodepool1-19485366-vmss000005:32080; #aks2 node3
+ server aks-nodepool1-19485366-vmss000006:32080; #aks2 node4
keepalive 32;
diff --git a/labs/lab6/cafe.example.com.conf b/labs/lab5/cafe.example.com.conf
similarity index 56%
rename from labs/lab6/cafe.example.com.conf
rename to labs/lab5/cafe.example.com.conf
index 03e9075..a5276d6 100644
--- a/labs/lab6/cafe.example.com.conf
+++ b/labs/lab5/cafe.example.com.conf
@@ -1,16 +1,13 @@
-# Nginx 4 Azure - Cafe Nginx HTTPS
+# Nginx 4 Azure - Cafe Nginx to AKS2 NIC
# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
#
server {
- listen 443 ssl; # Listening on port 443 with "ssl" parameter for terminating TLS on all IP addresses on this machine
+ listen 80; # Listening on port 80
server_name cafe.example.com; # Set hostname to match in request
status_zone cafe.example.com; # Metrics zone name
- ssl_certificate /etc/nginx/certs/n4a-cert.cert;
- ssl_certificate_key /etc/nginx/certs/n4a-cert.key;
-
access_log /var/log/nginx/cafe.example.com.log main;
error_log /var/log/nginx/cafe.example.com_error.log info;
@@ -18,8 +15,15 @@ server {
#
# return 200 "You have reached cafe.example.com, location /\n";
- proxy_pass http://cafe_nginx; # Proxy AND load balance to a list of servers
+ proxy_pass http://cafe_nginx; # Proxy AND load balance to Docker VM
add_header X-Proxy-Pass cafe_nginx; # Custom Header
+ #proxy_pass http://aks1_ingress; # Proxy AND load balance to AKS1 Nginx Ingress
+ #add_header X-Proxy-Pass aks1_ingress; # Custom Header
+
+ #proxy_pass http://aks2_ingress; # Proxy AND load balance to AKS2 Nginx Ingress
+    #add_header X-Proxy-Pass aks2_ingress;      # Custom Header
+
}
-}
\ No newline at end of file
+
+}
diff --git a/ca-notes/n4a-configs/includes/keepalive.conf b/labs/lab5/keepalive.conf
similarity index 100%
rename from ca-notes/n4a-configs/includes/keepalive.conf
rename to labs/lab5/keepalive.conf
diff --git a/labs/lab5/media/aks-icon.png b/labs/lab5/media/aks-icon.png
new file mode 100644
index 0000000..b0e5fc4
Binary files /dev/null and b/labs/lab5/media/aks-icon.png differ
diff --git a/labs/lab5/media/bluegreen-icon.jpg b/labs/lab5/media/bluegreen-icon.jpg
new file mode 100644
index 0000000..42efac2
Binary files /dev/null and b/labs/lab5/media/bluegreen-icon.jpg differ
diff --git a/labs/lab5/media/docker-icon.png b/labs/lab5/media/docker-icon.png
new file mode 100644
index 0000000..02ee3f1
Binary files /dev/null and b/labs/lab5/media/docker-icon.png differ
diff --git a/labs/lab5/media/kubernetes-icon.png b/labs/lab5/media/kubernetes-icon.png
new file mode 100644
index 0000000..63c679e
Binary files /dev/null and b/labs/lab5/media/kubernetes-icon.png differ
diff --git a/labs/lab5/media/lab5_aks1-kubenet.png b/labs/lab5/media/lab5_aks1-kubenet.png
new file mode 100644
index 0000000..9c6a961
Binary files /dev/null and b/labs/lab5/media/lab5_aks1-kubenet.png differ
diff --git a/labs/lab5/media/lab5_aks2-azurecni.png b/labs/lab5/media/lab5_aks2-azurecni.png
new file mode 100644
index 0000000..d0c8091
Binary files /dev/null and b/labs/lab5/media/lab5_aks2-azurecni.png differ
diff --git a/labs/lab5/media/lab5_cafe-3way-split.png b/labs/lab5/media/lab5_cafe-3way-split.png
new file mode 100644
index 0000000..d0bbebe
Binary files /dev/null and b/labs/lab5/media/lab5_cafe-3way-split.png differ
diff --git a/labs/lab5/media/lab5_cafe-aks1-loadtest.png b/labs/lab5/media/lab5_cafe-aks1-loadtest.png
new file mode 100644
index 0000000..0ae499c
Binary files /dev/null and b/labs/lab5/media/lab5_cafe-aks1-loadtest.png differ
diff --git a/labs/lab5/media/lab5_cafe-aks1-split1.png b/labs/lab5/media/lab5_cafe-aks1-split1.png
new file mode 100644
index 0000000..ca4da7d
Binary files /dev/null and b/labs/lab5/media/lab5_cafe-aks1-split1.png differ
diff --git a/labs/lab5/media/lab5_cafe-aks1-split30.png b/labs/lab5/media/lab5_cafe-aks1-split30.png
new file mode 100644
index 0000000..ec5052f
Binary files /dev/null and b/labs/lab5/media/lab5_cafe-aks1-split30.png differ
diff --git a/labs/lab5/media/lab5_cafe-aks1-split50.png b/labs/lab5/media/lab5_cafe-aks1-split50.png
new file mode 100644
index 0000000..d456cd6
Binary files /dev/null and b/labs/lab5/media/lab5_cafe-aks1-split50.png differ
diff --git a/labs/lab5/media/lab5_cafe-aks1-split99.png b/labs/lab5/media/lab5_cafe-aks1-split99.png
new file mode 100644
index 0000000..e374fde
Binary files /dev/null and b/labs/lab5/media/lab5_cafe-aks1-split99.png differ
diff --git a/labs/lab5/media/lab5_cafe-aks1.png b/labs/lab5/media/lab5_cafe-aks1.png
new file mode 100644
index 0000000..583a8b6
Binary files /dev/null and b/labs/lab5/media/lab5_cafe-aks1.png differ
diff --git a/labs/lab5/media/lab5_cafe-aks2-loadtest.png b/labs/lab5/media/lab5_cafe-aks2-loadtest.png
new file mode 100644
index 0000000..4610ac7
Binary files /dev/null and b/labs/lab5/media/lab5_cafe-aks2-loadtest.png differ
diff --git a/labs/lab5/media/lab5_cafe-docker.png b/labs/lab5/media/lab5_cafe-docker.png
new file mode 100644
index 0000000..a35677c
Binary files /dev/null and b/labs/lab5/media/lab5_cafe-docker.png differ
diff --git a/labs/lab5/media/lab5_cafe-nic1-upstreams.png b/labs/lab5/media/lab5_cafe-nic1-upstreams.png
new file mode 100644
index 0000000..0067ca7
Binary files /dev/null and b/labs/lab5/media/lab5_cafe-nic1-upstreams.png differ
diff --git a/labs/lab5/media/lab5_cafe-nic2-upstreams.png b/labs/lab5/media/lab5_cafe-nic2-upstreams.png
new file mode 100644
index 0000000..c093a6b
Binary files /dev/null and b/labs/lab5/media/lab5_cafe-nic2-upstreams.png differ
diff --git a/labs/lab5/media/lab5_diagram.png b/labs/lab5/media/lab5_diagram.png
new file mode 100644
index 0000000..f6f1876
Binary files /dev/null and b/labs/lab5/media/lab5_diagram.png differ
diff --git a/labs/lab5/media/lab5_nic-headless-diagram.png b/labs/lab5/media/lab5_nic-headless-diagram.png
new file mode 100644
index 0000000..322b3c7
Binary files /dev/null and b/labs/lab5/media/lab5_nic-headless-diagram.png differ
diff --git a/labs/lab5/media/lab5_redis-bench.png b/labs/lab5/media/lab5_redis-bench.png
new file mode 100644
index 0000000..c8cb39b
Binary files /dev/null and b/labs/lab5/media/lab5_redis-bench.png differ
diff --git a/labs/lab5/media/lab5_redis-benchmark.png b/labs/lab5/media/lab5_redis-benchmark.png
new file mode 100644
index 0000000..cafb834
Binary files /dev/null and b/labs/lab5/media/lab5_redis-benchmark.png differ
diff --git a/labs/lab5/media/nginx-azure-icon.png b/labs/lab5/media/nginx-azure-icon.png
new file mode 100644
index 0000000..70ab132
Binary files /dev/null and b/labs/lab5/media/nginx-azure-icon.png differ
diff --git a/labs/lab5/media/nginx-ingress-icon.png b/labs/lab5/media/nginx-ingress-icon.png
new file mode 100644
index 0000000..0196a5a
Binary files /dev/null and b/labs/lab5/media/nginx-ingress-icon.png differ
diff --git a/labs/lab5/media/redis-benchmark-icon.png b/labs/lab5/media/redis-benchmark-icon.png
new file mode 100644
index 0000000..d238371
Binary files /dev/null and b/labs/lab5/media/redis-benchmark-icon.png differ
diff --git a/labs/lab5/media/redis-icon.png b/labs/lab5/media/redis-icon.png
new file mode 100644
index 0000000..9d34aff
Binary files /dev/null and b/labs/lab5/media/redis-icon.png differ
diff --git a/labs/lab5/media/windows-icon.png b/labs/lab5/media/windows-icon.png
new file mode 100644
index 0000000..05ac958
Binary files /dev/null and b/labs/lab5/media/windows-icon.png differ
diff --git a/ca-notes/aks/nic/nginx-ingress-headless.yaml b/labs/lab5/nginx-ingress-headless.yaml
similarity index 59%
rename from ca-notes/aks/nic/nginx-ingress-headless.yaml
rename to labs/lab5/nginx-ingress-headless.yaml
index 98ef3f2..7a30ae1 100644
--- a/ca-notes/aks/nic/nginx-ingress-headless.yaml
+++ b/labs/lab5/nginx-ingress-headless.yaml
@@ -17,17 +17,5 @@ spec:
#nodePort: 32443
protocol: TCP
name: https
-# - port: 6379
-# nodePort: 32379
-# protocol: TCP
-# name: redis-leader
-# - port: 6380
-# nodePort: 32378
-# protocol: TCP
-# name: redis-follower
-# - port: 9000
-# nodePort: 32090
-# protocol: TCP
-# name: dashboard
selector:
app: nginx-ingress
diff --git a/labs/lab5/nginx.conf b/labs/lab5/nginx.conf
new file mode 100644
index 0000000..0523e84
--- /dev/null
+++ b/labs/lab5/nginx.conf
@@ -0,0 +1,47 @@
+# Nginx 4 Azure - Nginx.conf
+# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+#
+user nginx;
+worker_processes auto;
+worker_rlimit_nofile 8192;
+pid /run/nginx/nginx.pid;
+
+events {
+ worker_connections 4000;
+}
+
+error_log /var/log/nginx/error.log error;
+
+http {
+ log_format main '$remote_addr - $remote_user [$time_local] "$request" '
+ '$status $body_bytes_sent "$http_referer" '
+ '"$http_user_agent" "$http_x_forwarded_for"';
+
+
+ # access_log off;
+ server_tokens N4A-$nginx_version-cakker;
+ server {
+ # listen 80 default_server;
+ listen 80;
+ server_name www.example.com;
+ status_zone www.example.com;
+ add_header X-Host-Header $host;
+ location / {
+ status_zone /;
+ # Points to a directory with a basic html index file with
+ # a "Welcome to NGINX as a Service for Azure!" page
+ root /var/www;
+ index index.html;
+ }
+ }
+
+ include /etc/nginx/conf.d/*.conf;
+ include /etc/nginx/includes/*.conf; # shared files
+
+}
+
+stream {
+
+ include /etc/nginx/stream/*.conf; # Stream TCP nginx files
+
+}
diff --git a/labs/lab5/readme.md b/labs/lab5/readme.md
index 0199a79..4946053 100644
--- a/labs/lab5/readme.md
+++ b/labs/lab5/readme.md
@@ -1,13 +1,12 @@
-# NGINX Load Balancing / Reverse Proxy
+# NGINX Load Balancing / Reverse Proxy
## Introduction
-In this lab, you will configure Nginx4Azure to Proxy and Load Balance several different backend systems, including Nginx Ingress Controllers in AKS, and a Windows VM. You will create and configure the needed Nginx config files, and then verify access to these systems. The Docker containers, VMs, or AKS Pods are running simple websites that represent web applications. You will also configure and load balance traffic to a Redis in-memory cache running in the AKS cluster. The AKS Clusters and Nginx Ingress Controllers provide access to these various K8s workloads.
-
+In this lab, you will configure Nginx4Azure to Proxy and Load Balance several different backend systems, including Nginx Ingress Controllers in AKS, and a Windows VM. You will create and configure the needed Nginx config files, and then verify access to these systems. The Docker containers, VMs, or AKS Pods are running simple websites that represent web applications. You will also configure and load balance traffic to a Redis in-memory cache running in the AKS cluster. The AKS Clusters and Nginx Ingress Controllers provide access to these various K8s workloads.
NGINX aaS | AKS | Nginx Ingress | Redis
:-----------------:|:-----------------:|:-----------------:|:-----------------:
- | | |
+ | | |
## Learning Objectives
@@ -17,7 +16,10 @@ By the end of the lab you will be able to:
- Configure Nginx4Azure to Proxy a Windows Server VM
- Test access to your N4A configurations with Curl and Chrome
- Inspect the HTTP content coming from these systems
-- Enable some advanced Nginx features and test them
+- Run an HTTP Load Test on your systems
+- Enable HTTP Split Clients for Blue/Green, A/B Testing
+- Configure Nginx4Azure for Redis Cluster
+- Configure Nginx4Azure to Proxy to Nginx Ingress Headless
## Pre-Requisites
@@ -29,243 +31,1238 @@ By the end of the lab you will be able to:
- You should have Redis Client Tools installed on your local system
- See `Lab0` for instructions on setting up your system for this Workshop
-
-< Lab specific Image here >
+
-
### Nginx 4 Azure Proxy to AKS Clusters
-This exercise will create Nginx Upstream configurations for the AKS clusters. You will add the NodePorts of the Nginx Ingress Controllers running in AKS cluster 1, and AKS Cluster 2. These were previously deployed and configured in a previous lab. Now the fun part, sending traffic to them!
+This exercise will create Nginx Upstream configurations for the AKS Clusters. You will use the Nodepool node names, and you will add the port number `32080` from the NodePort of the Nginx Ingress Controllers running in AKS Cluster 1 and AKS Cluster 2. These were deployed and configured in a previous lab. Now the fun part, sending traffic to them!
-Using the Nginx4Azure configuration tool, create a new file called `/etc/nginx/conf.d/aks1-upstreams.conf`. Copy and Paste the contents of the provided file. You will have to EDIT this example config file, and change the `server` entries to match your AKS Cluster1 nodepool node names. You can find your AKS1 nodepool nodenames from the Azure Portal. Make sure you use `:32080` for the port number, this is the static `nginx-ingress NodePort Service` for HTTP traffic that was defined earlier.
+Configure the Upstream for AKS Cluster1.
-```nginx
+1. Using kubectl, get the Nodepool nodes for AKS Cluster1: (You can also find these in your Azure Portal - AKS Nodepool definitions.)
-# Nginx 4 Azure to NIC, AKS Nodes for Upstreams
-# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
-#
-# AKS1 nginx ingress upstreams
-#
-upstream aks1_ingress {
- zone aks1_ingress 256k;
+ ```bash
+ kubectl config use-context n4a-aks1
+ kubectl get nodes
+ ```
- least_time last_byte;
-
- # from nginx-ingress NodePort Service / aks Node names
- # Note: change servers to match
- #
- server aks-userpool-76919110-vmss000002:32080; #aks1 node1:
- server aks-userpool-76919110-vmss000003:32080; #aks1 node2:
+ ```bash
+ ##Sample Output##
+ aks-userpool-76919110-vmss000001 #aks1 node1
+ aks-userpool-76919110-vmss000002 #aks1 node2
+ aks-userpool-76919110-vmss000003 #aks1 node3
+ ```
- keepalive 32;
+1. Using the Nginx4Azure configuration tool, create a new file called `/etc/nginx/conf.d/aks1-upstreams.conf`. Copy and Paste the contents of the provided file. You will have to EDIT this example config file, and change the `server` entries to match your AKS Cluster1 Nodepool node names. You can find your AKS1 Nodepool node names from `kubectl get nodes` or the Azure Portal. Make sure you use `:32080` for the port number; this is the static `nginx-ingress NodePort Service` for HTTP traffic that was defined earlier.
-}
+ ```nginx
+ # Nginx 4 Azure to NIC, AKS Nodes for Upstreams
+ # Chris Akker, Shouvik Dutta, Adam Currier, Steve Wagner - Mar 2024
+ #
+ # AKS1 nginx ingress upstreams
+ #
+ upstream aks1_ingress {
+ zone aks1_ingress 256k;
-```
+ least_time last_byte;
+
+ # from nginx-ingress NodePort Service / aks Node names
+ # Note: change servers to match
+ #
+ server aks-userpool-76919110-vmss000001:32080; #aks1 node1
+ server aks-userpool-76919110-vmss000002:32080; #aks1 node2
+ server aks-userpool-76919110-vmss000003:32080; #aks1 node3
-Submit your Changes. If you have the Server names:port correct, Nginx4Azure will validate and return a Success message.
+ keepalive 32;
-**Important!** If you stop then re-start your AKS cluster, or scale up/down, or add/remove VMSS worker nodes in the AKS NodePools, this Upstream list `WILL` have to be updated to match! Any changes to the Worker nodes in the Cluster will need to be matched exactly, as it is a static configuration that must match the Worker Nodes:NodePort definition in your AKS cluster. If you change the static nginx-ingress NodePort Service, you will have to match it here as well.
+ }
+ ```
-Repeat the step above, but create a new file called `/etc/nginx/conf.d/aks2-upstreams.conf`, for your second, AKS2 Cluster:
+ Submit your Nginx Configuration. If you have the Server names:port correct, Nginx for Azure will validate and return a Success message.
-```nginx
-# Nginx 4 Azure to NIC, AKS Node for Upstreams
-# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
-#
-# AKS2 nginx ingress upstreams
-#
-upstream aks2_ingress {
- zone aks2_ingress 256k;
+    >**Important!** If you stop and then restart your AKS cluster, scale the Nodepool up or down, or add/remove nodes in the AKS NodePools, this Upstream list `WILL have to be updated to match!` Any changes to the Worker nodes in the Cluster will need to be matched exactly, as it is a static Nginx configuration that must match the Kubernetes workers - the Nodes:NodePort definition in your AKS cluster. If you change the static nginx-ingress NodePort Service port number, you will have to match it here as well.
- least_time last_byte;
-
- # from nginx-ingress NodePort Service / aks Node names
- # Note: change servers to match
- #
- server aks-nodepool1-19485366-vmss00000h:32080; #aks node1:
- server aks-nodepool1-19485366-vmss00000i:32080; #aks node2:
- server aks-nodepool1-19485366-vmss00000j:32080; #aks node3:
+ *Currently, there is no auto-magic way to synchronize the Nginx for Azure upstream list with the AKS node list, but don't worry - Nginx Devs are working on that!*
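+
+    Until then, a hedged helper like the one below can generate the `server` lines for you from the live node list (it assumes your kubectl context is already set to the matching cluster):
+
+    ```bash
+    # Print one upstream "server" line per node, using the static 32080 NodePort.
+    kubectl get nodes -o jsonpath='{range .items[*]}server {.metadata.name}:32080;{"\n"}{end}'
+    ```
+
+    Paste the output into the upstream block whenever the Nodepool changes.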
- keepalive 32;
+Configure the Upstream for AKS Cluster2.
-}
+1. Repeat the previous Step, using the Nginx4Azure configuration tool, create a new file called `/etc/nginx/conf.d/aks2-upstreams.conf`. Copy and Paste the contents of the provided file. You will have to EDIT this example config file and change the `server` entries to match your AKS Cluster2 Nodepool node names. You can find your AKS2 nodepool nodenames from `kubectl get nodes` or the Azure Portal. Make sure you use `:32080` for the port number; this is the static `nginx-ingress NodePort Service` for HTTP traffic that was defined earlier.
-```
+1. Using kubectl, get the Nodepool nodes for AKS Cluster2: (You can also find these in your Azure Portal - AKS Nodepool definitions.)
-Note, there are 3 upstreams, matching the 3 Worker Nodes in AKS2 cluster.
+ ```bash
+ kubectl config use-context n4a-aks2
+ kubectl get nodes
+ ```
-Submit your Changes. If you have the Server name:port correct, Nginx4Azure will validate and return a Success message.
+ ```bash
+ ##Sample Output##
+ aks-nodepool1-19485366-vmss000003 #aks2 node1
+ aks-nodepool1-19485366-vmss000004 #aks2 node2
+ aks-nodepool1-19485366-vmss000005 #aks2 node3
+ aks-nodepool1-19485366-vmss000006 #aks2 node4
+ ```
-**Warning:** If you stop and start your AKS cluster, or add/remove Nodes in the Pools, this Upstream list `WILL` have to be updated to match. It is a static configuration that must match the Worker Nodes:NodePort definition in your AKS cluster. If you change the static nginx-ingress NodePort Service, you will have to match it here as well. Unfortunately, there are no auto-magic way to synchronize AKS/NodePorts with N4A Upstreams... yet :-)
+ ```nginx
+ # Nginx 4 Azure to NIC, AKS Node for Upstreams
+ # Chris Akker, Shouvik Dutta, Adam Currier, Steve Wagner - Mar 2024
+ #
+ # AKS2 nginx ingress upstreams
+ #
+ upstream aks2_ingress {
+ zone aks2_ingress 256k;
-### Test Nginx 4 Azure to AKS1 Cluster Ingress Controller
+ least_time last_byte;
+
+ # from nginx-ingress NodePort Service / aks Node names
+ # Note: change servers to match
+ #
+ server aks-nodepool1-19485366-vmss000003:32080; #aks2 node1
+ server aks-nodepool1-19485366-vmss000004:32080; #aks2 node2
+ server aks-nodepool1-19485366-vmss000005:32080; #aks2 node3
+ server aks-nodepool1-19485366-vmss000006:32080; #aks2 node4
-Now that you have these new Nginx Upstream blocks created, you can test them.
+ keepalive 32;
-Inspect, then modify the `# comments for proxy_pass` in the `location /` block in the `/etc/nginx/conf.d/cafe.example.com.conf` file, to disable the proxy_pass to `cafe-nginx`, and enable the proxy_pass to `aks1_ingress`, as shown:
+ }
+ ```
-```nginx
-...
+Note that there are 4 upstreams, matching the 4 Nodepool nodes in the AKS2 cluster.
- location / {
- #
- # return 200 "You have reached cafe.example.com, location /\n";
-
- # proxy_pass http://cafe_nginx; # Proxy AND load balance to a list of servers
- # proxy_pass http://vm1:32779; # Proxy to another server
- # proxy_pass http://nginx.org; # Proxy to another website
-
- proxy_pass http://aks1_ingress; # Proxy to AKS1 Nginx Ingress Controller NodePort
- add_header X-Proxy-Pass aks1_ingress; # Custom Header
+Submit your Nginx Configuration. If you have the Server name:port correct, Nginx4Azure will validate and return a Success message.
+
+**Warning:** The same warning applies to the Upstream configuration for AKS2: *if you make any Nodepool changes, Nginx must be updated to match those changes.*
+
+### Update Nginx Config and add HTTP/1.1 Keepalive
+
+In order for Nginx 4 Azure and Nginx Ingress to work correctly, the HTTP Host header, and perhaps other headers, must be passed to the upstreams. This is done by changing the HTTP version to 1.1, so that the Host header can be included, which also allows keepalive connections to the upstreams.
+
+1. Inspect the `lab5/keepalive.conf` file. This is where the HTTP protocol version and Headers are set for proxied traffic. This is a common requirement, so it is kept in a shared include file used by all the different Nginx configurations.
+
+ Using the Nginx for Azure console, create a new file, `/etc/nginx/includes/keepalive.conf`. Use the example provided, just copy/paste.
+
+ ```nginx
+ # Nginx 4 Azure - Mar 2024
+ # Chris Akker, Shouvik Dutta, Adam Currier, Steve Wagner - Mar 2024
+ #
+ # Default is HTTP/1.0 to upstreams, keepalives is only enabled for HTTP/1.1
+ proxy_http_version 1.1;
+
+ # Set the Connection header to empty
+ proxy_set_header Connection "";
+
+ # Host request header field, or the server name matching a request
+ proxy_set_header Host $host;
+ ```
+
+ Submit your Nginx Configuration.
+
+1. Inspect the `lab5/nginx.conf` file. Uncomment the `include` directive near the bottom, as shown:
+
+ ```nginx
+ # Nginx 4 Azure - Default - Updated Nginx.conf
+ # Chris Akker, Shouvik Dutta, Adam Currier, Steve Wagner - Mar 2024
+ #
+ user nginx;
+ worker_processes auto;
+ worker_rlimit_nofile 8192;
+ pid /run/nginx/nginx.pid;
+
+ events {
+ worker_connections 4000;
+ }
+
+ error_log /var/log/nginx/error.log error;
+
+ http {
+ log_format main '$remote_addr - $remote_user [$time_local] "$request" '
+ '$status $body_bytes_sent "$http_referer" '
+ '"$http_user_agent" "$http_x_forwarded_for"';
+
+ access_log off;
+ server_tokens "";
+ server {
+ listen 80 default_server;
+ server_name localhost;
+ location / {
+ # Points to a directory with a basic html index file with
+ # a "Welcome to NGINX as a Service for Azure!" page
+ root /var/www;
+ index index.html;
+ }
+ }
+
+ include /etc/nginx/conf.d/*.conf;
+ include /etc/nginx/includes/*.conf; # shared files
+
+ }
+
+ # stream {
- # proxy_pass http://aks2_ingress; # Proxy to AKS2 Nginx Ingress Controller NodePort
- # proxy_pass http://aks1_nic_direct; # Proxy to AKS Nginx Ingress Controller Direct
- # proxy_pass http://$upstream; # Use Split Clients config
+ # include /etc/nginx/stream/*.conf; # Stream TCP nginx files
+
+ # }
+ ```
+
+Submit your Nginx Configuration.
+
+### Test Nginx 4 Azure to AKS1 Cluster Nginx Ingress Controller
+
+Now that you have these new Nginx Upstream blocks created, you can test them.
+
+1. Inspect, then modify the `# comments for proxy_pass` in the `location /` block in the `/etc/nginx/conf.d/cafe.example.com.conf` file, making these changes as shown.
+
+ - Disable the proxy_pass to `cafe-nginx`
+ - Enable the proxy_pass to `aks1_ingress`
+ - Change the comments for the X-Proxy-Pass Header as well.
+
+ ```nginx
+ ...
+
+ location / {
+ #
+ # return 200 "You have reached cafe.example.com, location /\n";
+ # proxy_pass http://cafe_nginx; # Proxy AND load balance to a list of servers
+ # add_header X-Proxy-Pass cafe_nginx; # Custom Header
+
+ proxy_pass http://aks1_ingress; # Proxy to AKS1 Nginx Ingress Controller NodePort
+ add_header X-Proxy-Pass aks1_ingress; # Custom Header
+
+ # proxy_pass http://aks2_ingress; # Proxy to AKS2 Nginx Ingress Controller NodePort
+ # add_header X-Proxy-Pass aks2_ingress; # Custom Header
+
+ }
+ ...
+ ```
+
+    This changes where Nginx will `proxy_pass` the requests. Nginx will now proxy and load balance requests to your AKS1 Nginx Ingress Controller, listening on port 32080 on each AKS1 Node. The X-Proxy-Pass Header will now show `aks1_ingress` instead of `cafe_nginx`.
+
+ Submit your Nginx Configuration.
+
+1. Test your change with curl. Do you see the X-Proxy-Pass Header that you added, so you know which Upstream block is being used?
+
+ ```bash
+ curl -I http://cafe.example.com/coffee
+ ```
+
+ ```bash
+ ##Sample Output##
+ HTTP/1.1 200 OK
+ Server: N4A-1.25.1-cakker
+ Date: Fri, 05 Apr 2024 20:08:24 GMT
+ Content-Type: text/html; charset=utf-8
+ Connection: keep-alive
+ Expires: Fri, 05 Apr 2024 20:08:23 GMT
+ Cache-Control: no-cache
+ X-Proxy-Pass: aks1_ingress
+ ```
+
+1. Test your change in proxy_pass with Chrome at http://cafe.example.com/coffee, hitting Refresh several times - what do you see - Docker Containers or AKS1 Pods?
+
+ Cafe Docker | Cafe AKS1
+ :-----------------:|:-----------------:
+  | 
+
+1. Verify this with `kubectl`. Set your Kubectl Config Context to n4a-aks1:
+
+ ```bash
+ kubectl config use-context n4a-aks1
+ kubectl get pods
+ ```
+
+ ```bash
+ ##Sample Output##
+ NAME READY STATUS RESTARTS AGE
+ coffee-869854dd6-bs8t6 1/1 Running 0 3d3h
+ coffee-869854dd6-wq9lp 1/1 Running 0 3d3h
+ ...
+ ```
+
+ Notice the Names of the `coffee` pods. For the coffee Pod IPs, check the `coffee-svc` Endpoints:
+
+ ```bash
+ kubectl describe svc coffee-svc
+ ```
+
+ ```bash
+ ##Sample Output##
+ Name: coffee-svc
+ Namespace: default
+    Labels:            <none>
+    Annotations:       <none>
+ Selector: app=coffee
+ Type: ClusterIP
+ IP Family Policy: SingleStack
+ IP Families: IPv4
+ IP: None
+ IPs: None
+ Port: http 80/TCP
+ TargetPort: 80/TCP
+ Endpoints: 10.244.0.10:80,10.244.0.20:80
+ Session Affinity: None
+    Events:            <none>
+ ```
+
+ You should see a list of the (2) POD IPs for the Coffee Service.
+
+    You can also see this list using the Nginx Plus Dashboard for the Ingress Controller: check the HTTP Upstreams tab as before, and you should see the Pod IPs for both the coffee-svc and tea-svc.
+
+ 
+
+1. **Loadtest Time!** While you are in the Nginx Ingress Dashboard watching the coffee upstreams, throw some load at them using the WRK HTTP load tool.
+
+ Using your local Docker Desktop, you will start and run the WRK loadtest from a container. Try this for a 1 minute loadtest:
+
+ ```bash
+ docker run --name wrk --rm williamyeh/wrk -t4 -c200 -d1m --timeout 2s http://cafe.example.com/coffee
+ ```
+
+ ```bash
+ ##Sample Output##
+ Running 1m test @ http://cafe.example.com/coffee
+ 4 threads and 200 connections
+ Thread Stats Avg Stdev Max +/- Stdev
+ Latency 97.45ms 6.79ms 401.32ms 94.86%
+ Req/Sec 515.15 23.08 0.94k 92.31%
+ 123138 requests in 1.00m, 202.21MB read
+ Requests/sec: 2048.84
+ Transfer/sec: 3.36MB
+ ```
+
+Your Nginx Ingress Dashboard should show similar stats. How many requests did you get in 1 minute? Post your AKS1 Loadtest Results in Zoom Chat!
+
+
+
+### Test Nginx 4 Azure to AKS2 Cluster Nginx Ingress Controller
+
+Repeat the last procedure to test access to the AKS2 Cluster and pods.
+
+1. Change the `# comments for proxy_pass` in the `location /` block in the `/etc/nginx/conf.d/cafe.example.com.conf` file.
+
+ - Disable the proxy_pass to aks1_ingress
+ - Enable the proxy_pass to `aks2_ingress`, as shown
+ - Change the X-Proxy_Pass Header
+
+ ```nginx
+ ...
+
+ location / {
+ #
+ # return 200 "You have reached cafe.example.com, location /\n";
+
+ # proxy_pass http://cafe_nginx; # Proxy AND load balance to a list of servers
+ # add_header X-Proxy-Pass cafe_nginx; # Custom Header
+ #proxy_pass http://aks1_ingress; # Proxy to AKS1 Nginx Ingress Controller NodePort
+ # add_header X-Proxy-Pass aks1_ingress; # Custom Header
+
+ proxy_pass http://aks2_ingress; # Proxy to AKS2 Nginx Ingress Controller NodePort
+ add_header X-Proxy-Pass aks2_ingress; # Custom Header
+
+ }
+ ...
+ ```
+
+ This again changes where Nginx will `proxy_pass` the requests. Nginx will now forward and load balance requests to your AKS2 Ingress Controller, also listening on port 32080 on each AKS2 Node.
+
+ Submit your Nginx Configuration.
+
+1. Test your change with curl. Do you see the X-Proxy-Pass Header that you added, so you know which Upstream block is being used?
+
+ ```bash
+ curl -I http://cafe.example.com/coffee
+ ```
+
+ ```bash
+ HTTP/1.1 200 OK
+ Server: N4A-1.25.1-cakker
+ Date: Fri, 05 Apr 2024 20:08:24 GMT
+ Content-Type: text/html; charset=utf-8
+ Connection: keep-alive
+ Expires: Fri, 05 Apr 2024 20:08:23 GMT
+ Cache-Control: no-cache
+ X-Proxy-Pass: aks2_ingress
+ ```
+
+1. Test your change in Upstreams with Chrome, hitting Refresh several times - what do you see?
+
+    The Server Name and IP address should now match PODS running in your AKS2 cluster! (They were AKS1 names before.) But how do you verify this? Observe again: the Server name is a Kubernetes-assigned POD name, and the Server IP address is the POD IP address, also assigned by Kubernetes.
+
+    Verify this with `kubectl`. Set your kubectl config context to n4a-aks2:
+
+ ```bash
+ kubectl config use-context n4a-aks2
+ kubectl get pods
+ ```
+
+    The Pod list should look like this:
+
+ ```bash
+ ##Sample Output##
+ coffee-869854dd6-fchxm 1/1 Running 0 3d3h
+ coffee-869854dd6-nn5d4 1/1 Running 0 3d3h
+ coffee-869854dd6-zqbbv 1/1 Running 0 3d3h
+ ...
+ ```
+
+    Notice the names of the coffee and tea pods. For the Pod IPs, check the `coffee-svc` Endpoints:
+
+ ```bash
+ kubectl describe svc coffee-svc
+ ```
+
+ ```bash
+ ##Sample Output##
+ Name: coffee-svc
+ Namespace: default
+    Labels:            <none>
+    Annotations:       <none>
+ Selector: app=coffee
+ Type: ClusterIP
+ IP Family Policy: SingleStack
+ IP Families: IPv4
+ IP: None
+ IPs: None
+ Port: http 80/TCP
+ TargetPort: 80/TCP
+ Endpoints: 172.16.4.99:80,172.16.5.101:80,172.16.5.118:80
+ Session Affinity: None
+    Events:            <none>
+ ```
+
+ You should see a list of the (3) POD IPs for the coffee Service.
+
+**TAKE NOTE:** The Pod IPs are on a completely different IP subnet from Docker and the first AKS cluster, because this second cluster was configured using the Azure CNI - did you catch that distinction? Understanding the backend IP/networking is critical to configuring your Nginx for Azure Upstreams properly.
+
+You built and used different CNIs and subnets so that you can see the differences. Nginx for Azure can work with `any` of these different backend applications and networks, as long as there is an IP Path to the Upstreams.
+
+(Yes, if you add the appropriate routing with VNet Gateways, you can use Upstreams in other Regions/Clusters/VMs.)
+
+
+
+Platform | Docker | AKS1 | AKS2
+:--------------:|:--------------:|:-----------------:|:-----------------:
+Network Type | Docker | Kubenet | Azure CNI
+POD IP Subnet | 172.18.x.y | 10.244.x.y | 172.16.20.y
+
+AKS1 Pod Network | AKS2 Pod Network
+:--------------:|:--------------:
+ | 
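+
+You can confirm the subnet difference yourself with `kubectl get pods -o wide`, which shows each Pod's IP address:
+
+```bash
+kubectl config use-context n4a-aks1
+kubectl get pods -o wide    # kubenet: Pod IPs from the 10.244.x.y overlay
+
+kubectl config use-context n4a-aks2
+kubectl get pods -o wide    # Azure CNI: Pod IPs from the VNet subnet (172.16.x.y)
+```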
+
+You can also see this list, using the Nginx Plus Dashboard for the Ingress Controller in AKS2 - check the HTTP Upstreams, you should see the Pod IPs for both the coffee-svc and tea-svc.
+
+
+
+1. **Loadtest Time!** While you are in the Nginx Ingress Dashboard watching the coffee upstreams, throw some load at them:
+
+ Using your local Docker Desktop, you will start and run the WRK loadtest from a container. Try this for a 1 minute loadtest:
+
+ ```bash
+ docker run --name wrk --rm williamyeh/wrk -t4 -c200 -d1m --timeout 2s http://cafe.example.com/coffee
+ ```
+
+ ```bash
+ ##Sample Output##
+ Running 1m test @ http://cafe.example.com/coffee
+ 4 threads and 200 connections
+ Thread Stats Avg Stdev Max +/- Stdev
+ Latency 112.42ms 38.11ms 705.15ms 81.00%
+ Req/Sec 450.43 83.36 720.00 73.49%
+ 107730 requests in 1.00m, 177.09MB read
+ Requests/sec: 1792.74
+ Transfer/sec: 2.95MB
+ ```
+
+1. Your Nginx Ingress Dashboard should show similar stats. How many requests did you get in 1 minute? Post your Results in the Zoom Chat!
+
+ 
+
+### Nginx for Azure, Upstream Configuration Recap
+
+During this lab exercise, you created and tested THREE different Upstream configurations to use with Nginx for Azure. This demonstrates how easy it is to have different platforms for your backend applications, and Nginx can easily be configured to change where it sends the Requests with `proxy_pass`. You can use Azure VMs, Docker, Containers, AKS Clusters, and/or Windows VMs for your apps. You also added a custom HTTP Header, to help you track which Upstream block is being used.
+
+
+## Nginx for Azure Split Clients for Blue/Green, A/B, Canary Testing
+
+
+
+Now you will be introduced to a very popular Nginx feature - `HTTP Split Clients`. This is an Nginx module that can split traffic to multiple Upstreams or Servers, based on a hash of the Request. This Split Clients tool is very handy for testing new versions of code, typically as part of the QA phases of a CI/CD pipeline.
+
+This concept of using `Live Traffic` to test a new version or release of an application has several names, like `Blue/Green, A/B, or Canary testing.` We will use the term Blue/Green for this exercise, and show you how to control 0-100% of your incoming requests, and route/split them to different backend Upstreams with Nginx for Azure. You will use the Nginx `http_split_clients` feature, to support these common application software Dev/Test/Pre-Prod/Prod patterns.
+
+Docker | NGINXaaS | Kubernetes
+:-----------------:|:-----------------:|:-----------------:
+ | |
+
+
+As your team is diligently working towards all applications being developed and tested in Kubernetes, you could really use a process to make the migration from old Legacy Docker VMs to Kubernetes MicroServices easier. Wouldn't it be nice if you could test and migrate Live Traffic with NO DOWNTIME? `(Service Outages and Tickets are a DRAG... ugh!)`
+
+You will start with the Nginx Cafe Demo, and your Docker VMs, as the current running Version of your application.
+
+Also using Cafe Demo, you decide that AKS Cluster1 is your Pre-Production test environment, where final QA checks of software releases are `signed-off` before being rolled out into Production.
+
+- As the software QA tests in your pipeline continue to pass, you will incrementally `increase the split ratio to AKS1`, and eventually migrate ALL 100% of your Live Traffic to the AKS1 Cluster - `with NO DOWNTIME, lost packets, connections, or user disruption.` No WAY - it can't be that EASY?
+- Just as importantly, if you do encounter any serious application bugs or even infrastructure problems, you can just as quickly `roll-back` sending 100% of the traffic to the Docker VMs. *You just might be an NGINXpert HERO.*
+
+Your first CI/CD test case is taking just 1% of your Live incoming traffic and sending it to AKS Cluster 1. There you likely have enabled debug level logging and monitoring of your containers. Now you can see how the new version of Nginx Cafe is running. (You do run these types of pre-release tests, correct?)
+
+To accomplish the Split Client functionality with Nginx, you only need 3 things:
+
+- The `split_clients` directive
+- A Map block to configure the incoming request object of interest (a cookie name, cookie value, Header, URL, etc.)
+- The destination Upstream Blocks, with percentages declared for the split ratios, assigned to a new `$upstream` variable
+  - As you want 1% of traffic for AKS1, leaving 99% for Docker, that is the configuration you will start with
+  - The other ratios are provided but commented out; you will use them as more of the QA tests pass
+
+1. Inspect the `/lab5/split-clients.conf` file. This is the Map Block you will use, configured to look at the `$request_id` Nginx variable. The $request_id is a unique identifier that Nginx assigns to every incoming request, generated from 16 random bytes and expressed in hexadecimal. You are telling Nginx to look at `every single request` when performing the Split hash algorithm. You can use any Nginx Request $variable that you choose, and combinations of $variables are supported as well. You can find more details on the http_split_clients module in the References section.
+
+1. Create a new Nginx config file for the Split Clients directive and Map Block, called `/etc/nginx/includes/split-clients.conf`. You can use the example provided, just Copy/Paste:
+
+ ```nginx
+ # Nginx 4 Azure to AKS1/2 NICs and/or UbuntuVMs for Upstreams
+ # Chris Akker, Shouvik Dutta, Adam Currier, Steve Wagner - Mar 2024
+ # HTTP Split Clients Configuration for AKS Cluster1/Cluster2 or UbuntuVM ratios
+ #
+ split_clients $request_id $upstream {
+
+ # Uncomment the percent wanted for AKS Cluster #1, #2, or UbuntuVM
+ #0.1% aks1_ingress;
+ 1.0% aks1_ingress; # Starting with 1% Live Traffic
+ #5.0% aks1_ingress;
+ #30% aks1_ingress;
+ #50% aks1_ingress;
+ #80% aks1_ingress;
+ #95% aks1_ingress;
+ #99% aks1_ingress;
+ #* aks1_ingress;
+ #* aks2_ingress;
+ #30% aks2_ingress;
+ * cafe_nginx; # Ubuntu VM containers
+ #* aks1_nic_headless; # Direct to NIC pods - headless/no nodeport
}
-...
+ ```
-```
+1. In your `/etc/nginx/conf.d/cafe.example.com.conf` file, modify the `proxy_pass` directive in your `location /` block, to use the `$upstream variable`. This tells Nginx to use the Map Block where Split Clients is configured.
-This changes where Nginx will `proxy_pass` the requests. Nginx will now forward and load balance requests to your AKS1 Ingress Controller, listening on port 32080 on each AKS1 Node.
+ ```nginx
+ ...
+ location / {
+ #
+ # return 200 "You have reached cafe.example.com, location /\n";
-Submit your change.
+ proxy_pass http://$upstream; # Use Split Clients config
-Test your change with curl. Do you see the X-Proxy-Pass Header that you added, so you know which Upstream block is being used ?
+ add_header X-Proxy-Pass $upstream; # Custom Header
+
+ #proxy_pass http://cafe_nginx; # Proxy AND load balance to Docker VM
+ #add_header X-Proxy-Pass cafe_nginx; # Custom Header
-```bash
-HTTP/1.1 200 OK
-Server: N4A-1.25.1-cakker
-Date: Fri, 05 Apr 2024 20:08:24 GMT
-Content-Type: text/html; charset=utf-8
-Connection: keep-alive
-Expires: Fri, 05 Apr 2024 20:08:23 GMT
-Cache-Control: no-cache
-X-Proxy-Pass: aks1_ingress
+ #proxy_pass http://aks1_ingress; # Proxy AND load balance to AKS1 Nginx Ingress
+ #add_header X-Proxy-Pass aks1_ingress; # Custom Header
-```
+ #proxy_pass http://aks2_ingress; # Proxy AND load balance to AKS2 Nginx Ingress
+ #add_header X-Proxy-Pass aks2_ingress; # Custom Header
-Test your change in Upstreams with Chrome, hitting Refresh several times - what do you see ?
+ }
-The Server Name and IP address should now match PODS running in your AKS1 cluster! (they were Docker names before, remember?) But how do you verify this ? Observe, the Server name is a K8s assigned POD name, and the Server IP address is the POD IP address, also assiged by K8s.
+ ...
+ ```
-Verify this with `kubectl`. Set your Kubectl Config Context to aks1:
+Submit your Nginx Configuration.
-```bash
-kubectl config use-context aks1
+1. Test with Chrome: hit Refresh several times, Inspect the page, and look at your custom Header. It should say `cafe_nginx` or `aks1_ingress`, depending on which Upstream was chosen by Split Clients.
-```
+Unfortunately, refreshing about 100 times and trying to catch the 1% sent to AKS1 will be difficult with a browser. Instead, you will use an HTTP loadtest tool called `WRK`, which runs as a local Docker container sending HTTP requests to your Nginx for Azure Cafe Demo.
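+
+If you want a rough command-line estimate of the split ratio first, you can sample the `X-Proxy-Pass` header you added earlier - a hedged example:
+
+```bash
+# Send 200 requests and count which upstream served each one.
+for i in $(seq 1 200); do
+  curl -sI http://cafe.example.com/coffee | grep -i '^X-Proxy-Pass'
+done | sort | uniq -c
+# At a 1% split, expect roughly 2 x aks1_ingress and 198 x cafe_nginx.
+```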
-Then list the Pod names:
-```bash
-kubectl get pods
-```
+1. Open a separate Terminal, and start the WRK load tool. Use the example here, but change the IP address to your Nginx for Azure Public IP:
-Notice the names of the coffee and tea pods. Check the `coffee-svc` Endpoints:
+ ```bash
+ ## Set environment variables
+ export MY_RESOURCEGROUP=c.akker-workshop
+ export MY_N4A_IP=$(az network public-ip show \
+ --resource-group $MY_RESOURCEGROUP \
+ --name n4a-publicIP \
+ --query ipAddress \
+ --output tsv)
+
+ docker run --name wrk --rm williamyeh/wrk -t4 -c200 -d20m -H 'Host: cafe.example.com' --timeout 2s http://$MY_N4A_IP/coffee
+ ```
-```bash
-kubectl describe svc coffee-svc
+This will open 200 Connections, and run for 20 minutes while we try different Split Ratios. The Host Header `cafe.example.com` is required to match your Server Block in your Nginx for Azure configuration.
-```
+1. Open your AKS1 NIC Dashboard (the one you bookmarked earlier), the HTTP Upstreams Tab, coffee upstreams. These are the Pods running the latest version of your Nginx Cafe Application. You should see about 1% of your Requests trickling into the AKS1 Ingress Controller, and it is load balancing those requests to a couple coffee Pods. In the NIC Dashboard, you should see about 20-30 Requests/sec for AKS1 coffee.
-You should see a list of the POD IPs for the Service.
+You can check your Azure Monitor, where you will find about 99% going to the cafe_nginx upstreams - the three Docker containers running on the Ubuntu VM.
-You can also see this list, using the Nginx Plus Dashboard for the Ingress Controller, check the HTTP Upstreams, you should see the Pod IPs for both the coffee-svc and tea-svc.
+1. Using the Nginx for Azure - Monitoring - Metrics menu, open Azure Monitoring. In the top middle of the graph, under Metric Namespace, Select `nginx upstream statistics`. Then select `plus.http.upstream.peers.request.count`. Then Click the `Add filter` button, Property `upstream`, `=` with Values of `cafe-nginx`, `aks1-ingress`, and `aks2-ingress` checked. Click the `Apply splitting` button, and select `upstream`. In the upper right corner, change the Graph Time to `Last 30 minutes with 1 minute display`. Now you can watch the Request count stats for these upstream blocks, including the two you enabled in the Split Clients config.
-### Test Nginx 4 Azure to AKS2 Cluster Ingress Controller
+   It will take a few minutes for Azure Logging to show these values, and it Aggregates the values to the minute. Leave this Azure Monitor graph open in a separate window; you will watch it while we change the Ratios!
-Repeat the last procedure, to test access to the AKS2 Cluster and pods.
+
+
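+If you prefer the CLI, you can pull the same metric with `az monitor metrics list`. A sketch, assuming your Nginx for Azure deployment is named `nginx4a` (adjust the resource group and name to match yours):
+
+```bash
+## Look up the Nginx for Azure resource ID, then query the metric at 1 minute grain
+N4A_ID=$(az resource show \
+  --resource-group $MY_RESOURCEGROUP \
+  --name nginx4a \
+  --resource-type Nginx.NginxPlus/nginxDeployments \
+  --query id --output tsv)
+
+az monitor metrics list \
+  --resource $N4A_ID \
+  --namespace "nginx upstream statistics" \
+  --metric "plus.http.upstream.peers.request.count" \
+  --interval PT1M \
+  --filter "upstream eq '*'" \
+  --output table
+```
+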
+*Great news* - the QA Lead has signed off on the 1% test and your code, and you are `good to go` for the next test. Turn down your logging level, because now you will try `30% Live traffic to AKS1`. You are confident and bold; *make it or break it* is your motto.
+
+1. Again modify your `/etc/nginx/includes/split-clients.conf` file, this time setting `aks1_ingress` to 30%:
+
+ ```nginx
+ # Nginx 4 Azure to AKS1/2 NICs and/or UbuntuVMs for Upstreams
+ # Chris Akker, Shouvik Dutta, Adam Currier, Steve Wagner - Mar 2024
+ # HTTP Split Clients Configuration for AKS Cluster1/Cluster2 or UbuntuVM ratios
+ #
+ split_clients $request_id $upstream {
+
+ # Uncomment the percent wanted for AKS Cluster #1, #2, or UbuntuVM
+ #0.1% aks1_ingress;
+ #1.0% aks1_ingress;
+ #5.0% aks1_ingress;
+ 30% aks1_ingress; # Next test, 30% Live Traffic
+ #50% aks1_ingress;
+ #80% aks1_ingress;
+ #95% aks1_ingress;
+ #99% aks1_ingress;
+ #* aks1_ingress;
+ #30% aks2_ingress;
+ * cafe_nginx; # Ubuntu VM containers
+ #* aks1_nic_direct; # Direct to NIC pods - headless/no nodeport
+
+ }
+ ```
-Change the `# comments for proxy_pass` in the `location /` block in the `/etc/nginx/conf.d/cafe.example.com.conf` file, to disable the proxy_pass to aks1_ingress, and enable the proxy_pass to `aks2_ingress`, as shown:
+Submit your Nginx Configuration, while watching the AKS1 NIC Dashboard. In a few seconds, traffic stats should now jump to 30%! Hang on to your debugger ...
-```nginx
-...
+Checking the Nginx for Azure Monitor Window that you left open, you should see something like this:
- location / {
- #
- # return 200 "You have reached cafe.example.com, location /\n";
-
- # proxy_pass http://cafe_nginx; # Proxy AND load balance to a list of servers
- # proxy_pass http://vm1:32779; # Proxy to another server
- # proxy_pass http://nginx.org; # Proxy to another website
- #proxy_pass http://aks1_ingress; # Proxy to AKS1 Nginx Ingress Controller NodePort
+
- proxy_pass http://aks2_ingress; # Proxy to AKS2 Nginx Ingress Controller NodePort
- add_header X-Proxy-Pass aks2_ingress; # Custom Header
+After a couple hours of 30%, all the logs are clean, the dev and test tools are happy, there are NO support tickets, and all is looking peachy.
+
+1. Next up is the 50% test. You know what to do. Modify your `split-clients.conf` file, setting AKS1 Ingress to `50% Live Traffic`. Watch the NIC Dashboard and your Monitoring tools closely.
+
+ ```nginx
+ # Nginx 4 Azure to AKS1/2 NICs and/or UbuntuVMs for Upstreams
+ # Chris Akker, Shouvik Dutta, Adam Currier, Steve Wagner - Mar 2024
+ # HTTP Split Clients Configuration for AKS Cluster1/Cluster2 or UbuntuVM ratios
+ #
+ split_clients $request_id $upstream {
+
+ # Uncomment the percent wanted for AKS Cluster #1, #2, or UbuntuVM
+ #0.1% aks1_ingress;
+ #1.0% aks1_ingress;
+ #5.0% aks1_ingress;
+ #30% aks1_ingress;
+ 50% aks1_ingress; # Next test, 50% Live Traffic
+ #80% aks1_ingress;
+ #95% aks1_ingress;
+ #99% aks1_ingress;
+ #* aks1_ingress;
+ #* aks2_ingress;
+ #30% aks2_ingress;
+ * cafe_nginx; # Ubuntu VM containers
+ #* aks1_nic_headless; # Direct to NIC pods - headless/no nodeport
+
+ }
+ ```
+
+Submit your 50% Split configuration and cross your fingers. HERO or ZERO, what will it be today? If the WRK load test has stopped, start it again.
+
+Looking pretty good, traffic is even, no logging errors or tickets, no whining and complaining and texting from Mgmt. Nginx is making you a Rock Star!
+
+
+50% Split
+
+>Go for it! - Increase to 99%, or 100% (but not on a Friday!).
+
+
+99% Split
+
+>Now that you get the concept and the configuration steps, you can see how EASY it is with Nginx Split Clients to route traffic to different backend applications, including different versions of apps - it's as easy as creating a new Upstream block, and determining the Split Ratio. Consider this not so subtle point - *you did not have to create ONE ticket, change a single DNS record and WAIT, change any firewall rules, update cloud XYZ devices - nothing!* All you did was tell Nginx to Split existing Live traffic, accelerating your app development velocity into OverDrive.
+
+>>The Director of Development has heard about your success with Nginx for Azure Split Clients, and now also wants a small percentage of Live Traffic for the next App version, but this Alpha version is `running in AKS Cluster2`. Oh NO!! - Success usually does mean more work. But lucky for you, Split clients can work with many Upstreams. So after several beers, microwaved pizza and intense discussions, your QA team decides on the following Splits:
+
+- AKS2 will get 1% traffic - for the Dev Director's request
+- AKS1 will get 80% traffic - for the new version
+- Docker VM will get 19% traffic - for the legacy/current version
+
+
+1. Once again, modify the `split-clients.conf` file with the percentages needed. Open your Dashboards and Monitoring so that you can watch in real time. You tell the Director: here it comes:
+
+ ```nginx
+ # Nginx 4 Azure to AKS1/2 NICs and/or UbuntuVMs for Upstreams
+ # Chris Akker, Shouvik Dutta, Adam Currier, Steve Wagner - Mar 2024
+ # HTTP Split Clients Configuration for AKS Cluster1/Cluster2 or UbuntuVM ratios
+ #
+ split_clients $request_id $upstream {
+
+ # Uncomment the percent wanted for AKS Cluster #1, #2, or UbuntuVM
+ #0.1% aks1_ingress;
+ 1.0% aks2_ingress; # For the Dev Director
+ #5.0% aks1_ingress;
+ #30% aks1_ingress;
+ #50% aks1_ingress;
+ 80% aks1_ingress; # For new version
+ #95% aks1_ingress;
+ #99% aks1_ingress;
+ #* aks1_ingress;
+ #* aks2_ingress;
+ #30% aks2_ingress;
+ * cafe_nginx; # Ubuntu VM containers
+ #* aks1_nic_headless; # Direct to NIC pods - headless/no nodeport
+
+ }
+ ```
+
+Submit your Nginx Configuration.
+
+Check your Nginx Ingress Dashboards: do you see traffic on both, making use of the (5) coffee upstream pods? What about Azure Monitor ...
+
+
+
+Cafe Nginx - Split 3 ways: DockerVM, AKS1, AKS2
+
+**TADA!!** You are now splitting Live traffic to THREE separate backend platforms, simulating multiple versions of Cafe Nginx / your application code. (To be fair, in this lab exercise we used the same Cafe Demo image, but you get the idea.) Just as quickly and easily, you can fire up another Upstream target and add it to the Splits configuration.
+
+**NOTE:** Several words of caution with Split Clients.
+
+- The ratios must not total more than 100%, or Nginx will not apply the configuration.
+- 0.01% is the smallest split ratio available; that equals 1/10,000th of requests.
+- The `*` asterisk means either 100%, or the remainder after the other ratios.
+- If all the servers in an Upstream block are DOWN, that ratio of requests will receive 502 errors, so always test your Upstreams before adding them to a Split configuration; there is no elegant way to retry when using Splits. Changing Splits under HIGH load is not recommended, as there is always a chance something could go wrong and you will drop clients/traffic. A maintenance window for changes is always a Best Practice.
+- Split Clients is also available for TCP traffic, like your Redis Cluster. It splits traffic based on new incoming TCP connections. Ever heard of Active/Active Redis Clusters? Yes, you can do that and control the ratios, just like shown here for HTTP traffic.
+
+>*You HIT a nasty bug! The Director of Dev says the new code can't handle even that 1% load, and several other backend systems have crashed!* Not quite as ready for testing as his devs told him...
+
+No worries, you comment out the `aks2_ingress` in the Split Config, and his 1% Live traffic is now going somewhere safe, as soon as you Submit your Nginx Configuration!
+
+But don't be surprised - in a few days he will ask again to send traffic to AKS2, and you can begin the Split migration process, this time from AKS1 to AKS2.
+
+
+>>Now you've reached the Ultimate Kubernetes Application Solution: `Multi Cluster Load Balancing, Active/Active, with Dynamic Split Ratios`. No one else can do this for your app team this easily; it's just Nginx!
+
+>Cherry on top - not only can you do Split Clients `outside` the Cluster with Nginx for Azure, but Nginx Ingress Controller can also do Split Clients `inside` the cluster, setting ratios between different Services. You can find that example in `Lab10 of the Nginx Plus Ingress Workshop` :-)
+
+### Nginx HTTP Split Client Solutions Overview
+
+The Nginx HTTP Split Clients module can provide multiple traffic management Solutions. Consider some of these that might be applicable to your Kubernetes environments:
+
+- Multi Cluster Active/Active Load Balancing
+- Horizontal Cluster Scaling
+- HTTP Split Clients - for A/B, Blue/Green, and Canary test and production traffic steering. Allows Cluster operations/maintenance like:
+  - Node upgrades / additions
+  - Software upgrades/security patches
+  - Cluster resource expansions - memory, compute, storage, network, nodes
+  - QA and Troubleshooting, using Live Traffic if needed
+  - ^^ With NO downtime or reloads
+- API Gateway testing/upgrades/migrations
+
+## Configure Nginx for Azure for Redis traffic
+
+
+NGINXaaS | Redis
+:-------:|:------:
+
+
+In this exercise, you will use Nginx for Azure to expose the `Redis Leader Service` running in AKS Cluster #2. As Redis communicates over TCP instead of HTTP, the Nginx Stream Context will be used. Following Nginx Best Practices, and standard Nginx folders/files layout, the `TCP Stream context` configuration files will be created in a new folder, called `/etc/nginx/stream/`.
+
+1. Using the Nginx for Azure Console, modify the `nginx.conf` file to enable the Stream Context and include the appropriate config files. Place this stanza at the bottom of your nginx.conf file:
+
+ ```nginx
+ ...
+ stream {
- # proxy_pass http://aks1_nic_direct; # Proxy to AKS Nginx Ingress Controller Direct
- # proxy_pass http://$upstream; # Use Split Clients config
+ include /etc/nginx/stream/*.conf; # Stream Context nginx files
+ }
+ ```
+
+ Submit your Nginx Configuration.
+
+1. Using the Nginx for Azure Console, create a new Nginx config file called `/etc/nginx/stream/redis-leader-upstreams.conf`. Use your AKS2 Nodepool names for server names, and add `:32379` for your port number, matching the Static NodePort for Redis Leader. Use the example provided; just change the server names:
+
+ ```nginx
+ # Nginx 4 Azure to NIC, AKS Node for Upstreams
+ # Chris Akker, Shouvik Dutta, Adam Currier, Steve Wagner - Mar 2024
+ #
+ # nginx ingress upstreams for Redis Leader
+ #
+ upstream aks2_redis_leader {
+ zone aks2_redis_leader 256k;
+ least_time last_byte;
+
+ # from nginx-ingress NodePort Service / aks Node IPs
+ server aks-nodepool1-19485366-vmss000003:32379; #aks2 node1:
+ server aks-nodepool1-19485366-vmss000004:32379; #aks2 node2:
+ server aks-nodepool1-19485366-vmss000005:32379; #aks2 node3:
+ server aks-nodepool1-19485366-vmss000006:32379; #aks2 node4:
}
-...
+ ```
-```
+ Submit your Nginx Configuration.
-This again changes where Nginx will `proxy_pass` the requests. Nginx will now forward and load balance requests to your AKS2 Ingress Controller, also listening on port 32080 on each AKS2 Node.
+1. Using the Nginx for Azure Console, create a second Nginx config file called `/etc/nginx/stream/redis.example.com.conf`. This will create the Server block for Nginx for Azure, using the Stream TCP Context. Just copy/paste the example provided:
-Submit your change.
+ ```nginx
+ # Nginx 4 Azure to NIC, AKS Node for Upstreams
+ # Stream for Redis Leader
+ # Chris Akker, Shouvik Dutta, Adam Currier, Steve Wagner - Mar 2024
+ #
+ server {
+
+ listen 6379; # Standard Redis Port
+ status_zone aks2-redis-leader;
-Test your change with curl. Do you see the X-Proxy-Pass Header that you added, so you know which Upstream block is being used ?
+ proxy_pass aks2_redis_leader;
+ }
+ ```
+
+ Submit your Nginx Configuration.
+
+1. Update your Nginx for Azure NSG `n4a-nsg` to allow port 6379 inbound, so you can connect to the Redis Leader:
+
+   Set your environment variables as appropriate for your Resource Group.
+
+ ```bash
+ ## Set environment variables
+ export MY_RESOURCEGROUP=c.akker-workshop
+ export MY_PUBLICIP=$(curl ipinfo.io/ip)
+ ```
+
+ Use Azure CLI to add a new Rule for Redis.
+
+ ```bash
+ ## Rule for Redis traffic
+
+ az network nsg rule create \
+ --resource-group $MY_RESOURCEGROUP \
+ --nsg-name n4a-nsg \
+ --name Redis \
+ --priority 400 \
+ --source-address-prefix $MY_PUBLICIP \
+ --source-port-range '*' \
+ --destination-address-prefix '*' \
+ --destination-port-range 6379 \
+ --direction Inbound \
+ --access Allow \
+ --protocol Tcp \
+ --description "Allow Redis traffic"
+ ```
+
+ ```bash
+ ##Sample Output##
+ {
+ "access": "Allow",
+ "description": "Allow Redis traffic",
+ "destinationAddressPrefix": "*",
+ "destinationAddressPrefixes": [],
+ "destinationPortRange": "6379",
+ "destinationPortRanges": [],
+ "direction": "Inbound",
+ "etag": "W/\"19a674d2-2cc0-481e-b642-f7db545e9f07\"",
+ "id": "/subscriptions//resourceGroups/c.akker-workshop/providers/Microsoft.Network/networkSecurityGroups/n4a-nsg/securityRules/Redis",
+ "name": "Redis",
+ "priority": 400,
+ "protocol": "Tcp",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "c.akker-workshop",
+ "sourceAddressPrefix": "209.166.XX.XX",
+ "sourceAddressPrefixes": [],
+ "sourcePortRange": "*",
+ "sourcePortRanges": [],
+ "type": "Microsoft.Network/networkSecurityGroups/securityRules"
+ }
+
+ ```
+
+### Update local DNS
+
+As you are using FQDN hostnames for the labs, you will need to update your local computer's `/etc/hosts` file to use these names with Nginx for Azure.
+
+Edit your local hosts file, adding the `redis.example.com` FQDN as shown below. Use the `External-IP` Address of your Nginx for Azure instance:
```bash
-HTTP/1.1 200 OK
-Server: N4A-1.25.1-cakker
-Date: Fri, 05 Apr 2024 20:08:24 GMT
-Content-Type: text/html; charset=utf-8
-Connection: keep-alive
-Expires: Fri, 05 Apr 2024 20:08:23 GMT
-Cache-Control: no-cache
-X-Proxy-Pass: aks2_ingress
+cat /etc/hosts
+
+# Added for N4A Workshop
+13.86.100.10 cafe.example.com dashboard.example.com redis.example.com
```
-Test your change in Upstreams with Chrome, hitting Refresh several times - what do you see ?
+>**Note:** All hostnames are mapped to the same N4A External-IP. Your N4A External-IP address will be different from the example.
-The Server Name and IP address should now match PODS running in your AKS2 cluster! (they were AKS1 names before) But how do you verify this ? Observe again, the Server name is a K8s assigned POD name, and the Server IP address is the POD IP address, also assiged by K8s.
-Verify this with `kubectl`. Set your Kubectl Config Context to aks2:
+## Test Access to the Redis Leader with Redis Tools
+
+If you need to install Redis and its tools locally, you can follow the instructions on the official site: https://redis.io/docs/latest/operate/oss_and_stack/install/install-redis/
+
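+For example, on common platforms:
+
+```bash
+# macOS (Homebrew)
+brew install redis
+
+# Ubuntu / Debian - the redis-tools package provides redis-cli and redis-benchmark
+sudo apt-get update && sudo apt-get install -y redis-tools
+```
+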
+1. Using the `Redis-cli` tool, see if you can connect/ping to the Redis Leader:
+
+ ```bash
+ redis-cli -h redis.example.com PING
+ ```
+
+ ```bash
+ #Response
+ PONG
+ ```
+
+ ```bash
+ redis-cli -h redis.example.com HELLO 2
+ ```
+
+ ```bash
+ #Response
+ 1) "server"
+ 2) "redis"
+ 3) "version"
+ 4) "6.0.5"
+ 5) "proto"
+ 6) (integer) 2
+ 7) "id"
+ 8) (integer) 7590
+ 9) "mode"
+ 10) "standalone"
+ 11) "role"
+ 12) "master"
+ 13) "modules"
+ 14) (empty array)
+ ```
+
+Now how cool is that? A Redis Cluster running in AKS, exposed with Nginx Ingress and NodePort, accessible over the Internet through Nginx for Azure, using a standard hostname and port to connect to.
+
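+You can also push a quick key/value through the whole path, end to end:
+
+```bash
+redis-cli -h redis.example.com SET workshop "n4a rocks"    # write to the Redis Leader via N4A
+redis-cli -h redis.example.com GET workshop                # read it back
+```
+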
+**Optional:** Run Redis-benchmark against your new Leader and see what performance you can get. Watch your Nginx Ingress Dashboard to see the traffic inside the cluster. Watch your Nginx for Azure with Azure Monitoring as well.
+
+
+
+
+```bash
+redis-benchmark -h redis.example.com -c 50 -n 10000 -q --csv
+```
```bash
-kubectl config use-context aks2
+##Sample Output##
+"test","rps","avg_latency_ms","min_latency_ms","p50_latency_ms","p95_latency_ms","p99_latency_ms","max_latency_ms"
+"PING_INLINE","1882.53","26.406","20.720","24.847","29.263","81.599","268.287"
+"PING_MBULK","1875.47","26.478","20.880","24.799","29.359","91.647","176.255"
+"SET","1871.26","26.571","20.368","24.911","29.391","84.607","274.175"
+"GET","1948.94","25.487","20.912","24.911","29.119","42.175","76.543"
+"INCR","1895.38","26.223","20.976","24.991","29.535","68.671","264.703"
+"LPUSH","1943.26","25.578","20.384","24.847","28.399","39.551","106.815"
+"RPUSH","1870.56","26.538","20.976","24.911","29.999","85.631","268.799"
+"LPOP","1926.41","25.761","20.928","24.879","29.183","52.799","173.823"
+"RPOP","1949.70","25.480","20.576","24.911","28.703","41.567","232.063"
+"SADD","1893.58","26.237","20.912","24.943","29.343","76.031","120.063"
+"HSET","1841.96","26.989","20.944","25.023","30.847","91.135","307.455"
+"SPOP","1891.79","26.277","20.928","24.767","28.335","77.887","268.799"
+"ZADD","1870.21","26.547","20.912","24.927","28.623","96.127","280.063"
+"ZPOPMIN","1855.98","26.485","20.704","24.847","28.607","83.007","269.311"
+"LPUSH (needed to benchmark LRANGE)","1997.20","24.886","21.008","24.751","27.423","29.983","45.567"
+"LRANGE_100 (first 100 elements)","1776.83","27.972","21.232","25.311","31.055","107.967","526.847"
+"LRANGE_300 (first 300 elements)","1160.77","42.292","21.616","35.455","98.367","118.527","291.071"
+"LRANGE_500 (first 500 elements)","738.50","65.716","23.792","59.391","109.759","119.999","342.015"
+"LRANGE_600 (first 600 elements)","603.86","76.472","25.344","70.399","121.087","125.247","446.975"
+"MSET (10 keys)","1827.49","27.192","22.352","26.863","30.415","34.399","179.839"
+"XADD","1972.39","25.176","21.152","24.975","27.887","30.655","43.071"
```
-Then list the Pods:
+Some screenshots for you:
+
+
+
+You will likely find that Redis performance is diminished by the round-trip latency of your Internet and Cloud network path; Redis performance/latency is directly related to network performance. However, being able to run a Redis cluster in any Kubernetes cluster you like, and access it from anywhere in the world, could be a valuable solution for you.
+
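+You can measure that round-trip latency directly with the built-in latency mode of `redis-cli`:
+
+```bash
+# Samples PING round-trips continuously; press Ctrl-C to stop
+redis-cli -h redis.example.com --latency
+```
+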
+>**Security Warning!** There is no Redis Authentication, or other protections in this Redis configuration, just your Azure NSG IP/port filters. Do NOT use this configuration for Production workloads. The example provided in the Workshop is to show that running Redis is easy, and Nginx makes it easy to access. Take appropriate measures to secure Redis data as needed.
+
+*NOTE:* You only exposed the `Redis Leader` Service with Nginx for Azure. As an Optional Exercise, you can also expose the `Redis Follower` Service with Nginx for Azure. Create a new Upstream block, and then update the `redis.example.com.conf` to add a listener on the Follower port and proxy_pass to the Followers in AKS2. *Redis is not running in AKS1, only AKS2 (unless you want to add it).*
+
+Nginx Split Clients is also available for the TCP Stream context. You can run Multiple Redis Clusters, and use Split Clients to control the Ratio of traffic between them, just like you did earlier for HTTP requests. Ever thought of `Active/Active Redis Clusters`, with Dynamic Split Ratios ... Nginx can do that!
+
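+A minimal sketch of what that could look like, assuming a second, hypothetical Redis upstream named `aks1_redis_leader` (you would have to build it first):
+
+```nginx
+# Hypothetical TCP Split Clients - lives in the Stream context
+split_clients $remote_addr $redis_upstream {
+    30%     aks1_redis_leader;    # hypothetical second Redis cluster
+    *       aks2_redis_leader;    # the Redis Leader upstream you built above
+}
+
+server {
+    listen 6380;                  # separate port, so it does not clash with the existing listener
+    status_zone redis-split;
+    proxy_pass $redis_upstream;   # new TCP connections follow the split ratio
+}
+```
+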
+## Optional - Nginx for Azure / Load Balancing the Nginx Ingress Headless Service
+
+This is an Advanced 400 Level Lab Exercise: you will configure a Headless Kubernetes Service, and configure Nginx for Azure to load balance requests directly to the Nginx Ingress Controller(s) running in AKS2, leveraging the Azure CNI / Calico. This architecture will `bypass NodePort` on the Kubernetes Nodes, allowing `Nginx 4 Azure to connect to Nginx Ingress Pod(s) directly on the same Subnet, n4a-aks2`. You will use the `Nginx Plus Resolver` to dynamically create the Upstream list, by querying Kube-DNS for the Pod IPs.
+
+>**NOTE:** This exercise requires detailed understanding and expertise with Kubernetes networking/CNI, Kube-DNS, Nginx Ingress, and the Nginx Plus Resolver. The Nginx Plus DNS Resolver is *NOT* the same as a Linux OS DNS client, it is separate and built into Nginx Plus only. You will configure this Nginx Resolver to query Kube-DNS for the A records, Pod IP Addresses, of the Nginx Ingress Pods. These Pod IPs are `dynamically added to the Upstream block` by Nginx Plus.
+
+
+
+In order to configure this correctly, you will need the following items:
+
+- New Kubernetes Service, for nginx-ingress-headless
+- Kube-DNS Pod IP Addresses
+- New Nginx for Azure upstream block
+- Change the `proxy_pass` to use the new upstream block
+
+1. Inspect the `lab5/nginx-ingress-headless.yaml` manifest. You are creating another Service that represents the Nginx Plus Ingress Controller Pod(s).
+
+- Notice the NodePort is commented out, so you can see that it is not being used.
+- Notice the ClusterIP is set to None.
+- The service name is also different, it's called `nginx-ingress-headless`.
+- This is in addition to the existing NodePort Service you created earlier.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: nginx-ingress-headless
+ namespace: nginx-ingress
+spec:
+ type: ClusterIP
+ clusterIP: None
+ ports:
+ - port: 80
+ targetPort: 80
+ #nodePort: 32080
+ protocol: TCP
+ name: http
+ - port: 443
+ targetPort: 443
+ #nodePort: 32443
+ protocol: TCP
+ name: https
+ selector:
+ app: nginx-ingress
+```
+
+1. Create the `nginx-ingress-headless` Service in AKS2, using the manifest provided.
```bash
-kubectl get pods
+kubectl config use-context n4a-aks2
+kubectl apply -f lab5/nginx-ingress-headless.yaml
```
-Notice the names of the coffee and tea pods. Check the `coffee-svc` Endpoints:
+Check it out:
```bash
-kubectl describe svc coffee-svc
+kubectl get svc -n nginx-ingress
+```
+```bash
+##Sample Output##
+NAME                     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                                                                   AGE
+dashboard-svc            ClusterIP   10.0.58.119   <none>        9000/TCP                                                                  24d
+nginx-ingress            NodePort    10.0.169.30   <none>        80:32080/TCP,443:32443/TCP,6379:32379/TCP,6380:32380/TCP,9000:32090/TCP   24d
+nginx-ingress-headless   ClusterIP   None          <none>        80/TCP,443/TCP
```
-You should see a list of the POD IPs for the Service.
+1. Verify the Headless Service points to the actual IP Address for the Nginx Ingress Controller:
+
+ ```bash
+ kubectl describe svc nginx-ingress-headless -n nginx-ingress
+ ```
+
+ ```bash
+ ##Sample Output##
+ Name: nginx-ingress-headless
+ Namespace: nginx-ingress
+   Labels:            <none>
+   Annotations:       <none>
+ Selector: app=nginx-ingress
+ Type: ClusterIP
+ IP Family Policy: SingleStack
+ IP Families: IPv4
+ IP: None
+ IPs: None
+ Port: http 80/TCP
+ TargetPort: 80/TCP
+ Endpoints: 172.16.4.240:80
+ Port: https 443/TCP
+ TargetPort: 443/TCP
+ Endpoints: 172.16.4.240:443
+ Session Affinity: None
+   Events:            <none>
+ ```
+
+1. Take NOTE of the Endpoint IP Address, `172.16.4.240` in this example. It should be the same as the IP Address of the NIC Pod; check it out:
+
+ ```bash
+ kubectl describe pod $NIC -n nginx-ingress |grep IP
+ ```
+
+ ```bash
+ ##Sample Output##
+ IP: 172.16.4.240
+ IPs:
+ IP: 172.16.4.240
+ ```
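+
+   If `$NIC` is no longer set in your shell from the earlier labs, you can recapture the Pod name first. A sketch, assuming a single Nginx Ingress Pod:
+
+   ```bash
+   export NIC=$(kubectl get pods -n nginx-ingress -o jsonpath='{.items[0].metadata.name}')
+   ```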
+
+Yes, they both match, so your Service definition and Headless manifests are configured correctly.
+
+1. Next you will need the Pod IP Addresses of the Kube-DNS Servers running in AKS2 (*not* the Service/Cluster IP Address!). These IPs will be used by the Nginx Resolver for DNS queries. These are, after all, the primary/secondary DNS Servers running in your cluster!
+
+ ```bash
+ kubectl describe svc kube-dns -n kube-system
+ ```
+
+ ```bash
+ ##Sample Output##
+ Name: kube-dns
+ Namespace: kube-system
+ Labels: addonmanager.kubernetes.io/mode=Reconcile
+ k8s-app=kube-dns
+ kubernetes.io/cluster-service=true
+ kubernetes.io/name=CoreDNS
+   Annotations:       <none>
+ Selector: k8s-app=kube-dns
+ Type: ClusterIP
+ IP Family Policy: SingleStack
+ IP Families: IPv4
+ IP: 10.0.0.10
+ IPs: 10.0.0.10
+ Port: dns 53/UDP
+ TargetPort: 53/UDP
+ Endpoints: 172.16.4.115:53,172.16.4.178:53 # Use these IPs for Nginx Resolver
+ Port: dns-tcp 53/TCP
+ TargetPort: 53/TCP
+ Endpoints: 172.16.4.115:53,172.16.4.178:53
+ Session Affinity: None
+   Events:            <none>
+ ```
+
+You will use these two IP Addresses from the DNS Service Endpoints in your Nginx for Azure configuration: `172.16.4.115` and `172.16.4.178` in this example.
+
+1. Inspect the `lab5/aks2-nic-headless.conf` file.
+
+- Notice the Nginx `Resolver` directive configured with the 2 Kube-DNS Endpoint IPs.
+- The `valid=10s` parameter tells Nginx to re-query every 10 seconds, in case there are changes, like scaling up/down or re-starting.
+- The `ipv6=off` parameter disables IPv6 lookups.
+- The `status_zone=kube-dns` parameter collects the metrics for Nginx Resolver's queries, successes and failures, which can be seen in Azure Monitoring.
+- Notice the server `resolve` directive is added, to query `kube-dns` for the IP Address(es) of the Nginx Ingress Controller's Pod IP(s).
+- If more than one Nginx Ingress Controller is running, a list of IPs will be returned, and Nginx 4 Azure will load balance all of them. You can see the list of Nginx Ingress Pod IPs in the Azure Monitor, in the `aks2_nic_headless` Upstream.
+
+Now that the Nginx Headless Service has been configured, and you have the Kube-DNS Pod IP Addresses, you can configure Nginx for Azure.
+
+1. Using the Nginx 4 Azure Console, create a new Nginx config file, `/etc/nginx/conf.d/aks2-nic-headless.conf`. Copy/paste using the example file provided. Just change the IP Addresses to your Kube-DNS IPs.
+
+ ```nginx
+ # Nginx 4 Azure direct to NIC for Upstreams
+ # Chris Akker, Shouvik Dutta, Adam Currier, Steve Wagner - Mar 2024
+ #
+ # direct to nginx ingress Headless Service ( no NodePort )
+ #
+ upstream aks2_nic_headless {
+ zone aks2_nic_headless 256k;
+ least_time last_byte;
+
+ # direct to nginx-ingress Headless Service Endpoint
+ # Resolver set to kube-dns IPs
+
+ resolver 172.16.4.115 172.16.4.178 valid=10s ipv6=off status_zone=kube-dns;
+
+ # Server name must follow this Kubernetes Service Name format
+   # server <service-name>.<namespace>.svc.cluster.local
+
+ server nginx-ingress-headless.nginx-ingress.svc.cluster.local:80 resolve;
+
+ keepalive 32;
+ }
+ ```
+
+Submit your Nginx Configuration.
+
+### Test Nginx for Azure to NIC Headless
+
+1. Once again, change your `proxy_pass` directive in `/etc/nginx/conf.d/cafe.example.com.conf`, to use the new `aks2_nic_headless` upstream.
+
+ ```nginx
+ ...
+ location / {
+ #
+ # return 200 "You have reached cafe.example.com, location /\n";
+
+ #proxy_pass http://cafe_nginx; # Proxy AND load balance to Docker VM
+ #add_header X-Proxy-Pass cafe_nginx; # Custom Header
+ #proxy_pass http://aks1_ingress; # Proxy AND load balance to AKS1 Nginx Ingress
+ #add_header X-Proxy-Pass aks1_ingress; # Custom Header
+ #proxy_pass http://aks2_ingress; # Proxy AND load balance to AKS2 Nginx Ingress
+   #add_header X-Proxy-Pass aks2_ingress;   # Custom Header
+ #proxy_pass http://$upstream; # Proxy AND load balance to Split Client
+ #add_header X-Proxy-Pass $upstream; # Custom Header
+ proxy_pass http://aks2_nic_headless; # Proxy to AKS2 Nginx Ingress Controllers Headless
+ add_header X-Proxy-Pass aks2_nic_headless; # Custom Header
+ }
+ ```
+
+Submit your Nginx Configuration.
+
+1. Open Chrome to http://cafe.example.com/coffee, and hit refresh several times. Inspect the page with Dev Tools; you should see the updated Header value `aks2_nic_headless`. Notice the `Ingress Controller IP` address is the same as your NIC Pod. Watch your Nginx Ingress Dashboard on AKS2; you will see traffic on all three coffee pods.
+
+*Optional:* Fire up a load test with WRK again, then modify your Upstream filter in Azure Monitor to add `aks2_nic_headless`. All the traffic should be going there.
+
+**Advanced Deep Dive Exercise:** If you `SCALE UP` the number of Nginx Ingress Pods, the Nginx Ingress Headless Service will represent all of the NIC Replicas. As the Nginx for Azure Resolver is set to re-query every 10 seconds, it should pick up this change in the Nginx Headless Endpoints list quickly. Using the A records from Kube-DNS, Nginx for Azure will update its `aks2_nic_headless` Upstream list, and load balance traffic to ALL the NIC Replicas. You can see the Upstreams List in Azure Monitoring.
+
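+One handy way to watch the Endpoints change in real time while you scale, as a sketch:
+
+```bash
+# -w streams updates as Pods are added to or removed from the headless Service
+kubectl get endpoints nginx-ingress-headless -n nginx-ingress -w
+```
+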
+Give it a try:
+
+1. Scale UP the number of Nginx Ingress Controllers running to 3:
+
+ ```bash
+ kubectl scale deployment nginx-ingress -n nginx-ingress --replicas=3
+ ```
+
+ Confirm they started:
+
+ ```bash
+ kubectl get pods -n nginx-ingress
+ ```
+
+ ```bash
+ ##Sample Output##
+ NAME READY STATUS RESTARTS AGE
+ nginx-ingress-69b95fb8ff-n8mn8 1/1 Running 0 16s
+ nginx-ingress-69b95fb8ff-ntdwz 1/1 Running 0 2d17h
+ nginx-ingress-69b95fb8ff-sgv2b 1/1 Running 0 16s
+ ```
+
+   Check the `nginx-ingress-headless` Service again; you should now see THREE Endpoints.
+
+ ```bash
+ kubectl describe svc nginx-ingress-headless -n nginx-ingress
+ ```
+
+ ```bash
+ ##Sample Output##
+ Name: nginx-ingress-headless
+ Namespace: nginx-ingress
+   Labels:            <none>
+   Annotations:       <none>
+ Selector: app=nginx-ingress
+ Type: ClusterIP
+ IP Family Policy: SingleStack
+ IP Families: IPv4
+ IP: None
+ IPs: None
+ Port: http 80/TCP
+ TargetPort: 80/TCP
+ Endpoints: 172.16.4.201:80,172.16.4.221:80,172.16.4.240:80
+ Port: https 443/TCP
+ TargetPort: 443/TCP
+ Endpoints: 172.16.4.201:443,172.16.4.221:443,172.16.4.240:443
+ Session Affinity: None
+   Events:            <none>
+ ```
+
+If you recall, 172.16.4.240 was your first Nginx Ingress Pod; now you have two more, 172.16.4.221 and .201. If you `kubectl describe pod` each one, the NIC Pod IP Addresses will match the Headless Service list; that's how Kubernetes Services work.
-You can also see this list, using the Nginx Plus Dashboard for the Ingress Controller, check the HTTP Upstreams, you should see the Pod IPs for both the coffee-svc and tea-svc.
+1. Test with Chrome. Open your browser to http://cafe.example.com/coffee, and Refresh several times. Watch the `Ingress Controller IP address`; it will cycle through the 3 NIC Pod IPs, 172.16.4.240, .221, and .201 in this example. Nginx for Azure is load balancing all three Ingress Controllers.
-### Summary
+**NOTE:** The `aks2_nic_headless` Upstream is configured for `least_time last_byte`, so Nginx for Azure will choose the fastest NIC Pod. If you want to see it in Round-Robin mode, comment out the `least_time last_byte` directive.
-During this Lab exercise, you created and tested THREE different Upstream configurations to use with Nginx. This demonstrates how easy it is to have different platforms for your backend applications, and Nginx can easily be configured to change where it sends the Requests coming in. You can use Azure VMs, Docker, Containers, or even AKS clusters for your apps. You also added a customer HTTP Header, to help you track which upstream block is being used.
+1. Scale your NICs back to just ONE Pod, and check again with Chrome. Now there is only one Nginx Ingress Controller IP being used, as when you started.
+
+**NOTE:** It is considered a Best Practice to run at least THREE Nginx Ingress Controllers for Production workloads, providing High Availability and additional traffic processing power for your Applications' Pods and Services. Nginx for Azure works nicely with your Nginx Ingress Controllers to achieve this requirement, as shown here.
+
+**Optional Exercise:** Install a DNS testing Pod in your Cluster, like busybox or Ubuntu, and use `dig` or `nslookup` to query the A records from Kube-DNS, as in the sketch below.
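+
+A sketch, using a one-shot busybox Pod (its nslookup should return one A record per NIC Pod behind the headless Service):
+
+```bash
+kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
+  nslookup nginx-ingress-headless.nginx-ingress.svc.cluster.local
+```
+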
-### << more exercises/steps>>
-
+## Wrap Up
-
+As you have seen, Nginx for Azure makes it quite easy to create various backend Systems, Services, and platforms of different types, and have Nginx load balance them through a single entry point. Advanced Nginx directives/configs with the Resolver, Nginx Ingress Controllers, Headless Services, and even Split Clients help you control and manage dev/test/pre-prod and even Production workloads with ease. Dashboards and Monitoring give you insight with over 240 useful metrics, providing the data needed for decisions based on both real-time and historical metadata about your Apps and Traffic.
**This completes Lab5.**
-
## References:
@@ -278,14 +1275,14 @@ During this Lab exercise, you created and tested THREE different Upstream config
- [NGINX Technical Specs](https://docs.nginx.com/nginx/technical-specs/)
- [NGINX - Join Community Slack](https://community.nginx.org/joinslack)
-
### Authors
- Chris Akker - Solutions Architect - Community and Alliances @ F5, Inc.
- Shouvik Dutta - Solutions Architect - Community and Alliances @ F5, Inc.
- Adam Currier - Solutions Architect - Community and Alliances @ F5, Inc.
+- Steve Wagner - Solutions Architect - Community and Alliances @ F5, Inc.
-------------
-Navigate to ([Lab6](../lab6/readme.md) | [LabX](../labX/readme.md))
+Navigate to ([Lab6](../lab6/readme.md) | [LabGuide](../readme.md))
diff --git a/labs/lab5/redis-leader-upstreams.conf b/labs/lab5/redis-leader-upstreams.conf
new file mode 100644
index 0000000..91fa8d4
--- /dev/null
+++ b/labs/lab5/redis-leader-upstreams.conf
@@ -0,0 +1,16 @@
+# Nginx 4 Azure to NIC, AKS Node for Upstreams
+# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+#
+# nginx ingress upstreams for Redis Leader
+#
+upstream aks2_redis_leader {
+ zone aks2_redis_leader 256k;
+
+ least_time last_byte;
+
+ # from nginx-ingress NodePort Service / aks Node IPs
+ server aks-nodepool1-19485366-vmss000003:32379; #aks2 node1:
+ server aks-nodepool1-19485366-vmss000004:32379; #aks2 node2:
+ server aks-nodepool1-19485366-vmss000005:32379; #aks2 node3:
+
+}
diff --git a/ca-notes/n4a-configs/stream/redis-leader.conf b/labs/lab5/redis.example.com.conf
similarity index 56%
rename from ca-notes/n4a-configs/stream/redis-leader.conf
rename to labs/lab5/redis.example.com.conf
index c0387d4..79185c0 100644
--- a/ca-notes/n4a-configs/stream/redis-leader.conf
+++ b/labs/lab5/redis.example.com.conf
@@ -1,12 +1,11 @@
# Nginx 4 Azure to NIC, AKS Node for Upstreams
+# Stream for Redis Leader
# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
#
-# nginx ingress upstreams for Redis Leader
-#
server {
- listen 6379;
- status_zone redis-leader;
+ listen 6379; # Standard Redis Port
+ status_zone aks2-redis-leader;
proxy_pass aks2_redis_leader;
diff --git a/ca-notes/n4a-configs/includes/split-clients.conf b/labs/lab5/split-clients.conf
similarity index 100%
rename from ca-notes/n4a-configs/includes/split-clients.conf
rename to labs/lab5/split-clients.conf
diff --git a/labs/lab6/media/add_certificate.png b/labs/lab6/media/add_certificate.png
deleted file mode 100644
index 1374ad7..0000000
Binary files a/labs/lab6/media/add_certificate.png and /dev/null differ
diff --git a/labs/lab6/media/add_certificate_sucess.png b/labs/lab6/media/add_certificate_sucess.png
deleted file mode 100644
index a16605b..0000000
Binary files a/labs/lab6/media/add_certificate_sucess.png and /dev/null differ
diff --git a/labs/lab6/media/docker-icon.png b/labs/lab6/media/docker-icon.png
new file mode 100644
index 0000000..02ee3f1
Binary files /dev/null and b/labs/lab6/media/docker-icon.png differ
diff --git a/labs/lab6/media/keyvault_screen.png b/labs/lab6/media/keyvault_screen.png
deleted file mode 100644
index 27572fd..0000000
Binary files a/labs/lab6/media/keyvault_screen.png and /dev/null differ
diff --git a/labs/lab6/media/lab6_cafe_access_log_update.png b/labs/lab6/media/lab6_cafe_access_log_update.png
new file mode 100644
index 0000000..dc130ee
Binary files /dev/null and b/labs/lab6/media/lab6_cafe_access_log_update.png differ
diff --git a/labs/lab6/media/lab6_cafe_query.png b/labs/lab6/media/lab6_cafe_query.png
new file mode 100644
index 0000000..b83b0b4
Binary files /dev/null and b/labs/lab6/media/lab6_cafe_query.png differ
diff --git a/labs/lab6/media/lab6_cafe_query_details.png b/labs/lab6/media/lab6_cafe_query_details.png
new file mode 100644
index 0000000..09ad979
Binary files /dev/null and b/labs/lab6/media/lab6_cafe_query_details.png differ
diff --git a/labs/lab6/media/lab6_cafe_query_save.png b/labs/lab6/media/lab6_cafe_query_save.png
new file mode 100644
index 0000000..6571476
Binary files /dev/null and b/labs/lab6/media/lab6_cafe_query_save.png differ
diff --git a/labs/lab6/media/lab6_create_dashboard.png b/labs/lab6/media/lab6_create_dashboard.png
new file mode 100644
index 0000000..9ec79ed
Binary files /dev/null and b/labs/lab6/media/lab6_create_dashboard.png differ
diff --git a/labs/lab6/media/lab6_default_chart.png b/labs/lab6/media/lab6_default_chart.png
new file mode 100644
index 0000000..9cf06bd
Binary files /dev/null and b/labs/lab6/media/lab6_default_chart.png differ
diff --git a/labs/lab6/media/lab6_default_query.png b/labs/lab6/media/lab6_default_query.png
new file mode 100644
index 0000000..590602d
Binary files /dev/null and b/labs/lab6/media/lab6_default_query.png differ
diff --git a/labs/lab6/media/lab6_main_access_log_update.png b/labs/lab6/media/lab6_main_access_log_update.png
new file mode 100644
index 0000000..7225b7d
Binary files /dev/null and b/labs/lab6/media/lab6_main_access_log_update.png differ
diff --git a/labs/lab6/media/lab6_main_ext_logformat_add.png b/labs/lab6/media/lab6_main_ext_logformat_add.png
new file mode 100644
index 0000000..97e5380
Binary files /dev/null and b/labs/lab6/media/lab6_main_ext_logformat_add.png differ
diff --git a/labs/lab6/media/lab6_nginx_conf_editor.png b/labs/lab6/media/lab6_nginx_conf_editor.png
new file mode 100644
index 0000000..616f48e
Binary files /dev/null and b/labs/lab6/media/lab6_nginx_conf_editor.png differ
diff --git a/labs/lab6/media/lab6_pin_upstream_chart.png b/labs/lab6/media/lab6_pin_upstream_chart.png
new file mode 100644
index 0000000..5d0a48c
Binary files /dev/null and b/labs/lab6/media/lab6_pin_upstream_chart.png differ
diff --git a/labs/lab6/media/lab6_server_request_chart.png b/labs/lab6/media/lab6_server_request_chart.png
new file mode 100644
index 0000000..bcfbf65
Binary files /dev/null and b/labs/lab6/media/lab6_server_request_chart.png differ
diff --git a/labs/lab6/media/lab6_show_dashboard.png b/labs/lab6/media/lab6_show_dashboard.png
new file mode 100644
index 0000000..1da2f59
Binary files /dev/null and b/labs/lab6/media/lab6_show_dashboard.png differ
diff --git a/labs/lab6/media/lab6_upstream_chart_dashboard.png b/labs/lab6/media/lab6_upstream_chart_dashboard.png
new file mode 100644
index 0000000..282d892
Binary files /dev/null and b/labs/lab6/media/lab6_upstream_chart_dashboard.png differ
diff --git a/labs/lab6/media/lab6_upstream_response_time_chart.png b/labs/lab6/media/lab6_upstream_response_time_chart.png
new file mode 100644
index 0000000..b4bf7fb
Binary files /dev/null and b/labs/lab6/media/lab6_upstream_response_time_chart.png differ
diff --git a/labs/lab6/media/nginx-azure-icon.png b/labs/lab6/media/nginx-azure-icon.png
new file mode 100644
index 0000000..70ab132
Binary files /dev/null and b/labs/lab6/media/nginx-azure-icon.png differ
diff --git a/labs/lab6/media/nginx4a_logs.png b/labs/lab6/media/nginx4a_logs.png
new file mode 100644
index 0000000..638ab58
Binary files /dev/null and b/labs/lab6/media/nginx4a_logs.png differ
diff --git a/labs/lab6/readme.md b/labs/lab6/readme.md
index 12c2132..dd30e4b 100644
--- a/labs/lab6/readme.md
+++ b/labs/lab6/readme.md
@@ -1,10 +1,8 @@
-# Azure Key Vault / TLS Essentials
+# Azure Monitoring / Log Analytics
## Introduction
-In this lab, you will create a new key-vault resource that would be storing self-signed certificates. You will then configure Nginx for Azure to listen for https traffic and then terminate TLS before proxying and load balancing back to the backend system.
-
-< Lab specific Images here, in the /media sub-folder >
+In this lab, you will explore Azure-based monitoring and logging capabilities. You will create the basic access log_format within the NGINX for Azure resource. As the basic log_format only contains a fraction of the available information, you will then extend it and create a new log_format that includes much more information, especially about the Upstream backend servers. You will add access logging to your NGINX for Azure resource and finally capture/see those logs within Azure monitoring tools.
NGINX aaS | Docker
:-------------------------:|:-------------------------:
@@ -14,196 +12,222 @@ NGINX aaS | Docker
By the end of the lab you will be able to:
-- Build your own Azure Key Vault resource.
-- Create your self-signed TLS certificate.
-- Configure NGINX for Azure to listen for HTTPS traffic
-- Test and validate TLS traffic components and settings
+- Enable basic log format within NGINX for Azure resource
+
+- Create an enhanced log format with additional logging metrics
+
+- Test access logs within log analytics workspace
+
+- Explore Azure Monitoring for NGINX for Azure
## Pre-Requisites
-- You must have your Nginx for Azure resource up and running
-- You must have `Owner` role on the resource group that includes NGINX for Azure resource
-- You must also have backend system resources up and running.
-- See `Lab0` for instructions on setting up your system for this Workshop
+- Within your NGINX for Azure resource, you must have enabled sending metrics to Azure monitor.
+
+- You must have created a `Log Analytics workspace`.
+- You must have created an Azure diagnostic settings resource that will stream the NGINX logs to the Log Analytics workspace.
+- See `Lab1` for instructions if you missed any of the above steps.
-### Create Azure Key Vault resource
+
-1. Create an Azure key vault within the same resource group which holds your NGINX for azure resource.
+### Enable basic log format
- ```bash
- ## Set environment variable
- MY_RESOURCEGROUP=s.dutta-workshop
+1. Within Azure portal, open your resource group and then open your NGINX for Azure resource (nginx4a). From the left pane click on `Settings > NGINX Configuration`. This should open the configuration editor section. Open `nginx.conf` file.
+
+ 
+
+1. You will notice that in previous labs you added the default basic log format inside the `http` block within the `nginx.conf` file, as highlighted in the above screenshot. You will make use of this log format initially to capture some useful metrics within NGINX logs.
+
+ ```nginx
+ log_format main '$remote_addr - $remote_user [$time_local] "$request" '
+ '$status $body_bytes_sent "$http_referer" '
+ '"$http_user_agent" "$http_x_forwarded_for"';
```
- Once the environment variables are all set, run below command to create the key vault resource
+1. Update the `access_log` directive to enable logging. Within this directive, you will pass the full path of the log file (eg. `/var/log/nginx/access.log`) and also the `main` log format that you created in the previous step. Click on `Submit` to apply the changes.
- ```bash
- az keyvault create \
- --resource-group $MY_RESOURCEGROUP \
- --name n4a-keyvault \
- --enable-rbac-authorization false
+ ```nginx
+ access_log /var/log/nginx/access.log main;
```
-2. The above command should provide a json output. If you look at its content then it should have a `provisioningState` key with `Succeeded` as it value. This field is an easy way to validate the command successfully provisioned the resource.
+ 
-3. Next you would provide permissions to access this keyvault to the user assigned managed identity that you created while creating NGINX for Azure resource.
-4. Copy the `PrincipalID` of the user identity into an environment variable using below command.
+1. In subsequent sections you will test out the logs inside the Log Analytics workspace.
- ```bash
- ## Set environment variable
- MY_PRINCIPALID=$(az identity show \
- --resource-group $MY_RESOURCEGROUP \
- --name n4a-useridentity \
- --query principalId \
- --output tsv)
+### Create an enhanced log format with additional logging metrics
+
+In this section you will create an extended log format which you will use with `cafe.example.com` server's access log.
+
+1. Within the NGINX for Azure resource (nginx4a), open the `Settings > NGINX Configuration` pane.
+
+1. Within the `nginx.conf` file, add a new extended log format named `main_ext` as shown in the below screenshot. Click on `Submit` to save the config file.
+
+ ```nginx
+ # Extended Log Format
+ log_format main_ext 'remote_addr="$remote_addr", '
+ '[time_local=$time_local], '
+ 'request="$request", '
+ 'status="$status", '
+ 'http_referer="$http_referer", '
+ 'body_bytes_sent="$body_bytes_sent", '
+ 'Host="$host", '
+ 'sn="$server_name", '
+ 'request_time=$request_time, '
+ 'http_user_agent="$http_user_agent", '
+ 'http_x_forwarded_for="$http_x_forwarded_for", '
+ 'request_length="$request_length", '
+ 'upstream_address="$upstream_addr", '
+ 'upstream_status="$upstream_status", '
+ 'upstream_connect_time="$upstream_connect_time", '
+ 'upstream_header_time="$upstream_header_time", '
+ 'upstream_response_time="$upstream_response_time", '
+ 'upstream_response_length="$upstream_response_length", ';
```
-5. Now assign GET secrets and GET certificates permission to this user assigned managed identity for your keyvault using below command.
+ 
- ```bash
- az keyvault set-policy \
- --name n4a-keyvault \
- --certificate-permissions get \
- --secret-permissions get \
- --object-id $MY_PRINCIPALID
+1. Once the extended log format has been created, open `cafe.example.com.conf` file and update the `access_log` to make use of the extended log format as shown in the below screenshot. Click on `Submit` to apply the changes.
+
+ ```nginx
+ access_log /var/log/nginx/cafe.example.com.log main_ext;
```
-### Create a self-signed TLS certificate
+ 
+
+1. In subsequent sections you will test out the extended log format within the Log Analytics workspace.
+
+### Test the access logs within log analytics workspace
-1. In this section, you will create a self-signed certificate using the Azure CLI.
+1. To test out access logs, generate some traffic on your `cafe.example.com` server.
- **NOTE:** It should be clearly understood, that Self-signed certificates are exactly what the name suggest - they are created and signed by you or someone else. **They are not signed by any official Certificate Authority**, so they are not recommended for any use other than testing in lab exercises within this workshop. Most Modern Internet Browsers will display Security Warnings when they receive a Self-Signed certificate from a webserver. In some environments, the Browser may actually block access completely. So use Self-signed certificates with **CAUTION**.
+1. You can generate some traffic using your local Docker Desktop. Start and run the `WRK` load generation tool from a container, using the below command to generate traffic:
-2. Create a self-signed certificate by running the below command.
+   First, save your NGINX for Azure resource public IP in an environment variable.
```bash
- az keyvault certificate create \
- --vault-name n4a-keyvault \
- --name n4a-cert \
- --policy @labs/lab6/self-certificate-policy.json
+ ## Set environment variables
+ export MY_RESOURCEGROUP=s.dutta-workshop
+ export MY_N4A_IP=$(az network public-ip show \
+ --resource-group $MY_RESOURCEGROUP \
+ --name n4a-publicIP \
+ --query ipAddress \
+ --output tsv)
```
-3. The above command should provide a json output. If you look at its content then it should have a `status` key with `completed` as it value. This field is an easy way to validate the command successfully created the certificate.
+   Make requests to the default server block, which uses the `main` log format for access logging, by running the below command.
-4. Now log into Azure portal and navigate to your resource-group and then click on the `n4a-keyvault` key vault resource.
+ ```bash
+ docker run --name wrk --rm williamyeh/wrk -t4 -c200 -d1m --timeout 2s http://$MY_N4A_IP
+ ```
-5. Within the keyvault resources window, click on `Certificates` from the left pane. You should see a self-signed certificate named `n4a-cert` within the certificates pane.
- 
+   Make requests to the `cafe.example.com` server block, which uses the `main_ext` log format for access logging, by running the below command.
-6. Click on the newly created certificate and then open up `Issuance Policy` tab for more details on the certificate. You will use this certificate with NGINX for Azure resource to listen for HTTPS traffic.
+ ```bash
+ docker run --name wrk --rm williamyeh/wrk -t4 -c200 -d1m --timeout 2s -H 'Host: cafe.example.com' http://$MY_N4A_IP/coffee
+ ```
-### Configure NGINX for Azure to listen for HTTPS traffic
+1. Within Azure portal, open your NGINX for Azure resource (nginx4a). From the left pane click on `Monitoring > Logs`. This should open a new Query pane. Select `Resource type` from the drop-down and then type `nginx` in the search box. This should show all the sample queries related to NGINX for Azure. Under `Show NGINXaaS access logs`, click on the `Run` button.
-Now that you have a self signed TLS certificate for testing, you will configure NGINX for Azure resource to use them.
+ 
-1. Within your resource-group, click on the NGINX for Azure resource (`nginx4a`).
+1. This should open a `new query` window, which is made up of a query editor pane at the top and a query result pane at the bottom, as shown in the below screenshot.
-1. From the left pane, click on `NGINX certificates` under `Settings` and then click on the `+ Add certificate` button to add your self signed certificate that you created in previous section.
+ 
- 
+   > **NOTE:** The logs may take a couple of minutes to show up. If the results pane doesn't show the logs, then wait for a minute and click on the `Run` button to run the query again.
-1. Within the `Add Certificate` pane, fill in below details:
- - **Preferred name:** Any unique name for the certificate (eg. n4a-cert)
- - **Certificate path:** Logical path where the certificate would recide. (eg. /etc/nginx/cert/n4a-cert.cert)
- - **Key path:** Logical path where the key would recide. (eg. /etc/nginx/cert/n4a-cert.key)
- - **Key vault:** Select your key vault (eg. n4a-keyvault)
- - **Certificate name:** Select a certificate (eg. n4a-cert)
-
- Once all the fields have been filled, click on `Save` to save the certificate within NGINX for Azure.
- 
+1. Azure makes use of the Kusto Query Language (KQL) to query logs. Have a look in the [references](#references) section to learn more about KQL.
-1. You should see your certificate in a `Succeeded` status if the values that you entered in previous step was all correct.
+1. You will modify the default query to show logs for the `cafe.example.com` server block. Update the default query with the below query in the query editor pane. Click on the `Run` button to execute the query.
- 
+ ```kql
+ // Show NGINXaaS access logs
+ // A list of access logs sorted by time.
+ NGXOperationLogs
+ | where FilePath == "/var/log/nginx/cafe.example.com.log"
+ | sort by TimeGenerated desc
+ | project TimeGenerated, FilePath, Message
+ | limit 100
+ ```
-1. Now you will modify your `cafe.example.com.conf` file that you created in `lab2` to set up cafe.example.com as a HTTPS server. First you will add the `ssl` parameter to the `listen` directive in the `server` block. You will then specify the server certificate and private key file within the configuration to point to the certificate that you added in previous steps.
+ 
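+
+   As a further KQL example, a sketch that aggregates the same log file instead of listing raw lines, counting requests per minute (same `NGXOperationLogs` table):
+
+   ```kql
+   // Count cafe.example.com access log lines per minute
+   NGXOperationLogs
+   | where FilePath == "/var/log/nginx/cafe.example.com.log"
+   | summarize Requests = count() by bin(TimeGenerated, 1m)
+   | sort by TimeGenerated desc
+   ```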
-1. Open `lab6/cafe.example.com.conf`. Below is the list of changes that you can observe which has changed from `lab2/cafe.example.com.conf` file to enable HTTPS traffic on cafe.example.com.
- - On line #6, the listen port has been updated from port 80 to 443. Also `ssl` parameter has been added to enable TLS termination for this `server` block.
- - On line #11-12, the `ssl_certificate` and `ssl_certificate_key` directives have been added and points to the certificate path that you provided when you added certificate to the NGINX for Azure resource.
-
- ```nginx
- server {
-
- listen 443 ssl; # Listening on port 443 with "ssl" parameter for terminating TLS on all IP addresses on this machine
+1. Within the Results pane, expand one of the logs to look into its details. You can also hover your mouse over the message to show the message details, as shown in the below screenshot. Note that the message follows the `main_ext` log format.
+
+ 
- server_name cafe.example.com; # Set hostname to match in request
- status_zone cafe.example.com; # Metrics zone name
+1. You can save the custom query if you wish by clicking on the `Save` button and then selecting `Save as query`. Within the `Save as query` pane, provide a query name and an optional description, then click on the `Save` button.
- ssl_certificate /etc/nginx/certs/n4a-cert.cert;
- ssl_certificate_key /etc/nginx/certs/n4a-cert.key;
+ 
- snip...
- }
+### Explore Azure Monitoring for NGINX for Azure
+
+1. Generate some steady traffic using your local Docker Desktop. Start and run the `WRK` load generation tool from a container, using the below command:
+
+ ```bash
+   docker run --name wrk --rm williamyeh/wrk -t4 -c200 -d30m --timeout 2s -H 'Host: cafe.example.com' http://$MY_N4A_IP/coffee
```
-1. Within the Azure portal, open your resource-group, click on the NGINX for Azure resource (`nginx4a`).
+   The above command will run for 30 minutes, sending requests to your Nginx for Azure `/coffee` endpoint with the `cafe.example.com` Host header, using 4 threads and 200 connections.
-1. From the left pane, click on `NGINX configuration` under `Settings` and then open the `cafe.example.com.conf` file under `/etc/nginx/conf.d` directory. This would open the config file in the editor.
+1. Within Azure portal, open your NGINX for Azure resource (nginx4a). From the left pane click on `Monitoring > Metrics`. This should open a new Chart pane.
-1. Copy the content of `lab6/cafe.example.com.conf` file and replace the existing `cafe.example.com.conf` content with it.
+ 
-1. Click on `Submit` to push the config changes to the NGINX for Azure resource.
+1. For the first chart, within **Metric Namespace** drop-down, select `nginx requests and response statistics`. For the **metrics** drop-down, select `plus.http.request.count`. For the **Aggregation** drop-down, select `Avg`.
-### Test and validate TLS traffic components and settings
+ Click on the **Apply Splitting** button. Within the **Values** drop-down, select `server_zone`. From top right change the **Time range** to `Last 30 minutes`. This should generate a chart similar to below screenshot.
-1. Make sure you have mapped your NGINX for Azure resource public IP to `cafe.example.com` hostname within your host file. If not present then please do insert it as you would require the mapping for testing.
+ 
- ```bash
- cat /etc/hosts | grep cafe.example.com
- ```
+1. You will now save this chart in a new custom dashboard. Within the chart pane, click on `Save to dashboard > Pin to dashboard`.
-1. Using your terminal, try to run the below curl command
+ Within the `Pin to dashboard` pane, select the `Create new` tab to create your new custom dashboard. Provide a name for your custom dashboard. Once done click on `Create and pin` button to finish dashboard creation.
- ```bash
- curl https://cafe.example.com
- ```
+ 
- ```bash
- ##Sample Output##
- curl: (60) SSL certificate problem: unable to get local issuer certificate
- More details here: https://curl.se/docs/sslcerts.html
+1. To view your newly created dashboard, within Azure portal, navigate to the `Dashboard` resource.
- curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.
- ```
+   By default, this should open the default `My Dashboard` private dashboard. From the top drop-down, select your custom dashboard name (`Nginx4a Dashboard` in the screenshot). This should open your custom dashboard, which includes the pinned server request chart.
- As you can see, **curl reports an error** that the certificate is not legitimate (because it is self-signed) and refuses to complete the request. Adding the `-k` flag means `-insecure`, would tell curl to ignore this error. This flag is required for self-signed certificates.
+ 
-1. Try again now with a `-k` flag added to curl
+1. Now you will add some more charts to your newly created dashboard. Navigate back to the NGINX for Azure resource (`nginx4a`) and from the left pane click on `Monitoring > Metrics`.
- ```bash
- curl -k https://cafe.example.com
- ```
+1. Within the chart pane, click on the **Metric Namespace** drop-down and select `nginx upstream statistics`. For the **Metric** drop-down, select `plus.http.upstream.peers.response.time`. For the **Aggregation** drop-down, select `Avg`.
- << Copy sample output once cafe upstream has been added >>
+   Click on the **Add filter** button. Within the **Property** drop-down, select `upstream`. Leave the **Operator** as `=`. In the **Values** drop-down, select `aks1_ingress`, `aks2_ingress`, and `cafe_nginx`.
-1. Now try it with a browser, go to https://cafe.example.com. YIKES - what's this?? Most modern browsers will display an **Error or Security Warning**:
+   Click on the **Apply Splitting** button. Within the **Values** drop-down, select `upstream`. From the top right, change the **Time range** to `Last 30 minutes`. This should generate a chart similar to the screenshot below.
- 
+ 
-1. You can use browser's built-in certificate viewer to look at the details of the TLS certificate that was sent from NGINX to your browser. In address bar, click on the `Not Secure` icon, then click on `Certificate is not valid`. This will display the certificate. You can verify looking at the `Comman Name` field that this is the same certificate that you provided to NGINX for Azure resource.
+1. You will now pin this chart to your custom dashboard. Within the chart pane, click on `Save to dashboard > Pin to dashboard`.
- 
+   Within the `Pin to dashboard` pane, the `Existing` tab should be open by default, with your recently created dashboard selected. Click the `Pin` button to pin the chart to your dashboard.
-1. Within the browser, close the Certificate Viewer, click on the `Advanced` button, and then click on `Proceed to cafe.example.com (unsafe)` link, to bypass the warning and continue.
- > CAUTION: Ignoring Browser Warnings is **Dangerous**, only Ignore these warnings if you are 100% sure it is safe to proceed!!
+ 
-1. After you safely Proceed, you should see the cafe.example.com output as below
+1. Navigate back to the `Dashboard` resource within the Azure portal and select your dashboard. You will notice that the chart you recently pinned shows up in your dashboard.
- << Add output screenshot once cafe upstream has been added >>
+ 
-
+   You can also edit your dashboard by clicking on the pencil icon to reposition and resize charts as you like.
-**This completes Lab6.**
+1. See the [References](#references) section for the metrics catalog, and explore the various other metrics available with NGINX for Azure. Feel free to play around and pin multiple metrics to your dashboard.
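+
+   As an optional extra, you can also list the available metric definitions from the command line. This is only a sketch: it assumes `$MY_RESOURCEGROUP` is still set from the earlier labs, that your NGINX for Azure resource is named `nginx4a`, and that its ARM resource type is `NGINX.NGINXPLUS/nginxDeployments` (verify the type in your resource's JSON view in the portal if the lookup fails):
+
+   ```bash
+   # Look up the NGINX for Azure resource ID, then list its metric definitions
+   az monitor metrics list-definitions \
+     --resource $(az resource show \
+         --resource-group $MY_RESOURCEGROUP \
+         --name nginx4a \
+         --resource-type NGINX.NGINXPLUS/nginxDeployments \
+         --query id --output tsv)
+   ```
+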
+**This completes Lab6.**
+
## References:
- [NGINX As A Service for Azure](https://docs.nginx.com/nginxaas/azure/)
-- [NGINX As A Service SSL/TLS Docs](https://docs.nginx.com/nginxaas/azure/getting-started/ssl-tls-certificates/)
-- [NGINX Directives Index](https://nginx.org/en/docs/dirindex.html)
+- [Kusto Query Language](https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/tutorials/learn-common-operators)
+
+- [NGINX Metrics catalog](https://docs.nginx.com/nginxaas/azure/monitoring/metrics-catalog/)
+
- [NGINX - Join Community Slack](https://community.nginx.org/joinslack)
@@ -216,4 +240,4 @@ Now that you have a self signed TLS certificate for testing, you will configure
-------------
-Navigate to ([Lab7](../lab7/readme.md) | [LabX](../labX/readme.md))
+Navigate to ([Lab7](../lab7/readme.md) | [LabGuide](../readme.md))
diff --git a/labs/lab7/cafe.example.com.conf b/labs/lab7/cafe.example.com.conf
new file mode 100644
index 0000000..e471ad2
--- /dev/null
+++ b/labs/lab7/cafe.example.com.conf
@@ -0,0 +1,37 @@
+# Nginx 4 Azure - Cafe Nginx HTTPS
+# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+#
+server {
+
+ listen 443 ssl; # Listening on port 443 with "ssl" parameter for terminating TLS on all IP addresses on this machine
+
+ server_name cafe.example.com; # Set hostname to match in request
+ status_zone cafe.example.com; # Metrics zone name
+
+ ssl_certificate /etc/nginx/cert/n4a-cert.cert;
+ ssl_certificate_key /etc/nginx/cert/n4a-cert.key;
+
+ access_log /var/log/nginx/cafe.example.com.log main;
+ error_log /var/log/nginx/cafe.example.com_error.log info;
+
+ location / {
+ #
+ # return 200 "You have reached cafe.example.com, location /\n";
+
+ proxy_pass http://cafe_nginx; # Proxy AND load balance to a list of servers
+ add_header X-Proxy-Pass cafe_nginx; # Custom Header
+
+ # proxy_pass http://windowsvm; # Proxy AND load balance to a list of servers
+ # add_header X-Proxy-Pass windowsvm; # Custom Header
+
+ # proxy_pass http://aks1_ingress; # Proxy AND load balance to AKS1 Nginx Ingress
+ # add_header X-Proxy-Pass aks1_ingress; # Custom Header
+
+ # proxy_pass http://aks2_ingress; # Proxy AND load balance to AKS2 Nginx Ingress
+    # add_header X-Proxy-Pass aks2_ingress;   # Custom Header
+
+ # proxy_pass http://$upstream; # Use Split Clients config
+ # add_header X-Proxy-Pass $upstream; # Custom Header
+
+ }
+}
\ No newline at end of file
diff --git a/labs/lab7/media/docker-icon.png b/labs/lab7/media/docker-icon.png
new file mode 100644
index 0000000..02ee3f1
Binary files /dev/null and b/labs/lab7/media/docker-icon.png differ
diff --git a/labs/lab7/media/lab7_add_certificate1.png b/labs/lab7/media/lab7_add_certificate1.png
new file mode 100644
index 0000000..c09d55f
Binary files /dev/null and b/labs/lab7/media/lab7_add_certificate1.png differ
diff --git a/labs/lab7/media/lab7_add_certificate2.png b/labs/lab7/media/lab7_add_certificate2.png
new file mode 100644
index 0000000..415ea9c
Binary files /dev/null and b/labs/lab7/media/lab7_add_certificate2.png differ
diff --git a/labs/lab7/media/lab7_add_certificate_save.png b/labs/lab7/media/lab7_add_certificate_save.png
new file mode 100644
index 0000000..ce9c4e2
Binary files /dev/null and b/labs/lab7/media/lab7_add_certificate_save.png differ
diff --git a/labs/lab7/media/lab7_add_certificate_success.png b/labs/lab7/media/lab7_add_certificate_success.png
new file mode 100644
index 0000000..969584b
Binary files /dev/null and b/labs/lab7/media/lab7_add_certificate_success.png differ
diff --git a/labs/lab6/media/browser_cert_details.png b/labs/lab7/media/lab7_browser_cert_details.png
similarity index 100%
rename from labs/lab6/media/browser_cert_details.png
rename to labs/lab7/media/lab7_browser_cert_details.png
diff --git a/labs/lab6/media/browser_cert_invalid.png b/labs/lab7/media/lab7_browser_cert_invalid.png
similarity index 100%
rename from labs/lab6/media/browser_cert_invalid.png
rename to labs/lab7/media/lab7_browser_cert_invalid.png
diff --git a/labs/lab7/media/lab7_browser_success.png b/labs/lab7/media/lab7_browser_success.png
new file mode 100644
index 0000000..fe45efc
Binary files /dev/null and b/labs/lab7/media/lab7_browser_success.png differ
diff --git a/labs/lab7/media/lab7_certificate_issuance.png b/labs/lab7/media/lab7_certificate_issuance.png
new file mode 100644
index 0000000..ca02368
Binary files /dev/null and b/labs/lab7/media/lab7_certificate_issuance.png differ
diff --git a/labs/lab7/media/lab7_keyvault_screen.png b/labs/lab7/media/lab7_keyvault_screen.png
new file mode 100644
index 0000000..bdb271a
Binary files /dev/null and b/labs/lab7/media/lab7_keyvault_screen.png differ
diff --git a/labs/lab6/media/n4a_cert_screen.png b/labs/lab7/media/lab7_n4a_cert_screen.png
similarity index 100%
rename from labs/lab6/media/n4a_cert_screen.png
rename to labs/lab7/media/lab7_n4a_cert_screen.png
diff --git a/labs/lab7/media/nginx-azure-icon.png b/labs/lab7/media/nginx-azure-icon.png
new file mode 100644
index 0000000..70ab132
Binary files /dev/null and b/labs/lab7/media/nginx-azure-icon.png differ
diff --git a/labs/lab7/readme.md b/labs/lab7/readme.md
index 9cac8be..f5e2c4a 100644
--- a/labs/lab7/readme.md
+++ b/labs/lab7/readme.md
@@ -1,10 +1,8 @@
-# Azure Montoring / Logging Analytics
+# Azure Key Vault / TLS Essentials
## Introduction
-In this lab, you will build ( x,y,x ).
-
-< Lab specific Images here, in the /media sub-folder >
+In this lab, you will create a new Key Vault resource to store a self-signed certificate. You will then configure NGINX for Azure to listen for HTTPS traffic and terminate TLS before proxying and load balancing to the backend systems.
NGINX aaS | Docker
:-------------------------:|:-------------------------:
@@ -14,38 +12,290 @@ NGINX aaS | Docker
By the end of the lab you will be able to:
-- Introduction to `xx`
-- Build an `yyy` Nginx configuration
-- Test access to your lab enviroment with Curl and Chrome
-- Investigate `zzz`
-
+- Build your own Azure Key Vault resource.
+- Create your self-signed TLS certificate.
+- Configure NGINX for Azure to listen for and terminate TLS traffic.
+- Test and validate TLS traffic components and settings.
## Pre-Requisites
-- You must have `aaaa` installed and running
-- You must have `bbbbb` installed
+- You must have your NGINX for Azure resource up and running.
+- You must have the `Owner` role on the resource group that includes the NGINX for Azure resource.
+- You must also have the backend system resources up and running.
- See `Lab0` for instructions on setting up your system for this Workshop
-- Familiarity with basic Linux commands and commandline tools
-- Familiarity with basic Docker concepts and commands
-- Familiarity with basic HTTP protocol
-
+### Create Azure Key Vault resource
+
+1. Create an Azure key vault within the same resource group that holds your NGINX for Azure resource.
+
+ ```bash
+ ## Set environment variable
+ export MY_RESOURCEGROUP=s.dutta-workshop
+ export MY_INITIALS=sdutta
+ export MY_KEYVAULT=n4a-keyvault-$MY_INITIALS
+ ```
+
+   Once the environment variables are all set, run the command below to create the key vault resource:
+
+ ```bash
+ az keyvault create \
+ --resource-group $MY_RESOURCEGROUP \
+ --name $MY_KEYVAULT \
+ --enable-rbac-authorization false
+ ```
+
+ ```bash
+ ##Sample Output##
+ {
+ "id": "/subscriptions//resourceGroups/s.dutta-workshop/providers/Microsoft.KeyVault/vaults/n4a-keyvault-sdutta",
+ "location": "centralus",
+ "name": "n4a-keyvault-sdutta",
+ "properties": {
+ "accessPolicies": [
+ {
+ "applicationId": null,
+ "objectId": "xxxx-xxxx-xxxx-xxxx-xxxx",
+ "permissions": {
+ "certificates": [
+ "all"
+ ],
+ "keys": [
+ "all"
+ ],
+ "secrets": [
+ "all"
+ ],
+ "storage": [
+ "all"
+ ]
+ },
+ "tenantId": "xxxx-xxxx-xxxx-xxxx-xxxx"
+ }
+ ],
+ "createMode": null,
+ "enablePurgeProtection": null,
+ "enableRbacAuthorization": false,
+ "enableSoftDelete": true,
+ "enabledForDeployment": false,
+ "enabledForDiskEncryption": null,
+ "enabledForTemplateDeployment": null,
+ "hsmPoolResourceId": null,
+ "networkAcls": null,
+ "privateEndpointConnections": null,
+ "provisioningState": "Succeeded",
+ "publicNetworkAccess": "Enabled",
+ "sku": {
+ "family": "A",
+ "name": "standard"
+ },
+ "softDeleteRetentionInDays": 90,
+ "tenantId": "xxxx-xxxx-xxxx-xxxx-xxxx",
+ "vaultUri": "https://n4a-keyvault-sdutta.vault.azure.net/"
+ },
+ "resourceGroup": "s.dutta-workshop",
+ "systemData": {
+ "createdAt": "2024-05-08T12:51:45.338000+00:00",
+ "createdBy": "",
+ "createdByType": "User",
+ "lastModifiedAt": "2024-05-08T12:51:45.338000+00:00",
+ "lastModifiedBy": "",
+ "lastModifiedByType": "User"
+ },
+ "tags": {},
+ "type": "Microsoft.KeyVault/vaults"
+ }
+ ```
+
+ > **NOTE:** Within the output json you should have a `"provisioningState": "Succeeded"` field which validates the command successfully provisioned the resource.
+
+2. Next, you will grant access to this keyvault to the user assigned managed identity that you created while creating the NGINX for Azure resource.
+3. Copy the `principalId` of the user identity into an environment variable using the command below.
+
+ ```bash
+ ## Set environment variable
+ MY_PRINCIPALID=$(az identity show \
+ --resource-group $MY_RESOURCEGROUP \
+ --name n4a-useridentity \
+ --query principalId \
+ --output tsv)
+ ```
+
+4. Now assign GET secrets and GET certificates permissions to this user assigned managed identity for your keyvault using the command below.
+
+ ```bash
+ az keyvault set-policy \
+ --name $MY_KEYVAULT \
+ --certificate-permissions get \
+ --secret-permissions get \
+ --object-id $MY_PRINCIPALID
+ ```
+
+ > **NOTE:** Within the output json you should have a `"provisioningState": "Succeeded"` field which validates the command successfully set the policy.
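+
+   If you would like to double-check that the policy took effect, one option (a sketch using only the variables already set above) is to list the vault's access policies and look for an entry whose `objectId` matches `$MY_PRINCIPALID`:
+
+   ```bash
+   # List the vault's access policies; your managed identity should appear with get permissions
+   az keyvault show \
+     --name $MY_KEYVAULT \
+     --query "properties.accessPolicies[].{objectId:objectId, certificates:permissions.certificates, secrets:permissions.secrets}" \
+     --output json
+   ```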
+
+### Create a self-signed TLS certificate
+
+1. In this section, you will create a self-signed certificate using the Azure CLI.
-### Lab exercise 1
+   >**NOTE:** It should be clearly understood that self-signed certificates are exactly what the name suggests - they are created and signed by you or someone else. **They are not signed by any official Certificate Authority**, so they are not recommended for any use other than testing in lab exercises within this workshop. Most modern Internet Browsers will display Security Warnings when they receive a Self-Signed certificate from a webserver. In some environments, the Browser may actually block access completely. So use Self-signed certificates with **CAUTION**.
-
+2. Create a self-signed certificate by running the command below.
-### Lab exercise 2
+   > **NOTE:** Make sure your Terminal is in the `nginx-azure-workshops/labs` directory before running the command below.
-
+ ```bash
+ az keyvault certificate create \
+ --vault-name $MY_KEYVAULT \
+ --name n4a-cert \
+ --policy @lab7/self-certificate-policy.json
+ ```
-### Lab exercise 3
+ ```bash
+ ##Sample Output##
+ {
+ "cancellationRequested": false,
+ "csr": "",
+ "error": null,
+ "id": "https://n4a-keyvault-sdutta.vault.azure.net/certificates/n4a-cert/pending",
+ "issuerParameters": {
+ "certificateTransparency": null,
+ "certificateType": null,
+ "name": "Self"
+ },
+ "name": "n4a-cert",
+ "requestId": "9e3abe3b0977420cba1733c326fe26e5",
+ "status": "completed",
+ "statusDetails": null,
+ "target": "https://n4a-keyvault-sdutta.vault.azure.net/certificates/n4a-cert"
+ }
+ ```
-
+ > **NOTE:** Within the output json you should have a `"status": "completed"` field which validates the command successfully created the certificate.
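+
+   If you are curious what goes into a certificate policy file like `lab7/self-certificate-policy.json` (its contents are not shown here), the Azure CLI can print a default self-signed policy scaffold that you can compare against:
+
+   ```bash
+   # Prints a default "Self" issuer certificate policy in JSON
+   az keyvault certificate get-default-policy
+   ```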
+
+3. Now log into the Azure portal, navigate to your resource-group, and then click on the `n4a-keyvault-$MY_INITIALS` key-vault resource.
+
+4. Within the keyvault resources window, click on `Certificates` under `Objects` from the left pane. You should see a self-signed certificate named `n4a-cert` within the certificates pane.
+
+ 
+
+5. Click on the newly created certificate and then open the `Issuance Policy` tab for more details on the certificate. You will use this certificate with the NGINX for Azure resource to listen for HTTPS traffic.
+
+ 
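+
+   As a CLI alternative to the portal view, you can query the certificate's subject directly. This is a sketch; the `policy.x509CertificateProperties.subject` query path is our assumption about the shape of the returned JSON:
+
+   ```bash
+   # Show the subject (Common Name) of the self-signed certificate
+   az keyvault certificate show \
+     --vault-name $MY_KEYVAULT \
+     --name n4a-cert \
+     --query "policy.x509CertificateProperties.subject" \
+     --output tsv
+   ```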
+
+### Configure NGINX for Azure to listen for and terminate TLS traffic
+
+Now that you have a self-signed TLS certificate for testing, you will configure your NGINX for Azure resource to use it.
+
+1. Within your resource-group, click on the NGINX for Azure resource (`nginx4a`).
+
+1. From the left pane, click on `NGINX certificates` under `Settings` and then click on the `+ Add certificate` button to add the self-signed certificate that you created in the previous section.
+
+ 
+
+1. Within the `Add Certificate` pane, fill in the details below:
+   - **Preferred name:** Any unique name for the certificate (eg. n4a-cert)
+   - **Certificate path:** Logical path where the certificate will reside (eg. /etc/nginx/cert/n4a-cert.cert)
+   - **Key path:** Logical path where the key will reside (eg. /etc/nginx/cert/n4a-cert.key)
+
+ 
+
+   - Click on the `Select Certificate` button and then fill in the certificate details below. Once done, click `Select`.
+ - **Key vault:** Select your key vault (eg. n4a-keyvault-sdutta)
+ - **Certificate name:** Select a certificate (eg. n4a-cert)
+
+ 
+
+1. Once all the fields have been filled, click on `Add Certificate` to save the certificate within NGINX for Azure.
-### << more exercises/steps>>
+ 
-
+1. You should see your certificate in a `Succeeded` status if the values that you entered in the previous step were all correct.
+
+ 
+
+1. Now you will modify the `cafe.example.com.conf` file that you created in `lab2` to set up cafe.example.com as an HTTPS server. First you will add the `ssl` parameter to the `listen` directive in the `server` block. You will then specify the server certificate and private key file within the configuration, pointing to the certificate that you added in the previous steps.
+
+1. Open `lab7/cafe.example.com.conf`. Below is the list of changes from the `lab2/cafe.example.com.conf` file that enable HTTPS traffic on cafe.example.com.
+   - On line #6, the listen port has been updated from port 80 to 443. The `ssl` parameter has also been added to enable TLS termination for this `server` block.
+   - On lines #11-12, the `ssl_certificate` and `ssl_certificate_key` directives have been added, pointing to the certificate paths that you provided when you added the certificate to the NGINX for Azure resource.
+
+ ```nginx
+ server {
+
+ listen 443 ssl; # Listening on port 443 with "ssl" parameter for terminating TLS on all IP addresses on this machine
+
+ server_name cafe.example.com; # Set hostname to match in request
+ status_zone cafe.example.com; # Metrics zone name
+
+ ssl_certificate /etc/nginx/cert/n4a-cert.cert;
+ ssl_certificate_key /etc/nginx/cert/n4a-cert.key;
+
+ snip...
+ }
+ ```
+
+1. Within the Azure portal, open your resource-group, click on the NGINX for Azure resource (`nginx4a`).
+
+1. From the left pane, click on `NGINX configuration` under `Settings` and then open the `cafe.example.com.conf` file under `/etc/nginx/conf.d` directory. This would open the config file in the editor.
+
+1. Copy the content of `lab7/cafe.example.com.conf` file and replace the existing `cafe.example.com.conf` content with it.
+
+1. Click on `Submit` to push the config changes to the NGINX for Azure resource.
+
+### Test and validate TLS traffic components and settings
+
+1. Make sure you have mapped your NGINX for Azure resource public IP to the `cafe.example.com` hostname within your hosts file. If it is not present, add it now, as the mapping is required for testing.
+
+ ```bash
+ cat /etc/hosts | grep cafe.example.com
+ ```
+
+1. Using your terminal, try to run the curl command below.
+
+ ```bash
+ curl -I https://cafe.example.com
+ ```
+
+ ```bash
+ ##Sample Output##
+ curl: (60) SSL certificate problem: unable to get local issuer certificate
+ More details here: https://curl.se/docs/sslcerts.html
+
+ curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.
+ ```
+
+   As you can see, **curl reports an error** that the certificate is not legitimate (because it is self-signed) and refuses to complete the request. Adding the `-k` flag (short for `--insecure`) tells curl to ignore this error. This flag is required for self-signed certificates.
+
+1. Try again, now with the `-k` flag added to curl.
+
+ ```bash
+ curl -k -I https://cafe.example.com
+ ```
+
+ ```bash
+ ##Sample Output##
+ HTTP/1.1 200 OK
+ Date: Wed, 08 May 2024 15:51:24 GMT
+ Content-Type: text/html; charset=utf-8
+ Connection: keep-alive
+ Expires: Wed, 08 May 2024 15:51:23 GMT
+ Cache-Control: no-cache
+ X-Proxy-Pass: cafe_nginx
+ ```
+
+1. Now try it with a browser: go to https://cafe.example.com. YIKES - what's this?? Most modern browsers will display an **Error or Security Warning**:
+
+ 
+
+1. You can use your browser's built-in certificate viewer to look at the details of the TLS certificate that was sent from NGINX to your browser. In the address bar, click on the `Not Secure` icon, then click on `Certificate is not valid`. This will display the certificate. You can verify, by looking at the `Common Name` field, that this is the same certificate that you provided to the NGINX for Azure resource.
+
+ 
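+
+   If you prefer the terminal, you can inspect the same certificate with `openssl` (assuming it is installed locally). The first command fetches the certificate from the server; the second prints its subject, issuer, and validity dates:
+
+   ```bash
+   # Fetch the server certificate and print its subject, issuer, and validity window
+   echo | openssl s_client -connect cafe.example.com:443 -servername cafe.example.com 2>/dev/null \
+     | openssl x509 -noout -subject -issuer -dates
+   ```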
+
+1. Within the browser, close the Certificate Viewer, click on the `Advanced` button, and then click on the `Proceed to cafe.example.com (unsafe)` link to bypass the warning and continue.
+   > CAUTION: Ignoring Browser Warnings is **Dangerous**, only ignore these warnings if you are 100% sure it is safe to proceed!!
+
+1. After you safely Proceed, you should see the cafe.example.com output shown below.
+
+ 
@@ -56,12 +306,9 @@ By the end of the lab you will be able to:
## References:
- [NGINX As A Service for Azure](https://docs.nginx.com/nginxaas/azure/)
-- [NGINX Plus Product Page](https://docs.nginx.com/nginx/)
-- [NGINX Ingress Controller](https://docs.nginx.com//nginx-ingress-controller/)
-- [NGINX on Docker](https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-docker/)
+
+- [NGINX As A Service SSL/TLS Docs](https://docs.nginx.com/nginxaas/azure/getting-started/ssl-tls-certificates/)
- [NGINX Directives Index](https://nginx.org/en/docs/dirindex.html)
-- [NGINX Variables Index](https://nginx.org/en/docs/varindex.html)
-- [NGINX Technical Specs](https://docs.nginx.com/nginx/technical-specs/)
- [NGINX - Join Community Slack](https://community.nginx.org/joinslack)
@@ -74,4 +321,4 @@ By the end of the lab you will be able to:
-------------
-Navigate to ([Lab8](../lab8/readme.md) | [LabX](../labX/readme.md))
+Navigate to ([Lab8](../lab8/readme.md) | [LabGuide](../readme.md))
diff --git a/labs/lab6/self-certificate-policy.json b/labs/lab7/self-certificate-policy.json
similarity index 100%
rename from labs/lab6/self-certificate-policy.json
rename to labs/lab7/self-certificate-policy.json
diff --git a/labs/lab8/MyGarage-Home.png b/labs/lab8/MyGarage-Home.png
new file mode 100644
index 0000000..2635caa
Binary files /dev/null and b/labs/lab8/MyGarage-Home.png differ
diff --git a/labs/lab8/MyGarage-PhotoGallery.png b/labs/lab8/MyGarage-PhotoGallery.png
new file mode 100644
index 0000000..53e777a
Binary files /dev/null and b/labs/lab8/MyGarage-PhotoGallery.png differ
diff --git a/labs/lab8/MyGarage-SeedData.png b/labs/lab8/MyGarage-SeedData.png
new file mode 100644
index 0000000..1ef1f47
Binary files /dev/null and b/labs/lab8/MyGarage-SeedData.png differ
diff --git a/labs/lab8/MyGarage-Vehicles.png b/labs/lab8/MyGarage-Vehicles.png
new file mode 100644
index 0000000..2dcddc3
Binary files /dev/null and b/labs/lab8/MyGarage-Vehicles.png differ
diff --git a/labs/lab8/my-garage-deployment.yaml b/labs/lab8/my-garage-deployment.yaml
new file mode 100644
index 0000000..6ae67a7
--- /dev/null
+++ b/labs/lab8/my-garage-deployment.yaml
@@ -0,0 +1,69 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: my-garage-deployment
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: my-garage
+ template:
+ metadata:
+ labels:
+ app: my-garage
+ spec:
+ containers:
+ - name: my-garage
+ image: ghcr.io/ciroque/my-garage:18fb641c
+ ports:
+ - containerPort: 80
+ env:
+ - name: ASPNETCORE_ENVIRONMENT
+ value: "Production"
+
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-garage-svc
+spec:
+ type: ClusterIP
+ clusterIP: None
+ ports:
+ - name: http
+ protocol: TCP
+ port: 80
+ targetPort: 80
+ selector:
+ app: my-garage
+
+---
+apiVersion: k8s.nginx.org/v1
+kind: VirtualServer
+metadata:
+ name: my-garage-vs
+spec:
+ host: my-garage.example.com
+ upstreams:
+ - name: my-garage
+ service: my-garage-svc
+ port: 80
+ healthCheck:
+ enable: true
+ path: /
+ interval: 10s
+ jitter: 3s
+ fails: 3
+ passes: 2
+ connect-timeout: 30s
+ read-timeout: 20s
+ routes:
+ - path: /
+ action:
+ pass: my-garage
+ - path: /garage-lab
+ action:
+ return:
+ code: 200
+ type: text/html
+ body: "Welcome to Nginx4Azure Garage Lab !!"
\ No newline at end of file
diff --git a/labs/lab8/readme.md b/labs/lab8/readme.md
index 43cec3f..f2d26b2 100644
--- a/labs/lab8/readme.md
+++ b/labs/lab8/readme.md
@@ -1,51 +1,423 @@
-# NGINX Garage or Azure Petshop
+# NGINX Garage (UNDER CONSTRUCTION)
+
## Introduction
-In this lab, you will build ( x,y,x ).
-< Lab specific Images here, in the /media sub-folder >
+In this lab, you will install the My Garage application, configure it for external access, learn to scale the web service, and set up caching for the image gallery (optional).
+
+The My Garage application is a modern web application built using Microsoft .Net technologies. It comprises a frontend application and a supporting web service backend. The frontend is a Single Page Application (SPA) that uses [Blazor WebAssembly](https://dotnet.microsoft.com/en-us/apps/aspnet/web-apps/blazor) to render the UI in the browser. The backend is a RESTful API built using [ASP.Net Core MVC](https://learn.microsoft.com/en-us/aspnet/core/mvc/overview?view=aspnetcore-8.0).
+
+|  |  |
+|------|------|
+|  |  |
+
-NGINX aaS | Docker
-:-------------------------:|:-------------------------:
- |
-
## Learning Objectives
By the end of the lab you will be able to:
-- Introduction to `xx`
-- Build an `yyy` Nginx configuration
-- Test access to your lab enviroment with Curl and Chrome
-- Investigate `zzz`
-
+- Create all the resources necessary to deploy the My Garage application
+- Ensure the My Garage application is accessible from the internet
+- Monitor traffic to the My Garage application using the NGINX Dashboard
## Pre-Requisites
-- You must have `aaaa` installed and running
-- You must have `bbbbb` installed
-- See `Lab0` for instructions on setting up your system for this Workshop
+You must have completed the labs up to this point; specifically, Lab 0 and Lab 4 are required.
+
+- You must have the Azure CLI installed and configured to manage Azure Resources
- Familiarity with basic Linux commands and commandline tools
- Familiarity with basic Docker concepts and commands
-- Familiarity with basic HTTP protocol
+- You must have created a Resource Group
+Set some environment variables to be used when creating the Azure Resources:
+
+```shell
+# Modify these as desired
+export MY_RESOURCEGROUP="${MY_RESOURCEGROUP:-$(whoami)-n4a-workshop}"
+export SAS_EXPIRY=2024-12-31T23:59:59Z
+export REDIS_CONNECTION_STRING=redis-leader
+export LOCATION=westus2
+
+# Using the ticks value ensures the naming requirements for Azure Resources are met
+timestamp=$(date +%s)
+milliseconds=$(date +%N | awk '{print $1 / 1000}')
+export TICKS="$timestamp$milliseconds"
+export STORAGE_ACCOUNT_NAME="mygsa$TICKS"
+export STORAGE_CONTAINER_NAME="mygsa$TICKS"
+export SAS_TOKEN_NAME="mygsas$TICKS"
+export APP_CONFIG_NAME="mygac$TICKS"
+export OWNER=$(whoami)
+
+# These correspond to the keys in the AppConfig that the My Garage application will use to access the resources, do not modify
+export SAS_TOKEN_APP_CONFIG_KEY="AzureStorageSasToken"
+export STORAGE_CONNECTION_STRING_CONFIG_KEY="AzureStorageConnectionString"
+export STORAGE_CONTAINER_NAME_CONFIG_KEY="AzureStorageContainerName"
+export REDIS_CONNECTION_STRING_CONFIG_KEY="RedisConnectionString"
+```
+
### Lab exercise 1
-
+In this exercise you will establish the necessary Azure resources to deploy the My Garage application. There are two Azure resources that need to be created:
+
+1. A Storage Account to store the images for the photo gallery
+1. An AppConfig to store configuration settings for the My Garage application
+
+ 1. First, let's establish the Storage Account. Containers, where the Photo Gallery files will be saved, are created in the Storage Account.
+    The `--allow-blob-public-access true` flag is required to allow the container to be public.
+
+ ```shell
+ az storage account create --name $STORAGE_ACCOUNT_NAME --resource-group $MY_RESOURCEGROUP --location $LOCATION --sku Standard_LRS --kind StorageV2 --access-tier Cool --allow-blob-public-access true --tags owner=$OWNER
+ ```
+
+    Sample output:
+
+ ```shell
+ {
+ "accessTier": "Cool",
+ "accountMigrationInProgress": null,
+ "allowBlobPublicAccess": true,
+ "allowCrossTenantReplication": false,
+ "allowSharedKeyAccess": null,
+ "allowedCopyScope": null,
+ "azureFilesIdentityBasedAuthentication": null,
+ "blobRestoreStatus": null,
+ "creationTime": "2024-05-30T19:44:17.165991+00:00",
+ "customDomain": null,
+ "defaultToOAuthAuthentication": null,
+ "dnsEndpointType": null,
+ "enableHttpsTrafficOnly": true,
+ "enableNfsV3": null,
+ "encryption": {
+ "encryptionIdentity": null,
+ "keySource": "Microsoft.Storage",
+ "keyVaultProperties": null,
+ "requireInfrastructureEncryption": null,
+ "services": {
+ "blob": {
+ "enabled": true,
+ "keyType": "Account",
+ "lastEnabledTime": "2024-05-30T19:44:17.322273+00:00"
+ },
+ "file": {
+ "enabled": true,
+ "keyType": "Account",
+ "lastEnabledTime": "2024-05-30T19:44:17.322273+00:00"
+          },
+          "queue": null,
+          "table": null
+        }
+      },
+ "extendedLocation": null,
+ "failoverInProgress": null,
+ "geoReplicationStats": null,
+ "id": "/subscriptions/7a0bb4ab-c5a7-46b3-b4ad-c10376166020/resourceGroups/ciroque-n4-workshop/providers/Microsoft.Storage/storageAccounts/mygsa1717095655594493",
+ "identity": null,
+ "immutableStorageWithVersioning": null,
+ "isHnsEnabled": null,
+ "isLocalUserEnabled": null,
+ "isSftpEnabled": null,
+ "isSkuConversionBlocked": null,
+ "keyCreationTime": {
+ "key1": "2024-05-30T19:44:17.306681+00:00",
+ "key2": "2024-05-30T19:44:17.306681+00:00"
+ },
+ "keyPolicy": null,
+ "kind": "StorageV2",
+ "largeFileSharesState": null,
+ "lastGeoFailoverTime": null,
+ "location": "westus2",
+ "minimumTlsVersion": "TLS1_0",
+ "name": "mygsa1717095655594493",
+ "networkRuleSet": {
+ "bypass": "AzureServices",
+ "defaultAction": "Allow",
+ "ipRules": [],
+ "ipv6Rules": [],
+ "resourceAccessRules": null,
+ "virtualNetworkRules": []
+ },
+ "primaryEndpoints": {
+ "blob": "https://mygsa1717095655594493.blob.core.windows.net/",
+ "dfs": "https://mygsa1717095655594493.dfs.core.windows.net/",
+ "file": "https://mygsa1717095655594493.file.core.windows.net/",
+ "internetEndpoints": null,
+ "microsoftEndpoints": null,
+ "queue": "https://mygsa1717095655594493.queue.core.windows.net/",
+ "table": "https://mygsa1717095655594493.table.core.windows.net/",
+ "web": "https://mygsa1717095655594493.z5.web.core.windows.net/"
+ },
+ "primaryLocation": "westus2",
+ "privateEndpointConnections": [],
+ "provisioningState": "Succeeded",
+ "publicNetworkAccess": null,
+ "resourceGroup": "ciroque-n4-workshop",
+ "routingPreference": null,
+ "sasPolicy": null,
+ "secondaryEndpoints": null,
+ "secondaryLocation": null,
+ "sku": {
+ "name": "Standard_LRS",
+ "tier": "Standard"
+ },
+ "statusOfPrimary": "available",
+ "statusOfSecondary": null,
+ "storageAccountSkuConversionStatus": null,
+ "tags": {
+ "owner": "ciroque"
+ },
+ "type": "Microsoft.Storage/storageAccounts"
+ }
+ ```
+
+1. After this is created, grab the Account Key for later use...
+
+ ```shell
+ export ACCOUNT_KEY=$(az storage account keys list --resource-group $MY_RESOURCEGROUP --account-name $STORAGE_ACCOUNT_NAME --query "[0].value" --output tsv)
+ echo $ACCOUNT_KEY
+ ```
+
+1. Add CORS to ensure that the storage account can be accessed from the web:
+
+ ```shell
+ az storage cors add --account-name $STORAGE_ACCOUNT_NAME --account-key $ACCOUNT_KEY --services b --origins "*" --methods GET HEAD --allowed-headers "*" --exposed-headers "*" --max-age 3600
+ ```
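+
+    To confirm the CORS rule was added, you can list the rules back out (a quick sanity check using the same account variables):
+
+    ```shell
+    # List the blob service CORS rules for the storage account
+    az storage cors list --account-name $STORAGE_ACCOUNT_NAME --account-key $ACCOUNT_KEY --services b
+    ```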
+
+1. The Storage Connection String is necessary for the App Configuration Store; it is used to let the application store images:
+
+ ```shell
+ export STORAGE_CONNECTION_STRING=$(az storage account show-connection-string --name $STORAGE_ACCOUNT_NAME --resource-group $MY_RESOURCEGROUP --output tsv)
+ echo $STORAGE_CONNECTION_STRING
+ ```
+
+1. Create the Storage Container, the place to store images:
+
+ ```shell
+ az storage container create --name $STORAGE_CONTAINER_NAME --account-name $STORAGE_ACCOUNT_NAME --account-key $ACCOUNT_KEY --public-access blob
+ ```
+
+ Sample output:
+
+ ```shell
+ {
+ "created": true
+ }
+ ```
+
+1. Create the App Configuration Store, the place to store configuration:
+
+ ```shell
+ az appconfig create --name $APP_CONFIG_NAME --resource-group $MY_RESOURCEGROUP --location $LOCATION --sku Standard --query id --output tsv
+ ```
+
+ Sample output:
+
+ ```shell
+ /subscriptions/7a0bb4ab-c5a7-46b3-b4ad-c10376166020/resourceGroups/ciroque-n4-workshop/providers/Microsoft.AppConfiguration/configurationStores/mygac1717095655594493
+ ```
+
+1. The Web Application -- My Garage -- needs this to be able to connect to the AppConfig instance and grab configuration:
+
+ ```shell
+ export APP_CONFIG_CONNECTION_STRING=$(az appconfig credential list --name $APP_CONFIG_NAME --resource-group $MY_RESOURCEGROUP --query "[?name=='Primary Read Only'].connectionString" -o tsv)
+ echo $APP_CONFIG_CONNECTION_STRING
+ ```
+
+ Sample output:
+
+ ```shell
+ Endpoint=https://mygac1717095655594493.azconfig.io;Id=FdhE;Secret=7mx92osNJalVfzg7AkDllEaDT8yqeSxNLWggm5m44SGq5cJ5KgBfJQQJ99AEAC8vTIns17ujAAABAZACYsE6
+ ```
+
+   1. The values required by the application need to be seeded. Note that all except for the RedisConnectionString have been gathered by this script:
+
+ ```shell
+ az appconfig kv set --yes --name $APP_CONFIG_NAME --key $STORAGE_CONNECTION_STRING_CONFIG_KEY --value "$STORAGE_CONNECTION_STRING"
+ az appconfig kv set --yes --name $APP_CONFIG_NAME --key $STORAGE_CONTAINER_NAME_CONFIG_KEY --value "$STORAGE_CONTAINER_NAME"
+ az appconfig kv set --yes --name $APP_CONFIG_NAME --key $REDIS_CONNECTION_STRING_CONFIG_KEY --value "$REDIS_CONNECTION_STRING"
+
+ echo "AppConfig Connection String: $APP_CONFIG_CONNECTION_STRING"
+ ```
+
+ Sample output:
+
+ ```shell
+ {
+ "contentType": "",
+ "etag": "-E-bQ-9J3tM60m1wMleds1D1X0HQZ10ImgOlOlnaG-k",
+ "key": "AzureStorageConnectionString",
+ "label": null,
+ "lastModified": "2024-05-30T19:52:50+00:00",
+ "locked": false,
+ "tags": {},
+ "value": "<...>"
+ }
+ {
+ "contentType": "",
+ "etag": "4FCiXDJPWO_QKTU2SrkShDxakOxQHPFXLHcGAsPbd_4",
+ "key": "AzureStorageContainerName",
+ "label": null,
+ "lastModified": "2024-05-30T19:52:52+00:00",
+ "locked": false,
+ "tags": {},
+ "value": "mygsa1717095655594493"
+ }
+ {
+ "contentType": "",
+ "etag": "ZIbhl1qmkZk3U8fJ6P7JvehSof0v-30GYGpri5RmNMU",
+ "key": "RedisConnectionString",
+ "label": null,
+ "lastModified": "2024-05-30T19:52:54+00:00",
+ "locked": false,
+ "tags": {},
+ "value": "redis.example.com:6379"
+ }
+
+ AppConfig Connection String: Endpoint=https://mygac1717095655594493.azconfig.io;Id=FdhE;Secret=7mx92osNJalVfzg7AkDllEaDT8yqeSxNLWggm5m44SGq5cJ5KgBfJQQJ99AEAC8vTIns17ujAAABAZACYsE6
+ ```
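+
+    To verify the seeding, you can list the keys back out of the AppConfig (a quick check; the values will mirror the sample output above):
+
+    ```shell
+    # List all key-values currently stored in the App Configuration Store
+    az appconfig kv list --name $APP_CONFIG_NAME --output table
+    ```
+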
### Lab exercise 2
-
+In this exercise you will establish the NGINX for Azure configuration.
+
+1. We will be installing The Garage app into AKS Cluster 2, so we'll need to be in that context and have a list of the nodes:
+
+ ```shell
+ kubectl config use-context n4a-aks2
+ kubectl get nodes
+ ```
+
+ Sample Output
+
+ ```shell
+ Switched to context "n4a-aks2".
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-59322765-vmss000000 Ready agent 14d v1.27.9
+ aks-nodepool1-59322765-vmss000001 Ready agent 14d v1.27.9
+ aks-nodepool1-59322765-vmss000002 Ready agent 14d v1.27.9
+ aks-nodepool1-59322765-vmss000003 Ready agent 14d v1.27.9
+ ```
+
+1. Open the Azure portal in your browser and then open your resource group. Click on your NGINX for Azure resource (`nginx4a`), which should open the Overview section of the resource. From the left pane, click on `NGINX Configuration` under `Settings`.
+
+1. Click on `+ New File`, to create a new Nginx config file. Name the new file `/etc/nginx/conf.d/the-garage-upstreams.conf`. You can use the example provided, just edit the Node Names to match your cluster:
+
+ ```nginx
+ upstream the_garage {
+ zone the_garage 256k;
+
+ least_time last_byte;
+
+ server aks-nodepool1--vmss000000:32080; #aks2 node1
+ server aks-nodepool1--vmss000001:32080; #aks2 node2
+ server aks-nodepool1--vmss000002:32080; #aks2 node3
+ server aks-nodepool1--vmss000003:32080; #aks2 node4
+
+ keepalive 32;
+
+ }
+ ```
+
+1. Click the `Submit` Button above the Editor. Nginx will validate your configurations, and if successful, will reload Nginx with your new configurations. If you receive an error, you will need to fix it before you proceed.
+
+1. Click on `+ New File`, to create a new Nginx config file. Name the new file `/etc/nginx/conf.d/the-garage.conf`. You can use the example provided, just copy/paste:
+
+ ```nginx
+ server {
+ listen 8080;
+
+ server_name the-garage.example.com;
+ status_zone the-garage.example.com; # Metrics zone name
+
+ access_log /var/log/nginx/the-garage.example.com.log main_ext;
+ error_log /var/log/nginx/the-garage.example.com_error.log info;
+
+ location / {
+ proxy_pass http://the_garage;
+ add_header X-Proxy-Pass the-garage; # Custom Header
+ }
+ }
+ ```
+
+1. Click the `Submit` Button above the Editor. Nginx will validate your configurations, and if successful, will reload Nginx with your new configurations. If you receive an error, you will need to fix it before you proceed.
+
+1. Click on `+ New File`, to create a new Nginx config file. Name the new file `/etc/nginx/conf.d/my-garage-upstreams.conf`. You can use the example provided, just edit the Node Names to match your cluster:
+
+ ```nginx
+ upstream my_garage {
+ zone my_garage 256k;
+
+ server aks-nodepool1--vmss000000:32080; #aks2 node1
+ server aks-nodepool1--vmss000001:32080; #aks2 node2
+ server aks-nodepool1--vmss000002:32080; #aks2 node3
+ server aks-nodepool1--vmss000003:32080; #aks2 node4
+
+ keepalive 8;
+ }
+ ```
+
+1. Click the `Submit` Button above the Editor. Nginx will validate your configurations, and if successful, will reload Nginx with your new configurations. If you receive an error, you will need to fix it before you proceed.
+
+1. Click on `+ New File`, to create a new Nginx config file. Name the new file `/etc/nginx/conf.d/my-garage.conf`. You can use the example provided, just copy/paste:
+
+ ```nginx
+ server {
+ listen 80;
+
+ server_name my-garage.example.com;
+ status_zone my-garage.example.com; # Metrics zone name
+
+ access_log /var/log/nginx/my-garage.example.com.log main_ext;
+ error_log /var/log/nginx/my-garage.example.com_error.log info;
+
+ location / {
+ proxy_pass http://my_garage;
+ add_header X-Proxy-Pass my-garage; # Custom Header
+ }
+ }
+ ```
+
+1. Click the `Submit` Button above the Editor. Nginx will validate your configurations, and if successful, will reload Nginx with your new configurations. If you receive an error, you will need to fix it before you proceed.
+
### Lab exercise 3
-
+In this exercise you will deploy the application into AKS Cluster 2.
+
+There are two Manifests that can be used to deploy the two services.
+
+1. `the-garage-deployment.yaml`: Deploys The Garage ASP.Net Core MVC application, which acts as the data layer.
+2. `my-garage-deployment.yaml`: Deploys the ASP.Net Blazor WebAssembly application, the front-end.
+
+To prepare the `the-garage` service for deployment, replace the `_YOUR_APP_CONFIG_CONNECTION_STRING_HERE_` token in the `the-garage-deployment.yaml` file with the value of the
+`APP_CONFIG_CONNECTION_STRING` environment variable.
+
+Then apply the Manifest:
+
+```bash
+kubectl apply -f ./the-garage-deployment.yaml
+```
+
+Navigate to the AKS Cluster 2 (n4a-aks2) in the Azure Console, then Workloads on the left. In the list of Workloads on the right,
+find the "the-garage-deployment" and click on it. You should see a Pod running.
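+
+As an alternative to the portal, you can verify the same resources from the command line. The names come from the Manifest above; `kubectl get virtualserver` assumes the NGINX Ingress CRDs from the earlier labs are installed in this cluster:
+
+```bash
+# Check the pod, headless service, and VirtualServer created by the Manifest
+kubectl get pods -l app=the-garage
+kubectl get svc the-garage-svc
+kubectl get virtualserver the-garage-vs
+```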
+
+Next, to deploy the front-end, apply the Manifest:
+
+```bash
+kubectl apply -f ./my-garage-deployment.yaml
+```
+
+Navigate to the AKS Cluster 2 (n4a-aks2) in the Azure Console, then Workloads on the left. In the list of Workloads on the right,
+find the "my-garage-deployment" and click on it. You should see a Pod running.
+
+Next, update your hosts file to include `the-garage.example.com` and `my-garage.example.com` (use your NGINX for Azure public IP in place of the `0.0.0.0` placeholder below):
-### << more exercises/steps>>
+```text
+0.0.0.0 example.com dashboard.example.com cafe.example.com the-garage.example.com my-garage.example.com
+```
-
+Then navigate to the application: [My Garage](http://my-garage.example.com).
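+
+As a final smoke test from the terminal, the `/garage-lab` route defined in the VirtualServer Manifest returns a fixed text response, so (assuming the hosts entry above and the Nginx for Azure configs from the previous exercise are in place) you should be able to see it end to end:
+
+```bash
+# Hits the /garage-lab return route defined in my-garage-deployment.yaml
+curl http://my-garage.example.com/garage-lab
+```
+
+Expected response, per the Manifest: `Welcome to Nginx4Azure Garage Lab !!`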
@@ -71,7 +443,8 @@ By the end of the lab you will be able to:
- Chris Akker - Solutions Architect - Community and Alliances @ F5, Inc.
- Shouvik Dutta - Solutions Architect - Community and Alliances @ F5, Inc.
- Adam Currier - Solutions Architect - Community and Alliances @ F5, Inc.
+- Steve Wagner - Solutions Architect - Community and Alliances @ F5, Inc.
-------------
-Navigate to ([Lab9](../lab9/readme.md) | [LabX](../labX/readme.md))
+Navigate to ([Lab9](../lab9/readme.md) | [LabGuide](../readme.md))
diff --git a/labs/lab8/the-garage-deployment.yaml b/labs/lab8/the-garage-deployment.yaml
new file mode 100644
index 0000000..ad95d2c
--- /dev/null
+++ b/labs/lab8/the-garage-deployment.yaml
@@ -0,0 +1,83 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: the-garage-deployment
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: the-garage
+ template:
+ metadata:
+ labels:
+ app: the-garage
+ spec:
+ containers:
+ - name: the-garage
+ image: ghcr.io/ciroque/the-garage:18fb641c
+ ports:
+ - containerPort: 8080
+ env:
+ - name: ASPNETCORE_ENVIRONMENT
+ value: "Production"
+ - name: ConnectionStrings__AppConfig
+ value: "Endpoint=https://mygac1719520913635582.azconfig.io;Id=wOL3;Secret=Bnt3dVF8vU4ZqnmPyl2i6Zroi7z5GUOCu6Zjr6rViB0FUCTkwyFDJQQJ99AFAC8vTIns17ujAAABAZACiTCv"
+ livenessProbe:
+ httpGet:
+ path: "/vehicles"
+ port: 8080
+ initialDelaySeconds: 30
+ periodSeconds: 10
+ readinessProbe:
+ httpGet:
+ path: "/vehicles"
+ port: 8080
+ initialDelaySeconds: 30
+ periodSeconds: 10
+
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: the-garage-svc
+spec:
+ type: ClusterIP
+ clusterIP: None
+ ports:
+ - name: http-alt
+ protocol: TCP
+ port: 8080
+ targetPort: 8080
+ selector:
+ app: the-garage
+
+---
+apiVersion: k8s.nginx.org/v1
+kind: VirtualServer
+metadata:
+ name: the-garage-vs
+spec:
+ host: the-garage.example.com
+ upstreams:
+ - name: the-garage
+ service: the-garage-svc
+ port: 8080
+ healthCheck:
+ enable: true
+ path: /vehicles
+ interval: 10s
+ jitter: 3s
+ fails: 3
+ passes: 2
+ connect-timeout: 30s
+ read-timeout: 20s
+ routes:
+ - path: /
+ action:
+ pass: the-garage
+ - path: /garage-lab
+ action:
+ return:
+ code: 200
+ type: text/html
+ body: "You have found The Garage"
\ No newline at end of file
diff --git a/ca-notes/aks/juiceshop/juiceshop-vs.yaml b/labs/lab9/juiceshop-vs.yaml
similarity index 70%
rename from ca-notes/aks/juiceshop/juiceshop-vs.yaml
rename to labs/lab9/juiceshop-vs.yaml
index 09a5d36..346c8df 100644
--- a/ca-notes/aks/juiceshop/juiceshop-vs.yaml
+++ b/labs/lab9/juiceshop-vs.yaml
@@ -6,14 +6,20 @@ metadata:
name: juiceshop-vs
namespace: juice
spec:
- host: juiceshop.nginxazure.build
- tls:
- secret: juice-secret
+ host: juiceshop.example.com
+ #tls:
+ #secret: juice-secret
upstreams:
- name: juiceshop
service: juiceshop-svc
port: 80
- slow-start: 5s
+ #slow-start: 5s
+ sessionCookie:
+ enable: true
+ name: srv_id
+ path: /
+ expires: 1h
+ domain: .example.com
healthCheck:
enable: true
port: 3000
diff --git a/ca-notes/n4a-configs/juiceshop.nginxazure.build.conf b/labs/lab9/juiceshop.example.com.conf
similarity index 72%
rename from ca-notes/n4a-configs/juiceshop.nginxazure.build.conf
rename to labs/lab9/juiceshop.example.com.conf
index 5ef1197..d483206 100644
--- a/ca-notes/n4a-configs/juiceshop.nginxazure.build.conf
+++ b/labs/lab9/juiceshop.example.com.conf
@@ -10,12 +10,12 @@ server {
listen 80; # Listening on port 80 on all IP addresses on this machine
- server_name juiceshop.nginxazure.build; # Set hostname to match in request
+ server_name juiceshop.example.com; # Set hostname to match in request
status_zone juiceshop;
# access_log /var/log/nginx/juiceshop.log main;
- access_log /var/log/nginx/juiceshop.nginxazure.build.log main_ext; # Extended Logging
- error_log /var/log/nginx/juiceshop.nginxazure.build_error.log info;
+ access_log /var/log/nginx/juiceshop.example.com.log main_ext; # Extended Logging
+ error_log /var/log/nginx/juiceshop.example.com_error.log info;
location / {
@@ -23,17 +23,17 @@ server {
# Set Rate Limit, uncomment below
# limit_req zone=limit100; #burst=110; # Set Limit and burst here
- # limit_req_status 429; # Set HTTP Return Code, better than 503s
+ # limit_req_status 429; # Set HTTP Status Code, better than 503s
# limit_req_dry_run on; # Test the Rate limit, logged, but not enforced
# add_header X-Ratelimit-Status $limit_req_status; # Add a custom status header
proxy_pass http://aks1_ingress; # Proxy to AKS1 Nginx Ingress Controllers
- # proxy_pass http://aks1_juice_direct; # Proxy directly to Juiceshop Headless Service
+ add_header X-Proxy-Pass aks1_ingress_juiceshop; # Custom Header
}
# Cache Proxy example for static images / page components
- # Match common files
+ # Match common files with Regex
location ~* \.(?:ico|jpg|png)$ {
### Uncomment for new status_zone in dashboard
@@ -52,8 +52,8 @@ server {
add_header X-Cache-Status $upstream_cache_status;
- #proxy_pass http://aks1_ingress; # Proxy AND load balance to AKS1 NIC
- proxy_pass http://aks1_juice_direct; # Proxy directly to Juiceshop Headless Service
+ proxy_pass http://aks1_ingress; # Proxy AND load balance to AKS1 NIC
+ add_header X-Proxy-Pass nginxazure_imagecache; # Custom Header
}
diff --git a/ca-notes/aks/juiceshop/juiceshop.yaml b/labs/lab9/juiceshop.yaml
similarity index 100%
rename from ca-notes/aks/juiceshop/juiceshop.yaml
rename to labs/lab9/juiceshop.yaml
diff --git a/labs/lab9/media/juiceshop-icon.png b/labs/lab9/media/juiceshop-icon.png
new file mode 100644
index 0000000..afb8ed1
Binary files /dev/null and b/labs/lab9/media/juiceshop-icon.png differ
diff --git a/labs/lab9/media/lab9_chrome-add-headers.png b/labs/lab9/media/lab9_chrome-add-headers.png
new file mode 100644
index 0000000..8065937
Binary files /dev/null and b/labs/lab9/media/lab9_chrome-add-headers.png differ
diff --git a/labs/lab9/media/lab9_chrome-hit-miss-expired.png b/labs/lab9/media/lab9_chrome-hit-miss-expired.png
new file mode 100644
index 0000000..e16e626
Binary files /dev/null and b/labs/lab9/media/lab9_chrome-hit-miss-expired.png differ
diff --git a/labs/lab9/media/lab9_chrome-manage-headers.png b/labs/lab9/media/lab9_chrome-manage-headers.png
new file mode 100644
index 0000000..48ca807
Binary files /dev/null and b/labs/lab9/media/lab9_chrome-manage-headers.png differ
diff --git a/labs/lab9/media/lab9_chrome-new-columns.png b/labs/lab9/media/lab9_chrome-new-columns.png
new file mode 100644
index 0000000..6dc1725
Binary files /dev/null and b/labs/lab9/media/lab9_chrome-new-columns.png differ
diff --git a/labs/lab9/media/lab9_diagram.png b/labs/lab9/media/lab9_diagram.png
new file mode 100644
index 0000000..df8573e
Binary files /dev/null and b/labs/lab9/media/lab9_diagram.png differ
diff --git a/labs/lab9/media/lab9_juiceshop-upstreams.png b/labs/lab9/media/lab9_juiceshop-upstreams.png
new file mode 100644
index 0000000..f7edc70
Binary files /dev/null and b/labs/lab9/media/lab9_juiceshop-upstreams.png differ
diff --git a/labs/lab9/media/lab9_rate-100.png b/labs/lab9/media/lab9_rate-100.png
new file mode 100644
index 0000000..1a758bd
Binary files /dev/null and b/labs/lab9/media/lab9_rate-100.png differ
diff --git a/labs/lab9/media/lab9_rate-1000.png b/labs/lab9/media/lab9_rate-1000.png
new file mode 100644
index 0000000..f56c5ae
Binary files /dev/null and b/labs/lab9/media/lab9_rate-1000.png differ
diff --git a/labs/lab9/media/lab9_ratelimit-429.png b/labs/lab9/media/lab9_ratelimit-429.png
new file mode 100644
index 0000000..fa2f483
Binary files /dev/null and b/labs/lab9/media/lab9_ratelimit-429.png differ
diff --git a/labs/lab9/media/lab9_ratelimit-503.png b/labs/lab9/media/lab9_ratelimit-503.png
new file mode 100644
index 0000000..bbcb1da
Binary files /dev/null and b/labs/lab9/media/lab9_ratelimit-503.png differ
diff --git a/labs/lab9/media/lab9_ratelimit-dry-run.png b/labs/lab9/media/lab9_ratelimit-dry-run.png
new file mode 100644
index 0000000..689a952
Binary files /dev/null and b/labs/lab9/media/lab9_ratelimit-dry-run.png differ
diff --git a/labs/lab9/media/mygarage-icon.png b/labs/lab9/media/mygarage-icon.png
new file mode 100644
index 0000000..4e52540
Binary files /dev/null and b/labs/lab9/media/mygarage-icon.png differ
diff --git a/labs/lab9/media/nginx-azure-icon.png b/labs/lab9/media/nginx-azure-icon.png
new file mode 100644
index 0000000..70ab132
Binary files /dev/null and b/labs/lab9/media/nginx-azure-icon.png differ
diff --git a/labs/lab9/media/nginx-cache-icon.png b/labs/lab9/media/nginx-cache-icon.png
new file mode 100644
index 0000000..462c184
Binary files /dev/null and b/labs/lab9/media/nginx-cache-icon.png differ
diff --git a/labs/lab9/media/speedometer-icon.jpeg b/labs/lab9/media/speedometer-icon.jpeg
new file mode 100644
index 0000000..0a50e0f
Binary files /dev/null and b/labs/lab9/media/speedometer-icon.jpeg differ
diff --git a/ca-notes/n4a-configs/nginx.conf b/labs/lab9/nginx.conf
similarity index 88%
rename from ca-notes/n4a-configs/nginx.conf
rename to labs/lab9/nginx.conf
index 119ba14..c0dbadf 100644
--- a/ca-notes/n4a-configs/nginx.conf
+++ b/labs/lab9/nginx.conf
@@ -1,3 +1,6 @@
+# Nginx 4 Azure - Default - Updated Nginx.conf
+# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+#
user nginx;
worker_processes auto;
worker_rlimit_nofile 8192;
@@ -34,17 +37,13 @@ http {
'upstream_response_length="$upstream_response_length", '
'cachestatus=“$upstream_cache_status“, '
'limitstatus=“$limit_req_status“ ';
-
- # access_log off;
- server_tokens N4A-$nginx_version;
+
+ access_log off;
+ server_tokens "";
server {
- # listen 80 default_server;
- listen 80;
- server_name www.nginxazure.build; # nginxazure.com;
- status_zone www.nginxazure.build;
- add_header X-Host-Header $host;
+ listen 80 default_server;
+ server_name localhost;
location / {
- status_zone /;
# Points to a directory with a basic html index file with
# a "Welcome to NGINX as a Service for Azure!" page
root /var/www;
diff --git a/ca-notes/n4a-configs/includes/rate_limits.conf b/labs/lab9/rate_limits.conf
similarity index 68%
rename from ca-notes/n4a-configs/includes/rate_limits.conf
rename to labs/lab9/rate_limits.conf
index 2b5c03f..9011890 100644
--- a/ca-notes/n4a-configs/includes/rate_limits.conf
+++ b/labs/lab9/rate_limits.conf
@@ -1,3 +1,8 @@
+# Nginx 4 Azure - Mar 2024
+# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+#
+# Define HTTP Request Limit Zones
+#
limit_req_zone $binary_remote_addr zone=limitone:10m rate=1r/s;
limit_req_zone $binary_remote_addr zone=limit10:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=limit100:10m rate=100r/s;
diff --git a/labs/lab9/readme.md b/labs/lab9/readme.md
index 892935b..753b6e0 100644
--- a/labs/lab9/readme.md
+++ b/labs/lab9/readme.md
@@ -1,51 +1,503 @@
-# NGINX and AzureAD / Entra Integration
+# Nginx Caching / Rate Limits / Juiceshop
## Introduction
-In this lab, you will build ( x,y,x ).
+In this lab, you will deploy an image-rich application, and use Nginx Caching to cache images to improve performance and provide a better user experience. This will offload the image delivery workload from your applications, saving resources. You will also explore, configure, and test Rate Limits with Nginx for Azure, allowing you to control incoming request levels for different applications.
-< Lab specific Images here, in the /media sub-folder >
+
-NGINX aaS | Docker
-:-------------------------:|:-------------------------:
- |
+NGINXaaS Azure | Cache | Juiceshop | My Garage
+:-----------------:|:-----------------:|:-----------------:|:-----------------:
+ | | |
## Learning Objectives
-By the end of the lab you will be able to:
-
-- Introduction to `xx`
-- Build an `yyy` Nginx configuration
-- Test access to your lab enviroment with Curl and Chrome
-- Investigate `zzz`
-
+- Deploy JuiceShop in AKS cluster.
+- Expose JuiceShop with Nginx Ingress Controller.
+- Configure Nginx for Azure for load balancing JuiceShop.
+- Configure Nginx for Azure for load balancing My Garage.
+- Add Nginx Caching to improve delivery of images.
+- Explore, configure, and test HTTP Request Limits
## Pre-Requisites
-- You must have `aaaa` installed and running
-- You must have `bbbbb` installed
+- You must have your Nginx for Azure instance running
+- You must have your AKS1 Cluster running
- See `Lab0` for instructions on setting up your system for this Workshop
- Familiarity with basic Linux commands and commandline tools
-- Familiarity with basic Docker concepts and commands
- Familiarity with basic HTTP protocol
+- Familiarity with HTTP Caching parameters, directives, headers
+
+
+
+
+
+Lab9 Diagram
+
+
+
+## Deploy Juiceshop to AKS Cluster #1
+
+
+
+
+
+In this exercise, you will deploy the demo Juiceshop application to AKS1. Juiceshop is a demo Retail store application, selling different juices, smoothies, and snacks. The images used on the various web pages make ideal candidates for image caching. You will also configure the Nginx Ingress Controller for this application.
+
+1. Inspect the `lab9/juiceshop.yaml` and then `lab9/juiceshop-vs.yaml` manifests. You will see definitions for three Juiceshop application pods being deployed, and a new VirtualServer being added to Nginx Ingress to expose the app outside the Cluster.
+
+1. Using your Terminal, create a new namespace `juice` and deploy the Juiceshop application to AKS1. Also deploy the Nginx Ingress VirtualServer, to create the Service and VirtualServer for `juiceshop.example.com`. Use the Manifests provided in the `lab9` folder:
+
+ ```bash
+ kubectl config use-context n4a-aks1
+ kubectl create namespace juice
+ kubectl apply -f lab9/juiceshop.yaml
+ kubectl apply -f lab9/juiceshop-vs.yaml
+
+ ```
+
+ ```bash
+ #Sample output
+ namespace/juice created
+ deployment.apps/juiceshop created
+ service/juiceshop-svc created
+ secret/juice-secret created
+   virtualserver.k8s.nginx.org/juiceshop-vs created
+
+ ```
+
+1. Check your Nginx Ingress Dashboard for AKS1, you should now find `juiceshop.example.com` in the HTTP Zones, and a new `vs_juice_juiceshop-vs_juiceshop` Upstream block in the HTTP Upstreams tab, with 3 Pods running on port 3000.
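+
+   You can confirm the same from the command line (the `juice` namespace and resource names come from the Manifests you just applied):
+
+   ```bash
+   # Expect 3 juiceshop pods, the juiceshop-svc Service, and the juiceshop-vs VirtualServer
+   kubectl get pods,svc -n juice
+   kubectl get virtualserver -n juice
+   ```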
+
+
+
+## Update Nginx.conf for Extended Logging
+
+During the next exercises, an updated `main_ext` logging format will be used, to capture Nginx Cache and Rate Limit logging variables.
+
+1. Inspect, then modify your `/etc/nginx/nginx.conf` file to match the `main_ext` logging format example provided. You will notice two new Nginx Logging variables have been added to this log format, `$upstream_cache_status` and `$limit_req_status`. These will be used in the next few exercises.
+
+```nginx
+# Nginx 4 Azure - Default - Updated Nginx.conf
+# Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+#
+user nginx;
+worker_processes auto;
+worker_rlimit_nofile 8192;
+pid /run/nginx/nginx.pid;
+
+events {
+ worker_connections 4000;
+}
+
+error_log /var/log/nginx/error.log error;
+
+http {
+ log_format main '$remote_addr - $remote_user [$time_local] "$request" '
+ '$status $body_bytes_sent "$http_referer" '
+ '"$http_user_agent" "$http_x_forwarded_for"';
+
+ log_format main_ext 'remote_addr="$remote_addr", '
+ '[time_local=$time_local], '
+ 'request="$request", '
+ 'status="$status", '
+ 'http_referer="$http_referer", '
+ 'body_bytes_sent="$body_bytes_sent", '
+ 'Host="$host", '
+ 'sn="$server_name", '
+ 'request_time=$request_time, '
+ 'http_user_agent="$http_user_agent", '
+ 'http_x_forwarded_for="$http_x_forwarded_for", '
+ 'request_length="$request_length", '
+ 'upstream_address="$upstream_addr", '
+ 'upstream_status="$upstream_status", '
+ 'upstream_connect_time="$upstream_connect_time", '
+ 'upstream_header_time="$upstream_header_time", '
+ 'upstream_response_time="$upstream_response_time", '
+ 'upstream_response_length="$upstream_response_length", '
+        'cachestatus="$upstream_cache_status", '
+        'limitstatus="$limit_req_status" ';
+
+ access_log off;
+ server_tokens "";
+ server {
+ listen 80 default_server;
+ server_name localhost;
+ location / {
+ # Points to a directory with a basic html index file with
+ # a "Welcome to NGINX as a Service for Azure!" page
+ root /var/www;
+ index index.html;
+ }
+ }
+
+ include /etc/nginx/conf.d/*.conf;
+ include /etc/nginx/includes/*.conf; # shared files
+
+}
+
+stream {
+
+ include /etc/nginx/stream/*.conf; # Stream TCP nginx files
+
+}
+
+```
+
+Submit your Nginx Configuration.
+
+## Add Caching to Nginx for Azure
+
+
-### Lab exercise 1
+In this exercise, you will create an Nginx for Azure configuration, to add Caching for the images of the Juiceshop application. You will also configure Nginx for Azure to expose your Juiceshop application to the Internet. You will test it, and use various tools to verify that caching is working as expected.
+
+1. Inspect the `lab9/juiceshop.example.com.conf` configuration file. Make note of the following items, which enable `Nginx Proxy Caching` for images:
+
+   - Line #7 - create the Cache: path on disk, cache name = image_cache, memory zone size, max image size, disable temp files
+   - Line #13 - set the hostname
+   - Line #14 - create a status zone for metrics
+   - Lines #17-18 - set the logging filenames
+   - Line #30 - send requests to Nginx Ingress in AKS1
+   - Line #31 - set the Header for tracking
+   - Lines #37-62 - a new `location block`, with the following parameters:
+     - Line #39 - use a Regular Expression (regex) to identify image types
+     - Line #42 - new status zone for image metrics
+     - Lines #44-46 - use the `image_cache` created earlier on Line #7:
+       - cache 200 responses for 60 seconds
+       - use a cache key made up of three Nginx request $variables
+     - Lines #49-51 - set and control Caching Headers
+     - Line #55 - set a Custom Header for Cache Status = HIT, MISS, EXPIRED
+     - Line #57 - send requests to Nginx Ingress in AKS1
+     - Line #58 - set another Custom Header for tracking
+
+   As you can see, there are quite a few Caching directives and parameters that must be set properly. If you would like to learn more, there are Advanced Nginx Caching classes available from Nginx University that cover architectures and many more details and use cases. You will also find quite a few blogs and a Webinar on Nginx Caching - it is a popular topic. See the References Section.
+
+ But for this exercise, you will just enable it with the minimal configuration and test it out with Chrome.
+
+1. Create the Nginx for Azure configuration needed for `juiceshop.example.com`.
+
+ Using the Nginx for Azure Console, create a new config file, `/etc/nginx/conf.d/juiceshop.example.com.conf`. You can use the example file provided, just Copy/Paste.
+
+ ```nginx
+ # Nginx 4 Azure - Juiceshop Nginx HTTP
+ # Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+ #
+ # Image Caching for Juiceshop
+ # Rate Limits testing
+ #
+ proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=image_cache:10m max_size=100m use_temp_path=off;
+ #
+ server {
+
+ listen 80; # Listening on port 80 on all IP addresses on this machine
+
+ server_name juiceshop.example.com; # Set hostname to match in request
+ status_zone juiceshop;
+
+ # access_log /var/log/nginx/juiceshop.log main;
+ access_log /var/log/nginx/juiceshop.example.com.log main_ext; # Extended Logging
+ error_log /var/log/nginx/juiceshop.example.com_error.log info;
+
+ location / {
+
+ # return 200 "You have reached juiceshop server block, location /\n";
+
+ # Set Rate Limit, uncomment below
+ # limit_req zone=limit100; #burst=110; # Set Limit and burst here
+ # limit_req_status 429; # Set HTTP Return Code, better than 503s
+ # limit_req_dry_run on; # Test the Rate limit, logged, but not enforced
+ # add_header X-Ratelimit-Status $limit_req_status; # Add a custom status header
+
+ proxy_pass http://aks1_ingress; # Proxy to AKS1 Nginx Ingress Controllers
+ add_header X-Proxy-Pass aks1_ingress_juiceshop; # Custom Header
+
+ }
+
+ # Cache Proxy example for static images / page components
+ # Match common files with Regex
+ location ~* \.(?:ico|jpg|png)$ {
+
+ ### Uncomment for new status_zone in dashboard
+ status_zone images;
+
+ proxy_cache image_cache;
+ proxy_cache_valid 200 60s;
+ proxy_cache_key $scheme$proxy_host$request_uri;
+
+ # Override cache control headers
+ proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie;
+ expires 365d;
+ add_header Cache-Control "public";
+
+ # Add a Cache status header - MISS, HIT, EXPIRED
+
+ add_header X-Cache-Status $upstream_cache_status;
+
+ proxy_pass http://aks1_ingress; # Proxy AND load balance to AKS1 NIC
+ add_header X-Proxy-Pass nginxazure_imagecache; # Custom Header
+
+ }
+
+ }
+
+ ```
+
+ Submit your Nginx Configuration.
+
+1. Update your local DNS `/etc/hosts` file, and add `juiceshop.example.com` to your list of Hostnames for this Workshop, using the Public IP of your Nginx for Azure instance. This now makes it FOUR hostnames active on 1 IP Address.
+
+ ```bash
+ cat /etc/hosts
+ # Nginx for Azure Workshop
+ 13.86.100.10 cafe.example.com dashboard.example.com redis.example.com juiceshop.example.com
+
+ ```
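+
+   Optionally, verify the new hostname resolves and routes correctly before moving on - a quick check, assuming your `/etc/hosts` entry and Nginx config are in place:
+
+   ```bash
+   # Expect an HTTP 200 and your X-Proxy-Pass custom header in the response
+   curl -I http://juiceshop.example.com
+
+   ```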
+
+### Test out Nginx for Azure Caching with Juiceshop
+
+1. Open Chrome and go to `http://juiceshop.example.com`. You should see the main Juiceshop page. Explore around a bit if you like, and find a great-tasting smoothie.
+
+1. Right+Click, and choose `Inspect` on the Chrome menu to open Developer Tools. On the top Nav bar, click the `Network Tab`, and make sure `Disable cache` is checked - you don't want Chrome caching any images for this exercise.
+
+1. Click Refresh, and you will see a long list of items being sent from the application.
+
+1. In the Object Details Display Bar, where you see `Name, Status, Type, Size, Time, etc`, Right+Click again, then choose `Response Headers`, then `Manage Header Columns`.
+
+ 
+
+   You will be adding your THREE custom Nginx headers to the display for easy viewing. Click on `Add custom header...`, and input these names one at a time:
+
+ - X-Cache-Status
+ - X-Proxy-Pass
+ - X-RateLimit-Status
+
+   This adds these Headers to the display, making it easy to see the Header Values.
+
+ 
+
+ 
+
+ *Now your Object Details Display should have these three new Headers you can watch.*
+
+1. Click Refresh again - what do you see? The `X-Cache-Status` header will display `HIT`, `MISS`, or `EXPIRED`, depending on how Nginx is caching, or not caching, each object. A MISS means the object was not in the cache at all, of course. Clear the Dev Tools display, and click Refresh a couple more times - can you find some HITs? If you wait more than 60 seconds and Refresh again, these same objects will show EXPIRED. Click on one of the objects of interest, and check the Response Headers.
+
+ 
+
+ What does X-Proxy-Pass show? Does it show 2 different Values?
+ - one for `aks1_ingress_juiceshop` for your first `location / block`
+ - and `nginxazure_imagecache` for your `REGEX location block` for the image types?
-
+>Does Nginx for Azure actually proxy_pass to the aks1_ingress? That is a trick question!!
-### Lab exercise 2
+- YES, for Cache MISS and EXPIRED, right?
+- YES, for objects not matching the REGEX location block, correct?
+- NO, for Cache HITS, they are served from cache and do not need to be sent to the origin server.
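+
+You can also watch the cache behavior from the Terminal with curl. The image path below is hypothetical - copy a real one from the Network tab in Chrome:
+
+```bash
+# First request should be a MISS; repeat it within 60 seconds for a HIT (path is hypothetical)
+curl -s -o /dev/null -D - http://juiceshop.example.com/assets/public/images/products/apple_juice.jpg | grep -i x-cache-status
+curl -s -o /dev/null -D - http://juiceshop.example.com/assets/public/images/products/apple_juice.jpg | grep -i x-cache-status
+
+```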
+
+**Optional Exercise:** If you are comfortable with Regex, modify it to ADD `.js` and `.css` objects, JavaScript and Cascading Style Sheet files, and re-test. What are your observations?
+
+Here is a hint:
+
+```nginx
+...
+ location ~* \.(?:ico|jpg|png|js|css)$
+...
+
+```
+
+*Knowledge Test*
+
+Find the `carrot_juice` and `melon_bike` objects. What is different about them? Can you figure out what's going on?
+
+>Provide your Answer via Private Zoom Chat if you figure it out *and fix it!*
+
+
+
+## Nginx for Azure Caching Wrap Up
+
+Notice that it was pretty easy to define and enable Nginx Caching for images and even other static page objects. Also notice that you set the Valid time to 60 seconds. This was intentional, so you can watch objects Expire quickly. However, in Production, you will coordinate with your app team to determine the proper Cache age timer for different object types. You can create multiple caches, with different names and Regexes, to have granular control over type, age time, size, etc. It's EASY with Nginx for Azure!
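+
+As a sketch of that idea (not part of this lab's config - the zone name, path, and timings below are hypothetical), a second cache for scripts and styles with its own name and age timer might look like:
+
+```nginx
+# Hypothetical second cache, separate from image_cache, for JS/CSS objects
+proxy_cache_path /var/cache/nginx/static levels=1:2 keys_zone=static_cache:10m max_size=50m use_temp_path=off;
+
+server {
+...
+    location ~* \.(?:js|css)$ {
+        proxy_cache static_cache;
+        proxy_cache_valid 200 10m;     # longer age timer than the 60s image cache
+        proxy_cache_key $scheme$proxy_host$request_uri;
+        proxy_pass http://aks1_ingress;
+    }
+...
+}
+
+```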
+
+
+
+## Explore, configure, and test HTTP Request Limits
+
+
+
+In this exercise, you will explore Nginx HTTP Request Rate Limiting. You will configure some Limits, apply them to the Juiceshop application, and see the results of introducing various Limits. Rate Limiting has many practical use cases - limiting attacks, limiting bots, protecting load-sensitive URLs/APIs, providing classes of service, and others.
+
+1. Inspect the `lab9/rate-limits.conf` file. You will see 4 different Rate Limits defined, using the `limit_req_zone` directive. This directive creates an Nginx memory zone where the limit Keys and counters are stored. When a request matches a Key, the counter is incremented. If no Key exists yet, it is added to the zone and its counter is incremented, as you would expect. Keys are ephemeral: they are lost if you restart Nginx, but they are preserved during an Nginx Reload.
+
+ **Example: limit_req_zone $binary_remote_addr zone=limitone:10m rate=1r/s;**
+
+   A. The first parameter, `$binary_remote_addr`, is the Key used in the memory zone for tracking. In this example, the client's IP Address in binary format is used, as the binary form is shorter and uses less memory than a dot.ted.dec.imal IP Address string. You can use whatever Key $variable you like, as long as it is an Nginx $variable available when Nginx receives an HTTP request - like a cookie, URL argument, HTTP Header, TLS Serial Number, etc. There are literally hundreds of request $variables you could use, and you can combine multiple $variables together (see the sketch after this list).
+
+   B. The second parameter, `zone=limitXYZ:10m`, sets the name of the zone and its size - 10MB here. You can define larger memory zones if needed; 10MB is a good starting point. Each zone must have a unique name, which in this exercise matches the actual limit being defined. The size needed depends on how many Keys are stored.
+
+   - limitone is the zone for 1 request/second
+   - limit10 is the zone for 10 requests/second
+   - limit100 is the zone for 100 requests/second
+   - limit1000 is the zone for 1,000 requests/second
+
+ C. The third parameter is the actual Rate Limit Value, expressed as `r/s` for `requests/second`.
+
+   - You can define as many zones as you need, as long as you have enough memory for them. You can use a zone more than once in an Nginx configuration. You can see the number of requests being counted in each limit zone with Azure Monitoring. You can also use Nginx Logging $variables to track when Limits are counted and applied to a request. You will create an HTTP Header that will also show you the limit status of the request when Nginx sends back the response. *So you will have very good visibility into how/when the limits are being used.*
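+
+   As a sketch of combining $variables (the zone name and rate are hypothetical), a zone keyed on both the client IP and the request URI would count each client-and-URI pair separately:
+
+   ```nginx
+   # Hypothetical: limit each client per URI, instead of per client only
+   limit_req_zone $binary_remote_addr$request_uri zone=perclienturi:20m rate=10r/s;
+
+   ```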
+
+1. Using the Nginx for Azure Console, create a new file called `/etc/nginx/includes/rate-limits.conf`. You can use the example file provided, just Copy/Paste.
+
+ ```nginx
+ # Nginx 4 Azure - Mar 2024
+ # Chris Akker, Shouvik Dutta, Adam Currier - Mar 2024
+ #
+ # Define HTTP Request Limit Zones
+ #
+ limit_req_zone $binary_remote_addr zone=limitone:10m rate=1r/s;
+ limit_req_zone $binary_remote_addr zone=limit10:10m rate=10r/s;
+ limit_req_zone $binary_remote_addr zone=limit100:10m rate=100r/s;
+ limit_req_zone $binary_remote_addr zone=limit1000:10m rate=1000r/s;
+
+ ```
+
+1. Enable the Rate Limit section of the `/etc/nginx/conf.d/juiceshop.example.com.conf` file by removing the comments - these have already been provided for this exercise. You will test them all, one at a time, starting with `limit100`, 100 reqs/second:
+
+ ```nginx
+ ...
+
+ location / {
+
+ # return 200 "You have reached juiceshop server block, location /\n";
+
+ # Set Rate Limit, uncomment below
+ limit_req zone=limit100; #burst=110; # Set Limit and burst here
+ # limit_req_status 429; # Set HTTP Return Code, better than 503s
+ # limit_req_dry_run on; # Test the Rate limit, logged, but not enforced
+ add_header X-Ratelimit-Status $limit_req_status; # Add a custom status header
+
+ proxy_pass http://aks1_ingress; # Proxy to AKS1 Nginx Ingress Controllers
+ add_header X-Proxy-Pass aks1_ingress_juiceshop; # Custom Header
+
+ }
+
+ ```
+
+Notice the 2 directives enabled:
+
+- `limit_req` sets the active zone being used - in this example, limit100, meaning 100 requests/second. `burst` is optional, letting you define an overage to allow some elasticity in the limit enforcement (see the sketch after this list).
+- `add_header` creates a Custom Header, and adds the `limit_req_status $variable`, so you can see it with Chrome Dev Tools or curl.
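+
+As a sketch of the burst option (values hypothetical), adding the `nodelay` parameter forwards burst requests immediately instead of queueing them:
+
+```nginx
+# Allow short spikes of up to 110 requests above the rate, served without queueing delay
+limit_req zone=limit100 burst=110 nodelay;
+
+```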
+
+Submit your Nginx Configuration.
+
+### Test Rate Limit on Juiceshop application
+
+1. Using Chrome, navigate to `http://juiceshop.example.com`, and open the Chrome Dev Tools. You previously added the Nginx Custom Headers to the display, so you should already have a Header Column labeled `X-Ratelimit-Status`. Click Refresh several times - what do you see?
+
+ 
+
+   You will see a partial Juiceshop webpage, as Nginx is only allowing your computer to send 100 reqs/second. You see the Header status set to PASSED for requests that were allowed. Other requests were stopped for `Exceeding the Rate Limit`. Check the HTTP Status Code on an item that failed; you will find `503 Service Temporarily Unavailable`. Well, that does not describe the real situation, right? You have set a limit, not turned off the Service. So you will `change the HTTP Status code`, using the `limit_req_status` directive, which lets you set a custom HTTP Status code. The HTTP standard response for excessive requests is `429 Too Many Requests`, so you will change it to that.
+
+ 
+
+1. Using the Nginx for Azure Console, uncomment the `limit_req_status 429`, as shown. This will change the 503 Status Code to a more friendly HTTP Status Code of 429, which means `Too Many Requests`. This could be useful for clients that can use this 429 code to perform a back-off of the Requests, and try again after a time delay. (Most Browsers do not do this).
+
+ ```nginx
+ ...
+ location / {
+
+ # return 200 "You have reached juiceshop server block, location /\n";
+
+ # Set Rate Limit, uncomment below
+ limit_req zone=limit100; #burst=110; # Set Limit and burst here
+ limit_req_status 429; # Set HTTP Status Code, better than 503s
+ # limit_req_dry_run on; # Test the Rate limit, logged, but not enforced
+ add_header X-Ratelimit-Status $limit_req_status; # Add a custom status header
+
+ proxy_pass http://aks1_ingress; # Proxy to AKS1 Nginx Ingress Controllers
+ add_header X-Proxy-Pass aks1_ingress_juiceshop; # Custom Header
+
+ }
+
+ ```
+
+ Submit your Nginx Configuration.
+
+1. Test again with Chrome and Dev Tools, Refresh, and verify that you now see `HTTP Status Code 429` for limited requests.
+
+ 
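+
+   If you want to reproduce the limiting from the Terminal, a quick burst test with curl tallies the status codes returned. This is a hypothetical check - sequential curl may be too slow to trip a high limit, so increase the count or add parallelism as needed:
+
+   ```bash
+   # Fire 200 rapid requests and count the HTTP status codes (expect a mix of 200s and 429s)
+   for i in $(seq 1 200); do
+     curl -s -o /dev/null -w "%{http_code}\n" http://juiceshop.example.com/
+   done | sort | uniq -c
+
+   ```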
+
+   After consultation with your Dev team and testing different limits, you decide to change the limit to `1,000 Requests/Second`, to accommodate the normal traffic profile of the Juiceshop application. *This is likely an exercise you will have to do for every application using limits - determine normal traffic patterns and backend system performance, and set limits appropriately.*
+
+1. Using the Nginx for Azure Console, change the `limit_req` directive to use the `limit1000` zone as shown (and change the burst value if you are using it):
+
+ ```nginx
+ location / {
+
+ # return 200 "You have reached juiceshop server block, location /\n";
+
+ # Set Rate Limit, uncomment below
+ limit_req zone=limit1000; #burst=1100; # Set Limit and burst here
+ limit_req_status 429; # Set HTTP Status Code, better than 503s
+ # limit_req_dry_run on; # Test the Rate limit, logged, but not enforced
+ add_header X-Ratelimit-Status $limit_req_status; # Add a custom status header
+
+ proxy_pass http://aks1_ingress; # Proxy to AKS1 Nginx Ingress Controllers
+ add_header X-Proxy-Pass aks1_ingress_juiceshop; # Custom Header
+
+ }
+
+ ```
+
+ Submit your Nginx Configuration.
+
+1. Clear the Chrome Dev Tools display, and try Juiceshop again, much better!
+
+   However, 1,000 reqs/second may not quite be enough; if you Refresh several times, you will still see some 429 Status codes.
+
+ 
+
+### Test Nginx Rate Limit Dry Run
+
+To make it easier to `fine-tune and test` your Rate Limits, Nginx provides the `limit_req_dry_run` directive. This creates the limit but DOES NOT enforce it, so you can see the impact of the limit without actually dropping traffic - a very nice tool indeed!
+
+1. Using the Nginx for Azure Console, uncomment the `limit_req_dry_run on` directive.
+
+ ```nginx
+ location / {
+
+ # return 200 "You have reached juiceshop server block, location /\n";
+
+ # Set Rate Limit, uncomment below
+ limit_req zone=limit1000; #burst=1100; # Set Limit and burst here
+ limit_req_status 429; # Set HTTP Status Code, better than 503s
+ limit_req_dry_run on; # Test the Rate limit, logged, but not enforced
+ add_header X-Ratelimit-Status $limit_req_status; # Add a custom status header
+
+ proxy_pass http://aks1_ingress; # Proxy to AKS1 Nginx Ingress Controllers
+ add_header X-Proxy-Pass aks1_ingress_juiceshop; # Custom Header
+
+ }
+
+ ```
+
+ Submit your Nginx Configuration.
+
+1. Test again with Chrome and Dev Tools. What do you see? You should see the `X-Ratelimit-Status` Header now carry some more metadata, like `REJECTED_DRY_RUN` - which means this request exceeded the limit and `would have been dropped` if Dry Run were disabled. You will also find this info in your Extended Logging format, using the $limit_req_status logging variable.
+
+ 
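+
+   In the error log, a dry-run rejection looks something like the line below. This is an illustrative, hand-built example (the timestamp, PIDs, and values are hypothetical):
+
+   ```bash
+   # Illustrative error log entry for a dry run rejection
+   2024/04/01 16:40:12 [error] 71#71: *1045 limiting requests, dry run, excess: 12.480 by zone "limit1000", client: 73.20.10.5, server: juiceshop.example.com, request: "GET / HTTP/1.1"
+
+   ```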
+
+>**IMPORTANT NOTE:** The Rate Limit does NOT apply to the Regex location block for Juiceshop Images. WHY? Because you enabled the limit in the `location /` block, and image requests match the REGEX location block instead - so the Rate Limit (limit1000) does not apply there, and the X-RateLimit-Status header is not set. That is pretty cool - you can be very exact in where, how, and at what rate you apply Request Limits with Nginx!
+
+
-
+## Nginx Rate Limit Wrap Up
-### Lab exercise 3
+During these exercises, you configured and enabled various Request Limit directives, and tested them with an example application. You improved the information available to your Dev team to help tune the Limits to match the application. Nginx can also set headers and logging values to monitor how your Request Limits work.
-
+As a side note, NGINX also provides Limit directives for TCP connections, and for Bandwidth:
-### << more exercises/steps>>
+- limit_conn for TCP Connection controls (https://nginx.org/en/docs/http/ngx_http_limit_conn_module.html#limit_conn)
+- limit_rate for bandwidth/throughput controls (https://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate)
-
+And you can use multiple limit directives together for very fine-grained control of your traffic.
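+
+As a hypothetical combination (not part of this lab's config - the zone names and values are illustrative), all three controls can be applied together:
+
+```nginx
+# Hypothetical: combine request rate, connection count, and bandwidth controls
+limit_conn_zone $binary_remote_addr zone=connperip:10m;
+
+server {
+...
+    location /downloads/ {
+        limit_req zone=limit100 burst=20;   # cap the request rate
+        limit_conn connperip 10;            # cap concurrent connections per client
+        limit_rate 500k;                    # cap response bandwidth per connection
+        proxy_pass http://aks1_ingress;
+    }
+...
+}
+
+```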
@@ -56,6 +508,9 @@ By the end of the lab you will be able to:
## References:
- [NGINX As A Service for Azure](https://docs.nginx.com/nginxaas/azure/)
+- [NGINX Caching](https://www.nginx.com/products/nginx/caching/)
+- [NGINX Caching Blog](https://www.nginx.com/blog/nginx-caching-guide/)
+- [NGINX Caching Admin Guide](https://docs.nginx.com/nginx/admin-guide/content-cache/content-caching/)
- [NGINX Plus Product Page](https://docs.nginx.com/nginx/)
- [NGINX Ingress Controller](https://docs.nginx.com//nginx-ingress-controller/)
- [NGINX on Docker](https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-docker/)
@@ -74,4 +529,4 @@ By the end of the lab you will be able to:
-------------
-Navigate to ([Lab10](../lab10/readme.md) | [LabX](../labX/readme.md))
+Navigate to ([Lab10](../lab10/readme.md) | [LabGuide](../readme.md))
diff --git a/labs/labs-optional/readme.md b/labs/labs-optional/readme.md
index 3d162ba..0a5109e 100644
--- a/labs/labs-optional/readme.md
+++ b/labs/labs-optional/readme.md
@@ -1,4 +1,4 @@
-# Optional Exercises / Grafana
+# Optional Exercises / Grafana
## Introduction
@@ -31,9 +31,75 @@ By the end of the lab you will be able to:
-### Lab exercise 1
+## Create and attach Azure Container Registry (ACR)
+
+1. Create a container registry using the `az acr create` command. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters.
+ ```bash
+ MY_RESOURCEGROUP=s.dutta
+ MY_ACR=acrshouvik
+
+ az acr create \
+ --resource-group $MY_RESOURCEGROUP \
+ --name $MY_ACR \
+ --sku Basic
+ ```
+
+2. From the output of the `az acr create` command, make a note of the `loginServer`. The value of the `loginServer` key is the fully qualified registry name. In our example, the registry name is `acrshouvik` and the login server name is `acrshouvik.azurecr.io`.
+
+3. Log in to the registry using the command below. Make sure your local Docker daemon is up and running.
+ ```bash
+ MY_ACR=acrshouvik
+
+ az acr login --name $MY_ACR
+ ```
+ At the end of the output you should see `Login Succeeded`!
+
+### Test access to your Azure ACR
+
+We can quickly test the ability to push images to our Private ACR from our client machine.
+
+1. If you do not have a test container image to push to ACR, you can use a simple container for testing, e.g. [nginxinc/ingress-demo](https://hub.docker.com/r/nginxinc/ingress-demo). You will use this same container for the lab exercises.
+
+ ```bash
+ az acr import --name $MY_ACR --source docker.io/nginxinc/ingress-demo:latest --image nginxinc/ingress-demo:v1
+ ```
+   The above command pulls the `nginxinc/ingress-demo` image from Docker Hub and pushes it to your Azure ACR.
+
+2. Check if the image was successfully pushed to ACR using the Azure CLI command below:
+
+ ```bash
+ MY_ACR=acrshouvik
+ az acr repository list --name $MY_ACR --output table
+ ```
+ ```bash
+ ###Sample Output###
+ Result
+ ---------------------
+ nginxinc/ingress-demo
+ ```
+
+### Attach an Azure Container Registry (ACR) to Azure Kubernetes cluster (AKS)
+
+1. You will attach the newly created ACR to both AKS clusters. This will enable you to pull private images within the AKS clusters directly from your ACR. Run the command below to attach the ACR to the 1st AKS cluster:
+ ```bash
+ MY_RESOURCEGROUP=s.dutta
+ MY_AKS=aks-shouvik # first cluster
+ MY_ACR=acrshouvik
+
+ az aks update -n $MY_AKS -g $MY_RESOURCEGROUP --attach-acr $MY_ACR
+ ```
+
+1. Change the $MY_AKS environment variable so you can attach your ACR to your second Cluster:
+ ```bash
+ MY_RESOURCEGROUP=s.dutta
+ MY_AKS=aks2-shouvik # change to second cluster
+ MY_ACR=acrshouvik
+
+ az aks update -n $MY_AKS -g $MY_RESOURCEGROUP --attach-acr $MY_ACR
+ ```
+
+ **NOTE:** You need the Owner, Azure account administrator, or Azure co-administrator role on your Azure subscription. To avoid needing one of these roles, you can instead use an existing managed identity to authenticate ACR from AKS. See [references](#references) for more details.
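+
+   You can verify the attachment worked with the `az aks check-acr` command, assuming it is available in your Azure CLI version:
+
+   ```bash
+   # Validate that the cluster can authenticate to and pull from the ACR
+   az aks check-acr --name $MY_AKS --resource-group $MY_RESOURCEGROUP --acr $MY_ACR.azurecr.io
+   ```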
-
### Lab exercise 2
@@ -76,4 +142,4 @@ Nginx Rate Limiting here
-------------
-Navigate to ([LabX](../labX/readme.md) | [LabX](../labX/readme.md))
+Navigate to ([Lab Guide](../readme.md))
diff --git a/labs/media/developer-seated.svg b/labs/media/developer-seated.svg
new file mode 100644
index 0000000..2e58577
--- /dev/null
+++ b/labs/media/developer-seated.svg
@@ -0,0 +1,266 @@
+
+
+
diff --git a/labs/media/docker-icon.png b/labs/media/docker-icon.png
new file mode 100644
index 0000000..02ee3f1
Binary files /dev/null and b/labs/media/docker-icon.png differ
diff --git a/labs/media/kubernetes-icon.png b/labs/media/kubernetes-icon.png
new file mode 100644
index 0000000..63c679e
Binary files /dev/null and b/labs/media/kubernetes-icon.png differ
diff --git a/labs/media/n4aworkshop-banner.png b/labs/media/n4aworkshop-banner.png
new file mode 100644
index 0000000..f59177f
Binary files /dev/null and b/labs/media/n4aworkshop-banner.png differ
diff --git a/labs/media/nginx-azure-icon.png b/labs/media/nginx-azure-icon.png
new file mode 100644
index 0000000..70ab132
Binary files /dev/null and b/labs/media/nginx-azure-icon.png differ
diff --git a/labs/media/nginx-plus-icon.png b/labs/media/nginx-plus-icon.png
new file mode 100644
index 0000000..21f0e25
Binary files /dev/null and b/labs/media/nginx-plus-icon.png differ
diff --git a/labs/media/redis-icon.png b/labs/media/redis-icon.png
new file mode 100644
index 0000000..9d34aff
Binary files /dev/null and b/labs/media/redis-icon.png differ
diff --git a/labs/media/robot.svg b/labs/media/robot.svg
new file mode 100644
index 0000000..8a61287
--- /dev/null
+++ b/labs/media/robot.svg
@@ -0,0 +1,163 @@
+
+
+
diff --git a/labs/readme.md b/labs/readme.md
index cf6da8a..edbd2a4 100644
--- a/labs/readme.md
+++ b/labs/readme.md
@@ -1,196 +1,103 @@
-# Nginx for Azure Workshop Outline / Summary
-
-## Lab 0 - Prequesites - Subscription / Resources
-## Lab 1 - Azure VNet and Subnet and Network Security Group
-## Lab 2 - Nginx for Azure Overview and Deployment
-## Lab 3 - UbuntuVM, Docker, and Cafe or Garage Demo Deployment
-## Lab 4 - AKS / ACR / Nginx Ingress Controller / Cafe / Garage Demo
-## Lab 5 - Nginx Load Balancing / Reverse Proxy
-## Lab 6 - Azure Key Vault / TLS Essentials
-## Lab 7 - Azure Montoring / Logging Analytics
-## Lab 8 - Nginx Garage or Azure Petshop
-## Lab 9 - Nginx Split Clients
-## Lab 10 - Nginx Caching & Rate Limits / Juiceshop
-## Lab 11 - Optional Exercises / Grafana 4 Azure
-## Summary and Wrap-up
+
-### Lab 0 - Prequesites - Subscription / Resources
-
-- Overview
-In this Lab, the requirements for both the student and the Azure environment will be detailed. It is imperative that you have the appropriate computer, tools, and Azure access to successfully complete the workshop. The lab exercises must be done sequentially to build the environment as described.
-
-- Learning Objectives
-Verify you have the proper computer requirements - hardware and software.
-- Hardware: Laptop, Admin rights, Internet connection
-- Software: Visual Studio, Terminal, Chrome, Docker, AKS and AZ CLI.
-Verify you have proper computer skills.
-- Computer skills: Linux CLI, files, SSH/Terminal, Docker/Compose, Azure Portal, HTTP/S, Load Balancing concepts
-- Optional: TLS, DNS, FIPS, HTTP caching, Prometheus, Grafana
-Verify you have the proper access to Azure resources.
-- Azure subscription, list of Azure Roles/permissions here
-
-- Nginx for Azure Workshop has minimum REQUIRED Nginx Skills
-Students must be familiar with Nginx basic operation, configurations, and concepts for HTTP traffic.
--- The Nginx Basics Workshop is HIGHLY recommended, students should have taken this workshop prior.
--- The Nginx Plus Ingress Controller workshop is also HIGHLY, students should have taken this workshop prior.
+## NGINXaaS for Azure Workshop
-### Lab 1 - Azure VNet and Subnet and Network Security Group
-
-- Overview
-In this lab, you will be adding and configuring the Azure Networking components needed for this workshop. This will require only a few network resources, and a Network Security Group to allow incoming traffic to your Azure resources.
-
-- Learning Objectives
-Setup your Azure Vnet
-Setup your Azure Subnets
-Setuo your Azure Network Security group for inbound traffic
+### Overview
-### Lab 2 - Nginx for Azure Overview and Deployment
-
-- Overview
-In this lab, you will deploy and config a new Nginx for Azure instance.
-
-- Learning Objectives
-Deploy Nginx for Azure
-Enable Log Analytics
-Test basic HTTP traffic
-Create inital Nginx configurations to test with
+> Welcome to the NGINXaaS for Azure Workshop!
-### Lab 3 - UbuntuVM, Docker, and Cafe or Garage Demo Deployment
+This **NGINXperts Workshop** will introduce **`NGINXaaS for Azure`** with hands-on practice through lab exercises.
-## Overview
-In this lab, you will deploy an Ubuntu VM, install Docker and Docker Compose, and configure it for a Legacy web application. You will configure Nginx for Azure to Load balance this application.
+You will learn and explore NGINX for Azure, deploying and configuring it with various Azure Resources. You will use many NGINX Plus features for routing traffic, terminating TLS, splitting traffic, and caching. You will build a sample Enterprise environment in Azure with apps and services in Linux and Windows VMs, using Docker and multiple Kubernetes AKS clusters. You will terminate TLS, route HTTP/S traffic, and load balance running VMs, containers, pods, and Nginx Ingress Controllers. You will configure Advanced Nginx Plus features like Caching and Dynamic Split Clients for Blue/Green testing, using live traffic. You will expose both Web and TCP applications on the Internet. You will explore the integrations of Nginx with Azure Cloud Resources like Key Vault, Monitoring, Logging/Analytics, and Grafana.
-## Learning Objectives
+The Hands-on Lab Exercises are designed to build upon each other, adding additional services and features as you progress through them, so completing the labs in sequential order is required. You will follow along as an instructor guides you through these exercises.
-- Deploy Ubuntu VM with Azure CLI
-- Install Docker and Docker Compose
-- Run Nginx demo application containers
-- Test and validate your lab environment
-- Configure Nginx for Azure to load balance the Docker Containers on the UbuntuVM
-- Test your Nginx for Azure configs
+This is the third Workshop in the `NGINXperts Series` from the Community and Alliances Team at Nginx.
-### Lab 4 - AKS / Nginx Ingress / Redis / Cafe or Garage Demo Deployment
-
-- Overview
-In this lab, you will deploy 2 AKS clusters, with Nginx Ingress Controllers, a Redis cluster, and a Modern Web Application.
-
-- Learning Objectives
-Deploy 2 AKS clusters using the Azure AZ CLI.
-Pull and Push the Nginx Plus Ingress Controller image to Azure Container Registry, and deploy to the Clusters.
-Deploy and test a Redis In-Memory Cache to the AKS cluster.
-Configure Nginx Ingress for Redis Leader traffic.
-Test access to Redis Leader and Follower services.
-Deploy a modern web application in the cluster.
-Configure Nginx Ingress Controller to route traffic to the application.
+NGINXaaS for Azure | NGINXperts Workshops
+:-------------------------:|:-------------------------:
+ | 
-### Lab 5 - Nginx Load Balancing / Reverse Proxy
-
-- Overview
-In this lab, you will configure Nginx for Azure to Load Balance various workloads running in Azure. After successful configuration and adding Best Practice Nginx parameters, you will Load Test these applications, and test multiple load balancing and request routing parameters to suit different use cases.
--- Learning Objectives
-
-- Configure Nginx for Azure, to load balance traffic to both AKS Nginx Ingress Controllers.
-- Expose the NIC Plus Dashboards externally for Live Monitoring
-- Configure Nginx for Azure, to Proxy to Windows Server VM
-- Run Load tests on the Legacy and Modern web applications.
-- Configure HTTP Split Clients, and route traffic to 2 or 3 backend upstreams.
-- Configure Nginx for Azure to load balance directly to Nginx Ingress Controllers without NodePort.
-- Configure Nginx for Azure and Nginx Ingress for Redis Caching
+By the end of this Workshop, you will have a working, operational NGINX for Azure deployment and Lab environment, with the skills to deploy and operate one for your Modern Application projects in Azure.
-### Lab 6 - Azure Key Vault / TLS Essentials
-
-- Overview
-In this lab, you use Azure Key Vault for TLS certificates and keys. You will configure Nginx for Azure to use these Azure resources to terminate TLS.
-
-- Learning Objectives
-Create a sample Azure Key Vault
-Create a TLS cert/key
-Configure and test Nginx for Azure to use the Azure Keys
-Update the previous Nginx configurations to use TLS for apps
-Update NSGs for TLS inbound traffic
-
-
+### Prerequisites
-### Lab 7 - Azure Montoring / Logging Analytics
+See the [Lab0 Readme.md](lab0/readme.md) for details on Student Prerequisites for this Workshop.
-- Overview
-Enable and configure Azure Monitoring for Nginx for Azure. Create custom Azure Dashboards for your applications. Gain experience using Azure Logs and logging tools.
+
-- Learning Objectives
-Enable, configure, and test Azure Monitoring for Nginx for Azure.
-Create a couple custom dashboards for your load balanced applications.
-Explore the Azure logging and Analytics tools available.
+NGINXaaS for Azure | NGINX Plus | Kubernetes | Docker | Redis
+:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:
+ |  |  |  | 
-### Lab 8 - Nginx Garage or Azure Petshop
+## Lab Outline
-- Overview
-In this lab, you will deploy a modern application in your AKS cluster. You will expose it with Nginx Ingress Controller and Nginx for Azure.
+### Lab 0: Prerequisites - Azure Subscription / Resources
+- [Lab 0: Prerequisites - Azure Subscription / Resources](lab0/readme.md)
-- Learning Objectives
-Deploy the modern app in AKS
-Test and Verify the app is working correctly
-Expose this application outside the cluster with Nginx Ingress Controller
-Configure Nginx for Azure for this new application
+### Lab 1: Azure VNet and Subnet and Network Security Group
+- [Lab 1: Azure VNet and Subnet and Network Security Group](lab1/readme.md)
-
+### Lab 2: Nginx for Azure Overview and Deployment
+- [Lab 2: Nginx for Azure Overview and Deployment](lab2/readme.md)
-### Lab 9 - Nginx and AzureAD / Entra Integration
+### Lab 3: Ubuntu VM / Docker / Windows VM / Cafe Demo
+- [Lab 3: Ubuntu VM / Docker / Windows VM / Cafe Demo](lab3/readme.md)
-- Overview
-In this lab, you will create and configure an Azure Active Directory integration that will provide User based authentication to your web application. You will then create and test the Nginx for Azure AD configuration that will enforce this user authentication requirement.
+### Lab 4: AKS / Nginx Ingress Controller / Cafe Demo / Redis
+- [Lab 4: AKS / Nginx Ingress Controller / Cafe Demo / Redis](lab4/readme.md)
-- Learning Objectives
-Create and configure Azure AD Security settings.
-Create and configure Nginx for Azure to provide user authentication to your web application.
-Test and validate AzureAD is working as expected.
-Explore AzureAD related logging.
+### Lab 5: Nginx Load Balancing / Blue-Green / Split Clients
+- [Lab 5: Nginx Load Balancing / Blue-Green / Split Clients](lab5/readme.md)
-
+### Lab 6: Azure Monitoring / Logging Analytics
+- [Lab 6: Azure Monitoring / Logging Analytics](lab6/readme.md)
-### Lab 10 - Nginx Caching / Garage / Juiceshop
+### Lab 7: Azure Key Vault / TLS Essentials
+- [Lab 7: Azure Key Vault / TLS Essentials](lab7/readme.md)
-- Overview
-In this lab, you will deploy an image rich application, and use Nginx Caching to cache images to improve performance.
+### Lab 8: Nginx Garage Demo
+- [Lab 8: Nginx Garage Demo](lab8/readme.md)
-- Learning Objectives
-Deploy JuiceShop in AKS cluster.
-Expose JuiceShop with Nginx Ingress Controller
-Configure Nginx for Azure for load balancing JuiceShop.
-Add Nginx Caching to improve delivery of images.
+### Lab 9: Nginx Caching / Rate Limits / Juiceshop
+- [Lab 9: Nginx Caching / Rate Limits / Juiceshop](lab9/readme.md)
-
+### Lab 10: Nginx with Grafana for Azure
+- [Lab 10: Nginx with Grafana for Azure](lab10/readme.md)
-### Lab 11 - Optional Exercises
+#### Labs Optional: Optional Exercises
+- [Labs Optional: Optional Exercises](labs-optional/readme.md)
-Optional - Grafana with Azure
-- Overview
-In this lab, you will explore the Nginx and Grafana for Azure integration.
+
-- Learning Objectives
-Deploy Grafana for Azure.
-Configure the Datasource
-Explore a sample Grafana Dashboard for Nginx for Azure
+### Authors
+- Chris Akker - Solutions Architect - Community and Alliances @ F5, Inc.
+- Shouvik Dutta - Solutions Architect - Community and Alliances @ F5, Inc.
+- Adam Currier - Solutions Architect - Community and Alliances @ F5, Inc.
+- Steve Wagner - Solutions Architect - Community and Alliances @ F5, Inc.
-### Summary and Wrap-up
+Click [Lab0: Student Prerequisites](lab0/readme.md) for details on Student Prerequisite Requirements for this Workshop.
+
+Click [Lab1: Azure VNet and Subnet and Network Security Group](lab1/readme.md) to get started!
-- Summary and Wrap-up comments here
\ No newline at end of file
diff --git a/n4a-configs/etc/nginx/conf.d/aks1-upstreams.conf b/n4a-configs/etc/nginx/conf.d/aks1-upstreams.conf
new file mode 100644
index 0000000..444156e
--- /dev/null
+++ b/n4a-configs/etc/nginx/conf.d/aks1-upstreams.conf
@@ -0,0 +1,15 @@
+upstream aks1_ingress {
+ zone aks1_ingress 256k;
+
+ least_time last_byte;
+
+ # from nginx-ingress NodePort Service / aks Node names
+ # Note: change servers to match
+ #
+ server aks-nodepool1-_AKS1_NODES_-vmss000000:32080; #aks1 node1
+ server aks-nodepool1-_AKS1_NODES_-vmss000001:32080; #aks1 node2
+ server aks-nodepool1-_AKS1_NODES_-vmss000002:32080; #aks1 node3
+
+ keepalive 32;
+
+}
diff --git a/n4a-configs/etc/nginx/conf.d/aks2-upstreams.conf b/n4a-configs/etc/nginx/conf.d/aks2-upstreams.conf
new file mode 100644
index 0000000..2657b55
--- /dev/null
+++ b/n4a-configs/etc/nginx/conf.d/aks2-upstreams.conf
@@ -0,0 +1,16 @@
+upstream aks2_ingress {
+ zone aks2_ingress 256k;
+
+ least_time last_byte;
+
+ # from nginx-ingress NodePort Service / aks Node names
+ # Note: change servers to match
+ #
+ server aks-nodepool1-_AKS2_NODES_-vmss000000:32080; #aks2 node1
+ server aks-nodepool1-_AKS2_NODES_-vmss000001:32080; #aks2 node2
+ server aks-nodepool1-_AKS2_NODES_-vmss000002:32080; #aks2 node3
+ server aks-nodepool1-_AKS2_NODES_-vmss000003:32080; #aks2 node4
+
+ keepalive 32;
+
+}
diff --git a/n4a-configs/etc/nginx/conf.d/my-garage-upstreams.conf b/n4a-configs/etc/nginx/conf.d/my-garage-upstreams.conf
new file mode 100644
index 0000000..7ac3084
--- /dev/null
+++ b/n4a-configs/etc/nginx/conf.d/my-garage-upstreams.conf
@@ -0,0 +1,10 @@
+ upstream my_garage {
+ zone my_garage 256k;
+
+ server aks-nodepool1-_AKS2_NODES_-vmss000000:32080; #aks2 node1
+ server aks-nodepool1-_AKS2_NODES_-vmss000001:32080; #aks2 node2
+ server aks-nodepool1-_AKS2_NODES_-vmss000002:32080; #aks2 node3
+ server aks-nodepool1-_AKS2_NODES_-vmss000003:32080; #aks2 node4
+
+ keepalive 8;
+ }
\ No newline at end of file
diff --git a/n4a-configs/etc/nginx/conf.d/my-garage.conf b/n4a-configs/etc/nginx/conf.d/my-garage.conf
new file mode 100644
index 0000000..4108494
--- /dev/null
+++ b/n4a-configs/etc/nginx/conf.d/my-garage.conf
@@ -0,0 +1,14 @@
+server {
+ listen 80;
+
+ server_name my-garage.example.com;
+ status_zone my-garage.example.com; # Metrics zone name
+
+ access_log /var/log/nginx/my-garage.example.com.log main_ext;
+ error_log /var/log/nginx/my-garage.example.com_error.log info;
+
+ location / {
+ proxy_pass http://my_garage;
+ add_header X-Proxy-Pass my-garage; # Custom Header
+ }
+}
diff --git a/n4a-configs/etc/nginx/conf.d/nic1-dashboard-upstreams.conf b/n4a-configs/etc/nginx/conf.d/nic1-dashboard-upstreams.conf
new file mode 100644
index 0000000..43e37b4
--- /dev/null
+++ b/n4a-configs/etc/nginx/conf.d/nic1-dashboard-upstreams.conf
@@ -0,0 +1,11 @@
+upstream nic1_dashboard {
+ zone nic1_dashboard 256k;
+
+ # from nginx-ingress NodePort Service / aks1 Node IPs
+ server aks-nodepool1-_AKS1_NODES_-vmss000000:32090; #aks1 node1
+ server aks-nodepool1-_AKS1_NODES_-vmss000001:32090; #aks1 node2
+ server aks-nodepool1-_AKS1_NODES_-vmss000002:32090; #aks1 node3
+
+ keepalive 8;
+
+}
diff --git a/n4a-configs/etc/nginx/conf.d/nic1-dashboard.conf b/n4a-configs/etc/nginx/conf.d/nic1-dashboard.conf
new file mode 100644
index 0000000..e9fc063
--- /dev/null
+++ b/n4a-configs/etc/nginx/conf.d/nic1-dashboard.conf
@@ -0,0 +1,18 @@
+server {
+ listen 9001;
+ server_name dashboard.example.com;
+ access_log off;
+
+ location = /dashboard.html {
+ #return 200 "You have reached /nic1dashboard.";
+
+ proxy_pass http://nic1_dashboard;
+
+ }
+
+ location /api/ {
+
+ proxy_pass http://nic1_dashboard;
+ }
+
+}
diff --git a/n4a-configs/etc/nginx/conf.d/nic2-dashboard-upstreams.conf b/n4a-configs/etc/nginx/conf.d/nic2-dashboard-upstreams.conf
new file mode 100644
index 0000000..2fdfd9b
--- /dev/null
+++ b/n4a-configs/etc/nginx/conf.d/nic2-dashboard-upstreams.conf
@@ -0,0 +1,11 @@
+upstream nic2_dashboard {
+ zone nic2_dashboard 256k;
+
+ # from nginx-ingress NodePort Service / aks Node IPs
+ server aks-nodepool1-_AKS2_NODES_-vmss000000:32090; #aks2 node1
+ server aks-nodepool1-_AKS2_NODES_-vmss000001:32090; #aks2 node2
+ server aks-nodepool1-_AKS2_NODES_-vmss000002:32090; #aks2 node3
+ server aks-nodepool1-_AKS2_NODES_-vmss000003:32090; #aks2 node4
+
+ keepalive 8;
+}
diff --git a/n4a-configs/etc/nginx/conf.d/nic2-dashboard.conf b/n4a-configs/etc/nginx/conf.d/nic2-dashboard.conf
new file mode 100644
index 0000000..db349aa
--- /dev/null
+++ b/n4a-configs/etc/nginx/conf.d/nic2-dashboard.conf
@@ -0,0 +1,18 @@
+server {
+ listen 9002;
+ server_name dashboard.example.com;
+ access_log off;
+
+ location = /dashboard.html {
+ #return 200 "You have reached /nic2dashboard.";
+
+ proxy_pass http://nic2_dashboard;
+
+ }
+
+ location /api/ {
+
+ proxy_pass http://nic2_dashboard;
+ }
+
+}
diff --git a/n4a-configs/etc/nginx/conf.d/the-garage-upstreams.conf b/n4a-configs/etc/nginx/conf.d/the-garage-upstreams.conf
new file mode 100644
index 0000000..3387ffd
--- /dev/null
+++ b/n4a-configs/etc/nginx/conf.d/the-garage-upstreams.conf
@@ -0,0 +1,14 @@
+upstream the_garage {
+ zone the_garage 256k;
+
+ least_time last_byte;
+
+ # NOTE: These are for use on port 8080
+ server aks-nodepool1-_AKS2_NODES_-vmss000000:32080; #aks2 node1
+ server aks-nodepool1-_AKS2_NODES_-vmss000001:32080; #aks2 node2
+ server aks-nodepool1-_AKS2_NODES_-vmss000002:32080; #aks2 node3
+ server aks-nodepool1-_AKS2_NODES_-vmss000003:32080; #aks2 node4
+
+ keepalive 32;
+
+}
\ No newline at end of file
diff --git a/n4a-configs/etc/nginx/conf.d/the-garage.conf b/n4a-configs/etc/nginx/conf.d/the-garage.conf
new file mode 100644
index 0000000..51f11b2
--- /dev/null
+++ b/n4a-configs/etc/nginx/conf.d/the-garage.conf
@@ -0,0 +1,14 @@
+server {
+ listen 8080;
+
+ server_name the-garage.example.com;
+ status_zone the-garage.example.com; # Metrics zone name
+
+ access_log /var/log/nginx/the-garage.example.com.log main_ext;
+ error_log /var/log/nginx/the-garage.example.com_error.log info;
+
+ location / {
+ proxy_pass http://the_garage;
+ add_header X-Proxy-Pass the-garage; # Custom Header
+ }
+}
diff --git a/n4a-configs/etc/nginx/includes/keepalive.conf b/n4a-configs/etc/nginx/includes/keepalive.conf
new file mode 100644
index 0000000..17c6cc9
--- /dev/null
+++ b/n4a-configs/etc/nginx/includes/keepalive.conf
@@ -0,0 +1,8 @@
+# Default is HTTP/1.0 to upstreams, keepalives is only enabled for HTTP/1.1
+proxy_http_version 1.1;
+
+# Set the Connection header to empty
+proxy_set_header Connection "";
+
+# Host request header field, or the server name matching a request
+proxy_set_header Host $host;
diff --git a/n4a-configs/var/nginx.conf b/n4a-configs/var/nginx.conf
new file mode 100644
index 0000000..d195ac9
--- /dev/null
+++ b/n4a-configs/var/nginx.conf
@@ -0,0 +1,55 @@
+user nginx;
+worker_processes auto;
+worker_rlimit_nofile 8192;
+pid /run/nginx/nginx.pid;
+
+events {
+ worker_connections 4000;
+}
+
+error_log /var/log/nginx/error.log error;
+
+http {
+ log_format main '$remote_addr - $remote_user [$time_local] "$request" '
+ '$status $body_bytes_sent "$http_referer" '
+ '"$http_user_agent" "$http_x_forwarded_for"';
+
+ log_format main_ext 'remote_addr="$remote_addr", '
+ '[time_local=$time_local], '
+ 'request="$request", '
+ 'status="$status", '
+ 'http_referer="$http_referer", '
+ 'body_bytes_sent="$body_bytes_sent", '
+ 'Host="$host", '
+ 'sn="$server_name", '
+ 'request_time=$request_time, '
+ 'http_user_agent="$http_user_agent", '
+ 'http_x_forwarded_for="$http_x_forwarded_for", '
+ 'request_length="$request_length", '
+ 'upstream_address="$upstream_addr", '
+ 'upstream_status="$upstream_status", '
+ 'upstream_connect_time="$upstream_connect_time", '
+ 'upstream_header_time="$upstream_header_time", '
+ 'upstream_response_time="$upstream_response_time", '
+ 'upstream_response_length="$upstream_response_length", ';
+
+ access_log off;
+ server_tokens "";
+ server {
+ listen 80 default_server;
+ server_name localhost;
+ location / {
+ # Points to a directory with a basic html index file with
+ # a "Welcome to NGINX as a Service for Azure!" page
+ root /var/www;
+ index index.html;
+ }
+ }
+
+ include /etc/nginx/conf.d/*.conf;
+ include /etc/nginx/includes/*.conf; # shared files
+}
+
+stream {
+ include /etc/nginx/stream/*.conf; # Stream TCP nginx files
+}
\ No newline at end of file
diff --git a/n4a-configs/var/www/index.html b/n4a-configs/var/www/index.html
new file mode 100644
index 0000000..867de81
--- /dev/null
+++ b/n4a-configs/var/www/index.html
@@ -0,0 +1,39 @@
+
+
+
+
+ Welcome to NGINXaaS for Azure!
+
+
+
+
+
Welcome to NGINX as a Service for Azure!
+
If you see this page, the NGINX instance is successfully installed and
+ working. Further configuration is required.
+
+
+
+
+
+
+
+
diff --git a/sd-notes/azure-cli.md b/sd-notes/azure-cli.md
deleted file mode 100644
index 0be44d6..0000000
--- a/sd-notes/azure-cli.md
+++ /dev/null
@@ -1,196 +0,0 @@
-## Azure CLI Basic Configuration Setting
-
-You will need Azure Command Line Interface (CLI) installed on your client machine to manage your Azure services. See [How to install the Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli)
-
-If you do not have Azure CLI installed, you will need to install it to continue with the lab exercises. To check Azure CLI version run below command:
-
-```bash
-az --version
-```
-
-1. Sign in with Azure CLI using your preferred method listed [here](https://learn.microsoft.com/en-us/cli/azure/authenticate-azure-cli).
-
- >**Note:** We made use of Sign in interactively method for this workshop
-
- ```bash
- az login
- ```
-
-1. Once you have logged in you can run below command to validate your tenent and subscription ID and name.
-
- ```bash
- az account show
- ```
-
-1. Optional: If you have multiple subscriptions and would like to change the current subscription to another then run below command.
-
- ```bash
- # change the active subscription using the subscription name
- az account set --subcription "{subscription name}"
-
- # OR
-
- # change the active subscription using the subscription ID
- az account set --subscription "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- ```
-
-1. Create a new Azure Resource Group called `-workshop` , where `` is your last name (or any unique value). This would hold all the Azure resources that you would create for this workshop.
-
- ```bash
- az group create --name -workshop --location
-
- ## example
- az group create --name s.dutta-workshop --location centralus
- ```
-
-1. Make sure the new Azure Resource Group has been created by running below command.
-
- ```bash
- az group list -o table | grep workshop
- ```
-
-## Create Virtual Network, Subnets and Network Security Group
-
-1. Create a virtual network (vnet) named `n4a-vnet` using below command.
-
- ```bash
- ## Set environment variables
- MY_RESOURCEGROUP=s.dutta-workshop
- MY_PUBLICIP=$(curl -4 ifconfig.co)
- ```
-
- ```bash
- az network vnet create \
- --resource-group $MY_RESOURCEGROUP \
- --name n4a-vnet \
- --address-prefixes 10.0.0.0/16
- ```
-
-1. Create a network security group(NSG) named `n4a-nsg` using below command.
-
- ```bash
- az network nsg create \
- --resource-group $MY_RESOURCEGROUP \
- --name n4a-nsg
- ```
-
-1. Add two NSG rules to allow any traffic on port 80 and 443 from your system's public IP. Run below command to create the two rules.
-
- ```bash
- ## Rule 1 for HTTP traffic
-
- az network nsg rule create \
- --resource-group $MY_RESOURCEGROUP \
- --nsg-name n4a-nsg \
- --name HTTP \
- --priority 320 \
- --source-address-prefix $MY_PUBLICIP \
- --source-port-range '*' \
- --destination-address-prefix '*' \
- --destination-port-range 80 \
- --direction Inbound \
- --access Allow \
- --protocol Tcp \
- --description "Allow HTTP traffic"
- ```
-
- ```bash
- ## Rule 2 for HTTPS traffic
-
- az network nsg rule create \
- --resource-group $MY_RESOURCEGROUP \
- --nsg-name n4a-nsg \
- --name HTTPS \
- --priority 300 \
- --source-address-prefix $MY_PUBLICIP \
- --source-port-range '*' \
- --destination-address-prefix '*' \
- --destination-port-range 443 \
- --direction Inbound \
- --access Allow \
- --protocol Tcp \
- --description "Allow HTTPS traffic"
- ```
-
-1. Create a subnet that you will use with NGINX for Azure resource. You will also attach the NSG that you just created to this subnet.
-
- ```bash
- az network vnet subnet create \
- --resource-group $MY_RESOURCEGROUP \
- --name n4a-subnet \
- --vnet-name n4a-vnet \
- --address-prefixes 10.0.1.0/24 \
- --network-security-group n4a-nsg \
- --delegations NGINX.NGINXPLUS/nginxDeployments
- ```
-
-1. Create another subnet for your AKS cluster
-
- ```bash
- az network vnet subnet create \
- --resource-group $MY_RESOURCEGROUP \
- --name aks2-subnet \
- --vnet-name n4a-vnet \
- --address-prefixes 10.0.2.0/23
- ```
-
-
-
-## Create Public IP and user identity to access NGINX for Azure resource
-
-1. Now create a Public IP to access NGINX for Azure from outside using below command.
-
- ```bash
- az network public-ip create \
- --resource-group $MY_RESOURCEGROUP \
- --name n4a-publicIP \
- --allocation-method Static \
- --sku Standard
- ```
-
-1. Create a user identity that would be tied to the NGINX for Azure resource
-
- ```bash
- az identity create \
- --resource-group $MY_RESOURCEGROUP \
- --name n4a-useridentity
- ```
-
-## Create NGINX for Azure resource
-
-Once Vnet, NSG and publicIP has been created, you will now create the NGINX for Azure resource object using below command
-
-```bash
-## Set environment variables
-MY_RESOURCEGROUP=s.dutta-workshop
-MY_SUBSCRIPTIONID=$(az account show --query id -o tsv)
-```
-
-Below command error outs with `(LinkedAuthorizationFailed)` exception. Dev team is working with Microsoft partner engineer to figure out why it is not working.
-
-```bash
-az nginx deployment create \
---resource-group $MY_RESOURCEGROUP \
---name nginx4a \
---location centralus \
---sku name="standard_Monthly_gmz7xq9ge3py" \
---network-profile front-end-ip-configuration="{public-ip-addresses:[{id:/subscriptions/$MY_SUBSCRIPTIONID/resourceGroups/$MY_RESOURCEGROUP/providers/Microsoft.Network/publicIPAddresses/n4a-publicIP}]}" network-interface-configuration="{subnet-id:/subscriptions/$MY_SUBSCRIPTIONID/resourceGroups/$MY_RESOURCEGROUP/providers/Microsoft.Network/virtualNetworks/n4a-vnet/subnets/n4a-subnet}" \
---identity="{type:UserAssigned,userAssignedIdentities:{/subscriptions/$MY_SUBSCRIPTIONID/resourceGroups/$MY_RESOURCEGROUP/providers/Microsoft.ManagedIdentity/userAssignedIdentities/n4a-useridentity:{}}}"
-```
-
-## Things to improve
-
-1. Documentation for N4A AzureCLI still has sku pointing to `preview_Monthly_gmz7xq9ge3py`. It is not intuitive to guess what SKU name to use for standard deployment. I had to deploy N4A using Azure portal and then run `az nginx deployment show` command to look inside the deployment deployed via azure portal to figure out what is the name for Standard SKU.
-
-1. To create diagnostic-settings resource within azure monitor you require system assigned managed identity. There is not a single example as to how this can be done using Azure CLI. Also Managed Identity doc (https://docs.nginx.com/nginxaas/azure/getting-started/managed-identity/) misguides you as it says you can have either user assigned or system assigned identity to work with Azure keyvault, monitor and storage. It misses the point that Azure monitor needs to have a system assigned identity to create above mentioned resource.
diff --git a/sd-notes/azure_monitoring_logging.md b/sd-notes/azure_monitoring_logging.md
deleted file mode 100644
index 66398a2..0000000
--- a/sd-notes/azure_monitoring_logging.md
+++ /dev/null
@@ -1,26 +0,0 @@
-# Setup Steps
-
-## Enable Access and Error logs for N4A
-
-1. To enable logging you need to first create a `Log Analytics Workspace` resource which would be storing all the logs.
-
-### Create Log Analytics Workspace
-
-
-
-2. Logging makes use of Kusto Query Language (KQL)
- [https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/tutorials/learn-common-operators]
-
-3. Sample KQL to print all access and error logs
-
- ```Kusto
- NGXOperationLogs
- | where FilePath == "/var/log/nginx/access.log" or FilePath == "/var/log/nginx/error.log"
- | sort by TimeGenerated asc
- | take 100
- | project TimeGenerated, FilePath, Message
- ```
-
-4. Access logs donot show up instantly in the logs tool (Takes atleast 3 minutes to show up). Real time analysis is not possible due to this.
-
-5.
\ No newline at end of file
diff --git a/sd-notes/media/n4A_configuration.png b/sd-notes/media/n4A_configuration.png
deleted file mode 100644
index 5958e0e..0000000
Binary files a/sd-notes/media/n4A_configuration.png and /dev/null differ
diff --git a/sd-notes/ssl-tls-notes.md b/sd-notes/ssl-tls-notes.md
deleted file mode 100644
index 666b79d..0000000
--- a/sd-notes/ssl-tls-notes.md
+++ /dev/null
@@ -1,83 +0,0 @@
-# Setup steps
-
-## Create Azure Key Vault resource
-
-1. Create an Azure key vault within the same resource group which has the NGINX as a service resource.
-
-2. Below are the things it would prompt you to select/enter
- - Resource Group
- - Key vault name
- - Region
- - Pricing Tier : Default value is `Standard`.
- - Purge protection: Keep default value which is `Disable purge protection`
- - Click `Next`
-
- In next `Access Configuration` section, select below values
- - Permission model: Keep default value which is `Azure role-based access control`
- - Resource access: No need to check any box.
- - click `Next`
-
- In next `Networking` section, keep everything as default and click `Next`
-
- In next `Tags` section, add any tags if you like.
-
- Click on `Review + Create` to create the key vault.
-
- If it is successfully deployed then you should see a "Your deployment is complete" message.
-
-## Create a new self-signed certificate
-
-1. Within the newly created key vault, create a self signed certificate.
-
-2. To do so open the `keyvault` resource and then click on `Certificates` from the left pane.
-
-3. In the `Certificates` section, click on `Generate/Import` button present in top pane.
-4. Make sure your account has necessary permissions to generate a new certificate. (Role needed `Key Vault Certificates Officer`).
-5. Within the `Create a certificate` section, enter all the required fields
- 1. Certificate Name
- 2. Subject (eg. "CN=*.example.com")
- 3. Content Type (Select `PEM`)
-6. Click on `Create` to create your new self signed certificate.
-
-## Use new self signed certificate with N4A
-
-1. Open your N4A resource.
-
-2. From the left pane select `NGINX certificate` section.
-
-3. Within the `NGINX certificates` section, you will click on `+ Add certificate` option from top pane to add your newly created self-signed certificate to N4A.
-
-4. In the `Add Certificate` sub section, fill in below required fields
- 1. Preferred name (Provide a name of your choice)
- 2. Certificate Path (This is where the cert would reside within N4A eg. `/etc/nginx/certs/example.cert`)
- 3. Key Path (This is where the key would reside within N4A eg. `/etc/nginx/certs/example.key`)
- 4. Key Vault (Select the key vault name that you created in above section)
- 5. Certificate name (Select the self signed certificate that your created within the selected keyvault)
- 6. Click on `Save`
-
-5. Once you have added a certificate next task is to reference the certificate within nginx configuration.
-
-6. From the left pane select `NGINX configuration` section.
-
-7. Within the server block provide the reference of the certificate and key to the `ssl_certificate` and `ssl_certificate_key` directives to make use of your self-signed certificate.
-
- ```nginx
- ...
- server {
-
- listen 443 ssl;
- server_name www.example.com;
-
- ssl_certificate /etc/nginx/cert/example.cert;
- ssl_certificate_key /etc/nginx/cert/example.key;
-
- ...
- }
- ```
-
-## Things to improve
-
-1. Once the tls cert and key are tied to n4a resource, if you navigate to the nginx configuration pane, the cert and key don't show up in the file explorer pane. Adding them to the file explorer would provide a visual confirmation that the cert and keys have been added to the relevant directory. If it is a security concern then those files can be present as greyed out and non-clickable.
-
- 
-
diff --git a/steve.sh b/steve.sh
new file mode 100755
index 0000000..65b8818
--- /dev/null
+++ b/steve.sh
@@ -0,0 +1,441 @@
+#!/usr/bin/env bash
+
+export OWNER=$(whoami)
+
+export MY_RESOURCEGROUP=${OWNER}-n4a-workshop
+export LOCATION=westus2
+export MY_PUBLICIP=$(curl ipinfo.io/ip)
+export MY_SUBSCRIPTIONID=$(az account show --query id -o tsv)
+export MY_AZURE_PUBLIC_IP_NAME=n4a-publicIP
+
+echo "--> Creating Resource Group..."
+az group create --name $MY_RESOURCEGROUP --location $LOCATION --tags owner=$OWNER
+
+## az group list -o table | grep workshop
+
+echo
+echo "--> Creating VNet..."
+az network vnet create \
+--resource-group $MY_RESOURCEGROUP \
+--name n4a-vnet \
+--address-prefixes 172.16.0.0/16
+
+echo
+echo "--> Creating Network Security Group..."
+az network nsg create \
+--resource-group $MY_RESOURCEGROUP \
+--name n4a-nsg
+
+echo
+echo "--> Creating NSG Rules..."
+az network nsg rule create \
+--resource-group $MY_RESOURCEGROUP \
+--nsg-name n4a-nsg \
+--name HTTP \
+--priority 320 \
+--source-address-prefix $MY_PUBLICIP \
+--source-port-range '*' \
+--destination-address-prefix '*' \
+--destination-port-range 80 \
+--direction Inbound \
+--access Allow \
+--protocol Tcp \
+--description "Allow HTTP traffic"
+
+az network nsg rule create \
+--resource-group $MY_RESOURCEGROUP \
+--nsg-name n4a-nsg \
+--name HTTPS \
+--priority 300 \
+--source-address-prefix $MY_PUBLICIP \
+--source-port-range '*' \
+--destination-address-prefix '*' \
+--destination-port-range 443 \
+--direction Inbound \
+--access Allow \
+--protocol Tcp \
+--description "Allow HTTPS traffic"
+
+az network nsg rule create \
+--resource-group $MY_RESOURCEGROUP \
+--nsg-name n4a-nsg \
+--name HTTP_ALT \
+--priority 310 \
+--source-address-prefix $MY_PUBLICIP \
+--source-port-range '*' \
+--destination-address-prefix '*' \
+--destination-port-range 8080 \
+--direction Inbound \
+--access Allow \
+--protocol Tcp \
+--description "Allow HTTPS traffic"
+
+echo
+echo "--> Creating Subnet ..."
+az network vnet subnet create \
+--resource-group $MY_RESOURCEGROUP \
+--name n4a-subnet \
+--vnet-name n4a-vnet \
+--address-prefixes 172.16.1.0/24 \
+--network-security-group n4a-nsg \
+--delegations NGINX.NGINXPLUS/nginxDeployments
+
+echo
+echo "--> Creating Subnet for AKS Cluster One..."
+az network vnet subnet create \
+--resource-group $MY_RESOURCEGROUP \
+--name aks1-subnet \
+--vnet-name n4a-vnet \
+--address-prefixes 172.16.10.0/23
+
+echo
+echo "--> Creating Subnet for AKS Cluster Two..."
+az network vnet subnet create \
+--resource-group $MY_RESOURCEGROUP \
+--name aks2-subnet \
+--vnet-name n4a-vnet \
+--address-prefixes 172.16.20.0/23
+
+echo
+echo "--> Creating Public IP..."
+az network public-ip create \
+--resource-group $MY_RESOURCEGROUP \
+--name $MY_AZURE_PUBLIC_IP_NAME \
+--allocation-method Static \
+--sku Standard
+
+echo
+echo "--> Caching Public Azure IP..."
+export MY_AZURE_PUBLIC_IP=$(az network public-ip show --resource-group $MY_RESOURCEGROUP --name $MY_AZURE_PUBLIC_IP_NAME --query ipAddress --output tsv)
+
+echo
+echo "--> Creating Identity..."
+az identity create \
+--resource-group $MY_RESOURCEGROUP \
+--name n4a-useridentity
+
+echo
+echo "--> Creating N4A Deployment..."
+az nginx deployment create \
+--resource-group $MY_RESOURCEGROUP \
+--name nginx4a \
+--sku name="standard_Monthly" \
+--network-profile front-end-ip-configuration="{public-ip-addresses:[{id:/subscriptions/$MY_SUBSCRIPTIONID/resourceGroups/$MY_RESOURCEGROUP/providers/Microsoft.Network/publicIPAddresses/n4a-publicIP}]}" network-interface-configuration="{subnet-id:/subscriptions/$MY_SUBSCRIPTIONID/resourceGroups/$MY_RESOURCEGROUP/providers/Microsoft.Network/virtualNetworks/n4a-vnet/subnets/n4a-subnet}" \
+--identity="{type:'SystemAssigned, UserAssigned',userAssignedIdentities:{/subscriptions/$MY_SUBSCRIPTIONID/resourceGroups/$MY_RESOURCEGROUP/providers/Microsoft.ManagedIdentity/userAssignedIdentities/n4a-useridentity:{}}}"
+
+echo
+echo "--> Creating Log Analytics Monitor..."
+az monitor log-analytics workspace create \
+--resource-group $MY_RESOURCEGROUP \
+--name n4a-loganalytics
+
+echo
+echo "--> Updating N4A Deployment to enable diagnostics... "
+az nginx deployment update \
+--resource-group $MY_RESOURCEGROUP \
+--name nginx4a \
+--enable-diagnostics true
+
+echo
+echo "--> Caching the N4A Id... "
+export MY_N4A_ID=$(az nginx deployment show \
+--resource-group $MY_RESOURCEGROUP \
+--name nginx4a \
+--query id \
+--output tsv)
+
+echo
+echo "--> Caching the Analytics Id... "
+export MY_LOG_ANALYTICS_ID=$(az monitor log-analytics workspace show \
+--resource-group $MY_RESOURCEGROUP \
+--name n4a-loganalytics \
+--query id \
+--output tsv)
+
+echo
+echo "--> Creating Diagnostics Setting..."
+az monitor diagnostic-settings create \
+--resource $MY_N4A_ID \
+--name n4a-nginxlogs \
+--resource-group $MY_RESOURCEGROUP \
+--workspace $MY_LOG_ANALYTICS_ID \
+--logs "[{category:NginxLogs,enabled:true,retention-policy:{enabled:false,days:0}}]"
+
+## Lab 3
+
+export MY_AKS1=n4a-aks1
+export MY_AKS2=n4a-aks2
+export MY_NAME=${OWNER:-$(whoami)}
+export K8S_VERSION=1.27
+export MY_SUBNET1=$(az network vnet subnet show -g $MY_RESOURCEGROUP -n aks1-subnet --vnet-name n4a-vnet --query id -o tsv)
+export MY_SUBNET2=$(az network vnet subnet show -g $MY_RESOURCEGROUP -n aks2-subnet --vnet-name n4a-vnet --query id -o tsv)
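+# nginx-trial.jwt is expected to export a JWT variable, used below as the
+# username for the NGINX private registry.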
+source ~/nginx-trial.jwt
+
+echo
+echo "--> Creating AKS Cluster One..."
+az aks create \
+ --resource-group $MY_RESOURCEGROUP \
+ --name $MY_AKS1 \
+ --node-count 3 \
+ --node-vm-size Standard_B2s \
+ --kubernetes-version $K8S_VERSION \
+ --tags owner=$MY_NAME \
+ --vnet-subnet-id=$MY_SUBNET1 \
+ --enable-addons monitoring \
+ --generate-ssh-keys
+
+echo
+echo "--> Getting the credentials for AKS Cluster One..."
+az aks get-credentials \
+ --resource-group $MY_RESOURCEGROUP \
+ --name n4a-aks1 \
+ --overwrite-existing
+
+echo
+echo "--> Cloning the NGINX Ingress Controller Repo..."
+git clone https://github.com/nginxinc/kubernetes-ingress.git --branch v3.3.2
+cd kubernetes-ingress/deployments
+
+echo
+echo "--> Creating NGINX Ingress Controller Resources ..."
+kubectl apply -f common/ns-and-sa.yaml
+kubectl apply -f rbac/rbac.yaml
+kubectl apply -f ../examples/shared-examples/default-server-secret/default-server-secret.yaml
+kubectl apply -f common/nginx-config.yaml
+kubectl apply -f common/ingress-class.yaml
+kubectl apply -f common/crds/k8s.nginx.org_virtualservers.yaml
+kubectl apply -f common/crds/k8s.nginx.org_virtualserverroutes.yaml
+kubectl apply -f common/crds/k8s.nginx.org_transportservers.yaml
+kubectl apply -f common/crds/k8s.nginx.org_policies.yaml
+kubectl apply -f common/crds/k8s.nginx.org_globalconfigurations.yaml
+
+cd -
+
+echo
+echo "--> Creating Secret for NGINX Plus JWT..."
+kubectl create secret docker-registry regcred \
+ --docker-server=private-registry.nginx.com \
+ --docker-username=$JWT \
+ --docker-password=none \
+ -n nginx-ingress
+
+kubectl get secret regcred -n nginx-ingress -o yaml
+
+echo
+echo "--> Deploying NGINX Ingress Controller ..."
+kubectl apply -f labs/lab3/nginx-plus-ingress.yaml
+
+kubectl get pods -n nginx-ingress
+
+echo
+echo "--> Caching the AKS1 NIC ..."
+export AKS1_NIC=$(kubectl get pods -n nginx-ingress -o jsonpath='{.items[0].metadata.name}')
+
+echo
+echo "--> Creating AKS Cluster Two..."
+az aks create \
+ --resource-group $MY_RESOURCEGROUP \
+ --name $MY_AKS2 \
+ --node-count 4 \
+ --node-vm-size Standard_B2s \
+ --kubernetes-version $K8S_VERSION \
+ --tags owner=$MY_NAME \
+ --vnet-subnet-id=$MY_SUBNET2 \
+ --network-plugin azure \
+ --enable-addons monitoring \
+ --generate-ssh-keys
+
+echo
+echo "--> Getting the credentials for AKS Cluster Two..."
+az aks get-credentials \
+ --resource-group $MY_RESOURCEGROUP \
+ --name n4a-aks2 \
+ --overwrite-existing
+
+kubectl config use-context n4a-aks2
+kubectl get nodes
+
+cd kubernetes-ingress/deployments
+
+echo
+echo "--> Creating NGINX Ingress Controller Resources..."
+kubectl apply -f common/ns-and-sa.yaml
+kubectl apply -f rbac/rbac.yaml
+kubectl apply -f ../examples/shared-examples/default-server-secret/default-server-secret.yaml
+kubectl apply -f common/nginx-config.yaml
+kubectl apply -f common/ingress-class.yaml
+kubectl apply -f common/crds/k8s.nginx.org_virtualservers.yaml
+kubectl apply -f common/crds/k8s.nginx.org_virtualserverroutes.yaml
+kubectl apply -f common/crds/k8s.nginx.org_transportservers.yaml
+kubectl apply -f common/crds/k8s.nginx.org_policies.yaml
+kubectl apply -f common/crds/k8s.nginx.org_globalconfigurations.yaml
+
+cd -
+
+echo
+echo "--> Creating Secret for NGINX Plus JWT..."
+kubectl create secret docker-registry regcred \
+ --docker-server=private-registry.nginx.com \
+ --docker-username=$JWT \
+ --docker-password=none \
+ -n nginx-ingress
+
+kubectl get secret regcred -n nginx-ingress -o yaml
+
+echo
+echo "--> Deploying NGINX Ingress Controller ..."
+kubectl apply -f labs/lab3/nginx-plus-ingress.yaml
+
+kubectl get pods -n nginx-ingress
+
+echo
+echo "--> Caching the AKS2 NIC..."
+export AKS2_NIC=$(kubectl get pods -n nginx-ingress -o jsonpath='{.items[0].metadata.name}')
+
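+# Expose the NIC dashboard in each cluster via a VirtualServer.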
+kubectl config use-context n4a-aks1
+kubectl apply -f labs/lab3/dashboard-vs.yaml
+kubectl get svc,vs -n nginx-ingress
+
+kubectl config use-context n4a-aks2
+kubectl apply -f labs/lab3/dashboard-vs.yaml
+kubectl get svc,vs -n nginx-ingress
+
+kubectl config use-context n4a-aks1
+kubectl apply -f labs/lab4/nodeport-static-redis.yaml
+kubectl get svc nginx-ingress -n nginx-ingress
+
+
+kubectl config use-context n4a-aks2
+kubectl apply -f labs/lab4/nodeport-static-redis.yaml
+kubectl get svc nginx-ingress -n nginx-ingress
+
+
+kubectl config use-context n4a-aks1
+kubectl get nodes
+
+az network nsg rule create \
+--resource-group $MY_RESOURCEGROUP \
+--nsg-name n4a-nsg \
+--name NIC_Dashboards \
+--priority 330 \
+--source-address-prefix $MY_PUBLICIP \
+--source-port-range '*' \
+--destination-address-prefix '*' \
+--destination-port-range 9001-9002 \
+--direction Inbound \
+--access Allow \
+--protocol Tcp \
+--description "Allow traffic to NIC Dashboards"
+
+## Lab 4
+
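+# Deploy the Cafe demo app to both clusters; scale coffee/tea in AKS1 only.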
+kubectl config use-context n4a-aks1
+kubectl apply -f labs/lab4/cafe.yaml
+kubectl apply -f labs/lab4/cafe-vs.yaml
+kubectl scale deployment coffee --replicas=2
+kubectl scale deployment tea --replicas=2
+
+kubectl config use-context n4a-aks2
+kubectl apply -f labs/lab4/cafe.yaml
+kubectl apply -f labs/lab4/cafe-vs.yaml
+
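+# Deploy Redis leader/follower in AKS2 and expose them through NGINX
+# TransportServers (stream listeners come from the GlobalConfiguration).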
+kubectl config use-context n4a-aks2
+kubectl apply -f labs/lab4/redis-leader.yaml
+kubectl apply -f labs/lab4/redis-follower.yaml
+
+kubectl get pods,svc -l app=redis
+
+kubectl apply -f labs/lab4/global-configuration-redis.yaml
+kubectl describe gc nginx-configuration -n nginx-ingress
+
+kubectl apply -f labs/lab4/redis-leader-ts.yaml
+kubectl apply -f labs/lab4/redis-follower-ts.yaml
+
+kubectl get transportserver
+
+kubectl config use-context n4a-aks2
+kubectl apply -f labs/lab4/nodeport-static-redis.yaml
+
+kubectl get svc -n nginx-ingress
+
+echo
+echo "--> Getting the Node Ids for the AKS1 Cluster..."
+
+kubectl config use-context n4a-aks1
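+# AKS node names look like aks-nodepool1-<number>-vmss000000; keep the
+# numeric third dash-delimited field for templating the upstream configs.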
+export AKS1_NODE_NUMBER=$(kubectl get nodes -o jsonpath="{.items[0].metadata.name}" | cut -d- -f 3)
+
+echo
+echo "--> Getting the Node Ids for the AKS2 Cluster..."
+
+kubectl config use-context n4a-aks2
+export AKS2_NODE_NUMBER=$(kubectl get nodes -o jsonpath="{.items[0].metadata.name}" | cut -d- -f 3)
+
+echo
+echo "--> Update n4a configs..."
+
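+# Stage a copy of the configs and substitute the _AKS1_NODES_/_AKS2_NODES_
+# placeholders with the node-name numbers captured above.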
+cp -r n4a-configs/ n4a-configs-staging/
+sed -i "s/_AKS1_NODES_/$AKS1_NODE_NUMBER/g" n4a-configs-staging/etc/nginx/conf.d/aks1-upstreams.conf
+sed -i "s/_AKS2_NODES_/$AKS2_NODE_NUMBER/g" n4a-configs-staging/etc/nginx/conf.d/aks2-upstreams.conf
+sed -i "s/_AKS1_NODES_/$AKS1_NODE_NUMBER/g" n4a-configs-staging/etc/nginx/conf.d/nic1-dashboard-upstreams.conf
+sed -i "s/_AKS2_NODES_/$AKS2_NODE_NUMBER/g" n4a-configs-staging/etc/nginx/conf.d/nic2-dashboard-upstreams.conf
+sed -i "s/_AKS2_NODES_/$AKS2_NODE_NUMBER/g" n4a-configs-staging/etc/nginx/conf.d/the-garage-upstreams.conf
+sed -i "s/_AKS2_NODES_/$AKS2_NODE_NUMBER/g" n4a-configs-staging/etc/nginx/conf.d/my-garage-upstreams.conf
+
+echo
+echo "--> Creating the archive..."
+cd n4a-configs-staging
+tar -czf ../n4a-configs.tar.gz *
+cd ..
+rm -r n4a-configs-staging
+
+echo
+echo "--> Prepare the upload package..."
+export B64_N4A_CONFIG=$(base64 -i n4a-configs.tar.gz | tr -d '\n')
+cat << EOF > package.json
+{
+"data": "$B64_N4A_CONFIG"
+}
+EOF
+
+echo
+echo "--> Uploading the configuration..."
+az nginx deployment configuration create \
+ --configuration-name default \
+ --deployment-name nginx4a \
+ --resource-group $MY_RESOURCEGROUP \
+ --root-file var/nginx.conf \
+ --package "@package.json"
+
+echo
+echo "--> Updating the hosts file with the new Azure Public IP..."
+sudo sed -i "/N4AWSEX/{n;s/^[^ ]*/$MY_AZURE_PUBLIC_IP/}" /etc/hosts
+
+cat <