diff --git a/_banners/upgrade-r33.md b/_banners/upgrade-r33.md index 97cb82155..bed7946f9 100644 --- a/_banners/upgrade-r33.md +++ b/_banners/upgrade-r33.md @@ -1,5 +1,5 @@ {{< banner "caution" "NGINX Plus R33 requires NGINX Instance Manager 2.18 or later" >}} If your NGINX data plane instances are running NGINX Plus R33 or later, you must upgrade to NGINX Instance Manager 2.18 or later to support usage reporting. NGINX Plus R33 instances must report usage data to the F5 licensing endpoint or NGINX Instance Manager. Otherwise, they will stop processing traffic.

- For more details about usage reporting and enforcement, see [About solution licenses](../../../../solutions/about-subscription-licenses) + For more details about usage reporting and enforcement, see [About solution licenses]({{< ref "/solutions/about-subscription-licenses.md" >}}) {{}} \ No newline at end of file diff --git a/content/includes/nginx-one/add-file/existing-ssl-bundle.md b/content/includes/nginx-one/add-file/existing-ssl-bundle.md index e6a8c59a4..151d06103 100644 --- a/content/includes/nginx-one/add-file/existing-ssl-bundle.md +++ b/content/includes/nginx-one/add-file/existing-ssl-bundle.md @@ -2,7 +2,7 @@ docs: --- -With this option, You can incorporate [Managed certificates]({{< ref "/nginx-one/how-to/certificates/manage-certificates.md#managed-and-unmanaged-certificates" >}}). +With this option, you can incorporate [Managed certificates]({{< ref "/nginx-one/certificates/manage-certificates.md#managed-and-unmanaged-certificates" >}}). In the **Choose Certificate** drop-down, select the managed certificate of your choice, and select **Add**. You can then: 1. Review details of the certificate. The next steps depend on whether the certificate is a CA bundle or a certificate / key pair. 
diff --git a/content/includes/nginx-one/how-to/add-instance.md b/content/includes/nginx-one/how-to/add-instance.md new file mode 100644 index 000000000..b9c5c86b8 --- /dev/null +++ b/content/includes/nginx-one/how-to/add-instance.md @@ -0,0 +1,26 @@ +--- +docs: +files: + - content/nginx-one/connect-instances/add-instance.md + - content/nginx-one/getting-started.md +--- + +You can add an instance to NGINX One Console in the following ways: + +- Directly, under **Instances** +- Indirectly, by selecting a Config Sync Group, and selecting **Add Instance to Config Sync Group** + +In either case, NGINX One Console gives you a choice for data plane keys: + +- Create a new key +- Use an existing key + +NGINX One Console adds the data plane key you selected to the command that registers your target instance. You should see the command in the **Add Instance** screen in the console. + +Connect to the host where your NGINX instance is running. Run the provided command to [install NGINX Agent]({{< ref "/nginx-one/getting-started#install-nginx-agent" >}}) dependencies and packages on that host. + +```bash
curl https://agent.connect.nginx.com/nginx-agent/install | DATA_PLANE_KEY="" sh -s -- -y +``` + +Once the process is complete, you can configure that instance in your NGINX One Console. 
\ No newline at end of file diff --git a/content/includes/use-cases/monitoring/n1c-dashboard-overview.md b/content/includes/use-cases/monitoring/n1c-dashboard-overview.md new file mode 100644 index 000000000..3018b83d8 --- /dev/null +++ b/content/includes/use-cases/monitoring/n1c-dashboard-overview.md @@ -0,0 +1,39 @@ +--- +docs: +files: + - content/nginx-one/metrics/enable-metrics.md + - content/nginx-one/getting-started.md +--- + +Navigating the dashboard: + +- **Drill down into specifics**: For in-depth information on a specific metric, like expiring certificates, click on the relevant link in the metric's card to go to a detailed overview page. +- **Refine metric timeframe**: Metrics show the last hour's data by default. To view data from a different period, select the time interval you want from the drop-down menu. + + +{{< img src="nginx-one/images/nginx-one-dashboard.png">}} + + +{{}} +**NGINX One dashboard metrics** +| Metric | Description | Details | +|---|---|---| +| **Instance availability** | Understand the operational status of your NGINX instances. | - **Online**: The NGINX instance is actively connected and functioning properly.
- **Offline**: NGINX Agent is connected but the NGINX instance isn't running, isn't installed, or can't communicate with NGINX Agent.
- **Unavailable**: The connection between NGINX Agent and NGINX One has been lost or the instance has been decommissioned.
- **Unknown**: The current state can't be determined at the moment. | +| **NGINX versions by instance** | See which NGINX versions are in use across your instances. | | +| **Operating systems** | Find out which operating systems your instances are running on. | | +| **Certificates** | Monitor the status of your SSL certificates to know which are expiring soon and which are still valid. | | +| **Config recommendations** | Get configuration recommendations to optimize your instances' settings. | | +| **CVEs (Common Vulnerabilities and Exposures)** | Evaluate the severity and number of potential security threats in your instances. | - **Major**: Indicates a high-severity threat that needs immediate attention.
- **Medium**: Implies a moderate threat level.
- **Minor** and **Low**: Represent less critical issues that still require monitoring.
- **Other**: Encompasses any threats that don't fit the standard categories. | +| **CPU utilization** | Track CPU usage trends and pinpoint instances with high CPU demand. | | +| **Memory utilization** | Watch memory usage patterns to identify instances using significant memory. | | +| **Disk space utilization** | Monitor how much disk space your instances are using and identify those nearing capacity. | | +| **Unsuccessful response codes** | Look for instances with a high number of HTTP server errors and investigate their error codes. | | +| **Top network usage** | Review the network usage and bandwidth consumption of your instances. | | + +{{
}} + + + + + + diff --git a/content/nginx-one/about.md b/content/nginx-one/about.md index f20c40bea..72e06d6bb 100644 --- a/content/nginx-one/about.md +++ b/content/nginx-one/about.md @@ -1,7 +1,7 @@ --- description: '' docs: DOCS-1392 -title: About +title: Manage your NGINX fleet toc: true weight: 10 type: @@ -19,7 +19,3 @@ NGINX One offers the following key benefits: - **Performance optimization**: Track your NGINX versions and receive recommendations for tuning your configurations for better performance. - **Graphical Metrics Display**: Access a dashboard that shows key metrics for your NGINX instances, including instance availability, version distribution, system health, and utilization trends. - **Real-time alerts**: Receive alerts about critical issues. - -## Legal notice: Licensing agreements for NGINX products - -Using NGINX One is subject to our End User Service Agreement (EUSA). For [NGINX Plus]({{< ref "/nginx" >}}), usage is governed by the End User License Agreement (EULA). Open source projects, including [NGINX Agent](https://github.com/nginx/agent) and [NGINX OSS](https://github.com/nginx/nginx), are covered under their respective licenses. For more details on these licenses, follow the provided links. 
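The dashboard states in the include above (**Online**, **Offline**, **Unavailable**) all depend on the NGINX Agent connection, so a host-side check is the natural first troubleshooting step. A minimal sketch, assuming a default NGINX Agent package install on a systemd host (the `nginx-agent` service name and log path are package defaults, not taken from this changeset):

```shell
# Verify that NGINX and NGINX Agent are both active on the data plane host
sudo systemctl is-active nginx nginx-agent

# Look for recent connection errors between the agent and NGINX One Console
sudo tail -n 50 /var/log/nginx-agent/agent.log
```

If `nginx-agent` is inactive, the instance typically surfaces as **Offline** or **Unavailable** in the dashboard.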
diff --git a/content/nginx-one/api/_index.md b/content/nginx-one/api/_index.md index e1f50db88..5372d8f12 100644 --- a/content/nginx-one/api/_index.md +++ b/content/nginx-one/api/_index.md @@ -1,6 +1,6 @@ --- -title: API +title: Automate with the NGINX One API description: -weight: 1000 +weight: 700 url: /nginx-one/api ---- \ No newline at end of file +--- diff --git a/content/nginx-one/certificates/_index.md b/content/nginx-one/certificates/_index.md new file mode 100644 index 000000000..0ea28d683 --- /dev/null +++ b/content/nginx-one/certificates/_index.md @@ -0,0 +1,6 @@ +--- +description: +title: Monitor your certificates +weight: 400 +url: /nginx-one/certificates +--- diff --git a/content/nginx-one/how-to/certificates/manage-certificates.md b/content/nginx-one/certificates/manage-certificates.md similarity index 95% rename from content/nginx-one/how-to/certificates/manage-certificates.md rename to content/nginx-one/certificates/manage-certificates.md index 0d53b6947..136a4e299 100644 --- a/content/nginx-one/how-to/certificates/manage-certificates.md +++ b/content/nginx-one/certificates/manage-certificates.md @@ -33,8 +33,8 @@ From the NGINX One Console you can: You can manage the certificates for: -- [Unique instances]({{< ref "/nginx-one/how-to/nginx-configs/add-file.md#new-ssl-certificate-or-ca-bundle" >}}) -- For all instances that are members of a [Config Sync Group]({{< ref "/nginx-one/how-to/config-sync-groups/manage-config-sync-groups/#configuration-management" >}}) +- [Unique instances]({{< ref "/nginx-one/nginx-configs/add-file.md#new-ssl-certificate-or-ca-bundle" >}}) +- For all instances that are members of a [Config Sync Group]({{< ref 
"/nginx-one/config-sync-groups/manage-config-sync-groups/#configuration-management" >}}) {{< tip >}} @@ -178,7 +178,7 @@ If you register an instance to NGINX One Console, as described in [Add your NGIN - Are used in their NGINX configuration - Do _not_ match an existing managed SSL certificate/CA bundle -These certificates appear in the list of unmanaged certificates. NGINX One Console does not store unmanaged certs or keys, only metadata associated with certs for monitoring. +These certificates appear in the list of unmanaged certificates. We recommend that you convert your unmanaged certificates. Converting to a managed certificate allows you to centrally manage, update, and deploy a certificate to your data plane from the NGINX One Console. @@ -193,5 +193,5 @@ To convert these cerificates to managed, start with the Certificates menu, and s ## See also - [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) -- [View and edit NGINX configurations]({{< ref "/nginx-one/how-to/nginx-configs/view-edit-nginx-configurations.md" >}}) -- [Add a file in a configuration]({{< ref "/nginx-one/how-to/nginx-configs/add-file.md" >}}) +- [Add an instance]({{< ref "/nginx-one/connect-instances/add-instance.md" >}}) +- [Add a file in a configuration]({{< ref "/nginx-one/nginx-configs/add-file.md" >}}) diff --git a/content/nginx-one/changelog.md b/content/nginx-one/changelog.md index e41c765f8..4fefc47b1 100644 --- a/content/nginx-one/changelog.md +++ b/content/nginx-one/changelog.md @@ -83,8 +83,8 @@ You can: - Remove a deployed certificate from a Config Sync Group For more information, including warnings about risks, see our documentation on how you can: -- [Add a file]({{< ref "/nginx-one/how-to/nginx-configs/add-file.md" >}}) -- [Manage certificates]({{< ref "/nginx-one/how-to/certificates/manage-certificates.md" >}}) +- [Add a file]({{< ref "/nginx-one/nginx-configs/add-file.md" >}}) +- [Manage certificates]({{< ref "/nginx-one/certificates/manage-certificates.md" >}}) ### Revert a configuration @@ -108,7 +108,7 @@ From the NGINX One Console you can now: - Ensure that your certificates are current and correct. - Manage your certificates from a central location. This can help you simplify operations and remotely update, rotate, and deploy those certificates. -For more information, see the full documentation on how you can [Manage Certificates]({{< ref "/nginx-one/how-to/certificates/manage-certificates.md" >}}). +For more information, see the full documentation on how you can [Manage Certificates]({{< ref "/nginx-one/certificates/manage-certificates.md" >}}). ## August 22, 2024 @@ -116,7 +116,7 @@ For more information, see the full documentation on how you can [Manage Certific Config Sync Groups are now available in the F5 NGINX One Console. This feature allows you to manage and synchronize NGINX configurations across multiple instances as a single entity, ensuring consistency and simplifying the management of your NGINX environment. -For more information, see the full documentation on [Managing Config Sync Groups]({{< ref "/nginx-one/how-to/config-sync-groups/manage-config-sync-groups.md" >}}). +For more information, see the full documentation on [Managing Config Sync Groups]({{< ref "/nginx-one/config-sync-groups/manage-config-sync-groups.md" >}}). ## August 8, 2024 @@ -136,7 +136,7 @@ Select the link for each CVE to see the details, including the CVE's publish dat ### Edit NGINX configurations -You can now make configuration changes to your NGINX instances. For more details, see [View and edit NGINX configurations]({{< ref "/nginx-one/how-to/nginx-configs/view-edit-nginx-configurations.md" >}}). +You can now make configuration changes to your NGINX instances. For more details, see [View and edit NGINX configurations]({{< ref "/nginx-one/nginx-configs/view-edit-nginx-configurations.md" >}}). ## May 28, 2024 diff --git a/content/nginx-one/config-sync-groups/_index.md b/content/nginx-one/config-sync-groups/_index.md new file mode 100644 index 000000000..eaefeaea3 --- /dev/null +++ b/content/nginx-one/config-sync-groups/_index.md @@ -0,0 +1,6 @@ +--- +description: +title: Change multiple instances with one push +weight: 400 +url: /nginx-one/config-sync-groups +--- diff --git a/content/nginx-one/how-to/config-sync-groups/add-file-csg.md b/content/nginx-one/config-sync-groups/add-file-csg.md similarity index 85% rename from content/nginx-one/how-to/config-sync-groups/add-file-csg.md rename to content/nginx-one/config-sync-groups/add-file-csg.md index ad8d31ca0..c416848a8 100644 --- a/content/nginx-one/how-to/config-sync-groups/add-file-csg.md +++ b/content/nginx-one/config-sync-groups/add-file-csg.md @@ -58,10 +58,10 @@ Enter the name of the desired configuration file, such as `abc.conf` and select ### Existing SSL Certificate or CA Bundle {{< include "nginx-one/add-file/existing-ssl-bundle.md" >}} -With this option, You can incorporate [Managed certificates]({{< ref "/nginx-one/how-to/certificates/manage-certificates.md#managed-and-unmanaged-certificates" >}}). +With this option, you can incorporate [Managed certificates]({{< ref "/nginx-one/certificates/manage-certificates.md#managed-and-unmanaged-certificates" >}}). 
## See also - [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) -- [View and edit NGINX configurations]({{< ref "/nginx-one/how-to/nginx-configs/view-edit-nginx-configurations.md" >}}) -- [Manage certificates]({{< ref "/nginx-one/how-to/certificates/manage-certificates.md" >}}) +- [Add an NGINX instance]({{< ref "/nginx-one/connect-instances/add-instance.md" >}}) +- [Manage certificates]({{< ref "/nginx-one/certificates/manage-certificates.md" >}}) diff --git a/content/nginx-one/how-to/config-sync-groups/manage-config-sync-groups.md b/content/nginx-one/config-sync-groups/manage-config-sync-groups.md similarity index 94% rename from content/nginx-one/how-to/config-sync-groups/manage-config-sync-groups.md rename to content/nginx-one/config-sync-groups/manage-config-sync-groups.md index d686e713e..056414a67 100644 --- a/content/nginx-one/how-to/config-sync-groups/manage-config-sync-groups.md +++ b/content/nginx-one/config-sync-groups/manage-config-sync-groups.md @@ -37,7 +37,7 @@ Config Sync Groups support configuration inheritance and persistance. If you've On the other hand, if you remove all instances from a Config Sync Group, the original configuration persists. In other words, the group retains the configuration from that first instance (or the original configuration). Any new instance that you add later still inherits that configuration. -{{< tip >}}You can use _unmanaged_ certificates. NGINX One Console does not store unmanaged certs or keys, only metadata associated with the certs or keys for monitoring. Your actions can affect the [Config Sync Group status](#config-sync-group-status). For future instances on the data plane, if it: +{{< tip >}}You can use _unmanaged_ certificates. Your actions can affect the [Config Sync Group status](#config-sync-group-status). 
For future instances on the data plane, if it: - Has unmanaged certificates in the same file paths as defined by the NGINX configuration as the Config Sync Group, that instance will be [**In Sync**](#config-sync-group-status). - Will be [**Out of Sync**](#config-sync-group-status) if the instance: @@ -100,12 +100,6 @@ Now that you created a Config Sync Group, you can add instances to that group. A Any instance that joins the group afterwards inherits that configuration. -{{< note >}} If you see the following [Config Sync Group Status](#config-sync-group-status) message: **Out of Sync**: - - - Review the instance details in NGINX One Console to identify any publication problems. - - After you change the configuration of the Config Sync Group, [Publish it](#publish-the-config-sync-group-configuration]. -In that case, review and resolve discrepancies between the Instance and the rest of the Config Sync Group. {{< /note >}} - ### Add an existing instance to a Config Sync Group {#add-an-existing-instance-to-a-config-sync-group} You can add existing NGINX instances that are already registered with NGINX One to a Config Sync Group. @@ -264,4 +258,4 @@ Monitor the **Config Sync Status** column. 
It can help you ensure that your conf ## See also - [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) -- [View and edit NGINX configurations]({{< ref "/nginx-one/how-to/nginx-configs/view-edit-nginx-configurations.md" >}}) +- [Add an NGINX instance]({{< ref "/nginx-one/connect-instances/add-instance.md" >}}) diff --git a/content/nginx-one/connect-instances/_index.md b/content/nginx-one/connect-instances/_index.md new file mode 100644 index 000000000..ea3ed0292 --- /dev/null +++ b/content/nginx-one/connect-instances/_index.md @@ -0,0 +1,6 @@ +--- +description: +title: Connect your instances +weight: 200 +url: /nginx-one/connect-instances/ +--- diff --git a/content/nginx-one/how-to/nginx-configs/add-instance.md b/content/nginx-one/connect-instances/add-instance.md similarity index 60% rename from content/nginx-one/how-to/nginx-configs/add-instance.md rename to content/nginx-one/connect-instances/add-instance.md index 8bae0ef02..328d93724 100644 --- a/content/nginx-one/how-to/nginx-configs/add-instance.md +++ b/content/nginx-one/connect-instances/add-instance.md @@ -16,34 +16,16 @@ to set up a data plane key to connect your instances to NGINX One. Before you add an instance to NGINX One Console, ensure: -- You have administrator access to NGINX One Console. -- You have configured instances of NGINX that you want to manage through NGINX One Console. -- You have or are ready to configure a data plane key. -- You have or are ready to set up managed certificates. +- You have [administrator access]({{< ref "/nginx-one/rbac/roles.md" >}}) to NGINX One Console. +- You have [configured instances of NGINX]({{< ref "/nginx-one/getting-started.md#add-your-nginx-instances-to-nginx-one" >}}) that you want to manage through NGINX One Console. +- You have or are ready to configure a [data plane key]({{< ref "/nginx-one/getting-started.md#generate-data-plane-key" >}}). 
+- You have or are ready to set up [managed certificates]({{< ref "/nginx-one/certificates/manage-certificates.md" >}}). -{{< note >}}If this is the first time an instance is being added to a Config Sync Group, and you have not yet defined the configuration for that Config Sync Group, that instance provides the template for that group. For more information, see [Configuration management]({{< ref "nginx-one/how-to/config-sync-groups/manage-config-sync-groups#configuration-management" >}}).{{< /note >}} +{{< note >}}If this is the first time an instance is being added to a Config Sync Group, and you have not yet defined the configuration for that Config Sync Group, that instance provides the template for that group. For more information, see [Configuration management]({{< ref "nginx-one/config-sync-groups/manage-config-sync-groups#configuration-management" >}}).{{< /note >}} ## Add an instance -You can add an instance to NGINX One Console in the following ways: - -- Directly, under **Instances** -- Indirectly, by selecting a Config Sync Group, and selecting **Add Instance to Config Sync Group** - -In either case, NGINX One Console gives you a choice for data plane keys: - -- Create a new key -- Use an existing key - -NGINX One Console takes the option you use, and adds the data plane key to a command that you'd use to register your target instance. You should see the command in the **Add Instance** screen in the console. - -Connect to the host where your NGINX instance is running. Run the provided command to [install NGINX Agent]({{< ref "/nginx-one/getting-started#install-nginx-agent" >}}) dependencies and packages on that host. - -```bash -curl https://agent.connect.nginx.com/nginx-agent/install | DATA_PLANE_KEY="" sh -s -- -y -``` - -Once the process is complete, you can configure that instance in your NGINX One Console. 
+{{< include "/nginx-one/how-to/add-instance.md" >}} ## Managed and Unmanaged Certificates @@ -71,5 +53,5 @@ Once you've completed the process, NGINX One reassigns this as a managed certifi ## Add an instance to a Config Sync Group -When you [Manage Config Sync Group membership]({{< ref "nginx-one/how-to/config-sync-groups/manage-config-sync-groups#manage-config-sync-group-membership" >}}), you can add an existing or new instance to the group of your choice. +When you [Manage Config Sync Group membership]({{< ref "nginx-one/config-sync-groups/manage-config-sync-groups#manage-config-sync-group-membership" >}}), you can add an existing or new instance to the group of your choice. That instance inherits the setup of that Config Sync Group. diff --git a/content/nginx-one/how-to/containers/connect-nginx-plus-container-images-to-nginx-one.md b/content/nginx-one/connect-instances/connect-nginx-plus-container-images-to-nginx-one.md similarity index 94% rename from content/nginx-one/how-to/containers/connect-nginx-plus-container-images-to-nginx-one.md rename to content/nginx-one/connect-instances/connect-nginx-plus-container-images-to-nginx-one.md index daae3af88..968ecc836 100644 --- a/content/nginx-one/how-to/containers/connect-nginx-plus-container-images-to-nginx-one.md +++ b/content/nginx-one/connect-instances/connect-nginx-plus-container-images-to-nginx-one.md @@ -1,7 +1,7 @@ --- description: '' docs: null -title: Connect NGINX Plus container images to NGINX One +title: Connect NGINX Plus container images toc: true weight: 400 type: @@ -19,7 +19,7 @@ This guide explains how to set up an F5 NGINX Plus Docker container with NGINX A Before you start, make sure you have: - A valid JSON Web Token (JWT) for your NGINX subscription. -- [A data plane key from NGINX One]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}). 
+- [A data plane key from NGINX One]({{< ref "/nginx-one/connect-instances/create-manage-data-plane-keys.md" >}}). - Docker installed and running on your system. #### Download your JWT license from MyF5 diff --git a/content/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md b/content/nginx-one/connect-instances/create-manage-data-plane-keys.md similarity index 98% rename from content/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md rename to content/nginx-one/connect-instances/create-manage-data-plane-keys.md index 9ac000860..224b3ff51 100644 --- a/content/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md +++ b/content/nginx-one/connect-instances/create-manage-data-plane-keys.md @@ -1,7 +1,7 @@ --- description: '' docs: DOCS-1395 -title: Create and manage data plane keys +title: Prepare - Create and manage data plane keys toc: true weight: 100 type: diff --git a/content/nginx-one/how-to/proxy-setup/set-up-nginx-proxy-for-nginx-one.md b/content/nginx-one/connect-instances/set-up-nginx-proxy-for-nginx-one.md similarity index 87% rename from content/nginx-one/how-to/proxy-setup/set-up-nginx-proxy-for-nginx-one.md rename to content/nginx-one/connect-instances/set-up-nginx-proxy-for-nginx-one.md index 974f7851c..6c97ffccc 100644 --- a/content/nginx-one/how-to/proxy-setup/set-up-nginx-proxy-for-nginx-one.md +++ b/content/nginx-one/connect-instances/set-up-nginx-proxy-for-nginx-one.md @@ -1,7 +1,7 @@ --- description: '' docs: null -title: Set up NGINX as a proxy for NGINX One +title: Minimize connections - Set up NGINX as a proxy toc: true weight: 300 type: @@ -17,7 +17,7 @@ This guide explains how to set up NGINX as a proxy for other NGINX instances to ## Before you start - [Install NGINX Open Source or NGINX Plus]({{< ref "/nginx/admin-guide/installing-nginx/" >}}). -- [Get a Data Plane Key from NGINX One]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}). 
+- [Get a Data Plane Key from NGINX One]({{< ref "/nginx-one/connect-instances/create-manage-data-plane-keys.md" >}}). --- @@ -61,7 +61,7 @@ In this step, we'll configure an NGINX instance to act as a proxy server for NGI --- -## Configure NGINX Agent to use the proxy instance +## Configure NGINX Agent to use the proxy To set up your other NGINX instances to use the proxy instance to connect to NGINX One, update the NGINX Agent configuration on those instances to use the proxy NGINX instance's IP address. See the example NGINX Agent configuration below. @@ -95,7 +95,7 @@ To set up your other NGINX instances to use the proxy instance to connect to NGI For more information, refer to the following resources: -- [Installing NGINX and NGINX Plus]({{< ref "/nginx/admin-guide/installing-nginx/" >}}) -- [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) +- [Install NGINX and NGINX Plus]({{< ref "/nginx/admin-guide/installing-nginx/" >}}) +- [Create and manage data plane keys]({{< ref "/nginx-one/connect-instances/create-manage-data-plane-keys.md" >}}) - [NGINX Agent Installation and upgrade](https://docs.nginx.com/nginx-agent/installation-upgrade/) -- [NGINX Agent Configuration](https://docs.nginx.com/nginx-agent/configuration/) \ No newline at end of file +- [NGINX Agent Configuration](https://docs.nginx.com/nginx-agent/configuration/) diff --git a/content/nginx-one/how-to/settings/_index.md b/content/nginx-one/connect-instances/settings/_index.md similarity index 57% rename from content/nginx-one/how-to/settings/_index.md rename to content/nginx-one/connect-instances/settings/_index.md index cdbbc1636..3bdb5095b 100644 --- a/content/nginx-one/how-to/settings/_index.md +++ b/content/nginx-one/connect-instances/settings/_index.md @@ -2,6 +2,6 @@ description: title: Settings weight: 500 -url: /nginx-one/how-to/settings +url: /nginx-one/connect-instances/settings draft: true --- diff --git 
a/content/nginx-one/getting-started.md b/content/nginx-one/getting-started.md index 9e3a0e8e5..162dd363f 100644 --- a/content/nginx-one/getting-started.md +++ b/content/nginx-one/getting-started.md @@ -9,6 +9,17 @@ docs: DOCS-1393 This guide provides step-by-step instructions on how to activate and start using F5 NGINX One Console. NGINX One is a management console for monitoring and managing NGINX data plane instances. +## Confirm access to the F5 Distributed Cloud + +You can access NGINX One Console through the F5 Distributed Cloud. + +1. Log in to [MyF5](https://my.f5.com/manage/s/). +1. Go to **My Products & Plans > Subscriptions** to see your active subscriptions. +1. Within one of your subscriptions, you should see an NGINX subscription, a Distributed Cloud subscription, or both. + - If neither appears in any of your subscriptions, contact your F5 Account Team or Customer Success Manager. + +Now identify your tenant. You or someone in your organization should have received an email from no-reply@cloud.f5.com asking you to update your password. The account name referenced in that email is the tenant name. Navigate to https://YOUR_TENANT_NAME.console.ves.volterra.io to access the F5 Distributed Cloud. + ## Enable the NGINX One service {#enable-nginx-one} To get started using NGINX One, enable the service on F5 Distributed Cloud. @@ -24,12 +35,10 @@ To get started using NGINX One, enable the service on F5 Distributed Cloud. Next, add your NGINX instances to NGINX One. You'll need to create a data plane key and then install NGINX Agent on each instance you want to monitor. -### Add an instance - -Depending on whether this is your first time using NGINX One Console or you've used it before, follow the appropriate steps to add an instance: +The following instructions include the minimal information you need to get started. 
See the following links for detailed instructions: -- **For first-time users:** On the welcome screen, select **Add Instance**. -- **For returning users:** If you've added instances previously and want to add more, select **Instances** on the left menu, then select **Add Instance**. +- [Create and manage data plane keys]({{< ref "/nginx-one/connect-instances/create-manage-data-plane-keys.md" >}}) +- [Add an NGINX instance]({{< ref "/nginx-one/connect-instances/add-instance.md" >}}) ### Generate a data plane key {#generate-data-plane-key} @@ -43,11 +52,18 @@ To generate a data plane key: {{}} Data plane keys are displayed only once and cannot be retrieved later. Be sure to copy and store this key securely. -Data plane keys expire after one year. You can change this expiration date later by [editing the key]({{< ref "nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md#change-expiration-date" >}}). +Data plane keys expire after one year. You can change this expiration date later by [editing the key]({{< ref "nginx-one/connect-instances/create-manage-data-plane-keys.md#change-expiration-date" >}}). -[Revoking a data plane key]({{< ref "nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md#revoke-data-plane-key" >}}) disconnects all instances that were registered with that key. +[Revoking a data plane key]({{< ref "nginx-one/connect-instances/create-manage-data-plane-keys.md#revoke-data-plane-key" >}}) disconnects all instances that were registered with that key. {{}} +### Add an instance + +Depending on whether this is your first time using NGINX One Console or you've used it before, follow the appropriate steps to add an instance: + +- **For first-time users:** On the welcome screen, select **Add Instance**. +- **For returning users:** If you've added instances previously and want to add more, select **Instances** on the left menu, then select **Add Instance**. 
+ ### Install NGINX Agent @@ -134,37 +150,11 @@ If you followed the [Installation and upgrade](https://docs.nginx.com/nginx-agen --- -## Enable NGINX metrics reporting - The NGINX One Console dashboard relies on APIs for NGINX Plus and NGINX Open Source Stub Status to report traffic and system metrics. The following sections show you how to enable those metrics. ### Enable NGINX Plus API - -To collect metrics for NGINX Plus, add the following to your NGINX Plus configuration file: - -```nginx -# Enable the /api/ location with appropriate access control -# to use the NGINX Plus API. -# -location /api/ { - api write=on; - allow 127.0.0.1; - deny all; -} -``` - -This configuration: - -- Enables the NGINX Plus API. -- Allows requests only from `127.0.0.1` (localhost). -- Blocks all other requests for security. - -After saving the changes, reload NGINX to apply the new configuration: - -```shell -nginx -s reload -``` +{{< include "/use-cases/monitoring/enable-nginx-plus-api.md" >}} ### Enable NGINX Open Source Stub Status API @@ -183,6 +173,8 @@ After connecting your NGINX instances to NGINX One, you can monitor their perfor ### Overview of the NGINX One dashboard +{{< include "/use-cases/monitoring/n1c-dashboard-overview.md" >}} + Navigating the dashboard: - **Drill down into specifics**: For in-depth information on a specific metric, like expiring certificates, click on the relevant link in the metric's card to go to a detailed overview page. 
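The NGINX Plus API block removed above now comes from the shared include; its Stub Status counterpart for NGINX Open Source, referenced in the next section, follows the same pattern. A minimal sketch (the `location` path and the localhost-only access rules are conventions, not requirements of the module):

```nginx
# Enable the Stub Status API so NGINX Agent can collect basic metrics
location /api {
    stub_status;
    allow 127.0.0.1;  # allow requests from localhost only
    deny all;         # block all other requests
}
```

As with the NGINX Plus configuration, reload NGINX after saving the change: `nginx -s reload`.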
diff --git a/content/nginx-one/glossary.md b/content/nginx-one/glossary.md
index c315d35ef..3104b27ea 100644
--- a/content/nginx-one/glossary.md
+++ b/content/nginx-one/glossary.md
@@ -3,7 +3,7 @@ description: ''
 docs: DOCS-1396
 title: Glossary
 toc: true
-weight: 1000
+weight: 800
 type:
 - reference
 ---
@@ -14,7 +14,7 @@ This glossary defines terms used in the F5 NGINX One Console and F5 Distributed
 {{}}
 | Term | Definition |
 |-------------|-------------|
-| **Config Sync Group** | A group of NGINX systems (or instances) with identical configurations. They may also share the same certificates. However, the instances in a Config Sync Group could belong to different systems and even different clusters. For more information, see this explanation of [Important considerations]({{< ref "/nginx-one/how-to/config-sync-groups/manage-config-sync-groups.md#important-considerations" >}}) |
+| **Config Sync Group** | A group of NGINX systems (or instances) with identical configurations. They may also share the same certificates. However, the instances in a Config Sync Group could belong to different systems and even different clusters. For more information, see this explanation of [Important considerations]({{< ref "/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md#important-considerations" >}}) |
 | **Data Plane** | The data plane is the part of a network architecture that carries user traffic. It handles tasks like forwarding data packets between devices and managing network communication. In the context of NGINX, the data plane is responsible for tasks such as load balancing, caching, and serving web content. |
 | **Instance** | An instance is an individual system with NGINX installed. You can group the instances of your choice in a Config Sync Group. When you add an instance to NGINX One, you need to use a data plane key. |
 | **Namespace** | In F5 Distributed Cloud, a namespace groups a tenant’s configuration objects, similar to administrative domains. Every object in a namespace must have a unique name, and each namespace must be unique to its tenant. This setup ensures isolation, preventing cross-referencing of objects between namespaces. You'll see the namespace in the NGINX One Console URL as `/namespaces//` |
@@ -22,6 +22,10 @@
 | **Tenant** | A tenant in F5 Distributed Cloud is an entity that owns a specific set of configuration and infrastructure. It is fundamental for isolation, meaning a tenant cannot access objects or infrastructure of other tenants. Tenants can be either individual or enterprise, with the latter allowing multiple users with role-based access control (RBAC). |
 {{}}
+## Legal notice: Licensing agreements for NGINX products
+
+Using NGINX One is subject to our End User Service Agreement (EUSA). For [NGINX Plus]({{< ref "/nginx" >}}), usage is governed by the End User License Agreement (EULA).
Open source projects, including [NGINX Agent](https://github.com/nginx/agent) and [NGINX Open Source](https://github.com/nginx/nginx), are covered under their respective licenses. For more details on these licenses, follow the provided links. + --- ## References diff --git a/content/nginx-one/how-to/_index.md b/content/nginx-one/how-to/_index.md index 3e88ec7ae..e7b505736 100644 --- a/content/nginx-one/how-to/_index.md +++ b/content/nginx-one/how-to/_index.md @@ -1,6 +1,6 @@ --- description: title: How-to guides -weight: 200 +weight: 700 url: /nginx-one/how-to/ --- diff --git a/content/nginx-one/how-to/certificates/_index.md b/content/nginx-one/how-to/certificates/_index.md deleted file mode 100644 index 39e16a174..000000000 --- a/content/nginx-one/how-to/certificates/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: -title: Certificates -weight: 400 -url: /nginx-one/how-to/certificates ---- diff --git a/content/nginx-one/how-to/config-sync-groups/_index.md b/content/nginx-one/how-to/config-sync-groups/_index.md deleted file mode 100644 index 31f258b69..000000000 --- a/content/nginx-one/how-to/config-sync-groups/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: -title: Config Sync Groups -weight: 250 -url: /nginx-one/how-to/config-sync-groups ---- diff --git a/content/nginx-one/how-to/containers/_index.md b/content/nginx-one/how-to/containers/_index.md deleted file mode 100644 index c3617fd7d..000000000 --- a/content/nginx-one/how-to/containers/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: -title: Containers -weight: 300 -url: /nginx-one/how-to/containers ---- diff --git a/content/nginx-one/how-to/data-plane-keys/_index.md b/content/nginx-one/how-to/data-plane-keys/_index.md deleted file mode 100644 index 0aa1ba7bf..000000000 --- a/content/nginx-one/how-to/data-plane-keys/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: -title: Data plane keys -weight: 100 -url: /nginx-one/how-to/data-plane-keys ---- diff --git 
a/content/nginx-one/how-to/nginx-configs/_index.md b/content/nginx-one/how-to/nginx-configs/_index.md deleted file mode 100644 index b7fa815da..000000000 --- a/content/nginx-one/how-to/nginx-configs/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: -title: Instances and Configurations -weight: 200 -url: /nginx-one/how-to/nginx ---- diff --git a/content/nginx-one/how-to/proxy-setup/_index.md b/content/nginx-one/how-to/proxy-setup/_index.md deleted file mode 100644 index 16f858cc2..000000000 --- a/content/nginx-one/how-to/proxy-setup/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: -title: Proxy setup -weight: 600 -url: /nginx-one/how-to/settings/nginx-as-proxy ---- diff --git a/content/nginx-one/how-to/staged-configs/_index.md b/content/nginx-one/how-to/staged-configs/_index.md deleted file mode 100644 index 51e07d1aa..000000000 --- a/content/nginx-one/how-to/staged-configs/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: -title: Staged Configurations -weight: 800 -url: /nginx-one/how-to/staged-configs ---- diff --git a/content/nginx-one/how-to/staged-configs/api-staged-config.md b/content/nginx-one/how-to/staged-configs/api-staged-config.md deleted file mode 100644 index ff559d014..000000000 --- a/content/nginx-one/how-to/staged-configs/api-staged-config.md +++ /dev/null @@ -1,29 +0,0 @@ ---- -# We use sentence case and present imperative tone -title: Use the API to manage your Staged Configurations -# Weights are assigned in increments of 100: determines sorting order -weight: 500 -# Creates a table of contents and sidebar, useful for large documents -toc: true -# Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this -type: how-to -# Intended for internal catalogue and search, case sensitive: -product: NGINX One ---- - -You can use F5 NGINX One Console API to manage your Staged Configurations. 
With our API, you can: - -- [Create an NGINX Staged Configuration]({{< ref "/nginx-one/api/api-reference-guide/#operation/createStagedConfig" >}}) - - Use details to add existing configuration files. -- [Get a list of existing Staged Configurations]({{< ref "/nginx-one/api/api-reference-guide/#operation/listStagedConfigs" >}}) - - Record the `object_id` of your target Staged Configuration for your analysis report. -- [Get an analysis report for an existing Staged Configuration]({{< ref "/nginx-one/api/api-reference-guide/#operation/getStagedConfigReport" >}}) - - Review the same recommendations found in the UI. -- [Export a Staged Configuration]({{< ref "/nginx-one/api/api-reference-guide/#operation/exportStagedConfig" >}}) - - Exports an existing Staged Configuration from the console. It sends you an archive of that configuration in `tar.gz` format. -- [Import a Staged Configuration]({{< ref "/nginx-one/api/api-reference-guide/#operation/importStagedConfig" >}}) - - Imports an existing Staged Configuration from your system and sends it to the console. This REST call assumes that your configuration is archived in `tar.gz` format. -- [Bulk manage multiple Staged Configurations]({{< ref "/nginx-one/api/api-reference-guide/#operation/bulkStagedConfigs" >}}) - - Allows you to delete multiple Staged Configurations. Requires each `object_id`. - - For several API endpoints, we ask for a `conf_path`. Make sure to set an absolute file path. If you make a REST call without an absolute file path, you'll see a 400 error message. 
diff --git a/content/nginx-one/metrics/_index.md b/content/nginx-one/metrics/_index.md
new file mode 100644
index 000000000..9602b6a8b
--- /dev/null
+++ b/content/nginx-one/metrics/_index.md
@@ -0,0 +1,6 @@
+---
+description:
+title: Set up metrics
+weight: 500
+url: /nginx-one/metrics/
+---
diff --git a/content/nginx-one/metrics/enable-metrics.md b/content/nginx-one/metrics/enable-metrics.md
new file mode 100644
index 000000000..0e677a78c
--- /dev/null
+++ b/content/nginx-one/metrics/enable-metrics.md
@@ -0,0 +1,23 @@
+---
+# We use sentence case and present imperative tone
+title: "Enable metrics"
+# Weights are assigned in increments of 100: determines sorting order
+weight: 100
+# Creates a table of contents and sidebar, useful for large documents
+toc: true
+# Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this
+nd-content-type: tutorial
+# Intended for internal catalogue and search, case sensitive:
+# Agent, N4Azure, NIC, NIM, NGF, NAP-DOS, NAP-WAF, NGINX One, NGINX+, Solutions, Unit
+nd-product: NGINX-One
+---
+
+The NGINX One Console dashboard relies on APIs for NGINX Plus and NGINX Open Source Stub Status to report traffic and system metrics. The following sections show you how to enable those metrics.
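For reference, the NGINX Plus API include in the next section typically amounts to a small `location` block like the following sketch (the `allow`/`deny` rules shown are an example access policy — adapt them to your own environment):

```nginx
# Enable the /api/ location with appropriate access control
# to use the NGINX Plus API.
location /api/ {
    api write=on;
    allow 127.0.0.1;   # allow requests from localhost only
    deny all;          # block all other requests
}
```

After saving the change, reload NGINX with `nginx -s reload` so the new configuration takes effect.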
+
+### Enable NGINX Plus API
+
+{{< include "/use-cases/monitoring/enable-nginx-plus-api.md" >}}
+
+### Enable NGINX Open Source Stub Status API
+
+{{< include "/use-cases/monitoring/enable-nginx-oss-stub-status.md" >}}
diff --git a/content/nginx-one/metrics/review-metrics.md b/content/nginx-one/metrics/review-metrics.md
new file mode 100644
index 000000000..2920ca63e
--- /dev/null
+++ b/content/nginx-one/metrics/review-metrics.md
@@ -0,0 +1,23 @@
+---
+# We use sentence case and present imperative tone
+title: "Review metrics on the NGINX One dashboard"
+# Weights are assigned in increments of 100: determines sorting order
+weight: 200
+# Creates a table of contents and sidebar, useful for large documents
+toc: true
+# Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this
+nd-content-type: how-to
+# Intended for internal catalogue and search, case sensitive:
+# Agent, N4Azure, NIC, NIM, NGF, NAP-DOS, NAP-WAF, NGINX One, NGINX+, Solutions, Unit
+nd-product: NGINX-One
+---
+
+After connecting your NGINX instances to NGINX One, you can monitor their performance and health. The NGINX One dashboard is designed for this purpose, offering an easy-to-use interface.
+
+### Log in to NGINX One
+
+1. Log in to [F5 Distributed Cloud Console](https://www.f5.com/cloud/products/distributed-cloud-console).
+1. Select **NGINX One > Visit Service**.
+
+{{< include "/use-cases/monitoring/n1c-dashboard-overview.md" >}}
+
diff --git a/content/nginx-one/nginx-configs/_index.md b/content/nginx-one/nginx-configs/_index.md
new file mode 100644
index 000000000..1e0f420e0
--- /dev/null
+++ b/content/nginx-one/nginx-configs/_index.md
@@ -0,0 +1,6 @@
+---
+description:
+title: Add and manage NGINX instances
+weight: 300
+url: /nginx-one/nginx-configs
+---
diff --git a/content/nginx-one/how-to/nginx-configs/add-file.md b/content/nginx-one/nginx-configs/add-file.md
similarity index 65%
rename from content/nginx-one/how-to/nginx-configs/add-file.md
rename to content/nginx-one/nginx-configs/add-file.md
index 7b654d86e..0f7570ba6 100644
--- a/content/nginx-one/how-to/nginx-configs/add-file.md
+++ b/content/nginx-one/nginx-configs/add-file.md
@@ -2,7 +2,7 @@
 docs: null
 title: Add a file to an instance
 toc: true
-weight: 400
+weight: 300
 type:
 - how-to
 ---
@@ -21,7 +21,7 @@ Before you add files in your configuration, ensure:
 ## Important considerations
 
 If your instance is a member of a Config Sync Group, changes that you make may be synchronized to other instances in that group.
-For more information, see how you can [Manage Config Sync Groups]({{< ref "/nginx-one/how-to/config-sync-groups/manage-config-sync-groups.md" >}}).
+For more information, see how you can [Manage Config Sync Groups]({{< ref "/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md" >}}).
 
 ## Add a file
 
@@ -62,6 +62,6 @@ Enter the name of the desired configuration file, such as `abc.conf` and select **Add**.
 
 ## See also
 
-- [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}})
-- [View and edit NGINX configurations]({{< ref "/nginx-one/how-to/nginx-configs/view-edit-nginx-configurations.md" >}})
-- [Manage certificates]({{< ref "/nginx-one/how-to/certificates/manage-certificates.md" >}})
+- [Create and manage data plane keys]({{< ref "/nginx-one/connect-instances/create-manage-data-plane-keys.md" >}})
+- [Add an NGINX instance]({{< ref "/nginx-one/connect-instances/add-instance.md" >}})
+- [Manage certificates]({{< ref "/nginx-one/nginx-configs/certificates/manage-certificates.md" >}})
diff --git a/content/nginx-one/nginx-configs/add-instance.md b/content/nginx-one/nginx-configs/add-instance.md
new file mode 100644
index 000000000..fd3b70fd5
--- /dev/null
+++ b/content/nginx-one/nginx-configs/add-instance.md
@@ -0,0 +1,57 @@
+---
+description: ''
+title: Add an NGINX instance
+toc: true
+weight: 100
+type:
+- how-to
+---
+
+## Overview
+
+This guide explains how to add an F5 NGINX instance in F5 NGINX One Console. You can add an instance from the NGINX One Console individually, or as part of a [Config Sync Group]({{< ref "/nginx-one/glossary.md" >}}).
In either case, you need
+to set up a data plane key to connect your instances to NGINX One.
+
+## Before you start
+
+Before you add an instance to NGINX One Console, ensure:
+
+- You have [administrator access]({{< ref "/nginx-one/rbac/roles.md" >}}) to NGINX One Console.
+- You have [configured instances of NGINX]({{< ref "/nginx-one/getting-started.md#add-your-nginx-instances-to-nginx-one" >}}) that you want to manage through NGINX One Console.
+- You have or are ready to configure a [data plane key]({{< ref "/nginx-one/getting-started.md#generate-data-plane-key" >}}).
+- You have or are ready to set up [managed certificates]({{< ref "/nginx-one/nginx-configs/certificates/manage-certificates.md" >}}).
+
+{{< note >}}If this is the first time an instance is being added to a Config Sync Group, and you have not yet defined the configuration for that Config Sync Group, that instance provides the template for that group. For more information, see [Configuration management]({{< ref "nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups#configuration-management" >}}).{{< /note >}}
+
+## Add an instance
+
+{{< include "/nginx-one/how-to/add-instance.md" >}}
+
+## Managed and unmanaged certificates
+
+If you add an instance with SSL/TLS certificates, those certificates can match an existing managed SSL certificate/CA bundle.
+
+### If the certificate is already managed
+
+If you add an instance with a managed certificate, as described in [Add your NGINX instances to NGINX One]({{< ref "/nginx-one/getting-started.md#add-your-nginx-instances-to-nginx-one" >}}), these certificates are added to your list of **Managed Certificates**.
+
+NGINX One Console can manage your instances along with those certificates.
+
+### If the certificate is not managed
+
+These certificates appear in the list of **Unmanaged Certificates**.
+
+To take full advantage of NGINX One, you can convert these to **Managed Certificates**.
You can then manage, update, and deploy a certificate to all of your NGINX instances in a Config Sync Group.
+
+To convert these certificates, start with the Certificates menu, and select **Unmanaged**. You should see a list of **Unmanaged Certificates or CA Bundles**. Then:
+
+- Select a certificate
+- Select **Convert to Managed**
+- In the window that appears, you can now include the same information as shown in the [Add a new certificate or bundle]({{< ref "/nginx-one/nginx-configs/certificates/manage-certificates.md#add-a-new-certificate-or-bundle" >}}) section
+
+Once you've completed the process, NGINX One reassigns this as a managed certificate, and assigns it to the associated instance or Config Sync Group.
+
+## Add an instance to a Config Sync Group
+
+When you [Manage Config Sync Group membership]({{< ref "nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups#manage-config-sync-group-membership" >}}), you can add an existing or new instance to the group of your choice.
+That instance inherits the setup of that Config Sync Group.
diff --git a/content/nginx-one/nginx-configs/certificates/_index.md b/content/nginx-one/nginx-configs/certificates/_index.md
new file mode 100644
index 000000000..b97d42034
--- /dev/null
+++ b/content/nginx-one/nginx-configs/certificates/_index.md
@@ -0,0 +1,6 @@
+---
+description:
+title: Monitor your certificates
+weight: 500
+url: /nginx-one/nginx-configs/certificates
+---
diff --git a/content/nginx-one/nginx-configs/certificates/manage-certificates.md b/content/nginx-one/nginx-configs/certificates/manage-certificates.md
new file mode 100644
index 000000000..b80d52c3c
--- /dev/null
+++ b/content/nginx-one/nginx-configs/certificates/manage-certificates.md
@@ -0,0 +1,198 @@
+---
+docs: null
+title: Manage certificates
+toc: true
+weight: 100
+aliases: /nginx-one/how-to/certificates/manage-certificates/
+type:
+- how-to
+---
+
+## Overview
+
+This guide explains how you can manage SSL/TLS certificates with the F5 NGINX One Console. Valid certificates support encrypted connections between NGINX and your users.
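Before uploading real certificates, you may want a throwaway pair to experiment with. One way to generate a self-signed certificate and RSA key with OpenSSL (the file names and `example.com` subject are placeholders, not values from this guide; production certificates typically come from a CA):

```shell
# Create a self-signed certificate and a matching 2048-bit RSA key
# (placeholder names; real certificates typically come from a CA)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout example.com.key \
  -out example.com.crt \
  -days 365 -subj "/CN=example.com"

# Verify that the certificate and key match by comparing public keys
openssl x509 -noout -pubkey -in example.com.crt
openssl pkey -pubout -in example.com.key
```

The two public-key printouts should be identical — a mismatched pair is one of the conditions NGINX One Console warns about.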
+ +You may have separate sets of SSL/TLS certificates, as described in the following table: + +{{}} +| Functionality | Typical file names | Notes | +|-------------------|--------------------------------------------------------------------|----------------------------------------------------------------------------------------| +| Website traffic | /etc/nginx/ssl/example.com.crt
/etc/nginx/ssl/example.com.key | Typically purchased from a Certificate Authority (CA) | +| Repository access | /etc/ssl/nginx/nginx-repo.crt
/etc/ssl/nginx/nginx-repo.key | Supports access to repositories to download and install NGINX packages | +| NGINX Licensing | /etc/ssl/nginx/server.crt
/etc/ssl/nginx/server.key | Supports access to repositories. Based on licenses downloaded from https://my.f5.com/ | +{{
}} + +Allowed directories depend on the [NGINX Agent]({{< ref "/nginx-one/getting-started/#install-nginx-agent" >}}). Look for the `/etc/nginx-agent/nginx-agent.conf` file. +Find the `config_dirs` parameter in that file, as described in the NGINX Agent [Basic configuration](https://docs.nginx.com/nginx-agent/configuration/configuration-overview/#cli-flags--environment-variables). +You may need to add a directory like `/etc/ssl` to that parameter. + +From the NGINX One Console you can: + +- Monitor all certificates configured for use by your connected NGINX Instances. +- Ensure that your certificates are current and correct. +- Manage your certificates from a central location. This can help you simplify operations and remotely update, rotate, and deploy those certificates. + +You can manage the certificates for: + +- [Unique instances]({{< ref "/nginx-one/nginx-configs/add-file.md#new-ssl-certificate-or-ca-bundle" >}}) +- For all instances that are members of a [Config Sync Group]({{< ref "/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups/#configuration-management" >}}) + + +{{< tip >}} + +If you are managing the certificate from NGINX One Console, we recommend that you avoid directly manipulating the files on the data plane. + +{{< /tip >}} + +## Before you start + +Before you add and manage certificates with the NGINX One Console make sure: + +- You have access to the NGINX One Console +- You have access through the F5 Distributed Cloud role, as described in the [Authentication]({{< ref "/nginx-one/api/authentication.md" >}}) guide, to manage SSL/TLS certificates + - You have the `f5xc-nginx-one-user` role for your account +- Your SSL/TLS certificates and keys match + +### SSL/TLS certificates and more + +NGINX One Console supports certificates for access to repositories. 
You may need a copy of these files from your Certificate Authority (CA) to upload them to NGINX One Console:
+
+- SSL Certificate
+  - Example file extensions: .crt, .pem
+- Private key
+  - Example file extensions: .key, .pem
+
+The NGINX One Console allows you to upload these certificates as text and as files. You can also upload your own certificate files (with file extensions such as .crt and .key).
+
+Make sure your certificates and keys use one of the following algorithms:
+
+- RSA
+- ECC/ECDSA
+
+In other words, any private key of this type should be supported, regardless of the curve types or hashing algorithm.
+
+For example, if you use ECDSA private keys in PEM format, the PEM headers should contain:
+
+```
+-----BEGIN EC PRIVATE KEY-----
+
+-----END EC PRIVATE KEY-----
+
+```
+
+If you use one of these keys, the US National Institute of Standards and Technology, in [Publication 800-57 Part 3 (PDF)](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-57Pt3r1.pdf), recommends a key size of at least
+ +### Include certificates in NGINX configuration + +For NGINX configuration, these files are typically associated with the following NGINX directives: + +- [`ssl_certificate`](https://nginx.org/en/docs/stream/ngx_stream_ssl_module.html#ssl_certificate) +- [`ssl_certificate_key`](https://nginx.org/en/docs/stream/ngx_stream_ssl_module.html#ssl_certificate_key) +- [`ssl_trusted_certificate`](https://nginx.org/en/docs/stream/ngx_stream_ssl_module.html#ssl_trusted_certificate) +- [`ssl_client_certificate`](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_client_certificate) +- [`proxy_ssl_certificate`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_certificate) +- [`proxy_ssl_certificate_key`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_certificate_key) + +## Important considerations + +Most websites include valid information from public keys and certificates or CA bundles. However, the NGINX One Console accepts, but provides warnings for these use cases: + +- When the public certificate is expired +- When the leaf certificate part of a certificate chain is expired +- When any of the components of a CA bundle are expired +- When the public key does not match the private certificate + +In such cases, you may get websites that present "Your connection is not private" warning messages in client web browsers. + +## Review existing certificates + +Follow these steps to review existing certificates for your instances. + +On the left-hand pane, select **Certificates**. 
In the window that appears, you see:
+
+{{}}
+| Term | Definition |
+|-------------|-------------|
+| **Total** | Total number of certificates available to NGINX One Console |
+| **Valid (31+ days)** | Number of certificates that expire in 31 or more days |
+| **Expires Soon (<31 days)** | Number of certificates that expire in less than 31 days |
+| **Expired** | Number of expired certificates |
+| **Not Ready** | Certificates with a start date in the future |
+| **Managed** | Managed by and stored in the NGINX One Console |
+| **Unmanaged** | Detected by, but not managed by, NGINX One Console. To convert to managed, you may need to upload the certificate and key during the process. |
+{{}}
+
+You can **Add Filter** to filter certificates by:
+
+- Name
+- Status
+- Subject Name
+- Type
+
+The **Export** option exports basic certificate file information to a CSV file. It does _not_ include the content of the public certificate or the private key.
+
+## Add a new certificate or bundle
+
+To add a new certificate, select **Add Certificate**.
+
+In the screen that appears, you can add a certificate name. If you don't add a name, NGINX One will add a name for you, based on the expiration date for the certificate.
+
+You can add certificates in the following formats:
+
+- **SSL Certificate and Key**
+- **CA Certificate Bundle**
+
+In each case, you can upload files directly, or enter the content of the certificates in a text box. Once you upload these certificates, you'll see:
+
+- **Certificate Details**, with the Subject Name, start and end dates.
+- **Key Details**, with the encryption key size and algorithm, such as RSA.
+
+## Edit an existing certificate or bundle
+
+You can modify existing certificates from the **Certificates** screen. Select the certificate of your choice. Depending on the type of certificate, you'll then see either an **Edit Certificate** or **Edit CA Bundle** option.
The NGINX One Console then presents a window with the same options as shown when you [Add a new certificate](#add-a-new-certificate-or-bundle).
+
+If that certificate is already managed as part of a Config Sync Group, the changes you make affect all instances in that group.
+
+## Remove a deployed certificate
+
+You can remove a deployed certificate from an independent instance or from a Config Sync Group. This will remove the certificate's association with the instance or group, but it does not delete the certificate files from the instance(s).
+
+Every instance with a deployed certificate includes paths to certificates in its configuration files. If you remove the deployed file path to one certificate, that change is limited to that one instance.
+
+Every Config Sync Group also includes paths to certificates in its configuration files. If you remove the deployed path to one certificate, that change affects all instances that belong to that Config Sync Group.
+
+## Delete a deployed certificate
+
+To delete a certificate, find the name in the **Certificates** screen. Find the **Actions** column associated with the certificate. Select the ellipsis (`...`) and then select **Delete**. Before deleting that certificate, you should see a warning.
+
+If that certificate is managed and is part of a Config Sync Group, that change affects all instances in that group.
+
+{{< warning >}} Be cautious if you want to delete certificates that are being used by an instance or a Config Sync Group. Deleting such certificates leads to failure in affected NGINX deployments.
{{< /warning >}}
+
+## Managed and unmanaged certificates
+
+If you register an instance to NGINX One Console, as described in [Add your NGINX instances to NGINX One]({{< ref "/nginx-one/getting-started.md#add-your-nginx-instances-to-nginx-one" >}}), and the associated SSL/TLS certificates:
+
+- Are used in their NGINX configuration
+- Do _not_ match an existing managed SSL certificate/CA bundle
+
+These certificates appear in the list of unmanaged certificates.
+
+We recommend that you convert your unmanaged certificates. Converting to a managed certificate allows you to centrally manage, update, and deploy a certificate to your data plane from the NGINX One Console.
+
+To convert these certificates to managed, start with the Certificates menu, and select **Unmanaged**. You should see a list of **Unmanaged Certificates or CA Bundles**. Then:
+
+- Select a certificate
+- Select **Convert to Managed**
+- In the window that appears, you can now include the same information as shown in the [Add a new certificate or bundle](#add-a-new-certificate-or-bundle) section
+
+## See also
+
+- [Create and manage data plane keys]({{< ref "/nginx-one/connect-instances/create-manage-data-plane-keys.md" >}})
+- [Add an instance]({{< ref "/nginx-one/connect-instances/add-instance.md" >}})
+- [Add a file in a configuration]({{< ref "/nginx-one/nginx-configs/add-file.md" >}})
diff --git a/content/nginx-one/how-to/nginx-configs/clean-up-unavailable-instances.md b/content/nginx-one/nginx-configs/clean-up-unavailable-instances.md
similarity index 96%
rename from content/nginx-one/how-to/nginx-configs/clean-up-unavailable-instances.md
rename to content/nginx-one/nginx-configs/clean-up-unavailable-instances.md
index 6a119617d..77d406013 100644
--- a/content/nginx-one/how-to/nginx-configs/clean-up-unavailable-instances.md
+++ b/content/nginx-one/nginx-configs/clean-up-unavailable-instances.md
@@ -3,7 +3,7 @@
 description: ''
 docs: null
 title: Clean up unavailable NGINX instances
 toc: true
-weight: 200
+weight: 1000
 type:
 - how-to
 ---
diff --git a/content/nginx-one/nginx-configs/config-sync-groups/_index.md b/content/nginx-one/nginx-configs/config-sync-groups/_index.md
new file mode 100644
index 000000000..eaefeaea3
--- /dev/null
+++ b/content/nginx-one/nginx-configs/config-sync-groups/_index.md
@@ -0,0 +1,6 @@
+---
+description:
+title: Change multiple instances with one push
+weight: 400
+url: /nginx-one/config-sync-groups
+---
diff --git a/content/nginx-one/nginx-configs/config-sync-groups/add-file-csg.md b/content/nginx-one/nginx-configs/config-sync-groups/add-file-csg.md
new file mode 100644
index 000000000..147b83950
--- /dev/null
+++ b/content/nginx-one/nginx-configs/config-sync-groups/add-file-csg.md
@@ -0,0 +1,67 @@
+---
+docs: null
+title: Add a file to a Config Sync Group
+toc: true
+weight: 400
+type:
+- how-to
+---
+
+## Overview
+
+{{< include "nginx-one/add-file/overview.md" >}}
+
+## Before you start
+
+Before you add files in your configuration, ensure:
+
+- You have access to the NGINX One Console.
+- Config Sync Groups are properly registered with NGINX One Console.
+
+## Important considerations
+
+This page applies when you want to add a file to a Config Sync Group. Any changes you make here apply to all [Instances]({{< ref "/nginx-one/glossary.md" >}}) of that Config Sync Group.
+
+## Add a file
+
+You can use the NGINX One Console to add a file to a specific Config Sync Group. To do so:
+
+1. Select the Config Sync Group to manage.
+1. Select the **Configuration** tab.
+
+   {{< tip >}}
+
+   {{< include "nginx-one/add-file/edit-config-tip.md" >}}
+
+   {{< /tip >}}
+
+1. Select **Edit Configuration**.
+1. In the **Edit Configuration** window that appears, select **Add File**.
+
+You now have multiple options, described in the sections which follow.
+
+### New Configuration File
+
+Enter the name of the desired configuration file, such as `abc.conf`, and select **Add**.
The configuration file appears in the **Edit Configuration** window. + +### New SSL Certificate or CA Bundle + +{{< include "nginx-one/add-file/new-ssl-bundle.md" >}} + + {{< tip >}} + + Make sure to specify the path to your certificate in your NGINX configuration, + with the `ssl_certificate` and `ssl_certificate_key` directives. + + {{< /tip >}} + +### Existing SSL Certificate or CA Bundle + +{{< include "nginx-one/add-file/existing-ssl-bundle.md" >}} +With this option, you can incorporate [Managed certificates]({{< ref "/nginx-one/nginx-configs/certificates/manage-certificates.md#managed-and-unmanaged-certificates" >}}). + +## See also + +- [Create and manage data plane keys]({{< ref "/nginx-one/connect-instances/create-manage-data-plane-keys.md" >}}) +- [Add an NGINX instance]({{< ref "/nginx-one/connect-instances/add-instance.md" >}}) +- [Manage certificates]({{< ref "/nginx-one/nginx-configs/certificates/manage-certificates.md" >}}) diff --git a/content/nginx-one/how-to/nginx-configs/manage-config-sync-groups.md b/content/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md similarity index 53% rename from content/nginx-one/how-to/nginx-configs/manage-config-sync-groups.md rename to content/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md index 8bc10cce6..d5811fe1a 100644 --- a/content/nginx-one/how-to/nginx-configs/manage-config-sync-groups.md +++ b/content/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md @@ -1,6 +1,6 @@ --- docs: null -title: Manage config sync groups +title: Manage Config Sync Groups toc: true weight: 300 type: @@ -9,60 +9,104 @@ type: ## Overview -This guide explains how to create and manage config sync groups in the F5 NGINX One Console. Config sync groups synchronize NGINX configurations across multiple NGINX instances, ensuring consistency and ease of management. +If you work with several instances of NGINX, it can help to organize these instances in Config Sync Groups. 
Each instance in a Config Sync Group has the same configuration. -If you’ve used [instance groups in NGINX Instance Manager]({{< ref "/nim/nginx-instances/manage-instance-groups.md" >}}), you’ll find config sync groups in NGINX One similar, though the steps and terminology differ slightly. +This guide explains how to create and manage Config Sync Groups in the F5 NGINX One Console. Config Sync Groups synchronize NGINX configurations across multiple NGINX instances, ensuring consistency and ease of management. + +If you’ve used [instance groups in NGINX Instance Manager]({{< ref "/nim/nginx-instances/manage-instance-groups.md" >}}), you’ll find Config Sync Groups in NGINX One similar, though the steps and terminology differ slightly. + +Config Sync Groups are functionally different from syncing instances in a cluster. They let you manage and synchronize configurations across multiple NGINX instances, all at once. + +This is particularly useful when your NGINX instances are load-balanced by an external load balancer, as it ensures consistency across all instances. In contrast, cluster syncing, like [zone syncing]({{< ref "nginx/admin-guide/high-availability/zone_sync_details.md" >}}), ensures data consistency and high availability across NGINX instances in a cluster. While Config Sync Groups focus on configuration management, cluster syncing supports failover and data consistency. ## Before you start -Before you create and manage config sync groups, ensure: +Before you create and manage Config Sync Groups, ensure: - You have access to the NGINX One Console. -- You have the necessary permissions to create and manage config sync groups. -- NGINX instances are properly registered with NGINX One if you plan to add existing instances to a config sync group. +- You have the necessary permissions to create and manage Config Sync Groups. +- If you plan to add existing instances to a Config Sync Group, make sure those NGINX instances are properly registered with NGINX One.
-## Important considerations +## Configuration management + +Config Sync Groups support configuration inheritance and persistence. If you've just created a Config Sync Group, you can define the configuration for that group in the following ways: + +- Before adding an instance to a group, you can [Define the Config Sync Group configuration manually](#define-the-config-sync-group-configuration-manually). +- When you add the first instance to a group, that instance defines the configuration for that Config Sync Group. +- Afterwards, you can modify the configuration of the Config Sync Group. That modifies the configuration of all member instances. Future members of that group inherit that modified configuration. + +On the other hand, if you remove all instances from a Config Sync Group, the original configuration persists. In other words, the group retains the configuration from that first instance (or the original configuration). Any new instance that you add later still inherits that configuration. + +{{< tip >}}You can use _unmanaged_ certificates. Your actions can affect the [Config Sync Group status](#config-sync-group-status). For a future instance on the data plane: + +- If the instance has unmanaged certificates in the same file paths as those defined by the group's NGINX configuration, it will be [**In Sync**](#config-sync-group-status). +- The instance will be [**Out of Sync**](#config-sync-group-status) if it: + - Does not have unmanaged certificates in the same file paths + - Has unmanaged certificates in a different directory from the Config Sync Group +{{< /tip >}} + +### Risk when adding multiple instances to a Config Sync Group + +If you add multiple instances to a single Config Sync Group simultaneously (for example, with automation), there's a risk that the group adopts the configuration of a random instance. To prevent this problem, you should: + +1. Create a Config Sync Group. +1. Add a configuration to the Config Sync Group, so all instances inherit it. +1.
Add the instances in a separate operation. + +Your instances should synchronize with your desired configuration within 30 seconds. -- **NGINX Agent configuration file location**: When you run the NGINX Agent installation script to register an instance with NGINX One, the script creates the `agent-dynamic.conf` file, which contains settings for the NGINX Agent, including the specified config sync group. This file is typically located in `/var/lib/nginx-agent/` on most systems; however, on FreeBSD, it's located at `/var/db/nginx-agent/`. +### Use an instance to define the Config Sync Group configuration -- **Mixing NGINX Open Source and NGINX Plus instances**: You can add both NGINX Open Source and NGINX Plus instances to the same config sync group, but there are limitations. If your configuration includes features exclusive to NGINX Plus, synchronization will fail on NGINX Open Source instances because they don't support these features. NGINX One allows you to mix NGINX instance types for flexibility, but it’s important to ensure that the configurations you're applying are compatible with all instances in the group. +1. Follow the steps in the [**Add an existing instance to a Config Sync Group**](#add-an-existing-instance-to-a-config-sync-group) or [**Add a new instance to a Config Sync Group**](#add-a-new-instance-to-a-config-sync-group) sections to add your first instance to the group. +2. The NGINX configuration from this instance will automatically become the group's configuration. +3. You can further edit and publish this configuration by following the steps in the [**Publish the Config Sync Group configuration**](#publish-the-config-sync-group-configuration) section. -- **Single config sync group membership**: An instance can join only one config sync group at a time. 
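The "add the instances in a separate operation" step lends itself to scripting against the agent's dynamic config file. A minimal sketch, assuming the default Linux path for `agent-dynamic.conf` (on FreeBSD it's under `/var/db/nginx-agent/`); `join_csg` is our hypothetical helper name, not an NGINX One command:

```shell
#!/bin/sh
# Sketch: point an instance at a Config Sync Group by setting the
# instance_group key in the NGINX Agent dynamic config file.

join_csg() {
  conf="$1"
  group="$2"
  if grep -q '^instance_group:' "$conf" 2>/dev/null; then
    # An instance belongs to at most one group: replace any existing line.
    sed "s/^instance_group:.*/instance_group: $group/" "$conf" > "$conf.tmp" &&
      mv "$conf.tmp" "$conf"
  else
    printf 'instance_group: %s\n' "$group" >> "$conf"
  fi
  # The change takes effect after the agent restarts:
  #   sudo systemctl restart nginx-agent
}

# Example:
#   join_csg /var/lib/nginx-agent/agent-dynamic.conf production-group
```

Running this once per instance, after the group configuration exists, keeps the join order deterministic.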
+### Define the Config Sync Group configuration manually -- **Configuration inheritance**: If the config sync group already has a configuration defined, that configuration will be pushed to instances when they join. +You can manually define the group's configuration before adding any instances. When you add instances to the group later, they automatically inherit this configuration. -- **Using an instance's configuration for the group configuration**: If an instance is the first to join a config sync group and the group's configuration hasn't been defined, the instance’s configuration will become the group’s configuration. Any instances added later will automatically inherit this configuration. +To manually set the group configuration: - {{< note >}} If you add multiple instances to a single config sync group, simultaneously (with automation), follow these steps. Your instances will inherit your desired configuration: +1. Follow steps 1–4 in the [**Create a Config Sync Group**](#create-a-config-sync-group) section to create your Config Sync Group. +2. After creating the group, select the **Configuration** tab. +3. Since no instances have been added, the **Configuration** tab will show an empty configuration with a message indicating that no config files exist yet. +4. To add a configuration, select **Edit Configuration**. +5. In the editor, define your NGINX configuration as needed. This might include adding or modifying `nginx.conf` or other related files. +6. After making your changes, select **Next** to view a split screen showing your changes. +7. If you're satisfied with the configuration, select **Save and Publish**. + +## Important considerations - 1. Create a config sync group. - 1. Add a configuration to the config sync group, so all instances inherit it. - 1. Add the instances in a separate operation. +When you plan Config Sync Groups, consider the following factors: - Your instances should synchronize with your desired configuration within 30 seconds. 
{{< /note >}} +- **Single Config Sync Group membership**: You can add an instance to only one Config Sync Group. -- **Persistence of a config sync group's configuration**: The configuration for a config sync group persists until you delete the group. Even if you remove all instances, the group's configuration stays intact. Any new instances that join later will automatically inherit this configuration. +- **NGINX Agent configuration file location**: When you run the NGINX Agent installation script to register an instance with NGINX One, the script creates the `agent-dynamic.conf` file, which contains settings for the NGINX Agent, including the specified Config Sync Group. This file is typically located in `/var/lib/nginx-agent/` on most systems; however, on FreeBSD, it's located at `/var/db/nginx-agent/`. -- **Config sync groups vs. cluster syncing**: Config sync groups are not the same as cluster syncing. Config sync groups let you to manage and synchronize configurations across multiple NGINX instances as a single entity. This is particularly useful when your NGINX instances are load-balanced by an external load balancer, as it ensures consistency across all instances. In contrast, cluster syncing, like [zone syncing]({{< ref "nginx/admin-guide/high-availability/zone_sync_details.md" >}}), ensures data consistency and high availability across NGINX instances in a cluster. While config sync groups focus on configuration management, cluster syncing supports failover and data consistency. +- **Mixing NGINX Open Source and NGINX Plus instances**: You can add both NGINX Open Source and NGINX Plus instances to the same Config Sync Group, but there are limitations. If your configuration includes features exclusive to NGINX Plus, synchronization will fail on NGINX Open Source instances because they don't support these features. 
NGINX One allows you to mix NGINX instance types for flexibility, but it’s important to ensure that the configurations you're applying are compatible with all instances in the group. -## Create a config sync group +## Create a Config Sync Group -Creating a config sync group allows you to manage the configurations of multiple NGINX instances as a single entity. +When you create a Config Sync Group, you can manage the configurations of multiple NGINX instances as a single entity. 1. On the left menu, select **Config Sync Groups**. 2. Select **Add Config Sync Group**. -3. In the **Name** field, type a name for your config sync group. -4. Select **Create** to add the config sync group. +3. In the **Name** field, type a name for your Config Sync Group. +4. Select **Create** to add the Config Sync Group. + +## Manage Config Sync Group membership + +Now that you created a Config Sync Group, you can add instances to that group. As described in [Configuration management](#configuration-management), the first instance you add to a group, when you add it, defines the initial configuration for the group. You can update the configuration for the entire Config Sync Group. -## Manage config sync group membership +Any instance that joins the group afterwards inherits that configuration. -### Add an existing instance to a config sync group {#add-an-existing-instance-to-a-config-sync-group} +### Add an existing instance to a Config Sync Group {#add-an-existing-instance-to-a-config-sync-group} -You can add existing NGINX instances that are already registered with NGINX One to a config sync group. +You can add existing NGINX instances that are already registered with NGINX One to a Config Sync Group. 1. Open a command-line terminal on the NGINX instance. 2. Open the `/var/lib/nginx-agent/agent-dynamic.conf` file in a text editor. -3. At the end of the file, add a new line beginning with `instance_group:`, followed by the config sync group name. +3. 
At the end of the file, add a new line beginning with `instance_group:`, followed by the Config Sync Group name. ``` text instance_group: @@ -74,14 +118,14 @@ You can add existing NGINX instances that are already registered with NGINX One sudo systemctl restart nginx-agent ``` -### Add a new instance to a config sync group {#add-a-new-instance-to-a-config-sync-group} +### Add a new instance to a Config Sync Group {#add-a-new-instance-to-a-config-sync-group} When adding a new NGINX instance that is not yet registered with NGINX One, you need a data plane key to securely connect the instance. You can generate a new data plane key during the process or use an existing one if you already have it. 1. On the left menu, select **Config Sync Groups**. -2. Select the config sync group in the list. +2. Select the Config Sync Group in the list. 3. In the **Instances** pane, select **Add Instance to Config Sync Group**. -4. In the **Add Instance to Config Sync Group** dialog, select **Register a new instance with NGINX One then add to config sync group**. +4. In the **Add Instance to Config Sync Group** dialog, select **Register a new instance with NGINX One then add to Config Sync Group**. 5. Select **Next**. 6. **Generate a new data plane key** (choose this option if you don't have an existing key): @@ -107,7 +151,7 @@ When adding a new NGINX instance that is not yet registered with NGINX One, you 8. **Log in to the NGINX private registry**: - - Replace `YOUR_JWT_HERE` with your JSON Web Token (JWT) from [MyF5](https://my.f5.com/manage/s/). + - Replace `YOUR_JWT_HERE` with your JSON Web Token (JWT) license from [MyF5](https://my.f5.com/manage/s/). 
```shell sudo docker login private-registry.nginx.com --username=YOUR_JWT_HERE --password=none @@ -120,6 +164,12 @@ When adding a new NGINX instance that is not yet registered with NGINX One, you **Note**: Subject to availability, you can modify the `agent: ` to match the specific NGINX Plus version, OS type, and OS version you need. For example, you might use `agent: r32-ubi-9`. For more details on version tags and how to pull an image, see [Deploying NGINX and NGINX Plus on Docker]({{< ref "nginx/admin-guide/installing-nginx/installing-nginx-docker.md#pulling-the-image" >}}). + + - From the **OS Type** list, choose the appropriate operating system for your Docker image. + - After selecting the OS, run the provided command to pull the Docker image. + + **Note**: Subject to availability, you can modify the `agent: ` to match the specific NGINX Plus version, OS type, and OS version you need. For example, you might use `agent: r32-ubi-9`. For more details on version tags and how to pull an image, see [Deploying NGINX and NGINX Plus on Docker]({{< ref "nginx/admin-guide/installing-nginx/installing-nginx-docker.md#pulling-the-image" >}}). + 10. Run the provided command, which includes the data plane key, in your NGINX instance terminal to start the Docker container. 11. Select **Done** to complete the process. @@ -132,17 +182,17 @@ When adding a new NGINX instance that is not yet registered with NGINX One, you Data plane keys are required for registering NGINX instances with the NGINX One Console. These keys serve as secure tokens, ensuring that only authorized instances can connect and communicate with NGINX One. -For more details on creating and managing data plane keys, see [Create and manage data plane keys]({{}}). +For more details on creating and managing data plane keys, see [Create and manage data plane keys]({{}}). 
{{}} -### Change the config sync group for an instance +### Move an instance to a different Config Sync Group -If you need to move an NGINX instance to a different config sync group, follow these steps: +If you need to move an NGINX instance to a different Config Sync Group, follow these steps: 1. Open a command-line terminal on the NGINX instance. 2. Open the `/var/lib/nginx-agent/agent-dynamic.conf` file in a text editor. -3. Locate the line that begins with `instance_group:` and change it to the name of the new config sync group. +3. Locate the line that begins with `instance_group:` and change it to the name of the new Config Sync Group. ``` text instance_group: @@ -154,11 +204,11 @@ If you need to move an NGINX instance to a different config sync group, follow t sudo systemctl restart nginx-agent ``` -**Important:** If the instance is the first to join the new config sync group and a group configuration hasn’t been added manually beforehand, the instance’s configuration will automatically become the group’s configuration. Any instances added to this group later will inherit this configuration. +If you move an instance with certificates from one Config Sync Group to another, NGINX One adds or removes those certificates from the data plane to synchronize with the deployed certificates of the group. -### Remove an instance from a config sync group +### Remove an instance from a Config Sync Group -If you need to remove an NGINX instance from a config sync group without adding it to another group, follow these steps: +If you need to remove an NGINX instance from a Config Sync Group without adding it to another group, follow these steps: 1. Open a command-line terminal on the NGINX instance. 2. Open the `/var/lib/nginx-agent/agent-dynamic.conf` file in a text editor.
@@ -174,66 +224,38 @@ If you need to remove an NGINX instance from a config sync group without adding sudo systemctl restart nginx-agent ``` -By removing or commenting out this line, the instance will no longer be associated with any config sync group. - -## Add the config sync group configuration - -You can set the configuration for a config sync group in two ways: - -### Define the group configuration manually - -You can manually define the group's configuration before adding any instances. When you add instances to the group later, they automatically inherit this configuration. - -To manually set the group configuration: - -1. Follow steps 1–4 in the [**Create a config sync group**](#create-a-config-sync-group) section to create your config sync group. -2. After creating the group, select the **Configuration** tab. -3. Since no instances have been added, the **Configuration** tab will show an empty configuration with a message indicating that no config files exist yet. -4. To add a configuration, select **Edit Configuration**. -5. In the editor, define your NGINX configuration as needed. This might include adding or modifying `nginx.conf` or other related files. -6. After making your changes, select **Next** to view a split screen showing your changes. -7. If you're satisfied with the configuration, select **Save and Publish**. - -### Use an instance's configuration - -If you don't manually define a group config, the NGINX configuration of the first instance added to a config sync group becomes the group's configuration. Any additional instances added afterward inherit this group configuration. - -To set the group configuration by adding an instance: - -1. Follow the steps in the [**Add an existing instance to a config sync group**](#add-an-existing-instance-to-a-config-sync-group) or [**Add a new instance to a config sync group**](#add-a-new-instance-to-a-config-sync-group) sections to add your first instance to the group. -2. 
The NGINX configuration from this instance will automatically become the group's configuration. -3. You can further edit and publish this configuration by following the steps in the [**Publish the config sync group configuration**](#publish-the-config-sync-group-configuration) section. +By removing or commenting out this line, the instance will no longer be associated with any Config Sync Group. -## Publish the config sync group configuration {#publish-the-config-sync-group-configuration} +## Publish the Config Sync Group configuration {#publish-the-config-sync-group-configuration} -After the config sync group is created, you can modify and publish the group's configuration as needed. Any changes made to the group configuration will be applied to all instances within the group. +After the Config Sync Group is created, you can modify and publish the group's configuration as needed. Any changes made to the group configuration will be applied to all instances within the group. 1. On the left menu, select **Config Sync Groups**. -2. Select the config sync group in the list. +2. Select the Config Sync Group in the list. 3. Select the **Configuration** tab to view the group's NGINX configuration. 4. To modify the group's configuration, select **Edit Configuration**. 5. Make the necessary changes to the configuration. 6. When you're finished, select **Next**. A split view displays the changes. 7. If you're satisfied with the changes, select **Save and Publish**. -Publishing the group configuration ensures that all instances within the config sync group are synchronized with the latest group configuration. This helps maintain consistency across all instances in the group, preventing configuration drift. +Publishing the group configuration ensures that all instances within the Config Sync Group are synchronized with the latest group configuration. This helps maintain consistency across all instances in the group, preventing configuration drift. 
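One rough way to double-check for drift from the instance side — our sketch, not an NGINX One feature — is to fingerprint the effective configuration on each group member and compare the values; in-sync members should match:

```shell
#!/bin/sh
# Sketch: fingerprint configuration text piped on stdin. On a live
# instance you would feed it the full effective configuration, e.g.:
#   nginx -T 2>/dev/null | config_hash
# Identical hashes across the members of a Config Sync Group suggest
# they are in sync; a mismatch points at configuration drift.
config_hash() {
  sha256sum | awk '{print $1}'
}
```

The console's **Config Sync Status** column remains the authoritative view; this is only a quick out-of-band spot check.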
-## Understanding config sync statuses +## Config Sync Group status The **Config Sync Status** column on the **Config Sync Groups** page provides insight into the synchronization state of your NGINX instances within each group. {{}} | **Status** | **Description** | |-----------------------|------------------------------------------------------------------------------------------------------------------------------------------------------| -| **In Sync** | All instances within the config sync group have configurations that match the group configuration. No action is required. | +| **In Sync** | All instances within the Config Sync Group have configurations that match the group configuration. No action is required. | | **Out of Sync** | At least one instance in the group has a configuration that differs from the group's configuration. You may need to review and resolve discrepancies to ensure consistency. | | **Sync in Progress** | An instance is currently being synchronized with the group's configuration. This status appears when an instance is moved to a new group or when a configuration is being applied. | | **Unknown** | The synchronization status of the instances in this group cannot be determined. This could be due to connectivity issues, instances being offline, or other factors. Investigating the cause of this status is recommended. | {{}} -Monitoring the **Config Sync Status** helps ensure that your configurations are consistently applied across all instances in a group, reducing the risk of configuration drift. +Monitor the **Config Sync Status** column. It can help you ensure that your configurations are consistently applied across all instances in a group. 
## See also -- [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) -- [View and edit NGINX configurations]({{< ref "/nginx-one/how-to/nginx-configs/view-edit-nginx-configurations.md" >}}) +- [Create and manage data plane keys]({{< ref "/nginx-one/connect-instances/create-manage-data-plane-keys.md" >}}) +- [Add an NGINX instance]({{< ref "/nginx-one/connect-instances/add-instance.md" >}}) diff --git a/content/nginx-one/how-to/nginx-configs/view-edit-nginx-configurations.md b/content/nginx-one/nginx-configs/view-edit-nginx-configurations.md similarity index 63% rename from content/nginx-one/how-to/nginx-configs/view-edit-nginx-configurations.md rename to content/nginx-one/nginx-configs/view-edit-nginx-configurations.md index 37d4fb6f5..074dbdbaa 100644 --- a/content/nginx-one/how-to/nginx-configs/view-edit-nginx-configurations.md +++ b/content/nginx-one/nginx-configs/view-edit-nginx-configurations.md @@ -1,8 +1,18 @@ --- # We use sentence case and present imperative tone +<<<<<<< HEAD +<<<<<<< HEAD +title: View and edit an NGINX instance +# Weights are assigned in increments of 100: determines sorting order +weight: 200 +======= title: View and edit NGINX configurations +======= +title: View and edit an NGINX instance +>>>>>>> 4da8aa7e (based on Jason's feedback) # Weights are assigned in increments of 100: determines sorting order weight: 300 +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) # Creates a table of contents and sidebar, useful for large documents toc: true # Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this @@ -12,6 +22,10 @@ product: NGINX One --- +<<<<<<< HEAD +<<<<<<< HEAD +This guide explains how to edit the configuration of an existing **Instance** in your NGINX One Console. +======= ## Overview This guide explains how to add **Instances** to your NGINX One Console.
@@ -23,6 +37,10 @@ Before you add **Instances** to NGINX One Console, ensure: - You have an NGINX One Console account with staged configuration permissions. Once you've registered your NGINX Instances with the F5 NGINX One Console, you can view and edit their NGINX configurations on the **Instances** details page. +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) +======= +This guide explains how to edit the configuration of an existing **Instance** in your NGINX One Console. +>>>>>>> 4da8aa7e (based on Jason's feedback) To view and edit an NGINX configuration, follow these steps: @@ -34,8 +52,12 @@ To view and edit an NGINX configuration, follow these steps: 6. When you are satisfied with the changes, select **Next**. 7. Compare and verify your changes before selecting **Save and Publish** to publish the edited configuration. -Alternatively, you can select **Save Changes As**. In the window that appears, you can set up this instance as a [**Staged Configuration**]({{< ref "/nginx-one/how-to/staged-configs/_index.md" >}}). +Alternatively, you can select **Save Changes As**. In the window that appears, you can set up this instance as a [**Staged Configuration**]({{< ref "/nginx-one/staged-configs/_index.md" >}}).
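For illustration, an edit you might publish through this flow could be as small as adding a health-check endpoint to a server block. This is a generic sketch, not an example from the guide:

```nginx
# Hypothetical edit: a lightweight health endpoint. After a change
# like this, select Next, review the split view, then Save and
# Publish (or Save Changes As for a Staged Configuration).
server {
    listen 80;

    location /healthz {
        default_type text/plain;
        return 200 "ok\n";
    }
}
```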
## See also -- [Manage Config Sync Groups]({{< ref "/nginx-one/how-to/config-sync-groups/manage-config-sync-groups.md" >}}) +<<<<<<< HEAD +- [Manage Config Sync Groups]({{< ref "/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md" >}}) +======= +- [Manage Config Sync Groups]({{< ref "/nginx-one/config-sync-groups/manage-config-sync-groups.md" >}}) +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) diff --git a/content/nginx-one/rbac/_index.md b/content/nginx-one/rbac/_index.md index a1f7050ff..cac8d28a1 100644 --- a/content/nginx-one/rbac/_index.md +++ b/content/nginx-one/rbac/_index.md @@ -1,6 +1,16 @@ --- -title: Role-based access control +<<<<<<< HEAD +<<<<<<< HEAD +title: Organize users with RBAC description: -weight: 300 +weight: 600 +======= +title: Organize your administrators with RBAC +======= +title: Organize administrators with RBAC +>>>>>>> 4da8aa7e (based on Jason's feedback) +description: +weight: 500 +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) url: /nginx-one/rbac --- diff --git a/content/nginx-one/rbac/overview.md b/content/nginx-one/rbac/overview.md index ccab68d4b..2bcdfc17b 100644 --- a/content/nginx-one/rbac/overview.md +++ b/content/nginx-one/rbac/overview.md @@ -1,5 +1,5 @@ --- -title: "Role-based access control overview" +title: "Learn about Role-based access control" weight: 400 toc: true type: reference diff --git a/content/nginx-one/rbac/rbac-api.md b/content/nginx-one/rbac/rbac-api.md index 82953365a..11e90cfc3 100644 --- a/content/nginx-one/rbac/rbac-api.md +++ b/content/nginx-one/rbac/rbac-api.md @@ -1,5 +1,5 @@ --- -title: "Custom roles and API groups" +title: "Set up custom roles with API groups" weight: 500 toc: true type: reference diff --git a/content/nginx-one/rbac/roles.md b/content/nginx-one/rbac/roles.md index 646f0d5cb..e2d33a15b 100644 --- a/content/nginx-one/rbac/roles.md +++ b/content/nginx-one/rbac/roles.md @@ -1,5 +1,5 @@ --- -title: "Default roles" +title: "Review default roles" weight: 500 toc: 
true type: reference diff --git a/content/nginx-one/secure-your-fleet/_index.md b/content/nginx-one/secure-your-fleet/_index.md new file mode 100644 index 000000000..d9fea82ff --- /dev/null +++ b/content/nginx-one/secure-your-fleet/_index.md @@ -0,0 +1,6 @@ +--- +title: Secure your fleet +description: +weight: 450 +url: /nginx-one/secure-your-fleet +--- diff --git a/content/nginx-one/secure-your-fleet/secure.md b/content/nginx-one/secure-your-fleet/secure.md new file mode 100644 index 000000000..638874258 --- /dev/null +++ b/content/nginx-one/secure-your-fleet/secure.md @@ -0,0 +1,53 @@ +--- +title: "Set up security alerts" +weight: 500 +toc: true +type: how-to +product: NGINX One +docs: DOCS-000 +--- + +The F5 Distributed Cloud generates alerts from all its services, including NGINX One. You can configure rules to send those alerts to a receiver of your choice. These instructions walk you through how to configure an email notification when we see new CVEs or detect security issues with your NGINX instances. + +This page describes basic steps to set up an email alert. For authoritative documentation, see +[Alerts - Email & SMS](https://docs.cloud.f5.com/docs-v2/shared-configuration/how-tos/alerting/alerts-email-sms). + +## Configure alerts to be sent to your email + +To configure security-related alerts, follow these steps: + +1. Navigate to the F5 Distributed Cloud Console at https://INSERT_YOUR_TENANT_NAME.console.ves.volterra.io. +1. Find **Audit Logs & Alerts** > **Alerts Management**. +1. Select **Add Alert Receiver**. +1. Configure the **Alert Receivers**: + 1. Enter the name of your choice. + 1. (Optional) Specify a label and description. +1. Under **Receiver**, select **Email** and enter your email address. +1. Select **Save and Exit**. +1. Your Email receiver should now appear on the list of Alert Receivers. +1. Under the Actions column, select **Verify Email**. +1. Select **Send email** to confirm. +1.
You should receive a verification code in the email provided. Copy that code. +1. Under the Actions column, select **Enter verification code**. +1. Paste the code and select **Verify receiver**. + +## Configure Alert Policy + +Next, configure the policy that identifies when you'll get an alert. + +1. Navigate to **Alerts Management > Alert Policies**. +1. Select **Add Alert Policy**. + 1. Enter the name of your choice. + 1. (Optional) Specify a label and description. +1. Under **Alert Receiver Configuration > Alert Receivers**, select the Alert Receiver you just created. +1. Under **Policy Rules**, select **Configure**. +1. Select **Add Item**. +1. Under **Select Alerts** (TBD) +1. Set the Action as **Send** and select **Apply**. + +Now set a second alert related to Common Vulnerabilities and Exposures (CVEs). + +1. Select **Add Item**. +1. Under **Select Alerts** (adding an additional Alert type for CVE). +1. Set the Action as **Send** and select **Apply**. +1. Select **Save and Exit**. diff --git a/content/nginx-one/staged-configs/_index.md b/content/nginx-one/staged-configs/_index.md new file mode 100644 index 000000000..3da38e3e6 --- /dev/null +++ b/content/nginx-one/staged-configs/_index.md @@ -0,0 +1,24 @@ +--- +description: +<<<<<<< HEAD +<<<<<<< HEAD +<<<<<<< HEAD +<<<<<<< HEAD +title: Draft new configurations +weight: 400 +url: /nginx-one/staged-configs +======= +title: Set up new instances +======= +title: Draft new instances +>>>>>>> 614bafed (more) +======= +title: Draft new instances (Staged Configuration) +>>>>>>> 4da8aa7e (based on Jason's feedback) +======= +title: Draft new instances (Staged Configs) +>>>>>>> 2fec8a44 (More) +weight: 200 +url: /nginx-one/how-to/staged-configs +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) +--- diff --git a/content/nginx-one/how-to/staged-configs/add-staged-config.md b/content/nginx-one/staged-configs/add-staged-config.md similarity index 94% rename from content/nginx-one/how-to/staged-configs/add-staged-config.md rename to
content/nginx-one/staged-configs/add-staged-config.md index e69c0da78..c46b120e5 100644 --- a/content/nginx-one/how-to/staged-configs/add-staged-config.md +++ b/content/nginx-one/staged-configs/add-staged-config.md @@ -6,10 +6,10 @@ weight: 100 # Creates a table of contents and sidebar, useful for large documents toc: true # Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this -nd-content-type: how-to +type: tutorial # Intended for internal catalogue and search, case sensitive: # Agent, N4Azure, NIC, NIM, NGF, NAP-DOS, NAP-WAF, NGINX One, NGINX+, Solutions, Unit -nd-product: NGINX One +product: NGINX One --- ## Overview @@ -33,7 +33,7 @@ You can add a Staged Configuration from: - An existing Config Sync Group - An existing Staged Configuration -To start the process from NGINX One Console, select **Manage > Staged Configurations**. Select **Add Staged Configuration**. +To start the process from NGINX One Console, select **Manage > Staged Configurations**. Select **Add Staged Configuration**. The following sections start from the **Add Staged Configuration** window that appears. 
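The console flow above also has an API counterpart (covered on the API page). As a sketch, here is what assembling such a request body might look like in Python — the payload shape (a name plus base64-encoded file contents) is an assumption for illustration, not the documented schema; check the API reference guide for the real one:

```python
import base64
import json

def build_staged_config_payload(name: str, files: dict[str, str]) -> dict:
    """Assemble an illustrative Staged Configuration request body.

    The shape here (name + list of files with base64-encoded contents)
    is a guess for demonstration; the NGINX One API reference is the
    source of truth for the actual schema.
    """
    return {
        "name": name,
        "files": [
            {
                "name": path,
                # Encode file contents so they survive JSON transport.
                "contents": base64.b64encode(contents.encode()).decode(),
            }
            for path, contents in files.items()
        ],
    }

payload = build_staged_config_payload(
    "demo-staged-config",
    {"/etc/nginx/nginx.conf": "events {}\nhttp { server { listen 80; } }\n"},
)
print(json.dumps(payload, indent=2))
```

The hypothetical `demo-staged-config` name and file contents are stand-ins; substitute your own configuration files.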
diff --git a/content/nginx-one/staged-configs/api-staged-config.md b/content/nginx-one/staged-configs/api-staged-config.md new file mode 100644 index 000000000..d95524b83 --- /dev/null +++ b/content/nginx-one/staged-configs/api-staged-config.md @@ -0,0 +1,20 @@ +--- +# We use sentence case and present imperative tone +title: Use the API to manage your Staged Configurations +# Weights are assigned in increments of 100: determines sorting order +weight: 300 +# Creates a table of contents and sidebar, useful for large documents +toc: true +# Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this +type: tutorial +# Intended for internal catalogue and search, case sensitive: +product: NGINX One +--- + +You can use the F5 NGINX One Console API to manage your Staged Configurations. With our API, you can: + +- [Create an NGINX Staged Configuration]({{< ref "/nginx-one/api/api-reference-guide/#operation/createStagedConfig" >}}) + - The details allow you to add existing configuration files. +- [Get a list of existing Staged Configurations]({{< ref "/nginx-one/api/api-reference-guide/#operation/listStagedConfigs" >}}) + - Be sure to record the `object_id` of your target Staged Configuration for your analysis report. 
+- [Get an analysis report for an existing Staged Configuration]({{< ref "/nginx-one/api/api-reference-guide/#operation/getStagedConfigReport" >}}) diff --git a/content/nginx-one/how-to/staged-configs/edit-staged-config.md b/content/nginx-one/staged-configs/edit-staged-config.md similarity index 100% rename from content/nginx-one/how-to/staged-configs/edit-staged-config.md rename to content/nginx-one/staged-configs/edit-staged-config.md diff --git a/content/nginx-one/how-to/staged-configs/import-export-staged-config.md b/content/nginx-one/staged-configs/import-export-staged-config.md similarity index 97% rename from content/nginx-one/how-to/staged-configs/import-export-staged-config.md rename to content/nginx-one/staged-configs/import-export-staged-config.md index 192fad4c4..68993ee99 100644 --- a/content/nginx-one/how-to/staged-configs/import-export-staged-config.md +++ b/content/nginx-one/staged-configs/import-export-staged-config.md @@ -26,7 +26,7 @@ Before you import or export a Staged Configuration to NGINX One Console, ensure: - You have an NGINX One Console account with staged configuration permissions. -You can also import, export, and manage multiple Staged Configurations through [the API]({{< ref "/nginx-one/how-to/staged-configs/api-staged-config.md" >}}). +You can also import, export, and manage multiple Staged Configurations through [the API]({{< ref "/nginx-one/staged-configs/api-staged-config.md" >}}). 
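Such API calls can be scripted with any HTTP client. A minimal Python sketch of assembling (without sending) an authenticated request — the base URL, endpoint path, and auth scheme below are placeholders, not documented values; take the real ones from the API reference guide:

```python
import urllib.request

# Placeholder base URL: substitute your tenant's console hostname and the
# real API path from the NGINX One API reference guide.
API_BASE = "https://example.console.ves.volterra.io/api"

def get_staged_config_request(object_id: str, token: str) -> urllib.request.Request:
    """Build (without sending) an authenticated GET for one Staged Configuration."""
    url = f"{API_BASE}/staged-configs/{object_id}"  # hypothetical path
    request = urllib.request.Request(url, method="GET")
    request.add_header("Authorization", f"APIToken {token}")
    return request

request = get_staged_config_request("abc123", "example-token")
print(request.get_method(), request.full_url)
```

Sending the request is then a matter of `urllib.request.urlopen(request)`; the `object_id` value `abc123` here is illustrative — use the one returned by the list endpoint.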
## Considerations diff --git a/content/nginx/admin-guide/installing-nginx/installing-nginx-plus-amazon-web-services.md b/content/nginx/admin-guide/installing-nginx/installing-nginx-plus-amazon-web-services.md index 88cd20853..66a814d97 100644 --- a/content/nginx/admin-guide/installing-nginx/installing-nginx-plus-amazon-web-services.md +++ b/content/nginx/admin-guide/installing-nginx/installing-nginx-plus-amazon-web-services.md @@ -30,8 +30,8 @@ To quickly set up an NGINX Plus environment on AWS: Click the **Continue to Subscribe** button to proceed to the **Launch on EC2** page. -3. Select the type of launch by clicking the appropriate tab (**1‑Click Launch***, **Manual Launch**, or **Service Catalog**). Choose the desired options for billing, instance size, and so on, and click the **Accept Software Terms…** button. -4. When configuring the firewall rules, add a rule to accept web traffic on TCP ports 80 and 443 (this happens automatically if you launch from the **1‑Click Launch** tab). +3. Select the type of launch by clicking the appropriate tab ({{}}**1-Click Launch**{{}}, {{}}**Manual Launch**{{}}, or {{}}**Service Catalog**{{}}). Choose the desired options for billing, instance size, and so on, and click the {{}}**Accept Software Terms…**{{}} button. +4. When configuring the firewall rules, add a rule to accept web traffic on TCP ports 80 and 443 (this happens automatically if you launch from the {{}}**1-Click Launch**{{}} tab). 5. As soon as the new EC2 instance launches, NGINX Plus starts automatically and serves a default **index.html** page. To view the page, use a web browser to access the public DNS name of the new instance. 
You can also check the status of the NGINX Plus server by logging into the EC2 instance and running this command: ```nginx diff --git a/content/nginx/admin-guide/installing-nginx/installing-nginx-plus.md b/content/nginx/admin-guide/installing-nginx/installing-nginx-plus.md index 3ca6da52a..aed4c97fe 100644 --- a/content/nginx/admin-guide/installing-nginx/installing-nginx-plus.md +++ b/content/nginx/admin-guide/installing-nginx/installing-nginx-plus.md @@ -171,17 +171,26 @@ NGINX Plus can be installed on the following versions of Debian or Ubuntu: - **For Debian**: ```shell - sudo apt update - sudo apt install apt-transport-https lsb-release ca-certificates wget gnupg2 debian-archive-keyring + sudo apt update && \ + sudo apt install apt-transport-https \ + lsb-release \ + ca-certificates \ + wget \ + gnupg2 \ + debian-archive-keyring ``` - **For Ubuntu**: ```shell - sudo apt update - sudo apt install apt-transport-https lsb-release ca-certificates wget gnupg2 ubuntu-keyring + sudo apt update && \ + sudo apt install apt-transport-https \ + lsb-release \ + ca-certificates \ + wget \ + gnupg2 \ + ubuntu-keyring ``` - 1. Download and add NGINX signing key: ```shell diff --git a/content/nginx/admin-guide/monitoring/new-relic-plugin.md b/content/nginx/admin-guide/monitoring/new-relic-plugin.md index b56831edb..e22e19f44 100644 --- a/content/nginx/admin-guide/monitoring/new-relic-plugin.md +++ b/content/nginx/admin-guide/monitoring/new-relic-plugin.md @@ -33,7 +33,7 @@ Download the [plug‑in and installation instructions](https://docs.newrelic.com ## Configuring the Plug‑In -The configuration file for the NGINX plug‑in is **/etc/nginx‑nr‑agent/nginx‑nr‑agent.ini**. The minimal configuration includes: +The configuration file for the NGINX plug‑in is {{}}**/etc/nginx-nr-agent/nginx-nr-agent.ini**{{}}. The minimal configuration includes: - Your New Relic license key in the `newrelic_license_key` statement in the `global` section. 
@@ -44,7 +44,7 @@ The configuration file for the NGINX plug‑in is **/etc/nginx‑nr‑ag You can include the optional `http_user` and `http_pass` statements to set HTTP basic authentication credentials in cases where the corresponding location is protected by the NGINX [auth_basic](https://nginx.org/en/docs/http/ngx_http_auth_basic_module.html#auth_basic) directive. -The default log file is **/var/log/nginx‑nr‑agent.log**. +The default log file is {{}}**/var/log/nginx-nr-agent.log**{{}}. ## Running the Plug‑In diff --git a/content/nginx/deployment-guides/amazon-web-services/high-availability-keepalived.md b/content/nginx/deployment-guides/amazon-web-services/high-availability-keepalived.md index a3023a49b..8df51d9ae 100644 --- a/content/nginx/deployment-guides/amazon-web-services/high-availability-keepalived.md +++ b/content/nginx/deployment-guides/amazon-web-services/high-availability-keepalived.md @@ -96,8 +96,8 @@ Allocate an Elastic IP address and remember its ID. For detailed instructions, s The NGINX Plus HA solution uses two scripts, which are invoked by `keepalived`: -- **nginx‑ha‑check** – Determines the health of NGINX Plus. -- **nginx‑ha‑notify** – Moves the Elastic IP address when a state transition happens, for example when the backup instance becomes the primary. +- {{}}**nginx-ha-check**{{}} – Determines the health of NGINX Plus. +- {{}}**nginx-ha-notify**{{}} – Moves the Elastic IP address when a state transition happens, for example when the backup instance becomes the primary. 1. Create a directory for the scripts, if it doesn’t already exist. @@ -121,7 +121,7 @@ The NGINX Plus HA solution uses two scripts, which are invoked by `keepalived`: There are two configuration files for the HA solution: - **keepalived.conf** – The main configuration file for `keepalived`, slightly different for each NGINX Plus instance. -- **nginx‑ha‑notify** – The script you downloaded in [Step 4](#ha-aws_ha-scripts), with several user‑defined variables. 
+- {{}}**nginx-ha-notify**{{}} – The script you downloaded in [Step 4](#ha-aws_ha-scripts), with several user‑defined variables. ### Creating keepalived.conf @@ -158,8 +158,8 @@ You must change values for the following configuration keywords. As you do so, a - `script` in the `chk_nginx_service` block – The script that sends health checks to NGINX Plus. - - On Ubuntu systems, **/usr/lib/keepalived/nginx‑ha‑check** - - On CentOS systems, **/usr/libexec/keepalived/nginx‑ha‑check** + - On Ubuntu systems, {{}}**/usr/lib/keepalived/nginx-ha-check**{{}} + - On CentOS systems, {{}}**/usr/libexec/keepalived/nginx-ha-check**{{}} - `priority` – The value that controls which instance becomes primary, with a higher value meaning a higher priority. Use `101` for the primary instance and `100` for the backup. @@ -171,13 +171,13 @@ You must change values for the following configuration keywords. As you do so, a - `notify` – The script that is invoked during a state transition. - - On Ubuntu systems, **/usr/lib/keepalived/nginx‑ha‑notify** - - On CentOS systems, **/usr/libexec/keepalived/nginx‑ha‑notify** + - On Ubuntu systems, {{}}**/usr/lib/keepalived/nginx-ha-notify**{{}} + - On CentOS systems, {{}}**/usr/libexec/keepalived/nginx-ha-notify**{{}} ### Creating nginx-ha-notify -Modify the user‑defined variables section of the **nginx‑ha‑notify** script, replacing each `` placeholder with the value specified in the list below: +Modify the user‑defined variables section of the {{}}**nginx-ha-notify**{{}} script, replacing each `` placeholder with the value specified in the list below: ```none export AWS_ACCESS_KEY_ID= @@ -223,7 +223,7 @@ Check the state on the backup instance, confirming that it has transitioned to ` ## Troubleshooting -If the solution doesn’t work as expected, check the `keepalived` logs, which are written to **/var/log/syslog**. 
Also, you can manually run the commands that invoke the `awscli` utility in the **nginx‑ha‑notify** script to check that the utility is working properly. +If the solution doesn’t work as expected, check the `keepalived` logs, which are written to **/var/log/syslog**. Also, you can manually run the commands that invoke the `awscli` utility in the {{}}**nginx-ha-notify**{{}} script to check that the utility is working properly. ## Caveats diff --git a/content/nginx/deployment-guides/amazon-web-services/high-availability-network-load-balancer.md b/content/nginx/deployment-guides/amazon-web-services/high-availability-network-load-balancer.md index 6c7a451a9..c37434906 100644 --- a/content/nginx/deployment-guides/amazon-web-services/high-availability-network-load-balancer.md +++ b/content/nginx/deployment-guides/amazon-web-services/high-availability-network-load-balancer.md @@ -291,7 +291,7 @@ Configure NGINX Plus instances as load balancers. These distribute requests to Use the *Step‑by‑step* instructions in our deployment guide, [Setting Up an NGINX Demo Environment]({{< ref "/nginx/deployment-guides/setting-up-nginx-demo-environment.md" >}}). -Repeat the instructions on both **ngx‑plus‑1** and **ngx‑plus‑2**. +Repeat the instructions on both {{}}**ngx-plus-1**{{}} and {{}}**ngx-plus-2**{{}}. ### Automate instance setup with Packer and Terraform @@ -317,7 +317,7 @@ To run the scripts, follow these instructions: 3. 
Set your AWS credentials in the Packer and Terraform scripts: - - For Packer, set your credentials in the `variables` block in both **packer/ngx‑oss/packer.json** and **packer/ngx‑plus/packer.json**: + - For Packer, set your credentials in the `variables` block in both {{}}**packer/ngx-oss/packer.json**{{}} and {{}}**packer/ngx-plus/packer.json**{{}}: ```none "variables": { diff --git a/content/nginx/deployment-guides/amazon-web-services/ingress-controller-elastic-kubernetes-services.md b/content/nginx/deployment-guides/amazon-web-services/ingress-controller-elastic-kubernetes-services.md index e1c9811b3..09f4df0a0 100644 --- a/content/nginx/deployment-guides/amazon-web-services/ingress-controller-elastic-kubernetes-services.md +++ b/content/nginx/deployment-guides/amazon-web-services/ingress-controller-elastic-kubernetes-services.md @@ -43,14 +43,14 @@ This guide covers the `eksctl` command as it is the simplest option. 1. Follow the instructions in the [eksctl.io documentation](https://eksctl.io/installation/) to install or update the `eksctl` command. -2. Create an Amazon EKS cluster by following the instructions in the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html). Select the **Managed nodes – Linux** option for each step. Note that the `eksctl create cluster` command in the first step can take ten minutes or more. +2. Create an Amazon EKS cluster by following the instructions in the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html). Select the {{}}**Managed nodes – Linux**{{}} option for each step. Note that the `eksctl create cluster` command in the first step can take ten minutes or more. ## Push the NGINX Plus Ingress Controller Image to AWS ECR This step is only required if you do not plan to use the prebuilt NGINX Open Source image. -1. 
Use the [AWS documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) to create a repository in the Amazon Elastic Container Registry (ECR). In Step 4 of the AWS instructions, name the repository **nginx‑plus‑ic** as that is what we use in this guide. +1. Use the [AWS documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) to create a repository in the Amazon Elastic Container Registry (ECR). In Step 4 of the AWS instructions, name the repository {{}}**nginx-plus-ic**{{}} as that is what we use in this guide. 2. Run the following AWS CLI command. It generates an auth token for your AWS ECR registry, then pipes it into the `docker login` command. This lets AWS ECR authenticate and authorize the upcoming Docker requests. For details about the command, see the [AWS documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/registry_auth.html). diff --git a/content/nginx/deployment-guides/amazon-web-services/route-53-global-server-load-balancing.md b/content/nginx/deployment-guides/amazon-web-services/route-53-global-server-load-balancing.md index 32038fedb..3cf5c6530 100644 --- a/content/nginx/deployment-guides/amazon-web-services/route-53-global-server-load-balancing.md +++ b/content/nginx/deployment-guides/amazon-web-services/route-53-global-server-load-balancing.md @@ -40,7 +40,7 @@ The setup for global server load balancing (GSLB) in this guide combines Amazon Diagram showing a topology for global server load balancing (GSLB). Eight backend servers, four in each of two regions, host the content for a domain. Two NGINX Plus load balancers in each region route traffic to the backend servers. For each client requesting DNS information for the domain, Amazon Route 53 provides the DNS record for the region closest to the client. 
-Route 53 is a Domain Name System (DNS) service that performs global server load balancing by routing each request to the AWS region closest to the requester's location. This guide uses two regions: **US West (Oregon)** and **US East (N. Virginia)**. +Route 53 is a Domain Name System (DNS) service that performs global server load balancing by routing each request to the AWS region closest to the requester's location. This guide uses two regions: {{}}**US West (Oregon)**{{}} and {{}}**US East (N. Virginia)**{{}}. In each region, two or more NGINX Plus load balancers are deployed in a high‑availability (HA) configuration. In this guide, there are two NGINX Plus load balancer instances per region. You can also use NGINX Open Source for this purpose, but it lacks the [application health checks](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/) that make for more precise error detection. For simplicity, we'll refer to NGINX Plus load balancers throughout this guide, noting when features specific to NGINX Plus are used. @@ -79,7 +79,7 @@ Create a _hosted zone_, which basically involves designating a domain name to be 1. Log in to the [AWS Management Console](https://console.aws.amazon.com/) (**console.aws.amazon.com/**). -2. Access the Route 53 dashboard page by clicking **Services** in the top AWS navigation bar, mousing over **Networking** in the **All AWS Services** column and then clicking **Route 53**. +2. Access the Route 53 dashboard page by clicking **Services** in the top AWS navigation bar, mousing over **Networking** in the {{}}**All AWS Services**{{}} column and then clicking **Route 53**. 
Screenshot showing how to access the Amazon Route 53 dashboard to configure global load balancing (GLB) with NGINX Plus @@ -87,7 +87,7 @@ Create a _hosted zone_, which basically involves designating a domain name to be Screenshot showing the Route 53 Registered domains tab during configuration of NGINX GSLB (global server load balancing) - If you see the Route 53 home page instead, access the **Registered domains** tab by clicking the  Get started now  button under **Domain registration**. + If you see the Route 53 home page instead, access the **Registered domains** tab by clicking the  Get started now  button under {{}}**Domain registration**{{}}. Screenshot showing the Amazon Route 53 homepage for a first-time Route 53 user during configuration of AWS GSLB (global server load balancing) with NGINX Plus @@ -124,19 +124,19 @@ Create records sets for your domain: 4. Fill in the fields in the **Create Record Set** column: - **Name** – You can leave this field blank, but for this guide we are setting the name to **www.nginxroute53.com**. - - **Type** – **A – IPv4 address**. + - **Type** – {{}}**A – IPv4 address**{{}}. - **Alias** – **No**. - **TTL (Seconds)** – **60**. **Note**: Reducing TTL from the default of **300** in this way can decrease the time that it takes for Route 53 to fail over when both NGINX Plus load balancers in the region are down, but there is always a delay of about two minutes regardless of the TTL setting. This is a built‑in limitation of Route 53. - - **Value** – [Elastic IP addresses](#elastic-ip) of the NGINX Plus load balancers in the first region [in this guide, **US West (Oregon)**]. + - **Value** – [Elastic IP addresses](#elastic-ip) of the NGINX Plus load balancers in the first region [in this guide, {{}}**US West (Oregon)**{{}}]. - **Routing Policy** – **Latency**. 5. A new area opens when you select **Latency**. 
Fill in the fields as indicated (see the figure below): - - **Region** – Region to which the load balancers belong (in this guide, **us‑west‑2**). - - **Set ID** – Identifier for this group of load balancers (in this guide, **US West LBs**). + - **Region** – Region to which the load balancers belong (in this guide, {{}}**us-west-2**{{}}). + - **Set ID** – Identifier for this group of load balancers (in this guide, {{}}**US West LBs**{{}}). - **Associate with Health Check** – **No**. When you complete all fields, the tab looks like this: @@ -145,7 +145,7 @@ Create records sets for your domain: 6. Click the  Create  button. -7. Repeat Steps 3 through 6 for the load balancers in the other region [in this guide, **US East (N. Virginia)**]. +7. Repeat Steps 3 through 6 for the load balancers in the other region [in this guide, {{}}**US East (N. Virginia)**{{}}]. You can now test your website. Insert your domain name into a browser and see that your request is being load balanced between servers based on your location. @@ -172,21 +172,21 @@ We create health checks both for each NGINX Plus load balancer individually and Screenshot of Amazon Route 53 welcome screen seen by first-time user of Route 53 during configuration of global server load balancing (GSLB) with NGINX Plus -2. Click the  Create health check  button. In the **Configure health check** form that opens, specify the following values, then click the  Next  button. +2. Click the  Create health check  button. In the {{}}**Configure health check**{{}} form that opens, specify the following values, then click the  Next  button. 
- **IP address** – The [elastic IP address](#elastic-ip) of the NGINX Plus load balancer. - **Port** – The port advertised to clients for your domain or web service (the default is **80**). Screenshot of Amazon Route 53 interface for configuring health checks, during configuration of AWS global load balancing (GLB) with NGINX Plus -3. On the **Get notified when health check fails** screen that opens, set the **Create alarm** radio button to **Yes** or **No** as appropriate, then click the  Create health check  button. +3. On the {{}}**Get notified when health check fails**{{}} screen that opens, set the **Create alarm** radio button to **Yes** or **No** as appropriate, then click the  Create health check  button. Screenshot of Route 53 configuration screen for enabling notifications of failed health checks, during configuration of Route 53 global load balancing (GLB) with NGINX Plus -4. Repeat Steps 2 and 3 for your other NGINX Plus load balancers (in this guide, **US West LB 2**, **US East LB 1**, and **US East LB 2**). +4. Repeat Steps 2 and 3 for your other NGINX Plus load balancers (in this guide, {{}}**US West LB 2**{{}}, {{}}**US East LB 1**{{}}, and {{}}**US East LB 2**{{}}). 5. Proceed to the next section to configure health checks for the load balancer pairs. @@ -195,18 +195,18 @@ We create health checks both for each NGINX Plus load balancer individually and 1. Click the  Create health check  button. -2. In the **Configure health check** form that opens, specify the following values, then click the  Next  button. +2. In the {{}}**Configure health check**{{}} form that opens, specify the following values, then click the  Next  button. - - **Name** – Identifier for the pair of NGINX Plus load balancers in the first region, for example **US West LBs**. - - **What to monitor** – **Status of other health checks **. + - **Name** – Identifier for the pair of NGINX Plus load balancers in the first region, for example {{}}**US West LBs**{{}}. 
+ - **What to monitor** – {{}}**Status of other health checks**{{}}. - **Health checks to monitor** – The health checks of the two US West load balancers (add them one after the other by clicking in the box and choosing them from the drop‑down menu as shown). - **Report healthy when** – **at least 1 of 2 selected health checks are healthy** (the choices in this field are obscured in the screenshot by the drop‑down menu). Screenshot of Amazon Route 53 interface for configuring a health check of combined other health checks, during configuration of global server load balancing (GSLB) with NGINX Plus -3. On the **Get notified when health check fails** screen that opens, set the **Create alarm** radio button as appropriate (see Step 5 in the previous section), then click the  Create health check  button. +3. On the {{}}**Get notified when health check fails**{{}} screen that opens, set the **Create alarm** radio button as appropriate (see Step 5 in the previous section), then click the  Create health check  button. -4. Repeat Steps 1 through 3 for the paired load balancers in the other region [in this guide, **US East (N. Virginia)**]. +4. Repeat Steps 1 through 3 for the paired load balancers in the other region [in this guide, {{}}**US East (N. Virginia)**{{}}]. When you have finished configuring all six health checks, the **Health checks** tab looks like this: @@ -223,13 +223,13 @@ When you have finished configuring all six health checks, the **Health checks** The tab changes to display the record sets for the domain. -3. In the list of record sets that opens, click the row for the record set belonging to your first region [in this guide, **US West (Oregon)**]. The Edit Record Set column opens on the right side of the tab. +3. In the list of record sets that opens, click the row for the record set belonging to your first region [in this guide, {{}}**US West (Oregon)**{{}}]. The Edit Record Set column opens on the right side of the tab. 
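The record-set edits described in this section can also be scripted with the AWS CLI's `aws route53 change-resource-record-sets` operation. A sketch of the change batch for the US West latency record — the IP addresses (RFC 5737 documentation addresses) and health check ID are placeholders, so substitute your own Elastic IPs and the ID of the paired health check:

```json
{
  "Comment": "Latency record for the US West NGINX Plus pair",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.nginxroute53.com",
        "Type": "A",
        "SetIdentifier": "US West LBs",
        "Region": "us-west-2",
        "TTL": 60,
        "HealthCheckId": "REPLACE-WITH-HEALTH-CHECK-ID",
        "ResourceRecords": [
          { "Value": "192.0.2.10" },
          { "Value": "192.0.2.11" }
        ]
      }
    }
  ]
}
```

Pass this file to the CLI with `--change-batch file://change-batch.json` along with `--hosted-zone-id` for your zone.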
Screenshot of interface for editing Route 53 record sets during configuration of global server load balancing (GSLB) with NGINX Plus 4. Change the **Associate with Health Check** radio button to **Yes**. -5. In the **Health Check to Associate** field, select the paired health check for your first region (in this guide, **US West LBs**). +5. In the **Health Check to Associate** field, select the paired health check for your first region (in this guide, {{}}**US West LBs**{{}}). 6. Click the  Save Record Set  button. @@ -242,7 +242,7 @@ These instructions assume that you have configured NGINX Plus on two EC2 instan **Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command. -1. Connect to the **US West LB 1** instance. For instructions, see Connecting to an EC2 Instance. +1. Connect to the {{}}**US West LB 1**{{}} instance. For instructions, see Connecting to an EC2 Instance. 2. Change directory to **/etc/nginx/conf.d**. @@ -250,7 +250,7 @@ These instructions assume that you have configured NGINX Plus on two EC2 instan cd /etc/nginx/conf.d ``` -3. Edit the **west‑lb1.conf** file and add the **@healthcheck** location to set up health checks. +3. Edit the {{}}**west-lb1.conf**{{}} file and add the **@healthcheck** location to set up health checks. ```nginx upstream backend-servers { @@ -282,9 +282,9 @@ These instructions assume that you have configured NGINX Plus on two EC2 instan nginx -s reload ``` -5. Repeat Steps 1 through 4 for the other three load balancers (**US West LB 2**, **US East LB 1**, and **US East LB2**). +5. Repeat Steps 1 through 4 for the other three load balancers ({{}}**US West LB 2**{{}}, {{}}**US East LB 1**{{}}, and {{}}**US East LB 2**{{}}). - In Step 3, change the filename as appropriate (**west‑lb2.conf**, **east‑lb1.conf**, and **east‑lb2.conf**). 
In the **east‑lb1.conf** and **east‑lb2.conf** files, the `server` directives specify the public DNS names of Backup 3 and Backup 4. + In Step 3, change the filename as appropriate ({{}}**west-lb2.conf**{{}}, {{}}**east-lb1.conf**{{}}, and {{}}**east-lb2.conf**{{}}). In the {{}}**east-lb1.conf**{{}} and {{}}**east-lb2.conf**{{}} files, the `server` directives specify the public DNS names of Backend 3 and Backend 4. ## Appendix @@ -307,31 +307,31 @@ Step‑by‑step instructions for creating EC2 instances and installing NGINX so Assign the following names to the instances, and then install the indicated NGINX software. -- In the first region, which is **US West (Oregon)** in this guide: +- In the first region, which is {{}}**US West (Oregon)**{{}} in this guide: - Two load balancer instances running NGINX Plus: - - **US West LB 1** - - **US West LB 2** + - {{}}**US West LB 1**{{}} + - {{}}**US West LB 2**{{}} - Two backend instances running NGINX Open Source: - * **Backend 1** - - **Backend 2** + - {{}}**Backend 1**{{}} + - {{}}**Backend 2**{{}} -- In the second region, which is **US East (N. Virginia)** in this guide: +- In the second region, which is {{}}**US East (N. Virginia)**{{}} in this guide: - Two load balancer instances running NGINX Plus: - - **US East LB 1** - - **US East LB 2** + - {{}}**US East LB 1**{{}} + - {{}}**US East LB 2**{{}} - Two backend instances running NGINX Open Source: - * **Backend 3** - - **Backend 4** + - {{}}**Backend 3**{{}} + - {{}}**Backend 4**{{}} -Here's the **Instances** tab after we create the four instances in the **N. Virginia** region. +Here's the **Instances** tab after we create the four instances in the {{}}**N. Virginia**{{}} region. 
Screenshot showing newly created EC2 instances in one of two regions, which is a prerequisite to configuring AWS GSLB (global server load balancing) with NGINX Plus @@ -366,7 +366,7 @@ After you complete the instructions on all instances, the list for a region (her ### Configuring NGINX Open Source on the Backend Servers -Perform these steps on all four backend servers: **Backend 1**, **Backend 2**, **Backend 3**, and **Backend 4**. In Step 3, substitute the appropriate name for `Backend X` in the **index.html** file. +Perform these steps on all four backend servers: {{}}**Backend 1**{{}}, {{}}**Backend 2**{{}}, {{}}**Backend 3**{{}}, and {{}}**Backend 4**{{}}. In Step 3, substitute the appropriate name for `Backend X` in the **index.html** file. **Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command. @@ -421,7 +421,7 @@ Perform these steps on all four backend servers: **Backend 1**, **Backend&n ### Configuring NGINX Plus on the Load Balancers -Perform these steps on all four backend servers: **US West LB 1**, **US West LB 2**, **US East LB 1**, and **US West LB 2**. +Perform these steps on all four load balancers: {{}}**US West LB 1**{{}}, {{}}**US West LB 2**{{}}, {{}}**US East LB 1**{{}}, and {{}}**US East LB 2**{{}}. **Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command. @@ -439,10 +439,10 @@ Perform these steps on all four backend servers: **US West LB 1** 4. Create a new file containing the following text, which configures load balancing of the two backend servers in the relevant region. 
The filename on each instance is: - - For **US West LB 1** – **west‑lb1.conf** - - For **US West LB 2** – **west‑lb2.conf** - - For **US East LB 1** – **east‑lb1.conf** - - For **US West LB 2** – **east‑lb2.conf** + - For {{}}**US West LB 1**{{}} – {{}}**west-lb1.conf**{{}} + - For {{}}**US West LB 2**{{}} – {{}}**west-lb2.conf**{{}} + - For {{}}**US East LB 1**{{}} – {{}}**east-lb1.conf**{{}} + - For {{}}**US East LB 2**{{}} – {{}}**east-lb2.conf**{{}} In the `server` directives in the `upstream` block, substitute the public DNS names of the backend instances in the region; to learn them, see the **Instances** tab in the EC2 Dashboard. diff --git a/content/nginx/deployment-guides/global-server-load-balancing/ns1-global-server-load-balancing.md b/content/nginx/deployment-guides/global-server-load-balancing/ns1-global-server-load-balancing.md index 8e425506e..8e14e22e7 100644 --- a/content/nginx/deployment-guides/global-server-load-balancing/ns1-global-server-load-balancing.md +++ b/content/nginx/deployment-guides/global-server-load-balancing/ns1-global-server-load-balancing.md @@ -93,7 +93,7 @@ The solution functions alongside other NS1 capabilities, such as geo‑proximal - **Canadian province(s)** – Two‑letter codes for Canadian provinces - **Country/countries** – Two‑letter codes for nations and territories - - **Geographic region(s)** – Identifiers like **US‑WEST** and **ASIAPAC** + - **Geographic region(s)** – Identifiers like {{}}**US-WEST**{{}} and **ASIAPAC** - **ISO region code** – Identification codes for nations and territories as defined in [ISO 3166](https://www.iso.org/iso-3166-country-codes.html) - **Latitude** – Degrees, minutes, and seconds of latitude (northern or southern hemisphere) - **Longitude** – Degrees, minutes, and seconds of longitude (eastern or western hemisphere) @@ -114,8 +114,8 @@ The solution functions alongside other NS1 capabilities, such as geo‑proximal 12. 
In the **Add Filters** window that pops up, click the plus sign (+) on the button for each filter you want to apply. In this guide, we're configuring the filters in this order: - **Up** in the ** HEALTHCHECKS ** section - - **Geotarget Country** in the ** GEOGRAPHIC ** section - - **Select First N** in the ** TRAFFIC MANAGEMENT ** section + - {{}}**Geotarget Country**{{}} in the **GEOGRAPHIC** section + - {{}}**Select First N**{{}} in the {{}}**TRAFFIC MANAGEMENT**{{}} section Click the  Save Filter Chain  button. @@ -128,17 +128,17 @@ In this section we install and configure the NS1 agent on the same hosts as our 1. Follow the instructions in the [NS1 documentation](https://help.ns1.com/hc/en-us/articles/360020474154) to set up and connect a separate data feed for each of the three NGINX Plus instances, which NS1 calls _answers_. - On the first page (**Configure a new data source from NSONE Data Feed API v1**) specify a name for the _data source_, which is the administrative container for the data feeds you will be creating. Use the same name each of the three times you go through the instructions. We're naming the data source **NGINX‑GSLB**. + On the first page (**Configure a new data source from NSONE Data Feed API v1**) specify a name for the _data source_, which is the administrative container for the data feeds you will be creating. Use the same name each of the three times you go through the instructions. We're naming the data source {{}}**NGINX-GSLB**{{}}. On the next page (**Create Feed from NSONE Data Feed API v1**), create a data feed for the instance. Because the **Name** field is just for internal use, any value is fine. The value in the **Label** field is used in the YAML configuration file for the instance (see Step 4 below). 
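For context: the NS1 agent described in this section reads per-instance metrics (such as active connections) from the NGINX Plus REST API and pushes them into these data feeds, so each NGINX Plus instance needs its API exposed to the agent. The following is a minimal, hedged sketch; the listen port and paths are illustrative assumptions, not values taken from this guide:

```nginx
# Sketch: expose the NGINX Plus API so a local metrics agent can poll it.
server {
    listen 8080;

    location /api {
        api write=off;   # read-only access is enough for metrics collection
    }
}
```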
We're specifying labels that indicate the country (using the ISO 3166 codes) in which the instance is running: - - **us‑nginxgslb‑datafeed** for instance 1 in the US - - **de‑nginxgslb‑datafeed** for instance 2 in Germany - - **sg‑nginxgslb‑datafeed** for instance 3 in Singapore + - {{}}**us-nginxgslb-datafeed**{{}} for instance 1 in the US + - {{}}**de-nginxgslb-datafeed**{{}} for instance 2 in Germany + - {{}}**sg-nginxgslb-datafeed**{{}} for instance 3 in Singapore After creating the three feeds, note the value in the **Feeds URL** field on the  INTEGRATIONS  tab. The final element of the URL is the ```` you will specify in the YAML configuration file in Step 4. In the third screenshot in the [NS1 documentation](https://help.ns1.com/hc/en-us/articles/360020474154), for example, it is **e566332c5d22c6b66aeaa8837eae90ac**. -2. Follow the instructions in the [NS1 documentation](https://help.ns1.com/hc/en-us/articles/360017341694-Creating-managing-API-keys) to create an NS1 API key for the agent, if you have not already. (To access **Account Settings** in Step 1, click your username in the upper right corner of the NS1 title bar.) We're naming the app **NGINX‑GSLB**. Make note of the key value – you'll specify it as ```` in the YAML configuration file in Step 4. To see the actual hexadecimal value, click on the circled letter **i** in the **API Key** field. +2. Follow the instructions in the [NS1 documentation](https://help.ns1.com/hc/en-us/articles/360017341694-Creating-managing-API-keys) to create an NS1 API key for the agent, if you have not already. (To access **Account Settings** in Step 1, click your username in the upper right corner of the NS1 title bar.) We're naming the app {{}}**NGINX-GSLB**{{}}. Make note of the key value – you'll specify it as ```` in the YAML configuration file in Step 4. To see the actual hexadecimal value, click on the circled letter **i** in the **API Key** field. 3. 
On each NGINX Plus host, clone the [GitHub repo](https://github.com/nginxinc/nginx-ns1-gslb) for the NS1 agent. @@ -349,7 +349,7 @@ First we perform these steps to create the shed filter: Screenshot of NS1 GUI: clicking Shed Load button on Add Filters page -3. The **Shed Load** filter is added as the fourth (lowest) box in the **Active Filters** section. Move it to be third by clicking and dragging it above the **Select First N** box. +3. The **Shed Load** filter is added as the fourth (lowest) box in the **Active Filters** section. Move it to be third by clicking and dragging it above the {{}}**Select First N**{{}} box. 4. Click the  Save Filter Chain  button. @@ -363,7 +363,7 @@ First we perform these steps to create the shed filter: 7. In the **Answer Metadata** window that opens, set values for the following metadata. In each case, click the icon in the  FEED  column of the metadata's row, then select or enter the indicated value in the  AVAILABLE  column. (For testing purposes, we're setting very small values for the watermarks so that the threshold is exceeded very quickly.) - - **Active connections** – **us‑nginxgslb‑datafeed** + - **Active connections** – {{}}**us-nginxgslb-datafeed**{{}} - **High watermark** – **5** - **Low watermark** – **2** diff --git a/content/nginx/deployment-guides/google-cloud-platform/high-availability-all-active.md b/content/nginx/deployment-guides/google-cloud-platform/high-availability-all-active.md index cf9e705b0..f44b95e75 100644 --- a/content/nginx/deployment-guides/google-cloud-platform/high-availability-all-active.md +++ b/content/nginx/deployment-guides/google-cloud-platform/high-availability-all-active.md @@ -15,7 +15,7 @@ This guide explains how to deploy F5 NGINX Plus in a high-availability configura **Notes:** - The GCE environment changes constantly. This could include names and arrangements of GUI elements. This guide was accurate when published. But, some GCE GUI elements might have changed over time. 
Use this guide as a reference and adapt to the current GCE working environment. -- The configuration described in this guide allows anyone from a public IP address to access the NGINX Plus instances. While this works in common scenarios in a test environment, we do not recommend it in production. Block external HTTP/HTTPS access to **app‑1** and **app‑2** instances to external IP address before production deployment. Alternatively, remove the external IP addresses for all application instances, so they're accessible only on the internal GCE network. +- The configuration described in this guide allows anyone from a public IP address to access the NGINX Plus instances. While this works in common scenarios in a test environment, we do not recommend it in production. Block external HTTP/HTTPS access to the external IP addresses of the {{}}**app-1**{{}} and {{}}**app-2**{{}} instances before production deployment. Alternatively, remove the external IP addresses for all application instances, so they're accessible only on the internal GCE network. @@ -38,7 +38,7 @@ The GCE network LB assigns each new client to a specific NGINX Plus LB. This ass NGINX Plus LB uses the round-robin algorithm to forward requests to specific app instances. It also adds a session cookie. It keeps future requests from the same client on the same app instance as long as it's running. -This deployment guide uses two groups of app instances: – **app‑1** and **app‑2**. It demonstrates [load balancing](https://www.nginx.com/products/nginx/load-balancing/) between different app types. But both groups have the same app configurations. +This deployment guide uses two groups of app instances: {{}}**app-1**{{}} and {{}}**app-2**{{}}. It demonstrates [load balancing](https://www.nginx.com/products/nginx/load-balancing/) between different app types. But both groups have the same app configurations. You can adapt the deployment to distribute unique connections to different groups of app instances. 
This can be done by creating discrete upstream blocks and routing content based on the URI. @@ -69,17 +69,17 @@ Create a new GCE project to host the all‑active NGINX Plus deployment. 1. Log into the [GCP Console](http://console.cloud.google.com) at **console.cloud.google.com**. -2. The GCP **Home > Dashboard** tab opens. Its contents depend on whether you have any existing projects. +2. The GCP {{}}**Home > Dashboard**{{}} tab opens. Its contents depend on whether you have any existing projects. - If there are no existing projects, click the  Create a project  button. Screenshot of the Google Cloud Platform dashboard that appears when there are no existing projects (creating a project is the first step in configuring NGINX Plus as the Google Cloud load balancer) - - If there are existing projects, the name of one of them appears in the upper left of the blue header bar (in the screenshot, it's  My Test Project ). Click the project name and select **Create project** from the menu that opens. + - If there are existing projects, the name of one of them appears in the upper left of the blue header bar (in the screenshot, it's  My Test Project ). Click the project name and select {{}}**Create project**{{}} from the menu that opens. Screenshot of the Google Cloud Platform page that appears when other projects already exist (creating a project is the first step in configuring NGINX Plus as the Google load balancer) -3. Type your project name in the **New Project** window that pops up, then click CREATE. We're naming the project **NGINX Plus All‑Active‑LB**. +3. Type your project name in the {{}}**New Project**{{}} window that pops up, then click CREATE. We're naming the project {{}}**NGINX Plus All-Active-LB**{{}}. 
Screenshot of the New Project pop-up window for naming a new project on the Google Cloud Platform, which is the first step in configuring NGINX Plus as the Google load balancer @@ -87,24 +87,24 @@ Create a new GCE project to host the all‑active NGINX Plus deployment. Create firewall rules that allow access to the HTTP and HTTPS ports on your GCE instances. You'll attach the rules to all the instances you create for the deployment. -1. Navigate to the **Networking > Firewall rules** tab and click  +  CREATE FIREWALL RULE. (The screenshot shows the default rules provided by GCE.) +1. Navigate to the {{}}**Networking > Firewall rules**{{}} tab and click  +  CREATE FIREWALL RULE. (The screenshot shows the default rules provided by GCE.) Screenshot of the Google Cloud Platform page for defining new firewall rules; when configuring NGINX Plus as the Google Cloud load balancer, we open ports 80, 443, and 8080 for it. -2. Fill in the fields on the **Create a firewall rule** screen that opens: +2. Fill in the fields on the {{}}**Create a firewall rule**{{}} screen that opens: - - **Name** – **nginx‑plus‑http‑fw‑rule** + - **Name** – {{}}**nginx-plus-http-fw-rule**{{}} - **Description** – **Allow access to ports 80, 8080, and 443 on all NGINX Plus instances** - - **Source filter** – On the drop-down menu, select either **Allow from any source (0.0.0.0/0)**, or **IP range** if you want to restrict access to users on your private network. In the second case, fill in the **Source IP ranges** field that opens. In the screenshot, we are allowing unrestricted access. - - **Allowed protocols and ports** – **tcp:80; tcp:8080; tcp:443** + - {{}}**Source filter**{{}} – On the drop-down menu, select either **Allow from any source (0.0.0.0/0)**, or {{}}**IP range**{{}} if you want to restrict access to users on your private network. In the second case, fill in the {{}}**Source IP ranges**{{}} field that opens. In the screenshot, we are allowing unrestricted access. 
+ - {{}}**Allowed protocols and ports**{{}} – {{}}**tcp:80; tcp:8080; tcp:443**{{}} **Note:** As noted in the introduction, allowing access from any public IP address is appropriate only in a test environment. Before deploying the architecture in production, create a firewall rule. Use this rule to block access to the external IP address for your application instances. Alternatively, you can disable external IP addresses for the instances. This limits access only to the internal GCE network. - - **Target tags** – **nginx‑plus‑http‑fw‑rule** + - {{}}**Target tags**{{}} – {{}}**nginx-plus-http-fw-rule**{{}} Screenshot of the interface for creating a Google Compute Engine (GCE) firewall rule, used during deployment of NGINX Plus as the Google load balancer. -3. Click the  Create  button. The new rule is added to the table on the **Firewall rules** tab. +3. Click the  Create  button. The new rule is added to the table on the {{}}**Firewall rules**{{}} tab. ## Task 2: Creating Source Instances @@ -123,29 +123,29 @@ The methods to create a source instance are different. Once you've created the s Create three source VM instances based on a GCE VM image. We're basing our instances on the Ubuntu 16.04 LTS image. -1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar. +1. Verify that the {{}}**NGINX Plus All-Active-LB**{{}} project is still selected in the Google Cloud Platform header bar. -2. Navigate to the **Compute Engine > VM instances** tab. +2. Navigate to the {{}}**Compute Engine > VM instances**{{}} tab. -3. Click the  Create instance  button. The **Create an instance** page opens. +3. Click the  Create instance  button. The {{}}**Create an instance**{{}} page opens. #### Creating the First Application Instance from a VM Image -1. On the **Create an instance** page, modify or verify the fields and checkboxes as indicated (a screenshot of the completed page appears in the next step): +1. 
On the {{}}**Create an instance**{{}} page, modify or verify the fields and checkboxes as indicated (a screenshot of the completed page appears in the next step): - - **Name** – **nginx‑plus‑app‑1** - - **Zone** – The GCP zone that makes sense for your location. We're using **us‑west1‑a**. - - **Machine type** – The appropriate size for the level of traffic you anticipate. We're selecting **micro**, which is ideal for testing purposes. - - **Boot disk** – Click **Change**. The **Boot disk** page opens to the OS images subtab. Perform the following steps: + - **Name** – {{}}**nginx-plus-app-1**{{}} + - **Zone** – The GCP zone that makes sense for your location. We're using {{}}**us-west1-a**{{}}. + - {{}}**Machine type**{{}} – The appropriate size for the level of traffic you anticipate. We're selecting **micro**, which is ideal for testing purposes. + - {{}}**Boot disk**{{}} – Click **Change**. The {{}}**Boot disk**{{}} page opens to the OS images subtab. Perform the following steps: - - Click the radio button for the Unix or Linux image of your choice (here, **Ubuntu 16.04 LTS**). - - Accept the default values in the **Boot disk type** and **Size (GB)** fields (**Standard persistent disk** and **10** respectively). + - Click the radio button for the Unix or Linux image of your choice (here, {{}}**Ubuntu 16.04 LTS**{{}}). + - Accept the default values in the {{}}**Boot disk type**{{}} and {{}}**Size (GB)**{{}} fields ({{}}**Standard persistent disk**{{}} and **10** respectively). - Click the  Select  button. Screenshot of the 'Boot disk' page in Google Cloud Platform for selecting the OS on which a VM runs. In the deployment of NGINX Plus as the Google load balancer, we select Ubuntu 16.04 LTS. - - **Identity and API access** – Keep the defaults for the **Service account ** field and **Access scopes** radio button. Unless you need more granular control. 
+ - {{}}**Identity and API access**{{}} – Keep the defaults for the {{}}**Service account**{{}} field and {{}}**Access scopes**{{}} radio button, unless you need more granular control. - **Firewall** – Verify that neither check box is checked (the default). The firewall rule invoked in the **Tags** field on the **Management** subtab (see Step 3 below) controls this type of access. 2. Click Management, disk, networking, SSH keys to open that set of subtabs. (The screenshot shows the values entered in the previous step.) @@ -154,11 +154,11 @@ Create three source VM instances based on a GCE VM image. We're basing our insta 3. On the **Management** subtab, modify or verify the fields as indicated: - - **Description** – **NGINX Plus app‑1 Image** - - **Tags** – **nginx‑plus‑http‑fw‑rule** - - **Preemptibility** – **Off (recommended)** (the default) - - **Automatic restart** – **On (recommended)** (the default) - - **On host maintenance** – **Migrate VM instance (recommended)** (the default) + - **Description** – {{}}**NGINX Plus app-1 Image**{{}} + - **Tags** – {{}}**nginx-plus-http-fw-rule**{{}} + - **Preemptibility** – {{}}**Off (recommended)**{{}} (the default) + - {{}}**Automatic restart**{{}} – {{}}**On (recommended)**{{}} (the default) + - {{}}**On host maintenance**{{}} – {{}}**Migrate VM instance (recommended)**{{}} (the default) Screenshot of the Management subtab used during creation of a new VM instance, part of deploying NGINX Plus as the Google load balancer. @@ -166,38 +166,38 @@ Create three source VM instances based on a GCE VM image. We're basing our insta Screenshot of the Disks subtab used during creation of a new VM instance, part of deploying NGINX Plus as the Google Cloud load balancer. -5. On the **Networking** subtab, verify the default settings, in particular **Ephemeral** for **External IP** and **Off** for **IP Forwarding**. +5. 
On the **Networking** subtab, verify the default settings, in particular **Ephemeral** for {{}}**External IP**{{}} and **Off** for {{}}**IP Forwarding**{{}}. Screenshot of the Networking subtab used during creation of a new VM instance, part of deploying NGINX Plus as the Google Cloud load balancer. -6. If you're using your own SSH public key instead of your default GCE keys, paste the hexadecimal key string on the **SSH Keys** subtab. Right into the box that reads **Enter entire key data**. +6. If you're using your own SSH public key instead of your default GCE keys, paste the hexadecimal key string on the {{}}**SSH Keys**{{}} subtab, directly into the box that reads {{}}**Enter entire key data**{{}}. Screenshot of the SSH Keys subtab used during creation of a new VM instance, part of deploying NGINX Plus as the Google Cloud Platform load balancer. -7. Click the  Create  button at the bottom of the **Create an instance** page. +7. Click the  Create  button at the bottom of the {{}}**Create an instance**{{}} page. - The **VM instances** summary page opens. It can take several minutes for the instance to be created. Wait to continue until the green check mark appears. + The {{}}**VM instances**{{}} summary page opens. It can take several minutes for the instance to be created. Wait to continue until the green check mark appears. Screenshot of the summary page that verifies the creation of a new VM instance, part of deploying NGINX Plus as the load balancer for Google Cloud. #### Creating the Second Application Instance from a VM Image -1. On the **VM instances** summary page, click CREATE INSTANCE. +1. On the {{}}**VM instances**{{}} summary page, click CREATE INSTANCE. 2. Repeat the steps in Creating the First Application Instance to create the second application instance. 
Specify the same values as for the first application instance, except: - - In Step 1, **Name** – **nginx‑plus‑app‑2** - - In Step 3, **Description** – **NGINX Plus app‑2 Image** + - In Step 1, **Name** – {{}}**nginx-plus-app-2**{{}} + - In Step 3, **Description** – {{}}**NGINX Plus app-2 Image**{{}} #### Creating the Load-Balancing Instance from a VM Image -1. On the **VM instances** summary page, click CREATE INSTANCE. +1. On the {{}}**VM instances**{{}} summary page, click CREATE INSTANCE. 2. Repeat the steps in Creating the First Application Instance to create the load‑balancing instance. Specify the same values as for the first application instance, except: - - In Step 1, **Name** – **nginx‑plus‑lb** + - In Step 1, **Name** – {{}}**nginx-plus-lb**{{}} - In Step 3, **Description** – **NGINX Plus Load Balancing Image** @@ -205,14 +205,14 @@ Create three source VM instances based on a GCE VM image. We're basing our insta Install and configure PHP and FastCGI on the instances. -Repeat these instructions for all three source instances (**nginx‑plus‑app‑1**, **nginx‑plus‑app‑2**, and **nginx‑plus‑lb**). +Repeat these instructions for all three source instances ({{}}**nginx-plus-app-1**{{}}, {{}}**nginx-plus-app-2**{{}}, and {{}}**nginx-plus-lb**{{}}). **Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command. 1. Connect to the instance over SSH using the method of your choice. GCE provides a built-in mechanism: - - Navigate to the **Compute Engine > VM instances** tab. - - In the instance's row in the table, click the triangle icon in the **Connect** column at the far right and select a method (for example, **Open in browser window**). + - Navigate to the {{}}**Compute Engine > VM instances**{{}} tab. + - In the instance's row in the table, click the triangle icon in the **Connect** column at the far right and select a method (for example, {{}}**Open in browser window**{{}}). 
Screenshot showing how to connect via SSH to a VM instance, part of deploying NGINX Plus as the Google load balancer. @@ -253,7 +253,7 @@ Now install NGINX Plus and download files that are specific to the all‑active Both the configuration and content files are available at the [NGINX GitHub repository](https://github.com/nginxinc/NGINX-Demos/tree/master/gce-nginx-plus-deployment-guide-files). -Repeat these instructions for all three source instances (**nginx‑plus‑app‑1**, **nginx‑plus‑app‑2**, and **nginx‑plus‑lb**). +Repeat these instructions for all three source instances ({{}}**nginx-plus-app-1**{{}}, {{}}**nginx-plus-app-2**{{}}, and {{}}**nginx-plus-lb**{{}}). **Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command. @@ -265,7 +265,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo 4. Copy the right configuration file from the **etc\_nginx\_conf.d** subdirectory of the cloned repository to **/etc/nginx/conf.d**: - - On both **nginx‑plus‑app‑1** and **nginx‑plus‑app‑2**, copy **gce‑all‑active‑app.conf**. + - On both {{}}**nginx-plus-app-1**{{}} and {{}}**nginx-plus-app-2**{{}}, copy {{}}**gce-all-active-app.conf**{{}}. You can also run the following commands to download the configuration file directly from the GitHub repository: @@ -281,7 +281,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo wget https://github.com/nginxinc/NGINX-Demos/blob/master/gce-nginx-plus-deployment-guide-files/etc_nginx_conf.d/gce-all-active-app.conf ``` - - On **nginx‑plus‑lb**, copy **gce‑all‑active‑lb.conf**. + - On {{}}**nginx-plus-lb**{{}}, copy {{}}**gce-all-active-lb.conf**{{}}. 
You can also run the following commands to download the configuration file directly from the GitHub repository: @@ -297,9 +297,9 @@ Both the configuration and content files are available at the [NGINX GitHub repo wget https://github.com/nginxinc/NGINX-Demos/blob/master/gce-nginx-plus-deployment-guide-files/etc_nginx_conf.d/gce-all-active-lb.conf ``` -5. On the LB instance (**nginx‑plus‑lb**), use a text editor to open **gce‑all‑active‑lb.conf**. Change the `server` directives in the `upstream` block to reference the internal IP addresses of the **nginx‑plus‑app‑1** and **nginx‑plus‑app‑2** instances (substitute the address for the expression in angle brackets). You do not need to modify the two application instances. +5. On the LB instance ({{}}**nginx-plus-lb**{{}}), use a text editor to open {{}}**gce-all-active-lb.conf**{{}}. Change the `server` directives in the `upstream` block to reference the internal IP addresses of the {{}}**nginx-plus-app-1**{{}} and {{}}**nginx-plus-app-2**{{}} instances (substitute the address for the expression in angle brackets). You do not need to modify the two application instances. - You can look up internal IP addresses in the **Internal IP** column of the table on the **Compute Engine > VM instances** summary page. + You can look up internal IP addresses in the {{}}**Internal IP**{{}} column of the table on the {{}}**Compute Engine > VM instances**{{}} summary page. ```nginx upstream upstream_app_pool { @@ -341,7 +341,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo nginx -s reload ``` -9. Verify the instance is working by accessing it at its external IP address. (As previously noted, we recommend blocking access to the external IP addresses of the application instances in a production environment.) The external IP address for the instance appears on the **Compute Engine > VM instances** summary page, in the **External IP** column of the table. +9. 
Verify the instance is working by accessing it at its external IP address. (As previously noted, we recommend blocking access to the external IP addresses of the application instances in a production environment.) The external IP address for the instance appears on the {{}}**Compute Engine > VM instances**{{}} summary page, in the {{}}**External IP**{{}} column of the table. - Access the **index.html** page either in a browser or by running this `curl` command. @@ -351,7 +351,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo - Access its NGINX Plus live activity monitoring dashboard in a browser, at: - **https://_external‑IP‑address_:8080/status.html** + {{}}**https://_external-IP-address_:8080/status.html**{{}} 10. Proceed to [Task 3: Creating "Gold" Images](#gold). @@ -363,7 +363,7 @@ Create three source instances based on a prebuilt NGINX Plus image running on < #### Creating the First Application Instance from a Prebuilt Image -1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar. +1. Verify that the {{}}**NGINX Plus All-Active-LB**{{}} project is still selected in the Google Cloud Platform header bar. 2. Navigate to the GCP Marketplace and search for **nginx plus**. @@ -373,16 +373,16 @@ Create three source instances based on a prebuilt NGINX Plus image running on < 4. On the **NGINX Plus** page that opens, click the  Launch on Compute Engine  button. -5. Fill in the fields on the **New NGINX Plus deployment** page as indicated. +5. Fill in the fields on the {{}}**New NGINX Plus deployment**{{}} page as indicated. - - **Deployment name** – **nginx‑plus‑app‑1** - - **Zone** – The GCP zone that makes sense for your location. We're using **us‑west1‑a**. - - **Machine type** – The appropriate size for the level of traffic you anticipate. We're selecting **micro**, which is ideal for testing purposes. 
- - **Disk type** – **Standard Persistent Disk** (the default) - - **Disk size in GB** – **10** (the default and minimum allowed) - - **Network name** – **default** - - **Subnetwork name** – **default** - - **Firewall** – Verify that the **Allow HTTP traffic** checkbox is checked. + - {{}}**Deployment name**{{}} – {{}}**nginx-plus-app-1**{{}} + - **Zone** – The GCP zone that makes sense for your location. We're using {{}}**us-west1-a**{{}}. + - {{}}**Machine type**{{}} – The appropriate size for the level of traffic you anticipate. We're selecting **micro**, which is ideal for testing purposes. + - {{}}**Disk type**{{}} – {{}}**Standard Persistent Disk**{{}} (the default) + - {{}}**Disk size in GB**{{}} – **10** (the default and minimum allowed) + - {{}}**Network name**{{}} – **default** + - {{}}**Subnetwork name**{{}} – **default** + - **Firewall** – Verify that the {{}}**Allow HTTP traffic**{{}} checkbox is checked. Screenshot of the page for creating a prebuilt NGINX Plus VM instance when deploying NGINX Plus as the Google Cloud Platform load balancer. @@ -392,25 +392,25 @@ Create three source instances based on a prebuilt NGINX Plus image running on < Screenshot of the page that confirms the creation of a prebuilt NGINX Plus VM instance when deploying NGINX Plus as the Google load balancer. -7. Navigate to the **Compute Engine > VM instances** tab and click **nginx‑plus‑app‑1‑vm** in the Name column in the table. (The **‑vm** suffix is added automatically to the name of the newly created instance.) +7. Navigate to the {{}}**Compute Engine > VM instances**{{}} tab and click {{}}**nginx-plus-app-1-vm**{{}} in the Name column in the table. (The {{}}**-vm**{{}} suffix is added automatically to the name of the newly created instance.) Screenshot showing how to access the page where configuration details for a VM instance can be modified during deployment of NGINX Plus as the Google Cloud load balancer. -8. 
On the **VM instances** page that opens, click EDIT at the top of the page. In fields that can be edited, the value changes from static text to text boxes, drop‑down menus, and checkboxes. +8. On the {{}}**VM instances**{{}} page that opens, click EDIT at the top of the page. In fields that can be edited, the value changes from static text to text boxes, drop‑down menus, and checkboxes. 9. Modify or verify the indicated editable fields (non‑editable fields are not listed): - - **Tags** – If a default tag appears (for example, **nginx‑plus‑app‑1‑tcp‑80**), click the **X** after its name to remove it. Then, type in **nginx‑plus‑http‑fw‑rule**. - - **External IP** – **Ephemeral** (the default) - - **Boot disk and local disks** – Uncheck the checkbox labeled **Delete boot disk when instance is deleted**. - - **Additional disks** – No changes - - **Network** – If you must change the defaults, for example, when configuring a production environment, select default Then, select EDIT on the opened **Network details** page. After making your changes select the  Save  button. + - **Tags** – If a default tag appears (for example, {{}}**nginx-plus-app-1-tcp-80**{{}}), click the **X** after its name to remove it. Then, type in {{}}**nginx-plus-http-fw-rule**{{}}. + - {{}}**External IP**{{}} – **Ephemeral** (the default) + - {{}}**Boot disk and local disks**{{}} – Uncheck the checkbox labeled **Delete boot disk when instance is deleted**. + - {{}}**Additional disks**{{}} – No changes + - **Network** – If you must change the defaults, for example, when configuring a production environment, select **default**. Then, select EDIT on the opened {{}}**Network details**{{}} page. After making your changes, select the  Save  button. - **Firewall** – Verify that neither check box is checked (the default). The firewall rule named in the **Tags** field that's above on the current page (see the first bullet in this list) controls this type of access. 
- - **Automatic restart** – **On (recommended)** (the default) - - **On host maintenance** – **Migrate VM instance (recommended)** (the default) - - **Custom metadata** – No changes - - **SSH Keys** – If you're using your own SSH public key instead of your default GCE keys, paste the hexadecimal key string into the box labeled **Enter entire key data**. - - **Serial port** – Verify that the check box labeled **Enable connecting to serial ports** is not checked (the default). + - {{}}**Automatic restart**{{}} – {{}}**On (recommended)**{{}} (the default) + - {{}}**On host maintenance**{{}} – {{}}**Migrate VM instance (recommended)**{{}} (the default) + - {{}}**Custom metadata**{{}} – No changes + - {{}}**SSH Keys**{{}} – If you're using your own SSH public key instead of your default GCE keys, paste the hexadecimal key string into the box labeled {{}}**Enter entire key data**{{}}. + - {{}}**Serial port**{{}} – Verify that the check box labeled {{}}**Enable connecting to serial ports**{{}} is not checked (the default). The screenshot shows the results of your changes. It omits some fields that can't be edited or for which we recommend keeping the defaults. @@ -423,29 +423,29 @@ Create three source instances based on a prebuilt NGINX Plus image running on < Create the second application instance by cloning the first one. -1. Navigate back to the summary page on the **Compute Engine > VM instances** tab (click the arrow that is circled in the following figure). +1. Navigate back to the summary page on the {{}}**Compute Engine > VM instances**{{}} tab (click the arrow that is circled in the following figure). Screenshot showing how to return to the VM instance summary page during deployment of NGINX Plus as the Google Cloud Platform load balancer. -2. Click **nginx‑plus‑app‑1‑vm** in the Name column of the table (shown in the screenshot in Step 7 of Creating the First Application Instance). +2. 
Click {{}}**nginx-plus-app-1-vm**{{}} in the Name column of the table (shown in the screenshot in Step 7 of Creating the First Application Instance). -3. On the **VM instances** page that opens, click CLONE at the top of the page. +3. On the {{}}**VM instances**{{}} page that opens, click CLONE at the top of the page. -4. On the **Create an instance** page that opens, modify or verify the fields and checkboxes as indicated: +4. On the {{}}**Create an instance**{{}} page that opens, modify or verify the fields and checkboxes as indicated: - - **Name** – **nginx‑plus‑app‑2‑vm**. Here we're adding the **‑vm** suffix to make the name consistent with the first instance; GCE does not add it automatically when you clone an instance. - - **Zone** – The GCP zone that makes sense for your location. We're using **us‑west1‑a**. - - **Machine type** – The appropriate size for the level of traffic you anticipate. We're selecting **f1‑micro**, which is ideal for testing purposes. - - **Boot disk type** – **New 10 GB standard persistent disk** (the value inherited from **nginx‑plus‑app‑1‑vm**) - - **Identity and API access** – Set the **Access scopes** radio button to **Allow default access** and accept the default values in all other fields. If you want more granular control over access than is provided by these settings, modify the fields in this section as appropriate. + - **Name** – {{}}**nginx-plus-app-2-vm**{{}}. Here we're adding the {{}}**-vm**{{}} suffix to make the name consistent with the first instance; GCE does not add it automatically when you clone an instance. + - **Zone** – The GCP zone that makes sense for your location. We're using {{}}**us-west1-a**{{}}. + - {{}}**Machine type**{{}} – The appropriate size for the level of traffic you anticipate. We're selecting {{}}**f1-micro**{{}}, which is ideal for testing purposes. 
+ - {{}}**Boot disk type**{{}} – {{}}**New 10 GB standard persistent disk**{{}} (the value inherited from {{}}**nginx-plus-app-1-vm**{{}}) + - {{}}**Identity and API access**{{}} – Set the {{}}**Access scopes**{{}} radio button to {{}}**Allow default access**{{}} and accept the default values in all other fields. If you want more granular control over access than is provided by these settings, modify the fields in this section as appropriate. - **Firewall** – Verify that neither check box is checked (the default). 5. Click Management, disk, networking, SSH keys to open that set of subtabs. 6. Verify the following settings on the subtabs, modifying them as necessary: - - **Management** – In the **Tags** field: **nginx‑plus‑http‑fw‑rule** - - **Disks** – The **Deletion rule** checkbox (labeled **Delete boot disk when instance is deleted**) is not checked + - **Management** – In the **Tags** field: {{}}**nginx-plus-http-fw-rule**{{}} + - **Disks** – The {{}}**Deletion rule**{{}} checkbox (labeled **Delete boot disk when instance is deleted**) is not checked 7. Select the  Create  button. @@ -454,21 +454,21 @@ Create the second application instance by cloning the first one. Create the source load‑balancing instance by cloning the first instance again. -Repeat Steps 2 through 7 of Creating the Second Application Instance. In Step 4, specify **nginx‑plus‑lb‑vm** as the name. +Repeat Steps 2 through 7 of Creating the Second Application Instance. In Step 4, specify {{}}**nginx-plus-lb-vm**{{}} as the name. #### Configuring PHP and FastCGI on the Prebuilt-Based Instances Install and configure PHP and FastCGI on the instances. -Repeat these instructions for all three source instances (**nginx‑plus‑app‑1‑vm**, **nginx‑plus‑app‑2‑vm**, and **nginx‑plus‑lb‑vm**). +Repeat these instructions for all three source instances ({{}}**nginx-plus-app-1-vm**{{}}, {{}}**nginx-plus-app-2-vm**{{}}, and {{}}**nginx-plus-lb-vm**{{}}). **Note:** Some commands require `root` privilege. 
If appropriate for your environment, prefix commands with the `sudo` command. 1. Connect to the instance over SSH using the method of your choice. GCE provides a built‑in mechanism: - - Navigate to the **Compute Engine > VM instances** tab. - - In the table, find the row for the instance. Select the triangle icon in the **Connect** column at the far right. Then, select a method (for example, **Open in browser window**). + - Navigate to the {{}}**Compute Engine > VM instances**{{}} tab. + - In the table, find the row for the instance. Select the triangle icon in the **Connect** column at the far right. Then, select a method (for example, {{}}**Open in browser window**{{}}). The screenshot shows instances based on the prebuilt NGINX Plus images. @@ -511,7 +511,7 @@ Now download files that are specific to the all‑active deployment: Both the configuration and content files are available at the [NGINX GitHub repository](https://github.com/nginxinc/NGINX-Demos/tree/master/gce-nginx-plus-deployment-guide-files). -Repeat these instructions for all three source instances (**nginx‑plus‑app‑1‑vm**, **nginx‑plus‑app‑2‑vm**, and **nginx‑plus‑lb‑vm**). +Repeat these instructions for all three source instances ({{}}**nginx-plus-app-1-vm**{{}}, {{}}**nginx-plus-app-2-vm**{{}}, and {{}}**nginx-plus-lb-vm**{{}}). **Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command. @@ -522,7 +522,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo 3. Copy the right configuration file from the **etc\_nginx\_conf.d** subdirectory of the cloned repository to **/etc/nginx/conf.d**: - - On both **nginx‑plus‑app‑1‑vm** and **nginx‑plus‑app‑2‑vm**, copy **gce‑all‑active‑app.conf**. + - On both {{}}**nginx-plus-app-1-vm**{{}} and {{}}**nginx-plus-app-2-vm**{{}}, copy {{}}**gce-all-active-app.conf**{{}}. 
You can also run these commands to download the configuration file from GitHub: @@ -538,7 +538,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo wget https://raw.githubusercontent.com/nginxinc/NGINX-Demos/master/gce-nginx-plus-deployment-guide-files/etc_nginx_conf.d/gce-all-active-app.conf ``` - - On **nginx‑plus‑lb‑vm**, copy **gce‑all‑active‑lb.conf**. + - On {{}}**nginx-plus-lb-vm**{{}}, copy {{}}**gce-all-active-lb.conf**{{}}. You can also run the following commands to download the configuration file directly from the GitHub repository: @@ -554,9 +554,9 @@ Both the configuration and content files are available at the [NGINX GitHub repo wget https://raw.githubusercontent.com/nginxinc/NGINX-Demos/master/gce-nginx-plus-deployment-guide-files/etc_nginx_conf.d/gce-all-active-lb.conf ``` -4. On the LB instance (**nginx‑plus‑lb‑vm**), use a text editor to open **gce‑all‑active‑lb.conf**. Change the `server` directives in the `upstream` block to reference the internal IP addresses of the **nginx‑plus‑app‑1‑vm** and **nginx‑plus‑app‑2‑vm** instances. (No action is required on the two application instances themselves.) +4. On the LB instance ({{}}**nginx-plus-lb-vm**{{}}), use a text editor to open {{}}**gce-all-active-lb.conf**{{}}. Change the `server` directives in the `upstream` block to reference the internal IP addresses of the {{}}**nginx-plus-app-1-vm**{{}} and {{}}**nginx-plus-app-2-vm**{{}} instances. (No action is required on the two application instances themselves.) - You can look up internal IP addresses in the **Internal IP** column of the table on the **Compute Engine > VM instances** summary page. + You can look up internal IP addresses in the {{}}**Internal IP**{{}} column of the table on the {{}}**Compute Engine > VM instances**{{}} summary page. ```nginx upstream upstream_app_pool { @@ -598,7 +598,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo nginx -s reload ``` -8. 
Verify the instance is working by accessing it at its external IP address. (As noted, we recommend blocking access, in production, to the external IPs of the app.) The external IP address for the instance appears on the **Compute Engine > VM instances** summary page, in the **External IP** column of the table. +8. Verify the instance is working by accessing it at its external IP address. (As noted, we recommend blocking access, in production, to the external IPs of the app.) The external IP address for the instance appears on the {{}}**Compute Engine > VM instances**{{}} summary page, in the {{}}**External IP**{{}} column of the table. - Access the **index.html** page either in a browser or by running this `curl` command. ``` - Access the NGINX Plus live activity monitoring dashboard in a browser, at: - **https://_external‑IP‑address‑of‑NGINX‑Plus‑server_:8080/dashboard.html** + {{}}**https://_external-IP-address-of-NGINX-Plus-server_:8080/dashboard.html**{{}} 9. Proceed to [Task 3: Creating "Gold" Images](#gold). @@ -617,14 +617,14 @@ Both the configuration and content files are available at the [NGINX GitHub repo Create _gold images_, which are base images that GCE clones automatically when it needs to scale up the number of instances. They are derived from the instances you created in [Creating Source Instances](#source). Before creating the images, delete the source instances. This breaks the attachment between them and the disk. (You can't create an image from a disk attached to a VM instance.) -1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar. +1. Verify that the {{}}**NGINX Plus All-Active-LB**{{}} project is still selected in the Google Cloud Platform header bar. -2. Navigate to the **Compute Engine > VM instances** tab. +2. Navigate to the {{}}**Compute Engine > VM instances**{{}} tab. 3. 
In the table, select all three instances: - - If you created source instances from [VM (Ubuntu) images](#source-vm): **nginx‑plus‑app‑1**, **nginx‑plus‑app‑2**, and **nginx‑plus‑lb** - - If you created source instances from [prebuilt NGINX Plus images](#source-prebuilt): **nginx‑plus‑app‑1‑vm**, **nginx‑plus‑app‑2‑vm**, and **nginx‑plus‑lb‑vm** + - If you created source instances from [VM (Ubuntu) images](#source-vm): {{}}**nginx-plus-app-1**{{}}, {{}}**nginx-plus-app-2**{{}}, and {{}}**nginx-plus-lb**{{}} + - If you created source instances from [prebuilt NGINX Plus images](#source-prebuilt): {{}}**nginx-plus-app-1-vm**{{}}, {{}}**nginx-plus-app-2-vm**{{}}, and {{}}**nginx-plus-lb-vm**{{}} 4. Click STOP in the top toolbar to stop the instances. @@ -634,43 +634,43 @@ Create _gold images_, which are base images that GCE clones automatically when i **Note:** If the pop-up warns that it will delete the boot disk for any instance, cancel the deletion. Then, perform the steps below for each affected instance: - - Navigate to the **Compute Engine > VM instances** tab and click the instance in the Name column in the table. (The screenshot shows **nginx‑plus‑app‑1‑vm**.) + - Navigate to the {{}}**Compute Engine > VM instances**{{}} tab and click the instance in the Name column in the table. (The screenshot shows {{}}**nginx-plus-app-1-vm**{{}}.) Screenshot showing how to access the page where configuration details for a VM instance can be modified during deployment of NGINX Plus as the Google Cloud load balancer. - - On the **VM instances** page that opens, click EDIT at the top of the page. In fields that can be edited, the value changes from static text to text boxes, drop‑down menus, and checkboxes. - - In the **Boot disk and local disks** field, uncheck the checkbox labeled **Delete boot disk when instance is deleted**. + - On the {{}}**VM instances**{{}} page that opens, click EDIT at the top of the page. 
In fields that can be edited, the value changes from static text to text boxes, drop‑down menus, and checkboxes. + - In the {{}}**Boot disk and local disks**{{}} field, uncheck the checkbox labeled **Delete boot disk when instance is deleted**. - Click the  Save  button. - - On the **VM instances** summary page, select the instance in the table and click DELETE in the top toolbar to delete it. + - On the {{}}**VM instances**{{}} summary page, select the instance in the table and click DELETE in the top toolbar to delete it. -6. Navigate to the **Compute Engine > Images** tab. +6. Navigate to the {{}}**Compute Engine > Images**{{}} tab. 7. Click [+] CREATE IMAGE. -8. On the **Create an image** page that opens, modify or verify the fields as indicated: +8. On the {{}}**Create an image**{{}} page that opens, modify or verify the fields as indicated: - - **Name** – **nginx‑plus‑app‑1‑image** + - **Name** – {{}}**nginx-plus-app-1-image**{{}} - **Family** – Leave the field empty - **Description** – **NGINX Plus Application 1 Gold Image** - - **Encryption** – **Automatic (recommended)** (the default) + - **Encryption** – {{}}**Automatic (recommended)**{{}} (the default) - **Source** – **Disk** (the default) - - **Source disk** – **nginx‑plus‑app‑1** or **nginx‑plus‑app‑1‑vm**, depending on the method you used to create source instances (select the source instance from the drop‑down menu) + - {{}}**Source disk**{{}} – {{}}**nginx-plus-app-1**{{}} or {{}}**nginx-plus-app-1-vm**{{}}, depending on the method you used to create source instances (select the source instance from the drop‑down menu) 9. Click the  Create  button. 10. 
Repeat Steps 7 through 9 to create a second image with the following values (retain the default values in all other fields): - - **Name** – **nginx‑plus‑app‑2‑image** - - **Description** – **NGINX Plus Application 2 Gold Image** - - **Source disk** – **nginx‑plus‑app‑2** or **nginx‑plus‑app‑2‑vm**, depending on the method you used to create source instances (select the source instance from the drop‑down menu) + - **Name** – {{}}**nginx-plus-app-2-image**{{}} + - **Description** – {{}}**NGINX Plus Application 2 Gold Image**{{}} + - {{}}**Source disk**{{}} – {{}}**nginx-plus-app-2**{{}} or {{}}**nginx-plus-app-2-vm**{{}}, depending on the method you used to create source instances (select the source instance from the drop‑down menu) 11. Repeat Steps 7 through 9 to create a third image with the following values (retain the default values in all other fields): - - **Name** – **nginx‑plus‑lb‑image** + - **Name** – {{}}**nginx-plus-lb-image**{{}} - **Description** – **NGINX Plus LB Gold Image** - - **Source disk** – **nginx‑plus‑lb** or **nginx‑plus‑lb‑vm**, depending on the method you used to create source instances (select the source instance from the drop‑down menu) + - {{}}**Source disk**{{}} – {{}}**nginx-plus-lb**{{}} or {{}}**nginx-plus-lb-vm**{{}}, depending on the method you used to create source instances (select the source instance from the drop‑down menu) -12. Verify that the three images appear at the top of the table on the **Compute Engine > Images** tab. +12. Verify that the three images appear at the top of the table on the {{}}**Compute Engine > Images**{{}} tab. ## Task 4: Creating Instance Templates @@ -681,31 +681,31 @@ Create _instance templates_. They are the compute workloads in instance groups. ### Creating the First Application Instance Template -1. 
Verify that the {{}}**NGINX Plus All-Active-LB**{{}} project is still selected in the Google Cloud Platform header bar. -2. Navigate to the **Compute Engine > Instance templates** tab. +2. Navigate to the {{}}**Compute Engine > Instance templates**{{}} tab. 3. Click the  Create instance template  button. -4. On the **Create an instance template** page that opens, modify or verify the fields as indicated: +4. On the {{}}**Create an instance template**{{}} page that opens, modify or verify the fields as indicated: - - **Name** – **nginx‑plus‑app‑1‑instance‑template** - - **Machine type** – The appropriate size for the level of traffic you anticipate. We're selecting **micro**, which is ideal for testing purposes. - - **Boot disk** – Click **Change**. The **Boot disk** page opens. Perform the following steps: + - **Name** – {{}}**nginx-plus-app-1-instance-template**{{}} + - {{}}**Machine type**{{}} – The appropriate size for the level of traffic you anticipate. We're selecting **micro**, which is ideal for testing purposes. + - {{}}**Boot disk**{{}} – Click **Change**. The {{}}**Boot disk**{{}} page opens. Perform the following steps: - - Open the **Custom Images** subtab. + - Open the {{}}**Custom Images**{{}} subtab. Screenshot of the 'Boot disk' page in Google Cloud Platform for selecting the source instance of a new instance template, part of deploying NGINX Plus as the Google load balancer. - - Select **NGINX Plus All‑Active‑LB** from the drop-down menu labeled **Show images from**. + - Select {{}}**NGINX Plus All-Active-LB**{{}} from the drop-down menu labeled {{}}**Show images from**{{}}. - - Click the **nginx‑plus‑app‑1‑image** radio button. + - Click the {{}}**nginx-plus-app-1-image**{{}} radio button. - - Accept the default values in the **Boot disk type** and **Size (GB)** fields (**Standard persistent disk** and **10** respectively). 
+ - Accept the default values in the {{}}**Boot disk type**{{}} and {{}}**Size (GB)**{{}} fields ({{}}**Standard persistent disk**{{}} and **10** respectively). - Click the  Select  button. - - **Identity and API access** – Unless you want more granular control over access, keep the defaults in the **Service account** field (**Compute Engine default service account**) and **Access scopes** field (**Allow default access**). + - {{}}**Identity and API access**{{}} – Unless you want more granular control over access, keep the defaults in the {{}}**Service account**{{}} field (**Compute Engine default service account**) and {{}}**Access scopes**{{}} field ({{}}**Allow default access**{{}}). - **Firewall** – Verify that neither check box is checked (the default). The firewall rule invoked in the **Tags** field on the **Management** subtab (see Step 6 below) controls this type of access. 5. Select Management, disk, networking, SSH keys (indicated with a red arrow in the following screenshot) to open that set of subtabs. @@ -714,11 +714,11 @@ Create _instance templates_. They are the compute workloads in instance groups. 6. 
On the **Management** subtab, modify or verify the fields as indicated: - - **Description** – **NGINX Plus app‑1 Instance Template** - - **Tags** – **nginx‑plus‑http‑fw‑rule** - - **Preemptibility** – **Off (recommended)** (the default) - - **Automatic restart** – **On (recommended)** (the default) - - **On host maintenance** – **Migrate VM instance (recommended)** (the default) + - **Description** – {{}}**NGINX Plus app-1 Instance Template**{{}} + - **Tags** – {{}}**nginx-plus-http-fw-rule**{{}} + - **Preemptibility** – {{}}**Off (recommended)**{{}} (the default) + - {{}}**Automatic restart**{{}} – {{}}**On (recommended)**{{}} (the default) + - {{}}**On host maintenance**{{}} – {{}}**Migrate VM instance (recommended)**{{}} (the default) Screenshot of the Management subtab used during creation of a new VM instance template, part of deploying NGINX Plus as the Google load balancer. @@ -728,11 +728,11 @@ Create _instance templates_. They are the compute workloads in instance groups. -8. On the **Networking** subtab, verify the default settings of **Ephemeral** for **External IP** and **Off** for **IP Forwarding**. +8. On the **Networking** subtab, verify the default settings of **Ephemeral** for {{}}**External IP**{{}} and **Off** for {{}}**IP Forwarding**{{}}. Screenshot of the Networking subtab used during creation of a new VM instance template, part of deploying NGINX Plus as the Google load balancer. -9. If you're using your own SSH public key instead of your default keys, paste the hexadecimal key string on the **SSH Keys** subtab. Right into the box that reads **Enter entire key data**. +9. If you're using your own SSH public key instead of your default keys, paste the hexadecimal key string on the {{}}**SSH Keys**{{}} subtab, right into the box that reads {{}}**Enter entire key data**{{}}. 
Screenshot of the SSH Keys subtab used during creation of a new VM instance, part of deploying NGINX Plus as the Google Cloud Platform load balancer. @@ -741,25 +741,25 @@ Create _instance templates_. They are the compute workloads in instance groups. ### Creating the Second Application Instance Template -1. On the **Instance templates** summary page, click CREATE INSTANCE TEMPLATE. +1. On the {{}}**Instance templates**{{}} summary page, click CREATE INSTANCE TEMPLATE. 2. Repeat Steps 4 through 10 of Creating the First Application Instance Template to create a second application instance template. Use the same values as for the first instance template, except as noted: - In Step 4: - - **Name** – **nginx‑plus‑app‑2‑instance‑template** - - **Boot disk** – Click the **nginx‑plus‑app‑2‑image** radio button - - In Step 6, **Description** – **NGINX Plus app‑2 Instance Template** + - **Name** – {{}}**nginx-plus-app-2-instance-template**{{}} + - {{}}**Boot disk**{{}} – Click the {{}}**nginx-plus-app-2-image**{{}} radio button + - In Step 6, **Description** – {{}}**NGINX Plus app-2 Instance Template**{{}} ### Creating the Load-Balancing Instance Template -1. On the **Instance templates** summary page, click CREATE INSTANCE TEMPLATE. +1. On the {{}}**Instance templates**{{}} summary page, click CREATE INSTANCE TEMPLATE. 2. Repeat Steps 4 through 10 of Creating the First Application Instance Template to create the load‑balancing instance template. Use the same values as for the first instance template, except as noted: - In Step 4: - - **Name** – **nginx‑plus‑lb‑instance‑template**. - - **Boot disk** – Click the **nginx‑plus‑lb‑image** radio button + - **Name** – {{}}**nginx-plus-lb-instance-template**{{}}. + - {{}}**Boot disk**{{}} – Click the {{}}**nginx-plus-lb-image**{{}} radio button - In Step 6, **Description** – **NGINX Plus Load‑Balancing Instance Template** @@ -768,28 +768,28 @@ Create _instance templates_. They are the compute workloads in instance groups. 
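The console steps above can also be scripted. As a minimal, hypothetical `gcloud` sketch for the first application template — the image, template, and tag names come from this guide, while the machine type and disk flags are assumptions to adapt to your project:

```shell
# Sketch only: create the first application instance template from its gold
# image. Names match this guide; machine type and disk flags are assumptions.
gcloud compute instance-templates create nginx-plus-app-1-instance-template \
    --image nginx-plus-app-1-image \
    --machine-type f1-micro \
    --tags nginx-plus-http-fw-rule \
    --boot-disk-type pd-standard \
    --boot-disk-size 10GB
```

Repeating the command with the app‑2 and LB image and template names produces the other two templates.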
Define the simple HTTP health check that GCE uses. This verifies that each NGINX Plus LB instance is running (and re-creates any LB instance that isn't). -1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar. +1. Verify that the {{}}**NGINX Plus All-Active-LB**{{}} project is still selected in the Google Cloud Platform header bar. -2. Navigate to the **Compute Engine > Health checks** tab. +2. Navigate to the {{}}**Compute Engine > Health checks**{{}} tab. 3. Click the  Create a health check  button. -4. On the **Create a health check** page that opens, modify or verify the fields as indicated: +4. On the {{}}**Create a health check**{{}} page that opens, modify or verify the fields as indicated: - - **Name** – **nginx‑plus‑http‑health‑check** + - **Name** – {{}}**nginx-plus-http-health-check**{{}} - **Description** – **Basic HTTP health check to monitor NGINX Plus instances** - **Protocol** – **HTTP** (the default) - **Port** – **80** (the default) - - **Request path** – **/status‑old.html** + - {{}}**Request path**{{}} – {{}}**/status-old.html**{{}} -5. If the **Health criteria** section is not already open, click More. +5. If the {{}}**Health criteria**{{}} section is not already open, click More. 6. Modify or verify the fields as indicated: - - **Check interval** – **10 seconds** - **Timeout** – **10 seconds** - - **Healthy threshold** – **2 consecutive successes** (the default) - - **Unhealthy threshold** – **10 consecutive failures** + - {{}}**Check interval**{{}} – {{}}**10 seconds**{{}} - **Timeout** – {{}}**10 seconds**{{}} + - {{}}**Healthy threshold**{{}} – {{}}**2 consecutive successes**{{}} (the default) + - {{}}**Unhealthy threshold**{{}} – {{}}**10 consecutive failures**{{}} 7. Click the  Create  button. @@ -800,28 +800,28 @@ Define the simple HTTP health check that GCE uses. 
This verifies that each NGINX Plus LB instance is running (and re-creates any LB instance that isn't). Create three independent instance groups, one for each type of function-specific instance. -1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar. +1. Verify that the {{}}**NGINX Plus All-Active-LB**{{}} project is still selected in the Google Cloud Platform header bar. -2. Navigate to the **Compute Engine > Instance groups** tab. +2. Navigate to the {{}}**Compute Engine > Instance groups**{{}} tab. 3. Click the  Create instance group  button. ### Creating the First Application Instance Group -1. On the **Create a new instance group** page that opens, modify or verify the fields as indicated. Ignore fields that are not mentioned: +1. On the {{}}**Create a new instance group**{{}} page that opens, modify or verify the fields as indicated. Ignore fields that are not mentioned: - - **Name** – **nginx‑plus‑app‑1‑instance‑group** + - **Name** – {{}}**nginx-plus-app-1-instance-group**{{}} - **Description** – **Instance group to host NGINX Plus app-1 instances** - **Location** – - - Click the **Single‑zone** radio button (the default). - - **Zone** – The GCP zone you specified when you created source instances (Step 1 of [Creating the First Application Instance from a VM Image](#source-vm-app-1) or Step 5 of [Creating the First Application Instance from a Prebuilt Image](#source-prebuilt)). We're using **us‑west1‑a**. - - **Creation method** – **Use instance template** radio button (the default) - - **Instance template** – **nginx‑plus‑app‑1‑instance‑template** (select from the drop-down menu) + - Click the {{}}**Single-zone**{{}} radio button (the default). + - **Zone** – The GCP zone you specified when you created source instances (Step 1 of [Creating the First Application Instance from a VM Image](#source-vm-app-1) or Step 5 of [Creating the First Application Instance from a Prebuilt Image](#source-prebuilt)). We're using {{}}**us-west1-a**{{}}. 
+ - {{}}**Creation method**{{}} – {{}}**Use instance template**{{}} radio button (the default) + - {{}}**Instance template**{{}} – {{}}**nginx-plus-app-1-instance-template**{{}} (select from the drop-down menu) - **Autoscaling** – **Off** (the default) - - **Number of instances** – **2** - - **Health check** – **nginx‑plus‑http‑health‑check** (select from the drop-down menu) - - **Initial delay** – **300 seconds** (the default) + - {{}}**Number of instances**{{}} – **2** + - {{}}**Health check**{{}} – {{}}**nginx-plus-http-health-check**{{}} (select from the drop-down menu) + - {{}}**Initial delay**{{}} – {{}}**300 seconds**{{}} (the default) 3. Click the  Create  button. @@ -834,25 +834,25 @@ Create three independent instance groups, one for each type of function-specific 2. Repeat the steps in [Creating the First Application Instance Group](#groups-app-1) to create a second application instance group. Specify the same values as for the first instance template, except for these fields: - - **Name** – **nginx‑plus‑app‑2‑instance‑group** + - **Name** – {{}}**nginx-plus-app-2-instance-group**{{}} - **Description** – **Instance group to host NGINX Plus app-2 instances** - - **Instance template** – **nginx‑plus‑app‑2‑instance‑template** (select from the drop-down menu) + - {{}}**Instance template**{{}} – {{}}**nginx-plus-app-2-instance-template**{{}} (select from the drop-down menu) ### Creating the Load-Balancing Instance Group -1. On the **Instance groups** summary page, click CREATE INSTANCE GROUP. +1. On the {{}}**Instance groups**{{}} summary page, click CREATE INSTANCE GROUP. 2. Repeat the steps in [Creating the First Application Instance Group](#groups-app-1) to create the load‑balancing instance group. 
Specify the same values as for the first instance template, except for these fields: - - **Name** – **nginx‑plus‑lb‑instance‑group** + - **Name** – {{}}**nginx-plus-lb-instance-group**{{}} - **Description** – **Instance group to host NGINX Plus load balancing instances** - - **Instance template** – **nginx‑plus‑lb‑instance‑template** (select from the drop-down menu) + - {{}}**Instance template**{{}} – {{}}**nginx-plus-lb-instance-template**{{}} (select from the drop-down menu) ### Updating and Testing the NGINX Plus Configuration -Update the NGINX Plus configuration on the two LB instances (**nginx‑plus‑lb‑instance‑group‑[a...z]**). It should list the internal IP addresses of the four application servers (two instances each of **nginx‑plus‑app‑1‑instance‑group‑[a...z]** and **nginx‑plus‑app‑2‑instance‑group‑[a...z]**). +Update the NGINX Plus configuration on the two LB instances ({{}}**nginx-plus-lb-instance-group-[a...z]**{{}}). It should list the internal IP addresses of the four application servers (two instances each of {{}}**nginx-plus-app-1-instance-group-[a...z]**{{}} and {{}}**nginx-plus-app-2-instance-group-[a...z]**{{}}). Repeat these instructions for both LB instances. @@ -860,10 +860,10 @@ Update the NGINX Plus configuration on the two LB instances (**nginx‑plus 1. Connect to the LB instance over SSH using the method of your choice. GCE provides a built-in mechanism: - - Navigate to the **Compute Engine > VM instances** tab. - - In the table, find the row for the instance. Click the triangle icon in the **Connect** column at the far right. Then, select a method (for example, **Open in browser window**). + - Navigate to the {{}}**Compute Engine > VM instances**{{}} tab. + - In the table, find the row for the instance. Click the triangle icon in the **Connect** column at the far right. Then, select a method (for example, {{}}**Open in browser window**{{}}). -2. In the SSH terminal, use your preferred text editor to edit **gce‑all‑active‑lb.conf**. 
Change the `server` directives in the `upstream` block to reference the internal IPs of the two **nginx‑plus‑app‑1‑instance‑group‑[a...z]** instances and the two **nginx‑plus‑app‑2‑instance‑group‑[a...z]** instances. You can check the addresses in the **Internal IP** column of the table on the **Compute Engine > VM instances** summary page. For example: +2. In the SSH terminal, use your preferred text editor to edit {{}}**gce-all-active-lb.conf**{{}}. Change the `server` directives in the `upstream` block to reference the internal IPs of the two {{}}**nginx-plus-app-1-instance-group-[a...z]**{{}} instances and the two {{}}**nginx-plus-app-2-instance-group-[a...z]**{{}} instances. You can check the addresses in the {{}}**Internal IP**{{}} column of the table on the {{}}**Compute Engine > VM instances**{{}} summary page. For example: ```nginx upstream upstream_app_pool { @@ -887,9 +887,9 @@ Update the NGINX Plus configuration on the two LB instances (**nginx‑plus nginx -s reload ``` -4. Verify that the four application instances are receiving traffic and responding. To do this, access the NGINX Plus live activity monitoring dashboard on the load-balancing instance (**nginx‑plus‑lb‑instance‑group‑[a...z]**). You can see the instance's external IP address on the **Compute Engine > VM instances** summary page in the **External IP** column of the table. +4. Verify that the four application instances are receiving traffic and responding. To do this, access the NGINX Plus live activity monitoring dashboard on the load-balancing instance ({{}}**nginx-plus-lb-instance-group-[a...z]**{{}}). You can see the instance's external IP address on the {{}}**Compute Engine > VM instances**{{}} summary page in the {{}}**External IP**{{}} column of the table. - **https://_LB‑external‑IP‑address_:8080/status.html** + {{}}**https://_LB-external-IP-address_:8080/status.html**{{}} 5. Verify that NGINX Plus is load balancing traffic among the four application instance groups. 
Do this by running this command on a separate client machine: @@ -904,50 +904,50 @@ Update the NGINX Plus configuration on the two LB instances (**nginx‑plus Set up a GCE network load balancer. It will distribute incoming client traffic to the NGINX Plus LB instances. First, reserve the static IP address the GCE network load balancer advertises to clients. -1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar. +1. Verify that the {{}}**NGINX Plus All-Active-LB**{{}} project is still selected in the Google Cloud Platform header bar. -2. Navigate to the **Networking > External IP addresses** tab. +2. Navigate to the {{}}**Networking > External IP addresses**{{}} tab. 3. Click the  Reserve static address  button. -4. On the **Reserve a static address** page that opens, modify or verify the fields as indicated: +4. On the {{}}**Reserve a static address**{{}} page that opens, modify or verify the fields as indicated: - - **Name** – **nginx‑plus‑network‑lb‑static‑ip** + - **Name** – {{}}**nginx-plus-network-lb-static-ip**{{}} - **Description** – **Static IP address for Network LB frontend to NGINX Plus LB instances** - **Type** – Click the **Regional** radio button (the default) - - **Region** – The GCP zone you specified when you created source instances (Step 1 of [Creating the First Application Instance from a VM Image](#source-vm-app-1) or Step 5 of [Creating the First Application Instance from a Prebuilt Image](#source-prebuilt)). We're using **us‑west1**. - - **Attached to** – **None** (the default) + - **Region** – The GCP region containing the zone you specified when you created source instances (Step 1 of [Creating the First Application Instance from a VM Image](#source-vm-app-1) or Step 5 of [Creating the First Application Instance from a Prebuilt Image](#source-prebuilt)). We're using {{}}**us-west1**{{}}. + - {{}}**Attached to**{{}} – **None** (the default) 5. Click the  Reserve  button. 
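The address reservation above can also be done from the CLI. A hypothetical `gcloud` sketch using the name and region from this guide:

```shell
# Sketch only: reserve the regional static IP that the GCE network load
# balancer advertises to clients (same name and region as this guide).
gcloud compute addresses create nginx-plus-network-lb-static-ip \
    --region us-west1

# Print the address that was assigned, for use in later steps.
gcloud compute addresses describe nginx-plus-network-lb-static-ip \
    --region us-west1 --format='value(address)'
```

The `describe` call is a scripting convenience; in the console the reserved address appears on the **External IP addresses** tab.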
Screenshot of the interface for reserving a static IP address for Google Compute Engine network load balancer. -6. Navigate to the **Networking > Load balancing** tab. +6. Navigate to the {{}}**Networking > Load balancing**{{}} tab. 7. Click the  Create load balancer  button. -8. On the **Load balancing** page that opens, click **Start configuration** in the **TCP Load Balancing** box. +8. On the {{}}**Load balancing**{{}} page that opens, click {{}}**Start configuration**{{}} in the {{}}**TCP Load Balancing**{{}} box. -9. On the page that opens, click the **From Internet to my VMs** and **No (TCP)** radio buttons (the defaults). +9. On the page that opens, click the {{}}**From Internet to my VMs**{{}} and {{}}**No (TCP)**{{}} radio buttons (the defaults). -10. Click the  Continue  button. The **New TCP load balancer** page opens. +10. Click the  Continue  button. The {{}}**New TCP load balancer**{{}} page opens. -11. In the **Name** field, type **nginx‑plus‑network‑lb‑frontend**. +11. In the **Name** field, type {{}}**nginx-plus-network-lb-frontend**{{}}. -12. Click **Backend configuration** in the left column to open the **Backend configuration** interface in the right column. Fill in the fields as indicated: +12. Click {{}}**Backend configuration**{{}} in the left column to open the {{}}**Backend configuration**{{}} interface in the right column. Fill in the fields as indicated: - - **Region** – The GCP region you specified in Step 4. We're using **us‑west1**. - - **Backends** – With **Select existing instance groups** selected, select **nginx‑plus‑lb‑instance‑group** from the drop-down menu - - **Backup pool** – **None** (the default) - - **Failover ratio** – **10** (the default) - - **Health check** – **nginx‑plus‑http‑health‑check** - - **Session affinity** – **Client IP** + - **Region** – The GCP region you specified in Step 4. We're using {{}}**us-west1**{{}}. 
+ - **Backends** – With {{}}**Select existing instance groups**{{}} selected, select {{}}**nginx-plus-lb-instance-group**{{}} from the drop-down menu + - {{}}**Backup pool**{{}} – **None** (the default) + - {{}}**Failover ratio**{{}} – **10** (the default) + - {{}}**Health check**{{}} – {{}}**nginx-plus-http-health-check**{{}} + - {{}}**Session affinity**{{}} – {{}}**Client IP**{{}} Screenshot of the interface for backend configuration of GCE network load balancer, used during deployment of NGINX Plus as the Google Cloud Platform load balancer. -13. Select **Frontend configuration** in the left column. This opens up the **Frontend configuration** interface on the right column. +13. Select {{}}**Frontend configuration**{{}} in the left column. This opens the {{}}**Frontend configuration**{{}} interface in the right column. -14. Create three **Protocol‑IP‑Port** tuples, each with: +14. Create three {{}}**Protocol-IP-Port**{{}} tuples, each with: - **Protocol** – **TCP** - **IP** – The address you reserved in Step 5, selected from the drop-down menu (if there is more than one address, select the one labeled in parentheses with the name you specified in Step 5) @@ -978,7 +978,7 @@ If load balancing is working properly, the unique **Server** field from the inde To verify that high availability is working: -1. Connect to one of the instances in the **nginx‑plus‑lb‑instance‑group** over SSH and run this command to force it offline: +1.
Connect to one of the instances in the {{}}**nginx-plus-lb-instance-group**{{}} over SSH and run this command to force it offline: ```shell iptables -A INPUT -p tcp --destination-port 80 -j DROP diff --git a/content/nginx/deployment-guides/load-balance-third-party/apache-tomcat.md b/content/nginx/deployment-guides/load-balance-third-party/apache-tomcat.md index df0ff8cb6..0cc3af9b2 100644 --- a/content/nginx/deployment-guides/load-balance-third-party/apache-tomcat.md +++ b/content/nginx/deployment-guides/load-balance-third-party/apache-tomcat.md @@ -171,7 +171,7 @@ http { } ``` -You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files **_function_‑http.conf**, this is an appropriate `include` directive: +You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files {{}}**_function_-http.conf**{{}}, this is an appropriate `include` directive: ```nginx http { @@ -294,7 +294,7 @@ To configure load balancing, first create a named _upstream group_, which lists 2. In the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), include two `location` blocks: - - The first one matches HTTPS requests in which the path starts with **/tomcat‑app/**, and proxies them to the **tomcat** upstream group we created in the previous step. + - The first one matches HTTPS requests in which the path starts with {{}}**/tomcat-app/**{{}}, and proxies them to the **tomcat** upstream group we created in the previous step. - The second one funnels all traffic to the first `location` block, by doing a temporary redirect of all requests for **"http://example.com/"**. 
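For reference, the two `location` blocks described above can be sketched as follows. This is a minimal illustration, not the guide's full configuration: the `tomcat` upstream name and the **/tomcat-app/** path come from the surrounding text, while the redirect target and status code are assumptions based on the described behavior.

```nginx
# In the 'server' block for HTTPS traffic
# Proxies requests whose path starts with /tomcat-app/ to the
# 'tomcat' upstream group created in the previous step.
location /tomcat-app/ {
    proxy_pass http://tomcat;
}

# Funnels requests for the site root into the first location block
# with a temporary (302) redirect.
location = / {
    return 302 /tomcat-app/;
}
```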
@@ -409,7 +409,7 @@ To enable basic caching in NGINX Open Source< Directive documentation: [proxy_cache_path](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_path) -2. In the `location` block that matches HTTPS requests in which the path starts with **/tomcat‑app/**, include the `proxy_cache` directive to reference the cache created in the previous step. +2. In the `location` block that matches HTTPS requests in which the path starts with {{}}**/tomcat-app/**{{}}, include the `proxy_cache` directive to reference the cache created in the previous step. ```nginx # In the 'server' block for HTTPS traffic @@ -440,11 +440,11 @@ HTTP/2 is fully supported in both NGINX Open - In NGINX Plus R8 and later, NGINX Plus supports HTTP/2 by default. (Support for SPDY is deprecated as of that release). Specifically: - In NGINX Plus R11 and later, the **nginx‑plus** package continues to support HTTP/2 by default, but the **nginx‑plus‑extras** package available in previous releases is deprecated by [dynamic modules](https://www.nginx.com/products/nginx/dynamic-modules/). + In NGINX Plus R11 and later, the {{}}**nginx-plus**{{}} package continues to support HTTP/2 by default, but the {{}}**nginx-plus-extras**{{}} package available in previous releases is deprecated by [dynamic modules](https://www.nginx.com/products/nginx/dynamic-modules/). - For NGINX Plus R8 through R10, the **nginx‑plus** and **nginx‑plus‑extras** packages support HTTP/2 by default. + For NGINX Plus R8 through R10, the {{}}**nginx-plus**{{}} and {{}}**nginx-plus-extras**{{}} packages support HTTP/2 by default. - If using NGINX Plus R7, you must install the **nginx‑plus‑http2** package instead of the **nginx‑plus** or **nginx‑plus‑extras** package. + If using NGINX Plus R7, you must install the {{}}**nginx-plus-http2**{{}} package instead of the {{}}**nginx-plus**{{}} or {{}}**nginx-plus-extras**{{}} package. 
To enable HTTP/2 support, add the `http2` directive in the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), so that it looks like this: @@ -636,7 +636,7 @@ Health checks are out-of-band HTTP req Because the `health_check` directive is placed in the `location` block, we can enable different health checks for each application. -1. In the `location` block that matches HTTPS requests in which the path starts with **/tomcat‑app/** (created in [Configuring Basic Load Balancing](#load-balancing-basic)), add the `health_check` directive. +1. In the `location` block that matches HTTPS requests in which the path starts with {{}}**/tomcat-app/**{{}} (created in [Configuring Basic Load Balancing](#load-balancing-basic)), add the `health_check` directive. Here we configure NGINX Plus to send an out-of-band request for the top‑level URI **/** (slash) to each of the servers in the **tomcat** upstream group every 2 seconds, which is more aggressive than the default 5‑second interval. If a server does not respond correctly, it is marked down and NGINX Plus stops sending requests to it until it passes five subsequent health checks in a row. We include the `match` parameter to define a nondefault set of health‑check tests. @@ -719,7 +719,7 @@ The quickest way to configure the module and the built‑in dashboard is to down Directive documentation: [include](https://nginx.org/en/docs/ngx_core_module.html#include) - If you are using the conventional configuration scheme and your existing `include` directives use the wildcard notation discussed in [Creating and Modifying Configuration Files](#config-files), you can either add a separate `include` directive for **status.conf** as shown above, or change the name of **status.conf** so it is captured by the wildcard in an existing `include` directive in the `http` block. 
For example, changing it to **status‑http.conf** means it is captured by the `include` directive for `*-http.conf`. + If you are using the conventional configuration scheme and your existing `include` directives use the wildcard notation discussed in [Creating and Modifying Configuration Files](#config-files), you can either add a separate `include` directive for **status.conf** as shown above, or change the name of **status.conf** so it is captured by the wildcard in an existing `include` directive in the `http` block. For example, changing it to {{}}**status-http.conf**{{}} means it is captured by the `include` directive for `*-http.conf`. 3. Comments in **status.conf** explain which directives you must customize for your deployment. In particular, the default settings in the sample configuration file allow anyone on any network to access the dashboard. We strongly recommend that you restrict access to the dashboard with one or more of the following methods: diff --git a/content/nginx/deployment-guides/load-balance-third-party/microsoft-exchange.md b/content/nginx/deployment-guides/load-balance-third-party/microsoft-exchange.md index 912302769..bfaabc179 100644 --- a/content/nginx/deployment-guides/load-balance-third-party/microsoft-exchange.md +++ b/content/nginx/deployment-guides/load-balance-third-party/microsoft-exchange.md @@ -371,7 +371,7 @@ To set up the conventional configuration scheme, perform these steps: Directive documentation: [include](https://nginx.org/en/docs/ngx_core_module.html#include) - You can also use wildcard notation to read all function‑specific files for either HTTP or TCP traffic into the appropriate context block. 
For example, if you name all HTTP configuration files **_function_‑http.conf** and all TCP configuration files **_function_‑stream.conf** (the filenames we specify in this section conform to this pattern), the wildcarded `include` directives are: + You can also use wildcard notation to read all function‑specific files for either HTTP or TCP traffic into the appropriate context block. For example, if you name all HTTP configuration files {{}}**_function_-http.conf**{{}} and all TCP configuration files {{}}**_function_-stream.conf**{{}} (the filenames we specify in this section conform to this pattern), the wildcarded `include` directives are: ```nginx http { @@ -383,9 +383,9 @@ To set up the conventional configuration scheme, perform these steps: } ``` -2. In the **/etc/nginx/conf.d** directory, create a new file called **exchange‑http.conf** for directives that pertain to Exchange HTTP and HTTPS traffic (or substitute the name you chose in Step 1). Copy in the directives from the `http` configuration block in the downloaded configuration file. Remember not to copy the first line (`http` `{`) or the closing curly brace (`}`) for the block, because the `http` block you created in Step 1 already has them. +2. In the **/etc/nginx/conf.d** directory, create a new file called {{}}**exchange-http.conf**{{}} for directives that pertain to Exchange HTTP and HTTPS traffic (or substitute the name you chose in Step 1). Copy in the directives from the `http` configuration block in the downloaded configuration file. Remember not to copy the first line (`http` `{`) or the closing curly brace (`}`) for the block, because the `http` block you created in Step 1 already has them. -3. Also in the **/etc/nginx/conf.d** directory, create a new file called **exchange‑stream.conf** for directives that pertain to Exchange TCP traffic (or substitute the name you chose in Step 1). Copy in the directives from the `stream` configuration block in the dowloaded configuration file. 
Again, do not copy the first line (`stream` `{`) or the closing curly brace (`}`). +3. Also in the **/etc/nginx/conf.d** directory, create a new file called {{}}**exchange-stream.conf**{{}} for directives that pertain to Exchange TCP traffic (or substitute the name you chose in Step 1). Copy in the directives from the `stream` configuration block in the downloaded configuration file. Again, do not copy the first line (`stream` `{`) or the closing curly brace (`}`). For reference purposes, the text of the full configuration files is included in this document: @@ -468,7 +468,7 @@ The directives in the top‑level `stream` configuration block configure TCP loa } ``` -3. This `server` block defines the virtual server that proxies traffic on port 993 to the **exchange‑imaps** upstream group configured in Step 1. +3. This `server` block defines the virtual server that proxies traffic on port 993 to the {{}}**exchange-imaps**{{}} upstream group configured in Step 1. ```nginx # In the 'stream' block @@ -481,7 +481,7 @@ The directives in the top‑level `stream` configuration block configure TCP loa Directive documentation: [listen](https://nginx.org/en/docs/stream/ngx_stream_core_module.html#listen), [proxy_pass](https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_pass), [server](https://nginx.org/en/docs/stream/ngx_stream_core_module.html#server), [status_zone](https://nginx.org/en/docs/http/ngx_http_status_module.html#status_zone) -4. This `server` block defines the virtual server that proxies traffic on port 25 to the **exchange‑smtp** upstream group configured in Step 2. If you wish to change the port number from 25 (for example, to 587), change the `listen` directive. +4. This `server` block defines the virtual server that proxies traffic on port 25 to the {{}}**exchange-smtp**{{}} upstream group configured in Step 2. If you wish to change the port number from 25 (for example, to 587), change the `listen` directive.
```nginx # In the 'stream' block @@ -615,11 +615,11 @@ HTTP/2 is fully supported in NGINX Plus R7NGINX Plus R8 and later, NGINX Plus supports HTTP/2 by default, and does not support SPDY: -- In NGINX Plus R11 and later, the **nginx‑plus** package continues to support HTTP/2 by default, but the **nginx‑plus‑extras** package available in previous releases is deprecated by [dynamic modules](https://www.nginx.com/products/nginx/dynamic-modules/). +- In NGINX Plus R11 and later, the {{}}**nginx-plus**{{}} package continues to support HTTP/2 by default, but the {{}}**nginx-plus-extras**{{}} package available in previous releases is deprecated by [dynamic modules](https://www.nginx.com/products/nginx/dynamic-modules/). -- For NGINX Plus R8 through R10, the **nginx‑plus** and **nginx‑plus‑extras** packages support HTTP/2 by default. +- For NGINX Plus R8 through R10, the {{}}**nginx-plus**{{}} and {{}}**nginx-plus-extras**{{}} packages support HTTP/2 by default. -If using NGINX Plus R7, you must install the **nginx‑plus‑http2** package instead of the **nginx‑plus** or **nginx‑plus‑extras** package. +If using NGINX Plus R7, you must install the {{}}**nginx-plus-http2**{{}} package instead of the {{}}**nginx-plus**{{}} or {{}}**nginx-plus-extras**{{}} package. To enable HTTP/2 support, add the `http2` directive in the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), so that it looks like this: @@ -926,7 +926,7 @@ Exchange CASs interact with various applications used by clients on different ty } ``` - - Mobile clients like iPhone and Android access the ActiveSync location (**/Microsoft‑Server‑ActiveSync**). + - Mobile clients like iPhone and Android access the ActiveSync location ({{}}**/Microsoft-Server-ActiveSync**{{}}). 
```nginx # In the 'server' block for HTTPS traffic @@ -1092,7 +1092,7 @@ The quickest way to configure the module and the built‑in dashboard is to down include conf.d/status.conf; ``` - If you are using the conventional configuration scheme and your existing `include` directives use the wildcard notation discussed in [Creating and Modifying Configuration Files](#config-files), you can either add a separate `include` directive for **status.conf** as shown above, or change the name of **status.conf** so it is captured by the wildcard in an existing `include` directive in the `http` block. For example, changing it to **status‑http.conf** means it is captured by the `include` directive for `*-http.conf`. + If you are using the conventional configuration scheme and your existing `include` directives use the wildcard notation discussed in [Creating and Modifying Configuration Files](#config-files), you can either add a separate `include` directive for **status.conf** as shown above, or change the name of **status.conf** so it is captured by the wildcard in an existing `include` directive in the `http` block. For example, changing it to {{}}**status-http.conf**{{}} means it is captured by the `include` directive for `*-http.conf`. Directive documentation: [include](https://nginx.org/en/docs/ngx_core_module.html#include) diff --git a/content/nginx/deployment-guides/load-balance-third-party/node-js.md b/content/nginx/deployment-guides/load-balance-third-party/node-js.md index c37183f64..93c8b64da 100644 --- a/content/nginx/deployment-guides/load-balance-third-party/node-js.md +++ b/content/nginx/deployment-guides/load-balance-third-party/node-js.md @@ -175,7 +175,7 @@ http { Directive documentation: [include](https://nginx.org/en/docs/ngx_core_module.html#include) -You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. 
For example, if you name all HTTP configuration files **_function_‑http.conf**, this is an appropriate `include` directive: +You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files {{}}**_function_-http.conf**{{}}, this is an appropriate `include` directive: ```nginx http { @@ -433,13 +433,13 @@ HTTP/2 is fully supported in both NGINX 1.9.5 and later, and NGINX Plus R7 and - If using NGINX Open Source, note that in version 1.9.5 and later the SPDY module is completely removed from the codebase and replaced with the [HTTP/2](https://nginx.org/en/docs/http/ngx_http_v2_module.html) module. After upgrading to version 1.9.5 or later, you can no longer configure NGINX Open Source to use SPDY. If you want to keep using SPDY, you need to compile NGINX Open Source from the sources in the [NGINX 1.8.x branch](https://nginx.org/en/download.html). -- If using NGINX Plus, in R11 and later the **nginx‑plus** package supports HTTP/2 by default, and the **nginx‑plus‑extras** package available in previous releases is deprecated by separate [dynamic modules](https://www.nginx.com/products/nginx/modules/) authored by NGINX. +- If using NGINX Plus, in R11 and later the {{}}**nginx-plus**{{}} package supports HTTP/2 by default, and the {{}}**nginx-plus-extras**{{}} package available in previous releases is deprecated by separate [dynamic modules](https://www.nginx.com/products/nginx/modules/) authored by NGINX. - In NGINX Plus R8 through R10, the **nginx‑plus** and **nginx‑plus‑extras** packages support HTTP/2 by default. + In NGINX Plus R8 through R10, the {{}}**nginx-plus**{{}} and {{}}**nginx-plus-extras**{{}} packages support HTTP/2 by default. In NGINX Plus R8 and later, NGINX Plus supports HTTP/2 by default, and does not support SPDY. 
- If using NGINX Plus R7, you must install the **nginx‑plus‑http2** package instead of the **nginx‑plus** or **nginx‑plus‑extras** package. + If using NGINX Plus R7, you must install the {{}}**nginx-plus-http2**{{}} package instead of the {{}}**nginx-plus**{{}} or {{}}**nginx-plus-extras**{{}} package. To enable HTTP/2 support, add the [http2](https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2) directive in the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), so that it looks like this: @@ -459,7 +459,7 @@ To verify that HTTP/2 translation is working, you can use the "HTTP/2 and SPDY i The full configuration for basic load balancing appears here for your convenience. It goes in the `http` context. The complete file is available for [download](https://www.nginx.com/resource/conf/nodejs-basic.conf) from the NGINX website. -We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of **/etc/nginx/conf.d/nodejs‑basic.conf**. +We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of {{}}**/etc/nginx/conf.d/nodejs-basic.conf**{{}}. ```nginx proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m; @@ -785,9 +785,9 @@ Parameter documentation: [service](https://nginx.org/en/docs/http/ngx_http_upstr The full configuration for enhanced load balancing appears here for your convenience. It goes in the `http` context. 
The complete file is available for [download](https://www.nginx.com/resource/conf/nodejs-enhanced.conf) from the NGINX website. -We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – namely, add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of **/etc/nginx/conf.d/nodejs‑enhanced.conf**. +We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – namely, add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of {{}}**/etc/nginx/conf.d/nodejs-enhanced.conf**{{}}. -**Note:** The `api` block in this configuration summary and the [downloadable](https://www.nginx.com/resource/conf/nodejs-enhanced.conf) **nodejs‑enhanced.conf** file is for the [API method](#reconfiguration-api) of dynamic reconfiguration. If you want to use the [DNS method](#reconfiguration-dns) instead, make the appropriate changes to the block. (You can also remove or comment out the directives for the NGINX Plus API in that case, but they do not conflict with using the DNS method and enable features other than dynamic reconfiguration.) +**Note:** The `api` block in this configuration summary and the [downloadable](https://www.nginx.com/resource/conf/nodejs-enhanced.conf) {{}}**nodejs-enhanced.conf**{{}} file is for the [API method](#reconfiguration-api) of dynamic reconfiguration. If you want to use the [DNS method](#reconfiguration-dns) instead, make the appropriate changes to the block. 
(You can also remove or comment out the directives for the NGINX Plus API in that case, but they do not conflict with using the DNS method and enable features other than dynamic reconfiguration.) ```nginx proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m; diff --git a/content/nginx/deployment-guides/load-balance-third-party/oracle-e-business-suite.md b/content/nginx/deployment-guides/load-balance-third-party/oracle-e-business-suite.md index 6e456bacd..47466094e 100644 --- a/content/nginx/deployment-guides/load-balance-third-party/oracle-e-business-suite.md +++ b/content/nginx/deployment-guides/load-balance-third-party/oracle-e-business-suite.md @@ -322,7 +322,7 @@ http { Directive documentation: [include](https://nginx.org/en/docs/ngx_core_module.html#include) -You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files **_function_‑http.conf**, this is an appropriate include directive: +You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files {{}}**_function_-http.conf**{{}}, this is an appropriate include directive: ```nginx http { @@ -505,13 +505,13 @@ HTTP/2 is fully supported in both NGINX 1.9.5 and later, and NGINX Plus R7 and - If using open source NGINX, note that in version 1.9.5 and later the SPDY module is completely removed from the NGINX codebase and replaced with the [HTTP/2](https://nginx.org/en/docs/http/ngx_http_v2_module.html) module. After upgrading to version 1.9.5 or later, you can no longer configure NGINX to use SPDY. If you want to keep using SPDY, you need to compile NGINX from the sources in the [NGINX 1.8 branch](https://nginx.org/en/download.html). 
-- If using NGINX Plus, in R11 and later the **nginx‑plus** package supports HTTP/2 by default, and the **nginx‑plus‑extras** package available in previous releases is deprecated by separate [dynamic modules](https://www.nginx.com/products/nginx/modules/) authored by NGINX. +- If using NGINX Plus, in R11 and later the {{}}**nginx-plus**{{}} package supports HTTP/2 by default, and the {{}}**nginx-plus-extras**{{}} package available in previous releases is deprecated by separate [dynamic modules](https://www.nginx.com/products/nginx/modules/) authored by NGINX. - In NGINX Plus R8 through R10, the **nginx‑plus** and **nginx‑plus‑extras** packages support HTTP/2 by default. + In NGINX Plus R8 through R10, the {{}}**nginx-plus**{{}} and {{}}**nginx-plus-extras**{{}} packages support HTTP/2 by default. In NGINX Plus R8 and later, NGINX Plus supports HTTP/2 by default, and does not support SPDY. - If using NGINX Plus R7, you must install the **nginx‑plus‑http2** package instead of the **nginx‑plus** or **nginx‑plus‑extras** package. + If using NGINX Plus R7, you must install the {{}}**nginx-plus-http2**{{}} package instead of the {{}}**nginx-plus**{{}} or {{}}**nginx-plus-extras**{{}} package. To enable HTTP/2 support, add the `http2` directive in the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), so that it looks like this: diff --git a/content/nginx/deployment-guides/load-balance-third-party/oracle-weblogic-server.md b/content/nginx/deployment-guides/load-balance-third-party/oracle-weblogic-server.md index de3b44837..872e44ebf 100644 --- a/content/nginx/deployment-guides/load-balance-third-party/oracle-weblogic-server.md +++ b/content/nginx/deployment-guides/load-balance-third-party/oracle-weblogic-server.md @@ -173,7 +173,7 @@ http { } ``` -You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. 
For example, if you name all HTTP configuration files **_function_‑http.conf**, this is an appropriate include directive: +You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files {{}}**_function_-http.conf**{{}}, this is an appropriate include directive: ```nginx http { @@ -299,7 +299,7 @@ By putting NGINX Open Source or NGINX Plus in front of WebLogic Server servers 2. In the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), include two `location` blocks: - - The first one matches HTTPS requests in which the path starts with **/weblogic‑app/**, and proxies them to the **weblogic** upstream group we created in the previous step. + - The first one matches HTTPS requests in which the path starts with {{}}**/weblogic-app/**{{}}, and proxies them to the **weblogic** upstream group we created in the previous step. - The second one funnels all traffic to the first `location` block, by doing a temporary redirect of all requests for **"http://example.com/"**. @@ -414,7 +414,7 @@ To create a very simple caching configuration: Directive documentation: [proxy_cache_path](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_path) -2. In the `location` block that matches HTTPS requests in which the path starts with **/weblogic‑app/**, include the `proxy_cache` directive to reference the cache created in the previous step. +2. In the `location` block that matches HTTPS requests in which the path starts with {{}}**/weblogic-app/**{{}}, include the `proxy_cache` directive to reference the cache created in the previous step. 
```nginx # In the 'server' block for HTTPS traffic @@ -443,13 +443,13 @@ HTTP/2 is fully supported in both NGINX 1.9.5 and later, and NGINX Plus R7 and - If using NGINX Open Source, note that in version 1.9.5 and later the SPDY module is completely removed from the codebase and replaced with the [HTTP/2](https://nginx.org/en/docs/http/ngx_http_v2_module.html) module. After upgrading to version 1.9.5 or later, you can no longer configure NGINX Open Source to use SPDY. If you want to keep using SPDY, you need to compile NGINX Open Source from the sources in the [NGINX 1.8.x branch](https://nginx.org/en/download.html). -- If using NGINX Plus, in R11 and later the **nginx‑plus** package supports HTTP/2 by default, and the **nginx‑plus‑extras** package available in previous releases is deprecated by separate [dynamic modules](https://www.nginx.com/products/nginx/modules/) authored by NGINX. +- If using NGINX Plus, in R11 and later the {{}}**nginx-plus**{{}} package supports HTTP/2 by default, and the {{}}**nginx-plus-extras**{{}} package available in previous releases is deprecated by separate [dynamic modules](https://www.nginx.com/products/nginx/modules/) authored by NGINX. - In NGINX Plus R8 through R10, the **nginx‑plus** and **nginx‑plus‑extras** packages support HTTP/2 by default. + In NGINX Plus R8 through R10, the {{}}**nginx-plus**{{}} and {{}}**nginx-plus-extras**{{}} packages support HTTP/2 by default. In NGINX Plus R8 and later, NGINX Plus supports HTTP/2 by default, and does not support SPDY. - If using NGINX Plus R7, you must install the **nginx‑plus‑http2** package instead of the **nginx‑plus** or **nginx‑plus‑extras** package. + If using NGINX Plus R7, you must install the {{}}**nginx-plus-http2**{{}} package instead of the {{}}**nginx-plus**{{}} or {{}}**nginx-plus-extras**{{}} package. 
To enable HTTP/2 support, add the `http2` directive in the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), so that it looks like this: @@ -601,7 +601,7 @@ Health checks are out‑of‑band HTTP requests sent to a server at fixed interv Because the `health_check` directive is placed in the `location` block, we can enable different health checks for each application. -1. In the `location` block that matches HTTPS requests in which the path starts with **/weblogic‑app/** (created in [Configuring Basic Load Balancing](#load-balancing-basic)), add the `health_check` directive. +1. In the `location` block that matches HTTPS requests in which the path starts with {{}}**/weblogic-app/**{{}} (created in [Configuring Basic Load Balancing](#load-balancing-basic)), add the `health_check` directive. Here we configure NGINX Plus to send an out‑of‑band request for the URI **/benefits** to each of the servers in the **weblogic** upstream group every 5 seconds (the default frequency). If a server does not respond correctly, it is marked down and NGINX Plus stops sending requests to it until it passes a subsequent health check. We include the `match` parameter to the `health_check` directive to define a nondefault set of health‑check tests. @@ -814,7 +814,7 @@ To enable dynamic reconfiguration of your upstream group of WebLogic Server app The full configuration for enhanced load balancing appears here for your convenience. It goes in the `http` context. The complete file is available for [download](https://www.nginx.com/resource/conf/weblogic-enhanced.conf) from the NGINX website. 
-We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of **/etc/nginx/conf.d/weblogic‑enhanced.conf**. +We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of {{}}**/etc/nginx/conf.d/weblogic-enhanced.conf**{{}}. ```nginx proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m; diff --git a/content/nginx/deployment-guides/load-balance-third-party/wildfly.md b/content/nginx/deployment-guides/load-balance-third-party/wildfly.md index 2e92a9243..92c0d72b4 100644 --- a/content/nginx/deployment-guides/load-balance-third-party/wildfly.md +++ b/content/nginx/deployment-guides/load-balance-third-party/wildfly.md @@ -169,7 +169,7 @@ http { Directive documentation: [include](https://nginx.org/en/docs/ngx_core_module.html#include) -You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files **_function_‑http.conf**, this is an appropriate include directive: +You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. 
For example, if you name all HTTP configuration files {{}}**_function_-http.conf**{{}}, this is an appropriate include directive: ```nginx http { @@ -429,9 +429,9 @@ HTTP/2 is fully supported in both NGINX Open In [NGINX Plus R11]({{< ref "/nginx/releases.md#r11" >}}) and later, the **nginx-plus** package continues to support HTTP/2 by default, but the **nginx-plus-extras** package available in previous releases is deprecated and replaced by [dynamic modules]({{< ref "/nginx/admin-guide/dynamic-modules/dynamic-modules.md" >}}). - For NGINX Plus R8 through R10, the **nginx‑plus** and **nginx‑plus‑extras** packages support HTTP/2 by default. + For NGINX Plus R8 through R10, the {{}}**nginx-plus**{{}} and {{}}**nginx-plus-extras**{{}} packages support HTTP/2 by default. - If using NGINX Plus R7, you must install the **nginx‑plus‑http2** package instead of the **nginx‑plus** or **nginx‑plus‑extras** package. + If using NGINX Plus R7, you must install the {{}}**nginx-plus-http2**{{}} package instead of the {{}}**nginx-plus**{{}} or {{}}**nginx-plus-extras**{{}} package. To enable HTTP/2 support, add the `http2` directive in the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), so that it looks like this: @@ -793,7 +793,7 @@ The full configuration for enhanced load balancing appears here for your conveni We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of /etc/nginx/conf.d/jboss-enhanced.conf. 
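The include approach described above can be sketched as follows, using the file name from this guide; the rest of the main **nginx.conf** is omitted.

```nginx
# Main /etc/nginx/nginx.conf (sketch): read the enhanced load-balancing
# directives in from the conf.d directory instead of pasting them inline.
http {
    include /etc/nginx/conf.d/jboss-enhanced.conf;
}
```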
-**Note:** The `api` block in this configuration summary and the [downloadable](https://www.nginx.com/resource/conf/jboss-enhanced.conf) **jboss‑enhanced.conf** file is for the [API method](#reconfiguration-api) of dynamic reconfiguration. If you want to use the [DNS method](#reconfiguration-dns) instead, make the appropriate changes to the block. (You can also remove or comment out the directives for the NGINX Plus API in that case, but they do not conflict with using the DNS method and enable features other than dynamic reconfiguration.) +**Note:** The `api` block in this configuration summary and the [downloadable](https://www.nginx.com/resource/conf/jboss-enhanced.conf) {{}}**jboss-enhanced.conf**{{}} file is for the [API method](#reconfiguration-api) of dynamic reconfiguration. If you want to use the [DNS method](#reconfiguration-dns) instead, make the appropriate changes to the block. (You can also remove or comment out the directives for the NGINX Plus API in that case, but they do not conflict with using the DNS method and enable features other than dynamic reconfiguration.) ```nginx proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m; diff --git a/content/nginx/deployment-guides/microsoft-azure/high-availability-standard-load-balancer.md b/content/nginx/deployment-guides/microsoft-azure/high-availability-standard-load-balancer.md index 792d14bbb..e804a2ca2 100644 --- a/content/nginx/deployment-guides/microsoft-azure/high-availability-standard-load-balancer.md +++ b/content/nginx/deployment-guides/microsoft-azure/high-availability-standard-load-balancer.md @@ -71,7 +71,7 @@ These instructions assume you have the following: - An Azure [account](https://azure.microsoft.com/en-us/free/). - An Azure [subscription](https://docs.microsoft.com/en-us/azure/azure-glossary-cloud-terminology?toc=/azure/virtual-network/toc.json#subscription). 
-- An Azure [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview#resource-groups), preferably dedicated to the HA solution. In this guide, it is called **NGINX‑Plus‑HA**. +- An Azure [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview#resource-groups), preferably dedicated to the HA solution. In this guide, it is called {{}}**NGINX-Plus-HA**{{}}. - An Azure [virtual network](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview). - Six Azure VMs, four running NGINX Open Source and two running NGINX Plus (in each region where you deploy the solution). You need a paid or trial subscription for each NGINX Plus instance. @@ -100,10 +100,10 @@ With NGINX Open Source and NGINX Plus installed and configured on the Azure VMs 4. On the **Create load balancer** page that opens (to the **Basics** tab), enter the following values: - - **Subscription** – Name of your subscription (**NGINX‑Plus‑HA‑subscription** in this guide) - - **Resource group** – Name of your resource group (**NGINX‑Plus‑HA** in this guide) + - **Subscription** – Name of your subscription ({{}}**NGINX-Plus-HA-subscription**{{}} in this guide) + - **Resource group** – Name of your resource group ({{}}**NGINX-Plus-HA**{{}} in this guide) - **Name** – Name of your Standard Load Balancer (**lb** in this guide) - - **Region** – Name selected from the drop‑down menu (**(US) West US 2** in this guide) + - **Region** – Name selected from the drop‑down menu ({{}}**(US) West US 2**{{}} in this guide) - **Type** – **Public** - **SKU** – **Standard** - **Public IP address** – **Create new** @@ -139,21 +139,21 @@ With NGINX Open Source and NGINX Plus installed and configured on the Azure VMs Screenshot of selecting 'Backend pools' on details page for an Azure Standard Load Balancer -4. On the **lb | Backend Pools** page that opens, click **+ Add** in the upper left corner of the main pane. +4. 
On the {{}}**lb | Backend Pools**{{}} page that opens, click **+ Add** in the upper left corner of the main pane. -5. On the **Add backend pool** page that opens, enter the following values, then click the  Add  button: +5. On the {{}}**Add backend pool**{{}} page that opens, enter the following values, then click the  Add  button: - **Name** – Name of the new backend pool (**lb\_backend_pool** in this guide) - **IP version** – **IPv4** - - **Virtual machines** – **ngx‑plus‑1** and **ngx‑plus‑2** + - **Virtual machines** – {{}}**ngx-plus-1**{{}} and {{}}**ngx-plus-2**{{}} Screenshot of Azure 'Add backend pool' page for Standard Load Balancer After a few moments the virtual machines appear in the new backend pool. -6. Click **Health probes** in the left navigation column, and then **+ Add** in the upper left corner of the main pane on the **lb | Health probes** page that opens. +6. Click **Health probes** in the left navigation column, and then **+ Add** in the upper left corner of the main pane on the {{}}**lb | Health probes**{{}} page that opens. -7. On the **Add health probe** page that opens, enter the following values, then click the  OK  button. +7. On the {{}}**Add health probe**{{}} page that opens, enter the following values, then click the  OK  button. - **Name** – Name of the new health probe (**lb\_probe** in this guide) - **Protocol** – **HTTP** or **HTTPS** @@ -164,20 +164,20 @@ With NGINX Open Source and NGINX Plus installed and configured on the Azure VMs Screenshot of Azure 'Add health probe' page for Standard Load Balancer - After a few moments the new probe appears in the table on the **lb | Health probes** page. This probe queries the NGINX Plus landing page every five seconds to check whether NGINX Plus is running. + After a few moments the new probe appears in the table on the {{}}**lb | Health probes**{{}} page. This probe queries the NGINX Plus landing page every five seconds to check whether NGINX Plus is running. -8. 
Click **Load balancing rules** in the left navigation column, and then **+ Add** in the upper left corner of the main pane on the **lb | Load balancing rules** page that opens. +8. Click {{}}**Load balancing rules**{{}} in the left navigation column, and then **+ Add** in the upper left corner of the main pane on the {{}}**lb | Load balancing rules**{{}} page that opens. -9. On the **Add load balancing rule** page that opens, enter or select the following values, then click the  OK  button. +9. On the {{}}**Add load balancing rule**{{}} page that opens, enter or select the following values, then click the  OK  button. - **Name** – Name of the rule (**lb\_rule** in this guide) - **IP version** – **IPv4** - - **Frontend IP address** – The Standard Load Balancer's public IP address, as reported in the **Public IP address** field on the **Overview** tag of the Standard Load Balancer's page (for an example, see [Step 3](#slb-configure-lb-overview) above); in this guide it is **51.143.107.x (LoadBalancerFrontEnd)** + - **Frontend IP address** – The Standard Load Balancer's public IP address, as reported in the {{}}**Public IP address**{{}} field on the **Overview** tab of the Standard Load Balancer's page (for an example, see [Step 3](#slb-configure-lb-overview) above); in this guide it is {{}}**51.143.107.x (LoadBalancerFrontEnd)**{{}} - **Protocol** – **TCP** - **Port** – **80** - **Backend port** – **80** - **Backend pool** – **lb_backend** - - **Health probe** – **lb_probe (HTTP:80)** + - **Health probe** – {{}}**lb_probe (HTTP:80)**{{}} - **Session persistence** – **None** - **Idle timeout (minutes)** – **4** - **TCP reset** – **Disabled** @@ -186,14 +186,14 @@ With NGINX Open Source and NGINX Plus installed and configured on the Azure VMs Screenshot of Azure 'Add load balancing rule' page for Standard Load Balancer - After a few moments the new rule appears in the table on the **lb | Load balancing rules** page. 
+ After a few moments the new rule appears in the table on the {{}}**lb | Load balancing rules**{{}} page. ### Verifying Correct Operation -1. To verify that Standard Load Balancer is working correctly, open a new browser window and navigate to the IP address for the Standard Load Balancer front end, which appears in the **Public IP address** field on the **Overview** tab of the load balancer's page on the dashboard (for an example, see [Step 3](#slb-configure-lb-overview) of _Configuring the Standard Load Balancer_). +1. To verify that Standard Load Balancer is working correctly, open a new browser window and navigate to the IP address for the Standard Load Balancer front end, which appears in the {{}}**Public IP address**{{}} field on the **Overview** tab of the load balancer's page on the dashboard (for an example, see [Step 3](#slb-configure-lb-overview) of _Configuring the Standard Load Balancer_). -2. The default **Welcome to nginx!** page indicates that the Standard Load Balancer has successfully forwarded a request to one of the two NGINX Plus instances. +2. The default {{}}**Welcome to nginx!**{{}} page indicates that the Standard Load Balancer has successfully forwarded a request to one of the two NGINX Plus instances. Screenshot of 'Welcome to nginx!' page that verifies correct configuration of an Azure Standard Load Balancer @@ -210,7 +210,7 @@ Once you’ve tested that the Standard Load Balancer has been correctly deployed In this case, you need to set up Azure Traffic Manager for DNS‑based global server load balancing (GSLB) among the regions. This involves creating a DNS name for the Standard Load Balancer and registering it as an endpoint in Traffic Manager. -1. Navigate to the **Public IP addresses** page. (One way is to enter **Public IP addresses** in the search field of the Azure title bar and select that value in the **Services** section of the resulting drop‑down menu.) +1. Navigate to the {{}}**Public IP addresses**{{}} page. 
(One way is to enter {{}}**Public IP addresses**{{}} in the search field of the Azure title bar and select that value in the **Services** section of the resulting drop‑down menu.) 2. Click the name of the Standard Load Balancer's public IP address in the **Name** column of the table (here it is **public\_ip_lb**). @@ -218,33 +218,33 @@ In this case, you need to set up Azure Traffic Manager for DNS‑based global se 3. On the **public\_ip_lb** page that opens, click **Configuration** in the left navigation column. -4. Enter the DNS name for the Standard Load Balancer in the **DNS name label** field. In this guide, we're accepting the default, **public‑ip‑dns**. +4. Enter the DNS name for the Standard Load Balancer in the {{}}**DNS name label**{{}} field. In this guide, we're accepting the default, {{}}**public-ip-dns**{{}}. Screenshot of Azure page for public IP address of a Standard Load Balancer -5. Navigate to the **Traffic Manager profiles** tab. (One way is to enter **Traffic Manager profiles** in the search field of the Azure title bar and select that value in the **Services** section of the resulting drop‑down menu.) +5. Navigate to the {{}}**Traffic Manager profiles**{{}} tab. (One way is to enter {{}}**Traffic Manager profiles**{{}} in the search field of the Azure title bar and select that value in the **Services** section of the resulting drop‑down menu.) 6. Click **+ Add** in the upper left corner of the page. -7. On the **Create Traffic Manager profile** page that opens, enter or select the following values and click the  Create  button. +7. On the {{}}**Create Traffic Manager profile**{{}} page that opens, enter or select the following values and click the  Create  button. 
- **Name** – Name of the profile (**ngx** in this guide) - **Routing method** – **Performance** - - **Subscription** – **NGINX‑Plus‑HA‑subscription** in this guide - - **Resource group** – **NGINX‑Plus‑HA** in this guide + - **Subscription** – {{}}**NGINX-Plus-HA-subscription**{{}} in this guide + - **Resource group** – {{}}**NGINX-Plus-HA**{{}} in this guide _Azure-create-lb-create-Traffic-Manager-profile_ Screenshot of Azure 'Create Traffic Manager profile' page -8. It takes a few moments to create the profile. When it appears in the table on the **Traffic Manager profiles** page, click its name in the **Name** column. +8. It takes a few moments to create the profile. When it appears in the table on the {{}}**Traffic Manager profiles**{{}} page, click its name in the **Name** column. 9. On the **ngx** page that opens, click **Endpoints** in the left navigation column, then **+ Add** in the main part of the page. 10. On the **Add endpoint** window that opens, enter or select the following values and click the  Add  button. 
- - **Type** – **Azure endpoint** - - **Name** – Endpoint name (**ep‑lb‑west‑us** in this guide) - - **Target resource type** – **Public IP address** + - **Type** – {{}}**Azure endpoint**{{}} + - **Name** – Endpoint name ({{}}**ep-lb-west-us**{{}} in this guide) + - **Target resource type** – {{}}**Public IP address**{{}} - **Public IP address** – Name of the Standard Load Balancer's public IP address (**public\_ip_lb (51.143.107.x)** in this guide) - **Custom Header settings** – None in this guide diff --git a/content/nginx/deployment-guides/microsoft-azure/virtual-machines-for-nginx.md b/content/nginx/deployment-guides/microsoft-azure/virtual-machines-for-nginx.md index 487983211..ebbc17c53 100644 --- a/content/nginx/deployment-guides/microsoft-azure/virtual-machines-for-nginx.md +++ b/content/nginx/deployment-guides/microsoft-azure/virtual-machines-for-nginx.md @@ -23,7 +23,7 @@ These instructions assume you have: - An Azure [account](https://azure.microsoft.com/en-us/free/). - An Azure [subscription](https://docs.microsoft.com/en-us/azure/azure-glossary-cloud-terminology?toc=/azure/virtual-network/toc.json#subscription). -- An Azure [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#resource-groups). In this guide, it is called **NGINX‑Plus‑HA**. +- An Azure [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#resource-groups). In this guide, it is called {{}}**NGINX-Plus-HA**{{}}. - An Azure [virtual network](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview). - If using the instructions in [Automating Installation with Ansible](#automate-ansible), basic Linux system administration skills, including installation of Linux software from vendor‑supplied packages, and file creation and editing. @@ -48,25 +48,25 @@ In addition, to install NGINX software by following the linked instructions, you 4. 
In the **Create a virtual machine** window that opens, enter the requested information on the **Basics** tab. In this guide, we're using the following values: - - **Subscription** – **NGINX‑Plus‑HA‑subscription** - - **Resource group** – **NGINX‑Plus‑HA** - - **Virtual machine name** – **ngx‑plus‑1** + - **Subscription** – {{}}**NGINX-Plus-HA-subscription**{{}} + - **Resource group** – {{}}**NGINX-Plus-HA**{{}} + - **Virtual machine name** – {{}}**ngx-plus-1**{{}} - The value **ngx‑plus‑1** is one of the six used for VMs in [Active-Active HA for NGINX Plus on Microsoft Azure Using the Azure Standard Load Balancer]({{< ref "high-availability-standard-load-balancer.md" >}}). See Step 7 below for the other instance names. + The value {{}}**ngx-plus-1**{{}} is one of the six used for VMs in [Active-Active HA for NGINX Plus on Microsoft Azure Using the Azure Standard Load Balancer]({{< ref "high-availability-standard-load-balancer.md" >}}). See Step 7 below for the other instance names. - - **Region** – **(US) West US 2** - - **Availability options** – **No infrastructure redundancy required** + - **Region** – {{}}**(US) West US 2**{{}} + - **Availability options** – {{}}**No infrastructure redundancy required**{{}} This option is sufficient for a demo like the one in this guide. For production deployments, you might want to select a more robust option; we recommend deploying a copy of each VM in a different Availability Zone. For more information, see the [Azure documentation](https://docs.microsoft.com/en-us/azure/availability-zones/az-overview). 
- - **Image** – **Ubuntu Server 18.04 LTS** + - **Image** – {{}}**Ubuntu Server 18.04 LTS**{{}} - **Azure Spot instance** – **No** - - **Size** – **B1s** (click Select size to access the **Select a VM size** window, click the **B1s** row, and click the  Select  button to return to the **Basics** tab) - - **Authentication type** – **SSH public key** + - **Size** – **B1s** (click Select size to access the {{}}**Select a VM size**{{}} window, click the **B1s** row, and click the  Select  button to return to the **Basics** tab) + - **Authentication type** – {{}}**SSH public key**{{}} - **Username** – **nginx_azure** - - **SSH public key source** – **Generate new key pair** (the other choices on the drop‑down menu are to use an existing key stored in Azure or an existing public key) + - **SSH public key source** – {{}}**Generate new key pair**{{}} (the other choices on the drop‑down menu are to use an existing key stored in Azure or an existing public key) - **Key pair name** – **nginx_key** - - **Public inbound ports** – **Allow selected ports** - - **Select inbound ports** – Select from the drop-down menu: **SSH (22)** and **HTTP (80)**, plus **HTTPS (443)** if you plan to configure NGINX and NGINX Plus for SSL/TLS + - **Public inbound ports** – {{}}**Allow selected ports**{{}} + - **Select inbound ports** – Select from the drop-down menu: {{}}**SSH (22)**{{}} and {{}}**HTTP (80)**{{}}, plus {{}}**HTTPS (443)**{{}} if you plan to configure NGINX and NGINX Plus for SSL/TLS screenshot of 'Basics' tab on Azure 'Create a virtual machine' page @@ -75,7 +75,7 @@ In addition, to install NGINX software by following the linked instructions, you For simplicity, we recommend allocating **Standard** public IP addresses for all six VMs used in the deployment. 
At the time of initial publication of this guide, the hourly cost for six such VMs was only $0.008 more than for six VMs with Basic addresses; for current pricing, see the [Microsoft documentation](https://azure.microsoft.com/en-us/pricing/details/ip-addresses/). - To allocate a **Standard** public IP address, open the **Networking** tab on the **Create a virtual machine** window. Click Create new below the **Public IP** field. In the **Create public IP address** column that opens at right, click the **Standard** radio button under **SKU**. You can change the value in the **Name** field; here we are accepting the default created by Azure, **ngx‑plus‑1‑ip**. Click the ** OK ** button. + To allocate a **Standard** public IP address, open the **Networking** tab on the **Create a virtual machine** window. Click Create new below the **Public IP** field. In the {{}}**Create public IP address**{{}} column that opens at right, click the **Standard** radio button under **SKU**. You can change the value in the **Name** field; here we are accepting the default created by Azure, {{}}**ngx-plus-1-ip**{{}}. Click the  OK  button. screenshot of 'Networking' tab on Azure 'Create a virtual machine' page @@ -87,7 +87,7 @@ In addition, to install NGINX software by following the linked instructions, you To change any settings, open the appropriate tab. If the settings are correct, click the  Create  button. - If you chose in [Step 4](#create-vm_Basics) to generate a new key pair, a **Generate new key pair** window pops up. Click the  Download key and create private resource  button. + If you chose in [Step 4](#create-vm_Basics) to generate a new key pair, a {{}}**Generate new key pair**{{}} window pops up. Click the  Download key and create private resource  button. 
screenshot of validation message on Azure 'Create a virtual machine' page @@ -107,7 +107,7 @@ In addition, to install NGINX software by following the linked instructions, you For **ngx-plus-2**, it is probably simplest to repeat Steps 2 through 6 above (or purchase a second prebuilt VM in the [Microsoft Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=NGINX%20Plus)). - For the NGINX Open Source VMs, you can create them individually using Steps 2 through 6. Alternatively, create them based on an Azure image. To do so, follow Steps 2 through 6 above to create a source VM (naming it **nginx‑oss**), [install the NGINX Open Source software](#install-nginx) on it, and then follow the instructions in [Optional: Creating an NGINX Open Source Image](#create-nginx-oss-image). + For the NGINX Open Source VMs, you can create them individually using Steps 2 through 6. Alternatively, create them based on an Azure image. To do so, follow Steps 2 through 6 above to create a source VM (naming it {{}}**nginx-oss**{{}}), [install the NGINX Open Source software](#install-nginx) on it, and then follow the instructions in [Optional: Creating an NGINX Open Source Image](#create-nginx-oss-image). ## Connecting to a Virtual Machine @@ -118,7 +118,7 @@ To install and configure NGINX Open Source or NGINX Plus on a VM, you need to o screenshot of Azure 'Virtual machines' page with list of VMs -2. On the page that opens (**ngx‑plus‑1** in this guide), note the VM's public IP address (in the **Public IP address** field in the right column). +2. On the page that opens ({{}}**ngx-plus-1**{{}} in this guide), note the VM's public IP address (in the {{}}**Public IP address**{{}} field in the right column). 
screenshot of details page for 'ngx-plus-1' VM in Azure @@ -130,7 +130,7 @@ To install and configure NGINX Open Source or NGINX Plus on a VM, you need to o where - - `` is the name of the file containing the private key paired with the public key you entered in the **SSH public key** field in Step 4 of _Creating a Microsoft Azure Virtual Machine_. + - `` is the name of the file containing the private key paired with the public key you entered in the {{}}**SSH public key**{{}} field in Step 4 of _Creating a Microsoft Azure Virtual Machine_. - `` is the name you entered in the **Username** field in Step 4 of _Creating a Microsoft Azure Virtual Machine_ (in this guide it is **nginx_azure**). - `` is the address you looked up in the previous step. @@ -169,7 +169,7 @@ NGINX publishes a unified Ansible role for NGINX Open Source and NGINX Plus on ansible-galaxy install nginxinc.nginx ``` -4. (NGINX Plus only) Copy the **nginx‑repo.key** and **nginx‑repo.crt** files provided by NGINX to **~/.ssh/ngx‑certs/**. +4. (NGINX Plus only) Copy the {{}}**nginx-repo.key**{{}} and {{}}**nginx-repo.crt**{{}} files provided by NGINX to {{}}**~/.ssh/ngx-certs/**{{}}. 5. Create a file called **playbook.yml** with the following contents: @@ -196,7 +196,7 @@ To streamline the process of installing NGINX Open Source on multiple VMs, you c 2. Navigate to the **Virtual machines** page, if you are not already there. -2. In the list of VMs, click the name of the one to use as a source image (in this guide, we have called it **ngx‑oss**). Remember that NGINX Open Source needs to be installed on it already. +2. In the list of VMs, click the name of the one to use as a source image (in this guide, we have called it {{}}**ngx-oss**{{}}). Remember that NGINX Open Source needs to be installed on it already. 3. On the page that opens, click the **Capture** icon in the top navigation bar. 
@@ -207,10 +207,10 @@ To streamline the process of installing NGINX Open Source on multiple VMs, you c Then select the following values: - **Name** – Keep the current value. - - **Resource group** – Select the appropriate resource group from the drop‑down menu. Here it is **NGINX‑Plus‑HA**. + - **Resource group** – Select the appropriate resource group from the drop‑down menu. Here it is {{}}**NGINX-Plus-HA**{{}}. - **Automatically delete this virtual machine after creating the image** – We recommend checking the box, since you can't do anything more with the image anyway. - **Zone resiliency** – **On**. - - **Type the virtual machine name** – Name of the source VM (**ngx‑oss** in this guide). + - **Type the virtual machine name** – Name of the source VM ({{}}**ngx-oss**{{}} in this guide). Click the  Create  button. diff --git a/content/nginx/deployment-guides/migrate-hardware-adc/citrix-adc-configuration.md b/content/nginx/deployment-guides/migrate-hardware-adc/citrix-adc-configuration.md index 404b54b18..de397e3b0 100644 --- a/content/nginx/deployment-guides/migrate-hardware-adc/citrix-adc-configuration.md +++ b/content/nginx/deployment-guides/migrate-hardware-adc/citrix-adc-configuration.md @@ -327,7 +327,7 @@ NGINX Plus and Citrix ADC handle high availability (HA) in similar but slightly Citrix ADC handles the monitoring and failover of the VIP in a proprietary way. - For [on‑premises deployments]({{< ref "nginx/admin-guide/high-availability/ha-keepalived.md" >}}), NGINX Plus uses a separate software package called ****nginx‑ha‑keepalived**** to handle the VIP and the failover process for an active‑passive pair of NGINX Plus servers. The package implements the VRRP protocol to handle the VIP. Limited [active‑active]({{< ref "nginx/admin-guide/high-availability/ha-keepalived-nodes.md" >}}) scenarios are also possible with the **nginx‑ha‑keepalived** package. 
+ For [on‑premises deployments]({{< ref "nginx/admin-guide/high-availability/ha-keepalived.md" >}}), NGINX Plus uses a separate software package called {{}}**nginx-ha-keepalived**{{}} to handle the VIP and the failover process for an active‑passive pair of NGINX Plus servers. The package implements the VRRP protocol to handle the VIP. Limited [active‑active]({{< ref "nginx/admin-guide/high-availability/ha-keepalived-nodes.md" >}}) scenarios are also possible with the {{}}**nginx-ha-keepalived**{{}} package. Solutions for high availability of NGINX Plus in cloud environments are also available, including these: diff --git a/content/nginx/deployment-guides/migrate-hardware-adc/f5-big-ip-configuration.md b/content/nginx/deployment-guides/migrate-hardware-adc/f5-big-ip-configuration.md index d7b82fc44..1cc9c9455 100644 --- a/content/nginx/deployment-guides/migrate-hardware-adc/f5-big-ip-configuration.md +++ b/content/nginx/deployment-guides/migrate-hardware-adc/f5-big-ip-configuration.md @@ -99,7 +99,7 @@ In addition to these networking concepts, there are two other important technolo BIG-IP LTM uses a built‑in HA mechanism to handle the failover. - For [on‑premises deployments]({{< ref "nginx/admin-guide/high-availability/ha-keepalived.md" >}}), NGINX Plus uses a separate software package called ****nginx‑ha‑keepalived**** to handle the VIP and the failover process for an active‑passive pair of NGINX Plus servers. + For [on‑premises deployments]({{< ref "nginx/admin-guide/high-availability/ha-keepalived.md" >}}), NGINX Plus uses a separate software package called {{}}**nginx-ha-keepalived**{{}} to handle the VIP and the failover process for an active‑passive pair of NGINX Plus servers. 
The package implements the VRRP protocol to handle the VIP. Limited [active‑active]({{< ref "nginx/admin-guide/high-availability/ha-keepalived-nodes.md" >}}) scenarios are also possible with the {{}}**nginx-ha-keepalived**{{}} package. Solutions for high availability of NGINX Plus in cloud environments are also available, including these: diff --git a/content/nginx/deployment-guides/nginx-plus-high-availability-chef.md b/content/nginx/deployment-guides/nginx-plus-high-availability-chef.md index 5dce8601b..3e709cab9 100644 --- a/content/nginx/deployment-guides/nginx-plus-high-availability-chef.md +++ b/content/nginx/deployment-guides/nginx-plus-high-availability-chef.md @@ -27,9 +27,9 @@ To set up the highly available active/passive cluster, we’re using the [HA sol ## Modifying the NGINX Cookbook -First we set up the Chef files for installing of the NGINX Plus HA package (**nginx‑ha‑keepalived**) and creating the `keepalived` configuration file, **keepalive.conf**. +First we set up the Chef files for installing the NGINX Plus HA package ({{}}**nginx-ha-keepalived**{{}}) and creating the `keepalived` configuration file, **keepalived.conf**. -1. Modify the existing **plus_package** recipe to include package and configuration templates for the HA solution, by adding the following code to the bottom of the **plus_package.rb** file (per the instructions in the previous post, the file is in the **~/chef‑zero/playground/cookbooks/nginx/recipes** directory). +1. Modify the existing **plus_package** recipe to include package and configuration templates for the HA solution, by adding the following code to the bottom of the **plus_package.rb** file (per the instructions in the previous post, the file is in the {{}}**~/chef-zero/playground/cookbooks/nginx/recipes**{{}} directory). We are using the **eth1** interface on each NGINX host, which makes the code a bit more complicated than if we used **eth0**. 
In case you are using **eth0**, the relevant code appears near the top of the file, commented out. @@ -37,7 +37,7 @@ First we set up the Chef files for installing of the NGINX Plus HA package (**n - It looks up the IP address of the **eth1** interface on the node where NGINX Plus is being installed, and assigns the value to the `origip` variable so it can be passed to the template. - It finds the other node in the HA pair by using Chef’s `search` function to iterate through all Chef nodes, then looks up the IP address for that node’s **eth1** interface and assigns the address to the `ha_pair_ips` variable. - - It installs the **nginx‑ha‑keepalived** package, registers the `keepalived` service with Chef, and generates the **keepalived.conf** configuration file as a template, passing in the values of the `origip` and `ha_pair_ips` variables. + - It installs the {{}}**nginx-ha-keepalived**{{}} package, registers the `keepalived` service with Chef, and generates the **keepalived.conf** configuration file as a template, passing in the values of the `origip` and `ha_pair_ips` variables. ```nginx if node['nginx']['enable_ha_mode'] == 'true' @@ -102,7 +102,7 @@ First we set up the Chef files for installing of the NGINX Plus HA package (**n You can download the [full recipe file](https://www.nginx.com/resource/conf/plus_package.rb-chef-recipe) from the NGINX, Inc. website. -2. Create the Chef template for creating **keepalived.conf**, by copying the following content to a new template file, **nginx_plus_keepalived.conf.erb**, in the **~/chef‑zero/playground/cookbooks/nginx/templates/default** directory. +2. Create the Chef template for creating **keepalived.conf**, by copying the following content to a new template file, **nginx_plus_keepalived.conf.erb**, in the {{}}**~/chef-zero/playground/cookbooks/nginx/templates/default**{{}} directory. We’re using a combination of variables and attributes to pass the necessary information to **keepalived.conf**. 
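As a point of reference for the template work that follows, the rendered **keepalived.conf** for an active‑passive pair generally takes the shape sketched below. This is a hand‑written illustration, not the template's exact output: the **eth1** interface, the VIP 10.100.10.50, and the peer address 10.100.10.102 are the guide's example values, while the primary node's own address (10.100.10.101) and the health‑check script path are assumptions.

```nginx
# Sketch of a rendered keepalived.conf for the primary node (assumed values noted above)
vrrp_script chk_nginx_service {
    script "/usr/libexec/keepalived/nginx-ha-check"  # check script shipped with nginx-ha-keepalived (assumed path)
    interval 3
    weight 50
}

vrrp_instance VI_1 {
    interface         eth1            # interface used by the guide
    priority          101             # higher than the peer, so this node starts as primary
    virtual_router_id 51
    advert_int        1
    unicast_src_ip    10.100.10.101   # this node's eth1 address (assumed)
    unicast_peer {
        10.100.10.102                 # the other node in the HA pair
    }
    virtual_ipaddress {
        10.100.10.50                  # the shared VIP
    }
    track_script {
        chk_nginx_service
    }
}
```

On the secondary node the same file differs only in its lower `priority` and the swapped `unicast_src_ip`/`unicast_peer` addresses, which is exactly the information the Chef recipe computes from the two nodes' **eth1** interfaces and passes to the template.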
We’ll set the attributes in the next step. Here we set the two variables in the template file to the host IP addresses that were set with the `variables` directive in the **plus_package.rb** recipe (modified in the previous step): @@ -186,7 +186,7 @@ First we set up the Chef files for installing of the NGINX Plus HA package (**n ``` -3. Create a role that sets attributes used in the recipe and template files created in the previous steps, by copying the following contents to a new role file, **nginx_plus_ha.rb** in the **~/chef‑zero/playground/roles** directory. +3. Create a role that sets attributes used in the recipe and template files created in the previous steps, by copying the following contents to a new role file, **nginx_plus_ha.rb** in the {{}}**~/chef-zero/playground/roles**{{}} directory. Four attributes need to be set, and in the role we set the following three: @@ -290,13 +290,13 @@ Now we bootstrap the nodes and get them ready for the installation. Note that th ` -2. Create a local copy of the node definition file, which we’ll edit as appropriate for the node we bootstrapped in the previous step, **chef‑test‑1**: +2. Create a local copy of the node definition file, which we’ll edit as appropriate for the node we bootstrapped in the previous step, {{}}**chef-test-1**{{}}: ```nginx root@chef-server:~/chef-zero/playground# knife node show chef-test-1 --format json > nodes/chef-test-1.json ``` -3. Edit **chef‑test‑1.json** to have the following contents. In particular, we’re updating the run list and setting the `ha_primary` attribute, as required for the HA deployment. +3. Edit {{}}**chef-test-1.json**{{}} to have the following contents. In particular, we’re updating the run list and setting the `ha_primary` attribute, as required for the HA deployment. ```json { @@ -323,7 +323,7 @@ Now we bootstrap the nodes and get them ready for the installation. Note that th Updated Node chef-test-1! ``` -5. 
Log in on the **chef‑test‑1** node and run the `chef-client` command to get everything configured: +5. Log in on the {{}}**chef-test-1**{{}} node and run the `chef-client` command to get everything configured: ```text username@chef-test-1:~$ sudo chef-client @@ -616,7 +616,7 @@ Enter your password: 10.100.10.102 Chef Client finished, 18/50 resources updated in 10 seconds` -If we look at **keepalived.conf** at this point, we see that there is a peer set in the `unicast_peer` section. But the following command shows that **chef‑test‑2**, which we intend to be the secondary node, is also assigned the VIP (10.100.10.50). This is because we haven’t yet updated the Chef configuration on **chef‑test‑1** to make its `keepalived` aware of the secondary node. +If we look at **keepalived.conf** at this point, we see that there is a peer set in the `unicast_peer` section. But the following command shows that {{}}**chef-test-2**{{}}, which we intend to be the secondary node, is also assigned the VIP (10.100.10.50). This is because we haven’t yet updated the Chef configuration on {{}}**chef-test-1**{{}} to make its `keepalived` aware of the secondary node. 
username@chef-test-2:~$ ip addr show eth1 3: eth1: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 @@ -630,7 +630,7 @@ If we look at **keepalived.conf** at this point, we see that there is a peer set ### Synchronizing the Nodes -To make `keepalived` on **chef‑test‑1** aware of **chef‑test‑2** and its IP address, we rerun the `chef-client` command on **chef‑test‑1**: +To make `keepalived` on {{}}**chef-test-1**{{}} aware of {{}}**chef-test-2**{{}} and its IP address, we rerun the `chef-client` command on {{}}**chef-test-1**{{}}: ```text username@chef-test-1:~$ sudo chef-client @@ -741,7 +741,7 @@ Chef Client finished, 2/47 resources updated in 05 seconds ``` -We see that **chef‑test‑1** is still assigned the VIP: +We see that {{}}**chef-test-1**{{}} is still assigned the VIP: ```nginx username@chef-test-1:~$ ip addr show eth1 @@ -755,7 +755,7 @@ We see that **chef‑test‑1** is still assigned the VIP: valid_lft forever preferred_lft forever ``` -And **chef‑test‑2**, as the secondary node, is now assigned only its physical IP address: +And {{}}**chef-test-2**{{}}, as the secondary node, is now assigned only its physical IP address: ```nginx username@chef-test-2:~$ ip addr show eth1 diff --git a/content/nginx/deployment-guides/setting-up-nginx-demo-environment.md b/content/nginx/deployment-guides/setting-up-nginx-demo-environment.md index dc85877b0..a14367e24 100644 --- a/content/nginx/deployment-guides/setting-up-nginx-demo-environment.md +++ b/content/nginx/deployment-guides/setting-up-nginx-demo-environment.md @@ -21,7 +21,7 @@ Some commands require `root` privilege. If appropriate for your environment, pre ## Configuring NGINX Open Source for web serving -The steps in this section configure an NGINX Open Source instance as a web server to return a page like the following, which specifies the server name, address, and other information. The page is defined in the **demo‑index.html** configuration file you create in Step 4 below. 
+The steps in this section configure an NGINX Open Source instance as a web server to return a page like the following, which specifies the server name, address, and other information. The page is defined in the {{}}**demo-index.html**{{}} configuration file you create in Step 4 below. diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/active-directory-federation-services.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/active-directory-federation-services.md index 870198ac6..6d0173780 100644 --- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/active-directory-federation-services.md +++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/active-directory-federation-services.md @@ -50,9 +50,9 @@ The instructions assume you have the following: Create an AD FS application for NGINX Plus: -1. Open the AD FS Management window. In the navigation column on the left, right‑click on the **Application Groups** folder and select **Add Application Group** from the drop‑down menu. +1. Open the AD FS Management window. In the navigation column on the left, right‑click on the **Application Groups** folder and select {{}}**Add Application Group**{{}} from the drop‑down menu. - The **Add Application Group Wizard** window opens. The left navigation column shows the steps you will complete to add an application group. + The {{}}**Add Application Group Wizard**{{}} window opens. The left navigation column shows the steps you will complete to add an application group. 2. In the **Welcome** step, type the application group name in the **Name** field. Here we are using **ADFSSSO**. In the **Template** field, select **Server application** under **Standalone applications**. Click the  Next >  button. @@ -63,7 +63,7 @@ Create an AD FS application for NGINX Plus: 1. Make a note of the value in the **Client Identifier** field. You will add it to the NGINX Plus configuration in [Step 4 of _Configuring NGINX Plus_](#nginx-plus-variables).
- 2. In the **Redirect URI** field, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch**. Here we’re using **https://my‑nginx.example.com:443/\_codexch**. Click the  Add  button. + 2. In the **Redirect URI** field, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch**. Here we’re using {{}}**https://my-nginx.example.com:443/\_codexch**{{}}. Click the  Add  button. **Notes:** @@ -75,7 +75,7 @@ Create an AD FS application for NGINX Plus: -4. In the **Configure Application Credentials** step, click the **Generate a shared secret** checkbox. Make a note of the secret that AD FS generates (perhaps by clicking the **Copy to clipboard** button and pasting the clipboard content into a file). You will add the secret to the NGINX Plus configuration in [Step 4 of _Configuring NGINX Plus_](#nginx-plus-variables). Click the  Next >  button. +4. In the {{}}**Configure Application Credentials**{{}} step, click the {{}}**Generate a shared secret**{{}} checkbox. Make a note of the secret that AD FS generates (perhaps by clicking the {{}}**Copy to clipboard**{{}} button and pasting the clipboard content into a file). You will add the secret to the NGINX Plus configuration in [Step 4 of _Configuring NGINX Plus_](#nginx-plus-variables). Click the  Next >  button. @@ -87,7 +87,7 @@ Create an AD FS application for NGINX Plus: Configure NGINX Plus as the OpenID Connect relying party: -1. Create a clone of the [**nginx‑openid‑connect**](https://github.com/nginxinc/nginx-openid-connect) GitHub repository. +1. Create a clone of the {{}}[**nginx-openid-connect**](https://github.com/nginxinc/nginx-openid-connect){{}} GitHub repository. 
```shell git clone https://github.com/nginxinc/nginx-openid-connect @@ -150,7 +150,7 @@ In a browser, enter the address of your NGINX Plus instance and try to log in u ## Troubleshooting -See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the **nginx‑openid‑connect** repository on GitHub. +See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the {{}}**nginx-openid-connect**{{}} repository on GitHub. ### Revision History diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/cognito.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/cognito.md index 6fa141df9..94e62f0f8 100644 --- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/cognito.md +++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/cognito.md @@ -59,7 +59,7 @@ Create a new application for NGINX Plus in the Cognito GUI: -3. In the **Create a user pool** window that opens, type a value in the **Pool name** field (in this guide, it's **nginx‑plus‑pool**), then click the Review defaults button. +3. In the **Create a user pool** window that opens, type a value in the **Pool name** field (in this guide, it's {{}}**nginx-plus-pool**{{}}), then click the Review defaults button. @@ -70,11 +70,11 @@ Create a new application for NGINX Plus in the Cognito GUI: 5. On the **App clients** tab which opens, click Add an app client. -6. On the **Which app clients will have access to this user pool?** window which opens, enter a value (in this guide, **nginx‑plus‑app**) in the **App client name** field. Make sure the **Generate client secret** box is checked, then click the  Create app client  button. +6. On the **Which app clients will have access to this user pool?** window which opens, enter a value (in this guide, {{}}**nginx-plus-app**{{}}) in the {{}}**App client name**{{}} field. 
Make sure the {{}}**Generate client secret**{{}} box is checked, then click the  Create app client  button. -7. On the confirmation page which opens, click **Return to pool details** to return to the **Review** tab. On that tab click the  Create pool  button at the bottom. (The screenshot in [Step 4](#cognito-review-tab) shows the button.) +7. On the confirmation page which opens, click {{}}**Return to pool details**{{}} to return to the **Review** tab. On that tab click the  Create pool  button at the bottom. (The screenshot in [Step 4](#cognito-review-tab) shows the button.) 8. On the details page which opens to confirm the new user pool was successfully created, make note of the value in the **Pool Id** field; you will add it to the NGINX Plus configuration in [Step 3 of _Configuring NGINX Plus_](#nginx-plus-variables). @@ -82,36 +82,36 @@ Create a new application for NGINX Plus in the Cognito GUI: 'General settings' tab in Amazon Cognito GUI -9. Click **Users and groups** in the left navigation column. In the interface that opens, designate the users (or group of users, on the **Groups** tab) who will be able to use SSO for the app being proxied by NGINX Plus. For instructions, see the Cognito documentation about [creating users](https://docs.aws.amazon.com/cognito/latest/developerguide/how-to-create-user-accounts.html), [importing users](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-using-import-tool.html), or [adding a group](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-user-groups.html). +9. Click {{}}**Users and groups**{{}} in the left navigation column. In the interface that opens, designate the users (or group of users, on the **Groups** tab) who will be able to use SSO for the app being proxied by NGINX Plus. 
For instructions, see the Cognito documentation about [creating users](https://docs.aws.amazon.com/cognito/latest/developerguide/how-to-create-user-accounts.html), [importing users](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-using-import-tool.html), or [adding a group](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-user-groups.html). 'Users and groups' tab in Amazon Cognito GUI -10. Click **App clients** in the left navigation bar. On the tab that opens, click the Show Details button in the box labeled with the app client name (in this guide, **nginx‑plus‑app**). +10. Click **App clients** in the left navigation bar. On the tab that opens, click the Show Details button in the box labeled with the app client name (in this guide, {{}}**nginx-plus-app**{{}}). 'App clients' tab in Amazon Cognito GUI -11. On the details page that opens, make note of the values in the **App client id** and **App client secret** fields. You will add them to the NGINX Plus configuration in [Step 3 of _Configuring NGINX Plus_](#nginx-plus-variables). +11. On the details page that opens, make note of the values in the {{}}**App client id**{{}} and {{}}**App client secret**{{}} fields. You will add them to the NGINX Plus configuration in [Step 3 of _Configuring NGINX Plus_](#nginx-plus-variables). -12. Click **App client settings** in the left navigation column. In the tab that opens, perform the following steps: +12. Click {{}}**App client settings**{{}} in the left navigation column. In the tab that opens, perform the following steps: - 1. In the **Enabled Identity Providers** section, click the **Cognito User Pool** checkbox (the **Select all** box gets checked automatically). - 2. In the **Callback URL(s)** field of the **Sign in and sign out URLs** section, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch**. Here we’re using **https://my‑nginx‑plus.example.com:443/_codexch**. + 1. 
In the {{}}**Enabled Identity Providers**{{}} section, click the {{}}**Cognito User Pool**{{}} checkbox (the **Select all** box gets checked automatically). + 2. In the **Callback URL(s)** field of the {{}}**Sign in and sign out URLs**{{}} section, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch**. Here we’re using {{}}**https://my-nginx-plus.example.com:443/_codexch**{{}}. **Notes:** - For production, we strongly recommend that you use SSL/TLS (port 443). - The port number is mandatory even when you're using the default port for HTTP (80) or HTTPS (443). - 3. In the **OAuth 2.0** section, click the **Authorization code grant** checkbox under **Allowed OAuth Flows** and the **email**, **openid**, and **profile** checkboxes under **Allowed OAuth Scopes**. + 3. In the **OAuth 2.0** section, click the {{}}**Authorization code grant**{{}} checkbox under {{}}**Allowed OAuth Flows**{{}} and the **email**, **openid**, and **profile** checkboxes under {{}}**Allowed OAuth Scopes**{{}}. 4. Click the  Save changes  button. -13. Click **Domain name** in the left navigation column. In the tab that opens, type a domain prefix in the **Domain prefix** field under **Amazon Cognito domain** (in this guide, **my‑nginx‑plus**). Click the  Save changes  button. +13. Click **Domain name** in the left navigation column. In the tab that opens, type a domain prefix in the **Domain prefix** field under {{}}**Amazon Cognito domain**{{}} (in this guide, {{}}**my-nginx-plus**{{}}). Click the  Save changes  button. @@ -120,7 +120,7 @@ Create a new application for NGINX Plus in the Cognito GUI: Configure NGINX Plus as the OpenID Connect relying party: -1. Create a clone of the [**nginx‑openid‑connect**](https://github.com/nginxinc/nginx-openid-connect) GitHub repository. +1. Create a clone of the {{}}[**nginx-openid-connect**](https://github.com/nginxinc/nginx-openid-connect){{}} GitHub repository. 
```shell git clone https://github.com/nginxinc/nginx-openid-connect @@ -135,12 +135,12 @@ Configure NGINX Plus as the OpenID Connect relying party: 3. In your preferred text editor, open **/etc/nginx/conf.d/frontend.conf**. Change the second parameter of each of the following [set](http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#set) directives to the specified value. - The `` variable is the full value in the **Domain prefix** field in [Step 13 of _Configuring Amazon Cognito_](#cognito-domain-name). In this guide it is **https://my‑nginx‑plus.auth.us‑east‑2.amazoncognito.com**. + The `` variable is the full value in the **Domain prefix** field in [Step 13 of _Configuring Amazon Cognito_](#cognito-domain-name). In this guide it is {{}}**https://my-nginx-plus.auth.us-east-2.amazoncognito.com**{{}}. - `set $oidc_authz_endpoint` – `/oauth2/authorize` - `set $oidc_token_endpoint` – `/oauth2/token` - - `set $oidc_client` – Value in the **App client id** field from [Step 11 of _Configuring Amazon Cognito_](#cognito-app-client-id-secret) (in this guide, `2or4cs8bjo1lkbq6143tqp6ist`) - - `set $oidc_client_secret` – Value in the **App client secret** field from [Step 11 of _Configuring Amazon Cognito_](#cognito-app-client-id-secret) (in this guide, `1k63m3nrcnu...`) + - `set $oidc_client` – Value in the {{}}**App client id**{{}} field from [Step 11 of _Configuring Amazon Cognito_](#cognito-app-client-id-secret) (in this guide, `2or4cs8bjo1lkbq6143tqp6ist`) + - `set $oidc_client_secret` – Value in the {{}}**App client secret**{{}} field from [Step 11 of _Configuring Amazon Cognito_](#cognito-app-client-id-secret) (in this guide, `1k63m3nrcnu...`) - `set $oidc_hmac_key` – A unique, long, and secure phrase 4. Configure the JWK file. The file's URL is @@ -154,7 +154,7 @@ Configure NGINX Plus as the OpenID Connect relying party: In this guide, the URL is - **https://cognito‑idp.us‑east‑2.amazonaws.com/us‑east‑2_mLoGHJpOs/.well‑known/jwks.json**. 
+ {{}}**https://cognito-idp.us-east-2.amazonaws.com/us-east-2_mLoGHJpOs/.well-known/jwks.json**{{}}. The method for configuring the JWK file depends on which version of NGINX Plus you are using: @@ -187,7 +187,7 @@ In a browser, enter the address of your NGINX Plus instance and try to log in u ## Troubleshooting -See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the **nginx‑openid‑connect** repository on GitHub. +See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the {{}}**nginx-openid-connect**{{}} repository on GitHub. ### Revision History diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/keycloak.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/keycloak.md index f904a1811..25c8af487 100644 --- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/keycloak.md +++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/keycloak.md @@ -60,15 +60,15 @@ Create a Keycloak client for NGINX Plus in the Keycloak GUI: 3. On the **Add Client** page that opens, enter or select these values, then click the  Save  button. - - **Client ID** – The name of the application for which you're enabling SSO (Keycloak refers to it as the “client”). Here we're using **NGINX‑Plus**. - - **Client Protocol** – **openid‑connect**. + - **Client ID** – The name of the application for which you're enabling SSO (Keycloak refers to it as the “client”). Here we're using {{}}**NGINX-Plus**{{}}. + - **Client Protocol** – {{}}**openid-connect**{{}}. 4. 
On the **NGINX Plus** page that opens, enter or select these values on the Settings tab: - **Access Type** – **confidential** - - **Valid Redirect URIs** – The URI of the NGINX Plus instance, including the port number, and ending in **/\_codexch** (in this guide it is **https://my‑nginx.example.com:443/_codexch**) + - **Valid Redirect URIs** – The URI of the NGINX Plus instance, including the port number, and ending in **/\_codexch** (in this guide it is {{}}**https://my-nginx.example.com:443/_codexch**{{}}) **Notes:** @@ -84,14 +84,14 @@ Create a Keycloak client for NGINX Plus in the Keycloak GUI: 6. Click the Roles tab, then click the **Add Role** button in the upper right corner of the page that opens. -7. On the **Add Role** page that opens, type a value in the **Role Name** field (here it is **nginx‑keycloak‑role**) and click the  Save  button. +7. On the **Add Role** page that opens, type a value in the **Role Name** field (here it is {{}}**nginx-keycloak-role**{{}}) and click the  Save  button. 8. In the left navigation column, click **Users**. On the **Users** page that opens, either click the name of an existing user, or click the **Add user** button in the upper right corner to create a new user. For complete instructions, see the [Keycloak documentation](https://www.keycloak.org/docs/latest/server_admin/index.html#user-management). -9. On the management page for the user (here, **user01**), click the Role Mappings tab. On the page that opens, select **NGINX‑Plus** on the **Client Roles** drop‑down menu. Click **nginx‑keycloak‑role** in the **Available Roles** box, then click the **Add selected** button below the box. The role then appears in the **Assigned Roles** and **Effective Roles** boxes, as shown in the screenshot. +9. On the management page for the user (here, **user01**), click the Role Mappings tab. On the page that opens, select {{}}**NGINX-Plus**{{}} on the **Client Roles** drop‑down menu. 
Click {{}}**nginx-keycloak-role**{{}} in the **Available Roles** box, then click the **Add selected** button below the box. The role then appears in the **Assigned Roles** and **Effective Roles** boxes, as shown in the screenshot. @@ -101,7 +101,7 @@ Create a Keycloak client for NGINX Plus in the Keycloak GUI: Configure NGINX Plus as the OpenID Connect relying party: -1. Create a clone of the [**nginx‑openid‑connect**](https://github.com/nginxinc/nginx-openid-connect) GitHub repository. +1. Create a clone of the {{}}[**nginx-openid-connect**](https://github.com/nginxinc/nginx-openid-connect){{}} GitHub repository. ```shell git clone https://github.com/nginxinc/nginx-openid-connect @@ -165,7 +165,7 @@ In a browser, enter the address of your NGINX Plus instance and try to log in u ## Troubleshooting -See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the **nginx‑openid‑connect** repository on GitHub. +See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the {{}}**nginx-openid-connect**{{}} repository on GitHub. ### Revision History diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/onelogin.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/onelogin.md index 9d2c00fdb..b6c8fde9a 100644 --- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/onelogin.md +++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/onelogin.md @@ -48,15 +48,15 @@ Create a new application for NGINX Plus in the OneLogin GUI: -3. On the **Find Applications** page that opens, type **OpenID Connect** in the search box. Click on the **OpenID Connect (OIDC)** row that appears. +3. On the **Find Applications** page that opens, type {{}}**OpenID Connect**{{}} in the search box. Click on the **OpenID Connect (OIDC)** row that appears. -4. 
On the **Add OpenId Connect (OIDC)** page that opens, change the value in the **Display Name** field to **NGINX Plus** and click the  Save  button. +4. On the **Add OpenId Connect (OIDC)** page that opens, change the value in the **Display Name** field to {{}}**NGINX Plus**{{}} and click the  Save  button. -5. When the save completes, a new set of choices appears in the left navigation bar. Click **Configuration**. In the **Redirect URI's** field, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch** (in this guide it is **https://my‑nginx.example.com:443/_codexch**). Then click the  Save  button. +5. When the save completes, a new set of choices appears in the left navigation bar. Click **Configuration**. In the **Redirect URI's** field, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch** (in this guide it is {{}}**https://my-nginx.example.com:443/_codexch**{{}}). Then click the  Save  button. **Notes:** @@ -66,12 +66,12 @@ Create a new application for NGINX Plus in the OneLogin GUI: -6. When the save completes, click **SSO** in the left navigation bar. Click **Show client secret** below the **Client Secret** field. Record the values in the **Client ID** and **Client Secret** fields. You will add them to the NGINX Plus configuration in [Step 4 of _Configuring NGINX Plus_](#nginx-plus-variables). +6. When the save completes, click **SSO** in the left navigation bar. Click {{}}**Show client secret**{{}} below the **Client Secret** field. Record the values in the **Client ID** and **Client Secret** fields. You will add them to the NGINX Plus configuration in [Step 4 of _Configuring NGINX Plus_](#nginx-plus-variables). -7. Assign users to the application (in this guide, **NGINX Plus**) to enable them to access it for SSO. OneLogin recommends using [roles](https://onelogin.service-now.com/kb_view_customer.do?sysparm_article=KB0010606) for this purpose. 
You can access the **Roles** page under  Users  in the title bar. +7. Assign users to the application (in this guide, {{}}**NGINX Plus**{{}}) to enable them to access it for SSO. OneLogin recommends using [roles](https://onelogin.service-now.com/kb_view_customer.do?sysparm_article=KB0010606) for this purpose. You can access the **Roles** page under  Users  in the title bar. diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/ping-identity.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/ping-identity.md index d4901c65a..495b2ebad 100644 --- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/ping-identity.md +++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/ping-identity.md @@ -56,30 +56,30 @@ Create a new application for NGINX Plus: 1. Log in to your Ping Identity account. The administrative dashboard opens automatically. In this guide, we show the PingOne for Enterprise dashboard, and for brevity refer simply to ”PingOne”. -2. Click  APPLICATIONS  in the title bar, and on the **My Applications** page that opens, click **OIDC** and then the **+ Add Application** button. +2. Click  APPLICATIONS  in the title bar, and on the **My Applications** page that opens, click **OIDC** and then the {{}}**+ Add Application**{{}} button. -3. The **Add OIDC Application** window pops up. Click the ADVANCED CONFIGURATION box, and then the  Next  button. +3. The {{}}**Add OIDC Application**{{}} window pops up. Click the ADVANCED CONFIGURATION box, and then the  Next  button. -4. In section 1 (PROVIDE DETAILS ABOUT YOUR APPLICATION), type a name in the **APPLICATION NAME** field and a short description in the **SHORT DESCRIPTION** field. Here, we're using **nginx‑plus‑application** and **NGINX Plus**. Choose a value from the **CATEGORY** drop‑down menu; here we’re using **Information Technology**. You can also add an icon if you wish. Click the  Next  button. +4. 
In section 1 (PROVIDE DETAILS ABOUT YOUR APPLICATION), type a name in the **APPLICATION NAME** field and a short description in the **SHORT DESCRIPTION** field. Here, we're using {{}}**nginx-plus-application**{{}} and {{}}**NGINX Plus**{{}}. Choose a value from the **CATEGORY** drop‑down menu; here we’re using {{}}**Information Technology**{{}}. You can also add an icon if you wish. Click the  Next  button. 5. In section 2 (AUTHORIZATION SETTINGS), perform these steps: - 1. Under **GRANTS**, click both **Authorization Code** and **Implicit**.
- 2. Under **CREDENTIALS**, click the **+ Add Secret** button. PingOne creates a client secret and opens the **CLIENT SECRETS** field to display it, as shown in the screenshot. To see the actual value of the secret, click the eye icon.
+ 1. Under **GRANTS**, click both {{}}**Authorization Code**{{}} and **Implicit**.
+ 2. Under **CREDENTIALS**, click the {{}}**+ Add Secret**{{}} button. PingOne creates a client secret and opens the **CLIENT SECRETS** field to display it, as shown in the screenshot. To see the actual value of the secret, click the eye icon.
3. Click the  Next  button. 6. In section 3 (SSO FLOW AND AUTHENTICATION SETTINGS): - 1. In the **START SSO URL** field, type the URL where users access your application. Here we’re using **https://example.com**. - 2. In the **REDIRECT URIS** field, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch**. Here we’re using **https://my‑nginx‑plus.example.com:443/\_codexch** (the full value is not visible in the screenshot). + 1. In the {{}}**START SSO URL**{{}} field, type the URL where users access your application. Here we’re using **https://example.com**. + 2. In the **REDIRECT URIS** field, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch**. Here we’re using {{}}**https://my-nginx-plus.example.com:443/\_codexch**{{}} (the full value is not visible in the screenshot). **Notes:** @@ -88,11 +88,11 @@ Create a new application for NGINX Plus: -7. In section 4 (DEFAULT USER PROFILE ATTRIBUTE CONTRACT), optionally add attributes to the required **sub** and **idpid** attributes, by clicking the **+ Add Attribute** button. We’re not adding any in this example. When finished, click the  Next  button. +7. In section 4 (DEFAULT USER PROFILE ATTRIBUTE CONTRACT), optionally add attributes to the required **sub** and **idpid** attributes, by clicking the {{}}**+ Add Attribute**{{}} button. We’re not adding any in this example. When finished, click the  Next  button. -8. In section 5 (CONNECT SCOPES), click the circled plus-sign on the **OpenID Profile (profile)** and **OpenID Profile Email (email)** scopes in the **LIST OF SCOPES** column. They are moved to the **CONNECTED SCOPES** column, as shown in the screenshot. Click the  Next  button. +8. In section 5 (CONNECT SCOPES), click the circled plus-sign on the {{}}**OpenID Profile (profile)**{{}} and {{}}**OpenID Profile Email (email)**{{}} scopes in the {{}}**LIST OF SCOPES**{{}} column. 
They are moved to the **CONNECTED SCOPES** column, as shown in the screenshot. Click the  Next  button. @@ -107,14 +107,14 @@ Create a new application for NGINX Plus: -11. You are returned to the **My Applications** window, which now includes a row for **nginx‑plus‑application**. Click the toggle switch at the right end of the row to the “on” position, as shown in the screenshot. Then click the “expand” icon at the end of the row, to display the application’s details. +11. You are returned to the **My Applications** window, which now includes a row for {{}}**nginx-plus-application**{{}}. Click the toggle switch at the right end of the row to the “on” position, as shown in the screenshot. Then click the “expand” icon at the end of the row, to display the application’s details. 12. On the page that opens, make note of the values in the following fields on the **Details** tab. You will add them to the NGINX Plus configuration in [Step 4 of _Configuring NGINX Plus_](#nginx-plus-variables). - - **CLIENT ID** (in the screenshot, **28823604‑83c5‑4608‑88da‑c73fff9c607a**) + - **CLIENT ID** (in the screenshot, {{}}**28823604-83c5-4608-88da-c73fff9c607a**{{}}) - **CLIENT SECRETS** (in the screenshot, **7GMKILBofxb...**); click on the eye icon to view the actual value @@ -124,7 +124,7 @@ Create a new application for NGINX Plus: Configure NGINX Plus as the OpenID Connect relying party: -1. Create a clone of the [**nginx‑openid‑connect**](https://github.com/nginxinc/nginx-openid-connect) GitHub repository. +1. Create a clone of the {{}}[**nginx-openid-connect**](https://github.com/nginxinc/nginx-openid-connect){{}} GitHub repository. ```shell git clone https://github.com/nginxinc/nginx-openid-connect @@ -190,7 +190,7 @@ In a browser, enter the address of your NGINX Plus instance and try to log in u ## Troubleshooting -See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the **nginx‑openid‑connect** repository on GitHub. 
+See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the {{}}**nginx-openid-connect**{{}} repository on GitHub. ### Revision History diff --git a/content/nginx/technical-specs.md b/content/nginx/technical-specs.md index c65fcb06d..5405859c6 100644 --- a/content/nginx/technical-specs.md +++ b/content/nginx/technical-specs.md @@ -163,9 +163,10 @@ See [Sizing Guide for Deploying NGINX Plus on Bare Metal Servers](https://www.ng - [Limit Requests](https://nginx.org/en/docs/http/ngx_http_limit_req_module.html) – Limit rate of request processing for a client IP address or other keyed value - [Limit Responses](https://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate) – Limit rate of responses per client connection -### HTTP/2 and SSL/TLS +### HTTP/2, HTTP/3 and SSL/TLS - [HTTP/2](https://nginx.org/en/docs/http/ngx_http_v2_module.html) – Process HTTP/2 traffic +- [HTTP/3](https://nginx.org/en/docs/http/ngx_http_v3_module.html) – Process HTTP/3 traffic - [SSL/TLS](https://nginx.org/en/docs/http/ngx_http_ssl_module.html) – Process HTTPS traffic ### Mail diff --git a/go.mod b/go.mod index a7dc51830..fd5114fe6 100644 --- a/go.mod +++ b/go.mod @@ -2,4 +2,4 @@ module github.com/nginxinc/docs go 1.19 -require github.com/nginxinc/nginx-hugo-theme v0.42.28 // indirect +require github.com/nginxinc/nginx-hugo-theme v0.42.36 // indirect diff --git a/go.sum b/go.sum index 8a25e80eb..e54ae2fb6 100644 --- a/go.sum +++ b/go.sum @@ -4,3 +4,5 @@ github.com/nginxinc/nginx-hugo-theme v0.42.27 h1:D80Sf/o9lR4P0NDFfP/hCQllohz6C5q github.com/nginxinc/nginx-hugo-theme v0.42.27/go.mod h1:DPNgSS5QYxkjH/BfH4uPDiTfODqWJ50NKZdorguom8M= github.com/nginxinc/nginx-hugo-theme v0.42.28 h1:1SGzBADcXnSqP4rOKEhlfEUloopH6UvMg+XTyVVQyjU= github.com/nginxinc/nginx-hugo-theme v0.42.28/go.mod h1:DPNgSS5QYxkjH/BfH4uPDiTfODqWJ50NKZdorguom8M= +github.com/nginxinc/nginx-hugo-theme v0.42.36 h1:vFBavxB+tw2fs0rLTpA3kYPMdBK15LtZkfkX21kzrDo= 
+github.com/nginxinc/nginx-hugo-theme v0.42.36/go.mod h1:DPNgSS5QYxkjH/BfH4uPDiTfODqWJ50NKZdorguom8M=
diff --git a/layouts/partials/list-main.html b/layouts/partials/list-main.html
new file mode 100644
index 000000000..be455babf
--- /dev/null
+++ b/layouts/partials/list-main.html
@@ -0,0 +1,119 @@
+{{/* TODO: Delete this page, and use the one from nginx-hugo-theme */}}
+
+ {{ $PageTitle := .Title }} +
+ +
+ + {{ if or (lt .WordCount 1) (eq $PageTitle "F5 NGINX One Console") (eq $PageTitle "F5 NGINX App Protect DoS") (eq $PageTitle "F5 NGINX Plus") }} +
+
+
+ {{ range .Pages.GroupBy "Section" }} + {{ range .Pages.ByWeight }} +
+
+

+ + {{ .Title }} +

+ {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Manage your NGINX fleet")}} +

Simplify, scale, secure, and collaborate with your NGINX fleet

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Get started")}} +

See benefits from the NGINX One Console

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Draft new configurations")}} +

Work with Staged Configurations

+ {{ end }} + + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Manage your NGINX instances")}} +

Monitor and maintain your deployments

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Organize users with RBAC")}} +

Assign responsibilities with role-based access control

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Automate with the NGINX One API")}} +

Manage your NGINX fleet over REST

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Glossary")}} +

Learn terms unique to NGINX One Console

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Connect your instances") }} +

Work with data plane keys, containers, and proxy servers

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Set up metrics") }} +

Review your deployments in a dashboard

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "API")}} +

View the NGINX One Console API documentation

+ + {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Changelog") }} + {{ partial "changelog-date.html" . }} + {{ end }} +
+
+ {{ end }} + {{ end }} +
+ {{ if eq $PageTitle "F5 NGINX One Console" }} +

Other Products

+ {{ $nginxProducts := slice
+ (dict "title" "NGINX Instance Manager" "url" "/nginx-instance-manager" "imgSrc" "NGINX-Instance-Manager-product-icon" "type" "local-console-option" "description" "Track and control NGINX Open Source and NGINX Plus instances.")
+ (dict "title" "NGINX Ingress Controller" "url" "/nginx-ingress-controller" "imgSrc" "NGINX-Ingress-Controller-product-icon" "type" "kubernetes-solutions" "description" "Kubernetes traffic management with API gateway, identity, and observability features.")
+ (dict "title" "NGINX Gateway Fabric" "url" "/nginx-gateway-fabric" "imgSrc" "NGINX-product-icon" "type" "kubernetes-solutions" "description" "Next generation Kubernetes connectivity using the Gateway API.")
+ (dict "title" "NGINX App Protect WAF" "url" "/nginx-app-protect-waf" "imgSrc" "NGINX-App-Protect-WAF-product-icon" "type" "security" "description" "Lightweight, high-performance, advanced protection against Layer 7 attacks on your apps and APIs.")
+ (dict "title" "NGINX App Protect DoS" "url" "/nginx-app-protect-dos" "imgSrc" "NGINX-App-Protect-DoS-product-icon" "type" "security" "description" "Defend, adapt, and mitigate against Layer 7 denial-of-service attacks on your apps and APIs.")
+ (dict "title" "NGINX Plus" "url" "/nginx" "imgSrc" "NGINX-Plus-product-icon-RGB" "type" "modern-app-delivery" "description" "The all-in-one load balancer, reverse proxy, web server, content cache, and API gateway.")
+ (dict "title" "NGINX Open Source" "url" "https://nginx.org/en/docs/" "imgSrc" "NGINX-product-icon" "type" "modern-app-delivery" "description" "The open source all-in-one load balancer, content cache, and web server.")
+ }}
+ {{ $groupedProducts := dict
+ "local-console-option" (where $nginxProducts "type" "local-console-option")
+ "kubernetes-solutions" (where $nginxProducts "type" "kubernetes-solutions")
+ "security" (where $nginxProducts "type" "security")
+ "modern-app-delivery" (where $nginxProducts "type" "modern-app-delivery")
+ }}
+ {{ range $type, $products := $groupedProducts }}
+

{{ $type | humanize | title }}

+ {{ range $products }} +
+
+

+ + {{ .title }} +

+

+ {{ if .description }}{{ .description | markdownify }}{{ end }} +

+
+
+ {{ end }} +
+ {{ end }} + {{ end }} +
+
+ {{end}} +