An ansible playbook and sample configuration for Red Hat Satellite.
The `redhat.satellite` collection MUST be installed in order for this playbook to work.
Generally speaking, there are two ways to install this collection:
- Install the `ansible-collection-redhat-satellite` RPM, which is available in the Satellite repository.
- Install from Ansible Automation Hub:
  - Update your ansible.cfg file to include:

        [galaxy]
        server_list = automation_hub

        [galaxy_server.automation_hub]
        url=https://console.redhat.com/api/automation-hub/content/published/
        auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
        token=CHANGEME

  - Replace `CHANGEME` with a valid token, which can be obtained at https://console.redhat.com/ansible/automation-hub/token. See also Getting started with Red Hat APIs.
  - Install the collection as the current user:

        ansible-galaxy collection install redhat.satellite

    By default, this will install into ~/.ansible/collections/ansible_collections/redhat/satellite/
Clone this repository into a working area. Read the notes below and browse the sample_inventories to locate an inventory structure that matches your environment. If you are automating the configuration of a single Satellite server, sample_inventories/single_org_single_satellite is a sensible choice. Copy the selected inventory structure to an `inventories` directory.
Example:
git clone https://github.com/rjo-uk/satellite-configuration
cd satellite-configuration
cp -rpv sample_inventories/single_org_single_satellite inventories
The configuration of your Satellite will take place within this `inventories` directory.
Update the following fields in the `inventories/inventory.yml` file:

- `Example_Organization` - Replace with your company, department or organization name. See https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/managing_organizations_admin for details. Ideally, this field should not contain special characters, and spaces should be replaced with underscores (`_`).
- `production.satellite.example.com` - The FQDN of the Satellite server.
- `satellite_validate_certs: false` - If the server from which you will run the playbook can securely connect to the Satellite server over HTTPS, this can be set to `true`. If the Satellite server has a self-signed certificate, you will likely want this set to `false`.
- `satellite_username: admin` - The default administrative username for Satellite is `admin`; if another account should be used to configure Satellite, enter it here.
- `satellite_password: !vault | $ANSIBLE_VAULT;1.1;AES256 64653563643633393937366439653764316430646532376163623737346135363165316239346531` - An encrypted or unencrypted password that allows the playbook to log in to the Satellite server.
- `satellite_organization: "Example Organization"` - A human-friendly name for your company, department or organization. This is typically the same as the organization name above, but can contain spaces.
- `registry_redhat_io_username: !vault | $ANSIBLE_VAULT;1.1;AES256 64653563643633393937366439653764316430646532376163623737346135363165316239346531` - Optional. If synchronizing containers from registry.redhat.io, the encrypted or unencrypted username to authenticate with.
- `registry_redhat_io_password: !vault | $ANSIBLE_VAULT;1.1;AES256 64653563643633393937366439653764316430646532376163623737346135363165316239346531` - Optional. If synchronizing containers from registry.redhat.io, the encrypted or unencrypted password to authenticate with.
- `rhsm_username: !vault | $ANSIBLE_VAULT;1.1;AES256 64653563643633393937366439653764316430646532376163623737346135363165316239346531` - Optional. If the playbook should download the Red Hat Satellite manifest from the Red Hat Customer Portal, the encrypted or unencrypted username to authenticate with.
- `rhsm_password: !vault | $ANSIBLE_VAULT;1.1;AES256 64653563643633393937366439653764316430646532376163623737346135363165316239346531` - Optional. If the playbook should download the Red Hat Satellite manifest from the Red Hat Customer Portal, the encrypted or unencrypted password to authenticate with.
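For orientation, the resulting inventories/inventory.yml might look roughly like the following sketch. The exact group and host layout is an assumption based on the sample inventories described later in this document, so treat the copied sample file as authoritative:

```yaml
---
# Illustrative sketch only - compare with the copied sample inventory file.
all:
  children:
    Example_Organization:                   # group named after the organization
      hosts:
        production.satellite.example.com:   # FQDN of the Satellite server
      vars:
        satellite_validate_certs: false
        satellite_username: admin
        satellite_password: CHANGEME        # ideally an ansible-vault encrypted value
        satellite_organization: "Example Organization"
```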
All `satellite_` content within the `inventories` directory tree should be commented out. Running the playbook as follows should result in no changes being made:
ansible-playbook -i inventories/ satellite-configuration.yml -C
All tasks are skipped since the configuration is commented out.
Rename the directory `inventories/group_vars/Example_Organization` to match the updated name you provided above. For example, if your organization is called `New Widgets`, you would do the following:
mv inventories/group_vars/Example_Organization inventories/group_vars/New_Widgets
You can now begin to make changes. For example, uncomment and update the `satellite_organizations.yml` file in the `inventories/group_vars/New_Widgets/` directory.
Example:
---
satellite_organizations:
  - name: Default Organization
    label: Default_Organization
    state: present
  - name: New Widgets
    label: New_Widgets
    state: present
...
Re-run the playbook and you should see the changes that would be made:
ansible-playbook -i inventories/ satellite-configuration.yml -C
When you are happy with the proposed changes, re-run the playbook a final time without the `-C` option to apply them. Continue by updating the other YAML configuration files in the inventories directories and running the playbook as needed.
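As an illustration of the other files, a sketch of inventories/group_vars/New_Widgets/satellite_locations.yml is shown below. The list structure follows the same pattern as satellite_organizations above; the exact keys accepted are defined by the redhat.satellite locations role, so treat this as an assumption and compare it with the commented-out sample file:

```yaml
---
# Illustrative sketch - check the commented-out sample file and the
# redhat.satellite locations role documentation for the supported keys.
satellite_locations:
  - name: London
  - name: Paris
...
```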
The playbook to configure Satellite is named satellite-configuration.yml and, when combined with an appropriate inventory structure, can configure one or many Satellites comprising one or many Organizations. Ansible tags can be supplied when running the playbook so that specific configuration items can be run or skipped. The playbook only calls Satellite roles if the appropriate variables are defined.
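That guard behaviour is roughly equivalent to the following sketch. This is not the actual playbook source; the role and tag names are assumptions based on the variable names used in this repository:

```yaml
# Illustrative sketch of the guard pattern, not the actual playbook source.
# Each role runs only when its driving variable is defined, and carries a tag
# so it can be selected or skipped with --tags / --skip-tags.
- name: Configure organizations
  ansible.builtin.import_role:
    name: redhat.satellite.organizations
  when: satellite_organizations is defined
  tags:
    - organizations

- name: Configure locations
  ansible.builtin.import_role:
    name: redhat.satellite.locations
  when: satellite_locations is defined
  tags:
    - locations
```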
The documentation below covers the following scenarios:
- Configuration of a single Organization running on a single Satellite server
- Configuration of a single Organization running on two Satellite servers - one being a 'production' Satellite server and one being a 'development' Satellite server. The 'development' Satellite server has a configuration similar to production but uses a different Satellite manifest file and (optionally) different products, activation keys, locations and content views.
- Configuration of multiple Organizations running many different Satellite servers, with configuration split between:
- Configuration that is common in all Organizations
- Configuration that is unique to an Organization
- Configuration that is unique to a specific Organization on a specific Satellite server
A sample inventory for `Example Organization` running on a single Satellite server can be found at sample_inventories/single_org_single_satellite.
├── group_vars
│ └── Example_Organization
│ ├── satellite_activation_keys.yml
│ ├── satellite_auth_sources_ldap.yml
│ ├── satellite_compute_profiles.yml
│ ├── satellite_compute_resources.yml
│ ├── satellite_content_credentials.yml
│ ├── satellite_content_view_version_cleanup_keep.yml
│ ├── satellite_content_views.yml
│ ├── satellite_convert_to_rhel.yml
│ ├── satellite_domains.yml
│ ├── satellite_hostgroups.yml
│ ├── satellite_lifecycle_environments.yml
│ ├── satellite_locations.yml
│ ├── satellite_manifest.yml
│ ├── satellite_operatingsystems.yml
│ ├── satellite_organizations.yml
│ ├── satellite_products.yml
│ ├── satellite_provisioning_templates.yml
│ ├── satellite_settings.yml
│ ├── satellite_subnets.yml
│ └── satellite_sync_plans.yml
└── inventory.yml
The configuration consists of an inventory file at sample_inventories/single_org_single_satellite/inventory.yml and configuration defined as group_vars in sample_inventories/single_org_single_satellite/group_vars/Example_Organization. The advantage of storing configuration in this way is that if a second Satellite server is added later (e.g. for testing), the configuration can be updated to fit the layout defined in the next section.
A sample inventory for `Example Organization` running on two Satellite servers can be found at sample_inventories/single_org_multi_satellite.
├── group_vars
│ └── Example_Organization
│ ├── satellite_activation_keys.yml
│ ├── satellite_auth_sources_ldap.yml
│ ├── satellite_compute_profiles.yml
│ ├── satellite_compute_resources.yml
│ ├── satellite_content_credentials.yml
│ ├── satellite_content_view_version_cleanup_keep.yml
│ ├── satellite_content_views.yml
│ ├── satellite_convert_to_rhel.yml
│ ├── satellite_domains.yml
│ ├── satellite_hostgroups.yml
│ ├── satellite_lifecycle_environments.yml
│ ├── satellite_locations.yml
│ ├── satellite_operatingsystems.yml
│ ├── satellite_organizations.yml
│ ├── satellite_products.yml
│ ├── satellite_provisioning_templates.yml
│ ├── satellite_settings.yml
│ ├── satellite_subnets.yml
│ └── satellite_sync_plans.yml
├── host_vars
│ ├── development.satellite.example.com
│ │ ├── satellite_content_views.yml
│ │ ├── satellite_hostgroups.yml
│ │ ├── satellite_locations.yml
│ │ ├── satellite_manifest.yml
│ │ ├── satellite_products.yml
│ │ └── satellite_sync_plans.yml
│ └── production.satellite.example.com
│ ├── satellite_activation_keys.yml
│ ├── satellite_hostgroups.yml
│ └── satellite_manifest.yml
└── inventory.yml
The inventory mirrors a 'production' environment at `production.satellite.example.com` and a 'development' environment at `development.satellite.example.com`. Using the standard Ansible precedence rules, common settings can be configured via sample_inventories/single_org_multi_satellite/group_vars/Example_Organization and individual configuration via sample_inventories/single_org_multi_satellite/host_vars. For example, the development server could match production but mirror a subset of the repositories and host a subset of the content views. The development environment overrides would be set via sample_inventories/single_org_multi_satellite/host_vars in the appropriate inventory directory.
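For example, under Ansible's precedence rules a host_vars file on the development server completely replaces the group-level list of the same name. The sketch below assumes the satellite_content_views list structure used by the redhat.satellite content_views role and is illustrative only:

```yaml
---
# host_vars/development.satellite.example.com/satellite_content_views.yml
# Illustrative: this host-level definition replaces the group-level
# satellite_content_views list, so development only hosts this subset.
# (Repository entries omitted for brevity; names are examples.)
satellite_content_views:
  - name: cv-os-rhel-9
  - name: cv-app-mariadb
...
```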
If you wish to configure multiple Satellite servers, a valid option would be to replicate the single server inventory described above for each Satellite/Organization as needed. There is nothing wrong with this approach. However, if many of the configuration items are consistent between servers (for example, you want to have exactly the same products across all Satellite servers), then you can define the configuration in a single inventory and define per-Satellite and per-Organization differences in the appropriate location.
A sample inventory structure to manage this setup can be found at sample_inventories/multi_org_multi_satellite. This inventory represents the following:
Satellite Server | Organization | Inventory Name | Inventory Group / Satellite Organization Label |
---|---|---|---|
production.satellite.example.com | ACME Organization | production-acme-organization | ACME_Organization |
production.satellite.example.com | External Organization | production-external-organization | External_Organization |
production.satellite.example.com | Internal Organization | production-internal-organization | Internal_Organization |
development.satellite.example.com | ACME Organization | development-acme-organization | ACME_Organization |
development.satellite.example.com | External Organization | development-external-organization | External_Organization |
development.satellite.example.com | Internal Organization | development-internal-organization | Internal_Organization |
Configuration that is required on all Organizations (and all Satellite servers) will be placed in sample_inventories/multi_org_multi_satellite/group_vars/all. For example, the configuration that sets the same locations on all Organizations is placed in sample_inventories/multi_org_multi_satellite/group_vars/all/satellite_locations.yml.
Note: it is still possible to set per-Organization or per-Organization-AND-Satellite overrides, as described in the next section.
Configuration that is specific to an Organization (on all Satellite servers where the Organization is configured to run) will be placed in the Organization directory in sample_inventories/multi_org_multi_satellite/group_vars. For example, the configuration that sets the Satellite Products for the ACME Organization is placed in sample_inventories/multi_org_multi_satellite/group_vars/ACME_Organization/satellite_products.yml.
Note: it is still possible to set per-Organization-AND-Satellite overrides, as described in the next section.
Configuration that is specific to an Organization on a named Satellite server will be placed in the Inventory Name directory in sample_inventories/multi_org_multi_satellite/host_vars. For example, each Organization on each Satellite server will need to have its own manifest file. The configuration that sets the Satellite Manifest for the ACME Organization on the development Satellite server is placed in sample_inventories/multi_org_multi_satellite/host_vars/development-acme-organization/satellite_manifest.yml.
The example directory sample_inventories/multi_org_multi_satellite/host_vars has the following structure, which defines a unique Satellite manifest for each environment, with custom locations and sync plans defined for the development environment.
sample_inventories/multi_org_multi_satellite/host_vars/
├── development-acme-organization
│ ├── satellite_locations.yml
│ ├── satellite_manifest.yml
│ └── satellite_sync_plans.yml
├── development-external-organization
│ ├── satellite_locations.yml
│ ├── satellite_manifest.yml
│ └── satellite_sync_plans.yml
├── development-internal-organization
│ ├── satellite_locations.yml
│ ├── satellite_manifest.yml
│ └── satellite_sync_plans.yml
├── production-acme-organization
│ └── satellite_manifest.yml
├── production-external-organization
│ └── satellite_manifest.yml
└── production-internal-organization
└── satellite_manifest.yml
A number of credentials are needed to run the playbook. At a minimum, a login to the Satellite server is required. Additional credentials may also be required to configure Satellite options or be stored within Satellite.
To simplify deployment, all credentials can be placed in the inventory file. These can be unencrypted (not recommended) or encrypted with ansible-vault. The quickest way to generate an encrypted record is via the `ansible-vault encrypt_string` command.
For example:
ansible-vault encrypt_string mysecretpassword
New Vault password:
Confirm New Vault password:
Encryption successful
!vault |
$ANSIBLE_VAULT;1.1;AES256
37616631303933386465363436386166643136643130353635326664623532393462363233316331
3131303633646239373832653939613963666265653462390a393463656239616136626565326364
37666535356339366236663266386334643566343765663838396564323664663262646634333464
6232613066616639660a393838613635633338643962333430363963646339613935343936343162
30323163393836356132616431653930323031636566353936653238396166353763
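The output can then be pasted into the inventory as the value of the relevant variable, for example:

```yaml
satellite_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  37616631303933386465363436386166643136643130353635326664623532393462363233316331
  3131303633646239373832653939613963666265653462390a393463656239616136626565326364
  37666535356339366236663266386334643566343765663838396564323664663262646634333464
  6232613066616639660a393838613635633338643962333430363963646339613935343936343162
  30323163393836356132616431653930323031636566353936653238396166353763
```

Alternatively, passing `--name satellite_password` to `ansible-vault encrypt_string` produces the variable name and the encrypted value together in this form.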
To eliminate the need to provide the Vault password each run, a vault password file can be defined in ansible.cfg and secured locally via filesystem permissions. This repository contains a sample entry in ansible.cfg for achieving this.
Details of the credentials used in the playbooks are shown below:
Inventory File | Variable Name | Variable Use | Required | Where used |
---|---|---|---|---|
sample_inventories/ single_org_single_satellite/ inventory.yml | satellite_username | The username that the Ansible playbook will use to login to the Satellite server | Yes | Used by all roles called by satellite-configuration.yml |
sample_inventories/ single_org_single_satellite/ inventory.yml | satellite_password | The password that the Ansible playbook will use to login to the Satellite server | Yes | Used by all roles called by satellite-configuration.yml |
sample_inventories/ single_org_single_satellite/ inventory.yml | rhsm_username | The username that the Ansible playbook will use to login to the Red Hat portal to download the Satellite manifest | No | Used when satellite_manifest is configured, for example in group_vars/Example_Organization/ satellite_manifest.yml |
sample_inventories/ single_org_single_satellite/ inventory.yml | rhsm_password | The password that the Ansible playbook will use to login to the Red Hat portal to download the Satellite manifest | No | Used when satellite_manifest is configured, for example in group_vars/Example_Organization/satellite_manifest.yml |
Inventory File | Variable Name | Variable Use | Required | Where used |
---|---|---|---|---|
sample_inventories/ multi_org_multi_satellite/ inventory.yml | satellite_username | The username that the Ansible playbook will use to login to the Satellite server | Yes | Used by all roles called by satellite-configuration.yml |
sample_inventories/ multi_org_multi_satellite/ inventory.yml | satellite_password | The password that the Ansible playbook will use to login to the Satellite server | Yes | Used by all roles called by satellite-configuration.yml |
sample_inventories/ multi_org_multi_satellite/ inventory.yml | rhsm_username | The username that the Ansible playbook will use to login to the Red Hat portal to download the Satellite manifest | No | Used when satellite_manifest is configured, for example in sample_inventories/multi_org_multi_satellite/host_vars/. Note that a unique manifest is required per Satellite organization AND per Satellite server. |
sample_inventories/ multi_org_multi_satellite/ inventory.yml | rhsm_password | The password that the Ansible playbook will use to login to the Red Hat portal to download the Satellite manifest | No | Used when satellite_manifest is configured, for example in sample_inventories/multi_org_multi_satellite/host_vars/. Note that a unique manifest is required per Satellite organization AND per Satellite server. |
sample_inventories/ multi_org_multi_satellite/ inventory.yml | registry_redhat_io_username | The username that will be stored in Red Hat Satellite to allow downloads from registry.redhat.io | No | Used when docker type repositories are configured for download from registry.redhat.io, for example in group_vars/ACME_Organization/satellite_products.yml. |
sample_inventories/ multi_org_multi_satellite/ inventory.yml | registry_redhat_io_password | The password that will be stored in Red Hat Satellite to allow downloads from registry.redhat.io | No | Used when docker type repositories are configured for download from registry.redhat.io, for example in group_vars/ACME_Organization/satellite_products.yml. |
Note: if a different login is required for each Satellite server, update the details in sample_inventories/multi_org_multi_satellite/inventory.yml. To run the playbook against a single Satellite server, use the --limit option, for example:
ansible-playbook --limit production.satellite.example.com -i sample_inventories/multi_org_multi_satellite satellite-configuration.yml
When the playbook is first run with all tags (the default) on a newly installed Satellite server, any content views which are defined will be created but NOT published. This means that later tasks, such as activation key creation, may fail because the content views are not yet available in the lifecycle environments. It is expected that on a new install the configuration will gradually be built up and tested, using tags to control which parts of the configuration are applied. As part of this, manual testing of product synchronization and content view publishing may be required. Once content views are published, the playbook should continue to run successfully and should be fully idempotent in behavior.
Although not required, a standard naming convention for Satellite resources provides support teams with a consistent experience and allows automation and scripting tools to use pattern matching and regular expressions to audit and manipulate the environment. The PDF guide 10 Steps to Build an SOE: How Red Hat Satellite 6 Supports Setting up a Standard Operating Environment suggests a possible naming convention. This repository uses some naming conventions from that guide along with some opinionated modifications.
Avoid special characters such as `@ ! " ' , ? # : ;` in object names, as this allows objects to be manipulated easily in CSV and YAML formats. If there is an established naming convention (e.g. for a product), consider replicating it. Although Satellite objects can often be renamed, you may find that items such as labels cannot be changed after they are created. As an example, if a Content View is initially created with the name `RHEL 9` and the label `RHEL_9`, and the name is subsequently updated in the UI, you might end up with the name `cv-rhel-9` but the label `RHEL_9`. This can cause confusion. It is advisable to avoid this where possible by configuring your resources in the YAML files and performing some analysis before implementing the configuration.
Naming Convention | Examples |
---|---|
< vendor or upstream project > - < product name or purpose > [ - < RHEL release > ] | MariaDB |
 | EPEL |
 | HPE |
Note that it is now possible to have repositories with different RHEL releases and different GPG keys under a single Product. This means, for example, that it's no longer required to have an EPEL-8 product and an EPEL-9 product. As such, the 'RHEL release' may not be required.
Naming Convention | Examples |
---|---|
RPM-GPG-KEY - < product > [ - < version or release > ] | RPM-GPG-KEY-EPEL-8 |
 | RPM-GPG-KEY-EPEL-9 |
 | RPM-GPG-KEY-MariaDB |
 | RPM-GPG-KEY-SPP |
The above naming convention is used (with uppercase characters) to mirror the filenames typically placed in /etc/pki/rpm-gpg.
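A hedged sketch of how such a key might be declared for the redhat.satellite content_credentials role is shown below; the key names and the file lookup path are illustrative assumptions:

```yaml
---
# Illustrative sketch - verify the supported keys against the
# redhat.satellite content_credentials role documentation.
satellite_content_credentials:
  - name: RPM-GPG-KEY-EPEL-9
    content_type: gpg_key
    # Hypothetical path: store the public key wherever suits your layout.
    content: "{{ lookup('file', 'files/RPM-GPG-KEY-EPEL-9') }}"
...
```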
Naming Convention | Examples |
---|---|
< vendor > [ - < product > ] [ - < product version > ] - for - < os version > - < architecture > - < repo type > | mariadb-11-2-for-rhel-8-x86_64-rpms |
 | mariadb-11-4-for-rhel-8-x86_64-rpms |
 | epel-8-for-rhel-8-x86_64-rpms |
 | epel-9-for-rhel-9-x86_64-rpms |
 | hpe-spp-gen9-for-rhel-8-x86_64-rpms |
 | hpe-spp-gen10-for-rhel-8-x86_64-rpms |
 | hpe-spp-gen11-for-rhel-8-x86_64-rpms |
 | hpe-spp-gen9-for-rhel-9-x86_64-rpms |
 | hpe-spp-gen10-for-rhel-9-x86_64-rpms |
 | hpe-spp-gen11-for-rhel-9-x86_64-rpms |
The naming standard here attempts to mirror the RHEL repo naming standards as of RHEL 8 and RHEL 9.
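As a hedged illustration, a custom product and repository following these conventions might be declared along the following lines for the redhat.satellite products/repositories roles. The key names shown are assumptions, so check the role documentation and the commented-out sample satellite_products.yml:

```yaml
---
# Illustrative sketch only - key names are assumptions.
satellite_products:
  - name: EPEL
    repositories:
      - name: epel-9-for-rhel-9-x86_64-rpms
        content_type: yum
        url: https://dl.fedoraproject.org/pub/epel/9/Everything/x86_64/
        gpg_key: RPM-GPG-KEY-EPEL-9
...
```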
Naming Convention | Examples |
---|---|
dev-test/stage/prod-dr | dev-test |
 | stage |
 | prod-dr |
These names should match the environments used in your organization.
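A hedged sketch of these environments for the redhat.satellite lifecycle_environments role is shown below. The structure is assumed; each environment names its prior environment, starting from the built-in Library:

```yaml
---
# Illustrative sketch - verify keys against the lifecycle_environments role.
satellite_lifecycle_environments:
  - name: dev-test
    prior: Library
  - name: stage
    prior: dev-test
  - name: prod-dr
    prior: stage
...
```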
Naming Convention | Examples |
---|---|
cv - < os/app > - < profile name > [ - < version or release > ] | cv-os-rhel-6 |
 | cv-os-rhel-7 |
 | cv-os-rhel-8 |
 | cv-os-rhel-9 |
 | cv-os-rhel-monthly |
 | cv-app-mariadb |
 | cv-app-spp |
The naming convention described depends on whether Content Views will be consumed by clients directly, or whether Composite Content Views are used. The use of `os` and `app` allows operating system views to be deployed at a different cadence to application ones. A common approach may be to have a monthly or quarterly `os` patching content view. Organizations may have different opinions as to whether applications should be updated at the same time.
Naming Convention | Examples |
---|---|
ccv - < biz/infra > - < role name > [ - < version or release > ] | ccv-biz-soe-monthly |
 | ccv-biz-database |
 | ccv-infra-containerhost |
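Assuming the structure used by the redhat.satellite content_views role (the repository, composite and component keys below are assumptions), the two conventions might translate to something like:

```yaml
---
# Illustrative sketch only - compare with the commented-out sample
# satellite_content_views.yml before use.
satellite_content_views:
  - name: cv-os-rhel-9
    repositories:
      - name: Red Hat Enterprise Linux 9 for x86_64 - BaseOS RPMs 9
        product: Red Hat Enterprise Linux for x86_64
  - name: ccv-biz-soe-monthly
    composite: true
    auto_publish: true
    components:
      - content_view: cv-os-rhel-9
        latest: true
...
```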
Naming Convention | Examples |
---|---|
act - < lifecycle environment > - < biz/infra/os > - < role name > | act-dev-test-biz-soe |
 | act-stage-biz-soe |
 | act-prod-dr-biz-soe |
 | act-dev-test-biz-database |
 | act-stage-biz-database |
 | act-prod-dr-biz-database |
 | act-stage-infra-containerhost |
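A hedged sketch of matching activation key definitions for the redhat.satellite activation_keys role is shown below; the keys are assumptions, and the lifecycle environment and content view names reuse the examples above:

```yaml
---
# Illustrative sketch only - verify keys against the activation_keys role.
satellite_activation_keys:
  - name: act-dev-test-biz-soe
    lifecycle_environment: dev-test
    content_view: ccv-biz-soe-monthly
  - name: act-prod-dr-biz-soe
    lifecycle_environment: prod-dr
    content_view: ccv-biz-soe-monthly
...
```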
Naming Convention | Examples |
---|---|
< org > < provisioning template name > | Example Organization Kickstart Default |
Naming Convention | Examples |
---|---|
ptable - < org > - < ptable name > | ptable-example-organization-gitserver |
Naming Convention | Examples |
---|---|
< org > - < compute resource name > - < location > | example-organization-rhelosp-london |