Alpha Cluster Deployment
The alpha cluster is deployed under the same architecture described for the test cluster.
The alpha cluster deploys the following instances of each class:
1 master/control plane server:
- kubmaster01
2 node/worker servers:
- kubnode01
- kubnode02
1 NFS Storage server:
- kubvol01
All of the deployed alpha nodes are one of two types of instance:
- 1GB instance: kubmaster01, kubvol01
- 2GB instance: kubnode01, kubnode02
All instances were deployed in the same datacenter of the same provider in order to enable private network communication.
1GB instance:
- 1GB RAM
- 1 vCPU
- 20GB Storage
2GB instance:
- 2GB RAM
- 1 vCPU
- 30GB Storage
The three kubernetes machines (kubmaster01, kubnode01, and kubnode02) are deployed as Fedora 25 instances, provisioned as follows:
- Set the system hostname
- Apply shared cluster configurations
- Disable password logins for the root user
- Install netdata for node monitoring
- Open firewall port for netdata
- Secure public ports
- Allow private network traffic
- Disable SELinux
Before copy/pasting, set shell variable:
- host: desired machine hostname
(
set -e
# install salt and git
dnf install -y salt-minion git-core
# identify this minion by its hostname
echo "${host?}" > /etc/salt/minion_id
# fetch the cluster configuration and expose it as the local salt/pillar roots
git clone https://github.com/CodeForPhilly/ops.git /opt/ops
ln -s /opt/ops/kubernetes/alpha-cluster/salt /srv/salt
ln -s /opt/ops/kubernetes/alpha-cluster/pillar /srv/pillar
# apply the full provisioning state locally (masterless)
salt-call --local state.highstate
)
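For example, when provisioning the master, set the variable first and then paste the block above into a root shell on the machine:
host=kubmaster01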
The NFS storage server (kubvol01) is deployed as an openSUSE Leap 42.2 instance, provisioned as follows:
- Set the system hostname
- Apply the shared cluster configurations
- Disable password logins for the root user
- Install the man command (don't ask me why it's not there to start with... or why it depends on 30 f'ing packages)
- Install netdata for node monitoring
- Lock down public firewall
- Open firewall to private network
Before copy/pasting, set shell variable:
- host: desired machine hostname
(
set -e
# install salt and git
zypper in -y salt-minion git-core
# identify this minion by its hostname
echo "${host?}" > /etc/salt/minion_id
# fetch the cluster configuration and expose it as the local salt/pillar roots
git clone https://github.com/CodeForPhilly/ops.git /opt/ops
ln -s /opt/ops/kubernetes/alpha-cluster/salt /srv/salt
ln -s /opt/ops/kubernetes/alpha-cluster/pillar /srv/pillar
# apply the full provisioning state locally (masterless)
salt-call --local state.highstate
)
These instructions presume that the workstation from which the administrator is working has been configured with the necessary workstation resources.
These nodes are deployed using the kubernetes contrib ansible playbooks. The python environment from which ansible is run will require the python-netaddr module in order to use the playbooks.
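One way to satisfy this, assuming a Python 3 virtualenv is used for ansible (the paths are illustrative):
python3 -m venv ~/.virtualenvs/ansible
. ~/.virtualenvs/ansible/bin/activate
pip install ansible netaddr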
Once the dependencies are satisfied, the following steps will provision the kubernetes nodes and master:
- Apply cluster configuration data to ansible playbooks
- Run ansible playbooks
- Open API port on master for remote use of kubectl
Before copy/pasting, set shell variables:
- repo_contrib: path to the kubernetes contrib repo
- repo_ops: path to the ops repo
(
set -e
# overlay the cluster-specific configuration onto the upstream contrib playbooks
cp "${repo_ops?}/kubernetes/alpha-cluster/workstation-resources/kubernetes-contrib.patch" "${repo_contrib?}/kubernetes-contrib.patch"
cd "${repo_contrib?}"
git apply kubernetes-contrib.patch
# provision the master and nodes
cd ansible/scripts
./deploy-cluster.sh
# open the API port (https) on the master so kubectl can be used remotely
ssh root@kubmaster01 'firewallctl zone "" -p add service https && firewallctl reload'
)
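Once the playbooks complete and the API port is open, a quick sanity check from the workstation confirms the cluster is up (this assumes kubectl has already been configured against the new master):
kubectl get nodes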
In addition to the shared configuration, the storage server is provisioned to:
- Install ZFS and NFS
- Load the ZFS kernel module
- Create a ZFS pool for container volumes
- Run the NFS server
- Run the ZFS services
This provisioning is taken care of as part of the previous salt highstate; the manual equivalent is sketched below for reference.
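The device, pool, and filesystem names here are illustrative stand-ins; the actual values live in the salt states:
(
set -e
# make the zfs kernel module available
modprobe zfs
# create a pool on the data disk and a filesystem for container volumes
zpool create containers /dev/vdb
zfs create containers/volumes
# export the filesystem over NFS and start the NFS server
zfs set sharenfs=on containers/volumes
systemctl enable --now nfs-server
)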
The default logging and monitoring deployed by the ansible playbooks are disabled, and a separate architecture has been deployed in their place. The architecture is as follows:
fluentd collectors -- forward --> fluentd aggregator -- GELF --> Graylog appliance
The fluentd collectors are run as a kubernetes daemon set which mounts the docker log files stored on each host into the container and consumes them from there. The role of the collectors is to ingest these logs, parse them, and send them to the aggregator.
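The files the collectors consume can be inspected directly on any node; with docker's default json-file logging driver there is one log file per container:
ls /var/lib/docker/containers/*/*-json.log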
The fluentd aggregator serves as the single point at which all messages from the collectors are gathered, converted into the message format used by the logging backend (GELF), and sent on to that backend. A quick way to exercise this leg is sketched after the links below.
See also:
- Image configuration
- Deployment configuration
- Storage configuration
- Persistent Volume Claim configuration
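To exercise the aggregator-to-Graylog leg on its own, a hand-built GELF message can be sent straight to the Graylog input (this assumes a GELF UDP input listening on the conventional port 12201; the hostname is illustrative):
echo -n '{"version":"1.1","host":"gelf-test","short_message":"hello from the alpha cluster"}' | nc -u -w1 graylog.example.org 12201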
The Graylog server is deployed from the OpenStack appliance image, which is converted into a raw image and cloned directly onto the root block device of the server. Each application that feeds logs into Graylog can be assigned its own stream, to which users can be granted permissions.
There is a PostgreSQL container deployed into the cluster which is available for shared use by all applications; a sketch of connecting to it follows the links below.
See also:
- Image configuration
- Deployment configuration
- Storage configuration
- Persistent Volume Claim configurations
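As a sketch of how an application might reach it, run a throwaway psql client inside the cluster (the service name and user here are assumptions, not the actual deployment values):
kubectl run -it --rm psql-test --image=postgres --restart=Never -- psql -h postgres.default.svc.cluster.local -U postgres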
The edge traffic routing layer is handled by a daemon set which runs an OpenResty container on each node. These containers claim ports 80 and 443 on the node and forward public traffic to the appropriate container within the cluster.
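Since every node claims ports 80 and 443, any node can be spot-checked from outside the cluster (the Host header value is illustrative):
curl -I -H 'Host: app.example.org' http://kubnode01/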