Alpha Cluster Deployment

Architecture

The alpha cluster is deployed under the same architecture described for the test cluster.

Deployed servers

The alpha cluster deploys the following instances of each class:

1 master/control plane server:

  • kubmaster01

2 node/worker servers:

  • kubnode01
  • kubnode02

1 NFS Storage server:

  • kubvol01

Instance information

All of the deployed alpha nodes are one of two instance types:

  • 1GB instance: (kubmaster01, kubvol01)
  • 2GB instance: (kubnode01, kubnode02)

All instances were deployed in the same datacenter of the same provider in order to enable private network communication.

1GB Instance

  • 1GB RAM
  • 1 vCPU
  • 20GB Storage

2GB Instance

  • 2GB RAM
  • 1 vCPU
  • 30GB Storage

Base system deployment

kubmaster01, kubnode01, kubnode02

All three of these machines are deployed as Fedora 25 instances.

Post-deployment configuration

  1. Set the system hostname
  2. Apply shared cluster configurations
  3. Disable password logins for the root user
  4. Install netdata for node monitoring
  5. Open firewall port for netdata
  6. Secure public ports
  7. Allow private network traffic
  8. Disable SELinux

Before copy/pasting, set shell variable:

  • host: desired machine hostname
(
  set -e
  dnf install -y salt-minion git-core
  echo "${host?}" > /etc/salt/minion_id
  git clone https://github.com/CodeForPhilly/ops.git /opt/ops
  ln -s /opt/ops/kubernetes/alpha-cluster/salt /srv/salt
  ln -s /opt/ops/kubernetes/alpha-cluster/pillar /srv/pillar
  salt-call --local state.highstate
)
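
The salt highstate applies the numbered steps above. For reference, a rough manual equivalent of a few of those steps is sketched below; this is an illustration only, and the salt states in the ops repo are the authoritative definitions (19999 is netdata's default port):

  hostnamectl set-hostname "${host?}"                          # 1. set the system hostname
  sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config
  systemctl reload sshd                                        # 3. disable password logins for root
  firewall-cmd --permanent --add-port=19999/tcp                # 5. open firewall port for netdata
  firewall-cmd --reload
  setenforce 0                                                 # 8. disable SELinux immediately...
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config # ...and across reboots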

kubvol01

This machine is deployed as an openSUSE Leap 42.2 instance.

Post-deployment configuration

  1. Set the system hostname
  2. Apply the shared cluster configurations
  3. Disable password logins for the root user
  4. Install the man command (not included in the base system for some reason, and it pulls in around 30 dependency packages)
  5. Install netdata for node monitoring
  6. Lock down the public firewall
  7. Open firewall to private network

Before copy/pasting, set shell variable:

  • host: desired machine hostname
(
  set -e
  zypper in -y salt-minion git-core
  echo "${host?}" > /etc/salt/minion_id
  git clone https://github.com/CodeForPhilly/ops.git /opt/ops
  ln -s /opt/ops/kubernetes/alpha-cluster/salt /srv/salt
  ln -s /opt/ops/kubernetes/alpha-cluster/pillar /srv/pillar
  salt-call --local state.highstate
)

Cluster provisioning

These instructions presume that the workstation from which the administrator is working has been configured with the necessary workstation resources.

kubmaster01, kubnode01, kubnode02

These nodes are provisioned using the kubernetes contrib ansible playbooks. The python environment from which ansible is run requires the python-netaddr module in order to use the playbooks.
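
Depending on how the workstation's python environment is managed, the module can be installed with pip (the PyPI package is named netaddr):

  # install netaddr into the python environment that runs ansible
  pip install netaddr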

Once the dependencies are satisfied, the following steps will provision the kubernetes nodes and master:

  1. Apply cluster configuration data to ansible playbooks
  2. Run ansible playbooks
  3. Open API port on master for remote use of kubectl

Before copy/pasting, set shell variables:

  • repo_contrib: path to kubernetes contrib repo
  • repo_ops: path to the ops repo
(
  set -e
  cp "${repo_ops?}/kubernetes/alpha-cluster/workstation-resources/kubernetes-contrib.patch" "${repo_contrib?}/kubernetes-contrib.patch"
  cd "${repo_contrib?}"
  git apply kubernetes-contrib.patch
  cd ansible/scripts
  ./deploy-cluster.sh
  ssh root@kubmaster01 'firewallctl zone "" -p add service https && firewallctl reload'
)
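
Once the playbooks finish, a quick sanity check can be run against the master. This assumes kubectl was installed on the master by the playbooks; adjust if the workstation's kubectl is already configured for remote access:

  # all three machines should appear and report Ready
  ssh root@kubmaster01 'kubectl get nodes'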

kubvol01

  1. Install ZFS and NFS
  2. Load ZFS kernel module
  3. Create ZFS pool for container volumes
  4. Run NFS server
  5. Run ZFS programs

This provisioning is handled as part of the salt highstate run during base system deployment.
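
For reference, a rough manual equivalent of the steps above is sketched below; the pool name, backing device, and private network range are placeholders, and the real values come from the salt states in the ops repo:

  modprobe zfs                                     # load the ZFS kernel module
  zpool create tank /dev/sdb                       # pool name and backing device are placeholders
  zfs create tank/volumes                          # dataset that holds container volumes
  echo '/tank/volumes 10.0.0.0/24(rw,no_root_squash,sync)' >> /etc/exports   # private CIDR is a placeholder
  exportfs -ra                                     # (re)load NFS exports
  systemctl enable --now nfs-server                # run the NFS server (unit name may differ on openSUSE)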

Cluster services

Logging

The default logging and monitoring deployed by the ansible playbooks is disabled, and a separate architecture has been deployed in its place. The architecture is as follows:

fluentd collectors -- forward --> fluentd aggregator -- GELF --> Graylog appliance

Fluentd Collectors

The fluentd collectors run as a kubernetes daemon set which mounts the docker log files stored on each node into the collector containers and consumes them from there. The role of the collectors is to take in these logs, parse them, and send them on to the aggregator.
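
The namespace and object names are defined by the manifests and are assumed here, but verifying that a collector pod is running on every node looks roughly like this:

  # list daemon sets, then confirm one collector pod is scheduled per node
  kubectl get daemonsets --all-namespaces
  kubectl get pods --all-namespaces -o wide | grep fluentd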

Fluentd aggregator

The fluentd aggregator serves as a single point at which all messages from the collectors are gathered, converted into the GELF message format used by the logging backend, and sent on to Graylog.
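
Assuming the Graylog appliance has a GELF UDP input configured on the conventional port 12201, the path from aggregator to backend can be smoke-tested by hand-delivering a message (the hostname below is a placeholder):

  # send a minimal, uncompressed GELF message over UDP to the Graylog input
  echo '{"version":"1.1","host":"kubnode01","short_message":"gelf smoke test","level":6}' \
    | nc -u -w1 graylog.example.org 12201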

Graylog appliance

The Graylog server is deployed from the OpenStack appliance image, which is converted into a raw image and cloned directly onto the root block device of the server. Each application that feeds logs into Graylog can be assigned its own stream, to which users can be granted permissions.
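
The conversion and cloning described above can be done with standard tools; the image filename and target block device below are placeholders:

  # convert the qcow2 appliance image to raw and write it onto the root block device
  qemu-img convert -O raw graylog-appliance.qcow2 graylog-appliance.raw
  dd if=graylog-appliance.raw of=/dev/vda bs=4M status=progress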

Postgres Database

There is a PostgreSQL container deployed into the cluster which is available for shared use by all applications.
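
How databases are handed out is up to the cluster administrator, but a hypothetical example of provisioning one for an application looks like this (the pod name, role, and password are placeholders):

  # create a dedicated role and database on the shared PostgreSQL instance
  kubectl exec -it postgres-0 -- psql -U postgres -c "CREATE USER exampleapp WITH PASSWORD 'change-me';"
  kubectl exec -it postgres-0 -- psql -U postgres -c "CREATE DATABASE exampleapp OWNER exampleapp;"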

Edge traffic router

The edge traffic router layer is handled by a daemon set which runs an OpenResty container on each node. These containers claim ports 80 and 443 on the node and forward public traffic to the appropriate container within the cluster.
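
Because every node claims ports 80 and 443, a request for any hosted application can be sent to any node; routing is driven by the Host header. The hostname below is a placeholder for a real application domain:

  # OpenResty proxies the request to whichever in-cluster service owns this hostname
  curl -H 'Host: app.example.org' http://kubnode01/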
