command/cheat sheet
- create deployment: `k run first-deployment --image=nginx` (creates a deployment **imperatively**)
- `k exec -it <pod-name> -- /bin/bash`
- `k get pods`:
  - READY 1/2 refers to containers in the pod
  - look at `.status.containerStatuses[].state` to see what's wrong
- `k edit deployments first-deployment` (use with caution: edits live objects and applies the edit immediately)
- `k scale deployment first-deployment --replicas=3`
- `k get pods --show-labels`
- `k set image deployment/<deployment-name> nginx=nginx:1.9.1`
- `k create -f <manifest.yml>`
- `k expose deployment my-deployment --type="LoadBalancer" --name="example-service"`

Types of command line tools:
- kubectl
- kubeadm: to bootstrap/administer k8s on nodes
- kubefed: for federated clusters
- kubelet, kube-proxy: node-level components
- or use a client library
- CLIs send POST (and other REST) requests to the kube-apiserver to instruct it what to do
Edit live deployments:
- `k edit deployment <deployment-name>`
Imperative commands (suitable for deletion)
- no YAML or config files; applies to objects that are already live
- eg `k run/expose/autoscale`
- eg `k create deployment first-deployment --image=nginx`

Declarative object config (preferred way, just use this)
- only YAML files used: `k apply -f config.yml`
- takes the live object config, the current object config, and the last-applied object config into consideration

Namespaces
- virtual cluster
- provides scope for names; names need to be unique only inside a namespace
- don't use namespaces for versioning; use labels instead
Secrets
- Create a Secret resource (from file / base64 string), mount the secret as a volume
- See secrets.yml & secrets-pod.yml (sketch below)
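A sketch of what secrets.yml and secrets-pod.yml likely contain, reconstructed from the session below (the values match what the `k exec` commands print; everything else is an assumption):

```yaml
# secrets.yml -- sketch; stringData avoids manual base64 encoding
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
type: Opaque
stringData:
  username: user1234
  password: 1234qwer
---
# secrets-pod.yml -- mounts the secret as a read-only volume
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume   # each key becomes a file here
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: test-secret
```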
Bash output
k8s-101/k8s_on_the_cloud on master [?] at ☸️ gke_parabolic-craft-216311_us-central1-a_my-first-cluster
➜ k create -f lab/secrets-pod.yml
pod/secret-test-pod created

k8s-101/k8s_on_the_cloud on master [?] at ☸️ gke_parabolic-craft-216311_us-central1-a_my-first-cluster took 2s
➜ k exec secret-test-pod -- cat /etc/secret-volume/username
user1234

k8s-101/k8s_on_the_cloud on master [?] at ☸️ gke_parabolic-craft-216311_us-central1-a_my-first-cluster took 3s
➜ k exec secret-test-pod -- cat /etc/secret-volume/password
1234qwer

k8s-101/k8s_on_the_cloud on master [?] at ☸️ gke_parabolic-craft-216311_us-central1-a_my-first-cluster took 3s
➜
kubectl create secret generic sensitive --from-file=./username.txt --from-file=./password.txt
- see secrets-pod-file.yml
➜ k get secrets
NAME TYPE DATA AGE
default-token-79dd7 kubernetes.io/service-account-token 3 96m
sensitive Opaque 2 6s
test-secret Opaque 2 21m
➜ k describe secrets sensitive
Name: sensitive
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
password.txt: 14 bytes
username.txt: 14 bytes
Secret values referenced via secretKeyRef are hidden in `kubectl describe`; only the secret/key reference is shown (see the describe output below).
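A sketch of the env section secrets-pod-file.yml likely uses, consistent with that describe output (container name and image taken from the output; the rest is assumed):

```yaml
# secrets-pod-file.yml -- sketch: secret keys injected as env vars
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod-file
spec:
  containers:
    - name: test-container
      image: nginx
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: sensitive       # secret created with --from-file above
              key: username.txt     # key defaults to the source file name
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: sensitive
              key: password.txt
```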
➜ k describe pod secret-test-pod-file
Name: secret-test-pod-file
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-my-first-cluster-default-pool-1fe788fb-g00n/10.128.0.2
Start Time: Sat, 07 Dec 2019 17:26:16 +0800
Labels: <none>
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container test-container
Status: Running
IP: 10.40.0.13
Containers:
test-container:
Container ID: docker://591830164a760102fec811114665f259a79c46d95242f3b52c1dbd9164a011d5
Image: nginx
Image ID: docker-pullable://nginx@sha256:189cce606b29fb2a33ebc2fcecfa8e33b0b99740da4737133cdbcee92f3aba0a
Port: <none>
Host Port: <none>
State: Running
Started: Sat, 07 Dec 2019 17:26:18 +0800
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Environment:
SECRET_USERNAME: <set to the key 'username.txt' in secret 'sensitive'> Optional: false
SECRET_PASSWORD: <set to the key 'password.txt' in secret 'sensitive'> Optional: false
Persistent Volumes
- see here
- create a hardware-specific PersistentVolume (PV) manifest, or
- create a PV via a PersistentVolumeClaim / dynamic storage provisioning (link); a PVC sketch follows
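A minimal PersistentVolumeClaim sketch for dynamic provisioning (name, storage class, and size are assumptions):

```yaml
# pvc.yml -- sketch: the storage class provisions a matching PV on demand
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim                  # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard      # GKE's default class; assumption
  resources:
    requests:
      storage: 1Gi
```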
ConfigMaps
- see configmaps and configmap-pod.yml (sketch below)
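A sketch of what configmaps / configmap-pod.yml might look like (names, keys, and values are assumptions):

```yaml
# configmap + consuming pod -- sketch
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                # hypothetical name
data:
  log_level: debug                # hypothetical key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
    - name: test-container
      image: nginx
      env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: log_level
```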
Downward API: used to pass k8s-specific data from the pod down to its containers
- exposes information about a pod to the containers running inside it
- eg labels and annotations can be mounted into a container via `volumes.downwardAPI` and the usual `spec.containers[].volumeMounts`
- see dapi-volume.yml (a sketch follows the session below)
- each `fieldRef` gets mounted at `<mountPath>/<path>`
➜ k exec -it dapi-volume /bin/bash
root@dapi-volume:/# cd /etc/podinfo
root@dapi-volume:/etc/podinfo# ls
annotations  labels
root@dapi-volume:/etc/podinfo# cat annotations
build="two"
builder="john-doe"
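A sketch of dapi-volume.yml consistent with the session above: labels and annotations exposed via a downwardAPI volume mounted at /etc/podinfo (container name, image, and the label are assumptions):

```yaml
# dapi-volume.yml -- sketch
apiVersion: v1
kind: Pod
metadata:
  name: dapi-volume
  labels:
    app: dapi-test                # hypothetical label
  annotations:
    build: "two"
    builder: "john-doe"
spec:
  containers:
    - name: client-container      # hypothetical name
      image: nginx
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: labels          # each fieldRef lands at <mountPath>/<path>
            fieldRef:
              fieldPath: metadata.labels
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```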
Container lifecycle hooks
- Exposed as hooks:
  - PostStart (called immediately after the container is created, no params)
  - PreStop (called immediately before the container terminates)
- Register hook handlers via:
  - Exec
  - HTTP
- Delivery is at-least-once
Place inside spec.containers[] of the Pod template:
lifecycle:
  postStart:
    exec:
      command: ["/bin/bash", "-c", "poststart.sh"]
  preStop:
    exec:
      command: ["/bin/bash", "-c", "prestop.sh"]
- NodeSelectors:
  - Tag nodes with labels, add a nodeSelector to the pod template
  - Goes under the pod's `spec` (i.e. `spec.nodeSelector`), e.g.:
    nodeSelector:
      <some-node-label>: <some-node-label-value>
- Affinity:
  - Node affinity: steer a pod towards (or away from) nodes (a sketch follows)
  - Pod affinity: steer pods towards or away from other pods
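A node-affinity sketch for the pod spec (the label key/value are assumptions):

```yaml
# goes under the pod's spec; requires nodes labelled disktype=ssd
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: disktype       # hypothetical node label
              operator: In
              values:
                - ssd
```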
Taints & tolerations
- For anti-affinity between pods and nodes
- To make sure certain nodes never host certain pods:
  - mark nodes with a taint
  - only pods with a toleration for that taint are allowed to run on that node
- Use cases: dedicated nodes for certain users; nodes with special hardware, reserved for pods that actually make use of the hardware
- Taints
  k taint nodes <node-name> env=dev:NoSchedule   # key=value:effect
  # here the effect is NoSchedule (don't schedule pods on this node
  # unless the pod has a toleration for this key/value pair)
- Tolerations
  - add under the pod template's `spec` (i.e. `spec.tolerations`):
    tolerations:
      - key: "env"
        operator: "Equal"
        value: "dev"
        effect: "NoSchedule"   # this pod tolerates the env=dev:NoSchedule taint
Init containers
- specified in `spec.initContainers[]` of the pod template
- Run before app containers
- Always run to completion
- Run serially (each one only starts after the previous one finishes)
- example: init.yml (sketch below)
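A sketch of what init.yml might look like: the init container must run to completion before the app container starts (name, image, and command are assumptions):

```yaml
# init.yml -- sketch
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                 # hypothetical name
spec:
  initContainers:
    - name: wait-for-service      # blocks until the lookup succeeds
      image: busybox
      command: ["sh", "-c", "until nslookup my-service; do sleep 2; done"]
  containers:
    - name: app
      image: nginx
```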
- see here
Probes
- Liveness
  - `spec.containers[].livenessProbe`
  - see exec-liveness (sketch below)
- Readiness
  - `spec.containers[].readinessProbe`
  - probe to determine if the pod's IP should be added to the Endpoints object, i.e. included in the service
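A liveness-probe sketch along the lines of the exec-liveness example: the kubelet runs the command in the container, and a non-zero exit triggers a restart (the timings are assumptions):

```yaml
# exec liveness probe -- sketch; healthy only while /tmp/healthy exists
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
    - name: liveness
      image: busybox
      args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/healthy"]
        initialDelaySeconds: 5
        periodSeconds: 5
```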
ReplicaSets
- Manifest contains:
  - pod template
  - pod selector
  - number of replicas
  - label of the ReplicaSet
- see frontend-replicaset (sketch below)
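A sketch of frontend-replicaset showing those four pieces (labels and replica count are assumptions):

```yaml
# frontend-replicaset -- sketch
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    tier: frontend                # label of the ReplicaSet itself
spec:
  replicas: 3                     # number of replicas
  selector:                       # pod selector
    matchLabels:
      tier: frontend
  template:                       # pod template
    metadata:
      labels:
        tier: frontend            # must match the selector
    spec:
      containers:
        - name: nginx
          image: nginx
```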
- Delete just the ReplicaSet but not its pods:
  - `k delete rs <rs-name> --cascade=false`
  - pods are now orphans
  - can create a new ReplicaSet whose labelSelector matches the orphaned pods, making them part of the new ReplicaSet
- Isolating pods from a ReplicaSet: change a pod's labels so they no longer match the selector; the ReplicaSet replaces it and the pod is left standalone
- Scaling a ReplicaSet
  - edit the `replicas` field in the manifest and run `k apply -f manifest.yml`
- Auto-scaling a ReplicaSet: HorizontalPodAutoscaling (sketch below)
  - `--horizontal-pod-autoscaler-downscale-delay` (controller-manager flag: cooldown before scaling down again)
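Declaratively, the same thing is a HorizontalPodAutoscaler object; a minimal sketch (name, target, and thresholds are assumptions):

```yaml
# hpa.yml -- sketch: scales the frontend ReplicaSet on CPU utilisation
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: ReplicaSet
    name: frontend
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```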
Deployments
Encapsulates ReplicaSets, versioning, magic
- ReplicaSets are associated with a Deployment (the ReplicaSet template is encapsulated in the Deployment)
- Rollback / versioning
  - every change to the deployment's pod template is tracked (only PodTemplate changes create revisions)
  - a new revision of the deployment is created for each change
  - `k rollout history deployment/nginx-deployment` to see the revision history
  - `k rollout undo deployment/nginx-deployment [--to-revision=<n>]` to roll back
  - `deployment.spec.revisionHistoryLimit` controls how many revisions (ReplicaSets) are kept
  - `k rollout status deployment <deployment-name>` to see the status of a rollout
- Update a container in the deployment's pod template
  - a new ReplicaSet and new pods are created (running the new version of the container)
  - the old ReplicaSet continues to exist, but its pods are gradually scaled down to 0
- Magic
  - versioning
  - instant rollback
  - rolling deployments
  - blue/green
  - canary
- While creating deployments, append `--record`: keeps track of the command that caused each change
- Fields
  - strategy
    - `.spec.strategy.type==Recreate`
    - `.spec.strategy.type==RollingUpdate`
- Pause/resume a Deployment
  - either imperatively or declaratively
  - `k rollout pause deployment/nginx-deployment` causes a rolling-update deployment to be paused (resume with `k rollout resume`)
- see nginx deployment (sketch below)
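A sketch of the nginx deployment referenced above, with the strategy fields spelled out (replica count and surge settings are assumptions):

```yaml
# nginx-deployment -- sketch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  revisionHistoryLimit: 10        # how many old ReplicaSets to keep
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                 # extra pods allowed during an update
      maxUnavailable: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.9.1
```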
StatefulSets
Description
- Pods are created from the same spec but are not interchangeable; each has a persistent identifier that is maintained across (re)scheduling
- for ordered, graceful deployment/scaling/deletion/termination and rolling updates
- storage needs to be persistent

Output of stateful set
- see stateful-set (sketch below)
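A sketch of stateful-set.yml reconstructed from the apply/describe output below (the headless serviceName is an assumption):

```yaml
# stateful-set.yml -- sketch; describe shows replicas: 2, selector app=nginx
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx              # headless service; name assumed
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
```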
k8s-101/k8s_on_the_cloud/lab on master [»!+] at ☸️ gke_parabolic-craft-216311_us-central1-a_my-first-cluster
➜ k apply -f stateful-set.yml
statefulset.apps/web created
k8s-101/k8s_on_the_cloud/lab on master [»!+] at ☸️ gke_parabolic-craft-216311_us-central1-a_my-first-cluster took 2s
➜ k get pods -l app=nginx
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 12s
web-1 1/1 Running 0 9s
k8s-101/k8s_on_the_cloud/lab on master [»!+] at ☸️ gke_parabolic-craft-216311_us-central1-a_my-first-cluster
➜ k scale statefulset web --replicas=4
statefulset.apps/web scaled
k8s-101/k8s_on_the_cloud/lab on master [»!+] at ☸️ gke_parabolic-craft-216311_us-central1-a_my-first-cluster took 2s
➜ k get pods -l app=nginx
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 73s
web-1 1/1 Running 0 70s
web-2 1/1 Running 0 5s
web-3 0/1 Pending 0 3s
k8s-101/k8s_on_the_cloud/lab on master [»!+] at ☸️ gke_parabolic-craft-216311_us-central1-a_my-first-cluster took 2s
➜ k describe statefulset web
Name: web
Namespace: default
CreationTimestamp: Tue, 10 Dec 2019 09:22:38 +0800
Selector: app=nginx
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"nam
e":"web","namespace":"default"},"spec":{"replicas":2,"select...
Replicas: 4 desired | 4 total
Update Strategy: RollingUpdate
Partition: 824639281820
Pods Status: 3 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Volume Claims: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 95s statefulset-controller create Pod web-0 in StatefulSet web successful
Normal SuccessfulCreate 92s statefulset-controller create Pod web-1 in StatefulSet web successful
Normal SuccessfulCreate 27s statefulset-controller create Pod web-2 in StatefulSet web successful
Normal SuccessfulCreate 25s statefulset-controller create Pod web-3 in StatefulSet web successful
Jobs
- refer to Job Resources
- see jobs (sketch below)
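A minimal Job sketch: runs pods to completion; restartPolicy must be Never or OnFailure (name and command are assumptions):

```yaml
# job.yml -- sketch
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job                 # hypothetical name
spec:
  completions: 1
  backoffLimit: 4                 # retries before the Job is marked failed
  template:
    spec:
      restartPolicy: Never        # required: Never or OnFailure
      containers:
        - name: hello
          image: busybox
          command: ["echo", "hello from a job"]
```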
Services
- Virtual IP (VIP) access to a service
  - can be used by clients from outside the cluster to access the service object
  - implemented by kube-proxy (which runs on each node)
  - each kube-proxy relays external traffic to the correct VIPs (proxies incoming requests on NodePort services)
- Service types: ClusterIP, NodePort, LoadBalancer, ExternalName
- Internal DNS provides domain-name resolution
  - eg http://service_name:port/...
  - eg http://service_name.namespace:port/... if calling from another namespace
  - DNS SRV records for named ports: `_port-name._port-protocol.service-name.namespace.svc.cluster.local`
    - resolves to the port number (and hostname)
- Securing services
  - use HTTPS to secure the channel
  - need a cert + configure the server to use the certs + a Secret to make the certs accessible to pods
k8s-101/k8s_on_the_cloud/lab on master [!] at ☸️ gke_parabolic-craft-216311_us-central1-a_my-first-cluster took 5s
➜ k run nginx --image=nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
k8s-101/k8s_on_the_cloud/lab on master [!] at ☸️ gke_parabolic-craft-216311_us-central1-a_my-first-cluster took 2s
➜ k expose deployment nginx --type=LoadBalancer --name=nginx-service --port=8080
service/nginx-service exposed
k8s-101/k8s_on_the_cloud/lab on master [!] at ☸️ gke_parabolic-craft-216311_us-central1-a_my-first-cluster
➜ k get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.240.1 <none> 443/TCP 4d7h
nginx-service LoadBalancer 10.43.242.0 <pending> 8080:31060/TCP 39s
k8s-101/k8s_on_the_cloud/lab on master [!] at ☸️ gke_parabolic-craft-216311_us-central1-a_my-first-cluster
➜ k get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.240.1 <none> 443/TCP 4d7h
nginx-service LoadBalancer 10.43.242.0 35.226.53.23 8080:31060/TCP 109s
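The declarative equivalent of the `k expose` command above would look roughly like this. The `run: nginx` selector is the label `k run` applies to the deployment's pods; targetPort is an assumption (without --target-port, `k expose` defaults it to the service port):

```yaml
# nginx-service -- sketch of what `k expose` generated
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    run: nginx                    # matches the pods of the nginx deployment
  ports:
    - port: 8080                  # the service/VIP port from --port=8080
      targetPort: 80              # nginx listens on 80; assumption (see above)
```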
RBAC
- see RBAC
- for identity and access management
- Identities:
  - individual users, groups, service accounts
- Access:
  - RBAC
    - Role: applies within a namespace
    - ClusterRole: applies to the entire cluster
- Achieve the identity-access association with a binding
  - RoleBinding, ClusterRoleBinding
  - a ClusterRoleBinding has no namespace in its metadata
- see overfishing (a Role/RoleBinding sketch follows)
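A sketch of a namespaced Role plus its RoleBinding (names, namespace, and subject are assumptions):

```yaml
# role + rolebinding -- sketch
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader                # hypothetical name
  namespace: default              # Roles are namespaced
rules:
  - apiGroups: [""]               # "" = core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane                    # hypothetical identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```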