Certified Kubernetes Administrator (CKA) learning note
Install
Install docker.io
```
apt install docker.io
```
If you install Docker with a different cgroup driver, you have to make sure that Docker and Kubernetes use the same cgroup driver.
```
cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
```
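After changing daemon.json, restart Docker and confirm which driver is actually in effect; a quick check, assuming systemd manages the docker service:

```
sudo systemctl restart docker
docker info | grep -i cgroup
```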
Install the apt key and add the Kubernetes package source to the system
```
root@kube-master:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
OK
root@kube-master:~# cat <<EOF >> /etc/apt/sources.list.d/kubernetes.list
> deb http://apt.kubernetes.io/ kubernetes-xenial main
> EOF
```
Then install the Kubernetes packages
```
root@kube-master:~# apt update -y
root@kube-master:~# apt install -y kubelet kubeadm kubectl
```
- setup and config with kubeadm
You must choose a CNI when you execute kubeadm init. In this post I choose
Flannel, so I have to add the --pod-network-cidr option.
```
root@k8sm:~# kubeadm init --pod-network-cidr=10.244.0.0/16
```
After waiting a short while, you should see the message below, which shows the install is complete:
```
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.122.75:6443 --token .....<snip>
```
Follow the instructions and run the commands below as a regular user:
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Also make a note of the command starting with kubeadm join; you will need to
execute this command on each Kubernetes node to join it to the cluster.
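If you lose that command, you can generate a fresh join command on the master at any time:

```
kubeadm token create --print-join-command
```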
In order for your pods to communicate with one another, you’ll need to install pod networking. We are going to use Flannel for our Container Network Interface (CNI) because it’s easy to install and reliable. Enter this command:
```
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
Next, run the command below to make sure everything is coming up.
```
kubectl get pods --all-namespaces
```
If you see the coredns-xxxxxx pods running and your master node is Ready, your cluster is ready to accept worker nodes.
```
wshi@k8sm:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-ltgw2       1/1     Running   0          17m
kube-system   coredns-78fcdf6894-n8hw2       1/1     Running   0          17m
kube-system   etcd-k8sm                      1/1     Running   0          16m
kube-system   kube-apiserver-k8sm            1/1     Running   0          16m
kube-system   kube-controller-manager-k8sm   1/1     Running   0          16m
kube-system   kube-flannel-ds-amd64-ktcqm    1/1     Running   0          1m
kube-system   kube-proxy-nczhf               1/1     Running   0          17m
kube-system   kube-scheduler-k8sm            1/1     Running   0          16m
wshi@k8sm:~$ kubectl get node
NAME   STATUS   ROLES    AGE   VERSION
k8sm   Ready    master   17m   v1.11.2
```
- setup other nodes and join them to the cluster
For the remaining worker nodes, you just need to install kubectl, kubeadm, kubelet,
and docker as described above, then execute the kubeadm join ... command
mentioned before.
After a while, you should see all worker nodes are ready to use.
```
wshi@k8sm:~$ kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
k8sm    Ready    master   44m   v1.11.2
k8sn1   Ready    <none>   11m   v1.11.2
k8sn2   Ready    <none>   11m   v1.11.2
```
Run a Job
Applications that run to completion inside a pod are managed by "Jobs".
Most Kubernetes objects are created using YAML. Here is a sample YAML for a Job which uses perl to calculate pi to 2000 digits and then stops (it matches the pi job described further below):
```
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
```
Create this yaml file on your master node and call it “pi-job.yaml”. Run the job with the command:
```
kubectl create -f pi-job.yaml
```
Get detailed information about this job with the commands:
```
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
...
pi-72c7r 0/1 Completed 0 3m
$ kubectl describe pod pi-72c7r
Name: pi-72c7r
Namespace: default
Node: juju-cfb27c-2/10.188.44.225
Start Time: Wed, 29 Aug 2018 02:45:34 +0000
Labels: controller-uid=9a903f30-ab35-11e8-9b51-feb3e5f3b327
job-name=pi
Annotations: <none>
Status: Succeeded
IP: 10.1.33.7
Controlled By: Job/pi
Containers:
pi:
Container ID: docker://0d48f71cc6a2825cf4113f237170e63b06e1e310eca2e950dc979b48f26fb41f
Image: perl
Image ID: docker-pullable://perl@sha256:a264b269d0ea9687ea1485e47a0f4039b2dab99fc9c6e3faf001b452b57d6087
Port: <none>
Host Port: <none>
Command:
perl
-Mbignum=bpi
-wle
print bpi(2000)
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 29 Aug 2018 02:47:01 +0000
Finished: Wed, 29 Aug 2018 02:47:07 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-pkk4t (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-pkk4t:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-pkk4t
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m default-scheduler Successfully assigned default/pi-72c7r to juju-cfb27c-2
Normal Pulling 3m kubelet, juju-cfb27c-2 pulling image "perl"
Normal Pulled 1m kubelet, juju-cfb27c-2 Successfully pulled image "perl"
Normal Created 1m kubelet, juju-cfb27c-2 Created container
Normal Started 1m kubelet, juju-cfb27c-2 Started container
```
And view the log (STDOUT) with the command below:
```
kubectl logs pi-72c7r
```
Here is another example YAML file for a job which uses the image "busybox" and sleeps for 10 seconds (a minimal version; the job name here is illustrative):
```
apiVersion: batch/v1
kind: Job
metadata:
  name: sleep-job
spec:
  template:
    spec:
      containers:
      - name: sleep
        image: busybox
        command: ["sleep", "10"]
      restartPolicy: Never
```
Deploy a Pod
Pods usually represent running applications in a Kubernetes cluster. Here is an example of some yaml which defines a pod (this one runs an alpine container, matching the alpine pod deleted below):
```
apiVersion: v1
kind: Pod
metadata:
  name: alpine
  namespace: default
spec:
  containers:
  - name: alpine
    image: alpine
    command: ["sleep", "3600"]
```
run a Pod
```
# assuming the yaml above was saved as alpine.yaml
kubectl create -f alpine.yaml
```
delete a Pod
```
kubectl delete -f alpine.yaml
```
Or
```
kubectl delete pod alpine
```
```
kubectl delete pod/alpine
```
Examine the current status
```
kubectl get nodes
kubectl describe node node-name
kubectl get pods --all-namespaces -o wide
```
Use -n to specify the namespace:
```
kubectl get pods -n kube-system
```
Deployment
A yaml file for an nginx deployment
```
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```
Create the deployment and pod
```
kubectl create -f nginx-deployment.yaml
```
Find the detailed info
```
kubectl describe deployment nginx-deployment
```
Check which node the pod is running on
```
$ kubectl get pod nginx-deployment-7fc9b7bd96-c6wwh -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP          NODE            NOMINATED NODE
nginx-deployment-7fc9b7bd96-c6wwh   1/1     Running   0          21h   10.1.33.6   juju-cfb27c-2   <none>
```
rollout image version
To change the image version to 1.8, run the command below:
```
kubectl set image deployment nginx-deployment nginx=nginx:1.8
```
Or, you can update the image line in the yaml to the 1.8 version, and apply the changes with
```
kubectl apply -f nginx-deployment.yaml
```
Check the status of the rollout with the command below:
```
kubectl rollout status deployment nginx-deployment
```
Undo the previous rollout
```
kubectl rollout undo deployment nginx-deployment
```
View the history
```
kubectl rollout history deployment nginx-deployment
```
View the details of a specific revision in the history
```
kubectl rollout history deployment nginx-deployment --revision=x
```
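To actually roll the deployment back to one of those revisions rather than just inspect it, pass --to-revision to undo:

```
kubectl rollout undo deployment nginx-deployment --to-revision=x
```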
Setting Container Environment Variables
Deploy a pod to print Environment Variables
```
apiVersion: v1
kind: Pod
metadata:
  name: env-dump
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - env
    env:
    - name: STUDENT_NAME
      value: "Your Name"
    - name: SCHOOL
      value: "Linux Academy"
    - name: KUBERNETES
      value: "is awesome"
```
After the pod has executed, you can check the environment variables in its log:
```
$ kubectl logs env-dump
....
STUDENT_NAME=Your Name
SCHOOL=Linux Academy
KUBERNETES=is awesome
....
```
Scaling pods
command line
Scale the deployment with --replicas=X
```
$ kubectl get deployment
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2         2         2            2           21h
$ kubectl scale deployment nginx-deployment --replicas=3
deployment.extensions/nginx-deployment scaled
$ kubectl get deployment
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           21h
$ kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
...
nginx-deployment-7fc9b7bd96-c6wwh   1/1     Running   0          21h
nginx-deployment-7fc9b7bd96-kddj5   1/1     Running   0          31s
nginx-deployment-7fc9b7bd96-s86gc   1/1     Running   0          21h
```
yaml file
Update the replicas: x line in the yaml file, and apply the changes with
```
kubectl apply -f nginx-deployment.yml
```
Replication Controllers, Replica Sets, and Deployments
Deployments replaced the older ReplicationController functionality, but it never hurts to know where you came from. Deployments are easier to work with, and here’s a brief exercise to show you how. A Replication Controller ensures that a specified number of pod replicas are running at any one time. In other words, a Replication Controller makes sure that a pod or a homogeneous set of pods is always up and available.
To maintain three copies of an nginx container
Replication Controllers
```
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```
ReplicaSet
```
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```
Deployment
```
apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```
Label
Label nodes with colors
```
kubectl label node node1-name color=black
kubectl label node node2-name color=red
kubectl label node node3-name color=green
kubectl label node node4-name color=blue
```
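Confirm which labels ended up where with --show-labels:

```
kubectl get nodes --show-labels
```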
Label all pods in the default namespace by using --all
```
kubectl label pods -n default color=white --all
```
Get pods/nodes/etc. with a specific label
```
kubectl get pods -l color=white -n default
```
Get pods with multiple labels
```
kubectl get pods -l color=white,app=nginx
```
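kubectl also supports set-based selectors; for example, to match pods whose color is either white or black:

```
kubectl get pods -l 'color in (white,black)'
```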
DaemonSet
Deploy an nginx pod on every node with
```
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cthulu
  labels:
    daemon: "yup"
spec:
  selector:
    matchLabels:
      daemon: "pod"
  template:
    metadata:
      labels:
        daemon: pod
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: cthulu-jr
        image: nginx
```
Confirm that a pod is running on each node
```
$ kubectl get pod -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP           NODE            NOMINATED NODE
cthulu-kvbk9   1/1     Running   0          2m    10.1.33.8    juju-cfb27c-2   <none>
cthulu-t7hfc   1/1     Running   0          2m    10.1.45.13   juju-cfb27c-1   <none>
cthulu-x8hdf   1/1     Running   0          2m    10.1.31.9    juju-cfb27c-3   <none>
```
Label a Node & Schedule a Pod
Label a node so that you can schedule a pod onto it.
```
kubectl label node juju-cfb27c-3 deploy=here
```
Use nodeSelector in the yaml file to have the pod deployed on that specific
node:
```
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "300"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
  nodeSelector:
    deploy: here
```
Confirm
```
$ kubectl get pod busybox -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP           NODE            NOMINATED NODE
busybox   1/1     Running   0          10s   10.1.31.10   juju-cfb27c-3   <none>
```
Specific Schedulers
Ordinarily, we don't need to specify the scheduler's name in the spec because every pod is handled by the single default scheduler. Sometimes, however, developers need custom schedulers in charge of placing pods due to legacy or specialized hardware constraints.
Use schedulerName in the yaml to specify a custom scheduler:
```
apiVersion: v1
kind: Pod
metadata:
  name: annotation-default-scheduler
  labels:
    name: multischeduler
  annotations:
    scheduledBy: custom-scheduler
spec:
  schedulerName: custom-scheduler
  containers:
  - name: pod-container
    image: k8s.gcr.io/pause:2.0
```
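If the named scheduler is not actually running, the pod will simply stay Pending. The events show which scheduler, if any, placed it:

```
kubectl get events | grep Scheduled
```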
Logs
View the current logs of a pod
```
kubectl logs pod-name
```
Follow the logs of a pod interactively
```
kubectl logs pod-name -f
```
Print the last 10 lines of the log.
```
kubectl logs pod-name --tail=10
```
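If a pod has more than one container, name the container with -c:

```
kubectl logs pod-name -c container-name
```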
Container log files on the master/node machines can be found in the /var/log/containers directory.
Node maintenance
To do maintenance on a node, prevent the scheduler from putting new pods onto it and evict any existing pods. Ignore the DaemonSets; those pods only provide services to other local pods and will come back up when the node comes back up.
In this example I'm going to remove juju-cfb27c-2 from the cluster.
```
root@juju-cfb27c-0:~# kubectl drain juju-cfb27c-2 --ignore-daemonsets
node/juju-cfb27c-2 cordoned
WARNING: Ignoring DaemonSet-managed pods: cthulu-kvbk9, nginx-ingress-kubernetes-worker-controller-t6qh9
```
juju-cfb27c-2 is marked as “SchedulingDisabled”
```
# kubectl get node
NAME            STATUS                     ROLES    AGE   VERSION
juju-cfb27c-1   Ready                      <none>   4d    v1.11.2
juju-cfb27c-2   Ready,SchedulingDisabled   <none>   4d    v1.11.2
juju-cfb27c-3   Ready                      <none>   4d    v1.11.2
```
Now the -2 node can be shut down for maintenance work; no pod will be scheduled on it. If you create new pods, they will only be placed on juju-cfb27c-1 and -3.
Next, when the -2 node is ready to use again, bring it back with
```
# kubectl uncordon juju-cfb27c-2
node/juju-cfb27c-2 uncordoned
# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
juju-cfb27c-1   Ready    <none>   4d    v1.11.2
juju-cfb27c-2   Ready    <none>   4d    v1.11.2
juju-cfb27c-3   Ready    <none>   4d    v1.11.2
```
Upgrading Kubernetes Components
Confirm the current version with
```
kubectl get nodes
```
Upgrade kubeadm on the master node
```
sudo apt upgrade kubeadm
```
And confirm the version of kubeadm with
```
kubeadm version
```
check the upgrade plan
```
sudo kubeadm upgrade plan
```
apply the upgrade plan
```
sudo kubeadm upgrade apply v1.x.x
```
upgrade kubelet
Before upgrading kubelet, first drain the node you want to upgrade
```
kubectl drain NODENAME --ignore-daemonsets
```
Then, update kubelet manually with
```
sudo apt update
sudo apt upgrade kubelet
```
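Note: on some installs the Kubernetes packages are held so that routine upgrades can't move them unexpectedly; if apt reports kubelet as held back, unhold it around the upgrade:

```
sudo apt-mark unhold kubelet
sudo apt upgrade kubelet
sudo apt-mark hold kubelet
```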
Don't forget to make your node available again after the upgrade.
```
kubectl uncordon NODENAME
```
Network
Inbound Node Port Requirements
- Master Nodes
- TCP 6443 – Kubernetes API Server
- TCP 2379-2380 – etcd server client API
- TCP 10250 – Kubelet API
- TCP 10251 – kube-scheduler
- TCP 10252 – kube-controller-manager
- TCP 10255 – Read-only Kubelet API
- Worker Nodes
- TCP 10250 – Kubelet API
- TCP 10255 – Read-only Kubelet API
- TCP 30000-32767 – Node Port Services
expose a pod to the internet
```
# kubectl expose deployment NAME --type="NodePort" --port XX
```
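For example, to expose the nginx deployment from earlier on a node port (the port value is whatever your container serves):

```
kubectl expose deployment nginx-deployment --type="NodePort" --port 80
kubectl get service nginx-deployment
```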
Deploying a Load Balancer
```
kind: Service
apiVersion: v1
metadata:
  name: la-lb-service
spec:
  selector:
    app: la-lb
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  clusterIP: 10.0.171.223
  loadBalancerIP: 78.12.23.17
  type: LoadBalancer
```
Author Wenhan Shi
LastMod 2020-03-26 (02c19e3)