IT-SDK-Kubernetes-YAML
Contents
- 1 Introduction
- 2 Infrastructure
- 3 Linux-Admin
- 4 Top Topics
- 5 Install kubectl
- 6 Install minikube
- 7 Life Cycle: kubeadm
- 8 Life Cycle: kubectl
- 9 YAML
- 10 Training
- 10.1 Introduction
- 10.2 Basics of Kubernetes
- 10.3 Installation and Configuration
- 10.4 Kubernetes Architecture
- 10.5 APIs and Access
- 10.6 API Objects
- 10.7 Managing State With Deployments
- 10.8 Services
- 10.9 Volumes and Data
- 10.10 Ingress
- 10.11 Scheduling
- 10.12 Logging and Troubleshooting
- 10.13 Custom Resource Definition
- 10.14 Helm
- 10.15 Security
- 10.16 High Availability
Introduction
- Web-Source: https://medium.com/google-cloud/kubernetes-101-pods-nodes-containers-and-clusters-c1509e409e16
- Web-Source: https://kubernetes.io/docs/reference/kubectl/cheatsheet/
- Web-Source: http://kubernetesbyexample.com/
Notes
- Cluster >>> Nodes >>> Deployments >>> Pods (Endpoint) >>> Containers (App) >> Service (s:app=A)
- Node: has a Node-IP
- Pod: has an Endpoint-IP
- Service: has a Cluster-IP
- Master-Components: kube-apiserver, etcd, kube-scheduler, kube-controller-manager
- Node-Components: kubelet, kube-proxy
- Deleting a deployment does not delete the endpoints (Pod) or services.
- Deployment: primary purpose is to declare how many replicas of a pod should be running at a time.
- Resource: ???
- Persistent Volumes: To store data permanently
- Isolation between pods
Services
- Ingress: communicate with a service running in a pod >> Ingress-Controller / LoadBalancer
- Service in Kubernetes defines a logical set of Pods and a policy by which to access them.
- The set of Pods targeted by a Service is usually determined by a Label (Selector).
- Services can be exposed in different ways by specifying a type in the ServiceSpec.
- Type: ClusterIP, NodePort, LoadBalancer, ExternalName
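As a sketch of how a type is set in the ServiceSpec (the name, selector, and ports here are illustrative, not taken from this page), a NodePort Service selecting pods labeled app=A might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc          # illustrative name
spec:
  type: NodePort        # default is ClusterIP; LoadBalancer and ExternalName are also valid
  selector:
    app: A              # matches the pod label from the note above
  ports:
  - port: 80            # cluster-internal port
    targetPort: 8080    # container port
    nodePort: 30080     # optional; must fall in the default 30000-32767 range
```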
Infrastructure
- Installation with Vagrant: https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/
- Master (CPU: 3, MEM: 4G, Storage: 5G)
- Worker (CPU: 1, MEM: 2G, Storage: 5G)
- Ubuntu 16.04 LTS: ubuntu/xenial64
- Ubuntu 18.04 LTS: ubuntu/bionic64
# -*- mode: ruby -*-
# vi: set ft=ruby :
IMAGE_NAME = "ubuntu/xenial64"
Vagrant.configure("2") do |config|
config.ssh.insert_key = false
config.vm.provider "virtualbox" do |vb|
vb.cpus = 2
vb.memory = 3072
end
config.vm.define "k8s-a-master" do |master|
master.vm.box = IMAGE_NAME
master.vm.hostname = "k8s-a-master"
master.vm.network "private_network", ip: "192.168.50.10"
master.vm.network "public_network", ip: "192.168.178.110", :mac => "0800278A8081"
end
config.vm.define "k8s-a-node01" do |node|
node.vm.box = IMAGE_NAME
node.vm.hostname = "k8s-a-node01"
node.vm.network "private_network", ip: "192.168.50.11"
node.vm.network "public_network", ip: "192.168.178.111", :mac => "0800278A8082"
end
end
Linux-Admin
$ visudo -f /etc/sudoers.d/student # Add: student ALL=(ALL) ALL
$ PATH=$PATH:/usr/sbin:/sbin
$ export PATH="/home/sh/.minishift/cache/oc/v3.11.0/linux:$PATH"
$ tar -xvf filename
$ ip addr show
$ vim /etc/hosts
$ less filename.txt # Display the contents of a file
$ cat filename.txt # Display the contents of a file
$ tee filename.txt # Redirect output to a file and to stdout
Top-Themen
- Installation
- Cluster
- Nodes
- Pods
- InitContainers
- Deployments, StatefulSet
- Services, ServiceDiscovery, Expose, PortForward
- Volumes
- Labels
- Taints
- ReplicaSets, DaemonSets, Rollout
- Secrets
- Logging
- HealthCheck
- Jobs & CronJobs
- APIs
- Tool-Monitoring
- Tool-Helm
Install kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
sudo install kubectl /sdk/bin
Install minikube
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube
sudo install minikube /sdk/bin
Life Cycle: kubeadm
$ kubeadm init
$ kubeadm join
$ kubeadm config
$ kubeadm token
Life Cycle: kubectl
Basics-Main
$ kubectl config --kubeconfig=$CFG_FILE
$ kubectl config --kubeconfig=$CFG_FILE use-context $CONTEXT_NAME
...
$ kubectl run $NAME --image=nginx --replicas=10
$ kubectl create deployment $NAME --image=nginx
...
$ kubectl run $NAME --image=nginx:1.23 --replicas=2 --port=9876 # Create and run a particular image.
$ kubectl create -f file.yaml # Create a resource from a file.
$ kubectl apply -f file.yaml # Apply a configuration to a resource by filename. Create the resource initially with either 'apply' or 'create --save-config'.
$ kubectl replace -f file.yaml # Terminate and replace a resource by filename.
...
$ kubectl get all
$ kubectl get all --all-namespaces
$ kubectl get all -o wide
$ kubectl get namespaces
$ kubectl get nodes
$ kubectl get deployments
$ kubectl get pods
$ kubectl get services
$ kubectl get endpoints
$ kubectl get jobs
...
$ kubectl describe $RESOURCE $NAME
$ kubectl describe deployment nginx
...
$ kubectl delete $TYPE --all -n $NAME
$ kubectl delete $TYPE $NAME
$ kubectl delete deployments $NAME
$ kubectl delete pod $NAME
$ kubectl delete service $NAME
$ kubectl delete endpoint $NAME
$ kubectl delete job $NAME
Basics-Mix
$ kubectl get deployment nginx -o yaml > file.yaml
$ kubectl scale deployment nginx --replicas=3
$ kubectl apply -f project/k8s/development --recursive
$ kubectl get pods -Lapp -Ltier -Lrole
$ kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'
$ kubectl autoscale deployment/my-nginx --min=1 --max=3
Services
$ minikube service hello-minikube --url
Labels
$ kubectl label nodes $NAME typ=node1
$ kubectl label pods $NAME owner=michael
$ kubectl get pods --show-labels
$ kubectl get pods --selector owner=michael
$ kubectl get pods -l env=development
$ kubectl get pods -l 'env in (production, development)'
$ kubectl delete pods -l 'env in (production, development)'
ReplicaSet
exec
$ kubectl exec -it $POD -- printenv
$ kubectl exec -it $POD -- /bin/bash
$ kubectl exec -it $POD -- /bin/bash -c 'env'
$ kubectl exec -it $POD -- /bin/bash -c 'df -ha |grep car'
$ kubectl exec -it $POD -- /bin/bash -c 'echo $ilike'
$ kubectl exec -it $POD -- /bin/bash -c 'cat /etc/cars/car.trim'
$ kubectl exec -it $POD -c shell -- ping $SVC.$NAMESPACE.svc.cluster.local
$ kubectl exec -it $POD -c c1 -- bash
Expose
- ClusterIP, NodePort, LoadBalancer, or ExternalName.
$ kubectl run $NAME --image=nginx:1.12 --port=9876
$ kubectl expose deployment $NAME # Exposes the Service on a ClusterIP.
$ kubectl expose deployment $NAME --type=LoadBalancer # Exposes the Service externally.
$ kubectl expose deployment $NAME --type=NodePort --port=80 # Exposes the Service on each Node.
$ kubectl edit ingress $CFG_INGRESS
Taint
$ kubectl taint nodes --all node-role.kubernetes.io/master-
$ kubectl taint nodes --all node.kubernetes.io/not-ready
...
$ kubectl taint nodes node2 node2=DoNotSchedulePods:NoExecute
$ kubectl taint nodes node3 node3=DoNotSchedulePods:NoSchedule
$ kubectl taint nodes node2 node2:NoExecute-
$ kubectl taint nodes node3 node3:NoSchedule-
...
$ kubectl taint nodes $NODE_ID bubba=value:NoExecute
$ kubectl taint nodes $NODE_ID bubba=value:NoSchedule
$ kubectl taint nodes $NODE_ID bubba=value:PreferNoSchedule
$ kubectl taint nodes $NODE_ID bubba-
$ kubectl drain $NODE_ID
$ kubectl uncordon $NODE_ID
...
$ kubectl describe node | grep Taint
$ kubectl describe nodes $NODE_ID | grep -i taint
$ kubectl describe nodes $NODE_ID | grep Taint
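A taint only repels pods that lack a matching toleration. A minimal sketch of a pod tolerating the bubba=value:NoSchedule taint above (the pod name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo     # illustrative name
spec:
  containers:
  - name: main
    image: nginx
  tolerations:              # allows scheduling onto nodes tainted bubba=value:NoSchedule
  - key: "bubba"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
```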
Volumes
- Type: node-local such as emptyDir or hostPath
- Type: file-sharing such as nfs
- Type: cloud-provider such as awsElasticBlockStore, azureDisk, or gcePersistentDisk
- Type: distributed-file such as glusterfs or cephfs
- Type: special-purpose such as secret, gitRepo, PersistentVolume
In Pod
kind: Pod
...
    volumeMounts:
    - name: xchange
      mountPath: "/tmp/xchange"
    - name: mypd
      mountPath: "/tmp/persistent"
...
  volumes:
  - name: xchange
    emptyDir: {}
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce  # or ReadOnlyMany / ReadWriteMany
  resources:
    requests:
      storage: 1Gi
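The claim above only binds if a matching PersistentVolume exists. A minimal hostPath PV sketch that would satisfy it (the PV name and path are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo              # illustrative name
spec:
  capacity:
    storage: 1Gi             # must cover the claim's 1Gi request
  accessModes:
  - ReadWriteOnce
  hostPath:                  # node-local storage; suitable for single-node tests only
    path: /data/pv-demo      # illustrative path on the node
```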
Secret
$ kubectl create secret generic $NAME --from-file=./file.txt
$ kubectl get secrets
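Note that Secret values are only base64-encoded, not encrypted. A quick local sketch of the encoding as it appears in a Secret's data field (the value is a made-up example):

```shell
# Encode a value the way it is stored in a Secret's data: field...
encoded=$(printf 'S3cret!' | base64)
echo "$encoded"
# ...and decode it back, as reading 'kubectl get secret -o yaml' output requires:
printf '%s' "$encoded" | base64 -d
echo
```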
Logging
$ kubectl logs --tail=5 $POD -c $CONTAINER
$ kubectl logs --since=10s $POD -c $CONTAINER
$ kubectl logs -p $POD -c $CONTAINER
YAML
Yaml-Config
kind: Config
preferences: {}
clusters (cluster, name)
users (name, user)
contexts (cluster, namespace, user)
current-context
Yaml-ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.15.1
controlPlaneEndpoint: "k8smaster:6443"
networking:
  podSubnet: 192.168.0.0/16
Yaml-Deployment
kind: Deployment
metadata (name, labels, namespace)
spec (replicas, template)
- template (metadata, spec)
-- spec (containers, volumes, nodeSelector)
--- containers (name, image, imagePullPolicy, ports, env, securityContext, volumeMounts)
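Filled in as a concrete manifest, the outline above corresponds to something like the following sketch (name, label, replica count, and image tag are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx              # illustrative name
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx           # must match the pod template labels
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15  # illustrative tag
        ports:
        - containerPort: 80
```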
Yaml-Pod
kind: Pod
metadata:
  name: podName
  labels:
    env: development
    app: anything
spec:
  containers:
  - name: conName
    image: nginx
    ports:
    - containerPort: 9876
    command:
    - "/bin/bash"
    - "-c"
    - "sleep 10000"
    resources:
      limits:
        memory: "64Mi"
        cpu: "500m"
Yaml-Service
kind: Service
metadata (name, namespace, labels, selfLink)
spec (clusterIP, ports, selector, type)
...
kind: Service
metadata:
  name: nameService
spec:
  ports:
  - port: 80
    targetPort: 9876
  selector:
    app: sise
Yaml-Route
kind: Route
metadata (name, namespace, labels)
spec (host, to, port, tls)
Training
Introduction
Basics of Kubernetes
Installation and Configuration
Installing Master
[user@master:~$] sudo -i
[root@master:~$] apt-get update && apt-get upgrade -y
[root@master:~$] apt-get install -y docker.io
...
[root@master:~$] vim /etc/apt/sources.list.d/kubernetes.list
[root@master:~$] echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
[root@master:~$] curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
[root@master:~$] apt-get update
...
[root@master:~$] apt-get install -y kubeadm=1.15.1-00 kubelet=1.15.1-00 kubectl=1.15.1-00
...
[root@master:~$] vim /etc/hosts # Add a local DNS alias for the master server
[root@master:~$] vim kubeadm-config.yaml # Add Kubernetes version, node alias, IP range
-----------------------------------------------------------------------------------------------
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.15.1 #<-- Use the word stable for the newest version
controlPlaneEndpoint: "k8smaster:6443" #<-- Use the node alias, not the IP
networking:
  podSubnet: 192.168.0.0/16 #<-- Match the IP range with the Calico config file
-----------------------------------------------------------------------------------------------
[root@master:~$] kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
[root@master:~$] exit
...
[user@master:~$] mkdir -p $HOME/.kube
[user@master:~$] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[user@master:~$] sudo chown $(id -u):$(id -g) $HOME/.kube/config
[user@master:~$] less .kube/config
...
[user@master:~$] wget https://tinyurl.com/yb4xturm -O rbac-kdd.yaml
[user@master:~$] wget https://tinyurl.com/y8lvqc9g -O calico.yaml
[user@master:~$] sudo cp /root/rbac-kdd.yaml .
[user@master:~$] sudo cp /root/calico.yaml .
[user@master:~$] kubectl apply -f rbac-kdd.yaml
[user@master:~$] kubectl apply -f calico.yaml
...
[user@master:~$] source <(kubectl completion bash)
[user@master:~$] echo "source <(kubectl completion bash)" >> ~/.bashrc
...
[user@master:~$] sudo kubeadm config print init-defaults
Installing Worker
[user@node01:~$] sudo -i
[root@node01:~$] apt-get update && apt-get upgrade -y
[root@node01:~$] apt-get install -y docker.io
[root@node01:~$] vim /etc/apt/sources.list.d/kubernetes.list # add: deb http://apt.kubernetes.io/ kubernetes-xenial main
[root@node01:~$] curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
[root@node01:~$] apt-get update
[root@node01:~$] apt-get install -y kubeadm=1.15.1-00 kubelet=1.15.1-00 kubectl=1.15.1-00
[root@node01:~$] exit
...
[user@master:~$] ip addr show ens4 | grep inet
[user@master:~$] sudo kubeadm token list
[user@master:~$] sudo kubeadm token create
[user@master:~$] openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
...
[root@node01:~$] vim /etc/hosts
[root@node01:~$] kubeadm join --token 27eee4.6e66ff60318da929 k8smaster:6443 --discovery-token-ca-cert-hash sha256:6d541678b05652e1fa5d43908e75e67376e994c3483d6683f2a18673e5d2a1b0
[root@node01:~$] exit
[user@node01:~$] kubectl get nodes
[user@node01:~$] ls -l .kube
Setting Taint
App life cycle 1
- core: deployment >> pod >> service
App life cycle 2
Kubernetes Architecture
APIs and Access
$ kubectl proxy --port=8080
$ curl http://localhost:8080/api/v1
$ curl http://127.0.0.1:8001/api/v1/namespaces
$ kubectl get --raw=/api/v1
$ kubectl api-versions
...
$ export client=$(grep client-cert ~/.kube/config | cut -d" " -f 6)
$ export key=$(grep client-key-data ~/.kube/config | cut -d" " -f 6)
$ export auth=$(grep certificate-authority-data ~/.kube/config | cut -d" " -f 6)
$ echo $client, $key, $auth
...
$ echo $client | base64 -d - > ./client.pem
$ echo $key | base64 -d - > ./client-key.pem
$ echo $auth | base64 -d - > ./ca.pem
...
$ curl --cert ./client.pem --key ./client-key.pem --cacert ./ca.pem https://k8smaster:6443/api/v1/pods
API Objects
Jobs & Cronjobs
Jobs
kind: Job
metadata (name)
spec (completions, parallelism, activeDeadlineSeconds)
--- containers (name, image, command, args)
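Expanded into a concrete manifest, the outline above might look like the following sketch (name, image, and the field values are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: sleepy                 # illustrative name
spec:
  completions: 3               # run the pod to successful completion 3 times
  parallelism: 1               # one pod at a time
  activeDeadlineSeconds: 60    # kill the job if it runs longer than this
  template:
    spec:
      containers:
      - name: resting
        image: busybox
        command: ["/bin/sleep"]
        args: ["5"]
      restartPolicy: Never     # required: a Job pod must not restart in place
```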
Cronjobs
* * * * * command to execute
# minute (0 - 59)
# hour (0 - 23)
# day of the month (1 - 31)
# month (1 - 12)
# day of the week (0 - 6)
...
kind: CronJob
metadata (name)
spec (schedule, jobTemplate)
--- containers (name, image, args)
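As a concrete sketch of the outline above (name and image are illustrative; the API group matches the 1.15-era cluster used on this page):

```yaml
apiVersion: batch/v1beta1     # CronJob graduated to batch/v1 only in later releases
kind: CronJob
metadata:
  name: date-echo             # illustrative name
spec:
  schedule: "*/5 * * * *"     # standard cron syntax: every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: main
            image: busybox
            args: ["/bin/sh", "-c", "date"]
          restartPolicy: OnFailure
```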
Managing State With Deployments
ReplicaSet
A ReplicaSet is defined with fields including a selector that specifies how to identify the Pods it can acquire, a replica count indicating how many Pods it should maintain, and a pod template specifying the data of the new Pods it creates to meet that count. When the ReplicaSet is deleted, only Pods that still carry the matching system label are deleted; a relabeled Pod survives.
kubectl get rs
kubectl create -f rs.yaml
kubectl delete rs rs-one --cascade=false
kubectl edit po $POD_ID # change system: ReplicaOne >>to>> system: IsolatedPod
kubectl get po -L system
kubectl delete rs rs-one
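A hedged sketch of what the rs.yaml referenced above might contain (the image tag is illustrative; the system: ReplicaOne label matches the edit step in the commands):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-one
spec:
  replicas: 2
  selector:
    matchLabels:
      system: ReplicaOne   # the label the edit step later changes to IsolatedPod
  template:
    metadata:
      labels:
        system: ReplicaOne
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9 # illustrative tag
```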
DaemonSet
A DaemonSet ensures that when a node is added to the cluster, a Pod is created on that node.
kubectl create -f ds.yaml
kubectl get ds
kubectl set image ds ds-one nginx=nginx:1.12.1
kubectl rollout history ds ds-one
kubectl rollout history ds ds-one --revision=1
kubectl rollout undo ds ds-one --to-revision=1
...
kubectl create -f ds.yaml
////////////
name: ds-two
updateStrategy:
  type: OnDelete   # alternative: RollingUpdate (the default)
////////////
kubectl rollout status ds ds-two
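A sketch of what the ds.yaml referenced above might contain (label and image are illustrative; the older tag leaves room for the set image and rollout steps to upgrade):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-one
spec:
  selector:
    matchLabels:
      system: DaemonSetOne
  template:
    metadata:
      labels:
        system: DaemonSetOne
    spec:
      containers:
      - name: nginx
        image: nginx:1.11   # illustrative; older than the 1.12.1 used in the rollout step
```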
Services
kubectl get nodes --show-labels
kubectl label node $NODE_ID system=secondOne
kubectl get pods -l app=nginx --all-namespaces
kubectl expose deployment nginx-one
...
kubectl expose deployment nginx-one --type=NodePort --name=service-lab
kubectl describe services
...
kubectl get deploy --show-labels
kubectl delete deploy -l system=secondary
kubectl label node $NODE_ID system-
Volumes and Data
- Ceph is another popular solution for dynamic, persistent volumes.
- spec.volumes
- spec.containers.volumeMounts
kubectl create configmap colors --from-literal=text=black --from-file=./favorite --from-file=./primary/
kubectl get configmap colors -o yaml
kubectl exec -it shell-demo -- /bin/bash -c 'echo $ilike'
kubectl exec -it shell-demo -- /bin/bash -c 'env'
kubectl exec -it shell-demo -- /bin/bash -c 'df -ha |grep car'
kubectl exec -it shell-demo -- /bin/bash -c 'cat /etc/cars/car.trim'
...
kubectl delete pods shell-demo
kubectl delete configmap fast-car colors
...
kubectl create -f pv.yaml # PersistentVolume
kubectl get pv
...
kubectl create -f pvc.yaml # PersistentVolumeClaim
kubectl get pvc
Ingress
kubectl create deployment secondapp --image=nginx
kubectl get deployments secondapp -o yaml | grep label -A2
kubectl expose deployment secondapp --type=NodePort --port=80
kubectl create -f ingress.rbac.yaml
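An Ingress resource routing traffic to the secondapp service exposed above might look like the following sketch (the Ingress name and host are illustrative; the API group matches the 1.15-era cluster used on this page):

```yaml
apiVersion: extensions/v1beta1   # networking.k8s.io/v1 in newer clusters
kind: Ingress
metadata:
  name: ingress-test             # illustrative name
spec:
  rules:
  - host: www.example.com        # illustrative host
    http:
      paths:
      - backend:
          serviceName: secondapp # the service exposed above
          servicePort: 80
```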
Scheduling
kubectl describe nodes |grep -i label
kubectl describe nodes |grep -i taint
kubectl get deployments --all-namespaces
sudo docker ps |wc -l
kubectl label nodes $NODE_ID status=vip
kubectl get nodes --show-labels
////
nodeSelector:
  status: vip
////
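The nodeSelector fragment above belongs under a pod spec. A complete sketch (the pod name is illustrative; status=vip matches the node label applied above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vip-pod          # illustrative name
spec:
  containers:
  - name: main
    image: nginx
  nodeSelector:
    status: vip          # schedules only onto nodes labeled status=vip
```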