IT-SDK-Kubernetes-YAML
== Introduction ==
===Source-Top===
* https://kubernetes.io/docs/reference/kubectl/cheatsheet/
* http://kubernetesbyexample.com/
===Source-Mix===
* https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/
* https://kubernetes.io/blog/2020/06/working-with-terraform-and-kubernetes/
* https://opensource.com/article/20/5/kubectl-cheat-sheet
* https://developers.redhat.com/blog/2020/05/11/top-10-must-know-kubernetes-design-patterns/?utm_medium=Email&utm_campaign=weekly&sc_cid=7013a000002DolXAAS
* https://www.redhat.com/en/events/webinar/kubernetes-101—-introduction-containers-kubernetes-and-openshift-red-hat-training
* Lens | The Kubernetes IDE: https://k8slens.dev/
* https://ubuntu.com/tutorials/how-to-kubernetes-cluster-on-raspberry-pi#1-overview
* Web-Source: https://medium.com/google-cloud/kubernetes-101-pods-nodes-containers-and-clusters-c1509e409e16
* Web-Source: https://kubernetes.io/docs/reference/kubectl/cheatsheet/
* https://linuxacademy.com/site-content/uploads/2019/04/Kubernetes-Cheat-Sheet_07182019.pdf
* https://linuxacademy.com/blog/containers/kubernetes-cheat-sheet/?utm_source=linkedin&utm_medium=social&utm_campaign=2020_kubernetesblogs
* ...
* https://bitnami.com/stack/redmine/helm
* https://docs.bitnami.com/kubernetes/get-started-kubernetes/
* https://docs.bitnami.com/kubernetes/how-to/deploy-application-kubernetes-helm/
* https://docs.bitnami.com/azure/get-started-aks/
* https://docs.bitnami.com/aws/get-started-charts-eks-marketplace/
* https://docs.helm.sh/using_helm/#install-kubernetes-or-have-access-to-a-cluster
* https://docs.helm.sh/using_helm/#install-helm

=== Structure ===
* Cluster >>> Nodes >>> Pods (Endpoint) >>> Containers (App) #### Deployments >>> Service (selector:app=A)
* '''Node''': Has a Node-IP.
* '''Pod''': Has an Endpoint-IP.
* '''Service''': Has a Cluster-IP.
* Pod has: (label, nodeSelector); containerPort; (podIP, hostIP).
* Service has: selector, port, targetPort, nodePort.
* Node-Components: kubelet, kube-proxy
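The relationships above (label/selector, containerPort/targetPort/nodePort) can be sketched as a minimal Pod/Service pair; the names, the app=A label, the nodeSelector label, and the port numbers are placeholder assumptions:
<pre class="code">
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
  labels:
    app: A                  # matched by the Service selector below
spec:
  nodeSelector:
    disktype: ssd           # schedule only onto nodes carrying this label
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  selector:
    app: A                  # selects Pods with this label
  ports:
  - port: 8080              # port on the Cluster-IP
    targetPort: 80          # containerPort of the selected Pods
    nodePort: 30080         # only valid with type: NodePort
  type: NodePort
</pre>
Traffic arriving on nodePort 30080 of any node is forwarded to port 8080 on the Cluster-IP and from there to containerPort 80 of a matching Pod.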
| + | |||
| + | ===Deployment=== | ||
| + | * Deployment: primary purpose is to declare how many '''replicas''' of a pod should be running at a time. | ||
* Deleting a deployment does not delete the endpoints (Pod) or services. | * Deleting a deployment does not delete the endpoints (Pod) or services. | ||
| − | |||
| − | |||
* Persistent Volumes: To store data permanently | * Persistent Volumes: To store data permanently | ||
| − | * Isolation between pods | + | * Isolation between pods. |
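As a sketch, a minimal Deployment declaring the replica count; the name, label, and image version are assumptions:
<pre class="code">
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                # how many Pod replicas should run at a time
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx           # must match spec.selector.matchLabels
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
</pre>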
===Services===
* A Service in Kubernetes defines a logical set of Pods and a policy by which to access them.
* Ingress: communicate with a Service running in a Pod >> Ingress-Controller / LoadBalancer
* The set of Pods targeted by a Service is usually determined by a Label ('''Selector''').
* Services can be exposed in different ways by specifying a type in the ServiceSpec.
* Type: '''ClusterIP''', '''NodePort''', '''LoadBalancer''', '''ExternalName'''
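As a hedged sketch of the Ingress idea above, a minimal Ingress that routes a host to a Service; the host, service name, and port are assumptions (an Ingress controller must already be running in the cluster):
<pre class="code">
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.local           # external hostname to match
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service      # Service receiving the traffic
            port:
              number: 80
</pre>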
== Linux-Admin ==
<pre class="code">
$ vi /etc/sudoers.d   # Add: student ALL=(ALL) ALL
$ PATH=$PATH:/usr/sbin:/sbin
$ export PATH="/home/sh/.minishift/cache/oc/v3.11.0/linux:$PATH"
$ tar -xvf filename
$ ip addr show
$ vim /etc/hosts
$ less filename.txt   # Display the contents of a file
$ cat filename.txt    # Display the contents of a file
$ tee filename.txt    # Redirect output to multiple files
</pre>
<pre class="code">
alias k="kubectl"
alias kgp="kubectl get pods -o wide"
alias kgd="kubectl get deployment -o wide"
alias kgs="kubectl get svc -o wide"
alias kgn="kubectl get nodes -o wide"
# ...
alias kdp="kubectl describe pod"
alias kdd="kubectl describe deployment"
alias kds="kubectl describe service"
alias kdn="kubectl describe nodes"
</pre>
== Infrastructure ==
* Installation with Vagrant: https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/
* Master (CPU: 3, MEM: 4G, Storage: 5G)
* Worker (CPU: 1, MEM: 2G, Storage: 5G)
* Ubuntu 16.04 LTS: ubuntu/xenial64
* Ubuntu 18.04 LTS: ubuntu/bionic64
* Ubuntu 20.04 LTS: ubuntu/focal64
<pre class="code">
# -*- mode: ruby -*-
# vi: set ft=ruby :
IMAGE_NAME = "ubuntu/focal64"
Vagrant.configure("2") do |config|
    config.ssh.insert_key = false
    config.vm.provider "virtualbox" do |vb|
        vb.gui = false
        vb.cpus = 2
        vb.memory = 4096
    end
    config.vm.define "k8s-master" do |master|
        master.vm.box = IMAGE_NAME
        master.vm.hostname = "k8s-master"
        master.vm.network "public_network", bridge: "br0"
    end
    config.vm.define "k8s-node01" do |node|
        node.vm.box = IMAGE_NAME
        node.vm.hostname = "k8s-node01"
        node.vm.network "public_network", bridge: "br0"
    end
end
</pre>
== Installation ==
===Installing kubectl===
<pre class="code">
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
$ sudo install kubectl /sdk/bin
</pre>
===Installing minikube===
<pre class="code">
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube
$ sudo install minikube /sdk/bin
$ minikube start
</pre>
===Installing Master===
<pre class="code">
[user@master:~$] sudo -i
[root@master:~$] apt-get update && apt-get upgrade -y
...
[root@master:~$] echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
[root@master:~$] curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
[root@master:~$] apt-get update
...
[root@master:~$] apt-get install -y docker.io
[root@master:~$] apt-get install -y kubeadm=1.15.1-00 kubelet=1.15.1-00 kubectl=1.15.1-00
...
[root@master:~$] ip addr show
[root@master:~$] vim /etc/hosts             # Add a local DNS alias for the master server
[root@master:~$] vim kubeadm-config.yaml    # Add Kubernetes version, node alias, IP range
-----------------------------------------------------------------------------------------------
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.15.1                 #<-- Use the word "stable" for the newest version
controlPlaneEndpoint: "k8smaster:6443"    #<-- Use the node alias, not the IP
networking:
  podSubnet: 192.168.0.0/16               #<-- Match the IP range with the Calico config file
-----------------------------------------------------------------------------------------------
[root@master:~$] kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
[root@master:~$] exit
</pre>
<pre class="code">
[user@master:~$] mkdir -p $HOME/.kube
[user@master:~$] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[user@master:~$] sudo chown $(id -u):$(id -g) $HOME/.kube/config
[user@master:~$] less .kube/config
...
[user@master:~$] kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
[user@master:~$] kubectl taint nodes --all node-role.kubernetes.io/master-    # Remove the taint on the master so Pods can be scheduled on it
...
[user@master:~$] wget https://tinyurl.com/yb4xturm -O rbac-kdd.yaml
[user@master:~$] wget https://tinyurl.com/y8lvqc9g -O calico.yaml
[user@master:~$] kubectl apply -f rbac-kdd.yaml
[user@master:~$] kubectl apply -f calico.yaml
...
[user@master:~$] source <(kubectl completion bash)
[user@master:~$] echo "source <(kubectl completion bash)" >> ~/.bashrc
...
[user@master:~$] sudo kubeadm config print init-defaults
</pre>
===Installing Worker===
<pre class="code">
[user@node01:~$] sudo -i
[root@node01:~$] apt-get update && apt-get upgrade -y
...
[root@node01:~$] echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
[root@node01:~$] curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
[root@node01:~$] apt-get update
...
[root@node01:~$] apt-get install -y docker.io
[root@node01:~$] apt-get install -y kubeadm kubelet kubectl
...
[root@node01:~$] ip addr show
[root@node01:~$] echo "192.168.50.10 k8smaster" >> /etc/hosts    # Add a local DNS alias for the master
[root@node01:~$] echo "192.168.50.11 k8snode01" >> /etc/hosts    # Add a local DNS alias for the worker
...
[root@node01:~$] kubeadm join --token 27eee4.6e66ff60318da929 k8smaster:6443 --discovery-token-ca-cert-hash sha256:6d541678b05652e1fa5d43908e75e67376e994c3483d6683f2a18673e5d2a1b0
[root@node01:~$] exit
</pre>
== Life Cycle: kubeadm ==
<pre class="code">
$ kubeadm init
$ kubeadm join
$ kubeadm config
$ kubeadm token
</pre>
== Life Cycle: kubectl ==
===Topics===
* Cluster
* Nodes
* Pods
* InitContainers: https://docs.openshift.com/container-platform/4.1/nodes/containers/nodes-containers-init.html
* Deployments, StatefulSet
* ServiceDiscovery, PortForward
* Taints
* Rollout
* Secrets
* HealthCheck
* Jobs & CronJobs
* Tool-Monitoring
* Tool-Helm
===Config===
<pre class="code">
$ kubectl config --kubeconfig=$CFG_FILE use-context $CONTEXT_NAME
$ kubectl config set-context $NAME --namespace=$NAME
</pre>
===Basics-Main===
<pre class="code">
$ kubectl run $NAME --image=nginx
$ kubectl create deployment $NAME --image=nginx
...
$ kubectl run $NAME --image=nginx:1.23 --replicas=2 --port=9876    # Create and run a particular image.
$ kubectl create -f file.yaml     # Create a resource from a file.
$ kubectl apply -f file.yaml      # Apply a configuration to a resource by filename. Create the resource initially with either 'apply' or 'create --save-config'.
$ kubectl replace -f file.yaml    # Terminate and replace a resource by filename.
...
$ kubectl get all
$ kubectl get all --all-namespaces
$ kubectl get all -o wide
$ kubectl get namespaces
$ kubectl get nodes
$ kubectl get deployments
$ kubectl get pods
$ kubectl get services
$ kubectl get endpoints
$ kubectl get jobs
...
$ kubectl describe $RESOURCE $NAME
$ kubectl describe deployment nginx
...
$ kubectl delete $TYP --all -n $NAME
$ kubectl delete $TYP $NAME
$ kubectl delete deployments $NAME
$ kubectl delete pod $NAME
$ kubectl delete service $NAME
$ kubectl delete endpoint $NAME
$ kubectl delete job $NAME
</pre>
===Basics-Mix===
<pre class="code">
$ kubectl get deployment nginx -o yaml > file.yaml
$ kubectl scale deployment nginx --replicas=3
$ kubectl apply -f project/k8s/development --recursive
$ kubectl get pods -Lapp -Ltier -Lrole
$ kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'
$ kubectl autoscale deployment/my-nginx --min=1 --max=3
</pre>
===Services===
<pre class="code">
$ minikube service hello-minikube --url
$ kubectl get nodes --show-labels
$ kubectl get pods -l app=nginx --all-namespaces
$ kubectl get deploy --show-labels
...
$ kubectl label node $NODE_ID system=secondOne
$ kubectl label node $NODE_ID system-
...
$ kubectl expose deployment nginx-one --type=NodePort --name=service-lab
$ kubectl expose deployment nginx-one
$ kubectl describe services
...
$ kubectl delete deploy -l system=secondary
</pre>
| + | |||
===Labels=== | ===Labels=== | ||
<pre class="code"> | <pre class="code"> | ||
| − | $ kubectl label | + | $ kubectl label [node,pod,deployment,service] $NAME $LABEL_KEY=LABEL_VALUE |
| − | + | $ kubectl get [node,pod,deployment,service] --show-labels | |
| − | $ kubectl get | + | $ kubectl get [node,pod,deployment,service] -l $LABEL_KEY=LABEL_VALUE |
| + | ... | ||
| + | $ kubectl label pod nginx type=test | ||
$ kubectl get pods --selector owner=michael | $ kubectl get pods --selector owner=michael | ||
$ kubectl get pods -l env=development | $ kubectl get pods -l env=development | ||
$ kubectl get pods -l 'env in (production, development)'
$ kubectl delete pods -l 'env in (production, development)'
</pre>
===ReplicaSet===
A ReplicaSet is defined with fields including a selector that specifies how to identify the Pods it can acquire, a replica count indicating how many Pods it should maintain, and a pod template specifying the data of the new Pods it creates to meet the replica count. When the ReplicaSet is deleted, only Pods that still carry the matching system label are deleted with it.
<pre class="code">
kubectl get rs
kubectl create -f rs.yaml
kubectl delete rs rs-one --cascade=false
kubectl edit po $POD_ID       # change system: ReplicaOne >>to>> system: IsolatedPod
kubectl get po -L system
kubectl delete rs rs-one
</pre>
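The rs.yaml referenced in the commands above might look like the following sketch; the replica count and image are assumptions, and the system: ReplicaOne label matches the edit exercise:
<pre class="code">
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-one
spec:
  replicas: 2
  selector:
    matchLabels:
      system: ReplicaOne     # Pods carrying this label are acquired
  template:
    metadata:
      labels:
        system: ReplicaOne   # must match spec.selector.matchLabels
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.1
</pre>
Relabeling a Pod (system: IsolatedPod) removes it from the selector's scope, so the ReplicaSet spawns a replacement and the relabeled Pod survives the ReplicaSet's deletion.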
===DaemonSet===
A DaemonSet ensures that when a node is added to the cluster, a Pod is created on that node.
<pre class="code">
kubectl get ds
kubectl set image [pod,deploy,rc,rs,ds] $NAME $CONTAINER_ID=nginx:1.16
kubectl rollout status $TYPE $NAME
kubectl rollout history $TYPE $NAME
kubectl rollout history $TYPE $NAME --revision=1
kubectl rollout undo $TYPE $NAME --to-revision=1
</pre>
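A minimal DaemonSet manifest, as a sketch (name, label, and image are assumptions); note there is no replica field, since exactly one Pod runs per eligible node:
<pre class="code">
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-one
spec:
  selector:
    matchLabels:
      system: DaemonSetOne
  template:
    metadata:
      labels:
        system: DaemonSetOne   # must match spec.selector.matchLabels
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.1
</pre>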
| + | |||
===exec=== | ===exec=== | ||
<pre class="code"> | <pre class="code"> | ||
...
</pre>
===Expose===
* Expose types: ClusterIP, NodePort, LoadBalancer, ExternalName.
<pre class="code">
$ kubectl expose [pod,svc,deploy,rc,rs] $NAME --port=1234 --type=[NodePort,LoadBalancer]
...
$ kubectl run $NAME --image=nginx:1.12 --port=9876
$ kubectl create deployment $NAME --image=nginx
...
$ kubectl expose deployment $NAME                       # Exposes the Service on a ClusterIP.
$ kubectl expose deployment $NAME --type=LoadBalancer   # Exposes the Service externally.
...
</pre>
===Taints===
<pre class="code">
$ kubectl taint nodes --all node.kubernetes.io/not-ready
...
$ kubectl taint nodes $NODE_ID key=value:NoExecute
$ kubectl taint nodes $NODE_ID key=value:NoSchedule
$ kubectl taint nodes $NODE_ID key=value:PreferNoSchedule
...
$ kubectl taint nodes $NODE_ID key-
$ kubectl taint nodes $NODE_ID key:NoExecute-
$ kubectl taint nodes $NODE_ID key:NoSchedule-
$ kubectl taint nodes $NODE_ID key:PreferNoSchedule-
...
$ kubectl drain $NODE_ID
$ kubectl uncordon $NODE_ID
...
$ kubectl describe nodes $NODE_ID | grep -i taint
$ kubectl describe nodes $NODE_ID | grep Taint
</pre>
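A Pod can still be scheduled onto a tainted node if it tolerates the taint. A minimal sketch matching a key=value:NoSchedule taint like the ones above (Pod name and image are assumptions):
<pre class="code">
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  tolerations:
  - key: "key"               # taint key to tolerate
    operator: "Equal"
    value: "value"           # taint value to tolerate
    effect: "NoSchedule"     # must match the taint's effect
  containers:
  - name: web
    image: nginx
</pre>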
===Volumes===
* Ceph is another popular solution for dynamic, persistent volumes.
* Type: node-local, such as '''emptyDir''' or '''hostPath'''
* Type: file-sharing, such as '''nfs'''
* Type: distributed-file, such as '''glusterfs''' or '''cephfs'''
* Type: special-purpose, such as '''secret''', '''gitRepo''', '''PersistentVolume'''
<pre class="code">
kubectl create -f pv.yaml     # PersistentVolume
kubectl create -f pvc.yaml    # PersistentVolumeClaim
...
kubectl create configmap colors --from-literal=text=black --from-file=./favorite --from-file=./primary/
kubectl get configmap colors -o yaml
...
kubectl exec -it shell-demo -- /bin/bash -c 'echo $ilike'
kubectl exec -it shell-demo -- /bin/bash -c 'env'
kubectl exec -it shell-demo -- /bin/bash -c 'df -ha | grep car'
kubectl exec -it shell-demo -- /bin/bash -c 'cat /etc/cars/car.trim'
</pre>
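The pv.yaml referenced above might look like the following hostPath-backed sketch; the name, size, access mode, and path are assumptions:
<pre class="code">
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-one
spec:
  capacity:
    storage: 1Gi             # size offered to claims
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /opt/data          # node-local backing directory
</pre>
A PersistentVolumeClaim (pvc.yaml) with a compatible access mode and a request no larger than 1Gi can then bind to this volume.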
'''In Pod'''
<pre class="code">
...
      claimName: myclaim
</pre>
| + | |||
| + | '''PVC''' | ||
<pre class="code"> | <pre class="code"> | ||
kind: PersistentVolumeClaim | kind: PersistentVolumeClaim | ||
...
</pre>
===Logs===
<pre class="code">
$ kubectl logs --since=10s $POD -c $CONTAINER
$ kubectl logs -p $POD -c $CONTAINER
</pre>
===API===
<pre class="code">
$ kubectl proxy --port=8080
$ curl http://localhost:8080/api/v1
$ curl http://127.0.0.1:8001/api/v1/namespaces
$ kubectl get --raw=/api/v1
$ kubectl api-versions
...
$ export client=$(grep client-cert ~/.kube/config | cut -d" " -f 6)
$ export key=$(grep client-key-data ~/.kube/config | cut -d" " -f 6)
$ export auth=$(grep certificate-authority-data ~/.kube/config | cut -d" " -f 6)
$ echo $client, $key, $auth
...
$ echo $client | base64 -d - > ./client.pem
$ echo $key | base64 -d - > ./client-key.pem
$ echo $auth | base64 -d - > ./ca.pem
...
$ curl --cert ./client.pem --key ./client-key.pem --cacert ./ca.pem https://k8smaster:6443/api/v1/pods
</pre>
===Namespace===
<pre class="code">
$ kubectl config set-context --current --namespace=$NAME
</pre>
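Besides switching the context as above, a namespace itself can be created declaratively; the name is a placeholder:
<pre class="code">
apiVersion: v1
kind: Namespace
metadata:
  name: development
</pre>
Applying this with kubectl apply -f and then running the set-context command above makes all subsequent kubectl commands operate in that namespace.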
==YAML==
===Yaml-Config===
<pre class="code">
...
spec (host, to, port, tls)
</pre>
Latest revision as of 18:42, 14 September 2021
Contents
Introduction
Source-Top
Source-Mix
- https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/
- https://kubernetes.io/blog/2020/06/working-with-terraform-and-kubernetes/
- https://opensource.com/article/20/5/kubectl-cheat-sheet
- https://developers.redhat.com/blog/2020/05/11/top-10-must-know-kubernetes-design-patterns/?utm_medium=Email&utm_campaign=weekly&sc_cid=7013a000002DolXAAS
- https://www.redhat.com/en/events/webinar/kubernetes-101—-introduction-containers-kubernetes-and-openshift-red-hat-training
- Lens | The Kubernetes IDE: https://k8slens.dev/
- https://ubuntu.com/tutorials/how-to-kubernetes-cluster-on-raspberry-pi#1-overview
- Web-Source: https://medium.com/google-cloud/kubernetes-101-pods-nodes-containers-and-clusters-c1509e409e16
- Web-Source: https://kubernetes.io/docs/reference/kubectl/cheatsheet/
- https://linuxacademy.com/site-content/uploads/2019/04/Kubernetes-Cheat-Sheet_07182019.pdf
- https://linuxacademy.com/blog/containers/kubernetes-cheat-sheet/?utm_source=linkedin&utm_medium=social&utm_campaign=2020_kubernetesblogs
- ...
- https://bitnami.com/stack/redmine/helm
- https://docs.bitnami.com/kubernetes/get-started-kubernetes/
- https://docs.bitnami.com/kubernetes/how-to/deploy-application-kubernetes-helm/
- https://docs.bitnami.com/azure/get-started-aks/
- https://docs.bitnami.com/aws/get-started-charts-eks-marketplace/
- https://docs.helm.sh/using_helm/#install-kubernetes-or-have-access-to-a-cluster
- https://docs.helm.sh/using_helm/#install-helm
Structure
- Cluster >>> Nodes >>> Pods (Endpoint) >>> Containers (App) #### Deployments >>> Service (selector:app=A)
- Node: Has a Node-IP.
- Pod: Has an Endpoint-IP.
- Service: Has a Cluster-IP.
- Pod has: (label, nodeSelector); containerPort; (podIP, hostIP).
- Service has: selector, port, targetPort, nodePort
- Node-Components: kubelet, kube-proxy
Deployment
- Deployment: primary purpose is to declare how many replicas of a pod should be running at a time.
- Deleting a deployment does not delete the endpoints (Pod) or services.
- Persistent Volumes: To store data permanently
- Isolation between pods.
Services
- Service in Kubernetes defines a logical set of Pods and a policy by which to access them.
- Ingress: communicate with a service running in a pod >> Ingress-Controller / LoadBalancer
- The set of Pods targeted by a Service is usually determined by a Label (Selector).
- Services can be exposed in different ways by specifying a type in the ServiceSpec.
- Typ: ClusterIP, NodePort, LoadBalancer, ExternalName
Linux-Admin
$ vi /etc/sudoers.d #Add: student ALL=(ALL) ALL $ PATH=$PATH:/usr/sbin:/sbin $ export PATH="/home/sh/.minishift/cache/oc/v3.11.0/linux:$PATH" $ tar -xvf filename $ ip addr show $ vim /etc/hosts $ less filaname.txt # Dispaly the contents of a file $ cat filename.txt # Display the content of a file $ tee filename.txt # Redirect output to multiple files
alias k="kubectl" alias kgp="kubectl get pods -owide" alias kgd="kubectl get deployment -o wide" alias kgs="kubectl get svc -o wide" alias kgn="kubectl get nodes -owide" # ... alias kdp="kubectl describe pod" alias kdd="kubectl describe deployment" alias kds="kubectl describe service" alias kdn="kubectl describe nodes"
Infrastructure
- Installation with Vagrant: https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/
- Master (CPU: 3, MEM: 4G, Storage: 5G)
- Worker (CPU: 1, MEM: 2G, Storage: 5G)
- Ubuntu 16.04 LTS: ubuntu/xenial64
- Ubuntu 18.04 LTS: ubuntu/bionic64
- Ubuntu 20.04 LTS: ubuntu/focal64
# -*- mode: ruby -*-
# vi: set ft=ruby :
IMAGE_NAME = "ubuntu/focal64"
Vagrant.configure("2") do |config|
config.ssh.insert_key = false
config.vm.provider "virtualbox" do |vb|
vb.gui = false
vb.cpus = 2
vb.memory = 4096
end
config.vm.define "k8s-master" do |master|
master.vm.box = IMAGE_NAME
master.vm.hostname = "k8s-master"
master.vm.network "public_network", bridge: "br0"
end
config.vm.define "k8s-node01" do |node|
node.vm.box = IMAGE_NAME
node.vm.hostname = "k8s-node01"
node.vm.network "public_network", bridge: "br0"
end
end
Installation
Installing kubectl
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl $ sudo install kubectl /sdk/bin
Installing minikube
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube $ sudo install minikube /sdk/bin $ minikube start
Installing Master
[user@master:~$] sudo -i [root@master:~$] apt-get update && apt-get upgrade -y ... [root@master:~$] echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list [root@master:~$] curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - [root@master:~$] apt-get update ... [root@master:~$] apt-get install -y docker.io [root@master:~$] apt-get install -y kubeadm=1.15.1-00 kubelet=1.15.1-00 kubectl=1.15.1-00 ... [root@master:~$] ip addr show [root@master:~$] vim /etc/hosts # Add an local DNS alias for master server [root@master:~$] vim kubeadm-config.yaml # Add Kubernetes-Version, Node-Alais, IP-Range ----------------------------------------------------------------------------------------------- apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration kubernetesVersion: 1.15.1 #<-- Use the word stable for newest version controlPlaneEndpoint: "k8smaster:6443" #<-- Use the node alias not the IP networking: podSubnet: 192.168.0.0/16 #<-- Match the IP with Calico config file ----------------------------------------------------------------------------------------------- [root@master:~$] kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out [root@master:~$] exit
[user@master:~$] mkdir -p $HOME/.kube [user@master:~$] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config [user@master:~$] sudo chown $(id -u):$(id -g) $HOME/.kube/config [user@master:~$] less .kube/config ... [user@master:~$] kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml [user@master:~$] kubectl taint nodes --all node-role.kubernetes.io/master- # Remove the taints on the master to schedule pods on it ... [user@master:~$] wget https://tinyurl.com/yb4xturm -O rbac-kdd.yaml [user@master:~$] wget https://tinyurl.com/y8lvqc9g -O calico.yaml [user@master:~$] kubectl apply -f rbac-kdd.yaml [user@master:~$] kubectl apply -f calico.yaml ... [user@master:~$] source <(kubectl completion bash) [user@master:~$] echo "source <(kubectl completion bash)" >> ~/.bashrc ... [user@master:~$] sudo kubeadm config print init-defaults
Installing Worker
[user@node01:~$] sudo -i [root@node01:~$] apt-get update && apt-get upgrade -y ... [root@node01:~$] echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list [root@node01:~$] curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - [root@node01:~$] apt-get update ... [root@node01:~$] apt-get install -y docker.io [root@node01:~$] apt-get install -y kubeadm kubelet kubectl ... [root@node01:~$] ip addr show [root@node01:~$] echo "192.168.50.10 k8smaster" >> /etc/hosts # Add an local DNS alias for master server [root@node01:~$] echo "192.168.50.11 k8snode01" >> /etc/hosts # Add an local DNS alias for master server ... [root@node01:~$] kubeadm join --token 27eee4.6e66ff60318da929 k8smaster:6443 --discovery-token-ca-cert-hash sha256:6d541678b05652e1fa5d43908e75e67376e994c3483d6683f2a18673e5d2a1b0 [root@node01:~$] exit
Life Cycle: kubeadm
$ kubeadm init $ kubeadm join $ kubeadm config $ kubeadm token
Life Cycle: kubectl
Themen
- Cluster
- Nodes
- Pods
- InitContainers: https://docs.openshift.com/container-platform/4.1/nodes/containers/nodes-containers-init.html
- Deployments, StatefulSet
- ServiceDiscovery, PortForward
- Taints
- Rollout
- Secrets
- HealtCheck
- Jobs & CronJobs
- Tool-Monitoring
- Tool-Helm
Config
$ kubectl config --kubeconfig=$CFG_FILE use-context $CONTEXT_NAME $ kubectl config set-context $NAME --namespace=$NAME ===Basics-Main=== $ kubectl run $NAME --image=nginx $ kubectl create deployment $NAME --image=nginx ... $ kubectl run $NAME --image=nginx:1.23 --replicas=2 --port=9876 # Create and run a particular image. $ kubectl create -f file.yaml # Create a resource from a file. $ kubectl apply -f file.yaml # Apply a configuration to a resource by filename. Create the resource initially with either 'apply' or 'create --save-config'. $ kubectl replace -f file.yaml # Terminate and Replace a resource by filename. ... $ kubectl get all $ kubectl get all --all-namesapces $ kubectl get all -o wide $ kubectl get namespaces $ kubectl get nodes $ kubectl get depolyments $ kubectl get pods $ kubectl get services $ kubectl get endpoints $ kubectl get jobs ... $ kubectl describe $RESOURCE $NAME $ kubectl describe deployment nginx ... $ kubectl delete $TYP --all -n $NAME $ kubectl delete $TYP $NAME $ kubectl delete deployments $NAME $ kubectl delete pod $NAME $ kubectl delete service $NAME $ kubectl delete endpoint $NAME $ kubectl delete job $NAME
Basics-Mix
$ kubectl get deployment nginx -o yaml > file.yaml
$ kubectl scale deployment nginx --replicas=3
$ kubectl apply -f project/k8s/development --recursive
$ kubectl get pods -Lapp -Ltier -Lrole
$ kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'
$ kubectl autoscale deployment/my-nginx --min=1 --max=3
Services
$ minikube service hello-minikube --url
$ kubectl get nodes --show-labels
$ kubectl get pods -l app=nginx --all-namespaces
$ kubectl get deploy --show-labels
...
$ kubectl label node $NODE_ID system=secondOne
$ kubectl label node $NODE_ID system-
...
$ kubectl expose deployment nginx-one --type=NodePort --name=service-lab
$ kubectl expose deployment nginx-one
$ kubectl describe services
...
$ kubectl delete deploy -l system=secondary
Labels
$ kubectl label [node,pod,deployment,service] $NAME $LABEL_KEY=$LABEL_VALUE
$ kubectl get [node,pod,deployment,service] --show-labels
$ kubectl get [node,pod,deployment,service] -l $LABEL_KEY=$LABEL_VALUE
...
$ kubectl label pod nginx type=test
$ kubectl get pods --selector owner=michael
$ kubectl get pods -l env=development
$ kubectl get pods -l 'env in (production, development)'
$ kubectl delete pods -l 'env in (production, development)'
ReplicaSet
A ReplicaSet is defined with fields including a selector that specifies how to identify the Pods it can acquire, a replica count indicating how many Pods it should maintain, and a pod template describing the Pods it creates to meet that count. When the ReplicaSet is deleted, only the pods that still carry the matching system label are deleted with it; a pod whose label was changed (as in the edit step below) is orphaned and survives.
kubectl get rs
kubectl create -f rs.yaml
kubectl delete rs rs-one --cascade=false
kubectl edit po $POD_ID      # change system: ReplicaOne >>to>> system: IsolatedPod
kubectl get po -L system
kubectl delete rs rs-one
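The rs.yaml referenced above might look like this minimal sketch; the name rs-one and the system: ReplicaOne label are taken from the commands, while the container name and nginx image are assumptions:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-one
spec:
  replicas: 2                  # number of Pods the ReplicaSet maintains
  selector:
    matchLabels:
      system: ReplicaOne       # how the ReplicaSet identifies Pods it can acquire
  template:
    metadata:
      labels:
        system: ReplicaOne     # template label must match the selector
    spec:
      containers:
      - name: nginx            # container name and image are assumptions
        image: nginx
```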
DaemonSet
The DaemonSet ensures that when a node is added to the cluster, a copy of the pod is created on that node.
kubectl get ds
kubectl set image [pod,deploy,rc,rs,ds] $NAME $CONTAINER_ID=nginx:1.16
kubectl rollout status $TYPE $NAME
kubectl rollout history $TYPE $NAME
kubectl rollout history $TYPE $NAME --revision=1
kubectl rollout undo $TYPE $NAME --to-revision=1
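A minimal DaemonSet manifest might look like the sketch below; the name, label, and image tag are assumptions (an older tag is used so the `kubectl set image ... nginx:1.16` command above has something to roll forward from):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-one               # name is an assumption
spec:
  selector:
    matchLabels:
      system: DaemonSetOne
  template:
    metadata:
      labels:
        system: DaemonSetOne
    spec:
      containers:
      - name: nginx
        image: nginx:1.15    # rolled to nginx:1.16 via 'kubectl set image'
```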
exec
$ kubectl exec -it $POD -- printenv
$ kubectl exec -it $POD -- /bin/bash
$ kubectl exec -it $POD -- /bin/bash -c 'env'
$ kubectl exec -it $POD -- /bin/bash -c 'df -ha |grep car'
$ kubectl exec -it $POD -- /bin/bash -c 'echo $ilike'
$ kubectl exec -it $POD -- /bin/bash -c 'cat /etc/cars/car.trim'
$ kubectl exec -it $POD -c shell -- ping $SVC.$NAMESPACE.svc.cluster.local
$ kubectl exec -it $POD -c c1 -- bash
Expose
- Expose types: ClusterIP, NodePort, LoadBalancer, ExternalName.
$ kubectl expose [pod,svc,deploy,rc,rs] $NAME --port=1234 --type=[NodePort,LoadBalancer]
...
$ kubectl run $NAME --image=nginx:1.12 --port=9876
$ kubectl create deployment $NAME --image=nginx
...
$ kubectl expose deployment $NAME                            # Exposes the Service on an internal ClusterIP.
$ kubectl expose deployment $NAME --type=LoadBalancer        # Exposes the Service externally via a load balancer.
$ kubectl expose deployment $NAME --type=NodePort --port=80  # Exposes the Service on each node's IP at a static port.
$ kubectl edit ingress $CFG_INGRESS
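The Service that `kubectl expose deployment $NAME --type=NodePort --port=80` generates looks roughly like the sketch below; the name, selector label, and nodePort value are assumptions (the nodePort is normally auto-assigned from the default 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-one          # takes the name of the exposed deployment
spec:
  type: NodePort
  selector:
    app: nginx-one         # selector label is an assumption
  ports:
  - port: 80               # ClusterIP port
    targetPort: 80         # container port
    nodePort: 31000        # static port on every node; usually auto-assigned
```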
Taint
$ kubectl taint nodes --all node-role.kubernetes.io/master-
$ kubectl taint nodes --all node.kubernetes.io/not-ready
...
$ kubectl taint nodes $NODE_ID key=value:NoExecute
$ kubectl taint nodes $NODE_ID key=value:NoSchedule
$ kubectl taint nodes $NODE_ID key=value:PreferNoSchedule
...
$ kubectl taint nodes $NODE_ID key-
$ kubectl taint nodes $NODE_ID key:NoExecute-
$ kubectl taint nodes $NODE_ID key:NoSchedule-
$ kubectl taint nodes $NODE_ID key:PreferNoSchedule-
...
$ kubectl drain $NODE_ID
$ kubectl uncordon $NODE_ID
...
$ kubectl describe nodes $NODE_ID | grep -i taint
$ kubectl describe nodes $NODE_ID | grep Taint
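A pod is only scheduled onto a node tainted with `key=value:NoSchedule` if it carries a matching toleration; a minimal sketch (pod name and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod       # name is an assumption
spec:
  tolerations:
  - key: "key"             # matches the key=value:NoSchedule taint above
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
  containers:
  - name: nginx            # image is an assumption
    image: nginx
```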
Volumes
- Ceph is another popular solution for dynamic, persistent volumes.
- Type: node-local, such as emptyDir or hostPath
- Type: file-sharing, such as nfs
- Type: cloud-provider, such as awsElasticBlockStore, azureDisk, or gcePersistentDisk
- Type: distributed-file, such as glusterfs or cephfs
- Type: special-purpose, such as secret, gitRepo, PersistentVolume
kubectl create -f pv.yaml    # PersistentVolume
kubectl create -f pvc.yaml   # PersistentVolumeClaim
...
kubectl create configmap colors --from-literal=text=black --from-file=./favorite --from-file=./primary/
kubectl get configmap colors -o yaml
...
kubectl exec -it shell-demo -- /bin/bash -c 'echo $ilike'
kubectl exec -it shell-demo -- /bin/bash -c 'env'
kubectl exec -it shell-demo -- /bin/bash -c 'df -ha |grep car'
kubectl exec -it shell-demo -- /bin/bash -c 'cat /etc/cars/car.trim'
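The pv.yaml referenced above might look like this minimal hostPath sketch, sized to satisfy the 1Gi ReadWriteOnce PVC in this section; the volume name and path are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-one             # name is an assumption
spec:
  capacity:
    storage: 1Gi           # matches the 1Gi requested by the PVC
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/data        # hostPath is only suitable for single-node testing
```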
In Pod
kind: Pod
volumeMounts:
- name: xchange
mountPath: "/tmp/xchange"
- name: mypod
mountPath: "/tmp/persistent"
...
volumes:
- name: xchange
emptyDir: {}
- name: mypod
persistentVolumeClaim:
claimName: myclaim
PVC
kind: PersistentVolumeClaim
metadata:
name: myclaim
spec:
accessModes:
- ReadWriteOnce | ReadOnlyMany | ReadWriteMany
resources:
requests:
storage: 1Gi
Secret
$ kubectl create secret generic $NAME --from-file=./file.txt
$ kubectl get secrets
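The secret created above can be consumed in a pod, e.g. mounted as a volume; a sketch where the pod name, mount path, and secret name `mysecret` (the $NAME used with `kubectl create secret generic`) are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo        # name is an assumption
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret   # file.txt appears as /etc/secret/file.txt
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: mysecret     # the $NAME passed to 'kubectl create secret generic'
```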
Logging
$ kubectl logs --tail=5 $POD -c $CONTAINER
$ kubectl logs --since=10s $POD -c $CONTAINER
$ kubectl logs -p $POD -c $CONTAINER
API
$ kubectl proxy --port=8080
$ curl http://localhost:8080/api/v1
$ curl http://127.0.0.1:8001/api/v1/namespaces
$ kubectl get --raw=/api/v1
$ kubectl api-versions
...
$ export client=$(grep client-cert ~/.kube/config | cut -d" " -f 6)
$ export key=$(grep client-key-data ~/.kube/config | cut -d" " -f 6)
$ export auth=$(grep certificate-authority-data ~/.kube/config | cut -d" " -f 6)
$ echo $client, $key, $auth
...
$ echo $client | base64 -d - > ./client.pem
$ echo $key | base64 -d - > ./client-key.pem
$ echo $auth | base64 -d - > ./ca.pem
...
$ curl --cert ./client.pem --key ./client-key.pem --cacert ./ca.pem https://k8smaster:6443/api/v1/pods
Namespace
$ kubectl config set-context --current --namespace=$NAME
YAML
Yaml-Config
kind: Config
preferences: {}
clusters (cluster, name)
users (name, user)
contexts (cluster, namespace, user)
current-context
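A skeleton of such a Config file (kubeconfig), with the fields from the outline above filled in; all names and the base64 placeholders are assumptions:

```yaml
apiVersion: v1
kind: Config
preferences: {}
clusters:
- name: k8s-cluster
  cluster:
    server: https://k8smaster:6443
    certificate-authority-data: <base64-ca>        # placeholder
users:
- name: admin
  user:
    client-certificate-data: <base64-cert>         # placeholder
    client-key-data: <base64-key>                  # placeholder
contexts:
- name: admin@k8s-cluster
  context:
    cluster: k8s-cluster
    namespace: default
    user: admin
current-context: admin@k8s-cluster
```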
Yaml-ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.15.1
controlPlaneEndpoint: "k8smaster:6443"
networking:
  podSubnet: 192.168.0.0/16
Yaml-Deployment
kind: Deployment
metadata (name, labels, namespace)
spec (replicas, template)
- template (metadata, spec)
--- spec (containers, volumes, nodeSelector)
---- containers (name, image, imagePullPolicy, ports, env, securityContext, volumeMounts)
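A concrete Deployment following that field outline might look like the sketch below; the name, labels, image, and port are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx              # name and labels are assumptions
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:                # pod template (metadata, spec)
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
```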
Yaml-Pod
kind: Pod
metadata:
name: podName
labels:
env: development
app: anything
spec:
containers:
- name: conName
image: nginx
ports:
- containerPort: 9876
command:
- "/bin/bash"
- "-c"
- "sleep 10000"
resources:
limits:
memory: "64Mi"
cpu: "500m"
Yaml-Service
kind: Service
metadata (name, namespace, labels, selfLink)
spec (clusterIP, ports, selector, type)
...
kind: Service
metadata:
name: nameService
spec:
ports:
- port: 80
targetPort: 9876
selector:
app: sise
Yaml-Route
kind: Route
metadata (name, namespace, labels)
spec (host, to, port, tls)
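An OpenShift Route tying those fields together might look like the sketch below; the route name and host are assumptions, and the target reuses the nameService/9876 values from the Yaml-Service example above:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-route           # name is an assumption
spec:
  host: www.example.com    # host is an assumption
  to:
    kind: Service
    name: nameService      # Service name from the Yaml-Service example
  port:
    targetPort: 9876
  tls:
    termination: edge      # TLS terminated at the router
```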