IT-SDK-Kubernetes-YAML

From wiki.samerhijazi.net
 
== Introduction ==

===Source-Top===
* https://kubernetes.io/docs/reference/kubectl/cheatsheet/
* http://kubernetesbyexample.com/

===Source-Mix===
* https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/
* https://kubernetes.io/blog/2020/06/working-with-terraform-and-kubernetes/
* https://opensource.com/article/20/5/kubectl-cheat-sheet
* https://developers.redhat.com/blog/2020/05/11/top-10-must-know-kubernetes-design-patterns/?utm_medium=Email&utm_campaign=weekly&sc_cid=7013a000002DolXAAS
* https://www.redhat.com/en/events/webinar/kubernetes-101—-introduction-containers-kubernetes-and-openshift-red-hat-training
* Lens | The Kubernetes IDE: https://k8slens.dev/
* https://ubuntu.com/tutorials/how-to-kubernetes-cluster-on-raspberry-pi#1-overview
 
* Web-Source: https://medium.com/google-cloud/kubernetes-101-pods-nodes-containers-and-clusters-c1509e409e16
* https://linuxacademy.com/site-content/uploads/2019/04/Kubernetes-Cheat-Sheet_07182019.pdf
* https://linuxacademy.com/blog/containers/kubernetes-cheat-sheet/?utm_source=linkedin&utm_medium=social&utm_campaign=2020_kubernetesblogs
* https://bitnami.com/stack/redmine/helm
* https://docs.bitnami.com/kubernetes/get-started-kubernetes/
* https://docs.bitnami.com/kubernetes/how-to/deploy-application-kubernetes-helm/
* https://docs.bitnami.com/azure/get-started-aks/
* https://docs.bitnami.com/aws/get-started-charts-eks-marketplace/
* https://docs.helm.sh/using_helm/#install-kubernetes-or-have-access-to-a-cluster
* https://docs.helm.sh/using_helm/#install-helm
=== Structure ===

* Cluster >>> Nodes >>> Pods (Endpoint) >>> Containers (App) #### Deployments >>> Service (selector:app=A)
* '''Node''': Has a Node-IP.
* '''Pod''': Has an Endpoint-IP.
* '''Service''': Has a Cluster-IP.
* Pod has: (label, nodeSelector); containerPort; (podIP, hostIP).
* Service has: selector, port, targetPort, nodePort.
* Node-Components: kubelet, kube-proxy
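In manifest form, the Pod-side fields listed above fit together roughly as follows (a minimal sketch; the names <code>web-pod</code>, <code>app=A</code> and <code>disktype=ssd</code> are illustrative, not from this page):

```yaml
# Sketch of a Pod showing label, nodeSelector and containerPort (illustrative names).
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: A            # a label that a Service selector can match
spec:
  nodeSelector:
    disktype: ssd     # schedule only onto nodes carrying this label
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
```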
===Deployment===

* Deployment: its primary purpose is to declare how many '''replicas''' of a pod should be running at a time.
* Deleting a deployment does not delete the endpoints (Pods) or services.
* Persistent Volumes: used to store data permanently.
* Isolation between pods.
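A minimal Deployment sketch illustrating the replica declaration (the names <code>web-deploy</code> and <code>app=A</code> are illustrative):

```yaml
# Sketch of a Deployment that keeps three replicas of an nginx pod running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3               # desired number of pod copies
  selector:
    matchLabels:
      app: A
  template:
    metadata:
      labels:
        app: A              # must match the selector above
    spec:
      containers:
      - name: web
        image: nginx
```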
===Services===

* A Service in Kubernetes defines a logical set of Pods and a policy by which to access them.
* Ingress: communicate with a service running in a pod >> Ingress-Controller / LoadBalancer.
* The set of Pods targeted by a Service is usually determined by a label ('''Selector''').
* Services can be exposed in different ways by specifying a type in the ServiceSpec.
* Type: '''ClusterIP''', '''NodePort''', '''LoadBalancer''', '''ExternalName'''
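A sketch of a NodePort Service tying together the selector and the three port fields named above (names and port numbers are illustrative):

```yaml
# Sketch of a NodePort Service selecting pods labeled app=A.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: NodePort
  selector:
    app: A            # targets pods carrying this label
  ports:
  - port: 80          # port on the ClusterIP
    targetPort: 80    # containerPort inside the pod
    nodePort: 30080   # port opened on every node (default range 30000-32767)
```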
== Linux-Admin ==

<pre class="code">
$ vi /etc/sudoers.d/student          # Add: student ALL=(ALL) ALL
$ PATH=$PATH:/usr/sbin:/sbin
$ export PATH="/home/sh/.minishift/cache/oc/v3.11.0/linux:$PATH"
$ tar -xvf filename
$ ip addr show
$ vim /etc/hosts
$ less filename.txt                  # Display the contents of a file
$ cat filename.txt                   # Display the contents of a file
$ tee filename.txt                   # Redirect output to multiple files
</pre>

<pre class="code">
alias k="kubectl"
alias kgp="kubectl get pods -o wide"
alias kgd="kubectl get deployment -o wide"
alias kgs="kubectl get svc -o wide"
alias kgn="kubectl get nodes -o wide"
# ...
alias kdp="kubectl describe pod"
alias kdd="kubectl describe deployment"
alias kds="kubectl describe service"
alias kdn="kubectl describe nodes"
</pre>
 
== Infrastructure ==

* Installation with Vagrant: https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/
* Ubuntu 16.04 LTS: ubuntu/xenial64
* Ubuntu 18.04 LTS: ubuntu/bionic64
* Ubuntu 20.04 LTS: ubuntu/focal64

<pre class="code">
# -*- mode: ruby -*-
# vi: set ft=ruby :
IMAGE_NAME = "ubuntu/focal64"

Vagrant.configure("2") do |config|
    config.ssh.insert_key = false

    config.vm.provider "virtualbox" do |vb|
        vb.gui = false
        vb.cpus = 2
        vb.memory = 4096
    end

    config.vm.define "k8s-master" do |master|
        master.vm.box = IMAGE_NAME
        master.vm.hostname = "k8s-master"
        master.vm.network "public_network", bridge: "br0"
    end

    config.vm.define "k8s-node01" do |node|
        node.vm.box = IMAGE_NAME
        node.vm.hostname = "k8s-node01"
        node.vm.network "public_network", bridge: "br0"
    end
end
</pre>
  
== Installation ==

===Installing kubectl===
<pre class="code">
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
$ sudo install kubectl /sdk/bin
</pre>

===Installing minikube===
<pre class="code">
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube
$ sudo install minikube /sdk/bin
$ minikube start
</pre>

===Installing Master===
<pre class="code">
[user@master:~$] sudo -i
[root@master:~$] apt-get update && apt-get upgrade -y
...
[root@master:~$] echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
[root@master:~$] curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
[root@master:~$] apt-get update
...
[root@master:~$] apt-get install -y docker.io
[root@master:~$] apt-get install -y kubeadm=1.15.1-00 kubelet=1.15.1-00 kubectl=1.15.1-00
...
[root@master:~$] ip addr show
[root@master:~$] vim /etc/hosts               # Add a local DNS alias for the master server
[root@master:~$] vim kubeadm-config.yaml      # Add Kubernetes version, node alias, IP range
-----------------------------------------------------------------------------------------------
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.15.1               #<-- Use the word stable for the newest version
controlPlaneEndpoint: "k8smaster:6443"  #<-- Use the node alias, not the IP
networking:
  podSubnet: 192.168.0.0/16             #<-- Match the IP range with the Calico config file
-----------------------------------------------------------------------------------------------
[root@master:~$] kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
[root@master:~$] exit
</pre>
 
== Top Topics ==

* Installation
* Cluster
* Nodes
* Pods
* InitContainers
* Deployments, StatefulSet
* Services, ServiceDiscovery, Expose, PortForward
* Volumes
* Labels
* Taints
* ReplicaSets, DaemonSets, Rollout
* Secrets
* Logging
* HealthCheck
* Jobs & CronJobs
* APIs
* Tool-Monitoring
* Tool-Helm
 
 
<pre class="code">
[user@master:~$] mkdir -p $HOME/.kube
[user@master:~$] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[user@master:~$] sudo chown $(id -u):$(id -g) $HOME/.kube/config
[user@master:~$] less .kube/config
...
[user@master:~$] kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
[user@master:~$] kubectl taint nodes --all node-role.kubernetes.io/master-   # Remove the taint on the master so pods can be scheduled on it
...
[user@master:~$] wget https://tinyurl.com/yb4xturm -O rbac-kdd.yaml
[user@master:~$] wget https://tinyurl.com/y8lvqc9g -O calico.yaml
[user@master:~$] kubectl apply -f rbac-kdd.yaml
[user@master:~$] kubectl apply -f calico.yaml
...
[user@master:~$] source <(kubectl completion bash)
[user@master:~$] echo "source <(kubectl completion bash)" >> ~/.bashrc
...
[user@master:~$] sudo kubeadm config print init-defaults
</pre>
  
===Installing Worker===

<pre class="code">
[user@node01:~$] sudo -i
[root@node01:~$] apt-get update && apt-get upgrade -y
...
[root@node01:~$] echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
[root@node01:~$] curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
[root@node01:~$] apt-get update
...
[root@node01:~$] apt-get install -y docker.io
[root@node01:~$] apt-get install -y kubeadm kubelet kubectl
...
[root@node01:~$] ip addr show
[root@node01:~$] echo "192.168.50.10  k8smaster" >> /etc/hosts              # Add a local DNS alias for the master
[root@node01:~$] echo "192.168.50.11  k8snode01" >> /etc/hosts              # Add a local DNS alias for the worker node
...
[root@node01:~$] kubeadm join --token 27eee4.6e66ff60318da929 k8smaster:6443 --discovery-token-ca-cert-hash sha256:6d541678b05652e1fa5d43908e75e67376e994c3483d6683f2a18673e5d2a1b0
[root@node01:~$] exit
</pre>
  
== Life Cycle: kubectl ==

===Topics===
* Cluster
* Nodes
* Pods
* InitContainers: https://docs.openshift.com/container-platform/4.1/nodes/containers/nodes-containers-init.html
* Deployments, StatefulSet
* ServiceDiscovery, PortForward
* Taints
* Rollout
* Secrets
* HealthCheck
* Jobs & CronJobs
* Tool-Monitoring
* Tool-Helm

===Config===
<pre class="code">
$ kubectl config --kubeconfig=$CFG_FILE
$ kubectl config --kubeconfig=$CFG_FILE use-context $CONTEXT_NAME
...
$ kubectl config set-context $NAME --namespace=$NAME
</pre>

===Basics-Main===
<pre class="code">
$ kubectl run $NAME --image=nginx
$ kubectl create deployment $NAME --image=nginx
...
</pre>
<pre class="code">
$ minikube service hello-minikube --url
$ kubectl get nodes --show-labels
$ kubectl get pods -l app=nginx --all-namespaces
$ kubectl get deploy --show-labels
...
$ kubectl label node $NODE_ID system=secondOne
$ kubectl label node $NODE_ID system-
...
$ kubectl expose deployment nginx-one --type=NodePort --name=service-lab
$ kubectl expose deployment nginx-one
$ kubectl describe services
...
$ kubectl delete deploy -l system=secondary
</pre>

===Labels===

<pre class="code">
$ kubectl label [node,pod,deployment,service] $NAME $LABEL_KEY=$LABEL_VALUE
$ kubectl get [node,pod,deployment,service] --show-labels
$ kubectl get [node,pod,deployment,service] -l $LABEL_KEY=$LABEL_VALUE
...
$ kubectl label pod nginx type=test
$ kubectl get pods --selector owner=michael
$ kubectl get pods -l env=development
...
</pre>
  
 
===ReplicaSet===

A ReplicaSet is defined with fields including a selector that specifies how to identify the Pods it can acquire, a replica count indicating how many Pods it should maintain, and a pod template specifying the data of the new Pods it should create to meet that count. When the ReplicaSet is deleted, only the pods that still carry the matching system label are deleted.

<pre class="code">
kubectl get rs
kubectl create -f rs.yaml
kubectl delete rs rs-one --cascade=false
kubectl edit po $POD_ID     # Change system: ReplicaOne to system: IsolatedPod
kubectl get po -L system
kubectl delete rs rs-one
</pre>
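The <code>rs.yaml</code> referenced above is not shown on this page; a hypothetical version matching the <code>rs-one</code> / <code>system: ReplicaOne</code> names used in the commands could look like this:

```yaml
# Hypothetical rs.yaml for the commands above (label system=ReplicaOne).
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-one
spec:
  replicas: 2
  selector:
    matchLabels:
      system: ReplicaOne
  template:
    metadata:
      labels:
        system: ReplicaOne   # editing this label on a pod orphans it
    spec:
      containers:
      - name: main
        image: nginx
```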
===DaemonSet===

A DaemonSet ensures that when a node is added to the cluster, a pod is created on that node.

<pre class="code">
kubectl get ds
kubectl set image [pod,deploy,rc,rs,ds] $NAME $CONTAINER_ID=nginx:1.16
kubectl rollout status $TYPE $NAME
kubectl rollout history $TYPE $NAME
kubectl rollout history $TYPE $NAME --revision=1
kubectl rollout undo $TYPE $NAME --to-revision=1
</pre>
 
===exec===

<pre class="code">
...
</pre>
  
 
===Expose===

* Expose types: ClusterIP, NodePort, LoadBalancer, ExternalName.

<pre class="code">
$ kubectl expose [pod,svc,deploy,rc,rs] $NAME --port=1234 --type=[NodePort,LoadBalancer]
...
$ kubectl run $NAME --image=nginx:1.12 --port=9876
$ kubectl create deployment $NAME --image=nginx
...
$ kubectl expose deployment $NAME                            # Exposes the Service on a ClusterIP (cluster-internal).
$ kubectl expose deployment $NAME --type=LoadBalancer        # Exposes the Service externally.
...
</pre>
===Taints===

<pre class="code">
$ kubectl taint nodes --all node.kubernetes.io/not-ready
...
$ kubectl taint nodes $NODE_ID key=value:NoExecute
$ kubectl taint nodes $NODE_ID key=value:NoSchedule
$ kubectl taint nodes $NODE_ID key=value:PreferNoSchedule
...
$ kubectl taint nodes $NODE_ID key-
$ kubectl taint nodes $NODE_ID key:NoExecute-
$ kubectl taint nodes $NODE_ID key:NoSchedule-
$ kubectl taint nodes $NODE_ID key:PreferNoSchedule-
...
$ kubectl drain $NODE_ID
$ kubectl uncordon $NODE_ID
...
$ kubectl describe node | grep Taint
$ kubectl describe nodes $NODE_ID | grep -i taint
</pre>
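The counterpart to a taint on the pod side is a toleration. A minimal sketch, reusing the illustrative <code>key=value:NoSchedule</code> taint from the commands above (the pod name is made up):

```yaml
# Sketch: a pod that tolerates the key=value:NoSchedule taint.
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"    # allows scheduling onto nodes carrying this taint
  containers:
  - name: main
    image: nginx
```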
  
 
===Volumes===

* Ceph is another popular solution for dynamic, persistent volumes.
* Type: node-local, such as '''emptyDir''' or '''hostPath'''
* Type: file-sharing, such as '''nfs'''
* Type: distributed-file, such as '''glusterfs''' or '''cephfs'''
* Type: special-purpose, such as '''secret''', '''gitRepo''', '''PersistentVolume'''

<pre class="code">
kubectl create -f pv.yaml   # PersistentVolume
kubectl create -f pvc.yaml  # PersistentVolumeClaim
...
kubectl create configmap colors --from-literal=text=black --from-file=./favorite --from-file=./primary/
kubectl get configmap colors -o yaml
...
kubectl exec -it shell-demo -- /bin/bash -c 'echo $ilike'
kubectl exec -it shell-demo -- /bin/bash -c 'env'
kubectl exec -it shell-demo -- /bin/bash -c 'df -ha | grep car'
kubectl exec -it shell-demo -- /bin/bash -c 'cat /etc/cars/car.trim'
</pre>
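The <code>pv.yaml</code> / <code>pvc.yaml</code> files referenced above are not shown here; a hypothetical minimal pair, reusing the <code>myclaim</code> name that appears later in the pod spec, could look like this (paths and sizes are illustrative):

```yaml
# Hypothetical pv.yaml / pvc.yaml pair for the commands above.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-one
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/pv-one      # node-local path, for single-node testing only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim             # referenced by claimName in the pod spec
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```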
 
'''In Pod'''

<pre class="code">
...
      claimName: myclaim
</pre>

'''PVC'''

<pre class="code">
kind: PersistentVolumeClaim
...
</pre>
===Logs===

<pre class="code">
$ kubectl logs --since=10s $POD -c $CONTAINER
$ kubectl logs -p $POD -c $CONTAINER
</pre>
 +
===API===

<pre class="code">
$ kubectl proxy --port=8080
$ curl http://localhost:8080/api/v1
$ curl http://127.0.0.1:8001/api/v1/namespaces
$ kubectl get --raw=/api/v1
$ kubectl api-versions
...
$ export client=$(grep client-cert ~/.kube/config | cut -d" " -f 6)
$ export key=$(grep client-key-data ~/.kube/config | cut -d" " -f 6)
$ export auth=$(grep certificate-authority-data ~/.kube/config | cut -d" " -f 6)
$ echo $client, $key, $auth
...
$ echo $client | base64 -d - > ./client.pem
$ echo $key    | base64 -d - > ./client-key.pem
$ echo $auth   | base64 -d - > ./ca.pem
...
$ curl --cert ./client.pem --key ./client-key.pem --cacert ./ca.pem https://k8smaster:6443/api/v1/pods
</pre>
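The grep/cut/base64 extraction above can be exercised without a cluster against a fabricated kubeconfig-style file. Everything below (the <code>/tmp/kubedemo</code> path and the <code>FAKE-*</code> values) is made up for illustration:

```shell
# Build a fake kubeconfig-style file; the real ~/.kube/config indents these
# keys with four spaces, which is why `cut -d" " -f 6` picks out the value.
mkdir -p /tmp/kubedemo
cat > /tmp/kubedemo/config <<EOF
    client-certificate-data: $(printf 'FAKE-CERT' | base64)
    client-key-data: $(printf 'FAKE-KEY' | base64)
    certificate-authority-data: $(printf 'FAKE-CA' | base64)
EOF

# Same extraction pipeline as above, pointed at the fake file.
client=$(grep client-certificate-data /tmp/kubedemo/config | cut -d" " -f 6)
echo "$client" | base64 -d - > /tmp/kubedemo/client.pem
cat /tmp/kubedemo/client.pem    # FAKE-CERT
```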
  
===Namespace===

<pre class="code">
$ kubectl config set-context --current --namespace=$NAME
</pre>
 
==YAML==

===Yaml-Config===

<pre class="code">
...
spec (host, to, port, tls)
</pre>
== Training ==

=== Introduction ===

=== Basics of Kubernetes ===

=== Installation and Configuration ===
 
 
 
==== Installing Worker ====

<pre class="code">
[user@node01:~$] sudo -i
[root@node01:~$] apt-get update && apt-get upgrade -y
[root@node01:~$] apt-get install -y docker.io
[root@node01:~$] vim /etc/apt/sources.list.d/kubernetes.list   # Add: deb http://apt.kubernetes.io/ kubernetes-xenial main
[root@node01:~$] curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
[root@node01:~$] apt-get update
[root@node01:~$] apt-get install -y kubeadm=1.15.1-00 kubelet=1.15.1-00 kubectl=1.15.1-00
[root@node01:~$] exit
...
[user@master:~$] ip addr show ens4 | grep inet
[user@master:~$] sudo kubeadm token list
[user@master:~$] sudo kubeadm token create
[user@master:~$] openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
...
[root@node01:~$] vim /etc/hosts
[root@node01:~$] kubeadm join --token 27eee4.6e66ff60318da929 k8smaster:6443 --discovery-token-ca-cert-hash sha256:6d541678b05652e1fa5d43908e75e67376e994c3483d6683f2a18673e5d2a1b0
[root@node01:~$] exit
[user@node01:~$] kubectl get nodes
[user@node01:~$] ls -l .kube
</pre>
 
 
==== Setting Taint ====
 
<pre class="code">
 
</pre>
 
 
==== App life cycle 1 ====
 
* core: deployment >> pod >> service
 
<pre class="code">
 
 
</pre>
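The code block above is empty in the source; a minimal pass through the deployment >> pod >> service cycle could look like the following sketch (the name <code>webapp</code> is illustrative, and a running cluster is assumed):

```shell
kubectl create deployment webapp --image=nginx   # deployment creates the pod
kubectl get deploy,pod                           # watch the pod come up
kubectl expose deployment webapp --port=80       # service in front of the pod
kubectl get svc webapp
kubectl delete service webapp
kubectl delete deployment webapp                 # also removes the pod
```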
 
 
==== App life cycle 2 ====
 
<pre class="code">
 
 
</pre>
 
 
=== Kubernetes Architecture ===
 
 
=== APIs and Access ===
 
 
 
=== API Objects ===
 
==== Jobs & Cronjobs ====

'''Jobs'''

<pre class="code">
kind: Job
metadata (name)
spec (completions, parallelism, activeDeadlineSeconds)
---containers (name, image, command, args)
</pre>

'''Cronjobs'''

<pre class="code">
* * * * * command to execute
# minute (0 - 59)
# hour (0 - 23)
# day of the month (1 - 31)
# month (1 - 12)
# day of the week (0 - 6)
...
kind: CronJob
metadata (name)
spec (schedule, jobTemplate)
---containers (name, image, args)
</pre>
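Filling in the field skeleton above, a concrete CronJob could look like this (a sketch; the name and schedule are illustrative, and <code>batch/v1beta1</code> matches the 1.15-era cluster used on this page):

```yaml
# Sketch of a CronJob that prints the date every 5 minutes.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: date-cron
spec:
  schedule: "*/5 * * * *"        # standard cron syntax, as above
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: date
            image: busybox
            args: ["/bin/sh", "-c", "date"]
```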
 
 
=== Managing State With Deployments ===
 
 
==== DaemonSet ====

The DaemonSet ensures that when a node is added to the cluster, a pod is created on that node.

<pre class="code">
kubectl create -f ds.yaml
kubectl get ds
kubectl set image ds ds-one nginx=nginx:1.12.1
kubectl rollout history ds ds-one
kubectl rollout history ds ds-one --revision=1
kubectl rollout undo ds ds-one --to-revision=1
...
kubectl create -f ds.yaml
////////////
name: ds-two
updateStrategy:
    type: OnDelete      # or: type: RollingUpdate
////////////
kubectl rollout status ds ds-two
</pre>
 
 
=== Services ===

<pre class="code">
kubectl get nodes --show-labels
kubectl label node $NODE_ID system=secondOne
kubectl get pods -l app=nginx --all-namespaces
kubectl expose deployment nginx-one
...
kubectl expose deployment nginx-one --type=NodePort --name=service-lab
kubectl describe services
...
kubectl get deploy --show-labels
kubectl delete deploy -l system=secondary
kubectl label node $NODE_ID system-
</pre>
 
 
=== Volumes and Data ===

* Ceph is another popular solution for dynamic, persistent volumes.
* spec.volumes
* spec.containers.volumeMounts

<pre class="code">
kubectl create configmap colors --from-literal=text=black --from-file=./favorite --from-file=./primary/
kubectl get configmap colors -o yaml
kubectl exec -it shell-demo -- /bin/bash -c 'echo $ilike'
kubectl exec -it shell-demo -- /bin/bash -c 'env'
kubectl exec -it shell-demo -- /bin/bash -c 'df -ha | grep car'
kubectl exec -it shell-demo -- /bin/bash -c 'cat /etc/cars/car.trim'
...
kubectl delete pods shell-demo
kubectl delete configmap fast-car colors
...
kubectl create -f pv.yaml   # PersistentVolume
kubectl get pv
...
kubectl create -f pvc.yaml  # PersistentVolumeClaim
kubectl get pvc
</pre>
 
 
=== Ingress ===

<pre class="code">
kubectl create deployment secondapp --image=nginx
kubectl get deployments secondapp -o yaml | grep label -A2
kubectl expose deployment secondapp --type=NodePort --port=80
kubectl create -f ingress.rbac.yaml
</pre>
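An Ingress routing traffic to the <code>secondapp</code> service created above could be sketched as follows (the host name is illustrative, and <code>networking.k8s.io/v1beta1</code> matches the 1.15-era cluster used on this page):

```yaml
# Sketch of an Ingress forwarding a host to the secondapp service.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-test
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: secondapp   # the NodePort service exposed above
          servicePort: 80
```

An ingress controller (e.g. the one referenced in the Services notes) must be running in the cluster for this resource to have any effect.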
 
 
=== Scheduling ===

<pre class="code">
kubectl describe nodes | grep -i label
kubectl describe nodes | grep -i taint
kubectl get deployments --all-namespaces
sudo docker ps | wc -l
...
kubectl label nodes $NODE_ID status=vip
kubectl get nodes --show-labels
////
nodeSelector:
    status: vip
////
</pre>
 
 
=== Logging and Troubleshooting ===
 
=== Custom Resource Definition ===
 
=== Helm ===
 
=== Security ===
 
=== High Availability ===
 

Latest revision as of 18:42, 14 September 2021

Introduction

Source-Top

Source-Mix

Structure

  • Cluster >>> Nodes >>> Pods (Endpoint) >>> Containers (App) #### Deployments >>> Service (selector:app=A)
  • Node: Has a Node-IP.
  • Pod: Has an Endpoint-IP.
  • Service: Has a Cluster-IP.
  • Pod has: (label, nodeSelector); containerPort; (podIP, hostIP).
  • Service has: selector, port, targetPort, nodePort
  • Node-Components: kubelet, kube-proxy

Deployment

  • Deployment: primary purpose is to declare how many replicas of a pod should be running at a time.
  • Deleting a deployment does not delete the endpoints (Pod) or services.
  • Persistent Volumes: To store data permanently
  • Isolation between pods.

Services

  • Service in Kubernetes defines a logical set of Pods and a policy by which to access them.
  • Ingress: communicate with a service running in a pod >> Ingress-Controller / LoadBalancer
  • The set of Pods targeted by a Service is usually determined by a Label (Selector).
  • Services can be exposed in different ways by specifying a type in the ServiceSpec.
  • Typ: ClusterIP, NodePort, LoadBalancer, ExternalName

Linux-Admin

$ vi /etc/sudoers.d #Add: student ALL=(ALL) ALL
$ PATH=$PATH:/usr/sbin:/sbin
$ export PATH="/home/sh/.minishift/cache/oc/v3.11.0/linux:$PATH"
$ tar -xvf filename
$ ip addr show
$ vim /etc/hosts
$ less filaname.txt # Dispaly the contents of a file
$ cat filename.txt # Display the content of a file
$ tee filename.txt # Redirect output to multiple files
alias k="kubectl"
alias kgp="kubectl get pods -owide"
alias kgd="kubectl get deployment -o wide"
alias kgs="kubectl get svc -o wide"
alias kgn="kubectl get nodes -owide"
# ...
alias kdp="kubectl describe pod"
alias kdd="kubectl describe deployment"
alias kds="kubectl describe service"
alias kdn="kubectl describe nodes"

Infrastructure

# -*- mode: ruby -*-
# vi: set ft=ruby :
IMAGE_NAME = "ubuntu/focal64"

Vagrant.configure("2") do |config|
    config.ssh.insert_key = false

    config.vm.provider "virtualbox" do |vb|
        vb.gui = false
        vb.cpus = 2
        vb.memory = 4096
    end
      
    config.vm.define "k8s-master" do |master|
        master.vm.box = IMAGE_NAME
        master.vm.hostname = "k8s-master"
        master.vm.network "public_network", bridge: "br0"
    end
    
    config.vm.define "k8s-node01" do |node|
        node.vm.box = IMAGE_NAME
        node.vm.hostname = "k8s-node01"            
        node.vm.network "public_network", bridge: "br0"
    end    	
end

Installation

Installing kubectl

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
$ sudo install kubectl /sdk/bin

Installing minikube

$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube
$ sudo install minikube /sdk/bin
$ minikube start

Installing Master

[user@master:~$] sudo -i
[root@master:~$] apt-get update && apt-get upgrade -y
...
[root@master:~$] echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
[root@master:~$] curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
[root@master:~$] apt-get update
...
[root@master:~$] apt-get install -y docker.io
[root@master:~$] apt-get install -y kubeadm=1.15.1-00 kubelet=1.15.1-00 kubectl=1.15.1-00
...
[root@master:~$] ip addr show
[root@master:~$] vim /etc/hosts               # Add a local DNS alias for the master server
[root@master:~$] vim kubeadm-config.yaml      # Add Kubernetes version, node alias, IP range
-----------------------------------------------------------------------------------------------
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.15.1               #<-- Use the word stable for newest version
controlPlaneEndpoint: "k8smaster:6443"  #<-- Use the node alias not the IP
networking:
  podSubnet: 192.168.0.0/16             #<-- Match the IP with Calico config file
-----------------------------------------------------------------------------------------------
[root@master:~$] kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
[root@master:~$] exit
[user@master:~$] mkdir -p $HOME/.kube
[user@master:~$] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[user@master:~$] sudo chown $(id -u):$(id -g) $HOME/.kube/config
[user@master:~$] less .kube/config
...
[user@master:~$] kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
[user@master:~$] kubectl taint nodes --all node-role.kubernetes.io/master-   # Remove the taints on the master to schedule pods on it
...
[user@master:~$] wget https://tinyurl.com/yb4xturm -O rbac-kdd.yaml
[user@master:~$] wget https://tinyurl.com/y8lvqc9g -O calico.yaml
[user@master:~$] kubectl apply -f rbac-kdd.yaml
[user@master:~$] kubectl apply -f calico.yaml
...
[user@master:~$] source <(kubectl completion bash)
[user@master:~$] echo "source <(kubectl completion bash)" >> ~/.bashrc
...
[user@master:~$] sudo kubeadm config print init-defaults

Installing Worker

[user@node01:~$] sudo -i
[root@node01:~$] apt-get update && apt-get upgrade -y
...
[root@node01:~$] echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
[root@node01:~$] curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
[root@node01:~$] apt-get update
...
[root@node01:~$] apt-get install -y docker.io
[root@node01:~$] apt-get install -y kubeadm kubelet kubectl
...
[root@node01:~$] ip addr show
[root@node01:~$] echo "192.168.50.10   k8smaster" >> /etc/hosts               # Add a local DNS alias for the master server
[root@node01:~$] echo "192.168.50.11   k8snode01" >> /etc/hosts               # Add a local DNS alias for this worker node
...
[root@node01:~$] kubeadm join --token 27eee4.6e66ff60318da929 k8smaster:6443 --discovery-token-ca-cert-hash sha256:6d541678b05652e1fa5d43908e75e67376e994c3483d6683f2a18673e5d2a1b0
[root@node01:~$] exit

Life Cycle: kubeadm

$ kubeadm init
$ kubeadm join
$ kubeadm config
$ kubeadm token

Life Cycle: kubectl

Topics

Config

$ kubectl config --kubeconfig=$CFG_FILE use-context $CONTEXT_NAME
$ kubectl config set-context $NAME --namespace=$NAME

Basics-Main

$ kubectl run $NAME --image=nginx
$ kubectl create deployment $NAME --image=nginx
...
$ kubectl run $NAME --image=nginx:1.23 --replicas=2 --port=9876 # Create and run a particular image (--replicas was removed from kubectl run in v1.18; use create deployment for replicas).
$ kubectl create  -f file.yaml   # Create a resource from a file.
$ kubectl apply   -f file.yaml   # Apply a configuration to a resource by filename. Create the resource initially with either 'apply' or 'create --save-config'.
$ kubectl replace -f file.yaml   # Terminate and Replace a resource by filename.
...
$ kubectl get all
$ kubectl get all --all-namespaces
$ kubectl get all -o wide
$ kubectl get namespaces
$ kubectl get nodes
$ kubectl get deployments
$ kubectl get pods
$ kubectl get services
$ kubectl get endpoints
$ kubectl get jobs
...
$ kubectl describe $RESOURCE $NAME
$ kubectl describe deployment nginx
...
$ kubectl delete $TYPE --all -n $NAMESPACE
$ kubectl delete $TYPE $NAME
$ kubectl delete deployments $NAME
$ kubectl delete pod $NAME
$ kubectl delete service $NAME
$ kubectl delete endpoint $NAME
$ kubectl delete job $NAME

Basics-Mix

$ kubectl get deployment nginx -o yaml > file.yaml
$ kubectl scale deployment nginx --replicas=3
$ kubectl apply -f project/k8s/development --recursive
$ kubectl get pods -Lapp -Ltier -Lrole
$ kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'
$ kubectl autoscale deployment/my-nginx --min=1 --max=3

Services

$ minikube service hello-minikube --url
$ kubectl get nodes --show-labels
$ kubectl get pods -l app=nginx --all-namespaces
$ kubectl get deploy --show-labels
...
$ kubectl label node $NODE_ID system=secondOne
$ kubectl label node $NODE_ID system-
...
$ kubectl expose deployment nginx-one --type=NodePort --name=service-lab
$ kubectl expose deployment nginx-one
$ kubectl describe services
...
$ kubectl delete deploy -l system=secondary

Labels

$ kubectl label [node,pod,deployment,service] $NAME $LABEL_KEY=$LABEL_VALUE
$ kubectl get [node,pod,deployment,service] --show-labels
$ kubectl get [node,pod,deployment,service] -l $LABEL_KEY=$LABEL_VALUE
...
$ kubectl label pod nginx type=test
$ kubectl get pods --selector owner=michael
$ kubectl get pods -l env=development
$ kubectl get pods -l 'env in (production, development)'
$ kubectl delete pods -l 'env in (production, development)'

ReplicaSet

A ReplicaSet is defined with fields including a selector that specifies how to identify the Pods it can acquire, a replica count indicating how many Pods it should maintain, and a Pod template specifying the data of the new Pods it creates to meet that count. Deleting a ReplicaSet deletes only the Pods that still carry its selector label; a Pod relabeled out of the set (e.g. to IsolatedPod below) is orphaned and survives.
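The rs.yaml used below might look like this minimal sketch (the name rs-one and the system: ReplicaOne label mirror the commands that follow; the image is illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-one
spec:
  replicas: 2
  selector:
    matchLabels:
      system: ReplicaOne   # Pods relabeled away from this survive deletion
  template:
    metadata:
      labels:
        system: ReplicaOne
    spec:
      containers:
      - name: nginx
        image: nginx
```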

kubectl get rs
kubectl create -f rs.yaml
kubectl delete rs rs-one --cascade=false
kubectl edit po $POD_ID  # change system: ReplicaOne >>to>> system: IsolatedPod
kubectl get po -L system
kubectl delete rs rs-one

DaemonSet

The DaemonSet ensures that when a node is added to the cluster, a Pod from its template is created on that node.
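A minimal DaemonSet sketch (note: no replicas field, since the node count determines the Pod count; names are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-one
spec:
  selector:
    matchLabels:
      system: DaemonSetOne
  template:
    metadata:
      labels:
        system: DaemonSetOne
    spec:
      containers:
      - name: nginx
        image: nginx
```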

kubectl get ds
kubectl set image [pod,deploy,rc,rs,ds] $NAME $CONTAINER_NAME=nginx:1.16
kubectl rollout status $TYPE $NAME
kubectl rollout history $TYPE $NAME
kubectl rollout history $TYPE $NAME --revision=1
kubectl rollout undo $TYPE $NAME --to-revision=1

exec

$ kubectl exec -it $POD -- printenv
$ kubectl exec -it $POD -- /bin/bash
$ kubectl exec -it $POD -- /bin/bash -c 'env'
$ kubectl exec -it $POD -- /bin/bash -c 'df -ha |grep car'
$ kubectl exec -it $POD -- /bin/bash -c 'echo $ilike'
$ kubectl exec -it $POD -- /bin/bash -c 'cat /etc/cars/car.trim'
$ kubectl exec -it $POD -c shell -- ping $SVC.$NAMESPACE.svc.cluster.local
$ kubectl exec -it $POD -c c1 -- bash

Expose

  • Expose types: ClusterIP, NodePort, LoadBalancer, ExternalName.
$ kubectl expose [pod,svc,deploy,rc,rs] $NAME --port=1234 --type=[NodePort,LoadBalancer]
...
$ kubectl run $NAME --image=nginx:1.12 --port=9876
$ kubectl create deployment $NAME --image=nginx
...
$ kubectl expose deployment $NAME                            # Exposes the Service on ClusterIP.
$ kubectl expose deployment $NAME --type=LoadBalancer        # Exposes the Service on external.
$ kubectl expose deployment $NAME --type=NodePort --port=80  # Exposes the Service on Node.
$ kubectl edit ingress $CFG_INGRESS

Taint

$ kubectl taint nodes --all node-role.kubernetes.io/master-
$ kubectl taint nodes --all node.kubernetes.io/not-ready:NoSchedule-   # Remove the not-ready taint
...
$ kubectl taint nodes $NODE_ID key=value:NoExecute
$ kubectl taint nodes $NODE_ID key=value:NoSchedule
$ kubectl taint nodes $NODE_ID key=value:PreferNoSchedule
...
$ kubectl taint nodes $NODE_ID key-
$ kubectl taint nodes $NODE_ID key:NoExecute-
$ kubectl taint nodes $NODE_ID key:NoSchedule-
$ kubectl taint nodes $NODE_ID key:PreferNoSchedule-
...
$ kubectl drain $NODE_ID 
$ kubectl uncordon $NODE_ID
...
$ kubectl describe nodes $NODE_ID | grep -i taint
$ kubectl describe nodes $NODE_ID | grep Taint

Volumes

  • Ceph is also another popular solution for dynamic, persistent volumes.
  • Type: node-local, such as emptyDir or hostPath
  • Type: file-sharing, such as nfs
  • Type: cloud-provider, such as awsElasticBlockStore, azureDisk, or gcePersistentDisk
  • Type: distributed-file, such as glusterfs or cephfs
  • Type: special-purpose, such as secret, gitRepo, PersistentVolume
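A minimal pv.yaml for the create command below, using a node-local hostPath volume (path and size are illustrative; hostPath is only suitable for single-node tests):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-one
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:                # node-local directory backing the volume
    path: /mnt/data
```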
kubectl create -f pv.yaml   # PersistentVolume
kubectl create -f pvc.yaml  # PersistentVolumeClaim
...
kubectl create configmap colors --from-literal=text=black --from-file=./favorite --from-file=./primary/
kubectl get configmap colors -o yaml
...
kubectl exec -it shell-demo -- /bin/bash -c 'echo $ilike'
kubectl exec -it shell-demo -- /bin/bash -c 'env'
kubectl exec -it shell-demo -- /bin/bash -c 'df -ha |grep car'
kubectl exec -it shell-demo -- /bin/bash -c 'cat /etc/cars/car.trim'
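A ConfigMap like colors can be consumed by a Pod either as environment variables or as a mounted volume; a hedged sketch (the Pod name matches shell-demo above, the mount path and env name are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shell-demo
spec:
  containers:
  - name: shell
    image: nginx
    env:
    - name: ilike              # becomes $ilike inside the container
      valueFrom:
        configMapKeyRef:
          name: colors
          key: text            # from --from-literal=text=black
    volumeMounts:
    - name: car-vol
      mountPath: /etc/cars     # each key appears as a file under this path
  volumes:
  - name: car-vol
    configMap:
      name: colors
```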

In Pod

kind: Pod
volumeMounts:
- name: xchange
  mountPath: "/tmp/xchange"
- name: mypod
  mountPath: "/tmp/persistent"
...
volumes:
- name: xchange
  emptyDir: {}
- name: mypd
  persistentVolumeClaim:
    claimName: myclaim

PVC

kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce              # or ReadOnlyMany / ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Secret

$ kubectl create secret generic $NAME --from-file=./file.txt
$ kubectl get secrets
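Secrets are typically consumed as a mounted volume; a minimal sketch (the Pod name and mount path are illustrative, mysecret would be a Secret created as above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
  - name: shell
    image: nginx
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret   # each key becomes a file under this path
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: mysecret
```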

Logging

$ kubectl logs --tail=5 $POD -c $CONTAINER
$ kubectl logs --since=10s $POD -c $CONTAINER
$ kubectl logs -p $POD -c $CONTAINER

API

$ kubectl proxy --port=8080
$ curl http://localhost:8080/api/v1
$ curl http://127.0.0.1:8001/api/v1/namespaces
$ kubectl get --raw=/api/v1
$ kubectl api-versions
...
$ export client=$(grep client-certificate-data ~/.kube/config | cut -d" " -f 6)
$ export key=$(grep client-key-data ~/.kube/config | cut -d" " -f 6)
$ export auth=$(grep certificate-authority-data ~/.kube/config | cut -d" " -f 6)
$ echo $client, $key, $auth
...
$ echo $client | base64 -d - > ./client.pem
$ echo $key    | base64 -d - > ./client-key.pem
$ echo $auth   | base64 -d - > ./ca.pem
...
$ curl --cert ./client.pem --key ./client-key.pem --cacert ./ca.pem https://k8smaster:6443/api/v1/pods
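The *-data fields in ~/.kube/config hold base64-encoded PEM material; a quick round-trip illustrates the decoding step used above (the string 'hello' stands in for the certificate data):

```shell
data=$(printf 'hello' | base64)    # encode, as kubeconfig stores it
printf '%s' "$data" | base64 -d    # decode -> prints: hello
```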

Namespace

$ kubectl config set-context --current --namespace=$NAME
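Namespaces can also be created declaratively; a minimal manifest (the name development is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
```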

YAML

Yaml-Config

kind: Config
preferences: {}
clusters (cluster, name)
users (name, user)
contexts (cluster, namespace, user)
current-context

Yaml-ClusterConfiguration

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.15.1
controlPlaneEndpoint: "k8smaster:6443"
networking:
   podSubnet: 192.168.0.0/16

Yaml-Deployment

kind: Deployment
metadata (name, labels, namespace)
spec (replicas, template)
- template (metadata, spec)
--- spec (containers, volumes, nodeSelector)
---- containers (name, image, imagePullPolicy, ports, env, securityContext, volumeMounts)

Yaml-Pod

kind: Pod
metadata:
  name: podName
  labels:
    env: development
    app: anything
spec:
  containers:
  - name: conName
    image: nginx
    ports:
    - containerPort: 9876
    command:
      - "/bin/bash"
      - "-c"
      - "sleep 10000"    
    resources:
      limits:
        memory: "64Mi"
        cpu: "500m"

Yaml-Service

kind: Service
metadata (name, namespace, labels, selfLink)
spec (clusterIP, ports, selector, type)
...
kind: Service
metadata:
  name: nameService
spec:
  ports:
    - port: 80
      targetPort: 9876
  selector:
    app: sise

Yaml-Route

kind: Route
metadata (name, namespace, labels)
spec (host, to, port, tls)