IT-SDK-Kubernetes-Basics
=Ref.=
* https://kind.sigs.k8s.io/
* https://kubebyexample.com/
* https://kubernetes.io/docs/reference/kubectl/cheatsheet/
* https://github.com/cncf/curriculum
* https://github.com/cncf/curriculum/blob/master/CKA_Curriculum_v1.21.pdf
* https://github.com/zealvora/certified-kubernetes-administrator
* https://github.com/bbachi/CKAD-Practice-Questions
* https://killer.sh/attendee/{YOUR_SESSION_UUID}/content
* https://github.com/digitalocean/kubernetes-sample-apps

=Init=
==Init-Begriffe==
* '''Container''': A container image is a ready-to-run software package, containing everything needed to run an application.
* '''Pod''': Pods are the smallest, most basic deployable objects in Kubernetes.
* '''Service''': A Service allows clients to reliably connect to the containers running in Pods via a stable virtual IP (VIP).
* '''Deployment''': A Deployment ensures that a given number of Pods is running; several Pods may run on a single node.
* '''DaemonSet''': A DaemonSet ensures that all nodes run a copy of a given Pod.
* '''ReplicaSet''': A ReplicaSet ensures that a stable number of Pods is running at any given time.
* '''StatefulSet''': A StatefulSet manages stateful components; each Pod keeps a stable identity and persistent storage.
* '''Endpoint''': Every Pod backing a Service has an endpoint IP; the Service forwards traffic to these endpoints.
* '''Components''': kubelet, kube-proxy, kube-apiserver, kube-controller-manager, kube-scheduler.

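The terms above can be illustrated with a minimal Pod manifest (a sketch; name, labels and image are placeholders):
<pre class="code">
# pod-example.yaml -- minimal Pod sketch (name/image are placeholders)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    ports:
    - containerPort: 80
</pre>
Apply it with <code>kubectl apply -f pod-example.yaml</code>.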
==Init-Notes==
* kubectl (create vs apply): create makes a new resource; apply changes an existing resource (and creates it if it does not exist).
* ReplicationController: A '''Deployment''' that configures a '''ReplicaSet''' is now the recommended way to set up replication.

==Init-Exam-CKA==
* https://kubernetes.io/docs/tasks/
* https://github.com/bbachi/CKAD-Practice-Questions
* https://github.com/dgkanatsios/CKAD-exercises
* https://medium.com/bb-tutorials-and-thoughts/how-to-pass-the-certified-kubernetes-administrator-cka-exam-9e01f1aa93b8

==Init-RoadMap==
<pre class="code">
* Cluster: Installation Master
* Cluster: Installation Worker
* Cluster: Upgrade Master
* Cluster: Upgrade Worker
* Cluster: Backup etcd
* Cluster: Taints, Proxy, PortForward
* Cluster: Fix kubelet
---
* Pod ===>>> CPU/RAM, Environment variables, Arguments, Commands, Labels, nodeSelector, nodeName, Ports
* Pod ===>>> emptyDir, hostPath, NFS, PVC, Secret, ConfigMap
* Deployment && ReplicaSet ===>>> selector:matchLabels, Scale, Rollout
* DaemonSet && StatefulSet
* Services ===>>> ExposePort
* Ingress
* Rollout
* NetworkPolicy
* Job && CronJob
* RBAC && ServiceAccount
</pre>
<pre class="code">
* Job: backoffLimit (number of retries), activeDeadlineSeconds, completions, parallelism
* CronJob: schedule: "*/1 * * * *"
* ConfigMap: env, envFrom, volume
* Namespace: LimitRange
---
* TLS
* Secret
* Ingress
* Labels
* Service: selector, Expose: (ClusterIP, NodePort, LoadBalancer, ExternalName)
---
* Volumes: emptyDir, hostPath, nfs, ConfigMap, Secrets
* PersistentVolume (PV)
* PersistentVolumeClaim (PVC)
* ResourceQuota
---
* Helm
* Service-Mesh
* NetworkPolicy
---
* Role Based Access Control (RBAC): ServiceAccount >>> ClusterRole >>> ClusterRoleBinding
* CustomResourceDefinitions (CRD)
---
* Certificate-data
* Key-data
* Certificate-Authority-data
</pre>

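The Job fields listed above (backoffLimit, activeDeadlineSeconds, completions, parallelism) fit together as in this sketch (name and image are placeholders):
<pre class="code">
# job-example.yaml -- illustrates the Job fields from the list above
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  backoffLimit: 4            # number of retries before the Job is marked failed
  activeDeadlineSeconds: 120 # terminate the Job after 120s regardless of retries
  completions: 3             # run until 3 Pods have finished successfully
  parallelism: 2             # at most 2 Pods run at the same time
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: hello
        image: busybox
        command: ["echo", "Hello World"]
</pre>
A CronJob wraps the same template under <code>spec.jobTemplate</code> and adds <code>schedule: "*/1 * * * *"</code>.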
==Init-DryRun==
<pre class="code">
alias k=kubectl
export KUBE_EDITOR="nano"
export do="--dry-run=client -o yaml"
source <(kubectl completion bash)
complete -F __start_kubectl k
---
kubectl config use-context kubernetes-admin@kubernetes
...
kubectl create deployment nginx --image=nginx --replicas=2 --port=5701
kubectl expose deployment nginx --type=LoadBalancer --port=80
kubectl scale deployment nginx --replicas=4
...
kubectl create job hello --image=busybox -- echo "Hello World"
kubectl create cronjob hello --image=busybox --schedule="*/1 * * * *" -- echo "Hello World"
...
kubectl run nginx --image=nginx -o yaml --dry-run=client > pod-definition.yaml
kubectl create deployment nginx --image=nginx --replicas=3 -o yaml --dry-run=client > deployment-definition.yaml
</pre>
<pre class="code">
/var/lib/docker/containers/xyz ## Where the logs for each Pod are saved in the cluster
k get role --no-headers | wc -l
---
kubectl proxy
kubectl port-forward deployment/kibana 5601
kubectl port-forward deployment/kibana 8080:5601 -n default
---
kubectl set image ds ds-one nginx=nginx:1.21
kubectl describe pod ds-one-z31r4 | grep Image:
---
kubectl rollout restart daemonset/kibana
kubectl rollout restart statefulset/kibana
</pre>
=Infrastructure=
* Min. CPU: 2
* Min. RAM: 1700 MB
==Vagrant==
* Vagrant-File: https://github.com/samerhijazi/collections/blob/main/vagrant_kubernetes.rb
==Ansible==
* Installation with '''Vagrant''': https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/
* File: https://github.com/samerhijazi/collections/blob/main/ansible_kubernetes_vagrant.yaml
==kind==
* https://kind.sigs.k8s.io/docs/user/quick-start/#installation
<pre class="code">
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
------------------------------------------------
kind get clusters
kind delete cluster --name k8s-master
------------------------------------------------
kind create cluster
kind create cluster --name k8s-master
kind create cluster --config kind-config.yaml
------------------------------------------------
kubectl cluster-info --context kind-kind
</pre>
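A possible <code>kind-config.yaml</code> for the command above (the file itself is not part of this page; this is a sketch of a one-control-plane, two-worker layout):
<pre class="code">
# kind-config.yaml -- example multi-node layout (sketch)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
</pre>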
==k3d==
* https://k3d.io/v5.4.6/#quick-start
<pre class="code">
k3d cluster create mycluster
</pre>
==minikube==
* https://minikube.sigs.k8s.io/docs/
<pre class="code">
minikube start
minikube dashboard
minikube stop                                          # Halt the cluster
minikube config set memory 16384                       # Set the memory limit
minikube addons list                                   # Browse the addons catalog
minikube start -p aged --kubernetes-version=v1.16.1    # Create a second cluster
minikube delete --all                                  # Delete all minikube clusters
</pre>

=Installation=
==k8s-master==
<pre class="code">
swapoff -a
sudo apt update
sudo apt install docker.io
---------------------------------------------------------
sudo sh -c "curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -"
sudo sh -c "echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' >> /etc/apt/sources.list.d/kubernetes.list"
sudo apt update
---------------------------------------------------------
sudo apt install kubeadm=1.20.1-00 kubelet=1.20.1-00 kubectl=1.20.1-00
sudo apt-mark hold kubeadm kubelet kubectl
---------------------------------------------------------
sudo sh -c "echo '192.168.178.80 k8s-master' >> /etc/hosts"
</pre>
<pre class="code">
# nano kubeadm-config.yaml
---------------------------------------------------------
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.20.1
controlPlaneEndpoint: "k8s-master:6443"
networking:
  podSubnet: 192.168.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
---------------------------------------------------------
</pre>
<pre class="code">
sudo kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out    ## Variant A: with config file
sudo kubeadm init --control-plane-endpoint="k8s-master:6443" --pod-network-cidr="192.168.0.0/16" --upload-certs | tee kubeadm-init.out    ## Variant B: with flags
---------------------------------------------------------
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
---------------------------------------------------------
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
kubectl get node
</pre>

==k8s-worker==
<pre class="code">
swapoff -a
sudo apt-get update
sudo apt-get install docker.io
----
sudo sh -c "curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -"
sudo sh -c "echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' >> /etc/apt/sources.list.d/kubernetes.list"
----
sudo apt-get update
sudo apt-get install kubeadm=1.20.1-00 kubelet=1.20.1-00 kubectl=1.20.1-00
sudo apt-mark hold kubeadm kubelet kubectl
----
sudo sh -c "echo '192.168.178.80 k8s-master' >> /etc/hosts"
sudo sh -c "echo '192.168.178.81 k8s-worker01' >> /etc/hosts"
----
sudo kubeadm token create --print-join-command
----
sudo kubeadm token list
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
----
kubeadm join k8s-master:6443 \
  --token bmv8x8.xpcw9pg0lzs98cey \
  --discovery-token-ca-cert-hash sha256:7a7cb2068572629ab3461c9c2282e22281915487fb41789477cb5c01aefd3b98
</pre>

==Updating==
<pre class="code">
sudo apt-cache madison kubeadm
-------------------------------------------------------------------------------
sudo apt-mark unhold kubeadm
sudo apt update
sudo apt install kubeadm=1.21.1-00
sudo apt-mark hold kubeadm
-------------------------------------------------------------------------------
sudo kubeadm upgrade plan          # Verify and show the upgrade plan
sudo kubeadm upgrade apply v1.21.1 # Upgrade the master node to the given version
sudo kubeadm upgrade node          # Upgrade a worker node
-------------------------------------------------------------------------------
kubectl drain k8s-master --ignore-daemonsets # Mark the node as unschedulable and evict its pods.
-------------------------------------------------------------------------------
sudo apt-mark unhold kubelet kubectl
sudo apt update
sudo apt install kubelet=1.21.1-00 kubectl=1.21.1-00
sudo apt-mark hold kubelet kubectl
-------------------------------------------------------------------------------
sudo systemctl daemon-reload
sudo systemctl restart kubelet
-------------------------------------------------------------------------------
kubectl uncordon k8s-master # Mark the node as schedulable again.
</pre>

==Backup-etcd==
<pre class="code">
/etc/kubernetes/manifests/etcd.yaml ### etcd manifest
/etc/kubernetes/pki/etcd ### etcd PKI
-------------------------------------------------------------------------------------------------------------------
CACERT=/etc/kubernetes/pki/etcd/ca.crt ### certificate authority
CERT=/etc/kubernetes/pki/etcd/server.crt ### certificate
KEY=/etc/kubernetes/pki/etcd/server.key ### key
-------------------------------------------------------------------------------------------------------------------
kubectl -n kube-system exec -it etcd-k8s-master -- sh -c "xxx"
-------------------------------------------------------------------------------------------------------------------
ETCDCTL_API=3 etcdctl endpoint health --endpoints=https://127.0.0.1:2379 --cacert=$CACERT --cert=$CERT --key=$KEY
ETCDCTL_API=3 etcdctl member list -w table --endpoints=https://127.0.0.1:2379 --cacert=$CACERT --cert=$CERT --key=$KEY
ETCDCTL_API=3 etcdctl snapshot save $LOCATION --endpoints=https://127.0.0.1:2379 --cacert=$CACERT --cert=$CERT --key=$KEY
ETCDCTL_API=3 etcdctl snapshot status $LOCATION --endpoints=https://127.0.0.1:2379 --cacert=$CACERT --cert=$CERT --key=$KEY
ETCDCTL_API=3 etcdctl snapshot restore $LOCATION --endpoints=https://127.0.0.1:2379 --cacert=$CACERT --cert=$CERT --key=$KEY
</pre>

==Helm==
* https://artifacthub.io/
<pre class="code">
ls $HOME/.cache/helm/repository # Location of the cached chart repositories
---
helm search hub argocd
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm repo list
helm repo remove argo
---
helm upgrade
helm list
helm install argo-cd argo/argo-cd
helm uninstall argo-cd
---
helm repo add elastic https://helm.elastic.co
helm install elasticsearch elastic/elasticsearch
helm install kibana elastic/kibana
---
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install fluentd bitnami/fluentd
helm install apache bitnami/apache
---
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx
---
helm install prometheus-operator stable/prometheus-operator
---
helm repo add hivemq https://hivemq.github.io/helm-charts
helm install hivemq hivemq/hivemq-operator
</pre>

==Metrics-Server==
<pre class="code">
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
----
kubectl top nodes
kubectl top pods -A
</pre>

=Settings=
<pre class="code">
kubectl describe node | grep -i taint
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl -n kube-system describe secret default
</pre>
<pre class="text">
## Client-Admin (kubeadm)
## Client-CTL (kubectl)
--------------------------------------------------------------------------------------------------------------
## Cluster-API (kube-apiserver) >>> The gateway to the cluster.
## Cluster-Controller (kube-controller-manager) >>> Controls the status of the cluster.
## Cluster-Scheduler (kube-scheduler) >>> Decides on which node new Pods should be created.
## Cluster-DNS (CoreDNS) >>> A flexible, extensible DNS server that serves as the Kubernetes cluster DNS.
## Cluster-ETCD (etcd) >>> Stores the settings and state of the cluster.
--------------------------------------------------------------------------------------------------------------
## Worker-Let (kubelet) >>> Installed on every node; responsible for starting the Pods.
## Worker-Proxy (kube-proxy) >>> Installed on every node; responsible for communication (nodes, pods).
</pre>

=manifests=
<pre class="code">
/etc/systemd/system/
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
--------------------------------------------------------------------------------------------------------------
/etc/kubernetes/
/etc/kubernetes/admin.conf
/etc/kubernetes/kubelet.conf
/etc/kubernetes/controller-manager.conf
/etc/kubernetes/scheduler.conf
--------------------------------------------------------------------------------------------------------------
/etc/kubernetes/manifests/
/etc/kubernetes/manifests/etcd.yaml
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
--------------------------------------------------------------------------------------------------------------
/etc/kubernetes/pki/etcd/
/etc/kubernetes/pki/etcd/ca.crt ### File for the certificate authority
/etc/kubernetes/pki/etcd/server.crt ### File for the certificate
/etc/kubernetes/pki/etcd/server.key ### File for the key
--------------------------------------------------------------------------------------------------------------
/var/lib/etcd ### etcd data
</pre>
==openssl==
* https://kubernetes.io/docs/tasks/administer-cluster/certificates/
<pre class="code">
openssl genrsa -out ca.key 2048 ### Generate a 2048-bit key "ca.key"
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt ### Generate a certificate "ca.crt"
openssl x509 -noout -text -in server.crt ### View the certificate
openssl x509 -noout -fingerprint -sha256 -inform pem -in key.crt
</pre>

=Kinds=
* Secrets
* ConfigMap
* PersistentVolume
* PersistentVolumeClaim
=Types=
==Types-Services==
* ref: https://kubernetes.io/docs/concepts/services-networking/service/
* ---
* '''ClusterIP''': Service is reachable only from within the cluster.
* '''NodePort''': Service is reachable from outside the cluster (via a static port on every node).
* '''LoadBalancer''': Service is reachable from outside the cluster (using a cloud provider's load balancer).
* '''ExternalName''': Maps the Service to an external DNS name by returning a CNAME record.
==Types-Ports==
* port
* nodePort
* containerPort
* targetPort

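How the port fields relate: containerPort is set on the Pod, while port, targetPort and nodePort are set on the Service. A sketch (names and numbers are placeholders):
<pre class="code">
# service-ports.yaml -- sketch of port / targetPort / nodePort
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80          # port the Service exposes inside the cluster
    targetPort: 8080  # containerPort of the Pods behind the Service
    nodePort: 30080   # port opened on every node (range 30000-32767)
</pre>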
==Types-UpdateStrategy==
* RollingUpdate
* OnDelete

=Components=
==Fix-Kubelet==
<pre class="code">
whereis kubelet
/etc/kubernetes/kubelet.conf
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
</pre>

==Fix-Scheduler==
<pre class="code">
mv /etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes/
</pre>

==Fix-Proxy==
<pre class="code">
</pre>
==Fix-CIDR-Range==
<pre class="code">
### Change the IP range for kube-apiserver && kube-controller-manager
service-cluster-ip-range=11.96.0.0/12
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-controller-manager.yaml
</pre>

=Resources=
* https://kubernetes.io/docs/reference/kubectl/overview/#resource-types
==Cluster==
* '''dns''' >> '''etcd''' >> '''kube-proxy''' >> '''kubelet''' >> '''kube-apiserver''' >> '''kube-scheduler''' >> '''kube-controller'''
* '''Clusters''' >> '''Users''' >> '''Contexts''' (user && cluster)
<pre class="code">
kubectl cluster-info
----
kubectl config view
kubectl config current-context
kubectl config use-context $CONTEXT
----
kubectl config get-users
kubectl config get-clusters
kubectl config get-contexts
----
kubectl config set-credentials $USER --client-certificate=file.crt --client-key=file.key
kubectl config set-cluster $NAME_CLUSTER --server=$SERVER
kubectl config set-context $CONTEXT --cluster=$NAME_CLUSTER --namespace=$NAME_SPACE --user=$USER
kubectl config set-context --current --namespace=samer
----
kubectl proxy --port=8080
kubectl get --raw /api/v1
kubectl api-versions
kubectl api-resources
kubectl get pods --context=k8s-studing
kubectl get ep,ns,no,pvc,pv,svc,deploy,rs
----
kubectl port-forward pods/NAME 7000:6379
kubectl port-forward deployment/NAME 7000:6379
kubectl port-forward replicaset/NAME 7000:6379
kubectl port-forward service/NAME_DEPLOYMENT 7000:6379
</pre>

==RBAC (Role Based Access Control)==
* Ref: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
* ServiceAccount, Role, RoleBinding
* Verbs: ["get", "list", "watch", "create", "update", "patch", "delete", "bind"]
* Resources: ["services", "endpoints", "pods", "secrets", "configmaps"]
<pre class="code">
k create serviceaccount $NAME_SERVICEACCOUNT
k create role $NAME_ROLE --verb=create,get --resource=pods,svc
k create rolebinding $NAME_ROLEBINDING --role=$NAME_ROLE --serviceaccount=$NAMESPACE:$NAME_SERVICEACCOUNT
k auth can-i $VERB $TYPE
</pre>

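The same ServiceAccount >>> Role >>> RoleBinding chain as YAML (a sketch; names and namespace are placeholders):
<pre class="code">
# rbac-example.yaml -- ServiceAccount >>> Role >>> RoleBinding (sketch)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: app-sa
  namespace: default
</pre>
For cluster-wide rights, use ClusterRole/ClusterRoleBinding with the same shape.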
==Logging==
* Logging-Tools: Prometheus, metrics-server
<pre class="code">
/var/log/kube-apiserver.log ## Logs for the API server.
/var/log/kube-scheduler.log ## Logs for scheduling.
/var/log/kube-controller-manager.log ## Logs for the replication controllers.
/var/log/kubelet.log ## Logs for containers running on the node.
/var/log/kube-proxy.log ## Logs for load balancing.
/var/log/containers ## Logs for containers.
/var/log/pods/ ## Logs for Pods.
</pre>
<pre class="code">
kubectl logs pod-nginx
kubectl top pod pod-nginx
kubectl get events
</pre>
<pre class="code">
kubectl get serviceaccounts
kubectl create clusterrolebinding ***
kubectl describe secrets ***
</pre>

==Node==
<pre class="code">
</pre>
==Pods==
<pre class="code">
k run name01 --image=nginx --requests "cpu=10m,memory=20Mi"
k run name02 --image=nginx --restart=Never -it --rm -- sh
k expose pod name-pod --name name-service --type=NodePort --port 80
---
k exec name-pod -c name-container -- env
</pre>

==Deployment==
* Deployment: deploy
* StatefulSet: sts
<pre class="code">
k create deploy nginx2 --image=nginx --dry-run=client -o yaml
k scale deploy nginx1 --replicas=5 --record
</pre>

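A minimal Deployment manifest showing the selector:matchLabels link between the Deployment and its Pod template (a sketch; names are placeholders):
<pre class="code">
# deployment-example.yaml -- selector.matchLabels must match the template labels
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1    # must match...
  template:
    metadata:
      labels:
        app: nginx1  # ...the labels on the Pod template
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
</pre>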
==exec==
<pre class="code">
k exec $NAME_POD -it -- /bin/bash
k exec $NAME_POD -it -c $NAME_CONTAINER -- /bin/bash
</pre>

==Labels==
* deploy.spec.selector.matchLabels
* pod.spec.nodeSelector
<pre class="code">
k label pod nginx1 stage=dev
k label pod nginx1 stage-
---
k get pods -l stage=dev --show-labels
k get pods -L app ## Show a column "APP" with the value of the "app" label
---
k delete pods -l stage=dev
</pre>

==Scheduler==
<pre class="code">
kubectl -n kube-system get pod | grep schedule
cd /etc/kubernetes/manifests/
mv kube-scheduler.yaml ..
</pre>

==Taint/Schedule==
<pre class="code">
NoExecute
NoSchedule
PreferNoSchedule
---
kubectl taint nodes node1 key1=value1:NoSchedule
kubectl taint nodes node1 key1=value1:NoSchedule-
---
kubectl describe nodes | grep -i taint
Taints: node-role.kubernetes.io/master:NoSchedule
kubectl taint nodes --all node-role.kubernetes.io/master-
</pre>
<pre class="code">
## Taint/Schedule (prevents scheduling on that node)
## Cordon/Uncordon (stop scheduling on that node)
## Drain (remove existing pods and reschedule them on other nodes)
</pre>

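A Pod can still be scheduled onto a tainted node if it carries a matching toleration. A sketch, using the key1=value1:NoSchedule taint from the commands above (Pod name and image are placeholders):
<pre class="code">
# pod-toleration.yaml -- tolerates the taint key1=value1:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx
</pre>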
==Ports==
<pre class="code">
spec.containers.ports.containerPort
</pre>

==Rollout==
* Deployments, DaemonSets, StatefulSets
<pre class="code">
kubectl rollout history ds ds-one
kubectl rollout history ds ds-one --revision=1
kubectl rollout undo ds ds-one --to-revision=1
</pre>
==Ingress==
<pre class="code">
rules.host: ***
rules.http.paths.path: ***
rules.http.paths.backend.service.name: ***
rules.http.paths.backend.service.port.name: ***
---------------------------------------------------
kubectl create ing ingress05 \
  --rule="foo.com/bar*=svc1:8080" \
  --rule="foo.com/api=svc2:http" \
  --rule="/path=svc:port" \
  --rule="foo.com/=svc:https,tls" \
  --rule="foo.com/*=svc:https,tls"
</pre>

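The rules.* field paths above map onto a manifest like this (a sketch; host, service name and port are placeholders):
<pre class="code">
# ingress-example.yaml -- sketch matching the field paths above
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress05
spec:
  rules:
  - host: foo.com
    http:
      paths:
      - path: /bar
        pathType: Prefix
        backend:
          service:
            name: svc1
            port:
              number: 8080
</pre>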
==Probes==
* ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
<pre class="code">
* '''Probe''': describes a health check to be performed against a '''container''' to determine whether it is alive or ready to receive traffic.
* '''Liveness''': to know when to restart a container.
* '''Readiness''': to know when a container is ready to start accepting traffic.
* '''Startup''': to know when a container application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds.
</pre>
<pre class="code">
* '''initialDelaySeconds''': wait x seconds before performing the first probe.
* '''periodSeconds''': perform the probe every x seconds.
* '''timeoutSeconds''': number of seconds after which the probe times out.
* '''successThreshold''': minimum consecutive successes for the probe to be considered successful after having failed (default: 1).
* '''failureThreshold''': number of consecutive failures before giving up (default: 3). For a liveness probe, giving up means restarting the container.
</pre>
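The probe parameters above, placed in a container spec (a sketch; the /healthz path, name and image are placeholders):
<pre class="code">
# pod-probes.yaml -- liveness + readiness sketch
apiVersion: v1
kind: Pod
metadata:
  name: probed-pod
spec:
  containers:
  - name: app
    image: nginx
    livenessProbe:            # restart the container when this fails
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3
    readinessProbe:           # remove the Pod from Service endpoints when this fails
      httpGet:
        path: /healthz
        port: 80
      periodSeconds: 5
</pre>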
==Services==
<pre class="code">
kubectl create deployment nginx --image=nginx --replicas=2 --port=5701
kubectl expose deployment nginx --type=LoadBalancer --port=80
kubectl expose pod nginx --port=80 --target-port=9376
</pre>

==Namespace==
<pre class="code">
kubectl get pods --all-namespaces
kubectl get pods -n development
kubectl config set-context --current --namespace=samer
----
kubectl api-resources --namespaced -o name
</pre>
Latest revision as of 10:33, 5 December 2022
Contents
Ref.
- https://kind.sigs.k8s.io/
- https://kubebyexample.com/
- https://kubernetes.io/docs/reference/kubectl/cheatsheet/
- https://github.com/cncf/curriculum
- https://github.com/cncf/curriculum/blob/master/CKA_Curriculum_v1.21.pdf
- https://github.com/zealvora/certified-kubernetes-administrator
- https://github.com/bbachi/CKAD-Practice-Questions
- https://killer.sh/attendee/{YOUR_SESSION_UUID}/content
- https://github.com/digitalocean/kubernetes-sample-apps
Init
Init-Begriffe
- Container: A container image is a ready-to-run software package, containing everything needed to run an application.
- Pod: Pods are the smallest, most basic deployable objects in Kubernetes.
- Service: a service allows clients to reliably connect to the containers running in the pod using the VIP.
- Deployment: A Deployment ensures that a particular number of pods are created in general. Several pods could be on a single node.
- DaemonSet: A DaemonSet ensures that all Nodes run a copy of a current Pod.
- ReplicaSet: A ReplicaSet ensures that a stable number of pods running at any given time.
- StatefulSet: Stateful-Components that saves its state in DB.
- Endpoint: every pod has an Endpoint-IP. The Service to pods calls the Endpoint to that service.
- Components: kube-let, kube-proxy, kube-apiserver, kube-controller-manager, kube-scheduler.
Init-Notes
- kubectl (create vs apply): Creates a new resource. Apply chnages on an exists resource.
- ReplicationController: A Deployment that configures a ReplicaSet is now the recommended way to set up replication.
Init-Exam-CKA
- https://kubernetes.io/docs/tasks/
- https://github.com/bbachi/CKAD-Practice-Questions
- https://github.com/dgkanatsios/CKAD-exercises
- https://medium.com/bb-tutorials-and-thoughts/how-to-pass-the-certified-kubernetes-administrator-cka-exam-9e01f1aa93b8
Init-RoadMap
* Cluster: Installation Master * Cluster: Installation Worker * Cluster: Upgrade Master * Cluster: Upgrade Worker * Cluster: Backup etcd * Cluster: Taints, Proxy, PortForward * Cluster: Fix kubelet --- * Pod ===>>> CPU/RAM, Enviruments, Arguments, Commands, Labels, nodeSelector, nodeName, Ports * Pod ===>>> emptyDir, hostPath, NTF, PVC, Secret, ConfigMap * Deployment && ReplicaSet ===>>> selector:matchLabels, Scale, Rollout * DaemonSet && StatefulSet * Services ===>>> ExposePort * Ingress * Rollout * NetworkPolicy * Job && ConJob * RBAC && ServiceAccount
* Job: backoffLimit (number of retries), activeDeadlineSeconds, completions, parallelism * ConJob: schedule: "*/1 * * * *" * ConfigMap: env, envFrom, volume * Namespace: LimitRange --- * TLS * Secret * Ingress * Labels * Service: selector, Expose: (ClusterIP, NodePort, LoadBalancer, ExternalName) --- * Volumes: emptyDir, hostPath, nfs, ConfigMap, Sercrets * PersistentVolume (PV) * PersistentVolumeClaim(PVC) * ResourceQuota --- * Helm * Service-Mesh * NetworkPolicy: --- * Role Based Access Control (RBAC): ServiceAccount >>> ClusterRole >>> ClusterRoleBinding * CustomResourceDefinitions (CRD): --- * Certificate-data * Key-data * Certificate-Authority-data
Init-DryRun
alias k=kubectl
export KUBE_EDITOR="nano"
export do="--dry-run=client -o yaml"
source <(kubectl completion bash)
complete -F __start_kubectl k
---
kubectl config use-context kubernetes-admin@kubernetes
...
kubectl create deployment nginx --image=nginx --replicas=2 --port=5701
kubectl expose deployment nginx --type=LoadBalancer --port=80
kubectl scale deployment nginx --replicas=4
...
kubectl create job hello --image=busybox -- echo "Hello World"
kubectl create cronjob hello --image=busybox --schedule="*/1 * * * *" -- echo "Hello World"
...
kubectl run nginx --image=nginx -o yaml --dry-run=client > pod-definition.yaml
kubectl create deployment nginx --image=nginx --replicas=3 -o yaml --dry-run=client > deployment-definition.yaml
/var/lib/docker/containers/xyz   ## Where logs for each pod are saved in the cluster
k get role --no-headers | wc -l
---
kubectl proxy
kubectl port-forward deployment/kibana 5601
kubectl port-forward deployment/kibana 8080:5601 -n default
---
kubectl set image ds ds-one nginx=nginx:1.21
kubectl describe pod ds-one-z31r4 | grep Image:
---
kubectl rollout restart daemonset/kibana
kubectl rollout restart statefulset/kibana
Infrastructure
- Minimum CPUs: 2
- Minimum RAM: 1700 MB
Vagrant
Ansible
- Installation with Vagrant: https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/
- File: https://github.com/samerhijazi/collections/blob/main/ansible_kubernetes_vagrant.yaml
kind
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
------------------------------------------------
kind get clusters
kind delete cluster --name k8s-master
------------------------------------------------
kind create cluster
kind create cluster --name k8s-master
kind create cluster --config kind-config.yaml
------------------------------------------------
kubectl cluster-info --context kind-kind
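A minimal kind-config.yaml for a multi-node cluster could look like this (the node counts are illustrative):

```yaml
# kind-config.yaml -- one control-plane node and two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Pass it via `kind create cluster --config kind-config.yaml`.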
k3d
k3d cluster create mycluster
minikube
minikube start
minikube dashboard
minikube stop                                          # Halt the cluster
minikube config set memory 16384                       # Set memory limit
minikube addons list                                   # Browse the catalog
minikube start -p aged --kubernetes-version=v1.16.1    # Create a second cluster
minikube delete --all                                  # Delete all minikube clusters
Installation
k8s-master
swapoff -a
sudo apt update
sudo apt install docker.io
---------------------------------------------------------
sudo sh -c "curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -"
sudo sh -c "echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' >> /etc/apt/sources.list.d/kubernetes.list"
sudo apt update
---------------------------------------------------------
sudo apt install kubeadm=1.20.1-00 kubelet=1.20.1-00 kubectl=1.20.1-00
sudo apt-mark hold kubeadm kubelet kubectl
---------------------------------------------------------
sudo sh -c "echo '192.168.178.80 k8s-master' >> /etc/hosts"
# nano kubeadm-config.yaml
---------------------------------------------------------
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.20.1
controlPlaneEndpoint: "k8s-master:6443"
networking:
  podSubnet: 192.168.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
---------------------------------------------------------
sudo kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
sudo kubeadm init --control-plane-endpoint="k8s-master:6443" --pod-network-cidr="192.168.0.0/16" --upload-certs | tee kubeadm-init.out
---------------------------------------------------------
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
---------------------------------------------------------
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
kubectl get node
k8s-worker
swapoff -a
sudo apt-get update
sudo apt-get install docker.io
----
sudo sh -c "curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -"
sudo sh -c "echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' >> /etc/apt/sources.list.d/kubernetes.list"
----
sudo apt-get update
sudo apt-get install kubeadm=1.20.1-00 kubelet=1.20.1-00 kubectl=1.20.1-00
sudo apt-mark hold kubeadm kubelet kubectl
----
sudo sh -c "echo '192.168.178.80 k8s-master' >> /etc/hosts"
sudo sh -c "echo '192.168.178.81 k8s-worker01' >> /etc/hosts"
----
sudo kubeadm token create --print-join-command
----
sudo kubeadm token list
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
----
kubeadm join k8s-master:6443 \
  --token bmv8x8.xpcw9pg0lzs98cey \
  --discovery-token-ca-cert-hash sha256:7a7cb2068572629ab3461c9c2282e22281915487fb41789477cb5c01aefd3b98
Updating
sudo apt-cache madison kubeadm
-------------------------------------------------------------------------------
sudo apt-mark unhold kubeadm
sudo apt update
sudo apt install kubeadm=1.21.1-00
sudo apt-mark hold kubeadm
-------------------------------------------------------------------------------
sudo kubeadm upgrade plan            # Verify and show the upgrade plan
sudo kubeadm upgrade apply v1.21.1   # Upgrade "Master-Node" with version
sudo kubeadm upgrade node            # Upgrade "Worker-Node"
-------------------------------------------------------------------------------
kubectl drain k8s-master --ignore-daemonsets   # Mark the node as unschedulable.
-------------------------------------------------------------------------------
sudo apt-mark unhold kubelet kubectl
sudo apt update
sudo apt install kubelet=1.21.1-00 kubectl=1.21.1-00
sudo apt-mark hold kubelet kubectl
-------------------------------------------------------------------------------
sudo systemctl daemon-reload
sudo systemctl restart kubelet
-------------------------------------------------------------------------------
kubectl uncordon k8s-master   # Mark the node as schedulable.
Backup-etcd
/etc/kubernetes/manifests/etcd.yaml   ### etcd manifest
/etc/kubernetes/pki/etcd              ### etcd PKI
-------------------------------------------------------------------------------------------------------------------
CACERT=/etc/kubernetes/pki/etcd/ca.crt     ### certificate authority
CERT=/etc/kubernetes/pki/etcd/server.crt   ### certificate
KEY=/etc/kubernetes/pki/etcd/server.key    ### key
-------------------------------------------------------------------------------------------------------------------
kubectl -n kube-system exec -it etcd-k8s-master -- sh -c "xxx"
-------------------------------------------------------------------------------------------------------------------
ETCDCTL_API=3 etcdctl endpoint health --endpoints=https://127.0.0.1:2379 --cacert=$CACERT --cert=$CERT --key=$KEY
ETCDCTL_API=3 etcdctl member list -w table --endpoints=https://127.0.0.1:2379 --cacert=$CACERT --cert=$CERT --key=$KEY
ETCDCTL_API=3 etcdctl snapshot save $LOCATION --endpoints=https://127.0.0.1:2379 --cacert=$CACERT --cert=$CERT --key=$KEY
ETCDCTL_API=3 etcdctl snapshot status $LOCATION --endpoints=https://127.0.0.1:2379 --cacert=$CACERT --cert=$CERT --key=$KEY
ETCDCTL_API=3 etcdctl snapshot restore $LOCATION --endpoints=https://127.0.0.1:2379 --cacert=$CACERT --cert=$CERT --key=$KEY
Helm
ls $HOME/.cache/helm/repository   # Location of chart repos
---
helm search hub argocd
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm repo list
helm repo remove argo
---
helm upgrade
helm list
helm install argo-cd argo/argo-cd
helm uninstall argo-cd
---
helm repo add elastic https://helm.elastic.co
helm install elasticsearch elastic/elasticsearch
helm install kibana elastic/kibana
---
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install fluentd bitnami/fluentd
helm install apache bitnami/apache
---
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx
---
helm install stable/prometheus-operator
---
helm repo add hivemq https://hivemq.github.io/helm-charts
helm install hivemq hivemq/hivemq-operator
Metrics-Server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
----
kubectl top nodes
kubectl top pods -A
Settings
kubectl describe node | grep -i taint
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl -n kube-system describe secret default
## Client-Admin (kubeadm)
## Client-CTL (kubectl)
--------------------------------------------------------------------------------------------------------------
## Cluster-API (kube-apiserver)                  >>> The gateway to the cluster.
## Cluster-Controller (kube-controller-manager)  >>> Controls the state of the cluster.
## Cluster-Scheduler (kube-scheduler)            >>> Decides on which node new pods should be created.
## Cluster-DNS (CoreDNS)                         >>> A flexible, extensible DNS server that serves as the Kubernetes cluster DNS.
## Cluster-ETCD (etcd)                           >>> Stores the configuration and state of the cluster.
--------------------------------------------------------------------------------------------------------------
## Worker-Let (kubelet)      >>> Installed on every node; responsible for starting the pods.
## Worker-Proxy (kube-proxy) >>> Installed on every node; responsible for communication (nodes, pods).
manifests
/etc/systemd/system/
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
--------------------------------------------------------------------------------------------------------------
/etc/kubernetes/
/etc/kubernetes/admin.conf
/etc/kubernetes/kubelet.conf
/etc/kubernetes/controller-manager.conf
/etc/kubernetes/scheduler.conf
--------------------------------------------------------------------------------------------------------------
/etc/kubernetes/manifests/
/etc/kubernetes/manifests/etcd.yaml
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
--------------------------------------------------------------------------------------------------------------
/etc/kubernetes/pki/etcd/
/etc/kubernetes/pki/etcd/ca.crt      ### certificate authority file
/etc/kubernetes/pki/etcd/server.crt  ### certificate file
/etc/kubernetes/pki/etcd/server.key  ### key file
--------------------------------------------------------------------------------------------------------------
/var/lib/etcd   ### etcd data
openssl
openssl genrsa -out ca.key 2048 ### Generate a KEY "ca.key" with 2048bit
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt ### Generate a Certificate "ca.crt"
openssl x509 -noout -text -in server.crt ### View the certificate
openssl x509 -noout -fingerprint -sha256 -inform pem -in key.crt
Kinds
- Secrets
- ConfigMap
- PersistentVolume
- PersistentVolumeClaim
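A minimal PersistentVolume/PersistentVolumeClaim pair might look like the sketch below (names, the hostPath location, and the 1Gi size are illustrative):

```yaml
# A PV backed by a hostPath (for single-node testing only)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data
---
# A PVC that can bind to the PV above (matching size and access mode)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```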
Types
Types-Services
- ref: https://kubernetes.io/docs/concepts/services-networking/service/
- ---
- ClusterIP: Service is reachable only from within the cluster.
- NodePort: Service is reachable from outside the cluster.
- LoadBalancer: Service is reachable from outside the cluster (Using a cloud provider's load balancer).
- ExternalName: maps the Service to an external DNS name; cluster-internal lookups return a CNAME record for that name.
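An ExternalName Service needs no selector or ports; it is pure DNS aliasing. A small sketch (the service name and target host are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-database            # in-cluster name clients use
spec:
  type: ExternalName
  externalName: db.example.com # lookups for my-database return a CNAME to this host
```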
Types-Ports
- port
- nodePort
- containerPort
- targetPort
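How the four port fields above relate can be sketched with a NodePort Service in front of a Pod (all names and port numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80          # port the Service listens on (ClusterIP:80)
      targetPort: 8080  # container port traffic is forwarded to
      nodePort: 30080   # port opened on every node (default range 30000-32767)
---
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web            # must match the Service selector
spec:
  containers:
    - name: app
      image: nginx
      ports:
        - containerPort: 8080  # port the container actually serves on
```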
Types-UpdateStrategy
- RollingUpdate
- OnDelete
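For DaemonSets and StatefulSets the strategy lives under spec.updateStrategy; a fragment might look like this (maxUnavailable value is illustrative):

```yaml
# DaemonSet fragment
spec:
  updateStrategy:
    type: RollingUpdate   # or: OnDelete (pods only update when deleted manually)
    rollingUpdate:
      maxUnavailable: 1   # at most one node's pod is replaced at a time
```

Deployments use the analogous spec.strategy field with types RollingUpdate and Recreate.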
Components
Fix-Kubelet
whereis kubelet
/etc/kubernetes/kubelet.conf
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Fix-Scheduler
mv /etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes/
Fix-Proxy
Fix-CIDR-Range
### Change the IP range for kube-apiserver && kube-controller-manager
service-cluster-ip-range=11.96.0.0/12
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-controller-manager.yaml
Resources
Cluster
- dns >> etcd >> kube-proxy >> kubelet >> kube-apiserver >> kube-scheduler >> kube-controller
- Clusters >> Users >> Contexts (user && cluster)
kubectl cluster-info
----
kubectl config view
kubectl config current-context
kubectl config use-context
----
kubectl config get-users
kubectl config get-clusters
kubectl config get-contexts
----
kubectl config set-credentials $USER --client-certificate=file.crt --client-key=file.key
kubectl config set-cluster $NAME_CLUSTER --server=$SERVER
kubectl config set-context $CONTEXT --cluster=$NAME_CLUSTER --namespace=$NAME_SPACE --user=$USER
kubectl config set-context --current --namespace=samer
----
kubectl proxy --port=8080
kubectl get --raw /api/v1
kubectl api-versions
kubectl api-resources
kubectl get pods --context=k8s-studing
kubectl get ep,ns,no,pvc,pv,svc,deploy,rs
----
kubectl port-forward pods/NAME 7000:6379
kubectl port-forward deployment/NAME 7000:6379
kubectl port-forward replicaset/NAME 7000:6379
kubectl port-forward service/NAME_DEPLOYMENT 7000:6379
RBAC (Role Based Access Control)
- Ref: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
- ServiceAccount, Role, Rolebinding
- Verbs: ["get", "list", "watch", "create", "update", "patch", "delete", "post", "bind"]
- Resources: ["services", "endpoints", "pods", "secrets", "configmaps"]
k create serviceaccount $NAME_SERVICEACCOUNT
k create role $NAME_ROLE --verb=create,get --resource=pods,svc
k create rolebinding $NAME_ROLEBINDING --role=$NAME_ROLE --serviceaccount=$NAMESPACE:$NAME_SERVICEACCOUNT
k auth can-i $VERB $TYPE
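The YAML the commands above generate looks roughly like this sketch (the demo-* names are hypothetical; verbs and resources mirror the lists above):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: demo-role
rules:
  - apiGroups: [""]                      # "" = core API group (pods, services, ...)
    resources: ["pods", "services"]
    verbs: ["get", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: demo-rb
subjects:
  - kind: ServiceAccount
    name: demo-sa
    namespace: default
roleRef:
  kind: Role
  name: demo-role
  apiGroup: rbac.authorization.k8s.io
```

Check the result with `k auth can-i get pods --as=system:serviceaccount:default:demo-sa`.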
Logging
- Logging-Tools: Prometheus, metrics-server
/var/log/kube-apiserver.log            ## Logs for the API server.
/var/log/kube-scheduler.log            ## Logs for scheduling.
/var/log/kube-controller-manager.log   ## Logs for replication controllers.
/var/log/kubelet.log                   ## Logs for containers running on the node.
/var/log/kube-proxy.log                ## Logs for load balancing.
/var/log/containers                    ## Logs for containers.
/var/log/pods/                         ## Logs for pods.
kubectl logs pod-nginx
kubectl top pod pod-nginx
kubectl get events
kubectl get serviceaccounts
kubectl create clusterrolebinding ***
kubectl describe secrets ***
Node
Pods
k run name01 --image=nginx --requests "cpu=10m,memory=20Mi"
k run name02 --image=nginx --restart=Never -it --rm -- sh
k expose pod name-pod --name name-service --type=NodePort --port 80
---
k exec name-pod -c name-container -- env
Deployment
- Deployment: deploy
- StatefulSet: sts
k create deploy nginx2 --image=nginx --dry-run=client -o yaml
k scale deploy nginx1 --replicas=5 --record
exec
k exec $NAME_POD -it -- /bin/bash
k exec $NAME_POD -it -c $NAME_CONTAINER -- /bin/bash
Labels
- deploy.spec.selector.matchLabels
- pod.spec.nodeSelector
k label pod nginx1 stage=dev
k label pod nginx1 stage-
---
k get pods -l stage=dev --show-labels
k get pods -L app            ## Show column "APP" as label
---
k delete pods -l stage=dev
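In a manifest, the two selector fields listed above look roughly like this (the stage/disktype labels are illustrative; the disktype label is assumed to exist on some node):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
  labels:
    stage: dev          # pod label, matched by selectors like -l stage=dev
spec:
  nodeSelector:
    disktype: ssd       # pod is only scheduled on nodes carrying this label
  containers:
    - name: nginx
      image: nginx
```

A Deployment uses the same labels under spec.selector.matchLabels and spec.template.metadata.labels, and the two must agree.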
Scheduler
kubectl -n kube-system get pod | grep schedule cd /etc/kubernetes/manifests/ mv kube-scheduler.yaml ..
Taint/Schedule
NoExecute
NoSchedule
PreferNoSchedule
---
kubectl taint nodes node1 key1=value1:NoSchedule
kubectl taint nodes node1 key1=value1:NoSchedule-
---
kubectl describe nodes | grep -i taint
Taints: node-role.kubernetes.io/master:NoSchedule
kubectl taint nodes --all node-role.kubernetes.io/master-
## Taint/Schedules (prevents scheduling on that node)
## Cordon/Uncordon (stop scheduling on that node)
## Drain (remove existing pods and reschedule them on other nodes)
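A pod can still land on a tainted node if it carries a matching toleration; a fragment tolerating a key1=value1:NoSchedule taint (key and value are the illustrative ones from the taint commands above):

```yaml
# Pod spec fragment
spec:
  tolerations:
    - key: "key1"
      operator: "Equal"   # or "Exists" to match any value of the key
      value: "value1"
      effect: "NoSchedule"
```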
Ports
spec.containers.ports.containerPort:
Rollout
- Deployments, DaemonSets, StatefulSets
kubectl rollout history ds ds-one
kubectl rollout history ds ds-one --revision=1
kubectl rollout undo ds ds-one --to-revision=1
Ingress
rules.host: ***
rules.http.paths.path: ***
rules.http.paths.backend.service.name: ***
rules.http.paths.backend.service.port.name: ***
---------------------------------------------------
kubectl create ing ingress05 \
  --rule="foo.com/bar*=svc1:8080" \
  --rule="foo.com/api=svc2:http" \
  --rule="/path=svc:port" \
  --rule="foo.com/=svc:https,tls" \
  --rule="foo.com/*=svc:https,tls"
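The first --rule above roughly corresponds to this manifest (host, service name, and port taken from that illustrative rule):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress05
spec:
  rules:
    - host: foo.com
      http:
        paths:
          - path: /bar
            pathType: Prefix       # matches /bar and everything under it
            backend:
              service:
                name: svc1
                port:
                  number: 8080
```

An ingress controller (e.g. ingress-nginx, installed via Helm above) must be running for the Ingress to take effect.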
Probes
* '''Probe''': describes a health check to be performed against a '''container''' to determine whether it is alive or ready to receive traffic.
* '''Liveness''': to know when to restart a container.
* '''Readiness''': to know when a container is ready to start accepting traffic.
* '''Startup''': to know when a container application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds.
* '''initialDelaySeconds''': wait x seconds before performing the first probe.
* '''periodSeconds''': perform the probe every x seconds.
* '''timeoutSeconds''': seconds after which the probe times out.
* '''successThreshold''': minimum consecutive successes for the probe to be considered successful after having failed (default: 1).
* '''failureThreshold''': consecutive failures before giving up (default: 3). Giving up in case of a liveness probe means restarting the container.
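Putting the fields above together, a sketch of a pod with liveness and readiness probes (pod name, probe paths, and timings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
    - name: nginx
      image: nginx
      livenessProbe:             # failed -> container is restarted
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
        failureThreshold: 3
      readinessProbe:            # failed -> pod removed from Service endpoints
        tcpSocket:
          port: 80
        initialDelaySeconds: 2
        periodSeconds: 5
```

Besides httpGet and tcpSocket, an exec action (running a command in the container) can also back any of the three probe types.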
Services
kubectl create deployment nginx --image=nginx --replicas=2 --port=5701
kubectl expose deployment nginx --type=LoadBalancer --port=80
kubectl expose pod nginx --port=80 --target-port=9376
Namespace
kubectl get pods --all-namespaces
kubectl get pods -n development
kubectl config set-context --current --namespace=samer
----
kubectl api-resources --namespaced -o name