Monday, January 30, 2023

Kube-Proxy on the Kubernetes Worker - Manages the Network Rules on the Node

1. Kube-Proxy Runs on the Worker Node
The explanation is as follows. Kube-Proxy runs on every worker node.
Kube proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
2. Kube-Proxy Manages the Network Rules on the Worker Node
The explanation is as follows. Kube-Proxy manages the network rules with iptables (or IPVS).
Kube-Proxy listens for changes to Services and then updates the local IPTables or IPVS rules accordingly. This ensures that traffic is correctly routed to the appropriate pods in the cluster.

For example, suppose a Service is created in Kubernetes that maps to a set of pods with the label “app=myapp”. Kube-Proxy will create IPTables or IPVS rules that direct traffic to the appropriate pod based on the Service’s selector.

The explanation is as follows
Who is configuring those iptables rules?

It’s kube-proxy that collects endpoints from the control plane and maps service IP addresses to pod IPs (it also load balances the connections).

Kube-proxy is a DaemonSet that listens to changes to the Kubernetes API.
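To see the rules kube-proxy programs, we can dump the nat table on a worker node. A minimal sketch assuming iptables mode; the KUBE-SERVICES and KUBE-SVC-* chains are what kube-proxy creates, and the grep pattern (myapp, from the example above) is illustrative:
# On a worker node
sudo iptables-save -t nat | grep KUBE-SERVICES | grep myapp   # dispatch rule matching the Service's ClusterIP:port
sudo iptables-save -t nat | grep KUBE-SVC                     # per-Service chains that pick a backend (KUBE-SEP-*) chain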
3. Kube-Proxy Does Load Balancing
The explanation is as follows
One of the tasks for the Kubernetes service is to load balance across these pods. To enable this, every node in a Kubernetes cluster runs a kube-proxy. kube-proxy is responsible for implementing a form of virtual IP for Services .

Kube-Proxy works in three modes — User Space, iptables, and IPVS. Kube-Proxy watches the Kubernetes control plane for the addition and removal of Service and Endpoint objects. It uses either of these modes to choose the backend pod. In userspace mode, it chooses a backend via a round-robin algorithm. In other modes, it's more of a random pick but they provide faster routing as they work in kernel space. You can read more on this here.
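The set of Pods kube-proxy balances across is the Service's Endpoints object. A quick way to compare them (the Service name my-service and the label app=myapp are illustrative):
kubectl get endpoints my-service
kubectl get pods -l app=myapp -o wide   # the Pod IPs should match the endpoint addresses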
Why Can't You Ping a Kubernetes Service?
If we ping a Service's IP address from a shell opened inside a Pod, we see that no reply comes back. The ClusterIP is a virtual IP that exists only as iptables/IPVS rules matching the Service's protocol and port; ICMP packets to it are never translated to a backend Pod, so the ping gets no answer.
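A quick check from a shell inside a Pod (the ClusterIP and port are illustrative):
ping -c 2 10.96.12.34            # ClusterIP of the Service: no ICMP reply
curl http://10.96.12.34:80       # TCP to the Service port is answered by a backend Pod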

4. Installing Kube-Proxy
The explanation is as follows
Kube-Proxy usually runs in your cluster in the form of a DaemonSet. But it can also be installed directly as a Linux process on the node. This depends on your cluster installation type.

If you use kubeadm, it will install Kube-Proxy as a DaemonSet. If you manually install the cluster components using official Linux tarball binaries, it will run directly as a process on the node.
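To check which form it takes in our cluster (a sketch; the kube-system namespace and the k8s-app=kube-proxy label are what kubeadm uses):
kubectl get daemonset kube-proxy -n kube-system
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide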
5. Kube-Proxy Modes
These are as follows
1. IPtables mode
The explanation is as follows
This is the default and most widely used mode today. In this mode Kube-Proxy relies on a Linux feature called IPtables. IPtables works as an internal packet processing and filtering component. It inspects incoming and outgoing traffic to the Linux machine. Then it applies specific rules against packets that match specific criteria.
2. IPVS mode
The explanation is as follows
IPVS is a Linux feature designed specifically for load balancing. This makes it a perfect choice for Kube-Proxy to use. In this mode, Kube-Proxy inserts rules into IPVS instead of IPtables.
...
Despite its advantages, IPVS might not be present in all Linux systems today. In contrast to IPtables which is a core feature of almost every Linux operating system.
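To check whether a node can use IPVS mode and to inspect the rules kube-proxy inserted there (a sketch; ipvsadm usually has to be installed separately):
lsmod | grep ip_vs      # are the IPVS kernel modules loaded?
sudo ipvsadm -Ln        # list virtual services and the backend Pod IPs behind them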
3. KernelSpace mode
The explanation is as follows
This mode is specific to Windows nodes. In this mode Kube-proxy uses Windows Virtual Filtering Platform (VFP) to insert the packet filtering rules. The VFP on Windows works the same as IPtables on Linux, which means that these rules will also be responsible for rewriting the packet encapsulation and replacing the destination IP address with the IP of the backend Pod.

6. Viewing the Kube-Proxy Mode
The explanation is as follows
By default, Kube-proxy runs on port 10249 and exposes a set of endpoints that you can use to query Kube-proxy for information.

You can use the /proxyMode endpoint to check the kube-proxy mode.
We run the following; typical output is iptables or ipvs
curl -v localhost:10249/proxyMode
7. Envoy - An Alternative to Kube-Proxy
The explanation is as follows
In addition to Kube-Proxy, another popular proxy used in Kubernetes is Envoy. Envoy is a high-performance proxy that provides advanced traffic management and load-balancing capabilities. Envoy can be used as a replacement for Kube-Proxy to implement Kubernetes Services or can be used as an independent component to provide advanced traffic management features.

Envoy is used in many production environments and can provide benefits such as advanced load-balancing algorithms, circuit breaking, and distributed tracing.

However, Envoy requires additional setup and configuration compared to Kube-Proxy, and may not be compatible with all network environments. Additionally, Envoy is generally used in more complex scenarios, such as multi-cluster or multi-cloud environments, and may be overkill for simpler use cases.



Friday, January 27, 2023

Using a Secret as a Volume

Introduction
1. The Secret is turned into a volume with volumes
2. The Pod mounts this volume with volumeMounts

Example
We do the following
apiVersion: v1
stringData:
  file.conf: |-
     username=demo
     password=my_plain_password
kind: Secret
metadata:
  name: my-secret
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  template:
    spec:
      containers:
        - ...  # container name, image, etc.
          volumeMounts:
            - name: secret-file
              mountPath: "path/in/the/pod/where/to/mount/the/file"
              subPath: file.conf # mount just this one file
      volumes:
        - name: secret-file
          secret:
            secretName: my-secret # same as the Secret's metadata name
Example
We do the following. Here the secret data is mounted into a volume. Each key/value pair becomes a separate file.
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  username: YWRtaW4=
  password: MTIzNDU2
---
apiVersion: v1
kind: Pod
metadata:
  name: basic-app
spec:
  volumes:
    - name: my-volume-for-secret
      secret:
        secretName: my-secret
  containers:
    - name: basic-app
      image: nginx
      volumeMounts:
        - name: my-volume-for-secret
          mountPath: /etc/my-secret-vol
          readOnly: true
To access the secret data we do the following
> kubectl exec basic-app -- ls /etc/my-secret-vol
password 
username

> kubectl exec basic-app -- cat /etc/my-secret-vol/username
admin

> kubectl exec basic-app -- cat /etc/my-secret-vol/password
123456

Using a ConfigMap as a Volume

Introduction
1. The ConfigMap is turned into a volume with volumes
2. The Pod mounts this volume with volumeMounts/mountPath
3. If volumeMounts/mountPath is a directory, each data entry in the ConfigMap behaves like a separate file

Example - Multiple Files
We do the following. Here the ConfigMap is mounted as a volume. Each key/value entry behaves like a separate file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  env: prod
  welcomeMessage: "Hello, welcome to kubernetes in a nutshell"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  volumes:
    - name: my-volume
      configMap:
        name: my-configmap
  containers:
    - name: basic-app
      image: nginx
      volumeMounts:
        - name: my-volume
          mountPath: /etc/name
To see it we do the following
> kubectl exec my-app -- cat /etc/name/env
prod

> kubectl exec my-app -- cat /etc/name/welcomeMessage
Hello, welcome to kubernetes in a nutshell
Example - Multiple Files from a Directory
Suppose we have a directory like this
root/config-files/
                 |- user-data.txt
                 |- admin-info.txt


# user-data.txt
username: superuser
password: admin123

# admin-info.txt
city: Nobeoka
state: Miyazaki
country: Japan
and let's turn all of the files into a ConfigMap
# ConfigMap from a directory
kubectl create configmap user-config --from-file=/root/config-files
Let's look at the ConfigMap. The output is as follows. Here user-config holds the two files as two data keys
> kubectl get configmap user-config -o yaml

apiVersion: v1
data:
  admin-info.txt: |
    city: Nobeoka
    state: Miyazaki
    country: Japan
  user-data.txt: |
    username: superuser
    password: admin123
kind: ConfigMap
metadata:
  creationTimestamp: "2022-08-07T09:38:22Z"
  name: user-config
  namespace: default
  resourceVersion: "2007"
  uid: 915e805a-cb55-4309-977a-566b7a8ed6ac
To mount it as a volume we do the following
# Pod-definition with configmap mounted as a volume into the pod
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: wordpress
    image: wordpress
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config  # Directory where files will be mounted
  volumes:
    - name: config-volume
      configMap:
        name: user-config
Now the pod's /etc/config directory contains two files named admin-info.txt and user-data.txt. If we want to mount only some of the files instead of all of them, we do the following. Here items specifies the file name (key) in the ConfigMap and the file name it gets inside the Pod
# Pod-definition with configmap. Importing only the necessary files into the pods. 
# e.g: user-data.txt

apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
    - name: wordpress
      image: wordpress
      volumeMounts:
      - name: config-volume
        mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: user-config
        items:
        - key: user-data.txt  # filename on configmap
          path: pod-user-data.txt # filename on pod
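To verify that only the selected file is mounted, and under its new name, we can list the directory. The output below is what we would expect given the manifest above:
> kubectl exec web-server -- ls /etc/config
pod-user-data.txt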
Example - Single File
We do the following. Here there is a single file named file.conf. subPath specifies the file within the volume, and mountPath specifies where the file goes inside the pod. Normally subPath is not needed when mounting the whole volume.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-map
data:
  file.conf: |
     param1=value1
     param2=value2
     paramN=valueN
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  template:
    spec:
      containers:
        - ...  # container name, image, etc.
          volumeMounts:
            - name: config
              mountPath: "path/in/the/pod/where/to/mount/the/file"
              subPath: file.conf
      volumes:
        - name: config
          configMap:
            name: my-config-map
            items:
              - key: "file.conf" # filename in the configmap
                path: "file.conf" # filename in the pod


Tuesday, January 17, 2023

The kubectl config rename-context option

Example
We do the following
# Fetch the Kubernetes cluster credentials
gcloud container clusters get-credentials \
  <<kubernetes_cluster_name_output_from_gke_primary>> \
  --zone=europe-west4 \
  --project=<<your_gcp_project_id>>

gcloud container clusters get-credentials \
  <<kubernetes_cluster_name_output_from_gke_secondary>> \
  --zone=europe-west2 \
  --project=<<your_gcp_project_id>>

# Update kubectl context for simplicity
kubectl config rename-context \
  gke_<<your_gcp_project_id>>_europe-west4_<<kubernetes_cluster_name_output_from_gke_primary>> \
  gke_pri

kubectl config rename-context \
  gke_<<your_gcp_project_id>>_europe-west2_<<kubernetes_cluster_name_output_from_gke_secondary>> \
  gke_sec
And then we can do the following
# 1
kubectl config use-context gke_pri
kubectl apply -f namespace.yaml
kubectl apply -f deployment.yaml

# 2
kubectl config use-context gke_sec
kubectl apply -f namespace.yaml
kubectl apply -f deployment.yaml
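To verify the renamed contexts and see which one is active, we can run:
kubectl config get-contexts
kubectl config current-context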

Sunday, January 15, 2023

Common Pod Errors - CrashLoopBackOff

Introduction
The explanation is as follows
If the container can’t start, then K8s shows the CrashLoopBackOff message as a status.

Usually, a container can’t start when:

1. There’s an error in the application that prevents it from starting.
2. You misconfigured the container.
3. The Liveness probe failed too many times.
You should try and retrieve the logs from that container to investigate why it failed.

If you can’t see the logs because your container is restarting too quickly, you can use the following command:
First, to see whether there is a Pod in CrashLoopBackOff, we do the following. Yes, there is one
kubectl get pods -n <namespace>

NAME                     READY     STATUS             RESTARTS   AGE
nginx-5796d5bc7d-xtl6q   0/1       CrashLoopBackOff   4          1m
If There Were No CrashLoopBackOff
The explanation is as follows. In short, too many resources would be consumed unnecessarily
- In the absence of CrashLoopBackOff, Kubernetes would try to restart the container right after it crashes.
- This could lead to a significant number of restart attempts within a short span of time, thereby putting unnecessary strain on the system.
- The increased failure rate could affect the availability of the application running inside the container.
The commands that can be used to see why a CrashLoopBackOff happened are as follows
1. kubectl describe pod
2. kubectl logs
3. kubectl get events

1. kubectl describe pod
Here we need to look at the Reason field under Last State.
If Reason Is Error
Example - Insufficient Heap
For "Caused by: java.lang.OutOfMemoryError: Java heap space" the output looks like this
Containers:
  heapkiller:
    ....
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
...
Events:
  Type     Reason     Age                From       Message
  ----     ------     ----               ----       -------
...
  Warning  BackOff    7s (x7 over 89s)   kubelet    Back-off restarting failed container
If Reason Is OOMKilled
Example - Kubernetes Killed The Pod
The output looks like this. OOMKilled means Kubernetes killed the Pod because it exceeded its memory limits.
Containers:
  heapkiller:
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
Events:
  Type     Reason     Age                  From       Message
 ----     ------     ----                 ----        ------
...  
...
 Warning  BackOff    6s (x7 over 107s)    kubelet     Back-off restarting failed container
If Status Is Evicted
Example
The output looks like this. A SystemOOM error may also be seen
~ kubectl describe pod/heapkiller

Status:           Failed
Reason:           Evicted
Message:          The node was low on resource: memory.
Containers:
  heapkiller:
    State:          Terminated
      Reason:       ContainerStatusUnknown
      Message:      The container could not be located when the pod was terminated
      Exit Code:    137
      Reason:       OOMKilled
2. kubectl logs
We do the following. With -p (or --previous) the logs of the previous, failed container can be seen. Sometimes we also need to look at a specific container inside the Pod, or limit the output.
kubectl logs <pod name> -n <namespace> -p
kubectl logs <pod name> -n <namespace> --previous
kubectl logs <pod name> -n <namespace> --all-containers
kubectl logs <pod name> -n <namespace> -c mycontainer
kubectl logs <pod name> -n <namespace> --tail 50
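3. kubectl get events
The third command in the list above can be narrowed down to the failing Pod (a sketch; the field selector and sort flag are standard kubectl options):
kubectl get events -n <namespace> --sort-by=.lastTimestamp
kubectl get events -n <namespace> --field-selector involvedObject.name=<pod name>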


Thursday, January 12, 2023

helmify - Turns a YAML File into a Helm Chart

Example
We do the following. The awk script prints the string "---" whenever the per-file line number (FNR) is 1 and it is not the very first file (NR != 1), i.e. between the concatenated YAML files
cat web.yaml | helmify nginx-server

awk 'FNR==1 && NR!=1  {print "---"}{print}' /<my_directory>/*.yaml | helmify mychart
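Once helmify has written the chart directory (mychart above), it can be sanity-checked with Helm itself (a sketch):
helm lint mychart
helm template mychart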


Tuesday, January 10, 2023

Kubernetes kind: Job

Example
Suppose we have a ConfigMap containing the following bash script
apiVersion: v1
kind: ConfigMap
metadata:
  name: slim-shady-configmap
data:
  slim-shady.sh: |
    #!/bin/bash

    echo "Hi!"
    echo "My name is"
    echo "What?"
    echo "My name is"
    echo "Who?"
    echo "My name is"
    echo "Chika-chika"
    echo "Slim Shady"
Let the Job be as follows
apiVersion: batch/v1
kind: Job
metadata:
  name: chicka-chicka-slim-shady
spec:
  template:
    spec:
      containers:
        - name: shady
          image: centos
          command: ["/script/slim-shady.sh"]
          volumeMounts:
            - name: script
              mountPath: "/script"
      volumes:
        - name: script
          configMap:
            name: slim-shady-configmap
            defaultMode: 0500
      restartPolicy: Never
The explanation is as follows
In the above Job manifest you can see that we create a new volume for the configmap. We then take that volume and mount it under the shady container at “/script” and then execute the script file slim-shady.sh which is the name of the file in our configmap.

Two things I would like to point out here:

A) Inside the volume, we must specify a defaultMode of at least 0500 (read+execute to the user). This defines the file permissions inside the container when the pod is run. If we do not set this, we get permission denied errors. This is because as a default the configmap will be mounted with 0644 and you will end up with an error like below.

 Warning  Failed     7s    kubelet            Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/script/slim-shady.sh": permission denied: unknown
B) Here we were also able to use an existing and well known container image instead of creating our own. This takes less effort for us because we did not have to build a new image with new code etc… but instead only needed to deploy the configmap and a Job.
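To try it out, a sketch of applying the manifests and reading the Job's output (the file names are illustrative):
kubectl apply -f slim-shady-configmap.yaml
kubectl apply -f chicka-chicka-slim-shady-job.yaml
kubectl logs job/chicka-chicka-slim-shady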
Example - Python
We do the following
apiVersion: batch/v1
kind: Job
metadata:
  name: promote-model-a
spec:
  template:
    spec:
      containers:
        - name: promote
          image: wbassler/mlflow-utils:0.0.1
          command:
            - python
          args:
            - /mlflow/promote.py
          env: 
            - name: MODEL_NAME
              value: model-a
          volumeMounts:
            - name: promote
              mountPath: "/mlflow"
      volumes:
        - name: promote
          configMap:
            name: promote-model
            defaultMode: 0500
      restartPolicy: Never





Kubernetes kind: AdmissionConfiguration - Pod Security On Cluster Level

Introduction
The PodSecurityPolicy post covers assigning Pod Security at the namespace level
This post covers assigning Pod Security at the cluster level

Example
The explanation is as follows
Suppose, we want to apply a policy that will enforce the “baseline” pod security standards on the whole cluster except the “default” namespace. In addition to that, we want to apply “restricted” pod security standards in warn mode.
We do the following; kube-apiserver must be started with this configuration file (a sketch of how to pass it to kube-apiserver follows the manifest).
#/etc/kubernetes/admission/PodSecurityConfiguration.yaml

apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "baseline"
      enforce-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      # Array of authenticated usernames to exempt.
      usernames: []
      # Array of runtime class names to exempt.
      runtimeClasses: []
      # Array of namespaces to exempt.
      namespaces: ["default"]
Suppose we have the following pod. Because it sets privileged: true it violates the baseline standard, so it can run in the default namespace (which is exempt) but not in any other namespace.
# demo-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: demo-pod
  name: demo-pod
spec:
  containers:
  - image: httpd
    name: httpd-container
    securityContext:
       privileged: true
Suppose we have the following pod. Because it sets runAsUser: 0 it violates only the restricted standard, and since restricted is applied in warn mode it can run in any namespace, just with a warning.
# nginx-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
    securityContext:
      runAsUser: 0
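When this pod is applied to a namespace where restricted is only warned about, kubectl prints a warning similar to the following but still creates the pod (the exact wording varies by Kubernetes version):
> kubectl apply -f nginx-pod.yaml
Warning: would violate PodSecurity "restricted:latest": runAsNonRoot != true, runAsUser=0, ...
pod/nginx-pod created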


Cluster Proportional Autoscaler - Adds/Removes Replicas

Introduction
The explanation is as follows: CPA aims to horizontally scale the number of Pod replicas based on the cluster's scale. A common example is DNS ser...