Wednesday, April 26, 2023

kubectl debug option

Introduction
It is built on ephemeral containers, which became stable in Kubernetes 1.25. The explanation is as follows. In other words, it attaches another container to a container that has no shell inside and provides shell access through it.
Ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed or a container image doesn’t include debugging utilities, such as with distroless images.

You can use the kubectl debug command to add ephemeral containers to a running Pod.
Example
We do it like this. Since the image is distroless there is no shell inside, and we get an error:
# run the container
kubectl run node --image=gcr.io/distroless/nodejs18-debian11:latest --command -- /nodejs/bin/node -e "while(true) { console.log('hello') }"

# try opening a shell to the container
kubectl exec -it node -- sh

# output
OCI runtime exec failed: exec failed: unable to start container process: exec: "sh": executable file not found in $PATH: unknown
command terminated with exit code 126
We do it like this. Here bash is used as the wrapping image:
# --image  : image of the ephemeral container; we want a shell, so bash is used
# --target : name of the container to attach to
# the last argument is the name of the Pod (here the Pod is also called node)
kubectl debug -it --image=bash --target=node node
Example
We do it like this. Here busybox is used as the wrapping image, with the older kubectl alpha debug syntax:
kubectl alpha debug -it podname --image=busybox --target=containername
Example
We do it like this. Here busybox is used again; with --copy-to, a copy of the Pod is created and debugged instead of the original:
kubectl debug pod/myapp-pod -it \
  --image=busybox \
  --copy-to=myapp-debug --container=myapp-container
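The copy created with --copy-to keeps running after the debug session ends. Assuming the names used above, it can be cleaned up like this:
# the debug copy is a regular Pod named myapp-debug; delete it when done
kubectl delete pod myapp-debug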

Monday, April 17, 2023

hostPath Volume for a PersistentVolume - It Is for Single-Node Use; a Pod Can Use a Directory on the Host Machine's Disk

Introduction
We do it like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
For the Pod we do it like this:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: nginx
      volumeMounts:
        - name: my-volume
          mountPath: /data
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: my-pvc
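A rough way to try this out, assuming the manifests above are saved as pv.yaml, pvc.yaml and pod.yaml, is:
kubectl apply -f pv.yaml -f pvc.yaml -f pod.yaml
# the claim should become Bound to my-pv
kubectl get pv,pvc
# anything written under /data in the container lands in /mnt/data on the node
kubectl exec my-pod -- sh -c 'echo hello > /data/test.txt'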

Example
Suppose we have a StorageClass like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete
We do it like this. Here PersistentVolumes are created on top of directories under /storage:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv1
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/storage/data1"

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv2
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/storage/data2"
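A PersistentVolumeClaim that consumes one of these volumes references the same StorageClass; because of WaitForFirstConsumer it stays Pending until a Pod that uses the claim is scheduled. A minimal sketch (the claim name is illustrative):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi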

Friday, April 14, 2023

IP Virtual Server - IPVS

Introduction
The explanation is as follows:
IPVS (IP Virtual Server) is a Linux kernel module that provides network load-balancing capabilities. In Kubernetes, IPVS is used as an alternative to kube-proxy and IPTables for implementing Services.

When a Service is created in Kubernetes and the Service type is set to “LoadBalancer”, IPVS is used to create a virtual IP address (VIP) for the Service. The VIP is used as the target address for client traffic and is associated with a set of pods that provide the actual service.

IPVS works by intercepting incoming traffic to the VIP and distributing it among the available pods using a load-balancing algorithm. There are several load-balancing algorithms available in IPVS, including round-robin, least-connection, and weighted least-connection.

IPVS also provides health checks to ensure that traffic is only sent to healthy pods. When a pod fails a health check, IPVS removes it from the list of available pods and redistributes traffic among the remaining healthy pods.

IPVS has several advantages over kube-proxy and IPTables, including better scalability and performance, and more flexible load-balancing algorithms. IPVS can handle large numbers of connections and is optimized for high throughput and low latency. It also supports more advanced load-balancing features, such as session persistence and connection draining.

However, IPVS requires additional configuration and setup compared to kube-proxy and IPTables, and may not be compatible with all network environments. IPVS also requires kernel support and may not be available on all Linux distributions.
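As a rough sketch of how this is usually enabled: in kube-proxy's configuration (a KubeProxyConfiguration object, often kept in a ConfigMap; the exact location depends on the installer) the mode is set to ipvs, and the resulting virtual servers can be inspected on a node with ipvsadm:
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin; lc (least connection) and others are also available

# on the node, list virtual servers and their backends
ipvsadm -Ln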

Container Networking Interface

Introduction
From the plugins' point of view, the explanation is as follows:
The Container Networking Interface (CNI) is a specification and set of tools for configuring networking in containerized environments, such as those provided by Kubernetes. The goal of CNI is to provide a common standard for network plugins so that container runtimes and orchestration systems can work with any networking solution that supports the CNI API.

CNI defines a standard way for container runtimes, such as Docker or CRI-O, to call networking plugins to configure the network interfaces of containers. The plugins are responsible for creating and configuring network interfaces for the containers, as well as configuring the network namespace and routing tables.

...

The use of CNI provides several benefits in containerized environments. First, it allows for a common standard that can be used by multiple container runtimes and orchestration systems. This means that network plugins can be developed independently of the container runtime or orchestration system, which promotes flexibility and compatibility.

Second, CNI provides a modular and extensible architecture that allows for easy integration with other networking solutions. This enables users to choose the best networking solution for their specific use case and avoid vendor lock-in.

Finally, CNI provides a simple and flexible API for configuring container networking, which makes it easy for developers to create and deploy custom networking solutions tailored to their needs.
The explanation is as follows. In other words, a CNI plugin can also be a standalone binary:
CNI plugins can be either built into the container runtime or provided as standalone binaries. There are many CNI plugins available, each with its own strengths and weaknesses. Some popular CNI plugins include Calico, Flannel, and Weave Net.
For Communication Between Pods
The explanation is as follows. In other words, CNI is needed so that Pods can communicate with each other without NAT:
In Kubernetes, each Pod is assigned a unique IP address and can communicate with other Pods without requiring NAT. To provide networking to Pods, Kubernetes uses Container Network Interface (CNI), a library for configuring network interfaces in Linux containers. The kubelet is responsible for setting up the network for new Pods using the CNI plugin specified in the configuration file located in the /etc/cni/net.d/ directory on the node.
From the plugins' point of view, the explanation is as follows:
In Kubernetes, CNI is used by the kubelet to configure the network interfaces of pods. When a pod is created, the kubelet invokes the CNI plugin to configure the pod’s network. The CNI plugin then creates and configures the network interfaces for the pod, sets up any necessary routing rules, and adds the pod’s IP address to the appropriate network namespace.
Where the CNI Plugin for a Pod Is Defined
The explanation is as follows:
In Kubernetes, the kubelet is responsible for setting up the network for a new Pod using the CNI plugin specified in the network configuration file located in the /etc/cni/net.d/ directory on the node. This configuration file contains necessary parameters to configure the network for the Pod.

The required CNI plugins referenced by the configuration should be installed in the /opt/cni/bin directory, which is the directory used by Kubernetes to store the CNI plugin binaries that manage network connectivity for Pods.

When a pod is created, the kubelet reads the network configuration file and identifies the CNI plugin specified in the file. The kubelet then loads the CNI plugin and invokes its “ADD” command with the Pod’s network configuration parameters. The CNI plugin takes over and creates a network namespace, configures the network interface, and sets up routing and firewall rules based on the configuration parameters provided by the kubelet. The kubelet saves the actual network configuration parameters used by the CNI plugin in a file in the Pod’s network namespace, located in the /var/run/netns/ directory on the node.

Finally, the kubelet notifies the container runtime, such as Docker, that the network is ready for the Pod to start.
The /etc/cni/net.d/ Directory
The network configuration files live in this directory. In other words, the plugins we want a Pod to use are defined here.

Example
We do it like this. Here the bridge CNI plugin is used:
{
    "cniVersion": "0.3.1",
    "name": "mynet",
    "type": "bridge",
    "bridge": "mybridge",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
The explanation of the fields is as follows:
cniVersion: The version of the CNI specification that the configuration file adheres to.
name: A name that uniquely identifies the network configuration.
type: The type of the network plugin to use.
bridge: The name of the bridge device to create.
isGateway: A boolean value that specifies whether the bridge device should be used as the default gateway for containers.
ipMasq: A boolean value that specifies whether to enable IP masquerading for traffic leaving the network.
ipam: The IP address management plugin to use. In this example, it is set to "host-local". This plugin assigns IP addresses to containers based on the network namespace where the container is created.
subnet: The subnet from which to allocate IP addresses.
routes: The routing table entries to add to the container's network namespace.
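To see which CNI configuration and plugin binaries a node actually has, the two default directories mentioned above can simply be listed on the node:
# network configuration files read by the kubelet
ls /etc/cni/net.d/
# CNI plugin binaries (bridge, host-local, loopback, ...)
ls /opt/cni/bin/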

Monday, April 10, 2023

The kubeconfig File

Where the File Is
The explanation is as follows:
The default path for the kubeconfig file is $HOME/.kube/config, but it can be specified using the --kubeconfig flag.
Commands
kubectl config view : Shows the file
kubectl config view --kubeconfig=<path_to_kubeconfig> : Shows the specified file
kubectl config use-context <context_name> : Switches the context

Which Sections It Consists Of
It consists of 3 sections:
1. clusters
2. users
3. contexts

Thanks to this file we can shorten the parameters on the command line.


Example
We do it like this. Here there are 3 clusters: production, development and test.
The production environment can be accessed by prod-user
The development environment can be accessed by dev-user; dev-user also uses namespace2
The test environment can be accessed by test-user
apiVersion: v1
kind: Config
clusters:
- name: production
  cluster:
    server: https://production.example.com
    certificate-authority: /path/to/production/ca.crt
- name: development
  cluster:
    server: https://development.example.com
    certificate-authority: /path/to/development/ca.crt
- name: test
  cluster:
    server: https://test.example.com
    certificate-authority: /path/to/test/ca.crt
contexts:
- name: production
  context:
    cluster: production
    user: prod-user
- name: development
  context:
    cluster: development
    user: dev-user
    namespace: namespace2
- name: test
  context:
    cluster: test
    user: test-user
current-context: production
users:
- name: prod-user
  user:
    client-certificate: /path/to/production/prod-user.crt
    client-key: /path/to/production/prod-user.key
- name: dev-user
  user:
    client-certificate: /path/to/development/dev-user.crt
    client-key: /path/to/development/dev-user.key
- name: test-user
  user:
    client-certificate: /path/to/test/test-user.crt
    client-key: /path/to/test/test-user.key
The explanation is as follows:
The current-context specifies which context should be used by default when kubectl is run.
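With the file above, switching from production to development and checking the result looks like this:
kubectl config use-context development
kubectl config current-context
kubectl config get-contexts    # lists all contexts, the active one is marked with *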

Wednesday, April 5, 2023

Using a ConfigMap as Environment Variables

Introduction
1. With envFrom/configMapRef the Pod loads the entire desired ConfigMap as environment variables.
2. With env/valueFrom/configMapKeyRef the Pod loads a single key of the desired ConfigMap as an environment variable.

1. envFrom
Example
We do it like this. configMapRef is used under the envFrom key:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
data:
  MYSQL_ROOT_HOST: root
  MYSQL_ROOT_PASSWORD: password

#Inject entire configmap to the pod
apiVersion: v1
kind: Pod
metadata:
 labels:
   app: mysql-db
 name: mysql-db
spec:
 containers:
 - image: mysql
   name: mysql-db
   envFrom:
     - configMapRef:
         name: mysql-config
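Once the Pod is running, the injected variables can be checked with kubectl exec, for example:
kubectl exec mysql-db -- env | grep MYSQL_ROOT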
Example
We do it like this. configMapRef is used under the envFrom key. Here the ConfigMap is also mounted as a volume, but in my opinion that is unnecessary:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  DATABASE_URL: jdbc:mysql://localhost:3306/mydb
  DATABASE_USERNAME: myuser
  DATABASE_PASSWORD: mypassword
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - name: my-container
          image: my-image
          envFrom:
            - configMapRef:
                name: my-config
          volumeMounts:
            - name: config-volume
              mountPath: /config
      volumes:
        - name: config-volume
          configMap:
            name: my-config
In the Spring Boot project we do it like this:
spring:
  datasource:
    url: ${DATABASE_URL}
    username: ${DATABASE_USERNAME}
    password: ${DATABASE_PASSWORD}
2. env
If we want to use only some of the keys instead of the whole ConfigMap, we do it like this. Instead of envFrom, configMapKeyRef is used under the env key:
# Inject only the necessary environment varible from a configmap to the pod
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: mysql-db
  name: mysql-db
spec:
  containers:
  - image: mysql
    name: mysql-db
    env:
      - name: MYSQL_ROOT_PASSWORD
        valueFrom:
          configMapKeyRef:
             name: mysql-config
             key: MYSQL_ROOT_PASSWORD
If we want to take some keys from several different ConfigMaps, we do it like this:
# Inject environment varible from multiple configmaps to the pod
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: mysql-db
  name: mysql-db
spec:
  containers:
  - image: mysql
    name: mysql-db
    env:
      - name: MYSQL_ROOT_PASSWORD
        valueFrom:
          configMapKeyRef:
             name: mysql-config
             key: MYSQL_ROOT_PASSWORD
      - name: APP_STATE
        valueFrom:
          configMapKeyRef:
             name: app-config
             key: APP_STATE

Monday, April 3, 2023

Headless Service - A Pod Can Be Reached Directly via a DNS Record, Without Going Through a Load Balancer

Introduction
The explanation is as follows. This way a Pod can be reached directly via a DNS record, without going through a load balancer:
A headless service in Kubernetes can be a useful tool for creating distributed applications. It allows you to directly access the individual pods in a service.
Example
We do it like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  clusterIP: None
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
The explanation is as follows:
To create a headless service in Kubernetes, we need to define a service with the clusterIP field set to "None". 

In this example, we’ve defined a headless service named “my-service”. We’ve set the clusterIP field to "None" to indicate that we want a headless service. We've also specified a selector to associate pods with the service. The ports field specifies the ports that the service will forward traffic to.

Once you’ve created a headless service, you can access each pod associated with the service through DNS. The DNS record for each pod will be in the format <pod-name>.<headless-service-name>.<namespace>.svc.cluster.local.
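A quick way to see these records is to resolve the service name from a throwaway Pod (the Pod name and image tag below are just an illustration); for a headless service the answer contains the individual Pod IPs instead of a single cluster IP:
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup my-service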
Example - StatefulSet
Suppose we have a headless service like this:
apiVersion: v1
kind: Service
metadata:
  name: my-db-service
spec:
  clusterIP: None
  selector:
    app: my-db
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
We do it like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-db-statefulset
spec:
  serviceName: my-db-service
  replicas: 3
  selector:
    matchLabels:
      app: my-db
  template:
    metadata:
      labels:
        app: my-db
    spec:
      containers:
      - name: my-container
        image: my-db-image
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: my-password
  volumeClaimTemplates:
  - metadata:
      name: my-pvc
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
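Because the StatefulSet is bound to the headless service through serviceName, each replica also gets a stable DNS name of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local, for example (assuming the default namespace):
my-db-statefulset-0.my-db-service.default.svc.cluster.local
my-db-statefulset-1.my-db-service.default.svc.cluster.local
my-db-statefulset-2.my-db-service.default.svc.cluster.local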
Example - StatefulSet
We do it like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  serviceName: kafka
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: kafka
    spec:
      hostname: kafka
      containers:
        - name: kafka
          image: <kafka-image>
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: <zookeeper-endpoints>
            - name: KAFKA_ADVERTISED_LISTENERS
              value: PLAINTEXT://$(hostname -f):9092
          ports:
            - containerPort: 9092
              name: kafka
          volumeMounts:
            - name: data
              mountPath: /var/lib/kafka/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: kafka-data
  volumeClaimTemplates:
    - metadata:
        name: kafka-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi

---

apiVersion: v1
kind: Service
metadata:
  name: kafka
spec:
  clusterIP: None
  ports:
    - name: kafka
      port: 9092
      targetPort: 9092
  selector:
    app: kafka


Example - Deployment
We do it like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service-a
spec:
  clusterIP: None
  selector:
    app: my-service-a
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service-a-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service-a
  template:
    metadata:
      labels:
        app: my-service-a
    spec:
      containers:
      - name: my-container
        image: my-service-a-image
        ports:
        - containerPort: 8080
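Even with a Deployment the headless service still publishes one endpoint per ready Pod, which can be listed like this (the IPs shown are illustrative):
kubectl get endpoints my-service-a
# NAME           ENDPOINTS                                           AGE
# my-service-a   10.244.1.12:8080,10.244.2.7:8080,10.244.2.8:8080    5m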

kubectl cluster-info option

Shows cluster information.
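Typical usage; the URLs in the output are of course cluster-specific:
kubectl cluster-info
# Kubernetes control plane is running at https://<api-server>:6443
# CoreDNS is running at https://<api-server>:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

# dump much more detailed state for offline diagnosis
kubectl cluster-info dump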