Friday, July 29, 2022

Kubernetes kind : Pod

kind : Deployment vs kind : Pod
The explanation is as follows. In short, if a Deployment is used and a pod dies, the pod is restarted; if a bare Pod is used, it is not restarted.
The create command can be used to create a pod directly, or it can create a pod or pods through a Deployment. It is highly recommended that you use a Deployment to create your pods. It watches for failed pods and will start up new pods as required to maintain the specified number. If you don’t want a Deployment to monitor your pod (e.g. your pod is writing non-persistent data which won’t survive a restart, or your pod is intended to be very short-lived), you can create a pod directly with the create command.

Note: We recommend using a Deployment to create pods. You should use the instructions below only if you don’t want to create a Deployment.
Example
We do it like this:
apiVersion: v1
kind: Pod
metadata:
  name: ""
  labels:
    name: ""
  namespace: ""
  annotations: []
  generateName: ""
spec:
  ? "// See 'The spec schema' for details."
  : ~
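For comparison, a minimal Deployment sketch that keeps a given number of pods running might look like this (the name, label, and image here are illustrative, not from the original post):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3          # the Deployment replaces failed pods to keep 3 running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx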

ClusterIP Service with Kubernetes kind: Service

Example
For other pods to be able to reach the Kafka pods, ports 9092 and 9093 must be opened. In addition, for the Kafka pod to be able to reach the Zookeeper pod, port 2181 on the Zookeeper pod must also be opened. For the Kafka pods we do it like this:
kind: Service
apiVersion: v1
metadata:
  name: example-kafka
  namespace: kafka-example
  labels:
    app: example-kafka
spec:
  ports:
    - name: external
      protocol: TCP
      port: 9093
      targetPort: 9093
    - name: internal
      protocol: TCP
      port: 9092
      targetPort: 9092
  selector:
    app: example-kafka
  type: ClusterIP
  sessionAffinity: None
For the Zookeeper pod we do it like this:
kind: Service
apiVersion: v1
metadata:
  name: example-zookeeper
  namespace: kafka-example
  labels:
    app: example-zookeeper
spec:
  ports:
    - name: main
      protocol: TCP
      port: 2181
      targetPort: 2181
  selector:
    app: example-zookeeper
  type: ClusterIP
  sessionAffinity: None

Kubernetes kind : Namespace

Introduction
The same thing can also be done with the kubectl create namespace command.

It creates the namespace given by metadata/name.
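A minimal sketch of the imperative equivalent (the namespace name matches the example below; the second line is the confirmation kubectl typically prints):
$ kubectl create namespace kafka-example
namespace/kafka-example created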

Example
We do it like this:
apiVersion: v1
kind: Namespace
metadata:
  name: kafka-example
  labels:
    name: kafka-example
Example
We do it like this:
apiVersion: v1
kind: Namespace
metadata:
  name: development

Monday, July 18, 2022

Kubernetes Worker

What Are the Worker Components?
They are the following:
- Kubelet
- Kube-Proxy
- Pod
- Container Engine
As a diagram:

What Is the Kubelet?
I moved this to the Kubelet post.

What Is Kube-Proxy?
I moved this to the Kube-Proxy post.

What Is the Container Runtime?
The explanation is as follows. In short, it runs the containers inside the pod:
The container runtime is the software that is responsible for running containers.

Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).

Friday, July 15, 2022

ceph Volume

What Is Ceph?
The explanation is as follows. In short, finding where a file lives via the CRUSH hash is fast. With Ceph you can do
1. Block Storage,
2. File Storage
3. Object Storage.
Ceph is an open source, distributed, scaled-out, software-defined storage system that can provide block, object, and file storage. The clusters of Ceph are designed in order to run on any hardware with the help of an algorithm called CRUSH (Controlled Replication Under Scalable Hashing).

The CRUSH algorithm enables the client to independently computes where data should be written to or read from. By deriving this metadata dynamically, there is no need to manage a centralized table.

Servers can perform a CRUSH lookup very quickly; moreover, a smaller computing load can be distributed across cluster nodes, leveraging the power of distributed storage. This allows Ceph can quickly scale to hundreds of petabytes without the risk of bottlenecks and the associated single points of failure.

Ceph is a true unified storage solution that provides block, file, and object services from a single unified software defined backend. Ceph provides three main types of storage:
- Block storage via the RADOS Block Device (RBD)
- File storage via CephFS
- Object storage via RADOS Gateway, which provides S3 and Swift-compatible storage.
1. Ceph Block Storage (RBD): cannot be shared by different pods
2. Ceph File Storage - CephFS: like NFS, it can be shared by different pods.
3. Ceph Object Storage: like AWS S3

History of Ceph
The explanation is as follows. In short, Red Hat acquired it in 2014.
2003–2007: Ceph was developed at University of California by Sage Weil in 2003 as a part of his PhD project. Then it was open sourced in 2006 under a LGPL to serve as a reference implementation and research platform. Lawrence Livermore National Laboratory supported Sage’s early followup work from 2003 to 2007.

2007–2011: DreamHost supported Ceph development from 2007 to 2011. During this period the core components of Ceph gained stability and reliability, new features were implemented, and the road map for the future was drawn.

2012 — Current: In 2012 Sage Weil founded Inktank to enable the widespread adoption of Ceph. In 2014 Red Hat agreed to acquire Inktank
Installation
The explanation is as follows. In a Kubernetes environment it is installed with Rook:
There are several different ways to install Ceph, such as:
Cephadm: Installs and manages a Ceph cluster using containers and systemd, with tight integration with the CLI and dashboard GUI.
Rook: Deploys and manages Ceph clusters running in K8s, while also enabling management of storage resources and provisioning via K8s APIs.
ceph-ansible: Deploys and manages Ceph clusters using Ansible.
ceph-salt: Installs Ceph using Salt and cephadm.
jaas.ai/ceph-mon: Installs Ceph using Juju.
github.com/openstack/puppet-ceph: Installs Ceph via Puppet.
Manual: Ceph can also be installed manually.
Ceph Block Storage - RBD
The explanation is as follows. RBD stands for RADOS Block Device. It can be thought of as a single HDD/SSD disk.
if you are running on bare-metal for production use rook-ceph operator to use ceph RBD as a storage class
1. A CephBlockPool is created
2. A StorageClass is created. "rook-ceph.rbd.csi.ceph.com" or "ceph.com/rbd" is used as the provisioner
3. That StorageClass is used through a PVC. A PVC can be used by a single Pod.

Example - PVC + Using the PVC Inside a Pod
For the PVC we do it like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: rook-ceph-block
The explanation is as follows:
You should set accessModes to ReadWriteOnce when using rbd. ReadWriteMany is supported by cephfs. 
To use it we do it like this:
apiVersion: v1
kind: Pod
metadata:
  name: mariadb-pod
  labels:
    app: mariadb
spec:
  containers:
    - image: mariadb
      name: mariadb
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: qwer1234
      ports:
        - containerPort: 3306
          name: mariadb
      volumeMounts:
        - name: db-vol
          mountPath: /var/lib/mysql
  volumes:
    - name: db-vol
      persistentVolumeClaim:
        claimName: mariadb-pvc
Example - StorageClass + PVC
We do it like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 10.0.1.118:6789, 10.0.1.227:6789, 10.0.1.172:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-kube
  userSecretNamespace: kube-system
  imageFormat: "2"
  imageFeatures: layering
For the PVC we do it like this:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: testclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: fast-rbd
If we look at the PV and PVC we see the following:
# kubectl get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
testclaim  Bound     pvc-c215ad98-95b3-11e9-8b5d-12e154d66096   1Gi        RWO            fast-rbd       2m

# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM             STORAGECLASS   REASON    AGE
pvc-c215ad98-95b3-11e9-8b5d-12e154d66096   1Gi        RWO            Delete           Bound     default/testclaim   fast-rbd                 8m
Example - CephBlockPool + StorageClass + PVC
To create the Ceph RBD we do it like this:
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool2
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 200
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: rook-ceph-block2
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
    # clusterID is the namespace where the rook cluster is running
    clusterID: rook-ceph
    # Ceph pool into which the RBD image shall be created
    pool: replicapool2
# RBD image format. Defaults to "2".
    imageFormat: "2"
# RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
    imageFeatures: layering
# The secrets contain Ceph admin credentials.
    csi.storage.k8s.io/provisioner-secret-name: rook-ceph-csi
    csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
    csi.storage.k8s.io/node-stage-secret-name: rook-ceph-csi
    csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
# Specify the filesystem type of the volume. If not specified, csi-provisioner
    # will set default as `ext4`.
    csi.storage.k8s.io/fstype: xfs
allowVolumeExpansion: false
# Delete the rbd volume when a PVC is deleted
reclaimPolicy: Delete
For the Persistent Volume Claims we do it like this. There are 3 PVCs here:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-datadir-galera-ss-0
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-datadir-galera-ss-1
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-datadir-galera-ss-2
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
2. Ceph File Storage - CephFS
"ceph.com/cephfs" is used as the provisioner. When I did it, the PVC looked like this:
$ kubectl get pvc 
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
adv-vitess-backup  Bound    pvc-417581ac-d314-4be7-a631-32df9ce360fe   2Gi        RWX            rook-cephfs       9m40s

Example
We do it like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
    monitors: 10.0.1.226:6789, 10.0.1.205:6789, 10.0.1.82:6789
    adminId: admin
    adminSecretName: ceph-secret-admin
    adminSecretNamespace: cephfs
    claimRoot: /pvc-volumes
For the PVC we do it like this:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
spec:
  storageClassName: cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
If we look at the PV and PVC we see the following:
# kubectl get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim1    Bound     pvc-a7db18a7-9641-11e9-ab86-12e154d66096   1Gi        RWX            cephfs         2m

# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM            STORAGECLASS   REASON    AGE
pvc-a7db18a7-9641-11e9-ab86-12e154d66096   1Gi        RWX            Delete           Bound     default/claim1   cephfs                   2m

Tuesday, July 12, 2022

local Volume - If the Pod Restarts It Can Find the Volume Again

Introduction
Some notes follow. An example video is here.

local Volume vs hostPath
The explanation is as follows:
Local volumes are not a hostPath. HostPath was invented for forwarding directories and files or sockets directly from the host. Local volumes were invented just for storing data.
Another explanation is as follows:
Because programs running on your cluster aren’t guaranteed to run on a specific node, data can’t be saved to any arbitrary place in the file system. If a program tries to save data to a file for later, but is then relocated onto a new node, the file will no longer be where the program expects it to be. For this reason, the traditional local storage associated to each node is treated as a temporary cache to hold programs, but any data saved locally can not be expected to persist.


To store data permanently, Kubernetes uses Persistent Volumes. While the CPU and RAM resources of all nodes are effectively pooled and managed by the cluster, persistent file storage is not. Instead, local or cloud drives can be attached to the cluster as a Persistent Volume. This can be thought of as plugging an external hard drive in to the cluster. Persistent Volumes provide a file system that can be mounted to the cluster, without being associated with any particular node.
What Is a local Volume?
The explanation is as follows. In short, a local volume lives on a specific worker node:
It remembers which node was used for provisioning the volume, thus making sure that a restarting POD always will find the data storage in the state it had left it before the reboot.
The explanation is as follows. In short, it is not dynamic:
A local volume represents a mounted local storage device such as a disk, partition or directory.

Local volumes can only be used as a statically created PersistentVolume. Dynamic provisioning is not supported yet.

Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume's node constraints by looking at the node affinity on the PersistentVolume.
If the Worker Node Crashes
The explanation is as follows:
If you're using local volumes, and the node crashes, your pod cannot be rescheduled to a different node. It must be scheduled to the same node. That is the caveat of using local storage, your Pod becomes bound forever to one specific node.
Disadvantages
1. Being tied to one specific node
2. Not surviving cluster crashes

Usage
The Pod refers to the PVC through the claimName field. The PVC in turn refers to the PV through the storageClassName field.

Default StorageClasses
The explanation is as follows:
When you request storage, you can specify a StorageClass. If you do not specify a StorageClass, the default StorageClass is used. For example, suppose you create a PersistentVolumeClaim that does not specify a StorageClass. The volume controller will fulfill the claim according to the default StorageClass.
Another explanation is as follows:
If you set up a Kubernetes cluster on GCP, AWS, Azure, or any other cloud platform, a default StorageClass creates for you which uses the standard persistent disk type.
We can see it on AWS. We do it like this. Its name is default:
$ kubectl get storageclass
NAME PROVISIONER AGE
default (default) kubernetes.io/aws-ebs 3d
We can see it on GCP. We do it like this. Its name is standard:
$ kubectl get storageclass
NAME PROVISIONER AGE
standard (default) kubernetes.io/gce-pd 3d
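If you want a particular StorageClass to be treated as the default yourself, the usual way is the is-default-class annotation; a minimal sketch, assuming a GCE PD class named standard (names are illustrative):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # marks this class as the cluster default
provisioner: kubernetes.io/gce-pd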

Example - On a Worker Node
To create the PersistentVolume we do it like this. We specify its location with path:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-local-pv
spec:
  capacity:
    storage: 500Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-local-storage
  local:
    path: /mnt/disk/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
Example - On a Worker Node
We do it like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-specific-node
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: "/mnt/data2"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - alex-k8s-2.novalocal
For the Persistent Volume Claim we do it like this. We refer to the PV's class with storageClassName:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-specific-node
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500M
To use it in a pod we do it like this. We refer to the Persistent Volume Claim with claimName:
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: pvc-specific-node
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
Kubernetes shows us the following:
$ kcl get pv
NAME               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS    REASON   AGE
pv-specific-node   1Gi        RWO            Retain           Bound    default/pvc-specific-node   local-storage            9m28s

$ kcl get pvc
NAME                STATUS   VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS    AGE
pvc-specific-node   Bound    pv-specific-node   1Gi        RWO            local-storage   6m53s



nfs Volume

Introduction
It is worth looking at the Network-Attached Storage (NAS) concept first. The explanation is as follows. In short, a NAS is like a standalone computer with its own processor and operating system:
Have you ever wondered how businesses shared files and data across computers within a network in the 1980s? They relied on network-attached storage(NAS) - a reliable and efficient way to deliver unstructured data to the network-connected devices using an ethernet connection.

With time, different technologies have become prominent, including cloud storage offering cheap online storage. However, NAS still solves the critical business pain point of intuitively sharing files within an organization.
The protocols used are as follows:
NAS box supports various protocols for file formatting transferred across the network. These include:
- Network File System(NFS)
- Server Message Blocks(SMB)
- Apple Filing Protocol(AFP)
- Internetwork Packet Exchange
- Common Internet File System(CIFS)
- NetBIOS Extended User Interface
Usage
1. To use it in a pod, it can be mounted directly under volumes. In that case the IP address of the nfs server must be specified.
2. To use it in a pod, a name is given via a PersistentVolumeClaim

1. Using It Directly as a Volume
Example
We do it like this:
#Pod definiton with nfs share directory
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  containers:
  - image: nginx:latest
    name: nginx-container
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: test-vol
  volumes:
  - name: test-vol
    nfs:
      server: 10.3.97.250           # nfs server ip or dns
      path: /var/local/nfs-share    # nfs share directory
2. Creating the NFS Share as a PersistentVolume
Example
We do it like this:
#PV using NFS-Share directory
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /var/nfs_server/kubernetes_data
    server: 10.25.96.6     #IP or DNS of nfs server
Example
A PersistentVolume is created. We do it like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: any-name
  nfs:
    path: /test
    server: 10.0.0.0
A PersistentVolumeClaim is created. We do it like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: any-name
To use it in the Pod we do it like this:
apiVersion: v1
kind: Pod
metadata:
 name: pod-with-volume
spec:
 containers:
 - image: nginx
   name: pod-with-volume
   volumeMounts:
   - mountPath: /data
     name: my-volume
 volumes:
 - name: my-volume
   persistentVolumeClaim:
     claimName: my-pvc



azure Volume

Introduction
The explanation is as follows:
AWS/AKS/GKE and other cloud providers provide storage to be used by their managed Kubernetes services.
Example
We do it like this:
apiVersion: v1
kind: Pod
metadata:
 name: azure-volume
spec:
 containers:
  - image: httpd
    name: azure-volume
    volumeMounts:
      - name: azure
        mountPath: /mnt/azure
 volumes:
      - name: azure
        azureDisk:
          diskName: my-test.vhd
          diskURI: https://my.blob.com/vhd/my-test.vhd
The explanation is as follows:
In the above specification, 
- diskName : Name of the VHD blob object
- diskURI : URI of the VHD blob

emptyDir Volume - When the Pod Is Deleted This Volume Is Deleted Too

Introduction
The explanation is as follows. An emptyDir volume is ideal for creating temporary files and directories, for example from Java:
This kind of Volume is created when a Pod is scheduled on a node. This volume is for the lifetime of the pod only. It gets deleted as soon as pod is terminated. All containers within the Pod share this volume. The use case for such a volume can be to use as a temporary space for applications internal work or use as a cache for improving the performance of applications. 
1. All containers inside the Pod can access this volume.
2. The directory is empty when the Pod starts
3. When the Pod is deleted, this directory is deleted too
4. If a container crashes, this directory and its contents are not lost. The explanation is as follows:
A container crashing does not remove a Pod from a node. The data in an emptyDir volume is safe across container crashes.
The medium Field
The explanation is as follows:
What type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir
The sizeLimit Field
The explanation is as follows:
Total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: http://kubernetes.io/docs/user-guide/volumes#emptydir
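A minimal sketch combining both fields, assuming a memory-backed scratch volume capped at 128Mi (the pod and mount names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod
spec:
  containers:
  - image: nginx
    name: scratch-pod
    volumeMounts:
    - mountPath: /scratch
      name: scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory      # backed by tmpfs instead of the node's disk
      sizeLimit: 128Mi    # exceeding this leads to pod eviction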

Usage
1. A volume name is given with - name under the volumes section.
2. emptyDir is specified for that volume
3. If desired, the emptyDir can also be kept in memory

Example - memory
We do it like this:
apiVersion: v1
kind: Pod
metadata:
  name: my-server
spec:
  containers:
  - image: nginx
    name: my-server
    volumeMounts:
    - mountPath: /testcache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory
Example - TMP
The explanation is as follows:
If you need to write temporary/cache files, fine, but since you are going to lose everything when that container dies, you shouldn't be writing anything of import within a container. Since you are only going to write temporary files, you really don't need your container to have a writable layer. Just mount a volume at /tmp and run your container with a read-only root file system. 
We do it like this. Here the emptyDir volume named tmp is mounted as the /tmp directory. In addition, the TMPDIR environment variable in the container is pointed at /tmp.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app
      name: app
    spec:
      containers:
      - env:
        - name: TMPDIR
          value: /tmp
        image: my/app:1.0.0
        name: app
        securityContext:
          readOnlyRootFilesystem: true
        volumeMounts:
        - mountPath: /tmp
          name: tmp
      volumes:
      - emptyDir: {}
        name: tmp
Example
We do it like this. Here the volumeMount named "grafana-storage" refers to an emptyDir Volume.
The volumeMount named grafana-datasources refers to a ConfigMap:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: grafana
        ...
        resources:
          ...
        volumeMounts:
          - mountPath: /var/lib/grafana
            name: grafana-storage
          - mountPath: /etc/grafana/provisioning/datasources
            name: grafana-datasources
            readOnly: false
      volumes:
        - name: grafana-storage
          emptyDir: {}
        - name: grafana-datasources
          configMap:
              defaultMode: 420
              name: grafana-datasources

Kubernetes Storage Types

Introduction
The options are as follows:
1. Volumes
2. Persistent Volume
3. Dynamic Volumes

Volumes
The explanation is as follows:
Kubernetes Volume is a directory on a disk backed by some media. This Volume is available to all containers running inside a Pod. The Pod specification mentions what kind of volume is to be provisioned and where to mount it. The kind of Volume is specified by choosing one between many Volume Types that Kubernetes provides. Some of these are:

- AWS EBS
- Azure Disk
- Ceph file system
- emptyDir
- nfs
- secret
Persistent Volume
Provisioned by us

Dynamic Volumes
Created automatically
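A minimal sketch of what dynamic provisioning means in practice, assuming the cluster already has a default StorageClass (the claim name is illustrative): the PVC alone is enough, and the matching PV is created for you.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  # no volumeName and no manually created PV:
  # the provisioner of the (default) StorageClass creates the PV on demand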

Monday, July 4, 2022

Kubernetes API Server - Validates Requests Coming from Outside

Introduction
The API Server runs on the master node. The explanation is as follows:
How can Kubernetes APIs be secured?

Kubernetes API security approaches include:

- Use the correct authorization mode with the API server
- Use API authentication
- Ensure that TLS protects all incoming traffic
- Use authorization-mode=Webhook to make kubeless protect the API
- Use restrictive RBAC role policy on the kube-dashboard
- Remove any default service account permissions
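As one concrete illustration of the restrictive-RBAC point above, a minimal sketch of a read-only Role and its binding (the namespace, role name, and user are illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # read-only, no create/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: default
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io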
The explanation is as follows. Whenever a request arrives from outside, the API server validates it. It is also the only component that interacts with etcd:
The API Server provides APIs to support lifecycle orchestration (scaling, updates, and so on) for different types of applications. It also acts as the gateway to the cluster, so the API server must be accessible by clients from outside the cluster.
1. "kubectl komutu" API Server ile REST çağrısı kullanarak haberleşir. Şeklen şöyle

2. The API Server saves the yaml content it receives to the etcd server. The whole flow is as follows:
1. The user declares what he/she wants and passes that to K8S using the kubectl command. We all know that the API-Server is the only component that can talk to the user, the other master node components and the worker nodes.
2. kubectl interacts with the API-Server and generates a manifest (we can say, a description of what the user wants).
3. This manifest is written to ETCD (the key-value database and single source of truth) by the API-Server.
4. As soon as there is something in ETCD, the controller manager wakes up and responds according to the requirement. For a deployment, the Deployment controller wakes up and checks the requirement. It sees that replicas are needed, so it creates a ReplicaSet (a description of the pods that should run) and goes back to sleep.
5. Now the controller responsible for the ReplicaSet takes its turn and creates 3 replicas of the pod. The pod details get stored in ETCD.
6. The Scheduler wakes up and sees that there are pending pods without any node assigned. It assigns the nodes and goes back to sleep.
7. kubelet (on the worker node) asks the API-Server whether there is anything for it. Now that nodes have been assigned to the pods, kubelet pulls the image, sets up networking and reports back to the API-Server that the pods are running, and the API-Server writes the update to ETCD.



API Server Components
1. HTTP Module
The explanation is as follows:
1. This is nothing more than a regular web server.
2. Once the API receives the requests, it has to make sure that:
 - You have access to the cluster (authentication).
- You can create, delete, list, etc. resources (authorization).
3. This is the part where the RBAC rules are evaluated.
2. Mutation Admission Controller Module
The explanation is as follows:
This component is in charge of looking at your YAML and modifying it.

Does your Pod have an image pull policy?

- If not, the admission controller will add “Always” for you.

Is the resource a Pod?
 - It sets the default Service Account (if none is set).
- Adds a volume with the token.

And more!
3. Validation Admission Controller Module
The explanation is as follows. In short, some logical checks are performed:
Are you trying to deploy more resources than your quota?

The controller will prevent that too.
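The quota that the validation step enforces typically comes from a ResourceQuota object; a minimal sketch (the namespace and limits are illustrative):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: dev
spec:
  hard:
    pods: "10"            # at most 10 pods in this namespace
    requests.cpu: "4"     # total CPU requests capped at 4 cores
    requests.memory: 8Gi  # total memory requests capped at 8Gi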
Kubernetes API Server Extension Points
A hook or extension can be attached at the Mutation Admission Controller and Validation Admission Controller points. As a diagram:
Istio and GateKeeper, as a diagram:
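Such extensions are registered with the API server as admission webhooks. A minimal sketch of a mutating webhook registration, assuming a webhook Service named pod-defaulter in a webhooks namespace (all names are illustrative):
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-defaulter
webhooks:
- name: pod-defaulter.example.com
  clientConfig:
    service:
      name: pod-defaulter        # the Service that fronts the webhook pod
      namespace: webhooks
      path: /mutate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  admissionReviewVersions: ["v1"]
  sideEffects: None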

Metrics API
The explanation is as follows:
You can add your own APIs and register them with Kubernetes.

An excellent example of that is the metrics API server.

The metrics API server registers itself with the API and exposes extra API endpoints.
As a diagram:


The kubectl cluster-info Option

Shows cluster information.
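A minimal usage sketch; the addresses in this output are illustrative, not from the original post:
$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy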