Sunday, March 27, 2022

Kubernetes kind : ClusterRoleBinding

Introduction
A ClusterRoleBinding refers to a ClusterRole object. The explanation is as follows:
Roles and RoleBindings are created in a namespace, and they grant access to resources in the current namespace.

What happens when you want to grant access to global resources, such as Nodes or Persistent Volumes?

It’d be great if there were a way to define a profile as global instead of being scoped to a namespace.

Well, you just invented the ClusterRole — a global role that applies to the entire cluster.

To link an identity to a global role, we use a ClusterRoleBinding.

What happens when you link a “standard” ClusterRole to a RoleBinding?

Is it even possible?

Yes.

The user will have all the permissions from the ClusterRole but scoped in the current namespace of the RoleBinding.
The roleRef Field
- apiGroup is rbac.authorization.k8s.io
- kind is ClusterRole
- name is the name of the role we are referring to
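So a minimal roleRef stanza looks like this (the role name is illustrative):
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-cluster-role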

The subjects Field
Each subject is given a kind (User, Group or ServiceAccount), a name and, for a ServiceAccount, the namespace it lives in.
Example
The explanation is as follows. Here the role is granted to the default service account in the default namespace.
To allow Hazelcast to use the service inside Kubernetes for the discovery, we also need to grant certain permissions. An example of RBAC configuration for default namespace you can find in Hazelcast documentation.
We do it like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hazelcast-cluster-role
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
      - pods
      - nodes
      - services
    verbs:
      - get
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hazelcast-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: hazelcast-cluster-role
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
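As a quick sanity check, assuming the manifests above have been applied, kubectl auth can-i can confirm the grant:
$ kubectl auth can-i list pods --as=system:serviceaccount:default:default
yes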
Example
We do it like this. Here a Helm chart is used. We want to grant rights only to the service account named vitess-operator.

Therefore, the service account named vitess-operator in the project's namespace is given access to the rights defined by the role named vitess-operator.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vitess-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: vitess-operator
subjects:
  - kind: ServiceAccount
    name: vitess-operator
    namespace: {{ .Release.Namespace }}
Example
The explanation is as follows:
For instance, the Kubernetes External DNS project uses a ClusterRole to obtain the permissions it needs to work. External DNS is used to integrate external DNS servers with Kubernetes service discovery. The application needs read-only access to Services and Ingresses in all namespaces; however, it should not be granted any further privileges (like modifying or deleting resources). The ClusterRole for such an account should look as follows:
We do it like this. Here the ServiceAccount named external-dns is granted read-only rights (get, watch and list) on Service and Ingress objects cluster-wide.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default


Kubernetes kind : RoleBinding - Scoped to a Specific Namespace

Introduction
In short, as the name implies, it binds Subjects and Roles together. In other words, a role is assigned to a user.

A RoleBinding object may reference either a ClusterRole or a Role. The explanation is as follows:
A RoleBinding may reference any Role in the same namespace. Alternatively, a RoleBinding can reference a ClusterRole and bind that ClusterRole to the namespace of the RoleBinding.
So the access rights are defined with a ClusterRole or Role, and the RoleBinding specifies which Subject, i.e. who, is granted those rights.
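A minimal sketch with illustrative names: a pod-reader Role granted to the user jane in the dev namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io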

RoleBinding vs ClusterRoleBinding
A RoleBinding is scoped to a specific namespace, whereas a ClusterRoleBinding is not namespaced. The explanation is as follows:
A RoleBinding grants permissions within a specific namespace whereas a ClusterRoleBinding grants that access cluster-wide. 
Another explanation is as follows:
If a ClusterRole is bound with a RoleBinding instead of a ClusterRoleBinding, it'll be only granted the permissions within the namespace that RoleBinding specified.
Example
As a diagram:
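Expressed as a manifest instead, a minimal sketch (names are illustrative; the ClusterRole's permissions apply only inside the dev namespace):
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-secrets
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: secret-reader
subjects:
  - kind: ServiceAccount
    name: default
    namespace: dev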


kubectl describe pods option

1. The Reason Field
If the Pod is not running, check the value of this field. The explanation is as follows:
1: An exceeded pod memory limit causes an OOMKilled termination
2: Node out of memory causes MemoryPressure and pod eviction.
If it is OOMKilled, the explanation is as follows:
There are two main OOMKilled errors you’ll see in Kubernetes:

  1. OOMKilled: Limit Overcommit : too many pods were scheduled onto the node.
  2. OOMKilled: Container Limit Reached : the pod consumed too much memory.
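To pull just this field out of a pod's status, a jsonpath query like the following can be used (the pod name is illustrative):
$ kubectl get pod mypod -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
OOMKilled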
Example
Suppose the output is as follows. Here OOMKilled is seen because the Pod was not given enough memory; 128Mi is not enough.
$ kubectl describe pods vitess-operator-587c464495-524nx -n rlwy03
Containers:
  foo:
    ...
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Mon, 28 Mar 2022 05:16:05 +0000
      Finished:     Mon, 28 Mar 2022 05:16:13 +0000
    Ready:          False
    Restart Count:  127
    Limits:
      memory:  128Mi
    Requests:
      cpu:     100m
      memory:  128Mi
...
Example
Suppose the output is as follows. Here the Kubernetes node's resources are insufficient to run the requested number of pods.
kubectl describe pod mypod-xxxx

...
Reason:         Evicted
Message:        Pod The node had condition: [MemoryPressure].
Example
Suppose the output is as follows. Here a cluster that was stopped with the gcloud command cannot be restarted with start.
State:          Waiting
  Reason:       PodInitializing
Last State:     Terminated
  Reason:       ContainerStatusUnknown
  Message:      The container could not be located when the pod was deleted.  The container used to be Running
  Exit Code:    137
  Started:      ...
  Finished:     ...
Ready:          False
2. The Events Field
Example
Suppose the output is as follows. Here mount errors are seen.
Events:
  Type     Reason          Age                    From             Message
  ----     ------          ----                   ----             -------
  Warning  NodeNotReady    150m                   node-controller  Node is not ready
  Warning  FailedMount     148m (x6 over 148m)    kubelet          MountVolume.MountDevice failed for volume "pvc-e2afd3bb-99f8-4916-81f5-5187d1af0265" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name rook-ceph.cephfs.csi.ceph.com not found in the list of registered CSI drivers
  Warning  FailedMount     148m (x6 over 148m)    kubelet          MountVolume.MountDevice failed for volume "pvc-5d8b5425-e76f-4ee0-8d89-fcbff627d664" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name rook-ceph.cephfs.csi.ceph.com not found in the list of registered CSI drivers
  Warning  FailedMount     147m (x7 over 148m)    kubelet          MountVolume.MountDevice failed for volume "pvc-b7aa1775-2187-4eac-a2c9-f9cc141f9c00" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name rook-ceph.cephfs.csi.ceph.com not found in the list of registered CSI drivers
  Warning  FailedMount     147m (x6 over 148m)    kubelet          MountVolume.MountDevice failed for volume "pvc-bc6a9994-4074-40d9-b52b-4cea36c94628" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name rook-ceph.cephfs.csi.ceph.com not found in the list of registered CSI drivers
  Normal   AddedInterface  142m                   multus           Add eth0 [10.131.2.18/23] from openshift-sdn
  Normal   Started         78m (x7 over 142m)     kubelet          Started container wait-for-oam
  Normal   Pulled          5m39s (x14 over 142m)  kubelet          Container image "gcr.io/..." already present on machine
  Normal   Created         5m39s (x14 over 142m)  kubelet          Created container wait-for-oam


Friday, March 25, 2022

Red Hat OpenShift oc Command

Introduction
It stands for the OpenShift CLI (oc). The explanation is as follows. In other words, the oc command offers more advanced options:
Because OpenShift Container Platform is a certified Kubernetes distribution, you can use the supported kubectl binaries that ship with OpenShift Container Platform, or you can gain extended functionality by using the oc binary.
Another explanation is as follows:
minishift uses the oc binary, so instead of kubectl get pods it runs oc get pods
Frequently used options are as follows:
oc status
oc logs -f <pod_name>
oc get pods 
oc get po
oc get all
oc describe pod <pod_name>
oc get services 
oc delete all -l app=<app_name>
oc delete pod <pod_name>
oc get <resource> <resource_name> -o yaml 
autoscale option
We do it like this:
# autoscale looks up a deployment, replica set, stateful set, or replication controller by
# name and creates an autoscaler that uses the given resource as a reference. An
# autoscaler can automatically increase or decrease number of pods deployed within
# the system as needed.

#Auto scale a deployment, with the number of pods between 2 and 4, target CPU utilization at 80%
oc autoscale deployment <resource_name> --min=2 --max=4 --cpu-percent=80
create configmap option
We do it like this:
# Create a config map file and mount it as a volume to a deployment/deployment config:
oc create configmap <name> --from-file=<file_name> --from-file=<file_name>   # repeat --from-file if there are multiple files

# Mount the config map into the pods via the deployment controller or deployment:
oc set volume deployment/<resource_name> --add --name=<config_map_volume> -m <mount_path> -t configmap --configmap-name=<name> --default-mode='0777'
create secret option
We do it like this:
# Create a secret from the CLI and mount it as a volume to a deployment config:
oc create secret generic <secret_name> --from-literal=username=myuser --from-file=<file_name> 

oc set volume dc/<resource_name> --add --name=<secret_volume> -m <mount_path> -t secret --secret-name=<secret_name> --default-mode='0755' 
get route option
Example
We do it like this:
oc get route default-route -n openshift-image-registry
expose option
Example
We do it like this:
oc expose service/<app_name>
The explanation is as follows:
The expose command creates an OpenShift route, which configures ingress for my “Hello World” all configured with the OpenShift router (an HAProxy Load Balancer by default).
login option
Used to log in to an OpenShift cluster
Example - token
We do it like this:
$ oc login       
You must obtain an API token by visiting https://oauth-openshift.apps.varsha.lab.upshift.rdu2.redhat.com/oauth/token/request
After visiting that address, we do it like this:
# Log in with this token
oc login --token=sha256~vNVfZ0VuFK7SkveM_nGRFTS7nrfawCQnQ9FEPNScv-0 --server=https://api.varsha.lab.upshift.rdu2.redhat.com:6443
Example - username and password
We do it like this:
oc login -u <username> -p <password> <servername>

Example : oc login -u kubeadmin -p asdfghjkliuytr https://upi-o.varsha.lab.upshift.rdu2.redhat.com:6443
new-app option
The explanation is as follows:
On executing this command, OpenShift does the following:
  • Created a build pod to do “stuff” for building the app
  • Created an OpenShift Build config
  • Pulled the builder image into OpenShift’s internal docker registry.
  • Cloned the “Hello World” repo locally
  • Seen that there’s a maven pom, so compiled the application using maven
  • Created a new container image with the compiled java application and pushed this container image into the internal container registry
  • Created a Kubernetes Deployment with pod spec, service spec etc.
  • Kicked off a deploy of the container image.
  • Removed the build pod.
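A typical invocation that triggers this flow looks like this (the repository URL and name are illustrative):
oc new-app https://github.com/<org>/hello-world.git --name=hello-world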
project option
We do it like this:
#Switch to a specific project:
oc project <project_name>

#List existing projects:
oc projects

#Display the currently logged-in project:
oc project
rsh option
To access a pod we do one of the following:
oc rsh <pod_name>
oc exec <mypod> -- <command_to_execute_in_pod>
oc debug <pod_name>
scale option
We do it like this:
oc scale dc <resource_name> --replicas=<count>
whoami option
The explanation is as follows:
$ oc whoami --help
Show information about the current session

 The default options for this command will return the currently authenticated user name or an empty string.  Other flags
support returning the currently used token or the user context.

Usage:
  oc whoami [flags]

Examples:
  # Display the currently authenticated user
  oc whoami

Options:
      --show-console=false: If true, print the current server's web console URL
  -c, --show-context=false: Print the current user context name
      --show-server=false: If true, print the current server's REST API URL
  -t, --show-token=false: Print the token the current session is using. This will return an error if you are using a
different form of authentication.

Use "oc options" for a list of global command-line options (applies to all commands).
Example - --show-console
The explanation is as follows:
The command oc whoami --show-console displays the web console URL via CLI.
We do it like this:
$ oc whoami --show-console
https://console-openshift-console.apps.varsha.lab.upshift.rdu2.redhat.com



Thursday, March 24, 2022

kubectl get nodes - Shows the Cluster Members

Example
We do it like this. Here no nodes have been created yet.
$ kubectl get nodes
No resources found
Example
We do it like this. Here only the master nodes have been created so far.
$ kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   3m32s   v1.22.3
master   Ready    master   3m4s    v1.22.3
master   Ready    master   3m12s   v1.22.3
Example
We do it like this. In the first command one of the masters is apparently not fully Ready; in the second command it is seen to be Ready.
$ kubectl get nodes 
NAME                                    STATUS                     ROLES    AGE   VERSION
master-0.c.product-foo.internal         Ready                      master   27m   ...
master-1.c.product-foo.internal         Ready,SchedulingDisabled   master   27m   ...
master-2.c.product-foo.internal         Ready                      master   27m   ...
worker-a-dbp6x.c.product-foo.internal   Ready                      worker   16m   ...
worker-a-glkzm.c.product-foo.internal   Ready                      worker   16m   ...
worker-a-qlxw7.c.product-foo.internal   Ready                      worker   16m   ...

$ kubectl get nodes
NAME                                    STATUS   ROLES    AGE   VERSION
master-0.c.product-foo.internal         Ready    master   29m   ...
master-1.c.product-foo.internal         Ready    master   30m   ...
master-2.c.product-foo.internal         Ready    master   30m   ...
worker-a-dbp6x.c.product-foo.internal   Ready    worker   18m   ...
worker-a-glkzm.c.product-foo.internal   Ready    worker   18m   ...
worker-a-qlxw7.c.product-foo.internal   Ready    worker   18m   ...

Zero Trust

Introduction
The explanation is as follows:
Zero Trust is a security model that assumes all actors, systems, and services operating in and between networks cannot be trusted.

Kubernetes includes all the necessary hooks to implement Zero Trust to control access to each Kubernetes cluster in your fleet. These hooks fall into four key areas: Authentication, Authorization, Admission Control, and Logging and Auditing ...

Kubernetes kind : StorageClass - For Dynamic Provisioning

What Is a Dynamic Volume?
The explanation is as follows:
... in a managed environment it is highly recommended to have a facility where we can create volumes on-demand. Kubernetes provides Dynamic Volumes which eliminate the need to pre-provision volumes. To enable Dynamic Volumes we have to use a Kubernetes resource known as StorageClass . This object specifies which storage provisioner is to be used and the associated parameters.
Usage
The StorageClass type is used together with PersistentVolume.
- provisioner must always be specified.
- The name given in metadata.name is used as storageClassName in a PersistentVolumeClaim.

Parameters
The explanation is as follows:
Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a Persistent Volume belonging to the class needs to be dynamically provisioned.
Another explanation is as follows:
provisioner: A StorageClass object contains a provisioner , that decides which volume plugin will be used to provision a PV. Kubernetes provides internal and external provisioners. Internal provisioners are also called “In-tree” volume plugins, which means their code is part of the core Kubernetes code and imported with the core Kubernetes binaries. We can also specify and run an external provisioner, which is also defined as Container Storage Interface(CSI).

parameters: Indicate properties of the underlying storage system.

reclaimPolicy: Can be either Delete or Retain. Default is Delete

volumeBindingMode: Can be either Immediate or WaitForFirstConsumer

  Immediate — Immediately provisions PV after PVC is created.
  WaitForFirstConsumer— will delay the provisioning of a PV until a Pod using the PVC is created.
allowVolumeExpansion
The explanation is as follows:
Resizing a Persistent Volume (PV) was very difficult prior to Kubernetes v1.11. It was an entirely manual process that involved a long list of steps, and required the creation of a new volume from a snapshot. You couldn’t just go and modify the PVC object to change the claim size.

Persistent volume expansion feature was promoted to beta in Kubernetes v1.11. This feature allows users to easily resize an existing volume by editing the PersistentVolumeClaim object. Users no longer have to manually interact with the storage backend or delete and recreate PV and PVC objects to increase the size of a volume. Shrinking persistent volumes is not supported though. You can find more information, including a list of volume types supported, here.

Although the feature is enabled by default, a cluster admin has to make the feature available to users by setting the allowVolumeExpansion field to true in their StorageClass object(s). Only PVCs created from a StorageClass with this setting will be allowed to trigger volume expansion.

Any PVC created from this StorageClass can be edited to request more space. Kubernetes will interpret a change to the storage field as a request for more space, and will trigger an automatic volume resizing.
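In practice the resize request is just an edit of the PVC's storage field; a minimal sketch, assuming a PVC named my-pvc bound to a StorageClass with allowVolumeExpansion: true:
kubectl patch pvc my-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'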
Example
We do it like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: gp2
parameters:
  fsType: ext4
  type: gp2
allowVolumeExpansion: true
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: Immediate
minikube Usage
Example
With minikube we do it like this:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: k8s.io/minikube-hostpath
parameters:
  type: pd-ssd
Example
We do it like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete
Google Usage
Example
We do it like this. Here the provisioner is specified; also, since it is an SSD, the name is fast.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
Example
The explanation is as follows:
For example, to use an SSD disk provided by Google Kubernetes Engine, create the following StorageClass:
We create the StorageClass like this. Here the provisioner is specified.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-class
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
Then we create a PersistentVolumeClaim. We do it like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-class
  resources:
    requests:
      storage: 30Gi
AWS EBS Usage
The explanation is as follows:
... for AWS EBS, there are various types of EBS volumes (gp2 , gp3 , io1 , io2 , etc). And also various configurations can be done, for example — enabling encryption, specifying minimum IOPS, and minimum throughput for an EBS volume.
Example
We do it like this:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
Example
We do it like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-standard
provisioner: kubernetes.io/aws-ebs   # Internal-provisioner
parameters:
  type: gp2
reclaimPolicy: Retain
volumeBindingMode: Immediate
Example
We do it like this:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-standard
provisioner: ebs.csi.aws.com
parameters:
  type: gp2
volumeBindingMode: Immediate
reclaimPolicy: Retain
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: io2-encrypted
provisioner: ebs.csi.aws.com   
parameters:
  type: io2
  iopsPerGB: "3000"
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete


Kubernetes kind: StatefulSet - Pods Have a Unique Identity

Introduction
The types of replication sets that exist are:

Replicaset: used for basic replication and by deployments.
Statefulset: used for replication where not all replicas are exactly the same.
Daemonset: used for replication across all nodes, making sure each node in your cluster runs a certain pod.

What Is a StatefulSet?
The explanation is as follows:
StatefulSet - This is an API object to manage stateful applications like databases.
Another explanation is as follows. It is used when there is a master-slave relationship:
The StatefulSet manages the pods whenever it needs to control master-slave behavior.
Its properties are as follows. In other words, unlike a Deployment, a StatefulSet does not use a ReplicaSet to maintain the Pod count. Every StatefulSet Pod's name and DNS address always stay the same.
- all replicas have specific name — {StatefulSet name}-index
- unique persistent storage for each replica
- PVC is auto-created for each replica but is not autodeleted (well, this feature is alpha in Kubernetes 1.23)
- headless service is necessary to create a stable DNS name for each pod
- As opposed to the Deployment, the StatefulSet creates pods directly. Due to this issue automatic rollback in case of failed upgrade is not possible.
- upgrades/terminations are done sequentially from the pod with the biggest index number to the pod with index number 0
StatefulSet vs ReplicaSet 
The explanation is as follows. With a ReplicaSet the Pod name is not fixed, and on a scale-down event any Pod may be deleted:
Now, when Kubernetes started, the only sort of way that you could do replication was using a ReplicaSet. With a replica set, every single replica was treated entirely identically. They have random hashes on the end of their application names. And if a scaling event happens, for example, a scaled-down, a container is chosen at random and deleted. These characteristics make ReplicaSet very hard to map to stateful applications. Many stateful applications expect their hostnames to be constant. So, those complexities of using ReplicaSet and stateful applications led to the eventual development of StatefulSets. A StatefulSets in Kubernetes is similar to a ReplicaSet, but it adds some guarantees that make it easier to manage stateful applications inside of Kubernetes.
It continues as follows. Another difference is that StatefulSet Pods are started in order:
The working of StatefulSets is similar to Deployments, but in StatefulSets the deployment of containers happens in order. Rather than deploying all the containers in one go, it is deployed sequentially one by one. Once the first pod is deployed and gets ready, then only the second pod can start. In order to have the correct reference, these pods have a name with a unique ID which showcases there unique identity. So, for example, if there are 3 pods of MySQL, the names would be mysql-0 , mysql-1 and mysql-2. And, if any of these pod fail, a new pod with the same name will be deployed by StatefulSets.
Headless Service
Unlike a Deployment, a StatefulSet does not use a LoadBalancer. A Deployment looks like this as a diagram; here an LB service is used:

A StatefulSet, on the other hand, looks like this; there is no LoadBalancer:
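A minimal sketch of the headless Service a StatefulSet needs; the name must match the StatefulSet's serviceName, and mysql here is illustrative:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None   # headless: no virtual IP, DNS resolves directly to the pod IPs
  selector:
    app: mysql
  ports:
    - port: 3306
Each pod then gets a stable DNS name of the form mysql-0.mysql.<namespace>.svc.cluster.local.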

Stateful Pod
The explanation is as follows. In other words, if a stateful Pod restarts, Kubernetes handles the identity bookkeeping:
In Kubernetes, a statefulSet is a type of deployment strategy that maintains a sticky identity and storage for each of the Pods. statefulSet were designed to host a stateful application such as databases or persisted cache that need to maintains their state across restarts and reschedules. The nice thing about a statefulSet is that every pod is assigned an integer ordinal index, from 0 up through N-1, which is unique over the set. When a pod crashes or is rescheduled, Kubernetes will take of all of complexity of reviving it and reassign it to the correct identity. When scaling down the statefulSet, for example from 5 to 3 replicas, Kubernetes will terminate the last two pods in the set.
Example
As a diagram:

The explanation is as follows:
The first pod has access to read/write replica of mysql; the others only have access to the read replica.

Pods of Statefulsets are predictable. Statefulsets guarantee mapping of Pod names, DNS hostnames and volume mappings. This means, the first created pod always has the name my-pod-1 and access to mysql r/w replica. The next one will have the name my-pod-2 and access to mysql read replica, and so on. Any consumer service can directly access to my-pod-1 and it is guaranteed that it will have write privilege.

Example
As a diagram:

The Other Alternative
The explanation is as follows. In other words, a "Deployment + Persistent Volume Claim" can be used instead of a StatefulSet:
Kubernetes users are confused about when one should make a Deployment with a PVC and when they should use a StatefulSet with a PVC. There is also a general lack of understanding regarding disk access policies, what RWO/RWX means, and what they allow you to do. These concepts are complicated and require a deep level of understanding to avoid users making bad decisions that they come to regret later.

Best-Practice Recommendations
As a diagram:
The most important item here is probably that the StatefulSet is in its own separate namespace.

Example
We do it like this. Here serviceName specifies the headless service name, and volumeClaimTemplates reserves the disk space.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
        - name: init-mysql
          image: mysql:5.7
          command:
            - bash
            - "-c"
            - |
              set -ex
              # Generate mysql server-id from pod ordinal index.
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              echo [mysqld] > /mnt/conf.d/server-id.cnf
              # Add an offset to avoid reserved server-id=0 value.
              echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
              # Copy appropriate conf.d files from config-map to emptyDir.
              if [[ $ordinal -eq 0 ]]; then
                cp /mnt/config-map/master.cnf /mnt/conf.d/
              else
                cp /mnt/config-map/slave.cnf /mnt/conf.d/
              fi
          volumeMounts:
            - name: conf
              mountPath: /mnt/conf.d
            - name: config-map
              mountPath: /mnt/config-map
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
Example
We do it like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: consumer
  labels:
    app: consumer
    chart: consumer-0.1.0
    draft: draft-app
    release: balancedpartitions
    heritage: Helm
spec:
  revisionHistoryLimit: 0
  replicas: 2
  selector:
    matchLabels:
      app: consumer
      release: balancedpartitions
  serviceName: consumer
  template:
    metadata:
      labels:
        app: consumer
        draft: draft-app
        release: balancedpartitions
      annotations:
        buildID: ""
    spec:
      containers:
        - name: consumer
          image: "consumer:latest"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /health/liveness
              port: http
          readinessProbe:
            httpGet:
              path: /health/readiness
              port: http
          env:
            - name: STATEFULSET_NAME
              value: consumer
            - name: STATEFULSET_NAMESPACE
              value: default
            - name: PartitionCount
              value: "3"
            - name: PartitionQueuePrefix
              value: OrderEvents
            - name: RMQHost
              value: rabbitmq
          resources:
            {}

Wednesday, March 23, 2022

kubectl top option

Introduction
The Metrics API must be available. If it is not running, the output looks like this:

Example
We do it like this:
$ kubectl top  nodes 
error: Metrics API not available
To monitor continuously, the Linux watch command can be used. We do it like this:
watch -n 5 kubectl top node

1. NODE
Example
We do it like this. Unlike the POD output, it also shows percentages.
$ kubectl top nodes 

NAME                                                          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
rlwy-08-qvq55-master-0.c.product-oce-private.internal         695m         17%    4043Mi          32%
rlwy-08-qvq55-master-1.c.product-oce-private.internal         1180m        30%    7622Mi          61%
rlwy-08-qvq55-master-2.c.product-oce-private.internal         1035m        26%    6613Mi          53%
rlwy-08-qvq55-worker-a-8w7bw.c.product-oce-private.internal   608m         3%     9107Mi          16%
rlwy-08-qvq55-worker-a-fk8qq.c.product-oce-private.internal   516m         3%     6788Mi          12%
rlwy-08-qvq55-worker-a-k2lnr.c.product-oce-private.internal   483m         3%     7480Mi          13%
rlwy-08-qvq55-worker-a-ks2xp.c.product-oce-private.internal   694m         4%     9105Mi          16%
rlwy-08-qvq55-worker-a-lmk99.c.product-oce-private.internal   816m         5%     24555Mi         44%
rlwy-08-qvq55-worker-a-nkv8m.c.product-oce-private.internal   473m         2%     7159Mi          13%
rlwy-08-qvq55-worker-a-qfd5m.c.product-oce-private.internal   1327m        8%     7019Mi          12%
rlwy-08-qvq55-worker-a-qj47h.c.product-oce-private.internal   406m         2%     6292Mi          11%
rlwy-08-qvq55-worker-a-rf9m9.c.product-oce-private.internal   646m         4%     7031Mi          12%
rlwy-08-qvq55-worker-a-vkzs7.c.product-oce-private.internal   1222m        7%     6616Mi          12%
rlwy-08-qvq55-worker-a-zsxq2.c.product-oce-private.internal   547m         3%     8582Mi          15%
2. POD
Using the Metrics API, this shows the CPU and MEMORY used by each POD. It does not show percentages.

What provides this information to the Metrics API is the Metrics Server. The explanation is as follows:
When the pod is running, we can then use the ‘kubectl top’ command which is available through the Metrics API to reveal information such as container CPU and memory usage in bytes.

These metrics can be accessed either with the kubectl top command, or by a controller in the cluster, for example Horizontal Pod Autoscaler, to make decisions on what to do when a container passes a specified threshold.

NOTE: The API requires the metrics server to be deployed in the cluster. Otherwise it will not be available.
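If the Metrics Server is not deployed yet, it can be installed from the metrics-server project's release manifest; a minimal sketch:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml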
The value shown in the MEMORY field is not JVM heap usage. The explanation is here.

Example
We do it like this:
$ kubectl top pod --all-namespaces
NAMESPACE     NAME                          CPU(cores) MEMORY(bytes)
default       hello-node-55b49fb9f8-fzb5q      0m           10Mi
kube-system   coredns-5c98db65d4-jjz8s         4m           11Mi
kube-system   coredns-5c98db65d4-txdpc         4m           13Mi
kube-system   etcd-minikube                    29m          43Mi
kube-system   heapster-b6n49                   0m           16Mi
kube-system   influxdb-grafana-sq7nb           1m           37Mi
kube-system   kube-addon-manager-minikube      8m           5Mi
kube-system   kube-apiserver-minikube          47m          254Mi
kube-system   kube-controller-manager-minikube 20m          35Mi
kube-system   kube-proxy-cvcdj                 2m           9Mi
kube-system   kube-scheduler-minikube          2m           10Mi
kube-system   kubernetes-dashboard-7b8ddcb5m   0m           14Mi
kube-system   metrics-server-84bb785897-nt4xs  0m           8Mi
kube-system   storage-provisioner              0m           27Mi
Example
We do it like this:
kubectl top pod --containers
Example
We do it like this:
$ kubectl top pod -n rlwy-08
W0923 11:08:47.830819 1958517 top_pod.go:140] Using json format to get metrics. 
Next release will switch to protocol-buffers, switch early by passing 
--use-protocol-buffers flag

NAME                                                             CPU(cores) MEMORY(bytes)
adv-vitess-cluster-az1-vtctld-a22f4b1a-85b49c8546-9mzhk          2m         32Mi
adv-vitess-cluster-az1-vtctld-a22f4b1a-85b49c8546-nhrlg          2m         29Mi
adv-vitess-cluster-az1-vtgate-498e7697-78977564ff-fbxs4          1m         30Mi
adv-vitess-cluster-az1-vtgate-498e7697-78977564ff-lz8x5          2m         32Mi
adv-vitess-cluster-etcd-07a83994-1                               34m        46Mi
adv-vitess-cluster-etcd-07a83994-2                               37m        48Mi
adv-vitess-cluster-etcd-07a83994-3                               31m        40Mi
rlwy-08-fault-monitoring-events-collector-0                      2m         350Mi
rlwy-08-fault-monitoring-events-collector-1                      5m         341Mi
rlwy-08-prometheus-filestat-exporter-6f99dfbf54-4ptlf            1m         14Mi
rlwy-08-prometheus-ofcs-backlog-size-exporter-5644c6486f-dt6qp   5m         10Mi
rlwy-08-prometheus-ps-stats-exporter-85c54466bf-shgb6            94m        29Mi
rlwy-08-prometheus-rescue-mode-exporter-5fdc759bcd-kkjcm         41m        21Mi
rlwy-08-sftp-7bdc8f7f85-td6xz                                    2m         6Mi
rlwy-08-sql-exporter-86dc94f497-krjkd                            1m         21Mi
rlwy-gui-7965bc5ffb-gdrd4                                        4m         501Mi
rlwy-oam-green-0                                                 137m       5028Mi
vitess-operator-67b5bf658-fttph                                  7m         141Mi

kubectl cluster-info option

Shows cluster information