Sunday, January 30, 2022

The Kubernetes metadata Keyword

Introduction
Usually only fields such as name and namespace are written here
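A minimal sketch of the usual fields (the names are illustrative):
metadata:
  name: my-pod            # illustrative
  namespace: default
  labels:
    app: my-app
  annotations:
    note: "free-form key/value pairs"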

The annotations Field
The explanation is as follows
Next, take advantage of Linux kernel security features, such as SELinux, AppArmor (beta since 1.4), and/or seccomp (stable since 1.19). AppArmor defines the permissions for a Linux user or group to confine programs to a limited set of resources. Once an AppArmor profile is defined, pods with AppArmor annotations will enforce those rules.
Example
We do it like this
apiVersion: v1
kind: Pod
metadata:
  name: apparmor
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
  containers:
  - name: hello
    image: busybox
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
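The quote also mentions seccomp; unlike AppArmor's annotation, that one is set through securityContext (stable since 1.19). A minimal sketch:
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault   # use the container runtime's default seccomp profile
  containers:
  - name: hello
    image: busybox
    command: [ "sh", "-c", "echo 'Hello seccomp!' && sleep 1h" ]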


Kubernetes Resource Requirements

Introduction
There are two kinds of Resources.
1. Resources for Objects. Resource Requirements can be specified for something like a Pod or a Deployment.
2. Resources for the System

The figure is as follows


LimitRange and ResourceQuotas
The explanation is as follows
... Kubernetes provides two resource-level policies: LimitRange and ResourceQuotas. LimitRanges can be used to constrain individual resource usage (e.g., max 2 CPUs per pod), whereas ResourceQuota controls the aggregate resource usage (e.g., a total of 20 CPU in the dev namespace).
You can also look at the kind: ResourceQuota post
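A hedged sketch of both policies, using the numbers from the quote (object names are illustrative):
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
  namespace: dev
spec:
  limits:
  - type: Pod
    max:
      cpu: "2"            # max 2 CPUs per pod
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-cpu-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "20"    # a total of 20 CPU in the dev namespace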

Resource Consumption Limits
The explanation is as follows. requests specifies the minimum values, limits the maximum.
Kubernetes defines ‘requests’ and ‘limits’ in its resource utilization category. Requests represent the minimum resources an application needs to run, and limits define the maximum resources. No control over the resources also means we are not monitoring the application. We can specify the resource limits in the deployment YAML.
CPU
The figure is as follows


The rule of thumb here is as follows. Using either the Burstable or the Guaranteed QoS class is recommended; a sketch of both follows the quote.
Pods are guaranteed to get the amount of CPU they request, they may or may not get additional CPU time (depending on the other jobs running).
...
Always set CPU requests. This is the baseline and it is the only thing you can count on.

- Tim Hockin, Kubernetes Maintainer
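For reference, the QoS class follows from how requests and limits are set. A fragment-style sketch (the values are arbitrary):
# Guaranteed: limits equal requests for every resource of every container
resources:
  requests:
    cpu: 500m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 256Mi
# Burstable: requests are set, limits are higher or absent
resources:
  requests:
    cpu: 500m
    memory: 256Mi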
1. If a CPU request is specified and no worker node has that much capacity free, the Pod is not scheduled. We see it as "Pending". If we look with "kubectl describe pod", the reason appears as FailedScheduling.

2. If no CPU limit is specified, the explanation is as follows (see the LimitRange sketch after the list)
- As our pod’s container has not been given any upper threshold limit it could use all of the CPU resources available on the Node where it is running.
- There can also be a situation where our Pod’s container is running in some given namespace, which has some default CPU limit. In this case, our container will automatically be assigned the default limit.
- A LimitRange parameter can be used by our cluster administrator to specify a default value for the CPU limit.
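A hedged sketch of that last point: a LimitRange that supplies a default CPU limit and request to containers that declare none (names are illustrative):
apiVersion: v1
kind: LimitRange
metadata:
  name: default-cpu-limit
  namespace: my-namespace
spec:
  limits:
  - type: Container
    default:
      cpu: "1"            # default limit when the container sets none
    defaultRequest:
      cpu: 500m           # default request when the container sets none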

Setting or Not Setting a Limit
In my opinion a nice explanation is the following. It discusses setting a limit on neither application versus on only one of them. In the end, setting the lower bound is always a good idea.
Why they are saying CPU limits are bad

Let's start with an example where we have two pods, A and B. Pod A is always low on CPU, it uses from 100m to 200m, while B is an application constantly receiving requests and keeps its CPU usage medium to high, from 300m to 500m. Let's also suppose our node has 1000m available for pods.

This is fine because the worst-case scenario is 700m (A 200m + B 500m), we have 300m to spare. The issue is that during any unpredicted event, things could go wrong. Let's consider some examples:
  1. A and B don't have any requests or limits. If pod B starts spending lots of CPU time, like 980m, then A is going to starve with only 20m.
  2. We set up CPU limits for B at 500m, now A won't starve because B has limits. Great, but if A is consuming 100m, and B is consuming 500m then we have 400m unused even though B could be using it during these unpredicted events.
  3. Instead of setting up limits for B, let's set up requests for A. Now A requests 200m for CPU and B has no limits. When B starts spending lots of CPU, A won't be affected because it is guaranteed it's always going to have 200m for him.
The second example shows how setting limits could waste resources that could easily be used. The third example is not yet ideal, we should set requests for both pods, but the idea is to show that limits could waste your resources, and if you set proper requests for every pod then no one is going to starve, so you don't need them.
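A hedged sketch of the third scenario, with the pod names and numbers from the quote (B's request value is my assumption, taken from its stated usage range):
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
spec:
  containers:
  - name: app
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    resources:
      requests:
        cpu: 200m        # A is guaranteed 200m even when B bursts
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-b
spec:
  containers:
  - name: app
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    resources:
      requests:
        cpu: 300m        # no limit, so B may burst into spare CPU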
The m Suffix
The explanation is as follows
The unit suffix m stands for “thousandth of a core,”
The Gi Suffix
The explanation is as follows. So Gi means gibibytes.
... (G) is power of ten, while the other one (Gi) is power of two. So,
- 10^9 is a power of ten; the result is 1000000000 bytes, or 1G
- 2^30 is a power of two; the result is 1073741824 bytes, or 1Gi
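A small fragment combining both suffixes (the values are arbitrary):
resources:
  requests:
    cpu: 250m          # 250/1000 of a core = 0.25 CPU
    memory: 1Gi        # 2^30 = 1073741824 bytes
  limits:
    cpu: "1"           # 1 full core = 1000m
    memory: 1G         # 10^9 = 1000000000 bytes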

Example - cpu
We do it like this. Here the lower bound is 0.5 CPU, the upper bound a full CPU.
resources:
  limits:
    cpu: "1000m"
    memory: "1Gi"
  requests:
    cpu: "500m"
    memory: "500Mi"
Example - memory
We do it like this
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        memory: "1Mi"
        cpu: 0.2
      limits:
        memory: "1Gi"
        cpu: 1
    envFrom:
    - secretRef:
        name: testsecret


Static Pod - Can Be Used Instead of docker-compose

Introduction
The explanation is as follows
In a typical Kubernetes architecture, the Kubelet is the main agent that runs in all worker nodes and sometimes even the control-plane nodes (more on this later).

It registers itself against a Kubernetes Cluster and is responsible for creating, updating and deleting the containers (or pods) running in the target host. The Kubelet receives instructions from various sources and ensures that the desired state is applied in the target host.

But, did you know that the Kubelet can run in standalone mode? By standalone, we mean that the Kubelet can perform almost all of its functions without any input from an external source.

This presents a potential alternative to docker-compose.
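A minimal sketch, assuming the kubelet's staticPodPath is the common default /etc/kubernetes/manifests: any pod manifest dropped into that directory is started by the kubelet itself, no API server required.
# Save as /etc/kubernetes/manifests/static-web.yaml on the node
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80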

Wednesday, January 26, 2022

Kubernetes kind: Service With LoadBalancer Service - Load Balances Across Pods

Introduction
It is used as
kind: Service
spec/type: LoadBalancer

The target pods are selected with spec/selector/app. Incoming requests are forwarded to the port given as targetPort.

When this Service is created, a NodePort and a ClusterIP are also created automatically. The explanation is as follows
Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
Example
We do it like this
apiVersion: v1
kind: Service
metadata:
  name: app-users
  labels:
    app: app-users
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: app-users
Example
We do it like this
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: redis
spec:
  selector:
    app: redis
  ports:
  - port: 80
    targetPort: 6379
    protocol: "TCP"
    name: redis
  type: LoadBalancer
Example
We do it like this. Here we expose the NodePort service named adv-vtgate-az1 to the outside world.
kubectl expose service -n rlwy03 adv-vtgate-az1 --type=LoadBalancer --name=my-service
The externally exposed service is as follows
$ kubectl describe services my-service -n rlwy03
Name:                     my-service
Namespace:                rlwy03
Labels:                   app.kubernetes.io/managed-by=Helm
                          planetscale.com/cell=az1
                          planetscale.com/cluster=adv-vitess-cluster
                          planetscale.com/component=vtgate
Annotations:              <none>
Selector:                 planetscale.com/cell=az1,planetscale.com/cluster=adv-vitess-cluster,planetscale.com/component=vtgate
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       172.30.97.99
IPs:                      172.30.97.99
LoadBalancer Ingress:     34.68.182.24

Port:                     port-1  15000/TCP
TargetPort:               15000/TCP
NodePort:                 port-1  30879/TCP
Endpoints:                10.129.2.37:15000,10.131.2.31:15000

Port:                     port-2  15999/TCP
TargetPort:               15999/TCP
NodePort:                 port-2  30963/TCP
Endpoints:                10.129.2.37:15999,10.131.2.31:15999

Port:                     port-3  3306/TCP
TargetPort:               3306/TCP
NodePort:                 port-3  31515/TCP
Endpoints:                10.129.2.37:3306,10.131.2.31:3306

Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  EnsuringLoadBalancer  120m  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   119m  service-controller  Ensured load balancer
Therefore I can connect like this
$ mysql -h 34.68.182.24 -u myuser -pmypassword
Example
We do it like this. port is exposed to the outside world. Incoming requests are forwarded to targetPort
apiVersion: v1
kind: Service
metadata:
  name: my-example      # assumed from the app label; the original snippet begins at namespace
  namespace: default
  labels:
    app: my-example
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
    name: http
  - port: 22
    targetPort: 22
    name: ssh
    protocol: TCP
  selector:
    app: my-example
Example
We do it like this
apiVersion: v1
kind: Service              # 1
metadata:
  name: sa-frontend-lb
spec:
  type: LoadBalancer       # 2
  ports:
  - port: 80               # 3
    protocol: TCP          # 4
    targetPort: 80         # 5
  selector:                # 6
    app: sa-frontend       # 7
The explanation is as follows
1. Kind: A service.
2. Type: Specification type, we choose LoadBalancer because we want to balance the load between the pods.
3. Port: Specifies the port in which the service gets requests.
4. Protocol: Defines the communication.
5. TargetPort: The port at which incoming requests are forwarded.
6. Selector: Object that contains properties for selecting pods.
7. app: sa-frontend Defines which pods to target, only pods that are labeled with “app: sa-frontend”
Example
The Deployment is as follows
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springboot
  namespace: springboot-project
  labels:
    app: springboot
spec:
  replicas: 3
  selector:
    matchLabels:
      app: springboot
  template:
    metadata:
      labels:
        app: springboot
    spec:
      containers:
        - name: springboot
          image: byckles/jpa_project:2.0.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8082
          env:
            - name: SPRING_DATASOURCE_URL
              valueFrom:
                configMapKeyRef:
                  name: mysql-configmap
                  key: database_url
            - name: SPRING_DATASOURCE_USERNAME
              valueFrom:
                secretKeyRef:
                  key: mysql-user-username
                  name: mysql-secret
            - name: SPRING_DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: mysql-user-password
                  name: mysql-secret
The Service is as follows
apiVersion: v1
kind: Service
metadata:
  name: springboot-service
  namespace: springboot-project
spec:
  selector:
    app: springboot
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8082
      targetPort: 8082
      nodePort: 30001
Example
We do it like this. spec/selector/matchLabels on the Deployment side and spec/selector/app on the Service side must match
apiVersion: v1 # Kubernetes API version
kind: Service # Kubernetes resource kind we are creating
metadata: # Metadata of the resource kind we are creating
  name: student-kubernetes-demo
spec:
  selector:
    app: student-kubernetes-demo
  ports:
  - protocol: "TCP"
    port: 8080 # The port that the service is running on in the cluster
    targetPort: 8080 # The port exposed by the service
  type: LoadBalancer # LoadBalancer indicates that our service will be external.
---
apiVersion: apps/v1
kind: Deployment # Kubernetes resource kind we are creating
metadata:
  name: student-kubernetes-demo
spec:
  selector:
    matchLabels:
      app: student-kubernetes-demo
  replicas: 2 # Number of replicas that will be created for this deployment
  template:
    metadata:
      labels:
        app: student-kubernetes-demo
    spec:
      containers:
      - name: student-kubernetes-demo
        image: student-kubernetes-demo
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080 # port that the container is running on in the cluster
The loadBalancerIP Field
Example
We do it like this. Here the IP to be used by the loadBalancer is specified.
apiVersion: v1
kind: Service
metadata:
  name: my-frontend-service
spec:
  type: LoadBalancer
  clusterIP: 10.0.171.123
  loadBalancerIP: 123.123.123.123
  selector:
    app: web
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
Example
We do it like this
apiVersion: v1
kind: Service
metadata:
  name: order-service
  namespace: default
  labels:
    app: order-service
spec:
  selector:
    app: order-deployment
  ports:
    - name: order-service-port
      protocol: TCP
      port: 8181
  type: LoadBalancer
  loadBalancerIP: ""



Monday, January 10, 2022

Kubernetes Operator

Introduction
The explanation is as follows. In other words, Kubernetes Operator implementations define and use CustomResources
The way Kubernetes provisions pods and starts services does not align with the operational steps needed to care and feed for a Cassandra cluster — there’s a gap that must be bridged between Kubernetes workflows and Cassandra runbooks.

Kubernetes provides a number of built-in resources — from a simple building block like a Pod, to higher-level abstractions such as a Deployment. These resources let users define their requirements, and Kubernetes provides control loops to ensure that the running state matches the target state. A control loop takes short incremental actions to nudge the orchestrated components towards the desired end state — such as restarting a pod or creating a DNS entry. However, domains like distributed databases require more complex sequences of actions that don’t fit nicely within the predefined resources. This is great, but not everything fits nicely within a predefined resource.

Kubernetes Custom Resources were created to allow the Kubernetes API to be extended for domain-specific logic, by defining new resource types and controllers. OSS frameworks like operator-SDK, kubebuilder, and juju were created to simplify the creation of custom resources and their controllers. Tools built with these frameworks came to be known as Operators.
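A minimal sketch of such an extension, with an illustrative group and kind (not taken from any real operator); the operator's controller then watches objects of this new kind:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: cassandraclusters.example.com   # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: CassandraCluster
    plural: cassandraclusters
    singular: cassandracluster
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer   # desired database node count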
How To Build a Kubernetes Operator
The explanation is as follows
If your goal after reading this is to create a Kubernetes Operator, you need to know that there are already some frameworks that will make your life easier at that task.

Tools like Kopf, kubebuilder, metacontroller , or even the CNCF Operator Framework will provide you the tools and the everyday tasks to start focusing on what your operator needs to do, and they will handle the main daily tasks for you.
Existing Operators
They can be found on the OperatorHub page

Java Operator SDK
An example is here

OraOperator
I first saw this project here

Cluster Proportional Autoscaler - Adds/Deletes ReplicaSets

Introduction
The explanation is as follows
CPA aims to horizontally scale the number of Pod replicas based on the cluster’s scale. A common example is DNS ser...