Thursday, September 29, 2022

kubectl get pvc option

Example
We do the following
$ kubectl get pvc -n rlwy-08
NAME   STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
adv-vitess-backup   Bound   pvc-a3551c9e-fa86-4a67-a5bf-0d0f9f50ab9a   20Gi   RWX   rook-cephfs   2d19h
adv-vitess-cluster-etcd-07a83994-1   Bound   pvc-2843553b-8f29-4ec3-9ed7-919db6785e07   1Gi   RWO   rook-ceph-block   2d19h
adv-vitess-cluster-etcd-07a83994-2   Bound   pvc-65b7807e-161b-4a92-b0be-6048066ec437   1Gi   RWO   rook-ceph-block   2d19h
adv-vitess-cluster-etcd-07a83994-3   Bound   pvc-42206c18-73ac-46de-8d2a-9b60ccab3361   1Gi   RWO   rook-ceph-block   2d19h
adv-vitess-cluster-vttablet-az1-4135592426-c2dc2c3d   Bound   pvc-2b5bc5c8-4675-4a5c-9e10-3ced9edc3c27   1Gi   RWO   standard   2d19h
fault-monitoring-events-collector-rlwy-08-fault-monitoring-events-collector-0   Bound   pvc-e7d2ab1e-6e31-4994-9941-09e2e0dea825   2Gi   RWO   rook-ceph-block   2d19h
fault-monitoring-events-collector-rlwy-08-fault-monitoring-events-collector-1   Bound   pvc-4ba14ce9-8606-4839-b52a-9bda801ff699   2Gi   RWO   rook-ceph-block   2d19h
fault-monitoring-events-collector-rlwy-08-fault-monitoring-events-collector-2   Bound   pvc-9da3c81a-9641-40d2-a6aa-fb203dd4f5b4   2Gi   RWO   rook-ceph-block   2d19h
fault-monitoring-events-collector-rlwy-08-fault-monitoring-events-collector-3   Bound   pvc-1917f0d4-bafe-400f-948e-0eba2ae5a833   2Gi   RWO   rook-ceph-block   2d19h
fault-monitoring-events-collector-rlwy-08-fault-monitoring-events-collector-4   Bound   pvc-a34e63f4-b056-448f-9ccf-f92ca2d45d60   2Gi   RWO   rook-ceph-block   2d19h
k8s-datashare-a   Bound   pvc-643edd39-d7fc-447b-aafe-db5ecbf808f1   200Gi   RWX   rook-cephfs   2d19h
k8s-datashare-b   Bound   pvc-aa3fd831-a34e-46ee-b297-ecfea7c323e1   200Gi   RWX   rook-cephfs   2d19h
k8s-optional   Bound   pvc-c2d5dad2-a43c-46d4-a493-534e1af9c971   100Gi   RWX   rook-cephfs   2d19h
k8s-pcmconfig   Bound   pvc-dd3f64e3-5f01-4d39-8ce4-c04fb8064559   2Gi   RWX   rook-cephfs   2d19h
k8s-pcsconfig   Bound   pvc-f16fbecb-55af-49b8-8b9f-23dff1ce98b8   2Gi   RWX   rook-cephfs   2d19h
k8s-railway-tspft   Bound   pvc-4d3bfc2b-d047-440d-852d-ab1e3c11fda7   2Gi   RWX   rook-cephfs   2d19h
k8s-ticketdatashare-a   Bound   pvc-a8163c65-58c3-498f-8bee-56c2fa2d03fb   200Gi   RWX   rook-cephfs   2d19h
If we use get pv instead, the output is as follows
$ kubectl get pv -n rlwy-08
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                                                                               STORAGECLASS      REASON   AGE
pvc-08ece60f-fb9d-4555-accf-f5c9e62a020c   20Gi       RWO            Delete           Bound    rlwy-08-pgo/pgdata-rlwy-postgres-1                                                                                  rook-ceph-block            2d19h
pvc-0ab069f8-62f0-4f05-a3c5-c1307738d968   94Gi       RWO            Delete           Bound    oce-system/elasticsearch-data-oce-es-data-2                                                                         rook-ceph-block            2d19h
pvc-1917f0d4-bafe-400f-948e-0eba2ae5a833   2Gi        RWO            Delete           Bound    rlwy-08/fault-monitoring-events-collector-rlwy-08-fault-monitoring-events-collector-3                               rook-ceph-block            2d19h
pvc-2843553b-8f29-4ec3-9ed7-919db6785e07   1Gi        RWO            Delete           Bound    rlwy-08/adv-vitess-cluster-etcd-07a83994-1                                                                          rook-ceph-block            2d19h
pvc-2940756f-e102-4e6b-b83d-706f24b3c4a9   954Mi      RWO            Delete           Bound    oce-system/elasticsearch-data-oce-es-client-1                                                                       rook-ceph-block            2d19h
pvc-2b5bc5c8-4675-4a5c-9e10-3ced9edc3c27   1Gi        RWO            Delete           Bound    rlwy-08/adv-vitess-cluster-vttablet-az1-4135592426-c2dc2c3d                                                         standard                   2d19h
pvc-42206c18-73ac-46de-8d2a-9b60ccab3361   1Gi        RWO            Delete           Bound    rlwy-08/adv-vitess-cluster-etcd-07a83994-3                                                                          rook-ceph-block            2d19h
pvc-4237222d-a131-47d9-a702-3bcd5fc170d2   10Gi       RWO            Delete           Bound    oce-system/gs-log-oce-system-logstash-1                                                                             rook-ceph-block            2d19h
pvc-462460d8-64d0-48d1-85b6-7dd584e23e9b   954Mi      RWO            Delete           Bound    oce-system/elasticsearch-data-oce-es-master-1                                                                       rook-ceph-block            2d19h
pvc-4ba14ce9-8606-4839-b52a-9bda801ff699   2Gi        RWO            Delete           Bound    rlwy-08/fault-monitoring-events-collector-rlwy-08-fault-monitoring-events-collector-1                               rook-ceph-block            2d19h
pvc-4d3bfc2b-d047-440d-852d-ab1e3c11fda7   2Gi        RWX            Retain           Bound    rlwy-08/k8s-railway-tspft                                                                                           rook-cephfs                2d19h
pvc-59afeef7-f89b-4677-909b-23399d28f1e0   10Gi       RWO            Delete           Bound    oce-system/oce-system-thanos-compactor                                                                              rook-ceph-block            2d19h
pvc-643edd39-d7fc-447b-aafe-db5ecbf808f1   200Gi      RWX            Retain           Bound    rlwy-08/k8s-datashare-a                                                                                             rook-cephfs                2d19h
pvc-65b7807e-161b-4a92-b0be-6048066ec437   1Gi        RWO            Delete           Bound    rlwy-08/adv-vitess-cluster-etcd-07a83994-2                                                                          rook-ceph-block            2d19h
pvc-6697884b-d049-42ba-8e41-4f5f6c8f67b1   94Gi       RWO            Delete           Bound    oce-system/elasticsearch-data-oce-es-data-1                                                                         rook-ceph-block            2d19h
pvc-7a891f8a-3c1c-46fc-b73f-ac6514461dfe   10Gi       RWO            Delete           Bound    oce-system/data-oce-system-logstash-1                                                                               rook-ceph-block            2d19h
pvc-8f2ac20d-24b6-4734-a0ed-c58014633fa9   94Gi       RWO            Delete           Bound    oce-system/elasticsearch-data-oce-es-data-0  																	   rook-ceph-block            2d19h
pvc-9636f1b9-b2a5-4a0c-9f57-463366f1c6e5   20Gi       RWO            Delete           Bound    rlwy-08-pgo/pgdata-rlwy-postgres-0																				   rook-ceph-block            2d19h
pvc-97e2547c-3160-4cfa-a8b7-7b4e4b8caaa3   10Gi       RWO            Delete           Bound    oce-system/data-oce-system-logstash-0																		       rook-ceph-block            2d19h
pvc-9da3c81a-9641-40d2-a6aa-fb203dd4f5b4   2Gi        RWO            Delete           Bound    rlwy-08/fault-monitoring-events-collector-rlwy-08-fault-monitoring-events-collector-2							   rook-ceph-block            2d19h
pvc-9fa1cb3b-fc7f-4feb-8c3f-7491c8e89a1f   954Mi      RWO            Delete           Bound    oce-system/elasticsearch-data-oce-es-master-0																	   rook-ceph-block            2d19h
pvc-a34e63f4-b056-448f-9ccf-f92ca2d45d60   2Gi        RWO            Delete           Bound    rlwy-08/fault-monitoring-events-collector-rlwy-08-fault-monitoring-events-collector-4							   rook-ceph-block            2d19h
pvc-a3551c9e-fa86-4a67-a5bf-0d0f9f50ab9a   20Gi       RWX            Retain           Bound    rlwy-08/adv-vitess-backup																						   rook-cephfs                2d19h
pvc-a6d9e92e-eca1-4285-95d6-0b2a10a3cca9   954Mi      RWO            Delete           Bound    oce-system/elasticsearch-data-oce-es-client-0																	   rook-ceph-block            2d19h
pvc-a8163c65-58c3-498f-8bee-56c2fa2d03fb   200Gi      RWX            Retain           Bound    rlwy-08/k8s-ticketdatashare-a																					   rook-cephfs                2d19h
pvc-a995c479-56bf-49f1-8a60-25225b3e6b0f   20Gi       RWO            Delete           Bound    oce-system/prometheus-oce-system-kube-prometheus-prometheus-db-prometheus-oce-system-kube-prometheus-prometheus-1   rook-ceph-block            2d19h
pvc-aa3fd831-a34e-46ee-b297-ecfea7c323e1   200Gi      RWX            Retain           Bound    rlwy-08/k8s-datashare-b																						       rook-cephfs                2d19h
pvc-aadae5be-3639-4fed-9e02-3e2cfd18c156   10Gi       RWO            Delete           Bound    oce-system/gs-log-oce-system-logstash-0																	 		   rook-ceph-block            2d19h
pvc-b087214e-b164-48a8-abac-c4c4fe1e8376   10Gi       RWO            Delete           Bound    oce-system/data-oce-system-thanos-storegateway-0																	   rook-ceph-block            2d19h
pvc-c2d5dad2-a43c-46d4-a493-534e1af9c971   100Gi      RWX            Retain           Bound    rlwy-08/k8s-optional																								   rook-cephfs                2d19h
pvc-cebe80c6-a77e-4666-bc73-a1b7a475995b   954Mi      RWO            Delete           Bound    oce-system/elasticsearch-data-oce-es-master-2																	   rook-ceph-block            2d19h
pvc-dd3f64e3-5f01-4d39-8ce4-c04fb8064559   2Gi        RWX            Retain           Bound    rlwy-08/k8s-pcmconfig																							   rook-cephfs                2d19h
pvc-e7d2ab1e-6e31-4994-9941-09e2e0dea825   2Gi        RWO            Delete           Bound    rlwy-08/fault-monitoring-events-collector-rlwy-08-fault-monitoring-events-collector-0							   rook-ceph-block            2d19h
pvc-e8e529d9-38da-4552-a1b3-ba4436fcfb87   20Gi       RWO            Delete           Bound    oce-system/prometheus-oce-system-kube-prometheus-prometheus-db-prometheus-oce-system-kube-prometheus-prometheus-0   rook-ceph-block            2d19h
pvc-ea3b6205-6293-432f-b737-14c63ed5bd86   10Gi       RWO            Delete           Bound    oce-system/oce-system-thanos-minio																				   rook-ceph-block            2d19h
pvc-f16fbecb-55af-49b8-8b9f-23dff1ce98b8   2Gi        RWX            Retain           Bound    rlwy-08/k8s-pcsconfig																							   rook-cephfs                2d19h
pvc-fd0e7801-a9fd-4d90-b2c5-22ffbb54d75f   20Gi       RWO            Delete           Bound    rlwy-08-pgo/pgdata-grafana-grafana-0																		 		   rook-ceph-block            2d19h
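For reference, a claim such as k8s-pcmconfig above could have been created with a minimal manifest roughly like the following; the name, size and storage class come from the listing, the rest is a sketch.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: k8s-pcmconfig
  namespace: rlwy-08
spec:
  accessModes:
    - ReadWriteMany          # shown as RWX in the listing
  storageClassName: rook-cephfs
  resources:
    requests:
      storage: 2Gi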

Wednesday, September 28, 2022

kubectl describe configmap option

Introduction
Lets us see the contents of a ConfigMap

Example
We do the following
$ kubectl describe configmap cmap-advenvironment -n rlwy-08
Name:         cmap-advenvironment
Namespace:    rlwy-08
Labels:       app=oce-rlwy
              app.kubernetes.io/managed-by=Helm
              chart=oce-rlwy-82.40.14
              heritage=Helm
              ne=rlwy
              release=rlwy-08-oce-rlwy
Annotations:  meta.helm.sh/release-name: rlwy-08-oce-rlwy
              meta.helm.sh/release-namespace: rlwy-08

Data
====
AdvEnvironment.txt:
----
FOO=/foo
export FOO
...
Events:  <none>
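To dump the same data without the describe formatting, the raw object can also be read with -o yaml, or a single key can be pulled out with jsonpath; note that a dot inside the key name has to be escaped. A sketch:
kubectl get configmap cmap-advenvironment -n rlwy-08 -o yaml
kubectl get configmap cmap-advenvironment -n rlwy-08 -o jsonpath='{.data.AdvEnvironment\.txt}'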

Tuesday, September 27, 2022

kubectl describe service option

Example
We do the following. Here, port 22 was exposed as a NodePort service named procssh
$ kubectl expose pod rlwy-proc-blue-754d7577f7-4qhhk --port=22 --target-port=22 \
  --type=NodePort --name procssh -n rlwy-08
service/procssh exposed
Now let's look at this service. Mapping it back to the pod one-to-one is hard, because the Selector section does not contain the pod name directly.
$ kubectl describe service procssh -n rlwy-08
Name:                     procssh
Namespace:                rlwy-08
Labels:                   app=oce-rlwy-proc
                          chartColor=blue
                          ne=rlwy
                          pod-template-hash=754d7577f7
                          processingGroup=true
                          release=rlwy-08-oce-rlwy-proc-blue
                          runsCommManager=true
Annotations:              <none>
Selector:                 app=oce-rlwy-proc,
                          chartColor=blue,
                          ne=rlwy,
                          pod-template-hash=754d7577f7,
                          processingGroup=true,
                          release=rlwy-08-oce-rlwy-proc-blue,
                          runsCommManager=true
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       172.30.25.151
IPs:                      172.30.25.151
Port:                     <unset>  22/TCP
TargetPort:               22/TCP
NodePort:                 <unset>  30000/TCP
Endpoints:                10.131.2.30:22
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
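One way to tie the service back to a concrete pod is to compare the Endpoints address above with the pod's IP; a quick check along these lines:
kubectl get endpoints procssh -n rlwy-08
kubectl get pod rlwy-proc-blue-754d7577f7-4qhhk -n rlwy-08 -o wide
The IP column in the wide pod output should match the 10.131.2.30:22 endpoint shown above.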

Kubernetes kind : PodDisruptionBudget

Introduction
Guarantees that at least the specified number of pods keep running during voluntary disruptions such as a rolling upgrade or a node drain.

What Is a Rolling Deployment
The explanation is as follows
The rolling deployment strategy is the default strategy by Kubernetes that slowly replaces the old pods of the previous version with the pods of the new version. 
Example
We do the following
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: cron-service-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: cron-service
The explanation is as follows
Having the above PDB ensures that we always have one guaranteed pod running in K8S cluster at any given point of time .
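After the PDB is applied, how many disruptions it currently allows can be checked with kubectl; a sketch:
kubectl get pdb cron-service-pdb
kubectl describe pdb cron-service-pdb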


Monday, September 26, 2022

Kubernetes kind : ClusterRole

Example
The explanation is as follows
To allow Hazelcast to use the service inside Kubernetes for the discovery, we also need to grant certain permissions. An example of RBAC configuration for default namespace you can find in Hazelcast documentation.
It specifies which verbs can be applied to which resources.
The resources can be:
1. endpoints
2. pods
3. nodes
4. services
The verbs can be:
1. get
2. list
3. watch

Example
We do the following
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hazelcast-cluster-role
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
      - pods
      - nodes
      - services
    verbs:
      - get
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hazelcast-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: hazelcast-cluster-role
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
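Whether the binding took effect can be verified from the service account's point of view with kubectl auth can-i; a quick check, assuming the default/default service account used above:
kubectl auth can-i list pods --as=system:serviceaccount:default:default
kubectl auth can-i get endpoints --as=system:serviceaccount:default:default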

Sunday, September 25, 2022

minikube tunnel option - Gives the LoadBalancer an IP

Introduction
I think the minikube ingress addon needs to be installed. A LoadBalancer-type service is used to reach the application from outside the cluster

Ingress can be used if our services are HTTP and of type LoadBalancer. If the addon is not installed, we install it as follows. After the installation, the minikube tunnel command can be used.
minikube addons enable ingress

Example
Suppose we have a LoadBalancer service like this
---
apiVersion: v1
kind: Service
metadata:
  name: "nginx-service"
  namespace: "default"
spec:
  ports:
  - port: 80
  type: LoadBalancer
  selector:
    app: "nginx"
"kubectl get svc nginx-service" ile bakarsak LoadBalancer servisin EXTERNAL-IP alanının "pending" olduğunu görürürz. Açıklaması şöyle
The external IP will be shown pending as we are using Minikube and hence we need to use the following command to get the external IP address.

minikube tunnel
Now, let’s access the service with the command,

minikube service nginx-service


When you access http://127.0.0.1:64711/ locally, you should see the NGINX screen.

Example
Before the "minikube tunnel" command, the output is as follows
> kubectl get svc
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-minikube   NodePort       10.103.194.22   <none>        8080:30072/TCP   58m
hello-node       LoadBalancer   10.110.209.73   <pending>     8080:30501/TCP   50m
kubernetes       ClusterIP      10.96.0.1       <none>        443/TCP          60m
"minikube tunnel" komutunu çalıştırırız.
>minikube tunnel
* Starting tunnel for service hello-node.
This time the output is as follows
>kubectl get svc
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-minikube   NodePort       10.103.194.22   <none>        8080:30072/TCP   59m
hello-node       LoadBalancer   10.110.209.73   127.0.0.1     8080:30501/TCP   51m
kubernetes       ClusterIP      10.96.0.1       <none>        443/TCP          60m
We can now reach our service at "http://127.0.0.1:8080/".
Example
After running the "minikube tunnel" command, the output is as follows
> kubectl get service app-users
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
app-users   LoadBalancer   10.104.184.147   10.104.184.147   8080:31434/TCP   24h
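Since minikube tunnel has to keep running in its own terminal, the moment the EXTERNAL-IP switches away from <pending> can be watched from another terminal; a sketch:
kubectl get svc app-users -w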


Friday, September 23, 2022

Prometheus PromQL

container_cpu_cfs_throttled_seconds_total
The explanation is as follows. This situation can be seen when Best Effort QoS or Guaranteed QoS is used, and during the JVM warm-up phase.
Kubernetes exposes a per-pod metric container_cpu_cfs_throttled_seconds_total which denotes — how many seconds CPU has been throttled for this pod since its start.
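As a quick way to spot throttled pods, the per-pod throttling rate can be graphed from this metric; a sketch, with the namespace and time window chosen for illustration:
sum by (pod) (rate(container_cpu_cfs_throttled_seconds_total{namespace="rlwy-08"}[5m]))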

container_cpu_usage_seconds_total
Example
We do the following
avg by (namespace)(rate(container_cpu_usage_seconds_total{pod=~"rlwy-proc-blue-.*" ,container="main"}[10m]))
The explanation is as follows
This query will give us the number of cores that are being used by specified container.
The resulting graph shows that this pod has 4 CPUs assigned to it, and at one point it used all of them at 100%.


Monday, September 19, 2022

kubectl logs option

Introduction
We do the following
kubectl logs <pod-name>
--all-containers option
To look at the logs of all containers, we do the following
kubectl logs <pod_name> --all-containers
-c option
To look at the logs of a specific container, we do the following
kubectl logs -c <container_name> <pod_name>
-f option
Streams the logs continuously

Example
We do the following
kubectl logs -f <pod-id>
Example
If we want to look at a specific container inside the pod, we do the following
kubectl logs -f <pod-name> <container-name>
-p/--previous option
With -p, the logs of the previous (crashed) container instance can be seen. The explanation is as follows
When a container is repeatedly crashing, it may not be obvious how to determine what is wrong. You might look at kubectl log , but the problem that caused your container to crash will often have been immediately prior to the container restarting. Since the logs returned by kubectl log (by default) are only from the current instance of the container, and kubernetes will likely restart your container pretty quickly after your container crashes, the logs that you get may not have the information that you need.
...
Thankfully, kubectl log has a handy flag, --previous or -p, that lets you see the log not for the current container instance, but rather for the previously started container (if there is one) — and at the end of the log for the previous container is likely where the relevant error messages will be.
Example
We do the following
kubectl logs <pod name> -n <namespace> -p
kubectl logs <pod name> -n <namespace> --previous
kubectl logs <pod name> -n <namespace> --all-containers
kubectl logs <pod name> -n <namespace> -c mycontainer
kubectl logs <pod name> -n <namespace> --tail 50
Example
We do the following
kubectl logs $POD_NAME
kubectl logs $POD_NAME -c $CONTAINER_NAME

# or if you want to follow continuous output
kubectl logs -f $POD_NAME

# Another super useful debugging tool is the -p/--previous flag, 
# which you can use in the case that an instance keeps crashing/ 
# there was an unexpected restart.
kubectl logs -p $POD_NAME
Example
To look at the logs of a pod that failed earlier, we do the following
kubectl logs --previous <pod_name>
--tail option
Lets us look at the last X lines of the log
Example
To look at the last 50 lines, we do the following
kubectl logs --tail=50 <pod_name>
--since option
Lets us look at the logs from the last X amount of time
Example - minutes
To look at the logs of the last X minutes, we do the following
kubectl logs --since=10m redis-master-f46ff57fd-qmrd7
kubectl logs --since=15m redis-master-f46ff57fd-qmrd7
Example - hours
To look at the logs of the last six hours, we do the following
kubectl logs --since=6h <pod_name>
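These flags can be combined; for example, to follow only the recent log of a single container (all names here are placeholders):
kubectl logs -f --tail=100 --since=1h <pod_name> -c <container_name> -n <namespace>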


Wednesday, September 14, 2022

kubectl create ingress option

Example
Using only the command line, we do the following
kubectl create deployment echoserver --image=k8s.gcr.io/echoserver:1.4

# Make it accessible
kubectl expose deployment echoserver --port 80 --target-port 8080

#expose the service externally as echoserver.localdev.me
kubectl create ingress echoserver --class=nginx \
  --rule='echoserver.localdev.me/*=echoserver:80'
The explanation is as follows
The ingress controller will in our example read the host field of incoming HTTP requests and perform routing to internal services based on its value. However, a prerequisite is that the request reaches the ingress controller and this implies the DNS echoserver.localdev.me has to be routed to the Kubernetes cluster which here is localhost. So how does this echoserver.localdev.me work?

localdev.me is a service that defines its own DNS servers and will answer with the loopback IP address (127.0.0.1) to any subdomains of localedev.me DNS requests. You can have a look at their DNS records. This is handy as it allows to use subdomains in local to perform ingress routing without the need for manual /etc/hosts modifications.
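Assuming the ingress controller is reachable on localhost, the routing can then be tested directly from the host; a sketch:
curl http://echoserver.localdev.me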

kubectl get endpoints option

Introduction
Shows which pod(s) a service is routing traffic to.

Example
We do the following
kubectl get endpoints
Example
We do the following
# Get the kubernetes service
> kubectl get svc kubernetes -n default
--------------------------------------------------------------------
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        
kubernetes   ClusterIP      10.96.0.1       <none>           443/TCP
--------------------------------------------------------------------

# List the endpoints of the kubernetes service
> kubectl get endpoints -n default kubernetes
--------------------------------------------------------------------
NAME         ENDPOINTS         AGE
kubernetes   10.25.96.3:6443   24d
--------------------------------------------------------------------

# Get local endpoint of the k8s cluster from the Kubernetes service
> ENDPOINT=https://$(kubectl get endpoints kubernetes -n default \
  -o jsonpath='{.subsets[0].addresses[0].ip}'):6443/
> echo $ENDPOINT
--------------------------------------------------------------------
https://10.25.96.3:6443/      # (This will vary in your case)
--------------------------------------------------------------------


kubectl get replicasets option

Example
We do the following
kubectl get replicasets

kubectl explain option - Shows the Help Page

Example
We do the following
kubectl explain pods
To get information about a sub-field of an object, we do the following
kubectl explain pods.spec.volumes
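To print the whole field tree of an object in one go, the --recursive flag can be added; for example:
kubectl explain pods.spec --recursive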

Friday, September 2, 2022

minikube ip option - Does Not Work With the Docker Driver. It Is for Accessing the Application

Introduction
Example
We do the following
$ minikube ip
After learning the IP address, we do the following.
http://192.168.49.2:31000
Going to this address, we can reach an application running inside minikube. Here we assumed our application is listening on port 31000.

However, if we are using the Docker driver, the explanation is as follows
It may happen if you are using the docker driver while starting the minikube , instead of the virtual box. In this situation minikube IP will not function and you will be required to start the local tunnel from your terminal using the following commands,
$ minikube service jenkins --url
Example
We do the following. Here we first learned the minikube IP address, then the port our service uses.
> minikube ip
172.20.164.53

> kubectl get service app-users
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
app-users   LoadBalancer   10.104.184.147   <pending>     8080:31434/TCP   24h
We do the following
http://172.20.164.53:31434/users/


Thursday, September 1, 2022

kubectl jsonpath Examples

Example
We do the following
export NODE_IP=$(kubectl get nodes -o jsonpath={.items[0].status.addresses[1].address})
Example
We do the following
kubectl get pods -l app=oce-ocs-oam -o 'jsonpath={.items[*].metadata.name}'
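jsonpath also supports iterating with range/end, which is handy for printing one field per line; a sketch with illustrative fields:
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'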

kubectl cluster-info option

Shows cluster information
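Example
We do the following. cluster-info dump produces a much more detailed debug output.
kubectl cluster-info
kubectl cluster-info dump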