Why Are Labels Needed?
By giving worker nodes labels, you can ensure that certain Pods run only on workers that carry a given label.
There are two ways to assign a Pod to a specific worker:
1. nodeSelector
2. nodeAffinity
nodeSelector — This is a simple Pod scheduling feature that allows scheduling a Pod onto a node whose labels match the nodeSelector labels specified by the user.
Node Affinity — This is the enhanced version of the nodeSelector introduced in Kubernetes 1.4 in beta. It offers a more expressive syntax for fine-grained control of how Pods are scheduled to specific nodes.
Inter-Pod Affinity — This feature enables co-location: it schedules Pods onto nodes that are already running specific other Pods.
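As a minimal sketch of inter-Pod affinity (the Pod name web, the image nginx, and the target label app=redis are placeholders chosen for illustration, not from the original notes):

```yaml
# Schedule this Pod onto a node that is already running a Pod labeled
# app=redis. topologyKey: kubernetes.io/hostname means "same node".
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - redis
        topologyKey: kubernetes.io/hostname
  containers:
  - name: web
    image: nginx
```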
Labeling a Worker Node
kubectl label nodes <nodename> mylabel=somevalue
Example
kubectl label node minikube foo=bar
Listing the Labels on Worker Nodes
kubectl get nodes --show-labels
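A few related commands worth knowing (the node name worker-1 here is a placeholder; these require a running cluster):

```shell
# Add a label to a node
kubectl label nodes worker-1 disktype=ssd

# List only the nodes that carry that label
kubectl get nodes -l disktype=ssd

# Remove the label again (a trailing "-" after the key deletes it)
kubectl label nodes worker-1 disktype-
```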
1. nodeSelector
Example
kubectl label nodes host02 disktype=ssd
node "host02" labeled
apiVersion: v1
kind: Pod
metadata:
  name: httpd
  labels:
    env: prod
spec:
  containers:
  - name: httpd
    image: httpd
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
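To verify where the Pod actually landed (a sketch; the file name httpd-pod.yaml is an assumption, and a running cluster is required):

```shell
# Apply the manifest, then check the NODE column of the scheduled Pod
kubectl apply -f httpd-pod.yaml
kubectl get pod httpd -o wide
```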
Example - Incorrect Usage
apiVersion: v1
kind: Pod
metadata:
  <snip>
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/region
            operator: In
            values:
            - ca-central-1
  <snip>
  nodeSelector:
    topology.kubernetes.io/zone: "ca-central-1a"
    topology.kubernetes.io/zone: "ca-central-1b"
All nodeSelectors must be matched. Node labels can only have a single value. Perhaps the developer thought that nodeSelectors were ORed and not ANDed, or perhaps this was just a careless mistake. Either way, Kubernetes is doing exactly what you told it to do by not scheduling the Pod on any worker, because the scheduling instructions were impossible to satisfy.
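One way to express the likely intent ("zone ca-central-1a OR ca-central-1b") is to move the zone constraint into nodeAffinity, since the values of a single In expression are ORed. A sketch of the corrected spec fragment:

```yaml
# Expressions within one matchExpressions list are ANDed,
# but the values of one In expression are ORed — so this means:
# region ca-central-1 AND (zone ca-central-1a OR zone ca-central-1b).
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/region
            operator: In
            values:
            - ca-central-1
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - ca-central-1a
            - ca-central-1b
```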
2. nodeAffinity