It can be one of the following:
- hostPath Volume: specified with the hostPath field in the PV definition
- local Volume: specified with the local field in the PV definition (a minimal sketch follows below)
In both cases, note the following: if the worker node is lost, the data is lost with it.
Once a node has died, the data of both hostpath and local persistent volumes of that node are lost.
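To illustrate the difference, here is a minimal sketch of a local PV; the node name worker-1, the mount path /mnt/disks/ssd1, and the class name local-storage are all hypothetical. Unlike hostPath, a local PV requires nodeAffinity so Kubernetes knows which node the disk belongs to:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example           # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage  # hypothetical class name
  local:
    path: /mnt/disks/ssd1          # disk already mounted on the node
  nodeAffinity:                    # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1         # hypothetical node name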
Other options include:
- AWS EBS (a spec fragment is sketched below)
- Azure
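As a sketch, the spec of a PV backed by the in-tree AWS EBS plugin carries an awsElasticBlockStore source instead of hostPath or local; the volume ID below is hypothetical, and note that the in-tree plugin has since been deprecated in favor of the EBS CSI driver:

spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0   # hypothetical EBS volume ID
    fsType: ext4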
Some Properties of PersistentVolume
The explanation is as follows
1. capacity
2. accessModes
ReadWriteOnce — the volume can be mounted as read-write by a single node.
ReadOnlyMany — the volume can be mounted as read-only by many nodes.
ReadWriteMany — the volume can be mounted as read-write by many nodes.
ReadWriteOncePod — the volume can be mounted as read-write by a single pod.
3. persistentVolumeReclaimPolicy
Retain — the volume is retained after the associated claim is deleted.
Delete — the volume is deleted after the associated claim is deleted.
Recycle — the volume is recycled for future use.
Note: Currently, only NFS and HostPath support recycling. AWS EBS, GCE PD, Azure Disk, and Cinder volumes support deletion.
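As a sketch, all three properties sit side by side in the PV spec (the values here are illustrative):

spec:
  capacity:
    storage: 1Gi                          # 1. capacity
  accessModes:
    - ReadWriteOnce                       # 2. accessModes
  persistentVolumeReclaimPolicy: Retain   # 3. persistentVolumeReclaimPolicy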
Where Is the Actual Disk/Storage?
It is defined in the spec section. The backing storage can be NFS, Google Cloud, local storage, etc.; around 25 different storage backends are supported. An NFS-backed spec is sketched below.
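For example, an NFS-backed PV only changes the volume-source part of the spec; the server address and export path here are hypothetical:

spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany           # NFS allows many nodes to mount read-write
  nfs:
    server: 10.0.0.5          # hypothetical NFS server address
    path: /exports/data       # hypothetical export path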
persistentVolumeReclaimPolicy
Retain
The explanation is as follows. In other words, even if the Pod, the PVC, and the PV are deleted, the data is not deleted.
The Retain reclaim policy allows for manual reclamation of the resource. When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume.
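The policy can also be changed on an existing PV; a minimal sketch using kubectl patch, with a hypothetical PV named some-pv:

kubectl patch pv some-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'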
Example
Let's create a PV and a PVC that use hostPath
# myPersistent-Volume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: my-persistent-volume
  labels:
    type: local
spec:
  storageClassName: pv-demo
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/persistent-volume"

# myPersistent-VolumeClaim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-persistent-volumeclaim
spec:
  storageClassName: pv-demo
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
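Assuming the two manifests are saved under the file names in the comments above, they can be applied and verified like this; the PVC should show a Bound status once it matches the PV:

kubectl apply -f myPersistent-Volume.yaml
kubectl apply -f myPersistent-VolumeClaim.yaml
kubectl get pv,pvc    # STATUS column should show Bound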
The explanation is as follows. Here both the PV and the PVC use the same storageClassName
A PV can have a class, which is specified by setting the storageClassName attribute to the name of a StorageClass. A PV of a particular class can only be bound to PVCs requesting that class.
A PV with no storageClassName has no class and can only be bound to PVCs that request no particular class. ( not done in this tutorial )
Additionally, the explanation for the PV is as follows
The storageClassName: pv-demo is what links the PersistentVolumeClaim to the PersistentVolume. Last 2 lines: we define that this disk space exists on the host at /mnt/persistent-volume
Let's also create a Pod. From the Pod's point of view, its data lives in the /my-pv-path/ directory, but from the worker node's point of view it lives in the /mnt/persistent-volume directory
apiVersion: v1
kind: Pod
metadata:
  name: myvolumes-pod
spec:
  containers:
    - image: alpine
      imagePullPolicy: IfNotPresent
      name: myvolumes-container
      command: ['sh', '-c', 'echo Container 1 is Running ; sleep 3600']
      volumeMounts:
        - mountPath: "/my-pv-path"
          name: my-persistent-volumeclaim-name
  volumes:
    - name: my-persistent-volumeclaim-name
      persistentVolumeClaim:
        claimName: my-persistent-volumeclaim
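To see both views of the same data, one can write a file through the Pod and read it back from the worker node; this sketch assumes shell access to the worker:

kubectl exec myvolumes-pod -- sh -c 'echo hello > /my-pv-path/test.txt'
# then, on the worker node:
cat /mnt/persistent-volume/test.txt    # prints: hello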
Even if we delete the Pod, the PVC, and the PV, the data is not deleted. Another Pod can then use this data. The explanation is as follows
Someone can now create other PersistentVolumeClaims and PersistentVolumes to use this PERSISTENT data.
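A sketch of verifying this behavior: after the Pod and the PVC are deleted, the PV's status becomes Released, and the files stay on the host path:

kubectl delete pod myvolumes-pod
kubectl delete pvc my-persistent-volumeclaim
kubectl get pv my-persistent-volume    # STATUS shows Released, data kept
# on the worker node the data is still there:
ls /mnt/persistent-volume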
Namespace
A PersistentVolume cannot be namespaced; it is accessible by the whole cluster. PVs must be created before the Pods that use them
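This scoping can be checked directly: kubectl api-resources reports persistentvolumes as cluster-scoped and persistentvolumeclaims as namespaced:

kubectl api-resources | grep persistentvolume
# the NAMESPACED column shows false for persistentvolumes
# and true for persistentvolumeclaims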
Container Attached Storage vs Traditional Shared Storage
The explanation is as follows
There is a local persistent volume functionality that was introduced in Kubernetes 1.14, and this just means that you can have persistent storage on Kubernetes using the Kubernetes server local drive. You could have SSDs installed in each one of the Kubernetes application servers running various loads. ... However, this architecture breaks the philosophy of Kubernetes and the applications. When you use this local persistent volume functionality, Kubernetes is going to only schedule the given pod that's using this persistent storage to that one physical server. So, you've lost application portability, and you can't "move" off that physical server.
There is another article here