What is Ceph?
In short, finding where a file is stored is fast because the location is computed with the CRUSH hash. With Ceph you can build:
1. Block Storage
2. File Storage
3. Object Storage
The explanation is as follows:
Ceph is an open source, distributed, scaled-out, software-defined storage system that can provide block, object, and file storage. Ceph clusters are designed to run on any hardware with the help of an algorithm called CRUSH (Controlled Replication Under Scalable Hashing).
The CRUSH algorithm enables the client to independently compute where data should be written to or read from. Because this metadata is derived dynamically, there is no need to manage a centralized table.
Servers can perform a CRUSH lookup very quickly; moreover, a smaller computing load can be distributed across cluster nodes, leveraging the power of distributed storage. This allows Ceph to scale quickly to hundreds of petabytes without the risk of bottlenecks and the associated single points of failure.
Ceph is a true unified storage solution that provides block, file, and object services from a single unified software-defined backend. Ceph provides three main types of storage:
- Block storage via the RADOS Block Device (RBD)
- File storage via CephFS
- Object storage via the RADOS Gateway, which provides S3- and Swift-compatible storage.
1. Ceph Block Storage (RBD): cannot be shared by different Pods
2. Ceph File Storage (CephFS): like NFS, it can be shared by different Pods
3. Ceph Object Storage: works like AWS S3
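The CRUSH lookup described above can also be asked from the cluster itself rather than from any central table. A minimal sketch, assuming a hypothetical pool named mypool and an object named myobject, run wherever the ceph CLI is available (for example the Rook toolbox pod):

# CRUSH maps the object name to a placement group and a set of OSDs on the fly;
# nothing is read from a central metadata table.
ceph osd map mypool myobject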
History of Ceph
The explanation is as follows. In short, Red Hat acquired it in 2014.
2003–2007: Ceph was developed at the University of California by Sage Weil in 2003 as part of his PhD project. It was then open sourced in 2006 under the LGPL to serve as a reference implementation and research platform. Lawrence Livermore National Laboratory supported Sage's early follow-up work from 2003 to 2007.
2007–2011: DreamHost supported Ceph development from 2007 to 2011. During this period the core components of Ceph gained stability and reliability, new features were implemented, and the road map for the future was drawn.
2012–Current: In 2012 Sage Weil founded Inktank to enable the widespread adoption of Ceph. In 2014 Red Hat agreed to acquire Inktank.
Installation
The explanation is as follows. In a Kubernetes environment Ceph is installed with Rook (a quickstart sketch follows the list below).
There are several different ways to install Ceph, such as:
Cephadm: Installs and manages a Ceph cluster using containers and systemd, with tight integration with the CLI and dashboard GUI.
Rook: Deploys and manages Ceph clusters running in K8s, while also enabling management of storage resources and provisioning via K8s APIs.
ceph-ansible: Deploys and manages Ceph clusters using Ansible.
ceph-salt: Installs Ceph using Salt and cephadm.
jaas.ai/ceph-mon: Installs Ceph using Juju.
github.com/openstack/puppet-ceph: Installs Ceph via Puppet.
Manual: Ceph can also be installed manually.
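A minimal sketch of the Rook path mentioned above; the manifest names come from the Rook quickstart, and their location (deploy/examples here) may differ between Rook releases:

# Fetch the Rook example manifests (directory layout may differ between releases)
git clone https://github.com/rook/rook.git
cd rook/deploy/examples

# Install the CRDs, common resources and the Rook operator
kubectl create -f crds.yaml -f common.yaml -f operator.yaml

# Create a Ceph cluster managed by the operator
kubectl create -f cluster.yaml

# Watch the rook-ceph namespace until the OSD pods are running
kubectl -n rook-ceph get pod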
1. Ceph Block Storage - RBD
The explanation is as follows. RBD stands for RADOS Block Device. It can be thought of as a single HDD/SSD disk.
If you are running on bare metal for production, use the rook-ceph operator to consume Ceph RBD as a storage class.
1. A CephBlockPool is created
2. A StorageClass is created. "rook-ceph.rbd.csi.ceph.com" or "ceph.com/rbd" is used as the provisioner
3. This StorageClass is used through a PVC. Such a PVC can be used by a single Pod
Example - PVC + Using the PVC inside a Pod
For the PVC we do the following
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: rook-ceph-block
The explanation is as follows
You should set accessModes to ReadWriteOnce when using rbd. ReadWriteMany is supported by cephfs.
To use it we do the following
apiVersion: v1
kind: Pod
metadata:
  name: mariadb-pod
  labels:
    app: mariadb
spec:
  containers:
    - image: mariadb
      name: mariadb
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: qwer1234
      ports:
        - containerPort: 3306
          name: mariadb
      volumeMounts:
        - name: db-vol
          mountPath: /var/lib/mysql
  volumes:
    - name: db-vol
      persistentVolumeClaim:
        claimName: mariadb-pvc
Example - StorageClass + PVC
We do the following
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 10.0.1.118:6789, 10.0.1.227:6789, 10.0.1.172:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-kube
  userSecretNamespace: kube-system
  imageFormat: "2"
  imageFeatures: layering
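The adminSecretName and userSecretName above refer to Secrets that must already exist. A minimal sketch of creating them, assuming the Ceph users client.admin and client.kube already exist on the cluster (the key values come from the cluster itself):

# Put the Ceph admin key into the namespace the provisioner expects
kubectl create secret generic ceph-secret \
  --type="kubernetes.io/rbd" \
  --from-literal=key="$(ceph auth get-key client.admin)" \
  --namespace=kube-system

# Put the key of the user that owns the "kube" pool into kube-system as well
kubectl create secret generic ceph-secret-kube \
  --type="kubernetes.io/rbd" \
  --from-literal=key="$(ceph auth get-key client.kube)" \
  --namespace=kube-system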
For the PVC we do the following
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: testclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: fast-rbd
If we look at the PV and the PVC we see the following
# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
testclaim   Bound    pvc-c215ad98-95b3-11e9-8b5d-12e154d66096   1Gi        RWO            fast-rbd       2m

# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-c215ad98-95b3-11e9-8b5d-12e154d66096   1Gi        RWO            Delete           Bound    default/testclaim   fast-rbd                8m
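To attach the claim to a workload, it is referenced like any other PVC. A minimal sketch with a hypothetical busybox Pod (the Pod name and image are not from the original post) that mounts testclaim:

apiVersion: v1
kind: Pod
metadata:
  name: rbd-test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data   # the RBD image appears here as a normal filesystem
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: testclaim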
Example - CephBlockPool + StorageClass + PVC
To create a Ceph RBD we do the following
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool2
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 200
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block2
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  # clusterID is the namespace where the rook cluster is running
  clusterID: rook-ceph
  # Ceph pool into which the RBD image shall be created
  pool: replicapool2
  # RBD image format. Defaults to "2".
  imageFormat: "2"
  # RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
  imageFeatures: layering
  # The secrets contain Ceph admin credentials.
  csi.storage.k8s.io/provisioner-secret-name: rook-ceph-csi
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-ceph-csi
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  # Specify the filesystem type of the volume. If not specified, csi-provisioner
  # will set default as `ext4`.
  csi.storage.k8s.io/fstype: xfs
allowVolumeExpansion: false
# Delete the rbd volume when a PVC is deleted
reclaimPolicy: Delete
For the Persistent Volume Claims we do the following. There are 3 PVCs here.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-datadir-galera-ss-0
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-datadir-galera-ss-1
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-datadir-galera-ss-2
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
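The PVC names above follow the <template>-<statefulset>-<ordinal> pattern, so instead of creating them by hand, a StatefulSet could request them through volumeClaimTemplates. A minimal sketch, assuming a hypothetical StatefulSet named galera-ss (only the storage-related parts are meaningful here; the container image is a placeholder):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: galera-ss
spec:
  serviceName: galera-ss
  replicas: 3
  selector:
    matchLabels:
      app: galera
  template:
    metadata:
      labels:
        app: galera
    spec:
      containers:
        - name: galera
          image: mariadb   # placeholder; the real Galera image is not shown in the post
          volumeMounts:
            - name: mysql-datadir
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: mysql-datadir
      spec:
        storageClassName: rook-ceph-block
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi

This produces PVCs named mysql-datadir-galera-ss-0, -1 and -2, one per replica.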
2. Ceph File Storage - CephFS
"ceph.com/cephfs" is used as the provisioner. When I tried it, the PVC looked like this:
$ kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
adv-vitess-backup   Bound    pvc-417581ac-d314-4be7-a631-32df9ce360fe   2Gi        RWX            rook-cephfs    9m40s
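The rook-cephfs storage class above comes from a Rook CSI setup. A minimal sketch of what may back it, assuming a filesystem named myfs; the pool and secret names follow the Rook examples and may differ between Rook versions:

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs
  pool: myfs-data0   # name of the data pool created by the CephFilesystem above
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete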
We do the following
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 10.0.1.226:6789, 10.0.1.205:6789, 10.0.1.82:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: cephfs
  claimRoot: /pvc-volumes
For the PVC we do the following
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
spec:
  storageClassName: cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
If we look at the PV and the PVC we see the following
# kubectl get pvc
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim1   Bound    pvc-a7db18a7-9641-11e9-ab86-12e154d66096   1Gi        RWX            cephfs         2m

# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
pvc-a7db18a7-9641-11e9-ab86-12e154d66096   1Gi        RWX            Delete           Bound    default/claim1   cephfs                  2m
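Since the claim is ReadWriteMany, several Pods can mount it at the same time, which is the NFS-like behaviour mentioned above. A minimal sketch with two hypothetical busybox Pods (names and images are not from the original post) sharing claim1:

apiVersion: v1
kind: Pod
metadata:
  name: cephfs-writer
spec:
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /shared/hello.txt && sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /shared
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: claim1
---
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-reader
spec:
  containers:
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /shared   # sees the same files as cephfs-writer
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: claim1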