Work in progress! (Incomplete)
✅ Cluster Configuration
Node | Count | Role
---|---|---
Master node | 1 | Kubernetes Control Plane (mounts CephFS)
Worker nodes | 3 | Run Pods
Storage nodes | 3 | Rook-Ceph deployment (runs MON, OSD, MGR, MDS)
Network nodes | 3 | Network traffic management; Ingress and LoadBalancer role
📌 Goals:
- Deploy Rook-Ceph on the 3 storage nodes to build a Ceph cluster
- The master node mounts and uses CephFS directly
- Configure PersistentVolume (PV), PersistentVolumeClaim (PVC), and StorageClass
📌 Prerequisites:
- Ceph cluster setup on the storage servers is complete
🔁 1. Install Rook-Ceph and Configure the Ceph Cluster
✅ 1. Create the rook-ceph Namespace
kubectl create namespace rook-ceph
✅ 2. Deploy the Rook-Ceph CRDs (Custom Resource Definitions), Common Resources, and Operator
wget https://raw.githubusercontent.com/rook/rook/v1.11.9/deploy/examples/crds.yaml
wget https://raw.githubusercontent.com/rook/rook/v1.11.9/deploy/examples/common.yaml
wget https://raw.githubusercontent.com/rook/rook/v1.11.9/deploy/examples/operator.yaml
kubectl apply -f crds.yaml
kubectl apply -f common.yaml
kubectl apply -f operator.yaml
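Before continuing, it helps to confirm that the operator pod reaches the Running state. A quick check, assuming the standard app=rook-ceph-operator label used in operator.yaml:
kubectl -n rook-ceph get pods -l app=rook-ceph-operator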
✅ 3. Deploy the Ceph Cluster (ceph-cluster.yaml)
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v16
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
  storage:
    useAllNodes: false
    useAllDevices: true
    nodes:
      - name: storage1
      - name: storage2
      - name: storage3
kubectl apply -f ceph-cluster.yaml
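Cluster creation takes several minutes while the MON, MGR, and OSD pods come up on the storage nodes. One way to follow progress and then check Ceph health is the Rook toolbox from the same release (rook-ceph-tools is the default deployment name in toolbox.yaml):
kubectl -n rook-ceph get pods -w
wget https://raw.githubusercontent.com/rook/rook/v1.11.9/deploy/examples/toolbox.yaml
kubectl apply -f toolbox.yaml
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status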
✅ 4. Create a Ceph Block Pool (ceph-blockpool.yaml)
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  replicated:
    size: 3
kubectl apply -f ceph-blockpool.yaml
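If the toolbox from the previous step is running, the new pool can be verified (the pool name replicapool comes from the manifest above):
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd pool ls detail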
✅ 5. Create a CephFS Shared Filesystem (ceph-filesystem.yaml)
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: my-cephfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true
kubectl apply -f ceph-filesystem.yaml
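Once the CR is applied, the MDS pods and the filesystem itself can be checked. A sketch, assuming the standard app=rook-ceph-mds label and the toolbox deployed earlier:
kubectl -n rook-ceph get pods -l app=rook-ceph-mds
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph fs status my-cephfs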
🛠 2. Configure Ceph StorageClasses in Kubernetes
✅ 1. Create the Block Storage StorageClass (storageclass-block.yaml)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  # CSI secrets created by the Rook operator; required for provisioning and mounting
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
kubectl apply -f storageclass-block.yaml
✅ 2. Create the CephFS Shared Storage StorageClass (storageclass-filesystem.yaml)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: my-cephfs
  pool: my-cephfs-data0
  # CSI secrets created by the Rook operator; required for provisioning and mounting
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
allowVolumeExpansion: true
kubectl apply -f storageclass-filesystem.yaml
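Optionally, one of the two StorageClasses can be marked as the cluster default so that PVCs omitting storageClassName still bind. For example, to make rook-ceph-block the default (this uses the standard Kubernetes is-default-class annotation, nothing Rook-specific):
kubectl patch storageclass rook-ceph-block \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'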
🛠 3. Request Ceph PVCs and Store Data in Kubernetes
✅ 1. Create a PVC (Block Storage) (pvc-block.yaml)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-ceph-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: rook-ceph-block
kubectl apply -f pvc-block.yaml
✅ 2. Create a PVC (CephFS Shared Storage) (pvc-filesystem.yaml)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: rook-cephfs
kubectl apply -f pvc-filesystem.yaml
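Both claims should reach the Bound state shortly after creation; if they stay Pending, check the CSI provisioner pods in the rook-ceph namespace. A quick check:
kubectl get pvc my-ceph-pvc my-cephfs-pvc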
✅ 3. Deploy a Pod That Uses the PVC (pod-ceph.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: my-ceph-app
spec:
  containers:
    - name: my-container
      image: nginx
      volumeMounts:
        - mountPath: "/data"
          name: my-storage
  volumes:
    - name: my-storage
      persistentVolumeClaim:
        claimName: my-ceph-pvc
kubectl apply -f pod-ceph.yaml
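The Pod above uses the ReadWriteOnce block PVC, so it can only be attached to one node at a time. To illustrate the ReadWriteMany case, here is a minimal sketch (hypothetical name my-cephfs-app, hypothetical file deployment-cephfs.yaml) of a Deployment whose two replicas share the CephFS PVC created earlier:
# hypothetical example manifest; adjust names to your environment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-cephfs-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-cephfs-app
  template:
    metadata:
      labels:
        app: my-cephfs-app
    spec:
      containers:
        - name: my-container
          image: nginx
          volumeMounts:
            - mountPath: "/shared"  # both replicas see the same CephFS directory
              name: shared-storage
      volumes:
        - name: shared-storage
          persistentVolumeClaim:
            claimName: my-cephfs-pvc
kubectl apply -f deployment-cephfs.yaml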
🔄 4. Final Verification
kubectl get cephcluster -n rook-ceph
kubectl get storageclass
kubectl get pvc
kubectl get pods
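Two extra checks that are often useful at this point: the dynamically provisioned PVs backing the claims, and overall pool usage from inside the cluster (assuming the toolbox deployed earlier):
kubectl get pv
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph df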