[Ceph 18.2.0] Installation on Rocky 9.4

2024. 7. 25. 11:13·🔹Storage
  • OS: Rocky 9.4
  • Ceph: version 18.2.0
  • Reference link: https://kifarunix.com/how-to-deploy-ceph-storage-cluster-on-rocky-linux/

  • Update the kernel to the latest version and check it
# uname -r
5.14.0-427.24.1.el9_4.x86_64
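
If the kernel has not been updated yet, it can be brought current with dnf before continuing; a minimal sketch:

# update the kernel and reboot into the new version
dnf -y update kernel
reboot
# after the reboot, confirm the running kernel again
uname -r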

 

  • Perform the basic Ceph (cephadm) setup and verification with the commands below
CEPH_RELEASE=18.2.0
curl -sLO https://download.ceph.com/rpm-${CEPH_RELEASE}/el9/noarch/cephadm
chmod +x ./cephadm
./cephadm add-repo --release reef
./cephadm install
which cephadm
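
After the install step, cephadm should be available on the PATH; a quick sanity check:

# confirm the packaged cephadm binary works and reports its version
cephadm version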

 

  • Bootstrap the cluster
cephadm bootstrap --mon-ip {IP of the cephadm host}
e.g.) cephadm bootstrap --mon-ip 192.168.98.131
---------------------------------------------
Creating directory /etc/ceph for ceph.conf
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 4.9.4 is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 24055b1a-4a2b-11ef-a040-000c29075f77
Verifying IP 192.168.98.131 port 3300 ...
Verifying IP 192.168.98.131 port 6789 ...
Mon IP `192.168.98.131` is in CIDR network `192.168.98.0/24`
Mon IP `192.168.98.131` is in CIDR network `192.168.98.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v18...
Ceph version: ceph version 18.2.4 (e7ad5345525c7aa95470c26863873b581076945d) reef (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
firewalld ready
Enabling firewalld service ceph-mon in current zone...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting public_network to 192.168.98.0/24 in mon config section
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 0.0.0.0:9283 ...
Verifying port 0.0.0.0:8765 ...
Verifying port 0.0.0.0:8443 ...
firewalld ready
Enabling firewalld service ceph in current zone...
firewalld ready
Enabling firewalld port 9283/tcp in current zone...
Enabling firewalld port 8765/tcp in current zone...
Enabling firewalld port 8443/tcp in current zone...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host s1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying ceph-exporter service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
firewalld ready
Ceph Dashboard is now available at:

             URL: https://s1:8443/
            User: admin
        Password: dssrajmghr

Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/24055b1a-4a2b-11ef-a040-000c29075f77/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

        sudo /usr/sbin/cephadm shell --fsid 24055b1a-4a2b-11ef-a040-000c29075f77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

        sudo /usr/sbin/cephadm shell

Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/en/latest/mgr/telemetry/

Bootstrap complete.
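
Once the bootstrap is complete, the cluster state can be checked from the cephadm shell (only a single mon/mgr and no OSDs exist yet, so a HEALTH_WARN state is expected at this stage):

# open a containerized shell with the admin keyring and print cluster status
cephadm shell -- ceph -s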

 

  • Verify the result
    • Check that the Ceph dashboard page is reachable

Confirm that you can log in with the URL / User / Password printed above.
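
The dashboard usually requires the generated password to be changed on first login; it can also be changed from the CLI if needed (a sketch, the password and file path below are only examples):

# write the new password to a temporary file and apply it to the dashboard admin user
echo 'NewSecurePass123!' > /tmp/dashboard_pass
ceph dashboard ac-user-set-password admin -i /tmp/dashboard_pass
rm -f /tmp/dashboard_pass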


Enable Ceph CLI

  • The following three steps are required
  • Configure the repository
cephadm add-repo --release reef

 

  • Install the package
yum install -y ceph-common
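
With ceph-common installed, the native ceph CLI on the bootstrap node can talk to the cluster directly, since ceph.conf and the admin keyring were already written to /etc/ceph during bootstrap:

# verify the CLI version and query the running cluster
ceph -v
ceph -s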

 

  • Copy the SSH key to the other servers
ssh-copy-id -f -i /etc/ceph/ceph.pub root@s2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@s3

 


/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
The authenticity of host 's3 (192.168.98.130)' can't be established.
ED25519 key fingerprint is SHA256:60Wg7BiVby0jxECnJWPWVV6YaTSpjWZubRe2AZTN1RM.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:1: 192.168.98.129
    ~/.ssh/known_hosts:4: 192.168.98.130
    ~/.ssh/known_hosts:5: s2
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
root@s3's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@s3'"
and check to make sure that only the key(s) you wanted were added.

  • Add hosts

ceph orch host add s2

Added host 's2' with addr '192.168.98.129'

ceph orch host add s3

Added host 's3' with addr '192.168.98.130'
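
Hosts can also be given labels after being added; for example, labeling another node _admin tells cephadm to distribute ceph.conf and the admin keyring to it as well (a sketch):

# give s2 the _admin label so it also receives the client config and keyring
ceph orch host label add s2 _admin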

  • List hosts

ceph orch host ls

HOST  ADDR            LABELS  STATUS
s1    192.168.98.131  _admin
s2    192.168.98.129
s3    192.168.98.130
3 hosts in cluster
  • List devices
ceph orch device ls

 

  • Disks whose AVAILABLE column shows Yes can be used as OSDs.
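
Instead of consuming every available device at once, a single disk can also be attached explicitly (a sketch; the host name and device path below are examples for this lab):

# create one OSD on a specific device of a specific host
ceph orch daemon add osd s2:/dev/sdb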

Attach all OSDs at once

ceph orch apply osd --all-available-devices --method raw

# ceph orch apply osd --all-available-devices --method raw
Inferring fsid 24055b1a-4a2b-11ef-a040-000c29075f77
Inferring config /var/lib/ceph/24055b1a-4a2b-11ef-a040-000c29075f77/mon.s1/config
Using ceph image with id '2bc0b0f4375d' and tag 'v18' created on 2024-07-23 22:19:35 +0000 UTC
quay.io/ceph/ceph@sha256:6ac7f923aa1d23b43248ce0ddec7e1388855ee3d00813b52c3172b0b23b37906
Scheduled osd.all-available-devices update...
[root@s1 ~]# 2024-07-25T03:25:19.977306+0000 mon.s1 [WRN] Health check update: OSD count 2 < osd_pool_default_size 3 (TOO_FEW_OSDS)
2024-07-25T03:25:21.530548+0000 mon.s1 [INF] Health check cleared: TOO_FEW_OSDS (was: OSD count 2 < osd_pool_default_size 3)
2024-07-25T03:25:21.530568+0000 mon.s1 [INF] Cluster is now healthy
2024-07-25T03:25:37.184392+0000 mon.s1 [INF] osd.0 [v2:192.168.98.130:6800/2806722205,v1:192.168.98.130:6801/2806722205] boot
2024-07-25T03:25:45.548514+0000 mon.s1 [INF] osd.2 [v2:192.168.98.129:6800/716403286,v1:192.168.98.129:6801/716403286] boot
2024-07-25T03:25:45.548588+0000 mon.s1 [INF] osd.1 [v2:192.168.98.131:6802/218946262,v1:192.168.98.131:6803/218946262] boot
2024-07-25T03:25:49.626715+0000 mon.s1 [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2024-07-25T03:25:51.466964+0000 mon.s1 [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2024-07-25T03:25:51.467024+0000 mon.s1 [INF] Cluster is now healthy
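
Once the OSDs have booted, the result can be verified with:

# list the OSDs per host and confirm overall cluster health
ceph osd tree
ceph -s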

 


 

 

  • Check the ports in use by each daemon
# netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 192.168.98.131:3300     0.0.0.0:*               LISTEN      17675/ceph-mon
tcp        0      0 0.0.0.0:6808            0.0.0.0:*               LISTEN      76122/ceph-osd
tcp        0      0 0.0.0.0:6809            0.0.0.0:*               LISTEN      76122/ceph-osd
tcp        0      0 0.0.0.0:6804            0.0.0.0:*               LISTEN      76122/ceph-osd
tcp        0      0 0.0.0.0:6805            0.0.0.0:*               LISTEN      76122/ceph-osd
tcp        0      0 0.0.0.0:6806            0.0.0.0:*               LISTEN      76122/ceph-osd
tcp        0      0 0.0.0.0:6807            0.0.0.0:*               LISTEN      76122/ceph-osd
tcp        0      0 0.0.0.0:6800            0.0.0.0:*               LISTEN      17868/ceph-mgr
tcp        0      0 0.0.0.0:6801            0.0.0.0:*               LISTEN      17868/ceph-mgr
tcp        0      0 0.0.0.0:6802            0.0.0.0:*               LISTEN      76122/ceph-osd
tcp        0      0 0.0.0.0:6803            0.0.0.0:*               LISTEN      76122/ceph-osd
tcp        0      0 0.0.0.0:9926            0.0.0.0:*               LISTEN      25737/ceph-exporter
tcp        0      0 192.168.98.131:6789     0.0.0.0:*               LISTEN      17675/ceph-mon
tcp        0      0 192.168.98.131:8765     0.0.0.0:*               LISTEN      17868/ceph-mgr
tcp        0      0 192.168.98.131:7150     0.0.0.0:*               LISTEN      17868/ceph-mgr
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      5475/sshd: /usr/sbi
tcp6       0      0 :::9100                 :::*                    LISTEN      26233/node_exporter
tcp6       0      0 :::9093                 :::*                    LISTEN      69583/alertmanager
tcp6       0      0 :::9094                 :::*                    LISTEN      69583/alertmanager
tcp6       0      0 :::9095                 :::*                    LISTEN      69909/prometheus
tcp6       0      0 :::3000                 :::*                    LISTEN      29583/grafana
tcp6       0      0 :::8443                 :::*                    LISTEN      17868/ceph-mgr
tcp6       0      0 :::22                   :::*                    LISTEN      5475/sshd: /usr/sbi
tcp6       0      0 :::9283                 :::*                    LISTEN      17868/ceph-mgr
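
If net-tools is not installed on a minimal Rocky 9 system, ss from iproute provides the same listening-socket information:

# equivalent check without net-tools
ss -tnlp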

 


If something goes wrong along the way:

 

https://sungbin-park.tistory.com/6

 

The linked post, "Ceph 서버 초기화 방법" (How to reset a Ceph server), explains how to completely remove Ceph from the storage servers; it was written against Ceph 17.2.6.

 

  • When that happens, resetting Ceph by following the link above is the best approach; a cephadm-based alternative is sketched below.
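
For a cephadm-deployed cluster like this one, the built-in removal command can also tear the deployment down (a sketch; run it on every cluster host and substitute the fsid printed during bootstrap):

# stop all daemons, delete cluster data, and zap the OSD disks on this host
cephadm rm-cluster --force --zap-osds --fsid 24055b1a-4a2b-11ef-a040-000c29075f77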