PG is short for placement group, Ceph's logical unit of storage. When data is written to Ceph it is first split into a series of objects; a hash of each object's name, combined with the replication level and the number of PGs, yields the target PG id. Depending on the replication level, each PG is then replicated and distributed across different OSDs. Think of a PG as a logical container holding many objects, with that container mapped onto a set of concrete OSDs. PGs exist to improve the performance and scalability of a Ceph storage system.
Without PGs it would be very hard to manage and track billions of objects spread over hundreds of OSDs; managing PGs is far cheaper for Ceph than managing every object individually. Each PG consumes a certain amount of system resources (CPU, memory, etc.), so the cluster's PG count should be worked out deliberately. Increasing the number of PGs generally gives a more even data distribution across OSDs, but any increase should be planned. A common recommendation is 50-100 PGs per OSD. As the amount of data grows and the cluster is expanded, the PG count needs to be adjusted as well; CRUSH handles redistributing the PGs.
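You can inspect this object-to-PG-to-OSD mapping directly with ceph osd map; the pool and object names below are only examples:
#assumes a pool named rbd and an object named myobject exist; prints the pool id,
#the PG the object hashes to, and the up/acting OSD set for that PG
ceph osd map rbd myobject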
How many PGs each pool should get depends on the number of OSDs, the replication count, and the number of pools. The formula given in the book Learning Ceph is essentially this calculation:
Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool_count
Round the result up to the nearest power of two. For example, with 160 OSDs in total, a replication count of 3, and 3 pools, the formula gives 1777.7; the nearest power of two above that is 2048, so each pool gets 2048 PGs.
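A minimal shell sketch of the same calculation, using the example numbers above (160 OSDs, 3 replicas, 3 pools):
#example values: 160 OSDs, 3 replicas, 3 pools
osds=160; replicas=3; pools=3
pgs=$(( osds * 100 / replicas / pools ))      #integer division -> 1777
#round up to the next power of two
p=1; while [ $p -lt $pgs ]; do p=$(( p * 2 )); done
echo "PGs per pool: $p"                       #prints 2048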
When changing a pool's PG count, change the PGP count at the same time. PGP is the PG count used specifically for placement, and it should stay equal to the PG count. If you raise a pool's pg_num, raise pgp_num to the same value, otherwise the cluster cannot rebalance properly. The following shows how to modify pg_num and pgp_num.
(1) Check the existing PG and PGP counts of the rbd pool:
$ ceph osd pool get rbd pg_num
pg_num: 128
$ ceph osd pool get rbd pgp_num
pgp_num: 128
(2) Check the pool's replication size with the following command:
$ ceph osd dump |grep size|grep rbd
pool 2 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash
rjenkins pg_num 128 pgp_num 128 last_change 45 flags hashpspool stripe_width 0
(3) Using the formula above with the OSD count, replication size, and pool count, work out the new PG count; assume it comes to 256.
(4) Change the rbd pool's pg_num and pgp_num to 256:
$ ceph osd pool set rbd pg_num 256
$ ceph osd pool set rbd pgp_num 256
(5) If there are other pools, adjust their pg_num and pgp_num in the same way so the load stays balanced.
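After raising pg_num and pgp_num the cluster starts rebalancing; a quick way to watch it settle (no assumptions beyond the rbd pool used above):
ceph -s
ceph -w                           #live stream of cluster events; Ctrl-C to stop
ceph osd pool get rbd pg_num      #confirm the new value took effect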
##############################################################
Summary of common Ceph commands
1. Create a custom pool
ceph osd pool create <poolname> <pg_num> <pgp_num>
pgp_num is the effective number of placement groups used for placement and is an optional argument. pg_num should be large enough; don't feel bound by the official formula, pick 256, 512, 1024, 2048, or 4096 according to the actual situation.
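For example (the pool name is arbitrary):
#creates a pool named mypool with 512 PGs and a matching number of PGPs
ceph osd pool create mypool 512 512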
2. Set a pool's replica count, minimum replica count, and maximum replica count
ceph osd pool set <poolname> size 2
ceph osd pool set <poolname> min_size 1
ceph osd pool set <poolname> max_size 10
If resources are limited and you don't want to keep 3 replicas, use these commands to change how many replicas a given pool stores.
Use get to read a pool's current replica count:
ceph osd pool get <poolname> size
3. Add OSDs
OSDs can be added with ceph-deploy:
ceph-deploy osd prepare monosd1:/mnt/ceph osd2:/mnt/ceph
ceph-deploy osd activate monosd1:/mnt/ceph osd2:/mnt/ceph
#equivalent to:
ceph-deploy osd create monosd1:/mnt/ceph osd2:/mnt/ceph
#alternatively, specify the corresponding journal path for each OSD when creating it
ceph-deploy osd create osd1:/cephmp1:/dev/sdf1 /cephmp2:/dev/sdf2
Or add one manually:
## Prepare disk first, create partition and format it
<insert parted oneliner>
mkfs.xfs -f /dev/sdd
mkdir /cephmp1
mount /dev/sdd /cephmp1
cd /cephmp1
ceph-osd -i 12 --mkfs --mkkey
ceph auth add osd.12 osd 'allow *' mon 'allow rwx' -i /cephmp1/keyring
#change the crushmap
ceph osd getcrushmap -o map
crushtool -d map -o map.txt
vim map.txt
crushtool -c map.txt -o map
ceph osd setcrushmap -i map
## Start it
/etc/init.d/ceph start osd.12
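Instead of hand-editing the decompiled map, the new OSD can also be added to CRUSH with a single command; the weight and host bucket below are placeholders to adjust to your layout:
#add osd.12 under its host bucket with an example weight of 1.0
ceph osd crush add osd.12 1.0 host=<hostname>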
4. Remove an OSD
First take the OSD out of service:
## Mark it out
ceph osd out 5
## Wait for data migration to complete (ceph -w), then stop it
service ceph -a stop osd.5
## Now it is marked out and down
Then delete it:
## If deleting from active stack, be sure to follow the above to mark it out and down
ceph osd crush remove osd.5
## Remove auth for disk
ceph auth del osd.5
## Remove disk
ceph osd rm 5
## Remove from ceph.conf and copy new conf to all hosts
5. View the OSD overview, OSD details, and CRUSH details
ceph osd tree
ceph osd dump --format=json-pretty
ceph osd crush dump --format=json-pretty
6. Get and modify CRUSH maps
## save current crushmap in binary
ceph osd getcrushmap -o crushmap.bin
## Convert to txt
crushtool -d crushmap.bin -o crushmap.txt
## Edit it and re-convert to binary
crushtool -c crushmap.txt -o crushmap.bin.new
## Inject into running system
ceph osd setcrushmap -i crushmap.bin.new
## If you've added a new ruleset and want new pools to use it by default, set it in ceph.conf:
osd pool default crush rule = 4
#you can also set the rule for an existing pool directly:
ceph osd pool set testpool crush_ruleset <ruleset_id>
-o=output; -d=decompile; -c=compile; -i=input
Remember these abbreviations and the commands above are easy to follow.
7. Add/remove a journal
For better performance, the Ceph journal is usually placed on a separate disk or partition:
First set the nodown flag on the cluster:
ceph osd set nodown
# Relevant ceph.conf options
-- existing setup --
[osd]
osd data = /srv/ceph/osd$id
osd journal = /srv/ceph/osd$id/journal
osd journal size = 512
# stop the OSD:
/etc/init.d/ceph stop osd.0
/etc/init.d/ceph stop osd.1
/etc/init.d/ceph stop osd.2
# Flush the journal:
ceph-osd -i 0 --flush-journal
ceph-osd -i 1 --flush-journal
ceph-osd -i 2 --flush-journal
# Now update ceph.conf - this is very important or you'll just recreate the journal on the same disk again
-- change to [filebased journal] --
[osd]
osd data = /srv/ceph/osd$id
osd journal = /srv/ceph/journal/osd$id/journal
osd journal size = 10000
-- change to [partition-based journal (journal in this case would be on /dev/sda2)] --
[osd]
osd data = /srv/ceph/osd$id
osd journal = /dev/sda2
osd journal size = 0
# Create new journal on each disk
ceph-osd -i 0 --mkjournal
ceph-osd -i 1 --mkjournal
ceph-osd -i 2 --mkjournal
# Done, now start all OSD again
/etc/init.d/ceph start osd.0
/etc/init.d/ceph start osd.1
/etc/init.d/ceph start osd.2
Remember to clear nodown afterwards:
ceph osd unset nodown
8. ceph cache pool
Preliminary testing shows that Ceph's cache pool does not perform well, sometimes even worse than having no cache pool at all; alternatives such as flashcache can be used to accelerate Ceph instead. (A teardown sketch follows the commands below.)
ceph osd tier add satapool ssdpool
ceph osd tier cache-mode ssdpool writeback
ceph osd pool set ssdpool hit_set_type bloom
ceph osd pool set ssdpool hit_set_count 1
## In this example 80-85% of the cache pool is equal to 280GB
ceph osd pool set ssdpool target_max_bytes $((280*1024*1024*1024))
ceph osd tier set-overlay satapool ssdpool
ceph osd pool set ssdpool hit_set_period 300
ceph osd pool set ssdpool cache_min_flush_age 300 # 5 minutes
ceph osd pool set ssdpool cache_min_evict_age 1800 # 30 minutes
ceph osd pool set ssdpool cache_target_dirty_ratio .4
ceph osd pool set ssdpool cache_target_full_ratio .8
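If the cache tier does not help, it can be torn down again; a rough sketch using the same pool names (on newer releases cache-mode forward may additionally require --yes-i-really-mean-it):
ceph osd tier cache-mode ssdpool forward        #stop admitting new objects into the cache
rados -p ssdpool cache-flush-evict-all          #flush/evict whatever is still cached
ceph osd tier remove-overlay satapool           #detach the overlay from the base pool
ceph osd tier remove satapool ssdpool           #remove the tier relationship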
9. View runtime configuration
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show
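The same admin socket can also read a single option instead of dumping everything; the option name below is just an example:
#query one option via the OSD's admin socket
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_journal_size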
10. Check cluster and monitor status
ceph health
ceph health detail
ceph status
ceph -s
#--format=json-pretty can be appended to these commands
ceph osd stat
ceph osd dump
ceph osd tree
ceph mon stat
ceph quorum_status
ceph mon dump
ceph mds stat
ceph mds dump
11. List all pools
ceph osd lspools
rados lspools
12. Check whether kvm and qemu support rbd
qemu-system-x86_64 -drive format=?
qemu-img -h | grep rbd
13. Inspect a specific pool and the objects in it
rbd ls testpool
rbd create testpool/test.img -s 1024 --image-format=2
rbd info testpool/test.img
rbd rm testpool/test.img
#count the image's data objects (the rb.0.11a1 prefix is the image's block_name_prefix)
rados -p testpool ls | grep ^rb.0.11a1 | wc -l
#import a file and inspect it
rados mkpool testpool
rados put -p testpool logo.png logo.png
ceph osd map testpool logo.png
rbd import logo.png testpool/logo.png
rbd info testpool/logo.png
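The object-count trick above depends on the image's block_name_prefix, which can be read straight from rbd info (image name as in the example above):
#prints the prefix shared by the image's data objects (e.g. rb.0.xxxx for format 1 images)
rbd info testpool/test.img | grep block_name_prefix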
14. Map/unmap and mount the created block device
ceph osd pool create testpool 256 256
rbd create testpool/test.img -s 1024 --image-format=2
rbd map testpool/test.img
rbd showmapped
mkfs.xfs /dev/rbd0
rbd unmap /dev/rbd0
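The actual mount/umount step is not shown above; a minimal sketch, assuming a mount point /mnt/rbd and unmapping only after the filesystem is unmounted:
mkdir -p /mnt/rbd
mount /dev/rbd0 /mnt/rbd
#... use the filesystem ...
umount /mnt/rbd
rbd unmap /dev/rbd0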
15. Create snapshots
#create
rbd snap create testpool/test.img@test.img-snap1
#list
rbd snap ls testpool/test.img
#roll back
rbd snap rollback testpool/test.img@test.img-snap1
#delete
rbd snap rm testpool/test.img@test.img-snap1
#purge all snapshots
rbd snap purge testpool/test.img
16. Calculate a reasonable PG count
The official recommendation is 50-100 PGs per OSD. total pgs = osds * 100 / replicas; for example, with 6 OSDs and 2 replicas, pgs = 6 * 100 / 2 = 300.
The PG count can only be increased, never decreased; after increasing pg_num, pgp_num must be increased along with it.
17. Pool operations
ceph osd pool create testpool 256 256
ceph osd pool delete testpool testpool --yes-i-really-really-mean-it
ceph osd pool rename testpool anothertestpool
ceph osd pool mksnap testpool testpool-snap
18. Clean up and format before reinstalling
ceph-deploy purge osd0 osd1
ceph-deploy purgedata osd0 osd1
ceph-deploy forgetkeys
ceph-deploy disk zap --fs-type xfs osd0:/dev/sdb1
19. Change the storage path of an OSD journal
#the noout flag prevents OSDs from being marked out (an out OSD effectively has weight 0 and data starts migrating)
ceph osd set noout
service ceph stop osd.1
ceph-osd -i 1 --flush-journal
mount /dev/sdc /journal
ceph-osd -i 1 --mkjournal /journal
service ceph start osd.1
ceph osd unset noout
20. XFS format and mount parameters
mkfs.xfs -n size=64k /dev/sdb1
#mount options for /etc/fstab
rw,noexec,nodev,noatime,nodiratime,nobarrier
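For example, a complete fstab entry might look like this (device and mount point are assumptions; adjust to your OSD layout):
/dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  rw,noexec,nodev,noatime,nodiratime,nobarrier  0 0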
21. Authentication configuration
[global]
auth cluster required = none
auth service required = none
auth client required = none
#before 0.56
auth supported = none
22. When pg_num is too small: migrate to a new pool and rename it
ceph osd pool create new-pool pg_num
rados cppool old-pool new-pool
ceph osd pool delete old-pool old-pool --yes-i-really-really-mean-it
ceph osd pool rename new-pool old-pool
#or simply increase the pool's pg_num directly
23. Push the config file
ceph-deploy --overwrite-conf config push mon1 mon2 mon3
24. Change config parameters online
ceph tell osd.* injectargs '--mon_clock_drift_allowed 1'
When using this command, be clear about whether the parameter belongs to the mon, mds, or osd.
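Note that mon_clock_drift_allowed is actually a monitor option, so the injection would normally target the monitors; a sketch (the mon id in the socket path is just an example):
#check which daemon type knows the option via its admin socket, then inject it there
ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node1.asok config show | grep clock_drift
ceph tell mon.* injectargs '--mon_clock_drift_allowed 1'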
Three-node Ceph cluster deployment (3 mon + 9 osd)
2017-7-14
I. Prepare the base environment
Three machines, each with: 1 GB RAM, 2 NICs, and three raw 20 GB SATA disks.
A "$" prompt means the same step is performed on all three nodes.
$ cat /etc/hosts
10.20.0.101 ceph-node1
10.20.0.102 ceph-node2
10.20.0.103 ceph-node3
$ yum install epel-release
II. Install ceph-deploy
[root@ceph-node1 ~]# ssh-keygen //on node1, set up passwordless SSH key login to the other nodes
[root@ceph-node1 ~]# ssh-copy-id ceph-node2
[root@ceph-node1 ~]# ssh-copy-id ceph-node3
[root@ceph-node1 ~]# yum install ceph-deploy -y
Install on all three nodes: yum install ceph
[root@ceph01 yum.repos.d]# cat ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
[root@ceph01 yum.repos.d]#
[root@ceph-node1 ~]# ceph-deploy new ceph-node1 ceph-node2 ceph-node3 //the new command defines the cluster (these nodes become the initial monitors) and writes the config file and keyring into the current directory
+++++++++++++++++++++++++++++++++++++++
[root@ceph-node1 ~]# ceph -v
ceph version 0.94.10 (b1e0532418e4631af01acbc0cedd426f1905f4af)
[root@ceph-node1 ~]# ceph-deploy mon create-initial //create the initial monitor
[root@ceph-node1 ~]# ceph status //it is normal for the cluster to be in an error state at this point
cluster ea54af9f-f286-40b2-933d-9e98e7595f1a
health HEALTH_ERR
[root@ceph-node1 ~]# systemctl start ceph
[root@ceph-node1 ~]# systemctl enable ceph
III. Create OSDs and add them to the Ceph cluster
[root@ceph-node1 ~]# ceph-deploy disk list ceph-node1 //list the disks ceph-node1 already has; oddly sdb, sdc, and sdd are not shown even though they do exist
//use the zap command below with care: it destroys any partition table and data already on the disk. ceph-node1 is a hostname; it could just as well be ceph-node2
[root@ceph-node1 ~]# ceph-deploy disk zap ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd
[root@ceph-node1 ~]# ceph-deploy osd create ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd //wipes each disk, creates a new filesystem (XFS by default), uses the first partition as the data partition and the second as the journal partition, and adds them as OSDs
[root@ceph-node1 ~]# ceph status //the cluster is still not healthy; more nodes need to be added so it can form distributed, redundant object storage, and only then will it report healthy
cluster ea54af9f-f286-40b2-933d-9e98e7595f1a
health HEALTH_WARN
64 pgs stuck inactive
64 pgs stuck unclean
monmap e1: 1 mons at {ceph-node1=10.20.0.101:6789/0}
election epoch 2, quorum 0 ceph-node1
osdmap e6: 3 osds: 0 up, 0 in
pgmap v7: 64 pgs, 1 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
64 creating
IV. Scale the Ceph cluster across nodes: add monitors and OSDs
Note: a Ceph storage cluster needs at least one running monitor; for availability you need an odd number of monitors, e.g. 3 or 5, so they can form a quorum.
(1) Deploy monitors on ceph-node2 and ceph-node3, but run the commands on ceph-node1!
[root@ceph-node1 ~]# ceph-deploy mon add ceph-node2
[root@ceph-node1 ~]# ceph-deploy mon add ceph-node3
++++++++++++++++++++++++++
Error from: [root@ceph-node1 ~]# ceph-deploy mon create ceph-node2
[ceph-node3][WARNIN] Executing /sbin/chkconfig ceph on
[ceph-node3][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph-node3][WARNIN] monitor: mon.ceph-node3, might not be running yet
[ceph-node3][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node3.asok mon_status
[ceph-node2][WARNIN] neither `public_addr` nor `public_network` keys are defined for monitors
Troubleshooting: (1) Given the chkconfig call on CentOS 7, I suspected node1 had not remotely started the ceph service on node2, and that starting it manually on node2 would fix it.
[root@ceph-node2 ~]# systemctl status ceph
● ceph.service - LSB: Start Ceph distributed file system daemons at boot time
Loaded: loaded (/etc/rc.d/init.d/ceph)
Active: inactive (dead)
It still failed after enabling the service.
(2) It turned out the book was simply wrong: once monitors already exist, additional monitors must be added with mon add, not mon create.
++++++++++++++++++++++++++++++++
[root@ceph-node1 ~]# ceph status
monmap e3: 3 mons at {ceph-node1=10.20.0.101:6789/0,ceph-node2=10.20.0.102:6789/0,ceph-node3=10.20.0.103:6789/0}
election epoch 8, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3
(2) Add more OSD nodes, again running the commands on ceph-node1.
[root@ceph-node1 ~]# ceph-deploy disk list ceph-node2 ceph-node3
//make sure the device names are correct, otherwise it is easy to format the system disk by mistake!
[root@ceph-node1 ~]# ceph-deploy disk zap ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
[root@ceph-node1 ~]# ceph-deploy disk zap ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
//in practice, the osd create command below is better split into two steps, prepare and activate; why exactly is unclear
[root@ceph-node1 ~]# ceph-deploy osd create ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
[root@ceph-node1 ~]# ceph-deploy osd create ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
++++++++++++++++++++++++++++++++++++++++++++
Error:
[ceph-node3][WARNIN] ceph-disk: Error: Command '['/usr/sbin/sgdisk', '--new=2:0:5120M', '--change-name=2:ceph journal', '--partition-guid=2:fa28bc46-55de-464a-8151-9c2b51f9c00d', '--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--', '/dev/sdd']' returned non-zero exit status 4
[ceph-node3][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sdd
[ceph_deploy][ERROR ] GenericError: Failed to create 3 OSDs
Unresolved at this point: it turned out I had typed node3 instead of node2 in the osd create command, which made things harder from here on.
+++++++++++++++++++++++++++++
[root@ceph-node1 ~]# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.08995 root default
-2 0.02998 host ceph-node1
0 0.00999 osd.0 up 1.00000 1.00000
1 0.00999 osd.1 up 1.00000 1.00000
2 0.00999 osd.2 up 1.00000 1.00000
-3 0.02998 host ceph-node2
3 0.00999 osd.3 down 0 1.00000
4 0.00999 osd.4 down 0 1.00000
8 0.00999 osd.8 down 0 1.00000
-4 0.02998 host ceph-node3
5 0.00999 osd.5 down 0 1.00000
6 0.00999 osd.6 down 0 1.00000
7 0.00999 osd.7 down 0 1.00000
Six OSDs are in the down state, and ceph-deploy osd activate still fails; judging from the earlier errors, they had already failed at osd create time.
Workaround: since the cluster has just been deployed and holds no data yet, the broken OSDs can simply be removed and recreated.
Reference: http://www.cnblogs.com/zhangzhengyan/p/5839897.html
[root@ceph-node1 ~]# ceph-deploy disk zap ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
[root@ceph-node1 ~]# ceph-deploy disk zap ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
(1) Remove osd.4 (and the other affected OSD ids) from the CRUSH map:
[root@ceph-node1 ~]# ceph osd crush remove osd.3
[root@ceph-node1 ~]# ceph osd crush remove osd.4
[root@ceph-node1 ~]# ceph osd crush remove osd.8
[root@ceph-node1 ~]# ceph osd crush remove osd.5
[root@ceph-node1 ~]# ceph osd crush remove osd.6
[root@ceph-node1 ~]# ceph osd crush remove osd.7
(2)[root@ceph-node1 ~]# ceph osd rm 3
[root@ceph-node1 ~]# ceph osd rm 4
[root@ceph-node1 ~]# ceph osd rm 5
[root@ceph-node1 ~]# ceph osd rm 6
[root@ceph-node1 ~]# ceph osd rm 7
[root@ceph-node1 ~]# ceph osd rm 8
[root@ceph-node1 ~]# ceph osd tree //finally cleaned up
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.02998 root default
-2 0.02998 host ceph-node1
0 0.00999 osd.0 up 1.00000 1.00000
1 0.00999 osd.1 up 1.00000 1.00000
2 0.00999 osd.2 up 1.00000 1.00000
-3 0 host ceph-node2
-4 0 host ceph-node3
Logging in to node2 and node3 shows that sdb/sdc/sdd have all been wiped (only the GPT label remains), so rerun ceph-deploy osd create.
And it still fails: [ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sdd
++++++++++++++++++++++++++++++++++
Error: (1) activating node2's OSDs remotely from node1 fails. prepare plus activate can replace the osd create step.
[root@ceph-node1 ~]# ceph-deploy osd prepare ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
[root@ceph-node1 ~]# ceph-deploy osd activate ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
[ceph-node2][WARNIN] ceph-disk: Cannot discover filesystem type: device /dev/sdb: Line is truncated:
[ceph-node2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init sysvinit --mount /dev/sdb
Fix: it is a partition/permission issue; run ceph-disk activate-all on the node that reports the error.
(2) The command was meant for node2's sdb, yet ceph osd tree shows it actually acted on node3's sdb, so of course it errors out:
Starting Ceph osd.4 on ceph-node2...
Running as unit ceph-osd.4.1500013086.674713414.service.
Error EINVAL: entity osd.3 exists but key does not match
[root@ceph-node1 ~]# ceph osd tree
3 0 osd.3 down 0 1.00000
Fix: [root@ceph-node1 ~]# ceph auth del osd.3
[root@ceph-node1 ~]# ceph osd rm 3
lsblk on node2 shows sdb is not right (no OSD mounted on it), so:
[root@ceph-node1 ~]# ceph-deploy disk zap ceph-node2:sdb
[root@ceph-node1 ~]# ceph-deploy osd prepare ceph-node2:sdb
[root@ceph-node1 ~]# ceph osd tree //at least the OSD now lands on node2 rather than node3
-3 0.02998 host ceph-node2
3 0.00999 osd.3 down 0 1.00000
[root@ceph-node1 ~]# ceph-deploy osd activate ceph-node2:sdb //this is bound to fail; per the experience above, it has to be activated on node2 itself
[root@ceph-node2 ~]# ceph-disk activate-all
[root@ceph-node1 ~]# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.08995 root default
-2 0.02998 host ceph-node1
0 0.00999 osd.0 up 1.00000 1.00000
1 0.00999 osd.1 up 1.00000 1.00000
2 0.00999 osd.2 up 1.00000 1.00000
-3 0.02998 host ceph-node2
4 0.00999 osd.4 up 1.00000 1.00000
5 0.00999 osd.5 up 1.00000 1.00000
3 0.00999 osd.3 up 1.00000 1.00000
-4 0.02998 host ceph-node3
6 0.00999 osd.6 up 1.00000 1.00000
7 0.00999 osd.7 up 1.00000 1.00000
8 0.00999 osd.8 up 1.00000 1.00000
Finally resolved. A single mistyped device name cost all this time; be more careful!
=========================================
Extra:
[root@ceph-node1 ~]# lsblk //partitions of up OSDs are mounted under /var/lib/ceph/osd
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 40G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 39.5G 0 part
├─centos-root 253:0 0 38.5G 0 lvm /
└─centos-swap 253:1 0 1G 0 lvm [SWAP]
sdb 8:16 0 20G 0 disk
├─sdb1 8:17 0 15G 0 part /var/lib/ceph/osd/ceph-0
└─sdb2 8:18 0 5G 0 part
sdc 8:32 0 20G 0 disk
├─sdc1 8:33 0 15G 0 part /var/lib/ceph/osd/ceph-1
└─sdc2 8:34 0 5G 0 part
sdd 8:48 0 20G 0 disk
├─sdd1 8:49 0 15G 0 part /var/lib/ceph/osd/ceph-2
└─sdd2 8:50 0 5G 0 part
sr0 11:0 1 1024M 0 rom