A freshly deployed Ceph cluster has only one default storage pool, the rbd pool.
## Creating a file interface (CephFS) cluster
1. Create a metadata pool
```
[root@mytest ~]
pool 'metadata' created
```
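The output above is Ceph's acknowledgement that the pool was created. The command itself is the standard pool-creation call; the pg count of 64 below is only an example and should be sized for your own cluster.

```
# create the metadata pool; the pg/pgp count of 64 is an illustrative value
ceph osd pool create metadata 64 64
```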
2. Create a data pool
```
[root@mytest ~]
pool 'data' created
```
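The data pool is created the same way; again the pg count is just an example.

```
# create the data pool; pg count is illustrative, size it for your cluster
ceph osd pool create data 64 64
```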
3. Create a filesystem
```
[root@mytest ~]
new fs with metadata pool 4 and data pool 5
```
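The pool ids 4 and 5 in the reply are the metadata and data pools created above. The filesystem is created by pointing `ceph fs new` at those two pools; the filesystem name `cephfs` below is an assumption, any name works.

```
# create a filesystem on top of the two pools; "cephfs" is an example name
ceph fs new cephfs metadata data
```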
4. Create an mds
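If the cluster was set up with ceph-deploy, the mds can be added from the admin node; the assumption below is that ceph-deploy is in use and the daemon runs on the mytest host.

```
# deploy an mds daemon on host mytest (assumes ceph-deploy is used)
ceph-deploy mds create mytest
```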
5. Check the status after deployment
```
[root@mytest ceph]
    cluster 7e5469ac-ae1f-494f-9913-901f60c0a76b
     health HEALTH_OK
     monmap e1: 1 mons at {mytest=192.168.0.76:6789/0}
            election epoch 1, quorum 0 mytest
     mdsmap e60: 1/1/1 up {0=mytest=up:active}
     osdmap e70: 2 osds: 2 up, 2 in
      pgmap v252: 104 pgs, 3 pools, 1962 bytes data, 20 objects
            75144 kB used, 30624 MB / 30697 MB avail
                 104 active+clean
  client io 108 B/s wr, 0 op/s
```
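The `mdsmap` line showing `1/1/1 up {0=mytest=up:active}` is the part to look for. The cluster status can be printed at any time with the usual status command:

```
# print overall cluster status, including the mdsmap line
ceph -s
```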
## Deleting the file interface (CephFS) cluster (removing the mds)
1. Stop the mds process
```
[root@mytest ceph]
=== mds.mytest ===
Stopping Ceph mds.mytest on mytest...kill 9638...done
```
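The message above comes from the sysvinit ceph service script. Stopping the daemon would look roughly like this; the exact form depends on how services are managed on the node.

```
# stop the mds daemon via the sysvinit script (one possible form)
/etc/init.d/ceph stop mds.mytest
```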
2. Mark the mds as failed
```
[root@mytest ceph]
failed mds.0
```
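The reply `failed mds.0` corresponds to failing rank 0 in the mds map, which is required before the filesystem can be removed.

```
# mark mds rank 0 as failed
ceph mds fail 0
```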
3. Delete the Ceph filesystem
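Removing the filesystem needs the safety flag; the name below is the example name assumed when the filesystem was created.

```
# remove the filesystem; the name must match the one given to "ceph fs new"
ceph fs rm cephfs --yes-i-really-mean-it
```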
4. Check the status after deletion
```
[root@mytest ceph]
    cluster 7e5469ac-ae1f-494f-9913-901f60c0a76b
     health HEALTH_OK
     monmap e1: 1 mons at {mytest=192.168.0.76:6789/0}
            election epoch 1, quorum 0 mytest
     osdmap e71: 2 osds: 2 up, 2 in
      pgmap v253: 104 pgs, 3 pools, 1962 bytes data, 20 objects
            75144 kB used, 30624 MB / 30697 MB avail
                 104 active+clean
```
You can see that the mdsmap line is now gone.