1. Introduction
Whether you want to provide Ceph Object Storage and/or Ceph Block Device services to Cloud Platforms, deploy a Ceph Filesystem, or use Ceph for another purpose, all Ceph Storage Cluster deployments begin with setting up each Ceph Node, your network, and the Ceph Storage Cluster. A Ceph Storage Cluster requires at least one Ceph Monitor, Ceph Manager (required since the Luminous release), and Ceph OSD (Object Storage Daemon). The Ceph Metadata Server is also required when running Ceph Filesystem clients.
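As a first step after deployment, it helps to verify that a client can actually reach the cluster. Below is a minimal sketch using the official python-rados bindings, assuming the default config path /etc/ceph/ceph.conf and a client.admin keyring in the usual location (adjust for your environment):

```python
import rados

# Connect as client.admin using the default config file path
# (conffile/keyring locations are assumptions; adjust for your setup).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

print("cluster fsid:", cluster.get_fsid())   # unique cluster ID
print("pools:", cluster.list_pools())        # logical storage pools

cluster.shutdown()
```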
2. The CRUSH Algorithm
Ceph stores data as objects within logical storage pools. Using the CRUSH algorithm, Ceph calculates which placement group (PG) should contain the object, and further calculates which Ceph OSD Daemon should store the placement group. The CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and recover dynamically.
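The mapping happens in two steps: a stable hash of the object name selects a placement group within the pool, then CRUSH maps that PG onto an ordered set of OSDs using the cluster map and placement rules. The sketch below is purely illustrative: it substitutes MD5 for Ceph's rjenkins hash and a toy rotation for the real CRUSH traversal, just to show the shape of the computation.

```python
import hashlib

def object_to_pg(object_name: str, pg_num: int) -> int:
    # Step 1: stable hash of the object name, folded onto the pool's PG
    # count. (Illustrative stand-in for Ceph's actual rjenkins hash.)
    h = int.from_bytes(hashlib.md5(object_name.encode()).digest()[:4], "little")
    return h % pg_num

def pg_to_osds(pg_id: int, osd_ids: list[int], replicas: int = 3) -> list[int]:
    # Step 2: map the PG to an ordered set of OSDs. Real CRUSH walks the
    # cluster map and placement rules; this toy version just rotates
    # through the OSD list deterministically.
    start = pg_id % len(osd_ids)
    return [osd_ids[(start + i) % len(osd_ids)] for i in range(replicas)]

pg = object_to_pg("myobject", pg_num=128)
print("pg:", pg, "-> osds:", pg_to_osds(pg, osd_ids=[0, 1, 2, 3, 4]))
```

On a live cluster, `ceph osd map <pool> <object>` shows the PG and the acting OSD set that CRUSH actually computes for a given object.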
3. Components
3.1 Monitors
- Monitors: A Ceph Monitor (ceph-mon) maintains maps of the cluster state, including the monitor map, manager map, the OSD map, and the CRUSH map. These maps are critical cluster state required for Ceph daemons to coordinate with each other. Monitors are also responsible for managing authentication between daemons and clients. At least three monitors are normally required for redundancy and high availability.
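Those maps can be inspected from any client. A hedged sketch using python-rados's mon_command to dump the monitor map (the JSON field names vary somewhat across Ceph releases):

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# mon_command takes a JSON-encoded command; "mon dump" returns the monmap.
ret, outbuf, errs = cluster.mon_command(
    json.dumps({"prefix": "mon dump", "format": "json"}), b'')
monmap = json.loads(outbuf)
for mon in monmap["mons"]:
    print(mon.get("name"), mon.get("addr"))

cluster.shutdown()
```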
3.2 Managers
- Managers: A Ceph Manager daemon (ceph-mgr) is responsible for keeping track of runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load. The Ceph Manager daemons also host python-based plugins to manage and expose Ceph cluster information, including a web-based dashboard and REST API. At least two managers are normally required for high availability.
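The python-based plugins (manager modules) can be listed the same way; the dashboard mentioned above shows up here once enabled. Field names below are from recent releases and may differ in yours:

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# "mgr module ls" reports which manager modules are enabled/available.
ret, outbuf, errs = cluster.mon_command(
    json.dumps({"prefix": "mgr module ls", "format": "json"}), b'')
modules = json.loads(outbuf)
print("enabled modules:", modules.get("enabled_modules"))

cluster.shutdown()
```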
3.3 OSDs
- Ceph OSDs: A Ceph OSD (object storage daemon, ceph-osd) stores data, handles data replication, recovery, rebalancing, and provides some monitoring information to Ceph Monitors and Managers by checking other Ceph OSD Daemons for a heartbeat. At least 3 Ceph OSDs are normally required for redundancy and high availability.
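The OSD liveness established by those heartbeats ends up in the OSD map, which a client can read back. A sketch along the same lines (again, the JSON layout may vary by release):

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# "osd tree" lists hosts and OSDs with their up/down status.
ret, outbuf, errs = cluster.mon_command(
    json.dumps({"prefix": "osd tree", "format": "json"}), b'')
for node in json.loads(outbuf)["nodes"]:
    if node.get("type") == "osd":
        print(node.get("name"), node.get("status"))

cluster.shutdown()
```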
3.4 MDSs
- MDSs: A Ceph Metadata Server (MDS, ceph-mds) stores metadata on behalf of the Ceph Filesystem (i.e., Ceph Block Devices and Ceph Object Storage do not use MDS). Ceph Metadata Servers allow POSIX file system users to execute basic commands (like ls, find, etc.) without placing an enormous burden on the Ceph Storage Cluster.
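With an MDS running and a CephFS filesystem created, the metadata operations behind commands like ls can be exercised through the python-cephfs bindings. A sketch assuming the default config path; the exact constructor keywords differ between releases:

```python
import cephfs

# Assumes a CephFS filesystem exists and at least one MDS is active;
# the conffile path is the default and may differ in your setup.
fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
fs.mount()

fs.mkdir('/demo', 0o755)        # directory metadata handled by the MDS
print(fs.stat('/demo'))         # stat is served from MDS metadata
fs.rmdir('/demo')

fs.unmount()
fs.shutdown()
```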