Cloud Storage with Red Hat Ceph Storage
CL260
Course Objectives and Structure
Schedule
Introduction
Introducing Red Hat Ceph Storage Architecture
Deploying Red Hat Ceph Storage
Configuring a Red Hat Ceph Storage Cluster
Creating Object Storage Cluster Components
Creating and Customizing Storage Maps
Providing Block Storage Using RADOS Block Devices
Expanding Block Storage Operations
Providing Object Storage Using a RADOS Gateway
Accessing Object Storage Using a REST API
Providing File Storage with CephFS
Managing a Red Hat Ceph Storage Cluster
Tuning and Troubleshooting Red Hat Ceph Storage
Managing Cloud Platforms with Red Hat Ceph Storage
Comprehensive Review
Chapter 1: Introducing Red Hat Ceph Storage Architecture
Goal: Describe Red Hat Ceph Storage architecture, including data organization, distribution, and client access methods.
Objectives:
- Describe the personas in the cloud storage ecosystem that characterize the use cases and tasks taught in this course.
- Describe the Red Hat Ceph Storage architecture, introduce the Object Storage Cluster, and describe the choices in data access methods.
- Describe and compare the use cases for the various management interfaces provided for Red Hat Ceph Storage.
Describing Storage Personas
Introducing Cloud Storage Personas
Cloud Storage Personas in This Course
Quiz: Describing Storage Personas
Describing Red Hat Ceph Storage Architecture
Introducing the Ceph Cluster Architecture
Ceph Storage Back-end Components
Ceph components
Data Distribution and Organization in Ceph
Ceph pool data protection methods
Objects in a Ceph pool stored in placement groups
Guided Exercise: Describing Red Hat Ceph Storage Architecture
Describing Red Hat Ceph Storage Management Interfaces
Introducing Ceph Interfaces
Cephadm interaction with other services
Exploring Ceph Management Interfaces
The Ceph Orchestrator
Ceph Dashboard GUI status screen
Guided Exercise: Describing Red Hat Ceph Storage Management Interfaces
Summary
- The following services provide the foundation for a Ceph storage cluster:
- Monitors (MONs) maintain cluster maps.
- Object Storage Devices (OSDs) store and manage objects.
- Managers (MGRs) track and expose cluster runtime metrics.
- Metadata Servers (MDSes) store metadata that CephFS uses to efficiently run POSIX commands for clients.
- RADOS (Reliable Autonomic Distributed Object Store) is the back end for storage in the Ceph cluster, a self-healing and self-managing object store.
- RADOS provides four access methods to storage: the librados native API, the object-based RADOS Gateway, the RADOS Block Device (RBD), and the distributed file-based CephFS file system.
- A Placement Group (PG) aggregates a set of objects into a hash bucket.
The CRUSH algorithm maps the hash buckets to a set of OSDs for storage.
Summary (continued)
- Pools are logical partitions of the Ceph storage that are used to store object data.
Each pool is a name tag for grouping objects.
A pool groups objects for storage by using placement groups.
- Red Hat Ceph Storage provides two interfaces, a command line and a Dashboard GUI, for managing clusters.
Both interfaces use the same cephadm module to perform operations and to interact with cluster services.
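The following is a minimal sketch of inspecting these components from the command line; the pool and object names (mypool, testobj) are examples only.
# Enter the containerized management shell
cephadm shell
# Show cluster health and the MON, MGR, and OSD services
ceph status
# List the pools, then show which PG and set of OSDs an object would map to
ceph osd lspools
ceph osd map mypool testobj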
Chapter 2: Deploying Red Hat Ceph Storage
Goal: Deploy a new Red Hat Ceph Storage cluster and expand the cluster capacity.
Objectives:
Deploying Red Hat Ceph Storage
Preparing for Cluster Deployment
Setting up the Admin Node
Guided Exercise: Deploying Red Hat Ceph Storage
Expanding Red Hat Ceph Storage Cluster Capacity
Expanding Your Ceph Cluster Capacity
Guided Exercise: Expanding Red Hat Ceph Storage Cluster Capacity
Deploying Red Hat Ceph Storage
Summary
- The cephadm utility has two main components:
- The cephadm shell runs a bash shell within a specialized management container.
Use the cephadm shell to perform cluster deployment tasks and cluster management tasks after the cluster is installed.
- The cephadm orchestrator provides a command-line interface to the orchestrator ceph-mgr modules.
The orchestrator coordinates configuration changes that must be performed cooperatively across multiple nodes and services in a storage cluster.
- As of version 5.0, all Red Hat Ceph Storage cluster services are containerized.
- Preparing for a new cluster deployment requires planning cluster service placement and distributing SSH keys to nodes.
Summary (continued)
- Use cephadm bootstrap to create a new cluster. The bootstrap process:
- Installs and starts the MON and MGR daemons on the bootstrap node.
- Writes a copy of the cluster public SSH key and adds the key to the root user's authorized_keys file.
- Writes a minimal configuration file to communicate with the new cluster.
- Writes a copy of the administrative secret key to the key ring file.
- Deploys a basic monitoring stack.
- Use the cephadm-preflight.yml playbook to verify cluster host prerequisites.
- Assign labels to the cluster hosts to identify the daemons running on each host.
The _admin label is reserved for administrative nodes.
- Expand cluster capacity by adding OSD nodes to the cluster or additional storage space to existing OSD nodes.
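A hedged sketch of the bootstrap and expansion workflow; the IP address, host names, and device path are examples only.
# Bootstrap a new cluster on the admin node
cephadm bootstrap --mon-ip 192.168.0.10
# Add a second host and label it for administration
ceph orch host add serverb 192.168.0.11
ceph orch host label add serverb _admin
# Expand capacity: consume all eligible devices, or add one specific device as an OSD
ceph orch apply osd --all-available-devices
ceph orch daemon add osd serverb:/dev/vdb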
Chapter 3: Configuring a Red Hat Ceph Storage Cluster
Goal: Manage the Red Hat Ceph Storage configuration, including the primary settings, the use of monitors, and the cluster network layout.
Objectives:
- Identify and configure the primary settings for the overall Red Hat Ceph Storage cluster.
- Describe the purpose of cluster monitors and the quorum procedures, query the monitor map, manage the configuration database, and describe Cephx.
- Describe the purpose for each of the cluster networks, and view and modify the network configuration.
Managing Cluster Configuration Settings
Ceph Cluster Configuration Overview
Modifying the Cluster Configuration File
Using the Centralized Configuration Database
Cluster Bootstrap Options
Using Service Configuration Files
Overriding Configuration Settings at Runtime
Guided Exercise: Managing Cluster Configuration Settings
Configuring Cluster Monitors
Configuring Ceph Monitors
Viewing the Monitor Quorum
Analyzing the Monitor Map
Managing the Centralized Configuration Database
Guided Exercise: Configuring Cluster Monitors
Configuring Cluster Networking
Configuring the Public and Cluster Networks
OSD network communication
Configuring Network Security
Guided Exercise: Configuring Cluster Networking
Configuring a Red Hat Ceph Storage Cluster
Summary
- Most cluster configuration settings are stored in the cluster configuration database on the MON nodes.
The database is automatically synchronized across MONs.
- Certain configuration settings, such as cluster boot settings, can be stored in the cluster configuration file.
The default file name is ceph.conf.
This file must be synchronized manually between all cluster nodes.
- Most configuration settings can be modified when the cluster is running.
You can change a setting temporarily or make it persistent across daemon restarts.
- The MON map holds the MON cluster quorum information that can be viewed with ceph commands or with the dashboard.
You can configure MON settings to ensure high cluster availability.
Summary (continued)
- Cephx provides cluster authentication via shared secret keys.
The client.admin key ring is required for administering the cluster.
- Cluster nodes communicate over the public network.
You can configure an additional cluster network to separate OSD replication, heartbeat, backfill, and recovery traffic.
Configuring a separate cluster network can improve cluster performance and security, as shown in the example after this summary.
- You can use firewall rules to secure communication to cluster nodes.
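A short sketch of these configuration tasks; the option values and network ranges are examples only.
# Query and persistently set an option in the centralized configuration database
ceph config get osd osd_memory_target
ceph config set osd osd_memory_target 4294967296
# Override a setting at runtime for a single daemon (not persistent across restarts)
ceph tell osd.0 config set debug_ms 1
# View the MON quorum and the monitor map
ceph mon stat
ceph mon dump
# Define the public and cluster networks
ceph config set global public_network 172.25.250.0/24
ceph config set global cluster_network 192.168.1.0/24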
Chapter 4: Creating Object Storage Cluster Components
Goal: Create and manage the components that comprise the object storage cluster, including OSDs, pools, and the cluster authorization method.
Objectives:
- Describe OSD configuration scenarios and create BlueStore OSDs using ceph-volume.
- Describe and compare replicated and erasure coded pools, and create and configure each pool type.
- Describe Cephx and configure user authentication and authorization for Ceph clients.
Creating BlueStore OSDs Using Logical Volumes
BlueStore OSD layout
FileStore versus BlueStore write throughput
FileStore versus BlueStore read throughput
Provisioning BlueStore OSDs
Guided Exercise: Creating BlueStore OSDs Using Logical Volumes
Creating and Configuring Pools
Creating Replicated Pools
Configuring Erasure Coded Pools
Erasure coded pools
Managing and Operating Pools
Guided Exercise: Creating and Configuring Pools
Managing Ceph Authentication
User authentication for Ceph applications
Configuring User Authentication
Configuring User Authorization
Guided Exercise: Managing Ceph Authentication
Creating Object Storage Cluster Components
Summary
- BlueStore is the default storage back end for Red Hat Ceph Storage 5.
It stores objects directly on raw block devices and improves performance over the previous FileStore back end.
- BlueStore OSDs use a RocksDB key-value database to manage metadata and store it on a BlueFS partition.
Red Hat Ceph Storage 5 uses sharding by default for new OSDs.
- The block.db device stores object metadata, and the write-ahead log (WAL) stores journals.
You can improve OSD performance by placing the block.db and WAL devices on faster storage than the object data.
- You can provision OSDs by using service specification files, by choosing a specific host and device, or automatically with the orchestrator service.
- Pools are logical partitions for storing objects. The available pool types are replicated and erasure coded.
- Replicated pools are the default pool type; they copy each object to multiple OSDs.
Summary (continued)
- Erasure coded pools function by dividing object data into chunks (k), calculating coding chunks (m) based on the data chunks, and then storing each chunk on separate OSDs.
The coding chunks are used to reconstruct object data if an OSD fails.
- A pool namespace allows you to logically partition a pool and is useful for restricting storage access by an application.
- The cephx protocol authenticates clients and authorizes communication between clients, applications, and daemons in the cluster.
It is based on shared secret keys.
- Clients can access the cluster when they are configured with a user account name and a key-ring file containing the user's secret key.
- Cephx capabilities provide a way to control access to pools and object data within pools, as shown in the example after this summary.
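A hedged sketch combining these tasks; the host, device, pool, profile, and user names are examples only.
# Create a BlueStore OSD on a specific host and device
ceph orch daemon add osd serverc:/dev/vdc
# Create a replicated pool, then an erasure coded pool with k=4 data and m=2 coding chunks
ceph osd pool create replpool 32 32 replicated
ceph osd erasure-code-profile set ecprofile-4-2 k=4 m=2
ceph osd pool create ecpool 32 32 erasure ecprofile-4-2
# Create a cephx user whose capabilities limit it to one pool
ceph auth get-or-create client.app1 mon 'allow r' osd 'allow rw pool=replpool'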
Chapter 5: Creating and Customizing Storage Maps
Goal: Manage and adjust the CRUSH and OSD maps to optimize data placement to meet the performance and redundancy requirements of cloud applications.
Objectives:
Managing and Customizing the CRUSH Map
CRUSH and Object Placement Strategies
CRUSH map default hierarchy example
Customizing Failure and Performance Domains
Optimizing Placement Groups
Guided Exercise: Managing and Customizing the CRUSH Map
Analyzing OSD Map Updates
Cluster map consistency using Paxos
Guided Exercise: Managing the OSD Map
Creating and Customizing Storage Maps
Summary
- The CRUSH algorithm provides a decentralized way for Ceph clients to interact with the Red Hat Ceph Storage cluster, which enables massive scalability.
- The CRUSH map contains two main components: a hierarchy of buckets that organize OSDs into a treelike structure where the OSDs are the leaves of the tree, and at least one CRUSH rule that determines how Ceph assigns PGs to OSDs from the CRUSH tree.
- Ceph provides various command-line tools to display, tune, modify, and use the CRUSH map.
Summary (continued)
- You can modify the CRUSH algorithm's behavior by using tunables, which disable, enable, or adjust features of the CRUSH algorithm.
- The OSD map epoch is the map's revision number and increments whenever a change occurs.
Ceph updates the OSD map every time an OSD joins or leaves the cluster, and OSDs keep the map synchronized among themselves.
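A brief sketch of inspecting the CRUSH and OSD maps; the output file names are examples.
# View the CRUSH bucket hierarchy and the rules that pools can use
ceph osd crush tree
ceph osd crush rule ls
# Export and decompile the CRUSH map for offline editing
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# Display the current OSD map epoch
ceph osd dump | grep epoch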
Chapter 6: Providing Block Storage Using RADOS Block Devices
Goal: Configure Red Hat Ceph Storage to provide block storage for clients using RADOS block devices (RBDs).
Objectives:
- Provide block storage to Ceph clients using RADOS block devices (RBDs), and manage RBDs from the command line.
- Create and configure RADOS block device snapshots and clones.
- Export an RBD image from the cluster to an external file and import it into another cluster.
Managing RADOS Block Devices
Block Storage Using a RADOS Block Device (RBD)
Managing and Configuring RBD Images
Accessing RADOS Block Device Storage
Kernel environment access
Virtual environment access
Tuning the RBD Image Format
RBD layout
Guided Exercise: Managing RADOS Block Devices
Managing RADOS Block Device Snapshots
Enabling RBD Snapshots and Cloning
RBD snapshot creation
Writing to an RBD snapshot
RBD clone write operation
RBD clone read operation
Guided Exercise: Managing RADOS Block Device Snapshots
Importing and Exporting RBD Images
Importing and Exporting RBD Images
Exporting and Importing Changes to RBD Images
Guided Exercise: Importing and Exporting RBD Images
Providing Block Storage Using RADOS Block Devices
Summary
- The rbd command manages RADOS block device pools, images, snapshots, and clones.
- The rbd map command uses the krbd kernel module to map RBD images to Linux block devices.
Configuring the rbdmap service maps these images persistently across reboots.
- RBD has an export and import mechanism for maintaining copies of RBD images that are fully functional and accessible.
- The rbd export-diff and rbd import-diff commands export and import RBD image changes made between two points in time, as sketched in the example after this summary.
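A hedged sketch of the RBD workflow; the pool, image, snapshot, and file names are examples.
# Create an image and map it to a local block device with krbd
rbd create rbdpool/image1 --size 1024
rbd map rbdpool/image1
# Take a snapshot, then export only the changes made since that snapshot
rbd snap create rbdpool/image1@snap1
rbd export-diff --from-snap snap1 rbdpool/image1 image1.diff
# Apply the exported changes to a copy that already contains snap1
rbd import-diff image1.diff rbdpool/image1-copy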
Chapter 7: Expanding Block Storage Operations
Goal: Expand block storage operations by implementing remote mirroring and the iSCSI Gateway.
Objectives:
- Configure an RBD mirror to replicate an RBD block device between two Ceph clusters for disaster recovery purposes.
- Configure the Ceph iSCSI Gateway to export RADOS Block Devices using the iSCSI protocol, and configure clients to use the iSCSI Gateway.
One-way mirroring
Two-way mirroring
Configuring RBD Mirroring
Guided Exercise: Configuring RBD Mirrors
Providing iSCSI Block Storage
Describing the Ceph iSCSI Gateway
Deploying an iSCSI Gateway
iSCSI gateway Dashboard page
Configuring an iSCSI Target
iSCSI Targets page
Configuring an iSCSI Initiator
Quiz: Providing iSCSI Block Storage
Expanding Block Storage Operations
Summary
- RBD mirroring supports automatic or selective mirroring of images using pool mode or image mode.
- The RBD mirror agent can replicate pool data between two Red Hat Ceph Storage clusters, in either one-way or two-way mode, to facilitate disaster recovery.
- Deploying an iSCSI gateway publishes RBD images as iSCSI targets for network-based block storage provisioning.
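A minimal sketch of enabling image-mode mirroring for one image; the pool and image names are examples, and an rbd-mirror daemon must also be running on the peer cluster.
# Deploy the mirroring agent and enable per-image (image mode) mirroring for a pool
ceph orch apply rbd-mirror --placement=serverc
rbd mirror pool enable rbdpool image
# Enable snapshot-based mirroring for one image and check its replication status
rbd mirror image enable rbdpool/image1 snapshot
rbd mirror image status rbdpool/image1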
Chapter 8: Providing Object Storage Using a RADOS Gateway
Goal: Configure Red Hat Ceph Storage to provide object storage for clients using a RADOS Gateway (RGW).
Objectives:
- Deploy a RADOS Gateway to provide clients with access to Ceph object storage.
- Configure the RADOS Gateway with multisite support to allow objects to be stored in two or more geographically diverse Ceph storage clusters.
Deploying an Object Storage Gateway
Introducing Object Storage
Introducing the RADOS Gateway
RADOS Gateway service architecture
Using the Beast Front-end
High Availability Proxy and Encryption
HashiCorp key management integration
Guided Exercise: Deploying an Object Storage Gateway
Configuring a Multisite Object Storage Deployment
RADOS Gateway Multisite Deployment
RADOS Gateway multisite diagram
Configuring Multisite RGW Deployments
Metadata Search Capabilities
RADOS Gateway Multisite Monitoring
RADOS Gateway Daemon list
RADOS Gateway Daemon Performance
RADOS Gateway Overall Performance
Guided Exercise: Configuring a Multisite Object Storage Deployment
Providing Object Storage Using a RADOS Gateway
Summary
- The RADOS Gateway is a service that connects to a Red Hat Ceph Storage cluster, and provides object storage to applications using a REST API.
- You can deploy the RADOS Gateway by using the Ceph orchestrator command-line interface or by using a service specification file.
- You can use the HAProxy and keepalived services to load balance the RADOS Gateway service.
Summary (continued)
- The RADOS Gateway supports multisite configuration, which allows RADOS Gateway objects to be replicated between separate Red Hat Ceph Storage clusters.
- Objects written to a RADOS Gateway for one zone are replicated to all other zones in the zone group.
- Metadata and configuration updates must occur in the master zone of the master zone group.
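A hedged sketch of deploying a gateway and starting a multisite configuration; the service, realm, zone group, and zone names are examples.
# Deploy two RADOS Gateway daemons with the orchestrator and verify the service
ceph orch apply rgw myrgw --placement="2 serverc serverd"
ceph orch ls rgw
# Create the realm, master zone group, and master zone for a multisite deployment
radosgw-admin realm create --rgw-realm=myrealm --default
radosgw-admin zonegroup create --rgw-zonegroup=us --master --default
radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-1 --master --default
radosgw-admin period update --commit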
Chapter 9: Accessing Object Storage Using a REST API
Goal: Configure the RADOS Gateway to provide access to object storage using REST APIs.
Objectives:
- Configure the RADOS Gateway to provide access to object storage compatible with the Amazon S3 API, and manage objects stored using that API.
- Configure the RADOS Gateway to provide access to object storage compatible with the Swift API, and manage objects stored using that API.
Providing Object Storage Using the Amazon S3 API
Amazon S3 API in RADOS Gateway
Creating a User for the Amazon S3 API
Managing Ceph Object Gateway Users
Accessing S3 Objects Using RADOS Gateway
Guided Exercise: Providing Object Storage Using the Amazon S3 API
Providing Object Storage Using the Swift API
OpenStack Swift Support in a RADOS Gateway
Creating a Subuser for OpenStack Swift
Managing Ceph Object Gateway Subusers
Swift Container Object Versioning and Expiration
Multitenancy Support in Swift
Guided Exercise: Providing Object Storage Using the Swift API
Accessing Object Storage Using a REST API
Summary
- You can access the RADOS Gateway by using clients that are compatible with the Amazon S3 API or the OpenStack Swift API.
- The RADOS Gateway can be configured to use either of the bucket name formats supported by the Amazon S3 API.
- To support authentication by using the OpenStack Swift API, Swift users are represented by RADOS Gateway subusers.
- You can define deletion policies for Amazon S3 buckets, and object versioning for Swift containers, to manage the behavior of deleted objects.
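A short sketch of creating users for both APIs; the user ID and display name are examples, and the generated keys appear in the command output.
# Create a user for the Amazon S3 API
radosgw-admin user create --uid=operator1 --display-name="Operator One"
# Add a Swift subuser and generate a Swift secret key for it
radosgw-admin subuser create --uid=operator1 --subuser=operator1:swift --access=full
radosgw-admin key create --subuser=operator1:swift --key-type=swift --gen-secret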
Chapter 10: Providing File Storage with CephFS
Goal: Configure Red Hat Ceph Storage to provide file storage for clients using the Ceph File System (CephFS).
Objectives:
- Provide file storage on the Ceph cluster by deploying the Ceph File System (CephFS).
- Configure CephFS, including snapshots, replication, memory management, and client access.
Deploying Shared File Storage
The Ceph File System and MDS
Mounting a File System with CephFS
Guided Exercise: Deploying Shared File Storage
Managing Shared File Storage
Mapping a File to an Object
Controlling the RADOS Layout of Files
Guided Exercise: Managing Shared File Storage
Providing File Storage with CephFS
Summary
- You can distinguish the different characteristics of file-based, block-based, and object-based storage.
- CephFS is a POSIX-compliant file system that is built on top of RADOS to provide file-based storage.
- CephFS requires at least one Metadata Server (MDS) to manage file metadata, which is stored separately from the file data.
- Deploying CephFS requires multiple steps:
- Create two pools, one for CephFS data and another for CephFS metadata.
- Start the MDS service on the hosts.
- Create a CephFS file system.
- You can mount CephFS file systems with either of the two available clients:
- The kernel client, which does not support quotas but is faster.
- The FUSE client, which supports quotas and ACLs but is slower.
Summary (continued)
- NFS Ganesha is a user space NFS file server for accessing Ceph storage.
- CephFS supports multisite geo-replication with snapshots.
- You can determine which OSDs store a file's objects.
- You can modify the RADOS layout to control how files are mapped to objects.
- CephFS enables asynchronous snapshots by creating a folder in the hidden .snap folder.
- You can schedule snapshots for your CephFS file system.
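A hedged sketch of the deployment steps listed above; the file system, pool, host, and mount point names are examples.
# Create the data and metadata pools, the file system, and the MDS service
ceph osd pool create mycephfs_data
ceph osd pool create mycephfs_metadata
ceph fs new mycephfs mycephfs_metadata mycephfs_data
ceph orch apply mds mycephfs --placement="2 serverc serverd"
# Mount with the kernel client as the cephx user client.user1 (key read from /etc/ceph)
mount -t ceph serverc:/ /mnt/cephfs -o name=user1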
Chapter 11: Managing a Red Hat Ceph Storage Cluster
Goal: Manage an operational Ceph cluster using tools to check status, monitor services, and properly start and stop all or part of the cluster. Perform cluster maintenance by replacing or repairing cluster components, including MONs, OSDs, and PGs.
Objectives:
- Administer and monitor a Red Hat Ceph Storage cluster, including starting and stopping specific services or the full cluster, and querying cluster health and utilization.
- Perform common cluster maintenance tasks, such as adding or removing MONs and OSDs, and recovering from various component failures.
Performing Cluster Administration and Monitoring
Defining the Ceph Manager (MGR)
Monitoring Cluster Health
Powering Down or Restarting the Cluster
Monitoring Placement Groups
Using the Balancer Module
Guided Exercise: Performing Cluster Administration and Monitoring
Performing Cluster Maintenance Operations
Adding or Removing OSD Nodes
Placing Hosts Into Maintenance Mode
Guided Exercise: Performing Cluster Maintenance Operations
Managing a Red Hat Ceph Storage Cluster
Summary
- Describe the role of the Ceph Manager (MGR), and enable or disable MGR modules.
- Use the CLI to find the URL of the Dashboard GUI on the active MGR.
- View the status of cluster MONs by using the CLI or the Dashboard GUI.
- Monitor cluster health and interpret the cluster health status.
- Power down the entire cluster by setting cluster flags to stop background operations, then stopping daemons and nodes in a specific order by function.
- Power up the entire cluster by starting nodes and daemons in a specific order by function, then setting cluster flags to enable background operations.
Summary (continued)
- Start, stop, or restart individual cluster daemons and view daemon logs.
- Monitor cluster storage by viewing OSD and PG states and capacity.
- Identify the PG details for a specific object.
- Find and replace a failed OSD.
- Add or remove a cluster MON node.
- Use the balancer module to optimize the placement of PGs across OSDs.
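A compact sketch of preparing a full-cluster power-down and of enabling the balancer; the flag order mirrors the procedure described above, and the balancer mode is an example.
# Set the cluster flags that stop background operations before powering down
for flag in noout norecover norebalance nobackfill nodown pause; do
  ceph osd set $flag
done
# ...power nodes down and back up in order by function, then clear each flag with: ceph osd unset <flag>
# Enable and check the balancer module
ceph balancer on
ceph balancer mode upmap
ceph balancer status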
Chapter 12: Tuning and Troubleshooting Red Hat Ceph Storage
Goal: Identify the key Ceph cluster performance metrics, and use them to tune and troubleshoot Ceph operations for optimal performance.
Objectives:
- Choose Red Hat Ceph Storage architecture scenarios and operate Red Hat Ceph Storage-specific performance analysis tools to optimize cluster deployments.
- Protect OSD and cluster hardware resources from over-utilization by controlling scrubbing, deep scrubbing, backfill, and recovery processes to balance CPU, RAM, and I/O requirements.
- Identify key tuning parameters and troubleshoot performance for Ceph clients, including RADOS Gateway, RADOS Block Devices, and CephFS.
Optimizing Red Hat Ceph Storage Performance
Defining Performance Tuning
Optimizing Ceph Performance
Designing the Cluster Architecture
Separate networks for OSD and client traffic
Manually Controlling the Primary OSD for a PG
Tuning with Ceph Performance Tools
Guided Exercise: Optimizing Red Hat Ceph Storage Performance
Tuning Object Storage Cluster Performance
Maintaining OSD Performance
Storing Data on Ceph BlueStore
The BlueStore Fragmentation Tool
Maintaining Data Coherence with Scrubbing
Trimming Snapshots and OSDs
Controlling Backfill and Recovery
Guided Exercise: Tuning Object Storage Cluster Performance
Troubleshooting Clusters and Clients
Beginning Troubleshooting
Troubleshooting Network Issues
Troubleshooting Ceph Clients
Troubleshooting Ceph Monitors
Troubleshooting Ceph OSDs
Troubleshooting the RADOS Gateway
Guided Exercise: Troubleshooting Clusters and Clients
Tuning and Troubleshooting Red Hat Ceph Storage
Summary
- Red Hat Ceph Storage 5 performance depends on the performance of the underlying storage, network, and operating system file system components.
- Performance is improved by reducing latency, increasing IOPS, and increasing throughput. Tuning for one metric often adversely affects the performance of another.
Your primary tuning metric must consider the expected workload behavior of your storage cluster.
- Ceph implements a scale-out model architecture.
Increasing the number of OSD nodes increases the overall performance.
The greater the parallel access, the greater the load capacity.
- The rados bench and rbd bench commands are used to stress test and benchmark a Ceph cluster, as shown in the example at the end of this summary.
Summary (continued)
- Controlling scrubbing, deep scrubbing, backfill, and recovery processes helps avoid cluster over-utilization.
- Troubleshooting Ceph issues starts with determining which Ceph component is causing the issue.
- Enabling logging for a failing Ceph subsystem provides diagnostic information about the issue.
- The log debug level is used to increase the logging verbosity.
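A hedged sketch of benchmarking and throttling; the pool name, image name, durations, and values are examples only.
# Benchmark the cluster and a single RBD image
rados bench -p testpool 10 write
rbd bench --io-type write testpool/image1 --io-total 100M
# Throttle backfill and recovery to protect client I/O
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
# Raise the debug level of one OSD while troubleshooting, then lower it again afterwards
ceph tell osd.0 config set debug_osd 10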
Chapter 13: Managing Cloud Platforms with Red Hat Ceph Storage
Goal: Manage Red Hat cloud infrastructure to use Red Hat Ceph Storage to provide image, block, volume, object, and shared file storage.
Objectives:
- Describe Red Hat OpenStack Platform storage requirements, and compare the architecture choices for using Red Hat Ceph Storage as an RHOSP storage back end.
- Describe how OpenStack implements Ceph storage for each storage-related OpenStack component.
- Describe Red Hat OpenShift Container Platform storage requirements, and compare the architecture choices for using Red Hat Ceph Storage as an RHOCP storage back end.
- Describe how OpenShift implements Ceph storage for each storage-related OpenShift feature.
Introducing OpenStack Storage Architecture
Red Hat OpenStack Platform Overview
A simple set of OpenStack services
Selecting a Ceph Integration Architecture
An example overcloud with multiple node roles
Quiz: Introducing OpenStack Storage Architecture
Implementing Storage in OpenStack Components
OpenStack Storage Implementation Overview
Storage Implementation by Type
Quiz: Implementing Storage in OpenStack Components
Introducing OpenShift Storage Architecture
Red Hat OpenShift Container Platform overview
Introducing Red Hat OpenShift Data Foundation
Rook Architecture
Red Hat OpenShift Data Foundation installation
Quiz: Introducing OpenShift Storage Architecture
Implementing Storage in OpenShift Components
Implementing Storage in Red Hat OpenShift Container Platform
Quiz: Implementing Storage in OpenShift Components
Summary
- Red Hat Ceph Storage can provide a unified storage back end for OpenStack services that consume block, image, object, and file-based storage.
- OpenStack Glance can use Ceph RBD images to store the operating system images that it manages.
- OpenStack Cinder can also use RADOS block devices to provide block-based storage for virtual machines that run as cloud instances.
- The RADOS Gateway can replace the native OpenStack Swift storage by providing object storage for applications that use the OpenStack Swift API, and integrates its user authentication with OpenStack Keystone.
- Red Hat OpenShift Data Foundation is an operator bundle that provides cloud storage and data services to Red Hat OpenShift Container Platform; it is composed of the ocs-storage, NooBaa, and Rook-Ceph operators.
Summary (continued)
- Rook-Ceph is a cloud storage orchestrator that installs, monitors, and manages the underlying Ceph cluster in the OpenShift Data Foundation operator bundle. Rook-Ceph provides the drivers that are required to request storage from the cluster.
- PersistentVolumeClaims are an OpenShift resource type that represents a request for storage. A claim references a StorageClass, which describes the PersistentVolume that binds to the claim.
- Access modes describe the mount capabilities of a PersistentVolume on pods.
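A minimal sketch of such a claim; the StorageClass name shown is an example and depends on how OpenShift Data Foundation was installed.
# Request a 10 GiB RBD-backed volume through a PersistentVolumeClaim
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF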
Chapter 14: Comprehensive Review
Goal: Review tasks from Cloud Storage with Red Hat Ceph Storage
Objectives:
Reviewing Cloud Storage with Red Hat Ceph Storage
Lab: Deploying Red Hat Ceph Storage
Lab: Configuring Red Hat Ceph Storage
Lab: Deploying and Configuring Block Storage with RBD
Lab: Deploying and Configuring RADOS Gateway