{"id":2021,"date":"2019-08-01T12:42:03","date_gmt":"2019-08-01T12:42:03","guid":{"rendered":"https:\/\/www.sparksupport.com\/blog\/?p=2021"},"modified":"2024-04-24T08:00:41","modified_gmt":"2024-04-24T08:00:41","slug":"ceph-a-distributed-object-storage-system","status":"publish","type":"post","link":"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/","title":{"rendered":"Ceph Storage: A Distributed Object Storage System"},"content":{"rendered":"<h3><strong><span data-preserver-spaces=\"true\">CEPH Storage<\/span><\/strong><\/h3>\n<p><span data-preserver-spaces=\"true\">Ceph is an open-source, software-defined, distributed storage system. Software-defined Storage (SDS) is a form of storage virtualization that separates the storage hardware from the software that manages the storage infrastructure. Ceph is a true SDS solution: it runs on any commodity hardware without vendor lock-in, giving customers the flexibility to select commodity hardware from any manufacturer. Ceph is massively scalable (up to exabytes and beyond), and there is no single point of failure. Today, private and public cloud models are used massively in providing\u00a0<\/span><a class=\"_e75a791d-denali-editor-page-rtfLink\" href=\"https:\/\/sparksupport.com\/it-infrastructure-management\/\" target=\"_blank\" rel=\"noopener noreferrer\"><span data-preserver-spaces=\"true\">IT infrastructure management<\/span><\/a><span data-preserver-spaces=\"true\">\u00a0to customers. <\/span><\/p>\n<p><span data-preserver-spaces=\"true\">Ceph is very popular in cloud storage solutions such as OpenStack. Clouds depend on commodity hardware, and Ceph makes full use of it to provide a fault-tolerant, cost-effective storage system. Ceph is a unified storage solution that provides access to files, blocks, and objects from a single platform, along with their storage. 
RAID technology has been the fundamental building block of storage systems for many years. RAID consumes a significant amount of disk space as overhead, and rebuilding a failed disk whose capacity is in the order of terabytes takes a considerable amount of time. Integrating RAID technology also increases the cost of storage. A Ceph storage system addresses these problems and eliminates the need for RAID technology. Ceph support has been included in the Linux kernel since version 2.6.34.<\/span><\/p>\n<h3><strong><span data-preserver-spaces=\"true\">WHY OBJECT STORAGE?<\/span><\/strong><\/h3>\n<p><span data-preserver-spaces=\"true\">An object is a combination of data and metadata. Each object is identified by a unique ID, which eliminates the possibility of another object having the same ID. Traditional storage solutions are not capable of providing object storage; they provide only file- and block-based storage. Object-based storage has many advantages over traditional file- and block-based solutions: it provides platform and hardware independence and the freedom to select them. The basic building block of Ceph is the object. Any form of data, whether a file or a block, is stored as objects in a Ceph cluster, and these objects are replicated across the cluster to improve reliability. In Ceph, objects are not tied to a physical path, making them flexible and location-independent. This enables Ceph to scale linearly from the petabyte level to the exabyte level.<\/span><\/p>\n<h4><strong><span data-preserver-spaces=\"true\">CEPH RELEASES<\/span><\/strong><\/h4>\n<p><span data-preserver-spaces=\"true\">At the time of writing, Hammer v0.94.3 was the latest release of Ceph. 
It was preceded by the Giant release.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">Ceph Storage comparison list<\/span><\/p>\n<h4><strong><span data-preserver-spaces=\"true\">CEPH ARCHITECTURE<\/span><\/strong><\/h4>\n<p><span data-preserver-spaces=\"true\">A Ceph storage cluster is made up of several different software daemons, each of which takes care of a unique Ceph functionality. These daemons are independent of one another, which helps keep the cost of a Ceph storage cluster low compared to other storage systems. In the figure below, RADOS is the lower part, internal to the Ceph cluster with no direct client interface, while the upper part holds all the client interfaces.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">clients flow chart<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">Figure: Ceph Architecture<\/span><\/p>\n<h4><strong><span data-preserver-spaces=\"true\">CEPH DEPLOYMENT<\/span><\/strong><\/h4>\n<p><span data-preserver-spaces=\"true\">Suppose we have three nodes with the hostnames ceph-node1, ceph-node2, and ceph-node3.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">1. Install ceph-deploy on ceph-node1 by executing<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># yum install ceph-deploy<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">2. Create a Ceph cluster using the ceph-deploy tool,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># ceph-deploy new ceph-node1<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">The new subcommand of ceph-deploy deploys a new cluster with the default cluster name, ceph. It generates the cluster configuration and keyring files, ceph.conf and ceph.mon.keyring, in the current working directory. 
When Ceph runs with authentication and authorization enabled, it asks for a username and a keyring containing that user\u2019s secret key. The default username is client.admin.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">3. To install the Ceph software binaries on all the nodes using ceph-deploy, execute the following command from ceph-node1,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># ceph-deploy install --release emperor ceph-node1 ceph-node2 ceph-node3<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">Here, emperor is the name of a Ceph release.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">4. Create the first monitor on ceph-node1,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># ceph-deploy mon create-initial<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">5. Check the cluster status with<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># ceph status<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">Initially, the cluster won\u2019t be healthy.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">Creating an Object Storage Device<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">Create an Object Storage Device (OSD) on ceph-node1 and add it to the Ceph cluster by,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">1. List the disks on the node by,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># ceph-deploy disk list ceph-node1<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">From the output, identify the disks (other than the OS-partition disks) on which we should create Ceph OSDs.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">2. The disk zap subcommand destroys the existing partition table and content of the disk.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># ceph-deploy disk zap ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">3. 
The osd create subcommand first prepares the disk, erasing it and creating a filesystem, which is xfs by default. It then activates the disk\u2019s first partition as the data partition and the second partition as the journal:<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># ceph-deploy osd create ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">4. Check the cluster status for the new OSD entries:<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># ceph status<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">At this stage, the cluster will not be healthy. We need to add a few more nodes to the Ceph cluster so that it can set up distributed, replicated object storage and become healthy.<\/span><\/p>\n<h4><strong><span data-preserver-spaces=\"true\">RADOS<\/span><\/strong><\/h4>\n<p><span data-preserver-spaces=\"true\">The Reliable Autonomic Distributed Object Store (RADOS), or storage cluster, is the heart of the Ceph storage system. RADOS provides the Ceph storage system with a distributed object store, high availability, reliability, no single point of failure, self-healing, and self-management. Ceph\u2019s data access methods, such as the RADOS block device (RBD), CephFS, the RADOS gateway, and the RADOS library, operate on top of the RADOS layer. RADOS stores data in the form of objects inside a pool. When a write request reaches a Ceph cluster, the location where the data should be written is calculated by the CRUSH algorithm. Based on that, RADOS distributes the data to the cluster nodes in the form of objects. <\/span><\/p>\n<p><span data-preserver-spaces=\"true\">RADOS also performs data replication. It takes copies of objects and distributes these copies to different zones. No two copies reside in the same zone, which ensures that every object is replicated at least once. RADOS also checks object states to ensure that every object remains in a consistent state. 
In the case of inconsistency, recovery is performed using the remaining object copies. These recovery operations are hidden from the end-user. RADOS consists of two major components: the Object Storage Device (OSD) and the Monitor.<\/span><\/p>\n<h5><span data-preserver-spaces=\"true\">1. RADOS Object Storage Device (OSD): <\/span><\/h5>\n<p><span data-preserver-spaces=\"true\">OSDs store client data in the form of objects on the physical disk drives of each node in the cluster. A Ceph cluster consists of many OSDs. For any read or write operation, the client requests cluster maps from the monitors and, after examining the maps, interacts directly with the OSDs for I\/O operations. Each object in an OSD has one primary copy and several secondary copies that are scattered across other OSDs. Each OSD plays the role of primary OSD for some objects and, at the same time, acts as a secondary OSD for other objects. When a disk fails, all OSDs take part in the recovery: the secondary OSDs holding replicated copies of the failed objects are promoted to primary, and new secondary object copies are created.<\/span><\/p>\n<h5><span data-preserver-spaces=\"true\">2. Ceph Monitors: <\/span><\/h5>\n<p><span data-preserver-spaces=\"true\">Ceph monitors do not store client data. They serve updated cluster maps to clients and other cluster nodes, which periodically check with the monitors for the most recent copies of the cluster maps. Ceph monitors are responsible for the health of the Ceph cluster, storing cluster information, node states, and cluster configuration, and they keep the master copy of the cluster map. A typical Ceph cluster has more than one monitor. The monitor count should be an odd number, and a multi-monitor Ceph architecture forms a quorum. Decision making is distributed among all the monitors. 
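As a toy illustration (this is not Ceph code), a quorum requires a strict majority of the monitor count, which is why an odd number of monitors avoids tied votes:

```python
# Toy sketch: quorum needs a strict majority of the configured monitors.

def quorum_size(monitor_count: int) -> int:
    """Smallest number of monitors that forms a strict majority."""
    return monitor_count // 2 + 1

def has_quorum(monitor_count: int, alive: int) -> bool:
    """True if the surviving monitors can still make decisions."""
    return alive >= quorum_size(monitor_count)

# With 3 monitors, losing one still leaves a quorum of 2:
print(has_quorum(3, 2))   # True
# With 4 monitors, a 2-vs-2 network split leaves neither side with quorum:
print(has_quorum(4, 2))   # False
```

With an even count, a symmetric split can leave both halves below the majority threshold, which is the split-brain risk the article mentions.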
An odd number of monitors is recommended to avoid split-brain scenarios. One of the Ceph monitors operates as the leader; if the current leader goes down, another monitor becomes the leader. A production cluster should have at least three monitors. The cluster map includes the monitor, OSD, PG, and CRUSH maps.<\/span><\/p>\n<h5><span data-preserver-spaces=\"true\">Monitor map: <\/span><\/h5>\n<p><span data-preserver-spaces=\"true\">This holds end-to-end information about the monitor nodes, including the Ceph cluster ID, the monitor hostnames, and the IP addresses with port numbers. It also stores the map-creation and last-changed information.<\/span><\/p>\n<h5><span data-preserver-spaces=\"true\">OSD map:<\/span><\/h5>\n<p><span data-preserver-spaces=\"true\"> This stores fields such as the cluster ID, OSD map creation and last-changed information, and pool-related information such as pool names, pool IDs, types, replication levels, and placement groups. It also stores OSD information such as count, state, weight, and OSD host information. We can check the cluster\u2019s OSD map by executing:<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># ceph osd dump<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">\u2022 PG map: This holds the timestamp, the last OSD map epoch, the full ratio, and the near-full ratio. It also keeps track of each placement group\u2019s ID, object count, state, state stamp, and up and acting OSD sets. To check the cluster\u2019s PG map, execute:<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># ceph pg dump<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">\u2022 CRUSH map: This holds information about the cluster\u2019s storage devices, the failure domain hierarchy, and the rules defined for storing data within the failure domains. 
To check the cluster\u2019s CRUSH map, execute the following command:<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># ceph osd crush dump<\/span><\/p>\n<h4><strong><span data-preserver-spaces=\"true\">librados<\/span><\/strong><\/h4>\n<p><span data-preserver-spaces=\"true\">librados is a C library that allows applications to work directly with RADOS, bypassing the other interface layers when interacting with a Ceph cluster. It offers an API through which applications can interact with the cluster directly and in parallel, with no HTTP overhead. Applications link against librados and extend their protocols, thereby gaining access to RADOS. This direct interaction with RADOS improves application performance. librados also serves as the base for the other service interfaces built on top of it, which include the Ceph File System, the Ceph RADOS Gateway, and the Ceph Block Device.<\/span><\/p>\n<h4><strong><span data-preserver-spaces=\"true\">RADOS GATEWAY<\/span><\/strong><\/h4>\n<p><span data-preserver-spaces=\"true\">The Ceph object gateway is also known as the RADOS gateway. It provides APIs for different applications, such as the Amazon S3 API and the Swift API (OpenStack Object Storage). It can be considered a proxy that converts HTTP requests into RADOS requests and vice versa. The S3 and Swift APIs share a common namespace inside a Ceph cluster, so data written with one API can be retrieved with the other. Apart from the S3 and Swift APIs, an application can bypass the RADOS gateway and get direct, parallel access to librados, that is, to the Ceph cluster. Removing this additional layer is effective for applications that require extreme performance from storage. Running more than one gateway spreads the request load on the storage cluster.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">\u2022 S3 compatible: This provides an Amazon S3 RESTful API-compatible interface to Ceph storage clusters. 
A RESTful (Representational State Transfer) API is a popular API-building style for\u00a0<\/span><a class=\"_e75a791d-denali-editor-page-rtfLink\" href=\"https:\/\/www.sparksupport.com\/cloud-computing-services.html\" target=\"_blank\" rel=\"noopener noreferrer\"><span data-preserver-spaces=\"true\">CLOUD COMPUTING SERVICES<\/span><\/a><span data-preserver-spaces=\"true\">\u00a0APIs.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">\u2022 Swift compatible: This provides an OpenStack Swift API-compatible interface to Ceph storage clusters. The Ceph Object Gateway can be used as a replacement for Swift in an OpenStack cluster.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">\u2022 Admin API: This is helpful for administering a Ceph cluster over an HTTP RESTful API.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">ceph cluster flow chart<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">Figure: Different access methods using the RADOS Gateway<\/span><\/p>\n<h4><strong><span data-preserver-spaces=\"true\">RADOS BLOCK DEVICE (RBD)<\/span><\/strong><\/h4>\n<p><span data-preserver-spaces=\"true\">In block storage, data is stored as volumes, in the form of blocks, that are attached to nodes. This provides the large storage capacity required by applications. These blocks are mapped to the operating system and are controlled by its file system. Ceph introduced a protocol called RBD, which provides reliable, distributed, high-performance block storage to clients. The RBD driver has been integrated into the Linux kernel. RBD supports images of up to 16 exabytes. The Ceph block device provides full support to cloud platforms such as OpenStack and CloudStack. 
In OpenStack, the Ceph block device is used with the Cinder and Glance components.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">1. Create an RBD named \u2018testrbd\u2019 with a size of 20480 MB (20 GB),<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># rbd create testrbd --size 20480<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">2. List RBDs by,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># rbd ls<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">3. Retrieve information about the block device by,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># rbd --image testrbd info<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">4. Map the remote RBD image to an RBD device,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">echo \"{ceph-monitor ip} name=admin,secret=Qwer12%$&amp;*wqMN ceph-pool ceph-image\" &gt; \/sys\/bus\/rbd\/add<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">\u2018ceph-image\u2019 is the name of the RBD image and \u2018ceph-pool\u2019 is the name of the pool.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">5. Format the device,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># mkfs.xfs -L rbddevice \/dev\/rbd0<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">rbddevice is a label used to identify the RBD device in an environment with multiple RBDs.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">6. Remove the RBD device by executing,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># echo \"0\" &gt; \/sys\/bus\/rbd\/remove<\/span><\/p>\n<h4><strong><span data-preserver-spaces=\"true\">CEPH File System<\/span><\/strong><\/h4>\n<p><span data-preserver-spaces=\"true\">Ceph provides a file system on top of RADOS. It uses a metadata daemon that manages metadata and keeps it separate from the data. This separation reduces complexity and improves reliability. CephFS offers a POSIX-compliant, distributed file system of any size. 
The Ceph file system uses the same Ceph storage cluster as Ceph block devices and Ceph object storage. To use the Ceph file system, we require at least one metadata server. Linux kernel versions 2.6.34 and above support CephFS. There are two ways to use CephFS: via the native kernel driver and via Ceph FUSE.<\/span><\/p>\n<h4><strong><span data-preserver-spaces=\"true\">Mounting CephFS with the kernel driver<\/span><\/strong><\/h4>\n<p><span data-preserver-spaces=\"true\">1. Check the kernel version of the client using the command \u2018uname -r\u2019 and create a mount point directory,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># mkdir \/mnt\/cephkernel<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">2. Mount CephFS by,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># mount -t ceph &lt;monitor ip&gt;:&lt;monitor port&gt;:\/ \/mnt\/cephkernel -o name=admin,secret=&lt;key&gt;<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">e.g.: mount -t ceph 192.168.1.65:6789:\/ \/mnt\/cephkernel -o name=admin,secret=Mwkwwk&amp;%$75757HJF<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">Here, the key is the admin secret key located in \/etc\/ceph\/ceph.client.admin.keyring<\/span><\/p>\n<h4><strong><span data-preserver-spaces=\"true\">Mounting CephFS as FUSE\u00a0<\/span><\/strong><\/h4>\n<p><span data-preserver-spaces=\"true\">FUSE stands for Filesystem in Userspace, a mechanism that allows non-privileged users to create their own file systems without editing kernel code.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">1. Install the ceph-fuse module on the client machine by,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># yum install ceph-fuse<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">2. 
Create a directory for mounting,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># mkdir \/mnt\/cephfs<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">3. Mount by,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># ceph-fuse -m &lt;monitor ip&gt;:&lt;monitor port&gt; &lt;mount point&gt;<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">e.g.: ceph-fuse -m 192.168.1.34:6789 \/mnt\/cephfs<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">4. To mount permanently, open \/etc\/fstab and add,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">&lt;ceph-id&gt; &lt;mount point&gt; &lt;type&gt; &lt;options&gt;<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">id=admin \/mnt\/cephfs fuse.ceph defaults 0 0<\/span><\/p>\n<h4><strong><span data-preserver-spaces=\"true\">PLACEMENT GROUP (PG)<\/span><\/strong><\/h4>\n<p><span data-preserver-spaces=\"true\">A placement group is a logical collection of objects that are replicated across OSDs to provide reliability in the storage system. We can consider a PG to be a logical container holding multiple objects, mapped onto multiple OSDs. Placement groups are essential for the scalability and performance of a Ceph storage system. Without PGs, it would be difficult to track and manage the many replicated copies of objects spread over many OSDs. Increasing the number of PGs spreads data more evenly across OSDs, but every placement group requires resources such as CPU and memory to manage its objects, so the PG count should be increased in a regulated way; 50 to 100 PGs per OSD is recommended.<\/span><\/p>\n<h4><strong><span data-preserver-spaces=\"true\">CEPH POOLS<\/span><\/strong><\/h4>\n<p><span data-preserver-spaces=\"true\">A Ceph pool is a logical partition for storing objects. Ceph provides easy storage management using these pools. 
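The PG sizing guidance above follows the commonly cited rule (also given later in this article): multiply the OSD count by 100, divide by the replication count, and round up to the nearest power of two. A minimal sketch of that arithmetic:

```python
# Sketch of the common PG sizing rule:
#   Total PGs = (OSD count * 100) / replication count, rounded up to a power of 2.

def total_pgs(osd_count: int, replica_count: int) -> int:
    raw = osd_count * 100 / replica_count
    power = 1
    while power < raw:          # find the smallest power of two >= raw
        power *= 2
    return power

# 9 OSDs with 3-way replication: 9 * 100 / 3 = 300, rounded up to 512.
print(total_pgs(9, 3))   # 512
```

This stays within the 50-to-100-PGs-per-OSD band for typical cluster sizes; actual values should still be sanity-checked against the cluster's pool count.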
Each pool holds a number of placement groups, and each placement group maps its objects onto OSDs. A Ceph pool ensures data availability by creating multiple copies of each object. We can define the replica size at pool creation time; the default replica size is 2 (the object plus one additional copy). When we first deploy a Ceph cluster without creating a pool, Ceph uses the default pools to store data. <\/span><\/p>\n<p><span data-preserver-spaces=\"true\">A Ceph pool supports snapshots and allows setting ownership of and access to objects. In Ceph storage systems, data management starts as soon as a client writes data to a Ceph pool: the data is written to a primary OSD according to the pool\u2019s replication size, and the primary OSD then replicates the same data to the secondary and tertiary OSDs. After finishing their writes, the secondary and tertiary OSDs acknowledge the primary OSD. Only then does the primary OSD acknowledge the client, confirming that the write operation has completed.<\/span><\/p>\n<h3><strong><span data-preserver-spaces=\"true\">Creating a Pool<\/span><\/strong><\/h3>\n<p><span data-preserver-spaces=\"true\">Creating a Ceph pool requires a pool name, PG and PGP counts, and a pool type, which is replicated by default. PGP is the number of placement groups considered for the placement of objects inside a pool.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">1. Create a pool named \u2018newpool\u2019 with 128 PGs and PGPs by,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># ceph osd pool create newpool 128 128<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">2. Pools can be listed in two ways,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># ceph osd lspools<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># rados lspools<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">3. 
The default replication size for a Ceph pool created with the Ceph emperor release or earlier is two. We can set the replication size by,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># ceph osd pool set newpool size 4<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">4. Take a snapshot of a pool,<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"># rados mksnap snapshot01 -p newpool<\/span><\/p>\n<h4><strong><span data-preserver-spaces=\"true\">CRUSH<\/span><\/strong><\/h4>\n<p><span data-preserver-spaces=\"true\">Traditional storage systems store data together with its metadata. The metadata, which is data about the data, records information such as where the data is physically stored. Each time new data is added to the storage system, the metadata is first updated with the physical location where the data will be stored, after which the actual data is written. This approach does not scale to exabyte-level data, and it makes the metadata a single point of failure for the storage system: if we lose the storage metadata, we lose all our data. The central metadata must therefore be kept safe from disasters, either by keeping multiple copies on a single node or by replicating the entire data and metadata. Such complex metadata management is a bottleneck for a storage system\u2019s scalability, high availability, and performance.<\/span><\/p>\n<h5>How does it work?<\/h5>\n<p><span data-preserver-spaces=\"true\">Ceph uses the Controlled Replication Under Scalable Hashing (CRUSH) algorithm. Unlike traditional systems that rely on storing and managing a central metadata\/index table, Ceph uses CRUSH to compute where data should be written to or read from. Instead of storing metadata, CRUSH computes it on demand, thereby removing the limitations of traditionally stored metadata. The metadata computation process is known as a CRUSH lookup, and it does not depend on any central system. 
Ceph gives clients the flexibility to perform this on-demand metadata computation themselves when reading or writing data. For a read or write operation on a Ceph cluster, the client first contacts a Ceph monitor and retrieves a copy of the cluster map, which tells it the state and configuration of the Ceph cluster. The data is converted into objects with object IDs and pool names\/IDs. The object ID is then hashed against the number of placement groups to generate the final placement group within the required Ceph pool. <\/span><\/p>\n<p><span data-preserver-spaces=\"true\">The calculated placement group then goes through a CRUSH lookup (on-demand metadata computation) to determine the primary OSD location for storing or retrieving the data. After computing the OSD ID, the client contacts that OSD directly and stores the data. All these compute operations are performed by the clients, so they do not impact cluster performance. Once the data is written to the primary OSD, the same node performs a CRUSH lookup to compute the locations of the secondary placement groups and OSDs, so that the data is replicated across the cluster for high availability.<\/span><\/p>\n<h4><strong><span data-preserver-spaces=\"true\">Recovery and Rebalancing<\/span><\/strong><\/h4>\n<p><span data-preserver-spaces=\"true\">In the event of a component failure, Ceph waits for 300 seconds (the default) before it marks the OSD down and out and initiates recovery. This interval is set through the \u2018mon osd down out interval\u2019 parameter in the Ceph cluster configuration file. During recovery, Ceph regenerates the affected data that was placed on the failed node. CRUSH replicates data to many nodes, and these replicated copies are used for the recovery. When a new disk or host is added to a Ceph cluster, CRUSH starts a rebalancing operation, during which it moves data from existing hosts or disks to the new host or disk. 
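The client-side placement computation described above can be sketched as a toy model. This is deliberately not the real CRUSH algorithm; it only shows the key property that any client, given the same map, deterministically computes the same PG and OSD set with no metadata server involved:

```python
# Toy sketch of client-side placement (NOT the real CRUSH algorithm):
# hash the object name against the pool's PG count, then pick a
# deterministic, distinct set of OSDs for that PG.

import hashlib

def place_object(obj_name, pool_id, pg_num, osd_ids, replicas):
    # Stable hash: every client computes the same integer for the same name.
    digest = int(hashlib.md5(obj_name.encode()).hexdigest(), 16)
    pg = digest % pg_num
    # Pseudo-CRUSH step: derive `replicas` distinct OSDs from the PG id.
    start = (pool_id * 31 + pg) % len(osd_ids)
    osds = [osd_ids[(start + i) % len(osd_ids)] for i in range(replicas)]
    return pg, osds

# Any client computes the same placement without asking a metadata server:
pg, osds = place_object("my-object", pool_id=1, pg_num=128,
                        osd_ids=[0, 1, 2, 3, 4], replicas=3)
print(pg, osds)
```

The real CRUSH additionally walks a weighted failure-domain hierarchy so that replicas land in different zones, but the deterministic, computed (rather than looked-up) nature of placement is the same.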
Rebalancing keeps all disks equally utilized, which makes cluster performance more efficient. All existing OSDs move data in parallel, which helps the rebalancing operation complete quickly.<\/span><\/p>\n<h4><strong><span data-preserver-spaces=\"true\">CEPH and OpenStack<\/span><\/strong><\/h4>\n<p><span data-preserver-spaces=\"true\">OpenStack is a set of software tools for building and managing cloud computing platforms for public and private clouds. Ceph provides robust, reliable storage for OpenStack and can be integrated with OpenStack components such as Cinder, Glance, Nova, and Keystone. The main benefits of integrating Ceph with OpenStack include:<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">1. Ceph is a unified block, file, and (mainly) object storage solution for OpenStack, allowing different applications to use storage as they need.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">2. Ceph supports rich APIs for both the Swift and S3 object storage interfaces.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">3. It provides a snapshot feature for OpenStack volumes that can be used as a backup.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">4. Ceph provides a feature-rich storage backend at a very low cost, which in turn limits the cost of an OpenStack deployment.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">5. It provides advanced block storage capabilities, such as cloning of VMs, for OpenStack clouds<\/span><\/p>\n<h4><strong><span data-preserver-spaces=\"true\">CEPH Best Practices:\u00a0<\/span><\/strong><\/h4>\n<p><span data-preserver-spaces=\"true\">1. The OSD journal<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">Ceph first writes data from Ceph clients to a journal. Only after the journal write completes is the data written to the backing storage. 
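The journal-first write path can be sketched with a minimal write-ahead model. This is an illustration of the general technique, not Ceph's implementation: a write is recorded in the journal before the backing store, so a pending entry can be replayed after a crash:

```python
# Minimal write-ahead journaling sketch (not Ceph's implementation).

class JournaledStore:
    def __init__(self):
        self.journal = []   # stand-in for the journal partition
        self.store = {}     # stand-in for the backing disks

    def write(self, key, value):
        self.journal.append((key, value))  # (1) record the write in the journal
        self.store[key] = value            # (2) then write to backing storage
        self.journal.clear()               # entry is durable; journal space reclaimed

    def replay(self):
        # After a crash, any pending journal entries are applied to the store.
        for key, value in self.journal:
            self.store[key] = value
        self.journal.clear()

store = JournaledStore()
store.write("obj1", "data")
print(store.store["obj1"])   # data
```

If the process dies between steps (1) and (2), the journal still holds the entry, and `replay()` brings the backing store up to date; this is the consistency benefit the article attributes to journaling.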
The journal is a small partition on the same disk as the OSD, on a separate SSD (Solid State Drive), or a file on a file system. A common journal size is 10 GB. Ceph uses journaling for speed and consistency.\u00a0<\/span><a class=\"_e75a791d-denali-editor-page-rtfLink\" href=\"https:\/\/ceph.io\/\" target=\"_blank\" rel=\"noopener noreferrer\"><span data-preserver-spaces=\"true\">Ceph<\/span><\/a><span data-preserver-spaces=\"true\">\u00a0supports Btrfs and XFS as journaling file systems for OSDs. A sync operation runs every five seconds, and this interval determines the lifetime of the data in a particular journal. <\/span><\/p>\n<p><span data-preserver-spaces=\"true\">Using SSD disk partitions for journaling results in faster writes of data to the journal, so SSD partitions are recommended for journals. The backing storage can then be comprised of slower disks such as SATA drives. In the case of a journal failure on a Btrfs-based file system, there will be minimal or no data loss, whereas the failure of journal disks hosting OSDs that run on XFS or ext4 file systems will result in data loss; Btrfs is therefore preferred. Btrfs is a copy-on-write file system: if the content of a block is changed, the changed block is written separately, preserving the old block, so old data remains available even after a journal failure. We should not exceed a ratio of four to five OSDs per journal disk when external SSDs are used for the journal.<\/span><\/p>\n<h4><span data-preserver-spaces=\"true\">Ceph Storage task<\/span><\/h4>\n<p><span data-preserver-spaces=\"true\">Figure: Ceph OSD journaling<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">In the figure above, (1) indicates the first write of data from the client to the journal. 
(2) marks the write of data from the journal to the backing storage, which consists of physical disks such as SATA drives.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">2. Number of Placement Groups<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">Setting the correct number of placement groups is an essential step in building a Ceph storage cluster. The formula for the total number of placement groups in a Ceph cluster is:<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">Total PGs = (Total number of OSDs * 100) \/ maximum replication count<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">The maximum replication count is the number of replicas configured for an object. The result must be rounded up to the nearest power of 2; for example, a result of 1888.82 is rounded up to 2048.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">The total number of PGs per pool in the cluster is calculated by:<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">Total PGs per pool = ((Total number of OSDs * 100) \/ maximum replication count) \/ pool count<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">This value also needs to be rounded up to the nearest power of two.<\/span><\/p>\n<h2><strong><span data-preserver-spaces=\"true\">CONCLUSION<\/span><\/strong><\/h2>\n<p><span data-preserver-spaces=\"true\">Compared with other storage solutions available today, Ceph offers a richer feature set. Ceph is an open-source, software-defined storage solution that runs on commodity hardware, which makes it economical. It provides a variety of interfaces for clients to connect to a Ceph cluster, increasing flexibility. For data protection, Ceph does not rely on RAID technology; instead it uses replication, which has proved to be a better solution than RAID. Every component of Ceph is reliable and supports high availability. 
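<\/span><\/p>\n<p><span data-preserver-spaces=\"true\">As a quick aside, the placement-group sizing rule given earlier can be sketched as a small Python helper (hypothetical, not part of Ceph\u2019s own tooling):<\/span><\/p>\n

```python
import math

# Hypothetical helper implementing the PG sizing rule described above;
# the function name and signature are invented for this sketch.
def total_pgs(num_osds, max_replicas, pool_count=1):
    """(OSDs * 100 / replicas) / pools, rounded up to the nearest power of 2."""
    raw = (num_osds * 100) / max_replicas / pool_count
    return 2 ** math.ceil(math.log2(raw))

# 56 OSDs with 3 replicas: 56*100/3 = 1866.67, which rounds up to 2048.
print(total_pgs(56, 3))     # 2048
# Per-pool count with 4 pools: 1866.67 / 4 = 466.67, rounded up to 512.
print(total_pgs(56, 3, 4))  # 512
```

\n<p><span data-preserver-spaces=\"true\">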
Ceph does not have a single point of failure, which remains a major challenge for many other storage solutions available today. One of Ceph\u2019s biggest advantages is its unified nature: it provides block, file, and object storage from a single system, something most other storage systems still cannot do. <\/span><\/p>\n<p><span data-preserver-spaces=\"true\">Ceph is a distributed storage system, and clients can perform fast transactions with it. Instead of the traditional approach of locating data through a metadata server, it introduces a mechanism that lets clients dynamically calculate the location of the data they need. This improves client performance, since clients no longer wait on a metadata server for data locations and contents. Where other storage systems cannot remain reliable across multiple simultaneous failures, Ceph detects and recovers from failures of disks, nodes, networks, and even data centers.<\/span><\/p>\n<p><span data-preserver-spaces=\"true\"> Other storage solutions can typically provide reliability only up to a disk or node failure. Ceph provides a unified, distributed, highly scalable, and reliable object storage solution, which is much needed for today\u2019s and tomorrow\u2019s unstructured data. The world\u2019s storage needs keep growing, so we need a storage system that scales to the exabyte level without sacrificing data reliability or performance; Ceph provides a solution to all these problems. For more details, you can<\/span><a class=\"_e75a791d-denali-editor-page-rtfLink\" href=\"https:\/\/www.sparksupport.com\" target=\"_blank\" rel=\"noopener noreferrer\"><span data-preserver-spaces=\"true\">\u00a0contact us<\/span><\/a><span data-preserver-spaces=\"true\">. 
You can also refer to our\u00a0<\/span><a class=\"_e75a791d-denali-editor-page-rtfLink\" href=\"https:\/\/www.sparksupport.com\/blog\/\" target=\"_blank\" rel=\"noopener noreferrer\"><span data-preserver-spaces=\"true\">blog<\/span><\/a><span data-preserver-spaces=\"true\">\u00a0for more technical articles on different subjects<\/span><\/p>\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>CEPH Storage Ceph is an open-source, software-defined and distributed storage system. A Software-defined Storage (SDS) system means a form of storage virtualization to separate the<\/p>\n","protected":false},"author":29,"featured_media":3964,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[6],"tags":[],"class_list":["post-2021","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-linux"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Ceph Storage: A Distributed Object Storage System -<\/title>\n<meta name=\"description\" content=\"Looking for detailed description of ceph storage system. Get complete guide on Distributed Object Storage System based on modern trends in storage industry.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Ceph Storage: A Distributed Object Storage System -\" \/>\n<meta property=\"og:description\" content=\"Looking for detailed description of ceph storage system. 
Get complete guide on Distributed Object Storage System based on modern trends in storage industry.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/\" \/>\n<meta property=\"article:published_time\" content=\"2019-08-01T12:42:03+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-04-24T08:00:41+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/sparksupport.com\/blog\/wp-content\/uploads\/2016\/03\/1.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"800\" \/>\n\t<meta property=\"og:image:height\" content=\"592\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"vivekrh\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"vivekrh\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"19 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/\"},\"author\":{\"name\":\"vivekrh\",\"@id\":\"https:\/\/sparksupport.com\/blog\/#\/schema\/person\/3cb83ca54ebb2d601e63c1956838be0c\"},\"headline\":\"Ceph Storage: A Distributed Object Storage 
System\",\"datePublished\":\"2019-08-01T12:42:03+00:00\",\"dateModified\":\"2024-04-24T08:00:41+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/\"},\"wordCount\":4212,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/sparksupport.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/sparksupport.com\/blog\/wp-content\/uploads\/2016\/03\/1.jpg\",\"articleSection\":[\"linux\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/\",\"url\":\"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/\",\"name\":\"Ceph Storage: A Distributed Object Storage System -\",\"isPartOf\":{\"@id\":\"https:\/\/sparksupport.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/sparksupport.com\/blog\/wp-content\/uploads\/2016\/03\/1.jpg\",\"datePublished\":\"2019-08-01T12:42:03+00:00\",\"dateModified\":\"2024-04-24T08:00:41+00:00\",\"description\":\"Looking for detailed description of ceph storage system. 
Get complete guide on Distributed Object Storage System based on modern trends in storage industry.\",\"breadcrumb\":{\"@id\":\"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/#primaryimage\",\"url\":\"https:\/\/sparksupport.com\/blog\/wp-content\/uploads\/2016\/03\/1.jpg\",\"contentUrl\":\"https:\/\/sparksupport.com\/blog\/wp-content\/uploads\/2016\/03\/1.jpg\",\"width\":800,\"height\":592,\"caption\":\"Hand play online concept.\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/sparksupport.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Ceph Storage: A Distributed Object Storage System\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/sparksupport.com\/blog\/#website\",\"url\":\"https:\/\/sparksupport.com\/blog\/\",\"name\":\"SparkSupport Blog\",\"description\":\"SparkSupport 
Blogs\",\"publisher\":{\"@id\":\"https:\/\/sparksupport.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/sparksupport.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/sparksupport.com\/blog\/#organization\",\"name\":\"SparkSupport\",\"url\":\"https:\/\/sparksupport.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/sparksupport.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/sparksupport.com\/blog\/wp-content\/uploads\/2019\/08\/cropped-logo-1.jpg\",\"contentUrl\":\"https:\/\/sparksupport.com\/blog\/wp-content\/uploads\/2019\/08\/cropped-logo-1.jpg\",\"width\":216,\"height\":44,\"caption\":\"SparkSupport\"},\"image\":{\"@id\":\"https:\/\/sparksupport.com\/blog\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/sparksupport.com\/blog\/#\/schema\/person\/3cb83ca54ebb2d601e63c1956838be0c\",\"name\":\"vivekrh\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/1d419a49f4ad2945c90b8146197ba3d14d50976b6b98cff4b8d58289f2791f14?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/1d419a49f4ad2945c90b8146197ba3d14d50976b6b98cff4b8d58289f2791f14?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/1d419a49f4ad2945c90b8146197ba3d14d50976b6b98cff4b8d58289f2791f14?s=96&d=mm&r=g\",\"caption\":\"vivekrh\"},\"url\":\"https:\/\/sparksupport.com\/blog\/author\/vivekrh\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Ceph Storage: A Distributed Object Storage System -","description":"Looking for detailed description of ceph storage system. 
Get complete guide on Distributed Object Storage System based on modern trends in storage industry.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/","og_locale":"en_US","og_type":"article","og_title":"Ceph Storage: A Distributed Object Storage System -","og_description":"Looking for detailed description of ceph storage system. Get complete guide on Distributed Object Storage System based on modern trends in storage industry.","og_url":"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/","article_published_time":"2019-08-01T12:42:03+00:00","article_modified_time":"2024-04-24T08:00:41+00:00","og_image":[{"width":800,"height":592,"url":"https:\/\/sparksupport.com\/blog\/wp-content\/uploads\/2016\/03\/1.jpg","type":"image\/jpeg"}],"author":"vivekrh","twitter_card":"summary_large_image","twitter_misc":{"Written by":"vivekrh","Est. 
reading time":"19 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/#article","isPartOf":{"@id":"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/"},"author":{"name":"vivekrh","@id":"https:\/\/sparksupport.com\/blog\/#\/schema\/person\/3cb83ca54ebb2d601e63c1956838be0c"},"headline":"Ceph Storage: A Distributed Object Storage System","datePublished":"2019-08-01T12:42:03+00:00","dateModified":"2024-04-24T08:00:41+00:00","mainEntityOfPage":{"@id":"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/"},"wordCount":4212,"commentCount":0,"publisher":{"@id":"https:\/\/sparksupport.com\/blog\/#organization"},"image":{"@id":"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/#primaryimage"},"thumbnailUrl":"https:\/\/sparksupport.com\/blog\/wp-content\/uploads\/2016\/03\/1.jpg","articleSection":["linux"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/","url":"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/","name":"Ceph Storage: A Distributed Object Storage System -","isPartOf":{"@id":"https:\/\/sparksupport.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/#primaryimage"},"image":{"@id":"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/#primaryimage"},"thumbnailUrl":"https:\/\/sparksupport.com\/blog\/wp-content\/uploads\/2016\/03\/1.jpg","datePublished":"2019-08-01T12:42:03+00:00","dateModified":"2024-04-24T08:00:41+00:00","description":"Looking for detailed description of ceph storage system. 
Get complete guide on Distributed Object Storage System based on modern trends in storage industry.","breadcrumb":{"@id":"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/#primaryimage","url":"https:\/\/sparksupport.com\/blog\/wp-content\/uploads\/2016\/03\/1.jpg","contentUrl":"https:\/\/sparksupport.com\/blog\/wp-content\/uploads\/2016\/03\/1.jpg","width":800,"height":592,"caption":"Hand play online concept."},{"@type":"BreadcrumbList","@id":"https:\/\/sparksupport.com\/blog\/ceph-a-distributed-object-storage-system\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/sparksupport.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Ceph Storage: A Distributed Object Storage System"}]},{"@type":"WebSite","@id":"https:\/\/sparksupport.com\/blog\/#website","url":"https:\/\/sparksupport.com\/blog\/","name":"SparkSupport Blog","description":"SparkSupport 
Blogs","publisher":{"@id":"https:\/\/sparksupport.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/sparksupport.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/sparksupport.com\/blog\/#organization","name":"SparkSupport","url":"https:\/\/sparksupport.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/sparksupport.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/sparksupport.com\/blog\/wp-content\/uploads\/2019\/08\/cropped-logo-1.jpg","contentUrl":"https:\/\/sparksupport.com\/blog\/wp-content\/uploads\/2019\/08\/cropped-logo-1.jpg","width":216,"height":44,"caption":"SparkSupport"},"image":{"@id":"https:\/\/sparksupport.com\/blog\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/sparksupport.com\/blog\/#\/schema\/person\/3cb83ca54ebb2d601e63c1956838be0c","name":"vivekrh","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/1d419a49f4ad2945c90b8146197ba3d14d50976b6b98cff4b8d58289f2791f14?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/1d419a49f4ad2945c90b8146197ba3d14d50976b6b98cff4b8d58289f2791f14?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/1d419a49f4ad2945c90b8146197ba3d14d50976b6b98cff4b8d58289f2791f14?s=96&d=mm&r=g","caption":"vivekrh"},"url":"https:\/\/sparksupport.com\/blog\/author\/vivekrh\/"}]}},"_links":{"self":[{"href":"https:\/\/sparksupport.com\/blog\/wp-json\/wp\/v2\/posts\/2021","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sparksupport.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sparksupport.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sparksupport.com\/blog\/wp-json\/wp\/v2\/users\/29"}],"replies":[{"embe
ddable":true,"href":"https:\/\/sparksupport.com\/blog\/wp-json\/wp\/v2\/comments?post=2021"}],"version-history":[{"count":0,"href":"https:\/\/sparksupport.com\/blog\/wp-json\/wp\/v2\/posts\/2021\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sparksupport.com\/blog\/wp-json\/wp\/v2\/media\/3964"}],"wp:attachment":[{"href":"https:\/\/sparksupport.com\/blog\/wp-json\/wp\/v2\/media?parent=2021"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sparksupport.com\/blog\/wp-json\/wp\/v2\/categories?post=2021"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sparksupport.com\/blog\/wp-json\/wp\/v2\/tags?post=2021"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}