Ceph Storage Clusters

Experience seamless exabyte-level cloud storage without a single point of failure. Our fully distributed architecture, powered by Ceph on specialized hardware, ensures unmatched reliability and flexibility. With root level access, you can tailor your Ceph Storage Clusters to your unique workloads and unlock the full potential of scalable, high-performance cloud storage.

  • Use your NVMe drives as fast primary storage or as a caching layer in front of your spinning hard drives, achieving exceptional performance per GB stored.
  • Replicate an existing Ceph cluster to this cluster for data recovery purposes.
  • Replicate your cluster out to another OpenMetal cloud or to any other Ceph cluster you administer.
  • Your Ceph storage cluster will be provisioned with a recent stable release, or you can work with our team to select a specific version.
  • Ceph provides S3-compatible APIs, which means it can mimic the functionality of AWS S3 and integrate with tools built for S3 storage.

OpenMetal Storage Cloud

Open Source Object Storage With Ceph

Build large-scale Ceph storage clusters using our validated configurations and tested hardware

Ceph is an open source software-defined storage platform that enables cloud computing environments to store and manage data on a distributed basis. Ceph is designed to provide highly reliable, scalable, and cost-effective storage solutions for organizations of any size. It can be used for both private and public clouds, as well as traditional IT environments.

Ceph provides a unified storage layer that allows users to store data in the most efficient way possible while keeping costs low. With Ceph’s advanced features such as data replication, snapshots, erasure coding, and compression, organizations can ensure their data is secure and always available. Additionally, Ceph provides S3-compatible APIs, which means it can mimic the functionality of Amazon S3 and integrate with tools that use S3 storage. 
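Because the object interface is S3-compatible, standard S3 tooling and SDKs can point at a Ceph Object Gateway instead of AWS. The short sketch below uses the boto3 Python SDK; the endpoint URL, access keys, and bucket name are placeholders for illustration, not values from an actual deployment.

```python
# Minimal sketch: talking to a Ceph Object Gateway (RGW) with boto3.
# Endpoint, credentials, and bucket name below are assumed placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.com",   # your cluster's RGW endpoint (assumed)
    aws_access_key_id="YOUR_ACCESS_KEY",      # S3 keys created on the cluster
    aws_secret_access_key="YOUR_SECRET_KEY",
)

s3.create_bucket(Bucket="backups")            # buckets behave like AWS S3 buckets
s3.put_object(Bucket="backups", Key="hello.txt", Body=b"stored on Ceph")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```

The same pattern applies to most S3-aware tools: point the client at the gateway endpoint and supply keys issued by the cluster.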

If you have disk-space-intensive storage needs, such as large-scale S3-compatible object storage, OpenMetal’s Ceph Storage Clusters are ideal.

Ceph Storage Cluster Use Cases


Big Data

Ceph can efficiently handle large data sets created by big data analytics projects. Its distributed architecture and parallel processing capabilities help accelerate data analysis tasks, and its built-in redundancy ensures data reliability, which is critical for big data analytics.


High Performance Computing

Ceph can manage massive datasets included in scientific simulations thanks to its distributed file system capabilities and robust performance. Additionally, Ceph’s efficient storage and management capabilities make it an ideal choice for storing and processing the large datasets required by ML and AI models.


Cloud Storage

Ceph storage clusters are a flexible, open source alternative to traditional cloud storage solutions like AWS S3. Their horizontal scalability provides the elasticity to meet growing data demands, while their object-based storage model aligns seamlessly with cloud-native applications.

Cloud Integrated or Bare Metal Integrated

While our Ceph Storage Clusters can be purchased independently, our customers often integrate them into broader infrastructure solutions, including hosted private clouds and dedicated bare metal servers.

Cloud Integrated

  • Unified Management: Both Ceph and OpenStack can be managed from a single interface, simplifying operations.
  • Flexibility: OpenStack offers a flexible platform for various workloads, including compute, network, and storage.
  • Integration: Ceph and OpenStack are designed to work together seamlessly, providing a cohesive solution.

Bare Metal Integrated

  • Performance: Bare metal servers offer maximum performance and control.
  • Simplicity: Managing bare metal servers is generally simpler than managing a cloud platform.
  • Cost-Effectiveness: Bare metal can be cost-effective for specific workloads, especially those requiring high performance and predictable costs.

“I would definitely recommend OpenMetal. The key points would be an excellent support staff with deep knowledge and troubleshooting steps that are relevant to the issue and don’t begin with the basics. In addition, we have had very minimal downtime issues and our site performance has improved since we started using OpenMetal.”

— Arys Andreou, Head of Infrastructure, and Matt Weston, CFO, MyMiniFactory

Want to explore OpenMetal’s IaaS solutions to support your organization’s innovation efforts?

Learn More

Strategic Data Center Locations

Data Center Locations Map

All our servers are strategically located in Tier III data centers, with two in the United States and one in the Netherlands. Our data centers:

  • Support compliance with various federal regulations and industry standards.
  • Are located near a high concentration of IT, telecommunications, biotechnology, federal government, and international organizations.
  • Provide multiple layers of redundancy.
  • Are ISO 27001 certified to provide additional data security and privacy.

Learn More About Our Data Centers

Contact OpenMetal Sales Team

Chat About Ceph Storage

We’re available to answer questions and provide information.

Chat With Us

Request a Quote

Let us know your requirements and we’ll build you a customized quote.

Request a Quote

Schedule a Consultation

Get a deeper assessment and discuss your unique requirements.

Schedule Consultation

You can also reach our team at sales@openmetal.io

Ceph Storage Cluster FAQ

Ceph provides network storage, including block storage, object storage, and, if needed, file storage via CephFS (which can also be exported over NFS). All OpenMetal clouds come with Ceph-based block and object storage supplied by the private Cloud Core. For more information, see our Operator’s Manual for Ceph.
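As a rough illustration of the block storage side, the sketch below uses the upstream librados and librbd Python bindings (python3-rados, python3-rbd) to create a block device image in a pool. The pool name, client ID, and config path are assumptions for the example, not specifics of an OpenMetal deployment.

```python
import rados
import rbd

# Connect using a local ceph.conf and keyring; paths and IDs are assumed.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="admin")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("volumes")                 # hypothetical RBD pool
    try:
        rbd.RBD().create(ioctx, "demo-image", 10 * 1024**3)  # 10 GiB block image
        print(rbd.RBD().list(ioctx))                      # e.g. ['demo-image']
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```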

Our included onboarding process will get you up and running with your Ceph storage cluster. We also offer optional long-term assisted management if needed. For complex alterations to our standard deployment, we may assist for a fee or connect you with an accredited Ceph professional in our network.

Your Ceph deployment employs five distinct daemon types, all fully distributed and running alongside OpenStack and your VM workloads. A calculated amount of resources has been set aside for Ceph, including your object gateways, Ceph monitors, and Ceph OSDs. Health monitoring software (Datadog) is also included in your deployment and can help pinpoint any contention once a cloud nears capacity.
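For reference, cluster health and raw capacity can also be checked programmatically with the librados Python binding, independent of the included Datadog monitoring. This is a minimal sketch that assumes an admin keyring and /etc/ceph/ceph.conf are present on the host.

```python
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # Equivalent of `ceph health --format json`
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "health", "format": "json"}), b""
    )
    print(json.loads(out)["status"])   # HEALTH_OK, HEALTH_WARN, or HEALTH_ERR

    # Raw capacity across all OSDs (reported in kilobytes)
    stats = cluster.get_cluster_stats()
    print(f"{stats['kb_used'] / stats['kb']:.1%} of raw capacity in use")
finally:
    cluster.shutdown()
```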

No, this is not recommended unless it is an emergency or a similar temporary situation. The system drives are already in use by the operating system and cluster management software, are not intended for heavy use, and are not rated for high disk writes per day.

If you need to maximize your usable disk space, our general preference is Replica 2. This choice is based on the following:
We supply only data center grade SATA SSD and NVMe drives. The mean time between failures (MTBF) of a typical hard drive is 300,000 hours, and most recommendations for, and the history behind, choosing 3 replicas come from hard drive use cases that take this failure rate into account. The MTBF of both our SATA SSDs and our NVMe drives is 2 million hours. Failures will certainly still occur, but they are roughly 6 times less likely than with an HDD.

When Ceph has been hyper-converged onto 3 servers with a replica level of 3 and you lose one of the 3 members, Ceph cannot recover itself out of a degraded state until the lost member is restored or replaced. The data is not at risk, since two copies remain, but the cluster is now effectively at a replica level of 2. When Ceph has been hyper-converged onto 3 servers with a replica level of 2 and you lose one of the 3 members, Ceph can be set to self-heal: any data that has fallen to 1 replica is automatically copied until it returns to a replica level of 2. The danger of data loss exists only during the window when just 1 replica is present.

Disaster recovery processes for data have progressed significantly. The details depend on your specific situation, but if restoring data from backups to production is straightforward and fast, then in the extremely rare case that both remaining replicas fail during the degraded period, you can recover from backups.

Usable Ceph disk space savings are significant (estimated, not exact):

  • HC Small, Replica 3 – 960GB * 3 servers / 3 replicas = 960GB usable
  • HC Small, Replica 2 – 960GB * 3 servers / 2 replicas = 1440GB usable
  • HC Standard, Replica 3 – 3.2TB * 3 servers / 3 replicas = 3.2TB usable
  • HC Standard, Replica 2 – 3.2TB * 3 servers / 2 replicas = 4.8TB usable
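For planning other configurations, the same arithmetic generalizes. The sketch below simply reproduces the estimates above (raw capacity per server × servers ÷ replicas); real-world usable space will be somewhat lower once Ceph overhead and free-space headroom are accounted for.

```python
# Rough usable-capacity estimate: raw TB per server * servers / replicas.
# The hardware figures match those quoted above; results are estimates only.
def usable_tb(per_server_tb: float, servers: int = 3, replicas: int = 3) -> float:
    return per_server_tb * servers / replicas

for name, per_server in [("HC Small", 0.96), ("HC Standard", 3.2)]:
    for replicas in (3, 2):
        print(f"{name}, Replica {replicas}: "
              f"{usable_tb(per_server, replicas=replicas):.2f}TB usable")
```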