Storing sensitive data at scale requires more than basic encryption. You need a distributed architecture that removes single points of failure, access controls that restrict data to authorized users, and infrastructure that keeps your workloads physically isolated from other tenants. Ceph storage clusters deliver object, block, and file storage across multiple nodes with replication and erasure coding built in. Combine that distributed design with private cloud infrastructure and hardware-backed confidentiality features, and you get a foundation that secures data while scaling to petabytes. This post explains why Ceph is foundational for confidential cloud storage, how it distributes and protects data, and how OpenMetal uses it to build secure storage environments for enterprises that can't compromise on compliance or control.
Why Distributed Storage Matters for Data Confidentiality
Traditional storage systems rely on centralized controllers or monolithic arrays. If that controller fails, your storage becomes unavailable. If an attacker gains access to that single node, they can potentially access everything stored there.
Ceph takes a different approach. It distributes data across multiple storage nodes using a placement algorithm called CRUSH (Controlled Replication Under Scalable Hashing). When you write an object or block to a Ceph cluster, the data is broken into smaller pieces and spread across different physical servers. Each piece is replicated or erasure-coded to maintain durability even if hardware fails.
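To make the placement idea concrete, here is a deliberately simplified Python sketch of calculation-based placement. It is not the real CRUSH algorithm, which also models failure domains like hosts and racks and weights devices by capacity, but it shows the core property: any client can compute where an object's replicas live without consulting a central lookup service.

```python
# Simplified illustration of deterministic, calculation-based placement.
# NOT the real CRUSH algorithm -- just the core idea behind it.
import hashlib

OSDS = [f"osd.{i}" for i in range(12)]   # hypothetical storage daemons
PG_COUNT = 128                           # placement groups in the pool
REPLICAS = 3                             # replication factor

def place(object_name: str) -> list[str]:
    # 1. Hash the object name onto a placement group (PG).
    pg = int(hashlib.md5(object_name.encode()).hexdigest(), 16) % PG_COUNT
    # 2. Deterministically rank OSDs for that PG and take the top REPLICAS.
    ranked = sorted(OSDS, key=lambda osd: hashlib.md5(f"{pg}:{osd}".encode()).hexdigest())
    return ranked[:REPLICAS]

print(place("patient-scan-0001.dcm"))    # e.g. ['osd.7', 'osd.2', 'osd.10']
```

Because the mapping is pure calculation, every client and every storage daemon arrives at the same answer, which is what lets Ceph scale without a central metadata bottleneck.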
This design removes single points of failure. If one storage node goes offline, the cluster continues serving data from replicas on other nodes. If you need more capacity or throughput, you add nodes and the cluster automatically rebalances. This distributed architecture also limits the blast radius of any potential security breach—data is not centralized in one location where it could all be compromised at once.
For enterprises handling sensitive data, this matters because it aligns with security principles like defense in depth and least privilege. Data is spread across multiple servers, access is controlled at the cluster level, and replication traffic stays within private networks.
How Ceph Handles Object, Block, and File Storage
Ceph provides three storage interfaces from a single cluster:
Object Storage (RADOS Gateway): Applications access data using S3- or Swift-compatible APIs. This is useful for unstructured data like medical imaging files, log archives, or machine learning datasets. Each object is stored with metadata, and access is controlled through bucket policies and authentication tokens.
Block Storage (RBD): Virtual machines and containers mount Ceph block devices as if they were local disks. This is the interface you use when attaching encrypted volumes to workloads. Block devices support snapshots, clones, and thin provisioning, which makes them useful for stateful applications like databases or healthcare data platforms.
File Storage (CephFS): Multiple clients can mount a shared filesystem backed by Ceph. This is useful for workloads that need POSIX semantics—think high-performance computing, collaborative research environments, or shared application directories.
All three interfaces use the same underlying storage pool. This means you don’t need separate storage systems for different workload types. You deploy one Ceph cluster and expose the interface that matches your application requirements.
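As a quick illustration of the object interface, the sketch below uses boto3 against an S3-compatible RADOS Gateway endpoint. The endpoint URL, bucket name, and credentials are placeholders; substitute the values for your own cluster.

```python
# Minimal sketch of writing and reading an object through the RADOS Gateway's
# S3-compatible API. All endpoint and credential values are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.internal:8080",  # hypothetical RGW endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

s3.create_bucket(Bucket="imaging-archive")
s3.put_object(Bucket="imaging-archive", Key="scan-0001.dcm", Body=b"...")

obj = s3.get_object(Bucket="imaging-archive", Key="scan-0001.dcm")
print(obj["ContentLength"], "bytes retrieved")
```

Block and file access work the same way from the application's point of view: the workload sees a disk or a mounted filesystem, and Ceph handles placement and replication underneath.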
Hardware Configurations That Balance Performance and Capacity
Storage performance depends on hardware. If you’re using slow disks or under-provisioned networking, your workloads will be bottlenecked regardless of how well Ceph is configured.
OpenMetal Ceph storage clusters use enterprise-grade servers designed specifically for storage workloads. The Large V4 storage configuration, for example, combines high-speed NVMe cache drives with large-capacity enterprise HDDs. The NVMe drives act as a cache layer for frequently accessed data, while the HDDs provide bulk capacity at lower cost. This hybrid design balances performance and economics—you get fast access to hot data without paying NVMe prices for everything.
Each server includes dual 10 Gbps network links, for 20 Gbps of total bandwidth. One link handles client traffic (applications reading and writing data), while the other carries Ceph's internal replication traffic. This separation prevents replication from saturating your client-facing network.
Network isolation is critical for confidentiality. Ceph replication traffic—when data is copied between storage nodes for durability—remains within customer VLANs or VXLAN overlays. This means your data never traverses shared public infrastructure. It stays inside private networks controlled by your organization, reducing exposure to traffic analysis or interception.
Confidential Computing and Storage: Protecting Data in Use
Encryption at rest and in transit are standard requirements for regulated data. But what about data while it’s being processed? That’s where confidential computing enters the picture.
Hardware-backed confidential computing uses Trusted Execution Environments (TEEs) to isolate workloads at the processor level. Intel Trust Domain Extensions (TDX) and Software Guard Extensions (SGX) are two examples of TEE technologies available on OpenMetal’s V4 generation servers. These technologies create secure enclaves where your code and data are cryptographically isolated from the host operating system, hypervisor, and other virtual machines.
When you run a sensitive workload—say, training a machine learning model on encrypted medical records—the compute nodes use TDX to ensure that data in memory cannot be accessed by unauthorized processes. Even system administrators or cloud operators cannot inspect what’s happening inside the TEE. Attestation mechanisms provide cryptographic proof that your workload is running in a genuine trusted environment.
This isolation extends to storage. When your workload reads data from a Ceph volume, that data enters the TEE and remains encrypted in memory. When it writes data back to storage, the data is encrypted before leaving the secure enclave. The result is protection across the entire data lifecycle: at rest on disk, in transit across the network, and in use during computation.
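The storage-facing half of that lifecycle can be sketched in a few lines of Python: data is encrypted inside the workload before it is written to Ceph and only decrypted after it is read back. This is a conceptual example using the cryptography library and a placeholder S3 client; a real deployment would fetch keys from a key manager rather than generating them inline, and the in-memory isolation itself comes from the TEE hardware, not from this code.

```python
# Conceptual sketch: plaintext never leaves the workload. Endpoint and
# credential values are placeholders; key management is out of scope here.
import boto3
from cryptography.fernet import Fernet

s3 = boto3.client("s3", endpoint_url="https://rgw.example.internal:8080",
                  aws_access_key_id="YOUR_ACCESS_KEY",
                  aws_secret_access_key="YOUR_SECRET_KEY")

key = Fernet.generate_key()   # in practice, retrieved from a key manager inside the TEE
cipher = Fernet(key)

record = b"patient-id=1234; diagnosis=..."
s3.put_object(Bucket="imaging-archive", Key="records/1234",
              Body=cipher.encrypt(record))          # ciphertext goes to Ceph

blob = s3.get_object(Bucket="imaging-archive", Key="records/1234")["Body"].read()
assert cipher.decrypt(blob) == record               # plaintext exists only in the workload
```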
Confidential computing is particularly relevant for industries with strict compliance requirements. Healthcare organizations need to protect patient data under HIPAA. Financial institutions must secure transaction records and customer information under regulations like PCI DSS and GDPR. Blockchain and cryptocurrency platforms need to protect private keys and wallet data from exposure.
By combining Ceph storage with confidential computing hardware, you create an environment where sensitive data is protected even if the underlying infrastructure is compromised. This is the difference between trusting your cloud provider and verifying your security at the hardware level.
OpenStack Orchestration: Keystone, Neutron, and Ceph Integration
OpenMetal private clouds are orchestrated with OpenStack, which provides the control plane for managing compute, networking, and storage resources. Two OpenStack services are particularly important for data confidentiality: Keystone and Neutron.
Keystone handles identity and access management. It authenticates users, assigns roles, and enforces permissions across your cloud. When a user or application tries to access a Ceph volume, Keystone verifies their credentials and checks whether they have the necessary permissions. This centralized access control prevents unauthorized access at the API layer before data is even retrieved from storage.
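For example, a minimal openstacksdk snippet authenticates against Keystone and then makes every subsequent API call with the scoped token it receives. The endpoint and credential values below are placeholders for your own cloud.

```python
# Minimal sketch of Keystone-authenticated API access with the OpenStack SDK.
import openstack

conn = openstack.connect(
    auth_url="https://keystone.example.internal:5000/v3",  # hypothetical endpoint
    project_name="confidential-storage",
    username="storage-admin",
    password="REDACTED",
    user_domain_name="Default",
    project_domain_name="Default",
)

# Each call below is authorized against the roles Keystone assigned to this
# user in this project; requests outside those permissions are rejected.
for vol in conn.block_storage.volumes():
    print(vol.name, vol.status)
```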
Neutron provides virtualized networking. It creates isolated networks for different projects or workloads, assigns IP addresses, and enforces security group rules. When you deploy workloads that need to access Ceph storage, Neutron ensures that traffic flows through the correct private network and that only authorized workloads can reach storage endpoints.
Together, Keystone, Neutron, and Ceph enforce data confidentiality through a combination of physical isolation, access control policies, security groups, and encrypted traffic. You define who can access what, which networks they can use, and how data is replicated. This layered approach reduces the attack surface and ensures that even if one control fails, others remain in place.
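Continuing with a connection authenticated as above, the sketch below shows what those layers look like in practice: an isolated project network, a security group that only permits the traffic needed to reach storage, and a Ceph-backed volume created through Cinder. Names, CIDRs, and port values are illustrative, and the volume lands on Ceph only if your volume type is backed by RBD, as it is on OpenMetal clouds.

```python
# Sketch: isolated network, restrictive security group, and a Ceph-backed volume.
import openstack

conn = openstack.connect(cloud="openmetal")   # credentials from clouds.yaml

net = conn.network.create_network(name="storage-private")
conn.network.create_subnet(network_id=net.id, name="storage-subnet",
                           ip_version=4, cidr="10.20.30.0/24")

sg = conn.network.create_security_group(name="ceph-clients")
conn.network.create_security_group_rule(
    security_group_id=sg.id, direction="egress", protocol="tcp",
    port_range_min=3300, port_range_max=3300,   # Ceph monitors (msgr2);
    remote_ip_prefix="10.20.30.0/24",           # OSDs also use 6800-7300
)

vol = conn.block_storage.create_volume(size=100, name="db-data")
print(vol.id, vol.status)
```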
Fixed Pricing and Compliance: Predictable Costs for Regulated Data
Public cloud storage often uses variable pricing based on usage. You pay for capacity, bandwidth, API requests, and egress fees. For regulated data storage, this creates two problems. First, costs are unpredictable, which complicates budgeting and compliance reporting. Second, egress fees can make it expensive to move data out, which creates vendor lock-in.
OpenMetal uses fixed monthly pricing based on dedicated hardware. You pay for complete storage clusters without metering or variable egress fees. This structure supports predictable financial planning and makes it easier to generate compliance reports where storage costs must remain consistent.
Fixed pricing also aligns with data sovereignty requirements. When you know exactly which hardware stores your data and how much you’re paying for it, you can more easily demonstrate compliance with regulations that require data to remain in specific jurisdictions or under specific controls.
When You Need Confidential Cloud Storage with Ceph
Not every workload needs the level of isolation and control that OpenMetal provides. If you’re storing public datasets or running stateless applications, public cloud object storage may be sufficient. But if you’re handling regulated data, proprietary algorithms, or sensitive customer information, you need infrastructure designed for confidentiality.
You should consider confidential cloud storage with Ceph if:
- Your organization operates in a highly regulated industry like finance, healthcare, or government, where compliance requirements mandate strict data controls and auditability
- You’re building AI or machine learning systems that process sensitive datasets and you need to protect both the data and the models from exposure
- You’re running blockchain or Web3 infrastructure where private keys, wallet data, or smart contract logic must be protected at the hardware level
- You need to scale storage to multi-petabyte capacity without sacrificing performance or introducing single points of failure
- You require physical isolation from other tenants and want to avoid the risks inherent in multi-tenant public cloud environments
OpenMetal’s Ceph-based private clouds provide the architecture to meet these requirements. You get distributed storage that scales as you grow, hardware-backed confidentiality features that protect data in use, and OpenStack orchestration that enforces access controls across compute, storage, and networking.
Building Your Confidential Storage Environment
If you’re ready to deploy confidential cloud storage with Ceph, start by evaluating your data classification and compliance requirements. Identify which datasets require encryption at rest, which need confidentiality during processing, and what your performance and capacity targets are.
Next, consider your hardware configuration. OpenMetal’s V4 servers support Intel TDX and SGX for confidential computing workloads, and you can configure storage nodes with the capacity and throughput your applications need. If you’re running GPU-accelerated workloads alongside storage—such as training machine learning models on encrypted datasets—you can add GPU servers to your environment and pass them through to TDX-enabled virtual machines.
Network design is also important. Segment your storage traffic from your application traffic, and use VLANs or VXLAN overlays to isolate replication traffic within private networks. This prevents sensitive data from traversing shared infrastructure and reduces the risk of traffic interception.
Finally, integrate your Ceph cluster with your orchestration and access control systems. Use Keystone to manage user identities and permissions, and use Neutron to enforce network segmentation. Deploy monitoring tools to track storage health, replication status, and access patterns, so you can detect anomalies and respond before they become incidents.
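If you want to script that kind of health check, the librados Python binding (python3-rados) can query cluster status directly. The sketch below assumes a ceph.conf and keyring are available on the monitoring host; the exact JSON field names can vary slightly between Ceph releases.

```python
# Sketch of a programmatic health and replication check via librados.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ret, out, err = cluster.mon_command(
        json.dumps({"prefix": "status", "format": "json"}), b"")
    status = json.loads(out)
    print("health:", status["health"]["status"])   # e.g. HEALTH_OK
    degraded = [s for s in status["pgmap"].get("pgs_by_state", [])
                if "degraded" in s["state_name"]]
    print("degraded PG states:", degraded or "none")
finally:
    cluster.shutdown()
```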
OpenMetal’s platform gives you the tools and control to build this environment without needing to assemble hardware, configure networking, or tune Ceph by hand. You get production-ready infrastructure designed to handle sensitive data at scale.
If you’re ready to explore confidential cloud storage with Ceph, contact our team to get started.