The OpenMetal Hosted Private Cloud XL v4 is a three-node OpenStack and Ceph cluster, with each node running dual Intel Xeon Gold 6530 processors, 1TB of DDR5 4800MHz RAM, and 25.6TB of Micron 7500 MAX NVMe. Across the cluster, that is 192 physical cores, 3TB of pooled RAM, and 76.8TB of raw NVMe delivered through Ceph’s distributed storage layer. The cluster deploys in under 45 seconds via the OpenMetal control plane, and Day 2 operations are handled by OpenMetal. There are no OpenStack licensing costs, no Ceph licensing costs, and no VMware fees. For teams running large-scale virtualization or multi-tenant SaaS on VMware, the XL v4 Hosted Private Cloud provides the full OpenStack API surface and Ceph storage architecture at a fraction of the operational overhead of running those platforms in-house.

Key Takeaways

  • 192 physical cores / 384 threads across 3 nodes — no vCPU oversubscription at the hypervisor level; VM cores map to physical cores on dedicated hardware
  • 3TB DDR5 4800MHz pooled RAM with Ceph distributed storage across 76.8TB raw NVMe — supports large-scale multi-tenant VM fleets, CI/CD infrastructure, and data-intensive OpenStack workloads
  • Deploys in under 45 seconds via the OpenMetal control plane — full OpenStack + Ceph cluster with Horizon dashboard, OpenStack API, and S3-compatible object storage available immediately
  • Day 2 operations included — OpenMetal monitors, alerts, and maintains the OpenStack and Ceph platform layer; customers manage workloads, not platform infrastructure
  • No licensing overhead — no VMware vSphere, no NSX, no vSAN fees; OpenStack and Ceph are fully open source with no per-VM or per-socket licensing

Cluster Configuration

Component        | Per Node                        | Cluster Total (3 Nodes)
Processor        | 2x Intel Xeon Gold 6530         | 6x Intel Xeon Gold 6530
Cores / Threads  | 64 / 128                        | 192 / 384
Memory           | 1024GB DDR5 4800MHz ECC         | 3072GB (3TB) DDR5
Data NVMe        | 4x 6.4TB Micron 7500 MAX        | 12x 6.4TB (76.8TB raw)
Boot Drives      | 2x 960GB RAID 1                 | — (per node)
Private Network  | 20Gbps LACP                     | 20Gbps inter-node mesh
Public Bandwidth | 6 Gbps                          | 6 Gbps (shared cluster uplink)
TDX              | Enabled by default              | Available on all individual bare metal nodes
Pricing          | Fixed monthly — see openmetal.io/cloud-deployment-calculator

Ceph pools the 76.8TB of raw NVMe across all three nodes. Usable capacity depends on Ceph replication factor: at 3x replication, usable capacity is approximately 25.6TB; at erasure coding (4+2), approximately 51.2TB. OpenMetal configures the replication policy at deployment.
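
For reference, the arithmetic behind those usable-capacity figures can be sketched in a few lines, assuming the 3x replication and 4+2 erasure-coding profiles named above and ignoring the free-space headroom Ceph needs for recovery:

```python
# Rough usable-capacity math for the 76.8TB raw Ceph pool described above.
# Assumes 3x replication and a 4+2 erasure-coding profile; real capacity
# planning should also reserve headroom (often ~15-20%) for recovery.

RAW_TB = 12 * 6.4  # 12 OSDs x 6.4TB Micron 7500 MAX = 76.8TB raw

def usable_replicated(raw_tb: float, replicas: int = 3) -> float:
    """Each object is stored `replicas` times, so usable = raw / replicas."""
    return raw_tb / replicas

def usable_erasure_coded(raw_tb: float, k: int = 4, m: int = 2) -> float:
    """k data chunks + m coding chunks per object, so usable = raw * k / (k + m)."""
    return raw_tb * k / (k + m)

print(f"3x replication: {usable_replicated(RAW_TB):.1f} TB usable")    # ~25.6 TB
print(f"EC 4+2:         {usable_erasure_coded(RAW_TB):.1f} TB usable")  # ~51.2 TB
```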

[Figure: Cluster architecture diagram of the OpenMetal XL v4 Hosted Private Cloud showing three nodes with OpenStack and Ceph]

What OpenMetal Manages

  • Ceph cluster health monitoring — OSD status, pool utilization, replication health, recovery operations
  • OpenStack platform updates — controller upgrades, API endpoint maintenance, Horizon dashboard updates
  • Alerting and incident response — OpenMetal monitors platform-level events and responds to infrastructure alerts; application-level monitoring remains the customer’s responsibility
  • Network configuration — VLAN provisioning, floating IP management, security group defaults
  • Storage management — Ceph pool configuration, capacity planning, OSD replacement coordination

Customers retain full control of the OpenStack API, Horizon dashboard, VM lifecycle, project/tenant management, network topology, and all workloads running on the cluster.

OpenStack API and Horizon Access

  • Nova — VM lifecycle management (create, resize, snapshot, migrate)
  • Neutron — software-defined networking, VLANs, floating IPs, security groups
  • Cinder — block storage volumes backed by Ceph RBD
  • Swift / RADOSGW — S3-compatible object storage via Ceph RADOS Gateway
  • Glance — VM image registry for custom and pre-built images
  • Keystone — multi-tenant identity and access management
  • Horizon — web-based dashboard for VM, storage, and network management

The OpenStack API is fully compatible with OpenStack SDK, Terraform (OpenStack provider), Ansible (OpenStack modules), and Pulumi. Infrastructure-as-code workflows that target the OpenStack API require no modification when moving from other OpenStack deployments to OpenMetal.
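
As a minimal illustration, booting a VM against the cluster's Nova endpoint with openstacksdk might look like the sketch below; the cloud name, image, flavor, and network names are placeholders rather than values defined by OpenMetal:

```python
# Minimal openstacksdk sketch: boot a VM against the cluster's OpenStack API.
# The cloud name "openmetal" assumes a matching entry in clouds.yaml; the
# image, flavor, and network names are illustrative placeholders.
import openstack

conn = openstack.connect(cloud="openmetal")

image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.large")
network = conn.network.find_network("private")

server = conn.compute.create_server(
    name="app-server-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # ACTIVE once Nova finishes provisioning
```

The same resources can be expressed declaratively with the Terraform OpenStack provider or Ansible OpenStack modules; the API calls underneath are identical.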

Ceph Storage Architecture

Ceph distributes the 76.8TB of NVMe storage across all three nodes. Each node contributes four Micron 7500 MAX drives as OSDs; Ceph monitor (MON) and manager (MGR) services run on the controller nodes with automatic failover. The storage cluster self-heals on OSD failure: if a single drive fails on any node, Ceph rebuilds the affected data from the remaining OSDs without manual intervention.

The Micron 7500 MAX provides 1.1M random read IOPS and 400K random write IOPS per OSD. Across 12 OSDs, the cluster delivers aggregate NVMe throughput appropriate for high-density VM storage — database volumes, container persistent storage, and CI/CD artifact stores all run on the same Ceph pool with quality-of-service controls available through Cinder volume types.
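
A hedged sketch of provisioning a Ceph-backed Cinder volume through openstacksdk follows; the volume type name is hypothetical, and any QoS limits it carries are defined per deployment rather than by this example:

```python
# Sketch: create a Ceph RBD-backed Cinder volume and attach it to a server.
# The volume type "high-iops" is a placeholder; actual type names and the
# QoS specs bound to them are configured per deployment.
import openstack

conn = openstack.connect(cloud="openmetal")

volume = conn.block_storage.create_volume(
    name="db-data-01",
    size=200,                 # GB, carved from the Ceph pool
    volume_type="high-iops",  # hypothetical type carrying QoS limits
)
conn.block_storage.wait_for_status(volume, status="available")

server = conn.compute.find_server("app-server-01")
conn.compute.create_volume_attachment(server, volume_id=volume.id)
```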

S3-compatible object storage is available via RADOS Gateway (RADOSGW) without additional software. Bucket creation, access key management, and lifecycle policies follow the S3 API — tools that target AWS S3 work against OpenMetal’s RADOSGW endpoint without code changes.
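
A minimal boto3 sketch against a RADOSGW endpoint might look like the following; the endpoint URL and credentials are placeholders, not actual OpenMetal values:

```python
# Sketch: use the S3-compatible RADOSGW endpoint with boto3.
# Endpoint and keys below are placeholders; real values come from the
# EC2-style credentials issued for your object storage user.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.openmetal.cloud",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="build-artifacts")
s3.upload_file("artifact.tar.gz", "build-artifacts", "releases/artifact.tar.gz")
print(s3.list_objects_v2(Bucket="build-artifacts")["KeyCount"])
```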

VMware Migration

For teams running VMware vSphere or VMware Cloud on AWS, the XL v4 Hosted Private Cloud provides a migration path that eliminates per-socket and per-VM licensing overhead.

VMware                        | OpenMetal HPC XL v4
vSphere ESXi hypervisor       | KVM (via OpenStack Nova)
vSAN distributed storage      | Ceph RBD / RADOS
NSX-T network virtualization  | Neutron + OVN
vCenter management            | Horizon + OpenStack API
vSphere licensing             | None (OpenStack + Ceph are open source)
Per-socket / per-VM fees      | None

VM migration from VMware uses standard tools: virt-v2v for offline migration, or snapshot-based migration workflows via Glance image import. OpenMetal’s team provides migration guidance; ramp pricing is available for customers transitioning from an existing VMware deployment.
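
As an illustrative sketch (not OpenMetal's prescribed procedure), a disk converted to qcow2 by virt-v2v could be uploaded to Glance with openstacksdk as shown below; the cloud name and file path are placeholders:

```python
# Sketch: after converting a VMware guest with virt-v2v (qcow2 output),
# upload the disk to Glance so Nova can boot it. Path and names are
# placeholders for illustration.
import openstack

conn = openstack.connect(cloud="openmetal")

image = conn.create_image(
    name="migrated-erp-app-01",
    filename="/var/tmp/erp-app-01-sda.qcow2",  # virt-v2v output
    disk_format="qcow2",
    container_format="bare",
    wait=True,
)
print(image.id)  # boot a Nova instance from this image to complete the cutover
```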

Networking

The three-node cluster connects via a 20Gbps LACP-bonded private mesh — inter-node traffic for Ceph replication, Nova live migration, and OpenStack control plane communication runs on this unmetered private fabric. Each node contributes 20Gbps to the east-west mesh, providing sufficient bandwidth for Ceph’s replication writes even under heavy storage load.

Public egress is billed on the 95th-percentile model across the cluster’s shared 6Gbps public uplink: usage is sampled throughout the billing period and the top 5% of samples is discarded before the billable rate is determined. Multi-VM workloads that produce bursty public traffic (batch jobs, scheduled exports, API endpoints) therefore do not accumulate per-GB egress charges for isolated burst events.
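
A small sketch of how 95th-percentile sampling drops short bursts, using hypothetical 5-minute samples over a 30-day period:

```python
# Illustration of 95th-percentile billing: short bursts fall into the
# discarded top 5% of samples. Sample values are hypothetical.
samples_mbps = [120] * 8500 + [4800] * 140  # mostly steady traffic, a few bursts

samples_mbps.sort()
index = int(len(samples_mbps) * 0.95)   # everything above this index is discarded
billable = samples_mbps[index]

print(f"Billable rate: {billable} Mbps")  # 120 Mbps; the 4800 Mbps bursts don't count
```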

Security and Confidential Computing

Intel TDX is enabled by default on all XL v4 nodes in the cluster. Guest VMs can request TDX Trust Domain isolation through the OpenStack Nova API [VERIFY OpenStack TDX flavor configuration]. Each Trust Domain runs with hardware-encrypted memory, isolated from the OpenStack hypervisor layer and from other guest VMs. For SaaS providers running multiple customer workloads on the cluster, TDX provides hardware-enforced isolation between tenants at the OpenStack VM granularity.

Intel SGX enclaves (up to 128GB EPC per node) are available alongside TDX for application-level confidential computing workloads.

HIPAA and Regulatory Compliance

OpenMetal executes HIPAA BAAs at the organizational level. The XL v4 Hosted Private Cloud is deployed across four locations. Ashburn is hosted at an NTT DATA facility holding HIPAA as a facility-operator certification, alongside SOC 1/2 Type II, ISO 27001, PCI DSS, and NIST 800-53 HIGH. Los Angeles (Digital Realty): SOC 2, SOC 3, ISO 27001, PCI DSS — HIPAA at that location is OpenMetal organizational-level only. Amsterdam (Digital Realty): SOC 1/2, PCI-DSS, ISO 27001, ISO 50001, ISO 22301. Singapore (Digital Realty): BCA Green Mark Platinum. Multi-tenant OpenStack environments running PHI should combine OpenMetal’s HIPAA BAA with TDX guest isolation for the appropriate technical safeguard posture.

Recommended Workloads

Large-scale virtualization and VM hosting

The 192-core, 3TB RAM cluster comfortably hosts 100+ production VMs depending on per-VM sizing. Nova live migration moves VMs between nodes without downtime for maintenance or rebalancing. For MSPs and hosting providers, the multi-tenant project model in Keystone maps cleanly to customer accounts — each customer sees only their own VMs, networks, and storage volumes.
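
A hedged openstacksdk sketch of mapping one customer to an isolated Keystone project follows; the project, user, and role names are placeholders:

```python
# Sketch: create a per-customer project and grant a user access to it.
# Names, domain, and password are illustrative placeholders.
import openstack

conn = openstack.connect(cloud="openmetal")

project = conn.identity.create_project(
    name="customer-acme",
    domain_id="default",
    description="Acme Corp tenant",
)
user = conn.identity.create_user(
    name="acme-admin",
    password="CHANGE_ME",
    default_project_id=project.id,
    domain_id="default",
)
role = conn.identity.find_role("member")
conn.identity.assign_project_role_to_user(project, user, role)
```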

Kubernetes on OpenStack

Kubernetes clusters deployed on top of OpenStack Nova VMs use Cinder persistent volumes for pod storage and Neutron load balancers for service exposure. With 192 cores across the cluster, large Kubernetes node pools run without co-tenant interference. The OpenStack Magnum service [VERIFY availability on OpenMetal HPC] can provision Kubernetes clusters directly from the OpenStack API.

CI/CD infrastructure

Jenkins, GitLab CI, GitHub Actions runners, and Buildkite agents deployed on OpenStack VMs benefit from Nova’s on-demand provisioning. Ephemeral build VMs spin up, run a pipeline, and terminate, all within the fixed-cost billing model. Ceph object storage serves artifact repositories (container registries, package caches, build artifact stores) via the S3-compatible RADOSGW endpoint.
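
A minimal sketch of that ephemeral lifecycle using openstacksdk's cloud layer; the image, flavor, and network names are placeholders, and runner registration would typically be handled via cloud-init user data:

```python
# Sketch: spin up an ephemeral build VM for one pipeline run, then delete it.
# Image, flavor, and network names are illustrative placeholders.
import openstack

conn = openstack.connect(cloud="openmetal")

runner = conn.create_server(
    name="ci-runner-1234",
    image="ubuntu-22.04",
    flavor="m1.medium",
    network="ci-net",
    wait=True,
)

# ... pipeline executes on the runner ...

conn.delete_server(runner.id, wait=True)  # no extra cost beyond the fixed cluster price
```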

Multi-tenant SaaS platforms

SaaS providers that provision per-customer OpenStack projects get tenant isolation at the Keystone level, network isolation at the Neutron level, and hardware isolation at the TDX level — three independent isolation boundaries for a single monthly price. The Ceph storage pool supports per-tenant volume quotas and QoS limits through Cinder volume types.
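
Per-tenant limits can be sketched as follows, assuming openstacksdk's cloud-layer quota helpers; the quota values are illustrative only:

```python
# Sketch: cap a tenant project's compute and block-storage consumption.
# Quota values below are examples, not recommendations.
import openstack

conn = openstack.connect(cloud="openmetal")

project = conn.identity.find_project("customer-acme")

conn.set_compute_quotas(project.id, cores=32, ram=131072, instances=20)  # RAM in MB
conn.set_volume_quotas(project.id, gigabytes=2048, volumes=40)
```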

Data pipelines and analytics infrastructure

Apache Spark, Presto, and Kafka clusters deployed on OpenStack VMs use Ceph RBD volumes for persistent storage and RADOSGW for S3-compatible data lake storage. The 20Gbps inter-node mesh provides sufficient east-west bandwidth for Spark shuffle operations and Kafka replication. Fixed monthly pricing makes cost-per-query modeling straightforward compared to pay-per-use cloud analytics services.
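
A hedged PySpark sketch of pointing the s3a connector at a RADOSGW endpoint; the endpoint, credentials, and bucket are placeholders, and the hadoop-aws jars must be available on the Spark classpath:

```python
# Sketch: read a data lake hosted on the cluster's S3-compatible RADOSGW
# endpoint from Spark via the s3a connector. Endpoint, keys, and bucket
# names are illustrative placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("ceph-data-lake")
    .config("spark.hadoop.fs.s3a.endpoint", "https://objects.example.openmetal.cloud")
    .config("spark.hadoop.fs.s3a.access.key", "ACCESS_KEY")
    .config("spark.hadoop.fs.s3a.secret.key", "SECRET_KEY")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

df = spark.read.parquet("s3a://data-lake/events/")
df.groupBy("event_type").count().show()
```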

“Public clouds are really too expensive. You don’t have to spend that level of investment with a public cloud. The answer to that is a private cloud. But you need a trusted expert that you can rely on, and that trusted expert is OpenMetal.”

Tom Fanelli, CEO & Co-Founder — Convesio

Ready to Deploy an XL v4 Hosted Private Cloud?

Tell us about your workload — virtualization platform, Kubernetes cluster, CI/CD infrastructure, or VMware migration — and we’ll help you configure the right XL v4 cluster.

Get an XL v4 Hosted Private Cloud Quote   Schedule a Consultation

Deployment Options

The XL v4 Hosted Private Cloud is available in Ashburn, Los Angeles, Amsterdam, and Singapore. Proof of Concept clusters are available for testing. Ramp pricing is available for migrations from VMware or other providers.

Location         | Region  | Certifications                                                           | Location Page
Ashburn, VA      | US-East | SOC 1/2 Type II, ISO 27001, PCI DSS, NIST 800-53 HIGH, HIPAA (facility) | Ashburn
Los Angeles, CA  | US-West | SOC 2, SOC 3, ISO 27001, PCI DSS                                        | Los Angeles
Amsterdam        | EU-West | SOC 1/2, PCI-DSS, ISO 27001, ISO 50001, ISO 22301                       | Amsterdam
Singapore        | Asia    | BCA Green Mark Platinum                                                 | Singapore

→ Pricing: openmetal.io/cloud-deployment-calculator

Get an XL v4 Hosted Private Cloud Quote

Tell us about your infrastructure needs and we’ll provide a custom quote for the XL v4 Hosted Private Cloud.

  • 3-node cluster: OpenStack + Ceph, 192 cores, 3TB RAM, 76.8TB NVMe, Day 2 ops included
  • VMware migration: Ramp pricing and migration guidance available
  • Custom node counts: Larger clusters available — contact OpenMetal

Ramp pricing available for migrations. All deployments include fixed monthly pricing, 99.96%+ network SLA, and DDoS protection.



Product specifications, pricing, and availability may change due to market conditions and other factors. For the most current information, please contact the OpenMetal team directly.