The OpenMetal Hosted Private Cloud on XXL v4 hardware delivers a three-node OpenStack + Ceph cluster built on the highest-density compute and storage nodes in the v4 generation — ready in under 45 seconds, with Day 2 operations included. Three XXL v4 nodes provide a cluster-level foundation of 192 physical cores, 6TB of DDR5 memory, and 115.2TB of raw NVMe storage, pooled into Ceph and managed through OpenStack’s standard API surface. No VMware licensing, no vSphere fees, no OpenStack support contracts. OpenMetal handles hardware provisioning, OpenStack lifecycle, and Ceph cluster management — you control the workloads.
This page covers the three-node Hosted Private Cloud cluster configuration. For single-server bare metal deployment of the same hardware, see the Bare Metal XXL v4 page.
Key Takeaways
- 192 physical cores, 6TB RAM, 115.2TB NVMe — pooled and managed: Three XXL v4 nodes give a cluster-level resource pool that eliminates the per-VM limits of public cloud instance types. Tenant VMs draw from the full cluster pool without per-resource surcharges.
- Under 45 seconds to a live OpenStack cluster: OpenMetal’s provisioning pipeline deploys a production-ready OpenStack environment on three nodes in under 45 seconds. No multi-day provisioning queues, no professional services engagement required to stand up the cluster.
- VMware replacement — no hypervisor licensing costs: The XXL v4 Hosted Private Cloud runs KVM-based OpenStack Nova without VMware licensing. For teams leaving vSphere due to Broadcom pricing changes, this eliminates per-socket and per-core licensing costs while retaining the OpenStack API surface familiar to infrastructure teams.
- TDX active across the cluster by default: All three XXL v4 nodes ship TDX-enabled. OpenStack Nova can schedule TDX VM flavors to any node in the cluster, providing hardware-enforced tenant isolation without per-node configuration.
- Day 2 operations included — hardware, OpenStack, and Ceph: OpenMetal’s infrastructure team handles hardware maintenance, OpenStack version lifecycle, and Ceph rebalancing. Your team manages workloads and VM configuration; OpenMetal manages the platform.
- Fixed cluster pricing — not per-VM metering: Cluster pricing is a fixed monthly rate regardless of how many VMs run inside it. See openmetal.io/products/hosted-private-cloud for current cluster pricing.
Cluster Configuration at a Glance
| Component | Per Node | 3-Node Cluster Total |
|---|---|---|
| Processor | 2x Intel Xeon Gold 6530 | 6x Xeon Gold 6530 |
| Total Cores / Threads | 64C / 128T | 192 cores / 384 threads |
| Base / Max Turbo | 2.1 GHz / 4.0 GHz | — |
| L3 Cache | 320 MB | 960 MB total |
| Memory | 2048GB DDR5 4800 MHz | 6144GB (6TB) |
| Boot Storage | 2x 960GB RAID 1 | 3x RAID 1 pairs (OS only) |
| Data Storage | 6x 6.4TB NVMe | 18 drives / 115.2TB raw |
| Ceph Usable Storage | — | ~38.4TB (3x replication) / ~76.8TB (4+2 EC) |
| Private Mesh Bandwidth | 10 Gbps x2 (LACP) | 20 Gbps per node inter-node |
| Public Bandwidth | 10 Gbps, burst 40 Gbps | Per-node allotment |
| Intel TDX | Active by default | Active across all 3 nodes |
| Intel SGX | 128 GB EPC per node | 384 GB EPC cluster-wide |
| PCIe | PCIe 5.0, 80 lanes/socket | — |
| Pricing | — | Fixed monthly cluster rate — view calculator |
Ready to Deploy an XXL v4 Private Cloud?
Tell us about your cluster requirements and we’ll configure the right three-node deployment — with Day 2 operations included.
Get an XXL v4 Hosted Private Cloud Quote Schedule a Consultation
What OpenMetal Manages
OpenMetal’s Hosted Private Cloud model splits responsibilities cleanly: OpenMetal owns the platform layer; the customer owns the workload layer.
OpenMetal manages:
- Hardware procurement, rack, and cabling
- IPMI and out-of-band access for hardware lifecycle events
- OpenStack version upgrades and security patches
- Ceph OSD management, rebalancing, and failure recovery
- Network fabric configuration and VLAN provisioning
- Monitoring and alerting for cluster health
- Weekly Google Meet check-ins for ongoing infrastructure review
Customer controls:
- VM flavors and image management
- Tenant/project structure and quotas
- Security groups, floating IPs, and load balancer configuration
- Application deployments and OS management inside VMs
- OpenStack API and Horizon dashboard access with admin credentials
This model reduces operational overhead for infrastructure teams that need full OpenStack flexibility without a dedicated platform SRE team.
OpenStack API and Horizon Access
Every Hosted Private Cloud cluster ships with full OpenStack API access — Nova, Neutron, Cinder, Glance, Keystone, Octavia, and Swift endpoints are available from day one. Customers receive admin-level credentials and access to the Horizon dashboard for GUI-based management.
The OpenStack API surface supports:
- Terraform and Pulumi for infrastructure-as-code VM provisioning
- OpenTofu for teams migrating IaC tooling away from HashiCorp licensing
- Ansible OpenStack collection for configuration management
- python-openstackclient (the `openstack` CLI) for command-line operations
- Direct REST API for custom automation
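As a sketch of the direct REST path, the snippet below builds the JSON body for a Nova create-server call (`POST /v2.1/servers`). All UUIDs and names here are placeholders for illustration, not values from an actual OpenMetal cluster:

```python
import json

def build_create_server_request(name, image_ref, flavor_ref, network_id):
    """Return the JSON body for a Nova POST /v2.1/servers request."""
    return {
        "server": {
            "name": name,
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
            "networks": [{"uuid": network_id}],
        }
    }

body = build_create_server_request(
    "app-vm-01",
    "8a9f3c1e-0000-0000-0000-000000000000",  # Glance image UUID (placeholder)
    "3",                                      # Nova flavor ID (placeholder)
    "d1b2c3d4-0000-0000-0000-000000000000",  # Neutron network UUID (placeholder)
)
print(json.dumps(body, indent=2))
```

The same payload works with any HTTP client once a Keystone token is attached in the `X-Auth-Token` header.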
No OpenStack support contract is required — the standard OpenStack upstream API is available, and OpenMetal’s infrastructure team handles cluster-level issues. Application-layer support is the customer’s responsibility or available through OpenMetal’s premium support tiers.
Ceph Storage Architecture
The XXL v4 Hosted Private Cloud pools all 18 NVMe data drives (115.2TB raw) across three nodes into a Ceph cluster managed by OpenMetal.
Replication vs erasure coding:
- 3x replication (default): ~38.4TB usable. All data written to three OSDs on separate nodes. Highest performance and lowest recovery complexity.
- 4+2 erasure coding: ~76.8TB usable. Each object is split into four data chunks plus two coding chunks, giving 1.5x raw-to-usable overhead versus 3x for replication. More appropriate for cold or warm data tiers where storage efficiency matters more than peak write IOPS.
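The usable-capacity figures above follow directly from the raw pool size. A minimal sketch of the arithmetic, noting that real Ceph clusters reserve headroom below the full ratios, so practical capacity is somewhat lower:

```python
# Usable-capacity arithmetic for the cluster's 115.2TB raw NVMe pool.
RAW_TB = 115.2

def usable_replicated(raw_tb, copies=3):
    # Each object is stored `copies` times, so usable = raw / copies.
    return raw_tb / copies

def usable_erasure_coded(raw_tb, k=4, m=2):
    # k data chunks + m coding chunks per object, so usable = raw * k / (k + m).
    return raw_tb * k / (k + m)

print(round(usable_replicated(RAW_TB), 1))    # -> 38.4 (3x replication)
print(round(usable_erasure_coded(RAW_TB), 1)) # -> 76.8 (4+2 erasure coding)
```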
Ceph provides:
- Cinder block volumes (RBD) — persistent volumes for OpenStack VMs, equivalent to EBS semantics but on locally-attached NVMe OSDs
- Object storage (RADOSGW / S3-compatible) — S3-compatible object storage endpoint available within the cluster private network
- CephFS (optional) — POSIX filesystem for shared access use cases
The 115.2TB raw NVMe pool across three nodes means the Ceph OSD layer runs entirely on PCIe 5.0 NVMe — no spinning disk, no SATA SSD hybrid tier. Ceph block volume latency is bounded by local NVMe performance, not a network-backed storage backend.
Processor
Six Xeon Gold 6530 processors — two per node — contribute 192 total physical cores and 384 hardware threads to the cluster. In OpenStack Nova scheduling, these map directly to vCPU allocation across tenant VMs. The cluster’s physical core count supports high-density VM scheduling without vCPU overcommit ratios that degrade performance under concurrent load.
The Emerald Rapids microarchitecture’s Speed Select Technology profiles remain available on each node, enabling per-node CPU tuning for workload-specific requirements — a cluster deployed for latency-sensitive multi-tenant SaaS might use 24-core / 225W profiles per socket, while a batch processing cluster might use the full 32-core / 270W profile.
Hardware accelerators — AMX, AVX-512, DL Boost, QuickAssist — are available to guest VMs without hypervisor-level abstraction penalties. OpenStack Nova with CPU passthrough or custom VM flavors can expose these capabilities to tenant workloads.
Memory
The 6TB cluster-wide DDR5 4800 MHz memory pool supports Nova memory allocation across tenant VMs with headroom for healthy overcommit ratios. A cluster allocating vRAM at a 1.5:1 overcommit ratio yields approximately 9TB of allocatable VM memory — sufficient for large SaaS fleets, multi-tenant Kubernetes clusters, and high-density VDI workloads.
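The overcommit arithmetic can be sketched as follows. The 1.5x RAM and 4x vCPU ratios are illustrative planning values (Nova exposes these as `ram_allocation_ratio` and `cpu_allocation_ratio`), not OpenMetal defaults:

```python
# Back-of-envelope capacity planning for the three-node cluster.
PHYSICAL_RAM_TB = 6.0   # 3 nodes x 2TB
PHYSICAL_CORES = 192    # 3 nodes x 64 cores

def allocatable_ram_tb(ram_allocation_ratio=1.5):
    # Total vRAM Nova will schedule at the given overcommit ratio.
    return PHYSICAL_RAM_TB * ram_allocation_ratio

def allocatable_vcpus(cpu_allocation_ratio=4.0):
    # Total vCPUs Nova will schedule at the given overcommit ratio.
    return PHYSICAL_CORES * cpu_allocation_ratio

print(allocatable_ram_tb())  # -> 9.0 TB of schedulable VM memory at 1.5:1
print(allocatable_vcpus())   # -> 768.0 schedulable vCPUs at 4:1
```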
Intel TDX is active on all three nodes at base configuration. OpenStack Nova with TDX support can schedule hardware-isolated TDX VMs to any node in the cluster, providing tenant-level hardware memory isolation at the scheduler level — relevant for multi-tenant SaaS operators offering compliance-grade isolation to customers.
Storage
Boot drives on each node are RAID 1 mirrored pairs used exclusively for the hypervisor OS. Tenant VM storage is provisioned from the Ceph pool on NVMe data drives, keeping VM disk I/O on a separate physical storage path from the host OS.
Ceph block volume performance on the XXL v4 cluster is bounded by the Micron 7500 MAX NVMe per-drive specs (7,000 MB/s sequential read, 1.1M random read IOPS per drive), distributed across 18 OSDs and the private mesh bandwidth between nodes. Tenant VMs accessing Cinder volumes get NVMe-backed block storage without the additional network hop of AWS EBS or similar network-attached block storage products.
Networking
Each XXL v4 node includes two 10 Gbps ports. Inter-node cluster traffic — Ceph replication, OpenStack API communication, Nova live migration — traverses the private VLAN mesh at up to 20 Gbps per node with LACP bonding. This private cluster traffic is unmetered.
Public-facing VM traffic exits through each node’s public interface at up to 10 Gbps per node, with burst to 40 Gbps. OpenMetal’s network SLA is 99.96% guaranteed, tracking above 99.99% in actual operation since 2022. DDoS protection is included at no additional charge across all cluster nodes.
Egress pricing: 95th-percentile billing, not per-GB transfer.
Egress overages are billed on the 95th percentile of sampled bandwidth — the same model as bare metal. For multi-tenant SaaS clusters with variable egress profiles, this means burst traffic during releases or peak periods does not generate the same cumulative cost as sustained high-volume transfer.
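A toy illustration of how 95th-percentile billing discards short bursts; the 5-minute sampling interval and the percentile method here are simplified assumptions, and real metering may differ:

```python
def p95(samples):
    """Bill on the highest sample after discarding the top 5%."""
    ordered = sorted(samples)
    index = int(len(ordered) * 0.95) - 1
    return ordered[index]

# 100 hypothetical 5-minute Mbps readings: steady 200 Mbps traffic
# punctuated by five short 9,000 Mbps bursts (e.g. a release push).
samples = [200] * 95 + [9000] * 5
print(p95(samples))  # -> 200: the bursts fall inside the discarded top 5%
```

Under per-GB metering the same five bursts would be billed in full; under this model they vanish as long as they stay within the discarded 5% of samples.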
Security and Confidential Computing
All three XXL v4 nodes in a Hosted Private Cloud cluster ship with Intel TDX active, SGX available (128 GB EPC per node), and TME-MK full memory encryption enabled. In a multi-tenant OpenStack environment, TDX VM flavors can be offered to tenants as a hardware isolation tier — each TDX VM’s memory is cryptographically isolated from other VMs and from the OpenStack host.
Intel SGX supports key management and attestation services running inside enclaves on each cluster node. The 384 GB total EPC across three nodes supports concurrent enclave workloads distributed by Nova scheduling.
Additional hardware security features on each node:
- AES-NI — hardware-accelerated encryption for TLS offload
- Intel Boot Guard — firmware boot chain integrity
- CET — control-flow enforcement (see the XXL v4 primary spec page for the full feature list)
HIPAA and Regulatory Compliance
OpenMetal is HIPAA compliant at the organizational level and offers BAAs. Hosted Private Cloud clusters in Ashburn (NTT DATA VA1) are hosted in a facility with facility-operator HIPAA certification. Los Angeles (Digital Realty LAX10) HIPAA compliance is at the OpenMetal organizational level. Amsterdam (Digital Realty AMS3) holds SOC1/2, ISO 27001, ISO 50001, ISO 22301, and PCI-DSS at the facility-operator level. Singapore (Digital Realty SIN10) holds BCA Green Mark Platinum.
Recommended Workloads on the XXL v4 Hosted Private Cloud
Multi-tenant SaaS platform infrastructure
The XXL v4 cluster’s 192 physical cores and 6TB RAM support high-density multi-tenant SaaS deployments where each customer tenant runs in isolated VM or container environments. The fixed cluster pricing model converts infrastructure cost from per-VM variable to a flat monthly rate — predictable cost as tenant count scales within the cluster. TDX VM flavors provide contractually defensible hardware isolation for customers with compliance requirements.
Managed Kubernetes at scale
Kubernetes clusters running on the XXL v4 Hosted Private Cloud benefit from the large node RAM and core count for high-density pod scheduling. Talos Linux, Flatcar, and RKE2 are supported on OpenStack VMs. Persistent volumes backed by Ceph RBD eliminate the need for a separate storage backend. OpenMetal’s Day 2 operations handle the infrastructure layer — your team manages the Kubernetes cluster and application deployments, not the underlying OpenStack or Ceph.
VMware migration workloads
Teams migrating from VMware vSphere following Broadcom pricing changes can use the XXL v4 Hosted Private Cloud as a drop-in KVM-based alternative. The OpenStack Nova API supports the same VM lifecycle operations (create, clone, snapshot, live migrate) without per-socket or per-core licensing fees. The 6TB cluster memory pool handles the same VM densities that vSphere clusters provided. OpenMetal’s infrastructure team handles the OpenStack and Ceph platform — no internal platform SRE team required.
CI/CD and ephemeral build infrastructure
High-concurrency build pipelines — GitLab CI runners, Jenkins agents, GitHub Actions self-hosted runners — benefit from the 384-thread cluster for parallel job execution. Build artifacts stored in Ceph object storage (S3-compatible RADOSGW) are accessible to all nodes without separate object storage infrastructure. The fixed cluster pricing model converts variable per-minute CI/CD cloud costs into a predictable monthly rate for teams with sustained build volumes.
Large-scale OpenStack-native workloads
Organizations running OpenStack-native applications — Heat-orchestrated stacks, Magnum Kubernetes clusters, Trove database-as-a-service — can deploy against the XXL v4 cluster’s full OpenStack API surface with admin credentials. No managed service limitations, no API subset restrictions. The 45-second cluster deployment time allows rapid provisioning for development, staging, and production environments within the same OpenStack project structure.
Ready to Deploy an XXL v4 Private Cloud?
Tell us about your cluster requirements and we’ll configure the right three-node deployment — with Day 2 operations included.
Get an XXL v4 Hosted Private Cloud Quote Schedule a Consultation
XXL v4 Hosted Private Cloud Deployment Options
Hosted Private Cloud — Three-Node OpenStack + Ceph
The standard XXL v4 Hosted Private Cloud is a three-node cluster with OpenStack, Ceph, and full API access. Day 2 operations included. Clusters deploy in under 45 seconds once provisioned.
- 45-second deployment: Production-ready OpenStack available in under a minute
- Admin credentials: Full OpenStack admin access from day one
- Day 2 included: Hardware maintenance, OpenStack lifecycle, Ceph management
- Fixed monthly pricing: No per-VM or per-vCPU metering
- Proof of Concept clusters available: Test deployment workflows, OpenStack API compatibility, and environment fit before committing to production
- Ramp pricing: Available for migrations from VMware or public cloud environments
→ View pricing and configuration: openmetal.io/cloud-deployment-calculator
Bare Metal Alternative
Prefer single-server control with no OpenStack overhead? The same XXL v4 hardware is available as a standalone Bare Metal Dedicated Server with full root access, IPMI, and your choice of hypervisor or OS.
Both deployment paths are available across OpenMetal’s Tier III data center locations. Fixed monthly pricing applies regardless of utilization. No per-hour, per-query, or per-GB billing.
Where to deploy
| Location | Region | Certifications | Location Page |
|---|---|---|---|
| Ashburn, VA | US-East | SOC 1/2 Type II, ISO 27001, PCI DSS, NIST 800-53 HIGH, HIPAA (facility-level) | Ashburn |
| Los Angeles, CA | US-West | SOC 2, SOC 3, ISO 27001, PCI DSS | Los Angeles |
| Amsterdam | EU-West | SOC 1/2, PCI-DSS, ISO 27001, ISO 50001, ISO 22301 | Amsterdam |
| Singapore | Asia | BCA Green Mark Platinum | Singapore |
Get an XXL v4 Hosted Private Cloud Quote
Tell us about your infrastructure needs and we’ll provide a custom quote for the XXL v4 Hosted Private Cloud.
- Standard 3-node cluster: 192 cores, 6TB DDR5, 115.2TB raw NVMe, OpenStack + Ceph
- Larger clusters: Additional XXL v4 nodes can be added to expand compute and storage pools
- TDX configuration: All three nodes ship TDX-active — no additional configuration required
- VMware migration: Ramp pricing and migration guidance available
Ramp pricing available for migrations. All deployments include fixed monthly pricing, 99.96%+ network SLA, and DDoS protection.
Specifications, pricing, and availability are subject to change without notice. The information on this page is provided for general guidance and does not constitute a contractual commitment. Contact OpenMetal for current configuration details and pricing. AWS specifications and pricing are sourced from publicly available documentation and may not reflect current rates or configurations.