The Hosted Private Cloud Medium v5 is a three-node OpenStack and Ceph cluster built on the same Medium v5 hardware available as a standalone bare metal server. Each node contributes dual Xeon 6505P Granite Rapids processors, 256 GB DDR5-6400, and 6.4 TB of Micron 7500 MAX NVMe to a shared compute and storage pool. OpenMetal deploys the cluster in under 45 seconds, handles Day 2 operations (monitoring, patching, incident response), and charges a fixed monthly rate with no per-hour billing, no VMware licensing, and no per-GB egress. The Medium v5 cluster is the entry point for organizations moving to private cloud on current-generation Granite Rapids hardware, providing faster memory bandwidth and a higher RAM ceiling than the Medium v4 with the same three-node footprint.
This page covers the three-node Hosted Private Cloud cluster configuration. For single-server bare metal deployment of the same hardware, see the Bare Metal Dedicated Server — Medium v5 page.
Key Takeaways
- Production-ready in under 45 seconds: OpenMetal’s proprietary automation deploys a fully configured three-node OpenStack + Ceph cluster with no manual provisioning and no multi-day setup.
- 72 cores / 144 threads across the cluster: Three nodes of dual Xeon 6505P at 2.2 GHz base / 4.1 GHz turbo provide sufficient VM scheduling capacity for small-to-mid-scale SaaS platforms, development environments, and SMB infrastructure.
- 768 GB aggregate DDR5-6400 memory: 256 GB per node at DDR5-6400 speed delivers higher memory bandwidth than the Medium v4’s DDR5-4400, improving performance for memory-intensive VM workloads, in-memory databases, and Kubernetes pod density.
- 19.2 TB pooled Ceph NVMe storage: Micron 7500 MAX drives across three nodes form a distributed Ceph storage pool with built-in replication. With 3x replication, usable capacity is approximately 6.4 TB. No SAN, no external storage appliances.
- No VMware licensing costs: OpenStack and Ceph are open-source with zero licensing fees. Organizations migrating from VMware eliminate vSphere, vSAN, and vCenter licensing entirely.
- Day 2 operations included: OpenMetal monitors cluster health, handles patching, and provides incident response. Base support covers hardware and cloud software delivery; Assisted Management adds a named Account Engineer, 24/7 incident response, and proactive health checks.
Cluster Config at a Glance
| Component | Per Node | Cluster Total (3 Nodes) |
|---|---|---|
| Processor | 2x Intel Xeon 6505P (Granite Rapids) | 6x Intel Xeon 6505P |
| Cores / Threads | 24C / 48T | 72C / 144T |
| Base / Turbo | 2.2 GHz / 4.1 GHz | — |
| Memory | 256 GB DDR5-6400 ECC | 768 GB |
| Boot Storage | 2x 960 GB SSD (RAID 1) | 6x 960 GB (3 RAID 1 pairs) |
| Data Storage | 1x 6.4 TB Micron 7500 MAX | 3x 6.4 TB (19.2 TB raw) |
| Ceph Usable Storage | — | ~6.4 TB (3x replication) |
| Private Bandwidth | 20 Gbps (2x 10 Gbps LACP) | 60 Gbps aggregate mesh |
| Public Bandwidth | 6 Gbps | 6 Gbps (shared cluster egress) |
| Network SLA | 99.96% base | 99.96% base (actual >99.99%) |
| DDoS Protection | Up to 10 Gbps per IP | Included |
| Remote Management | Full IPMI per node | 3x IPMI consoles |
| Intel SGX | 128 GB EPC (enabled by default) | 384 GB EPC cluster-wide |
| Intel TDX | Eligible (1 TB RAM upgrade required per node) | Per-node basis |
| Pricing | Fixed monthly per cluster — see openmetal.io/cloud-deployment-calculator | |
Hosted Private Cloud Medium v5: three Xeon 6505P nodes, Ceph distributed storage, 20 Gbps private mesh — deployed in under 45 seconds.
Ready to Deploy a Medium v5 Hosted Private Cloud?
Tell us about your workload and we’ll help you configure the right cluster size and storage tier.
What OpenMetal Manages
OpenMetal’s Hosted Private Cloud model splits responsibilities cleanly: OpenMetal owns the platform layer; the customer owns the workload layer.
OpenMetal manages:
- Hardware procurement, rack, and cabling
- IPMI and out-of-band access for hardware lifecycle events
- OpenStack version upgrades and security patches
- Ceph OSD management, rebalancing, and failure recovery
- Network fabric configuration and VLAN provisioning
- Monitoring and alerting for cluster health
- Incident response for platform-layer events
- Onboarding: dedicated Account Manager, Account Engineer, and Executive Sponsor; private Slack channel; optional weekly Google Meet check-ins
Customer controls:
- VM flavors and image management
- Tenant/project structure and quotas
- Security groups, floating IPs, and load balancer configuration
- Application deployments and OS management inside VMs
- OpenStack API and Horizon dashboard access with admin credentials
Assisted Management (additional fee) adds a named Account Engineer, joint cloud health monitoring, 24/7 incident response, monthly proactive health checks, and assisted upgrades.
OpenStack API and Horizon Access
The Medium v5 Hosted Private Cloud exposes the full OpenStack API surface — Nova, Neutron, Cinder, Glance, Keystone, and Swift endpoints are available from day one. Customers receive admin-level credentials and access to the Horizon dashboard for GUI-based management.
The OpenStack API surface supports:
- Terraform and Pulumi for infrastructure-as-code VM provisioning
- Ansible OpenStack collection for configuration management
- Python openstackclient for CLI operations
- Direct REST API for custom automation
- VM lifecycle (create, resize, migrate, snapshot)
- Live VM migration via Horizon for zero-downtime maintenance
- Neutron networking with VLANs and optional VXLAN overlays
- Security groups and floating IPs
- Custom instance flavors sized to your workload
- S3-compatible object storage via Ceph RADOSGW
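As one illustrative example of the direct REST path (the endpoint, user, project, and password below are placeholders, not OpenMetal-issued values), automation against the API starts with a Keystone v3 token request:

```python
import json

def keystone_auth_payload(username, password, project, domain="Default"):
    # Standard Keystone v3 password-auth body. The returned token is used
    # as the X-Auth-Token header on subsequent Nova/Neutron/Cinder calls.
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            "scope": {
                "project": {"name": project, "domain": {"name": domain}}
            },
        }
    }

# POST this JSON body to https://<keystone-endpoint>/v3/auth/tokens;
# Keystone returns the token in the X-Subject-Token response header.
body = json.dumps(keystone_auth_payload("admin", "example-password", "demo"))
```

Tools like Terraform, Pulumi, and the openstack CLI construct this same request under the hood from `clouds.yaml` or `OS_*` environment variables.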
No VMware licensing, no vSphere fees, no vCenter charges. Organizations migrating from VMware retain equivalent functionality through OpenStack’s compute, network, and storage APIs without per-socket or per-VM licensing costs.
Ceph Storage Architecture
The three-node cluster pools all Micron 7500 MAX NVMe drives into a distributed Ceph storage layer managed by OpenMetal. Ceph provides:
- Block storage (RBD): Persistent volumes for VMs with configurable replication (default 3x across nodes). Each write is replicated across all three nodes for data durability.
- Object storage (RADOSGW): S3-compatible API for application-level object storage without external services. Tools that target AWS S3 work against OpenMetal’s RADOSGW endpoint without code changes.
- Self-healing: If a drive or node fails, Ceph automatically re-replicates data across remaining OSDs. No manual RAID rebuilds.
With 19.2 TB raw NVMe across three nodes and 3x replication, usable capacity is approximately 6.4 TB. The Micron 7500 MAX drives deliver 1.1M random read IOPS and sub-1ms QoS at 99.9999% per drive, providing consistent performance for Ceph OSD operations. Boot/data isolation on each node keeps Ceph OSD I/O off the OS drives.
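The usable-capacity figure follows directly from the replication factor; a minimal sketch using the numbers above:

```python
# Ceph usable capacity under replication: raw pool size divided by the
# replica count. Figures match the Medium v5 cluster described above.
RAW_TB_PER_NODE = 6.4
NODES = 3
REPLICAS = 3

raw_tb = RAW_TB_PER_NODE * NODES   # 19.2 TB raw across the pool
usable_tb = raw_tb / REPLICAS      # ~6.4 TB usable at 3x replication

# Each client write lands on all three nodes, so losing one node still
# leaves two full copies while Ceph re-replicates to restore the third.
```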
For workloads requiring additional storage density, Storage Medium v4 or Storage Large v4 nodes can be added to the cluster to expand the Ceph pool without replacing the compute nodes.
Processor
Six Xeon 6505P processors — two per node — contribute 72 total physical cores and 144 hardware threads to the cluster. In OpenStack Nova scheduling, these physical cores map directly to vCPU allocation for tenant VMs, with headroom for modest overcommit ratios under typical workloads.
The Granite Rapids (Intel 3) microarchitecture brings AMX matrix multiply accelerators, AVX-512, and DL Boost to each socket. These hardware accelerators are accessible to guest VMs via CPU passthrough or custom OpenStack Nova flavors — relevant for inference and data processing workloads running inside cluster VMs. At 2.2 GHz base and 4.1 GHz turbo, the 6505P operates at competitive frequencies while benefiting from the Intel 3 process node’s improved per-core efficiency over the Medium v4’s Emerald Rapids (Intel 7) processors.
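Cluster vCPU capacity scales with the Nova scheduler's CPU allocation ratio. A minimal sketch of the arithmetic (the ratios below are illustrative, not OpenMetal defaults):

```python
def vcpu_capacity(hw_threads: int, allocation_ratio: float) -> int:
    """Schedulable vCPUs at a given Nova CPU allocation ratio."""
    return int(hw_threads * allocation_ratio)

CLUSTER_THREADS = 144  # 3 nodes x 2 sockets x 24C/48T

# 1:1 pinning for latency-sensitive VMs, modest overcommit for general use
for ratio in (1.0, 2.0, 4.0):
    print(f"{ratio}:1 -> {vcpu_capacity(CLUSTER_THREADS, ratio)} vCPUs")
```

Latency-sensitive flavors are typically pinned 1:1, while development and staging tenants tolerate higher ratios.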
Memory
Each Medium v5 node ships with 256 GB DDR5-6400 ECC in 16 DIMM slots, for a cluster total of 768 GB. DDR5-6400 delivers approximately 46% higher theoretical memory bandwidth than the DDR5-4400 in Medium v4 nodes — a meaningful improvement for OpenStack Nova scheduling under high concurrency, in-memory database workloads running in VMs, and Ceph OSD write throughput on the data path.
With a 1.5:1 memory overcommit ratio, the cluster can allocate approximately 1.15 TB of vRAM, sufficient for 40–60 moderately sized VMs or a dense Kubernetes node pool.
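The arithmetic behind that figure, as a sketch (the overcommit ratio is the one cited above; practical density is lower once memory is reserved for the host OS and Ceph daemons):

```python
def allocatable_vram_gb(physical_gb: float, overcommit: float = 1.5) -> float:
    """VM-allocatable memory at a given RAM overcommit ratio."""
    return physical_gb * overcommit

CLUSTER_RAM_GB = 768  # 3 nodes x 256 GB DDR5-6400

vram_gb = allocatable_vram_gb(CLUSTER_RAM_GB)  # 1152 GB, ~1.15 TB
vms_at_16gb = int(vram_gb // 16)               # ceiling before host overhead
# Reserving headroom per node for the hypervisor and Ceph OSDs brings
# practical density into the 40-60 VM range quoted above.
```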
Intel SGX is enabled by default on all three nodes with 128 GB EPC per node (384 GB cluster-wide). Intel TDX is not active at base configuration — each node is TDX-eligible when upgraded to 1 TB (16x 64 GB DIMMs). TDX upgrades can be applied per node independently; the cluster does not require all three nodes to be upgraded simultaneously.
The per-node RAM ceiling is 2 TB (16x 128 GB DIMMs), enabling future memory upgrades without replacing the cluster hardware.
Storage
Boot and data isolation
Boot drives on each node are RAID 1 mirrored 960 GB SSDs used exclusively for the hypervisor OS. VM storage is provisioned from the Ceph pool on Micron 7500 MAX NVMe data drives, keeping VM disk I/O on a separate physical storage path from the host OS.
Micron 7500 MAX NVMe data drives
The Micron 7500 MAX delivers 7,000 MB/s sequential read, 1.1M random read IOPS, and sub-1ms QoS at 99.9999% per drive. Each node contributes one 6.4 TB OSD to the Ceph pool; across three nodes, the aggregate pool is 19.2 TB raw NVMe. At 3x replication, usable capacity is approximately 6.4 TB. Ceph block volumes (Cinder RBD) and object storage (RADOSGW) are available from day one without additional configuration.
Networking
Each node includes two 10 Gbps ports bonded in LACP for a 20 Gbps private mesh. Inter-node cluster traffic — Ceph replication, OpenStack API communication, Nova live migration — traverses this private VLAN fabric and is unmetered. The 60 Gbps aggregate private bandwidth (3 nodes × 20 Gbps) provides sufficient headroom for Ceph replication writes even under heavy storage and migration load.
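A back-of-envelope way to reason about that headroom, assuming standard 3x Ceph replication (the traffic figures are illustrative):

```python
def replication_traffic_gbps(client_write_gbps: float, replicas: int = 3) -> float:
    """East-west replication traffic generated by a client write stream.

    With replicated pools, the primary OSD forwards each write to the
    other replicas, so every unit of client writes adds (replicas - 1)
    units of inter-node traffic.
    """
    return client_write_gbps * (replicas - 1)

# A sustained 5 Gbps client write stream generates ~10 Gbps of
# replication traffic, comfortably inside the 20 Gbps LACP bond per node.
east_west = replication_traffic_gbps(5.0)
```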
Public-facing VM traffic exits through each node’s public interface at up to 6 Gbps per node. OpenMetal’s network SLA is 99.96% guaranteed, tracking above 99.99% in actual operation since 2022. DDoS protection is included at no additional charge across all cluster nodes.
Security and Confidential Computing
Intel SGX is enabled by default on all three nodes, providing 128 GB EPC per node and 384 GB of enclave memory cluster-wide. SGX supports key management and attestation workloads running inside enclaves, and the Nova scheduler can distribute enclave workloads across all three nodes.
Intel TDX is available as an upgrade. Each node supports TDX when the RAM is upgraded to 1 TB (16x 64 GB DIMMs), enabling hardware-enforced confidential VMs with cryptographically isolated memory for healthcare, financial, and regulated workloads. TDX-upgraded nodes can coexist in the same cluster — OpenStack Nova schedules TDX-flavored VMs to TDX-enabled nodes automatically. TDX upgrades can be applied per node as workload requirements grow without upgrading the full cluster. See the TDX Edition bare metal page for attestation and configuration details.
Each node’s Granite Rapids processors also provide:
- AES-NI hardware acceleration for cipher operations
- Intel Boot Guard for verified boot chain integrity
- CET (Control-flow Enforcement Technology) for control-flow protection
HIPAA and Regulatory Compliance
OpenMetal is HIPAA compliant at the organizational level and offers BAAs. Medium v5 Hosted Private Cloud clusters in Ashburn (NTT DATA VA1) are hosted in a facility with facility-operator HIPAA certification, alongside SOC 1/2 Type II, ISO 27001, PCI DSS, and NIST 800-53 HIGH. Los Angeles (Digital Realty LAX10) HIPAA compliance is at the OpenMetal organizational level, with facility-level SOC 2, SOC 3, ISO 27001, and PCI DSS. Amsterdam (Digital Realty AMS3) holds SOC 1/2, PCI-DSS, ISO 27001, ISO 50001, and ISO 22301. Singapore (Digital Realty SIN10) holds BCA Green Mark Platinum. Multi-tenant OpenStack environments handling PHI should combine OpenMetal’s HIPAA BAA with TDX guest isolation for the appropriate technical safeguard posture.
What Changed from Medium v4 to Medium v5
| Component | Medium v4 Cluster (3-Node) | Medium v5 Cluster (3-Node) |
|---|---|---|
| CPU per node | 2x Xeon Silver 4510 (Emerald Rapids) | 2x Xeon 6505P (Granite Rapids) |
| Process node | Intel 7 | Intel 3 |
| Cores / Threads | 72C / 144T cluster | 72C / 144T cluster |
| Memory speed | DDR5-4400 | DDR5-6400 |
| Aggregate cluster RAM | 768 GB | 768 GB |
| NVMe per node | 1x 6.4 TB Micron 7500 MAX | 1x 6.4 TB Micron 7500 MAX |
| Public bandwidth | 2 Gbps | 6 Gbps |
| Max RAM per node | 512 GB | 2 TB |
| PCIe generation | PCIe 5.0 | PCIe 5.0 |
| SGX | Enabled by default | Enabled by default (128 GB EPC) |
| TDX eligibility | Yes (1 TB upgrade per node) | Yes (1 TB upgrade per node) |
| Pricing | Fixed monthly | Fixed monthly |
The most meaningful upgrade is the move from DDR5-4400 to DDR5-6400: the Xeon 6505P delivers approximately 46% higher theoretical memory bandwidth per node, which directly benefits OpenStack VM scheduling under high concurrency, in-memory database workloads running inside VMs, and Ceph OSD write throughput on the data storage path. Granite Rapids also raises the per-node RAM ceiling from 512 GB to 2 TB, enabling future memory upgrades without replacing the cluster hardware.
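The bandwidth claim can be checked from the transfer rates alone. This sketch computes theoretical per-channel DDR5 bandwidth (8 bytes per transfer) and the resulting uplift; channel count per socket is held constant, so it cancels out of the ratio:

```python
def channel_bw_gbs(mt_per_s: int) -> float:
    """Theoretical bandwidth of one DDR5 channel in GB/s.

    DDR5 transfers 8 bytes per channel per transfer, so bandwidth is
    transfer rate (MT/s) x 8 bytes, converted to GB/s.
    """
    return mt_per_s * 8 / 1000

v5_channel = channel_bw_gbs(6400)  # 51.2 GB/s per channel
v4_channel = channel_bw_gbs(4400)  # 35.2 GB/s per channel
uplift = v5_channel / v4_channel - 1  # ~0.455, i.e. roughly 46%
```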
Recommended Workloads on the Medium v5 Hosted Private Cloud
SMB and startup infrastructure
The Medium v5 cluster provides a complete private cloud for small and mid-sized businesses that have outgrown shared hosting or single-server setups but do not yet need the Large v5’s compute density. The 72-core / 768 GB cluster supports 20–40 VMs at moderate sizing (8–16 GB per VM), with Ceph providing replicated storage and OpenStack handling network isolation between environments (production, staging, development). Fixed monthly pricing makes budgeting predictable for organizations with limited ops staff.
Development and staging environments
Provision isolated development and staging environments that mirror production topology without the cost of dedicated per-environment hardware. OpenStack’s project-level isolation keeps environments separated, while the 6.4 TB usable Ceph pool handles application data and database snapshots. The OpenStack API supports infrastructure-as-code workflows with Terraform, Pulumi, and Ansible for consistent environment provisioning.
Managed Kubernetes infrastructure
Deploy Kubernetes on top of OpenStack VMs using the Medium v5 cluster as the control plane and worker pool. DDR5-6400 memory bandwidth supports higher pod density per node compared to the Medium v4, while Ceph RBD provides persistent volumes with automatic replication. OpenMetal’s private VLAN mesh delivers 20 Gbps per node for pod-to-pod east-west traffic. Compatible with Talos, Flatcar, K3s, and upstream Kubernetes. For larger Kubernetes deployments, the Large v5 cluster doubles compute and storage per node.
VMware migration (entry-level)
Replace a small VMware vSphere deployment with OpenStack on dedicated hardware. The Medium v5 cluster provides equivalent VM lifecycle management, live migration, and network isolation without vSphere, vSAN, or vCenter licensing. For organizations with fewer than 30–40 VMs, the Medium v5 cluster eliminates VMware licensing costs while maintaining operational parity. OpenMetal’s onboarding team supports migration planning, and ramp pricing lets you run both environments simultaneously during the transition.
Confidential VM workloads (with RAM upgrade)
Each node in the Medium v5 cluster is TDX-eligible. Upgrading a node to 1 TB (16x 64 GB DIMMs) activates Intel TDX, enabling hardware-enforced confidential VMs for healthcare, financial, and regulated workloads. TDX upgrades can be applied per node as workload requirements grow — the cluster does not require all three nodes to be upgraded simultaneously.
Ready to Deploy a Medium v5 Hosted Private Cloud?
Tell us about your workload and we’ll help you configure the right cluster size and storage tier.
Deployment Options
Standard 3-Node Cluster
The default deployment: three Medium v5 nodes running OpenStack and Ceph, production-ready in under 45 seconds. 72 cores, 768 GB DDR5-6400 RAM, 19.2 TB raw NVMe. Fixed monthly pricing with rate locks up to 5 years.
- 45-second deployment: Production-ready OpenStack available in under 45 seconds
- Admin credentials: Full OpenStack admin access from day one
- Day 2 included: Hardware maintenance, OpenStack lifecycle, Ceph management
- Fixed monthly pricing: No per-VM or per-vCPU metering
- Proof of Concept clusters available: Test deployment workflows, OpenStack API compatibility, and environment fit before committing to production
- Ramp pricing: Available for migrations from VMware or public cloud environments
Expanded Clusters
Add compute nodes (Medium v5 or Large v5) to scale CPU and memory. Add Storage Medium v4 or Storage Large v4 nodes to expand Ceph capacity. OpenMetal’s onboarding team helps design cluster topology for your workload profile.
Bare Metal Alternative
Prefer single-server control with no OpenStack overhead? The same Medium v5 hardware is available as a standalone Bare Metal Dedicated Server with full root access, IPMI, and your choice of hypervisor or OS.
Where to Deploy
| Location | Region | Facility Operator | Certifications |
|---|---|---|---|
| Ashburn, Virginia | US-East | NTT DATA (VA1) | SOC1/2 Type II, ISO 27001, PCI DSS, NIST 800-53 HIGH, HIPAA (facility-level) |
| Los Angeles, California | US-West | Digital Realty (LAX10) | SOC 2, SOC 3, ISO 27001, PCI DSS |
| Amsterdam, Netherlands | EU-West | Digital Realty (AMS3) | SOC1/2, PCI-DSS, ISO 27001, ISO 50001, ISO 22301 |
| Singapore | Asia | Digital Realty (SIN10) | BCA Green Mark Platinum |
Facility certifications are held by the facility operator. OpenMetal is HIPAA compliant at the organizational level and offers BAAs. Proof of Concept clusters are available for testing before committing.
→ Pricing: openmetal.io/cloud-deployment-calculator
Get a Medium v5 Hosted Private Cloud Quote
Tell us about your infrastructure needs and we’ll provide a custom quote for the Medium v5 Hosted Private Cloud.
- Standard cluster: 3-node OpenStack + Ceph with Day 2 operations included
- Expanded cluster: Additional compute or storage nodes sized to your workload
- Custom configurations: RAM upgrades, additional NVMe drives, TDX activation per node
OpenMetal offers BAAs for HIPAA-covered entities. Ramp pricing available for migrations. All deployments include fixed monthly pricing, 99.96%+ network SLA, and DDoS protection.
Specifications, pricing, and availability are subject to change without notice. The information on this page is provided for general guidance and does not constitute a contractual commitment. Contact OpenMetal for current configuration details and pricing.



































