The Large v5 Hosted Private Cloud is a three-node OpenStack and Ceph cluster built on OpenMetal’s current-generation Large v5 bare metal hardware. Each node contributes dual Xeon 6517P processors (Granite Rapids), 512 GB DDR5-6400, and 12.8 TB of Micron 7500 MAX NVMe, pooled into a Ceph storage backend and managed under a unified OpenStack control plane. OpenMetal stands the cluster up in under 45 seconds and operates it Day 2 — monitoring, patching, and incident response — on fixed monthly pricing with no VMware licensing, no vSphere fees, and no shared tenancy. 

Key Takeaways

  • 96 cores / 192 threads pooled under OpenStack: Three Large v5 nodes contribute 32 cores / 64 threads each (two 16-core Xeon 6517P CPUs per node, with a small reservation for the control plane), all schedulable as Nova instances. The 14% base clock uplift over Large v4 nodes carries through to every guest VM running on the cluster.
  • 1.5 TB DDR5-6400 cluster RAM: 512 GB per node across 8 channels per socket delivers the bandwidth headroom that mixed virtualization workloads need — in-memory databases, dense VM placement, and large Ceph BlueStore caches all benefit from the DDR5-6400 step up.
  • 38.4 TB pooled NVMe (raw) on Ceph: Each node contributes 12.8 TB of Micron 7500 MAX NVMe to the cluster storage pool. Ceph manages replication (3x by default for production safety, configurable per pool), thin provisioning, and snapshots across the pool. The 10-bay chassis per node allows storage expansion to ~64 TB per node without forklift hardware swaps.
  • 45-second cluster deployment: OpenMetal’s automation stands up the full OpenStack control plane (Keystone, Nova, Neutron, Cinder, Glance, Heat) and the Ceph storage backend in under 45 seconds. Days, not weeks, from order to first VM.
  • Day 2 ops included on fixed monthly pricing: OpenMetal handles cluster monitoring, OpenStack and Ceph patching, host firmware updates, incident response, and capacity planning. No separate ops contract, no VMware licensing, no Red Hat OpenStack subscription.
  • Intel SGX enabled by default per node: 128 GB EPC per CPU (256 GB per node) for application-level confidential computing inside tenant VMs — key management, signing, software-HSM emulation. For VM-level confidential computing (Intel TDX), pair the cluster with bare metal Large v5 TDX servers on the same private network.

Server Configuration at a Glance

Component | Per Node | Cluster Total (3 nodes)
Processor | 2x Intel Xeon 6517P (Granite Rapids, Intel 3) | 6 CPUs total
Cores / Threads | 32 cores / 64 threads | 96 cores / 192 threads
Base / Max Turbo Frequency | 3.2 GHz / 4.2 GHz |
L3 Cache | 144 MB | 432 MB
TDP per CPU | 190W |
Memory | 512 GB DDR5-6400 ECC | 1.5 TB
Boot Storage | 2x 960 GB SSD in RAID 1 | 6x 960 GB
Ceph Data Storage | 2x 6.4 TB Micron 7500 MAX NVMe (12.8 TB) | 38.4 TB raw NVMe pool
Max Drive Bays | 10 | 30 cluster bays
Private Bandwidth | 20 Gbps (2x 10 Gbps LACP bonded) | 60 Gbps aggregate east-west
Public Bandwidth | 6 Gbps | 18 Gbps
Network SLA | 99.96% base (actual >99.99%) |
Cluster Software | OpenStack (Nova, Neutron, Cinder, Glance, Keystone, Heat, Horizon) + Ceph (RBD, RGW, optional CephFS) |
Deployment Time | Under 45 seconds |
Day 2 Operations | Included (OpenMetal-managed) |
Confidential Computing | Intel SGX enabled by default (128 GB EPC per CPU). Intel TDX: bare metal Large v5 only. |
Pricing | Fixed monthly cluster price (contact OpenMetal) |

[Figure: Large v5 Hosted Private Cloud component architecture diagram]

Ready to Deploy a Large v5 Hosted Private Cloud?

Tell us about your workload and we’ll help you configure the right deployment — bare metal or Hosted Private Cloud, in any of our four data center regions.

Get a Large v5 Hosted Private Cloud Quote   Schedule a Consultation

What OpenMetal Manages: Day 2 Operations

A Hosted Private Cloud on Large v5 is operated by OpenMetal under a Day 2 model. The customer owns the cluster, has full OpenStack admin credentials and Ceph access, and decides what runs on it. OpenMetal handles the operational layer below the customer’s workload:

  • Cluster monitoring: Host-level health, OpenStack service health, Ceph health, network path health, with alerting routed to OpenMetal’s NOC for round-the-clock response.
  • OpenStack and Ceph patching: Coordinated upgrades of the OpenStack release train and Ceph version, including pre-upgrade validation, rolling node updates, and post-upgrade verification.
  • Host firmware and OS updates: BIOS, BMC firmware, drive firmware, and host OS patches applied with cluster-aware orchestration to avoid quorum loss.
  • Incident response: First responder on cluster-level incidents (control plane, Ceph health, network paths, host hardware). Customer-workload incidents remain the customer’s responsibility, with OpenMetal providing diagnostic data on request.
  • Capacity planning: Quarterly capacity reviews, forecasting against observed growth, and recommended node additions or RAM upgrades.

This Day 2 layer removes the operational tax that customers running self-hosted OpenStack or self-managed Ceph take on themselves, and it is what makes the Large v5 cluster economically competitive with self-managed VMware deployments even before the licensing cost difference is factored in.

OpenStack API and Horizon Access

Every Large v5 cluster ships with full OpenStack admin access. Customers operate against the standard OpenStack API set (Keystone for identity, Nova for compute, Neutron for SDN, Cinder for block storage, Glance for images, Heat for orchestration, and optionally Octavia for load balancing) and the Horizon dashboard for UI-based operations.

There is no proprietary control plane to learn. Existing OpenStack tooling — Terraform’s OpenStack provider, Ansible’s os_* modules, the python-openstackclient CLI, and any of the open-source operator stacks (Kolla-Ansible, OpenStack-Ansible, OpenStack Helm) — works against an OpenMetal cluster without modification. Workloads built for a self-hosted OpenStack environment migrate to an OpenMetal Large v5 cluster with no code changes.
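
For example, booting a VM on the cluster with the openstacksdk Python library looks the same as on any other OpenStack cloud. This is a minimal sketch, not OpenMetal-specific code: the cloud name "openmetal" and the image, flavor, and network names below are placeholders for whatever your clouds.yaml and Glance/Nova/Neutron inventory actually contain.

```python
import openstack

# Credentials come from clouds.yaml or OS_* environment variables,
# exactly as with any other OpenStack deployment.
conn = openstack.connect(cloud="openmetal")  # placeholder cloud name

# Boot a VM through the standard Nova/Glance/Neutron APIs.
server = conn.create_server(
    name="demo-vm",
    image="ubuntu-24.04",   # Glance image name (placeholder)
    flavor="m1.large",      # Nova flavor name (placeholder)
    network="tenant-net",   # Neutron network name (placeholder)
    wait=True,              # block until the instance is ACTIVE
)
print(server.id, server.status)
```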

The Granite Rapids architecture in the Large v5 brings two API-side benefits to OpenStack operators: the larger 144 MB of L3 cache per node reduces VM context-switch overhead in dense workloads, and the 24 GT/s UPI fabric keeps cross-socket VM scheduling fast for NUMA-aware Nova placement.

Ceph Storage Architecture

Ceph runs across all three Large v5 nodes as the cluster’s unified storage backend, providing RBD (block storage to Cinder), RGW (S3-compatible object storage), and optionally CephFS (POSIX filesystem). Each node contributes its 12.8 TB of Micron 7500 MAX NVMe as Ceph OSDs, pooled into 38.4 TB raw capacity. Default replication is 3x for production safety, which delivers approximately 12.8 TB of usable capacity from the 38.4 TB raw pool (replication is configurable per pool to trade durability for usable capacity).
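
From the OpenStack side, that pool is consumed through Cinder. A minimal sketch with openstacksdk (assuming a clouds.yaml entry named "openmetal"; volume and snapshot names are placeholders) shows the RBD-backed behavior described above:

```python
import openstack

conn = openstack.connect(cloud="openmetal")  # placeholder cloud name

# Cinder volumes on this cluster are thin-provisioned Ceph RBD images:
# a 100 GB volume consumes pool space only as data is actually written.
vol = conn.create_volume(size=100, name="pg-data", wait=True)

# Volume snapshots map to RBD snapshots inside the same Ceph pool.
snap = conn.create_volume_snapshot(vol.id, name="pg-data-pre-upgrade", wait=True)

print(vol.id, vol.status, snap.id)
```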

The Micron 7500 MAX is well-matched to Ceph OSD duty: its 6-plane architecture and independent wordline read (iWL) deliver sustained 70 us read latency at the 99th percentile, and each 6.4 TB drive is rated for 35,040 TB of write endurance (3 DWPD), enough headroom for write-heavy RBD workloads under typical OpenStack flavors. The 819 GB/s aggregate memory bandwidth per node supports large BlueStore caches and on-host PG (Placement Group) caching.

OpenMetal’s Ceph version policy follows the upstream stable release cadence with conservative version selection — typically running the latest stable Ceph release that has been hardened in production for at least one minor version. Version upgrades are coordinated with the customer and rolled across the cluster without quorum loss.

For workloads that outgrow the 38.4 TB cluster pool, OpenMetal supports expanding the cluster with additional Ceph storage nodes (Storage Server tier) without disrupting the existing OpenStack compute plane.

Networking

Each Large v5 node ships with dual 10 Gbps NICs, LACP bonded for 20 Gbps aggregate private bandwidth per node and 60 Gbps of aggregate east-west capacity across the 3-node cluster. This is where OpenStack control-plane traffic, Ceph OSD heartbeats and replication, live VM migration, and tenant VM traffic all flow.

Each node also gets 6 Gbps of public bandwidth (18 Gbps aggregate). OpenMetal’s base network SLA is 99.96%, with actual performance exceeding 99.99% since 2022. DDoS protection is included for up to 10 Gbps per IP. Tenant VMs use Neutron-managed virtual networks (typically VXLAN-overlay) with floating IPs for public exposure.

Egress pricing: 95th-percentile, not per-GB

Cluster public egress is billed on 95th-percentile measurement across the aggregated public uplinks, not per-GB transfer. This matters most for tenant workloads with bursty public traffic — a SaaS API that handles spikes during deployment windows but averages a fraction of peak pays for the 95th-percentile rate, not for every byte transferred. East-west traffic between nodes (Ceph replication, live migration, tenant-to-tenant) is included at no additional cost.
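
A rough illustration of the billing math (the 5-minute sampling interval and the traffic figures below are assumptions for the example, not OpenMetal’s exact metering implementation): sort a month of bandwidth samples, discard the top 5%, and the highest remaining sample sets the billed rate.

```python
def ninety_fifth_percentile(samples_mbps):
    """Discard the top 5% of samples; the highest remaining sample is billed."""
    ordered = sorted(samples_mbps)
    cutoff = int(len(ordered) * 0.95) - 1  # index of the 95th-percentile sample
    return ordered[max(cutoff, 0)]

# One month of hypothetical 5-minute samples: a steady ~300 Mbps baseline
# with 24 hours' worth of 4 Gbps deployment-window spikes.
samples = [300] * 8352 + [4000] * 288    # 8640 samples over ~30 days
print(ninety_fifth_percentile(samples))  # 300 -> billed at the baseline, not the spikes
```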

Security, Compliance, and Confidential Computing

Every Large v5 node ships with Intel SGX enabled by default (128 GB EPC per CPU, 256 GB per node). SGX enclaves are available to tenant workloads through the standard SGX runtime libraries; using them inside a guest VM requires SGX pass-through plus an SGX-aware kernel and runtime on the guest side.
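
A quick guest-side check that SGX has actually been passed through — a minimal sketch assuming a Linux guest running the mainline (5.11+) in-kernel SGX driver, which exposes /dev/sgx_enclave:

```python
import os

def sgx_ready() -> bool:
    """Return True if this guest advertises SGX and the kernel driver is present."""
    flags = set()
    with open("/proc/cpuinfo") as cpuinfo:
        for line in cpuinfo:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
                break
    # The in-kernel SGX driver exposes /dev/sgx_enclave for enclave creation.
    return "sgx" in flags and os.path.exists("/dev/sgx_enclave")

print("SGX usable in this guest:", sgx_ready())
```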

Intel TDX is supported on bare metal Large v5 servers only, not on Hosted Private Cloud clusters. Customers needing VM-level confidential computing on OpenMetal deploy bare metal Large v5 TDX Edition servers alongside the Hosted Private Cloud (HPC) cluster on the same private network; confidential workloads run on the bare metal TDX servers, general workloads on the HPC cluster. See the Large v5 TDX Edition page for the bare metal TDX configuration.

TME-MK (Total Memory Encryption — Multi-Key) is active on every node, encrypting all DRAM with per-tenant keys. Boot Guard verifies firmware integrity during node boot. Control-Flow Enforcement Technology (CET) protects against ROP/JOP exploitation on every node.

For a step-by-step guide to enabling SGX on a Hosted Private Cloud cluster, see the Enabling Intel SGX guide page.

HIPAA and regulatory compliance

OpenMetal is HIPAA compliant at the organizational level and offers Business Associate Agreements (BAAs). This is an OpenMetal organizational certification, not a facility-level one.

Large v5 clusters deployed in Ashburn and Los Angeles are hosted in HIPAA-compliant facilities. Facility-level certifications are held by the facility operator (not OpenMetal) and vary by location:

  • Ashburn, VA: SOC1 Type II, SOC2 Type II, ISO 27001, ISO 50001, PCI DSS, NIST 800-53 HIGH, HIPAA (facility-level)
  • Los Angeles, CA: SOC1/SOC2, ISO 27001, PCI-DSS, HIPAA (facility-level)
  • Amsterdam, NL: SOC Type 1/2, PCI-DSS, ISO 27001, ISO 50001, ISO 22301
  • Singapore: BCA Green Mark Platinum [additional certifications pending]

Recommended Workloads on the Large v5 Hosted Private Cloud

Multi-tenant SaaS hosting

Run customer-isolated VMs across the 96-core / 192-thread cluster with Neutron-managed per-tenant networking. The Granite Rapids cache and DDR5-6400 bandwidth keep dense tenant-VM placement responsive, while Ceph RBD provides per-tenant block storage with snapshots and clones. Public egress on 95th-percentile billing is well-matched to SaaS traffic patterns (bursty, with deployment-time spikes).
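
A hedged sketch of the per-tenant network plumbing with openstacksdk (the cloud entry "openmetal", the tenant names, the CIDR, and the external network name "External" are all placeholders):

```python
import openstack

conn = openstack.connect(cloud="openmetal")  # placeholder cloud name

# Give each tenant its own Neutron network and subnet.
net = conn.create_network(name="tenant-a-net")
subnet = conn.create_subnet(
    net.id,
    cidr="10.20.0.0/24",
    ip_version=4,
    subnet_name="tenant-a-subnet",
)

# Attach the tenant subnet to a router with a gateway on the external network,
# so floating IPs can be mapped to individual tenant VMs for public exposure.
router = conn.create_router(
    name="tenant-a-router",
    ext_gateway_net_id=conn.get_network("External").id,  # placeholder external net
)
conn.add_router_interface(router, subnet_id=subnet.id)
```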

Managed Kubernetes platforms

Stand up multi-tenant Kubernetes-as-a-service offerings on OpenStack with Magnum or with custom Cluster API providers. Each tenant gets a dedicated K8s control plane and worker pool scheduled across the cluster’s 192 threads. Ceph RBD backs persistent volume claims; RGW serves S3-compatible object storage for tenant backups and shared registries.

CI/CD infrastructure

GitLab Runners, GitHub Actions self-hosted runners, Jenkins agents, and similar build-fleet workloads benefit from the cluster’s ability to schedule short-lived high-CPU VMs without per-instance billing penalties. Fixed monthly pricing makes CI cost predictable regardless of build volume.

Large-scale virtualization migration (VMware alternative)

Customers migrating off VMware (escalating per-core licensing, vSphere fees, Broadcom contract terms) move workloads onto the Large v5 cluster as OpenStack instances with similar live-migration, snapshot, and HA semantics. The cluster delivers an enterprise virtualization platform with no licensing surcharge per core and no per-VM ELA accounting. OpenMetal’s Ramp Pricing supports parallel-run migration so existing VMware spend tapers as Large v5 spend ramps.

Internal platform-as-a-service

Run internal developer platforms on OpenStack with self-service VM provisioning via Heat templates and Horizon. The 1.5 TB cluster RAM and 38.4 TB Ceph pool accommodate ~50-100 developer VMs depending on flavor, with elastic capacity through quota-based fair-share.
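
The quota side of that fair-share model is standard Keystone and Nova/Cinder administration. A hedged sketch with openstacksdk (the project name and quota numbers are illustrative only, and the "openmetal" cloud entry is assumed):

```python
import openstack

conn = openstack.connect(cloud="openmetal")  # placeholder cloud name

# One Keystone project per team, created in the default domain.
project = conn.create_project(
    name="platform-team-a",
    domain_id="default",
    description="Internal developer platform tenant",
)

# Cap the team's slice of the 96-core / 1.5 TB / 38.4 TB cluster.
conn.set_compute_quotas(project.id, cores=16, ram=131072, instances=25)  # RAM in MB
conn.set_volume_quotas(project.id, gigabytes=2048, volumes=50)
```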

Hybrid: HPC cluster paired with bare metal Large v5 TDX servers

Customers with a confidential-workload minority alongside general virtualization deploy the standard 3-node HPC cluster for general workloads and pair it with one or more bare metal Large v5 TDX Edition servers on the same private network. Confidential VMs run on the bare metal TDX servers under hardware-isolated Trust Domains; general workloads run on the HPC cluster under OpenStack. Cross-tier traffic flows over the 20 Gbps private VLAN at no per-GB cost. Application-level confidential computing inside HPC tenant VMs uses the SGX enclaves enabled by default on every node.

“With OpenMetal, we found a true partner, we have more control over the performance of our clouds, and we are able to significantly reduce our cloud costs. These three things make this relationship something I would say yes to a hundred times over.”

Tom Fanelli, CEO & Co-Founder — Convesio


Cloud Comparison: Large v5 HPC vs Public Cloud

A public cloud equivalent to a 3-node Large v5 Hosted Private Cloud would be a similarly sized AWS reserved-instance fleet, plus EBS-backed block storage, VPC networking, and an operational layer (managed-services subscriptions or in-house ops). For mid-sized SaaS, managed Kubernetes platform providers, and VMware-migrating enterprises, the cost gap widens as workload density grows.

Dimension | OpenMetal Large v5 HPC | AWS Equivalent
Compute | 96 dedicated cores, 1.5 TB RAM | ~6x m7i.16xlarge reserved
Block storage | 38.4 TB Ceph NVMe (replicated) | EBS gp3 or io2
Object storage | RGW (S3-compatible) included | S3 with separate metering
Egress | 95th-percentile, 18 Gbps public | Per-GB ($0.09/GB first 10 TB)
Operational layer | Day 2 included | Self-managed or paid managed services
Licensing | OpenStack + Ceph (no fees) | Per-service fees (EKS, RDS, etc.)
Pricing model | Fixed monthly, 5-year price lock available | Reserved instances with separate egress/storage billing

When public cloud is the better fit

Public cloud remains a strong choice for event-driven architectures that need scale-to-zero, workloads with deep integration into AWS-native services (Lambda, DynamoDB, SageMaker, managed databases), and small organizations spending under $10,000/month where the operational simplicity of fully-managed services outweighs the unit-cost premium. For mid-to-large organizations spending above $20,000/month, OpenMetal’s fixed pricing model typically delivers close to 50% cost reduction once compute, storage, egress, and licensing are aggregated.

Deployment Options

3-node Hosted Private Cloud cluster

The default Large v5 cluster size. Deploys in under 45 seconds with the full OpenStack control plane and Ceph backend. Suitable for most SaaS, internal-platform, and VMware-migration starting points.

Multi-cluster deployments

For workloads requiring more than a single cluster’s capacity, OpenMetal supports multi-cluster Large v5 deployments with cross-cluster networking via private VLANs. Useful for region-distributed SaaS, blue/green production rollouts, and disaster recovery configurations.

Bare metal alternative

If your workload doesn’t need the OpenStack management plane, you can deploy Large v5 servers as standalone bare metal with full root access and IPMI. The bare metal Large v5 ships with the same hardware and the same fixed monthly pricing model, without the cluster Day 2 layer. See the bare metal Large v5 page for the standalone configuration.

Where to deploy

Deploy a Large v5 Hosted Private Cloud cluster in Ashburn, Los Angeles, Amsterdam, or Singapore. All locations offer the same fixed monthly pricing regardless of region.

Location | Region | Facility Certifications | Location Page
Ashburn, Virginia | US-East | SOC1/2 Type II, ISO 27001, PCI DSS, NIST 800-53, HIPAA | Ashburn facility specs
Los Angeles, California | US-West | SOC1/2, ISO 27001, PCI-DSS, HIPAA | Los Angeles facility specs
Amsterdam, Netherlands | EU-West | SOC Type 1/2, PCI-DSS, ISO 27001, ISO 50001, ISO 22301 | Amsterdam facility specs
Singapore | Asia | BCA Green Mark Platinum | Singapore facility specs

All facilities are Tier III data center spaces. Facility certifications are held by the facility operator. Proof of Concept clusters are available for testing OpenStack workflows, Ceph performance, and migration patterns before committing to a production deployment.

Get a Large v5 Hosted Private Cloud Quote

Tell us about your infrastructure needs and we’ll provide a custom quote for the Large v5 — as a standalone bare metal server or as part of a Hosted Private Cloud cluster.

  • Bare metal: Single-server or multi-server deployments with full root access and IPMI
  • Hosted Private Cloud: Three-node OpenStack + Ceph clusters with Day 2 operations included
  • Custom configurations: RAM upgrades, additional NVMe drives, TDX enablement

Ramp pricing available for migrations. All deployments include fixed monthly pricing, 99.96%+ network SLA, and DDoS protection.