The Large v5 is OpenMetal’s current-generation mid-tier bare metal server, built on dual Intel Xeon 6517P processors on the Granite Rapids architecture (Intel 3 process node). It replaces the Large v4 with a 14% higher base clock (3.2 GHz vs 2.8 GHz), a near-doubling of L3 cache (144 MB vs 75 MB), 23% faster DDR5-6400 memory, and a 15% reduction in per-socket TDP (190W vs 225W). It is available as a standalone dedicated server with full root access and IPMI, or as part of a three-node Hosted Private Cloud cluster running OpenStack and Ceph, both on fixed monthly pricing with no shared tenancy.
Key Takeaways
- Granite Rapids architecture on Intel 3: Dual Xeon 6517P delivers 32 cores / 64 threads with a 14% higher base clock and 92% larger L3 cache than the Large v4, while drawing 15% less power per socket. The combined effect is faster per-core throughput for database queries, web serving, and real-time analytics, alongside greater thermal headroom for sustained turbo.
- DDR5-6400 memory bandwidth: 512 GB across 8 channels per socket delivers ~819 GB/s aggregate memory bandwidth, a 23% increase over the Large v4’s DDR5-5200. The eight open DIMM slots provide a clean path to 1 TB or beyond, and upgrading to 1 TB also activates Intel TDX confidential computing on this server.
- Boot and data drive isolation: Two dedicated 960 GB boot SSDs in RAID 1 keep OS I/O off the 12.8 TB Micron 7500 MAX data NVMe pool, eliminating I/O contention that affects database WAL writes, analytics scans, and blockchain chain sync. The 10-bay chassis leaves 8 bays open for expansion beyond the base configuration.
- PCIe 5.0 with 88 lanes per processor: 176 total PCIe 5.0 lanes feed the Micron 7500 MAX NVMe drives and LACP-bonded network adapters directly, with bandwidth headroom for additional NVMe expansion, GPU pairing, or high-throughput accelerators without saturating the I/O subsystem.
- TDX-upgradeable confidential computing: Intel TDX hardware-isolated trust domains activate with a 1 TB RAM upgrade (8 open DIMM slots available). Intel SGX enclaves are enabled by default. No hypervisor licensing or cloud attestation fees.
- 95th-percentile egress billing: Public bandwidth is billed at the 95th percentile of measured usage, not per GB transferred. Predictable monthly costs with no AWS-style egress surprises. Includes 6 Gbps public bandwidth with burst capacity.
Server Configuration at a Glance
| Component | Specification |
|---|---|
| Processor | 2x Intel Xeon 6517P (Granite Rapids, Intel 3 process node) |
| Total Cores / Threads | 32 cores / 64 threads |
| Base / Max Turbo Frequency | 3.2 GHz / 4.2 GHz |
| L3 Cache | 72 MB per processor (144 MB total) |
| TDP | 190W per processor |
| UPI | 3 UPI links at 24 GT/s |
| Memory | 512 GB DDR5-6400 ECC (16 DIMM slots, 8 populated) |
| Boot Storage | 2x 960 GB SSD in RAID 1 (dedicated OS drives) |
| Data Storage | 2x 6.4 TB Micron 7500 MAX NVMe (12.8 TB total) |
| Max Drive Bays | 10 drives |
| Private Bandwidth | 20 Gbps (2x 10 Gbps LACP bonded) |
| Public Bandwidth | 6 Gbps |
| Network SLA | 99.96% base (actual >99.99% since 2022) |
| DDoS Protection | Included, up to 10 Gbps per IP |
| PCIe | PCIe 5.0, 88 lanes per processor (176 total) |
| Remote Management | Full IPMI access (power, console, BIOS, OS install) |
| Confidential Computing | Intel TDX (requires 1 TB RAM upgrade) + Intel SGX (enabled by default) |
| Compliance | HIPAA-eligible (Ashburn, Los Angeles); SOC 1/2, ISO 27001, PCI-DSS (facility-level, varies by location) |
| Pricing | Fixed monthly — see openmetal.io/bare-metal-pricing (Large v5 pricing in preview; contact OpenMetal for current rates) |
Large v5 component architecture
Ready to Deploy a Large v5?
Tell us about your workload and we’ll help you configure the right deployment — bare metal or Hosted Private Cloud, in any of our four data center regions.
Processor: Intel Xeon 6517P (Granite Rapids)
OpenMetal selected the Xeon 6517P for the Large v5 to deliver a meaningful per-core uplift over the Large v4 without changing the 32-core / 64-thread footprint that customers have built their deployments around. At 3.2 GHz base / 4.2 GHz turbo, the 6517P runs 14% faster at base clock than the 6526Y it replaces, while drawing 35W less per socket (190W vs 225W) thanks to the move from Intel 7 to the Intel 3 process node. The result is more headroom for sustained turbo on latency-sensitive workloads like database queries, RPC handling, and web serving.
The L3 cache jump is the second headline. Each 6517P provides 72 MB of L3 cache (144 MB across both sockets), nearly double the 75 MB total on the Large v4. Larger cache directly benefits workloads with large working sets that previously spilled to memory: PostgreSQL shared buffers, ClickHouse query state, JIT-compiled query plans in Presto, and per-tenant Proxmox VM page tables. The increased cache combined with the wider UPI fabric (3 links at 24 GT/s, up from 2 links at 16 GT/s on the Large v4) keeps cross-socket NUMA traffic fast for multi-instance database hosting and dense virtualization layouts.
PCIe 5.0 is delivered through 88 lanes per processor (176 total), feeding the Micron 7500 MAX NVMe drives and LACP-bonded NICs at full link width with significant headroom for expansion. For workload-specific acceleration, the 6517P includes Intel AMX (Advanced Matrix Extensions) for INT8/BF16 matrix operations in ML inference, AVX-512 with dual FMA units for vectorized floating-point compute, and full Hyper-Threading. These accelerate vectorized database query execution (ClickHouse, PostgreSQL), scientific computing, and batch ML inference without GPU hardware. For GPU-class training and inference workloads, OpenMetal offers dedicated A100 and H100 servers in the same facilities.
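On a running Linux host, the presence of these instruction-set extensions can be confirmed from /proc/cpuinfo. A minimal sketch (amx_tile and avx512f are the standard kernel flag names; the parsing is separated from file I/O so it works without the hardware):

```python
# Check /proc/cpuinfo for the accelerator flags discussed above.

def has_features(flags_line: str, wanted: tuple[str, ...] = ("amx_tile", "avx512f")) -> dict[str, bool]:
    """Map each wanted CPU flag to whether it appears in a cpuinfo flags line."""
    flags = set(flags_line.split())
    return {f: f in flags for f in wanted}

def read_cpu_flags(path: str = "/proc/cpuinfo") -> str:
    """Return the first 'flags' line from cpuinfo, or '' if unavailable."""
    try:
        with open(path) as fh:
            for line in fh:
                if line.startswith("flags"):
                    return line.split(":", 1)[1]
    except OSError:
        pass
    return ""

if __name__ == "__main__":
    print(has_features(read_cpu_flags()))
```

If a flag reports False on a Granite Rapids host, check that the BIOS has not disabled the extension and that the kernel is recent enough to expose it.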
Memory: 512 GB DDR5-6400
OpenMetal configures the Large v5 with 512 GB of DDR5-6400 ECC registered memory across 8 of the 16 DIMM slots. DDR5-6400 operates at 6,400 MT/s, delivering 51.2 GB/s per channel. With 8 memory channels per processor and two processors, aggregate memory bandwidth reaches approximately 819 GB/s, a 23% increase over the Large v4’s DDR5-5200.
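The arithmetic behind those figures is straightforward; a quick sketch, assuming the standard 64-bit (8-byte) DDR5 data bus per channel:

```python
# Peak theoretical DDR5 bandwidth for the Large v5 memory layout:
# transfer rate (MT/s) x 8 bytes per transfer x channels x sockets.

def ddr5_peak_bandwidth_gbs(mt_per_s: int, channels: int, sockets: int, bus_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s (1 GB = 1e9 bytes)."""
    per_channel = mt_per_s * 1e6 * bus_bytes / 1e9  # GB/s per channel
    return per_channel * channels * sockets

v5 = ddr5_peak_bandwidth_gbs(6400, channels=8, sockets=2)  # Large v5: DDR5-6400
v4 = ddr5_peak_bandwidth_gbs(5200, channels=8, sockets=2)  # Large v4: DDR5-5200

print(f"Large v5: {v5:.1f} GB/s")               # 819.2 GB/s
print(f"Large v4: {v4:.1f} GB/s")               # 665.6 GB/s
print(f"uplift:   {(v5 / v4 - 1) * 100:.0f}%")  # 23%
```

These are peak theoretical numbers; sustained bandwidth under real access patterns will be lower, but the generational ratio holds.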
That bandwidth matters most for memory-throughput-bound workloads: in-memory databases scanning large datasets (Redis, PostgreSQL shared buffers), virtual machines competing for memory bus access in Proxmox or KVM environments, and vectorized compute operations processing data in large batches with AVX-512. Combined with the 144 MB of L3 cache across both sockets, the Large v5 keeps far more hot data on-die than the Large v4, reducing the frequency of DRAM round-trips that dominate query latency for analytical workloads.
The 8 open DIMM slots provide a clear upgrade path to 1 TB or beyond without replacing existing modules. Upgrading to 1 TB activates Intel TDX support for hardware-isolated confidential computing (see Security section). Contact OpenMetal to discuss RAM upgrades on deployed servers. ECC is standard across all OpenMetal configurations, catching and correcting single-bit memory errors before they affect running workloads, a requirement for production databases, financial systems, and compliance-sensitive deployments.
Storage: Micron 7500 MAX NVMe
Boot and data isolation
The Large v5 separates boot and data storage into dedicated drive pools. Two 960 GB SSDs serve as dedicated OS drives in RAID 1, keeping system-level I/O (logging, package management, system monitoring) completely off the data NVMe drives. This is an OpenMetal design decision applied across current-generation servers and carried forward from the Large v4. For details on boot drive configuration, see the Dual Boot Drives with RAID 1 feature page.
Data drives: Micron 7500 MAX
The Large v5 ships with 2x 6.4 TB Micron 7500 MAX NVMe SSDs (12.8 TB total raw capacity). The 7500 MAX uses Micron’s 232-layer 3D TLC NAND with 6-plane architecture and independent wordline read (iWL), connected via PCIe Gen4 x4 (NVMe v2.0b).
| Metric | Micron 7500 MAX (6400 GB) |
|---|---|
| Sequential Read | 7,000 MB/s |
| Sequential Write | 5,900 MB/s |
| Random Read IOPS | 1,100,000 |
| Random Write IOPS | 400,000 |
| Mixed 70/30 IOPS | 650,000 |
| Read Latency (typical) | 70 us |
| Write Latency (typical) | 15 us |
| Read Latency (99th pct) | 80 us |
| Write Latency (99th pct) | 65 us |
| Endurance (TBW) | 35,040 TB |
| DWPD | 3 |
| MTTF | 2,000,000 hours (0-55C) / 2,500,000 hours (0-50C) |
| QoS | Sub-1ms at 99.9999% (6-nines) for 4KB random read up to QD128 |
| Warranty | 5 years |
The chassis supports up to 10 total drives, leaving 8 open bays for additional NVMe capacity beyond the base 12.8 TB configuration — a meaningful expansion path over the Large v4’s 6-bay chassis.
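To baseline a deployed server against the spec-sheet figures, the mixed 70/30 metric can be reproduced with fio. A hedged job-file sketch (the target path, file size, and runtime are placeholders; adjust to your data mount):

```ini
; Sketch of an fio job approximating the table's mixed 70/30 4K metric.
; All values are starting points, not an OpenMetal-published benchmark config.
[mixed-7030]
filename=/mnt/data/fio.test
size=10G
rw=randrw
rwmixread=70
bs=4k
iodepth=128
numjobs=4
ioengine=libaio
direct=1
runtime=60
time_based
group_reporting
```

Running it against the data NVMe pool while the OS stays on the boot drives is also a direct way to observe the boot/data isolation described above.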
Networking
Each Large v5 ships with dual 10 Gbps NICs, LACP bonded for 20 Gbps aggregate private bandwidth. Hardware lives on customer-specific VLANs, and all private network traffic is included at no additional cost. This is where east-west traffic between servers in the same deployment moves: database replication, Ceph OSD heartbeats, Proxmox corosync and live migration, Kubernetes pod-to-pod communication.
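On a self-managed OS install, the LACP bond can be expressed in a few lines. A hedged netplan sketch for Ubuntu (interface names and addressing are placeholders; on OpenMetal-provisioned images this is typically configured for you):

```yaml
# Hypothetical netplan config for the 2x10 Gbps LACP bond.
network:
  version: 2
  ethernets:
    enp1s0f0: {}
    enp1s0f1: {}
  bonds:
    bond0:
      interfaces: [enp1s0f0, enp1s0f1]
      parameters:
        mode: 802.3ad              # LACP
        lacp-rate: fast
        transmit-hash-policy: layer3+4
      addresses: [10.0.0.10/24]    # placeholder private address
```

Note that LACP hashes each flow to one member link, so a single TCP stream tops out at 10 Gbps; the 20 Gbps aggregate shows up across many concurrent flows.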
The Large v5 includes 6 Gbps of public bandwidth, a 50% increase over the Large v4’s 4 Gbps base allocation. OpenMetal’s base network SLA is 99.96%, with actual performance exceeding 99.99% since 2022 (2026 is also tracking above 99.99%). DDoS protection is included for up to 10 Gbps per IP. Use OpenMetal IPs or bring your own /24 or larger block for direct public internet connectivity.
Egress pricing: 95th-percentile billing, not per-GB transfer
Public egress is billed at the 95th percentile of measured usage, not per-GB transfer. This is a structural cost advantage over AWS, Azure, and GCP, where egress charges scale linearly with traffic volume. On OpenMetal, a server that bursts to 10 Gbps during a deployment window but averages 2 Gbps pays at the 95th-percentile rate, not for every byte transferred. Additional 1 Gbps public egress is available at $375/month in advance, or billed at the 95th percentile for overages in arrears.
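The mechanics are simple to sketch: sample bandwidth over the month, sort the samples, discard the top 5%, and bill at the highest remaining sample. A minimal sketch (the 5-minute sampling interval is the industry convention, assumed here rather than confirmed for OpenMetal):

```python
# 95th-percentile billing: the top 5% of samples are discarded, so short
# bursts do not set the billable rate.

def p95_billable_mbps(samples_mbps: list[float]) -> float:
    """Billable rate: the sample at the 95th percentile of the sorted list."""
    ordered = sorted(samples_mbps)
    idx = max(int(len(ordered) * 0.95) - 1, 0)
    return ordered[idx]

# Example: 100 five-minute samples; the server idles at 2 Gbps but
# bursts to 10 Gbps for 2% of the period.
samples = [2000.0] * 98 + [10000.0] * 2
print(p95_billable_mbps(samples))  # 2000.0 -- the bursts fall in the discarded top 5%
```

Under per-GB billing the same traffic profile would pay for every burst byte; here the burst is free as long as it stays within the discarded 5% of samples (roughly 36 hours per month).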
Security, Compliance, and Confidential Computing
The Large v5’s Xeon 6517P supports Intel TDX, but activation requires a RAM upgrade to 1 TB (filling all 16 DIMM slots). TDX creates hardware-isolated trust domains that protect VM memory from the host OS, hypervisor, and other VMs, enforced by the CPU itself rather than software. This is a customer-initiated upgrade, not a default configuration. Contact OpenMetal to schedule a RAM upgrade on a deployed Large v5. For servers that ship TDX-enabled out of the box, see the XL v4, XL v5, XXL v4, and XL v4 High Frequency tiers (all have 1 TB+ RAM by default).
Intel SGX enclaves are enabled by default on the Large v5 for application-level memory encryption, protecting specific application code and data inside encrypted enclaves independent of TDX — useful for key management, certificate signing, and secure computation on sensitive data without exposing it to the host OS.
TME-MK (Total Memory Encryption — Multi-Key) encrypts all DRAM with per-tenant keys regardless of TDX status. Additional hardware security features include:
- AES-NI: Hardware-accelerated AES encryption for TLS termination, disk encryption, and VPN throughput without CPU overhead.
- Boot Guard: Verifies firmware integrity during boot, preventing rootkit injection before the OS loads.
- Control-Flow Enforcement Technology (CET): Hardware-level protection against ROP/JOP attacks.
- Crypto Acceleration and QuickAssist Technology (QAT): Offload paths for cryptographic and compression workloads, reducing main-core load on TLS-heavy services.
HIPAA and regulatory compliance
OpenMetal is HIPAA compliant at the organizational level and offers Business Associate Agreements (BAAs). This is an OpenMetal organizational certification, not a facility-level one.
Large v5 servers deployed in Ashburn and Los Angeles are hosted in HIPAA-compliant facilities. Facility-level certifications are held by the facility operator (not OpenMetal) and vary by location:
- Ashburn, VA: SOC1 Type II, SOC2 Type II, ISO 27001, ISO 50001, PCI DSS, NIST 800-53 HIGH, HIPAA (facility-level)
- Los Angeles, CA: SOC1/SOC2, ISO 27001, PCI-DSS, HIPAA (facility-level)
- Amsterdam, NL: SOC Type 1/2, PCI-DSS, ISO 27001, ISO 50001, ISO 22301
- Singapore: BCA Green Mark Platinum [additional certifications pending]
Recommended Workloads on the Large v5
Databases and transaction processing
The Large v5’s 32 cores at 3.2 GHz base clock, 144 MB of L3 cache, 512 GB DDR5-6400, and 12.8 TB of Micron 7500 MAX NVMe (1.1M random read IOPS, 70 us typical latency) make it a strong fit for PostgreSQL, MySQL, SQL Server, and Oracle workloads. The larger L3 cache directly benefits query plans that previously spilled to memory on the Large v4, while DDR5-6400 reduces buffer-cache miss penalties. Boot/data isolation keeps WAL writes and checkpoint I/O on dedicated NVMe, free from OS-level contention. Deploy multiple Large v5 servers with LACP-bonded private networking for primary-replica replication over the 20 Gbps mesh. IPMI access allows BIOS-level tuning for NUMA-aware database placement.
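As a hedged illustration, some postgresql.conf starting points sized to this hardware (all values are conventional rules of thumb to benchmark against, not OpenMetal recommendations):

```ini
# Hypothetical PostgreSQL starting points for 512 GB RAM / 64 threads / NVMe.
shared_buffers = 128GB           # ~25% of RAM is a common starting point
effective_cache_size = 384GB     # planner hint: shared_buffers + OS page cache
max_worker_processes = 64        # match the 64 hardware threads
max_parallel_workers = 32
random_page_cost = 1.1           # NVMe: random reads cost ~ sequential
effective_io_concurrency = 256   # exploit deep NVMe queue depths
wal_compression = on
huge_pages = try                 # reduces TLB pressure with large shared_buffers
```

Each setting should be validated against the actual workload; shared_buffers in particular often performs best well below the 25% heuristic for write-heavy databases.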
Kubernetes and container orchestration
Run Talos, Flatcar, K3s, or full Kubernetes on bare metal with direct hardware access and no hypervisor overhead. The Large v5’s 64 threads handle dense pod scheduling, while 512 GB RAM supports large etcd datasets and in-memory caching layers. Boot/data isolation means container runtime I/O (image pulls, overlay writes) stays on the boot drives while application data uses the NVMe pool. Three or more Large v5 servers connected over the 20 Gbps LACP mesh form a production-grade cluster with Ceph or local NVMe for persistent volumes. The 10-bay chassis leaves expansion headroom for storage-heavy clusters without provisioning separate storage nodes.
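One way to express that storage split in Kubernetes is a local StorageClass bound to the NVMe pool while the container runtime stays on the boot drives. A hedged sketch (the class name and provisioning approach are placeholders; pair it with PersistentVolumes pointing at paths on the data drives):

```yaml
# Hypothetical StorageClass for statically provisioned local NVMe volumes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-nvme
provisioner: kubernetes.io/no-provisioner   # static local PVs, no dynamic provisioning
volumeBindingMode: WaitForFirstConsumer     # bind only once the pod is scheduled
```

WaitForFirstConsumer matters for local volumes: it delays binding until the scheduler picks a node, so a pod is never bound to NVMe on a server it cannot run on.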
Virtualization and multi-tenant hosting (Proxmox, KVM, OpenStack)
OpenMetal publishes a full Proxmox reference architecture for bare metal, demonstrated by Wendell Wilson from Level1Techs on the previous-generation Large v4: a 3-node Proxmox cluster with VLAN segmentation (corosync, storage, VM, management), ZFS storage pools with replication, HA pfSense VM pair for routing, and a 4th node as a dedicated storage/replication target. The same architecture scales to Large v5 with the benefit of higher base clocks for VM responsiveness, deeper L3 cache for guest page tables, and faster DDR5-6400 memory for in-VM workloads. Ceph is also supported as an alternative to ZFS for hyper-converged Proxmox deployments.
Blockchain infrastructure
Validator nodes, archive nodes, and RPC endpoints benefit from the Large v5’s NVMe throughput (7,000 MB/s sequential read) and storage capacity (12.8 TB base, expandable to ~64 TB with all 10 bays populated). The higher base clock helps with serial signature verification on validator nodes, while the larger L3 cache reduces DRAM pressure during state lookups on archive nodes. Boot/data isolation protects chain state writes from OS activity. The fixed monthly pricing model avoids the cost unpredictability of running blockchain nodes on per-hour cloud instances, where storage IOPS costs alone can exceed the entire OpenMetal monthly rate.
Data analytics and batch processing
Spark, Presto, and ClickHouse workloads benefit from the 819 GB/s aggregate memory bandwidth for in-memory shuffles and the Micron 7500 MAX’s 650,000 mixed 70/30 IOPS for spill-to-disk operations. AVX-512 accelerates vectorized query execution in ClickHouse and columnar scans, while the 144 MB L3 cache keeps frequently-accessed columnar segments resident on-die. Multiple Large v5 servers connected over the 20 Gbps private mesh handle distributed queries with low inter-node latency.
ML inference (CPU-based)
Intel AMX on the 6517P accelerates INT8 and BF16 matrix operations for inference workloads using ONNX Runtime, TensorFlow Serving, or PyTorch. The Large v5 handles batch inference, feature serving, and embedding generation without GPU hardware. The combination of higher clocks and larger cache improves throughput for small-batch inference where per-request latency dominates. For training workloads or large-model inference requiring GPU acceleration, OpenMetal offers dedicated A100 and H100 servers deployable in the same facilities and connected over the same private network.
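AMX consumes INT8 (or BF16) tiles, so inference runtimes quantize fp32 weights before dispatching matrix work to it. A stdlib-only sketch of the symmetric quantization step (illustrative only; production runtimes such as ONNX Runtime use per-channel scales and calibration):

```python
# Symmetric per-tensor quantization: fp32 -> int8 plus one scale factor.
# This is the representation AMX INT8 tile operations consume.

def quantize_int8(values: list[float]) -> tuple[list[int], float]:
    """Quantize floats to int8 with a single symmetric scale."""
    scale = max(abs(v) for v in values) / 127 or 1.0  # avoid div-by-zero on all-zeros
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.0, 1.27]
q, s = quantize_int8(weights)
print(q)                 # [50, -127, 0, 127]  (scale ~= 0.01)
print(dequantize(q, s))  # values close to the originals
```

The quantization error introduced here is the accuracy trade-off behind INT8 inference; BF16 via AMX avoids it at roughly half the throughput.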
Ready to Deploy a Large v5?
Tell us about your workload and we’ll help you configure the right deployment — bare metal or Hosted Private Cloud, in any of our four data center regions.
Cloud Comparison: OpenMetal Large v5 vs AWS i4i
The closest AWS instance family by spec profile is the i4i (storage-optimized with persistent local NVMe). The i4i.4xlarge (16 vCPUs, 128 GB RAM, 1x 3,750 GB NVMe) is the nearest price-tier comparison, while the i4i.metal matches closer on raw specs. The Large v5 provides dedicated hardware with persistent local NVMe, full IPMI access, and a current-generation Granite Rapids CPU that no i4i variant currently offers.
| Dimension | OpenMetal Large v5 | AWS i4i.4xlarge |
|---|---|---|
| CPU | 32 physical cores / 64 threads (dedicated, Granite Rapids) | 16 vCPUs (shared physical host, Ice Lake) |
| RAM | 512 GB DDR5-6400 | 128 GB |
| Storage | 12.8 TB NVMe (Micron 7500 MAX, persistent) | 1x 3,750 GB NVMe (ephemeral) |
| Tenancy | Single-tenant dedicated hardware | Shared host (dedicated host extra) |
| Egress | 95th-percentile billing, 6 Gbps public | Per-GB ($0.09/GB first 10 TB) |
| Pricing Model | Fixed monthly, lock up to 5 years | On-demand hourly, or 1/3-year reserved |
| IPMI | Full IPMI (power, console, BIOS) | Not available |
| Network | 20 Gbps private (LACP), 6 Gbps public | Up to 12.5 Gbps |
| Management | Self-managed or Assisted Management | Self-managed |
Note: A more direct AWS comparison by total spec would be a bare metal i3en.metal (96 vCPUs, 768 GB, 8x 7,500 GB NVMe), but that instance costs significantly more per month and still bills egress per GB. The structural difference is the billing model: OpenMetal’s fixed monthly rate includes predictable egress, while AWS costs scale with usage, transfer volume, and commitment term.
When public cloud is the better fit
Public cloud remains a strong choice for event-driven architectures that need scale-to-zero, workloads with deep integration into AWS-native services (Lambda, DynamoDB, SageMaker), and organizations spending under $10,000/month on cloud infrastructure where the operational overhead of managing dedicated hardware may outweigh the cost savings. For organizations spending above $20,000/month on public cloud, OpenMetal’s fixed pricing model typically delivers close to 50% cost reduction.
Generation Comparison: Large v4 vs Large v5
| Component | Large v4 | Large v5 | Improvement |
|---|---|---|---|
| CPU | 2x Xeon Gold 6526Y (Emerald Rapids) | 2x Xeon 6517P (Granite Rapids) | Newer microarchitecture |
| Process Node | Intel 7 | Intel 3 | Newer node, lower power per perf |
| Base Clock | 2.8 GHz | 3.2 GHz | +14% |
| Max Turbo | 3.9 GHz | 4.2 GHz | +7.7% |
| Cores / Threads | 32C / 64T | 32C / 64T | Same |
| L3 Cache | 37.5 MB per CPU (75 MB total) | 72 MB per CPU (144 MB total) | +92% total |
| TDP per CPU | 225W | 190W | -15.5% (more efficient) |
| PCIe Generation | PCIe 5.0 | PCIe 5.0 | Same generation, 88 lanes per CPU on v5 |
| UPI | 2 links at 16 GT/s | 3 links at 24 GT/s | +125% cross-socket bandwidth |
| Memory Speed | DDR5-5200 | DDR5-6400 | +23% bandwidth |
| Memory Capacity | 512 GB | 512 GB | Same (both upgradeable) |
| Boot Drives | 2x 960 GB RAID 1 | 2x 960 GB RAID 1 | Same |
| Data Storage | 2x 6.4 TB Micron 7500 MAX | 2x 6.4 TB Micron 7500 MAX | Same capacity, same drive |
| Max Drive Bays | 6 | 10 | +4 bays for expansion |
| Public Bandwidth | 4 Gbps | 6 Gbps | +50% |
| Network | 2x 10 Gbps (LACP) | 2x 10 Gbps (LACP) | Same |
The Large v5’s headline improvements are the move to the Granite Rapids architecture on the Intel 3 process node, a 14% higher sustained base clock, near-doubling of L3 cache, 23% faster DDR5 memory, and a 50% larger public bandwidth allocation. The TDP reduction from 225W to 190W per socket means more sustained turbo headroom for clock-sensitive workloads. The chassis change to 10 drive bays gives meaningful expansion room for storage-heavy deployments without moving to a Storage Server tier.
Deployment Options
Bare Metal Dedicated Server
Deploy a Large v5 as a standalone bare metal server with full root access and IPMI remote management. Every server is single-tenant dedicated hardware with no shared components. Pre-built images are available for Big Data, Virtualization, and High Performance Computing environments, or install a custom OS via IPMI console access. Pricing is fixed monthly with the option to lock rates for up to 5 years. Ramp pricing is available for migrations from other providers, allowing you to avoid paying for two environments simultaneously during the transition.
Hosted Private Cloud
Deploy a three-node Large v5 Hosted Private Cloud cluster running OpenStack and Ceph, production-ready in under 45 seconds. OpenMetal handles Day 2 operations including monitoring, patching, and incident response. No VMware licensing costs, no vSphere fees. Full OpenStack API and Horizon dashboard access. Ceph provides distributed block and object storage across the cluster with no additional licensing.
Where to deploy
Deploy a Large v5 bare metal server or Hosted Private Cloud cluster in Ashburn, Los Angeles, Amsterdam, or Singapore. All locations offer the same fixed monthly pricing regardless of region.
| Location | Region | Facility Certifications | Location Page |
|---|---|---|---|
| Ashburn, Virginia | US-East | SOC1/2 Type II, ISO 27001, PCI DSS, NIST 800-53, HIPAA | Ashburn facility specs |
| Los Angeles, California | US-West | SOC1/2, ISO 27001, PCI-DSS, HIPAA | Los Angeles facility specs |
| Amsterdam, Netherlands | EU-West | SOC Type 1/2, PCI-DSS, ISO 27001, ISO 50001, ISO 22301 | Amsterdam facility specs |
| Singapore | Asia | BCA Green Mark Platinum | Singapore facility specs |
All facilities are Tier III data center spaces. Facility certifications are held by the facility operator. Proof of Concept clusters are available for testing replication, environment fit, and workload validation before committing to a production deployment.
Get a Large v5 Quote
Tell us about your infrastructure needs and we’ll provide a custom quote for the Large v5 — as a standalone bare metal server or as part of a Hosted Private Cloud cluster.
- Bare metal: Single-server or multi-server deployments with full root access and IPMI
- Hosted Private Cloud: Three-node OpenStack + Ceph clusters with Day 2 operations included
- Custom configurations: RAM upgrades, additional NVMe drives, TDX enablement
Ramp pricing available for migrations. All deployments include fixed monthly pricing, 99.96%+ network SLA, and DDoS protection.