The Large v4 is OpenMetal’s mid-tier bare metal dedicated server, built on dual 5th Gen Intel Xeon Gold 6526Y processors (Emerald Rapids). It is the next generation of the Large v3, with a 40% higher base clock (2.8 GHz vs 2.0 GHz), 18% faster memory bandwidth from DDR5-5200, and dedicated dual boot drives in RAID 1, a feature the v3 lacked. It is available as a standalone dedicated server with full root access and IPMI, or as part of a three-node Hosted Private Cloud cluster running OpenStack and Ceph; both options carry fixed monthly pricing with no shared tenancy.
Key Takeaways
- 40% higher base clocks over Large v3: Dual Xeon Gold 6526Y at 2.8 GHz base (vs 2.0 GHz on the v3’s 5416S) delivers more single-threaded throughput for latency-sensitive workloads like database queries, web serving, and real-time analytics, while maintaining the same 32-core / 64-thread count.
- DDR5-5200 memory bandwidth: 512 GB across 8 channels per socket delivers ~665 GB/s aggregate bandwidth, directly benefiting in-memory databases, ML inference pipelines, and virtualized workloads competing for memory bus access. Eight open DIMM slots allow upgrading to 1 TB+.
- Boot and data drive isolation: Two dedicated 960 GB boot SSDs in RAID 1 keep OS I/O off the 12.8 TB Micron 7500 MAX data NVMe pool, eliminating I/O contention that affects database WAL writes, analytics scans, and blockchain chain sync. The Large v3 shipped with a single boot drive.
- TDX-upgradeable confidential computing: Intel TDX hardware-isolated trust domains activate with a 1 TB RAM upgrade (8 open DIMM slots). Intel SGX enclaves are available by default. No hypervisor licensing or cloud attestation fees.
- 95th-percentile egress billing: Public bandwidth is billed on 95th-percentile burst, not per-GB transfer. Predictable monthly costs with no AWS-style egress surprises. Can burst up to 40 Gbps.
- HIPAA-eligible infrastructure: OpenMetal is HIPAA compliant at the organizational level and offers BAAs. Large v4 servers deployed in Ashburn and Los Angeles are hosted in HIPAA-compliant facilities operated by the facility provider.
Server Configuration at a Glance
| Component | Specification |
|---|---|
| Processor | 2x Intel Xeon Gold 6526Y (Emerald Rapids, Intel 7) |
| Total Cores / Threads | 32 cores / 64 threads |
| Base / Max Turbo Frequency | 2.8 GHz / 3.9 GHz |
| L3 Cache | 37.5 MB per processor (75 MB total) |
| TDP | 225W per processor |
| Memory | 512 GB DDR5-5200 ECC (16 DIMM slots, 8 populated) |
| Boot Storage | 2x 960 GB SSD in RAID 1 (dedicated OS drives) |
| Data Storage | 2x 6.4 TB Micron 7500 MAX NVMe (12.8 TB total) |
| Max Drive Bays | 6 drives |
| Private Bandwidth | 20 Gbps (2x 10 Gbps LACP bonded) |
| Public Bandwidth | 4 Gbps (burst up to 40 Gbps) |
| Network SLA | 99.96% base (actual >99.99% since 2022) |
| DDoS Protection | Included, up to 10 Gbps per IP |
| PCIe | PCIe 5.0 |
| Remote Management | Full IPMI access (power, console, BIOS, OS install) |
| Confidential Computing | Intel TDX (requires 1 TB RAM upgrade) + Intel SGX |
| Compliance | HIPAA-eligible (Ashburn, Los Angeles); SOC 1/2, ISO 27001, PCI-DSS (facility-level, varies by location) |
| Pricing | Fixed monthly — see openmetal.io/bare-metal-pricing |
Large v4 component architecture: dual-socket Xeon Gold 6526Y, DDR5-5200 memory channels, boot/data drive isolation, LACP-bonded networking
Ready to Deploy a Large v4?
Tell us about your workload and we’ll help you configure the right deployment — bare metal or Hosted Private Cloud, in any of our four data center regions.
Intel Xeon Gold 6526Y (Emerald Rapids)
OpenMetal selected the Xeon Gold 6526Y for the Large v4 to balance single-threaded clock speed with multi-threaded density. At 2.8 GHz base / 3.9 GHz turbo across 32 cores (64 threads in dual-socket), the 6526Y hits the sweet spot for mixed workloads that need both per-core performance (database queries, web serving) and parallel throughput (virtualization, batch analytics). Each processor provides 37.5 MB of L3 cache, 75 MB total across both sockets.
The 6526Y belongs to Intel’s 5th Gen Xeon Scalable family (Emerald Rapids), fabricated on Intel 7 at a 225W TDP per socket. PCIe 5.0 provides high-bandwidth connections to the Micron 7500 MAX NVMe drives and LACP-bonded network adapters. Inter-socket UPI links run at 16 GT/s, keeping cross-socket traffic fast for NUMA-aware workloads like multi-instance database hosting and Proxmox VM placement.
For workload-specific acceleration, the 6526Y includes Intel AMX (Advanced Matrix Extensions) for INT8/BF16 matrix operations in ML inference, AVX-512 with dual FMA units for vectorized floating-point compute, and full Hyper-Threading. These accelerate vectorized database query execution (ClickHouse, PostgreSQL), scientific computing, and batch ML inference without GPU hardware. For GPU-class training and inference workloads, OpenMetal offers dedicated A100 and H100 servers in the same facilities.
512 GB DDR5-5200
OpenMetal configures the Large v4 with 512 GB of DDR5-5200 ECC registered memory across 8 of the 16 DIMM slots. DDR5-5200 operates at 5,200 MT/s, delivering 41.6 GB/s per channel. With 8 memory channels per processor and two processors, aggregate memory bandwidth reaches approximately 665 GB/s.
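The ~665 GB/s figure follows directly from the channel arithmetic; a quick sketch:

```python
# Back-of-the-envelope DDR5 bandwidth math for the Large v4's memory layout.
# DDR5-5200 moves 5,200 megatransfers/s over a 64-bit (8-byte) channel.
transfers_per_sec = 5_200_000_000   # 5,200 MT/s
bytes_per_transfer = 8              # 64-bit channel width
channels_per_socket = 8
sockets = 2

per_channel_gbs = transfers_per_sec * bytes_per_transfer / 1e9   # 41.6 GB/s
aggregate_gbs = per_channel_gbs * channels_per_socket * sockets  # 665.6 GB/s

print(f"{per_channel_gbs:.1f} GB/s per channel, {aggregate_gbs:.1f} GB/s aggregate")
```

Real-world sustained bandwidth lands below this theoretical peak once refresh cycles and controller overhead are accounted for, which is why the text hedges to "approximately 665 GB/s."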
That bandwidth matters most for memory-throughput-bound workloads: in-memory databases scanning large datasets (Redis, PostgreSQL shared buffers), virtual machines competing for memory bus access in Proxmox or KVM environments, and vectorized compute operations processing data in large batches with AVX-512. The 18% bandwidth increase over the Large v3’s DDR5-4400 translates to measurable gains in memory-intensive query execution and VM density.
The 8 open DIMM slots provide a clear upgrade path to 1 TB or beyond without replacing existing modules. Upgrading to 1 TB activates Intel TDX support for hardware-isolated confidential computing (see Security section). Contact OpenMetal to discuss RAM upgrades on deployed servers. ECC is standard across all OpenMetal configurations, catching and correcting single-bit memory errors before they affect running workloads, a requirement for production databases, financial systems, and compliance-sensitive deployments.
Micron 7500 MAX NVMe
Boot and data isolation
The Large v4 separates boot and data storage into dedicated drive pools. Two 960 GB SSDs serve as dedicated OS drives in RAID 1, keeping system-level I/O (logging, package management, system monitoring) completely off the data NVMe drives. This is an OpenMetal design decision applied across current-generation servers: the Large v3 shipped with only a single boot drive, so the v4 adds both redundancy and isolation. For details on boot drive configuration, see the Dual Boot Drives with RAID 1 feature page.
Micron 7500 MAX data drives
The Large v4 ships with 2x 6.4 TB Micron 7500 MAX NVMe SSDs (12.8 TB total raw capacity). The 7500 MAX uses Micron’s 232-layer 3D TLC NAND with 6-plane architecture and independent wordline read (iWL), connected via PCIe Gen4 x4 (NVMe v2.0b).
| Metric | Micron 7500 MAX (6400 GB) |
|---|---|
| Sequential Read | 7,000 MB/s |
| Sequential Write | 5,900 MB/s |
| Random Read IOPS | 1,100,000 |
| Random Write IOPS | 400,000 |
| Mixed 70/30 IOPS | 650,000 |
| Read Latency (typical) | 70 us |
| Write Latency (typical) | 15 us |
| Read Latency (99th pct) | 80 us |
| Write Latency (99th pct) | 65 us |
| Endurance (TBW) | 35,040 TB |
| DWPD | 3 |
| MTTF | 2,000,000 hours (0-55°C) / 2,500,000 hours (0-50°C) |
| QoS | Sub-1ms at 99.9999% (6-nines) for 4KB random read up to QD128 |
| Warranty | 5 years |
Source: Micron 7500 Tech Prod Spec Rev. A 10/2023
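The endurance figures in the table are internally consistent; TBW is just the DWPD rating times capacity times the warranty period:

```python
# Cross-check of the Micron 7500 MAX endurance rating:
# TBW = drive writes per day * capacity (TB) * warranty days.
capacity_tb = 6.4      # per-drive capacity
dwpd = 3               # rated drive writes per day
warranty_years = 5

tbw = dwpd * capacity_tb * 365 * warranty_years
print(f"{tbw:,.0f} TB written over the warranty period")  # matches the 35,040 TB spec
```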
The chassis supports up to 6 total drives, leaving 2 open bays for additional NVMe capacity beyond the base 12.8 TB configuration.
Networking
Each Large v4 ships with dual 10 Gbps NICs, LACP bonded for 20 Gbps aggregate private bandwidth. Hardware lives on customer-specific VLANs, and all private network traffic is included at no additional cost. This is where east-west traffic between servers in the same deployment moves: database replication, Ceph OSD heartbeats, Proxmox corosync and live migration, Kubernetes pod-to-pod communication. For LACP bonding details, see the Dual Uplinks with LACP feature page.
The Large v4 includes 4 Gbps of public bandwidth with burst capacity up to 40 Gbps. OpenMetal’s base network SLA is 99.96%, with actual performance exceeding 99.99% since 2022 (2026 is also tracking above 99.99%). DDoS protection is included for up to 10 Gbps per IP. Use OpenMetal IPs or bring your own /24 or larger block for direct public internet connectivity.
Egress pricing: 95th-percentile billing, not per-GB transfer.
Public egress is billed on a 95th-percentile measurement rather than per-GB transfer. This is a structural cost advantage over AWS, Azure, and GCP, where egress charges scale linearly with traffic volume. On OpenMetal, a server that bursts to 10 Gbps during a deployment window but averages 2 Gbps pays the 95th-percentile rate, not for every byte transferred. Additional public egress is available at $375/month per 1 Gbps when committed in advance; overages are billed in arrears at the 95th percentile.
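A minimal sketch of how 95th-percentile billing typically works, assuming the common method of sorting the month's bandwidth samples and discarding the top 5% (OpenMetal's exact sampling interval isn't specified here):

```python
# 95th-percentile billing: sort the month's samples, discard the top 5%,
# bill at the highest remaining sample. Short bursts fall in the discarded
# tail and therefore do not raise the bill.
def billable_95th(samples_mbps):
    ordered = sorted(samples_mbps)
    idx = int(len(ordered) * 0.95) - 1   # index of the 95th-percentile sample
    return ordered[max(idx, 0)]

# 100 samples: a steady 2,000 Mbps baseline with four 10 Gbps bursts.
samples = [2_000] * 96 + [10_000] * 4
print(billable_95th(samples))  # bursts land in the discarded top 5%
```

With the example data the bill is set at the 2 Gbps baseline; under per-GB billing every burst byte would have been charged.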
Security and Confidential Computing
The Large v4’s Xeon Gold 6526Y supports Intel TDX, but activation requires a RAM upgrade to 1 TB (filling all 16 DIMM slots). TDX creates hardware-isolated trust domains that protect VM memory from the host OS, hypervisor, and other VMs, enforced by the CPU itself rather than by software. This is a customer-initiated upgrade, not a default configuration; contact OpenMetal to schedule a RAM upgrade on a deployed Large v4. For servers that ship TDX-enabled out of the box, see the XL v4, XL v5, XXL v4, and XL v4 High Frequency tiers (all ship with 1 TB+ RAM by default).

Intel SGX enclaves are available on the Large v4 by default, providing application-level memory encryption that protects specific application code and data inside encrypted enclaves, independent of TDX. SGX is useful for key management, certificate signing, and secure computation on sensitive data without exposing it to the host OS. Additional hardware security features include:
- TME-MK (Total Memory Encryption — Multi-Key): Encrypts all DRAM with per-tenant keys, active regardless of TDX status.
- AES-NI: Hardware-accelerated AES encryption for TLS termination, disk encryption, and VPN throughput without CPU overhead.
- Boot Guard: Verifies firmware integrity during boot, preventing rootkit injection before the OS loads.
- Control-Flow Enforcement Technology (CET): Hardware-level protection against ROP/JOP attacks.
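One lightweight way to confirm which of these features the kernel exposes is to parse the `flags` line from `/proc/cpuinfo`. A minimal sketch, assuming the usual Linux flag names (`aes` for AES-NI, `sgx`, `tme` for TME, `user_shstk` for CET shadow stacks); names can vary by kernel version:

```python
# Check a cpuinfo-style "flags" line for security-related CPU features.
# Flag names below are assumptions based on common Linux kernel naming;
# verify against your kernel's documentation if a flag appears missing.
def present_features(flags_line, wanted=("aes", "sgx", "tme", "user_shstk")):
    flags = set(flags_line.split())
    return {name: name in flags for name in wanted}

# On a live server you would feed it the real line, e.g. the output of:
#   grep -m1 '^flags' /proc/cpuinfo
example = "fpu vme aes sgx avx512f user_shstk"
print(present_features(example))
```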
For a step-by-step guide to enabling SGX and TDX on OpenMetal servers, see the Enabling Intel SGX and TDX guide page.
HIPAA and Regulatory Compliance
OpenMetal is HIPAA compliant at the organizational level and offers Business Associate Agreements (BAAs). This is an OpenMetal organizational certification, not a facility-level one.
Large v4 servers deployed in Ashburn and Los Angeles are hosted in HIPAA-compliant facilities. Facility-level certifications are held by the facility operator (not OpenMetal) and vary by location:
- Ashburn, VA: SOC1 Type II, SOC2 Type II, ISO 27001, ISO 50001, PCI DSS, NIST 800-53 HIGH, HIPAA (facility-level)
- Los Angeles, CA: SOC1/SOC2, ISO 27001, PCI-DSS, HIPAA (facility-level)
- Amsterdam, NL: SOC Type 1/2, PCI-DSS, ISO 27001, ISO 50001, ISO 22301
- Singapore: BCA Green Mark Platinum [additional certifications pending]
Recommended Workloads on the Large v4
Databases and transaction processing
The Large v4’s 32 cores at 2.8 GHz base clock, 512 GB DDR5-5200, and 12.8 TB of Micron 7500 MAX NVMe (1.1M random read IOPS, 70 us typical latency) make it a strong fit for PostgreSQL, MySQL, SQL Server, and Oracle workloads. Boot/data isolation keeps WAL writes and checkpoint I/O on dedicated NVMe, free from OS-level contention. Deploy multiple Large v4 servers with LACP-bonded private networking for primary-replica replication over the 20 Gbps mesh. IPMI access allows BIOS-level tuning for NUMA-aware database placement.
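As a sketch of what NUMA-aware placement can look like in practice, the snippet below generates per-socket pinning commands under an assumed core layout (the instance names and the `postgres ...` invocation are placeholders; the real enumeration should be verified with `lscpu -e` or `numactl --hardware` before pinning):

```python
# NUMA-aware placement sketch for a dual-socket Large v4.
# Assumes a common Linux enumeration: physical cores 0-15 on socket 0,
# 16-31 on socket 1, hyperthread siblings offset by 32. This is an
# assumption, not a guarantee -- confirm the live topology first.
def socket_cpus(socket, cores_per_socket=16, total_cores=32):
    """Return the physical cores plus HT siblings belonging to one socket."""
    physical = list(range(socket * cores_per_socket, (socket + 1) * cores_per_socket))
    return physical + [c + total_cores for c in physical]

# One database instance bound (CPU and memory) to each NUMA node:
for name, node in (("pg-primary", 0), ("pg-reporting", 1)):
    cpus = ",".join(map(str, socket_cpus(node)))
    print(f"numactl --cpunodebind={node} --membind={node} postgres ...  # {name}: cpus {cpus}")
```

Binding both CPU and memory to one node keeps each instance's working set on local DRAM and avoids cross-socket UPI traffic on hot paths.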
Kubernetes and container orchestration
Run Talos, Flatcar, K3s, or full Kubernetes on bare metal with direct hardware access and no hypervisor overhead. The Large v4’s 64 threads handle dense pod scheduling, while 512 GB RAM supports large etcd datasets and in-memory caching layers. Boot/data isolation means container runtime I/O (image pulls, overlay writes) stays on the boot drives while application data uses the NVMe pool. Three or more Large v4 servers connected over the 20 Gbps LACP mesh form a production-grade cluster with Ceph or local NVMe for persistent volumes.
Virtualization and multi-tenant hosting (Proxmox, KVM, OpenStack)
OpenMetal publishes a full Proxmox reference architecture for bare metal, demonstrated by Wendell Wilson from Level1Techs: a 3-node Proxmox cluster on Large v4 servers with VLAN segmentation (corosync, storage, VM, management), ZFS storage pools with replication, HA pfSense VM pair for routing, and a 4th Large v4 as a dedicated storage/replication node. RamNode achieved a 16x increase in VM density (4 to 64 customers per node on 8GB VDS plans) after migrating to OpenMetal v4 hardware, with a 70% reduction in monthly infrastructure costs. Ceph is also supported as an alternative to ZFS for hyper-converged Proxmox deployments.
Blockchain infrastructure
Validator nodes, archive nodes, and RPC endpoints benefit from the Large v4’s NVMe throughput (7,000 MB/s sequential read) and storage capacity (12.8 TB base, expandable to ~38 TB with all 6 bays populated). Boot/data isolation protects chain state writes from OS activity. The fixed monthly pricing model avoids the cost unpredictability of running blockchain nodes on per-hour cloud instances, where storage IOPS costs alone can exceed the entire OpenMetal monthly rate.
Data analytics and batch processing
Spark, Presto, and ClickHouse workloads benefit from the 665 GB/s aggregate memory bandwidth for in-memory shuffles and the Micron 7500 MAX’s 650,000 mixed 70/30 IOPS for spill-to-disk operations. AVX-512 accelerates vectorized query execution in ClickHouse and columnar scans. Multiple Large v4 servers connected over the 20 Gbps private mesh handle distributed queries with low inter-node latency.
ML inference (CPU-based)
Intel AMX on the 6526Y accelerates INT8 and BF16 matrix operations for inference workloads using ONNX Runtime, TensorFlow Serving, or PyTorch. The Large v4 handles batch inference, feature serving, and embedding generation without GPU hardware. For training workloads or large-model inference requiring GPU acceleration, OpenMetal offers dedicated A100 and H100 servers deployable in the same facilities and connected over the same private network.
How the Large v4 Compares to Public Cloud
The closest AWS instance family by spec profile is the i4i (storage-optimized). The i4i.4xlarge (16 vCPUs, 128 GB RAM, 1x 3,750 GB NVMe) is the nearest price-tier comparison, while the i4i.metal matches closer on raw specs. The Large v4 provides dedicated hardware with persistent local NVMe and IPMI access that no i4i variant offers.
| Dimension | OpenMetal Large v4 | AWS i4i.4xlarge |
|---|---|---|
| CPU | 32 physical cores / 64 threads (dedicated) | 16 vCPUs (shared, multi-tenant infrastructure) |
| RAM | 512 GB DDR5-5200 | 128 GB |
| Storage | 12.8 TB NVMe (Micron 7500 MAX) | 1x 3,750 GB NVMe (ephemeral) |
| Tenancy | Single-tenant dedicated hardware | Shared host (dedicated host extra) |
| Egress | 95th-percentile billing, burst to 40 Gbps | Per-GB ($0.09/GB first 10 TB) |
| Pricing Model | Fixed monthly, lock up to 5 years | On-demand hourly, or 1/3-year reserved |
| IPMI | Full IPMI (power, console, BIOS) | Not available |
| Network | 20 Gbps private (LACP), 4 Gbps public | Up to 12.5 Gbps (shared, burstable network bandwidth) |
| Management | Self-managed or Assisted Management | Self-managed |
Note: A more direct AWS comparison by total spec would be a bare metal i3en.metal (96 vCPUs, 768 GB, 8x 7,500 GB NVMe), but that instance costs significantly more per month and still bills egress per GB. The structural difference is the billing model: OpenMetal’s fixed monthly rate includes predictable egress, while AWS costs scale with usage, transfer volume, and commitment term.
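To make the billing-model difference concrete, here is the per-GB arithmetic for a hypothetical server averaging 2 Gbps of egress over a 30-day month, using the $0.09/GB headline rate quoted in the table (AWS tiering lowers the marginal rate after 10 TB, so a real bill would land somewhat lower, but the order of magnitude holds):

```python
# Per-GB egress cost for a sustained 2 Gbps average over a 30-day month.
# Illustrative arithmetic only, not a quote.
avg_gbps = 2.0
seconds = 30 * 24 * 3600
gb_transferred = avg_gbps / 8 * seconds   # Gbit/s -> GB/s, then total GB
cost_per_gb = 0.09                        # headline rate; tiers reduce it

print(f"{gb_transferred:,.0f} GB -> ${gb_transferred * cost_per_gb:,.0f}/month")
```

That works out to 648,000 GB and roughly $58,000/month at the headline rate, versus a flat, pre-known egress line item under 95th-percentile billing.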
When public cloud is the better fit
Public cloud remains a strong choice for event-driven architectures that need scale-to-zero, workloads with deep integration into AWS-native services (Lambda, DynamoDB, SageMaker), and organizations spending under $10,000/month on cloud infrastructure where the operational overhead of managing dedicated hardware may outweigh the cost savings. For organizations spending above $20,000/month on public cloud, OpenMetal’s fixed pricing model typically delivers close to 50% cost reduction.
A detailed comparison sub-page is available here.
What Changed from Large v3 to Large v4
| Component | Large v3 | Large v4 | Improvement |
|---|---|---|---|
| CPU | 2x Xeon Gold 5416S (Sapphire Rapids) | 2x Xeon Gold 6526Y (Emerald Rapids) | Newer microarchitecture |
| Base Clock | 2.0 GHz | 2.8 GHz | +40% |
| Max Turbo | 4.0 GHz | 3.9 GHz | -2.5% (traded for higher sustained base) |
| Cores / Threads | 32C / 64T | 32C / 64T | Same |
| Memory Speed | DDR5-4400 | DDR5-5200 | +18% bandwidth |
| Memory Capacity | 512 GB | 512 GB | Same (both upgradeable) |
| Boot Drives | 1x 960 GB | 2x 960 GB RAID 1 | Added redundancy + isolation |
| Data Storage | 2x 6.4 TB NVMe | 2x 6.4 TB NVMe | Same capacity |
| Data Drive Model | Micron 7450 MAX | Micron 7500 MAX | +3% seq read, +5% seq write, 10x better UBER |
| Max Drive Bays | 6 | 6 | Same |
| Network | 2x 10 Gbps | 2x 10 Gbps | Same |
| Chassis | SYS-221BT-HNTR | SYS-221BT-HNR | Updated revision |
| Locations | US-East, EU-West, US-West | US-East, EU-West, US-West, Asia | +Singapore |
The Large v4’s headline improvement is the 40% higher sustained base clock, which directly benefits single-threaded workloads like database query latency and web request processing. The addition of dual boot drives with RAID 1 addresses a gap in the v3’s design, where a single boot drive failure could take a server offline. The upgrade from Micron 7450 MAX to 7500 MAX brings improved endurance (10x better UBER rating) and slightly higher sequential throughput.
Large v4 Deployment Options
Bare Metal Dedicated Server
Deploy a Large v4 as a standalone bare metal server with full root access and IPMI remote management. Every server is single-tenant dedicated hardware with no shared components. Pre-built images are available for Big Data, Virtualization, and High Performance Computing environments, or install a custom OS via IPMI console access. Pricing is fixed monthly with the option to lock rates for up to 5 years. Ramp pricing is available for migrations from other providers, allowing you to avoid paying for two environments simultaneously during the transition.
→ View pricing: openmetal.io/bare-metal-pricing
Hosted Private Cloud
Deploy a three-node Large v4 Hosted Private Cloud cluster running OpenStack and Ceph, production-ready in under 45 seconds. OpenMetal handles Day 2 operations including monitoring, patching, and incident response. No VMware licensing costs, no vSphere fees. Full OpenStack API and Horizon dashboard access. Ceph provides distributed block and object storage across the cluster with no additional licensing.
→ View pricing and configuration: openmetal.io/cloud-deployment-calculator
Both deployment paths are available across OpenMetal’s Tier III data center locations. Fixed monthly pricing applies regardless of utilization, with no per-hour, per-query, or per-GB billing.
Get a Large v4 Quote
Tell us about your infrastructure needs and we’ll provide a custom quote for the Large v4 — as a standalone bare metal server or as part of a Hosted Private Cloud cluster.
- Bare metal: Single-server or multi-server deployments with full root access and IPMI
- Hosted Private Cloud: Three-node OpenStack + Ceph clusters with Day 2 operations included
- Custom configurations: RAM upgrades, additional NVMe drives, TDX enablement
Ramp pricing available for migrations. All deployments include fixed monthly pricing, 99.96%+ network SLA, and DDoS protection.
Product specifications, pricing, and availability may change due to market conditions and other factors. For the most current information, please contact the OpenMetal team directly.