This comparison sets the OpenMetal Large v5 bare metal server against AWS i4i family instances, the closest AWS profile for storage-heavy workloads needing persistent NVMe and high I/O. The structural difference is not raw spec parity — it is the tenancy, billing, and storage-persistence models. AWS i4i delivers ephemeral instance NVMe on a shared physical host with per-GB egress billing. The Large v5 delivers persistent local NVMe on dedicated single-tenant hardware, with fixed monthly pricing and public egress billed at the 95th percentile.
Key Takeaways
- Dedicated vs shared tenancy: Large v5 ships as a single-tenant bare metal server with 32 dedicated physical cores. AWS i4i is a shared-host instance with vCPUs that can be subject to noisy-neighbor effects unless you upgrade to a dedicated host (extra cost) or to i4i.metal.
- Fixed monthly vs metered hourly pricing: Large v5 is one fixed monthly rate, lockable up to 5 years. AWS i4i bills hourly with separate egress, IP, and snapshot meters that scale with usage.
- Persistent local NVMe vs ephemeral instance storage: Large v5 NVMe persists across reboots and OS reinstalls. AWS i4i instance storage is ephemeral — it loses contents on instance stop/terminate, requiring EBS at additional cost and latency for persistent state.
- 95th-percentile vs per-GB egress: Large v5 bills public egress on 95th-percentile burst. AWS i4i bills $0.09/GB for the first 10 TB of monthly egress, which adds up quickly for any sustained outbound workload.
- Full IPMI vs SSH/SSM only: Large v5 includes power, console, BIOS, and OS-install control via IPMI. AWS i4i has no equivalent — recovery requires AWS Systems Manager or instance restart.
- HIPAA at the platform vs per-service: OpenMetal is HIPAA-compliant at the organizational level with one BAA covering the dedicated hardware. AWS HIPAA eligibility is per-service, requiring careful selection and configuration of HIPAA-eligible services within an AWS account.
Spec Comparison
| Component | OpenMetal Large v5 | AWS i4i.4xlarge | AWS i4i.metal |
|---|---|---|---|
| CPU | 2x Xeon 6517P (Granite Rapids, Intel 3) | 16 vCPUs (Ice Lake shared) | 128 vCPUs (Ice Lake, full host) |
| Physical Cores | 32 dedicated | Shared, no direct mapping | 64 physical (dual socket) |
| Threads | 64 | 16 (8 physical cores) | 128 |
| RAM | 512 GB DDR5-6400 | 128 GB | 1024 GB |
| Local Storage | 12.8 TB NVMe (Micron 7500 MAX, persistent) | 1x 3,750 GB NVMe (ephemeral) | 8x 3,750 GB NVMe (ephemeral) |
| Tenancy | Single-tenant dedicated bare metal | Shared host (dedicated host optional, extra) | Full physical host |
| Remote Management | Full IPMI (power, console, BIOS, OS install) | SSM / instance restart only | SSM / instance restart only |
| Boot / Data Isolation | 2x 960 GB boot drives in RAID 1, separate from data NVMe | Single drive pool | Single drive pool |
| Storage Persistence | Local NVMe persists across reboots | Ephemeral (lost on stop/terminate) | Ephemeral (lost on stop/terminate) |
| Confidential Computing | Intel TDX (with 1 TB RAM upgrade) + SGX (default) | Limited | Limited |
The Large v5 is selected against the i4i family because i4i is AWS’s storage-optimized line, which is the closest profile to the Large v5’s persistent NVMe focus. The i4i.4xlarge is the closest hourly-cost tier; the i4i.metal is the closest raw-spec match, but it costs significantly more and still loses on tenancy and persistence semantics.
Processor: Dedicated Granite Rapids Cores vs Shared Ice Lake vCPUs
The Large v5’s 32 physical Granite Rapids cores are dedicated to the customer’s workload. There is no hypervisor between the OS and the CPU, no vCPU-to-pCPU translation, and no noisy-neighbor exposure. The Xeon 6517P runs at 3.2 GHz base / 4.2 GHz turbo with 144 MB of L3 cache across both sockets and 3 UPI links at 24 GT/s for cross-socket traffic.
AWS i4i instances are sized in vCPUs, where 1 vCPU is one thread on a shared host’s physical CPU (Ice Lake generation in the i4i family). A 16-vCPU i4i.4xlarge maps to 8 physical cores’ worth of compute share. The shared-host model means: BIOS-level tuning is not available, NUMA topology is not directly addressable, and CPU-cache state can be perturbed by sibling tenants on the same host. The i4i.metal exposes the full host but at significantly higher cost than equivalent OpenMetal bare metal.
For workload-specific acceleration, the Large v5’s 6517P includes Intel AMX (INT8/BF16 matrix), AVX-512 with dual FMA, and full Hyper-Threading — all available without hypervisor passthrough caveats.
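Availability of these extensions can be verified on Linux via the flags line in `/proc/cpuinfo`. A minimal sketch, using the standard Linux kernel flag names for AMX, AVX-512, and Hyper-Threading (the sample string below is an illustrative excerpt, not a real dump):

```python
# Sketch: check ISA extensions against a CPU flags string of the kind
# found in /proc/cpuinfo on Linux. Flag names are the standard Linux
# kernel names for AMX tile/INT8/BF16, AVX-512 foundation, and HT.

WANTED = {"amx_tile", "amx_int8", "amx_bf16", "avx512f", "ht"}

def missing_features(flags_line: str) -> set[str]:
    """Return the wanted CPU features absent from a cpuinfo flags line."""
    present = set(flags_line.split())
    return WANTED - present

# Hypothetical Granite Rapids flags excerpt:
sample = "fpu vme ht sse2 avx2 avx512f avx512dq amx_bf16 amx_tile amx_int8"
print(missing_features(sample))  # → set()  (everything present)
```

On a real server, read the flags from the first `flags:` line of `/proc/cpuinfo` instead of the sample string.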
Memory: 512 GB DDR5-6400 vs Ice Lake DDR4
The Large v5 ships with 512 GB of DDR5-6400, 8 of 16 DIMM slots populated, with a clear in-place upgrade path to 1 TB (which also activates Intel TDX). DDR5-6400 delivers approximately 819 GB/s aggregate memory bandwidth across both sockets.
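The 819 GB/s aggregate figure can be sanity-checked from the channel math, assuming 8 DDR5 channels per socket and a 64-bit (8-byte) bus per channel — an assumption consistent with the article's number, not taken from a spec sheet:

```python
# Back-of-envelope check of the aggregate memory bandwidth figure,
# assuming 8 DDR5-6400 channels per socket with a 64-bit data bus.

def peak_mem_bandwidth_gbs(mt_per_s: int, channels: int, sockets: int,
                           bus_bytes: int = 8) -> float:
    """Theoretical peak memory bandwidth in decimal GB/s."""
    return mt_per_s * bus_bytes * channels * sockets / 1000

print(peak_mem_bandwidth_gbs(6400, channels=8, sockets=2))  # → 819.2
```

This is a theoretical peak; sustained bandwidth in practice lands below it.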
AWS i4i family runs on Ice Lake processors with DDR4 memory. The i4i.4xlarge provides 128 GB; the full i4i.metal provides 1024 GB. There is no in-place RAM upgrade path — changing memory capacity means migrating to a different instance size, which requires planned downtime and a new instance lifecycle.
The OpenMetal upgrade path matters most for in-memory databases (Redis, PostgreSQL shared buffers), VM density workloads (Proxmox, OpenStack on bare metal), and confidential computing migration paths (1 TB activates TDX). On AWS, expanding past the current instance’s memory means rebuilding into a new instance family.
Storage: Persistent NVMe vs Ephemeral Instance Storage
Boot/data isolation
The Large v5 separates boot and data storage into dedicated drive pools: 2x 960 GB boot SSDs in RAID 1 for OS and system I/O, and 2x 6.4 TB Micron 7500 MAX NVMe drives (12.8 TB) for application data. Boot I/O cannot contend with data I/O on the application drives.
AWS i4i instances present a single drive pool. Boot disk activity competes with application I/O on the same NVMe surface.
Persistence
Large v5 NVMe is local persistent storage. Data survives reboots, OS reinstalls, and IPMI-driven power cycles.
AWS i4i instance NVMe is ephemeral. Data is lost when the instance is stopped or terminated. To make state persistent on AWS, customers attach EBS volumes, which adds cost (per-GB-month plus IOPS provisioning) and adds latency (EBS volumes are network-attached, not local).
NVMe performance (Micron 7500 MAX, 6.4 TB)
| Metric | Micron 7500 MAX (on Large v5) | AWS i4i Local NVMe |
|---|---|---|
| Sequential Read | 7,000 MB/s | ~3,500 MB/s |
| Sequential Write | 5,900 MB/s | n/a |
| Random Read IOPS | 1,100,000 | ~600,000 |
| Random Write IOPS | 400,000 | n/a |
| Read Latency (typical) | 70 us | n/a |
| Persistence | Yes (local persistent) | No (ephemeral) |
| Endurance (TBW) | 35,040 TB | n/a |
Source for Large v5 column: Micron 7500 Tech Prod Spec Rev. A 10/2023.
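The 35,040 TB endurance figure can be reconstructed from the drive's duty rating, assuming the 7500 MAX's 3 drive-writes-per-day (DWPD) rating over a 5-year warranty period (both assumed here, though the arithmetic matches the table exactly):

```python
# Cross-check of the endurance (TBW) figure: capacity × DWPD × warranty days.
# Assumes a 3 DWPD rating and a 5-year warranty for the 6.4 TB drive.

def tbw(capacity_tb: float, dwpd: float, warranty_years: int) -> float:
    """Total-bytes-written rating in TB over the warranty period."""
    return capacity_tb * dwpd * warranty_years * 365

print(round(tbw(6.4, dwpd=3, warranty_years=5)))  # → 35040
```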
EBS as the persistence workaround
On AWS, customers needing persistent storage in a storage-heavy workload typically pair i4i with EBS gp3 or io2 volumes. The total cost includes the i4i hourly rate plus EBS per-GB-month plus per-IOPS provisioning, and the persistent path adds network latency relative to local NVMe. The Large v5’s 12.8 TB of persistent local NVMe is included in the base monthly price.
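A rough sketch of that EBS add-on cost, using gp3's us-east-1 list rates at the time of writing ($0.08/GB-month, with 3,000 IOPS and 125 MB/s included, then $0.005 per extra IOPS and $0.04 per extra MB/s per month); rates vary by region and change over time:

```python
# Rough monthly cost of an EBS gp3 volume, the typical persistence
# add-on for i4i. Rates are assumed us-east-1 list prices and may drift.

def gp3_monthly_cost(size_gb: int, iops: int = 3000,
                     throughput_mbs: int = 125) -> float:
    storage = size_gb * 0.08                          # $/GB-month
    extra_iops = max(0, iops - 3000) * 0.005          # above 3,000 free IOPS
    extra_tput = max(0, throughput_mbs - 125) * 0.04  # above 125 MB/s free
    return storage + extra_iops + extra_tput

# Matching the Large v5's 12.8 TB of included NVMe with gp3 capacity alone:
print(round(gp3_monthly_cost(12_800), 2))  # → 1024.0 ($/month, capacity only)
```

Even before provisioning extra IOPS or throughput, matching the included capacity runs on the order of $1,000/month — and the volume is still network-attached, not local.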
Networking: 20 Gbps Private LACP vs Cloud Networking
The Large v5 includes 2x 10 Gbps NICs LACP-bonded for 20 Gbps aggregate private bandwidth, plus 6 Gbps of public bandwidth. East-west traffic between OpenMetal servers in the same deployment is included at no per-GB cost — VLAN-isolated and on dedicated network paths.
AWS i4i.4xlarge has up to 12.5 Gbps of network bandwidth, shared between private (VPC) and public (internet) flows. AZ-to-AZ traffic costs $0.01-0.02/GB depending on direction, and cross-region traffic costs more. Private workloads spread across AZs accumulate inter-AZ charges that don’t exist on OpenMetal’s single-facility model.
Egress: 95th-percentile vs per-GB
The egress cost gap is the single largest structural cost driver between OpenMetal and AWS. OpenMetal bills public egress on 95th-percentile burst measurement — a workload that bursts to 10 Gbps during a deployment window but averages 2 Gbps pays for the 95th-percentile rate, not for every byte transferred. AWS bills $0.09/GB for the first 10 TB of monthly egress, scaling downward with volume but never reaching parity with 95th-percentile billing for sustained transfer.
At 5 TB/month of egress, AWS charges approximately $450/month in transfer alone. At 50 TB/month, that rises to roughly $4,500/month. The Large v5’s 95th-percentile billing is bounded by burst rate, not by total bytes — a sustained low-rate workload pays a flat predictable amount regardless of total volume.
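Both models can be sketched in a few lines. The percentile calculation below uses the conventional burstable-billing method (sort the interval samples, discard the top 5%, bill the highest remaining rate); the per-GB figures follow this article's $0.09/GB first-tier rate:

```python
# Illustration of the two egress billing models described above.

def percentile_95(rate_samples: list[float]) -> float:
    """Billable rate: highest sample after the top 5% are discarded."""
    ordered = sorted(rate_samples)
    idx = int(len(ordered) * 0.95) - 1
    return ordered[idx]

def aws_egress_usd(gb: float, rate_per_gb: float = 0.09) -> float:
    """Flat per-GB egress cost (first 10 TB tier)."""
    return gb * rate_per_gb

# A month of 5-minute samples: mostly 2 Gbps, with brief 10 Gbps bursts.
samples = [2.0] * 95 + [10.0] * 5
print(percentile_95(samples))      # → 2.0  (the bursts are discarded)

print(round(aws_egress_usd(5_000)))   # → 450   (5 TB/month)
print(round(aws_egress_usd(50_000)))  # → 4500  (50 TB/month)
```

The per-GB meter grows linearly with bytes moved; the 95th-percentile bill is pinned to the sustained rate regardless of total volume.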
Security and Confidential Computing
| Capability | OpenMetal Large v5 | AWS i4i |
|---|---|---|
| Intel TDX | Yes (with 1 TB RAM upgrade) | Limited to specific instance types, not i4i |
| Intel SGX | Yes, enabled by default, 128 GB EPC | Not available on i4i |
| AWS Nitro Enclaves | N/A | Available, process-level isolation |
| TME-MK | Yes, all DRAM | n/a |
| Measured boot | Boot Guard | Nitro |
| Physical tenancy | Dedicated single-tenant | Shared host (default) |
The structural difference: OpenMetal places the trust boundary at the silicon (TDX, SGX, dedicated hardware), where AWS Nitro Enclaves place the boundary at a process level inside a shared-host instance. For workloads where the trust requirement is “the infrastructure operator cannot read tenant memory,” TDX on dedicated bare metal is the cleaner answer.
HIPAA and regulatory compliance
OpenMetal is HIPAA compliant at the organizational level and offers a single BAA covering all dedicated hardware. Facility-level certifications (SOC, ISO, PCI-DSS, NIST 800-53) are held by the facility operator and listed per location:
- Ashburn, VA: SOC1/2 Type II, ISO 27001, PCI DSS, NIST 800-53 HIGH, HIPAA (facility-level)
- Los Angeles, CA: SOC1/2, ISO 27001, PCI-DSS, HIPAA (facility-level)
- Amsterdam, NL: SOC Type 1/2, PCI-DSS, ISO 27001, ISO 50001, ISO 22301
- Singapore: BCA Green Mark Platinum [additional certifications pending]
AWS provides HIPAA-eligibility on a per-service basis — not every AWS service is HIPAA-eligible, and a HIPAA-aware AWS architecture must use only the eligible service subset with a service-specific BAA configuration.
Recommended Workloads on the Large v5
When OpenMetal Wins
- Sustained 24/7 workloads: Fixed monthly pricing beats hourly metering once utilization exceeds roughly 30-40% of the month on equivalent on-demand instances.
- Egress-heavy traffic: SaaS APIs, video delivery, telemetry exports, blockchain RPC — any workload pushing meaningful bytes to the public internet benefits from 95th-percentile billing.
- Compliance / isolation requirements: Dedicated tenancy, hardware-isolated confidential computing (TDX/SGX), org-level HIPAA BAA, and physical facility certifications.
- Predictable budgeting: Fixed monthly pricing with optional 5-year price locks. CFO-friendly.
- Persistent local storage: 12.8 TB of local NVMe that survives reboots, no EBS premium for persistence.
- Migration from VMware: Ramp pricing supports parallel-run migration during the transition window.
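The ~30-40% break-even claim for sustained workloads can be illustrated with a toy model; both prices below are hypothetical placeholders, not vendor list prices:

```python
# Toy break-even model with HYPOTHETICAL prices: at what utilization
# does a fixed monthly rate undercut an equivalent on-demand hourly rate?

HOURS_PER_MONTH = 730

def break_even_utilization(fixed_monthly: float,
                           on_demand_hourly: float) -> float:
    """Fraction of the month at which hourly billing matches fixed billing."""
    return fixed_monthly / (on_demand_hourly * HOURS_PER_MONTH)

# Hypothetical: $2,000/month fixed vs an $8/hour on-demand equivalent.
print(round(break_even_utilization(2000, 8.0), 2))  # → 0.34, i.e. ~34%
```

Below the break-even fraction, metered hourly wins; above it, fixed monthly wins — which is why the sustained-24/7 profile favors OpenMetal.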
When AWS Wins
- Scale-to-zero workloads: Lambda, Fargate, and other serverless models charge nothing when idle. OpenMetal's dedicated hardware bills the same fixed rate whether idle or busy.
- Deep AWS-native integration: Workloads tightly coupled to managed services (DynamoDB streams, SageMaker pipelines, EventBridge orchestration, IAM-driven cross-service auth) live more naturally inside AWS than alongside it.
- Global edge presence: CloudFront, Global Accelerator, and Route 53 latency-based routing serve a global edge footprint OpenMetal does not match.
- Sub-$10k/month cloud spend: Below this threshold, AWS’s operational simplicity often outweighs unit-cost differences. Above $20k/month, OpenMetal typically delivers ~50% reduction once compute, storage, and egress are aggregated.
- Bursty, unpredictable demand: Workloads with 10x demand spikes for short periods may not fully amortize dedicated hardware.
The honest framing: AWS is a better fit for cloud-native ephemeral architectures and small-to-mid scale operations. OpenMetal is a better fit for sustained workloads, egress-heavy workloads, regulated workloads, and any architecture where the customer cares about the physical infrastructure their compute runs on.
Cost Model
| Cost Dimension | OpenMetal Large v5 | AWS i4i Equivalent |
|---|---|---|
| Pricing model | Fixed monthly | Hourly (or 1/3-year reserved) |
| Egress | 95th-percentile, included base | Per-GB ($0.09/GB first 10 TB) |
| Private traffic | Included (VLAN, no per-GB) | Inter-AZ at $0.01-0.02/GB |
| IPMI / remote console | Included | Not available |
| Licensing | None (Linux pre-built images included) | AMI subscription costs vary |
| Commitment | Optional 5-year price lock | 1- or 3-year RI for discount |
| Ramp pricing | Available for migrations | Not available |
| DDoS protection | Included (10 Gbps per IP) | AWS Shield Standard included, Shield Advanced extra |
| Support | Standard support included | Basic support free, Business/Enterprise paid |
TCO illustration: 3-node cluster, 12 months, sustained 24/7
| Cost Line | OpenMetal (3x Large v5) | AWS (3x i4i.4xlarge reserved + supporting services) |
|---|---|---|
| Compute | Fixed monthly | 1-year reserved |
| Persistent storage | 12.8 TB local per node included | EBS gp3 add-on |
| Egress at 5 TB/mo | Bounded by 95th-percentile | ~$450/mo per cluster |
| Egress at 50 TB/mo | Bounded by 95th-percentile | ~$4,500/mo per cluster |
| Inter-node traffic | Included (private VLAN) | Inter-AZ at $0.01-0.02/GB |
| Support | Included | Business support ~3-10% of usage |
| DDoS | Included | Shield Advanced $3,000/mo if enabled |
The cost gap widens as egress and inter-node volume grow. At low egress volumes (under 5 TB/mo) the difference is modest; at sustained tens of TB/mo, the difference becomes the dominant line item.
Spend threshold guidance
OpenMetal’s positioning materials cite a $10,000/month threshold below which public cloud’s operational simplicity often outweighs unit-cost savings, and a $20,000/month threshold above which OpenMetal’s fixed pricing typically delivers ~50% cost reduction. Use these as rough qualification anchors, not absolute rules — workload shape matters more than spend level alone.
Ready to Deploy a Large v5?
Tell us about your workload and we’ll help you configure the right deployment — bare metal or Hosted Private Cloud, in any of our four data center regions.
Deployment Options
Bare Metal Dedicated Server
Deploy a Large v5 as a standalone bare metal server with full root access and IPMI remote management. Pricing is fixed monthly with the option to lock rates for up to 5 years. Ramp pricing is available for migrations from AWS or other providers, allowing parallel-run during the transition.
Hosted Private Cloud
Deploy a three-node Large v5 Hosted Private Cloud cluster running OpenStack and Ceph, production-ready in under 45 seconds. OpenMetal handles Day 2 operations including monitoring, patching, and incident response. No VMware licensing, no AWS service surcharges, fixed monthly pricing.
Get a Large v5 Quote
Tell us about your infrastructure needs and we’ll provide a custom quote for the Large v5 — as a standalone bare metal server or as part of a Hosted Private Cloud cluster.
- Bare metal: Single-server or multi-server deployments with full root access and IPMI
- Hosted Private Cloud: Three-node OpenStack + Ceph clusters with Day 2 operations included
- Custom configurations: RAM upgrades, additional NVMe drives, TDX enablement
Ramp pricing available for migrations. All deployments include fixed monthly pricing, 99.96%+ network SLA, and DDoS protection.