The Storage Large v4 is a different server from the compute-focused Large v4. It pairs high-capacity SATA HDDs with NVMe cache drives in a three-tier storage architecture designed for Ceph OSD nodes, bulk object storage, and archival workloads. The CPU is a dual Xeon Silver 4510, optimized for I/O throughput rather than per-core compute performance, with enough processing power to saturate the storage backplane and handle Ceph rebalancing without bottlenecking. Deploy Storage Large v4 nodes alongside Large v4 compute nodes to build Ceph clusters with hundreds of terabytes of capacity on fixed monthly pricing.
Key Takeaways
- Three-tier storage architecture: Dedicated boot SSDs (RAID 1) + NVMe cache tier (Micron 7500 MAX) + HDD bulk tier (20 TB SATA). Each tier serves a distinct I/O role, eliminating contention between OS operations, hot data, and cold storage.
- 240 TB raw HDD capacity: Twelve 20 TB SATA HDDs provide bulk storage for Ceph OSD nodes, S3-compatible object storage (RADOSGW), backup targets, media archives, and compliance data retention.
- 25.6 TB NVMe cache: Four 6.4 TB Micron 7500 MAX drives serve as Ceph journal/WAL/DB or fast-tier cache, accelerating random I/O against the HDD pool with up to 1.1M random read IOPS per drive.
- Pair with compute nodes: Deploy 3 Large v4 compute nodes + 2 Storage Large v4 nodes for a Ceph cluster with 480 TB raw HDD + 51.2 TB NVMe cache, all on the same 20 Gbps private mesh.
- 95th-percentile egress billing: Same predictable billing model as compute servers, with none of the per-GB object storage transfer charges that AWS S3 imposes.
- HIPAA-eligible storage: OpenMetal is HIPAA compliant at the organizational level. Store PHI on dedicated hardware in HIPAA-compliant facilities in Ashburn and Los Angeles.
Server Configuration at a Glance
| Component | Specification |
|---|---|
| Processor | 2x Intel Xeon Silver 4510 (Emerald Rapids, Intel 7) |
| Total Cores / Threads | 24 cores / 48 threads |
| Base / Max Turbo Frequency | 2.4 GHz / 4.1 GHz |
| Memory | 256 GB DDR4-2933 ECC (16 DIMM slots) |
| Boot Storage | 2x 960 GB SSD in RAID 1 (dedicated OS drives) |
| NVMe Cache Tier | 4x 6.4 TB Micron 7500 MAX NVMe (25.6 TB) |
| HDD Bulk Tier | 12x 20 TB SATA HDD (240 TB raw) |
| Max Drive Bays | 12 (HDD) + 4 (NVMe) + 2 (boot) |
| Private Bandwidth | 20 Gbps (2x 10 Gbps LACP bonded) |
| Public Bandwidth | 2 Gbps |
| Network SLA | 99.96% base (actual >99.99% since 2022) |
| DDoS Protection | Included, up to 10 Gbps per IP |
| Remote Management | Full IPMI access |
| Pricing | Fixed monthly — see openmetal.io/bare-metal-pricing |
[Diagram] Storage Large v4 three-tier architecture: boot SSD RAID 1, Micron 7500 MAX NVMe cache, 20 TB SATA HDD bulk, LACP-bonded private mesh
Ready to Deploy a Storage Large v4?
Tell us about your workload and we’ll help you configure the right deployment — standalone storage nodes or paired with compute servers, in any of our four data center regions.
Intel Xeon Silver 4510
The Storage Large v4 uses dual Xeon Silver 4510 processors rather than the Gold 6526Y found in the compute Large v4. The dual-socket configuration provides 24 cores / 48 threads (12 cores per processor) at 2.4 GHz base / 4.1 GHz turbo, adequate for saturating the storage backplane: handling Ceph OSD operations, managing RAID and drive I/O scheduling, and processing RADOSGW S3 API requests. Storage servers do not need the higher per-core clocks of Gold-class processors because the bottleneck is drive I/O throughput, not CPU compute. The Silver 4510 keeps power and cost proportional to the storage role.
Storage Architecture (Three-Tier)
Tier 1: Boot drives
Two 960 GB SSDs in RAID 1 serve as dedicated OS drives, identical to the compute Large v4. System I/O (logging, package management, Ceph monitor processes) stays on the boot tier, fully isolated from data operations.
Tier 2: NVMe cache (Micron 7500 MAX)
Four 6.4 TB Micron 7500 MAX NVMe drives (25.6 TB total) serve as the fast-tier cache layer. In a Ceph deployment, these drives typically host:
- Ceph BlueStore WAL (Write-Ahead Log): Absorbs random writes before flushing to HDD, converting random I/O into sequential writes.
- Ceph BlueStore DB: Metadata and key-value store for OSD operations.
- Fast-tier cache: Hot data promotion from HDD to NVMe for read-intensive workloads.
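A common Ceph community rule of thumb is to size `block.db` at roughly 1-4% of the backing data device. The sketch below checks this server's published drive counts against that guideline; the one-OSD-per-HDD layout and even split of each NVMe across OSDs are assumptions about a typical deployment, not a prescribed configuration.

```python
# Sketch: check NVMe partition sizing for BlueStore DB/WAL against the
# commonly cited guideline that block.db should be ~1-4% of the OSD's
# data-device capacity. Drive counts and sizes are this server's published
# specs; the layout (1 OSD per HDD, NVMe split evenly) is an assumption.

HDD_OSDS = 12          # one OSD per 20 TB SATA HDD (assumed layout)
HDD_TB = 20.0
NVME_DRIVES = 4
NVME_TB = 6.4

osds_per_nvme = HDD_OSDS // NVME_DRIVES       # 3 OSDs share each NVMe
db_per_osd_tb = NVME_TB / osds_per_nvme       # ~2.13 TB of NVMe per OSD
db_ratio = db_per_osd_tb / HDD_TB             # fraction of the data device

print(f"{osds_per_nvme} OSDs per NVMe, {db_per_osd_tb:.2f} TB DB/WAL each "
      f"({db_ratio:.1%} of a 20 TB OSD)")
```

At roughly 10% of the data device, the NVMe allocation per OSD sits well above the guideline, leaving headroom for hot-tier caching alongside DB/WAL.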
| Metric | Micron 7500 MAX (6400 GB) |
|---|---|
| Sequential Read | 7,000 MB/s |
| Sequential Write | 5,900 MB/s |
| Random Read IOPS | 1,100,000 |
| Random Write IOPS | 400,000 |
| Mixed 70/30 IOPS | 650,000 |
| Read Latency (typical) | 70 µs |
| Write Latency (typical) | 15 µs |
| Endurance (TBW) | 35,040 TB |
| DWPD | 3 |
| QoS | Sub-1ms at 99.9999% (6-nines) for 4KB random read up to QD128 |
Source: Micron 7500 Technical Product Specification, Rev. A, October 2023
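The DWPD and TBW figures in the table are two views of the same endurance rating. A quick sanity check, assuming the 5-year warranty period typical of enterprise NVMe (an assumption, not stated in the table):

```python
# Sanity check: TBW = DWPD x drive capacity x warranty days.
# DWPD and capacity come from the table above; the 5-year warranty
# period is an assumption (typical for enterprise NVMe).

DWPD = 3
CAPACITY_TB = 6.4
WARRANTY_YEARS = 5

tbw = DWPD * CAPACITY_TB * 365 * WARRANTY_YEARS
print(f"Implied endurance: {tbw:,.0f} TB written")  # matches the 35,040 TB spec
```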
Tier 3: HDD bulk storage
Twelve 20 TB SATA HDDs (240 TB raw) provide high-capacity bulk storage for cold and warm data. These drives handle sequential reads, archival writes, and Ceph recovery operations. With 3x Ceph replication, a 3-node storage cluster (720 TB raw) yields approximately 240 TB usable; each node's 240 TB raw contributes roughly 80 TB of usable capacity.
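The replication arithmetic generalizes to any node count and replica setting. A minimal helper, treating the result as an upper bound since real clusters reserve headroom for rebalancing and the near-full ratio:

```python
def ceph_usable_tb(raw_tb_per_node: float, nodes: int, replicas: int = 3) -> float:
    """Usable capacity of a replicated Ceph pool, before overhead.

    Real clusters reserve headroom (e.g. for rebalancing and the
    near-full ratio), so treat this as an upper bound.
    """
    return raw_tb_per_node * nodes / replicas

print(ceph_usable_tb(240, 3))  # 3-node Storage Large v4 cluster -> 240.0
```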
Networking
Dual 10 Gbps NICs (LACP bonded, 20 Gbps aggregate) connect each Storage Large v4 to the private mesh. This is the data path for Ceph OSD replication, recovery backfill, and client I/O from compute nodes. The 20 Gbps link is sized to handle Ceph rebalancing operations (which can saturate bandwidth during recovery events) without starving client I/O. Public bandwidth is 2 Gbps (lower than the compute Large v4’s 4 Gbps) because storage nodes typically serve data over the private mesh to compute nodes, not directly to the public internet.
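To see why link sizing matters for recovery, consider a back-of-envelope estimate of re-replicating a full node's worth of data. The 50% recovery throttle below is an assumed policy to illustrate protecting client I/O, and a real event moves only the data actually stored, so this is a worst-case sketch:

```python
# Back-of-envelope: time to re-replicate one failed node's data over the
# private mesh, assuming the 20 Gbps bond is the bottleneck and recovery
# is throttled to half the link to protect client I/O (assumed policy).
# Worst case: assumes the full 240 TB raw was actually in use.

DATA_TB = 240                 # raw HDD capacity per node
LINK_GBPS = 20
RECOVERY_SHARE = 0.5          # fraction of the link allotted to backfill

recovery_gbps = LINK_GBPS * RECOVERY_SHARE
seconds = DATA_TB * 8_000 / recovery_gbps   # 1 TB = 8,000 Gbit (decimal)
print(f"~{seconds / 3600:.0f} hours to backfill {DATA_TB} TB at {recovery_gbps:.0f} Gbps")
```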
Egress pricing: 95th-percentile billing, not per-GB transfer.
For S3-compatible object storage served externally, public egress is billed on 95th-percentile measurement, not per-GB transfer.
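The mechanics of 95th-percentile billing reward bursty traffic: bandwidth is sampled at regular intervals, the top 5% of samples are discarded, and the bill reflects the highest remaining sample. A minimal sketch, with illustrative numbers rather than a real traffic trace:

```python
import math

def billable_mbps(samples_mbps: list[float]) -> float:
    """Return the 95th-percentile sample: sort, discard the top 5%,
    bill at the highest remaining value."""
    ordered = sorted(samples_mbps)
    k = max(math.ceil(len(ordered) * 0.95) - 1, 0)
    return ordered[k]

# 100 mostly-quiet intervals with a short burst: the burst is free
# because it falls entirely inside the discarded top 5% of samples.
samples = [100.0] * 95 + [9000.0] * 5
print(billable_mbps(samples))  # -> 100.0
```

Under per-GB billing the same burst would be charged in full; under 95th-percentile billing short spikes are effectively free.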
Data Security
Storage Large v4 servers run on dedicated, single-tenant hardware. No shared hypervisors, no noisy neighbors, no multi-tenant storage controllers. Data at rest sits on physically isolated drives accessible only through your server’s OS and IPMI interface. The three-tier storage architecture provides an additional layer of operational isolation: boot drives (RAID 1) are separated from NVMe cache and HDD bulk tiers, preventing OS-level I/O from interfering with data operations.
- Dedicated tenancy: Single-tenant bare metal with no shared storage controllers or hypervisors
- Boot/data isolation: RAID 1 boot drives physically separated from data tiers
- IPMI remote management: Out-of-band access for secure server administration
- AES-NI: Hardware-accelerated encryption for data-at-rest and in-transit workloads
- DDoS protection: Included on all public IPs, up to 10 Gbps per IP
HIPAA and Regulatory Compliance
OpenMetal is HIPAA compliant at the organizational level and offers Business Associate Agreements (BAAs) for customers storing protected health information. Storage Large v4 servers deployed in Ashburn and Los Angeles are hosted in HIPAA-compliant facilities, making them suitable for healthcare data, PHI archival, and compliance-driven retention workloads.
Facility certifications are maintained by the data center operators (not OpenMetal) and vary by location. The Ashburn facility holds SOC1/2 Type II, ISO 27001, ISO 50001, PCI DSS, NIST 800-53 HIGH, and HIPAA certifications. The Los Angeles facility holds SOC1/2, ISO 27001, PCI-DSS, and HIPAA certifications. The Amsterdam facility holds SOC Type 1/2, PCI-DSS, ISO 27001, ISO 50001, and ISO 22301 certifications.
Pair with Compute Nodes
Storage Large v4 servers are designed to complement compute-focused Large v4, XL v4, or XXL v4 nodes in Ceph cluster deployments. Example configurations:
| Cluster Configuration | Compute Capacity | Storage Capacity (Raw) | NVMe Cache |
|---|---|---|---|
| 3x Large v4 compute + 2x Storage Large v4 | 96C/192T, 1.5 TB RAM | 480 TB HDD | 51.2 TB |
| 3x Large v4 compute + 3x Storage Large v4 | 96C/192T, 1.5 TB RAM | 720 TB HDD | 76.8 TB |
| 3x XL v4 compute + 3x Storage Large v4 | 192C/384T, 3 TB RAM | 720 TB HDD | 76.8 TB |
All nodes connect over the same 20 Gbps LACP-bonded private mesh. Compute nodes run VMs and applications; storage nodes provide the Ceph OSD backing. OpenMetal’s onboarding team helps design cluster topology for your capacity and performance requirements.
How the Storage Large v4 Compares to Public Cloud
The Storage Large v4 provides 240 TB raw HDD per node on fixed monthly pricing. Compare the structural cost model against cloud storage services:
| Dimension | OpenMetal Storage Large v4 | AWS S3 Standard | AWS EBS gp3 |
|---|---|---|---|
| Pricing model | Fixed monthly per server | Per-GB/month ($0.023/GB) | Per-GB/month ($0.08/GB) |
| Egress | 95th-percentile billing | Per-GB ($0.09/GB first 10 TB) | Via EC2 egress pricing |
| Tenancy | Dedicated hardware | Shared | Shared |
| IOPS (NVMe cache tier) | 1.1M per drive (4 drives) | N/A (S3 is object, not block) | 3,000 baseline |
| Management | Self-managed or Assisted | Fully managed | Fully managed |
| Replication | Ceph 3x (configurable) | 3x (built-in) | EBS snapshots (manual) |
When OpenMetal wins: Sustained multi-hundred-TB workloads where per-GB storage and egress charges would exceed the fixed monthly server cost. At scale (500 TB+), the fixed-cost model delivers significant savings over per-GB cloud pricing.
When cloud storage wins: Sub-10 TB workloads, deep integration with cloud-native services (Lambda triggers on S3, Athena queries), and scenarios where fully managed operations outweigh the cost premium.
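The break-even point can be estimated directly from the per-GB rates in the table. The fixed monthly server price below is a placeholder, not OpenMetal's actual price; substitute a real quote from openmetal.io/bare-metal-pricing:

```python
# Break-even sketch: per-GB cloud storage pricing vs a fixed monthly server.
# The S3 Standard rate comes from the comparison table above; the
# Storage Large v4 monthly price is a PLACEHOLDER assumption -
# substitute your actual quote from openmetal.io/bare-metal-pricing.

S3_STORAGE_PER_GB = 0.023      # $/GB-month, S3 Standard
STORED_TB = 500
FIXED_MONTHLY = 4000.0         # hypothetical server cost, $/month

s3_monthly = STORED_TB * 1000 * S3_STORAGE_PER_GB   # storage alone, no egress
print(f"S3 storage: ${s3_monthly:,.0f}/mo vs fixed ${FIXED_MONTHLY:,.0f}/mo")
```

Note this counts S3 storage charges only; adding per-GB egress and per-request API charges widens the gap further for sustained workloads.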
Recommended Workloads on the Storage Large v4
Ceph OSD nodes
The primary use case. Each Storage Large v4 contributes 12 HDD OSDs and 4 NVMe cache devices to a Ceph cluster. NVMe drives handle WAL/DB and hot-tier caching, while HDDs provide bulk capacity. The 20 Gbps private mesh handles replication and recovery traffic.
Bulk object storage (S3-compatible)
Run Ceph RADOSGW on Storage Large v4 nodes to provide S3-compatible object storage for applications. 240 TB raw per node supports large-scale media libraries, log aggregation, and dataset hosting. No per-GB storage charges, no per-request API charges.
Backup and archival targets
Use Storage Large v4 nodes as Proxmox Backup Server targets, Veeam repositories, or custom backup endpoints. The HDD tier provides cost-effective capacity for retention-heavy backup policies, while NVMe cache accelerates restore operations. Boot/data isolation protects backup operations from OS-level I/O.
Media streaming and content delivery
Store and serve video, audio, and image assets from HDD bulk storage with NVMe cache acceleration for frequently accessed content. The 20 Gbps private mesh delivers high throughput to frontend compute nodes running transcoding or CDN edge services.
Data lake storage
Aggregate raw datasets from multiple sources for analytics pipelines. The 240 TB per node supports large-scale data lakes for Spark, Presto, or Hive workloads running on companion compute nodes. NVMe cache accelerates metadata operations and hot partition access.
Compliance archival
Store regulated data (healthcare records, financial transaction logs, legal documents) on dedicated hardware with HIPAA-eligible infrastructure. Fixed retention costs on dedicated servers, no per-GB archival charges. OpenMetal is HIPAA compliant at the organizational level and offers BAAs.
Storage Large v4 Deployment Options
Standalone Storage Nodes
Deploy one or more Storage Large v4 servers as dedicated Ceph OSD nodes, backup targets, or S3-compatible object storage endpoints. Each server ships with full root access, IPMI remote management, and fixed monthly pricing with no per-GB storage or transfer charges. Pre-built images are available for rapid provisioning, and price locks extend up to 5 years. Ramp pricing is available for migrations from other storage platforms.
→ View pricing: openmetal.io/bare-metal-pricing
Compute + Storage Clusters
Pair Storage Large v4 nodes with Large v4, XL v4, or XXL v4 compute servers to build Ceph clusters with hundreds of terabytes of capacity. Compute nodes run VMs and applications while storage nodes provide the Ceph OSD backing, all connected over a 20 Gbps LACP-bonded private mesh. OpenMetal’s onboarding team helps design cluster topology for your capacity and performance requirements.
All deployments are available across OpenMetal’s Tier III data center locations in Ashburn, Los Angeles, Amsterdam, and Singapore. Fixed monthly pricing applies regardless of utilization, with no per-hour, per-query, or per-GB billing.
Get a Storage Large v4 Quote
Tell us about your infrastructure needs and we’ll provide a custom quote for the Storage Large v4 — as standalone storage nodes or paired with compute servers in a Ceph cluster.
- Standalone storage: Single or multi-node storage servers for Ceph, backup, or archival
- Compute + storage clusters: Pair with Large v4, XL v4, or XXL v4 compute nodes
- Custom configurations: Drive count adjustments, RAM upgrades, network topology
Ramp pricing available for migrations. All deployments include fixed monthly pricing, 99.96%+ network SLA, and DDoS protection.
Specifications, pricing, and availability are subject to change without notice. The information on this page is provided for general guidance and does not constitute a contractual commitment. Contact OpenMetal for current configuration details and pricing.