Modern infrastructure teams face a critical challenge: balancing high-performance networking requirements with cost predictability. While most organizations focus on compute and storage specifications, network architecture often becomes the hidden bottleneck that constrains workload performance and drives unexpected costs.
OpenMetal addresses this challenge head-on with dual 10 Gbps NICs per server (delivering 20 Gbps total connectivity), free internal traffic, customer-specific VLANs with VXLAN support, and built-in DDoS protection — all included by default without premium pricing. This networking foundation eliminates the performance bottlenecks and surprise bandwidth bills that plague traditional cloud deployments.
High-Performance Private Networking (20 Gbps Per Server) with Built-In DDoS Protection
Each bare metal server includes dual 10 Gbps NICs dedicated to private networking, providing 20 Gbps total private connectivity — a configuration that supports the most demanding east-west workloads without bottlenecks. This high-throughput architecture proves especially valuable for AI training synchronization, database replication, backup operations, Ceph rebalance activities, and cluster data shuffles.
Consider the practical implications: a distributed AI training job that requires frequent gradient synchronization across multiple nodes can fully utilize this bandwidth without degrading performance. Similarly, database clusters performing real-time replication or large-scale backup operations can transfer data at wire speed without competing for limited network resources.
Private network traffic operates as unmetered and free, enabling unrestricted internal transfers to support aggressive RPO/RTO strategies, frequent checkpoints, and large-scale cluster designs without bandwidth cost concerns. This approach removes the economic barriers that typically force teams to architect around network limitations rather than performance requirements.
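To make the bandwidth economics concrete, here is a rough transfer-time sketch. The 90% efficiency derating for protocol and encapsulation overhead is an assumption for illustration, not a measured figure:

```python
# Rough transfer-time math for a 20 Gbps private link. The efficiency
# factor is an assumption covering TCP and encapsulation overhead;
# real throughput varies by protocol and tuning.

def transfer_seconds(gigabytes: float, link_gbps: float = 20.0,
                     efficiency: float = 0.9) -> float:
    """Seconds to move `gigabytes` over a `link_gbps` link,
    derated by `efficiency` for protocol overhead."""
    bits = gigabytes * 8 * 1e9          # GB -> bits (decimal units)
    return bits / (link_gbps * 1e9 * efficiency)

# A 500 GB backup to a Ceph pool over the private network:
print(f"{transfer_seconds(500):.0f} s")   # ~222 s, under 4 minutes
```

Because this traffic is free and unmetered, the only planning question is time, not cost, which is what makes frequent checkpoints and aggressive RPO targets practical.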
OpenMetal Competitive Advantage: Many hyperscalers and bare metal providers either charge for internal traffic or cap private NIC speeds at 1-10 Gbps unless you pay premium fees. OpenMetal delivers 20 Gbps per server, free internal traffic, and built-in DDoS protection by default — removing both performance bottlenecks and surprise bandwidth costs.
VXLAN-Ready Underlay for Scalable Overlay Networks
Network virtualization has become essential for modern multi-tenant and containerized environments. OpenMetal’s VLANs come pre-configured to support VXLAN traffic with no additional underlay changes needed, enabling immediate deployment of software-defined networking solutions.
VXLAN technology supports up to 16 million logical network segments, making it ideal for large-scale Kubernetes deployments, multi-tenant private clouds, or application-specific network isolation. This capability proves particularly valuable when deploying container network interfaces (CNIs) like Calico or Cilium that rely on VXLAN for pod networking.
The technical implementation matters: rather than requiring custom engineering to enable VXLAN at scale, teams can immediately deploy overlay networks knowing the underlying infrastructure supports the necessary encapsulation and forwarding requirements.
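The numbers behind these claims follow directly from the VXLAN header format defined in RFC 7348. A small sketch of the arithmetic:

```python
# Where "16 million segments" comes from: VXLAN's VNI field is 24 bits
# (RFC 7348), versus the 12-bit VLAN ID of classic 802.1Q tagging.
VNI_BITS = 24
VLAN_ID_BITS = 12

vxlan_segments = 2 ** VNI_BITS       # 16,777,216 overlay networks
vlan_segments = 2 ** VLAN_ID_BITS    # 4,096 VLANs (some IDs reserved)

# Encapsulation cost per packet over an IPv4 underlay:
# outer Ethernet (14) + IP (20) + UDP (8) + VXLAN (8) headers.
VXLAN_OVERHEAD = 14 + 20 + 8 + 8     # = 50 bytes
inner_mtu = 9000 - VXLAN_OVERHEAD    # why jumbo-frame underlays help

print(vxlan_segments, VXLAN_OVERHEAD)  # 16777216 50
```

The 50-byte overhead is also why an underlay that already accommodates encapsulated frames matters: CNIs like Calico or Cilium in VXLAN mode just work, without MTU clamping surprises.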
OpenMetal Competitive Advantage: Many providers allow VLANs but require custom engineering to enable VXLAN overlays at scale. OpenMetal ships with VXLAN-ready underlays out of the box — saving days or weeks of networking setup for DevOps teams.
Public Egress with 95th-Percentile Billing & Cluster Bandwidth Aggregation
Predictable egress costs remain a significant challenge in cloud deployments, particularly for workloads with variable traffic patterns. OpenMetal addresses this with a unique billing model that includes generous public egress bandwidth per server, with additional usage billed via 95th-percentile calculation rather than per-GB pricing.
This billing approach smooths out temporary traffic spikes during events like product launches, batch data exports, or large-scale data ingestion operations. Instead of facing unexpected bills during high-traffic periods, teams can plan around consistent, predictable networking costs.
The cluster-wide bandwidth aggregation feature adds another layer of efficiency. Rather than leaving unused bandwidth stranded on individual servers, the combined egress allotment across all cluster members optimizes utilization for workloads with variable per-node traffic patterns.
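A minimal sketch of the 95th-percentile method shows why bursts are forgiven. The 5-minute sampling interval and the traffic figures below are assumptions for illustration; OpenMetal's exact billing parameters may differ:

```python
# Sketch of 95th-percentile ("burstable") billing math. Sampling interval
# and traffic numbers are illustrative assumptions.
import math

def billable_mbps(samples_mbps: list[float], percentile: float = 95.0) -> float:
    """Sort the interval samples and read off the given percentile;
    the top 5% of samples -- the spikes -- are simply discarded."""
    ordered = sorted(samples_mbps)
    idx = math.ceil(percentile / 100 * len(ordered)) - 1
    return ordered[idx]

# One day of 5-minute samples: steady 200 Mbps with a 1-hour 2 Gbps burst.
day = [200.0] * 276 + [2000.0] * 12           # 288 samples = 24 hours
print(billable_mbps(day))                      # 200.0 -- the burst costs nothing

# Cluster-wide aggregation: bill on the summed per-interval traffic, so a
# busy node borrows headroom from idle peers instead of spiking alone.
busy_node = [800.0] * 288
idle_node = [100.0] * 288
print(billable_mbps([a + b for a, b in zip(busy_node, idle_node)]))  # 900.0
```

Contrast this with per-GB billing, where that same one-hour burst would be charged in full.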
OpenMetal Competitive Advantage: Most cloud providers use per-GB egress billing, making unpredictable workloads expensive. OpenMetal’s 95th-percentile approach plus cluster-wide aggregation keeps costs predictable, even for burst-heavy traffic patterns.
Premium Hardware for Bare Metal and Clusters
The networking capabilities build upon enterprise-grade hardware designed for sustained high-performance operation. OpenMetal’s current server lineup includes several configurations optimized for different workload requirements:
XL v4 Servers: Powered by dual Intel Xeon Gold 6530 processors (32 cores, 64 threads each) with 1TB DDR5-5600 RAM and 4×6.4TB Micron 7450 MAX NVMe SSDs. These systems deliver the processing power and storage performance needed for demanding AI training and high-throughput database workloads.
Large v4 Storage Servers: Feature dual Intel Xeon Silver 4510 CPUs with 720TB HDD storage, 76.8TB NVMe flash, and 512GB DDR5 RAM. The combination of massive capacity storage with high-speed NVMe caching makes these ideal for building scalable Ceph clusters.
Medium v4 Servers: Include dual Intel Xeon Silver 4510 processors (12 cores/24 threads each, 24 cores and 48 threads per server) with 256GB DDR5 RAM and Micron 7450 MAX NVMe storage, providing balanced compute and storage for general-purpose workloads.
All servers deploy with dual 10 Gbps uplinks bonded into a single logical interface using the Link Aggregation Control Protocol (LACP), providing both performance scaling and fault tolerance. This redundant network configuration ensures continuous operation even during network maintenance or hardware failures.
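For illustration, an LACP bond of this kind is typically expressed like the following netplan fragment. Interface names, the MTU, and the hash policy are placeholder assumptions; OpenMetal provisions the bond for you:

```yaml
# Illustrative netplan bond definition (hypothetical interface names;
# not OpenMetal's actual shipped configuration).
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: 802.3ad             # LACP
        lacp-rate: fast
        transmit-hash-policy: layer3+4   # spread flows across both links
```

With 802.3ad mode, a single flow still tops out at one link's speed, but many concurrent flows (replication streams, Ceph PG traffic) spread across both links, which is exactly the east-west pattern described above.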
OpenMetal Competitive Advantage: Unlike commodity bare metal hosts that repurpose consumer-grade components, OpenMetal’s infrastructure uses optimized enterprise gear tuned for high availability, low latency, and heavy I/O workloads.
Cloud Transformation: From Bare Metal to Full Private Cloud
OpenMetal provides a unique capability that differentiates it from traditional infrastructure providers: the ability to transform raw bare metal into a fully functional hosted private cloud in under 45 minutes, powered by OpenStack.
This transformation enables teams to scale from dedicated servers to a complete cloud environment — including compute, storage, and networking orchestration — without re-provisioning hardware or facing vendor lock-in. The integration runs deeper than simple co-location: bare metal servers and Hosted Private Cloud instances can share VLANs, enabling hybrid deployments where high-performance workloads run on bare metal while management and control-plane services operate in VMs.
This architectural flexibility proves valuable for organizations that need both raw performance and cloud-native orchestration. Database clusters can run directly on bare metal for maximum I/O performance while monitoring, logging, and management tools operate in the private cloud environment.
OpenMetal Competitive Advantage: Other providers may offer bare metal or private cloud, but not both in an integrated, on-demand, same-VLAN architecture. OpenMetal delivers a unified fabric where bare metal and private cloud resources operate seamlessly together.
Geographic Footprint & Compliance-Ready Zones
Enterprise workloads often require specific geographic placement for latency optimization and regulatory compliance. OpenMetal operates multiple geographically distributed Tier III+ facilities that maintain SOC 2 compliance, ISO certification, and HIPAA readiness — supporting regulated workloads in healthcare, finance, and government sectors.
This geographic diversity enables low-latency access for distributed teams while supporting data sovereignty requirements across different jurisdictions. The compliance certifications ensure that sensitive workloads can operate within the necessary regulatory frameworks without additional compliance overhead.
OpenMetal Competitive Advantage: Many bare metal providers limit options to one or two locations, restricting compliance and latency optimization. OpenMetal offers multiple compliance-ready zones for both performance and regulatory needs.
Cloud Tech Stack & Enterprise-Grade Support
The infrastructure foundation uses proven open-source technologies to avoid proprietary lock-in while maintaining enterprise-grade reliability. The core platform runs on OpenStack for compute, storage, and networking orchestration, while Ceph provides distributed storage capabilities supporting both block and object workloads with high durability and performance.
This technology stack choice matters for long-term operational flexibility. Teams can leverage existing OpenStack expertise, integrate with standard APIs, and avoid the vendor-specific tooling that often accompanies proprietary cloud platforms.
The support model also differs from typical cloud providers. Rather than navigating tiered support structures, customers receive direct access to expert engineers for architecture guidance, performance optimization, and rapid issue resolution.
OpenMetal Competitive Advantage: Many providers offer only ticket-based, tiered support with limited engineering access. OpenMetal connects customers directly with cloud and infrastructure experts, ensuring faster, more accurate resolution of complex issues.
Use Case: High-Throughput AI Training Cluster
To illustrate these networking capabilities in practice, consider deploying a distributed AI training cluster using multiple XL v4 servers. The architecture leverages the 20 Gbps private networking for gradient synchronization while utilizing customer-specific VLANs for traffic isolation.
The training workflow operates as follows: each node processes data batches locally, then synchronizes gradients across the cluster using the high-bandwidth private network. The free internal traffic eliminates concerns about data transfer costs during frequent synchronization cycles, while the 20 Gbps bandwidth prevents network bottlenecks that could slow training convergence.
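A back-of-envelope timing for that synchronization step can be sketched with the standard ring all-reduce volume formula. The gradient size, node count, and 80% link efficiency below are assumptions; real frameworks such as NCCL also overlap communication with compute:

```python
# Back-of-envelope all-reduce timing (assumed model size and efficiency;
# real training stacks overlap this communication with compute).

def ring_allreduce_seconds(param_gb: float, nodes: int,
                           link_gbps: float = 20.0,
                           efficiency: float = 0.8) -> float:
    """Ring all-reduce moves ~2*(N-1)/N times the gradient size
    through each node's link per synchronization step."""
    volume_bits = param_gb * 8e9 * 2 * (nodes - 1) / nodes
    return volume_bits / (link_gbps * 1e9 * efficiency)

# Syncing 2 GB of gradients across a 4-node XL v4 cluster per step:
print(f"{ring_allreduce_seconds(2.0, 4):.2f} s per sync")  # ~1.5 s
```

At 1 Gbps the same step would take roughly 20x longer, which is the difference between communication hiding behind compute and communication dominating the training loop.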
VXLAN support enables logical separation between different training jobs or research teams sharing the same physical infrastructure. Each training experiment can operate within its own network segment while still utilizing the underlying high-performance networking fabric.
The 95th-percentile billing model accommodates the variable egress patterns typical of ML workflows — high bandwidth during model checkpointing and result publishing, with lower utilization during pure training phases.
Use Case: Write-Heavy Database Cluster
For database deployments requiring high availability and aggressive backup strategies, the networking architecture supports real-time replication without bandwidth constraints. Consider a PostgreSQL cluster with multiple read replicas and continuous backup to object storage.
The primary database synchronizes with replicas using the 20 Gbps private network, ensuring low-latency replication that maintains data consistency across the cluster. Backup operations leverage the free internal traffic to transfer large datasets to the Ceph storage cluster without incurring additional costs.
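As a sketch, the primary-side PostgreSQL settings for such a topology might look like this. Replica names and the archive path are hypothetical placeholders, and the values are examples rather than tuned recommendations:

```ini
# Illustrative postgresql.conf fragment for streaming replication over
# the private VLAN (placeholder names and paths, not tuned values).
wal_level = replica                  # enough WAL detail for streaming replicas
max_wal_senders = 10                 # one slot per replica plus headroom
synchronous_standby_names = 'ANY 1 (replica1, replica2)'  # quorum commit
archive_mode = on
archive_command = 'cp %p /mnt/ceph-backups/wal/%f'  # WAL archiving to Ceph
```

With quorum commit, each transaction waits for at least one replica acknowledgment, which is where low-latency 20 Gbps replication links directly reduce commit latency.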
VLAN isolation separates database traffic from application traffic, preventing interference between replication, backup, and client operations. This network segmentation also supports compliance requirements by isolating sensitive database communications.
The cluster bandwidth aggregation feature optimizes egress costs when publishing database exports or sharing anonymized datasets with external partners, as the combined bandwidth allocation across all cluster members reduces the likelihood of exceeding included allowances.
Taking Action: Your Next Steps
OpenMetal’s private networking architecture addresses the hidden performance and cost challenges that constrain modern infrastructure deployments. The combination of 20 Gbps private networking, free internal traffic, VXLAN-ready underlays, and predictable egress billing creates a foundation that supports both current workload requirements and future scaling needs.
The competitive advantages — included high-bandwidth networking, pre-configured VXLAN support, and unique billing models — eliminate the typical tradeoffs between performance and cost predictability. Teams can architect for performance requirements rather than bandwidth limitations, while maintaining clear visibility into networking costs.
For CTOs, infrastructure architects, and DevOps engineers evaluating private cloud or bare metal deployments, these networking capabilities represent a fundamental shift from traditional cloud economics. Rather than managing around network constraints, teams can focus on optimizing application performance and user experience.
Ready to find out more? Our team is standing by.