At OpenMetal, every bare metal server is deployed with dual high-speed 10Gbps uplinks, providing a critical layer of both performance scalability and fault tolerance. To fully leverage these redundant connections, OpenMetal configures these uplinks as a bonded logical interface using Link Aggregation Control Protocol (LACP).
This LACP bonding allows OpenMetal to automatically balance traffic flows across both physical uplinks, raising the usable aggregate bandwidth from 10Gbps to 20Gbps per server. Because LACP balances at the flow level, any single flow is still limited to one link's 10Gbps, but many concurrent flows can together approach the full 20Gbps. This is particularly valuable for customers running:
- Large-scale data transfers within private cloud environments
- High-throughput backup or replication jobs
- Data-intensive analytics and machine learning pipelines
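For reference, a bonded LACP interface of this kind on a self-managed Linux host could be declared with a netplan sketch like the one below. The interface names (`eno1`, `eno2`) and parameter values are illustrative assumptions, not OpenMetal's exact configuration; OpenMetal servers are delivered with bonding already set up.

```yaml
# Hypothetical netplan bond definition for dual 10Gbps uplinks.
# eno1/eno2 are placeholder NIC names.
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: 802.3ad              # LACP
        lacp-rate: fast
        transmit-hash-policy: layer3+4
        mii-monitor-interval: 100  # ms between link checks
      dhcp4: true
```

The operating system then sees only `bond0`; the two physical NICs disappear behind the single logical interface.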
Intelligent Traffic Distribution with LACP Hashing
OpenMetal’s LACP configuration is tuned to distribute traffic intelligently across both uplinks using a hashing algorithm designed for optimal flow balancing. Depending on your network design and traffic patterns, this algorithm can distribute traffic flows based on:
- Source and destination IP pairs, ensuring even distribution across flows communicating with multiple external systems.
- Layer 4 port information (e.g., TCP or UDP port numbers), allowing finer-grained balancing of diverse application traffic.
- VLAN tags, which is particularly important for customers deploying multi-tenant OpenStack clouds, where isolating tenant traffic into separate VLANs is a common requirement.
This flexibility allows OpenMetal’s infrastructure to adapt to each customer’s specific workload patterns, ensuring that no single link becomes congested while the other sits idle. At the same time, all of this is presented to the operating system and applications as a single logical interface (commonly known as a bonded interface or LAG interface). From your application’s point of view, the server has one unified network path — even though that path is actively spreading traffic across two distinct physical cables and upstream switch ports.
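The flow-pinning behavior of a layer 3/4 hash can be sketched in a few lines of Python. This is an illustration of the idea, not the kernel's actual algorithm; the interface names and the choice of hash function are assumptions.

```python
# Toy model of layer3+4 flow hashing across a two-link bond:
# every packet of a flow hashes to the same member link, which
# preserves packet ordering within the flow.
import hashlib

LINKS = ["eth0", "eth1"]  # the two bonded 10Gbps uplinks (placeholder names)

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    """Map a flow's IP/port tuple onto one member link."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return LINKS[digest[0] % len(LINKS)]

# The same flow always lands on the same link:
assert pick_link("10.0.0.5", "10.0.0.9", 40000, 443) == \
       pick_link("10.0.0.5", "10.0.0.9", 40000, 443)
```

Because each flow is pinned to one link, ordering is preserved without reassembly overhead; the trade-off is that a single flow tops out at one link's 10Gbps even though the bond offers 20Gbps in aggregate.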
Seamless Failover at the Network Edge
One of the greatest operational benefits of OpenMetal’s dual uplink design is automatic failover at the data-link layer. If a network cable is accidentally damaged, a transceiver fails, or even an entire upstream switch port malfunctions, OpenMetal’s LACP configuration automatically removes the failed link from the bond. No human intervention is required — the switch and server automatically negotiate the loss of link, and all traffic instantly shifts to the surviving uplink.
This means your critical services continue to operate uninterrupted, with only minor (if any) performance degradation. Instead of 20Gbps aggregate capacity, the server temporarily operates with 10Gbps, but connectivity is preserved, ensuring there is no outage, no dropped traffic, and no need for urgent manual troubleshooting.
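Continuing the toy model above, failover falls out of the hashing naturally: when a link leaves the bond, flows are simply hashed over the remaining members. A minimal sketch, with placeholder link and flow names:

```python
# Toy model of LACP failover: flows hash over the set of *active*
# links, so removing a failed link deterministically remaps every
# flow onto the survivor with no manual intervention.
import zlib

def pick_link(flow_id: bytes, active_links: list) -> str:
    return active_links[zlib.crc32(flow_id) % len(active_links)]

links = ["eth0", "eth1"]
flows = [f"flow-{i}".encode() for i in range(8)]
before = {f: pick_link(f, links) for f in flows}   # spread over both links

links = ["eth0"]                                   # eth1 fails, LACP drops it
after = {f: pick_link(f, links) for f in flows}
assert all(link == "eth0" for link in after.values())
```

In the real bond, the MII monitor (or LACP timeout) detects the dead link and shrinks the active set automatically; the application never sees anything but the single logical interface.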
A Building Block of OpenMetal’s High Availability Network
At OpenMetal, we understand that the network is often the weakest link when it comes to infrastructure reliability. That’s why dual uplinks and LACP bonding are just part of our multi-layered HA design. These uplinks are further enhanced by:
- Redundant network paths to fully independent “A” and “B” switch fabrics
- Dual power supplies on all switches and core routers
- Automated monitoring that detects and alerts on link degradation in real time
This fully redundant design ensures no single network component — from a NIC to a top-of-rack switch — can disrupt service availability. It’s all part of OpenMetal’s commitment to providing highly available infrastructure-as-a-service, designed from the ground up to support mission-critical workloads.
OpenMetal Bare Metal Servers with Dual Uplinks
All of OpenMetal’s hardware (except for the XS) is equipped with dual 10Gbps uplinks, for an effective total usable bandwidth of 20Gbps per server. Below are the OpenMetal servers with dual uplinks and dual boot drives, making them the best building blocks for a highly available infrastructure.
| Server | CPU | Cores | Storage | Memory | Private Bandwidth | Public Bandwidth |
|---|---|---|---|---|---|---|
| XXL v4 | 2x Intel Xeon Gold 6530 | 64C/128T 2.1/4.0GHz | 6x 6.4TB NVMe, 2x 960GB Boot Disk | 2048GB DDR5 4800MHz | 20Gbps | 2Gbps |
| XL v4 | 2x Intel Xeon Gold 6530 | 64C/128T | 4x 6.4TB NVMe, 2x 960GB Boot Disk | 1024GB DDR5 4800MHz | 20Gbps | 2Gbps |
| Large v4 | 2x Intel Xeon Gold 6526Y | 32C/64T 2.8/3.9GHz | 2x 6.4TB NVMe, 2x 480GB Boot Disk | 512GB DDR5 5200MHz | 20Gbps | 1Gbps |
| Medium v4 | 2x Intel Xeon Silver 4510 | 24C/48T 2.4/4.1GHz | 6.4TB NVMe | 256GB DDR5 4400MHz | 20Gbps | 500Mbps |