Dual Uplinks with LACP

At OpenMetal, every bare metal server is deployed with dual high-speed 10Gbps uplinks, providing a critical layer of both performance scalability and fault tolerance. To fully leverage these redundant connections, OpenMetal configures these uplinks as a bonded logical interface using Link Aggregation Control Protocol (LACP).

This LACP bonding allows OpenMetal to automatically balance traffic flows across both physical uplinks, effectively doubling the usable bandwidth from 10Gbps to 20Gbps per server. Because the balancing is per flow, any single connection still peaks at 10Gbps; the full 20Gbps is reached when traffic is spread across multiple flows (see the sketch after this list). This is particularly valuable for customers running:

  • Large-scale data transfers within private cloud environments
  • High-throughput backup or replication jobs
  • Data-intensive analytics and machine learning pipelines
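
As a concrete illustration of the multi-flow pattern, the Python sketch below pushes one large transfer as several parallel TCP connections, each a distinct flow that the LACP hash can place on either uplink. The receiver host and port are placeholders for a machine you control, not OpenMetal endpoints; parallel-stream tools such as iperf3 apply the same idea.

    # Minimal sketch: drive several parallel TCP flows so LACP hashing can
    # spread them across both physical uplinks. RECEIVER_HOST/RECEIVER_PORT
    # are placeholders for a host you control that accepts and discards data.
    import socket
    import threading

    RECEIVER_HOST = "203.0.113.10"  # placeholder address (TEST-NET-3)
    RECEIVER_PORT = 5001            # placeholder port
    PARALLEL_FLOWS = 8              # distinct flows -> distinct hash inputs
    CHUNK = b"\x00" * 65536
    CHUNKS_PER_FLOW = 4096          # 256 MiB per flow

    def one_flow() -> None:
        # Each connection gets its own ephemeral source port, so the
        # (IP, port) tuple differs per flow and may hash to either uplink.
        with socket.create_connection((RECEIVER_HOST, RECEIVER_PORT)) as s:
            for _ in range(CHUNKS_PER_FLOW):
                s.sendall(CHUNK)

    threads = [threading.Thread(target=one_flow) for _ in range(PARALLEL_FLOWS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()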

Intelligent Traffic Distribution with LACP Hashing

OpenMetal’s LACP configuration is tuned to distribute traffic intelligently across both uplinks using a hashing algorithm designed for optimal flow balancing. Depending on your network design and traffic patterns, this algorithm can assign flows to links based on (a simplified sketch follows this list):

  • Source and destination IP pairs, ensuring even distribution across flows communicating with multiple external systems.
  • Layer 4 port information (e.g., TCP or UDP port numbers), allowing finer-grained balancing of diverse application traffic.
  • VLAN tags, particularly important for customers deploying multi-tenant OpenStack clouds, where isolating tenant traffic into separate VLANs is a common requirement.
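
As a simplified illustration (not the exact algorithm running on OpenMetal’s switches), the Python sketch below captures the essential property of a layer 3+4 hash: every packet of a given flow maps to the same link, while flows that differ in any hashed field can land on different links.

    # Simplified sketch of a layer 3+4 style LACP hash; illustrative only.
    import ipaddress

    NUM_LINKS = 2  # two bonded uplinks

    def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                  vlan_id: int = 0) -> int:
        # XOR the flow identifiers, then take the result modulo the number
        # of active links. Real hardware hashes differ in detail, but share
        # the key property: one flow always maps to one link.
        key = (int(ipaddress.ip_address(src_ip))
               ^ int(ipaddress.ip_address(dst_ip))
               ^ src_port ^ dst_port ^ vlan_id)
        return key % NUM_LINKS

    # Two flows between the same hosts can still use different links
    # because their ephemeral source ports differ:
    print(pick_link("10.0.0.5", "10.0.0.9", 49152, 443))  # -> 1
    print(pick_link("10.0.0.5", "10.0.0.9", 49153, 443))  # -> 0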

This flexibility allows OpenMetal’s infrastructure to adapt to each customer’s specific workload patterns, ensuring that no single link becomes congested while the other sits idle. At the same time, all of this is presented to the operating system and applications as a single logical interface (commonly known as a bonded interface or LAG interface). From your application’s point of view, the server has one unified network path — even though that path is actively spreading traffic across two distinct physical cables and upstream switch ports.
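
On a Linux server, this single logical interface and its physical members are visible directly in sysfs. Here is a minimal inspection sketch, assuming the bond is named bond0 (the name can vary by image and configuration):

    # Minimal sketch: inspect a Linux bond via sysfs. Assumes the bonded
    # interface is named "bond0"; adjust for your deployment.
    from pathlib import Path

    BOND = Path("/sys/class/net/bond0/bonding")

    mode = (BOND / "mode").read_text().strip()      # e.g. "802.3ad 4" for LACP
    slaves = (BOND / "slaves").read_text().split()  # the physical member NICs

    print(f"bonding mode: {mode}")
    print(f"member links: {slaves}")  # applications only ever see bond0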

Seamless Failover at the Network Edge

One of the greatest operational benefits of OpenMetal’s dual uplink design is automatic failover at the data-link layer. If a network cable is accidentally damaged, a transceiver fails, or an upstream switch port malfunctions, OpenMetal’s LACP configuration automatically removes the failed link from the bond. No human intervention is required — the switch and server automatically negotiate the loss of link, and all traffic instantly shifts to the surviving uplink.

This means your critical services continue to operate uninterrupted, with only minor (if any) performance degradation. Instead of 20Gbps of aggregate capacity, the server temporarily operates at 10Gbps, but connectivity is preserved, so there is no outage and no need for urgent manual troubleshooting.
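
The same failover is observable from inside the server: the Linux bonding driver reports per-member link state in /proc/net/bonding. Below is a minimal monitoring sketch, again assuming the bond is named bond0:

    # Minimal sketch: poll /proc/net/bonding/bond0 and warn when a member
    # link drops out of the bond. Assumes the bond is named "bond0".
    import time

    BOND_STATUS = "/proc/net/bonding/bond0"

    def down_members() -> list[str]:
        current, down = None, []
        with open(BOND_STATUS) as f:
            for line in f:
                if line.startswith("Slave Interface:"):
                    current = line.split(":", 1)[1].strip()
                elif line.startswith("MII Status:") and current:
                    if line.split(":", 1)[1].strip() != "up":
                        down.append(current)
                    current = None
        return down

    while True:
        failed = down_members()
        if failed:
            # LACP has already shifted traffic to the surviving link;
            # this alert is purely for visibility.
            print(f"degraded bond, down members: {failed}")
        time.sleep(5)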

A Building Block of OpenMetal’s High Availability Network

At OpenMetal, we understand that the network is often the weakest link when it comes to infrastructure reliability. That’s why dual uplinks and LACP bonding are just part of our multi-layered HA design. These uplinks are further enhanced by:

  • Redundant network paths to fully independent “A” and “B” switch fabrics
  • Dual power supplies on all switches and core routers
  • Automated monitoring that detects and alerts on link degradation in real time

This fully redundant design ensures no single network component — from a NIC to a top-of-rack switch — can disrupt service availability. It’s all part of OpenMetal’s commitment to providing highly available infrastructure-as-a-service, designed from the ground up to support mission-critical workloads.

OpenMetal Bare Metal Servers with Dual Uplinks

All of OpenMetal’s hardware (except for the XS) is equipped with dual 10Gbps uplinks, for a total usable bandwidth of 20Gbps per server. Below are the OpenMetal servers with dual uplinks and dual boot drives, making them strong building blocks for a highly available infrastructure.

  • XXL v4: 2x Intel Xeon Gold 6530 (64C/128T, 2.1/4.0GHz), 6x 6.4TB NVMe + 2x 960GB boot disks, 2048GB DDR5-4800, 20Gbps private / 2Gbps public bandwidth
  • XL v4: 2x Intel Xeon Gold 6530 (64C/128T, 2.1/4.0GHz), 4x 6.4TB NVMe + 2x 960GB boot disks, 1024GB DDR5-4800, 20Gbps private / 2Gbps public bandwidth
  • Large v4: 2x Intel Xeon Gold 6526Y (32C/64T, 2.8/3.9GHz), 2x 6.4TB NVMe + 2x 480GB boot disks, 512GB DDR5-5200, 20Gbps private / 1Gbps public bandwidth
  • Medium v4: 2x Intel Xeon Silver 4510 (24C/48T, 2.4/4.1GHz), 6.4TB NVMe + 2x 480GB boot disks, 256GB DDR5-4400, 20Gbps private / 500Mbps public bandwidth

