In this article

  • Why Network Architecture Matters for Private Cloud Infrastructure
  • OpenMetal’s Physical Network Foundation: Dedicated VLANs Per Customer
  • Understanding VLANs: The Physical Isolation Layer
  • How VXLANs Solve the VLAN Scalability Problem
  • OpenMetal’s Hardware Configuration: Dual 10 Gbps Bonding
  • Creating Virtual Private Clouds with OpenStack Projects
  • Common Use Cases for OpenMetal’s Network Architecture
  • Network Management and Configuration in Practice
  • Security Implications of Network Architecture Choices
  • Performance Considerations and Network Optimization
  • Wrapping Up: Building Secure, Scalable Network Infrastructure
  • FAQs

Understanding network architecture is foundational for deploying production OpenStack private clouds. This guide walks you through how VLANs provide physical isolation, how VXLANs enable massive overlay scalability, and how OpenMetal’s hardware design eliminates common multi-tenant security risks while giving you complete control over network segmentation.

Why Network Architecture Matters for Private Cloud Infrastructure

When you’re building production workloads on private cloud infrastructure, network design directly impacts security posture, performance characteristics, and compliance readiness. Poor network architecture introduces latency, creates security vulnerabilities, and limits your ability to scale tenant workloads independently.

OpenStack Neutron provides the networking layer for OpenStack clouds, handling everything from basic connectivity to advanced services like load balancing and VPN termination. According to the official OpenStack Networking Guide, Neutron supports multiple networking technologies and plugins, giving cloud operators flexibility in how they architect their networks.

The challenge for network architects evaluating private cloud providers comes down to understanding how their physical network isolation works, how virtual networks scale within that physical infrastructure, and where security boundaries actually exist in the stack.

OpenMetal’s Physical Network Foundation: Dedicated VLANs Per Customer

OpenMetal takes a different approach to multi-tenancy than most cloud providers. Every customer receives dedicated Layer 2 VLANs that terminate exclusively on their assigned hardware. This means your broadcast domain is completely isolated from other customers, with separate ARP tables and no shared VLAN infrastructure.

This design eliminates entire classes of security concerns. Cross-tenant broadcast storms become impossible because broadcast traffic never crosses customer boundaries. ARP spoofing attacks can’t reach beyond your dedicated VLANs. Network troubleshooting becomes simpler because you’re working within a defined, isolated network segment rather than sharing infrastructure with unknown neighbors.

Physical crossover to the data center occurs only at the switch level for internet connectivity and IPMI traffic. Your customer VLANs never touch other customer infrastructure. OpenMetal’s switches come preconfigured to support VXLAN traffic, which we’ll discuss shortly, but the underlying physical isolation remains constant.

For compliance-focused organizations, this architecture directly supports HIPAA, SOC 2, ISO 27001, and PCI-DSS requirements. You can demonstrate clear network isolation during audits because the separation exists at the physical layer, not just as a logical construct managed by hypervisor software.

Understanding VLANs: The Physical Isolation Layer

VLANs, or Virtual Local Area Networks, create logically isolated networks within switch-based infrastructure. VLANs allow you to divide network traffic based on functionality, project teams, or security groups, reducing broadcast traffic and improving network security.

There are two primary VLAN types. Tagged VLANs use the IEEE 802.1Q standard to tag Ethernet frames, allowing VLANs to span multiple switches. Port-based VLANs assign specific switch ports to specific VLANs without adding tags to the Ethernet frames themselves.

Trunk ports on switches carry multiple tagged VLANs simultaneously. Access ports carry traffic for a single VLAN only. OpenMetal’s infrastructure uses trunk ports configured on every switchport connected to customer nodes, which means you can layer additional VLANs on top of existing bond interfaces without requiring physical changes to your infrastructure.
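
For example, if you want to carve out an additional network segment, you can add an 802.1Q sub-interface on top of the existing bond without touching any physical wiring. The sketch below is a minimal illustration using standard Linux iproute2 commands driven from Python; the VLAN ID 100 and the 10.100.0.11/24 address are placeholders, not values assigned by OpenMetal.

```python
import subprocess

def run(cmd):
    """Run a command and raise if it fails."""
    subprocess.run(cmd, check=True)

# Assumptions: bond0 already exists, and VLAN ID 100 is one of the tagged
# VLANs that the upstream trunk port is configured to carry.
VLAN_ID = 100
PARENT = "bond0"
VLAN_IF = f"{PARENT}.{VLAN_ID}"

# Create an 802.1Q sub-interface on top of the existing bond.
run(["ip", "link", "add", "link", PARENT, "name", VLAN_IF,
     "type", "vlan", "id", str(VLAN_ID)])

# Assign an address and bring the interface up (placeholder addressing).
run(["ip", "addr", "add", "10.100.0.11/24", "dev", VLAN_IF])
run(["ip", "link", "set", VLAN_IF, "up"])
```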

Traditional VLAN implementations face a hard limit: the 12-bit VLAN ID field in 802.1Q headers restricts you to 4,094 unique VLAN identifiers. For large-scale cloud environments supporting hundreds or thousands of tenant networks, this limitation becomes a design constraint that requires additional networking layers to overcome.

How VXLANs Solve the VLAN Scalability Problem

VXLAN, or Virtual Extensible LAN, extends Layer 2 networks over Layer 3 infrastructure using UDP encapsulation. Unlike traditional VLANs, VXLANs use a 24-bit VXLAN Network Identifier (VNI), which supports up to 16 million isolated virtual networks. This scalability makes VXLAN the standard choice for modern cloud infrastructure.
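
The arithmetic behind those two limits is straightforward:

```python
# 802.1Q reserves VLAN IDs 0 and 4095, leaving 4,094 usable identifiers.
vlan_ids = 2**12 - 2          # 4,094
# The VXLAN VNI field is 24 bits wide.
vxlan_vnis = 2**24            # 16,777,216

print(f"Usable 802.1Q VLAN IDs: {vlan_ids:,}")
print(f"VXLAN VNIs:             {vxlan_vnis:,}")
print(f"Scale factor:           ~{vxlan_vnis // vlan_ids:,}x")
```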

VXLANs tunnel Layer 2 traffic over IP networks. Virtual machines running on different physical hosts can communicate over a VXLAN tunnel, even if those hosts are in different subnets or different data centers. From the VM’s perspective, other VMs in the same VXLAN exist within the same Layer 2 domain.

VXLAN Tunnel Endpoints (VTEPs) handle the encapsulation and decapsulation of packets. A VTEP can be a physical network device like a router or switch, or it can be a virtual switch deployed on a server. Open vSwitch, which OpenStack commonly uses, can function as a VTEP. A VTEP encapsulates each Ethernet frame into a VXLAN packet and sends it across the IP network to the destination VTEP, which decapsulates the packet and forwards the original frame to its destination.
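
As a rough illustration of what a VTEP does (not how Open vSwitch implements it internally), the following Scapy sketch wraps a tenant frame in VXLAN, UDP, and IP headers addressed to a remote tunnel endpoint. All addresses and the VNI are placeholders.

```python
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# Original Layer 2 frame as the VM sends it (placeholder addresses).
inner = (
    Ether(src="fa:16:3e:00:00:01", dst="fa:16:3e:00:00:02")
    / IP(src="192.168.10.5", dst="192.168.10.6")
    / UDP(sport=40000, dport=8080)
    / b"application payload"
)

# What a VTEP does conceptually: wrap the frame in VXLAN/UDP/IP headers
# addressed to the remote VTEP. 4789 is the IANA-assigned VXLAN port.
outer = (
    Ether()
    / IP(src="10.0.0.11", dst="10.0.0.12")   # hypervisor tunnel IPs
    / UDP(dport=4789)
    / VXLAN(vni=5001)                        # placeholder VNI
    / inner
)

# The size difference is the encapsulation overhead (~50 bytes).
print(len(outer) - len(inner))
```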

OpenStack Neutron implements VXLAN overlays through the ML2 plugin, using the VXLAN type driver (typically the default for tenant networks), the openvswitch mechanism driver, and l2population for more efficient ARP handling. This configuration allows tenant networks to span multiple data centers without requiring any reconfiguration of the physical underlay.

Within OpenMetal’s architecture, VXLANs operate inside your dedicated customer-specific VLANs. This means you get the massive scalability of VXLAN overlays while maintaining the physical isolation and security benefits of dedicated Layer 2 VLANs. You’re not sharing VXLAN infrastructure with other customers because you’re not sharing the underlying VLANs.

OpenMetal’s Hardware Configuration: Dual 10 Gbps Bonding

Every OpenMetal Private Cloud Core consists of three hyperconverged servers. Each server includes dual 10 Gbps NICs providing 20 Gbps aggregate bandwidth for private networking. This bandwidth is unmetered for internal traffic, which matters significantly when you’re running workloads like Ceph storage replication, AI training with large datasets, or backup operations.

All bare metal nodes come preconfigured with bonded network interfaces using LACP (802.3ad). This creates a bond0 interface combining two physical NICs, providing both redundancy and additional bandwidth.
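
If you want to confirm from a node that the bond is actually running LACP, one option is to read the kernel’s bonding status file. The sketch below assumes a Linux host with the bonding driver loaded and a bond named bond0:

```python
from pathlib import Path

# The kernel exposes bond state here when the bonding driver is loaded.
BOND_STATE = Path("/proc/net/bonding/bond0")

text = BOND_STATE.read_text()

# "IEEE 802.3ad Dynamic link aggregation" indicates LACP is active;
# each slave NIC should report an MII status of "up".
for line in text.splitlines():
    if line.startswith(("Bonding Mode:", "Slave Interface:", "MII Status:")):
        print(line.strip())
```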

The OpenStack-Ansible network architecture documentation shows how production OpenStack deployments segment traffic using VLANs across multiple network interfaces or bonds. Common networks include the management network (for infrastructure and service communication), the overlay network (for VXLAN tunneled traffic), and the storage network (for Ceph and other storage traffic).

OpenMetal’s infrastructure follows this pattern. Within your deployment, you can segment workloads into separate VLANs for different traffic types. Storage replication traffic for Ceph gets its own VLAN. AI training workloads that need high-bandwidth, low-latency communication can use dedicated VLANs. Management traffic, backup traffic, and production application traffic can each operate on separate network segments.

Public connectivity operates at 1-2 Gbps depending on your configuration. OpenMetal uses 95th percentile billing for public bandwidth, with included transfer ranging from 46TB to 925TB depending on your tier. Overages are billed at $375 per 1 Gbps. The 95th percentile billing method means off-peak traffic below the 95th percentile is effectively free, which benefits workloads with variable bandwidth patterns.
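
If you haven’t worked with 95th percentile billing before, the mechanics are simple: collect regular bandwidth samples over the month, discard the top 5%, and bill the highest remaining sample. The sketch below uses randomly generated placeholder samples purely to illustrate the calculation; actual sample intervals and counts are determined by the provider’s measurement system.

```python
import random

# Hypothetical 5-minute samples (Mbps) for a 30-day month: mostly quiet,
# with occasional bursts. Real billing uses the provider's measured samples.
random.seed(0)
samples = [random.gauss(300, 50) for _ in range(8640)]
samples += [random.gauss(1800, 200) for _ in range(200)]  # short bursts

# 95th percentile billing: drop the top 5% of samples and bill the highest
# remaining value, so short bursts above that line cost nothing extra.
ordered = sorted(samples)
index = int(len(ordered) * 0.95) - 1
billable_mbps = ordered[index]

print(f"Peak sample:      {max(ordered):,.0f} Mbps")
print(f"95th percentile:  {billable_mbps:,.0f} Mbps (billable rate)")
```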

Creating Virtual Private Clouds with OpenStack Projects

OpenStack Projects function as Virtual Private Clouds, providing completely separate network space for different applications, environments, or customers. Each Project gets its own virtual networking stack through Neutron, including virtual routers, virtual switches, and independent security policies.

Private subnets within Projects use high-speed VXLANs operating inside your dedicated VLANs. You manage these networks through OpenStack APIs or the Horizon dashboard. Network management capabilities include security groups (instance-level firewall rules), floating IPs (public IP addresses attached to private instances), L3/NAT forwarding, load balancers, and VPNaaS (VPN as a Service).

Security groups in OpenStack function as virtual firewalls controlling inbound and outbound network traffic at the instance level. You define rules specifying which protocols, ports, and source/destination addresses can reach your instances. Unlike traditional network firewalls that operate at network boundaries, security groups apply directly to individual instances, providing granular control over network access.
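
Here is a minimal sketch of defining such rules with the openstacksdk Python client. It assumes a clouds.yaml entry named openmetal; the group name, ports, and the 10.20.0.0/24 management prefix are placeholders.

```python
import openstack

# Assumes a clouds.yaml entry named "openmetal" with valid credentials.
conn = openstack.connect(cloud="openmetal")

# Instance-level firewall: allow HTTPS from anywhere, SSH only from a
# management subnet (addresses are placeholders).
sg = conn.network.create_security_group(
    name="web-tier", description="Web tier ingress rules"
)

conn.network.create_security_group_rule(
    security_group_id=sg.id, direction="ingress",
    protocol="tcp", port_range_min=443, port_range_max=443,
    remote_ip_prefix="0.0.0.0/0",
)
conn.network.create_security_group_rule(
    security_group_id=sg.id, direction="ingress",
    protocol="tcp", port_range_min=22, port_range_max=22,
    remote_ip_prefix="10.20.0.0/24",
)
```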

This architecture gives you flexibility in how you structure your infrastructure. You might create separate Projects for development, staging, and production environments. Each Project maintains network isolation while sharing the underlying physical infrastructure. Within each Project, you can build complex network topologies with multiple subnets, routing between them, and controlled access to external networks.

Common Use Cases for OpenMetal’s Network Architecture

OpenMetal’s network design supports a wide range of demanding workloads. The Private Cloud Core deploys in approximately 45 seconds with OpenStack and Ceph, and you get full root access for Neutron configuration. This means you can customize network behavior to match your specific requirements rather than working within provider-imposed constraints.

SaaS platforms benefit from the ability to create isolated tenant networks with guaranteed performance characteristics. Blockchain validators need low-latency, high-bandwidth connectivity between nodes, which the dual 10 Gbps private networking provides. E-commerce platforms can segment different application tiers onto separate networks for improved security and performance.

CI/CD pipelines generate significant internal network traffic between build servers, artifact repositories, and deployment targets. The unmetered 20 Gbps private networking means you don’t need to worry about bandwidth costs for internal automation workflows.

AI and machine learning workloads often move large datasets between storage and compute resources. The high-speed internal networking combined with Ceph’s distributed storage architecture allows you to build data pipelines without network bandwidth becoming the bottleneck.

Confidential computing workloads requiring strict data isolation benefit from the dedicated VLAN architecture. You can demonstrate to auditors exactly how network isolation works because the separation exists at the physical layer.

Big data processing frameworks like Hadoop or Spark depend on high-bandwidth, low-latency networking between cluster nodes. The dual 10 Gbps bonding provides the network performance these distributed systems need to operate at scale.

Network Management and Configuration in Practice

OpenStack Neutron provides multiple interfaces for network management. The Horizon dashboard offers a web-based interface for creating networks, subnets, routers, and security groups. The OpenStack CLI tools provide scriptable access to the same functionality, which matters for infrastructure-as-code workflows and automation.

Network configuration in OpenStack follows a logical hierarchy. First, you create networks, which are Layer 2 broadcast domains. Within networks, you define subnets specifying IP address ranges and gateway addresses. Routers connect subnets together and provide connectivity to external networks. Security groups and network policies control traffic flow between instances and between instances and external networks.
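
That hierarchy maps directly onto API calls. The sketch below uses the openstacksdk Python client and assumes a clouds.yaml entry named openmetal plus an existing external network named External; all other names and the CIDR are placeholders.

```python
import openstack

conn = openstack.connect(cloud="openmetal")   # assumed clouds.yaml entry

# 1. Network: a Layer 2 broadcast domain (a VXLAN tenant network by default).
net = conn.network.create_network(name="app-net")

# 2. Subnet: IP addressing and gateway inside that network.
subnet = conn.network.create_subnet(
    network_id=net.id, name="app-subnet",
    ip_version=4, cidr="192.168.50.0/24", gateway_ip="192.168.50.1",
)

# 3. Router: connects the subnet to other subnets and to the external network.
external = conn.network.find_network("External", ignore_missing=False)
router = conn.network.create_router(
    name="app-router",
    external_gateway_info={"network_id": external.id},
)
conn.network.add_interface_to_router(router, subnet_id=subnet.id)
```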

For advanced networking scenarios, Neutron supports features like VPN as a Service for creating site-to-site VPN connections, Load Balancing as a Service for distributing traffic across multiple instances, and Firewall as a Service for network-level firewall rules.

OpenMetal provides root-level access to your infrastructure, which means you can configure these Neutron features directly. You’re not limited to a provider-managed networking interface that restricts what you can configure. If you need to adjust MTU sizes, configure custom routing policies, or implement advanced security rules, you have the access level required to make those changes.

The OpenStack-Ansible documentation provides detailed examples of network interface configurations for different deployment scenarios. These configurations show how to set up bonded interfaces, configure VLANs on bonds, and create the network bridges OpenStack requires for different traffic types.

Security Implications of Network Architecture Choices

Network architecture decisions have direct security implications. Shared VLAN infrastructure between customers creates potential attack vectors. If an attacker compromises one customer’s infrastructure, they gain visibility into network traffic for other customers sharing the same VLANs. ARP spoofing, MAC flooding, and VLAN hopping attacks all become possibilities in shared VLAN environments.

OpenMetal’s dedicated VLAN architecture eliminates these shared infrastructure attack vectors. Because your VLANs terminate exclusively on your hardware, an attacker would need to compromise either your specific hardware or the data center’s core switching infrastructure to gain visibility into your traffic. The latter represents a much higher barrier to entry and a much more detectable attack pattern.

Within your infrastructure, network segmentation provides defense in depth. By placing different application tiers, different environments, or different security zones on separate VLANs, you limit the blast radius if an attacker compromises one system. They gain access to systems on the same network segment, but not to systems on other segments unless they can compromise additional systems or network devices.

Security groups provide instance-level firewall protection, but network segmentation via VLANs provides an additional security layer at the network level. Using both together creates overlapping security controls that make infrastructure more resilient to attack.

For organizations dealing with sensitive data, the combination of physical VLAN isolation and logical VXLAN segmentation provides a strong foundation for meeting regulatory requirements. You can demonstrate to auditors that customer data travels across physically isolated network infrastructure, which satisfies many compliance framework requirements around network security.

Performance Considerations and Network Optimization

Network performance in OpenStack private clouds depends on multiple factors. Hardware capabilities matter: 10 Gbps NICs provide better performance than 1 Gbps NICs. Network topology matters: the number of hops between source and destination affects latency. Configuration matters: MTU sizes, queue depths, and driver settings all impact throughput and latency characteristics.

The OpenStack-Ansible documentation discusses MTU considerations for storage networks. Larger MTU sizes (jumbo frames) reduce CPU overhead and improve throughput for large data transfers. Setting MTU to 9000 bytes instead of the standard 1500 bytes can significantly improve performance for storage traffic.

The documentation emphasizes that MTU must be set consistently across the entire network path. This includes container interfaces, underlying bridges, physical NICs, and all connected network equipment like switches, routers, and storage devices. Inconsistent MTU settings cause fragmentation or dropped packets, which severely impacts performance.
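
One practical way to verify that jumbo frames survive the full path is a don’t-fragment ping sized to the jumbo MTU minus the IPv4 and ICMP headers. The sketch below assumes Linux iputils ping and uses a placeholder destination address:

```python
import subprocess

# 9000-byte MTU minus 20 bytes (IPv4 header) and 8 bytes (ICMP header).
PAYLOAD = 9000 - 20 - 8   # 8972

# -M do sets the don't-fragment bit, so the ping only succeeds if every
# hop on the path accepts the full jumbo frame.
result = subprocess.run(
    ["ping", "-c", "3", "-M", "do", "-s", str(PAYLOAD), "10.0.0.12"],
    capture_output=True, text=True,
)

if result.returncode == 0:
    print("Jumbo frames work end to end on this path.")
else:
    print("Fragmentation needed somewhere on the path:")
    print(result.stdout or result.stderr)
```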

OpenMetal’s dual 10 Gbps private networking provides high bandwidth for internal traffic, but how you design your network topology determines whether you can actually use that bandwidth. Placing compute nodes and storage nodes on the same high-speed network segment reduces latency and maximizes throughput. Routing storage traffic through multiple network hops or across lower-bandwidth links creates bottlenecks that prevent you from achieving the performance your hardware can deliver.

VXLAN encapsulation introduces some overhead compared to native VLAN traffic. The UDP encapsulation adds headers to each packet, which consumes bandwidth and requires CPU cycles for encapsulation and decapsulation. For most workloads, this overhead is negligible compared to the scalability benefits VXLAN provides. For extremely latency-sensitive or bandwidth-intensive workloads, you might choose to use provider networks that bypass VXLAN encapsulation and operate directly on VLANs.
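
A back-of-the-envelope view of that overhead, assuming an IPv4 underlay with no outer VLAN tag:

```python
# Bytes of the underlay IP MTU consumed by VXLAN encapsulation:
outer_ip, outer_udp, vxlan_hdr, inner_eth = 20, 8, 8, 14
overhead = outer_ip + outer_udp + vxlan_hdr + inner_eth   # 50 bytes

for underlay_mtu in (1500, 9000):
    print(f"Underlay MTU {underlay_mtu}: "
          f"max VXLAN tenant-network MTU {underlay_mtu - overhead}")
# -> 1450 and 8950, which is why Neutron advertises a smaller MTU to
#    instances on VXLAN networks than on flat/VLAN provider networks.
```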

Wrapping Up: Building Secure, Scalable Network Infrastructure

The network architecture decisions you make when deploying private cloud infrastructure have long-term implications for security, performance, and operational complexity. Understanding how VLANs provide physical isolation, how VXLANs enable overlay scalability, and how these technologies work together gives you the foundation for building production-grade cloud infrastructure.

OpenMetal’s approach of providing dedicated Layer 2 VLANs per customer eliminates entire classes of multi-tenant security risks while giving you the flexibility to segment your workloads however your architecture requires. The dual 10 Gbps bonded interfaces provide the bandwidth needed for demanding workloads, and the unmetered internal traffic means you can design your network topology based on what works best for your applications rather than optimizing for bandwidth costs.

OpenStack Neutron’s support for VXLANs means you can create thousands of tenant networks within your infrastructure without hitting the 4,094 VLAN ID limit. The ML2 plugin architecture gives you flexibility in how you implement networking, and the full root access OpenMetal provides means you can customize Neutron configuration to match your specific requirements.

Whether you’re building a SaaS platform that needs strong tenant isolation, running blockchain validators that need high-bandwidth connectivity, or deploying AI workloads that move large datasets between storage and compute, understanding the underlying network architecture helps you design systems that are secure, performant, and compliant with regulatory requirements.

Ready to deploy your private cloud with enterprise-grade network architecture? Check out the OpenMetal Cloud Deployment Calculator to explore configuration options, or dive into our documentation on connecting clusters and configuring external networks.

FAQs

What’s the difference between VLANs and VXLANs?

VLANs operate at Layer 2 and are limited to 4,094 unique identifiers, which restricts scalability in large cloud environments. VXLANs tunnel Layer 2 traffic over Layer 3 networks using UDP encapsulation and support up to 16 million unique network identifiers through their 24-bit VNI field. VXLANs can span multiple data centers without physical network reconfiguration, while traditional VLANs are constrained by physical network topology.

How does dedicated VLAN infrastructure improve security?

Dedicated VLANs provide complete broadcast domain isolation between customers. Your ARP tables, broadcast traffic, and Layer 2 network segments never touch other customer infrastructure. This eliminates cross-tenant broadcast storms, prevents ARP spoofing attacks from reaching beyond your infrastructure, and provides demonstrable network isolation for compliance audits. Physical crossover occurs only at the switch level for internet and IPMI traffic.

What bandwidth does OpenMetal provide for private networking?

Each server in an OpenMetal Private Cloud Core includes dual 10 Gbps NICs providing 20 Gbps aggregate bandwidth for private networking. This bandwidth is unmetered for internal traffic between your servers. Public connectivity operates at 1-2 Gbps depending on your configuration, with 95th percentile billing for public bandwidth usage.

Can I configure custom network topologies in OpenStack Neutron?

OpenMetal provides full root access to your infrastructure, which means you have complete control over Neutron configuration. You can create custom network topologies, adjust MTU sizes, configure routing policies, implement VPNaaS for site-to-site connectivity, and set up load balancing. Network management is available through OpenStack APIs, the Horizon dashboard, and direct CLI access.

