Q: What internal network throughput can I expect between nodes in an OpenMetal private cloud?
Within an OpenMetal Hosted Private Cloud, traffic between nodes (compute-to-compute, compute-to-storage, and control-plane communication) runs over dedicated infrastructure included in the base deployment cost.
OpenMetal documents within-VLAN throughput of up to 10 Gbps, over either public or private IP space. This applies to all internal traffic: VM-to-VM communication, Ceph storage traffic serving block and object storage to running instances, and OpenStack service communication between control-plane nodes.
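As a rough sanity check on what that figure means in practice, the sketch below estimates how long a bulk transfer (say, a large volume copy or image migration) would take at a sustained 10 Gbps. This is our own illustrative arithmetic, not an OpenMetal tool: it assumes full line rate and ignores protocol overhead, replication factor, and disk throughput limits, so real transfers will take somewhat longer.

```python
# Back-of-envelope transfer-time estimate at the documented 10 Gbps
# within-VLAN rate. Assumes sustained line rate; ignores TCP/Ceph
# protocol overhead, replication writes, and disk bottlenecks.

LINK_GBPS = 10  # documented within-VLAN throughput


def transfer_seconds(gigabytes: float, link_gbps: float = LINK_GBPS) -> float:
    """Seconds to move `gigabytes` of data over a `link_gbps` link."""
    gigabits = gigabytes * 8  # 1 gigabyte = 8 gigabits
    return gigabits / link_gbps


if __name__ == "__main__":
    for size_gb in (10, 100, 1000):
        print(f"{size_gb:>5} GB -> ~{transfer_seconds(size_gb):,.0f} s at line rate")
```

So a 100 GB transfer works out to roughly 80 seconds at line rate, a useful baseline when budgeting time for storage rebalancing or live migration windows.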
Because the infrastructure is dedicated hardware, this throughput is not shared with other customers. The 10 Gbps internal bandwidth is the capacity allocated to each individual deployment, not an aggregate divided across tenants. This differs meaningfully from public cloud environments, where instance-to-instance bandwidth is drawn from shared physical capacity and governed by per-instance network caps that vary by instance type and platform policy.
From a security and compliance standpoint, inter-node traffic is isolated to the customer's private VLAN environment. Storage replication, control-plane messaging, and tenant workload traffic are architecturally separated: an application workload cannot observe or interfere with Ceph storage replication traffic running on the same cluster. For workloads with regulatory requirements around network traffic segregation, this isolation is structural rather than policy-enforced, which can simplify compliance documentation considerably.
