If you’re planning a migration from VMware to Proxmox, your infrastructure choices determine whether the project succeeds or stalls. This guide covers the hardware requirements, network architecture, storage considerations, and deployment options that make large-scale migrations work.


Broadcom’s acquisition of VMware in 2023 triggered pricing changes that have organizations reconsidering their virtualization strategy. The shift from perpetual licensing to subscriptions, combined with controversial minimum core requirements (briefly 72 cores per purchase in April 2025, though later rolled back), pushed many teams to evaluate alternatives like Proxmox Virtual Environment.

Why Infrastructure Planning Comes First

Many migration guides focus on the mechanics of moving VMs. That's necessary, but starting there skips a more fundamental step. Your infrastructure determines:

  • How many VMs you can migrate simultaneously
  • Whether storage performance matches or exceeds VMware
  • If your network can handle cluster traffic without affecting production
  • Whether you can test thoroughly before cutting over production workloads

Proxmox runs on standard x86_64 hardware, which gives you flexibility. That same flexibility means you need to make intentional choices about which hardware to use. Organizations are deploying Proxmox on dedicated bare metal to eliminate VMware licensing while maintaining enterprise features.

Core Hardware Requirements

Proxmox VE has modest minimum requirements (2 GB RAM, single CPU core with VT-x/AMD-V), but production deployments need significantly more resources.

CPU Selection

Your processor needs Intel VT-x or AMD-V virtualization extensions. Most server processors from the last decade include these, but verify before ordering hardware.
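
If you're repurposing existing servers, you can confirm the extensions from any Linux shell before installing Proxmox. A minimal check, assuming an Intel or AMD x86_64 system:

# Count CPU flags for Intel VT-x (vmx) or AMD-V (svm); a result of 0 means the
# extensions are missing or disabled in the BIOS/UEFI
grep -Ec '(vmx|svm)' /proc/cpuinfo

# Once an OS is running, verify the KVM modules loaded
lsmod | grep kvm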

For enterprise migrations, consider:

  • Core count: Allocate at least one full core per Ceph service (monitor, manager, OSD). A node running six Ceph OSDs should have 8+ cores reserved just for storage services.
  • Base frequency: Ceph monitors and managers benefit more from higher clock speeds than from additional cores.
  • Generation: Newer CPU generations provide better performance per watt and often include security features like Intel TDX for confidential computing.

A typical three-node cluster for VMware migration might use processors with 16-32 cores per socket. This supports both VM workloads and storage services without contention.

Memory Architecture

RAM requirements scale with three factors: Proxmox itself, guest VMs, and storage backend.

Base allocation:

  • Proxmox host: 4-8 GB
  • ZFS/Ceph storage: 1 GB per TB of storage capacity
  • Guest VMs: Sum of all allocated VM memory

Example: A host with 24 TB of Ceph storage running 10 VMs (8 GB each) needs a minimum of 112 GB of RAM (8 GB host + 24 GB storage + 80 GB VMs). Production deployments should add 20-30% headroom on top of that.
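
The same formula is easy to script as a sanity check when sizing multiple hosts. A minimal sketch, using the figures from the example above as placeholder inputs:

# Placeholder inputs: adjust to your environment
HOST_GB=8        # Proxmox host overhead
STORAGE_TB=24    # ZFS/Ceph capacity on this node (1 GB RAM per TB)
VM_COUNT=10
VM_GB=8          # RAM allocated per VM

BASE=$(( HOST_GB + STORAGE_TB + VM_COUNT * VM_GB ))
echo "Base: ${BASE} GB, with 25% headroom: $(( BASE * 125 / 100 )) GB"
# -> Base: 112 GB, with 25% headroom: 140 GB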

[Figure: Proxmox RAM Allocation]

Use ECC memory for production environments. Non-ECC RAM that experiences bit flips can silently corrupt data in memory and on disk. For compliance-driven workloads (HIPAA, PCI-DSS), ECC RAM is not optional.

Storage Configuration

Storage architecture directly impacts migration success. You have several approaches:

Local storage with hardware RAID: Traditional hardware RAID controllers with battery-backed write cache (BBU) work well for VM storage. This approach mirrors many VMware vSAN configurations. Use enterprise SSDs with power-loss protection (PLP), not consumer drives.

ZFS storage (software-defined): ZFS provides data integrity checking, snapshots, and compression without a hardware RAID controller. Give ZFS direct access to the disks (HBA or IT-mode controller); do not layer it on top of hardware RAID. Requirements: 1 GB RAM per TB of storage, enterprise SSDs with PLP recommended.
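
A minimal ZFS sketch, assuming two dedicated NVMe devices and a storage ID of zfs-vm (both placeholders for your own values):

# Create a mirrored pool directly on raw devices (no hardware RAID underneath)
zpool create -o ashift=12 tank mirror /dev/nvme0n1 /dev/nvme1n1

# Register the pool with Proxmox as storage for VM disks and containers
pvesm add zfspool zfs-vm --pool tank --content images,rootdir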

Ceph distributed storage: For clusters of three or more nodes, Ceph provides distributed block storage similar to VMware vSAN. Ceph creates shared storage across nodes with automatic replication and self-healing. Requirements: Minimum three nodes, dedicated high-speed network for storage traffic, significant memory allocation.
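
As a rough illustration of the Ceph path, Proxmox wraps the setup in its pveceph tooling. A sketch assuming the Ceph public network used in the configuration example later in this article and a placeholder device name, run on each node as appropriate:

# Install Ceph packages and initialize with a dedicated public network
pveceph install
pveceph init --network 10.0.20.0/24

# Create monitors and managers on (at least) three nodes
pveceph mon create
pveceph mgr create

# Add an OSD per data disk on every node
pveceph osd create /dev/nvme0n1

# Create a replicated pool and register it as Proxmox VM storage
pveceph pool create vm-pool --add_storages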

Which to choose? If you're migrating from VMware vSAN, Ceph provides a comparable architecture. For standalone hosts or two-node clusters, hardware RAID or ZFS is the better fit.

Storage Performance Considerations

Your storage must match or exceed VMware performance. Key specifications:

  • IOPS capability: Modern NVMe SSDs deliver 100,000+ IOPS. A single high-performance NVMe can saturate 10 Gbps networking.
  • Latency: Ceph clusters should target sub-5ms latency for storage operations
  • Throughput: Plan for sustained write performance during migration (moving VMs generates heavy I/O)

For hybrid storage (mixing SSD and HDD), keep hot data on SSD and cold data on HDD. In Ceph, this is typically done by assigning pools to device classes with CRUSH rules rather than relying on cache tiering, which is deprecated in recent releases.

Network Architecture for Production Clusters

Network configuration determines cluster stability, storage performance, and migration speed. Proxmox clusters require careful network separation.

Separation by Traffic Type

Production Proxmox clusters use physically separated networks:

Cluster network (corosync traffic): Proxmox cluster management uses corosync for maintaining cluster state. Corosync requires low latency (under 5 ms) and stable connections. High latency or packet loss causes cluster instability.

Recommendation: Dedicated 1 Gbps network, not shared with other traffic. This network doesn’t need high bandwidth but must be reliable.

Ceph public network: If using Ceph storage, this network carries VM-to-storage traffic. All VMs access their storage through this network.

Recommendation: 10 Gbps minimum. For NVMe-backed Ceph, consider 25 Gbps or higher. Heavy storage I/O will saturate 10 Gbps quickly.

Ceph cluster network (optional): A separate network for Ceph OSD replication and heartbeat traffic. This reduces load on the public network and improves performance in large clusters.

Recommendation: Same speed as Ceph public network. Only needed for clusters with heavy storage rebalancing or large OSD counts.

VM/management network: Production traffic, management interfaces, and external connectivity.

Recommendation: 1 Gbps typically sufficient for management. VLANs can segment different VM networks.

[Figure: Proxmox Network Architecture]

Example Three-Node Configuration

A practical three-node cluster might use:

  • 2x 1 Gbps NICs: Bonded for management and corosync (with redundancy)
  • 2x 10 Gbps NICs: Bonded for Ceph public and cluster networks
  • Optional: Additional NICs for VM production traffic on separate VLANs

Network redundancy matters. Bond interfaces where possible, and configure corosync with multiple rings for failover.
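
One way to get corosync redundancy is to define two links when creating the cluster. A sketch using the example addresses from the network configuration section below; the cluster name and second-node addresses are placeholders:

# On the first node: create the cluster with link0 on the dedicated corosync
# network and link1 as a fallback on the management network
pvecm create prod-cluster --link0 10.0.10.10 --link1 10.0.1.10

# On each joining node: point at an existing member and supply the local
# addresses for both links
pvecm add 10.0.10.10 --link0 10.0.10.11 --link1 10.0.1.11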

Migration Network Considerations

During migration, network bandwidth determines how quickly you can move VMs. A 1 TB VM migrating over a 10 Gbps network takes approximately 15-20 minutes. Over 1 Gbps, the same VM takes 2-3 hours.
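
Those figures follow from simple arithmetic; line rate sets the floor, and protocol overhead plus disk conversion push real transfers toward the upper end of each range:

# Theoretical transfer time for 1 TB (~8,000 gigabits) at 10 Gbps and 1 Gbps
awk 'BEGIN { printf "10 Gbps: %.0f min, 1 Gbps: %.0f min\n", 8000/10/60, 8000/1/60 }'
# -> 10 Gbps: 13 min, 1 Gbps: 133 min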

If your VMware environment and Proxmox cluster share network infrastructure, migration can happen over existing links. For large migrations (dozens or hundreds of VMs), dedicated migration VLANs keep bulk transfer traffic from impacting production.

Deployment Architecture Options

You have several options for deploying Proxmox infrastructure for a VMware migration.

Self-Managed On-Premises

Purchase and install servers in your own data center. You maintain physical control, but you’re responsible for:

  • Hardware procurement and installation
  • Network configuration
  • Power and cooling
  • Hardware replacement when failures occur

This works well if you already have data center space and staff to manage hardware.

Colocation

Place your hardware in a third-party data center. The colo provider handles power, cooling, and physical security. You maintain remote management through IPMI/iDRAC and handle OS and application layers.

Benefit: Professional data center environment without building one yourself. You still purchase and manage hardware.

Bare Metal Hosting

Lease dedicated servers from a provider. You get root access and full control over the OS, but the provider owns and maintains the physical hardware.

OpenMetal’s bare metal servers provide an option here. You deploy Proxmox on dedicated hardware with:

  • Full root access via IPMI for OS installation
  • Dedicated hardware (no noisy neighbors)
  • Private networking between servers for cluster traffic
  • Fixed monthly costs without per-VM licensing

The advantage: You avoid hardware capital expenses and data center management while maintaining full control over the Proxmox layer. If a disk or power supply fails, the provider handles replacement.

Hardware Sizing by Migration Scale

Here’s practical sizing guidance based on migration scope:

[Figure: Proxmox Hardware Sizing Guide]

Small Migration (10-30 VMs)

Three-node cluster:

  • CPU: 16-24 cores per node
  • RAM: 128 GB per node
  • Storage: 6x 1.92 TB SSDs per node (RAID or Ceph)
  • Network: 2x 1 Gbps + 2x 10 Gbps per node

This configuration supports the VMs plus Proxmox and storage services. With 3x replication, Ceph would provide approximately 11.5 TB of usable capacity from roughly 34.5 TB of raw flash across the cluster.
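
The capacity figure follows from the drive count: six 1.92 TB drives per node across three nodes, divided by the replication factor.

# Raw capacity / replicas = usable, before the ~80% fill guideline
awk 'BEGIN { printf "%.1f TB usable\n", (6 * 1.92 * 3) / 3 }'
# -> 11.5 TB usable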

Medium Migration (30-100 VMs)

Three-node cluster:

  • CPU: 32-48 cores per node
  • RAM: 256-384 GB per node
  • Storage: 12x 3.84 TB SSDs per node
  • Network: 2x 10 Gbps + 2x 25 Gbps per node

Higher core count and memory support more concurrent VMs. Additional storage capacity and faster networking handle increased I/O demands.

Large Migration (100+ VMs)

Five+ node cluster:

  • CPU: 48-64 cores per node
  • RAM: 512 GB+ per node
  • Storage: Mix of NVMe (hot data) and SAS SSD (warm data)
  • Network: 2x 25 Gbps or 100 Gbps for Ceph traffic

At this scale, consider dedicated storage nodes versus a hyperconverged design. Dedicated storage nodes run only Ceph OSDs, while compute nodes run only VMs. Separating the roles lets you tune each independently and keeps storage rebalancing from competing with VM workloads.

Testing Infrastructure Before Migration

Before moving production VMs, validate that your Proxmox infrastructure meets performance requirements.

Performance Benchmarking

Run pveperf on each node to get baseline metrics. Compare results across nodes to ensure consistency.

For storage, use fio or other I/O testing tools:

fio --name=random-write --ioengine=libaio --iodepth=32 --rw=randwrite --bs=4k --direct=1 --size=1G --numjobs=4 --runtime=60 --group_reporting

Test both sequential and random I/O patterns. Results should match or exceed your VMware environment’s storage performance.
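
Complementary tests worth running alongside the write test above; the vm-pool name in the rados commands is a placeholder and only applies if the backend is Ceph:

# Random read, mirroring the random write test
fio --name=random-read --ioengine=libaio --iodepth=32 --rw=randread --bs=4k --direct=1 --size=1G --numjobs=4 --runtime=60 --group_reporting

# Sequential throughput with larger blocks (closer to migration I/O patterns)
fio --name=seq-write --ioengine=libaio --iodepth=16 --rw=write --bs=1M --direct=1 --size=4G --numjobs=1 --runtime=60 --group_reporting

# Cluster-level Ceph benchmark; --no-cleanup keeps objects for the read test
rados bench -p vm-pool 60 write -b 4M -t 16 --no-cleanup
rados bench -p vm-pool 60 rand -t 16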

Cluster Stability Testing

Simulate failures to verify cluster behavior:

  • Disconnect network cables to test failover
  • Reboot nodes to test HA functionality
  • Generate heavy network traffic to verify corosync stability

Proxmox should maintain cluster quorum and automatically restart VMs on surviving nodes during node failures.
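
While running these failure tests, a few commands give a quick read on cluster and storage state (assuming HA and Ceph are configured):

pvecm status                                   # quorum, membership, and vote counts
ha-manager status                              # placement and state of HA-managed VMs
ceph -s                                        # Ceph health, OSD status, recovery activity
journalctl -u corosync --since "10 minutes ago"   # recent corosync link and membership events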

Pilot Migration

Choose 3-5 non-critical VMs for a pilot migration. These should represent different workload types (Windows, Linux, database, application servers).

Migrate the pilot VMs using Proxmox’s import wizard or manual conversion. Monitor:

  • Migration duration (network throughput)
  • Post-migration performance (CPU, RAM, I/O)
  • Application functionality after boot
  • Guest tools and driver changes needed

Document any issues and adjust your migration plan before tackling production workloads.

Migration Tooling Requirements

Proxmox 8.2+ includes a built-in import wizard that connects to VMware ESXi and migrates VMs. Requirements:

  • ESXi 6.5 – 8.0 (source environment)
  • Proxmox VE 8.2+ (target environment)
  • Network connectivity between VMware and Proxmox clusters

The wizard handles disk conversion (VMDK to raw or qcow2) and creates appropriate VM configurations. For VMs with special requirements (vTPM, certain storage controllers), manual migration might be needed.

Alternative approach: Export VMDKs from VMware, transfer to Proxmox storage, manually import. This works for any VMware version but requires more steps.
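
A minimal sketch of the manual path, assuming the exported VMDK has already been copied to the Proxmox host and using placeholder values for the VM ID (120), name, and target storage (local-lvm):

# Create an empty VM with the desired resources
qm create 120 --name app01 --memory 8192 --cores 4 --net0 virtio,bridge=vmbr0

# Import the VMware disk; it is converted to the target storage's native format
qm importdisk 120 /mnt/transfer/app01-flat.vmdk local-lvm

# Attach the imported disk (the volume name may differ) and make it the boot device
qm set 120 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-120-disk-0 --boot order=scsi0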

Storage Backend Selection

Your storage choice impacts migration complexity and post-migration performance.

Local Storage Scenarios

If each Proxmox node uses local storage (not shared), VMs are tied to specific nodes. Live migration between nodes requires copying VM storage, which takes time.

This works fine for:

  • Standalone hosts
  • Two-node clusters
  • Workloads that don’t require frequent migration between hosts

Shared Storage Scenarios

Ceph or other shared storage allows live migration (moving running VMs between nodes without downtime). The VM’s storage remains accessible from any node.

Shared storage makes sense for:

  • Clusters with 3+ nodes
  • Workloads requiring high availability
  • Environments needing frequent rebalancing

Shared storage adds complexity (network configuration, Ceph management) but provides VMware-like live migration capabilities.

Network Configuration Details

Each Proxmox node needs /etc/network/interfaces configured to support multiple networks. Example configuration for a three-network setup:

auto vmbr0
iface vmbr0 inet static
    address 10.0.1.10/24
    gateway 10.0.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    # Management network

auto vmbr1
iface vmbr1 inet static
    address 10.0.10.10/24
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0
    # Corosync cluster network

auto vmbr2
iface vmbr2 inet static
    address 10.0.20.10/24
    bridge-ports eno3
    bridge-stp off
    bridge-fd 0
    # Ceph public network

For bonded interfaces (redundancy), replace single NICs with bond interfaces:

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4
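
The bond itself carries no address in this mode; a bridge sits on top of it. A sketch continuing the example above, which makes the management bridge redundant across both NICs (note that 802.3ad/LACP requires matching configuration on the switch side):

auto vmbr0
iface vmbr0 inet static
    address 10.0.1.10/24
    gateway 10.0.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    # Management network, now redundant across eno1 and eno2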

Network configuration varies by deployment. The key principle: separate traffic types to prevent interference.

Deployment Timeline Considerations

Infrastructure deployment takes time. Budget accordingly:

  • Hardware procurement: 2-4 weeks (if buying)
  • Bare metal provisioning: 1-3 days (if leasing)
  • Network configuration: 2-3 days
  • Storage cluster setup: 1-2 days
  • Testing and validation: 1 week
  • Pilot migration: 1 week
  • Production migration: Varies by VM count

Realistic timeline for medium migration: 6-8 weeks from hardware order to production cutover.

Using pre-provisioned bare metal infrastructure compresses the timeline by eliminating hardware procurement and physical setup.

Cost Considerations

Infrastructure costs vary by deployment model:

Self-managed on-premises:

  • Capital: Server hardware ($8,000-$15,000 per node)
  • Operational: Power, cooling, network, staff time
  • No software licensing fees (Proxmox is open source)

Bare metal hosting:

  • Operational: Fixed monthly per server ($500-$2,000+ depending on specs)
  • No capital expenses
  • No software licensing fees
  • Provider handles hardware replacement

Comparison point: VMware vSphere with vSAN licensing can cost $3,000-$6,000 per processor per year in subscription fees. Proxmox eliminates these costs entirely.

For a three-node cluster, eliminating VMware licensing often saves $20,000-$40,000 annually. These savings offset bare metal hosting costs or pay for hardware purchases within 12-18 months.

Infrastructure Monitoring and Management

Post-migration, you need visibility into infrastructure health:

  • Proxmox web UI provides cluster-wide monitoring
  • Ceph dashboard shows storage cluster status
  • External monitoring (Prometheus, Grafana) can track historical metrics

Set up alerts for:

  • Cluster membership changes
  • High storage utilization (>80%)
  • Ceph health warnings
  • High CPU/memory usage
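
For quick manual checks of the same health signals between monitoring intervals, the command line works too:

pvesm status           # utilization of each configured storage
ceph df                # raw vs usable capacity and per-pool usage
ceph health detail     # explanation of any active Ceph warnings
pvecm status           # cluster membership and quorum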

Regular infrastructure maintenance includes:

  • Proxmox updates (monthly)
  • Ceph version upgrades (quarterly)
  • Hardware firmware updates (as needed)
  • Log rotation and cleanup

Getting Started

If you’re planning a VMware to Proxmox migration, start by:

  1. Auditing your current VMware environment (VM count, storage usage, network configuration)
  2. Determining performance requirements (IOPS, throughput, memory)
  3. Choosing deployment model (on-premises, colo, bare metal hosting)
  4. Sizing hardware based on workload needs
  5. Setting up a test environment for pilot migration

The infrastructure decisions you make now determine whether your migration succeeds. Take time to size appropriately, separate networks correctly, and test thoroughly before migrating production workloads.

OpenMetal Infrastructure Options

OpenMetal provides bare metal servers suitable for Proxmox deployments, with hardware ranging from general-purpose configurations to high-memory and storage-optimized systems. All bare metal servers include:

  • Full root access via IPMI for OS installation
  • Dedicated VLANs for private networking between servers
  • Fixed monthly pricing with no per-VM licensing
  • 95th percentile bandwidth billing

For organizations wanting to eliminate VMware licensing costs while maintaining enterprise features, Proxmox on dedicated bare metal provides a cost-effective path. You get the control of on-premises infrastructure without the data center overhead.


Ready to explore infrastructure options for your VMware migration? Learn more about OpenMetal’s bare metal offerings or schedule a consultation to discuss your specific requirements. Read more about choosing between hardware generations for your deployment.

