In this article
Sui, Aptos, and Solana all use parallel execution, but their infrastructure demands differ in ways the official docs don’t fully explain. This article breaks down the execution model behind each chain, what that means for CPU, RAM, storage, and networking requirements, where shared cloud environments fail these workloads in predictable ways, and how to map those requirements to hardware that holds up under production load.
Sui, Aptos, and Solana each publish official hardware recommendations for running validator nodes. Those specs are a starting point, not a complete picture. They tell you the minimum CPU cores, RAM, and storage you need. They don’t explain why those numbers are what they are, what happens when your infrastructure can’t actually deliver them consistently, or how the wrong hosting environment can quietly drain validator rewards without triggering an obvious alert.
This post covers all three chains: what makes each one’s infrastructure requirements distinct, where shared cloud environments specifically fall short, and how to map those requirements to hardware that actually holds up under load.
Why These Three Chains Are Different from Ethereum
Before getting into chain-specific requirements, it helps to understand what Sui, Aptos, and Solana have in common that sets them apart from serial-execution chains like Ethereum.
On Ethereum, every transaction is processed in sequence. The EVM has a global state, and every write to that state has to be ordered relative to every other write. This makes the execution model predictable and easy to reason about, but it caps throughput at what a single execution thread can handle.
All three of these chains break out of that model using different approaches to parallel execution. They can process many transactions simultaneously, which is how they achieve the throughput targets their documentation describes. But parallelism shifts where the bottlenecks are. Instead of a single sequential execution thread being the ceiling, the ceiling moves to core count, memory bandwidth, storage IOPS, and network consistency. Those are the things your hardware actually needs to deliver, and they’re the things shared cloud environments are worst at guaranteeing.
Sui
How it works
Sui is built around an object model rather than accounts. Every asset on Sui is a distinct object with explicit ownership. When two users make unrelated transfers at the same time, their transactions touch different objects with no overlap, so they can execute in parallel with no coordination needed between them.
For transactions involving only owned objects, Sui skips consensus entirely and uses a fast path through the Mysticeti protocol. Consensus only kicks in for transactions that touch shared objects, like an AMM pool contract where multiple users are writing to the same state simultaneously. This split keeps latency low for the common case while preserving correctness for contested state.
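Conceptually, the split works like the hypothetical routing sketch below (a toy model, not Sui's actual scheduler): transactions over disjoint owned objects can run in parallel with no coordination, while anything touching a shared object gets handed to consensus for ordering.

```python
# Illustrative sketch of Sui's fast-path/consensus split -- not real Sui code.
# A transaction's "objects" set stands in for the objects it reads or writes.

def route(transactions, shared_objects):
    fast_path, consensus_path = [], []
    for tx in transactions:
        if tx["objects"] & shared_objects:
            consensus_path.append(tx)   # contested state: needs ordering
        else:
            fast_path.append(tx)        # owned objects only: skips consensus
    return fast_path, consensus_path

txs = [
    {"id": "t1", "objects": {"coin_a"}},              # simple transfer
    {"id": "t2", "objects": {"coin_b"}},              # independent transfer
    {"id": "t3", "objects": {"coin_c", "amm_pool"}},  # swap against a shared pool
]
fast, ordered = route(txs, shared_objects={"amm_pool"})
print([t["id"] for t in fast])     # ['t1', 't2'] -- can execute in parallel
print([t["id"] for t in ordered])  # ['t3'] -- goes through Mysticeti consensus
```

The key property is that the fast path requires no communication between validators about ordering, which is what keeps latency low for simple transfers.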
What it demands from hardware
CPU: core count over clock speed
Because Sui can actually distribute transaction execution across cores, having many physical cores available matters more than maximizing single-core frequency. This is the opposite of Solana, which we’ll get to. A Sui validator benefits from 24 or more physical cores. On a shared VM where those cores are time-sliced among multiple tenants, the parallel execution advantage disappears. You get virtual cores, not physical ones, and the hypervisor decides how much CPU time your workload actually gets.
Storage: sustained write pressure from object versioning
Every time a Sui object is modified, a new version is written to RocksDB. This is how Sui tracks state history, but it means the storage I/O profile is far more write-intensive than a simple key-value lookup. At production TPS, this creates sustained write pressure that consumer-grade NVMe drives aren’t built for. You won’t see an immediate failure. You’ll see gradual, unexplained performance degradation months into operation as write endurance gets used up. Enterprise NVMe drives with proper endurance ratings, like the Micron 7450 MAX NVMe in OpenMetal’s V4 servers, are what production validator nodes actually need.
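As a rough way to reason about endurance, the sketch below computes drive lifetime from a TBW rating, a daily logical write volume, and a write-amplification factor. All three figures are assumptions for illustration, not vendor specs, and endurance is only one axis: consumer drives also throttle under sustained writes long before the TBW rating is exhausted.

```python
# Back-of-the-envelope endurance math (assumed figures, not vendor specs):
# lifetime of a drive when the node writes `logical_gb_day` of new state and
# RocksDB compaction multiplies that by a write-amplification factor.

def drive_lifetime_years(tbw_rating_tb, logical_gb_day, write_amp):
    physical_tb_day = logical_gb_day * write_amp / 1000  # GB/day -> TB/day
    return tbw_rating_tb / physical_tb_day / 365

# Assumed: 40GB/day of new state, 15x write amplification, a ~1,200 TBW
# consumer drive vs a ~14,000 TBW enterprise drive.
print(round(drive_lifetime_years(1_200, 40, 15), 1))   # → 5.5 (years)
print(round(drive_lifetime_years(14_000, 40, 15), 1))  # → 63.9 (years)
```

The gap between the two results is the difference between budgeting for a drive swap mid-deployment and never thinking about it.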
RPC indexing: storage that grows with TPS
Full nodes serving RPC requests maintain index files on top of the chain database. On Sui mainnet today, those index files run around 1.5TB and grow with transaction throughput, not just chain age. A pruned full node with indexes currently sits around 2.5TB total. At roughly 18 TPS average, the database grows at about 10GB per day; at 183 TPS sustained, that jumps to around 40GB per day. RPC operators planning 12 to 18 months out need to account for this growth rate, not just current storage size.
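The growth figures above translate directly into runway. A quick planning calculation, using the drive size from the spec list and the two observed growth rates:

```python
# Capacity-planning sketch: days of runway left on a drive at a given
# sustained growth rate, using figures cited in this article.

def days_until_full(drive_gb, current_gb, growth_gb_day):
    return (drive_gb - current_gb) / growth_gb_day

# 4TB drive, ~2.5TB used today, the two observed mainnet growth rates.
print(days_until_full(4_000, 2_500, 10))  # → 150.0 days at ~18 TPS average
print(days_until_full(4_000, 2_500, 40))  # → 37.5 days at ~183 TPS sustained
```

The point of running the numbers is that a drive comfortably sized for today's average load can fill in roughly a month if throughput sustains at the higher rate, which is why planning should use the growth rate, not the current size.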
Network tuning: sysctl access required
Sui’s documentation explicitly flags that default Linux network buffer sizes are too small for validator loads and requires manual sysctl tuning (net.core.rmem_max, net.core.wmem_max, and TCP buffer settings). This kind of low-level system access isn’t available on most managed cloud instances.
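The tuning itself is a few lines in a sysctl drop-in file. The values below are illustrative placeholders, not Sui's official numbers; pull the current recommendations from the Sui docs before applying anything.

```shell
# /etc/sysctl.d/99-sui-network.conf
# Illustrative values only -- consult the current Sui docs for real numbers.

# Raise the hard ceiling on socket buffer sizes (bytes).
net.core.rmem_max = 104857600
net.core.wmem_max = 104857600

# Let TCP autotuning grow per-connection buffers up to that ceiling:
# min, default, max (bytes).
net.ipv4.tcp_rmem = 4096 87380 104857600
net.ipv4.tcp_wmem = 4096 87380 104857600
```

Applying the file with `sudo sysctl --system` requires root, which is exactly the access most managed cloud instances don't grant.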
Minimum specs for mainnet
- CPU: 24 physical cores minimum, 32+ recommended
- RAM: 128GB minimum, 256GB or more for validators with meaningful delegated stake
- Storage: 4TB enterprise NVMe, separate OS drive recommended
- Network: 1 Gbps minimum, 10 Gbps recommended
One more thing to plan for
Sui is deprecating its JSON-RPC interface on July 31, 2026. Teams migrating to the new gRPC and GraphQL RPC stack will be adding Postgres-backed indexer infrastructure that wasn’t part of the original node deployment. If you’re reconfiguring your data stack ahead of that deadline, it’s a sensible time to evaluate whether the hardware it runs on should change too.
Aptos

How it works
Aptos uses Block-STM, a parallel execution engine built around optimistic concurrency. Rather than determining upfront which transactions conflict, Block-STM executes all transactions in a block in parallel speculatively, then detects conflicts after the fact and re-executes only the transactions that touched overlapping state. For workloads where most transactions are independent (which covers the majority of real usage), this delivers high throughput without requiring explicit ownership tracking like Sui uses. When shared state is heavily contested, re-execution overhead increases, which is worth accounting for when sizing hardware.
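The optimistic pattern can be sketched in a few lines. This is a deliberately simplified model, not the real engine: actual Block-STM uses a multi-version data structure and iterative validation across worker threads, while this toy version does one speculative pass and one serial repair pass. It still shows the core idea that only conflicting transactions pay the re-execution cost.

```python
# Toy model of optimistic (Block-STM-style) execution -- not Aptos code.
# A transaction reads one key and writes a new value derived from it.

def make_tx(key, fn):
    def run(state):
        return {key}, {key: fn(state[key])}   # (read set, write set)
    return run

def execute_block(state, txs):
    results = [tx(state) for tx in txs]       # speculative "parallel" pass
    committed, written_so_far, reexecuted = dict(state), set(), 0
    for i, (reads, writes) in enumerate(results):
        if reads & written_so_far:            # read a key an earlier tx wrote
            reads, writes = txs[i](committed) # re-execute against updated state
            reexecuted += 1
        committed.update(writes)
        written_so_far |= set(writes)
    return committed, reexecuted

txs = [
    make_tx("a", lambda v: v + 1),      # independent: speculative result stands
    make_tx("b", lambda v: v + 1),      # independent: speculative result stands
    make_tx("pool", lambda v: v * 2),   # shared state, first writer
    make_tx("pool", lambda v: v + 5),   # conflicts with the tx above
]
final, redone = execute_block({"a": 1, "b": 2, "pool": 10}, txs)
print(final, redone)  # {'a': 2, 'b': 3, 'pool': 25} 1
```

Three of the four transactions keep their speculative results; only the second write to the contested `pool` key is redone. Under heavy contention that re-execution count climbs, which is the CPU headroom argument in the section below.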
Aptos also has a specific deployment requirement that Sui and Solana don’t share: every mainnet validator must run a validator fullnode (VFN) on a separate, independent machine. The VFN connects to the validator over a private network and handles the validator’s connection to the rest of the network. Running them on the same server isn’t supported for mainnet and creates the resource contention the isolation requirement is designed to prevent. That means a production Aptos validator setup is two servers minimum, not one.
What it demands from hardware
CPU: modern server-class silicon, not just core count
The current Aptos docs (last updated April 2026) specify 48 threads from either a 5th Gen AMD EPYC (Turin) or 6th Gen Intel Xeon (Granite Rapids). This is a meaningful shift from older community guides that cited 32 cores at 2.8GHz. The specific call-out of current-generation silicon reflects that Block-STM’s performance depends on memory bandwidth and IPC improvements in newer architectures, not just raw thread count. Unlike Sui’s deterministic parallelism, Block-STM’s optimistic approach means CPU utilization is less predictable under workloads with heavily contested shared state, so headroom above the reference spec matters.
Storage: a hard IOPS floor with a specific bandwidth requirement
The official spec is 3.0TB Enterprise NVMe SSD with at least 60K IOPS and 600MiB/s bandwidth. Aptos’s docs explicitly address the local vs. network storage question and note that local SSD typically provides lower latency and better IOPS, while network storage requires CPU support to scale IOPS. For a validator running Block-STM re-execution under load, the 600MiB/s bandwidth floor matters as much as the IOPS figure. Cloud-attached storage like AWS EBS and GCP Persistent Disk can hit 60K IOPS on paper but require CPU resources to do so and share physical capacity across instances. Current mainnet database size is in the several-hundred-GB range; archival nodes grow unbounded and have no recommended storage size.
Two machines means two sets of costs
The validator and VFN isolation requirement doubles your hardware footprint compared to single-node chains. Cloud providers can look cost-effective initially since you can spin up two VMs cheaply, but the ongoing egress costs between the validator and VFN add up, and the performance constraints apply to both machines independently.
Reference specs for mainnet (per machine, per official Aptos docs, updated April 2026)
- CPU: 48 threads, 5th Gen AMD EPYC (Turin) or 6th Gen Intel Xeon (Granite Rapids)
- RAM: 128GB
- Storage: 3.0TB Enterprise NVMe SSD, 60K IOPS minimum, 600MiB/s bandwidth
- Network: 1 Gbps, with the VFN network kept private between validator and VFN
Solana
How it works
Solana parallelizes transaction execution through Sealevel, its parallel smart contract runtime, which processes non-overlapping transactions concurrently. But the infrastructure profile for Solana validators is shaped less by Sealevel and more by two other architectural decisions: Proof of History (PoH) and the vote transaction model.
PoH is a continuous cryptographic timestamp sequence that Solana validators generate to establish the order of events without requiring all validators to communicate first. Generating PoH means computing a single-threaded SHA-256 hash chain: each hash depends on the one before it, so the work can’t be parallelized. As a result, Solana validators are more dependent on single-core clock speed than either Sui or Aptos validators are. High core count still matters for transaction processing, but the single-threaded PoH bottleneck means per-core performance gets you more on Solana than it would on Sui.
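The sequential dependency is easy to see in a sketch (this is the shape of the computation, not Solana's actual implementation):

```python
import hashlib

# Why PoH pins a single core: each hash is the input to the next, so no
# amount of extra cores can speed the chain up. Sketch only.
def poh_chain(seed: bytes, num_hashes: int) -> bytes:
    h = seed
    for _ in range(num_hashes):   # strictly sequential: h(n) depends on h(n-1)
        h = hashlib.sha256(h).digest()
    return h

# A faster core finishes this loop sooner; more cores don't help at all.
tick = poh_chain(b"genesis", 100_000)
print(tick.hex()[:16])
```

Transaction execution through Sealevel parallelizes fine, but this loop runs at whatever speed one core can sustain, which is why per-core frequency shows up in every Solana hardware discussion.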
The vote transaction model also creates significant overhead. Every slot (roughly 400ms), validators submit a vote transaction to the network confirming the block they’ve processed. These vote transactions cost real SOL to send and generate continuous network and storage activity regardless of user transaction volume. A validator with low delegated stake can find that vote costs eat most or all of their staking rewards. This isn’t an infrastructure question, but it shapes the economics of operating a Solana validator in ways that Sui and Aptos don’t have equivalents for.
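The economics can be approximated with two assumed parameters: one vote per ~400ms slot and 5,000 lamports per vote transaction signature. Both are simplifications (validators don't land a vote in literally every slot, and fees are network parameters that can change), but the order of magnitude is the point:

```python
# Rough vote-cost model. Assumptions, not current on-chain values:
# one vote per ~400ms slot, 5,000 lamports per vote transaction signature.

SLOTS_PER_DAY = 86_400 / 0.4          # ~216,000 slots per day
LAMPORTS_PER_VOTE = 5_000             # assumed base signature fee
LAMPORTS_PER_SOL = 1_000_000_000

daily_sol = SLOTS_PER_DAY * LAMPORTS_PER_VOTE / LAMPORTS_PER_SOL
print(round(daily_sol, 2), "SOL/day")      # 1.08 SOL/day
print(round(daily_sol * 365), "SOL/year")  # 394 SOL/year
```

Under these assumptions, voting alone costs on the order of a SOL per day. A validator whose delegated stake doesn't earn meaningfully more than that is operating at a loss before hardware costs enter the picture.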
What it demands from hardware
CPU: high single-core frequency plus core count
Solana’s official docs set the minimum at 12 cores/24 threads at 2.8GHz with SHA extensions and AVX2 support, but that’s a floor that competitive mainnet validators don’t operate at. Community data consistently shows 24+ cores at 3.5GHz or higher as the practical baseline, with AMD EPYC as the dominant choice. SHA extensions are a hard requirement, not a suggestion: without them, transaction processing slows measurably. The 2025 launch of Firedancer on mainnet has raised the bar further. Firedancer’s tile-based, NUMA-optimized architecture gets more out of high core-count CPUs than Agave did, and operators running it have reported meaningful improvements in skip rates and vote latency. If you’re running Agave today, the hardware that’s competitive now may need revisiting when you move to Firedancer.
RAM: the highest requirement of the three, and the official docs don’t name a number
The official Solana documentation doesn’t publish a specific RAM requirement, which is telling in itself. Community practice for mainnet validators currently sits at 384-512GB ECC RAM, with RPC nodes serving production traffic pushing into 768GB to 1TB territory. ECC is non-negotiable, and because Solana is sensitive to memory fragmentation, monitoring swap and heap fragmentation should be part of your standard operations. The accounts database being held in memory is what makes Solana’s latency possible, and undersizing RAM is one of the most common causes of degraded validator performance.
Storage: separate drives for accounts and ledger, enterprise only
The official docs are explicit: accounts and ledger should be on separate NVMe devices. Accounts need 500GB or more with high TBW ratings; the ledger needs 1TB or more, also high TBW. The I/O patterns are fundamentally different: ledger writes are heavy and sequential, while account reads are random and latency-sensitive, so combining them on a single drive creates contention even on fast hardware. Consumer NVMe drives are disqualified: the WD SN850 and Samsung 980 Pro, while fine for desktop use, degrade under Solana’s sustained write load and have caused node stalls. Enterprise drives with 300K+ read IOPS are the baseline.
Network: consistency over raw throughput
Dropped packets or jitter translate directly into missed votes, which cost real money. The official minimum is 1 Gbps symmetric, with 10 Gbps preferred for mainnet. Unlike on Sui and Aptos, where occasional latency spikes merely degrade performance, on Solana they cause missed slots with direct financial consequences.
Practical specs for mainnet (official minimum + community baseline)
- CPU: 12 cores/24 threads at 2.8GHz official minimum; 24+ cores at 3.5GHz+ practical baseline; AMD EPYC recommended; SHA extensions and AVX2 required
- RAM: no official figure published; 384-512GB ECC recommended by community for validators; 768GB-1TB for high-traffic RPC nodes
- Storage: 500GB+ enterprise NVMe for accounts, 1TB+ enterprise NVMe for ledger (separate drives); OS drive optional but recommended
- Network: 1 Gbps minimum, 10 Gbps recommended
The Failure Modes Cloud Environments Share Across All Three Chains
The specific hardware requirements differ by chain, but the ways shared cloud infrastructure fails to meet them follow predictable patterns.
CPU steal affects consensus timing on all three chains
When your VM’s virtual CPU is waiting for physical CPU time because another tenant on the same host is using it, that’s CPU steal. On Sui, it degrades parallel transaction throughput. On Aptos, it increases Block-STM re-execution rates under load. On Solana, it can cause missed votes. CPU steal tends to spike when the physical host is busy, which correlates with network-wide activity spikes on the chain you’re validating, exactly when consistent performance matters most. If you’re not tracking CPU steal explicitly in your monitoring (not just CPU utilization), you may be experiencing this without knowing it.
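On Linux, steal is exposed directly in `/proc/stat`: the 8th value on the `cpu` line (0-indexed field 7 after the label) is cumulative steal ticks, and the steal percentage over an interval is the delta in steal ticks over the delta in total ticks. The sketch below computes that from two samples; the sample numbers are made up for illustration.

```python
# Computing steal percentage from two /proc/stat "cpu" line samples.
# Tick tuples: (user, nice, system, idle, iowait, irq, softirq, steal).

def steal_pct(before, after):
    d_total = sum(after) - sum(before)
    d_steal = after[7] - before[7]      # steal is the 8th field
    return 100 * d_steal / d_total

before = (1000, 0, 300, 8000, 50, 10, 20, 40)   # made-up sample 1
after  = (1100, 0, 330, 8700, 55, 11, 22, 82)   # made-up sample 2
print(round(steal_pct(before, after), 1), "% steal")  # 4.8 % steal
```

On a live node, take the two samples a few seconds apart from `/proc/stat`, or read the `%steal` column from `mpstat` directly. On dedicated hardware this figure should be pinned at zero; any sustained nonzero value on a VM means another tenant is taking CPU time you're paying for.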
RocksDB compaction gets starved of memory bandwidth
All three chains use RocksDB as their primary storage backend. RocksDB runs periodic compaction jobs in the background to reorganize SST files and maintain read performance. On dedicated hardware, compaction has full access to the memory bus. On a shared VM, it’s competing with other tenants’ workloads for memory bandwidth. When compaction falls behind, read amplification increases over time and node performance degrades gradually, often attributed to other causes.
Network-attached storage IOPS don’t hold under sustained load
EBS on AWS and Persistent Disk on GCP are both network-attached and share physical storage capacity across instances. The IOPS figures in cloud pricing are ceiling numbers, not guarantees under sustained load. Aptos’s 60K IOPS requirement and Solana’s need for low-latency random account reads both create sustained I/O patterns that cause cloud storage to throttle during heavy activity periods. Local NVMe instance storage solves this, but it’s ephemeral on most cloud providers and lost on instance restart.
Egress fees accumulate with p2p traffic
All three chains generate significant validator-to-validator traffic: Sui checkpoint sync and consensus gossip, Aptos VFN-to-validator communication and peer syncing, Solana vote transactions and shred propagation. On AWS, GCP, and Azure, all outbound traffic is metered. This traffic doesn’t show up in initial infrastructure budgets and tends to grow as each network’s activity increases. On OpenMetal’s bare metal servers, private traffic between servers within a deployment is unmetered, converting a variable recurring cost into a predictable flat rate.
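To see why this matters at budget time, here's the arithmetic for sustained outbound p2p traffic under metered egress. Both inputs are illustrative assumptions (a round 100 Mbps average outbound and a $0.09/GB rate), not figures from any provider's current price sheet:

```python
# Illustrative egress math: sustained outbound traffic metered per GB.
# Both parameters below are assumptions for illustration.

def monthly_egress_cost(avg_mbps_out, usd_per_gb):
    mb_per_s = avg_mbps_out / 8                    # megabits -> megabytes
    gb_per_month = mb_per_s * 86_400 * 30 / 1_000  # seconds/day x 30 days
    return gb_per_month * usd_per_gb

# Assumed: 100 Mbps average outbound, $0.09/GB metered egress.
print(round(monthly_egress_cost(100, 0.09)), "USD/month")  # 2916 USD/month
```

A modest-sounding 100 Mbps average works out to tens of terabytes a month, which is how egress quietly becomes one of the larger line items on a validator's cloud bill.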
Mapping These Requirements to OpenMetal Hardware
OpenMetal’s V4 server line uses 5th-gen Intel Xeon processors and all-Micron 7450 MAX NVMe storage across the lineup. Here’s how the configs map to the three chains.
For Sui validators
The Medium V4 covers current mainnet requirements: dual Intel Xeon Silver 4510 (24 physical cores total), 256GB DDR5 at 4400MHz, 6.4TB Micron 7450 MAX NVMe, at $619/month (month to month pricing).
For validators expecting to grow delegated stake, the Large V4 (dual Intel Xeon Gold 6526Y, 32 physical cores total, 512GB DDR5, dual 6.4TB NVMe) at $1,174/month adds memory headroom and a second NVMe drive for a separate OS disk.
For Aptos validators
The two-machine requirement means budgeting for a pair of servers. The Large V4 ($1,174/month) covers the 128GB RAM spec comfortably on both the validator and VFN, with 512GB DDR5 giving significant headroom above the reference minimum. The dual 6.4TB NVMe drives deliver well above the 60K IOPS and 600MiB/s bandwidth requirement. The unmetered private networking between servers within a deployment handles the VFN-to-validator private network connection without adding egress costs.
One honest note: the current Aptos docs specifically call out 5th Gen AMD EPYC (Turin) or 6th Gen Intel Xeon (Granite Rapids) as the reference CPUs. OpenMetal’s V4 lineup uses 5th Gen Intel Xeon (Emerald Rapids), which meets the performance requirements but isn’t the Granite Rapids generation the docs cite. If you want to run the exact reference spec, that’s worth noting.
For Solana validators
The RAM requirements push toward the larger configurations. The XL V4 (dual Intel Xeon Gold 6530, 64 physical cores total, 1TB DDR5, four 6.4TB NVMe drives) at $1,988/month meets the 512GB-plus RAM recommendation for mainnet validators with headroom.
For high-traffic RPC nodes, the XXL V4 (same CPUs, 2TB DDR5, six 6.4TB NVMe drives) at $2,779/month covers the 768GB to 1TB memory range that production RPC endpoints need. The four and six NVMe drive configurations on the XL and XXL also support the accounts/ledger separation Solana requires without needing separate servers. The same caveat applies here as with Aptos: the Solana community strongly favors AMD EPYC for per-core clock speed, which is important for PoH generation. OpenMetal’s V4 servers use Intel Xeon, which works but isn’t the community’s first choice for competitive Solana validation.
All servers include dual 10 Gbps uplinks (20 Gbps total) with unmetered private traffic between servers in the same deployment. Public bandwidth ranges from 2 Gbps on the Medium V4 to 10 Gbps on the XL and XXL, which matters if you’re serving external RPC traffic at volume.
With OpenMetal’s bare metal servers you get full root access and direct control over system configuration: the sysctl tuning Sui requires for network buffers, the disk layout separation Solana needs for accounts and ledger, and any other low-level tuning that managed cloud instances simply don’t allow. Nothing between you and the hardware.
Before You Commit to Your Infrastructure Setup
Cloud for development and testnet is a reasonable call. The flexibility is useful when requirements are still shifting. The question is what you’re accepting when you move to mainnet production.
The move to bare metal tends to get triggered by one of a few things: a validator rewards audit that traces underperformance back to CPU steal, an incident where RPC latency spikes during high-traffic events and cloud storage throttling is the culprit, or a cost review where egress fees have grown to a number that’s hard to justify. The transition is easier to plan before one of those triggers than after.
For teams earlier in the evaluation, the blockchain infrastructure stacks post covers how to approach more complex multi-node deployments.
To talk through your specific setup, whether that’s a Sui validator, an Aptos two-machine deployment, or a Solana RPC fleet, reach out to the team and we can walk through what makes sense for your workload.
Schedule a Consultation
Get a deeper assessment and discuss your unique requirements.