In this article

  • Why Public Clouds Ban Security Research
  • The Nested Isolation Model: Infrastructure VLANs + OpenStack VPCs
  • Malware Analysis Workflow: Provision, Detonate, Destroy
  • Deploy Cuckoo Sandbox on Bare Metal
  • Hardware Control: IPMI for Catastrophic Failures
  • Scaling Your Security Lab
  • Total Infrastructure Control
  • Ready to Build Your Pentest Lab?

You’ve just spun up a honeypot to analyze a new botnet variant. Three hours later, your AWS account is suspended. Support tells you that your “malicious activity” violated their AUP and your appeal will take 5-7 business days.

Your legitimate security research just got flagged as an attack.

This isn’t an edge case. Hyperscalers like AWS, Azure, and GCP use automated abuse detection systems that can’t distinguish between a penetration test and an actual intrusion. Their platform-wide monitoring protects their infrastructure and other tenants, which means your work, no matter how legitimate, is a liability they can’t afford.

Security researchers need a different approach: a platform built for containment, not built to contain you.

Why Public Clouds Ban Security Research

Public cloud providers operate under strict Acceptable Use Policies that explicitly prohibit activities that are standard practice in security research:

  • Port scanning and network reconnaissance: AWS’s AUP forbids unauthorized port scanning, even within your own VPCs.
  • Malware detonation: Running live malware samples violates acceptable use policies across all major hyperscalers.
  • Penetration testing: While some providers allow limited pentesting with pre-approval, the process is bureaucratic and restrictive.
  • C2 infrastructure: Operating command-and-control servers, even for research, triggers immediate flags.

The fundamental issue is one of shared infrastructure. Public clouds must protect thousands of tenants on shared hardware. Your honeypot’s network scanning looks identical to a reconnaissance attack against their other customers. Their systems are designed to shut you down first and ask questions later.

The Nested Isolation Model: Infrastructure VLANs + OpenStack VPCs

To conduct security research effectively, you need two distinct layers of isolation. Not shared infrastructure with network segmentation, but true dedicated hardware with nested virtualization.

This is the “sandbox-within-a-sandbox” model.

The Outer Sandbox (Your Infrastructure VLAN)

Your entire research environment is deployed on its own dedicated infrastructure VLAN. When you provision an On-Demand OpenStack-Powered Private Cloud Core, your control plane and all compute nodes are logically isolated from every other customer at the infrastructure level.

This is dedicated hardware, not shared compute. You’re not in a “neighborhood” with other tenants. Your network scanning, packet captures, and traffic analysis won’t trigger abuse alerts for anyone else because there is no one else on your infrastructure.

A Private Cloud Core starts with three nodes. For security research, Medium V4 servers provide a cost-effective foundation: 24 cores and 256GB of DDR5 RAM per node give you 72 cores and 768GB of RAM in total. That’s enough capacity to run your OpenStack control plane and dozens of simultaneous test environments.

The Inner Sandbox (OpenStack VPCs)

Inside your private cloud, you have full OpenStack capabilities. This is where you create disposable test environments using OpenStack Projects, which function as true Virtual Private Clouds with VXLAN overlay networks.

These VPCs provide:

  • Cost-free provisioning: Create and destroy hundreds of isolated network environments with no additional charges.
  • Logical isolation: Each Project operates in its own network namespace. The VMs in “Project_Ransomware_Test” cannot see or communicate with “Project_Phishing_Analysis.”
  • Granular control: You configure all firewall rules, security groups, and routing policies for each VPC independently.

This creates complete containment. A malware sample that escapes its VM is still trapped in a disposable virtual network, which is itself isolated within your dedicated infrastructure.

Malware Analysis Workflow: Provision, Detonate, Destroy

Here’s how a security researcher uses this nested isolation model to safely analyze a ransomware sample:

1. Provision a Test Environment

The analyst logs into their OpenStack dashboard and creates a new Project: “Ransomware_Sample_042.” This is an empty VPC with no VMs, no networks, and no connection to other Projects.
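The same step can be done from the OpenStack CLI instead of the dashboard. A minimal sketch, assuming admin-scoped credentials (the project name and description are illustrative):

```shell
# Create an empty, isolated Project (VPC) for this sample.
openstack project create \
  --description "Ransomware detonation lab" \
  Ransomware_Sample_042
```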

2. Build the Target Infrastructure

Inside the new VPC, the analyst provisions:

  • An isolated virtual network (10.20.30.0/24)
  • A “victim” Windows Server 2019 VM
  • A Linux VM acting as a C2 listener
  • A security group that allows communication only between these two VMs

The entire environment is self-contained. No traffic can enter or exit without explicit firewall rules.
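With the CLI scoped to the new Project, the build above might look like the following sketch. Image names, flavor names, and the interface details are assumptions; substitute whatever exists in your cloud:

```shell
# Isolated L2 network and subnet; no router is attached,
# so there is no path in or out of the VPC.
openstack network create analysis-net
openstack subnet create --network analysis-net \
  --subnet-range 10.20.30.0/24 analysis-subnet

# Security group permitting TCP only between lab hosts.
openstack security group create lab-only
openstack security group rule create --ingress \
  --remote-ip 10.20.30.0/24 --protocol tcp lab-only

# Victim and C2 VMs; image/flavor names are placeholders.
openstack server create --image win-server-2019 --flavor m1.large \
  --network analysis-net --security-group lab-only victim-vm
openstack server create --image ubuntu-22.04 --flavor m1.small \
  --network analysis-net --security-group lab-only c2-listener
```

Because no router is attached to `analysis-net`, even a misconfigured security group cannot leak traffic beyond the VPC.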

3. Detonate and Observe

The analyst introduces the ransomware sample into the victim VM. The malware executes, encrypts the server’s file system, and attempts to communicate with the C2 listener.

Using OpenStack’s native network monitoring and packet capture tools, the analyst collects forensic data:

  • Complete packet captures of C2 communication
  • Disk I/O patterns showing file encryption behavior
  • Process execution logs from the victim VM
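One simple way to collect the C2 capture is to run tcpdump directly on the listener VM. The interface name and victim IP below are assumptions for this lab layout:

```shell
# On the C2 listener VM: record all traffic to/from the victim.
sudo tcpdump -i ens3 -w /tmp/c2_capture.pcap host 10.20.30.10
```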

The malware cannot escape this environment. Even if it had a hypervisor exploit, it would still be contained within the dedicated infrastructure VLAN, isolated from any other customer’s systems.

4. Destroy the Environment

Once analysis is complete, the analyst deletes the entire “Ransomware_Sample_042” Project. The VPC, all VMs, virtual networks, and storage volumes are permanently destroyed.
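Teardown can also be a single CLI call. Recent versions of python-openstackclient include `project purge`, which deletes a Project's servers, networks, and volumes before removing the Project itself (older clients may require deleting resources manually first):

```shell
# Preview what would be deleted, then purge for real.
openstack project purge --dry-run --project Ransomware_Sample_042
openstack project purge --project Ransomware_Sample_042
```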

The lab is now clean with no cross-contamination risk, ready for the next test.

This workflow is repeatable, safe, and cost-effective. No per-VM charges, no egress fees for forensic data transfer, and no risk of account suspension.

Deploy Cuckoo Sandbox on Bare Metal

For researchers who want to deploy automated malware analysis platforms, a single bare metal server provides an excellent starting point.

A Small V4 server (8 cores, 64GB RAM) is sufficient for running Cuckoo Sandbox with multiple analysis VMs. The key advantages over public cloud deployment:

  • No hypervisor restrictions: You control the entire stack from hardware to guest OS.
  • High-speed private networking: 20Gbps private network interfaces enable rapid VM provisioning and data transfer between your Cuckoo controller and analysis VMs.
  • Generous bandwidth: Sufficient egress for extracting large forensic datasets without throttling or overage charges.
  • IPMI access: Modern out-of-band management for the underlying hardware (more on this below).

For larger security operations teams, deploy Cuckoo across multiple nodes in a Private Cloud Core. This provides high availability and the ability to scale your analysis capacity horizontally.
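As a rough sketch, the classic Cuckoo 2.x quickstart looks like this. Note that Cuckoo 2.x is Python-2-era and no longer maintained; CAPEv2 is the actively developed successor, and its setup differs:

```shell
# Classic Cuckoo 2.x quickstart (legacy; see CAPEv2 for current work).
virtualenv -p python2.7 ~/cuckoo-env
. ~/cuckoo-env/bin/activate
pip install -U cuckoo
cuckoo init        # create the ~/.cuckoo working directory
cuckoo community   # fetch community signatures and modules
cuckoo             # start the scheduler and begin processing tasks
```

You still need to define analysis VMs (KVM or VirtualBox guests) in Cuckoo's machinery configuration before submitting samples.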

Hardware Control: IPMI for Catastrophic Failures

What happens if a test goes catastrophically wrong? What if you suspect a hypervisor-level exploit or rootkit?

On a public cloud, you’re stuck. You can’t access the underlying hardware, can’t verify the hypervisor state, and can’t perform a true bare-metal reimage without involving support.

With OpenMetal, you have modern IPMI access to every server in your Private Cloud Core. This provides:

  • Out-of-band console access: Connect to the server’s physical console independent of the OS or network state.
  • Remote power control: Power-cycle or hard-reset a node without logging into the operating system.
  • Bare-metal reimaging: Trigger a complete reimage of the physical server to a known-good state.
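These capabilities map to standard ipmitool invocations. The BMC address and credentials below are placeholders:

```shell
# BMC address and credentials are placeholders for your environment.
BMC="-I lanplus -H 10.0.0.50 -U admin -P secret"

ipmitool $BMC chassis power status   # confirm node power state
ipmitool $BMC sol activate           # serial-over-LAN console, independent of the OS
ipmitool $BMC chassis power cycle    # hard power-cycle a wedged node
```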

This is critical for security researchers who need absolute certainty about their infrastructure state. If you suspect compromise at the hypervisor level, you can verify and remediate without relying on potentially compromised software.

Scaling Your Security Lab

The 3-node Private Cloud Core is the recommended starting point, but your infrastructure can scale to match your needs:

Single-Server Research Box

For individual researchers or small teams, a single Small V4 bare metal server provides a dedicated, isolated environment for running security tools. Use it as a pentesting jump box, a dedicated Cuckoo instance, or a contained environment for testing exploits.

Starting cost: Around $200/month for dedicated hardware with no noisy neighbors, no AUP restrictions, and full administrative access.

Enterprise Security Operations

For large SecOps teams running comprehensive threat hunting, SIEM, and malware analysis platforms, deploy your Private Cloud Core using Large V4 or XL V4 servers:

  • Large V4: 32 cores, 512GB RAM per node
  • XL V4: 64 cores, 1TB RAM per node

This provides the capacity to run a full security stack: SIEM for log aggregation, SOAR for automated response, malware analysis infrastructure, and dozens of isolated test environments for your entire organization.

Total Infrastructure Control

This model provides capabilities that hyperscalers cannot offer:

Complete firewall control: Configure all security groups, network ACLs, and routing policies within OpenStack with no platform-level restrictions.

Custom network topologies: Build complex multi-tier networks, air-gapped environments, and custom routing schemes without navigating provider limitations.

No AUP restrictions: Run honeypots, operate C2 infrastructure for research, conduct network scanning, and detonate malware without risk of account suspension.

Dedicated hardware: Your compute, storage, and networking run on hardware dedicated to your infrastructure. No shared tenancy, no noisy neighbors, no cross-tenant attack surface.

For security researchers, this is the difference between hiding your work from your provider and having a provider that supports your work.

Ready to Build Your Pentest Lab?

Stop trying to hide your security research from your cloud provider. The “sandbox-within-a-sandbox” model provides true isolation at both the infrastructure and virtualization layers.

Schedule a call to discuss your specific security research requirements, or start with a trial of a Private Cloud Core to test the platform yourself!

Build a Secure Penetration Testing Lab with On-Demand Private Cloud Infrastructure

Nov 11, 2025

Public cloud providers like AWS and GCP will suspend your account for running honeypots, malware analysis, or penetration testing. Security researchers need dedicated infrastructure with nested isolation. Learn how to build a “sandbox-within-a-sandbox” lab using infrastructure VLANs and OpenStack VPCs.

Why Hyperscalers Won’t Let You Build an Email Service on Their Infrastructure

Nov 10, 2025

Hyperscalers like AWS and GCP block custom email services, pushing you to their metered APIs. Learn why this conflict of interest hurts your business and how to build a scalable, high-volume email platform on OpenMetal’s dedicated hardware with BYOIP, private networking, and no sending limits.

Why Enterprise Workloads Need BYOIP Support That Hyperscalers Can’t Provide

Nov 06, 2025

Hyperscalers lock you in by owning your IP addresses. Moving infrastructure means updating firewall rules, losing email reputation, and coordinating DNS changes across partners. BYOIP gives you control over your network identity. Learn why this matters for multi-region, hybrid, and enterprise workloads.

Why Run Proxmox VE on OpenMetal Bare Metal Infrastructure?

Nov 04, 2025

Deploying Proxmox VE on OpenMetal bare metal eliminates virtualization licensing costs while providing enterprise features like HA clustering and live migration. Organizations achieve 50%+ savings versus public cloud with predictable monthly pricing. Dedicated hardware delivers consistent performance without resource contention, making this combination ideal for production workloads, database consolidation, and VMware migrations.

Optimizing Latency and Egress Costs for Globally Distributed Workloads

Oct 07, 2025

Discover how OpenMetal’s strategically positioned data centers eliminate the “data tax” on globally distributed applications. Free east-west traffic between regions plus predictable 95th percentile bandwidth billing lets you architect for performance instead of cost avoidance, with typical savings of 30-60% versus public cloud.

Performance Consistency: The Overlooked KPI of Cloud Strategy

Sep 27, 2025

Most enterprises focus on uptime and peak performance when choosing cloud providers, but performance consistency—stable, predictable performance without noisy neighbors or throttling—is the real game-changer for cloud strategy success.

Why Singapore SaaS Leaders Are Embracing Open Source Private Cloud

Sep 27, 2025

Discover why Singapore SaaS companies are embracing open source private cloud infrastructure as a strategic alternative to hyperscaler dependence. Learn how OpenMetal’s hosted OpenStack solution delivers predictable costs, data sovereignty, and vendor independence for growing businesses across ASEAN.

Why Mature Cloud Infrastructure Needs the Discipline of Capacity Planning

Sep 15, 2025

Public cloud’s infinite shelf promise masked the need for capacity discipline, creating unpredictable costs and architectural debt. Discover how predictable infrastructure models like OpenMetal’s hosted private cloud transform capacity planning from reactive firefighting into proactive strategic advantage for CTOs.

The Hidden Costs of Hyperscaler Networking vs OpenMetal’s Transparent and Predictable Model

Sep 15, 2025

Enterprise cloud networking costs have become unpredictable budget wild cards. AWS, GCP, and Azure charge per-GB for internal traffic, creating cost volatility that punishes distributed architectures. OpenMetal’s two-network model eliminates cross-AZ fees and uses 95th percentile billing to smooth traffic spikes.

From VMware to OpenStack: The People and Process Side of Migration

Sep 15, 2025

Most VMware to OpenStack migration guides focus on technical differences, but successful transitions require organizational transformation. This comprehensive guide reveals why people and processes matter more than hypervisor specs, with practical steps for managing cultural change, skills development, and stakeholder buy-in during your OpenStack migration journey.