How to Deploy Confidential Computing Workloads on OpenMetal Infrastructure

Confidential computing on bare metal is a new approach to protecting sensitive data, not just when it's stored or transmitted, but while it's actively being used. With growing security concerns and stricter data regulations, more organizations are asking how to make this a practical part of their infrastructure.

In this blog, we’ll break down how you can use OpenMetal’s bare metal servers to support confidential workloads using Intel TDX. Whether you’re working with protected health data, training machine learning models, or handling financial transactions, OpenMetal gives you the tools and control to keep it secure.

For a broader look at the technology, see our overview on Confidential Computing Benefits and Use Cases.

What You Need for Confidential Computing Workloads

To build a confidential computing environment, you’ll need:

  • Hardware-level security features like Intel TDX (Trust Domain Extensions)
  • Trusted Execution Environments (TEEs) that isolate data in memory
  • Operating systems and hypervisors that support those features
  • Full control over the hardware and how it’s configured

Note: Intel® Software Guard Extensions (SGX), Intel® Trust Domain Extensions (TDX), AMD® SEV, and Arm® TrustZone are examples of hardware-based TEEs.
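Before launching trust domains, it helps to confirm that the host kernel's KVM module actually initialized TDX. Here is a minimal sketch; the sysfs path and the `tdx` module parameter reflect current TDX-enabled Linux kernels and should be treated as assumptions for other builds:

```python
from pathlib import Path

def tdx_host_enabled(param_path: str = "/sys/module/kvm_intel/parameters/tdx") -> bool:
    """Best-effort check that kvm_intel was loaded with TDX support.

    Assumes a TDX-enabled kernel that exposes a 'tdx' parameter on the
    kvm_intel module; on kernels without TDX support the file is absent.
    """
    p = Path(param_path)
    return p.is_file() and p.read_text().strip() in ("Y", "1")
```

Running this on a node that returns False is a signal to check the BIOS TDX settings and kernel version before going further.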

Why OpenMetal Is a Fit for Confidential Computing Workloads

OpenMetal gives teams the flexibility and access they need to deploy secure workloads:

  • Bare Metal Control: Full access to physical servers without shared tenants
  • Intel 5th Gen CPUs with TDX: Available on our Medium V4, Large V4, XL V4, and XXL V4 bare metal configurations. You can also add H100 GPUs to XXL V4 servers for workloads that need acceleration.
  • GPU Support via PCIe Passthrough: You can attach the H100 to Intel TDX-enabled VMs using PCIe passthrough.
  • Fast, Isolated Networking: Redundant 10Gbps with VLAN segmentation
  • Encrypted Storage: Attach encrypted volumes to workloads as needed
  • Open APIs and CLI: Automate secure deployments
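Because OpenMetal clouds are OpenStack-based, the pieces above can be scripted with the standard OpenStack CLI. A hedged sketch, assuming a sourced credentials file; the flavor, image, network, key, and encrypted volume type names are placeholders that will differ in your deployment:

```shell
# Create an encrypted Cinder volume (assumes your cloud has an
# encrypted volume type, here called "LUKS", already configured)
openstack volume create --size 100 --type LUKS secure-data-vol

# Boot a VM on a TDX-capable node (flavor/image names are placeholders)
openstack server create \
  --flavor tdx.large \
  --image ubuntu-24.04 \
  --network private-vlan-10 \
  --key-name my-key \
  confidential-vm-01

# Attach the encrypted volume to the running instance
openstack server add volume confidential-vm-01 secure-data-vol
```

The same calls are available through the OpenStack APIs if you prefer to drive deployments from Terraform, Ansible, or your own tooling.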

A Practical Guide to Deploying Confidential Workloads on OpenMetal

  1. Choose Intel TDX-Ready Hardware: Use OpenMetal’s Medium, Large, XL, or XXL configurations featuring 5th Gen Intel CPUs and optional H100 GPUs on the XXL. These servers come configured to launch TDX-enabled virtual machines.
  2. Deploy Virtual Machines with Intel TDX: Launch TDX-enabled VMs on supported nodes. These VMs benefit from memory and execution isolation from other workloads and the hypervisor.
  3. Attach GPUs with PCIe Passthrough (Optional): If your workload requires a GPU, the H100 can be passed through directly to your TDX-enabled VM using PCIe passthrough. This enables GPU acceleration while keeping CPU and memory data isolated.
  4. Secure Storage and Networking: Use encrypted volumes and VLAN-based network isolation to strengthen your setup. These security layers support the integrity and protection of your environment.
  5. Monitor and Validate: Deploy internal tools or third-party solutions to validate the state of your confidential computing environment. Monitoring configurations and access helps ensure ongoing protection and compliance.
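Step 5's validation can start inside the guest itself: Linux TDX guests advertise a `tdx_guest` CPU flag, so a quick sanity check is to confirm the flag is present before handling sensitive data. A minimal sketch; the flag name reflects current mainline-kernel behavior and is an assumption for other kernels:

```python
def running_in_tdx_guest(cpuinfo_text: str) -> bool:
    """Return True if the given /proc/cpuinfo contents advertise the
    'tdx_guest' CPU feature flag Linux exposes inside a trust domain."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "tdx_guest" in line.split(":", 1)[1].split()
    return False

# Typical use on a live guest:
# with open("/proc/cpuinfo") as f:
#     print(running_in_tdx_guest(f.read()))
```

This is only a first-pass check; full remote attestation of the TD's measurement is the stronger guarantee for compliance workflows.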

Common Use Cases

  • Healthcare: Analyze PHI while maintaining HIPAA compliance
  • AI/ML: Protect training data and proprietary models
  • Finance: Run encrypted models for fraud detection or trading
  • Web3/Crypto: Safeguard wallet data and blockchain metadata from exposure

Final Thoughts

Confidential computing is already running in real-world production environments. OpenMetal provides a reliable path to deploying secure infrastructure through Intel TDX-enabled hardware and GPU passthrough capabilities.

If you’re ready to explore confidential computing, contact our team to get started.
