How to Deploy Confidential Computing Workloads on OpenMetal Infrastructure

Running confidential computing workloads on bare metal is a new approach to protecting sensitive data: not just when it’s stored or transmitted, but while it’s actively being used. With growing security concerns and stricter data regulations, more organizations are asking how to make this a practical part of their infrastructure.

In this post, we’ll break down how to use OpenMetal’s bare metal servers to run confidential workloads with Intel TDX. Whether you’re working with protected health data, training machine learning models, or handling financial transactions, OpenMetal gives you the tools and control to keep that data secure.

For a broader look at the technology, see our overview on Confidential Computing Benefits and Use Cases.

What You Need for Confidential Computing Workloads

To build a confidential computing environment, you’ll need:

  • Hardware-level security features like Intel TDX (Trust Domain Extensions)
  • Trusted Execution Environments (TEEs) that isolate data in memory
  • Operating systems and hypervisors that support those features
  • Full control over the hardware and how it’s configured

Note: Intel® Software Guard Extensions (SGX), Intel® Trust Domain Extensions (TDX), AMD® Secure Encrypted Virtualization (SEV), and Arm® TrustZone are all examples of hardware-based TEEs.
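Before going further, it’s worth confirming that a given system actually exposes these features to the operating system. The short Python sketch below reads the CPU feature flags the Linux kernel reports; the exact flag names (for example, "tdx_guest" inside a TDX trust domain) vary by kernel version, so treat this as a quick sanity check rather than a definitive capability probe. Host-side TDX support is usually easier to confirm from kernel log messages.

```python
"""Quick sanity check for TEE-related CPU flags on a Linux system."""

def read_cpu_flags(path: str = "/proc/cpuinfo") -> set:
    """Collect the CPU feature flags reported by the kernel."""
    flags = set()
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return flags

if __name__ == "__main__":
    flags = read_cpu_flags()
    # "tdx_guest" is what recent kernels report inside a TDX trust domain;
    # "sgx" indicates SGX support. Flag names depend on kernel version.
    for feature in ("tdx_guest", "sgx"):
        status = "present" if feature in flags else "not reported"
        print(f"{feature}: {status}")
```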

Why OpenMetal Is a Fit for Confidential Computing Workloads

OpenMetal gives teams the flexibility and access they need to deploy secure workloads:

  • Bare Metal Control: Full access to dedicated, single-tenant physical servers
  • Intel 5th Gen CPUs with TDX: Available on our Medium V4, Large V4, XL V4, and XXL V4 bare metal configurations. You can also add H100 GPUs to XXL V4 servers for workloads that need acceleration.
  • GPU Support via PCIe Passthrough: Attach an H100 directly to Intel TDX-enabled VMs
  • Fast, Isolated Networking: Redundant 10 Gbps networking with VLAN segmentation
  • Encrypted Storage: Attach encrypted volumes to workloads as needed
  • Open APIs and CLI: Automate secure deployments (see the sketch after this list)
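If your deployment runs on OpenMetal’s OpenStack-based private cloud, routine tasks like finding a flavor that maps to TDX-capable hardware can be scripted with openstacksdk. This is a minimal sketch: the clouds.yaml profile name ("openmetal") is a placeholder, and any naming convention for TDX-ready flavors is up to you.

```python
"""List available compute flavors on an OpenStack-based OpenMetal cloud."""
import openstack  # pip install openstacksdk

# Credentials are read from a clouds.yaml entry; "openmetal" is a placeholder profile name.
conn = openstack.connect(cloud="openmetal")

# Print every flavor so you can pick one that maps to TDX-capable hardware.
for flavor in conn.compute.flavors():
    print(f"{flavor.name}: {flavor.vcpus} vCPU / {flavor.ram} MB RAM")
```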

A Practical Guide to Deploying Confidential Workloads on OpenMetal

  1. Choose Intel TDX-Ready Hardware: Use OpenMetal’s Medium V4, Large V4, XL V4, or XXL V4 configurations featuring 5th Gen Intel CPUs, with optional H100 GPUs on the XXL V4. These servers come configured to launch TDX-enabled virtual machines.
  2. Deploy Virtual Machines with Intel TDX: Launch TDX-enabled VMs on supported nodes (see the automation sketch after this list). These VMs get memory and execution isolation from other workloads and from the hypervisor itself.
  3. Attach GPUs with PCIe Passthrough (Optional): If your workload requires a GPU, the H100 can be passed through directly to your TDX-enabled VM using PCIe passthrough. This enables GPU acceleration while keeping CPU and memory data isolated.
  4. Secure Storage and Networking: Use encrypted volumes to protect data at rest and VLAN-based network isolation to limit which systems can reach your workloads. These layers complement the in-use protection that TDX provides.
  5. Monitor and Validate: Use internal tools or third-party solutions to validate the state of your confidential computing environment, for example by checking TDX attestation evidence and auditing configuration and access. This helps maintain ongoing protection and compliance.
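To tie steps 2 and 4 together, here is a minimal openstacksdk sketch that boots a guest on a TDX-capable flavor and attaches an encrypted volume, assuming an OpenStack-based OpenMetal private cloud. Every name in it (the clouds.yaml profile, image, flavor, network, and volume type) is a placeholder to replace with what’s defined in your own environment, and it assumes your cloud already has a volume type backed by encryption (for example, LUKS). GPU passthrough for step 3 is typically exposed the same way, through a flavor that maps to the passed-through device.

```python
"""Boot a TDX-enabled guest and attach an encrypted volume (sketch only).

All names below are placeholders; the flavor must be one that launches
TDX-enabled guests, and the volume type must be configured for encryption
(e.g. LUKS) on your cloud.
"""
import openstack  # pip install openstacksdk

conn = openstack.connect(cloud="openmetal")  # placeholder clouds.yaml profile

# Step 2: boot the guest on a TDX-capable flavor (names are illustrative).
server = conn.create_server(
    name="confidential-workload-01",
    image="ubuntu-24.04",          # placeholder image
    flavor="xxl-v4-tdx",           # placeholder TDX-enabled flavor
    network="private-vlan",        # placeholder isolated network
    wait=True,
)

# Step 4: create a volume whose type is backed by encryption, then attach it.
volume = conn.create_volume(
    size=100,                      # size in GB
    name="confidential-data",
    volume_type="encrypted-luks",  # placeholder encrypted volume type
    wait=True,
)
conn.attach_volume(server, volume, wait=True)

print(f"{server.name} is {server.status} with an encrypted volume attached")
```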

Common Use Cases

  • Healthcare: Analyze PHI while maintaining HIPAA compliance
  • AI/ML: Protect training data and proprietary models
  • Finance: Run encrypted models for fraud detection or trading
  • Web3/Crypto: Safeguard wallet data and blockchain metadata from exposure

Final Thoughts 

Confidential computing is already running in real-world production environments. OpenMetal provides a reliable path to deploying it, with Intel TDX-enabled hardware, GPU passthrough, and full control over dedicated bare metal.

If you’re ready to explore confidential computing, contact our team to get started.
