How to Deploy Confidential Computing Workloads on OpenMetal Infrastructure

Confidential computing on bare metal is a new approach to protecting sensitive data—not just when it’s stored or transmitted, but while it’s actively being used. With growing security concerns and stricter data regulations, more organizations are asking how to make this a practical part of their infrastructure.

In this blog, we’ll break down how you can use OpenMetal’s bare metal servers to support confidential workloads using Intel TDX. Whether you’re working with protected health data, training machine learning models, or handling financial transactions, OpenMetal gives you the tools and control to keep it secure.

For a broader look at the technology, see our overview on Confidential Computing Benefits and Use Cases.

What You Need for Confidential Computing Workloads

To build a confidential computing environment, you’ll need:

  • Hardware-level security features like Intel TDX (Trust Domain Extensions)
  • Trusted Execution Environments (TEEs) that isolate data in memory
  • Operating systems and hypervisors that support those features
  • Full control over the hardware and how it’s configured

Note: Intel® Software Guard Extensions (SGX), Intel® Trust Domain Extensions (TDX), AMD® SEV, and Arm® TrustZone are examples of hardware-based TEEs.
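On Linux, you can get a first signal of whether a host meets these requirements directly from the kernel. The sketch below is read-only and safe to run anywhere: the `sgx` CPU flag appears on SGX-capable hosts, and the `kvm_intel` sysfs parameter shown is how recent kernels expose TDX host support (on unsupported hardware, each check simply reports "not detected").

```shell
# Read-only checks for TEE readiness on a Linux host.
# On non-TEE hardware each check prints "not detected" rather than failing.

# SGX capability shows up as a CPU flag:
grep -q -w sgx /proc/cpuinfo \
  && echo "SGX: CPU flag present" \
  || echo "SGX: not detected"

# TDX host support is exposed by the kvm_intel module on recent kernels
# (prints Y when enabled):
cat /sys/module/kvm_intel/parameters/tdx 2>/dev/null \
  || echo "TDX host: not detected"
```

These checks only confirm what the kernel sees; your hypervisor and guest OS still need TDX-aware builds, as noted above.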

Why OpenMetal Is a Fit for Confidential Computing Workloads

OpenMetal gives teams the flexibility and access they need to deploy secure workloads:

  • Bare Metal Control: Full root-level access to dedicated physical servers, with no other tenants on the hardware
  • Intel 5th Gen CPUs with TDX: Available on our Medium V4, Large V4, XL V4, and XXL V4 bare metal configurations. You can also add H100 GPUs to XXL V4 servers for workloads that need acceleration.
  • GPU Support via PCIe Passthrough: You can attach the H100 to Intel TDX-enabled VMs using PCIe passthrough.
  • Fast, Isolated Networking: Redundant 10Gbps networking with VLAN segmentation
  • Encrypted Storage: Attach encrypted volumes to workloads as needed
  • Open APIs and CLI: Automate secure deployments

A Practical Guide to Deploying Confidential Workloads on OpenMetal

  1. Choose Intel TDX-Ready Hardware: Use OpenMetal’s Medium, Large, XL, or XXL V4 configurations featuring 5th Gen Intel CPUs and optional H100 GPUs on the XXL. These servers come configured to launch TDX-enabled virtual machines.
  2. Deploy Virtual Machines with Intel TDX: Launch TDX-enabled VMs on supported nodes. These VMs benefit from memory and execution isolation from other workloads and the hypervisor.
  3. Attach GPUs with PCIe Passthrough (Optional): If your workload requires a GPU, the H100 can be passed through directly to your TDX-enabled VM using PCIe passthrough. This enables GPU acceleration while keeping CPU and memory data isolated.
  4. Secure Storage and Networking: Use encrypted volumes and VLAN-based network isolation to strengthen your setup. Encryption protects data at rest, and VLAN isolation limits lateral movement between workloads on the network.
  5. Monitor and Validate: Deploy internal tools or third-party solutions to validate the state of your confidential computing environment. Monitoring configurations and access helps ensure ongoing protection and compliance.
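Steps 2–4 can be scripted with the standard OpenStack CLI. Treat the sketch below as illustrative only: the flavor, image, network, and volume names are made up, the `h100` PCI alias assumes your operator has defined one, and the `hw:tdx` extra spec is an assumption to confirm with OpenMetal support for your deployment. The Cinder encryption options, by contrast, are standard flags.

```shell
# Illustrative provisioning sketch -- names and the hw:tdx extra spec
# are assumptions; verify the exact properties for your cloud.

# Step 2: a flavor for TDX-enabled VMs, with an H100 passed through
# (pci_passthrough:alias assumes an operator-defined "h100" PCI alias).
openstack flavor create tdx.gpu --vcpus 16 --ram 65536 --disk 200
openstack flavor set tdx.gpu \
  --property hw:tdx=true \
  --property pci_passthrough:alias=h100:1

# Step 4: an encrypted (LUKS) volume type, plus a volume for sensitive data.
openstack volume type create --encryption-provider luks \
  --encryption-cipher aes-xts-plain64 --encryption-key-size 256 \
  --encryption-control-location front-end luks-encrypted
openstack volume create --type luks-encrypted --size 500 phi-data

# Boot the VM on an isolated VLAN network and attach the encrypted volume.
openstack server create --flavor tdx.gpu --image ubuntu-24.04 \
  --network secure-vlan tdx-vm-01
openstack server add volume tdx-vm-01 phi-data
```

Encapsulating these commands in your automation (Terraform, Ansible, or plain scripts against the OpenStack APIs) keeps secure deployments repeatable.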
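For step 5, a quick in-guest spot check is possible before layering on full monitoring: current Linux kernels expose a `tdx_guest` CPU flag inside TDX-protected VMs. This is a read-only sanity check, not a substitute for cryptographic remote attestation.

```shell
# Run inside the VM: confirm the guest actually booted under TDX.
# On a non-TDX machine this reports "not detected" rather than failing.
grep -q -w tdx_guest /proc/cpuinfo \
  && echo "TDX guest: active" \
  || echo "TDX guest: not detected"

# The kernel log also records TDX initialization on protected guests:
dmesg 2>/dev/null | grep -i tdx || true
```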

Common Use Cases

  • Healthcare: Analyze PHI while maintaining HIPAA compliance
  • AI/ML: Protect training data and proprietary models
  • Finance: Run encrypted models for fraud detection or trading
  • Web3/Crypto: Safeguard wallet data and blockchain metadata from exposure

Final Thoughts

Confidential computing workloads are already making an impact across real-world production environments. OpenMetal provides a reliable path to deploying secure infrastructure through Intel TDX-enabled hardware and GPU passthrough capabilities.

If you’re ready to explore confidential computing, contact our team to get started.

Read More on the OpenMetal Blog

Enabling Intel SGX and TDX on OpenMetal v4 Servers: Hardware Requirements

Learn how to enable Intel SGX and TDX on OpenMetal’s Medium, Large, XL, and XXL v4 servers. This guide covers required memory configurations (8 DIMMs per CPU and 1TB RAM), hardware prerequisites, and a detailed cost comparison for provisioning SGX/TDX-ready infrastructure.

10 Hugging Face Model Types and Domains that are Perfect for Private AI Infrastructure

A quick list of some of the most popular Hugging Face models / domain types that could benefit from being hosted on private AI infrastructure.

OpenMetal Enterprise Storage Tier Offerings and Architecture

Discover how OpenMetal delivers performance and flexibility through tiered cloud storage options. Learn the pros and use cases of direct-attached NVMe, Ceph-based high availability block storage, and scalable, low-cost erasure-coded object storage—all integrated into OpenStack.

Storage Server – Large V4 – 240TB HDD, 25.6TB NVME – Micron MAX or Pro, 5th Gen Intel® Xeon Silver 4510

Discover the power of the OpenMetal Large v4 Storage Server with dual Intel Xeon Silver 4510 CPUs, 720TB HDD storage, 76.8TB NVMe flash, and 512GB DDR5 RAM. Perfect for building high-performance, scalable, and resilient storage clusters for cloud, AI/ML, and enterprise data lakes.

Use Case: Running Blockchain Infrastructure on Bare Metal for Crypto Trading and Validator Workloads

A crypto trading firm deployed blockchain infrastructure on bare metal to run Solana validator workloads with low latency and full control.

Web3 Use Case: Blockchain Infrastructure on Bare Metal and Ceph Storage

A Web3 team deployed blockchain infrastructure on bare metal and Ceph storage to scale decentralized workloads and cut storage costs.

Announcing the launch of Private AI Labs Program – Up to $50K in infrastructure usage credits

With the new OpenMetal Private AI Labs program, you can access private GPU servers and clusters tailored for your AI projects. By joining, you’ll receive up to $50,000 in usage credits to test, build, and scale your AI workloads.