Confidential Computing for AI Training: How to Protect Models and Data on Bare Metal

Training AI models often involves sensitive data and valuable intellectual property. Whether you’re building proprietary machine learning models or analyzing confidential datasets, keeping that information secure throughout the training process is essential. Confidential computing protects data at every stage: at rest, in transit, and, critically, while it is in use.

This post explores how you can use confidential computing—specifically Intel TDX and bare metal infrastructure—to secure AI training workloads. If you already know the basics, check out OpenMetal’s blog on practical deployments or on balancing security and speed.

Why AI Models and Training Data Need Protection

AI models are incredibly valuable—often reflecting years of development and unique intellectual property. When businesses train these models, they often rely on proprietary data that might include sensitive personal information, competitive insights, or financial details. This type of data attracts attackers, which is why teams must protect it throughout the entire AI lifecycle.

Even when data is encrypted at rest and in transit, a major gap remains: what happens while it is being processed? In traditional virtualized environments, insiders or misconfigured systems can expose active memory. That is where confidential computing plays a key role, protecting the training process itself.

How Confidential Computing Helps

Confidential computing creates a trusted execution environment (TEE) around the workload. This isolates it from the rest of the system—even the hypervisor and root users. With Intel TDX, which is supported by OpenMetal’s infrastructure, you can run secure virtual machines that shield your AI models and data while in use.

This is especially important for training large language models, recommendation systems, or predictive algorithms that rely on confidential or high-value data. By using TEEs, organizations gain confidence that the data will remain protected throughout the process—even if they’re deploying in a shared or multi-tenant environment.
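That confidence comes from remote attestation: the trust domain produces a signed measurement of exactly what is running, and the relying party checks it against an expected value before releasing data or keys. The sketch below illustrates only the comparison step, with hypothetical boot artifacts; a real TDX flow verifies a quote rooted in Intel's attestation infrastructure rather than recomputing hashes locally.

```python
import hashlib
import hmac

def expected_measurement(kernel_image: bytes, initrd: bytes) -> str:
    """Compute the measurement we expect the trust domain to report.
    Simplified stand-in for TDX's MRTD/RTMR registers (SHA-384)."""
    h = hashlib.sha384()
    h.update(kernel_image)
    h.update(initrd)
    return h.hexdigest()

def verify_attestation(reported: str, expected: str) -> bool:
    """Constant-time comparison of reported vs. expected measurement."""
    return hmac.compare_digest(reported, expected)

# Hypothetical boot artifacts, for illustration only.
kernel = b"vmlinuz-tdx-guest"
initrd = b"initrd-tdx-guest"
expected = expected_measurement(kernel, initrd)

# A genuine guest matches; a tampered one does not.
assert verify_attestation(expected, expected)
assert not verify_attestation(expected_measurement(b"tampered", initrd), expected)
```

Only after this check succeeds would an AI team hand the guest its decryption keys or training data.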

What You Need for Confidential Computing in AI Training

To successfully run confidential computing workloads for AI training, your infrastructure must meet several key requirements—starting with the right hardware. At the foundation are Intel 5th Gen Xeon CPUs with Intel Trust Domain Extensions (TDX). These processors enable hardware-based memory encryption and ensure that sensitive data used in training models stays protected, even while in use.

At OpenMetal, both our XL V4 and XXL V4 bare metal servers are equipped with TDX-capable CPUs. This gives you the ability to isolate memory and workloads at the hardware level, which is essential for truly confidential computing environments.
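Before launching guests, it is worth confirming that the host kernel actually exposes TDX to KVM. On recent kernels with TDX support, this is typically reported through a kvm_intel module parameter; the sysfs path below is an assumption to verify against your distribution.

```python
from pathlib import Path

# Assumed sysfs location on TDX-enabled kernels; confirm for your distro.
TDX_PARAM = Path("/sys/module/kvm_intel/parameters/tdx")

def tdx_enabled(raw: str) -> bool:
    """Interpret the kvm_intel 'tdx' module parameter value ('Y'/'N')."""
    return raw.strip().upper() in ("Y", "1")

def host_supports_tdx(param_path: Path = TDX_PARAM) -> bool:
    """True if the host kernel reports TDX support for KVM guests."""
    try:
        return tdx_enabled(param_path.read_text())
    except OSError:
        # Parameter absent: kernel built without TDX support.
        return False
```

On a TDX-capable OpenMetal server, `host_supports_tdx()` should return True once the feature is enabled in firmware and the kernel.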

Once you have the hardware in place, you’ll need a way to create secure virtual machines. Using hypervisors like KVM or QEMU, which are compatible with Intel TDX, you can launch TDX-enabled VMs that keep data fully isolated from the host system and other tenants.
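As a sketch of what such a launch looks like, the helper below assembles a QEMU command line for a TDX guest. The flag names follow QEMU's confidential-guest-support interface, but TDX support has moved between patched trees and upstream releases, so confirm the exact syntax against your QEMU build; the disk image path and sizing are placeholders.

```python
def tdx_guest_cmd(image: str, memory_gb: int = 32, vcpus: int = 8) -> list[str]:
    """Assemble a qemu-system command line for a TDX guest (sketch)."""
    return [
        "qemu-system-x86_64",
        "-accel", "kvm",
        # Declare a TDX guest object and attach it to the machine type.
        "-object", "tdx-guest,id=tdx0",
        "-machine", "q35,kernel-irqchip=split,confidential-guest-support=tdx0",
        "-cpu", "host",
        "-smp", str(vcpus),
        "-m", f"{memory_gb}G",
        "-drive", f"file={image},if=virtio,format=qcow2",
        "-nographic",
    ]

# Hypothetical image path for illustration.
cmd = tdx_guest_cmd("/var/lib/images/training-guest.qcow2")
```

The key difference from an ordinary KVM guest is the `tdx-guest` object and the machine's `confidential-guest-support` property, which place the VM's memory under hardware encryption.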

AI workloads also generate and process huge volumes of data, so fast, secure storage is a must. With encrypted NVMe storage, OpenMetal ensures your training data stays protected while delivering high-speed performance—even in cases of drive loss or unauthorized access.
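A common way to get encryption at rest on an NVMe namespace is LUKS2 via cryptsetup. The helper below only builds the command sequence rather than running it; the device path and mapper name are placeholders, and the commands must be run as root.

```python
def luks_setup_cmds(device: str, name: str) -> list[list[str]]:
    """Command sequence to encrypt an NVMe device with LUKS2 and mount-ready it.
    Device path and mapper name are placeholders; run as root."""
    return [
        # One-time format; LUKS2 defaults to AES-XTS for the data cipher.
        ["cryptsetup", "luksFormat", "--type", "luks2", device],
        # Unlock: the plaintext view appears at /dev/mapper/<name>.
        ["cryptsetup", "open", device, name],
        # Create a filesystem for training data on the decrypted mapping.
        ["mkfs.ext4", f"/dev/mapper/{name}"],
    ]

cmds = luks_setup_cmds("/dev/nvme0n1", "training-data")
```

If a drive is lost or pulled, only LUKS ciphertext is recoverable from it; the data is readable only while the volume is unlocked inside your environment.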

For those who require GPU acceleration during training, OpenMetal offers H100 GPUs that can be attached to TDX-enabled virtual machines using PCIe passthrough—but this configuration is available only on the XXL V4 bare metal server. This server provides the right balance of compute power, memory capacity, and hardware support to run both Intel TDX and GPU passthrough simultaneously.
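Mechanically, PCIe passthrough means unbinding the GPU from its host driver, binding it to vfio-pci, and handing the device to QEMU. The sketch below shows the guest-side argument; the PCI address is hypothetical, and the host-side steps are shown only as comments.

```python
def vfio_passthrough_args(pci_addr: str) -> list[str]:
    """QEMU arguments to pass a GPU through to a guest via VFIO.
    The GPU must first be unbound from its host driver and bound
    to vfio-pci (see the comments below); pci_addr is a placeholder."""
    return ["-device", f"vfio-pci,host={pci_addr}"]

# Host-side preparation (run as root; address and IDs are hypothetical):
#   echo 0000:17:00.0 > /sys/bus/pci/devices/0000:17:00.0/driver/unbind
#   bind via vfio-pci's new_id interface using the GPU's vendor:device IDs
args = vfio_passthrough_args("0000:17:00.0")
```

These arguments would be appended to the TDX guest's QEMU command line on an XXL V4 server, which is the configuration that supports TDX and H100 passthrough together.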

This setup handles demanding AI workloads like deep learning exceptionally well, delivering both security and performance at scale.

Lastly, network isolation is critical—especially for customers dealing with compliance or privacy regulations. OpenMetal provides dedicated VLANs to separate your traffic from other workloads, helping to reduce risk and maintain a clean, segmented network environment.

Example Use Case

An OpenMetal customer in the blockchain space provides a helpful comparison. Their platform manages validator workloads and real-time transaction indexing. While they’re not training AI models, their infrastructure has similar security and performance needs: consistent compute, strict data separation, and hardware-level trust.

They use OpenMetal’s XL V4 servers with Intel TDX to launch secure VMs, isolate data with VLAN segmentation, and use encrypted volumes for sensitive blockchain metadata. The same environment is ideal for AI teams training proprietary models, especially if those models support financial, medical, or compliance-focused products.

Final Thoughts

Confidential computing is no longer experimental; it’s ready for production. If you’re training AI models with proprietary data, Intel TDX on OpenMetal’s bare metal servers gives you the security and performance you need, and a secure foundation to start from today.

Contact us to learn how to start building your confidential AI training environment today.
