Confidential Computing for AI Training: How to Protect Models and Data on Bare Metal

Training AI models often involves sensitive data and valuable intellectual property. Whether you’re building proprietary machine learning models or analyzing confidential datasets, keeping that information secure throughout the training process is essential. Confidential computing protects data at every stage: at rest, in transit, and, crucially, in use.

This post explores how you can use confidential computing—specifically Intel TDX and bare metal infrastructure—to secure AI training workloads. If you already know the basics, check out OpenMetal’s blog on practical deployments or on balancing security and speed.

Why AI Models and Training Data Need Protection

AI models are incredibly valuable—often reflecting years of development and unique intellectual property. When businesses train these models, they often rely on proprietary data that might include sensitive personal information, competitive insights, or financial details. This type of data attracts attackers, which is why teams must protect it throughout the entire AI lifecycle.

Even if encryption is used during storage or transmission, a major gap remains: what happens when the data is being processed? In traditional virtualized environments, it’s possible for insiders or misconfigured systems to expose active memory. That’s where confidential computing plays a key role—protecting the training process itself.

How Confidential Computing Helps

Confidential computing creates a trusted execution environment (TEE) around the workload. This isolates it from the rest of the system—even the hypervisor and root users. With Intel TDX, which is supported by OpenMetal’s infrastructure, you can run secure virtual machines that shield your AI models and data while in use.

This is especially important for training large language models, recommendation systems, or predictive algorithms that rely on confidential or high-value data. By using TEEs, organizations gain confidence that the data will remain protected throughout the process—even if they’re deploying in a shared or multi-tenant environment.
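As a quick sanity check, here is a minimal sketch you could run inside a VM to confirm it is actually executing as a TDX trust domain. It assumes a Linux guest kernel with TDX guest support that advertises a "tdx_guest" flag in /proc/cpuinfo; the flag name can vary by kernel version, and this check is a convenience, not a substitute for remote attestation.

```python
# Minimal sketch: verify from inside a VM that it is running as an Intel TDX
# trust domain. Assumes a Linux guest kernel with TDX guest support, which
# advertises a "tdx_guest" flag in /proc/cpuinfo (the flag name may differ
# across kernel versions).

def running_in_tdx_guest(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags") and "tdx_guest" in line.split():
                return True
    return False

if __name__ == "__main__":
    if running_in_tdx_guest():
        print("TDX guest detected: memory stays encrypted while in use.")
    else:
        print("No tdx_guest flag found: this VM is not running in a trust domain.")
```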

What You Need for Confidential Computing in AI Training

To successfully run confidential computing workloads for AI training, your infrastructure must meet several key requirements—starting with the right hardware. At the foundation are Intel 5th Gen Xeon CPUs with Intel Trust Domain Extensions (TDX). These processors enable hardware-based memory encryption and ensure that sensitive data used in training models stays protected, even while in use.

At OpenMetal, both our XL V4 and XXL V4 bare metal servers are equipped with TDX-capable CPUs. This gives you the ability to isolate memory and workloads at the hardware level, which is essential for truly confidential computing environments.
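Before launching trust-domain VMs, it is worth confirming that the host kernel actually has TDX enabled. The sketch below checks the kvm_intel module parameter that recent TDX-enabled kernels expose; the sysfs path is an assumption, so adjust it for your kernel build.

```python
# Minimal sketch: confirm on the bare metal host that KVM has TDX enabled before
# trying to launch trust-domain VMs. The sysfs path below is what recent
# TDX-enabled kernels expose for the kvm_intel module; adjust for your build.

from pathlib import Path

def host_tdx_enabled() -> bool:
    param = Path("/sys/module/kvm_intel/parameters/tdx")
    return param.exists() and param.read_text().strip() in ("Y", "1")

if __name__ == "__main__":
    print("Host TDX support:", "enabled" if host_tdx_enabled() else "not enabled")
```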

Once you have the hardware in place, you’ll need a way to create secure virtual machines. With a TDX-aware KVM/QEMU stack, you can launch TDX-enabled VMs that keep guest memory fully isolated from the host system and other tenants.
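To make that concrete, here is a minimal sketch that assembles a QEMU command line for a TDX-enabled VM. The tdx-guest object and confidential-guest-support machine option follow QEMU's confidential-guest design, but exact flags vary between QEMU releases and Intel's TDX-enabled builds, and the firmware and disk paths are placeholders, so treat this as a starting point rather than a recipe.

```python
# Minimal sketch: assemble a QEMU command line that launches a TDX-enabled VM on
# a KVM host. The tdx-guest object and confidential-guest-support option follow
# QEMU's confidential-guest design, but exact flags vary across QEMU releases and
# Intel's TDX-enabled builds. Firmware and disk paths are placeholders.

import subprocess

TDVF_FIRMWARE = "/usr/share/qemu/OVMF_TDX.fd"     # placeholder TDX firmware (TDVF)
DISK_IMAGE = "/var/lib/images/training-vm.qcow2"  # placeholder guest image

cmd = [
    "qemu-system-x86_64",
    "-accel", "kvm",
    "-cpu", "host",
    "-smp", "32",
    "-m", "256G",
    # Run the guest inside a TDX trust domain instead of a plain VM.
    "-object", "tdx-guest,id=tdx0",
    "-machine", "q35,kernel-irqchip=split,confidential-guest-support=tdx0",
    "-bios", TDVF_FIRMWARE,
    "-drive", f"file={DISK_IMAGE},if=virtio,format=qcow2",
    "-nographic",
]

subprocess.run(cmd, check=True)
```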

AI workloads also generate and process huge volumes of data, so fast, secure storage is a must. With encrypted NVMe storage, OpenMetal keeps your training data protected even if a drive is lost or accessed without authorization, while still delivering the high-speed I/O that training pipelines demand.
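As a rough illustration of what encryption at rest looks like in practice, the sketch below uses cryptsetup to wrap an NVMe device in a LUKS2 container and mount it for training data. The device path, mapper name, and mount point are placeholders for your environment, and the luksFormat step is destructive, so this is an outline of the workflow rather than a finished script.

```python
# Minimal sketch: wrap an NVMe device in a LUKS2 container so training data is
# encrypted at rest. Device path, mapper name, and mount point are placeholders;
# cryptsetup prompts for a passphrase, and luksFormat destroys existing data.

import subprocess

DEVICE = "/dev/nvme0n1"             # placeholder NVMe device
MAPPER_NAME = "training_data"       # name of the decrypted mapping
MOUNT_POINT = "/mnt/training-data"  # placeholder mount point

def run(args):
    subprocess.run(args, check=True)

# One-time setup: format the device, open the container, create a filesystem.
run(["cryptsetup", "luksFormat", "--type", "luks2", DEVICE])
run(["cryptsetup", "open", DEVICE, MAPPER_NAME])
run(["mkfs.ext4", f"/dev/mapper/{MAPPER_NAME}"])

# On each boot thereafter: open the container again, then mount it for training jobs.
run(["mkdir", "-p", MOUNT_POINT])
run(["mount", f"/dev/mapper/{MAPPER_NAME}", MOUNT_POINT])
```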

For those who require GPU acceleration during training, OpenMetal offers H100 GPUs that can be attached to TDX-enabled virtual machines using PCIe passthrough—but this configuration is available only on the XXL V4 bare metal server. This server provides the right balance of compute power, memory capacity, and hardware support to run both Intel TDX and GPU passthrough simultaneously.

This setup handles demanding AI workloads like deep learning exceptionally well, delivering both security and performance at scale.
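To show how the passthrough side fits together, the sketch below binds a GPU at a placeholder PCI address to the vfio-pci driver and shows the device flag you would append to the QEMU command from the earlier launch sketch. It assumes the vfio-pci module is loaded and that your kernel supports the driver_override mechanism; adapt it to your host.

```python
# Minimal sketch: bind an H100 at a placeholder PCI address to the vfio-pci
# driver, then pass it to the TDX VM with a vfio-pci device flag appended to the
# QEMU command from the earlier launch sketch. Assumes the vfio-pci module is
# loaded and the kernel supports the driver_override mechanism.

from pathlib import Path

GPU_PCI_ADDR = "0000:17:00.0"  # placeholder; find the real address with lspci

def bind_to_vfio(pci_addr: str) -> None:
    dev = Path(f"/sys/bus/pci/devices/{pci_addr}")
    # Detach the GPU from whatever driver currently owns it (nvidia, nouveau, ...).
    unbind = dev / "driver" / "unbind"
    if unbind.exists():
        unbind.write_text(pci_addr)
    # Force vfio-pci to claim this specific device on the next probe.
    (dev / "driver_override").write_text("vfio-pci")
    Path("/sys/bus/pci/drivers_probe").write_text(pci_addr)

bind_to_vfio(GPU_PCI_ADDR)

# Appended to the QEMU command list from the earlier launch sketch:
gpu_flags = ["-device", f"vfio-pci,host={GPU_PCI_ADDR}"]
```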

Lastly, network isolation is critical—especially for customers dealing with compliance or privacy regulations. OpenMetal provides dedicated VLANs to separate your traffic from other workloads, helping to reduce risk and maintain a clean, segmented network environment.
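As a simple illustration, the sketch below creates a VLAN sub-interface on the host so VM traffic stays on the dedicated segment. The parent interface name, VLAN ID, and address are placeholders for whatever is assigned to your deployment.

```python
# Minimal sketch: create a VLAN sub-interface on the host so VM traffic stays on
# the dedicated VLAN assigned to your deployment. Parent interface, VLAN ID, and
# address are placeholders for your environment.

import subprocess

PARENT_IFACE = "bond0"         # placeholder parent interface
VLAN_ID = 2001                 # placeholder VLAN ID for the isolated segment
VLAN_IFACE = f"{PARENT_IFACE}.{VLAN_ID}"
ADDRESS = "10.10.20.5/24"      # placeholder address on the private segment

def run(args):
    subprocess.run(args, check=True)

run(["ip", "link", "add", "link", PARENT_IFACE, "name", VLAN_IFACE,
     "type", "vlan", "id", str(VLAN_ID)])
run(["ip", "addr", "add", ADDRESS, "dev", VLAN_IFACE])
run(["ip", "link", "set", VLAN_IFACE, "up"])
```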

Example Use Case

An OpenMetal customer in the blockchain space provides a helpful comparison. Their platform manages validator workloads and real-time transaction indexing. While they’re not training AI models, their infrastructure has similar security and performance needs: consistent compute, strict data separation, and hardware-level trust.

They use OpenMetal’s XL V4 servers with Intel TDX to launch secure VMs, isolate data with VLAN segmentation, and use encrypted volumes for sensitive blockchain metadata. The same environment is ideal for AI teams training proprietary models, especially if those models support financial, medical, or compliance-focused products.

Final Thoughts

Confidential computing is no longer experimental; it’s ready for production. If you’re training AI models with proprietary data, Intel TDX on OpenMetal’s bare metal servers gives you the security and performance you need, and a solid foundation to start from.

Contact us to learn how to start building your confidential AI training environment today.

Read More on the OpenMetal Blog

EBITDA Impact of Cloud Repatriation: Why PE Firms Are Moving Portfolio SaaS Back to Private Cloud

Private equity firms are systematically implementing cloud repatriation strategies across SaaS portfolios to convert unpredictable cloud costs into fixed expenses, typically reducing infrastructure spending by 30-50% while improving EBITDA forecasting accuracy. This strategic shift addresses the margin compression caused by usage-based cloud billing and creates sustainable competitive advantages for portfolio companies.

The Return of Bare Metal? Insights From The Cloudcast

The cloud landscape is shifting, and infrastructure leaders are taking notice. In a recent episode of The Cloudcast, OpenMetal’s founder and president Todd Robinson sat down with hosts Aaron Delp and Brian Gracely to explore why bare metal and private cloud are experiencing a comeback.

Confidential Computing for Healthcare AI: Training Models on PHI Without Public Cloud Risk

Healthcare organizations can now train AI models on sensitive patient data without exposing it to public cloud vulnerabilities. Confidential computing creates hardware-protected environments where PHI remains secure during processing, enabling breakthrough AI development while maintaining HIPAA compliance and reducing regulatory overhead.

Why CFOs Should Partner with Operating Partners on Cloud Spend Reduction

Cloud costs eating your EBITDA? CFOs and Operating Partners need strategic alignment to tackle unpredictable public cloud pricing. Discover how fixed pricing models deliver 20-30% savings and financial predictability for PE-backed SaaS companies.

Confidential Computing for High-Performance Workloads: Balancing Security, GPUs, and Speed

Learn how confidential computing enables secure, high-performance workloads by combining Intel TDX hardware isolation with GPU acceleration. Explore real-world applications in AI training, blockchain validation, and financial analytics while maintaining data confidentiality and computational speed.

Building a Private HPC Cluster for Scientific and Financial Modeling

Discover how to build private HPC clusters that deliver the performance, control, and cost predictability your scientific simulations and financial models demand. Learn about hardware configurations, network architecture, and deployment strategies that outperform traditional cloud options.

Using Greenplum to Build a Massively Parallel Processing (MPP) Data Warehouse on OpenMetal

Learn how to build a massively parallel processing data warehouse using Greenplum on OpenMetal’s bare metal infrastructure. Discover architecture best practices, performance advantages, and cost optimization strategies for large-scale analytical workloads that outperform traditional public cloud deployments.

Scaling Blockchain Startups in a Portfolio: The Infrastructure Cost Curve

Discover how portfolio managers are transforming blockchain startup growth with predictable infrastructure costs. OpenMetal’s fixed-cost bare metal eliminates unpredictable cloud expenses, delivering 30-60% savings when monthly spend hits $20,000. Learn the infrastructure strategy that’s reshaping blockchain investment returns.