Most EU AI Act coverage focuses on legal classification, risk categories, and documentation requirements. That’s important, but there’s a layer that gets much less attention: the infrastructure your AI workloads run on determines whether you can actually meet those requirements.
The EU AI Act has been generating compliance conversations in legal and policy circles since it entered into force in August 2024. For most organizations, the focus has been on classifying AI systems, understanding risk tiers, and building governance documentation. Those are real obligations. But the infrastructure question, specifically where your AI workloads run and what controls exist at the hardware level, has been largely absent from the conversation.
That’s a problem, because several of the Act’s core requirements for high-risk AI systems can’t be satisfied by policy documents alone. They require infrastructure that can produce verifiable evidence, maintain data within defined boundaries, and support the kind of auditability that regulators will actually ask for.
If you’re running AI workloads that serve EU users, here’s what the Act requires at the infrastructure level, and where the gaps tend to appear.
What the EU AI Act Actually Requires at the Infrastructure Level
The EU AI Act establishes a risk-based framework. Most AI applications fall into the minimal or limited risk categories and face light obligations. High-risk systems, which include AI used in hiring, credit scoring, healthcare, critical infrastructure, and certain biometric applications, face substantially more demanding requirements.
For high-risk systems, four requirements translate directly into infrastructure decisions.
Data governance and lineage
Article 10 of the Act requires that training, validation, and testing datasets be subject to documented data governance practices. In practical terms, this means knowing exactly where your training data came from, how it was processed, what transformations were applied, and whether it meets quality and representativeness standards. It requires infrastructure that can capture and retain data lineage information in a way that auditors can verify.
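To make that concrete, here's a minimal sketch of what a lineage record might look like, in Python. The schema and the SHA-256 fingerprinting approach are illustrative assumptions, not a format the Act prescribes:

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class DatasetLineageRecord:
    """Illustrative lineage record for one version of a training dataset."""
    dataset_name: str
    version: str
    source_uri: str                      # where the raw data came from
    collected_at: str                    # ISO 8601 timestamp
    transformations: list[str] = field(default_factory=list)  # ordered processing steps
    content_sha256: str = ""             # fingerprint of the materialized dataset

    def fingerprint(self, data: bytes) -> None:
        # Hash the materialized dataset so an auditor can verify this record
        # refers to exactly these bytes, not a later revision.
        self.content_sha256 = hashlib.sha256(data).hexdigest()

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

record = DatasetLineageRecord(
    dataset_name="loan-applications",
    version="2025-06-01.3",
    source_uri="s3://training-data-eu/loan-applications/raw/",
    collected_at=datetime.now(timezone.utc).isoformat(),
    transformations=["drop_pii_columns", "normalize_currency", "dedupe_applicants"],
)
record.fingerprint(b"...dataset bytes...")
print(record.to_json())
```

The fingerprint is the important part: it ties the documentation to the exact data that was used, which is the difference between a declaration and evidence.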
Automatic logging and auditability
High-risk AI systems must maintain logs of their operation automatically, capturing enough detail to allow post-hoc review of how the system behaved and what decisions it influenced. Raconteur's EU AI Act technical audit guide notes that screenshots and declarations are no longer sufficient; only operational evidence counts. Your infrastructure needs to produce that evidence reliably and store it in a way that can be retrieved and presented to regulators.
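One common pattern for tamper-evident operational logging is hash chaining. Here's an illustrative sketch in Python; the entry schema is an assumption, but the idea is that each entry commits to the previous one, so silent edits or deletions become detectable:

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only inference log where each entry hashes the previous one,
    so any after-the-fact modification breaks the chain."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._prev_hash = "0" * 64  # genesis value

    def record(self, model_id: str, input_ref: str, output_ref: str) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_id": model_id,
            "input_ref": input_ref,   # pointer to the stored input, not the raw data
            "output_ref": output_ref,
            "prev_hash": self._prev_hash,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._prev_hash = entry["entry_hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute the chain; any edited or removed entry fails here.
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or e["entry_hash"] != expected:
                return False
            prev = e["entry_hash"]
        return True

log = AuditLog()
log.record("credit-scoring-v4", "s3://inference-eu/in/123", "s3://inference-eu/out/123")
assert log.verify()
```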
Data residency inside the EU
For organizations serving EU users with high-risk systems, keeping personal data within EU jurisdiction is the most straightforward path through GDPR’s cross-border data transfer requirements, which run in parallel with the AI Act. Storing training data, inference inputs, and outputs on infrastructure physically located in the EU removes a significant layer of compliance complexity.
Cybersecurity for high-risk systems
The Act requires that high-risk AI systems achieve an appropriate level of accuracy, robustness, and cybersecurity. For AI systems handling sensitive data, that means the infrastructure beneath them needs to be hardened against both external attacks and insider threats. The definition of “appropriate” will be shaped by enforcement practice, but organizations whose training data or model weights could be accessed by a privileged infrastructure operator have a harder argument to make than those with hardware-level isolation.
Why Public Cloud Makes Some of These Harder Than They Should Be
Public cloud isn’t disqualified from EU AI Act compliance. Many organizations will meet their obligations on AWS, Azure, or GCP. But there are specific friction points worth understanding before assuming your existing infrastructure is ready.
The trust boundary problem
When you run AI workloads on shared public cloud infrastructure, the cloud provider sits inside your trust boundary. Their staff, with appropriate authorization, can access the physical hardware your workloads run on. For most workloads this is an acceptable tradeoff. For high-risk AI systems processing sensitive personal data, it creates a compliance gap that’s harder to close. Contractual assurances from a cloud provider are a different category of protection than a cryptographic guarantee that access is physically impossible.
Auditability of the underlying stack
Public cloud providers operate proprietary infrastructure. You have visibility into your instances, your storage, and your networking configuration. You don’t have visibility into the firmware, the BIOS configuration, or what’s happening at the hardware level beneath your VMs. For organizations that need to produce verifiable evidence of their security posture, that opacity creates gaps in the audit trail.
Data residency complexity at scale
Hyperscaler EU regions handle data residency for data at rest reasonably well, but data in transit between services, replication across availability zones, and the behavior of managed services can create situations where data touches infrastructure outside your intended geographic boundary. Tracing that data flow well enough to satisfy an auditor is an engineering project on top of your actual AI work.
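Part of that tracing is scriptable. The sketch below, using boto3, flags S3 buckets located outside EU regions. It assumes credentials with s3:ListAllMyBuckets and s3:GetBucketLocation permissions, and it only covers data at rest in S3, not transit or managed-service replication, which is exactly where the harder gaps live:

```python
import boto3

# Regions treated as inside the EU boundary for this check. This is an
# assumption to adjust: eu-west-2 is London, which is no longer in the EU.
EU_REGION_PREFIXES = ("eu-central-", "eu-west-", "eu-north-", "eu-south-")

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    # LocationConstraint is None for us-east-1 (a legacy S3 quirk)
    region = s3.get_bucket_location(Bucket=name)["LocationConstraint"] or "us-east-1"
    if not region.startswith(EU_REGION_PREFIXES):
        print(f"OUTSIDE EU BOUNDARY: {name} ({region})")
```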
Egress costs on training data pipelines
High-volume AI training workloads generate significant data movement. On public cloud, that movement is billed per gigabyte at rates that compound quickly. Organizations running iterative training pipelines on large datasets often find that data transfer costs become a meaningful fraction of their total infrastructure bill, which creates pressure to cut corners on data governance practices that require additional data movement.
What Dedicated EU Infrastructure Changes
Running AI workloads on dedicated infrastructure in the EU changes the compliance picture in several specific ways.
Physical data residency is unambiguous
When your training data, model weights, and inference logs sit on hardware in an EU data center that you control exclusively, the data residency question has a clean answer. There’s no ambiguity about replication behavior, no managed service routing traffic through non-EU regions, and no dependency on a provider’s contractual promises about where your data lives.
You control the full audit trail
On dedicated bare metal, you have access to the full stack. BIOS configuration, firmware versions, hardware event logs, and network traffic are all within your control and visibility. When a regulator or auditor asks for evidence of your security posture, you can produce it from the infrastructure layer up rather than stopping at the hypervisor boundary.
Hardware-level isolation for sensitive training data
For organizations processing the most sensitive categories of data in AI training, Intel Trust Domain Extensions (TDX) provides hardware-enforced isolation that goes beyond what software security controls can offer. TDX creates isolated Trust Domains where memory is encrypted and inaccessible to the host OS, the hypervisor, and the infrastructure operator. Sensitive training data processed inside a TDX Trust Domain cannot be accessed or tampered with even by someone with physical access to the server. That’s not a contractual guarantee. It’s a cryptographic one.
This matters specifically for the AI Act’s cybersecurity requirements for high-risk systems. OpenMetal’s V4 servers support Intel TDX on configurations with 1TB or more of RAM, with remote attestation available for cryptographic verification of the isolated environment.
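From inside a guest, you can at least verify that a workload is actually running in a TDX Trust Domain before it touches sensitive data. A minimal presence check in Python, based on the tdx_guest CPU flag and the Linux guest attestation device; this is a sanity check only, not a substitute for remote attestation, where a verifier validates a signed quote from the hardware:

```python
import os

def looks_like_tdx_guest() -> bool:
    """Heuristic check that this VM is running as a TDX Trust Domain.
    A real deployment should follow up with remote attestation."""
    with open("/proc/cpuinfo") as f:
        has_flag = "tdx_guest" in f.read()
    # The Linux TDX guest driver exposes this device for quote generation.
    has_device = os.path.exists("/dev/tdx_guest")
    return has_flag and has_device

if __name__ == "__main__":
    print("TDX guest environment detected" if looks_like_tdx_guest()
          else "Not running inside a TDX Trust Domain")
```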
Predictable costs don’t create governance tradeoffs
On fixed-cost dedicated infrastructure, there are no per-gigabyte charges on data movement between your servers. Training pipelines that require extensive data preprocessing, validation, and logging don’t generate variable cost spikes that create pressure to minimize data handling steps. Your data governance practices can be driven by compliance requirements rather than infrastructure economics.
A named team that knows your environment
EU AI Act compliance requires ongoing monitoring, incident reporting, and the ability to respond quickly when regulators ask questions. OpenMetal’s engineer-to-engineer support model means you have a named team familiar with your infrastructure, not a ticket queue. When something needs to be documented or investigated, you’re working with people who know your environment rather than starting from scratch.
The Timeline Reality and What to Prioritize First
It’s worth being honest about where enforcement actually stands. The EU AI Act’s compliance timelines are in active flux. The EU Digital Omnibus proposals are currently in trilogue negotiations between the European Commission, Council, and Parliament. Current positions suggest the high-risk AI system deadline may shift to December 2027 for most systems, later than the August 2026 date that has been widely cited.
That’s not a reason to defer infrastructure decisions. A few points worth internalizing:
The Act's governance provisions and general-purpose AI (GPAI) model obligations have been in effect since August 2025. If you're building on or offering GPAI models, those obligations apply now.
Transparency obligations, including watermarking requirements for AI-generated content, are targeted for November 2026 under current parliamentary positions.
The infrastructure decisions you make today have long lead times. Moving AI workloads to EU-based dedicated infrastructure, implementing data lineage tracking, and establishing logging and audit capabilities are multi-month projects. Organizations that start those projects when enforcement deadlines are announced will be behind organizations that built compliance-ready infrastructure in advance.
And practically speaking, enterprise EU customers are asking these questions now, regardless of what regulators are doing. A credible answer to “where does our data live and who can access it” is a commercial requirement, not just a regulatory one.
A Practical Starting Point
The question isn’t whether you agree that data governance and auditability matter. It’s whether your current infrastructure can produce the evidence that regulators and customers will ask for.
A useful first step is mapping which of your AI workloads would qualify as high-risk under the Act, then asking whether your current infrastructure supports the data lineage, logging, and isolation requirements those workloads trigger. For many organizations, that audit surfaces gaps that are easier to address with dedicated EU infrastructure than by retrofitting compliance controls onto shared cloud environments.
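That inventory can start as something very simple. Here's an illustrative sketch in Python; the high-risk categories listed are a partial reading of Annex III, the field names are hypothetical, and the actual classification call belongs with legal counsel:

```python
from dataclasses import dataclass

# Partial list of Annex III high-risk domains, for illustration only.
HIGH_RISK_DOMAINS = {"hiring", "credit_scoring", "healthcare",
                     "critical_infrastructure", "biometrics"}

@dataclass
class AIWorkload:
    name: str
    domain: str
    serves_eu_users: bool
    has_data_lineage: bool
    has_automatic_logging: bool
    eu_data_residency: bool

    def likely_high_risk(self) -> bool:
        return self.serves_eu_users and self.domain in HIGH_RISK_DOMAINS

    def gaps(self) -> list[str]:
        if not self.likely_high_risk():
            return []
        return [req for req, ok in [
            ("data lineage", self.has_data_lineage),
            ("automatic logging", self.has_automatic_logging),
            ("EU data residency", self.eu_data_residency),
        ] if not ok]

w = AIWorkload("resume-screener", "hiring", True, False, True, False)
print(w.name, "gaps:", w.gaps())  # -> resume-screener gaps: ['data lineage', 'EU data residency']
```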
OpenMetal’s Amsterdam data center provides dedicated bare metal options with Intel TDX support for organizations that need hardware-level isolation alongside EU data residency. If you’re evaluating confidential computing infrastructure specifically, OpenMetal’s confidential computing use case page covers the technical implementation in detail.
This article is for informational purposes and does not constitute legal advice. Organizations should consult qualified legal counsel for guidance on EU AI Act compliance obligations specific to their situation.