As artificial intelligence continues to push infrastructure to its limits, businesses are seeking scalable, flexible, and cost-effective solutions that avoid the lock-in of proprietary platforms. At OpenInfra Days 2025, Todd Robinson, co-founder of OpenMetal, shared key insights into how open infrastructure, particularly OpenStack, can give organizations the right tools to support AI workloads efficiently and affordably.

During the panel discussion, “Unleashing AI with OpenInfra,” Robinson highlighted OpenMetal’s unique approach to cloud infrastructure and how its on-demand private cloud model enables businesses to test and scale AI workloads without committing to massive upfront costs.

The Importance of Open Source in AI

Robinson emphasized the growing need for a strong open source AI community to counterbalance the influence of hyperscalers.

“If we leave AI purely in the hands of the hyperscalers, it creates a dependency that limits innovation.”

He pointed out that open source enables companies to build and customize AI models instead of being restricted to pre-packaged solutions dictated by large cloud providers. Open infrastructure, particularly OpenStack, provides the control and transparency businesses need to tailor AI infrastructure to their specific needs.

The Infrastructure Challenge: AI’s Power and Scalability Demands

One of the biggest challenges in AI today is the increasing demand for compute power. AI accelerators such as GPUs require significant power and cooling, which can quickly drive up operational costs. Robinson acknowledged this challenge and explained OpenMetal’s approach:

“We work with customers on real-world workloads to determine what’s actually needed before committing to large-scale deployments.”

By providing on-demand private cloud environments, OpenMetal enables organizations to test AI workloads on cost-controlled, flexible infrastructure. Sometimes a bare metal setup provisioned through OpenStack’s Ironic is all an organization needs to get started. Instead of investing heavily in hardware upfront, businesses can experiment with AI models using OpenMetal’s pre-configured, scalable cloud and bare metal environments.
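In practice, booting a bare metal node through Ironic looks much like booting a VM, since Ironic-managed machines are exposed to Nova as ordinary flavors. The sketch below uses the openstacksdk Python client; the cloud, flavor, image, and network names are hypothetical placeholders, not OpenMetal-specific values.

```python
# Minimal sketch: provisioning a bare metal server through Nova/Ironic
# with openstacksdk. On an Ironic-enabled cloud, bare metal machines
# are scheduled via dedicated flavors, so the call mirrors a VM boot.
import openstack

# Reads credentials for the named cloud from clouds.yaml
conn = openstack.connect(cloud="openmetal")

server = conn.create_server(
    name="ai-baremetal-01",
    flavor="bm.large",       # hypothetical Ironic-backed flavor
    image="ubuntu-22.04",    # hypothetical image name
    network="private-net",   # hypothetical tenant network
    wait=True,               # block until the node reaches ACTIVE
)
print(server.status, server.access_ipv4)
```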

Use Cases: AI on OpenMetal’s OpenStack Cloud

Robinson provided real-world examples of how OpenMetal’s beta AI customers are using open infrastructure for AI workloads:

  • Optimizing Natural Language Processing (NLP) Costs – One OpenMetal customer, struggling with the high cost of Amazon’s Transcribe service for speech-to-text processing, explored running their own NLP models. By shifting the workload to an OpenStack-based private cloud, they found they could significantly reduce costs while maintaining performance.
  • Deploying Open-Source AI Models – Some organizations are using OpenMetal’s infrastructure to experiment with Hugging Face models and smaller generative AI deployments, without the constraints of proprietary cloud AI services. This allows them to fine-tune and deploy AI applications on their terms.
  • Building an Open-Source Databricks Alternative – Robinson shared an example of a customer who wanted to migrate off Databricks due to its high operational costs. OpenMetal’s team showed them how to assemble an open-source equivalent using Debezium, Kafka, Delta Lake, and Apache Spark, all running on OpenStack. If implemented, this stack would give the customer a cost-effective, flexible alternative for big data processing (a sketch of this pipeline follows this list).
  • Using CPUs for AI Workloads – Contrary to the common assumption that AI workloads require expensive GPUs, OpenMetal has helped customers run smaller AI models efficiently on CPUs. Robinson emphasized:

“You probably have CPU in your cluster already—almost everybody does. Use it. Try running models directly in a VM today. You don’t need to go all-in on H100s to get started.”

Many businesses now use OpenMetal’s CPU-based OpenStack environments to run inference workloads and fine-tune AI models without dedicated GPUs.
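That advice is straightforward to act on. The sketch below shows CPU-only inference in an ordinary VM using the Hugging Face transformers library; the model choice is illustrative, not something named at the panel.

```python
# Minimal sketch: CPU-only text generation in a plain VM using a small
# open model from Hugging Face. device=-1 forces the pipeline onto the
# CPU, so no GPU or CUDA setup is required.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="distilgpt2",  # small model that runs comfortably on CPU
    device=-1,           # -1 = CPU
)

result = generator("Private AI on OpenStack", max_new_tokens=40)
print(result[0]["generated_text"])
```

On a modest VM this runs in seconds, which is the point: trying a small model costs almost nothing before any GPU decision is made.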
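For the Databricks alternative described in the list above, the data path is conceptually simple: Debezium captures database changes into Kafka topics, Spark Structured Streaming consumes them, and Delta Lake provides the transactional table layer. The PySpark sketch below is one possible wiring, assuming the Kafka and Delta connector packages are on the classpath; the broker address, topic, and storage paths are hypothetical.

```python
# Minimal sketch: streaming Debezium change events from Kafka into a
# Delta Lake table with Spark Structured Streaming. Broker, topic, and
# paths are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("cdc-to-delta")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")  # hypothetical broker
    .option("subscribe", "cdc.orders")                # hypothetical Debezium topic
    .load()
)

query = (
    events.selectExpr("CAST(value AS STRING) AS payload")
    .writeStream.format("delta")
    .option("checkpointLocation", "/delta/_checkpoints/orders")
    .start("/delta/orders")                           # hypothetical table path
)
query.awaitTermination()
```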

Building AI-Ready OpenStack Solutions

Robinson discussed OpenMetal’s approach to integrating OpenStack with AI workloads. Many organizations assume that AI workloads must run on expensive, high-end GPUs, but in reality, a combination of efficient CPU-based processing and targeted GPU acceleration can be just as effective. He stressed the need for organizations to start small and build AI strategies incrementally.

He also emphasized that OpenStack already has the capability to manage GPU passthrough, FPGA integration, and multi-node AI training environments, but more needs to be done to educate users on these capabilities.
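GPU passthrough is a good illustration. Nova’s PCI passthrough support lets an operator whitelist a device and define a PCI alias in nova.conf, after which a flavor extra spec makes the GPU schedulable like any other resource. The sketch below assumes admin credentials and a pre-configured alias named a100; the alias and flavor names are hypothetical.

```python
# Minimal sketch: defining a flavor that requests one passthrough GPU
# via Nova's pci_passthrough:alias extra spec. Assumes the operator has
# already whitelisted the device and defined an "a100" alias in
# nova.conf; names here are hypothetical.
import openstack

conn = openstack.connect(cloud="openmetal-admin")  # admin-scoped credentials

flavor = conn.compute.create_flavor(
    name="gpu.a100.1", ram=65536, vcpus=16, disk=100,
)
conn.compute.create_flavor_extra_specs(
    flavor, {"pci_passthrough:alias": "a100:1"}  # request one passthrough GPU
)
```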

Where OpenInfra Can Improve

Robinson believes OpenStack and the OpenInfra community should focus on:

  • Better visibility into AI workloads – Organizations need real-world examples of AI deployments running on OpenStack.
  • Increased collaboration with AI projects – Working with platforms like Hugging Face and DeepSeek could strengthen OpenStack’s positioning in AI.
  • Easier onboarding for AI users – Simplified tools and better documentation can accelerate OpenStack adoption for AI workloads.

He suggested that the OpenInfra Foundation, through its members, could also facilitate access to AI hardware for testing and development purposes:

“Maybe we could get more companies to donate hardware for testing AI workloads on OpenStack. That would help push innovation forward.”

Final Thoughts: OpenMetal’s Vision for AI and OpenInfra

Robinson’s insights at the panel highlighted OpenMetal’s commitment to making AI infrastructure more accessible, scalable, and cost-efficient. OpenMetal’s approach—leveraging OpenStack for on-demand private cloud environments—offers a powerful alternative to traditional hyperscale cloud providers, ensuring businesses retain control over their AI workloads without the constraints of proprietary ecosystems.

By combining bare metal automation, OpenStack, and flexible cloud deployments, OpenMetal is positioning itself as a key player in the evolving AI infrastructure landscape. The future of AI depends on open source collaboration, and OpenMetal is dedicated to ensuring that organizations can build, test, and scale AI workloads on their own terms.

For teams ready to explore private AI, OpenMetal is rolling out new AI-focused products designed to simplify the deployment of AI infrastructure. Stay tuned for updates as OpenMetal continues to push the boundaries of Private AI.
