GPU Servers and Clusters

Private dedicated hardware for your AI/ML workloads and more.

  • Monthly subscriptions to A100 and H100 multi-GPU deployments
  • Private resources for your use case. Virtualize your hardware only if it fits your needs.
  • Completely customizable and built to order.
  • Consistent and reliable performance.
  • Connect to our OpenStack deployments for more functionality, or deploy Bare Metal for complete control.

OpenMetal Converged Node

Private GPU Servers for AI/ML workloads

Fully customizable deployments ranging from large-scale 8x GPU setups to CPU-based inference.

X-Large

The most complete AI hardware we offer. It's ideal for AI/ML training, high-throughput inference, and demanding compute workloads that push performance to the limit.

8x NVIDIA SXM5 H100

  • GPU Memory: 640 GB HBM3
  • GPU Cores: 135,168 CUDA / 4,224 Tensor
  • CPU: 2x Intel Xeon Gold 6530 (64C/128T, 2.1/4.0 GHz)
  • Storage: Up to 16 NVMe drives, plus 2x 960 GB boot disks
  • Memory: Up to 8 TB DDR5 5600 MT/s
  • Price: Contact Us
Large
Perfect for mid-sized GPU workloads with maximum flexibility. These servers support up to 2x H100 GPUs, 2TB of memory, and 24 drives each!
2x NVIDIA H100 PCIe

  • GPU Memory: 160 GB HBM3
  • GPU Cores: 33,792 CUDA / 1,056 Tensor
  • CPU: 2x Intel Xeon Gold 6530 (64C/128T, 2.1/4.0 GHz)
  • Storage: 1x 6.4 TB NVMe, plus 2x 960 GB boot disks
  • Memory: 1024 GB DDR5 4800 MT/s
  • Price: $4,608.00/mo (eq. $6.31/hr)

1x NVIDIA H100 PCIe

  • GPU Memory: 80 GB HBM3
  • GPU Cores: 16,896 CUDA / 528 Tensor
  • CPU: 2x Intel Xeon Gold 6530 (64C/128T, 2.1/4.0 GHz)
  • Storage: 1x 6.4 TB NVMe, plus 2x 960 GB boot disks
  • Memory: 1024 GB DDR5 4800 MT/s
  • Price: $2,995.20/mo (eq. $4.10/hr)

2x NVIDIA A100 80G

  • GPU Memory: 160 GB HBM2e
  • GPU Cores: 13,824 CUDA / 864 Tensor
  • CPU: 2x Intel Xeon Gold 6530 (64C/128T, 2.1/4.0 GHz)
  • Storage: 1x 6.4 TB NVMe, plus 2x 960 GB boot disks
  • Memory: 1024 GB DDR5 4800 MT/s
  • Price: $3,087.36/mo (eq. $4.23/hr)

1x NVIDIA A100 80G

  • GPU Memory: 80 GB HBM2e
  • GPU Cores: 6,912 CUDA / 432 Tensor
  • CPU: 2x Intel Xeon Gold 6530 (64C/128T, 2.1/4.0 GHz)
  • Storage: 1x 6.4 TB NVMe, plus 2x 960 GB boot disks
  • Memory: 1024 GB DDR5 4800 MT/s
  • Price: $2,234.88/mo (eq. $3.06/hr)
Medium
A low-cost option for GPU workloads: less flexible than our large GPU deployments, but far more powerful than CPU inferencing.
1x NVIDIA A100 40G

  • GPU Memory: 40 GB HBM2e
  • GPU Cores: 6,912 CUDA / 432 Tensor
  • CPU: AMD EPYC 7272 (12C/24T, 2.9 GHz)
  • Storage: 1x 1 TB NVMe, plus 2x 960 GB boot disks
  • Memory: 256 GB DDR4 3200 MT/s
  • Price: $714.24/mo (eq. $0.98/hr)
Small – CPU Based
Running AI inference on Intel’s 5th Gen processors with AMX is the most affordable option. Ideal for small models and non-production use cases. Learn more

Pricing shown requires a 3-year agreement. Lower pricing may be available with longer commitments. Final pricing will be confirmed by your sales representative and is subject to change.
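
The hourly equivalents shown above appear to be the monthly price spread over a 730-hour month (365 days x 24 hours / 12 months); a quick sanity check in Python, using the prices from the table:

```python
# Hourly-equivalent check: monthly price divided by a 730-hour month.
# Prices come from the table above; the 730-hour divisor is our assumption.
HOURS_PER_MONTH = 365 * 24 / 12  # = 730

monthly_prices = {
    "2x H100 PCIe": 4608.00,
    "1x H100 PCIe": 2995.20,
    "2x A100 80G": 3087.36,
    "1x A100 80G": 2234.88,
    "1x A100 40G": 714.24,
}

for config, monthly in monthly_prices.items():
    print(f"{config}: ${monthly / HOURS_PER_MONTH:.2f}/hr")
# Output matches the "eq." rates listed: $6.31, $4.10, $4.23, $3.06, $0.98
```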

How is Private AI on OpenMetal Infrastructure different?

It’s private, customizable, and our engineers are on your team.

Private Resources

We provide dedicated hardware exclusively for your team. None of the resources are virtualized or shared with other users, ensuring consistent performance and allowing you to fully leverage your GPU’s capabilities.

Built to Order

Connect with our team to design your ideal AI/ML deployment. We’ll handle ordering and setup, and make sure everything runs reliably. The specifications listed are just a starting point. Contact us to discuss possibilities.

Access to Engineers

Our engineers are here to help you evaluate hardware capabilities and identify the best solution for your specific use case. After deployment, we’ll work with you to ensure you’re maximizing the value of your services. 

What you should know before running your own AI workloads

Performance Comparison of GPUs

Different GPU models offer varying levels of performance based on core counts, memory bandwidth, and architectural improvements. Comparing models like the A100 and H100 helps identify which hardware best supports specific AI workloads, including training large language models or running real-time inference.

Read More
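
As a rough first pass, the per-GPU numbers in the table above can be compared directly. A small sketch using only the specs listed on this page; real-world speedups also depend on memory bandwidth, interconnect, and precision support:

```python
# Per-GPU core counts and memory, taken from the pricing table above.
specs = {
    "A100 80G": {"cuda": 6912, "tensor": 432, "mem_gb": 80},
    "H100 PCIe": {"cuda": 16896, "tensor": 528, "mem_gb": 80},
}

a100, h100 = specs["A100 80G"], specs["H100 PCIe"]
print(f"CUDA cores:   {h100['cuda'] / a100['cuda']:.2f}x")      # ~2.44x
print(f"Tensor cores: {h100['tensor'] / a100['tensor']:.2f}x")  # ~1.22x
```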

Inference on CPU

CPU-based inference remains a practical option for certain workloads, especially when GPUs are not required. We examine the performance impact of Intel’s Advanced Matrix Extensions (AMX) on 5th Gen Intel processors, comparing execution speed and efficiency gains when running AI models. AMX enables improved matrix computation performance, making CPU inference more viable for specific tasks.

Read More
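
If you are evaluating CPU inference, a quick way to confirm that a Linux host exposes AMX is to look for the amx_* flags in /proc/cpuinfo. A minimal sketch; your inference framework must also be recent enough to take advantage of AMX:

```python
# Report AMX CPU flags (amx_tile, amx_int8, amx_bf16) on a Linux host.
def amx_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return sorted(flag for flag in line.split() if flag.startswith("amx"))
    return []

flags = amx_flags()
print("AMX flags:", ", ".join(flags) if flags else "none found")
```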

Private vs Public AI

Bare metal provides direct access to physical hardware without virtualization overhead, offering predictable performance ideal for AI training and large inference tasks. Cloud virtualization adds flexibility but may introduce resource contention and variable latency depending on shared infrastructure.

Read More

Comparing Costs

The cost of running AI workloads depends on hardware selection, usage patterns, and resource efficiency. Dedicated GPUs involve higher upfront costs but deliver faster results, while time-shared or virtualized resources may reduce costs for intermittent or smaller-scale workloads.

Read More
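
One way to frame that trade-off is break-even utilization: below a certain number of GPU-hours per month, pay-per-hour capacity is cheaper, and above it a dedicated server wins. A sketch using the 1x A100 80G price from this page and a hypothetical on-demand rate:

```python
# Break-even utilization: dedicated monthly price vs. on-demand hourly rate.
dedicated_monthly = 2234.88  # 1x A100 80G, from the table above
on_demand_hourly = 5.00      # hypothetical on-demand rate; substitute your own

break_even_hours = dedicated_monthly / on_demand_hourly
print(f"Break-even at {break_even_hours:.0f} GPU-hours/month "
      f"({break_even_hours / 730:.0%} utilization)")
# ~447 hours (~61%): above that utilization, the dedicated server is cheaper.
```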

MIG vs Time-Slicing

Multi-Instance GPU (MIG) and time-slicing are two methods for sharing GPU resources, each offering different levels of isolation and performance. OpenMetal supports both, providing flexibility not available from all providers.

Read More
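
On a host where MIG is enabled, each MIG instance shows up as its own addressable device. A minimal sketch that shells out to nvidia-smi -L, assuming the NVIDIA driver is installed:

```python
# List physical GPUs and any MIG instances the driver exposes.
import subprocess

listing = subprocess.run(
    ["nvidia-smi", "-L"], capture_output=True, text=True, check=True
).stdout

for line in listing.splitlines():
    print(line)  # MIG instances appear indented beneath their parent GPU
```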

Measuring Inference Performance

Inference performance is measured by throughput, latency, and token generation speed for large language models. Factors such as model size, batch processing, and hardware configuration directly affect results, making accurate benchmarking critical for production planning.
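
In practice, that usually means timing real generations end to end. A minimal sketch of a tokens-per-second measurement, where generate() is a placeholder for whatever inference call your stack exposes:

```python
# Rough throughput measurement around a generation call.
import time

def tokens_per_second(generate, prompt, n_runs=5):
    total_tokens, total_seconds = 0, 0.0
    for _ in range(n_runs):
        start = time.perf_counter()
        output_tokens = generate(prompt)  # assumed to return generated tokens
        total_seconds += time.perf_counter() - start
        total_tokens += len(output_tokens)
    return total_tokens / total_seconds

# Usage: tps = tokens_per_second(my_model.generate, "Hello", n_runs=10)
```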

GPU Server Deployment Sizes for Various Workloads

Access dedicated GPU servers with full control over resource utilization. Users can run workloads directly on bare metal or connect to OpenStack to create and manage virtual machines, networks, and storage. This flexibility allows teams to decide how and when to virtualize resources based on their workload requirements.
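
As an illustration of the OpenStack path, VMs can be provisioned against your private cloud with the standard openstacksdk client. A sketch only; the cloud, image, and flavor names below are placeholders for your own environment:

```python
# Create a VM on a private OpenStack cloud with openstacksdk.
import openstack

conn = openstack.connect(cloud="openmetal")  # named entry in your clouds.yaml
server = conn.create_server(
    name="gpu-workload-vm",
    image="ubuntu-22.04",   # placeholder image name
    flavor="gpu.a100.1x",   # placeholder flavor (e.g., with GPU passthrough)
    wait=True,
)
print(server.status)  # ACTIVE once the instance is up
```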

X-Large GPU Server

Built for enterprise-grade AI/ML workloads requiring maximum performance and scalability. This deployment includes 8× NVIDIA H100 GPUs per node, designed to handle nearly all use cases, from large-scale model training to high-throughput inference and multi-user environments. Suitable for advanced generative AI models, deep learning pipelines, and demanding production workloads requiring sustained performance.

Available hardware: OpenMetal Converged Node, OpenMetal Compute Node

Large GPU Server

Ideal for teams running frequent AI experiments or large-scale model training jobs. This deployment is fully customizable, allowing selection of GPU type, CPU, memory, and storage to match specific workload requirements. It offers the flexibility to support diverse use cases such as computer vision, natural language processing, and advanced data analytics in production environments.

Medium GPU Server

Suited for teams transitioning from proof-of-concept to production workloads. This deployment supports a single NVIDIA A100 GPU per node, providing sufficient resources for moderate AI/ML pipelines, including model tuning, dataset augmentation, and testing of generative AI models. It is ideal for workloads that benefit from GPU acceleration but do not require multiple GPUs or large-scale distributed training.

Small GPU Server

Recommended for development environments, application integration, or running smaller models in production where GPU acceleration is not required. This deployment does not include a GPU and is designed for CPU-only inference workloads. It is suitable for validating models, processing lighter inference tasks, or building pipelines that do not depend on GPU resources.

Contact Us

Connect with our team to discuss your requirements, delivery timelines, capabilities, and agreement pricing.


Pricing FAQs, Eligibility and Usage Restrictions

What usage restrictions apply?

Different regions have different legal requirements for content and workloads; it is your responsibility to understand and comply with them.

How is your bare metal different from other providers’?

We provide true bare metal resources, giving you complete control over the GPU, BIOS, drivers, and full hardware capabilities. Unlike providers who deploy bare metal servers but carve out and resell slices of GPU, storage, or compute (potentially impacting your performance through shared usage), OpenMetal delivers dedicated hardware that is yours alone.

No other tenants share your systems. This ensures maximum performance and gives you the flexibility to meet strict security and data protection requirements, including confidential computing, for your specific use case.

Is power billed separately?

Power usage is included in the cost of your hardware. You will not receive a separate bill for power utilization.

Can I rent GPUs by the hour?

No, we do not offer hourly GPUs. All billing is monthly, and custom orders require an agreement of at least 1 year. Any hourly prices shown are solely for comparison purposes.

How is bandwidth billed?

Each storage cluster comes with a base egress bandwidth allowance, and there is currently no charge for ingress. There is no way to limit your bandwidth usage from our side, but we encourage you to use the tools within OpenStack and Linux that can cap bandwidth. For egress beyond the base cluster allowance, see our egress pricing or lock in your rate with a bandwidth plan from sales.

For guaranteed bandwidth plans, please contact Sales. The typical plan is:

  • For egress, you are billed on the 95th percentile at $0.37 per Mbit per week. Your 95th percentile is calculated over the billing period, up to 7 days. The minimum usage commitment is 100 Mbits at $37.00/wk; higher commitments yield lower per-Mbit costs.

We reserve the right to implement limitations and restrictions at any time if you are not on a guaranteed bandwidth plan.
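
For reference, a weekly 95th-percentile bill can be estimated from your own egress samples using the rates quoted above (a sketch; your provider-side sampling interval may differ):

```python
# Estimate a weekly 95th-percentile egress bill at $0.37 per Mbit.
RATE_PER_MBIT = 0.37  # $/Mbit/week, from the plan above
COMMIT_MBITS = 100    # minimum usage commitment

def weekly_bill(samples_mbps):
    """samples_mbps: egress throughput samples (Mbps) from the billing week."""
    ordered = sorted(samples_mbps)
    p95 = ordered[int(len(ordered) * 0.95) - 1]  # top 5% of samples are dropped
    return max(p95, COMMIT_MBITS) * RATE_PER_MBIT

# 95 quiet samples and 5 large bursts: the bursts fall in the dropped 5%.
print(weekly_bill([80] * 95 + [900] * 5))  # -> 37.0 (the $37.00/wk minimum)
```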

What network SLA do you offer?

Our measured network performance for 2024 is 99.96%. The base SLA is 99.96%.

Do your GPUs support MIG and time-slicing?

Yes, all of our GPU deployments support both MIG and time-slicing methods of sharing GPU resources.

Can you provide GPU models not listed here?

Yes, we can support many other GPU models. Our published list is limited to the most in-demand and readily available options. If you’re interested in a specific model, reach out to our sales team and we’ll provide detailed information on availability and pricing.

Not sure yet? Our welcome team can:

Get You Started Fast

Our expert hardware engineers will work closely with your team. Save time, improve performance, and lower costs.

Negotiate Ramp Pricing

Don’t pay twice during the move process. Work with your account manager to get a move plan that fits.

Beat Your Bill

Has your mega cloud provider hit you with a mega bill? Transparent prices, fixed budgets, and a team that cares.