Private GPU Servers for AI/ML Workloads
Fully customizable deployments ranging from large-scale 8x GPU setups to CPU-based inference.
X-Large
The most complete AI hardware we offer. It's ideal for AI/ML training, high-throughput inference, and demanding compute workloads that push performance to the limit.

| GPU | GPU Memory | GPU Cores | CPU | Storage | Memory | Price |
|---|---|---|---|---|---|---|
| 8x NVIDIA H100 SXM5 | 640 GB HBM3 | CUDA: 135,168 / Tensor: 4,224 | 2x Intel Xeon Gold 6530, 64C/128T, 2.1/4.0 GHz | Up to 16 NVMe drives; 2x 960 GB boot disks | Up to 8 TB DDR5 5600 MT/s | Contact Us |

Large
Perfect for mid-sized GPU workloads with maximum flexibility. These servers support up to 2x H100 GPUs, 2 TB of memory, and 24 drives each!

| GPU | GPU Memory | GPU Cores | CPU | Storage | Memory | Price |
|---|---|---|---|---|---|---|
| 2x NVIDIA H100 PCIe | 160 GB HBM3 | CUDA: 29,184 / Tensor: 912 | 2x Intel Xeon Gold 6530, 64C/128T, 2.1/4.0 GHz | 1x 6.4 TB NVMe; 2x 960 GB boot disks | 1024 GB DDR5 4800 MT/s | $4,608.00/mo (eq. $6.31/hr) |
| 1x NVIDIA H100 PCIe | 80 GB HBM3 | CUDA: 14,592 / Tensor: 456 | 2x Intel Xeon Gold 6530, 64C/128T, 2.1/4.0 GHz | 1x 6.4 TB NVMe; 2x 960 GB boot disks | 1024 GB DDR5 4800 MT/s | $2,995.20/mo (eq. $4.10/hr) |
| 2x NVIDIA A100 80GB | 160 GB HBM2e | CUDA: 13,824 / Tensor: 864 | 2x Intel Xeon Gold 6530, 64C/128T, 2.1/4.0 GHz | 1x 6.4 TB NVMe; 2x 960 GB boot disks | 1024 GB DDR5 4800 MT/s | $3,087.36/mo (eq. $4.23/hr) |
| 1x NVIDIA A100 80GB | 80 GB HBM2e | CUDA: 6,912 / Tensor: 432 | 2x Intel Xeon Gold 6530, 64C/128T, 2.1/4.0 GHz | 1x 6.4 TB NVMe; 2x 960 GB boot disks | 1024 GB DDR5 4800 MT/s | $2,234.88/mo (eq. $3.06/hr) |

Medium
Low-cost GPU workloads. Less flexible than our Large GPU deployments, but far more powerful than CPU inferencing.

| GPU | GPU Memory | GPU Cores | CPU | Storage | Memory | Price |
|---|---|---|---|---|---|---|
| 1x NVIDIA A100 40GB | 40 GB HBM2e | CUDA: 6,912 / Tensor: 432 | AMD EPYC 7272, 12C/24T, 2.9 GHz | 1x 1 TB NVMe; 2x 960 GB boot disks | 256 GB DDR4 3200 MT/s | $714.24/mo (eq. $0.98/hr) |
Pricing shown requires a 3-year agreement. Lower pricing may be available with longer commitments. Final pricing will be confirmed by your sales representative and is subject to change.
How is Private AI on OpenMetal Infrastructure different?
It’s private, it’s customizable, and our engineers are on your team.
Private Resources
We provide dedicated hardware exclusively for your team. None of the resources are virtualized or shared with other users, ensuring consistent performance and allowing you to fully leverage your GPU’s capabilities.
Built to Order
Connect with our team to design your ideal AI/ML deployment. We’ll handle ordering, setup, and ensure everything runs reliably. The specifications listed are just a starting point. Contact us to discuss possibilities.
Access to Engineers
Our engineers are here to help you evaluate hardware capabilities and identify the best solution for your specific use case. After deployment, we’ll work with you to ensure you’re maximizing the value of your services.
What you should know before running your own AI workloads
Performance Comparison of GPUs
Different GPU models offer varying levels of performance based on core counts, memory bandwidth, and architectural improvements. Comparing models like the A100 and H100 helps identify which hardware best supports specific AI workloads, including training large language models or running real-time inference.
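To put such comparisons on a concrete footing, a quick probe like the sketch below measures raw matrix-multiply throughput on whatever GPU is attached. It assumes PyTorch with a CUDA device and is an illustrative micro-benchmark only, not a vendor benchmark; real workload gains also depend on memory bandwidth, interconnect, and software stack.

```python
# Illustrative matmul throughput probe for comparing GPUs side by side.
# Assumes PyTorch with a CUDA device; numbers are indicative only, not a
# substitute for vendor benchmarks or real workload measurements.
import time
import torch

def matmul_tflops(n: int = 8192, dtype=torch.float16, iters: int = 20) -> float:
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    for _ in range(3):                  # warm-up so clocks and kernels settle
        a @ b
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    return (2 * n**3 * iters) / elapsed / 1e12   # 2*n^3 FLOPs per n x n matmul

print(f"{torch.cuda.get_device_name(0)}: ~{matmul_tflops():.0f} TFLOPS (fp16)")
```

Running the same script on an A100 and an H100 gives a rough first-order view of the generational gap for dense math.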
Inference on CPU
CPU-based inference remains a practical option for certain workloads, especially when GPUs are not required. We examine the performance impact of Intel’s Advanced Matrix Extensions (AMX) on 5th Gen Intel Xeon processors, comparing execution speed and efficiency gains when running AI models. AMX enables improved matrix computation performance, making CPU inference more viable for specific tasks.
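As a rough illustration, recent PyTorch builds route bfloat16 CPU kernels through oneDNN, which can dispatch to AMX tiles on supporting Xeons. The sketch below shows the autocast pattern involved; the tiny model is a stand-in, and whether AMX is actually used depends on the CPU and the PyTorch build.

```python
# Illustrative bfloat16 CPU inference: PyTorch's oneDNN backend can dispatch
# bf16 matmuls to AMX tile instructions on 4th/5th Gen Xeon CPUs. The tiny
# model here is a stand-in; whether AMX is used depends on CPU and build.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 4096),
).eval()
x = torch.randn(32, 4096)

with torch.inference_mode(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)    # eligible kernels may run on AMX when supported
print(y.shape, y.dtype)
```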
Private vs Public AI
Bare metal provides direct access to physical hardware without virtualization overhead, offering predictable performance ideal for AI training and large inference tasks. Cloud virtualization adds flexibility but may introduce resource contention and variable latency depending on shared infrastructure.
Comparing Costs
The cost of running AI workloads depends on hardware selection, usage patterns, and resource efficiency. Dedicated GPUs involve higher upfront costs but deliver faster results, while time-shared or virtualized resources may reduce costs for intermittent or smaller-scale workloads.
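As a worked example using the list prices above, the snippet below converts a monthly price to its hourly equivalent and estimates the utilization level at which a dedicated server undercuts per-hour pricing. The $8.00/hr on-demand rate is a hypothetical comparison point, not a quoted price.

```python
# Back-of-envelope cost comparison (illustrative; the monthly figure is the
# 1x H100 PCIe list price above, the on-demand rate is hypothetical).
MONTHLY = 2995.20          # 1x H100 PCIe, Large tier
HOURS_PER_MONTH = 730      # average hours in a month
ON_DEMAND_RATE = 8.00      # hypothetical public-cloud $/hr for a similar GPU

dedicated_hourly = MONTHLY / HOURS_PER_MONTH
print(f"Dedicated equivalent: ${dedicated_hourly:.2f}/hr")   # ~ $4.10/hr

# Utilization at which dedicated becomes cheaper than on-demand:
break_even = MONTHLY / (ON_DEMAND_RATE * HOURS_PER_MONTH)
print(f"Break-even utilization: {break_even:.0%}")           # ~ 51%
```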
MIG vs Time-Slicing
Multi-Instance GPU (MIG) and time-slicing are two methods for sharing GPU resources, each offering different levels of isolation and performance. OpenMetal supports both, providing flexibility not available from all providers.
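For illustration, the sketch below uses the pynvml bindings (the nvidia-ml-py package) to check whether MIG is enabled on GPU 0 and list its instances. It assumes a MIG-capable GPU such as an A100 or H100 and that an administrator has already partitioned it.

```python
# Sketch: enumerate MIG instances on a MIG-enabled GPU via NVML.
# Assumes the nvidia-ml-py 'pynvml' bindings and a MIG-capable GPU
# (A100/H100); MIG partitioning is done separately by an administrator.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
current, _pending = pynvml.nvmlDeviceGetMigMode(gpu)

if current == pynvml.NVML_DEVICE_MIG_ENABLE:
    for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
        except pynvml.NVMLError:
            continue  # slot not populated with a MIG instance
        print(i, pynvml.nvmlDeviceGetName(mig))
else:
    print("MIG disabled: the whole GPU is shared among processes in time slices")
pynvml.nvmlShutdown()
```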
Measuring Inference Performance
Inference performance is measured by throughput, latency, and token generation speed for large language models. Factors such as model size, batch processing, and hardware configuration directly affect results, making accurate benchmarking critical for production planning.
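A minimal timing harness for the two headline numbers, time-to-first-token and tokens per second, might look like the following. The `generate` callable is a hypothetical stand-in for a model's streaming generation API; wire in your own client to use it for real measurements.

```python
# Minimal harness for two headline inference metrics: time-to-first-token
# (TTFT, the latency users feel) and end-to-end tokens/sec (throughput).
# `generate` is a hypothetical stand-in for a streaming generation call.
import time
from typing import Callable, Iterable

def measure(generate: Callable[[], Iterable[str]]) -> tuple[float, float]:
    start = time.perf_counter()
    first_token_at = start
    count = 0
    for _token in generate():                 # consume the token stream
        if count == 0:
            first_token_at = time.perf_counter()
        count += 1
    total = time.perf_counter() - start
    ttft = first_token_at - start
    throughput = count / total if total else 0.0
    return ttft, throughput

# Toy stand-in stream; replace with a real streaming generate() call.
ttft, tps = measure(lambda: iter(["Hello", ",", " world", "!"]))
print(f"TTFT: {ttft * 1000:.2f} ms, throughput: {tps:.1f} tok/s")
```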
GPU Server Deployment Sizes for Various Workloads
Access dedicated GPU servers with full control over resource utilization. Users can run workloads directly on bare metal or connect to OpenStack to create and manage virtual machines, networks, and storage. This flexibility allows teams to decide how and when to virtualize resources based on their workload requirements.
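For example, once the OpenStack layer is in place, a VM can be provisioned programmatically with the openstacksdk client. The cloud entry, image, flavor, and network names below are placeholders for values from your own clouds.yaml and project, not OpenMetal-specific identifiers.

```python
# Sketch: create a VM on the deployment's OpenStack layer with openstacksdk.
# The cloud entry, image, flavor, and network names are placeholders for
# your own clouds.yaml and project.
import openstack

conn = openstack.connect(cloud="openmetal")   # credentials read from clouds.yaml

server = conn.create_server(
    name="gpu-worker-1",
    image="Ubuntu 22.04",      # placeholder image name
    flavor="gpu.large",        # placeholder flavor (e.g., with GPU passthrough)
    network="internal-net",    # placeholder tenant network
    wait=True,                 # block until the server reaches ACTIVE
)
print(server.status)
```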
X-Large GPU Server
Built for enterprise-grade AI/ML workloads requiring maximum performance and scalability. This deployment includes 8× NVIDIA H100 GPUs per node, designed to handle nearly all use cases, from large-scale model training to high-throughput inference and multi-user environments. Suitable for advanced generative AI models, deep learning pipelines, and demanding production workloads requiring sustained performance.
Large GPU Server
Ideal for teams running frequent AI experiments or large-scale model training jobs. This deployment is fully customizable, allowing selection of GPU type, CPU, memory, and storage to match specific workload requirements. It offers the flexibility to support diverse use cases such as computer vision, natural language processing, and advanced data analytics in production environments.
Medium GPU Server
Suited for teams transitioning from proof-of-concept to production workloads. This deployment supports a single NVIDIA A100 GPU per node, providing sufficient resources for moderate AI/ML pipelines, including model tuning, dataset augmentation, and testing of generative AI models. It is ideal for workloads that benefit from GPU acceleration but do not require multiple GPUs or large-scale distributed training.
Small GPU Server
Recommended for development environments, application integration, or running smaller models in production where GPU acceleration is not required. This deployment does not include a GPU and is designed for CPU-only inference workloads. It is suitable for validating models, processing lighter inference tasks, or building pipelines that do not depend on GPU resources.
Contact Us
Connect with our team to discuss your requirements, delivery timelines, capabilities, and agreement pricing.
Pricing FAQs, Eligibility, and Usage Restrictions
We provide true bare metal resources, giving you complete control over the GPU, BIOS, drivers, and full hardware capabilities. Some providers deploy bare metal servers but carve out and resell slices of GPU, storage, or compute, which can impact your performance through shared usage; OpenMetal delivers dedicated hardware that is yours alone.
No other tenants share your systems. This ensures maximum performance and gives you the flexibility to meet strict security and data protection requirements, including confidential computing, for your specific use case.
For guaranteed bandwidth plans, please contact Sales. The typical plan is:
- For egress, you are billed on the 95th percentile of your usage at $0.37 per Mbit per week. Your 95th percentile is calculated over the billing period, up to 7 days. The minimum usage commitment is 100 Mbit at $37.00/wk; higher commitments yield lower per-Mbit costs.
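As a rough sketch of how that 95th-percentile figure turns into a weekly charge (synthetic data stands in for real 5-minute egress meter readings):

```python
# Worked example of the 95th-percentile egress billing described above,
# using synthetic data in place of real 5-minute Mbit/s meter readings.
import random

samples = sorted(random.uniform(50, 400) for _ in range(7 * 24 * 12))  # one week
p95 = samples[int(len(samples) * 0.95)]     # the top 5% of samples is discarded

billable = max(p95, 100)                    # 100 Mbit minimum commitment
print(f"95th percentile: {p95:.0f} Mbit -> ${billable * 0.37:,.2f}/wk at $0.37/Mbit")
```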
We reserve the right to implement limitations and restrictions at any time if you are not on a guaranteed bandwidth plan.