When your technical diligence team starts evaluating a potential SaaS acquisition, you probably have a clear framework for assessing product architecture, codebase quality, and security posture. But when it comes to cloud infrastructure, the picture often gets murky. The bills are high, the architecture is complex, and actual resource utilization sits hidden behind layers of abstraction and metered billing.

This creates a challenge. Cloud infrastructure often represents one of the largest operational expenses for SaaS companies, sometimes consuming 50% or more of cost of revenue, yet during technical diligence it is also one of the most difficult areas to evaluate with confidence. Can you verify whether resources are right-sized? Are there hidden costs waiting to emerge post-acquisition? Will infrastructure expenses scale predictably as the company grows?

For private equity firms conducting technical diligence, understanding what to look for in cloud infrastructure can mean the difference between accurate valuation and post-close surprises that erode EBITDA.
Why Cloud Infrastructure Assessment Matters During Diligence
Technical diligence teams need to answer fundamental questions about a target’s operational efficiency, cost predictability, and scalability potential. Cloud expenses bear directly on all three, and on overall profitability: high and unpredictable cloud costs may signal architectural inefficiencies that could hinder growth.
The financial implications extend beyond monthly bills. When infrastructure costs fluctuate unpredictably or consume disproportionate portions of revenue, it directly impacts gross margins and, consequently, company valuations. Research from Andreessen Horowitz reveals that $100 billion of market value is being lost among the top 50 public software companies due to cloud impact on margins. For PE firms evaluating acquisition targets, this represents both risk and opportunity.
Understanding how a target company manages its infrastructure provides insights into broader operational maturity. A company’s ability to monitor and optimize cloud expenses is a strong indicator of technical maturity and operational discipline, which are key factors during an investment evaluation.
What Makes Cloud Infrastructure Difficult to Evaluate
Traditional public cloud environments create visibility challenges that complicate technical diligence. When workloads run on shared multi-tenant infrastructure, determining actual resource utilization requires access to multiple monitoring systems and detailed usage reports that may not provide complete pictures of efficiency or waste.
Usage-Based Pricing Creates Forecasting Uncertainty
Public cloud costs can reach 50% of total cost of revenue for software companies, and usage-based billing makes those costs hard to predict. During diligence, this unpredictability makes it difficult to project future infrastructure spend against growth scenarios.
The billing models themselves introduce complexity. Instance charges, data transfer fees, API calls, storage costs, and dozens of other line items combine to create invoices that require specialized knowledge to interpret. Your diligence team needs to understand not just what the company currently pays, but how those costs will evolve as customer counts increase and data volumes grow.
Resource Utilization Remains Hidden
In typical public cloud deployments, workloads use roughly 30% of allocated resources on average and rarely burst more than 30% above that, leaving approximately 40% of VM capacity wasted. Identifying this waste during a diligence review requires deep technical investigation that goes beyond reviewing invoices.
The challenge compounds when target companies lack comprehensive monitoring tools or cost tracking practices. Many organizations struggle with cloud cost visibility, often failing to monitor resource usage at a granular level. Without proper tools to track these inefficiencies, such usage can lead to significant waste and unexpected charges.
Architectural Choices Affect Long-Term Costs
The architecture decisions teams make early in development (instance types, storage configurations, networking approaches) create cost structures that become difficult and expensive to change. During diligence, assessing whether these architectural choices align with efficient operations requires specialized cloud expertise that may not exist within standard technical review teams.
Key Infrastructure Questions for Technical Diligence
When conducting technical diligence on cloud infrastructure, your team should be able to answer specific questions that reveal operational maturity, cost efficiency, and scalability potential.
Cost Structure and Predictability
How are infrastructure costs tracked and allocated? Companies with mature operations implement cost tracking tools that provide granular visibility into spending patterns. Implementing cost monitoring tools, such as AWS Cost Explorer or Azure Cost Management, provides businesses with insights into their resource consumption and spending patterns.
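A diligence team can ask the target to demonstrate this granularity directly. As a minimal sketch (assuming an AWS environment with boto3 credentials configured and Cost Explorer enabled; the date range and the $100 reporting threshold are arbitrary placeholders), the following pulls monthly spend grouped by service:

```python
# Illustrative sketch: pull monthly cost by service from AWS Cost Explorer.
# Assumes boto3 credentials are configured and Cost Explorer is enabled;
# the date range and the $100 threshold are placeholders.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-07-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for month in response["ResultsByTime"]:
    print(month["TimePeriod"]["Start"])
    for group in month["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if amount > 100:  # skip small line items for readability
            print(f"  {service}: ${amount:,.2f}")
```

If the target’s engineering team can produce and explain output like this on request, cost ownership is probably mature; if not, treat that as a finding in itself.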
What percentage of total cost of revenue does infrastructure represent? This ratio provides immediate insight into operational efficiency and margin potential. Companies approaching 50% or higher infrastructure costs relative to cost of revenue represent optimization opportunities that can directly improve EBITDA post-acquisition.
Are there committed spend agreements or reserved instances? Understanding existing commitments helps forecast costs and identifies optimization opportunities. Reserving instances refers to committing to use specific cloud resources for a fixed period in exchange for significantly reduced pricing compared to on-demand rates. Research from Flexera’s 2023 State of the Cloud Report indicates that businesses can save up to 72% on cloud expenses by using reserved instances.[1]
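To make the impact of a commitment concrete, here is a back-of-the-envelope sketch. The hourly rate, discount, and instance count are hypothetical; real discounts vary by instance type, term length, and payment option, and the 72% figure above is a best case rather than a typical outcome.

```python
# Hypothetical illustration of reserved-instance savings.
# All rates and counts below are made up for the example.
on_demand_hourly = 0.192      # $/hour for one hypothetical instance
ri_discount = 0.40            # assumed 40% discount for a 1-year commitment
instance_count = 25
hours_per_year = 24 * 365

on_demand_annual = on_demand_hourly * hours_per_year * instance_count
reserved_annual = on_demand_annual * (1 - ri_discount)

print(f"On-demand annual cost: ${on_demand_annual:,.0f}")
print(f"Reserved annual cost:  ${reserved_annual:,.0f}")
print(f"Annual savings:        ${on_demand_annual - reserved_annual:,.0f}")
```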
What are the data transfer patterns and egress costs? Data transfer costs represent one of the most insidious hidden expenses. As SaaS companies grow their customer bases and increase data processing, egress fees compound. These charges are often minimal during initial deployment but can become substantial portions of monthly bills as companies scale.
Resource Utilization and Efficiency
What is the actual utilization rate across compute resources? This reveals whether the company is over-provisioning infrastructure out of caution or running efficiently. Low utilization rates indicate immediate cost reduction opportunities.
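Rather than relying on the seller’s summary, reviewers can pull utilization metrics directly. The sketch below assumes an AWS environment with boto3 configured; the instance ID, the 14-day window, and the 40% threshold are placeholders:

```python
# Illustrative sketch: average and peak CPU utilization for one instance.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder ID
    StartTime=start,
    EndTime=end,
    Period=3600,                      # hourly datapoints
    Statistics=["Average", "Maximum"],
)

points = stats["Datapoints"]
if points:
    avg = sum(p["Average"] for p in points) / len(points)
    peak = max(p["Maximum"] for p in points)
    print(f"14-day average CPU: {avg:.1f}%, peak: {peak:.1f}%")
    if peak < 40:
        print("Likely over-provisioned: even peak usage leaves most capacity idle.")
```

Repeating a check like this across the fleet, and against memory and storage metrics, turns anecdotal utilization claims into evidence.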
Are there automated scaling policies in place? Companies that implement auto-scaling based on actual demand demonstrate operational maturity and cost awareness. Manual provisioning often leads to over-allocation and wasted spend.
How are development, staging, and production environments managed? Separate environments are necessary, but inefficient management (like running production-scale resources in development) drives up costs without adding value.
Scalability and Performance
Can the infrastructure scale to support projected growth? Your diligence should validate that the current architecture can handle anticipated customer growth without requiring fundamental redesign or triggering cost spikes.
What are the performance characteristics and SLA requirements? Understanding latency requirements, throughput needs, and uptime commitments helps assess whether the infrastructure is appropriately architected for the company’s customer commitments.
Are there architectural bottlenecks that will require significant investment to address? Identifying technical debt or design limitations that could constrain growth helps you factor remediation costs into acquisition modeling.
Security and Compliance
How is network isolation and data segmentation implemented? Multi-tenant public cloud environments require careful configuration to maintain proper isolation. Verification of these security controls protects against data exposure risks.
What compliance frameworks does the infrastructure support? For target companies in regulated industries, understanding compliance posture (SOC 2, HIPAA, PCI DSS) affects both risk assessment and operational flexibility post-acquisition.
Can infrastructure configurations be audited and verified? The ability to inspect and validate infrastructure settings directly provides confidence in security posture and operational controls.
How OpenMetal Infrastructure Supports Technical Diligence
OpenMetal infrastructure is designed to provide the transparency and technical clarity that make technical diligence straightforward. Rather than trying to understand infrastructure through invoices and API logs, diligence teams can directly verify configurations, inspect network topology, and validate resource allocation.
Direct Infrastructure Visibility
OpenMetal environments run on dedicated bare metal servers that provide complete isolation and transparency. Each server includes dual 10 Gbps private links and dual 10 Gbps public uplinks that support predictable bandwidth. Dedicated VLANs isolate customer environments at the hardware level, which can be verified directly rather than relying on provider assertions.
Both infrastructure networks and OpenStack networks can be viewed through OpenMetal Central or OpenStack Horizon. This allows diligence reviewers to confirm segmentation, isolation, and configuration management directly. Network topology, routing tables, security groups, and firewall rules are all accessible for audit.
Rapid Deployment Demonstrates Operational Maturity
Infrastructure that can be provisioned quickly indicates mature automation and reproducibility. OpenMetal provides 45-second OpenStack deployment and 20-minute cluster expansion. This deployment speed demonstrates that the infrastructure is well-architected and can be reliably reproduced, both important factors when assessing operational risk.
All OpenStack services run in Docker containers deployed through Kolla-Ansible, which provides clear version tracking and component consistency. This containerized approach makes it possible for reviewers to validate service health, replication status, and configuration consistency during the diligence process.
Fixed-Cost Model Simplifies Financial Evaluation
The fixed-cost pricing model directly addresses one of the most challenging aspects of cloud diligence: projecting future infrastructure costs. Rather than trying to model usage-based pricing against growth scenarios, costs link directly to physical hardware capacity (CPU, memory, NVMe storage, and network bandwidth).
This structure allows diligence reviewers to project infrastructure costs accurately against growth forecasts. If the target company needs to support 2x customer growth, you can model the additional hardware capacity required and calculate precise cost implications. Egress is billed on a 95th percentile basis, which discards the highest 5% of traffic samples in each billing period, so short-term spikes don’t inflate overall cost projections.
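For reviewers unfamiliar with 95th percentile billing, a short worked example clarifies the mechanics. The five-minute sampling interval and the traffic values below are synthetic, not OpenMetal’s exact measurement procedure:

```python
# Worked example of 95th percentile bandwidth billing with synthetic data.
# A 30-day month at 5-minute sampling yields ~8,640 samples.
import random

random.seed(0)
bandwidth_mbps = [random.gauss(400, 80) for _ in range(8640)]    # steady baseline traffic
bandwidth_mbps += [random.gauss(3000, 300) for _ in range(200)]  # short-lived spikes

samples = sorted(bandwidth_mbps)
cutoff = int(len(samples) * 0.95) - 1       # drop the top 5% of samples
billable_mbps = samples[cutoff]

print(f"Peak sample:     {max(samples):,.0f} Mbps")
print(f"95th percentile: {billable_mbps:,.0f} Mbps  (the figure billing is based on)")
```

Because the spikes occupy less than 5% of the month, they fall entirely inside the discarded samples and the billable rate stays close to the baseline.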
Hardware Specifications Provide Clear Capacity Assessment
Most diligence-ready environments use Medium V4 or Large V4 hardware configurations. The Medium V4 includes dual Intel Xeon Silver 4510 CPUs with 256 GB DDR5 RAM and up to six NVMe slots, offering balanced compute and I/O performance for moderate workloads.
The Large V4 uses dual Intel Xeon Gold 6526Y CPUs with 512 GB DDR5 RAM, supporting higher performance requirements and data-intensive workloads. For companies with more demanding needs, XL V4 and XXL V4 configurations provide 1 TB and 2 TB DDR5 memory respectively.
These hardware details give diligence teams a direct view of compute and memory capacity. Rather than trying to translate abstract instance types into actual performance capabilities, you can assess whether the physical specifications align with application requirements.
Security Features Can Be Verified Through Hardware
From a security and compliance perspective, OpenMetal V4 servers include Intel Trust Domain Extensions (TDX) and Software Guard Extensions (SGX). These hardware security features can be verified through BIOS configuration and remote attestation logs, providing evidence of trusted execution and isolated compute domains.
Combined with DDoS protection up to 10 Gbps per IP and the ability for customers to announce their own IP blocks, reviewers can validate network control, data sovereignty, and risk posture. This level of verification is difficult to achieve in multi-tenant public cloud environments where hardware access is abstracted away.
Conducting Infrastructure Assessment with OpenMetal
When your technical diligence team evaluates a portfolio company running on OpenMetal infrastructure, the assessment process becomes more direct and verifiable than traditional public cloud reviews.
Request Audit-Level Access
OpenMetal can provide temporary audit-level access to the target company’s infrastructure during diligence. This access allows your technical team to directly inspect configurations, review logs, and validate network topology without relying solely on documentation or screenshots provided by the seller.
The OpenStack Horizon dashboard provides comprehensive visibility into compute instances, storage volumes, networking configuration, and security policies. Your team can verify resource utilization, identify potential optimization opportunities, and confirm that configurations align with security best practices.
Review Infrastructure Costs Against Capacity
Because OpenMetal pricing is fixed per server rather than usage-based, financial evaluation becomes straightforward. Review the current hardware allocation and compare it against actual resource utilization to identify optimization opportunities.
If the target company is running workloads on Large V4 hardware but utilization patterns show they could operate efficiently on Medium V4 configurations, this represents an immediate cost reduction opportunity that flows directly to EBITDA improvement post-acquisition.
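A simple way to frame that check is to test whether observed peaks fit on the smaller configuration with comfortable headroom. In the sketch below, the memory figure comes from the Medium V4 configuration described earlier; the observed peaks, the 30% headroom target, and the assumption that the Large V4 has roughly twice the compute of the Medium V4 are illustrative simplifications:

```python
# Minimal right-sizing check: do observed peaks fit on a smaller configuration?
# Observed peaks and the headroom target are hypothetical.
medium_v4_ram_gb = 256            # from the Medium V4 configuration
headroom = 0.30                   # keep 30% spare capacity after the move

observed_peak_ram_gb = 140        # peak memory use measured during diligence (assumed)
observed_peak_cpu_pct = 35        # peak CPU % on the current Large V4 (assumed)

fits_ram = observed_peak_ram_gb <= medium_v4_ram_gb * (1 - headroom)
# Simplifying assumption: Large V4 has ~2x the compute of Medium V4,
# so CPU demand roughly doubles as a percentage after the move.
fits_cpu = observed_peak_cpu_pct * 2 <= 100 * (1 - headroom)

if fits_ram and fits_cpu:
    print("Workload fits on Medium V4 with headroom: right-sizing candidate.")
else:
    print("Keep Large V4, or revisit after workload optimization.")
```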
Validate Performance and Reliability
Request performance metrics and uptime data for the evaluation period. Because OpenMetal infrastructure runs on dedicated hardware rather than shared resources, performance should be consistent and predictable. Variability in performance metrics could indicate application-level issues rather than infrastructure constraints.
Check replication and backup configurations to verify disaster recovery capabilities. OpenMetal’s Ceph storage provides built-in replication, but reviewing the configuration ensures appropriate redundancy levels for the company’s data protection requirements.
Assess Scalability Path
Model growth scenarios against infrastructure capacity. If the target company projects 3x customer growth over 24 months, calculate the additional hardware capacity required and the associated costs. Because hardware specifications are transparent and costs are fixed, these projections can be precise rather than estimated.
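A projection like that reduces to a few lines of arithmetic once per-server capacity and pricing are known. In the sketch below, the customers-per-server density and the monthly price per server are hypothetical placeholders, not OpenMetal pricing:

```python
# Growth-scenario cost projection against fixed per-server pricing.
# The density and price figures are placeholders for illustration.
import math

current_servers = 6
customers_per_server = 500        # observed density today (assumed)
monthly_cost_per_server = 1200    # fixed $/month per server (placeholder)

current_customers = current_servers * customers_per_server

for growth in (1.0, 2.0, 3.0):
    customers = current_customers * growth
    servers_needed = math.ceil(customers / customers_per_server)
    monthly_cost = servers_needed * monthly_cost_per_server
    print(f"{growth:.0f}x customers -> {servers_needed} servers, ${monthly_cost:,}/month")
```

Because pricing is fixed per server, the same logic works for any growth scenario; the only inputs that need validation are the density assumption and the hardware configuration.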
Validate that the architecture can scale horizontally by adding additional servers to the cluster. OpenMetal’s 20-minute cluster expansion capability demonstrates that scaling is operationally straightforward and won’t require extended planning or architectural changes.
Engineer-to-Engineer Communication
OpenMetal provides engineer-to-engineer onboarding and dedicated Slack channels for direct communication. During technical diligence, this allows your team to ask specific technical questions, request configuration details, or schedule live sessions to review infrastructure topology.
This direct access to engineering expertise helps your diligence team quickly resolve technical questions that might otherwise require extended back-and-forth through intermediaries. When evaluating migration complexity or optimization opportunities, having direct technical dialogue accelerates the assessment process.
Red Flags to Watch for During Cloud Diligence
Certain patterns during technical diligence indicate potential problems that could affect valuation or post-acquisition integration.
Lack of Cost Monitoring or Tracking
Without clear ownership of cloud resources, it becomes difficult to track who is responsible for excess spending, and sudden demand spikes create unpredictable costs that complicate budgeting. Companies that haven’t implemented cost tracking tools or FinOps practices often have hidden inefficiencies that will require post-acquisition remediation.
High Infrastructure Costs Relative to Revenue
If infrastructure costs approach or exceed 50% of cost of revenue, this indicates significant optimization opportunities but also suggests the company hasn’t prioritized operational efficiency. While this creates upside potential, it also raises questions about management’s operational discipline.
Over-Provisioned Resources
Consistently low utilization rates across compute resources suggest defensive over-provisioning. While some headroom is appropriate, utilization rates below 40% indicate wasteful spending that could be optimized.
Complex Multi-Cloud Architectures
A multi-cloud strategy may offer cost advantages and reduce dependency on a single vendor. However, this approach should be weighed carefully against the added complexity it introduces. Companies running workloads across multiple providers without clear architectural reasons may have evolved infrastructure organically rather than strategically, creating operational complexity that increases costs and slows development.
Inability to Answer Basic Infrastructure Questions
If the target company’s technical team cannot quickly answer questions about resource utilization, data transfer patterns, or scaling capabilities, this suggests infrastructure management maturity gaps that could create post-acquisition challenges.
Post-Acquisition Infrastructure Optimization
Once you’ve completed technical diligence and closed the acquisition, infrastructure optimization becomes a value creation lever that directly impacts EBITDA.
Implementing Cost Tracking and Governance
A financial operations (FinOps) culture emphasizes collaboration between finance, development, and operations teams, ensuring that cost efficiency is prioritized throughout the software development lifecycle. Implementing FinOps practices across your portfolio creates consistent approaches to infrastructure management and cost optimization.
Right-Sizing Infrastructure
Review actual resource utilization and adjust hardware allocations to match workload requirements. Moving from over-provisioned configurations to appropriately sized infrastructure typically reduces costs by 30-50% without impacting performance.
Consolidating Environments
Portfolio companies often run separate development, staging, and production environments that duplicate infrastructure costs. Consolidating non-production environments onto shared infrastructure while maintaining proper isolation reduces costs without affecting operations.
Standardizing Across Portfolio
For PE firms managing multiple SaaS companies, infrastructure consistency across the portfolio creates operational efficiencies and knowledge transfer opportunities. Teams can share best practices, automation scripts, and optimization strategies that improve financial performance consistently across holdings.
The Financial Impact of Infrastructure Clarity
Clear infrastructure visibility during technical diligence provides confidence in valuation modeling and identifies optimization opportunities that flow directly to value creation.
High-growth software companies often trade at 24-25x gross profit multiples, meaning every dollar of infrastructure cost removed from cost of revenue adds a dollar of gross profit and roughly $24-25 of market capitalization. Infrastructure improvements don’t just reduce monthly costs; they fundamentally improve a company’s financial profile.
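As a quick worked example of that arithmetic (the savings figure is hypothetical):

```python
# Hypothetical illustration: infrastructure savings valued at a gross-profit multiple.
monthly_infra_savings = 40_000       # reduction in cost of revenue, $/month (assumed)
gross_profit_multiple = 24           # low end of the 24-25x range cited above

annual_gross_profit_gain = monthly_infra_savings * 12
implied_value_gain = annual_gross_profit_gain * gross_profit_multiple

print(f"Annual gross profit gain: ${annual_gross_profit_gain:,}")
print(f"Implied valuation impact: ${implied_value_gain:,}")
```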
For target companies whose cloud costs would quietly erode EBITDA, identifying these inefficiencies during diligence allows you to factor optimization potential into acquisition modeling. Post-close implementation of infrastructure improvements becomes a clear path to EBITDA enhancement.
Industry analyses converge on a consistent pattern: repatriation results in one-third to one-half the cost of running equivalent workloads in the cloud. Understanding this optimization potential during diligence helps you assess whether a target’s infrastructure costs represent risk or opportunity.
Making Infrastructure Assessment Standard Practice
As cloud spending continues to grow, the adoption of FinOps principles is no longer optional. Organizations that fail to prioritize cloud cost management risk falling behind competitors who optimize their spending to support innovation and growth.
For private equity firms conducting technical diligence, infrastructure assessment should be a standard component of every SaaS evaluation. The questions you ask, the metrics you review, and the verification steps you take during diligence directly impact your ability to accurately value targets and identify post-acquisition value creation opportunities.
OpenMetal infrastructure supports this diligence process by providing transparent, auditable environments where costs link directly to physical capacity rather than abstract usage metrics. You can verify network topology, inspect security configurations, and validate performance characteristics through direct access rather than relying on vendor assertions or incomplete documentation.
When your technical diligence team can answer infrastructure questions with confidence, you make better investment decisions, model valuations more accurately, and identify optimization paths that create measurable value post-acquisition.
The infrastructure layer doesn’t have to be the most expensive and least understood part of technical diligence. With the right approach and the right infrastructure platform, it becomes a source of competitive advantage that drives EBITDA improvement and portfolio value creation.
Ready to evaluate how transparent infrastructure supports better technical diligence across your portfolio? Explore OpenMetal’s PE firm program →
[1] Vaultinum. “Optimizing Cloud Costs: a key factor in Tech Due Diligence.”