Today’s Stock Market in 2-Minutes
December 16, 2025
The AI data center business has rapidly evolved from a niche infrastructure buildout into one of the most capital-intensive growth markets in technology. What began as a race to secure GPUs has become a broader competition for power capacity, cooling capability, networking throughput, real estate, and long-term energy contracts—and increasingly, this transformation is playing out in the public markets.
AI-optimized facilities are fundamentally distinct from traditional enterprise data centers. Large-scale model training and high-volume inference workloads require dense clusters of accelerators, high rack power densities, advanced cooling, and ultra-fast interconnects. McKinsey estimates that global investment in AI-ready data centers could reach $5.2 trillion by 2030—a testament to the industry’s scale and cost intensity.([McKinsey & Company][1])
This massive capital demand has fueled both private investment and a new wave of public market interest.
Nvidia (NVDA) remains the dominant supplier of GPUs and related infrastructure components that power much of the world’s AI compute. As Nvidia’s chips power clusters across hyperscalers and emerging GPU cloud providers, its financial performance and valuation reflect that strategic centrality. In 2025, Nvidia has attained unprecedented market valuations driven by AI demand and strong data-center revenue contributions.([Wikipedia][2])
A helpful way to understand investor expectations in the AI data center ecosystem is through valuation multiples, particularly enterprise-value-to-sales (EV/Sales) and price-to-sales (P/S) ratios.
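As a concrete illustration, here is a minimal sketch of how these two multiples are computed. Every figure in the snippet is a hypothetical placeholder chosen for the example; none corresponds to an actual company discussed in this note.

```python
# Minimal sketch of the two valuation multiples discussed above.
# All inputs are hypothetical placeholders, not actual company financials.

def price_to_sales(market_cap: float, revenue: float) -> float:
    """P/S: equity market capitalization divided by revenue."""
    return market_cap / revenue

def ev_to_sales(market_cap: float, total_debt: float, cash: float, revenue: float) -> float:
    """EV/Sales: (market cap + total debt - cash) divided by revenue."""
    enterprise_value = market_cap + total_debt - cash
    return enterprise_value / revenue

# Hypothetical AI infrastructure provider (assumed figures)
market_cap = 60e9   # $60B equity value
total_debt = 8e9    # $8B debt
cash = 2e9          # $2B cash and equivalents
revenue = 5e9       # $5B projected annual revenue

print(f"P/S      = {price_to_sales(market_cap, revenue):.1f}x")                 # 12.0x
print(f"EV/Sales = {ev_to_sales(market_cap, total_debt, cash, revenue):.1f}x")  # 13.2x
```

The gap between the two ratios simply reflects net debt: leverage raises EV/Sales above P/S, which is why capital-heavy operators are often compared on enterprise value rather than equity value alone.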
CoreWeave (CRWV), a GPU-centric cloud and AI infrastructure provider, has traded at double-digit revenue multiples, a reflection of its growth profile and capital plans. Some market commentary has suggested valuations on the order of ~13× projected 2025 revenues at points of heightened investor enthusiasm.([The Wall Street Journal][3])
Other data center and infrastructure players have historically commanded high EBITDA multiples (20×–30×) and elevated revenue multiples compared with broader markets because of the anticipated long-term cash flows associated with scale assets.([Oliver Wyman][4])
Across broader tech markets, AI-exposed companies with the highest growth expectations have seen median EV/Revenue multiples in the mid-20x to 30x range in both private and public valuations.([Aventis Advisors][5])
By comparison:
The S&P 500’s average price-to-sales ratio hovers around 2.8x, a useful baseline for valuation levels across the broader market.([Eqvista][6])
Anecdotal valuation comparisons also show how different providers are being priced relative to revenues:
Some infrastructure providers have traded at 20x+ sales, while more speculative or early-stage GPU infrastructure firms have been priced at even higher multiples in private markets.([skepticallyoptimistic.substack.com][7])
These numbers highlight the market’s growth premium for companies that can demonstrate rapidly expanding revenue streams tied to AI compute demand.
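To put that premium in rough numerical terms, the short snippet below compares a hypothetical ~13x price-to-sales multiple (in line with the commentary cited above) against the ~2.8x S&P 500 baseline. The inputs are illustrative only.

```python
# Back-of-the-envelope growth premium: how many times richer, on a
# price-to-sales basis, a hypothetical AI infrastructure stock is priced
# versus the cited ~2.8x S&P 500 average. Figures are illustrative only.

ai_infra_ps = 13.0   # hypothetical AI infrastructure P/S multiple (assumed)
sp500_ps = 2.8       # approximate S&P 500 average P/S cited above

premium = ai_infra_ps / sp500_ps
print(f"Implied premium: {premium:.1f}x the broad-market multiple")  # ~4.6x
```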
This enthusiasm has helped propel companies like CoreWeave (CRWV) into the spotlight. CoreWeave specializes in GPU cloud infrastructure for AI, and its access to Nvidia GPUs has helped it grow rapidly—even as investors debate the sustainability of its high revenue multiples and capital intensity.([Wikipedia][8])
WhiteFiber (WYFI) exemplifies another publicly traded entrant focused on data-center infrastructure and specialized GPU deployments. As an infrastructure-adjacent operator, its valuation also reflects expectations for future AI-related revenue growth rather than current earnings.([Global Equity Briefing][9])
Data center landlords and interconnection specialists—such as Equinix (EQIX) and Digital Realty (DLR)—don’t always command the same multiples as pure AI compute plays, but they benefit indirectly from the same structural demand as hyperscalers and smaller cloud operators expand capacity. These companies often trade at premiums to traditional real estate because of long-term contracted revenue streams and asset scarcity.
Supporting infrastructure suppliers like Vertiv (VRT) draw valuation interest from their critical role in the power delivery and cooling systems that enable high-density AI racks.
Against that backdrop, AXE Compute (AGPU) represents a newer, asset-light access model in the AI infrastructure landscape. Rather than owning data centers or GPU hardware, AXE Compute aggregates capacity from global partners and resells or brokers access to CPU and GPU compute at scale—often targeting buyers seeking more competitive pricing than they could secure by contracting individually with large operators.
This intermediary model avoids the capex burden associated with owning facilities and racks, but its market valuation will hinge on its ability to demonstrate durable recurring revenue and disciplined execution. Asset-light models tend to be evaluated differently than heavy-asset operators; market multiples will reflect the balance between recurring revenue strength and execution risk.
Across all segments of the AI data center market, two forces dominate valuations: the expectation of rapidly expanding, AI-driven revenue and the capital intensity required to capture it. These dynamics push investors to price in future growth more aggressively than in many other sectors—even as execution risk remains high amid supply chain constraints, power availability challenges, and competitive pressure.([McKinsey & Company][1])
If 2023–2024 was the era of “AI needs GPUs,” 2025 increasingly looks like the era of “AI needs power, space, and scalable infrastructure financed efficiently.” The public companies that best navigate this landscape will combine access to key inputs (like Nvidia GPUs), reliable revenue streams, and operational resilience.
From Nvidia (NVDA) at the chip layer to GPU-centric cloud providers like CoreWeave (CRWV), infrastructure platforms like WhiteFiber (WYFI), and asset-light aggregators like AXE Compute (AGPU), the public markets are starting to price the long road ahead—balancing growth expectations with the reality of capital-intensive buildouts and evolving competitive dynamics.
—
[1]: https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers “The cost of compute: A $7 trillion race to scale data centers”