The Chip That Changed Everything
When Nvidia shipped its first Blackwell B200 GPUs in late 2025, data center operators did something unusual: they started placing orders 18 months in advance. Microsoft alone committed to $10 billion in Blackwell-based infrastructure for 2026, according to its Q1 filing.
This is not hype. This is companies betting their AI strategies on a single chip architecture.
By the Numbers
Blackwell delivers 4x the training performance of the previous Hopper generation at roughly the same power draw. For inference — the actual running of AI models that powers products like Claude and ChatGPT — the improvement is closer to 7x.
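A quick way to see why the "same power draw" detail matters: at equal power, throughput gains convert directly into energy savings per unit of work. Here is a minimal sketch, taking the article's 4x and 7x multipliers as given and assuming identical node power across generations (a simplification, not a measurement):

```python
# Rough illustration of the headline multipliers above. The 4x (training)
# and 7x (inference) ratios are the article's figures; the assumption that
# power draw is identical across generations is a simplification.

training_speedup = 4.0    # Blackwell training throughput vs. Hopper
inference_speedup = 7.0   # Blackwell inference throughput vs. Hopper

# Same power, more work per second: energy (and energy cost) per unit of
# work drops by the same factor that throughput rises.
print(f"Energy per training run: {1 / training_speedup:.0%} of Hopper's")
print(f"Energy per inferred token: {1 / inference_speedup:.0%} of Hopper's")
```

That is 25% of Hopper's energy per training run and roughly 14% per inferred token, which is why the efficiency story dominates the raw speed story.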
That efficiency gain matters because AI infrastructure costs are exploding. Goldman Sachs estimates global AI infrastructure spending will hit $200 billion in 2026, up from $130 billion in 2025.
Who Benefits
The immediate winners are obvious: Nvidia, whose data center revenue hit $38 billion in Q4 2025 alone, and the hyperscalers (AWS, Azure, GCP), which can offer Blackwell instances at premium pricing.
But the more interesting story is downstream. Startups that previously needed $10M in compute to train competitive models can now do the same work for $2-3M, a range that follows almost directly from the 4x training speedup (see the sketch below). That is reshaping who can compete in AI.
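The arithmetic behind that range is worth making explicit. In this back-of-envelope sketch, the $10M budget comes from the paragraph above; the pricing premium is a hypothetical knob, not a quoted cloud rate:

```python
# Hypothetical training-budget math behind the $10M -> $2-3M claim.
# All inputs are illustrative assumptions, not quoted GPU-hour prices.

hopper_budget = 10_000_000   # prior compute budget on Hopper ($)
training_speedup = 4.0       # Blackwell vs. Hopper, per the figures above
price_premium = 1.0          # assume similar $/GPU-hour; raise toward ~1.2
                             # if Blackwell instances carry a premium

blackwell_budget = hopper_budget / training_speedup * price_premium
print(f"Equivalent Blackwell budget: ${blackwell_budget:,.0f}")
# -> $2,500,000 at parity pricing; a ~20% instance premium still
#    lands inside the $2-3M range cited above.
```

The point of the sketch is that the cost collapse does not require heroic assumptions; the throughput gain alone does most of the work.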
What to Watch
AMD's MI400 series ships in Q3 2026 and promises competitive performance at lower cost. Intel is effectively out of the high-end AI chip race. And custom chips from Google (TPU v6) and Amazon (Trainium3) are gaining traction for specific workloads.
The AI chip market is a three-horse race at best. And right now, Nvidia is lapping the field.