Introduction
Microsoft has launched a new era in AI infrastructure with the unveiling of Fairwater, the largest and most sophisticated AI datacenter the company has ever built. Located in Wisconsin, this facility represents a multi-billion dollar investment that completely redefines the concept of supercomputing for artificial intelligence.
The AI Datacenter Revolution
An AI datacenter is a specialized facility designed specifically for training and running large-scale artificial intelligence models and applications. Unlike traditional cloud datacenters, which are optimized to run many smaller, independent workloads such as hosting websites, email, or business applications, AI datacenters function as one massive AI supercomputer.
Fairwater spans 315 acres and houses three massive buildings with a combined 1.2 million square feet under roofs. Construction required 46.6 miles of deep foundation piles, 26.5 million pounds of structural steel, and 120 miles of medium-voltage underground cable.
Architecture and Performance
The heart of the datacenter consists of interconnected clusters of NVIDIA GB200 servers with millions of compute cores and exabytes of storage. Each rack packs 72 NVIDIA Blackwell GPUs, tied together in a single NVLink domain that delivers 1.8 terabytes per second of GPU-to-GPU bandwidth and gives every GPU access to 14 terabytes of pooled memory.
This configuration allows each rack to operate as a single, giant accelerator, capable of processing 865,000 tokens per second, the highest throughput of any cloud platform available today. The datacenter as a whole will deliver 10X the performance of today's fastest supercomputer.
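The rack-level figures above can be sanity-checked with simple arithmetic. The constants below come straight from the article; the derived per-GPU numbers are illustrative back-of-the-envelope values, not official specifications.

```python
# Back-of-the-envelope check of the published GB200 rack figures.
GPUS_PER_RACK = 72             # Blackwell GPUs in one NVLink domain
POOLED_MEMORY_TB = 14          # pooled memory accessible to every GPU
RACK_TOKENS_PER_SEC = 865_000  # stated rack-level throughput

# Each GPU's share of the pooled memory, and of the rack's throughput
memory_per_gpu_gb = POOLED_MEMORY_TB * 1024 / GPUS_PER_RACK
tokens_per_gpu = RACK_TOKENS_PER_SEC / GPUS_PER_RACK

print(f"Memory share per GPU: {memory_per_gpu_gb:.0f} GB")   # ~199 GB
print(f"Throughput per GPU:   {tokens_per_gpu:.0f} tokens/s") # ~12014
```

The ~199 GB per-GPU share lines up with the HBM capacity of a Blackwell-class GPU, which is why the rack can be treated as one accelerator with uniformly accessible memory.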
Advanced Networking for AI
To ensure low-latency communication across multiple layers, Microsoft has implemented a sophisticated network architecture. At the rack level, GPUs communicate over NVLink and NVSwitch at terabytes per second. To connect multiple racks into a pod, Azure uses both InfiniBand and Ethernet fabrics that deliver 800 Gbps in a full fat-tree, non-blocking architecture.
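To see why a non-blocking fat tree scales so well, here is the textbook k-ary fat-tree capacity math. Azure's actual pod topology, switch radix, and oversubscription are not public, so treat this as an illustrative sketch of the general design, not Microsoft's configuration.

```python
# Textbook three-tier k-ary fat tree: with k-port switches, the fabric
# supports k**3 / 4 hosts, each with a full link's worth of bisection
# bandwidth (i.e., non-blocking). Link speed here matches the article.

def fat_tree_capacity(k: int, link_gbps: int = 800) -> tuple[int, float]:
    """Return (host count, full bisection bandwidth in Tbps) for a
    non-blocking k-ary fat tree built from k-port switches."""
    hosts = k ** 3 // 4
    bisection_tbps = hosts * link_gbps / 1000  # one full link per host
    return hosts, bisection_tbps

hosts, bisection = fat_tree_capacity(k=64)  # hypothetical 64-port switches
print(f"64-port fat tree: {hosts} hosts, {bisection:.0f} Tbps bisection")
```

Even with modest 64-port switches, the formula yields tens of thousands of non-blocking endpoints, which is what lets a pod of GPU racks behave like one tightly coupled machine.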
Cooling Innovations
Traditional air cooling can't handle the density of modern AI hardware. Fairwater uses advanced liquid cooling systems with integrated pipes that circulate cold liquid directly into servers, extracting heat efficiently.
The closed-loop system eliminates water waste: it is filled once during construction and the water is continuously recirculated, with no evaporation losses. Over 90% of the datacenter's capacity uses this system, supported by the second-largest water-cooled chiller plant on the planet.
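A rough heat balance shows why liquid cooling is mandatory at these densities: the coolant flow needed to carry away a rack's heat follows Q = ṁ · c_p · ΔT. The rack power and temperature rise below are assumed round numbers for illustration, not published Fairwater figures.

```python
# Heat balance for direct liquid cooling: solve Q = m_dot * c_p * dT
# for the coolant mass flow m_dot needed to remove a rack's heat.

RACK_POWER_W = 120_000  # assumed ~120 kW for a dense GB200-class rack
CP_WATER = 4186         # specific heat of water, J/(kg*K)
DELTA_T = 10            # assumed coolant temperature rise, K

flow_kg_s = RACK_POWER_W / (CP_WATER * DELTA_T)
flow_l_min = flow_kg_s * 60  # water is ~1 kg per litre

print(f"Required coolant flow: {flow_kg_s:.2f} kg/s (~{flow_l_min:.0f} L/min)")
# -> roughly 2.87 kg/s, about 172 L/min per rack
```

Moving that much heat with air would require orders of magnitude more volumetric flow, which is why pipes run directly into the servers instead.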
Storage and Compute for AI
To support the AI cluster, an entirely separate datacenter infrastructure is needed to store and process the data the cluster uses and generates. The Wisconsin AI datacenter's storage systems stretch the length of five football fields.
Microsoft reengineered Azure storage for the most demanding AI workloads. Each Azure Blob Storage account can sustain over 2 million read/write transactions per second, with millions of accounts available to elastically scale to meet virtually any data requirement.
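The scale-out model is simple multiplication: aggregate throughput is the per-account limit times the number of accounts. The per-account rate below is the figure from the article; the target workload is an assumed example.

```python
# Scaling out Azure Blob Storage by account count: aggregate throughput
# is accounts * per-account rate. Per-account figure is from the article.

PER_ACCOUNT_TPS = 2_000_000  # sustained read/write transactions per second

def accounts_needed(target_tps: int) -> int:
    """Minimum number of storage accounts to sustain target_tps
    (ceiling division)."""
    return -(-target_tps // PER_ACCOUNT_TPS)

# Hypothetical workload needing 50M transactions/s across the cluster:
print(accounts_needed(50_000_000))  # -> 25
```

Because accounts are the unit of elasticity, capacity grows linearly with no single bottleneck, which is what "elastically scale to meet virtually any data requirement" amounts to in practice.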
AI WAN: Global Network of AI Datacenters
These new AI datacenters are part of a global network of Azure AI datacenters, interconnected via a Wide Area Network (WAN). This isn't just one building: it's a distributed, resilient, and scalable system that operates as a single, powerful AI machine.
This represents a fundamental shift in how we think about AI supercomputers. Instead of being limited by the walls of a single facility, Microsoft is building a distributed system where compute, storage and networking resources are seamlessly pooled and orchestrated across datacenter regions.
Global Expansion
Beyond Fairwater, Microsoft has announced plans for other strategic AI datacenters:
- Narvik, Norway: partnership with nScale and Aker JV to develop a new hyperscale AI datacenter
- Loughton, UK: collaboration with nScale to build the UK's largest supercomputer
- Multiple identical Fairwater datacenters under construction in other US locations
Conclusion
Microsoft's Fairwater datacenter represents a multi-billion dollar investment that redefines global AI infrastructure. With hundreds of thousands of cutting-edge AI chips and seamless connections to Microsoft's global cloud network of over 400 datacenters in 70 regions, these facilities democratize access to AI services on a global scale.
Microsoft's distributed approach, where software and hardware are optimized as one purpose-built system, is unleashing a new era of cloud-powered intelligence that is secure, adaptive, and ready for the future of AI.
FAQ
What is an AI datacenter and how does it differ from traditional datacenters?
An AI datacenter is a specialized facility designed for training and running large-scale AI models, functioning as one massive supercomputer instead of managing independent workloads like traditional datacenters.
How powerful is Microsoft's Fairwater AI datacenter?
Fairwater will deliver 10X the performance of the world's fastest supercomputer today, with racks capable of processing 865,000 tokens per second.
What chips does Microsoft use in its AI datacenters?
Microsoft uses NVIDIA GB200 servers with Blackwell GPUs, and future datacenters in Norway and the UK will use GB300 chips with even more pooled memory per rack.
How does cooling work in Microsoft AI datacenters?
Microsoft uses advanced closed-loop liquid cooling systems that circulate cold liquid directly into servers, ensuring zero water waste and continuous reuse.
Where is Microsoft building new AI datacenters?
Microsoft is building AI datacenters in Wisconsin (Fairwater), Norway (Narvik), UK (Loughton), and multiple other locations across the United States.
How do Microsoft AI datacenters connect to each other?
AI datacenters are interconnected via AI WAN (Wide Area Network) enabling them to operate as a single distributed system for large-scale AI training across geographically diverse regions.