Microsoft’s AI Build-Out Rewrites the Balance Sheet as Cloud Crosses $50 Billion
Highlights
- Revenue: $81.3B (+17% YoY, constant currency)
- Operating income: +21% YoY (constant currency); operating margin 47%
- EPS: $4.14 (+24% YoY, ex-OpenAI impact, constant currency)
- Microsoft Cloud revenue: $51.5B (+26% YoY, constant currency)
- Azure & other cloud services: +39% YoY (constant currency)
- Fabric analytics run-rate: >$2B, revenue +60% YoY, 31,000 customers
- Microsoft 365 commercial seats: >450M (+6% YoY); Copilot seats: 15M paid, seat adds +160% YoY
- GitHub Copilot: 4.7M paid subscribers (+75% YoY)
- Commercial RPO: $625B (+10% YoY), with 25% to be recognized in 12 months (+39% YoY)
- Operating cash flow: $35.8B (+60% YoY); capital expenditures: $37.5B
- Company gross margin: 68% (down slightly YoY on heavy AI infrastructure spend)
- More Personal Computing revenue: $14.3B (-3% YoY); gaming revenue: -9% YoY
A full-stack AI narrative – and a very heavy wallet
Microsoft’s latest quarter offered a paradox that investors in big tech are going to have to get used to: a company delivering robust growth and expanding margins, while simultaneously shovelling cash into capital expenditures at a rate that would have seemed implausible even a year ago.
Satya Nadella set the tone early. Microsoft Cloud revenue surpassed $50 billion in a single quarter for the first time, rising 26% year on year in constant currency, powered by Azure’s 39% growth. He described the current moment as “the beginning phases of AI diffusion and its broad GDP impact,” then laid out a vision of a tech stack being rebuilt from silicon to software agents.
Yet the real story sat in the numbers that Amy Hood, chief financial officer, unpacked: $81.3 billion of revenue, up 17% in constant currency; operating income up 21%; and EPS up 24% after adjusting for the accounting impact of Microsoft’s OpenAI stake. Operating margin climbed to 47%, even as the company spent $37.5 billion on capital expenditures in just one quarter, roughly two-thirds of it on short-lived GPU and CPU assets.
The message was clear: Microsoft is turning its balance sheet into an AI engine, and so far the income statement is keeping up.
Cloud, tokens and a new metric for the AI age
What emerged from Nadella’s remarks was a new vocabulary for understanding hyperscale infrastructure in the AI era. The key variable, he argued, is no longer simply utilization or raw capacity; it is “tokens per watt per dollar” – a measure that captures performance, energy efficiency and cost in one. This is the yardstick against which Microsoft is now designing data centres, silicon and software.
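Microsoft did not spell out a formula for this metric on the call, but one plausible reading is a simple ratio of token output to power and cost. The sketch below is purely illustrative: the function name and every number in it are hypothetical placeholders, not Microsoft disclosures.

```python
# Hypothetical illustration of a "tokens per watt per dollar" style metric.
# None of these numbers come from Microsoft; they only show how performance,
# energy and cost can be folded into a single design yardstick.

def tokens_per_watt_per_dollar(tokens_served: float,
                               avg_power_draw_watts: float,
                               total_cost_dollars: float) -> float:
    """Tokens produced, normalized by both power draw and all-in cost."""
    return tokens_served / (avg_power_draw_watts * total_cost_dollars)

# Example: a cluster serving 1e12 tokens over a period, drawing an average of
# 2 MW, at an assumed all-in cost (depreciation + energy + operations) of $5M.
score = tokens_per_watt_per_dollar(
    tokens_served=1e12,
    avg_power_draw_watts=2_000_000,
    total_cost_dollars=5_000_000,
)
print(f"{score:.4f} tokens per watt-dollar")  # higher is better
```

Framed this way, gains in silicon, cooling, data-centre design and model-serving software all show up in the same headline number, which is presumably why Microsoft treats it as a single design target rather than tracking utilization or capacity alone.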
On the infrastructure side, the company added nearly one gigawatt of capacity in the quarter alone, knitting assets together into what Nadella called an “AI superfactory.” A flagship example is the Fairwater complex spanning Atlanta and Wisconsin, linked via an AI wide-area network and using two-storey designs with liquid cooling. Higher-density GPUs there are intended to drive better performance and lower latency for high-scale AI training.
Crucially, Microsoft is no longer just a buyer of other people’s chips. Alongside NVIDIA and AMD parts, its in-house Maia accelerators and Cobalt CPUs are stepping into the spotlight. The Maia 200, which has now been brought online, is pitched as delivering more than 10 “flops” at FP4 precision and over 30% better total cost of ownership than the latest generation in Microsoft’s fleet. It will be deployed first for inferencing and synthetic data generation for the company’s superintelligence team, and then for Copilot and Foundry workloads.
On the CPU side, Cobalt 200 is described as a “big leap forward” with over 50% higher performance than Microsoft’s first custom cloud-native processor. The underlying strategy is to weave these custom parts into a heterogeneous fleet, while continuing to lean heavily on external vendors so that the company is never locked into a single generation of technology.
All the while, sovereignty considerations are reshaping the geographic footprint. Microsoft announced new data centre investments in seven countries during the quarter, expanding what Nadella claimed is the “most comprehensive set of sovereignty solutions” spanning public, private and national-partner clouds.
The rise of the agent platform – and Foundry’s role
Nadella framed this platform shift as a rewriting of software around “agents” – autonomous, task-focused AI systems – rather than traditional applications. To enable this, Microsoft is constructing what amounts to an operating system for agents: a model catalogue, tuning services, orchestration harnesses, context engineering tools, and safety, observability and security services.
Foundry, Microsoft’s AI development environment, sits at the centre of this narrative. The company now offers what it calls the broadest selection of models among hyperscalers, ranging from OpenAI’s GPT-5.0.2 to Anthropic’s Claude 4.5, alongside region-specific and sovereign models and Microsoft’s own first-party models optimized for coding and security.
More than 1,500 customers have used both Anthropic and OpenAI models on Foundry, and over 250 are on track to process more than one trillion tokens there this year. The number of customers spending over $1 million per quarter on Foundry grew nearly 80%, reflecting not just experimentation but large-scale deployment.
The company is also pushing deeper into “context engineering”: Foundry Knowledge and Fabric are designed to give agents access to enterprise systems of record, operational and analytical data, and semi-structured content, all while respecting fine-grained permissions. Fabric, only two years after broad availability, has passed a $2 billion annual run-rate with more than 31,000 customers and 60% revenue growth, which Microsoft says makes it the fastest-growing analytics platform in the market.
Microsoft is betting that as enterprises embed their “tacit knowledge” into model weights, this will become a core form of intellectual property and, by extension, a core driver of cloud lock-in. Sovereignty, in Nadella’s telling, is as much about safeguarding those trained weights as it is about where data physically resides.
Governance for a world of proliferating agents
A less visible, but potentially important, part of the stack is agent governance. Over 80% of the Fortune 500 already have agents built with Copilot Studio and AgentBuilder, low-code tools intended for knowledge workers rather than developers.
To corral this proliferation, Microsoft introduced Agent 365, a control plane extending existing identity, governance, security and management policies into the agent world. The idea is that whatever controls a company uses across Microsoft 365 and Azure will now apply to agents, whether they run on Microsoft’s cloud or elsewhere. Partners from Adobe and Databricks to SAP and Workday are already integrating Agent 365, effectively acknowledging that agent management is emerging as a category in its own right.
For investors, this is a subtle but important point: Microsoft is not just selling agent execution; it is selling the scaffolding and compliance fabric around those agents, which historically has been a high-margin software franchise.
Copilot momentum: from daily habit to enterprise standard
If infrastructure and platforms are the base, Copilot is the capstone – and here the numbers are beginning to look like a meaningful franchise rather than just a feature.
Across consumer, productivity, development and security, Microsoft’s family of “high-value agentic experiences” is starting to create daily habits. In consumer, daily users of the standalone Copilot app have tripled year on year, with AI integrated into chat, search, browsing, shopping and the operating system.
But the real story is on the commercial side. Microsoft 365 Copilot is being recast as a stateful organizational agent, powered by what Nadella calls WorkIQ – an intelligence layer that reasons over employees, roles, documents, communications and history, all inside the customer’s security boundary. Microsoft claims this yields “unmatched” grounded accuracy and that this quarter saw the largest improvement in response quality yet.
That qualitative narrative is backed by quantitative traction:
- Daily active users of Microsoft 365 Copilot are up 10x year on year.
- Average conversations per user have doubled.
- Seat additions rose more than 160% year on year, with 15 million paid Copilot seats now in place.
- The number of customers with over 35,000 seats tripled; Publicis alone bought more than 95,000 seats, effectively rolling Copilot out to nearly all employees.
GitHub Copilot is seeing similar acceleration. Paid Copilot subscribers reached 4.7 million, up 75% year on year, with Copilot Pro Plus subscriptions for individual developers up 77% quarter on quarter. Large enterprises are beginning to standardize on this tooling: Siemens is adopting GitHub’s full platform after deploying Copilot to more than 30,000 developers.
Microsoft is trying to elevate Copilot from a simple code completion tool into an orchestration layer for AI coding agents. GitHub AgentHQ now coordinates models from Anthropic, OpenAI, Google, Cognition and xAI against a customer’s repositories, while Copilot CLI and Visual Studio Code extensions offer multiple form factors. The GitHub Copilot SDK allows developers to embed the same runtime – including multi-step planning and tool integration – directly into their own applications.
Security is another area where Copilot is being pressed into service. A dozen new and updated security Copilot agents shipped across Defender, Entra, Intune and Purview, and Microsoft is now rolling Security Copilot out to all E5 customers. The company highlighted Icertis cutting manual security incident triage time by 75%, and said that 24 billion Copilot interactions were audited by Purview this quarter, up ninefold.
In healthcare, Dragon Copilot is already in use by over 100,000 providers, documenting 21 million patient encounters in the quarter – triple the volume a year ago – as Mount Sinai Health moves towards a system-wide deployment. And in scientific R&D, Microsoft Discovery is being used by companies from Unilever to Synopsys to orchestrate specialised agents for literature review, hypothesis generation and simulation.
For investors trying to gauge durability, these are early but important data points: AI services are not just landing; they are embedding into workflows that tend to be sticky.
Segment dynamics: productivity strength, cloud acceleration, consumer drag
Beneath the AI narrative, the three principal segments exhibited a familiar pattern: commercial strength offset by consumer softness.
Productivity and Business Processes revenue rose 16% in constant currency to $34.1 billion. Microsoft 365 commercial cloud grew 17%, with average revenue per user driven higher by E5 licence adoption and Copilot uplift. The installed base of Microsoft 365 commercial seats climbed 6% to more than 450 million, with particular strength in small and mid-sized business and frontline worker offerings.
Microsoft 365 consumer cloud revenue increased 29% in constant currency, largely through ARPU expansion, while overall consumer subscriptions rose 6%. LinkedIn revenue grew 11% in constant currency, powered by marketing solutions, and Dynamics 365 revenue rose 19% across workloads. Segment operating margin climbed to 60%, helped by efficiency gains even as AI investments continued.
In the Intelligent Cloud segment, revenue reached $32.9 billion, up 29% in constant currency. Azure and other cloud services were slightly ahead of expectations with 39% growth, supported by what Hood called “ongoing efficiency gains across our fungible fleet” that allowed capacity to be reallocated to Azure and monetized within the quarter. On-premises server revenue jumped 21% in constant currency, boosted by hybrid demand, the launch of SQL Server 2025 and some pull-forward buying ahead of memory price increases.
AI infrastructure remains a drag on gross margins here: segment gross margin dollars rose 20%, but margin percentage fell year on year due to heavy AI investment and mix shift toward Azure, partially offset by efficiency improvements. Operating margin edged down to 42%, with higher AI spending largely balanced by operating leverage.
The More Personal Computing segment was the laggard. Revenue declined 3% to $14.3 billion. Windows OEM grew 5% as the Windows 10 end-of-support cycle continued to drive upgrades and as customers bought ahead of anticipated memory cost inflation, but devices revenue was flat overall. Search and news advertising ex-TAC increased 9% in constant currency but came in slightly below expectations due to execution issues and the normalization of third-party partnership benefits.
Gaming was the weak link: revenue fell 9% in constant currency, with Xbox content and services revenue down 6%, below expectations on softer first-party content. Impairment charges in the gaming business also lifted operating expenses, leaving segment operating income down 3% in constant currency and margins unchanged at 27%.
Capex as strategy: building a six-year AI machine
If there was a single number that encapsulated investors’ unease, it was the $37.5 billion of capital expenditure. Microsoft disclosed that roughly two-thirds of this went into short-lived assets – primarily GPUs and CPUs – with the rest devoted to long-lived infrastructure like data centre sites and power.
Operating cash flow, at $35.8 billion, was up 60% on strong cloud billings and collections, but free cash flow fell to $5.9 billion as cash capital expenditures rose and a smaller share of the spend was financed through leases. The company added $6.7 billion of finance leases in the quarter, mostly for large data centre sites, and paid $29.9 billion in cash for property, plant and equipment. Capital intensity is, for the moment, marching higher.
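Those figures tie out under the conventional definition of free cash flow as operating cash flow less cash paid for property, plant and equipment; a minimal cross-check using only the numbers above:

```python
# Reconciling the reported cash-flow figures (all in $ billions, for the quarter).
operating_cash_flow = 35.8
cash_paid_for_ppe = 29.9       # cash purchases of property, plant and equipment
finance_leases_added = 6.7     # financed rather than paid in cash, so excluded below

free_cash_flow = operating_cash_flow - cash_paid_for_ppe
print(round(free_cash_flow, 1))  # 5.9, matching the reported free cash flow
```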
Hood’s guidance suggests that this is not a one-off spike. While capex is expected to decline sequentially in the next quarter due to normal build-out timing, the mix of short-lived assets is likely to remain similar to Q2 as Microsoft races to “close the gap between demand and supply.” Rising memory prices, she warned, will also push capital costs up over time, though these will feed into cloud gross margins gradually as assets depreciate over six years.
The other balancing act is capacity allocation. Hood emphasised that Azure revenue growth is essentially constrained by how much capacity Microsoft chooses to allocate to the public cloud versus its own first-party services and R&D. Before that capacity ever reaches Azure customers, it is being used to support the rapid uptake of Microsoft 365 Copilot, GitHub Copilot and other Microsoft-built agents, as well as to accelerate product innovation by feeding GPUs to AI research teams.
To illustrate the effect, she offered a counterfactual: if all the GPUs that came online in the first half had been allocated to Azure, the Azure growth KPI would have been “over 40%.” Instead, Microsoft is consciously trading some near-term Azure growth for a broader portfolio of high-margin AI franchises and long-term R&D advances.
For investors anxious about whether this spending will pay off over the useful life of the hardware, Hood pointed to the structure of Microsoft’s contracts. Commercial remaining performance obligation (RPO) rose to $625 billion, up 10% year on year, with a weighted average duration of about two and a half years. Roughly a quarter of that will turn into revenue over the next 12 months – up 39% year on year – while the remainder, recognized beyond a year, grew 56%.
Behind those averages lies a more telling detail: approximately 45% of the commercial RPO balance relates to OpenAI. That concentration has drawn external scrutiny, but Hood framed it differently, highlighting the 55% – some $350 billion – tied to a broad base of customers, solutions and industries, which grew 28%.
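Translating those percentages into dollars, as a back-of-the-envelope on the disclosed figures rather than additional disclosure from Microsoft:

```python
# Rough split of the $625B commercial RPO balance ($ billions), using the
# disclosed percentages; outputs approximate the figures cited on the call.
commercial_rpo = 625

next_12_months = 0.25 * commercial_rpo              # ~156, recognized within a year
beyond_12_months = commercial_rpo - next_12_months  # ~469, recognized later

openai_related = 0.45 * commercial_rpo              # ~281
non_openai = commercial_rpo - openai_related        # ~344, the "~$350B" Hood highlighted

print(next_12_months, beyond_12_months, openai_related, non_openai)
```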
On the AI-specific hardware, she said “the majority of the capital that we’re spending today… are already contracted for most of their useful life.” The largest GPU deals, including those with OpenAI, have been structured so that the capacity is effectively sold out for its entire six-year useful life, limiting the risk that Microsoft ends up sitting on underutilized AI assets. As the fleet ages, efficiency improves and margins expand, because the same hardware delivers more output against a fixed depreciation base.
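Hood did not walk through the unit economics, but a stylized sketch shows why a fixed depreciation charge expands margins as output grows. Every number below is hypothetical, straight-line depreciation over six years is assumed, and power and operating costs are ignored for simplicity:

```python
# Stylized GPU-fleet unit economics, not Microsoft's actual figures. The point is
# only that the annual depreciation charge is fixed while tokens served can rise
# as utilization and serving efficiency improve.
capex = 600.0                       # $M up-front cost of a cluster (hypothetical)
annual_depreciation = capex / 6     # $M per year, fixed for the six-year life
price_per_million_tokens = 0.002    # $M per million tokens, i.e. $2 per million

for year, tokens_millions in enumerate([60_000, 80_000, 100_000], start=1):
    revenue = tokens_millions * price_per_million_tokens
    margin = (revenue - annual_depreciation) / revenue
    print(f"Year {year}: revenue ${revenue:.0f}M, gross margin {margin:.0%}")
# Year 1: $120M at 17%; Year 2: $160M at 38%; Year 3: $200M at 50%
```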
A reshaped P&L – and a cautious margin outlook
The weight of the OpenAI partnership is also reshaping the income statement in more subtle ways. Following OpenAI’s recapitalization, Microsoft now accounts for its stake under the equity method, recording gains or losses based on changes in OpenAI’s net assets rather than its operating results. In the quarter, this produced a $10 billion gain in other income and expense.
Excluding that, other income and expense was slightly negative and below expectations due to net investment losses. Hood guided to roughly $700 million of positive other income and expense in the coming quarter – a mix of equity portfolio gains and interest income, partly offset by interest costs, including those tied to data centre leases.
Gross margin percentage for the company dipped slightly to 68%, as AI infrastructure spending outpaced the immediate monetization of AI workloads. Yet operating expenses grew only 5% in constant currency, even with impairment charges in gaming, thanks to what Hood described as “focused execution” in prioritising investment, chiefly R&D spending on compute capacity and AI talent. The result was higher operating leverage and expanding margins overall.
Looking ahead, Microsoft expects Q3 revenue of $80.65–$81.75 billion, equivalent to 15–17% growth. Cost of goods sold is forecast to rise 22–23%, and operating expenses 10–11%. Operating margin should be “down slightly” year on year as AI investments continue, even as the company now expects full-year FY26 operating margins to be “up slightly” versus prior expectations, helped by H1 discipline and a favourable revenue mix that included stronger-than-expected Windows OEM and on-premises server sales.
By segment, Q3 guidance assumes:
- Productivity and Business Processes revenue of $34.25–$34.55 billion, up 14–15%, with Microsoft 365 commercial cloud growth of 13–14% in constant currency, mid-to-high-20s growth for Microsoft 365 consumer cloud, low-double-digit expansion at LinkedIn and high-teens growth for Dynamics 365.
- Intelligent Cloud revenue of $34.1–$34.4 billion, up 27–29%, with Azure growth of 37–38% in constant currency, even as on-premises server revenue normalizes and likely declines in the low single digits.
- More Personal Computing revenue of $12.3–$12.8 billion, implying a low-teens decline in Windows OEM and devices, high-single-digit growth in search and news advertising ex-TAC, and mid-single-digit declines in Xbox content and services, with hardware also down.
Foreign exchange is expected to be a tailwind, adding about three points to total revenue growth in Q3 and a smaller amount in Q4.
A new equilibrium for hyperscale AI
In previous waves of computing – mainframe, client-server, early internet – capital spending was often the constraint; demand caught up later. In this AI cycle, Microsoft is contending with the opposite problem: demand that “continues to exceed our supply” in Hood’s words, forcing the company to ration scarce GPUs between external cloud customers, its own AI products and its research pipeline.
The result is a subtle but significant change in how investors will need to think about Microsoft. The traditional lenses of Azure growth and Windows cycles are still relevant, but now sit inside a more complex optimization problem: how to deploy capital across a full stack of AI infrastructure, models, agents and experiences, while preserving the company’s long-cultivated margin and cash flow profile.
Nadella’s bet is that agents will become the new interface for work, code, security, healthcare and scientific discovery, and that Microsoft’s ability to provide the infrastructure, models, orchestration fabric and governance layer for those agents will underpin decades of growth. Hood’s job is to translate that bet into an asset base that can earn an adequate return over a six-year depreciation schedule.
For now, the numbers suggest that both sides of that equation are holding. But with capital expenditures now running at levels that rival some sovereign infrastructure plans, investors will be watching closely to see whether the tokens per watt per dollar – and the dollars per token – justify the build-out.