Broadcom Maps a $100 Billion AI Chip Future as Custom XPUs Rewire the Data Center

Highlights
  • Q1 FY26 revenue: $19.3B (+29% YoY, record)
  • Q1 adj. EBITDA: $13.1B (68% margin, record)
  • Semiconductor revenue: $12.5B (+52% YoY)
  • AI semiconductor revenue: $8.4B (+106% YoY); Q2 guide: $10.7B (+140% YoY)
  • Q2 FY26 revenue guidance: ~$22B (+47% YoY); EBITDA margin ~68%
  • Line of sight to >$100B AI chip revenue in 2027
  • Infrastructure Software revenue: $6.8B (+1% YoY); VMware ARR +19% YoY
  • Free cash flow: $8B (41% of revenue)
  • Capital returns in Q1: $10.9B (dividends + buybacks); new $10B buyback authorisation

AI-driven semiconductor boom

Broadcom entered fiscal 2026 with the air of a company that has moved from catching the AI wave to choreographing it. In its fiscal first quarter, revenue climbed 29% year-on-year to a record $19.3 billion, powered overwhelmingly by demand for custom AI accelerators and networking chips. Adjusted EBITDA reached $13.1 billion, giving the group an enviable 68% margin and underscoring the operating leverage that CEO Hock Tan has spent years constructing.

The centre of gravity is now unmistakably semiconductors. Segment revenue jumped 52% to $12.5 billion, of which AI-related semiconductors contributed $8.4 billion, more than doubling year-on-year and coming in “way above” internal expectations. For the current quarter, Broadcom is guiding semiconductor revenue to $14.8 billion, up 76% from a year earlier, with AI chips expected to surge 140% to $10.7 billion.

Tan’s most striking statement was not about the past quarter, but about 2027. On the back of detailed power (“gigawatt”) plans from a tight set of hyperscale and AI-native customers, Broadcom now has “line of sight” to AI chip revenue in excess of $100 billion in that year alone. That figure, he stressed, relates purely to silicon: XPUs, switches and DSPs, not racks or systems.

Behind the headline sits a small but potent customer list. Broadcom now counts six major AI chip partners: Google, Anthropic, Meta, two unnamed hyperscalers, and OpenAI. All are moving down the path of custom XPUs — Broadcom’s catch-all term for accelerators beyond general-purpose GPUs — and all are treating those programs as strategic, not optional.

Google’s seventh-generation Ironwood TPU is ramping strongly in 2026, with “even stronger” demand expected for subsequent generations from 2027. Anthropic, Tan said, is off to a strong start with 1 gigawatt of compute this year and is expected to exceed 3 gigawatts in 2027. Meta’s in-house MTIA accelerator, contrary to analyst speculation, is “alive and well” and already shipping; next-generation XPUs there are expected to scale to multiple gigawatts in 2027 and beyond. Customers four and five are forecast to more than double shipments by 2027. A sixth, OpenAI, is slated to deploy its first-generation XPU at more than 1 gigawatt of capacity in 2027 under the recently announced 10-gigawatt, multi-year deal.

Tan framed these engagements as “deep, strategic and multiyear”, built on Broadcom’s long-honed expertise in SerDes, advanced packaging, and high-speed networking, and crucially on the ability to manufacture “hundreds of thousands” of complex chips at high yields and at speed. That, he argued, is what has historically tripped up “customer-owned tooling” efforts: internal chip design programs at cloud and AI companies that seek to bypass merchant silicon vendors. In AI, where model providers compete not only with one another but also with NVIDIA’s rapid generational cadence, “good enough” silicon is no longer good enough.

Networking as a second AI growth engine

If XPUs are the muscles of Broadcom’s AI story, networking is the circulatory system. AI networking revenue grew 60% year-on-year in the quarter and already accounts for roughly a third of AI revenue; by next quarter, management expects networking to represent about 40%. The growth, Tan said, is running even faster than that of the accelerators themselves.

At the heart of this is the Tomahawk 6 switch, a 102.4 terabit-per-second device paired with 200G SerDes that has effectively become the reference point for large-scale AI clusters. Hyperscalers, whether building around GPUs or XPUs, want the highest bandwidth they can get, and for now Broadcom is “the only one out there” at that performance level. The roadmap extends into 2027 with Tomahawk 7, which will double performance again and is also expected to be first to market.

The company is also leaning into its networking advantage in the scale-up domain — connecting accelerators to each other within racks and clusters. Here the goal is to avoid expensive, power-hungry optics for as long as possible by using direct attach copper. Tan highlighted Broadcom’s ability to keep copper viable not just at today’s 200G SerDes speeds, but at 400G SerDes in the 2028 time frame, a capability he cast as “a huge advantage” for customers that want to grow cluster sizes while keeping latency, power and cost under control.

The message was pointed in another way. Broadcom has been an early champion of co-packaged optics, but Tan was blunt that, despite the industry buzz, customers will not rush there: CPO will “come in its time, not this year, maybe not next year”. For now, Ethernet — both for scale-out and, increasingly, scale-up — is consolidating its position as the fabric of choice. Many of Broadcom’s XPU engagements, networking chief Charlie Kawwas noted, are already being architected around Ethernet scale-up.

Taken together, networking and accelerators form a tightly coupled franchise. The ability to co-optimise compute and interconnect, and to guarantee supply across both, is emerging as one of Broadcom’s key selling points to AI platforms under pressure to deliver ever-larger, more efficient clusters.

Non-AI chips and software: steady ballast

Outside AI, Broadcom is showing more measured but broadly stable trends. Non-AI semiconductor revenue was $4.1 billion in the quarter, flat year-on-year and in line with guidance, with enterprise networking, broadband and server storage offsetting seasonal wireless weakness. For the current quarter, management expects this non-AI chip revenue to rise about 4% from a year ago, to roughly the same $4.1 billion level, underlining how heavily the growth story now rests on AI while the rest of the portfolio provides a stabilising base.

The Infrastructure Software segment, anchored by VMware, continues to function as a high-margin cash machine. Segment revenue was $6.8 billion, up 1% year-on-year and representing 35% of total revenue; gross margin of 93% and operating margin of 78% highlight the economics of Broadcom’s infrastructure-focused software strategy. Revenue is expected to grow to about $7.2 billion next quarter, a 9% year-on-year increase.

Within that, VMware remains the star. Revenue there grew 13% year-on-year, and bookings were strong enough to push total contract value in the quarter above $9.2 billion, supporting 19% growth in annual recurring revenue. Tan pushed back firmly against any narrative that AI might erode VMware’s position. VMware Cloud Foundation, he argued, is becoming the “permanent abstraction layer” in the data centre, stitching together CPUs, GPUs, storage and networking into high-performance private clouds and serving as the interface between AI software and physical silicon. In his telling, the rise of generative and “agentic” AI will require more VMware, not less.

For investors, this is not just a philosophical point: it explains why Broadcom is still willing to allocate significant capital to R&D — $1.5 billion in the quarter, the bulk of its $2 billion in operating expenses — while maintaining some of the highest margins in the sector.

Margins, cash, and capital returns

CFO Kirsten Spears’ numbers sketch a business that is as disciplined in its cash flows as it is aggressive in capital deployment. Group gross margin came in at 77%, in line with the company’s guidance and robust given the rapid mix shift towards AI hardware. Operating income of $12.8 billion was up 31% year-on-year, lifting operating margin to 66.4%. Adjusted EBITDA surpassed guidance, helped by what Spears called “favorable operating leverage” as AI volumes ramped.

Concerns that lower-margin rack-scale systems might drag down profitability were brushed aside. Tan described suggestions of a 500-basis-point gross margin hit as “hallucinating”, insisting that AI products now conform to Broadcom’s broader semiconductor margin model thanks to improved yields and careful cost control. Spears, who had previously flagged some potential mix effect, now believes the impact from racks will be “not substantial at all”.

Free cash flow was $8 billion, a hefty 41% of revenue, despite capital expenditures of just $250 million. Inventory climbed to $3 billion, with days on hand increasing from 58 to 68 as Broadcom consciously stockpiles key components — from leading-edge wafers to high-bandwidth memory and the much-discussed T-glass substrates — to support what it describes as “constrained capacity” amid booming AI demand. Those inventory moves are intimately tied to the multiyear supply agreements underpinning the $100 billion AI revenue target: Tan and Kawwas both emphasised that Broadcom has “fully secured” capacity for 2026 through 2028 in partnership with its manufacturing ecosystem.

Shareholders are seeing that confidence monetised. In the quarter, Broadcom paid $3.1 billion in dividends and repurchased $7.8 billion of stock, returning a total of $10.9 billion to investors. The Board has now authorised an additional $10 billion in share repurchases through the end of 2026, even as the company carries $14.2 billion in cash on the balance sheet. The non-GAAP diluted share count is expected to be around 4.94 billion in the current quarter, before the effect of further buybacks.

For the second fiscal quarter, Broadcom is guiding to consolidated revenue of about $22 billion, up 47% year-on-year, with gross margin steady at 77% and adjusted EBITDA margin at about 68%. The tax line will tick up, with a non-GAAP rate of 16.5% reflecting the global minimum tax and geographic mix, but there was little sense that this will materially dent the trajectory.

Sector context: from optionality to obligation

In a market jittery about whether outsized AI capital spending can be justified and sustained, Tan’s narrative is deliberately contrarian. While investors fret about hyperscalers reining in GPU budgets if returns disappoint, Broadcom’s CEO argues that his own customer set — a handful of hyperscalers and AI specialists — is past the stage of experimentation and into long-term industrialisation.

These firms, he said repeatedly, are not dabbling in optional GPU procurement from the cloud; they are building strategic, custom silicon platforms that sit at the heart of their business models. The dichotomy matters. GPU purchases, in his view, are transactional, subject to near-term optimisation and budget cycles. Custom XPUs, by contrast, are tightly bound to each customer’s roadmap for training ever more capable models and, crucially, for turning those models into products through inference at scale.

It is that inference demand, as much as training, that is surprising even Broadcom. Every new, more capable LLM, Tan observed, must be productised almost simultaneously with its training, or risk being leapfrogged by a rival model by the time the inference infrastructure is ready. That dynamic pushes customers towards developing parallel chips, one tuned for training and one for inference, on a near-annual cadence, each then demanding multi-gigawatt deployment plans. The result is a virtuous (or, depending on your perspective, relentless) cycle of chip design and capacity planning in which Broadcom has entrenched itself as a central actor.

In this light, the $100 billion AI chip revenue “line of sight” for 2027 is not simply an exuberant forecast. It is the sum of a handful of large, named and unnamed customers’ multi-year power build-out plans, negotiated supply contracts for scarce components through 2028, and a technology roadmap that places Broadcom at the intersection of custom compute, high-speed networking and enterprise infrastructure software. For investors, the bet is whether that carefully constructed edifice can withstand the inevitable volatility of AI expectations.

For now, Broadcom appears to be behaving less like a chip supplier surfing a hype cycle, and more like a systems architect of the AI data centre era — an architect that expects, and has already pre-booked the materials, to build at least $100 billion worth of foundations in a single year.