Silicon Analysts
The AI Data Center Chess Game

How the $1.2 trillion AI infrastructure boom is triggering the biggest semiconductor supply chain realignment in decades

AI Data Center Value Chain Analysis 2025 — Interactive Supply Chain Map

Comprehensive interactive analysis of the AI data center ecosystem covering 52 companies and 86 supply chain relationships across 10 value chain layers. This research maps the competitive dynamics between GPU makers, hyperscaler custom ASICs, ARM and x86 CPUs, ASIC co-designers, startups, OEMs, cable/optical interconnect suppliers, and cloud providers in the $1.2 trillion AI data center market projected for 2030.

Key Market Statistics

  • NVIDIA data center AI share: 86% → ~75% (2024 → 2026E)
  • AI data center TAM: $242B (2025) → $1.2T (2030E)
  • Hyperscaler AI capex: $380B+ (2025E)
  • Custom ASIC TCO advantage vs GPUs: 40-65% (at scale)

Executive Summary — Key Takeaways

  1. NVIDIA's data center revenue hit $115.2B in FY2025 (+142% YoY), but market share is projected to decline from 86% to ~75% by 2026 as custom ASICs scale.
  2. Hyperscalers are spending $380B+ on AI capex in 2025 while simultaneously building custom chips (TPU, Trainium, Maia, MTIA) that offer 40-65% TCO advantages over GPUs.
  3. Broadcom and Marvell control ~95% of the custom ASIC co-design market — Google alone spends ~$8B/year with Broadcom on TPU development.
  4. ARM CPUs have grown from 5% to ~20% of data center server share since 2020, driven by 30-60% energy efficiency gains. NVIDIA's own next-gen platform (Vera Rubin) is ARM-exclusive.
  5. The AI data center total addressable market is projected to grow from $242B in 2025 to $1.2T by 2030 — a 5x expansion in five years (see the quick growth-rate check after this list).
  6. NVIDIA's NVLink Fusion strategy lets hyperscalers plug custom ASICs into NVIDIA's rack architecture — ensuring NVIDIA stays embedded even when it isn't the primary compute chip.
  7. Cable and optical interconnects are an emerging bottleneck — AI racks require 10-36x more fiber than traditional setups, DAC/AOC lead times exceed 20 weeks, and the $2.7B interconnect market is projected to reach $10.7B by 2034.
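
As a quick sanity check on takeaway 5, the snippet below computes the compound annual growth rate implied by the $242B-to-$1.2T projection. The helper function is illustrative; only the start, end, and horizon figures come from this report.

```python
def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by a start/end projection."""
    return (end / start) ** (1 / years) - 1

# $242B (2025) -> $1,200B (2030): ~4.96x over five years
rate = implied_cagr(242, 1200, 5)
print(f"Implied CAGR: {rate:.1%}")  # ~37.7% per year
```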

Value Chain Layers

The AI data center value chain is organized into 10 layers: GPU Makers, ARM CPU, x86 CPU, ASIC Co-designers, Established ASIC, ASIC Startups, Interconnects, OEMs / ODMs, Hyperscalers / Cloud, On-Premise / Private.

GPU Makers

Companies designing and selling general-purpose GPU accelerators for AI training and inference.

  • NVIDIA: Dominant AI accelerator supplier with 75-86% data center AI market share. Key metric: $115.2B data center revenue (of $130.5B FY2025 total). Products: H100, H200, B200, GB200, Vera Rubin (2026).
  • AMD: Second-largest GPU supplier, gaining share with MI-series accelerators. Key metric: ~$10B data center revenue (2025E). Products: MI300X, MI325X, MI350 (2025), MI400 (2026).
  • Intel: Third GPU contender via the Gaudi line and upcoming Falcon Shores. Key metric: ~$2B AI accelerator revenue (2025E). Products: Gaudi 2, Gaudi 3, Falcon Shores (2025).

ARM CPU

Custom ARM-based server CPUs displacing x86 in data centers with superior energy efficiency.

  • AWS Graviton: ARM server CPU designed by Annapurna Labs for AWS infrastructure. Key metric: 50%+ of new AWS CPU. Products: Graviton4 (96 cores), Graviton5 (192 cores, 2025).
  • Ampere: Independent ARM server chip company serving cloud and enterprise. Key metric: 192 cores. Products: Altra Max (128 cores), AmpereOne (192 cores).
  • Google Axion: Google custom ARM CPU for internal cloud workloads. Key metric: GCP custom CPU. Products: Axion (Neoverse V2-based, 2024).
  • Microsoft Cobalt: Microsoft custom ARM CPU for Azure infrastructure. Key metric: 128 cores. Products: Cobalt 100 (128 Neoverse N2 cores).
  • NVIDIA Grace/Vera: NVIDIA ARM CPU for GPU-attached data center use. Key metric: GPU-paired ARM. Products: Grace (72 Neoverse V2), Vera CPU (2026).
  • Fujitsu: Japanese ARM chip designer behind Fugaku supercomputer. Key metric: HPC leader (JP). Products: A64FX (Fugaku), next-gen 2nm planned with Rapidus.

x86 CPU

Traditional x86 server CPU suppliers facing share erosion from ARM and accelerator-first architectures.

  • Intel: Largest x86 server CPU maker but losing share to AMD and ARM. Key metric: ~$12B DC CPU revenue 2025E (declining). Products: Xeon 6 (Granite Rapids), Sierra Forest (E-cores).
  • AMD: Fastest-growing x86 server CPU supplier, approaching #1 share. Key metric: 30%+ server share (↑). Revenue: ~$8B DC CPU 2025E (growing). Products: EPYC 9005 (Turin, Zen 5), EPYC 9006 (2026).

ASIC Co-designers

Companies providing custom ASIC design services to hyperscalers building their own AI chips.

  • Broadcom: Largest custom ASIC partner; co-designs TPUs for Google, ASICs for Meta & others. Key metric: ~60% ASIC co-design. Revenue: ~$12B AI revenue FY2025. Products: TPU co-design, XPU, custom networking.
  • Marvell: Second-largest ASIC co-designer; partners with AWS, Microsoft. Key metric: ~35% ASIC co-design. Revenue: ~$2B custom silicon 2025E. Products: Custom AI accelerators, DPUs, networking ASICs.
  • Alchip: Taiwanese ASIC design services for hyperscaler custom chips. Key metric: TSMC tapeout partner. Products: Turnkey ASIC design at 5nm, 3nm.
  • GUC: TSMC subsidiary providing ASIC design services. Key metric: TSMC subsidiary. Products: Advanced node tapeout services (3nm, 2nm).

Established ASIC

Major tech companies with deployed custom AI accelerators in production data centers.

  • Google TPU: Google Tensor Processing Units, now in their 7th generation. Key metric: 4,614 TFLOPS (v7). Products: TPU v6e (Trillium); TPU v7 (Ironwood) with 4,614 TFLOPS and 192GB HBM3e.
  • AWS Trainium: Amazon custom training accelerator designed by Annapurna Labs. Key metric: 3nm (Trainium 3). Products: Trainium 2, Trainium 3 (3nm, 2025).
  • Meta MTIA: Meta in-house inference accelerator. Key metric: Inference focus. Products: MTIA v2 (2025).
  • Microsoft Maia: Microsoft custom AI training/inference chip for Azure. Key metric: Azure custom AI. Products: Maia 100 (2024), Maia 200 (2025E).
  • Intel Gaudi: Intel AI accelerator line (from Habana Labs acquisition). Key metric: Budget inference. Products: Gaudi 2, Gaudi 3.
  • IBM: IBM AIU and Telum AI processors for enterprise workloads. Key metric: Enterprise AI. Products: Telum II, AIU (Artificial Intelligence Unit).

ASIC Startups

Venture-backed startups challenging NVIDIA with novel chip architectures for AI workloads.

  • Cerebras: Maker of the WSE (Wafer-Scale Engine), the largest chip ever built. Key metric: 4T transistors. Products: WSE-3 (4T transistors, 900K cores, 44GB on-chip SRAM).
  • Groq: Inference-focused chip with deterministic, ultra-low-latency architecture. Key metric: 800+ tok/sec. Products: LPU (Language Processing Unit).
  • Graphcore: UK-based IPU (Intelligence Processing Unit) maker. Key metric: SoftBank acquired. Products: Bow IPU (3D wafer-on-wafer).
  • SambaNova: Reconfigurable dataflow architecture for AI training and inference. Key metric: Enterprise RDU. Products: SN40L (RDU).
  • Untether AI: Near-memory compute architecture for AI inference. Key metric: Near-memory arch. Products: tsunAImi (2023).

Interconnects

Cable, connector, and optical transceiver suppliers providing the physical links between GPUs, switches, and racks in AI data centers.

  • Amphenol: Dominant high-speed connector and cable supplier; custom-engineered the NVLink spine cartridge for GB200 NVL72. Key metric: $6.2B Q3 2025 revenue (record, +53% YoY); IT Datacom 37% of sales. Products: NVLink copper cables, DAC/AEC assemblies, high-speed connectors.
  • TE Connectivity: Second-largest connector maker; backup NVLink supplier and DAC/AEC cable producer. Key metric: Backup NVLink supplier. Financials: FY2026E EPS $10.56 (+20.5% YoY); Q1 FY26 +17%. Products: 800G AEC solutions, high-speed connectors, DAC cables.
  • Corning: Dominant optical fiber and cable supplier; signed $6B multi-year Meta fiber deal. Key metric: $6B Meta deal. Revenue: $6.3B optical comms 2025 (+35% YoY); enterprise +61%. Products: Optical fiber, fiber cable assemblies, structured cabling.
  • InnoLight: Leading 800G/1.6T optical transceiver module maker with >50% NVIDIA wallet share. Key metric: >50% NVIDIA optics. Products: 800G DR8/FR8, 1.6T optical transceivers.
  • Coherent: Vertically integrated optical company (formerly II-VI); makes 800G/1.6T transceivers and InP lasers. Key metric: InP laser leader. Products: 800G ZR/ZR+ coherent optics, InP lasers, VCSELs, transceiver modules.
  • Lumentum: Key supplier of EML lasers — the optical engine inside most 800G/1.6T transceivers. Key metric: $533.8M Q1 FY26 revenue (record, +58% YoY); cloud/AI >60% of mix. Products: EML lasers, VCSEL arrays, optical components.

OEMs / ODMs

Server manufacturers assembling and selling AI-optimized hardware to enterprises and cloud providers.

  • Supermicro: Largest AI server ODM by GPU-server volume. Key metric: $23B revenue (FY2025E).
  • HPE: Hewlett Packard Enterprise AI server and HPC systems. Key metric: Enterprise + HPC. Products: ProLiant, Cray EX (exascale).
  • Lenovo: Global server vendor with growing AI portfolio. Key metric: Top-3 global. Products: ThinkSystem SR with NVIDIA GPUs.
  • Dell: Dell PowerEdge AI server platform. Key metric: ~$10B AI server pipeline. Products: PowerEdge XE9680 (8x H100/B200).
  • Cisco: AI networking and compute infrastructure. Key metric: DC networking #1. Products: UCS X-Series, AI networking switches.
  • ASUS: ASUS server division (ESC series) for AI workloads. Key metric: GPU servers. Products: ESC8000 GPU servers.
  • Inspur: Largest server maker in China. Key metric: #1 China servers.
  • QCT: Quanta Cloud Technology — major hyperscale ODM. Key metric: Hyperscale ODM.
  • Gigabyte: Gigabyte server division for AI and HPC workloads. Key metric: Value GPU servers. Products: G-series GPU servers.
  • Eviden: Atos subsidiary specializing in HPC and AI infrastructure. Key metric: EU HPC leader. Products: BullSequana AI servers.

Hyperscalers / Cloud

Cloud infrastructure providers operating the world's largest AI compute fleets.

  • AWS: Largest cloud provider by revenue and AI infrastructure investment. Key metric: $115B revenue run-rate (2025E).
  • Google Cloud: Google Cloud Platform with TPU-native AI infrastructure. Key metric: $46B revenue run-rate (2025E).
  • Microsoft Azure: Second-largest cloud; exclusive OpenAI inference partner. Key metric: $100B+ revenue run-rate (2025E).
  • Oracle Cloud: Oracle Cloud Infrastructure growing rapidly in AI. Key metric: $25B cloud revenue (2025E).
  • Alibaba Cloud: Largest cloud provider in China. Key metric: #1 China cloud.
  • Baidu AI Cloud: Baidu AI infrastructure powering Ernie and PaddlePaddle. Key metric: Kunlun chip.
  • IBM Cloud: Enterprise hybrid cloud with AI integration. Key metric: Watsonx.

On-Premise / Private

Companies building massive private AI compute infrastructure outside public cloud.

  • Apple: Building private AI cloud infrastructure with custom M-series chips. Key metric: Private Cloud. Products: M2 Ultra, M4 clusters for Private Cloud Compute.
  • Meta: Operates one of the largest private AI compute fleets globally. Key metric: ~600K GPUs. Products: ~600K NVIDIA GPUs (H100/B200), MTIA for inference.
  • Tesla: Building Dojo supercomputer and Cortex data centers for autonomous driving AI. Key metric: 100K+ H100s. Products: Dojo D1 chip, Cortex cluster (100K+ H100s).

Supply Chain Relationships

This analysis maps 86 relationships between companies, categorized as: supply relationships (60 edges), competitive relationships (20 edges), and co-design partnerships (6 edges).
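
To make the three edge categories concrete, here is a minimal sketch of how such a map can be represented as a typed graph in Python. The example edges are relationships described in this report, but the class layout is a hypothetical illustration, not the schema behind the interactive diagram.

```python
from dataclasses import dataclass, field
from enum import Enum

class EdgeType(Enum):
    SUPPLY = "supply"            # 60 edges in this analysis
    COMPETITIVE = "competitive"  # 20 edges
    CO_DESIGN = "co-design"      # 6 edges

@dataclass
class Edge:
    source: str
    target: str
    kind: EdgeType

@dataclass
class ValueChainGraph:
    edges: list[Edge] = field(default_factory=list)

    def add(self, source: str, target: str, kind: EdgeType) -> None:
        self.edges.append(Edge(source, target, kind))

    def neighbors(self, company: str, kind: EdgeType | None = None) -> set[str]:
        """Companies linked to `company`, optionally filtered by edge type."""
        out: set[str] = set()
        for e in self.edges:
            if kind is not None and e.kind is not kind:
                continue
            if e.source == company:
                out.add(e.target)
            elif e.target == company:
                out.add(e.source)
        return out

# Example edges drawn from relationships described in this report:
g = ValueChainGraph()
g.add("Broadcom", "Google", EdgeType.CO_DESIGN)  # TPU co-design
g.add("InnoLight", "NVIDIA", EdgeType.SUPPLY)    # 800G optics supply
g.add("NVIDIA", "Google", EdgeType.COMPETITIVE)  # GPU vs TPU
print(g.neighbors("Google"))  # {'Broadcom', 'NVIDIA'}
```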

8 Power Shifts Reshaping the AI Data Center Ecosystem

From NVIDIA's systems pivot to ARM's data center rise — the structural forces redrawing the competitive map.

Power Shift 1: NVIDIA Becomes a Systems Company

$115.2B
FY2025 data center revenue (+142% YoY)

NVIDIA is no longer just a chip company. The DGX Pod and SuperPod platforms, combined with the NVLink Fusion interconnect strategy, signal a pivot toward selling complete rack-scale systems. The upcoming Vera Rubin platform bundles six co-designed chips into a single architecture. NVLink Fusion is the most strategically significant move — it lets hyperscalers plug their own custom ASICs into NVIDIA's rack architecture, ensuring NVIDIA stays embedded even when it isn't supplying the primary compute chip.

Source: NVIDIA FY2025 earnings, GTC 2025 keynote

Power Shift 2: The Great Decoupling: Hyperscalers Build Their Own

40-65%
TCO advantage from custom ASICs vs GPUs

Google's TPU v7 Ironwood delivers 4,614 TFLOPS with 192GB HBM3e — purpose-built for Gemini. AWS Trainium 3 moves to 3nm with 2x the training performance of its predecessor. Microsoft Maia 200 and Meta MTIA are ramping. The economics are compelling: custom silicon offers 40-65% total cost of ownership advantages at scale. But adoption remains at roughly 15-20% of internal workloads — CUDA lock-in and the sheer pace of NVIDIA's roadmap keep most training on GPUs.

Source: Google I/O 2025, AWS re:Invent, SemiAnalysis
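
A stylized cost model shows how a gap of this size can arise. In the sketch below, every input (prices, power draw, electricity rate, utilization, lifespan) is an illustrative placeholder rather than a sourced figure; only the 40-65% advantage band comes from the analysis above.

```python
def tco(unit_price: float, power_kw: float, years: float,
        usd_per_kwh: float = 0.08, utilization: float = 0.7) -> float:
    """Simplified total cost of ownership: purchase price plus energy.

    Ignores cooling, networking, facilities, and financing, which a real
    TCO model would include.
    """
    powered_hours = years * 365 * 24 * utilization
    return unit_price + power_kw * powered_hours * usd_per_kwh

# Illustrative placeholder inputs, not sourced figures:
gpu = tco(unit_price=30_000, power_kw=1.0, years=4)
asic = tco(unit_price=14_000, power_kw=0.6, years=4)
print(f"GPU  TCO: ${gpu:,.0f}")   # ~$31,962
print(f"ASIC TCO: ${asic:,.0f}")  # ~$15,177
print(f"ASIC advantage: {1 - asic / gpu:.0%}")  # ~52%, inside the 40-65% band
```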

Power Shift 3: Established Companies Lean on Co-designers

~$8B/yr
Google's annual spend with Broadcom on TPU development

The custom silicon revolution runs through a handful of ASIC design houses. Broadcom controls roughly 60% of the custom AI ASIC co-design market, with Marvell holding about 35%. Google alone spends an estimated $8 billion annually with Broadcom on TPU silicon development. Alchip and GUC handle the complex tapeout process at TSMC's most advanced nodes. This creates a parallel semiconductor ecosystem that routes entirely around NVIDIA — from design through fabrication to deployment.

Source: Broadcom earnings, Raymond James estimates

Power Shift 4: Startups Intensify the Race

4T
transistors on Cerebras WSE-3 (largest chip ever)

The startup landscape is defined by architectural bets against NVIDIA's dominance. Cerebras built the largest chip ever — a full-wafer engine with 4 trillion transistors and 900,000 AI cores. Groq's LPU achieves 800+ tokens per second on inference. SambaNova's reconfigurable dataflow architecture targets enterprise AI. But most face the fundamental challenge of competing against NVIDIA's 20-year CUDA ecosystem with over 4 million developers. Cerebras withdrew its IPO in October 2025. The startup path is viable but treacherous.

Source: Company disclosures, TechCrunch

Power Shift 5: Hyperscaler Vertical Integration Accelerates

$380B+
combined hyperscaler AI capex in 2025

The four largest cloud providers are spending over $380 billion on AI infrastructure in 2025 alone — and the trend is accelerating. These companies are evolving from chip customers to chip designers, building custom CPUs (Graviton, Axion, Cobalt), custom accelerators (TPU, Trainium, Maia), and custom networking silicon. The most telling signal: AWS Trainium 3 will use NVIDIA NVLink Fusion, showing that even direct competitors cooperate when the architecture demands it. Custom silicon is projected to capture 15-25% of total data center AI compute by 2030.

Source: Company earnings calls, Bernstein Research

Power Shift 6: ARM CPUs Rise in the Data Center

~15-23%
ARM share of data center CPUs (2025, up from 5% in 2020)

ARM's data center penetration has quietly reached 15-23% of server CPU shipments — up from roughly 5% in 2020. AWS Graviton, now at 192 cores with Graviton5, runs over half of all new AWS CPU capacity. Google Axion and Microsoft Cobalt 200 add further momentum. NVIDIA's own Vera CPU is ARM-based, and the Blackwell/Rubin NVL72 rack pairs exclusively with ARM Grace/Vera CPUs. The driving force is 30-60% better energy efficiency versus x86, which matters enormously at data center scale.

Source: Ampere, Arm Holdings earnings, SemiAnalysis
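
A back-of-the-envelope power bill shows why that efficiency range matters at fleet scale. The server count, per-CPU wattage, and electricity price below are hypothetical assumptions; only the 30-60% efficiency range is taken from the report.

```python
# Hypothetical fleet: 200,000 servers, 300 W average CPU draw, $0.08/kWh.
servers = 200_000
watts_per_cpu = 300
usd_per_kwh = 0.08
hours_per_year = 365 * 24

x86_annual_kwh = servers * watts_per_cpu / 1000 * hours_per_year
x86_cpu_power_bill = x86_annual_kwh * usd_per_kwh  # ~$42M/year

for efficiency_gain in (0.30, 0.60):  # the report's 30-60% range
    savings = x86_cpu_power_bill * efficiency_gain
    print(f"{efficiency_gain:.0%} gain -> ~${savings / 1e6:.0f}M/year saved")
```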

Power Shift 7: x86 Decline Accelerates

~30%+
AMD server CPU share (approaching a record high)

Intel's data center CPU dominance has collapsed from 90%+ to below 70%, and the trajectory is worsening. AMD is approaching its highest-ever server CPU share above 30%, but the total x86 pie is shrinking relative to ARM. The most significant structural shift: NVIDIA's NVL72 rack architecture pairs its GPUs exclusively with ARM-based Grace and Vera CPUs, not x86. As GPU-attached compute grows faster than general-purpose servers, the GPU ecosystem itself is pulling the market toward ARM.

Source: Mercury Research, Intel/AMD earnings

Power Shift 8: The Interconnect Bottleneck Emerges

20+ wks
DAC/AOC cable lead times for large GPU deployments

As AI data centers scale, the physical layer — cables, connectors, and optical transceivers — has become a critical chokepoint. AI-focused racks require 10-36x more fiber than traditional CPU-based setups. NVIDIA's GB200 NVL72 alone uses 5,184 copper cables per rack. Corning's fiber inventory is sold out through 2026 after a $6B Meta deal. The DAC/AOC market is projected to grow from $2.7B to $10.7B by 2034. Amphenol, which custom-engineered the NVLink spine cartridge, saw IT Datacom revenue surge 134%. Meanwhile, InnoLight holds over 50% of NVIDIA's 800G optical transceiver orders. The transition to 1.6T optics in 2026 will intensify supply pressure further.

Source: Corning earnings, Amphenol Q3 2025, industry reports
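
The cable math compounds quickly, as a simple multiplication shows. The deployment size and the baseline fiber count per traditional rack are made-up examples; the 5,184 cables-per-rack figure and the 10-36x fiber multiplier are the ones cited above.

```python
# GB200 NVL72: 5,184 copper cables per rack (cited above).
cables_per_rack = 5_184
racks = 500  # hypothetical deployment size

print(f"{cables_per_rack * racks:,} copper cables")  # 2,592,000

# Fiber demand vs a traditional CPU rack, using the cited 10-36x range.
baseline_strands_per_rack = 100  # illustrative baseline, not a sourced figure
for multiplier in (10, 36):
    strands = baseline_strands_per_rack * multiplier * racks
    print(f"{multiplier}x fiber -> {strands:,} strands")
```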

Interactive Tools

This page includes four interactive tools: (1) A supply chain flow diagram visualizing all 86 relationships between 52 companies with hover highlighting and edge type filtering. (2) A value chain explorer with tabbed navigation across all 10 layers, company detail modals with ego-network graphs, connection statistics, and power shift references. (3) Market data dashboards showing NVIDIA vs AMD vs Intel quarterly data center revenue, ARM vs x86 CPU share trends, AI data center total addressable market projections, and hyperscaler AI capex by company. (4) A scenario modeling tool with 4 preset scenarios and 6 adjustable parameters to project NVIDIA GPU share, Broadcom ASIC revenue, ARM CPU adoption, and custom ASIC compute share through 2030.
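
For readers curious what the "simplified model" behind such a tool can look like, here is a toy sketch in the same spirit: one adjustable parameter erodes a baseline share trajectory year by year, damped by ecosystem stickiness. The parameter names, baseline, and sensitivities are hypothetical; the page's actual model is not reproduced here.

```python
def project_nvidia_share(start_share: float = 0.86,
                         asic_adoption_rate: float = 0.05,
                         cuda_stickiness: float = 0.6,
                         years: int = 5) -> list[float]:
    """Toy projection: each year custom-ASIC adoption erodes GPU share,
    damped by software-ecosystem stickiness. Directional only."""
    share = start_share
    path = [share]
    for _ in range(years):
        share = max(0.0, share - asic_adoption_rate * (1 - cuda_stickiness))
        path.append(round(share, 3))
    return path

print(project_nvidia_share())  # [0.86, 0.84, 0.82, 0.8, 0.78, 0.76]
```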



Methodology & Data Sources

This analysis synthesizes data from company earnings reports (NVIDIA, AMD, Intel, Broadcom, Marvell, hyperscaler quarterly filings), industry research (Yole Group, Mercury Research, SemiAnalysis, TrendForce, Bernstein Research, Goldman Sachs), and publicly available product announcements. Market share estimates reflect the best available consensus from multiple analyst sources. Projected figures (marked "E") represent consensus analyst estimates and company guidance as of the publication date.

The scenario explorer uses a simplified model based on observable market dynamics. It is intended for directional analysis only, not precise forecasting. Actual outcomes will depend on technology roadmap execution, software ecosystem evolution, and macroeconomic factors.

Last updated: 2026-02-18 | Silicon Analysts Research

AI Data Center Value Chain FAQ

What is the AI data center value chain?
The AI data center value chain encompasses the full ecosystem of companies that design, manufacture, assemble, and operate the computing infrastructure powering artificial intelligence workloads. This includes GPU and AI accelerator designers (NVIDIA, AMD), custom ASIC builders (Google TPU, AWS Trainium), ASIC co-design houses (Broadcom, Marvell), CPU makers (Intel, AMD, ARM-based), server OEMs (Supermicro, Dell, HPE), cable and optical interconnect suppliers (Amphenol, Corning, InnoLight), and hyperscale cloud operators (AWS, Google Cloud, Microsoft Azure). The total addressable market is projected to reach $1.2 trillion by 2030.
Why are hyperscalers building custom AI chips instead of buying NVIDIA GPUs?
Hyperscalers like Google, Amazon, Microsoft, and Meta are developing custom AI accelerators primarily for cost efficiency — custom ASICs can deliver 40-65% lower total cost of ownership versus GPUs for specific workloads at scale. Additionally, custom chips reduce dependency on a single vendor (NVIDIA), can be optimized for specific model architectures (e.g., Google's TPU for Transformer workloads), and provide supply chain control during GPU shortages. However, CUDA ecosystem lock-in and NVIDIA's rapid innovation pace keep most training workloads on GPUs.
What is NVIDIA NVLink Fusion and why does it matter?
NVLink Fusion is NVIDIA's interconnect strategy that allows third-party chips — including hyperscaler custom ASICs — to plug into NVIDIA's rack-scale architecture (NVL72 and beyond). This is strategically significant because it ensures NVIDIA remains embedded in data center deployments even when customers use competing compute chips. By controlling the interconnect fabric, NVIDIA shifts its value proposition from supplying the only viable AI chip to providing the system architecture that connects all AI chips.
How is ARM disrupting x86 in data centers?
ARM-based CPUs have grown from approximately 5% of data center CPU shipments in 2020 to 15-23% in 2025. Key drivers include 30-60% better energy efficiency versus x86, which translates to significant cost savings at data center scale. AWS Graviton now runs over half of all new Amazon CPU capacity, Google and Microsoft have deployed custom ARM CPUs (Axion, Cobalt), and NVIDIA's own Grace/Vera CPUs are ARM-based. The trend is accelerating because NVIDIA's NVL72 rack pairs GPUs exclusively with ARM CPUs, pulling the GPU ecosystem toward ARM.