AI Server System BOM & Rack Cost — Historical Time Series Data
Full system-level bill of materials for AI training servers — from DGX H100 (8-GPU) through GB200 NVL72 (72-GPU rack). Tracks how rack economics evolved as HBM and networking costs exploded.
8 data points · Unit: $K · Last updated: Mar 2026
Data Table
| Period | Series | Value |
|---|---|---|
| DGX H100 (2023) | GPU (COGS) | 26.6 |
| DGX H100 (2023) | Networking | 8 |
| DGX H100 (2023) | Power & Cooling | 4 |
| DGX H100 (2023) | Memory, Storage & Chassis | 6.4 |
| GB200 NVL72 (2024) | GPU (COGS) | 460.8 |
| GB200 NVL72 (2024) | Networking | 80 |
| GB200 NVL72 (2024) | Power & Cooling | 120 |
| GB200 NVL72 (2024) | Memory, Storage & Chassis | 65 |
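The per-system totals and per-GPU costs implied by the table can be derived directly. A minimal sketch in Python, using only the BOM values above (the dictionary layout and GPU counts of 8 and 72 come from the system names in the table):

```python
# BOM values from the table above, in $K (approximate COGS, not list prices).
bom = {
    "DGX H100 (2023)": {
        "GPU (COGS)": 26.6,
        "Networking": 8.0,
        "Power & Cooling": 4.0,
        "Memory, Storage & Chassis": 6.4,
    },
    "GB200 NVL72 (2024)": {
        "GPU (COGS)": 460.8,
        "Networking": 80.0,
        "Power & Cooling": 120.0,
        "Memory, Storage & Chassis": 65.0,
    },
}
# GPU counts per system, as stated in the system names (8-GPU vs 72-GPU rack).
gpus = {"DGX H100 (2023)": 8, "GB200 NVL72 (2024)": 72}

for system, parts in bom.items():
    total = sum(parts.values())  # total system/rack BOM in $K
    net_share = parts["Networking"] / total  # networking as a fraction of BOM
    print(f"{system}: total ${total:,.1f}K, "
          f"per-GPU ${total / gpus[system]:.2f}K, "
          f"networking {net_share:.1%} of BOM")
```

This puts the DGX H100 at roughly $45K total (about $5.6K per GPU) versus roughly $726K for the GB200 NVL72 rack (about $10.1K per GPU), making the per-GPU cost growth in the summary concrete.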
Methodology & Sources
Synthesized from the Epoch.ai hardware cost database, SemiAnalysis system-architecture deep dives, and analyst estimates. All figures are approximate manufacturing/procurement costs, not list prices.
Citations:
- Epoch.ai / NVIDIA DGX spec (Dec 2023)
- SemiAnalysis (Dec 2023)
- Data center analyst estimates (Dec 2023)
- Silicon Analysts estimate (Mar 2026)
- Silicon Analysts / chipSpecs.ts (Mar 2026)
- SemiAnalysis rack cost analysis (Dec 2024)
- NVIDIA NVL72 spec / Silicon Analysts (Mar 2026)