🦾 Beyond the GPU Hype: CPUs Still Do the Heavy Lifting

A Newsletter for Entrepreneurs, Investors, and Computing Geeks

Happy Monday! Here’s what’s inside this week’s newsletter:

  • Deep dive: Why CPUs still handle most high-performance workloads, how new CPU architectures are closing the gap, and why GPU migration remains challenging.

  • Spotlights: Quantinuum launches Helios, its most accurate commercial quantum computer to date, and Deutsche Telekom partners with NVIDIA on a €1 billion data center project to power Germany’s new Industrial AI Cloud.

  • Headlines: Semiconductor moves including SK Hynix’s HBM advances, quantum breakthroughs, progress in photonics and neuromorphic tech, and major data center and AI expansions.

  • Readings: Semiconductor scaling and market trends, quantum networking and lunar experiments, renewed photonics investment, neuromorphic edge growth, and evolving AI data center infrastructure.

  • Funding news: Moderate activity, led by EdgeCortix’s $110M Series B in semiconductors, alongside early-stage deals in quantum and data-center infrastructure.

  • Bonus: The latest Data Center Construction Cost Index shows stabilizing prices for traditional builds, a persistent AI premium for liquid-cooled facilities, and shifting regional cost patterns as supply chains mature.

Deep Dive: Beyond the GPU Hype – CPUs Still Do the Heavy Lifting

GPUs dominate the spotlight, with installations expected to grow more than 17% annually through 2030. Yet CPUs continue to handle most of the world’s real computational work, powering roughly 80–90% of all high-performance simulation workloads.

GPU-powered AI may define the public narrative, but a quieter transformation is unfolding beneath it. CPUs, long the backbone of scientific and industrial computing, are undergoing a technological renaissance. Far from obsolete, they remain essential for workloads where stability, precision, and compatibility matter more than raw parallelism.

Fit-for-Purpose Computing

Supercomputing has never been about one-size-fits-all infrastructure. Each workload, whether climate modeling, financial risk analysis, or molecular dynamics, requires a specific balance of performance, precision, and cost.

GPUs excel at highly parallel workloads like AI training and image generation. But CPUs remain unmatched for tasks involving complex control logic, sequential processing, and high numerical accuracy, particularly in fields governed by regulation or decades of validated code.

The New Wave of CPU Innovation

While GPUs pioneered advances like High-Bandwidth Memory (HBM) and stacked cache architectures, CPUs are now adopting these technologies to bring GPU-class bandwidth and efficiency to general-purpose computing. A new generation of architectures is emerging that closes performance gaps while preserving compatibility:

  • High-Bandwidth Memory (HBM): Integrating HBM directly on the CPU package delivers multi-terabyte-per-second bandwidth, a leap of 4–5× over previous generations, without code changes. Since most HPC workloads are memory-bound, these gains translate into immediate real-world performance improvements.

  • 3D V-Cache and Stacked Architectures: By vertically stacking additional cache (high-speed on-chip memory) on top of the CPU die, next-generation processors can triple cache capacity. This keeps larger datasets physically closer to compute units, reducing latency and boosting performance in iterative simulations and data-intensive workloads.

Together, these advances represent the most significant leap in CPU memory performance in two decades, reshaping what is possible for bandwidth-constrained applications across weather, finance, and engineering domains.
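To see why bandwidth gains translate so directly into speedups for memory-bound code, a simple roofline-model estimate helps (the peak, bandwidth, and arithmetic-intensity numbers below are illustrative, not vendor figures):

```python
def attainable_gflops(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Roofline model: performance is capped by the lower of the compute
    peak and memory bandwidth x arithmetic intensity."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# Hypothetical CPU: 3000 GFLOP/s peak; a stencil kernel at 0.25 FLOP/byte
ddr = attainable_gflops(3000, 300, 0.25)    # conventional DDR: ~300 GB/s
hbm = attainable_gflops(3000, 1500, 0.25)   # on-package HBM: ~5x bandwidth

print(ddr)  # 75.0  -> bandwidth-bound on DDR
print(hbm)  # 375.0 -> the 5x bandwidth gain passes straight through
```

Because the kernel never reaches the compute roof in either case, the bandwidth multiplier becomes the performance multiplier, which is exactly why memory-bound HPC codes benefit without code changes.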

Challenges in Transitioning Beyond CPUs

As workloads scale, the promise of GPU acceleration is hard to ignore. Yet shifting away from CPU-based systems remains far from straightforward. Despite GPUs’ performance potential, several practical and economic barriers keep CPUs central to high-performance computing:

  • Code Modernization and Migration Effort: Most scientific and engineering applications are written in languages such as Fortran and C, with codebases deeply tied to CPU architectures. Porting them to GPUs demands extensive refactoring and optimization before they can run efficiently.

  • Verification and Reliability: Once adapted, critical codes must be rigorously re-tested to ensure correctness and compliance, especially in regulated sectors such as aerospace, energy, and finance, adding further cost and delay to migration efforts.

  • Power and Cooling Overheads: GPUs often require significantly higher power budgets and generate more heat per chip, increasing demands on cooling infrastructure. Over time, these energy and thermal costs can erode the efficiency gains achieved through faster computation.

  • Rapid Hardware Obsolescence: GPU architectures evolve quickly, often requiring re-tuning with each generation. This pace of change is far faster than on CPUs, limiting long-term stability and complicating maintenance for large-scale deployments.

  • Cost and Total Cost of Ownership (TCO): When all factors are considered (migration, infrastructure, re-optimization, and energy consumption), the overall economics can deteriorate quickly, often offsetting theoretical performance gains.
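To make the TCO point concrete, here is a back-of-the-envelope five-year comparison (every figure below is hypothetical, chosen only to show how migration and energy costs stack against hardware price):

```python
def five_year_tco(hardware, annual_power_kwh, kwh_price, migration=0.0, years=5):
    """Total cost of ownership: hardware + one-off migration + energy over the period."""
    return hardware + migration + annual_power_kwh * kwh_price * years

# Hypothetical cluster budgets (USD)
cpu = five_year_tco(hardware=2_000_000, annual_power_kwh=1_500_000, kwh_price=0.125)
gpu = five_year_tco(hardware=3_000_000, annual_power_kwh=2_500_000, kwh_price=0.125,
                    migration=1_000_000)  # code porting + re-verification

print(cpu)  # 2937500.0
print(gpu)  # 5562500.0 -> ~1.9x the cost, to be weighed against any speedup
```

Under these assumed numbers, a GPU migration would need to deliver roughly a 2x sustained speedup just to break even on cost, before accounting for re-tuning across hardware generations.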

Together, these factors do not diminish the importance of GPUs, but they underline that CPUs remain the backbone of high-performance computing today.

Source: Designing CPUs for Next-Generation Supercomputing (MIT Technology Review, 2025)

Spotlights

⚛️ Quantinuum announces commercial launch of Helios (The Quantum Insider)

“Quantinuum has launched Helios, described as the world’s most accurate general-purpose commercial quantum computer, to accelerate enterprise adoption and enable hybrid quantum–classical computing.

Helios integrates with NVIDIA’s GB200 processors via NVQLink, expanding Quantinuum’s partnership with NVIDIA to support real-time error correction, hybrid programming through the new Guppy language, and applications in GenAI, materials, and finance.”

💥 Deutsche Telekom and NVIDIA partner on Germany’s Industrial AI Cloud

“The partnership brings together Deutsche Telekom’s trusted infrastructure and operations and NVIDIA AI and Omniverse digital twin platforms to power the AI era of Germany’s industrial transformation.

[…]

The platform harnesses state-of-the-art NVIDIA hardware — including DGX B200 systems and RTX PRO Servers — as well as software including NVIDIA AI Enterprise and NVIDIA Omniverse, fully integrated into Deutsche Telekom’s cloud and network ecosystem.

Built in German data centers and powered by up to 10,000 NVIDIA GPUs, the Industrial AI Cloud gives manufacturers, automakers, robotics, healthcare, energy and pharma leaders the compute muscles they need.”

In addition, Deutsche Telekom and NVIDIA signed a €1 billion partnership to build a data center in Munich.

Headlines


Last week’s headlines featured semiconductor moves from SK Hynix, Tesla, and Volkswagen, breakthroughs in quantum networking and national initiatives, advances in photonics and neuromorphic tech, major data center and cloud investments, and new AI infrastructure updates.

⚛️ Quantum


⚡️ Photonic / Optical

🧠 Neuromorphic

💥 Data Centers

☁️ Cloud

📡 Networking

🤖 AI

Readings


This reading list covers semiconductor market shifts and scaling challenges, lunar quantum experiments and new qubits, renewed investment in photonics, neuromorphic market growth, cloud governance trade-offs, and evolving infrastructure for AI at the edge and in data centers.

🦾 Semiconductors

Global Semiconductor Sales Increase 15.8% from Q2 to Q3 (Semiconductor Industry Association) (6 mins)

⚛️ Quantum

⚡️ Photonic / Optical

Part 1 was originally published in September but is included here for context.

🧠 Neuromorphic

💥 Data Centers

In-System Test For AI Data Centers (SemiEngineering) (15 mins – Video)

New Data Center Developments: November 2025 (Data Center Knowledge) (25 mins)

☁️ Cloud

🤖 AI

Moving AI Workloads To The Edge (SemiEngineering) (12 mins)

Funding News


Last week’s funding activity was moderate, led by a $110M Series B for EdgeCortix. Most rounds clustered in semiconductors, complemented by early-stage deals in quantum and data-center infrastructure.

  Amount   Name                        Round     Category
  €1.5M    Leil                        Seed      Data Centers
  $3M      EuQlid                      Seed      Quantum
  $17.5M   RAAAM Memory Technologies   Series A  Semiconductors
  $25M     DualBird                    Series A  Semiconductors
  $110M    EdgeCortix                  Series B  Semiconductors

Bonus: Data Center Construction Cost Index (2025-2026)

The latest index highlights stabilizing costs for traditional builds, rising complexity and premiums for AI-ready facilities, and persistent inflation despite maturing global markets.

Stabilization

Construction costs for traditional cloud data centers are stabilizing, with a 5.5% year-on-year increase versus 9% in 2024. Broader construction inflation has cooled to around 4.2%, and maturing supply chains are easing pressure in newer regions.

AI Premium

AI-ready, liquid-cooled data centers remain significantly more expensive—typically 7–10% higher than air-cooled builds—due to higher power density, technical complexity, and specialized cooling systems.

Density

Higher power density allows smaller footprints and potential cost offsets. Mega-campuses designed for AI training can achieve economies of scale, while reduced redundancy requirements in some builds lower costs further.

Markets

Tokyo (US$15.2/W), Singapore (US$14.5/W), and Zurich (US$14.2/W) remain the most expensive markets. Paris, Amsterdam, Madrid, and Dublin are catching up as demand grows and supply chains mature, while Lagos has normalized after early setup cost spikes.
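To put the $/W figures in perspective, a quick estimate of total construction cost for a facility (the 50 MW capacity below is illustrative, not from the index):

```python
def construction_cost_usd(capacity_mw, cost_per_watt):
    """Estimate build cost from IT capacity and the index's $/W figure."""
    return capacity_mw * 1_000_000 * cost_per_watt

# A hypothetical 50 MW facility at the index's market rates
tokyo = construction_cost_usd(50, 15.2)
zurich = construction_cost_usd(50, 14.2)

print(f"${tokyo/1e9:.2f}B vs ${zurich/1e9:.2f}B")  # $0.76B vs $0.71B
```

Even a one-dollar-per-watt spread between markets moves the budget of a mid-sized build by tens of millions of dollars, which is why regional cost patterns matter so much to operators.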


❤️ Love these insights? Forward this newsletter to a friend or two. They can subscribe here.