⚛️🦾 Who Wins When Quantum Hardware Is Put to the Test?
A Newsletter for Entrepreneurs, Investors, and Computing Geeks
Happy Monday! This week’s deep dive looks at MIT’s latest Quantum Index Report, with insights drawn from the chapter on benchmarking leading quantum processors and what it reveals about the state of progress across modalities.
In our spotlights, we feature Lightmatter’s optical interconnect breakthrough and new GPU benchmarks comparing Nvidia’s H100 and GB200 clusters.
The headlines cover a dense lineup of quantum announcements, along with major moves in semiconductors, photonics, neuromorphic computing, and data centers.
This week’s readings include wafer-level chip connectivity, greener semiconductor manufacturing, proteins as qubits, reconfigurable photonic processors, and reports on global data center capacity.
Funding news included six announced deals last week, with most of the activity concentrated at the early stage across semiconductors, quantum, and AI.
In the bonus section, we turn to the U.S.–China chip debate over Nvidia and AMD’s H20 exports, drawing on recent reports and analysis, and note that Nvidia is already preparing a new AI chip for China.
Deep Dive: Who Wins When Quantum Hardware Is Put to the Test?
This deep dive focuses on a specific part of the MIT Quantum Index Report 2025: the section on benchmarking quantum processors (chapter 10, pages 93-104).
Source: “Quantum Index Report 2025” (MIT, 2025)
Why Quantum Benchmarks Are Essential
Quantum computing performance is notoriously opaque. Multiple modalities, the absence of universally accepted metrics, and the fact that results vary by algorithm and hardware make it difficult to assess progress.
Without clarity, predictions about investments, deployments, and use-case readiness remain unreliable.
Modalities in Focus
Quantum processors are built on a range of physical platforms (i.e., modalities), each with distinct characteristics, trade-offs, and technical challenges. Understanding these modalities is essential for interpreting performance metrics and benchmarking results.
Superconducting QPUs
Definition: Encode quantum information in the current or phase of superconducting circuits cooled to near absolute zero.
Strengths: Fast gate speeds, relatively mature hardware ecosystem.
Limitations: Require cryogenic infrastructure, limited coherence times.
Trapped-Ion QPUs
Definition: Use individual ions held by electromagnetic fields, with quantum gates performed via lasers.
Strengths: High gate fidelity, long coherence times, strong connectivity.
Limitations: Slow gate speeds, limited scalability due to optical control complexity.
Neutral-Atom QPUs
Definition: Trap and control atoms using optical tweezers; qubit states are manipulated with lasers.
Strengths: Scalable architectures, flexible 2D and 3D layouts.
Limitations: Lower gate fidelities than other platforms (but gap is closing), challenging precision control.
Photonic QPUs
Definition: Use photons as qubits, routed through integrated optics or fibers.
Strengths: Room-temperature operation, high-speed communication, network integration potential.
Limitations: Photon loss, limited multi-qubit control, fidelity challenges.
Electron Spin QPUs
Definition: Leverage the quantum state of single electrons as qubits, often in silicon quantum dots.
Strengths: Long coherence times, high-fidelity control potential, CMOS compatibility.
Limitations: Require cryogenic cooling; scalability and uniform control remain major challenges.
Key Metrics and Trade-offs
The MIT study uses a range of physical and aggregated benchmarks to evaluate QPU performance. Below are the most relevant metrics, along with the core trade-offs that influence system behavior.
Qubit Count
Definition: The number of physical qubits available in a QPU. This reflects the system’s capacity to represent quantum states and scale up workloads.
Trade-off: More qubits increase architectural complexity and noise sensitivity. Without sufficient control fidelity and error mitigation, larger systems may not yield better performance.
Fidelity
Definition: A measure of how accurately a quantum operation (typically a gate) performs compared to its ideal version. Two-qubit gate fidelity is especially important for most algorithms.
Trade-off: Higher fidelity often requires slower operations, extensive calibration, or constraints on system size and layout.
Gate Speed
Definition: The time it takes to execute a quantum gate, typically measured in nanoseconds or microseconds.
Trade-off: Faster gates enable deeper circuits within limited coherence times, but may come at the cost of increased error rates or lower fidelity.
Coherence
Definition: The duration over which a qubit retains its quantum state without decohering.
Trade-off: Platforms with long coherence times may have slower gate speeds or more limited qubit control, affecting throughput.
Note: Gate speed and coherence must be considered together, since only their combination determines how many gate operations can be executed before decoherence sets in.
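To make this note concrete, here is a minimal Python sketch using illustrative, order-of-magnitude numbers of our own (not benchmark data from the MIT report) to estimate how many sequential gate operations fit inside a qubit's coherence window on two platforms:

```python
def gate_ops_budget(coherence_time_s: float, gate_time_s: float) -> int:
    """Rough number of sequential gates executable before decoherence
    dominates: coherence window divided by single-gate duration (rounded)."""
    return round(coherence_time_s / gate_time_s)

# Illustrative values only (assumptions, not measured figures):
# superconducting: ~100 us coherence, ~50 ns two-qubit gate
# trapped ion:     ~1 s coherence,   ~100 us two-qubit gate
superconducting = gate_ops_budget(100e-6, 50e-9)  # ~2,000 gates
trapped_ion = gate_ops_budget(1.0, 100e-6)        # ~10,000 gates
```

Even with gates roughly a thousand times slower, the trapped-ion qubit in this toy comparison supports a larger operations budget, which is why neither gate speed nor coherence is meaningful in isolation.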
Quantum Volume (QV)
Definition: A composite metric that reflects the largest random circuit a system can execute successfully. It incorporates qubit count, connectivity, fidelity, and gate depth.
Trade-off: High QV does not always correlate with application-level usefulness, since systems can be optimized for this benchmark without necessarily improving real-world performance.
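The interplay between qubit count and fidelity behind Quantum Volume can be sketched with a toy model (our own simplification, not the formal heavy-output protocol): assume a square circuit on m qubits with depth m accumulates error over roughly m x m two-qubit gates, and "succeeds" while the modeled fidelity stays above the standard 2/3 success threshold.

```python
def toy_quantum_volume(n_qubits: int, two_qubit_fidelity: float) -> int:
    """Toy QV estimate: 2**m for the largest square (m x m) circuit whose
    modeled success probability fidelity**(m*m) exceeds the 2/3 threshold."""
    best = 0
    for m in range(1, n_qubits + 1):
        if two_qubit_fidelity ** (m * m) > 2 / 3:
            best = m
        else:
            break  # success probability only decreases as circuits grow
    return 2 ** best

# A small high-fidelity device vs a large noisy one (illustrative numbers):
small_high_fidelity = toy_quantum_volume(20, 0.995)  # -> 256
large_noisy = toy_quantum_volume(100, 0.98)          # -> 16
```

In this toy model, a 20-qubit device at 99.5% gate fidelity out-scores a 100-qubit device at 98%, illustrating the report's finding that qubit count alone is a poor predictor of capability.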
Benchmarking Outcomes
The benchmarking results in the MIT study reveal a nuanced landscape across quantum platforms. Rather than identifying a single leading QPU or modality, the data shows a fragmented picture, shaped by trade-offs between speed, fidelity, scalability, and error resilience.
No platform consistently leads
Performance depends heavily on the metric. Superconducting QPUs typically lead in gate speed, trapped-ion QPUs achieve the highest fidelities, and neutral-atom QPUs show promise in scalability. However, none consistently outperform across all benchmark categories.

Source: “Quantum Index Report 2025” (MIT, 2025), p. 99
Qubit count remains a poor predictor of capability
The study highlights that several smaller QPUs outperform larger ones in key metrics. Architectural choices and qubit quality matter more than raw qubit numbers, and scaling up often introduces additional noise and complexity.
Benchmark results can be tuned
Aggregated benchmarks such as Quantum Volume and CLOPS (Circuit Layer Operations Per Second) provide useful insights, but are highly sensitive to device-specific optimizations. Vendors frequently tune systems for particular benchmarks, which can obscure true general-purpose performance.
Despite the progress reflected in these benchmarks, quantum computers today remain below the performance of classical systems for all practical tasks.
Spotlights
⚡️ Lightmatter Achieves World‑First 16‑Wavelength Bidirectional Link on Single‑Mode Optical Fiber (Business Wire)
“Lightmatter, the leader in photonic (super)computing, today announced a groundbreaking achievement in optical communications: a 16-wavelength bidirectional Dense Wavelength Division Multiplexing (DWDM) optical link operating on one strand of standard single-mode (SM) fiber. Powered by Lightmatter's industry-leading Passage™ interconnect and Guide™ laser technologies, this breakthrough shatters previous limitations in fiber bandwidth density and spectral utilization, setting a new benchmark for high-performance, resilient data center interconnects.”
🦾 H100 vs GB200 NVL72 Training Benchmarks (SemiAnalysis)
“In this report, we will start by presenting the results of benchmark runs across over 2,000 H100 GPUs, analyzing data on model flops utilization (MFU), total cost of ownership (TCO) and cost per 1M tokens trained. We will also discuss energy use, examining the utility energy in Joules consumed for each token trained and comparing it to the average US household annual energy usage, reframing power efficiency in societal context. We will also show the results of this analysis when scaling the GPU cluster from 128 H100s to 2048 H100s and across different versions of Nvidia software.”
Headlines
Last week’s headlines highlight major moves in semiconductors, from new fabrication techniques to multibillion-dollar investments, a dense lineup of quantum announcements, and fresh breakthroughs in photonics, neuromorphic computing, and data center networking.
🦾 Semiconductors
Rice Scientists Pioneer Transfer‑Free Method to Grow Ultrathin Semiconductors on Electronics (Rice University)
SoftBank Makes $2B Investment in Intel (TechCrunch)
Japan Storms Back into the Chip Wars (The Economist)
⚛️ Quantum Computing
QuiX Quantum Aims for Universal Photonic Quantum Computer in 2026 (EE Times Europe)
Quantinuum Eyes $10 Billion Valuation in New Fundraising Talks (The Quantum Insider)
US Quantum Computing Firm Strangeworks Expands European Presence with Quantagonia Acquisition (Tech.eu)
Microsoft Maps Path to a Quantum‑Safe Future (The Quantum Insider)
Quantum Dots Reveal Hidden Spins, Boosting Data Control (Quantum Zeitgeist)
Researchers Claim New Benchmark Set in Secure Quantum Communication (The Quantum Insider)
Researchers Unlock Error Correction for Distributed Quantum Computing (Quantum Zeitgeist)
⚡️ Photonic / Optical Computing
Former BelGaN-site to Become Europe’s First Full‑Fledged Photonic Chip Centre (Belga News Agency)
Boron Nitride Membranes Unlock New Photonics Potential (Quantum Zeitgeist)
European Project to Repurpose Fiber‑Optic Cables Into Photonic Sensors (All About Circuits)
Researchers Unlock 96% Fidelity, Chip‑Scale Entangled Photons (Quantum Zeitgeist)
🧠 Neuromorphic Computing
💥 Data Centers
NVIDIA Introduces Spectrum‑XGS Ethernet to Connect Distributed Data Centers Into Giga‑Scale AI Super‑Factories (NVIDIA Newsroom)
Readings
This week’s reading list features developments in wafer-level connectivity and greener chip manufacturing, a crossover between quantum computing and biology, reconfigurable photonic processors, and shifting dynamics in the data center landscape.
🦾 Semiconductors
A path to high‑density front and backside wafer connectivity (imec) (19 mins)
How Can We Reduce Environmental Impact in Chip Manufacturing? (imec) (21 mins)
⚛️ Quantum Computing
Proteins Double as Qubits, A Step That Could One Day Bridge Quantum Computing and Biology (The Quantum Insider) (9 mins)
⚡️ Photonic / Optical Computing
Reconfigurable Large‑Scale Optoelectronic Reservoir Computing on Programmable Silicon Photonic Processor (ResearchGate) (4 mins - abstract only)
Reconfigurable Versatile Integrated Photonic Computing Chip (EurekAlert!) (5 mins)
💥 Data Centers
Report: Hyperscale Data Center Market Size, Share & Trends Forecast 2025–2034 (Global Market Insights) (23 mins)
Data centres: too many blank spots in Central and Eastern Europe (Euronews) (5 mins)
Funding News
Last week brought a handful of additional funding rounds, with most activity concentrated at the early stage. In total, six deals were announced across semiconductors, quantum, and AI.
| Amount | Name | Round | Category |
|---|---|---|---|
| ₹1.39M | | | Semiconductors |
| $2.5M | | | Quantum |
| €6.6M | | | Semiconductors |
| $7M | | | AI |
| $10M | | | Quantum |
| approx. $50M | | | Semiconductors |
Bonus: Risk or Reward? The U.S.–China Chip Debate
The U.S. decision to allow sales of NVIDIA’s H20 chips and similar AMD products to China has sparked debate. Critics warn it weakens security. Supporters argue it strengthens U.S. leadership in AI hardware and slows the development of a competitive Chinese chip industry. Quite an interesting discussion!
America Hands China an AI Advantage (Hudson Institute)
While this debate unfolds, another development adds fuel to the discussion: Nvidia is reportedly working on a new, more powerful AI chip tailored for the Chinese market.