Happy Tuesday!
Over the last two weeks, quantum computing saw its largest M&A deal ever, Samsung shipped the world’s first commercial HBM4, and a startup proved you can beat the GPU giant at its own game. With FPGAs.
Here's what to expect in this week's newsletter:
Spotlights: IonQ acquires SkyWater Technology for $1.8 billion to become the first vertically integrated quantum platform, Samsung ships the world’s first commercial HBM4, and Positron hits unicorn status with $230M from Arm and Qatar
Funding News: Big rounds in AI chips, photonics, and quantum networking
Bonus: Why Nvidia's 90% market share is finally starting to crack
Spotlights
IonQ announced the acquisition of U.S. semiconductor foundry SkyWater Technology for $1.8 billion — the largest M&A transaction in quantum computing history.
The deal ($770M cash, remainder in stock) transforms IonQ into the world's only vertically integrated quantum platform company, with in-house chip design, fabrication, and packaging across facilities in Minnesota, Florida, and Texas.
Why does that matter? Speed.
IonQ claims it can now shrink its 256-qubit chip development timeline from nine months to just two. "The combination of Oxford plus SkyWater makes IonQ an inevitability to prevail," said Chairman & CEO Niccolo de Masi.
SkyWater brings critical capabilities: DMEA Category 1A Trusted Foundry status, ~$300M in government investment, and existing quantum customers including D-Wave and PsiQuantum. IonQ has accelerated its roadmap accordingly, now targeting 200,000-qubit QPUs (8,000 logical qubits) for 2028.
Wall Street's reaction was mixed. The stock initially rose 4% before closing down on dilution concerns. But for quantum bulls, the message is clear: the consolidation era has begun.

(Credit: Samsung)
Samsung announced on February 12 that it has begun commercial shipments of HBM4, making it the first company in the world to do so. The next-generation memory delivers 11.7 Gbps per pin (upgradable to 13 Gbps) and 3.3 TB/s per stack, and is manufactured on Samsung's most advanced DRAM process node.
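As a rough sanity check on those figures, here is a minimal sketch assuming the JEDEC HBM4 interface width of 2,048 bits per stack (double HBM3e's 1,024) — an assumption, not a Samsung-confirmed number:

```python
# Back-of-envelope HBM4 bandwidth check.
# Assumption: 2,048-bit per-stack interface (JEDEC HBM4), not a vendor-confirmed figure.
INTERFACE_BITS = 2048

def stack_bandwidth_tb_s(pin_rate_gbps: float) -> float:
    """Per-stack bandwidth in TB/s given a per-pin data rate in Gb/s."""
    return INTERFACE_BITS * pin_rate_gbps / 8 / 1000  # bits -> bytes, GB -> TB

for rate in (11.7, 13.0):
    print(f"{rate:>4} Gb/s per pin -> ~{stack_bandwidth_tb_s(rate):.2f} TB/s per stack")
# ~3.0 TB/s at 11.7 Gb/s and ~3.33 TB/s at 13 Gb/s, so the quoted 3.3 TB/s
# lines up with the upgraded 13 Gb/s pin speed rather than the base rate.
```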
The chips are destined for Nvidia's Vera Rubin platform, expected later this year. Samsung shares surged 7.6% on the announcement, hitting an all-time high.
SK Hynix, which dominated the HBM3e cycle as Nvidia's primary supplier, is close behind with its own HBM4 mass production expected by March/April. The stakes are enormous: TrendForce estimates HBM revenue will exceed $100B in 2026, and Samsung plans to triple its HBM sales this year. For AI infrastructure, HBM supply is the bottleneck of the moment, and it is finally starting to loosen.
Positron closed a $230M Series B on February 4, valuing the AI chip startup at over $1 billion. Strategic investors include Arm Holdings and the Qatar Investment Authority, a sign that sovereign wealth funds are now diversifying beyond Nvidia.
The company's Atlas chip takes an unconventional approach: rather than competing on raw compute, Positron optimizes for memory bandwidth utilization. Where GPUs typically achieve 10-30% memory efficiency, Atlas achieves 93%, delivering comparable performance to an H100 while consuming less than one-third the power.
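Why utilization matters so much for inference: during decode, every generated token has to stream the model's weights out of memory, so throughput is bounded by effective bandwidth rather than peak FLOPS. A toy illustration with hypothetical numbers, not Positron's or Nvidia's actual specs:

```python
# Toy model of memory-bound decode throughput. All numbers are illustrative,
# not vendor specs: same peak bandwidth, different achieved utilization.

def decode_tokens_per_s(peak_bw_tb_s: float, utilization: float,
                        params_b: float, bytes_per_param: float = 2.0) -> float:
    """Rough upper bound on single-stream decode rate: each token reads all
    weights once, so tokens/s ~= effective bandwidth / model size in bytes."""
    model_bytes = params_b * 1e9 * bytes_per_param       # e.g. 70B params in FP16
    effective_bw = peak_bw_tb_s * 1e12 * utilization
    return effective_bw / model_bytes

for label, util in [("GPU at 30% utilization", 0.30), ("Atlas-style at 93%", 0.93)]:
    rate = decode_tokens_per_s(peak_bw_tb_s=3.35, utilization=util, params_b=70)
    print(f"{label}: ~{rate:.0f} tokens/s per stream")
# Identical peak bandwidth, yet roughly 3x the delivered tokens/s purely from utilization.
```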
"In our testing, Positron Atlas delivered roughly 3× lower end-to-end latency than a comparable H100-based system," said Jump Trading CTO Alex Davies, whose firm both invested and deployed the chips.
Led by CEO Mitesh Agrawal, a former Lambda executive, and CTO Thomas Sohmers, a Thiel Fellow, Positron has now raised a total of $305M. The current FPGA-based Atlas ships to customers, including Cloudflare. A custom ASIC called "Asimov" with 2TB of LPDDR5x tapes out in Q3 2026. Critically, everything is manufactured in Arizona, bypassing the CoWoS and HBM bottlenecks strangling competitors.
Headlines
Semiconductors & AI Hardware
EU inaugurates €2.5B NanoIC pilot line at IMEC — Europe's first facility with the most advanced EUV lithography
UK AI chip startup Fractile commits £100M to expand UK operations, targeting H2 2026 tapeout for inference chips
Applied Materials pays $252.5M to settle China export probe, beats Q1 estimates
Cadence launches ChipStack AI "Super Agent" for agentic chip design — Nvidia, Qualcomm as early users
SIA: Global semiconductor sales hit $798B in 2025, on track for $1 trillion in 2026
Microsoft launches Maia 200, its most powerful AI chip yet
Intel Foundry demos massive AI test chip with 12 HBM4 stacks at 8x reticle size
Quantum
Infleqtion SPAC merger approved (90%+ votes), lists on NYSE as INFQ on February 17
EU launches €50M SUPREME consortium for superconducting quantum industrialization with 23 partners, including IQM and Infineon
Spain invests €9.75M in Nu Quantum to build European quantum networking hub in Madrid
Diraq secures $14M investment for Silicon Quantum Commercialization
Euro-Q-Exa quantum computer inaugurated at LRZ Munich — IQM's 54-qubit system, third EuroHPC quantum deployment
Infrastructure & Policy
Cisco unveils Silicon One G300: 102.4 Tbps AI networking chip challenging Broadcom and Nvidia
U.S. lawmakers demand blanket ban on chipmaking tool exports to China, targeting ASML
Nvidia H200 sales to China stalled by the State Department’s security review
Funding News
Amount | Name | Round | Category
$1B | | | AI Systems
$300M | | | AI Chip Design
$230M | Positron | Series B | AI Inference
$110M | | | Deeptech VC
$50M | | | Chiplet Interconnect
€10M | | | Photonics
$12M | | | Post-Silicon Semis
$8M | | | AI Semiconductors
$6M | | | Quantum Computing
$4M | QuEra/Roadrunner | | Quantum Testbed
Bonus: The Cracks in Nvidia's Monopoly Are Finally Showing
For years, the "Nvidia alternative" narrative has been a punchline. Startups raised billions, hyperscalers announced custom chips, and yet Jensen Huang's company kept posting gross margins above 80% while competitors shipped PowerPoint decks.
But something feels different this time.
Nvidia's AI accelerator market share has dropped from ~95% at its peak to 75-80% today, according to multiple analyst estimates. Custom ASIC shipments are growing at 44.6% annually versus 16.1% for GPUs. And hyperscaler decoupling is underway: Microsoft's Maia 200 now powers Copilot, Amazon’s Trainium3 runs Anthropic's Claude, and Google's TPU v7 handles most internal inference.
The shift is driven by economics. AWS is pitching Trainium at a 30–40% price-performance advantage over general-purpose GPUs for inference workloads, while Meta reports 44% lower costs with its MTIA chips. When you're spending $100B+ on AI infrastructure annually, those percentages translate to tens of billions in savings.
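The back-of-envelope is worth spelling out. The sketch below uses the claimed discount ranges from above and a hypothetical $100B annual spend, and optimistically assumes the entire spend could shift to the cheaper silicon, so treat the results as upper bounds:

```python
# Rough upper bound on hyperscaler savings. Hypothetical spend figure; the real
# number depends on how much of the workload mix actually moves off GPUs.
annual_ai_capex = 100e9  # "$100B+" AI infrastructure spend, per the text above

for claimed_advantage in (0.30, 0.40, 0.44):  # AWS's claimed range, Meta's MTIA figure
    savings = annual_ai_capex * claimed_advantage
    print(f"{claimed_advantage:.0%} price-performance edge -> up to ${savings / 1e9:.0f}B/yr")
```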
But here's what's underappreciated: even the startups are finally shipping real products. Positron's Atlas cards are running in Cloudflare's datacenters. Cerebras is preparing a $20B+ IPO. And Groq got acquired by Nvidia itself for $20B, the ultimate validation that the threat was real.
None of this means Nvidia is in trouble. CUDA's 20-year ecosystem moat is real, and training workloads still strongly favor its hardware. But the era of 90%+ market share is ending.