🧠🦾 Inside the Machine That Thinks Like a Brain
A Newsletter for Entrepreneurs, Investors, and Computing Geeks
Happy Monday! This week's deep dive explores neuromorphic computing: what it means for AI efficiency, how brain-inspired principles are translated into hardware, and the technical challenges still holding it back.
In our spotlights, we look at new approaches to processor architectures for energy efficiency and Jensen Huang's blunt take on the future of photonics.
Our headlines cover another quantum-heavy week, alongside updates in semiconductors, photonics, cloud valuations, and AI model releases.
The readings section spans high-bandwidth memory, quantum-assisted drug discovery, neuromorphic system design, and advances in photonic chip technology.
More than half of last week's funding news centers on semiconductors, with deals ranging from a £500K cloud pre-seed to a $300M semiconductor round.
For this week's bonus, we look at the deepening ties between the U.S. chip industry and federal policy.
Deep Dive: Inside the Machine That Thinks Like a Brain
As AI models grow more complex, their compute and energy demands are soaring. GPUs deliver raw performance but weren't designed for the scale and efficiency needs of modern workloads. Neuromorphic computing offers a different path, inspired by how the brain processes information.
Brain-Inspired Principles
Neuromorphic computing draws on concepts from neuroscience, focusing on how computation is organized and how information flows through a system. These ideas include the following:
Massive parallelism: Many processing elements work at the same time, so complex tasks can be split into smaller pieces that run simultaneously, greatly improving speed and scalability.
Event-based communication: Components transmit information only in response to specific signals or changes, avoiding the energy wasted on constant, unnecessary communication.
Sparse activation: Only the parts of the system needed for a task perform calculations; the rest stays idle to save resources.
Note: Event-based communication and sparse activation may sound similar, but they differ: event-based communication is about when and how components exchange information (communication efficiency), while sparse activation is about which parts of the system perform work (computation efficiency). The NumPy sketch below shows both effects side by side.
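To make this concrete, here is a minimal NumPy sketch, a toy example of ours rather than code for any particular neuromorphic chip. The layer sizes, weights, and the roughly 2% activity level are made-up assumptions; the point is that the event-driven path touches only the weights of active inputs.

```python
import numpy as np

# Toy illustration of sparse activation plus event-based communication
# (our own example, not any chip's API). Sizes, weights, and the ~2%
# activity level are arbitrary assumptions.
rng = np.random.default_rng(0)
weights = rng.normal(size=(1000, 1000))   # 1000 inputs -> 1000 neurons
active = rng.random(1000) < 0.02          # only ~2% of inputs "spike"
inputs = active.astype(float)

# Dense approach: every input participates, active or not (~10^6 multiply-adds).
dense_out = weights @ inputs

# Event-driven approach: only active inputs emit events, and only their
# weight columns are read -- roughly 2% of the arithmetic and traffic.
event_out = np.zeros(1000)
for idx in np.flatnonzero(active):        # iterate over events only
    event_out += weights[:, idx]

assert np.allclose(dense_out, event_out)  # same result, far less work
```

On event-driven hardware, the silent 98% of inputs cost neither computation nor communication. On a GPU, the dense product is usually still faster in wall-clock terms, which is precisely why dedicated hardware is needed to turn sparsity into real savings.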
Turning Brain-Inspired Ideas into Hardware
Neuromorphic computing translates concepts like parallelism, event-driven processing, and sparsity into concrete architectural choices at the chip and system level. The design features below are what make the approach so energy-efficient in practice.
Low-power hybrid processors: These combine different types of processing cores in a single chip to match the most suitable core to each task, improving performance while reducing energy use.
Asynchronous, event-driven design: In this approach, processing units operate independently and only exchange information when needed, minimizing idle power consumption.
Highly parallel topologies: These connect large numbers of processors so they can work simultaneously, from a single chip up to interconnected supercomputers, enabling large-scale computation at low energy cost.
Note: The above design elements put the earlier principles into action. While concepts like parallelism and event-driven processing describe how neuromorphic systems behave, these architectural choices show how the hardware is built to deliver that behavior. The toy event-queue sketch below illustrates the asynchronous, event-driven style.
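As a back-of-the-envelope illustration, here is a tiny discrete-event simulation, again our own toy model and not a real chip's programming interface. Units keep no clock and do work only when an event reaches them; the firing threshold, weights, and unit delay are arbitrary assumptions.

```python
import heapq

THRESHOLD = 2.0  # arbitrary firing threshold for this toy model

def simulate(connections, initial_events):
    """connections maps a unit to a list of (downstream_unit, weight)."""
    potential = {}                    # per-unit state, touched only on events
    queue = list(initial_events)      # (time, unit, value) tuples
    heapq.heapify(queue)
    while queue:
        time, unit, value = heapq.heappop(queue)
        potential[unit] = potential.get(unit, 0.0) + value
        if potential[unit] >= THRESHOLD:          # the unit "fires"
            potential[unit] = 0.0                 # reset after firing
            for target, weight in connections.get(unit, []):
                heapq.heappush(queue, (time + 1, target, weight))
    return potential

# Unit A fans out to B and C; no other unit ever runs or burns power.
net = {"A": [("B", 2.0), ("C", 1.5)]}
print(simulate(net, [(0, "A", 2.0)]))  # {'A': 0.0, 'B': 0.0, 'C': 1.5}
```

Real neuromorphic platforms implement this pattern in silicon, with hardware event routers in place of a software heap; that is where the idle-power savings come from.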
Technical Challenges of Neuromorphic Computing
Neuromorphic computing still faces several technical challenges that must be addressed before it can achieve mainstream adoption and deliver on its full potential.
Software tooling: Developers need compilers, model converters, and debugging tools that fully leverage the hardware's capabilities.
Interoperability: It must be possible to run AI models designed for GPUs or CPUs on neuromorphic hardware without extensive rewrites (a minimal conversion sketch follows this list).
Algorithm optimization: AI models need to be adapted to take full advantage of sparsity and event-driven processing.
Memory and bandwidth: Data movement between processing units and memory must avoid bottlenecks that limit performance.
Standardized benchmarks: The field requires fair, widely accepted ways to compare neuromorphic systems with conventional architectures.
Scalable manufacturing: Chips must be produced at volume while maintaining high performance and manufacturing yield.
Precision and stability: Numerical accuracy and reliability must be maintained for modern AI workloads.
Note: These challenges are arranged in order of relevance from our perspective. By relevance, we mean the extent to which each issue currently limits the performance, scalability, or adoption of neuromorphic systems. Happy to hear your take on it! Just shoot me a message.
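On the interoperability and algorithm-optimization points, one approach commonly explored in the research literature is rate-based ANN-to-SNN conversion, in which a trained ReLU layer is approximated by the firing rate of integrate-and-fire neurons. Below is a minimal sketch with made-up weights and sizes, not a production toolchain.

```python
import numpy as np

# Rate-coding sketch: approximate a trained ReLU layer with the firing
# rate of integrate-and-fire neurons. Weights, sizes, and the timestep
# count are made up; real converters also rescale weights per layer.
rng = np.random.default_rng(1)
weights = rng.normal(size=(4, 8)) * 0.1
x = rng.random(8)

relu_out = np.maximum(weights @ x, 0.0)   # the original ANN layer

T = 1000                                  # simulation timesteps
drive = weights @ x                       # constant input current
potential = np.zeros(4)
spikes = np.zeros(4)
for _ in range(T):
    potential += drive                    # integrate the input
    fired = potential >= 1.0              # spike on threshold crossing
    spikes += fired
    potential[fired] -= 1.0               # "reset by subtraction"

rate = spikes / T                         # firing rate over the window
print(np.round(relu_out, 3), np.round(rate, 3))  # rates track the ReLU output
```

The catch is that accuracy depends on long time windows and careful per-layer weight scaling, so naive conversion trades latency and energy for fidelity, which is why tooling and algorithm optimization sit near the top of our list.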
Together, these principles and architectural innovations have enabled neuromorphic systems to deliver efficiency gains of up to 10–50× over GPUs in real-world workloads such as drug discovery, optimization, and neurosymbolic reasoning. As AI continues to scale, these advances point toward a more sustainable computing paradigm that is fundamentally more aligned with how the brain processes information.
If you would like to learn more about how large-scale brain-inspired supercomputers are actually built and deployed, from architectural innovations to real-world efficiency gains, read our interview with SpiNNcloud co-founder and CEO Hector A. Gonzalez.
Spotlights
🦾 Will New Processor Architectures Raise Energy Efficiency? (SemiEngineering) (12 mins)
"There are opportunities today to make well-known architectures more energy-efficient, but the number of options for substantial changes is dwindling. As a result, new architectural ideas are being considered, with some moving to commercial availability. Some are very different from what exists today, even to the point of trying to recycle energy within circuits.
But processors don't exist in a vacuum. They have broad ecosystems, including support for operating systems, coding, compilation, testing, and debugging. Would any architectural changes be compelling enough to force a large-scale infrastructure change?"
The article is well written and covers multiple possible pathways to the problem, framing them along several axes: the degree of change, the type of architecture, the balance between hardware and software focus, and the economic viability of each approach.
⚡️ Nvidia CEO: Copper to dominate for 'several years' despite photonics push (SDxCentral)
"Nvidia CEO Jensen Huang said the chip giant is working with Taiwan Semiconductor Manufacturing Company (TSMC) on silicon photonics technology, but does not expect it to be deployed soon.
'It's still several years away,' Huang told reporters from Taiwan's iNEWS in a clip posted to social media by SemiAnalysis. 'We should stay with copper for as long as we can, and then after that, if we must, we'll use silicon photonics.'
[...]
Nvidia is already working on photonic projects, with plans to launch photonic versions of its Spectrum-X networking switches featuring co-packaged optics (CPO) in 2026."
PS: We're not sure we agree.
Headlines
Last week's headlines were again dominated by quantum computing, with stories spanning new hardware deployments, advances in error correction and simulation, and quantum-AI hybrids, alongside developments in semiconductors, a photonic acquisition, cloud valuations, and AI model updates.
🦾 Semiconductors
Rivos seeking $500m in funding for chip to rival Nvidia (Data Center Dynamics)
There's no dedicated category for Thermodynamic Computing yet (and who knows if there ever will be 😄), so we've included it under semiconductors for now.
⚛️ Quantum Computing
Rigetti Computing Reports Q2 2025 Results; Announces General Availability of its 36-Qubit Multi-Chip Quantum Computer (GlobeNewswire)
University of Vienna Deploys Quantum Computer to Orbit (Quantum Zeitgeist)
Wits University Pioneers Quantum Imaging and Secure Communication (Quantum Zeitgeist)
Qubit Meet Robot: Quantum Circuits Could Speed Up Robotic Arm Calculations, Especially for Complex Movements (The Quantum Insider)
China's Pan-Jianwei Team Uses AI to Build Record-Breaking Atom Array (The Quantum Insider)
Quantum Leaders Tell FT: Quantum Computing Race Enters Final Stretch, but Scaling Challenges Still Loom (The Quantum Insider)
⚡️ Photonic / Optical Computing
Neurophos announces its acquisition of SiliconBee (LinkedIn Post)
Keep an eye out for our upcoming interview with Neurophos on the Future of Computing blog!
☁️ Cloud
🤖 AI
Readings
This week's reading list includes highlights such as high-bandwidth memory (HBM), quantum approaches to drug discovery, neuromorphic systems design, and advances in photonic chips.
🦾 Semiconductors
Physical AI Chip Sales Won't Rival GenAI Anytime Soon (SemiEngineering) (14 mins)
How a once-tiny research lab helped Nvidia become a $4 trillion company (TechCrunch) (7 mins)
Scaling the Memory Wall: The Rise and Roadmap of High Bandwidth Memory (SemiAnalysis) (40 mins)
How the IBM Research AI Hardware Center is Building Tomorrow's Processors (IBM Research) (8 mins)
Metrology Under Pressure: Detecting Defects in Fine-Pitch Hybrid Bonding (SemiEngineering) (15 mins)
⚛️ Quantum Computing
Quantum Reservoir Computing Could Give Drug Discovery a Boost – Especially When Data Is Scarce (The Quantum Insider) (6 mins)
Survey: Wide Gap Between QEC Awareness and QEC Capabilities (The Quantum Insider)
🧠 Neuromorphic Computing
Integrated algorithm and hardware design for hybrid neuromorphic systems (Nature) (58 mins)
A Bioinspired Low-Power Optoelectronic Synaptic Transistor for Artificial Visual Recognition and Multilevel Optical Storage (ACS Applied Materials & Interfaces) (3 mins)
⚡️ Photonic / Optical Computing
Hybrid Photonic-Terahertz Chip For Next-Gen Technologies (Optica) (4 mins)
Transforming Test For Co-packaged Optics (SemiEngineering) (13 mins)
New technique improves multi-photon state generation (EurekAlert!) (4 mins)
🖥️ Data Centers
Data Center Semiconductor Trends 2025: Artificial Intelligence Reshapes Compute and Memory Markets (Yole Group) (4 mins)
🤖 AI
"The amount of inference compute needed is already 100x more": How Europe's AI companies can meet demands (Sifted) (6 mins)
Funding News
Last week's funding skewed toward semiconductors, which admittedly is a broad category spanning everything from chip design to manufacturing tools. Rounds ranged from a £500K pre-seed in cloud to a $300M venture round in semiconductors.
| Amount | Name | Round | Category |
| --- | --- | --- | --- |
| £500K | | | Cloud |
| €2.3M | | | Semiconductors |
| $3M | | | Quantum |
| $8.5M | | | Semiconductors |
| $8.8M | | | Semiconductors |
| $10M | | | Semiconductors |
| $50M | | | Quantum |
| $255M | | | Photonic / Optical |
| $300M | | | Semiconductors |
Bonus: Lines Are Blurring Between Government and Industry in the U.S.
The U.S. chip industry is finding itself more entangled with federal policy than ever. Reports suggest the Trump administration is considering taking an ownership stake in Intel, while Nvidia and AMD have struck a rare deal to hand over 15% of their China AI chip revenue to the U.S. government.
Nvidia, AMD to pay U.S. government 15% of China AI chip sales in an unusual export agreement (CBS News)