🧠🦾 Inside the Machine That Thinks Like a Brain

A Newsletter for Entrepreneurs, Investors, and Computing Geeks

Happy Monday! This week’s deep dive explores neuromorphic computing: what it means for AI efficiency, how brain-inspired principles are translated into hardware, and the technical challenges still holding it back.

In our spotlights, we look at new approaches to processor architectures for energy efficiency and Jensen Huang’s blunt take on the future of photonics.

Our headlines cover another quantum-heavy week, alongside updates in semiconductors, photonics, cloud valuations, and AI model releases.

The readings section spans high-bandwidth memory, quantum-assisted drug discovery, neuromorphic system design, and advances in photonic chip technology.

More than half of last week’s funding news centers on semiconductors, with deals ranging from a Ā£500K cloud pre-seed to a $300M semiconductor round.

For this week’s bonus, we look at the deepening ties between the U.S. chip industry and federal policy.

Deep Dive: Inside the Machine That Thinks Like a Brain

As AI models grow more complex, their compute and energy demands are soaring. GPUs deliver raw performance but weren’t designed for the scale and efficiency needs of modern workloads. Neuromorphic computing offers a different path, inspired by how the brain processes information.

Brain-Inspired Principles

Neuromorphic computing draws on concepts from neuroscience, focusing on how computation is organized and how information flows through a system. These ideas include the following.

  • Massive parallelism: This means that many processors work at the same time, allowing complex tasks to be split into smaller ones that run simultaneously, greatly increasing speed and scalability.

  • Event-based communication: In this approach, components transmit information only in response to specific signals or changes, avoiding energy waste from constant, unnecessary communication.

  • Sparse activation: This means that only the parts of the system needed for a task perform calculations, keeping the rest idle to save resources.

Note: Event-based communication and sparse activation may sound similar, but they differ: event-based communication is about when and how components exchange information (communication efficiency), while sparse activation is about which parts of the system perform work (computation efficiency). The toy sketch below illustrates both.
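To make these two ideas concrete, here is a minimal, illustrative Python sketch of a toy spiking network. It is not the programming model of any real neuromorphic chip: the network size, connectivity, and constants are assumptions chosen for readability. The point is that neurons send information only as discrete spike events, and on each tick only the neurons that received an event do any work.

```python
import numpy as np

# Toy leaky integrate-and-fire network. Everything here (sizes, constants,
# connectivity) is illustrative, not the API of any real neuromorphic chip.
rng = np.random.default_rng(0)
N = 1000
# Sparse random weights: only ~5% of the N x N possible synapses exist.
weights = rng.normal(0.0, 0.5, size=(N, N)) * (rng.random((N, N)) < 0.05)
potential = np.zeros(N)            # membrane potential per neuron
THRESHOLD, LEAK = 1.0, 0.9

def step(spiking_now):
    """Advance one tick, given the indices of neurons that just spiked."""
    # Event-based communication: only spiking neurons transmit anything.
    receivers = np.unique(np.nonzero(weights[spiking_now])[1])
    # Sparse activation: only neurons that received an event do any work.
    # (Leak is applied lazily, only when a neuron is touched -- a
    # simplification common in event-driven simulation.)
    potential[receivers] = (
        potential[receivers] * LEAK
        + weights[spiking_now][:, receivers].sum(axis=0)
    )
    fired = receivers[potential[receivers] > THRESHOLD]
    potential[fired] = 0.0         # reset neurons that fired
    return fired

spikes = rng.choice(N, size=20, replace=False)   # initial input events
for t in range(10):
    spikes = step(spikes)
    print(f"tick {t}: {len(spikes)} neurons spiked")
```

On conventional hardware this indexing trick saves little, but on hardware built around events it is the difference between touching every synapse on every tick and touching only the active ones.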

Turning Brain-Inspired Ideas into Hardware

Neuromorphic computing translates concepts like parallelism, event-driven processing, and sparsity into concrete architectural choices at the chip and system level. The design features below are what make the approach so energy-efficient in practice.

  • Low-power hybrid processors: These combine different types of processing cores in a single chip to match the most suitable core to each task, improving performance while reducing energy use.

  • Asynchronous, event-driven design: In this approach, processing units operate independently and only exchange information when needed, minimizing idle power consumption.

  • Highly parallel topologies: These connect large numbers of processors so they can work simultaneously, from a single chip up to interconnected supercomputers, enabling large-scale computation at low energy cost.

Note: The design elements above put the earlier principles into action. While concepts like parallelism and event-driven processing describe what neuromorphic systems do, these architectural choices determine how the hardware is built to achieve those capabilities. The back-of-envelope sketch below shows why this pays off in energy terms.
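A quick back-of-envelope calculation illustrates the payoff. Compare one layer of 4,096 neurons with all-to-all weights on a conventional accelerator against an event-driven design where work is proportional only to the spikes that actually occur. All the numbers below (fan-out, activity rate) are assumptions chosen for illustration, not measured hardware figures.

```python
# Back-of-envelope: dense vs. event-driven work for one 4096 x 4096 layer.
# The activity and fan-out figures are illustrative assumptions, not
# measurements of any real chip.
N = 4096                 # neurons per layer
FAN_OUT = 128            # average synapses per spiking neuron (assumed)
ACTIVITY = 0.02          # fraction of neurons spiking per tick (assumed)

dense_ops = N * N                          # every weight touched every tick
event_ops = int(N * ACTIVITY) * FAN_OUT    # work only where spikes occur

print(f"dense:  {dense_ops:,} ops/tick")
print(f"event:  {event_ops:,} ops/tick")
print(f"ratio:  {dense_ops / event_ops:.0f}x fewer operations")
```

The real-world gap is smaller once overheads such as event routing and irregular memory access are counted, which is part of why reported gains sit in the 10–50Ɨ range rather than three orders of magnitude.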

Technical Challenges of Neuromorphic Computing

Neuromorphic computing still faces several technical challenges that must be addressed before it can achieve mainstream adoption and deliver on its full potential.

  • Software tooling: Developers need compilers, model converters, and debugging tools that fully leverage the hardware’s capabilities.

  • Interoperability: It must be possible to run AI models designed for GPUs or CPUs on neuromorphic hardware without extensive rewrites.

  • Algorithm optimization: AI models need to be adapted to take full advantage of sparsity and event-driven processing (a toy example of one such adaptation follows this list).

  • Memory and bandwidth: Data movement between processing units and memory must avoid bottlenecks that limit performance.

  • Standardized benchmarks: The field requires fair, widely accepted ways to compare neuromorphic systems with conventional architectures.

  • Scalable manufacturing: Chips must be produced at volume while maintaining high performance and manufacturing yield.

  • Precision and stability: Numerical accuracy and reliability must be maintained for modern AI workloads.

Note: These challenges are arranged in order of relevance from our perspective. By relevance, we mean the extent to which each issue currently limits the performance, scalability, or adoption of neuromorphic systems. Happy to hear your take on it! Just shoot me a message.
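To give the interoperability and algorithm-optimization items a concrete flavor: one common route for running a conventionally trained network on event-driven hardware is rate coding, where each activation value is approximated by a spike count over a time window. The sketch below is a minimal illustration under that assumption; real ANN-to-SNN converters also rescale weights and thresholds to control the approximation error.

```python
import numpy as np

# Minimal rate-coding sketch: approximate ReLU activations by Bernoulli
# spike trains over T time steps. Illustrative only.
rng = np.random.default_rng(1)
activations = np.maximum(rng.normal(size=8), 0)   # ReLU outputs

T = 100                                  # time window (more steps, less error)
scale = float(activations.max()) or 1.0  # guard against all-zero input
rates = activations / scale              # firing probabilities in [0, 1]
spikes = rng.random((T, 8)) < rates      # T x 8 boolean spike trains
recovered = spikes.mean(axis=0) * scale  # decode spike counts back to values

print(np.round(activations, 2))
print(np.round(recovered, 2))            # close to the original, not exact
```

Longer windows reduce the approximation error but add latency, which is exactly the kind of trade-off the algorithm-optimization challenge refers to.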

Together, these principles and architectural innovations have enabled neuromorphic systems to deliver efficiency gains of 10–50Ɨ over GPUs in real-world workloads such as drug discovery, optimization, and neurosymbolic reasoning. As AI continues to scale, these advances point toward a more sustainable computing paradigm that is fundamentally more aligned with how the brain processes information.

If you would like to learn more about how large-scale brain-inspired supercomputers are actually built and deployed, from architectural innovations to real-world efficiency gains, read our interview with SpiNNcloud co-founder and CEO Hector A. Gonzalez.

Other companies exploring neuromorphic computing include Innatera, SynSense, and Neurobus.

Spotlights

🦾 Will New Processor Architectures Raise Energy Efficiency? (SemiEngineering) (12 mins)

ā€œThere are opportunities today to make well-known architectures more energy-efficient, but the number of options for substantial changes is dwindling. As a result, new architectural ideas are being considered, with some moving to commercial availability. Some are very different from what exists today, even to the point of trying to recycle energy within circuits.

But processors don’t exist in a vacuum. They have broad ecosystems, including support for operating systems, coding, compilation, testing, and debugging. Would any architectural changes be compelling enough to force a large-scale infrastructure change?ā€

The article is well written and covers multiple possible pathways to address the problem, framing them along several axes: the degree of change, the type of architecture, the balance between hardware and software focus, and the economic viability of each approach.

ā€œNvidia CEO Jensen Huang said the chip giant is working with Taiwan Semiconductor Manufacturing Company (TSMC) on silicon photonics technology, but does not expect it to be deployed soon.

ā€œIt’s still several years away,ā€ Huang told reporters from Taiwan’s iNEWS in a clip posted to social media by SemiAnalysis. ā€œWe should stay with copper for as long as we can, and then after that, if we must, we’ll use silicon photonics.ā€

[...]

Nvidia is already working on photonic projects, with plans to launch photonic versions of its Spectrum-X networking switches featuring co-packaged optics (CPO) in 2026.ā€

PS: We’re not sure we agree.

Headlines


Last week’s headlines were again dominated by quantum computing, with stories spanning new hardware deployments, advances in error correction and simulation, and quantum-AI hybrids, alongside developments in semiconductors, a photonic acquisition, cloud valuations, and AI model updates.

🦾 Semiconductors

There’s no dedicated category for Thermodynamic Computing yet (and who knows if there ever will be 😃), so we’ve included it under semiconductors for now.

āš›ļø Quantum Computing

āš”ļø Photonic / Optical Computing

Keep an eye out for our upcoming interview with Neurophos on the Future of Computing blog!

ā˜ļø Cloud

šŸ¤– AI

Readings


This week’s reading list includes highlights such as high-bandwidth memory (HBM), quantum approaches to drug discovery, neuromorphic systems design, and advances in photonic chips.

🦾 Semiconductors

āš›ļø Quantum Computing

🧠 Neuromorphic Computing

āš”ļø Photonic / Optical Computing

Transforming Test For Co-Packaged Optics (SemiEngineering) (13 mins)

šŸ’„ Data Centers

šŸ¤– AI

Funding News


Last week’s funding skewed toward semiconductors, which admittedly is a broad category spanning everything from chip design to manufacturing tools. Rounds ranged from a Ā£500K pre-seed in cloud to a $300M venture round in semiconductors.

Amount   Name             Round             Category
Ā£500K    Computle         Pre-Seed          Cloud
€2.3M    EDGX             Seed              Semiconductors
$3M      Olee Space       Seed              Quantum
$8.5M    Mako             Seed              Semiconductors
$8.8M    Syenta           ā€œPre-Series Aā€    Semiconductors
$10M     NeoLogic         Series A          Semiconductors
$50M     Quantinuum       Series B          Quantum
$255M    Celestial AI     Series C1         Photonic / Optical
$300M    Massed Compute   Venture Round     Semiconductors

Bonus: Lines Are Blurring Between Government and Industry in the U.S.

The U.S. chip industry is finding itself more entangled with federal policy than ever. Reports suggest the Trump administration is considering taking an ownership stake in Intel, while Nvidia and AMD have struck a rare deal to hand over 15% of their China AI chip revenue to the U.S. government.

ā¤ļø Love these insights? Forward this newsletter to a friend or two. They can subscribe here.