𧬠𦾠The Computational Demands of Protein Science
A Newsletter for Entrepreneurs, Investors, and Computing Geeks
Happy Monday! Here's what's inside this week's newsletter:
Deep dive: Protein science as a new compute workload, why structure prediction is so demanding, and how GPUs and specialized hardware are being adapted.
Spotlights: Apple's new A19 chips with upgraded cooling and connectivity, and first insights from Mira Murati's Thinking Machines Lab on making large-scale AI inference reproducible.
Headlines: New semiconductor launches, IPO moves and physics breakthroughs in quantum, expansions in photonics, nuclear debates in data centers, cloud partnerships, shifting alliances in AI, and much more.
Readings: Chip bottlenecks and DRAM evolution, advances in quantum key distribution, photonic accelerators for AI, neuromorphic efficiency gains, global data center and cloud trends, and much more.
Funding news: Multi-billion rounds from PsiQuantum and Mistral, alongside early to mid-stage activity in semiconductors and photonics with most deals coming in under $60M.
Bonus: A set of new neuromorphic market forecasts, covering chips, hardware, and sensors, all pointing toward steady growth and rising relevance across AI and next-gen computing.
Deep Dive: The Computational Demands of Protein Science
An episode from the NVIDIA AI Podcast last week (From AlphaFold to MMseqs2-GPU: How AI is Accelerating Protein Science) inspired me to dig deeper into how biology is emerging as a new class of workloads that shape hardware requirements.
Predicting protein structures has become one of the most important computational problems of our time. It enables drug discovery and enzyme engineering. Yet the workloads that power this science are not only biologically complex but also some of the most compute-intensive tasks ever attempted outside of physics.
Why Protein Structure Prediction is Computationally Demanding
At the core of protein inference are usually two demanding stages: 1) generating multiple sequence alignments (MSAs), which search through massive protein databases to find and align evolutionarily related sequences, and 2) running deep learning models such as AlphaFold and OpenFold, which use this information to predict the final 3D structure. The first stage captures evolutionary context from billions of sequences, while the second relies on deep neural networks with tens to hundreds of millions of parameters.
MSA bottleneck: Generating alignments for a given protein involves searching and matching across protein databases that contain hundreds of millions to billions of sequences. Even on modern GPUs, this step can dominate runtime (up to 80%) if not carefully optimized.
Inference bottleneck: Once alignments are ready, the models themselves require enormous matrix multiplications, attention layers, and symmetry-aware geometry operations. These workloads are GPU-bound, resembling large language models in structure but with additional symmetry constraints from physics.
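To make the shape of this two-stage pipeline concrete, here is a deliberately simplified Python sketch. The functions and the similarity heuristic are toy stand-ins, not the real MMseqs2 or AlphaFold/OpenFold interfaces; they only mirror where the work goes.

```python
# Toy sketch of the two-stage pipeline (not real MMseqs2 or OpenFold code):
# stage 1 scans a sequence database to build an alignment, stage 2 stands in
# for the neural-network forward pass that turns the alignment into a structure.
from dataclasses import dataclass

@dataclass
class MSA:
    query: str
    hits: list[str]  # evolutionarily related sequences found in the database

def build_msa(query: str, database: list[str]) -> MSA:
    """Stage 1 (toy): keep database sequences that share many residue types with
    the query. Real tools use k-mer prefilters and profile alignment instead."""
    hits = [seq for seq in database if len(set(query) & set(seq)) > 10]
    return MSA(query=query, hits=hits)

def predict_structure(msa: MSA) -> list[tuple[float, float, float]]:
    """Stage 2 (toy): stand-in for an AlphaFold/OpenFold-style model that maps the
    MSA to 3D coordinates. Here it simply places one dummy point per residue."""
    return [(float(i), 0.0, 0.0) for i, _ in enumerate(msa.query)]

database = ["ACDEFGHIKLMNPQRSTVWY" * 3] * 1_000  # pretend this holds billions of sequences
msa = build_msa("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", database)
structure = predict_structure(msa)
print(f"{len(msa.hits)} hits in the MSA, {len(structure)} residues placed")
```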
In the real world, you rarely care about just one protein. Drug discovery means running predictions for thousands of candidate proteins, and other fields like enzyme engineering may require testing entire sets of designed or natural variants.
This multiplies both bottlenecks by orders of magnitude, which is why even GPU clusters start to strain.
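As a back-of-envelope illustration of that multiplication, the cost grows linearly with the number of candidates. The per-protein times below are invented purely for the example (real figures vary enormously with sequence length, database size, and hardware):

```python
# Back-of-envelope scaling, using invented per-protein costs purely for illustration.
msa_minutes = 10        # assumed stage-1 (database search) cost per protein
inference_minutes = 2   # assumed stage-2 (model forward pass) cost per protein

for n_proteins in (1, 1_000, 100_000):
    total_hours = n_proteins * (msa_minutes + inference_minutes) / 60
    print(f"{n_proteins:>7} proteins -> ~{total_hours:,.1f} GPU-hours on a single device")
```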
What Protein Workloads Demand from Hardware
Protein science workloads may look similar to AI or HPC, but they stress the hardware stack in distinct ways. Making them tractable requires processors tuned for these demands:
High-Bandwidth Memory (HBM): Protein inference involves processing large alignment matrices and ensembles of candidate structures. Storing these in HBM provides bandwidth on the order of hundreds of GBs up to several TBs per second, which helps avoid I/O bottlenecks that would stall standard GPUs (a rough back-of-envelope estimate follows after this list).
Massive Parallelism: GPUs are inherently parallel, but biology workloads push this further. Tensor operations in protein models and redesigned sequence alignment algorithms demand extreme thread-level parallelism to reach usable performance.
Multi-Instance Capability: A single GPU can be partitioned into multiple logical devices, each with dedicated compute and memory resources. This makes it possible to run several predictions in parallel on one chip, which is essential when screening thousands of proteins or processing large datasets efficiently.
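To get a rough sense of the bandwidth point above, here is a small estimate of the time needed just to stream a single pair-representation tensor through memory once. The tensor dimensions and bandwidth figures are illustrative assumptions, not measurements from any specific model or GPU:

```python
# Rough, illustrative estimate: time to stream one pair-representation tensor once.
residues = 2_000          # assumed size of a large protein or complex
channels = 128            # assumed pair-representation width
bytes_per_value = 2       # fp16
tensor_bytes = residues * residues * channels * bytes_per_value  # ~1 GB

for label, gb_per_s in [("~100 GB/s (DDR-class)", 100), ("~3 TB/s (HBM-class)", 3_000)]:
    ms = tensor_bytes / (gb_per_s * 1e9) * 1e3
    print(f"{label}: {ms:.2f} ms per pass over the tensor")
```

Since a single prediction reads and updates such tensors dozens of times across the network's blocks, the gap between the two bandwidth classes compounds quickly.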
Beyond raw hardware, software optimizations are required to unlock these features. Modern GPUs include specialized instructions, such as Tensor Core operations, which libraries expose to biology models. This hardware-software co-design is essential for symmetry-aware computations, delivering speedups while preserving accuracy.
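For a sense of what "exposing Tensor Cores to the model" looks like at the framework level, here is a minimal, generic PyTorch snippet; it is not code from OpenFold or any particular library. On recent NVIDIA GPUs, half-precision matrix multiplications like this are dispatched to Tensor Core kernels.

```python
# Minimal illustration: the half-precision matmuls that dominate attention and
# pair-update blocks map onto Tensor Cores on recent NVIDIA GPUs. Falls back to
# float32 on CPU so the snippet still runs without a GPU.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

a = torch.randn(4096, 4096, device=device, dtype=dtype)
b = torch.randn(4096, 4096, device=device, dtype=dtype)
c = a @ b  # with fp16 inputs on a CUDA device, this runs on Tensor Core kernels
print(c.shape, c.dtype, device)
```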
How NVIDIAās Latest GPU Targets Biology Workloads
NVIDIA just launched the RTX PRO 6000 Blackwell Server Edition GPU, showing up to 138× faster protein inference with MMseqs2-GPU (GPU-accelerated sequence alignment software) and OpenFold2 (open-source structure prediction model). With 96 GB of high-bandwidth memory and new software optimizations, it enables proteome-scale folding on a single server instead of a supercomputing cluster.
Further interesting sources:
On the compute side: Accelerate Protein Structure Inference Over 100x with NVIDIA RTX PRO 6000 Blackwell Server Edition (NVIDIA Developer)
On the biology side: AlphaFold - The Most Useful Thing AI Has Ever Done (Veritasium)

The NVIDIA RTX PRO 6000 Blackwell Server Edition GPU sets a new benchmark for protein structure inference. Source: NVIDIA.
Spotlights
𦾠Apple Debuts A19 and A19 Pro Processors for iPhone 17, iPhone Air and iPhone 17 Pro (Tomās Hardware)
Likely built on TSMC's N3P node, the new chips feature upgraded CPU/GPU cores with integrated Neural Accelerators for improved AI and graphics performance. The Pro models introduce a vapor-chamber cooling system, promising 20× better heat dissipation than previous designs. Apple is also deepening vertical integration with its own N1 networking chip and an updated C1X modem for more efficient connectivity.
🤖 Defeating Nondeterminism in LLM Inference (Thinking Machines)
Mira Murati's $2B-backed research lab Thinking Machines Lab, staffed with former OpenAI researchers, has published a first look into its work on reproducible AI. The blog post explores why large language models return different answers even when using deterministic settings like temperature zero, challenges the common "concurrency + floating point" explanation, and outlines how inference engines might be redesigned to achieve truly reproducible results.
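One ingredient in that debate is easy to demonstrate yourself: floating-point addition is not associative, so the order in which partial sums are combined (which can shift with batch size or parallel scheduling) perturbs results. A tiny Python illustration of my own, not taken from the post:

```python
# Floating-point addition is not associative, so the order in which partial sums
# are combined changes the result; reduction order can differ across kernels,
# batch sizes, or parallel schedules.
a, b, c = 0.1, 1e16, -1e16
print((a + b) + c)   # 0.0 -> adding a to the huge value first loses it to rounding
print(a + (b + c))   # 0.1 -> b and c cancel first, so a survives
```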
Headlines
Last weekās headlines brought new chip launches and policy reviews in semiconductors, IPO moves and physics breakthroughs in quantum, updates in photonics, nuclear debates and big build-outs in data centers, major cloud deals, and shifting alliances in AI.
𦾠Semiconductors
SiFiveās New RISC-V IP Combines Scalar, Vector and Matrix Compute to Accelerate AI (SiFive)
Axelera AI Boosts LLMs at the Edge by 2Ć with Metis M.2 Max Introduction (Axelera)
NVIDIA Unveils Rubin CPX, a New Class of GPU Designed for Massive Context Inference (NVIDIA)
Keysight Introduces Two Frequency Extenders And Calibration Kit To Extend Broadband Vector Analyzer Up To 250 GHz (Keysight)
On another note: The European Commission asked stakeholders to evaluate and review the Chips Act.
⚛️ Quantum
Infleqtion to go public through merger with Churchill Capital Corp X (The Quantum Insider)
IonQ Secures Regulatory Approval from the UK Investment Security Unit (ISU) (IonQ)
Exotic Phase of Matter Realized on a Quantum Processor (Technical University of Munich)
Physicists measure expansion of quantum wavepacket in levitated atom experiment (Phys.org)
Time Crystals Break Out of the Quantum Lab (The Quantum Insider)
CDimension Delivers Wafer-Scale 2D Materials to Accelerate Quantum Roadmap by Cutting Noise (The Quantum Insider)
⚡️ Photonic / Optical
i4Networks Deploys Nokiaās Optical Solution to Deliver Data Center Interconnection Services Across the Netherlands (Nokia)
Silicon Photonics in the Spotlight: TSMC Lifts the Curtain on COUPE at SEMICON Taiwan (TrendForce)
Jenoptik Expands Optics Manufacturing at Jena Campus (Photonics)
🗄️ Data Centers
NVIDIA and OpenAI to Spend Billions on UK Data Centers (Data Center Dynamics)
☁️ Cloud
CoreWeave launches AI startup fund; shares jump 9% (Tech in Asia)
Google Cloud Launches Free Multicloud Transfers Amid EU Data Act (TechRepublic)
NATO Communications and Information Agency Selects Oracle Cloud Infrastructure (Oracle)
🤖 AI

Overview of recent CoreWeave developments. Source: Tech in Asia (article on CoreWeave's new AI fund).
Readings
This weekās reading list covers chip bottlenecks and DRAM evolution, advances in quantum key distribution, photonic accelerators for AI inference, neuromorphic efficiency gains, and shifting dynamics in global data center and cloud markets.
𦾠Semiconductors
Huawei Ascend Production Ramp: Die Banks, TSMC Continued Production, HBM is The Bottleneck (SemiAnalysis) (22 mins)
Another Giant Leap: The Rubin CPX Specialized Accelerator & Rack (SemiAnalysis) (23 mins)
The Evolution of DRAM (SemiEngineering) (16 mins)
What Do LLMs Want From Hardware? (SemiEngineering) (16 mins)
Graphene & 2D Materials 2026-2036: Technologies, Markets, Players (IDTechEx) (9 mins)
⚛️ Quantum
Fault-tolerant Quantum Computing Achieves Exponential Speedup for Navier-Stokes Equations with Reduced Resource Overhead (Quantum Zeitgeist) (10 mins)
Twin Beams Generate 95% Correlated Random Numbers for Secure Communication (Quantum Zeitgeist) (3 mins)
Geometric Discord Enables Key Distribution and Boosts Secret Key Rates in QKD Protocols (Quantum Zeitgeist) (7 mins)
⚡️ Photonic / Optical
Seeing Is Believing: A Technical Deep Dive Into Lightmatterās Hardware (Lightmatter) (8 mins)
Analog Plus 3D Optics to Accelerate AI Inference and Combinatorial Optimization (SemiEngineering) (3 mins)
Coloring Optical Signals for More Bandwidth in Data Centers (SemiEngineering) (10 mins)
🧠 Neuromorphic
DelGrad: exact event-based gradients for training delays and weights on spiking neuromorphic hardware (Nature) (30 mins)
Alleviating the Communication Bottleneck in Neuromorphic Computing with Custom-Designed Spiking Neural Networks (MDPI) (23 mins)
Comparing Neuromorphic Computing: Efficiency Gains (PatSnap Eureka) (9 mins)
🗄️ Data Centers
Top 10: Data Centre Companies in Europe (Data Centre Magazine) (6 mins)
North America Data Center Trends H1 2025 (CBRE) (10 mins)
Data Center Monitoring Market Size and Forecast 2025 to 2034 (Precedence Research) (13 mins)
☁️ Cloud
The World's Largest Cloud Providers Ranked by Market Share (Visual Capitalist) (3 mins)

Overview of Huawei chip ecosystem. Source: SemiAnalysis.
Funding News
Last weekās rounds underscored the scale divide in computing. AI and quantum dominated the top end with PsiQuantum ($2B) and Mistral ($1.7B). At the same time, semiconductors and photonics saw a string of early to mid-stage rounds below $60M. The pattern reflects capital concentrating in a few platform bets, while component-level innovation progresses in smaller steps.
| Amount | Name | Round | Category |
|---|---|---|---|
| Undisclosed | | | Data Centers |
| €6.5M | | | Photonics |
| $8M | | | Semiconductors |
| $14M | | | Photonics |
| $20.5M | | | Semiconductors |
| $51M | | | Semiconductors |
| $58M | | | Photonics |
| $64.6M | | | Cloud |
| $200M | | | AI |
| $230M | | | Quantum |
| €600M | | | Data Centers |
| $1.7B | | | AI |
| $2B | | | Quantum |
Bonus: Neuromorphic by the Numbers
A number of fresh market reports on neuromorphic computing were published last week. Different scopes, geographies, and methodologies, but one common theme: steady growth across chips, hardware, and sensors. Treat the figures as directional, since each report uses its own assumptions.
Neuromorphic Computing Market Opportunities and Challenges 2025ā2032: Strategic Outlook (Newstrail)
Global Neuromorphic Chip Market Report 2025: Size Projected USD 11.9 Billion, CAGR of 13.73% by 2033 (OpenPR)
Neuromorphic Sensors Market to Hit USD 10.6 Billion by 2035, Driving AI, Robotics, and Next-Gen Computing (Newstrail)