🤖⚡️ Photonic Accelerators for Fully-Homomorphic Encryption and the Need for AI Inference Speed

A Newsletter for Computing Geeks, Entrepreneurs, and STEM Graduates

Optalysys: Shaping the Future of Photonic Accelerators for Fully-Homomorphic Encryption

Imagine you could process sensitive data without needing to decrypt it first and without ever revealing the actual information. 

It sounds like science fiction, but fully-homomorphic encryption (FHE), a groundbreaking form of quantum-secure cryptography, will one day allow third parties to analyze data while keeping its confidentiality intact.

One major challenge remains, however: FHE requires significantly more computation and time than processing plaintext, which makes it impractical on today’s digital processors.
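
To make "computing on encrypted data" concrete, here is a minimal sketch in Python using the classic Paillier scheme. This is our illustration, not Optalysys’ technology: Paillier is only additively homomorphic (true FHE schemes such as TFHE also support multiplication and arbitrary circuits), and the toy primes below are wildly insecure. The point is simply that the party doing the arithmetic never sees the plaintext.

```python
import math, random

# Toy Paillier encryption: multiplying two ciphertexts yields an
# encryption of the SUM of the plaintexts. Tiny primes for
# illustration only -- completely insecure. Requires Python 3.9+.
p, q = 1117, 1123              # hypothetical toy primes
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    l_val = (x - 1) // n       # Paillier's "L" function
    mu = pow(lam, -1, n)       # modular inverse
    return (l_val * mu) % n

a, b = 20, 22
c_sum = (encrypt(a) * encrypt(b)) % n2   # server side: no key, no plaintext
assert decrypt(c_sum) == a + b           # client side: recovers 42
```

Real FHE schemes pay for their generality with far larger ciphertexts and vastly more arithmetic per operation, which is exactly the performance gap Optalysys aims to close with photonics.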

Optalysys develops photonic accelerators that speed up FHE so that encrypted data can be processed at speeds similar to those of unencrypted data. Founded in 2013 by Nick New and Robert Todd, building on years of research in optical computing, the company raised a £21M Series A in summer 2023 from Lingotto, imec.xpand, and Northern Gritstone.

Learn more about the future of photonic accelerators for fully-homomorphic encryption from our interview with the co-founder and CEO, Nick New: 

If you want to work hands-on with FHE and you’re in Paris on Sep 26-28, here’s your chance: you have 48 hours to build a fully functional, privacy-preserving application with FHE.

Future of Computing News

⚛️ Kipu Quantum Team Says New Quantum Algorithm Outshines Existing Techniques (The Quantum Insider)

🤖 OpenAI releases o1, its first model with ‘reasoning’ abilities (The Verge)

🤖 Not just large language models: Introducing LLaVA V1.5 7B on GroqCloud, a cutting-edge visual model (Groq)

🤖 China’s AI models lag their U.S. counterparts by 6 to 9 months, says former head of Google China (CNBC)

🤖 Pixtral-12b-240910: Mistral’s first multimodal model (TechCrunch) (it’s already available on Hugging Face)

🦾 Nvidia and Oracle team up for Zettascale cluster: Available with up to 131,072 Blackwell GPUs (Tom’s Hardware)

🦾 Oracle is designing a data center that would be powered by three small nuclear reactors (CNBC)

🦾 European consortium kicks off THz InP project (Compound Semiconductor)

🦾 Global semiconductor sales hit $51.3bn, up 18.7% year-over-year (DCD)

🦾 Startup NetworkOcean wants to sink GPUs into San Francisco Bay: Surprises regulators who hadn’t heard about it (ArsTechnica)

Funding News

⚛️ State Farm Ventures has invested in quantum computing software company Entropica Labs, extending the company’s Series A round to a total of $5.5 million (Quantum Computing Report)

⚛️ Quantum Source Raises $50 Million Series A Funding to Make Scalable, Useful Quantum Computing A Reality (The Quantum Insider)

⚛️ Quantum Optics Jena Secures $9.4M in Funding for QKD (Photonics)

🦾 Nvidia Contributes to $160 Million Applied Digital Funding Round (PYMNTS)

Deep Dive: The Need for AI Inference Speed

When Cerebras recently launched its AI inference platform, it looked for a moment as if Groq had lost its lead in delivering the fastest LLM inference.

But as Sunny Madra from Groq’s GroqCloud team xeeted this week, they’re supposedly back on top of the list, though it’s no longer a 10x head start.

He didn’t disclose how they reached this higher speed on their private endpoint, only this much: they’re still using their current chips, fabricated on a 14 nm node. Their next-generation chips, to be fabricated on a 4 nm process, should be even faster.

Why does inference speed matter if we already get near-instant answers from smaller expert models today?

It’s not just about how fast you come up with one answer. It’s about how many answers you can generate in a given time and, thus, how many customers you can serve.

Being faster per answer is one way to increase effective throughput: the same hardware handles more queries, and AI outputs move closer to ubiquity.
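
Here is a quick back-of-envelope sketch of that argument. All numbers are invented assumptions for illustration, not Groq’s (or anyone’s) figures:

```python
# Back-of-envelope: throughput, not just latency, sets serving capacity.
tokens_per_answer = 500        # assumed average completion length
tokens_per_second = 1_250      # assumed per-stream decode speed
concurrent_streams = 100       # assumed streams a deployment sustains

seconds_per_answer = tokens_per_answer / tokens_per_second        # 0.4 s
answers_per_hour = concurrent_streams * 3600 / seconds_per_answer
print(f"{answers_per_hour:,.0f} answers/hour")                    # 900,000

# Doubling per-stream speed halves seconds_per_answer and, at the
# same concurrency, doubles the answers (and customers) served per hour.
```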

Good Reads

Want to read more about startups beyond computing and learn how the best founders build massive companies? Check out the Failory Newsletter by Nico Cerdeira

“In the hills of eastern Tennessee, a record-breaking machine called Frontier is providing scientists with unprecedented opportunities to study everything from atoms to galaxies.”

“Unlike the existing works favoring the long-context LLM over RAG, we argue that the extremely long context in LLMs suffers from a diminished focus on relevant information and leads to potential degradation in answer quality.”

“Before Shor’s factoring, quantum computers were, for what it’s worth, a science toy. After Shor, they were a money-making technology in the making - or a valuable piece of military tech at the very least. Why would the governments fund a major international project for something that could be funded by profit-oriented entities instead?”

“When executing leading-edge LLM models, most AI accelerators experience significant drops in efficiency, often to as low as 1-5%.

Latency, another crucial metric, is typically missing from AI processor specifications.

This omission arises not only because latency is highly algorithm-dependent but also due to the generally low efficiency of most processors.”

Thank you for being one of the 722 subscribers to this week's edition 🤗

If you like this newsletter, please forward it to a friend or two who can subscribe here