🤖🦾 Behind the Scenes at OpenAI

A Newsletter for Entrepreneurs, Investors, and Computing Geeks

Happy Monday! This week’s deep dive looks behind the scenes at OpenAI. In our spotlights section, we cover Meta’s massive data center ambitions and Mira Murati’s $12B AI debut. We also cover major headlines across AI, semiconductors, quantum, neuromorphic, and cloud, alongside curated readings on compute architectures, photonic innovation, and next-gen materials. As always, there’s a full roundup of funding news. And in our bonus section, we highlight a particularly dense week of scientific breakthroughs.

The Future of Computing Conference is coming to Paris on November 6. After sold-out editions in London and Berlin, we’re bringing together 200 selected founders, researchers, investors, and engineers for a focused, one-day event on computing, semiconductors, AI, quantum, and more.

Organized in partnership with iXcampus, CDL-Paris, HEC Paris, and Elaia, the conference offers in-depth discussions, curated demos, and the chance to connect with others building the future of computing in Europe. Sign up here.

Deep Dive: Behind the Scenes at OpenAI

Source: “Reflections on OpenAI” (Calvin French-Owen) (20 mins)

In one of the most insightful blog posts on OpenAI to date, engineer and Segment co-founder Calvin French-Owen reflects on his year inside the company.

Below are selected takeaways from his post, particularly relevant for anyone building teams (and therefore culture), infrastructure, large-scale software systems, or coding tools. The full post goes much deeper and is well worth your time.

Culture

Hypergrowth: OpenAI scaled from around 1,000 to 3,000 people within a year. The speed of this expansion led to internal breakdowns in reporting structures, hiring processes, and communication flows.

Bottom-Up Structure: Despite its size, OpenAI operates in a decentralized way. Especially in research, small teams often pursue ideas independently, without needing formal approval. Progress tends to come from iteration rather than top-down planning.

Meritocratic Ethos: Promotions are based on execution and contribution rather than visibility or politics. Many leaders are recognized for shipping high-impact work, even if they are not strong presenters or traditional managers.

Secrecy and Silos: Intense external scrutiny means much of the internal work is highly compartmentalized. Sensitive projects are siloed in restricted Slack channels, and access to financial and roadmap information is limited.

Fluid Teams: Team composition changes quickly. Engineers and researchers are often reassigned informally across efforts depending on immediate needs. Decisions are made quickly and new priorities can be pursued without waiting for formal planning cycles.

Not a Monolith: Different teams at OpenAI operate with different goals and mental models. Some view the company as a research lab, others as a consumer tech company, and others as an enterprise platform. These perspectives coexist and shape internal dynamics.

Code and Infrastructure

Tech Stack: Most services are built using FastAPI and Pydantic. There is no enforced style guide across the organization, which allows for speed and flexibility but results in inconsistencies across teams.
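
The post doesn’t include code, but the FastAPI + Pydantic pattern it describes looks roughly like the minimal sketch below. The endpoint and model names here are ours and purely illustrative, not OpenAI’s actual API:

```python
# Minimal sketch of a FastAPI + Pydantic service; all names are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI()

class EchoRequest(BaseModel):
    prompt: str
    max_tokens: int = Field(default=64, ge=1, le=4096)  # validated at parse time

class EchoResponse(BaseModel):
    text: str

@app.post("/v1/echo", response_model=EchoResponse)
def echo(req: EchoRequest) -> EchoResponse:
    # By the time this handler runs, Pydantic has validated and typed the body.
    return EchoResponse(text=req.prompt[: req.max_tokens])
```

Part of the appeal of this stack for fast-moving teams: the Pydantic models double as request validation and self-documenting schemas, so services stay legible even without an enforced style guide.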

Monorepo: OpenAI operates on a large Python monorepo, with additional code in Rust and Go. Code quality ranges from production-grade infrastructure to lightweight experimental notebooks.

Code Wins: Development decisions are typically made by the teams that build the systems. This leads to fast execution and high ownership, but also results in duplicate tooling, such as multiple queueing or agent libraries.

Azure-Based Infra: The entire platform runs on Azure. Only a few core services, like AKS, CosmosDB, and BlobStore, are widely relied upon by engineers. Many teams are cautious about the rest of the Azure ecosystem.
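
As a concrete illustration of the storage layer, here is a sketch using the public azure-storage-blob SDK. This is our example, not OpenAI’s code; what the post calls “BlobStore” may well be an internal wrapper, and the container and blob names below are made up:

```python
# Hypothetical Azure Blob Storage usage via the official azure-storage-blob SDK.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<your-connection-string>")
container = service.get_container_client("model-checkpoints")  # made-up container

# Upload a local checkpoint, overwriting any existing blob with the same name.
with open("checkpoint.pt", "rb") as f:
    container.upload_blob(name="run-42/checkpoint.pt", data=f, overwrite=True)

# Stream it back down later.
blob = container.get_blob_client("run-42/checkpoint.pt")
data = blob.download_blob().readall()
```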

Build In-House Philosophy: Because Azure lacks mature equivalents of services like DynamoDB or BigQuery, many infrastructure components are developed internally. There is a strong preference for building rather than buying.

Performance-Driven Scaling: Model training begins with small-scale experiments and scales up when results are promising. GPU planning is driven by latency targets rather than theoretical compute capacity. Each new model generation introduces different usage patterns, which require continuous benchmarking and system tuning.
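
To make “latency-driven GPU planning” concrete, here is a toy back-of-envelope calculation. It is entirely ours, and every number below is hypothetical:

```python
# Toy latency-driven capacity estimate; all numbers are hypothetical.
tokens_per_request = 500        # average tokens generated per request
target_latency_s = 2.0          # per-request latency target
# Decode speed each request needs in order to finish within the latency target:
per_stream_tok_s = tokens_per_request / target_latency_s        # 250 tok/s

gpu_tok_s = 4_000               # measured aggregate decode throughput per GPU
streams_per_gpu = gpu_tok_s // per_stream_tok_s                 # 16 concurrent requests

peak_requests_per_s = 800
# Little's law: requests in flight = arrival rate x time in system.
concurrent = peak_requests_per_s * target_latency_s             # 1,600 in flight

print(f"~{concurrent / streams_per_gpu:.0f} GPUs at peak")      # ~100 GPUs
```

The point mirrored from the post: the binding constraint is the latency budget and the measured throughput of a specific model on specific hardware, not nameplate compute capacity.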

Spotlights

Meta’s Data Center Ambitions

Meta is going all-in on infrastructure to power frontier AI. The company is building out two massive projects (Hyperion and Prometheus) that signal a serious push to compete with OpenAI, Google, and Anthropic not just on models, but on compute scale.

Hyperion, a new facility in Louisiana, will scale to 5 GW of compute, enough to train frontier AI models at massive scale. It marks a major step in Meta’s effort to pair infrastructure scale with top-tier talent.

Prometheus, a 1 GW supercluster in Ohio, is expected to come online in 2026. It will be one of the largest AI compute clusters ever built by a tech company, designed to support training and inference for frontier-scale models.

Mira Murati’s $12B AI Debut

“Thinking Machines Lab, the AI startup founded by OpenAI’s former chief technology officer Mira Murati, officially closed a $2 billion seed round led by Andreessen Horowitz on Monday, a company spokesperson told TechCrunch.

The deal, which includes participation from Nvidia, Accel, ServiceNow, Cisco, AMD, and Jane Street, values the startup at $12 billion, the spokesperson said.”

Headlines

This week’s headlines cover major advances in AI capabilities, shifts in semiconductor strategy and production, and new milestones in quantum and neuromorphic computing.

🤖 AI

🦾 Semiconductors

⚛️ Quantum Computing

🧠 Neuromorphic Computing

☁️ Cloud

If you’re looking for the usual photonic / optical computing updates, don’t worry. You’ll find them in the bonus section on recent breakthroughs!

Readings

This week’s reading list covers advances in semiconductor design, quantum and neuromorphic computing architectures, and the scaling challenges of next-gen data centers.

🦾 Semiconductors

⚛️ Quantum Computing

Rotonium: Shaping the Future of Photonic Quantum Edge Computing (Future of Computing) (10 mins) Yes, this is our own interview and definitely worth a read :-)

⚡️ Photonic / Optical Computing

🧠 Neuromorphic Computing

💥 Data Centers

Funding News

This week’s funding news covers a wave of early-stage quantum rounds and one AI seed round that completely resets the scale.

| Amount | Name | Round | Category |
| --- | --- | --- | --- |
| Undisclosed | Qubitcore | Pre-Seed | Quantum |
| €1.5M | Commutator Studios | Pre-Seed | Quantum |
| $2.5M | Bifrost Electronic | Seed | Quantum |
| $4.9M | BQP | Seed | Quantum |
| $21M | GigaIO | Series B | Semiconductors |
| $35M | NetBox Labs | Series B | Cloud |
| $51M | Spacelift | Series C | Cloud |
| €62M | Q.ANT | Series A | Quantum |
| €70M | Exein | Series C | Semiconductors |
| €80M | QuNorth | Non-Venture Round | Quantum |
| $2B | Thinking Machines Lab | Seed | AI |

Bonus: A Week of Breakthroughs

An unusually dense week for scientific progress spanning quantum chips, optics, materials science, and timekeeping. Here’s a roundup of the most interesting new publications and press releases:

NIST Ion Clock Sets New Record for Most Accurate Clock in the World (NIST, National Institute of Standards and Technology)

Love these insights? Forward this newsletter to a friend or two. They can subscribe here.