Interconnects AI

The Q* hypothesis: Tree-of-thoughts reasoning, process reward models, and supercharging synthetic data
Emergency special: The information we need to understand what Q* is was right in front of us, but the memes are more fun than reality.
Nov 24, 2023 • Nathan Lambert
Opus 4.6, Codex 5.3, and the post-benchmark era
On comparing models in 2026.
Feb 9, 2026 • Nathan Lambert
DeepSeek R1's recipe to replicate o1 and the future of reasoning LMs
Yes, ring the true o1 replication bells for DeepSeek R1 🔔🔔🔔. Where we go next.
Jan 21, 2025 • Nathan Lambert
Behind the curtain: what it feels like to work in AI right now (April 2023)
Fear, FOMO, and the scientific exodus driven by ChatGPT
Apr 5, 2023 • Nathan Lambert
xAI's Grok 4: The tension of frontier performance with a side of Elon favoritism
An o3-class model, the possibility of progress, chatbot beige, and the elusiveness of taste.
Jul 12, 2025 • Nathan Lambert
Claude Code Hits Different
Coding agents cross a meaningful threshold with Opus 4.5.
Jan 9, 2026 • Nathan Lambert
Burning out
The international AI industry's collective risk.
Oct 25, 2025 • Nathan Lambert
Get Good at Agents
The tools are getting so powerful that we need to change how we scope, manage, and approach our work.
Jan 21, 2026 • Nathan Lambert
DeepSeek V3 and the actual cost of training frontier AI models
The $5M figure for the last training run should not be your basis for how much frontier AI models cost.
Jan 9, 2025 • Nathan Lambert
Open models in perpetual catch-up
The open-closed gap, distillation, innovation timescales, how open models win, specialized models, what’s missing, etc.
Feb 17, 2026 • Nathan Lambert
5 Thoughts on Kimi K2 Thinking
Quick thoughts on another fantastic open model from a rapidly rising Chinese lab.
Nov 6, 2025 • Nathan Lambert
Thoughts on the job market in the age of LLMs
On standing out and finding gems.
Jan 30, 2026 • Nathan Lambert
© 2026 Interconnects AI, LLC