It’s finally here! The public (and most complete) version of my talk covering every stage of the process of building Olmo 3 Think (slides are available). I’ve been giving this talk, improving it along the way thanks to great feedback, at venues such as The Conference on Language Modeling (COLM) and The PyTorch Conference. It covers changes and new considerations at every level of the stack, from pretraining and evaluation to, of course, post-training.
Most of the talk focuses on reinforcement learning infrastructure and evaluating reasoning models, with quick comments on every training stage. I hope you enjoy it, and let us know what to improve in the future!
Chapters
00:00:00 Introduction
00:06:30 Pretraining Architecture
00:09:25 Midtraining Data
00:11:08 Long-context Necessity
00:13:04 Building SFT Data
00:20:05 Reasoning DPO Surprises
00:24:47 Scaling RL
00:41:05 Evaluation Overview
00:48:50 Evaluation Reflections
01:00:25 Conclusions
Here’s the YouTube link: