Strong agree here @Nathan Lambert. Every few weeks I find myself stopping and reflecting on how working in AI has changed over the seven-ish years I've been in this field. Everything has changed - pace, culture, volume.
I wrote about it earlier in the year: https://newsletter.victordibia.com/p/you-have-ai-fatigue-thats-why-you
I have another piece on how AI tools make it all worse: the false feeling of progress while attempting to manage 6 instances of Claude and 3 instances of Codex Pro, hoping to get 10 tasks done by lunchtime.
I should polish it up and hit publish sometime
Edit: I ended up polishing it up and publishing here https://newsletter.victordibia.com/p/upgrade-or
Reading this on a Saturday night after having spent the day catching up on post-training techniques for an industry internship while, at the same time, piling up 4-5 papers to stay relevant in my current PhD research. Yup! Nobody is forcing me, but it feels like you're either all in, or you're not going to be part of any interesting project in this area.
To build on what you say, it doesn't feel like this only for people on the teams that train these models. It extends to many other research topics touching LLMs or foundation models. However, it's also a huge privilege to be part of this group of people at this moment in time, where your job certainly doesn't feel like bullshit but is instead so much a part of the public discourse that you encounter it even outside working hours.
You won’t be able to do that much longer, friend. Your wife will want to see you some time. 😁
Thanks for bringing this up, very well observed and relatable.
Enlightening post, Nathan. The athletic team analogy suggests (to me, anyway) a way to understand a certain kind of anxiety about AI among non-players.
What worries a lot of us onlookers is that the team mentality kind of blocks much reflection over the ultimate value and purposes of the products. After all, the thing about sports is that the rules are clear. The goal is to win.
Suppose you come in one morning saying, "you know, I realized, after this year's championship there is next year's, and then the year after's. It's all kind of the same thing over and over, so who cares?" You're off the team. You've got to really care about THIS team winning under THESE rules. Among us spectators, there's some fear that in going all out to win, y'all on the AI "teams" might create something that was a joy and triumph to make but that isn't so good for the rest of us. Not out of malice or delusion but simply out of that athlete-like drive to triumph.
Yeah I think there are many ways to take the analogy. At the same time, there are many ways it fails. It's likely more like professional sports than anything I experienced in college.
Your athletic team analogy really captures something essential about the current AI landscape. The comparison to elite sports culture is apt - but as you note, there's a crucial difference: in sports, recovery and rest are recognized as essential components of peak performance, not obstacles to it.
What strikes me most is the sustainability question. Elite athletes have defined seasons, off-seasons, and structured training programs that build in recovery. In AI research, the pace feels relentless precisely because there's no natural periodization - every week brings new models, techniques, and developments that demand attention.
I wonder if the field might benefit from adopting more of the sports mindset around recovery and sustainable performance. The best athletes don't just train harder; they train smarter, recognizing that marginal gains often come from knowing when to rest. Perhaps the AI community could benefit from similar wisdom - not just working at an intense pace, but building in structures that make that intensity sustainable over the long term.
Thanks for this thoughtful reflection on what's often left unsaid in the field.
As someone who rowed at university in the UK (admittedly decades ago), your athletics analogy resonates.
You’re right that elite rowing and frontier AI both demand total focus. But the best crews I knew didn’t win because they ground themselves into dust - they won because they understood periodisation, recovery, and that marginal gains come from working “smart” not just hard. Training 20 hours a week felt manageable precisely because rest was baked into the programme. You didn’t show up to the boathouse shattered (maybe a little hungover once or twice..); you showed up ready to make the boat move better than yesterday.
And here’s the thing: winning was brilliant, but more so was the process itself. The swing when an eight clicks together. Watching your personal 2k split times drop week by week. The craft of it.
AI seems to have a unique pathology here - there's no off-season, no taper, no defined race calendar. Every week brings a new image model, quantisation method, or trade dispute that feels urgent. It's like the worst kind of attention-sucking social media.
So stop writing such interesting posts and let me focus on my catch and drive!
Wonderful piece, thank you for it.
Thank you for the piece. Was it an intentional choice to publish this on a Saturday, rather than a weekday? If so, seems apt given the thesis!
Work expands to fill available time.
When you obsess over something, your perspective also gets warped. Have you considered this is why The Curve felt the way it did vs mass consumer sentiment?
Would you mind expanding on this? Curious about what you mean (I was at The Curve this year...)
What I meant was that a lot of people don't really see this technology as terribly transformative - just another tool, at most. Based on reports from The Curve and general sentiment in the AI dev space, you would think society is ending before 2030. I am not sure where I myself fall in that, probably somewhere in between.
I also think what Nathan talks about with regard to China and open model dominance is totally unknown among consumers.
To be clear I mean mainstream individuals, a majority by some definition
The current mad rush is likely not sustainable. It will also take longer than some folks think for the tech to mature. People would do well to pace themselves and take the long view.
This is a timely piece, Nathan - there's an increasingly unsustainable link between AI accelerationism and burnout.
It’s a symptom of a wider temporal crisis, as I argue here: www.mutantfutures.substack.com/p/stuck-on-fast-forward
“It’s not like we’re really going to suddenly reach AGI and then all pack it up and go home.”
Well, isn’t that the end goal of this? Getting an AGI that is so smart it solves all problems so that none of us have to work or suffer anymore?
Perhaps the working mind, like the body, can go on with the right minimal care. But the psyche underpinning both responds to the intoxicating sense of momentum.
Every market / technology / scale-up reaches a saturation point or slowdown. People feel it sooner or later depending on where they're positioned, but eventually the war is over (or just over for you), and simple, boring policing and logistics are all that's required.
The drug of momentum is the only thing that sustains against that lack of peace of mind.
I would add that it's not like sport, as the pressure is constant in the mind of, say, someone running a global IT project. You do need an expert of the mind and seven hours of REM sleep... trust me, I have been there. I am actually experienced and now thankfully doing my own thing. ATB, Simon
There is a PhD at Stanford - I have a hunch his work connects to AI - who could help, IMO: Anthony Wagner.
https://psychology.stanford.edu/people/anthony-wagner