5 Comments
Herbie Bradley

Thanks for the thoughtful reflections. I think your description of the automation of AI R&D (RE and RS) seems reasonable, though I might be inclined to forecast slightly faster timelines: if one of the things AI is good at is automating much of the engineering/infrastructure grind, then we should expect to handle the increase in complexity well while the bottleneck shifts to idea generation and compute.

What I'm most interested in, though, is your take on the open-models discussion towards the end of your post. To play devil's advocate for what I think many Curve attendees' positions will be, I might say:

- The closed (US) frontier is ahead of the open/local/Chinese one, and the trend is stable (cf. the Epoch chart)

- The geopolitical consequences of AI, both economic (productivity gains -> economic disruption and benefits) and military (defense fine-tunes, cf. GovClaude, OpenAI for Government), primarily come from the frontier of the most capable models, because (a) empirically, most people seem to want to use the frontier even if it costs more (cf. usage of GPT-5 mini vs GPT-5), and (b) in international competition, maximising capabilities is important.

- Therefore open models shape how AI startups work (and how a small set of enterprises use AI), but do not affect geopolitics much.

Curious for your thoughts.

Nathan Lambert

Yeah, I need to dig into it; I kind of ran out of steam in writing. I'd say my take is a combination of:

* Because AI progress will be slower, there is more time for "good enough" and cheap alternatives to diffuse,

* People around the world will be using AI, and many of them won't be using American web services,

* They may actually have a different ratio of growth to misuse, where the misuse in practice is nearly impossible to stop and track, so trying to ban them wouldn't work.

The most substantive of these is the long tail of AI onboarding and soft power (rather than vulnerabilities like backdoors). Then there's also the case where AI progress DOES plateau, the leading labs crumble under the weight of their valuations, and Qwen wins. Maybe this happens (even if it's a low-probability outcome).

Eric

What’s your POV on what’s slowing down progress the most right now?

Obviously compute is a big rate-limiting factor; it seems like #1 to me (well, maybe after "idea generation", lol). Other "factors of production" that I'm not quite sure how to rank include:

* Complexity of the systems to train & host

* New training data

* Evals: this seems like a possible blocker to me because the process is either not that valuable (machine evals) or takes so long (human evals). Nonetheless, "solving" evals seems like it would unlock incremental improvements rather than step changes.

Ilia Karelin

It sounds like the demand for electricity will rise even higher and faster over the next 12 months. Very curious where this will go in terms of energy and the economy.

Seth

As an outsider, I find AGI debates confusing because people will say things like "we will automate AI engineering", but what they seem to actually mean is more like "we will witness the birth of a new God; look upon our forecast of Him and despair". And many smart people seem to genuinely not realize that these two statements are different?

Anyway, I appreciate this blog because it stays mostly agnostic on the question of AI divinity.
