14 Comments
Herbie Bradley

Thanks for the thoughtful reflections. I think your description of the automation of AI R&D (RE and RS) seems reasonable, though I might be inclined to forecast slightly faster timelines: if one of the things AI is better at is automating much of the engineering/infrastructure grind, then we should expect to handle the increase in complexity well, while the bottleneck shifts to idea generation and compute.

What I'm most interested in, though, is your take on the open models discussion towards the end of your post. To play devil's advocate for what I think many Curve attendees' positions will be, I might say:

- The closed (US) frontier is ahead of the open/local/China one, and the trend is stable (cf. the Epoch chart)

- The geopolitical consequences of AI, both economic (productivity benefits -> economic disruption and gains) and military (defense fine-tunes, cf. GovClaude, OpenAI for Government), primarily come from the frontier of most capable models, because (a) empirically most people seem to want to use the frontier even if it costs more (cf. usage of GPT-5 mini vs GPT-5) and (b) in international competition, maximising capabilities is important.

- Therefore open models shape how AI startups work (and how a small set of enterprises use AI), but do not affect geopolitics much.

Curious for your thoughts.

Nathan Lambert

Yeah, I need to dig into it, kind of ran out of steam in writing. I'd say my take is a combination of:

* Because AI progress will be slower, more time to diffuse "good enough" and cheap alternatives,

* People around the world will be using AI and many of them won't be using American web services,

* They may actually have a different ratio of growth to misuse, where the misuse in practice is nearly impossible to stop and track, so trying to ban it wouldn't work

The most substantive is the long tail of AI onboarding and soft power (rather than vulnerabilities like backdoors). Then there's also the case where AI progress DOES plateau, leading labs crumble under the weight of their valuations, and Qwen wins. Maybe this happens (even if it's a low-probability scenario)

Herbie Bradley

Nice, thanks.

- Point 1 touches on the "saturation" idea: if at some point the marginal new user's demand for AI automation/services is at a particular level of capability that both open and closed models have reached, then there's much less incentive for users to care about being at the frontier. But if AI progress will be slower, then we should expect that point to be further away, since it will take longer to reach even for the frontier closed models! The extra time means we get better infrastructure around open cheap alternatives, but this may not matter for the marginal user's choice. In some cases (average chatbot assistance), saturation may have already been reached on this metric. Yet agents are the promise for continued lack of saturation in general: if GPT-7 can automate 75% of your job and run for 6h autonomously but the best open model can automate 55% and run for 4h, there's still strong incentive to use GPT-7.

- I agree with this—it's hard to penetrate GPT-6 into the Malaysian enterprise market, especially if there exist local providers selling open model infra as a service to these enterprises. Yet local providers are also incentivized to "wrap" closed model APIs and sell *that* as a service—the local providers have an advantage with local customer relationships, so they might be sticky, especially in different languages, but they might get higher margins if they skip the complexity of doing hosting services.

- I agree trying to ban it won't work (though sensible AI policy, in my opinion, already recognizes this), but this doesn't seem to necessarily affect whether it will have a lot of geopolitical consequences? Unclear

J.C. London

I think your point about open models being "good enough" for the global consumer is not being discussed *at all* in the larger conversation surrounding AI, specifically in the context of the US's overall investment in, and frankly all-in stance on, frontier model support. It seems to me a great setup for the dot-com-ish crash that everyone can feel but no one can quite put their finger on.

afra

really appreciate this thoughtful post, nathan — and i also tremendously enjoyed your talk.

the irony you point out about china is exactly what i observed: it was arguably the most “ai insider” event i’ve attended. while policymakers were busy discussing geopolitics and chip standards, your session on “ai open source” — which didn’t even mention china explicitly — was actually the one engaging with it most directly. yet the people who SHOULD have known that weren’t in the room. that absence itself feels very revealing.

Nathan Lambert

I thought going in that my talk was going to be very full, if nothing else a good ego check for me, but yes, I think we're right

Joal Stein

I had a similar sentiment. There should have been more people grappling with the actual, existing state of AI in China than just geopolitical hypotheticals. I think attendance was more due to how close it was to lunch than any other external factor.

Nathan Lambert

yeah +1 on lunch factor lol

Dr. Marcel Moosbrugger

Great points. I specifically liked the remaining human bottleneck as a counter-argument to an intelligence explosion.

Could you clarify what you mean by "Code is on track to being solved"?

Stefan Kelly

makes me sad to feel the UK is simply off the map

Eric

What’s your POV on what’s slowing down progress the most right now?

Obviously compute is a big rate-limiting factor; this seems like #1 to me (well, maybe after "idea generation" lol). Other "factors of production" that I'm not quite sure how to rank include:

* Complexity of the systems to train & host

* New training data

* Evals — this seems like a possible blocker to me because the process is either not that valuable (machine evals) or takes so long (human evals). Nonetheless, "solving" evals seems like it would unlock incremental improvements vs. step changes

Nathan Lambert

I think most things in AI are just technically hard and don't get easier, and we are adding more complexity, which accumulates friction

Ilia Karelin

It sounds like the demand for electricity will rise even higher and faster in the next 12 months. Very curious where this will go in terms of energy and economy.

Seth

As an outsider, AGI debates are confusing because people will say things like "we will automate AI engineering," but what they seem to actually mean is more like "we will witness the birth of a new God, look upon our forecast of Him and despair." And many smart people seem to genuinely not realize that these two statements are different?

Anyway, I appreciate this blog because it stays mostly agnostic on the question of AI divinity.
