Discussion about this post

Herbie Bradley

Thanks for the thoughtful reflections. I think your description of the automation of AI R&D (RE and RS) seems reasonable, though I might be inclined to forecast slightly faster timelines: if one of the things AI is better at is automating much of the engineering/infrastructure grind, then we should expect to handle the increase in complexity well, while the bottleneck shifts to idea generation and compute.

What I'm most interested in, though, is your take on the open models discussion towards the end of your post. To play devil's advocate for what I think many Curve attendees' positions will be, I might say:

- The closed (US) frontier is ahead of the open/local/China one, and the trend is stable (cf. the Epoch chart).

- The geopolitical consequences of AI, both economic (productivity benefits -> economic disruption and benefits) and military (defense fine-tunes, cf. GovClaude, OpenAI for Government), primarily come from the frontier of the most capable models, because (a) empirically, most people seem to want to use the frontier even if it costs more (cf. usage of GPT-5 mini vs. GPT-5), and (b) in international competition, maximising capabilities is important.

- Therefore, open models shape how AI startups work (and how a small set of enterprises use AI), but they do not affect geopolitics much.

Curious for your thoughts.

afra

really appreciate this thoughtful post, nathan — and i also tremendously enjoyed your talk.

the irony you point out about china is exactly what i observed: it was arguably the most “ai insider” event i’ve attended. while policymakers were busy discussing geopolitics and chip standards, your session on “ai open source” — which didn’t even mention china explicitly — was actually the one engaging with it most directly. yet the people who SHOULD have known that weren’t in the room. that absence itself feels very revealing.

