Discussion about this post

Herbie Bradley:

Thanks for the thoughtful reflections. I think your description of the automation of AI R&D (RE and RS) seems reasonable, though I might be inclined to forecast slightly faster timelines: if one of the things AI is better at is automating much of the engineering/infrastructure grind, then we should expect to handle the increase in complexity well, while the bottleneck shifts to idea generation and compute.

What I'm most interested in, though, is your take on the open-models discussion towards the end of your post. To play devil's advocate for what I think many Curve attendees' positions will be, I might say:

- The closed (US) frontier is ahead of the open/local/Chinese one, and the trend is stable (cf. the Epoch chart).

- The geopolitical consequences of AI, both economic (productivity benefits -> economic disruption and gains) and military (defense fine-tunes, cf. GovClaude, OpenAI for Government), primarily come from the frontier of the most capable models, because (a) empirically, most people seem to want to use the frontier even if it costs more (cf. usage of GPT-5 mini vs. GPT-5), and (b) in international competition, maximising capabilities is important.

- Therefore open models shape how AI startups work (and how a small set of enterprises use AI), but do not affect geopolitics much.

Curious for your thoughts.

J.C. London:

I think your point about open models being "good enough" for the global consumer is not being discussed *at all* in the larger conversation surrounding AI, specifically in the context of the US's overall investment in, and frankly all-in stance on, frontier models. It seems to me a great setup for the dotcom-ish crash that everyone can feel but no one can quite put their finger on.
