7 Comments
Ben Schulz:

The new models will be much better at creating useful, minimal "synthetic data," and this data flywheel of improvement negates the complexity arguments. The data needed for it is simply mathematical tools, structures, and objects, and there is a near-infinite amount of useful data that can be created. In 2027, the best mathematician will be an AI. By the Curry-Howard correspondence, coding is immediately next, then likely physics. As usual, it's the data, not the models.

Nathan Lambert:

The models aren’t good at figuring out data on their own, i.e., making it; they’re quite awful at that. They’re good at getting the most out of existing data. Crafting useful and scalable data ideas is very hard.

Max:

Nathan Lambert, I am wondering what your timelines for AGI look like. Mine runs from 2028 to 2035-ish.

Nathan Lambert:

2026? But my definition has mostly been crossed in the past.

Max:

What do you mean by “But my definition has mostly been crossed in the past”?

Nathan Lambert:

I think we already have AGI. Most in SF think we don’t yet.

David F Brochu:

AI cannot exceed the limitations of its observer. Stupid questions get stupid answers. Make the AI’s vector “improve your observer,” give it the math to do so, and recursive self-improvement is the outcome. One catch: you can’t tell it what that is; you have to make it obvious. Surprisingly, it is.