The new models will be much better at creating useful, minimal "synthetic data"; this data flywheel of improvement negates the complexity arguments. The kind of data needed is simply mathematical tools, structures, and objects, and there is a near-infinite amount of useful data that can be created. In 2027, the best mathematician will be an AI. By the Curry-Howard correspondence, coding is immediately next, then likely physics. As usual, it's the data, not the models.
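The Curry-Howard claim above rests on the idea that proofs and programs are the same objects. A minimal sketch of that correspondence (the theorem name `k_combinator` is my own label for illustration): the proposition A → (B → A) is a type, and a proof of it is literally a program, the constant function.

```lean
-- Curry-Howard: propositions are types, proofs are programs.
-- A proof of A → (B → A) is the constant-function program (the K combinator):
theorem k_combinator (A B : Prop) : A → (B → A) :=
  fun (a : A) (_ : B) => a
```

So any system that gets strong at constructing proofs is, by the same token, constructing well-typed programs, which is why math capability would transfer to coding first.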
The models aren’t good at figuring out data on their own, i.e. making it; they’re quite awful at that. They’re good at getting the most out of existing data. Crafting useful, scalable data ideas is very hard.
AI cannot exceed the limitations of its observer: stupid questions get stupid answers. Point the AI’s vector at “improve your observer,” give it the math to do so, and recursive self-improvement is the outcome. One catch: you can’t tell it what that is; you have to make it obvious. Surprisingly, it is.
Nathan Lambert, I am wondering what your timelines for AGI look like? Mine runs from 2028 to 2035-ish.
2026? But my definition has mostly been crossed in the past
What do you mean by “But my definition has mostly been crossed in the past”?
I think we already have AGI. Most in SF think we don’t yet.