Discussion about this post

Sarah Marzen

There are some papers showing that when models are trained on AI slop, which people are now producing and posting every day, they develop some weird version of Mad Cow disease and die. That feels like lossy self-improvement more than anything. Also, I'm not a huge fan of the word "lossy" here, because it has such a history in information theory that has nothing to do with this post.
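The "Mad Cow" papers the comment alludes to describe model collapse under recursive training on model outputs. As a toy illustration (my sketch, not from the comment, with a Gaussian refit standing in for retraining), each generation fits a model to a finite sample drawn from the previous generation, and the lossy refit tends to shrink the distribution over generations:

```python
import random
import statistics

# Toy sketch of recursive-training "model collapse" (the Mad Cow analogy):
# each generation refits a Gaussian to a finite sample drawn from the
# previous generation's model. Each refit is lossy, so the variance
# follows a downward-drifting random walk and typically collapses.
random.seed(0)
mu, sigma = 0.0, 1.0              # generation-0 "model"
variances = [sigma ** 2]
for _ in range(200):
    # finite "training set" generated by the current model
    samples = [random.gauss(mu, sigma) for _ in range(20)]
    mu = statistics.fmean(samples)     # refit on the model's own output
    sigma = statistics.stdev(samples)
    variances.append(sigma ** 2)

print(f"variance: gen 0 = {variances[0]:.3f}, gen 200 = {variances[-1]:.3f}")
```

The point of the sketch is only the feedback loop: once the training data is the model's own output, estimation error compounds instead of averaging out.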

Ben Schulz

The new models will be much better at creating useful, minimal "synthetic data"; this data flywheel of improvement negates the complexity arguments. The type of data needed is simply mathematical tools, structures, and objects, and there is a near-infinite amount of useful data that can be created. In 2027, the best mathematician will be an AI. The Curry-Howard correspondence means coding is immediately next, then likely physics. As usual, it's the data... not the models.
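The Curry-Howard appeal here is that proofs and programs are the same objects, so mathematical ability would transfer to coding. A minimal Lean sketch of that identification (my illustration, not from the comment):

```lean
-- Curry–Howard: a proof of A → (B → A) is literally a program,
-- here the constant-function combinator K.
theorem k {A B : Prop} : A → (B → A) := fun a _ => a

-- And modus ponens, A → (A → B) → B, is just function application.
theorem mp {A B : Prop} : A → (A → B) → B := fun a f => f a
```

Whether this correspondence implies the transfer the comment predicts is, of course, the contested part.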

