Discussion about this post

Mark Hofmeijer:

I believe AGI is a milestone that can only be labeled in hindsight.

Looking back, historians will be able to pinpoint the moment when it turned out to be just a matter of scaling up. When that day arrives, it will glide by as 'just another day'.

Abhishek:

Geoff Hinton offers a way to reconcile both of your positions. When recently asked about the most important problem in AI (other than safety), he emphasized that training and inference shouldn't be viewed as distinct categories; in his view, they are parts of the same continuous process. As Dwarkesh describes it, continual learning represents the training/fine-tuning aspect: weight adjustments and long-term model adaptation. Meanwhile, your use of "memory" aligns more with in-context learning, which adapts at inference time using context and prompts. In Hinton's framework, "learning" spans both approaches, and as algorithms and architectures evolve, both fine-tuning and dynamic context-based learning will remain integral to truly intelligent systems. I am currently working on an interesting way to fuse gradient descent with context engineering/tool usage.
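The contrast the comment draws can be made concrete with a toy sketch. This is purely illustrative (the function names and the nearest-neighbour stand-in for in-context learning are mine, not from the comment): one path adapts by changing weights via gradient descent; the other leaves weights fixed and adapts only through examples kept in context.

```python
def gradient_step(w, x, y, lr=0.1):
    """One SGD step for a 1-D linear model y ~ w * x (squared loss).
    'Continual learning': the model's weights themselves change."""
    grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
    return w - lr * grad

def in_context_predict(context, x):
    """'In-context learning' stand-in: predict from examples held in
    context (1-nearest-neighbour here), with no weight update at all."""
    nearest = min(context, key=lambda ex: abs(ex[0] - x))
    return nearest[1]

# Weight-update path: the model adapts toward the true slope (2).
w = 0.0
for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:
    w = gradient_step(w, x, y)

# Context path: the model is frozen; only the context grows.
context = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
pred = in_context_predict(context, 2.1)
```

In Hinton's framing as described above, both paths count as "learning"; they differ only in where the adaptation lives (weights vs. context), which is what makes fusing them an interesting direction.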

