Apr 26 · Liked by Nathan Lambert

Of all the definitions, I like The Modern Turing Test (Suleyman) the least. Intuitively, it seems the most prone to "reward hacking": a model can just discover something that is trivial for it but super hard for humans (like flash trading).

All the other reasonable definitions do seem to require embodiment. But consider this thought experiment: imagine a human brain completely separated from its body, but still alive and able to communicate with us. That is still AGI, because the brain implements human intelligence, but would we be able to tell? Intuitively, there seems to be a connection with Gödel's first incompleteness theorem: will NGI (us) be able to identify AGI?


Great points, really. This further cements my thinking around it — so many angles to this largely social problem.

Apr 24 · Liked by Nathan Lambert

The real AGI is the friends we made along the way
