I do agree with the sentiment here, and perhaps the clarification is that what LLMs do does not resemble "human reasoning," given the outcomes (including what o3 did poorly on in the AGI test), even if, as this article says, human reasoning is opaque to us too.
Completely dismissing the impressiveness of what some of this CoT "reasoning" does goes too far, I think. Still, it's hard to keep barreling through AI hype with ever more evocative language about what these models do, without much precision or agreement on what the terms mean.
Thanks for sharing your presentation! Definitely helpful and an interesting overview of the topic.