Ben Goertzel on Whether Current AIs can Reason Adequately
Can current AI really reason – or are large language models just clever parrots, skipping the “understanding” step humans rely on?
In this interview, Ben Goertzel (founder of SingularityNET and OpenCog) digs into one of the most fascinating debates in AI today: he argues that there is a big difference between appearing to reason and actually building the abstract representations that reasoning requires. We talk about Google Gemini’s gold medal at the International Mathematical Olympiad, the paper The Illusion of Thinking, and whether LLMs can ever reach true reasoning ability.
We also discuss whether reasoning is required for morality, whether AI will ever win a Moral Olympiad, and, if it does, whether that would be a reliable signal that the AI is actually moral.
Finally, we discuss whether AI may converge on new kinds of reasoning and logic, and whether the ontology of the universe determines the kind of logic that arises within it.
Along the way we explore:
- Why humans learn strategy differently from Deep Blue or AlphaZero
- The link between reasoning, creativity, and Maggie Boden’s ideas on AI imagination
- What’s missing in current AI when it comes to abstract representation
- Searle’s Chinese Room and whether LLMs “understand” anything
- What AGI means, and Sam Altman’s redefinition of the term
- Whether AI could one day win a “Moral Olympiad” — or if ethical reasoning requires grounding in empathy
Quotes from the interview
“The way Deep Blue or AlphaZero are playing chess, they are not learning general strategic principles, they are learning highly particular patterns about that particular game. We are not as good at chess or Go as they are, but we are extracting more generalisable knowledge when we learn to play chess or [Go] than these other algorithms are.”
“Could you come up with a different Go algorithm that combined the best of AlphaGo with the best of the human mind, and abstracted general strategic principles as well as a humongous library of very specific Go-playing patterns – quite possibly you could.”
“A human cannot very easily turn knowledge into a bunch of text depicting an abstract representation of that knowledge without actually constructing the abstract representation of that knowledge in its mind, right? An LLM can go straight from the knowledge to some text depicting an abstract representation of that knowledge without ever having the [actual] abstract representation of that knowledge inside its RAM state – because it’s learning from a set of ordered pairs… so it’s making a leap from the knowledge to the textual depiction of the abstract representation, right?”
