Future Day Singularity Salon with Ben Goertzel & Hugo de Garis
For Future Day we hosted an electrifying discussion between two of the most thought-provoking minds in AI and the Singularity: Ben Goertzel and Hugo de Garis. Convened by Adam Ford, the conversation explored the accelerating trajectory of Artificial General Intelligence (AGI), its promises and perils, and the political and social ramifications of AGI drawing near, with special attention to Hugo de Garis’ framing in his Artilect War writings and how things are playing out in the world today.
They discuss AGI timelines: de Garis thinks AGI is within a decade, while Goertzel thinks it could be as little as 3 years away (a longer horizon than some leaders of big AI companies predict). They also consider how we’ll know when we’ve reached the singularity, suggesting it will be evident when machines demonstrate a clear understanding of their actions, solve problems better than the smartest people, and can perform all human jobs.
Hugo de Garis expresses cynicism about aligning super-intelligent machines with human values; both question whether complete alignment is even possible or necessary. Ben Goertzel suggests focusing instead on instilling basic compassion in AI.
Political and Social Implications:
- De Garis and Goertzel discuss the potential for AI development to be driven by nationalistic competition, with countries vying to gain an advantage.
- They explore the possibility of social unrest if the benefits of AI are not distributed equitably, potentially leading to terrorism in developing nations.
- Goertzel suggests that early-stage AGI might provide enough “goodies” (e.g., advanced medicine, technology, VR-porn) to mitigate potential opposition.
Goertzel and I both think that the early stages of the singularity could be dangerous – but that there is a good chance AI will naturally evolve into a positive force in the long term. When I prompted them about survival strategies for the early stages of the singularity, Goertzel suggested that agility of mind and the ability to adapt to rapid change will be crucial for navigating the transition period. He emphasises the importance of building compassionate values into the seed AGI to guide its development. We all agree that global cooperation and a focus on the well-being of the entire species would be a sound approach to navigating the singularity.
Goertzel and de Garis – long-time friends and former colleagues – have spent decades at the frontier of AI research, grappling with deep philosophical and technical questions about intelligence, consciousness, and the fate of humanity in a world shaped by exponentially advancing technology. While Goertzel champions a decentralized, open-ended approach to AGI development, de Garis has warned of a future where an intelligence explosion could divide humanity into those who embrace godlike machine minds and those who resist.
It was a lively, insightful, and sometimes contentious dialogue as these two visionaries dissected the latest advancements, revisited old debates, and offered perspectives on the road ahead.
How close are we to AGI? As little as 3 years according to Ben Goertzel.
What role should ethics and alignment play? Is the Singularity an inevitable destiny or a preventable risk?
If you missed this special Future Day 2025 event, it’s on YouTube now.
Future Day—where the future isn’t just discussed, but actively shaped.

I’ve been pondering the anticipated future singularity and wanted to offer a comment: How might it transform our ability to tackle past accidents—perhaps by enabling time travel or sending critical information back in time to prevent or soften harmful consequences? Could it even open a path to restoring the presence of loved ones we’ve lost? How would an artificial superintelligence (ASI) manage the potential paradoxes of time travel? What level of human intelligence equivalent (H)—such as 1,000H, 10,000H, or higher—would an ASI need to master these capabilities? And, while predictions are inherently uncertain, is there any rough timeline for these breakthroughs? Are they relegated to a distant future, or could exponential, super-exponential, or even hyperbolic progress bring them closer than we expect?