Debate – Ben Goertzel & Hugo de Garis – The Artilect War: Inevitable Conflict or Aesthetic Survival?
As part of Future Day 2026, we are hosting a conversation that makes standard geopolitical tensions look like a playground dispute. We’re bringing together two of the most provocative minds in AGI – Ben Goertzel and Hugo de Garis (with Adam Ford as moderator/provocateur) – to tackle the ultimate existential question: Is an Artilect War inevitable, and should humanity accept becoming the “number two” species?
The discussion will build upon last year's conversation between Ben and Hugo on AGI and the Singularity.
It will explore the idea of human transcendence. If we can’t beat them, do we join them?
Will humanity transcend into a Jupiter brain quectotech utility fog?
Is the Artilect War the inevitable conclusion of biological intelligence? Or can we find a path toward existing in a universe that still finds us aesthetically pleasing?

Topics
Human Hubris?
A reality check on AI alignment: while we might successfully align the first generation of AGIs, that generation may design its own successors.
- The Hubris Argument: To assume human engineers can ensure friendliness through the Nth generation of self-improving machines is “human hubris to the Nth degree.”
- The Power Gap: Once “Artilects” (artificial intellects) become trillions of times smarter than us, they will pursue their own goals. Whether those goals include us is entirely up to them.
Humans as “Butterflies”?
While the technical difficulty of long-term alignment may be huge, perhaps there is a more "aesthetic" outlook: super-AGIs might not wipe us out, for the same reason we don't pave over every single forest. Just as we place some value on chimps and trees and butterflies, future super-AGIs may place some aesthetic and moral value on humans.
It’s a humbling thought – our survival might depend on being interesting enough to keep around as a protected species.
Geopolitics vs. Cosmism
The “why” and “how” of AGI development.
- The Short-Term Race: Are we being driven by grand "Cosmist" dreams of reaching the stars, or by raw arms-race dynamics ("US vs. China" and the like)? Short-term opportunists are currently winning over the "Luddites" – will this remain true?
- The Clampdown: Will a "bad incident" involving AGI eventually lead to a global clampdown by major powers?
- AI as a Great Filter: Will the rise of the Artilect be an impassable layer in the Great Filter?
What will AGI or a Mature Superintelligence want?
There are a lot of moving parts to consider when thinking about what AI may want.
Will geopolitics, or ideologies like Cosmism, Luddism, or cyborgism, affect the dynamics of what AI wants and what goals it has earlier and later in its development?
Will the AI architecture that bootstraps the Singularity strongly affect what AI will want, both early on and later?
Will what AGI wants be completely arbitrary?
Are there dangers of AGI locking in some early value/goal system and then fighting tooth and nail to protect it?¹
How confident should we be about the nature of what AGI will want?
Is there structure to the landscape of values from which a superintelligence may select?
Footnotes
- See section 2.2, "Goal-content integrity", in Bostrom's "The Superintelligent Will" (pdf), and my thoughts on value content integrity ↩︎