Nick Bostrom: Failure to Develop Superintelligence Would Be a Catastrophe
Nick Bostrom dropped a bombshell that cuts through some of the haze around AI doom-and-gloom: “I think ultimately this transition to the superintelligence era is one we should do.” That’s right—should. Not “might” or “could,” but a firm nudge toward a future where humanity doesn’t just survive AI but thrives with it. Even more striking? He doubles down: “It would be in itself an existential catastrophe if we forever failed to develop superintelligence.”
Let that sink in – Bostrom, often painted as a brooding prophet of AI peril, isn’t clutching pearls here. He’s arguing that stalling out—never cracking the superintelligence code—would be its own disaster. A world stuck in the slow lane, missing out on what could be our greatest leap forward. It’s a bold take, and it flips the script on the “Bostrom-as-doomer” stereotype.
But don’t get it twisted—he’s not popping champagne just yet. In the full interview, Bostrom soberly acknowledges the flip side: building superintelligence comes with existential risks that could derail everything if we’re sloppy or reckless. There is no guarantee we will reach a ‘Deep Utopia’. He’s no Pollyanna; he’s a realist. The path to an existentially secure era of beneficial superintelligence isn’t a blueprint etched in stone—it’s a messy, adaptive trek. “We’ll have to feel our way through this and make adjustments as we go along,” he says, shrugging off rigid plans for a more nimble, eyes-open approach.
It would be in itself an existential catastrophe if we forever failed to develop superintelligence.
Nick Bostrom, in interview with Adam Ford, 2025
Superintelligence Is Our Destiny—If We Don’t Mess It Up
So, where does that leave us? Bostrom’s not cheerleading unchecked AI chaos or preaching inevitable collapse. He’s saying superintelligence is a prize worth chasing—humanity’s shot at transcendence—but only if we navigate the gauntlet of risks with our wits intact. Catastrophe isn’t baked in; it’s just one possible fork in the road. The real tragedy? Never taking the journey at all.
Bostrom articulates a pragmatic, almost exploratory approach to AI development. Rather than advocating for a rigid, pre-set blueprint, he suggests we must “feel our way through this and make adjustments as we go along.” This perspective, paired with his acknowledgment of superintelligence as both an imperative and a potential existential risk, offers a nuanced lens on humanity’s path forward.
I think ultimately this transition to the superintelligence era is one we should do.
Nick Bostrom, in interview with Adam Ford, 2025
In the full interview, Bostrom does go on to say that developing superintelligence is associated with significant existential risks.
Advocating for value pluralism, Nick later says: “I’m mostly thinking on the margin of, is there like little things here or there you can do that seems constructive and that improve the chances of a broadly cooperative future where a lot of different values can be respected.”
Transcript
“I think broadly speaking with AI, … rather than coming up with a detailed plan and blueprint in advance, we’ll have to kind of feel our way through this and make adjustments as we go along as new opportunities come into view. And particularly with respect to the more, I don’t know, political aspects of what it makes sense to push for and which AI developer to favour and which not and whether to get governments more involved or less involved. I feel that it’s hard to have like a firm fixed opinion on that. It feels more like something that needs to be navigated rather than designed… I think ultimately this transition to the superintelligence era is one we should do. It would be in itself an existential catastrophe if we forever failed to develop superintelligence.” – Nick Bostrom, in interview with Adam Ford, 2025