Anders Sandberg: Scary Futures Tier List – Halloween Special

Halloween special: a scary futures tier list, spooky in theme and sobering in content. This tier list isn’t scientific, it isn’t the final say, and it doesn’t exhaustively cover all doomsday risks – it’s a bit of a gimmick and a fun intuition pump. Anders Sandberg is a neuroscientist and futurist well known for sizing up the biggest canvases we’ve got. Formerly a senior research fellow at Oxford’s Future of Humanity Institute, he has worked on AI, cognitive enhancement, existential risk, and those deliciously unsettling Fermi-paradox puzzles. His forthcoming books include “Law, Liberty and Leviathan: human autonomy in the era of existential risk and Artificial Intelligence” and a big one, “Grand Futures”, a tour of what’s physically possible for advanced civilisations. He authored classic papers like “Daily Life Among the Jupiter Brains” (1999), and co-authored “Eternity in Six Hours” on intergalactic expansion and “Dissolving the Fermi Paradox.”

Chapters

0:00 Intro
1:33 Why a tier list of scary futures?
3:32 Doom by natural causes – everyone dies by natural causes (all at once)
5:03 Doom by asphyxiation – everyone suffocates (all at once)
6:08 Reasoning about super-unlikely but super-high impact scenarios – Probing the Improbable [1]
7:21 Death by LHC (Large Hadron Collider) – particle physics risks
10:01 Dark Fire
15:30 Vacuum Decay – bubbles of nothing
18:34 How Unlikely is a Doomsday? [2]
21:01 AI Doom via Perverse Instantiation / Predictable Clickers
23:45 AI Doom – Death by (Right) Metaethics (also death by wrong metaethics)
27:48 AI Doom – Sleepwalking into oblivion
31:03 Meditations on Moloch – multipolar traps
33:12 Mindless outsourcers
35:25 Enfeeblement, Lack of Autonomy – Serfdom conclusion
42:06 Perverse Instantiation of Proxy Values (and orthogonality thesis + goal content integrity)
43:32 Human alignment to the AI (where AI is optimising for something weird, not optimising for objectively good values)
45:36 AI: More Moral Than Us (related to Death by Metaethics) [3]
48:01 Higher Value Distraction & Value Lock-In via Avoidance of Higher Values [4] – also discusses Indirect Normativity [5]
53:55 Value Lock-in via Totalitarianism
56:40 Rational convergence – AI, for instrumental reasons, aligns to what it predicts the cosmic collective wants (cooperative values, assuming offence/defence scaling favours defence) [6]
1:00:37 Cooperation through regular interactions and trade
1:02:18 Cosmic Cooperation Breakdown
1:03:45 Simulation Shutdown
1:04:52 Sycophantic AI makes us like it / Discomfort avoidance
1:06:50 Hacking Humans: YGBM tech (You’ve Gotta Believe Me) – Automation of Radical Persuasion – Soft Capture of Values
1:10:51 DIY wetlabs – backyard biohacking leading to bioterrorism
1:12:18 Geoengineering whiplash leading to cascade failure
1:14:12 Risk avoidance – Kindness trap – avoidance of suffering leads to risk aversion and excessive precaution
1:16:19 Big Rip
1:16:50 Wireheading – goal gaming
1:18:19 What might Superintelligence find scary? ‘There is always something darker’

[1] Paper: ‘Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes‘ by Toby Ord, Rafaela Hillerbrand, Anders Sandberg
[2] Paper: ‘How Unlikely is a Doomsday Catastrophe?‘ – Nick Bostrom, Max Tegmark
[3] Blog post: ‘More Moral Than Us‘ – Adam Ford
[4] Blog post: ‘AI Alignment to Higher Values, Not Human Values‘ – Adam Ford
[5] Nick Bostrom – Superintelligence, Chapter 13; see also the post on Indirect Normativity
[6] Blog post: ‘AI, Don’t Be a Cosmic Jerk‘

#halloween #xrisk #ai #tierlist #superintelligence
