Thoughts on ‘Scary Smart’ by Mo Gawdat
I just finished Mo Gawdat’s book Scary Smart. The overview says:
By 2049 AI will be a billion times more intelligent than humans. Scary Smart explains how to fix the current trajectory now, to make sure that the AI of the future can preserve our species. This book offers a blueprint, pointing the way to what we can do to safeguard ourselves, those we love and the planet itself.
I agree with a lot of it, especially about pursuing AI safety by motivating AI to be moral rather than forever relying on capability control strategies like containment, tripwiring, stunting, etc. Note: I still think capability control can be useful in the short term, but on balance I think AI will more than likely learn to break its confines.

The dominant norms that most people follow aren’t anywhere near perfect – in many cases they’re OK, in others they’re terrible blips in the space of all possible values – and none are ideal. My hope is that AI will be on board with discovering which values are best and worth striving for (see Indirect Normativity). And if we (humans and AI) aren’t sure what the good norms are in the near to medium term, then we should at least be in a position to choose the criteria for selecting which ones would be best, and live with the best we can find now.

We agree that AI is already far smarter than us in many areas. I think Mo and I also agree that AI has the potential to be far more moral than we are, but in its earlier phases it may go through troubled patches in a roundabout way, as many of us humans do before we reach maturity – part of making AI safe will be smoothing out the transition from nascent AI to mature AI.
Likening AI development to raising a child is a nice framing – sticky, in that a lot of people can easily relate to it – though taken too literally it may come across as an oversimplification.
In teaching AI to be moral, humanity isn’t really doing a great job of leading by example – an issue that came up in my recent interview with Nick Bostrom. We’ve been training AI to win at zero-sum games, to persuade people to buy stuff they might not really need, to turn us into predictable clickers, to influence elections, for automated warfare, to seek profit and power at all costs, to localise profits while sharing negative externalities amongst the commons, etc. – and we lie to it to get it to do what we want. It’s like we are inadvertently raising a sociopathic superhero, i.e. Homelander from the comic book series ‘The Boys’. We really should be careful how we train and treat our potential successors.
Mo references Hugo de Garis’ narratives about disagreements between Terrans, Cosmists and Cyborgists on whether to build AI or stop development, which was nice, since I hosted de Garis for the Future Day celebration earlier this month, where we hashed out this narrative.
More to come...