Nick Bostrom – AI Ethics – From Utility Maximisation to Humility & Cooperation

Norm-sensitive cooperation might beat brute-force optimisation when aligning AI with the complex layers of human value – and possibly cosmic ones too.

Nick Bostrom suggests that AI systems designed with humility and a cooperative orientation are more likely to navigate the complex web of human and potentially cosmic norms than those driven by rigid utility maximisation. Such AI would be better equipped to harmonise diverse obligations—ranging from familial and communal to global and even speculative cosmic expectations—thereby fostering alignment with a broader spectrum of values.

Bostrom here hints at his broader work on the concept of a “cosmic host,” in which he posits that our civilisation may be part of a larger cosmic community with its own set of norms. In this context, it could be crucial to create superintelligence that “becomes a good cosmic citizen—i.e. conforms to cosmic norms and contributes positively to the cosmopolis. An exclusive focus on promoting the welfare of the human species and other terrestrial beings, or an insistence that our own norms must at all cost prevail, may be objectionable and unwise… An attitude of humility may be more appropriate.”

If we aim to develop artificial intelligence that can navigate the intricate web of human, societal, and potentially cosmic norms, should we prioritise designing systems that rigidly maximise utility over those that embody humility and cooperative tendencies?

Transcript

“…You might have an obligation to your brother at the same time as, like, your community wants you to do something else, and maybe, like, global norms that want something different from what your community wants. And it might just be that all of these are having some normative pull on you. And then there might be the set of cosmic norms that is an additional elastic string that you have reason to try to adhere to.

So I don’t know exactly what that would translate to in terms of exactly how we train AIs today. But I think, to the extent that we’re steering towards developing a certain kind of AI, it might be that a sort of hard-optimising, utility-maximising AI with a resource-hungry utility function would be less likely to conform to these norms than some AI that was, in some sense, a little bit more humble, or with a bias towards cooperation.”

Nick Bostrom in interview with Adam Ford, 2025
