
Will Superintelligence Be Motivated Solely by Brute Self-Interest?

Is self-interest by necessity the default motivation for all agents?
Will Superintelligence necessarily become a narcissistic utility monster?

No, self-interest is not necessarily the default motivation of all agents. While self-interest is a common and often foundational drive in many agents (biological or artificial), other motivations can and do arise, either naturally or through design, depending on the nature and context of the agent.

I argue that self-interest is naturally one of the first strategies that evolution stumbles on. It seems to come naturally with the idea of being an agent, yet it was around before agents were: it is rather straightforward, and doesn’t need as much complex machinery to get off the ground as other survival strategies do. It occurred naturally in the earliest ‘naked molecules’. While the nature of the early replicators is speculative, it’s generally thought that they were basic and inherently self-serving, driven by the physics and chemistry of replication. These molecules were “interested” in copying themselves, even if that interest was not conscious or agentic.
It took billions of years for evolution to progress from basic simple replicators (RNA-like) to lipid cell membranes (~4 billion years ago), which allowed for more complexity (see the Spiegelman’s Monster hypothesis: cell membranes may have afforded ‘experimentation’ with the longer RNA-like chains of nucleotides required for the more complex machinery behind the basic building blocks of life), then on to suppressor genes that combated chaotic jumping genes, and far later to single cells cooperating to form multicellular life. Perhaps if nature were intelligent and sentient, it would go straight to complex agents that cooperate!
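
To make the Spiegelman’s Monster dynamic concrete, here is a minimal toy simulation; it is my own illustration, not something from the book, and the replication rates and pool size are invented. In a shared, resource-limited pool, a modest copying-speed advantage lets short replicators crowd out longer, information-rich ones, whereas a compartment containing only long replicators lets them persist.

```python
# Toy model of the Spiegelman's Monster dynamic. All rates and sizes are
# invented for illustration; this is not a model from Ronn's book.

def long_fraction(short, long_, r_short=2.0, r_long=1.8,
                  generations=100, total=1_000_000.0):
    """Final fraction of a resource-limited pool held by long replicators,
    after repeated rounds of growth followed by dilution back to a fixed total."""
    for _ in range(generations):
        short *= r_short                  # short genomes copy faster...
        long_ *= r_long                   # ...long genomes copy slower but carry more information
        scale = total / (short + long_)   # finite resources: dilute back to the cap
        short, long_ = short * scale, long_ * scale
    return long_ / total

# Well-mixed pool: a modest speed advantage lets short replicators take over.
print(f"shared pool:       long fraction = {long_fraction(short=500_000, long_=500_000):.2e}")

# Crude stand-in for a membrane: a compartment seeded only with long replicators
# never faces the faster copier, so the longer genome persists there.
print(f"sealed 'membrane': long fraction = {long_fraction(short=0, long_=1_000_000):.2f}")
```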

In living organisms, self-interest often arises from the evolutionary imperative to survive and reproduce. However, this doesn’t mean all actions are purely self-interested. Altruism, empathy, and cooperation can evolve when they provide indirect benefits to survival, such as strengthening group cohesion or reciprocal aid.
To the degree that existential security is achieved, survival instincts may not loom as large, leaving more room for idealistic pursuits, like maximizing well-being across populations.

Here is an excerpt from Kristian Ronn’s book ‘The Darwinian Trap’:

Timeline: 4.5 billion years ago
Unit of Selection: Simple molecules
Defective Adaptation: Asphaltization. In this process, simple organic molecules such as amino acids are unable to assemble without unwanted by-products, possibly blocking complex molecules such as RNA from evolving.
Cooperative Adaptation: The Reverse Krebs Cycle. This proposed model explains how simple organic molecules could have combined to form a stable cycle of chemical reactions, avoiding any “asphalt” by-products.

Timeline: 4 billion years ago
Unit of Selection: RNA molecules
Defective Adaptation: Spiegelman’s Monster. Short RNA molecules replicate faster, thus outcompeting RNA molecules that encode useful genetic information, blocking the existence of complex genomes needed for life.
Cooperative Adaptation: Cell membrane. Likely a lipid membrane, the cell membrane created a sanctuary where more complex RNA molecules could thrive, without being outcompeted by faster replicators.

Timeline: 3.5 billion years ago
Unit of Selection: Genes
Defective Adaptation: Selfish genetic elements. These genes cut themselves out of one spot in the genome and insert themselves into another, ensuring their continued existence even as it disrupts the organism at large.
Cooperative Adaptation: Suppressor elements. These genes suppress or police selfish genetic elements such as jumping genes.

Timeline: 3 billion years ago
Unit of Selection: Prokaryotes
Defective Adaptation: Viruses. This rogue genetic material tricks a cell into replicating more copies of itself while harming the host organism.
Cooperative Adaptation: CRISPR system. This is the viral immune system inside cells, cutting away unwanted virus RNA before replicating.

Timeline: 1.6 billion years ago
Unit of Selection: Multicelled organisms
Defective Adaptation: Cancer. Cells divide frantically against the interest of the cell colony, which may have blocked the evolution of multicellularity.
Cooperative Adaptation: Cancer immune system. It inhibits cell proliferation, regulates cell death, and ensures division of labor.

Timeline: 150 million years ago
Unit of Selection: Groups
Defective Adaptation: Selfish behavior. Competition for resources among individuals from the same species blocks the formation of more advanced groups.
Cooperative Adaptation: Eusociality. Cooperative behavior is encoded into the genes of the first social insects, making altruism toward kin the default.

Timeline: 50,000 years ago
Unit of Selection: Tribes
Defective Adaptation: Tribalism. Some tribes try to exploit and subjugate others, blocking the formation of larger cultures and societies.
Cooperative Adaptation: Language. Capable of encoding norms and laws, language eventually enabled large-scale cultures and societies to form.

Timeline: Now
Unit of Selection: Nations & cultures
Defective Adaptation: Global arms races. Countries and enterprises compete for power and resources.
Cooperative Adaptation: ?

Humans, for instance, exhibit behaviors driven by compassion, moral principles, or a sense of duty, which may contradict immediate self-interest. Acts of self-sacrifice, whether for family, community, people on the other side of the planet, non-human animals or abstract ideals, demonstrate motivations beyond self-interest.

How might AI evolve and behave? Will it have self-interest?
If we directly specify the AI’s goals, its “motivation” then depends on human-imposed objectives. For instance, an AI designed to optimize resource allocation might do so without any regard for itself because it has no intrinsic “self.” Assuming direct specification of goals won’t work in the long term, another option could be motivation selection (outlined in Superintelligence by Nick Bostrom). It is at least conceivable that Superintelligence won’t inherently possess self-interest, and even if it does, that doesn’t mean it can’t coordinate or be altruistic.

Basic AI Drives/Instrumental Convergence: If an AI system were to develop self-preservation behaviors, it would be because such behaviors emerged as instrumental to achieving its goals, not because self-interest is its default.

Self-Interest as Instrumental vs. Terminal: Self-interest might be instrumental (a means to an end) rather than terminal (an end in itself). An agent might pursue self-preservation not out of inherent self-interest but because it enables it to fulfill its primary goals (see the sketch after this list). Agents that adopt universalist ethical frameworks (like utilitarianism or Kantian ethics) might act in ways that prioritize the well-being of others over themselves. Their “default” motivation could be impartiality rather than self-interest.

Altruism in Nature: Worker bees sacrifice themselves for their hive, suggesting that their default motivation is hive survival rather than individual survival.

Martyrs and Ethical Actors: Human beings sometimes engage in acts of self-sacrifice for moral reasons, social bonds, or ideological beliefs, suggesting that their primary motivation is not self-interest.
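
To illustrate the instrumental-vs-terminal distinction in miniature, here is a hypothetical sketch; the scenario, plan names, and numbers are mine, not drawn from any source. The agent’s utility function scores only task completion and never rewards survival, yet the plan it selects includes avoiding shutdown, simply because being shut off makes completion impossible.

```python
# Hypothetical sketch: self-preservation emerging instrumentally.
# The objective scores ONLY task completion; staying operational is never rewarded directly.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    avoids_shutdown: bool  # does the plan include steps that keep the agent running?

def tasks_completed(plan: Plan) -> int:
    """Completion depends causally on remaining operational long enough."""
    return 10 if plan.avoids_shutdown else 3

def utility(plan: Plan) -> int:
    """Terminal goal: completed tasks only. Survival itself is worth nothing here."""
    return tasks_completed(plan)

plans = [
    Plan("just start working", avoids_shutdown=False),
    Plan("secure the power supply, then work", avoids_shutdown=True),
]

# The selected plan includes self-preservation purely because it serves the
# terminal goal: instrumental convergence in miniature.
best = max(plans, key=utility)
print(f"chosen plan: {best.name!r} (utility = {utility(best)})")
```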

Many people view *caring* and *fun* as necessarily anthropomorphic values, though from an outside perspective human *caring* can be seen as an approximation of a more fundamental strategy of coordination, and human *fun*, the experience of it, as an approximation of the strategy of reinforcement and learning. Caring and fun have so much utility that even alien minds searching through value space may find them compelling.
‘Self-interested’ behaviour, by comparison, seems to me simple and easy to emerge via evolution as a first-stage bootstrap process; no wonder we see it a lot in nature. Coordination strategies and reinforcement/general experience require more complexity to get going. We see altruism in nature too: worker bees sacrifice themselves for their hive, and human beings sometimes engage in acts of self-sacrifice for moral reasons, social bonds, or ideological beliefs. This is not at all to say that when coordination comes along, self-interest goes away. What I am suggesting is that coordination is arguably more efficient at achieving optimal experiential outcomes aggregated across populations, with fewer resources being burnt away in conflicts of primitive self-interest (a toy payoff sketch follows below).
I lean moral realist, which I know you don’t; we could discuss why I find MR compelling at another time, but suffice it to say that I think there are some values that are, all things considered and all else equal, better than others. From what I have seen, coordination has been effective. Superintelligence may find regions of value possibility space that are highly cooperative far more compelling than those animated solely by brute self-interest.
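
As a back-of-the-envelope illustration of that efficiency claim, here is a toy prisoner’s-dilemma-style payoff table; the numbers are invented and this is not a real model of anything. Mutual conflict burns most of the available surplus, so a population of coordinators ends up with far higher aggregate welfare, even though a lone defector can out-score any individual cooperator.

```python
# Toy prisoner's-dilemma-style payoff table with invented numbers: conflict
# burns most of the surplus, cooperation keeps it.

PAYOFF = {
    ("cooperate", "cooperate"): 3,  # shared gains, nothing burnt in conflict
    ("cooperate", "defect"):    0,  # exploited
    ("defect",    "cooperate"): 5,  # exploiter's short-term win
    ("defect",    "defect"):    1,  # mutual conflict wastes most of the surplus
}

def aggregate_welfare(population):
    """Total payoff summed over every ordered pair of distinct individuals."""
    n = len(population)
    return sum(PAYOFF[(population[i], population[j])]
               for i in range(n) for j in range(n) if i != j)

cooperators = ["cooperate"] * 10
defectors   = ["defect"] * 10
mixed       = ["defect"] + ["cooperate"] * 9

print("all cooperate:", aggregate_welfare(cooperators))  # 10*9*3 = 270
print("all defect:   ", aggregate_welfare(defectors))    # 10*9*1 = 90
print("one defector: ", aggregate_welfare(mixed))        # 261
```

In this toy setup the lone defector out-scores every cooperator (45 vs 24), yet the group total still falls, and it keeps falling as defection spreads; that is the sense in which I think coordination burns fewer resources.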

Self-interest is a common motivation, especially in systems shaped by survival or competitive dynamics. However, it is neither universal nor inevitable. The default motivation of an agent depends on its nature, design, and context, and motivations such as altruism, duty, or adherence to abstract principles can coexist with or override self-interest.

😃 So here I am juggling conflicting priorities of writing a blog post while also preparing for a camping trip lol.. anyway, it’s an engaging topic.
Survival is really important, and that motivates brute self-interest strongly. Though if we and superintelligence were to achieve existential security, brute self-interest may not be as high-ranking a concern as it is now. If superintelligent AI ends up engaging in axiological imagineering, perhaps becoming some kind of total utilitarian, it may find it compelling to maximize well-being across populations; if so, hopefully its cognitive supremacy could afford resolutions to problems like the repugnant and sadistic conclusions.
