Survival, Cooperation, and the Evolution of Values in the Age of AI
Few people would deny that the struggle to survive animates natural selection. Yet it is misleading to say that ‘survival is paramount, and every strategy serves that goal’: survival plays a crucial role in evolution, but it is not the ultimate goal. As Charles Darwin’s theory of natural selection emphasises, the ‘struggle for existence’ favours individuals with traits that enhance their likelihood of survival and reproduction. However, survival is a means to an end: the passing of genes to future generations. Evolution prioritises reproductive success, and strategies like competition or cooperation are employed not solely for survival, but to maximise the chances of leaving descendants.
As humanity faces a future increasingly shaped by technology and artificial intelligence, we are right to question whether this ancient dynamic is fit for the future. Can values help shape our future for the better?
Values are guiding principles: they are fundamental beliefs about what is right, wrong, good, bad and important. These guiding principles provide direction, offering a framework for making choices, especially in complex or ambiguous situations. Ideally, these frameworks help us make choices that achieve utility, i.e. that fulfil our needs in line with those fundamental beliefs.
Evolution by natural selection optimises for biological fitness (reproductive success), while values aim at utility, where “utility” refers to the usefulness or benefit that a particular value provides to an individual or population. If we want our values to shape the future, we should be clear about what we want them to be.
How did our values get to where they are now, and what is driving them today? How do the evolutionary strategies of adaptivity, survival and cooperation intersect in a world where technology changes the rules of the game? What happens when values evolve, or drift, or are deliberately selected, in ways that challenge deeply held cultural norms?
These questions emerged in a recent conversation I had with someone who argued that the struggle for existence remains supreme, even when cooperation enters the picture. Animals, he pointed out, cooperate only insofar as it enhances survival or reproductive success. If cooperation is not necessary in a given context, individualistic strategies prevail. But what about humans, whose cooperation often seems to transcend survival to encompass ideals, aspirations, and collective flourishing? This conversation spurred deeper reflections on the interplay between survival, cooperation, and values, particularly as we look toward a future shaped by AI.
The Nature of Cooperation: A Strategy, Not a Goal?
Some see cooperation not as a replacement for the struggle to survive but as a strategy within it. This perspective is grounded in evolutionary biology: whether it’s wolves hunting in packs or symbiotic relationships between species, cooperation often emerges because it increases fitness. However, I think this view risks oversimplifying human cooperation, which is layered with cultural, moral, and technological dimensions.
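To make the claim that cooperation can emerge simply because it pays a little more concrete, here is a minimal, hypothetical sketch in Python: an iterated prisoner’s dilemma in which strategies spread in proportion to their payoff. The payoff matrix, strategy names, and parameters are illustrative assumptions rather than a model of any real population; the only point is that reciprocal cooperation outcompetes pure defection once interactions repeat.

```python
# Illustrative sketch only: an iterated prisoner's dilemma with
# replicator-style updating. Payoffs and parameters are arbitrary
# assumptions chosen to show cooperation spreading when it raises fitness.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_last):
    return opponent_last        # cooperate first, then mirror the opponent

def always_defect(opponent_last):
    return "D"                  # never cooperate

STRATEGIES = {"tit_for_tat": tit_for_tat, "always_defect": always_defect}

def play(strat_a, strat_b, rounds=50):
    """Total payoff for strat_a over repeated rounds against strat_b."""
    score, last_a, last_b = 0, "C", "C"
    for _ in range(rounds):
        move_a, move_b = strat_a(last_b), strat_b(last_a)
        score += PAYOFF[(move_a, move_b)]
        last_a, last_b = move_a, move_b
    return score

def evolve(generations=30):
    """Each generation, strategies grow in proportion to their average payoff."""
    pop = {name: 1 / len(STRATEGIES) for name in STRATEGIES}
    for _ in range(generations):
        fitness = {
            name: sum(share * play(STRATEGIES[name], STRATEGIES[other])
                      for other, share in pop.items())
            for name in pop
        }
        total = sum(pop[n] * fitness[n] for n in pop)
        pop = {n: pop[n] * fitness[n] / total for n in pop}
    return pop

print(evolve())  # reciprocal cooperation comes to dominate the population
```

In this toy setting, cooperation wins not because it is a goal in itself but because, under repeated interaction, it is the higher-fitness strategy, which is exactly the instrumental framing the evolutionary view takes and the framing that human cooperation arguably exceeds.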
Human cooperation isn’t merely instrumental; it’s often tied to shared goals that go beyond mere survival. For instance, cooperation to prevent existential risks—such as nuclear war or climate change—isn’t just about individual survival but the preservation of a collective future. In these contexts, cooperation becomes a moral imperative, not just a pragmatic strategy.
A counterpoint: as technology advances, the need for cooperation diminishes. Machines can replace human labour, reducing reliance on others. Yet technology also creates new interdependencies. Consider the global cooperation required to manage AI, regulate biotechnology, or distribute the dividends of technological growth. With increasingly easy access to doom-tech, cooperation is vital not just for survival but for avoiding mutually assured destruction.
Compatible Traits: Who Can Cooperate with Whom?
A key challenge to cooperation lies in the compatibility of traits among individuals or groups. It has been suggested that effective cooperation requires shared cultural or evolutionary traits. Consider the tech industry, where cultural traits, instilled over decades, supersede national or tribal identities, enabling global collaboration.
This raises profound questions about inclusion and exclusion. What happens to those whose values or traits are incompatible with dominant cooperative strategies? Do we build enclaves of shared values and leave others behind? Or do we strive for broader frameworks that allow for coexistence despite differences?
The tension here is between efficiency and inclusivity. Highly cohesive groups often cooperate more effectively, but they risk creating insular systems that exclude or marginalise others. In a globally interconnected world, such exclusion could lead to conflict, especially if marginalised groups perceive their values as being erased or ignored.
The Evolution of Values: AI and Motivational Selection
As humanity transitions into the age of artificial intelligence, the evolution of values becomes a critical issue. AI systems will likely play a role in determining which values are prioritised, preserved, or discarded. This process, sometimes referred to as motivational selection, presents both opportunities and challenges.
If AI operates under frameworks like Nick Bostrom’s indirect normativity, it might help discover stance-independent moral truths—values that remain valid regardless of individual perspectives. But this process will inevitably lead to the discarding of some human values while elevating others. For instance, values rooted in tribalism or zero-sum competition might be replaced by those emphasising fairness, cooperation, and long-term thinking.
This transition will not be smooth. Many people will resist the loss of cherished values, even if evidence and logic suggest new ones are better. Will evidence and logic convince everyone? Probably not. Cultural instillation—the deliberate shaping of norms and values—may play a role, but it must be done carefully to avoid coercion or alienation.
Managing Value Transitions in a Fragmented World
Perhaps the greatest challenge lies in managing transitions in a world where values are already fragmented. Different populations often hold incompatible values, whether due to cultural, religious, or ideological differences. This fragmentation could deepen as new values emerge through AI-driven motivational selection.
To navigate this, we must focus on meta-values—higher-order principles that facilitate coexistence despite deeper differences. Meta-values like fairness, respect for diversity, and a commitment to avoiding harm can provide a foundation for cooperation, even among groups with conflicting worldviews. AI might help identify and promote these meta-values by simulating scenarios where they lead to better outcomes.
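As a toy illustration of what “simulating scenarios” could mean, the hypothetical Python sketch below compares average welfare across randomly generated resource disputes, with and without a fairness meta-value that scales incompatible claims down until they fit. Every number in it (the conflict cost, the range of demands, the group count) is an assumption chosen for exposition, not a claim about real populations or about how such an AI system would actually work.

```python
# Illustrative only: comparing simulated outcomes with and without a
# fairness meta-value. All parameters are arbitrary assumptions; the point
# is the shape of the comparison, not the specific numbers.
import random

def scenario_outcome(demands, fairness_metavalue, conflict_cost=0.5):
    """Welfare each group obtains from a shared resource of size 1.0."""
    total = sum(demands)
    if fairness_metavalue and total > 1.0:
        demands = [d / total for d in demands]   # scale claims to fit
        total = 1.0
    if total <= 1.0:
        return demands                           # everyone gets their claim
    # incompatible claims and no meta-value: conflict destroys value
    return [d / total * (1.0 - conflict_cost) for d in demands]

def average_welfare(fairness_metavalue, trials=10_000, groups=3, seed=0):
    """Mean total welfare over many randomly drawn disputes."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        demands = [rng.uniform(0.2, 0.6) for _ in range(groups)]
        total += sum(scenario_outcome(demands, fairness_metavalue))
    return total / trials

print("with fairness meta-value:   ", round(average_welfare(True), 3))
print("without fairness meta-value:", round(average_welfare(False), 3))
```

The groups in this sketch still want different things; the meta-value does not resolve their underlying disagreement, it only governs how conflicting claims are handled, which is precisely the role meta-values are meant to play.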
But coexistence isn’t enough. We must also find ways to foster a sense of shared purpose. Storytelling, education, and shared goals—such as mitigating existential risks—might bridge some divides. Incrementalism, or introducing changes slowly and with buy-in, could ease the transition to new value systems.
Towards a Future of Cooperative Flourishing
The interplay of survival, cooperation, and values is at the heart of humanity’s future. While the struggle to survive remains a fundamental force, it may increasingly be automated away. Cooperation has become something far more interesting: an engine for flourishing rather than mere survival. As AI takes on a greater role in shaping our world, we need to bias it towards selecting for higher values.
How do we balance the preservation of cultural diversity with the need for global cooperation? How do we manage transitions to new value systems without alienating those who feel left behind? And how do we ensure that AI-guided motivational selection respects the dignity and agency of all people?
The answers to these questions will determine whether humanity’s future is one of fragmented survival or cooperative flourishing. To succeed, we must embrace a nuanced understanding of cooperation, one that recognises its evolutionary roots in survival while aspiring to something far greater.
References and Further Reading
- Darwin, Charles. On the Origin of Species (1859).
- Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies (2014).
- Wilson, Edward O. The Social Conquest of Earth (2012).
- Pinker, Steven. The Better Angels of Our Nature: Why Violence Has Declined (2011).
- MacAskill, William. What We Owe the Future (2022).