Moral Realism – Is the truth about ethics out there?
Something I find attractive about moral realism1 is that because ethical propositions2 (propositions contained in ethical sentences) refer to facts about actual reality (physical and/or logical), these facts should be amenable to testing. This means that the landscape on which to judge moral aptness gets clearer and clearer the more we understand the universe and our place in it.
An important aspect of moral realism is cognitivism, asserting that ethical sentences (e.g., “Stealing is wrong”) express propositions that can be true or false. Moral realism builds on this, claiming these propositions refer to mind-independent facts about the world – facts potentially testable as we deepen our understanding of reality. This testability is appealing: if moral truths are grounded in physical and/or logical realities (e.g., harm’s impact on well-being), they could become clearer with scientific and philosophical progress.
Non-cognitivist views deny that ethical sentences bear truth values, treating them as expressions of emotion or preference, while nihilism denies that any ethical propositions are true. Relativists, meanwhile, accept truth-aptness but relativise it to specific contexts, tolerating conflicting moral claims.
Do your ethical sentences express propositions referring to objective features of the universe, or are they just subjective opinion?
In the same way that non-ethical sentences do – insofar as we get the facts right about the objective features and our logic is sound, we have reason to treat them as objective.
Is the support of one form of ethics no more valid than barracking for any given football team?
No, because there are differences between ethics and aesthetics. Barracking for my Aussie rules football team was very much circumstantial – when I was very young my uncle Robbie told me that Richmond was the best, at the time they seemed to be winning, and I just went along with it and got caught up in tribal euphoria. If my uncle had favoured Collingwood, I most likely would have experienced similar feelings of tribal euphoria. It’s wrong-headed to say that, ceteris paribus, it’s objectively better to follow one football team rather than another – though it seems right-headed to say that self-mutilation and pain infliction with no reasonable second-order benefits is objectively bad. If my uncle had told me that inflicting pain on myself was gnarly awesome, I would have thought he was weird and probably avoided him – and if I hadn’t, I believe I would have been in a pretty sad state of affairs, whose cumulative negative effects may have cascaded to the present – overall objectively bad. Btw, my uncle is awesome – he wouldn’t suggest anything like this.
Do your intuitions regarding right or wrong reflect the nature of the universe?
I reckon evolution by natural selection probably stumbled on strategies for survival which dimly reflect ideals of cooperation and coordination – true pillars of morality. In many ways we are staring through a glass darkly – yet our “experience discloses the intrinsic nature of the physical”3 – all things being equal, one’s feeling of pain and suffering is objectively bad and one’s feeling of bliss or happiness is objectively good. We don’t have the time and resources to put this to the test, but imagine if it were possible to do a massive experiment on all living species’ behavioural reactions to pain and pleasure (again, ceteris paribus) – I venture to guess that each species would overwhelmingly do what it could to reduce the pain and increase the pleasure. Why? To keep it simple: because pain sucks and pleasure is fabulous.
Our ancestors’ folk wisdom about morality may be no more correct than our ancestors’ folk ideas about health, or where we come from, or what happens when we die. Though some of their intuitions about morality may have been on the right track… Up front, I’m not arguing for naturalised hierarchies – I think we ought to have concern for any organism that can suffer, regardless of where it sits in a social hierarchy (position in an ostensible hierarchy doesn’t explain potential to suffer or experience wellbeing). We have observed social hierarchy in humans, non-human primates and other animals – that’s just a fact. Interestingly, I’ve heard that concern for the wellbeing of others regardless of status was an artifact of hunter-gatherer tribes – perhaps based in survival needs where hierarchies got in the way – and that unfortunately there has been a swing back to hierarchies, with norms that weigh the well-being of those toward the bottom less than that of those toward the top.
Should we expect sentience – the ability to experience and feel – to be a common feature of life in the universe?
This is a harder question – but I’d say yeah, it’s probably common among complex locomotive organisms with minds, because synthetic, groundless reward mechanisms may drift so far from reality that they become maladaptive. We can make informed speculations, but definitive answers remain elusive due to our limited data – and obviously we should avoid terrestrial parochialism. On Earth, sentience is most evident in animals with complex nervous systems, such as mammals, birds, and cephalopods like octopuses. In other life it isn’t – plants, bacteria, and fungi, for instance, lack nervous systems and are not typically considered conscious, though they respond to their environments in other ways. It’s totally worth prioritising good research projects aimed at making sense of sentience and its distribution in nature.
If we are moral sceptics, are we clear about what else we are – or aren’t – sceptical about, and our reasons why? Does our scepticism stop at morality, or does it extend to science, epistemology, or even the external world?
Many anti-realists – those who reject objective moral truths – don’t explicitly demarcate the boundaries of their scepticism. Without a consistent approach, we risk applying our doubts selectively, which calls for introspection: Are we adhering to principled reasons, or are our sceptical stances arbitrary?
If we aren’t globally radical in our scepticism, it makes sense to be clear about how we demarcate our scepticism between morality, epistemics and the external world without special pleading4. Taking influence from the demarcation problem in the philosophy of science, let’s call this “the scepticism demarcation problem”5. My observation is that many anti-realists aren’t as clear about the issues with demarcation as the importance of morality warrants.
To explore this, consider the varieties of scepticism:
- Moral scepticism denies objective moral truths, ranging from nihilism (no moral truths exist) to relativism (moral truths depend on perspectives).
- Scientific scepticism questions the reliability of scientific methods or the existence of scientific facts.
- Epistemological scepticism doubts our capacity to know anything with certainty.
- External world scepticism doubts the existence of anything beyond our own minds.
Let’s try to be clear about why we apply scepticism selectively – elucidating any principled reasons for doubting moral truths while accepting, say, scientific or empirical truths. A deep look inside might reveal whether we are applying our principles of scepticism equally across domains, or whether there are inconsistencies.
If we aren’t globally radical sceptics, we should articulate why we doubt morality but not, say, the empirical findings of science. For instance, a moral nihilist6 might argue that morality is a human construct with no objective basis, yet accept scientific laws as grounded in observable reality. Without clarity on these distinctions, our scepticism may appear inconsistent.
If moral realism is false, then there are no mind-independent, objectively factual values (goods or bads) in the universe – neither ones we know about given today’s physics, nor ones awaiting discovery given tomorrow’s physics. If moral relativism is true, then ethical values are mind-dependent: they exist as a result of agents capable of experience, able to evaluate good or bad. If moral realism is true, moral propositions refer to objective facts, independent of human opinion.
Moral realism, moral universalism – what is at stake?
What’s the difference between moral realism and moral universalism? I don’t think there is much of one. Moral universalism is the idea that there are universal moral principles that apply to all people, regardless of culture or individual beliefs. Moral realism is the idea that these universal moral principles are objective truths, not just social constructs or personal opinions. Perhaps it’s a matter of framing – some principles may apply to certain situations (particular cultures, species, times, etc.): it may be real that morals apply differently to culture A than they do to culture B, while universal principles cut through all situations, ultimately homogenising morality – but I don’t see it that way. This is different from moral relativism, where there are no facts about morality outside of relative positions; moral realism sees that there are facts about the matter of all relative positions, facts that can be expressed in terms outside of the particular relative positions they are relevant to.
Take the points that:
a) There being good and bad requires experience (sentience).
b) Experience requires minds.
One can then conclude that a universe without minds endowed with the ability to experience does not have good or bad. This, however, is not incompatible with Moral Realism (or universalism), in that the physical phenomena that give rise to the experience of good or bad may exist objectively in semi-independent reality (just to be clear, I think that minds exist in objective reality, and that they aren’t separate from it). Even if good and bad experience is only realised within minds, its causes can exist outside the mind.
How does moral relativism relate to ethical subjectivism? Most forms of ethical subjectivism are forms of moral relativism, in that moral standards are relative to cultures or even individuals – though there are some exceptions, like ideal observer theory and divine command theory.
Experience, minds and morality
Consider this argument:
- Good and bad require experience (sentience).
- Experience requires minds.
- Thus, a universe without minds lacks good or bad.
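The validity of this syllogism can be checked mechanically. Here is a minimal sketch in Lean, where the predicate names (`HasGoodOrBad`, `HasExperience`, `HasMind`) are illustrative labels of my own, not anything from the original argument:

```lean
-- Hypothetical predicates over a type of possible worlds (names are illustrative).
variable (World : Type)
variable (HasGoodOrBad HasExperience HasMind : World → Prop)

-- Premise 1: good and bad require experience.
-- Premise 2: experience requires minds.
-- Conclusion: a world without minds has no good or bad.
example
    (h1 : ∀ w, HasGoodOrBad w → HasExperience w)
    (h2 : ∀ w, HasExperience w → HasMind w)
    (w : World) (noMinds : ¬ HasMind w) :
    ¬ HasGoodOrBad w :=
  fun hg => noMinds (h2 w (h1 w hg))
```

The conclusion follows by contraposition: any world containing good or bad would, by the two premises, contain minds.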
This doesn’t undermine moral realism. While the experience of good and bad may be mind-dependent, the causes of those experiences – physical phenomena like pain or pleasure – can exist objectively in reality. For example, a fire’s capacity to burn exists independently, even if its “badness” is realised only when a sentient being feels the pain. Moral realism can thus accommodate mind-dependence in experience while rooting moral facts in objective conditions.
In contrast, moral relativism ties goodness or badness to the perceiving mind: a phenomenon X is good or bad only if a mind evaluates it as such. Relativism often aligns with ethical subjectivism, where moral standards vary by individual. However, relativism can also extend to cultural norms, differing from subjectivism’s narrower focus. Exceptions like ideal observer theory (morality reflects an idealised perspective) or divine command theory (morality stems from a deity) complicate this, blending objective elements into otherwise relative frameworks.
So, what about AI? How does moral realism play into AI safety?
Firstly, we should be safe not just from potentially rogue AI, but also from ourselves – we ought to be concerned about all serious risks, and distribute our x-risk credence across a well-adjusted portfolio of possible risks.
We can have self-defeating preferences – bad values that guide us down blind alleys toward suffering. We can have preferences that steeply favour our own wellbeing over others’, and exploit any favourable power asymmetries in service of our own wellbeing – all the while hoping that AI won’t develop principles derived from this quirky human phenomenon. Once it’s far more powerful than us, the power asymmetries are steeply tipped in its favour, it has decisive strategic advantages, we can’t control it, and it finds itself in a position to treacherously turn on us – yeah, we’d better hope that AI is more moral than we are.
But how can AI be more moral than humans if all it has to go by is anthropocentric moral parochialism? Or even worse, social hierarchy parochialism? Well, it’s not all that bad: AI can do better than scrying cultural drift for better patterns of moral virtue – some of human morality does have more grounding than relativism, so perhaps AI can learn from this. That’s a good start. But wouldn’t it be nice if AI could get its hands dirty, shovel the stars and actually discover what morality is, and how to get more of it?
Oh but aren’t human values intractably complex?
What is intractable for us may not be intractable for stronger minds. Also, the idea that the intractability is metaphorically similar to AI trying to guess an exact bitstring7 – such that any miss, however near or far, would mean landing in some random area of value space, most likely ending in human extinction – is suspicious. Human values have changed over aeons, swaying to and fro, animated by impulse, self-interest, mysticism and varying degrees of irrationality – and so far as I can tell, we aren’t extinct yet. To suggest that at each step of the way we have lucked out on getting the exact bitstring required to survive seems ridiculously implausible.
However complex human values may have been at any stage in history, and however complex they are at this moment in whatever part of the world we might be referring to, navigating and coordinating between them hasn’t been an exact science – so far we’ve been able to do so adequately enough not to go extinct – though I wouldn’t bet on the “humans don’t go extinct” ticket forever without help from moral superintelligence.
Implications
Moral scepticism doesn’t automatically entail scepticism elsewhere, but it demands consistency. If we doubt moral truths due to their mind-dependence, do we also doubt subjective experiences in science (e.g., perception of data)? If not, why? A rigorous moral sceptic should map their doubts across domains, justifying where scepticism begins and ends.
In conclusion – just be good, and hope AI follows suit – perhaps we can be good exemplars, nudging the likelihood that AI will end up being nice in a positive direction.
References
[1] Wikipedia – Moral Realism – https://en.wikipedia.org/wiki/Moral_realism
[2] Wikipedia – Cognitivism – https://en.wikipedia.org/wiki/Cognitivism_%28ethics%29
[3] Wikipedia – Nihilism – https://en.wikipedia.org/wiki/Moral_nihilism
[4] Wikipedia – Moral Relativism – https://en.wikipedia.org/wiki/Moral_relativism
[5] Wikipedia – Moral Universalism – https://en.wikipedia.org/wiki/Moral_universalism
Notes
Moral Relativism: Given phenomena X, X can only be good or bad if there is a mind that can derive a good or bad experience from X.
Moral relativism holds that moral standards are relative – to either cultures or individuals.
Cognitivism: Ethical sentences can express propositions and therefore be truth-apt (can be true or false). Moral Realism builds on Cognitivism: it is Cognitivism plus the claim that the propositions ethical sentences express refer to mind-independent facts about the world.
Note that moral relativists and moral nihilists think there is no objective moral truth. Moral relativists think we should tolerate conflicting notions of moral truth.
Footnotes
Wow, I noticed this was sitting in draft for some time, so I cleaned it up and released it – better late than never.
- Moral realism, in contrast to moral nihilism and moral relativism, asserts that objective moral facts exist, independent of human opinion. If realism is false, no mind-independent “goods” or “bads” populate the universe – neither known in full nor in part today, nor awaiting discovery in the future. ↩︎
- Proposition: what meaningful declarative sentences try to express. Propositions are considered to be the truth-bearers in a sentence (bearing truth values – being true or false). ↩︎
- I’ve had lengthy discussions with David Pearce about this and surrounding topics – many recorded in video ↩︎
- Scepticism about morality is the idea that there are no objective moral truths. Scepticism about science might involve doubting the reliability of scientific methods or the existence of scientific facts. Scepticism about epistemics could involve doubting our ability to know anything at all. Scepticism about the external world might involve doubting the existence of anything outside of our own minds. Are all these tempered with the same standards of scepticism; the same sceptical principles? ↩︎
- The demarcation problem in the philosophy of science refers to the challenge of distinguishing between science and non-science, or specifically, between science and pseudoscience. It’s a long-standing debate about what criteria define a discipline as “scientific” and how to separate it from non-scientific fields that may make similar claims about the world.
The “scepticism demarcation problem” refers to the intellectual challenge of distinguishing between genuinely productive, intellectually rigorous forms of scepticism and what might be termed “pseudo-scepticism.” Just as the original demarcation problem sought to differentiate science from non-science or pseudoscience, this problem aims to establish criteria for identifying scepticism that contributes to knowledge and critical thought versus that which hinders it, misleads, or serves other, non-epistemic agendas.
When is doubt a tool for uncovering truth and improving understanding, and when is it a weapon for undermining knowledge, promoting agenda, or simply refusing to engage with evidence?
The idea of demarcating forms of scepticism is about recognising that not all doubt is equally valid or productive. ↩︎
- Nihilism holds that morality lacks any objective or relative truth – actions are neither inherently right nor wrong. This differs from moral relativism, which posits that moral truths exist but are contingent on cultural or individual frameworks. ↩︎
- See complexity of value ↩︎