Moral Realism for EAs
I keep bumping into a pattern in EA spaces: people are very comfortable being ambitious about epistemology and cause prioritisation, but surprisingly pessimistic or dismissive about moral realism, or drawn to moral anti-realism.1
Moral realism often gets caricatured as spooky – as if it posits a weird substance floating outside the universe, or some divine law written between the charm and strange quarks. That’s not the view I’m interested in defending.
The version that I care about (and what I think matters to most EAs) is thoroughly naturalistic:
- Start with the empirical facts – who exists, what they experience, how their minds work, how they interact.
- Given that complete description, there are further truths about what there is reason to do and about which options are better or worse.
No extra ectoplasm, no moral pixie dust – there are just higher-level facts that supervene2 on the physical, in the same way that facts about ecosystems or economies do. There’s no difference in moral facts without some difference in natural facts.
Suppose we assume a naturalistic picture – physics, chemistry, biology, psychology, no extra metaphysical furniture. Once this world contains conscious beings, do you think some futures are objectively better or worse than others (or at least think it’s an open question whether they are), or do you think talk of ‘better’ and ‘worse’ can only ever reflect preferences?
Where EAs already act like realists
A lot of our motivation for EA already points in a realist direction.
Consider something very simple (you’ve probably done it): putting your hand on a hot stove. You don’t need to reason through a moral argument to know you have reason to yank it away. The ‘ouch’ isn’t just a neutral signal; it has a distinctive negative feel and pushes you to stop. For many of us, it seems as if the experience itself is a decisive reason not to be in that state – not just a preference about what we happen to dislike.
This is important: the realist takes that phenomenology at (roughly) face value. We don’t think agony is bad because we dislike it; we dislike it because of what it is like, and that looks like an object-given reason. The anti-realist treats the same data as a very strong but ultimately non-objective aversion.
Call it a preference if you like, but then notice: it’s the kind of state that any minimally informed, minimally reflective agent has decisive reason to avoid. That’s the sense in which it looks “intrinsically bad”. And this isn’t just about adult humans with explicit preferences. Infants, non-human animals, people with limited reasoning: when they’re in intense agony, something has gone badly wrong for them, whether or not they can represent it in moral language.3
Now, anti-realists will happily agree that agony is bad for us. They just want to stop there: there’s no further stance-independent fact about what matters. I think that’s too quick, especially given what else EAs already believe.
Consider norms of reason. If you’re in EA, you probably think claims like:
“You ought to update on evidence.”
“You shouldn’t double-count data.”
“You shouldn’t believe contradictions.”
are not just personal whims. You take them to be correct standards for any reasoner, just in virtue of what beliefs are and what evidence is. That’s a form of normative realism about epistemic reasons.
And we don’t panic about that. We don’t say: “Logic is spooky because you can’t touch it.” We treat validity, consistency, and good Bayesian practice as abstract but real features of the space of possible belief-states.
So the question I want on the table isn’t “Realism: yes or no?” but:
Why treat epistemic norms as tracking real standards, while treating moral norms as mere projection?4
There might be a good disanalogy there – but we should actually articulate it, not just assume moral truth is “obviously” different.
A second place where EA already leans realist is in our attitudes to arbitrariness.
Most of us find it unacceptable to say “I care about children in London but not in Melbourne or Nairobi” simply because of where they were born. We call that a bias, a moral error, not just an aesthetic choice. Similarly, many EAs endorse something like:
“Future people matter just as much as present people, other things equal.”
We don’t usually treat that as a personal fashion statement. We treat it as a correction of a parochial bias.
On a strict anti-realist view, though, there is no stance-independent sense in which bias is bad or parochialism is mistaken. There are only our current preferences and higher-order attitudes. You can still be an EA anti-realist – you just have to say, in the end, “this whole structure of impartial concern is what I/we happen to endorse”.
The realist says: there is something about suffering, about agency, about the structure of conscious lives, that gives anyone reason to care, whether or not they currently do. If you or I were wired to care only about paperclips, then – given the actual nature of minds and value – we would be getting something wrong.
Realism also makes sense of a pattern EAs already rely on: convergence under improvement.
If morality were just random cultural fashion, you’d expect moral views to drift and diverge as societies get more complex, in the way clothing styles do. Instead, what we often see is that under certain kinds of pressure – more information, reduced superstition, better understanding of psychology, and a widening circle of empathy – there is partial convergence:
- on the badness of arbitrary cruelty,
- on the wrongness of chattel slavery,
- on the idea that women, children, and then non-human animals are not mere property.
This convergence is noisy and incomplete. It can be partly explained by evolutionary and game-theoretic forces. But notice: it happens when we apply exactly the methods we trust to be truth-tracking elsewhere – open debate, evidence, bias reduction, taking more perspectives into account. That pattern is what you’d expect if there were moral facts to be progressively approximated, not just norms being reinvented from scratch.
Finally, think about what actually motivates you as an EA. Your biology gives you strong incentives to hoard resources for yourself and your kin. Yet when you donate to avert malaria deaths or fund AI safety research, you’re allowing quieter, more abstract considerations to override those brute drives.
You’re not just saying “I have a kink for altruism”. You’re treating some reasons – impartial ones, evidence-sensitive ones – as better than others. You’re already acting as if there are facts about what there is most reason to do that can contradict what your genes would prefer.
Moral realism, in the sense I care about5, is just taking that pattern seriously and asking:
- given what conscious creatures are like,
- given the structure of suffering, flourishing, agency, cooperation,
- what follows about what there is most reason to do?
You can still be highly uncertain. You can still use moral uncertainty, work with multiple theories, update on evidence. Realism doesn’t mean we already know the full moral truth; it means there is something there to be more or less wrong about – and as EAs, that should matter to us at least as much as whether our empirical models are getting reality right.
In essence, I frame realism as the logical terminus of EA’s epistemic virtues – naturalism, anti-arbitrariness, respect for convergence and reflection – rather than as a spooky add-on. EAs aren’t in the business of selling alien or arbitrary metaphysics, so it’s worth noticing where EA practice already leans.
Footnotes
- This piece isn’t an attempt to prove moral realism; its purpose is to show why I think that, if you’re an EA, a non-trivial credence in some form of moral realism is very hard to escape – and that anti-realism shouldn’t be treated as the default, obviously correct view. ↩︎
- Supervenience: moral facts supervene on non-moral facts, meaning that there can be no moral difference between two situations without a corresponding non-moral difference. In other words, if two situations are identical in all non-moral respects (physical and mental facts, for instance), they must also be identical in all moral respects. A change in a moral fact requires a change in some underlying non-moral fact, but not vice versa. ↩︎
- See Derek Parfit’s “Agony Argument” – discussed in On What Matters, specifically the chapters where he discusses how anti-realism collapses into nihilism. This is widely considered the heavy artillery for secular moral realism (as distinct from moral realism via Divine Command) because it relies on phenomenology (what it feels like to be conscious) rather than religion or abstract metaphysics. It appeals directly to the utilitarian intuition that suffering is real. ↩︎
- See the “Companions in Guilt” argument, which challenges a skeptical or anti-realist position by arguing that, if the target position is true, a similarly unpalatable or absurd conclusion must also be true in a different domain – here, the domain of epistemic norms. ↩︎
- I lean naturalist, and am partial to robust realism too – go figure. ↩︎