Moral Realism for EAs

I keep bumping into a pattern in EA spaces: people are very comfortable being ambitious about epistemology and cause prioritisation, but surprisingly pessimistic about or dismissive of moral realism, and often seem attracted to moral anti-realism.1

Moral realism often gets caricatured as spooky – as if it posits a weird substance floating outside the universe, or some divine law written between the charm and strange quarks. That’s not the view I’m interested in defending.

The version that I care about (and what I think matters to most EAs) is thoroughly naturalistic:

  • Start with the empirical facts – who exists, what they experience, how their minds work, how they interact.
  • Given that complete description, there are further truths about what there is reason to do, and about which options are better or worse.

No extra ectoplasm, no moral pixie dust – there are just higher-level facts that supervene2 on the physical, in the same way that facts about ecosystems or economies do. There’s no difference in moral facts without some difference in natural facts.
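
One way to state that supervenience claim slightly more precisely (the notation is my own shorthand, nothing standard – read N(w) as “all the natural facts in situation w” and M(w) as “all the moral facts in situation w”):

$$
N(w_1) = N(w_2) \;\Rightarrow\; M(w_1) = M(w_2)
$$

for any two possible situations – which is just the contrapositive of “a difference in moral facts requires some difference in natural facts”.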

Suppose we assume a naturalistic picture – physics, chemistry, biology, psychology, no extra metaphysical furniture. Once this world contains conscious beings, do you think some futures are objectively better or worse than others (or at least that it’s an open question whether they are), or do you think talk of ‘better’ and ‘worse’ can only ever reflect preferences?

Where EAs already act like realists

A lot of our motivation for EA already points in a realist direction.

Consider something very simple (you probably already have): putting your hand on a hot stove. You don’t need to reason through moral argument to know you have reason to yank it away. The ‘ouch’ isn’t just a neutral signal; it has a distinctive negative feel and pushes you to stop. For many of us, it seems as if the experience itself is a decisive reason not to be in that state – not just a preference about what we happen to dislike.

This is the important move: the realist takes that phenomenology at (roughly) face value. We don’t think agony is bad because we dislike it; we dislike it because of what it is like, and that looks like an object-given reason. The anti-realist treats the same data as a very strong but ultimately non-objective aversion.

You can describe this as a very strong preference. But notice the deeper claim many people are tempted to make:

Extreme agony is the sort of state that any minimally informed, minimally reflective agent has decisive reason to avoid.

That’s one way of cashing out the idea that agony is “intrinsically bad”. And it isn’t limited to adult humans with explicit preferences – infants, non-human animals, people with limited reasoning: when they are in intense agony, something has gone badly wrong for them, whether or not they can represent it in moral language.3

Now, you can still be an anti-realist and happily agree that agony is bad for us. Anti-realists may just want to stop there: there’s no further stance-independent fact about what matters. I think that stops too soon, especially given what else EAs already believe (I discuss this in the next section).

Here it helps to distinguish two kinds of reasons (following Parfit and others):

  • Desire-given reasons: facts about what would satisfy your existing desires.
  • Object-given reasons: facts about the objects or states themselves (e.g., the nature of agony) that count in favour of or against them, independently of what you currently want.

The realist claim is that at least some moral reasons – especially about intense suffering – are object-given in this sense. This needn’t commit you to the view that only pleasure and pain matter. It just uses these as the cleanest cases where the pull of realism is easiest to see.

Epistemic norms: realism you already believe

Most EAs are already realists about at least some norms – namely, epistemic norms. If you’re in EA, you probably don’t treat claims like the following as personal whims or reports of mere taste:

  • “You ought to update on evidence”,
  • “You shouldn’t double-count data”,
  • “You shouldn’t believe contradictions”.

Most EAs take them to be correct standards for any reasoner, simply in virtue of what beliefs and evidence are actually like.

That’s a form of normative realism about epistemic reasons. Some philosophers try to explain these norms in a broadly constructivist (e.g. Humean)4 way, but the everyday EA way of talking sounds very much like realism about at least some normative standards.

And we don’t seem to panic about logic being spooky because we can’t touch it – we treat validity, consistency, and good Bayesian practice as abstract but real features of the space of possible belief-states.

So the question I want on the table isn’t a simplistic “Realism: yes or no?” but something that encourages us to think more deeply than we otherwise might:

Why treat epistemic norms as tracking real standards, while treating moral norms as mere projection or personal discretion?5

If you think it would be a catastrophic mistake to believe logic is just an evolved quirk, or to say “there’s no fact of the matter about good reasoning”, you’re already halfway to realism about some normative domain. The suggestion here is to treat moral reasons with similar seriousness.

Convergence, progress, and the expanding circle

Realism also makes sense of a pattern EAs already rely on: convergence under improvement.

If morality were just random cultural fashion, you’d expect moral views to drift and diverge as societies get more complex, in the way clothing styles do. Instead, we often see partial convergence under certain pressures:

  • better information and scientific understanding,
  • reduced superstition and arbitrary taboos,
  • wider perspective-taking and empathy,
  • more inclusive political institutions.

Under these conditions, we tend to converge – admittedly noisily and imperfectly – on:

  • the badness of arbitrary cruelty,
  • the wrongness of chattel slavery,
  • the idea that women, children, and eventually non-human animals are not mere property.

Of course, this convergence can be partly explained by evolutionary and game-theoretic forces. Realists don’t deny that. The real point is more modest:

Convergence by itself doesn’t prove realism. But the fact that it tends to occur under conditions we otherwise trust as truth-tracking (open debate, evidence, bias reduction, more perspectives) at least suggests we’re not simply inventing entirely new norms from scratch each time – which is roughly what we’d expect if there were moral facts being progressively approximated.

This gels well with an idea many EAs are already familiar with – Singer’s “expanding circle”: as our factual understanding and imaginative identification expand, we correct earlier, narrower moral views and call those corrections progress, not just “change of fashion”.

If you think that, say, abolishing slavery or granting basic rights to women and children was moral progress, not merely a shift from Fashion A to Fashion B, you’re already implicitly treating some standards as more correct than others.

Arbitrariness and distant or future people

Yet another place where EA already leans realist is in our attitudes to arbitrariness.

Most EAs are allergic to claims like “I care about children in London or San Francisco, but not in Melbourne, Australia or Nairobi, Kenya” – discounting people simply because of where they were born. We call that a bias, or a moral error to be corrected, not just an aesthetic choice. Similarly, many EAs endorse something like:

“Future people matter just as much as present people, other things equal.”

We don’t usually treat that as a personal fashion statement. We treat it as a correction of a parochial bias.

On a strict anti-realist view, though, there is no stance-independent sense in which bias is bad or parochialism is mistaken. Anti-realists can and do reconstruct talk of “error” and “bias” within our practice – in terms of what fits our deepest values under idealised reflection – but in the end there are only our current preferences and higher-order attitudes.

You can still be an EA anti-realist – you just have to say, in the end, “this whole structure of impartial concern is what I/we happen to endorse”. The realist’s claim is just that there’s also something those values can get right or wrong, over and above that internal story.

The realist says: there is something about sentience – about suffering and bliss, about agency, about the structure of conscious lives – that gives anyone reason to care, whether or not they currently do. If you or I were wired to care only about paperclips or only about our own tribe, then, given the actual nature of minds and value, we would be getting something wrong – much as we would be getting something wrong if we didn’t care about epistemic hygiene.

Biology vs impartial altruism

Finally, think about what actually motivates you as an EA. Your biology gives you strong incentives to hoard resources for yourself and your kin. Evolutionary psychology explains why we care about our kin (inclusive fitness) and coalition partners. It does not, by itself, explain why an EA would donate a significant fraction of their income to strangers, or worry about shrimp welfare, or spend years working on AI alignment for the sake of people who might not even exist yet.

Yet when you donate to avert malaria deaths or fund AI safety research, you’re allowing quieter, more abstract considerations to override those brute drives.

You most probably aren’t motivated by a mere kink for altruism. You’re treating some reasons – impartial ones, evidence-sensitive ones – as better than others. That behaviour is exactly what we’d expect if there are facts about what there is most reason to do, and if those facts can sometimes conflict with what your genes would prefer.

Anti-realists can try to understand this in terms of higher-order desires and identity: “These are the values I endorse on reflection.” The realist adds: and some ways of endorsing or revising those values are more correct than others, given the actual structure of the world and of the minded beings within it.

What this isn’t claiming

None of this requires thinking that:

  • we already know the full truth about morality,
  • current EA norms are close to the final word,
  • or that moral inquiry is easy.

Realism is compatible with deep moral uncertainty, with pluralism about what matters, and with serious pessimism about our ability to get things exactly right.

The claim is only that there is something there to be more or less wrong about – and EA practice already presupposes this more than its anti-realist slogans admit.

You can think of this as analogous to science: we don’t have a final theory of physics, but we don’t conclude that there is no fact of the matter about how electrons behave. We accept fallibilism without sliding into the belief that it’s all made up or arbitrary.

Moral realism, in the sense I care about6, is just taking that pattern seriously and asking:

  1. given what conscious creatures are like,
  2. given the structure of suffering, flourishing, agency, cooperation,
  3. what follows about what there is most reason to do?

You can still be highly uncertain. You can still reason under moral uncertainty, work with multiple theories, and update on evidence. Realism doesn’t mean we already know the full moral truth; it means there is something there to be more or less wrong about – and as EAs, that should matter to us at least as much as whether our empirical models are getting reality right.

Why this matters for EA

This isn’t a purely academic dispute. It bears on several core EA concerns:

  • Moral uncertainty: If there might be stance-independent moral facts, then we have reason to include realist theories in our hedge across competing moral views (a toy sketch of that kind of hedging follows this list) and to invest in improving our moral epistemology.
  • AI alignment: If powerful systems will be making high-stakes decisions affecting vast numbers of present and future beings, it matters whether there are facts about better and worse futures that go beyond “current human preferences plus bargaining”.7
  • Longtermism and moral progress: If you think the future can be morally better or worse in ways that aren’t just expressions of whatever future preferences happen to win, that’s already a realist-sounding thought.
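
For concreteness, here is a minimal sketch of one standard way of hedging across theories – maximising expected choiceworthiness. The theories, credences, options, and scores below are made-up placeholders, and the sketch quietly assumes the theories’ scores are comparable on a common scale (the hard problem of intertheoretic comparison is ignored here):

```python
# Toy illustration of "maximising expected choiceworthiness" under moral uncertainty.
# All names, credences, and scores are illustrative placeholders, and scores are
# assumed to be comparable across theories.

credences = {
    "realist_total_utilitarianism": 0.4,
    "realist_deontology": 0.2,
    "anti_realist_preference_view": 0.4,
}

# How each theory rates each option, on an assumed common 0-1 scale.
choiceworthiness = {
    "donate_to_effective_charity": {
        "realist_total_utilitarianism": 0.9,
        "realist_deontology": 0.6,
        "anti_realist_preference_view": 0.7,
    },
    "spend_on_luxuries": {
        "realist_total_utilitarianism": 0.1,
        "realist_deontology": 0.3,
        "anti_realist_preference_view": 0.5,
    },
}

def expected_choiceworthiness(option: str) -> float:
    """Credence-weighted average of how each theory rates the option."""
    return sum(credences[t] * choiceworthiness[option][t] for t in credences)

for option in choiceworthiness:
    print(f"{option}: {expected_choiceworthiness(option):.2f}")

best = max(choiceworthiness, key=expected_choiceworthiness)
print("Best option under uncertainty:", best)
```

Nothing here settles the metaethics; it just shows what “hedging across theories” looks like once you write it down, and why the credence you put on realist theories is one of the inputs.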

Practically, this suggests that part of the EA project is not only satisfying current preferences under constraints, but also trying – cautiously – to approximate moral truths under reflection. That is a different project from treating morality as “whatever our present preferences and social equilibria happen to be”.

Moral realism, in the naturalistic sense sketched here, simply keeps that question – given what conscious creatures are like, what follows about what there is most reason to do? – on the table. It doesn’t mean we already know all the moral truths; it means there is something there to know.

So if EA isn’t in the business of selling alien or arbitrary metaphysics, it’s worth noticing where EA practice already leans – and adjusting our metaethical credences accordingly.


Footnotes

  1. This piece isn’t an attempt to prove moral realism; its purpose is to show why I think that, if you’re an EA, a non-trivial credence in some form of moral realism is very hard to escape – and that anti-realism shouldn’t be treated as the default, obviously correct view.
  2. Supervenience: moral facts supervene on non-moral facts, meaning that there can be no moral difference between two situations without a corresponding non-moral difference. In other words, if two situations are identical in all non-moral respects (like physical and mental facts), they must also be identical in all moral respects. A change in a moral fact requires a change in some underlying non-moral fact, but not vice versa.
  3. See Derek Parfit’s “Agony Argument”, discussed in On What Matters, specifically the chapters where he argues that anti-realism collapses into nihilism. This is widely considered the heavy artillery for secular moral realism (as distinct from moral realism via divine command) because it relies on phenomenology (what it feels like to be conscious) rather than religion or abstract metaphysics. It appeals directly to the utilitarian intuition that suffering is real.
  4. Humean constructivism is a metaethical view on which moral principles are constructed out of agents’ contingent attitudes rather than discovered.
  5. See the “Companions in Guilt” argument, which challenges a skeptical or anti-realist position by arguing that if the target position is true, then a similarly unpalatable or absurd conclusion must also be true in a different domain.
    There might be a good disanalogy that defuses the companions-in-guilt argument, but we should actually articulate it, not just assume moral truth is “obviously” different. I’ll probably do a write-up about the Companions in Guilt argument, and its counterarguments, in the near future.
  6. I lean naturalist, and am partial to robust realism too – go figure.
  7. See post on AI Alignment to Moral Realism.
