Taking Morality Seriously in the Age of AI – Interview with David Enoch
In an era where artificial intelligence systems are increasingly tasked with decisions that carry ethical weight – from medical triage to autonomous weapons – the question of whether machines can authentically engage with morality has never been more pressing. To explore this issue, we turn to philosopher David Enoch1, a leading advocate of moral realism2 – the view that moral facts are objective, independent of human beliefs or preferences. In this interview, Enoch argues that if we are to take morality seriously, we must treat it as something robustly real – something that could, in principle, guide or constrain even the behaviour of intelligent machines.
We discussed his book “Taking Morality Seriously” and AI – whether AI should take a metaethical position, and, if AI were to become a moral realist, what the comparative advantages might be over other metaethical positions.
What is Moral Realism?
David Enoch defends moral realism – the view that moral facts exist independently of human opinions or social conventions. In contrast to perspectives that see morality as a product of evolution or culture, he argues that moral claims can be objectively true or false.
Why AI Raises New Questions
As artificial intelligence systems increasingly take on decision-making roles – some with ethical stakes – it becomes urgent to ask whether these systems can grasp moral truths. David Enoch and I explore what moral realism can offer to AI alignment.
Enoch’s position prompts a vital question: should we design machines to recognise and act on human values or preferences, or on objective values?
Robust Realism
David Enoch describes Robust Realism in his book:
I believe that there are irreducibly normative truths and facts, facts such that we should care about our future well-being, that we should not humiliate other people, that we should not reason and form beliefs in ways we know to be unreliable… They are independent of us, our desires and our (or anyone else’s) will. And our thinking and talking about them amounts not just to an expression of any practical attitudes, but to a representation of these normative truths and facts. These normative truths are truths that, when successful in our normative inquiries, we discover rather than create or construct. They are, in other words, just as respectable as empirical or mathematical truths (at least, that is, according to scientific and mathematical realists).
In the interview we cover:
- Motivation for adopting robust moral realism today
- The accessibility of robustly objective moral facts to approximately ideal rational agents (like AI), and what would make these agents closer to ideal
- The challenge that robust realism is too metaphysically ‘heavy’ compared to more lightweight metaethical alternatives like constructivism or quasi-realism
- Robust moral realism’s claim that moral facts exist but aren’t reducible to non-moral or natural facts, and the critique that this framework lacks empirical grounding
Contrasting Metaethical Positions
- Weighing up the advantages of robust realism over other realist positions like naturalist realism, and non-realist alternatives like expressivism or error theory
- Dealing with disagreement and whether robust realism accommodates persistent disagreement without collapsing into scepticism or relativism
- If robust realism turns out to be false or inaccurate, what might the next best metaethical view be and why
Metaethics and AI Alignment
- Whether a commitment to moral realism strengthens or weakens the alignment problem
- Value pluralism – how metaethical uncertainty should play a role in AI alignment strategies, and whether a commitment to robust realism can coexist with a prudential openness to other views in practice
- Whether AI designed without any metaethical commitments is potentially dangerous, and whether AI systems should be built with an explicit metaethical stance
- Whether a sufficiently sophisticated AI would indirectly come to the right metaethical stance with or without guidance – i.e. via indirect normativity3
AI and Deeper Moral Inquiry
- What would it take for AI to genuinely contribute to moral inquiry rather than just simulate it?
- Using AI to stress-test our moral theories
- Domains within metaethics where AI could offer something new
- Whether the development of idealised deliberative models (e.g. “ideal advisor” theories) could be enhanced by AI – helping us converge on what a better, wiser agent would value
- Whether robust moral realism may eventually require a kind of empirically assisted inquiry
- The argument from queerness applied to AI – whether AI will find morality queer
- Whether AI requires a phenomenology of deliberation
- Whether the supposed metaphysical and epistemological weirdness of moral facts is exaggerated and parallels other domains we accept without hesitation4, like modality or logical necessity. First, would AI accept other domains without hesitation? If so, would it (with a perhaps alien mind) accept moral realism without hesitation?
- Analogy to mathematical realism – whether AI would think mathematics is real, accepting the existence of abstract mathematical truths despite their non-empirical nature, and, if so, whether it would coherently accept moral truths as objective (but not necessarily natural) features of reality
- Ways to distinguish between an AI that genuinely approximates ideal moral reasoning and one that merely mimics it
- The role of ideal agents or reflective coherence in moral discovery, and whether, if moral facts are objective and stance-independent, a sufficiently rational AI could, in principle, discover them
- What methods or reasoning processes would reliably track these moral truths, and whether an AI would need to simulate ideal agents or rely on some other kind of reflective equilibrium
- Whether these ‘epistemic tools’ actually track moral reality; whether reflective methods reliably define moral truth, or just help us approximate it; whether they could be (mis)used merely to justify our existing beliefs; and whether they are simply tools for achieving internal consistency
- Criteria for success – what makes a method truth-tracking in the moral domain?
- Whether the same standards for truth-tracking apply to machine reasoning – and if so, whether this implies anything about AI discovering moral truths
- If humans themselves are imperfect moral agents, how should we approach the problem of aligning both human values and AI reasoning with robust moral truths?
- If AI could become a better moral reasoner than most humans, should it be guiding us rather than the other way around?
Biography
David Enoch works primarily in moral, political, and legal philosophy.
David studied law and philosophy at Tel Aviv University, then clerked for Justice Beinisch at the Israeli Supreme Court. He completed a PhD in philosophy at NYU (2003), and has been a faculty member at the Hebrew University in Jerusalem ever since, on a joint appointment in philosophy and law. He started at Oxford as the Professor of the Philosophy of Law in 2023.
David has published work in metaethics (where he defends a robust, non-naturalistic kind of moral realism), in the philosophy of law (where he criticises some versions of “general jurisprudence”, discusses moral and legal luck, and analyses the role of statistical evidence), in political philosophy (where he criticises Rawlsian, public-reason liberalism, and discusses false consciousness and nudging), in ethics (where he discusses the status of hypothetical consent, and rejects the existence of moral luck), and more.
Video Chapters:
00:00 Intro
01:26 Philosophical concept of AI
02:56 Align AI to human values and/or objective values?
05:49 What is Robust Moral Realism?
10:03 What is objectivity?
16:00 Circularity – is your argument for moral objectivity based on moral claims?
22:24 How to Get AI to Care? On motivational internalism, de re vs de dicto
32:42 Caring about how to care
41:44 Robust or Minimal Moral Realism for AI Alignment?
46:25 An AI’s initial location in the landscape of value may influence it to gravitate towards particular ethical outcomes, such as egoism or altruism
48:23 Where is impartiality required? (i.e. in preference vs moral based conflicts)
52:47 What if AI can’t tell the difference between moral convictions and preferences?
56:03 Explanatory Indispensability (Quine/Putnam)
59:56 Are normative facts real in virtue of their indispensability for explaining moral practices/motivation?
1:03:56 Deliberative Indispensability
1:07:48 An alien-like AI which does not deliberate may be dangerous – more on Deliberative Indispensability
1:11:35 Are epistemic commitments deeply felt but epistemically baseless? – Sharon Street
1:20:43 If AI converged independently on the same morals as we did, would that support moral realism?
1:24:38 AI Interpretability
1:27:10 Irreducibly normative facts – how do we know when normative facts are irreducible
1:32:43 Reasons reductionism
1:36:55 Robust Metaethical Realism – if you accept Robust Realism, why not go the extra mile?
1:42:31 More on Impartiality
1:47:26 Metaethics & AI Alignment – normative bedrock needed?
1:49:57 Many AI alignment researchers identify as anti-realist
1:51:30 Encoding AI with Value pluralism
1:56:43 Are (current/near-term) AI systems useful for stress testing our moral theories?
Footnotes
1. David Enoch’s bio at Oxford Faculty of Law – https://www.law.ox.ac.uk/people/david-enoch
2. See previous post on AI Alignment to Moral Realism, and on the possibility of AI being More Moral than Us
3. See previous post on Indirect Normativity
4. Shafer-Landau challenges the Argument from Queerness in this manner