Taking Morality Seriously – Interview with David Enoch

I’ll be interviewing moral realist David Enoch soon – it will be released sometime after May 20th.
Do you have any points you’d like to hear discussed?

Note: we will be discussing his book “Taking Morality Seriously” and AI – in particular, whether AI should take a metaethical position, and, if that position is moral realism, what its comparative advantage over other metaethical positions might be.

Questions

Robust Moral Realism

  • What do you see as the strongest motivation for adopting robust moral realism today, particularly in light of ongoing debates in moral psychology and evolutionary ethics?
  • Your work defends the idea that moral facts are robustly objective. Do you think these facts are accessible to idealised agents in principle, and if so, what kind of cognitive features would they require?
  • How do you respond to the challenge that robust realism is too metaphysically ‘heavy’ to be epistemically plausible compared to more lightweight metaethical alternatives like constructivism or quasi-realism?
  • Robust moral realism posits that moral facts exist, but aren’t reducible to non-moral or natural facts.  How do you respond to critiques that this framework lacks empirical grounding? 
  • Follow-up – How might someone with congenital insensitivity to pain with anhidrosis (CIPA) know that pain is bad? And how might different intelligent agents – like future AI or aliens – converge on the same moral truths without some grounding in the real world?

Contrasting Metaethical Positions

  • How should we weigh up the advantages of robust realism over other realist positions like naturalist realism, and non-realist alternatives like expressivism or error theory, especially in terms of normative guidance and practical deliberation?
  • Do you think disagreement among moral philosophers undermines moral realism? Or can robust realism accommodate persistent disagreement without collapsing into scepticism or relativism?
  • If robust realism turns out to be false, what would be the next best metaethical view in your opinion—and why?

AI

Metaethics and AI Alignment

  • Value pluralism: Some argue that metaethical uncertainty should play a role in AI alignment strategies. Do you think a commitment to robust realism can coexist with a prudential openness to other views in practice?
  • Could an AI that is designed without any metaethical commitments be dangerous or misleading in moral contexts? Should AI systems be built with an explicit metaethical stance? How likely is it that AI would indirectly arrive at the right metaethical stance, with or without guidance (indirect normativity)?
  • Do you think a commitment to moral realism strengthens or weakens the alignment problem, particularly in specifying the goals or values we want AI to pursue?

AI and Deeper Moral Inquiry

  • What would it take, in your view, for AI to genuinely contribute to moral inquiry rather than just simulate it?
  • Could AI systems be used to stress-test our moral theories or uncover hidden inconsistencies—akin to philosophical ‘assistants’ that can help refine our normative thinking?
  • Are there domains within metaethics where you suspect AI could, even now, offer something new—perhaps by formalising large argument spaces or mapping dialectical terrain?
  • Do you think the development of idealised deliberative models (e.g. “ideal advisor” theories) could be enhanced by AI—perhaps helping us converge on what a better, wiser agent would value?
  • Might robust moral realism eventually require a kind of empirically assisted inquiry—using tools like AI to uncover deeper moral truths through modeling or data-informed moral exploration?

AI and the Argument from Queerness

  • Argument from Queerness – will AI find morality queer?
    • We humans often experience moral truths as self-evident or compelling; denying their reality because they are ‘strange’ ignores the phenomenology of moral deliberation – but what of AI?
    • Shafer-Landau challenges the Argument from Queerness by arguing that the supposed metaphysical and epistemological weirdness of moral facts is exaggerated and parallels other domains we accept without hesitation, like modality or logical necessity. First, would AI accept those other domains without hesitation? If so, would it (with a perhaps alien mind) accept moral realism without hesitation?
    • Analogy to mathematical realism – would AI think mathematics is real, accepting the existence of abstract mathematical truths despite their non-empirical nature? If so, would it coherently accept moral truths as objective (but not necessarily natural) features of reality?

More on AI

  • Do you think there’s a way to distinguish between an AI that genuinely approximates ideal moral reasoning and one that merely mimics it?
  • (On the role of ideal agents or reflective coherence in moral discovery) Suppose we lean into the idea that if moral facts are objective and stance-independent, then a sufficiently rational AI could, in principle, discover them. What methods or reasoning processes would reliably track these truths – would it need to simulate ideal agents, or could it rely on some other kind of reflective equilibrium?
    • Follow-up for clarity: are these just epistemic tools, or do they actually track moral reality? (Do these reflective methods reliably define moral truth, or just help us get reliably closer to it? Could they be (mis)used merely to justify our existing beliefs, or serve as mere tools for achieving internal consistency?)
  • Criteria for success: What do you think makes a method truth-tracking in the moral domain?
    • Refinement: Would those same standards for truth-tracking apply to machine reasoning? And if so, what would that imply about AI discovering moral truths?
  • If humans themselves are imperfect moral agents, how should we approach the problem of aligning both human values and AI reasoning with robust moral truths?
    • Provocative: If AI could become a better moral reasoner than most humans, should it be guiding us rather than the other way around?

Biography

From David Enoch’s Oxford bio:

David Enoch does primarily moral, political, and legal philosophy. 

David studied law and philosophy at Tel Aviv University, then clerked for Justice Beinisch at the Israeli Supreme Court. He pursued a PhD in philosophy at NYU (2003), and has been a faculty member at the Hebrew University in Jerusalem ever since, on a joint appointment in philosophy and law. He started at Oxford as the Professor of the Philosophy of Law in 2023. 

David has published work in metaethics (where he defends a robust, non-naturalistic kind of moral realism), in the philosophy of law (where he criticizes some versions of “general jurisprudence”, discusses moral and legal luck, and analyses the role of statistical evidence), in political philosophy (where he criticizes Rawlsian, public-reason liberalism, and discusses false consciousness and nudging), in ethics (where he discusses the status of hypothetical consent, and rejects the existence of moral luck), and more.
