AI’s Moral Compass: When Models Rival Human Ethicists – Danica Dillion

People used to say ‘morality is too complex for machines’. Then we started asking machines for advice about relationships, therapy, medical decisions, and legal dilemmas – because apparently we enjoy living dangerously. To help us think clearly about what’s happening, I interviewed Danica Dillion.

Danica looks at a surprising finding from Moral Turing Test–style research: in one study, participants rated ethical advice from GPT-4o as slightly more moral and trustworthy than advice from a well-known NYT column ‘The Ethicist’.

Danica Dillion is a postdoctoral researcher working with Dr Mirta Galesic at the Complexity Science Hub and Dr Kurt Gray at the Deepest Beliefs Lab at The Ohio State University, and previously an NSF Graduate Research Fellow at UNC Chapel Hill.

Links

Danica Dillion’s blog: https://danicadillion.com/
Deepest Beliefs Lab: https://www.deepestbeliefslab.com/
NYT column ‘The Ethicist’: https://www.nytimes.com/column/the-ethicist
Mind & Machine Alignment Summit at Ohio State University: https://u.osu.edu/mindmachinealignment/

Video chapters

00:00 Intro
01:05 Moral Turing Test Study Results
04:53 Human trust in AI – ceding moral authority?
08:28 If the MTT study were done with the more powerful models of today, would the results be different?
10:45 Growing suspicion of AI, frontier labs responding to suspicion in how they tune the LLMs
12:50 Detectability
19:42 Is AI genuinely reasoning? Emergent symbolic reasoning
31:41 Are moral systems approximating some underlying moral structure?
34:20 Can AI discover ethics?
35:48 Can AI help us make progress in ethics?
40:39 Indirect normativity – choosing what to choose, indirect value discovery procedure
43:00 Bias in the training data (large samples of the internet are in English)
45:52 Would people trust responses less if they knew they came from AI?
48:17 Would cybernetic partnerships between human & AI be trusted more than AI or humans alone?
1:07:49 Who gets the most say in the values that guide AI?
1:11:08 Overconfidence in AI & epistemic humility
1:12:45 Maximisation behaviour & insatiability
1:13:23 Can AI be more moral than humans?
1:27:17 Use of AI to reduce prejudice and partisan animosity?
1:29:43 Mind & Machine Alignment Summit at Ohio State University – https://u.osu.edu/mindmachinealignment/

Also see the talk Danica gave at Future Day 2026 – ‘AI’s Moral Compass: Better Than Expected—Now What?’
