Danica Dillion – AI’s Moral Compass: Better Than Expected—Now What?
Danica Dillion is a welcome contribution to Future Day 2026! She is a Postdoctoral Researcher working with Dr. Mirta Galesic at the Complexity Science Hub and Dr. Kurt Gray in the Deepest Beliefs Lab at The Ohio State University. Previously, she was an NSF Graduate Research Fellow at UNC-Chapel Hill.
Synopsis
People have long doubted that machines could ever model the complexities of morality. Yet today, people turn to AI for advice, therapy, and help with high-stakes dilemmas, including medical and legal decisions. So just how moral are these “moral machines”? Our research suggests that LLMs show surprisingly strong moral modeling abilities, while also revealing key areas for improvement. We find that LLMs can closely track people’s moral judgments across a wide range of scenarios and give advice and moral justifications perceived as more moral and trustworthy than those of both laypeople and a renowned ethicist. We also show that a simple “bottleneck” intervention—prompting the model to evaluate psychologically grounded features of a situation before issuing a judgment—often improves moral alignment while offering a practical mechanism for steerability. What does it mean for society if AI can mimic moral reasoning and deliver persuasive moral guidance? This capability could help with some of the complex challenges we face today, but there are also critical failure modes we need to address. Key next steps are to build systems that better reflect moral pluralism and global perspectives, to develop a stronger “under-the-hood” understanding of how models arrive at their answers, and to strengthen collaboration between the two sides of alignment—those who study people and those who build machines.
Research
Danica studies changing belief systems: why we see the world the way we do, what happens when conflicting views come head-to-head, and how our collective understandings of the world change in tandem with new ways of living.
Danica’s blog: https://danicadillion.com/
