AI as a Moral Hypothesis Generator with David Enoch


AI may help generate moral hypotheses by using pattern recognition to find deep ethical patterns (meta-patterns) that are invisible to humans or hard to see. It could propose new moral ideas, or test existing ones within a variety of ethical frameworks such as utilitarianism or deontology, acting as a powerful research assistant that challenges assumptions and explores complex moral dilemmas. Even though humans remain the ultimate moral judges for the time being, we should consider the possibility that AI could at some stage become more morally capable than us across the board.
Creating a highly capable AMA (artificial moral agent) may require designing AI with reasoning structures robust enough for ethical reasoning, training on large datasets of moral scenarios, and applying the right kinds of algorithms to sort through, aggregate and/or prioritise different ethical theories in order to suggest novel or refined moral insights.
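To make the aggregation idea concrete, here is a minimal sketch of weighting verdicts from several ethical theories into a single recommendation. The theory scorers, weights, and action names are invented placeholders for illustration, not a proposed ethical algorithm:

```python
# Illustrative sketch: aggregating scores from several ethical theories.
# The scorers and weights below are invented stand-ins, not real models.

def utilitarian(action):
    # Stubbed net-welfare score in [-1, 1].
    return {"tell_truth": 0.2, "tell_white_lie": 0.6}[action]

def deontological(action):
    # Stubbed rule-compliance score; lying violates a duty.
    return {"tell_truth": 1.0, "tell_white_lie": -0.5}[action]

# Weights over theories; how to set these is itself a contested question.
THEORIES = {utilitarian: 0.5, deontological: 0.5}

def aggregate(action):
    """Weighted sum of theory scores for one candidate action."""
    return sum(w * theory(action) for theory, w in THEORIES.items())

best = max(["tell_truth", "tell_white_lie"], key=aggregate)
```

The interesting design question is hidden in `THEORIES`: aggregation only defers the hard problem of how much weight each ethical framework deserves.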

Suppose that what the system does is just raise these really interesting moral claims that we haven’t thought about. And you use it as an insightful kind of hypothesis generator. That would be great, right? I mean, as long as it doesn’t just generate infinitely many random ones. So just think, for instance, suppose that 100, 200, 300 years ago, such a system would come up with ideas in the vicinity of veganism.

All that seems like, okay, once the idea is out there, it’s worth serious consideration. But at that point, we don’t treat the AI as authoritative in any way. We don’t say, the system’s come up with it, so it must be true. But we’re saying something like, here’s a really interesting idea. We came across it via consulting our friendly neighbourhood AI system, but that’s not necessarily a feature of it. And then we still somehow subject it to the usual kind of normative discussions that we have in order to see whether it’s justified or not. So that seems to me like a wonderful role for AI – and not even a threatening one.

– David Enoch – from interview with Adam Ford (STF)

See the full interview here – and its associated blog post.

Leveraging emergent symbolic capabilities, AI can propose novel ethical claims for human scrutiny, acting as a research assistant in ethical exploration.

Discovery of “Moral Meta-Patterns”

Advanced AI systems can analyse vast datasets of ethical contexts (historical, cultural, and legal) to identify meta-patterns that are often invisible to the human mind due to cognitive or cultural bias.

AI may be able to help with “ethical domain transfer” – transferring a moral principle from one domain (e.g., environmental ethics) to a seemingly unrelated one (e.g., digital data sovereignty), proposing a unified meta-principle that connects them.
Pattern induction could also be useful here: using, for example, symbol abstraction heads, the model maps complex human scenarios onto abstract variables, allowing it to perform logical operations that reveal deep-seated structural similarities across different moral frameworks.

Generators and Verifiers

Current research focuses on hybrid architectures in which the “creative” and “logical” parts of the AI work in tandem. A deep neural network generates hypotheses, suggesting novel moral claims, while a separate symbolic module performs verification – evaluating each claim against established moral theories or logical constraints (e.g., deontological rules or utilitarian calculus) to ensure it is internally coherent and not merely random noise.
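The generate-and-verify loop can be sketched in a few lines. The “generator” below is a stub standing in for a neural model, and the “verifier” is a symbolic filter applying a toy deontic constraint; the claim encodings and rules are illustrative assumptions only:

```python
# Hypothetical generate-and-verify loop: a stub generator proposes
# candidate moral claims; a symbolic verifier filters out incoherent ones.

CANDIDATES = [
    {"claim": "lying is permissible when it prevents serious harm",
     "permits": {"lying"}, "prevents_harm": True},
    {"claim": "lying is always permissible",
     "permits": {"lying"}, "prevents_harm": False},
]

DEONTIC_RULES = [
    # Toy constraint: a permission to lie must be conditioned on harm prevention.
    lambda c: "lying" not in c["permits"] or c["prevents_harm"],
]

def generate():
    """Stub for the neural generator: yields candidate claims."""
    yield from CANDIDATES

def verify(claim):
    """Symbolic check: the claim must satisfy every deontic constraint."""
    return all(rule(claim) for rule in DEONTIC_RULES)

surviving = [c["claim"] for c in generate() if verify(c)]
```

Only the conditioned claim survives the filter, which is the point of the architecture: the generator supplies novelty, the verifier supplies the guarantee that what reaches human reviewers is at least internally coherent.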
