AI as a Moral Hypothesis Generator with David Enoch


AI may be useful for generating moral hypotheses: its pattern recognition could surface deep ethical patterns (meta-patterns) that are invisible to humans or hard to see, proposing new moral ideas or testing existing ones within ethical frameworks such as utilitarianism or deontology. In this role it acts as a powerful research assistant, challenging assumptions and exploring complex moral dilemmas. Humans remain the ultimate moral judges for the time being, but we should consider the possibility that AI could at some stage become more morally capable than us across the board.
Creating a highly capable AMA (artificial moral agent) may require designing AI with reasoning structures robust enough for ethical reasoning, training it on large datasets of moral scenarios, and applying the right kinds of algorithms to sort through, aggregate and/or prioritise different ethical theories in order to suggest novel or refined moral insights.

Suppose that what the system does is just raise these really interesting moral claims that we haven’t thought about. And you use it as an insightful kind of hypothesis generator. That would be great, right? I mean, as long as it doesn’t just generate infinitely many random ones. So just think, for instance: suppose that 100, 200, 300 years ago, such a system had come up with ideas in the vicinity of veganism.

All that seems like, okay, once the idea is out there, it’s worth serious consideration. But at that point, we don’t treat the AI as authoritative in any way. We don’t say, the system’s come up with it, so it must be true. But we’re saying something like, here’s a really interesting idea. We came across it via consulting our friendly neighbourhood AI system, but that’s not necessarily a feature of it. And then we still somehow subject it to the usual kind of normative discussions that we have in order to see whether it’s justified or not. So that seems to me like a wonderful role for AI – and not even a threatening one.

– David Enoch – from interview with Adam Ford (STF)
