Can AI Be Moral? | Wendell Wallach on Moral Machines, AI Ethics & Governance

Can AI really be moral – or does it just produce moral-sounding answers? Wendell Wallach, co-author of Moral Machines, joins me to discuss machine ethics, moral motivation, AI governance, and why controlling AI may not be enough.

Wendell Wallach is one of the foundational voices in AI ethics and machine ethics, best known as co-author of Moral Machines: Teaching Robots Right from Wrong¹. In this conversation we explore whether AI can genuinely be moral, or whether today’s systems merely sound moral. We discuss comparative moral Turing tests, the difference between control and motivation, the risk of moral outsourcing, the “banality of evil” in the age of generative AI, and what serious AI governance would need to look like if it is to be more than theatre. We also examine near-term control measures, international governance, AI safety in China, and the deeper question of whether moral motivation can be engineered rather than merely simulated.

Wallach has also written A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control, helped lead Carnegie Council’s AI & Equality Initiative, discussed international AI governance frameworks, and recently hosted a Carnegie conversation on AI safety in China. His site also lists a forthcoming book, Cloud Illusions: Moral Intelligence and Self-Understanding in the Digital Age.

If you’re interested in AI ethics, moral machines, AI governance, AGI risk, moral realism, machine motivation, or whether advanced AI could ever become more moral than humans, this interview is for you.


Also see my interview with Colin Allen, co-author of Moral Machines.

  1. Moral Machines: Teaching Robots Right from Wrong – book ↩︎
