AI Welfare – Future Day talk with Jeff Sebo

“I argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future. That means that the prospect of AI welfare and moral patienthood, i.e. of AI systems with their own interests and moral significance, is no longer an issue only for sci-fi or the distant future. It is an issue for the near future, and AI companies have a responsibility to start taking it seriously. I also recommend three early steps that AI companies can take: They can (1) acknowledge that AI welfare is an important and difficult issue (and ensure that language model outputs do the same), (2) start assessing AI systems for evidence of consciousness and robust agency, and (3) prepare policies and procedures for treating AI systems with an appropriate level of moral concern. To be clear, my argument is not that AI systems definitely are, or will be, conscious, robustly agentic, or otherwise morally significant. Instead, my argument is that there is substantial uncertainty about these possibilities, and so we need to improve our understanding of AI welfare and our ability to make wise decisions about this issue. Otherwise there is a significant risk that we will mishandle decisions about AI welfare, mistakenly harming AI systems that matter morally and/or mistakenly caring for AI systems that do not.”
(This talk is based on Taking AI Welfare Seriously, a 2024 multi-author report that you can find here.)

Bio

Jeffrey Raymond Sebo is an American philosopher and animal rights activist. He is clinical associate professor of environmental studies, director of the animal studies MA program, and affiliated professor of bioethics, medical ethics, and philosophy at New York University. Sebo specializes in animal ethics, bioethics, and environmental ethics; agency, well-being, and moral status; moral, legal, and political philosophy; and the ethics of activism, advocacy, and philanthropy. In 2022, he published his first sole-authored book, Saving Animals, Saving Ourselves, followed by The Moral Circle: Who Matters, What Matters, and Why in 2025.

Previous Interview

We also did an interview with Jeff Sebo on Taking AI Welfare Seriously.

Post on Taking AI Welfare Seriously – Interview
