Will Superintelligence solely be Motivated by Brute Self-Interest?

Is self-interest necessarily the default motivation for all agents? Will superintelligence inevitably become a narcissistic utility monster? No: self-interest is not necessarily the default motivation of all agents. While self-interest is a common and often foundational drive in many agents (biological or artificial), other motivations can and do arise, either naturally or through design, depending…

Taking AI Welfare Seriously with Jeff Sebo

In this interview, Jeff Sebo discusses the ethical implications of artificial intelligence and why we must take the possibility of AI sentience seriously now. He explores the challenges of measuring moral significance, the risks of dismissing AI systems as mere tools, and strategies for mitigating suffering in artificial systems. Drawing on themes from the paper ‘Taking AI…

Reverse Wireheading

Concerning sentient AI, we would like to avoid unnecessary suffering in artificial systems. It’s hard for biological systems like humans to turn off suffering without appropriate pharmacology, from aspirin to anesthetics. AI may be able to self-administer painkillers – a kind of wireheading in reverse. Similarly to wireheading in AI systems…

Peter Singer – Ethics, Uncertainty & Moral Progress

In this short interview, Peter Singer, a renowned philosopher and ethicist widely recognized for his thought-provoking ideas about universal ethics, discusses the value of life, moral progress, population ethics (aka population axiology), the far future, the uncertainties inherent in philosophical reasoning, moral realism (objective normative truths), and ‘alternative facts’. Points covered: 0:00 Intro 0:08 Moral progress…

Understanding the moral status of digital minds requires a mature understanding of sentience

Turning off all AI won’t happen in the real world. If we understand the signatures of sentience, we’ll be in a better position to know how to prevent or mitigate it – or encourage it – as circumstances warrant. AI, especially LLMs, ‘claiming sentience’ isn’t enough… We need deep operational understandings of it. See the article by 80k…

VRisk – Value Risk

This is not an attempt at repackaging known concepts; it’s an attempt to carve out an axis of concern, hopefully adding clarity and weight to concerns about value choice – what would otherwise sound like a vague fear of “wrong values.” What is Value Risk? Value Risk (VRisk) is the risk that a system—technological…

Indirect Normativity

Alignment Challenges Hardly a week goes by without some new breakthrough in AI; its capability is vaulting to new heights all the time, and it’s increasingly hard to deny how critical a role ethics has to play in shaping AI safety. Therefore it’s deeply concerning to see such significant disagreement among…