Will Superintelligence solely be Motivated by Brute Self-Interest?

Is self-interest by necessity the default motivation for all agents? Will superintelligence necessarily become a narcissistic utility monster? No: self-interest is not necessarily the default motivation of all agents. While self-interest is a common and often foundational drive in many agents, biological or artificial, other motivations can and do arise, either naturally or through design, depending…

Reverse Wireheading

Concerning sentient AI, we would like to avoid unnecessary suffering in artificial systems. It’s hard for biological systems like humans to turn off suffering without appropriate pharmacology, from aspirin to anesthetics. AI may be able to self-administer painkillers – a kind of wireheading in reverse. Similarly to wireheading in AI systems…

Peter Singer – Ethics, Uncertainty & Moral Progress

In this short interview, Peter Singer, a renowned philosopher and ethicist widely recognized for his thought-provoking ideas about universal ethics, discusses the value of life, moral progress, population ethics (aka population axiology), the far future, the uncertainties inherent in philosophical reasoning, moral realism (objective normative truths) and ‘alternative facts’. Points covered: 0:00 Intro · 0:08 Moral progress…

Understanding the moral status of digital minds requires a mature understanding of sentience

Turning off all AI won’t happen in the real world. If we understand the signatures of sentience, we’ll be in a better position to know how to prevent or mitigate it – or encourage it – as circumstances warrant. AI, especially LLMs, ‘claiming sentience’ isn’t enough… We need deep operational understandings of it. See the article by 80k…

AI Alignment to Higher Values, not Human Values

Homo sapiens means ‘wise man’ – given the current human-caused precarious state of affairs, this self-description seems a bit of a reach. Human values in aggregate aren’t coherent, and of those that are, not all are really that great (especially if our revealed preferences hint at what they are). Connor Leahy, who…

VRisk – Value Risk

Value Risk (vrisk): the risk of bad or sub-optimal values taking hold. An obviously bad vrisk is one where existential risk (xrisk) is high. Existential here could mean the extinction of life/species/sentience etc., or the universe becoming uninhabitable to the degree that there is no chance of new life emerging to experience anything. Extinction means local annihilation…

Indirect Normativity

Alignment Challenges: Given the critical role of ethics in AI safety, it’s deeply concerning to see such significant disagreement among experts who have rigorously studied ethics. The divergence in moral and meta-ethical perspectives among these experts poses a serious question: how can we effectively align AI if the very foundations of ethical understanding are not…

AI: Unlocking the Post-Human – David Pearce & James Hughes

A discussion between David Pearce and James Hughes, moderated by Adam Ford, exploring the ethical and philosophical landscapes of AI, human enhancement, and the future of emerging technologies affording higher states of well-being. Pearce and Hughes discuss the implications of transforming human experience by leveraging biotech and cybernetics, as well as requirements for AI to…