Understanding the moral status of digital minds requires a mature understanding of sentience

Turning off all AI won’t happen in the real world. If we understand the signatures of sentience, we’ll be in a better position to know how to circumstantially prevent/mitigate it or encourage it. AI, especially LLMs, ‘claiming sentience’ isn’t enough… We need deep operational understandings of it. See the article by 80k…

AI Alignment to Higher Values, not Human Values

Homo sapiens means ‘wise man’ – given the current human-caused precarious state of affairs, this self-description seems a bit of a reach. Human values in aggregate aren’t coherent, and of those that are, not all are really that great (especially if our revealed preferences hint at what they are). Connor Leahy, who…

VRisk – Value Risk

Value Risk (vrisk): the risk of bad or sub-optimal values taking hold. An obviously bad vrisk is one where existential risk (xrisk) is high. Existential here could mean the extinction of life/species/sentience etc., or the universe becoming uninhabitable to the degree that there is no chance of new life emerging to experience anything. Extinction means local annihilation…

Indirect Normativity

Alignment Challenges: Given the critical role of ethics in AI safety, it’s deeply concerning to see such significant disagreement among experts who have rigorously studied ethics. The divergence in moral and meta-ethical perspectives among these experts poses a serious question: how can we effectively align AI if the very foundations of ethical understanding are not…

AI: Unlocking the Post-Human – David Pearce & James Hughes

A discussion between David Pearce and James Hughes, moderated by Adam Ford, exploring the ethical and philosophical landscapes of AI, human enhancement, and the future of emerging technologies affording higher states of well-being. Pearce and Hughes discuss the implications of transforming human experience by leveraging biotech and cybernetics, as well as requirements for AI to…

Can philosophical zombies do philosophy?

Can philosophical zombies be philosophical? Preamble There are various takes on what a p-zombie is, most of which were attempts to dethrone physicalism – this article isn’t one of those. This kind of p-zombie is different from what David Chalmers describes in The Conscious Mind. In Chalmers’ view, p-zombies are physiologically identical to humans, even…

J. Dmitri Gallow – AI Interpretability, Orthogonality, Instrumental Convergence & Divergence

J. Dmitri Gallow discusses the principles of instrumental convergence and divergence in AI. The orthogonality thesis, which states that intelligence and desires are independent, and the instrumental convergence thesis, which suggests that intelligent beings will have similar instrumental desires, are critical concepts. Gallow’s argument focuses on instrumental divergence, which emerges from the complexity and unpredictability of an AI’s actions based on its desires.

James Hughes on the Economic Impacts of Artificial General Intelligence

The following is an enlightening session with James Hughes, Associate Provost at the University of Massachusetts Boston and Director of the Institute for Ethics and Emerging Technologies (IEET), in which we delve into the intricate world of Artificial General Intelligence (AGI) and its profound economic implications. In this interview, Hughes, a renowned expert in the field, sheds…