Will Superintelligence solely be Motivated by Brute Self-Interest?

Is self-interest necessarily the default motivation of all agents? Will superintelligence become a narcissistic utility monster? No: self-interest is not necessarily the default motivation of all agents. While self-interest is a common and often foundational drive in many agents, biological or artificial, other motivations can and do arise, either naturally or through design, depending…

Reverse Wireheading

Concerning sentient AI, we would like to avoid unnecessary suffering in artificial systems. It’s hard for biological systems like humans to turn off suffering without appropriate pharmacology, from aspirin to anesthetics. AI may be able to self-administer painkillers – a kind of wireheading in reverse. Similarly to wireheading in AI systems…

Peter Singer – Ethics, Uncertainty & Moral Progress

In this short interview, Peter Singer, a renowned philosopher and ethicist widely recognized for his thought-provoking ideas about universal ethics, discusses the value of life, moral progress, population ethics (aka population axiology), the far future, the uncertainties inherent in philosophical reasoning, moral realism (objective normative truths) and ‘alternative facts’. Points covered: 0:00 Intro, 0:08 Moral progress…

VRisk – Value Risk

Value Risk (vrisk): the risk of bad or sub-optimal values taking hold. An obviously bad vrisk is one where existential risk (xrisk) is high. ‘Existential’ here could mean the extinction of life, species, or sentience, or the universe becoming uninhabitable to the degree that there is no chance of new life emerging to experience anything. Extinction means local annihilation…

Indirect Normativity

Alignment challenges: given the critical role of ethics in AI safety, it’s deeply concerning to see such significant disagreement among experts who have rigorously studied ethics. This divergence in moral and meta-ethical perspectives poses a serious question: how can we effectively align AI if the very foundations of ethical understanding are not…

AI: Unlocking the Post-Human – David Pearce & James Hughes

A discussion between David Pearce and James Hughes, moderated by Adam Ford, exploring the ethical and philosophical landscapes of AI, human enhancement, and the future of emerging technologies affording higher states of well-being. Pearce and Hughes discuss the implications of transforming human experience through biotech and cybernetics, as well as requirements for AI to…

Can philosophical zombies do philosophy?

Can philosophical zombies be philosophical? Preamble: there are various takes on what a p-zombie is, most of which were attempts to dethrone physicalism – this article isn’t one of those. This kind of p-zombie is different from what David Chalmers describes in The Conscious Mind. In Chalmers’ view, p-zombies are physically identical to humans, even…

J. Dmitri Gallow – AI Interpretability, Orthogonality, Instrumental Convergence & Divergence

J. Dmitri Gallow discusses the principles of instrumental convergence and divergence in AI. Two concepts frame the discussion: the orthogonality thesis, which states that intelligence and desire are independent, and the instrumental convergence thesis, which suggests that intelligent beings will tend to have similar instrumental desires. Gallow’s argument focuses on instrumental divergence, which emerges from the complexity and unpredictability of an AI’s actions given its desires.