Vernor Vinge on the Turing Test, Artificial Intelligence

On the coat-tails of the blockbuster film “The Imitation Game”, I saw quite a bit of buzz on the internet about Alan Turing and the Turing Test.  The title of the movie refers to the idea that the Turing Test may someday show that machines are ostensibly (at least in controlled circumstances) indistinguishable…

Will Superintelligence solely be Motivated by Brute Self-Interest?

Is self-interest by necessity the default motivation for all agents? Will superintelligence necessarily become a narcissistic utility monster? No, self-interest is not necessarily the default motivation of all agents. While self-interest is a common and often foundational drive in many agents (biological or artificial), other motivations can and do arise, either naturally or through design, depending…

Reverse Wireheading

Concerning sentient AI, we would like to avoid unnecessary suffering in artificial systems. It’s hard for biological systems like humans to turn off suffering without appropriate pharmacology, i.e. anything from aspirin to anesthetics. AI may be able to self-administer painkillers – a kind of wireheading in reverse. Similar to wireheading in AI systems…

Peter Singer – Ethics, Uncertainty & Moral Progress

In this short interview, Peter Singer, a renowned philosopher and ethicist widely recognized for his thought-provoking ideas about universal ethics, discusses the value of life, moral progress, population ethics (aka population axiology), the far future, the uncertainties inherent in philosophical reasoning, moral realism (objective normative truths) and ‘alternative facts’. Points covered: 0:00 Intro, 0:08 Moral progress…

Understanding the moral status of digital minds requires a mature understanding of sentience

Turning off all AI won’t happen in the real world. If we understand the signatures of sentience, we’ll be in a better position to know what to do to circumstantially prevent/mitigate it or encourage it. AI, especially LLMs, ‘claiming sentience’ isn’t enough… We need deep operational understandings of it. See the article by 80k…

AI Alignment to Higher Values, not Human Values

Homo sapiens means ‘wise man’ – given the current human-caused precarious state of affairs, this self-description seems a bit of a reach. Human values in aggregate aren’t coherent, and of those that are, not all are really that great (especially if our revealed preferences hint at what they are). Connor Leahy, who…

On ASI Indifference

What if superintelligent AI didn’t care about humans? Does intelligence necessitate care? Probably not. While I find it hard to imagine care in the absence of cognitive capacity, I can’t see anything that guarantees all cognitively apt agents will harbour care. How, if at all possible, can we from our current position assess the likelihood that…

VRisk – Value Risk

Value Risk (vrisk): the risk of bad or sub-optimal values taking hold. An obviously bad vrisk is one where existential risk (xrisk) is high. Existential here could mean the extinction of life/species/sentience etc., or the universe becoming uninhabitable to the degree that there is no chance of new life emerging to experience anything. Extinction means local annihilation…

Indirect Normativity

Alignment Challenges: Given the critical role of ethics in AI safety, it’s deeply concerning to see such significant disagreement among experts who have rigorously studied ethics. The divergence in moral and meta-ethical perspectives among these experts poses a serious question: How can we effectively align AI if the very foundations of ethical understanding are not…