Early Philosophical Groundwork for Indirect Normativity

Note: this is by no means exhaustive; its focus is on Western philosophy. I'll expand and categorise more later… The earliest precursors to indirect normativity can be traced back to early philosophical discussions on how to ground moral decision-making in processes or frameworks rather than specific, static directives. While Nick Bostrom's work on indirect normativity…

Understanding V-Risk: Navigating the Complex Landscape of Value in AI

In this post I explore what I broadly define as V-Risk (Value Risk), which I think is a critical and underrepresented concept in general, but especially for the alignment of artificial intelligence. There are two main areas of AI alignment: capability control and motivation selection. Values are what motivate approaches to fulfilling…

Value Space

The concept of a non-arbitrary objective value space offers a compelling framework for understanding the moral landscape. This framework presupposes the existence of stance-independent moral truths—ethical principles that rational agents will converge upon given sufficient cognitive sophistication [1]. Human values are a narrow slice of value space: in the vast space of all conceivable…

Will Superintelligence solely be Motivated by Brute Self-Interest?

Is self-interest by necessity the default motivation for all agents? Will Superintelligence necessarily become a narcissistic utility monster? No, self-interest is not necessarily the default motivation of all agents. While self-interest is a common and often foundational drive in many agents (biological or artificial), other motivations can and do arise, either naturally or through design, depending…

Reverse Wireheading

Concerning sentient AI, we would like to avoid unnecessary suffering in artificial systems. It's hard for biological systems like humans to turn off suffering without appropriate pharmacology, e.g. from aspirin to anesthetics. AI may be able to self-administer painkillers – a kind of wireheading in reverse. Similarly to wireheading in AI systems…

Peter Singer – Ethics, Uncertainty & Moral Progress

In this short interview, Peter Singer, a renowned philosopher and ethicist widely recognized for his thought-provoking ideas about universal ethics, discusses the value of life, moral progress, population ethics (aka population axiology), the far future, the uncertainties inherent in philosophical reasoning, moral realism (objective normative truths) and 'alternative facts'. Points covered: 0:00 Intro, 0:08 Moral progress…

Understanding the moral status of digital minds requires a mature understanding of sentience

Turning off all AI won't happen in the real world. If we understand the signatures of sentience, we'll be in a better position to know what to do to circumstantially prevent/mitigate it or encourage it. AI, especially LLMs, 'claiming sentience' isn't enough… We need deep operational understandings of it. See the article by 80k…