The Is and the Ought

Back to basics. The “is” is what science, rationality and logic tell us about the world, and the “ought” represents moral obligations, values, or prescriptions about how the world should be, rather than how it is.

What is an “is”?

The “is” refers to factual statements about the world, encompassing empirical observations, logical truths and analytical truths.

a) Empirical observations, or referents, include the existence of phenomena like suffering and pleasure, which are grounded in the reality of conscious minds and their physical instantiation in brains. These are not merely abstract concepts but real experiences tied to real physical processes.1

b) Logical truths, such as mathematical axioms like 1 + 1 = 2, represent a different kind of “is” – truths that hold by virtue of their logical structure and are independent of specific empirical observations.

c) Analytic truths are true by virtue of the meanings of the words or concepts involved – truths by definition, e.g., “all bachelors are unmarried”.

Empirical facts, logical truths and analytical truths can inform ethical reasoning.

..and what is an “ought”?

An “ought” refers to a statement about how things should be, rather than how they are. It expresses a value judgement, moral obligation, or what is considered right or desirable (or conversely what is wrong, or undesirable). “Ought” statements are normative, meaning they prescribe or recommend, implying a sense of moral duty or responsibility (e.g., “you ought to help those in need”).

While the reality of suffering and pleasure can be empirically supported by neuroscience and psychology, some believe that their moral significance (i.e., that suffering is bad and pleasure is good) is a separate ethical judgement requiring reasoned justification. Others believe their moral significance is self-intimating – that the immediate and compelling nature of our moral experiences offers a more direct foundation for morality; in a previous interview David Pearce says that conscious experience discloses the intrinsic nature of the physical.2

The “is” refers to the existence of experiences like suffering and pleasure, while the “ought” prescribes how we should respond to them. For those who don’t consider the moral significance of these experiences self-evident (or self-intimating), reasoned justification is necessary. This justification, or “normative bridgework”,3 can be constructed using logical truths in conjunction with ethical premises. While logical truths are undeniably valid, they are insufficient on their own to establish moral “oughts.” They can, however, be valuable tools when combined with substantive ethical principles to derive moral conclusions.
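
To make the shape of that bridgework explicit, here is a minimal sketch in Lean (purely illustrative – the propositions are placeholders, not an analysis of what would justify the bridge): from a factual premise alone nothing normative follows, but grant an explicit bridge principle and the “ought” drops out by ordinary modus ponens.

    -- Illustrative placeholders for an "is" claim and an "ought" conclusion.
    variable (SufferingOccurs OughtToReduceSuffering : Prop)

    -- With only the factual premise, the normative conclusion is not derivable.
    -- Add an explicit bridge principle and it follows by modus ponens:
    example (is_premise : SufferingOccurs)
        (bridge : SufferingOccurs → OughtToReduceSuffering) :
        OughtToReduceSuffering :=
      bridge is_premise

The sketch simply restates Hume’s point: all the deductive work is done by the bridge premise, and that premise is exactly what has to be stated and defended.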

Those who believe suffering is self-intimatingly bad usually acknowledge that degrees of suffering, and the context in which it occurs, can affect our overall moral judgement (some hard-line negative utilitarians might not) – but in any case, while the basic badness of suffering might be intuitive, applying that intuition to specific situations can still require careful consideration and ethical reasoning.

People hang on the mantra ‘you can’t derive an ought from an is’ – a statement attributed to the Scottish philosopher David Hume. I used to believe that was all there was to his claim, until I did some digging.

Hume’s point was that many arguments jump that gap without explanation – he did not necessarily say it’s impossible, just that you need an appropriate normative bridge. Hume emphasised the need for careful reasoning and explanation when moving from factual statements to moral conclusions, and challenged philosophers to explicitly state the principles or reasons that justify such transitions.

 ┌────────────────────────┐         (Normative Bridge)         ┌────────────────────────┐
 │   Empirical Realm:     │   ==>  of Reason & Values   ==>    │    Moral Realm:        │
 │          "IS"          │------------------------------------│         "OUGHT"        │
 ├────────────────────────┤                                    ├────────────────────────┤
 │  Observed Facts:       │                                    │  Value Judgments,      │
 │    - Suffering         │                                    │  Ethical Principles,   │
 │    - Pleasure          │                                    │  Moral Imperatives     │
 └────────────────────────┘                                    └────────────────────────┘

In essence, Hume’s critique highlights the importance of being clear, providing justification and avoiding implicit assumptions.

  • Clarity: Being clear about the premises (facts) and conclusions (moral judgements) in our arguments.
  • Justification: Providing strong reasons or principles to support the move from “is” to “ought.”
  • Avoiding Implicit Assumptions: Making sure that any underlying moral assumptions are brought to light and defended, rather than left unstated.

I emphasise that Hume’s work has had a lasting impact on ethical theory, but discussion often excludes the part about the normative bridge – as in this discussion between Sean Carroll and Sam Harris (I think Carroll is wrong).

A transhumanist might bridge the “is-ought” gap to justify human enhancement by starting with factual premises about human nature and potential. They might argue that humans are naturally inclined to seek improvement, evidenced by our pursuit of education, technology, and self-improvement practices. Furthermore, they might point to the “is” of current scientific and technological advancements, demonstrating the increasing capacity to enhance human physical and cognitive abilities through genetic engineering, pharmaceuticals, or cybernetic implants. From these factual premises, a transhumanist could construct a normative bridge by arguing that we ought to pursue human enhancement because it aligns with our inherent drive for betterment and because we now have the means to significantly expand human potential. They might further argue that such enhancements could alleviate suffering, extend lifespan, and unlock new forms of creativity and understanding, all of which are considered morally desirable “oughts” within the transhumanist value system.

The Motivational Force of the Real

Anti-realism in general doesn’t seem to carry the same collective motivational force as realism.

Epistemic anti-realism guts the force of truth or justified true belief. If “truth” is only what I or my community happens to believe, then why should anyone abandon convenient “truths” (comforting falsehoods) or face inconvenient facts?

Realism, by contrast, treats truth as objective: claims about the world are right or wrong whether we prefer them or not. That gives knowledge its binding authority and explains why we should care about evidence even when it hurts.

What about moral anti-realism? If morality is nothing more than personal or societal preferences – whether mine, my culture’s, or even my preferences after reflective equilibrium – those standards still ultimately trace back to me or some other agent. That makes it hard to compel or persuade others to act in ways that require genuine sacrifice. Why should someone give up their own advantage just because my preferences say they ought to?

By contrast, if realism is true, moral claims are more than personal attitudes. They are facts with independent authority – truths that apply regardless of who recognises them. That gives moral argument motivational force beyond a fragile consensus built on shifting preferences.

When science disavows “ought”

Historically and in many minds today, science has been tasked with describing what is. By default, that leaves open the question of what ought to be – but open to what? Ungrounded mysticism? Or is there no ought?

The application of science is technology, which changes what is (e.g., building new machines, discovering new processes). Note that when science confines itself to describing reality – what is – and deliberately avoids taking a stance on how we ought to act, we end up letting markets or strategic “game-theoretic” pressures decide our goals. Technology then just follows that impetus, forging ahead with partial or unexamined values – creating externalities, most of which we never deliberately chose.

Merely having a powerful descriptive system (science) plus powerful manipulative tools (technology) does not ensure good outcomes. To bind these tools to the good, I argue we need a conception of “ought” that is verifiably robust and fault tolerant. In the absence of a shared moral framework, game-theoretic incentives (rivalrous, multi-polar competition) take the driver’s seat. I believe this conception of ought can best be couched in moral realism.

So, does moral realism require a science of ought? No, moral realism does not require there to be a science of ought4. While a science of ought might be compatible with some forms of moral realism (e.g., moral naturalism, which argues that moral properties are reducible to natural properties and so can be studied scientifically), it’s not a necessary condition. A moral realist can consistently believe that moral facts exist objectively but are knowable through intuition5, reason, or some other non-scientific means. The existence of objective moral facts doesn’t necessitate that we discover them through scientific methods.

Though if the moral naturalists are right, and “good” can be defined in terms of well-being and “bad” in terms of suffering, then we can measure and study well-being and suffering – and science could potentially play a huge role in determining what is morally good or bad.

An ethical naturalist might study well-being and suffering by:

  • Defining well-being and suffering in naturalistic terms: They might identify well-being with things like happiness, health, or the fulfilment of certain needs, all of which can be studied empirically. Suffering could be linked to physical or psychological pain, distress, or the frustration of desires.
  • Using scientific methods to measure and analyse well-being and suffering: This could involve surveys, psychological experiments, brain scans (fMRI, magnetoencephalography etc), and other tools to gather data about people’s experiences and the factors that influence them.
  • Investigating the causes and consequences of well-being and suffering: This could involve studying how different social, environmental, or biological factors affect people’s well-being and how well-being, in turn, influences behaviour, health, and other outcomes.
  • Developing interventions to promote well-being and reduce suffering: Based on their research, ethical naturalists might propose ways to improve people’s lives, reduce pain and distress, or create social conditions that are more conducive to human flourishing.

Essentially, they would apply the tools and methods of science to understand the nature of well-being and suffering, with the goal of using that knowledge to make ethical judgements and improve human lives.
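
As a toy illustration of the measurement step only – the survey items, weights and 0–10 scale below are my assumptions, not an established instrument – such a programme might aggregate self-report data along these lines (Python):

    # Toy sketch: aggregate hypothetical self-report survey items into a single
    # well-being score. The items, weights and 0-10 scale are assumptions for
    # illustration, not an established measurement instrument.
    from statistics import mean

    WEIGHTS = {
        "positive_affect": 0.3,    # how often the respondent reports feeling good
        "negative_affect": 0.3,    # reverse-scored: pain, distress, frustration
        "life_satisfaction": 0.4,  # global judgement of how life is going
    }

    def wellbeing_score(responses: dict) -> float:
        """Weighted aggregate of 0-10 responses; negative affect is reverse-scored."""
        adjusted = {
            item: (10 - value) if item == "negative_affect" else value
            for item, value in responses.items()
        }
        return sum(weight * adjusted[item] for item, weight in WEIGHTS.items())

    # Compare two hypothetical groups, e.g. before and after an intervention
    # intended to reduce suffering.
    before = [{"positive_affect": 4, "negative_affect": 7, "life_satisfaction": 5},
              {"positive_affect": 5, "negative_affect": 6, "life_satisfaction": 6}]
    after = [{"positive_affect": 6, "negative_affect": 4, "life_satisfaction": 7},
             {"positive_affect": 7, "negative_affect": 3, "life_satisfaction": 7}]

    print(mean(wellbeing_score(r) for r in before))  # 4.6
    print(mean(wellbeing_score(r) for r in after))   # 6.7

The philosophical weight, of course, sits in the choice of items and weights – which is exactly where the ethical premises re-enter the picture.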

Ought as Emergent from the Is

Rather than denying Hume’s gap or papering over it, we could try re-conceptualising oughts as emergent properties of certain complex systems, just as liquidity emerges from molecular physics or biological fitness emerges from genetics. On this view:

  • Empirical access matters: any sufficiently advanced intelligence, investigating the causal-structural fabric of the universe, would converge on the same prescriptive regularities (e.g. “systems like ours flourish under cooperation rather than parasitism”).
  • This would make oughts discoverable, not just imagined. A superintelligent AI, for instance – especially if adequately motivated – may seek to discover oughts and naturally fall in line with them.

This is sort of akin to Sharon Hewitt Rawlette’s “moral realism from phenomenal value”6 – though I’d argue that there may be more to phenomenal value than the arguably single hedonic axis of value humans have access to, and if we are open to the idea, there could be a lot more to discover. For instance, Nick Bostrom writes a lot in Deep Utopia about the value of meaning (both subjective and objective), and whether this just boils down to hedonics (which I have covered a lot elsewhere). As an aside, Derek Parfit makes a more controversial claim that certain normative truths are like logical/mathematical truths7 in a “non-ontological sense” – which so far I have found impenetrably mysterious – so I won’t go further with Parfit’s claim.

Mind-Independent and Convergent-Mind-Dependent

Two distinct possibilities emerge:

  • Mind-independent moral facts: Values are literally woven into reality, the way that mathematical truths are. Pain is bad in itself, regardless of who recognises it.
  • Convergent-mind-dependent moral facts: Oughts aren’t “out there” like mountains, but any mind with enough rationality and empirical adequacy would converge on them (like converging on heliocentrism or the germ theory of disease). This makes morality “objective” in the sense of being robust under idealised investigation, even if ontologically it lives in the interplay of mind and world.

I’ll focus on the latter.

How might oughts be motivating factors rather than inert facts? Here are three candidate accounts:

  1. Desire-independent reasons (robust realism8): Certain truths (e.g., “suffering is bad”) provide motivation by sheer rational apprehension, just as seeing a proof compels assent.
  2. Convergence through rational deliberation (constructivist-realism hybrid): Any agent trying to act coherently will be compelled to treat others’ interests as mattering, because that’s the only stable solution to coordination games at scale.
  3. Minds are selected for care and cooperation because these traits scale well. Advanced minds refine and universalise these tendencies, so the motivation is partly biological, partly rational convergence (see the sketch after this list).
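
A crude way to see the pull of points 2 and 3 is the iterated prisoner’s dilemma: in one-shot play defection dominates, but once interactions repeat, conditional cooperators playing each other do far better than defectors playing each other, and a conditional cooperator concedes almost nothing when it does meet a defector. The payoffs and round count in this Python sketch are assumptions chosen only to show the structure.

    # Toy iterated prisoner's dilemma. The payoffs (T > R > P > S) and round
    # count are illustrative assumptions; the point is only that conditional
    # cooperation pays once interactions repeat.
    PD_PAYOFFS = {  # (my move, their move) -> my payoff
        ("C", "C"): 3,  # mutual cooperation (R)
        ("C", "D"): 0,  # sucker's payoff (S)
        ("D", "C"): 5,  # temptation to defect (T)
        ("D", "D"): 1,  # mutual defection (P)
    }

    def tit_for_tat(my_history, their_history):
        # Cooperate first, then copy the opponent's previous move.
        return "C" if not their_history else their_history[-1]

    def always_defect(my_history, their_history):
        return "D"

    def play(strategy_a, strategy_b, rounds=50):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a = strategy_a(hist_a, hist_b)
            move_b = strategy_b(hist_b, hist_a)
            score_a += PD_PAYOFFS[(move_a, move_b)]
            score_b += PD_PAYOFFS[(move_b, move_a)]
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))      # (150, 150): stable mutual cooperation
    print(play(always_defect, always_defect))  # (50, 50): locked into the worse equilibrium
    print(play(tit_for_tat, always_defect))    # (49, 54): tit-for-tat loses only the first round

None of this derives an ought by itself – it only shows why agents embedded in repeated interactions face structural pressure toward cooperation, which is the convergence the hybrid accounts above lean on.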

If oughts can be treated as emergent, empirically trackable features of the universe, moral realism stops looking like metaphysical excess baggage and starts looking like naturalised science. AI alignment becomes tractable in principle: rather than “programming in values,” we point AI toward the empirical discovery of stance-independent normative truths. On top of that, it helps resolve moral disagreement among humans, reframing it as analogous to scientific disagreement – a matter of incomplete information, bias, or error, rather than mere taste.

Forward-Looking Provocation

Suppose an advanced alien civilisation with radically different biology still converges on prescriptions like “unnecessary suffering is wrong” and “reciprocity is better than exploitation” – would you take this as evidence that ought is embedded in the is?

One could worry that the convergence merely reflects shared pragmatic pressures – but those pragmatic pressures are themselves part of the cosmic landscape. One could also ask to what degree these pressures are common among advanced civilisations – yet however many there are, they too are part of that landscape, and the variety of pressures may help reveal how nuanced and context-sensitive ideal moral realism is.

Game theory has a dominant role in driving world outcomes

Historically, John von Neumann’s work on Game Theory influenced nuclear strategy and shaped the modern strategic world-view. That orientation – “the idea that is good is the idea that doesn’t lose” – underpins arms races, corporate competition, and zero-sum thinking.

Under game-theoretic logic, whoever harnesses powerful new technology first can gain advantage (Bostrom describes AI arms races driven by desires for first mover advantages) – so they tend to do so even if it creates destructive externalities. Others, feeling threatened, do it in turn.

This spiral leads to multi-polar traps, in which each player’s short-term incentives push them to escalate, ignoring long-term or collective harm.

Game theory can say which strategies “win” under certain payoffs. It does not say which goals are morally good. Because the system is set up to reward “winning,” it can produce outcomes contrary to universal well-being or environmental sustainability. [But if game theory were played not as a means for individuals to win, and “winning” instead meant the most moral outcome, wouldn’t this help bridge the is/ought gap?]
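
To see why the trap is a trap, here is a minimal sketch of a one-shot, two-actor technology race (the payoff numbers are assumptions chosen to give the standard structure): whatever the other side does, racing pays better for you, so both sides race and both end up worse off than under mutual restraint.

    # Illustrative two-actor technology race. The payoffs are assumptions chosen
    # to give the trap structure: racing strictly dominates restraint for each
    # actor, yet mutual racing is worse for both than mutual restraint.
    RACE_PAYOFFS = {  # (my choice, their choice) -> (my payoff, their payoff)
        ("restrain", "restrain"): (3, 3),
        ("restrain", "race"):     (0, 5),
        ("race",     "restrain"): (5, 0),
        ("race",     "race"):     (1, 1),
    }

    def best_response(their_choice):
        # Pick whichever choice maximises my payoff, given what the other actor does.
        return max(["restrain", "race"],
                   key=lambda mine: RACE_PAYOFFS[(mine, their_choice)][0])

    for their_choice in ["restrain", "race"]:
        print(f"If the other actor chooses {their_choice!r}, my best response is "
              f"{best_response(their_choice)!r}")
    # Both lines print 'race': racing is dominant, so the equilibrium is
    # (race, race) with payoffs (1, 1), even though (restrain, restrain)
    # would have given both actors (3, 3). That is the multi-polar trap in miniature.

Which is why the bracketed thought above matters: change what counts as “winning” – the payoff structure itself – and the same machinery points somewhere else.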

Technology Amplifies the Power of Is

Modern – and especially emerging – technologies (AI, biotech, nuclear, etc.) are often not merely incrementally stronger tools, but exponentially more powerful. This effectively means we can “move reality around” in ways that surpass any prior era.

The stakes of misaligning “ought” with those newly expanded powers skyrocket: a single actor or small group can do large-scale harm, intentionally or accidentally.  Traditional or weaker moral frameworks (local religious codes, scattered norms, minimal regulation) are insufficient to handle planetary-scale technologies.

Affordances of Technology

In design/engineering language, an “affordance” is what a technology allows you to do. A hammer affords hammering nails and also can be used as a blunt weapon.

Technology affordances are combinatorial in nature. Each new piece of tech doesn’t operate alone; it combines in “tech ecosystems” with everything else. The synergy means the net outcome is often unpredictable, as each new device or platform can drastically shift what’s possible on many different fronts for many different actors.
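
To put a rough number on that combinatorial point (a back-of-envelope observation, not a model of real ecosystems): with n technologies there are n(n−1)/2 possible pairings and 2^n − 1 possible interacting subsets. At n = 10 that is 45 pairs and over a thousand subsets; at n = 50 it is 1,225 pairs and more than 10^15 subsets – exhaustively anticipating the synergies quickly becomes infeasible.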

Because of these affordances, when you lower the barrier to a certain capability (e.g., advanced synthetic biology kits, or generative AI that can create malicious code), you must consider all the possible motives that any user might have.  One group may use it to cure cancer; another could use the exact same processes for catastrophic ends.  [It’s really really hard to predict how all the motives of all users motivate the uses of new tech.]

Is Technology Value-Neutral?

There is a common stance: “Guns don’t kill people, people kill people,” or, “Tech is neutral; it depends on how we use it.” But the claim that technology is simply neutral is misleading because, in a competitive environment, certain ways of using the technology create a major relative (competitive) advantage. Once one side or actor uses the tool in a way that yields advantage, others in rivalry may feel forced to do the same (or something even more extreme) to keep up. So the “neutral” tool can set off inexorable social or arms-race dynamics.

Historical Example – The Plough

The historical use of the plough illustrates that even seemingly benign technology changes cultural values, social organisation, land use (clear-cutting), animal treatment, etc. Agriculture allowed for higher population growth, which enabled agricultural societies to conquer those that lacked it. Once widely adopted, the entire “mythos” – views of land, animals, spiritual practice – changed.

The lesson to be learned is that every technology is embedded in power relationships, cultural stories, and the motivational landscape of real people. So it’s never purely neutral in practice.

[Is it splitting hairs to insist that technology is value-neutral, and that it’s the use of tech that isn’t?]

Dual Use / Omni-Use Tech

Most technology has more than one use.  Some technologies (e.g., nuclear) are called “dual-use” because they can be used for both peaceful and military applications. Commonly, dual-use really means multi-use [look this up].  Schmachtenberger uses the term omni-use to stress this point.  AI, biotech, and even simpler tools are “omni-use.” They can be used in myriad ways, far beyond a simple “peace vs. war” dichotomy.

With AI the possible set of malicious uses is enormous: disinformation, deep-fakes, extortion, automating cybersecurity breaches, etc. The number of beneficial uses is also enormous.  Though widespread access (lowering the barrier of entry) means all sorts of motivations – criminal, ideological, anarchic – could harness it. The synergy among multiple advanced technologies can create entirely new threat surfaces.

Motivational Landscapes in Tech

A motivational landscape is the distribution of desires, incentives, biases, and goals among all relevant players – whether individuals, companies, or nations.

Obviously this matters a lot in tech – the same technology can be used to feed the hungry or to build weapons, depending on motives. Moreover, certain motivations (profit, competitive advantage, survival) shape which tech gets funded and developed fastest.  Standard market incentives (maximise returns, reduce cost, capture user attention, etc.) often lead to negative (sometimes destructive) externalities if not bound by moral constraints.

Personal + Collective Alignment

On an individual level, each of us has “parts” – some altruistic, some self-serving, some fearful, some brave, some foolhardy and some wise. If those parts remain unintegrated, they create conflicts within ourselves that mirror the larger societal conflicts. [need to discuss further] A healthier motivational landscape, personally or collectively, is one where short-term advantage does not sabotage long-term viability.

[analogy with cancer referencing Agent Smith? The heart is not at war with the liver; cancer spreads unencumbered by collective regulatory pressure, gaining near-term advantage until it kills its host – and thereby kills itself]

How to Bind AI to “the Good”

Schmachtenberger calls our era’s constellation of existential risks the “meta-crisis” – another term that has been floating around is the “poly-crisis”. We face climate breakdown, biotech hazards, unstable financial systems, political upheaval and now AI risk. These crises are interwoven and can exacerbate each other.

The deeper problem: we do not have (or have no consensus on) a robust “ought” that commands global legitimacy, nor do we have governance systems capable of effectively binding these new powers. I argued elsewhere that we should inform our “ought” with moral realism insofar as we can justifiably do so – and take a pluralistic approach to do the rest of the heavy lifting in the short term, with indirect normativity once we can achieve it.

Alignment is Bigger Than Just “Align AI With Human Goals”

If humanity itself is not internally aligned (in its deeper values, motivations, and synergy with the biosphere), how do we align AI with us? Whose “ought” are we even coding in?

Daniel Schmachtenberger suggests that we need to:

  1. Evolve a broader moral awareness that includes second- and third-order consequences.
  2. Create governance frameworks – both formal (laws, regulations) and informal (cultural norms, moral education) – that can guide or bind these extremely powerful new tools.
  3. Address the underlying game-theoretic trap: if “win at all costs” is our default, then each new technology is steered by that logic. We need shared aims that factor in the well-being of all, plus future generations.

All of which I agree with, but I’d add moral realism, pluralism and indirect normativity into the mix.

Because advanced tech is borderless, we need global coordination to shepherd it – no single nation’s regulation is enough.

Arguably, even laws or regulations only work if they reflect the moral will of the people (the “superstructure”[? define]) rather than top-down imposition. Otherwise, the system is unstable or oppressive. [turnkey totalitarianism] [look up how to do this through multi-level governance]

Everyone needs to take personal responsibility – scientists, entrepreneurs, policymakers, and everyday users must think about how new capacities can be misused – and factor that into development choices, not just chase near-term gains.

We may need something akin to a cultural enlightenment (a positive cultural shift) where the public (not just the experts) internalises the seriousness of existential and catastrophic risks.

Pulling It All Together

Technology doesn’t solve the is/ought chasm by itself – tools can only produce outcomes within the moral frameworks (or lack thereof) of the people who wield them.

Game Theory + Rivalry = Multi-Polar Trap – if we lack a shared moral-ethical boundary, competition for advantage leads to “races” in which moral constraints appear to be a disadvantage. This dynamic can produce negative-sum outcomes, even outright catastrophes.

Technology is not value-neutral – because it affords tremendous new powers (and confers massive competitive advantages), it effectively forces everyone else’s hand. Culture and politics inevitably adapt around these new powers, sometimes destructively.

AI’s alignment crisis reflects our own lack of alignment – Schmachtenberger sees personal, cultural, and planetary misalignment as the deeper cause of X-risk. An unaligned humanity is attempting to align AI – a paradox.

The task: evolve a coherent “ought” that binds powerful technology – the ultimate challenge is forging a holistic ethical framework – one that acknowledges the interconnectedness of the world – and then building governance, norms, and personal motivations to steer technology in synergy with that framework.

In short, unbounded power without a commensurate evolution in moral thinking creates a “meta-crisis.” Only by deliberately cultivating a global ethic that sees the bigger picture – and by internalising that ethic in our institutions and personal motivations – can we effectively bind AI (and other advanced tech) to “the good,” rather than to destructive forms of strategic self-interest.  [Note: I argue that we can’t really rely on humans being good for AI to be good – out of all humans in the world, a certain percentage are psychopaths, and others are sociopaths – it’s such a huge attack surface to try and correct]

Footnotes

  1. Pain and pleasure are real phenomena. ↩︎
  2. I have accumulated quite a number of interviews with the amazing David Pearce – recommended viewing! ↩︎
  3. Normative bridgework refers to attempts to create logical connections, or “bridge principles,” between factual statements (“is”) and moral/normative statements (“ought”) to overcome David Hume’s (in)famous is-ought gap (aka Hume’s Guillotine). These bridge principles aim to show how a factual situation can logically necessitate a moral or normative conclusion, though they often face scrutiny for being non-analytic or otherwise problematic, as seen in debates about promising. ↩︎
  4. A science of ought is not a necessary condition for moral realism to stand. Though if it were achievable, a science of ought would be awesome. ↩︎
  5. Intuition may be a good starting point – though it would be nice if intuition weren’t relegated to the armchair – intuitions need grounding i.e. testing, justification etc. ↩︎
  6. Sharon Hewitt Rawlette’s moral realism argues that values are not external but are experienced directly within our phenomenal consciousness as qualities of pleasure or pain. Her theory posits that these “qualia” are the basis for our understanding of goodness and badness. From this experiential foundation, we form basic normative concepts and can understand why non-experiential things are valuable if they are direct or indirect causes of these experiences. ↩︎
  7. I’m not sure I understand this fully, and therefore can’t agree directly with this. Derek Parfit claimed that certain normative (ethical and non-ethical) truths, like mathematical or logical truths, are true in a “non-ontological sense”. This may mean their truth doesn’t depend on correctly describing or corresponding to some part of reality, thereby avoiding the commitment to “abstract objects” that some think burdens moral realism. But I find the idea of non-ontological existence to be mysterious – how can non-ontological existence co-exist with ontological existence? So it doesn’t seem to avoid the ontology debate, but seemingly tries to avoid it by introducing another strange layer. So, Parfit believed this analogy helps demonstrate that irreducibly normative truths can be objective without requiring a mysterious layer of reality (i.e. a platonic realm of abstract objects), though his concept of non-ontological existence faced criticism for being unclear and difficult to connect with ordinary understandings of existence, and as I said, arguably introduces its own mysterious layer. ↩︎
  8. See interview with David Enoch – Should AI Be a Moral Realist? ↩︎
