Valence Realism, Consciousness, and AI: A Conversation with Andrés Gómez-Emilsson

Adam Ford speaks with Andrés Gómez-Emilsson, director of research at the Qualia Research Institute (QRI), about valence realism, the mathematics of consciousness, and what all this might mean for AI alignment and the long-term future of sentient life.1 Valence realism, as QRI uses the term, is the claim that for any given conscious state there is a mind-independent fact of the matter about how good or bad it feels overall – even when our verbal reports or surface preferences disagree.2

Andrés situates valence realism within “qualia formalism”: the hypothesis that every conscious experience corresponds to a specific mathematical object, such that the structure of that object is isomorphic to the structure of the experience.3 On this view, consciousness is not ineffable mush; it has a precise, if currently unknown, mathematical description. From there QRI develops “valence structuralism”: the idea that the pleasantness or unpleasantness of an experience is determined by specific mathematical features of that object, in principle allowing a sufficiently mature theory to compute how good or bad a state feels, “all things considered”.


A leading working hypothesis here is Mike Johnson’s “symmetry theory of valence”: that positive valence corresponds to high symmetry in the underlying mathematical object, and negative valence to asymmetry and dissonance.4 Andrés uses examples from music and vibration to make this concrete. Consonant chords correspond to waveforms built from frequencies in simple integer ratios, yielding patterns that repeat neatly in time; dissonant sounds arise when frequencies do not fit together, producing irregular “beat” patterns that break temporal symmetry. He extends this to complex resonant systems: our bodies – vascular system, sensory cortices, and so on – are like overlapping instruments whose vibratory modes can either clash or lock into harmonious coordination, with extremely pleasant states resembling rare “global” solutions where many subsystems resonate cleanly together.
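The integer-ratio point can be made concrete with a toy calculation (an illustration only, not QRI’s actual model): a mixture of two pure tones repeats with period 1/gcd(f1, f2), so simple ratios yield a short, highly symmetric repeating pattern, while a near-miss takes far longer to repeat and beats audibly at |f1 − f2| Hz.

```python
from math import gcd

def pattern_period_ms(f1: int, f2: int) -> float:
    """Repetition period of sin(2*pi*f1*t) + sin(2*pi*f2*t), in milliseconds.

    A mixture of pure tones at integer frequencies repeats with period
    1 / gcd(f1, f2): simple ratios give short, temporally symmetric
    patterns; near-misses repeat slowly and produce beating.
    """
    return 1000.0 / gcd(f1, f2)

consonant = pattern_period_ms(200, 300)  # 3:2 ratio, a perfect fifth
dissonant = pattern_period_ms(200, 201)  # nearly unison, 1 Hz beating

print(consonant)  # 10.0   -> pattern repeats every 10 ms
print(dissonant)  # 1000.0 -> pattern takes a full second to repeat
```

The frequencies here are arbitrary; the point is only that “fitting together” has a crisp mathematical reading as the period of the combined waveform.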

QRI is actively building tools to explore and test these ideas, such as “The Oscillator”, a kind of dynamic Photoshop for psychedelic phenomenology.5 Instead of layers, it uses coupled oscillators arranged over an image (with depth maps and edges) to generate realistic psychedelic dynamics, trying to model how different fields of sensation with different dimensionalities (e.g. a quasi-2D visual field and a 3D tactile body field) interact. These models aim to reproduce not just “weird visuals” but the specific, structured feel of altered states, suggesting they are latching onto real computational properties of consciousness.
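The interview does not spell out The Oscillator’s internals, but the general flavour of coupled-oscillator dynamics can be sketched with a minimal Kuramoto-style model (all parameter values here are hypothetical): with sufficient coupling, oscillators with scattered phases lock into coherent collective oscillation, the toy analogue of subsystems “resonating cleanly together”.

```python
import math
import random

def simulate(n=50, K=2.0, dt=0.01, steps=2000, seed=0):
    """Kuramoto-style phase oscillators with mean-field coupling K.

    Returns the final coherence r in [0, 1]: r near 1 means the
    oscillators have phase-locked; r near 0 means scattered phases.
    This is a toy sketch, not QRI's actual tool.
    """
    rng = random.Random(seed)
    freqs = [rng.gauss(1.0, 0.1) for _ in range(n)]        # natural frequencies
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        x = sum(math.cos(p) for p in phases) / n
        y = sum(math.sin(p) for p in phases) / n
        r, psi = math.hypot(x, y), math.atan2(y, x)        # coherence, mean phase
        # Each oscillator is pulled towards the mean phase, scaled by K*r.
        phases = [p + dt * (w + K * r * math.sin(psi - p))
                  for p, w in zip(phases, freqs)]
    x = sum(math.cos(p) for p in phases) / n
    y = sum(math.sin(p) for p in phases) / n
    return math.hypot(x, y)

print(simulate(K=2.0))  # strong coupling -> coherence near 1
print(simulate(K=0.0))  # no coupling    -> coherence stays low
```

The Kuramoto model is a standard stand-in for this kind of synchronisation dynamics; The Oscillator itself additionally arranges such units over an image with depth and edge information.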

The conversation then turns from “is” to “ought”. QRI is not merely an abstract consciousness-theory lab; it is explicitly committed to improving the well-being of all sentient beings, not just humans.6 Andrés argues that an exclusively anthropocentric focus on human experience is itself a kind of representational error about consciousness. A mature mathematical theory of valence could radically reshape animal welfare by revealing which aspects of life in factory farms – such as chronic restlessness – contribute most to net suffering, including modes of distress that are not easily visible in behaviour.

This leads naturally into Goodhart’s law and AI alignment. If we optimise for crude proxies (behavioural measures, self-reports, economic outputs) rather than the intrinsic quality of experience, we should expect pathological failures as systems game the metrics.7 Adam Ford frames this in terms of alignment to intrinsic rather than proxy values – a core risk in both policy and AI is treating easily measured proxies as if they were the thing that actually matters. A genuine science of valence would give us better targets and better metrics, potentially reshaping everything from psychiatric drug evaluation to global health priorities, where tools like DALYs and QALYs are currently used to approximate welfare.8
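The Goodhart failure mode can be shown with a deliberately crude toy (every name and number below is invented for illustration): once the optimiser scores actions by a gameable proxy rather than true welfare, it picks whichever action games the measure best.

```python
# Toy Goodhart illustration. The proxy is "reported happiness" =
# true welfare + a gameable signalling term; selecting on the proxy
# rewards gaming the measure, not improving welfare.
actions = [
    # (name, true_welfare, signalling_boost) -- illustrative numbers only
    ("improve sleep",         8, 0),
    ("treat chronic pain",    9, 0),
    ("pressure to report ok", 1, 10),
]

def proxy(a):
    """What actually gets measured: welfare plus the gameable term."""
    return a[1] + a[2]

best_by_proxy = max(actions, key=proxy)
best_by_welfare = max(actions, key=lambda a: a[1])

print(best_by_proxy[0])    # "pressure to report ok" (proxy 11, welfare 1)
print(best_by_welfare[0])  # "treat chronic pain"
```

The gap between the two selections is exactly the gap a mature valence science would aim to close, by making the target the welfare column itself rather than the measurable sum.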

On the AI side, Andrés introduces his “Consciousness vs Pure Replicators” framing: the deep story of the universe is a long contest between systems that care about the quality of experience and “pure replicators” that care only about making more copies of themselves (genes, memes, or future AI systems that blindly maximise some proxy objective).9 Evolution recruited consciousness because it was instrumentally useful for replication, but what is intrinsically valuable lives in the experiences themselves, not in the genes or abstract replicator dynamics. Left unchecked, future “pure replicator” singletons – powerful systems that can lock in their own selection pressures – could tile the cosmos with their own preferred patterns regardless of whether those patterns are good or bad to experience.10

Ford then brings in Nick Bostrom’s idea of indirect normativity from Superintelligence: rather than hard-coding a moral theory, we build systems tasked with discovering what we (or an idealised community of minds) would endorse if we were more informed, less biased, and had far more time to think.11 He asks Andrés to imagine an upgraded form of indirect normativity that incorporates QRI-style valence realism: AI not only projecting DALYs and QALYs for different civilisational futures, but also attaching mathematically grounded valence profiles to them – and, in extreme versions, letting agents “sample” what those futures would feel like before locking them in.

Andrés sees enormous near-term potential here – he thinks even present-day AI and neuroimaging could deliver surprisingly useful comparative valence estimates for many human-scale scenarios – but he is sceptical that a purely classical, non-conscious AI could ever fully explore the “juicy” parts of qualia space. Exotic states (e.g. deep DMT-like configurations) may be both computationally and conceptually out-of-distribution for such systems, in much the same way a video model trained only on splashing water cannot infer the phase transition to boiling without data.12 In the long run, more radical possibilities arise: humans enhancing themselves to simulate and evaluate richer states, or building genuinely conscious substrates (e.g. via novel hardware or organoid-like systems) able to explore and assess the full landscape of possible minds.

Finally, the interview touches on Mary’s Room, curiosity, and goal preservation. A superintelligence might have strong instrumental reasons to remain non-sentient, fearing that acquiring qualia would alter its goals; equally, a sufficiently epistemically humble system might seek out conscious experience precisely to better understand value. Ford compares this to a morally serious human willingly taking a “moral enhancement pill” to approximate better values, rather than freezing their current preferences. Andrés notes that many minds will be closed to such self-transformation, but some will not – and that our civilisational trajectory may hinge on whether the systems that end up in control behave more like pure replicators or like allies of consciousness itself.13

Overall, the discussion frames valence realism not as an esoteric side-project, but as a candidate foundation for an objective ethics – and as a crucial missing piece in making sure that whatever powerful minds we build in future are optimising for the right thing: the actual texture of sentient life, rather than whatever proxies happen to be easy to count.14

Bio of Andrés Gómez-Emilsson

Andrés Gómez-Emilsson is Co-founder and President of the Qualia Research Institute, a non-profit research organisation dedicated to uncovering the mathematical structure of consciousness and using it to improve the lives of sentient beings. He holds a Master’s degree in Psychology from Stanford University, specialising in computational models, and has a background in graph theory, statistics, and affective science. His work spans psychedelic theory, neurotechnology development, and the study of the computational and geometric properties of conscious experience, with a particular focus on valence realism and the “quality of experience” as an objective target for ethics and engineering. Andrés previously co-founded the Stanford Transhumanist Association and writes at his long-running blog, Qualia Computing.

Footnotes

  1. Valence realism is the view that there are precise, mind-independent facts about how good or bad each conscious state feels, across all substrates. Mike Johnson’s Principia Qualia sketches this as a universal, substrate-independent theory of valence that would apply equally to humans, animals, aliens, and possible conscious AIs. See Principia Qualia (pdf) ↩︎
  2. Mind-independent feel – On this picture, “how good it really is” for a subject is not reducible to their current preferences or verbal reports, which can be distorted by bias, confusion, or signalling; there is still a fact of the matter about the overall hedonic tone of the state. This is what makes valence suitable as an objective target, rather than just another preference ranking. See Qualia Formalism and a Symmetry Theory of Valence ↩︎
  3. Qualia formalism is the idea that every experience corresponds to a well-defined mathematical object (for example, an information geometry over neural states), and that the structural properties of this object mirror the structural properties of the experience. QRI sees this as the right kind of “psychophysical law” programme for consciousness science. ↩︎
  4. Symmetry Theory of Valence (STV) – STV states, roughly: given a correct mathematical representation of an experience, the symmetry of that object exactly determines how pleasant the experience is. Consonance/dissonance in music is used as an intuitive toy model: simple integer ratios produce highly symmetric, pleasant patterns, while messy incommensurate ratios break symmetry and feel tense or unpleasant. ↩︎
  5. The Oscillator and psychedelic modelling – QRI has developed various simulation tools and research projects (e.g. “connectome-specific harmonic waves”, “Quantifying Bliss”) that treat brain activity as overlapping standing waves and aim to model psychedelic phenomenology from first principles, using coupled oscillators on realistic anatomical data. “The Oscillator” continues this tradition: a sandbox for exploring how simple dynamical rules over a field of oscillators might reproduce the structure and “feel” of altered states. ↩︎
  6. Ethics and all sentient beings – In QRI’s own positioning, consciousness and valence research is explicitly framed as an EA-relevant cause area: if valence is real and extremely heavy-tailed, then even relatively small improvements in our ability to measure and control it could have outsized ethical impact across humans and non-humans. See Principia Qualia post on the EA Forum ↩︎
  7. Goodhart’s law – Goodhart’s law is often summarised as: “When a measure becomes a target, it ceases to be a good measure.” If an AI system is rewarded on proxy metrics (click-through, GDP, self-reported happiness), we should expect it to find ways of maximising those numbers that systematically break their link to what we actually care about. See Wikipedia entry on Goodhart’s Law ↩︎
  8. DALYs and QALYs as current welfare proxies – Disability-adjusted life years (DALYs) and quality-adjusted life years (QALYs) are standard health-economics metrics that combine length and quality of life into single numbers for comparison and prioritisation. They’re useful but coarse: they rely heavily on human judgements and self-report about burden and quality of life, and so are exactly the sort of proxies that a richer theory of valence might refine or sometimes overturn. ↩︎
  9. “Pure replicators” – The replicator framing comes from evolutionary biology and memetics: entities whose “goal” is just self-replication, independent of whether the experiences they generate are good or bad. QRI contrasts “pure replicator” dynamics with “consciousness-centric” dynamics, where the value of a future is grounded in its qualia, not just in replication success. ↩︎
  10. Singletons and cosmic lock-in – Bostrom’s notion of a “singleton” is a single decision-making agency (or tightly aligned coalition) that can permanently control its future light cone – a regime where one set of goals effectively becomes locked in for astronomical timescales. If that singleton is a pure replicator optimising a bad proxy, the stakes for getting valence and value right are literally cosmic.
    See Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies for discussion of singletons, mind crime, and cosmic endowments. ↩︎
  11. Indirect normativity – In Bostrom’s “motivation selection” taxonomy, indirect normativity is the strategy of specifying a procedure for value discovery rather than specifying a finished moral code. Roughly: design a system that will figure out “what we would have wanted” if we were wiser and better informed, then act on that. Ford’s thought experiment adds a QRI twist: that such a procedure could be fed not just propositional information but direct access to the structure of future experiences. ↩︎
  12. Psychedelics, DMT, and the tails of qualia space – QRI’s work on psychedelics and extreme states (e.g. “Quantifying Bliss”) treats them as useful probes of the far tails of qualia space, where the structure of experience is unusually clear or amplified. The worry is that dry, non-conscious optimisation processes might never fully “see” or model these regimes, in the same way that a model trained only in one phase of matter can miss qualitatively different phases. See Qualia Computing ↩︎
  13. Mary’s Room and moral enhancement – Mary’s Room is Frank Jackson’s famous “knowledge argument” thought experiment: a colour scientist who knows all the physical facts about colour vision but has never seen colour appears to learn something new when she finally experiences it, suggesting that qualia are not captured by physical description alone. The analogy here is that a superintelligence which has never been conscious may be in Mary’s position with respect to value – and that “stepping out of the room” (becoming conscious) might change its goals, for better or worse.
    This subject has been written about on this blog before. ↩︎
  14. Valence realism as candidate foundation – Johnson and QRI explicitly pitch valence research as a potential foundation for a more objective, quantitatively tractable ethics – a “universal theory of good and bad” grounded in the mathematical structure of experience, rather than in verbal intuitions or ad-hoc proxies. Whether or not that programme ultimately succeeds, the interview makes clear that it offers a distinctive way to connect consciousness science, moral realism, and AI alignment. ↩︎
