
On Consciousness, Qualia, Valence & Intelligence with Mike Johnson

Andrés Gómez Emilsson joined in to ask insightful questions in a three-part interview series with Mike Johnson, covering the relationship of metaphysics to qualia/consciousness/hedonic valence and how those terms are defined, whether panpsychism matters, increasing sensitivity to bliss, valence variance, Effective Altruism, cause prioritization, and the importance of consciousness/valence research.
Andrés Gómez Emilsson interviews Mike Johnson

Carving Reality at the Joints

Andrés L. Gómez Emilsson: Does metaphysics matter for understanding qualia, consciousness, valence and intelligence?

Mike Johnson: If we define metaphysics as the study of what exists, it absolutely does matter for understanding qualia, consciousness, and valence. I think metaphysics matters for intelligence, too, but in a different way. The big question is whether terms like qualia, consciousness, and valence “carve reality at the joints” or whether they’re emergent linguistic constructs that don’t reflect the structure of the universe. And if these things are ‘real’ in some sense, the follow-up question is: how can we formalize these concepts? Intelligence seems different: it seems like a ‘fuzzy’ concept, without a good “crisp”, or frame-invariant, definition.

Andrés: What about sources of sentient valence outside of human brains? What is the “minimum viable valence organism”? What would you expect it to look like?
Mike: If some form of panpsychism is true- and it’s hard to construct a coherent theory of consciousness without allowing panpsychism- then I suspect two interesting things are true.
  1. A lot of things are probably at least a little bit conscious. The “minimum viable valence experiencer” could be pretty minimal. Both Brian Tomasik and Stuart Hameroff suggest that there could be morally-relevant experience happening at the level of fundamental physics. This seems highly counter-intuitive but also logically plausible to me.
  2. Biological organisms probably don’t constitute the lion’s share of moral experience. If there’s any morally-relevant experience that happens on small levels (e.g., quantum fuzz) or large levels (e.g., black holes, or eternal inflation), it probably outweighs what happens on Earth by many, many, many orders of magnitude. Whether it’ll outweigh the future impact of humanity on our light-cone is an open question.

The big question is whether terms like qualia, consciousness, and valence “carve reality at the joints” or whether they’re emergent linguistic constructs that don’t reflect the structure of the universe. And if these things are ‘real’ in some sense, the follow-up question is: how can we formalize these concepts?

In contrast with Brian Tomasik on this issue, I suspect (and hope) that the lion’s share of the qualia of the universe is strongly net positive. Appendix F of Principia Qualia talks a little more about this.

Andrés: What would be the implications of finding a sure-fire way to induce great valence for brief moments? Could this be used to achieve “strategic alignment” across different branches of utilitarianism?

Mike: A device that could temporarily cause extreme positive or negative valence on demand would immediately change the world.

First, it would validate valence realism in a very visceral way. I’d say it would be the strongest philosophical argument ever made.

Second, it would obviously have huge economic & ethical uses.

Third, I agree that being able to induce strong positive & negative valence on demand could help align different schools of utilitarianism. Nothing would focus philosophical arguments about the discount rate between pleasure & suffering more than a (consensual!) quick blast of pure suffering followed by a quick blast of pure pleasure. Similarly, a lot of people live their lives in a rather numb state. Giving them a visceral sense that ‘life can be more than this’ could give them ‘skin in the game’.

Fourth, it could mess a lot of things up. Obviously, being able to cause extreme suffering could be abused, but being able to cause extreme pleasure on demand could lead to bad outcomes too. You (Andrés) have written about wireheading before, and I agree with the game-theoretic concerns involved. I would also say that being able to cause extreme pleasure in others could be used in adversarial ways. More generally, human culture is valuable and fragile; things that could substantially disrupt it should be approached carefully.

A friend of mine was describing how in the 70s, the emerging field of genetic engineering held the Asilomar Conference on Recombinant DNA to discuss how the field should self-regulate. The next year, these guidelines were adopted by the NIH wholesale as the basis for binding regulation, and other fields (such as AI safety!) have attempted to follow the same model. So the culture around technologies may reflect a strong “founder effect”, and we should be on the lookout for a good, forward-looking set of principles for how valence technology should work.

One principle that seems to make sense is to not publicly post ‘actionable’ equations, pseudocode, or code for how one could generate suffering with current computing resources (if this is indeed possible). Another principle is to focus resources on positive, eusocial applications only, insofar as that’s possible; I’m especially concerned about addiction, and bad actors ‘weaponizing’ this sort of research. Another would be to be on guard against entryism, or people who want to co-opt valence research for political ends. All of this is pretty straightforward, but it would be good to work it out a bit more formally, look at the successes and failures of other research communities, and so on.

A question I find very interesting is whether valence research is socially disruptive or socially stabilizing by default. I think we should try very hard to make it a socially stabilizing force.

A question I find very interesting is whether valence research is socially disruptive or socially stabilizing by default. I think we should try very hard to make it a socially stabilizing force. One way to think about this is in terms of existential risk. It’s a little weird to say, but I think the fact that so many people are jaded, or feel hopeless, is a big existential risk, because they feel like they have very little to lose. So they don’t really care what happens to the world, because they don’t have good qualia to look forward to, no real ‘skin in the game’. If valence tech could give people a visceral, ‘felt sense’ of wonder and possibility, I think the world could become a much safer place, because more people would viscerally care about AI safety, avoiding nuclear war, and so on.

Finally, one thing that I think doesn’t make much sense is handing off the ethical issues to professional bioethicists and expecting them to be able to help much. Speaking as a philosopher, I don’t think bioethics itself has healthy community & research norms (maybe bioethics needs some bioethics-ethicists…). And in general, especially when issues are particularly complex or technical, I think the best research norms come from within a community.

Andrés: What is the role of valence variance in intelligence? Can a sentient being use its consciousness in any computationally fruitful way without any valence variance? Can a “perfectly flat world(-simulation)” be used for anything computational?

Mike: I think we see this today, with some people suffering from affective blunting (muted emotions) but seemingly living functional lives. More generally, what a sentient agent functionally accomplishes, and how it feels as it works toward that goal, seem to be correlated but not identical. I.e., one can vary without the other. But I don’t think that valence is completely orthogonal to behavior, either. My one-sentence explanation here is that evolution seems to have latched onto the property which corresponds to valence (which I argue is symmetry) in deep ways, and has built our brain-minds around principles of homeostatic symmetry (see “Why we seek out pleasure: the Symmetry Theory of Homeostatic Regulation”). This naturally leads to a high variability in our valence, as our homeostatic state is perturbed and restored. Logically, we could build minds around different principles, but it might be a lot less computationally efficient to do so. We’ll see. 🙂

One angle of research here could be looking at people who suffer from affective blunting, and trying to figure out whether it holds them back: what it makes them bad at doing. It’s possible that this could lead to understanding human-style intelligence better. Going a little further, we can speculate that given a certain goal or computation, there could be “valence-positive” processes that could accomplish it, and “valence-negative” processes. This implies that there’s a nascent field of “ethical computation” that would evaluate the valence of different algorithms running on different physical substrates, and choose the one that best satisfices between efficiency and valence. (This is of course a huge simplification which glosses over tons of issues…)
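To make the satisficing idea slightly more concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is hypothetical: the `efficiency` and `valence` numbers are made-up placeholders, since no accepted method for measuring the valence of a running algorithm exists; the point is only to show what “require good-enough efficiency, then maximize valence” could look like as a selection rule.

```python
# Hypothetical sketch of "ethical computation": among candidate algorithms
# for the same task, require a minimum ("good enough") efficiency, then
# pick the one with the highest assumed valence score. Both scores are
# placeholders, not real measurements.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    efficiency: float  # e.g., normalized throughput per watt (higher is better)
    valence: float     # assumed valence score in [-1, 1] (higher is better)

def choose_satisficing(candidates, efficiency_floor=0.5):
    """Satisfice: filter by an efficiency floor, then maximize valence."""
    viable = [c for c in candidates if c.efficiency >= efficiency_floor]
    if not viable:
        # Nothing meets the floor; fall back to the most efficient option.
        return max(candidates, key=lambda c: c.efficiency)
    return max(viable, key=lambda c: c.valence)

if __name__ == "__main__":
    options = [
        Candidate("algorithm_a", efficiency=0.9, valence=-0.2),
        Candidate("algorithm_b", efficiency=0.6, valence=0.7),
        Candidate("algorithm_c", efficiency=0.3, valence=0.9),
    ]
    # algorithm_c has the best valence but misses the efficiency floor,
    # so the satisficing rule picks algorithm_b.
    print(choose_satisficing(options).name)
```

The hard part, of course, is not the selection rule but the valence measurement it presupposes, which is exactly the formalization problem discussed throughout this interview.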
Andrés: What should we prioritize: super-intelligence, super-longevity or super-happiness? Does the order matter? Why?

Mike: I think it matters quite a bit! For instance, I think the world looks a lot different if we figure out consciousness *before* AGI, versus if we ignore it until AGI is built. The latter seems to involve various risks that the former doesn’t. A risk that I think we both agree is serious and real is this notion of “what if accelerating technology leads to Malthusian conditions where agents don’t, and literally can’t, from a competitive standpoint, care about qualia & valence?” Robin Hanson has a great post called “This is the Dream Time” (of relaxed selection). But his book “Age of Em” posits a world where selection pressures go back up very dramatically. I think if we enter such an era without a good theory of qualia, we could trade away a lot of what makes life worth living.
Andrés: What are some conceptual or factual errors that you see happening in the transhumanist/rationalist/EA community related to modeling qualia, valence and intelligence?

Mike: First, I think it’s only fair to mention what these communities do right. I’m much more likely to have a great conversation about these topics with EAs, transhumanists, and rationalists than with a random person off the street, or even a random grad student. People from these communities are always smart, usually curious, often willing to explore fresh ideas and stretch their brains a bit, and sometimes able to update based on purely abstract arguments. And there’s this collective sense that ideas are important and have real implications for the future. So there are a lot of great things happening in these communities, and they’re really a priceless resource for sounding out theories, debating issues, and so on. But I would highlight some ways in which I think these communities go astray.

Computationalism, functionalism, fun theory, ‘hedonic brain regions’, ‘pleasure neurochemicals’, the reinforcement learning theory of valence, and so on all give the illusion of explanatory depth but don’t actually explain things in a way which allows us to do anything useful.

First, people don’t realize how bad most existing models of qualia & valence are. Michael Graziano argues that most theories of consciousness are worse than wrong: they play to our intuitions but don’t actually explain anything. Computationalism, functionalism, fun theory, ‘hedonic brain regions’, ‘pleasure neurochemicals’, the reinforcement learning theory of valence, and so on all give the illusion of explanatory depth but don’t actually explain things in a way which allows us to do anything useful.

Second, people don’t realize how important a good understanding of qualia & valence is. They’re upstream of basically everything interesting and desirable.

Here’s what I think has happened, at least in the rationalist community: historically, consciousness research has been a black hole. Smart people go in, but nothing comes out. So communities (such as physicists and LessWrong) naturally have an interest in putting up a fence around the topic with a sign that says ‘Don’t go here!’

historically, consciousness research has been a black hole. Smart people go in, but nothing comes out. So communities … naturally have an interest in putting up a fence around the topic with a sign that says ‘Don’t go here!’

But over time, people forgot why the mystery was blocked off, and started to think that the mystery doesn’t exist. This leads to people actively avoiding thinking about these topics without being able to articulate why.

Andrés: Is there value in studying extreme cases of valence? E.g., Buddhist monks who claim to achieve extreme sustainable bliss, or people on MDMA?

Mike: ‘What science can analyze, science can duplicate.’ And studying outliers such as your examples is a time-honored way of gathering data with a high signal-to-noise ratio. So yes, definitely. 🙂
Also see the 1st part and the 2nd part of this interview series. This interview with Christof Koch will likely be of interest as well.
 
Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is Co-founder of the Qualia Research Institute. Much of Mike’s research and writings can be found at the Open Theory website. ‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary. If you like Mike’s work, consider helping fund it at Patreon.
