Posts

Cognitive Biases & In-Group Convergences with Joscha Bach

True & false vs right & wrong – people converge their views to a set of rights and wrongs relative to the in-group biases of their peer group.
As a survival mechanism, converging with the group is sometimes healthier than being right, so one should sometimes optimize for convergence even at the cost of getting things wrong – humans probably have an evolutionary propensity to favor convergence over truth.
However, optimizing for convergence may result in the group mind being more stupid than the smartest people in the group.
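To make that last claim concrete, here is a toy sketch of my own (an illustration, not from the conversation; all numbers are arbitrary): agents hold noisy private estimates of some fact, but what they report is pulled toward what the in-group already believes. With enough conformity, the group consensus lands further from the truth than the group's best-informed member.

```python
# Toy illustration (not from the talk): agents privately estimate a hidden
# quantity, but their public reports are pulled toward the in-group's prior.
# With high conformity, the consensus lands further from the truth than the
# best individual estimate - the "group mind" is dumber than its smartest member.
import random

random.seed(0)
TRUTH = 10.0          # the actual state of the world
GROUP_PRIOR = 4.0     # what the in-group already "knows" (off the mark)
CONFORMITY = 0.8      # weight placed on agreeing with the group

estimates = [TRUTH + random.gauss(0, 2) for _ in range(20)]   # private evidence
reports = [(1 - CONFORMITY) * e + CONFORMITY * GROUP_PRIOR    # what gets said
           for e in estimates]

consensus = sum(reports) / len(reports)
best_individual = min(estimates, key=lambda e: abs(e - TRUTH))

print(f"truth           = {TRUTH:.1f}")
print(f"group consensus = {consensus:.1f}")    # pulled toward the prior
print(f"best individual = {best_individual:.1f}")
```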

 

 
Joscha highlights the controversy around James Damore, the Google engineer fired for circulating a memo arguing that biological differences between men and women affect their abilities as engineers – where Damore’s arguments may be correct. Regardless of the facts about how biological differences affect differences in ability between men & women, Google fired him because it judged that supporting these arguments would make for a worse social environment.

This sort of thing leads to an interesting difference in discourse, where:
* ‘nerds’ tend to focus on ‘content’ – on imparting ideas and facts that everyone can judge autonomously to form their own opinions – the view being that in order to craft the best solutions we need to have the best facts
* for most people the purpose of communication is ‘coordination’ between individuals and groups (society, nations etc.) – where the value of a ‘fact’ lies in its effect on the coordination between people

So is Google’s response to the memo controversy about getting the facts right, or about how Google at this point should be organised?

What’s also really interesting is that different types of people read this ‘memo’ very differently, which makes it very difficult to form agreement about its content. How can one agree on what is valuable about communication – whether it is primarily about imparting ideas and facts, or primarily about coordination?

More recently there has been a lot of talk about #FakeNews – it is very difficult to get people to agree to things that are not in their own interests, including, as Joscha points out, the idea that truth matters.

Joscha Bach, Ph.D. is an AI researcher who has worked on and published about cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany, and has built computational models of motivated decision making, perception, categorization, and concept-formation. He is especially interested in the philosophy of AI and in the augmentation of the human mind.

Discussion points:
– In-group convergence: thinking in true & false vs right & wrong
– The group mind may be more stupid than the smartest individuals in the group

On Consciousness, Qualia, Valence & Intelligence with Mike Johnson

Andrés L. Gómez Emilsson

Andrés Gómez Emilsson joined in to add very insightful questions for a 3-part interview series with Mike Johnson, covering the relationship of metaphysics to qualia/consciousness/hedonic valence and how they define their terms, whether panpsychism matters, increasing sensitivity to bliss, valence variance, Effective Altruism, cause prioritization, and the importance of consciousness/valence research.
Andrés Gómez Emilsson interviews Mike Johnson

Carving Reality at the Joints

Andrés L. Gómez Emilsson: Do metaphysics matter for understanding qualia, consciousness, valence and intelligence?

Mike Johnson: If we define metaphysics as the study of what exists, it absolutely does matter for understanding qualia, consciousness, and valence. I think metaphysics matters for intelligence, too, but in a different way. The big question is whether terms like qualia, consciousness, and valence “carve reality at the joints” or whether they’re emergent linguistic constructs that don’t reflect the structure of the universe. And if these things are ‘real’ in some sense, the follow-up question is: how can we formalize these concepts? Intelligence seems different: it seems like a ‘fuzzy’ concept, without a good “crisp”, or frame-invariant, definition.

Andrés: What about sources of sentient valence outside of human brains? What is the “minimum viable valence organism”? What would you expect it to look like?

Mike Johnson

Mike: If some form of panpsychism is true – and it’s hard to construct a coherent theory of consciousness without allowing panpsychism – then I suspect two interesting things are true.
  1. A lot of things are probably at least a little bit conscious. The “minimum viable valence experiencer” could be pretty minimal. Both Brian Tomasik and Stuart Hameroff suggest that there could be morally-relevant experience happening at the level of fundamental physics. This seems highly counter-intuitive but also logically plausible to me.
  2. Biological organisms probably don’t constitute the lion’s share of moral experience. If there’s any morally-relevant experience that happens on small levels (e.g., quantum fuzz) or large levels (e.g., black holes, or eternal inflation), it probably outweighs what happens on Earth by many, many, many orders of magnitude. Whether it’ll outweigh the future impact of humanity on our light-cone is an open question.


In contrast with Brian Tomasik on this issue, I suspect (and hope) that the lion’s share of the qualia of the universe is strongly net positive. Appendix F of Principia Qualia talks a little more about this.

Andrés: What would be the implications of finding a sure-fire way to induce great valence for brief moments? Could this be used to achieve “strategic alignment” across different branches of utilitarianism?

Mike: A device that could temporarily cause extreme positive or negative valence on demand would immediately change the world. First, it would validate valence realism in a very visceral way. I’d say it would be the strongest philosophical argument ever made. Second, it would obviously have huge economic & ethical uses. Third, I agree that being able to induce strong positive & negative valence on demand could help align different schools of utilitarianism. Nothing would focus philosophical arguments about the discount rate between pleasure & suffering more than a (consensual!) quick blast of pure suffering followed by a quick blast of pure pleasure. Similarly, a lot of people live their lives in a rather numb state. Giving them a visceral sense that ‘life can be more than this’ could give them ‘skin in the game’. Fourth, it could mess a lot of things up. Obviously, being able to cause extreme suffering could be abused, but being able to cause extreme pleasure on-demand could lead to bad outcomes too. You (Andres) have written about wireheading before, and I agree with the game-theoretic concerns involved. I would also say that being able to cause extreme pleasure in others could be used in adversarial ways. More generally, human culture is valuable and fragile; things that could substantially disrupt it should be approached carefully.

A friend of mine was describing how in the 70s, the emerging field of genetic engineering held the Asilomar Conference on Recombinant DNA to discuss how the field should self-regulate. The next year, these guidelines were adopted by the NIH wholesale as the basis for binding regulation, and other fields (such as AI safety!) have attempted to follow the same model. So the culture around technologies may reflect a strong “founder effect”, and we should be on the lookout for a good, forward-looking set of principles for how valence technology should work. One principle that seems to make sense is to not publicly post ‘actionable’ equations, pseudocode, or code for how one could generate suffering with current computing resources (if this is indeed possible). Another principle is to focus resources on positive, eusocial applications only, insofar as that’s possible – I’m especially concerned about addiction, and bad actors ‘weaponizing’ this sort of research. Another would be to be on guard against entryism, or people who want to co-opt valence research for political ends. All of this is pretty straightforward, but it would be good to work it out a bit more formally, look at the successes and failures of other research communities, and so on.


A question I find very interesting is whether valence research is socially disruptive or socially stabilizing by default. I think we should try very hard to make it a socially stabilizing force. One way to think about this is in terms of existential risk. It’s a little weird to say, but I think the fact that so many people are jaded, or feel hopeless, is a big existential risk, because they feel like they have very little to lose. So they don’t really care what happens to the world, because they don’t have good qualia to look forward to, no real ‘skin in the game’. If valence tech could give people a visceral, ‘felt sense’ of wonder and possibility, I think the world could become a much safer place, because more people would viscerally care about AI safety, avoiding nuclear war, and so on.

Finally, one thing that I think doesn’t make much sense is handing off the ethical issues to professional bioethicists and expecting them to be able to help much. Speaking as a philosopher, I don’t think bioethics itself has healthy community & research norms (maybe bioethics needs some bioethicsethicists…). And in general, especially when issues are particularly complex or technical, I think the best research norms come from within a community.

Andrés: What is the role of valence variance in intelligence? Can a sentient being use its consciousness in any computationally fruitful way without any valence variance? Can a “perfectly flat world(-simulation)” be used for anything computational?

Mike: I think we see this today, with some people suffering from affective blunting (muted emotions) but seemingly living functional lives. More generally, what a sentient agent functionally accomplishes, and how it feels as it works toward that goal, seem to be correlated but not identical. I.e., one can vary without the other. But I don’t think that valence is completely orthogonal to behavior, either. My one-sentence explanation here is that evolution seems to have latched onto the property which corresponds to valence – which I argue is symmetry – in deep ways, and has built our brain-minds around principles of homeostatic symmetry. This naturally leads to a high variability in our valence, as our homeostatic state is perturbed and restored. Logically, we could build minds around different principles – but it might be a lot less computationally efficient to do so. We’ll see. 🙂

Why we seek out pleasure: the Symmetry Theory of Homeostatic Regulation

One angle of research here could be looking at people who suffer from affective blunting, and trying to figure out if it holds them back: what it makes them bad at doing. It’s possible that this could lead to understanding human-style intelligence better. Going a little further, we can speculate that given a certain goal or computation, there could be “valence-positive” processes that could accomplish it, and “valence-negative” processes. This implies that there’s a nascent field of “ethical computation” that would evaluate the valence of different algorithms running on different physical substrates, and choose the one that best satisfices between efficiency and valence. (This is of course a huge simplification which glosses over tons of issues…)
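As a very rough illustration of what such an “ethical computation” selection step might look like (a speculative sketch; the candidate names, scores, and the idea of an off-the-shelf valence estimate are all hypothetical, and nothing like this exists today):

```python
# Hypothetical sketch of "ethical computation": among functionally equivalent
# implementations, keep only those whose (speculative) valence estimate clears
# a floor, then pick the most efficient survivor (satisfice on valence,
# optimize on efficiency). All names and numbers below are made up.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    efficiency: float         # e.g. useful work per joule (higher is better)
    estimated_valence: float  # placeholder for some future valence model

VALENCE_FLOOR = 0.0           # satisficing threshold: no net-negative processes

candidates = [
    Candidate("implementation_A", efficiency=0.9, estimated_valence=-0.4),
    Candidate("implementation_B", efficiency=0.7, estimated_valence=0.1),
    Candidate("implementation_C", efficiency=0.5, estimated_valence=0.6),
]

acceptable = [c for c in candidates if c.estimated_valence >= VALENCE_FLOOR]
chosen = max(acceptable, key=lambda c: c.efficiency) if acceptable else None
print(chosen)  # implementation_B wins under these made-up numbers
```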
Andrés: What should we prioritize: super-intelligence, super-longevity or super-happiness? Does the order matter? Why?

Mike: I think it matters quite a bit! For instance, I think the world looks a lot different if we figure out consciousness *before* AGI, versus if we ignore it until AGI is built. The latter seems to involve various risks that the former doesn’t. A risk that I think we both agree is serious and real is this notion of “what if accelerating technology leads to Malthusian conditions where agents don’t – and literally can’t, from a competitive standpoint – care about qualia & valence?” Robin Hanson has a great post called “This is the Dream Time” (of relaxed selection). But his book “Age of Em” posits a world where selection pressures go back up very dramatically. I think if we enter such an era without a good theory of qualia, we could trade away a lot of what makes life worth living.
Andrés: What are some conceptual or factual errors that you see happening in the transhumanist/rationalist/EA community related to modeling qualia, valence and intelligence?

Mike: First, I think it’s only fair to mention what these communities do right. I’m much more likely to have a great conversation about these topics with EAs, transhumanists, and rationalists than a random person off the street, or even a random grad student. People from this community are always smart, usually curious, often willing to explore fresh ideas and stretch their brain a bit, and sometimes able to update based on purely abstract arguments. And there’s this collective sense that ideas are important and have real implications for the future. So there’s a lot of great things happening in these communities and they’re really a priceless resource for sounding out theories, debating issues, and so on. But I would highlight some ways in which I think these communities go astray.


First, people don’t realize how bad most existing models of qualia & valence are. Michael Graziano argues that most theories of consciousness are worse than wrong – they play to our intuitions but don’t actually explain anything. Computationalism, functionalism, fun theory, ‘hedonic brain regions’, ‘pleasure neurochemicals’, the reinforcement learning theory of valence, and so on all give the illusion of explanatory depth but don’t actually explain things in a way which allows us to do anything useful.

Second, people don’t realize how important a good understanding of qualia & valence is. They’re upstream of basically everything interesting and desirable. Here’s what I think has happened, at least in the rationalist community: historically, consciousness research has been a black hole. Smart people go in, but nothing comes out. So communities (such as physicists and LessWrong) naturally have an interest in putting up a fence around the topic with a sign that says ‘Don’t go here!’ But over time, people forgot why the mystery was blocked off, and started to think that the mystery doesn’t exist. This leads to people actively avoiding thinking about these topics without being able to articulate why.

Andrés: Is there value in studying extreme cases of valence? E.g. Buddhist monks who claim to achieve extreme sustainable bliss, or people on MDMA?

Mike: ‘What science can analyze, science can duplicate.’ And studying outliers such as your examples is a time-honored way of gathering data with high signal-to-noise. So yes, definitely. 🙂
Also see the 1st part and the 2nd part of this interview series. This interview with Christof Koch will also likely be of interest.
 
Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is Co-founder of the Qualia Research Institute. Much of Mike’s research and writings can be found at the Open Theory website. ‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary. If you like Mike’s work, consider helping fund it at Patreon.

Why don’t we see more larger brained species in our ecosystem?

Species with larger brains seem to have a higher general intelligence. So why haven’t all species evolved larger brains? Possibly because general intelligence relies on a capability for social learning, so larger brains are only useful for species that rely more on social learning. – Kaj Sotala

Larger brains require more fuel, larger brains require larger heads, larger heads are harder to scaffold, and scaffolding costs nutrients – there are so many trade-offs. Scaffolding plus big brains often come at the expense of other important morphologies: locomotion abilities, large craniums at the expense of stronger and bigger mouths, and in many species larger heads correlate with more complications during birth.

Perhaps I should add (and this seems similar to some AI singleton scenarios): larger brains, which as mentioned correlate with greater intelligence, could give distinct first-mover advantages when coupled with other morphologies that enable the organism to generally & efficiently manipulate its environment (opposable thumbs etc.). The first-mover advantage may be so powerful as to diminish dependencies on environmental factors – or on the rest of the ecological web. We, as arguably the most intelligent species on the planet, seem to be moving through an era in which we depend decreasingly on aspects of the ecosystem or of our evolutionarily endowed morphology – we synthesize better alternatives – and it’s happening so fast that evolution by natural selection can’t keep up: its production of ‘new better models’ of organisms is crowded out by the rapid progress of civilization.

So perhaps one reason we don’t see more larger brained species is anthropic – roughly, the higher the occurrence of large brained morphologies, the higher the likelihood of a single species seizing the first-mover advantages of high intelligence and quickly becoming technologically advanced enough to subdue and manipulate the ecosystem, which may crowd out (and essentially retard) the power of evolution by natural selection to evolve larger brains in other species. That species may go on to inadvertently destroy the ecosystem, intentionally phase out the current ecosystem (involving blind natural selection) perhaps for ethical or instrumental reasons, or leave the ecosystem and natural selection to their own devices and go interstellar – in which case there would be at least one fewer large brained species in the ecosystem, perhaps re-opening a niche for another species to fill.

This was inspired by Kaj Sotala’s query about why we don’t see more larger brained species than we currently do.