Posts

Uncovering the Mysteries of Affective Neuroscience – the Importance of Valence Research with Mike Johnson

Valence in overview

Adam: What is emotional valence (as opposed to valence in chemistry)?

Mike: Put simply, emotional valence is how pleasant or unpleasant something is. A somewhat weird fact about our universe is that some conscious experiences do seem to feel better than others.

 

Adam: What makes things feel the way they do? What makes some things feel better than others?

Mike: This sounds like it should be a simple question, but neuroscience just doesn’t know. It knows a lot of scattered facts about which kinds of experiences, and which kinds of brain activation patterns, feel good, and which feel bad, but it doesn’t have anything close to a general theory here.

And the way affective neuroscience talks about this puzzle sometimes sort of covers this mystery up, without solving it. For instance, we know that certain regions of the brain, like the nucleus accumbens and ventral pallidum, seem to be important for pleasure, so we call them “pleasure centers”. But we don’t know what makes something a pleasure center. We don’t even know how common painkillers like acetaminophen (paracetamol) work! Which is kind of surprising.

In contrast, the hypothesis about valence I put forth in Principia Qualia would explain pleasure centers and acetaminophen and many other things in a unified, simple way.

 

Adam: How does the hypothesis about valence work?

Mike: My core hypothesis is that symmetry in the mathematical representation of an experience corresponds to how pleasant or unpleasant that experience is. I see this as an identity relationship which is ‘True with a capital T’, not merely a correlation.  (Credit also goes to Andres Gomez Emilsson & Randal Koene for helping explore this idea.)

What makes this hypothesis interesting is that:

  1. On a theoretical level, it could unify all existing valence research, from Berridge’s work on hedonic hotspots, to Friston & Seth’s work on predictive coding, to Schmidhuber’s idea of a compression drive;

  2. It could finally explain how the brain’s so-called “pleasure centers” work – they function to tune the brain toward more symmetrical states;

  3. It implies lots and lots of weird, bold, *testable* hypotheses. For instance, we know that painkillers like acetaminophen, and anti-depressants like SSRIs, actually blunt both negative *and* positive affect, but we’ve never figured out how. Perhaps they do so by introducing a certain type of stochastic noise into acute & long-term activity patterns, respectively, which disrupts both symmetry (pleasure) and anti-symmetry (pain).

 

Adam: What kinds of tests would validate or disconfirm your hypothesis? How could it be falsified and/or justified by weight of induction?

Mike: So this depends on the details of how activity in the brain generates the mind. But I offer some falsifiable predictions in PQ (Principia Qualia):

  • If we control for degree of consciousness, more pleasant brain states should be more compressible;
  • Direct, low-power stimulation (TMS) in harmonious patterns (e.g. 2 Hz + 4 Hz + 6 Hz + 8 Hz … 160 Hz) should feel remarkably more pleasant than stimulation with similar-yet-dissonant patterns (2.01 Hz + 3.99 Hz + 6.15 Hz …) – see the toy sketch below.
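
These predictions lend themselves to a toy demonstration. The sketch below is my illustration, not anything from PQ: it synthesizes a ‘harmonious’ sum-of-sines signal and a slightly detuned one, quantizes both, and compares how well each compresses – a crude stand-in for the symmetry/compressibility claim. All frequencies and parameters are arbitrary choices.

```python
# Toy sketch (illustrative only, not from Principia Qualia): a harmonic
# frequency stack yields a periodic signal that compresses better than a
# slightly-detuned stack – a crude proxy for "more symmetry, more
# compressible".
import zlib
import numpy as np

def compression_ratio(freqs_hz, duration_s=2.0, fs=1000):
    """Synthesize a sum of sines, quantize to 8 bits, and return
    compressed_size / raw_size (lower means more compressible)."""
    t = np.arange(0, duration_s, 1.0 / fs)
    sig = sum(np.sin(2 * np.pi * f * t) for f in freqs_hz)
    sig = (sig - sig.min()) / (sig.max() - sig.min())  # normalize to [0, 1]
    raw = (sig * 255).astype(np.uint8).tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

rng = np.random.default_rng(0)
harmonic = [2.0 * k for k in range(1, 11)]             # 2, 4, ..., 20 Hz
detuned = [2.0 * k + rng.uniform(-0.15, 0.15) for k in range(1, 11)]

# The harmonic stack should typically print the lower (better) ratio.
print("harmonic:", compression_ratio(harmonic))
print("detuned: ", compression_ratio(detuned))
```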

Those are some ‘obvious’ ways to test this. But my hypothesis also implies odd things, such as that chronic tinnitus (ringing in the ears) should produce affective blunting (a lessened ability to feel strong valence).

Note: see https://qualiacomputing.com/2017/06/18/quantifying-bliss-talk-summary/ and http://opentheory.net/2018/08/a-future-for-neuroscience/ for a more up-to-date take on this.

 

Adam: Why is valence research important?

Mike Johnson: Put simply, valence research is important because valence is important. David Chalmers famously coined “The Hard Problem of Consciousness”, or why we’re conscious at all, and “The Easy Problem of Consciousness”, or how the brain processes information. I think valence research should be called “The Important Problem of Consciousness”. When you’re in a conscious moment, the most important thing to you is how pleasant or unpleasant it feels.

That’s the philosophical angle. We can also take the moral perspective, and add up all the human and non-human animal suffering in the world. If we knew what suffering was, we could presumably use this knowledge to more effectively reduce it and make the world a kinder place.

We can also take the economic perspective, and add up all the person-years, capacity to contribute, and quality of life lost to Depression and chronic pain. A good theory of valence should allow us to create much better treatments for these things. And probably make some money while doing it.

Finally, a question I’ve been wondering for a while now is whether having a good theory of qualia could help with AI safety and existential risk. I think it probably can, by helping us see and avoid certain failure-modes.

 

Adam: How can understanding valence help make future AIs safer? (E.g., in helping define how an AI should approach making us happy, or in terms of a reinforcement mechanism for AI?)

Mike: Last year, on my blog, I noted a few ways a better understanding of valence could help make future AIs safer. I’d point out a few notions in particular though:

  • If we understand how to measure valence, we could use this as part of a “sanity check” for AI behavior. If some proposed action would cause lots of suffering, maybe the AI shouldn’t do it.
  • Understanding consciousness & valence seems important for treating an AI humanely. We don’t want to inadvertently torture AIs – but how would we know?
  • Understanding consciousness & valence seems critically important for “raising the sanity waterline” on metaphysics. Right now, you can ask 10 AGI researchers about what consciousness is, or what has consciousness, or at what level of abstraction to define value, and you’ll get at least 10 different answers. This is absolutely a recipe for trouble. But I think this is an avoidable mess if we get serious about understanding this stuff.

 

Adam: Why the information theoretical approach?

Mike: The way I would put it, there are two kinds of knowledge about valence: (1) how pain & pleasure work in the human brain, and (2) universal principles which apply to all conscious systems, whether they’re humans, dogs, dinosaurs, aliens, or conscious AIs.

It’s counter-intuitive, but I think these more general principles might be a lot easier to figure out than the human-specific stuff. Brains are complicated, but it could be that the laws of the universe, or regularities, which govern consciousness are pretty simple. That’s certainly been the case when we look at physics. For instance, my iPhone’s processor is super-complicated, but it runs on electricity, which itself actually obeys very simple & elegant laws.

Elsewhere I’ve argued that:

>Anything piped through the complexity of the brain will look complex, regardless of how simple or complex it starts out as. Similarly, anything will look irreducibly complex if we’re looking at it from the wrong level of abstraction.

 

Adam: What do you think of Thomas A. Bass’s view of Information Theory – he thinks that (at least in many cases) it has not been easy to turn data into knowledge. That there is a pathological attraction to information which is making us ‘sick’ – he calls it Information Pathology. If his view offers any useful insights to you concerning avoiding ‘Information Pathology’ – what would they be?

Mike: Right, I would agree with Bass that we’re swimming in neuroscience data, but it’s not magically turning into knowledge. There was a recent paper called “Could a neuroscientist understand a microprocessor?” which asked if the standard suite of neuroscience methods could successfully reverse-engineer the 6502 microprocessor used in the Atari 2600 and NES. This should be easier than reverse-engineering a brain, since the chip is a lot smaller and simpler, and since they were analyzing it in software they had all the data they could ever ask for – but it turned out that the methods they were using couldn’t cut it. Which really raises the question of whether these methods can make progress on reverse-engineering actual brains. As the paper puts it, neuroscience thinks it’s data-limited, but it’s actually theory-limited.

The first takeaway from this is that even in the age of “big data” we still need theories, not just data. We still need people trying to guess Nature’s structure and figuring out what data to even gather. Relatedly, I would say that in our age of “Big Science” relatively few people are willing or able to be sufficiently bold to tackle these big questions. Academic promotions & grants don’t particularly reward risk-taking.

 

Adam: Information Theory frameworks – what is your “Eight Problems” framework and how does it contrast with Giulio Tononi’s Integrated Information Theory (IIT)? How might IIT help address valence in a principled manner? What is lacking in IIT – and how does your ‘Eight Problems’ framework address this?

Mike: IIT is great, but it’s incomplete. I think of it as *half* a theory of consciousness. My “Eight Problems for a new science of consciousness” framework describes what a “full stack” approach would look like, what IIT will have to do in order to become a full theory.

The two biggest problems IIT faces are that (1) it’s not compatible with physics, so we can’t actually apply it to any real physical systems, and (2) it says almost nothing about what its output means. Both of these are big problems! But IIT is also the best and only game in town in terms of quantitative theories of consciousness.

Principia Qualia aims to help fix IIT, and also to build a bridge between IIT and valence research. If IIT is right, and we can quantify conscious experiences, then how pleasant or unpleasant this experience is should be encoded into its corresponding mathematical object.

 

Adam: What are the three principles for a mathematical derivation of valence?

Mike: First, a few words about the larger context. Probably the most important question in consciousness research is whether consciousness is real, like an electromagnetic field is real, or an inherently complex, irreducible linguistic artifact, like “justice” or “life”. If consciousness is real, then there’s interesting stuff to discover about it, like there was interesting stuff to discover about quantum mechanics and gravity. But if consciousness isn’t real, then any attempt to ‘discover’ knowledge about it will fail, just like attempts to draw a crisp definition for ‘life’ (elan vital) failed.

If consciousness is real, then there’s a hidden cache of predictive knowledge waiting to be discovered. If consciousness isn’t real, then the harder we try to find patterns, the more elusive they’ll be- basically, we’ll just be talking in circles. David Chalmers refers to a similar distinction with his “Type-A vs Type-B Materialism”.

I’m a strong believer in consciousness realism, as are my research collaborators. The cool thing here is, if we assume that consciousness is real, a lot of things follow from this– like my “Eight Problems” framework. Throw in a couple more fairly modest assumptions, and we can start building a real science of qualia.

Anyway, the formal principles are the following:

  1. Consciousness can be quantified. (More formally, that for any conscious experience, there exists a mathematical object isomorphic to it.)
  2. There is some order, some rhyme & reason & elegance, to consciousness. (More formally, the state space of consciousness has a rich set of mathematical structures.)
  3. Valence is real. (More formally, valence is an ordered property of conscious systems.)
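
Read formally, one minimal gloss of these principles might look like the following (the notation is my illustration, not Mike’s own formalism; E is the set of conscious experiences and Q its state space):

```latex
% Illustrative gloss of the three principles (assumed notation).
% 1. Quantifiability: every conscious experience e has an isomorphic
%    mathematical object M(e).
\forall e \in E \;\; \exists\, M(e) \ \text{such that} \ e \cong M(e)

% 2. Structure: the state space Q carries rich mathematical structure,
%    e.g. a metric d, making (Q, d) a metric space.
d : Q \times Q \to \mathbb{R}_{\ge 0}

% 3. Valence realism: valence is an ordered property of conscious states,
%    i.e. a map into an ordered set.
v : Q \to \mathbb{R}, \qquad q_1 \preceq q_2 \iff v(q_1) \le v(q_2)
```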

 

Basically, they combine to say: this thing we call ‘valence’ could have a relatively simple mathematical representation. Figuring out valence might not take an AGI several million years. Instead, it could be almost embarrassingly easy.

 

Adam: Do Qualia Structuralism, Valence Structuralism and Valence Realism relate to the philosophy of physics principles of realism and structuralism? If so, is there an equivalent ontic Qualia Structuralism and Valence Structuralism?

Mike: “Structuralism” means many things in many contexts. I use it in a specifically mathematical way, to denote that the state space of qualia quite likely embodies many mathematical structures, or properties (such as being a metric space).

Re: your question about ontics, I tend to take the empirical route and evaluate claims based on their predictions whenever possible. I don’t think predictions change if we assume realism vs structuralism in physics, so maybe it doesn’t matter. But I can get back to you on this. 🙂

 

Adam: What about the Qualia Research Institute I’ve also recently heard about? :D It seems both you (Mike) and Andrés Gómez Emilsson are doing some interesting work there.

Mike: We know very little about consciousness. This is a problem, for various and increasing reasons– it’s upstream of a lot of futurist-related topics.

But nobody seems to know quite where to start unraveling this mystery. The way we talk about consciousness is stuck in “alchemy mode”– we catch glimpses of interesting patterns, but it’s unclear how to systematize this into a unified framework. How to turn ‘consciousness alchemy’ into ‘consciousness chemistry’, so to speak.

Qualia Research Institute is a research collective which is working on building a new “science of qualia”. Basically, we think our “full-stack” approach cuts through all the confusion around this topic and can generate hypotheses which are novel, falsifiable, and useful.

Right now, we’re small (myself, Andres, and a few others behind the scenes) but I’m proud of what we’ve accomplished so far, and we’ve got more exciting things in the pipeline. 🙂

Also see the 2nd part, and the 3rd part of this interview series. Also this interview with Christof Koch will likely be of interest.

 

Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is Co-founder of the Qualia Research Institute. Much of Mike’s research and writings can be found at the Open Theory website.
‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary.
If you like Mike’s work, consider helping fund it at Patreon.

Physicalism & Materialism – John Wilkins

Materialism was a pre-Socratic view that for something to be real it has to be matter – physical stuff made of atoms (which at the time were considered hard like billiard balls – fundamental parts of reality). The term physicalism is used these days because it can describe things that aren’t matter – like forces – or that aren’t observable matter – like dark matter, energy, fields, or spacetime. Physicalism is the idea that all that exists can be described in the language of some ‘ideal’ physics – we may never know what this ideal physics is, though people think it is something close to our current physics (since we can make very accurate predictions with our current physics).

If magic, telepathy or angels were real, there would be a physics that could describe them – they’d have patterns and properties that would be describable and explainable. A physicalist would likely think that even the mind operates according to physical rules. Being a physicalist, according to John, means you think everything is governed by rules – physical rules – and that there is an ideal language that can be used to describe all this.

Note John is also a deontologist.  Perhaps there should exist an ideal language that can fully describe ethics – does this mean that ideally there is no need for utilitarianism?  I’ll leave that question for another post.

Interview with John Wilkins on Materialism & Physicalism.

Here are some blog posts about physicalism by John Wilkins:

Is physicalism an impoverished metaphysics?

Every so often, we read about some philosopher or other form of public intellectual who makes the claim that a physicalist ontology – a world view in which only things that can be described in terms of physics are said to exist – is impoverished. That is, there are things whereof science cannot know, &c. A recent example is that made by Thomas Nagel [nicely eviscerated here by the physicist Sean Carroll], whose fame in philosophy rests on an influential 1974 paper arguing that there is something it is like to be a bat that no amount of physics, physiology or other objective science could account for.

Recently, Nagel has argued that the evolutionary view called (historically misleadingly) neo-Darwinism is “almost certainly” false. One of the reasons is that “materialism” (which Nagel should know is an antiquated world view replaced by physicalism, defined above; there are many non-material things in physics, not least fields of various kinds) does not permit a full account of consciousness: the subjective facts of being a particular individual organism. Another is that the chance that life would emerge from a lifeless universe is staggeringly unlikely. How this is calculated is somewhat mysterious, given that at best we only have (dare I say it?) subjective estimates anyway, but there it is.

But Nagel is not alone. Various nonreligious (apparently) thinkers have made similar assertions, although some, like Frank Jackson, who proposed the Knowledge Argument, have since backed down. What is it that physicalism must account for that these disputants and objectors say it cannot?

It almost entirely consists of consciousness, intentions, intelligence or some similar mental property which is entirely inexplicable by “reductionist” physicalism. [Reductionism is a term of abuse that means – so far as I can tell – solely that the person who makes such an accusation does not like the thing or persons being accused.] And that raises our question: is physicalism lacking something?

I bet you are dying to know more… you’ll just have to follow the link…
See more at Evolving Thoughts>>

Is Physicalism Coherent?

In my last post I argued that physicalism cannot be rejected simply because people assert there are nonphysical objects which are beyond specification. Some are, however, specifiable, and one commentator has identified the obvious ones: abstract objects like the rules of chess or numbers. I have dealt with these before in my “Pizza reductionism” post, which I invite you to go read.

Done? OK, then; let us proceed.

It is often asserted that there are obviously things that are not physical, such as ideas, numbers, concepts, etc., quite apart from qualia. I once sat with a distinguished philosopher, whom I respect greatly and so shall not name, when he asserted that we can construct natural classifications because we can deal first with the natural numbers. I asked him “In what sense are numbers natural objects?”, meaning, why should we think numbers are entities in the natural world. He admitted that the question had not occurred to him (I doubt that – he is rather smart), but that it was simply an axiom of his philosophy. I do not think such abstract objects are natural.

This applies to anything that is “informational”, including all semantic entities like meanings, symbols, lexical objects, and so on. They only “exist” as functional modalities in our thoughts and language. I have also argued this before: information does not “exist”; it is a function of how we process signals. Mathematics is not a domain, it is a language, and the reason it works is because the bits that seriously do not work are not explored far[*] – not all of it has to work in a physical or natural sense, but much of it has to, or else it becomes a simple game that we would not play so much.

So the question of the incoherence of physicalism is based on the assumption (which runs contrary to physicalism, and is thus question begging) that abstract objects are natural things. I don’t believe they are, and I certainly do not think that a thought, or concept, for example, which can be had by many minds and is therefore supposed to be located in none of them (and thus transcendental), really is nonphysical. That is another case of nouning language. The thought “that is red” exists, for a physicalist, in all the heads that meet the functional social criteria for ascriptions of red. It exists nowhere else – it just is all those cognitive and social behaviours in biological heads…

Yes, I know, it’s a real page turner…
See more at Evolving Thoughts>>

In philosophy, physicalism is the ontological thesis that “everything is physical”, that there is “nothing over and above” the physical, or that everything supervenes on the physical. Physicalism is a form of ontological monism – a “one substance” view of the nature of reality, as opposed to a “two-substance” (dualism) or “many-substance” (pluralism) view. Both the definition of physical and the meaning of physicalism have been debated.

Physicalism is closely related to materialism. Physicalism grew out of materialism with the success of the physical sciences in explaining observed phenomena. The terms are often used interchangeably, although they are sometimes distinguished, for example on the basis of physics describing more than just matter (including energy and physical law). Common arguments against physicalism include both the philosophical zombie argument and the multiple observers argument, that the existence of a physical being may imply zero or more distinct conscious entities.

“When I lost my belief in religion I had to decide what I needed to accept as a bare minimum. I decided that I needed to believe in the physical world. I never found the slightest reason to accept the existence of anything else. To this day I am a physicalist only because I never found the need to be anything else. The principle of parsimony suggests that one should not believe in more than one needs to. Even if it does make you feel comfortable.”

 

Let’s get physicalism!

See John Wilkins’ blog ‘Evolving Thoughts’.

#philsci #philosophy #science #physics

On Consciousness, Qualia, Valence & Intelligence with Mike Johnson

Andrés L. Gómez Emilsson

Andrés Gómez Emilsson joined in to add very insightful questions for a 3-part interview series with Mike Johnson, covering the relationship of metaphysics to qualia/consciousness/hedonic valence, defining their terms, whether panpsychism matters, increasing sensitivity to bliss, valence variance, Effective Altruism, cause prioritization, and the importance of consciousness/valence research.

Andrés Gómez Emilsson interviews Mike Johnson

Carving Reality at the Joints

Andrés L. Gómez Emilsson: Do metaphysics matter for understanding qualia, consciousness, valence and intelligence?

Mike Johnson: If we define metaphysics as the study of what exists, it absolutely does matter for understanding qualia, consciousness, and valence. I think metaphysics matters for intelligence, too, but in a different way.

The big question is whether terms like qualia, consciousness, and valence “carve reality at the joints” or whether they’re emergent linguistic constructs that don’t reflect the structure of the universe. And if these things are ‘real’ in some sense, the follow-up question is: how can we formalize these concepts?

Intelligence seems different: it seems like a ‘fuzzy’ concept, without a good “crisp”, or frame-invariant, definition.

Andrés: What about sources of sentient valence outside of human brains? What is the “minimum viable valence organism”? What would you expect it to look like?

Mike Johnson

Mike: If some form of panpsychism is true- and it’s hard to construct a coherent theory of consciousness without allowing panpsychism- then I suspect two interesting things are true.

  1. A lot of things are probably at least a little bit conscious. The “minimum viable valence experiencer” could be pretty minimal. Both Brian Tomasik and Stuart Hameroff suggest that there could be morally-relevant experience happening at the level of fundamental physics. This seems highly counter-intuitive but also logically plausible to me.
  2. Biological organisms probably don’t constitute the lion’s share of moral experience. If there’s any morally-relevant experience that happens on small levels (e.g., quantum fuzz) or large levels (e.g., black holes, or eternal inflation), it probably outweighs what happens on Earth by many, many, many orders of magnitude. Whether it’ll outweigh the future impact of humanity on our light-cone is an open question.

In contrast with Brian Tomasik on this issue, I suspect (and hope) that the lion’s share of the qualia of the universe is strongly net positive. Appendix F of Principia Qualia talks a little more about this.

Andrés: What would be the implications of finding a sure-fire way to induce great valence for brief moments? Could this be used to achieve “strategic alignment” across different branches of utilitarianism?

Mike: A device that could temporarily cause extreme positive or negative valence on demand would immediately change the world.

First, it would validate valence realism in a very visceral way. I’d say it would be the strongest philosophical argument ever made.

Second, it would obviously have huge economic & ethical uses.

Third, I agree that being able to induce strong positive & negative valence on demand could help align different schools of utilitarianism. Nothing would focus philosophical arguments about the discount rate between pleasure & suffering more than a (consensual!) quick blast of pure suffering followed by a quick blast of pure pleasure. Similarly, a lot of people live their lives in a rather numb state. Giving them a visceral sense that ‘life can be more than this’ could give them ‘skin in the game’.

Fourth, it could mess a lot of things up. Obviously, being able to cause extreme suffering could be abused, but being able to cause extreme pleasure on-demand could lead to bad outcomes too. You (Andres) have written about wireheading before, and I agree with the game-theoretic concerns involved. I would also say that being able to cause extreme pleasure in others could be used in adversarial ways. More generally, human culture is valuable and fragile; things that could substantially disrupt it should be approached carefully.

A friend of mine was describing how in the 70s, the emerging field of genetic engineering held the Asilomar Conference on Recombinant DNA to discuss how the field should self-regulate. The next year, these guidelines were adopted by the NIH wholesale as the basis for binding regulation, and other fields (such as AI safety!) have attempted to follow the same model. So the culture around technologies may reflect a strong “founder effect”, and we should be on the lookout for a good, forward-looking set of principles for how valence technology should work.

One principle that seems to make sense is to not publicly post ‘actionable’ equations, pseudocode, or code for how one could generate suffering with current computing resources (if this is indeed possible). Another principle is to focus resources on positive, eusocial applications only, insofar as that’s possible– I’m especially concerned about addiction, and bad actors ‘weaponizing’ this sort of research. Another would be to be on guard against entryism, or people who want to co-opt valence research for political ends.

All of this is pretty straightforward, but it would be good to work it out a bit more formally, look at the successes and failures of other research communities, and so on.


A question I find very interesting is whether valence research is socially disruptive or socially stabilizing by default. I think we should try very hard to make it a socially stabilizing force. One way to think about this is in terms of existential risk. It’s a little weird to say, but I think the fact that so many people are jaded, or feel hopeless, is a big existential risk, because they feel like they have very little to lose. So they don’t really care what happens to the world, because they don’t have good qualia to look forward to, no real ‘skin in the game’. If valence tech could give people a visceral, ‘felt sense’ of wonder and possibility, I think the world could become a much safer place, because more people would viscerally care about AI safety, avoiding nuclear war, and so on.

Finally, one thing that I think doesn’t make much sense is handing off the ethical issues to professional bioethicists and expecting them to be able to help much. Speaking as a philosopher, I don’t think bioethics itself has healthy community & research norms (maybe bioethics needs some bioethicsethicists…). And in general, especially when issues are particularly complex or technical, I think the best research norms come from within a community.

Andrés: What is the role of valence variance in intelligence? Can a sentient being use its consciousness in any computationally fruitful way without any valence variance? Can a “perfectly flat world(-simulation)” be used for anything computational?

 

Mike: I think we see this today, with some people suffering from affective blunting (muted emotions) but seemingly living functional lives. More generally, what a sentient agent functionally accomplishes, and how it feels as it works toward that goal, seem to be correlated but not identical. I.e., one can vary without the other.

But I don’t think that valence is completely orthogonal to behavior, either. My one-sentence explanation here is that evolution seems to have latched onto the property which corresponds to valence – which I argue is symmetry – in deep ways, and has built our brain-minds around principles of homeostatic symmetry (see: Why we seek out pleasure: the Symmetry Theory of Homeostatic Regulation). This naturally leads to a high variability in our valence, as our homeostatic state is perturbed and restored. Logically, we could build minds around different principles – but it might be a lot less computationally efficient to do so. We’ll see. 🙂 One angle of research here could be looking at people who suffer from affective blunting, and trying to figure out if it holds them back: what it makes them bad at doing. It’s possible that this could lead to understanding human-style intelligence better.

Going a little further, we can speculate that given a certain goal or computation, there could be “valence-positive” processes that could accomplish it, and “valence-negative” processes. This implies that there’s a nascent field of “ethical computation” that would evaluate the valence of different algorithms running on different physical substrates, and choose the one that best satisfices between efficiency and valence. (This is of course a huge simplification which glosses over tons of issues…)
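
To make the satisficing idea concrete, here is a minimal sketch. It is entirely hypothetical: the valence scores assume a measurement that doesn’t yet exist, and the names and numbers are invented for illustration.

```python
# Hypothetical sketch of "ethical computation": among functionally
# equivalent implementations, satisfice on efficiency (stay within a cost
# budget) and then maximize an assumed valence estimate.
from dataclasses import dataclass

@dataclass
class Implementation:
    name: str
    cost: float      # runtime/energy; lower is better
    valence: float   # hypothetical valence score; higher is better

def choose(implementations, cost_budget):
    """Pick the highest-valence implementation within the cost budget;
    fall back to the cheapest one if none fits."""
    feasible = [i for i in implementations if i.cost <= cost_budget]
    if feasible:
        return max(feasible, key=lambda i: i.valence)
    return min(implementations, key=lambda i: i.cost)

candidates = [
    Implementation("baseline", cost=1.0, valence=0.0),
    Implementation("symmetric-variant", cost=1.3, valence=0.8),
    Implementation("noisy-variant", cost=0.9, valence=-0.5),
]
print(choose(candidates, cost_budget=1.5).name)  # -> symmetric-variant
```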

Andrés: What should we prioritize: super-intelligence, super-longevity or super-happiness? Does the order matter? Why?

Mike: I think it matters quite a bit! For instance, I think the world looks a lot different if we figure out consciousness *before* AGI, versus if we ignore it until AGI is built. The latter seems to involve various risks that the former doesn’t.

A risk that I think we both agree is serious and real is this notion of “what if accelerating technology leads to Malthusian conditions where agents don’t- and literally can’t, from a competitive standpoint- care about qualia & valence?” Robin Hanson has a great post called “This is the Dream Time” (of relaxed selection). But his book “Age of Em” posits a world where selection pressures go back up very dramatically. I think if we enter such an era without a good theory of qualia, we could trade away a lot of what makes life worth living.

 

Andrés: What are some conceptual or factual errors that you see happening in the transhumanist/rationalist/EA community related to modeling qualia, valence and intelligence?

Mike: First, I think it’s only fair to mention what these communities do right. I’m much more likely to have a great conversation about these topics with EAs, transhumanists, and rationalists than a random person off the street, or even a random grad student. People from this community are always smart, usually curious, often willing to explore fresh ideas and stretch their brain a bit, and sometimes able to update based on purely abstract arguments. And there’s this collective sense that ideas are important and have real implications for the future. So there’s a lot of great things happening in these communities and they’re really a priceless resource for sounding out theories, debating issues, and so on.

But I would highlight some ways in which I think these communities go astray.

First, people don’t realize how bad most existing models of qualia & valence are. Michael Graziano argues that most theories of consciousness are worse than wrong- that they play to our intuitions but don’t actually explain anything. Computationalism, functionalism, fun theory, ‘hedonic brain regions’, ‘pleasure neurochemicals’, the reinforcement learning theory of valence, and so on all give the illusion of explanatory depth but don’t actually explain things in a way which allows us to do anything useful.

Second, people don’t realize how important a good understanding of qualia & valence are. They’re upstream of basically everything interesting and desirable.

Here’s what I think has happened, at least in the rationalist community: historically, consciousness research has been a black hole. Smart people go in, but nothing comes out. So communities (such as physicists and LessWrong) naturally have an interest in putting up a fence around the topic with a sign that says ‘Don’t go here!’ But over time, people forgot why the mystery was blocked off, and started to think that the mystery doesn’t exist. This leads to people actively avoiding thinking about these topics without being able to articulate why.

Andrés: Is there value in studying extreme cases of valence? E.g. Buddhist monks who claim to achieve extreme sustainable bliss, or people on MDMA?

Mike: ‘What science can analyze, science can duplicate.’ And studying outliers such as your examples is a time-honored way of gathering data with high signal-to-noise. So yes, definitely. 🙂


Also see the 1st part, and the 2nd part of this interview series. Also this interview with Christof Koch will likely be of interest.

 

Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is Co-founder of the Qualia Research Institute. Much of Mike’s research and writings can be found at the Open Theory website.
‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary.
If you like Mike’s work, consider helping fund it at Patreon.

Ethics, Qualia Research & AI Safety with Mike Johnson

What’s the relationship between valence research and AI ethics?

Hedonic valence is a measure of the quality of our felt sense of experience, the intrinsic goodness (positive valence) or averseness (negative valence) of an event, object, or situation.  It is an important aspect of conscious experience; always present in our waking lives. If we seek to understand ourselves, it makes sense to seek to understand how valence works – how to measure it and test for it.

Also, might there be a relationship to the AI safety/friendliness problem?
In this interview, we cover a lot of things, not least… THE SINGULARITY (of course) & the importance of Valence Research to AI Friendliness Research (as detailed here). Will thinking machines require experience with valence to understand its importance?

Here we cover some general questions about Mike Johnson’s views on recent advances in science and technology & what he sees as being the most impactful, what world views are ready to be retired, his views on XRisk and on AI Safety – especially related to value theory.

This is one part of an interview series with Mike Johnson (another section covers Consciousness, Qualia, Valence & Intelligence).

 

Adam Ford: Welcome Mike Johnson, many thanks for doing this interview. Can we start with your background?

Mike Johnson

Mike Johnson: My formal background is in epistemology and philosophy of science: what do we know & how do we know it, what separates good theories from bad ones, and so on. Prior to researching qualia, I did work in information security, algorithmic trading, and human augmentation research.

 

Adam: What is the most exciting / interesting recent (scientific/engineering) news? Why is it important to you?

Mike: CRISPR is definitely up there! In a few short years precision genetic engineering has gone from a pipe dream to reality. The problem is that we’re like the proverbial dog that caught up to the car it was chasing: what do we do now? Increasingly, we can change our genome, but we have no idea how we should change our genome, and the public discussion about this seems very muddled. The same could be said about breakthroughs in AI.

 

Adam: What are the most important discoveries/inventions over the last 500 years?

Mike: Tough question. Darwin’s theory of Natural Selection, Newton’s theory of gravity, Faraday & Maxwell’s theory of electricity, and the many discoveries of modern physics would all make the cut. Perhaps also the germ theory of disease. In general what makes discoveries & inventions important is when they lead to a productive new way of looking at the world.

 

Adam: What philosophical/scientific ideas are ready to be retired? What theories of valence are ready to be relegated to the dustbin of history? (Why are they still in currency? Why are they in need of being thrown away or revised?)

Mike: I think that 99% of the time when someone uses the term “pleasure neurochemicals” or “hedonic brain regions” it obscures more than it explains. We know that opioids & activity in the nucleus accumbens are correlated with pleasure– but we don’t know why, we don’t know the causal mechanism. So it can be useful shorthand to call these things “pleasure neurochemicals” and whatnot, but every single time anyone does that, there should be a footnote that we fundamentally don’t know the causal story here, and this abstraction may ‘leak’ in unexpected ways.

 

Adam: What have you changed your mind about?

Mike: Whether pushing toward the Singularity is unequivocally a good idea. I read Kurzweil’s The Singularity is Near back in 2005 and loved it- it made me realize that all my life I’d been a transhumanist and didn’t know it. But twelve years later, I’m a lot less optimistic about Kurzweil’s rosy vision. Value is fragile, and there are a lot more ways that things could go wrong, than ways things could go well.

 

Adam: I remember reading Eliezer’s writings on ‘The Fragility of Value’; it’s quite interesting and worth consideration – the idea that if we don’t get AI’s value system exactly right, then it would be like pulling a random mind out of mindspace – most likely inimical to human interests. The writing did seem quite abstract, and it would be nice to see a formal model or something concrete to show this would be the case. I’d really like to know how and why value is as fragile as Eliezer seems to make out. Is there any convincing, crisply defined model supporting this thesis?

Mike: Whether the ‘Complexity of Value Thesis’ is correct is super important. Essentially, the idea is that we can think of what humans find valuable as a tiny location in a very large, very high-dimensional space– let’s say 1000 dimensions for the sake of argument. Under this framework, value is very fragile; if we move a little bit in any one of these 1000 dimensions, we leave this special zone and get a future that doesn’t match our preferences, desires, and goals. In a word, we get something worthless (to us). This is perhaps most succinctly put by Eliezer in “Value is fragile”:

“If you loose the grip of human morals and metamorals – the result is not mysterious and alien and beautiful by the standards of human value. It is moral noise, a universe tiled with paperclips. To change away from human morals in the direction of improvement rather than entropy, requires a criterion of improvement; and that criterion would be physically represented in our brains, and our brains alone. … You want a wonderful and mysterious universe? That’s your value. … Valuable things appear because a goal system that values them takes action to create them. … if our values that prefer it are physically obliterated – or even disturbed in the wrong dimension. Then there is nothing left in the universe that works to make the universe valuable.”

If this frame is right, then it’s going to be really, really, really hard to get AGI right, because one wrong step in programming will make the AGI depart from human values, and “there will be nothing left to want to bring it back.” Eliezer, and I think most of the AI safety community, assumes this.
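
As a toy illustration of why this framing implies fragility (my numbers, not Eliezer’s): if “getting it right” means staying within tolerance on every one of 1000 dimensions, then roughly 95% reliability per dimension still gives a joint success probability of essentially zero.

```python
# Toy model of the Complexity of Value Thesis: value as a small target
# region in a high-dimensional space. Small independent per-dimension
# errors almost guarantee missing the target region as a whole.
import numpy as np

rng = np.random.default_rng(0)
dims, trials, tolerance = 1000, 10_000, 0.1

# Per-dimension error ~ N(0, 0.05): within tolerance ~95% of the time.
errors = rng.normal(0.0, 0.05, size=(trials, dims))
per_dim = np.mean(np.abs(errors[:, 0]) < tolerance)
joint = np.mean(np.all(np.abs(errors) < tolerance, axis=1))

print(f"P(within tolerance on one dimension) ~ {per_dim:.3f}")   # ~0.95
print(f"P(within tolerance on all {dims} dims) ~ {joint:.5f}")   # ~0.0
```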

But– and I want to shout this from the rooftops– the complexity of value thesis is just a thesis! Nobody knows if it’s true. An alternative here would be, instead of trying to look at value in terms of goals and preferences, we look at it in terms of properties of phenomenological experience. This leads to what I call the Unity of Value Thesis, where all the different manifestations of valuable things end up as special cases of a more general, unifying principle (emotional valence). What we know from neuroscience seems to support this: Berridge and Kringelbach write about how “The available evidence suggests that brain mechanisms involved in fundamental pleasures (food and sexual pleasures) overlap with those for higher-order pleasures (for example, monetary, artistic, musical, altruistic, and transcendent pleasures).” My colleague Andres Gomez Emilsson writes about this in The Tyranny of the Intentional Object. Anyway, if this is right, then the AI safety community could approach the Value Problem and Value Loading Problem much differently.

 

Adam: I’m also interested in the nature of possible attractors that agents might ‘extropically’ gravitate towards (like a thirst for useful and interesting novelty, generative and non-regressive, that might not neatly fit categorically under ‘happiness’) – I’m not wholly convinced that they exist, but if one leans away from moral relativism, it makes sense that a superintelligence may be able to discover or extrapolate facts from all physical systems in the universe, not just humans, to determine valuable futures and avoid malignant failure modes (Coherent Extrapolated Value if you will). Being strongly locked into optimizing human values may be a non-malignant failure mode.

Mike: What you write reminds me of Schmidhuber’s notion of a ‘compression drive’: we’re drawn to interesting things because getting exposed to them helps build our ‘compression library’ and lets us predict the world better. But this feels like an instrumental goal, sort of a “Basic AI Drives” sort of thing. Would definitely agree that there’s a danger of getting locked into a good-yet-not-great local optimum if we hard-optimize on current human values.

Probably the danger is larger than that too – as Eric Schwitzgebel notes,

“Common sense is incoherent in matters of metaphysics. There’s no way to develop an ambitious, broad-ranging, self-consistent metaphysical system without doing serious violence to common sense somewhere. It’s just impossible. Since common sense is an inconsistent system, you can’t respect it all. Every metaphysician will have to violate it somewhere.”

If we lock in human values based on common sense, we’re basically committing to following an inconsistent formal system. I don’t think most people realize how badly that will fail.

 

Adam: What invention or idea will change everything?

Mike: A device that allows people to explore the space of all possible qualia in a systematic way. Right now, we do a lot of weird things to experience interesting qualia: we drink fermented liquids, smoke various plant extracts, strap ourselves into rollercoasters, and parachute out of planes, to give just a few examples. But these are very haphazard ways to experience new qualia! When we’re able to ‘domesticate’ and ‘technologize’ qualia, like we’ve done with electricity, we’ll be living in a new (and, I think, incredibly exciting) world.

 

Adam: What are you most concerned about? What ought we be worrying about?

Mike: I’m worried that society’s ability to coordinate on hard things seems to be breaking down, and about AI safety. Similarly, I’m also worried about what Eliezer Yudkowsky calls ‘Moore’s Law of Mad Science’, that steady technological progress means that ‘every eighteen months the minimum IQ necessary to destroy the world drops by one point’. But I think some very smart people are worrying about these things, and are trying to address them.

In contrast, almost no one is worrying that we don’t have good theories of qualia & valence. And I think we really, really ought to, because they’re upstream of a lot of important things, and right now they’re “unknown unknowns”- we don’t know what we don’t know about them.

One failure case that I worry about is that we could trade away what makes life worth living in return for some minor competitive advantage. As Bostrom notes in Superintelligence,

“When it becomes possible to build architectures that could not be implemented well on biological neural networks, new design space opens up; and the global optima in this extended space need not resemble familiar types of mentality. Human-like cognitive organizations would then lack a niche in a competitive post-transition economy or ecosystem. We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today – a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children.”

Nick Bostrom

Now, if we don’t know how qualia works, I think this is the default case. Our future could easily be a technological wonderland, but with very little subjective experience. “A Disneyland with no children,” as Bostrom quips.

 

 

Adam: How would you describe your ethical views? What are your thoughts on the relative importance of happiness vs. suffering? Do things besides valence have intrinsic moral importance?

Mike: Good question. First, I’d just like to comment that Principia Qualia is a descriptive document; it doesn’t make any normative claims.

I think the core question in ethics is whether there are elegant ethical principles to be discovered, or not. Whether we can find some sort of simple description or efficient compression scheme for ethics, or if ethics is irreducibly complex & inconsistent.

The most efficient compression scheme I can find for ethics, that seems to explain very much with very little, and besides that seems intuitively plausible, is the following:

  1. Strictly speaking, conscious experience is necessary for intrinsic moral significance. I.e., I care about what happens to dogs, because I think they’re conscious; I don’t care about what happens to paperclips, because I don’t think they are.
  2. Some conscious experiences do feel better than others, and all else being equal, pleasant experiences have more value than unpleasant experiences.

Beyond this, though, I think things get very speculative. Is valence the only thing that has intrinsic moral importance? I don’t know. On one hand, this sounds like a bad moral theory, one which is low-status, has lots of failure-modes, and doesn’t match all our intuitions. On the other hand, all other systematic approaches seem even worse. And if we can explain the value of most things in terms of valence, then Occam’s Razor suggests that we should put extra effort into explaining everything in those terms, since it’d be a lot more elegant. So– I don’t know that valence is the arbiter of all value, and I think we should be actively looking for other options, but I am open to it. That said I strongly believe that we should avoid premature optimization, and we should prioritize figuring out the details of consciousness & valence (i.e. we should prioritize research over advocacy).

Re: the relative importance of happiness vs suffering, it’s hard to say much at this point, but I’d expect that if we can move valence research to a more formal basis, there will be an implicit answer to this embedded in the mathematics.

Perhaps the clearest and most important ethical view I have is that ethics must ultimately “compile” to physics. What we value and what we disvalue must ultimately cash out in terms of particle arrangements & dynamics, because these are the only things we can actually change. And so if people are doing ethics without caring about making their theories cash out in physical terms, they’re not actually doing ethics- they’re doing art, or social signaling, or something which can serve as the inspiration for a future ethics.


The analogy I’d offer here is that we can think about our universe as a computer, and ethics as choosing a program to run on this computer. Unfortunately, most ethicists aren’t writing machine-code, or even thinking about things in ways that could be easily translated to machine-code. Instead, they’re writing poetry about the sorts of programs that might be nice to run. But you can’t compile poetry to machine-code! So I hope the field of ethics becomes more physics-savvy and quantitative (although I’m not optimistic this will happen quickly).

Eliezer Yudkowsky refers to something similar with his notions of “AI grade philosophy”, “compilable philosophy”, and “computable ethics”, though I don’t think he quite goes far enough (i.e., all the way to physics).

 

Adam: What excites you? What do you think we have reason to be optimistic about?

Mike: The potential of qualia research to actually make peoples’ lives better in concrete, meaningful ways. Medicine’s approach to pain management and treatment of affective disorders are stuck in the dark ages because we don’t know what pain is. We don’t know why some mental states hurt. If we can figure that out, we can almost immediately help a lot of people, and probably unlock a surprising amount of human potential as well. What does the world look like with sane, scientific, effective treatments for pain & depression & akrasia? I think it’ll look amazing.

 

Adam: If you were to take a stab at forecasting the Intelligence Explosion – in what timeframe do you think it might happen (confidence intervals allowed)?

Mike: I don’t see any intractable technical hurdles to an Intelligence Explosion: the general attitude in AI circles seems to be that progress is actually happening a lot more quickly than expected, and that getting to human-level AGI is less a matter of finding some fundamental breakthrough, and more a matter of refining and connecting all the stuff we already know how to do.

The real unknown, I think, is the socio-political side of things. AI research depends on a stable, prosperous society able to support it and willing to ‘roll the dice’ on a good outcome, and peering into the future, I’m not sure we can take this as a given. My predictions for an Intelligence Explosion:

  • Between ~2035-2045 if we just extrapolate research trends within the current system;
  • Between ~2080-2100 if major socio-political disruptions happen but we stabilize without too much collateral damage (e.g., non-nuclear war, drawn-out social conflict);
  • If it doesn’t happen by 2100, it probably implies a fundamental shift in our ability or desire to create an Intelligence Explosion, and so it might take hundreds of years (or never happen).

 

If a tree falls in the forest and no one is around to hear it, does it make a sound? It would be unfortunate if a whole lot of awesome stuff were to happen with no one around to experience it.

Also see the 2nd and 3rd parts of this interview series (conducted by Andrés Gómez Emilsson). This interview with Christof Koch will likely be of interest as well.

 

Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is Co-founder of the Qualia Research Institute. Much of Mike’s research and writings can be found at the Open Theory website.
‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary.
If you like Mike’s work, consider helping fund it at Patreon.

Moral Enhancement – Are we morally equipped to deal with humanity’s grand challenges? Anders Sandberg

The topic of Moral Enhancement is controversial (and often misrepresented); it is considered by many to be repugnant – provocative questions arise, like “whose morals?”, “who are the ones to be morally enhanced?”, “will it be compulsory?”, “won’t taking a morality pill decrease the value of the intended morality if it skips the difficult process we normally go through to become better people?”, “shouldn’t people be concerned that use of enhancements which alter character traits might compromise the consumer’s authenticity?”

Humans have a built-in capacity for learning moral systems from their parents and other people. We are not born with any particular moral [code] – but with the ability to learn it just like we learn languages. The problem is of course this built-in facility might have worked quite well back in the Stone Age when we were evolving in small tribal communities – but doesn’t work that well when surrounded with a high-tech civilization, millions of other people and technology that could be potentially very dangerous. So we might need to update our moral systems, and that is the interesting question of moral enhancement: can we make ourselves more fit for the current world?
Anders Sandberg – Are we morally equipped for the future?

Humans have an evolved capacity to learn moral systems – we became more adept at learning moral systems that aided our survival in the ancestral environment – but are our moral instincts fit for the future?

Illustration by Daniel Gray

Let’s build some context. For millennia humans have lived in complex social structures constraining and encouraging certain types of behaviour. More recently, for similar reasons, people go through years of education, at the end of which (for the most part) they are more able to function morally in the modern world – though this world is very different from that of our ancestors, and when considering the possibilities for vastly radical change at breakneck speed in the future, it’s hard to know how humans will keep up both intellectually and ethically. This is important to consider, as the degree to which we shape the future for the good depends both on how well and how ethically we solve the problems needed to achieve change that on balance (all things being equal) benefits humanity (and arguably all morally relevant life-forms).

Can we engineer ourselves to be more ethically fit for the future?

Peter Singer discussed how our circles of care and compassion have expanded over the years – through reason we have been able to expand our natural propensity to act morally and the circumstances in which we act morally.

We may need to expand our circle of ethical consideration to include artificial life – considering certain types of software as moral patients.

So, if we think we could use a boost in our propensity for ethical progress,

How do we actually achieve ideal Moral Enhancement?

That’s a big topic (see a list of papers on the subject of ME here) – the answers may depend on what our goals and preferences are. One idea (among many others) is to regulate the level of Oxytocin (the cuddle hormone) – though this may come with the drawback of increasing distrust of the out-group.
Since morality depends on us being able to make accurate predictions and solve complex ethical problems, ‘Intelligence Enhancement‘ could be an effective aspect of moral enhancement. 

Morality is dependent on us being able to predict what’s going to happen when we do something. So various forms of intelligence enhancement might be very useful also for becoming more moral. Our ability to control our reactions – allowing our higher-order values to govern our lower-order values – is also important; that might actually require us to literally rewire our brains or have biochips that help us do it.
– Anders Sandberg, ‘Are we morally equipped for the future?’

How we decide whether to use Moral Enhancement Therapy will be interesting – it may be needed to help solve global coordination problems; to increase the likelihood that we will, as a civilization, cooperate and cope with the many known and yet-to-be-realised complex ethical quandaries we face as we move through times of unprecedented social and technological change.

This interview is part of a larger series that was conducted in Oxford, UK, in late 2012.

Interview Transcript

Anders Sandberg

So humans have a kind of built-in capacity for learning moral systems from their parents and other people. We’re not born with any particular moral [code], but with the ability to learn it, just like we can learn languages. The problem is of course this built-in facility might have worked quite well back in the Stone Age, when we were evolving in small tribal communities – but it doesn’t work that well when surrounded by a high-tech civilization, millions of other people, and technology that could be potentially very dangerous. So we might need to update our moral systems. And that is the interesting question of moral enhancement:

  • can we make ourselves more fit for our current world?
  • And what kind of fitness should we be talking about?

For example, we might want to improve on altruism – say, being kinder to strangers. But in a big society, in a big town, there are of course going to be some strangers that you shouldn’t trust. So it’s not just blind trust you want to enhance – you actually want to enhance the ability to make careful judgements: to figure out what’s going to happen, and whom you can trust. So maybe you want to enhance some other aspect – maybe the care – the circle of care – is what you want to expand.

Peter Singer pointed out that our circles of care and compassion have been slowly expanding – from our own tribe and our own gender, to other genders, to other people, and eventually maybe to other species. But this is still biologically based – a lot of it is going on here in the brain and might be modified. Maybe we should artificially extend these circles of care to make sure that we actually do care about those entities we ought to be caring about. This might be a problem of course, because some of these agents might be extremely different from what we are used to.

For example, machine intelligence might produce machines or software that are ‘moral patients’ – we actually ought to care about the suffering of software. That might be very tricky, because the pattern receptors up in our brains are not very tuned for that – we tend to think that if it’s got a face and it speaks, then it’s human, and then we can care about it. But who thinks about Google? Maybe we could get super-intelligences that we actually ought to care a lot about, but we can’t recognize them at all because they’re so utterly different from ourselves.

So there are some easy ways of modifying how we think and react – for example, by taking a drug. The hormone oxytocin is sometimes called ‘the cuddle hormone’ – it’s released when breastfeeding and when having bodily contact with your loved one, and it generally seems to make us more altruistic, more willing to trust strangers. You can sniff it, run an economic game, and immediately see a change in response. It might also make you a bit more ego-centric. It does enlarge feelings of comfort and family friendliness – except that it’s only within what you consider to be your family. So we might want to tweak that.

Similarly, we might think about adding links to our brains that allow us to think in better ways. After all, morality is dependent on us being able to predict what’s going to happen when we do something. So various forms of intelligence enhancement might be very useful also for becoming more moral. Our ability to control our reactions – allowing our higher-order values to govern our lower-order values – is also important; that might actually require us to literally rewire our brains or have biochips that help us do it.

But most important is that we get the information we need to retrain the subtle networks in the brain, in order to think better. And that’s going to require something akin to therapy – not necessarily lying on a sofa and telling your psychologist about your mother. It might very well be a bit of training, a bit of cognitive enhancement, maybe a bit of brain scanning – to figure out what actually ails you. It’s probably going to look very, very different from anything Freud or anybody else envisioned.

But I think in the future we’re actually going to try to modify ourselves so we’re going to be extra certain, maybe even extra moral, so we can function in a complex big world.

 

Related Papers

Neuroenhancement of Love and Marriage: The Chemicals Between Us

Anders contributed to the paper ‘Neuroenhancement of Love and Marriage: The Chemicals Between Us‘. The paper reviews the evolutionary history and biology of love and marriage, and examines the current and imminent possibilities of biological manipulation of lust, attraction and attachment – so-called neuroenhancement of love. It weighs the arguments for and against such biological interventions to influence love, arguing that they offer an important adjunct to psychosocial interventions, especially given the biological limitations inherent in human love.

Human Engineering and Climate Change

Anders also contributed to the paper “Human Engineering and Climate Change” which argues that cognitive, moral and biological enhancement could increase human ecological sustainability.

Many thanks for watching!

Consider supporting me by:
a) Subscribing to my YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media I create

Kind regards,
Adam Ford
– Science, Technology & the Future: http://scifuture.org

Amazing Progress in Artificial Intelligence – Ben Goertzel

At a recent conference in Beijing (the Global Innovators Conference) I did yet another video interview with the legendary AGI guru Ben Goertzel. This is the first part of the interview, where he talks about some of the ‘amazing’ progress in AI over recent years, including DeepMind’s AlphaGo sealing a 4–1 victory over Go grandmaster Lee Sedol, progress in hybrid AI architectures (Deep Learning, Reinforcement Learning, etc.), interesting academic AI research being taken up by the tech giants, and finally some sobering remarks on the limitations of deep neural networks.

All Aboard The Ship of Theseus with Keith Wiley

An exploration of the philosophical concept of metaphysical identity, using numerous variations on the infamous Ship of Theseus thought experiment.

Video interview with Keith Wiley

Note: a separate text interview is below.


Keith Wiley is the author of A Taxonomy and Metaphysics of Mind-Uploading, available on Amazon.

The ship of Theseus, also known as Theseus’ paradox, is a thought experiment that raises the question of whether an object that has had all of its components replaced remains fundamentally the same object. The paradox is most notably recorded by Plutarch in Life of Theseus from the late first century. Plutarch asked whether a ship that had been restored by replacing every single wooden part remained the same ship.

The paradox had been discussed by other ancient philosophers such as Heraclitus and Plato prior to Plutarch’s writings, and more recently by Thomas Hobbes and John Locke. Several variants are known, including the grandfather’s axe, which has had both head and handle replaced.
See more at Wikipedia…

Text Interview

Note this is not a transcription of the video/audio interview.

The Ship of Theseus Metaphor

Adam Ford: Firstly, what is the story or metaphor of the Ship of Theseus intended to convey?

Keith Wiley: Around the first century AD, Plutarch wrote several biographies, including one of the king Theseus entitled Life of Theseus, in which he wrote the following passage:

The ship on which Theseus sailed with the youths and returned in safety, the thirty-oared galley, was preserved by the Athenians down to the time of Demetrius Phalereus. They took away the old timbers from time to time, and put new and sound ones in their places, so that the vessel became a standing illustration for the philosophers in the mooted question of growth, some declaring that it remained the same, others that it was not the same vessel.
– Plutarch

People sometimes erroneously believe that Plutarch presents the scenario (replacing a ship piecemeal until all its original material is gone) with a conclusion or judgment, i.e., that it prescribes the “correct” way to interpret the scenario (yes or no, is the ship’s identity preserved?). However, as you can see from the passage above, this is not the case. Plutarch left the question open. He merely poses the question and leaves it to the reader to ruminate on an actual answer.

The specific questions in that scenario are:

  • Does identity require maintaining the same material components? That is, is identity tied to, and indicated by, specific sets of atoms?
  • If not, then does preservation of identity require some sort of temporally overlapping sequence of closely connected parts?

The more general question being asked is: What is the nature of identity? What are its properties? What are its requirements (to claim preservation under various circumstances)? What traits specify identity and indicate the transformations under which identity may be preserved and under which it is necessarily lost?

Here is a video explainer by Keith Wiley (intended to inspire viewers to think about identity preservation)

Adam Ford: How does this story relate to mind uploading?

Keith Wiley: The identity of relatively static objects, and of objects not possessing minds or consciousness, is an introduction to the thornier question of metaphysical personal identity, i.e., the identity of persons. The goal in considering how various theories of identity describe what is happening in the Ship of Theseus is to prime our thinking about what happens to personal identity of people in analogous scenarios. For example, in a most straightforward manner, the Ship of Theseus asks us to consider how our identity would be affected if we replaced, piecemeal style, all the material in our own bodies. The funny thing is, this is already the case! It is colloquially estimated that our bodies turn over their material components approximately every seven years (whether this is precisely accurate is beside the point). The intent is not that a conclusion drawn from the Ship of Theseus definitively resolves the question concerning personal identity, because the former is a much simpler scenario. The critical distinction is that people are more obviously dynamic across time than static physical objects because our minds undergo constant psychological change. This raises the question of whether some sort of “temporal continuity” is at play in people that does not take effect in ships. There is also the question of whether consciousness somehow changes the discussion in radical ways. So the Ship of Theseus is not conclusive on personal identity. It is just a way to get us started in thinking about such issues.

Adam Ford: Fishing for clarification on how you use the term ‘identity’, Robin Hanson (whose Age of Em explores a future scenario of uploads) enquired about what kind of identity concept you are interested in. That is, what function do you intend this concept to serve?

Keith Wiley: Sure. First, and this might not be what Robin meant, there are different fundamental kinds of identity, two big ones being quantitative and numerical. Two things quantitatively identical possess the same properties, but are not necessarily “the same entity”. Two things numerically identical are somehow “the same thing”, which is problematic in its phrasing since they were admitted to be “two things” to begin with. The crucial distinction is whether numerical identity makes any difference, or whether quantitative identity is all that fundamentally matters.

For me, I phrase the crucial question of personal identity relative to mind uploading in the following way: do we grant all minds (people) who psychologically descend from a common ancestral mind (person) equal primacy in their claims to that original single identity? I always phrase it this way: granting primacy in claims to a historical identity. Do we tolerate the metaphysical interpretation that all descendant minds are equal in the primacy of their claim to the identity they perceive themselves to be? Alternatively, do we disregard such claims, dictating to others that they are not, in fact, who they believe themselves to be, and that they are not entitled to the rights of the people they claim to be? My concern is with:

  • bias (differing assignments of traits to various people),
  • prejudice (differing assignments of values, claims, or rights resulting from bias), and
  • discrimination (actions favoring and dismissing various people, resulting from prejudices).

Adam Ford: Is ‘identity’ the most appropriate word to be using here?

Keith Wiley: Well, identity certainly doesn’t seem to fully “work”. There’s always some boundary case or exception that undermines any identity theory we attempt to adopt. My primary concern – which, at this point in history when mind uploading isn’t remotely possible yet, is an entirely abstract philosophical musing – is only secondarily the nature of identity. The primary concern, justified by those secondary aspects of identity, is whether we should regard uploads in some denigrated fashion. Should we dismiss their claims that they are the original person, that they should be perceived as the original person, that they should be treated and entitled and “enrighted” as the original person? I don’t just mean from a legal standpoint. We can pass all sorts of laws that force people to be respectful, but that’s an uninteresting question to me. I’m asking if it is fundamentally right or wrong to regard an upload in a denigrated way when judging its identity claims.

Ontology, Classification & Reality

Adam Ford: As we move forward the classification of identity will likely be fraught with struggle. We might need to invent new words to clarify the difference between distinct concepts. Do you have any ideas for new words?

Keith Wiley: The terminology I generally use is that of mind descendants and mind ancestors. In this way we can ask whether all minds descending from a common ancestral mind should be afforded equal primacy in their claim to the ancestral identity, or alternatively, whether there is a reasonable justification to exhibit biases, prejudices, and discriminations against some minds over such questions. Personally, I don’t believe any such asymmetry in our judgment of persons and their identity claims can be grounded on physical or material traits (such as whose brain is composed of more matter from the ancestral brain, which comes up when debating nondestructive uploading scenarios).

Adam Ford: An appropriate definition for legal reasons?

Keith Wiley: I find legal distinctions to be uninteresting. It used to be illegal for whites and blacks to marry. Who cares what the law says from a moral, much less metaphysical, perspective? I’m interested in finding the most consistent, least arbitrary, and least paradoxical way to comprehend reality, including the aspect of reality that describes how minds relate to their mental ancestors.

Adam Ford: For scientific reasons?

Keith Wiley: I don’t believe this is a scientific question. How to procedurally accomplish uploading is a scientific question. Whether it can be done in a nondestructive way, leaving the original body and brain unharmed, is a scientific question. Whether multi-uploading (producing multiple uploads at once) is technically possible is a scientific question, say via an initial scan that can be multi-instantiated. I think those are crucial scientific endeavors that will be pursued in the future, and I participate in some of the discussions around that research. But at this point in history, when nothing like mind uploading is possible yet, I am pursuing other aspects, nonscientific aspects, namely the philosophical question of whether we have the correct metaphysical notion of identity in the first place, and whether we are applying identity theories in an irrational, or even discriminatory, fashion.

Implications for Brain Preservation

Adam Ford: Potential brain preservation (including cryonics) customers may be interested in knowing the likely science of reanimation (which, it has been suggested, includes mind uploading) – and the type of preservation that will most likely achieve the best results. Even though we don’t have mind uploading yet, people are committing their brains to preservation strategies that are to some degree based on strategies for revival. Mummification? No – that probably won’t work. Immersion in a saline-based solution? Yes, for short periods of time. Plastination? Yes, but only if it’s the connectome we are after… And then there are different methods of cryonic suspension that may be tailored to different intended outcomes – do you want to destructively scan the brain layer by layer and be uploaded in the future? Do you want to be able to fully revive the actual brain in the (potentially longer-term) future?

Keith Wiley: People closer to the cryonics community than myself, such as some of my fellow BPF board members, claim that most current cryonics enthusiasts (and paying members or current subjects) are not of the mind uploading persuasion, preferring biological revival instead. Perhaps because they tend to be older (the baby boomer generation), they have not bought into the computerization of brains and minds. Their passion for cryonics is far more aligned with the prospect of future biological revival. I suspect there will be a shift toward the mind uploading persuasion as newer generations, more comfortable with computers, enter the cryonics community.

As you described above, there are a few categories of preservation and a few paths of potential revival. Preservation is primarily of two sorts: cryogenic, which is at least conceivably reversible; and room-temperature, which is not conceivably reversible. The former is amenable to both biological revival and mind uploading. The latter is exclusively amenable to mind uploading. Why would one ever choose the latter option then? Simple: it might be the better method of preservation! It might preserve the connectome in greater detail, for longer periods of time, with lower rates of decay — or it might simply be cheaper or otherwise easier to maintain over the long term. After all, cryonic storage requires cryonic facilities and constant nitrogen reintroduction as it boils off. Room-temperature storage can be put on the shelf and forgotten about for millennia.

Adam Ford: What about for social (family) reasons?

Keith Wiley: This is closer to the area where I think and write, although not necessarily in a family-oriented way. But social in terms of whether our social contracts with one another should justify treating certain people in a discriminatory fashion, and whether there is a rational basis for such prejudices. Not that any of this will be a real-world issue to tackle for quite some time. But perhaps some day…

Adam Ford: If the intended outcomes of BP are for subjective personal reasons?

Keith Wiley: I would admit that much of my personal interest here is to try to grind out the absolutely most logical way to comprehend minds and identity relative to brains, especially under the sorts of physical transformations that brains could hypothetically experience (Parfit’s hemispherical fission, teleportation, gradual nanobot replacement, freeze-slice-scan-and-emulate, etc.).

Philosophy

Adam Ford: In relation to appropriate definitions of ‘identity’ for scientific reasons – what are your thoughts on the whole map/territory ‘is science real’ debate? Where do you sit – scientific realism, anti-realism, or structural realism (epistemic or ontic)? What’s your favorite?

Keith Wiley: I suppose I lean toward scientific realism (to my understanding: scientific claims and truths hold real truth value, not just current societal “perspective”, and furthermore they can be applied to yet-to-be-observed phenomena), although antirealism is a nifty idea (scientific truths are essentially those which we have yet to disprove but expect to overturn in the future; furthermore, unobserved phenomena are not reasonable subjects of scientific inquiry). The reason I don’t like the latter is that it leads to anti-intellectualism, which is a huge problem for our society. Rather than overturning or disregarding scientific theories, I prefer to say that we refine them – new theories apply in corners where the old ones didn’t fit well (Newton’s laws are fine in many circumstances, but are best appended by quantum mechanics at the boundaries of their applicability). Structural and ontic realism are currently vague to me. I’ve read about them but haven’t really ground through their implications yet.

Adam Ford: If we are concerned about our future and the future of things we value we perhaps should ask a fundamental question: How do things actually persist? (Whether you’re a perdurantist or an endurantist – this is still a relevant question – see 5.2 ‘How Things Persist?’ in ‘Endurantism and Perdurantism’)

Keith Wiley: Perdurantism and endurantism are not terms I have come across before. I do like the idea of conceptualizing objects as 4D temporal “worms” – I describe brains that way in my book, for example. If this is the “right” way (or at least a good way) to conceive of the existence of physical objects, then it partially solves the persistence or preservation-of-identity problem: preservation of identity is the temporal stream of physical continuity. The problem is, I reject any physical requirement for explicitly *personal* identity of minds, because there appears to be no associated physical trait — plus that would leave open how to handle brain fission, à la Parfit. So worms just *can’t* solve the problem of personal identity, only of physical objects.

Adam Ford: Cybernetics – signal is more important than substrate – has cybernetics influenced your thinking? If so, how?

Keith Wiley: If by signal you mean function, then I’ve always held that the functional traits of the brain are far more important (if not entirely more important) than its mere material components.

Adam Ford: “Signal is more important than substrate” – yet the signal quality depends on the substrate. Surely a ship’s substrate is not as tightly coupled to its function of moving across a body of water (wood, fiberglass, even steel will work) as a conscious human mind is to its biological brain. In terms of the granularity of replacement parts – how much is needed?

Keith Wiley: Good question. I have no idea. I tend to presume the requisite level is action-potential processing and generation, which is a pretty popular assumption, I think. We should remain open on this question, given the current point in history and state of scientific knowledge.

Adam Ford: What level of functional representation is needed in order to preserve ‘selfhood’?

Keith Wiley: Short answer: we don’t know yet. Long answer: it is widely presumed that the action-potential patterns of the connectome are where the crucial stuff is happening, but this is a supposition. We don’t know for sure.

Adam Ford: A Trolley Problem applied to Mind Uploaded Clones: As with the classic trolley problem, a trolley is hurtling down a track towards 5 people. As in the classic case, you can divert it onto a separate track by pulling a nearby lever. However, suddenly 5 functionally equivalent carbon copies* of the original 5 people appear on the separate track. Would you pull the lever to save the originals but kill the copies? Or leave the originals to die, saving the copies? (*assume you just know the copies are functionally equivalent to the originals)

Keith Wiley: Much of my writing focuses on mind uploading and the related question of what minds are and what personal identity is. My primary claim is that uploads are wholly human in their psychological traits and human rights, and furthermore that they have equal primacy in their claim to the identity of the person who preceded an uploading procedure — even if the bio-original body and brain survive! The upload is still no less “the original person” than the person housed in the materially original body, precisely because bodies and material preservation are irrelevant to who we are, by my reckoning. If this is not the case, then how can we solve the fission paradox? Who gets to “be the original” if we split someone in two? The best solution is that only psychological traits matter and material traits are simply irrelevant.

So, for those reasons, I would rephrase your trolley scenario thusly: track one has five people, track two has five other people. Coincidentally, pairs of people from each track have very recently diverging memories, but the scenario is psychologically symmetrical between the two tracks even if there is some physical asymmetry in terms of how old the various material compositions (bodies) are. So we can disregard notions of asymmetry for the purpose of analyzing the moral or identity-preserving-killing implications of the trolley problem. It is simply “Five people on one track, five on another. Should you pull the lever, killing those on the diverted track to save those on the initial track?” That’s how I rephrase it.

Adam Ford: I wonder if the experiment would yield different results if there were 5 individuals on one track and 6 copies of 1 person on the other? (As some people suggest that copies are actually identical to the original – eg for voting purposes)

Keith Wiley: But they clearly aren’t identical in the scenario you described. The classic trolley problem has always implied that the subjects are reasonably alert and mentally dynamic (thinking). It isn’t carefully described so as to imply that the people involved are explicitly unconscious, to say nothing of the complexities involved in rendering them as physically static objects (preserved brains undergoing essentially no metabolic or signal-processing (action-potential) activity). The problem is never posed that way. Consequently, they are all awake and therefore divergent from one another – distinct individuals with all the rights of individual personhood. So it’s just five against six in your example. That’s all there is to it. People might suggest, as you said above, that copies are identical to each other (or to the original), but those people are just wrong.

So an interesting question then, is what if the various subjects involved actually are unconscious, or even rigidly preserved? Can we say their psychological sequences have not diverged and that they therefore represent redundant physical instantiations of a given mind? I explore this exact question in my book, by the way. I think a case could be made that until psychological divergence (until the brains are rolling forward through time, accumulating experiences and memories) we can say they are redundant in terms of identity and associated person-value. But to be clear, if the bio-original was statically preserved, then uploaded or duplicated, and then both people were put on the train tracks in their preserved state, physically identical, frozen with no ongoing psychological experience, then I would be clear to state that while it might not matter if we kill the upload, it *also* doesn’t matter if we choose the other way and kill the bio-original! That is the obvious implication of my reasoning here. And in your case above, if we have five distinct people on one track (let’s say everyone involved is statically preserved) and six uploads of one of those people on the other track, then we could recast the problem as “five on one track and one on the other”. The funny thing is, if we save the six and revive them, then, after the fact, we have granted life to six distinct individuals – but we can only say that after we revive them, not at the time of the trolley experiment when they are statically preserved. So now we are speculating on the “tentative” split personhood of a set of identical but static minds, based on a later time when they might be revived. Does that tentative individuality grant them individuality while they are still preserved? Does the mere potential to diverge and individualize grant them full-blown distinct identity before the divergence has occurred? I don’t know. Fascinating question. I guess the anti-abortion-choice and pro-abortion-choice debate has been trying to sort out the personhood of tentative, potential, or possible persons for a long time (and by extension, whether contraception is acceptable hits the same question). We don’t seem to have all agreed on a solution there yet, so we probably won’t agree in this case either.

Philosophy of identity

Adam Ford: Retention of structure across atomic change – is identity the structure, the atomic composition, the atomic or structural continuum through change, or a mixture?

Keith Wiley: Depends on one’s chosen theory of identity, of course: body theory, psychological theory, psychological branching theory, closest-continuer theory, 4D spacetime “worm” theory. There are several to choose from — but I find some more paradox-prone than others, and I generally take that as an indication of a weak theory. I’m a branchist, although the difference from worm theory is, on some accounts, virtually negligible.

Adam Ford: Leibniz thought about the Identity of indiscernibles (principle in ontology that no two things can have all properties the same) – if objX and objY share all the same properties, are they the same thing? If KeithX and KeithY share the same functional characteristics are they the same person?

Keith Wiley: But do they really share the same properties to begin with, or is the premise unfounded? When people casually analyze these sorts of scenarios, the two people are standing there, conscious, wondering if someone is about to pass judgment on them and kill them. They are experiencing the world from slightly different sensory vantage points (vision, sound, etc.). Their minds almost certainly diverge in their psychological state within mere fractions of a second of regaining consciousness. So they aren’t functionally identical in the first place. Thus the question is flawed, right? The question can only be applied if they are unconscious and rigidly preserved (frozen, perhaps). Although I believe a case could be made that mere lack of consciousness is sufficient to designate them *psychologically* identical, even if they are not necessarily physically identical due to microscopic metabolic variations — but I leave that subtlety as an open question for the time being.

Adam Ford: Here is a Symmetric Universe counterexample – Max Black – two distinct perfect spheres (or two Ship of Theseuses) are two separate objects even though they share all the same properties – but don’t share the same space-time. What are your thoughts?

Keith Wiley: This is very close to worm theory. It distinguishes seemingly identical entities by considering their spacetime worms, which squiggle their way through different spacetime paths and are therefore not identical in the first place. They never were. The reason they appeared to be identical is that we only considered the 3D spatial projection of their truly 4D spacetime structure. You can easily alias pairs of distinct higher-dimensional entities by looking only at their projections onto lower dimensions, and thereby wrongly conclude that they are identical when, in fact, they never were to begin with in their true higher-dimensional structure. For example, consider two volumes, a sphere and a cylinder. They are 3D. But project them onto a 2D plane (at the right angle) and you get two circles. You might wrongly conclude they are identical, but they weren’t to begin with! You simply ignored an entire dimension of their nature. That’s what the 4D spacetime worm says about the identity of physical objects.
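To make the projection point concrete, here is a minimal numerical sketch (an illustration added for this write-up, not Keith’s own example; the shapes, sizes, and sample counts are arbitrary assumptions). Two clearly distinct 3D solids – a ball and a cylinder – cast effectively identical 2D “shadows”, just as two distinct 4D worms can look identical in a 3D snapshot.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ball(n):
    """Sample n points uniformly from the unit ball."""
    pts = rng.normal(size=(n, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)        # onto the unit sphere
    return pts * rng.uniform(0, 1, size=(n, 1)) ** (1 / 3)   # fill the ball uniformly

def sample_cylinder(n):
    """Sample n points uniformly from a cylinder of radius 1 and height 2."""
    theta = rng.uniform(0, 2 * np.pi, n)
    r = np.sqrt(rng.uniform(0, 1, n))   # sqrt makes the disc density uniform
    z = rng.uniform(-1, 1, n)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta), z])

# Two distinct 3D objects...
ball, cylinder = sample_ball(100_000), sample_cylinder(100_000)

# ...projected onto the XY plane by simply dropping the z coordinate.
for name, pts in [("ball", ball), ("cylinder", cylinder)]:
    shadow_radii = np.linalg.norm(pts[:, :2], axis=1)
    print(f"{name:8s} shadow: a disc of max radius ~ {shadow_radii.max():.3f}")

# Both shadows are the same unit disc: the projection discards exactly the
# dimension in which the objects differ, aliasing two distinct entities.
```

The analogy is imperfect (a spacetime worm extends through time rather than along a z-axis), but the failure mode is the same: judging identity from a lower-dimensional view can equate things that were never equal.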

However, once we dismiss any relevance or importance of physical traits anyway (because I reject body identity on the matter of personal identity, favoring psychological identity), then the 4D worm becomes more convoluted. The question then becomes, what sort of “time worm” describes psychological changes over time instead of physical, structure, and material changes over time? I think it’s as simple as: take an information pattern instantiated in a physical system (a brain), produce a second physical instantiation, and now readily conclude that the psychological temporal worm (just a temporal sequence of psychological states frankly) has diverged.

Adam Ford: Nice answer! I’m certainly interested in hearing more about worm theory – I think this Wikipedia entry is about the same thing: https://en.wikipedia.org/wiki/Perdurantism
Do you have any personal writings I can point at in the text form of the interview?

Keith Wiley: Ah, I hadn’t heard that term before. Thanks for the reference. Well, I always refer to my book of course, and more recently Randal Koene and I published a paper in the Journal of Consciousness Studies this past March.

(See the free near-final version on arXiv.)

Adam Ford: David Pearce is skeptical that we – as subjects of experience – are actually enduring metaphysical egos. He seems more of a stage theorist – each moment of subjective experience is fleeting, persisting only through one cycle of quantum coherence delimited by decoherence.

Keith Wiley: Hmmm, I see the distinction in the link to stage theorist you provided above, and I do not believe I am committed to a position on that question. I go both ways in my own writing, sometimes describing things as true 4D entities (I describe brains that way in my book) but also writing quite frequently in terms of “mind descendants of mind ancestors”. That phrasing admits that perhaps identity does not span time in a temporal worm, but rather that it consists of instantaneous time slices of momentary identity connected in a temporal sequence. Like I said, I am uncommitted on this distinction, at least for now.

Identity: Accidental properties vs Essential properties

Adam Ford: Is the sense of an enduring metaphysical ego really an ‘accidental property’ (based on our intuitions of self) rather than an ‘essential property’ of identity?

Keith Wiley: It is possible we don’t yet know what a mind is in sufficient detail to answer such a question. I confess to not being entirely sure what the question is asking. That said, it is possible that conscious and cognitively rich aliens have come up with a fairly different way of comprehending what their minds actually are, and consequently may also have rather bizarre notions of what personal identity is.

Note that in the video, I sometimes offer an answer to the question “Did we preserve the ship in this scenario?” and I sometimes don’t, simply asking the viewer “So did we preserve it or not? What do you think?” This is because I’m certainly not sure of all the answers to this question in all the myriad scenarios yet.

Adam Ford: This argument is criticized by some modern philosophers on the grounds that it allegedly derives a conclusion about what is true from a premise about what people know. What people know or believe about an entity, they argue, is not really a characteristic of that entity.
There may be a problem in that what is true about a phenomenon or object (like identity) shouldn’t be derived from how we label it or what we know about it – the label or description isn’t a characteristic of the identity (the map, not the territory, etc.).

Keith Wiley: I would essentially agree that identity shouldn’t merely be a convention of how we arbitrarily label things (i.e., that labeling grants or determines identity), but rather the reverse: we are likely to label things so as to indicate how we perceive their identity. The question is, does our perception of identity indicate truth, which we then label, or does our perception determine or choose identity, which we then label? I would like to think reality is more objective than that – that there are at least some aspects of identity that aren’t merely our choices, but rather traits of the world that we discover, observe, and finally label.


References

A Taxonomy and Metaphysics of Mind-Uploading: https://www.amazon.com/dp/0692279849
The Fallacy of Favouring Gradual Replacement Mind Uploading Over Scan-and-Copy: https://arxiv.org/abs/1504.06320 (also on ResearchGate: https://www.researchgate.net/publication/299820458_The_Fallacy_of_Favouring_Gradual_Replacement_Mind_Uploading_Over_Scan-and-Copy)

The Endurance/Perdurance Distinction, by Neil McKinnon: http://www.tandfonline.com/doi/pdf/10.1080/713659467
Endurantism and Perdurantism (a discussion of three different ways these terms have been taken): http://www.nikkeffingham.com/resources/Endurantism+and+Perdurantism.pdf
Plutarch, Life of Theseus: http://penelope.uchicago.edu/Thayer/E/Roman/Texts/Plutarch/Lives/Theseus*.html

Definitions


Perdure – remain in existence throughout a substantial period of time; persisting in virtue of having both temporal and spatial parts (alternatively the thesis that objects are four dimensional and have temporal parts)
Endure – being wholly present at all times at which it exists (endurance is distinct from perdurance in that endurance involves strict identity, while perdurance involves a looser unity relation (genidentity))
Genidentity – an existential relationship underlying the genesis of an object from one moment to the next.
Gunk – In mereology, an area of philosophical logic, the term gunk applies to any whole whose parts all have further proper parts. That is, a gunky object is not made of indivisible atoms or simples. Because parthood is transitive, any part of gunk is itself gunk.

Bio

Keith Wiley has a Ph.D. in Computer Science from the University of New Mexico and was one of the original members of MURG, the Mind Uploading Research Group, an online community dating to the mid-90s that discussed issues of consciousness with an aim toward mind-uploading. He has written multiple book chapters, peer-reviewed journal articles, and magazine articles, in addition to several essays on a broad array of topics, available on his website. Keith is also an avid rock-climber and a prolific classical piano composer.


Also see Jennifer Wang’s (Stanford University) video as she introduces us to the Ship of Theseus puzzle that has bedeviled philosophy since the ancient Greeks. She tells the Ship of Theseus story, and draws out the more general question behind it: what does it take for an object to persist over time? She then breaks this ancient problem down with modern clarity and rigor.

Longevity Day with Aubrey de Grey!

“Longevity Day” (based on the UN International Day of Older Persons – October 1) is a day of support for biomedical aging and longevity research. It has been a worldwide campaign successfully adopted by many longevity activist groups. In this interview Aubrey de Grey lends support to Longevity Day and covers a variety of points, including:
– Updates: on progress at SENS (achievements, and predictions based on current support), funding campaigns, the recent Rejuvenation Biotechnology conference, and exciting news in health and medicine as it applies to longevity
– Advocacy: What advocates for longevity research need to know
– Effective Altruism and Science Philanthropy – giving with impact – cause prioritization and uncertainty – how to go about measuring estimates on impacts of dollars or units of effort given to research organizations
– Action: High impact areas, including more obvious steps to take, and some perhaps less obvious/underpopulated areas
– Leveraging Longevity Day: What to do in preparation to leverage Longevity Day? Once one has celebrated Longevity Day, what to do next?


Here is the Longevity Day Facebook Page.


Heavy-Tailed Distributions: What Lurks Beyond Our Intuitions?

Understanding heavy-tailed distributions is important to assessing likelihoods and impact scales when thinking about possible disasters – especially relevant to xRisk and Global Catastrophic Risk analysis. How likely is civilization to be devastated by a large-scale disaster, or even to go extinct?
Anders discusses how heavy-tailed distributions account for more than our intuitions tell us.

How likely is civilization to be devastated by a global disaster, or even to go extinct?
In this video, Anders Sandberg discusses (with the aid of a whiteboard) how heavy-tailed distributions account for more than our intuitions tell us.

Considering large-scale disasters may be far more important than we intuit.

Transcript of dialog

So typically when people talk about probability they think about nice probability distributions, like the bell curve or Gaussian curve. This means that you are most likely to get something close to zero, and then it is less and less likely that you get something very positive or very negative – a rather nice-looking curve.

However, many things in the world turn out to have much nastier probability distributions. A lot of disasters, for example, have a power-law distribution. So if this is the size of a disaster and this is the probability, it falls off like this. That doesn’t look very dangerous at first: most disasters are fairly small – there’s a high probability of something close to zero and a low probability of something large. But it turns out that the probability of getting a really large one can become quite big.

So suppose this one has alpha equal to 1 – that means the chance of getting a disaster of size 10 is proportional to 1 in 10; a disaster 10 times as large has just a tenth of that probability; and one 10 times larger again has a tenth of that.

That means we have quite a lot of probability of getting very, very large disasters. In the Gaussian case, getting something very far out here is exceedingly unlikely, but in the case of power laws you can actually expect to see some very, very large outbreaks.

So if you think about the times at which various disasters happen – they happen irregularly, and occasionally one goes through the roof, and then another one. You can’t of course tell when they will happen – that’s random. And you can’t really tell how big they are going to be, except that they are going to be distributed in this way.

The real problem is that when something is bigger than any threshold you can imagine… well, it’s not just going to be a little bit taller – it’s going to be a whole lot taller.

So if we’re going to see a war, for example, as large as even the Second World War, we shouldn’t expect it to kill just a million more people. We could expect it to kill tens or, most likely, hundreds of millions – or even billions – of people more; which is a rather scary prospect.

So the problem here is that disasters seem to have these heavy tails. A heavy tail, in probability slang, means that the probability mass over here – the chance that something very large happens – falls off very slowly. And this is of course a big problem, because we tend to think in terms of normal distributions.

Normal distributions are nice. We say they’re normal because a lot of things in our everyday life are distributed like this. The height of people, for example – very rarely do we meet somebody who’s a kilometre tall. However, when we meet people and think about how much they’re making or how much money they have – well, Bill Gates: he is far, far richer than just ten times you and me; he’s way out here.

So when we get to the land of these fat, heavy tails, both the richest (if we are talking about rich people) and the biggest dangers (if we are talking about disasters) tend to be much bigger than we can normally think about.

Adam: Hmm, yes – definitely unintuitive.

Anders: Mmm, and the problem is of course that our intuitions are all shaped by what’s going on here in the normal realm. We have experience of what has happened so far in our lives, and once we venture out here and talk about very big events, our intuitions suddenly become very bad. We make mistakes. We don’t really understand the consequences, cognitive biases take over, and this can of course completely mess up our planning.

So we invest far too little in handling the really big disasters and we’re far too uninterested in going for the big wins in technology and science.

We should pay more attention to probability theory (especially heavy-tailed distributions) in order to discover and avoid disasters that lurk beyond our intuitions.
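To make the whiteboard point concrete, here is a minimal Python sketch (an illustration added for this write-up, not part of Anders’ talk; the particular distributions and parameters are arbitrary assumptions) comparing how fast the tails of a normal distribution and a power law fall off.

```python
from scipy import stats

normal = stats.norm(loc=0, scale=1)   # the familiar bell curve
pareto = stats.pareto(b=1)            # a power law with alpha = 1, as in the talk

# Survival function sf(x) = P(X > x): the chance of an event bigger than x.
for size in [10, 100, 1000]:
    print(f"P(size > {size:4d}):  normal {normal.sf(size):.3e}   "
          f"power law {pareto.sf(size):.3e}")

# The normal tail collapses almost immediately (P(X > 10) is about 8e-24),
# while the power-law tail shrinks only in proportion to 1/size: an event
# ten times bigger is merely ten times less likely, exactly as described.
```

This is the whole problem in three lines of output: under the bell curve an event ten times the typical scale is effectively impossible, while under the power law it retains a very real probability.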


Also see –
– Anders Sandberg: The Survival Curve of Our Species: Handling Global Catastrophic and Existential Risks

Anders Sandberg on Wikipedia: https://en.wikipedia.org/wiki/Anders_Sandberg


Many thanks for watching!

Consider supporting me by:
a) Subscribing to my YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media I create

Kind regards,
Adam Ford
– Science, Technology & the Future: http://scifuture.org

Singularity Skepticism or Advocacy – to what extent is it warranted?

Why are some people so skeptical of the possibility of Super-intelligent Machines, while others take it quite seriously?
Hugo de Garis addresses both ‘Singularity Skepticism’ and advocacy – reasons for believing machine intelligence is not only possible but quite probable!
The Singularity will likely be an unprecedentedly huge issue that we will need to face in the coming decades.


If you take the average person in the street and you talk to them about a future intelligent machine, there is a lot of skepticism – because today’s machines aren’t intelligent, right? I know from my own personal experience that I get incredibly frustrated with computers – they crash all the time, they don’t do what I want… literally I say “I hate computers”, but I really love them – so I have an ambivalent relationship with computers.
– Hugo de Garis

The exponential growth of technology and resolution of brain-scanning may lead to advanced neuro-engineering. Brain simulation right down to the chemical synapse, or just plain old functional brain representation might be possible within our lifetimes – this would likely lead to a neuromorphic flavour of the singularity.

There have been some enthusiastic and skeptical responses to this video so far on YouTube:

AZR NSMX1 commented that “Computers already have a better memory and a higher speed than the human brain; they can learn and recognize the human voice since 1982, with the first software made for Kurzweil Industries. The expert systems are the first steps for thinking. Then in the 90’s we learned that emotions are easier for machines than we believed – an emotion is just an uncontrolled reaction, an automatic preservation code that may be good or not for a robot to reach its goal. Now in 2010 the Watson supercomputer shows us that it is able to structure human language to produce a logical response; if that is not what thought does, then somebody explain to me what it means to think. The only thing they still can’t do is creative thinking and consciousness, but that will be reached between 2030 and 2035. Consciousness is just the amount and quality of the information you can process – the IBM Blue Brain team said this. For example, we humans are very stupid when it comes to using and exploiting all the possibilities offered by the sense of smell compared to dogs or bears; in this dimension a cockroach is smarter than us, because they can map the direction of smell to find food or other members of their group. We can’t do this; we just have no consciousness in that world. Creativity is the most complex thing; if machines reach creativity then our world will change, because not only will we not have to work anymore, but, what is better, we will not have to think anymore haha. Machines gonna do everything.”
My response: There have certainly been some impressive strides in technological advancement; it might asymptote at some stage – not sure when – but my take is that there won’t likely be many fundamental engineering or scientific bottlenecks that block or stifle progress. The biggest problems, I think, will be sociological impediments – human-caused.

Darian Rachel says “Around the 8 minute or so point he makes a statement that a machine will be built that is intelligent and conscious. He seems to pull this idea that it will be conscious “out of the air” somewhere. It seems to be a rather silly idea.”
My response: While I agree that a conscious machine is likely difficult to build, there doesn’t seem to be much agreement among humans about whether it exists, what consciousness actually is, whether it is a byproduct of (complex?) information processing, and whether it is actually computable (using classical computation). Perhaps Hugo de Garis views consciousness as just being self-aware.

Exile438 responded that the “human brain has 100 billion neurons and each connects to 10,000 other neurons: 10^11 × 10^4 = 10^15 is a human brain capacity estimate. Brain-scanning resolution and the speed of computers double every so often, so within the next 2 to 3 decades we can simulate a brain on a computer. If we can do that, it would run electronically 4 million times faster than our chemical brains. This leads to singularity.”
My response: It’s certainly a strange and exciting time to be alive – the fundamental questions that we have been wrestling with since before recorded history – questions around personal identity and what makes us what we are – may be unraveled within the lifetimes of most of us here today.
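As a rough sanity check of the commenter’s back-of-envelope arithmetic, here is a small sketch (the neuron and synapse figures are the commenter’s; the compute shortfall and doubling time are hypothetical assumptions of mine, not established facts):

```python
import math

# Commenter's figures: ~1e11 neurons, each with ~1e4 synapses.
neurons = 1e11
synapses_per_neuron = 1e4
total_synapses = neurons * synapses_per_neuron
print(f"Estimated synapses: {total_synapses:.0e}")  # 1e+15

# Hypothetical assumptions (not established facts): suppose today's
# hardware falls short of whole-brain simulation by a factor of ~1e6,
# and that capability doubles roughly every 2 years.
shortfall = 1e6
doubling_time_years = 2.0

doublings = math.log2(shortfall)  # ~19.9 doublings
print(f"Doublings needed: {doublings:.1f} "
      f"(~{doublings * doubling_time_years:.0f} years)")
```

Whether this lands at two decades or four depends entirely on the assumed shortfall and doubling time – which is exactly why such timelines are better read as sensitivity exercises than as predictions.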