Physicalism & Materialism – John Wilkins

Materialism was a pre-Socratic view that for something to be real it has to be matter – physical stuff made of atoms (which at the time were considered hard, like billiard balls – fundamental parts of reality).  The term physicalism is used these days because it can cover things that aren’t matter – like forces – or that aren’t observable matter – like dark matter – or energy, fields, spacetime, etc.  Physicalism is the idea that everything that exists can be described in the language of some ‘ideal’ physics.  We may never know what this ideal physics is, though many think it is something close to our current physics, since our current physics already makes very accurate predictions.

If magic, telepathy or angels were real, there would be a physics that could describe them – they’d have patterns and properties that would be describable and explainable.  A physicalist would likely think that even the mind operates according to physical rules.  Being a physicalist, according to John, means you think everything is governed by physical rules – and that there is an ideal language that can be used to describe all of this.

Note John is also a deontologist.  Perhaps there exists an ideal language that can fully describe ethics – would this mean that, ideally, there is no need for utilitarianism?  I’ll leave that question for another post.

Interview with John Wilkins on Materialism & Physicalism.

Here are some blog posts about physicalism by John Wilkins:

Is physicalism an impoverished metaphysics?

Every so often, we read about some philosopher or other public intellectual who makes the claim that a physicalist ontology – a world view in which only things that can be described in terms of physics are said to exist – is impoverished. That is, there are things whereof science cannot know, &c. A recent example is that made by Thomas Nagel [nicely eviscerated here by the physicist Sean Carroll], whose fame in philosophy rests on an influential 1974 paper arguing that there is something it is like to be a bat that no amount of physics, physiology or other objective science could account for.

Recently, Nagel has argued that the evolutionary view called (historically misleadingly) neo-Darwinism is “almost certainly” false. One of his reasons is that “materialism” (which Nagel should know is an antiquated world view replaced by the physicalism defined above; there are many non-material things in physics, not least fields of various kinds) does not permit a full account of consciousness: the subjective facts of being a particular individual organism. Another is that the chance that life would emerge from a lifeless universe is staggeringly unlikely. How this is calculated is somewhat mysterious, given that at best we only have (dare I say it?) subjective estimates anyway, but there it is.

But Nagel is not alone. Various nonreligious (apparently) thinkers have made similar assertions, although some, like Frank Jackson, who proposed the Knowledge Argument, have since backed down. What is it that physicalism must account for that these disputants and objectors say it cannot?

It almost entirely consists of consciousness, intentions, intelligence or some similar mental property which is entirely inexplicable by “reductionist” physicalism. [Reductionism is a term of abuse that means – so far as I can tell – solely that the person who makes such an accusation does not like the thing or persons being accused.] And that raises our question: is physicalism lacking something?

I bet you are dying to know more… you’ll just have to follow the link…
See more at Evolving Thoughts>>

Is Physicalism Coherent?

In my last post I argued that physicalism cannot be rejected simply because people assert there are nonphysical objects which are beyond specification. Some are, however, specifiable, and one commentator has identified the obvious ones: abstract objects like the rules of chess or numbers. I have dealt with these before in my “Pizza reductionism” post, which I invite you to go read.

Done? OK, then; let us proceed.

It is often asserted that there are obviously things that are not physical, such as ideas, numbers, concepts, etc., quite apart from qualia. I once sat with a distinguished philosopher, whom I respect greatly and so shall not name, when he asserted that we can construct natural classifications because we can deal first with the natural numbers. I asked him “In what sense are numbers natural objects?” – meaning, why should we think numbers are entities in the natural world. He admitted that the question had not occurred to him (I doubt that – he is rather smart), but that it was simply an axiom of his philosophy. I do not think such abstract objects are natural.

This applies to anything that is “informational”, including all semantic entities like meanings, symbols, lexical objects, and so on. They only “exist” as functional modalities in our thoughts and language. I have also argued this before: information does not “exist”; it is a function of how we process signals. Mathematics is not a domain, it is a language, and the reason it works is that the bits that seriously do not work are not explored far[*] – not all of it has to work in a physical or natural sense, but much of it has to, or else it becomes a simple game that we would not play so much.

So the question of the incoherence of physicalism is based on the assumption (which runs contrary to physicalism, and is thus question begging) that abstract objects are natural things. I don’t believe they are, and I certainly do not think that a thought, or concept, for example, which can be had by many minds and is therefore supposed to be located in none of them (and thus transcendental), really is nonphysical. That is another case of nouning language. The thought “that is red” exists, for a physicalist, in all the heads that meet the functional social criteria for ascriptions of red. It exists nowhere else – it just is all those cognitive and social behaviours in biological heads…

Yes, I know, it’s a real page turner…
See more at Evolving Thoughts>>

In philosophy, physicalism is the ontological thesis that “everything is physical”, that there is “nothing over and above” the physical, or that everything supervenes on the physical. Physicalism is a form of ontological monism – a “one substance” view of the nature of reality, as opposed to a “two-substance” (dualism) or “many-substance” (pluralism) view. Both the definition of “physical” and the meaning of physicalism have been debated.

Physicalism is closely related to materialism, and grew out of materialism with the success of the physical sciences in explaining observed phenomena. The terms are often used interchangeably, although they are sometimes distinguished, for example on the basis of physics describing more than just matter (including energy and physical law). Common arguments against physicalism include both the philosophical zombie argument and the multiple observers argument, that the existence of a physical being may imply zero or more distinct conscious entities.

“When I lost my belief in religion I had to decide what I needed to accept as a bare minimum. I decided that I needed to believe in the physical world. I never found the slightest reason to accept the existence of anything else. To this day I am a physicalist only because I never found the need to be anything else. The principle of parsimony suggests that one should not believe in more than one needs to. Even if it does make you feel comfortable.”

 

Let’s get physicalism!

See John Wilkins’ blog ‘Evolving Thoughts’

#philsci #philosophy #science #physics

On Consciousness, Qualia, Valence & Intelligence with Mike Johnson

Andrés L. Gómez Emilsson

Andrés Gómez Emilsson joined in to add very insightful questions for a 3-part interview series with Mike Johnson, covering the relationship of metaphysics to qualia/consciousness/hedonic valence and how they define their terms, whether panpsychism matters, increasing sensitivity to bliss, valence variance, Effective Altruism, cause prioritization, and the importance of consciousness/valence research.

Andrés Gómez Emilsson interviews Mike Johnson

Carving Reality at the Joints

Andrés L. Gómez Emilsson: Do metaphysics matter for understanding qualia, consciousness, valence and intelligence?

Mike Johnson: If we define metaphysics as the study of what exists, it absolutely does matter for understanding qualia, consciousness, and valence. I think metaphysics matters for intelligence, too, but in a different way.

The big question is whether terms like qualia, consciousness, and valence “carve reality at the joints” or whether they’re emergent linguistic constructs that don’t reflect the structure of the universe. And if these things are ‘real’ in some sense, the follow-up question is: how can we formalize these concepts?

Intelligence seems different: it seems like a ‘fuzzy’ concept, without a good “crisp”, or frame-invariant, definition.

Andrés: What about sources of sentient valence outside of human brains? What is the “minimum viable valence organism”? What would you expect it to look like?

Mike Johnson

Mike: If some form of panpsychism is true – and it’s hard to construct a coherent theory of consciousness without allowing panpsychism – then I suspect two interesting things are true.

  1. A lot of things are probably at least a little bit conscious. The “minimum viable valence experiencer” could be pretty minimal. Both Brian Tomasik and Stuart Hameroff suggest that there could be morally-relevant experience happening at the level of fundamental physics. This seems highly counter-intuitive but also logically plausible to me.
  2. Biological organisms probably don’t constitute the lion’s share of moral experience. If there’s any morally-relevant experience that happens on small levels (e.g., quantum fuzz) or large levels (e.g., black holes, or eternal inflation), it probably outweighs what happens on Earth by many, many, many orders of magnitude. Whether it’ll outweigh the future impact of humanity on our light-cone is an open question.

In contrast with Brian Tomasik on this issue, I suspect (and hope) that the lion’s share of the qualia of the universe is strongly net positive. Appendix F of Principia Qualia talks a little more about this.

Andrés: What would be the implications of finding a sure-fire way to induce great valence for brief moments? Could this be used to achieve “strategic alignment” across different branches of utilitarianism?

Mike: A device that could temporarily cause extreme positive or negative valence on demand would immediately change the world.

First, it would validate valence realism in a very visceral way. I’d say it would be the strongest philosophical argument ever made.

Second, it would obviously have huge economic & ethical uses.

Third, I agree that being able to induce strong positive & negative valence on demand could help align different schools of utilitarianism. Nothing would focus philosophical arguments about the discount rate between pleasure & suffering more than a (consensual!) quick blast of pure suffering followed by a quick blast of pure pleasure. Similarly, a lot of people live their lives in a rather numb state. Giving them a visceral sense that ‘life can be more than this’ could give them ‘skin in the game’.

Fourth, it could mess a lot of things up. Obviously, being able to cause extreme suffering could be abused, but being able to cause extreme pleasure on-demand could lead to bad outcomes too. You (Andres) have written about wireheading before, and I agree with the game-theoretic concerns involved. I would also say that being able to cause extreme pleasure in others could be used in adversarial ways. More generally, human culture is valuable and fragile; things that could substantially disrupt it should be approached carefully.

A friend of mine was describing how in the 70s, the emerging field of genetic engineering held the Asilomar Conference on Recombinant DNA to discuss how the field should self-regulate. The next year, these guidelines were adopted by the NIH wholesale as the basis for binding regulation, and other fields (such as AI safety!) have attempted to follow the same model. So the culture around technologies may reflect a strong “founder effect”, and we should be on the lookout for a good, forward-looking set of principles for how valence technology should work.

One principle that seems to make sense is to not publicly post ‘actionable’ equations, pseudocode, or code for how one could generate suffering with current computing resources (if this is indeed possible). Another principle is to focus resources on positive, eusocial applications only, insofar as that’s possible– I’m especially concerned about addiction, and bad actors ‘weaponizing’ this sort of research. Another would be to be on guard against entryism, or people who want to co-opt valence research for political ends.

All of this is pretty straightforward, but it would be good to work it out a bit more formally, look at the successes and failures of other research communities, and so on.


A question I find very interesting is whether valence research is socially disruptive or socially stabilizing by default. I think we should try very hard to make it a socially stabilizing force. One way to think about this is in terms of existential risk. It’s a little weird to say, but I think the fact that so many people are jaded, or feel hopeless, is a big existential risk, because they feel like they have very little to lose. So they don’t really care what happens to the world, because they don’t have good qualia to look forward to, no real ‘skin in the game’. If valence tech could give people a visceral, ‘felt sense’ of wonder and possibility, I think the world could become a much safer place, because more people would viscerally care about AI safety, avoiding nuclear war, and so on.

Finally, one thing that I think doesn’t make much sense is handing off the ethical issues to professional bioethicists and expecting them to be able to help much. Speaking as a philosopher, I don’t think bioethics itself has healthy community & research norms (maybe bioethics needs some bioethics-ethicists…). In general, especially when issues are particularly complex or technical, I think the best research norms come from within the community itself.

Andrés: What is the role of valence variance in intelligence? Can a sentient being use its consciousness in any computationally fruitful way without any valence variance? Can a “perfectly flat world(-simulation)” be used for anything computational?

 

Mike: I think we see this today, with some people suffering from affective blunting (muted emotions) but seemingly living functional lives. More generally, what a sentient agent functionally accomplishes, and how it feels as it works toward that goal, seem to be correlated but not identical. I.e., one can vary without the other.

But I don’t think that valence is completely orthogonal to behavior, either. My one-sentence explanation here is that evolution seems to have latched onto the property which corresponds to valence – which I argue is symmetry – in deep ways, and has built our brain-minds around principles of homeostatic symmetry. This naturally leads to a high variability in our valence, as our homeostatic state is perturbed and restored. Logically, we could build minds around different principles – but it might be a lot less computationally efficient to do so. We’ll see. 🙂 One angle of research here could be looking at people who suffer from affective blunting and trying to figure out whether it holds them back: what it makes them bad at doing. It’s possible that this could lead to understanding human-style intelligence better.

Why we seek out pleasure: the Symmetry Theory of Homeostatic Regulation

Going a little further, we can speculate that given a certain goal or computation, there could be “valence-positive” processes that could accomplish it, and “valence-negative” processes. This implies that there’s a nascent field of “ethical computation” that would evaluate the valence of different algorithms running on different physical substrates, and choose the one that best satisfices between efficiency and valence. (This is of course a huge simplification which glosses over tons of issues…)

Andrés: What should we prioritize: super-intelligence, super-longevity or super-happiness? Does the order matter? Why?

Mike: I think it matters quite a bit! For instance, I think the world looks a lot different if we figure out consciousness *before* AGI, versus if we ignore it until AGI is built. The latter seems to involve various risks that the former doesn’t.

A risk that I think we both agree is serious and real is this notion of “what if accelerating technology leads to Malthusian conditions where agents don’t- and literally can’t, from a competitive standpoint- care about qualia & valence?” Robin Hanson has a great post called “This is the Dream Time” (of relaxed selection). But his book “Age of Em” posits a world where selection pressures go back up very dramatically. I think if we enter such an era without a good theory of qualia, we could trade away a lot of what makes life worth living.

 

Andrés: What are some conceptual or factual errors that you see happening in the transhumanist/rationalist/EA community related to modeling qualia, valence and intelligence?

Mike: First, I think it’s only fair to mention what these communities do right. I’m much more likely to have a great conversation about these topics with EAs, transhumanists, and rationalists than a random person off the street, or even a random grad student. People from this community are always smart, usually curious, often willing to explore fresh ideas and stretch their brain a bit, and sometimes able to update based on purely abstract arguments. And there’s this collective sense that ideas are important and have real implications for the future. So there’s a lot of great things happening in these communities and they’re really a priceless resource for sounding out theories, debating issues, and so on.

But I would highlight some ways in which I think these communities go astray.

First, people don’t realize how bad most existing models of qualia & valence are. Michael Graziano argues that most theories of consciousness are worse than wrong- that they play to our intuitions but don’t actually explain anything. Computationalism, functionalism, fun theory, ‘hedonic brain regions’, ‘pleasure neurochemicals’, the reinforcement learning theory of valence, and so on all give the illusion of explanatory depth but don’t actually explain things in a way which allows us to do anything useful.

Second, people don’t realize how important a good understanding of qualia & valence is. Qualia and valence are upstream of basically everything interesting and desirable.

Here’s what I think has happened, at least in the rationalist community: historically, consciousness research has been a black hole. Smart people go in, but nothing comes out. So communities (such as physicists and LessWrong) naturally have an interest in putting up a fence around the topic with a sign that says ‘Don’t go here!’ But over time, people forgot why the mystery was blocked off, and started to think that the mystery doesn’t exist. This leads to people actively avoiding thinking about these topics without being able to articulate why.

Andrés: Is there value in studying extreme cases of valence? E.g. Buddhist monks who claim to achieve extreme sustainable bliss, or people on MDMA?

Mike: ‘What science can analyze, science can duplicate.’ And studying outliers such as your examples is a time-honored way of gathering data with high signal-to-noise. So yes, definitely. 🙂


Also see the 1st part, and the 2nd part of this interview series. Also this interview with Christof Koch will likely be of interest.

 

Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is Co-founder of the Qualia Research Institute. Much of Mike’s research and writings can be found at the Open Theory website.
‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary.
If you like Mike’s work, consider helping fund it at Patreon.

Ethics, Qualia Research & AI Safety with Mike Johnson

What’s the relationship between valence research and AI ethics?

Hedonic valence is a measure of the quality of our felt sense of experience: the intrinsic goodness (positive valence) or averseness (negative valence) of an event, object, or situation.  It is an important aspect of conscious experience, always present in our waking lives. If we seek to understand ourselves, it makes sense to seek to understand how valence works – how to measure it and test for it.

Also, might there be a relationship to the AI safety/friendliness problem?
In this interview we cover a lot of things, not least… THE SINGULARITY (of course) & the importance of Valence Research to AI Friendliness Research (as detailed here). Will thinking machines require experience with valence to understand its importance?

Here we cover some general questions about Mike Johnson’s views on recent advances in science and technology & what he sees as being the most impactful, what world views are ready to be retired, his views on XRisk and on AI Safety – especially related to value theory.

This is one part of an interview series with Mike Johnson (see also the section on Consciousness, Qualia, Valence & Intelligence).

 

Adam Ford: Welcome Mike Johnson, many thanks for doing this interview. Can we start with your background?

Mike Johnson

Mike Johnson: My formal background is in epistemology and philosophy of science: what do we know & how do we know it, what separates good theories from bad ones, and so on. Prior to researching qualia, I did work in information security, algorithmic trading, and human augmentation research.

 

Adam: What is the most exciting / interesting recent (scientific/engineering) news? Why is it important to you?

Mike: CRISPR is definitely up there! In a few short years precision genetic engineering has gone from a pipe dream to reality. The problem is that we’re like the proverbial dog that caught up to the car it was chasing: what do we do now? Increasingly, we can change our genome, but we have no idea how we should change our genome, and the public discussion about this seems very muddled. The same could be said about breakthroughs in AI.

 

Adam: What are the most important discoveries/inventions over the last 500 years?

Mike: Tough question. Darwin’s theory of Natural Selection, Newton’s theory of gravity, Faraday & Maxwell’s theory of electricity, and the many discoveries of modern physics would all make the cut. Perhaps also the germ theory of disease. In general what makes discoveries & inventions important is when they lead to a productive new way of looking at the world.

 

Adam: What philosophical/scientific ideas are ready to be retired? What theories of valence are ready to be relegated to the dustbin of history? (Why are they still in currency? Why are they in need of being thrown away or revised?)

Mike: I think that 99% of the time when someone uses the term “pleasure neurochemicals” or “hedonic brain regions” it obscures more than it explains. We know that opioids & activity in the nucleus accumbens are correlated with pleasure – but we don’t know why, we don’t know the causal mechanism. So it can be useful shorthand to call these things “pleasure neurochemicals” and whatnot, but every single time anyone does that, there should be a footnote that we fundamentally don’t know the causal story here, and this abstraction may ‘leak’ in unexpected ways.

 

Adam: What have you changed your mind about?

Mike: Whether pushing toward the Singularity is unequivocally a good idea. I read Kurzweil’s The Singularity is Near back in 2005 and loved it – it made me realize that all my life I’d been a transhumanist and didn’t know it. But twelve years later, I’m a lot less optimistic about Kurzweil’s rosy vision. Value is fragile, and there are a lot more ways that things could go wrong than ways things could go well.

 

Adam: I remember reading Eliezer’s writings on ‘The Fragility of Value’; it’s quite interesting and worth consideration – the idea that if we don’t get AI’s value system exactly right, then it would be like pulling a random mind out of mindspace – most likely inimical to human interests. The writing did seem quite abstract, and it would be nice to see a formal model or something concrete to show this would be the case. I’d really like to know how and why value is as fragile as Eliezer seems to make out. Is there any convincing, crisply defined model supporting this thesis?

Mike: Whether the ‘Complexity of Value Thesis’ is correct is super important. Essentially, the idea is that we can think of what humans find valuable as a tiny location in a very large, very high-dimensional space– let’s say 1000 dimensions for the sake of argument. Under this framework, value is very fragile; if we move a little bit in any one of these 1000 dimensions, we leave this special zone and get a future that doesn’t match our preferences, desires, and goals. In a word, we get something worthless (to us). This is perhaps most succinctly put by Eliezer in “Value is fragile”:

“If you loose the grip of human morals and metamorals – the result is not mysterious and alien and beautiful by the standards of human value. It is moral noise, a universe tiled with paperclips. To change away from human morals in the direction of improvement rather than entropy, requires a criterion of improvement; and that criterion would be physically represented in our brains, and our brains alone. … You want a wonderful and mysterious universe? That’s your value. … Valuable things appear because a goal system that values them takes action to create them. … if our values that prefer it are physically obliterated – or even disturbed in the wrong dimension. Then there is nothing left in the universe that works to make the universe valuable.”

If this frame is right, then it’s going to be really really really hard to get AGI right, because one wrong step in programming will make the AGI depart from human values, and “there will be nothing left to want to bring it back.” Eliezer, and I think most of the AI safety community, assume this.
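The geometry behind this thesis can be made vivid with a toy Monte Carlo calculation. This is my own illustration with made-up numbers – the target point, the tolerance band, and the independence of dimensions are all hypothetical assumptions, not anything from Eliezer’s essay:

```python
# Toy illustration of the "Complexity of Value" thesis (hypothetical numbers):
# treat "value" as a narrow tolerance band in each of n independent dimensions,
# and see how rarely a random point satisfies all bands at once.
import random

TOLERANCE = 0.05  # assumed half-width of the "valuable" band around 0.5

def inside_value_zone(point):
    """A point counts as 'valuable' only if every coordinate is within
    TOLERANCE of the target value 0.5."""
    return all(abs(x - 0.5) <= TOLERANCE for x in point)

def hit_rate(n_dims, trials=200_000):
    """Monte Carlo estimate of the fraction of uniformly random points in
    [0,1]^n_dims that land inside the value zone."""
    hits = sum(
        inside_value_zone([random.random() for _ in range(n_dims)])
        for _ in range(trials)
    )
    return hits / trials

# Exactly, the probability is (2 * TOLERANCE) ** n_dims: roughly 0.1 in one
# dimension, 1e-10 in ten dimensions, and ~1e-1000 in the 1000-dimensional
# version of the thought experiment above.
for n in (1, 2, 3):
    print(n, hit_rate(n))
```

The exponential decay is the whole point: under these assumptions, a random configuration in 1000 dimensions essentially never lands in the value zone, which is what “moving a little bit in any one dimension leaves the special zone” amounts to.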

But– and I want to shout this from the rooftops– the complexity of value thesis is just a thesis! Nobody knows if it’s true. An alternative here would be, instead of trying to look at value in terms of goals and preferences, we look at it in terms of properties of phenomenological experience. This leads to what I call the Unity of Value Thesis, where all the different manifestations of valuable things end up as special cases of a more general, unifying principle (emotional valence). What we know from neuroscience seems to support this: Berridge and Kringelbach write about how “The available evidence suggests that brain mechanisms involved in fundamental pleasures (food and sexual pleasures) overlap with those for higher-order pleasures (for example, monetary, artistic, musical, altruistic, and transcendent pleasures).” My colleague Andres Gomez Emilsson writes about this in The Tyranny of the Intentional Object. Anyway, if this is right, then the AI safety community could approach the Value Problem and Value Loading Problem much differently.

 

Adam: I’m also interested in the nature of possible attractors that agents might ‘extropically’ gravitate towards (like a thirst for useful and interesting novelty, generative and non-regressive, that might not neatly fit categorically under ‘happiness’) – I’m not wholly convinced that they exist, but if one leans away from moral relativism, it makes sense that a superintelligence may be able to discover or extrapolate facts from all physical systems in the universe, not just humans, to determine valuable futures and avoid malignant failure modes (Coherent Extrapolated Value if you will). Being strongly locked into optimizing human values may be a non-malignant failure mode.

Mike: What you write reminds me of Schmidhuber’s notion of a ‘compression drive’: we’re drawn to interesting things because getting exposed to them helps build our ‘compression library’ and lets us predict the world better. But this feels like an instrumental goal, sort of a “Basic AI Drives” sort of thing. Would definitely agree that there’s a danger of getting locked into a good-yet-not-great local optimum if we hard-optimize on current human values.

Probably the danger is larger than that too – as Eric Schwitzgebel notes,

“Common sense is incoherent in matters of metaphysics. There’s no way to develop an ambitious, broad-ranging, self-consistent metaphysical system without doing serious violence to common sense somewhere. It’s just impossible. Since common sense is an inconsistent system, you can’t respect it all. Every metaphysician will have to violate it somewhere.”

If we lock in human values based on common sense, we’re basically committing to following an inconsistent formal system. I don’t think most people realize how badly that will fail.
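Schmidhuber’s ‘compression drive’ mentioned above can be given a loose, runnable flavor. The sketch below is my own zlib-based caricature, not Schmidhuber’s actual formalism (which rewards *improvement* of the compressor over time, i.e. novel-but-learnable data); it only measures how much an agent’s accumulated history helps compress a new observation:

```python
# A crude zlib proxy for a "compression library": an observation is familiar
# to the extent that compressing it together with past history is cheaper
# than compressing it alone (zlib can back-reference into the history).
import zlib

def compressed_len(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def familiarity(history: bytes, observation: bytes) -> int:
    """Bytes saved by compressing the observation in the context of history,
    versus compressing it on its own."""
    alone = compressed_len(observation)
    marginal = compressed_len(history + observation) - compressed_len(history)
    return alone - marginal

history = b"the quick brown fox jumps over the lazy dog. " * 50
seen_before = b"the quick brown fox jumps over the lazy dog. "
noise = bytes((i * 97 + 31) % 256 for i in range(len(seen_before)))

# History helps a lot with a phrase it has seen, and very little with noise.
print(familiarity(history, seen_before), familiarity(history, noise))
```

A compression-progress drive would then reward observations that *increase* future savings like these, rather than observations that are already maximally familiar – which is why it pushes agents toward interesting novelty instead of repetition.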

 

Adam: What invention or idea will change everything?

Mike: A device that allows people to explore the space of all possible qualia in a systematic way. Right now, we do a lot of weird things to experience interesting qualia: we drink fermented liquids, smoke various plant extracts, strap ourselves into rollercoasters, and parachute out of planes, to give just a few examples. But these are very haphazard ways to experience new qualia! When we’re able to ‘domesticate’ and ‘technologize’ qualia, as we’ve done with electricity, we’ll be living in a new (and, I think, incredibly exciting) world.

 

Adam: What are you most concerned about? What ought we be worrying about?

Mike: I’m worried that society’s ability to coordinate on hard things seems to be breaking down, and about AI safety. Similarly, I’m also worried about what Eliezer Yudkowsky calls ‘Moore’s Law of Mad Science’, that steady technological progress means that ‘every eighteen months the minimum IQ necessary to destroy the world drops by one point’. But I think some very smart people are worrying about these things, and are trying to address them.

In contrast, almost no one is worrying that we don’t have good theories of qualia & valence. And I think we really, really ought to, because they’re upstream of a lot of important things, and right now they’re “unknown unknowns”- we don’t know what we don’t know about them.

One failure case that I worry about is that we could trade away what makes life worth living in return for some minor competitive advantage. As Bostrom notes in Superintelligence,

“When it becomes possible to build architectures that could not be implemented well on biological neural networks, new design space opens up; and the global optima in this extended space need not resemble familiar types of mentality. Human-like cognitive organizations would then lack a niche in a competitive post-transition economy or ecosystem. We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today – a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children.”

Nick Bostrom

Now, if we don’t know how qualia works, I think this is the default case. Our future could easily be a technological wonderland, but with very little subjective experience. “A Disneyland with no children,” as Bostrom quips.

 

 

Adam: How would you describe your ethical views? What are your thoughts on the relative importance of happiness vs. suffering? Do things besides valence have intrinsic moral importance?

Mike: Good question. First, I’d just like to comment that Principia Qualia is a descriptive document; it doesn’t make any normative claims.

I think the core question in ethics is whether there are elegant ethical principles to be discovered, or not. Whether we can find some sort of simple description or efficient compression scheme for ethics, or if ethics is irreducibly complex & inconsistent.

The most efficient compression scheme I can find for ethics, that seems to explain very much with very little, and besides that seems intuitively plausible, is the following:

  1. Strictly speaking, conscious experience is necessary for intrinsic moral significance. I.e., I care about what happens to dogs, because I think they’re conscious; I don’t care about what happens to paperclips, because I don’t think they are.
  2. Some conscious experiences do feel better than others, and all else being equal, pleasant experiences have more value than unpleasant experiences.

Beyond this, though, I think things get very speculative. Is valence the only thing that has intrinsic moral importance? I don’t know. On one hand, this sounds like a bad moral theory, one which is low-status, has lots of failure-modes, and doesn’t match all our intuitions. On the other hand, all other systematic approaches seem even worse. And if we can explain the value of most things in terms of valence, then Occam’s Razor suggests that we should put extra effort into explaining everything in those terms, since it’d be a lot more elegant. So – I don’t know that valence is the arbiter of all value, and I think we should be actively looking for other options, but I am open to it. That said, I strongly believe that we should avoid premature optimization, and we should prioritize figuring out the details of consciousness & valence (i.e. we should prioritize research over advocacy).

Re: the relative importance of happiness vs suffering, it’s hard to say much at this point, but I’d expect that if we can move valence research to a more formal basis, there will be an implicit answer to this embedded in the mathematics.

Perhaps the clearest and most important ethical view I have is that ethics must ultimately “compile” to physics. What we value and what we disvalue must ultimately cash out in terms of particle arrangements & dynamics, because these are the only things we can actually change. And so if people are doing ethics without caring about making their theories cash out in physical terms, they’re not actually doing ethics- they’re doing art, or social signaling, or something which can serve as the inspiration for a future ethics.

The analogy I’d offer here is that we can think about our universe as a computer, and ethics as choosing a program to run on this computer. Unfortunately, most ethicists aren’t writing machine-code, or even thinking about things in ways that could be easily translated to machine-code. Instead, they’re writing poetry about the sorts of programs that might be nice to run. But you can’t compile poetry to machine-code! So I hope the field of ethics becomes more physics-savvy and quantitative (although I’m not optimistic this will happen quickly).

Eliezer Yudkowsky refers to something similar with his notions of “AI grade philosophy”, “compilable philosophy”, and “computable ethics”, though I don’t think he quite goes far enough (i.e., all the way to physics).

 

Adam: What excites you? What do you think we have reason to be optimistic about?

Mike: The potential of qualia research to actually make peoples’ lives better in concrete, meaningful ways. Medicine’s approach to pain management and treatment of affective disorders are stuck in the dark ages because we don’t know what pain is. We don’t know why some mental states hurt. If we can figure that out, we can almost immediately help a lot of people, and probably unlock a surprising amount of human potential as well. What does the world look like with sane, scientific, effective treatments for pain & depression & akrasia? I think it’ll look amazing.

 

Adam: If you were to take a stab at forecasting the Intelligence Explosion – in what timeframe do you think it might happen (confidence intervals allowed)?

Mike: I don’t see any intractable technical hurdles to an Intelligence Explosion: the general attitude in AI circles seems to be that progress is actually happening a lot more quickly than expected, and that getting to human-level AGI is less a matter of finding some fundamental breakthrough, and more a matter of refining and connecting all the stuff we already know how to do.

The real unknown, I think, is the socio-political side of things. AI research depends on a stable, prosperous society able to support it and willing to ‘roll the dice’ on a good outcome, and peering into the future, I’m not sure we can take this as a given. My predictions for an Intelligence Explosion:

  • Between ~2035-2045 if we just extrapolate research trends within the current system;
  • Between ~2080-2100 if major socio-political disruptions happen but we stabilize without too much collateral damage (e.g., non-nuclear war, drawn-out social conflict);
  • If it doesn’t happen by 2100, it probably implies a fundamental shift in our ability or desire to create an Intelligence Explosion, and so it might take hundreds of years (or never happen).

 

If a tree falls in the forest and no one is around to hear it, does it make a sound? It would be unfortunate if a whole lot of awesome stuff were to happen with no one around to experience it.

Also see the 2nd part and 3rd part of this interview series (conducted by Andrés Gómez Emilson); this interview with Christof Koch will likely also be of interest.

 

Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is Co-founder of the Qualia Research Institute. Much of Mike’s research and writings can be found at the Open Theory website.
‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary.
If you like Mike’s work, consider helping fund it at Patreon.

Science, Mindfulness & the Urgency of Reducing Suffering – Christof Koch

In this interview with Christof Koch, he shares some deeply felt ideas about the urgency of reducing suffering (with some caveats), his experience with mindfulness – explaining what it was like to visit the Dalai Lama for a week – as well as a heartfelt account of his family dog ‘Nosey’ dying in his arms, and how that moved him to become a vegetarian. He also discusses the bias of human exceptionalism, the horrors of factory farming of non-human animals, as well as a consequentialist view on animal testing.
Christof Koch is an American neuroscientist best known for his work on the neural bases of consciousness.

Christof Koch is the President and Chief Scientific Officer of the Allen Institute for Brain Science in Seattle. From 1986 until 2013, he was a professor at the California Institute of Technology. http://www.klab.caltech.edu/koch/

Michio Kaku on the Holy Grail of Nanotechnology

Michio Kaku on Nanotechnology – Michio is the author of many best sellers, recently the Future of the Mind!

The Holy Grail of Nanotechnology

Merging with machines is on the horizon, and nanotechnology will be key to achieving this. The ‘Holy Grail of Nanotechnology’ is the replicator: a microscopic robot that rearranges molecules into desired structures. Molecular assemblers already exist in nature, inside us – our cells contain them in the form of ribosomes.

Sticky Fingers problem

How might nanorobots/replicators look and behave?
Because of the ‘sticky fingers / fat fingers’ problem, in the short term we won’t have nanobots with agile clippers or blowtorches (like those we might see in a sci-fi movie).

The 4th Wave of High Technology

Humanity has seen an acceleration of technological progress through history, from the steam engine and the industrial revolution to the electrical age, the space program and high technology – what is the 4th wave that will dominate the rest of the 21st century?
Nanotechnology (molecular physics), biotechnology, and artificial intelligence (reducing the circuitry of the brain down to neurons) – “these three molecular technologies will propel us into the future”!

 

Michio Kaku – Bio

Michio Kaku (born January 24, 1947) is an American theoretical physicist, the Henry Semat Professor of Theoretical Physics at the City College of New York, a futurist, and a communicator and popularizer of science. He has written several books about physics and related topics, has made frequent appearances on radio, television, and film, and writes extensive online blogs and articles. He has written three New York Times Best Sellers: Physics of the Impossible (2008), Physics of the Future (2011), and The Future of the Mind (2014).

Kaku is the author of various popular science books:
– Beyond Einstein: The Cosmic Quest for the Theory of the Universe (with Jennifer Thompson) (1987)
– Hyperspace: A Scientific Odyssey through Parallel Universes, Time Warps, and the Tenth Dimension (1994)
– Visions: How Science Will Revolutionize the 21st Century (1998)
– Einstein’s Cosmos: How Albert Einstein’s Vision Transformed Our Understanding of Space and Time (2004)
– Parallel Worlds: A Journey through Creation, Higher Dimensions, and the Future of the Cosmos (2004)
– Physics of the Impossible: A Scientific Exploration into the World of Phasers, Force Fields, Teleportation, and Time Travel (2008)
– Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 (2011)
– The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind (2014)

Subscribe to the YouTube Channel: Science, Technology & the Future

Aubrey de Grey – Towards the Future of Regenerative Medicine

Why is aging research important? Biological aging causes suffering; however, in recent times there has been surprising progress in stem cell research and in regenerative medicine that will likely disrupt the way we think about aging and, in the longer term, substantially mitigate some of the suffering involved in growing old.
Aubrey de Grey is the Chief Science Officer of the SENS Foundation – an organisation focused on going beyond ageing and leading the journey towards the future of regenerative medicine!
What will it take to get there?
 


You might wonder: why pursue regenerative medicine?
Historically, doctors have been racing against time to find cures for specific illnesses, winning temporary victories by tackling diseases one by one – solve one disease and another urgency beckons. Once your body becomes frail, if you survive one major illness you may not be so lucky with the next; the older you get, the less capable your body becomes of staving off new illnesses. You can imagine a long line of other ailments fading beyond view into the distance – eventually one of them will do you in. If we are to achieve radical healthy longevity, we need to strike at the fundamental technical problem of why we become frail and more disease-prone as we get older. Every technical problem has a technical solution – regenerative medicine is a class of solutions that seeks to keep turning the ‘biological clock’ back rather than settle for short-term palliatives.

The damage repair methodology has gained in popularity over the last two decades, though it’s still not popular enough to attract huge amounts of funding – what might tip the scales of advocacy in damage-repair’s favor?
A clear existence proof such as achieving…

Robust Mouse Rejuvenation

In this interview, Aubrey de Grey expresses more optimism than I have previously heard from him about the near-term achievement of Robust Mouse Rejuvenation. Previously it has been ’10 years away, subject to adequate funding’ (funding which was not realised) – now Aubrey predicts it might happen within only 5-6 years (subject to funding, of course). So, what is Robust Mouse Rejuvenation – and why should we care?

For those who have seen Aubrey speak on this, he used to say RMR within 10 years (subject to funding)

Specifically, the goal of RMR is this: make normal, healthy two-year-old mice (expected to live one more year) live three further years.

  • What’s the ideal type of mouse to test on, and why? The ideal mouse to trial on is one that doesn’t naturally have a congenital disease (mice with such diseases might on average live only 1.5 or 2 years), because increasing the lifespan of those mice might only show that you have solved their particular congenital disease. The ideal type of mouse is one which lives to 3 years on average and could die of various causes.
  • How many extra years is significant? Consistently increasing mouse lifespan by an extra two years on top of their normal three-year lifespan – essentially tripling their remaining lifespan.
  • When, or at what stage of the mice’s life, to begin the treatment? Don’t start treating the mice until they are already 2 years old – at a time when they would normally be two-thirds of the way through their life (at or past middle age), with one more year to live.

Why not start treating the mice earlier? The goal is to produce sufficiently dramatic results in a laboratory to convince the mainstream gerontology community, such that they would willingly and publicly endorse the idea that rejuvenation therapy working in humans is not impossible but indeed only a matter of time – that is, to get out there on talk shows and in front of cameras and say all this.

Arguably, the mainstream gerontology community is generally a bit conservative – its members have vested interests in publishing papers and winning grants, they have worries around peer review, they want tenure, and they have reputations to uphold. Gerontologists hold the keys to public trust – they are considered to be the authorities on aging.
When gerontologists are convinced and let the world know about it, a lot of other people in the scientific community and in the general community will also become convinced. Once that happens, here’s what’s likely to happen next: longevity through rejuvenation medicine will become a big issue, and there will be domino effects – there will be a war on aging, experts will appear on Oprah, and politicians will have to include the war on aging in their political manifestos if they want to get elected.

Yoda - the oldest mouse ever to have lived?
Yoda, a cute dwarf mouse, was named the oldest mouse in 2004, at age 4. He lived with the much larger Princess Leia in ‘a pathogen-free rest home for geriatric mice’ belonging to Dr. Richard Miller, professor of pathology in the Geriatrics Center of the University of Michigan Medical School. “Yoda is only the second mouse I know to have made it to his fourth birthday without the rigors of a severe calorie-restricted diet,” Miller says. “He’s the oldest mouse we’ve seen in 14 years of research on aged mice at U-M. The previous record-holder in our colony died nine days short of his 4th birthday; 100-year-old people are much more common than 4-year-old mice.” (ref)

What about Auto-Immune Diseases?

Auto-immune diseases (considered by some to be incurable) get worse with aging for the same reason we lose the general ability to fight off infections and attack cancer: essentially, the immune system loses its precision. It has two arms, the innate system and the adaptive system. The adaptive side works through polyclonality – a very wide diversity of cells with different rearrangements of parts of the genome that confer specificity of each immune cell to a particular target (which it may or may not encounter at some time in the future). This polyclonality diminishes over life, such that the cells targeted towards a given problem are on average less precisely adapted to it – so the immune system takes longer to do its job, or doesn’t do it effectively. In autoimmune disease, the immune system loses its ability to distinguish between things that are foreign and things that are part of the body. So this could be powerfully addressed by the same measures taken to rejuvenate the immune system generally – regenerating the thymus and eliminating the senescent cells that accumulate in the blood.

Big Bottlenecks

See Aubrey discuss this at timepoint: 38:50
Bottlenecks: which bottlenecks does Aubrey believe need the most attention from the community of people who already believe aging is a problem that needs to be solved?

  1. The first thing: funding. The shortage of funding is still the biggest bottleneck.
  2. The second thing: the need for policy makers to get on board with the ideas and understand what is coming – it’s not only about developing the therapies as quickly as possible; it’s also important that, once developed, the therapies are disseminated as quickly as possible to avoid complete chaos.

It’s very urgent to have proper discussions about this – anticipating the anticipation: getting ready for the public to anticipate these therapies, instead of dismissing them as science fiction that is never going to happen.

 

Effective Advocacy

See Aubrey discuss this at timepoint: 42:47
In advocacy, it’s a big ask to get people from extreme opposition all the way to supporting regenerative medicine. Nudging people a bit sideways is a lot easier – getting them from complete opposition to less opposition, or getting people who are undecided to be in favor of it.

Here are 2 of the main aspects of advocacy:

  1. Feasibility / importance – emphasize progress and embrace by the scientific community (see the ‘hallmarks of aging’ paper – the single most highly cited paper on the biology of aging this decade), establishing the legitimacy of the damage-repair approach – it’s not just a crazy hare-brained idea.
  2. Desirability – address concerns and bad arguments (e.g. on overpopulation: “oh, don’t worry, we’ll emigrate into space” – but the people who are concerned about overpopulation aren’t the ones who would like to go to space). Focus instead on the things that generalize to desirable outcomes: regenerative medicine will have side effects, like a longer lifespan, but people will also be healthier at any given age than they would be if they hadn’t had regenerative therapy. Nobody wants Alzheimer’s or heart disease – if the outcome of regenerative medicine is avoiding those, it’s much easier to sell.

We need a sense of proportion on possible future problems – will they generally be more serious than they are today?
Talking about uploading, substrate independence, etc., one actively alienates the public – it’s better to build a foundation of credibility in the conversation before trying to persuade anyone of anything. If we are going to get from here to the long-term future we need advocacy now – the short term matters as well.

More on Advocacy here:

And here

Other Stuff

This interview covers a fair bit of ground, so here are some other points covered:

– Updates & progress at SENS
– Highlights of promising progress in regenerative medicine in general
– Recent funding successes, what can be achieved with this?
– Discussion on getting the message across
– desirability & feasibility of rejuvenation therapy
– What could be the future of regenerative medicine?
– Given progress so far, what can people alive today look forward to?
– Multi-factorial diseases – fixing amyloid plaque buildup alone won’t cure Alzheimer’s: getting rid of amyloid plaque alone has only produced mild cognitive benefits in Alzheimer’s patients, and there is still the unaddressed issue of tangles. If you only get rid of one component of a multi-component problem, you don’t get to see much improvement in pathology – in just the same way, one shouldn’t expect to see much of an overall increase in health & longevity if you only fix 5 of the 7 things that need fixing (i.e. 5 of the 7 strands of SENS)
– mothballing the anti-telomerase approach to fighting cancer in favor of cancer immunotherapy (for the time being), as its side effects need to be compensated for…
– Cancer immunotherapy – stimulating the body’s natural ability to attack cancer with its immune system. Two approaches: CAR-T (Chimeric Antigen Receptor T cells) and checkpoint-inhibiting drugs; then there is training the immune system to identify neoantigens (antigens that cancers produce)

Biography

Chief Science Officer, SENS Research Foundation, Mountain View, CA – http://sens.org

AgeX Therapeutics – http://www.agexinc.com/

Dr. Aubrey de Grey is a biomedical gerontologist based in Mountain View, California, USA, and is the Chief Science Officer of SENS Research Foundation, a California-based 501(c)(3) biomedical research charity that performs and funds laboratory research dedicated to combating the aging process. He is also VP of New Technology Discovery at AgeX Therapeutics, a biotechnology startup developing new therapies in the field of biomedical gerontology. In addition, he is Editor-in-Chief of Rejuvenation Research, the world’s highest-impact peer-reviewed journal focused on intervention in aging. He received his BA in computer science and Ph.D. in biology from the University of Cambridge. His research interests encompass the characterisation of all the types of self-inflicted cellular and molecular damage that constitute mammalian aging and the design of interventions to repair and/or obviate that damage. Dr. de Grey is a Fellow of both the Gerontological Society of America and the American Aging Association, and sits on the editorial and scientific advisory boards of numerous journals and organisations. He is a highly sought-after speaker who gives 40-50 invited talks per year at scientific conferences, universities, companies in areas ranging from pharma to life insurance, and to the public.

 

Many thanks for reading/watching!

Consider supporting SciFuture by:

a) Subscribing to the SciFuture YouTube channel: http://youtube.com/subscription_cente…

b) Donating – Bitcoin: 1BxusYmpynJsH4i8681aBuw9ZTxbKoUi22 – Ethereum: 0xd46a6e88c4fe179d04464caf42626d0c9cab1c6b – Patreon: https://www.patreon.com/scifuture

c) Sharing the media SciFuture creates: http://scifuture.org

Kind regards, Adam Ford – Science, Technology & the Future

Moral Enhancement – Are we morally equipped to deal with humanity’s grand challenges? Anders Sandberg

The topic of Moral Enhancement is controversial (and often misrepresented); it is considered by many to be repugnant. Provocative questions arise, like: “whose morals?”, “who are the ones to be morally enhanced?”, “will it be compulsory?”, “won’t taking a morality pill decrease the value of the intended morality if it skips the difficult process we normally go through to become better people?”, and “shouldn’t people be concerned that enhancements which alter character traits might compromise a person’s authenticity?”

Humans have a built-in capacity for learning moral systems from their parents and other people. We are not born with any particular moral [code], but with the ability to learn one, just as we learn languages. The problem is, of course, that this built-in facility might have worked quite well back in the Stone Age, when we were evolving in small tribal communities, but it doesn’t work that well when we are surrounded by a high-tech civilization, millions of other people, and technology that could be potentially very dangerous. So we might need to update our moral systems, and that is the interesting question of moral enhancement: can we make ourselves more fit for the current world?
– Anders Sandberg, ‘Are we morally equipped for the future?’
Humans have an evolved capacity to learn moral systems – we became more adept at learning moral systems that aided our survival in the ancestral environment – but are our moral instincts fit for the future?

Illustration by Daniel Gray

Let’s build some context. For millennia humans have lived in complex social structures constraining and encouraging certain types of behaviour. More recently, for similar reasons, people go through years of education, at the end of which they are (for the most part) more able to function morally in the modern world. Yet this world is very different from that of our ancestors, and considering the possibility of vastly radical change at breakneck speed in the future, it’s hard to know how humans will keep up, both intellectually and ethically. This is important to consider, as the degree to which we shape the future for the good depends both on how well and how ethically we solve the problems needed to achieve change that on balance benefits humanity (and arguably all morally relevant life-forms).

Can we engineer ourselves to be more ethically fit for the future?

Peter Singer discussed how our circles of care and compassion have expanded over the years – through reason we have been able to expand our natural propensity to act morally and the circumstances in which we act morally.

We may need to expand our circle of ethical consideration to include artificial life – considering certain types of software as moral patients.

So, if we think we could use a boost in our propensity for ethical progress,

How do we actually achieve ideal Moral Enhancement?

That’s a big topic (see a list of papers on the subject of ME here) – the answers may depend on what our goals and preferences are. One idea (among many others) is to regulate the level of oxytocin (the ‘cuddle hormone’) – though this may come with the drawback of increasing distrust of the out-group.
Since morality depends on us being able to make accurate predictions and solve complex ethical problems, ‘Intelligence Enhancement‘ could be an effective aspect of moral enhancement.

Morality is dependent on us being able to predict what’s going to happen when we do something. So various forms of intelligence enhancement might be very useful also for becoming more moral. Our ability to control our reactions – allowing our higher-order values to govern our lower-order values – is also important; that might actually require us to literally rewire, or have biochips that help us do it.
– Anders Sandberg, ‘Are we morally equipped for the future?’

How we decide whether to use Moral Enhancement Therapy will be interesting – it may be needed to help solve global coordination problems; to increase the likelihood that we will, as a civilization, cooperate and cope with many known and as yet to be realised complex ethical quandaries as we move through times of unprecedented social and technological change.

This interview is part of a larger series that was completed in Oxford, UK late 2012.

Interview Transcript

Anders Sandberg

So humans have a kind of built-in capacity for learning moral systems from their parents and other people. We’re not born with any particular moral [code], but with the ability to learn one, just like we can learn languages. The problem is of course that this built-in facility might have worked quite well back in the Stone Age, when we were evolving in small tribal communities, but it doesn’t work that well when we’re surrounded by a high-tech civilization, millions of other people, and technology that could be potentially very dangerous. So we might need to update our moral systems. And that is the interesting question of moral enhancement:

  • can we make ourselves more fit for the current world?
  • And what kind of fitness should we be talking about?

For example, we might want to improve on altruism – to be more welcoming to strangers. But in a big society, in a big town, of course there are going to be some strangers that you shouldn’t trust. So it’s not just blind trust you want to enhance – you actually want to enhance the ability to make careful judgements: to figure out what’s going to happen, and whom you can trust. So maybe you want to enhance some other aspect – maybe the care, the circle of care, is what you want to expand.

Peter Singer pointed out that our circles of care and compassion have been slowly expanding: from our own tribe and our own gender to other genders, to other peoples, and eventually maybe to other species. But this is still biologically based – a lot of it is going on here in the brain, and might be modified. Maybe we should artificially extend these circles of care to make sure that we actually do care about those entities we ought to be caring about. This might be a problem, of course, because some of these agents might be extremely different from what we are used to.

For example, machine intelligence might produce machines or software that are ‘moral patients’ – we actually ought to care about the suffering of software. That might be very tricky, because the pattern receptors in our brains are not tuned for that – we tend to think that if it’s got a face and it speaks, then it’s human, and then we can care about it. But who thinks about Google? Maybe we could get superintelligences that we actually ought to care a lot about, but we can’t recognize them at all because they’re so utterly different from ourselves.

So there are some easy ways of modifying how we think and react – for example, by taking a drug. The hormone oxytocin is sometimes called ‘the cuddle hormone’ – it’s released when breastfeeding and when having bodily contact with your loved one, and it generally seems to make us more altruistic, more willing to trust strangers. You can sniff it, run an economic game, and immediately see a change in response. But it might also make you a bit more ethnocentric: it does enlarge feelings of comfort and family-friendliness – except that it’s only within what you consider to be your family. So we might want to tweak that.

Similarly, we might think about adding links to our brains that allow us to think in better ways. After all, morality is dependent on us being able to predict what’s going to happen when we do something. So various forms of intelligence enhancement might be very useful also for becoming more moral. Our ability to control our reactions – allowing our higher-order values to govern our lower-order values – is also important; that might actually require us to literally rewire, or have biochips that help us do it.

But most important is that we get the information we need to retrain the subtle networks in the brain so that we can think better. And that’s going to require something akin to therapy – not necessarily lying on a sofa and telling your psychologist about your mother. It might very well be a bit of training, a bit of cognitive enhancement, maybe a bit of brain scanning – to figure out what actually ails you. It’s probably going to look very, very different from anything Freud or anybody else envisioned.

But I think in the future we’re actually going to try to modify ourselves so we’re going to be extra certain, maybe even extra moral, so we can function in a complex big world.

 

Related Papers

Neuroenhancement of Love and Marriage: The Chemicals Between Us

Anders contributed to this paper ‘Neuroenhancement of Love and Marriage: The Chemicals Between Us‘. This paper reviews the evolutionary history and biology of love and marriage. It examines the current and imminent possibilities of biological manipulation of lust, attraction and attachment – so-called neuroenhancement of love. We examine the arguments for and against these biological interventions to influence love. We argue that biological interventions offer an important adjunct to psychosocial interventions, especially given the biological limitations inherent in human love.

Human Engineering and Climate Change

Anders also contributed to the paper “Human Engineering and Climate Change” which argues that cognitive, moral and biological enhancement could increase human ecological sustainability.

Many thanks for watching!

Consider supporting me by:
a) Subscribing to my YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media I create

Kind regards,
Adam Ford
– Science, Technology & the Future: http://scifuture.org

The Great Filter, a possible explanation for the Fermi Paradox – interview with Robin Hanson

I grew up wondering about the nature of alien life – what it might look like, what it might do, and whether we will discover any of it any time soon. Yet aside from a number of conspiracy theories, and conjecture about Tabby’s Star, so far we have not discovered any signs of life out there in the cosmos. Why is that?
Given the Drake Equation (which attempts to quantify the likelihood and detectability of extraterrestrial civilizations), it seems as though the universe should be teeming with life.  So where are all those alien civilizations?

The ‘L’ in the Drake equation (length of time civilizations emit detectable signs out into space) for a technologically advanced civilization could be a very long time – why haven’t we detected any?

There are many alternative explanations for why we have not yet detected evidence of an advanced alien civilization, such as:
– The Rare Earth hypothesis – astrophysicist Michael H. Hart argues for a very narrow habitable zone based on climate studies.
– John Smart’s STEM theory
– Some form of transcendence

The universe is a pretty big place. If it’s just us, seems like an awful waste of space.
– Carl Sagan, ‘Contact’

 

Our observable universe seeming to be dead implies that expansionist civilizations are extremely rare; the vast majority of stuff that starts on the path towards life never makes it, so there must be at least one ‘great filter’ that stops most life from evolving into an expansionist civilization.

Peering into the history of biological evolution on earth, we have seen various convergences in evolution – ‘good tricks’ such as the transition from single-cellular to multi-cellular life (which happened at least 14 times), eyes, wings, etc. If we can see convergences both in evolution and in the types of tools various human colonies created after being geographically dispersed, then deducing something about the directions complex life could take – especially life that becomes technologically adept – could inform us about our future.

The ‘Great Filter’ – should we worry?

The theory is that, given estimates (including those from the Drake Equation), it is not unreasonable to argue that there should have been more than enough time and space for cosmic expansionist civilizations (Kardashev type I, II, III and beyond) to arise that are at least a billion years old – and that at least one of their light cones should have intersected with ours.  Somehow, they have been filtered out.  Somehow, planets with life on them make some progress towards spacefaring expansionist civs, but get stopped along the way. While we don’t know specifically what that great filter is, there have been many theories – though if the filter is real, it seems that it has been very effective.

The argument in Robin’s piece ‘The Great Filter – Are We Almost Past It?’ is somewhat complex, here are some points I found interesting:

  • Life Will Colonize – taking hints from evolution and the behavior of our human ancestors, it is feasible that our descendants will colonize the cosmos.
    • Looking at earth’s ecosystem, we see that life has consistently evolved to fill almost every ecological niche in the seas, on land and below. Humans as a single species have migrated from the African savannah to colonize most of the planet, filling new geographic and economic niches as the requisite technological reach is achieved to take advantage of reproductively useful resources.
    • We should expect humanity to expand to other parts of the solar system, then out into the galaxy in so far as there exists motivation and freedom to do so.  Even if most of society become wireheads or VR addicted ‘navel gazers’, they will want more and more resources to fuel more and more powerful computers, and may also want to distribute civilization to avoid local disasters.
    • This indicates that alien life will attempt to do the same, and eventually, absent great filters, expand their civilization through the cosmos.
  • The Data Point – future technological advances will likely enable civilization to expand ‘explosively’ fast (relative to cosmological timescales) throughout the cosmos – yet as of now we have no evidence of this happening elsewhere, and if there were available evidence, we would likely have detected it by now – much of the argument for the great filter follows from this.
    • Within at most the next million years (absent filters) it is foreseeable that our civilization may reach an “explosive point”, rapidly expanding outwards to utilize more and more available mass and energy resources.
    • Civilization will ‘scatter & adapt’ to expand well beyond the reach of any one large catastrophe (e.g. a supernova) to avoid total annihilation.
    • Civilization will recognisably disturb the places it colonizes, adapting the environment into ideal structures (e.g. creating orbiting solar collectors, Dyson spheres or matrioshka brains, thereby substantially changing the star’s spectral output and appearance; really advanced civs may even attempt wholesale reconstruction of galaxies).
    • But we haven’t detected an alien takeover on our planet, or seen anything in the sky to reflect expansionist civs – even if earth or our solar system were kept in a ‘nature preserve’ (look up the “Zoo Hypothesis”), we should be able to see evidence in the sky of aggressive colonization of other star systems.  Despite great success stories in explaining how natural phenomena in the cosmos work (mostly “dead” physical processes), we see no convincing evidence of alien life.
  • The Great Filter – ‘The Great Silence’ implies that at least one of the 9 steps to achieving an advanced expansionist civilization (outlined below) is very improbable; somewhere between dead matter and explosive growth lies The Great Filter.
    1. The right star system (including organics)
    2. Reproductive something (e.g. RNA)
    3. Simple (prokaryotic) single-cell life
    4. Complex (archaeatic & eukaryotic) single-cell life
    5. Sexual reproduction
    6. Multi-cell life
    7. Tool-using animals with big brains
    8. Where we are now
    9. Colonization explosion
  • Someone’s Story is Wrong / It Matters Who’s Wrong – the great silence, as mentioned above, seems to indicate that one or more of the plausible-sounding stories we have about the transitions through each of the 9 steps above are less probable than they look, or just plain wrong. To the extent that the evolutionary steps to achieve our civilization were easy, our future success in achieving a technologically advanced / superintelligent / explosively expansionist civilization is highly improbable.  Realising this may help inform how we strategize about our future.
    • Some scientists think that transitioning from prokaryotic (single-celled) life to archaeatic or eukaryotic life is rare – though it seems to have happened at least 42 times
    • Even if most of society wants to stagnate or slow down to stable speeds of expansion, it’s not infeasible that some part of our civ will escape and rapidly expand
    • Optimism about our future opposes optimism about the ease with which life can evolve to where we are now.
    • Being aware of the Great Filter may at least help us improve our chances
  • Reconsidering Biology – there are several potentially hard trial-and-error steps between dead matter and modern humans (life, complexity, sex, society, cradle and language, etc.) – the harder they were, the more likely they can account for the great silence
  • Reconsidering AstroPhysics – physical phenomena which might reduce the likelihood we would see evidence of an expansionist civ
    • Fast space travel may be more difficult than expected, even for superintelligence – the lower the maximum speed, the more it could account for the great silence.
    • The relative size of the universe could be smaller than we think, containing fewer stars and galaxies
    • There could be natural ‘baby universes’ which erupt with huge amounts of matter/energy which keep expansionist civs occupied, or effectively trapped
    • Harvesting energy on a large scale may be impossible, or the way in which it is done always preserves natural spectra
    • Advanced life may consistently colonize dark matter
  • Rethinking Social Theories – in order for advanced civs to be achieved, they must first lose ‘predispositions to territoriality and aggression’, making them ‘less likely to engage in galactic imperialism’
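Hanson's step-wise framing lends itself to a toy calculation: since the steps must all succeed in sequence, the overall probability is a product, so a single sufficiently improbable step dominates the whole chain. The step probabilities below are made-up placeholders for illustration only, not estimates from Hanson's paper:

```python
# Toy model of the 9-step chain (all probabilities are made-up placeholders).
# Because the steps multiply, one very hard step dominates the whole product.
step_probabilities = {
    "right star system": 0.5,
    "reproductive something (e.g. RNA)": 0.1,
    "simple single-cell life": 0.1,
    "complex single-cell life": 0.1,
    "sexual reproduction": 0.5,
    "multi-cell life": 0.5,
    "big-brained tool users": 0.1,
    "where we are now": 0.5,
    "colonization explosion": 1e-9,  # one hypothetical "hard step"
}

p_chain = 1.0
for p in step_probabilities.values():
    p_chain *= p

# Even with a huge number of candidate planets, the expected number of
# expansionist civilizations stays far below one:
candidate_planets = 1e11
expected_civs = candidate_planets * p_chain
print(expected_civs)  # roughly 6.25e-04
```

Dropping the one hard step to a mere 0.5 in this sketch would push the expectation into the hundreds of thousands, which is why identifying *which* step is hard matters so much.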

We can’t detect expansionist civs, yet our default assumption is that there has been plenty of time and hospitable space for advanced enough life to arise – especially if you agree with panspermia, the idea that life could be seeded by precursors on roaming cosmic bodies (e.g. comets), resulting in more life-bearing planets.  We can assume plausible reasons for a series of filters which slow down or halt evolutionary progress that would otherwise finally arrive at technologically savvy life capable of expansionist civs – but why all of them?

It seems that we, as a technologically capable species, are on the verge of having our civilization escape earth’s gravity well and go spacefaring – so how far along the great filter are we?

Though it’s been considered less a precise predictive tool and more of a rallying point – let us revisit the Drake Equation anyway, because it’s a good tool for helping to understand the apparent contradiction between high probability estimates for the existence of extraterrestrial civilizations and the complete lack of evidence that such civilizations exist.

The number of active, communicative extraterrestrial civilizations in the Milky Way galaxy, N, is assumed to be equal to the mathematical product of:

  1. R, the average rate of star formation in our galaxy,
  2. fp, the fraction of formed stars that have planets,
  3. ne, the average number of planets that can potentially support life, per star that has planets,
  4. fl, the fraction of those planets that actually develop life,
  5. fi, the fraction of planets bearing life on which intelligent, civilized life has developed,
  6. fc, the fraction of these civilizations that have developed communications, i.e. technologies that release detectable signs into space, and
  7. L, the length of time over which such civilizations release detectable signals,
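The product structure of the equation is simple enough to sketch in a few lines. The numeric values below are purely illustrative placeholders (not estimates from this article or from the SETI literature) – the point is just to show how N is computed and how strongly it swings with each factor:

```python
# Drake equation: N = R * fp * ne * fl * fi * fc * L
# All numeric values below are illustrative placeholders, not real estimates.

def drake(R, fp, ne, fl, fi, fc, L):
    """Return N, the estimated number of detectable civilizations in the galaxy."""
    return R * fp * ne * fl * fi * fc * L

# One optimistic and one pessimistic illustrative scenario:
optimistic = drake(R=7, fp=0.5, ne=2, fl=0.5, fi=0.1, fc=0.1, L=10_000)     # roughly 350
pessimistic = drake(R=7, fp=0.5, ne=2, fl=0.001, fi=0.001, fc=0.01, L=100)  # roughly 7e-06

print(optimistic, pessimistic)
```

Note how the same R, fp and ne values yield answers eight orders of magnitude apart once the poorly constrained fractions and L are varied – which is exactly why the equation works better as a way to organize our ignorance than as a predictor.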

 

Which of the values on the right side of the equation (1 to 7 above) are the biggest reasons (or most significant filters) for the ‘N’ value (the estimated number of alien civilizations in our galaxy capable of communication) being so small?  If a substantial amount of the great filter is explained by ‘L’, then we are in trouble, because the length of time expansionist civs emit signals likely correlates with how long they survive before disappearing (which we can assume likely means going extinct, though there are other possible explanations for going silent).  If other civs don’t seem to last long, then we can infer statistically that ours might not either.  The larger the remaining filter ahead of us, the more cautious and careful we ought to be to avoid potential show stoppers.

So let’s hope that the great filter is behind us, or a substantial proportion is – meaning that the seemingly rare occurrence of expansionist civs is likely because the emergence of intelligent life is rare, rather than it being because the time expansionist civs exist is short.

The more we develop our theories about the potential behaviours of expansionist civs, the more we may expand upon or adapt the ‘L’ term of the Drake equation.

Many of the parameters in the Drake Equation are really hard to quantify. Exoplanet data from the Kepler Telescope has already been used to update the Drake Equation – based on this data, there seem to be far more potentially habitable, earth-like planets within our galaxy than previously thought, which both excites me (because news about alien life is exciting) and frustrates me (because it decreases the odds that the larger portion of the great filter is behind us).

Only by doing the best we can with the very best that an era offers, do we find the way to do better in the future.
– Frank Drake, ‘A Reminiscence of Project Ozma’, Cosmic Search Vol. 1, No. 1, January 1979

Interview

…we should remember that the Great Filter is so very large that it is not enough to just find some improbable steps; they must be improbable enough. Even if life only evolves once per galaxy, that still leaves the problem of explaining the rest of the filter: why we haven’t seen an explosion arriving here from any other galaxies in our past universe? And if we can’t find the Great Filter in our past, we’ll have to fear it in our future.
– Robin Hanson, ‘The Great Filter – should we worry?’

As stated on the Overcoming Bias blog:

We have three main sources of info on existential risks (xrisks):

  1. Inside View Analysis – where we try to use our best theories to reason about particular causal processes.
  2. Earth Track Records – the empirical distribution of related events observed so far on Earth.
  3. The Great Filter – inferences from the fact that the universe looks dead everywhere but here.

These sources are roughly equally informative. #2 suggests xrisks are low, even if high enough to deserve much effort to prevent them. I’d say that most variations on #1 suggest the same. However, #3 suggests xrisks could be very high, which should encourage more xrisk-mitigation efforts.

Ironically most xrisk efforts (of which I’m aware) focus on AI-risk, which can’t explain the great filter. Most analysis efforts also focus on #1, less on #2, and almost none on #3.

“What’s the worst that could happen?” – in 1996 (revised in 1998) Robin Hanson wrote:

Humanity seems to have a bright future, i.e., a non-trivial chance of expanding to fill the universe with lasting life. But the fact that space near us seems dead now tells us that any given piece of dead matter faces an astronomically low chance of begetting such a future. There thus exists a great filter between death and expanding lasting life, and humanity faces the ominous question: how far along this filter are we?
– Robin Hanson, ‘The Great Filter – Are We Almost Past It?’
If the ‘Great Filter’ is ahead of us, we could fatalistically resign ourselves to the view that human priorities are too skewed to coordinate towards avoiding being ‘filtered’, or we can try to do something to decrease the odds of being filtered. To coordinate our way around a great filter, we need some idea of what plausible filters look like.
How may a future great filter manifest?
– Reapers (Mass Effect)?
– Berserker probes sent out to destroy any up-and-coming civilization that reaches a certain point? (A malevolent alien teenager in their basement could have seeded self-replicating berserker probes as a ‘practical joke’)
– A robot takeover? (If this has been the cause of great filters in the past then why don’t we see evidence of expansionist robot civilizations? see here.  Also if the two major end states of life are either dead or genocidal intelligence explosion, and we aren’t the first, then it is speculated that we should live in a young universe.)

Robin Hanson gave a TedX talk on the Great Filter:

Bio

Robin Hanson is an associate professor of economics at George Mason University, a research associate at the Future of Humanity Institute of Oxford University, and chief scientist at Consensus Point. After receiving his Ph.D. in social science from the California Institute of Technology in 1997, Robin was a Robert Wood Johnson Foundation health policy scholar at the University of California at Berkeley. In 1984, Robin received a masters in physics and a masters in the philosophy of science from the University of Chicago, and afterward spent nine years researching artificial intelligence, Bayesian statistics, and hypertext publishing at Lockheed, NASA, and independently.

Robin has over 70 publications, and has pioneered prediction markets since 1988, being a principal architect of the first internal corporate markets, at Xanadu in 1990, of the first web markets, the Foresight Exchange since 1994, of DARPA’s Policy Analysis Market, from 2001 to 2003, and of DAGGRE/SciCast, since 2010.

Links

Robin Hanson’s 1998 revision on the paper he wrote on the Great Filter in 1996
– The Drake Equation at connormorency (where I got the Drake equation image – thanks)
Slate Star Codex – Don’t Fear the Filter
Ask Ethan: How Fast Could Life Have Arisen In The Universe?
Keith Wiley – The Fermi Paradox, Self-Replicating Probes, Interstellar Transport Bandwidth

The Amazing James Randi – Skepticism & the Singularity!

Magician James Randi (known as ‘The Amazing Randi’) has spent the bulk of his career debunking the claims of self-proclaimed psychics and paranormalists. Randi has an international reputation as a magician and escape artist, but he is perhaps best known as the world’s most tireless investigator and de-mystifier of paranormal and pseudoscientific claims.

The Amazing Randi has pursued ‘psychic’ spoon benders, exposed the dirty tricks of faith healers, investigated homeopathic water ‘with a memory,’ and generally been a thorn in the sides of those who try to pull the wool over the public’s eyes in the name of the supernatural. Randi is also starring in his own biographical documentary ‘An Honest Liar,’ which will be screened alongside his fireside chat across four Australian cities.

He has received numerous awards and recognitions, including a MacArthur Foundation Prize Fellowship (also known as the ‘MacArthur ‘Genius’ Grant’) in 1986. He’s the author of numerous books, including Flim-Flam!: Psychics, ESP, Unicorns, and Other Delusions (1982), The Truth About Uri Geller (1982), The Faith Healers (1987), and An Encyclopedia of Claims, Frauds, and Hoaxes of the Occult and Supernatural (1995).

In 1996, the James Randi Education Foundation was established to further Randi’s work. Randi’s long-standing challenge to psychics now stands as a $1,000,000 prize administered by the Foundation. It remains unclaimed.

The Amazing Randi brought his unique superheroic brand of sceptic justice to Australia: http://thinkinc.org.au/jamesrandi/

The Point of View of the Universe – Peter Singer

Peter Singer discusses the new book ‘The Point Of View Of The Universe – Sidgwick & Contemporary Ethics’ (by Katarzyna de Lazari-Radek and Peter Singer). He also discusses his reasons for changing his mind about preference utilitarianism.

 

Buy the book here: http://ukcatalogue.oup.com/product/97… Bart Schultz’s (University of Chicago) Review of the book: http://ndpr.nd.edu/news/49215-he-poin… “Restoring Sidgwick to his rightful place of philosophical honor and cogently defending his central positions are obviously no small tasks, but the authors are remarkably successful in pulling them off, in a defense that, in the case of Singer at least, means candidly acknowledging that previous defenses of Hare’s universal prescriptivism and of a desire or preference satisfaction theory of the good were not in the end advances on the hedonistic utilitarianism set out by Sidgwick. But if struggles with Singer’s earlier selves run throughout the book, they are intertwined with struggles to come to terms with the work of Derek Parfit, both Reasons and Persons (Oxford, 1984) and On What Matters (Oxford, 2011), works that have virtually defined the field of analytical rehabilitations of Sidgwick’s arguments. The real task of The Point of View of the Universe — the title being an expression that Sidgwick used to refer to the impartial moral point of view — is to defend the effort to be even more Sidgwickian than Parfit, and, intriguingly enough, even more Sidgwickian than Sidgwick himself.”