The Ghost in the Quantum Turing Machine – Scott Aaronson

Interview on whether machines can be conscious with Scott Aaronson, theoretical computer scientist and David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin. His primary areas of research are quantum computing and computational complexity theory.
Scott blogged about this and other segments of our interview – his blog is very popular and has way more comments than this one does – check it out.

Check out the interview segment “The Winding Road to Quantum Supremacy” with Scott Aaronson – covering progress in quantum computation, whether there are things that quantum computers could do that classical computers can’t, etc.


Adam Ford: In ‘Could a Quantum Computer have Subjective Experience?‘ you speculate that a process has to fully participate in the arrow of time to be conscious, and that this points to decoherence. If pressed, how might you try to formalize this?

Scott Aaronson: So yeah, I did write this kind of crazy essay five or six years ago called “The Ghost in the Quantum Turing Machine“, where I tried to explore a position that seemed to me to be mysteriously under-explored in all of the debates about ‘could a machine be conscious?’. We want to be thoroughgoing materialists, right? There’s no magical ghost that defies the laws of physics; brains are physical systems that obey the laws of physics just like any others.
But there is at least one very interesting difference between a brain and any digital computer that’s ever been built – and that is that the state of a brain is not obviously copyable; that is, not obviously knowable to an outside person well enough to predict what that person will do in the future, without scanning the person’s brain so invasively that you would kill them. And so there is a sort of privacy, or opacity if you like, to a brain that there is not to a piece of code running on a digital computer.
And so there are all sorts of classic philosophical conundrums that play on that difference. For example, suppose that a human-level AI does eventually become possible, and we have simulated people running inside of our computers – well, if I were to murder such a person in the sense of deleting their file, is that okay as long as I keep the backup somewhere, so that I can just restore them from backup? Or what if I’m running two exact copies of the program on two computers next to each other – is that instantiating two consciousnesses, or is it really just one consciousness, because there’s nothing to distinguish the one from the other?
So could I blackmail an AI into doing what I wanted by saying, even if I don’t have access to you as an AI: if you don’t give me a million dollars, then – since I have your code – I’m going to create a million copies of that code and torture them? And – if you think about it – you are almost certain to be one of those copies, because there are far more of them than there are of you, and they’re all identical!
So there are all these puzzles that philosophers have wondered about for generations: the nature of identity, how identity persists across time, whether it can be duplicated across space – and somehow, in a world with copyable AIs, they would all become much more real!
And so one point of view that you could take is: well, what if I can predict exactly what someone is going to do? I don’t just mean saying, as a philosophical matter, that I could predict your actions if I were a Laplace demon and knew the complete state of the universe – because I don’t, in fact, know the complete state of the universe – but imagine that I could do it as an actual practical matter: I could build an actual machine that would perfectly predict, down to the last detail, everything you would do before you had done it.
Okay, well then in what sense do I still have to respect your personhood? I could just say I have unmasked you as a machine; my simulation has every bit as much right to personhood as you do at this point – or maybe they’re just two different instantiations of the same thing.
So another possibility, you could say, is that maybe what we like to think of as consciousness only resides in those physical systems that, for whatever reason, are uncopyable – that if you try to make a perfect copy, then you would ultimately run into what we call the no-cloning theorem in quantum mechanics, which says that you cannot copy the exact physical state of an unknown system, for quantum mechanical reasons. And so this would suggest a view where personal identity is very much bound up with the flow of time; with things that happen that are evanescent; that can never happen again exactly the same way, because the world will never reach exactly the same configuration.
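The no-cloning theorem Scott alludes to comes down to the linearity of quantum mechanics, and it can be sketched numerically. The toy example below (my own illustration, not from the interview) uses plain Python to show that a CNOT gate, which copies the basis states |0⟩ and |1⟩ perfectly, fails to copy their superposition – it produces an entangled pair rather than two independent copies.

```python
# Toy illustration of the no-cloning theorem (my own example, not from
# the interview). A CNOT gate copies the classical basis states |0> and |1>,
# but because quantum evolution is linear it cannot copy a superposition.

import math

def apply(matrix, state):
    """Multiply a matrix by a state vector."""
    return [sum(m * s for m, s in zip(row, state)) for row in matrix]

def kron(a, b):
    """Tensor (Kronecker) product of two state vectors."""
    return [x * y for x in a for y in b]

# Two-qubit basis order: |00>, |01>, |10>, |11>.
# CNOT flips the second qubit when the first is 1.
CNOT = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
]

zero, one = [1.0, 0.0], [0.0, 1.0]

# Basis states ARE copied perfectly onto a blank second qubit:
assert apply(CNOT, kron(zero, zero)) == kron(zero, zero)  # |00> -> |00>
assert apply(CNOT, kron(one, zero)) == kron(one, one)     # |10> -> |11>

# But the superposition |+> = (|0> + |1>)/sqrt(2) is NOT copied:
plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]
wanted = kron(plus, plus)              # what a true cloner would output
actual = apply(CNOT, kron(plus, zero)) # what linearity actually gives
print(actual == wanted)  # False: the output is an entangled Bell state
```

No linear map can send both basis states and their superpositions to two copies of themselves, which is the whole content of the theorem.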
A related puzzle: what if I took your consciousness, or took an AI, and ran it on a reversible computer? Now, some people believe that any appropriate simulation brings about consciousness – which is a position that you can take. But what if I ran the simulation backwards – as I can always do on a reversible computer? What if I ran the simulation, computed it, and then uncomputed it? Have I now caused nothing to have happened? Or did I cause one forward consciousness and then one backward consciousness – whatever that means? Would the backward consciousness have a different character from the forward one?
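The “compute, then uncompute” idea can be made concrete with a toy reversible program (a hypothetical sketch of my own, not from the essay): each step is a bijection on the machine’s state, so running the inverse steps in reverse order restores the initial state bit for bit, as if nothing had happened.

```python
# Toy sketch of "compute, then uncompute" on a reversible computer
# (my own illustration, not from the essay). Each step is a bijection
# on the machine's state, so applying the inverses in reverse order
# restores the initial state exactly.

def xor_step(state, i, j):
    """Reversible update: state[i] ^= state[j]. It is its own inverse."""
    s = list(state)
    s[i] ^= s[j]
    return tuple(s)

def swap_step(state, i, j):
    """Reversible swap of two bits; also its own inverse."""
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

initial = (1, 0, 1, 1)
program = [(xor_step, 0, 1), (swap_step, 1, 2), (xor_step, 2, 3)]

# Run the "simulation" forward...
state = initial
for op, i, j in program:
    state = op(state, i, j)

# ...then uncompute it: inverse operations in reverse order.
for op, i, j in reversed(program):
    state = op(state, i, j)

print(state == initial)  # True: no trace of the computation remains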
But we know a whole class of phenomena that in practice can only ever happen in one direction in time – and these are thermodynamic phenomena; phenomena that create waste heat, that create entropy, that take these tiny microscopic unknowable degrees of freedom and amplify them to macroscopic scale. And in principle those macroscopic records could become microscopic again. Like, if I make a measurement of a quantum state, then at least according to, let’s say, many-worlds quantum mechanics, in principle that measurement could always be undone. And yet in practice we never see those things happen – for basically the same reasons why we never see an egg spontaneously unscramble itself, or why we never see a shattered glass leap up to the table and reassemble itself – namely, these would represent vastly improbable decreases of entropy. And so the speculation was that maybe this sort of irreversibility, this increase of entropy that we see in all ordinary physical processes, and in particular in our own brains – maybe that’s important to consciousness?
Right, or to what we like to think of as free will – we certainly don’t have evidence to say that it isn’t – but the truth of the matter is, I don’t know. I set out all the thoughts that I had about it in this essay five years ago, and then, having written it, I decided that I’d had enough of metaphysics – it made my head hurt too much – and I was going to go back to the better-defined questions in math and science.

Adam Ford: In ‘Is Information Physical?’ you note that if a system crosses the Schwarzschild bound it collapses into a black hole – do you think this could be used to put an upper bound on the amount of consciousness in any given physical system?

Scott Aaronson: Well, let me decompose your question a little bit. What quantum gravity considerations let you do, it is believed today, is put a universal bound on how much computation can be going on in a physical system of a given size, and also on how many bits can be stored there. And the bounds are precise enough that I can just tell you what they are. So it appears that a physical system – let’s say one surrounded by a sphere of a given surface area – can store at most about 10 to the 69 bits, or rather 10 to the 69 qubits, per square meter of surface area of the enclosing boundary. And there is a similar limit on how many computational steps it can do over its whole history.
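The ~10^69 figure is the holographic bound of roughly one bit per four Planck areas of the enclosing surface. As a back-of-the-envelope check (my own sketch using standard CODATA constants, not a calculation from the interview):

```python
# Back-of-the-envelope check of the holographic bound Scott quotes:
# at most ~A / (4 * l_p^2 * ln 2) bits for an enclosing surface of area A,
# where l_p is the Planck length. (A standard estimate; my own sketch.)

import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

planck_area = hbar * G / c**3           # l_p^2, in m^2 (~2.6e-70)
bits_per_m2 = 1 / (4 * planck_area * math.log(2))

print(f"{bits_per_m2:.2e} bits per square meter")  # ~1.38e+69
```

The result, about 1.4 × 10^69 bits per square meter, matches the order of magnitude quoted above.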
So now I think your question reduces to the question: can we upper-bound how much consciousness there is in a physical system – whatever that means – in terms of how much computation is going on in it, or in terms of how many bits are there? And that’s a little hard for me to think about, because I don’t know what we mean by an amount of consciousness. Like, am I ten times more conscious than a frog? Am I a hundred times more conscious? I don’t know – I mean, some of the time I feel less conscious than a frog.
But I am sympathetic to the idea that there is some minimum of computational interestingness in any system that we would like to talk about as being conscious. So there is this ancient speculation of panpsychism, which would say that every electron, every atom is conscious – and to me that’s fine – you can speculate that if you want. We know of nothing to rule it out; there are no physical laws attached to consciousness that would tell us it’s impossible. The question is just: what does it buy you to suppose that? What does it explain? And in the case of the electron, I’m not sure that it explains anything!
Now you could ask: does it even explain anything to suppose that we’re conscious? Maybe not – at least not for anyone beyond ourselves. There’s this ancient conundrum that we each know that we’re conscious, presumably, by our own subjective experience, and as far as we know everyone else might be an automaton – which, if you really think about it consistently, could lead you to become a solipsist. Alan Turing, in his famous 1950 paper that proposed the Turing test, had a wonderful remark about this – which was something like: ‘A’ is liable to think that ‘A’ thinks while ‘B’ does not, while ‘B’ is liable to think ‘B’ thinks but ‘A’ does not. But in practice it is customary to adopt the polite convention that everyone thinks. It was a very British way of putting it, right? We adopt the polite convention that solipsism is false; that people – or any entities, let’s say – that can exhibit complex, goal-directed, intelligent behaviors like ours are probably conscious like we are. And that’s a criterion that would apply to other people; it would not apply to electrons (I don’t think); and it’s plausible that there is some bare minimum of computation in any entity to which that criterion would apply.

Adam Ford: Sabine Hossenfelder – I forget her name now – {Scott: Sabine Hossenfelder, yes} – she had a scathing review of panpsychism recently; did you read that?

Scott Aaronson: If it was very recent then I probably didn’t read it – though I did read an excerpt where she was saying something like: panpsychism is experimentally ruled out? If she was saying that, I don’t agree – I don’t even see how you would experimentally rule out such a thing. I mean, you’re free to postulate as much consciousness as you want on the head of a pin – I would just say, well, if it doesn’t have an empirical consequence, if it’s not affecting the world, if it’s not affecting the behavior of that head of a pin in a way that you can detect – then Occam’s razor just itches to slice it out from our description of the world. That’s the way that I would put it personally.
So I posted a detailed critique of integrated information theory (IIT), which is Giulio Tononi’s proposed theory of consciousness, on my blog. My critique was basically this: Tononi comes up with a specific numerical measure that he calls ‘Phi’, and he claims that a system should be regarded as conscious if and only if its Phi is large. Now, the actual definition of Phi has changed over time – it’s changed from one paper to another, it’s not always clear how to apply it, and there are many technical objections that could be raised against this criterion. But what I respect about IIT is that at least it sticks its neck out. It proposes a very clear criterion – far clearer than competing accounts do – to tell you which physical systems you should regard as conscious and which not.
Now, the danger of sticking your neck out is that it can get cut off – and indeed I think that IIT is not only falsifiable but falsified. Because as soon as this criterion is written down, the point I was making is that it is easy to construct physical systems that have enormous values of Phi – much, much larger than a human has – that I don’t think anyone would really want to regard as intelligent, let alone conscious, or even very interesting.
And so my examples show that, basically, Phi is large if and only if your system has a lot of interconnection – if it’s very hard to decompose into two components that interact with each other only weakly – so that you have a high degree of information integration. And the point of my counterexamples was to say: well, this cannot possibly be the sole relevant criterion, because a standard error-correcting code, as is used for example on every compact disc, also has an enormous amount of information integration. But should we therefore say that every error-correcting code that gets implemented in some piece of electronics is conscious? And even more than that, a giant grid of logic gates just sitting there doing nothing would have a very large value of Phi – and we can multiply examples like that.
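To make the error-correcting-code counterexample concrete (a toy sketch of my own, not Tononi’s actual Phi calculation): in a Hamming(7,4) code, each parity bit mixes several data bits, so flipping any single data bit changes several codeword bits at once – the codeword cannot be carved into two weakly interacting halves, which is the kind of “information integration” the counterexamples exploit.

```python
# Toy illustration (not the actual Phi computation): in a Hamming(7,4)
# error-correcting code, each parity bit is the XOR of three data bits,
# so every part of the codeword is correlated with every other part.

def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # each parity bit depends on three data bits
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

# Flipping ANY single data bit changes at least three codeword bits:
base = hamming74_encode([1, 0, 1, 1])
for i in range(4):
    flipped = [1, 0, 1, 1]
    flipped[i] ^= 1
    diff = sum(a != b for a, b in zip(base, hamming74_encode(flipped)))
    print(f"flip data bit {i + 1}: {diff} codeword bits change")  # always >= 3
```

High interconnection of exactly this kind is what drives Phi up, even though nobody would call a compact-disc decoder conscious.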
And so Tononi then posted a big response to my critique, and his response was basically: well, you’re just relying on intuition; you’re just saying these systems are not conscious because your intuition says they aren’t – but that’s parochial; why should you expect a theory of consciousness to accord with your intuition? And he then just went ahead and said: yes, the error-correcting code is conscious; yes, the giant grid of XOR gates is conscious – and if they have a thousand times larger value of Phi than a brain, then they are a thousand times more conscious than a human is. So, the way I described it, he didn’t just bite the bullet – he devoured a bullet sandwich with mustard. Which was not what I was expecting. But the claim that I’m saying ‘any scientific theory has to accord with intuition’ – I think that is completely mistaken; I think that’s really a mischaracterization of my position.
I mean, I’ll be the very first to tell you that science has overturned common-sense intuition over and over and over. For example, temperature feels like an intrinsic quality of a material; it doesn’t feel like it has anything to do with motion, with the atoms jiggling around at a certain speed – but we now know that it does. But when scientists first arrived at that modern conception of temperature in the eighteen hundreds, what was essential was that the new criterion at least agreed with the old criterion that fire is hotter than ice – so at least in the cases where we knew what we meant by hot or cold, the new definition agreed with the old one. And then the new definition went further, to tell us many counterintuitive things that we didn’t know before – but at least it reproduced the way in which we were using the words previously.
Even when Copernicus and Galileo discovered that the earth is orbiting the sun, the new theory was able to account for our observation that we were not flying off the earth – it said that that’s exactly what you would expect to happen, because of these new principles of inertia and so on.
But if a theory of consciousness says that this giant blank wall, or this grid, is highly conscious just sitting there doing nothing – whereas even a simulated person, or an AI that passes the Turing test, would not be conscious if it happens to be organized in a way that has a low value of Phi – then I say: okay, the burden is on you to prove to me that this Phi notion you have defined has anything whatsoever to do with what I was calling consciousness. You haven’t even shown me any cases where they agree with each other, from which I should extrapolate to the hard cases – the ones where I lack an intuition, like: at what point is an embryo conscious? Or when is an AI conscious? The theory seems to have gotten wrong the only things it could possibly have gotten right, and at that point I think there is nothing to compel a skeptic to believe that this particular quantity Phi has anything to do with consciousness.

Cognitive Biases & In-Group Convergences with Joscha Bach

True & false vs right & wrong – people converge their views to a set of rights and wrongs relative to the in-group biases of their peer group.
As a survival mechanism, convergence within groups is sometimes healthier than being right – so one should sometimes optimize for convergence even at the cost of getting things wrong – and humans probably have an evolutionary propensity to favor convergence over truth.
However, optimizing for convergence may result in the group mind being more stupid than the smartest people in the group.


Joscha highlights the controversy of James Damore being fired for circulating a memo about biological differences between men and women affecting their abilities as engineers – where Damore’s arguments may be correct – but regardless of what the facts are about how biological differences affect differences in ability between men and women, Google fired him because they thought supporting these arguments would make for a worse social environment.

This sort of thing leads to an interesting difference in discourse, where:
* ‘nerds’ tend to focus on ‘content‘ – on imparting ideas and facts that everyone can judge autonomously to form their own opinions – in the view that, in order to craft the best solutions, we need to have the best facts
* for most people, the purpose of communication is ‘coordination‘ between individuals and groups (society, nations etc.) – where the value of a ‘fact’ is its effect on the coordination between people

So is Google’s response to the memo controversy about getting the facts right, or about how Google at this point should be organised?

What’s also really interesting is that different types of people read this ‘memo’ very differently – making it very difficult to form agreement about its content. How can one agree on what’s valuable about communication – whether it’s more about imparting ideas and facts, or more about coordination?

More recently there has been a lot of talk about #FakeNews – where it’s very difficult to get people to agree to things that are not in their own interests – including, as Joscha points out, the idea that truth matters.

Joscha Bach, Ph.D. is an AI researcher who has worked on and published about cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany, and has built computational models of motivated decision making, perception, categorization, and concept-formation. He is especially interested in the philosophy of AI and in the augmentation of the human mind.

Discussion points:
– In-group convergence: thinking in true & false vs right & wrong
– The group mind may be more stupid than the smartest individuals in the group

Physicalism & Materialism – John Wilkins

Materialism was a pre-Socratic view that for something to be real it has to be matter – physical stuff made of atoms (which at the time were considered hard like billiard balls – fundamental parts of reality). The reason the term physicalism is used these days is that it can describe things that aren’t matter – like forces – or that aren’t observable matter – like dark matter – or energy, fields, spacetime, etc. Physicalism is the idea that all that exists can be described in the language of some ‘ideal’ physics – we may never know what this ideal physics is, though people think it is something close to our current physics (since we can make very accurate predictions with our current physics).

If magic, telepathy or angels were real, there would be a physics that could describe them – they’d have patterns and properties that would be describable and explainable. A physicalist would likely think that even the mind operates according to physical rules. Being a physicalist, according to John, means you think everything is governed by rules – physical rules – and that there is an ideal language that can be used to describe all of this.

Note John is also a deontologist.  Perhaps there should exist an ideal language that can fully describe ethics – does this mean that ideally there is no need for utilitarianism?  I’ll leave that question for another post.

Interview with John Wilkins on Materialism & Physicalism.

Here are some blog posts about physicalism by John Wilkins:

Is physicalism an impoverished metaphysics?

Every so often, we read about some philosopher or other form of public intellectual who makes the claim that a physicalist ontology – a world view in which only things that can be described in terms of physics are said to exist – is impoverished. That is, there are things whereof science cannot know, &c. A recent example is that made by Thomas Nagel [nicely eviscerated here by the physicist Sean Carroll], whose fame in philosophy rests with an influential 1974 paper arguing that there is something it is like to be a bat that no amount of physics, physiology or other objective science could account for.

Recently, Nagel has argued that the evolutionary view called (historically misleadingly) neo-Darwinism is “almost certainly” false. One of his reasons is that “materialism” (which Nagel should know is an antiquated world view, replaced by the physicalism defined above; there are many non-material things in physics, not least fields of various kinds) does not permit a full account of consciousness – the subjective facts of being a particular individual organism. Another is that the chance that life would emerge from a lifeless universe is staggeringly unlikely. How this is calculated is somewhat mysterious, given that at best we only have (dare I say it?) subjective estimates anyway, but there it is.

But Nagel is not alone. Various nonreligious (apparently) thinkers have made similar assertions, although some, like Frank Jackson, who proposed the Knowledge Argument, have since backed down. What is it that physicalism must account for that these disputants and objectors say it cannot?

It almost entirely consists of consciousness, intentions, intelligence or some similar mental property which is entirely inexplicable by “reductionist” physicalism. [Reductionism is a term of abuse that means – so far as I can tell – solely that the person who makes such an accusation does not like the thing or persons being accused.] And that raises our question: is physicalism lacking something?

I bet you are dying to know more… you’ll just have to follow the link…
See more at Evolving Thoughts>>

Is Physicalism Coherent?

In my last post I argued that physicalism cannot be rejected simply because people assert there are nonphysical objects which are beyond specification. Some are, however, specifiable, and one commentator has identified the obvious ones: abstract objects like the rules of chess or numbers. I have dealt with these before in my “Pizza reductionism” post, which I invite you to go read.

Done? OK, then; let us proceed.

It is often asserted that there are obviously things that are not physical, such as ideas, numbers, concepts, etc., quite apart from qualia. I once sat with a distinguished philosopher, whom I respect greatly and so shall not name, when he asserted that we can construct natural classifications because we can deal first with the natural numbers. I asked him “In what sense are numbers natural objects?” – meaning, why should we think numbers are entities in the natural world. He admitted that the question had not occurred to him (I doubt that – he is rather smart), but said that it was simply an axiom of his philosophy. I do not think such abstract objects are natural.

This applies to anything that is “informational”, including all semantic entities like meanings, symbols, lexical objects, and so on. They only “exist” as functional modalities in our thoughts and language. I have also argued this before: information does not “exist”; it is a function of how we process signals. Mathematics is not a domain, it is a language, and the reason it works is because the bits that seriously do not work are not explored far[*] – not all of it has to work in a physical or natural sense, but much of it has to, or else it becomes a simple game that we would not play so much.

So the question of the incoherence of physicalism is based on the assumption (which runs contrary to physicalism, and is thus question begging) that abstract objects are natural things. I don’t believe they are, and I certainly do not think that a thought, or concept, for example, which can be had by many minds and is therefore supposed to be located in none of them (and thus transcendental), really is nonphysical. That is another case of nouning language. The thought “that is red” exists, for a physicalist, in all the heads that meet the functional social criteria for ascriptions of red. It exists nowhere else – it just is all those cognitive and social behaviours in biological heads…

Yes, I know, it’s a real page turner…
See more at Evolving Thoughts>>

In philosophy, physicalism is the ontological thesis that “everything is physical”, that there is “nothing over and above” the physical, or that everything supervenes on the physical. Physicalism is a form of ontological monism—a “one substance” view of the nature of reality as opposed to a “two-substance” (dualism) or “many-substance” (pluralism) view. Both the definition of physical and the meaning of physicalism have been debated. Physicalism is closely related to materialism. Physicalism grew out of materialism with the success of the physical sciences in explaining observed phenomena. The terms are often used interchangeably, although they are sometimes distinguished, for example on the basis of physics describing more than just matter (including energy and physical law). Common arguments against physicalism include both the philosophical zombie argument and the multiple observers argument, that the existence of a physical being may imply zero or more distinct conscious entities.

“When I lost my belief in religion I had to decide what I needed to accept as a bare minimum. I decided that I needed to believe in the physical world. I never found the slightest reason to accept the existence of anything else. To this day I am a physicalist only because I never found the need to be anything else. The principle of parsimony suggests that one should not believe in more than one needs to. Even if it does make you feel comfortable.”


Let’s get physicalism!

See John Wilkins’ blog ‘Evolving Thoughts’

#philsci #philosophy #science #physics

Science, Mindfulness & the Urgency of Reducing Suffering – Christof Koch

In this interview with Christof Koch, he shares some deeply felt ideas about the urgency of reducing suffering (with some caveats) and his experience with mindfulness – explaining what it was like to visit the Dalai Lama for a week – as well as a heartfelt account of his family dog ‘Nosey’ dying in his arms, and how that moved him to become a vegetarian. He also discusses the bias of human exceptionalism, the horrors of factory farming of non-human animals, as well as a consequentialist view on animal testing.
Christof Koch is an American neuroscientist best known for his work on the neural bases of consciousness.

Christof Koch is the President and Chief Scientific Officer of the Allen Institute for Brain Science in Seattle. From 1986 until 2013, he was a professor at the California Institute of Technology.

Michio Kaku on the Holy Grail of Nanotechnology

Michio Kaku on Nanotechnology – Michio is the author of many best sellers, most recently The Future of the Mind!

The Holy Grail of Nanotechnology

Merging with machines is on the horizon and Nanotechnology will be key to achieving this. The ‘Holy Grail of Nanotechnology’ is the replicator: A microscopic robot that rearranges molecules into desired structures. At the moment, molecular assemblers exist in nature in us, as cells and ribosomes.

Sticky Fingers problem

How might nanorobots/replicators look and behave?
Because of the ‘Sticky Fingers / Fat Fingers problem’, in the short term we won’t have nanobots with agile grippers or blowtorches (like what we might see in a sci-fi movie).

The 4th Wave of High Technology

Humanity has seen an acceleration of technological progress through history, from the steam engine and the industrial revolution to the electrical age, the space program and high technology – what is the 4th wave that will dominate the rest of the 21st century?
Nanotechnology (molecular physics), biotechnology, and artificial intelligence (reducing the circuitry of the brain down to neurons) – “these three molecular technologies will propel us into the future”!


Michio Kaku – Bio

Michio Kaku (born January 24, 1947) is an American theoretical physicist, the Henry Semat Professor of Theoretical Physics at the City College of New York, a futurist, and a communicator and popularizer of science. He has written several books about physics and related topics, has made frequent appearances on radio, television, and film, and writes extensive online blogs and articles. He has written three New York Times Best Sellers: Physics of the Impossible (2008), Physics of the Future (2011), and The Future of the Mind (2014).

Kaku is the author of various popular science books:
– Beyond Einstein: The Cosmic Quest for the Theory of the Universe (with Jennifer Thompson) (1987)
– Hyperspace: A Scientific Odyssey through Parallel Universes, Time Warps, and the Tenth Dimension (1994)
– Visions: How Science Will Revolutionize the 21st Century[12] (1998)
– Einstein’s Cosmos: How Albert Einstein’s Vision Transformed Our Understanding of Space and Time (2004)
– Parallel Worlds: A Journey through Creation, Higher Dimensions, and the Future of the Cosmos (2004)
– Physics of the Impossible: A Scientific Exploration into the World of Phasers, Force Fields, Teleportation, and Time Travel (2008)
– Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 (2011)
– The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind (2014)

Subscribe to the YouTube Channel: Science, Technology & the Future

Aubrey de Grey – Towards the Future of Regenerative Medicine

Why is aging research important? Biological aging causes suffering; however, in recent times there has been surprising progress in stem cell research and in regenerative medicine that will likely disrupt the way we think about aging and, in the longer term, substantially mitigate some of the suffering involved in growing old.
Aubrey de Grey is the Chief Science Officer of the SENS Foundation – an organisation focused on going beyond ageing and leading the journey towards the future of regenerative medicine!
What will it take to get there?

You might wonder: why pursue regenerative medicine?
Historically, doctors have been racing against time to find cures for specific illnesses, winning temporary victories by tackling diseases one by one – solve one disease and another urgency beckons. Once your body becomes frail, if you survive one major illness you may not be so lucky with the next one – the older you get, the less capable your body becomes of staving off new illnesses – you can imagine a long line of other ailments fading beyond view into the distance, and eventually one of them will do you in. If we are to achieve radical healthy longevity, we need to strike at the fundamental technical problems of why we get frail and more disease-prone as we get older. Every technical problem has a technical solution – regenerative medicine is a class of solutions that seek to keep turning the ‘biological clock’ back, rather than achieving short-term palliatives.

The damage repair methodology has gained in popularity over the last two decades, though it’s still not popular enough to attract huge amounts of funding – what might tip the scales of advocacy in damage-repair’s favor?
A clear existence proof such as achieving…

Robust Mouse Rejuvenation

In this interview, Aubrey de Grey reveals the most optimism I have heard him express about the near-term achievement of Robust Mouse Rejuvenation.  Previously it has been 10 years away subject to adequate funding (which was not realised) – now Aubrey predicts it might happen within only 5-6 years (subject to funding, of course).  So, what is Robust Mouse Rejuvenation – and why should we care?

For those who have seen Aubrey speak on this, he used to say RMR within 10 years (subject to funding)

Specifically, the goal of RMR is this: Make normal, healthy two-year-old mice (expected to live one more year) live three further years.

  • What’s the ideal type of mouse to test on and why?  The ideal mouse to trial on is one that doesn’t naturally have a certain kind of congenital disease (such mice might on average only live 1.5 or 2 years) – because increasing their lifespan might only be a sign that you have solved their particular congenital disease.  The ideal type of mouse is one which lives to 3 years on average and could die of various causes.
  • How many extra years is significant? Consistently increasing mouse lifespan for an extra two years on top of their normal three year lifespans – essentially tripling their remaining lifespan.
  • When, or at what stage of the mice’s life, to begin the treatment? Don’t start treating the mice until they are already 2 years old – at a time when they would normally be two-thirds of the way through their life (at or past middle age) with one more year to live.

Why not start treating the mice earlier?  The goal is to produce sufficiently dramatic results in a laboratory to convince the mainstream gerontology community, such that they would willingly and publicly endorse the idea that it is not impossible – indeed, that it is only a matter of time – before rejuvenation therapy works in humans; that is, to get out there on talk shows and in front of cameras and say all this.

Arguably, the mainstream gerontology community is generally a bit conservative – they have vested interests: they need to publish papers, win grants, navigate peer review; they want tenure and have reputations to uphold.  Gerontologists hold the keys to public trust – they are considered to be the authorities on aging.
When gerontologists are convinced and let the world know about it, a lot of other people in the scientific community and in the general community will also then become convinced.  Once that happens, here’s what’s likely to happen next – longevity through rejuvenation medicine will become a big issue, there will be domino effects – there will be a war on aging, experts will appear on Oprah Winfrey, politicians will have to include the war on aging in their political manifesto if they want to get elected.

Yoda - the oldest mouse ever to have lived?
Yoda, a cute dwarf mouse, was named the oldest mouse in 2004 at age 4; he lived with the much larger Princess Leia in ‘a pathogen-free rest home for geriatric mice’ belonging to Dr. Richard Miller, professor of pathology in the Geriatrics Center of the Medical School. “Yoda is only the second mouse I know to have made it to his fourth birthday without the rigors of a severe calorie-restricted diet,” Miller says. “He’s the oldest mouse we’ve seen in 14 years of research on aged mice at U-M. The previous record-holder in our colony died nine days short of his 4th birthday; 100-year-old people are much more common than 4-year-old mice.” (ref)

What about Auto-Immune Diseases?

Auto-immune diseases (considered incurable by some) get worse with aging for the same reason we lose the general ability to fight off infections and attack cancer: the immune system loses its precision. The immune system has two arms, the innate and the adaptive. The adaptive side works by having polyclonality – a very wide diversity of cells with different rearrangements of parts of the genome that confer specificity of each immune cell to a particular target (which it may or may not encounter at some time in the future). This polyclonality diminishes over life, such that the cells targeted towards a given problem are on average less precisely adapted to it – so the immune system takes longer to do its job, or doesn’t do it effectively. With autoimmune disease, the immune system loses its ability to distinguish between things that are foreign and things that are part of the body. So this could be powerfully addressed by the same measures taken to rejuvenate the immune system generally – regenerating the thymus and eliminating senescent cells that are accumulating in the blood.

Big Bottlenecks

See Aubrey discuss this at timepoint: 38:50
Bottlenecks: which bottlenecks does Aubrey believe need the most attention from the community of people who already believe aging is a problem that needs to be solved?

  1. The first thing: Funding. The shortage of funding is still the biggest bottleneck.
  2. The second thing: The need for policy makers to get on board with the ideas and understand what is coming – so it’s not only about developing the therapies as quickly as possible; it’s also important that, once they are developed, the therapies get disseminated as quickly as possible to avoid complete chaos.

It’s very urgent to have proper discussions about this.  Anticipating the anticipation – getting ready for the public anticipating these therapies, instead of thinking that it’s all science fiction and is never going to happen.


Effective Advocacy

See Aubrey discuss this at timepoint: 42:47
Advocacy: it’s a big ask to get people from extreme opposition to supporting regenerative medicine. Nudging people a bit sideways is a lot easier – that is, getting them from complete opposition to less opposition, or getting people who are undecided to be in favor of it.

Here are 2 of the main aspects of advocacy:

  1. feasibility / importance – emphasize progress and embrace by the scientific community (see the paper ‘The Hallmarks of Aging’ – the single most highly cited paper on the biology of aging this decade), defining the legitimacy of the damage-repair approach – it’s not just a crazy hare-brained idea …
  2. desirability – address concerns (bad arguments on overpopulation – “oh don’t worry, we will emigrate into space” – the people who are concerned about this problem aren’t the ones who would like to go to space) and focus more on the things that generalize to desirable outcomes. Regenerative medicine will have side effects, like a longer lifespan, but people will also be more healthy at any given age compared to what they would be if they hadn’t had regenerative therapy – nobody wants Alzheimer’s or heart disease; if the outcome of regenerative medicine is avoiding those, then it’s easier to sell.

We need a sense of proportion on possible future problems – will they generally be more serious than they are today?
Talking about uploading, substrate independence, etc., one is actively alienating the public – it’s better to create a foundation of credibility in the conversation before you decide to persuade anyone of anything.  If we are going to get from here to the long-term future we need advocacy now – the short term matters as well.


Other Stuff

This interview covers a fair bit of ground, so here are some other points covered:

– Updates & progress at SENS
– Highlights of promising progress in regenerative medicine in general
– Recent funding successes, what can be achieved with this?
– Discussion on getting the message across
– desirability & feasibility of rejuvenation therapy
– What could be the future of regenerative medicine?
– Given progress so far, what can people alive today look forward to?
– Multi-factorial diseases – fixing amyloid plaque buildup alone won’t cure Alzheimer’s: getting rid of amyloid plaque alone only produced mild cognitive benefits in Alzheimer’s patients. There is still the unaddressed issue of tangles… If you only get rid of one component in a multi-component problem then you don’t get to see much improvement of pathology – in just the same way, one shouldn’t expect to see much of an overall increase in health & longevity if you only fix 5 of 7 things that need fixing (i.e. 5 of the 7 strands of SENS)
– moth-balling the anti-telomerase approach to fighting cancer in favor of cancer immunotherapy (for the time being), as its side effects need to be compensated against…
– Cancer immunotherapy – stimulating the body’s natural ability to attack cancer with its immune system. Two approaches: CAR-T (Chimeric Antigen Receptor T cells) and checkpoint-inhibiting drugs… then there is training the immune system to identify neoantigens (antigens that cancers produce)


Chief Science Officer, SENS Research Foundation, Mountain View, CA

AgeX Therapeutics

Dr. Aubrey de Grey is a biomedical gerontologist based in Mountain View, California, USA, and is the Chief Science Officer of SENS Research Foundation, a California-based 501(c)(3) biomedical research charity that performs and funds laboratory research dedicated to combating the aging process. He is also VP of New Technology Discovery at AgeX Therapeutics, a biotechnology startup developing new therapies in the field of biomedical gerontology. In addition, he is Editor-in-Chief of Rejuvenation Research, the world’s highest-impact peer-reviewed journal focused on intervention in aging. He received his BA in computer science and Ph.D. in biology from the University of Cambridge. His research interests encompass the characterisation of all the types of self-inflicted cellular and molecular damage that constitute mammalian aging and the design of interventions to repair and/or obviate that damage. Dr. de Grey is a Fellow of both the Gerontological Society of America and the American Aging Association, and sits on the editorial and scientific advisory boards of numerous journals and organisations. He is a highly sought-after speaker who gives 40-50 invited talks per year at scientific conferences, universities, companies in areas ranging from pharma to life insurance, and to the public.


Many thanks for reading/watching!

Consider supporting SciFuture by:

a) Subscribing to the SciFuture YouTube channel:…

b) Donating – Bitcoin: 1BxusYmpynJsH4i8681aBuw9ZTxbKoUi22 – Ethereum: 0xd46a6e88c4fe179d04464caf42626d0c9cab1c6b – Patreon:

c) Sharing the media SciFuture creates:

Kind regards, Adam Ford – Science, Technology & the Future

Surviving the Zombie Cell Apocalypse – Oisín Biotech’s Stephen Hilbert

Oisín Biotechnologies’ ground-breaking research and technology is demonstrating that the solution to mitigating the effects of age-related diseases is to address the damage created by the aging process itself. We have recently successfully launched our first subsidiary, Oisin Oncology, focusing on combating multiple cancers.

Interview with Stephen Hilbert

We cover the exciting scientific progress at Oisín: targeting senescent cells (dubbed ‘zombie cells’) to help them die properly, rejuvenation therapy vs traditional approaches to combating disease, Oisín’s potential for helping astronauts survive high levels of radiation in space, funding for the research and therapy/drug development, and specifically Stephen’s background in corporate development in helping raise capital for Oisín and its research.

Are we close to achieving Robust Mouse Rejuvenation?

According to Aubrey de Grey we are about 5-6 years away from robust mouse rejuvenation (RMR), subject to the kind of funding SENS has received this year and the previous year (2017-2018). There has been progress in developing certain therapies.

Specifically, the goal of RMR is this:

  • Make normal, healthy two-year old mice (expected to live one more year) live three further years.
    • The type of mice: The ideal mouse to trial on is one that doesn’t naturally have a certain kind of congenital disease (such mice might on average only live 1.5 or 2 years) – because increasing their lifespan might only be a sign that you have solved their particular congenital disease.
    • Number of extra years: Consistently increasing mouse lifespan for an extra two years on top of their normal three year lifespans – essentially tripling their remaining lifespan.
    • When to begin the treatment: Don’t start treating the mice until they are already 2 years old – at a time when they would normally be two-thirds of the way through their life (at or past middle age) with one more year to live.

Why not start treating the mice earlier?  The goal is to produce sufficiently dramatic results in a laboratory to convince the mainstream gerontology community, such that they would willingly and publicly endorse the idea that it is not impossible – indeed, that it is only a matter of time – before rejuvenation therapy works in humans; that is, to get out there on talk shows and in front of cameras and say all this.

The mainstream gerontology community is generally a bit conservative – they have vested interests: they need to publish papers, win grants, navigate peer review; they want tenure and have reputations to uphold.  Gerontologists hold the keys to public trust – they are considered to be the authorities on aging.


For the lowdown on progress towards Robust Mouse Rejuvenation see partway through this interview with Aubrey de Grey!

Preliminary results from study showing normalized mouse survival at 140 weeks

Stephen heads up corporate development for Oisín Biotechnologies. He has served as a business advisor to Oisín since its inception and has served on several biotechnology company advisory boards, specializing in business strategy and capital formation. Prior to Oisín, his career spanned over 15 years in the banking industry where he served as trusted advisor to accredited investors around the globe. Most recently he headed up a specialty alternative investment for a company in San Diego, focusing on tax and insurance strategies for family offices and investment advisors. Stephen is the founder of several ventures in the areas of real estate, small manufacturing of novelty gifts, and strategic consulting. He serves on the Overlake Hospital’s Pulse Board, assists with the Children’s Hospital Guild and is the incoming Chairman at the Columbia Tower Club, a members’ club in Seattle.
LinkedIn Profile

Head of Corporate Strategy/Development Pre-Clinical Oisin Biotechnologies and OncoSenX
FightAging - Oisin Biotechnologies Produces Impressive Mouse Life Span Data from an Ongoing Study of Senescent Cell Clearance
FightAging reported:
Oisin Biotechnologies is the company working on what is, to my eyes, the best of the best when it comes to the current crop of senolytic technologies, approaches capable of selectively destroying senescent cells in old tissues. Adding senescent cells to young mice has been shown to produce pathologies of aging, and removal of senescent cells can reverse those pathologies, and also extend life span. It is a very robust and reliable approach, with these observations repeated by numerous different groups using numerous different methodologies of senescent cell destruction. Most of the current senolytic development programs focus on small molecules, peptides, and the like. These are expensive to adjust, and will be tissue specific in ways that are probably challenging and expensive to alter, where such alteration is possible at all. In comparison, Oisin Biotechnologies builds their treatments atop a programmable suicide gene therapy; they can kill cells based on the presence of any arbitrary protein expressed within those cells. Right now the company is focused on p53 and p16, as these are noteworthy markers of cancerous and senescent cells. As further investigation of cellular senescence improves the understanding of senescent biochemistry, Oisin staff could quickly adapt their approach to target any other potential signal of senescence – or of any other type of cell that is best destroyed rather than left alone. Adaptability is a very valuable characteristic.

The Oisin Biotechnologies staff are currently more than six months into a long-term mouse life span study, using cohorts in which the gene therapy is deployed against either p16, p53, or both p16 and p53, plus a control group injected with phosphate buffered saline (PBS). The study commenced more than six months ago with mice that were at the time two years (104 weeks) old. When running a life span study, there is a lot to be said for starting with mice that are already old; it saves a lot of time and effort.
The mice were randomly put into one of the four treatment groups, and then dosed once a month. As it turns out, the mice in which both p16 and p53 expressing cells are destroyed are doing very well indeed so far, in comparison to their peers. This is quite impressive data, even given the fact that the trial is nowhere near done yet.
Considering investing/supporting this research?  Get in contact with Oisin here.

Moral Enhancement – Are we morally equipped to deal with humanity’s grand challenges? Anders Sandberg

The topic of Moral Enhancement is controversial (and often misrepresented); it is considered by many to be repugnant – provocative questions arise like “whose morals?”, “who are the ones to be morally enhanced?”, “will it be compulsory?”, “won’t taking a morality pill decrease the value of the intended morality if it skips the difficult process we normally go through to become better people?”, “shouldn’t people be concerned that use of enhancements which alter character traits might compromise a person’s authenticity?”

Humans have a built-in capacity for learning moral systems from their parents and other people. We are not born with any particular moral [code] – but with the ability to learn it, just like we learn languages. The problem is of course this built-in facility might have worked quite well back in the Stone Age when we were evolving in small tribal communities – but doesn’t work that well when surrounded with a high-tech civilization, millions of other people and technology that could be potentially very dangerous. So we might need to update our moral systems, and that is the interesting question of moral enhancement: can we make ourselves more fit for the current world?
— Anders Sandberg, ‘Are we morally equipped for the future?’
Humans have an evolved capacity to learn moral systems – we became more adept at learning moral systems that aided our survival in the ancestral environment – but are our moral instincts fit for the future?

Illustration by Daniel Gray

Let’s build some context. For millennia humans have lived in complex social structures constraining and encouraging certain types of behaviour. More recently for similar reasons people go through years of education at the end of which (for the most part) are more able to morally function in the modern world – though this world is very different from that of our ancestors, and when considering the possibilities for vastly radical change at breakneck speed in the future, it’s hard to know how humans will keep up both intellectually and ethically. This is important to consider as the degree to which we shape the future for the good depends both on how well and how ethically we solve the problems needed to achieve change that on balance (all things equal) benefits humanity (and arguably all morally relevant life-forms).

Can we engineer ourselves to be more ethically fit for the future?

Peter Singer discussed how our circles of care and compassion have expanded over the years – through reason we have been able to expand our natural propensity to act morally and the circumstances in which we act morally.

We may need to expand our circle of ethical consideration to include artificial life – considering certain types of software as moral patients.

So, if we think we could use a boost in our propensity for ethical progress,

How do we actually achieve ideal Moral Enhancement?

That’s a big topic (see a list of papers on the subject of ME here) – the answers may depend on our goals and preferences. One idea (among many others) is to regulate the level of oxytocin (the cuddle hormone) – though this may come with the drawback of increasing distrust of the out-group.
Since morality depends on us being able to make accurate predictions and solve complex ethical problems, ‘Intelligence Enhancement‘ could be an effective aspect of moral enhancement. 

Morality is dependent on us being able to predict what’s going to happen when we do something. So various forms of intelligence enhancement might be very useful also for becoming more moral. Our ability to control our reactions – which allows our higher-order values to control our lower-order values – is also important; that might actually require us to literally rewire, or have biochips that help us do it.
— Anders Sandberg, ‘Are we morally equipped for the future?’

How we decide whether to use Moral Enhancement Therapy will be interesting – it may be needed to help solve global coordination problems; to increase the likelihood that we will, as a civilization, cooperate and cope with many known and as yet to be realised complex ethical quandaries as we move through times of unprecedented social and technological change.

This interview is part of a larger series that was completed in Oxford, UK late 2012.

Interview Transcript

Anders Sandberg

So humans have a kind of built-in capacity for learning moral systems from their parents and other people; we’re not born with any particular moral [code], but with the ability to learn it just like we can learn languages. The problem is of course this built-in facility might have worked quite well back in the Stone Age when we were evolving in small tribal communities – but doesn’t work that well when surrounded with a high-tech civilization, millions of other people and technology that could be potentially very dangerous. So we might need to update our moral systems. And that is the interesting question of moral enhancement:

  • can we make ourselves more fit for the current world?
  • And what kind of fitness should we be talking about?

For example we might want to improve on altruism – that we should be kinder to strangers. But in a big society, in a big town – of course there are going to be some strangers that you shouldn’t trust. So it’s not just blind trust you want to enhance – you actually want to enhance the ability to make careful judgements; to figure out what’s going to happen and whom you can trust. So maybe you want some other aspect – maybe the care – the circle of care – is what you want to expand.

Peter Singer pointed out that our circles of care and compassion have been slowly expanding – from our own tribe and our own gender, to other genders, to other people, and eventually maybe to other species. But this is still biologically based; a lot of it is going on here in the brain and might be modified. Maybe we should artificially extend these circles of care to make sure that we actually do care about those entities we ought to be caring about. This might be a problem of course, because some of these agents might be extremely different from what we are used to.

For example machine intelligence might produce more machines, or software that is a ‘moral patient’ – we actually ought to be caring about the suffering of software. That might be very tricky because our pattern receptors up in the brain are not very tuned for that – we tend to think that if it’s got a face and it speaks then it’s human, and then we can care about it. But who thinks about Google? Maybe we could get super-intelligences that we actually ought to care a lot about, but we can’t recognize them at all because they’re so utterly different from ourselves.

So there are some easy ways of modifying how we think and react – for example by taking a drug. The hormone oxytocin is sometimes called ‘the cuddle hormone’ – it’s released when breastfeeding and when having bodily contact with your loved one, and it generally seems to make us more altruistic; more willing to trust strangers. You can kind of sniff it and run an economic game and you can immediately see a change in response. It might also make you a bit more ego-centric. It does enlarge feelings of comfort and family friendliness – except that it’s only within what you consider to be your family. So we might want to tweak that.

Similarly we might think about adding links to our brains that allow us to think in better ways. After all, morality is dependent on us being able to predict what’s going to happen when we do something. So various forms of intelligence enhancement might be very useful also for becoming more moral. Our ability to control our reactions, which allows our higher-order values to control our lower-order values, is also important; that might actually require us to literally rewire or have biochips that help us do it.

But most important is that we get the information we need to retrain the subtle networks in the brain in order to think better. And that’s going to require something akin to therapy – it might not necessarily be about lying on a sofa and telling your psychologist about your mother. It might very well be a bit of training, a bit of cognitive enhancement, maybe a bit of brain scanning – to figure out what actually ails you. It’s probably going to look very very different from anything Freud or anybody else envisioned for the future.

But I think in the future we’re actually going to try to modify ourselves so we’re going to be extra certain, maybe even extra moral, so we can function in a complex big world.


Related Papers

Neuroenhancement of Love and Marriage: The Chemicals Between Us

Anders contributed to this paper ‘Neuroenhancement of Love and Marriage: The Chemicals Between Us‘. This paper reviews the evolutionary history and biology of love and marriage. It examines the current and imminent possibilities of biological manipulation of lust, attraction and attachment, so called neuroenhancement of love. We examine the arguments for and against these biological interventions to influence love. We argue that biological interventions offer an important adjunct to psychosocial interventions, especially given the biological limitations inherent in human love.

Human Engineering and Climate Change

Anders also contributed to the paper “Human Engineering and Climate Change” which argues that cognitive, moral and biological enhancement could increase human ecological sustainability.

Many thanks for watching!

Consider supporting me by:
a) Subscribing to my YouTube channel:
b) Donating via Patreon: and/or
c) Sharing the media I create

Kind regards,
Adam Ford
– Science, Technology & the Future:

The Great Filter, a possible explanation for the Fermi Paradox – interview with Robin Hanson

I grew up wondering about the nature of alien life: what it might look like, what aliens might do, and whether we will discover any of them soon. Yet aside from a number of conspiracy theories, and conjecture on Tabby’s Star, so far we have not discovered any signs of life out there in the cosmos. Why is that so?
Given the Drake equation (which attempts to quantify the likelihood and detectability of extraterrestrial civilizations), it seems as though the universe should be teeming with life.  So where are all those alien civilizations?

The ‘L’ in the Drake equation (length of time civilizations emit detectable signs out into space) for a technologically advanced civilization could be a very long time – why haven’t we detected any?
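To see why ‘L’ matters so much, the Drake equation is just a product of seven factors, and the expected number of detectable civilizations N scales linearly with L. A minimal sketch in Python – the parameter values below are purely hypothetical placeholders, not estimates:

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
#   R_star: rate of star formation; f_p: fraction of stars with planets;
#   n_e: habitable planets per system; f_l, f_i, f_c: fractions developing
#   life, intelligence, and detectable technology; L: years detectable.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of currently detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# With the same (invented) toy values for the other factors,
# N is driven entirely by how long a civilization stays detectable:
N_short = drake(R_star=1.0, f_p=0.5, n_e=2, f_l=0.5, f_i=0.1, f_c=0.1, L=1_000)
N_long = drake(R_star=1.0, f_p=0.5, n_e=2, f_l=0.5, f_i=0.1, f_c=0.1, L=1_000_000)
```

A thousandfold increase in L gives a thousandfold increase in N, which is why a long-lived technological civilization makes the silence so puzzling.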

There are many alternative explanations for why we have not yet detected evidence of an advanced alien civilization, such as:
– The Rare Earth hypothesis – astrophysicist Michael H. Hart argues for a very narrow habitable zone based on climate studies.
– John Smart’s STEM theory
– Some form of transcendence

The universe is a pretty big place. If it’s just us, seems like an awful waste of space.
— Carl Sagan, ‘Contact’


Our observable universe being seemingly dead implies that expansionist civilizations are extremely rare; a vast majority of stuff that starts on the path of life never makes it, therefore there must be at least one ‘great filter’ that stops the majority of life from evolving towards an expansionist civilization.

Peering into the history of biological evolution on earth, we have seen various convergences in evolution – these ‘good tricks’ include things like the transition from single-cellular to multi-cellular life (at least 14 times), eyes, wings etc. If we can see convergences both in evolution and in the types of tools various human colonies created after being geographically dispersed, we may be able to deduce something about the directions complex life could take – especially life that becomes technologically adept – which could inform us about our future.

The ‘Great Filter’ – should we worry?

The theory is, given estimates (including the likes of the Drake equation), it’s not unreasonable to argue that there should have been more than enough time and space for cosmic expansionist civilizations (Kardashev type I, II, III and beyond) to arise that are at least a billion years old – and that at least one of their light cones should have intersected with ours.  Somehow, they have been filtered out.  Somehow, planets with life on them make it some distance towards spacefaring expansionist civilizations, but get stopped along the way. While we don’t specifically know what that great filter is, there have been many theories – though if the filter is real, it seems that it has been very effective.

The argument in Robin’s piece ‘The Great Filter – Are We Almost Past It?’ is somewhat complex, here are some points I found interesting:

  • Life Will Colonize – taking hints from evolution and the behavior of our human ancestors, it is feasible that our descendants will colonize the cosmos.
    • Looking at earth’s ecosystem, we see that life has consistently evolved to fill almost every ecological niche in the seas, on land and below. Humans, as a single species, have migrated from the African savannah to colonize most of the planet, filling new geographic and economic niches as the requisite technological reach is achieved to take advantage of reproductively useful resources.
    • We should expect humanity to expand to other parts of the solar system, then out into the galaxy in so far as there exists motivation and freedom to do so.  Even if most of society become wireheads or VR addicted ‘navel gazers’, they will want more and more resources to fuel more and more powerful computers, and may also want to distribute civilization to avoid local disasters.
    • This indicates that alien life will attempt to do the same, and eventually, absent great filters, expand their civilization through the cosmos.
  • The Data Point – future technological advances will likely enable civilization to expand ‘explosively’ fast (relative to cosmological timescales) throughout the cosmos – however we as yet have no evidence of this happening, and if there were available evidence, we would likely have detected it by now – much of the argument for the great filter follows from this.
    • within at most the next million years (absent filters) it is foreseeable that our civilization may reach an “explosive point”, rapidly expanding outwards to utilize more and more available mass and energy resources.
    • Civilization will ‘scatter & adapt’ to expand well beyond the reach of any one large catastrophe (e.g. a supernova) to avoid total annihilation.
    • Civilization will recognisably disturb the places it colonizes, adapting the environment into ideal structures (e.g. creating orbiting solar collectors, Dyson spheres or Matrioshka brains, thereby substantially changing the star’s spectral output and appearance; really advanced civs may even attempt wholesale reconstruction of galaxies).
    • But we haven’t detected an alien takeover on our planet, or seen anything in the sky to reflect expansionalist civs – even if earth or our solar system was kept in a ‘nature preserve’ (look up the “Zoo Hypothesis”) we should be able to see evidence in the sky of aggressive colonization of other star systems.  Despite great success stories in explaining how natural phenomenon in the cosmos works (mostly “dead” physical processes), we see no convincing evidence of alien life.
  • The Great Filter – ‘The Great Silence’ implies that at least one of the 9 steps to achieving an advanced expansionist civilization (outlined below) is very improbable; somewhere between dead matter and explosive growth lies The Great Filter.
    1. The right star system (including organics)
    2. Reproductive something (e.g. RNA)
    3. Simple (prokaryotic) single-cell life
    4. Complex (archaeal & eukaryotic) single-cell life
    5. Sexual reproduction
    6. Multi-cell life
    7. Tool-using animals with big brains
    8. Where we are now
    9. Colonization explosion
  • Someone’s Story is Wrong / It Matters Who’s Wrong – the great silence, as mentioned above, seems to indicate that one or more of the plausible-sounding stories we have about the transitions through each of the 9 steps above are less probable than they look, or just plain wrong. To the extent that the evolutionary steps to achieve our civilization were easy, our future success in achieving a technologically advanced / superintelligent / explosively expansionist civilization is highly improbable.  Realising this may help inform how we strategize about our future.
    • Some scientists think that the transition from prokaryotic (single-celled) life to archaeal or eukaryotic life is rare – though it seems it has happened at least 42 times
    • Even if most of society wants to stagnate or slow down to stable speeds of expansion, it’s not infeasible that some part of our civ will escape and rapidly expand
    • Optimism about our future opposes optimism about the ease with which life can evolve to where we are now.
    • Being aware of the Great Filter may at least help us improve our chances
  • Reconsidering Biology – Several potentially hard trial-and-error steps between dead matter and modern humans (life, complexity, sex, society, cradle and language, etc.) – the harder they were, the more likely they can account for the great silence
  • Reconsidering AstroPhysics – physical phenomena which might reduce the likelihood we would see evidence of an expansionist civ
    • Fast space travel may be more difficult even for superintelligence; the lower the maximum speed, the more it could account for the great silence.
    • The universe could be smaller than we think, containing fewer stars
    • There could be natural ‘baby universes’ which erupt with huge amounts of matter/energy which keep expansionist civs occupied, or effectively trapped
    • Harvesting energy on a large scale may be impossible, or the way in which it is done always preserves natural spectra
    • Advanced life may consistently colonize dark matter
  • Rethinking Social Theories – in order for advanced civs to be achieved, they must first lose ‘predispositions to territoriality and aggression’, making them ‘less likely to engage in galactic imperialism’

We can’t detect expansionist civs, yet our default assumption is that there has been plenty of time and hospitable space for advanced enough life to arise – especially if you agree with panspermia, the idea that life could be seeded by precursors on roaming cosmic bodies (e.g. comets), resulting in more life-bearing planets.  We can posit plausible reasons for a series of filters which slow down or halt the evolutionary progress that would otherwise finally arrive at technologically savvy life capable of expansionist civs – but why all of them?

It seems that we, as a technologically capable species, are on the verge of having our civilization escape earth’s gravity well and go spacefaring – so how far along the great filter are we?

Though it’s been thought to be less accurate than some of its predecessors, and more of a rallying point – let us revisit the Drake Equation anyway, because it’s a good tool for helping to understand the apparent contradiction between high probability estimates for the existence of extraterrestrial civilizations and the complete lack of evidence that such civilizations exist.

The number of active, communicative extraterrestrial civilizations in the Milky Way galaxy, N, is assumed to be equal to the mathematical product of:

  1. R, the average rate of star formation in our galaxy,
  2. fp, the fraction of formed stars that have planets,
  3. ne, for stars that have planets, the average number of planets that can potentially support life,
  4. fl, the fraction of those planets that actually develop life,
  5. fi, the fraction of planets bearing life on which intelligent, civilized life has developed,
  6. fc, the fraction of these civilizations that have developed communications, i.e., technologies that release detectable signs into space, and
  7. L, the length of time over which such civilizations release detectable signals.
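
The product above can be sketched in a few lines of Python. All parameter values below are hypothetical placeholders, chosen only to illustrate the arithmetic – they are not established estimates:

```python
# Minimal sketch of the Drake equation: N = R * fp * ne * fl * fi * fc * L.

def drake_equation(R, fp, ne, fl, fi, fc, L):
    """Return N, the estimated number of communicative civilizations."""
    return R * fp * ne * fl * fi * fc * L

# Example with deliberately rough, assumed inputs:
N = drake_equation(
    R=1.5,     # new stars formed per year in the Milky Way (assumed)
    fp=0.9,    # fraction of stars with planets (assumed)
    ne=0.5,    # potentially habitable planets per planetary system (assumed)
    fl=0.1,    # fraction of those planets that develop life (assumed)
    fi=0.01,   # fraction of life-bearing planets that develop intelligence (assumed)
    fc=0.1,    # fraction of those that become detectable communicators (assumed)
    L=10_000,  # years a civilization remains detectable (assumed)
)
print(N)
```

Because N is a simple product, an order-of-magnitude change in any single term – e.g. ‘L’ – shifts the whole estimate by the same factor, which is why the discussion below focuses on which terms carry the filter.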


Which of the values on the right-hand side of the equation (1 to 7 above) are the biggest reasons (or most significant filters) for the ‘N’ value (the estimated number of alien civilizations in our galaxy capable of communication) being so small?  If a substantial amount of the great filter is explained by ‘L’, then we are in trouble, because the length of time expansionist civs emit signals likely correlates with how long they survive before disappearing (which we can assume likely means going extinct, though there are other possible explanations for going silent).  If other civs don’t seem to last long, then we can infer statistically that our civ might not either.  The larger the remaining filter ahead of us, the more cautious and careful we ought to be to avoid potential show-stoppers.

So let’s hope that the great filter is behind us, or a substantial proportion is – meaning that the seemingly rare occurrence of expansionist civs is likely because the emergence of intelligent life is rare, rather than it being because the time expansionist civs exist is short.

The more we develop our theories about the potential behaviours of expansionist civs, the more we may expand upon or adapt the ‘L’ term of the Drake equation.

Many of the parameters in the Drake Equation are really hard to quantify – exoplanet data from the Kepler Telescope has already been used to adapt the Drake equation – based on this data there seem to be far more potentially habitable Earth-like planets within our galaxy, which both excites me, because news about alien life is exciting, and frustrates me, because it decreases the odds that the larger portion of the great filter is behind us.

“Only by doing the best we can with the very best that an era offers, do we find the way to do better in the future.”
– Frank Drake, ‘A Reminiscence of Project Ozma’, Cosmic Search Vol. 1, No. 1, January 1979


“…we should remember that the Great Filter is so very large that it is not enough to just find some improbable steps; they must be improbable enough. Even if life only evolves once per galaxy, that still leaves the problem of explaining the rest of the filter: why haven’t we seen an explosion arriving here from any other galaxy in our past universe? And if we can’t find the Great Filter in our past, we’ll have to fear it in our future.”
– Robin Hanson, ‘The Great Filter – should we worry?’

As stated on the Overcoming Bias blog:

We have three main sources of info on existential risks (xrisks):

  1. Inside View Analysis – where we try to use our best theories to reason about particular causal processes.
  2. Earth Track Records – the empirical distribution of related events observed so far on Earth.
  3. The Great Filter – inferences from the fact that the universe looks dead everywhere but here.

These sources are roughly equally informative. #2 suggests xrisks are low, even if high enough to deserve much effort to prevent them. I’d say that most variations on #1 suggest the same. However, #3 suggests xrisks could be very high, which should encourage more xrisk-mitigation efforts.

Ironically most xrisk efforts (of which I’m aware) focus on AI-risk, which can’t explain the great filter. Most analysis efforts also focus on #1, less on #2, and almost none on #3.

“What’s the worst that could happen?” – in 1996 (revised in 1998) Robin Hanson wrote:

“Humanity seems to have a bright future, i.e., a non-trivial chance of expanding to fill the universe with lasting life. But the fact that space near us seems dead now tells us that any given piece of dead matter faces an astronomically low chance of begetting such a future. There thus exists a great filter between death and expanding lasting life, and humanity faces the ominous question: how far along this filter are we?”
– Robin Hanson, ‘The Great Filter – Are We Almost Past It?’
If the ‘Great Filter’ is ahead of us, we could fatalistically resign ourselves to the view that human priorities are too skewed to coordinate towards avoiding being ‘filtered’, or we can try to do something to decrease the odds of being filtered. To coordinate our way around a great filter, we need to have some idea of plausible filters.
How may a future great filter manifest?
– Reapers (mass effect)?
– Berserker probes sent out to destroy any up-and-coming civilization that reaches a certain point? (A malevolent alien teenager in their basement could have seeded self-replicating berserker probes as a ‘practical joke’)
– A robot takeover? (If this has been the cause of great filters in the past then why don’t we see evidence of expansionist robot civilizations? see here.  Also if the two major end states of life are either dead or genocidal intelligence explosion, and we aren’t the first, then it is speculated that we should live in a young universe.)

Robin Hanson gave a TedX talk on the Great Filter:


Robin Hanson is an associate professor of economics at George Mason University, a research associate at the Future of Humanity Institute of Oxford University, and chief scientist at Consensus Point. After receiving his Ph.D. in social science from the California Institute of Technology in 1997, Robin was a Robert Wood Johnson Foundation health policy scholar at the University of California at Berkeley. In 1984, Robin received a masters in physics and a masters in the philosophy of science from the University of Chicago, and afterward spent nine years researching artificial intelligence, Bayesian statistics, and hypertext publishing at Lockheed, NASA, and independently.

Robin has over 70 publications, and has pioneered prediction markets since 1988, being a principal architect of the first internal corporate markets, at Xanadu in 1990, of the first web markets, the Foresight Exchange since 1994, of DARPA’s Policy Analysis Market, from 2001 to 2003, and of DAGGRE/SciCast, since 2010.


Robin Hanson’s 1998 revision on the paper he wrote on the Great Filter in 1996
– The Drake Equation at connormorency (where I got the Drake equation image – thanks)
Slate Star Codex – Don’t Fear the Filter
Ask Ethan: How Fast Could Life Have Arisen In The Universe?
Keith Wiley – The Fermi Paradox, Self-Replicating Probes, Interstellar Transport Bandwidth

The Amazing James Randi – Skepticism & the Singularity!

Magician James Randi (known as ‘The Amazing Randi’) has spent the bulk of his career debunking the claims of self-proclaimed psychics and paranormalists. Randi has an international reputation as a magician and escape artist, but he is perhaps best known as the world’s most tireless investigator and de-mystifier of paranormal and pseudoscientific claims.

The Amazing Randi has pursued ‘psychic’ spoon benders, exposed the dirty tricks of faith healers, investigated homeopathic water ‘with a memory,’ and generally been a thorn in the sides of those who try to pull the wool over the public’s eyes in the name of the supernatural. Randi is also starring in his own biographical documentary ‘An Honest Liar,’ which will be screened alongside his fireside chat across four Australian cities.

He has received numerous awards and recognitions, including a MacArthur Foundation Prize Fellowship (also known as the ‘MacArthur ‘Genius’ Grant’) in 1986. He’s the author of numerous books, including Flim-Flam!: Psychics, ESP, Unicorns, and Other Delusions (1982), The Truth About Uri Geller (1982), The Faith Healers (1987), and An Encyclopedia of Claims, Frauds, and Hoaxes of the Occult and Supernatural (1995).

In 1996, the James Randi Education Foundation was established to further Randi’s work. Randi’s long-standing challenge to psychics now stands as a $1,000,000 prize administered by the Foundation. It remains unclaimed.

The Amazing Randi brought his unique superheroic brand of sceptic justice to Australia: