The Winding Road to Quantum Supremacy – Scott Aaronson

Interview on quantum computation with Scott Aaronson, theoretical computer scientist and David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin. His primary areas of research are quantum computing and computational complexity theory.
Scott blogged about this and other segments of our interview – his blog is very popular and has way more comments than this one does – check it out.

Check out interview segment “The Ghost in the Quantum Turing Machine” – covering whether a machine can be conscious, whether information is physical and integrated information theory.


Scott Aaronson: Okay so – Hi, I’m Scott Aaronson. I’m a computer science professor at the University of Texas at Austin and my main interest is the capabilities and limits of quantum computers, and more broadly what computer science and physics have to tell each other. And I got interested in it I guess because it was hard not to be – because as a teenager it just seemed clear to me that the universe is a giant video game and it just obeys certain rules, and so if I really wanted to understand the universe maybe I could ignore the details of physics and just think about computation.
But then with the birth of quantum computing and the dramatic discoveries of the mid-1990s (like Shor’s algorithm for factoring huge numbers) it became clear that physics actually changes the basic rules of computation – so that was something that I felt I had to understand. And 20 years later we’re still trying to understand it, and we may also be able to build some devices that can outperform classical computers – namely quantum computers – and use them to do some interesting things.
But to me that’s really just icing on the cake; really I just want to understand how things fit together. Well, to tell you the truth, when I first heard about quantum computing (I think from reading some popular article in the mid-90s about Shor’s algorithm, which had only recently been discovered) my first reaction was: this sounds like obvious hogwash; this sounds like some physicists who just do not understand the first thing about computation, inventing a proposal that just tries every possible solution in parallel. None of these things is going to scale, and in computer science there have been decades of experience with that – of people saying: well, why don’t you build a computer using a bunch of mirrors? or using soap bubbles? or using folding proteins?
There are all kinds of ideas that on paper look like they could evaluate an exponential number of solutions in only a linear amount of time, but they’re always idealizing something. When you examine them carefully enough you find that the amount of energy blows up on you exponentially, or the precision with which you would need to measure becomes exponentially fine, or something else becomes totally unrealistic – and I thought the same must be true of quantum computing. But in order to be sure I had to read something about it.
So while I was working over a summer at Bell Labs, doing work that had nothing to do with quantum computing, my boss was nice enough to let me spend some time learning about and reading up on the basics of quantum computing – and that was really a revelation for me, because I came to accept that quantum mechanics is the real thing. It is a thing of comparable enormity to the basic principles of computation – you could say the principles of Turing – and it is exactly the kind of thing that could modify some of those principles. But the biggest surprise of all, I think, was that despite not being a physicist – not having any skill with partial differential equations or the other tools of the physicist – I could actually understand something about quantum mechanics.
And ultimately, to learn the basic rules of how a quantum computer would work and to start thinking about what it would be good for – quantum algorithms and things like that – it’s enough to be conversant with vectors and matrices. So you need to know a little bit of math, but not that much: you need to know linear algebra, and that’s about it.
And I feel like this is a kind of secret that gets buried in almost all the popular articles; they make it sound like quantum mechanics is just this endless profusion of counterintuitive things. It’s: particles can be in two places at once, and a cat can be both dead and alive until you look at it – and then why is that not just a fancy way of saying the cat is either alive or dead and you don’t know which until you look? They never quite explain that part. And particles can have spooky action at a distance and affect each other instantaneously, and particles can tunnel through walls! It all sounds hopelessly obscure, and as if there’s no hope for anyone who’s not a PhD in physics to understand any of it.
But the truth of the matter is that there’s this one counterintuitive hump you have to get over, which is a certain change to – or generalization of – the rules of probability. Once you’ve gotten that, then all the other things are just different ways of talking about, or different manifestations of, that one change. And a quantum computer in particular is just a computer that tries to take advantage of this one change to the rules of probability that the physicists discovered in the 1920s was needed to account for our world. And so that was really a revelation for me – that even computer scientists and math people, people who are not physicists, can actually learn this and start contributing to it – yeah!
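To make Scott’s point concrete – that the basics here really are just linear algebra over a generalized kind of probability – here is a minimal sketch (my own illustration, in Python with NumPy, not anything from the interview) of a single qubit: a state is a complex vector, a gate is a unitary matrix, and measurement probabilities are squared amplitudes.

```python
import numpy as np

# A qubit state is a length-2 complex vector whose squared
# amplitudes sum to 1 (the quantum analogue of a probability
# distribution over {0, 1}).
ket0 = np.array([1, 0], dtype=complex)

# A gate is a unitary matrix; applying it is just matrix-vector
# multiplication.  The Hadamard gate puts |0> into superposition.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0               # equal superposition of 0 and 1
probs = np.abs(state) ** 2     # measurement probabilities
print(probs)                   # [0.5 0.5]

# Amplitudes, unlike probabilities, can be negative and cancel:
# a second Hadamard interferes the state back to |0> exactly.
print(np.abs(H @ state) ** 2)  # [1. 0.]
```

The second Hadamard shows the departure from ordinary probability: amplitudes can be negative and cancel, which is exactly the “one change to the rules of probability” Scott describes.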

Adam Ford: So it’s interesting that often when you try to pursue an idea, the practical gets in the way – we try to get to the ideal without actually considering the practical – and they feel like enemies. Should we be letting the ideal be the enemy of the practical?

Scott Aaronson: Well, I think that from the very beginning it was clear that there is a theoretical branch of quantum computing, where you just assume you have as many of these quantum bits (qubits) as you could possibly need – and they’re perfect; they stay perfectly isolated from their environment, and you can do whatever local operations on them you might like – and then you just study how many operations you would need to factor a number, or solve some other problem of practical importance. The theoretical branch is where I started out in this field, and where I’ve mostly been ever since.
And then there’s the practical branch, which asks: what will it take to actually build a device that instantiates this theory – where the qubits are actually the energy levels of an electron, or the spin states of an atomic nucleus, or are otherwise somehow instantiated in the physical world? They will be noisy, they will be interacting with their environment, and we will have to make heroic efforts to keep them sufficiently isolated from their environments – which is needed in order to maintain their superposition states. How do we do that?
Well, we’re going to need some kind of fancy error-correcting codes to do that, and there are theoretical questions there as well: how do you design those error-correcting codes?
But there are also practical questions: how do you engineer a system where the error rates are low enough that these codes can even be used at all – so that if you try to apply them you won’t simply be creating even more error than you’re fixing? What should be the physical basis for qubits? Should it be superconducting coils? Should it be trapped ions? Should it be photons? Should it be some new topological state of matter? Actually, all four of those proposals, and many others, are being pursued right now!
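A classical toy model (my own analogy, not a quantum code) illustrates the threshold issue Scott raises – error correction only pays off when the raw error rate is already low enough:

```python
# Classical 3-bit repetition code with majority-vote decoding.
# Each bit flips independently with probability p; decoding fails
# when 2 or 3 of the 3 copies flip.
def logical_error_rate(p):
    return 3 * p**2 * (1 - p) + p**3

# Below the threshold (p = 0.5 for this code) encoding reduces
# error; above it, encoding creates more error than it fixes.
for p in (0.01, 0.1, 0.6):
    pl = logical_error_rate(p)
    print(f"p={p}: logical={pl:.4f} ({'helps' if pl < p else 'hurts'})")
```

Real quantum fault tolerance uses far more elaborate codes, but the same logic applies: above a threshold physical error rate, encoding hurts rather than helps.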
So I would say that until fairly recently – five years ago or so – the theoretical and the practical branches of the field were pretty disjoint from each other. They were never enemies, so to speak; I mean, we might poke fun at each other sometimes, but we were never enemies. The field always sort of rose or fell as a whole, and we all knew that. But we just didn’t have a whole lot to say to each other scientifically, because the experimentalists were just trying to get one or two qubits to work well, and they couldn’t even do that much, while we theorists were thinking about: well, suppose you’ve got a billion qubits, or some arbitrary number – what could you do? And what would still be hard to do even then?
A lot of my work has actually been about the limitations of quantum computers – or, as I like to say, the study of what you can’t do even with computers that you don’t have. And only recently have the experimentalists finally gotten the qubits to work pretty well in isolation, so that now it finally makes sense to start to scale things up – not yet to a million qubits, but maybe to 50, maybe to 60, maybe to a hundred. This, as it happens, is what Google and IBM and Intel and a bunch of startup companies are trying to do right now. And some of them are hoping to have devices within the next year or two that might or might not do anything useful, but that, if all goes well, we hope will at least be able to do something interesting – in the sense of something that would be challenging for a classical computer to simulate, and that at least proves the point that we can do something this way that is beyond what classical computers can do.
And so as a result, the most nitty-gritty experimentalists are now actually talking to us theorists, because now they need to know – not just as a matter of intellectual curiosity, but as a fairly pressing practical matter – once we get 50 or 100 qubits working, what do we do with them? What do we do with them, first of all, that is hard to simulate classically? How sure are you that there’s no fast classical method to do the same thing? How do we verify that we’ve really done it? And is it useful for anything?
And ideally they would like us to come up with proposals that actually fit the constraints of the hardware that they’re building. Eventually, you could say, none of this should matter – eventually a quantum programmer should be able to pay as little attention to the hardware as a classical programmer today pays to the details of the transistors.
But in the near future, when we only have 50 or 100 qubits, you’re going to have to make the maximum use of each and every qubit that you’ve got, and the actual details of the hardware are going to matter – and the result is that even we theorists have had to learn about these details in a way that we didn’t before.
There’s been a sort of coming together of the theory and practical branches of the field just in the last few years that to me has been pretty exciting.

Adam Ford: So you think we will have something equivalent to functional programming for quantum computing in the near future?

Scott Aaronson: Well, there actually has been a fair amount of work on the design of quantum programming languages. There’s a bunch of them out there now that you can download and try out if you’d like: there’s one called Quipper, there’s another called Q# from Microsoft, and there are several others. Of course we don’t yet have very good hardware to run the programs on; mostly you can just run them in classical simulation, which naturally only works well up to about 30 or 40 qubits, and then it becomes too slow. But if you would like to get some experience with quantum programming you can try these things out today, and many of them do try to provide higher-level functionality, so that you’re not just doing the quantum analog of assembly-language programming, but can think in higher-level modules, or program functionally. I would say that in quantum algorithms we’ve mostly just been doing theory and we haven’t been implementing anything, but we have had to learn to think that way. If we had to think in terms of each individual qubit, each individual operation on one or two qubits – well, we would never get very far, right? And so we have to think in higher-level terms, in terms of certain modules that we know can be done. One of them is called the Quantum Fourier Transform, and that’s actually the heart of Shor’s famous algorithm for factoring numbers (it has other applications as well). Another is called Amplitude Amplification; that’s the heart of Grover’s famous algorithm for searching long lists of numbers in about the square root of the number of steps that you would need classically, and it’s also a quantum algorithm design primitive that we can just plug in as a black box – it has many applications.
So we do think in these higher-level terms, but there’s a different set of higher-level abstractions than there would be for classical computing – and so you have to learn those. But the basic idea of decomposing a complicated problem into subcomponents – that’s exactly the same in quantum computing as it is in classical computing.
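As a sketch of what one of these higher-level modules does, here is a toy state-vector simulation of Grover’s algorithm (my own illustration, not code from the interview), in which Amplitude Amplification boosts one marked item out of 2^8 = 256 in roughly √256 ≈ 13 iterations:

```python
import numpy as np

def grover(n_qubits, marked):
    """Toy state-vector simulation of Grover search over N = 2**n items."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))        # uniform superposition
    for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):
        state[marked] *= -1                   # oracle flips the marked amplitude
        state = 2 * state.mean() - state      # diffusion: inversion about the mean
    return np.abs(state) ** 2                 # measurement probabilities

# 256-item search: ~13 iterations instead of ~256 classical probes.
probs = grover(8, marked=42)
print(f"P(find item 42) = {probs[42]:.3f}")   # close to 1
```

A real quantum computer would implement the oracle and diffusion steps with quantum gates rather than by touching the whole vector; the point of the primitive is exactly that algorithm designers can treat it as a black box.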

Adam Ford: Are you optimistic with regards to quantum computing in the short to medium term?

Scott Aaronson: You’re asking what I’m optimistic about. I feel like the field has made amazing progress, both on the theory side and on the experimental side. We’re not there yet, but we know a lot more than we did a decade ago. Some of what were my favorite open problems as a theorist a decade ago have now been resolved – some of them within the last year, actually. And the hardware – the qubits – is not yet good enough to build a scalable quantum computer; in that sense the skeptics can legitimately say we’re not there yet. Well, no duh, we’re not. But if you look at the coherence times of the qubits, and at what you can do with them, and you compare that to where they were 10 or 20 years ago, there’s been orders-of-magnitude progress. So here’s the analogy that I like to make: Charles Babbage laid down the basic principles of classical computing in the 1820s, right? Not with as much mathematical rigor as Turing would bring later, but the basic ideas were there. He had what today we would call a design for a universal computer.
So now imagine someone then saying, ‘Well, so when is this Analytical Engine going to get built? Will it be in the 1830s, or will it take all the way until the 1840s?’ In this case it took more than a hundred years for a technology to be invented – namely the transistor – that really fully realized Babbage’s vision. The vacuum tube came along earlier, and you could say it partially realized that vision, but it was just not reliable enough to be scalable in the way that the transistor was. And optimistically, we’re now in the very, very early vacuum-tube era of quantum computing. We don’t yet have the quantum computing analog of the transistor, and people don’t even agree about which technology is the right one to scale up. Is it superconducting qubits? Is it trapped ions? Is it photonics? Is it topological matter? So all these different approaches are being pursued in parallel, and the partisans of each approach have what sound like compelling arguments as to why none of the other approaches could possibly scale. I hope that they’re not all correct. People have only just recently gotten to the stage where one or two qubits work well in isolation, and where it makes sense to try to scale up to 50 or 100 of them and see if you can get them working well together at that kind of scale.
And so I think the big thing to watch for in the next five to ten years is what’s been saddled with the somewhat unfortunate name of ‘Quantum Supremacy’ (a term coined before Trump, I hasten to say). This just refers to doing something with a quantum computer that’s not necessarily useful, but that at least is classically hard – something that, as I was saying earlier, proves the point that it would take a classical computer much longer to simulate. This is the thing that Google and some others are going to take their best shot at within the next couple of years. What puts it in the realm of possibility is that a mere 50 or 100 qubits, if they work well enough, should already be enough to get us this. In principle you may be able to do it without needing error correction. Once you need error correction, that enormously multiplies the resources: even the simplest instance of what’s called ‘fault-tolerant computing’ might take many thousands of physical qubits. And everyone agrees that ultimately, if you want to scale up to realize the true promise of quantum computing – or, let’s say, to threaten our existing methods of cryptography – then you’re going to need this fault tolerance. But that, I expect, we’re not going to see in the next five to ten years.
If we do see it, that will be a huge shock – as big a shock as it would have been to tell someone in 1939 that there was going to be a nuclear weapon in six years. In that case there was a world war that, you could say, accelerated the timeline from what it would otherwise have been; in this case I hope there won’t be a world war that accelerates this timeline. But my guess would be that if all goes well, then quantum supremacy might be achievable within the next decade, and I hope that after that we could start to see some initial applications of quantum computing – probably some very, very specialized ones; things that we can already get with a hundred or so non-error-corrected qubits. And by necessity these are going to be very special things – they might mostly be physics simulations, or simulations of some simple chemistry problems.
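A rough back-of-the-envelope calculation (mine, not Scott’s) shows why even 50-odd well-behaved qubits already strain brute-force classical simulation: the full state vector of n qubits has 2^n complex amplitudes.

```python
# Memory for the full state vector of n qubits: 2**n complex
# amplitudes at 16 bytes (two 64-bit floats) each.
for n in (30, 40, 50):
    gib = 2 ** n * 16 / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB")
```

At 30 qubits you need 16 GiB (a laptop), at 40 qubits about 16 TiB (a cluster), and at 50 qubits about 16 PiB – which is why the 30-to-40-qubit ceiling on classical simulation mentioned earlier appears.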
I actually have a proposed application for near-term quantum computers, which is to generate cryptographically secure random numbers – random numbers that you could prove to a skeptic really were generated randomly. It turns out that even a 50- or 60-qubit quantum computer should already be enough to give us that. But true scalable quantum computing – the kind that could threaten cryptography, and that could also speed up optimization problems and things like that – will probably require error correction. I could be pleasantly surprised, but I’m not optimistic about that part becoming real in the next five to ten years. Since everyone likes an optimist, though, I guess I’ll try to be optimistic that we will take big steps in that direction, and maybe even get there within my lifetime.

Also see this and this – segments of an interview with Mike Johnson conducted by Andrés Gómez Emilsson and me. This interview with Christof Koch will also likely be of interest.

Physicalism & Materialism – John Wilkins

Materialism was a pre-Socratic view that for something to be real it has to be matter – physical stuff made of atoms (which at the time were considered hard like billiard balls – fundamental parts of reality). The reason the term physicalism is used these days is that it can cover things that aren’t matter – like forces – or that aren’t observable matter – like dark matter – or energy, fields, spacetime, etc. Physicalism is the idea that all that exists can be described in the language of some ‘ideal’ physics. We may never know what this ideal physics is, though people think it is something close to our current physics (since we can make very accurate predictions with our current physics).

If magic, telepathy or angels were real, there would be a physics that could describe them – they’d have patterns and properties that would be describable and explainable.  A physicist would likely think that even the mind operates according to physical rules.  Being a physicalist according to John means you think everything is governed by rules, physical rules – and that there is an ideal language that can be used to describe all this.

Note John is also a deontologist.  Perhaps there should exist an ideal language that can fully describe ethics – does this mean that ideally there is no need for utilitarianism?  I’ll leave that question for another post.

Interview with John Wilkins on Materialism & Physicalism.

Here are some blog posts about physicalism by John Wilkins:

Is physicalism an impoverished metaphysics?

Every so often, we read about some philosopher or other public intellectual who claims that a physicalist ontology – a world view in which only things that can be described in terms of physics are said to exist – is impoverished. That is, there are things whereof science cannot know, &c. A recent example is that made by Thomas Nagel [nicely eviscerated here by the physicist Sean Carroll], whose fame in philosophy rests on an influential 1974 paper arguing that there is something it is like to be a bat that no amount of physics, physiology or other objective science could account for.

Recently, Nagel has argued that the evolutionary view called (historically misleadingly) neo-Darwinism is “almost certainly” false. One of the reasons is that “materialism” (which Nagel should know is an antiquated world view, replaced by the physicalism defined above; there are many non-material things in physics, not least fields of various kinds) does not permit a full account of consciousness: the subjective facts of being a particular individual organism. Another is that the chance that life would emerge from a lifeless universe is staggeringly unlikely. How this is calculated is somewhat mysterious, given that at best we only have (dare I say it?) subjective estimates anyway, but there it is.

But Nagel is not alone. Various nonreligious (apparently) thinkers have made similar assertions, although some, like Frank Jackson, who proposed the Knowledge Argument, have since backed down. What is it that physicalism must account for that these disputants and objectors say it cannot?

It almost entirely consists of consciousness, intentions, intelligence or some similar mental property which is entirely inexplicable by “reductionist” physicalism. [Reductionism is a term of abuse that means – so far as I can tell – solely that the person who makes such an accusation does not like the thing or persons being accused.] And that raises our question: is physicalism lacking something?

I bet you are dying to know more… you’ll just have to follow the link…
See more at Evolving Thoughts>>

Is Physicalism Coherent?

In my last post I argued that physicalism cannot be rejected simply because people assert there are nonphysical objects which are beyond specification. Some are, however, specifiable, and one commentator has identified the obvious ones: abstract objects like the rules of chess or numbers. I have dealt with these before in my “Pizza reductionism” post, which I invite you to go read.

Done? OK, then; let us proceed.

It is often asserted that there are obviously things that are not physical – ideas, numbers, concepts, etc. – quite apart from qualia. I once sat with a distinguished philosopher, whom I respect greatly and so shall not name, when he asserted that we can construct natural classifications because we can deal first with the natural numbers. I asked him “In what sense are numbers natural objects?”, meaning: why should we think numbers are entities in the natural world? He admitted that the question had not occurred to him (I doubt that – he is rather smart), but said that it was simply an axiom of his philosophy. I do not think such abstract objects are natural.

This applies to anything that is “informational”, including all semantic entities like meanings, symbols, lexical objects, and so on. They only “exist” as functional modalities in our thoughts and language. I have also argued this before: information does not “exist”; it is a function of how we process signals. Mathematics is not a domain, it is a language, and the reason it works is that the bits that seriously do not work are not explored far[*] – not all of it has to work in a physical or natural sense, but much of it has to, or else it becomes a simple game that we would not play so much.

So the question of the incoherence of physicalism is based on the assumption (which runs contrary to physicalism, and is thus question begging) that abstract objects are natural things. I don’t believe they are, and I certainly do not think that a thought, or concept, for example, which can be had by many minds and is therefore supposed to be located in none of them (and thus transcendental), really is nonphysical. That is another case of nouning language. The thought “that is red” exists, for a physicalist, in all the heads that meet the functional social criteria for ascriptions of red. It exists nowhere else – it just is all those cognitive and social behaviours in biological heads…

Yes, I know, it’s a real page turner…
See more at Evolving Thoughts>>

In philosophy, physicalism is the ontological thesis that “everything is physical”, that there is “nothing over and above” the physical, or that everything supervenes on the physical. Physicalism is a form of ontological monism—a “one substance” view of the nature of reality as opposed to a “two-substance” (dualism) or “many-substance” (pluralism) view. Both the definition of physical and the meaning of physicalism have been debated. Physicalism is closely related to materialism. Physicalism grew out of materialism with the success of the physical sciences in explaining observed phenomena. The terms are often used interchangeably, although they are sometimes distinguished, for example on the basis of physics describing more than just matter (including energy and physical law). Common arguments against physicalism include both the philosophical zombie argument and the multiple observers argument, that the existence of a physical being may imply zero or more distinct conscious entities.

“When I lost my belief in religion I had to decide what I needed to accept as a bare minimum. I decided that I needed to believe in the physical world. I never found the slightest reason to accept the existence of anything else. To this day I am a physicalist only because I never found the need to be anything else. The principle of parsimony suggests that one should not believe in more than one needs to. Even if it does make you feel comfortable.”


Let’s get physicalism!

See John Wilkins’ blog ‘Evolving Thoughts’

#philsci #philosophy #science #physics

Ethics, Qualia Research & AI Safety with Mike Johnson

What’s the relationship between valence research and AI ethics?

Hedonic valence is a measure of the quality of our felt sense of experience – the intrinsic goodness (positive valence) or averseness (negative valence) of an event, object, or situation. It is an important aspect of conscious experience, always present in our waking lives. If we seek to understand ourselves, it makes sense to seek to understand how valence works – how to measure it and test for it.

Also, might there be a relationship to the AI safety/friendliness problem?
In this interview, we cover a lot of things, not least… THE SINGULARITY (of course) & the importance of Valence Research to AI Friendliness Research (as detailed here). Will thinking machines require experience with valence to understand its importance?

Here we cover some general questions about Mike Johnson’s views on recent advances in science and technology & what he sees as being the most impactful, what world views are ready to be retired, his views on XRisk and on AI Safety – especially related to value theory.

This is one part of an interview series with Mike Johnson (another section covers Consciousness, Qualia, Valence & Intelligence).


Adam Ford: Welcome Mike Johnson, many thanks for doing this interview. Can we start with your background?

Mike Johnson

Mike Johnson: My formal background is in epistemology and philosophy of science: what do we know & how do we know it, what separates good theories from bad ones, and so on. Prior to researching qualia, I did work in information security, algorithmic trading, and human augmentation research.


Adam: What is the most exciting / interesting recent (scientific/engineering) news? Why is it important to you?

Mike: CRISPR is definitely up there! In a few short years precision genetic engineering has gone from a pipe dream to reality. The problem is that we’re like the proverbial dog that caught up to the car it was chasing: what do we do now? Increasingly, we can change our genome, but we have no idea how we should change our genome, and the public discussion about this seems very muddled. The same could be said about breakthroughs in AI.


Adam: What are the most important discoveries/inventions over the last 500 years?

Mike: Tough question. Darwin’s theory of Natural Selection, Newton’s theory of gravity, Faraday & Maxwell’s theory of electricity, and the many discoveries of modern physics would all make the cut. Perhaps also the germ theory of disease. In general what makes discoveries & inventions important is when they lead to a productive new way of looking at the world.


Adam: What philosophical/scientific ideas are ready to be retired? What theories of valence are ready to be relegated to the dustbin of history? (Why are they still in currency? Why are they in need of being thrown away or revised?)

Mike: I think that 99% of the time when someone uses the term “pleasure neurochemicals” or “hedonic brain regions” it obscures more than it explains. We know that opioids & activity in the nucleus accumbens are correlated with pleasure– but we don’t know why, we don’t know the causal mechanism. So it can be useful shorthand to call these things “pleasure neurochemicals” and whatnot, but every single time anyone does that, there should be a footnote that we fundamentally don’t know the causal story here, and this abstraction may ‘leak’ in unexpected ways.


Adam: What have you changed your mind about?

Mike: Whether pushing toward the Singularity is unequivocally a good idea. I read Kurzweil’s The Singularity is Near back in 2005 and loved it – it made me realize that all my life I’d been a transhumanist and didn’t know it. But twelve years later, I’m a lot less optimistic about Kurzweil’s rosy vision. Value is fragile, and there are a lot more ways that things could go wrong than ways things could go well.


Adam: I remember reading Eliezer’s writings on ‘The Fragility of Value’; it’s quite interesting and worth consideration – the idea that if we don’t get an AI’s value system exactly right, then it would be like pulling a random mind out of mindspace – most likely inimical to human interests. The writing did seem quite abstract, though, and it would be nice to see a formal model or something concrete to show this would be the case. I’d really like to know how and why value is as fragile as Eliezer makes out. Is there any convincing, crisply defined model supporting this thesis?

Mike: Whether the ‘Complexity of Value Thesis’ is correct is super important. Essentially, the idea is that we can think of what humans find valuable as a tiny location in a very large, very high-dimensional space – let’s say 1000 dimensions for the sake of argument. Under this framework, value is very fragile; if we move a little bit in any one of these 1000 dimensions, we leave this special zone and get a future that doesn’t match our preferences, desires, and goals. In a word, we get something worthless (to us). This is perhaps most succinctly put by Eliezer in “Value is fragile”:

“If you loose the grip of human morals and metamorals – the result is not mysterious and alien and beautiful by the standards of human value. It is moral noise, a universe tiled with paperclips. To change away from human morals in the direction of improvement rather than entropy, requires a criterion of improvement; and that criterion would be physically represented in our brains, and our brains alone. … You want a wonderful and mysterious universe? That’s your value. … Valuable things appear because a goal system that values them takes action to create them. … if our values that prefer it are physically obliterated – or even disturbed in the wrong dimension. Then there is nothing left in the universe that works to make the universe valuable.”

If this frame is right, then it’s going to be really, really, really hard to get AGI right, because one wrong step in programming will make the AGI depart from human values, and “there will be nothing left to want to bring it back.” Eliezer, and I think most of the AI safety community, assume this.
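The high-dimensional framing above can be made concrete with a toy model. This is a deliberately stark sketch, not anything from Eliezer’s essay: the 1000 dimensions, the tolerance, and the all-or-nothing value function are all illustrative assumptions. The point it illustrates is that under such a model, a random mind is almost surely worthless, and even a near-copy that drifts too far in a single dimension scores zero.

```python
import random

DIMS = 1000        # dimensionality of "value space" (illustrative assumption)
TOLERANCE = 0.1    # how far we can drift in any one dimension (assumption)

# Hypothetical target: human values as a point in [0, 1]^1000.
target = [random.random() for _ in range(DIMS)]

def value(point, target, tol=TOLERANCE):
    """All-or-nothing value: worthless if ANY dimension drifts beyond tol."""
    return 1.0 if all(abs(p - t) <= tol for p, t in zip(point, target)) else 0.0

# A random mind drawn from mindspace almost surely misses on some dimension
# (per-dimension hit probability is at most 0.2, raised to the 1000th power).
random_mind = [random.random() for _ in range(DIMS)]
print(value(random_mind, target))   # almost certainly 0.0

# A near-copy of the target that is wrong in just ONE dimension is also
# worthless under this (deliberately unforgiving) model.
near_copy = list(target)
near_copy[0] += 2 * TOLERANCE
print(value(near_copy, target))     # 0.0
```

Whether real human value behaves like this all-or-nothing function, or degrades gracefully, is exactly the open question being debated here.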

But – and I want to shout this from the rooftops – the complexity of value thesis is just a thesis! Nobody knows if it’s true. An alternative here would be, instead of trying to look at value in terms of goals and preferences, we look at it in terms of properties of phenomenological experience. This leads to what I call the Unity of Value Thesis, where all the different manifestations of valuable things end up as special cases of a more general, unifying principle (emotional valence). What we know from neuroscience seems to support this: Berridge and Kringelbach write about how “The available evidence suggests that brain mechanisms involved in fundamental pleasures (food and sexual pleasures) overlap with those for higher-order pleasures (for example, monetary, artistic, musical, altruistic, and transcendent pleasures).” My colleague Andrés Gómez Emilsson writes about this in The Tyranny of the Intentional Object. Anyway, if this is right, then the AI safety community could approach the Value Problem and Value Loading Problem much differently.


Adam: I’m also interested in the nature of possible attractors that agents might ‘extropically’ gravitate towards (like a thirst for useful and interesting novelty, generative and non-regressive, that might not neatly fit categorically under ‘happiness’) – I’m not wholly convinced that they exist, but if one leans away from moral relativism, it makes sense that a superintelligence may be able to discover or extrapolate facts from all physical systems in the universe, not just humans, to determine valuable futures and avoid malignant failure modes (Coherent Extrapolated Value if you will). Being strongly locked into optimizing human values may be a non-malignant failure mode.

Mike: What you write reminds me of Schmidhuber’s notion of a ‘compression drive’: we’re drawn to interesting things because getting exposed to them helps build our ‘compression library’ and lets us predict the world better. But this feels like an instrumental goal, sort of a “Basic AI Drives” sort of thing. I would definitely agree that there’s a danger of getting locked into a good-yet-not-great local optimum if we hard-optimize on current human values.

Probably the danger is larger than that too – as Eric Schwitzgebel notes,

“Common sense is incoherent in matters of metaphysics. There’s no way to develop an ambitious, broad-ranging, self-consistent metaphysical system without doing serious violence to common sense somewhere. It’s just impossible. Since common sense is an inconsistent system, you can’t respect it all. Every metaphysician will have to violate it somewhere.”

If we lock in human values based on common sense, we’re basically committing to following an inconsistent formal system. I don’t think most people realize how badly that will fail.


Adam: What invention or idea will change everything?

Mike: A device that allows people to explore the space of all possible qualia in a systematic way. Right now, we do a lot of weird things to experience interesting qualia: we drink fermented liquids, smoke various plant extracts, strap ourselves into rollercoasters, and parachute out of planes, to give just a few examples. But these are very haphazard ways to experience new qualia! When we’re able to ‘domesticate’ and ‘technologize’ qualia, like we’ve done with electricity, we’ll be living in a new (and, I think, incredibly exciting) world.


Adam: What are you most concerned about? What ought we be worrying about?

Mike: I’m worried that society’s ability to coordinate on hard things seems to be breaking down, and about AI safety. Similarly, I’m also worried about what Eliezer Yudkowsky calls ‘Moore’s Law of Mad Science’, that steady technological progress means that ‘every eighteen months the minimum IQ necessary to destroy the world drops by one point’. But I think some very smart people are worrying about these things, and are trying to address them.

In contrast, almost no one is worrying that we don’t have good theories of qualia & valence. And I think we really, really ought to, because they’re upstream of a lot of important things, and right now they’re “unknown unknowns” – we don’t know what we don’t know about them.

One failure case that I worry about is that we could trade away what makes life worth living in return for some minor competitive advantage. As Bostrom notes in Superintelligence,

“When it becomes possible to build architectures that could not be implemented well on biological neural networks, new design space opens up; and the global optima in this extended space need not resemble familiar types of mentality. Human-like cognitive organizations would then lack a niche in a competitive post-transition economy or ecosystem. We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today – a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children.”


Now, if we don’t know how qualia works, I think this is the default case. Our future could easily be a technological wonderland, but with very little subjective experience. “A Disneyland with no children,” as Bostrom quips.



Adam: How would you describe your ethical views? What are your thoughts on the relative importance of happiness vs. suffering? Do things besides valence have intrinsic moral importance?

Mike: Good question. First, I’d just like to comment that Principia Qualia is a descriptive document; it doesn’t make any normative claims.

I think the core question in ethics is whether there are elegant ethical principles to be discovered, or not. Whether we can find some sort of simple description or efficient compression scheme for ethics, or if ethics is irreducibly complex & inconsistent.

The most efficient compression scheme I can find for ethics, that seems to explain very much with very little, and besides that seems intuitively plausible, is the following:

  1. Strictly speaking, conscious experience is necessary for intrinsic moral significance. I.e., I care about what happens to dogs, because I think they’re conscious; I don’t care about what happens to paperclips, because I don’t think they are.
  2. Some conscious experiences do feel better than others, and all else being equal, pleasant experiences have more value than unpleasant experiences.

Beyond this, though, I think things get very speculative. Is valence the only thing that has intrinsic moral importance? I don’t know. On one hand, this sounds like a bad moral theory, one which is low-status, has lots of failure-modes, and doesn’t match all our intuitions. On the other hand, all other systematic approaches seem even worse. And if we can explain the value of most things in terms of valence, then Occam’s Razor suggests that we should put extra effort into explaining everything in those terms, since it’d be a lot more elegant. So– I don’t know that valence is the arbiter of all value, and I think we should be actively looking for other options, but I am open to it. That said I strongly believe that we should avoid premature optimization, and we should prioritize figuring out the details of consciousness & valence (i.e. we should prioritize research over advocacy).

Re: the relative importance of happiness vs suffering, it’s hard to say much at this point, but I’d expect that if we can move valence research to a more formal basis, there will be an implicit answer to this embedded in the mathematics.

Perhaps the clearest and most important ethical view I have is that ethics must ultimately “compile” to physics. What we value and what we disvalue must ultimately cash out in terms of particle arrangements & dynamics, because these are the only things we can actually change. And so if people are doing ethics without caring about making their theories cash out in physical terms, they’re not actually doing ethics- they’re doing art, or social signaling, or something which can serve as the inspiration for a future ethics.


The analogy I’d offer here is that we can think about our universe as a computer, and ethics as choosing a program to run on this computer. Unfortunately, most ethicists aren’t writing machine-code, or even thinking about things in ways that could be easily translated to machine-code. Instead, they’re writing poetry about the sorts of programs that might be nice to run. But you can’t compile poetry to machine-code! So I hope the field of ethics becomes more physics-savvy and quantitative (although I’m not optimistic this will happen quickly).

Eliezer Yudkowsky refers to something similar with his notions of “AI grade philosophy”, “compilable philosophy”, and “computable ethics”, though I don’t think he quite goes far enough (i.e., all the way to physics).


Adam: What excites you? What do you think we have reason to be optimistic about?

Mike: The potential of qualia research to actually make peoples’ lives better in concrete, meaningful ways. Medicine’s approaches to pain management and the treatment of affective disorders are stuck in the dark ages because we don’t know what pain is. We don’t know why some mental states hurt. If we can figure that out, we can almost immediately help a lot of people, and probably unlock a surprising amount of human potential as well. What does the world look like with sane, scientific, effective treatments for pain & depression & akrasia? I think it’ll look amazing.


Adam: If you were to take a stab at forecasting the Intelligence Explosion – in what timeframe do you think it might happen (confidence intervals allowed)?

Mike: I don’t see any intractable technical hurdles to an Intelligence Explosion: the general attitude in AI circles seems to be that progress is actually happening a lot more quickly than expected, and that getting to human-level AGI is less a matter of finding some fundamental breakthrough, and more a matter of refining and connecting all the stuff we already know how to do.

The real unknown, I think, is the socio-political side of things. AI research depends on a stable, prosperous society able to support it and willing to ‘roll the dice’ on a good outcome, and peering into the future, I’m not sure we can take this as a given. My predictions for an Intelligence Explosion:

  • Between ~2035-2045 if we just extrapolate research trends within the current system;
  • Between ~2080-2100 if major socio-political disruptions happen but we stabilize without too much collateral damage (e.g., non-nuclear war, drawn-out social conflict);
  • If it doesn’t happen by 2100, it probably implies a fundamental shift in our ability or desire to create an Intelligence Explosion, and so it might take hundreds of years (or never happen).


If a tree falls in the forest and no one is around to hear it, does it make a sound? It would be unfortunate if a whole lot of awesome stuff were to happen with no one around to experience it.

Also see the 2nd and 3rd parts of this interview series (conducted by Andrés Gómez Emilsson); this interview with Christof Koch will also likely be of interest.


Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is Co-founder of the Qualia Research Institute. Much of Mike’s research and writings can be found at the Open Theory website.
‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary.
If you like Mike’s work, consider helping fund it at Patreon.

Science, Mindfulness & the Urgency of Reducing Suffering – Christof Koch

In this interview with Christof Koch, he shares some deeply felt ideas about the urgency of reducing suffering (with some caveats), his experience with mindfulness – explaining what it was like to visit the Dalai Lama for a week – as well as a heartfelt account of his family dog ‘Nosey’ dying in his arms, and how that moved him to become a vegetarian. He also discusses the bias of human exceptionalism, the horrors of factory farming of non-human animals, as well as a consequentialist view on animal testing.
Christof Koch is an American neuroscientist best known for his work on the neural bases of consciousness.

Christof Koch is the President and Chief Scientific Officer of the Allen Institute for Brain Science in Seattle. From 1986 until 2013, he was a professor at the California Institute of Technology.

The End of Aging

Aging is a technical problem with a technical solution – finding the solution requires clear thinking and focused effort. Once solving aging becomes demonstrably feasible, attitudes will likely shift regarding its desirability. There is huge potential, for individuals and for society, in reducing suffering through the use of rejuvenation therapy to achieve new heights of physical well-being. I also discuss the looming economic implications of large percentages of illness among aging populations – and put forward that focusing on solving the fundamental problems of aging will reduce the incidence of debilitating age-related diseases, which will in turn reduce the economic burden of illness. This mini-documentary discusses the implications of actually solving aging, as well as some misconceptions about aging.

‘The End of Aging’ won first prize in the international Longevity Film Competition [1] in 2018.

The above video is the latest version with a few updates & kinks ironed out.

‘The End of Aging’ was Adam Ford’s submission for the Longevity Film Competition – all the contestants did a great job. Big thanks to the organisers of the competition – it inspires people to produce videos that help spread awareness and understanding of the importance of ending aging.

It’s important to see that health in old age is desirable at population levels. Rejuvenation medicine – repairing the body’s ability to cope with stressors (or the practical reversal of the aging process) – will end up being cheaper than traditional medicine based on the general indefinite postponement of ill-health, especially in the long run as rejuvenation therapy becomes more efficient.

According to the World Health Organisation:

  1. Between 2015 and 2050, the proportion of the world’s population over 60 years will nearly double from 12% to 22%.
  2. By 2020, the number of people aged 60 years and older will outnumber children younger than 5 years.
  3. In 2050, 80% of older people will be living in low- and middle-income countries.
  4. The pace of population ageing is much faster than in the past.
  5. All countries face major challenges to ensure that their health and social systems are ready to make the most of this demographic shift.


Happy Longevity Day 2018! 😀

[1] The Longevity Film Competition is an initiative by the Healthy Life Extension Society, the SENS Research Foundation, and the International Longevity Alliance. The promoters of the competition invited filmmakers everywhere to produce short films advocating for healthy life extension, with a focus on dispelling four usual misconceptions and concerns around the concept of life extension: the false dichotomy between aging and age-related diseases, the Tithonus error, the appeal to nature fallacy, and the fear of inequality of access to rejuvenation biotechnologies.

Aubrey de Grey – Towards the Future of Regenerative Medicine

Why is aging research important? Biological aging causes suffering; however, in recent times there has been surprising progress in stem cell research and in regenerative medicine that will likely disrupt the way we think about aging and, in the longer term, substantially mitigate some of the suffering involved in growing old.
Aubrey de Grey is the Chief Science Officer of the SENS Foundation – an organisation focused on going beyond ageing and leading the journey towards the future of regenerative medicine!
What will it take to get there?

You might wonder: why pursue regenerative medicine?
Historically, doctors have been racing against time to find cures for specific illnesses, winning temporary victories by tackling diseases one by one – solve one disease and another urgency beckons. Once your body becomes frail, if you survive one major illness you may not be so lucky with the next; the older you get, the less capable your body becomes of staving off new illnesses – you can imagine a long line of other ailments fading beyond view into the distance, and eventually one of them will do you in. If we are to achieve radical healthy longevity, we need to strike at the fundamental technical problems of why we get frail and more disease-prone as we get older. Every technical problem has a technical solution – regenerative medicine is a class of solutions that seeks to keep turning the ‘biological clock’ back rather than achieve short-term palliatives.

The damage repair methodology has gained in popularity over the last two decades, though it’s still not popular enough to attract huge amounts of funding – what might tip the scales of advocacy in damage-repair’s favor?
A clear existence proof such as achieving…

Robust Mouse Rejuvenation

In this interview, Aubrey de Grey expresses the most optimism I have heard from him about the near-term achievement of Robust Mouse Rejuvenation. Previously it was 10 years away subject to adequate funding (which was not realised) – now Aubrey predicts it might happen within only 5-6 years (subject to funding, of course). So, what is Robust Mouse Rejuvenation – and why should we care?


Specifically, the goal of RMR is this: make normal, healthy two-year-old mice (expected to live one more year) live three further years.

  • What’s the ideal type of mouse to test on, and why? The ideal mouse to trial on is one that doesn’t naturally have some congenital disease (such mice might on average only live 1.5 or 2 years) – because increasing their lifespan might only be a sign that you have solved their particular congenital disease. The ideal type of mouse is one which lives to 3 years on average and could die of various things.
  • How many extra years is significant? Consistently increasing mouse lifespan by an extra two years on top of their normal three-year lifespan – essentially tripling their remaining lifespan.
  • When, or at what stage of the mice’s life, should treatment begin? Don’t start treating the mice until they are already 2 years old – at a time when they would normally be two-thirds of the way through their life (at or past middle age), with one more year to live.
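The arithmetic behind these targets can be sketched in a few lines (the numbers are exactly those stated above; nothing else is assumed):

```python
# Robust Mouse Rejuvenation (RMR) targets, as described above.
normal_lifespan = 3.0    # years: typical lifespan of the chosen mouse strain
treatment_age = 2.0      # years: treatment begins two-thirds of the way through life
untreated_remaining = normal_lifespan - treatment_age  # 1 year left untreated
target_remaining = 3.0   # years the treated mice should go on to live

print(target_remaining / untreated_remaining)  # 3.0 -> remaining lifespan tripled
print(treatment_age + target_remaining)        # 5.0 -> total lifespan of treated mice
```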

Why not start treating the mice earlier? The goal is to produce sufficiently dramatic results in a laboratory to convince the mainstream gerontology community, such that they would willingly and publicly endorse the idea that it is not impossible – indeed, that it is only a matter of time before rejuvenation therapy works in humans – that is, to get out there on talk shows and in front of cameras and say all this.

Arguably, the mainstream gerontology community is generally a bit conservative – they have vested interests in publishing papers successfully and winning grants, they have worries around peer review, they want tenure, and they have reputations to uphold. Gerontologists hold the keys to public trust – they are considered to be the authorities on aging.
When gerontologists are convinced and let the world know about it, a lot of other people in the scientific community and in the general community will also become convinced. Once that happens, here’s what’s likely to happen next: longevity through rejuvenation medicine will become a big issue, and there will be domino effects – there will be a war on aging, experts will appear on Oprah Winfrey, and politicians will have to include the war on aging in their political manifestos if they want to get elected.

Yoda - the oldest mouse ever to have lived?
Yoda, a cute dwarf mouse, was named the oldest mouse in 2004, at age 4; he lived with the much larger Princess Leia in ‘a pathogen-free rest home for geriatric mice’ belonging to Dr. Richard Miller, professor of pathology in the Geriatrics Center of the Medical School. “Yoda is only the second mouse I know to have made it to his fourth birthday without the rigors of a severe calorie-restricted diet,” Miller says. “He’s the oldest mouse we’ve seen in 14 years of research on aged mice at U-M. The previous record-holder in our colony died nine days short of his 4th birthday; 100-year-old people are much more common than 4-year-old mice.” (ref)

What about Auto-Immune Diseases?

Auto-immune diseases (considered by some to be incurable) get worse with aging for the same reason we lose our general ability to fight off infections and attack cancer. Essentially, the immune system loses its precision. It has two arms: the innate system and the adaptive system. The adaptive side works by having polyclonality – a very wide diversity of cells with different rearrangements of parts of the genome that confer specificity of each immune cell to a particular target (which it may or may not encounter at some time in the future). This polyclonality diminishes over life, such that the cells targeted at a given problem are on average less precisely adapted to it – so the immune system takes longer to do its job, or doesn’t do it effectively. With autoimmune disease, the immune system loses its ability to distinguish between things that are foreign and things that are part of the body. So this could be powerfully addressed by the same measures taken to rejuvenate the immune system generally – regenerating the thymus and eliminating senescent cells that are accumulating in the blood.

Big Bottlenecks

See Aubrey discuss this at timepoint: 38:50
Bottlenecks: which bottlenecks does Aubrey believe need the most attention from the community of people who already believe aging is a problem that needs to be solved?

  1. The first thing: Funding. The shortage of funding is still the biggest bottleneck.
  2. The second thing: the need for policy makers to get on board with the ideas and understand what is coming – so it’s not only about developing the therapies as quickly as possible; it’s also important that once they are developed, the therapies get disseminated as quickly as possible to avoid complete chaos.

It’s very urgent to have proper discussions about this. Anticipating the anticipation – getting ready for the public to anticipate these therapies, instead of thinking it’s all science fiction that is never going to happen.


Effective Advocacy

See Aubrey discuss this at timepoint: 42:47
Advocacy: it’s a big ask to get people from extreme opposition to supporting regenerative medicine. Nudging people a bit sideways is a lot easier – that is, getting them from complete opposition to less opposition, or getting people who are undecided to be in favor of it.

Here are 2 of the main aspects of advocacy:

  1. Feasibility / importance – emphasize progress and embrace by the scientific community (see the ‘Hallmarks of Aging’ paper – the single most highly cited paper on the biology of aging this decade), establishing the legitimacy of the damage repair approach – it’s not just a crazy harebrained idea…
  2. Desirability – address concerns and bad arguments (e.g., on overpopulation: ‘oh, don’t worry, we will emigrate into space’ – but the people who are concerned about this problem aren’t the ones who would like to go to space). Focus more on the things that generalize to desirable outcomes: regenerative medicine will have side effects, like a longer lifespan, but people will also be more healthy at any given age than they would be if they hadn’t had regenerative therapy. Nobody wants Alzheimer’s or heart disease – if the outcome of regenerative medicine is avoiding those, then it’s easier to sell.

We need a sense of proportion on possible future problems – will they generally be more serious than they are today?
Talking about uploading, substrate independence, etc one is actively alienating the public – it’s better to create a foundation of credibility in the conversation before you decide to persuade anyone of anything.  If we are going to get from here to the long term future we need advocacy now – the short term matters as well.


Other Stuff

This interview covers a fair bit of ground, so here are some other points covered:

– Updates & progress at SENS
– Highlights of promising progress in regenerative medicine in general
– Recent funding successes, what can be achieved with this?
– Discussion on getting the message across
– desirability & feasibility of rejuvenation therapy
– What could be the future of regenerative medicine?
– Given progress so far, what can people alive today look forward to?
– Multi-factorial diseases – fixing amyloid plaque buildup alone won’t cure Alzheimer’s: getting rid of amyloid plaque alone has only produced mild cognitive benefits in Alzheimer’s patients. There is still the unaddressed issue of tangles… If you only get rid of one component of a multi-component problem then you don’t get to see much improvement in pathology – in just the same way one shouldn’t expect to see much of an overall increase in health & longevity if you only fix 5 of the 7 things that need fixing (i.e. 5 of the 7 strands of SENS)
– moth-balling the anti-telomerase approach to fighting cancer in favor of cancer immunotherapy (for the time being), as its side effects need to be compensated for…
– Cancer immunotherapy – stimulating the body’s natural ability to attack cancer with its immune system. Two approaches: CAR-T (Chimeric Antigen Receptor T cells) and checkpoint-inhibiting drugs; then there is training the immune system to identify neoantigens (stuff that cancers produce)


Chief Science Officer, SENS Research Foundation, Mountain View, CA

AgeX Therapeutics

Dr. Aubrey de Grey is a biomedical gerontologist based in Mountain View, California, USA, and is the Chief Science Officer of SENS Research Foundation, a California-based 501(c)(3) biomedical research charity that performs and funds laboratory research dedicated to combating the aging process. He is also VP of New Technology Discovery at AgeX Therapeutics, a biotechnology startup developing new therapies in the field of biomedical gerontology. In addition, he is Editor-in-Chief of Rejuvenation Research, the world’s highest-impact peer-reviewed journal focused on intervention in aging. He received his BA in computer science and Ph.D. in biology from the University of Cambridge. His research interests encompass the characterisation of all the types of self-inflicted cellular and molecular damage that constitute mammalian aging and the design of interventions to repair and/or obviate that damage. Dr. de Grey is a Fellow of both the Gerontological Society of America and the American Aging Association, and sits on the editorial and scientific advisory boards of numerous journals and organisations. He is a highly sought-after speaker who gives 40-50 invited talks per year at scientific conferences, universities, companies in areas ranging from pharma to life insurance, and to the public.


Many thanks for reading/watching!

Consider supporting SciFuture by:

a) Subscribing to the SciFuture YouTube channel:…

b) Donating – Bitcoin: 1BxusYmpynJsH4i8681aBuw9ZTxbKoUi22 – Ethereum: 0xd46a6e88c4fe179d04464caf42626d0c9cab1c6b – Patreon:

c) Sharing the media SciFuture creates:

Kind regards, Adam Ford – Science, Technology & the Future

Surviving the Zombie Cell Apocalypse – Oisín Biotech’s Stephen Hilbert

Oisín Biotechnologies’ ground-breaking research and technology is demonstrating that the solution to mitigating the effects of age-related diseases is to address the damage created by the aging process itself. We have recently successfully launched our first subsidiary, Oisin Oncology, focusing on combating multiple cancers.

Interview with Stephen Hilbert

We cover the exciting scientific progress at Oisín: targeting senescent cells (dubbed ‘zombie cells’) to help them die properly, rejuvenation therapy vs traditional approaches to combating disease, Oisín’s potential for helping astronauts survive high levels of radiation in space, funding for the research and therapy/drug development, and specifically Stephen’s background in corporate development in helping raise capital for Oisín and its research.

Are we close to achieving Robust Mouse Rejuvenation?

According to Aubrey de Grey we are about 5-6 years away from robust mouse rejuvenation (RMR), subject to the kind of funding SENS has received this year and the previous year (2017-2018). There has been progress in developing certain therapies.

Specifically, the goal of RMR is this:

  • Make normal, healthy two-year old mice (expected to live one more year) live three further years.
    • The type of mice: the ideal mouse to trial on is one that doesn’t naturally have some congenital disease (such mice might on average only live 1.5 or 2 years) – because increasing their lifespan might only be a sign that you have solved their particular congenital disease.
    • Number of extra years: consistently increasing mouse lifespan by an extra two years on top of their normal three-year lifespan – essentially tripling their remaining lifespan.
    • When to begin the treatment: don’t start treating the mice until they are already 2 years old – at a time when they would normally be two-thirds of the way through their life (at or past middle age), with one more year to live.

Why not start treating the mice earlier? The goal is to produce sufficiently dramatic results in a laboratory to convince the mainstream gerontology community, such that they would willingly and publicly endorse the idea that it is not impossible – indeed, that it is only a matter of time before rejuvenation therapy works in humans – that is, to get out there on talk shows and in front of cameras and say all this.

The mainstream gerontology community is generally a bit conservative – they have vested interests in publishing papers successfully and winning grants, they have worries around peer review, they want tenure, and they have reputations to uphold. Gerontologists hold the keys to public trust – they are considered to be the authorities on aging.


For the lowdown on progress towards Robust Mouse Rejuvenation see partway through this interview with Aubrey de Grey!

Preliminary results from study showing normalized mouse survival at 140 weeks

Stephen heads up corporate development for Oisín Biotechnologies. He has served as a business advisor to Oisín since its inception and has served on several biotechnology company advisory boards, specializing in business strategy and capital formation. Prior to Oisín, his career spanned over 15 years in the banking industry, where he served as a trusted advisor to accredited investors around the globe. Most recently he headed up a specialty alternative investment for a company in San Diego, focusing on tax and insurance strategies for family offices and investment advisors. Stephen is the founder of several ventures in the areas of real estate, small-scale manufacturing of novelty gifts, and strategic consulting. He serves on the Overlake Hospital’s Pulse Board, assists with the Children’s Hospital Guild and is the incoming Chairman at the Columbia Tower Club, a members’ club in Seattle.
LinkedIn Profile

Head of Corporate Strategy/Development at pre-clinical-stage Oisin Biotechnologies and OncoSenX
FightAging - Oisin Biotechnologies Produces Impressive Mouse Life Span Data from an Ongoing Study of Senescent Cell Clearance
FightAging reported:
Oisin Biotechnologies is the company working on what is, to my eyes, the best of the best when it comes to the current crop of senolytic technologies, approaches capable of selectively destroying senescent cells in old tissues. Adding senescent cells to young mice has been shown to produce pathologies of aging, and removal of senescent cells can reverse those pathologies, and also extend life span. It is a very robust and reliable approach, with these observations repeated by numerous different groups using numerous different methodologies of senescent cell destruction.

Most of the current senolytic development programs focus on small molecules, peptides, and the like. These are expensive to adjust, and will be tissue specific in ways that are probably challenging and expensive to alter, where such alteration is possible at all. In comparison, Oisin Biotechnologies builds their treatments atop a programmable suicide gene therapy; they can kill cells based on the presence of any arbitrary protein expressed within those cells. Right now the company is focused on p53 and p16, as these are noteworthy markers of cancerous and senescent cells. As further investigation of cellular senescence improves the understanding of senescent biochemistry, Oisin staff could quickly adapt their approach to target any other potential signal of senescence – or of any other type of cell that is best destroyed rather than left alone. Adaptability is a very valuable characteristic.

The Oisin Biotechnologies staff are currently more than six months into a long-term mouse life span study, using cohorts in which the gene therapy is deployed against either p16, p53, or both p16 and p53, plus a control group injected with phosphate buffered saline (PBS). The study commenced more than six months ago with mice that were at the time two years (104 weeks) old. When running a life span study, there is a lot to be said for starting with mice that are already old; it saves a lot of time and effort.
The mice were randomly put into one of the four treatment groups, and then dosed once a month. As it turns out, the mice in which both p16 and p53 expressing cells are destroyed are doing very well indeed so far, in comparison to their peers. This is quite impressive data, even given the fact that the trial is nowhere near done yet.
Considering investing in or supporting this research? Get in contact with Oisin here.

The future of neuroscience and understanding the complexity of the human mind – Brains and Computers

Two of the world’s leading brain researchers will come together to discuss some of the latest international efforts to understand the brain. They will discuss two massive initiatives – the US-based Allen Institute for Brain Science and the European Human Brain Project. By combining neuroscience with the power of computing, both projects are harnessing the efforts of hundreds of neuroscientists in unprecedented collaborations aimed at unravelling the mysteries of the human brain.

This unique FREE public event, hosted by ABC Radio and TV personality Bernie Hobbs, will feature presentations by each of the two brain researchers, followed by an interactive discussion with the audience.

This is your chance to ask the big brain questions.

[Event Registration Page] | [Meetup Event Page]

ARC Centre of Excellence for Integrative Brain Function

Monday, 3 April 2017 from 6:00 pm to 7:30 pm (AEST)

Melbourne Convention and Exhibition Centre
2 Clarendon Street
enter via the main Exhibition Centre entrance, opposite Crown Casino
South Wharf, VIC 3006 Australia

Professor Christof Koch
President and Chief Scientific Officer, Allen Institute for Brain Science, USA

Professor Koch leads a large scale, 10-year effort to build brain observatories to map, analyse and understand the mouse and human cerebral cortex. His work integrates theoretical, computational and experimental neuroscience. Professor Koch pioneered the scientific study of consciousness with his long-time collaborator, the late Nobel laureate Francis Crick. Learn more about the Allen Institute for Brain Science and Christof Koch.

Professor Karlheinz Meier
Co-Director and Vice Chair of the Human Brain Project
Professor of Physics, University of Heidelberg, Germany

Professor Meier is a physicist working on unravelling theoretical principles of brain information processing and transferring them to novel computer architectures. He has led major European initiatives that combine neuroscience with information science. Professor Meier is a co-founder of the European Human Brain Project where he leads the research to create brain-inspired computing paradigms. Learn more about the Human Brain Project and Karlheinz Meier.



This event is brought to you by the Australian Research Council Centre of Excellence for Integrative Brain Function.

Discovering how the brain interacts with the world.

The ARC Centre of Excellence for Integrative Brain Function is supported by the Australian Research Council.

Consciousness in Biological and Artificial Brains – Prof Christof Koch

Event Description: Human and non-human animals not only act in the world but are capable of conscious experience. That is, it feels like something to have a brain and be cold, angry or see red. I will discuss the scientific progress that has been achieved over the past decades in characterizing the behavioral and the neuronal correlates of consciousness, based on clinical case studies as well as laboratory experiments.

I will introduce the Integrated Information Theory (IIT) that explains in a principled manner which physical systems are capable of conscious, subjective experience. The theory explains many biological and medical facts about consciousness and its pathologies in humans, can be extrapolated to more difficult cases, such as fetuses, mice, or non-mammalian brains, and has been used to assess the presence of consciousness in individual patients in the clinic. IIT also explains why consciousness evolved by natural selection. The theory predicts that deep convolutional networks and von Neumann computers would experience next to nothing, even if they perform tasks that in humans would be associated with conscious experience and even if they were to run software faithfully simulating the human brain.

[Meetup Event Page]

Supported by The Florey Institute of Neuroscience & Mental Health, the University of Melbourne and the ARC Centre of Excellence for Integrative Brain Function.



Who: Prof Christof Koch, President and Chief Scientific Officer, Allen Institute for Brain Science, Seattle, USA

Venue: Melbourne Brain Centre, Ian Potter Auditorium, Ground Floor, Kenneth Myer Building (Building 144), Genetics Lane, 30 Royal Parade, University of Melbourne, Parkville

This will be of particular interest to those who know the works of David Pearce, Andreas Gomez, Mike Johnson and Brian Tomasik – see this online panel:

Marching for Science with John Wilkins – a perspective from Philosophy of Science

Recent video interview with John Wilkins!

  • What should marchers for science advocate for (if anything)? Which way would you try to bias the economy of attention to science?
  • Should scientists (as individuals) be advocates for particular causes – and should the scientific enterprise advocate for particular causes?
  • The popular hashtag #AlternativeFacts and epistemic relativism – how about an #AlternativeHypotheses hashtag (#AltHype for short 😀)?
  • Some scientists have concerns about being directly involved; other scientists say they should have a voice, be heard on issues that matter, and stand up and complain when public policy is based on erroneous logic, faulty assumptions, or bad science. What’s your view? What are the risks?

John Wilkins is a historian and philosopher of science, especially biology. Apple tragic. Pratchett fan. Curmudgeon.

We will cover scientific realism vs structuralism in another video in the near future!
Topics will include:

  • Scientific Realism vs Scientific Structuralism (or Structuralism for short)
  • Ontic (OSR) vs Epistemic (ESR)
  • Does the claim that one can know only the abstract structure of the world trivialize scientific knowledge? (Epistemic Structural Realism and Ontic Structural Realism)
  • If we are in principle happy to accept scientific models (especially those that have graduated from hypothesis to theory) as structurally real – does this give us reason never to be overconfident about our assumptions?

Come to the Science March in Melbourne on April 22nd 2017 – bring your friends too 😀