Jerry Shay – The Telomere Theory of Ageing – Interview At Undoing Ageing, Berlin, 2019

“When telomeres get really short, that can lead to a DNA damage signal and cause cells to undergo a phenomenon called ‘replicative senescence’… where cells can secrete things that are not necessarily very good for you.”

Why is it that immune cells don’t work as well in older age?

Listen to the interview here

Jerry and his team compared a homogeneous group of centenarians in northern Italy to 80-year-olds and 30-year-olds, testing the function of their immune cells (T cells) through RNA sequencing. What was observed was that the young people all clustered together, apart from most of the old people – but the centenarians didn’t cluster in any one spot. The centenarians who clustered alongside the younger cohorts were found to have better telomere length.

Out of 7 billion people on earth, there are only about half a million centenarians – most of them frail – though the ones with longer telomeres and more robust T-cell physiology seem to be quite different from the frail centenarians. What usually happens is that when telomeres wear down, the DNA in the cell gets damaged, triggering a DNA damage response. From this, Jerry and his team made a jump in logic: maybe there are genes near the telomeres that are repressed when the telomeres are long and activated when the telomeres are short – circumventing the need for a DNA damage response. What is interesting is that they found genes really close to the telomeres – inflammatory cytokine genes such as TNF-alpha and interleukin-1 – being activated in humans, through a process called ‘telomere looping’. As we grow and develop, our telomeres get longer, and at a certain length they start silencing certain inflammation genes; then as we age, some of these genes get activated – this is sometimes referred to as the ‘telomere clock’. Centenarians who are healthy maintain longer telomeres and don’t have these inflammation genes activated.


During early fetal development (12–18 weeks) telomerase gets silenced – it has always been thought that this was to stop early onset of cancer – but Dr Shay asked, ‘why is it that all newborns have about the same length of telomeres?’ And it’s not just in humans; it’s seen in other animals like whales, elephants, and many large, long-lived mammals – though it doesn’t occur in smaller mammals like mice, rats or rabbits. The concept is that when the telomere is long enough, it loops over and silences its own gene, which stays silent until we are older (and in need of it again to help prevent cancer).

This telomere looping probably evolved as part of antagonistic pleiotropy – where things that provide protection or advantage early in life may have unpredicted negative consequences later in life. This is what telomerase is for: we as humans need it in very early development, as do large, long-lived mammals, along with a mechanism to shut it off – then at an older age it can be activated again to fight against cancer.


There is a fair amount of evidence for accumulated damage as hallmarks for ageing – can we take a damage repair approach to rejuvenation medicine?

Telomere spectrum disorders, or telomeropathies, are human diseases of telomere dysfunction – diseases like idiopathic pulmonary fibrosis in adults, and dyskeratosis congenita in young children, who are born with shortened telomeres and reduced telomerase – they get age-related diseases very early in life. Can they be treated? Perhaps through gene therapy, or by transiently elongating their telomeres. But can this be applied to the general population too? People don’t lose their telomeres at the same rate – we know it’s possible for people to keep their telomeres long for 100 years or more – it’s just not yet known how. It could be luck; more likely it has a lot to do with genetics.


Ageing is complex – no one theory is going to explain everything about it. The telomere hypothesis of ageing perhaps accounts for about 5–10% of ageing on average – though understanding it well enough might give people an extra 10% of healthy life. Eventually it will all be about personalised medicine – with genotyping we will be able to say you have about a 50% chance of bone marrow failure when you’re 80 years old – and if so, you may be a candidate for bone marrow rejuvenation.

What is possible in the next 10 years?


Inflammation is central to causing age-related disease, and chronic inflammation can lead to a whole spectrum of diseases. We already have drugs for subtle, low-grade inflammation – TNF blockers (like Humira and Enbrel) which subtly reduce inflammation – and people can go into remission from many diseases after taking them.

There are about 40 million people on metformin in the USA – a drug which may help reduce the consequences of ageing. Metformin and other drugs like it are safe; if we can find further safe drugs to reduce inflammation, this could go a long way – aspirin perhaps (it’s complicated). It doesn’t take much to get a big bang out of a little intervention. The key to all this is safety – we don’t want to do any harm – and metformin and aspirin have been proven safe over time. Now we need to learn how to repurpose them to specifically address the ageing problem.


Historically we have more or less ignored the fundamental problem of ageing and targeted specific diseases – but by the time you are diagnosed, the disease is difficult to treat; by the time you have been diagnosed with cancer, it’s likely so far advanced that it’s difficult to stop the eventual outcomes. The concept of intervening in the ticking clock of ageing is becoming more popular now. If we can intervene early in the process, we may be able to mitigate downstream diseases.

Jerry has been working on what they call a ‘telomerase-mediated inhibitor’ (see more about telomerase-mediated therapies here) – “it shows amazing efficacy in reducing tumor burden and improving immune cell function at the same time – it gets rid of the bad immune cells in the microenvironment, and guess what? The tumors disappear – so I think there’s ways to take advantage of the new knowledge of ageing research and apply it to diseases – but I think it’s going to be a while before we think about prevention.”

Unfortunately in the USA, and really globally, “people want to have their problems, their lifestyles, the way they want them, and when something goes wrong, they want the doctor to come and give them a pill to fix the problem, instead of taking personal responsibility and saying that what we should be doing is preventing it in the first place.” We all know that prevention is important, though most don’t want to practise prevention over the long haul.


The goal of all this is not necessarily to live longer, but to live healthier – we now know that the costs associated with treating the pathologies of ageing are enormous. It has been said that 25% of Medicare costs in the USA go to treating people on dialysis – that’s huge. If we could compress the years of end-of-life morbidity into a smaller window, it would pay for itself over and over again. So the goal is to increase healthspan and reduce the long period of chronic disease associated with ageing. We don’t want this to be limited to a select subgroup with access to future regenerative medicine – there are many people in the world without resources or access at this time – we hope that will change.

Jerry’s goal is to take some of the discovered bio-markers of both healthy and less healthy older people – and test them out on larger population numbers – though it’s very difficult to get the funding one needs to conduct large population studies.

The Ghost in the Quantum Turing Machine – Scott Aaronson

Interview on whether machines can be conscious with Scott Aaronson, theoretical computer scientist and David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin. His primary areas of research are quantum computing and computational complexity theory.
Scott blogged about this and other segments of our interview – his blog is very popular and has way more comments than this one does – check it out.

Check out interview segment “The Winding Road to Quantum Supremacy” with Scott Aaronson – covering progress in quantum computation, whether there are things that quantum computers could do that classical computers can’t etc..


Adam Ford: In ‘Could a Quantum Computer have Subjective Experience?‘ you speculate that a process has to fully participate in the arrow of time to be conscious, and that this points to decoherence. If pressed, how might you try to formalize this?

Scott Aaronson: So yeah, I did write this kind of crazy essay five or six years ago that was called “The Ghost in the Quantum Turing Machine“, where I tried to explore a position that seemed to me to be mysteriously under-explored in all of the debates about ‘could a machine be conscious?’. We want to be thoroughgoing materialists, right? There’s no magical ghost that defies the laws of physics; the brain is a physical system that obeys the laws of physics just like any other.
But there is at least one very interesting difference between a brain and any digital computer that’s ever been built – and that is that the state of a brain is not obviously copyable; that is, not obviously knowable to an outside person well enough to predict what the person will do in the future, without scanning the person’s brain so invasively that you would kill them. And so there is a sort of privacy, or opacity if you like, to a brain that there is not to a piece of code running on a digital computer.
And so there are all sorts of classic philosophical conundrums that play on that difference. For example, suppose that a human-level AI does eventually become possible, and we have simulated people running inside of our computers – well, if I were to murder such a person, in the sense of deleting their file, is that okay as long as I keep a backup somewhere? As long as I can just restore them from backup? Or what if I’m running two exact copies of the program on two computers next to each other – is that instantiating two consciousnesses? Or is it really just one consciousness, because there’s nothing to distinguish the one from the other?
So could I blackmail an AI to do what I wanted by saying, even if I don’t have access to you as an AI: if you don’t give me a million dollars, then – since I have your code – I’m going to create a million copies of the code and torture them? And, if you think about it, you are almost certain to be one of those copies, because there are far more of them than there are of you, and they’re all identical!
So there are all these puzzles that philosophers have wondered about for generations – the nature of identity, how identity persists across time, whether it can be duplicated across space – and somehow, in a world with copyable AIs, they would all become much more real!
And so one point of view that you could take is: well, what if I can predict exactly what someone is going to do? And I don’t mean just saying, as a philosophical matter, that I could predict your actions if I were a Laplace demon and knew the complete state of the universe – because I don’t in fact know the complete state of the universe – but imagine that I could do it as an actual practical matter: I could build an actual machine that would perfectly predict, down to the last detail, everything you would do before you had done it.
Okay, well then in what sense do I still have to respect your personhood? I could just say I have unmasked you as a machine; my simulation has every bit as much right to personhood as you do at this point – or maybe they’re just two different instantiations of the same thing.
So another possibility, you could say, is that maybe what we like to think of as consciousness only resides in those physical systems that, for whatever reason, are uncopyable – that if you tried to make a perfect copy, you would ultimately run into what we call the no-cloning theorem in quantum mechanics, which says that you cannot copy the exact physical state of an unknown system, for quantum mechanical reasons. And so this would suggest a view where personal identity is very much bound up with the flow of time; with things that happen that are evanescent; that can never happen again exactly the same way, because the world will never reach exactly the same configuration.
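The no-cloning theorem Aaronson appeals to has a short, standard proof by linearity; a sketch (my summary, not from the interview):

```latex
\text{Suppose a unitary } U \text{ cloned arbitrary states: } U\big(|\psi\rangle \otimes |0\rangle\big) = |\psi\rangle \otimes |\psi\rangle .
\text{In particular, } U\big(|0\rangle|0\rangle\big) = |0\rangle|0\rangle \quad\text{and}\quad U\big(|1\rangle|0\rangle\big) = |1\rangle|1\rangle .
\text{By linearity, } U\!\left(\tfrac{|0\rangle + |1\rangle}{\sqrt{2}}\,|0\rangle\right) = \tfrac{|0\rangle|0\rangle + |1\rangle|1\rangle}{\sqrt{2}},
\text{whereas cloning would require } \tfrac{(|0\rangle + |1\rangle)\,(|0\rangle + |1\rangle)}{2}.
\text{These states differ, so no such } U \text{ exists.}
```

So a device that clones two known orthogonal states is fine; it is cloning an *unknown* superposition that linearity forbids.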
A related puzzle: what if I took your consciousness, or took an AI, and ran it on a reversible computer? Now, some people believe that any appropriate simulation brings about consciousness – which is a position that you can take. But what if I ran the simulation backwards – as I can always do on a reversible computer? What if I ran the simulation, computed it, and then uncomputed it? Have I now caused nothing to have happened? Or did I cause one forward consciousness and then one backward consciousness – whatever that means? Would it have a different character from the forward consciousness?
But we know a whole class of phenomena that in practice can only ever happen in one direction in time – and these are thermodynamic phenomena; phenomena that create waste heat, that create entropy, that take these small, microscopic, unknowable degrees of freedom and amplify them to macroscopic scale. And in principle those macroscopic records could become microscopic again. Like if I make a measurement of a quantum state, at least according to, let’s say, many-worlds quantum mechanics, in principle that measurement could always be undone. And yet in practice we never see those things happen – for basically the same reasons why we never see an egg spontaneously unscramble itself, or why we never see a shattered glass leap up to the table and reassemble itself – namely, these would represent vastly improbable decreases of entropy. And so the speculation was that maybe this sort of irreversibility, this increase of entropy that we see in all ordinary physical processes, and in particular in our own brains – maybe that’s important to consciousness?
Or to what we like to think of as free will – we certainly don’t have an example to say that it isn’t. But the truth of the matter is, I don’t know. I set out all the thoughts that I had about it in this essay five years ago, and then, having written it, I decided that I’d had enough of metaphysics – it made my head hurt too much – and I was going to go back to the better-defined questions in math and science.

Adam Ford: In ‘Is Information Physical?’ you note that if a system crosses a Schwarzschild bound it collapses into a black hole – do you think this could be used to put an upper bound on the amount of consciousness in any given physical system?

Scott Aaronson: Well, so I can decompose your question a little bit. What quantum gravity considerations let you do, it is believed today, is put a universal bound on how much computation can be going on in a physical system of a given size, and also on how many bits can be stored there. And the bounds are precise enough that I can just tell you what they are. So it appears that a physical system – let’s say one surrounded by a sphere of a given surface area – can store at most about 10 to the 69 bits, or rather 10 to the 69 qubits, per square meter of surface area of the enclosing boundary. And there is a similar limit on how many computational steps it can do over its whole history.
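The figure of roughly 10 to the 69 qubits per square meter matches the holographic bound; a back-of-the-envelope check (my arithmetic, not Aaronson's, assuming the standard entropy-per-area formula in bits):

```latex
S_{\max} \;=\; \frac{A}{4\,\ell_P^2 \ln 2}\ \text{bits}, \qquad \ell_P \approx 1.62 \times 10^{-35}\,\mathrm{m}
\;\Rightarrow\; \frac{S_{\max}}{A} \;\approx\; \frac{1}{4 \times \left(2.6 \times 10^{-70}\,\mathrm{m^2}\right) \times 0.693} \;\approx\; 1.4 \times 10^{69}\ \mathrm{bits/m^2}.
```

That is, the maximum entropy scales with the *surface area* of the enclosing boundary, not the volume, which is why the bound is quoted per square meter.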
So now I think your question kind of reduces to the question: can we upper-bound how much consciousness there is in a physical system – whatever that means – in terms of how much computation is going on in it, or in terms of how many bits are there? And that’s a little hard for me to think about, because I don’t know what we mean by an amount of consciousness. Like, am I ten times more conscious than a frog? Am I a hundred times more conscious? I don’t know – some of the time I feel less conscious than a frog.
But I am sympathetic to the idea that there is some minimum of computational interestingness in any system that we would like to talk about as being conscious. So there is this ancient speculation of panpsychism, which would say that every electron, every atom is conscious – and to me that’s fine – you can speculate that if you want. We know nothing to rule it out; there are no physical laws attached to consciousness that would tell us it’s impossible. The question is just: what does it buy you to suppose that? What does it explain? And in the case of the electron, I’m not sure that it explains anything!
Now, you could ask: does it even explain anything to suppose that we’re conscious? Maybe not – at least not for anyone beyond ourselves. There’s this ancient conundrum that we each know that we’re conscious, presumably, by our own subjective experience, while as far as we know everyone else might be an automaton – and if you really think about that consistently, it could lead you to become a solipsist. So Alan Turing, in his famous 1950 paper that proposed the Turing test, had this wonderful remark about it – which was something like: ‘A’ is liable to think that ‘A’ thinks while ‘B’ does not, while ‘B’ is liable to think ‘B’ thinks but ‘A’ does not. But in practice it is customary to adopt the polite convention that everyone thinks. It was a very British way of putting it, right? We adopt the polite convention that solipsism is false; that people – or any entities, let’s say – that can exhibit complex, goal-directed, intelligent behaviors like ours are probably conscious like we are. And that’s a criterion that would apply to other people; it would not apply to electrons (I don’t think); and it’s plausible that there is some bare minimum of computation in any entity to which that criterion would apply.

Adam Ford: Sabine Hossenfelder had a scathing review of panpsychism recently – did you read that?

Scott Aaronson: If it was very recent then I probably didn’t read it – though I did read an excerpt where she was saying, what, that panpsychism is experimentally ruled out? If she was saying that, I don’t agree – I don’t even see how you would experimentally rule out such a thing. I mean, you’re free to postulate as much consciousness as you want on the head of a pin – I would just say, well, if it doesn’t have an empirical consequence, if it’s not affecting the world, if it’s not affecting the behavior of that head of a pin in a way that you can detect, then Occam’s razor just itches to slice it out from our description of the world. That’s the way that I would put it personally.
So I posted a detailed critique of integrated information theory (IIT), which is Giulio Tononi’s proposed theory of consciousness, on my blog. My critique was basically this: Tononi comes up with a specific numerical measure that he calls ‘Phi’, and he claims that a system should be regarded as conscious if and only if its Phi is large. Now, the actual definition of Phi has changed over time – it’s changed from one paper to another, and it’s not always clear how to apply it – and there are many technical objections that could be raised against this criterion. But what I respect about IIT is that at least it sticks its neck out. It proposes a very clear criterion – much clearer than competing accounts give – to tell you which physical systems you should regard as conscious and which not.
Now, the danger of sticking your neck out is that it can get cut off – and indeed I think that IIT is not only falsifiable but falsified. Because as soon as this criterion was written down, the point I was making is that it becomes easy to construct physical systems that have enormous values of Phi – much, much larger than a human has – that I don’t think anyone would really want to regard as intelligent, let alone conscious, or even very interesting.
My examples show that basically Phi is large if and only if your system has a lot of interconnection – if it’s very hard to decompose into two components that interact with each other only weakly – so that you have a high degree of information integration. The point of my counterexamples was to say: well, this cannot possibly be the sole relevant criterion, because a standard error-correcting code, as is used for example on every compact disc, also has an enormous amount of information integration. But should we therefore say that every error-correcting code implemented in some piece of electronics is conscious? Even more than that, a giant grid of logic gates just sitting there doing nothing would have a very large value of Phi – and we can multiply examples like that.
And so Tononi then posted a big response to my critique, and his response was basically: well, you’re just relying on intuition; you’re just saying these systems are not conscious because your intuition says they aren’t – but that’s parochial; why should you expect a theory of consciousness to accord with your intuition? And he then just went ahead and said: yes, the error-correcting code is conscious; yes, the giant grid of XOR gates is conscious – and if they have a thousand times larger value of Phi than a brain, then they are a thousand times more conscious than a human is. The way I described it was: he didn’t just bite the bullet, he devoured a bullet sandwich with mustard. Which was not what I was expecting. But the charge that I’m saying ‘any scientific theory has to accord with intuition’ – I think that is completely mistaken; I think that’s really a mischaracterization of what I think.
I’ll be the very first to tell you that science has overturned common-sense intuition over and over and over. For example, temperature feels like an intrinsic quality of a material; it doesn’t feel like it has anything to do with motion, with atoms jiggling around at a certain speed – but we now know that it does. But when scientists first arrived at that modern conception of temperature in the eighteen hundreds, what was essential was that the new criterion agreed with the old criterion that fire is hotter than ice – so at least in the cases where we knew what we meant by hot or cold, the new definition agreed with the old one. And then the new definition went further, to tell us many counterintuitive things that we didn’t know before – but at least it reproduced the way in which we were using words previously.
Even when Copernicus and Galileo discovered that the earth orbits the Sun, the new theory was able to account for our observation that we are not flying off the earth – it said that’s exactly what you would expect to happen, because of these new principles of inertia and so on.
But if a theory of consciousness says that this giant blank wall, or this grid, is highly conscious just sitting there doing nothing – whereas even a simulated person, or an AI that passes the Turing test, would not be conscious if it happens to be organized in such a way that it has a low value of Phi – then I say: okay, the burden is on you to prove to me that this Phi notion you have defined has anything whatsoever to do with what I was calling consciousness. You haven’t even shown me any cases where they agree with each other, from which I should extrapolate to the hard cases – the ones where I lack an intuition, like: at what point is an embryo conscious? Or when is an AI conscious? The theory seems to have gotten wrong the only things it could possibly have gotten right – and at that point, I think there is nothing to compel a skeptic to say that this particular quantity Phi has anything to do with consciousness.

Cognitive Biases & In-Group Convergences with Joscha Bach

True & false vs right & wrong – people converge their views to a set of rights and wrongs relative to in-group biases in their peer group.
As a survival mechanism, convergence in groups is sometimes healthier than being right – so one should sometimes optimize for convergence even at the cost of getting things wrong – and humans probably have an evolutionary propensity to favor convergence over truth.
However, optimizing for convergence may result in the group mind being more stupid than the smartest people in the group.
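A toy simulation (my construction, not Joscha’s) makes the last point concrete: if every agent adopts the in-group majority view, the converged group can end up less accurate than its best-informed member, because the expert is drowned out by near-random peers.

```python
import random

random.seed(0)

def simulate(accuracies, trials=10000):
    """Return (accuracy of the best individual, accuracy of the
    converged group view) over many independent true/false questions."""
    majority_correct = 0
    for _ in range(trials):
        # each agent independently gets the right answer with probability p
        votes = [random.random() < p for p in accuracies]
        # convergence: every agent adopts the in-group majority view,
        # so the group is right exactly when the majority is right
        if sum(votes) * 2 > len(votes):
            majority_correct += 1
    return max(accuracies), majority_correct / trials

# one well-informed member among four near-random peers
best, group = simulate([0.9, 0.55, 0.55, 0.55, 0.55])
print(f"best individual: {best:.2f}, converged group: {group:.2f}")
```

With these (hypothetical) accuracies the converged group lands around 72% – well below the 90% of its smartest member, illustrating how optimizing for convergence can make the group mind dumber than its best individual.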


Joscha highlights the controversy of James Damore being fired from Google for circulating a memo arguing that biological differences between men and women affect their abilities as engineers – where Damore’s arguments may be correct – but regardless of what the facts are about how biological differences affect differences in ability between men and women, Google fired him because they thought supporting these arguments would make for a worse social environment.

This sort of thing leads to an interesting difference in discourse, where:
* ‘nerds’ tend to focus on ‘content‘ – on imparting ideas and facts that everyone can judge autonomously and form their own opinions about – in the view that in order to craft the best solutions we need the best facts
* for most people, the purpose of communication is ‘coordination‘ between individuals and groups (society, nations etc.) – where the value of a ‘fact’ is its effect on the coordination between people

So is Google’s response to the memo controversy about getting the facts right, or about how Google at this point should be organised?

What’s also really interesting is that different types of people read this ‘memo’ very differently – making it very difficult to form agreement about its content. How can one agree on what’s valuable about communication – whether it’s more about imparting ideas and facts, or more about coordination?

More recently there has been a lot of talk about #FakeNews – where it’s very difficult to get people to agree to things that are not in their own interests – including, as Joscha points out, the idea that truth matters.

Joscha Bach, Ph.D. is an AI researcher who has worked and published on cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany, and has built computational models of motivated decision making, perception, categorization, and concept formation. He is especially interested in the philosophy of AI and in the augmentation of the human mind.

Discussion points:
– In-group convergence: thinking in true & false vs right & wrong
– The group mind may be more stupid than the smartest individuals in the group

Physicalism & Materialism – John Wilkins

Materialism was a pre-Socratic view that for something to be real it has to be matter – physical stuff made of atoms (which at the time were considered hard like billiard balls – fundamental parts of reality). The reason the term physicalism is used these days is that it can describe things that aren’t matter – like forces – or that aren’t observable matter – like dark matter – or energy, fields, spacetime, etc. Physicalism is the idea that all that exists can be described in the language of some ‘ideal’ physics – we may never know what this ideal physics is, though people think it is something close to our current physics (since we can make very accurate predictions with our current physics).

If magic, telepathy or angels were real, there would be a physics that could describe them – they’d have patterns and properties that would be describable and explainable. A physicalist would likely think that even the mind operates according to physical rules. Being a physicalist, according to John, means you think everything is governed by rules – physical rules – and that there is an ideal language that can be used to describe all this.

Note John is also a deontologist.  Perhaps there should exist an ideal language that can fully describe ethics – does this mean that ideally there is no need for utilitarianism?  I’ll leave that question for another post.

Interview with John Wilkins on Materialism & Physicalism.

Here are some blog posts about physicalism by John Wilkins:

Is physicalism an impoverished metaphysics?

Every so often, we read about some philosopher or other public intellectual who makes the claim that a physicalist ontology – a world view in which only things that can be described in terms of physics are said to exist – is impoverished. That is, there are things whereof science cannot know, &c. A recent example is that made by Thomas Nagel [nicely eviscerated here by the physicist Sean Carroll], whose fame in philosophy rests on an influential 1974 paper arguing that there is something it is like to be a bat that no amount of physics, physiology or other objective science could account for.

Recently, Nagel has argued that the evolutionary view called (historically misleadingly) neo-Darwinism is “almost certainly” false. One of his reasons is that “materialism” (which Nagel should know is an antiquated world view, replaced by physicalism as defined above; there are many non-material things in physics, not least fields of various kinds) does not permit a full account of consciousness – the subjective facts of being a particular individual organism. Another is that the chance that life would emerge from a lifeless universe is staggeringly unlikely. How this is calculated is somewhat mysterious, given that at best we only have (dare I say it?) subjective estimates anyway, but there it is.

But Nagel is not alone. Various nonreligious (apparently) thinkers have made similar assertions, although some, like Frank Jackson, who proposed the Knowledge Argument, have since backed down. What is it that physicalism must account for that these disputants and objectors say it cannot?

It almost entirely consists of consciousness, intentions, intelligence or some similar mental property which is entirely inexplicable by “reductionist” physicalism. [Reductionism is a term of abuse that means – so far as I can tell – solely that the person who makes such an accusation does not like the thing or persons being accused.] And that raises our question: is physicalism lacking something?

I bet you are dying to know more… you’ll just have to follow the link…
See more at Evolving Thoughts>>

Is Physicalism Coherent?

In my last post I argued that physicalism cannot be rejected simply because people assert there are nonphysical objects which are beyond specification. Some are, however, specifiable, and one commentator has identified the obvious ones: abstract objects like the rules of chess or numbers. I have dealt with these before in my “Pizza reductionism” post, which I invite you to go read.

Done? OK, then; let us proceed.

It is often asserted that there are obviously things that are not physical, such as ideas, numbers, concepts, etc., quite apart from qualia. I once sat with a distinguished philosopher, whom I respect greatly and so shall not name, when he asserted that we can construct natural classifications because we can deal first with the natural numbers. I asked him “In what sense are numbers natural objects?”, meaning: why should we think numbers are entities in the natural world? He admitted that the question had not occurred to him (I doubt that – he is rather smart), but said that it was simply an axiom of his philosophy. I do not think such abstract objects are natural.

This applies to anything that is “informational”, including all semantic entities like meanings, symbols, lexical objects, and so on. They only “exist” as functional modalities in our thoughts and language. I have also argued this before: information does not “exist”; it is a function of how we process signals. Mathematics is not a domain, it is a language, and the reason it works is because the bits that seriously do not work are not explored far[*] – not all of it has to work in a physical or natural sense, but much of it has to, or else it becomes a simple game that we would not play so much.

So the question of the incoherence of physicalism is based on the assumption (which runs contrary to physicalism, and is thus question begging) that abstract objects are natural things. I don’t believe they are, and I certainly do not think that a thought, or concept, for example, which can be had by many minds and is therefore supposed to be located in none of them (and thus transcendental), really is nonphysical. That is another case of nouning language. The thought “that is red” exists, for a physicalist, in all the heads that meet the functional social criteria for ascriptions of red. It exists nowhere else – it just is all those cognitive and social behaviours in biological heads…

Yes, I know, it’s a real page turner…
See more at Evolving Thoughts>>

In philosophy, physicalism is the ontological thesis that “everything is physical”, that there is “nothing over and above” the physical, or that everything supervenes on the physical. Physicalism is a form of ontological monism—a “one substance” view of the nature of reality as opposed to a “two-substance” (dualism) or “many-substance” (pluralism) view. Both the definition of physical and the meaning of physicalism have been debated. Physicalism is closely related to materialism. Physicalism grew out of materialism with the success of the physical sciences in explaining observed phenomena. The terms are often used interchangeably, although they are sometimes distinguished, for example on the basis of physics describing more than just matter (including energy and physical law). Common arguments against physicalism include both the philosophical zombie argument and the multiple observers argument, that the existence of a physical being may imply zero or more distinct conscious entities.

“When I lost my belief in religion I had to decide what I needed to accept as a bare minimum. I decided that I needed to believe in the physical world. I never found the slightest reason to accept the existence of anything else. To this day I am a physicalist only because I never found the need to be anything else. The principle of parsimony suggests that one should not believe in more than one needs to. Even if it does make you feel comfortable.”


Let’s get physicalism!

See John Wilkins’ blog ‘Evolving Thoughts’

#philsci #philosophy #science #physics

Science, Mindfulness & the Urgency of Reducing Suffering – Christof Koch

In this interview with Christof Koch, he shares some deeply felt ideas about the urgency of reducing suffering (with some caveats), his experience with mindfulness – explaining what it was like to visit the Dalai Lama for a week – as well as a heartfelt account of his family dog ‘Nosey’ dying in his arms, and how that moved him to become a vegetarian. He also discusses the bias of human exceptionalism, the horrors of factory farming of non-human animals, and a consequentialist view on animal testing.
Christof Koch is an American neuroscientist best known for his work on the neural bases of consciousness.

Christof Koch is the President and Chief Scientific Officer of the Allen Institute for Brain Science in Seattle. From 1986 until 2013, he was a professor at the California Institute of Technology.

Towards the Abolition of Suffering Through Science

An online panel focusing on reducing suffering & paradise engineering through the lens of science.

Panelists: Andrés Gómez Emilsson, David Pearce, Brian Tomasik and Mike Johnson

Note: consider skipping to 10:19 to bypass some audio problems in the beginning!


Andrés Gómez Emilsson: Qualia computing (how to use consciousness for information processing, and why that has ethical implications)

  • How do we know consciousness is causally efficacious? Because we are conscious and evolution can only recruit systems/properties when they do something (and they do it better than the available alternatives).
  • What is consciousness’ purpose in animals?  (Information processing).
  • What is consciousness’ comparative advantage?  (Phenomenal binding).
  • Why does this matter for suffering reduction? Suffering has functional properties that play a role in the inclusive fitness of organisms. If we figure out exactly what role they play (by reverse-engineering the computational properties of consciousness), we can substitute them by equally (or better) functioning non-conscious or positive hedonic-tone analogues.
  • What is the focus of Qualia Computing? (it focuses on basic fundamental questions and simple experimental paradigms to get at them (e.g. computational properties of visual qualia via psychedelic psychophysics)).

Brian Tomasik:

  • Space colonization “Colonization of space seems likely to increase suffering by creating (literally) astronomically more minds than exist on Earth, so we should push for policies that would make a colonization wave more humane, such as not propagating wild-animal suffering to other planets or in virtual worlds.”
  • AGI safety “It looks likely that artificial general intelligence (AGI) will be developed in the coming decades or centuries, and its initial conditions and control structures may make an enormous impact to the dynamics, values, and character of life in the cosmos.”,
  • Animals and insects “Because most wild animals die, often painfully, shortly after birth, it’s plausible that suffering dominates happiness in nature. This is especially plausible if we extend moral considerations to smaller creatures like the ~10^19 insects on Earth, whose collective neural mass outweighs that of humanity by several orders of magnitude.”

Mike Johnson:

  • If we successfully “reverse-engineer” the patterns for pain and pleasure, what does ‘responsible disclosure’ look like? Potential benefits and potential for abuse both seem significant.
  • If we agree that valence is a pattern in a dataset, what’s a good approach to defining the dataset, and what’s a good heuristic for finding the pattern?
  • What order of magnitude is the theoretical potential of mood enhancement? E.g., 2x vs 10x vs 10^10x
  • What are your expectations of the distribution of suffering in the world? What proportion happens in nature vs within the boundaries of civilization? What are counter-intuitive sources of suffering? Do we know about ~90% of suffering on the earth, or ~.001%?
  • Valence Research, The Mystery of Pain & Pleasure.
  • Why is it such an exciting time round about now to be doing valence research?  Are we at a sweet spot in history with this regard?  What is hindering valence research? (examples of muddled thinking, cultural barriers etc?)
  • How do we use the available science to improve the QALY? GiveDirectly has used change in cortisol levels to measure effectiveness, and the EU (what’s EU stand for?) evidently does something similar involving cattle. It seems like a lot of the pieces for a more biologically-grounded QALY – and maybe a SQALY (Species and Quality-Adjusted Life-Year) – are available; someone just needs to put them together. I suspect this is one of the lowest-hanging, highest-leverage research fruits.
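
The QALY idea in the list above reduces to simple arithmetic: each life-year is weighted by a quality factor between 0 (death) and 1 (full health), and the weights are summed. A minimal sketch is below; the `quality_from_biomarker` mapping from cortisol to a quality weight is an invented placeholder for illustration, not GiveDirectly’s or the EU’s actual method, and the numbers are arbitrary:

```python
# Sketch of a QALY (Quality-Adjusted Life-Year) calculation.
# A QALY weights each life-year by a quality factor in [0, 1]:
# 1.0 = full health, 0.0 = death.

def qalys(yearly_quality_weights):
    """Total QALYs is the sum of per-year quality weights."""
    return sum(yearly_quality_weights)

def quality_from_biomarker(cortisol_ng_ml, baseline=10.0, worst=30.0):
    """Hypothetical linear mapping from a stress biomarker to a
    quality weight: baseline cortisol -> 1.0, 'worst' level -> 0.0.
    Purely illustrative; not a validated instrument."""
    w = 1.0 - (cortisol_ng_ml - baseline) / (worst - baseline)
    return max(0.0, min(1.0, w))  # clamp into [0, 1]

# Five years lived at varying measured stress levels:
weights = [quality_from_biomarker(c) for c in [10, 12, 15, 20, 28]]
print(round(qalys(weights), 2))  # 3.25 -> five years yield 3.25 QALYs
```

A biologically grounded SQALY would presumably swap in species-appropriate biomarkers and weights, but the aggregation step would look the same.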

David Pearce: The ultimate scope of our moral responsibilities. Assume for a moment that our main or overriding goal should be to minimise and ideally abolish involuntary suffering. I typically assume that (a) only biological minds suffer and (b) we are probably alone within our cosmological horizon. If so, then our responsibility is “only” to phase out the biology of involuntary suffering here on Earth and make sure it doesn’t spread or propagate outside our solar system. But Brian, for instance, has quite a different metaphysics of mind, most famously that digital characters in video games can suffer (now only a little – but in future perhaps a lot). The ramifications here for abolitionist bioethics are far-reaching.


– Valence research, Qualia computing (how to use consciousness for information processing, and why that has ethical implications),  animal suffering, insect suffering, developing an ethical Nozick’s Experience Machine, long term paradise engineering, complexity and valence
– Effective Altruism/Cause prioritization and suffering reduction – People’s practical recommendations for the best projects that suffering reducers can work on (including where to donate, what research topics to prioritize, what messages to spread). – So cause prioritization applied directly to the abolition of suffering?
– what are the best projects people can work on to reduce suffering? and what to work on first? (including where to donate, what research topics to prioritize, what messages to spread)




#hedweb ‪#EffectiveAltruism ‪#HedonisticImperative ‪#AbolitionistProject

The event was hosted on the 10th of August 2015, Venue: The Internet

Towards the Abolition of Suffering Through Science was hosted by Adam Ford for Science, Technology and the Future.


The End of Aging

Aging is a technical problem with a technical solution – finding the solution requires clear thinking and focused effort. Once solving aging becomes demonstrably feasible, attitudes will likely shift regarding its desirability. There is huge potential, for individuals and for society, in reducing suffering through the use of rejuvenation therapy to achieve new heights of physical well-being. I also discuss the looming economic implications of large percentages of illness among aging populations – and put forward that focusing on solving the fundamental problems of aging will reduce the incidence of debilitating diseases of aging – which will in turn reduce the economic burden of illness. This mini-documentary discusses the implications of actually solving aging, as well as some misconceptions about aging.

‘The End of Aging’ won first prize in the international Longevity Film Competition[1] in 2018.

The above video is the latest version with a few updates & kinks ironed out.

‘The End of Aging’ was Adam Ford’s submission for the Longevity Film Competition – all the contestants did a great job. Big thanks to the organisers of competition, it inspires people to produce videos to help spread awareness and understanding about the importance of ending aging.

It’s important to see that health in old age is desirable at the population level – rejuvenation medicine, repairing the body’s ability to cope with stressors (or the practical reversal of the aging process), will end up being cheaper than traditional medicine based on general indefinite postponement of ill-health (especially in the long run, as rejuvenation therapy becomes more efficient).

According to the World Health Organisation:

  1. Between 2015 and 2050, the proportion of the world’s population over 60 years will nearly double from 12% to 22%.
  2. By 2020, the number of people aged 60 years and older will outnumber children younger than 5 years.
  3. In 2050, 80% of older people will be living in low- and middle-income countries.
  4. The pace of population ageing is much faster than in the past.
  5. All countries face major challenges to ensure that their health and social systems are ready to make the most of this demographic shift.
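
As a rough check on point 1 above, the rise from 12% to 22% is indeed “nearly double”, and one can interpolate the implied over-60 share for intermediate years. The linear interpolation here is my own illustrative assumption, not WHO methodology:

```python
# Rough check on the WHO figure: world population share over 60
# rising from 12% (2015) to 22% (2050).

def over60_share(year, y0=2015, s0=0.12, y1=2050, s1=0.22):
    """Share of world population over 60, linearly interpolated
    between the two WHO endpoints (illustrative assumption)."""
    return s0 + (s1 - s0) * (year - y0) / (y1 - y0)

print(round(0.22 / 0.12, 2))         # 1.83 -> "nearly double"
print(round(over60_share(2030), 3))  # 0.163 -> implied share in 2030
```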


Happy Longevity Day 2018! 😀

[1] The Longevity Film Competition is an initiative by the Healthy Life Extension Society, the SENS Research Foundation, and the International Longevity Alliance. The promoters of the competition invited filmmakers everywhere to produce short films advocating for healthy life extension, with a focus on dispelling four common misconceptions and concerns around the concept of life extension: the false dichotomy between aging and age-related diseases, the Tithonus error, the appeal-to-nature fallacy, and the fear of inequality of access to rejuvenation biotechnologies.

Michio Kaku on the Holy Grail of Nanotechnology

Michio Kaku on Nanotechnology – Michio is the author of many best sellers, recently the Future of the Mind!

The Holy Grail of Nanotechnology

Merging with machines is on the horizon and Nanotechnology will be key to achieving this. The ‘Holy Grail of Nanotechnology’ is the replicator: A microscopic robot that rearranges molecules into desired structures. At the moment, molecular assemblers exist in nature in us, as cells and ribosomes.

Sticky Fingers problem

How might nanorobots/replicators look and behave?
Because of the ‘sticky fingers / fat fingers’ problem, in the short term we won’t have nanobots with agile clippers or blow torches (like what we might see in a sci-fi movie).

The 4th Wave of High Technology

Humanity has seen an acceleration in history of technological progress from the steam engine and industrial revolution to the electrical age, the space program and high technology – what is the 4th wave that will dominate the rest of the 21st century?
Nanotechnology (molecular physics), biotechnology, and artificial intelligence (reducing the circuitry of the brain down to neurons) – “these three molecular technologies will propel us into the future”!


Michio Kaku – Bio

Michio Kaku (born January 24, 1947) is an American theoretical physicist, the Henry Semat Professor of Theoretical Physics at the City College of New York, a futurist, and a communicator and popularizer of science. He has written several books about physics and related topics, has made frequent appearances on radio, television, and film, and writes extensive online blogs and articles. He has written three New York Times Best Sellers: Physics of the Impossible (2008), Physics of the Future (2011), and The Future of the Mind (2014).

Kaku is the author of various popular science books:
– Beyond Einstein: The Cosmic Quest for the Theory of the Universe (with Jennifer Thompson) (1987)
– Hyperspace: A Scientific Odyssey through Parallel Universes, Time Warps, and the Tenth Dimension (1994)
– Visions: How Science Will Revolutionize the 21st Century (1998)
– Einstein’s Cosmos: How Albert Einstein’s Vision Transformed Our Understanding of Space and Time (2004)
– Parallel Worlds: A Journey through Creation, Higher Dimensions, and the Future of the Cosmos (2004)
– Physics of the Impossible: A Scientific Exploration into the World of Phasers, Force Fields, Teleportation, and Time Travel (2008)
– Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 (2011)
– The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind (2014)

Subscribe to the YouTube Channel: Science, Technology & the Future

Aubrey de Grey – Towards the Future of Regenerative Medicine

Why is aging research important? Biological aging causes suffering; however, in recent times there has been surprising progress in stem cell research and in regenerative medicine that will likely disrupt the way we think about aging and, in the longer term, substantially mitigate some of the suffering involved in growing old.
Aubrey de Grey is the Chief Science Officer of the SENS Research Foundation – an organisation focused on going beyond ageing and leading the journey towards the future of regenerative medicine!
What will it take to get there?

You might wonder: why pursue regenerative medicine?
Historically, doctors have been racing against time to find cures for specific illnesses, winning temporary victories by tackling diseases one by one – solve one disease and another urgently beckons. Once your body becomes frail, if you survive one major illness you may not be so lucky with the next; the older you get, the less capable your body becomes of staving off new illnesses – you can imagine a long line of other ailments fading beyond view into the distance, and eventually one of them will do you in. If we are to achieve radical healthy longevity, we need to strike at the fundamental technical problems of why we get frail and more disease-prone as we get older. Every technical problem has a technical solution – regenerative medicine is a class of solutions that seek to keep turning the ‘biological clock’ back rather than achieve short-term palliatives.

The damage repair methodology has gained in popularity over the last two decades, though it’s still not popular enough to attract huge amounts of funding – what might tip the scales of advocacy in damage-repair’s favor?
A clear existence proof such as achieving…

Robust Mouse Rejuvenation

In this interview, Aubrey de Grey expresses the most optimism I have heard from him about the near-term achievement of Robust Mouse Rejuvenation. Previously it has been 10 years away, subject to adequate funding (which was not realised) – now Aubrey predicts it might happen within only 5-6 years (subject to funding, of course). So, what is Robust Mouse Rejuvenation – and why should we care?


Specifically, the goal of RMR is this: make normal, healthy two-year-old mice (expected to live one more year) live three further years.

  • What’s the ideal type of mouse to test on, and why? The ideal mouse to trial on is one that doesn’t naturally have a certain kind of congenital disease (such strains might on average only live 1.5 or 2 years) – because increasing their lifespan might only be a sign that you have solved their particular congenital disease. The ideal type of mouse is one which lives to 3 years on average and could die of various things.
  • How many extra years is significant? Consistently increasing mouse lifespan by an extra two years on top of their normal three-year lifespans – essentially tripling their remaining lifespan.
  • When, or at what stage of the mice’s life, to begin the treatment? Don’t start treating the mice until they are already 2 years old – a time when they would normally be two-thirds of the way through their life (at or past middle age), with one more year to live.
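
The three criteria above boil down to simple arithmetic: treatment starts at age 2, when one year of expected life remains, and must carry the mouse to roughly age 5. A minimal sketch of the target as described in the interview:

```python
# Robust Mouse Rejuvenation (RMR) target, as described above:
# a healthy 2-year-old mouse (of a strain normally living to 3)
# must live three further years after treatment begins.

NORMAL_LIFESPAN = 3   # years, for the ideal disease-free strain
TREATMENT_AGE = 2     # years; two-thirds of the way through life
TARGET_LIFESPAN = 5   # years; three further years post-treatment

remaining_untreated = NORMAL_LIFESPAN - TREATMENT_AGE  # 1 year
remaining_treated = TARGET_LIFESPAN - TREATMENT_AGE    # 3 years

# "Essentially tripling their remaining lifespan":
print(remaining_treated / remaining_untreated)  # 3.0
```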

Why not start treating the mice earlier?  The goal is to produce sufficiently dramatic results in a laboratory to convince the mainstream gerontology community, such that they would willingly publicly endorse the idea that it is not impossible – indeed, that it is only a matter of time before rejuvenation therapy works in humans – and get out there on talk shows and in front of cameras and say all this.

Arguably, the mainstream gerontology community are generally a bit conservative – they have vested interests in publishing papers successfully, they need grants, they have worries around peer review, they want tenure, and they have reputations to uphold.  Gerontologists hold the keys to public trust – they are considered to be the authorities on aging.
When gerontologists are convinced and let the world know about it, a lot of other people in the scientific community and in the general community will also then become convinced.  Once that happens, here’s what’s likely to happen next – longevity through rejuvenation medicine will become a big issue, there will be domino effects – there will be a war on aging, experts will appear on Oprah Winfrey, politicians will have to include the war on aging in their political manifesto if they want to get elected.

Yoda - the oldest mouse ever to have lived?
Yoda, a cute dwarf mouse, was named the oldest mouse in 2004, at age 4; he lived with the much larger Princess Leia in ‘a pathogen-free rest home for geriatric mice’ belonging to Dr. Richard Miller, professor of pathology in the Geriatrics Center of the University of Michigan Medical School. “Yoda is only the second mouse I know to have made it to his fourth birthday without the rigors of a severe calorie-restricted diet,” Miller says. “He’s the oldest mouse we’ve seen in 14 years of research on aged mice at U-M. The previous record-holder in our colony died nine days short of his 4th birthday; 100-year-old people are much more common than 4-year-old mice.” (ref)

What about Auto-Immune Diseases?

Auto-immune diseases (considered by some to be incurable) get worse with aging for the same reason we lose the general ability to fight off infections and attack cancer: essentially, the immune system loses its precision. The immune system has two arms, the innate and the adaptive. The adaptive side works through polyclonality – a very wide diversity of cells with different rearrangements of parts of the genome that confer specificity of each immune cell to a particular target (which it may or may not encounter at some time in the future). This polyclonality diminishes over life, such that the cells targeted at a given problem are on average less precisely adapted to it – so the immune system takes longer to do its job, or doesn’t do it effectively – and in autoimmune disease it loses its ability to distinguish between things that are foreign and things that are part of the body. So this could be powerfully addressed by the same measures taken to rejuvenate the immune system generally – regenerating the thymus and eliminating the senescent cells that are accumulating in the blood.

Big Bottlenecks

See Aubrey discuss this at timepoint: 38:50
Bottlenecks: which bottlenecks does Aubrey believes need most attention from the community of people who already believe aging is a problem that needs to be solved?

  1. The first thing: Funding. The shortage of funding is still the biggest bottleneck.
  2. The second thing: the need for policy makers to get on board with the ideas and understand what is coming – so it’s not only about developing the therapies as quickly as possible; it’s also important that, once developed, the therapies get disseminated as quickly as possible to avoid complete chaos.

It’s very urgent to have proper discussions about this.  Anticipating the anticipation – getting ready for the public anticipating these therapies instead of thinking that it’s all science fiction and is never going to happen.


Effective Advocacy

See Aubrey discuss this at timepoint: 42:47
Advocacy: it’s a big ask to get people from extreme opposition to supporting regenerative medicine. Nudging people a bit sideways is a lot easier – that is, getting them from complete opposition to less opposition, or getting people who are undecided to be in favor of it.

Here are 2 of the main aspects of advocacy:

  1. feasibility / importance – emphasize progress and embrace by the scientific community (see the ‘Hallmarks of Aging’ paper – the single most highly cited paper on the biology of aging this decade), establishing the legitimacy of the damage-repair approach – it’s not just a crazy harebrained idea…
  2. desirability – address concerns (and bad arguments: on overpopulation – ‘oh, don’t worry, we will emigrate into space’ – the people concerned about overpopulation aren’t the ones who would like to go to space) – and focus on the things that generalize to desirable outcomes. Regenerative medicine will have side effects, like a longer lifespan, but people will also be more healthy at any given age than they would be if they hadn’t had regenerative therapy – nobody wants Alzheimer’s or heart disease – if that is the outcome of regenerative medicine, then it’s easier to sell.

We need a sense of proportion on possible future problems – will they generally be more serious than they are today?
Talking about uploading, substrate independence, etc one is actively alienating the public – it’s better to create a foundation of credibility in the conversation before you decide to persuade anyone of anything.  If we are going to get from here to the long term future we need advocacy now – the short term matters as well.

More on Advocacy here:

And here

Other Stuff

This interview covers a fair bit of ground, so here are some other points covered:

– Updates & progress at SENS
– Highlights of promising progress in regenerative medicine in general
– Recent funding successes, what can be achieved with this?
– Discussion on getting the message across
– desirability & feasibility of rejuvenation therapy
– What could be the future of regenerative medicine?
– Given progress so far, what can people alive today look forward to?
– Multi-factorial diseases – fixing amyloid plaque buildup alone won’t cure Alzheimer’s: getting rid of amyloid plaque has only produced mild cognitive benefits in Alzheimer’s patients, and there is still the unaddressed issue of tangles… If you only get rid of one component of a multi-component problem then you don’t get to see much improvement in pathology – in just the same way one shouldn’t expect to see much of an overall increase in health & longevity if you only fix 5 of the 7 things that need fixing (i.e. 5 of the 7 strands of SENS)
– moth-balling the anti-telomerase approach to fighting cancer in favor of cancer immunotherapy (for the time being), as its side effects need to be compensated for…
– Cancer immunotherapy – stimulating the body’s natural ability to attack cancer with its immune system – 2 approaches: CAR-T (chimeric antigen receptor T cells) and checkpoint-inhibiting drugs… then there is training the immune system to identify neoantigens (stuff that all cancers produce)



Dr. Aubrey de Grey is a biomedical gerontologist based in Mountain View, California, USA, and is the Chief Science Officer of SENS Research Foundation, a California-based 501(c)(3) biomedical research charity that performs and funds laboratory research dedicated to combating the aging process. He is also VP of New Technology Discovery at AgeX Therapeutics, a biotechnology startup developing new therapies in the field of biomedical gerontology. In addition, he is Editor-in-Chief of Rejuvenation Research, the world’s highest-impact peer-reviewed journal focused on intervention in aging. He received his BA in computer science and Ph.D. in biology from the University of Cambridge. His research interests encompass the characterisation of all the types of self-inflicted cellular and molecular damage that constitute mammalian aging and the design of interventions to repair and/or obviate that damage. Dr. de Grey is a Fellow of both the Gerontological Society of America and the American Aging Association, and sits on the editorial and scientific advisory boards of numerous journals and organisations. He is a highly sought-after speaker who gives 40-50 invited talks per year at scientific conferences, universities, companies in areas ranging from pharma to life insurance, and to the public.


Many thanks for reading/watching!

Consider supporting SciFuture by:

a) Subscribing to the SciFuture YouTube channel:…

b) Donating – Bitcoin: 1BxusYmpynJsH4i8681aBuw9ZTxbKoUi22 – Ethereum: 0xd46a6e88c4fe179d04464caf42626d0c9cab1c6b – Patreon:

c) Sharing the media SciFuture creates:

Kind regards, Adam Ford – Science, Technology & the Future

Surviving the Zombie Cell Apocalypse – Oisín Biotech’s Stephen Hilbert

Oisín Biotechnologies’ ground-breaking research and technology is demonstrating that the solution to mitigating the effects of age-related diseases is to address the damage created by the aging process itself. We have recently successfully launched our first subsidiary, Oisín Oncology, focusing on combating multiple cancers.

Interview with Stephen Hilbert

We cover the exciting scientific progress at Oisín: targeting senescent cells (dubbed ‘zombie cells’) to help them die properly, rejuvenation therapy vs traditional approaches to combating disease, Oisín’s potential for helping astronauts survive high levels of radiation in space, funding for the research and therapy/drug development, and specifically Stephen’s background in corporate development, helping raise capital for Oisín and its research.

Are we close to achieving Robust Mouse Rejuvenation?

According to Aubrey de Grey we are about 5-6 years away from robust mouse rejuvenation (RMR), subject to the kind of funding SENS has received this year and the previous year (2017-2018). There has been progress in developing certain therapies.

Specifically, the goal of RMR is this:

  • Make normal, healthy two-year-old mice (expected to live one more year) live three further years.
    • The type of mice: The ideal mouse to trial on is one that doesn’t naturally have a certain kind of congenital disease (such strains might on average only live 1.5 or 2 years) – because increasing their lifespan might only be a sign that you have solved their particular congenital disease.
    • Number of extra years: Consistently increasing mouse lifespan by an extra two years on top of their normal three-year lifespans – essentially tripling their remaining lifespan.
    • When to begin the treatment: Don’t start treating the mice until they are already 2 years old – a time when they would normally be two-thirds of the way through their life (at or past middle age), with one more year to live.

Why not start treating the mice earlier?  The goal is to produce sufficiently dramatic results in a laboratory to convince the mainstream gerontology community, such that they would willingly publicly endorse the idea that it is not impossible – indeed, that it is only a matter of time before rejuvenation therapy works in humans – and get out there on talk shows and in front of cameras and say all this.

The mainstream gerontology community are generally a bit conservative – they have vested interests in publishing papers successfully, they need grants, they have worries around peer review, they want tenure, and they have reputations to uphold.  Gerontologists hold the keys to public trust – they are considered to be the authorities on aging.


For the lowdown on progress towards Robust Mouse Rejuvenation see partway through this interview with Aubrey de Grey!

Preliminary results from study showing normalized mouse survival at 140 weeks

Stephen heads up corporate development for Oisín Biotechnologies. He has served as a business advisor to Oisín since its inception and has served on several biotechnology company advisory boards, specializing in business strategy and capital formation. Prior to Oisín, his career spanned over 15 years in the banking industry, where he served as a trusted advisor to accredited investors around the globe. Most recently he headed up a specialty alternative investment for a company in San Diego, focusing on tax and insurance strategies for family offices and investment advisors. Stephen is the founder of several ventures in the areas of real estate, small manufacturing of novelty gifts, and strategic consulting. He serves on the Overlake Hospital’s Pulse Board, assists with the Children’s Hospital Guild, and is the incoming Chairman at the Columbia Tower Club, a members’ club in Seattle.
LinkedIn Profile

Head of Corporate Strategy/Development (Pre-Clinical), Oisin Biotechnologies and OncoSenX
FightAging - Oisin Biotechnologies Produces Impressive Mouse Life Span Data from an Ongoing Study of Senescent Cell Clearance
FightAging reported:
Oisin Biotechnologies is the company working on what is, to my eyes, the best of the best when it comes to the current crop of senolytic technologies, approaches capable of selectively destroying senescent cells in old tissues. Adding senescent cells to young mice has been shown to produce pathologies of aging, and removal of senescent cells can reverse those pathologies, and also extend life span. It is a very robust and reliable approach, with these observations repeated by numerous different groups using numerous different methodologies of senescent cell destruction.

Most of the current senolytic development programs focus on small molecules, peptides, and the like. These are expensive to adjust, and will be tissue specific in ways that are probably challenging and expensive to alter, where such alteration is possible at all. In comparison, Oisin Biotechnologies builds their treatments atop a programmable suicide gene therapy; they can kill cells based on the presence of any arbitrary protein expressed within those cells. Right now the company is focused on p53 and p16, as these are noteworthy markers of cancerous and senescent cells. As further investigation of cellular senescence improves the understanding of senescent biochemistry, Oisin staff could quickly adapt their approach to target any other potential signal of senescence – or of any other type of cell that is best destroyed rather than left alone. Adaptability is a very valuable characteristic.

The Oisin Biotechnologies staff are currently more than six months into a long-term mouse life span study, using cohorts in which the gene therapy is deployed against either p16, p53, or both p16 and p53, plus a control group injected with phosphate buffered saline (PBS). The study commenced more than six months ago with mice that were at the time two years (104 weeks) old. When running a life span study, there is a lot to be said for starting with mice that are already old; it saves a lot of time and effort.
The mice were randomly put into one of the four treatment groups, and then dosed once a month. As it turns out, the mice in which both p16 and p53 expressing cells are destroyed are doing very well indeed so far, in comparison to their peers. This is quite impressive data, even given the fact that the trial is nowhere near done yet.
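Survival data from a four-cohort study like this is typically summarized as a survival curve. As a rough illustration of how such a curve is computed (the Kaplan–Meier estimator for a fully observed cohort; the death weeks below are invented for the example and are not Oisín's actual data):

```python
def kaplan_meier(death_weeks):
    """Kaplan-Meier survival estimate for a cohort with no censoring.

    death_weeks: list of ages at death (in weeks) for every mouse
    in one treatment group. Returns a list of (week, fraction of
    the cohort still alive just after that week) pairs.
    """
    at_risk = len(death_weeks)   # mice alive entering the first event
    survival = 1.0
    curve = []
    for week in sorted(set(death_weeks)):
        deaths = death_weeks.count(week)
        # multiply by the fraction surviving this event time
        survival *= (at_risk - deaths) / at_risk
        at_risk -= deaths
        curve.append((week, survival))
    return curve

# Hypothetical control cohort of four mice:
print(kaplan_meier([110, 120, 120, 130]))
# [(110, 0.75), (120, 0.25), (130, 0.0)]
```

Comparing the curve for the p16+p53 cohort against the PBS control at a fixed time point (e.g. 140 weeks, as in the preliminary results above) is what "normalized mouse survival" refers to.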
Considering investing in or supporting this research?  Get in contact with Oisin here.