The Ghost in the Quantum Turing Machine – Scott Aaronson

Interview on whether machines can be conscious with Scott Aaronson, theoretical computer scientist and David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin. His primary areas of research are quantum computing and computational complexity theory.
Scott blogged about this and other segments of our interview – his blog is very popular and has way more comments than this one does – check it out.

Check out the interview segment “The Winding Road to Quantum Supremacy” with Scott Aaronson – covering progress in quantum computation, and whether there are things that quantum computers could do that classical computers can’t.

Transcript

Adam Ford: In ‘Could a Quantum Computer have Subjective Experience?‘ you speculate that a process has to fully participate in the arrow of time to be conscious, and that this points to decoherence. If pressed, how might you try to formalize this?

Scott Aaronson: So yeah, I did write this kind of crazy essay five or six years ago that was called “The Ghost in the Quantum Turing Machine“, where I tried to explore a position that seemed to me to be mysteriously under-explored in all of the debates about ‘could a machine be conscious?’ We want to be thoroughgoing materialists, right? There’s no magical ghost that defies the laws of physics; brains are physical systems that obey the laws of physics just like any others.
But there is at least one very interesting difference between a brain and any digital computer that’s ever been built – and that is that the state of a brain is not obviously copyable; that is, not obviously knowable to an outside person well enough to predict what the person will do in the future, without scanning the person’s brain so invasively that you would kill them. And so there is a sort of privacy, or opacity if you like, to a brain that there is not to a piece of code running on a digital computer.
And so there are all sorts of classic philosophical conundrums that play on that difference. For example, suppose that a human-level AI does eventually become possible and we have simulated people running inside of our computers – well, if I were to murder such a person, in the sense of deleting their file, is that okay as long as I keep a backup somewhere? As long as I can just restore them from backup? Or what if I’m running two exact copies of the program on two computers next to each other – is that instantiating two consciousnesses? Or is it really just one consciousness, because there’s nothing to distinguish the one from the other?
So could I blackmail an AI into doing what I wanted by saying: even if I don’t have access to you as an AI, if you don’t give me a million dollars then – since I have your code – I’m going to create a million copies of the code and torture them? And – if you think about it – you are almost certain to be one of those copies, because there are far more of them than there are of you, and they’re all identical!
So there are all these puzzles that philosophers have wondered about for generations – the nature of identity, how identity persists across time, whether it can be duplicated across space – and somehow, in a world with copyable AIs, they would all become much more real!
And so one point of view that you could take is: well, what if I can predict exactly what someone is going to do? And I don’t mean just saying, as a philosophical matter, that I could predict your actions if I were a Laplace demon and knew the complete state of the universe – because I don’t in fact know the complete state of the universe – but imagine that I could do it as an actual practical matter: I could build an actual machine that would perfectly predict, down to the last detail, everything you would do before you had done it.
Okay, well then in what sense do I still have to respect your personhood? I could just say I have unmasked you as a machine; my simulation has every bit as much right to personhood as you do at this point – or maybe they’re just two different instantiations of the same thing.
So another possibility, you could say, is that maybe what we like to think of as consciousness only resides in those physical systems that, for whatever reason, are uncopyable – that if you tried to make a perfect copy then you would ultimately run into what we call the no-cloning theorem in quantum mechanics, which says that you cannot copy the exact physical state of an unknown system, for quantum mechanical reasons. And so this would suggest a view where personal identity is very much bound up with the flow of time; with things that happen that are evanescent; that can never happen again exactly the same way, because the world will never reach exactly the same configuration.
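
A minimal sketch (my own aside, not part of the interview) of the linearity argument behind the no-cloning theorem Scott refers to: suppose a single unitary $U$ could clone an arbitrary unknown state; comparing the inner products of the cloning equation for two states forces their overlap to equal its own square.

```latex
% Sketch of the standard no-cloning argument (editor's illustration, not from the interview).
% Assume a cloning unitary U with U|psi>|0> = |psi>|psi> for every state |psi>.
\begin{align*}
  U\,|\psi\rangle|0\rangle &= |\psi\rangle|\psi\rangle,
  \qquad
  U\,|\varphi\rangle|0\rangle = |\varphi\rangle|\varphi\rangle \\
  \langle\psi|\varphi\rangle
    &= \langle\psi|\langle 0|\,U^{\dagger}U\,|\varphi\rangle|0\rangle
     = \big(\langle\psi|\varphi\rangle\big)^{2}
\end{align*}
% Hence the overlap is 0 or 1: only orthogonal (effectively classical) states could be
% cloned by such a U, never a general unknown quantum state.
```
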
A related puzzle concerns: well, what if I took your consciousness, or took an AI, and ran it on a reversible computer? Now some people believe that any appropriate simulation brings about consciousness – which is a position you can take. But what if I ran the simulation backwards – as I can always do on a reversible computer? What if I ran the simulation, I computed it and then I uncomputed it? Have I now caused nothing to have happened? Or did I cause one forward consciousness and then one backward consciousness – whatever that means? Did it have a different character from the forward consciousness?
But we know a whole class of phenomena that in practice can only ever happen in one direction in time – and these are thermodynamic phenomena; these are phenomena that create waste heat, that create entropy, that take these small microscopic unknowable degrees of freedom and amplify them to macroscopic scale. In principle those macroscopic records could become microscopic again. Like if I make a measurement of a quantum state, at least according to, let’s say, many-worlds quantum mechanics, in principle that measurement could always be undone. And yet in practice we never see those things happen – for basically the same reasons why we never see an egg spontaneously unscramble itself, or why we never see a shattered glass leap up onto the table and reassemble itself – namely, these would represent vastly improbable decreases of entropy. And so the speculation was that maybe this sort of irreversibility, this increase of entropy that we see in all ordinary physical processes, and in particular in our own brains – maybe that’s important to consciousness?
Right – or to what we like to think of as free will. I mean, we certainly don’t have an example to say that it isn’t. But the truth of the matter is I don’t know. I set out all the thoughts that I had about it in this essay five years ago, and then, having written it, I decided that I’d had enough of metaphysics – it made my head hurt too much – and I was going to go back to the better-defined questions in math and science.

Adam Ford: In ‘Is Information Physical?’ you note that if a system crosses the Schwarzschild bound it collapses into a black hole – do you think this could be used to put an upper bound on the amount of consciousness in any given physical system?

Scott Aaronson: Well, so let me decompose your question a little bit. What quantum gravity considerations let you do, it is believed today, is put a universal bound on how much computation can be going on in a physical system of a given size, and also on how many bits can be stored there. And the bounds are precise enough that I can just tell you what they are. So it appears that a physical system that’s, let’s say, surrounded by a sphere of a given surface area can store at most about 10 to the 69 bits – or rather, 10 to the 69 qubits – per square meter of surface area of the enclosing boundary. And there is a similar limit on how many computational steps it can do over its whole history.
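
As a quick back-of-the-envelope check of that ~10^69 figure (my own sketch, not from the interview), assuming the holographic entropy bound of roughly one quarter of a nat per Planck area of enclosing surface:

```python
# Back-of-the-envelope check of the ~10^69 bits-per-square-meter figure quoted above,
# assuming the holographic entropy bound S <= A / (4 * l_p^2) (in nats), where l_p is
# the Planck length. This is an illustrative sketch by the editor, not interview content.

import math

PLANCK_LENGTH_M = 1.616255e-35          # Planck length in meters (CODATA value)

def max_bits_per_square_meter() -> float:
    """Maximum information (in bits) storable per square meter of a bounding
    surface, according to the holographic bound."""
    nats_per_m2 = 1.0 / (4.0 * PLANCK_LENGTH_M ** 2)   # S/A in nats per m^2
    return nats_per_m2 / math.log(2)                   # convert nats -> bits

print(f"{max_bits_per_square_meter():.2e} bits per square meter")
# Prints roughly 1.4e+69, i.e. about 10^69 bits/m^2, matching the figure quoted above.
```
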
So now I think your question kind of reduces to the question: can we upper-bound how much consciousness there is in a physical system – whatever that means – in terms of how much computation is going on in it, or in terms of how many bits are there? And that’s a little hard for me to think about, because I don’t know what we mean by amount of consciousness. Like, am I ten times more conscious than a frog? Am I a hundred times more conscious? I don’t know – some of the time I feel less conscious than a frog.
But I am sympathetic to the idea that there is some minimum of computational interestingness in any system that we would like to talk about as being conscious. So there is this ancient speculation of panpsychism, which would say that every electron, every atom is conscious – and to me that’s fine – you can speculate that if you want. We know nothing to rule it out; there are no physical laws attached to consciousness that would tell us that it’s impossible. The question is just: what does it buy you to suppose that? What does it explain? And in the case of the electron I’m not sure that it explains anything!
Now you could say, does it even explain anything to suppose that we’re conscious? Maybe not – at least not for anyone beyond ourselves. There’s this ancient conundrum that we each know that we’re conscious, presumably, by our own subjective experience, and as far as we know everyone else might be an automaton – which, if you really think about it consistently, could lead you to become a solipsist. So Alan Turing, in his famous 1950 paper that proposed the Turing test, had this wonderful remark about it – which was something like: ‘A’ is liable to think that ‘A’ thinks while ‘B’ does not, while ‘B’ is liable to think ‘B’ thinks but ‘A’ does not. But in practice it is customary to adopt the polite convention that everyone thinks. It was a very British way of putting it, to me. We adopt the polite convention that solipsism is false; that people – or any entities, let’s say – that can exhibit complex, goal-directed, intelligent behaviors like ours are probably conscious like we are. And that’s a criterion that would apply to other people; it would not apply to electrons (I don’t think); and it’s plausible that there is some bare minimum of computation in any entity to which that criterion would apply.

Adam Ford: Sabine Hossenfelder – I forget her name now – {Sabine Hossenfelder, yes} – she had a scathing review of panpsychism recently; did you read that?

Scott Aaronson: If it was very recent then I probably didn’t read it – I mean, I did read an excerpt where she was saying something about panpsychism – is she saying that it’s experimentally ruled out? If she was saying that, I don’t agree – no, I don’t even see how you would experimentally rule out such a thing. I mean, you’re free to postulate as much consciousness as you want on the head of a pin – I would just say, well, if it doesn’t have an empirical consequence, if it’s not affecting the world, if it’s not affecting the behavior of that head of a pin in a way that you can detect, then Occam’s razor just itches to slice it out from our description of the world. At least that’s the way that I would put it personally.
So I posted a detailed critique of integrated information theory (IIT), which is Giulio Tononi’s proposed theory of consciousness, on my blog. And my critique was basically this: Tononi comes up with a specific numerical measure that he calls ‘Phi’, and he claims that a system should be regarded as conscious if and only if Phi is large. Now the actual definition of Phi has changed over time – it’s changed from one paper to another, it’s not always clear how to apply it, and there are many technical objections that could be raised against this criterion. But what I respect about IIT is that at least it sticks its neck out. It proposes a very clear criterion – far clearer than competing accounts do – to tell you which physical systems you should regard as conscious and which not.
Now the danger of sticking your neck out is that it can get cut off – and indeed I think that IIT is not only falsifiable but falsified, because as soon as this criterion is written down (this was the point I was making), it is easy to construct physical systems that have enormous values of Phi – much, much larger than a human has – that I don’t think anyone would really want to regard as intelligent, let alone conscious, or even very interesting.
And my examples show that basically Phi is large if and only if your system has a lot of interconnection – if it’s very hard to decompose into two components that interact with each other only weakly – so that you have a high degree of information integration. And the point of my counterexamples was to say, well, this cannot possibly be the sole relevant criterion, because a standard error-correcting code, as is used for example on every compact disc, also has an enormous amount of information integration – but should we therefore say that every error-correcting code that gets implemented in some piece of electronics is conscious? And even more than that, a giant grid of logic gates just sitting there doing nothing would have a very large value of Phi – and we can multiply examples like that.
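
A toy numerical illustration of that point (my own Python sketch, not Aaronson’s actual construction, and only a crude proxy for “integration” rather than Tononi’s Phi): wire one system up as a parity-check code whose every check spans the whole block, and another as two independent halves, then measure how much information the two halves of the output share across a bipartition.

```python
# Toy sketch: error-correcting-code-like wiring "integrates" information, in the crude
# sense that the two halves of its output share a lot of mutual information, whereas a
# system wired as two independent halves shares none. This is only a rough proxy for
# information integration, not Tononi's actual Phi (editor's illustration).

from collections import Counter
from itertools import product
from math import log2

def entropy(counts: Counter) -> float:
    """Shannon entropy (in bits) of the empirical distribution given by counts."""
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def mutual_information(pairs) -> float:
    """I(L;R) = H(L) + H(R) - H(L,R), computed exactly from the listed samples."""
    left = Counter(l for l, _ in pairs)
    right = Counter(r for _, r in pairs)
    joint = Counter(pairs)
    return entropy(left) + entropy(right) - entropy(joint)

def encode(message, rows):
    """Each output bit is the XOR (parity) of the message bits listed in one row."""
    return tuple(sum(message[j] for j in row) % 2 for row in rows)

K = 8  # message length; each system produces a 2*K-bit output

# "Integrated" system: a systematic parity code. Outputs 0..K-1 copy the message;
# outputs K..2K-1 are parity checks, each XOR-ing K-1 message bits spread across
# the whole block, so every part of the output is tied to every other part.
integrated = [[i] for i in range(K)] + \
             [[j for j in range(K) if j != i] for i in range(K)]

# "Segregated" system: the first K outputs read only the first K/2 message bits,
# the last K outputs read only the last K/2 message bits - two independent halves.
segregated = [[i % (K // 2)] for i in range(K)] + \
             [[K // 2 + (i % (K // 2))] for i in range(K)]

for name, rows in [("integrated (parity code)", integrated),
                   ("segregated (two halves)", segregated)]:
    samples = []
    for message in product([0, 1], repeat=K):          # enumerate all 2^K messages
        out = encode(message, rows)
        samples.append((out[:K], out[K:]))             # bipartition the output
    mi = mutual_information(samples)
    print(f"{name}: I(left half; right half) = {mi:.2f} bits")

# Expected: the parity code shares the full 8.00 bits across the cut, while the
# segregated system shares 0.00 bits - lots of "integration" without anything we
# would intuitively call consciousness.
```
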
Tononi then posted a big response to my critique, and his response was basically: well, you’re just relying on intuition; you’re just saying these systems are not conscious because your intuition says they aren’t – but that’s parochial – why should you expect a theory of consciousness to accord with your intuition? And he then just went ahead and said: yes, the error-correcting code is conscious; yes, the giant grid of XOR gates is conscious – and if they have a thousand times larger value of Phi than a brain, then they are a thousand times more conscious than a human is. So the way I described it was that he didn’t just bite the bullet, he devoured a bullet sandwich with mustard. Which was not what I was expecting. But now, the critique that I’m claiming ‘any scientific theory has to accord with intuition’ – I think that is completely mistaken; I think that’s really a mischaracterization of what I think.
I mean, I’ll be the very first to tell you that science has overturned common-sense intuition over and over and over. For example, temperature feels like an intrinsic quality of a material; it doesn’t feel like it has anything to do with motion, with atoms jiggling around at a certain speed – but we now know that it does. But when scientists first arrived at that modern conception of temperature in the eighteen hundreds, what was essential was that the new criterion at least agreed with the old criterion that fire is hotter than ice – so at least in the cases where we knew what we meant by hot or cold, the new definition agreed with the old definition. And then the new definition went further, to tell us many counterintuitive things that we didn’t know before – but at least it reproduced the way in which we were using the words previously.
Even when Copernicus and Galileo discovered that the earth is orbiting the Sun, the new theory was able to account for our observation that we were not flying off the earth – it said that’s exactly what you would expect to have happened even in the ?Anakin?, because of these new principles of inertia and so on.
But if a theory of consciousness says that this giant blank wall, or this grid, is highly conscious just sitting there doing nothing – whereas even a simulated person, or an AI that passes the Turing test, would not be conscious if it happens to be organized in such a way that it has a low value of Phi – then I say, okay, the burden is on you to prove to me that this Phi notion you have defined has anything whatsoever to do with what I was calling consciousness. You haven’t even shown me any cases where they agree with each other, from which I should extrapolate to the hard cases – the ones where I lack an intuition, like at what point is an embryo conscious, or when is an AI conscious. It’s like the theory seems to have gotten wrong the only things that it could possibly have gotten right, and so at that point I think there is nothing to compel a skeptic to say that this particular quantity Phi has anything to do with consciousness.

Cognitive Biases & In-Group Convergences with Joscha Bach

True & false vs right & wrong – people converge their views to a set of rights and wrongs relative to in-group biases in their peer group.
As a survival mechanism, convergence in groups is sometimes healthier than being right – so one should sometimes optimize for convergence even at the cost of getting things wrong – and humans probably have an evolutionary propensity to favor convergence over truth.
However, optimizing for convergence may result in the group mind being more stupid than the smartest people in the group.

Joscha highlights the controversy over James Damore being fired from Google for circulating a memo arguing that biological differences between men and women affect their abilities as engineers – where Damore’s arguments may be correct – but regardless of the facts about how biological differences affect differences in ability between men and women, Google fired him because they thought that supporting these arguments would make for a worse social environment.

This sort of thing leads to an interesting difference in discourse, where:
* ‘nerds’ tend to focus on ‘content‘ – on imparting ideas and facts that everyone can judge autonomously to form their own opinions – in the view that in order to craft the best solutions we need to have the best facts
* for most people the purpose of communication is ‘coordination‘ between individuals and groups (society, nations etc.) – where the value of a ‘fact’ is its effect on the coordination between people

So is Google’s response to the memo controversy about getting the facts right, or about how Google at this point should be organised?

What’s also really interesting is that different types of people read this ‘memo’ very differently – making it very difficult to form agreement about its content. How can one agree on what’s valuable about communication – whether it’s more about imparting ideas and facts, or more about coordination?

More recently there has been a lot of talk about #FakeNews – where it’s very difficult to get people to agree to things that are not in their own interests – including, as Joscha points out, the idea that truth matters.

Joscha Bach, Ph.D. is an AI researcher who worked and published about cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany, and has built computational models of motivated decision making, perception, categorization, and concept-formation. He is especially interested in the philosophy of AI and in the augmentation of the human mind.

Discussion points:
– In-group convergence: thinking in true & false vs right & wrong
– The group mind may be more stupid than the smartest individuals in the group

Physicalism & Materialism – John Wilkins

Materialism was a pre-Socratic view that for something to be real it has to be matter – physical stuff made of atoms (which at the time were considered hard like billiard balls – fundamental parts of reality). The reason the term physicalism is used these days is that it can describe things that aren’t matter – like forces – or that aren’t observable matter – like dark matter, or energy, or fields, or spacetime. Physicalism is the idea that all that exists can be described in the language of some ‘ideal’ physics – we may never know what this ideal physics is, though people think that it is something close to our current physics (as we can make very accurate predictions with our current physics).

If magic, telepathy or angels were real, there would be a physics that could describe them – they’d have patterns and properties that would be describable and explainable.  A physicist would likely think that even the mind operates according to physical rules.  Being a physicalist according to John means you think everything is governed by rules, physical rules – and that there is an ideal language that can be used to describe all this.

Note John is also a deontologist.  Perhaps there should exist an ideal language that can fully describe ethics – does this mean that ideally there is no need for utilitarianism?  I’ll leave that question for another post.

Interview with John Wilkins on Materialism & Physicalism.

Here are some blog posts about physicalism by John Wilkins:

Is physicalism an impoverished metaphysics?

Every so often, we read about some philosopher or other form of public intellectual who makes the claim that a physicalist ontology – a world view in which only things that can be described in terms of physics are said to exist – is impoverished. That is, there are things whereof science cannot know, &c. A recent example is that made by Thomas Nagel [nicely eviscerated here by the physicist Sean Carroll], whose fame in philosophy rests on an influential 1974 paper arguing that there is something it is like to be a bat that no amount of physics, physiology or other objective science could account for.

Recently, Nagel has argued that the evolutionary view called (historically misleadingly) neo-Darwinism is “almost certainly” false. One of the reasons is that “materialism” (which Nagel should know is an antiquated world view replaced by physicalism as defined above; there are many non-material things in physics, not least fields of various kinds) does not permit a full account of consciousness; the subjective facts of being a particular individual organism. Another is that the chance that life would emerge from a lifeless universe is staggeringly unlikely. How this is calculated is somewhat mysterious, given that at best we only have (dare I say it?) subjective estimates anyway, but there it is.

But Nagel is not alone. Various nonreligious (apparently) thinkers have made similar assertions, although some, like Frank Jackson, who proposed the Knowledge Argument, have since backed down. What is it that physicalism must account for that these disputants and objectors say it cannot?

It almost entirely consists of consciousness, intentions, intelligence or some similar mental property which is entirely inexplicable by “reductionist” physicalism. [Reductionism is a term of abuse that means – so far as I can tell – solely that the person who makes such an accusation does not like the thing or persons being accused.] And that raises our question: is physicalism lacking something?

I bet you are dying to know more… you’ll just have to follow the link…
See more at Evolving Thoughts>>

Is Physicalism Coherent?

In my last post I argued that physicalism cannot be rejected simply because people assert there are nonphysical objects which are beyond specification. Some are, however, specifiable, and one commentator has identified the obvious ones: abstract objects like the rules of chess or numbers. I have dealt with these before in my “Pizza reductionism” post, which I invite you to go read.

Done? OK, then; let us proceed.

It is often asserted that there are obviously things that are not physical, such as ideas, numbers, concepts, etc., quite apart from qualia. I once sat with a distinguished philosopher, whom I respect greatly and so shall not name, when he asserted that we can construct natural classifications because we can deal first with the natural numbers. I asked him “In what sense are numbers natural objects?”, meaning, why should we think numbers are entities in the natural world. He admitted that the question had not occurred to him (I doubt that – he is rather smart), but that it was simply an axiom of his philosophy. I do not think such abstract objects are natural.

This applies to anything that is “informational”, including all semantic entities like meanings, symbols, lexical objects, and so on. They only “exist” as functional modalities in our thoughts and language. I have also argued this before: information does not “exist”; it is a function of how we process signals. Mathematics is not a domain, it is a language, and the reason it works is because the bits that seriously do not work are not explored far[*] – not all of it has to work in a physical or natural sense, but much of it has to, or else it becomes a simple game that we would not play so much.

So the question of the incoherence of physicalism is based on the assumption (which runs contrary to physicalism, and is thus question begging) that abstract objects are natural things. I don’t believe they are, and I certainly do not think that a thought, or concept, for example, which can be had by many minds and is therefore supposed to be located in none of them (and thus transcendental), really is nonphysical. That is another case of nouning language. The thought “that is red” exists, for a physicalist, in all the heads that meet the functional social criteria for ascriptions of red. It exists nowhere else – it just is all those cognitive and social behaviours in biological heads…

Yes, I know, it’s a real page turner…
See more at Evolving Thoughts>>

In philosophy, physicalism is the ontological thesis that “everything is physical”, that there is “nothing over and above” the physical, or that everything supervenes on the physical. Physicalism is a form of ontological monism—a “one substance” view of the nature of reality as opposed to a “two-substance” (dualism) or “many-substance” (pluralism) view. Both the definition of physical and the meaning of physicalism have been debated. Physicalism is closely related to materialism. Physicalism grew out of materialism with the success of the physical sciences in explaining observed phenomena. The terms are often used interchangeably, although they are sometimes distinguished, for example on the basis of physics describing more than just matter (including energy and physical law). Common arguments against physicalism include both the philosophical zombie argument and the multiple observers argument, that the existence of a physical being may imply zero or more distinct conscious entities.

“When I lost my belief in religion I had to decide what I needed to accept as a bare minimum. I decided that I needed to believe in the physical world. I never found the slightest reason to accept the existence of anything else. To this day I am a physicalist only because I never found the need to be anything else. The principle of parsimony suggests that one should not believe in more than one needs to. Even if it does make you feel comfortable.”

 

Let’s get physicalism!

See John Wilkins’ blog ‘Evolving Thoughts’

#philsci #philosophy #science #physics

Science, Mindfulness & the Urgency of Reducing Suffering – Christof Koch

In this interview with Christof Koch, he shares some deeply felt ideas about the urgency of reducing suffering (with some caveats), and his experience with mindfulness – explaining what it was like to visit the Dalai Lama for a week – as well as a heartfelt account of his family dog ‘Nosey’ dying in his arms, and how that moved him to become a vegetarian. He also discusses the bias of human exceptionalism, the horrors of factory farming of non-human animals, and a consequentialist view on animal testing.
Christof Koch is an American neuroscientist best known for his work on the neural bases of consciousness.

Christof Koch is the President and Chief Scientific Officer of the Allen Institute for Brain Science in Seattle. From 1986 until 2013, he was a professor at the California Institute of Technology. http://www.klab.caltech.edu/koch/

Towards the Abolition of Suffering Through Science

An online panel focusing on reducing suffering & paradise engineering through the lens of science.

Panelists: Andrés Gómez Emilsson, David Pearce, Brian Tomasik and Mike Johnson

Note: consider skipping to 10:19 to bypass some audio problems in the beginning!


Topics

Andrés Gómez Emilsson: Qualia computing (how to use consciousness for information processing, and why that has ethical implications)

  • How do we know consciousness is causally efficacious? Because we are conscious and evolution can only recruit systems/properties when they do something (and they do it better than the available alternatives).
  • What is consciousness’ purpose in animals?  (Information processing).
  • What is consciousness’ comparative advantage?  (Phenomenal binding).
  • Why does this matter for suffering reduction? Suffering has functional properties that play a role in the inclusive fitness of organisms. If we figure out exactly what role they play (by reverse-engineering the computational properties of consciousness), we can substitute them by equally (or better) functioning non-conscious or positive hedonic-tone analogues.
  • What is the focus of Qualia Computing? (it focuses on basic fundamental questions and simple experimental paradigms to get at them (e.g. computational properties of visual qualia via psychedelic psychophysics)).

Brian Tomasik:

  • Space colonization “Colonization of space seems likely to increase suffering by creating (literally) astronomically more minds than exist on Earth, so we should push for policies that would make a colonization wave more humane, such as not propagating wild-animal suffering to other planets or in virtual worlds.”
  • AGI safety “It looks likely that artificial general intelligence (AGI) will be developed in the coming decades or centuries, and its initial conditions and control structures may make an enormous impact to the dynamics, values, and character of life in the cosmos.”,
  • Animals and insects “Because most wild animals die, often painfully, shortly after birth, it’s plausible that suffering dominates happiness in nature. This is especially plausible if we extend moral considerations to smaller creatures like the ~10^19 insects on Earth, whose collective neural mass outweighs that of humanity by several orders of magnitude.”

Mike Johnson:

  • If we successfully “reverse-engineer” the patterns for pain and pleasure, what does ‘responsible disclosure’ look like? Potential benefits and potential for abuse both seem significant.
  • If we agree that valence is a pattern in a dataset, what’s a good approach to defining the dataset, and what’s a good heuristic for finding the pattern?
  • What order of magnitude is the theoretical potential of mood enhancement? E.g., 2x vs 10x vs 10^10x
  • What are your expectations of the distribution of suffering in the world? What proportion happens in nature vs within the boundaries of civilization? What are counter-intuitive sources of suffering? Do we know about ~90% of suffering on the earth, or ~.001%?
  • Valence Research, The Mystery of Pain & Pleasure.
  • Why is it such an exciting time round about now to be doing valence research?  Are we at a sweet spot in history in this regard?  What is hindering valence research? (examples of muddled thinking, cultural barriers etc.?)
  • How do we use the available science to improve the QALY? GiveDirectly has used change in cortisol levels to measure effectiveness, and the EU (what’s EU stand for?) evidently does something similar involving cattle. It seems like a lot of the pieces for a more biologically-grounded QALY – and maybe a SQALY (Species and Quality-Adjusted Life-Year) – are available; someone just needs to put them together. I suspect this is one of the lowest-hanging, highest-leverage research fruits.

David Pearce: The ultimate scope of our moral responsibilities. Assume for a moment that our main or overriding goal should be to minimise and ideally abolish involuntary suffering. I typically assume that (a) only biological minds suffer and (b) we are probably alone within our cosmological horizon. If so, then our responsibility is “only” to phase out the biology of involuntary suffering here on Earth and make sure it doesn’t spread or propagate outside our solar system. But Brian, for instance, has quite a different metaphysics of mind, most famously that digital characters in video games can suffer (now only a little – but in future perhaps a lot). The ramifications here for abolitionist bioethics are far-reaching.

 

Other:
– Valence research, Qualia computing (how to use consciousness for information processing, and why that has ethical implications),  animal suffering, insect suffering, developing an ethical Nozick’s Experience Machine, long term paradise engineering, complexity and valence
– Effective Altruism/Cause prioritization and suffering reduction – People’s practical recommendations for the best projects that suffering reducers can work on (including where to donate, what research topics to prioritize, what messages to spread). – So cause prioritization applied directly to the abolition of suffering?
– what are the best projects people can work on to reduce suffering? and what to work on first? (including where to donate, what research topics to prioritize, what messages to spread)
– If we successfully “reverse-engineer” the patterns for pain and pleasure, what does ‘responsible disclosure’ look like? Potential benefits and potential for abuse both seem significant
– If we agree that valence is a pattern in a dataset, what’s a good approach to defining the dataset, and what’s a good heuristic for finding the pattern?
– What order of magnitude is the theoretical potential of mood enhancement? E.g., 2x vs 10x vs 10^10x

Panelists

David Pearce: http://hedweb.com/
Mike Johnson: http://opentheory.net/
Andrés Gómez Emilsson: http://qualiacomputing.com/
Brian Tomasik: http://reducing-suffering.org/

 

#hedweb ‪#EffectiveAltruism ‪#HedonisticImperative ‪#AbolitionistProject

The event was hosted on the 10th of August 2015, Venue: The Internet

Towards the Abolition of Suffering Through Science was hosted by Adam Ford for Science, Technology and the Future.


The End of Aging

Aging is a technical problem with a technical solution – finding the solution requires clear thinking and focused effort. Once solving aging becomes demonstrably feasible, attitudes will likely shift regarding its desirability. There is huge potential, for individuals and for society, in reducing suffering through the use of rejuvenation therapy to achieve new heights of physical well-being. I also discuss the looming economic implications of large percentages of illness among aging populations – and put forward that focusing on solving the fundamental problems of aging will reduce the incidence of debilitating diseases of aging – which will in turn reduce the economic burden of illness. This mini-documentary discusses the implications of actually solving aging, as well as some misconceptions about aging.

‘The End of Aging’ won first prize in the international Longevity Film Competition[1] in 2018.


The above video is the latest version with a few updates & kinks ironed out.

‘The End of Aging’ was Adam Ford’s submission for the Longevity Film Competition – all the contestants did a great job. Big thanks to the organisers of the competition; it inspires people to produce videos to help spread awareness and understanding about the importance of ending aging.

It’s important to see that health in old age is desirable at population levels. Rejuvenation medicine – repairing the body’s ability to cope with stressors (or the practical reversal of the aging process) – will end up being cheaper than traditional medicine based on the general indefinite postponement of ill-health at population levels (especially in the long run, as rejuvenation therapy becomes more efficient).

According to the World Health Organisation:

  1. Between 2015 and 2050, the proportion of the world’s population over 60 years will nearly double from 12% to 22%.
  2. By 2020, the number of people aged 60 years and older will outnumber children younger than 5 years.
  3. In 2050, 80% of older people will be living in low- and middle-income countries.
  4. The pace of population ageing is much faster than in the past.
  5. All countries face major challenges to ensure that their health and social systems are ready to make the most of this demographic shift.

 

Happy Longevity Day 2018! 😀

[1] The Longevity Film Competition is an initiative by the Healthy Life Extension Society, the SENS Research Foundation, and the International Longevity Alliance. The promoters of the competition invited filmmakers everywhere to produce short films advocating for healthy life extension, with a focus on dispelling four usual misconceptions and concerns around the concept of life extension: the false dichotomy between aging and age-related diseases, the Tithonus error, the appeal to nature fallacy, and the fear of inequality of access to rejuvenation biotechnologies.

Michio Kaku on the Holy Grail of Nanotechnology

Michio Kaku on Nanotechnology – Michio is the author of many best sellers, recently the Future of the Mind!

The Holy Grail of Nanotechnology

Merging with machines is on the horizon and Nanotechnology will be key to achieving this. The ‘Holy Grail of Nanotechnology’ is the replicator: A microscopic robot that rearranges molecules into desired structures. At the moment, molecular assemblers exist in nature in us, as cells and ribosomes.

Sticky Fingers problem

How might nanorobots/replicators look and behave?
Because of the ‘Sticky/Fat Fingers problem’, in the short term we won’t have nanobots with agile clippers or blowtorches (like what we might see in a sci-fi movie).

The 4th Wave of High Technology

Humanity has seen an acceleration in history of technological progress from the steam engine and industrial revolution to the electrical age, the space program and high technology – what is the 4th wave that will dominate the rest of the 21st century?
Nanotechnology (molecular physics), Biotechnology, and Artificial Intelligence (reducing the circuitry of the brain down to neurons) – “these three molecular technologies will propel us into the future”!

 

Michio Kaku – Bio

Michio Kaku (born January 24, 1947) is an American theoretical physicist, the Henry Semat Professor of Theoretical Physics at the City College of New York, a futurist, and a communicator and popularizer of science. He has written several books about physics and related topics, has made frequent appearances on radio, television, and film, and writes extensive online blogs and articles. He has written three New York Times Best Sellers: Physics of the Impossible (2008), Physics of the Future (2011), and The Future of the Mind (2014).

Kaku is the author of various popular science books:
– Beyond Einstein: The Cosmic Quest for the Theory of the Universe (with Jennifer Thompson) (1987)
– Hyperspace: A Scientific Odyssey through Parallel Universes, Time Warps, and the Tenth Dimension (1994)
– Visions: How Science Will Revolutionize the 21st Century (1998)
– Einstein’s Cosmos: How Albert Einstein’s Vision Transformed Our Understanding of Space and Time (2004)
– Parallel Worlds: A Journey through Creation, Higher Dimensions, and the Future of the Cosmos (2004)
– Physics of the Impossible: A Scientific Exploration into the World of Phasers, Force Fields, Teleportation, and Time Travel (2008)
– Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 (2011)
– The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind (2014)

Subscribe to the YouTube Channel: Science, Technology & the Future

Aubrey de Grey – Towards the Future of Regenerative Medicine

Why is aging research important? Biological aging causes suffering; however, in recent times there has been surprising progress in stem cell research and in regenerative medicine that will likely disrupt the way we think about aging and, in the longer term, substantially mitigate some of the suffering involved in growing old.
Aubrey de Grey is the Chief Science Officer of SENS Research Foundation – an organisation focused on going beyond ageing and leading the journey towards the future of regenerative medicine!
What will it take to get there?
 


You might wonder: why pursue regenerative medicine?
Historically, doctors have been racing against time to find cures for specific illnesses, winning temporary victories by tackling diseases one by one – solve one disease and another urgency beckons. Once your body becomes frail, if you survive one major illness you may not be so lucky with the next one – the older you get, the less capable your body becomes of staving off new illnesses. You can imagine a long line of other ailments fading beyond view into the distance, and eventually one of them will do you in. If we are to achieve radical healthy longevity, we need to strike at the fundamental technical problems of why we get frail and more disease-prone as we get older. Every technical problem has a technical solution – regenerative medicine is a class of solutions that seek to keep turning the ‘biological clock’ back rather than achieve short-term palliatives.

The damage repair methodology has gained in popularity over the last two decades, though it’s still not popular enough to attract huge amounts of funding – what might tip the scales of advocacy in damage-repair’s favor?
A clear existence proof such as achieving…

Robust Mouse Rejuvenation

In this interview, Aubrey de Grey expresses the most optimism I have heard from him about the near-term achievement of Robust Mouse Rejuvenation. Previously it has been ’10 years away subject to adequate funding’ (which was not realised) – now Aubrey predicts it might happen within only 5-6 years (subject to funding, of course). So, what is Robust Mouse Rejuvenation – and why should we care?

For those who have seen Aubrey speak on this, he used to say RMR within 10 years (subject to funding)

Specifically, the goal of RMR is this: make normal, healthy two-year-old mice (expected to live one more year) live three further years.

  • What’s the ideal type of mouse to test on and why?  The ideal mouse to trial on is one that doesn’t naturally have a certain kind of congenital disease (such mice might on average only live 1.5 or 2 years) – because increasing their lifespan might only be a sign that you have solved their particular congenital disease.  The ideal type of mouse is one which lives to 3 years on average and could die of various things.
  • How many extra years is significant? Consistently increasing mouse lifespan by an extra two years on top of their normal three-year lifespans – essentially tripling their remaining lifespan.
  • When, or at what stage of the mice’s life, to begin the treatment? Don’t start treating the mice until they are already 2 years old – at a time when they would normally be two-thirds of the way through their life (at or past middle age) and would have one more year to live.

Why not start treating the mice earlier?  The goal is to produce sufficiently dramatic results in a laboratory to convince the mainstream gerontology community, such that they would willingly and publicly endorse the idea that it is not impossible, but indeed only a matter of time, before rejuvenation therapy will work in humans – that is, to get out there on talk shows and in front of cameras and say all this.

Arguably, the mainstream gerontology community are generally a bit conservative – they have vested interests in being successful in publishing papers, they compete for grants, they have worries around peer review, they want tenure, and they have reputations to uphold. Gerontologists hold the keys to public trust – they are considered to be the authorities on aging.
When gerontologists are convinced and let the world know about it, a lot of other people in the scientific community and in the general community will also become convinced. Once that happens, here’s what’s likely to happen next: longevity through rejuvenation medicine will become a big issue, and there will be domino effects – there will be a war on aging, experts will appear on Oprah Winfrey, and politicians will have to include the war on aging in their political manifestos if they want to get elected.

Yoda - the oldest mouse ever to have lived?
Yoda, a cute dwarf mouse, was named the oldest mouse in 2004 at age 4; he lived with the much larger Princess Leia in ‘a pathogen-free rest home for geriatric mice’ belonging to Dr. Richard Miller, professor of pathology in the Geriatrics Center of the Medical School. “Yoda is only the second mouse I know to have made it to his fourth birthday without the rigors of a severe calorie-restricted diet,” Miller says. “He’s the oldest mouse we’ve seen in 14 years of research on aged mice at U-M. The previous record-holder in our colony died nine days short of his 4th birthday; 100-year-old people are much more common than 4-year-old mice.” (ref)

What about Auto-Immune Diseases?

Auto-immune diseases (considered incurable by some) get worse with aging for the same reason we lose the general ability to fight off infections and attack cancer. Essentially the immune system loses its precision. It has two arms: the innate system and the adaptive system. The adaptive side works by having polyclonality – a very wide diversity of cells with different rearrangements of parts of the genome that confer specificity of the immune cell to a particular target (which it may or may not encounter at some time in the future). This polyclonality diminishes over life, such that the cells which are targeted towards a given problem are on average less precisely adapted to it – so the immune system takes longer to do its job, or doesn’t do it effectively. With autoimmune disease, the immune system loses its ability to distinguish between things that are foreign and things that are part of the body. So this could be powerfully addressed by the same measures taken to rejuvenate the immune system generally – regenerating the thymus and eliminating the senescent cells that are accumulating in the blood.

Big Bottlenecks

See Aubrey discuss this at timepoint: 38:50
Bottlenecks: which bottlenecks does Aubrey believe need the most attention from the community of people who already believe aging is a problem that needs to be solved?

  1. The first thing: Funding. The shortage of funding is still the biggest bottleneck.
  2. The second thing: The need for policy makers to get on board with the ideas and understand what is coming – so it’s not only about developing the therapies as quickly as possible; it’s also important that once they are developed, the therapies get disseminated as quickly as possible to avoid complete chaos.

It’s very urgent to have proper discussions about this – anticipating the anticipation; getting ready for the public anticipating these therapies, instead of thinking that it’s all science fiction and is never going to happen.

 

Effective Advocacy

See Aubrey discuss this at timepoint: 42:47
Advocacy: it’s a big ask to get people from extreme opposition to supporting regenerative medicine. Nudging people a bit sideways is a lot easier – that is, getting them from complete opposition to less opposition, or getting people who are undecided to be in favor of it.

Here are 2 of the main aspects of advocacy:

  1. feasibility / importance – emphasize progress and the embrace by the scientific community (see the ‘Hallmarks of Aging’ paper – the single most highly cited paper on the biology of aging this decade) – establishing the legitimacy of the damage-repair approach – it’s not just a crazy harebrained idea…
  2. desirability – address concerns (and bad arguments: on overpopulation – ‘oh don’t worry, we will emigrate into space’ – the people who are concerned about this problem aren’t the ones who would like to go to space) – and focus more on the things that generalize to desirable outcomes. Regenerative medicine will have side effects, like a longer lifespan, but people will also be healthier at any given age compared to what they would be if they hadn’t had regenerative therapy – nobody wants Alzheimer’s or heart disease – and if that is the outcome of regenerative medicine then it’s easier to sell.

We need a sense of proportion about possible future problems – will they generally be more serious than the ones we face today?
Talking about uploading, substrate independence, etc., actively alienates the public – it’s better to create a foundation of credibility in the conversation before you decide to persuade anyone of anything.  If we are going to get from here to the long-term future we need advocacy now – the short term matters as well.

More on Advocacy here:

And here

Other Stuff

This interview covers a fair bit of ground, so here are some other points covered:

– Updates & progress at SENS
– Highlights of promising progress in regenerative medicine in general
– Recent funding successes, what can be achieved with this?
– Discussion on getting the message across
– desirability & feasibility of rejuvenation therapy
– What could be the future of regenerative medicine?
– Given progress so far, what can people alive today look forward to?
– Multi-factorial diseases – Fixing amyloid plaque buildup alone won’t cure Alzheimer’s – getting rid of amyloid plaque alone has only produced mild cognitive benefits in Alzheimer’s patients. There is still the unaddressed issue of tangles… If you only get rid of one component in a multi-component problem then you don’t get to see much improvement of pathology – in just the same way, one shouldn’t expect to see much of an overall increase in health & longevity if you only fix 5 of the 7 things that need fixing (i.e. 5 of the 7 strands of SENS)
– mothballing the anti-telomerase approach to fighting cancer in favor of cancer immunotherapy (for the time being), as its side effects need to be compensated for…
– Cancer immunotherapy – stimulating the body’s natural ability to attack cancer with its immune system – 2 approaches: CAR-T (Chimeric Antigen Receptor T cells) and checkpoint-inhibiting drugs… then there is training the immune system to identify neoantigens (stuff that all cancers produce)

Biography

Chief Science Officer, SENS Research Foundation, Mountain View, CA – http://sens.org

AgeX Therapeutics – http://www.agexinc.com/

Dr. Aubrey de Grey is a biomedical gerontologist based in Mountain View, California, USA, and is the Chief Science Officer of SENS Research Foundation, a California-based 501(c)(3) biomedical research charity that performs and funds laboratory research dedicated to combating the aging process. He is also VP of New Technology Discovery at AgeX Therapeutics, a biotechnology startup developing new therapies in the field of biomedical gerontology. In addition, he is Editor-in-Chief of Rejuvenation Research, the world’s highest-impact peer-reviewed journal focused on intervention in aging. He received his BA in computer science and Ph.D. in biology from the University of Cambridge. His research interests encompass the characterisation of all the types of self-inflicted cellular and molecular damage that constitute mammalian aging and the design of interventions to repair and/or obviate that damage. Dr. de Grey is a Fellow of both the Gerontological Society of America and the American Aging Association, and sits on the editorial and scientific advisory boards of numerous journals and organisations. He is a highly sought-after speaker who gives 40-50 invited talks per year at scientific conferences, universities, companies in areas ranging from pharma to life insurance, and to the public.

 

Many thanks for reading/watching!

Consider supporting SciFuture by:

a) Subscribing to the SciFuture YouTube channel: http://youtube.com/subscription_cente…

b) Donating – Bitcoin: 1BxusYmpynJsH4i8681aBuw9ZTxbKoUi22 – Ethereum: 0xd46a6e88c4fe179d04464caf42626d0c9cab1c6b – Patreon: https://www.patreon.com/scifuture

c) Sharing the media SciFuture creates: http://scifuture.org

Kind regards, Adam Ford – Science, Technology & the Future

Surviving the Zombie Cell Apocalypse – Oisín Biotech’s Stephen Hilbert

Oisín Biotechnologies’ ground-breaking research and technology is demonstrating that the solution to mitigating the effects of age-related diseases is to address the damage created by the aging process itself. We have recently successfully launched our first subsidiary, Oisin Oncology, focusing on combating multiple cancers.

Interview with Stephen Hilbert

We cover the exciting scientific progress at Oisín: targeting senescent cells (dubbed ‘zombie cells’) to help them die properly, rejuvenation therapy vs traditional approaches to combating disease, Oisín’s potential for helping astronauts survive high levels of radiation in space, funding for the research and therapy/drug development, and specifically Stephen’s background in corporate development in helping raise capital for Oisín and its research.


Are we close to achieving Robust Mouse Rejuvenation?

According to Aubrey de Grey, we are about 5-6 years away from robust mouse rejuvenation (RMR), subject to the kind of funding SENS has received this year and the previous year (2017-2018). There has been progress in developing certain therapies.

Specifically, the goal of RMR is this:

  • Make normal, healthy two-year-old mice (expected to live one more year) live three further years.
    • The type of mice: The ideal mouse to trial on is one that doesn’t naturally have a certain kind of congenital disease (such mice might on average only live 1.5 or 2 years) – because increasing their lifespan might only be a sign that you have solved their particular congenital disease.
    • Number of extra years: Consistently increasing mouse lifespan by an extra two years on top of their normal three-year lifespans – essentially tripling their remaining lifespan.
    • When to begin the treatment: Don’t start treating the mice until they are already 2 years old – at a time when they would normally be two-thirds of the way through their life (at or past middle age) and would have one more year to live.

Why not start treating the mice earlier?  The goal is to produce sufficiently dramatic results in a laboratory to convince the mainstream gerontology community, such that they would willingly and publicly endorse the idea that it is not impossible, but indeed only a matter of time, before rejuvenation therapy will work in humans – that is, to get out there on talk shows and in front of cameras and say all this.

The mainstream gerontology community are generally a bit conservative – they have vested interests in being successful in publishing papers, they compete for grants, they have worries around peer review, they want tenure, and they have reputations to uphold. Gerontologists hold the keys to public trust – they are considered to be the authorities on aging.

 

For the lowdown on progress towards Robust Mouse Rejuvenation see partway through this interview with Aubrey de Grey!

Preliminary results from study showing normalized mouse survival at 140 weeks

Stephen heads up corporate development for Oisín Biotechnologies. He has served as a business advisor to Oisín since its inception and has served on several biotechnology company advisory boards, specializing in business strategy and capital formation. Prior to Oisín, his career spanned over 15 years in the banking industry, where he served as a trusted advisor to accredited investors around the globe. Most recently he headed up a specialty alternative investment for a company in San Diego, focusing on tax and insurance strategies for family offices and investment advisors. Stephen is the founder of several ventures in the areas of real estate, small manufacturing of novelty gifts, and strategic consulting. He serves on the Overlake Hospital’s Pulse Board, assists with the Children’s Hospital Guild, and is the incoming Chairman at the Columbia Tower Club, a members’ club in Seattle.
LinkedIn Profile

Head of Corporate Strategy/Development (Pre-Clinical) – Oisin Biotechnologies and OncoSenX
FightAging - Oisin Biotechnologies Produces Impressive Mouse Life Span Data from an Ongoing Study of Senescent Cell Clearance
FightAging reported:
Oisin Biotechnologies is the company working on what is, to my eyes, the best of the best when it comes to the current crop of senolytic technologies, approaches capable of selectively destroying senescent cells in old tissues. Adding senescent cells to young mice has been shown to produce pathologies of aging, and removal of senescent cells can reverse those pathologies, and also extend life span. It is a very robust and reliable approach, with these observations repeated by numerous different groups using numerous different methodologies of senescent cell destruction.

Most of the current senolytic development programs focus on small molecules, peptides, and the like. These are expensive to adjust, and will be tissue specific in ways that are probably challenging and expensive to alter, where such alteration is possible at all. In comparison, Oisin Biotechnologies builds their treatments atop a programmable suicide gene therapy; they can kill cells based on the presence of any arbitrary protein expressed within those cells. Right now the company is focused on p53 and p16, as these are noteworthy markers of cancerous and senescent cells. As further investigation of cellular senescence improves the understanding of senescent biochemistry, Oisin staff could quickly adapt their approach to target any other potential signal of senescence – or of any other type of cell that is best destroyed rather than left alone. Adaptability is a very valuable characteristic.

The Oisin Biotechnologies staff are currently more than six months into a long-term mouse life span study, using cohorts in which the gene therapy is deployed against either p16, p53, or both p16 and p53, plus a control group injected with phosphate buffered saline (PBS). The study commenced more than six months ago with mice that were at the time two years (104 weeks) old. When running a life span study, there is a lot to be said for starting with mice that are already old; it saves a lot of time and effort. The mice were randomly put into one of the four treatment groups, and then dosed once a month. As it turns out, the mice in which both p16 and p53 expressing cells are destroyed are doing very well indeed so far, in comparison to their peers. This is quite impressive data, even given the fact that the trial is nowhere near done yet.
Considering investing in or supporting this research? Get in contact with Oisin here.

Antispeciesism & Compassionate Stewardship – David Pearce

I think our first ethical priority is to stop doing harm, and right now in our factory farms billions of non-human animals are being treated in ways that, if our victims were human, would get the perpetrators locked up for life. The sentience (and, for what it’s worth, the sapience) of a pig compares with that of a pre-linguistic toddler. A chicken perhaps is no more intellectually advanced or sentient than a human infant. But before considering the suffering of free-living animals we need, I think, to consider the suffering we’re causing our fellow creatures.

Essentially it’s a lifestyle choice – do we want to continue to exploit and abuse other sentient beings because we like the taste of their flesh, or do we want to embrace a cruelty-free vegan lifestyle? Some people would focus on treating other sentient beings less inhumanely. I’d say that we really need an ethical revolution in which our focus is: how can we help other sentient beings rather than harm them?

It’s very straightforward indeed to be a vegetarian. Statistically, vegetarians tend to live longer, they record higher IQ scores, and they tend to be slimmer – it’s very easy to be a vegetarian. A strict vegan lifestyle requires considerably more effort. But over the medium to long run I think our focus should be on going vegan.

In the short run I think we should be closing factory farms and slaughterhouses. And given that factory farming and slaughterhouses are the greatest source of severe, chronic, readily avoidable suffering in the world today, any talk of compassionate stewardship of the rest of the living world is fanciful so long as they persist.

Will ethical argument alone persuade us to stop exploiting and killing other non-human beings because we like the taste of their flesh? Possibly not. I think realistically one wants a twin-track strategy that combines animal advocacy with the development of in-vitro meat. But I would strenuously urge anyone watching this program to consider giving up meat and animal products if you are ethically serious.

The final strand of the Abolitionist Project on Earth, however, is free-living animals in nature. And it might seem ecologically illiterate to argue that it is going to be feasible to take care of elephants, zebras, and other free-living animals. After all, suppose there is starvation in winter: if you start feeding a lot of starving herbivores, all this does is lead the next spring to a population explosion, followed by ecological collapse and more suffering than before.

However, what is potentially feasible, if we’re ethically serious, is to micromanage the entire living world. Now this sounds extremely far-fetched and utopian, but I’ll sketch how it is feasible. Later this century and beyond, every cubic meter of the planet is going to be computationally accessible to surveillance, micro-management and control. And if we want to, we can use fertility regulation and immuno-contraception – cross-species fertility control – to regulate population numbers, starting off presumably with higher vertebrates such as elephants. Already now, in the Kruger National Park for example, population numbers are controlled by immuno-contraception in preference to the cruel practice of culling.

So, starting off with higher vertebrates in our wildlife parks, then eventually moving across the phylogenetic tree, it will be possible to micromanage the living world.

And just as, right now, if you were to stumble across a small child drowning in a pond, you would be guilty of complicity in that child’s drowning if you didn’t pull the child out – exactly the same kind of intimacy with the rest of the living world is going to be feasible later this century and beyond.

Now what about obligate carnivores – predators? Surely it’s inevitable that they’re going to continue to prey on herbivores, so one might intuitively suppose that the abolitionist project could never be completed. But even there, if we’re ethically serious, there are workarounds – in-vitro meat, for instance. Big cats, if they are offered in-vitro meat – catnip-flavored in-vitro meat – are not going to be tempted to chase after herbivores.

Alternatively, a little bit of genetic tweaking, and you no longer have an obligate carnivore.

I’m supposing here that we do want to preserve recognizable approximations of today’s so-called charismatic megafauna – many people are extremely unhappy at the idea that lions or tigers or snakes or crocodiles should go extinct. I’m not personally persuaded that the world would be a worse place without crocodiles or snakes, but if we do want to preserve them, it’s possible to genetically tweak them or provide in-vitro meat so that they don’t actually do any harm to sentient beings.

Some species essentialists would respond that a lion that is no longer chasing, asphyxiating and disemboweling zebras is no longer truly a lion. But one might make the same argument about Homo sapiens: that someone who no longer beats his rivals over the head, wages war, or practices infanticide, slavery and all the other ghastly practices of our evolutionary past – or who, for that matter, wears clothes and adopts a more civilized lifestyle – is no longer truly human. To which I can only say: good.

And likewise, if there is a living world in which lions are pacifistic – in which the lion, so to speak, lies down with the lamb – I would say that is much more civilized.

Compassionate Biology

See this excerpt from The Antispeciesist Revolution:
If and when humans stop systematically harming other sentient beings, will our ethical duties to members of other species have been discharged? Not if the same ethical considerations as apply to members of other human races or age-groups apply also to members of other species of equivalent sentience. Thus if famine breaks out in sub-Saharan Africa and young human children are starving, then we recognise we have a duty to send aid; or better still, to take proactive measures to ensure famines do not arise in the first instance, i.e. to provide not just food aid but family planning. So why not assist, say, starving free-living elephants? Until recently, no comparable interventions were feasible for members of other species. The technical challenges were insurmountable. Not least, the absence of cross-species fertility control technologies would have often made bad problems worse. Yet thanks to the exponential growth of computer power, every cubic metre of the planet will shortly be computationally accessible to micro-management, surveillance and control. Harnessed to biotechnology, nanotechnology and robotics, such tools confer unprecedented power over Nature. With unbridled power comes complicity. Ethically speaking, how many of the traditional cruelties of the living world do we wish to perpetuate? Orthodox conservation biologists argue we should not “interfere”: humans can’t “police” Nature. Antispeciesists disagree. Advocates of compassionate biology argue that humans and nonhumans alike should not be parasitised, starved, disembowelled, asphyxiated, or eaten alive.

As always, bioconservatives insist such miseries are “natural”; status quo bias runs deep. “Custom will reconcile people to any atrocity”, observed George Bernard Shaw. Snuff movies in the guise of Nature documentaries are quite popular on YouTube, a counterpoint to the Disneyfied wildlife shows aired on mainstream TV. Moreover even sympathetic critics of compassionate biology might respond that helping free-living members of other species is prohibitively expensive. An adequate welfare safety-net scarcely exists for humans in many parts of the world. So how can we contemplate its extension to nonhumans – even just to large-brained, long-lived vertebrates in our Nature reserves? Provision of comprehensive healthcare for all free-living elephants, for example, might cost between two and three billion dollars annually. Compassionate stewardship of the living world would be technically daunting too, entailing ecosystem management, cross-species fertility control via immunocontraception, veterinary care, emergency famine-relief, GPS tracking and monitoring, and ultimately phasing out or genetically “reprogramming” carnivorous predators. The notional bill could approach the world’s 1.7 trillion-dollar annual arms budget. But irrespective of cost or timescale, if we are to be consistently non-speciesist, then decisions about resource allocation should be based not on species membership, but directly or indirectly on sentience. An elephant, for example, is at least as sentient as a human toddler – and may well be as sentient, if not as sapient, as adult humans. If it is ethically obligatory to help sick or starving children, then it’s ethically obligatory to help sick or starving elephants – not just via crisis interventions but via long-term healthcare support.

A traditional conservation biologist might respond that elephants helped by humans are no longer truly wild. Yet on such a criterion, clothes-wearing humans or beneficiaries of food aid and family planning aren’t “wild” humans either. Why should this matter? “Free-living” and “wild” are conceptually distinct. To assume that the civilising process should be confined to our own species is mere speciesist prejudice. Humans, transhumans and posthumans must choose what forms of sentience we want to preserve and create on Earth and beyond. Humans already massively intervene in Nature, whether through habitat destruction, captive breeding programs for big cats, “rewilding”, etc. So the question is not whether humans should “interfere”, but rather what ethical principles should govern our interventions.

http://www.hedweb.com/transhumanism/antispeciesist.html

Subscribe to the YouTube Channel

Science, Technology & the Future