Posts

Reason – Philosophy of Anti-Aging: Ethics, Research & Advocacy

Reason was interviewed at the Undoing Aging conference in Berlin 2019 by Adam Ford – focusing on philosophy of anti-aging, ethics, research & advocacy. Here is the audio!

Topics include: philosophical reasons to support anti-aging; high-impact research (senolytics, etc.); convincing existence proofs that further research is worth doing; how AI can help, and why human bench-work isn’t being replaced by AI at the moment or in the foreseeable future; suffering mitigation and cause prioritization in Effective Altruism – how the EA movement sees anti-aging and why it should advocate for it; the population-level effects (financial & public health) of an aging population and the ethics of solving aging as a problem… and more.

Reason is the founder and primary blogger at FightAging.org
 

Keith Comito on Undoing Aging

How can solving aging reduce suffering? What are some common objections to the ideas of solving aging? How does anti-aging stack up against other cause areas (like climate change, or curing specific diseases)? How can we better convince people of the virtues of undoing the diseases of old age?

Keith Comito, interviewed by Adam Ford at the Undoing Aging 2019 conference in Berlin, discusses why solving the diseases of old age is a powerful cause. Note: the video of this interview will be available soon.

Keith is a computer programmer and mathematician whose work brings together a variety of disciplines to provoke thought and promote social change. He has created video games, bioinformatics programs, musical applications, and biotechnology projects featured in Forbes and NPR.

In addition to developing high-profile mobile applications such as HBO Now and MLB AtBat, he explores the intersection of technology and biology at the Brooklyn community lab Genspace, where he helped to create games which allow players to direct the motion of microscopic organisms.

Seeing age-related disease as one of the most profound problems facing humanity, he now works to accelerate and democratize longevity research efforts through initiatives such as Lifespan.io.

He earned a B.S. in Mathematics, a B.S. in Computer Science, and an M.S. in Applied Mathematics at Hofstra University, where his work included analysis of the LMNA protein.

Future Day Melbourne 2019

Future Day is nigh – sporting a spectacular line-up of speakers!

Agenda

  • 5.30 – Doors open: meet and greet other attendees
  • 5.45 – Introduction
  • 6.00 – Drew Berry: “The molecular machines that create your flesh and blood” [abstract]
  • 6.45 – Brock Bastian: “Happiness, culture, mental illness, and the future self” [abstract]
  • 7.30 – Lynette Plenderleith: “The future of biodiversity starts now” [abstract]
  • 8.15 – Panel: Drew Berry, Brock Bastian, Lynette Plenderleith

Join the Meetup! Future Day is on the 21st of March – sporting a spectacular line-up of speakers ranging across Futurology, Philosophy, Biomedical Animation & Psychology!

Venue: KPMG Melbourne – 727 Collins St [map link] – Collins Square – Level 36 Room 2

Seating is limited to about 40, though if there is overflow, there will be standing room.

PLEASE have a snack/drink before you come. Apparently we can’t supply food/drink at KPMG, so eat something beforehand – or work up an appetite…
Afterwards we will sojourn at a local pub for some grub and ale.

I’m looking forward to seeing people I have met before, and some new faces as well.

Drew Berry – Biomedical Animator @ The Walter and Eliza Hall Institute of Medical Research
Brock Bastian – Melbourne School of Psychological Sciences, University of Melbourne

Check out the Future Day Facebook Group, and the Twitter account!

Abstracts

The molecular machines that create your flesh and blood

By Drew Berry – Abstract: A profound technological revolution is underway in bio-medical science, accelerating development of new therapies and treatments for the diseases that afflict us and also transforming how we perceive ourselves and the nature of our living bodies. Coupled to the accelerating pace of scientific discovery is an ever-expanding need to explain our new biomedical capabilities to the public and develop appreciation of them – to prepare the public for the tsunami of new knowledge and medicines that will impact patients, our families and community.
Drew Berry will present the latest visualisation experiments in creating cinematic movies and real-time interactive 3D molecular worlds that reveal the current state of the art in scientific discovery, focusing on the molecular engines that convert the food you eat into the chemical energy that powers your cells and tissues. Leveraging the incredible power of game GPU technology, vast molecular landscapes can be generated for 3D 360-degree cinema in museum and science centre dome theatres, interactive exploration in VR, and Augmented Reality education via student mobile phones.

 

Happiness, culture, mental illness, and the future self

By Brock Bastian – Abstract: What is the future of human happiness and wellbeing? We are currently treating mental illness at the level of individuals, yet rates of mental illness are not going down, and in some cases continue to rise. I will present research indicating that we need to start to tackle this problem at the level of culture. The cultural value placed on particular emotional states may play a role in how people respond to their own emotional worlds. Furthermore, I will present evidence that basic cultural differences in how we explain events, predict the future and understand ourselves may also impact the effectiveness of our capacity to deal with emotional events. This suggests that we need to begin to take culture seriously in how we treat mental illness. It also provides some important insights into what kinds of thinking styles we might seek to promote and how we might seek to understand and shape our future selves. This also has implications for how we might find happiness in a world increasingly characterized by residential mobility, weak ties, and digital rather than face-to-face interaction.

 

The future of biodiversity starts now

By Lynette Plenderleith – Abstract: Biodiversity is vital to our food security, our industries, our health and our progress. Yet never before has the future of biodiversity been so under threat, as we modify more land, burn more fossil fuels and transport exotic organisms around the planet. But in the face of catastrophic biodiversity collapse, scientists, community groups and not-for-profits are working to discover new ways to conserve biodiversity, for us and the rest of life on our planet. From techniques as simple as preserving habitat to complex scientific techniques like de-extinction, Lynette will discuss our options for the future to protect biodiversity, how the future of biodiversity could look and why we should start employing conservation techniques now. Our future relies on the conservation of biodiversity, and its future rests in our hands. We have the technology to protect it.

 

Biographies

Dr Drew Berry

Dr Drew Berry is a biomedical animator who creates beautiful, accurate visualisations of the dramatic cellular and molecular action that is going on inside our bodies. He began his career as a cell biologist and is fluent in navigating technical reports, research data and models from scientific journals. As an artist, he works as a translator, transforming abstract and complicated scientific concepts into vivid and meaningful visual journeys. Since 1995 he has been a biomedical animator at the Walter and Eliza Hall Institute of Medical Research, Australia. His animations have been exhibited at venues such as the Guggenheim Museum, MoMA, the Royal Institution of Great Britain and the University of Geneva. In 2010, he received a MacArthur Fellowship “Genius Grant”.

Recognition and awards

• Doctorate of Technology (hc), Linköping University Sweden, 2016
• MacArthur Fellowship, USA 2010
• New York Times “If there is a Steven Spielberg of molecular animation, it is probably Drew Berry” 2010
• The New Yorker “[Drew Berry’s] animations are astonishingly beautiful” 2008
• American Scientist “The admirers of Drew Berry, at the Walter and Eliza Hall Institute in Australia, talk about him the way Cellini talked about Michelangelo” 2009
• Nature Niche Prize, UK 2008
• Emmy “DNA” Windfall Films, UK 2005
• BAFTA “DNA Interactive” RGB Co, UK 2004

Animation http://www.wehi.tv
TED http://www.ted.com/talks/drew_berry_animations_of_unseeable_biology
Architectural projection https://www.youtube.com/watch?v=m9AA5x-qhm8
Björk video https://www.youtube.com/watch?v=Wa1A0pPc-ik
Wikipedia https://en.wikipedia.org/wiki/Drew_Berry

Assoc Prof Brock Bastian

Brock Bastian is a social psychologist whose research focuses on pain, happiness, and morality.

In his search for a new perspective on what makes for the good life, Brock Bastian has studied why promoting happiness may have paradoxical effects; why we need negative and painful experiences in life to build meaning, purpose, resilience, and ultimately greater fulfilment in life; and why behavioural ethics is necessary for understanding how we reason about personal and social issues and resolve conflicts of interest. His first book, The Other Side of Happiness, was published in January 2018.

 

The Other Side of Happiness: Embracing a More Fearless Approach to Living

Our addiction to positivity and the pursuit of pleasure is actually making us miserable. Brock Bastian shows that, without some pain, we have no real way to achieve and appreciate the kind of happiness that is true and transcendent.

Read more about The Other Side of Happiness

Dr. Lynette Plenderleith

Dr. Lynette Plenderleith is a wildlife biologist by training and is now a media science specialist, working mostly in television, with credits including children’s show WAC! World Animal Championships and Gardening Australia. Lynette is Chair and Founder of Frogs Victoria, President of the Victorian branch of Australian Science Communicators and occasional performer of live science-comedy. Lynette has a PhD from Monash University, where she studied the ecology of native Australian frogs, a Master’s degree in the spatial ecology of salamanders from Towson University in the US and a BSc in Natural Sciences from Lancaster University in her homeland, the UK.
Twitter: @lynplen
Website: lynplen.com

 

 

The Future is not a product

It’s more exciting than gadgets with shiny screens and blinking lights.

Future Day is a way of focusing and celebrating the energy that more and more people around the world are directing toward creating a radically better future.

How should Future Day be celebrated? That is for us to decide as the future unfolds!

  • Future Day could be adopted as an official holiday by countries around the world.
  • Children can do Future Day projects at school, exploring their ideas and passions about creating a better future.
  • Future Day costume parties — why not? It makes at least as much sense as dressing up to celebrate Halloween!
  • Businesses giving employees a day off from routine concerns, to think creatively about future projects
  • Special Future Day issues in newspapers, magazines and blogs
  • Use your imagination — that’s what the future is all about!

The Future & You

It’s time to create the future together!

Our aspirations are all too often sidetracked in this age of distraction. Lurking behind every unfolding minute is a random tangent with no real benefit for our future selves – so let’s ritualize our commitment to the future by celebrating it! Future Day is here to fill our attention economies with useful ways to solve the problems of arriving at desirable futures, & avoid being distracted by the usual gauntlet of noise we run every other day. Our future is very important – accelerating scientific & technological progress will change the world even more than it already has. While other days of celebration focus on the past – let’s face the future – an editable history of a time to come – a future that is glorious for everyone.

Videos from Previous Future Day Events / Interviews

Uncovering the Mysteries of Affective Neuroscience – the Importance of Valence Research with Mike Johnson

Valence in overview

Adam: What is emotional valence (as opposed to valence in chemistry)?

Mike: Put simply, emotional valence is how pleasant or unpleasant something is. A somewhat weird fact about our universe is that some conscious experiences do seem to feel better than others.

 

Adam: What makes things feel the way they do? What makes some things feel better than others?

Mike: This sounds like it should be a simple question, but neuroscience just doesn’t know. It knows a lot of random facts about what kinds of experiences, and what kinds of brain activation patterns, feel good, and which feel bad, but it doesn’t have anything close to a general theory here.

And the way affective neuroscience talks about this puzzle sometimes sort of covers this mystery up, without solving it. For instance, we know that certain regions of the brain, like the nucleus accumbens and ventral pallidum, seem to be important for pleasure, so we call them “pleasure centers”. But we don’t know what makes something a pleasure center. We don’t even know how common painkillers like acetaminophen (paracetamol) work! Which is kind of surprising.

In contrast, the hypothesis about valence I put forth in Principia Qualia would explain pleasure centers and acetaminophen and many other things in a unified, simple way.

 

Adam: How does the hypothesis about valence work?

Mike: My core hypothesis is that symmetry in the mathematical representation of an experience corresponds to how pleasant or unpleasant that experience is. I see this as an identity relationship which is ‘True with a capital T’, not merely a correlation.  (Credit also goes to Andres Gomez Emilsson & Randal Koene for helping explore this idea.)
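To make “symmetry in a mathematical representation” concrete, here is a minimal toy sketch (my own construction for illustration – representing a state as a matrix and scoring it this way are assumptions, not anything from Principia Qualia). It scores how symmetric a matrix-valued “state” is, with 1.0 meaning perfectly symmetric:

```python
# Toy sketch (illustrative only, not from Principia Qualia): score how
# "symmetric" a state is, where the state is represented as a matrix A.
# A perfectly symmetric representation (A == A.T) scores 1.0.
import numpy as np

def symmetry_score(A: np.ndarray) -> float:
    """1 minus the relative size of the antisymmetric part of A."""
    sym  = (A + A.T) / 2   # symmetric part of the matrix
    anti = (A - A.T) / 2   # antisymmetric part of the matrix
    total = np.linalg.norm(sym) + np.linalg.norm(anti)
    return float(np.linalg.norm(sym) / total) if total else 1.0

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
print(symmetry_score(A))        # random matrix: middling score
print(symmetry_score(A + A.T))  # symmetrized matrix: score 1.0
```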

What makes this hypothesis interesting is that
(1) On a theoretical level, it could unify all existing valence research, from Berridge’s work on hedonic hotspots, to Friston & Seth’s work on predictive coding, to Schmidhuber’s idea of a compression drive;

(2) It could finally explain how the brain’s so-called “pleasure centers” work– they function to tune the brain toward more symmetrical states!

(3) It implies lots and lots of weird, bold, *testable* hypotheses. For instance, we know that painkillers like acetaminophen, and anti-depressants like SSRIs, actually blunt both negative *and* positive affect, but we’ve never figured out how. Perhaps they do so by introducing a certain type of stochastic noise into acute & long-term activity patterns, respectively, which disrupts both symmetry (pleasure) and anti-symmetry (pain).

 

Adam: What kinds of tests would validate or dis-confirm your hypothesis? How could it be falsified and/or justified by weight of induction?

Mike: So this depends on the details of how activity in the brain generates the mind. But I offer some falsifiable predictions in PQ (Principia Qualia):

  • If we control for degree of consciousness, more pleasant brain states should be more compressible (see the toy sketch below);
  • Direct, low-power stimulation (TMS) in harmonious patterns (e.g. 2hz+4hz+6hz+8hz…160hz) should feel remarkably more pleasant than stimulation with similar-yet-dissonant patterns (2.01hz+3.99hz+6.15hz…).
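Note: as a toy illustration of the first prediction (everything below is my own illustrative construction, not from the interview – zlib’s compressed size is only a crude stand-in for a real compressibility measure), a harmonic sum-of-sines pattern like the TMS example compresses far better than a slightly detuned, dissonant one:

```python
# Toy illustration: if symmetry/harmony tracks valence, harmonic signals
# should compress better than dissonant ones. Uses zlib compressed size
# as a crude compressibility proxy.
import math
import zlib

def signal(freqs, n=10000, rate=1000.0):
    """Sum-of-sines signal, quantized to bytes so zlib can compress it."""
    samples = []
    for i in range(n):
        t = i / rate
        x = sum(math.sin(2 * math.pi * f * t) for f in freqs)
        samples.append(int(128 + 100 * x / len(freqs)))  # map to 0..255
    return bytes(samples)

harmonic  = signal([2, 4, 6, 8])              # integer-ratio pattern
dissonant = signal([2.01, 3.99, 6.15, 8.08])  # slightly detuned version

for name, s in [("harmonic", harmonic), ("dissonant", dissonant)]:
    ratio = len(zlib.compress(s, 9)) / len(s)
    print(f"{name}: compressed to {ratio:.2%} of original size")
```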

Those are some ‘obvious’ ways to test this. But my hypothesis also implies odd things, such as that chronic tinnitus (ringing in the ears) should produce affective blunting (a lessened ability to feel strong valence).

Note: see https://qualiacomputing.com/2017/06/18/quantifying-bliss-talk-summary/ and http://opentheory.net/2018/08/a-future-for-neuroscience/ for a more up-to-date take on this.

 

Adam: Why is valence research important?

Mike Johnson: Put simply, valence research is important because valence is important. David Chalmers famously coined “The Hard Problem of Consciousness”, or why we’re conscious at all, and “The Easy Problem of Consciousness”, or how the brain processes information. I think valence research should be called “The Important Problem of Consciousness”. When you’re in a conscious moment, the most important thing to you is how pleasant or unpleasant it feels.

That’s the philosophical angle. We can also take the moral perspective, and add up all the human and non-human animal suffering in the world. If we knew what suffering was, we could presumably use this knowledge to more effectively reduce it and make the world a kinder place.

We can also take the economic perspective, and add up all the person-years, capacity to contribute, and quality of life lost to Depression and chronic pain. A good theory of valence should allow us to create much better treatments for these things. And probably make some money while doing it.

Finally, a question I’ve been wondering for a while now is whether having a good theory of qualia could help with AI safety and existential risk. I think it probably can, by helping us see and avoid certain failure-modes.

 

Adam: How could understanding valence help make future AIs safer? (How could it help define how an AI should approach making us happy, or serve as a reinforcement mechanism for AI?)

Mike: Last year, I noted a few ways a better understanding of valence could help make future AIs safer on my blog. I’d point out a few notions in particular though:

  • If we understand how to measure valence, we could use this as part of a “sanity check” for AI behavior (sketched after this list). If some proposed action would cause lots of suffering, maybe the AI shouldn’t do it.
  • Understanding consciousness & valence seem important for treating an AI humanely. We don’t want to inadvertently torture AIs – but how would we know?
  • Understanding consciousness & valence seems critically important for “raising the sanity waterline” on metaphysics. Right now, you can ask 10 AGI researchers about what consciousness is, or what has consciousness, or at what level of abstraction to define value, and you’ll get at least 10 different answers. This is absolutely a recipe for trouble. But I think this is an avoidable mess if we get serious about understanding this stuff.
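A minimal sketch of the first point, the “sanity check” idea (purely hypothetical – estimate_valence_impact is an assumed placeholder for a valence model that does not yet exist):

```python
# Hypothetical sketch only: no real valence model exists today.
def sanity_check(action, estimate_valence_impact, threshold=-10.0):
    """Veto an action whose predicted valence impact is too negative."""
    impact = estimate_valence_impact(action)  # placeholder valence model
    if impact < threshold:
        raise ValueError(f"Action vetoed: predicted valence impact {impact}")
    return action

# Usage with a stub model that scores actions via a hand-written table:
stub_model = {"make_coffee": 0.1, "melt_the_icecaps": -1e9}.get
safe = sanity_check("make_coffee", stub_model)   # passes the check
# sanity_check("melt_the_icecaps", stub_model)   # raises ValueError
```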

 

Adam: Why the information theoretical approach?

Mike: The way I would put it, there are two kinds of knowledge about valence: (1) how pain & pleasure work in the human brain, and (2) universal principles which apply to all conscious systems, whether they’re humans, dogs, dinosaurs, aliens, or conscious AIs.

It’s counter-intuitive, but I think these more general principles might be a lot easier to figure out than the human-specific stuff. Brains are complicated, but it could be that the laws of the universe, or regularities, which govern consciousness are pretty simple. That’s certainly been the case when we look at physics. For instance, my iPhone’s processor is super-complicated, but it runs on electricity, which itself actually obeys very simple & elegant laws.

Elsewhere I’ve argued that:

>Anything piped through the complexity of the brain will look complex, regardless of how simple or complex it starts out as. Similarly, anything will look irreducibly complex if we’re looking at it from the wrong level of abstraction.

 

Adam: What do you think of Thomas A. Bass’s view of Information Theory – he thinks that (at least in many cases) it has not been easy to turn data into knowledge. That there is a pathological attraction to information which is making us ‘sick’ – he calls it Information Pathology. If his view offers any useful insights to you concerning avoiding ‘Information Pathology’ – what would they be?

Mike: Right, I would agree with Bass that we’re swimming in neuroscience data, but it’s not magically turning into knowledge. There was a recent paper called “Could a neuroscientist understand a microprocessor?” which asked if the standard suite of neuroscience methods could successfully reverse-engineer the 6502 microprocessor used in the Atari 2600 and NES. This should be easier than reverse-engineering a brain, since it’s a lot smaller and simpler, and since they were analyzing it in software they had all the data they could ever ask for – but it turned out that the methods they were using couldn’t cut it. That raises the question of whether these methods can make progress on reverse-engineering actual brains. As the paper puts it, neuroscience thinks it’s data-limited, but it’s actually theory-limited.

The first takeaway from this is that even in the age of “big data” we still need theories, not just data. We still need people trying to guess Nature’s structure and figuring out what data to even gather. Relatedly, I would say that in our age of “Big Science” relatively few people are willing or able to be sufficiently bold to tackle these big questions. Academic promotions & grants don’t particularly reward risk-taking.

 

Adam: Information Theory frameworks – what is your “Eight Problems” framework, and how does it contrast with Giulio Tononi’s Integrated Information Theory (IIT)? How might IIT help address valence in a principled manner? What is lacking in IIT – and how does your “Eight Problems” framework address this?

Mike: IIT is great, but it’s incomplete. I think of it as *half* a theory of consciousness. My “Eight Problems for a new science of consciousness” framework describes what a “full stack” approach would look like, what IIT will have to do in order to become a full theory.

The two biggest problems IIT faces are that (1) it’s not compatible with physics, so we can’t actually apply it to any real physical systems, and (2) it says almost nothing about what its output means. Both of these are big problems! But IIT is also the best and only game in town in terms of quantitative theories of consciousness.

Principia Qualia aims to help fix IIT, and also to build a bridge between IIT and valence research. If IIT is right, and we can quantify conscious experiences, then how pleasant or unpleasant this experience is should be encoded into its corresponding mathematical object.

 

Adam: What are the three principles for a mathematical derivation of valence?

Mike: First, a few words about the larger context. Probably the most important question in consciousness research is whether consciousness is real, like an electromagnetic field is real, or an inherently complex, irreducible linguistic artifact, like “justice” or “life”. If consciousness is real, then there’s interesting stuff to discover about it, like there was interesting stuff to discover about quantum mechanics and gravity. But if consciousness isn’t real, then any attempt to ‘discover’ knowledge about it will fail, just like attempts to draw a crisp definition for ‘life’ (elan vital) failed.

If consciousness is real, then there’s a hidden cache of predictive knowledge waiting to be discovered. If consciousness isn’t real, then the harder we try to find patterns, the more elusive they’ll be- basically, we’ll just be talking in circles. David Chalmers refers to a similar distinction with his “Type-A vs Type-B Materialism”.

I’m a strong believer in consciousness realism, as are my research collaborators. The cool thing here is, if we assume that consciousness is real, a lot of things follow from this– like my “Eight Problems” framework. Throw in a couple more fairly modest assumptions, and we can start building a real science of qualia.

Anyway, the formal principles are the following (a compact formal reading is sketched after the list):

  1. Consciousness can be quantified. (More formally, that for any conscious experience, there exists a mathematical object isomorphic to it.)
  2. There is some order, some rhyme & reason & elegance, to consciousness. (More formally, the state space of consciousness has a rich set of mathematical structures.)
  3. Valence is real. (More formally, valence is an ordered property of conscious systems.)
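One compact formal reading of these three principles, in notation of my own choosing (an illustrative sketch, not Mike’s formalism):

```latex
% Illustrative notation, not from Principia Qualia.
% E = set of conscious experiences, M = space of mathematical objects.
% 1. Qualia can be quantified:
\forall e \in E \;\; \exists\, m(e) \in \mathcal{M} \ \text{such that}\ e \cong m(e)
% 2. The state space has rich structure, e.g. a metric on experiences:
d : \mathcal{M} \times \mathcal{M} \to \mathbb{R}_{\ge 0}
% 3. Valence is an ordered property of experiences:
v : \mathcal{M} \to \mathbb{R}, \qquad e_1 \preceq e_2 \iff v(m(e_1)) \le v(m(e_2))
```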

 

Basically, they combine to say: this thing we call ‘valence’ could have a relatively simple mathematical representation. Figuring out valence might not take an AGI several million years. Instead, it could be almost embarrassingly easy.

 

Adam: Do Qualia Structuralism, Valence Structuralism and Valence Realism relate to the philosophy-of-physics principles of realism and structuralism? If so, is there an equivalent ontic Qualia Structuralism and Valence Structuralism?

Mike: “Structuralism” means many things in many contexts. I use it in a specifically mathematical way, to denote that the state space of qualia quite likely embodies many mathematical structures, or properties (such as being a metric space).

Re: your question about ontics, I tend to take the empirical route and evaluate claims based on their predictions whenever possible. I don’t think predictions change if we assume realism vs structuralism in physics, so maybe it doesn’t matter. But I can get back to you on this. 🙂

 

Adam: What about the Qualia Research Institute I’ve also recently heard about? :D It seems both you (Mike) and Andrés Gómez Emilsson are doing some interesting work there.

Mike: We know very little about consciousness. This is a problem, for various and increasing reasons– it’s upstream of a lot of futurist-related topics.

But nobody seems to know quite where to start unraveling this mystery. The way we talk about consciousness is stuck in “alchemy mode”– we catch glimpses of interesting patterns, but it’s unclear how to systematize this into a unified framework. How to turn ‘consciousness alchemy’ into ‘consciousness chemistry’, so to speak.

Qualia Research Institute is a research collective which is working on building a new “science of qualia”. Basically, we think our “full-stack” approach cuts through all the confusion around this topic and can generate hypotheses which are novel, falsifiable, and useful.

Right now, we’re small (myself, Andres, and a few others behind the scenes) but I’m proud of what we’ve accomplished so far, and we’ve got more exciting things in the pipeline. 🙂

Also see the 2nd part, and the 3rd part of this interview series. Also this interview with Christof Koch will likely be of interest.

 

Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is Co-founder of the Qualia Research Institute. Much of Mike’s research and writings can be found at the Open Theory website.
‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary.
If you like Mike’s work, consider helping fund it at Patreon.

On Consciousness, Qualia, Valence & Intelligence with Mike Johnson

Andrés L. Gómez Emilsson

Andrés Gómez Emilsson joined in to add very insightful questions for a 3-part interview series with Mike Johnson, covering the relationship of metaphysics to qualia/consciousness/hedonic valence (and defining those terms), whether panpsychism matters, increasing sensitivity to bliss, valence variance, Effective Altruism, cause prioritization, and the importance of consciousness/valence research.
Andrés Gómez Emilsson interviews Mike Johnson

Carving Reality at the Joints

Andrés L. Gómez Emilsson: Does metaphysics matter for understanding qualia, consciousness, valence and intelligence?

Mike Johnson: If we define metaphysics as the study of what exists, it absolutely does matter for understanding qualia, consciousness, and valence. I think metaphysics matters for intelligence, too, but in a different way. The big question is whether terms like qualia, consciousness, and valence “carve reality at the joints” or whether they’re emergent linguistic constructs that don’t reflect the structure of the universe. And if these things are ‘real’ in some sense, the follow-up question is: how can we formalize these concepts? Intelligence seems different: it seems like a ‘fuzzy’ concept, without a good “crisp”, or frame-invariant, definition.

Andrés: What about sources of sentient valence outside of human brains? What is the “minimum viable valence organism”? What would you expect it to look like?

Mike Johnson

Mike: If some form of panpsychism is true- and it’s hard to construct a coherent theory of consciousness without allowing panpsychism- then I suspect two interesting things are true.
  1. A lot of things are probably at least a little bit conscious. The “minimum viable valence experiencer” could be pretty minimal. Both Brian Tomasik and Stuart Hameroff suggest that there could be morally-relevant experience happening at the level of fundamental physics. This seems highly counter-intuitive but also logically plausible to me.
  2. Biological organisms probably don’t constitute the lion’s share of moral experience. If there’s any morally-relevant experience that happens on small levels (e.g., quantum fuzz) or large levels (e.g., black holes, or eternal inflation), it probably outweighs what happens on Earth by many, many, many orders of magnitude. Whether it’ll outweigh the future impact of humanity on our light-cone is an open question.

In contrast with Brian Tomasik on this issue, I suspect (and hope) that the lion’s share of the qualia of the universe is strongly net positive. Appendix F of Principia Qualia talks a little more about this.

Andrés: What would be the implications of finding a sure-fire way to induce great valence for brief moments? Could this be used to achieve “strategic alignment” across different branches of utilitarianism?

Mike: A device that could temporarily cause extreme positive or negative valence on demand would immediately change the world. First, it would validate valence realism in a very visceral way. I’d say it would be the strongest philosophical argument ever made. Second, it would obviously have huge economic & ethical uses. Third, I agree that being able to induce strong positive & negative valence on demand could help align different schools of utilitarianism. Nothing would focus philosophical arguments about the discount rate between pleasure & suffering more than a (consensual!) quick blast of pure suffering followed by a quick blast of pure pleasure. Similarly, a lot of people live their lives in a rather numb state. Giving them a visceral sense that ‘life can be more than this’ could give them ‘skin in the game’.

Fourth, it could mess a lot of things up. Obviously, being able to cause extreme suffering could be abused, but being able to cause extreme pleasure on-demand could lead to bad outcomes too. You (Andres) have written about wireheading before, and I agree with the game-theoretic concerns involved. I would also say that being able to cause extreme pleasure in others could be used in adversarial ways. More generally, human culture is valuable and fragile; things that could substantially disrupt it should be approached carefully.

A friend of mine was describing how in the 70s, the emerging field of genetic engineering held the Asilomar Conference on Recombinant DNA to discuss how the field should self-regulate. The next year, these guidelines were adopted by the NIH wholesale as the basis for binding regulation, and other fields (such as AI safety!) have attempted to follow the same model. So the culture around technologies may reflect a strong “founder effect”, and we should be on the lookout for a good, forward-looking set of principles for how valence technology should work.

One principle that seems to make sense is to not publicly post ‘actionable’ equations, pseudocode, or code for how one could generate suffering with current computing resources (if this is indeed possible). Another principle is to focus resources on positive, eusocial applications only, insofar as that’s possible – I’m especially concerned about addiction, and bad actors ‘weaponizing’ this sort of research. Another would be to be on guard against entryism, or people who want to co-opt valence research for political ends. All of this is pretty straightforward, but it would be good to work it out a bit more formally, look at the successes and failures of other research communities, and so on.

A question I find very interesting is whether valence research is socially disruptive or socially stabilizing by default. I think we should try very hard to make it a socially stabilizing force. One way to think about this is in terms of existential risk. It’s a little weird to say, but I think the fact that so many people are jaded, or feel hopeless, is a big existential risk, because they feel like they have very little to lose. So they don’t really care what happens to the world, because they don’t have good qualia to look forward to, no real ‘skin in the game’. If valence tech could give people a visceral, ‘felt sense’ of wonder and possibility, I think the world could become a much safer place, because more people would viscerally care about AI safety, avoiding nuclear war, and so on.

Finally, one thing that I think doesn’t make much sense is handing off the ethical issues to professional bioethicists and expecting them to be able to help much. Speaking as a philosopher, I don’t think bioethics itself has healthy community & research norms (maybe bioethics needs some bioethics-ethicists…). And in general, especially when issues are particularly complex or technical, I think the best research norms come from within a community.

Andrés: What is the role of valence variance in intelligence? Can a sentient being use its consciousness in any computationally fruitful way without any valence variance? Can a “perfectly flat world(-simulation)” be used for anything computational?

Mike: I think we see this today, with some people suffering from affective blunting (muted emotions) but seemingly living functional lives. More generally, what a sentient agent functionally accomplishes, and how it feels as it works toward that goal, seem to be correlated but not identical. I.e., one can vary without the other. But I don’t think that valence is completely orthogonal to behavior, either. My one-sentence explanation here is that evolution seems to have latched onto the property which corresponds to valence – which I argue is symmetry – in deep ways, and has built our brain-minds around principles of homeostatic symmetry. This naturally leads to high variability in our valence, as our homeostatic state is perturbed and restored. Logically, we could build minds around different principles – but it might be a lot less computationally efficient to do so. We’ll see. 🙂

Why we seek out pleasure: the Symmetry Theory of Homeostatic Regulation

One angle of research here could be looking at people who suffer from affective blunting, and trying to figure out if it holds them back: what it makes them bad at doing. It’s possible that this could lead to understanding human-style intelligence better. Going a little further, we can speculate that given a certain goal or computation, there could be “valence-positive” processes that could accomplish it, and “valence-negative” processes. This implies that there’s a nascent field of “ethical computation” that would evaluate the valence of different algorithms running on different physical substrates, and choose the one that best satisfices between efficiency and valence. (This is of course a huge simplification which glosses over tons of issues…)
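As a toy sketch of what such “ethical computation” selection could look like (entirely illustrative – the efficiency and valence numbers are made up, and no method for scoring the valence of an algorithm exists today):

```python
# Toy sketch of the "ethical computation" idea above (illustrative only):
# among algorithms that meet an efficiency floor, pick the one with the
# best (assumed, hypothetical) valence score.
algorithms = [
    {"name": "algo_a", "efficiency": 0.95, "valence": -0.2},
    {"name": "algo_b", "efficiency": 0.80, "valence": +0.5},
    {"name": "algo_c", "efficiency": 0.40, "valence": +0.9},
]

def choose(algos, efficiency_floor=0.75):
    """Satisfice: require adequate efficiency, then maximize valence."""
    viable = [a for a in algos if a["efficiency"] >= efficiency_floor]
    return max(viable, key=lambda a: a["valence"])

print(choose(algorithms)["name"])  # -> "algo_b": satisfices both criteria
```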
Andrés: What should we prioritize: super-intelligence, super-longevity or super-happiness? Does the order matter? Why?

Mike: I think it matters quite a bit! For instance, I think the world looks a lot different if we figure out consciousness *before* AGI, versus if we ignore it until AGI is built. The latter seems to involve various risks that the former doesn’t. A risk that I think we both agree is serious and real is this notion of “what if accelerating technology leads to Malthusian conditions where agents don’t – and literally can’t, from a competitive standpoint – care about qualia & valence?” Robin Hanson has a great post called “This is the Dream Time” (of relaxed selection). But his book “Age of Em” posits a world where selection pressures go back up very dramatically. I think if we enter such an era without a good theory of qualia, we could trade away a lot of what makes life worth living.
Andrés: What are some conceptual or factual errors that you see happening in the transhumanist/rationalist/EA community related to modeling qualia, valence and intelligence?

Mike: First, I think it’s only fair to mention what these communities do right. I’m much more likely to have a great conversation about these topics with EAs, transhumanists, and rationalists than a random person off the street, or even a random grad student. People from this community are always smart, usually curious, often willing to explore fresh ideas and stretch their brain a bit, and sometimes able to update based on purely abstract arguments. And there’s this collective sense that ideas are important and have real implications for the future. So there are a lot of great things happening in these communities, and they’re really a priceless resource for sounding out theories, debating issues, and so on. But I would highlight some ways in which I think these communities go astray.

First, people don’t realize how bad most existing models of qualia & valence are. Michael Graziano argues that most theories of consciousness are worse than wrong – they play to our intuitions but don’t actually explain anything. Computationalism, functionalism, fun theory, ‘hedonic brain regions’, ‘pleasure neurochemicals’, the reinforcement learning theory of valence, and so on all give the illusion of explanatory depth but don’t actually explain things in a way which allows us to do anything useful.

Second, people don’t realize how important a good understanding of qualia & valence is. They’re upstream of basically everything interesting and desirable. Here’s what I think has happened, at least in the rationalist community: historically, consciousness research has been a black hole. Smart people go in, but nothing comes out. So communities (such as physicists and LessWrong) naturally have an interest in putting up a fence around the topic with a sign that says ‘Don’t go here!’. But over time, people forgot why the mystery was blocked off, and started to think that the mystery doesn’t exist. This leads to people actively avoiding thinking about these topics without being able to articulate why.

Andrés: Is there value in studying extreme cases of valence? E.g. Buddhist monks who claim to achieve extreme sustainable bliss, or people on MDMA?

Mike: ‘What science can analyze, science can duplicate.’ And studying outliers such as your examples is a time-honored way of gathering data with high signal-to-noise. So yes, definitely. 🙂
Also see the 1st part, and the 2nd part of this interview series. Also this interview with Christof Koch will likely be of interest.
 
Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is Co-founder of the Qualia Research Institute. Much of Mike’s research and writings can be found at the Open Theory website. ‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary. If you like Mike’s work, consider helping fund it at Patreon.

Ethics, Qualia Research & AI Safety with Mike Johnson

What’s the relationship between valence research and AI ethics?

Hedonic valence is a measure of the quality of our felt sense of experience, the intrinsic goodness (positive valence) or averseness (negative valence) of an event, object, or situation.  It is an important aspect of conscious experience; always present in our waking lives. If we seek to understand ourselves, it makes sense to seek to understand how valence works – how to measure it and test for it.

Also, might there be a relationship to the AI safety/friendliness problem?
In this interview, we cover a lot of things, not least… THE SINGULARITY (of course) & the importance of Valence Research to AI Friendliness Research (as detailed here). Will thinking machines require experience with valence to understand its importance?

Here we cover some general questions about Mike Johnson’s views on recent advances in science and technology & what he sees as being the most impactful, what world views are ready to be retired, his views on XRisk and on AI Safety – especially related to value theory.

This is one part of an interview series with Mike Johnson (another section covers Consciousness, Qualia, Valence & Intelligence).

 

Adam Ford: Welcome Mike Johnson, many thanks for doing this interview. Can we start with your background?

Mike Johnson

Mike Johnson: My formal background is in epistemology and philosophy of science: what do we know & how do we know it, what separates good theories from bad ones, and so on. Prior to researching qualia, I did work in information security, algorithmic trading, and human augmentation research.

 

Adam: What is the most exciting / interesting recent (scientific/engineering) news? Why is it important to you?

Mike: CRISPR is definitely up there! In a few short years precision genetic engineering has gone from a pipe dream to reality. The problem is that we’re like the proverbial dog that caught up to the car it was chasing: what do we do now? Increasingly, we can change our genome, but we have no idea how we should change our genome, and the public discussion about this seems very muddled. The same could be said about breakthroughs in AI.

 

Adam: What are the most important discoveries/inventions over the last 500 years?

Mike: Tough question. Darwin’s theory of Natural Selection, Newton’s theory of gravity, Faraday & Maxwell’s theory of electricity, and the many discoveries of modern physics would all make the cut. Perhaps also the germ theory of disease. In general what makes discoveries & inventions important is when they lead to a productive new way of looking at the world.

 

Adam: What philosophical/scientific ideas are ready to be retired? What theories of valence are ready to be relegated to the dustbin of history? (Why are they still in currency? Why are they in need of being thrown away or revised?)

Mike: I think that 99% of the time when someone uses the term “pleasure neurochemicals” or “hedonic brain regions” it obscures more than it explains. We know that opioids & activity in the nucleus accumbens are correlated with pleasure– but we don’t know why, we don’t know the causal mechanism. So it can be useful shorthand to call these things “pleasure neurochemicals” and whatnot, but every single time anyone does that, there should be a footnote that we fundamentally don’t know the causal story here, and this abstraction may ‘leak’ in unexpected ways.

 

Adam: What have you changed your mind about?

Mike: Whether pushing toward the Singularity is unequivocally a good idea. I read Kurzweil’s The Singularity is Near back in 2005 and loved it- it made me realize that all my life I’d been a transhumanist and didn’t know it. But twelve years later, I’m a lot less optimistic about Kurzweil’s rosy vision. Value is fragile, and there are a lot more ways that things could go wrong, than ways things could go well.

 

Adam: I remember reading Eliezer’s writings on ‘The Fragility of Value’; it’s quite interesting and worth consideration – the idea that if we don’t get AI’s value system exactly right, then it would be like pulling a random mind out of mindspace – most likely inimical to human interests. The writing did seem quite abstract, and it would be nice to see a formal model or something concrete to show this would be the case. I’d really like to know how and why value is as fragile as Eliezer seems to make out. Is there any convincing, crisply defined model supporting this thesis?

Mike: Whether the ‘Complexity of Value Thesis’ is correct is super important. Essentially, the idea is that we can think of what humans find valuable as a tiny location in a very large, very high-dimensional space– let’s say 1000 dimensions for the sake of argument. Under this framework, value is very fragile; if we move a little bit in any one of these 1000 dimensions, we leave this special zone and get a future that doesn’t match our preferences, desires, and goals. In a word, we get something worthless (to us). This is perhaps most succinctly put by Eliezer in “Value is fragile”:

“If you loose the grip of human morals and metamorals – the result is not mysterious and alien and beautiful by the standards of human value. It is moral noise, a universe tiled with paperclips. To change away from human morals in the direction of improvement rather than entropy, requires a criterion of improvement; and that criterion would be physically represented in our brains, and our brains alone. … You want a wonderful and mysterious universe? That’s your value. … Valuable things appear because a goal system that values them takes action to create them. … if our values that prefer it are physically obliterated – or even disturbed in the wrong dimension. Then there is nothing left in the universe that works to make the universe valuable.”

If this frame is right, then it’s going to be really, really, really hard to get AGI right, because one wrong step in programming will make the AGI depart from human values, and “there will be nothing left to want to bring it back.” Eliezer, and I think most of the AI safety community, assumes this.
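A back-of-the-envelope way to see why this framing makes value so fragile (my own toy arithmetic, using the 1000-dimension figure from above): if “valuable futures” occupy a narrow band along each dimension, and each dimension must be satisfied independently, the joint probability collapses multiplicatively.

```python
# Toy arithmetic only: assumes 1000 independent dimensions, each of which
# we get "right" with probability 0.9 -- both numbers are illustrative.
per_dimension_hit = 0.9
dimensions = 1000

p_all = per_dimension_hit ** dimensions
print(f"P(hit the valuable region on all {dimensions} dims) = {p_all:.2e}")
# -> ~1.75e-46: being 90% right on every dimension still virtually
#    guarantees a miss, which is the geometric core of the worry.
```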

But– and I want to shout this from the rooftops– the complexity of value thesis is just a thesis! Nobody knows if it’s true. An alternative here would be, instead of trying to look at value in terms of goals and preferences, we look at it in terms of properties of phenomenological experience. This leads to what I call the Unity of Value Thesis, where all the different manifestations of valuable things end up as special cases of a more general, unifying principle (emotional valence). What we know from neuroscience seems to support this: Berridge and Kringelbach write about how “The available evidence suggests that brain mechanisms involved in fundamental pleasures (food and sexual pleasures) overlap with those for higher-order pleasures (for example, monetary, artistic, musical, altruistic, and transcendent pleasures).” My colleague Andres Gomez Emilsson writes about this in The Tyranny of the Intentional Object. Anyway, if this is right, then the AI safety community could approach the Value Problem and Value Loading Problem much differently.

 

Adam: I’m also interested in the nature of possible attractors that agents might ‘extropically’ gravitate towards (like a thirst for useful and interesting novelty, generative and non-regressive, that might not neatly fit categorically under ‘happiness’) – I’m not wholly convinced that they exist, but if one leans away from moral relativism, it makes sense that a superintelligence may be able to discover or extrapolate facts from all physical systems in the universe, not just humans, to determine valuable futures and avoid malignant failure modes (Coherent Extrapolated Value if you will). Being strongly locked into optimizing human values may be a non-malignant failure mode.

Mike: What you write reminds me of Schmidhuber’s notion of a ‘compression drive’: we’re drawn to interesting things because getting exposed to them helps build our ‘compression library’ and lets us predict the world better. But this feels like an instrumental goal, sort of a “Basic AI Drives” sort of thing. Would definitely agree that there’s a danger of getting locked into a good-yet-not-great local optima if we hard optimize on current human values.

Probably the danger is larger than that too – as Eric Schwitzgebel notes:

“Common sense is incoherent in matters of metaphysics. There’s no way to develop an ambitious, broad-ranging, self-consistent metaphysical system without doing serious violence to common sense somewhere. It’s just impossible. Since common sense is an inconsistent system, you can’t respect it all. Every metaphysician will have to violate it somewhere.”

If we lock in human values based on common sense, we’re basically committing to following an inconsistent formal system. I don’t think most people realize how badly that will fail.

 

Adam: What invention or idea will change everything?

Mike: A device that allows people to explore the space of all possible qualia in a systematic way. Right now, we do a lot of weird things to experience interesting qualia: we drink fermented liquids, smoke various plant extracts, strap ourselves into rollercoasters, and parachute out of planes, to give just a few examples. But these are very haphazard ways to experience new qualia! When we’re able to ‘domesticate’ and ‘technologize’ qualia, as we’ve done with electricity, we’ll be living in a new (and, I think, incredibly exciting) world.

 

Adam: What are you most concerned about? What ought we be worrying about?

Mike: I’m worried that society’s ability to coordinate on hard things seems to be breaking down, and about AI safety. Similarly, I’m also worried about what Eliezer Yudkowsky calls ‘Moore’s Law of Mad Science’, that steady technological progress means that ‘every eighteen months the minimum IQ necessary to destroy the world drops by one point’. But I think some very smart people are worrying about these things, and are trying to address them.

In contrast, almost no one is worrying that we don’t have good theories of qualia & valence. And I think we really, really ought to, because they’re upstream of a lot of important things, and right now they’re “unknown unknowns”- we don’t know what we don’t know about them.

One failure case that I worry about is that we could trade away what makes life worth living in return for some minor competitive advantage. As Bostrom notes in Superintelligence,

“When it becomes possible to build architectures that could not be implemented well on biological neural networks, new design space opens up; and the global optima in this extended space need not resemble familiar types of mentality. Human-like cognitive organizations would then lack a niche in a competitive post-transition economy or ecosystem. We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today – a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children.”

Nick Bostrom

Now, if we don’t know how qualia works, I think this is the default case. Our future could easily be a technological wonderland, but with very little subjective experience. “A Disneyland with no children,” as Bostrom quips.

 

 

Adam: How would you describe your ethical views? What are your thoughts on the relative importance of happiness vs. suffering? Do things besides valence have intrinsic moral importance?

Mike: Good question. First, I’d just like to comment that Principia Qualia is a descriptive document; it doesn’t make any normative claims.

I think the core question in ethics is whether there are elegant ethical principles to be discovered, or not. Whether we can find some sort of simple description or efficient compression scheme for ethics, or if ethics is irreducibly complex & inconsistent.

The most efficient compression scheme I can find for ethics, that seems to explain very much with very little, and besides that seems intuitively plausible, is the following:

  1. Strictly speaking, conscious experience is necessary for intrinsic moral significance. I.e., I care about what happens to dogs, because I think they’re conscious; I don’t care about what happens to paperclips, because I don’t think they are.
  2. Some conscious experiences do feel better than others, and all else being equal, pleasant experiences have more value than unpleasant experiences.

Beyond this, though, I think things get very speculative. Is valence the only thing that has intrinsic moral importance? I don’t know. On one hand, this sounds like a bad moral theory, one which is low-status, has lots of failure-modes, and doesn’t match all our intuitions. On the other hand, all other systematic approaches seem even worse. And if we can explain the value of most things in terms of valence, then Occam’s Razor suggests that we should put extra effort into explaining everything in those terms, since it’d be a lot more elegant. So– I don’t know that valence is the arbiter of all value, and I think we should be actively looking for other options, but I am open to it. That said I strongly believe that we should avoid premature optimization, and we should prioritize figuring out the details of consciousness & valence (i.e. we should prioritize research over advocacy).

Re: the relative importance of happiness vs suffering, it’s hard to say much at this point, but I’d expect that if we can move valence research to a more formal basis, there will be an implicit answer to this embedded in the mathematics.

Perhaps the clearest and most important ethical view I have is that ethics must ultimately “compile” to physics. What we value and what we disvalue must ultimately cash out in terms of particle arrangements & dynamics, because these are the only things we can actually change. And so if people are doing ethics without caring about making their theories cash out in physical terms, they’re not actually doing ethics- they’re doing art, or social signaling, or something which can serve as the inspiration for a future ethics.

The analogy I’d offer here is that we can think about our universe as a computer, and ethics as choosing a program to run on this computer. Unfortunately, most ethicists aren’t writing machine-code, or even thinking about things in ways that could be easily translated to machine-code. Instead, they’re writing poetry about the sorts of programs that might be nice to run. But you can’t compile poetry to machine-code! So I hope the field of ethics becomes more physics-savvy and quantitative (although I’m not optimistic this will happen quickly).

Eliezer Yudkowsky refers to something similar with his notions of “AI grade philosophy”, “compilable philosophy”, and “computable ethics”, though I don’t think he quite goes far enough (i.e., all the way to physics).

 

Adam: What excites you? What do you think we have reason to be optimistic about?

Mike: The potential of qualia research to actually make peoples’ lives better in concrete, meaningful ways. Medicine’s approach to pain management and treatment of affective disorders are stuck in the dark ages because we don’t know what pain is. We don’t know why some mental states hurt. If we can figure that out, we can almost immediately help a lot of people, and probably unlock a surprising amount of human potential as well. What does the world look like with sane, scientific, effective treatments for pain & depression & akrasia? I think it’ll look amazing.

 

Adam: If you were to take a stab at forecasting the Intelligence Explosion – in what timeframe do you think it might happen (confidence intervals allowed)?

Mike: I don’t see any intractable technical hurdles to an Intelligence Explosion: the general attitude in AI circles seems to be that progress is actually happening a lot more quickly than expected, and that getting to human-level AGI is less a matter of finding some fundamental breakthrough, and more a matter of refining and connecting all the stuff we already know how to do.

The real unknown, I think, is the socio-political side of things. AI research depends on a stable, prosperous society able to support it and willing to ‘roll the dice’ on a good outcome, and peering into the future, I’m not sure we can take this as a given. My predictions for an Intelligence Explosion:

  • Between ~2035-2045 if we just extrapolate research trends within the current system;
  • Between ~2080-2100 if major socio-political disruptions happen but we stabilize without too much collateral damage (e.g., non-nuclear war, drawn-out social conflict);
  • If it doesn’t happen by 2100, it probably implies a fundamental shift in our ability or desire to create an Intelligence Explosion, and so it might take hundreds of years (or never happen).
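Read quantitatively, these are scenario-conditional intervals. One way to combine them into a single forecast distribution is a scenario-weighted Monte Carlo; here is a minimal Python sketch, where the interval endpoints come from the predictions above but the scenario weights (0.5 / 0.3 / 0.2) are invented placeholders for illustration, not Mike’s numbers:

    import random

    # Scenario-weighted Monte Carlo over the three predictions above.
    # Interval endpoints are from the interview; the weights are
    # assumed placeholders, purely for illustration.
    SCENARIOS = [
        (0.5, lambda: random.uniform(2035, 2045)),  # current trends extrapolate
        (0.3, lambda: random.uniform(2080, 2100)),  # disruption, then stabilization
        (0.2, lambda: None),                        # post-2100 or never
    ]

    def sample_year():
        r = random.random()
        for weight, draw in SCENARIOS:
            if r < weight:
                return draw()
            r -= weight
        return None  # guard against floating-point rounding

    samples = [sample_year() for _ in range(100_000)]
    resolved = sorted(s for s in samples if s is not None)
    print(f"P(no explosion by 2100) ~ {samples.count(None) / len(samples):.2f}")
    print(f"median year, conditional on pre-2100: {resolved[len(resolved) // 2]:.0f}")

Changing the weights shifts the headline numbers, which is rather the point: the forecast is dominated by the socio-political scenario probabilities, not by the within-scenario intervals.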

 

If a tree falls in the forest and no one is around to hear it, does it make a sound? It would be unfortunate if a whole lot of awesome stuff were to happen with no one around to experience it.

Also see the 2nd and 3rd parts of this interview series, conducted by Andrés Gómez Emilsson; this interview with Christof Koch will likely also be of interest.

 

Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is Co-founder of the Qualia Research Institute. Much of Mike’s research and writings can be found at the Open Theory website.
‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary.
If you like Mike’s work, consider helping fund it at Patreon.

Science, Mindfulness & the Urgency of Reducing Suffering – Christof Koch

In this interview with Christof Koch, he shares some deeply felt ideas about the urgency of reducing suffering (with some caveats), his experience with mindfulness – explaining what it was like to visit the Dalai Lama for a week – as well as a heartfelt account of his family dog ‘Nosey’ dying in his arms, and how that moved him to become a vegetarian. He also discusses the bias of human exceptionalism, the horrors of factory farming of non-human animals, as well as a consequentialist view on animal testing.
Christof Koch is an American neuroscientist best known for his work on the neural bases of consciousness.

Christof Koch is the President and Chief Scientific Officer of the Allen Institute for Brain Science in Seattle. From 1986 until 2013, he was a professor at the California Institute of Technology. http://www.klab.caltech.edu/koch/

Towards the Abolition of Suffering Through Science

An online panel focusing on reducing suffering & paradise engineering through the lens of science.

Panelists: Andrés Gómez Emilsson, David Pearce, Brian Tomasik and Mike Johnson

Note: consider skipping to 10:19 to bypass some audio problems at the beginning!


Topics

Andrés Gómez Emilsson: Qualia computing (how to use consciousness for information processing, and why that has ethical implications)

  • How do we know consciousness is causally efficacious? Because we are conscious and evolution can only recruit systems/properties when they do something (and they do it better than the available alternatives).
  • What is consciousness’ purpose in animals? (Information processing).
  • What is consciousness’ comparative advantage?  (Phenomenal binding).
  • Why does this matter for suffering reduction? Suffering has functional properties that play a role in the inclusive fitness of organisms. If we figure out exactly what role they play (by reverse-engineering the computational properties of consciousness), we can substitute them with equally (or better) functioning non-conscious or positive hedonic-tone analogues.
  • What is the focus of Qualia Computing? (it focuses on basic fundamental questions and simple experimental paradigms to get at them (e.g. computational properties of visual qualia via psychedelic psychophysics)).

Brian Tomasik:

  • Space colonization “Colonization of space seems likely to increase suffering by creating (literally) astronomically more minds than exist on Earth, so we should push for policies that would make a colonization wave more humane, such as not propagating wild-animal suffering to other planets or in virtual worlds.”
  • AGI safety “It looks likely that artificial general intelligence (AGI) will be developed in the coming decades or centuries, and its initial conditions and control structures may make an enormous impact to the dynamics, values, and character of life in the cosmos.”
  • Animals and insects “Because most wild animals die, often painfully, shortly after birth, it’s plausible that suffering dominates happiness in nature. This is especially plausible if we extend moral considerations to smaller creatures like the ~10^19 insects on Earth, whose collective neural mass outweighs that of humanity by several orders of magnitude.”
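As a rough sanity check on the quoted estimate, the neural-mass comparison reduces to a back-of-envelope calculation. In this Python sketch, every constant is an assumption chosen for illustration (per-insect brain mass in particular spans orders of magnitude across species), not a figure from the panel:

    # Back-of-envelope check of the insect vs human neural-mass claim.
    # All constants are rough assumptions for illustration only.
    N_INSECTS = 1e19        # quoted estimate of insects on Earth
    INSECT_BRAIN_KG = 1e-7  # ~0.1 mg per insect brain (assumed)
    N_HUMANS = 8e9          # approximate world population
    HUMAN_BRAIN_KG = 1.4    # average adult human brain mass (assumed)

    insect_neural_mass = N_INSECTS * INSECT_BRAIN_KG  # ~1e12 kg
    human_neural_mass = N_HUMANS * HUMAN_BRAIN_KG     # ~1.1e10 kg
    print(f"insect/human neural-mass ratio: {insect_neural_mass / human_neural_mass:.0f}x")

Under these particular assumptions the ratio comes out around two orders of magnitude; the answer is dominated by the assumed per-insect brain mass, which is by far the most uncertain input.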

Mike Johnson:

  • If we successfully “reverse-engineer” the patterns for pain and pleasure, what does ‘responsible disclosure’ look like? Potential benefits and potential for abuse both seem significant.
  • If we agree that valence is a pattern in a dataset, what’s a good approach to defining the dataset, and what’s a good heuristic for finding the pattern?
  • What order of magnitude is the theoretical potential of mood enhancement? E.g., 2x vs 10x vs 10^10x
  • What are your expectations of the distribution of suffering in the world? What proportion happens in nature vs within the boundaries of civilization? What are counter-intuitive sources of suffering? Do we know about ~90% of suffering on the earth, or ~.001%?
  • Valence Research, The Mystery of Pain & Pleasure.
  • Why is it such an exciting time right about now to be doing valence research? Are we at a sweet spot in history in this regard? What is hindering valence research? (examples of muddled thinking, cultural barriers etc?)
  • How do we use the available science to improve the QALY? GiveDirectly has used change in cortisol levels to measure effectiveness, and the EU (what’s EU stand for?) evidently does something similar involving cattle. It seems like a lot of the pieces for a more biologically-grounded QALY – and maybe a SQALY (Species and Quality-Adjusted Life-Year) – are available; someone just needs to put them together (a toy sketch follows below). I suspect this is one of the lowest-hanging, highest-leverage research fruits.
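To make the biologically-grounded QALY idea concrete, here is a toy Python sketch of the shape such a measure could take: map a measured biomarker change (cortisol, as in the GiveDirectly example above) onto a quality-of-life weight in [0, 1], then multiply by life-years. The linear mapping and its constants are invented placeholders; a real instrument would need to calibrate them against validated well-being scales:

    # Toy "biologically-grounded QALY". All constants and the linear
    # mapping are invented placeholders, not a validated instrument.
    def quality_weight(cortisol_delta_pct: float) -> float:
        """Map % change in cortisol (negative = stress reduction)
        to a quality-of-life weight, clamped to [0, 1]."""
        baseline = 0.7       # assumed population-average weight
        sensitivity = 0.005  # assumed weight change per % cortisol
        w = baseline - sensitivity * cortisol_delta_pct
        return max(0.0, min(1.0, w))

    def bio_qaly(life_years: float, cortisol_delta_pct: float) -> float:
        return life_years * quality_weight(cortisol_delta_pct)

    # An intervention lowering cortisol 20% over 10 remaining life-years:
    print(bio_qaly(10, -20.0))  # 10 * 0.8 = 8.0 QALYs

A SQALY would simply put a species-level sentience multiplier in front of the same product, which is where the interspecies comparison problems discussed elsewhere in this panel come in.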

David Pearce: The ultimate scope of our moral responsibilities. Assume for a moment that our main or overriding goal should be to minimise and ideally abolish involuntary suffering. I typically assume that (a) only biological minds suffer and (b) we are probably alone within our cosmological horizon. If so, then our responsibility is “only” to phase out the biology of involuntary suffering here on Earth and make sure it doesn’t spread or propagate outside our solar system. But Brian, for instance, has quite a different metaphysics of mind, most famously that digital characters in video games can suffer (now only a little – but in future perhaps a lot). The ramifications here for abolitionist bioethics are far-reaching.

 

Other:
– Valence research, qualia computing (how to use consciousness for information processing, and why that has ethical implications), animal suffering, insect suffering, developing an ethical Nozick’s Experience Machine, long-term paradise engineering, complexity and valence
– Effective Altruism / cause prioritization applied directly to the abolition of suffering – people’s practical recommendations for the best projects suffering reducers can work on, and what to work on first (including where to donate, what research topics to prioritize, what messages to spread)

Panelists

David Pearce: http://hedweb.com/
Mike Johnson: http://opentheory.net/
Andrés Gómez Emilsson: http://qualiacomputing.com/
Brian Tomasik: http://reducing-suffering.org/

 

#hedweb #EffectiveAltruism #HedonisticImperative #AbolitionistProject

The event was hosted on the 10th of August 2015, Venue: The Internet

Towards the Abolition of Suffering Through Science was hosted by Adam Ford for Science, Technology and the Future.

The End of Aging

Aging is a technical problem with a technical solution – finding the solution requires clear thinking and focused effort. Once solving aging becomes demonstrably feasible, it is likely attitudes will shift regarding its desirability. There is huge potential, for individuals and for society, in reducing suffering through the use of rejuvenation therapy to achieve new heights of physical well-being. I also discuss the looming economic implications of large percentages of illness among aging populations – and put forward that focusing on solving fundamental problems of aging will reduce the incidence of debilitating diseases of aging – which will in turn reduce the economic burden of illness. This mini-documentary discusses the implications of actually solving aging, as well as some misconceptions about aging.


‘The End of Aging’ won first prize in the international Longevity Film Competition[1] in 2018.


The above video is the latest version with a few updates & kinks ironed out.

‘The End of Aging’ was Adam Ford’s submission for the Longevity Film Competition – all the contestants did a great job. Big thanks to the organisers of the competition; it inspires people to produce videos that help spread awareness and understanding of the importance of ending aging.

It’s important to see that health in old age is desirable at population levels. Rejuvenation medicine – repairing the body’s ability to cope with stressors (or practical reversal of the aging process) – will end up being cheaper than traditional medicine based on general indefinite postponement of ill-health, especially in the long run as rejuvenation therapy becomes more efficient.

According to the World Health Organisation:

  1. Between 2015 and 2050, the proportion of the world’s population over 60 years will nearly double from 12% to 22%.
  2. By 2020, the number of people aged 60 years and older will outnumber children younger than 5 years.
  3. In 2050, 80% of older people will be living in low- and middle-income countries.
  4. The pace of population ageing is much faster than in the past.
  5. All countries face major challenges to ensure that their health and social systems are ready to make the most of this demographic shift.


 

Happy Longevity Day 2018! 😀

[1] The Longevity Film Competition is an initiative by the Healthy Life Extension Society, the SENS Research Foundation, and the International Longevity Alliance. The promoters of the competition invited filmmakers everywhere to produce short films advocating for healthy life extension, with a focus on dispelling four common misconceptions and concerns around the concept of life extension: the false dichotomy between aging and age-related diseases, the Tithonus error, the appeal-to-nature fallacy, and the fear of unequal access to rejuvenation biotechnologies.

The Antispeciesist Revolution – read by David Pearce

The Antispeciesist Revolution

[Original text found here]

Speciesism.
When is it ethically acceptable to harm another sentient being? On some fairly modest(1) assumptions, to harm or kill someone simply on the grounds they belong to a different gender, sexual orientation or ethnic group is unjustified. Such distinctions are real but ethically irrelevant. On the other hand, species membership is normally reckoned an ethically relevant criterion. Fundamental to our conceptual scheme is the pre-Darwinian distinction between “humans” and “animals”. In law, nonhuman animals share with inanimate objects the status of property. As property, nonhuman animals can be bought, sold, killed or otherwise harmed as humans see fit. In consequence, humans treat nonhuman animals in ways that would earn a life-time prison sentence without parole if our victims were human. From an evolutionary perspective, this contrast in status isn’t surprising. In our ancestral environment of adaptedness, the human capacity to hunt, kill and exploit sentient beings of other species was fitness-enhancing(2). Our moral intuitions have been shaped accordingly. Yet can we ethically justify such behaviour today?

Naively, one reason for disregarding the interests of nonhumans is the dimmer-switch model of consciousness. Humans matter more than nonhuman animals because (most) humans are more intelligent. Intuitively, more intelligent beings are more conscious than less intelligent beings; consciousness is the touchstone of moral status.

The problem with the dimmer-switch model is that it’s empirically unsupported, among vertebrates with central nervous systems at least. Microelectrode studies of the brains of awake human subjects suggest that the most intense forms of experience, for example agony, terror and orgasmic bliss, are mediated by the limbic system, not the prefrontal cortex. Our core emotions are evolutionarily ancient and strongly conserved. Humans share the anatomical and molecular substrates of our core emotions with the nonhuman animals whom we factory-farm and kill. By contrast, distinctively human cognitive capacities such as generative syntax, or the ability to do higher mathematics, are either phenomenologically subtle or impenetrable to introspection. To be sure, genetic and epigenetic differences exist between, say, a pig and a human being that explain our adult behavioural differences, e.g. the allele of the FOXP2(3) gene implicated in the human capacity for recursive syntax. Such mutations have little to do with raw sentience(4).

Antispeciesism.
So what is the alternative to traditional anthropocentric ethics? Antispeciesism is not the claim that “All Animals Are Equal”, or that all species are of equal value, or that a human or a pig is equivalent to a mosquito. Rather the antispeciesist claims that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect. A pig, for example, is of comparable sentience to a prelinguistic human toddler. As it happens, a pig is of comparable (or superior) intelligence to a toddler as well(5). However, such cognitive prowess is ethically incidental. If ethical status is a function of sentience, then to factory-farm and slaughter a pig is as ethically abhorrent as to factory-farm and slaughter a human baby. To exploit one and nurture the other expresses an irrational but genetically adaptive prejudice.

On the face of it, this antispeciesist claim isn’t just wrong-headed; it’s absurd. Philosopher Jonathan Haidt speaks of “moral dumbfounding”(6), where we just know something is wrong but can’t articulate precisely why. Haidt offers the example of consensual incest between an adult brother and sister who use birth control. For evolutionary reasons, we “just know” such an incestuous relationship is immoral. In the case of any comparisons of pigs with human infants and toddlers, we “just know” at some deep level that any alleged equivalence in status is unfounded. After all, if there were no ethically relevant distinction between a pig and a toddler, or between a battery-farmed chicken and a human infant, then the daily behaviour of ordinary meat-eating humans would be sociopathic – which is crazy. In fact, unless the psychiatrists’ bible, Diagnostic and Statistical Manual of Mental Disorders, is modified explicitly to exclude behaviour towards nonhumans, most of us do risk satisfying its diagnostic criteria for the disorder. Even so, humans often conceive of ourselves as animal lovers. Despite the horrors of factory-farming, most consumers of meat and animal products are clearly not sociopaths in the normal usage of the term; most factory-farm managers are not wantonly cruel; and the majority of slaughterhouse workers are not sadists who delight in suffering. Serial killers of nonhuman animals are just ordinary men doing a distasteful job – “obeying orders” – on pain of losing their livelihoods.

Should we expect anything different? Jewish political theorist Hannah Arendt spoke famously of the “banality of evil”(7). If twenty-first century humans are collectively doing something posthuman superintelligence will reckon monstrous, akin to the [human] Holocaust or Atlantic slave trade, then it’s easy to assume our moral intuitions would disclose this to us. Our intuitions don’t disclose anything of the kind; so we sleep easy. But both natural selection and the historical record offer powerful reasons for doubting the trustworthiness of our naive moral intuitions. So the possibility that human civilisation might be founded upon some monstrous evil should be taken seriously – even if the possibility seems transparently absurd at the time.

One possible speciesist response is to raise the question of “potential”. Even if a pig is as sentient as a human toddler, there is a fundamental distinction between human toddlers and pigs. Only a toddler has the potential to mature into a rational adult human being.

The problem with this response is that it contradicts our treatment of humans who lack “potential”. Thus we recognise that a toddler with a progressive disorder who will never live to celebrate his third birthday deserves at least as much love, care and respect as his normally developing peers – not to be packed off to a factory-farm on the grounds it’s a shame to let good food go to waste. We recognise a similar duty of care for mentally handicapped adult humans and cognitively frail old people. For sure, historical exceptions exist to this perceived duty of care for vulnerable humans, e.g. the Nazi “euthanasia” program, with its eugenicist conception of “life unworthy of life”. But by common consent, we value young children and cognitively challenged adults for who they are, not simply for who they may – or may not – one day become. On occasion, there may controversially be instrumental reasons for allocating more care and resources to a potential genius or exceptionally gifted child than to a normal human. Yet disproportionate intraspecies resource allocation may be justified, not because high IQ humans are more sentient, but because of the anticipated benefits to society as a whole.

Practical Implications.
1. Invitrotarianism.

The greatest source of severe, chronic and readily avoidable suffering in the world today is man-made: factory farming. Humans currently slaughter over fifty billion sentient beings each year. One implication of an antispeciesist ethic is that factory farms should be shut and their surviving victims rehabilitated.

In common with most ethical revolutions in history, the prospect of humanity switching to a cruelty-free diet initially strikes most practically-minded folk as utopian dreaming. “Realists” certainly have plenty of hard evidence to bolster their case. As English essayist William Hazlitt observed, “The least pain in our little finger gives us more concern and uneasiness than the destruction of millions of our fellow-beings.” Without the aid of twenty-first century technology, the mass slaughter and abuse of our fellow animals might continue indefinitely. Yet tissue science technology promises to allow consumers to become moral agents without the slightest hint of personal inconvenience. Lab-grown in vitro meat produced in cell culture rather than a live animal has long been a staple of science fiction. But global veganism – or its ethical invitrotarian equivalent – is no longer a futuristic fantasy. Rapid advances in tissue engineering mean that in vitro meat will shortly be developed and commercialised. Today’s experimental cultured mincemeat can be supplanted by mass-manufactured gourmet steaks for the consumer market. Perhaps critically for its rapid public acceptance, in vitro meat does not need to be genetically modified – thereby spiking the guns of techno-luddites who might otherwise worry about “FrankenBurgers”. Indeed, cultured meat products will be more “natural” in some ways than their antibiotic-laced counterparts derived from factory-farmed animals.

Momentum for commercialisation is growing. Non-profit research organisations like New Harvest(8), working to develop alternatives to conventionally-produced meat, have been joined by hard-headed businessmen. Visionary entrepreneur and Stanford academic Peter Thiel has just funnelled $350,000 into Modern Meadow(9), a start-up that aims to combine 3D printing with in vitro meat cultivation. Within the next decade or so, gourmet steaks could be printed out from biological materials. In principle, the technology should be scalable.

Tragically, billions of nonhuman animals will grievously suffer and die this century at human hands before the dietary transition is complete. Humans are not obligate carnivores; eating meat and animal products is a lifestyle choice. “But I like the taste!” is not a morally compelling argument. Vegans and animal advocates ask whether we are ethically entitled to wait on a technological fix. The antispeciesist answer is clear: no.

2. Compassionate Biology.
If and when humans stop systematically harming other sentient beings, will our ethical duties to members of other species have been discharged? Not if the same ethical considerations as apply to members of other human races or age-groups apply also to members of other species of equivalent sentience. Thus if famine breaks out in sub-Saharan Africa and young human children are starving, then we recognise we have a duty to send aid; or better still, to take proactive measures to ensure famines do not arise in the first instance, i.e. to provide not just food aid but family planning. So why not assist, say, starving free-living elephants? Until recently, no comparable interventions were feasible for members of other species. The technical challenges were insurmountable. Not least, the absence of cross-species fertility control technologies would have often made bad problems worse. Yet thanks to the exponential growth of computer power, every cubic metre of the planet will shortly be computationally accessible to micro-management, surveillance and control. Harnessed to biotechnology, nanotechnology and robotics, such tools confer unprecedented power over Nature. With unbridled power comes complicity. Ethically speaking, how many of the traditional cruelties of the living world do we wish to perpetuate? Orthodox conservation biologists argue we should not “interfere”: humans can’t “police” Nature. Antispeciesists disagree. Advocates of compassionate biology argue that humans and nonhumans alike should not be parasitised, starved, disembowelled, asphyxiated, or eaten alive.

As always, bioconservatives insist such miseries are “natural”; status quo bias runs deep. “Custom will reconcile people to any atrocity”, observed George Bernard Shaw. Snuff movies in the guise of Nature documentaries are quite popular on Youtube, a counterpoint to the Disneyfied wildlife shows aired on mainstream TV. Moreover even sympathetic critics of compassionate biology might respond that helping free-living members of other species is prohibitively expensive. An adequate welfare safety-net scarcely exists for humans in many parts of the world. So how can we contemplate its extension to nonhumans – even just to large-brained, long-lived vertebrates in our Nature reserves? Provision of comprehensive healthcare for all free-living elephants(10), for example, might cost between two and three billion dollars annually. Compassionate stewardship of the living world would be technically daunting too, entailing ecosystem management, cross-species fertility control via immunocontraception, veterinary care, emergency famine-relief, GPS tracking and monitoring, and ultimately phasing out or genetically “reprogramming”(11) carnivorous predators. The notional bill could approach the world’s 1.7 trillion-dollar annual arms budget. But irrespective of cost or timescale, if we are to be consistently non-speciesist, then decisions about resource allocation should be based not on species membership, but directly or indirectly on sentience. An elephant, for example, is at least as sentient as a human toddler. If it is ethically obligatory to help sick or starving children, then it’s ethically obligatory to help sick or starving elephants – not just via crisis interventions but via long-term healthcare support.

A traditional conservation biologist might respond that elephants helped by humans are no longer truly wild. Yet on such a criterion, clothes-wearing humans or beneficiaries of food aid and family planning aren’t “wild” humans either. Why should this matter? “Free-living” and “wild” are conceptually distinct. To assume that the civilising process should be confined to our own species is mere speciesist prejudice. Humans, transhumans and posthumans must choose what forms of sentience we want to preserve and create on Earth and beyond. Humans already massively intervene in Nature, whether through habitat destruction, captive breeding programs for big cats, “rewilding”, etc. So the question is not whether humans should “interfere”, but rather what ethical principles should govern our interventions(12).

Speciesism and Superintelligence.
Why should transhumanists care about the suffering of nonhuman animals? This is not a “feel-good” issue. One reason we should care cuts to the heart of the future of life in the universe. Transhumanists differ over whether our posthuman successors will most likely be nonbiological artificial superintelligence; or cyborgs who effectively merge with our hyperintelligent machines; or our own recursively self-improving biological descendants who modify their own genetic source code and bootstrap their way to full-spectrum superintelligence(13). Regardless of the dominant lifeform of the posthuman era, biological humans have a vested interest in the behaviour of intellectually advanced beings towards cognitively humble creatures – if we survive at all. Compared to posthuman superintelligence, archaic humans may be no smarter than pigs or chickens – or perhaps worms. This does not augur well for Homo sapiens. Western-educated humans tend to view Jains as faintly ridiculous for practising ahimsa, or harmlessness, sweeping the ground in front of them to avoid inadvertently treading on insects. How quixotic! Yet the fate of sentient but cognitively humble lifeforms in relation to vastly superior intelligence is precisely the issue at stake as we confront the prospect of posthuman superintelligence. How can we ensure a Jain-like concern for comparatively simple-minded creatures such as ourselves? Why should superintelligences care any more than humans about the well-being of their intellectual inferiors? Might distinctively human-friendly superintelligence turn out to be as intellectually-incoherent as, say, Aryan-friendly superintelligence? If human primitives are to prove worthy of conservation, how can we implement technologies of impartial friendliness towards other sentients? And if posthumans do care, how do we know that a truly benevolent superintelligence wouldn’t turn Darwinian life into utilitronium with a communal hug?

Viewed in such a light, biological humanity’s prospects in a future world of superintelligence might seem dire. However, this worry expresses a one-dimensional conception of general intelligence. No doubt the nature of mature superintelligence is humanly unknowable. But presumably full-spectrum(14) superintelligence entails, at the very least, a capacity to investigate, understand and manipulate both the formal and the subjective properties of mind. Modern science aspires to an idealised “view from nowhere”(15), an impartial, God-like understanding of the natural universe, stripped of any bias in perspective and expressed in the language of mathematical physics. By the same token, a God-like superintelligence must also be endowed with the capacity impartially to grasp all possible first-person perspectives – not a partial and primitive Machiavellian cunning of the kind adaptive on the African savannah, but an unimaginably radical expansion of our own fitfully growing circle of empathy.

What such superhuman perspective-taking ability might entail is unclear. We are familiar with people who display abnormally advanced forms of “mind-blind”(16), autistic intelligence in higher mathematics and theoretical physics. Less well known are hyper-empathisers who display unusually sophisticated social intelligence. Perhaps the most advanced naturally occurring hyper-empathisers exhibit mirror-touch synaesthesia(17). A mirror-touch synaesthete cannot be unfriendly towards you because she feels your pain and pleasure as if it were her own. In principle, such unusual perspective-taking capacity could be generalised and extended with reciprocal neuroscanning technology and telemetry into a kind of naturalised telepathy, both between and within species. Interpersonal and cross-species mind-reading could in theory break down hitherto invincible barriers of ignorance between different skull-bound subjects of experience, thereby eroding the anthropocentric, ethnocentric and egocentric bias that has plagued life on Earth to date. Today, the intelligence-testing community tends to treat facility at empathetic understanding as if it were a mere personality variable, or at best some sort of second-rate cognition for people who can’t do IQ tests. But “mind-reading” can be a highly sophisticated, cognitively demanding ability. Compare, say, the sixth-order intentionality manifested by Shakespeare. Thus we shouldn’t conceive superintelligence as akin to God imagined by someone with autistic spectrum disorder. Rather full-spectrum superintelligence entails a God’s-eye capacity to understand the rich multi-faceted first-person perspectives of diverse lifeforms whose mind-spaces humans would find incomprehensibly alien.

An obvious objection arises. Just because ultra-intelligent posthumans may be capable of displaying empathetic superintelligence, how do we know such intelligence will be exercised? The short answer is that we don’t: by analogy, today’s mirror-touch synaesthetes might one day neurosurgically opt to become mind-blind. But then equally we don’t know whether posthumans will renounce their advanced logico-mathematical prowess in favour of the functional equivalent of wireheading. If they do so, then they won’t be superintelligent. The existence of diverse first-person perspectives is a fundamental feature of the natural world, as fundamental as the second law of thermodynamics or the Higgs boson. To be ignorant of fundamental features of the world is to be an idiot savant: a super-Watson(18) perhaps, but not a superintelligence(19).

High-Tech Jainism?
Jules Renard once remarked, “I don’t know if God exists, but it would be better for His reputation if He didn’t.” God’s conspicuous absence from the natural world needn’t deter us from asking what an omniscient, omnipotent, all-merciful deity would want humans to do with our imminent God-like powers. For we’re on the brink of a momentous evolutionary transition in the history of life on Earth. Physicist Freeman Dyson predicts we’ll soon “be writing genomes as fluently as Blake and Byron wrote verses”(20). The ethical risks and opportunities for apprentice deities are huge.

On the one hand, Karl Popper warns, “Those who promise us paradise on earth never produced anything but a hell”(21). Twentieth-century history bears out such pessimism. Yet for billions of sentient beings from less powerful species, existing life on Earth is hell. They end their miserable lives on our dinner plates: “for the animals it is an eternal Treblinka”, writes Jewish Nobel laureate Isaac Bashevis Singer(22).

In a more utopian vein, some utterly sublime scenarios are technically feasible later this century and beyond. It’s not clear whether experience below Sidgwick’s(23) “hedonic zero” has any long-term future. Thanks to molecular neuroscience, mastery of the brain’s reward circuitry could make everyday life wonderful beyond the bounds of normal human experience. There is no technical reason why the pitiless Darwinian struggle of the past half billion years can’t be replaced by an earthly paradise for all creatures great and small. Genetic engineering could allow “the lion to lie down with the lamb.” Enhancement technologies could transform killer apes into saintly smart angels. Biotechnology could abolish suffering throughout the living world. Artificial intelligence could secure the well-being of all sentience in our forward light-cone. Our quasi-immortal descendants may be animated by gradients of intelligent bliss orders of magnitude richer than anything physiologically feasible today.

Such fantastical-sounding scenarios may never come to pass. Yet if so, this won’t be because the technical challenges prove too daunting, but because intelligent agents choose to forgo the molecular keys to paradise for something else. Critically, the substrates of bliss don’t need to be species-specific or rationed. Transhumanists believe the well-being of all sentience(24) is the bedrock of any civilisation worthy of the name.

Also see this related interview with David Pearce on ‘Antispeciesism & Compassionate Stewardship’:

* * *
NOTES

1. How modest? A venerable tradition in philosophical meta-ethics is anti-realism. The meta-ethical anti-realist proposes that claims such as it’s wrong to rape women, kill Jews, torture babies (etc) lack truth value – or are simply false. (cf. JL Mackie, Ethics: Inventing Right and Wrong, Viking Press, 1977.) Here I shall assume that, for reasons we simply don’t understand, the pain-pleasure axis discloses the world’s inbuilt metric of (dis)value. Meta-ethical anti-realists may instead wish to interpret this critique of speciesism merely as casting doubt on its internal coherence rather than a substantive claim that a non-speciesist ethic is objectively true.

2. Extreme violence towards members of other tribes and races can be fitness-enhancing too. See, e.g. Richard Wrangham & Dale Peterson, Demonic Males: Apes and the Origins of Human Violence, Houghton Mifflin, 1997.

3. Fisher SE, Scharff C (2009). “FOXP2 as a molecular window into speech and language”. Trends Genet. 25 (4): 166–77. doi:10.1016/j.tig.2009.03.002. PMID 19304338.

4. Interpersonal and interspecies comparisons of sentience are of course fraught with problems. Comparative studies of how hard a human or nonhuman animal will work to avoid or obtain a particular stimulus give one crude behavioural indication. Yet we can go right down to the genetic and molecular level, e.g. interspecies comparisons of SCN9A genotype. (cf. http://www.pnas.org/content/early/2010/02/23/0913181107.full.pdf) We know that in humans the SCN9A gene modulates pain-sensitivity. Some alleles of SCN9A give rise to hypoalgesia, other alleles to hyperalgesia. Nonsense mutations yield congenital insensitivity to pain. So we could systematically compare the SCN9A gene and its homologues in nonhuman animals. Neocortical chauvinists will still be sceptical of non-mammalian sentience, pointing to the extensive role of cortical processing in higher vertebrates. But recall how neuroscanning techniques reveal that during orgasm, for example, much of the neocortex effectively shuts down. Intensity of experience is scarcely diminished.

5. Held S, Mendl M, Devereux C, and Byrne RW. 2001. “Studies in social cognition: from primates to pigs”. Animal Welfare 10:S209-17.

6. Jonathan Haidt, The Righteous Mind: Why Good People Are Divided by Politics and Religion, Pantheon Books, 2012.

7. Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Viking Press, 1963.

8. http://www.new-harvest.org/

9. “PayPal Founder Backs Synthetic Meat Printing Company”, Wired, August 16 2012. http://www.wired.com/wiredscience/2012/08/3d-printed-meat/

10. https://www.abolitionist.com/reprogramming/elephantcare.html

11. https://www.abolitionist.com/reprogramming/index.html

12. The scholarly literature on the problem of wild animal suffering is still sparse. But perhaps see Arne Naess, “Should We Try To Relieve Clear Cases of Suffering in Nature?”, published in The Selected Works of Arne Naess, Springer, 2005; Oscar Horta, “The Ethics of the Ecology of Fear against the Nonspeciesist Paradigm: A Shift in the Aims of Intervention in Nature”, Between the Species, Issue X, August 2010. http://digitalcommons.calpoly.edu/bts/vol13/iss10/10/ ; Brian Tomasik, “The Importance of Wild-Animal Suffering”, http://www.utilitarian-essays.com/suffering-nature.html ; and the first print-published plea for phasing out carnivorism in Nature, Jeff McMahan’s “The Meat Eaters”, The New York Times. September 19, 2010. http://opinionator.blogs.nytimes.com/2010/09/19/the-meat-eaters/

13. Singularity Hypotheses, A Scientific and Philosophical Assessment, Eden, A.H.; Moor, J.H.; Søraker, J.H.; Steinhart, E. (Eds.) Springer, 2013. http://singularityhypothesis.blogspot.co.uk/p/table-of-contents.html

14. David Pearce, The Biointelligence Explosion. (preprint), 2012. https://www.biointelligence-explosion.com.

15. Thomas Nagel, The View From Nowhere, OUP, 1989.

16. Simon Baron-Cohen (2009). “Autism: the empathizing–systemizing (E-S) theory” (PDF). Ann N Y Acad Sci 1156: 68–80. doi:10.1111/j.1749-6632.2009.04467.x. PMID 19338503.

17. Banissy, M. J. & Ward, J. (2007). Mirror-touch synesthesia is linked with empathy. Nature Neurosci. doi: 10.1038/nn1926.

18. Stephen Baker. Final Jeopardy: Man vs. Machine and the Quest to Know Everything. Houghton Mifflin Harcourt. 2011.

19. Orthogonality or convergence? For an alternative to the convergence thesis, see Nick Bostrom, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents”, 2012, http://www.nickbostrom.com/superintelligentwill.pdf; and Eliezer Yudkowsky, Carl Shulman, Anna Salamon, Rolf Nelson, Steven Kaas, Steve Rayhawk, Zack Davis, and Tom McCabe. “Reducing Long-Term Catastrophic Risks from Artificial Intelligence”, 2010. http://singularity.org/files/ReducingRisks.pdf

20. Freeman Dyson, “When Science & Poetry Were Friends”, New York Review of Books, August 13, 2009.

21. As quoted in Jon Winokur, In Passing: Condolences and Complaints on Death, Dying, and Related Disappointments, Sasquatch Books, 2005.

22. Isaac Bashevis Singer, The Letter Writer, 1964.

23. Henry Sidgwick, The Methods of Ethics. London, 1874, 7th ed. 1907.

24. The Transhumanist Declaration (1998, 2009). http://humanityplus.org/philosophy/transhumanist-declaration/

David Pearce
September 2012

Link to video