Cognitive Biases & In-Group Convergences with Joscha Bach

True & false vs right & wrong – people converge their views to a set of rights and wrongs relative to the in-group biases of their peer group.
As a survival mechanism, convergence within groups is sometimes healthier than being right – so one should sometimes optimize for convergence even at the cost of getting things wrong – which means humans probably have an evolutionary propensity to favor convergence over truth.
However, optimizing for convergence may result in the group mind being more stupid than the smartest people in the group.


Joscha highlights the controversy over James Damore being fired from Google for circulating a memo about biological differences between men and women affecting their abilities as engineers – where Damore’s arguments may be correct – yet regardless of what the facts are about how biological differences affect differences in ability between men and women, Google fired him because it judged that supporting these arguments would make for a worse social environment.

This sort of thing leads to an interesting difference in discourse, where:
* ‘nerds’ tend to focus on ‘content’ – on imparting ideas and facts that everyone can judge autonomously to form their own opinions – in the view that, in order to craft the best solutions, we need the best facts
* for most people, the purpose of communication is ‘coordination’ between individuals and groups (society, nations, etc.) – where the value of a ‘fact’ is its effect on the coordination between people

So is Google’s response to the memo controversy about getting the facts right, or about how Google at this point should be organised?

What’s also really interesting is that different types of people read this ‘memo’ very differently, making it very difficult to form agreement about its content. How can one agree on what’s valuable about communication – whether it’s more about imparting ideas and facts, or more about coordination?

More recently there has been a lot of talk about #FakeNews – where it’s very difficult to get people to agree to things that are not in their own interests – including, as Joscha points out, the idea that truth matters.

Joscha Bach, Ph.D. is an AI researcher who has worked on and published about cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany, and has built computational models of motivated decision making, perception, categorization, and concept-formation. He is especially interested in the philosophy of AI and in the augmentation of the human mind.

Discussion points:
– In-group convergence: thinking in true & false vs right & wrong
– The group mind may be more stupid than the smartest individuals in the group

The Generative Universe Hypothesis

Remembering Lee Smolin’s theory of the dynamical evolution of the universe – where, through a form of natural selection, black holes spawn new universes – I thought that if a superintelligent civilization understood its mechanics, it might try to control it, engineering or biasing the physics in the spawned universe – and possibly migrating to this new universe. If such a civ found out how to communicate along the parent/child relations between universes, it might be an energy-efficient way to achieve some of the outcomes of simulations (as described in Nick Bostrom’s Simulation Hypothesis).
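Smolin’s selection mechanism can be caricatured in a few lines of code – this is purely my illustrative toy (the single ‘constant’, the fitness curve, and the mutation scale are all invented for the sketch), in which universes whose physics yields more black holes leave more offspring universes:

```python
import random

# Toy model of Smolin-style cosmological natural selection. Purely illustrative:
# the single "constant", the fitness curve, and the mutation scale are invented.
# Each universe has one physical constant; universes whose constant lies nearer
# a (made-up) optimum spawn more black holes, i.e. more child universes.

OPTIMUM = 0.5      # hypothetical value of the constant maximizing black holes
MUTATION = 0.05    # child constants differ slightly from the parent's

def offspring_count(constant):
    """Black holes (child universes) produced: peaks at OPTIMUM, falls to zero."""
    fitness = max(0.0, 1.0 - abs(constant - OPTIMUM) / 0.5)
    return int(fitness * 4)   # 0..4 children

def next_generation(universes):
    children = []
    for c in universes:
        for _ in range(offspring_count(c)):
            children.append(c + random.gauss(0, MUTATION))
    return children or universes   # toy guard against total extinction

random.seed(1)
population = [random.uniform(0, 1) for _ in range(50)]
for _ in range(20):
    population = next_generation(population)
    if len(population) > 200:                 # cap the population size
        population = random.sample(population, 200)

mean_constant = sum(population) / len(population)
print(round(mean_constant, 2))   # drifts toward OPTIMUM under selection
```

Under this toy selection pressure the average ‘constant’ converges toward whatever value maximizes black-hole production – which is the shape of Smolin’s argument, with none of its physics.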

The idea of moving to a more hospitable universe could be such a strong attractor to post-singularity civs that, once discovered, it may be an obvious choice for a variety of reasons:
* Better computation through faster/easier networking – say, for instance, that the speed of light were a lot faster, and information could travel over longer distances than in this universe – then network speed may not be as much of a hindrance to developing larger civs, distributed computation, and mega-scale galactic brains.
* As a means of escape – if it so happened that neighbouring alien civs were close enough to pose a threat, then escaping this universe to a newly generated universe could be ideal – especially if one could close the door behind, or lay a trap at the opening to the generated universe to capture probes or ships that weren’t one’s own.
* Mere curiosity – it may not be full-blown utility maximization that is the lone object of the endeavour; it could be simple curiosity about how (stable) universes might operate if fine-tuned differently. (How far can you take simulations in this universe to test how hypothetical universes could operate, without actually generating and testing the universes?)
* To escape the ultimate fate of this universe – according to the most popular current estimates, we have about 10^100 years until the heat death of this universe.
* Better computation in a ‘cooler’ environment – a colder yet stable universe to compute in, similar to the previous point and the first point. Some hypothesise that civs may sleep until the universe gets colder, when computation can be done far more efficiently – these civs long for the heat death so that they can really get started with whatever projects they have in mind that require the computing power only made possible by the extremely low temperatures abundantly available at or near the heat death. Well, what if you could engineer a universe to achieve temperatures far lower than those available in this universe, while also keeping that universe relatively steady (say that’s something that’s needed)? If that could be achieved sooner by a generative-universe solution than by waiting around for this universe’s heat death, then why not?
* Fault tolerance – distributing a civ across (generated) universes may preserve the civ against the risk of the current one going unexpectedly pear-shaped – the more fault tolerance the merrier.
* Load balancing – if it’s possible to communicate between parent/child relationships, then civs may generate universes merely to act as containers for computation, helping solve really big problems far faster, or scaffolding extremely detailed virtual realities far more efficiently – less lag, less jitter – deeper immersion!
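The networking point can be made concrete with some standard back-of-envelope numbers (roughly: Milky Way diameter ≈ 100,000 light-years; light in optical fibre travels at about 2/3 c):

```python
# Back-of-envelope: signalling latency is the bottleneck for a 'galactic brain'
# in this universe. Figures are rough but standard; nothing exotic is assumed.

LIGHT_SPEED_KM_S = 299_792        # speed of light in vacuum, km/s
GALAXY_DIAMETER_LY = 100_000      # Milky Way diameter in light-years (approx.)

# One-way message across the galaxy at c takes, by definition of the light-year:
galactic_one_way_years = GALAXY_DIAMETER_LY   # 100,000 years

# Compare with a ~7,000 km transatlantic fibre link (light in fibre ~ 2/3 c):
fibre_speed_km_s = LIGHT_SPEED_KM_S * 2 / 3
fibre_one_way_ms = 7_000 / fibre_speed_km_s * 1000

print(galactic_one_way_years, round(fibre_one_way_ms))  # ~100,000 years vs ~35 ms
```

A mind spread across the galaxy would wait 100,000 years for a single one-way message; a generated universe engineered with a higher signalling speed would relax exactly this constraint.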

Perhaps we will find evidence of alien civs around black holes, generating and testing new universes before taking the leap to transcend, so to speak.

Why leave the future evolution of universes up to blind natural selection? Advanced post-singularity alien civs might work out an extremely strict set of criteria allowing for the formation of the right kinds of matter and energy in child universes – either to mirror our own universe or, more likely, to take it up a notch or two, to new levels of interestingness. While computational capacity is limited when constrained by the laws of this containing universe, it may be that spawning a new universe could allow for more interesting and efficient computation.

It may also be a great way to escape the heat death of the universe 🙂

I spoke about the idea with Andrew Arnel a while ago while out for a drink, where I came up with a really cool name for this idea – though I can’t remember what it was 🙂  perhaps it only sounds good after a few beers – perhaps it was something like the ‘generative’, spawnulation or ‘genulation’ hypothesis…


Update: more recently I also commented on this idea on a FB post by Mike Johnson:
I may have a similar idea relating to Smolin’s Darwinian black-hole universe generation. Why build simulations where it would be more efficient to actually generate new universes not computationally bounded by, or contained within, the originating universe – by nudging the physics that would emerge in the new universe to be more able to support flourishing life, more computation, and wider novelty possibility spaces?

Furthermore, I spoke to Sundance Bilson-Thompson (a physicist in Australia who was supervised by Lee Smolin) about whether what influences the physics in the child universes is local phenomena surrounding the black hole in the parent universe, or global phenomena of the parent universe. He said it was global phenomena, based on something to do with the way stars are formed. So this might lower my credence in the Generative Universe hypothesis as it pertains to Lee Smolin’s idea – though I need to find out whether the nature of the generated child universes could still be nudged or engineered.

Why Technology Favors a Singleton over a Tyranny

Is democracy losing its credibility – will it cede to dictatorship? Will AI out-compete us in all areas of economic usefulness, making us the future useless class?

It’s difficult to get around the bottlenecks of networking and coordination in distributed democracies. In the past, distributed systems, being scattered, were quite naturally more redundant – in many ways fault-tolerant and adaptive – though these payoffs may dwindle for most of us if humans become less and less able to compete with Ex Machina. If the relative efficiency of democracies versus dictatorships tips towards the latter, nudging a transition to centralized dictatorships, then while some distribution and coordination problems get solved, the concentration of resource allocation may be exaggerated beyond historical examples of tyranny. Where the once-proletariat, now the new ‘useless class’, has little to no utility to the concentration of power – the top 0.001% – the would-be tyrants will likely give up on ruling and tyrannizing, and instead find it easier to cull the resource-hungry, rights-demanding horde – more efficient that way. Ethics is fundamental to fair progress – ethics is philosophy with a deadline creeping closer – what can we do to increase the odds of a future where the value of life is evaluated beyond its economic usefulness?
I found ‘Why Technology Favors Tyranny’ by Yuval Noah Harari a good read – I enjoy his writing, and it provokes me to think. About 5 years ago I did the ‘A Brief History of Humankind’ course via Coursera – urging my friends to join me. Since then Yuval has taken the world by storm.
The biggest and most frightening impact of the AI revolution might be on the relative efficiency of democracies and dictatorships. […] We tend to think about the conflict between democracy and dictatorship as a conflict between two different ethical systems, but it is actually a conflict between two different data-processing systems. Democracy distributes the power to process information and make decisions among many people and institutions, whereas dictatorship concentrates information and power in one place. Given 20th-century technology, it was inefficient to concentrate too much information and power in one place. Nobody had the ability to process all available information fast enough and make the right decisions. […]
– Why Technology Favors Tyranny
I assume AI superintelligence is highly probable if we don’t go extinct first. For the same reason that the proletariat becomes useless, I think the AI/Human combination will ultimately become useless too, and cede to superintelligent AI – so all humans become useless. The bourgeoisie elite may initially feel safe in the idea that they don’t need to be useful, they just need to maintain control of power. Though the sliding relative dumbness of the bourgeoisie compared to superintelligence will worry them… perhaps not long after wiping out the useless class, the elite bourgeoisie will see the importance of the AI control problem, and that their days are numbered too – at which point, will they see ethics, and the value of life beyond economic usefulness, as important?
However, artificial intelligence may soon swing the pendulum in the opposite direction. AI makes it possible to process enormous amounts of information centrally. In fact, it might make centralized systems far more efficient than diffuse systems, because machine learning works better when the machine has more information to analyze. If you disregard all privacy concerns and concentrate all the information relating to a billion people in one database, you’ll wind up with much better algorithms than if you respect individual privacy and have in your database only partial information on a million people. An authoritarian government that orders all its citizens to have their DNA sequenced and to share their medical data with some central authority would gain an immense advantage in genetics and medical research over societies in which medical data are strictly private. The main handicap of authoritarian regimes in the 20th century—the desire to concentrate all information and power in one place—may become their decisive advantage in the 21st century.
– Why Technology Favors Tyranny
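Harari’s data-concentration point can be illustrated with a toy estimation problem – my sketch, not his (the ‘population rate’ and sample sizes are made up): statistical error shrinks roughly as 1/√N, so whoever holds more data gets systematically better estimates.

```python
import random

# Toy illustration: two regimes estimate the same underlying rate, one from a
# small sample, one from a large centralized one. Sampling error scales
# roughly as 1/sqrt(N), so more data means systematically better estimates.

random.seed(0)
TRUE_RATE = 0.30   # hypothetical population-level rate being learned

def estimate(n):
    hits = sum(random.random() < TRUE_RATE for _ in range(n))
    return hits / n

small = estimate(1_000)       # scattered, privacy-respecting data analogue
large = estimate(1_000_000)   # 'everything in one database' analogue

print(abs(small - TRUE_RATE), abs(large - TRUE_RATE))
```

In the quote’s terms: the regime with a billion records in one place doesn’t just have more data – it gets strictly lower-noise models than the regime holding scattered partial records.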
Yuval Noah Harari believes that we could be heading for a technologically enabled tyranny as AI automates all jobs away and we become the useless class. Though if superintelligence is likely, then humans will likely be a bottleneck in any AI/Human hybrid use case – if tyranny happens, it won’t last for long – what use is a useless class to the elite?

Technology without ethics favors singleton utility monsters – not a tyranny – what use is it to tyrannize over a useless class?

Physicalism & Materialism – John Wilkins

Materialism was a pre-Socratic view that for something to be real it has to be matter – physical stuff made of atoms (which at the time were considered hard like billiard balls – fundamental parts of reality). The term physicalism is used these days because it can cover things that aren’t matter – like forces – or aren’t observable matter – like dark matter – or energy, fields, spacetime, etc. Physicalism is the idea that all that exists can be described in the language of some ‘ideal’ physics – we may never know what this ideal physics is, though people think it is something close to our current physics (since we can make very accurate predictions with our current physics).

If magic, telepathy or angels were real, there would be a physics that could describe them – they’d have patterns and properties that would be describable and explainable. A physicalist would likely think that even the mind operates according to physical rules. Being a physicalist, according to John, means you think everything is governed by rules – physical rules – and that there is an ideal language that can be used to describe all this.

Note John is also a deontologist. Perhaps there could exist an ideal language that can fully describe ethics – would this mean that, ideally, there is no need for utilitarianism? I’ll leave that question for another post.

Interview with John Wilkins on Materialism & Physicalism.

Here are some blog posts about physicalism by John Wilkins:

Is physicalism an impoverished metaphysics?

Every so often, we read about some philosopher or other form of public intellectual who makes the claim that a physicalist ontology – a world view in which only things that can be described in terms of physics are said to exist – is impoverished. That is, there are things whereof science cannot know, &c. A recent example is that made by Thomas Nagel [nicely eviscerated here by the physicist Sean Carroll], whose fame in philosophy rests with an influential 1974 paper arguing that there is something it is like to be a bat that no amount of physics, physiology or other objective science could account for.

Recently, Nagel has argued that the evolutionary view called (historically misleadingly) neo-Darwinism is “almost certainly” false. One of the reasons is that “materialism” (which Nagel should know is an antiquated world view replaced by physicalism defined above; there are many non-material things in physics, not least fields of various kinds) does not permit a full account of consciousness; the subjective facts of being a particular individual organism. Another is that the chance that life would emerge from a lifeless universe is staggeringly unlikely. How this is calculated is somewhat mysterious, given that at best we only have (dare I say it?) subjective estimates anyway, but there it is.

But Nagel is not alone. Various nonreligious (apparently) thinkers have made similar assertions, although some, like Frank Jackson, who proposed the Knowledge Argument, have since backed down. What is it that physicalism must account for that these disputants and objectors say it cannot?

It almost entirely consists of consciousness, intentions, intelligence or some similar mental property which is entirely inexplicable by “reductionist” physicalism. [Reductionism is a term of abuse that means – so far as I can tell – solely that the person who makes such an accusation does not like the thing or persons being accused.] And that raises our question: is physicalism lacking something?

I bet you are dying to know more… you’ll just have to follow the link…
See more at Evolving Thoughts>>

Is Physicalism Coherent?

In my last post I argued that physicalism cannot be rejected simply because people assert there are nonphysical objects which are beyond specification. Some are, however, specifiable, and one commentator has identified the obvious ones: abstract objects like the rules of chess or numbers. I have dealt with these before in my “Pizza reductionism” post, which I invite you to go read.

Done? OK, then; let us proceed.

It is often asserted that there are obviously things that are not physical, such as ideas, numbers, concepts, etc., quite apart from qualia. I once sat with a distinguished philosopher, who I respect greatly and so shall not name, when he asserted that we can construct natural classifications because we can deal first with the natural numbers. I asked him “In what sense are numbers natural objects?”, meaning, why should we think numbers are entities in the natural world. He admitted that the question had not occurred to him (I doubt that – he is rather smart), but that it was simply an axiom of his philosophy. I do not think such abstract objects are natural.

This applies to anything that is “informational”, including all semantic entities like meanings, symbols, lexical objects, and so on. They only “exist” as functional modalities in our thoughts and language. I have also argued this before: information does not “exist”; it is a function of how we process signals. Mathematics is not a domain, it is a language, and the reason it works is because the bits that seriously do not work are not explored far[*] – not all of it has to work in a physical or natural sense, but much of it has to, or else it becomes a simple game that we would not play so much.

So the question of the incoherence of physicalism is based on the assumption (which runs contrary to physicalism, and is thus question begging) that abstract objects are natural things. I don’t believe they are, and I certainly do not think that a thought, or concept, for example, which can be had by many minds and is therefore supposed to be located in none of them (and thus transcendental), really is nonphysical. That is another case of nouning language. The thought “that is red” exists, for a physicalist, in all the heads that meet the functional social criteria for ascriptions of red. It exists nowhere else – it just is all those cognitive and social behaviours in biological heads…

Yes, I know, it’s a real page turner…
See more at Evolving Thoughts>>

In philosophy, physicalism is the ontological thesis that “everything is physical”, that there is “nothing over and above” the physical, or that everything supervenes on the physical. Physicalism is a form of ontological monism—a “one substance” view of the nature of reality as opposed to a “two-substance” (dualism) or “many-substance” (pluralism) view. Both the definition of physical and the meaning of physicalism have been debated. Physicalism is closely related to materialism. Physicalism grew out of materialism with the success of the physical sciences in explaining observed phenomena. The terms are often used interchangeably, although they are sometimes distinguished, for example on the basis of physics describing more than just matter (including energy and physical law). Common arguments against physicalism include both the philosophical zombie argument and the multiple observers argument, that the existence of a physical being may imply zero or more distinct conscious entities.

“When I lost my belief in religion I had to decide what I needed to accept as a bare minimum. I decided that I needed to believe in the physical world. I never found the slightest reason to accept the existence of anything else. To this day I am a physicalist only because I never found the need to be anything else. The principle of parsimony suggests that one should not believe in more than one needs to. Even if it does make you feel comfortable.”


Let’s get physicalism!

See John Wilkins’ blog ‘Evolving Thoughts’

#philsci #philosophy #science #physics

On Consciousness, Qualia, Valence & Intelligence with Mike Johnson

Andrés L. Gómez Emilsson

Andrés Gómez Emilsson joined in to add very insightful questions for a 3-part interview series with Mike Johnson, covering the relationship of metaphysics to qualia/consciousness/hedonic valence, defining their terms, whether panpsychism matters, increasing sensitivity to bliss, valence variance, Effective Altruism, cause prioritization, and the importance of consciousness/valence research.
Andrés Gómez Emilsson interviews Mike Johnson

Carving Reality at the Joints

Andrés L. Gómez Emilsson: Do metaphysics matter for understanding qualia, consciousness, valence and intelligence?

Mike Johnson: If we define metaphysics as the study of what exists, it absolutely does matter for understanding qualia, consciousness, and valence. I think metaphysics matters for intelligence, too, but in a different way. The big question is whether terms like qualia, consciousness, and valence “carve reality at the joints” or whether they’re emergent linguistic constructs that don’t reflect the structure of the universe. And if these things are ‘real’ in some sense, the follow-up question is: how can we formalize these concepts? Intelligence seems different: it seems like a ‘fuzzy’ concept, without a good “crisp”, or frame-invariant, definition.

Andrés: What about sources of sentient valence outside of human brains? What is the “minimum viable valence organism”? What would you expect it to look like?

Mike Johnson

Mike: If some form of panpsychism is true – and it’s hard to construct a coherent theory of consciousness without allowing panpsychism – then I suspect two interesting things are true.
  1. A lot of things are probably at least a little bit conscious. The “minimum viable valence experiencer” could be pretty minimal. Both Brian Tomasik and Stuart Hameroff suggest that there could be morally-relevant experience happening at the level of fundamental physics. This seems highly counter-intuitive but also logically plausible to me.
  2. Biological organisms probably don’t constitute the lion’s share of moral experience. If there’s any morally-relevant experience that happens on small levels (e.g., quantum fuzz) or large levels (e.g., black holes, or eternal inflation), it probably outweighs what happens on Earth by many, many, many orders of magnitude. Whether it’ll outweigh the future impact of humanity on our light-cone is an open question.

The big question is whether terms like qualia, consciousness, and valence “carve reality at the joints” or whether they’re emergent linguistic constructs that don’t reflect the structure of the universe. And if these things are ‘real’ in some sense, the follow-up question is: how can we formalize these concepts?

In contrast with Brian Tomasik on this issue, I suspect (and hope) that the lion’s share of the qualia of the universe is strongly net positive. Appendix F of Principia Qualia talks a little more about this.

Andrés: What would be the implications of finding a sure-fire way to induce great valence for brief moments? Could this be used to achieve “strategic alignment” across different branches of utilitarianism?

Mike: A device that could temporarily cause extreme positive or negative valence on demand would immediately change the world. First, it would validate valence realism in a very visceral way. I’d say it would be the strongest philosophical argument ever made. Second, it would obviously have huge economic & ethical uses. Third, I agree that being able to induce strong positive & negative valence on demand could help align different schools of utilitarianism. Nothing would focus philosophical arguments about the discount rate between pleasure & suffering more than a (consensual!) quick blast of pure suffering followed by a quick blast of pure pleasure. Similarly, a lot of people live their lives in a rather numb state. Giving them a visceral sense that ‘life can be more than this’ could give them ‘skin in the game’. Fourth, it could mess a lot of things up. Obviously, being able to cause extreme suffering could be abused, but being able to cause extreme pleasure on-demand could lead to bad outcomes too. You (Andrés) have written about wireheading before, and I agree with the game-theoretic concerns involved. I would also say that being able to cause extreme pleasure in others could be used in adversarial ways. More generally, human culture is valuable and fragile; things that could substantially disrupt it should be approached carefully. A friend of mine was describing how in the 70s, the emerging field of genetic engineering held the Asilomar Conference on Recombinant DNA to discuss how the field should self-regulate.
The next year, these guidelines were adopted by the NIH wholesale as the basis for binding regulation, and other fields (such as AI safety!) have attempted to follow the same model. So the culture around technologies may reflect a strong “founder effect”, and we should be on the lookout for a good, forward-looking set of principles for how valence technology should work. One principle that seems to make sense is to not publicly post ‘actionable’ equations, pseudocode, or code for how one could generate suffering with current computing resources (if this is indeed possible). Another principle is to focus resources on positive, eusocial applications only, insofar as that’s possible– I’m especially concerned about addiction, and bad actors ‘weaponizing’ this sort of research. Another would be to be on guard against entryism, or people who want to co-opt valence research for political ends. All of this is pretty straightforward, but it would be good to work it out a bit more formally, look at the successes and failures of other research communities, and so on.

A question I find very interesting is whether valence research is socially disruptive or socially stabilizing by default. I think we should try very hard to make it a socially stabilizing force.

A question I find very interesting is whether valence research is socially disruptive or socially stabilizing by default. I think we should try very hard to make it a socially stabilizing force. One way to think about this is in terms of existential risk. It’s a little weird to say, but I think the fact that so many people are jaded, or feel hopeless, is a big existential risk, because they feel like they have very little to lose. So they don’t really care what happens to the world, because they don’t have good qualia to look forward to, no real ‘skin in the game’. If valence tech could give people a visceral, ‘felt sense’ of wonder and possibility, I think the world could become a much safer place, because more people would viscerally care about AI safety, avoiding nuclear war, and so on. Finally, one thing that I think doesn’t make much sense is handing off the ethical issues to professional bioethicists and expecting them to be able to help much. Speaking as a philosopher, I don’t think bioethics itself has healthy community & research norms (maybe bioethics needs some bioethicsethicists…). And in general, especially when issues are particularly complex or technical, I think the best research norms come from within the community itself.

Andrés: What is the role of valence variance in intelligence? Can a sentient being use its consciousness in any computationally fruitful way without any valence variance? Can a “perfectly flat world(-simulation)” be used for anything computational?

Mike: I think we see this today, with some people suffering from affective blunting (muted emotions) but seemingly living functional lives. More generally, what a sentient agent functionally accomplishes, and how it feels as it works toward that goal, seem to be correlated but not identical. I.e., one can vary without the other. But I don’t think that valence is completely orthogonal to behavior, either.
Why we seek out pleasure: the Symmetry Theory of Homeostatic Regulation

My one-sentence explanation here is that evolution seems to have latched onto the property which corresponds to valence – which I argue is symmetry – in deep ways, and has built our brain-minds around principles of homeostatic symmetry. This naturally leads to a high variability in our valence, as our homeostatic state is perturbed and restored. Logically, we could build minds around different principles – but it might be a lot less computationally efficient to do so. We’ll see. 🙂 One angle of research here could be looking at people who suffer from affective blunting, and trying to figure out if it holds them back: what it makes them bad at doing. It’s possible that this could lead to understanding human-style intelligence better. Going a little further, we can speculate that given a certain goal or computation, there could be “valence-positive” processes that could accomplish it, and “valence-negative” processes. This implies that there’s a nascent field of “ethical computation” that would evaluate the valence of different algorithms running on different physical substrates, and choose the one that best satisfices between efficiency and valence. (This is of course a huge simplification which glosses over tons of issues…)
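The perturb-and-restore dynamic Mike describes can be sketched as a trivial control loop – this is my illustration only, not a model from Principia Qualia (the set point, the gain, and the ‘valence = negative deviation’ proxy are all invented):

```python
import random

# Minimal toy of "homeostatic" valence dynamics: the system is knocked away
# from a set point by occasional perturbations, and a simple proportional
# controller restores it. "Valence" here is just the negative distance from
# the set point, so perturb-and-restore cycles produce valence variance.

random.seed(42)
SET_POINT = 0.0
GAIN = 0.2   # how aggressively the controller corrects deviations

state = 0.0
valences = []
for t in range(500):
    if t % 50 == 0:
        state += random.uniform(-2, 2)        # occasional perturbation
    state -= GAIN * (state - SET_POINT)       # homeostatic correction
    valences.append(-abs(state - SET_POINT))  # crude valence proxy

mean_v = sum(valences) / len(valences)
variance = sum((v - mean_v) ** 2 for v in valences) / len(valences)
print(round(variance, 3))  # nonzero: perturb-and-restore creates valence variance
```

Flattening the perturbations (or zeroing the gain’s work by never disturbing the state) drives the valence variance toward zero – a crude analogue of the ‘perfectly flat world’ in Andrés’ question.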
Andrés: What should we prioritize: super-intelligence, super-longevity or super-happiness? Does the order matter? Why?

Mike: I think it matters quite a bit! For instance, I think the world looks a lot different if we figure out consciousness *before* AGI, versus if we ignore it until AGI is built. The latter seems to involve various risks that the former doesn’t. A risk that I think we both agree is serious and real is this notion of “what if accelerating technology leads to Malthusian conditions where agents don’t – and literally can’t, from a competitive standpoint – care about qualia & valence?” Robin Hanson has a great post called “This is the Dream Time” (of relaxed selection). But his book “Age of Em” posits a world where selection pressures go back up very dramatically. I think if we enter such an era without a good theory of qualia, we could trade away a lot of what makes life worth living.

Andrés: What are some conceptual or factual errors that you see happening in the transhumanist/rationalist/EA community related to modeling qualia, valence and intelligence?

Mike: First, I think it’s only fair to mention what these communities do right. I’m much more likely to have a great conversation about these topics with EAs, transhumanists, and rationalists than a random person off the street, or even a random grad student. People from this community are always smart, usually curious, often willing to explore fresh ideas and stretch their brain a bit, and sometimes able to update based on purely abstract arguments. And there’s this collective sense that ideas are important and have real implications for the future. So there’s a lot of great things happening in these communities and they’re really a priceless resource for sounding out theories, debating issues, and so on. But I would highlight some ways in which I think these communities go astray.

First, people don’t realize how bad most existing models of qualia & valence are. Michael Graziano argues that most theories of consciousness are worse than wrong- that they play to our intuitions but don’t actually explain anything. Computationalism, functionalism, fun theory, ‘hedonic brain regions’, ‘pleasure neurochemicals’, the reinforcement learning theory of valence, and so on all give the illusion of explanatory depth but don’t actually explain things in a way which allows us to do anything useful. Second, people don’t realize how important a good understanding of qualia & valence is. They’re upstream of basically everything interesting and desirable. Here’s what I think has happened, at least in the rationalist community: historically, consciousness research has been a black hole. Smart people go in, but nothing comes out. So communities (such as physicists and LessWrong) naturally have an interest in putting up a fence around the topic with a sign that says ‘Don’t go here!’ But over time, people forgot why the mystery was blocked off, and started to think that the mystery doesn’t exist. This leads to people actively avoiding thinking about these topics without being able to articulate why.

Andrés: Is there value in studying extreme cases of valence? E.g. Buddhist monks who claim to achieve extreme sustainable bliss, or people on MDMA? Mike: ‘What science can analyze, science can duplicate.’ And studying outliers such as your examples is a time-honored way of gathering data with high signal-to-noise. So yes, definitely. 🙂
Also see the 1st part and the 2nd part of this interview series. This interview with Christof Koch will also likely be of interest.
Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is Co-founder of the Qualia Research Institute. Much of Mike’s research and writings can be found at the Open Theory website. ‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary. If you like Mike’s work, consider helping fund it at Patreon.

Ethics, Qualia Research & AI Safety with Mike Johnson

What’s the relationship between valence research and AI ethics?

Hedonic valence is a measure of the quality of our felt sense of experience, the intrinsic goodness (positive valence) or averseness (negative valence) of an event, object, or situation. It is an important aspect of conscious experience, one that is always present in our waking lives. If we seek to understand ourselves, it makes sense to seek to understand how valence works – how to measure it and test for it.

Also, might there be a relationship to the AI safety/friendliness problem?
In this interview, we cover a lot of things, not least .. THE SINGULARITY (of course) & the importance of Valence Research to AI Friendliness Research (as detailed here). Will thinking machines require experience with valence to understand its importance?

Here we cover some general questions about Mike Johnson’s views on recent advances in science and technology & what he sees as being the most impactful, what world views are ready to be retired, his views on XRisk and on AI Safety – especially related to value theory.

This is one part of an interview series with Mike Johnson (another section on Consciousness, Qualia, Valence & Intelligence).


Adam Ford: Welcome Mike Johnson, many thanks for doing this interview. Can we start with your background?

Mike Johnson

Mike Johnson: My formal background is in epistemology and philosophy of science: what do we know & how do we know it, what separates good theories from bad ones, and so on. Prior to researching qualia, I did work in information security, algorithmic trading, and human augmentation research.


Adam: What is the most exciting / interesting recent (scientific/engineering) news? Why is it important to you?

Mike: CRISPR is definitely up there! In a few short years precision genetic engineering has gone from a pipe dream to reality. The problem is that we’re like the proverbial dog that caught up to the car it was chasing: what do we do now? Increasingly, we can change our genome, but we have no idea how we should change our genome, and the public discussion about this seems very muddled. The same could be said about breakthroughs in AI.


Adam: What are the most important discoveries/inventions over the last 500 years?

Mike: Tough question. Darwin’s theory of Natural Selection, Newton’s theory of gravity, Faraday & Maxwell’s theory of electricity, and the many discoveries of modern physics would all make the cut. Perhaps also the germ theory of disease. In general what makes discoveries & inventions important is when they lead to a productive new way of looking at the world.


Adam: What philosophical/scientific ideas are ready to be retired? What theories of valence are ready to be relegated to the dustbin of history? (Why are they still in currency? Why are they in need of being thrown away or revised?)

Mike: I think that 99% of the time when someone uses the term “pleasure neurochemicals” or “hedonic brain regions” it obscures more than it explains. We know that opioids & activity in the nucleus accumbens are correlated with pleasure– but we don’t know why, we don’t know the causal mechanism. So it can be useful shorthand to call these things “pleasure neurochemicals” and whatnot, but every single time anyone does that, there should be a footnote that we fundamentally don’t know the causal story here, and this abstraction may ‘leak’ in unexpected ways.


Adam: What have you changed your mind about?

Mike: Whether pushing toward the Singularity is unequivocally a good idea. I read Kurzweil’s The Singularity is Near back in 2005 and loved it- it made me realize that all my life I’d been a transhumanist and didn’t know it. But twelve years later, I’m a lot less optimistic about Kurzweil’s rosy vision. Value is fragile, and there are a lot more ways that things could go wrong, than ways things could go well.


Adam: I remember reading Eliezer’s writings on ‘The Fragility of Value’, it’s quite interesting and worth consideration – the idea that if we don’t get AI’s value system exactly right, then it would be like pulling a random mind out of mindspace – most likely inimical to human interests. The writing did seem quite abstract, and it would be nice to see a formal model or something concrete to show this would be the case. I’d really like to know how and why value is as fragile as Eliezer seems to make out. Is there any convincing crisply defined model supporting this thesis?

Mike: Whether the ‘Complexity of Value Thesis’ is correct is super important. Essentially, the idea is that we can think of what humans find valuable as a tiny location in a very large, very high-dimensional space– let’s say 1000 dimensions for the sake of argument. Under this framework, value is very fragile; if we move a little bit in any one of these 1000 dimensions, we leave this special zone and get a future that doesn’t match our preferences, desires, and goals. In a word, we get something worthless (to us). This is perhaps most succinctly put by Eliezer in “Value is fragile”:

“If you loose the grip of human morals and metamorals – the result is not mysterious and alien and beautiful by the standards of human value. It is moral noise, a universe tiled with paperclips. To change away from human morals in the direction of improvement rather than entropy, requires a criterion of improvement; and that criterion would be physically represented in our brains, and our brains alone. … You want a wonderful and mysterious universe? That’s your value. … Valuable things appear because a goal system that values them takes action to create them. … if our values that prefer it are physically obliterated – or even disturbed in the wrong dimension. Then there is nothing left in the universe that works to make the universe valuable.”

If this frame is right, then it’s going to be really really really hard to get AGI right, because one wrong step in programming will make the AGI depart from human values, and “there will be nothing left to want to bring it back.” Eliezer, and I think most of the AI safety community, assume this.
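The compounding at the heart of this thesis can be sketched with a toy calculation (an illustrative model of my own, not one from the interview or from Eliezer): if “human value” requires each of N independent dimensions to stay within tolerance, the chance of preserving it shrinks exponentially in N, so even 99% fidelity per dimension fails almost surely at 1000 dimensions.

```python
# Toy model (illustrative assumption, not from the interview): treat "human
# value" as a narrow target region in an N-dimensional space, where each
# dimension independently stays within tolerance with some fixed probability.
def p_value_preserved(dims: int, per_dim_fidelity: float = 0.99) -> float:
    """Probability that all `dims` dimensions stay within tolerance."""
    return per_dim_fidelity ** dims

print(p_value_preserved(10))    # ~0.904: still likely at 10 dimensions
print(p_value_preserved(1000))  # ~4.3e-05: nearly certain failure at 1000
```

Under this (deliberately crude) independence assumption, fragility is just the exponential decay of a product of near-1 factors; the open question Mike raises next is whether value actually has this conjunctive, high-dimensional structure at all.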

But– and I want to shout this from the rooftops– the complexity of value thesis is just a thesis! Nobody knows if it’s true. An alternative here would be, instead of trying to look at value in terms of goals and preferences, we look at it in terms of properties of phenomenological experience. This leads to what I call the Unity of Value Thesis, where all the different manifestations of valuable things end up as special cases of a more general, unifying principle (emotional valence). What we know from neuroscience seems to support this: Berridge and Kringelbach write about how “The available evidence suggests that brain mechanisms involved in fundamental pleasures (food and sexual pleasures) overlap with those for higher-order pleasures (for example, monetary, artistic, musical, altruistic, and transcendent pleasures).” My colleague Andres Gomez Emilsson writes about this in The Tyranny of the Intentional Object. Anyway, if this is right, then the AI safety community could approach the Value Problem and Value Loading Problem much differently.


Adam: I’m also interested in the nature of possible attractors that agents might ‘extropically’ gravitate towards (like a thirst for useful and interesting novelty, generative and non-regressive, that might not neatly fit categorically under ‘happiness’) – I’m not wholly convinced that they exist, but if one leans away from moral relativism, it makes sense that a superintelligence may be able to discover or extrapolate facts from all physical systems in the universe, not just humans, to determine valuable futures and avoid malignant failure modes (Coherent Extrapolated Value if you will). Being strongly locked into optimizing human values may be a non-malignant failure mode.

Mike: What you write reminds me of Schmidhuber’s notion of a ‘compression drive’: we’re drawn to interesting things because getting exposed to them helps build our ‘compression library’ and lets us predict the world better. But this feels like an instrumental goal, sort of a “Basic AI Drives” sort of thing. Would definitely agree that there’s a danger of getting locked into a good-yet-not-great local optimum if we hard optimize on current human values.

Probably the danger is larger than that too – as Eric Schwitzgebel notes,

“Common sense is incoherent in matters of metaphysics. There’s no way to develop an ambitious, broad-ranging, self-consistent metaphysical system without doing serious violence to common sense somewhere. It’s just impossible. Since common sense is an inconsistent system, you can’t respect it all. Every metaphysician will have to violate it somewhere.”

If we lock in human values based on common sense, we’re basically committing to following an inconsistent formal system. I don’t think most people realize how badly that will fail.


Adam: What invention or idea will change everything?

Mike: A device that allows people to explore the space of all possible qualia in a systematic way. Right now, we do a lot of weird things to experience interesting qualia: we drink fermented liquids, smoke various plant extracts, strap ourselves into rollercoasters, parachute out of planes, and so on, to give just a few examples. But these are very haphazard ways to experience new qualia! When we’re able to ‘domesticate’ and ‘technologize’ qualia, like we’ve done with electricity, we’ll be living in a new (and, I think, incredibly exciting) world.


Adam: What are you most concerned about? What ought we be worrying about?

Mike: I’m worried that society’s ability to coordinate on hard things seems to be breaking down, and about AI safety. Similarly, I’m also worried about what Eliezer Yudkowsky calls ‘Moore’s Law of Mad Science’, that steady technological progress means that ‘every eighteen months the minimum IQ necessary to destroy the world drops by one point’. But I think some very smart people are worrying about these things, and are trying to address them.

In contrast, almost no one is worrying that we don’t have good theories of qualia & valence. And I think we really, really ought to, because they’re upstream of a lot of important things, and right now they’re “unknown unknowns”- we don’t know what we don’t know about them.

One failure case that I worry about is that we could trade away what makes life worth living in return for some minor competitive advantage. As Bostrom notes in Superintelligence,

“When it becomes possible to build architectures that could not be implemented well on biological neural networks, new design space opens up; and the global optima in this extended space need not resemble familiar types of mentality. Human-like cognitive organizations would then lack a niche in a competitive post-transition economy or ecosystem. We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today – a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children.”

Nick Bostrom

Now, if we don’t know how qualia works, I think this is the default case. Our future could easily be a technological wonderland, but with very little subjective experience. “A Disneyland with no children,” as Bostrom quips.



Adam: How would you describe your ethical views? What are your thoughts on the relative importance of happiness vs. suffering? Do things besides valence have intrinsic moral importance?

Mike: Good question. First, I’d just like to comment that Principia Qualia is a descriptive document; it doesn’t make any normative claims.

I think the core question in ethics is whether there are elegant ethical principles to be discovered, or not. Whether we can find some sort of simple description or efficient compression scheme for ethics, or if ethics is irreducibly complex & inconsistent.

The most efficient compression scheme I can find for ethics, that seems to explain very much with very little, and besides that seems intuitively plausible, is the following:

  1. Strictly speaking, conscious experience is necessary for intrinsic moral significance. I.e., I care about what happens to dogs, because I think they’re conscious; I don’t care about what happens to paperclips, because I don’t think they are.
  2. Some conscious experiences do feel better than others, and all else being equal, pleasant experiences have more value than unpleasant experiences.

Beyond this, though, I think things get very speculative. Is valence the only thing that has intrinsic moral importance? I don’t know. On one hand, this sounds like a bad moral theory, one which is low-status, has lots of failure-modes, and doesn’t match all our intuitions. On the other hand, all other systematic approaches seem even worse. And if we can explain the value of most things in terms of valence, then Occam’s Razor suggests that we should put extra effort into explaining everything in those terms, since it’d be a lot more elegant. So– I don’t know that valence is the arbiter of all value, and I think we should be actively looking for other options, but I am open to it. That said I strongly believe that we should avoid premature optimization, and we should prioritize figuring out the details of consciousness & valence (i.e. we should prioritize research over advocacy).

Re: the relative importance of happiness vs suffering, it’s hard to say much at this point, but I’d expect that if we can move valence research to a more formal basis, there will be an implicit answer to this embedded in the mathematics.

Perhaps the clearest and most important ethical view I have is that ethics must ultimately “compile” to physics. What we value and what we disvalue must ultimately cash out in terms of particle arrangements & dynamics, because these are the only things we can actually change. And so if people are doing ethics without caring about making their theories cash out in physical terms, they’re not actually doing ethics- they’re doing art, or social signaling, or something which can serve as the inspiration for a future ethics.

Perhaps the clearest and most important ethical view I have is that ethics must ultimately “compile” to physics. What we value and what we disvalue must ultimately cash out in terms of particle arrangements & dynamics, because these are the only things we can actually change.

The analogy I’d offer here is that we can think about our universe as a computer, and ethics as choosing a program to run on this computer. Unfortunately, most ethicists aren’t writing machine-code, or even thinking about things in ways that could be easily translated to machine-code. Instead, they’re writing poetry about the sorts of programs that might be nice to run. But you can’t compile poetry to machine-code! So I hope the field of ethics becomes more physics-savvy and quantitative (although I’m not optimistic this will happen quickly).

Eliezer Yudkowsky refers to something similar with his notions of “AI grade philosophy”, “compilable philosophy”, and “computable ethics”, though I don’t think he quite goes far enough (i.e., all the way to physics).


Adam: What excites you? What do you think we have reason to be optimistic about?

Mike: The potential of qualia research to actually make peoples’ lives better in concrete, meaningful ways. Medicine’s approach to pain management and treatment of affective disorders are stuck in the dark ages because we don’t know what pain is. We don’t know why some mental states hurt. If we can figure that out, we can almost immediately help a lot of people, and probably unlock a surprising amount of human potential as well. What does the world look like with sane, scientific, effective treatments for pain & depression & akrasia? I think it’ll look amazing.


Adam: If you were to take a stab at forecasting the Intelligence Explosion – in what timeframe do you think it might happen (confidence intervals allowed)?

Mike: I don’t see any intractable technical hurdles to an Intelligence Explosion: the general attitude in AI circles seems to be that progress is actually happening a lot more quickly than expected, and that getting to human-level AGI is less a matter of finding some fundamental breakthrough, and more a matter of refining and connecting all the stuff we already know how to do.

The real unknown, I think, is the socio-political side of things. AI research depends on a stable, prosperous society able to support it and willing to ‘roll the dice’ on a good outcome, and peering into the future, I’m not sure we can take this as a given. My predictions for an Intelligence Explosion:

  • Between ~2035-2045 if we just extrapolate research trends within the current system;
  • Between ~2080-2100 if major socio-political disruptions happen but we stabilize without too much collateral damage (e.g., non-nuclear war, drawn-out social conflict);
  • If it doesn’t happen by 2100, it probably implies a fundamental shift in our ability or desire to create an Intelligence Explosion, and so it might take hundreds of years (or never happen).


If a tree falls in the forest and no one is around to hear it, does it make a sound? It would be unfortunate if a whole lot of awesome stuff were to happen with no one around to experience it.

Also see the 2nd part and 3rd part of this interview series (conducted by Andrés Gómez Emilsson); this interview with Christof Koch will also likely be of interest.



Science, Mindfulness & the Urgency of Reducing Suffering – Christof Koch

In this interview with Christof Koch, he shares some deeply felt ideas about the urgency of reducing suffering (with some caveats), his experience with mindfulness – explaining what it was like to visit the Dalai Lama for a week, as well as a heartfelt experience of his family dog ‘Nosey’ dying in his arms, and how that moved him to become a vegetarian. He also discusses the bias of human exceptionalism, the horrors of factory farming of non-human animals, as well as a consequentialist view on animal testing.
Christof Koch is an American neuroscientist best known for his work on the neural bases of consciousness.

Christof Koch is the President and Chief Scientific Officer of the Allen Institute for Brain Science in Seattle. From 1986 until 2013, he was a professor at the California Institute of Technology.

The Antispeciesist Revolution – read by David Pearce

The Antispeciesist Revolution

[Original text found here]

When is it ethically acceptable to harm another sentient being? On some fairly modest(1) assumptions, to harm or kill someone simply on the grounds they belong to a different gender, sexual orientation or ethnic group is unjustified. Such distinctions are real but ethically irrelevant. On the other hand, species membership is normally reckoned an ethically relevant criterion. Fundamental to our conceptual scheme is the pre-Darwinian distinction between “humans” and “animals”. In law, nonhuman animals share with inanimate objects the status of property. As property, nonhuman animals can be bought, sold, killed or otherwise harmed as humans see fit. In consequence, humans treat nonhuman animals in ways that would earn a life-time prison sentence without parole if our victims were human. From an evolutionary perspective, this contrast in status isn’t surprising. In our ancestral environment of adaptedness, the human capacity to hunt, kill and exploit sentient beings of other species was fitness-enhancing(2). Our moral intuitions have been shaped accordingly. Yet can we ethically justify such behaviour today?

Naively, one reason for disregarding the interests of nonhumans is the dimmer-switch model of consciousness. Humans matter more than nonhuman animals because (most) humans are more intelligent. Intuitively, more intelligent beings are more conscious than less intelligent beings; consciousness is the touchstone of moral status.

The problem with the dimmer-switch model is that it’s empirically unsupported, among vertebrates with central nervous systems at least. Microelectrode studies of the brains of awake human subjects suggest that the most intense forms of experience, for example agony, terror and orgasmic bliss, are mediated by the limbic system, not the prefrontal cortex. Our core emotions are evolutionarily ancient and strongly conserved. Humans share the anatomical and molecular substrates of our core emotions with the nonhuman animals whom we factory-farm and kill. By contrast, distinctively human cognitive capacities such as generative syntax, or the ability to do higher mathematics, are either phenomenologically subtle or impenetrable to introspection. To be sure, genetic and epigenetic differences exist between, say, a pig and a human being that explain our adult behavioural differences, e.g. the allele of the FOXP2(1) gene implicated in the human capacity for recursive syntax. Such mutations have little to do with raw sentience(1).

So what is the alternative to traditional anthropocentric ethics? Antispeciesism is not the claim that “All Animals Are Equal”, or that all species are of equal value, or that a human or a pig is equivalent to a mosquito. Rather the antispeciesist claims that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect. A pig, for example, is of comparable sentience to a prelinguistic human toddler. As it happens, a pig is of comparable (or superior) intelligence to a toddler as well(5). However, such cognitive prowess is ethically incidental. If ethical status is a function of sentience, then to factory-farm and slaughter a pig is as ethically abhorrent as to factory-farm and slaughter a human baby. To exploit one and nurture the other expresses an irrational but genetically adaptive prejudice.

On the face of it, this antispeciesist claim isn’t just wrong-headed; it’s absurd. Philosopher Jonathan Haidt speaks of “moral dumbfounding”(6), where we just know something is wrong but can’t articulate precisely why. Haidt offers the example of consensual incest between an adult brother and sister who use birth control. For evolutionary reasons, we “just know” such an incestuous relationship is immoral. In the case of any comparisons of pigs with human infants and toddlers, we “just know” at some deep level that any alleged equivalence in status is unfounded. After all, if there were no ethically relevant distinction between a pig and a toddler, or between a battery-farmed chicken and a human infant, then the daily behaviour of ordinary meat-eating humans would be sociopathic – which is crazy. In fact, unless the psychiatrists’ bible, Diagnostic and Statistical Manual of Mental Disorders, is modified explicitly to exclude behaviour towards nonhumans, most of us do risk satisfying its diagnostic criteria for the disorder. Even so, humans often conceive of ourselves as animal lovers. Despite the horrors of factory-farming, most consumers of meat and animal products are clearly not sociopaths in the normal usage of the term; most factory-farm managers are not wantonly cruel; and the majority of slaughterhouse workers are not sadists who delight in suffering. Serial killers of nonhuman animals are just ordinary men doing a distasteful job – “obeying orders” – on pain of losing their livelihoods.

Should we expect anything different? Jewish political theorist Hannah Arendt spoke famously of the “banality of evil”(7). If twenty-first century humans are collectively doing something posthuman superintelligence will reckon monstrous, akin to the [human] Holocaust or Atlantic slave trade, then it’s easy to assume our moral intuitions would disclose this to us. Our intuitions don’t disclose anything of the kind; so we sleep easy. But both natural selection and the historical record offer powerful reasons for doubting the trustworthiness of our naive moral intuitions. So the possibility that human civilisation might be founded upon some monstrous evil should be taken seriously – even if the possibility seems transparently absurd at the time.

One possible speciesist response is to raise the question of “potential”. Even if a pig is as sentient as a human toddler, there is a fundamental distinction between human toddlers and pigs. Only a toddler has the potential to mature into a rational adult human being.

The problem with this response is that it contradicts our treatment of humans who lack “potential”. Thus we recognise that a toddler with a progressive disorder who will never live to celebrate his third birthday deserves at least as much love, care and respect as his normally developing peers – not to be packed off to a factory-farm on the grounds it’s a shame to let good food go to waste. We recognise a similar duty of care for mentally handicapped adult humans and cognitively frail old people. For sure, historical exceptions exist to this perceived duty of care for vulnerable humans, e.g. the Nazi “euthanasia” program, with its eugenicist conception of “life unworthy of life”. But by common consent, we value young children and cognitively challenged adults for who they are, not simply for who they may – or may not – one day become. On occasion, there may controversially be instrumental reasons for allocating more care and resources to a potential genius or exceptionally gifted child than to a normal human. Yet disproportionate intraspecies resource allocation may be justified, not because high IQ humans are more sentient, but because of the anticipated benefits to society as a whole.

Practical Implications.
1. Invitrotarianism.

The greatest source of severe, chronic and readily avoidable suffering in the world today is man-made: factory farming. Humans currently slaughter over fifty billion sentient beings each year. One implication of an antispeciesist ethic is that factory farms should be shut and their surviving victims rehabilitated.

In common with most ethical revolutions in history, the prospect of humanity switching to a cruelty-free diet initially strikes most practically-minded folk as utopian dreaming. “Realists” certainly have plenty of hard evidence to bolster their case. As English essayist William Hazlitt observed, “The least pain in our little finger gives us more concern and uneasiness than the destruction of millions of our fellow-beings.” Without the aid of twenty-first century technology, the mass slaughter and abuse of our fellow animals might continue indefinitely. Yet tissue science technology promises to allow consumers to become moral agents without the slightest hint of personal inconvenience. Lab-grown in vitro meat produced in cell culture rather than a live animal has long been a staple of science fiction. But global veganism – or its ethical invitrotarian equivalent – is no longer a futuristic fantasy. Rapid advances in tissue engineering mean that in vitro meat will shortly be developed and commercialised. Today’s experimental cultured mincemeat can be supplanted by mass-manufactured gourmet steaks for the consumer market. Perhaps critically for its rapid public acceptance, in vitro meat does not need to be genetically modified – thereby spiking the guns of techno-luddites who might otherwise worry about “FrankenBurgers”. Indeed, cultured meat products will be more “natural” in some ways than their antibiotic-laced counterparts derived from factory-farmed animals.

Momentum for commercialisation is growing. Non-profit research organisations like New Harvest(8), working to develop alternatives to conventionally-produced meat, have been joined by hard-headed businessmen. Visionary entrepreneur and Stanford academic Peter Thiel has just funnelled $350,000 into Modern Meadow, a start-up that aims to combine 3D printing with in vitro meat cultivation. Within the next decade or so, gourmet steaks could be printed out from biological materials. In principle, the technology should be scalable.

Tragically, billions of nonhuman animals will grievously suffer and die this century at human hands before the dietary transition is complete. Humans are not obligate carnivores; eating meat and animal products is a lifestyle choice. “But I like the taste!” is not a morally compelling argument. Vegans and animal advocates ask: are we ethically entitled to wait on a technological fix? The antispeciesist answer is clear: no.

2. Compassionate Biology.
If and when humans stop systematically harming other sentient beings, will our ethical duties to members of other species have been discharged? Not if the same ethical considerations as apply to members of other human races or age-groups apply also to members of other species of equivalent sentience. Thus if famine breaks out in sub-Saharan Africa and young human children are starving, then we recognise we have a duty to send aid; or better still, to take proactive measures to ensure famines do not arise in the first instance, i.e. to provide not just food aid but family planning. So why not assist, say, starving free-living elephants? Until recently, no comparable interventions were feasible for members of other species. The technical challenges were insurmountable. Not least, the absence of cross-species fertility control technologies would have often made bad problems worse. Yet thanks to the exponential growth of computer power, every cubic metre of the planet will shortly be computationally accessible to micro-management, surveillance and control. Harnessed to biotechnology, nanotechnology and robotics, such tools confer unprecedented power over Nature. With unbridled power comes complicity. Ethically speaking, how many of the traditional cruelties of the living world do we wish to perpetuate? Orthodox conservation biologists argue we should not “interfere”: humans can’t “police” Nature. Antispeciesists disagree. Advocates of compassionate biology argue that humans and nonhumans alike should not be parasitised, starved, disembowelled, asphyxiated, or eaten alive.

As always, bioconservatives insist such miseries are “natural”; status quo bias runs deep. “Custom will reconcile people to any atrocity”, observed George Bernard Shaw. Snuff movies in the guise of Nature documentaries are quite popular on YouTube, a counterpoint to the Disneyfied wildlife shows aired on mainstream TV. Moreover even sympathetic critics of compassionate biology might respond that helping free-living members of other species is prohibitively expensive. An adequate welfare safety-net scarcely exists for humans in many parts of the world. So how can we contemplate its extension to nonhumans – even just to large-brained, long-lived vertebrates in our Nature reserves? Provision of comprehensive healthcare for all free-living elephants(10), for example, might cost between two and three billion dollars annually. Compassionate stewardship of the living world would be technically daunting too, entailing ecosystem management, cross-species fertility control via immunocontraception, veterinary care, emergency famine-relief, GPS tracking and monitoring, and ultimately phasing out or genetically “reprogramming”(11) carnivorous predators. The notional bill could approach the world’s 1.7 trillion-dollar annual arms budget. But irrespective of cost or timescale, if we are to be consistently non-speciesist, then decisions about resource allocation should be based not on species membership, but directly or indirectly on sentience. An elephant, for example, is at least as sentient as a human toddler. If it is ethically obligatory to help sick or starving children, then it’s ethically obligatory to help sick or starving elephants – not just via crisis interventions but via long-term healthcare support.

A traditional conservation biologist might respond that elephants helped by humans are no longer truly wild. Yet on such a criterion, clothes-wearing humans or beneficiaries of food aid and family planning aren’t “wild” humans either. Why should this matter? “Free-living” and “wild” are conceptually distinct. To assume that the civilising process should be confined to our own species is mere speciesist prejudice. Humans, transhumans and posthumans must choose what forms of sentience we want to preserve and create on Earth and beyond. Humans already massively intervene in Nature, whether through habitat destruction, captive breeding programs for big cats, “rewilding”, etc. So the question is not whether humans should “interfere”, but rather what ethical principles should govern our interventions(12).

Speciesism and Superintelligence.
Why should transhumanists care about the suffering of nonhuman animals? This is not a “feel-good” issue. One reason we should care cuts to the heart of the future of life in the universe. Transhumanists differ over whether our posthuman successors will most likely be nonbiological artificial superintelligence; or cyborgs who effectively merge with our hyperintelligent machines; or our own recursively self-improving biological descendants who modify their own genetic source code and bootstrap their way to full-spectrum superintelligence(13). Regardless of the dominant lifeform of the posthuman era, biological humans have a vested interest in the behaviour of intellectually advanced beings towards cognitively humble creatures – if we survive at all. Compared to posthuman superintelligence, archaic humans may be no smarter than pigs or chickens – or perhaps worms. This does not augur well for Homo sapiens. Western-educated humans tend to view Jains as faintly ridiculous for practising ahimsa, or harmlessness, sweeping the ground in front of them to avoid inadvertently treading on insects. How quixotic! Yet the fate of sentient but cognitively humble lifeforms in relation to vastly superior intelligence is precisely the issue at stake as we confront the prospect of posthuman superintelligence. How can we ensure a Jain-like concern for comparatively simple-minded creatures such as ourselves? Why should superintelligences care any more than humans about the well-being of their intellectual inferiors? Might distinctively human-friendly superintelligence turn out to be as intellectually-incoherent as, say, Aryan-friendly superintelligence? If human primitives are to prove worthy of conservation, how can we implement technologies of impartial friendliness towards other sentients? And if posthumans do care, how do we know that a truly benevolent superintelligence wouldn’t turn Darwinian life into utilitronium with a communal hug?

Viewed in such a light, biological humanity’s prospects in a future world of superintelligence might seem dire. However, this worry expresses a one-dimensional conception of general intelligence. No doubt the nature of mature superintelligence is humanly unknowable. But presumably full-spectrum(14) superintelligence entails, at the very least, a capacity to investigate, understand and manipulate both the formal and the subjective properties of mind. Modern science aspires to an idealised “view from nowhere”(15), an impartial, God-like understanding of the natural universe, stripped of any bias in perspective and expressed in the language of mathematical physics. By the same token, a God-like superintelligence must also be endowed with the capacity impartially to grasp all possible first-person perspectives – not a partial and primitive Machiavellian cunning of the kind adaptive on the African savannah, but an unimaginably radical expansion of our own fitfully growing circle of empathy.

What such superhuman perspective-taking ability might entail is unclear. We are familiar with people who display abnormally advanced forms of “mind-blind”(16), autistic intelligence in higher mathematics and theoretical physics. Less well known are hyper-empathisers who display unusually sophisticated social intelligence. Perhaps the most advanced naturally occurring hyper-empathisers exhibit mirror-touch synaesthesia(17). A mirror-touch synaesthete cannot be unfriendly towards you because she feels your pain and pleasure as if it were her own. In principle, such unusual perspective-taking capacity could be generalised and extended with reciprocal neuroscanning technology and telemetry into a kind of naturalised telepathy, both between and within species. Interpersonal and cross-species mind-reading could in theory break down hitherto invincible barriers of ignorance between different skull-bound subjects of experience, thereby eroding the anthropocentric, ethnocentric and egocentric bias that has plagued life on Earth to date. Today, the intelligence-testing community tends to treat facility at empathetic understanding as if it were a mere personality variable, or at best some sort of second-rate cognition for people who can’t do IQ tests. But “mind-reading” can be a highly sophisticated, cognitively demanding ability. Compare, say, the sixth-order intentionality manifested by Shakespeare. Thus we shouldn’t conceive superintelligence as akin to God imagined by someone with autistic spectrum disorder. Rather full-spectrum superintelligence entails a God’s-eye capacity to understand the rich multi-faceted first-person perspectives of diverse lifeforms whose mind-spaces humans would find incomprehensibly alien.

An obvious objection arises. Just because ultra-intelligent posthumans may be capable of displaying empathetic superintelligence, how do we know such intelligence will be exercised? The short answer is that we don’t: by analogy, today’s mirror-touch synaesthetes might one day neurosurgically opt to become mind-blind. But then equally we don’t know whether posthumans will renounce their advanced logico-mathematical prowess in favour of the functional equivalent of wireheading. If they do so, then they won’t be superintelligent. The existence of diverse first-person perspectives is a fundamental feature of the natural world, as fundamental as the second law of thermodynamics or the Higgs boson. To be ignorant of fundamental features of the world is to be an idiot savant: a super-Watson(18) perhaps, but not a superintelligence(19).

High-Tech Jainism?
Jules Renard once remarked, “I don’t know if God exists, but it would be better for His reputation if He didn’t.” God’s conspicuous absence from the natural world needn’t deter us from asking what an omniscient, omnipotent, all-merciful deity would want humans to do with our imminent God-like powers. For we’re on the brink of a momentous evolutionary transition in the history of life on Earth. Physicist Freeman Dyson predicts we’ll soon “be writing genomes as fluently as Blake and Byron wrote verses”(20). The ethical risks and opportunities for apprentice deities are huge.

On the one hand, Karl Popper warns, “Those who promise us paradise on earth never produced anything but a hell”(21). Twentieth-century history bears out such pessimism. Yet for billions of sentient beings from less powerful species, existing life on Earth is hell. They end their miserable lives on our dinner plates: “for the animals it is an eternal Treblinka”, writes Jewish Nobel laureate Isaac Bashevis Singer(22).

In a more utopian vein, some utterly sublime scenarios are technically feasible later this century and beyond. It’s not clear whether experience below Sidgwick’s(23) “hedonic zero” has any long-term future. Thanks to molecular neuroscience, mastery of the brain’s reward circuitry could make everyday life wonderful beyond the bounds of normal human experience. There is no technical reason why the pitiless Darwinian struggle of the past half billion years can’t be replaced by an earthly paradise for all creatures great and small. Genetic engineering could allow “the lion to lie down with the lamb.” Enhancement technologies could transform killer apes into saintly smart angels. Biotechnology could abolish suffering throughout the living world. Artificial intelligence could secure the well-being of all sentience in our forward light-cone. Our quasi-immortal descendants may be animated by gradients of intelligent bliss orders of magnitude richer than anything physiologically feasible today.

Such fantastical-sounding scenarios may never come to pass. Yet if so, this won’t be because the technical challenges prove too daunting, but because intelligent agents choose to forgo the molecular keys to paradise for something else. Critically, the substrates of bliss don’t need to be species-specific or rationed. Transhumanists believe the well-being of all sentience(24) is the bedrock of any civilisation worthy of the name.

Also see this related interview with David Pearce on ‘Antispeciesism & Compassionate Stewardship’:

* * *

1. How modest? A venerable tradition in philosophical meta-ethics is anti-realism. The meta-ethical anti-realist proposes that claims such as it’s wrong to rape women, kill Jews, torture babies (etc) lack truth value – or are simply false. (cf. JL Mackie, Ethics: Inventing Right and Wrong, Viking Press, 1977.) Here I shall assume that, for reasons we simply don’t understand, the pain-pleasure axis discloses the world’s inbuilt metric of (dis)value. Meta-ethical anti-realists may instead wish to interpret this critique of speciesism merely as casting doubt on its internal coherence rather than a substantive claim that a non-speciesist ethic is objectively true.

2. Extreme violence towards members of other tribes and races can be fitness-enhancing too. See, e.g. Richard Wrangham & Dale Peterson, Demonic Males: Apes and the Origins of Human Violence, Houghton Mifflin, 1997.

3. Fisher SE, Scharff C (2009). “FOXP2 as a molecular window into speech and language”. Trends Genet. 25 (4): 166–77. doi:10.1016/j.tig.2009.03.002. PMID 19304338.

4. Interpersonal and interspecies comparisons of sentience are of course fraught with problems. Comparative studies of how hard a human or nonhuman animal will work to avoid or obtain a particular stimulus give one crude behavioural indication. Yet we can go right down to the genetic and molecular level, e.g. interspecies comparisons of SCN9A genotype. (cf. content/early/2010/02/23/?0913181107.full.pdf) We know that in humans the SCN9A gene modulates pain-sensitivity. Some alleles of SCN9A give rise to hypoalgesia, other alleles to hyperalgesia. Nonsense mutations yield congenital insensitivity to pain. So we could systematically compare the SCN9A gene and its homologues in nonhuman animals. Neocortical chauvinists will still be sceptical of non-mammalian sentience, pointing to the extensive role of cortical processing in higher vertebrates. But recall how neuroscanning techniques reveal that during orgasm, for example, much of the neocortex effectively shuts down. Intensity of experience is scarcely diminished.

5. Held S, Mendl M, Devereux C, and Byrne RW. 2001. “Studies in social cognition: from primates to pigs”. Animal Welfare 10:S209-17.

6. Jonathan Haidt, The Righteous Mind: Why Good People Are Divided by Politics and Religion, Pantheon Books, 2012.

7. Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Viking Press, 1963.


9. “PayPal Founder Backs Synthetic Meat Printing Company”, Wired, August 16 2012.



12. The scholarly literature on the problem of wild animal suffering is still sparse. But perhaps see Arne Naess, “Should We Try To Relieve Clear Cases of Suffering in Nature?”, in The Selected Works of Arne Naess, Springer, 2005; Oscar Horta, “The Ethics of the Ecology of Fear against the Nonspeciesist Paradigm: A Shift in the Aims of Intervention in Nature”, Between the Species, Issue X, August 2010; Brian Tomasik, “The Importance of Wild-Animal Suffering”; and the first print-published plea for phasing out carnivorism in Nature, Jeff McMahan’s “The Meat Eaters”, The New York Times, September 19, 2010.

13. Singularity Hypotheses: A Scientific and Philosophical Assessment, Eden, A.H.; Moor, J.H.; Søraker, J.H.; Steinhart, E. (Eds.), Springer, 2013.

14. David Pearce, The Biointelligence Explosion. (preprint), 2012.

15. Thomas Nagel, The View From Nowhere, OUP, 1989.

16. Simon Baron-Cohen (2009). “Autism: the empathizing–systemizing (E-S) theory” (PDF). Ann N Y Acad Sci 1156: 68–80. doi:10.1111/j.1749-6632.2009.04467.x. PMID 19338503.

17. Banissy, M. J. & Ward, J. (2007). Mirror-touch synesthesia is linked with empathy. Nature Neurosci. doi: 10.1038/nn1926.

18. Stephen Baker. Final Jeopardy: Man vs. Machine and the Quest to Know Everything. Houghton Mifflin Harcourt. 2011.

19. Orthogonality or convergence? For an alternative to the convergence thesis, see Nick Bostrom, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents”, 2012; and Eliezer Yudkowsky, Carl Shulman, Anna Salamon, Rolf Nelson, Steven Kaas, Steve Rayhawk, Zack Davis, and Tom McCabe, “Reducing Long-Term Catastrophic Risks from Artificial Intelligence”, 2010.

20. Freeman Dyson, “When Science & Poetry Were Friends”, New York Review of Books, August 13, 2009.

21. As quoted in Jon Winokur, In Passing: Condolences and Complaints on Death, Dying, and Related Disappointments, Sasquatch Books, 2005.

22. Isaac Bashevis Singer, The Letter Writer, 1964.

23. Henry Sidgwick, The Methods of Ethics. London, 1874, 7th ed. 1907.

24. The Transhumanist Declaration (1998, 2009).

David Pearce
September 2012

Link to video

CLAIRE – a new European confederation for AI research

While the world wakes up to the huge potential impacts of AI in the future, how will national worries about other nations gaining ‘AI Supremacy’ affect development?
Especially development in AI Ethics & safety?

CLAIRE is a new European confederation.
It describes itself as:

CONFEDERATION OF LABORATORIES FOR ARTIFICIAL INTELLIGENCE RESEARCH IN EUROPE – Excellence across all of AI. For all of Europe. With a Human-Centred Focus.

The ‘human-centred’ focus is welcome (albeit a bit vague), but where is their focus on ethics?

A Competitive Vision

Their vision admits a fear that Europe may be the loser in a race to achieve AI Supremacy, and this is worrisome – seen as a race between tribes, AI development could become a race to the bottom of the barrel of AI safety and alignment.

In the United States of America, huge investments in AI are made by the private sector. In 2017, the Canadian government started making major investments in AI research, focusing mostly on existing strength in deep learning. In 2017, China released its Next Generation AI Development Plan, with the explicit goal of attaining AI supremacy by 2030.

However, in terms of investment in talent, research, technology and innovation in AI, Europe lags far behind its competitors. As a result, the EU and associated countries are increasingly losing talent to academia and industry elsewhere. Europe needs to play a key role in shaping how AI changes the world, and, of course, benefit from the results of AI research. The reason is obvious: AI is crucial for meeting Europe’s needs to address complex challenges as well as for positioning Europe and its nations in the global market.

Also the FAQ page reflects this sentiment:

Why does Europe have to act, and act quickly? There would be serious economic consequences if Europe were to fall behind in AI technology, along with a brain-drain that already draws AI talent away from Europe, to countries that have placed a high priority on AI research. The more momentum this brain-drain develops, the harder it will be to reverse. There is also a risk of increasing dependence on AI technology developed elsewhere, which would bring economic disadvantages, lack of transparency and broad use of AI technology that is not well aligned with European values.

What are ‘European Values’? They aren’t spelt out very specifically – but I suspect that, much like other nations, they want what’s best for the nation economically, and with regard to security.

CLAIRE’s vision of ethics

There is mention of ‘humane’ AI – but this is not described in detail anywhere on their site.
What is meant by ‘human-centred’?

Human-centred AI is strongly based on human values and judgement. It is designed to complement rather than replace human intelligence. Human-centred AI is transparent, explainable, fair (i.e., free from hidden bias), and socially compatible. It is developed and deployed based on careful consideration of the disruptions AI technology can cause.

Many AI experts are convinced that the combination of learning and reasoning techniques will enable the next leap forward in AI; it also provides the basis for reliable, trustworthy, safe AI.

So, what are their goals?

What are we trying to achieve? Our main goal is to strengthen AI research and innovation in Europe.

Summing up

Strong AI, when achieved, will be extremely powerful, because intelligence is powerful. Over the last few years interest in AI has ramped up significantly – with new companies and initiatives sprouting like mushrooms. Growing competitiveness and attention focused on AI development in a race dynamic to achieve ‘AI supremacy’ will likely result in Strong AI being achieved sooner than experts previously expected, and may weaken the motivation to take precautionary measures.
This race dynamic is good reason to focus on researching how we should think about strategies to cope with global coordination problems in AI safety, as well as its possible impact on an intelligence explosion.
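The intuition behind the race dynamic can be sketched with a toy simulation – my own illustrative model, loosely in the spirit of published race-to-the-precipice analyses, with made-up numbers and function names; it is not from any of the sources quoted here:

```python
import random

def race(num_teams, safety_levels, trials=10000):
    """Toy model of an AI development race.

    Each team draws a random 'skill'; cutting safety effort boosts
    effective capability. The most capable team deploys first, and the
    chance of catastrophe equals the winner's (1 - safety).
    """
    catastrophes = 0
    for _ in range(trials):
        # effective capability = random skill + capability gained by cutting safety
        caps = [random.random() + (1 - s) for s in safety_levels]
        winner = caps.index(max(caps))
        # catastrophe occurs in proportion to how much safety the winner skipped
        if random.random() < (1 - safety_levels[winner]):
            catastrophes += 1
    return catastrophes / trials

# All teams cautious vs. one team racing ahead by cutting safety:
cautious = race(3, [0.9, 0.9, 0.9])
racing = race(3, [0.9, 0.9, 0.1])
print(f"catastrophe risk, all cautious: {cautious:.2f}")
print(f"catastrophe risk, one defector: {racing:.2f}")
```

In this sketch a single defector both tends to win the race and carries most of the risk, which is the coordination problem in miniature: each team's incentive is to cut safety, while everyone's risk rises.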

The race dynamic could spur projects to move faster toward superintelligence while reducing investment in solving the control problem. Additional detrimental effects of the race dynamic are also possible, such as direct hostilities between competitors. Suppose that two nations are racing to develop the first superintelligence, and that one of them is seen to be pulling ahead. In a winner-takes-all situation, a lagging project might be tempted to launch a desperate strike against its rival rather than passively await defeat. Anticipating this possibility, the frontrunner might be tempted to strike preemptively. If the antagonists are powerful states, the clash could be bloody. (A “surgical strike” against the rival’s AI project might risk triggering a larger confrontation and might in any case not be feasible if the host country has taken precautions.)
– Nick Bostrom, Superintelligence: Paths, Dangers, Strategies

Humanity has a history of falling into Hobbesian traps – since the first-mover advantage conferred by Strong AI could be overpowering compared to other economic focuses, a race to achieve such a powerful general-purpose optimiser could result in military arms races.

As with any general-purpose technology, it is possible to identify concerns around particular applications. It has been argued, for example, that military applications of AI, including lethal autonomous weapons, might incite new arms races, or lower the threshold for nations to go to war, or give terrorists and assassins new tools for violence.
– Nick Bostrom, Strategic Implications of Openness in AI Development

What could be done to mitigate against an AI arms race?


Moral Enhancement – Are we morally equipped to deal with humanity’s grand challenges? Anders Sandberg

The topic of Moral Enhancement is controversial (and often misrepresented); it is considered by many to be repugnant – provocative questions arise like “whose morals?”, “who are the ones to be morally enhanced?”, “will it be compulsory?”, “won’t taking a morality pill decrease the value of the intended morality if it skips the difficult process we normally go through to become better people?”, “shouldn’t people be concerned that enhancements which alter character traits might compromise consumers’ authenticity?”

Humans have a built-in capacity of learning moral systems from their parents and other people. We are not born with any particular moral [code] – but with the ability to learn it just like we learn languages. The problem is of course this built-in facility might have worked quite well back in the Stone Age when we were evolving in small tribal communities – but doesn’t work that well when surrounded with a high-tech civilization, millions of other people and technology that could be potentially very dangerous. So we might need to update our moral systems and that is the interesting question of moral enhancement: can we make ourselves more fit for the current world?
– Anders Sandberg, Are we morally equipped for the future?

Humans have an evolved capacity to learn moral systems – we became more adept at learning moral systems that aided our survival in the ancestral environment – but are our moral instincts fit for the future?

Illustration by Daniel Gray

Let’s build some context. For millennia humans have lived in complex social structures that constrain and encourage certain types of behaviour. More recently, and for similar reasons, people go through years of education, at the end of which they are (for the most part) better able to function morally in the modern world – though this world is very different from that of our ancestors, and considering the possibility of vastly radical change at breakneck speed in the future, it is hard to know how humans will keep up both intellectually and ethically. This matters because the degree to which we shape the future for the good depends on how well, and how ethically, we solve the problems needed to achieve change that on balance benefits humanity (and arguably all morally relevant life-forms).

Can we engineer ourselves to be more ethically fit for the future?

Peter Singer discussed how our circles of care and compassion have expanded over the years – through reason we have been able to expand our natural propensity to act morally and the circumstances in which we act morally.

We may need to expand our circle of ethical consideration to include artificial life – considering certain types of software as moral patients.

So, if we think we could use a boost in our propensity for ethical progress,

How do we actually achieve ideal Moral Enhancement?

That’s a big topic (see a list of papers on the subject of ME here) – the answers may depend on our goals and preferences. One idea (among many others) is to regulate the level of Oxytocin (the cuddle hormone) – though this may come with the drawback of increasing distrust in the out-group.
Since morality depends on us being able to make accurate predictions and solve complex ethical problems, ‘Intelligence Enhancement’ could be an effective aspect of moral enhancement.

Morality is dependent on us being able to predict what’s going to happen when we do something. So various forms of intelligence enhancement might be very useful also for becoming more moral. Our ability to control our reactions that allow our higher-order values to control our lower order values is also important, that might actually require us to literally rewire or have biochips that help us do it.
– Anders Sandberg, Are we morally equipped for the future?

How we decide whether to use Moral Enhancement Therapy will be interesting – it may be needed to help solve global coordination problems; to increase the likelihood that we will, as a civilization, cooperate and cope with many known and as yet to be realised complex ethical quandaries as we move through times of unprecedented social and technological change.

This interview is part of a larger series that was completed in Oxford, UK late 2012.

Interview Transcript

Anders Sandberg

So humans have a kind of built-in capacity of learning moral systems from their parents and other people. We’re not born with any particular moral [code] but with the ability to learn it, just like we can learn languages. The problem is of course this built-in facility might have worked quite well back in the Stone Age when we were evolving in small tribal communities – but doesn’t work that well when surrounded with a high-tech civilization, millions of other people and technology that could be potentially very dangerous. So we might need to update our moral systems. And that is the interesting question of moral enhancement:

  • can we make ourselves more fit for the current world?
  • And what kind of fitness should we be talking about?

For example we might want to improve on altruism – that we should be kind to strangers. But in a big society, in a big town – of course there are going to be some strangers that you shouldn’t trust. So it’s not just blind trust you want to enhance – you actually want to enhance the ability to make careful judgements; to figure out what’s going to happen and whom you can trust. So maybe you want to have some other aspect, maybe the care – the circle of care – is what you want to expand.

Peter Singer pointed out that circles of care and compassion have been slowly expanding from our own tribe and our own gender, to other genders, to other people and eventually maybe to other species. But this is still biologically based – a lot of it is going on here in the brain and might be modified. Maybe we should artificially extend these circles of care to make sure that we actually do care about those entities we ought to be caring about. This might be a problem of course, because some of these agents might be extremely different from what we are used to.

For example machine intelligence might produce more machines or software that is a ‘moral patient’ – we actually ought to be caring about the suffering of software. That might be very tricky because our pattern receptors up in the brain are not very tuned for that – we tend to think that if it’s got a face and it speaks then it’s human, and then we can care about it. But who thinks about Google? Maybe we could get super-intelligences that we actually ought to care a lot about, but we can’t recognize them at all because they’re so utterly different from ourselves.

So there are some easy ways of modifying how we think and react – for example by taking a drug. So the hormone oxytocin is sometimes called ‘the cuddle hormone’ – it’s released when breastfeeding and when having bodily contact with your loved one, and it generally seems to be making us more altruistic; more willing to trust strangers. You can kind of sniff it and run an economic game and you can immediately see a change in response. It might also make you a bit more ego-centric. It does enlarge feelings of comfort and family friendliness – except that it’s only within what you consider to be your family. So we might want to tweak that.

Similarly we might think about adding links to our brains that allow us to think in better ways. After all, morality is dependent on us being able to predict what’s going to happen when we do something. So various forms of intelligence enhancement might be very useful also for becoming more moral. Our ability to control our reactions that allow our higher-order values to control our lower order values is also important, that might actually require us to literally rewire or have biochips that help us do it.

But most important is that we need the information we need to retrain the subtle networks in a brain in order to think better. And that’s going to require something akin to therapy – it might not necessarily be about lying on a sofa and telling your psychologist about your mother. It might very well be a bit of training, a bit of cognitive enhancement, maybe a bit of brain scanning – to figure out what actually ails you. It’s probably going to look very very different from anything Freud or anybody else envisioned for the future.

But I think in the future we’re actually going to try to modify ourselves so we’re going to be extra certain, maybe even extra moral, so we can function in a complex big world.


Related Papers

Neuroenhancement of Love and Marriage: The Chemicals Between Us

Anders contributed to this paper ‘Neuroenhancement of Love and Marriage: The Chemicals Between Us‘. This paper reviews the evolutionary history and biology of love and marriage. It examines the current and imminent possibilities of biological manipulation of lust, attraction and attachment, so-called neuroenhancement of love. We examine the arguments for and against these biological interventions to influence love. We argue that biological interventions offer an important adjunct to psychosocial interventions, especially given the biological limitations inherent in human love.

Human Engineering and Climate Change

Anders also contributed to the paper “Human Engineering and Climate Change” which argues that cognitive, moral and biological enhancement could increase human ecological sustainability.

Many thanks for watching!

Consider supporting me by:
a) Subscribing to my YouTube channel:
b) Donating via Patreon: and/or
c) Sharing the media I create

Kind regards,
Adam Ford
– Science, Technology & the Future: