Conference: AI & Human Enhancement – Understanding the Future – Early 2020

Introduction

Overview

The event will address a variety of topics in futurology (accelerating change & long-term futures, existential risk, philosophy, transhumanism & ‘the posthuman’) in general, though it will have a special focus on Machine Understanding.
How will we operate alongside artificial agents that increasingly ‘understand’ us, and important aspects of the world around us?
The ultimate goal of AI is to achieve not just intelligence in the broad sense of the word, but understanding – the ability to grasp content & context, comprehend causation, provide explanations, summarize material, and so on. Arguably, pursuing machine understanding has a different focus to artificial ‘general’ intelligence – where a machine could behave with a degree of generality without actually understanding what it is doing.

To explore the natural questions inherent in this concept, the conference aims to draw on the fields of AI, AGI, philosophy, cognitive science and psychology, covering a diverse set of methods, assumptions, approaches, systems designs and ways of thinking in AI and AGI.

We will also explore important ethical questions surrounding transformative technology – how to navigate risks and take advantage of opportunities.

When/Where

Dates: Slated for March or April 2020 – definite dates TBA.

Where: Melbourne, Victoria, Australia!

Speakers

We are currently working on a list of speakers – as of writing, we have confirmed:

John S. Wilkins (philosophy of science/species taxonomy) – Author of ‘Species: The Evolution of the Idea’, co-author of ‘The Nature of Classification: Relationships and Kinds in the Natural Sciences’. Blogs at ‘Evolving Thoughts’.

Dr. Kevin B. Korb (philosophy of science/AI) – Co-founded Bayesian Intelligence with Prof. Ann Nicholson in 2007. He continues to engage in research on the theory and practice of causal discovery of Bayesian networks (aka data mining with BNs), machine learning, evaluation theory, the philosophy of scientific method and informal logic. Author of ‘Bayesian Artificial Intelligence’ and co-author of ‘Evolving Ethics’.

David Pearce (philosophy, the hedonistic imperative) – British philosopher and co-founder of the World Transhumanist Association (currently rebranded and incorporated as Humanity+, Inc.), and a prominent figure within the transhumanist movement. He approaches ethical issues from a lexical negative utilitarian perspective. Author of ‘The Hedonistic Imperative’ and ‘The Abolitionist Project’.

Stelarc (performance artist) – Cyprus-born performance artist raised in the Melbourne suburb of Sunshine, whose works focus heavily on extending the capabilities of the human body. As such, most of his pieces are centered on his concept that “the human body is obsolete”. There is a book about Stelarc and his works – ‘Stelarc: The Monograph (Electronic Culture: History, Theory, and Practice)’ – edited by Marquard Smith.

Jakob Hohwy (head of philosophy at Monash University) – philosopher engaged in both conceptual and experimental research. He works on problems in philosophy of mind about perception, neuroscience, and mental illness.  Author of ‘The Predictive Mind‘.

Topics

Human Enhancement, Transhumanism & ‘the Posthuman’

Human enhancement technologies are used not only to treat diseases and disabilities, but increasingly also to augment human capacities and qualities. Certain enhancement technologies are already available – for instance, coffee, mood brighteners, reproductive technologies and plastic surgery. On the one hand, the scientific community has taken an increasing interest in such innovations and allocated substantial public and private resources to them; on the other hand, such research can have an impact, positive or negative, on individuals, society, and future generations.

Some have advocated the right to use such technologies freely, considering primarily the value of freedom and individual autonomy for their users. Others have called attention to the risks and potential harms of these technologies, not only for the individual, but also for society as a whole. Such use, it is argued, could accentuate discrimination among persons with different abilities, thus increasing injustice and the gap between the rich and the poor. There is a dilemma regarding how to regulate and manage such practices through national and international laws, so as to safeguard the common good and protect vulnerable persons.

Long Term Value and the Future of Life in the Universe

It seems obvious that we should care for future generations – though how far into the future should our concern extend? This obvious-sounding idea can lead to surprising conclusions.

Since the future is big, there could be overwhelmingly more people in the future than there are in the present generation. If you want to have a positive impact on lives, and are agnostic as to when that impact is realised, your key concern shouldn’t be to help the present generation, but to ensure that the future goes well for life in the long term.
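The scale claim here can be made concrete with a back-of-envelope sketch. All figures below are illustrative round numbers chosen for the sake of the example, not estimates from the text:

```python
# Back-of-envelope illustration of the "future is big" argument.
# Every figure here is a hypothetical round number, not a prediction.

PRESENT_GENERATION = 8e9      # roughly the number of people alive today
PEOPLE_PER_CENTURY = 10e9     # assumed future population per century
CENTURIES_REMAINING = 10_000  # if civilisation lasts ~1 million more years

future_people = PEOPLE_PER_CENTURY * CENTURIES_REMAINING
ratio = future_people / PRESENT_GENERATION

print(f"Future people: {future_people:.0e}")
print(f"Future-to-present ratio: {ratio:,.0f} to 1")
```

Even under far more conservative assumptions, the ratio stays enormous – which is why a time-agnostic concern for lives tends to point toward the long-term future.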

This idea is often confused with the claim that we shouldn’t do anything to help people in the present generation. But the long-term value thesis is about what most matters – and what we do to have a positive impact on the future of life in the universe is an extremely important and fascinatingly complicated question.

Artificial Intelligence & Understanding

Following on from a workshop at AGI17 on ‘Understanding Understanding’ we will cover many fascinating questions, such as:

  • What is understanding?
    • How should we define understanding?
    • Is understanding an emergent property of intelligent systems? And/or a central property of intelligent systems?
    • What are the typologies or gradations of understanding?
    • Does understanding relate to consciousness?  If so how?
    • Is general intelligence necessary and/or sufficient to achieve understanding in an artificial system?
    • What differentiates systems that do and do not have understanding?
  • Why focus on developing machine understanding?
    • Isn’t human understanding enough?
    • What are the pros/cons of developing MU?
    • Is it ethical to develop it?
    • Does morality come along for the ride once MU is achieved?
    • How could MU help solve the ‘value loading’ problem in AI alignment?
  • How can we create machine understanding?
    • What is required in order to achieve understanding in machines?
    • How can we create systems that exhibit understanding?
    • How can we test for understanding?
    • Can understanding be achieved through hand-crafted architectures or must it emerge through self-organizing (constructivist) principles?
    • How can mainstream techniques be used towards the development of machines which exhibit understanding?
    • Do we need radically different approaches than those in use today to build systems with understanding?
    • Does building artificially intelligent machines with versus without understanding depend on the same underlying principles, or are these orthogonal approaches?
    • Do we need special programming languages to implement understanding in intelligent systems?
    • How can current state of the art methods in AGI address the need for understanding in machines?
  • When is machine understanding likely to occur?
    • What types of research/discoveries are likely to accelerate progress towards MU?
    • What may hinder progress?

The conference will also cover aspects of futurology in general, including transhumanism, posthumanism, reducing suffering, and the long term future.

Event: Stelarc – Contingent & Contestable Futures

STELARC – CONTINGENT AND CONTESTABLE FUTURES: DIGITAL NOISE, GLITCHES & CONTAMINATIONS

Synopsis: In the age of the chimera, uncertainty and ambivalence generate unexpected anxieties. The dead, the near-dead, the brain dead, the yet to be born, the partially living and synthetic life all now share a material and proximal existence, with other living bodies, microbial life, operational machines and executable and viral code. Digital objects proliferate, contaminating the human biome. Bodies become end effectors for other bodies in other places and for machines elsewhere, generating interactive loops and recursive choreographies. There was always a ghost in the machine, but not as a vital force that animates but rather as a fading attestation of the human.

Agenda

5.45 – Meet, greet, and eat – pub food – it’s actually not bad! Feel free to come early to take advantage of the $8.50 pints from 4.00-6.00.
6.40 – Adam Ford – Introduction
6.50 – Stelarc – Talk: Contingent & Contestable Futures

Where: The Clyde Hotel (upstairs in function room) 385 Cardigan St, Carlton VIC 3053 – bring your appetite, there is a good menu: https://www.theclydehotel.com.au
When: Thursday July 25th – 5.45 onwards, though a few of us will be there earlier (say 5pm) to take advantage of the $8.50 pints (from 4pm onwards – if you say you are with STF you will get $8.50 pints all night)

P.S. The event will likely be videoed – if you have any issues with being seen or heard on YouTube, please let us know.

BRIEF BIOGRAPHICAL NOTES

Stelarc experiments with alternative anatomical architectures. His performances incorporate prosthetics, robotics, VR and biotechnology. He is presently surgically constructing and augmenting an ear on his arm. In 1996 he was made an Honorary Professor of Art and Robotics at Carnegie Mellon University, and in 2002 was awarded an Honorary Doctorate of Laws by Monash University. In 2010 he was awarded the Ars Electronica Hybrid Arts Prize. In 2015 he received the Australia Council’s Emerging and Experimental Arts Award. In 2016 he was awarded an Honorary Doctorate from the Ionian University, Corfu. His artwork is represented by Scott Livesey Galleries, Melbourne. www.stelarc.org

Denis Odinokov – Conquering Cross-Linking for Biomedical Longevity

In order to achieve biomedical longevity, the problem of cross-linking of the extracellular matrix needs to be addressed. Cross-links are chemical bonds between structures that are part of the body but not within a cell; cells are held together by special linking proteins. When too many cross-links form between cells in a tissue, the tissue can lose its elasticity, causing problems including arteriosclerosis, presbyopia and weakened skin texture. In senescent people many of these tissues become brittle and weak. Fixing cross-linking may prove more difficult than just removing it – removal may create a vacuum where more waste is pulled in to fill the void left behind. Though some research is being conducted, the problem deserves a lot more hands on deck – and far more funding.
Denis gives a technical explanation of why conquering cross-linking is important, and strategies for addressing the problem, in this interview conducted at the Undoing Aging conference in Berlin, 2019.

Introduction to Denis’ writing/research here – “The Impact of Extracellular Matrix Proteins Cross-linking on the Aging Process“.

Understanding the consequences of the formation of protein cross-links requires more attention from both the scientific community and independent researchers who are passionate about extending the human lifespan. Doing so would level the playing field, allowing us to create and work on more serious and impactful solutions.

Also see GlycoSENS: SENS proposes to further develop small-molecule drugs and enzymes to break links caused by sugar bonding, known as advanced glycation endproducts, and other common forms of chemical linking.

 

Reason – Philosophy Of Anti Aging: Ethics, Research & Advocacy

Reason was interviewed at the Undoing Aging conference in Berlin 2019 by Adam Ford – focusing on philosophy of anti-aging, ethics, research & advocacy. Here is the audio!

And the video:

Topics include: philosophical reasons to support anti-aging; high-impact research (senolytics etc.); convincing existence proofs that further research is worth doing; how AI can help, and why human research (bench-work) isn’t being replaced by AI at the moment or in the foreseeable future; suffering mitigation and cause prioritization in Effective Altruism – how the EA movement sees anti-aging and why it should advocate for it; the population effects (financial & public health) of an aging population; and the ethics of solving aging as a problem… and more.

Reason is the founder and primary blogger at FightAging.org.

Jerry Shay – The Telomere Theory of Ageing – Interview At Undoing Ageing, Berlin, 2019

“When telomeres get really short, that could lead to a DNA damage signal and cause cells to undergo a phenomenon called ‘replicative senescence’… where cells can secrete things that are not necessarily very good for you.”

Why is it that immune cells don’t work as well in older age?

Listen to the interview here

Jerry and his team compared a homogeneous group of centenarians in northern Italy to 80-year-olds and 30-year-olds, testing their immune cells (T-cells) for function through RNA sequencing. The young people all clustered apart from most of the old people – but the centenarians didn’t cluster in any one spot. It was found that the centenarians who clustered alongside the younger cohorts had better telomere length.

Out of the 7 billion people on Earth, there are only about half a million centenarians – most of them frail – though the ones with longer telomeres and more robust T-cell physiology seem to be quite different to the frail centenarians. What usually happens is that when telomeres wear down, the DNA in the cell gets damaged, triggering a DNA damage response. From this, Jerry and his team made a jump in logic: maybe there are genes near the telomeres that are repressed when the telomeres are long, and activated when the telomeres shorten – circumventing the need for a DNA damage response.

What is interesting is that they found genes really close to the telomeres – cytokines involved in inflammatory responses, such as TNF-alpha and interleukin-1 – being activated in humans, via a process called ‘Telomere Looping’. As we grow and develop our telomeres get longer, and at a certain length they start silencing certain inflammation genes; then as we age some of these genes get activated – this is sometimes referred to as the ‘Telomere Clock’. Centenarians who are healthy maintain longer telomeres and don’t have these inflammation genes activated.

 

During early fetal development (12-18 weeks) telomerase gets silenced – it has always been thought that this was to stop early onset of cancer – but Dr Shay asked, ‘why is it that all newborns have about the same length of telomeres?’ It’s not just humans; it’s other animals like whales, elephants, and many large long-lived mammals – though it doesn’t occur in smaller mammals like mice, rats or rabbits. The concept is that when the telomere is long enough, it loops over and silences its own gene, which stays silent until we are older (and in need of it again to help prevent cancer).

This Telomere Looping probably evolved as part of antagonistic pleiotropy – where things that provide protection or advantage early in life may have unpredicted negative consequences later in life. This is what telomerase is for: we humans need it in very early development, as do large long-lived mammals, along with a mechanism to shut it off – then at a later, older age it can be activated again to fight against cancer.

 

There is a fair amount of evidence for accumulated damage as hallmarks for ageing – can we take a damage repair approach to rejuvenation medicine?

Telomere spectrum disorders, or telomeropathies, are human diseases of telomere dysfunction – diseases like idiopathic pulmonary fibrosis in adults, and dyskeratosis congenita in young children who are born with reduced amounts of telomeres and telomerase and get age-related diseases very early in life. Can they be treated? Perhaps through gene therapy, or by transiently elongating their telomeres. But can this be applied to the general population too? People don’t lose their telomeres at the same rate – we know it’s possible for people to keep their telomeres long for 100 years or more – it’s just not yet known how. It could be luck; more likely it has a lot to do with genetics.

 

Ageing is complex – no one theory is going to explain everything about it. The telomere hypothesis of ageing perhaps accounts for about 5% or 10% of ageing on average – though understanding it well enough might give people an extra 10% of healthy life. Eventually it will be all about personalised medicine – with genotyping we will be able to say you have about a 50% chance of bone marrow failure by the time you’re 80 years old – and if so, you may be a candidate for bone marrow rejuvenation.

What is possible in the next 10 years?

 

Inflammation is central to causing age-related disease, and chronic inflammation can lead to a whole spectrum of diseases. We already have drugs for subtle, low-grade inflammation – TNF blockers (like Humira and Enbrel) – which subtly reduce inflammation; people can go into remission from many diseases after taking them.

There are about 40 million people on metformin in the USA – a drug which may help reduce the consequences of ageing. This and other drugs like it are safe; if we can find further safe drugs to reduce inflammation, this could go a long way – aspirin perhaps (it’s complicated). It doesn’t take much to get a big bang out of a little intervention. The key to all this is safety – we don’t want to do any harm. Metformin and aspirin have been proven to be safe over time – now we need to learn how to repurpose them to specifically address the ageing problem.

 

Historically we have more or less ignored the fundamental problem of ageing and targeted specific diseases – but by the time a disease is diagnosed, it’s difficult to treat; by the time you have been diagnosed with cancer, it’s likely so far advanced that it’s difficult to stop the eventual outcomes. The concept of intervening in the ticking clock of ageing is becoming more popular now; if we can intervene early in the process, we may be able to mitigate downstream diseases.

Jerry has been working on what they call a ‘telomerase-mediated inhibitor’ (see more about telomerase-mediated inhibition here) – “it shows amazing efficacy in reducing tumor burden and improving immune cell function at the same time – it gets rid of the bad immune cells in the micro environment, and guess what? The tumors disappear – so I think there’s ways to take advantage of the new knowledge of ageing research and apply it to diseases – but I think it’s going to be a while before we think about prevention.”

Unfortunately in the USA, and really globally, “people want to have their lifestyles the way they want them, and when something goes wrong, they want the doctor to come and give them a pill to fix the problem, instead of taking personal responsibility and saying that what we should be doing is preventing it in the first place.” We all know that prevention is important, though most don’t want to practise prevention over the long haul.

 

The goal of all this is not necessarily to live longer, but to live healthier – we now know that the costs associated with treating the pathologies of ageing are enormous. It has been said that 25% of Medicare costs in the USA go to treating people who are on dialysis – that’s huge. If we could compress the years of end-of-life morbidity into a smaller window, it would pay for itself over and over again. So the goal is to increase healthspan and reduce the long period of chronic disease associated with ageing. We don’t want this to be limited to a select subgroup with access to future regenerative medicine – there are many people in the world without resources or access at this time – we hope that will change.
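The economics of compressing morbidity can be sketched with a toy calculation. The per-patient cost, cohort size, and morbidity durations below are invented round numbers for illustration, not figures from the interview:

```python
# Toy illustration of "compression of morbidity" savings.
# All figures are hypothetical round numbers, for illustration only.

COST_PER_PATIENT_YEAR = 50_000  # assumed annual cost of chronic end-of-life care ($)
PATIENTS = 1_000_000            # assumed cohort size

def total_cost(morbidity_years):
    """Total spend if each patient spends `morbidity_years` in chronic ill-health."""
    return COST_PER_PATIENT_YEAR * PATIENTS * morbidity_years

baseline = total_cost(10)   # a decade of end-of-life morbidity per person
compressed = total_cost(5)  # morbidity compressed into five years
savings = baseline - compressed

print(f"Baseline spend:   ${baseline:,.0f}")
print(f"Compressed spend: ${compressed:,.0f}")
print(f"Savings:          ${savings:,.0f}")
```

The point is only that savings scale linearly with the years of morbidity avoided, so even a modest compression across a large population dwarfs the cost of the research that enables it.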

Jerry’s goal is to take some of the discovered bio-markers of both healthy and less healthy older people – and test them out on larger population numbers – though it’s very difficult to get the funding one needs to conduct large population studies.

Keith Comito on Undoing Ageing

What is the relationship between anti-aging and the reduction of suffering? What are some common objections to the ideas of solving aging? How does Anti-Aging stack up against other cause areas (like climate change, or curing specific diseases)? How can we better convince people of the virtues of undoing the diseases of old age?

Keith Comito, interviewed by Adam Ford at the Undoing Aging 2019 conference in Berlin, discusses why solving the diseases of old age is a powerful cause (note: the video of this interview will be available soon). He is a computer programmer and mathematician whose work brings together a variety of disciplines to provoke thought and promote social change. He has created video games, bioinformatics programs, musical applications, and biotechnology projects featured in Forbes and NPR.

In addition to developing high-profile mobile applications such as HBO Now and MLB AtBat, he explores the intersection of technology and biology at the Brooklyn community lab Genspace, where he helped to create games which allow players to direct the motion of microscopic organisms.

Seeing age-related disease as one of the most profound problems facing humanity, he now works to accelerate and democratize longevity research efforts through initiatives such as Lifespan.io.

He earned a B.S. in Mathematics, B.S. in Computer science, and M.S. in Applied Mathematics at Hofstra University, where his work included analysis of the LMNA protein.

Future Day Melbourne 2019

Future Day is nigh – sporting a spectacular line of speakers!

Agenda

5.30 – Doors open – meet and greet other attendees
5.45 – Introduction
6.00 – Drew Berry – “The molecular machines that create your flesh and blood” [abstract]
6.45 – Brock Bastian – “Happiness, culture, mental illness, and the future self” [abstract]
7.30 – Lynette Plenderleith – “The future of biodiversity starts now” [abstract]
8.15 – Panel: Drew Berry, Brock Bastian, Lynette Plenderleith

Join the Meetup! Future Day is on the 21st of March – sporting a spectacular line of speakers ranging across Futurology, Philosophy, Biomedical Animation & Psychology!

Venue: KPMG Melbourne – 727 Collins St [map link] – Collins Square – Level 36 Room 2

Limited seating to about 40, though if there is overflow, there will be standing room.

PLEASE have a snack/drink before you come. Apparently we can’t supply food/drink at KPMG, so eat something beforehand – or work up an appetite…
Afterwards we will sojourn at a local pub for some grub and ale.

I’m looking forward to seeing people I have met before, and some new faces as well.

Drew Berry – Biomedical Animator @ The Walter and Eliza Hall Institute of Medical Research
Brock Bastian – Melbourne School of Psychological Sciences, University of Melbourne

Check out the Future Day Facebook Group, and the Twitter account!

Abstracts

The molecular machines that create your flesh and blood

By Drew Berry – Abstract: A profound technological revolution is underway in biomedical science, accelerating development of new therapies and treatments for the diseases that afflict us and also transforming how we perceive ourselves and the nature of our living bodies. Coupled to the accelerating pace of scientific discovery is an ever-expanding need to explain our new biomedical capabilities to the public and develop appreciation of them – to prepare the public for the tsunami of new knowledge and medicines that will impact patients, our families and community.
Drew Berry will present the latest visualisation experiments in creating cinematic movies and real-time interactive 3D molecular worlds that reveal current state-of-the-art scientific discovery, focusing on the molecular engines that convert the food you eat into the chemical energy that powers your cells and tissues. Leveraging the incredible power of game GPU technology, vast molecular landscapes can be generated for 3D 360-degree cinema for museum and science centre dome theatres, interactive exploration in VR, and Augmented Reality education via student mobile phones.

 

Happiness, culture, mental illness, and the future self

By Brock Bastian – Abstract: What is the future of human happiness and wellbeing? We are currently treating mental illness at the level of individuals, yet rates of mental illness are not going down, and in some cases continue to rise. I will present research indicating that we need to start tackling this problem at the level of culture. The cultural value placed on particular emotional states may play a role in how people respond to their own emotional worlds. Furthermore, I will present evidence that basic cultural differences in how we explain events, predict the future and understand ourselves may also impact the effectiveness of our capacity to deal with emotional events. This suggests that we need to begin to take culture seriously in how we treat mental illness. It also provides some important insights into what kinds of thinking styles we might seek to promote, and how we might seek to understand and shape our future selves. This also has implications for how we might find happiness in a world increasingly characterized by residential mobility, weak ties, and digital rather than face-to-face interaction.

 

The future of biodiversity starts now

By Lynette Plenderleith – Abstract: Biodiversity is vital to our food security, our industries, our health and our progress. Yet never before has the future of biodiversity been so under threat as we modify more land, burn more fossil fuels and transport exotic organisms around the planet. But in the face of catastrophic biodiversity collapse, scientists, community groups and not-for-profits are working to discover new ways to conserve biodiversity, for us and the rest of life on our planet. From techniques as simple as preserving habitat to complex scientific techniques like de-extinction, Lynette will discuss our options for the future to protect biodiversity, how the future of biodiversity could look and why we should start employing conservation techniques now. Our future relies on the conservation of  biodiversity and its future rests in our hands. We have the technology to protect it.

 

Biographies

Dr Drew Berry

Dr Drew Berry is a biomedical animator who creates beautiful, accurate visualisations of the dramatic cellular and molecular action that is going on inside our bodies. He began his career as a cell biologist and is fluent navigating technical reports, research data and models from scientific journals. As an artist, he works as a translator, transforming abstract and complicated scientific concepts into vivid and meaningful visual journeys. Since 1995 he has been a biomedical animator at the Walter and Eliza Hall Institute of Medical Research, Australia. His animations have exhibited at venues such as the Guggenheim Museum, MoMA, the Royal Institute of Great Britain and the University of Geneva. In 2010, he received a MacArthur Fellowship “Genius Grant”.

Recognition and awards

• Doctorate of Technology (hc), Linköping University Sweden, 2016
• MacArthur Fellowship, USA 2010
• New York Times “If there is a Steven Spielberg of molecular animation, it is probably Drew Berry” 2010
• The New Yorker “[Drew Berry’s] animations are astonishingly beautiful” 2008
• American Scientist “The admirers of Drew Berry, at the Walter and Eliza Hall Institute in Australia, talk about him the way Cellini talked about Michelangelo” 2009
• Nature Niche Prize, UK 2008
• Emmy “DNA” Windfall Films, UK 2005
• BAFTA “DNA Interactive” RGB Co, UK 2004

Animation http://www.wehi.tv
TED http://www.ted.com/talks/drew_berry_animations_of_unseeable_biology
Architectural projection https://www.youtube.com/watch?v=m9AA5x-qhm8
Björk video https://www.youtube.com/watch?v=Wa1A0pPc-ik
Wikipedia https://en.wikipedia.org/wiki/Drew_Berry

Assoc Prof Brock Bastian

Brock Bastian is a social psychologist whose research focuses on pain, happiness, and morality.

In his search for a new perspective on what makes for the good life, Brock Bastian has studied why promoting happiness may have paradoxical effects; why we need negative and painful experiences in life to build meaning, purpose, resilience, and ultimately greater fulfilment in life; and why behavioural ethics is necessary for understanding how we reason about personal and social issues and resolve conflicts of interest. His first book, The Other Side of Happiness, was published in January 2018.

 

The Other Side of Happiness: Embracing a More Fearless Approach to Living

Our addiction to positivity and the pursuit of pleasure is actually making us miserable. Brock Bastian shows that, without some pain, we have no real way to achieve and appreciate the kind of happiness that is true and transcendent.

Read more about The Other Side of Happiness

Dr. Lynette Plenderleith

Dr. Lynette Plenderleith is a wildlife biologist by training and is now a media science specialist, working mostly in television, with credits including the children’s show WAC! World Animal Championships and Gardening Australia. Lynette is Chair and Founder of Frogs Victoria, President of the Victorian branch of Australian Science Communicators, and an occasional performer of live science-comedy. Lynette has a PhD from Monash University, where she studied the ecology of native Australian frogs; a Master’s degree in the spatial ecology of salamanders from Towson University in the US; and a BSc in Natural Sciences from Lancaster University in her homeland, the UK.
Twitter: @lynplen
Website: lynplen.com

The Future is not a product

It’s more exciting than gadgets with shiny screens and blinking lights.

Future Day is a way of focusing and celebrating the energy that more and more people around the world are directing toward creating a radically better future.

How should Future Day be celebrated? That is for us to decide as the future unfolds!

  • Future Day could be adopted as an official holiday by countries around the world.
  • Children can do Future Day projects at school, exploring their ideas and passions about creating a better future.
  • Future Day costume parties — why not? It makes at least as much sense as dressing up to celebrate Halloween!
  • Businesses giving employees a day off from routine concerns, to think creatively about future projects
  • Special Future Day issues in newspapers, magazines and blogs
  • Use your imagination — that’s what the future is all about!

The Future & You

It’s time to create the future together!

Our aspirations are all too often sidetracked in this age of distraction. Lurking behind every unfolding minute is a random tangent with no real benefit for our future selves – so let’s ritualize our commitment to the future by celebrating it! Future Day is here to fill our attention economies with useful ways to solve the problems of arriving at desirable futures, & avoid being distracted by the usual gauntlet of noise we run every other day. Our future is very important – accelerating scientific & technological progress will change the world even more than it already has. While other days of celebration focus on the past – let’s face the future – an editable history of a time to come – a future that is glorious for everyone.

Videos from Previous Future Day Events / Interviews

Physicalism & Materialism – John Wilkins

Materialism was a pre-Socratic view that for something to be real it has to be matter – physical stuff made of atoms (which at the time were considered hard like billiard balls – fundamental parts of reality). The reason the term physicalism is used these days is that it can describe things that aren’t matter – like forces – or aren’t observable matter – like dark matter – or energy, fields, spacetime, etc. Physicalism is the idea that all that exists can be described in the language of some ‘ideal’ physics – we may never know what this ideal physics is, though people think it is something close to our current physics (since we can make very accurate predictions with our current physics).

If magic, telepathy or angels were real, there would be a physics that could describe them – they’d have patterns and properties that would be describable and explainable.  A physicalist would likely think that even the mind operates according to physical rules.  Being a physicalist, according to John, means you think everything is governed by rules, physical rules – and that there is an ideal language that can be used to describe all this.

Note John is also a deontologist.  Perhaps there should exist an ideal language that can fully describe ethics – does this mean that ideally there is no need for utilitarianism?  I’ll leave that question for another post.

Interview with John Wilkins on Materialism & Physicalism.

Here are some blog posts about physicalism by John Wilkins:

Is physicalism an impoverished metaphysics?

Every so often, we read about some philosopher or other public intellectual who makes the claim that a physicalist ontology – a world view in which only things that can be described in terms of physics are said to exist – is impoverished. That is, there are things whereof science cannot know, &c. A recent example is that made by Thomas Nagel [nicely eviscerated here by the physicist Sean Carroll], whose fame in philosophy rests on an influential 1974 paper arguing that there is something it is like to be a bat that no amount of physics, physiology or other objective science could account for.

Recently, Nagel has argued that the evolutionary view called (historically misleadingly) neo-Darwinism is “almost certainly” false. One of the reasons is that “materialism” (which Nagel should know is an antiquated world view replaced by physicalism defined above; there are many non-material things in physics, not least fields of various kinds) does not permit a full account of consciousness; the subjective facts of being a particular individual organism. Another is that the chance that life would emerge from a lifeless universe is staggeringly unlikely. How this is calculated is somewhat mysterious, given that at best we only have (dare I say it?) subjective estimates anyway, but there it is.

But Nagel is not alone. Various nonreligious (apparently) thinkers have made similar assertions, although some, like Frank Jackson, who proposed the Knowledge Argument, have since backed down. What is it that physicalism must account for that these disputants and objectors say it cannot?

It almost entirely consists of consciousness, intentions, intelligence or some similar mental property which is entirely inexplicable by “reductionist” physicalism. [Reductionism is a term of abuse that means – so far as I can tell – solely that the person who makes such an accusation does not like the thing or persons being accused.] And that raises our question: is physicalism lacking something?

I bet you are dying to know more… you’ll just have to follow the link…
See more at Evolving Thoughts>>

Is Physicalism Coherent?

In my last post I argued that physicalism cannot be rejected simply because people assert there are nonphysical objects which are beyond specification. Some are, however, specifiable, and one commentator has identified the obvious ones: abstract objects like the rules of chess or numbers. I have dealt with these before in my “Pizza reductionism” post, which I invite you to go read.

Done? OK, then; let us proceed.

It is often asserted that there are obviously things that are not physical, such as ideas, numbers, concepts, etc., quite apart from qualia. I once sat with a distinguished philosopher, who I respect greatly and so shall not name, when he asserted that we can construct natural classifications because we can deal first with the natural numbers. I asked him “In what sense are numbers natural objects?”, meaning, why should we think numbers are entities in the natural world. He admitted that the question had not occurred to him (I doubt that – he is rather smart), but that it was simply an axiom of his philosophy. I do not think such abstract objects are natural.

This applies to anything that is “informational”, including all semantic entities like meanings, symbols, lexical objects, and so on. They only “exist” as functional modalities in our thoughts and language. I have also argued this before: information does not “exist”; it is a function of how we process signals. Mathematics is not a domain, it is a language, and the reason it works is because the bits that seriously do not work are not explored far[*] – not all of it has to work in a physical or natural sense, but much of it has to, or else it becomes a simple game that we would not play so much.

So the question of the incoherence of physicalism is based on the assumption (which runs contrary to physicalism, and is thus question begging) that abstract objects are natural things. I don’t believe they are, and I certainly do not think that a thought, or concept, for example, which can be had by many minds and is therefore supposed to be located in none of them (and thus transcendental), really is nonphysical. That is another case of nouning language. The thought “that is red” exists, for a physicalist, in all the heads that meet the functional social criteria for ascriptions of red. It exists nowhere else – it just is all those cognitive and social behaviours in biological heads…

Yes, I know, it’s a real page turner…
See more at Evolving Thoughts>>

In philosophy, physicalism is the ontological thesis that “everything is physical”, that there is “nothing over and above” the physical, or that everything supervenes on the physical. Physicalism is a form of ontological monism—a “one substance” view of the nature of reality as opposed to a “two-substance” (dualism) or “many-substance” (pluralism) view. Both the definition of physical and the meaning of physicalism have been debated. Physicalism is closely related to materialism. Physicalism grew out of materialism with the success of the physical sciences in explaining observed phenomena. The terms are often used interchangeably, although they are sometimes distinguished, for example on the basis of physics describing more than just matter (including energy and physical law). Common arguments against physicalism include both the philosophical zombie argument and the multiple observers argument, that the existence of a physical being may imply zero or more distinct conscious entities.

“When I lost my belief in religion I had to decide what I needed to accept as a bare minimum. I decided that I needed to believe in the physical world. I never found the slightest reason to accept the existence of anything else. To this day I am a physicalist only because I never found the need to be anything else. The principle of parsimony suggests that one should not believe in more than one needs to. Even if it does make you feel comfortable.”

 

Let’s get physicalism!

See John Wilkins’ blog ‘Evolving Thoughts’

#philsci #philosophy #science #physics

On Consciousness, Qualia, Valence & Intelligence with Mike Johnson

Andrés L. Gómez Emilsson

Andrés Gómez Emilsson joined in to add very insightful questions for a 3-part interview series with Mike Johnson, covering the relationship of metaphysics to qualia/consciousness/hedonic valence, and defining their terms, whether panpsychism matters, increasing sensitivity to bliss, valence variance, Effective Altruism, cause prioritization, and the importance of consciousness/valence research.
Andrés Gómez Emilsson interviews Mike Johnson

Carving Reality at the Joints

Andrés L. Gómez Emilsson: Do metaphysics matter for understanding qualia, consciousness, valence and intelligence?

Mike Johnson: If we define metaphysics as the study of what exists, it absolutely does matter for understanding qualia, consciousness, and valence. I think metaphysics matters for intelligence, too, but in a different way. The big question is whether terms like qualia, consciousness, and valence “carve reality at the joints” or whether they’re emergent linguistic constructs that don’t reflect the structure of the universe. And if these things are ‘real’ in some sense, the follow-up question is: how can we formalize these concepts? Intelligence seems different: it seems like a ‘fuzzy’ concept, without a good “crisp”, or frame-invariant, definition.

Andrés: What about sources of sentient valence outside of human brains? What is the “minimum viable valence organism”? What would you expect it to look like?

Mike Johnson

Mike: If some form of panpsychism is true – and it’s hard to construct a coherent theory of consciousness without allowing panpsychism – then I suspect two interesting things are true.
  1. A lot of things are probably at least a little bit conscious. The “minimum viable valence experiencer” could be pretty minimal. Both Brian Tomasik and Stuart Hameroff suggest that there could be morally-relevant experience happening at the level of fundamental physics. This seems highly counter-intuitive but also logically plausible to me.
  2. Biological organisms probably don’t constitute the lion’s share of moral experience. If there’s any morally-relevant experience that happens on small levels (e.g., quantum fuzz) or large levels (e.g., black holes, or eternal inflation), it probably outweighs what happens on Earth by many, many, many orders of magnitude. Whether it’ll outweigh the future impact of humanity on our light-cone is an open question.

The big question is whether terms like qualia, consciousness, and valence “carve reality at the joints” or whether they’re emergent linguistic constructs that don’t reflect the structure of the universe. And if these things are ‘real’ in some sense, the follow-up question is: how can we formalize these concepts?

In contrast with Brian Tomasik on this issue, I suspect (and hope) that the lion’s share of the qualia of the universe is strongly net positive. Appendix F of Principia Qualia talks a little more about this.

Andrés: What would be the implications of finding a sure-fire way to induce great valence for brief moments? Could this be used to achieve “strategic alignment” across different branches of utilitarianism?

Mike: A device that could temporarily cause extreme positive or negative valence on demand would immediately change the world. First, it would validate valence realism in a very visceral way. I’d say it would be the strongest philosophical argument ever made. Second, it would obviously have huge economic & ethical uses. Third, I agree that being able to induce strong positive & negative valence on demand could help align different schools of utilitarianism. Nothing would focus philosophical arguments about the discount rate between pleasure & suffering more than a (consensual!) quick blast of pure suffering followed by a quick blast of pure pleasure. Similarly, a lot of people live their lives in a rather numb state. Giving them a visceral sense that ‘life can be more than this’ could give them ‘skin in the game’. Fourth, it could mess a lot of things up. Obviously, being able to cause extreme suffering could be abused, but being able to cause extreme pleasure on-demand could lead to bad outcomes too. You (Andres) have written about wireheading before, and I agree with the game-theoretic concerns involved. I would also say that being able to cause extreme pleasure in others could be used in adversarial ways. More generally, human culture is valuable and fragile; things that could substantially disrupt it should be approached carefully.

A friend of mine was describing how in the 70s, the emerging field of genetic engineering held the Asilomar Conference on Recombinant DNA to discuss how the field should self-regulate. The next year, these guidelines were adopted by the NIH wholesale as the basis for binding regulation, and other fields (such as AI safety!) have attempted to follow the same model. So the culture around technologies may reflect a strong “founder effect”, and we should be on the lookout for a good, forward-looking set of principles for how valence technology should work. One principle that seems to make sense is to not publicly post ‘actionable’ equations, pseudocode, or code for how one could generate suffering with current computing resources (if this is indeed possible). Another principle is to focus resources on positive, eusocial applications only, insofar as that’s possible – I’m especially concerned about addiction, and bad actors ‘weaponizing’ this sort of research. Another would be to be on guard against entryism, or people who want to co-opt valence research for political ends. All of this is pretty straightforward, but it would be good to work it out a bit more formally, look at the successes and failures of other research communities, and so on.

A question I find very interesting is whether valence research is socially disruptive or socially stabilizing by default. I think we should try very hard to make it a socially stabilizing force.

A question I find very interesting is whether valence research is socially disruptive or socially stabilizing by default. I think we should try very hard to make it a socially stabilizing force. One way to think about this is in terms of existential risk. It’s a little weird to say, but I think the fact that so many people are jaded, or feel hopeless, is a big existential risk, because they feel like they have very little to lose. So they don’t really care what happens to the world, because they don’t have good qualia to look forward to, no real ‘skin in the game’. If valence tech could give people a visceral, ‘felt sense’ of wonder and possibility, I think the world could become a much safer place, because more people would viscerally care about AI safety, avoiding nuclear war, and so on.

Finally, one thing that I think doesn’t make much sense is handing off the ethical issues to professional bioethicists and expecting them to be able to help much. Speaking as a philosopher, I don’t think bioethics itself has healthy community & research norms (maybe bioethics needs some bioethicsethicists…). And in general, especially when issues are particularly complex or technical, I think the best research norms come from within a community.

Andrés: What is the role of valence variance in intelligence? Can a sentient being use its consciousness in any computationally fruitful way without any valence variance? Can a “perfectly flat world(-simulation)” be used for anything computational?

Mike: I think we see this today, with some people suffering from affective blunting (muted emotions) but seemingly living functional lives. More generally, what a sentient agent functionally accomplishes, and how it feels as it works toward that goal, seem to be correlated but not identical. I.e., one can vary without the other. But I don’t think that valence is completely orthogonal to behavior, either.
Why we seek out pleasure: the Symmetry Theory of Homeostatic Regulation

My one-sentence explanation here is that evolution seems to have latched onto the property which corresponds to valence – which I argue is symmetry – in deep ways, and has built our brain-minds around principles of homeostatic symmetry. This naturally leads to a high variability in our valence, as our homeostatic state is perturbed and restored. Logically, we could build minds around different principles – but it might be a lot less computationally efficient to do so. We’ll see. 🙂

One angle of research here could be looking at people who suffer from affective blunting, and trying to figure out if it holds them back: what it makes them bad at doing. It’s possible that this could lead to understanding human-style intelligence better. Going a little further, we can speculate that given a certain goal or computation, there could be “valence-positive” processes that could accomplish it, and “valence-negative” processes. This implies that there’s a nascent field of “ethical computation” that would evaluate the valence of different algorithms running on different physical substrates, and choose the one that best satisfices between efficiency and valence. (This is of course a huge simplification which glosses over tons of issues…)
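Mike’s notion of “ethical computation” – choosing, among processes that accomplish the same goal, one that satisfices between efficiency and valence – can be sketched as a toy selection rule. This is purely an illustrative sketch: the candidate names, scores, and threshold below are invented, and real valence estimates are exactly what the research program he describes is still missing.

```python
# Toy sketch of "ethical computation": pick the most efficient process whose
# estimated valence clears a minimum threshold. All numbers are invented.

def choose_process(candidates, min_valence=0.0):
    """Return the most efficient candidate with acceptable valence,
    falling back to the highest-valence candidate if none qualify."""
    acceptable = [c for c in candidates if c["valence"] >= min_valence]
    if acceptable:
        return max(acceptable, key=lambda c: c["efficiency"])
    return max(candidates, key=lambda c: c["valence"])

candidates = [
    {"name": "A", "efficiency": 0.9, "valence": -0.5},  # fastest, "suffering-like"
    {"name": "B", "efficiency": 0.6, "valence": 0.2},   # slower, mildly positive
    {"name": "C", "efficiency": 0.4, "valence": 0.8},   # slowest, strongly positive
]

print(choose_process(candidates)["name"])  # prints "B"
```

Raising `min_valence` trades efficiency away for valence, which is the satisficing knob the paragraph gestures at.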
Andrés: What should we prioritize: super-intelligence, super-longevity or super-happiness? Does the order matter? Why?

Mike: I think it matters quite a bit! For instance, I think the world looks a lot different if we figure out consciousness *before* AGI, versus if we ignore it until AGI is built. The latter seems to involve various risks that the former doesn’t. A risk that I think we both agree is serious and real is this notion of “what if accelerating technology leads to Malthusian conditions where agents don’t – and literally can’t, from a competitive standpoint – care about qualia & valence?” Robin Hanson has a great post called “This is the Dream Time” (of relaxed selection). But his book “Age of Em” posits a world where selection pressures go back up very dramatically. I think if we enter such an era without a good theory of qualia, we could trade away a lot of what makes life worth living.
Andrés: What are some conceptual or factual errors that you see happening in the transhumanist/rationalist/EA community related to modeling qualia, valence and intelligence?

Mike: First, I think it’s only fair to mention what these communities do right. I’m much more likely to have a great conversation about these topics with EAs, transhumanists, and rationalists than a random person off the street, or even a random grad student. People from this community are always smart, usually curious, often willing to explore fresh ideas and stretch their brain a bit, and sometimes able to update based on purely abstract arguments. And there’s this collective sense that ideas are important and have real implications for the future. So there’s a lot of great things happening in these communities and they’re really a priceless resource for sounding out theories, debating issues, and so on. But I would highlight some ways in which I think these communities go astray.

Computationalism, functionalism, fun theory, ‘hedonic brain regions’, ‘pleasure neurochemicals’, the reinforcement learning theory of valence, and so on all give the illusion of explanatory depth but don’t actually explain things in a way which allows us to do anything useful.

First, people don’t realize how bad most existing models of qualia & valence are. Michael Graziano argues that most theories of consciousness are worse than wrong – that they play to our intuitions but don’t actually explain anything. Computationalism, functionalism, fun theory, ‘hedonic brain regions’, ‘pleasure neurochemicals’, the reinforcement learning theory of valence, and so on all give the illusion of explanatory depth but don’t actually explain things in a way which allows us to do anything useful. Second, people don’t realize how important a good understanding of qualia & valence is. They’re upstream of basically everything interesting and desirable. Here’s what I think has happened, at least in the rationalist community: historically, consciousness research has been a black hole. Smart people go in, but nothing comes out. So communities (such as physicists and LessWrong) naturally have an interest in putting up a fence around the topic with a sign that says

historically, consciousness research has been a black hole. Smart people go in, but nothing comes out. So communities .. naturally have an interest in putting up a fence around the topic with a sign that says ‘Don’t go here!’

‘Don’t go here!’ – But over time, people forgot why the mystery was blocked off, and started to think that the mystery doesn’t exist. This leads to people actively avoiding thinking about these topics without being able to articulate why.

Andrés: Is there value in studying extreme cases of valence? E.g. Buddhist monks who claim to achieve extreme sustainable bliss, or people on MDMA?

Mike: ‘What science can analyze, science can duplicate.’ And studying outliers such as your examples is a time-honored way of gathering data with high signal-to-noise. So yes, definitely. 🙂
Also see the 1st part, and the 2nd part of this interview series. Also this interview with Christof Koch will likely be of interest.
 
Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is Co-founder of the Qualia Research Institute. Much of Mike’s research and writings can be found at the Open Theory website. ‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary. If you like Mike’s work, consider helping fund it at Patreon.

Ethics, Qualia Research & AI Safety with Mike Johnson

What’s the relationship between valence research and AI ethics?

Hedonic valence is a measure of the quality of our felt sense of experience, the intrinsic goodness (positive valence) or averseness (negative valence) of an event, object, or situation.  It is an important aspect of conscious experience, always present in our waking lives. If we seek to understand ourselves, it makes sense to seek to understand how valence works – how to measure it and test for it.

Also, might there be a relationship to the AI safety/friendliness problem?
In this interview, we cover a lot of things, not least .. THE SINGULARITY (of course) & the importance of Valence Research to AI Friendliness Research (as detailed here). Will thinking machines require experience with valence to understand its importance?

Here we cover some general questions about Mike Johnson’s views on recent advances in science and technology & what he sees as being the most impactful, what world views are ready to be retired, his views on XRisk and on AI Safety – especially related to value theory.

This is one part of an interview series with Mike Johnson (another section on Consciousness, Qualia, Valence & Intelligence).

 

Adam Ford: Welcome Mike Johnson, many thanks for doing this interview. Can we start with your background?

Mike Johnson

Mike Johnson: My formal background is in epistemology and philosophy of science: what do we know & how do we know it, what separates good theories from bad ones, and so on. Prior to researching qualia, I did work in information security, algorithmic trading, and human augmentation research.

 

Adam: What is the most exciting / interesting recent (scientific/engineering) news? Why is it important to you?

Mike: CRISPR is definitely up there! In a few short years precision genetic engineering has gone from a pipe dream to reality. The problem is that we’re like the proverbial dog that caught up to the car it was chasing: what do we do now? Increasingly, we can change our genome, but we have no idea how we should change our genome, and the public discussion about this seems very muddled. The same could be said about breakthroughs in AI.

 

Adam: What are the most important discoveries/inventions over the last 500 years?

Mike: Tough question. Darwin’s theory of Natural Selection, Newton’s theory of gravity, Faraday & Maxwell’s theory of electricity, and the many discoveries of modern physics would all make the cut. Perhaps also the germ theory of disease. In general what makes discoveries & inventions important is when they lead to a productive new way of looking at the world.

 

Adam: What philosophical/scientific ideas are ready to be retired? What theories of valence are ready to be relegated to the dustbin of history? (Why are they still in currency? Why are they in need of being thrown away or revised?)

Mike: I think that 99% of the time when someone uses the term “pleasure neurochemicals” or “hedonic brain regions” it obscures more than it explains. We know that opioids & activity in the nucleus accumbens are correlated with pleasure– but we don’t know why, we don’t know the causal mechanism. So it can be useful shorthand to call these things “pleasure neurochemicals” and whatnot, but every single time anyone does that, there should be a footnote that we fundamentally don’t know the causal story here, and this abstraction may ‘leak’ in unexpected ways.

 

Adam: What have you changed your mind about?

Mike: Whether pushing toward the Singularity is unequivocally a good idea. I read Kurzweil’s The Singularity is Near back in 2005 and loved it – it made me realize that all my life I’d been a transhumanist and didn’t know it. But twelve years later, I’m a lot less optimistic about Kurzweil’s rosy vision. Value is fragile, and there are a lot more ways that things could go wrong than ways things could go well.

 

Adam: I remember reading Eliezer’s writings on ‘The Fragility of Value’; it’s quite interesting and worth consideration – the idea that if we don’t get AI’s value system exactly right, then it would be like pulling a random mind out of mindspace – most likely inimical to human interests. The writing did seem quite abstract, and it would be nice to see a formal model or something concrete to show this would be the case. I’d really like to know how and why value is as fragile as Eliezer seems to make out. Is there any convincing, crisply defined model supporting this thesis?

Mike: Whether the ‘Complexity of Value Thesis’ is correct is super important. Essentially, the idea is that we can think of what humans find valuable as a tiny location in a very large, very high-dimensional space– let’s say 1000 dimensions for the sake of argument. Under this framework, value is very fragile; if we move a little bit in any one of these 1000 dimensions, we leave this special zone and get a future that doesn’t match our preferences, desires, and goals. In a word, we get something worthless (to us). This is perhaps most succinctly put by Eliezer in “Value is fragile”:

“If you loose the grip of human morals and metamorals – the result is not mysterious and alien and beautiful by the standards of human value. It is moral noise, a universe tiled with paperclips. To change away from human morals in the direction of improvement rather than entropy, requires a criterion of improvement; and that criterion would be physically represented in our brains, and our brains alone. … You want a wonderful and mysterious universe? That’s your value. … Valuable things appear because a goal system that values them takes action to create them. … if our values that prefer it are physically obliterated – or even disturbed in the wrong dimension. Then there is nothing left in the universe that works to make the universe valuable.”

If this frame is right, then it’s going to be really really really hard to get AGI right, because one wrong step in programming will make the AGI depart from human values, and “there will be nothing left to want to bring it back.” Eliezer, and I think most of the AI safety community, assume this.
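The geometric intuition behind the Complexity of Value Thesis can be made concrete with a toy calculation (my own sketch, with invented numbers, not a model from Eliezer’s essay): even if “what humans value” tolerates 90% of the range of each dimension, the acceptable region covers a vanishingly small fraction of a 1000-dimensional space.

```python
# Toy illustration of the fragility claim: a region spanning 90% of each
# axis of a unit hypercube covers only 0.9**d of the space in d dimensions.

def value_volume_fraction(width_per_dim, dims):
    """Fraction of a unit hypercube covered by a region spanning
    `width_per_dim` of every axis."""
    return width_per_dim ** dims

for d in (1, 10, 100, 1000):
    print(d, value_volume_fraction(0.9, d))
# In 1000 dimensions the fraction falls below 1e-45: a randomly chosen
# point in the space almost certainly lies outside the valued region.
```

Whether human value really behaves like such a product of independent tolerances is, of course, exactly what the thesis asserts and what remains unproven.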

But– and I want to shout this from the rooftops– the complexity of value thesis is just a thesis! Nobody knows if it’s true. An alternative here would be, instead of trying to look at value in terms of goals and preferences, we look at it in terms of properties of phenomenological experience. This leads to what I call the Unity of Value Thesis, where all the different manifestations of valuable things end up as special cases of a more general, unifying principle (emotional valence). What we know from neuroscience seems to support this: Berridge and Kringelbach write about how “The available evidence suggests that brain mechanisms involved in fundamental pleasures (food and sexual pleasures) overlap with those for higher-order pleasures (for example, monetary, artistic, musical, altruistic, and transcendent pleasures).” My colleague Andres Gomez Emilsson writes about this in The Tyranny of the Intentional Object. Anyway, if this is right, then the AI safety community could approach the Value Problem and Value Loading Problem much differently.

 

Adam: I’m also interested in the nature of possible attractors that agents might ‘extropically’ gravitate towards (like a thirst for useful and interesting novelty, generative and non-regressive, that might not neatly fit categorically under ‘happiness’) – I’m not wholly convinced that they exist, but if one leans away from moral relativism, it makes sense that a superintelligence may be able to discover or extrapolate facts from all physical systems in the universe, not just humans, to determine valuable futures and avoid malignant failure modes (Coherent Extrapolated Value if you will). Being strongly locked into optimizing human values may be a non-malignant failure mode.

Mike: What you write reminds me of Schmidhuber’s notion of a ‘compression drive’: we’re drawn to interesting things because getting exposed to them helps build our ‘compression library’ and lets us predict the world better. But this feels like an instrumental goal, sort of a “Basic AI Drives” sort of thing. I would definitely agree that there’s a danger of getting locked into a good-yet-not-great local optimum if we hard-optimize on current human values.

Probably the danger is larger than that too – as Eric Schwitzgebel notes,

“Common sense is incoherent in matters of metaphysics. There’s no way to develop an ambitious, broad-ranging, self-consistent metaphysical system without doing serious violence to common sense somewhere. It’s just impossible. Since common sense is an inconsistent system, you can’t respect it all. Every metaphysician will have to violate it somewhere.”

If we lock in human values based on common sense, we’re basically committing to following an inconsistent formal system. I don’t think most people realize how badly that will fail.

 

Adam: What invention or idea will change everything?

Mike: A device that allows people to explore the space of all possible qualia in a systematic way. Right now, we do a lot of weird things to experience interesting qualia: we drink fermented liquids, smoke various plant extracts, strap ourselves into rollercoasters, parachute out of planes, and so on, to give just a few examples. But these are very haphazard ways to experience new qualia! When we’re able to ‘domesticate’ and ‘technologize’ qualia, like we’ve done with electricity, we’ll be living in a new (and, I think, incredibly exciting) world.

 

Adam: What are you most concerned about? What ought we be worrying about?

Mike: I’m worried that society’s ability to coordinate on hard things seems to be breaking down, and about AI safety. Similarly, I’m also worried about what Eliezer Yudkowsky calls ‘Moore’s Law of Mad Science’, that steady technological progress means that ‘every eighteen months the minimum IQ necessary to destroy the world drops by one point’. But I think some very smart people are worrying about these things, and are trying to address them.

In contrast, almost no one is worrying that we don’t have good theories of qualia & valence. And I think we really, really ought to, because they’re upstream of a lot of important things, and right now they’re “unknown unknowns”- we don’t know what we don’t know about them.

One failure case that I worry about is that we could trade away what makes life worth living in return for some minor competitive advantage. As Bostrom notes in Superintelligence,

“When it becomes possible to build architectures that could not be implemented well on biological neural networks, new design space opens up; and the global optima in this extended space need not resemble familiar types of mentality. Human-like cognitive organizations would then lack a niche in a competitive post-transition economy or ecosystem. We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today – a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children.”

Nick Bostrom

Now, if we don’t know how qualia work, I think this is the default case. Our future could easily be a technological wonderland, but with very little subjective experience. “A Disneyland with no children,” as Bostrom quips.


Adam: How would you describe your ethical views? What are your thoughts on the relative importance of happiness vs. suffering? Do things besides valence have intrinsic moral importance?

Mike: Good question. First, I’d just like to comment that Principia Qualia is a descriptive document; it doesn’t make any normative claims.

I think the core question in ethics is whether there are elegant ethical principles to be discovered, or not: whether we can find some sort of simple description or efficient compression scheme for ethics, or whether ethics is irreducibly complex & inconsistent.

The most efficient compression scheme I can find for ethics, that seems to explain very much with very little, and besides that seems intuitively plausible, is the following:

  1. Strictly speaking, conscious experience is necessary for intrinsic moral significance. I.e., I care about what happens to dogs, because I think they’re conscious; I don’t care about what happens to paperclips, because I don’t think they are.
  2. Some conscious experiences do feel better than others, and all else being equal, pleasant experiences have more value than unpleasant experiences.

Beyond this, though, I think things get very speculative. Is valence the only thing that has intrinsic moral importance? I don’t know. On one hand, this sounds like a bad moral theory, one which is low-status, has lots of failure modes, and doesn’t match all our intuitions. On the other hand, all other systematic approaches seem even worse. And if we can explain the value of most things in terms of valence, then Occam’s Razor suggests that we should put extra effort into explaining everything in those terms, since it’d be a lot more elegant. So: I don’t know that valence is the arbiter of all value, and I think we should be actively looking for other options, but I am open to it. That said, I strongly believe that we should avoid premature optimization, and that we should prioritize figuring out the details of consciousness & valence (i.e. we should prioritize research over advocacy).

Re: the relative importance of happiness vs suffering, it’s hard to say much at this point, but I’d expect that if we can move valence research to a more formal basis, there will be an implicit answer to this embedded in the mathematics.

Perhaps the clearest and most important ethical view I have is that ethics must ultimately “compile” to physics. What we value and what we disvalue must ultimately cash out in terms of particle arrangements & dynamics, because these are the only things we can actually change. And so if people are doing ethics without caring about making their theories cash out in physical terms, they’re not actually doing ethics; they’re doing art, or social signaling, or something which can serve as the inspiration for a future ethics.

The analogy I’d offer here is that we can think about our universe as a computer, and ethics as choosing a program to run on this computer. Unfortunately, most ethicists aren’t writing machine-code, or even thinking about things in ways that could be easily translated to machine-code. Instead, they’re writing poetry about the sorts of programs that might be nice to run. But you can’t compile poetry to machine-code! So I hope the field of ethics becomes more physics-savvy and quantitative (although I’m not optimistic this will happen quickly).

Eliezer Yudkowsky refers to something similar with his notions of “AI grade philosophy”, “compilable philosophy”, and “computable ethics”, though I don’t think he quite goes far enough (i.e., all the way to physics).


Adam: What excites you? What do you think we have reason to be optimistic about?

Mike: The potential of qualia research to actually make peoples’ lives better in concrete, meaningful ways. Medicine’s approaches to pain management and the treatment of affective disorders are stuck in the dark ages because we don’t know what pain is. We don’t know why some mental states hurt. If we can figure that out, we can almost immediately help a lot of people, and probably unlock a surprising amount of human potential as well. What does the world look like with sane, scientific, effective treatments for pain & depression & akrasia? I think it’ll look amazing.


Adam: If you were to take a stab at forecasting the Intelligence Explosion – in what timeframe do you think it might happen (confidence intervals allowed)?

Mike: I don’t see any intractable technical hurdles to an Intelligence Explosion: the general attitude in AI circles seems to be that progress is actually happening a lot more quickly than expected, and that getting to human-level AGI is less a matter of finding some fundamental breakthrough, and more a matter of refining and connecting all the stuff we already know how to do.

The real unknown, I think, is the socio-political side of things. AI research depends on a stable, prosperous society able to support it and willing to ‘roll the dice’ on a good outcome, and peering into the future, I’m not sure we can take this as a given. My predictions for an Intelligence Explosion:

  • Between ~2035-2045 if we just extrapolate research trends within the current system;
  • Between ~2080-2100 if major socio-political disruptions happen but we stabilize without too much collateral damage (e.g., non-nuclear war, drawn-out social conflict);
  • If it doesn’t happen by 2100, it probably implies a fundamental shift in our ability or desire to create an Intelligence Explosion, and so it might take hundreds of years (or never happen).


If a tree falls in the forest and no one is around to hear it, does it make a sound? It would be unfortunate if a whole lot of awesome stuff were to happen with no one around to experience it.

Also see the 2nd and 3rd parts of this interview series, conducted by Andrés Gómez Emilson; this interview with Christof Koch will likely also be of interest.


Mike Johnson is a philosopher living in the Bay Area, writing about mind, complexity theory, and formalization. He is co-founder of the Qualia Research Institute. Much of Mike’s research and writing can be found at the Open Theory website.
‘Principia Qualia’ is Mike’s magnum opus – a blueprint for building a new Science of Qualia. Click here for the full version, or here for an executive summary.
If you like Mike’s work, consider helping fund it at Patreon.