Posts

Conference: AI & Human Enhancement – Understanding the Future – Early 2020

Introduction

Overview

The event will address a variety of topics in futurology in general (e.g. accelerating change & long-term futures, existential risk, philosophy, transhumanism & ‘the posthuman’), though it will have a special focus on Machine Understanding.
How will we operate alongside artificial agents that increasingly ‘understand’ us and important aspects of the world around us?
The ultimate goal of AI is to achieve not just intelligence in the broad sense of the word, but understanding – the ability to understand content & context, comprehend causation, provide explanations, summarize material, etc. Arguably, pursuing machine understanding has a different focus to artificial ‘general’ intelligence – where a machine could behave with a degree of generality without actually understanding what it is doing.

To explore the natural questions inherent in this concept, the conference aims to draw on the fields of AI, AGI, philosophy, cognitive science and psychology, covering a diverse set of methods, assumptions, approaches, and ways of thinking about system design in AI and AGI.

We will also explore important ethical questions surrounding transformative technology: how to navigate risks and take advantage of opportunities.

When/Where

Dates: Slated for March or April 2020 – definite dates TBA.

Where: Melbourne, Victoria, Australia!

Speakers

We are currently working on a list of speakers – as of writing, we have confirmed:

John S. Wilkins (philosophy of science/species taxonomy) – Author of ‘Species: The Evolution of the Idea’, co-author of ‘The Nature of Classification: Relationships and Kinds in the Natural Sciences’. Blogs at ‘Evolving Thoughts’.

Dr. Kevin B. Korb (philosophy of science/AI) – Co-founded Bayesian Intelligence with Prof. Ann Nicholson in 2007. He continues to engage in research on the theory and practice of causal discovery of Bayesian networks (aka data mining with BNs), machine learning, evaluation theory, the philosophy of scientific method and informal logic. Author of ‘Bayesian Artificial Intelligence’ and co-author of ‘Evolving Ethics’.

 

David Pearce (philosophy, the hedonistic imperative) – British philosopher and co-founder of the World Transhumanist Association (since rebranded and incorporated as Humanity+, Inc.), and a prominent figure within the transhumanist movement. He approaches ethical issues from a lexical negative utilitarian perspective. Author of ‘The Hedonistic Imperative’ and ‘The Abolitionist Project’.

Stelarc (performance artist) – Cyprus-born performance artist raised in the Melbourne suburb of Sunshine, whose works focus heavily on extending the capabilities of the human body. As such, most of his pieces are centered on his concept that “the human body is obsolete”. There is a book about Stelarc and his works – ‘Stelarc: The Monograph (Electronic Culture: History, Theory, and Practice)’ – edited by Marquard Smith.

Jakob Hohwy (head of philosophy at Monash University) – philosopher engaged in both conceptual and experimental research. He works on problems in the philosophy of mind concerning perception, neuroscience, and mental illness. Author of ‘The Predictive Mind’.

Topics

Human Enhancement, Transhumanism & ‘the Posthuman’

Human enhancement technologies are used not only to treat diseases and disabilities, but increasingly also to augment human capacities and qualities. Certain enhancement technologies are already available – for instance coffee, mood brighteners, reproductive technologies and plastic surgery. On the one hand, the scientific community has taken an increasing interest in these innovations and allocated substantial public and private resources to them. On the other hand, such research can have an impact, positive or negative, on individuals, society, and future generations. Some have advocated the right to use such technologies freely, considering primarily the value of freedom and individual autonomy for those users. Others have called attention to the risks and potential harms of these technologies, not only for the individual, but also for society as a whole. Such use, it is argued, could accentuate discrimination among persons with different abilities, thus increasing injustice and widening the gap between the rich and the poor. There is a dilemma regarding how to regulate and manage such practices through national and international laws, so as to safeguard the common good and protect vulnerable persons.

Long Term Value and the Future of Life in the Universe

It seems obvious that we should care about future generations – though how far into the future should our concern extend? This obvious-sounding idea can lead to surprising conclusions.

Since the future is big, there could be overwhelmingly more people in the future than there are in the present generation. If you want to have a positive impact on lives, and are agnostic as to when that impact is realised, your key concern shouldn’t be to help the present generation, but to ensure that the future goes well for life in the long term.

This idea is often confused with the claim that we shouldn’t do anything to help people in the present generation. But the long-term value thesis is about what matters most – and how we can have a positive impact on the future of life in the universe is an extremely important and fascinatingly complicated question.

Artificial Intelligence & Understanding

Following on from a workshop at AGI17 on ‘Understanding Understanding’, we will cover many fascinating questions, such as:

  • What is understanding?
    • How should we define understanding?
    • Is understanding an emergent property of intelligent systems? And/or a central property of intelligent systems?
    • What are the typologies or gradations of understanding?
    • Does understanding relate to consciousness?  If so how?
    • Is general intelligence necessary and/or sufficient to achieve understanding in an artificial system?
    • What differentiates systems that do and do not have understanding?
  • Why focus on developing machine understanding (MU)?
    • Isn’t human understanding enough?
    • What are the pros/cons of developing MU?
    • Is it ethical to develop it?
    • Does morality come along for the ride once MU is achieved?
    • How could MU help solve the ‘value loading’ problem in AI alignment?
  • How do we create machine understanding?
    • What is required in order to achieve understanding in machines?
    • How can we create systems that exhibit understanding?
    • How can we test for understanding?
    • Can understanding be achieved through hand-crafted architectures or must it emerge through self-organizing (constructivist) principles?
    • How can mainstream techniques be used towards the development of machines which exhibit understanding?
    • Do we need radically different approaches than those in use today to build systems with understanding?
    • Does building artificially intelligent machines with versus without understanding depend on the same underlying principles, or are these orthogonal approaches?
    • Do we need special programming languages to implement understanding in intelligent systems?
    • How can current state of the art methods in AGI address the need for understanding in machines?
  • When is machine understanding likely to occur?
    • What types of research/discoveries are likely to accelerate progress towards MU?
    • What may hinder progress?

The conference will also cover aspects of futurology in general, including transhumanism, posthumanism, reducing suffering, and the long term future.

 

 

Event: Stelarc – Contingent & Contestable Futures

STELARC – CONTINGENT AND CONTESTABLE FUTURES: DIGITAL NOISE, GLITCHES & CONTAMINATIONS

Synopsis: In the age of the chimera, uncertainty and ambivalence generate unexpected anxieties. The dead, the near-dead, the brain dead, the yet to be born, the partially living and synthetic life all now share a material and proximal existence, with other living bodies, microbial life, operational machines and executable and viral code. Digital objects proliferate, contaminating the human biome. Bodies become end effectors for other bodies in other places and for machines elsewhere, generating interactive loops and recursive choreographies. There was always a ghost in the machine, but not as a vital force that animates but rather as a fading attestation of the human.

Agenda

5.45 – Meet, greet, and eat – pub food (it’s actually not bad!). Feel free to come early to take advantage of the $8.50 pints from 4.00–6.00.
6.40 – Adam Ford – Introduction
6.50 – Stelarc – Talk: Contingent & Contestable Futures

Where: The Clyde Hotel (upstairs in the function room), 385 Cardigan St, Carlton VIC 3053 – bring your appetite, there is a good menu: https://www.theclydehotel.com.au
When: Thursday July 25th – 5.45 onwards, though a few of us will be there earlier (say 5pm) to take advantage of the $8.50 pints (from 4pm onwards – if you say you are with STF you will get $8.50 pints all night)

*P.S. The event will likely be videoed – if you have any issues with being seen or heard on YouTube, please let us know.

BRIEF BIOGRAPHICAL NOTES

Stelarc experiments with alternative anatomical architectures. His performances incorporate Prosthetics, Robotics, VR and Biotechnology. He is presently surgically constructing and augmenting an ear on his arm. In 1996 he was made an Honorary Professor of Art and Robotics at Carnegie Mellon University, and in 2002 was awarded an Honorary Doctorate of Laws by Monash University. In 2010 he was awarded the Ars Electronica Hybrid Arts Prize. In 2015 he received the Australia Council’s Emerging and Experimental Arts Award. In 2016 he was awarded an Honorary Doctorate from the Ionian University, Corfu. His artwork is represented by Scott Livesey Galleries, Melbourne. www.stelarc.org

Altered States of Consciousness through Technological Intervention

A mini-documentary on possible modes of being in the future – Ben Goertzel talks about the Singularity and exploring Altered States of Consciousness, Stelarc discusses Navigating Mixed Realities, Kent Kemmish muses on the paradox of strange futures, and Max More compares Transhumanism to Humanism.


Starring: Ben Goertzel, Stelarc, Kent Kemmish, Max More
Edited: Adam Ford

Topics: Singularity, Transhumanism, and States of Consciousness
Thanks to NASA for some of the b-roll

 

Transcript

Ben Goertzel

It’s better perhaps to think of the singularity in terms of human experience. Right now, due to the way our brains are built, we have a few states of consciousness that follow us around every day.

There’s the ordinary waking state of consciousness, there’s various kinds of sleep, there’s a flow state of consciousness that we get into when we’re really into the work we’re doing, or playing music and we’re really into it. There are various enlightened states you can get into by meditating for a really long time. The spectrum of states of consciousness that human beings can enter into is a tiny little fragment of all the possible ways of experience. When the singularity comes it’s going to bring us a wild variety of states of consciousness, a wild variety of ways of thinking and feeling and experiencing the world.

Stelarc
Well I think we’re expected to increasingly perform in mixed realities, so sometimes we’re biological bodies, sometimes we’re machinically augmented and accelerated, and other times we have to manage data streams in virtual systems. So we have to seamlessly slide between these three modes of operation, and engineering new interfaces, more intimate interfaces, so we can do this more seamlessly is an important strategy.

Kent Kemmish
Plenty of scientists would say that it’s crazy and there’s no way, I guess we could have that debate. But they might agree with me that if it is crazy, it’s crazy because of how the world works socially and not because of how difficult it is intrinsically. It’s not crazy for scientific reasons; it’s crazy because the world is crazy.

Max More
I think that people when they look at the future, if they do accept this idea that there’s going to be drastic changes and great advances, they will necessarily try to fit that very complex, impossible to really understand future, into very familiar mental models because they want to put things in boxes, they want to feel like they have some sort of grip on that. So I won’t be surprised to see Christian transhumanists and Mormon transhumanists and even Buddhist transhumanists and every other group will have some kind of set of ideas, they will gradually accept them, but they will make their future world fit with their pre-existing views as to how it will be.

And I think that the essence of transhumanism is not religious, it’s really based on humanism, it’s an extension of humanism, hence transhumanism. It’s really based on ideas of reason and progress and enlightenment and a kind of a secularism. But that doesn’t mean it’s incompatible with trying to make certain of the transhumanist ideas of self-improvement, of enhancement. I think those are potentially compatible with at least non fundamentalist forms of religion.

– Many thanks to Tom Richards for the transcription