Conference: AI & Human Enhancement – Understanding the Future – Early 2020
Introduction
Overview
The event will address a variety of topics in futurology (accelerating change and long-term futures, existential risk, philosophy, transhumanism and ‘the posthuman’), though it will have a special focus on Machine Understanding.
How will we operate alongside artificial agents that increasingly ‘understand’ us, and important aspects of the world around us?
The ultimate goal of AI is to achieve not just intelligence in the broad sense of the word, but understanding – the ability to understand content and context, comprehend causation, provide explanations, summarize material, and so on. Arguably, pursuing machine understanding has a different focus to artificial ‘general’ intelligence – where a machine could behave with a degree of generality without actually understanding what it is doing.
To explore the questions this concept naturally raises, the conference aims to draw on the fields of AI, AGI, philosophy, cognitive science and psychology, covering a diverse set of methods, assumptions, approaches, and ways of designing and thinking about systems in AI and AGI.
We will also explore important ethical questions surrounding transformative technology: how to navigate its risks and take advantage of its opportunities.
When/Where
Dates: Slated for March or April 2020 – definite dates TBA.
Where: Melbourne, Victoria, Australia!
Speakers
We are currently working on a list of speakers – as of writing, we have confirmed:
John S. Wilkins (philosophy of science/species taxonomy) – Author of ‘Species: The Evolution of the Idea‘, co-author of ‘The Nature of Classification: Relationships and Kinds in the Natural Sciences‘. Blogs at ‘Evolving Thoughts‘.
Dr. Kevin B. Korb (philosophy of science/AI) – Co-founded Bayesian Intelligence with Prof. Ann Nicholson in 2007. He continues to engage in research on the theory and practice of causal discovery of Bayesian networks (aka data mining with BNs), machine learning, evaluation theory, the philosophy of scientific method and informal logic. Author of ‘Bayesian Artificial Intelligence‘ and co-author of ‘Evolving Ethics‘.
David Pearce (philosophy, the hedonistic imperative) – British philosopher and co-founder of the World Transhumanist Association (since rebranded and incorporated as Humanity+, Inc.), and a prominent figure within the transhumanist movement. He approaches ethical issues from a lexical negative utilitarian perspective. Author of ‘The Hedonistic Imperative‘ and ‘The Abolitionist Project‘.
Stelarc (performance artist) – Cyprus-born performance artist raised in the Melbourne suburb of Sunshine, whose works focus heavily on extending the capabilities of the human body. As such, most of his pieces are centered on his concept that “the human body is obsolete”. There is a book about Stelarc and his works – ‘Stelarc: The Monograph (Electronic Culture: History, Theory, and Practice)‘ which is edited by Marquard Smith.
Jakob Hohwy (head of philosophy at Monash University) – philosopher engaged in both conceptual and experimental research. He works on problems in philosophy of mind about perception, neuroscience, and mental illness. Author of ‘The Predictive Mind‘.
Topics
Human Enhancement, Transhumanism & ‘the Posthuman’
Human enhancement technologies are used not only to treat diseases and disabilities, but increasingly also to augment human capacities and qualities. Certain enhancement technologies are already available: for instance, coffee, mood brighteners, reproductive technologies and plastic surgery. On the one hand, the scientific community has taken an increasing interest in such innovations and allocated substantial public and private resources to them. On the other hand, such research can have an impact, positive or negative, on individuals, society, and future generations. Some have advocated the right to use such technologies freely, appealing primarily to the value of freedom and individual autonomy for their users. Others have called attention to the risks and potential harms of these technologies, not only for the individual but also for society as a whole. Such use, it is argued, could accentuate discrimination among persons with different abilities, increasing injustice and the gap between rich and poor. There is a dilemma regarding how to regulate and manage such practices through national and international laws, so as to safeguard the common good and protect vulnerable persons.
Long Term Value and the Future of Life in the Universe
It seems obvious that we should care about future generations – though how far into the future should our concern extend? This obvious-sounding idea can lead to surprising conclusions.
Since the future is big, there could be overwhelmingly more people in the future than there are in the present generation. If you want to have a positive impact on lives, and are agnostic as to when that impact is realised, your key concern shouldn’t be to help the present generation, but to ensure that the future goes well for life in the long term.
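The scale of this claim can be made concrete with a rough back-of-the-envelope calculation. The figures below (current population, how long civilisation might persist, average lifespan) are purely illustrative assumptions, not estimates from the conference:

```python
# Illustrative sketch: if humanity persists at roughly today's population,
# how many future people are there relative to the present generation?
present_population = 8e9     # people alive today (approximate)
years_remaining = 1_000_000  # assumed future lifespan of civilisation
avg_lifespan = 80            # assumed average human lifespan in years

# Total future people = population "slots" refreshed once per lifespan.
future_people = present_population * (years_remaining / avg_lifespan)
ratio = future_people / present_population

print(f"Estimated future people: {future_people:.1e}")
print(f"Future-to-present ratio: {ratio:,.0f} to 1")  # 12,500 to 1
```

Even under these modest assumptions, future people outnumber the present generation by four orders of magnitude, which is the arithmetic behind the long-term value thesis.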
This idea is often confused with the claim that we shouldn’t do anything to help people in the present generation. But the long-term value thesis is about what most matters – and what we do to have a positive impact on the future of life in the universe is an extremely important and fascinatingly complicated question.
Artificial Intelligence & Understanding
Following on from a workshop at AGI17 on ‘Understanding Understanding’ we will cover many fascinating questions, such as:
- What is understanding?
- How should we define understanding?
- Is understanding an emergent property of intelligent systems? And/or a central property of intelligent systems?
- What are the typologies or gradations of understanding?
- Does understanding relate to consciousness? If so how?
- Is general intelligence necessary and/or sufficient to achieve understanding in an artificial system?
- What differentiates systems that do and do not have understanding?
- Why focus on developing machine understanding?
- Isn’t human understanding enough?
- What are the pros/cons of developing MU?
- Is it ethical to develop it?
- Does morality come along for the ride once MU is achieved?
- How could MU help solve the ‘value loading’ problem in AI alignment?
- How do we create machine understanding?
- What is required in order to achieve understanding in machines?
- How can we create systems that exhibit understanding?
- How can we test for understanding?
- Can understanding be achieved through hand-crafted architectures or must it emerge through self-organizing (constructivist) principles?
- How can mainstream techniques be used towards the development of machines which exhibit understanding?
- Do we need radically different approaches than those in use today to build systems with understanding?
- Does building artificially intelligent machines with versus without understanding depend on the same underlying principles, or are these orthogonal approaches?
- Do we need special programming languages to implement understanding in intelligent systems?
- How can current state of the art methods in AGI address the need for understanding in machines?
- When is machine understanding likely to occur?
- What types of research/discoveries are likely to accelerate progress towards MU?
- What may hinder progress?
The conference will also cover aspects of futurology in general, including transhumanism, posthumanism, reducing suffering, and the long-term future.