How will we operate alongside artificial agents that increasingly ‘understand’ us, and important aspects of the world around us?
The ultimate goal of AI is to achieve not just intelligence in the broad sense of the word, but understanding – the ability to grasp content and context, comprehend causation, provide explanations, and summarize material. Arguably, pursuing machine understanding has a different focus from artificial ‘general’ intelligence, where a machine could behave with a degree of generality without actually understanding what it is doing.
To explore the natural questions inherent in this concept, the conference aims to draw on the fields of AI, AGI, philosophy, cognitive science, and psychology, covering a diverse set of methods, assumptions, approaches, and ways of designing and thinking about systems in AI and AGI.
We will also explore important ethical questions surrounding transformative technology: how to navigate its risks and take advantage of its opportunities.
We are currently working on a list of speakers – as of writing, we have confirmed:
John S. Wilkins (philosophy of science/species taxonomy) – Author of ‘Species: The Evolution of the Idea‘, co-author of ‘The Nature of Classification: Relationships and Kinds in the Natural Sciences‘. Blogs at ‘Evolving Thoughts‘.
Dr. Kevin B. Korb (philosophy of science/AI) – Co-founded Bayesian Intelligence with Prof. Ann Nicholson in 2007. He continues to engage in research on the theory and practice of causal discovery of Bayesian networks (aka data mining with BNs), machine learning, evaluation theory, the philosophy of scientific method, and informal logic. Author of ‘Bayesian Artificial Intelligence‘ and co-author of ‘Evolving Ethics‘.
David Pearce (philosophy, the hedonistic imperative) – British philosopher and co-founder of the World Transhumanist Association (since rebranded and incorporated as Humanity+, Inc.), and a prominent figure within the transhumanist movement. He approaches ethical issues from a lexical negative utilitarian perspective. Author of ‘The Hedonistic Imperative‘ and ‘The Abolitionist Project‘.
Stelarc (performance artist) – Cyprus-born performance artist raised in the Melbourne suburb of Sunshine, whose works focus heavily on extending the capabilities of the human body. As such, most of his pieces are centered on his concept that “the human body is obsolete”. There is a book about Stelarc and his works – ‘Stelarc: The Monograph (Electronic Culture: History, Theory, and Practice)‘, edited by Marquard Smith.
Jakob Hohwy (head of philosophy at Monash University) – philosopher engaged in both conceptual and experimental research. He works on problems in philosophy of mind about perception, neuroscience, and mental illness. Author of ‘The Predictive Mind‘.
Following on from the ‘Understanding Understanding’ workshop at AGI17, we will cover many fascinating questions, such as:
- What is understanding?
- How should we define understanding?
- Is understanding an emergent property of intelligent systems? And/or a central property of intelligent systems?
- What are the typologies or gradations of understanding?
- Does understanding relate to consciousness? If so how?
- Is general intelligence necessary and/or sufficient to achieve understanding in an artificial system?
- What differentiates systems that do and do not have understanding?
- Why focus on developing machine understanding (MU)?
  - Isn’t human understanding enough?
  - What are the pros/cons of developing MU?
  - Is it ethical to develop it?
  - Does morality come along for the ride once MU is achieved?
  - How could MU help solve the ‘value loading’ problem in AI alignment?
- How can we create machine understanding?
  - What is required in order to achieve understanding in machines?
  - How can we create systems that exhibit understanding?
  - How can we test for understanding?
- Can understanding be achieved through hand-crafted architectures or must it emerge through self-organizing (constructivist) principles?
- How can mainstream techniques be used towards the development of machines which exhibit understanding?
- Do we need radically different approaches than those in use today to build systems with understanding?
- Does building artificially intelligent machines with versus without understanding depend on the same underlying principles, or are these orthogonal approaches?
- Do we need special programming languages to implement understanding in intelligent systems?
- How can current state of the art methods in AGI address the need for understanding in machines?
- When is machine understanding likely to occur?
- What types of research/discoveries are likely to accelerate progress towards MU?
- What may hinder progress?
The conference will also cover aspects of futurology in general, including transhumanism, posthumanism, reducing suffering, and the long-term future.
Melbourne, Australia (venue to be confirmed)