Posts

Conference: AI & Human Enhancement – Understanding the Future – Early 2020

Introduction

Overview

The event will address a variety of topics in futurology (i.e. accelerating change & long-term futures, existential risk, philosophy, transhumanism & ‘the posthuman’) in general, though it will have a special focus on Machine Understanding.
How will we operate alongside artificial agents that increasingly ‘understand’ us, and important aspects of the world around us?
The ultimate goal of AI is to achieve not just intelligence in the broad sense of the word, but understanding – the ability to understand content & context, comprehend causation, provide explanations, summarize material, etc. Arguably, pursuing machine understanding has a different focus to artificial ‘general’ intelligence – where a machine could behave with a degree of generality without actually understanding what it is doing.

To explore the natural questions inherent in this concept, the conference aims to draw on the fields of AI, AGI, philosophy, cognitive science and psychology, covering a diverse set of methods, assumptions, approaches, and ways of designing and thinking about systems in AI and AGI.

We will also explore important ethical questions surrounding transformative technology: how to navigate risks and take advantage of opportunities.

When/Where

Dates: Slated for March or April 2020 – definite dates TBA.

Where: Melbourne, Victoria, Australia!

Speakers

We are currently working on a list of speakers – at the time of writing, we have confirmed:

John S. Wilkins (philosophy of science/species taxonomy) – Author of ‘Species: The Evolution of the Idea’, co-author of ‘The Nature of Classification: Relationships and Kinds in the Natural Sciences’. Blogs at ‘Evolving Thoughts’.

Dr. Kevin B. Korb (philosophy of science/AI) – Co-founded Bayesian Intelligence with Prof. Ann Nicholson in 2007. He continues to engage in research on the theory and practice of causal discovery of Bayesian networks (aka data mining with BNs), machine learning, evaluation theory, the philosophy of scientific method and informal logic. Author of ‘Bayesian Artificial Intelligence’ and co-author of ‘Evolving Ethics’.

David Pearce (philosophy, the hedonistic imperative) – British philosopher and co-founder of the World Transhumanist Association (since rebranded and incorporated as Humanity+, Inc.), and a prominent figure within the transhumanist movement. He approaches ethical issues from a lexical negative utilitarian perspective. Author of ‘The Hedonistic Imperative’ and ‘The Abolitionist Project’.

Stelarc (performance artist) – Cyprus-born performance artist raised in the Melbourne suburb of Sunshine, whose works focus heavily on extending the capabilities of the human body. As such, most of his pieces are centered on his concept that “the human body is obsolete”. There is a book about Stelarc and his works – ‘Stelarc: The Monograph (Electronic Culture: History, Theory, and Practice)’, edited by Marquard Smith.

Jakob Hohwy (head of philosophy at Monash University) – philosopher engaged in both conceptual and experimental research. He works on problems in philosophy of mind about perception, neuroscience, and mental illness. Author of ‘The Predictive Mind’.

Topics

Human Enhancement, Transhumanism & ‘the Posthuman’

Human enhancement technologies are used not only to treat diseases and disabilities, but increasingly also to increase human capacities and qualities. Certain enhancement technologies are already available – for instance, coffee, mood brighteners, reproductive technologies and plastic surgery. On the one hand, the scientific community has taken an increasing interest in these innovations and allocated substantial public and private resources to them. On the other hand, such research can have an impact, positive or negative, on individuals, society, and future generations. Some have advocated the right to use such technologies freely, considering primarily the value of freedom and individual autonomy for those users. Others have called attention to the risks and potential harms of these technologies, not only for the individual, but also for society as a whole. Such use, it is argued, could accentuate discrimination among persons with different abilities, thus increasing injustice and the gap between the rich and the poor. There is a dilemma regarding how to regulate and manage such practices through national and international laws, so as to safeguard the common good and protect vulnerable persons.

Long Term Value and the Future of Life in the Universe

It seems obvious that we should have a care for future generations – though how far into the future should our concern extend? This obvious-sounding idea can lead to surprising conclusions.

Since the future is big, there could be overwhelmingly far more people in the future than there are in the present generation. If you want to have a positive impact on lives, and are agnostic as to when the impact is realised, your key concern shouldn’t be to help the present generation, but to ensure that the future goes well for life in the long term.

This idea is often confused with the claim that we shouldn’t do anything to help people in the present generation. But the long-term value thesis is about what most matters – and what we do to have a positive impact on the future of life in the universe is an extremely important and fascinatingly complicated question.

Artificial Intelligence & Understanding

Following on from a workshop at AGI17 on ‘Understanding Understanding’, we will cover many fascinating questions, such as:

  • What is understanding?
    • How should we define understanding?
    • Is understanding an emergent property of intelligent systems? And/or a central property of intelligent systems?
    • What are the typologies or gradations of understanding?
    • Does understanding relate to consciousness? If so, how?
    • Is general intelligence necessary and/or sufficient to achieve understanding in an artificial system?
    • What differentiates systems that do and do not have understanding?
  • Why focus on developing machine understanding?
    • Isn’t human understanding enough?
    • What are the pros/cons of developing MU?
    • Is it ethical to develop it?
    • Does morality come along for the ride once MU is achieved?
    • How could MU help solve the ‘value loading’ problem in AI alignment?
  • How do we create machine understanding?
    • What is required in order to achieve understanding in machines?
    • How can we create systems that exhibit understanding?
    • How can we test for understanding?
    • Can understanding be achieved through hand-crafted architectures or must it emerge through self-organizing (constructivist) principles?
    • How can mainstream techniques be used towards the development of machines which exhibit understanding?
    • Do we need radically different approaches than those in use today to build systems with understanding?
    • Does building artificially intelligent machines with versus without understanding depend on the same underlying principles, or are these orthogonal approaches?
    • Do we need special programming languages to implement understanding in intelligent systems?
    • How can current state of the art methods in AGI address the need for understanding in machines?
  • When is machine understanding likely to occur?
    • What types of research/discoveries are likely to accelerate progress towards MU?
    • What may hinder progress?

The conference will also cover aspects of futurology in general, including transhumanism, posthumanism, reducing suffering, and the long term future.


Why did Sam Altman join OpenAI as CEO?

Sam Altman leaves his role as president of Y Combinator and joins OpenAI as CEO – why?

Elon Musk co-founded OpenAI to ensure that artificial intelligence, especially powerful artificial general intelligence (AGI), is “developed in a way that is safe and is beneficial to humanity” – it’s an interesting bet, because AGI doesn’t exist yet, and the tech industry’s forecasts about when AGI will be realised span a wide spectrum, from relatively soon to perhaps never.

We are trying to build safe artificial general intelligence. So it is my belief that in the next few decades, someone, some group of humans, will build a software system that is smarter and more capable than humans in every way. And so it will very quickly go from being a little bit more capable than humans, to something that is like a million, or a billion times more capable than humans… So we’re trying to figure out how to do that technically, make it safe and equitable, share the benefits of it – the decision making of it – over the world… – Sam Altman

Sam and others believe that developing AGI is a large project, and won’t be cheap – it could require upwards of billions of dollars “in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers”. OpenAI was once a non-profit org, but recently it restructured as a for-profit with caveats. Sam tells investors that the specifics of how return on investment will work in the short term aren’t clear, though ‘Once we build a generally intelligent system, that basically we will ask it to figure out a way to make an investment return for you.’

So, first create AGI and then use it to make money… But how much money?

Profit is capped at 100x investment – excess profit beyond that goes to the rest of the world. 100x is quite a high bar, no? (A $10 million investment, for instance, could return up to $1 billion before the cap kicks in.) The thought is that AGI could be so powerful it could…

“maybe capture the light cone of all future value in the universe, and that’s for sure not okay for one group of investors to have.”

If we take the high standards of the Future of Humanity Institute* for due diligence in pursuing safe AI – are these standards being met at OpenAI? While Sam seems to have some sympathy for the arguments for these standards, he seems to believe it’s more important to focus on the societal consequences of superintelligent AI. Perhaps convincing key players of this in the short term will help incubate an environment where it’s easier to pursue strict safety standards for AGI development.

I really do believe that the work we are doing at OpenAI will not only far eclipse the work I did at YC, but any of the work anyone in the tech industry does… – Sam Altman

See this video (at approximately the 25:30 mark and onwards).

* See Nick Bostrom’s book ‘Superintelligence’.

Juergen Schmidhuber on DeepMind, AlphaGo & Progress in AI

I asked AI researcher Juergen Schmidhuber for his thoughts on progress at DeepMind and about the AlphaGo vs Lee Sedol Go tournament – he provided some initial comments. I will be updating this post with the further interview.

Juergen Schmidhuber: First of all, I am happy about DeepMind’s success, also because the company is heavily influenced by my former students: 2 of DeepMind’s first 4 members and their first PhDs in AI came from my lab, one of them co-founder, one of them first employee. (Other ex-PhD students of mine joined DeepMind later, including a co-author of our first paper on Atari-Go in 2010.)

Go is a board game where the Markov assumption holds: in principle, the current input (the board state) conveys all the information needed to determine an optimal next move (no need to consider the history of previous states). That is, the game can be tackled by traditional reinforcement learning (RL), a bit like 2 decades ago, when Tesauro used RL to learn from scratch a backgammon player on the level of the human world champion (1994). Today, however, we are greatly profiting from the fact that computers are at least 10,000 times faster per dollar.
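
To make the Markov point concrete, here is a minimal tabular Q-learning sketch – an illustration of ‘traditional RL’ in general, not the system Schmidhuber describes or Tesauro’s backgammon player; the state encoding, actions and rewards are left abstract and are assumptions for illustration. The key feature is that values are keyed on the current state alone, with no stored history.

```python
import random
from collections import defaultdict

# Q-values are keyed on (state, action) only: under the Markov assumption the
# current state carries everything needed, so no history is stored anywhere.
Q = defaultdict(float)

def choose_action(state, actions, epsilon=0.1):
    """Epsilon-greedy selection using only the current state."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, next_actions, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```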

In the last few years, automatic Go players have greatly improved. To learn a good Go player, DeepMind’s system combines several traditional methods such as supervised learning (from human experts) and RL based on Monte Carlo Tree Search. It will be very interesting to see the system play against the best human Go player Lee Sedol in the near future.
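
For a sense of what ‘RL based on Monte Carlo Tree Search’ involves, below is a minimal sketch of the UCB1 selection rule used in plain UCT-style MCTS. AlphaGo’s actual search layers policy and value networks on top of this, so the function name, statistics format and exploration constant here are illustrative assumptions only.

```python
import math

def uct_select(children, parent_visits, c=1.4):
    """Pick the move maximising mean playout value plus an exploration bonus (UCB1).
    children: list of (move, visits, total_value) statistics for one tree node."""
    def score(child):
        move, visits, total_value = child
        if visits == 0:
            return float("inf")               # always try unvisited moves first
        exploitation = total_value / visits    # average simulation result so far
        exploration = c * math.sqrt(math.log(parent_visits) / visits)
        return exploitation + exploration
    return max(children, key=score)[0]

# Example: three candidate moves with (visits, total_value) from earlier playouts.
best = uct_select([("A", 10, 6.0), ("B", 3, 2.5), ("C", 0, 0.0)], parent_visits=13)
```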

Unfortunately, however, the Markov condition does not hold in realistic real world scenarios. That’s why games such as football are much harder for machines than Go, and why Artificial General Intelligence (AGI) for RL robots living in partially observable environments will need more sophisticated learning algorithms, e.g., RL for recurrent neural networks.
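
To illustrate the structural difference, here is a toy sketch of a recurrent policy with untrained, randomly initialised weights (purely illustrative, not a learning algorithm): under partial observability the policy must carry an internal memory of past observations, which is the role an RNN’s hidden state plays.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, HIDDEN_DIM, NUM_ACTIONS = 4, 8, 3

# Untrained, randomly initialised weights -- purely to show the structure.
W_in = 0.1 * rng.normal(size=(HIDDEN_DIM, OBS_DIM))
W_rec = 0.1 * rng.normal(size=(HIDDEN_DIM, HIDDEN_DIM))
W_out = 0.1 * rng.normal(size=(NUM_ACTIONS, HIDDEN_DIM))

def recurrent_policy_step(obs, h):
    """One step of an Elman-style recurrent policy: the chosen action depends on
    the whole observation history (summarised in h), not just the current obs."""
    h = np.tanh(W_in @ obs + W_rec @ h)
    return int(np.argmax(W_out @ h)), h

h = np.zeros(HIDDEN_DIM)                      # memory carried across the episode
for t in range(5):
    obs = rng.normal(size=OBS_DIM)            # stand-in for a partial observation
    action, h = recurrent_policy_step(obs, h)
```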

For a comprehensive history of deep RL, see Section 6 of my survey with 888 references:
http://people.idsia.ch/~juergen/deep-learning-overview.html

Also worth seeing Juergen’s AMA here.

Juergen Schmidhuber’s website.

Can Intelligence Explode? – Marcus Hutter at Singularity Summit Australia 2012

Abstract: The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. After a short introduction to this intriguing potential future, I will elaborate on what it could mean for intelligence to explode. In this course, I will (have to) provide a more careful treatment of what intelligence actually is, separate speed from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.


Slides (pdf): http://www.hutter1.net/publ/ssingularity.pdf
Slides (PowerPoint): http://www.hutter1.net/publ/ssingularity.ppsx
Paper: M. Hutter, ‘Can Intelligence Explode?’, Journal of Consciousness Studies, Vol. 19, No. 1–2 (2012), pages 143–166.
http://www.hutter1.net/publ/singularity.pdf

Also see:
http://2012.singularitysummit.com.au/2012/08/can-intelligence-explode/
http://2012.singularitysummit.com.au/2012/08/universal-artificial-intelligence/
http://2012.singularitysummit.com.au/2012/08/panel-intelligence-substrates-computation-and-the-future/
http://2012.singularitysummit.com.au/2012/01/marcus-hutter-to-speak-at-the-singularity-summit-au-2012/
http://2012.singularitysummit.com.au/agenda

Marcus Hutter (born 1967) is a German computer scientist and professor at the Australian National University. Hutter was born and educated in Munich, where he studied physics and computer science at the Technical University of Munich. In 2000 he joined Jürgen Schmidhuber’s group at the Swiss Artificial Intelligence lab IDSIA, where he developed the first mathematical theory of optimal Universal Artificial Intelligence, based on Kolmogorov complexity and Ray Solomonoff’s theory of universal inductive inference. In 2006 he also accepted a professorship at the Australian National University in Canberra.

Hutter’s notion of universal AI describes the optimal strategy of an agent that wants to maximize its future expected reward in some unknown dynamic environment, up to some fixed future horizon. This is the general reinforcement learning problem. Solomonoff/Hutter’s only assumption is that the reactions of the environment in response to the agent’s actions follow some unknown but computable probability distribution.
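
For readers who want the formal statement: one way this optimal agent (AIXI) is commonly written, roughly following the notation of Hutter’s ‘Universal Artificial Intelligence’ (sketched here as an approximation rather than a verbatim quotation), is

$$ a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$

where the $a_i$ are actions, the $o_i$ and $r_i$ are observations and rewards, $m$ is the fixed horizon, $U$ is a universal (monotone) Turing machine, and $\ell(q)$ is the length of program $q$; the $2^{-\ell(q)}$ weighting is Solomonoff’s universal prior over computable environments, matching the “unknown but computable probability distribution” assumption above.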

Professor Marcus Hutter

Research interests:

Artificial intelligence, Bayesian statistics, theoretical computer science, machine learning, sequential decision theory, universal forecasting, algorithmic information theory, adaptive control, MDL, image processing, particle physics, philosophy of science.

Bio:

Marcus Hutter is Professor in the RSCS at the Australian National University in Canberra, Australia. He received his PhD and BSc in physics from the LMU in Munich and a Habilitation, MSc, and BSc in informatics from the TU Munich. Since 2000, his research at IDSIA and now ANU is centered around the information-theoretic foundations of inductive reasoning and reinforcement learning, which has resulted in 100+ publications and several awards. His book “Universal Artificial Intelligence” (Springer, EATCS, 2005) develops the first sound and complete theory of AI. He also runs the Human Knowledge Compression Contest (50’000€ H-prize).

AGI Progress & Impediments – Progress in Artificial Intelligence Panel

Panelists: Ben Goertzel, David Chalmers, Steve Omohundro, James Newton-Thomas – held at the Singularity Summit Australia in 2011

Panelists discuss approaches to AGI, progress and impediments now and in the future.
Ben Goertzel:
Brain emulation (broad-level roadmap simulation): the bottleneck is a lack of imaging technology – we don’t know what level of precision we need to reverse engineer biological intelligence. Ed Boyden – optimal brain imaging.
Not by brain emulation (the engineering/comp-sci/cognitive-sci route): the bottleneck is funding. People in the field believe/feel they know how to do it. To prove this, they need to integrate their architectures, which looks like a big project. It takes a lot of money, but not as much as something like Microsoft Word.

David Chalmers (time 03:42):
We don’t know which of the two approaches will get there, though what form the singularity takes will likely depend on the approach we use to build AGI. We don’t understand the theory yet. Most don’t think we will have a perfect molecular scanner that scans the brain and its chemical constituents. 25 years ago David Chalmers worked in Douglas Hofstadter’s AI lab, but his expertise in AI is now out of date. To get to human-level AI by brute force or through cognitive psychology – he knows that the cog-sci is not in very good shape. A third approach is a hybrid of, roughly, brain augmentation (through technology we are already using, like iPads and computers), technological extension and uploading. If brain augmentation through tech and uploading is a first step towards a Singularity, then it includes humans in the equation, along with humanity’s values, which may help shape a Singularity with those values.

Steve Omohundro (time 08:08):
Early in the history of AI, there was a distinction: the Neats and the Scruffies. John McCarthy (Stanford AI Lab) believed in mathematically precise logical representations – this shaped a lot of what Steve thought about how programming should be done. Marvin Minsky (MIT AI Lab) believed in exploring neural nets and self-organising systems – the approach of throwing things together to see how they self-organise into intelligence. Both approaches are needed: the logical, mathematically precise, neat approach, and the probabilistic, self-organising, fuzzy, learning approach – the scruffy. They have to come together. Theorem proving without any explorative aspect probably won’t succeed. Purely neural-net-based simulations can’t represent semantics well; we need to combine systems with full semantics and systems with the ability to adapt to complex environments.

James Newton-Thomas (time 09:57):
James has been playing with neural nets and has been disappointed with them; he thinks that augmentation is the way forward. The AI problem is going to be easier to solve if we are smarter when we go to solve it. Conferences such as this help infuse us with a collective empowerment of individuals. There is an impediment – we are already being dehumanised with our iPads, where the reason we are having a conversation with others is about being part of a group, not about the information that can be looked up via an iPad. We need to be careful in our approach so that we are able to maintain our humanity whilst gaining the advantages of augmentation.

General Discussion (time 12:05):
David Chalmers: We are already becoming cyborgs in a sense by interacting with tech in our world. The more literal cyborg approach is being worked on now, though we are not yet at the point where the technology is commercialised enough to allow, in principle, a strong literal cyborg approach. Ben Goertzel: Though we could progress with some form of brain vocalisation (picking up words directly from the brain), allowing us to think a Google query and have the results directly added to our mind – thus bypassing our low-bandwidth communication and getting at the information directly in our heads. To do all this …
Steve Omohundro: EEG is gaining a lot of interest to help with the Quantified Self – brain interfaces to help people measure things about their bodies (though the hardware is not that good yet).
Ben Goertzel: Use of BCIs for video games – they can detect whether you are aroused and paying attention, though the resolution is very coarse – it’s hard to get fine-grained brain-state information through the skull. Cranial jacks will get more information. Legal systems are an impediment.
James NT: Allan Snyder uses time-varying magnetic fields in helmets that shut down certain areas of the brain, which effectively makes people smarter in narrower domains of skill. This can provide an idiot-savant ability at the cost of the ability to generalise. A brain that becomes too specific at one task does so at the cost of others – the process of generalisation.

Ben Goertzel, David Chalmers, Steve Omohundro – A Thought Experiment