Posts

Joscha Bach – GPT-3: Is AI Deepfaking Understanding?

Joscha Bach on GPT-3, achieving AGI, machine understanding and lots more!


Discussion points:
02:40 What’s missing in AI at the moment? A unified, coherent model of reality
04:14 AI systems like GPT-3 behave as if they understand – what’s missing?
08:35 Symbol grounding – does GPT-3 have it?
09:35 GPT-3 for music, image, and video generation
11:13 The GPT-3 temperature parameter. Strange output? (see the sampling sketch after this list)
13:09 GPT-3 a powerful tool for idea generation
14:05 GPT-3 as a tool for writing code. Will GPT-3 spawn a singularity?
16:32 Increasing GPT-3 input context may have a high impact
16:59 Identifying grammatical structure & language
19:46 What is the GPT-3 transformer network doing?
21:26 GPT-3 uses brute force, not zero-shot learning; humans do ZSL
22:15 Extending the GPT-3 token context space. Current Context = Working Memory. Humans with smaller current contexts integrate concepts over long time-spans
24:07 GPT-3 can’t write a good novel
25:09 GPT-3 needs to become sensitive to multi-modal sense data – video, audio, text etc
26:00 GPT-3 a universal chat-bot – conversations with God & Johann Wolfgang von Goethe
30:14 What does understanding mean? Does it have gradients (i.e. from primitive to high level)?
32:19 (correlation vs causation) What is causation? Does GPT-3 understand causation? Does GPT-3 do causation?
38:06 Deep-faking understanding
40:06 The metaphor of the Golem applied to civilization
42:33 GPT-3 is fine with a person in the loop. The big danger is a system that fakes understanding. Deep-faking intelligible explanations.
44:32 GPT-3 babbling at the level of non-experts
45:14 Our civilization lacks sentience – it can’t plan ahead
46:20 Would GPT-3 (a Hopfield network) improve dramatically if it could scale to 1 to 5 trillion parameters?
47:24 GPT-3: scaling up a simple idea. Clever hacks to formulate the inputs
47:41 Google GShard with 600 billion parameters – Amazon may be doing something similar – future experiments
49:12 Ideal grounding in machines
51:13 We live inside a story we generate about the world – no reason why GPT-3 can’t be extended to do this
52:56 Tracking the real world
54:51 MicroPsi
57:25 What is computationalism? What is its relationship to mathematics?
59:30 Stateless systems vs step-by-step computation – Gödel, Turing, the halting problem & the notion of truth
1:00:30 Truth independent from the process used to determine it. Constraining truth to that which can be computed on finite state machines
1:03:54 Infinities can’t describe a consistent reality without contradictions
1:06:04 Stevan Harnad’s understanding of computation
1:08:32 Causation / answering ‘why’ questions
1:11:12 Causation through brute forcing correlation
1:13:22 Deep learning vs shallow learning
1:14:56 Brute forcing current deep learning algorithms on a Matrioshka brain – would it wake up?
1:15:38 What is sentience? Could a plant be sentient? Are eco-systems sentient?
1:19:56 Software/OS as spirit – spiritualism vs superstition. Empirically informed spiritualism
1:23:53 Can we build AI that shares our purposes?
1:26:31 Is the cell the ultimate computronium? The purpose of control is to harness complexity
1:31:29 Intelligent design
1:33:09 Category learning & categorical perception: Models – parameters constrain each other
1:35:06 Surprise minimization & hidden states; abstraction & continuous features – predicting dynamics of parts that can be both controlled & not controlled, by changing the parts that can be controlled. Categories are a way of talking about hidden states.
1:37:29 ‘Category’ is a useful concept – gradients are often hard to compute – so compressing away gradients to focus on signals (categories) when needed
1:38:19 Scientific / decision tree thinking vs grounded common sense reasoning
1:40:00 Wisdom/common sense vs understanding. Common sense, tribal biases & group insanity. Self-preservation, Dunbar numbers
1:44:10 Are the g factor & understanding two sides of the same coin? What is intelligence?
1:47:07 General intelligence as the result of control problems so general they require agents to become sentient
1:47:47 Solving the Turing test: asking the AI to explain intelligence. If response is an intelligible & testable implementation plan then it passes?
1:49:18 The term ‘general intelligence’ inherits its essence from behavioral psychology; a behaviorist black-box approach to measuring capability
1:52:15 How we perceive color – natural synesthesia & induced synesthesia
1:56:37 The g factor vs understanding
1:59:24 Understanding as a mechanism to achieve goals
2:01:42 The end of science?
2:03:54 Exciting, currently untestable theories/ideas (that may become testable once we develop precise-enough instruments). Can fundamental physics be solved by computational physics?
2:07:14 Quantum computing. Are there deeper substrates of the universe that run more efficiently than the particle level?
2:10:05 The Fermi paradox
2:12:19 Existence, death and identity construction
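
The temperature parameter discussed at 11:13 is a standard sampling knob: the model’s logits are divided by the temperature before the softmax, so low temperatures sharpen the distribution (conservative, repetitive output) and high temperatures flatten it (varied, sometimes strange output). A minimal sketch in Python – the three-token vocabulary and logit values here are made up for illustration:

    import math, random

    def sample_with_temperature(logits, temperature=1.0):
        """Sample a token index after rescaling logits by temperature."""
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        weights = [math.exp(s - m) for s in scaled]
        total = sum(weights)
        probs = [w / total for w in weights]
        r, cum = random.random(), 0.0
        for i, p in enumerate(probs):
            cum += p
            if r < cum:
                return i
        return len(probs) - 1

    logits = [2.0, 1.0, 0.1]  # toy scores for three candidate tokens
    print(sample_with_temperature(logits, 0.2))  # almost always picks token 0
    print(sample_with_temperature(logits, 2.0))  # far more varied choices

As temperature approaches 0, sampling reduces to argmax; very high temperatures approach uniform sampling, which is one source of the ‘strange output’ mentioned above.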

Exciting progress in Artificial Intelligence – Joscha Bach

Joscha Bach discusses progress made in AI so far, what’s missing in AI, and the conceptual progress needed to achieve the grand goals of AI.
Discussion points:
0:07 What is intelligence? Intelligence as the ability to be effective over a wide range of environments
0:37 Intelligence vs smartness – interesting models vs intelligent behavior
1:08 Models vs behaviors – e.g. DeepMind – solving goals over a wide range of environments
1:44 Starting from a blank slate – how does an AI see an Atari game compared to a human? Pac-Man analogy
3:31 Getting the narrative right as well as the details
3:54 Media fear mongering about AI
4:43 Progress in AI – how revolutionary are the ideas behind the AI that led to commercial success? There is a need for more conceptual progress in AI
5:04 Mental representations require probabilistic algorithms – to make further progress we probably need different means of functional approximation
5:33 Many of the new theories in AI are currently not deployed – we can assume a tremendous shift in everyday use of technology in the future because of this
6:07 It’s an exciting time to be an AI researcher

 

Joscha Bach, Ph.D. is an AI researcher who has worked and published on cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany, and has built computational models of motivated decision making, perception, categorization, and concept formation. He is especially interested in the philosophy of AI and in the augmentation of the human mind.

Joscha has taught computer science, AI, and cognitive science at the Humboldt University of Berlin and the Institute for Cognitive Science at Osnabrück. His book “Principles of Synthetic Intelligence” (Oxford University Press) is available on Amazon.

 

Ethical Progress, AI & the Ultimate Utility Function – Joscha Bach

Joscha Bach on ethical progress and AI – it’s fascinating to think: what’s the ultimate utility function? Should we seek the answer in our evolved motivations?

Discussion points:
0:07 Future directions in ethical progress
1:13 Pain and suffering – concern for things we cannot regulate or change
1:50 Reward signals – we should only get them for things we can regulate
2:42 As soon as minds become mutable, ethics changes dramatically – an artificial mind may be like a Zen master on steroids
2:53 The ultimate utility function – how can we maximize the neg-entropy in this universe?
3:29 Our evolved motives don’t align well to this ultimate utility function
4:10 Systems which only maximize what they can consume – humans are like yeast

 


 

 

The Grand Challenge of Developing Friendly Artificial Intelligence – Joscha Bach

Joscha Bach discusses problems with achieving AI alignment, the current discourse around AI, and inefficiencies of human cognition & communication.

Discussion points:
0:08 The AI alignment problem
0:42 Asimov’s Laws: problems with giving an AI rules to follow – it’s a form of slavery
1:12 The current discourse around AI
2:52 Ethics – where do they come from?
3:27 Human constraints don’t apply to AI
4:12 Human communication problems vs AI – communication costs between minds are much larger than within minds
4:57 AI can change its preferences


AI, Consciousness, Science, Art & Understanding – Joscha Bach

Here Joscha Bach discusses consciousness, its relationship to qualia, and what an AI or a utility maximizer would do with it.

What is consciousness? “I think under certain circumstances being conscious is an important part of a mind; it’s a model of a model of a model, basically. What it means is our mind (our neocortex) produces this dream that we take to be the world, based on the sensory data – so it’s basically a hallucination that predicts what next hits your retina – that’s the world. Out there, we don’t know what this is… The universe is some kind of weird pattern generator with some quantum properties. And this pattern generator throws patterns at us, and we try to find regularity in them – and the hidden layers of this neural network amount to latent variables that are colors, people, sounds, ideas and so on… And this is the world that we subjectively inhabit – that’s the world that we find meaningful.”

… “I find theories [about consciousness] that make you feel good very suspicious. If there is something that is like my preferred outcome for emotional reasons, I should be realising that I have a confirmation bias towards this – and that truth is a very brutal vector”..

OUTLINE:
0:07 Consciousness and its importance
0:47 Phenomenal content
1:43 Consciousness and attention
2:30 When AI becomes conscious
2:57 Mary’s Room – the Knowledge Argument, art, science & understanding
4:07 What is understanding? What is truth?
4:49 What interests an artist? Art as a communicative exercise
5:48 Thomas Nagel: What is it like to be a bat?
6:19 Feel good theories
7:01 Raw feels or no? Why did nature endow us with raw feels?
8:29 What are qualia, and are they important?
9:49 Insight addiction & the aesthetics of information
10:52 Would a utility maximizer care about qualia?


John Wilkins – Comprehension and Compression

“In short, data is not knowledge; knowledge is not comprehension; comprehension is not wisdom”

The standard account of understanding has been, since Aristotle, knowledge of the causes of an event or effect. However, this account fails in cases where the subject understood is not causal. In this paper I offer an account of understanding as pattern recognition in large sets of data without the presumption that the patterns indicate causal chains.

All nervous systems by nature desire to process information. Consequently, entities with nervous systems tend to find information everywhere, and on the principle that if some is good a lot is better, we have come up with “Big Data”, which is often suggested as the solution to the problems of one science or another, although it is unclear exactly what counts as big data and how it is supposed to do this. In this paper I will argue (i) that understanding does not and cannot come from larger and higher dimensionality data sets, but from structure in the data that can be literally comprehended; and (ii) that big data multiplies uncertainties unless it can be summarized. In short, data is not knowledge; knowledge is not comprehension; comprehension is not wisdom.
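
Wilkins’ slogan – that comprehension requires summarizable structure, not more data – can be made concrete with a compressor: structured data admits a short description, while noise does not. A minimal Python sketch, where zlib is just a stand-in for any compressor and the data is fabricated for illustration:

    import os, zlib

    def compression_ratio(data: bytes) -> float:
        """Compressed size / original size; lower means more pattern found."""
        return len(zlib.compress(data, 9)) / len(data)

    structured = b"ABAB" * 25_000   # highly patterned 'big data'
    noise = os.urandom(100_000)     # patternless randomness, same size

    print(f"structured: {compression_ratio(structured):.3f}")  # tiny: the pattern is comprehended
    print(f"noise:      {compression_ratio(noise):.3f}")       # ~1 or above: no summary possible

Adding more rows of the noise makes the dataset bigger without making it one bit more comprehensible – the sense in which big data multiplies uncertainties unless it can be summarized.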


Slides can be found here: https://www.slideshare.net/jswilkins/comprehension-as-compression

Event was held at Melbourne Uni in 2019: https://www.meetup.com/en-AU/Science-Technology-and-the-Future/events/265580084/

 

Consider supporting SciFuture by subscribing to the SciFuture YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture

 

Conference: AI & Human Enhancement – Understanding the Future – Early 2020

Introduction

Overview

The event will address a variety of topics in futurology (e.g. accelerating change & long-term futures, existential risk, philosophy, transhumanism & ‘the posthuman’), though it will have a special focus on Machine Understanding.
How will we operate alongside artificial agents that increasingly ‘understand’ us, and important aspects of the world around us?
The ultimate goal of AI is to achieve not just intelligence in the broad sense of the word, but understanding – the ability to understand content & context, comprehend causation, provide explanations, summarize material, etc. Arguably, pursuing machine understanding has a different focus to artificial ‘general’ intelligence – where a machine could behave with a degree of generality without actually understanding what it is doing.

To explore the natural questions inherent within this concept the conference aims to draw on the fields of AI, AGI, philosophy, cognitive science and psychology to cover a diverse set of methods, assumptions, approaches, and systems design and thinking in the field of AI and AGI.

We will also explore important ethical questions surrounding transformative technology, how to navigate risks and take advantage of opportunities.

When/Where

Dates: Slated for March or April 2020 – definite dates TBA.

Where: Melbourne, Victoria, Australia!

Speakers

We are currently working on a list of speakers – as of writing, we have confirmed:

John S. Wilkins (philosophy of science/species taxonomy) – Author of ‘Species: The Evolution of the Idea‘, co-author of ‘The Nature of Classification: Relationships and Kinds in the Natural Sciences‘. Blogs at ‘Evolving Thoughts‘.

Dr. Kevin B. Korb (philosophy of science/AI) – Co-founded Bayesian Intelligence with Prof. Ann Nicholson in 2007. He continues to engage in research on the theory and practice of causal discovery of Bayesian networks (aka data mining with BNs), machine learning, evaluation theory, the philosophy of scientific method and informal logic. Author of ‘Bayesian Artificial Intelligence‘ and co-author of ‘Evolving Ethics‘.

 

David Pearce (philosophy, the hedonistic imperative) – British philosopher and co-founder of the World Transhumanist Association (since rebranded and incorporated as Humanity+, Inc.), and a prominent figure within the transhumanist movement. He approaches ethical issues from a lexical negative utilitarian perspective. Author of ‘The Hedonistic Imperative‘ and ‘The Abolitionist Project‘.

Stelarc (performance artist) – Cyprus-born performance artist raised in the Melbourne suburb of Sunshine, whose works focus heavily on extending the capabilities of the human body. As such, most of his pieces are centered on his concept that “the human body is obsolete”.  There is a book about Stelarc and his works – ‘Stelarc: The Monograph (Electronic Culture: History, Theory, and Practice)‘ which is edited by Marquard Smith.

Jakob Hohwy (head of philosophy at Monash University) – philosopher engaged in both conceptual and experimental research. He works on problems in philosophy of mind about perception, neuroscience, and mental illness.  Author of ‘The Predictive Mind‘.

Topics

Human Enhancement, Transhumanism & ‘the Posthuman’

Human enhancement technologies are used not only to treat diseases and disabilities, but increasingly also to increase human capacities and qualities. Certain enhancement technologies are already available, for instance coffee, mood brighteners, reproductive technologies and plastic surgery. On the one hand, the scientific community has taken an increasing interest in such innovations and allocated substantial public and private resources to them; on the other hand, such research can have an impact, positive or negative, on individuals, society, and future generations. Some have advocated the right to use such technologies freely, considering primarily the value of freedom and individual autonomy for those users. Others have called attention to the risks and potential harms of these technologies, not only for the individual, but also for society as a whole. Such use, it is argued, could accentuate discrimination among persons with different abilities, thus increasing injustice and the gap between the rich and the poor. There is a dilemma regarding how to regulate and manage such practices through national and international laws, so as to safeguard the common good and protect vulnerable persons.

Long Term Value and the Future of Life in the Universe

It seems obvious that we should have a care for future generations – though how far into the future should our concern extend? This obvious-sounding idea can lead to surprising conclusions.

Since the future is big, there could be overwhelmingly more people in the future than there are in the present generation. If you want to have a positive impact on lives, and are agnostic as to when the impact is realised, your key concern shouldn’t be to help the present generation, but to ensure that the future goes well for life in the long term.

This idea is often confused with the claim that we shouldn’t do anything to help people in the present generation. But the long-term value thesis is about what most matters – and what we do to have a positive impact on the future of life in the universe is an extremely important and fascinatingly complicated question.

Artificial Intelligence & Understanding

Following on from a workshop at AGI17 on ‘Understanding Understanding’, we will cover many fascinating questions, such as:

  • What is understanding?
    • How should we define understanding?
    • Is understanding an emergent property of intelligent systems? And/or a central property of intelligent systems?
    • What are the typologies or gradations of understanding?
    • Does understanding relate to consciousness?  If so how?
    • Is general intelligence necessary and/or sufficient to achieve understanding in an artificial system?
    • What differentiates systems that do and do not have understanding?
  • Why focus on developing machine understanding?
    • Isn’t human understanding enough?
    • What are the pros/cons of developing MU?
    • Is it ethical to develop it?
    • Does morality come along for the ride once MU is achieved?
    • How could MU help solve the ‘value loading’ problem in AI alignment?
  • How can we create machine understanding?
    • What is required in order to achieve understanding in machines?
    • How can we create systems that exhibit understanding?
    • How can we test for understanding?
    • Can understanding be achieved through hand-crafted architectures or must it emerge through self-organizing (constructivist) principles?
    • How can mainstream techniques be used towards the development of machines which exhibit understanding?
    • Do we need radically different approaches than those in use today to build systems with understanding?
    • Does building artificially intelligent machines with versus without understanding depend on the same underlying principles, or are these orthogonal approaches?
    • Do we need special programming languages to implement understanding in intelligent systems?
    • How can current state of the art methods in AGI address the need for understanding in machines?
  • When is machine understanding likely to occur?
    • What types of research/discoveries are likely to accelerate progress towards MU?
    • What may hinder progress?

The conference will also cover aspects of futurology in general, including transhumanism, posthumanism, reducing suffering, and the long term future.

 

 

Why did Sam Altman join OpenAI as CEO?

Sam Altman leaves his role as president of Y Combinator to join OpenAI as CEO – why?

Elon Musk co-founded OpenAI to ensure that artificial intelligence, especially powerful artificial general intelligence (AGI), is “developed in a way that is safe and is beneficial to humanity” – it’s an interesting bet, because AGI doesn’t exist yet, and the tech industry’s forecasts about when AGI will be realised span a wide spectrum, from relatively soon to perhaps never.

We are trying to build safe artificial general intelligence. So it is my belief that in the next few decades, someone; some group of humans, will build a software system that is smarter and more capable than humans in every way. And so it will very quickly go from being a little bit more capable than humans, to something that is like a million, or a billion times more capable than humans… So we’re trying to figure out how to do that technically, make it safe and equitable, share the benefits of it – the decision making of it – over the world…Sam Altman

Sam and others believe that developing AGI is a large project that won’t be cheap – it could require billions of dollars “in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers”. OpenAI was once a non-profit org, but it recently restructured as a for-profit with caveats. Sam tells investors that it isn’t clear how return on investment will work in the short term, though ‘Once we build a generally intelligent system, that basically we will ask it to figure out a way to make an investment return for you.’

So, first create AGI and then use it to make money… But how much money?

Profit is capped at 100x investment – excess returns beyond that go to the rest of the world. 100x is quite a high bar, no? The thought is that AGI could be so powerful it could…

“maybe capture the light cone of all future value in the universe, and that’s for sure not okay for one group of investors to have.”
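
To put made-up numbers on the cap: a hypothetical $10M investment could return at most 100 × $10M = $1B; any value created beyond that would flow to the non-profit side rather than to investors.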

If we take the high standards of the Future of Humanity Institute* for due diligence in pursuing safe AI – are these standards being met at OpenAI? While Sam seems to have some sympathy for the arguments behind these standards, he seems to believe it’s more important to focus on the societal consequences of superintelligent AI. Perhaps convincing key players of this in the short term will help incubate an environment where it’s easier to pursue strict safety standards for AGI development.

I really do believe that the work we are doing at OpenAI will not only far eclipse the work I did at YC, but any of the work anyone in the tech industry does…Sam Altman

See this video (from approximately the 25:30 mark onwards)

 

* See Nick Bostrom’s book ‘Superintelligence‘

Juergen Schmidhuber on DeepMind, AlphaGo & Progress in AI

Asked for his thoughts on progress at DeepMind and on the AlphaGo vs Lee Sedol Go tournament, AI researcher Juergen Schmidhuber provided some initial comments. I will be updating this post with further interview material.

Juergen Schmidhuber: First of all, I am happy about DeepMind’s success, also because the company is heavily influenced by my former students: 2 of DeepMind’s first 4 members and their first PhDs in AI came from my lab – one of them a co-founder, one of them the first employee. (Other ex-PhD students of mine joined DeepMind later, including a co-author of our first paper on Atari-Go in 2010.)

Go is a board game where the Markov assumption holds: in principle, the current input (the board state) conveys all the information needed to determine an optimal next move (no need to consider the history of previous states). That is, the game can be tackled by traditional reinforcement learning (RL), a bit like 2 decades ago, when Tesauro used RL to learn from scratch a backgammon player on the level of the human world champion (1994). Today, however, we are greatly profiting from the fact that computers are at least 10,000 times faster per dollar.

In the last few years, automatic Go players have greatly improved. To learn a good Go player, DeepMind’s system combines several traditional methods such as supervised learning (from human experts) and RL based on Monte Carlo Tree Search. It will be very interesting to see the system play against the best human Go player Lee Sedol in the near future.

Unfortunately, however, the Markov condition does not hold in realistic real world scenarios. That’s why games such as football are much harder for machines than Go, and why Artificial General Intelligence (AGI) for RL robots living in partially observable environments will need more sophisticated learning algorithms, e.g., RL for recurrent neural networks.

For a comprehensive history of deep RL, see Section 6 of my survey with 888 references:
http://people.idsia.ch/~juergen/deep-learning-overview.html
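
Schmidhuber’s Markov point can be made concrete in code: under the Markov assumption a tabular method such as Q-learning suffices, because the learning update needs only the current state, never the history. A minimal sketch in Python – the environment interface (reset, legal_actions, step) is hypothetical:

    import random
    from collections import defaultdict

    def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
        """Tabular Q-learning; assumes the current state is a sufficient statistic."""
        Q = defaultdict(float)  # maps (state, action) to an estimated value
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                actions = env.legal_actions(state)
                if random.random() < epsilon:
                    action = random.choice(actions)                     # explore
                else:
                    action = max(actions, key=lambda a: Q[(state, a)])  # exploit
                next_state, reward, done = env.step(action)
                # Markov assumption: the backup depends only on next_state,
                # not on how the game arrived there.
                best_next = 0.0 if done else max(
                    Q[(next_state, a)] for a in env.legal_actions(next_state))
                Q[(state, action)] += alpha * (reward + gamma * best_next
                                               - Q[(state, action)])
                state = next_state
        return Q

In a partially observable game like football this breaks down: (state, action) lookups no longer work, and the policy must condition on a learned summary of the whole history – e.g. the hidden state of a recurrent neural network – rather than on the latest observation alone.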

Also worth seeing Juergen’s AMA here.

Juergen Schmidhuber’s website.

Can Intelligence Explode? – Marcus Hutter at Singularity Summit Australia 2012

Abstract: The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. After a short introduction to this intriguing potential future, I will elaborate on what it could mean for intelligence to explode. In this course, I will (have to) provide a more careful treatment of what intelligence actually is, separate speed from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.

 


 

Slides (pdf): http://www.hutter1.net/publ/ssingularity.pdf
Slides (PowerPoint): http://www.hutter1.net/publ/ssingularity.ppsx
Paper: M. Hutter, “Can Intelligence Explode?”, Journal of Consciousness Studies, Vol. 19, No. 1–2 (2012), pages 143–166.
http://www.hutter1.net/publ/singularity.pdf

Also see:
http://2012.singularitysummit.com.au/2012/08/can-intelligence-explode/
http://2012.singularitysummit.com.au/2012/08/universal-artificial-intelligence/
http://2012.singularitysummit.com.au/2012/08/panel-intelligence-substrates-computation-and-the-future/
http://2012.singularitysummit.com.au/2012/01/marcus-hutter-to-speak-at-the-singularity-summit-au-2012/
http://2012.singularitysummit.com.au/agenda

Marcus Hutter (born 1967) is a German computer scientist and professor at the Australian National University. Hutter was born and educated in Munich, where he studied physics and computer science at the Technical University of Munich. In 2000 he joined Jürgen Schmidhuber’s group at the Swiss Artificial Intelligence lab IDSIA, where he developed the first mathematical theory of optimal Universal Artificial Intelligence, based on Kolmogorov complexity and Ray Solomonoff’s theory of universal inductive inference. In 2006 he also accepted a professorship at the Australian National University in Canberra.

Hutter’s notion of universal AI describes the optimal strategy of an agent that wants to maximize its future expected reward in some unknown dynamic environment, up to some fixed future horizon. This is the general reinforcement learning problem. Solomonoff/Hutter’s only assumption is that the reactions of the environment in response to the agent’s actions follow some unknown but computable probability distribution.
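
In symbols – a compact rendering of the AIXI action rule from Hutter’s book, where U is a universal Turing machine, ℓ(q) the length of program q, and m the fixed horizon:

    a_t \;:=\; \arg\max_{a_t} \sum_{o_t r_t} \ldots \max_{a_m} \sum_{o_m r_m}
    \big[\, r_t + \cdots + r_m \,\big]
    \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Each environment hypothesis q is weighted by 2^{-ℓ(q)}, so shorter programs count for more, in line with Solomonoff induction; the agent then picks the action that maximizes expected total reward up to the horizon m.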


Professor Marcus Hutter

Research interests:

Artificial intelligence, Bayesian statistics, theoretical computer science, machine learning, sequential decision theory, universal forecasting, algorithmic information theory, adaptive control, MDL, image processing, particle physics, philosophy of science.

Bio:

Marcus Hutter is Professor in the RSCS at the Australian National University in Canberra, Australia. He received his PhD and BSc in physics from the LMU in Munich and a Habilitation, MSc, and BSc in informatics from the TU Munich. Since 2000, his research at IDSIA and now ANU is centered around the information-theoretic foundations of inductive reasoning and reinforcement learning, which has resulted in 100+ publications and several awards. His book “Universal Artificial Intelligence” (Springer, EATCS, 2005) develops the first sound and complete theory of AI. He also runs the Human Knowledge Compression Contest (50’000€ H-prize).