Posts

Joscha Bach – GPT-3: Is AI Deepfaking Understanding?

Joscha Bach on GPT-3, achieving AGI, machine understanding and lots more!


Discussion points:
02:40 What’s missing in AI at the moment? A unified, coherent model of reality
04:14 AI systems like GPT-3 behave as if they understand – what’s missing?
08:35 Symbol grounding – does GPT-3 have it?
09:35 GPT-3 for music, image & video generation
11:13 GPT-3 temperature parameter. Strange output?
13:09 GPT-3 a powerful tool for idea generation
14:05 GPT-3 as a tool for writing code. Will GPT-3 spawn a singularity?
16:32 Increasing GPT-3 input context may have a high impact
16:59 Identifying grammatical structure & language
19:46 What is the GPT-3 transformer network doing?
21:26 GPT-3 uses brute force, not zero-shot learning, humans do ZSL
22:15 Extending the GPT-3 token context space. Current Context = Working Memory. Humans with smaller current contexts integrate concepts over long time-spans
24:07 GPT-3 can’t write a good novel
25:09 GPT-3 needs to become sensitive to multi-modal sense data – video, audio, text etc
26:00 GPT-3 a universal chat-bot – conversations with God & Johann Wolfgang von Goethe
30:14 What does understanding mean? Does it have gradients (i.e. from primitive to high level)?
32:19 (correlation vs causation) What is causation? Does GPT-3 understand causation? Does GPT-3 do causation?
38:06 Deep-faking understanding
40:06 The metaphor of the Golem applied to civilization
42:33 GPT-3 fine with a person in the loop. Big danger in a system which fakes understanding. Deep-faking intelligible explanations.
44:32 GPT-3 babbling at the level of non-experts
45:14 Our civilization lacks sentience – it can’t plan ahead
46:20 Would GPT-3 (a Hopfield network) improve dramatically if scaled up to 1–5 trillion parameters?
47:24 GPT-3: scaling up a simple idea. Clever hacks to formulate the inputs
47:41 Google’s GShard with 600 billion parameters – Amazon may be doing something similar – future experiments
49:12 Ideal grounding in machines
51:13 We live inside a story we generate about the world – no reason why GPT-3 can’t be extended to do this
52:56 Tracking the real world
54:51 MicroPsi
57:25 What is computationalism? What is its relationship to mathematics?
59:30 Stateless systems vs step-by-step computation – Gödel, Turing, the halting problem & the notion of truth
1:00:30 Truth independent from the process used to determine truth. Constraining truth to that which can be computed on finite state machines
1:03:54 Infinities can’t describe a consistent reality without contradictions
1:06:04 Stevan Harnad’s understanding of computation
1:08:32 Causation / answering ‘why’ questions
1:11:12 Causation through brute forcing correlation
1:13:22 Deep learning vs shallow learning
1:14:56 Brute forcing current deep learning algorithms on a Matrioshka brain – would it wake up?
1:15:38 What is sentience? Could a plant be sentient? Are eco-systems sentient?
1:19:56 Software/OS as spirit – spiritualism vs superstition. Empirically informed spiritualism
1:23:53 Can we build AI that shares our purposes?
1:26:31 Is the cell the ultimate computronium? The purpose of control is to harness complexity
1:31:29 Intelligent design
1:33:09 Category learning & categorical perception: Models – parameters constrain each other
1:35:06 Surprise minimization & hidden states; abstraction & continuous features – predicting dynamics of parts that can be both controlled & not controlled, by changing the parts that can be controlled. Categories are a way of talking about hidden states.
1:37:29 ‘Category’ is a useful concept – gradients are often hard to compute – so compressing away gradients to focus on signals (categories) when needed
1:38:19 Scientific / decision tree thinking vs grounded common sense reasoning
1:40:00 Wisdom/common sense vs understanding. Common sense, tribal biases & group insanity. Self-preservation, Dunbar numbers
1:44:10 Are the g factor & understanding two sides of the same coin? What is intelligence?
1:47:07 General intelligence as the result of control problems so general they require agents to become sentient
1:47:47 Solving the Turing test: asking the AI to explain intelligence. If the response is an intelligible & testable implementation plan, does it pass?
1:49:18 The term ‘general intelligence’ inherits its essence from behavioral psychology; a behaviorist black-box approach to measuring capability
1:52:15 How we perceive color – natural synesthesia & induced synesthesia
1:56:37 The g factor vs understanding
1:59:24 Understanding as a mechanism to achieve goals
2:01:42 The end of science?
2:03:54 Exciting, currently untestable theories/ideas (which may become testable once we develop sufficiently precise instruments). Can fundamental physics be solved by computational physics?
2:07:14 Quantum computing. Are there deeper substrates of the universe that run more efficiently than the particle level?
2:10:05 The Fermi paradox
2:12:19 Existence, death and identity construction
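The temperature parameter discussed at 11:13 is easy to illustrate in isolation. The sketch below (hypothetical next-token logits, not OpenAI's actual implementation) shows temperature-scaled softmax sampling: low temperature concentrates probability on the most likely token, while high temperature flattens the distribution, which is one source of the "strange output" mentioned above.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits into sampling probabilities.

    Higher temperature flattens the distribution (more surprising samples);
    temperature near 0 approaches greedy, always-most-likely decoding.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate tokens, for illustration only.
logits = [2.0, 1.0, 0.1]

cold = softmax_with_temperature(logits, temperature=0.2)  # sharply peaked
hot = softmax_with_temperature(logits, temperature=2.0)   # nearly flat

# The top token dominates at low temperature, while at high temperature
# probability mass spreads across all tokens.
assert cold[0] > hot[0]
```

Sampling from the "hot" distribution picks improbable tokens far more often, which is why raising the temperature makes the model's continuations stranger.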

Exciting progress in Artificial Intelligence – Joscha Bach

Joscha Bach discusses progress made in AI so far, what’s missing in AI, and the conceptual progress needed to achieve the grand goals of AI.
Discussion points:
0:07 What is intelligence? Intelligence as the ability to be effective over a wide range of environments
0:37 Intelligence vs smartness – interesting models vs intelligent behavior
1:08 Models vs behaviors – i.e. Deepmind – solving goals over a wide range of environments
1:44 Starting from a blank slate – how does an AI see an Atari Game compared to a human? Pac Man analogy
3:31 Getting the narrative right as well as the details
3:54 Media fear mongering about AI
4:43 Progress in AI – how revolutionary are the ideas behind the AI that led to commercial success? There is a need for more conceptual progress in AI
5:04 Mental representations require probabilistic algorithms – to make further progress we probably need different means of functional approximation
5:33 Many of the new theories in AI are currently not deployed – we can assume a tremendous shift in everyday use of technology in the future because of this
6:07 It’s an exciting time to be an AI researcher

 

Joscha Bach, Ph.D. is an AI researcher who has worked on and published about cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany, and has built computational models of motivated decision making, perception, categorization, and concept-formation. He is especially interested in the philosophy of AI and in the augmentation of the human mind.

Joscha has taught computer science, AI, and cognitive science at the Humboldt-University of Berlin and the Institute for Cognitive Science at Osnabrück. His book “Principles of Synthetic Intelligence” (Oxford University Press) is available on Amazon.

 

Ethical Progress, AI & the Ultimate Utility Function – Joscha Bach

Joscha Bach on ethical progress and AI – it’s fascinating to ask ‘What’s the ultimate utility function?’ – should we seek the answer in our evolved motivations?

Discussion points:
0:07 Future directions in ethical progress
1:13 Pain and suffering – concern for things we cannot regulate or change
1:50 Reward signals – we should only get them for things we can regulate
2:42 As soon as minds become mutable ethics dramatically changes – an artificial mind may be like a Zen master on steroids
2:53 The ultimate utility function – how can we maximize negentropy in this universe?
3:29 Our evolved motives don’t align well to this ultimate utility function
4:10 Systems which only maximize what they can consume – humans are like yeast

 


The Grand Challenge of Developing Friendly Artificial Intelligence – Joscha Bach

Joscha Bach discusses problems with achieving AI alignment, the current discourse around AI, and inefficiencies of human cognition & communication.

Discussion points:
0:08 The AI alignment problem
0:42 Asimov’s Laws: problems with giving AI rules to follow – it’s a form of slavery
1:12 The current discourse around AI
2:52 Ethics – where do they come from?
3:27 Human constraints don’t apply to AI
4:12 Human communication problems vs AI – communication costs between minds are much larger than within minds
4:57 AI can change its preferences


Cognitive Biases & In-Group Convergences – Joscha Bach

Joscha Bach discusses biases in group think.

Discussion points:
– In-group convergence: thinking in true & false vs right & wrong
– The group mind may be more stupid than the smartest individuals in the group


AI, Consciousness, Science, Art & Understanding – Joscha Bach

Here Joscha Bach discusses consciousness, its relationship to qualia, and what an AI or a utility maximizer would do with it.

What is consciousness? “I think under certain circumstances being conscious is an important part of a mind; it’s a model of a model of a model, basically. What it means is our mind (our neocortex) produces this dream that we take to be the world, based on the sensory data – so it’s basically a hallucination that predicts what next hits your retina – that’s the world. Out there, we don’t know what this is… the universe is some kind of weird pattern generator with some quantum properties. And this pattern generator throws patterns at us, and we try to find regularity in them – and the hidden layers of this neural network amount to latent variables that are colors, people, sounds, ideas and so on… And this is the world that we subjectively inhabit – that’s the world that we find meaningful.”

… “I find theories [about consciousness] that make you feel good very suspicious. If there is something that is like my preferred outcome for emotional reasons, I should be realising that I have a confirmation bias towards this – and that truth is a very brutal vector.”

OUTLINE:
0:07 Consciousness and its importance
0:47 Phenomenal content
1:43 Consciousness and attention
2:30 When AI becomes conscious
2:57 Mary’s Room – the Knowledge Argument, art, science & understanding
4:07 What is understanding? What is truth?
4:49 What interests an artist? Art as a communicative exercise
5:48 Thomas Nagel: What is it like to be a bat?
6:19 Feel good theories
7:01 Raw feels or no? Why did nature endow us with raw feels?
8:29 What are qualia, and are they important?
9:49 Insight addiction & the aesthetics of information
10:52 Would a utility maximizer care about qualia?


Cognitive Biases & In-Group Convergences with Joscha Bach

True & false vs right & wrong – people converge their views to a set of rights and wrongs relative to in-group biases in their peer group.
As a survival mechanism, convergence in groups is sometimes healthier than being right – so one should sometimes optimize for convergence even at the cost of getting things wrong – meaning humans probably have an evolutionary propensity to favor convergence over truth.
However, optimizing for convergence may result in the group mind being more stupid than the smartest people in the group.

Joscha highlights the controversy around James Damore being fired for sending out a memo about biological differences between men and women affecting their abilities as engineers – where Damore’s arguments may or may not be correct. Regardless of what the facts are about how biological differences affect differences in ability between men & women, Google fired him because it thought supporting these arguments would make for a worse social environment.

This sort of thing leads to an interesting difference in discourse, where:
* ‘nerds’ tend to focus on ‘content’ – on imparting ideas and facts that everyone can judge autonomously to form their own opinions – on the view that in order to craft the best solutions we need the best facts
* for most people, the purpose of communication is ‘coordination’ between individuals and groups (society, nations etc) – where the value of a ‘fact’ is its effect on the coordination between people

So is Google’s response to the memo controversy about getting the facts right, or about how Google at this point should be organised?

What’s also really interesting is that different types of people read this ‘memo’ very differently, making it very difficult to form agreement about its content. How can one agree on what’s valuable about communication – whether it’s more about imparting ideas and facts, or more about coordination?

More recently there has been a lot of talk about #FakeNews – where it’s very difficult to get people to agree to things that are not in their own interests – including, as Joscha points out, the idea that truth matters.


Discussion points:
– In-group convergence: thinking in true & false vs right & wrong
– The group mind may be more stupid than the smartest individuals in the group