Adam Ford argues for the position that Artificial Intelligence Will Be Smarter Than Humans within the lifetime of a young adult (sometime before the end of this century). This was one side of a debate put on by Melbourne University for a philosophy course. Adam discusses why AI surpassing human-level intelligence is likely and important: approaches to thinking about the issue, the evidence for AI becoming superintelligent, what experts think about the issue, the history of the idea, the potential outcomes of superhuman AI, what's at stake, and more.
Why discuss this issue? Why is AI important?
Intelligence is powerful: it's a force multiplier. If we can't control such a powerful force multiplier, and it doesn't turn out to be benevolent by default, then we are likely doomed.
1. There is a substantial chance we will create approximately human-level AI before 2100;
2. If approximate human-level AI is created, there is a good chance vastly superhuman AI will follow via an intelligence explosion;
3. An uncontrolled intelligence explosion could destroy everything we value, but a controlled intelligence explosion would benefit humanity enormously.
(HI stands for human-level intelligence.)
Syllogism 1: Intelligence is powerful; AI is intelligent; therefore AI is powerful.
Syllogism 2: Technological progress is a much faster force multiplier than evolutionary progress; AI is subject to technological progress while HI is subject to evolutionary progress; therefore AI will become smarter faster than HI.
Syllogism 3: More intelligence is more powerful than less intelligence; AI will overtake HI; therefore AI will be more powerful than HI.
Many thanks for tuning in!