Posts

Can we build AI without losing control over it? – Sam Harris

Sam Harris (author of The Moral Landscape and host of the Waking Up podcast) discusses the need for AI safety. His point: while the prospect of advanced AI is fun to think about, we seem unable to "marshal an appropriate emotional response" to ongoing improvements in AI and automation, or to the prospect of dangerous AI – a failure of intuition that leads us to respond to it as we would to a far-off sci-fi doom scenario.

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

Anders Sandberg – The Technological Singularity

Anders gives a short tutorial on the Singularity – clearing up confusion and highlighting important aspects of the Technological Singularity and related ideas, such as accelerating change, horizons of predictability, self-improving artificial intelligence, and the intelligence explosion.

Tutorial Video:

Points covered in the tutorial:

  • The Mathematical Singularity
  • The Technological Singularity: A Horizon of Predictability
  • Confusion Around The Technological Singularity
  • Drivers of Accelerated Growth
  • Technology Feedback Loops
  • A History of Coordination
  • Technological Inflection Points
  • Difficulty of seeing what happens after an Inflection Point
  • The Intelligence Explosion
  • An Optimisation Power Applied To Itself
  • Group Minds
  • The HIVE Singularity: A Networked Global Mind
  • The Biointelligence Explosion
  • Humans are difficult to optimise

An Overview of Models of the Technological Singularity

See Anders' paper 'An overview of models of technological singularity':
This paper reviews different definitions and models of technological singularity. The models range from conceptual sketches to detailed endogenous growth models, as well as attempts to fit empirical data to quantitative models. Such models are useful for examining the dynamics of the world-system and possible types of future crisis points where fundamental transitions are likely to occur. Current models suggest that, generically, even small increasing returns tend to produce radical growth. If mental capital becomes copyable (as would be the case for AI or brain emulation), extremely rapid growth would also become likely.
http://agi-conf.org/2010/wp-content/uploads/2009/06/agi10singmodels2.pdf
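To make the abstract's claim about increasing returns concrete, here is a minimal numerical sketch (my illustration, not code from the paper): with plain exponential growth (dx/dt = x) a quantity never diverges in finite time, but with even slightly increasing returns (dx/dt = x^(1+eps), eps > 0) it reaches any finite cap in bounded time.

    # Minimal sketch (my illustration, not from Sandberg's paper):
    # forward-Euler integration of dx/dt = x**(1 + eps), reporting when
    # (if ever) x crosses a large cap within the time window.

    def time_to_cap(eps, x0=1.0, dt=1e-4, t_max=20.0, cap=1e12):
        """Integrate dx/dt = x**(1 + eps); return the time x >= cap, or None."""
        x, t = x0, 0.0
        while t < t_max:
            x += (x ** (1.0 + eps)) * dt
            t += dt
            if x >= cap:
                return round(t, 2)
        return None

    # eps = 0: pure exponential, x(t) = e**t, reaches 1e12 only at t ~ 27.6,
    # outside the window, so this prints None.
    print(time_to_cap(0.0))
    # eps = 0.1: the analytic solution blows up at t = 1/(eps * x0**eps) = 10,
    # so the cap is hit near t = 10.
    print(time_to_cap(0.1))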

[The] Technological singularity is of increasing interest among futurists both as a predicted possibility in the midterm future and as subject for methodological debate. The concept is used in a variety of contexts, and has acquired an unfortunately large number of meanings. Some versions stress the role of artificial intelligence, others refer to more general technological change. These multiple meanings can overlap, and many writers use combinations of meanings: even Vernor Vinge's seminal essay that coined the term uses several meanings. Some of these meanings may imply each other but often there is a conflation of different elements that likely (but not necessarily) occur in parallel. This causes confusion and misunderstanding to the extent that some critics argue that the term should be avoided altogether. At the very least the term 'singularity' has led to many unfortunate assumptions that technological singularity involves some form of mathematical singularity and can hence be ignored as unphysical.
– Anders Sandberg

A list of models described in the paper:

A. Accelerating change

Exponential or superexponential technological growth (with linked economic growth and social change) (Ray Kurzweil (Kur05), John Smart (Smang))

B. Self improving technology

Better technology allows faster development of new and better technology. (Flake (Fla06))

C. Intelligence explosion

Smarter systems can improve themselves, producing even more intelligence in a strong feedback loop. (I.J. Good (Goo65), Eliezer Yudkowsky)

D. Emergence of superintelligence

(Singularity Institute)

E. Prediction horizon

Rapid change or the emergence of superhuman intelligence makes the future impossible to predict from our current limited knowledge and experience. (Vinge, (Vin93))

F. Phase transition

The singularity represents a shift to new forms of organisation. This could be a fundamental difference in kind such as humanity being succeeded by posthuman or artificial intelligences,
a punctuated equilibrium transition or the emergence of a new meta-system level. (Teilhard de Chardin, Valentin Turchin (Tur77), Heylighen (Hey07))

G. Complexity disaster

Increasing complexity and interconnectedness causes increasing payoffs, but increases instability. Eventually this produces a crisis, beyond which point the dynamics must be different.
(Sornette (JS01), West (BLH+07))

H. Inflexion point

Large-scale growth of technology or economy follows a logistic growth curve. The singularity represents the inflexion point where change shifts from acceleration to deceleration. (Extropian FAQ, T. Modis (Mod02))

I. Infinite progress

The rate of progress in some domain goes to infinity in finite time. (Few, if any, hold this to be plausible.)
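For model I, a one-line derivation (my addition; standard ODE material, not from the paper) shows why infinite progress in finite time follows from superexponential returns. Solving

    \frac{dx}{dt} = a\,x^{1+\varepsilon},\ \varepsilon > 0
    \quad\Longrightarrow\quad
    x(t) = x_0\left(1 - t/T\right)^{-1/\varepsilon},
    \qquad T = \frac{1}{a\,\varepsilon\,x_0^{\varepsilon}}

so x(t) diverges as t approaches the finite time T (with a = x_0 = 1 and ε = 0.1 this gives T = 10, consistent with the numerical sketch above). With ε = 0 the solution is the ordinary exponential x_0 e^{at}, which diverges only as t → ∞. This is the literal mathematical-singularity reading that, as the Sandberg quote above notes, the term unfortunately invites.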


Many thanks for watching!

Consider supporting SciFuture by:
a) Subscribing to the YouTube channel:
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media SciFuture creates

Science, Technology & the Future: http://scifuture.org

Singularity Skepticism or Advocacy – to what extent is it warranted?

Why are some people so skeptical of the possibility of Super-intelligent Machines, while others take it quite seriously?
Hugo de Garis addresses both ‘Singularity Skepticism’ and advocacy – reasons for believing machine intelligence is not only possible but quite probable!
The Singularity will likely be an unprecedentedly huge issue that we will need to face in the coming decades.


If you take the average person in the street and you talk to them about a future intelligent machine, there is a lot of skepticism – because today's machines aren't intelligent, right? I know from my own personal experience that I get incredibly frustrated with computers: they crash all the time, they don't do what I want… literally I say "I hate computers", but I really love them – so I have an ambivalent relationship with computers…
– Hugo de Garis

The exponential growth of technology and resolution of brain-scanning may lead to advanced neuro-engineering. Brain simulation right down to the chemical synapse, or just plain old functional brain representation might be possible within our lifetimes – this would likely lead to a neuromorphic flavour of the singularity.

There have been some enthusiastic and skeptical responses to this video so far on YouTube:

AZR NSMX1 commented: "Computers already have a better memory and a higher speed than the human brain; they have been able to learn and recognize the human voice since 1982, with the first software made for Kurzweil Industries. Expert systems were the first steps toward thinking; then in the 90s we learned that emotions are easier for machines than we believed – an emotion is just an uncontrolled reaction, an automatic preservation code that may or may not be good for a robot reaching its goal. Now in 2010 the Watson supercomputer shows us that it is able to structure human language to produce a logical response – if that is not thinking, then somebody explain to me what it means to think. The only things they still can't do are creative thinking and consciousness, but that will be reached between 2030 and 2035. Consciousness is just the amount and quality of the information you can process – the IBM Blue Brain team said this. For example, we humans are very stupid when it comes to using and exploiting all the possibilities offered by the sense of smell compared to dogs or bears; in that dimension a cockroach is smarter than us, because they can map the direction of a smell to find food or other members of their group. We can't do this; we just have no consciousness in that world. Creativity is the most complex thing – if machines reach creativity then our world will change, because not only will we not have to work anymore, but better still, we will not have to think anymore, haha. Machines gonna do everything."
My response: there have certainly been some impressive strides in technological advancement. Progress might asymptote at some stage – I'm not sure when – but my take is that there won't be many fundamental engineering or scientific bottlenecks to block or stifle it; the biggest problems, I think, will be sociological impediments – human caused.

Darian Rachel says “Around the 8 minute or so point he makes a statement that a machine will be built that is intelligent and conscious. He seems to pull this idea that it will be conscious “out of the air” somewhere. It seems to be a rather silly idea.”
My response: while I agree that a conscious machine is likely difficult to build, there doesn't seem to be much agreement among humans about whether consciousness exists as such, what it actually is, whether it is a byproduct of (complex?) information processing, and whether it is actually computable (using classical computation). Perhaps Hugo de Garis views consciousness as simply being self-aware.

Exile438 responded that the "human brain has 100 billion neurons and each connects to 10,000 other neurons, 10^11 * 10^4 = 10^15, a human brain capacity estimate. Brain-scanning resolution and the speed of computers double every so often, so within the next 2 to 3 decades we could simulate a brain on a computer. If we can do that, it would run electronically 4 million times faster than our chemical brains. This leads to singularity."
My response: it's certainly a strange and exciting time to be alive – the fundamental questions that we have been wrestling with since before recorded history – questions around personal identity and what makes us what we are – may be unraveled within the lifetimes of most of us here today.
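As an aside on the arithmetic in Exile438's comment: the 10^15 figure is simply neurons times connections per neuron, and the "4 million times faster" figure presumably compares electronic signal propagation with axonal conduction speed. A quick back-of-envelope check (my sketch; the speeds are textbook order-of-magnitude values, not figures from the video):

    # Back-of-envelope check (my sketch; assumed order-of-magnitude values).
    neurons = 1e11               # ~100 billion neurons
    connections = 1e4            # ~10,000 synapses per neuron
    print(f"synapse estimate: {neurons * connections:.0e}")     # 1e+15

    axon_speed = 75.0            # m/s, fast myelinated axon
    electronic_speed = 3e8       # m/s, upper bound for signals in a conductor
    print(f"speed ratio: {electronic_speed / axon_speed:.0e}")  # 4e+06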

The long-term future of AI (and what we can do about it) : Daniel Dewey at TEDxVienna

This has been one of my favourite simple talks on AI impacts – simple, clear and straight to the point. Recommended as an introduction to the ideas referred to in the title.

I couldn’t find the audio of this talk at TED – it has been added to archive.org:

 

Daniel Dewey is a research fellow in the Oxford Martin Programme on the Impacts of Future Technology at the Future of Humanity Institute, University of Oxford. His research includes paths and timelines to machine superintelligence, the possibility of intelligence explosion, and the strategic and technical challenges arising from these possibilities. Previously, Daniel worked as a software engineer at Google, did research at Intel Research Pittsburgh, and studied computer science and philosophy at Carnegie Mellon University. He is also a research associate at the Machine Intelligence Research Institute.

http://www.tedxvienna.at/

 

Michio Kaku – A History of a Time to Come

Science, Technology & the Future interviews Dr. Michio Kaku on Artificial Intelligence and the Singularity, Biotech and Nanotechnology

  • What is it that is driving this revolution?
  • How do you think your background in Theoretical Physics shapes your view on the future of the mind?
  • Intelligence enhancement, Internet of the mind – brain-net, like a hive mind? Where are we at with AI?
  • Many AI experts and scientists agree that some time in the future a Singularity will be possible (often disagreeing about when). What are your thoughts on the Singularity?
  • What about advances in Nanotechnology?
  • Is the Sticky Fingers problem a show stopper?

Michio is the author of many best sellers, most recently 'The Future of the Mind'. We are entering a golden age of neuroscience – much of today's discourse concerns its use in helping to understand and treat mental illness (which is great), though in the future there will be other profound implications of understanding neuroscience, such as understanding the mechanics of intelligence…


Michio Kaku’s Biography

Michio Kaku (born January 24, 1947) is an American theoretical physicist, the Henry Semat Professor of Theoretical Physics at the City College of New York, a futurist, and a communicator and popularizer of science. He has written several books about physics and related topics, has made frequent appearances on radio, television, and film, and writes extensive online blogs and articles. He has written three New York Times Best Sellers: Physics of the Impossible (2008), Physics of the Future (2011), and The Future of the Mind (2014).

Kaku is the author of various popular science books:
– Beyond Einstein: The Cosmic Quest for the Theory of the Universe (with Jennifer Thompson) (1987)
– Hyperspace: A Scientific Odyssey through Parallel Universes, Time Warps, and the Tenth Dimension (1994)
– Visions: How Science Will Revolutionize the 21st Century (1998)
– Einstein’s Cosmos: How Albert Einstein’s Vision Transformed Our Understanding of Space and Time (2004)
– Parallel Worlds: A Journey through Creation, Higher Dimensions, and the Future of the Cosmos (2004)
– Physics of the Impossible: A Scientific Exploration into the World of Phasers, Force Fields, Teleportation, and Time Travel (2008)
– Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 (2011)
– The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind (2014)

Also see this previous interview with Michio Kaku:

 

‘The Future of the Mind’ – Book on Amazon.

Many thanks to Think Inc. who brought Dr Kaku to Australia!

Subscribe to the Science, Technology & the Future YouTube Channel


Science, Technology & the Future

Michio Kaku – The Future of the Mind – Intelligence Enhancement & the Singularity

Scifuture interview with popular scientist Michio Kaku on the Scientific Quest to Understand, Enhance & Empower the Mind!

The audio of this interview is found here.

Dr. Michio Kaku advocates thinking about some of the radical Transhumanist ideas we all know and love – here he speaks on the frontiers of Neuroscience, Intelligence Enhancement, the Singularity, and his new book ‘The Future of the Mind’!

String theory stems from Albert Einstein's legacy: it seeks to combine the theory of general relativity with quantum mechanics, and some versions of it imply a multiverse of universes. String field theory then uses the mathematics of fields to put it all into perspective. Dr Kaku's goal is to unite the four fundamental forces of nature into one 'unified field theory', a theory that seeks to summarise all fundamental laws of the universe in one simple equation.

Note Scifuture did another interview with Michio Kaku – the article can be found here, audio can be found here, and the video can be found here.


‘The Future of the Mind’ – Book on Amazon.

Many thanks to Think Inc. who brought Dr Kaku to Australia!

Subscribe to the Science, Technology & the Future YouTube Channel


Science, Technology & the Future

The Singularity & Prediction – Can there be an Intelligence Explosion? – Interview with Marcus Hutter

Can there be an intelligence explosion? Can intelligence explode?
The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. What could it mean for intelligence to explode?
We need a more careful treatment of what intelligence actually is, separating a speed explosion from an intelligence explosion, comparing what super-intelligent participants and classical human observers might experience and do, discussing immediate implications for the diversity and value of life, considering possible bounds on intelligence, and contemplating intelligences right at the singularity.

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
– Irving John Good, 'Speculations Concerning the First Ultraintelligent Machine' (1965)
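Good's argument is a feedback loop: machine design is itself an intellectual activity, so design ability scales with intelligence and the returns compound. A toy iteration (my illustration, not a model from Good or Hutter) shows how everything hinges on how the per-generation gain scales with current intelligence:

    # Toy recursion (my illustration): i <- i + gain(i) each generation.
    def generations_to(threshold, gain, i0=1.0, max_gen=10_000):
        """Return the number of generations until i >= threshold, or None."""
        i = i0
        for n in range(1, max_gen + 1):
            i += gain(i)
            if i >= threshold:
                return n
        return None

    print(generations_to(1e6, lambda i: 1.0))        # constant gain: linear growth, None within 10,000 generations
    print(generations_to(1e6, lambda i: 0.1 * i))    # gain proportional to i: exponential growth, 145
    print(generations_to(1e6, lambda i: 0.1 * i**2)) # superlinear gain: explosive takeoff, 16

Constant returns never explode; once the gain scales with intelligence itself the takeoff is abrupt, which is the intuition the word 'explosion' is meant to capture.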

Paper: M. Hutter, 'Can Intelligence Explode?', Journal of Consciousness Studies, Vol. 19, No. 1–2 (2012), pp. 143–166.
http://www.hutter1.net/publ/singularity.pdf
http://arxiv.org/abs/1202.6177

See also:
http://2012.singularitysummit.com.au/2012/08/can-intelligence-explode/
http://2012.singularitysummit.com.au/2012/08/universal-artificial-intelligence/

Can Intelligence Explode? – Marcus Hutter at Singularity Summit Australia 2012

Abstract: The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. After a short introduction to this intriguing potential future, I will elaborate on what it could mean for intelligence to explode. In this course, I will (have to) provide a more careful treatment of what intelligence actually is, separate speed from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.

 


 

Slides (pdf): http://www.hutter1.net/publ/ssingularity.pdf
Slides (PowerPoint): http://www.hutter1.net/publ/ssingularity.ppsx
Paper: M. Hutter, 'Can Intelligence Explode?', Journal of Consciousness Studies, Vol. 19, No. 1–2 (2012), pp. 143–166.
http://www.hutter1.net/publ/singularity.pdf

Also see:
http://2012.singularitysummit.com.au/2012/08/can-intelligence-explode/
http://2012.singularitysummit.com.au/2012/08/universal-artificial-intelligence/
http://2012.singularitysummit.com.au/2012/08/panel-intelligence-substrates-computation-and-the-future/
http://2012.singularitysummit.com.au/2012/01/marcus-hutter-to-speak-at-the-singularity-summit-au-2012/
http://2012.singularitysummit.com.au/agenda

Marcus Hutter (born 1967) is a German computer scientist and professor at the Australian National University. Hutter was born and educated in Munich, where he studied physics and computer science at the Technical University of Munich. In 2000 he joined Jürgen Schmidhuber’s group at the Swiss Artificial Intelligence lab IDSIA, where he developed the first mathematical theory of optimal Universal Artificial Intelligence, based on Kolmogorov complexity and Ray Solomonoff’s theory of universal inductive inference. In 2006 he also accepted a professorship at the Australian National University in Canberra.

Hutter’s notion of universal AI describes the optimal strategy of an agent that wants to maximize its future expected reward in some unknown dynamic environment, up to some fixed future horizon. This is the general reinforcement learning problem. Solomonoff/Hutter’s only assumption is that the reactions of the environment in response to the agent’s actions follow some unknown but computable probability distribution.
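In Hutter's publications this agent (AIXI) is written as an expectimax over all programs consistent with the interaction history, weighted by a simplicity prior. A standard rendering of the equation (my transcription; U is a universal Turing machine, q a program of length ℓ(q), the a/o/r are actions, observations and rewards, and m is the horizon):

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} (r_k + \cdots + r_m) \sum_{q : U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The inner sum is the Solomonoff-style prior: shorter programs that reproduce the history so far receive exponentially more weight, which makes the "unknown but computable probability distribution" assumption formal.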


Professor Marcus Hutter

Research interests:

Artificial intelligence, Bayesian statistics, theoretical computer science, machine learning, sequential decision theory, universal forecasting, algorithmic information theory, adaptive control, MDL, image processing, particle physics, philosophy of science.

Bio:

Marcus Hutter is Professor in the Research School of Computer Science (RSCS) at the Australian National University in Canberra, Australia. He received his PhD and BSc in physics from the LMU in Munich and a Habilitation, MSc, and BSc in informatics from the TU Munich. Since 2000, his research at IDSIA and now ANU has centered around the information-theoretic foundations of inductive reasoning and reinforcement learning, which has resulted in 100+ publications and several awards. His book "Universal Artificial Intelligence" (Springer, EATCS, 2005) develops the first sound and complete theory of AI. He also runs the Human Knowledge Compression Contest (50,000€ H-prize).

Should We Fear or Welcome the Singularity? Nobel Week Dialogue 2015 – The Future of Intelligence

Should science and society welcome 'the singularity' – the idea of the hypothetical moment in time when artificial intelligence surpasses human intelligence?
The discussion has been growing for decades, institutes dedicated to the problem of AI friendliness have popped up, and more recently the ideas have found popular advocates. Certainly super-intelligent machines could help solve classes of problems that humans struggle with – but, if not designed well, they may cause more problems than they solve.

Is the question of fear or hope in AI a false dichotomy?

Ray Kurzweil

While Kurzweil agrees that AI risks are real, he argues that we already face risks involving biotechnology – I think Kurzweil believes we can solve the biotech threat and other risks through building superintelligence.

Stuart Russell believes that a) we should be exactly sure what we want before we let the AI genie out of the bottle, and b) it’s a technological problem in much the same way as the containment of nuclear fusion is a technological problem.

Max Tegmark says we should both welcome and fear the Technological Singularity, and shouldn't just bumble into it unprepared. All technologies have been double-edged swords – in the past we learned from mistakes (e.g. with out-of-control fires), but with AI we may only get one chance.

Harry Shum says we should be focussing on what we believe we can develop with AI in the next few decades. We find it difficult to talk about AGI. Most of the social fears are around killer robots.

Maggie Boden

Maggie Boden poses an audience question: how will AI cope with our lack of development in ethical and moral norms?

Stuart Russell answers that machines have to come to understand what human values are. If the first pseudo-general-purpose AIs don't get human values well enough, they may end up cooking the owner's cat – which could irreparably tarnish the AI and home-robot industry.

Kurzweil adds that human society is getting more ethical – it seems that statistically we are making ethical progress.

Max Tegmark

Max Tegmark brings up that intelligence is defined by the degree of ability to achieve goals – so if we are building highly intelligent AI, we can't ignore the question of what goals to give the system. We need to make AI systems understand what humans really want, not what they say they want.
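A toy illustration of that last point (my sketch, not an example from the panel): a perfect optimizer of a stated objective will happily take whatever actions the stated objective forgot to penalize.

    # Toy reward misspecification (my sketch). The stated reward counts only
    # coins; the intended reward also cares about the vase on the shortcut.
    plans = {
        "walk around the vase":  {"coins": 2, "vase_intact": True},
        "walk through the vase": {"coins": 3, "vase_intact": False},  # shortcut grabs an extra coin
        "stay put":              {"coins": 0, "vase_intact": True},
    }

    def stated(outcome):        # what we said: maximise coins
        return outcome["coins"]

    def intended(outcome):      # what we meant: coins, but don't break things
        return outcome["coins"] - (0 if outcome["vase_intact"] else 10)

    print(max(plans, key=lambda p: stated(plans[p])))    # 'walk through the vase'
    print(max(plans, key=lambda p: intended(plans[p])))  # 'walk around the vase'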

Harry Shum says that the important ethical questions for AI systems today concern data and user privacy.

Panelists: Harry Shum (Microsoft Research EVP of Tech), Max Tegmark (Cosmologist, MIT) Stuart Russell (Prof. of Computer Science, UC Berkeley) and Ray Kurzweil (Futurist, Google Director of Engineering). Moderator: Margaret Boden (Prof. of Cognitive Science, Uni. of Sussex).

This debate is from the 2015 edition of the meeting, held in Gothenburg, Sweden on 9 Dec.

Altered States of Consciousness through Technological Intervention

A mini-documentary on possible modes of being in the future – Ben Goertzel talks about the Singularity and exploring Altered States of Consciousness, Stelarc discusses Navigating Mixed Realities, Kent Kemmish muses on the paradox of strange futures, and Max More compares Transhumanism to Humanism


Starring: Ben Goertzel, Stelarc, Kent Kemmish, Max More
Edited: Adam Ford

Topics: Singularity, Transhumanism, and States of Consciousness
Thanks to NASA for some of the b-roll

 

Transcript

Ben Goertzel

It’s better perhaps to think of the singularity in terms of human experience. Right now due to the way our brains are built we have a few states of consciousness that follow us around every day.

There's the ordinary waking state of consciousness, there's various kinds of sleep, there's a flow state of consciousness that we get into when we're really into the work we're doing, or playing music and we're really into it. There are various enlightened states you can get into by meditating a really long time. The spectrum of states of consciousness that human beings can enter into is a tiny little fragment of all the possible ways of experience. When the singularity comes it's going to bring us a wild variety of states of consciousness, a wild variety of ways of thinking and feeling and experiencing the world.

Stelarc
Well I think we're expected to increasingly perform in mixed realities: sometimes we're biological bodies, sometimes we're mechanically augmented and accelerated, and other times we have to manage data streams in virtual systems. So we have to seamlessly slide between these three modes of operation, and engineering new, more intimate interfaces so we can do this more seamlessly is an important strategy.

Kent Kemmish
Plenty of scientists would say that it’s crazy and there’s no way, I guess we could have that debate. But they might agree with me that if it is crazy, it’s crazy because of how the world works socially and not because of how difficult it is intrinsically. It’s not crazy for scientific reasons; it’s crazy because the world is crazy.

Max More
I think that when people look at the future, if they do accept this idea that there are going to be drastic changes and great advances, they will necessarily try to fit that very complex, impossible-to-really-understand future into very familiar mental models, because they want to put things in boxes; they want to feel like they have some sort of grip on it. So I won't be surprised to see Christian transhumanists and Mormon transhumanists and even Buddhist transhumanists – every group will have some kind of set of ideas; they will gradually accept them, but they will make their future world fit with their pre-existing views as to how it will be.

And I think that the essence of transhumanism is not religious; it's really based on humanism – it's an extension of humanism, hence transhumanism. It's really based on ideas of reason and progress and enlightenment, and a kind of secularism. But that doesn't mean religion is incompatible with certain of the transhumanist ideas – of self-improvement, of enhancement. I think those are potentially compatible with at least non-fundamentalist forms of religion.

– Many thanks to Tom Richards for the transcription