Posts

Extending Life is Not Enough

Dr Randal Koene covers the motivation for human technological augmentation and reasons to go beyond biological life extension.

“Competition is an inescapable occurrence in the animate and even in the inanimate universe. To give our minds the flexibility to transfer and to operate in different substrates bestows upon our species the most important competitive advantage.” I am a neuroscientist and neuroengineer, currently the Science Director at Foundation 2045 and the Lead Scientist at Kernel. I head carboncopies.org, the outreach and roadmapping organization for the development of substrate-independent minds (SIM), and previously participated in the ambitious and fascinating efforts of the nanotechnology startup Halcyon Molecular in Silicon Valley.

Slides of talk online here
Video of Talk:

Points discussed in the talk:
1. Biological Life-Extension is Not Enough (Randal A. Koene, Carboncopies.org)
2. PERSONAL
3. No one wants to live longer just to live longer. Motivation informs Method.
4. Having an Objective, a Goal, requires that you have some notion of success.
5. Creating (intelligent) machines that have the capabilities we do not — is not as good as being able to experience them ourselves… Imagine… creating/playing music. Imagine… being the kayak. Imagine… perceiving the background radiation of the universe.
6. Is being out of the loop really your goal?
7. Near-term goals: Extended lives without expanded minds are in conflict with creative development.
8. Social
9. Gene survival is extremely dependent on an environment — it is unlikely to survive many changes. Worse… gene replication does not sustain that which we care most about!
10. Is CTGGAGTAC better than GTTGACTGAC? We are vessels for that game — but for the last 10,000 years something has been happening!
11. Certain future experiences are desirable, others are not — these are your perspectives, the memes you champion… Death keeps stealing our champions, our experts.
12. Too early to do uploading? – No! The big perspective is relevant now. We don’t like myopic thinking in our politicians; let’s not be myopic about world issues ourselves.
13. SPECIES
14. Life-extension in biology may increase the fragility of our species & civilization… More people? – Resources. Fewer births? – Fewer novel perspectives. Expansion? – Environmental limitation.
15. Biological life-extension within the same evolutionary niche = further specialization to the same performance: “over-training” in conflict with generalization.
16. Aubrey de Grey: Ultimately, desires “uploading”
17. TECHNICAL
18. Significant biological life-extension is incredibly difficult and beset by threats. Reality vs. popular perception.
19. Life-extension and Substrate-Independence are two different objectives
20. Developing out of a “catchment area” (S. Gildert) may demand iterations of exploration — and exploration involves risk. Hard-wired delusions and drives. What would an AGI do? Which types of AGI would exist in the long run?
21. “Uploading” is just one step of many — but a necessary step — for a truly advanced species
22. Thank You — carboncopies.org — randal.a.koene@carboncopies.org

http://www.carboncopies.org/singularity-summit-australia-2012
http://2012.singularitysummit.com.au/2012/11/randal-koene-extending-life-is-not-enough/

There is a short promo-interview for the Singularity Summit AU 2012 conference that Adam Ford did with Dr. Koene, though unfortunately the connection was a bit unreliable, which is noticeable in the video:

Most of those videos are available through the SciFuture YouTube channel: http://www.youtube.com/user/TheRationalFuture


Can Intelligence Explode? – Marcus Hutter at Singularity Summit Australia 2012

Abstract: The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. After a short introduction to this intriguing potential future, I will elaborate on what it could mean for intelligence to explode. In this course, I will (have to) provide a more careful treatment of what intelligence actually is, separate speed from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.

Slides (pdf): http://www.hutter1.net/publ/ssingularity.pdf
Slides (PowerPoint): http://www.hutter1.net/publ/ssingularity.ppsx
Paper: M. Hutter, “Can Intelligence Explode?”, Journal of Consciousness Studies, Vol. 19, No. 1–2 (2012), pp. 143–166.
http://www.hutter1.net/publ/singularity.pdf

Also see:
http://2012.singularitysummit.com.au/2012/08/can-intelligence-explode/
http://2012.singularitysummit.com.au/2012/08/universal-artificial-intelligence/
http://2012.singularitysummit.com.au/2012/08/panel-intelligence-substrates-computation-and-the-future/
http://2012.singularitysummit.com.au/2012/01/marcus-hutter-to-speak-at-the-singularity-summit-au-2012/
http://2012.singularitysummit.com.au/agenda

Marcus Hutter (born 1967) is a German computer scientist and professor at the Australian National University. Hutter was born and educated in Munich, where he studied physics and computer science at the Technical University of Munich. In 2000 he joined Jürgen Schmidhuber’s group at the Swiss Artificial Intelligence lab IDSIA, where he developed the first mathematical theory of optimal Universal Artificial Intelligence, based on Kolmogorov complexity and Ray Solomonoff’s theory of universal inductive inference. In 2006 he also accepted a professorship at the Australian National University in Canberra.

Hutter’s notion of universal AI describes the optimal strategy of an agent that wants to maximize its future expected reward in some unknown dynamic environment, up to some fixed future horizon. This is the general reinforcement learning problem. Solomonoff/Hutter’s only assumption is that the reactions of the environment in response to the agent’s actions follow some unknown but computable probability distribution.
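This setup can be written down explicitly. The following is a rough sketch of the AIXI action-selection rule from Hutter’s universal AI work (notation may differ slightly from his papers): at cycle $k$, given the past actions $a_1 \ldots a_{k-1}$ and percepts (observation–reward pairs) $o_1 r_1 \ldots o_{k-1} r_{k-1}$, the agent picks the action maximizing total reward up to horizon $m$, averaging over every program $q$ for a universal Turing machine $U$ that is consistent with the interaction history, weighted by its Solomonoff prior $2^{-\ell(q)}$:

```latex
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
      \bigl( r_k + \cdots + r_m \bigr)
      \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here $\ell(q)$ is the length of program $q$ in bits, so shorter (simpler) environment models dominate the mixture — this is where Kolmogorov complexity and Solomonoff induction enter the definition.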


Professor Marcus Hutter

Research interests:

Artificial intelligence, Bayesian statistics, theoretical computer science, machine learning, sequential decision theory, universal forecasting, algorithmic information theory, adaptive control, MDL, image processing, particle physics, philosophy of science.

Bio:

Marcus Hutter is Professor in the Research School of Computer Science (RSCS) at the Australian National University in Canberra, Australia. He received his PhD and BSc in physics from the LMU in Munich and a Habilitation, MSc, and BSc in informatics from the TU Munich. Since 2000, his research at IDSIA and now ANU has centered on the information-theoretic foundations of inductive reasoning and reinforcement learning, which has resulted in 100+ publications and several awards. His book “Universal Artificial Intelligence” (Springer, EATCS, 2005) develops the first sound and complete theory of AI. He also runs the Human Knowledge Compression Contest (the 50,000€ Hutter Prize).

AGI Progress & Impediments – Progress in Artificial Intelligence Panel

Panelists: Ben Goertzel, David Chalmers, Steve Omohundro, James Newton-Thomas – held at the Singularity Summit Australia in 2011

Panelists discuss approaches to AGI, progress and impediments now and in the future.
Ben Goertzel:
Two routes. (1) Brain emulation: there is a broad-level roadmap for simulation, but the bottleneck is a lack of imaging technology — we don’t know what level of precision we need to reverse-engineer biological intelligence (Ed Boyden works on optimal brain imaging). (2) Not by brain emulation (engineering / computer science / cognitive science): here the bottleneck is funding. People in the field believe they know how to do it; to prove this, they need to integrate their architectures, which looks like a big project. It takes a lot of money, though not as much as something like Microsoft Word.

David Chalmers (time 03:42):
We don’t know which of the two approaches will succeed, though the form the singularity takes will likely depend on the approach we use to build AGI. We don’t understand the theory yet. Most don’t think we will have a perfect molecular scanner that scans the brain and its chemical constituents. Twenty-five years ago David Chalmers worked in Douglas Hofstadter’s AI lab, but his expertise in AI is now out of date. Anyone trying to reach human-level AI by brute force or through cognitive psychology knows that the cognitive science is not in very good shape. A third approach is a hybrid: rough brain augmentation (through technology we are already using, like iPads and computers), technological extension, and uploading. If brain augmentation through technology and uploading is the first step toward a singularity, then humans — along with humanity’s values — are included in the equation, which may help shape a singularity with those values.

Steve Omohundro (time 08:08):
Early in the history of AI there was a distinction between the Neats and the Scruffies. John McCarthy (Stanford AI Lab) believed in mathematically precise logical representations; this shaped a lot of what Steve thought about how programming should be done. Marvin Minsky (MIT AI Lab) believed in exploring neural nets and self-organising systems — throwing things together to see how they self-organise into intelligence. Both approaches are needed: the logical, mathematically precise, neat approach, and the probabilistic, self-organising, fuzzy, learning approach, the scruffy. They have to come together. Theorem proving without any explorative aspect probably won’t succeed, and purely neural-net-based simulations can’t represent semantics well; we need to combine systems with full semantics and systems with the ability to adapt to complex environments.

James Newton-Thomas (time 09:57)
James has been playing with neural nets and has been disappointed with them; he thinks augmentation is the way forward. The AI problem will be easier to solve if we are smarter when we solve it. Conferences such as this help infuse us with a collective empowerment of individuals. There is an impediment: we are already being dehumanised by our iPads, where the reason we have a conversation with others is the fact of being part of a group, not the information that could be looked up on an iPad. We need to be careful in our approach so that we maintain our humanity while gaining the advantages of augmentation.

General Discussion (time 12:05):
David Chalmers: We are already becoming cyborgs in a sense by interacting with technology in our world; the more literal cyborg approach is what we are working on now, though the technology is not yet commercialized to the point of allowing, even in principle, a strong literal cyborg approach. Ben Goertzel: We could progress with some form of brain vocalization (picking up words directly from the brain), allowing us to think a Google query and have the results added directly to our minds — bypassing our low-bandwidth communication and getting at the information directly in our heads. To do all this …
Steve Omohundro: EEG is gaining a lot of interest from the Quantified Self movement — brain interfaces to help people measure things about their bodies (though the hardware is not that good yet).
Ben Goertzel: BCIs are used in video games and can detect whether you are aroused and paying attention. The resolution is very coarse, though — it is hard to get fine-grained brain-state information through the skull. Cranial jacks will get more information. Legal systems are an impediment.
James NT: Allan Snyder uses time-varying magnetic fields in helmets to shut down certain areas of the brain, which effectively makes people smarter in narrower domains of skill. This can provide an idiot-savant ability at the cost of the ability to generalize: a brain that becomes too specific at one task does so at the cost of others — the process of generalization.

Ben Goertzel, David Chalmers, Steve Omohundro – A Thought Experiment

The Future of Life in the Universe – Lawrence Krauss at the Singularity Summit Australia 2011

Prof. Lawrence M. Krauss is an internationally known theoretical physicist with wide research interests, including the interface between elementary particle physics and cosmology, where his studies include the early universe, the nature of dark matter, general relativity and neutrino astrophysics. He has investigated questions ranging from the nature of exploding stars to the origin of all mass in the universe. He was born in New York City and moved shortly thereafter to Toronto, Canada, where he grew up. He received undergraduate degrees in both Mathematics and Physics at Carleton University, and his Ph.D. in Physics from the Massachusetts Institute of Technology (1982), then joined the Harvard Society of Fellows (1982–85). He joined the faculty of the departments of Physics and Astronomy at Yale University as assistant professor in 1985, and associate professor in 1988. In 1993 he was named the Ambrose Swasey Professor of Physics, Professor of Astronomy, and Chairman of the department of Physics at Case Western Reserve University. He served in the latter position for 12 years, until 2005. During this period he built up the department, which was ranked among the top 20 Physics Graduate Research Programs in the country in a 2005 national ranking. Among the major new initiatives he spearheaded are the creation of one of the top particle astrophysics experimental and theoretical programs in the US, and the creation of a groundbreaking Masters Program in Physics Entrepreneurship. In 2002, he was named Director of the Center for Education and Research in Cosmology and Astrophysics at Case.
Video of talk:

Videoed at the Singularity Summit Australia 2011: http://2011.singularitysummit.com.au

Lawrence Krauss - Singularity Summit 2011

Lawrence Krauss – the Universe is Really Really Big!