Two of the world’s leading brain researchers will come together to discuss some of the latest international efforts to understand the brain. They will discuss two massive initiatives – the US-based Allen Institute for Brain Science and the European Human Brain Project. By combining neuroscience with the power of computing, both projects are harnessing the efforts of hundreds of neuroscientists in unprecedented collaborations aimed at unravelling the mysteries of the human brain.
This unique FREE public event, hosted by ABC Radio and TV personality Bernie Hobbs, will feature two presentations, one by each brain researcher, followed by an interactive discussion with the audience.
This is your chance to ask the big brain questions.
Monday, 3 April 2017 from 6:00 pm to 7:30 pm (AEST)
Melbourne Convention and Exhibition Centre, 2 Clarendon Street, South Wharf, VIC 3006, Australia (enter via the main Exhibition Centre entrance, opposite Crown Casino)
Professor Christof Koch
President and Chief Scientific Officer, Allen Institute for Brain Science, USA
Professor Koch leads a large-scale, 10-year effort to build brain observatories to map, analyse and understand the mouse and human cerebral cortex. His work integrates theoretical, computational and experimental neuroscience. Professor Koch pioneered the scientific study of consciousness with his long-time collaborator, the late Nobel laureate Francis Crick. Learn more about the Allen Institute for Brain Science and Christof Koch.
Professor Karlheinz Meier
Co-Director and Vice Chair of the Human Brain Project
Professor of Physics, University of Heidelberg, Germany
Professor Meier is a physicist working on unravelling theoretical principles of brain information processing and transferring them to novel computer architectures. He has led major European initiatives that combine neuroscience with information science. Professor Meier is a co-founder of the European Human Brain Project where he leads the research to create brain-inspired computing paradigms. Learn more about the Human Brain Project and Karlheinz Meier.
This event is brought to you by the Australian Research Council Centre of Excellence for Integrative Brain Function.
Discovering how the brain interacts with the world.
The ARC Centre of Excellence for Integrative Brain Function is supported by the Australian Research Council.
Event Description: The brain is a universe of 100 billion cells interacting through a constantly changing network of 1,000 trillion synapses. It runs on a power budget of 20 watts and holds an internal model of the world. Understanding our brain is among the key challenges for science, on an equal footing with understanding the genesis and fate of our universe. The lecture will describe how to build physical, neuromorphic models of brain circuits in silicon. Neuromorphic systems can be used to gain understanding of learning and development in biological brains, and as artificial neural systems for cognitive computing.
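To make the idea concrete, below is a minimal software sketch of the leaky integrate-and-fire dynamics that neuromorphic circuits typically emulate directly in silicon. This is an illustration only, not code from the lecture, and all parameter values are assumptions chosen for demonstration.

```python
# Leaky integrate-and-fire (LIF) neuron, integrated with the forward Euler method.
# Membrane equation: dV/dt = (V_rest - V + R_m * I_ext) / tau_m
tau_m = 20.0       # membrane time constant (ms)
v_rest = -65.0     # resting potential (mV)
v_reset = -70.0    # post-spike reset potential (mV)
v_thresh = -50.0   # spike threshold (mV)
r_m = 10.0         # membrane resistance (MOhm)
i_ext = 1.8        # constant input current (nA); strong enough to cross threshold
dt, t_max = 0.1, 200.0  # time step and simulation duration (ms)

v = v_rest
spike_times = []
for step in range(int(t_max / dt)):
    v += (dt / tau_m) * (v_rest - v + r_m * i_ext)  # Euler update of the membrane
    if v >= v_thresh:                 # threshold crossing: emit a spike and reset
        spike_times.append(step * dt)
        v = v_reset
print(f"{len(spike_times)} spikes in {t_max:.0f} ms")
```

A neuromorphic chip implements these same dynamics with analogue circuits rather than software time-stepping, which is part of how such systems reach power budgets far below conventional simulation.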
Date: Wednesday 5 April 2017 6-7pm
Venue: Monash Biomedical Imaging, 770 Blackburn Road, Clayton
Karlheinz Meier (born 1955) received his PhD in physics in 1984 from Hamburg University in Germany. He has more than 25 years of experience in experimental particle physics, with contributions to 4 major experiments at particle colliders at DESY in Hamburg and CERN in Geneva. After fellowships and scientific staff positions at CERN and DESY, he was appointed full professor of physics at Heidelberg University in 1992. In Heidelberg he co-founded the Kirchhoff Institute for Physics and a laboratory for the development of microelectronic circuits for science experiments. For the ATLAS experiment at the Large Hadron Collider (LHC) he led a 10-year effort to design and build a large-scale electronic data processing system providing on-the-fly data reduction by 3 orders of magnitude, enabling, among other achievements, the discovery of the Higgs boson in 2012. In particle physics he took a leading international role in shaping the future of the field as president of the European Committee for Future Accelerators (ECFA).
Around 2005 he gradually shifted his scientific interests towards large-scale electronic implementations of brain-inspired computer architectures. His group pioneered several innovations in the field, such as the conception of a platform-independent description language for neural circuits (PyNN), time-compressed mixed-signal neuromorphic computing systems, and wafer-scale integration for their implementation. He led 2 major European initiatives, FACETS and BrainScaleS, both of which demonstrated the rewards of interdisciplinary collaboration between neuroscience and information science. In 2009 he was one of the initiators of the European Human Brain Project (HBP), which was approved in 2013. In the HBP he leads the subproject on neuromorphic computing, with the goal of establishing brain-inspired computing paradigms as research tools for neuroscience and as generic hardware systems for cognitive computing – a new way of processing and interpreting the spatio-temporal structure of large data volumes. In the HBP he is a member of the project directorate and vice-chair of the science and infrastructure board.
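PyNN, mentioned above, is an openly available Python API for describing spiking neural circuits independently of the simulator or neuromorphic hardware that runs them. The fragment below is a minimal sketch of such a description, assuming the NEST software backend is installed; the population sizes, rates, weights and delays are illustrative assumptions.

```python
import pyNN.nest as sim  # swap for pyNN.neuron or a neuromorphic backend

sim.setup(timestep=0.1)  # ms

# A small excitatory/inhibitory network of conductance-based IF neurons.
exc = sim.Population(80, sim.IF_cond_exp(), label="exc")
inh = sim.Population(20, sim.IF_cond_exp(), label="inh")
noise = sim.Population(80, sim.SpikeSourcePoisson(rate=20.0))  # background drive

sim.Projection(noise, exc, sim.OneToOneConnector(),
               sim.StaticSynapse(weight=0.01, delay=1.0))
sim.Projection(exc, inh, sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.005, delay=1.0))
sim.Projection(inh, exc, sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.05, delay=1.0),
               receptor_type="inhibitory")

exc.record("spikes")
sim.run(1000.0)          # one second of biological time
data = exc.get_data()    # recorded spike trains as a Neo data structure
sim.end()
```

Because the description is platform-independent, the same script can in principle target software simulators or wafer-scale neuromorphic systems such as BrainScaleS.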
Karlheinz Meier engages in public dissemination of science. His YouTube channel of physics videos has received more than a million hits, and he delivers regular lectures to the public about his research and general science topics.
Sam Harris (author of The Moral Landscape and host of the Waking Up podcast) discusses the need for AI safety – while fun to think about, we are unable to “marshal an appropriate emotional response” to improvements in AI and automation and the prospect of dangerous AI; it is a failure of intuition that we respond to it as we would to a far-fetched sci-fi doom scenario.
Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.
Anders gives a short tutorial on the Singularity – clearing up confusion and highlighting important aspects of the Technological Singularity and related ideas, such as accelerating change, horizons of predictability, self-improving artificial intelligence, and the intelligence explosion.
Points covered in the tutorial:
- The Mathematical Singularity
- The Technological Singularity: A Horizon of predictability
- Confusion Around The Technological Singularity
- Drivers of Accelerated Growth
- Technology Feedback Loops
- A History of Coordination
- Technological Inflection Points
- Difficulty of seeing what happens after an Inflection Point
- The Intelligence Explosion
- An Optimisation Power Applied To Itself
- Group Minds
- The HIVE Singularity: A Networked Global Mind
- The Biointelligence explosion
- Humans are difficult to optimise
An Overview of Models of the Technological Singularity
See Anders’ paper ‘An overview of models of technological singularity’.
This paper reviews different definitions and models of technological singularity. The models range from conceptual sketches to detailed endogenous growth models, as well as attempts to fit empirical data to quantitative models. Such models are useful for examining the dynamics of the world-system and possible types of future crisis points where fundamental transitions are likely to occur. Current models suggest that, generically, even small increasing returns tend to produce radical growth. If mental capital becomes copyable (as would be the case for AI or brain emulation), extremely rapid growth would also become likely.
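To see why even small increasing returns produce radical growth, consider the toy model dx/dt = x^(1+eps): for any eps > 0 the solution reaches infinity in finite time t* = 1/(eps * x0^eps), whereas eps = 0 gives ordinary exponential growth. The numerical sketch below is my own illustration of this point, not code from the paper.

```python
# Toy endogenous growth model with increasing returns: dx/dt = x**(1 + eps).
def blow_up_time(x0: float, eps: float) -> float:
    """Analytic finite-time singularity of dx/dt = x**(1+eps) for eps > 0."""
    return 1.0 / (eps * x0 ** eps)

def time_to_exceed(x0: float, eps: float, cap: float = 1e12, dt: float = 1e-4) -> float:
    """Euler-integrate the model and return the time at which x exceeds cap."""
    x, t = x0, 0.0
    while x < cap:
        x += dt * x ** (1.0 + eps)
        t += dt
    return t

for eps in (0.05, 0.1, 0.2):
    print(f"eps={eps}: analytic blow-up at t* = {blow_up_time(1.0, eps):.1f}, "
          f"time to exceed 1e12 ~ {time_to_exceed(1.0, eps):.1f}")
```

Even eps = 0.05, a barely super-linear return, yields a finite-time singularity; this is the generic behaviour the abstract refers to.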
A list of models described in the paper:
A. Accelerating change
Exponential or superexponential technological growth (with linked economic growth and social change). (Ray Kurzweil (Kur05), John Smart (Smang))
B. Self improving technology
Better technology allows faster development of new and better technology. (Flake (Fla06))
C. Intelligence explosion
Smarter systems can improve themselves, producing even more intelligence in a strong feedback loop. (I.J. Good (Goo65), Eliezer Yudkowsky)
D. Emergence of superintelligence
(Singularity Institute)
E. Prediction horizon
Rapid change or the emergence of superhuman intelligence makes the future impossible to predict from our current limited knowledge and experience. (Vinge (Vin93))
F. Phase transition
The singularity represents a shift to new forms of organisation. This could be a fundamental difference in kind, such as humanity being succeeded by posthuman or artificial intelligences, a punctuated equilibrium transition, or the emergence of a new meta-system level. (Teilhard de Chardin, Valentin Turchin (Tur77), Heylighen (Hey07))
G. Complexity disaster
Increasing complexity and interconnectedness causes increasing payoffs, but increases instability. Eventually this produces a crisis, beyond which point the dynamics must be different. (Sornette (JS01), West (BLH+07))
H. Inflexion point
Large-scale growth of technology or economy follows a logistic growth curve. The singularity represents the inflexion point where change shifts from acceleration to deceleration. (Extropian FAQ, T. Modis (Mod02))
I. Infinite progress
The rate of progress in some domain goes to infinity in finite time. (Few, if any, hold this to be plausible.)
Many thanks for watching!
Consider supporting SciFuture by:
a) Subscribing to the YouTube channel:
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media SciFuture creates
Awesome to have Stuart Russell discussing AI Safety – a very important topic. For too long people have associated the idea of AI safety with Terminator; unfortunately, the human condition seems such that people often don’t give themselves permission to take non-mainstream ideas seriously unless they see a tip of the hat from an authority figure.
The lecture was presented at the 2016 Colloquium Series on Robust and Beneficial AI (CSRBAI), hosted by the Machine Intelligence Research Institute (MIRI) and Oxford’s Future of Humanity Institute (FHI). During the presentation Stuart brings up a nice quote by Norbert Wiener, and he has made the same argument in writing:
The field [of AI] has operated for over 50 years on one simple assumption: the more intelligent, the better. To this must be conjoined an overriding concern for the benefit of humanity. The argument is very simple:
1. AI is likely to succeed.
2. Unconstrained success brings huge risks and huge benefits.
3. What can we do now to improve the chances of reaping the benefits and avoiding the risks?
Some organizations are already considering these questions, including the Future of Humanity Institute at Oxford, the Centre for the Study of Existential Risk at Cambridge, the Machine Intelligence Research Institute in Berkeley, and the Future of Life Institute at Harvard/MIT. I serve on the Advisory Boards of CSER and FLI.
Just as nuclear fusion researchers consider the problem of containment of fusion reactions as one of the primary problems of their field, it seems inevitable that issues of control and safety will become central to AI as the field matures. The research questions are beginning to be formulated and range from highly technical (foundational issues of rationality and utility, provable properties of agents, etc.) to broadly philosophical.
– Stuart Russell (Quote Source)
One can think of Existential Risk as a subcategory of Global Catastrophic Risk – while GCRs are really bad, civilization has the potential to recover from a global catastrophic disaster.
An existential risk is one from which there is no chance of recovery. An example of a disaster that fits this category is human extinction, which eliminates the possibility of future [human] lives worth living. Theories of value on which even relatively small reductions in net existential risk have enormous expected value mostly fall under population ethics, taking an average or total utilitarian view of the well-being of the future of life in the universe. Since we haven’t seen any convincing evidence of life outside Earth’s gravity well, it may be that there is no advanced, technologically capable life elsewhere in the observable universe. If we value lives worth living, and lots of lives worth living, we might also value filling the uninhabited parts of the universe with lives worth living; arguably we need an advanced, technologically able civilization to achieve this. Hence, if humans become extinct, it may be that evolution will never again produce a life form capable of escaping the gravity well and colonizing the universe with valuable life.
Here we focus on the reasons to be concerned about existential risk related to machine intelligence.
Say machine intelligence is created with a theory of value outside of, contradictory to, or simply different enough from one that values human existence, or the existence of valuable life in the universe. Also imagine that this machine intelligence could act on its values in an exacting manner: it may cause humanity to become extinct on purpose, or as a side effect of implementing its values.
The paper ‘Existential Risk Prevention as Global Priority’ by Nick Bostrom clarifies the concept of existential risk further:
Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value. Despite their importance, issues surrounding human-extinction risks and related hazards remain poorly understood. In this paper, I clarify the concept of existential risk and develop an improved classification scheme. I discuss the relation between existential risks and basic issues in axiology, and show how existential risk reduction (via the maxipok rule) can serve as a strongly action-guiding principle for utilitarian concerns. I also show how the notion of existential risk suggests a new way of thinking about the ideal of sustainability. http://www.existential-risk.org
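As a rough illustration of the expected-value point in the abstract: multiplying an estimate of potential future lives by even a minuscule reduction in net extinction risk yields a huge expected number of lives. The figures below are illustrative assumptions in the spirit of Bostrom’s paper, not claims made by this post.

```python
# Hedged back-of-the-envelope expected-value calculation.
future_lives = 1e16      # assumed conservative estimate of potential future lives
risk_reduction = 1e-8    # reduce net existential risk by a millionth of a percentage point
expected_lives = future_lives * risk_reduction
print(f"Expected lives gained: {expected_lives:,.0f}")  # 100,000,000
```

On these assumptions, even a one-millionth-of-one-percentage-point reduction in existential risk is worth of the order of a hundred million lives, which is why such theories of value treat existential risk reduction as overwhelmingly important.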
Interview with Nick Bostrom on Machine Intelligence and XRisk
I had the pleasure of doing an interview with Oxford philosopher Nick Bostrom on XRisk:
Transcription of interview:
In the last couple of years we’ve been focusing quite heavily on machine intelligence, partly because it seems to raise some significant existential risks down the road, and partly because relatively little attention has been given to this risk. So when we are prioritizing what we want to spend our time researching, one variable that we take into account is: how important is the topic that we could research? But another is: how many other people are already studying it? Because the more people who are already studying it, the smaller the difference that having a few extra minds focusing on that topic will make.
So, say, the topic of peace and war and how you can try to avoid international conflict is a very important topic – and many existential risks will be reduced if there is more global cooperation.
However, it is also hard to see how a very small group of people could make a substantial difference to today’s risk of arms races and wars. There are big interests involved, and so many people already working on disarmament and peace and/or military strength, that it’s an area where it would be great to make a change, but hard to do so with a small number of people. By contrast, with something like the risk from machine intelligence and the risk of superintelligence, only a relatively small number of people have been thinking about it, and there might be some low-hanging fruit there – some insights that might make a big difference. So that’s one of the criteria.
Now we are also looking at other existential risks, and at things other than existential risk – like trying to get a better understanding of what humanity’s situation in the world is. So we have been thinking some about the Fermi Paradox, for example, and about methodological tools that you need, like observation selection theory, to reason about these things. And to some extent also the more near-term impacts of technology, and of course the opportunities involved in all of this – it is always worth reminding oneself that although enormous technological powers will pose great new dangers, including existential risks, they also of course make it possible to achieve an enormous amount of good.
So one should bear in mind the opportunities as well that are unleashed by technological advance.
About Professor Nick Bostrom
Director & James Martin Research Fellow
Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009), and a forthcoming book on Superintelligence. He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.
He is best known for his work in five areas: (i) the concept of existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) transhumanism, including related issues in bioethics and on consequences of future technologies; and (v) foundations and practical implications of consequentialism. He is currently working on a book on the possibility of an intelligence explosion and on the existential risks and strategic issues related to the prospect of machine superintelligence.
In 2009, he was awarded the Eugene R. Gannon Award (one person selected annually, worldwide, from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). He has been listed in the FP 100 Global Thinkers list, Foreign Policy magazine’s list of the world’s top 100 minds. His writings have been translated into more than 21 languages, and there have been some 80 translations or reprints of his works. He has done more than 500 interviews for TV, film, radio, and print media, and he has addressed academic and popular audiences around the world.
Personal Web: http://www.nickbostrom.com
Also consider joining the Facebook Group on Existential Risk: https://www.facebook.com/groups/ExistentialRisk
“Thinking Machines in the Physical World” invites cross-disciplinary conversations about the opportunities and threats presented by advances in cognitive computing:
– What concrete, real-world possibilities does intelligence-focused technology open up?
– What potential effects will “smart computers” exert on labor and jobs around the globe?
– What are the broader social implications of these changes?
When: Wednesday, July 13, 2016 8:30 AM until Friday ~6pm (then dinner)
Where: Melbourne Uni Law School Building, Level 10, 185 Pelham Street, Carlton
Prof Brian Anderson – Distinguished Professor at ANU College of Engineering and Computer Science.
Dr James Hughes – Executive Director of the Institute for Ethics and Emerging Technologies.
Prof M. Vidyasagar – Cecil & Ida Green Chair in Systems Biology Science
Prof Judy Wajcman – Anthony Giddens Professor of Sociology, London School of Economics
Dr. Juerg von Kaenel, IBM Research – Cognitive Computing – IBM Watson
Professor Graeme Clark AC, Laureate Professor Emeritus, says: “It gives me great pleasure to have the opportunity to welcome your interest in the work of Norbert Wiener and invite you to Melbourne to participate in this important conference.”
Official Website: http://21stcenturywiener.org/
Facebook Event: https://www.facebook.com/events/625367860953411/
– long term impacts of various forms of utilitarianism
– Artificial Intelligence
– Reducing suffering in the long run and in the short term
– Practical ethics
– Pre-implantation genetic screening to reduce disease and low mood
– Lives today are worth the same as lives in the future – though uncertainty must be brought to bear in deciding how one weighs up the importance of life
– The Hedonistic Imperative and how people react to it
- Correlation of high hedonic set points with productivity
– existential risks and global catastrophic risks
– Closing factory farms
- Veganism and reducetarianism
- Red meat vs white meat – many more chickens than cattle are killed per ton of meat (see the rough calculation after this list)
– Valence research
- Should one eliminate suffering? And should we also eliminate emotions of happiness?
- How can we answer the question of to what extent suffering is present in different life forms (like insects)?
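As flagged in the red meat vs white meat item above, here is a back-of-the-envelope version of that comparison; the per-animal meat yields are rough assumptions chosen for illustration only.

```python
# Rough comparison of animals killed per tonne (1000 kg) of meat, assumed yields.
meat_per_chicken_kg = 1.7   # assumed edible yield per chicken
meat_per_cow_kg = 250.0     # assumed edible yield per cow
tonne_kg = 1000.0

print(f"Chickens per tonne: {tonne_kg / meat_per_chicken_kg:.0f}")  # ~588
print(f"Cattle per tonne:   {tonne_kg / meat_per_cow_kg:.1f}")      # 4.0
```

On these assumptions, replacing a tonne of beef with chicken multiplies the number of animals killed by roughly two orders of magnitude.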
Many thanks for watching!
– Support me via Patreon: https://www.patreon.com/scifuture
– Please Subscribe to this Channel: http://youtube.com/subscription_cente…
– Science, Technology & the Future website: http://scifuture.org
In an interview last year, I had a discussion with John Danaher on the Hedonistic Imperative & Superintelligence. A concern he has with HI is that it denies or de-emphasises some kind of moral agency. In moral theory there is a distinction between moral agents (responsible actors able to make moral decisions, influence the direction of moral progress, shape their future, and owe duties to others) and moral patients, who may be deemed to have limited or no grounds for moral agency/autonomy/responsibility – they are simply recipients of moral benefits. In contrast to humans, animals could be classified as moral patients (see the Stanford writing on Grounds for Moral Status).
As time goes on, the notion of strong artificial intelligence leading to Superintelligence (which may herald something like an Intelligence Explosion) and ideas like the Hedonistic Imperative become less like sensational sci-fi concepts and more like visions of realizable eventualities. Thinking about moral endpoints gives me a paradoxical feeling of triumph and disempowerment.
John’s concern is that ensuring the well-being of humans (conscious entities) is consistent with denying their moral agency – minimizing their capacity to act – and that there is a danger that the outcome of HI or an Intelligence Explosion may be sentient life made very happy forever, but unable to make choices: a future entirely based on bliss that ignores other aspects of what makes for a valuable or worthwhile existence.
So even if we have a future where a) we are made very happy and b) we are subject to a wide variety of novelty (which I argue for in Novelty Utilitarianism), without some kind of self-determination we may not be able to enjoy part of what arguably makes for a worthwhile existence.
If the argument for moral agency is completely toppled by the argument against free will, then I can see why there would be no reason for it, and bliss/novelty may be enough – though I personally haven’t been convinced that this is the case.
Also, the idea that moral agency and novelty should be ranked as auxiliary aspects to the main imperative of reducing suffering/increasing bliss seems problematic – I get the sense that they (agency/novelty) could easily be swapped out by most non-optimal moral agents in the quest for less suffering and more bliss, which is troublesome.
The idea that, upon evaluating grounds for moral status, our ethical/moral quotient may not match or even come close to the potential ethical force of a superintelligence is also troubling. If we are serious about the best ethical outcomes, when the time comes, should we be committed to resigning all moral agency to agents that are more adept at producing peak moral outcomes?
Is it really possible for non-optimal agents to have a meaningful moral input in a universe where they’ve been completely outperformed by moral machines? Is a life of novelty & bliss the most optimal outcome we can hope for?
There probably should be some more discussion on trade-offs between moral agency, peak experience and novelty.
Discussion in this video here starts at 24:02
Below is the whole interview with John Danaher:
The human mind is one of the great mysteries in the Universe, and arguably the most interesting phenomenon to study.