Posts

Amazing Progress in Artificial Intelligence – Ben Goertzel

At a recent conference in Beijing (the Global Innovators Conference) I did yet another video interview with the legendary AGI guru Ben Goertzel. This is the first part of the interview, where he talks about some of the ‘amazing’ progress in AI over recent years – including DeepMind’s AlphaGo sealing a 4-1 victory over Go grandmaster Lee Sedol, progress in hybrid AI architectures (Deep Learning, Reinforcement Learning, etc.), and interesting academic AI research being taken up by tech giants – before closing with some sobering remarks on the limitations of deep neural networks.

The future of neuroscience and understanding the complexity of the human mind – Brains and Computers

Two of the world’s leading brain researchers will come together to discuss some of the latest international efforts to understand the brain. They will discuss two massive initiatives – the US-based Allen Institute for Brain Science and the European Human Brain Project. By combining neuroscience with the power of computing, both projects are harnessing the efforts of hundreds of neuroscientists in unprecedented collaborations aimed at unravelling the mysteries of the human brain.

This unique FREE public event, hosted by ABC Radio and TV personality Bernie Hobbs, will feature presentations by each of the brain researchers, followed by an interactive discussion with the audience.

This is your chance to ask the big brain questions.

[Event Registration Page] | [Meetup Event Page]

ARC Centre of Excellence for Integrative Brain Function

Monday, 3 April 2017 from 6:00 pm to 7:30 pm (AEST)

Melbourne Convention and Exhibition Centre
2 Clarendon Street
enter via the main Exhibition Centre entrance, opposite Crown Casino
South Wharf, VIC 3006 Australia

Professor Christof Koch
President and Chief Scientific Officer, Allen Institute for Brain Science, USA

Professor Koch leads a large-scale, 10-year effort to build brain observatories to map, analyse and understand the mouse and human cerebral cortex. His work integrates theoretical, computational and experimental neuroscience. Professor Koch pioneered the scientific study of consciousness with his long-time collaborator, the late Nobel laureate Francis Crick. Learn more about the Allen Institute for Brain Science and Christof Koch.

Professor Karlheinz Meier
Co-Director and Vice Chair of the Human Brain Project
Professor of Physics, University of Heidelberg, Germany

Professor Meier is a physicist working on unravelling theoretical principles of brain information processing and transferring them to novel computer architectures. He has led major European initiatives that combine neuroscience with information science. Professor Meier is a co-founder of the European Human Brain Project where he leads the research to create brain-inspired computing paradigms. Learn more about the Human Brain Project and Karlheinz Meier.


This event is brought to you by the Australian Research Council Centre of Excellence for Integrative Brain Function.

Discovering how the brain interacts with the world.

The ARC Centre of Excellence for Integrative Brain Function is supported by the Australian Research Council.

Building Brains – How to build physical models of brain circuits in silicon

Event Description: The brain is a universe of 100 billion cells interacting through a constantly changing network of 1000 trillion synapses. It runs on a power budget of 20 watts and holds an internal model of the world. Understanding our brain is among the key challenges for science, on an equal footing with understanding the genesis and fate of our universe. The lecture will describe how to build physical, neuromorphic models of brain circuits in silicon. Neuromorphic systems can be used to gain understanding of learning and development in biological brains, and as artificial neural systems for cognitive computing.
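As a rough illustration of what these ‘physical models of brain circuits’ compute, here is a minimal leaky integrate-and-fire neuron in Python – the basic dynamics that neuromorphic silicon implements directly in analog circuitry. This is a hedged sketch: all parameter values are illustrative and not taken from the lecture.

```python
# Minimal leaky integrate-and-fire (LIF) neuron.
# All parameters are illustrative, not taken from the lecture.
dt       = 0.1    # time step (ms)
tau_m    = 20.0   # membrane time constant (ms)
v_rest   = -65.0  # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset  = -65.0  # post-spike reset potential (mV)
r_m      = 10.0   # membrane resistance (MOhm)
i_ext    = 1.8    # constant input current (nA)

v = v_rest
spike_times = []
for step in range(int(500 / dt)):  # simulate 500 ms
    # Membrane equation: tau_m * dV/dt = -(V - V_rest) + R_m * I
    v += dt / tau_m * (-(v - v_rest) + r_m * i_ext)
    if v >= v_thresh:              # threshold crossing -> emit a spike
        spike_times.append(step * dt)
        v = v_reset                # instantaneous reset

print(f"{len(spike_times)} spikes, first at {spike_times[0]:.1f} ms")
```

Systems such as BrainScaleS implement this kind of differential equation physically in analog circuits, so the dynamics run in accelerated hardware time rather than being numerically integrated step by step.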

Event Page Here | Meetup Event Page Here

Date: Wednesday 5 April 2017, 6–7pm

Venue: Monash Biomedical Imaging, 770 Blackburn Road, Clayton

Karlheinz Meier

Karlheinz Meier (born 1955) received his PhD in physics in 1984 from Hamburg University in Germany. He has more than 25 years of experience in experimental particle physics, with contributions to 4 major experiments at particle colliders at DESY in Hamburg and CERN in Geneva. After fellowships and scientific staff positions at CERN and DESY he was appointed full professor of physics at Heidelberg University in 1992. In Heidelberg he co-founded the Kirchhoff-Institute for Physics and a laboratory for the development of microelectronic circuits for science experiments. For the ATLAS experiment at the Large Hadron Collider (LHC) he led a 10-year effort to design and build a large-scale electronic data processing system providing on-the-fly data reduction by 3 orders of magnitude, enabling, among other achievements, the discovery of the Higgs boson in 2012. In particle physics he took a leading international role in shaping the future of the field as president of the European Committee for Future Accelerators (ECFA).
Around 2005 he gradually shifted his scientific interests towards large-scale electronic implementations of brain-inspired computer architectures. His group pioneered several innovations in the field, such as the conception of a platform-independent description language for neural circuits (PyNN), time-compressed mixed-signal neuromorphic computing systems, and wafer-scale integration for their implementation. He led two major European initiatives, FACETS and BrainScaleS, that both demonstrated the rewarding interdisciplinary collaboration of neuroscience and information science. In 2009 he was one of the initiators of the European Human Brain Project (HBP), which was approved in 2013. In the HBP he leads the subproject on neuromorphic computing, with the goal of establishing brain-inspired computing paradigms as research tools for neuroscience and as generic hardware systems for cognitive computing – a new way of processing and interpreting the spatio-temporal structure of large data volumes. In the HBP he is a member of the project directorate and vice-chair of the science and infrastructure board.
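PyNN, mentioned above, is the simulator-independent Python API for describing neural circuits. The snippet below is a minimal sketch of what such a description looks like; the backend choice and every parameter are my own illustrative assumptions, and the same script can in principle be retargeted at other simulators or at neuromorphic hardware by swapping the backend module.

```python
# Minimal PyNN sketch. Backend and all parameters are illustrative assumptions.
import pyNN.nest as sim  # swap for another backend module to retarget

sim.setup(timestep=0.1)  # ms

# A Poisson spike source population driving integrate-and-fire cells.
source = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
cells  = sim.Population(50, sim.IF_cond_exp())

# Sparse random excitatory connectivity with static synapses.
sim.Projection(source, cells,
               sim.FixedProbabilityConnector(p_connect=0.1),
               sim.StaticSynapse(weight=0.01, delay=1.0),
               receptor_type='excitatory')

cells.record('spikes')
sim.run(1000.0)  # ms

spiketrains = cells.get_data().segments[0].spiketrains
print(sum(len(st) for st in spiketrains), "spikes recorded")
sim.end()
```

The platform independence is the point: the model description is decoupled from the machine that executes it, which is what allows the same workflow to drive software simulators and wafer-scale neuromorphic systems alike.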
Karlheinz Meier engages in public dissemination of science. His YouTube channel with physics movies has received more than a million views, and he delivers regular lectures to the public about his research and general science topics.


Consciousness in Biological and Artificial Brains – Prof Christof Koch

Event Description: Human and non-human animals not only act in the world but are capable of conscious experience. That is, it feels like something to have a brain and be cold, angry or see red. I will discuss the scientific progress that has been achieved over the past decades in characterizing the behavioral and neuronal correlates of consciousness, based on clinical case studies as well as laboratory experiments. I will introduce Integrated Information Theory (IIT), which explains in a principled manner which physical systems are capable of conscious, subjective experience. The theory explains many biological and medical facts about consciousness and its pathologies in humans, can be extrapolated to more difficult cases, such as fetuses, mice, or non-mammalian brains, and has been used to assess the presence of consciousness in individual patients in the clinic. IIT also explains why consciousness evolved by natural selection. The theory predicts that deep convolutional networks and von Neumann computers would experience next to nothing, even if they perform tasks that in humans would be associated with conscious experience and even if they were to run software faithfully simulating the human brain.
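IIT’s full Φ measure is mathematically involved, but the core intuition – that a conscious system generates information as an integrated whole, over and above what its parts generate separately – can be shown with a toy calculation. The Python sketch below is my own crude whole-versus-parts comparison for a hypothetical two-bit system, not the actual Φ of IIT: two elements that copy each other’s state generate information jointly that neither generates alone.

```python
import itertools
import numpy as np

def mutual_information(joint):
    """Mutual information (bits) of a 2-D joint probability table."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

p = 0.05  # noise: each bit copies the *other* bit, flipped w.p. p

# Joint distribution P(state_t, state_t+1) over 2-bit states (index 2a+b),
# with the current state uniformly distributed.
joint = np.zeros((4, 4))
for a, b in itertools.product([0, 1], repeat=2):
    for na, nb in itertools.product([0, 1], repeat=2):
        pa = 1 - p if na == b else p  # new a copies old b
        pb = 1 - p if nb == a else p  # new b copies old a
        joint[2 * a + b, 2 * na + nb] = 0.25 * pa * pb

whole = mutual_information(joint)  # information the whole system generates

# Marginal dynamics of each bit considered on its own.
ja, jb = np.zeros((2, 2)), np.zeros((2, 2))
for s, t in itertools.product(range(4), repeat=2):
    ja[s >> 1, t >> 1] += joint[s, t]  # bit a = high bit of the index
    jb[s & 1,  t & 1]  += joint[s, t]  # bit b = low bit of the index
parts = mutual_information(ja) + mutual_information(jb)

print(f"whole: {whole:.2f} bits, parts: {parts:.2f} bits, "
      f"integration: {whole - parts:.2f} bits")
```

Each bit’s future is unpredictable from its own past (the parts score ~0 bits), yet the whole system’s future is highly predictable from its joint state – a whole generating information above its parts, which is the flavour of integration that IIT quantifies far more carefully.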

[Meetup Event Page]

Supported by The Florey Institute of Neuroscience & Mental Health, the University of Melbourne and the ARC Centre of Excellence for Integrative Brain Function.


Who: Prof Christof Koch, President and Chief Scientific Officer, Allen Institute for Brain Science, Seattle, USA

Venue: Melbourne Brain Centre, Ian Potter Auditorium, Ground Floor, Kenneth Myer Building (Building 144), Genetics Lane, 30 Royal Parade, University of Melbourne, Parkville

This will be of particular interest to those who know of the works of David Pearce, Andreas Gomez, Mike Johnson and Brian Tomasik – see this online panel:

Can we build AI without losing control over it? – Sam Harris

Sam Harris (author of The Moral Landscape and host of the Waking Up podcast) discusses the need for AI safety – while fun to think about, we are unable to “marshal an appropriate emotional response” to improvements in AI and automation and the prospect of dangerous AI; it’s a failure of intuition to respond to it as one would to a far-off sci-fi doom scenario.

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

Elon Musk on the Future of AI

Elon Musk discusses what may be the best of the alternative AI futures – one in which advanced AI technology is democratized, so that no one company has complete control over it – since it could become a very unstable situation if powerful AI is concentrated in the hands of a few.
Elon also discusses improving the neural link between humans and AI – because humans are so slow – and believes that merging with AI will solve the AI control problem.
OpenAI seems to have a good team, and as a 501(c)(3) non-profit it (unlike many non-profits) does have a sense of urgency in increasing the odds of a friendly AI outcome.

Transcript of the section of the interview where Elon Musk discusses Artificial Intelligence:
Interviewer: Speaking of really important problems, AI. You have been outspoken about AI. Could you talk about what you think the positive future for AI looks like and how we get there?
Elon: Okay, I mean I do want to emphasize that this is not really something that I advocate or this is not prescriptive. This is simply, hopefully, predictive. Because you will hear some say, well, like this is something that I want to occur instead of this is something I think that probably is the best of the available alternatives. The best of the available alternatives that I can come up with, and maybe someone else can come up with a better approach or better outcome, is that we achieve democratization of AI technology. Meaning that no one company or small set of individuals has control over advanced AI technology. I think that’s very dangerous. It could also get stolen by somebody bad, like some evil dictator or country could send their intelligence agency to go steal it and gain control. It just becomes a very unstable situation, I think, if you’ve got any incredibly powerful AI. You just don’t know who’s going to control that. So it’s not that I think that the risk is that the AI would develop a will of its own right off the bat. I think the concern is that someone may use it in a way that is bad. Or even if they weren’t going to use it in a way that’s bad but somebody could take it from them and use it in a way that’s bad, that, I think, is quite a big danger. So I think we must have democratization of AI technology to make it widely available. And that’s the reason that obviously you, me, and the rest of the team created OpenAI was to help spread out AI technology so it doesn’t get concentrated in the hands of a few. But then, of course, that needs to be combined with solving the high-bandwidth interface to the cortex.
Interviewer: Humans are so slow.
Elon: Humans are so slow. Yes, exactly. But we already have a situation in our brain where we’ve got the cortex and the limbic system… The limbic system is kind of a…I mean, that’s the primitive brain. That’s kind of like your instincts and whatnot. And the cortex is the thinking upper part of the brain. Those two seem to work together quite well. Occasionally, your cortex and limbic system will disagree, but they…
Interviewer: It generally works pretty well.
Elon: Generally works pretty well, and it’s like rare to find someone who…I’ve not found someone who wishes to either get rid of the cortex or get rid of the limbic system.
Interviewer: Very true.
Elon: Yeah, that’s unusual. So I think if we can effectively merge with AI by improving the neural link between your cortex and your digital extension of yourself, which already, like I said, already exists, just has a bandwidth issue. And then effectively you become an AI-human symbiote. And if that then is widespread, with anyone who wants it can have it, then we solve the control problem as well, we don’t have to worry about some evil dictator AI because we are the AI collectively. That seems like the best outcome I can think of.
Interviewer: So, you’ve seen other companies in their early days that start small and get really successful. I hope I never get this asked on camera, but how do you think OpenAI is going as a six-month-old company?
Elon: I think it’s going pretty well. I think we’ve got a really talented group at OpenAI.
Interviewer: Seems like it.
Elon: Yeah, a really talented team and they’re working hard. OpenAI is structured as a 501(c)(3) non-profit. But many non-profits do not have a sense of urgency. It’s fine, they don’t have to have a sense of urgency, but OpenAI does because I think people really believe in the mission. I think it’s important. And it’s about minimizing the risk of existential harm in the future. And so I think it’s going well. I’m pretty impressed with what people are doing and the talent level. And obviously, we’re always looking for great people to join in the mission.

The full interview is available in video/audio and text format at Y Combinator as part of the How to Build the Future series: https://www.ycombinator.com/future/elon/


Anders Sandberg – The Technological Singularity

Anders gives a short tutorial on the Singularity – clearing up confusion and highlighting important aspects of the Technological Singularity and related ideas, such as accelerating change, horizons of predictability, self-improving artificial intelligence, and the intelligence explosion.

Tutorial Video:

Points covered in the tutorial:

  • The Mathematical Singularity
  • The Technological Singularity: A Horizon of predictability
  • Confusion Around The Technological Singularity
  • Drivers of Accelerated Growth
  • Technology Feedback Loops
  • A History of Coordination
  • Technological Inflection Points
  • Difficulty of seeing what happens after an Inflection Point
  • The Intelligence Explosion
  • An Optimisation Power Applied To Itself
  • Group Minds
  • The HIVE Singularity: A Networked Global Mind
  • The Biointelligence explosion
  • Humans are difficult to optimise

An Overview of Models of the Technological Singularity

See Anders’ paper ‘An overview of models of technological singularity’.
This paper reviews different definitions and models of technological singularity. The models range from conceptual sketches to detailed endogenous growth models, as well as attempts to fit empirical data to quantitative models. Such models are useful for examining the dynamics of the world-system and possible types of future crisis points where fundamental transitions are likely to occur. Current models suggest that, generically, even small increasing returns tend to produce radical growth. If mental capital becomes copyable (such as would be the case for AI or brain emulation), extremely rapid growth would also become likely.
http://agi-conf.org/2010/wp-content/uploads/2009/06/agi10singmodels2.pdf
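The abstract’s mention of fitting empirical data to quantitative models can be made concrete with a small sketch. The Python below fits a hyperbolic (‘finite-time singularity’) growth law to synthetic data – the data and parameters are invented for illustration, not taken from the paper’s datasets.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(t, c, t_star):
    # x(t) = c / (t* - t): the solution of dx/dt = k*x^2,
    # which diverges at the finite "singularity" time t*.
    return c / (t_star - t)

# Synthetic observations with multiplicative noise (illustration only).
rng = np.random.default_rng(0)
t = np.linspace(1900, 2000, 50)
true_c, true_t_star = 5000.0, 2027.0
x = hyperbolic(t, true_c, true_t_star) * rng.lognormal(0.0, 0.05, t.size)

# Fit, constraining the blow-up time to lie after the observed range.
(c_fit, t_star_fit), _ = curve_fit(
    hyperbolic, t, x, p0=(1000.0, 2050.0),
    bounds=([0.0, 2001.0], [np.inf, np.inf]))
print(f"fitted blow-up year: {t_star_fit:.1f} (true: {true_t_star})")
```

Fits of this family against long-run population or economic series are what give hyperbolic-growth models their empirical bite: the fitted t* becomes a testable prediction rather than a free-floating claim.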

[The] Technological singularity is of increasing interest among futurists both as a predicted possibility in the midterm future and as subject for methodological debate. The concept is used in a variety of contexts, and has acquired an unfortunately large number of meanings. Some versions stress the role of artificial intelligence, others refer to more general technological change. These multiple meanings can overlap, and many writers use combinations of meanings: even Vernor Vinge’s seminal essay that coined the term uses several meanings. Some of these meanings may imply each other but often there is a conflation of different elements that likely (but not necessarily) occur in parallel. This causes confusion and misunderstanding to the extent that some critics argue that the term should be avoided altogether. At the very least the term ‘singularity’ has led to many unfortunate assumptions that technological singularity involves some form of mathematical singularity and can hence be ignored as unphysical.
– Anders Sandberg

A list of models described in the paper:

A. Accelerating change

Exponential or superexponential technological growth (with linked economic growth and social change) (Ray Kurzweil (Kur05), John Smart (Smang))

B. Self improving technology

Better technology allows faster development of new and better technology. (Flake (Fla06))

C. Intelligence explosion

Smarter systems can improve themselves, producing even more intelligence in a strong feedback loop. (I.J. Good (Goo65), Eliezer Yudkowsky)

D. Emergence of superintelligence

(Singularity Institute)

E. Prediction horizon

Rapid change or the emergence of superhuman intelligence makes the future impossible to predict from our current limited knowledge and experience. (Vinge (Vin93))

F. Phase transition

The singularity represents a shift to new forms of organisation. This could be a fundamental difference in kind, such as humanity being succeeded by posthuman or artificial intelligences, a punctuated equilibrium transition, or the emergence of a new meta-system level. (Teilhard de Chardin, Valentin Turchin (Tur77), Heylighen (Hey07))

G. Complexity disaster

Increasing complexity and interconnectedness causes increasing payoffs, but increases instability. Eventually this produces a crisis, beyond which point the dynamics must be different. (Sornette (JS01), West (BLH+07))

H. Inflexion point

Large-scale growth of technology or economy follows a logistic growth curve. The singularity represents the inflexion point where change shifts from acceleration to deceleration. (Extropian FAQ, T. Modis (Mod02))

I. Infinite progress

The rate of progress in some domain goes to infinity in finite time. (Few, if any, hold this to be plausible.)
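For concreteness, models H and I above correspond to standard growth equations. The sketch below uses my own notation, not the paper’s:

```latex
% Model H: logistic growth. Change accelerates up to the inflexion
% point at x = K/2, then decelerates towards the carrying capacity K.
\frac{dx}{dt} = r\,x\left(1 - \frac{x}{K}\right)
\quad\Longrightarrow\quad
x(t) = \frac{K}{1 + e^{-r(t - t_0)}}

% Model I: even slightly superlinear returns (epsilon > 0) make x(t)
% diverge at the finite time t* = 1 / (epsilon * k * x_0^epsilon).
\frac{dx}{dt} = k\,x^{1+\epsilon}
\quad\Longrightarrow\quad
x(t) = x_0\left(1 - \epsilon\,k\,x_0^{\epsilon}\,t\right)^{-1/\epsilon}
```

The contrast is the whole debate in miniature: both curves look similarly ‘exponential-ish’ early on, and only later data can distinguish an approaching inflexion point from a finite-time blow-up.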


Many thanks for watching!

Consider supporting SciFuture by:
a) Subscribing to the YouTube channel:
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media SciFuture creates

Science, Technology & the Future: http://scifuture.org

Sam Harris on AI Implications – The Rubin Report

A transcription of Sam Harris’ discussion on the implications of Strong AI during a recent appearance on the Rubin Report. Sam contrasts narrow AI with strong AI, and covers AI safety, the possibility of rapid AI self-improvement, and the idea that AI superintelligence may seem alien to us. He also brings up the idea that it is important to solve consciousness before superintelligence arrives (especially if superintelligence wipes us out), in the hope of a future inclusive of the value that conscious experience entails – instead of a mechanized future with no consciousness to experience it.
I explored the idea of consciousness in artificial intelligence in ‘The Knowledge Argument Applied to Ethics‘ – which deals with whether an AI will act differently if it can experience ‘raw feels’ – and this seems to me to be of importance to AI safety and (if we are ethically serious, and also assume value in ‘raw feels’) to preserving the future of value.

Dave Rubin asks the question: “If we get to a certain point with Artificial Intelligence and robots become aware and all that stuff… this can only end horribly right? …it will be pretty good for a while, but then at some point, by their own self-preservation basically, they will have to turn on their masters… I want the answer right now…”

Sam Harris responds: “..I worry about it [AI] to that degree but not quite in those terms. The concern for me is not that we will build superintelligent AI or superintelligent robots which initially seem to work really well and then by some process we don’t understand will become malevolent and kill us – you know – the Terminator movies. That’s not the concern…. Most people who are really worried about this – that’s not really what they are worried about. Although that’s not inconceivable – it’s almost worse than that. What’s more reasonable is that… as we’re building right now… we’re building machines that embody intelligence to an increasing degree, but it’s narrow AI… so the best chess player on earth is a computer but it can’t play tic-tac-toe – it’s narrowly focused on a specific kind of goal – and that’s broadening more and more as we get machines that can play many different kinds of games, for instance, well. So we’re creeping up on what is now called ‘general intelligence’ – the ability to think flexibly in multiple domains – where your learning in one domain doesn’t cancel your learning in another – and so it’s something more like how human beings can acquire many different skills and engage in many different modes of cognition and not have everything fall apart – that’s the Holy Grail of artificial intelligence – we want ‘general intelligence’ and something that’s robust – it’s not brittle… it’s something that if parts of it fail it’s not catastrophic to the whole enterprise… and I think there is no question that we will get there, but there are many false assumptions about the path ahead. One is that what we have now is not nearly as powerful as the human mind – and we’re just going to incrementally get to something that is essentially a human equivalent. Now I don’t see that as the path forward at all… much of our narrow intelligence, insomuch as we find it interesting, is already superhuman, right, so like the calculator on your phone is superhuman for arithmetic – and the chess playing program is superhuman – it’s not almost as good as a human – it’s better than any human on earth and will always be better than any human on earth, right? Um, and more and more we will get that piecemeal effort of superhuman narrow AIs, and when this is ever brought together in a general intelligence what you’re going to have is not just another ordinary human-level intelligence – you’re going to have something that in some ways may be radically foreign – in some ways it’s not going to be everything about us emulated in the system – but whatever is intelligent there is going to be superhuman almost by definition, and if it isn’t at t=0 it’s going to be the next day – it’s just going to improve so quickly. And when you talk about a system that can improve itself – if we ever build intelligent AI that then becomes the best source of its own improvement – so something that can improve its source code better than any human could improve its source code – once we start that process running, and the temptation to do that will be huge, then we have – what has been worried about now for 75 years – the prospect of an intelligence explosion – where the birth of this intelligence could get away from us – it’s now improving itself in a way that is unconstrained.
So people talk about ‘the Singularity’ now, which is what happens when that takes off – it’s a horizon line in technological innovation that we can’t see beyond – and we can’t predict beyond because it’s now just escaping – you’re getting thousands of years of progress in minutes – right, if in fact this process gets initiated – and so it’s not that we have superhuman robots that are just well behaved and it goes on for decades and then all of a sudden they get quirky and they take their interests to heart more than they take ours to heart and… you know, the game is over. I think what is more likely is we’ll build intelligent systems that are so much more competent than we are that even the tiniest misalignment between their goals and our own will ultimately become completely hostile to our well-being and our survival.”

The video of the conversation is here; more of the transcription is below the video.

Dave Rubin: “That’s scarier, pretty much, than what I laid out right? I laid out sort of a futuristic… ahh, they’re going to turn on us and start shooting us one day, maybe because of an error or something – but you’re laying out really that they would… almost at some point that they would, if they could become aware enough, that they simply wouldn’t need us – because they would become ‘super-humans’ in effect – and what use would we serve for them at some point right? (maybe not because of consciousness…)”

Sam Harris: “I would put consciousness and awareness aside because – I mean it might be that consciousness comes along for the ride – it may be the case that you can’t be as intelligent as a human and not be conscious – but I don’t know if that’s right…”

Dave Rubin: “That’s horizon mind stuff right?”

Sam Harris: “Well I just don’t know if that’s actually true – it’s quite possible that we could build something as intelligent as we are – in the sense that it can meet any kind of cognitive or perceptual challenge or logical challenge we would pose it better than we can – but there is nothing that it is like to be that thing – if the lights aren’t on it doesn’t experience happiness, though it might say it experiences happiness, right? I think what will happen is that we will definitely – you know the notion of a Turing test?”

Dave Rubin: “This is like, if you type – it seems like it’s responding to you but it’s not actually really…”

Sam Harris: “Well, Alan Turing, the person who is more responsible than anyone else for giving us computers, once thought about what it would mean to have intelligent machines – and he proposed what has come to be known as the ‘Turing Test’.”

Dave Rubin: “It’s like the chat right?”

Sam Harris: “Yeah but… when you can’t tell whether you’re interacting with a person or a computer – that computer in that case is passing the Turing Test – and as a measure of intelligence that’s certainly a good proxy for a more detailed analysis of what it would mean to have machine intelligence… if I’m talking to something at length about anything that I want – and I can’t tell it’s not a person, and it turns out it’s somebody’s laptop – that laptop is passing the Turing Test. It may be that you can pass the Turing Test without even the subtlest glimmer of consciousness arising. Right, so that laptop is no more conscious than that glass of water is – right? That may in fact be the case, it may not be though – so I just don’t know there. If that’s the case, for me that’s just the scariest possibility – because what’s happening is… I even heard at least one computer scientist [say this] and it was kind of alarming, but I don’t have a deep argument against it – if you assume that consciousness comes along for the ride, if you assume that anything more intelligent than us – created either intentionally or by happenstance – is more conscious than we are, experiences a greater range of creative states, and can suffer more in terms of well-being – then, by definition, in my view ethically, it becomes more important… if we’re more important than Cocker Spaniels or ants or anything below us – then if we create something that’s obviously above us in every conceivable way – and it’s conscious – right?”

Dave Rubin: “It would view us in the same way we view anything that [???] us”

Sam Harris: “It’s more important than us right? And I’d have to grant that even though I’d not be happy about it deciding to annihilate us… I don’t have a deep ethical argument against why… I can’t say from a god’s eye view that it’s bad that we gave birth to super beings that then trampled on us – but then went on to become super in ways we can’t possibly imagine – just as, you know, bacteria can’t imagine what we’re up to – right. So there are some computer scientists who kind of salve the fears, or silence the fears, with this idea – they say: listen, if we build something that’s god-like in that respect – we will have given birth to… our descendants will not be apes, they will be gods, and that’s a good thing – it’s the most beautiful thing – I mean what could be more beautiful than us creating the next generation of intelligent systems – that are infinitely profound and wise and knowledgeable from our point of view and are just improving themselves endlessly up to the limit of the resources available in the galaxy – what could be more rewarding than that?”

Dave Rubin: “Sounds pretty good”

Sam Harris: “And the fact that we all destroyed ourselves in the process because we were the bugs that hit their windshield when they were driving off – that’s just the price you pay. Well ok, that’s possible, but it’s also conceivable that all that could happen without consciousness right? That we could build mere mechanism that is competent in all the ways so as to plow us under – but that there is no huge benefit on the side of deep experience and well-being and beauty and all that – it’s all just blind mechanism, which is intelligent mechanism… in the same way as the best chess playing program – which is highly intelligent with respect to chess but which nobody thinks is conscious. So that’s the theory… but on the way there – there are many weird moments where I think we will build machines that will pass the Turing Test – which is to say that they will seem conscious to us, they will seem to be able to detect our emotions and respond to our emotions, you know, will say ‘you know what – you look tired, and maybe you should take a nap’ – and it will be right, you know, it will be a better judge of your emotions than your friends are – right? And yet at a certain point, certainly if you emulate this in a system, whether it’s an avatar online or an actual robot that has a face, right? That can display its own emotions – and we get out of the uncanny valley where it just looks creepy and it begins to look actually beautiful and rewarding and natural – then our intuitions that we are in dialog with a conscious other will be played upon perfectly, right?… and I think we will lose sight of it being an interesting problem – it will no longer be interesting to wonder whether our computers are conscious because they will be demonstrating it as much as any person has ever demonstrated it – and in fact even more, right? And unless we understand exactly how consciousness emerges in physical systems, at some point along the way of developing that technology – I don’t think we will actually know that they’re conscious – and that will be interesting – because we will successfully fool ourselves into just assuming [they are] – it will seem totally unethical to kill your robot off – it will be a murder worse than you killing a person because at a certain point it will be the most competent person – you know, the wisest person.”

Dave Rubin: “Sam, I don’t know if you’re writing a book about this – but you clearly should write a book about this – I’ll write one of the intros or something – there you go. Well listen, we did two hours here – so I’m not going to give you the full Rogan treatment”

Sam Harris: “We did a half Rogan”

Dave Rubin: “We did a half Rogan – but you know, you helped me launch the first season – you’re launching the second season – legally you have to now launch every season…”

* Some breaks in conversation (sentences, words, ums and ahs) have been omitted to make it easier to read

Peter Singer & David Pearce on Utilitarianism, Bliss & Suffering

Moral philosophers Peter Singer & David Pearce discuss some of the long-term issues with various forms of utilitarianism, the future of predation, and utilitronium shockwaves.

Topics Covered

Peter Singer

– long term impacts of various forms of utilitarianism
– Consciousness
– Artificial Intelligence
– Reducing suffering in the long run and in the short term
– Practical ethics
– Pre-implantation genetic screening to reduce disease and low mood
– Lives today are worth the same as lives in the future – though uncertainty must be brought to bear in deciding how one weighs up the importance of life
– The Hedonistic Imperative and how people react to it
– Correlation of high hedonic set points with productivity
– Existential risks and global catastrophic risks
– Closing factory farms

David Pearce

– Veganism and reducetarianism
– Red meat vs white meat – many more chickens than cattle are killed per ton of meat
– Valence research
– Should one eliminate suffering? And should we eliminate emotions of happiness?
– How can we answer the question of how far suffering is present in different life forms (like insects)?

Talk of moral progress can make one sound naive. But even the darkest cynic should salute the extraordinary work of Peter Singer to promote the interests of all sentient beings.
– David Pearce

Many thanks for watching!
– Support me via Patreon: https://www.patreon.com/scifuture
– Please Subscribe to this Channel: http://youtube.com/subscription_cente…
– Science, Technology & the Future website: http://scifuture.org

Is there a Meaningful Future for Non-Optimal Moral Agents?

In an interview last year, I discussed the Hedonistic Imperative & Superintelligence with John Danaher. A concern he has with HI is that it denies or de-emphasises some kind of moral agency. In moral theory there is a distinction between moral agents (responsible actors able to make moral decisions, influence the direction of moral progress, shape their future, and owe duties to others) and moral patients, who may be deemed to have limited or no grounds for moral agency/autonomy/responsibility – they are simply recipients of moral benefits. In contrast to humans, animals could be classified as moral patients (see the Stanford writing on Grounds for Moral Status).

As time goes on, the notion of strong artificial intelligence leading to Superintelligence (which may herald something like an Intelligence Explosion) and ideas like the Hedonistic Imperative become less sensational sci-fi concepts and more like visions of realizable eventualities. Thinking about moral endpoints brings me a paradoxical feeling of triumph and disempowerment.

John’s concern is that ensuring the well-being of humans (conscious entities) is consistent with denying their moral agency – minimizing their capacity to act. There is a danger that the outcome of HI or an Intelligence Explosion may be sentient life that is made very happy forever but unable to make choices – a future focused entirely on bliss whilst ignoring other aspects of what makes for a valuable or worthwhile existence.

So even if we have a future where a) we are made very happy and b) we are subject to a wide variety of novelty (which I argue for in Novelty Utilitarianism), without some kind of self-determination we may not be able to enjoy part of what arguably makes for a worthwhile existence.

If the argument for moral agency is completely toppled by the argument against free will, then I can see why there would be no reason for it – and bliss/novelty may be enough – though I personally haven’t been convinced that this is the case.

Also, the idea that moral agency and novelty should be ranked as auxiliary aspects to the main imperative of reducing suffering/increasing bliss seems problematic – I get the sense that, for most non-optimal moral agents, they (agency/novelty) could easily be swapped out in the quest for less suffering and more bliss.
The idea that, upon evaluating grounds for moral status, our ethical/moral quotient may not match or even come close to the potential ethical force of a superintelligence is also troubling. If we are serious about the best ethical outcomes, when the time comes, should we be committed to resigning all moral agency to agents that are more adept at producing peak moral outcomes?
Is it really possible for non-optimal agents to have a meaningful moral input in a universe where they’ve been completely outperformed by moral machines? Is a life of novelty & bliss the most optimal outcome we can hope for?

There probably should be some more discussion on trade-offs between moral agency, peak experience and novelty.

Discussion in this video here starts at 24:02

Below is the whole interview with John Danaher: