Towards the Abolition of Suffering Through Science

An online panel focusing on reducing suffering & paradise engineering through the lens of science.

Panelists: Andrés Gómez Emilsson, David Pearce, Brian Tomasik and Mike Johnson

Note: consider skipping to 10:19 to bypass some audio problems in the beginning!


Topics

Andrés Gómez Emilsson: Qualia computing (how to use consciousness for information processing, and why that has ethical implications)

  • How do we know consciousness is causally efficacious? Because we are conscious and evolution can only recruit systems/properties when they do something (and they do it better than the available alternatives).
  • What is consciousness’ purpose in animals? (Information processing).
  • What is consciousness’ comparative advantage? (Phenomenal binding).
  • Why does this matter for suffering reduction? Suffering has functional properties that play a role in the inclusive fitness of organisms. If we figure out exactly what role they play (by reverse-engineering the computational properties of consciousness), we can substitute them by equally (or better) functioning non-conscious or positive hedonic-tone analogues.
  • What is the focus of Qualia Computing? It focuses on basic fundamental questions and simple experimental paradigms to get at them (e.g. the computational properties of visual qualia via psychedelic psychophysics).

Brian Tomasik:

  • Space colonization “Colonization of space seems likely to increase suffering by creating (literally) astronomically more minds than exist on Earth, so we should push for policies that would make a colonization wave more humane, such as not propagating wild-animal suffering to other planets or in virtual worlds.”
  • AGI safety “It looks likely that artificial general intelligence (AGI) will be developed in the coming decades or centuries, and its initial conditions and control structures may make an enormous impact to the dynamics, values, and character of life in the cosmos.”
  • Animals and insects “Because most wild animals die, often painfully, shortly after birth, it’s plausible that suffering dominates happiness in nature. This is especially plausible if we extend moral considerations to smaller creatures like the ~10^19 insects on Earth, whose collective neural mass outweighs that of humanity by several orders of magnitude.”
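The neural-mass comparison invites a quick back-of-envelope check. A minimal sketch in Python, taking the quoted insect count at face value; the average per-insect brain mass here is a rough assumption of mine (roughly honeybee-scale), not a figure from the panel:

```python
# Back-of-envelope: collective insect neural mass vs. collective human brain mass.
# All inputs are rough illustrative assumptions.
import math

INSECT_COUNT = 1e19        # insect count quoted in the discussion
INSECT_BRAIN_G = 1e-3      # assumed average insect brain mass in grams (~1 mg)
HUMAN_COUNT = 8e9          # approximate human population
HUMAN_BRAIN_G = 1.4e3      # average human brain mass in grams (~1.4 kg)

insect_neural_mass = INSECT_COUNT * INSECT_BRAIN_G  # grams
human_neural_mass = HUMAN_COUNT * HUMAN_BRAIN_G     # grams

ratio = insect_neural_mass / human_neural_mass
orders_of_magnitude = math.log10(ratio)
print(f"insect/human neural-mass ratio ~ {ratio:.0f} "
      f"(~{orders_of_magnitude:.0f} orders of magnitude)")
```

Under these assumptions the ratio comes out near three orders of magnitude, which is at least consistent with the “several orders of magnitude” quoted above.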

Mike Johnson:

  • If we successfully “reverse-engineer” the patterns for pain and pleasure, what does ‘responsible disclosure’ look like? Potential benefits and potential for abuse both seem significant.
  • If we agree that valence is a pattern in a dataset, what’s a good approach to defining the dataset, and what’s a good heuristic for finding the pattern?
  • What order of magnitude is the theoretical potential of mood enhancement? E.g., 2x vs 10x vs 10^10x
  • What are your expectations of the distribution of suffering in the world? What proportion happens in nature vs within the boundaries of civilization? What are counter-intuitive sources of suffering? Do we know about ~90% of suffering on the earth, or ~.001%?
  • Valence Research, The Mystery of Pain & Pleasure.
  • Why is right around now such an exciting time to be doing valence research? Are we at a sweet spot in history in this regard? What is hindering valence research? (Examples of muddled thinking, cultural barriers, etc.?)
  • How do we use the available science to improve the QALY? GiveDirectly has used change in cortisol levels to measure effectiveness, and the EU (European Union) evidently does something similar involving cattle. It seems like a lot of the pieces for a more biologically-grounded QALY (and maybe a SQALY, a Species and Quality-Adjusted Life-Year) are available; someone just needs to put them together. I suspect this is one of the lowest-hanging, highest-leverage research fruits.
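As a toy illustration of what putting those pieces together might look like, here is a hypothetical SQALY calculation in Python that weights life-years by a species factor and a quality factor derived from a cortisol reading. The species weights and the linear cortisol-to-quality mapping are invented placeholders for illustration, not established metrics:

```python
# Hypothetical SQALY (Species and Quality-Adjusted Life-Year) sketch.
# Species weights and the cortisol-to-quality mapping are illustrative
# placeholders, not established values.

def quality_from_cortisol(cortisol_ng_ml, baseline_ng_ml):
    """Map a cortisol reading to a 0..1 quality weight: at or below
    baseline -> 1.0; excess stress degrades quality linearly (floor 0)."""
    excess = max(0.0, cortisol_ng_ml - baseline_ng_ml)
    return max(0.0, 1.0 - excess / baseline_ng_ml)

def sqaly(years, species_weight, cortisol_ng_ml, baseline_ng_ml):
    """Life-years weighted by species moral weight and measured quality."""
    return years * species_weight * quality_from_cortisol(cortisol_ng_ml, baseline_ng_ml)

# A human at baseline stress for one year:
print(sqaly(1.0, species_weight=1.0, cortisol_ng_ml=10.0, baseline_ng_ml=10.0))  # 1.0
# A cow (hypothetical species weight 0.3) at elevated cortisol:
print(sqaly(1.0, species_weight=0.3, cortisol_ng_ml=15.0, baseline_ng_ml=10.0))  # 0.15
```

The real research problem, of course, lies in justifying the species weights and validating a physiological proxy like cortisol as a cross-species quality measure.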

David Pearce: The ultimate scope of our moral responsibilities. Assume for a moment that our main or overriding goal should be to minimise and ideally abolish involuntary suffering. I typically assume that (a) only biological minds suffer and (b) we are probably alone within our cosmological horizon. If so, then our responsibility is “only” to phase out the biology of involuntary suffering here on Earth and make sure it doesn’t spread or propagate outside our solar system. But Brian, for instance, has quite a different metaphysics of mind, most famously that digital characters in video games can suffer (now only a little – but in future perhaps a lot). The ramifications here for abolitionist bioethics are far-reaching.

 

Other:
– Valence research, qualia computing (how to use consciousness for information processing, and why that has ethical implications), animal suffering, insect suffering, developing an ethical Nozick’s Experience Machine, long-term paradise engineering, complexity and valence
– Effective Altruism / cause prioritization applied directly to the abolition of suffering – what are the best projects suffering reducers can work on, and what should be worked on first? (Including where to donate, what research topics to prioritize, what messages to spread.)

Panelists

David Pearce: http://hedweb.com/
Mike Johnson: http://opentheory.net/
Andrés Gómez Emilsson: http://qualiacomputing.com/
Brian Tomasik: http://reducing-suffering.org/

 

#hedweb #EffectiveAltruism #HedonisticImperative #AbolitionistProject

The event was hosted on the 10th of August 2015. Venue: The Internet.

Towards the Abolition of Suffering Through Science was hosted by Adam Ford for Science, Technology and the Future.


Future Day Melbourne 2017

WHERE: The Bull & Bear Tavern – 347 Flinders Lane (between Queen and Elizabeth streets), Melbourne. WHEN: Wednesday, March 1st 2017.
See the Facebook event, and the Meetup Event.

SCHEDULE

* Noushin Shabab ‘The Evolution of Cybersecurity – Looking Towards 2045’ (Senior Security Researcher at Kaspersky Lab) – 20 mins
* Luke James (Science Party Melbourne) a (nonpartisan) talk about promises and pitfalls of government and future technology – 20 mins
* Dushan Phillips – To be what one is.. (spoken word) – 20 mins
* Patrick Poke – The Future of Finance – 20 mins
* There will also be discussion on the upcoming March for Science in Melbourne! (April 22nd) – 10–15 mins

Abstracts/Synopsis:

Promises and Pitfalls of Government and Future Technology

By Luke James

My talk focuses on the interaction between technological developments (future tech) and government, from the point of view of government and of those developing and trying to use new tech. I have a couple of scenarios to go over in which government has reacted poorly and well to new technologies, and in which new tech has integrated poorly and well with governments. Then I’ll speak about the policies and systems governments can utilise to encourage and take advantage of new tech, which will lead into my final topic: a few minutes about the March for Science. I’ll leave a few minutes for questions at the end as well.
Throughout the speech I’ll be speaking about government purely from a systematic standpoint.

The Evolution of Cybersecurity – Looking Towards 2045

By Noushin Shabab

“Journey through the top cybersecurity criminal cases caught by the Global Research And Analysis Team (GReAT) from Kaspersky Lab and find out their current and future trends in cybercriminal activity.”

The Future of Finance

By Patrick Poke

 

  • I’ll start off with a bit of an introduction on what the finance industry is really about and where we are now.
  • I’ll then discuss some of the problems/opportunities that we face now (as these will form the basis for future changes).
  • I’ll go through some expectations over the short-term, medium-term, and long-term.
  • Finally, I’ll look at some of the over-hyped areas where I don’t think we’ll see as much change as people expect.

 

To be what one is..

By Dushan Phillips

TBA

 

About Future Day

“Humanity is on the edge of understanding that our future will be astoundingly different from the world we’ve lived in these last several generations. Accelerating technological change is all around us, and transformative solutions are near at hand for all our problems, if only we have the courage to see them. Future Day helps us to foresee our personal potentials, and acknowledge that we have the power to pull together and push our global system to a whole new level of collective intelligence, resiliency, diversity, creativity, and adventure. Want to help build a more foresighted culture? Don’t wait for permission, start celebrating it now!” – John Smart

Future Day is a global day of focusing and celebrating the energy that more and more people around the world are directing toward creating a radically better future.

The Future & You

We all have aspirations, yet we are all too often sidetracked in this age of distraction – however, to firmly ritualize our commitment to the future, each year we celebrate the future seeking to address the glorious problems involved in arriving at a future that we want. Lurking behind every unfolding minute is the potential for a random tangent with no real benefit for our future selves – so it is Future Day to the rescue! A day to remind us to include more of the future in our attention economies, and help us to procrastinate being distracted by the usual gauntlet of noise we run every other day. We take seriously the premise that our future is very important – the notion that *accelerating technological progress will change the world* deserves a lot more attention than that which can be gleaned from most other days of celebration. So, let us remind ourselves to remember the future – an editable history of a time to come – a future, that without our conscious deliberation and positive action, may not be the future that we intended.

Ethics In An Uncertain World – Australian Humanist Convention 2017

Join Peter Singer & AC Grayling to discuss some of the most pressing issues facing society today – surviving the Trump era, Climate Change, Naturalism & the Future of Humanity.

Ethics In An Uncertain World

After an incredibly successful convention in Brisbane in May, 2016, the Humanist Society of Victoria together with the Council of Australian Humanist Societies will be hosting Australian Humanists at the start of April to discuss and learn about some of the most pressing issues facing society today and how Humanists and the world view we hold can help to shape a better future for all of society.

Official Conference Link | Get Tickets Here | Gala Dinner | FAQs | Meetup Link | Google Map Link

Lineup

AC Grayling – Humanism, the individual and society
Peter Singer – Public Ethics in the Trump Era
Clive Hamilton – Humanism and the Anthropocene
Meredith Doig – Interbelief presentations in schools
Monica Bini – World-views in the school curriculum
James Fodor – ???
Adam Ford – Humanism & Population Axiology

SciFuture supports and endorses the Humanist Convention in 2017 in efforts to explore ethics foundational in enlightenment values, march against prejudice, and help make sense of the world. SciFuture affirms that human beings (and indeed many other nonhuman animals) have the right to flourish, be happy, and give meaning and shape to their own lives.

Peter Singer wrote about Taking Humanism Beyond Speciesism – Free Inquiry, 24, no. 6 (Oct/Nov 2004), pp. 19-21

AC Grayling’s talk on Humanism at the British Humanists Association:

 

Zombie Rights

Andrew Dun provides an interesting discussion on the rights of sentient entities. Drawing inspiration from quantum complementarity, he defends a complementary notion of ontological dualism, countering zombie hypotheses. Sans zombie concerns, ethical discussions should therefore focus on assessing consciousness purely in terms of the physical-functional properties of any putatively conscious entity.

Below is the video of the presentation:

At the 12:17 mark, Andrew introduces the notion of supervenience (where high-level properties supervene on low-level properties) – do zombies have supervenience? Is consciousness merely a supervenient property that supervenes on characteristics of brain states? If so, we should be able to compute whether a system is conscious (if we know its full physical characterization). The zombie hypothesis suggests that consciousness does not logically supervene on the physical.

Slides for the presentation can be found on SlideShare!


Andrew Dun spoke at the Singularity Summit. Talk title : “Zombie Rights”.

Andrew’s research interest relates to both the ontology and ethics of consciousness. Andrew is interested in the ethical significance of consciousness, including the way in which our understanding of consciousness impacts our treatment of other humans, non-human animals, and artifacts. Andrew defends the view that the relationship between physical and conscious properties is one of symmetrical representation, rather than supervenience. Andrew argues that on this basis we can confidently approach ethical questions about consciousness from the perspective of ‘common-sense’ materialism.

Andrew also composes and performs original music.

Extending Life is Not Enough

Dr Randal Koene covers the motivation for human technological augmentation and reasons to go beyond biological life extension.

“Competition is an inescapable occurrence in the animate and even in the inanimate universe. To give our minds the flexibility to transfer and to operate in different substrates bestows upon our species the most important competitive advantage.” I am a neuroscientist and neuroengineer who is currently the Science Director at Foundation 2045 and the Lead Scientist at Kernel. I head the organization carboncopies.org, which is the outreach and roadmapping organization for the development of substrate-independent minds (SIM), and previously participated in the ambitious and fascinating efforts of the nanotechnology startup Halcyon Molecular in Silicon Valley.

Slides of talk online here
Video of Talk:

Points discussed in the talk:
1. Biological Life-Extension is Not Enough Randal A. Koene Carboncopies.org
2. PERSONAL
3. No one wants to live longer just to live longer. Motivation informs Method.
4. Having an Objective, a Goal, requires that you have some notion of success.
5. Creating (intelligent) machines that have the capabilities we do not — is not as good as being able to experience them ourselves… Imagine… creating/playing music. Imagine… being the kayak. Imagine… perceiving the background radiation of the universe.
6. Is being out of the loop really your goal?
7. Near-term goals: Extended lives without expanded minds are in conflict with creative development.
8. Social
9. Gene survival is extremely dependent on an environment — it is unlikely to survive many changes. Worse… gene replication does not sustain that which we care most about!
10. Is CTGGAGTAC better than GTTGACTGAC? We are vessels for that game — but for the last 10,000 years something has been happening!
11. Certain future experiences are desirable, others are not — these are your perspectives, the memes you champion… Death keeps stealing our champions, our experts.
12. Too early to do uploading? – No! The big perspective is relevant now. We don’t like myopic thinking in our politicians; let’s not be myopic about world issues ourselves.
13. SPECIES
14. Life-extension in biology may increase the fragility of our species & civilization… More people? – Resources. Less births? – Fewer novel perspectives. Expansion? – Environmental limitation.
15. Biological life-extension within the same evolutionary niche = further specialization to the same performance: “over-training”, in conflict with generalization.
16. Aubrey de Grey: Ultimately, desires “uploading”
17. TECHNICAL
18. Significant biological life-extension is incredibly difficult and beset by threats. Reality vs. popular perception.
19. Life-extension and Substrate-Independence are two different objectives
20. Developing out of a “catchment area” (S. Gildert) may demand iterations of exploration — and exploration involves risk. Hard-wired delusions and drives. What would an AGI do? Which types of AGI would exist in the long run?
21. “Uploading” is just one step of many — but a necessary step — for a truly advanced species
22. Thank You! carboncopies.org | randal.a.koene@carboncopies.org

http://www.carboncopies.org/singularity-summit-australia-2012
http://2012.singularitysummit.com.au/2012/11/randal-koene-extending-life-is-not-enough/

There is a short promo-interview for the Singularity Summit AU 2012 conference that Adam Ford did with Dr. Koene, though unfortunately the connection was a bit unreliable, which is noticeable in the video:

Most of those videos are available through the SciFuture YouTube channel: http://www.youtube.com/user/TheRationalFuture


Lawrence Krauss, Ben Goertzel and Steve Omohundro on the Perils of Prediction

Panel on the Perils of Prediction, where Lawrence Krauss, Steve Omohundro and Ben Goertzel set sail on an epic adventure, careening through the perilous waves of prediction! And the seas are angry, my friends! Our future stands upon the prow; our past drowns in the wake. Our most foolish sailors leave the shore without a compass and an eyeglass. We need to stretch our forecasting abilities further than our intuitions and evolved biases allow.

Video of the panel

Filmed at the Singularity Summit Australia 2011 http://2011.singularitysummit.com.au

Lawrence Maxwell Krauss (born May 27, 1954) is a Canadian-American theoretical physicist who is a professor of physics, Foundation Professor of the School of Earth and Space Exploration, and director of the Origins Project at Arizona State University. He is the author of several bestselling books, including The Physics of Star Trek and A Universe from Nothing. He is an advocate of scientific skepticism, science education, and the science of morality.

 

Ben Goertzel (born December 8, 1966 in Rio de Janeiro, Brazil), is an American author and researcher in the field of artificial intelligence. He currently leads Novamente LLC, a privately held software company that attempts to develop a form of strong AI, which he calls “Artificial General Intelligence”. He is also the CEO of Biomind LLC, a company that markets a software product for the AI-supported analysis of biological microarray data; and he is an advisor to the Singularity Institute for Artificial Intelligence, and formerly its Director of Research.

Steve Omohundro is an American scientist known for his research on Hamiltonian physics, dynamical systems, programming languages, machine learning, machine vision, and the social implications of artificial intelligence. His current work uses rational economics to develop safe and beneficial intelligent technologies for better collaborative modeling, understanding, innovation, and decision making.

Conference: Thinking Machines in the Physical World

“Thinking Machines in the Physical World” invites cross-disciplinary conversations about the opportunities and threats presented by advances in cognitive computing:
  – What concrete, real-world possibilities does intelligence-focused technology open up?
  – What potential effects will “smart computers” exert on labor and jobs around the globe?
  – What are the broader social implications of these changes?

When: Wednesday, July 13, 2016 8:30 AM until Friday ~6pm (then dinner)
Where: Melbourne Uni Law School Building, Level 10, 185 Pelham Street, Carlton

Keynotes (see details here):

Prof Brian Anderson – Distinguished Professor at ANU College of Engineering and Computer Science.

Dr James Hughes – Executive Director of the Institute for Ethics and Emerging Technologies.

Prof M. Vidyasagar – Cecil & Ida Green Chair in Systems Biology Science

Prof Judy Wajcman – Anthony Giddens Professor of Sociology, London School of Economics

Dr. Juerg von Kaenel, IBM Research – Cognitive Computing – IBM Watson

Register here | Main website | Program

Professor Graeme Clark, AC Laureate Professor Emeritus  says “It gives me great pleasure to have the opportunity to welcome your interest in the work of Norbert Wiener and invite you to Melbourne to participate in this important conference.”

Official Website: http://21stcenturywiener.org/
Video: https://www.youtube.com/watch?v=etBMY6Orj50
Meetup: http://www.meetup.com/Science-Technology-and-the-Future/events/228816058/
Google+: https://plus.google.com/events/chcmpbupi30ffps4kf94gtn2rpc
Facebook Event: https://www.facebook.com/events/625367860953411/

The long-term future of AI (and what we can do about it) : Daniel Dewey at TEDxVienna

This has been one of my favourite simple talks on AI Impacts – simple, clear and straight to the point. Recommended as an introduction to the ideas (referred to in the title).

I couldn’t find the audio of this talk at TED – it has been added to archive.org:

 

Daniel Dewey is a research fellow in the Oxford Martin Programme on the Impacts of Future Technology at the Future of Humanity Institute, University of Oxford. His research includes paths and timelines to machine superintelligence, the possibility of intelligence explosion, and the strategic and technical challenges arising from these possibilities. Previously, Daniel worked as a software engineer at Google, did research at Intel Research Pittsburgh, and studied computer science and philosophy at Carnegie Mellon University. He is also a research associate at the Machine Intelligence Research Institute.

http://www.tedxvienna.at/

 

Can Intelligence Explode? – Marcus Hutter at Singularity Summit Australia 2012

Abstract: The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. After a short introduction to this intriguing potential future, I will elaborate on what it could mean for intelligence to explode. In this course, I will (have to) provide a more careful treatment of what intelligence actually is, separate speed from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.

 


 

Slides (pdf): http://www.hutter1.net/publ/ssingularity.pdf
Slides (PowerPoint): http://www.hutter1.net/publ/ssingularity.ppsx
Paper: M. Hutter, “Can Intelligence Explode?”, Journal of Consciousness Studies, Vol. 19, No. 1–2 (2012), pp. 143–166.
http://www.hutter1.net/publ/singularity.pdf

Also see:
http://2012.singularitysummit.com.au/2012/08/can-intelligence-explode/
http://2012.singularitysummit.com.au/2012/08/universal-artificial-intelligence/
http://2012.singularitysummit.com.au/2012/08/panel-intelligence-substrates-computation-and-the-future/
http://2012.singularitysummit.com.au/2012/01/marcus-hutter-to-speak-at-the-singularity-summit-au-2012/
http://2012.singularitysummit.com.au/agenda

Marcus Hutter (born 1967) is a German computer scientist and professor at the Australian National University. Hutter was born and educated in Munich, where he studied physics and computer science at the Technical University of Munich. In 2000 he joined Jürgen Schmidhuber’s group at the Swiss Artificial Intelligence lab IDSIA, where he developed the first mathematical theory of optimal Universal Artificial Intelligence, based on Kolmogorov complexity and Ray Solomonoff’s theory of universal inductive inference. In 2006 he also accepted a professorship at the Australian National University in Canberra.

Hutter’s notion of universal AI describes the optimal strategy of an agent that wants to maximize its future expected reward in some unknown dynamic environment, up to some fixed future horizon. This is the general reinforcement learning problem. Solomonoff/Hutter’s only assumption is that the reactions of the environment in response to the agent’s actions follow some unknown but computable probability distribution.
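Hutter’s AIXI agent makes this setup concrete. A sketch of its action-selection rule, abbreviated from his standard formulation (here $U$ is a universal Turing machine, $q$ ranges over environment programs of length $\ell(q)$; $a_i$, $o_i$, $r_i$ are actions, observations and rewards; and $m$ is the fixed horizon):

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

The inner sum weights every computable environment consistent with the interaction history by the Solomonoff-style prior $2^{-\ell(q)}$, so shorter explanatory programs dominate, while the alternating max/sum implements expectimax planning out to the horizon $m$.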


Professor Marcus Hutter

Research interests:

Artificial intelligence, Bayesian statistics, theoretical computer science, machine learning, sequential decision theory, universal forecasting, algorithmic information theory, adaptive control, MDL, image processing, particle physics, philosophy of science.

Bio:

Marcus Hutter is Professor in the RSCS at the Australian National University in Canberra, Australia. He received his PhD and BSc in physics from the LMU in Munich and a Habilitation, MSc, and BSc in informatics from the TU Munich. Since 2000, his research at IDSIA and now ANU is centered around the information-theoretic foundations of inductive reasoning and reinforcement learning, which has resulted in 100+ publications and several awards. His book “Universal Artificial Intelligence” (Springer, EATCS, 2005) develops the first sound and complete theory of AI. He also runs the Human Knowledge Compression Contest (50’000€ H-prize).

Should We Fear or Welcome the Singularity? Nobel Week Dialogue 2015 – The Future of Intelligence

Should science and society welcome ‘the singularity’ – the idea of the hypothetical moment in time when artificial intelligence surpasses human intelligence?
The discussion has been growing for decades, institutes dedicated to solving AI friendliness have popped up, and more recently the ideas have found popular advocates. Certainly super-intelligent machines could help solve classes of problems that humans struggle with, but if not designed well they may cause more problems than they solve.

Is the question of fear or hope in AI a false dichotomy?

Ray Kurzweil

While Kurzweil agrees that AI risks are real, he argues that we already face risks involving biotechnology – I think Kurzweil believes we can solve the biotech threat and other risks through building superintelligence.

Stuart Russell believes that a) we should be exactly sure what we want before we let the AI genie out of the bottle, and b) it’s a technological problem in much the same way as the containment of nuclear fusion is a technological problem.

Max Tegmark says we should both welcome and fear the Technological Singularity. We shouldn’t just bumble into it unprepared. All technologies have been double-edged swords – in the past we learned from mistakes (e.g. with out-of-control fires), but with AI we may only get one chance.

Harry Shum says we should be focussing on what we believe we can develop with AI in the next few decades. We find it difficult to talk about AGI. Most of the social fears are around killer robots.

Maggie Boden

Maggie Boden poses an audience question: how will AI cope with our lack of development in ethical and moral norms?

Stuart Russell answers that machines have to come to understand what human values are. If the first pseudo-general-purpose AIs don’t get human values well enough, they may end up cooking their owner’s cat – this could irreparably tarnish the AI and home-robot industry.

Kurzweil adds that human society is getting more ethical – it seems that statistically we are making ethical progress.

Max Tegmark

Max Tegmark brings up that intelligence is defined by the degree of ability to achieve goals – so we can’t ignore the question of what goals to give the system if we are building highly intelligent AI. We need to make AI systems understand what humans really want, not what they say they want.

Harry Shum says that the important ethical questions for AI systems need to address data and user privacy.

Panelists: Harry Shum (Microsoft Research EVP of Tech), Max Tegmark (Cosmologist, MIT) Stuart Russell (Prof. of Computer Science, UC Berkeley) and Ray Kurzweil (Futurist, Google Director of Engineering). Moderator: Margaret Boden (Prof. of Cognitive Science, Uni. of Sussex).

This debate is from the 2015 edition of the meeting, held in Gothenburg, Sweden on 9 Dec.