The Anatomy of Happiness – David Pearce

David Pearce in interview on The Anatomy of Happiness. While researching epilepsy, neuroscientist Itzhak Fried stumbled on a ‘mirth’ center in the brain – given this, what ought we to be doing to combat extreme suffering and promote wellbeing? 0:00 Mastery of reward circuitry 0:25 Itzhak Fried’s experiments on stimulating ‘the Humor Centre’ of the…

Stuart Armstrong on AI Interpretability, Accidental Misalignment & Risks of Opaque AI

Interview with Stuart Armstrong (Aligned AI). Video / audio of the interview will be up soon. To watch the interview live, join the Zoom call: Time: Nov 9, 2022, 07:30 PM Canberra, Melbourne, Sydney. Join Zoom Meeting: https://us02web.zoom.us/j/81320547208?pwd=MGFnZ2RGcFl5cW9aZ1BaUm5qcnh1UT09 Meeting ID: 813 2054 7208. Passcode: scifuture. Auditing and interpreting AI (and their models) seems obviously important to achieve verifiably safe AI (by…

Hard-line Negative Utilitarianism vs Tradeoffy Classical Utilitarianism

If you are a Negative Utilitarian (NU), you are morally obliged to wipe out all existence to prevent even a pin prick of suffering; but if you are a Classical Utilitarian (CU), you are morally obliged to accept trade-offs between suffering and bliss as long as the bliss outweighs the suffering. NU doesn’t accept trade-offs, it’s…

The Future of Consciousness – Andrés Gómez Emilsson

Synopsis: In this talk we articulate a positive vision of the future that is both viable given what we know, and also utterly radical in its implications. We introduce two key insights that, when taken together, synergize in powerful ways. Namely, (a) the long-tails of pleasure and pain, and (b) the correlation between wellbeing, productivity,…

Panel: AGI Architectures & Trustworthy AGI

State-of-the-art deep learning models do not really embody understanding. This panel will focus on AGI transparency, auditability, and explainability; the differences between causal understanding and prediction; as well as surrounding practical, systemic, and ethical issues. Can the black-box problem be fully solved without machine understanding (the AI actually ‘understanding’ rather than…

Panel: Long Term Futures

Why should we prioritize improving the long-term future? Longtermism is an ethical stance that motivates the reduction of existential risks such as nuclear war, engineered pandemics, and emerging technologies like AI and nanotechnology. Sigal Samuel summarizes the key argument for longtermism as follows: “future people matter morally just as much as people alive today; (…) there…

What do we Need to Do to Align AI? – Stuart Armstrong

Synopsis: The goal of Aligned AI is to implement scalable solutions to the alignment problem, and distribute these solutions to actors developing powerful transformative artificial intelligence. What is Alignment? Algorithms are shaping the present and will shape the future ever more strongly. It is crucially important that these powerful algorithms be aligned – that they…

Agency in an Age of Machines – Joscha Bach

This talk is part of the ‘Stepping Into the Future’ conference. Synopsis: The arrival of Homo sapiens on Earth amounted to a singularity for its ecosystems, a transition that dramatically changed the distribution and interaction of living species within a relatively short amount of time. Such transitions are not unprecedented during the evolution of life,…