Stuart Armstrong on AI Interpretability, Accidental Misalignment & Risks of Opaque AI

Interview with Stuart Armstrong (Aligned AI). Video / audio of the interview will be up soon. To watch the interview live, join the Zoom call: Time: Nov 9, 2022, 07:30 PM Canberra, Melbourne, Sydney. Join Zoom Meeting: https://us02web.zoom.us/j/81320547208?pwd=MGFnZ2RGcFl5cW9aZ1BaUm5qcnh1UT09 — Meeting ID: 813 2054 7208, Passcode: scifuture. Auditing and interpreting AI (and their models) seems obviously important to achieve verifiably safe AI (by…

Response to ‘Meta’s AI guru LeCun: Most of today’s AI approaches will never lead to true intelligence’ at ZDNet

An interesting article I read today was ‘Meta’s AI guru LeCun: Most of today’s AI approaches will never lead to true intelligence‘ by Tiernan Ray. Though I think the title is a bit misleading with its reference to ‘true intelligence’. The article uses ‘human level intelligence’ as its framing, though the space of possible intelligence…

The Red Pill of Machine Learning – Monica Anderson

Synopsis: The new cognitive capabilities in our machines were the result of a shift in the way we think about problem solving. The shift is the most significant change in AI, ever, if not in science as a whole. Machine Learning based systems are now successfully attacking both simple and complex problems using these novel Methods. We are…

The Future of Consciousness – Andrés Gómez Emilsson

Synopsis: In this talk we articulate a positive vision of the future that is both viable given what we know, and also utterly radical in its implications. We introduce two key insights that, when taken together, synergize in powerful ways. Namely, (a) the long-tails of pleasure and pain, and (b) the correlation between wellbeing, productivity,…

Panel: AGI Architectures & Trustworthy AGI

State-of-the-art deep learning models do not really embody understanding. This panel will focus on AGI transparency, auditability and explainability, the differences between causal understanding and prediction, as well as surrounding practical / systemic / ethical issues. Can the black box problem be fully solved without machine understanding (the AI actually ‘understanding’ rather than…

Panel: Long Term Futures

Why should we prioritize improving the long-term future? Longtermism is an ethical stance that motivates the reduction of existential risks such as nuclear war, engineered pandemics, and emerging technologies like AI and nanotechnology. Sigal Samuel summarizes the key argument for longtermism as follows: “future people matter morally just as much as people alive today; (…) there…

What do we Need to Do to Align AI? – Stuart Armstrong

Synopsis: The goal of Aligned AI is to implement scalable solutions to the alignment problem, and distribute these solutions to actors developing powerful transformative artificial intelligence. What is Alignment? Algorithms are shaping the present and will shape the future ever more strongly. It is crucially important that these powerful algorithms be aligned – that they…

Agency in an Age of Machines – Joscha Bach

This talk is part of the ‘Stepping Into the Future‘ conference. Synopsis: The arrival of homo sapiens on Earth amounted to a singularity for its ecosystems, a transition that dramatically changed the distribution and interaction of living species within a relatively short amount of time. Such transitions are not unprecedented during the evolution of life,…