Stuart Russell – AI Ethics – Provably Beneficial Artificial Intelligence

Delighted to have Stuart Russell on video discussing the importance of AI Alignment – achieving friendly Strong AI that is provably beneficial. Points of discussion: a clash of intuitions about the beneficiality of Strong Artificial Intelligence; the Value Alignment problem; Basic AI Drives: any objective generates sub-goals; Aggregated Volition: how does an AI optimise for many…

ChatGPT on issues relating to the Hedonistic Imperative

Here, Adam Ford interviews OpenAI’s ChatGPT. Adam Ford: What is the anatomical basis for raw feels? ChatGPT: I’m not exactly sure what you mean by “raw feels.” Could you please provide some more context or clarify your question? Adam Ford: By “raw feels” I mean pleasure and suffering. ChatGPT: Pleasure and suffering are subjective experiences,…

Stuart Armstrong on AI Interpretability, Accidental Misalignment & Risks of Opaque AI

Interview with Stuart Armstrong (Aligned AI). Video / audio of the interview will be up soon. To watch the interview live, join the Zoom call: Time: Nov 9, 2022 07:30 PM Canberra, Melbourne, Sydney. Join Zoom Meeting: https://us02web.zoom.us/j/81320547208?pwd=MGFnZ2RGcFl5cW9aZ1BaUm5qcnh1UT09 Meeting ID: 813 2054 7208. Passcode: scifuture. Auditing and interpreting AI (and their models) seems obviously important to achieve verifiably safe AI (by…

Response to ‘Meta’s AI guru LeCun: Most of today’s AI approaches will never lead to true intelligence’ at ZDNet

An interesting article I read today was ‘Meta’s AI guru LeCun: Most of today’s AI approaches will never lead to true intelligence‘ by Tiernan Ray. Though I think the title is a bit misleading with its reference to ‘true intelligence’. The article uses ‘human level intelligence’ as its framing, though the space of possible intelligence…

The Red Pill of Machine Learning – Monica Anderson

Synopsis: The new cognitive capabilities in our machines were the result of a shift in the way we think about problem solving. The shift is the most significant change in AI, ever, if not in science as a whole. Machine Learning based systems are now successfully attacking both simple and complex problems using these novel Methods. We are…

What do we Need to Do to Align AI? – Stuart Armstrong

Synopsis: The goal of Aligned AI is to implement scalable solutions to the alignment problem, and distribute these solutions to actors developing powerful transformative artificial intelligence. What is Alignment? Algorithms are shaping the present and will shape the future ever more strongly. It is crucially important that these powerful algorithms be aligned – that they…

Agency in an Age of Machines – Joscha Bach

This talk is part of the ‘Stepping Into the Future‘ conference. Synopsis: The arrival of homo sapiens on Earth amounted to a singularity for its ecosystems, a transition that dramatically changed the distribution and interaction of living species within a relatively short amount of time. Such transitions are not unprecedented during the evolution of life,…

Causal Incentives and Safe AGI – Tom Everitt

This talk is part of the ‘Stepping Into the Future‘ conference. Synopsis: Along with the many benefits of powerful machine learning methods come significant challenges. For example, as content recommendation algorithms become increasingly competent at satisfying user preferences, they may also become more competent at manipulating human preferences, to make the preferences more easily satisfiable. These kinds of alignment…