The AI Safety Dynamic – Dr Simon Goldstein

Dr Simon Goldstein is an associate professor at the Dianoia Institute of Philosophy at ACU. In 2023 he is also a research fellow at the Center for AI Safety. Simon’s research focuses on AI safety, epistemology, and philosophy of language. Before ACU, Simon was an assistant professor at Lingnan University in Hong Kong. Simon received his…

Stuart Armstrong on AI Interpretability, Accidental Misalignment & Risks of Opaque AI

Interview with Stuart Armstrong (Aligned AI). Video/audio of the interview will be up soon. To watch the interview live, join the Zoom call: Time: Nov 9, 2022, 07:30 PM Canberra, Melbourne, Sydney. Join Zoom Meeting: https://us02web.zoom.us/j/81320547208?pwd=MGFnZ2RGcFl5cW9aZ1BaUm5qcnh1UT09 (Meeting ID: 813 2054 7208, Passcode: scifuture). Auditing and interpreting AI (and their models) seems obviously important to achieve verifiably safe AI (by…

What do we Need to Do to Align AI? – Stuart Armstrong

Synopsis: The goal of Aligned AI is to implement scalable solutions to the alignment problem, and distribute these solutions to actors developing powerful transformative artificial intelligence. What is Alignment? Algorithms are shaping the present and will shape the future ever more strongly. It is crucially important that these powerful algorithms be aligned – that they…

CLAIRE – a new European confederation for AI research

While the world wakes up to the huge potential impacts of AI, how will national worries about other nations gaining ‘AI Supremacy’ affect development? Especially development in AI ethics and safety? CLAIRE is a new European confederation for AI research. I like its self-described ‘human-centered’ focus (albeit a bit vague), but where is their…

Sam Harris on AI Implications – The Ruben Report

A transcription of Sam Harris’ discussion of the implications of strong AI during a recent appearance on the Ruben Report. Sam contrasts narrow AI with strong AI, discusses AI safety and the possibility of rapid AI self-improvement, notes that the idea of AI superintelligence may seem alien to us, and also brings up the idea that it is important…

AI: The Story So Far – Stuart Russell

Awesome to have Stuart Russell discussing AI safety, a very important topic. For too long people have associated AI safety with the Terminator; unfortunately, the human condition seems such that people often don’t give themselves permission to take seriously non-mainstream ideas unless they see a tip of the hat from…