The AI Safety Dynamic – Dr Simon Goldstein

Dr Simon Goldstein is an associate professor at the Dianoia Institute of Philosophy at ACU. In 2023, he is a research fellow at the Center for AI Safety. Simon’s research focuses on AI safety, epistemology, and philosophy of language. Before ACU, Simon was an assistant professor at Lingnan University in Hong Kong. Simon received his…

The Unfolding Mysteries of the Cosmos: A Reflection on Seneca’s Vision

The time will come when diligent research over long periods will bring to light things which now lie hidden. A single lifetime, even though entirely devoted to the sky, would not be enough for the investigation of so vast a subject… And so this knowledge will be unfolded only through long successive ages. There will come a time when our descendants will be amazed that we did not know things that are so plain to them… Many discoveries are reserved for ages still to come, when memory of us will have been effaced.

AI & the Faustian Bargain with Technological Change – A. C. Grayling

With AI we have made a ‘Faustian bargain’ – are the risks reason enough to halt technological progress? Professor A. C. Grayling discusses the implications of machine learning, artificial intelligence, and robotics on society, and shares his views on how they will reshape the future. The thing that’s going to change everything is machine learning,…

Stuart Russell – AI Ethics – Provably Beneficial Artificial Intelligence

Delighted to have Stuart Russell on video discussing the importance of AI Alignment – achieving friendly Strong AI that is provably beneficial. Points of discussion: a clash of intuitions about the beneficiality of Strong Artificial Intelligence; the Value Alignment problem; Basic AI Drives: any objective generates sub-goals; Aggregated Volition: how does an AI optimise for many…

ChatGPT on issues relating to the Hedonistic Imperative

Here, Adam Ford interviews OpenAI’s ChatGPT. Adam Ford: What is the anatomical basis for raw feels? ChatGPT: I’m not exactly sure what you mean by “raw feels.” Could you please provide some more context or clarify your question? Adam Ford: By “raw feels” I mean pleasure and suffering. ChatGPT: Pleasure and suffering are subjective experiences,…

The Anatomy of Happiness – David Pearce

David Pearce in interview on The Anatomy of Happiness. While researching epilepsy, neuroscientist Itzhak Fried stumbled on a ‘mirth’ center in the brain – given this, what ought we be doing to combat extreme suffering and promote wellbeing? 0:00 Mastery of reward circuitry 0:25 Itzhak Fried’s experiments on stimulating ‘the Humor Centre’ of the…

Stuart Armstrong on AI Interpretability, Accidental Misalignment & Risks of Opaque AI

Interview with Stuart Armstrong (Aligned AI). Video / audio of the interview will be up soon. To watch the interview live, join the zoom call: Time: Nov 9, 2022 07:30 PM Canberra, Melbourne, Sydney. Join Zoom Meeting: https://us02web.zoom.us/j/81320547208?pwd=MGFnZ2RGcFl5cW9aZ1BaUm5qcnh1UT09 Meeting ID: 813 2054 7208, Passcode: scifuture. Auditing and interpreting AI (and their models) seems obviously important to achieve verifiably safe AI (by…

Panel: AGI Architectures & Trustworthy AGI

The state of the art deep learning models do not really embody understanding. This panel will focus on AGI transparency, auditability, and explainability; the differences between causal understanding and prediction; as well as surrounding practical, systemic, and ethical issues. Can the black-box problem be fully solved without machine understanding (the AI actually ‘understanding’ rather than…

Panel: Long Term Futures

Why should we prioritize improving the long-term future? Longtermism is an ethical stance that motivates the reduction of existential risks such as nuclear war, engineered pandemics, and emerging technologies like AI and nanotechnology. Sigal Samuel summarizes the key argument for longtermism as follows: “future people matter morally just as much as people alive today; (…) there…

What do we Need to Do to Align AI? – Stuart Armstrong

Synopsis: The goal of Aligned AI is to implement scalable solutions to the alignment problem, and distribute these solutions to actors developing powerful transformative artificial intelligence. What is Alignment? Algorithms are shaping the present and will shape the future ever more strongly. It is crucially important that these powerful algorithms be aligned – that they…