J. Dmitri Gallow – AI Interpretability, Orthogonality, Instrumental Convergence & Divergence

J. Dmitri Gallow discusses the principles of instrumental convergence and divergence in AI. Two concepts are central: the orthogonality thesis, which holds that intelligence and desire are independent, and the instrumental convergence thesis, which holds that intelligent beings will tend to have similar instrumental desires. Gallow’s argument focuses on instrumental divergence, which emerges from the complexity and unpredictability of an AI’s actions given its desires.

David Pearce – Effective Altruism – Phasing Out Suffering

This interview was conducted in 2012 in San Francisco. In the future we may see that it is not ethically responsible to play genetic roulette, and instead take the decision to have happy, healthy, pro-social offspring.
0:00 Introduction
0:36 Alleviating Suffering
7:00 Justified Suffering?
13:12 Buddhism
14:42 The World Transhumanist Association
22:35 Recalibration of Society or Biology?…

Exploring the Frontiers of AI with David Quarel: Emerging Capabilities, Interpretability, and Future Impacts

David Quarel, a Ph.D. student at the Australian National University, is deeply involved in the field of AI, specifically focusing on AI safety and reinforcement learning. He works under the guidance of Marcus Hutter and is currently engaged in studying Hutter’s Universal AI model. This model is an ambitious attempt to define intelligence through the…

Effective Policy Advocacy – Interview with Greg Sadler

Greg Sadler, the CEO of Good Ancestors Policy, is working to help members of communities in Australia advocate for the positions that they think are important and the policies that they value the most. Greg has over 10 years’ experience in the Australian Public Service, including at the Department of the Prime Minister and Cabinet,…

Digital Twins in Healthcare and as Cyber Butlers

The concept of digital twins combined with engineering simulation is an exciting framework for professionals working to improve medical devices in biomedicine. A digital twin acts as a virtual representation or digital replica of a physical object, system, or process, which can include parts of the human body. Digital twins can be applied to medical devices…

The AI Safety Dynamic – Dr Simon Goldstein

Dr Simon Goldstein is an associate professor at the Dianoia Institute of Philosophy at ACU. In 2023 he is a research fellow at the Center for AI Safety. Simon’s research focuses on AI safety, epistemology, and philosophy of language. Before ACU, Simon was an assistant professor at Lingnan University in Hong Kong. Simon received his…

Stuart Russell – AI Ethics – Provably Beneficial Artificial Intelligence

Delighted to have Stuart Russell on video discussing the importance of AI Alignment – achieving friendly Strong AI that is provably beneficial. Points of discussion:
A clash of intuitions about the beneficiality of Strong Artificial Intelligence
The Value Alignment problem
Basic AI Drives: Any objective generates sub-goals
Aggregated Volition: How does an AI optimise for many…

The Anatomy of Happiness – David Pearce

David Pearce in interview on The Anatomy of Happiness. While researching epilepsy, neuroscientist Itzhak Fried stumbled on a ‘mirth’ centre in the brain – given this, what ought we to be doing to combat extreme suffering and promote wellbeing?
0:00 Mastery of reward circuitry
0:25 Itzhak Fried’s experiments on stimulating ‘the Humor Centre’ of the…

Stuart Armstrong on AI Interpretability, Accidental Misalignment & Risks of Opaque AI

Interview with Stuart Armstrong (Aligned AI). Video / audio of the interview will be up soon. To watch the interview live, join the Zoom call:
Time: Nov 9, 2022 07:30 PM Canberra, Melbourne, Sydney
Join Zoom Meeting: https://us02web.zoom.us/j/81320547208?pwd=MGFnZ2RGcFl5cW9aZ1BaUm5qcnh1UT09
Meeting ID: 813 2054 7208
Passcode: scifuture
Auditing and interpreting AI (and their models) seems obviously important to achieve verifiably safe AI (by…