J. Dmitri Gallow – AI Interpretability, Orthogonality, Instrumental Convergence & Divergence

J. Dmitri Gallow discusses the principles of instrumental convergence and divergence in AI. Two concepts are central: the orthogonality thesis, which states that intelligence and desire are independent, and the instrumental convergence thesis, which suggests that intelligent beings will tend to have similar instrumental desires. Gallow’s argument focuses on instrumental divergence, which emerges from the complexity and unpredictability of an AI’s actions given its desires.


James Hughes on the Economic Impacts of Artificial General Intelligence

The following is an enlightening session with James Hughes, Associate Provost at the University of Massachusetts Boston and Director of the Institute for Ethics and Emerging Technologies (IEET), in which we delve into the intricate world of Artificial General Intelligence (AGI) and its profound economic implications. In this interview, Hughes, a renowned expert in the field, sheds…

Exploring the Frontiers of AI with David Quarel: Emerging Capabilities, Interpretability, and Future Impacts

David Quarel, a Ph.D. student at the Australian National University, is deeply involved in the field of AI, specifically focusing on AI safety and reinforcement learning. He works under the guidance of Marcus Hutter and is currently engaged in studying Hutter’s Universal AI model. This model is an ambitious attempt to define intelligence through the…

Effective Policy Advocacy – Interview with Greg Sadler

Greg Sadler, the CEO of Good Ancestors Policy, works to help community members in Australia advocate for the positions they think are important and the policies they value most. Greg has over 10 years’ experience in the Australian Public Service, including at the Department of the Prime Minister and Cabinet,…

Digital Twins in Healthcare and as Cyber Butlers

The concept of digital twins combined with engineering simulation is an exciting framework for professionals working to improve medical devices in biomedicine. A digital twin acts as a virtual representation or digital replica of a physical object, system, or process, which can include parts of the human body. Digital twins can be applied to medical devices…

The AI Safety Dynamic – Dr Simon Goldstein

Dr Simon Goldstein is an associate professor at the Dianoia Institute of Philosophy at ACU. In 2023 he is also a research fellow at the Center for AI Safety. Simon’s research focuses on AI safety, epistemology, and philosophy of language. Before ACU, Simon was an assistant professor at Lingnan University in Hong Kong. Simon received his…

AI & the Faustian Bargain with Technological Change – A. C. Grayling

With AI we have made a ‘Faustian bargain’ – are the risks reason enough to halt technological progress? Professor A. C. Grayling discusses the implications of machine learning, artificial intelligence, and robotics for society, and shares his views on how they will reshape the future. The thing that’s going to change everything is machine learning,…

Stuart Russell – AI Ethics – Provably Beneficial Artificial Intelligence

Delighted to have Stuart Russell on video discussing the importance of AI Alignment – achieving friendly Strong AI that is provably beneficial. Points of discussion: a clash of intuitions about the beneficiality of Strong Artificial Intelligence; the Value Alignment problem; Basic AI Drives: any objective generates sub-goals; Aggregated Volition: how does an AI optimise for many…

ChatGPT on issues relating to the Hedonistic Imperative

Here, Adam Ford interviews OpenAI’s ChatGPT. Adam Ford: What is the anatomical basis for raw feels? ChatGPT: I’m not exactly sure what you mean by “raw feels.” Could you please provide some more context or clarify your question? Adam Ford: By “raw feels” I mean pleasure and suffering. ChatGPT: Pleasure and suffering are subjective experiences,…