The AI Safety Dynamic – Dr Simon Goldstein

Dr Simon Goldstein is an associate professor at the Dianoia Institute of Philosophy at ACU. In 2023 he is also a research fellow at the Center for AI Safety. Simon’s research focuses on AI safety, epistemology, and philosophy of language. Before ACU, Simon was an assistant professor at Lingnan University in Hong Kong. Simon received his…

Leslie Allan – Postmodernism & Relativism are Wrong

Postmodernism and relativism buckle under their own contradiction: they passionately assert the objective truth that there exists no objective truth. This very paradox embodies a self-negating precept, a construct that dismantles itself from within. They champion the idea of relative truths, yet in this declaration, they inadvertently sculpt an overarching meta-narrative, contradicting the foundational principle…

Leslie Allan – Progressive vs Degenerative Research Programmes

Our scientific observations are not pristine windows to reality; they are tinted by the theoretical frameworks and conceptual schemes that scaffold our interpretations, often smuggling in with them implicit assumptions of the very thesis they’re meant to support. We don’t simply absorb the world as it is; rather, we interpret it through the filter of…

Leslie Allan: The Theory-Ladenness of Observation

Our scientific observations are not pristine windows to reality; they are tinted by the theoretical frameworks and conceptual schemes that scaffold our interpretations, often smuggling in with them implicit assumptions of the very thesis they’re meant to support. We don’t simply absorb the world as it is; rather, we interpret it through the filter of…

AI & the Faustian Bargain with Technological Change – A. C. Grayling

With AI we have made a ‘Faustian bargain’ – are the risks reason enough to halt technological progress? Professor A. C. Grayling discusses the implications of machine learning, artificial intelligence, and robotics on society, and shares his views on how they will reshape the future. The thing that’s going to change everything is machine learning,…

Stuart Russell – AI Ethics – Provably Beneficial Artificial Intelligence

Delighted to have Stuart Russell on video discussing the importance of AI Alignment – achieving friendly Strong AI that is provably beneficial. Points of discussion: a clash of intuitions about the beneficiality of Strong Artificial Intelligence; the Value Alignment problem; Basic AI Drives: any objective generates sub-goals; Aggregated Volition: how does an AI optimise for many…

The Anatomy of Happiness – David Pearce

David Pearce in interview on The Anatomy of Happiness. While researching epilepsy, neuroscientist Itzhak Fried stumbled on a ‘mirth’ centre in the brain – given this, what ought we to be doing to combat extreme suffering and promote wellbeing? 0:00 Mastery of reward circuitry; 0:25 Itzhak Fried’s experiments on stimulating ‘the Humor Centre’ of the…

Stuart Armstrong on AI Interpretability, Accidental Misalignment & Risks of Opaque AI

Interview with Stuart Armstrong (Aligned AI). Video/audio of the interview will be up soon. To watch the interview live, join the Zoom call: Time: Nov 9, 2022, 07:30 PM Canberra, Melbourne, Sydney. Join Zoom Meeting: https://us02web.zoom.us/j/81320547208?pwd=MGFnZ2RGcFl5cW9aZ1BaUm5qcnh1UT09 Meeting ID: 813 2054 7208. Passcode: scifuture. Auditing and interpreting AI systems (and their models) seems obviously important to achieve verifiably safe AI (by…

Panel: AGI Architectures & Trustworthy AGI

State-of-the-art deep learning models do not really embody understanding. This panel will focus on AGI transparency, auditability, and explainability; the differences between causal understanding and prediction; and the surrounding practical, systemic, and ethical issues. Can the black-box problem be fully solved without machine understanding (the AI actually ‘understanding’ rather than…

Foresight Superpowers – an interview with John Smart

Anticipating, Creating, & Leading the Accelerating Future. John Smart gave an outline of topics in his new book ‘Introduction to Foresight: Personal, Team, and Organizational Adaptiveness’. John will also speak at the upcoming conference ‘Stepping into the Future’ – his talk will be ‘The Goodness of the Universe: Outer Space, Inner Space, and…