Vernor Vinge on the Turing Test, Artificial Intelligence

On the coat-tails of the blockbuster film “The Imitation Game”, I saw quite a bit of buzz on the internet about Alan Turing and the Turing Test.  The title of the movie refers to the idea that the Turing Test may someday show that machines would ostensibly be (at least in controlled circumstances) indistinguishable…

ChatGPT on issues relating to the Hedonistic Imperative

Here, Adam Ford interviews OpenAI’s ChatGPT. Adam Ford: What is the anatomical basis for raw feels? ChatGPT: I’m not exactly sure what you mean by “raw feels.” Could you please provide some more context or clarify your question? Adam Ford: By “raw feels” I mean pleasure and suffering. ChatGPT: Pleasure and suffering are subjective experiences,…

The Anatomy of Happiness – David Pearce

David Pearce in an interview on The Anatomy of Happiness. While researching epilepsy, neuroscientist Itzhak Fried stumbled on a ‘mirth’ center in the brain – given this, what ought we to be doing to combat extreme suffering and promote wellbeing? 0:00 Mastery of reward circuitry 0:25 Itzhak Fried’s experiments on stimulating ‘the Humor Centre’ of the…

Stuart Armstrong on AI Interpretability, Accidental Misalignment & Risks of Opaque AI

Interview with Stuart Armstrong (Aligned AI). Video / audio of the interview will be up soon. To watch the interview live, join the Zoom call: Time: Nov 9, 2022, 07:30 PM Canberra, Melbourne, Sydney. Join Zoom Meeting: https://us02web.zoom.us/j/81320547208?pwd=MGFnZ2RGcFl5cW9aZ1BaUm5qcnh1UT09 Meeting ID: 813 2054 7208, Passcode: scifuture. Auditing and interpreting AI (and their models) seems obviously important to achieve verifiably safe AI (by…

Response to ‘Meta’s AI guru LeCun: Most of today’s AI approaches will never lead to true intelligence’ at ZDNet

An interesting article I read today was ‘Meta’s AI guru LeCun: Most of today’s AI approaches will never lead to true intelligence‘ by Tiernan Ray, though I think the title is a bit misleading with its reference to ‘true intelligence’. The article uses ‘human-level intelligence’ as its framing, though the space of possible intelligence…

The Red Pill of Machine Learning – Monica Anderson

Synopsis: The new cognitive capabilities in our machines were the result of a shift in the way we think about problem solving. The shift is the most significant change in AI, ever, if not in science as a whole. Machine Learning based systems are now successfully attacking both simple and complex problems using these novel Methods. We are…

The Future of Consciousness – Andrés Gómez Emilsson

Synopsis: In this talk we articulate a positive vision of the future that is both viable given what we know, and also utterly radical in its implications. We introduce two key insights that, when taken together, synergize in powerful ways. Namely, (a) the long-tails of pleasure and pain, and (b) the correlation between wellbeing, productivity,…

Panel: AGI Architectures & Trustworthy AGI

State-of-the-art deep learning models do not really embody understanding. This panel will focus on AGI transparency, auditability and explainability, the differences between causal understanding and prediction, as well as the surrounding practical / systemic / ethical issues. Can the black-box problem be fully solved without machine understanding (the AI actually ‘understanding’ rather than…

Panel: Long Term Futures

Why should we prioritize improving the long-term future? Longtermism is an ethical stance that motivates the reduction of existential risks such as nuclear war, engineered pandemics, and risks from emerging technologies like AI and nanotechnology. Sigal Samuel summarizes the key argument for longtermism as follows: “future people matter morally just as much as people alive today; (…) there…