Posts

The Age of A.I. – empowering narratives or accurate forecasts?

An interesting new series, ‘The Age of A.I.’, narrated by Robert Downey Jr. This first episode looks at how AIs interact with humans (‘Affective Computing’): object recognition, NLP attempting to simulate human emotion, digital avatars that work like agents for us – similar to John M. Smart’s idea of a digital twin (I kept thinking he would suddenly appear and start narrating) – and robotic arms.

In a lot of discussion around AI I see what seem like attempts to soothe people’s fears about AI, pandering to our need to feel relevant or unique. A dichotomy is set up by taking an extreme position (like ‘superintelligence already exists’) and portraying it as a silly misconception, then offering an attractive alternative that takes the edge off and sometimes even empowers us – like ‘AI is a simulation of us’, or Gil Weinberg saying ‘AI augments us, it’s not going to replace us, AI will enhance us’ as opposed to ‘AI will overtake us or replace us’. This dismisses more nuanced scenarios that look like combinations of the above dichotomies, i.e. ‘AI simulating us’ as well as ‘AI innovating outside of anthropocentric design’, or ‘AI augmenting us’ as well as ‘AI surpassing us’… do we really need to wait to see AI smashing every ball out of the park before we admit that it can outperform us and do things we can’t?

Another part that stuck out for me, and seems partly true, is when Dr Ayanna Howard brings up:
1) a misconception that AGI / superintelligence exists (now)… I agree, but I’d add ‘not yet’. For what it’s worth, I’ve argued elsewhere that rather than think of generality in AGI as either on or off, there are degrees of generality – and our future selves may, in hindsight, look back at the current trends in AI and be able to pinpoint with confidence small but apparent gradients of generality in some projects.
2) and then goes on to say that AI is basically a simulation of us humans… some of it attempts to be, but a lot of AI isn’t – it’s alien. It’s obvious that some projects are not trying to replicate the way humans compute intelligence. This claim seems wrongheaded.

Here is episode 1: “How far is too far?”

> “Can A.I. make music? Can it feel excitement and fear? Is it alive? Will.i.am and Mark Sagar push the limits of what a machine can do. How far is too far, and how much further can we go?”

Here is the trailer:

The YouTube series so far seems like a documentary to me, and though its purpose may not be to be as accurate and intellectually honest as possible, but rather to be somewhat accurate, make people feel empowered, and avoid causing a panic – I feel that if we head into the future somewhat blinkered, clinging to empowering narratives, then we may be blindsided when the reality of AI kicks in, in whatever form it actually takes.

Well, maybe narratives are the easiest way for humans to process information – we aren’t unboundedly rational machines ourselves, and we are inherently bad at thinking about some things – but in order to avoid the narrative trap, it seems that with some critical thinking skills to discern the world through lenses outside of narrative space, we can, we are, and we should continue to make headway.

— Adam Ford