Response to ‘Meta’s AI guru LeCun: Most of today’s AI approaches will never lead to true intelligence’ at ZDNet

An interesting article I read today was ‘Meta’s AI guru LeCun: Most of today’s AI approaches will never lead to true intelligence’ by Tiernan Ray.
Though I think the title is a bit misleading with its reference to ‘true intelligence’.

The article uses ‘human-level intelligence’ as its framing, but the space of possible intelligences is huge – so the notion of ‘intelligence’ should relate to an abstract ‘intelligence possibility space’, not be tightly coupled to any particular species or a particular version of it.
> [current AI / transformer nets are] “missing essential pieces” – to be like human intelligence. It would be great to know what those pieces are.
> “religious probabilists” … I think LeCun is on point that some current trends in AI rely heavily on probability and are experiencing bottlenecks – there may be far more efficient means of achieving capability gains through causal learning or some such.

Though it’s surprising how much new capability emerges when you simply add a lot more parameters for transformer nets to absorb.
> “Ultimately, there’s going to be a more satisfying and possibly better solution that involves systems that do a better job of understanding the way the world works.”

I agree with the wording of this statement, but it would have been more useful with an accompanying definition of ‘understanding’.
I think Henk W. de Regt and associates have been doing good work in relation to ‘scientific understanding’ – perhaps the AI field can take inspiration from this work?