
Panel: AGI Architectures & Trustworthy AGI

State-of-the-art deep learning models do not truly embody understanding.

This panel will focus on AGI transparency, auditability, and explainability; the differences between causal understanding and prediction; and the surrounding practical, systemic, and ethical issues.

Can the black-box problem be fully solved without machine understanding (the AI actually ‘understanding’ rather than merely making predictions across massive datasets)?

Will add-on explanation modules be enough to make AI trustworthy?

Can imitation become understanding? Or do we need to develop an entirely different approach to AI?

Panelists include AI experts Ben Goertzel, Joscha Bach, and Monica Anderson.
