‘Retrospective learning’ rests on the assumption that the future is an extension of the past. An intelligent system may learn to name certain objects if it is shown pictures of them alongside their names, and a model that relies on retrospective learning will then be able to recognise and name further pictures of those same objects. It will not, however, be able to name previously unencountered objects.
A paper published earlier this year argued that retrospective learning isn’t a good representation of true intelligence. According to the study, which was supported by Microsoft Research and DARPA, learning needs to be future-oriented to solve problems in the real world. Accordingly, both natural intelligence (NI) and AI have to take an unknown future into account: their internal models have to adapt to naming new objects and using them in new contexts. This is called ‘prospective learning.’
Prospective learning is important because many critical problems are novel experiences that come with little information, negligible probability, and high consequences. Unfortunately, such problems precipitate the downfall of AI systems, as when medical diagnosis systems fail to detect diseases underrepresented in the samples used to train them. The challenge for intelligent systems, therefore, is to distinguish novel experiences, discern the potentially complex ways in which they connect to past experiences, and then act accordingly.
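To make the limitation concrete, here is a minimal sketch, not drawn from the paper itself, of why a purely retrospective, closed-set classifier cannot name something it never saw during training. The class names, feature vectors, and nearest-centroid rule below are illustrative assumptions rather than the study’s actual models or data.

```python
# Illustrative sketch of retrospective (closed-set) learning.
# All class names, data, and the decision rule are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

# "Past": labelled examples of two known objects, as simple 2-D feature vectors.
known_classes = ["cat", "dog"]
train = {
    "cat": rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2)),
    "dog": rng.normal(loc=[3.0, 3.0], scale=0.3, size=(50, 2)),
}
centroids = {name: pts.mean(axis=0) for name, pts in train.items()}

def retrospective_predict(x):
    """Nearest-centroid rule: the answer is always one of the trained names."""
    return min(known_classes, key=lambda c: np.linalg.norm(x - centroids[c]))

# "Future": a previously unencountered object (say, a rabbit) far from both classes.
novel_object = np.array([3.0, -3.0])
print(retrospective_predict(novel_object))  # forced to answer "cat" or "dog", never "rabbit"
```

A prospective learner, by contrast, would need some way to flag inputs that fall outside its past experience and relate them to what it already knows, rather than forcing every new object into an existing label.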