Prospective learning in AI

‘Retrospective learning’ refers to the assumption that the future is an extension of the past. An intelligent system may learn to name certain objects when it is shown pictures of them along with their names. A model that employs retrospective learning will then be able to recognise and name more pictures of those same objects, but it will not be able to name previously unencountered objects.
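The limitation above can be illustrated with a toy sketch (not from the paper; the nearest-neighbour classifier and the "cat"/"dog" labels are illustrative assumptions): a model that memorises past examples can only ever answer with labels it has already seen, so a genuinely novel object is forced into a familiar category.

```python
# Toy sketch of retrospective learning: the model memorises the past,
# so its predictions are confined to labels it has already encountered.

def train(examples):
    """examples: list of (feature_vector, label) pairs — simply memorised."""
    return list(examples)

def predict(model, x):
    """1-nearest-neighbour prediction over the memorised examples."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = min(model, key=lambda ex: dist(ex[0], x))
    return nearest[1]

model = train([((0.0, 0.0), "cat"), ((1.0, 1.0), "dog")])

print(predict(model, (0.1, 0.2)))  # a new picture of a familiar class
print(predict(model, (5.0, 5.0)))  # a genuinely novel object: still
                                   # forced to answer "cat" or "dog"
```

However far the novel input lies from anything seen in training, the model has no way to say "this is something new" — it can only map the future onto the past.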

A paper published earlier this year argued that retrospective learning is not a good representation of true intelligence. According to the study, which was supported by Microsoft Research and DARPA, learning needs to be future-oriented to solve problems in the real world. Accordingly, both natural intelligence (NI) and AI have to take an unknown future into account: their internal models have to adapt to naming new objects and using them in new contexts. This is called ‘prospective learning.’

Prospective learning is important because many critical problems are novel experiences that come with little information, negligible prior probability, and high consequences. Unfortunately, such problems precipitate the failure of AI systems, as when medical diagnosis systems cannot detect diseases that were underrepresented in the samples used to train them. The challenge for intelligent systems, therefore, is to recognise novel experiences, discern the potentially complex ways in which they connect to past experiences, and then act accordingly.
