A new study shows that humans can learn new things from artificial intelligence systems and pass that knowledge on to other humans, in ways that could potentially influence wider human culture.
The study, published on Monday by a group of researchers at the Center for Humans and Machines at the Max Planck Institute for Human Development, suggests that while humans can learn from algorithms how to better solve certain problems, human biases prevented those performance improvements from lasting as long as expected. Humans tended to prefer solutions from other humans over those proposed by algorithms, because human solutions were more intuitive or less costly up front, even if the algorithmic ones paid off more in the long run.
"Digital technology already influences the processes of social transmission among people by providing new and faster means of communication and imitation," the researchers write in the study. "Going one step further, we argue that rather than a mere means of cultural transmission (such as books or the Internet), algorithmic agents and AI may also play an active role in shaping cultural evolution processes online where humans and algorithms routinely interact."
The crux of this research rests on a relatively simple question: If social learning, or the ability of humans to learn from one another, forms the basis of how humans transmit culture and solve problems collectively, what would social learning look like between humans and algorithms? Considering that scientists don't always understand, and often can't reproduce, how their own algorithms work or improve, the idea that machine learning could influence human learning, and culture itself, across generations is a frightening one.