AI Uses Potentially Dangerous “Shortcuts” To Solve Complex Recognition Tasks

Research from York University finds that even the smartest AI cannot match humans’ visual processing.

Deep convolutional neural networks (DCNNs) do not view things in the same way that humans do (through configural shape perception), which might be harmful in real-world AI applications. This is according to Professor James Elder, co-author of a York University study recently published in the journal iScience.

The study, conducted by Elder, who holds the York Research Chair in Human and Computer Vision and is Co-Director of York’s Centre for AI & Society, and Nicholas Baker, an assistant psychology professor at Loyola College in Chicago and a former VISTA postdoctoral fellow at York, finds that deep learning models fail to capture the configural nature of human shape perception.

To investigate how the human brain and DCNNs perceive holistic, configural object properties, the researchers used novel visual stimuli known as “Frankensteins.”

“Frankensteins are simply objects that have been taken apart and put back together the wrong way around,” says Elder. “As a result, they have all the right local features, but in the wrong places.”
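The “taken apart and put back together the wrong way” idea can be mimicked in code. Below is a minimal sketch assuming an image stored as a NumPy array; `make_frankenstein` is a hypothetical helper for illustration, not the procedure from the paper, whose actual stimuli were carefully constructed line drawings of animals:

```python
import numpy as np

def make_frankenstein(img: np.ndarray) -> np.ndarray:
    """Crude 'Frankenstein' analogue: split an image at its horizontal
    midline and swap the two halves.  The local features (pixels, edges,
    textures) are all preserved, but their global configuration is not.
    """
    h = img.shape[0] // 2
    top, bottom = img[:h], img[h:2 * h]
    return np.concatenate([bottom, top], axis=0)

# Toy 4x4 "image" to demonstrate the property
img = np.arange(16).reshape(4, 4)
frank = make_frankenstein(img)

# Same local content (identical set of pixel values)...
assert sorted(img.ravel()) == sorted(frank.ravel())
# ...but a different global arrangement.
assert not np.array_equal(img, frank)
```

A network that classifies `img` and `frank` identically would, by this crude measure, be relying on local features rather than on how the parts are configured, which is the insensitivity the study probes.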

The researchers discovered that while Frankensteins confuse the human visual system, they do not confuse DCNNs, revealing the networks’ insensitivity to configural object properties.

“Our results explain why deep AI models fail under certain conditions and point to the need to consider tasks beyond object recognition in order to understand visual processing in the brain,” Elder says. “These deep models tend to take ‘shortcuts’ when solving complex recognition tasks. While these shortcuts may work in many cases, they can be dangerous in some of the real-world AI applications we are currently working on with our industry and government partners.”

One such application is traffic video safety systems: “The objects in a busy traffic scene – the vehicles, bicycles, and pedestrians – obstruct each other and arrive at the eye of a driver as a jumble of disconnected fragments,” explains Elder. “The brain needs to correctly group those fragments to identify the correct categories and locations of the objects. An AI system for traffic safety monitoring that is only able to perceive the fragments individually will fail at this task, potentially misunderstanding risks to vulnerable road users.”

Published by
Veille-cyber
