IN BRIEF Miscreants can easily steal someone else’s identity by tricking live facial recognition software using deepfakes, according to a new report.
Sensity AI, a startup focused on tackling identity fraud, carried out a series of mock attacks. Engineers scanned the image of someone from an ID card and mapped their likeness onto another person's face. Sensity then tested whether it could breach live facial recognition systems by tricking them into believing the pretend attacker was a real user.
So-called "liveness tests" try to authenticate identities in real time, relying on images or video streams from cameras, such as the face recognition used to unlock mobile phones. Nine out of ten vendors failed Sensity's live deepfake attacks.
Sensity did not name the companies susceptible to the deepfake attacks. "We told them, 'Look, you're vulnerable to this kind of attack,' and they said, 'We do not care,'" Francesco Cavalli, Sensity's chief operating officer, told The Verge. "We decided to publish it because we think, at a corporate level and in general, the public should be aware of these threats."