The news that Ukraine is using facial recognition software to identify Russian assailants and Ukrainians killed in the ongoing war is noteworthy largely because it is one of the few documented uses of artificial intelligence in the conflict. A Georgetown University think tank is trying to figure out why, while advising U.S. policymakers on the risks of AI.
The CEO of the controversial American facial recognition company Clearview AI told Reuters that Ukraine’s defense ministry began using its imaging software Saturday after Clearview offered it for free. The reportedly powerful recognition tool relies on artificial intelligence algorithms and a massive quantity of image training data scraped from social media and the internet.
But aside from Russian influence campaigns, with their much-discussed deepfakes and misinformation-spreading bots, the apparent lack of tactical AI use by the Russian military (at least publicly known) has surprised many observers. Andrew Lohn isn't one of them.