UNIVERSITY PARK, Pa. — Mobile devices use facial recognition technology to help users quickly and securely unlock their phones, make financial transactions or access medical records. But facial recognition technologies that employ a specific user-detection method are highly vulnerable to deepfake-based attacks that could lead to significant security concerns for users and applications, according to new research involving the Penn State College of Information Sciences and Technology.
The researchers found that most application programming interfaces that use facial liveness verification — a feature of facial recognition technology that uses computer vision to confirm the presence of a live user — don’t always detect digitally altered photos or videos of individuals made to look like a live version of someone else, also known as deepfakes. Applications that do use these detection measures are also significantly less effective at identifying deepfakes than their providers claim.
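To make the attack surface concrete, the sketch below shows what probing such a liveness-verification API might look like in Python. Everything here is hypothetical and for illustration only: the endpoint URL, API key, response fields and file names are placeholders, not the interface of any vendor evaluated in the study.

```python
import requests

# Hypothetical liveness-verification endpoint and API key. The study
# evaluated real commercial APIs, each with its own interface; these
# names are placeholders, not a real service.
API_URL = "https://api.example-vendor.com/v1/liveness/verify"
API_KEY = "YOUR_API_KEY"


def check_liveness(video_path: str) -> bool:
    """Submit a video clip to the liveness API and return its verdict.

    A liveness check is meant to confirm that a live user is present in
    front of the camera; the research found that a convincingly
    synthesized deepfake clip can often pass in place of real footage.
    """
    with open(video_path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"video": f},
            timeout=30,
        )
    resp.raise_for_status()
    # Assumed response shape: {"is_live": true, "confidence": 0.97}
    return resp.json()["is_live"]


if __name__ == "__main__":
    # Comparing a genuine selfie video with a deepfake of the same face
    # illustrates the failure mode: both may be accepted as "live".
    for clip in ("genuine_selfie.mp4", "deepfake_of_victim.mp4"):
        verdict = "accepted" if check_liveness(clip) else "rejected"
        print(f"{clip}: {verdict}")
```

In this framing, a secure API would reject the deepfake clip while accepting the genuine one; the researchers' finding is that many deployed services fail to make that distinction reliably.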
“In recent years we have observed significant development of facial authentication and verification technologies, which have been deployed in many security-critical applications,” said Ting Wang, associate professor of information sciences and technology and one of the principal investigators on the project. “Meanwhile, we have also seen substantial advances in deepfake technologies, making it fairly easy to synthesize live-looking facial images and video at little cost. We thus ask the interesting question: Is it possible for malicious attackers to misuse deepfakes to fool the facial verification systems?”
The research, which was presented this week at the USENIX Security Symposium, is the first systematic study on the security of facial liveness verification in real-world settings.