How can we verify users are genuine in an online world?
Online identity theft has risen steadily in recent years, while the generative AI boom has driven a new wave of deepfake-powered attacks that threaten remote enrollment and identity security. Meanwhile, bias in biometric systems has long been documented; it varies significantly across solutions and affects consumer trust in, and perception of, the technology.
As adoption of remote identity verification technology rises, two critical challenges need to be addressed: bias and new security threats.
The FIDO Alliance recently sponsored an independent study of 2,000 respondents across the U.S. and the U.K. to understand consumer perceptions of remote identity verification, online security, and biometrics. This eBook shares those insights on remote biometric face verification, including how many people have used the technology successfully, along with their opinions on verification accuracy, potential discrimination, and concerns about deepfakes.
Key findings
- Consumers want to make greater use of biometrics to verify themselves online, especially in sensitive use cases like financial services, where nearly one in two people said they would use biometric technology (48%).
- One in four feel they regularly experience discrimination when using automated facial biometric systems (25%).
- Equity in biometric systems is vital to trust: half of respondents said they would lose trust in a brand or institution found to have a biased biometric system (50%), and one in five said they would stop using the service entirely (22%).
- Over half of respondents are concerned about deepfakes when verifying identities online (52%).
Read the full survey results in this eBook to learn about consumers’ experiences with biometric face verification technologies, and discover how organizations can improve global digital access, authentication, and security by leveraging remote identity verification.