Ten Possible Solutions to Address the Abuse of Facial Recognition
This article presents ten policy recommendations for facial recognition, addressing potential risks across three key dimensions: facial data, testing technology, and certification evaluation entities.
Core Summary of the Ten Recommendations:
Currently, facial recognition technology is increasingly integrated into daily life, with widespread applications in areas such as missing person searches, security enhancements, navigation for the visually impaired, and counter-terrorism. However, it has also raised public concerns and skepticism.
Given the numerous uncertainties and risks involved, the ten recommendations below aim to mitigate them across those three dimensions: facial data, testing technology, and certification evaluation entities.
Facial recognition refers to the process of verifying and identifying individuals in specific scenarios—whether through static images or dynamic videos—by comparing them against stored facial image databases.
Facial recognition typically consists of three steps: detecting and segmenting faces in a scene, extracting and analyzing facial features, and matching them against a database for identification.
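The third step, database matching, can be sketched in code. This is a minimal illustration, not a production system: the detection and feature-extraction stages are stubbed out, and the tiny feature vectors and names below are invented for the example (real systems compare high-dimensional embeddings produced by a trained model).

```python
import math

# Hypothetical enrolled database: identity -> feature vector.
# In practice these would be embeddings extracted by a trained model
# from the detected and segmented face.
DATABASE = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def cosine_similarity(a, b):
    """Similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe_features, threshold=0.9):
    """Match an extracted feature vector against the enrolled database.

    Returns the best-matching identity, or None when no enrolled face
    is similar enough (an 'unknown person' result).
    """
    best_name, best_score = None, threshold
    for name, enrolled in DATABASE.items():
        score = cosine_similarity(probe_features, enrolled)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```

The `threshold` parameter is the operating point discussed later in this article: raising it reduces false matches at the cost of more missed identifications.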
One of the most concerning aspects of facial recognition is the long-term digital storage of facial images, which poses significant risks of misuse. Addressing this requires reforms to alleviate public anxiety, such as setting limits on the storage duration of images and videos. While certain data retention may be necessary during emergencies, once the crisis passes, retaining such data becomes unnecessary. For most applications, limiting storage duration can balance the benefits of facial recognition with minimizing risks.
Storage durations vary by scenario. For instance, images compiled for emergencies have high immediate value, while others may need to be stored in large databases for future matching and identification.
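A scenario-dependent retention limit of this kind is straightforward to enforce mechanically. The sketch below assumes hypothetical per-scenario retention windows (the scenario names and durations are illustrative, not taken from the report) and purges any record older than its scenario allows.

```python
from datetime import datetime, timedelta

# Hypothetical retention windows per deployment scenario (assumptions
# for illustration; an actual policy would set these deliberately).
RETENTION_LIMITS = {
    "emergency": timedelta(days=30),
    "retail": timedelta(days=1),
}

def is_expired(captured_at, scenario, now):
    """A record with no defined retention window defaults to deletion."""
    limit = RETENTION_LIMITS.get(scenario)
    if limit is None:
        return True
    return now - captured_at > limit

def purge(records, now):
    """Keep only records still within their scenario's retention window."""
    return [r for r in records
            if not is_expired(r["captured_at"], r["scenario"], now)]
```

Defaulting unknown scenarios to deletion reflects the article's premise that retention must be justified, not assumed.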
One solution to enhance data security is Federated Learning, a decentralized machine learning approach where data remains on local devices (e.g., cameras) rather than being transmitted to central servers, thereby improving security.
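The core idea of federated learning can be shown with a toy example. In this sketch, each device fits a one-parameter linear model on its own data and only the updated weight leaves the device; the server averages the weights (the FedAvg scheme). The model and data are deliberately trivial; real deployments train neural networks the same way.

```python
def local_update(w, data, lr=0.05):
    """One pass of gradient descent on a device's private data.

    Fits y = w * x; the raw (x, y) pairs never leave the device.
    """
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def federated_average(global_w, device_datasets, lr=0.05):
    """One federated round: devices train locally, server averages weights."""
    local_ws = [local_update(global_w, data, lr) for data in device_datasets]
    return sum(local_ws) / len(local_ws)
```

Only the scalar `w` is communicated; the central server never observes any device's images or labels, which is the security property the paragraph describes.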
Sharing data across multiple purposes is another concern. For example, U.S. vehicle registration agencies have sold facial images to third parties for unrelated recognition purposes. This raises issues of consent, as individuals are often unaware of such secondary uses, undermining trust. Justifications must be provided for cross-scenario data sharing.
Public opinion on facial recognition varies by context. A Brookings Institution survey found that 41% of respondents supported its use in school safety, while only 30% approved of its deployment in airports or stadiums. The lowest approval (under 30%) was for retail theft prevention.
Both private and public entities should clearly mark areas where facial recognition is in use, ensuring public awareness and allowing individuals to avoid such zones if desired. This transparency fosters trust and respects individual autonomy.
Accuracy in facial recognition is influenced by factors like race and lighting. Studies show that systems perform better for lighter-skinned individuals, with accuracy declining for darker skin tones due to biases in training data. Such biases can lead to discrimination in law enforcement, border security, and retail settings.
Lighting conditions also affect accuracy, as shadows and light intensity alter the contrast of facial images. For example, Cardiff University research documented thousands of false matches in trials in Australia, underscoring the need for accuracy standards before large-scale public deployment. Higher standards are especially important in contexts such as law enforcement, where errors can infringe on fundamental rights.
Third-party evaluations can boost public confidence in facial recognition products. A star-rating system, similar to the Energy Star program, could help consumers assess product reliability and risks.
Some applications collect excessive data unrelated to their primary purpose, violating the "minimum necessary" principle. For instance, police body cameras may capture bystanders unrelated to investigations. Unnecessary data should be blurred or deleted once its investigative value expires.
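The blur-or-delete rule above can be sketched as a redaction pass over detected faces. This is a simplified illustration: the "image" is a 2D list of grayscale values, "blurring" is approximated by flattening a region to its mean, and the relevance flag is assumed to come from a human reviewer or case metadata.

```python
def blur_region(image, box):
    """Flatten a (x0, y0, x1, y1) region to its mean value.

    A stand-in for real Gaussian blurring or pixelation.
    """
    x0, y0, x1, y1 = box
    pixels = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    mean = sum(pixels) // len(pixels)
    for y in range(y0, y1):
        for x in range(x0, x1):
            image[y][x] = mean

def redact_bystanders(image, detections):
    """detections: list of (box, is_relevant) pairs.

    Blur every detected face not relevant to the investigation,
    per the 'minimum necessary' principle.
    """
    for box, is_relevant in detections:
        if not is_relevant:
            blur_region(image, box)
    return image
```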
Opt-in consent ensures individuals agree to the use of their biometric data, such as for personalized advertising. Given growing privacy concerns, facial biometrics—being highly identifiable—should be classified as sensitive data.
Alternatively, opt-out mechanisms and the right to be forgotten can apply in low-risk scenarios, allowing individuals to restrict data collection or sharing. Over time, outdated or irrelevant data should be removable to enhance public acceptance.
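An opt-out mechanism combined with a right-to-be-forgotten hook might look like the following sketch. The class and method names are hypothetical; the point is that opting out both blocks future collection and erases data already held.

```python
class ConsentRegistry:
    """Hypothetical opt-out registry with a right-to-be-forgotten hook."""

    def __init__(self):
        self.opted_out = set()
        self.records = {}  # subject_id -> stored biometric data

    def may_collect(self, subject_id):
        return subject_id not in self.opted_out

    def store(self, subject_id, data):
        # Collection is silently refused for opted-out subjects.
        if self.may_collect(subject_id):
            self.records[subject_id] = data

    def opt_out(self, subject_id):
        # Opting out also erases any data already on file.
        self.opted_out.add(subject_id)
        self.forget(subject_id)

    def forget(self, subject_id):
        """Right to be forgotten: delete stored data on request."""
        self.records.pop(subject_id, None)
```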
Industry standards, like those for mobile communications, are essential for ensuring facial recognition technology is safe and privacy-respecting. Organizations like IEEE and NIST are developing such standards, while ISO certifications can validate compliance, ensuring consumer trust in the technology's responsible use.
In the United States, NIST is responsible for certifying facial recognition technology: it benchmarks vendors' detection and matching results against public databases and certifies the corresponding applications. Critics argue, however, that NIST relies too heavily on seed data drawn from private websites, which does not generalize to everyday usage scenarios; that its data selection is too narrow, focusing mainly on law-enforcement facial data; and that its testing standards consider only image quality and operational functionality. Facial recognition verification should therefore combine automated testing with manual review, strengthen certification by standards organizations, and establish reliable testing and credible verification.
To ensure the accuracy of facial recognition, facial verification, technical standards, and government compliance testing must be based on data that is broadly representative rather than tied to any single use. Amid the commercialization wave of facial recognition, adopting representative databases for baseline testing and product certification is particularly crucial.
Single-purpose data, such as police mugshots, cannot represent all demographic groups, which sharply reduces its testing value. Beyond representativeness, real-world testing is essential for addressing public concerns: tests based on large image datasets, realistic application environments, and representative population samples can effectively mitigate the impact of lighting conditions and image resolution on measured accuracy.
Note: This research report is derived from the Brookings Institution's publication on October 31, titled 10 Actions That Will Protect People from Facial Recognition Software.