Using your face as identification has become routine in modern American life. Millions of people unlock smartphones with a glance, creating seamless personal security that feels natural. Social media platforms automatically recognize and suggest tags for friends in photographs, weaving the technology into our social connections.
These familiar, often consent-based interactions represent only the most visible part of facial recognition technology (FRT). Beyond personal devices, FRT operates in powerful and often invisible ways. Government agencies deploy it for surveillance. Private companies use it to track consumer behavior.
The technology promises enhanced security, efficiency, and convenience. Yet it poses significant risks to personal privacy, civil liberties, and equity.
How Facial Recognition Works
At its core, facial recognition uses software to determine similarity between two face images. It’s a form of biometric identification that automatically recognizes individuals based on unique biological characteristics. The process transforms a physical face into digital data that computers can analyze, compare, and match.
The Digital Faceprint
The journey from a human face in a photograph to computer-readable data involves sophisticated, multi-step processing powered by artificial intelligence.
Step 1: Face Detection
Software scans a digital image or video frame to locate and isolate human faces. Advanced algorithms identify patterns and shapes that resemble faces, effectively drawing bounding boxes around each one. This ensures all subsequent analysis focuses solely on relevant facial features, filtering out background and other objects.
Step 2: Facial Feature Extraction
Once a face is detected, the algorithm analyzes its unique geometry. It maps and measures key facial features, often called “nodal points.” These can include dozens of distinct characteristics: distance between eyes, width of nose, depth of eye sockets, shape of cheekbones, contour of jawline and lips, and subtler details like skin texture and wrinkle patterns.
This set of measurements forms a unique digital “facial signature” or “faceprint” for that individual.
Step 3: Creating the Embedding
The facial signature isn’t stored as an image but converted into a numerical expression—a string of numbers known as a “facial embedding” or vector. This mathematical representation is the final, computer-readable product.
The transformation uses complex, computer-generated filters created through "deep learning." Artificial neural networks, AI systems loosely modeled on the human brain, are trained on vast amounts of facial data and learn to produce unique, reliable embeddings.
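To make the idea of an embedding concrete, here is a minimal Python sketch. The `extract_embedding` function is only a stand-in for a trained deep-learning model (its internals are placeholder math, not a real network); the point it illustrates is that each face becomes a fixed-length list of numbers, and two faces are compared by measuring how similar those numbers are.

```python
import numpy as np

def extract_embedding(face_image: np.ndarray, dims: int = 128) -> np.ndarray:
    # Placeholder: derive a repeatable pseudo-random vector from the pixel data.
    # A real system would run the aligned face through a trained neural network.
    rng = np.random.default_rng(seed=int(face_image.sum()) % (2**32))
    vector = rng.normal(size=dims)
    return vector / np.linalg.norm(vector)   # unit-length "faceprint" vector

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between unit vectors: 1.0 = identical, near 0 = unrelated.
    return float(np.dot(a, b))

enrolled = extract_embedding(np.zeros((112, 112, 3), dtype=np.uint8))   # placeholder "photo"
probe = extract_embedding(np.full((112, 112, 3), 255, dtype=np.uint8))  # another "photo"
print(f"similarity score: {similarity(enrolled, probe):.2f}")
```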
Verification vs. Identification
Once a facial embedding is created, it serves two fundamentally different purposes. The distinction between these functions is critical, as they carry vastly different implications for privacy and consent.
Verification (1:1 Matching)
Verification answers: “Are you who you say you are?” The system compares the facial embedding of a live person to a single, pre-enrolled image the person provided as proof of identity, such as a photo on a passport, driver’s license, or employee ID.
This is an authentication task. Common examples include unlocking your smartphone or using the Transportation Security Administration’s Credential Authentication Technology, which matches your live face to the photo on your government-issued ID.
Verification is almost always an active, consensual process where users participate to gain benefits like faster access or enhanced security. Opt-out alternatives are often available.
Identification (1:N Matching)
Identification answers: “Who is this person?” The facial embedding of an unknown individual is compared against a large database containing many embeddings (from thousands to billions) to find potential matches.
Law enforcement agencies use this method when they take a surveillance camera image of a suspect and search it against a mugshot database to generate investigative leads. Unlike verification, identification is often passive and non-consensual. Subjects typically don’t know their face is being scanned and compared against databases that may have been compiled without their knowledge or consent.
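The difference between the two functions can be expressed in a short, illustrative sketch. Nothing here reflects any specific vendor's system; the function names, the 0.8 threshold, and the gallery structure are assumptions chosen to show how 1:1 verification and 1:N identification differ in shape.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live_embedding: np.ndarray, enrolled_embedding: np.ndarray,
           threshold: float = 0.8) -> bool:
    # 1:1 matching -- "Are you who you say you are?"
    # The live face is compared against the single image the person provided.
    return cosine_similarity(live_embedding, enrolled_embedding) >= threshold

def identify(probe_embedding: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.8) -> list[tuple[str, float]]:
    # 1:N matching -- "Who is this person?"
    # The unknown face is compared against every entry in a database; candidates
    # above the threshold are returned, ranked by similarity score.
    scores = [(name, cosine_similarity(probe_embedding, emb))
              for name, emb in gallery.items()]
    return sorted([s for s in scores if s[1] >= threshold],
                  key=lambda item: item[1], reverse=True)
```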
The public discourse around FRT is often confused because the benign, convenient example of verification is sometimes used to justify the far more invasive and controversial application of identification.
Technology Components
The effectiveness and security of facial recognition systems depend on several underlying technological components.
2D vs. 3D Technology
Most facial recognition systems rely on standard 2D cameras, which capture flat, two-dimensional images of faces. These systems analyze nodal points from flat images. While 2D technology can work well in controlled environments with stable, direct lighting, it has significant limitations. It’s less effective in variable or dark lighting and can be easily “spoofed” with high-quality photographs.
Advanced systems use 3D technology. Apple’s Face ID projects and analyzes thousands of infrared dots to create precise depth maps of faces. Some systems use thermal infrared imagery, mapping facial patterns based on heat emitted by superficial blood vessels under skin. These 3D methods are far more secure and resistant to spoofing with photographs or masks.
Liveness Detection
To combat fraud and spoofing attempts, modern systems incorporate “liveness detection.” This crucial security feature confirms the system is interacting with a real, live human who is physically present, not a photograph, video, or mask.
Systems might test for liveness by looking for subtle indicators of non-live images, such as inconsistencies between foreground and background. Or they may actively challenge users to perform simple actions like blinking, smiling, or moving their head.
Similarity Scores and Thresholds
Facial recognition searches don’t produce definitive “yes” or “no” answers. Instead, algorithms generate “similarity scores” for each potential match—numerical values representing how closely the unknown face’s embedding compares to database embeddings.
Human operators or system administrators must set “comparison thresholds”—specific similarity scores above which potential matches are considered valid and returned for review.
Setting thresholds involves critical trade-offs between two error types:
- False Positives: Incorrectly matching an individual to someone else’s photo. Low thresholds increase false positive likelihood.
- False Negatives: Failing to match an individual to their own photo. High thresholds increase false negative likelihood.
Threshold choice depends entirely on application risk tolerance. Low-stakes applications like organizing personal photos might accept lower thresholds. High-stakes law enforcement investigations require careful calibration to minimize wrongly implicating innocent people.
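A small, hypothetical example illustrates the trade-off. The similarity scores and labels below are made up, but the pattern they show is the general one: lower thresholds produce more false positives, higher thresholds produce more false negatives.

```python
# Each tuple is (similarity_score, same_person?). Hypothetical numbers only.
comparisons = [
    (0.95, True), (0.88, True), (0.72, True), (0.61, True),    # genuine pairs
    (0.79, False), (0.55, False), (0.43, False), (0.30, False) # impostor pairs
]

for threshold in (0.5, 0.7, 0.9):
    false_positives = sum(1 for score, same in comparisons if score >= threshold and not same)
    false_negatives = sum(1 for score, same in comparisons if score < threshold and same)
    print(f"threshold={threshold}: false positives={false_positives}, "
          f"false negatives={false_negatives}")
```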
A Brief History of Facial Recognition
While today’s AI-powered systems seem futuristic, using faces for identification dates to the 19th century. The technology’s history reveals a strategic cycle: conceived for government control, developed with military and law enforcement funding, normalized by consumer technology, and now redeployed for state surveillance with unprecedented power and scale.
Pre-Computer Origins
Long before computers, government and law enforcement agencies understood the power of human faces as identification and control tools. In 1852, England instituted formal prison photography to identify escaped convicts and share records between police stations.
In the United States, the first photograph appeared on a reward poster in 1865 to help capture President Abraham Lincoln’s assassins. The Pinkerton National Detective Agency systematized this practice, developing the first criminal database of “mug shots” by 1870.
These early efforts established the foundational concept: centralized collections of facial images could track and identify individuals of interest to the state.
The Digital Age Begins
The first attempts to automate this process came with computers. Between 1964 and 1967, artificial intelligence pioneer Woodrow W. Bledsoe led research to see if computers could recognize human faces.
Bledsoe’s system was semi-automated and rudimentary. Human operators manually plotted coordinates of key facial features onto photographs using a graphical device called a RAND tablet. Computers then compared these coordinate sets to find matches.
The system wasn’t particularly successful, but Bledsoe’s work was groundbreaking. He formally identified immense challenges that continue to plague the field, noting that recognition is made difficult by “the great variability in head rotation and tilt, lighting intensity and angle, facial expression, aging, etc.”
Throughout the 1970s, researchers refined these computerized methods, increasing accuracy by expanding facial markers used for comparison to 21.
Government-Fueled Innovation
The next major leap came from significant U.S. government investment, particularly from defense and justice departments. A major technical breakthrough came in the late 1980s and early 1990s with “Eigenfaces”—a method allowing more efficient, low-dimensional digital representation of facial images, paving the way for truly automatic face recognition.
Recognizing the technology’s potential, the government actively cultivated a commercial market. From 1993 through the 2000s, the Defense Advanced Research Projects Agency (DARPA) and the National Institute of Standards and Technology jointly ran the Face Recognition Technology (FERET) Program. This initiative sponsored research, developed standardized evaluation methods, and encouraged commercial FRT industry growth.
Simultaneously, the National Institute of Justice, the Department of Justice’s research arm, began funding face recognition research in 1996, supporting new algorithm development and their transition to local law enforcement agencies.
This period cemented FRT as a U.S. government strategic priority long before it became a common consumer product.
Commercialization and Ubiquity
The 21st century saw FRT move from labs into public spheres, sparking fascination and controversy. An early high-profile deployment occurred at the 2001 Super Bowl in Tampa, Florida, where law enforcement scanned crowds for known criminals and terrorists. The event, quickly nicknamed the "Snooper Bowl," ignited one of the first major public debates about the privacy implications of facial surveillance.
A decade later, in 2011, the technology had a major public success when it helped confirm Osama bin Laden’s identity after his death, demonstrating its value for critical national security operations.
However, the true catalyst for the technology’s explosion in capability and social acceptance was the private sector. In 2014, Facebook unveiled its DeepFace software, which used sophisticated deep learning techniques to achieve near-human accuracy in photo-tagging. By deploying this feature to its massive user base, Facebook normalized automated facial recognition on an unprecedented global scale.
This was followed by FRT integration into personal devices for security. Microsoft introduced Windows Hello in 2015, and Apple launched its highly secure, 3D-based Face ID system with the iPhone X in 2017.
This commercialization created a powerful feedback loop: consumer applications generated vast new facial image datasets and made the public comfortable with the technology, which in turn made the original government goal of surveillance easier to achieve, both technically and socially.
Government Use: Securing Borders and Travel
The U.S. federal government, particularly the Department of Homeland Security (DHS), has become one of the most prominent facial recognition users. It’s deployed extensively at airports and border crossings to verify traveler identities, with stated goals of enhancing security and streamlining travel processes. However, this deployment operates on a tiered system of privacy rights, where data collection, retention, and opt-out ability explicitly depend on citizenship status.
The DHS Framework
DHS officially describes Face Recognition and Face Capture (FR/FC) technologies as powerful AI tools used to improve public interactions with the department and support critical law enforcement investigations, while protecting privacy and individual rights.
The department asserts its FRT use complies with all applicable federal laws, including the Privacy Act of 1974 and the Homeland Security Act of 2002.
Under its own policies, DHS mandates that all FRT systems undergo thorough testing to ensure no unintended bias or disparate impact. Crucially, the department’s framework establishes two key principles for non-law enforcement uses: U.S. citizens must be afforded the right to opt out of face recognition, and the technology cannot be used as the sole basis for any law or civil enforcement action.
At the Airport: The TSA Experience
For most Americans, the most common government facial recognition encounter occurs at airport security checkpoints. The TSA has deployed second-generation Credential Authentication Technology (CAT-2) scanners at hundreds of airports nationwide.
The process is designed to be straightforward. Travelers approach the podium and either hand their physical ID to a Transportation Security Officer (TSO) or insert it into the CAT-2 unit themselves. The machine verifies the ID’s security features and confirms the traveler’s flight reservation. Simultaneously, a camera takes a live photo of the traveler. The system then performs one-to-one facial comparison, matching the live photo against the digital image on the ID to confirm the traveler is who they claim to be.
A critical point for public understanding is that for U.S. citizens, participation in this facial comparison process is voluntary. Travelers can decline to have their photo taken. In this case, the TSO performs a standard, manual identity check. The TSA explicitly states that travelers who opt out face no penalty, negative consequence, or loss of their place in line. The agency posts signage at checkpoints informing travelers of this right.
To address privacy concerns, the TSA asserts that in this context, the technology is used solely to automate manual ID checks and is not used for surveillance or law enforcement. After a positive match is confirmed, the traveler’s live photo is deleted and not stored or saved, except in limited testing environments for evaluating technology effectiveness.
At the Border: CBP Biometric Programs
U.S. Customs and Border Protection (CBP), another DHS component, has a congressional mandate to implement a biometric entry-exit system to track foreign nationals. The agency has chosen facial recognition as its preferred technology and deployed it across air, land, and sea ports of entry.
CBP operates several key programs:
Simplified Arrival: When travelers arrive on international flights or at land pedestrian crossings, photos are taken at primary inspection points. These photos are compared against small, temporary galleries of images of people expected to be traveling that day (pulled from existing government holdings like passport and visa photos). This streamlines identity checks and reduces manual document processing needs.
Biometric Exit by Air: As passengers board departing international flights, cameras at gates capture photos. These images are matched against travel document photos to create secure departure records, helping CBP enforce immigration laws.
Global Entry: This “Trusted Traveler Program” serves pre-approved, low-risk travelers. Members can use touchless portals or a mobile app that uses facial recognition to verify identity against stored photos, expediting U.S. entry.
CBP’s data handling policies reveal crucial distinctions based on citizenship. CBP states that for U.S. citizens, photos taken during biometric processes are retained for no more than 12 hours after identity verification for operational purposes. In stark contrast, biometric data collected from most non-U.S. citizens is enrolled and stored in the Automated Biometric Identification System (IDENT), one of the largest biometric repositories in the U.S. government.
This creates a de facto biometric “caste system” at the border, where privacy protections afforded to individuals are directly tied to their citizenship.
As with the TSA process, U.S. citizens have the right to opt out of CBP’s facial photo capture. They can request alternative processes, which typically involve manual review of travel documents by CBP officers.
Law Enforcement and Investigations
While facial recognition in travel is largely about verification, its use in law enforcement is about identification: a far more controversial application. Here, the technology is presented as an indispensable tool for public safety, yet criticized as a grave threat to justice and civil liberties. A significant disconnect exists between law enforcement agencies' official policies and public assurances and their actual practices in the field. The primary risk may not be "rogue AI," but rather systemic failure of human governance and accountability.
The Argument For: Essential Tool for Public Safety
Law enforcement agencies and technology proponents argue that FRT is a transformative investigative tool that helps keep communities safe. They consistently emphasize that in law enforcement contexts, the technology generates leads, not arrests or probable cause on its own.
When an image of an unknown person of interest is searched against a database, the system returns a list of potential candidates ranked by similarity score. It’s then a trained human detective’s responsibility to conduct further investigation using traditional police work to determine if any leads are valid and to build a case.
The value of this capability is demonstrated through numerous success stories where FRT provided crucial first steps in solving difficult cases:
Solving Violent Crimes: The technology has been instrumental in identifying suspects in high-profile cases. For example, it helped identify the man who left rice cookers in a New York City subway station, sparking a terror scare, and was used to identify the Capital Gazette shooter in Maryland after he refused to provide his name.
Combating Human Trafficking and Child Exploitation: Investigators have used FRT to identify and rescue child victims by matching their faces to images found in online sex advertisements. It has also helped identify and apprehend suspected traffickers and child predators who might have otherwise remained anonymous.
Exonerating the Innocent: In some cases, FRT has protected the innocent. A Florida man falsely accused of vehicular homicide was exonerated when the technology helped locate a key witness who confirmed he was not the driver. In another case, a man identified as a suspect in an assault was cleared when investigators confirmed he was already in jail at the time of the crime.
Identifying Unknown Victims: When crime victims are found unconscious or deceased with no identification, FRT has been used to query images from sources like cell phone screens to provide names, allowing authorities to notify family members in a timely manner.
The Argument Against: Threat to Justice and Civil Liberties
Civil liberties organizations like the American Civil Liberties Union and the Electronic Frontier Foundation present a starkly different picture, arguing that FRT is a dangerously flawed technology that threatens the foundations of a free society. Their arguments are grounded in real-world harms and the potential for systemic abuse.
The most damning evidence against police use of FRT is the growing list of individuals wrongfully arrested based on faulty facial recognition matches. These victims, who are disproportionately Black, include Robert Williams, Nijeer Parks, Randal Reid, and Porcha Woodruff, among others. In these cases, police often violated their own policies by treating FRT results as definitive proof of identity rather than unconfirmed tips, leading to innocent people being jailed.
Beyond wrongful arrests, critics warn of the technology's "chilling effect" on First Amendment rights. The knowledge that the government can use FRT to passively and remotely identify individuals in crowds could deter people from participating in protests, political rallies, or other forms of public assembly for fear of being cataloged and monitored.
This threat of mass surveillance fundamentally changes the nature of public space, eroding the anonymity that is crucial for free expression and association.
Another major concern is “mission creep,” where technology initially adopted for narrow, serious purposes like identifying violent felons is gradually expanded to cover minor infractions or political surveillance. For example, a system used to find terrorists could eventually issue automated fines for jaywalking or track individuals attending specific religious services or community meetings.
The Oversight Gap
The U.S. Government Accountability Office (GAO), Congress’s independent watchdog agency, has conducted multiple investigations into federal government FRT use. Its findings provide crucial, evidence-based checks on law enforcement agency claims and reveal patterns of inadequate oversight and accountability.
Across several reports, the GAO has found systemic failures:
Widespread and Untracked Use: Dozens of federal agencies use FRT, including the FBI, DEA, U.S. Marshals Service, Secret Service, and ICE. Many agencies couldn’t even track how often their own employees were using non-federal systems, such as state-level databases or controversial commercial services like Clearview AI, creating massive accountability vacuums.
Systemic Lack of Training: The GAO found that federal agencies conducted tens of thousands of facial recognition searches before implementing any mandatory training for agents using the technology. In one case, the GAO found that of 196 FBI staff with access to a facial recognition service, only 10 had completed training.
Absence of Clear Policies: For years, many agencies used this powerful surveillance tool without having specific policies in place to govern its use, protect civil liberties, and ensure proper procedures were followed.
These findings reveal that the official safeguard of a “trained human in the loop” is often a myth. The human analyst is frequently untrained and operating without clear, binding rules. This suggests that the primary danger of FRT in law enforcement may not be the technology itself, but the unreliable and unaccountable human systems tasked with wielding it.
Accuracy and Bias Questions
The debate over facial recognition is inseparable from questions of accuracy, particularly whether the technology is biased against certain demographic groups. The public conversation on this topic is often polarized and oversimplified. A nuanced look at evidence reveals that the critical question isn’t simply “Is the technology biased?” but rather a more sophisticated set of risk-assessment questions that must be asked of any specific deployment.
The NIST Reports
The National Institute of Standards and Technology serves as the U.S. government’s primary, independent evaluator of facial recognition algorithms. Through its ongoing Face Recognition Vendor Test (FRVT) program, NIST rigorously assesses algorithms submitted by developers worldwide, providing the most authoritative data on their performance.
NIST’s influential 2019 report on demographic effects presented a complex picture often misinterpreted by both critics and proponents. The key findings were:
A majority of the 189 algorithms tested did exhibit measurable “demographic differentials.” Specifically, they produced higher rates of false positives (incorrect matches) for women, Black individuals, and East Asian individuals when compared to white men. For some algorithms, the false positive rate for Black women was nearly 35 percent.
However, the report also found that the most accurate, top-tier algorithms showed “undetectable” or negligible differences across these same demographic groups.
NIST concluded that higher overall accuracy strongly correlates with lower demographic differences. This suggests that bias is not an inherent, unsolvable property of the technology itself, but rather a problem that can be engineered away with better algorithms and development processes.
This creates a performance spectrum: some algorithms are highly biased and should not be used, while the best are highly accurate across all groups. A blanket statement that “FRT is biased” is therefore imprecise and misleading. The focus must shift from a binary debate to a risk-management framework that asks: Which specific algorithm is an agency using, what were its NIST test results, and what safeguards are in place to mitigate its known error rates?
Understanding Errors
To understand accuracy, it’s essential to distinguish between the two types of errors a system can make, as they have very different real-world consequences.
False Positive (Type I Error): This occurs when the system incorrectly declares a match between two different people. In a law enforcement search, a false positive is the error that can lead to an innocent person being misidentified as a suspect and potentially facing wrongful arrest. NIST found that false positive rates varied much more significantly across demographic groups than false negative rates.
False Negative (Type II Error): This occurs when the system fails to recognize that two photos are of the same person. In a system used for verifying identity to access unemployment benefits, a false negative could mean a legitimate applicant is wrongly rejected by the automated system and forced into a lengthy and burdensome manual review process to get benefits they are entitled to.
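A brief sketch shows how the two error rates are measured on different populations of comparisons: false positives are counted among pairs of different people, while false negatives are counted among pairs of images of the same person. The numbers below are hypothetical; real evaluations like NIST's use very large sets of image pairs.

```python
# Hypothetical outcomes from an evaluation run, for illustration only.
impostor_pairs_declared_match = 3      # different people, system said "match"
impostor_pairs_total = 1000
genuine_pairs_declared_nonmatch = 20   # same person, system said "no match"
genuine_pairs_total = 1000

false_positive_rate = impostor_pairs_declared_match / impostor_pairs_total   # Type I
false_negative_rate = genuine_pairs_declared_nonmatch / genuine_pairs_total  # Type II

print(f"False positive rate: {false_positive_rate:.1%}")   # 0.3%
print(f"False negative rate: {false_negative_rate:.1%}")   # 2.0%
```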
Sources of Error Beyond Algorithms
Errors in facial recognition don’t stem solely from algorithm code. Several other factors play critical roles in system performance.
Image Quality: NIST has identified variations in image quality as a primary driver of errors. Real-world conditions are often far from ideal. Factors like poor or uneven lighting, low camera resolution, off-angle views of faces, or partial occlusion from hats, sunglasses, or masks can all dramatically reduce system accuracy.
Training Data: The data used to “teach” AI algorithms is paramount. An algorithm is only as good as the data it learns from. If an algorithm is trained on a dataset that is not diverse and consists primarily of images of white men, it will naturally be less accurate when trying to identify women and people of color.
This reframes the issue from abstract “biased AI” to a more concrete problem of “AI trained on biased data.” NIST’s finding that some algorithms developed in China performed better on Asian faces supports this conclusion, as those algorithms were likely trained on datasets rich with Asian faces.
The Legal Maze
As facial recognition technology has advanced and proliferated, laws and regulations have struggled to keep pace. In the United States, this has resulted in a fractured legal landscape where a person’s rights against facial surveillance can change dramatically simply by crossing state or city lines. This “privacy federalism” means that fundamental rights concerning one’s own biometric identity are not consistently protected nationwide, creating confusion for citizens and compliance challenges for government and industry.
The Federal Stalemate
Despite the technology’s widespread use by numerous federal agencies, there is currently no comprehensive federal law that specifically governs or restricts facial recognition use. Congress has held hearings and several bills have been introduced, but none have been enacted into law. These proposals, however, illustrate different approaches being considered at the federal level.
The Facial Recognition Technology Warrant Act, a bipartisan bill, sought to establish a “warrant for face scans” rule. It would have required federal law enforcement agencies to obtain warrants based on probable cause before using FRT for targeted, ongoing surveillance of individuals for more than 72 hours.
The Facial Recognition and Biometric Technology Moratorium Act represents a much more stringent approach. It would place a moratorium on acquisition and use of FRT and other biometric surveillance systems by the federal government. The ban would remain in effect unless and until Congress passes specific law that explicitly authorizes its use, detailing which agencies can use it, for what purposes, and with robust safeguards for accuracy, due process, privacy, and equity.
States and Cities as Policy Laboratories
In the absence of federal action, states and local governments have taken the lead, becoming policy laboratories for regulating FRT. This has resulted in a complex and inconsistent “patchwork” of laws across the country. As of late 2024, at least 15 states have enacted laws limiting police use of the technology.
These regulations can be categorized into several types:
Comprehensive Bans on Government Use: A growing number of cities have enacted outright bans on facial recognition use by municipal agencies, including police. This movement was pioneered by San Francisco, California, in 2019, and was followed by other cities such as Boston, Massachusetts, and Portland, Oregon.
Moratoriums: Some states, like Vermont, have passed laws placing temporary halts on law enforcement’s FRT use, allowing for further study and statewide policy development.
Strict Use-Limits for Law Enforcement: This is the most common approach at the state level, with various restrictions being implemented:
- Warrant Requirements: Montana and Utah became the first states to require law enforcement to obtain warrants before conducting facial recognition searches in most cases.
- Serious Crime Limits: States like Maryland and Illinois restrict FRT use to investigations involving specific lists of serious or violent crimes.
- Defendant Notification: A New Jersey court ruling and a Maryland state law now require that criminal defendants be notified if FRT was used at any point in investigations that led to their charges, protecting their due process rights.
Narrower Restrictions: Some states have passed more limited laws. For example, Oregon and New Hampshire specifically ban real-time facial recognition use in conjunction with police body cameras.
Private Sector Regulation: The most influential law governing commercial use of biometric data is Illinois’s Biometric Information Privacy Act (BIPA). Enacted in 2008, BIPA requires private entities to obtain written consent from individuals before collecting, capturing, or otherwise obtaining their biometric identifiers, including “scan of hand or face geometry.” Crucially, BIPA grants individuals a private right of action, allowing them to sue companies for violations. This has led to numerous high-profile lawsuits against companies like Facebook and Clearview AI.
State-Level Facial Recognition Laws
State | Type of Law | Key Provisions
---|---|---
Alabama | Government Use Limit | Prohibits FRT from being the sole basis for an arrest or establishing probable cause. |
Colorado | Government Use Limit | Requires government agencies to provide public notice before using FRT and specifies the purpose of use. |
Illinois | Biometric Privacy Act | Requires private entities to obtain written consent before collecting biometric data; provides a private right of action for violations. |
Maine | Government Use Limit | Prohibits government use of FRT except with a warrant or in emergencies; has multiple strong limits. |
Maryland | Government Use Limit | Limits law enforcement use to a list of serious crimes; requires defendant notification; prohibits use in employment interviews without consent. |
Massachusetts | Government Use Limit | Limits law enforcement use, requiring a court order for searches except in emergencies; bans use by public agencies except the Registry of Motor Vehicles. |
Montana | Government Use Limit | Requires a warrant based on probable cause for law enforcement searches; limits use to serious crimes; requires defendant notification. |
New Hampshire | Body Camera Ban | Bans the use of FRT in combination with police body cameras. |
New Jersey | Government Use Limit | Court ruling requires defendant notification if FRT was used in an investigation. |
Oregon | Body Camera Ban | Prohibits law enforcement from using FRT in connection with body-worn cameras. |
Utah | Government Use Limit | Requires a warrant for law enforcement to obtain biometric surveillance information; limits use to serious crimes. |
Vermont | Moratorium | Prohibits law enforcement use of FRT, with some exceptions for investigating sexual exploitation of children. |
Virginia | Government Use Limit | Prohibits local law enforcement and campus police from purchasing or deploying FRT without specific authorization. |
Washington | Government Use Limit | Requires a warrant for ongoing surveillance; requires agencies to test for fairness and accuracy; has multiple strong limits. |
FRT in the Private Sector
Beyond government applications, facial recognition has been rapidly integrated into the private sector, where it’s used to mediate our digital lives, reshape retail experiences, and, most controversially, act as a gatekeeper to essential government benefits. The privatization of identity verification through FRT represents a profound shift in the relationship between citizen and state. It outsources a core government function to for-profit companies, creating new forms of digital bureaucracy that can systematically exclude the most vulnerable populations while concentrating sensitive biometric data in unaccountable private entities.
In Your Daily Life
The most common and widely accepted uses of FRT are in our personal lives.
Personal Devices: Millions of people use facial recognition daily to securely unlock smartphones, tablets, and computers. On-device processing, as pioneered by Apple’s Face ID, is a key privacy-preserving feature. In this model, the user’s biometric data—the facial map—is encrypted and stored in a secure enclave on the device itself and never uploaded to cloud servers, giving users full control.
Social Media: Social media platforms use FRT to enhance user experience. The technology powers features like automatic photo-tagging suggestions, making it easier to share memories with friends. It also helps users organize vast personal photo libraries by automatically identifying and grouping photos of specific people. For content creators and public figures, it can be a tool to track unauthorized use of their image or content across the internet.
In the Store: The Retail Revolution
The retail industry has embraced facial recognition for two primary purposes: enhancing security and personalizing customer experience.
Security and Loss Prevention: Faced with what they describe as a “theft crisis” of rising shoplifting and in-store violence, many retailers are turning to FRT as a key security tool. These systems typically work by scanning shoppers’ faces as they enter stores and comparing them against privately curated watchlists of known shoplifters or individuals who have engaged in disruptive or violent behavior.
If a match is detected, real-time alerts are sent to store managers or security personnel, allowing them to respond proactively—for example, by offering enhanced customer service as a deterrent or, if necessary, contacting law enforcement. Proponents argue this makes stores safer for both employees and customers. Retailers are generally expected to provide transparency about this practice through clear signage at store entrances.
Marketing and Customer Experience: Retailers are also using FRT to gather business intelligence and create more personalized shopping environments. This includes:
- Analytics: Using cameras to track customer foot traffic patterns, analyze dwell times in certain aisles, and gather aggregate demographic data (such as age and gender) to optimize store layouts, product placement, and staffing levels.
- Personalized Marketing: Deploying “smart” digital signage that can change advertisements displayed based on detected demographics of people looking at screens.
- VIP Recognition: Identifying enrolled loyalty program members as they enter stores, allowing staff to offer personalized greetings, tailored recommendations based on purchase history, and exclusive offers.
Innovative Applications: New frontiers in retail FRT include virtual try-on technology (VTOT), which uses device cameras to let shoppers see how clothes or makeup would look on them, and contactless payment systems like “Smile to Pay,” where transactions are authorized with facial scans.
Accessing Your Benefits: The ID.me Controversy
One of the most significant and controversial applications of private-sector FRT has been its use as a gatekeeper for government services. During the COVID-19 pandemic, state unemployment agencies were overwhelmed by massive waves of fraudulent claims. To combat this, at least 27 states, along with the Internal Revenue Service, contracted with a private company called ID.me to provide identity verification for people seeking unemployment benefits and other services.
The ID.me process requires users to upload photos of their government-issued ID and then take a live “selfie” with a smartphone or webcam. The system performs one-to-one matching to verify their identity. While intended to stop fraud, this system created severe problems that disproportionately harmed the most vulnerable Americans:
The Digital Divide: The system’s requirements—a smartphone with a camera or a computer with a webcam, and stable, high-speed internet connection—created immediate barriers for many low-income, elderly, or rural individuals. For those lacking these resources, accessing rightfully earned benefits became nearly impossible.
High Failure Rates and Extreme Wait Times: A significant percentage of users—between 10 and 15 percent—were unable to pass automated verification. These individuals were then funneled into manual verification processes: live video chats with ID.me employees. An investigation by the U.S. House Committee on Oversight and Reform revealed that during a peak period in April 2021, the average wait time for these video chats was over four hours in 14 states. In North Dakota, the average wait was a staggering 9 hours and 49 minutes.
These catastrophic delays left desperate people who had lost their jobs unable to access life-saving benefits for weeks or months.
Bias and Equity Concerns: The U.S. Department of Labor’s Office of Inspector General issued an official alert warning that facial recognition use in unemployment systems could result in inequitable access to benefits. Citing NIST research, the OIG noted that such systems can have higher error rates for women and people of color, creating risk that the technology could systematically and unfairly block legitimate applicants from these demographic groups.
Accountability and Outsourcing: Critics argue that identity verification is a core government function that should never be outsourced to private, for-profit companies. Doing so places a private entity, whose algorithms and processes are not subject to public audit or accountability, in the position of deciding who gets access to essential government services.