AI Facial Recognition: The Legal Battle for Your Privacy Rights

You’ve probably used facial recognition today without thinking twice about it. Maybe your phone unlocked when you looked at it, or you saw friends automatically tagged in photos on social media. This technology has quietly become part of our daily lives, promising convenience and security. But what happens when the same system that recognizes your face to unlock your phone is used to identify you as a suspect in a crime you didn’t commit?

The story of Robert Williams shows exactly what can go wrong. On January 9, 2020, this 45-year-old father from the Detroit area was arrested in front of his wife and children, accused of stealing watches from a store more than a year earlier. The police had essentially one piece of evidence: a facial recognition system had flagged him as the thief. The problem? The system was wrong (Naughton, 2025).

Williams spent 30 hours in jail before the mistake was discovered. His case isn’t unique—it’s part of a growing pattern that’s forcing courts, lawmakers, and everyday people to ask hard questions about this technology and our privacy rights.

How Facial Recognition Actually Works

Before we dive into the legal battles, let’s understand what we’re dealing with. Facial recognition technology works in four basic steps that sound simple but rest on complex computation.

First, the system detects that there’s a face in a photo or video. Next, it analyzes that face, measuring features such as the distance between your eyes, the shape of your nose, and the curve of your chin. Those measurements are condensed into what experts call a “faceprint”: essentially a mathematical description of your face. Finally, the system compares this faceprint against a database of stored faceprints to find likely matches (Naughton, 2025).

Think of it like a digital fingerprint, but for your face. The technology has become remarkably accurate under good conditions, with some systems claiming rates near 97 percent. But accuracy drops with grainy surveillance footage, poor lighting, and off-angle shots, and as we’ll see, even that remaining 3 percent can have devastating consequences.
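For readers who want to see the moving parts, here is a minimal sketch of those four steps in Python, using the open-source face_recognition library. The filenames and the 0.6 distance cutoff are illustrative assumptions, not details of any system discussed in this article.

import face_recognition  # pip install face_recognition

# Step 1: detect any faces in a probe image (say, a surveillance still).
probe = face_recognition.load_image_file("surveillance_still.jpg")
locations = face_recognition.face_locations(probe)

# Steps 2 and 3: measure each detected face and condense the measurements
# into a "faceprint": here, a vector of 128 numbers.
faceprints = face_recognition.face_encodings(probe, locations)

# Step 4: compare each faceprint against a known photo on file.
known = face_recognition.load_image_file("photo_on_file.jpg")
known_print = face_recognition.face_encodings(known)[0]

for fp in faceprints:
    # Smaller distance means more similar; 0.6 is the library's default
    # cutoff. Loosen it and the system finds more candidates, but it also
    # produces more false matches: the "remaining 3 percent" problem.
    distance = face_recognition.face_distance([known_print], fp)[0]
    print(f"distance={distance:.3f}  match={distance < 0.6}")

In a real deployment the database holds thousands or millions of stored faceprints, and the system typically returns a ranked list of candidates rather than a single yes-or-no answer.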

When Technology Gets It Wrong: Real Cases of Misidentification

Robert Williams’ case led to a landmark settlement in 2024 that changed how Detroit police can use facial recognition. The settlement requires officers to gather independent evidence corroborating a facial recognition match before making an arrest. It also mandates comprehensive training and audits of past cases where the technology was used (Naughton, 2025).

But Williams wasn’t alone. In New Jersey, a case called State v. Arteaga exposed another troubling aspect of facial recognition use. When Francisco Arteaga was charged with armed robbery based partly on a facial recognition match, his lawyers demanded information about how the system worked, its error rates, and the specific details of his identification. The prosecution initially refused, arguing that because it didn’t control the technology, it had no obligation to share the information.

The New Jersey appeals court disagreed. In 2023, it ruled that defendants have a due process right to challenge facial recognition evidence, including access to technical details about how the system identified them. Without that transparency, the court reasoned, people cannot properly defend themselves against potentially flawed technology (State v. Arteaga, 2023).

The Bias Problem: Why Some Faces Are More Likely to Be Misidentified

Perhaps the most disturbing aspect of facial recognition technology is its bias problem. A comprehensive study by the National Institute of Standards and Technology examined 189 facial recognition algorithms from 99 developers. The results were stark: many algorithms were 10 to 100 times more likely to falsely match faces of people of color, particularly African Americans, Asian Americans, and Native Americans, than white faces (NIST, 2019).

This isn’t just a technical glitch; it has real-world consequences. Robert Williams and nearly every other person known to have been wrongfully arrested after a facial recognition match have been people of color. The technology that’s supposed to make us safer is putting certain communities at greater risk of wrongful arrest and prosecution.

The bias stems largely from how these systems are trained. If the databases used to “teach” a facial recognition system contain mostly images of white faces, the system becomes less accurate at distinguishing between faces of people from other racial groups. Researchers are working on the problem, but progress has been slow. The sketch below shows, in simplified form, how an audit can surface this kind of disparity.
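To make that concrete, here is a toy sketch of the kind of audit the NIST study performed: computing the false match rate separately for each demographic group in a labeled evaluation set. The record fields (group, same_person, distance) are hypothetical stand-ins; real benchmarks are vastly larger and more rigorous.

from collections import defaultdict

THRESHOLD = 0.6  # pairs with faceprint distance below this count as a match

def false_match_rates(trials):
    """Each trial compares two photos and records the subjects' demographic
    group, whether the photos show the same person, and the distance."""
    attempts = defaultdict(int)
    false_matches = defaultdict(int)
    for t in trials:
        if t["same_person"]:
            continue  # only different-person pairs can produce false matches
        attempts[t["group"]] += 1
        if t["distance"] < THRESHOLD:
            false_matches[t["group"]] += 1
    return {group: false_matches[group] / attempts[group] for group in attempts}

In these terms, NIST’s finding was that for many algorithms the rate for some groups came out 10 to 100 times higher than for others, which is why a single headline accuracy number can hide serious disparities.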

Corporate Surveillance: The Rite Aid Case

The problems with facial recognition extend beyond law enforcement. In 2023, the Federal Trade Commission took action against the pharmacy chain Rite Aid over its use of facial recognition in stores. For roughly eight years, from 2012 to 2020, Rite Aid ran facial recognition systems in hundreds of locations, disproportionately in low-income neighborhoods with large populations of people of color.

The system was supposed to identify shoplifters, but it frequently generated false matches. Customers were wrongly accused, publicly humiliated, searched, detained, and sometimes arrested based on these incorrect identifications. The FTC found that Black, Asian, Latino, and female customers were disproportionately affected by these false matches.

As part of the settlement, Rite Aid is banned from using facial recognition technology for surveillance purposes for five years. The case sends a clear message to other retailers: you can’t just deploy this technology without proper safeguards and oversight (FTC v. Rite Aid, 2023).

The Patchwork of State Laws

Across the United States, lawmakers are struggling to keep up with facial recognition technology. The result is a confusing patchwork of different rules depending on where you live.

Illinois leads the way with its Biometric Information Privacy Act (BIPA), passed in 2008. This law requires companies to get written consent before collecting biometric data, including facial scans. It also gives people the right to sue if their biometric privacy is violated. The law has teeth—Facebook paid $650 million in 2021 to settle a BIPA lawsuit over its photo-tagging feature (McKnight, 2021).

Other states have taken different approaches. Massachusetts requires law enforcement to document every facial recognition search. Virginia allows the technology but prohibits using it to track people in real-time or create surveillance databases. Washington state requires warrants for most facial recognition use by police.

Some cities have gone further, banning facial recognition entirely. San Francisco, Oakland, Berkeley, and Portland have all prohibited government use of the technology, citing concerns about privacy and bias.

What This Means for Your Daily Life

You might think facial recognition only affects people who get arrested, but the technology touches all of our lives in ways we might not realize. Every time you’re in a store, airport, or public space, there’s a chance your face is being scanned and analyzed.

Some venues use facial recognition for security, others for marketing. Concert venues have used it to identify known troublemakers, while retailers use it to spot suspected shoplifters. Even schools have experimented with the technology for attendance and security purposes.

The legal landscape around these uses is still developing. In many places, there are no laws requiring businesses to tell you they’re using facial recognition or to get your consent. You might be in a facial recognition database without ever knowing it.

Looking Ahead: The Future of Facial Recognition and Privacy

As facial recognition technology continues to evolve and spread, the legal battles are just beginning. Courts are still figuring out how constitutional protections apply to this new technology. Lawmakers are debating what regulations are needed. And technology companies are working to address bias and accuracy problems.

The cases we’ve discussed—Williams, Arteaga, and Rite Aid—are setting important precedents. They’re establishing that people have rights when facial recognition is used against them, including the right to challenge the technology and understand how it works.

But many questions remain unanswered. Should facial recognition be banned entirely in certain contexts? What consent should be required before your face can be scanned? How can we ensure the technology doesn’t perpetuate racial bias? How do we balance security benefits with privacy rights?

Protecting Yourself in the Age of Facial Recognition

While lawmakers and courts work out these bigger questions, there are steps you can take to protect your privacy. Learn about your state’s laws regarding biometric data. When possible, opt out of facial recognition features on social media and other platforms. Be aware that your face might be scanned in public spaces, and consider whether you’re comfortable with that.

If you’re ever arrested or charged based on facial recognition evidence, remember the Arteaga case—you have the right to challenge that evidence and demand information about how the system identified you.

Conclusion: Balancing Innovation and Rights

Facial recognition technology isn’t inherently good or evil—it’s a tool that can be used responsibly or irresponsibly. The cases we’ve examined show both the promise and the peril of this technology. When used properly, with appropriate oversight and safeguards, it can enhance security and convenience. When used carelessly, it can violate privacy, perpetuate bias, and destroy lives.

The legal battles happening now will shape how this technology is used for decades to come. As citizens, we all have a stake in ensuring that facial recognition serves society while protecting our fundamental rights to privacy and due process.

The conversation about facial recognition and privacy is far from over. In fact, it’s just getting started. As the technology becomes more powerful and more pervasive, the need for clear legal frameworks and strong protections becomes more urgent. The cases of Robert Williams, Mr. Arteaga, and Rite Aid’s customers remind us that behind every algorithm is a human being whose rights and dignity must be protected.

References

Federal Trade Commission v. Rite Aid Corp., Case No. 2:23-cv-5023 (E.D. Pa. Dec. 19, 2023).

McKnight, P. (2021). Historic Biometric Privacy Suit Settles for $650 Million. Business Law Today.

National Institute of Standards and Technology. (2019). NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software. NIST.

Naughton, M. C. (2025). Considering Face Value: The Complex Legal Implications of Facial Recognition Technology. American Bar Association Criminal Justice Magazine.

State v. Arteaga, 476 N.J. Super. 36 (App. Div. 2023).
