Facial recognition tools are improving every day. But the images behind the algorithms, including yours, may be used without permission.

The pictures you share, the selfies you post, and even faces that appear in the background of others' photos are increasingly used to train artificial intelligence systems. This is happening across social media platforms, surveillance networks, and publicly available databases—with little to no consent from those being analyzed.

Researchers and human rights organizations in the United States and Europe are raising alarm. While facial recognition technology is advancing rapidly, the laws meant to protect individuals are lagging far behind.

A Global Data Grab Without Consent

To train accurate face recognition models, developers use enormous image datasets. These are often scraped from the internet—social media, video footage, and public image libraries. Your face may already be part of these datasets.

Clearview AI is a prime example. The company collected over 20 billion facial images from across the internet without notifying users. Regulatory bodies in Europe have ruled this practice illegal under the General Data Protection Regulation (GDPR), and fines have followed.

Academic research confirms the issue. A 2024 study in the journal Neurocomputing found that many AI developers bypass informed consent, prioritizing dataset size over transparency. Another study in Pattern Recognition Letters warns that datasets assembled without ethical oversight invite both bias and abuse.

What the European Union Is Doing

The European Union has taken steps to regulate biometric data through the GDPR and the newly adopted AI Act. The AI Act prohibits the untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.

EU lawmakers also require that any AI system dealing with biometric identification must go through strict risk assessments, human oversight, and transparency checks. Still, enforcement is complex—and slow.

While the GDPR provides stronger protections than most global frameworks, it still allows exceptions for law enforcement and certain security uses, which creates loopholes for misuse.

What Is Happening in the United States

The United States has no national law that regulates how companies collect or use biometric data. A few states—like Illinois and California—have their own rules, but in most of the country, there are no legal protections stopping companies from using your face to train AI.

A 2025 report from Georgetown University’s Center on Privacy and Technology revealed that at least 25 major technology firms were using facial images for AI research without notifying the individuals whose faces appeared.

Major lawsuits are underway. Meta (formerly Facebook) and Google are both under scrutiny for allegedly using user-uploaded photos to train AI without clear consent. But legal processes are slow, and penalties are often weak compared to the scale of data involved.

Ethical and Technical Risks

Beyond privacy, facial recognition systems can misfire—especially with people of color, women, and non-binary individuals. The MIT Media Lab's Gender Shades study found error rates as high as 34 percent for darker-skinned women in commercial systems, compared with near-zero rates for lighter-skinned men.

Bias in training data leads to discrimination in real life: job hiring tools, airport security, and predictive policing may all be skewed.
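One reason such disparities go unnoticed is that vendors often report a single aggregate accuracy number. Researchers surface bias by computing error rates separately per demographic group. A minimal sketch of that per-group breakdown, using made-up records (not the MIT figures):

```python
from collections import defaultdict

# (group, true_label, predicted_label) — illustrative records, not real data
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 1, 0),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    errors[group] += int(truth != pred)

for group in sorted(totals):
    print(f"{group}: {errors[group] / totals[group]:.0%} error rate")
```

Here the aggregate error rate (3 mistakes in 8) would mask the fact that every error falls on one group—the same pattern, at scale, that audits of commercial systems have revealed.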

Experts warn that synthetic mimicry—AI systems that don’t understand, but repeat—can make the consequences even worse. Systems trained on flawed or stolen data can embed bias deeper into infrastructure without any way to audit or reverse the damage.

What You Can Do About It

  1. Use privacy tools: Programs like Fawkes (developed by the University of Chicago) can subtly distort your images before you upload them, making them unusable for facial recognition training.

  2. Opt out where possible: Some platforms now allow you to opt out of AI training—though it is rarely the default setting.

  3. Support better legislation: Join organizations advocating for stronger AI transparency laws and biometric consent rules.

  4. Avoid platforms that don't disclose AI training practices: If a platform cannot explain how your data is used, your data is likely being used without meaningful boundaries.
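To make tip 1 concrete: cloaking tools work by perturbing pixels too subtly for a human to notice but enough to shift the features a recognition model extracts. The toy sketch below illustrates only the bounded-perturbation idea with random noise—Fawkes itself computes targeted, feature-space perturbations, and the function here is hypothetical, not its actual API:

```python
import numpy as np

def cloak_image(pixels: np.ndarray, epsilon: float = 3.0, seed: int = 0) -> np.ndarray:
    """Add a small, bounded perturbation to an 8-bit image array.

    Toy illustration only: real cloaking tools optimize the perturbation
    against a face-recognition feature extractor rather than using noise.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=pixels.shape)
    cloaked = np.clip(pixels.astype(np.float64) + noise, 0, 255)
    return cloaked.astype(np.uint8)

# A stand-in 4x4 grayscale "image" of uniform mid-gray pixels
img = np.full((4, 4), 128, dtype=np.uint8)
out = cloak_image(img)
# No pixel moves by more than epsilon, so the change is imperceptible
assert np.all(np.abs(out.astype(int) - img.astype(int)) <= 3)
```

The design constraint is the same one the real tools face: the perturbation budget (epsilon here) must stay small enough to preserve the photo for human viewers while still disrupting automated feature extraction.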

Conclusion

Facial recognition is no longer the future—it is the present. But the rules about whose faces are allowed to power the AI revolution are still being written. Whether you like it or not, your face may already be part of a machine learning model somewhere. The only question is whether that happened with your knowledge—or in complete silence.

Your Face Is Being Used for AI Training Without Your Consent

By Maria Johansen, Founder and Editor, Konsultbiz News
Published: 10 October 2025