February 5, 2026

Unethical AI Examples: Deepfakes, Surveillance & Bias


Let's cut to the chase. When people ask for an example of AI being used unethically, they're not looking for a sci-fi plot. They're worried about the technology impacting lives right now—hurting reputations, deepening inequality, and making biased decisions behind a veil of code. The unethical use isn't always a mustache-twirling villain; it's often a combination of careless design, hidden bias, and willful ignorance of consequences.

I've spent years looking at where these systems fail. The most damaging examples aren't the flashy ones, but those that embed themselves into critical systems like justice, employment, and personal security. They're hard to spot, even harder to fight, and their harm is real.

1. Deepfakes as a Weapon of Personal Harassment and Fraud

Everyone talks about deepfakes in politics, but the most widespread and devastating unethical use is non-consensual synthetic media, overwhelmingly targeting women. This isn't a hypothetical.

Open-source tools like DeepFaceLab and face-swapping apps have put this technology in the hands of anyone with a mid-range laptop. The process? They scrape dozens of photos of a target from social media—Facebook, Instagram, LinkedIn. The AI model trains on those images, learning the person's face. Then, it swaps that face onto existing pornographic video content.

The result is a horrifyingly convincing fake used for revenge, extortion, or simply harassment.

The Niche, Ugly Reality Most Articles Miss

Mainstream reporting focuses on celebrity deepfakes. The real epidemic is in private Discord servers, Telegram channels, and dedicated forums where ordinary people are targeted. There are communities where users request specific individuals to be "deepfaked," sharing photos and target details. The victims aren't public figures with legal teams; they're ex-partners, classmates, or random women whose social media profiles were scraped. The harm is psychological torment, reputational ruin in their community, and a legal system utterly unequipped to help. A report from Sensity AI (a threat intelligence firm) found that over 90% of deepfakes online are pornographic, and virtually all target women.

Beyond pornography, deepfakes are a potent tool for fraud. In 2019, the CEO of a UK-based energy firm was tricked into transferring $243,000 after hearing what he believed was his boss's voice on the phone—an AI-generated voice clone. The technology required only a short sample of the real voice, easily obtained from a company podcast or YouTube interview.

The core unethical element here is the complete removal of consent and agency. A person's likeness is turned into a tool for their own humiliation or financial harm, with little recourse.

2. Biased Facial Recognition and Predictive Policing

This is where unethical AI wears a uniform. The promise was public safety. The reality has been mass surveillance and the amplification of racial bias.

Facial recognition systems, like those sold by Clearview AI (which scraped billions of images from the web without consent) or deployed by police departments, have well-documented accuracy disparities. Landmark studies, like NIST's 2019 Face Recognition Vendor Test, found that many of these systems falsely match Asian and Black faces 10 to 100 times more often than white faces, depending on the algorithm.
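If you want to see what "accuracy disparity" actually means in practice, here's a minimal sketch of the measurement behind findings like NIST's: a false match rate computed separately for each demographic group in a labelled evaluation set. The data and group names below are invented; a real audit runs millions of image pairs.

```python
# Minimal sketch (made-up data, made-up group names) of the measurement behind
# those findings: a false match rate computed separately per demographic group.
from collections import defaultdict

# Each record: (system_said_match, ground_truth_match, demographic_group)
evaluations = [
    (True,  False, "group_a"),   # a false match
    (False, False, "group_a"),
    (True,  False, "group_b"),
    (False, False, "group_b"),
    (True,  True,  "group_b"),
    # ... a real audit runs millions of image pairs
]

false_matches = defaultdict(int)
non_match_pairs = defaultdict(int)

for predicted, actual, group in evaluations:
    if not actual:                 # only truly non-matching pairs can yield false matches
        non_match_pairs[group] += 1
        if predicted:
            false_matches[group] += 1

for group, trials in non_match_pairs.items():
    print(f"{group}: false match rate = {false_matches[group] / trials:.2f}")
```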

Now, pair this flawed technology with "predictive policing" algorithms like PredPol (now Geolitica) or HunchLab. These tools analyze historical crime data to predict where future crime will occur. Here's the critical flaw everyone glosses over: historical crime data reflects policing bias, not actual crime rates.

If police have historically over-patrolled low-income, minority neighborhoods (for minor offenses like loitering), that area will be flooded with data points. The algorithm sees this and predicts more crime there, suggesting more patrols. This creates a pernicious feedback loop: more patrols lead to more arrests (for minor offenses), which feeds more data back into the algorithm, justifying even more patrols. It's digital redlining.
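Here's a toy simulation of that loop, with invented numbers and a deliberately simplified allocation rule (this is not any vendor's actual model). Both neighborhoods have the same underlying offense rate; the only difference is the historical patrol record they start with.

```python
# Toy simulation (numbers invented for illustration, not any vendor's model).
# Two neighborhoods with IDENTICAL underlying offense rates; "A" simply starts
# with more recorded incidents because it was historically patrolled more.
TRUE_OFFENSE_RATE = 0.05                 # the same in both neighborhoods
recorded_incidents = {"A": 100.0, "B": 20.0}
TOTAL_PATROLS = 100

for year in range(1, 6):
    total = sum(recorded_incidents.values())
    # The forecasting model allocates patrols in proportion to past records.
    patrols = {n: TOTAL_PATROLS * recorded_incidents[n] / total
               for n in recorded_incidents}
    # Offenses only enter the data where officers are present to record them.
    for n in recorded_incidents:
        recorded_incidents[n] += patrols[n] * TRUE_OFFENSE_RATE
    summary = {n: round(p) for n, p in patrols.items()}
    print(f"year {year}: patrols {summary}")

# Neighborhood A keeps roughly 83% of patrols every single year. The historical
# skew is never corrected, because the data can only ever confirm it.
```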

| AI System / Case | Unethical Mechanism | Primary Harm & Impact |
| --- | --- | --- |
| Clearview AI Facial Recognition | Mass scraping of online images without consent to build a searchable biometric database, sold to law enforcement. | Eliminates public anonymity, enables a perpetual line-up without oversight, chills free speech and assembly. Banned in multiple countries. |
| PredPol / Geolitica Predictive Policing | Uses historically biased policing data to forecast crime locations, reinforcing over-policing of minority communities. | Creates a discriminatory feedback loop, wastes resources on surveillance of communities rather than solving crimes, erodes trust. |
| Detroit Wrongful Arrest (2020) | Man wrongly arrested based on a false facial recognition match; the algorithm was the primary evidence. | Real-world consequence of bias: imprisonment based on flawed, opaque technology. The victim spent 30 hours in jail. |

I find this particularly insidious because it launders human prejudice through technology. Police chiefs can point to the "objective algorithm" as justification for their deployment strategies, abdicating responsibility for the discriminatory outcomes.

3. Discriminatory Hiring and CV-Screening Algorithms

Companies like HireVue, Pymetrics, and countless ATS (Applicant Tracking System) vendors promise to remove human bias from hiring. The result is often the opposite: encoding and scaling bias at the resume-screening stage.

Here's how it typically goes wrong. A company feeds its historical hiring data into a machine learning model. The model looks for patterns in the resumes of people who were hired over the last decade. It might learn that successful candidates often:

  • Went to certain "feeder" universities (which are often expensive and less diverse).
  • Used specific verbs or jargon.
  • Had no employment gaps longer than a certain length.
  • Played certain sports like lacrosse or rowing (a common proxy for an affluent background).

The algorithm isn't evaluating skill. It's evaluating similarity to past hires. If your company historically hired mostly men from Ivy League schools, the algorithm will downgrade resumes from women, state school graduates, or career-changers. Amazon famously scrapped an internal recruiting AI in 2018 for precisely this reason—it penalized resumes containing the word "women's" (as in "women's chess club captain").
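Here's a stripped-down sketch of that failure mode on synthetic data, with hypothetical feature names (to be clear, this is not Amazon's actual system): fit a classifier to a decade of biased hiring decisions, then look at what it learned to reward and punish. It assumes numpy and scikit-learn are installed.

```python
# Synthetic sketch of the failure mode (hypothetical features, NOT Amazon's model).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
feature_names = ["feeder_school", "elite_sport", "mentions_womens"]
X = rng.integers(0, 2, size=(n, 3)).astype(float)

# Simulated decade of past decisions: reviewers favored feeder schools and
# "elite" sports and penalized the word "women's" -- skill never enters the label.
past_hired = (0.6 * X[:, 0] + 0.4 * X[:, 1] - 0.5 * X[:, 2]
              + rng.normal(0, 0.2, n)) > 0.5

model = LogisticRegression().fit(X, past_hired)
print(dict(zip(feature_names, model.coef_[0].round(2))))
# Typical output: large positive weights on the proxies for affluence and a
# negative weight on "mentions_womens" -- the model has learned similarity to
# past hires, not ability, and will keep applying that pattern at scale.
```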

Gamified assessments, like those from Pymetrics, claim to be bias-free by testing cognitive and emotional traits. But if the benchmark for "good" traits is based on your current high-performing (and potentially homogenous) workforce, you're just baking in a new form of cultural fit bias.

The unethical core is the black box rejection. A human might have a biased reason for rejecting you, but you could theoretically appeal or challenge it. An AI simply sends a generic "we've decided to move forward with other candidates" email. There's no explanation, no avenue for appeal, and no way for the candidate to know they were filtered out by a flawed model. It systematizes discrimination while providing the company with plausible deniability.

How to Spot and Push Back Against Unethical AI Use

You can't fight what you can't see. Here are concrete signs that an unethical AI system might be in play, and what you can actually do.

Red Flags to Watch For

  • Opacity: The system makes significant decisions (loan denial, job rejection, risk score) but provides no meaningful explanation. The answer is always "the algorithm decided."
  • Inevitability Framing: Being told "this is just how the technology works" or "all our competitors use it" when questioning a dubious outcome.
  • Bias Denial: Claims that the system is "100% objective" or "mathematically neutral." All models have biases; honest developers admit and work to mitigate them.
  • Consent Bypass: Your image, voice, or personal data was used to train a model (e.g., for deepfakes or facial recognition) without your explicit, informed consent.

Practical Steps for Individuals and Professionals

If you're a job seeker: Try to bypass the algorithm. Network on LinkedIn to get a referral directly to a hiring manager. Tailor your resume with keywords from the job description, but understand this is gaming a flawed system, not a guarantee.

If you're a citizen concerned about surveillance: Support local and federal legislation that bans or places moratoriums on facial recognition use by police, like the efforts led by the ACLU. Ask your local city council if police use predictive policing algorithms and demand audits.

If you're a developer or manager: Advocate for and implement algorithmic impact assessments before deployment. Demand diverse test datasets and continuous bias auditing. Fight for transparency, even if it's internal. The book "Weapons of Math Destruction" by Cathy O'Neil is a mandatory starting point.
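As a concrete example of what "bias auditing" can look like, here's a minimal sketch of a selection-rate check using the EEOC's four-fifths rule of thumb. The outcomes and group labels are hypothetical, and a real assessment would cover far more than this one metric.

```python
# Minimal sketch of one audit an impact assessment might include: selection rates
# by group, flagged with the EEOC's "four-fifths" rule of thumb. Data is hypothetical.
def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs."""
    totals, selected = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(ok)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_flags(rates):
    """Flag any group whose selection rate is under 80% of the best group's rate."""
    best = max(rates.values())
    return {g: (r / best) < 0.8 for g, r in rates.items()}

outcomes = ([("group_a", True)] * 40 + [("group_a", False)] * 60
            + [("group_b", True)] * 15 + [("group_b", False)] * 85)
rates = selection_rates(outcomes)
print(rates)                     # {'group_a': 0.4, 'group_b': 0.15}
print(four_fifths_flags(rates))  # group_b flagged: 0.15 / 0.4 = 0.375 < 0.8
```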

Progress is slow. The EU's AI Act is a major step toward regulating high-risk AI. In the U.S., the Federal Trade Commission (FTC) has begun taking action against companies for biased and deceptive AI. But legal frameworks are years behind the technology.

Your Questions on Unethical AI, Answered

Can AI bias be completely eliminated from systems?

Complete elimination is a myth in practical terms. Bias is often baked into the data, which reflects historical and societal inequities. The goal shifts from 'elimination' to rigorous 'mitigation' and ongoing 'auditing.' It's a continuous process, not a one-time fix. You need diverse teams building the models, constant testing on edge cases, and transparency about a system's known limitations and failure rates.

How can I tell if a hiring algorithm is screening me out unfairly?

It's notoriously difficult for an individual to prove, which is the core of the problem. You might suspect it if you're highly qualified but get instant, generic rejections from companies known to use such tools. The red flag is a complete lack of human review. The practical step isn't to diagnose the algorithm but to network directly. Try to get your resume to a human hiring manager via LinkedIn or referrals, bypassing the automated gatekeeper entirely.

Are there any laws currently regulating unethical uses of AI like deepfakes?

Legislation is scrambling to catch up and is a patchwork. In the U.S., there's no comprehensive federal law yet. Some states have passed laws against non-consensual deepfake pornography. The EU's AI Act is a major step, proposing to ban certain unethical uses like social scoring and impose strict rules on high-risk AI. Currently, victims often rely on existing laws against harassment, defamation, or copyright infringement, which are ill-fitting tools for a new technological problem.