Scam Prevention
7 min

How to Detect Deepfakes: A Practical Guide to Spotting AI-Generated Media

Learn how to identify deepfake videos, images, and audio with visual tells, verification tools, and reverse image search techniques.

LOCK.PUB
2026-01-25

A video of a CEO announcing a merger. An audio clip of a politician making a controversial statement. A photo of a celebrity endorsing a product. Any of these could be completely fabricated using AI — and the technology is getting better every month.

Deepfakes are AI-generated or AI-manipulated media designed to look and sound authentic. They are used in scams, misinformation campaigns, and identity fraud. This guide covers the practical techniques you can use to detect them.

What Are Deepfakes?

Deepfakes use deep learning models — typically generative adversarial networks (GANs) or diffusion models — to create or alter video, audio, or images. The term covers:

  • Face swaps — placing one person's face onto another's body
  • Face reenactment — making a person appear to say or do things they never did
  • Voice cloning — generating synthetic speech that mimics a specific person
  • Full synthetic generation — creating entirely fictional people or scenes

Visual Tells in Deepfake Videos

Unnatural Eye Movement and Blinking

Early deepfakes had a well-known problem: the subjects rarely blinked. While newer models have improved, blinking patterns often remain irregular. Watch for:

  • Unusually infrequent or rapid blinking
  • Eyes that do not track naturally with head movement
  • Pupils that appear different sizes or shapes

Lip Sync Issues

Lip movements that do not quite match the audio are one of the most reliable indicators. Pay attention to:

  • Slight delays between words and lip movement
  • Lips that blur or distort during speech
  • Mouth shapes that do not correspond to the sounds being made

Facial Boundary Artifacts

The boundary where the generated face meets the original footage often reveals the manipulation:

  • Slight color or texture mismatch at the jawline or hairline
  • Blurring or softening around the edges of the face
  • Skin tone that shifts when the head turns

Lighting and Shadow Inconsistencies

AI models struggle with complex lighting. Look for:

  • Shadows that do not match the apparent light source
  • Reflections in eyes that show different environments
  • Skin highlights that remain static while the face moves

Hair and Accessories

Fine details like hair, earrings, and glasses are difficult for AI to render consistently:

  • Hair that appears blurred or painted rather than individual strands
  • Earrings that change shape between frames
  • Glasses frames that warp or disappear momentarily

Audio Deepfake Detection

Unnatural Speech Patterns

  • Monotone delivery or unusual rhythm
  • Breaths that sound mechanical or are absent entirely
  • Words that blend together without natural pauses

Background Audio

  • Inconsistent background noise (suddenly appearing or disappearing)
  • Audio quality that does not match the apparent recording environment
  • Echo patterns that change mid-recording

Emotional Mismatch

  • Voice that sounds calm while discussing something emotional
  • Stress patterns that do not match what a real speaker would exhibit
  • Laughter or sighs that sound artificial
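
One of these tells — unnaturally uniform pauses — can even be measured. The sketch below is a toy illustration, not a production audio analyzer (real tools work on spectral features, not raw amplitudes): it treats an audio clip as a list of amplitude samples, finds near-silent runs, and reports how much pause lengths vary. Perfectly uniform pauses are a hint that speech was stitched together synthetically.

```python
import statistics

def pause_lengths(samples, silence_threshold=0.05):
    """Lengths (in samples) of contiguous near-silent runs."""
    runs, current = [], 0
    for s in samples:
        if abs(s) < silence_threshold:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return runs

def pause_regularity(samples):
    """Population std deviation of pause lengths; near zero = suspiciously uniform."""
    runs = pause_lengths(samples)
    return statistics.pstdev(runs) if len(runs) > 1 else None

# Toy waveforms: bursts of "speech" (0.5) separated by silence (0.0).
synthetic = ([0.5] * 10 + [0.0] * 8) * 4          # every pause exactly 8 samples
natural   = ([0.5] * 10 + [0.0] * 5 +
             [0.5] * 10 + [0.0] * 13 +
             [0.5] * 10 + [0.0] * 7 +
             [0.5] * 10 + [0.0] * 11)

print(pause_regularity(synthetic))  # 0.0 — perfectly uniform pauses
print(pause_regularity(natural))    # > 0 — varied, human-like pauses
```

The threshold and the "uniform = suspicious" heuristic are assumptions for illustration; real detectors combine many such features.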

Detection Tools and Techniques

Reverse Image Search

If you suspect an image is synthetic or manipulated, use reverse image search to find its origin:

  • Google Images (reverse search) — finding original versions of manipulated photos
  • TinEye — tracking where an image has appeared online
  • Yandex Images — broader results for faces and locations
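
These services work by comparing compact "fingerprints" of images. As a rough illustration of the idea, here is a minimal average-hash (aHash) sketch in pure Python. Real pipelines decode images with a library such as Pillow; here each "image" is assumed to be an already-scaled 8x8 grid of grayscale values, which is a simplification.

```python
def average_hash(pixels):
    """Return a 64-bit perceptual hash of an 8x8 grayscale grid (values 0-255)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # Each pixel contributes one bit: above or below the mean brightness.
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance means visually similar images."""
    return bin(h1 ^ h2).count("1")

# Hypothetical pixel grids standing in for real images:
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
edited = [row[:] for row in original]
edited[0][0] = 255  # a light edit (e.g. recompression) changes few pixels
unrelated = [[255 - (r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]

print(hamming_distance(average_hash(original), average_hash(edited)))     # small
print(hamming_distance(average_hash(original), average_hash(unrelated)))  # large
```

A near-duplicate of the original yields a small Hamming distance even after edits, while an unrelated image yields a large one — which is why a manipulated photo can often be traced back to its source.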

AI Detection Tools

Several tools are specifically designed to detect AI-generated content:

  • Hive Moderation — detects AI-generated images and text
  • Content Credentials (C2PA) — checks for digital provenance metadata
  • FakeCatcher (Intel) — analyzes blood flow patterns in video pixels
  • Deepware Scanner — mobile app for scanning deepfake videos

Metadata Analysis

Authentic photos and videos usually carry metadata (such as EXIF data) from the recording device, while deepfakes often lack it or contain inconsistent values. Keep in mind that many platforms strip metadata on upload, so a missing EXIF block is not proof of manipulation by itself:

  • Check for camera make, model, and settings
  • Verify GPS coordinates if present
  • Look for editing software signatures
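
For a full metadata dump you would use a tool like exiftool or a library like Pillow, but even a coarse presence check is easy to sketch in pure Python. This is a simplification, not a parser — JPEG files begin with the SOI marker `FF D8`, and EXIF data lives in an APP1 segment (`FF E1`) whose payload starts with `Exif\x00\x00`:

```python
def has_exif_segment(jpeg_bytes: bytes) -> bool:
    """Coarse check: does this JPEG byte stream contain an EXIF APP1 segment?

    A real parser would walk the segment list; simply searching the bytes
    can in theory hit false positives, but is fine for a quick triage.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):
        return False  # not a JPEG at all
    return b"\xff\xe1" in jpeg_bytes and b"Exif\x00\x00" in jpeg_bytes

# Hypothetical byte strings standing in for real files:
with_exif = b"\xff\xd8\xff\xe1\x00\x10Exif\x00\x00" + b"\x00" * 16
stripped  = b"\xff\xd8\xff\xdb" + b"\x00" * 16  # quantization table, no APP1

print(has_exif_segment(with_exif))  # True
print(has_exif_segment(stripped))   # False
```

In practice you would read the bytes with `open(path, "rb").read()` and, if the segment is present, hand the file to a proper EXIF parser to inspect camera make, model, and GPS fields.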

Frame-by-Frame Analysis

Pausing a video and advancing frame by frame can reveal artifacts invisible at normal playback speed:

  • Facial features that flicker or shift between frames
  • Compression artifacts that appear only around the face
  • Background elements that warp near the subject's head
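
The "flicker between frames" tell can be approximated numerically: compare each frame's face region to the previous one and flag sudden jumps. In real use you would decode frames with a video library such as OpenCV; the sketch below substitutes small grayscale grids for frames, purely to illustrate the idea:

```python
def mean_abs_diff(frame_a, frame_b):
    """Average per-pixel brightness difference between two equal-sized frames."""
    total = sum(abs(a - b) for row_a, row_b in zip(frame_a, frame_b)
                for a, b in zip(row_a, row_b))
    pixels = len(frame_a) * len(frame_a[0])
    return total / pixels

def flicker_frames(frames, threshold=20.0):
    """Indices where frame-to-frame change exceeds a (tunable) threshold."""
    return [i for i in range(1, len(frames))
            if mean_abs_diff(frames[i - 1], frames[i]) > threshold]

# Mostly stable frames with one sudden glitch at index 3:
stable = [[100] * 4 for _ in range(4)]
glitch = [[200] * 4 for _ in range(4)]
frames = [stable, stable, stable, glitch, stable]

print(flicker_frames(frames))  # [3, 4] — the jump into and out of the glitch
```

The threshold is an assumption you would tune per video; camera motion and scene cuts also produce large diffs, so spikes are a prompt for closer manual inspection, not a verdict.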

Verification Methods

Source Verification

Before trusting media content, verify its source:

  1. Check the original publisher. Was it posted by an official account or an unknown source?
  2. Cross-reference with other sources. Does any other credible outlet have the same footage?
  3. Check the date. Is the content being presented in its original context?

Contextual Analysis

  • Does the content match what you know about the person?
  • Is the setting plausible?
  • Are there other witnesses or recordings of the same event?

Contact the Subject

When possible, reach out to the person allegedly depicted in the content through verified channels.

Common Deepfake Scam Scenarios

  • CEO fraud — deepfake video or voice call impersonating an executive to authorize transfers
  • Romance scams — fake video calls using someone else's face to build false trust
  • Celebrity endorsements — fabricated videos of celebrities promoting scam products
  • Political misinformation — fake speeches or statements attributed to public figures
  • Extortion — synthetic explicit content created using a victim's likeness

What to Do If You Encounter a Suspected Deepfake

  1. Do not share it. Sharing amplifies the damage, even if you label it as potentially fake.
  2. Report it to the platform where you found it.
  3. Verify through official channels. Check the official accounts or websites of the people depicted.
  4. Use detection tools. Run the content through the tools listed above.
  5. Preserve evidence. Save the URL and a screenshot in case it is needed for reporting.
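
For step 5, one lightweight way to preserve evidence is to record the URL, a timestamp, and a cryptographic digest of the saved file, so you can later show the copy was not altered. This sketch uses only the Python standard library; the URL and media bytes are placeholders:

```python
import datetime
import hashlib
import json

def record_evidence(url, content_bytes, note=""):
    """Build a small evidence record: URL, UTC timestamp, and a SHA-256
    digest of the saved media for later integrity verification."""
    return {
        "url": url,
        "saved_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content_bytes).hexdigest(),
        "note": note,
    }

# Hypothetical example: bytes of a downloaded clip you suspect is fake.
suspect_clip = b"...downloaded media bytes..."
record = record_evidence("https://example.com/suspect-video", suspect_clip,
                         note="Possible deepfake of a public figure")
print(json.dumps(record, indent=2))
```

If the file ever needs to be handed to a platform or investigator, rehashing it and comparing against the recorded digest demonstrates it is the same file you originally saved.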

Protecting Yourself

  • Be skeptical of sensational content, especially from unfamiliar sources
  • Verify before sharing — a 30-second check can prevent misinformation spread
  • Keep your own images and videos private when possible to reduce source material for deepfakes
  • Use services like LOCK.PUB to share sensitive content through password-protected links rather than posting publicly

Trust Through Verification

In a world where seeing is no longer believing, verification matters more than ever. When you share important information, use channels that provide built-in trust. LOCK.PUB lets you share links, memos, and messages through password-protected access on a consistent, verified domain — so recipients know the content is authentic and comes from someone they trust.


