The "Truth" Button: How to Spot a Deepfake in 3 Seconds
Welcome back to AI Brews.
Over the last few weeks, we’ve talked a lot about creative power. We showed you how anyone can become a Bedroom Director using tools like Sora and Runway. We explored how professionals are cloning themselves for work using Synthesia. The story so far has been mostly optimistic—AI as leverage, AI as creativity, AI as scale.
But there’s another side to this story.
And it’s starting to matter more every day.
If a content creator can generate a realistic product ad without a camera, someone else can generate a fake CEO apology video without a boardroom. If a marketer can clone their voice for training videos, a scammer can clone it for a phone call. The same tools that empower creators also lower the cost of deception.
As we move into 2026—a year filled with elections, market volatility, and constant online noise—the question isn’t “Can this be fake?” anymore.
It’s “How quickly can I tell if this is real?”
This article isn’t about creating content.
It’s about learning how to pause, look closer, and press your own internal Truth button.
Before reaching for detection tools or browser extensions, it’s worth remembering something simple: AI still struggles with reality in small, human ways. Most deepfakes don’t fall apart in big, dramatic glitches. They fail in quiet details.
AI models have improved dramatically at rendering hands, but interaction with objects is still hard.
A hand holding a glass might look fine at first glance, but if you watch closely, you’ll notice the fingers don’t quite press into the object. The shadow might not match the lighting in the room. The glass may appear slightly fused to the skin.
Content creators who use tools like Runway often talk about this. They’ll generate a beautiful product shot, only to re-roll it multiple times because the hand doesn’t feel grounded in the scene. If creators notice it while polishing ads, you can notice it while scrolling.
Human faces are full of micro-behaviours we don’t consciously track. Blinking is one of them.
Deepfake videos often show people blinking too evenly or not enough. The eyes stay open just a bit too long. The face reacts, but the eyes don’t follow naturally.
If a video feels emotionally intense but the person’s eyes seem strangely calm or mechanical, that discomfort is often your brain detecting something artificial.
Real cameras capture imperfections. Slight grain in low light. Uneven skin texture. Tiny flyaway hairs. These flaws are normal.
AI-generated video tends to smooth everything out. Skin can look waxy. Hair sits too neatly. Lighting feels evenly balanced even in messy environments.
Creators using Sora often praise this smoothness because it looks cinematic. Viewers should treat that same smoothness as a signal to slow down and look again.
Sometimes, the fake is genuinely good. That’s when software becomes useful—not as a replacement for judgment, but as a second opinion.
Video grabs attention, but audio causes panic.
There have already been cases where people received phone calls in which an AI-cloned voice of a family member asked for urgent help. These scams work because emotion overrides logic.
Tools like McAfee’s Project Mockingbird are designed for exactly this problem. They listen for patterns in synthetic speech that humans can’t hear, such as unnatural frequency consistency or missing breath artifacts. When something sounds off, the software flags it before you react.
Journalists, investors, and PR teams often use platforms like Reality Defender. Instead of simply labelling content as fake or real, these tools analyse where manipulation likely occurred.
For example, the system might highlight mismatched lip movement or inconsistent lighting between frames. This is especially important for industries where a single viral video can move markets or damage reputations.
For casual users, tools like Hive AI and Sensity offer browser-level detection. Right-clicking an image to check if it’s AI-generated is becoming as normal as checking grammar in a document.
It’s not perfect, but it’s enough to slow down the spread of obvious misinformation.
The long-term solution to deepfakes isn’t catching every fake. It’s proving what’s real.
This is where Content Credentials (C2PA) come in. Think of them as nutrition labels for media.
When content is created or edited using participating tools, metadata travels with the file. Clicking a small “CR” icon can show where the content originated, what tools touched it, and when changes were made.
If a video claims to be raw footage but lists an AI video generator in its history, you don’t need a debate. The evidence is already there.
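If you're curious what that history check looks like under the hood, here is a minimal sketch. It assumes the manifest has already been exported to JSON (for example with the open-source c2patool CLI) and that the layout follows the public C2PA spec; real manifests vary by tool, so treat the field names and keyword list as illustrative rather than as a verifier.

```python
import json

# Minimal sketch: scan an exported Content Credentials (C2PA) manifest store
# for signs that an AI video tool touched the file. Assumes the JSON layout
# follows the public spec ("manifests" -> "claim_generator"); real manifests
# vary by tool, so treat these field names as illustrative.

AI_TOOL_HINTS = ["sora", "runway", "synthesia", "generative"]


def flag_ai_history(manifest_path: str) -> list[str]:
    with open(manifest_path) as f:
        store = json.load(f)

    findings = []
    # Each entry under "manifests" records one creation or edit step,
    # and "claim_generator" names the software that produced it.
    for label, manifest in store.get("manifests", {}).items():
        generator = str(manifest.get("claim_generator", "")).lower()
        for hint in AI_TOOL_HINTS:
            if hint in generator:
                findings.append(f"{label}: generator mentions '{hint}' ({generator})")
    return findings


if __name__ == "__main__":
    for note in flag_ai_history("manifest.json"):
        print(note)
```

For everyday browsing, clicking the small "CR" icon does the same job without any code.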
Major platforms and creators are slowly adopting this because trust is becoming a feature, not a given.
Interestingly, many creators are now proactively labelling AI-generated content. Not because they’re forced to, but because transparency builds trust.
Marketing agencies using Runway and Synthesia often disclose AI usage in campaign descriptions. Educators creating AI-generated explainer videos include disclaimers. Even entertainment creators are starting to treat AI visuals like VFX—impressive, but acknowledged.
The audience isn’t rejecting AI content. They’re rejecting undisclosed AI content.
The next time a video triggers a strong reaction, pause for a moment.
Zoom in on the face. Watch the eyes. Listen for breathing. Look for context and source history.
Three seconds of attention can prevent hours of confusion.
AI hasn’t broken truth. It has exposed how fragile our assumptions about media were in the first place.
The future won’t belong to people who reject AI, nor to those who believe everything it produces. It will belong to people who learn how to question calmly and verify confidently.
In 2026, the smartest skill online isn’t creating content.
It’s knowing when not to believe it.
See you in our next article!
If this article helped you sharpen your deepfake radar, do have a look at our recent stories on Smart & Slow AI, Gen Z's new obsession, Perplexity's dominance, GPT Store, Apple AI, and Lovable 2.0. Share this with a friend who’s curious about where AI and the tech industry are heading next.
Until the next brew ☕