So far, we have talked about the many ways deepfakes can be a big issue for us: how they can harm our social lives, and why they are dangerously addictive, deceptive, and disturbingly easy to make, from AI-generated politicians giving fake speeches to influencers endorsing products they have never even heard of. The deepfake revolution is not just knocking at the door; it is already chilling on your couch. So the main question on our minds is: what can we do about it?

There are multiple ways to tell the fake from the real. You don’t need a PhD in computer vision to spot a deepfake; you just need to be a bit more observant. Watch for the following:
1. Watch the eyes and the mouth: Deepfakes often have strange blinking patterns or unnatural lip sync. If the mouth moves weirdly or the eyes look “dead” or glassy, that is a red flag. Deepfake spotted.
2. Check lighting inconsistencies: Fake videos often have uneven lighting between the face and the background.
3. Glitches in facial expressions: Look closely for overly smooth skin, jerky head movements, or parts of the face that seem to “melt” briefly.
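The blink patterns from point 1 can actually be quantified, not just eyeballed. Below is a minimal Python sketch of the eye aspect ratio (EAR), a standard metric used in blink-detection research. It assumes you already have six (x, y) landmarks per eye per frame from some face-landmark library; the closed-eye threshold of 0.2 is a commonly used but illustrative value, not a universal constant.

```python
import math


def eye_aspect_ratio(pts):
    """Eye aspect ratio from six (x, y) eye landmarks, ordered
    corner, upper-lid x2, corner, lower-lid x2. The ratio drops
    toward 0 when the eye closes; a natural blink is a brief dip,
    while deepfakes often show flat or erratic traces."""
    p1, p2, p3, p4, p5, p6 = pts
    vertical = math.dist(p2, p6) + math.dist(p3, p5)
    horizontal = math.dist(p1, p4)
    return vertical / (2 * horizontal)


def blink_rate(ear_trace, fps, closed=0.2):
    """Blinks per minute from a per-frame EAR trace.
    Humans blink roughly 15-20 times a minute; a face that
    never dips below the threshold is worth a second look."""
    blinks, below = 0, False
    for e in ear_trace:
        if e < closed and not below:
            blinks, below = blinks + 1, True
        elif e >= closed:
            below = False
    return blinks * 60 * fps / len(ear_trace)
```

For example, a 2-second clip at 30 fps with a single dip in the EAR trace works out to 30 blinks per minute, i.e. plausibly human; a long clip with zero dips is suspicious.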
Use Technology Against Itself
AI made this mess, but AI can help clean it up, too. Several companies and research labs are working on deepfake detection tools, some of which are available to the public. You can use these AI tools to analyze a video and get a confidence score for whether it is fake. Other AI services track and detect deepfake threats globally. Browser plugins can flag fake media as you browse, and mobile tools can check audio and video against databases of known fakes. Install one of these and share it with friends and family for additional safety.
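These tools don’t share a single standard API, so here is a toy Python sketch of the general idea behind a video-level confidence score: a hypothetical detector scores each frame from 0.0 (real) to 1.0 (fake), and the per-frame scores are aggregated into one verdict. Both thresholds here are invented for illustration, not taken from any real product.

```python
def video_verdict(frame_scores, frame_threshold=0.7, flag_fraction=0.3):
    """Aggregate hypothetical per-frame fake probabilities into a
    video-level verdict. frame_threshold and flag_fraction are
    made-up illustrative values: if at least 30% of frames score
    0.7 or higher, the whole video is called likely fake."""
    flagged = sum(s >= frame_threshold for s in frame_scores) / len(frame_scores)
    confidence = sum(frame_scores) / len(frame_scores)  # mean fake score
    verdict = "likely fake" if flagged >= flag_fraction else "likely real"
    return verdict, round(confidence, 2)
```

A clip where half the frames look synthetic would come back as ("likely fake", 0.5); a clean clip stays "likely real" with a low mean score.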
Lock Down Your Face and Voice
Think you are not famous enough to be faked? Think again. AI models do not need much to learn your face or voice; sometimes just a few seconds of a podcast or an Instagram video is enough. To avoid this, you should:
- Avoid oversharing videos/photos in high resolution unless necessary.
- Use privacy settings on social media to limit public access to your content.
- Consider watermarking personal videos, especially if you’re a creator or public figure.
- Disable facial recognition logins on unsecured devices or platforms.
Demand Verification Layers
Let us face it: you cannot check every video you see, so platforms need to step up. Start demanding media authenticity layers in the apps you use.
- Support platforms that use Content Provenance standards, like Adobe’s C2PA, which tracks a photo or video’s origin and edits.
- Push for verified source indicators on YouTube, Instagram, and X (Twitter). Like blue ticks, but for content authenticity.
- Encourage platforms to label AI-generated content clearly, not just in fine print.
- Laws are coming, but public pressure moves faster.
Educate Everyone

Let us face it, education and awareness go a long way. You might be a tech-savvy Gen Z who is handy with AI, but not everyone has that level of knowledge, especially the elderly. Grandma forwarding that WhatsApp video of a “famous doctor” saying aliens created COVID? That will not change on its own. What the public needs is literacy campaigns, just like the ones we had for phishing scams and fake news: awareness workshops in schools, colleges, and workplaces; bite-sized videos on spotting fakes; and shared lists of trusted news sources and tips on how to verify content. Get involved in intergenerational tech learning, because deepfakes do not discriminate by age.
Keep reading Foramz.com for your daily source of moral support.