Deepfakes: How to Spot Manipulated Media
What Are Deepfakes?
Deepfakes are synthetic media, including videos, images, and audio recordings, created or altered using artificial intelligence, specifically deep learning. The term combines "deep learning" with "fake" and covers a range of manipulations: superimposing one person's face onto another's body, generating images of entirely fictional people, altering lip movements to match fabricated audio, and creating realistic video of events that never occurred.
Deepfake technology has advanced rapidly. What once required specialised expertise and powerful computing equipment can now be achieved with consumer-grade hardware and freely available software. This accessibility means that deepfakes are no longer limited to state-sponsored disinformation campaigns. They are increasingly used in personal harassment, financial fraud, and online scams.
How Deepfakes Are Used Maliciously
The misuse of deepfake technology falls into several categories:
- Non-consensual intimate imagery: The most common form of deepfake abuse involves superimposing someone's face onto explicit content without their consent. This is a form of image-based sexual abuse and is illegal in the UK.
- Financial fraud: Deepfake videos of company executives have been used to authorise fraudulent wire transfers. Criminals have also created fake video endorsements from public figures to promote investment scams.
- Identity fraud: Deepfake technology can defeat facial recognition systems used for identity verification, enabling criminals to open accounts or access services under someone else's identity.
- Disinformation: Fabricated videos of politicians, journalists, or public figures making false statements can spread rapidly on social media, influencing public opinion before the manipulation is detected.
Practical Detection Tips
Whilst deepfakes are becoming increasingly convincing, current technology still leaves detectable artefacts. Train yourself to look for these indicators:
- Eye reflections: In genuine photos and videos, the light reflected in each eye should be consistent. Deepfakes often produce slightly different reflections in the left and right eye, a subtle but revealing flaw.
- Lip synchronisation: Watch whether the lip movements precisely match the audio. Deepfake videos often show slight misalignment between what is being said and how the lips move, particularly with consonant sounds.
- Facial boundaries: Look at the edges where the face meets the hair, ears, and neck. Deepfakes sometimes produce blurring, colour mismatches, or unnatural transitions at these boundaries.
- Blinking patterns: Some deepfake models produce subjects who blink less frequently than normal or who exhibit unnatural blinking patterns.
- Skin texture: Examine the skin closely. Deepfakes may produce overly smooth skin, inconsistent texture between the face and neck, or strange shadowing that does not match the lighting in the scene.
- Background inconsistencies: The background behind a deepfake subject may show warping, distortion, or inconsistencies that are not present in genuine footage.
- Unnatural head movements: If the subject moves their head in a way that seems disconnected from their body or produces momentary distortions, the video may be synthetic.
Metadata and Technical Analysis
Beyond visual inspection, examining a file's metadata can provide clues. Genuine photos and videos typically contain metadata that records the device used, the date and time of capture, and other technical details. Deepfakes generated by AI tools often lack this metadata entirely or contain generic or inconsistent values. Bear in mind, however, that many platforms strip metadata on upload, so its absence is a clue rather than proof of manipulation.
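As an illustration, the sketch below uses the Pillow imaging library (an assumption; any EXIF reader would do) to list whatever metadata a file carries. The freshly generated image stands in for AI-produced content, which typically records no camera details at all, whereas a genuine photo would normally populate tags such as Make, Model, and DateTime:

```python
from PIL import Image
from PIL.ExifTags import TAGS

def describe_exif(path):
    """Return a dict of human-readable EXIF tags; empty if none present."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# A synthetic image (standing in for AI-generated content) carries no
# camera metadata: no device model, no capture date, no exposure settings.
Image.new("RGB", (64, 64), (128, 128, 128)).save("sample.jpg")
tags = describe_exif("sample.jpg")
print(tags or "no EXIF metadata: treat provenance as unverified")
```

The same function run on a photo straight from a phone camera would usually return a populated dictionary, which is why an empty result on a supposedly original photo is worth noting.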
Several tools and services exist for deepfake detection:
- Reverse image searches can reveal whether a supposedly original image appears elsewhere on the internet in a different context.
- Forensic analysis tools such as FotoForensics can highlight areas of an image that have been digitally altered.
- AI-based detection platforms are being developed by technology companies and research institutions to identify deepfakes automatically.
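The technique behind forensic tools such as FotoForensics is error level analysis (ELA): re-save the image at a known JPEG quality and diff it against the original, since regions pasted or edited after the original compression tend to show a different error level from the rest of the frame. A minimal sketch, again assuming Pillow is available:

```python
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    """Re-save the image at a fixed JPEG quality and return the pixel-wise
    difference; edited regions tend to stand out against the rest."""
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)
    return ImageChops.difference(original, Image.open("_resaved.jpg"))

# Demo on a flat synthetic frame: an untouched, uniform image re-saves
# almost losslessly, so the error level stays near zero everywhere.
Image.new("RGB", (64, 64), (128, 128, 128)).save("frame.jpg", quality=95)
ela = error_level_analysis("frame.jpg")
print(max(band_max for _, band_max in ela.getextrema()))
```

In practice the difference image is brightened before inspection, and interpreting it takes care: high error levels can also come from legitimate re-saving, so ELA is one signal among several rather than a verdict.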
What to Do If You Find a Deepfake of Yourself
Discovering a deepfake that uses your likeness can be deeply distressing. Take these steps:
- Document it: Save copies of the deepfake content, including screenshots, URLs, and the platforms where it appears. This evidence will be important for any reports or legal proceedings.
- Report it to the platform: Report the content on the platform where it is hosted. Most social media platforms, including KF.Social, have policies against manipulated media and will remove it upon review.
- Use StopNCII: If the deepfake involves intimate imagery, StopNCII.org is a free tool that creates a digital fingerprint (hash) of the image, which participating platforms use to detect and remove the content proactively.
- Report to the police: In the UK, sharing non-consensual intimate images (including deepfakes) is a criminal offence. Report the incident to your local police, or to Action Fraud if financial fraud is involved.
- Seek support: Organisations such as the Revenge Porn Helpline (0345 6000 459) provide confidential support for victims of image-based abuse.
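The digital fingerprinting that StopNCII relies on is a form of perceptual hashing: unlike a cryptographic hash, a perceptual hash changes little under small edits, so re-uploads and lightly altered copies can be matched without the image itself ever being shared. StopNCII's actual algorithm differs; the average-hash toy below (pure Python, with a greyscale image represented as a 2-D list) simply illustrates the principle:

```python
def average_hash(pixels):
    """Perceptual fingerprint of a greyscale image given as a 2-D list:
    each pixel becomes one bit, set when brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return int("".join("1" if p > mean else "0" for p in flat), 2)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# An 8x8 horizontal gradient, a brightness-shifted copy, and a mirrored copy:
gradient = [[x * 32 for x in range(8)] for _ in range(8)]
brighter = [[p + 10 for p in row] for row in gradient]
flipped  = [list(reversed(row)) for row in gradient]

h1, h2, h3 = average_hash(gradient), average_hash(brighter), average_hash(flipped)
print(hamming(h1, h2))  # 0: uniform brightening leaves the fingerprint unchanged
print(hamming(h1, h3))  # 64: mirroring flips every bit of this fingerprint
```

This robustness to minor edits is what lets participating platforms recognise a fingerprinted image even after cropping, recompression, or filters, while the victim retains the original.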
Staying Informed
Deepfake technology evolves rapidly, and detection methods must keep pace. Stay informed about emerging threats and detection techniques through trusted sources such as the NCSC, which publishes regular updates on AI-related threats and countermeasures. Critical thinking remains your strongest tool: question the source, verify through alternative channels, and be sceptical of content that provokes a strong emotional reaction.