AI-Generated Deepfake Scams
What Are Deepfakes?
Deepfakes are synthetic media generated by artificial intelligence. Machine learning techniques can now produce highly convincing fake videos, audio recordings, and images of real people. The technology can make someone appear to say or do things they never actually said or did, with a level of realism that is increasingly difficult to distinguish from genuine footage.
While deepfake technology has legitimate applications in entertainment, education, and accessibility, it has also become a powerful tool for scammers. The barrier to creating deepfakes has dropped significantly, with freely available software capable of producing convincing results from just a few seconds of sample audio or a handful of photographs.
How Scammers Use Deepfakes
Video Call Impersonation
One of the most alarming applications of deepfake technology in fraud is real-time video call impersonation. Scammers use AI to overlay a fabricated face and voice onto their own during a live video call, making it appear as though you are speaking with someone you know and trust. This technique has been used in several high-profile cases:
- Employees have been tricked into transferring large sums of money after video calls with what appeared to be their CEO or finance director.
- Family members have received video calls from someone who looked and sounded like a relative, requesting urgent financial assistance.
- Investors have been convinced to commit funds after video presentations by fabricated versions of known business figures.
The technology is not yet perfect, and real-time deepfakes may exhibit subtle irregularities such as slightly unnatural facial movements, inconsistent lighting, or brief glitches. However, these flaws are becoming less noticeable as the technology improves.
AI-Cloned Voices
Voice cloning technology can replicate a person's voice from just a few seconds of recorded audio, which might be obtained from social media videos, voicemail greetings, or public speaking engagements. Scammers use cloned voices for:
- Vishing (voice phishing): Calling victims while impersonating a trusted individual, such as a family member, colleague, or bank representative.
- Voicemail fraud: Leaving convincing voicemail messages that prompt the victim to call back a fraudulent number or take urgent action.
- Authorisation bypass: Attempting to defeat the voice-based security verification systems used by some banks and organisations.
Fabricated Video Evidence
Scammers may create deepfake videos to fabricate evidence in support of a fraud. Examples include a fake video "testimonial" from a well-known public figure endorsing an investment opportunity, or fabricated footage of a supposed business meeting used to lend credibility to a scam proposal. These videos are often shared on social media or in private messages to convince victims that an opportunity is genuine.
How to Verify Identity When Deepfakes Are Possible
As deepfake technology advances, visual and auditory confirmation alone can no longer be considered reliable proof of identity. Adopt these verification practices:
- Use a pre-agreed code word: Establish a secret word or phrase with close contacts that can be used to verify identity in unusual situations, particularly when money or sensitive information is involved.
- Call back on a known number: If you receive a suspicious video or voice call, end it and call the person back using a number you already have saved in your contacts, not a number provided during the suspicious call.
- Ask unexpected questions: During a video call, ask the person something specific that only they would know. A deepfake operator impersonating someone will struggle to answer personal questions convincingly.
- Look for technical artefacts: While deepfakes are improving, current real-time systems may show irregularities around the edges of the face, inconsistent blinking patterns, unnatural mouth movements, or brief visual distortions during sudden head movements.
- Verify through a separate channel: If a colleague requests an urgent transfer during a video call, confirm the request through a different communication channel, such as a text message or in-person conversation.
- Do not trust video alone for high-stakes decisions: For any request involving money, sensitive data, or significant actions, always verify through multiple independent channels regardless of how convincing the video appears.
Deepfakes and Social Media
On social platforms like KF.Social, deepfake content can be used to create fake profiles, fabricate endorsements, or produce misleading content designed to manipulate public opinion. If you encounter content that seems inconsistent with what you know about a person, or if a public figure appears to be endorsing something unexpected, consider the possibility that the content may be synthetic.
Report suspected deepfake content to the platform where you encountered it. On KF.Social, you can use the content reporting feature to flag potentially fabricated media for review.
The Evolving Threat
Deepfake technology is advancing rapidly, and the scams that use it will continue to grow more sophisticated. The National Cyber Security Centre (NCSC) monitors emerging threats and provides updated guidance as new attack methods develop. Staying informed about these developments is essential, as the verification strategies that work today may need to evolve alongside the technology.
The core principle remains constant: never rely solely on what you can see or hear to verify someone's identity. Always use multiple, independent verification methods, particularly when money, credentials, or sensitive information is at stake.