AI-Powered Social Engineering
A New Era of Social Engineering
Social engineering, the art of manipulating people into divulging confidential information or taking harmful actions, has existed for as long as communication itself. What has changed dramatically in recent years is the sophistication that artificial intelligence brings to these attacks. AI-powered tools enable criminals to craft messages that are grammatically flawless, contextually relevant, and tailored to individual targets with minimal effort.
Traditional phishing emails were often easy to spot. Poor grammar, generic greetings, and implausible scenarios gave them away. AI has largely eliminated these tell-tale signs, making modern social engineering attacks significantly more dangerous. The National Cyber Security Centre (NCSC) has flagged AI-enhanced phishing as a growing threat in its annual cyber threat assessments.
How AI Chatbots Craft Personalised Phishing
Large language models, the technology behind tools like ChatGPT, can generate human-sounding text on any topic in seconds. Criminals exploit this capability in several ways:
- Personalised messages at scale: An attacker can feed publicly available information about a target (gathered from social media, professional profiles, and company websites) into an AI tool and receive a custom-tailored phishing message. The resulting email might reference your recent holiday, your job title, your company's latest project, or a friend you recently tagged in a post.
- Multiple language fluency: AI tools produce natural-sounding text in dozens of languages, allowing attackers to target victims in their native language regardless of the attacker's own linguistic ability.
- Rapid iteration: If a phishing approach does not work, attackers can instantly generate alternative versions with different emotional hooks: urgency, curiosity, fear, or authority.
- Bypassing spam filters: Because AI-generated text is unique each time and does not match known phishing templates, it is more likely to evade automated detection systems.
AI Mimicking the Writing Style of People You Know
One of the most concerning developments is AI's ability to replicate an individual's writing style. Given a sample of someone's messages, posts, or emails, an AI model can produce new text that reads as though it were written by that person. Criminals can use this to impersonate a friend, colleague, or family member, sending you a message that sounds exactly like them.
Imagine receiving a message from a friend's compromised social media account asking you to transfer money urgently. The message uses their typical expressions, their usual greeting, and references a shared experience. With nothing to suggest the message was machine-generated, you might respond without hesitation. This technique is particularly effective on social platforms like KF.Social, where users communicate regularly with known contacts.
AI-Generated Scam Scripts
Beyond individual phishing messages, criminals use AI to develop entire scam scripts for phone calls, chat conversations, and even video interactions. These scripts are designed to guide a victim through a carefully structured interaction, responding to objections and building trust at each stage. Romance scams, investment fraud, and technical support scams all benefit from AI-generated scripts that sound natural and adapt to the conversation.
How to Tell the Difference
Whilst AI-generated content is increasingly difficult to distinguish from genuine communication, there are strategies you can use:
- Unexpected requests: Be suspicious of any message that asks you to take an unusual action, such as transferring money, sharing credentials, clicking a link, or downloading a file, regardless of who appears to have sent it.
- Emotional pressure: AI-crafted messages often create a sense of urgency or appeal to emotions. Phrases like "I need this right now" or "please do not tell anyone" are designed to override your critical thinking.
- Verify through another channel: If you receive an unusual request from someone you know, contact them through a different method. If they messaged you on KF.Social, call them. If they emailed you, send them a text. Confirm the request is genuine before acting.
- Ask a question only they would know: Pose a personal question that requires specific knowledge not available publicly. An AI impersonator will struggle with truly private details.
- Check for inconsistencies: Whilst AI-generated text is polished, it can sometimes be too polished. Messages from friends that lack their usual typos, slang, or conversational quirks may warrant a second look.
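The red flags above can be sketched as a toy heuristic. This is only an illustration with a few hypothetical example phrases, not a real detector; AI-generated messages deliberately vary their wording, so genuine protection still relies on the verification steps described in the list, not on keyword matching:

```python
import re

# Illustrative phrase lists for the red-flag categories described above
# (urgency, secrecy, unusual requests). These are hypothetical examples,
# not an exhaustive or reliable rule set.
RED_FLAG_PATTERNS = {
    "urgency": [r"\bright now\b", r"\burgent(ly)?\b", r"\bimmediately\b"],
    "secrecy": [r"\bdo not tell anyone\b", r"\bkeep this between us\b"],
    "unusual_request": [r"\btransfer (some )?money\b", r"\bgift cards?\b",
                        r"\bpassword\b", r"\bverification code\b"],
}

def red_flags(message: str) -> list[str]:
    """Return the red-flag categories whose phrases appear in the message."""
    text = message.lower()
    return [category for category, patterns in RED_FLAG_PATTERNS.items()
            if any(re.search(p, text) for p in patterns)]

msg = ("Hey, it's me. I need you to transfer money right now, "
       "please do not tell anyone.")
print(red_flags(msg))  # all three categories are flagged
```

Note that this sketch flags every category named in the message at once; the point it illustrates is that urgency, secrecy, and an unusual request appearing together is itself a stronger warning sign than any one of them alone.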
Protecting Yourself on KF.Social
On KF.Social, be particularly cautious of messages from accounts that have recently been created or that exhibit sudden changes in behaviour. If a long-time contact suddenly sends you unusual requests, their account may have been compromised and an AI tool may be generating messages on the attacker's behalf.
Report any suspicious messages through KF.Social's reporting system. Our moderation team analyses reported content and can take action to protect the wider community from AI-enhanced scam campaigns.
For the latest advice on recognising and defending against AI-powered threats, visit the NCSC's website, which regularly publishes updated threat intelligence and practical guidance for individuals and organisations.