Social engineering is the most common initial access vector cybercriminals use to infiltrate organizations. With the rapid development of artificial intelligence, social engineering attacks are growing in scale year by year, and attackers' methods are becoming bolder.
How Does AI Advance Social Engineering Attacks?
Artificial intelligence is assisting cybercriminals in advancing their social engineering activities in various ways:
- Personalized Phishing: AI algorithms can analyze data from social media (such as background, interests, employment, contacts, affiliations, and locations) and various OSINT sources to create more personalized and convincing spear-phishing attacks.
- Localized and Contextual Content: Tools like ChatGPT, Copilot, and Gemini can help draft grammatically correct, contextually appropriate phishing emails that can be translated into any local language. AI can be prompted to imitate specific writing styles or tones and to craft phishing emails based on the recipient's responses or behaviors.
- Realistic Deepfakes: Threat actors use deepfake tools to create fake virtual personas and audio clones of senior executives and trusted business partners. Deepfakes are used to lure employees into sharing sensitive information, making transfers, or granting access to the organization's network.
The Latest Developments in AI Further Amplify Social Engineering Risks
In November 2022, ChatGPT brought large language models (LLMs) to the general public for free. In 2023, the world began adopting generative AI tools, and developers released a wave of features and products built on these LLMs. By the second half of 2024, a new iteration was rapidly emerging: AI-driven agents ("agentic AI") that can act autonomously and perform complex tasks.
As AI becomes accessible to everyone, we can foresee that cybercriminals will exploit agentic AI technology for malicious attacks. Here are some cases explaining how bad actors can leverage agentic AI to launch social engineering attacks:
Self-Improving, Adaptive, and Relentless Threats: One of the main advantages of agentic AI is its memory, which allows it to learn and adapt. As the AI interacts with more and more victims, it collects data on which types of messages or methods work best for particular demographics. It can then self-adjust, refining future phishing campaigns so that each subsequent attack is more convincing and effective.
Automated Spear Phishing: Non-agentic AI is essentially prompt-based; cybercriminals must provide specific inputs for the AI to create phishing emails. With agentic AI, malicious agents will autonomously collect data from social media profiles, craft phishing messages tailored to specific individuals or organizations, and keep sending them until the desired outcome is achieved.
Dynamic Targeting: AI agents may dynamically update or change their phishing campaigns based on the recipient’s responses or location, or factors such as holidays, events, or the target’s interests, marking a significant shift from static phishing attacks to highly adaptive and real-time social engineering threats. For example, if a phishing message is ignored, the AI may send a follow-up message with a more urgent tone.
Multi-Stage Activities: Agentic AI may be meticulously orchestrated to launch complex and multi-stage social engineering attacks. In short, the AI can be instructed to use data from one interaction to drive the next. For example, a phishing attack can entice someone to disclose a small amount of information in the first round of attacks. The AI can then use that information to plan the next steps.
Multi-modal Social Engineering: Autonomous AI agents may go beyond email, using or combining other communication channels such as SMS, phone calls, or social media in phishing attempts. For example, if a phishing email is ignored, the AI can use audio or video deepfakes for follow-up calls to increase the chances of target response.
Key Takeaways for Organizations
Here are some best practices and recommendations for organizations:
Using Agentic AI Against Agentic AI: To counter advanced social engineering attacks, consider building or acquiring an AI agent that can assess changes in the attack surface, detect irregular activities indicating malicious behavior, analyze global information to detect threats early, monitor user behavior deviations to uncover internal threats, and determine patch priorities based on vulnerability trends.
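One building block of such a defensive agent is monitoring user behavior for deviations from an established baseline. As a minimal sketch (the metric, user names, and the 3-sigma threshold are illustrative assumptions, not from the source), a z-score over each user's recent activity can flag sudden anomalies worth escalating:

```python
from statistics import mean, stdev

def behavior_anomaly_score(history, observed):
    """Z-score of an observed metric (e.g. daily file downloads)
    against a user's recent history; higher = more unusual."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0 if observed == mu else float("inf")
    return abs(observed - mu) / sigma

def flag_users(baselines, today, threshold=3.0):
    """Return users whose activity today deviates sharply from baseline."""
    return [user for user, history in baselines.items()
            if behavior_anomaly_score(history, today[user]) > threshold]

# Hypothetical daily-download counts per user (illustrative data).
baselines = {
    "alice": [12, 15, 11, 14, 13],
    "bob":   [20, 22, 19, 21, 20],
}
today = {"alice": 13, "bob": 95}     # bob spikes suddenly
print(flag_users(baselines, today))  # → ['bob']
```

In practice a real detection agent would combine many such signals (logins, access patterns, email behavior) and feed them into a richer model, but the baseline-and-deviation idea is the same.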
Leveraging AI-Based Security Awareness: Security awareness training is an essential component of enhancing human defensive capabilities. Organizations must go beyond traditional security training by utilizing tools that can assign engaging content to users based on risk scores and failure rates, generate quizzes and social engineering scenarios based on the latest threat dynamics, and trigger brief reviews.
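The risk-based assignment logic such tools apply can be sketched simply. In this hypothetical example (the scoring formula, tier thresholds, and module names are all assumptions for illustration), each user's risk score and phishing-simulation failure count determine which training tier they receive:

```python
def assign_training(users, modules):
    """Map each user to the training-module tier matching their
    combined risk: base score plus a penalty per phishing failure."""
    assignments = {}
    for name, profile in users.items():
        risk = profile["risk_score"] + 10 * profile["phish_failures"]
        if risk >= 60:
            tier = "intensive"
        elif risk >= 30:
            tier = "standard"
        else:
            tier = "refresher"
        assignments[name] = modules[tier]
    return assignments

# Illustrative users and module catalog.
users = {
    "carol": {"risk_score": 15, "phish_failures": 0},
    "dave":  {"risk_score": 40, "phish_failures": 3},
}
modules = {
    "refresher": ["annual-recap"],
    "standard":  ["spot-the-phish"],
    "intensive": ["deepfake-drills", "spear-phish-lab"],
}
print(assign_training(users, modules))
# → {'carol': ['annual-recap'], 'dave': ['deepfake-drills', 'spear-phish-lab']}
```

A commercial awareness platform would of course derive the scores from real telemetry and generate content dynamically; the point here is only that assignments track measured risk rather than a one-size-fits-all schedule.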
Preparing Employees for Agentic AI Social Engineering: Human intuition and vigilance are crucial for combating social engineering threats. Organizations must double down on fostering a cybersecurity culture: educate employees about the risks of social engineering and its impact on the organization, train them to recognize and report such threats, and provide them with tools to improve security behavior.

Gartner predicts that by 2028, one-third of our interactions with AI will shift from simply entering commands to fully interacting with autonomous agents that act on their own goals and intentions. Cybercriminals will be equally adept at exploiting these advancements for illicit activity. Organizations must strengthen their defenses now: prepare for this possibility, deploy their own AI-based cybersecurity agents, adopt AI-based security training, and instill a sense of security responsibility.