Trend Micro Predicts Emergence of Deepfake-Powered Malicious Digital Twins

The age of hyper-personalized attacks is almost upon us, warns security leader

DALLAS, Dec. 16, 2024 /PRNewswire/ -- Trend Micro Incorporated (TYO: 4704; TSE: 4704), a global cybersecurity leader, today warned that highly customized, AI-powered attacks could supercharge scams, phishing and influence operations in 2025 and beyond.


To read Trend Micro's cybersecurity predictions for 2025, The Easy Way In/Out: Securing The Artificial Future, please visit: https://www.trendmicro.com/vinfo/us/security/research-and-analysis/predictions/the-artificial-future-trend-micro-security-predictions-for-2025

Jon Clay, VP of Threat Intelligence at Trend Micro: "As generative AI makes its way ever deeper into enterprises and the societies they serve, we need to be alert to the threats. Hyper-personalized attacks and agent AI subversion will require industry-wide effort to root out and address. Business leaders should remember that there's no such thing as standalone cyber risk today. All security risk is ultimately business risk, with the potential to impact future strategy profoundly."

Trend's 2025 predictions report warns of the potential for malicious "digital twins," in which breached or leaked personally identifiable information (PII) is used to train an LLM to mimic the knowledge, personality, and writing style of a victim or employee. Deployed in combination with deepfake video/audio and compromised biometric data, these twins could be used to commit convincing identity fraud or to "honeytrap" a friend, colleague, or family member.

Deepfakes and AI could also be leveraged in large-scale, hyper-personalized attacks to:

  • Enhance business email compromise (BEC), business process compromise (BPC), and "fake employee" scams at scale.

  • Identify potential pig butchering victims.

  • Lure and romance these victims before handing them off to a human operator, who can chat via the "personality filter" of an LLM.

  • Improve open-source intelligence gathering by adversaries.

  • Accelerate capability development during pre-attack preparation, improving the likelihood of attack success.

  • Create authentic-seeming social media personas at scale to spread mis/disinformation and scams.

Elsewhere, businesses that adopt AI in greater numbers in 2025 will need to be on the lookout for threats such as:

  • Vulnerability exploitation and hijacking of AI agents to manipulate them into performing harmful or unauthorized actions.

  • Unintended information leakage from GenAI.

  • Benign or malicious system resource consumption by AI agents, leading to denial of service.

Outside the world of AI threats

The report highlights additional areas for concern in 2025, including: