4 ways AI will be a game changer for cybersecurity

Artificial intelligence poses immense challenges for cybersecurity – most of which we are only beginning to understand.

At a minimum, AI has the potential to cause enormous upheavals in the cybersecurity strategies of corporations and governments. Fundamental concepts like encryption, malware detection, and multi-factor authentication will all be put to the test. The sheer speed and computational power of AI also threatens to outmatch human defenders, potentially requiring entirely new modes of defense. But AI will also pose even more complex challenges for society at large, by undermining the veracity of data, our faith in reliable sources and trusted institutions, and by unleashing the most advanced psychological manipulation ever seen in human history.

Due to AI’s constantly evolving nature, it is hard to fathom the vast potential that “bad AI” could offer cybercriminals, foreign adversaries and other malicious actors. But by using current models as our guide, we can predict several critical areas where AI will tip the scales – and unleash dangerous new attacks that could undermine businesses, governments, the economy, and society more broadly.

Here are the top four threats the security industry is most concerned about:

1) Hacked or infected AI systems

When it comes to AI, one of the biggest threats of all is the possibility that these systems may be hacked or corrupted by malicious actors.

This is an incredibly important issue, because companies, government agencies, critical services like healthcare, and even entire industries, will soon come to rely on AI to make critical decisions that will have widespread implications for essential services, patient care, business deals, regulation, surveillance, you name it.

Hoodie, hacker and person at computer screen at night for cybersecurity, ransomware and fraud. Back of thief, spy hacking pc software for scam, phishing virus or crime of online firewall with malware
Hoodie, hacker and person at computer screen at night for cybersecurity, ransomware and fraud. Back of thief, spy hacking pc software for scam, phishing virus or crime of online firewall with malware · Sean Anthony Eddy via Getty Images

The most significant of these threats is data poisoning.

AI systems have to be trained with enormous data sets, in order to develop the right algorithms and capabilities before they are deployed into the real world. For example, image recognition software (such as facial recognition) must be trained to distinguish between different objects and people by first studying millions of labeled images. If a malicious actor can seed this data set with “poisoned” images (i.e., fake, deliberately misleading, or otherwise malicious images), they can jeopardize the AI system’s effectiveness. Even a small number of fake images can undermine an entire algorithm.

Another tactic is “prompt injection” which can be used to manipulate or corrupt LLMs (large language models) that utilize prompt-based learning. A low-tech example of this is the debacle that occurred with Microsoft’s Twitter chatbot, Tay.ai, back in 2016. Shortly after it launched, Tay quickly unraveled as it spewed racist, misogynistic and homophobic comments after being manipulated by malicious user inputs.