If there is a defining buzzword of the last few years, it’s probably artificial intelligence (AI). Barely a day passes without a company, tech entrepreneur, or even government proselytising about the technology and its transformative potential. However, when it comes to AI and cybersecurity, the picture is blurrier.
We often hear about the negative side of AI. Justifiably so. As the UK's National Cyber Security Centre (NCSC) puts it:
“AI will almost certainly continue to make elements of cyber intrusion operations more effective and efficient, leading to an increase in frequency and intensity of cyber threats.”
Nevertheless, things are more complicated than they first appear. Of course, AI can be used by criminals to supercharge cyber threats. But it can also be used to create stronger defences than ever before. In this blog, we’ll look at both the good and the bad, before asking how businesses looking to leverage AI can do so safely.
What do we mean by AI?
For the purposes of this explanation, we’ll be focusing on the two most widely used types of AI: generative and agentic (there are others).
Generative AI
This is the type of AI you probably use in your daily life – think ChatGPT, Claude, Microsoft Copilot or Perplexity. Also known as large language models (LLMs), generative AI uses existing data to ‘generate’ new content like text, video, images, audio or code. Crucially, this type of AI relies heavily on human inputs or ‘prompts’ to create things.
Agentic AI
Agentic AI is a type of artificial intelligence designed to operate more or less autonomously, with minimal human supervision. Unlike generative AI, agentic AI is proactive and has the agency to adapt to context and pursue complex goals without prompting or guidance, at least in theory.
Both types of AI can be put to a wide variety of tasks, from software development to customer support. Plus, they can even be used for cybersecurity, potentially offering levels of protection that businesses would’ve needed very deep pockets to acquire just a few years ago.
So far, so good, we hear you say. What’s the catch?
Well, like any powerful technology, AI isn’t inherently good or bad. The key is how it’s used and who is using it.
AI and cybersecurity: the bad
For all its transformative potential, AI has a darker side. Let’s deal with that first. We’re going to split this into two sections: one dealing with the non-malicious cybersecurity risks AI raises, and the other, how the bad guys are using it to their advantage.
Hallucinations and human error
Hallucinations
Have you ever asked an LLM a question and felt that the answer it gave you wasn’t quite right? You weren’t imagining things. A common problem, particularly with LLMs, is AI hallucination. This is exactly what it says on the tin. Sometimes, generative AI models present incorrect or misleading information as fact.
There are a few reasons for this, ranging from being trained on poor-quality data to unclear prompts from the human guiding it. The technical how and why isn't particularly important here; what matters is how this becomes a cyber risk.
Unfortunately, it’s not just content that LLMs hallucinate. If you’re using one to code, it may write scripts containing open-source software packages that don’t exist. Why does this matter? Well, cybercriminals have devised a way to use AI hallucinations to their own malicious ends – a process called “slopsquatting”.
Basically, a cybercriminal could publish a fake package, containing malware, to an official repository with the same details as the hallucinated one. When another user prompts the same LLM to generate code and it returns the same hallucinated response, the victim would be directed to download the malicious package, exposing their systems to malware.
This might sound unlikely. After all, what are the chances of a different user generating the same code? Sadly, it's more common than you might think. According to research from the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma, package hallucination is a common problem with LLM-generated code. Across all the LLMs tested, researchers found that around 20% of the packages recommended were hallucinations.
The persistent, repeatable nature of these hallucinations makes them very bad news for anyone other than cybercriminals.
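To make this concrete, here's a minimal sketch (in Python) of the kind of sanity check a team could run before installing anything an LLM recommends. The list of suggested package names is a hypothetical stand-in for whatever your tooling produces, and note the limitation: this only catches names that don't exist on PyPI at all. A package that does exist, including a slopsquatted one, still needs human scrutiny of its age, maintainer, and download history.

```python
# An illustrative pre-install check against hallucinated package names.
# Assumption: `suggested_packages` is a hypothetical list of names an LLM
# has recommended; replace it with whatever your own tooling produces.

import json
import urllib.error
import urllib.request

suggested_packages = ["requests", "definitely-not-a-real-package-12345"]

def package_exists_on_pypi(name: str) -> bool:
    """Return True if the package has a project page on the public PyPI index."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            json.load(response)  # parse to confirm it's a real project page
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # no such package: a classic hallucination
            return False
        raise  # other errors (rate limiting, outages) need human attention

for pkg in suggested_packages:
    if package_exists_on_pypi(pkg):
        print(f"{pkg}: exists on PyPI - still review age, maintainer and downloads before installing")
    else:
        print(f"{pkg}: NOT found on PyPI - likely hallucinated, do not install")
```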
Human error
A far bigger problem than slopsquatting is good, old-fashioned human error. We can all be guilty of not really thinking about what we’re doing when using tools like LLMs, particularly when it comes to the data we share with them.
For example, say you want to automate certain tasks in your business using generative or agent-based AI. The first thing you’d need to do is feed the AI algorithm data to train it. However, it really matters what data you use; it’s possible for an AI to “see too much”.
Hackers have realised that some companies will feed sensitive information such as business strategy, customer data, or intellectual property into AI algorithms. Unfortunately, they've also devised a way of accessing this data. All an attacker who gains access to the AI needs to do is continually prompt it with leading questions until it unwittingly reveals that sensitive information.
It’s a similar story with employees using AI for day-to-day tasks. Without clear guidance, there’s always a risk that staff could share too much information with AIs, particularly those using free or open-source models.
The bad guys tool up
We’ve already mentioned a couple of ways cybercriminals can exploit AI to launch cyberattacks. Unfortunately, those are far from its only uses for hackers. Here are a few additional examples of how cybercriminals are taking advantage of AI.
AI-powered social engineering
Social engineering is by far the most common form of cyberattack. And that isn’t likely to change with the advent of generative and agentic AI. Both allow cybercriminals to perfect the art, whether that’s by spinning up convincing phishing campaigns faster than ever, creating malicious AI chatbots, or generating near-perfect deepfakes.
Malware and ransomware development
Perhaps the most worrying use of AI is the automation of malware creation and attacks. There’s already evidence of cybercriminals using AI to create new malware variants with unique characteristics to help evade detection.
The picture becomes even more sinister when you consider how agentic AI could be used. It’s entirely plausible that agentic AI could be used to create and run malware that analyses an organisation’s defences and adapts in real time to evade detection by malware protection tools.
Worse still, we know that malware-as-a-service has grown in popularity over the past few years. This means there's likely to be a secondary market for AI-developed malware and ransomware, sold to cybercriminals who aren't sophisticated enough to create these tools themselves, potentially raising the level of attack they're able to launch.
Data poisoning
Data poisoning refers to when cybercriminals go after the very datasets AIs are trained on. By injecting false information into a dataset, attackers can skew the model's learning process, corrupting its decision-making.
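As a rough illustration of the defensive side of this, here's a sketch, in Python, of a very simple pre-training filter. Everything in it is assumed for the example: the synthetic feature and label arrays stand in for a real labelled dataset, and a z-score check is only one crude way to flag suspicious samples. Real data-poisoning defences are considerably more sophisticated.

```python
# A deliberately simple sketch of filtering suspicious samples before they
# reach the model. The data here is synthetic; real defences go far beyond
# a z-score check, but the idea of vetting training data is the same.

import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(0.0, 1.0, size=(200, 3))   # stand-in "clean" data
labels = rng.integers(0, 2, size=200)
features[:5] += 15.0                             # stand-in "poisoned" rows

def drop_outliers(features, labels, z_threshold=3.0):
    """Keep samples whose features sit within z_threshold standard deviations
    of the mean for their label; flag the rest for human review."""
    keep = np.ones(len(features), dtype=bool)
    for label in np.unique(labels):
        idx = labels == label
        mean = features[idx].mean(axis=0)
        std = features[idx].std(axis=0) + 1e-9   # avoid division by zero
        z_scores = np.abs((features[idx] - mean) / std)
        keep[idx] &= (z_scores < z_threshold).all(axis=1)
    return keep

mask = drop_outliers(features, labels)
print(f"Kept {mask.sum()} of {len(mask)} samples; {(~mask).sum()} flagged for review")
```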
AI-driven breaches
While relatively unsophisticated tactics like brute force and DDoS attacks are already largely carried out by bots, AI has the potential to supercharge them. AI can be used to improve the efficiency and speed of password cracking, or to create tools that bypass security measures like CAPTCHAs and even some forms of biometric authentication.
AI and cybersecurity: the good
Although hackers might be entering a golden age of cybercrime, don't despair; there is hope. Here are just a few potential uses of AI for cybersecurity.
Agentic AI as a defensive tool
We’ve painted a pretty bleak picture about the nefarious capabilities of AI. However, it’s important to remember that AI is a tool which can be used for good or ill, depending on who’s using it.
There are a number of companies that have already developed defensive agentic AI products. For example, Darktrace's Antigena can identify and mitigate threats as they emerge, without needing prior knowledge of the specific malware or attack method. Likewise, CrowdStrike has developed its Falcon platform, an advanced endpoint protection solution.
These are just two examples of the technology being developed. And, while these tools are currently prohibitively expensive for most small businesses, options for all price points are likely to become available over time.
Threat intelligence and predictive analysis
AI models can be invaluable in determining where the next attack is coming from. Some models can be used to aggregate and analyse data from across the globe, identifying emerging threats and predicting attack patterns.
Automating security measures
Another exciting potential use of AI is automating routine security measures. For instance, AI models can be instructed to configure firewalls, scan for threats, or patch vulnerabilities, freeing up staff for more complex technical issues.
Incident response
We all know that how quickly you respond to a breach or threat can often determine its outcome. Agentic AI could dramatically speed up our ability to respond by analysing incidents, providing recommendations, or even initiating automated responses to threats, minimising damage and downtime.
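To ground this, here's a hedged sketch, in Python, of the shape such automation might take. Everything in it is hypothetical: the alert format, the isolate_host and notify_analyst helpers, and the severity rules all stand in for whatever your own stack provides, and an agentic system would decide and act with far more context than these fixed rules.

```python
# A hypothetical sketch of automated incident triage. The alert schema and
# helper functions are placeholders, not a working integration; the point is
# the shape of the logic: contain the obvious, escalate the rest.

from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    category: str       # e.g. "ransomware", "phishing", "port_scan"
    confidence: float   # detection confidence between 0 and 1

def isolate_host(host: str) -> None:
    print(f"[action] isolating {host} from the network")   # placeholder

def notify_analyst(alert: Alert, reason: str) -> None:
    print(f"[notify] {alert.host}: {reason}")               # placeholder

def triage(alert: Alert) -> None:
    """Contain high-confidence, high-impact alerts automatically;
    escalate everything else to a human analyst."""
    if alert.category == "ransomware" and alert.confidence >= 0.9:
        isolate_host(alert.host)
        notify_analyst(alert, "host isolated automatically, please review")
    else:
        notify_analyst(alert, f"needs review ({alert.category}, "
                              f"confidence {alert.confidence:.0%})")

triage(Alert(host="finance-laptop-07", category="ransomware", confidence=0.95))
triage(Alert(host="web-server-02", category="port_scan", confidence=0.60))
```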
Training
Finally, generative AI is fantastic at creating engaging content. So why not use it to create scenario-based training, such as phishing simulations? These scenarios could be tailored to adapt in real time, helping employees stay ahead of emerging threats.
What should you do to mitigate AI risks?
We’ve tackled the potential uses of AI models for cybersecurity, both good and bad. But what should you be doing to safeguard your business in the here and now?
Improve your AI security posture
If you’re using AI systems for day-to-day tasks around your business, there are a few basic cybersecurity measures you’ll need to adopt. These include:
- Encrypting corporate data to prevent AI models from training on it and cybercriminals from accessing it
- Adopting strict access controls on who uses AI and for what within your business
- Continuously monitoring your AI systems for unusual behaviour (see the sketch below for one simple way to keep an eye on what gets shared with AI tools)
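As an illustration of that last point, here's a minimal sketch, in Python, of screening outgoing prompts for obviously sensitive patterns before they reach an external AI service. The patterns are illustrative only and send_to_ai is a placeholder for whatever API your business actually uses; a real deployment would rely on proper data-loss-prevention tooling rather than a handful of regexes.

```python
# A minimal sketch of screening prompts for sensitive data before they are
# sent to an external AI service. The patterns below are illustrative only,
# and send_to_ai() is a placeholder for a real API call.

import re

SENSITIVE_PATTERNS = {
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK National Insurance number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def send_to_ai(prompt: str) -> None:
    print("Prompt sent to AI service.")   # placeholder for a real API call

def screen_prompt(prompt: str) -> bool:
    """Return True (and block the prompt) if it appears to contain sensitive data."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
        return True
    return False

prompt = "Summarise this customer record: jane@example.com, card 4111 1111 1111 1111"
if not screen_prompt(prompt):
    send_to_ai(prompt)
```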
Educate your people
Ensure everyone in your business is aware of the dangers of AI tools, both in terms of attacks and human error. You can do this through cybersecurity awareness training and by implementing an AI security and governance policy that outlines which uses are legitimate and which are risky.
Avoid vibe coding
If your business employs developers or programmers, you should discourage the use of vibe coding for any system that could pose a security risk. "Vibe coding" is the practice of describing what you want in plain language and letting the AI turn it into code.
The problem with vibe coding is that the developer often doesn’t know how the AI created the code. It’s a leap of faith, one which can lead to problems like slopsquatting or inadvertently creating vulnerabilities.
A note of caution
All of the above comes with a fairly large caveat. Current agentic AI, despite the hype, still has technical limits. Many AI agents today perform well in narrow, well-defined tasks but struggle with complex, general contexts.
In a live cybersecurity scenario, an AI might handle known patterns but get confused by a novel situation that doesn’t fit its training. This applies equally to attack and defence, making a lot of what we’ve talked about in this blog theoretical.
That being said, AI models continue to improve at a rapid rate, and these capabilities aren’t far away. So, if you aren’t already thinking about how AI affects your cybersecurity, there’s no time like the present.
Want to give your people the skills to recognise cyber threats before they turn into breaches? Check out CyberSmart Learn, our cybersecurity-focused learning management system.