
AI and cybersecurity: From deepfakes to data poisoning

Meredith Kreisa|June 28, 2024
Cartoon showing sysadmin fading into the background as AI projects a new, deepfake version of him that is pixelated.

Depending on who you talk to, artificial intelligence (AI) is either the best or worst thing that’s happened in endpoint security in the last 30 years. But the ironic truth is that it’s probably both the best and worst thing all rolled up into one, like a postmodernist burrito. We’ll break down the unique AI cybersecurity risks along with how you might protect against them.

Risks have never been higher, so all your users need to know the fundamentals of cybersecurity. At the same time, cybersecurity solutions are increasingly advanced, giving you new tools to tackle threats head-on.

Deepfakes 

Deepfakes have gotten a lot of press lately, and rightly so. They’re pretty darn alarming, and it’s easy to understand why people fall for the disinformation this fake content spreads.

Deepfake technology leverages generative AI models and deep learning to create synthetic media, such as video, images, audio, or text, that replicates the likeness of a real person.

In short, it can convincingly depict someone doing or saying something they never did or said. Probably something to keep in mind for election season. 

And the scams are already running rampant. In February 2024, an unsuspecting finance worker paid out $25 million after a realistic deepfake video call with someone appearing to be the chief financial officer. There are also reports of audio deepfake kidnapping scams, deepfake blackmail, next-level fake IDs, misinformation campaigns, and more.

So how do we combat deepfakes? Well, a keen eye, ear, and sense of logic can help you detect many of them. But as they become increasingly advanced, AI tools will probably be the solution for deepfake detection at scale. Leading machine learning expert Siwei Lyu already created the DeepFake-o-meter to help assess whether images, video, and audio are authentic, and we expect more tools to crop up as time goes on. And, as with all things cybersecurity, teaching your users what to look for is a step in the right direction. (We’ve also sketched one simple automated check after the list below.)

The Department of Homeland Security recommends paying attention to the following to spot a deepfake:

Fake video: 

  • Facial blurring when the rest of the video isn’t blurry 

  • Edge issues (discoloration along the edge of face, double edges, etc.) 

  • Inconsistent quality in the same video 

  • Boxy shapes and cropping around mouth, neck, and eyes 

  • Unnatural blinking or other movements 

  • Background and lighting changes 

  • Inconsistency between background and foreground objects 

Fake audio:

  • Choppy phrases 

  • Inconsistent inflection and tone 

  • Unnatural phrasing 

  • Context of the background sounds 

  • Context of the message 

Fake text: 

  • Spelling errors 

  • Poor flow of sentences 

  • Unnatural phrasing 

  • Context of the message 

  • Sender’s email address or phone number 
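If you want to experiment with automating the first video check (a blurry face in an otherwise sharp frame), here’s a minimal Python sketch using OpenCV. The file name, the Haar cascade face detector, and the 0.5 sharpness ratio are all illustrative assumptions, and this is a rough heuristic rather than a real deepfake detector.

```python
# pip install opencv-python
import cv2

def face_blur_mismatch(frame, ratio_threshold=0.5):
    """Flag face regions that are much blurrier than the rest of the frame.

    Laplacian variance serves as a rough sharpness score; the 0.5 ratio is an
    illustrative threshold, not a validated one.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    frame_sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    flagged = []
    for (x, y, w, h) in faces:
        face_sharpness = cv2.Laplacian(gray[y:y + h, x:x + w], cv2.CV_64F).var()
        if frame_sharpness > 0 and face_sharpness / frame_sharpness < ratio_threshold:
            flagged.append((x, y, w, h))
    return flagged

# Scan every 30th frame of a (hypothetical) suspect clip
video = cv2.VideoCapture("suspect_clip.mp4")
frame_index = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    if frame_index % 30 == 0 and face_blur_mismatch(frame):
        print(f"Frame {frame_index}: face region is suspiciously blurry")
    frame_index += 1
video.release()
```

Purpose-built tools like the DeepFake-o-meter will do far better than a one-off heuristic like this, but it shows the general idea of turning the DHS checklist into something you can automate.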

Data poisoning 

Data poisoning is exactly what it sounds like: tampering with the data in AI models to influence their outputs. AI poisoning attacks may alter existing data, introduce malicious data, or delete data. While data poisoning isn’t a concern for a lot of sysadmins, anyone who oversees an environment that houses an AI model or training data should be on high alert.

The implications of poisoned data can be far-reaching. A data poisoning attack could introduce biases, create vulnerabilities, take control of AI-powered manufacturing equipment, or even train AI-powered threat detection solutions to identify attack traffic as benign. A threat actor is limited only by their skill and imagination.

And unfortunately, data poisoning can be challenging to detect as models evolve, especially if the attack stems from an insider threat. That’s why your security operations team must protect the integrity of your AI or machine learning model’s training data to avert data poisoning attacks. Verifying that training data hasn’t changed out from under you is one simple place to start, as shown in the sketch below.
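Here’s what a basic integrity check might look like. This sketch uses only the Python standard library to build a SHA-256 manifest of a training data directory and to verify it before each training run; the directory and file names are hypothetical.

```python
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir: str, manifest_path: str) -> None:
    """Record a SHA-256 hash for every file in the training data directory."""
    manifest = {
        str(p.relative_to(data_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(data_dir).rglob("*"))
        if p.is_file()
    }
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def verify_manifest(data_dir: str, manifest_path: str) -> list[str]:
    """Return the files that are missing or whose contents have changed."""
    manifest = json.loads(Path(manifest_path).read_text())
    problems = []
    for rel_path, expected in manifest.items():
        file_path = Path(data_dir) / rel_path
        if not file_path.is_file():
            problems.append(f"missing: {rel_path}")
        elif hashlib.sha256(file_path.read_bytes()).hexdigest() != expected:
            problems.append(f"modified: {rel_path}")
    return problems

# Hypothetical usage before kicking off a training run
if issues := verify_manifest("training_data/", "training_manifest.json"):
    raise SystemExit("Training data failed integrity check:\n" + "\n".join(issues))
```

A manifest like this won’t catch records that were poisoned before you ever hashed them, but it does catch after-the-fact tampering, and it pairs well with strict access controls and change monitoring.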

Malware generation 

Generative AI could be used to develop malware and ransomware, giving less sophisticated attackers more opportunities to join the big leagues. While popular AI systems typically have guardrails to prevent this, loopholes frequently exist. Plus, there are already subscription-based generative AI services built specifically for illicit purposes, so a potential threat actor just needs to know where to look to get convenient support from AI (🤫 don’t let the hackers hear us).

Worse still, AI systems can be taught how security software detects malware and how previous attacks played out, allowing these systems to create malware that sidesteps common protections. AI could also create malware that mutates the code for each attack to evade signature-based detection or adjusts dynamically during an attack based on the target’s defenses. According to Palo Alto Networks, AI can even generate malware that impersonates the work of specific threat actors, which could have pretty significant cyberwarfare ramifications. 
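To make that signature-evasion point concrete, here’s a tiny (and completely harmless) Python illustration: changing a single byte of a payload produces an entirely different SHA-256 hash, so a blocklist keyed on the original hash never flags the mutated variant. This is why behavioral and heuristic detection layers matter alongside signatures.

```python
import hashlib

original = b"this stands in for a malicious payload"
mutated = original.replace(b"payload", b"payloa_")  # single-byte mutation

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(mutated).hexdigest())
# The two digests share nothing in common, so a signature built on the first
# hash would never match the mutated variant.
```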

The good news is that generative AI tools are likely to produce rudimentary code that requires more advanced expertise to effectively implement. While AI may help skilled threat actors save a little time, your standard malware prevention and detection methods should still do the trick in most cases.

Attack optimization 

Unlike much of the population, we here at PDQ don’t believe the right AI algorithm automatically makes everything better. But it can definitely make cybersecurity threats more efficient and potent. Here’s just a taste of the potentially nefarious tasks AI can support:

  • Vulnerability discovery 

  • Infrastructure mapping 

  • Privilege escalation 

  • Detection evasion 

  • Sensitive data extraction 

  • Open-source intelligence (OSINT) gathering on targets 

  • Automated attacks, like credential stuffing and phishing 

  • Increased personalization for a social engineering or phishing attack 

But thankfully, a savvy cybersecurity professional already knows what to do to secure their environment. Familiar best practices may be particularly important for thwarting AI-optimized attacks; as one example, the sketch below shows a simple guard against automated credential stuffing.
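This minimal illustration tracks failed logins per account in a rolling window and locks the account once a threshold is hit. The five-attempt threshold and 15-minute window are illustrative assumptions; a real deployment would layer this with rate limiting, MFA, and breached-password checks.

```python
import time
from collections import defaultdict, deque

# Illustrative numbers only: 5 failures within 15 minutes triggers a lockout.
MAX_FAILURES = 5
WINDOW_SECONDS = 15 * 60

failed_attempts: dict[str, deque] = defaultdict(deque)

def record_failed_login(username: str) -> bool:
    """Record a failed login and return True if the account should be locked."""
    now = time.time()
    attempts = failed_attempts[username]
    attempts.append(now)
    # Drop attempts that have aged out of the window.
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    return len(attempts) >= MAX_FAILURES

# Hypothetical usage inside an authentication handler:
# if not check_password(user, password) and record_failed_login(user):
#     lock_account(user)  # and alert your security team
```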

Data breaches

Training AI models requires a lot of data. We’re talking massive amounts. So what happens if threat actors get their sticky little paws on your training data? And what if you’ve trained your AI system with personally identifiable information (PII)? We have no doubt that Stephen King has considered that as the plot for his next bestseller.

With that in mind, if you’re training AI, make sure to use strong security protocols, data encryption, access controls, and monitoring to ensure your training data doesn’t fall into the wrong hands.
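One concrete control is scrubbing obvious PII before data ever lands in a training set. The regular expressions below are deliberately simple assumptions (emails, US-style Social Security numbers, and phone numbers) and won’t catch everything; a real pipeline would lean on a dedicated data loss prevention or PII detection tool.

```python
import re

# Simple, illustrative patterns; real PII detection needs far broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace likely PII with labeled placeholders before training ingestion."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact_pii("Contact Jane at jane.doe@example.com or 555-867-5309."))
# Contact Jane at [REDACTED EMAIL] or [REDACTED PHONE].
```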

You may also want to specify in your security policy that users can share sensitive information only in company-approved AI tools. No one should be inputting anything private in questionable tools with dubious terms of use. Even ChatGPT inadvertently leaked a little data, so your security team should thoroughly vet any AI tools before employees start using them.

Regulatory concerns

Depending on your industry, you may need to comply with the specific cybersecurity requirements of certain regulations. So far, most regulatory standards don’t spell out clear guidelines for AI. Unfortunately, that doesn’t mean the existing rules stop applying to AI; it’s just slightly more confusing to work out how.

But as time goes on, we expect more and more compliance standards to detail AI-related guidelines, including relevant assessments, security measures, data protection, visibility, privacy, and more.

Security professionals should pay attention to the relevant industry standards, stay up to date on the latest announcements, and consider how existing standards may apply to AI.

AI: The problem and the solution 

AI may ultimately be one of the best solutions to the problems it creates. 

That’s because while AI is undoubtedly changing the threat landscape, security solutions incorporating responsible AI are also providing new approaches to age-old problems. By analyzing and learning from large volumes of data to detect patterns and anomalies, an AI cybersecurity solution may effectively use artificial intelligence to improve vulnerability management, cyber threat intelligence, network security, threat detection and response, access control, threat hunting, and other cybersecurity functions. In combination with following general cybersecurity best practices, these solutions can help you maintain your security posture and cyber resilience. 
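If you’re curious what “learning from data to detect anomalies” can look like in miniature, here’s a hedged sketch using scikit-learn’s IsolationForest on made-up login telemetry (hour of day and megabytes transferred). The features and numbers are fabricated for illustration; commercial AI security tools work from far richer signals.

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import IsolationForest

# Fabricated baseline: business-hours logins with modest data transfer.
rng = np.random.default_rng(42)
normal_logins = np.column_stack([
    rng.normal(13, 2, 500),   # hour of day
    rng.normal(20, 5, 500),   # MB transferred
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_logins)

# A 3 a.m. login that moves 900 MB should stand out from the baseline.
print(model.predict([[3.0, 900.0]]))    # -1 means "anomaly"
print(model.predict([[14.0, 22.0]]))    # 1 means "looks normal"
```

Scaled up and fed real telemetry, the same idea underpins much of the AI-driven threat detection baked into modern security tools.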

But the truth remains: Within our lifetimes, even the most advanced AI cybersecurity solutions are unlikely to be adequate replacements for cybersecurity experts. And with AI presenting more and more threats, skilled professionals may actually be more important than ever.


Malicious activity is a constant. However, attack vectors and the specific tactics, techniques, and procedures (TTPs) are constantly evolving. Thankfully, more advanced solutions are also cropping up to help your cybersecurity team tackle emerging threats. The right AI security tool can help fortify your cybersecurity measures to protect against the latest threats. 

PDQ Detect incorporates machine learning to map your business processes for unmatched contextual insights into how vulnerabilities might affect your fleet. Identify the most critical and exploitable vulnerabilities in your environment before threat actors find them.

From there, PDQ Connect or PDQ Deploy & Inventory come in clutch to make your patching quick and painless. Sign up for a free trial.

It’s time to show artificial intelligence what real intelligence looks like.

Meredith Kreisa

Meredith gets her kicks diving into the depths of IT lore and checking her internet speed incessantly. When she's not spending quality time behind a computer screen, she's probably curled up under a blanket, silently contemplating the efficacy of napping.
