Shadow AI happens when employees use unsanctioned AI tools that put company data at risk.
Let’s be honest: users are side-loading AI tools faster than you can block ports. And after PDQ’s recent webinar, “Your users are doing some unhinged stuff with AI. What now?”, it’s clear that AI chaos is now just another Tuesday for sysadmins.
The conversation brought together some serious IT brainpower: PDQ’s own Brock Bingham, Tara Sinquefield, and Senior Director of IT & Security Josh Mackelprang; guest Kendra Valle from Legato Security; and Microsoft MVP Andrew Pla. Together they answered the one question every IT team is asking: How do we stop users from turning AI into our next data breach headline?
Spoiler: You can’t stop them. But you can get smart about managing AI. Watch the video above and then review below how to identify and manage shadow AI risks without killing productivity.
How AI increases security risks
Panelists kicked things off with a stat that made everyone clutch their coffee mugs: Palo Alto Networks currently blocks 30 billion cyberattacks per day — triple last year’s volume. A big reason? AI has lowered the barrier to entry for attackers. Anyone can now prompt-engineer their way into trouble.
But here’s the kicker: Those same tools are being used by your employees. Your marketers, finance folks, and interns are all piping internal data into unapproved chatbots. They’re not being malicious — they’re just trying to work faster. Unfortunately, speed and security rarely get along.
The advice? Focus less on fear, more on visibility. You can’t defend what you can’t see, and that means knowing what AI tools your users are actually using before you lock them down.
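If you’re wondering where visibility even starts, one low-tech option is simply tallying hits against well-known AI domains in your proxy or DNS logs. Here’s a minimal Python sketch of that idea; the log path, log format, and domain list are placeholders, so swap in whatever your gateway actually produces.

```python
# Minimal sketch: count requests to known AI-tool domains in an exported
# DNS or proxy log. Log path, log format, and domain list are assumptions.
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
    "perplexity.ai",
}

def shadow_ai_hits(log_path: str) -> Counter:
    """Count log lines that mention a known AI domain."""
    hits = Counter()
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            for domain in AI_DOMAINS:
                if domain in line:
                    hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in shadow_ai_hits("proxy.log").most_common():
        print(f"{domain}: {count} requests")
```

It won’t catch everything (personal phones, anyone?), but it gives you a baseline conversation starter before you reach for policy.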
Why fear works (sometimes)
Kendra from Legato Security didn’t mince words about user education.
"One thing that was really effective for us is that we leveraged fear-mongering, for lack of a better term," said Kendra.
Her team actually created their own deepfake to show employees how easily AI can impersonate someone. The results were eye-opening — and terrifying. But that’s the point.
“The best training is the training that engages,” said Josh.
The rest of the panel agreed. Mandatory cybersecurity training might tick a compliance box, but it rarely changes behavior. Contextual training, showing users how AI applies to their workflow, is what sticks.
And if you can make it funny or a little scary? Even better.
Why you can’t block shadow AI
If you think blocking ChatGPT is the answer, congratulations — you’ve just created a thriving underground market of unsanctioned AI usage. Users will do what users do.
“If you tell someone ... ‘Don't push the red button,’ how many people want to go push this stinking button?” asked Josh.
That’s how shadow AI starts — and ends with your data sitting in someone else’s model.
The takeaway: Give users a safe, sanctioned AI sandbox. Deploy an enterprise AI model your org can monitor, log, and control. Make it the path of least resistance. Otherwise, they’ll just take the path with fewer firewalls.
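What does “monitor, log, and control” look like in practice? One common pattern is a thin internal front end that records who asked what before forwarding the prompt to your approved model. The sketch below assumes a hypothetical internal endpoint and a plain JSON payload; it illustrates the logging pattern, not any real vendor API.

```python
# Minimal sketch of a sanctioned AI front end: every prompt is logged with a
# user ID before being forwarded to the approved model endpoint. The endpoint
# URL and payload shape are hypothetical placeholders.
import json
import logging
import urllib.request

logging.basicConfig(filename="ai_prompts.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

APPROVED_ENDPOINT = "https://ai-gateway.internal.example/v1/chat"  # hypothetical

def ask_sanctioned_ai(user: str, prompt: str) -> str:
    """Log the prompt, then forward it to the org-approved model endpoint."""
    logging.info("user=%s prompt=%s", user, prompt)
    request = urllib.request.Request(
        APPROVED_ENDPOINT,
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["reply"]
```

The point isn’t the plumbing; it’s that the sanctioned path is easy to use and leaves an audit trail the shadow path never will.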
How shadow AI causes hidden data loss risks
Deepfakes and cloned voices grab headlines, but the old-school risks still apply. Users are still exfiltrating sensitive data; they’re just doing it through AI prompts now instead of USB drives.
The fix hasn’t changed much either: principle of least privilege, data classification, and user education. As Brock pointed out, “It's nothing new, but AI kind of opens up some novel ways for data to slip through the cracks.”
Even the best DLP can’t stop someone from typing proprietary data into a chatbot from their phone. So your biggest defense remains the same: Train your users, trust but verify, and keep your managers in the loop.
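You can still shrink the blast radius on managed devices, though. A simple pre-flight scrub that redacts obvious sensitive patterns before a prompt leaves the building catches the low-hanging fruit. The regexes below are illustrative only, not a substitute for real DLP or user training.

```python
# Minimal sketch: redact obvious sensitive patterns before a prompt is sent
# to an AI tool. The patterns are illustrative, not exhaustive.
import re

REDACTIONS = {
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace anything matching a known sensitive pattern with a tag."""
    for label, pattern in REDACTIONS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(scrub_prompt("Customer 4111 1111 1111 1111 emailed jane@corp.com"))
# -> Customer [REDACTED CREDIT CARD] emailed [REDACTED EMAIL]
```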
Why defense in depth still matters
Despite the AI panic, the fundamentals haven’t changed. You still need a layered defense strategy: patching, vulnerability management, endpoint control, and visibility. Kendra’s team uses PDQ Connect to keep vulnerabilities in check and respond to issues fast.
AI might be rewriting the attack surface, but patching is still your best way to slam those doors shut.
Governance is coming (eventually)
The panel’s consensus was clear: AI governance is still the Wild West. There’s no universal standard yet — no governing body to define “trusted” versus “untrusted” models. Until then, sysadmins need to set their own AI acceptable use policies rooted in existing software approval processes.
Start simple. Treat AI like any other SaaS tool. Audit what data it touches, what it stores, and where it goes. Your policy doesn’t need to be fear-based — it needs to be realistic. Because the harder you make it to use AI safely, the more users will find creative ways to use it unsafely.
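To make “treat AI like any other SaaS tool” concrete, here’s a rough sketch of an approved-tool registry that records which data classes each tool is cleared to touch. The tool names and data classes are made up for illustration; the point is that the policy lives somewhere auditable instead of in people’s heads.

```python
# Minimal sketch of an AI acceptable-use registry: record each tool's approval
# status and the data classes it may handle. Entries here are placeholders.
from dataclasses import dataclass, field

@dataclass
class AITool:
    name: str
    approved: bool
    allowed_data: set[str] = field(default_factory=set)  # e.g. {"public", "internal"}

REGISTRY = {
    "enterprise-copilot": AITool("enterprise-copilot", True, {"public", "internal"}),
    "random-free-chatbot": AITool("random-free-chatbot", False),
}

def is_use_allowed(tool_name: str, data_class: str) -> bool:
    """Allow a request only if the tool is approved for that data class."""
    tool = REGISTRY.get(tool_name)
    return bool(tool and tool.approved and data_class in tool.allowed_data)

print(is_use_allowed("enterprise-copilot", "internal"))   # True
print(is_use_allowed("random-free-chatbot", "internal"))  # False
```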
TL;DR — You can’t unhinge-proof your users
AI isn’t going away. Neither are your users. The job isn’t to stop them from experimenting — it’s to make sure they don’t take your data along for the ride. The best approach blends visibility, education, and empathy. Be the sysadmin who talks to people, not the one who blocks everything and wonders why ShadowGPT exists.
Final thoughts: Give them the tools, not the keys
Your users are going to use AI. Make it official, make it monitored, and make sure your environment is patched and resilient. Start by closing the vulnerabilities you already have. PDQ Connect helps you patch smarter, faster, and without babysitting updates.
AI might be unhinged, but your patching strategy doesn’t have to be. Try PDQ Connect and bring order to the chaos. Then, watch the webcast and join our subreddit or Discord to keep the conversation going.