
You Blocked ChatGPT? That’s Cute. Your Data Still Leaks.

Ronny Roethof
A security-minded sysadmin who fights corporate BS with open source weapons and sarcasm

A recent quote making the rounds nails the current AI chaos:

“AI is being embraced faster than it’s being secured—especially in healthcare and SMBs.”

This isn’t just a hot take. It’s exactly what’s playing out in real time. In the rush for efficiency, healthcare providers are pasting sensitive patient data into public AI tools. SMBs are adopting unsanctioned apps at warp speed, often without a second thought about data risk.

And what’s the typical corporate response once someone finally notices the tsunami of shadow IT?

A Modern IT Tragedy in Three Acts

Manager: “We’ve blocked ChatGPT.”
Engineer: “Why?”
Manager: “It’s dangerous. It’s a data leak risk.”
Manager: “People are pasting sensitive info into it.”
Engineer: “And blocking it fixes that?”
Manager: “Well… it’s a start.”
Engineer: “Are Gemini, Claude, Perplexity, and browser plugins also blocked?”
Manager: “No.”
Engineer: “And what if I use ChatGPT on my phone and paste the output back via WhatsApp Web? Is that blocked?”
Manager: “No…”
Manager: [Visible confusion] “Hmm.”

You didn’t stop the behavior. You just made it invisible.

This Isn’t a Strategy—It’s Security Theater

Blocking a tool isn’t the same as solving the problem. It’s a feel-good checkbox that lets management pretend they’ve “addressed the AI issue,” while doing nothing to stop actual risk.

The threat isn’t ChatGPT. The threat is uncontrolled data flow, lack of user education, and no real AI governance.

What Really Happens When You Block Useful Tools

Here’s the reality from the trenches: When you block a tool that helps people do their jobs, they don’t stop using it. They just go around you. They’ll:

  • Use personal devices
  • Sign up with personal emails
  • Leverage unmonitored SaaS tools
  • Install sketchy browser extensions

Congratulations. You’ve just invited shadow IT in through the front door—and turned off the lights behind it.

Stop Banning. Start Enabling (and Governing).

The answer isn’t a longer blocklist. It’s doing the actual, hard work of data governance. That means:

1. Educate Your People

This is non-negotiable. Invest in digital literacy. Train your teams on:

  • What constitutes sensitive data
  • Why AI tools can create risk
  • How to use new tech responsibly

As Adriaan Van Bragt put it:

“Give people tips and tricks—that’s way more effective than blocking.”
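In that spirit, training lands better when it's concrete. Here's a minimal sketch of the kind of "think before you paste" check you could teach people (or wire into a pre-submit hook). The patterns are illustrative assumptions, not a real DLP ruleset:

```python
import re

# Illustrative patterns only: a real deployment would use proper DLP tooling
# and patterns tuned to your own data. These three are assumptions for demo.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "BSN (9 digits)": re.compile(r"\b\d{9}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return human-readable warnings for anything that looks sensitive."""
    return [
        f"possible {name}: {m.group(0)!r}"
        for name, pattern in SENSITIVE_PATTERNS.items()
        for m in pattern.finditer(text)
    ]

if __name__ == "__main__":
    prompt = "Summarize the intake notes for j.jansen@example.org, BSN 123456782."
    for warning in flag_sensitive(prompt):
        print("WARNING:", warning)  # educate: explain *why*, don't just block
```

Note what it does: it warns and explains instead of silently blocking. That's the whole point.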

2. Establish Clear Governance

Don’t assume people know the rules. Write them down. Make them visible. Good governance means:

  • Clear, simple policies on data handling
  • Defined AI usage guidelines
  • Consequences for violations and support for doing the right thing
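If you want those rules to be more than a PDF nobody reads, encode them so tooling can enforce the same thing the policy says. A toy "policy as code" sketch, with made-up data classifications and tool tiers purely for illustration:

```python
from enum import Enum

class DataClass(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"
    PATIENT = "patient"  # special-category data under the GDPR

# Which data classes may go to which kind of AI tool.
# These destination names are examples, not a standard.
AI_POLICY: dict[str, set[DataClass]] = {
    "public_saas_llm": {DataClass.PUBLIC},
    "enterprise_llm_with_dpa": {DataClass.PUBLIC, DataClass.INTERNAL},
    "self_hosted_llm": {DataClass.PUBLIC, DataClass.INTERNAL,
                        DataClass.CONFIDENTIAL, DataClass.PATIENT},
}

def is_allowed(destination: str, data_class: DataClass) -> bool:
    """The rule people can read is the rule the tooling enforces."""
    return data_class in AI_POLICY.get(destination, set())

assert not is_allowed("public_saas_llm", DataClass.PATIENT)
assert is_allowed("self_hosted_llm", DataClass.PATIENT)
```

One table, readable by humans and machines alike. No ambiguity about where patient data may go.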

3. Provide Safe, Sanctioned Alternatives

People use ChatGPT because it works. So offer a secure, enterprise-grade version they can use—one that:

  • Has a proper data processing agreement
  • Supports access control and audit trails
  • Keeps data in your environment (via private or self-hosted models)

If you don’t give your people a safe path, they’ll take the risky one. It’s that simple.
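What does "keeps data in your environment" look like in practice? A minimal sketch below, assuming a hypothetical internal, OpenAI-compatible endpoint behind your own gateway; the URL, model name, and log path are placeholders, and `requests` is a third-party dependency:

```python
import json
import logging
import requests  # third-party: pip install requests

# Hypothetical self-hosted, OpenAI-compatible endpoint (e.g. a gateway in
# front of a private model). URL and model name are placeholder examples.
ENDPOINT = "https://ai.internal.example.com/v1/chat/completions"
MODEL = "internal-llm"

audit = logging.getLogger("ai.audit")
logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

def ask(user: str, prompt: str) -> str:
    """Send a prompt to the sanctioned endpoint and leave an audit trail."""
    audit.info(json.dumps({"user": user, "model": MODEL, "prompt": prompt}))
    resp = requests.post(
        ENDPOINT,
        json={"model": MODEL, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    answer = resp.json()["choices"][0]["message"]["content"]
    audit.info(json.dumps({"user": user, "response_chars": len(answer)}))
    return answer
```

Every request hits infrastructure you control, and the audit log gives you visibility. Compare that to the alternative: the same prompt routed through a personal phone and WhatsApp Web, where you see nothing.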

Don’t Confuse Control With Security

Blocking ChatGPT might feel like you’re in control—but it’s an illusion. Security isn’t about blocking. It’s about trusting, verifying, and enabling. The real work is in culture, education, and tooling.

If your AI “strategy” ends at a firewall rule, it’s not a strategy. It’s a liability.


Want to build a real AI governance strategy your teams will actually follow? Start with education. Pair it with clarity. Back it with tools they’re not ashamed to use.

You don’t need a bigger hammer. You need a better blueprint.
