
The Human in Cybersecurity: Liability or Asset? My Take.

·1444 words·7 mins·
Ronny Roethof
A security-minded sysadmin who fights corporate BS with open source weapons and sarcasm

I often think back to my former Director. He was an older gentleman, a big soft guy, definitely not what you’d call tech-savvy. But he’d learned one crucial, simple thing about handling suspicious emails: forward them to me, the security guy, as an attachment. Every time something looked even slightly dodgy in his inbox, I’d get that email: “Ronny, can you report or check this?” Because he’d learned that specific step – sending it as an attachment – I could actually dig into the headers, do a proper check, and give him a quick thumbs up or down. You know what? He was genuinely proud to do his part, to contribute to security in his own way. Simple, targeted training for a non-technical user, and it worked beautifully. He wasn’t a liability; he was actively helping.
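
That one habit, attaching the original message rather than forwarding it inline, is exactly what made the checks possible: an inline forward rewrites the headers, while an attached .eml keeps the full Received chain and authentication results intact. For the curious, here is a minimal sketch of the kind of header triage that then becomes possible, assuming the attached message has been saved as suspicious.eml (the filename and the header shortlist are purely illustrative):

```python
# Minimal sketch: pull the interesting headers out of a forwarded .eml attachment.
# 'suspicious.eml' and the header shortlist are illustrative assumptions, not a
# complete analysis workflow.
from email import policy
from email.parser import BytesParser

INTERESTING = ("From", "Reply-To", "Return-Path", "Received",
               "Authentication-Results", "Received-SPF", "DKIM-Signature")

with open("suspicious.eml", "rb") as fh:
    msg = BytesParser(policy=policy.default).parse(fh)

for name in INTERESTING:
    # get_all() because headers like Received appear once per relay hop
    for value in msg.get_all(name, []):
        print(f"{name}: {value}")
```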

This experience always sticks with me when I hear the constant refrain in cybersecurity: humans are the ‘weakest link’. We focus so much on the clicks, the mistakes, the need for more rules and stricter filters. But what about stories like my Director’s? What about empowering people with the right, simple actions they can take? Are we missing a huge opportunity by defaulting to the ‘user is the problem’ mindset? Honestly, I think it’s time we had a real conversation about this.

The “Weakest Link” Dogma: Convenient, But Flawed?

Let’s be honest, we’ve all heard it a million times: humans are the weakest link. You see it plastered across reports citing ‘human error’ as the cause of almost every breach under the sun. It’s a convenient narrative, maybe? It certainly makes it easier to justify certain approaches.

When you start from the assumption that people are inherently unreliable, you naturally gravitate towards a security model built on heavy-handed rule compliance. The goal becomes forcing adherence, often with little regard for context or practicality. Think mandatory password rotations that lead to Password_Summer24!#, blanket restrictions on useful tools, or those soul-crushing, mandatory e-learning modules about phishing that everyone just clicks through. The underlying message feels pretty clear: “We don’t trust you, so we’ll protect the systems from you.” Sound familiar? It often feels like a textbook case of excessive security measures backfiring, the very thing Secure by Design warns against: more control doesn’t always equal better security.

Why Just Focusing on Compliance Backfires (In My Experience)

Look, rules and basic training have their place. I’m not saying throw out the rulebook. But relying only on this compliance-first, blame-the-user mentality? I think it actively harms our security posture, moving beyond security as just compliance.

From what I’ve seen, it fosters rule-following drones, not security-aware thinkers. People learn to jump through the hoops to avoid getting flagged, not to actually understand why something is risky or how to apply security principles themselves. Worse, it actively encourages risky workarounds. Remember my post about using personal stuff on company laptops? When security rules make getting your actual job done a nightmare, smart people will find ways around them – writing down passwords (instead of using proper password management), using unsanctioned cloud storage, you name it. These aren’t malicious acts; they’re often desperate attempts to be productive, but they blow holes in our carefully crafted defenses.

This whole approach also seems to wilfully ignore basic human factors. People get tired (like that sleep-deprived sysadmin I wrote about!), overwhelmed, and make mistakes, especially when systems are confusing or frustrating. Blaming the person when the process or tool itself is poorly designed is just lazy security. And maybe the biggest issue for me? It wastes incredible potential. Most people I know in tech want to help, they want to fix things, they notice when stuff looks weird. Treating them like untrustworthy liabilities stifles that natural vigilance and problem-solving instinct – unlike my former Director, who just needed the right guidance to become helpful.

Slapping on more rules often just papers over the cracks and alienates the very people who could be our best defenders.

A Different Approach: Maybe Trust People a Bit?

So, what’s the alternative? How do we move beyond this? I believe we need a more human-centric approach to cybersecurity. This isn’t about being soft; it’s about being smart and realistic, like the approach with my Director.

It starts with dropping the ‘weakest link’ talk. Seriously, language matters. How about ‘first line of defence’ or ‘human sensor’? It changes the whole dynamic. Then, we need to build genuine psychological safety. People must feel safe reporting mistakes or suspicions without fear of blame. Think blameless post-mortems focused on learning, not finger-pointing. That’s how you get early warnings instead of people hiding problems.

We also need to get serious about security by behavioural design. Let’s design systems where the secure way is the easy way. Think user-friendly password managers, clear ‘External Sender’ warnings, MFA that doesn’t make you want to throw your laptop out the window. Reduce the friction!
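
To make the ‘External Sender’ example concrete: the whole point is that the cue appears automatically, so nobody has to remember to squint at the address. Here is a hypothetical sketch of that decision logic; the domain list, tag text, and function name are made-up illustrations, not any particular mail gateway’s API:

```python
from email.utils import parseaddr

# Assumption: these placeholders stand in for the organisation's own domains.
INTERNAL_DOMAINS = {"example.com", "corp.example.com"}

def external_banner(from_header: str) -> str | None:
    """Return a warning banner for external senders, or None for internal mail."""
    _, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    if domain in INTERNAL_DOMAINS:
        return None
    return "[EXTERNAL] This message came from outside the organisation."

# The cue is generated for the reader; no training needed to trigger it.
print(external_banner("Ronny <ronny@example.com>"))        # -> None
print(external_banner("CEO <ceo@examp1e-payrol1.biz>"))    # -> "[EXTERNAL] ..."
```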

And yes, training still matters, but let’s make it relevant, engaging, and continuous – focused on critical thinking and on simple, actionable steps people can actually use, like forwarding suspicious emails as attachments.

Speaking of engaging awareness, I remember one Christmas… We decided to run a phishing simulation with a twist. I created a fake Christmas gift webshop exclusively for our company employees. The premise was simple: pick your own gifts, enter your name, address, etc., and you’d receive them. I emailed about 80% of the staff. Now, I didn’t make it subtle – the website name was intentionally scammy, the URL looked dodgy, the text was riddled with outdated information and the kind of grammatical errors you’d expect from a low-effort scam (“mountains of gold” might have been promised!). Classic signs.
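
For a sense of how the ‘failures’ in an exercise like this can be measured without hoarding anyone’s personal data, here is a rough, hypothetical sketch of the form-handler side; the port, field name, and log file are illustrative assumptions, not the setup I actually ran:

```python
import csv
from datetime import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

class GiftShopHandler(BaseHTTPRequestHandler):
    """Throwaway handler for a phishing simulation: record who submitted, nothing else."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode())
        user = fields.get("name", ["unknown"])[0]
        # Log only identity and timestamp; deliberately throw everything else away.
        with open("simulation_log.csv", "a", newline="") as fh:
            csv.writer(fh).writerow([datetime.now().isoformat(), user])
        # Send the person straight to the debrief page instead of a fake confirmation.
        self.send_response(302)
        self.send_header("Location", "/oops.html")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), GiftShopHandler).serve_forever()
```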

To my astonishment, the first ‘failure’ – someone clicking through and entering details – happened within 30 seconds. And yes, quite a few people fell for it initially. But here’s the crucial part: almost immediately, my inbox started flooding with emails. People were reporting the suspicious site. A buzz started spreading through the office and online channels – “Did you see that weird gift email?”, “Something’s not right here.” People were talking, warning each other, putting their colleagues on alert.

Of course, it wasn’t all smooth sailing. A few of the folks who fell for it were, understandably, quite upset. Some even went straight to the CEO demanding my head on a plate, arguing that since I’d promised them lavish gifts, the company now had to deliver! It created quite a stir. But here’s the unexpected win: Management, seeing the engagement (and perhaps wanting to smooth things over!), decided to lean into it. The end result? Everyone in the company received a genuinely significant Christmas present that year.

So, while some failed the test (and fast!), and a few wanted my metaphorical head, the overall response was exactly what you want to see from an awareness perspective. It wasn’t just about catching individuals; it was about triggering collective vigilance and discussion. We followed up with a presentation explaining the exercise and provided targeted training based on what we saw. The result? It helped. A LOT. It was a powerful reminder that while mistakes happen, fostering vigilance and providing the right kind of engaging follow-up turns potential incidents into valuable, organization-wide learning opportunities. And hey, the big real gifts and the successful awareness campaign certainly didn’t hurt my reputation either – sometimes, even near-disasters can turn into a win-win, right? It showed that our people, when alerted and engaged, were far more of an asset than a liability.

My Take: Stop Blaming, Start Empowering

Look, the idea that humans are just a liability in cybersecurity feels outdated and counterproductive to me. Yes, people make mistakes, like clicking on a tempting (but fake) Christmas gift link, and sometimes they get understandably upset about being tricked. But often our systems and cultures set them up to fail, or we fail to give them the simple tools and engaging awareness they need to succeed, like my Director’s training or the lessons (and unexpected bonuses!) learned from that phishing simulation.

I genuinely believe that by shifting our mindset – fostering trust, designing better systems, using respectful language, and providing meaningful, targeted support – we can turn our people from a perceived problem into our most valuable security asset. They have the potential to be vigilant, adaptable, and insightful in ways technology alone can never be.

So, my plea is this: let’s stop treating our colleagues like the biggest risk factor. Let’s start empowering them with the right knowledge and tools to be part of the solution. It might just be the smartest security investment we can make.
