The CISO risk calculus: Navigating the thin line between paranoia and vigilance




Born and raised in Israel, I remember the first time I ventured to an American shopping mall. The parking lot was full of cars and people were milling about, yet I couldn’t figure out where the entrance was. It took me a few minutes before I realized that unlike in Israel, shopping malls in the U.S. don’t have armed guards and metal detectors stationed at every entrance.

I often share this anecdote as a way to illuminate the concept of “healthy paranoia” in the domain of cybersecurity. Just as Israel’s political reality has rightly instilled a state of constant vigilance among its citizens for physical security, today’s CISOs must cultivate a similar ethos among their employees, preparing and protecting them against an evolving slate of digital threats.

Of course, CISOs by the very nature of their role have little choice but to be paranoid about all the things that can go wrong. Others in an organization, by contrast, usually don’t become paranoid until something bad actually happens.

So, where do you draw the line between useful vigilance and debilitating paranoia?


Paranoia needs a purpose

Asking users to maintain a constant state of vigilance is both unrealistic and counterproductive. On a psychological level, sustained alertness can be mentally exhausting, often leading to fatigue and burnout. When individuals are consistently asked to be on high alert, they can experience diminished cognitive function, decreased productivity and increased susceptibility to errors. Such alert fatigue ultimately counteracts the very benefits that vigilance is supposed to deliver.

These tendencies are only exacerbated in the era of zero trust, where we are implored to ‘never trust, always verify.’ It’s easy to understand how some can take this edict to an extreme, blurring the lines between healthy skepticism and debilitating distrust.

While zero trust principles in cybersecurity advocate for rigorous verification and monitoring, it’s crucial to differentiate between this strategic approach and an all-consuming paranoia that can hamper operations, collaboration and innovation.

Consider some of the ways organizations have codified their paranoia to an unhealthy degree in how they secure their systems and data.

  • Onerous password requirements: The inadequacies of passwords are well understood by most users these days, yet their broad usage persists. As a result, most large organizations require workers to use and regularly change complex combinations of characters, numbers and symbols. However, such protocols often overlook the reality that many authentication breaches aren’t due to a password being cracked, but to relatively simple social engineering schemes. Moreover, if your strong password gets leaked on the dark web, no amount of complexity will stop an attacker from using it in credential stuffing attacks; screening passwords against known breach data (see the sketch after this list) does more to address that risk than yet another complexity rule.
  • Pursuit of ‘zero risk’: As with many strategic endeavors, risk mitigation often experiences a law of diminishing returns. Overly restrictive security measures can impede productivity and frustrate users, leading them to find workarounds that might inadvertently introduce new vulnerabilities. While the pursuit of absolute security is of course commendable, it’s often more practical to allocate resources to areas where they will have the most significant impact on reducing overall risk.
  • Fear-driven decision making: Too often, we make decisions based on emotional reactions rooted in fear and uncertainty, rather than objective analysis and rational judgment. For instance, if an employee accidentally clicks on a malware phishing email, a fear-driven response might be to severely restrict internet access for all employees, hampering productivity and collaboration, instead of addressing the root cause through better training or more nuanced access controls.
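To make the credential stuffing point concrete, here is a minimal sketch (not a production control) of how a signup or password-change flow might screen a candidate password against known breach data. It assumes outbound access to the public Pwned Passwords range API at api.pwnedpasswords.com and relies on its k-anonymity scheme, so only the first five characters of the password’s SHA-1 hash ever leave the machine; the function name and example password are illustrative.

```python
import hashlib
import urllib.request


def password_is_breached(password: str) -> bool:
    """Return True if the password appears in the Pwned Passwords corpus.

    Only the first five hex characters of the SHA-1 hash are sent to the
    API (k-anonymity); the full hash and the password never leave this machine.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<remaining 35 hash chars>:<times seen in breaches>"
    return any(line.split(":", 1)[0] == suffix for line in body.splitlines())


if __name__ == "__main__":
    candidate = "Tr0ub4dor&3"  # illustrative only: "complex", yet may already sit in breach corpuses
    if password_is_breached(candidate):
        print("Rejected: this password appears in known breaches, complexity notwithstanding.")
    else:
        print("Not found in known breaches (complexity alone is still no guarantee).")
```

Pairing a check like this with multi-factor authentication and rate limiting blunts credential stuffing far more effectively than adding another symbol requirement to the password policy.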

Fortifying the human firewall

Sometimes we forget the critical role that paranoia and anxiety have played in the collective survival of our species. Our early ancestors lived in environments filled with predators and other unknown threats. A healthy dose of paranoia enabled them to be more vigilant, helping them detect and avoid potential dangers.

The challenge in our modern era is being able to distinguish genuine threats from the endless noise of false alarms, ensuring that our inherited paranoia and anxiety serve us, rather than hinder us. It also requires that we acknowledge and address the human element in the security calculus.

As the late Kevin Mitnick wrote, “As developers invent continually better security technologies, making it increasingly difficult to exploit technical vulnerabilities, attackers will turn more and more to exploiting the human element. Cracking the human firewall is often easy.”

So what steps can security leaders take to harness these instincts constructively, helping users stay alert to real dangers and navigate them without becoming overwhelmed? Here are a few strategies that can help.

  • Embrace a security by design approach: While it’s common rhetoric to claim that security is everyone’s responsibility and advocate for a pervasive security culture, the real challenge lies in operationalizing this mindset and integrating security measures into the very fabric of product and system development. To truly achieve this, security principles must be seamlessly embedded into processes and practices, ensuring that they become instinctive behaviors rather than just mandated tasks.
  • Emphasize the edge cases: An edge case refers to a situation or user behavior that occurs outside of the expected parameters of a system. For instance, while most CISOs will prioritize their efforts on protecting against digital threats, what happens if someone gains physical access to a server room? As technology and user behavior evolve, what’s considered an edge case today might become more common in the future. By identifying and preparing for these outlier situations, security teams will be better able to respond to an uncertain future threat landscape.
  • Make security training persistent: Security training shouldn’t be a one-off initiative. While establishing robust policies is a crucial first step, it’s unrealistic to expect that people will automatically understand and consistently adhere to them. Human nature is not inherently programmed to retain and act on information presented only once. It’s not merely about providing information; it’s about continuously reinforcing that knowledge through repeated training. The occasional nudge or reminder, even if it feels like nagging, plays an essential role in keeping security principles top of mind and ensuring compliance over the long term.

As Joseph Heller wrote in Catch-22, “Just because you’re paranoid doesn’t mean they aren’t after you.” It’s a good reminder that in this unpredictable world of ours, a healthy dose of paranoia can be the best defense against complacency.

Omer Cohen is CISO at Descope.
