Zero-Risk Bias

Zero-risk bias is a trap in cybersecurity. Absolute security, with no vulnerabilities or weaknesses, is a pipe dream that never seems quite obtainable. You can draw a hard line with an autocratic company security policy; however, this immediately raises questions about flexibility, and implosions are not exactly out of the question. So what is zero-risk bias exactly? Schneider et al. (2017) note that zero-risk bias is the mentality in which a person values and pursues absolute zero-risk solutions, which tends to lead to unfavourable and suboptimal outcomes. A great example of this is the Great Australian Panic-Buying of Toilet Paper in 2020, where consumers sought to eliminate risk by hoarding excessive amounts of toilet paper. Hoarding let them feel in control and shielded from uncertainty, such as whether there would be stock, or whether they would run out.

To understand zero-risk bias in the context of cybersecurity, it is essential to briefly touch on the bias in a psychological context, as the primary drivers of zero-risk bias are deeply embedded in the human psyche.

Solutions born of the zero-risk bias are largely driven by certainty, as certainty is predictable, safe and verified. From a broader perspective, however, nothing is ever truly 100% certain: there will always be components and random variables outside the scope of control that can contribute to errors. In their paper Psychological Perspectives on Perceived Safety: Zero-Risk Bias, Feelings and Learned Carelessness, Raue and Schneider (2019) align this mode of thinking with perceived risk, noting that bad things can always happen and thus “uncertainty remains and safety becomes a matter of perception” (p. 62). Whilst their paper provides a broad overview of biases and rational thought in financial and medical contexts, they present valid points on why biased preferences for zero-risk solutions should not be the de facto answer.


Within their paper, they reference an experiment conducted by Maurice Allais in 1953 (Le comportement de l’homme rationnel devant le risque), in which subjects preferred a 100% certainty of gaining $100 million over an 89% probability of gaining $100 million, a 10% probability of gaining $500 million, and a 1% probability of gaining nothing. The results indicated that people leaned towards the safer, more certain option despite its unfavourable outcome, as opposed to the more rational option with the greater expected outcome (2019, p. 64). Thus, maximised results are generally overlooked when the zero-risk bias takes hold.
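The arithmetic makes the paradox concrete. A quick expected-value calculation, using the figures quoted above, shows the gamble is actually worth considerably more on average than the certain option:

```python
# Expected value of the two Allais options (figures in $ millions).
p_certain = {1.00: 100}                     # 100% chance of $100M
p_gamble = {0.89: 100, 0.10: 500, 0.01: 0}  # the risky option

ev_certain = sum(p * v for p, v in p_certain.items())  # 100.0
ev_gamble = sum(p * v for p, v in p_gamble.items())    # 139.0

# The "irrational" gamble is worth $39M more in expectation,
# yet most subjects chose certainty -- the zero-risk bias at work.
print(ev_certain, ev_gamble)
```

The certainty premium here costs $39 million in expectation, which is exactly the kind of overlooked maximised result the experiment demonstrates.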

So what can be learned from this and how can this be applied to cybersecurity practices?

Human weaknesses are embedded in cybersecurity, whether communicating with management, end-users or the developers of systems. As Richard Buckland (2022) noted in his lecture recording Human Weaknesses and Cognitive Flaws, “we have to understand brains, we have to understand the human mind.” The zero-risk bias is doubly harmful: not only is it a fallible bias, it also tends to overlook the maximised result and the broader picture, such as the business functions.

Cybersecurity professionals are hired in service of the company to support business functions and provide security without interrupting workflow and operations. In some ways, a zero-risk bias does the company a disservice. Adopting it can foster a belief of infallibility, which in turn leads to a kind of tunnel vision. It dissuades innovation and investment in security resources and can have a significant impact on the business, financially or in scalability, and not necessarily as a direct result of an incident. Examples include leaving systems unpatched or un-upgraded because the current system is reliable and known, or rigidly maintaining particular firewall rules. Further non-technological examples include user management and anomalous operations.

As mentioned at the start, absolute security with no vulnerabilities or weaknesses is a pipe dream. Vulnerabilities are inevitable and incidents will occur. Three recommendations I propose to combat the zero-risk bias and maximise gains at a slightly lower, yet still effective, level of certainty are:

  • Conduct risk assessments
  • Adopt countermeasures
  • Make compromises

These activities almost always involve consulting and collaborating with other individuals as a team. An additional solution I propose is to frame risks contextually, giving visibility of both the benefits and the risks involved in the final response or decision, whether to the end-user or to management for business-related resolutions. Studies have examined participants’ final decisions after being presented with hypothetical gains versus losses, or the advantages and disadvantages of each choice. Cox and Cox (2001) provide a summary showing an even distribution of decisions under positive and negative framing, highlighting that both are equally persuasive methods.
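The risk-assessment recommendation can be sketched numerically. The option names and figures below are entirely hypothetical, purely to illustrate that the proportionate countermeasure, not the zero-risk one, minimises total cost:

```python
# Hedged sketch of a toy likelihood x impact risk assessment.
# All names and figures are illustrative, not real data.
options = {
    "do_nothing":         {"likelihood": 0.30, "impact": 500_000, "cost": 0},
    "patch_and_monitor":  {"likelihood": 0.05, "impact": 500_000, "cost": 20_000},
    "zero_risk_lockdown": {"likelihood": 0.00, "impact": 500_000, "cost": 400_000},
}

def total_cost(opt):
    """Expected loss (likelihood x impact) plus the cost of the control itself."""
    return opt["likelihood"] * opt["impact"] + opt["cost"]

# The cheapest overall option accepts a small residual risk;
# eliminating risk entirely costs far more than it saves.
best = min(options, key=lambda name: total_cost(options[name]))
print(best)  # patch_and_monitor
```

Framing the comparison this way puts the cost of chasing zero risk in front of the decision-maker alongside the residual risk they are being asked to accept.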

Furthermore, sounder decisions tend to be made when the individual is involved in the context and outcome of the decision. In a study of the involvement hypothesis, Maheswaran and Meyers-Levy (1990) concluded that “negatively framed appeals can be highly persuasive… only if individuals who receive the appeal are sufficiently involved with the issue” (p. 366).

Case Study

I once received an e-mail from an employee departing the business in a week, asking to keep their account open beyond their termination date. The general rule of thumb is that there is no reason for a user to retain access to their account once they are no longer with the business, as it poses a security threat and raises data-breach concerns. The user, however, was adamant due to unforeseen circumstances. A zero-risk bias would dictate that the account should still not be extended.

Here is the catch though: this request was somehow approved by HR.

Transferring risk and educating the user is a strategy that can be applied in this situation. As Raue and Schneider (2019) summarised, “risk communication plays a crucial role in the perception of safety. As people’s risk perceptions often deviate from objective probabilities, risks need to be communicated in a way that embraces the way humans process information” (p. 76). Following the framing solution above, it is possible to educate the user on security’s recommendations, as well as outline the consequences of not following them. Involving the individual in the negative consequences heightens personal responsibility and can lead to a sounder answer.

However, this solution is not infallible either. There is still a healthy dose of stubbornness out there, and at that point, just make sure you have the person’s confirmation in writing!

In hindsight, had the user been denied the extension, there was a real chance they could have become a threat, with destruction of company assets a possibility. Thus, it is all about compromise.
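The compromise can be made concrete as a small sketch: instead of a binary allow/deny, grant a short, audited extension with compensating controls. All function and field names here are hypothetical, not a real identity-management API:

```python
from datetime import date, timedelta

# Hedged sketch: a time-boxed extension with compensating controls,
# rather than an all-or-nothing decision. The function and field
# names are hypothetical illustrations, not a real IAM API.

def extend_account(user, days, approver):
    """Record a short, audited extension instead of open-ended access."""
    return {
        "user": user,
        "expires": date.today() + timedelta(days=days),  # hard cut-off
        "approved_by": approver,                         # approval in writing
        "controls": ["read_only", "enhanced_logging"],   # residual-risk controls
    }

grant = extend_account("departing.user", days=7, approver="HR")
print(grant["expires"])
```

The point of the sketch is the shape of the decision: the residual risk is accepted knowingly, bounded in time, logged, and tied to the written approval mentioned above.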

Reference List

Buckland, R. (2022). Human Weaknesses and Cognitive Flaws [Lecture recording]. Foundations of Cyber Security, University of New South Wales. https://www.openlearning.com/unswcourses/courses/cyber-foundations-21/podcast/2-1/?cl=1

Cox, D., & Cox, A. D. (2001). Communicating the Consequences of Early Detection: The Role of Evidence and Framing. Journal of Marketing, 65(3), 91–103. http://www.jstor.org/stable/3203469

Maheswaran, D., & Meyers-Levy, J. (1990). The Influence of Message Framing and Issue Involvement. Journal of Marketing Research, 27(3), 361–367.

Raue, M., & Schneider, E. (2019). Psychological Perspectives on Perceived Safety: Zero-Risk Bias, Feelings and Learned Carelessness. Nachhaltigkeit Und Soziale Ungleichheit, 61–81.

Schneider, E., Streicher, B., Lermer, E., Sachs, R., & Frey, D. (2017). Measuring the Zero-Risk Bias: Methodological Artefact or Decision-Making Strategy? Applied Psychological Measurement, 225(1).