When does cybersecurity awareness training actually work?

Cybersecurity awareness training has become a central activity in many firms. It takes time, requires planning and management follow-up, and is very often mandatory for all employees. But does it work? That depends – first and foremost on people’s feelings towards cybersecurity.

A very informal survey in my network shows that most people receive no cybersecurity awareness training at work at all, and that among those who do, more say it has not changed their behavior than say it has had a positive impact.

At the end of last year I participated in a local meeting of the Norwegian Association for Quality and Risk Management, where I heard a very interesting talk by Maria Bartnes (Twitter: @mariabartnes) from SINTEF on user behaviors and cybersecurity training. She argued that training is only effective if people are motivated for it – and for that they need beliefs and goals that are well aligned with the organization they are a part of. She portrayed this in a matrix of employee stereotypes, with “feelings towards policies and company goals” on one axis and “risk understanding” on the other – which I found to be a very effective way of communicating that all employees are not created equal 🙂 . You have everyone from technical risk experts who love the company and the policies they work under, to people who don’t understand risk at all and at the same time feel angry or resentful towards both their company and its policies – and everything in between.

Another issue is that many organizations make training mandatory and identical for everyone. It makes little sense to force your experts to sit through basic introductions that are second nature to them anyway – something many knowledge workers experience when HR departments push e-learning modules to all employees.

What does it all mean?

Some people have argued that security awareness training is completely useless. This is probably going a bit too far, but there are clear limits to what can be achieved by “training” of any kind when it comes to changing people’s behaviors. We use computers by habit – the way we act when we read e-mails, research things on the internet, write Word documents or compile code is all “second nature” once you are experienced. Changing those habits is hard, and it does not happen automagically through training.

Focusing on motivation and feelings is a good start – without motivation, it is very unlikely that users who exhibit risky behaviors will make any effort to change them.

Continuous effort is needed to change behaviors and create new habits. This means that employees must not only receive knowledge about the “why” and the “how”, but must also attain practical knowledge by doing. Once we realize that, it becomes very important not to demotivate employees who already have positive feelings about cybersecurity. Forcing the highly motivated and technically competent to take very basic e-learning lessons may kill that motivation – and thus increase your organization’s risk exposure.

It also becomes very important to motivate those who feel resentful – both the technically competent ones, and those in the “worst-case corner” of resentment and low technical competency. Motivation comes before technical know-how.

For cybersecurity awareness training to have a positive effect, it is thus necessary to tailor the contents to each employee based on skills and motivation. Further, the real work starts after the training – it is the act of “doing” that changes habits, not the mere presentation of information about phishing e-mails and strong passwords. This means you need leadership, and you need change agents.

Use your technically skilled and highly motivated people as change agents. They can help motivate others, and they can exemplify good behaviors. Let these supercyberusers support management, and educate management. And bring the managers on board on following up security regularly – rather than outsourcing it to the IT department. Entertaining abuse cases for discussion in meetings can help, as can publicly praising employees who make an effort to bring the maturity of both their own security practices, and the security maturity of the company as a whole, to a new level.


Make sure you adapt your training to both the motivation and the technical skills of those who receive it. See maturity work in the area of cybersecurity as part of your organization’s continuous improvement program – embed it in the way your organization works instead of relying solely on information campaigns. Use change agents and inspiring leaders in your organization to change the way the organization behaves, from the individual to the firm as a whole. That is the only way to build security awareness that actually changes behaviors.


Thinking about risk through methods

Risk management is a topic with a large number of methods. Within the process industries, semi-quantitative methods are popular, in particular for determining the required SIL for safety instrumented functions (automatic shutdowns, etc.). Two common approaches are LOPA, short for “layers of protection analysis”, and Riskgraph. These methods are sometimes treated as “holy” by practitioners, but the truth is that they are merely cognitive aids for sorting through our thinking about risks.


[Image: a Riskgraph slide rule – methods are formalisms]


In short, our risk assessment process consists of a series of steps:

  • Identify risk scenarios
  • Identify the risk-reducing measures you already have in place, like design features and procedures
  • Determine the potential consequences of the scenario at hand, e.g. worker fatalities or a major environmental disaster
  • Estimate how likely or credible you think it is that the risk scenario will occur
  • Consider how much you trust the existing barriers to do the job
  • Determine how trustworthy your new barrier must be for the risk to be acceptable

Several of these bullet points can be very difficult tasks alone, and putting together a risk picture that allows you to make sane decisions is hard work. That’s why we lean on methods, to help us make sense of the mess that discussions about risk typically lead to.
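The steps above can be sketched as a small calculation in the spirit of LOPA. Note that every number below (tolerable frequency, initiating event frequency, layer credit) is an illustrative assumption, not a value from any standard or dataset:

```python
# Illustrative LOPA-style calculation; all numbers are made-up assumptions.
tolerable_frequency = 3e-5   # tolerable accident frequency, per year
initiating_frequency = 1e-1  # initiating event (e.g. operator error), per year

# Probability of failure on demand (PFD) credited to an existing,
# independent protection layer (e.g. an alarm with operator response).
existing_layer_pfd = 1e-1

# Scenario frequency with the existing barrier taken into account.
mitigated_frequency = initiating_frequency * existing_layer_pfd

# Required PFD of the new safety instrumented function (SIF).
required_pfd = tolerable_frequency / mitigated_frequency

def sil_from_pfd(pfd):
    """Map a required PFD to a SIL band (IEC 61508, low-demand mode)."""
    if pfd >= 1e-1:
        return 0  # no SIL required
    if pfd >= 1e-2:
        return 1
    if pfd >= 1e-3:
        return 2
    if pfd >= 1e-4:
        return 3
    return 4

sil = sil_from_pfd(required_pfd)
print(f"Required PFD {required_pfd:.0e} -> SIL {sil}")
```

With these assumed numbers the new function must achieve a PFD of 3e-3, which lands in the SIL 2 band.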

Consequences can be hard to gauge, and one bad situation may lead to a set of different outcomes. Think about the risk of “falling asleep while driving a car”. Both of these are valid consequences that may occur:

  • You drive off the road and crash in the ditch – moderate to serious injuries
  • You steer the car into the wrong lane and crash head-on with a truck – instant death

Should you think about both, pick one of them, or consider another consequence not on this list? In many “barrier design” cases the designer chooses to design for the worst-case credible consequence. It can be difficult to judge what is really credible, and what is truly the worst case. And is this approach sound if the worst case is credible but still quite unlikely, while at the same time you have relatively likely scenarios with less serious outcomes? If you use a method like LOPA or Riskgraph, your method description may well instruct you to always use the worst-case consequence. A bit of judgment and common sense is still a good idea.

Another difficult topic is probability, or credibility. How likely is it that an initiating event will occur – and what is the initiating event in the first place? If you are the driver of the car, is “falling asleep behind the wheel” the initiating event? Let’s say it is. You can definitely find statistics on how often people fall asleep behind the wheel. The key question is: are those data applicable to the situation at hand? Are data from other countries applicable? Maybe not, if they have different road standards, different requirements for getting a driver’s license, and so on. Personal or local factors also influence the probability – in the case of the driver falling asleep, his or her health, stress levels, the maintenance of the car, etc. Bottom line: the estimate of probability will be a judgment call in most cases. If you are lucky enough to have statistical data to lean on, make sure you validate that the data are representative for your situation. Good method descriptions should also give guidance on how to make these judgment calls.

Most risks you identify already have some risk-reducing barrier elements in place. These can be things like alarms and operating procedures, and other means of reducing the likelihood or consequences of escalation. Determining how much you are willing to rely on these existing barriers is key to setting the requirement for your safety function of interest – typically a SIL rating. Standards limit how much credit you can take for certain types of safeguards, but also here some judgment is involved. Key questions are:

  • Are multiple safeguards really independent, such that the same type of failure cannot knock out multiple defenses at once?
  • How much trust can you put in each safeguard?
  • Are there situations where the safeguards are less trustworthy, e.g. if there are only summer interns available to handle a serious situation that requires experience and leadership?
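To illustrate the first question: multiplying the PFDs of two safeguards assumes they fail independently. A simplified 1oo2 beta-factor sketch (the PFDs and the common-cause fraction beta are illustrative assumptions) shows how quickly shared failure modes erode the combined credit:

```python
# Illustrative beta-factor sketch for two similar safeguards.
pfd_single = 0.1   # PFD credited to each safeguard on its own (assumed)
beta = 0.1         # fraction of failures assumed common-cause (assumed)

# If the safeguards were fully independent, their credits would multiply.
pfd_independent = pfd_single * pfd_single  # nominally 0.01

# Simplified 1oo2 beta-factor model: the common-cause fraction defeats
# both safeguards at once; only the remainder fails independently.
pfd_with_common_cause = beta * pfd_single + ((1 - beta) * pfd_single) ** 2

print(pfd_independent, pfd_with_common_cause)
```

With these numbers the nominally 100-fold combined credit shrinks to roughly a 55-fold risk reduction – the common-cause term dominates as soon as beta is non-negligible.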

Risk assessment methods are helpful, but don’t forget that you make a lot of assumptions when you use them. Question those assumptions even when you use a recognized method – especially when somebody’s life will depend on your decision.

Would you operate a hydraulic machine with no cover or emergency stop?

Sounds like a crazy idea, right? Research, however, has shown that about half of professional machine operators do not think safety functions are necessary. You know, things like panel limit switches, torque limiters and emergency stop buttons. Who would need that, right? This number comes from a report issued in Germany about 10 years ago, and I am not very optimistic that the numbers have improved since then. The report can be found here: http://www.dguv.de/ifa/Publikationen/Reports-Download/BGIA-Reports-2005-bis-2006/Report-Manipulation-von-Schutzeinrichtungen/index.jsp (in German).

Machine safety is important for both users and bystanders. Manipulations of safety functions are common – and the risk increase is typically unknown to users and others. How can we avoid putting our people at risk due to degraded safety of our machinery?

Researchers have found that the safety functions of machines are frequently manipulated, typically because workers perceive the manipulation as necessary to perform the work, or as a way to improve productivity. Everyone from machine builders to purchasers to operators should take this into account to avoid accidents. Consider for example a limit switch. A machine built to conform to the machinery directive (with CE marking) has to satisfy safety standards. Perhaps a SIL 2 requirement has been assigned to the limit switch because operation without it is deemed dangerous and a 100-fold risk reduction is necessary for operation to be acceptable. This means that if the limit switch is put out of function, the risk of operation is 100 times higher than the designer intended!
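The arithmetic behind that last claim can be sketched as follows. The demand rate is an illustrative assumption; SIL 2 corresponds to a PFD between 1e-3 and 1e-2 (a risk reduction factor of 100 to 1000) in low-demand mode:

```python
# Back-of-the-envelope check of the "100 times higher" claim.
demand_rate = 1.0         # dangerous demands on the switch, per year (assumed)
pfd_limit_switch = 1e-2   # SIL 2 at the weakest end of its band

rate_with_switch = demand_rate * pfd_limit_switch  # accidents/year, switch working
rate_bypassed = demand_rate                        # accidents/year, switch defeated

risk_increase = rate_bypassed / rate_with_switch
print(risk_increase)  # a 100-fold increase in risk
```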

What can we do about this? We need to design machines such that safety functions become part of the workflow – not an obstacle to it. If workers perceive nothing to gain from manipulating the machine, they are unlikely to do it. This boils down to something we are aware of but not good enough at taking into account in design processes: usability testing is essential not only to make sure operators are happy with the ergonomics – it is also essential for the safety of the people using the machine!

Machine safety – what is it? 

Machines can be dangerous. Many occupational accidents are related to use of machinery, and taking care of safety requires attention to the user in design, operation and training as well as when planning maintenance. 

In Europe there is a directive regulating the safety of machinery, namely 2006/42/EC. This directive is known as the machinery directive and has been made mandatory in all EU member states as well as Norway, Liechtenstein and Iceland. 

The directive requires producers of machines to identify hazards and design the machine such that the risks are removed or controlled. Only machines conforming to the directive can be sold and used in the EU. 

In practice, many risks are controlled using safety functions in the machine’s control system. These should be designed in accordance with recognized standards, ISO 13849-1 or IEC 62061. The two are different but considered equivalent in terms of safety: the former defines five performance levels (a, b, c, d, e) and the latter uses three safety integrity levels. The most common risk analysis approach for defining PL or SIL requirements is Riskgraph.
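As a rough orientation, the two scales line up as follows. The correspondence is given informatively in ISO 13849-1; this sketch is no substitute for reading the standard:

```python
# Approximate correspondence between ISO 13849-1 performance levels (PL)
# and safety integrity levels (SIL), as given informatively in ISO 13849-1.
# None means that PL a has no SIL counterpart.
PL_TO_SIL = {
    "a": None,
    "b": 1,
    "c": 1,
    "d": 2,
    "e": 3,
}

print(PL_TO_SIL["d"])  # a PL d requirement roughly corresponds to SIL 2
```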

By conforming to the directive – essentially through application of these standards together with the general principles in ISO 12100 – you can put the CE mark on the machine and declare it safe to use. Through these practices we safeguard our people, and can be confident that the machine will not be the cause of someone losing a loved one.