One of the biggest challenges of our time is climate change, and the world struggles to get its ongoing path toward environmental destruction under control. Today is Earth Day. For most people this day is about avoiding meat, taking public transport, using reusable shopping bags, drinking wine instead of beer, and turning lights off – but nerds can do more than that. One of our biggest challenges is to reduce greenhouse gas emissions from transport.
Walkable cities are nice – and cybersecurity can contribute to that! Happy Earth Day 2017!
Information technology has a gigantic role to play in the solution to that problem:
Self-driving cars, buses, metros make public transport cheaper. But can they be hacked? Of course they can.
Smart assistants using AI to help plan your day, your travel and to optimize your choices also with regard to environmental footprint can do a lot. But can they be hacked, thereby destroying all hope of privacy protection? Sure they can.
Telework can reduce the need to travel to work, and the need for business travel to talk to people in other locations. This brings a whole swath of issues: privacy, reliability. If people don’t trust the solutions for communication and system access, or if those solutions don’t work reliably, people will keep boarding planes to meet clients and driving cars to go to the office.
Cloud services are nice. They make working together over distances a lot easier. Cloud services require data centers. If the reliability of a data center is not quite up to expectations the standard solution is to replicate everything in another datacenter, or for the customer perhaps to replicate everything in his or her own datacenter, or possibly mirroring it to another cloud provider. This may not be seen as necessary if the reliability is super-good with the primary provider – particularly the ability to deal with DDoS attacks. Building reliable datacenters is therefore part of the climate solution – in addition to providing datacenters with green energy and efficient cooling systems.
OK, so DDoS is a climate problem? Yes, it is. And what do cybercriminals need to perform large-scale DDoS attacks? They need botnets. They get botnets by infecting IoT devices, laptops, phones, workstations and so on with malware. Endpoint security is therefore, also, a climate issue. Following sensible security management is therefore a contributor to protecting the environment. So in addition to choosing the bus over the car today, you can also help Mother Earth by beefing up the security on your private devices:
Make sure to patch everything, including routers, cell phones, laptops, smart home solutions, alarm systems, internet connected refrigerators and the whole lot.
Stop using cloud services with sketchy security and privacy practices. Force vendors to beef up their security by using your consumer power. And protect your own interests at the same time. This is doing everyone a favor – it makes AI assistants and such trustworthy, making more people use them, which favors optimized transport, consumption and communications.
Prioritize efficient, safe and secure telework. Use a VPN when working from coffee shops, and promote the “local work, global impact” way of doing things. By avoiding excessive travel, whether to the office or to a client on the other side of the globe, your decisions have impact. Especially if you manage to influence other people to prioritize the same things.
Happy Earth Day 2017. Promote climate action through security practices!
XSS (cross-site scripting) remains one of the most common vulnerabilities in web applications, and it still serves as a useful point of attack for hackers. If you are a web developer, knowing how to properly protect your application from these attacks is a must.
Don’t leave your app open to attack – injection vulnerabilities are not nice.
Cross-site scripting vulnerabilities exist when user input from web forms or API calls is not properly escaped and sanitized before it is used. Directly reflecting user input back to the browser is sketchy practice. If a user inputs JavaScript into a form field, and that script executes, then you have a vulnerability that hackers can take advantage of.
There are two ways users can give input to a web page: through web forms, and through URL parameters (usually by clicking links on the page). Both input types are interesting injection points for someone looking to exploit your page.
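To make the reflected case concrete, here is a minimal sketch in Python. The handler names and the greeting page are hypothetical, not from any real framework; the point is only the difference between reflecting raw input and escaping it first.

```python
from html import escape

# Hypothetical handlers for a page that reflects the ?name=... URL parameter.

def render_unsafe(name: str) -> str:
    # Vulnerable: raw input is reflected, so script tags or
    # event handlers inside `name` execute in the victim's browser.
    return "<p>Hello, " + name + "</p>"

def render_safe(name: str) -> str:
    # Escaping turns <, >, & and quotes into entities, so the
    # payload is displayed as inert text instead of being executed.
    return "<p>Hello, " + escape(name) + "</p>"

payload = "<img src=x onerror=alert()>"
print(render_unsafe(payload))  # live <img> tag: the XSS fires
print(render_safe(payload))    # &lt;img src=x onerror=alert()&gt;: harmless
```

The same distinction applies whether the input arrives via a form field or a URL parameter – what matters is what happens before it hits the browser.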
Modern web applications seek to filter out this type of input. OWASP has put together a large selection of attack vectors for XSS exploits that try to bypass these filters. You can see the list here: https://www.owasp.org/index.php/XSS_Filter_Evasion_Cheat_Sheet.
To manually test your own applications you can try the following input strings:
<script>alert()</script>: usually doesn’t work, since most filters catch a plain script tag.
<img src=x onerror=alert()>: a typical stored XSS exploit, often via comment functionality and the like. If this one pops an alert on reload of the page, you have successfully injected JavaScript that will be served to other users too. Now you could go ahead and replace the alert with something more evil, like a redirect to your phishing site of choice (don’t do it – it really is evil, and illegal).
In URL parameters: data:html,alert() or data:text/javascript,alert(); or javascript:alert()
The URL manipulation is typically used in links supplied in scam emails etc. It makes your code execute within the context of the web application, and is often used to steal session data.
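One common defense against such link-based payloads is to allowlist URL schemes before rendering user-supplied links. A minimal sketch, assuming a hypothetical `is_safe_url` helper and a small allowlist of your own choosing:

```python
from urllib.parse import urlparse

# Schemes we are willing to render as clickable links; everything
# else (javascript:, data:, vbscript:, ...) is rejected.
SAFE_SCHEMES = {"http", "https", "mailto"}

def is_safe_url(url: str) -> bool:
    scheme = urlparse(url.strip()).scheme.lower()
    # An empty scheme means a relative URL, which stays on our own site.
    return scheme == "" or scheme in SAFE_SCHEMES

print(is_safe_url("https://example.com"))     # True
print(is_safe_url("/profile/42"))             # True
print(is_safe_url("javascript:alert()"))      # False
print(is_safe_url("data:text/html,alert()"))  # False
```

Allowlisting is preferable to blocklisting here: attackers have many scheme spellings and encodings to try, while the set of schemes you actually need is short and stable.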
Avoiding XSS as a developer
There are several things you can do as a developer to avoid these vulnerabilities. The best way is to use a framework/templating system that autoescapes dangerous input for you. Most modern web frameworks will do this for you, as long as you enable the right middleware!
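For illustration, here is a toy version of what framework autoescaping does under the hood. The `render` helper is hypothetical – real frameworks do this context-aware and far more thoroughly – but it shows the principle: every substituted value is escaped before it reaches the HTML.

```python
from html import escape
from string import Template

def render(template: str, **values) -> str:
    # Escape every substituted value so user input can never
    # introduce live markup into the output.
    escaped = {key: escape(str(val)) for key, val in values.items()}
    return Template(template).substitute(escaped)

out = render("<p>Comment: $comment</p>",
             comment="<img src=x onerror=alert()>")
print(out)  # <p>Comment: &lt;img src=x onerror=alert()&gt;</p>
```

The reason to lean on a framework rather than rolling this yourself is that correct escaping depends on context (HTML body, attribute, URL, JavaScript), and frameworks encode that knowledge for you.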
You should also test for vulnerabilities, including XSS. You can do this manually by trying to inject strings like the ones above, and you can use a vulnerability scanner. Allow someone else to look at your code to try and find weaknesses – it is harder to see errors when you have made them yourself! You should use multiple test methods when available, and also consider including security tests in unit testing for your code.
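Security tests of this kind fit naturally into a unit test suite. A sketch, assuming a hypothetical `render_comment` function standing in for your own rendering code, probed with strings like those above:

```python
from html import escape

# Stand-in for the view or template code under test.
def render_comment(comment: str) -> str:
    return '<div class="comment">' + escape(comment) + '</div>'

# XSS probe strings, like the manual test inputs above.
XSS_PROBES = [
    '<script>alert()</script>',
    '<img src=x onerror=alert()>',
    '" onmouseover="alert()',
]

for probe in XSS_PROBES:
    out = render_comment(probe)
    # No raw tag from the probe may survive into the output.
    assert '<script' not in out.lower()
    assert '<img' not in out
    assert probe not in out
print("all XSS probes neutralized")
```

Keeping such probes in the regular test suite means a regression in escaping fails the build instead of shipping to users.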
Some takeaways:
Even big-league players have XSS vulnerabilities on their sites. See the 2014 Register story on a WordPress plugin bug that affected most of the platform.
XSS allows hackers to attack your users. If that was possible due to neglect on your part, you are partially to blame – right? And your customers would rightly be angry.
Web application frameworks deal with this in a good way. It is very hard to write context-aware escaping manually, so stick with a framework!
Recently, a group called Shadow Brokers released hundreds of megabytes of tools claimed to stem from the NSA and other intelligence organizations. Ars has written extensively on the subject: https://arstechnica.com/security/2017/04/nsa-leaking-shadow-brokers-just-dumped-its-most-damaging-release-yet/. The leaked code is available on github.com/misterch0c/shadowbroker. The exploits target several Microsoft products still in service (and commonly used), as well as the SWIFT banking network. Adding to the speculation is the fact that Microsoft silently patched vulnerabilities claimed to be zero-days in the leaked code prior to the actual leak. But what does all of this mean for “the rest of us”?
Analysis shows that lifecycle management of software needs to be proactive, considering the security features of new products against the threat landscape prior to end-of-life for existing systems as a best practice. The threat from secondary adversaries may be increasing due to availability of new tools, and the intelligence agencies have also demonstrated willingness to target organizations in “friendly” countries; nation state actors should thus include domestic ones in threat modeling.
There are two key questions we need to ask and try to answer:
Should threat models include domestic nation state actors, including illegal use of intelligence capabilities against domestic targets?
Does the availability of the leaked tools increase the threat from secondary actors, e.g. organized crime groups?
Taking the first issue first: should we include domestic intelligence in threat models for “normal” businesses? Let us examine the C-I-A triad from this perspective.
Confidentiality: are domestic intelligence organizations interested in stealing intellectual property or obtaining intelligence on critical personnel within the firm? This tends to be supply chain driven if you are not yourself the direct target, or data collection may occur due to innocent links to other organizations that are being targeted by the intelligence unit.
Integrity (data manipulation): if your supply chain is involved in activities drawing sufficient attention to require offensive operations, including non-cyber operations, integrity breaches are possible. Activities involving terrorism funding or illegal arms trade would increase the likelihood of such interest from authorities.
Availability: nation state actors are not the typical adversary that will use DoS-type attacks, unless it is to mask other intelligence activities by drawing response capabilities to the wrong frontier.
The probability of APT activity from domestic intelligence is still low for most firms. The primary sectors where this could be a concern are critical infrastructure and financial institutions. Firms involved in the value chains of illegal arms trade, terrorism funding or human trafficking are also potential targets, but they are often not aware of their role in the illegal business streams of their suppliers and customers.
The second question was if the leak poses an increased threat from other adversary types, such as organized crime groups. Organized crime groups run structured operations across multiple sectors, both legal and illegal. They tend to be opportunistic and any new tools being made available that can support their primary cybercrime activities will most likely be made use of quickly. The typical high-risk activities include credit card and payment fraud, document fraud and identity theft, illicit online trade including stolen intellectual property, and extortion schemes by direct blackmail or use of malware. The leaked tools can support several of these activities, including extortion schemes and information theft. This indicates that the risk level does in fact increase with the leaks of additional exploit packages.
How should we now apply this knowledge in our security governance?
The tools use exploits in older versions of operating systems. Keeping systems up-to-date remains crucial. New versions of Windows tend to come with improved security. Migration prior to end-of-life of previous version should be considered.
In risk assessments, domestic intelligence should be considered together with foreign intelligence and proxy actors. Stakeholder and value chain links remain key drivers for this type of threat.
Organized crime: targeted threats are value-chain driven. Firms with exposure and old infrastructure most likely face increased risk due to the new cyberweapons now available to organized crime groups.
All businesses have processes for their operations. These can be production, sales, support, IT, procurement, auditing, and so on.
All businesses also need risk management. Traditional risk management has focused on financial risks, as well as HSE risks. These governance activities are also legal requirements in most countries. Recently cybersecurity has also caught mainstream attention, thanks to heavy (and often exaggerated) media coverage of breaches. Cyber threats are a real risk amplifier for any data-centric business. Therefore they need to be dealt with as part of risk management. In many businesses this is, however, still not the case, as discussed in detail in this excellent Forbes article.
Most employees do not even get basic cybersecurity training at work. Is that an indicator that businesses have not embedded security practices in their day-to-day business?
One common mistake many business leaders make is to view cybersecurity as an IT issue alone. Obviously, IT plays a big role here but the whole organization must pull the load together.
Another mistake a leader can make is to view security as a “set and forget” thing. It is unlikely that this would be the case for HSE risks, and even less so for financial risks.
The key to operating with a reasonable risk level is to embed risk management in all business processes. This includes activities such as:
Identify and evaluate risks to the business related to the business process in question
Design controls where appropriate. Evaluate controls up against other business objectives as well as security
Get your people processes right (e.g. roles and responsibilities, hiring, firing, training, leadership, performance management)
What does the security aware organization look like?
Boiling this down to practice, what would be some key characteristics of a business that has successfully embedded security in its operations?
Cybersecurity would be a standard part of the agenda for board meetings. The directors would review security governance together with other governance issues, and also think about how security can be a growth enhancer for the business in the markets it operates in.
Procurement considers security when selecting suppliers. They identify cybersecurity threats together with other supply chain risks and act accordingly. They ensure baseline requirements are included in the assessment.
The CISO does not report to the head of IT. The CISO should report directly to the CEO and be regularly involved in strategic business decisions within all aspects of operations.
The company has an internal auditing system that includes cybersecurity. It has based its security governance on an established framework and created standardized ways of measuring compliance, ranging from automated audits of IT system logs to employee surveys and policy compliance audits.
Human resources is seen as a key department for security management. Not only are they involved in designing role requirements, performance management tools and training materials, but they are heavily involved in helping leaders build a security-aware culture in the company. HR should also be a key resource for evaluating M&A activities when it comes to cultural fit, including cybersecurity culture.
If you follow security news in media you get the impression that there are millions of super-evil super-intelligent nation state and hacktivist hackers constantly attacking you, and you specifically, in order to ruin your day, your business, your life, and perhaps even the lives of everyone you have ever known. Is this true? Are there hordes of barbarians targeting you specifically? Probably not.
Monsters waiting outside your gates to attack at first opportunity? That may be, but it is most likely not because they think your infrastructure is particularly tasty!
So what is the reality? The reality is that the threat landscape is foggy; it is hard to get a clear view. What is obviously true, though, is that you can easily fall victim to cyber criminals – although it is less likely that they are targeting you specifically. Of course, if you are the CEO of a big defense contractor, or you are the CIO of a large energy conglomerate – you are most likely specifically targeted by lots of bad (depending on perspective) guys – but most people don’t hold such positions, and most companies are not being specifically targeted. But all companies are potential targets of automated criminal supply chains.
The most credible cyber threats to the majority of companies and individuals are the following:
Phishing attacks with direct financial fraud intention (e.g. credit card fraud)
Non-targeted data theft for the sake of later monetization (typically user accounts traded on criminal market places)
Ransomware attacks aimed at extorting money
None of these attacks are targeted. They may be quite intelligent, nevertheless. Cybercriminals are often quite sophisticated, and they are in many cases “divisions” in organized crime groups that are also active in more traditional crime, such as human trafficking, drug and illegal weapons trade, etc. Sometimes these groups may even have capabilities that mirror those of state-run intelligence organizations. In the service of organized crime, they develop smart malware that can evade anti-virus software, analyze user behaviors and generally maximize the return on their criminal investment in self-replicating worms, botnets and other tools of the cybercrime trade.
We know how to protect ourselves against this threat from the automated hordes of non-targeted barbarians trying to leach money from us all. If we keep our software patched, avoid giving end-users admin rights, and use whitelists to prevent unauthorized software from running – we won’t stop organized crime. But we will make their automated supply chain leach from someone else’s piggybank; these simple security management practices stop practically all non-targeted attacks. So much for the hordes of barbarians.
These groups may also work on behalf of actual spies in some cases – they may in practice be the same people. So the criminal writing the most intelligent antivirus-evading new ransomware mutation may also be the one actively targeting your energy conglomerate’s infrastructure and engineering zero-day exploits. Defending against that is much more difficult – because of the targeting. But then they aren’t hordes of barbarians or an army of ogres anymore. They are agents hiding in the shadows.
Bottom line – stop crying wolf all the time. Stick to good practices. Knowing what you have and what you value is the starting point. Build defense-in-depth based on your reality. That will keep your security practices and controls balanced, allowing you to keep building value instead of drowning in fear of the cyber hordes at your internet gateways.
Most business leaders think about security as a cost. It is hard to demonstrate positive returns on security investments, which makes it a “cost” issue. Even people who work with securing information often struggle with answering the simple and very reasonable question: “where is the business benefit?”.
Finding the right path to make security beneficial for your business involves thinking about market trust, trends and consumer behavior. For many security professionals this is difficult to do because it is not what they have been trained to focus on. How would you answer the question “what is the business benefit of security management”?
What if you turn it around, and view security as a selling point? It may not be the driver of revenue growth today – but it may very well be an important prerequisite for growth tomorrow. Here are three issues that can help clarify why keeping your data and systems secure will be necessary for the days to come if you want your business to grow:
Your customers will not trust you with their data if you cannot keep it safe from hackers and criminals. The GDPR will even make it illegal to not secure customer data in a reasonable manner if you do business in Europe from 2018. If you don’t secure your customers’ data and also show them why they can trust you to do so, people will increasingly take their business elsewhere.
If you operate in the B2B world, the number of suppliers and buyers setting requirements to their supply chain partners is growing. They will not buy from you unless you can show that you satisfy some minimum security requirements – including keeping tabs on risks and vulnerabilities. This is true for engineering firms, for consultancies, for banks, for betting operators, for retail stores, and so on. You’d better be prepared to demonstrate you satisfy those requirements.
You will get hacked. Seriously, it is going to happen one day. Then you’d better be prepared for handling it, which means you need to have invested in security and trained for these events. It is like mandatory fire drills – if you don’t do them, your evacuation during a fire is less likely to be successful. Companies that handle being hacked well respond quickly, inform third parties and the public in a way that has been thought out and tested up front, and generally limit the damage that hackers can do. This mitigates the risk that your customers lose all trust in you. You live to do business another day. The companies that haven’t prepared? Sometimes they never recover, or at least their short-term growth will be seriously threatened.
Viewing security as a growth component rather than a cost issue turns the discussion around. It allows you to go from “reactive” to “proactive”. Securing your business is a core business process – this is the focus you can achieve when security becomes a unique selling point rather than a budget constraint. Happy selling!
This post is based on the excellent mindmap posted on taosecurity.blogspot.com – detailing the different fields of cybersecurity. The author (Richard) said he was not really comfortable with the risk assessment portion. I have tried to change the presentation of that portion – into the more standard thinking about risk stemming from ISO 31000 rather than security tradition.
Red team and blue team activities are presented under penetration testing in the original mind map. I agree that the presentation there is a bit off – red team is about pentesting, whereas blue team is the defensive side. In normal risk management lingo, these terms aren’t that common – which is why I left them out of the mind map for risk assessment. For an excellent discussion of these terms, see this post by Daniel Miessler: https://danielmiessler.com/study/red-blue-purple-teams/#gs.aVhyZis.
Suggested presentation of the risk assessment mind map – wrapping it in typical risk assessment activity descriptions
The map shown here breaks down the risk assessment process into the following containers:
Context description
Risk identification
Risk analysis
Treatment planning
There are of course many links between other security related activities and risk assessments. Risk monitoring and communication processes are connecting these dots.
Threat intelligence is also essential for understanding the context – which again dictates the attack scenarios and credibility needed to prioritize risks. Threat intelligence entails many activities, as indicated by the original mind map. One source of intel from ops that is missing on that map, by the way, is threat hunting. That also ties into risk identification.
I have also singled out security ops as it is essential for risk monitoring. This is required on the tactical level to evaluate whether risk treatments are effective.
Further, “scorecards” have been used as a name for strategic management here – and integration in strategic management and governance is necessary to ensure effective risk management – and involving the right parts of the organization.
After being home on paternity leave 80% of the week and working 20% of the week, I will be switching percentages from tomorrow. That means more time to get hands-on with security. I’ve recently switched from risk management consulting to a pure security position within a fast-growing organization with a very IT-centric culture. Working one day a week in this environment has been great for getting an impression of the organization and its context, and now the real work begins. I think habits from the consulting world will be beneficial to everyone involved. Here’s how.
Successful consultants must not only be good at what their technical area of expertise is, but also at moving around in unknown territories in client organizations while navigating complex issues with many stakeholders – these are habituated skills that security professionals should adopt.
Slipping into someone else’s shoes
Consulting is about understanding the unarticulated problems, and getting to the core through intelligent questions. That is the essence of it; the good consultant understands that context is everything, and that the perception of context is different depending on the shoes you wear. This goes for strategy development, for risk management in general, and definitely for cybersecurity.
Use your analytics for (almost) everything
As a consultant you must be able to back up your claims. Your recommendations are expensive to get, and they’d better be worth the price. Often you will create recommendations that will be uncomfortable to decision makers – due to cost, challenged assumptions or that your recommendations are not aligned with their gut feeling.
This is why consultants must be ready to back up their claims, with two essential big guns; a convincing approach to analysis, and solid data. Further, to add to the credibility of the recommendations, the methods and data should be described together with the uncertainties surrounding both.
Working in security means that you are trying to protect assets – some tangible, but most are not. The recommendations you make usually carry a cost, and to convince your stakeholders that your recommendations are meaningful you need to provide the methods and the data to make them compelling. Which brings us to the next step…
Always make an effort to communicate with purpose
Analysis and data become useless without communication. This is the high-stakes point of consulting: communicating with clients, stakeholders, and internal and external subject matter experts. Not only for presenting your facts, but as support for the whole process. Understanding context is never a one-way street; it is a multifaceted, multichannel communication challenge. Understanding data and uncertainties often requires multidisciplinary input. This requires questions to be asked, provocations to be made and conversations to be had. Presenting your recommendations requires public speaking skills. And following up requires perseverance, empathy and prioritization.
In cybersecurity you deal with a number of groups, each with their own perspectives. Involving the right people at the right time is key to any successful security program, ranging from optimizing automated security testing during software integration to teaching support staff about social engineering awareness.
And that leaves one more thing: learning
If there is one thing consulting teaches you, it is that you have a lot to learn. With every challenge you find another topic to dive into, another white spot in your know-how. Consultants are experts at thriving outside their comfort zones – that is what you need to do to help clients solve complex issues you have never seen before. You must constantly reinvent, you must constantly remain curious, and you must process new information every day, in every interaction you have.
Cybersecurity requires learning all the time. One thing that strikes me when looking at new attack patterns is the creativity and ingenious engineering of bad guys. Not all attacks are great, not all malware is complex, but their ability to distill an understanding of people’s behaviors into attack patterns that are hard to detect, deny and understand is truly inspiring; to beat the adversaries we can never stop learning.
Disclosing vulnerabilities is part of handling your risk exposure. Many times, web vulnerabilities are found by security firms scanning large portions of the web, or they may come from independent security researchers who have taken an interest in your site.
Ignoring the communication issues around vulnerability disclosure can cost you a lot. Working on maturity at the top is a high ROI activity!
How companies deal with such reported vulnerabilities usually takes one of the following three paths:
Fix the issue, tell your customers what happened, and let them know what their risk exposure is
Fix the issue but try to keep it a secret.
Threaten the reporter of the vulnerability, claim that there was never any risk regardless of the facts, and refuse to disclose details.
Number 2 is perhaps still the norm, unfortunately. Number 1 is ideal. Number 3 is bad.
If you want to see an example of ideal disclosure, this Wired.com article about revealing password hashes in source shows how it should be done.
A different case was the Norwegian grocery chain REMA 1000, where a security researcher reported a lack of authentication between frontend and backend, exposing the entire database of customer data. They chose to go with route 3. The result: media backlash, angry consumers and the worst quarterly results since… well, probably forever.
So, what separates the businesses that do it the right way from those that choose to go down the path of the rambling, angry and ignorant? It is about maturity and skills at the top. This is why boards and top management need to care about information security – it is a key business issue.
NorSIS has studied what they term cybersecurity culture in Norway. The purpose of their study has been to help designing effective cybersecurity practices and to understand what security regulations Norwegians will typically accept.
The study seeks to measure culture, a concept that does not easily lend itself to quantification or simple KPIs. The attempt is based on a survey sent to a group of people representative of the Norwegian population.
The key insights sought by the study are summarized in 4 research questions:
What characterizes the Norwegian cybersecurity culture?
To what degree does cybersecurity education influence behaviors and awareness?
How do Norwegians relate and react to cyber risks?
To which degree do individuals take responsibility for the safety and security of cyberspace?
Thanks to Bjarte Malmedal for sending me a nice hardcopy of the report he wrote with Hanne Eggen Røislien – you should follow him on Twitter for insightful security discussions!
The cultural dimension
NorSIS does not fall into the trap of reducing culture to behaviors alone, but attempts to treat the cultural dimension as a set of norms, beliefs and practices influenced in various ways. They define 8 core issues that influence the cybercultural fabric of society:
Collectivism
Governance and control
Trust
Risk perception
Techno-optimism and digitalization
Competence
Interest
Behaviors
The discussion of these core issues that follows is sensible and logical. The authors then summarize some results from their questionnaires, mapping answers to the 8 core issues. For example, they report that only 18% of respondents say they have little interest in IT and technology.
Competence and learning
Surprisingly, the report states that 59% of respondents report having received cybersecurity training sometime in the last two years (without specifying further what this entails). They also look into how people prefer to learn about security.
The authors take the perspective that many children are not receiving the cybersecurity guidance they need because only half the adult population has received cybersecurity training.
The report also states that it is unlikely that training will typically relate the security of cyberspace as a whole to the security of individual devices.
Risk perception
A key finding in the report is that 7 of 10 respondents think they expose themselves to threats online. They further associate the risk exposure with external factors rather than their own actions. Further 6 of 10 people feel confident about their own ability to identify what is and isn’t safe to do online.
The highest fear factors are found when doing online banking and using online government services. This is perhaps because it is during these activities the users are interacting with their most sensitive data.
Behavioral patterns
Most people report that they think about how safe a website is before using it, and only 18% say they don’t think about this. The ability to actually assess this most likely varies; 61% report feeling competent to make such assessments.
Another interesting finding is that people report deliberately breaking security rules at work; 14% in the private sector, 8% in the public sector, and men report doing this more than women.
Risk-taking behaviors should be expected in any large group of people, and the self-reported numbers are reasonable when compared to other studies about motivation and willingness to follow corporate norms.
Study conclusions
The report draws some main conclusions based on the data gathered. One is about education, where the authors feel confident that positive security behaviors correlate with security education. They argue that it should be a government responsibility to educate the population about security, e.g. by making it part of the school curriculum.
Regarding the surveillance-privacy tension in cybersecurity governance, the authors conclude that people mostly support giving police the authority and the tools to fight cybercrime, but they do not believe they will get any help by going to the police. Only 13% of victims of cybercrime file a police report.
They further propose policies for government action: primarily strengthening security education in the school system, and giving law enforcement further tools to fight cybercrime.
My thoughts on this
This report is an interesting piece of work, in many respects confirming with data the assumptions security professionals tend to make about people in general, and perhaps about the “typical user”.
The research questions asked at the outset of the report are perhaps implicitly answered through the data and the interpretations of those data. I will try to add my own impressions, based on the report and on my personal experience from the corporate world.
What characterizes the Norwegian cybersecurity culture?
Norwegians are tech savvy – in the sense that they use technology. The report indicates that a lot of people are confident about their own use of technology, and most people believe they can assess what is safe and not safe to do online. When the report drills down into some behavioral aspects, there are issues that may paint a somewhat different picture.
People still use the same password on many services, although many report sounder practices. It is not unlikely that this self-reporting is skewed because people answer what they know they should be doing, instead of what they are actually doing.
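To illustrate what the sounder practice looks like in concrete terms – one unique, randomly generated password per service – here is a minimal sketch using Python’s standard `secrets` module. This is my own illustration, not anything from the report, and the service names are hypothetical:

```python
import secrets
import string

# Mix of letters, digits and a few symbols; adjust to each site's rules.
ALPHABET = string.ascii_letters + string.digits + "-_!@#"

def generate_password(length: int = 16) -> str:
    """Build a random password from a cryptographically secure source."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One distinct password per service, so a breach of one site
# cannot be replayed against the others.
passwords = {service: generate_password() for service in ("mail", "bank", "shop")}
```

In practice a password manager does exactly this for you; the point is that uniqueness per service, not memorability, is what limits the damage of a single leak.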
People feel at risk when using online services, but still most people do not back up their data more often than every month, 15% report they never back up data, and 10% say they don’t even know. If the “correct answer” bias is affecting the results here, the situation is likely worse than this in practice. Think about the question: “how often do you check the oil on your car?”. Most people would like to say they do this regularly, like every month – but we all know that is not true.
The question asked about backup was actually how often people back up data that is important to them. I have a suspicion that a lot of people have never thought about what data is important. Is it the pictures of the grandchildren? Is it your financial documents, insurance papers, etc? Is it the recipe collection you keep in Microsoft OneNote? Most people will never have thought about this. A lot of people also believe nothing bad can happen as long as they store their files in the cloud. Beliefs are thus often formed without the competence needed to form informed decisions about value and risk.
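Once you have decided which data is important, the backup habit itself can be automated away. As a hedged illustration (again my own sketch, not from the report), a few lines of Python that copy a folder of important files into a dated backup directory – the kind of script one could schedule to run weekly:

```python
import shutil
from datetime import date
from pathlib import Path

def backup(source: Path, dest_root: Path) -> Path:
    """Copy everything under `source` into a dated folder below `dest_root`."""
    target = dest_root / date.today().isoformat()
    # dirs_exist_ok lets the same day's backup be re-run without failing.
    shutil.copytree(source, target, dirs_exist_ok=True)
    return target
```

A real setup would add rotation of old backups and an off-site copy, but even this removes the need to remember anything at all.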
My conclusion is that Norwegians are feeling quite confident about their own security practices, without necessarily having very good practices. Overconfidence is often a sign of insufficient know-how, which for the population as a whole is probably the case.
To what degree does cybersecurity education influence behaviors and awareness?
Awareness training is often about practices – knowing what to do. Then comes motivation and the habituation of that information – how can you make theory into practice, how can you make a conscious effort into habit and second nature? I think two important things are at play here that we tend to underestimate; building on a feeling of responsibility for the collective good (which is also one of the 8 core issues of cybersecurity culture as defined in the NorSIS report), and creating skills that lower the effort barrier for secure practices. People who feel the use of IT is difficult are unlikely to change their existing habits before the “difficulty barrier” has been reduced.
This is where schools can play a role, as NorSIS suggests – but that is also a major challenge given the current state of affairs, at least in Norwegian schools. I have been running an after-school coding activity for elementary school pupils for a couple of years (mostly based on Scratch, with some Python). What is very visible in those sessions is that socio-economic background correlates to a very large degree with children’s technical know-how. Many teachers also lack the know-how, and perhaps the interest, to be an equalizing factor when it comes to technology, although political efforts do exist to make technology a more central topic in schools. In this regard Norway currently lags behind comparable nations, such as Sweden or the United Kingdom, where IT plays a bigger and more fundamental role in education.
How do Norwegians relate and react to cyber risks?
People worry about cyber risks, and they worry more the older they get. Another interesting aspect is that people are worried about being subject to online credit card fraud, whereas using debit or credit cards online is one of the behaviors with lower perceived risk scores in the study. Further, using online banking is seen as a low risk activity – which correlates well with banks being seen as “secure”.
Ironically, “using email” is only perceived as slightly more risky than using online banking – in spite of social engineering through e-mail being the primary initial attack vector for 30 years, and still going strong.
They also conclude that having received cybersecurity education does not necessarily change how people perceive online risks, and that this is at odds with how many security professionals view the effects of awareness training. This does not come as a surprise – changing feelings by transfer of facts is not likely a good strategy, and risk perception at the personal level is typically based on feelings, as the report also correctly states. Changing risk perception requires continuity, leadership and the challenging of assumptions among peers – it requires the evolution of culture, and that is a slow beast to move. Training is only one of many levers to pull to achieve that.
To which degree do individuals take responsibility for the safety and security of cyberspace?
Creating botnets would be really hard if all devices were patched, hardened and all users careful to avoid social engineering schemes. This is not something most people are thinking about when they dismiss the prompt to update their iOS version for the n’th time.
Most people probably don’t realize that it is the collective security of all the connected devices combined that make up the security landscape for the internet as a whole. Further it is easy to fall into the thinking trap that “there are so many computers that my actions have no impact” – more or less like the “my vote doesn’t count” among voters who stay at home on election day.
NorSIS sees education as a possible remedy, and that is definitely part of the story. Perhaps that educational effort should be distributed among many different curricula – languages, social sciences, IT, mathematics – to help build consensus about why individual actions matter for the safety of the many.
Summary of the summary
The NorSIS report on Norwegian cybersecurity culture is an ambitious project that tries to highlight how society as a whole deals with security practices, beliefs, education and perceptions.
The report indicates that interest and motivation are key drivers of positive security behaviors, and of know-how.
There is an indication that education works in driving good behaviors. Security training seems to be less effective in changing risk perception. This should not be surprising given what we know about change processes in corporate environments: transfer of know-how is not enough to change attitudes and norms.
There is a clear recommendation to increase security competence through the educational system. This seems well-founded and something all nations should consider.