Consider this: the internet is down. The power is out. The water from the tap is no longer safe to drink. The stores are nearly out of groceries, and the banking sector is not working: no mobile payments, no credit cards accepted, no ATMs working. Scenarios like this may sound dystopian, but they are perhaps less far-fetched today than a few years ago. Reports hinting at this have come out of the conflict in Ukraine, as well as more recent cyber attacks targeting the utility sectors in the United States, Europe and the Middle East. There is no other way to put it: as a society, we are vulnerable.
Sweden is taking steps to increase the population's preparedness for a major crisis, up to and including invasion by a foreign power. Norwegian authorities are planning a similar move. This type of communication was common during the Cold War but feels chilling today. We are no longer used to thinking about disasters that target society at this scale.
A major conflict today would almost certainly include operations in the cyber domain, and most likely not only for information gathering. The availability of key services would be targeted, which could lead to power outages, water supply failures and payment system collapse. How would we cope in such a situation?
Most people are not prepared for the “usual channels” to be unavailable. Most organizations are unprepared for disasters like this. This further exacerbates the challenges individuals would be faced with in the event of a crisis, because many businesses are essential for providing services and goods. When these businesses cannot deliver, it means power is unavailable, hospitals close, food is not available in the store and the fancy autonomous public transport systems grind to a halt.
Because of this, it is a civic duty for businesses to plan not only for a rainy day, but for long-term hurricane conditions. When the economy fails to produce the services and goods people depend on, we all suffer. Here are five bullet points for building resilience from the individual level, to our workplaces, and to society as a whole.
Do like the Swedes: keep an emergency supply of food, water, and other necessities at home. Have a plan for how to act in the case of a crisis.
At the workplace, do not stop at a risk assessment for “normal operations”. Identify business continuity challenges and abnormal situations that can occur, including natural disasters, nationwide cyber attacks, terror attacks and a state of war. What services should the organization be able to supply under such conditions? How can a plan be put in place to make that possible?
Planning is smart, but without training its value is very limited. This is why businesses run stress tests, table-top exercises, red-team simulations and the like. These exercises, however, tend to focus on risks under “normal conditions”. Have you tested your business continuity plan the same way? You probably should. Exercise emergency response with no network access, with phone lines down and your staff dispersed.
Do not get paranoid, but also do not be afraid to mention what people typically see as “black swans”. Only by acknowledging that disasters do happen can we prepare to restore functionality to the level we have defined as necessary.
Engage in conversations and organizations that keep you on top of societal risks, and consider how you can contribute. Contributing to the security of society as a whole is the essence of corporate social responsibility.
If we keep contributing during a crisis, we increase our collective ability to handle adversity. This is why business continuity needs to be part of our thinking around corporate social responsibility.
Business continuity and emergency preparedness have become familiar concepts for many businesses – and having such risk management practices in place is expected in many industries. In spite of this, apart from software companies, the inclusion of cybersecurity – preparing to handle serious cyber attacks and security incidents – is far from mature. Many businesses have digitized their value chains to a very high degree without thinking about how this affects their overall risk picture. Another challenge for many businesses that have seen the need to include their digital footprint in their risk management process is that they don't know where to start. That is what this post is about: how do you start to think about emergency preparedness for cyber incidents? If you have a robust process for this in place, this post is not meant for you. This is a “how-to” for those who stand bewildered at the starting line of their crisis management planning process.
Know what you have
Before planning your incident response or emergency preparedness plan, you should have a clear overview of the assets you have that are worth protecting. Creating a detailed asset inventory can be a daunting task. However, for most organizations it is sufficient to identify the key information and organizational assets without aiming for completeness.
What are your main business processes? Identify all of the main processes that need to work in order for your organization to serve its purpose. The breakdown can be of different granularity, but here's an example for an e-commerce business:
Management and leadership
Sales and marketing
Procurement and logistics
For each of the main business processes you have, there will be various types of assets that are necessary to make that process work. Think about what you need in various categories:
Software you need to get the job done
Data that is needed to support the function (if you know what software you depend on, this is often easier to identify)
People whose knowledge and skills the function depends on
Why are we mentioning people here? Often people have knowledge that cannot easily be replaced within the organization, or that would require considerable effort and investment. If such a person disappears, the situation can be hard to deal with. That is why, also from an information security point of view, it is important to know who your key employees are, and put down a plan for what to do if they are not available.
When you have identified these assets, it is a good idea to group them into two categories: critical and non-critical (you can use more than two categories if you want to, but a binary division is usually sufficient). Critical assets are those where a security breach would lead to serious consequences: data leaked, changed in an unauthorized manner, or made unavailable. It is unfortunate if non-critical assets are breached too, but not at a level where the business itself is threatened. The critical assets are your crown jewels – the assets you need to protect as well as you can.
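As a sketch, the asset register can be as simple as a list of records tagged with criticality, so the critical subset can be pulled out when planning protection. All asset names below are made up for illustration:

```javascript
// Minimal asset register sketch; the asset names are illustrative only.
const assets = [
  { name: 'Customer database', process: 'Sales and marketing', critical: true },
  { name: 'Order management system', process: 'Procurement and logistics', critical: true },
  { name: 'Internal wiki', process: 'Management and leadership', critical: false },
];

// The crown jewels are simply the critical subset.
const crownJewels = assets.filter((a) => a.critical).map((a) => a.name);
console.log(crownJewels); // the assets that need the strongest protection
```

Even a structure this simple is enough to make sure critical assets are not overlooked when you later write the response plan.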
Baseline defense: do the small things that matter
Before planning how to respond to a cyber attack, we should introduce some baseline practices that do not depend on criticality or risk assessments. These are practices all organizations should aim to internalize; they significantly reduce the likelihood that a cyber attack would be successful, and they also prepare you to respond to an attack when it happens.
Introduce a security policy and make it known to the organization. Work systematically to make sure the policy is adhered to.
Maintain the asset register (using the process described above for “knowing what you have”). This way you make sure critical assets do not get overlooked.
Include security requirements when selecting suppliers. Do not get breached because a supplier or business partner has weak security practices.
Take regular backups of all critical data. This way, you can restore your data if it becomes unavailable or is destroyed, whether because of a hacker's malicious actions or a hardware failure.
Use firewall rules to deny all traffic that is not needed in your business. Deny all incoming requests, unless there is a specific reason to keep a service available.
Run up-to-date endpoint protection such as antivirus software on all computers.
Keep all of your software up-to-date and patched. Do not forget appliances and IoT devices.
Do not give end users administrative access to their computers.
Give security awareness training to all employees.
With this in place, 80% of the job is done. Now you can focus on the “disaster scenarios” – those where your crown jewels are at risk.
Prepare to defend your assets
You know what assets you have. You know what your crown jewels are. You have your baseline security in place. Now you are ready to take on the remaining risk – responding to attacks and more advanced incidents. Here’s how you prepare for that.
Before you develop your incident response plan, it pays off to create a simple threat model. Your model should describe credible attack patterns. In order to identify such attack patterns, you should think about who the attacker would be, and what their motivation would be. Is it a script kiddie – a person without deep technical knowledge, hacking for fun with tools downloaded from the internet? Is it a cyber crime group hoping to make money from extortion or by selling your intellectual property? Is it a nation-state actor hoping to use your company as a foothold for attacking government assets? Or perhaps it is an insider threat – a dishonest or angry employee attacking their own employer? Likely scenarios depend on your assumptions here.
You don't need a very detailed threat model to gain understanding that can aid your incident response planning. You should think about the phases of the attack:
How is the initial breach obtained? In most cases this would be some form of social engineering, like phishing.
How do they get a foothold and gain persistence? Malware based? Using built-in functions?
How do they get access to the crown jewels? What actions will they perform on the objective?
What are the consequences of the attack for your organization and its stakeholders?
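The phase questions above can be captured in a lightweight data structure, which makes the threat model easy to review and keep updated. The scenario content below is purely illustrative, not taken from any real assessment:

```javascript
// Illustrative threat-model entry following the attack phases in the text.
const threatModel = {
  actor: 'cyber crime group',   // who is attacking
  motivation: 'extortion',      // why they attack
  phases: {
    initialBreach: 'phishing email with a malicious attachment',
    persistence: 'malware-based backdoor on a workstation',
    actionsOnObjective: 'encrypt crown-jewel data and demand ransom',
    consequences: 'service outage, financial loss, reputational damage',
  },
};

// Each phase answers one of the planning questions.
console.log(Object.keys(threatModel.phases));
```

A handful of entries like this, one per credible attacker, is usually enough to inform the incident response plan.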
Having this down, you should start to prepare an incident response plan. Thinking about this in phases too is helpful:
Preparation
Incident detection and escalation
Containment
Eradication
Recovery
Lessons learned
During preparation you should establish who is responsible for incident handling, who should be communicated with, and how suspected incidents should be reported. Include a budget for training and running exercises. Cyber incident response needs to be tested the same way we do fire drills.
Incident detection is difficult. Various reports indicate that the average time from compromise to detection of advanced attacks is somewhere between 3 months and 2 years. There are many ways to detect that something is wrong:
A user notices strange behavior or lack of access
Monitoring of logs and security systems may report unusual signals
A hacker contacts you for a ransom or to state demands
In all cases the company should have a clear process for categorizing potential incidents, verifying whether an incident is real, and making the decision to start incident response.
Containment is about stopping the problem from spreading through the network, and about gathering evidence. Be aware that cutting internet access can sometimes set off pre-programmed destructive routines. Containment should therefore be based on observing the attacker's behavior within the network, on a case-by-case basis.
Eradication is about removing the problem: taking away the persistent access, removing malware, patching security holes. The right way to do this is to format all disks, clean all data, and then restore from original media and trusted backups.
Recovery is about getting back to business: recovering the service at an acceptable level. It is not uncommon to see malware reappear after recovery, so testing in a controlled environment is always good practice, before connecting the restored system to the business network again.
Lessons learned is an important phase. Here an after-action review is done: how could this happen, and what was the reason? Do a root cause analysis. Summarize what worked well in the response and what did not. Make recommendations for changes in practice or policy – and follow up on them.
If you have this down – knowing what your crown jewels are, a solid security baseline, and a risk-based incident response plan – your organization will be much more robust than before. The risk exposure of your organization to cyber threats will be greatly reduced – but do not forget that security is a continuous process: as the threat landscape changes, your security management should too. This is why you need to maintain your threat model and update your response plan.
Container technologies are becoming a cornerstone of development and deployment in many software houses – including where I have my day job. Lately I’ve been creating a small web app with lots of vulnerabilities to use for security awareness training for developers (giving them target practice for typical web vulnerabilities). So I started thinking about the infrastructure: packing up the application in one or more containers – what are the security pitfalls? The plan was to look at that but as it turned out, I struggled for some time just to get the thing running in a Docker container.
First of all, the app consists of three architectural components:
A MongoDB database. During prototyping I used a cloud version at mlab.com. That has worked flawlessly.
A Vue 2.0 based frontend (could be anything, none of the built-in vulnerabilities are Vue specific)
An Express backend primarily working as an API to reach the MongoDB (and a little sorting and such)
So, for packing things up, I started with taking the Express backend and wanting to add that to a container to run with Docker. In theory, the container game should work like this:
Create your container image based on a verified image you can download from a repository, such as Docker Hub. For node applications the typical recommendation you will find in everything from Stack Overflow to personal blogs and even official doc pages from various projects is to start with a Node image from Docker Hub.
Run your Docker image using the command
docker run -p hostPort:containerPort myimage
You should be good to go – and can access the running NodeJS app at localhost:hostPort
So, when we try this, it seems to run smoothly… until it doesn't. The build crashes – what gives?
Building on top of Alpine – a minimal Linux distribution popular for containers because it keeps image sizes small – we try to install the OS-specific build tools required to install the npm package libxmljs. This package is a wrapper for the libxml2 C library (part of the GNOME project), so it needs to set up those bindings locally for the platform it is running on; hence it needs a C compiler and a version of Python 2.7. To install packages on Alpine one uses the apk package manager. The packages are obviously there, so why does it fail?
EDIT: use the official node:8 image, not Alpine, as Alpine does not play well with glibc dependencies (such as libxml2).
FROM node:8
COPY . .
# Dependencies for node-gyp / libxmljs
RUN apt-get update && apt-get install -y make gcc g++ python
RUN npm install --production
CMD ["node", "index.js"]
After adding the --no-cache option to the apk command the libraries installed fine. But running the container still led to a crash.
After a few cups of coffee I found the culprit: I had copied the node_modules folder from my Windows working folder. Not a good idea. So, adding a .dockerignore file before building the image fixed it. That file includes this:
node_modules
backlog
The backlog file is just a debug log. After doing this, and building again: Success!
Now running the image with
docker run -p 9000:9000 -d my_image_name
gives us a running container that maps the container's exposed port 9000 to port 9000 on the host. I can check this in my browser by going to localhost:9000
OK, so we’re up and running with the API. Next tasks will be to set up separate containers for the frontend and possibly for the database server – and to set up proper networking between them. Then we can look at how many configuration mistakes we have made, perhaps close a few, and be ready to start attacking the application (which is the whole purpose of this small project).
Phishing is still the most common initial attack vector. Mass mailed spam is now taking cues from targeted campaigns, improving conversion rates through personalization and the use of seemingly authoritative content.
Scammers are getting better at targeting. Sharpen your defenses today – including your awareness training!
Here are some indicators that can help identify phishing:
Sender: the name and the email address don’t match. Your colleague is probably not emailing you from someone else’s Gmail account or a Mexican car dealership (unless you are in the car sales business in Mexico)
The link in the email leads somewhere other than what the link text says. Hover over it to see the real URL – or press and hold on a touch device. Here's an example: https://bbc.co.uk
The logo in the email is hosted on a different domain than the email address of the sender, and it is not a CDN or cloud storage bucket.
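As a sketch of the link-mismatch indicator above, a mail filter or training tool could flag links whose visible text is a URL on a different host than the actual target. This uses Node's built-in URL class; the example domains are made up:

```javascript
// Flag links whose visible text looks like a URL on a different host
// than the real href target (a common phishing pattern).
function looksLikePhishingLink(visibleText, href) {
  let shownHost;
  try {
    shownHost = new URL(visibleText).hostname;
  } catch (e) {
    return false; // visible text is not a URL; nothing to compare
  }
  const realHost = new URL(href).hostname;
  return shownHost !== realHost;
}

// The link text says bbc.co.uk but actually points somewhere else:
looksLikePhishingLink('https://bbc.co.uk', 'https://evil.example.com'); // true
// Text and target agree, so no flag:
looksLikePhishingLink('https://bbc.co.uk', 'https://bbc.co.uk/news');   // false
```

A real filter would also need to handle lookalike domains and subdomain tricks, but the host comparison alone catches the crudest campaigns.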
Training people to look for these indicators will help reduce damage from the more advanced phishing campaigns!
The first few days of 2018 have been busy for security professionals and IT admins. As Ars Technica put it: every modern processor has “unfixable” security flaws. There are fixes – sort of. But they come with a cost: computers will run up to 30% slower, depending on the type of work being performed. A lot of the heavy lifting performed in large data centers is related to data processing in databases. Unfortunately, this is the type of workload that looks to be hit hardest by the fixes for the Meltdown and Spectre vulnerabilities. A selection of tests shows that a performance reduction of about 20% is realistic (see details at phoronix.com). This is equivalent to increasing the power consumption of data centers by 25%. That is a lot of energy! With assumed growth in data center consumption of 5% per year we should expect the total electricity consumption by data centers globally to hit 480 TWh this year.
A 2013 estimate of the US use of data centers put the 2020 electricity use at 139 TWh, and the Independent reported that the 2015 global data center energy consumption was 416 TWh. For comparison, the total electricity generation globally is approximately 25000 TWh, so data center usage is not insignificant – and it is growing fast.
Our energy mix is still heavily dependent on fossil fuels, mainly coal and natural gas. Globally about 40% of our electricity generation is done based on coal, and another 30% is based on petroleum, primarily natural gas. In OECD countries coal use is on the decline but demand growth for electricity in particularly the BRICS economies still outnumbers the achievable growth in renewable energy generation in these areas. This means that short-term increases in global electrical energy generation will still be heavily influenced by fossil fuels – meaning coal and natural gas, and to some extent oil.
According to an article on curbing greenhouse gas emissions in the Washington Post, average coal fired power plants in the United States emits the equivalent of 1768 lb CO2/MWh, whereas for natural gas the average number would be around 800-850 lb/MWh. This corresponds to 800 g/kWh and 385 g/kWh for coal and natural gas, respectively.
Based on these numbers we can estimate the approximate extra CO2 emissions caused by the Meltdown and Spectre fixes. If the unadjusted data center electricity consumption for 2018 is estimated at 480 TWh, and the bug fixes lead to a 25% increase in consumption, we are talking about an extra energy demand of 120 TWh for running our data centers. If 40% of that energy is generated by coal fired power plants, 30% by natural gas, and the remainder by nuclear and renewable sources, we are looking at 48 TWh extra from coal and 36 TWh extra from natural gas. The combined expected “extra” CO2 emissions would be 52 million tonnes!
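The back-of-the-envelope arithmetic above can be reproduced in a few lines, using only the figures quoted in the text:

```javascript
// Rough estimate of extra CO2 from the Meltdown/Spectre fixes,
// using the figures quoted in the text.
const baseConsumptionTWh = 480;  // estimated 2018 data center electricity use
const extraFraction = 0.25;      // ~20% slowdown => ~25% more energy
const extraTWh = baseConsumptionTWh * extraFraction; // 120 TWh

const coalTWh = extraTWh * 0.4;  // 48 TWh generated from coal
const gasTWh = extraTWh * 0.3;   // 36 TWh generated from natural gas

// Emission intensities from the text (grams CO2 per kWh).
const coalGPerKWh = 800;
const gasGPerKWh = 385;

// 1 TWh = 1e9 kWh; grams -> million tonnes is a further factor of 1e12.
const extraMt = (coalTWh * coalGPerKWh + gasTWh * gasGPerKWh) * 1e9 / 1e12;
console.log(extraMt.toFixed(1)); // ≈ 52.3 million tonnes CO2
```

The nuclear and renewable share is assumed to add no emissions, which is why only the coal and gas terms appear in the sum.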
That is the same as all the climate gas emissions from the Norwegian economy in 2016 – including the entire petroleum sector (source: SSB). Another yardstick for how huge this number is: it corresponds to 1/3 of all emissions from U.S. aviation (source: EPA). Or – it would correspond to driving the largest version of a Hummer with a 6.2l V8 engine 3 million times around the earth (source: energy.eu).
This is a huge increase in emissions due to data processing, because the CPU optimizations we have come to rely on since 1995 cannot be used safely. Also note that we have only included data centers in our estimate; this excludes all the PCs, Macs and smartphones that could see performance hits too – meaning we'd have to charge our tech toys more often, consuming even more electricity.
Uploading source code with hardcoded credentials to GitHub (Uber)
Using weak passwords for administrator access. Really weak passwords. Like admin/admin. (Equifax)
Failure to implement authentication and access control in a robust way (Filesilo – with plain text password storage)
Allowing people to sign up without any verification of identity or ownership of e-mail address or other information used to sign up (Lots of second-tier sites)
That’s about the service itself, but what about the sign-up page and the sign-up process? Here’s a checklist for your signup page – and similar use cases.
Sanitize your inputs (protects your database from injection attacks, and your users from stored cross-site scripting attacks)
Validate your inputs (protects against injection attacks, user errors and garbage content)
Verify e-mail addresses, phone numbers etc (protects your users against abuse, and your service against spam)
Handle all exceptions: unhandled exceptions can cause all sorts of trouble – ranging from poor user experience (due to unhelpful error messages) to information leaks exposing sensitive data from your database
Treat secrets as secrets – use strong cryptographic controls to protect them. Hash all passwords with a dedicated password hashing algorithm (such as bcrypt) before storing them.
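As a minimal sketch of the “validate your inputs” item, a signup handler could run a check like this before anything touches the database. The field names and rules are illustrative, not a complete defence:

```javascript
// Illustrative input validation for a signup form; the exact fields and
// rules would depend on the application.
function validateSignup({ username, email, password }) {
  const errors = [];

  // Reject garbage and dangerous characters before they reach the database.
  if (!/^[a-zA-Z0-9_]{3,30}$/.test(String(username))) {
    errors.push('username must be 3-30 alphanumeric characters');
  }
  // A loose shape check; real verification is done by emailing a link.
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(String(email))) {
    errors.push('email address is not valid');
  }
  if (String(password).length < 12) {
    errors.push('password must be at least 12 characters');
  }
  return { ok: errors.length === 0, errors };
}

validateSignup({ username: 'alice_01', email: 'alice@example.com',
                 password: 'correct horse battery' }).ok; // true
validateSignup({ username: '<script>', email: 'nope', password: 'x' }).ok; // false
```

Validation like this complements – but does not replace – sanitization, parameterized queries and server-side email verification.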
OK, that was a brief checklist – but that’s by far not the most important part of creating a secure sign-up process, or any process for that matter. The key to secure software is to follow a good workflow that takes security into account. Here’s how I like to think about this process.
Whenever you are building something, the first thing that comes to mind is “functionality”. We tend to build functionality first and then try to bolt security on afterwards – when it should really be the other way around. Although a secure development lifecycle includes a lot of “other things” like competence development, team organization and so on, let's start with the backlog of functionality we want to build – and focus on the actual development, not the organization around it. Having a list of functionality is a good start for thinking about threats, and then security. Let's take a signup page as the starting point – here are a few things we'd like the page to have:
Easy signup with username, password, e-mail
Good privacy control: data leaks should be unlikely, and if they occur they should not reveal sensitive info
So, how can we build a threat model based on this? We’d probably like some more information about the context, and the technology stack being used.
Who are the stakeholders (users, owners, developers, customers, competitors, etc)?
Who could be interested in attacking the site? (Script kiddies? Cybercriminals? Hacktivists?)
What is the value of the service to the various stakeholders?
If we know the answers to these questions, it is easier to understand the threat landscape for our web page, and the signup component in particular. We also need to know something about the technology stack. For example, we could build our signup page based on (it could be anything but lots of websites use these technologies today):
MongoDB for data storage in the cloud
Nodejs for building a RESTful API backend
Vuejs for frontend
So, having this down, we see the contours of how things work. We can start building a threat model that looks at people, infrastructure, software, etc.
People risk: someone signs up with a different persons credentials and pretends to be this person (a personal risk on a social media platform for example)
Software risk: user inputs used to conduct injection attacks in order to leak data about users (MongoDB can be attacked in pretty similar manner as an SQL database)
Software risk: secrets are stored in unprotected form and they are leaked. User credentials sold on the dark web or posted to a pastebin.
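The injection risk above deserves a concrete sketch. With a Node/Express/MongoDB stack, an attacker can submit operator objects such as { "$ne": null } in place of string values; a minimal defence is to type-check the inputs and match values literally. Field names here are illustrative:

```javascript
// Sketch of a NoSQL-injection defence: never pass raw request input
// straight into a MongoDB query. Field names are illustrative.
function buildLoginQuery(input) {
  // An attacker may send { username: { $ne: null } } instead of a string,
  // which would match any user. Reject non-string values outright.
  if (typeof input.username !== 'string' || typeof input.passwordHash !== 'string') {
    throw new Error('invalid input type');
  }
  // Wrapping values in $eq ensures they are matched literally,
  // never interpreted as query operators.
  return {
    username: { $eq: input.username },
    passwordHash: { $eq: input.passwordHash },
  };
}

buildLoginQuery({ username: 'alice', passwordHash: 'abc123' }); // safe literal query
// buildLoginQuery({ username: { $ne: null }, passwordHash: 'x' }) -> throws
```

Schema validation libraries do the same job more thoroughly, but the principle is identical: user input is data, never query structure.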
Creating a big list of threats like this, we can rank them based on how serious the impact would be, and how likely they are. Based on this we create security requirements, that are then added to the backlog. These security requirements should then also be added to the testing plans (whether it is unit testing or integration testing), to make sure the controls actually work.
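The ranking step can be as simple as an impact-times-likelihood score. The entries and the 1-5 scale below are made up for illustration:

```javascript
// Illustrative threat list with 1-5 scores for impact and likelihood.
const threats = [
  { name: 'credential stuffing', impact: 3, likelihood: 4 },
  { name: 'NoSQL injection leaking user data', impact: 5, likelihood: 3 },
  { name: 'defacement by script kiddie', impact: 2, likelihood: 2 },
];

// Rank by a simple impact x likelihood score, highest risk first.
const ranked = [...threats].sort(
  (a, b) => b.impact * b.likelihood - a.impact * a.likelihood
);
console.log(ranked[0].name); // the threat to write security requirements for first
```

The top entries then become security requirements on the backlog, each with a matching test in the unit or integration test plan.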
Testing and development are iterative – but at some point it seems we have covered the backlog and passed all tests. It is QA time! In many development projects QA will focus on functionality first, then performance. That is of course very important – but if users cannot trust the software, they will leave. That is why security should be a part of QA as well. This means more extensive testing, typically adding static analysis, code reviews and vulnerability scans to the QA process. Normally you would find something here, ranging from “small stuff” to “big stuff”. If the quality is not sufficient – say performance is poor or functionality fails – one would go back to the backlog and update for the next sprint. The same should be done for security.
When the backlog is updated – we need to update our threat model as well – also informed by what we’ve learned in the previous sprint.
Following a process like this will not give you a 100% bug-free, super-secure software – but it will certainly help. And by tracking some metrics you can also measure if quality improves over time. Especially static testing gives good metrics for this – the number of code defects, vulnerabilities or not, should decrease from one sprint to the next.
That's why we need a good process: this way we can learn to build better things over time. Run fast and break things – but repair them fast as well. This way it is possible to combine innovation with good security. Innovation is of limited use unless we can trust its results.
This has really been the year of marketing and doomsday predictions for companies that need to follow the new European privacy regulations. Everyone from lawyers to consultants and numerous experts of every breed is talking about what a big problem it will be for companies to follow the new regulations, and the only remedy would be to buy their services. Yet, they give very few details on what those services actually are.
Let’s start with the new regulation; not the “what to do” but the “why it is coming”. The privacy conscious search engine duckduckgo.com gives the following definition:
Privacy: The state of being free from unsanctioned intrusion: a person’s right to privacy.
Who hasn’t felt his or her privacy intruded upon by marketing networks and information brokers online? The state of modern business is the practice of tracking people, analyzing their behavior and seeking to influence their decisions, whether it is how they vote in an election, or what purchasing decisions they make. In the vast majority of cases this surveillance based mind control activity occurs without conscious consent from the people being tracked. It seems clear that there is a need to protect the rights to privacy and freedoms of individuals in the age of tech based surveillance.
This is what the new regulation is about – the purpose is not to make life hard for businesses but to make life safe for individuals, allowing them to make conscious decisions rather than being brain washed by curated messages in every digital channel.
Still, for businesses, this means that things must change. There may be a need for consultants, but there is also a need for practical tools. The way to deal with the GDPR that all businesses should follow is:
Understand why this is necessary
Learn to see things from the end-user or customer perspective
Learn the key principles of good privacy management
Create an overview of what data is being gathered and why
With these 4 steps in place, it is much easier to sort the good advice from the bad, and the useful from the wasteful.
Most businesses are used to thinking about risk in terms of business impact. What would the consequence to our business be, and how likely is it to happen? That will still be important after May 2018, but this is not the perspective the GDPR takes. If we are going to make decisions about data protection for the sake of protecting privacy, we need to think about the risk the data collection and processing exposes the data subjects to (data subject is GDPR speak for people you store data about).
What consequences could the data subjects face from the processing itself? Are you using it for profiling or some sort of automated decision making? This is the usual “privacy concern” – and rightly so. Ever felt it is creepy how marketers track your digital movements?
Another perspective we need to take: what can the consequences of data abuse be for individuals? This can be a data breach, like the Equifax and Uber stories we've heard a lot about this fall, or it can be something else: an insider abusing your data, a hacker changing the data so that automated decisions don't go your way, or the data becoming unavailable, stopping you from using a service or getting access to something. The personal consequences can be financial ruin, a damaged reputation, family troubles or perhaps even divorce.
A key principle in the GDPR is data minimization: you should only process and store data when you have a good reason and a legal basis for doing so. Practicing data minimization means less data that can be abused, and thereby a lower threat to the freedoms and rights of the people you process data about. This is perhaps the most important principle of good privacy practice: avoid processing or storing data that can be linked to individuals as far as you can (while still getting things done).
Surprisingly, many companies have no clue what personal data they are storing and processing, who they share it with, and why they do it. The starting point for achieving good privacy practice, good karma – and even GDPR compliance – is knowing what you have and why.
From scare-speak to tools and practice
We’ve had a year of data breaches, and we’ve had a year of GDPR themed conference talks, often with a focus on potential bankruptcy-inducing fines, cost of organizational changes and legal burdens. Now is the time for a more practical discussion; getting from theory to practice. In doing this we should all remember:
There are no GDPR experts: the regulation has not yet come into force, and both regulatory oversight and practical implementations are still in their infancy
The regulation is risk-based: this means the data controllers and processors must take ownership of the risk and governance processes.
Documenting compliance should be a natural part of performing the thinking and practical work required to safeguard the privacy of customers, users, employees, visitors and whatever category of people you process data related to.
We need practical guidance documents, and we need tools that make it easier to follow good practice, and keep compliance documentation alive. That’s what we should be discussing today – not fines and stories about monsters eating non-compliant companies.
If you are like most people, you don’t read privacy statements. They are boring, often generic, and seem to be created to protect businesses from lawsuits rather than to inform customers about how their data is handled. Still, when you know what to look for, such statements help you make up your mind about whether it is OK to use a product.
Even so, there is much to be learned from looking at a privacy statement. If you are like most people you are not afraid of sharing things on the internet, but you still don’t want the platforms you use to abuse the information you share. In addition, you would like to know what you are sharing. It is obvious that you are sharing a photo when you include it in a Facebook status update – but is it obvious that you are sharing your phone number and location when you are using a browser add-on? The former we are generally OK with (it is our decision to share); the latter not so much – we are typically tricked into sharing such information without even knowing that we do it.
So-called anonymous information: approximate geo-location, hardware specs, browser type and version, date of software installation (their add-on I presume), the date you last used their services, operating system type and version, OS language, registry entries (really??), URL requests, and time stamps.
Personal information: IP address, name, email, screen names, payment info, and other information we may ask for. In addition you can sign up with your Facebook profile, from which they will collect usernames, email, profile picture, birthday, gender, preferences. When anonymous information is linked to personal information it is treated as personal information. (OK….?)
Other information: information that is publicly available as a result of using the service (their so-called VPN network) may be accessed by other users as a cache on your device. This is basically your browser history.
125 million users accept that their personal data is being harvested, analysed and shared at will by a company that provides a “VPN” with no encryption and that accepts “password” as a password when signing up for its service.
So, here’s the take-away. When you read a privacy statement, look for:
What they collect
How they collect it
What they use the information for
With whom they share the information
How they secure the information
Think about what this means for the things that are important to your privacy. Do you accept that they do the stuff they do?
What is the worst-case use of that information if the service provider is hacked? Identity theft? Incriminating cases for blackmail? Political profiling? Credibility building for phishing or other scams? The more information they gather, the worse the potential impact.
Finally, never trust someone claiming to sell a security product that obviously does not follow good security practice. No SSL? Accepting weak passwords? Take your business elsewhere – it is not worth the risk.
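To make the weak-password point concrete, here is a minimal sketch of the kind of acceptance check a responsible signup flow would run. The length threshold and the word list are illustrative assumptions, not any vendor’s actual rules:

```python
# Hypothetical minimal password-acceptance check - the kind of rule a
# service that accepts "password" as a password clearly does not have.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def acceptable_password(pw: str, min_length: int = 12) -> bool:
    """Reject short passwords and entries from a common-password list."""
    if len(pw) < min_length:
        return False
    if pw.lower() in COMMON_PASSWORDS:
        return False
    return True
```

A real service would go further (rate limiting, breach-corpus lookups), but even this trivial gate would have stopped the example above.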
Insurance relies on pooled risk: when a business is exposed to a risk it feels cannot be managed with internal controls, the risk can be transferred to the capital markets through an insurance contract. For events that are unlikely to hit a very large number of insurance customers at once, this model makes sense. The pooled risk allows the insurer to create capital gains on the premiums paid by the customers, and the customers get their financial losses covered in a claim situation. This works very well in many cases, but insurers will naturally try to limit their liabilities through “omissions clauses”: things that are not covered by the insurance policy. The omissions will typically include catastrophic systemic events that the insurance pool would not have the means to cover, because a large number of customers would be hit simultaneously. They will also include conditions with the individual customer that void the coverage – often referred to as pre-existing conditions. A typical example of the former is damage due to acts of war or natural disasters; for these events, the insured would have to buy extra coverage, if it is offered at all. An example of the latter omission type, the pre-existing condition, would be diseases the insured had before entering into a health insurance contract.
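The pooling logic can be sketched with made-up numbers (all figures below are illustrative assumptions, not market data): with independent claims the premiums cover the payouts, but a systemic event that hits every customer at once wipes out the pool.

```python
# Illustrative risk pool: 1,000 customers each pay a 1,000 premium.
# The numbers are assumptions chosen only to show the mechanics.
customers = 1000
premium = 1000          # paid by each customer per year
payout = 50000          # paid out per claim
p_independent = 0.01    # in a normal year, 1% of customers file a claim

pool = customers * premium                               # total collected
normal_year_claims = customers * p_independent * payout  # uncorrelated losses
systemic_event_claims = customers * payout               # everyone hit at once

print(pool - normal_year_claims)     # positive: the pool survives
print(pool - systemic_event_claims)  # deeply negative: the pool is insolvent
```

This is exactly why acts of war and large-scale cyber events end up in the omissions clauses: the correlated loss dwarfs the premium base.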
How does this translate into cyber insurance? There are several interesting aspects to think about, in both omissions categories. Let us start with the systemic risk – what happens to the insurance pool if all customers issue claims simultaneously? Each claim typically exceeds the premiums paid by any single customer. Therefore, a cyberattack that spreads across large portions of the internet is hard to insure while keeping the insurer’s risk under control. Take for example the WannaCry ransomware attack in May: within a day, more than 200,000 computers in 150 countries were infected. The Petya attack that followed in June caused similar reactions, but the number of infected computers is estimated to be much lower. While WannaCry still looks like a poorly implemented cybercrime campaign intended to make money for the attacker, the Petya ransomware seems to have been a targeted cyberweapon used against Ukraine; the rest was most likely collateral damage. But for Ukrainian companies, the government and computer users this was a major attack: it took down systems belonging to critical infrastructure providers, it halted airport operations, it affected the government, it took hold of payment terminals in stores; the attack was a major threat to the entire society. What could a local insurer have done if it had covered most of those entities against any and all cyberattacks? It would most likely not have been able to pay out, and would have gone bankrupt.
A case that came up in security forums after the WannaCry attack was the “pre-existing condition” in cyber insurance. Many policies include “human error” in the omissions clauses – basically saying that you are not covered if you are breached through a phishing e-mail. Some policies also list “unpatched systems” as an omission: if you have not patched, you are not covered. Not all policies are like that, and underwriters will typically gauge a firm’s cyber security maturity before entering into an insurance contract covering conditions that are heavily influenced by security culture. These are risks that are hard to include in quantitative risk calculations; the data are simply not there.
Insurance is definitely a reasonable control for mature companies, but there is little value in paying premiums if the business doesn’t satisfy the omissions clauses. For small businesses it will pay off to focus on the fundamentals first, and then to expand with a reasonable insurance policy.
For insurance companies it is important to find reasonable risk pooling measures to better cover large-scale infections like WannaCry. Because this is a serious threat to many businesses, the absence of meaningful insurance products will hamper economic growth overall. It is also important for insurers to get a grasp on individual omissions clauses, because in cyber risk management the “pre-existing condition” thinking is flawed: security practice and culture are dynamic and evolving, which means that coverage omissions should be based on the current state rather than a survey conducted prior to policy renewal.
What followed pretty much resembled the WannaCry media panic messages: things will be encrypted, there is nowhere to hide and society is going to crash hard… Of course, if somebody encrypts your files, and this spreads throughout your network, it is pretty close to an ocean of pain – at least if you are unprepared. So, instead of an analysis of the Petya virus, or some derivative of it, let’s get down to what we can do to prepare for and survive a ransomware attack. Because it really isn’t that hard. Really.
A security baseline means the security controls you apply irrespective of risk level: the things that everybody should do, and that will actually stop almost every cyberattack. Yes, almost every attack – say 90% of them. And it doesn’t even require you to buy a lot of expensive consulting services or other snake oil. Here we go, this should be the minimum baseline for everyone:
Patch your software as fast as you can whenever a new security patch comes out. Operating systems normally do this automatically if you do not configure your system otherwise. Sometimes organizations need to check compatibility for critical systems, and such, but the main rule is: patch everything as fast as you can.
Do not allow users to perform regular work using an account with administrator rights. Most users don’t even need admin rights at all, but if they do, give them two accounts. Perform work using a standard account with limited privileges.
Run a firewall that is configured to block all incoming traffic (unless you need it).
If you run an organization: use application whitelisting. Do not allow execution of unauthorized code, or at least block code running from unauthorized locations (like USB media or downloaded email attachments).
Backup everything. See below for details. This doesn’t stop any attacks but it saves the day when you are under siege. It is like a secret superweapon that more or less guarantees criminals won’t get their payday.
None of these will require you to buy any new software, or new services. So just do it – it will reduce the chance of having a very bad day by 90%.
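The whitelisting-by-location point in the baseline can be illustrated with a small sketch. The allowed directories are assumptions, and real enforcement is done by OS mechanisms such as AppLocker rather than a script; this only shows the decision rule:

```python
from pathlib import PurePosixPath

# Hypothetical allowed execution locations; anything outside these
# (USB media, download folders) would be blocked. Note this simplified
# sketch does not resolve symlinks or ".." segments.
ALLOWED_DIRS = [PurePosixPath("/usr/bin"),
                PurePosixPath("/usr/local/bin"),
                PurePosixPath("/opt/approved")]

def execution_allowed(program: str) -> bool:
    """Allow a program to run only if it lives under a whitelisted directory."""
    return any(parent in ALLOWED_DIRS
               for parent in PurePosixPath(program).parents)
```

A payload dropped on a USB stick or into a download folder simply has no whitelisted parent directory, so it never runs.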
Most cyber-attacks spread through some form of social engineering, and in most cases this is an email with a malicious link or attachment. Train your people to spot the danger, and get it into your organization’s culture that file sharing is not done via email attachments. Provide them with real collaboration tools instead. This would further reduce the chance of a very bad day by another 90%.
If you want to be sure, you can scrub attachments and disable links in emails – but people may feel that is a little extreme and start using private email accounts instead, which is completely outside of the organization’s control, so only do this if you really have a compliance culture in place. Most organizations don’t.
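If you do go down the attachment-scrubbing route, the core of it is a simple extension check. A minimal sketch – the extension list is an illustrative assumption, and production filters also inspect file content, not just names:

```python
# Extensions commonly abused for malware delivery (illustrative list only).
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".docm", ".jar"}

def should_quarantine(filename: str) -> bool:
    """Flag attachments with executable or macro-enabled extensions."""
    name = filename.lower()
    return any(name.endswith(ext) for ext in RISKY_EXTENSIONS)
```

Note the lower-casing: attackers routinely use names like `invoice.PDF.exe` to slip past naive, case-sensitive filters.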
Backups and restore testing
Ok, so you can reduce the likelihood of getting hit, but that only goes so far. Sooner or later you will reach a day when you end up having to recover systems and remove a virus infection. Ransomware is icky because it encrypts your files and makes them useless, so in most of these cases your AV program cannot save you. So how can you avoid paying criminals and thereby funding money laundering, human trafficking, terrorism and the drugs trade? Here’s how.
Backup your data with reasonable frequency and retention, and verify the backups. If you run a backup of your files every few hours, do an offline/offsite backup every night, keep the rolling (online) backup for 30 days and the offline backups for 180 days, it will be very hard to put you in a tight spot. If you generate important data very fast, increase the backup frequency.
Make sure you also verify backup integrity. An easy way to stay safe is to make binary disk-image backups for the offline backup, and to store a hash of each image separately in a different secure offline location. This way you can verify that your offline backups stay exactly as they were when you made them, by checking the hash. Do the same for the rolling backup – this lets you check whether a cryptovirus has changed something in your backup.
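The hashing step can be as simple as this sketch. File paths here are placeholders; the point is computing the hash at backup time, storing it elsewhere, and re-checking it later:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a (possibly huge) backup image through SHA-256 in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_unchanged(image_path: str, stored_hash: str) -> bool:
    """Compare the image against the hash stored offline at backup time."""
    return sha256_of(image_path) == stored_hash
```

Run `backup_unchanged` on the rolling backup periodically: if a cryptovirus (or anything else) has touched the image, the hash no longer matches.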
Many companies back up their data but never test whether they are able to restore it. To be sure that everything works the day the shit hits the fan, do regular restore testing. Try to restore your system from scratch using various backups to make sure everything works as it should. If it doesn’t, review your backup practice and find out what you need to change to make it work.
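Part of restore testing can be automated. A sketch, assuming you keep a manifest of per-file hashes made at backup time (the manifest format is an illustrative assumption):

```python
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """SHA-256 of a single restored file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_restore(restore_dir: str, manifest: dict) -> list:
    """Return relative paths that are missing or differ from the manifest."""
    problems = []
    root = Path(restore_dir)
    for rel_path, expected in manifest.items():
        f = root / rel_path
        if not f.is_file() or file_hash(f) != expected:
            problems.append(rel_path)
    return problems
```

An empty result means the restore matches what you backed up; anything in the list is a file you would have lost for real.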
Response: security monitoring, escalation and crisis intervention teams
In addition to these technical things you need a response team. The team should be ready to respond in a well-prepared and structured way. Typically, you would go through a series of steps:
Identify the threat and classify it as incident or not.
Contain the problem. Make sure it does not spread (disconnect from the network if feasible).
Collect evidence. Create multiple binary images of the infected system, and store hashes of them. Some you will use for forensic analysis, some are collected as evidence and are not to be touched.
Eradicate the attacker from your system. Normally this means formatting everything and restoring from a safe backup.
Test your restored system. Any signs of reinfection or problems? If not, bring it online step by step, one server or computer at a time. Monitor closely for strange behavior.
Lessons learned: what did you do well, what should you have done differently? Collect experiences and share with your peers. This is the way we learn. This is what we should be better at than the bad guys. I’m not entirely sure we are better than them at shared learning, though.
Would it have helped in the cases of the Petya mutant and WannaCry?
You bet! First of all, WannaCry only worked on computers running either end-of-life versions of Windows or unpatched versions of newer operating systems. Patching would have kept everyone safe.
What about Petya? The attack is still ongoing. It spreads using the EternalBlue exploit (which the NSA wrote and lost), and Microsoft issued a patch for it in March: https://technet.microsoft.com/en-us/library/security/ms17-010.aspx. In other words, if people had followed good baseline security practice, they most likely wouldn’t have a problem now (you can never be entirely sure there isn’t another zero-day, but probably not).
So, who’s been affected? Just small and unknown companies? Nope. Here are a few examples:
Rosneft (a Russian oil giant)
Ukraine: Government, power companies, airports, supermarkets, and also the Chernobyl nuclear power plant. That one, yes.
Maersk: one of the largest shipping companies in the world
To remember what to do: please keep this miniposter handy.