The first few days of 2018 have been busy for security professionals and IT admins. As Ars Technica put it: every modern processor has “unfixable” security flaws. There are fixes – sort of. But they come at a cost: computers will run up to 30% slower, depending on the type of work being performed. A lot of the heavy lifting in large data centers is database processing, and unfortunately this is the kind of workload that appears to be hit hardest by the fixes for the Meltdown and Spectre vulnerabilities. A selection of tests suggests that a performance reduction of about 20% is realistic (see details at phoronix.com). Doing the same amount of work with 20% less throughput requires roughly 25% more machine time, which is equivalent to increasing the power consumption of data centers by about 25%. That is a lot of energy! With an assumed growth in data center consumption of 5% per year, we should expect the total electricity consumption of data centers globally to hit 480 TWh this year.
A 2013 estimate put US data center electricity use at 139 TWh by 2020, and the Independent reported that global data center energy consumption in 2015 was 416 TWh. For comparison, total electricity generation globally is approximately 25,000 TWh, so data center usage is not insignificant – and it is growing fast.
Our energy mix is still heavily dependent on fossil fuels, mainly coal and natural gas. Globally, about 40% of electricity generation is based on coal, and another 30% on oil and gas, primarily natural gas. In OECD countries coal use is on the decline, but demand growth for electricity, particularly in the BRICS economies, still outpaces the achievable growth in renewable generation in those areas. This means that short-term increases in global electricity generation will still be heavily influenced by fossil fuels – meaning coal and natural gas, and to some extent oil.
According to an article on curbing greenhouse gas emissions in the Washington Post, the average coal-fired power plant in the United States emits the equivalent of 1,768 lb CO2/MWh, whereas for natural gas the average is around 800-850 lb/MWh. This corresponds to roughly 800 g/kWh for coal and 385 g/kWh for natural gas.
Based on these numbers we can estimate the extra CO2 emissions caused by the Meltdown and Spectre fixes. If the unadjusted data center electricity consumption for 2018 is estimated at 480 TWh, and the bug fixes lead to a 25% increase in consumption, we are talking about an extra energy demand of 120 TWh for running our data centers. If 40% of that energy is generated by coal-fired power plants, 30% by natural gas, and the remainder by nuclear and renewable sources, we are looking at an extra 48 TWh to be produced from coal and 36 TWh from natural gas. The combined expected “extra” CO2 emissions would be about 52 million tonnes!
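To make the arithmetic easy to check, here is the back-of-the-envelope calculation written out as a short script (the code is just used as a calculator; all figures are the ones quoted above):

```typescript
// Back-of-the-envelope check of the numbers above: 25% extra consumption on
// 480 TWh, with 40% of the extra energy from coal and 30% from natural gas.
const baselineTWh = 480;
const extraTWh = baselineTWh * 0.25;   // 120 TWh of extra demand

const coalTWh = extraTWh * 0.4;        // 48 TWh generated from coal
const gasTWh = extraTWh * 0.3;         // 36 TWh generated from natural gas

const coalGramsPerKWh = 800;           // ~1768 lb CO2/MWh
const gasGramsPerKWh = 385;            // ~850 lb CO2/MWh

// 1 TWh = 1e9 kWh, and 1 million tonnes = 1e12 g
const extraMegatonnesCO2 =
  (coalTWh * 1e9 * coalGramsPerKWh + gasTWh * 1e9 * gasGramsPerKWh) / 1e12;

console.log(extraMegatonnesCO2.toFixed(1)); // prints "52.3"
```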
That is the same as all the greenhouse gas emissions from the Norwegian economy in 2016 – including the entire petroleum sector (source: SSB). Another yardstick for how huge this number is: it corresponds to a third of all emissions from U.S. aviation (source: EPA). Or – it would correspond to driving the largest version of a Hummer, with its 6.2 l V8 engine, 3 million times around the earth (source: energy.eu).
This is a huge increase in emissions from data processing, simply because the CPU optimizations we have come to rely on since 1995 can no longer be used safely. Note also that we have only included data centers in this estimate; it excludes all the PCs, Macs and smartphones that may see performance hits too – meaning we would have to charge our tech toys more often, consuming even more electricity.
Uploading source code with hardcoded credentials to GitHub (Uber)
Using weak passwords for administrator access. Really weak passwords. Like admin/admin. (Equifax)
Failure to implement authentication and access control in a robust way (Filesilo – with plain text password storage)
Allowing people to sign up without any verification of identity or ownership of e-mail address or other information used to sign up (Lots of second-tier sites)
That’s about the service itself, but what about the sign-up page and the sign-up process? Here’s a checklist for your signup page – and similar use cases.
Sanitize your inputs (protects your database from injection attacks, and your users from stored cross-site scripting attacks)
Validate your inputs (protects against injection attacks, user errors and garbage content)
Verify e-mail addresses, phone numbers etc (protects your users against abuse, and your service against spam)
Handle all exceptions: unhandled exceptions can cause all sorts of trouble – ranging from poor user experience (unhelpful error messages) to information leaks exposing sensitive data from your database
Treat secrets as secrets – use strong cryptographic controls to protect them. Hash (and salt) all passwords before storing them (a minimal sketch follows this list).
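To make the checklist concrete, here is a minimal sketch of a signup handler that validates its inputs and stores only a salted password hash. It assumes a Node.js/Express backend with the npm bcrypt package; the field names, limits and the saveUser helper are illustrative, not prescriptive:

```typescript
// Minimal signup sketch: validate inputs, hash the password, never store it in
// plain text. Assumes Express and the npm "bcrypt" package.
import express from "express";
import bcrypt from "bcrypt";

const app = express();
app.use(express.json());

const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

// Placeholder for your data layer (hypothetical); it should also trigger
// an e-mail verification message before the account is activated.
async function saveUser(user: { username: string; email: string; passwordHash: string }) {
  /* insert into your database here */
}

app.post("/signup", async (req, res) => {
  const { username, email, password } = req.body ?? {};

  // Validate: reject non-strings, garbage content and obviously weak passwords.
  if (typeof username !== "string" || !/^[A-Za-z0-9_]{3,30}$/.test(username)) {
    return res.status(400).json({ error: "invalid username" });
  }
  if (typeof email !== "string" || !EMAIL_RE.test(email)) {
    return res.status(400).json({ error: "invalid e-mail address" });
  }
  if (typeof password !== "string" || password.length < 12) {
    return res.status(400).json({ error: "password too short" });
  }

  try {
    const passwordHash = await bcrypt.hash(password, 12); // salted, adaptive hash
    await saveUser({ username, email, passwordHash });
    return res.status(201).json({ status: "verification e-mail sent" });
  } catch (err) {
    // Handle exceptions without leaking internals to the client.
    return res.status(500).json({ error: "something went wrong" });
  }
});
```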
OK, that was a brief checklist – but a checklist is far from the most important part of creating a secure sign-up process, or any process for that matter. The key to secure software is to follow a good workflow that takes security into account. Here’s how I like to think about this process.
Whenever you are building something, the first thing that comes to mind is “functionality”. We don’t start with security and then try to bolt on functionality – it is the other way around: functionality comes first, and security has to be designed in around it. Although a secure development lifecycle includes a lot of “other things”, like competence development and team organization, let’s start with the backlog of functionality we want to build – and focus on the actual development, not the organization around it. Having a list of functionality is a good starting point for thinking about threats, and then security. Let’s take a signup page as the starting point – here are a few things we’d like the page to have:
Easy signup with username, password, e-mail
Good privacy control: data leaks should be unlikely, and if they occur they should not reveal sensitive info
So, how can we build a threat model based on this? We’d probably like some more information about the context, and the technology stack being used.
Who are the stakeholders (users, owners, developers, customers, competitors, etc)?
Who could be interested in attacking the site? (Script kiddies? Cybercriminals? Hacktivists?)
What is the value of the service to the various stakeholders?
If we know the answers to these questions, it is easier to understand the threat landscape for our web page, and for the signup component in particular. We also need to know something about the technology stack. For example, we could build our signup page on the following (it could be anything, but lots of websites use these technologies today):
MongoDB for data storage in the cloud
Node.js for building a RESTful API backend
Vue.js for the frontend
So, having this down, we see the contours of how things work. We can start building a threat model that looks at people, infrastructure, software, etc.
People risk: someone signs up with a different person’s credentials and pretends to be that person (a personal risk on a social media platform, for example)
Software risk: user inputs are used to conduct injection attacks in order to leak data about users (MongoDB can be attacked in much the same way as an SQL database – see the sketch after this list)
Software risk: secrets are stored in unprotected form and they are leaked. User credentials sold on the dark web or posted to a pastebin.
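As a small illustration of the injection risk above, here is a sketch of how request input can be kept out of a MongoDB query as anything but a plain string. It assumes an Express route with a JSON body parser, the official MongoDB Node.js driver and the npm bcrypt package, and uses a login lookup as the example; the names are illustrative:

```typescript
// Reject request fields that are not plain strings, so operator objects such
// as {"$gt": ""} never reach the MongoDB query (a classic NoSQL injection).
import type { Request, Response } from "express";
import type { Collection } from "mongodb";
import bcrypt from "bcrypt";

function asPlainString(value: unknown): string | null {
  return typeof value === "string" ? value : null;
}

export async function login(users: Collection, req: Request, res: Response) {
  const username = asPlainString(req.body.username);
  const password = asPlainString(req.body.password);
  if (username === null || password === null) {
    return res.status(400).send("Invalid input"); // refuse objects and arrays outright
  }
  // Look up by username only; passwords are stored as salted hashes and are
  // checked with bcrypt.compare, never queried on directly.
  const user = await users.findOne({ username });
  const ok = user !== null && (await bcrypt.compare(password, user.passwordHash));
  res.sendStatus(ok ? 200 : 401);
}
```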
Creating a big list of threats like this, we can rank them based on how serious the impact would be, and how likely they are to occur. Based on this we create security requirements that are then added to the backlog. These security requirements should also be added to the test plans (whether unit testing or integration testing), to make sure the controls actually work (a small example follows below).
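As an example of what such a test could look like, here is a minimal sketch assuming a Jest-style test runner and a hypothetical validatePassword helper exported from the signup module:

```typescript
// Hypothetical example: the security requirement "reject weak passwords"
// expressed as unit tests (Jest-style). validatePassword is assumed to exist
// in the signup module and to enforce a minimum length of 12 characters.
import { validatePassword } from "./signup";

describe("signup password policy", () => {
  it("rejects a short, common password", () => {
    expect(validatePassword("admin")).toBe(false);
  });

  it("accepts a long passphrase", () => {
    expect(validatePassword("correct horse battery staple 42")).toBe(true);
  });
});
```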
Testing and development is iterative – but at some point it seems we have covered the backlog and passed all tests. It is QA time! In many development projects QA will focus on functionality first, then performance. That is of course very important – but if users cannot trust the software, they will leave. That is why security should be part of QA as well. This means more extensive testing, typically adding static analysis, code reviews and vulnerability scans to the QA process. Normally you will find something here, which could range from “small stuff” to “big stuff”. If the quality is not sufficient, say performance is poor or functionality fails, you go back to the backlog and update it for the next sprint. The same should be done for security findings.
When the backlog is updated, we need to update our threat model as well, informed by what we’ve learned in the previous sprint.
Following a process like this will not give you 100% bug-free, super-secure software – but it will certainly help. And by tracking some metrics you can also measure whether quality improves over time. Static analysis in particular gives good metrics for this – the number of code defects, whether security-related or not, should decrease from one sprint to the next.
That’s why we need a good process: it lets us learn to build better things over time. Run fast and break things – but repair them fast as well. This way it is possible to combine innovation with good security. Innovation is of limited use unless we can trust its results.
This has really been the year of marketing and doomsday predictions for companies that need to follow the new European privacy regulation. Everyone from lawyers to consultants to experts of every breed is talking about what a big problem it will be for companies to comply, and how the only remedy is to buy their services. Yet they give very few details on what those services actually are.
Let’s start with the new regulation; not the “what to do” but the “why it is coming”. The privacy conscious search engine duckduckgo.com gives the following definition:
Privacy: The state of being free from unsanctioned intrusion: a person’s right to privacy.
Who hasn’t felt his or her privacy intruded upon by marketing networks and information brokers online? The state of modern business is the practice of tracking people, analyzing their behavior and seeking to influence their decisions, whether it is how they vote in an election or what purchases they make. In the vast majority of cases this surveillance-based mind control occurs without conscious consent from the people being tracked. It seems clear that there is a need to protect the privacy rights and freedoms of individuals in the age of tech-based surveillance.
This is what the new regulation is about – the purpose is not to make life hard for businesses but to make life safe for individuals, allowing them to make conscious decisions rather than being brainwashed by curated messages in every digital channel.
Still, for businesses, this means that things must change. There may be a need for consultants, but there is also a need for practical tools. The way to deal with the GDPR that all businesses should follow is:
Understand why this is necessary
Learn to see things from the end-user or customer perspective
Learn the key principles of good privacy management
Create an overview of what data is being gathered and why
With these 4 steps in place, it is much easier to sort the good advice from the bad, and the useful from the wasteful.
Most businesses are used to thinking about risk in terms of business impact. What would the consequence to our business be, and how likely is it to happen? That will still be important after May 2018, but this is not the perspective the GDPR takes. If we are going to make decisions about data protection for the sake of protecting privacy, we need to think about the risk the data collection and processing exposes the data subjects to (data subject is GDPR speak for people you store data about).
What consequences could the data subjects face from the processing itself? Are you using it for profiling or some sort of automated decision making? This is the usual “privacy concern” – and rightly so. Ever felt it is creepy how marketers track your digital movements?
Another perspective we need to take: what can the consequences for individuals be if the data is abused? This could be a data breach like the Equifax and Uber stories we’ve heard so much about this fall, or it can be something else, like an insider abusing your data, a hacker changing the data so that automated decisions don’t go your way, or the data becoming unavailable, stopping you from using a service or getting access to something. The personal consequences can be financial ruin, a damaged reputation, family troubles or perhaps even divorce.
A key principle in the GDPR is data minimization: you shall only process and store data when you have a good reason and a legal basis for doing so. Practicing data minimization means less data that can be abused, and thereby a lower threat to the freedoms and rights of the people you process data about. This is perhaps the most important principle of good privacy practice: avoid processing or storing data that can be linked to individuals as far as you can (while still getting your things done).
Surprisingly, many companies have no clue what personal data they are storing and processing, who they share it with, and why they do it. The starting point for achieving good privacy practice, good karma – and even GDPR compliance – is knowing what you have and why.
From scare-speak to tools and practice
We’ve had a year of data breaches, and we’ve had a year of GDPR-themed conference talks, often focused on potential bankruptcy-inducing fines, the cost of organizational changes and legal burdens. Now is the time for a more practical discussion: getting from theory to practice. In doing this we should all remember:
There are no GDPR experts: the regulation has not yet come into force, and both regulatory oversight and practical implementations are still in their infancy
The regulation is risk-based: this means the data controllers and processors must take ownership of the risk and governance processes.
Documenting compliance should be a natural part of performing the thinking and practical work required to safeguard the privacy of customers, users, employees, visitors and whatever category of people you process data related to.
We need practical guidance documents, and we need tools that make it easier to follow good practice, and keep compliance documentation alive. That’s what we should be discussing today – not fines and stories about monsters eating non-compliant companies.
If you are like most people, you don’t read privacy statements. They are boring, often generic, and seem to be written to protect businesses from lawsuits rather than to inform customers about how their privacy is protected. Still, when you know what to look for in order to make up your mind about “is it OK to use this product?”, such statements are helpful.
Even so, there is much to be learned from reading a privacy statement. If you are like most people you are not afraid of sharing things on the internet, but you still don’t want the platforms you use to abuse the information you share. In addition, you would like to know what you are actually sharing. It is obvious that you are sharing a photo when you include it in a Facebook status update – but is it obvious that you are sharing your phone number and location when you are using a browser add-on? The former we are generally OK with (it is our decision to share); the latter not so much – we are typically tricked into sharing such information without even knowing that we do it.
So-called anonymous information: approximate geo-location, hardware specs, browser type and version, date of software installation (of their add-on, I presume), the date you last used their services, operating system type and version, OS language, registry entries (really??), URL requests, and time stamps.
Personal information: IP address, name, email, screen names, payment info, and other information we may ask for. In addition you can sign up with your Facebook profile, from which they will collect usernames, email, profile picture, birthday, gender, preferences. When anonymous information is linked to personal information it is treated as personal information. (OK….?)
Other information: information that is publicly available as a result of using the service (their so-called VPN network) may be accessed by other users as a cache on your device. This is basically your browser history.
125 million users accept that their personal data is being harvested, analysed and shared at will by a company that provides “VPN” with no encryption and that accepts the use of “password” as password when signing up for their service.
So, here’s the take-away:
What they collect
How they collect it
What they are using the information for
Whom they share the information with
How they secure the information
Think about what this means for the things that are important to your privacy. Do you accept that they do the stuff they do?
What is the worst-case use of that information if the service provider is hacked? Identity theft? Incriminating cases for blackmail? Political profiling? Credibility building for phishing or other scams? The more information they gather, the worse the potential impact.
Finally, never trust someone claiming to sell a security product that obviously does not follow good security practice. No SSL, accepting weak passwords? Take your business elsewhere, it is not worth the risk.
Insurance relies on pooled risk: when a business is exposed to a risk it feels cannot be managed with internal controls, the risk can be transferred to the capital markets through an insurance contract. For events that are unlikely to hit a very large number of insurance customers at once, this model makes sense. The pooled risk allows the insurer to create capital gains on the premiums paid by the customers, and the customers get their financial losses covered in a claim situation. This works very well in many cases, but insurers will naturally try to limit their liabilities through “omissions clauses”: things that are not covered by the insurance policy. The omissions will typically include catastrophic systemic events that the insurance pool would not have the means to cover because a large number of customers would be hit simultaneously. They will also include conditions with the individual customer that void the coverage – often referred to as pre-existing conditions. A typical example of the former is damage due to acts of war or natural disasters; for these events the insured would have to buy extra coverage, if it is offered at all. An example of the latter, the pre-existing condition, would be diseases the insured had before entering into a health insurance contract.
How does this translate into cyber insurance? There are several interesting aspects to think about in both omissions categories. Let us start with the systemic risk – what happens to the insurance pool if all customers issue claims simultaneously? Each claim typically exceeds the premiums paid by any single customer. Therefore, a cyberattack that spreads to large portions of the internet is hard to insure while keeping the insurer’s risk under control. Take for example the WannaCry ransomware attack in May: within a day more than 200,000 computers in 150 countries were infected. The Petya attack that followed in June caused similar reactions, although the number of infected computers is estimated to be much lower. Whereas WannaCry still looks like a poorly implemented cybercrime campaign intended to make money for the attacker, the Petya ransomware seems to have been a targeted cyberweapon used against Ukraine; the rest was most likely collateral damage. But for Ukrainian companies, the government and computer users this was a major attack: it took down systems belonging to critical infrastructure providers, it halted airport operations, it affected the government, it took hold of payment terminals in stores; the attack was a major threat to the entire society. What could a local insurer have done if it had covered most of those entities against any and all cyberattacks? It would most likely not have been able to pay out, and would have gone bankrupt.
A topic that came up in security forums after the WannaCry attack was the “pre-existing condition” in cyber insurance. Many policies had included “human error” in the omissions clauses – basically saying that you are not covered if you are breached through a phishing e-mail. Some policies also include “unpatched systems” as an omission clause: if you have not patched, you are not covered. Not all policies are like that, and underwriters will typically gauge a firm’s cyber security maturity before entering into an insurance contract covering conditions that are heavily influenced by security culture. These are risks that are hard to include in quantitative risk calculations; the data are simply not there.
Insurance is definitely a reasonable control for mature companies, but there is little value in paying premiums if the business cannot meet the conditions set by the omissions clauses. For small businesses it pays off to focus on the fundamentals first, and then add a reasonable insurance policy.
For insurance companies it is important to find reasonable risk pooling measures to better cover large-scale infections like WannaCry. This is a serious threat to many businesses, and not having meaningful insurance products in place will hamper economic growth overall. It is also important for insurers to get a grasp on individual omissions clauses, because in cyber risk management the “pre-existing condition” thinking is flawed: security practice and culture are dynamic and evolving, which means that coverage omissions should be based on the current state rather than on a survey conducted prior to policy renewal.
What followed pretty much resembled the WannaCry media panic: things will be encrypted, there is nowhere to hide and society is going to crash hard… Of course, if somebody encrypts your files, and this spreads throughout your network, it is pretty close to an ocean of pain – at least if you are unprepared. So, instead of an analysis of the Petya virus, or some derivative of it, let’s get down to what we can do to prepare for and survive a ransomware attack. Because it really isn’t that hard. Really.
A security baseline means the security controls you apply irrespective of risk level. The things that everybody should do, and that will actually stop almost every cyberattack. Yes, almost every attack – say, 90% of them. And it doesn’t even require you to buy a lot of expensive consulting services or other snake oil. Here we go, this should be the minimum baseline for everyone:
Patch your software as fast as you can whenever a new security patch comes out. Operating systems normally do this automatically if you do not configure your system otherwise. Sometimes organizations need to check compatibility for critical systems, and such, but the main rule is: patch everything as fast as you can.
Do not allow users to perform regular work using an account with administrator rights. Most users don’t even need admin rights at all, but if they do, give them two accounts. Perform work using a standard account with limited privileges.
Run a firewall that is configured to block all incoming traffic (unless you need it).
If you run an organization: use application whitelisting. Do not allow execution of unauthorized code, or at least block code running from unauthorized locations (like USB media or downloaded email attachments).
Backup everything. See below for details. This doesn’t stop any attacks but it saves the day when you are under siege. It is like a secret superweapon that more or less guarantees criminals won’t get their payday.
None of these will require you to buy any new software, or new services. So just do it – it will reduce the chance of having a very bad day by 90%.
Most cyber-attacks spread through some form of social engineering, and in most cases this is an email with a malicious link or attachment. Train your people to spot the danger, and get it into your organization’s culture that file sharing is not done via email attachments. Provide them with real collaboration tools instead. This would further reduce the chance of a very bad day by another 90%.
If you want to be sure, you can scrub attachments and disable links in emails – but people may feel that is a little extreme and start using private email accounts instead, which is completely outside of the organization’s control, so only do this if you really have a compliance culture in place. Most organizations don’t.
Backups and restore testing
OK, so you can reduce the likelihood of getting hit, but that only goes so far. Sooner or later you will reach a day when you have to recover systems and remove a virus infection. Ransomware is icky because it encrypts your files and makes them useless, so in most of these cases your AV program cannot save you. So how can you avoid paying criminals and helping to fund money laundering, human trafficking, terrorism and the drugs trade? Here’s how.
Back up your data, with reasonable frequency and retention – and verify the backups. If you run a backup of your files every few hours, do an offline/offsite backup every night, keep the rolling (online) backup for 30 days and the offline backups for 180 days, it will be very hard to put you in a tight spot. If you generate important data very fast, increase the backup frequency.
Make sure you also verify backup integrity. An easy way to do this is to take binary disk images as offline backups, compute a hash of each image, and store the hash separately in a different secure offline location. This way you can verify that your offline backups really stay the way they were when you made them, simply by checking the hash. Do the same for the rolling backup – then you can check whether the cryptovirus has changed something in your backup.
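A minimal sketch of the hashing step, assuming Node.js and its built-in crypto and fs modules (the file path and the known-good digest are placeholders):

```typescript
// Compute a SHA-256 digest of a backup image by streaming it, so later you
// can recompute the digest and compare it with the copy stored offline.
import { createHash } from "crypto";
import { createReadStream } from "fs";

export function hashBackupImage(imagePath: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const hash = createHash("sha256");
    createReadStream(imagePath)
      .on("data", (chunk) => hash.update(chunk))
      .on("error", reject)
      .on("end", () => resolve(hash.digest("hex")));
  });
}

// Usage: compare against the digest kept in a separate offline location.
// hashBackupImage("/backups/fileserver-2018-01-07.img").then((digest) => {
//   console.log(digest === knownGoodDigest ? "backup intact" : "MISMATCH - investigate");
// });
```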
Many companies back up their data but never test whether they are able to restore it. To be sure that everything works the day the shit hits the fan, do regular restore testing. Try to restore your system from scratch using various backups to make sure everything works as it should. If it doesn’t, review your backup practice and find out what you need to change to make it work.
Response: security monitoring, escalation and crisis intervention teams
In addition to these technical things you need a response team. The team should be ready to respond in a well-prepared and structured way. Typically, you would go through a series of steps:
Identify the threat and classify it as incident or not.
Contain the problem. Make sure it does not spread (disconnect from network if feasible)
Collect evidence. Create multiple binary images of the infected system, and store hashes of them. Some you will use for forensic analysis, some are collected as evidence and are not to be touched.
Eradicate the attacker from your system. Normally this means formatting everything and restoring from a safe backup.
Test your restored system. Any signs of reinfection or problems? If not, bring it online step by step – one server or computer at a time. Monitor closely for strange behavior.
Lessons learned: what did you do well, what should you have done differently? Collect experiences and share with your peers. This is the way we learn. This is what we should be better at than the bad guys. I’m not entirely sure we are better than them at shared learning, though.
Would it have helped in the cases of the Petya mutant and WannaCry?
You bet! First of all, WannaCry only worked on computers running either end-of-life versions of Windows or unpatched versions of newer operating systems. Patching would have kept everyone safe.
What about Petya? The attack is still ongoing. It spreads using the EternalBlue exploit (which the NSA wrote and lost), and which Microsoft issued a patch for in March: https://technet.microsoft.com/en-us/library/security/ms17-010.aspx. In other words, if people had followed good baseline security practice, they most likely wouldn’t have a problem now (you can never really be sure there isn’t another zero-day, but probably not).
So, who’s been affected? Just small and unknown companies? Nope. Here are a few examples:
Rosneft (a Russian oil giant)
Ukraine: Government, power companies, airports, supermarkets, and also the Chernobyl nuclear power plant. That one, yes.
Maersk: one of the largest shipping companies in the world
To remember what to do: please keep this miniposter handy.
Physical object security and cybersecurity defense have many similarities, such as:
Defense in depth
The need for awareness
Structure of response activities
There is one thing, however, that is taught to everyone responsible for providing physical security: your main focus is to protect the “vital objects”. Such an object can be a power substation, a decision maker (like a prime minister), a munitions depot or almost anything you can imagine. But if a military unit, private security guards or the police are told to secure an object, they will know exactly what the “vital object” is.
Jump to the world of cybersecurity in businesses. Here the main focus is on “vulnerabilities”. Obviously, that is a key part of the security equation – without vulnerabilities to exploit (human or technical), there is no way to reach the target assets. The problem is that when people start the “hunt for vulnerabilities”, they tend to forget what their vital object is. In fact, in many cases there has been no criticality analysis at all. This is why the “risk and vulnerability assessment” step must be taken seriously. So here is a suggested approach to narrow down the scope for the teams responsible for providing security (a small sketch after the list shows one way to structure it):
List all your assets and perform a criticality assessment. What would the consequence be for the overall goals of the owner organization, its partners and customers, if you experienced the following:
Confidentiality breach: adversaries get access to the asset’s core information
Integrity breach: adversaries can manipulate the asset’s core information
Availability breach: adversaries can deny legitimate users access to the asset
Perform a risk assessment based on known adversaries and available intelligence. Determine if it is likely or unlikely that the asset can be breached.
Narrow down your asset list to “assets” and “vital assets”.
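One way to structure such an assessment is sketched below; the scoring model, example assets and threshold are illustrative assumptions, not a standard:

```typescript
// Illustrative sketch: score each asset on the consequence of a C/I/A breach
// plus breach likelihood, and flag the highest-scoring ones as "vital".
type Score = 1 | 2 | 3; // 1 = low, 3 = high

interface Asset {
  name: string;
  confidentialityImpact: Score;
  integrityImpact: Score;
  availabilityImpact: Score;
  breachLikelihood: Score;
}

function criticality(a: Asset): number {
  const worstImpact = Math.max(
    a.confidentialityImpact,
    a.integrityImpact,
    a.availabilityImpact
  );
  return worstImpact * a.breachLikelihood; // simple risk = impact x likelihood
}

const assets: Asset[] = [
  { name: "cardholder database", confidentialityImpact: 3, integrityImpact: 3, availabilityImpact: 2, breachLikelihood: 2 },
  { name: "public marketing site", confidentialityImpact: 1, integrityImpact: 2, availabilityImpact: 1, breachLikelihood: 2 },
];

const vitalAssets = assets.filter((a) => criticality(a) >= 6);
console.log(vitalAssets.map((a) => a.name)); // prints [ 'cardholder database' ]
```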
Now, you have your priorities set. It is time to plan your security strategy. You should apply baseline security measures to all assets. The minimum baseline would be:
Network segregation: at least keep vital and non-vital assets on separate network segments with firewalls between them
Keep things patched and up-to-date
Harden both software and hardware as appropriate for “normal assets”
Apply monitoring as suited. Create red flags for suspicious activity (e.g. from an intrusion detection system).
Ensure your automated backup system is working and that you are able to restore when needed
Teach people what they need to know to reduce the likelihood of breaches, and what to do when one is suspected (usually call support).
For your non-vital objects this would typically be enough. For vital objects (a database containing credit card information for example) you need to base your defense on the risk exposure and your available resources.
Limit access as much as possible
Monitor and log more aggressively, and be prepared to tighten the mesh at higher threat levels
Design sufficient capacity for fail-overs if possible
Apply stricter hardening policies
Apply backups as suited for this asset.
Be ready to act fast (train for it, and have sufficient resources available)
The last point cannot be stressed enough. In physical security you would typically have a team in place to provide deterrence and monitoring, and a quick reaction force to act fast when necessary. In many IT organizations the sysadmin on duty is expected to fill both these roles, with little training or backing to do so. That does not work.
Assign roles and responsibilities for responders
Pay them enough to make the extra vigilance feel worth it
Give them the resources needed to train, and focus 1/3 on baseline defense breaches and 2/3 on vital object breaches
Cyber insurance cannot play the role of a great and well-prepared response team. If your vital data is breached it may easily be the IT equivalent of a nuke to your head office – your customers will lose trust in you, especially if you fail to respond fast and in a structured way to limit the damage as much as you can.
Don’t apply the same defense strategy to everything. Establish a baseline, and then focus on your vital assets.