Vendor Security Management: how to decide if tech is safe (enough) to use

tl;dr: Miessler is right. We need to focus on our own risk exposure, not vendor security questionnaires

If you want to make a cybersecurity expert shiver, utter the words “supply chain vulnerabilities”. Everything we do today depends on a complex mixture of systems, companies, technologies and individuals. Any part of that chain of interconnected parts can be the dreaded weakest link. If hackers find that weak link, the whole house of cards comes crumbling down. Managing cyber supply chain risk is challenging, to say the least. 

Most companies that have implemented a vendor cybersecurity risk process make decisions based on a questionnaire sent to the vendor during selection. In addition, audit reports for recognized standards such as ISO 27001 or SOC 2 may be shared by the vendor and used to assess the risk. Is this process effective at stopping cyberattacks through third parties? That is at least up for debate.

Daniel Miessler recently wrote a blog post titled “It’s time for vendor security 2.0”, where he argues that the current approach is not effective and that we need to change the way we manage vendor risks. Considering how many cybersecurity questionnaires Equifax, British Airways and Codecov must have filled in before being breached, it is not hard to agree with @danielmiessler on this. What he argues in his post is: 

  1. Cybersecurity reputation services (rating companies, etc.) mostly operate like the mob, and security questionnaires are mostly security theater. None of this will save you from cyber armageddon.
  2. Stay away from companies that seem extremely immature in terms of security
  3. Assume the vendor is breached
  4. Focus more on risk assessment under the assumption that the vendor is breached than on questionable questionnaires. Build threat models and mitigation plans, and make those risks visible. 

Will Miessler’s security 2.0 improve things?

Let’s go through the four numbered points above one by one. 

Are rating companies mobsters? 

There are many cybersecurity rating companies out there. They see themselves as the Moody’s or S&P of cybersecurity. They operate by pulling in “open source information about the cybersecurity posture” of companies. They also say that they enrich this information with other data that only they have access to (that is, they buy data from marketing information brokers and exchange data with insurance companies). Then they correlate this information in more or less sound statistical ways (combined with a good dose of something called expert judgment – or guessing, as we can also call it) with known data breaches, and create a security score. Then they claim that using companies with a bad score is dangerous, and that using those with a good score is much safer. 

This is definitely not an exact science, but it does seem reasonable to assume that companies showing a lot of poor practice – such as a lack of patching, or botnet-infected computers pinging out to sinkholes – have worse security management than similar companies without these indicators. Personally, I think a service like this can help sort the terrible ones from the reasonably OK ones. 

So, are they acting as mobsters? Are they telling you “we know about all these vulnerabilities, and if you don’t pay us we will tell your customers”? Not exactly. They are telling everyone willing to pay for access to their data about these things – everyone except you, that is, unless you pay them. It is not exactly in line with accepted standards of “responsible disclosure”. At the same time, their findings are often quite basic, and anyone bothering to look could find the same things (such as support for old ciphers on TLS, or web servers leaking the use of an old PHP version). Bottom line: I think their business model is acceptable, and the service can provide efficiency gains for a risk assessment process. I agree with Miessler that trusting this to be a linear scale of cyber goodness is naive at best, but I do think companies with a very poor security rating are riskier to use than those with good ratings. 

Some security rating vendors have a business model that resembles the extortion rackets of a 1930s mobster. But even mobsters can be useful at times.

Verdict – usefulness: rating services can provide a welcome substitute for, or addition to, slower ways of assessing security posture. An added benefit is the ability to see how things develop over time. Small changes are likely to be of little significance, but a steady improvement in security rating over time is a good sign. These services can be quite costly, so it is worth thinking about how much money you want to throw at them. 

Verdict – are they mobsters? They are not mobsters but they are also not your best friends. 

Are security questionnaires just security theater? 

According to Miessler, you should slim down your security questionnaires to two questions: 

  1. “When was the last time you were breached? What happened, why, and how did you adjust?”
  2. “Do you have security leadership and a security program?”

The purpose of these questions is to judge whether the vendor has a reasonable approach to security. It is easy for people to lie on detailed but generic security forms, and such forms provide little value. To discover whether a company is a metaphorical “axe murderer”, the two questions above are enough, argues Miessler. He may have a point. Take, for example, a typical security questionnaire favorite: “does your company use firewalls to safeguard computers from online attacks?” Everyone will answer “yes”. Does that change our knowledge about their likelihood of being hacked? Not one bit. 

Of course, lying on a short questionnaire with Miessler’s two questions is no more difficult than lying on a long and detailed one. Most companies would not admit anything on a questionnaire that is not already publicly known. It is like flying to the US a few years ago, where they made you fill out an immigration questionnaire with questions like “are you a terrorist?” and “have you been a guard at a Nazi concentration camp during WWII?”. It is thus a fair question whether we can scrap the whole questionnaire. If the vendor you are considering is a software firm – at least if it is a “Software as a Service” or another type of cloud service provider – it is likely to have some generic information about security on its web page. Looking that up will usually be just as informative as any answer to the questions above. 

Verdict: Security questionnaires are mostly useless – here I agree with Miessler. I think you can even drop the minimalist axe murderer detection variant, as people who lie on long forms probably lie on short forms too. Perhaps a good middle ground is to first check the website of the vendor for a reasonable security program description, and if you don’t see anything, then you can ask the two questions above as a substitute. 

Stay away from extremely bad practice

Staying away from companies with extremely bad practice is a good idea. Sometimes this is hard to do, because the business needs a certain service and all potential providers are horrible at security. But if you have a choice between someone with obviously terrible security habits and someone with a less worrying security posture, this is clearly good advice. Good ways to check for red flags include: 

  • Create a user account and check password policies, reset flows, etc. Many companies allow you to create free trial accounts, which is good for evaluating security practices as well. 
  • Check if the applications are using outdated practices, poor configuration, etc. 
  • Run sslscan to check whether they still support very old, broken crypto – a good indicator that patching isn’t exactly a priority. A scripted version of this check is sketched below.
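
If you prefer to script that last check, here is a minimal sketch using only Python’s standard ssl module (the hostname is a placeholder; note that some modern OpenSSL builds refuse to negotiate TLS 1.0/1.1 at all, in which case the handshake simply fails and the probe reports False):

```python
import socket
import ssl

def accepts_legacy_tls(host: str, port: int = 443) -> dict:
    """Probe whether a server still negotiates deprecated TLS versions."""
    results = {}
    for name, version in [("TLSv1.0", ssl.TLSVersion.TLSv1),
                          ("TLSv1.1", ssl.TLSVersion.TLSv1_1)]:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False        # we test protocol support, not the cert
        ctx.verify_mode = ssl.CERT_NONE
        ctx.minimum_version = version     # pin the handshake to one legacy version
        ctx.maximum_version = version
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host):
                    results[name] = True  # handshake succeeded: legacy TLS accepted
        except (ssl.SSLError, OSError):
            results[name] = False
    return results

print(accepts_legacy_tls("example.com"))  # placeholder host
```

A server that happily completes both handshakes has clearly not touched its TLS configuration in a long time.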

Verdict: obviously a good idea.

Assume the vendor is breached and create a risk assessment

This turns the focus to your own assets and risk exposure. Assuming the vendor is breached is obviously a realistic starting point. Focusing on how that affects the business, and what you can do about it, makes the vendor risk assessment about business risk instead of technical details that feel irrelevant. 

Miessler recommends: 

  • Understand how the external service integrates into the business
  • Figure out what can go wrong
  • Decide what you can do to mitigate that risk

This is actionable and practical. The first part here is very important, and to a large degree determines how much effort it is worth putting into the vendor assessment. If the vendor will be used for a very limited purpose that does not involve critical data or systems, a breach would probably not have any severe consequences. That seems acceptable without doing much about it. 

On the other hand, what if the vendor is a customer relationship management (CRM) provider that will integrate with your company’s e-commerce solution, payment portal, online banking and accounting systems? A breach of that system could obviously have severe consequences for the company in terms of cost, reputation and legal liability. In such a case, modeling what could happen, how to reduce the risk, and whether the residual risk is acceptable would be the next steps.

Shared responsibility – not only in the cloud

Cloud providers talk a lot about the shared responsibility model (AWS version). The responsibility for security of software and data in the cloud is shared between the cloud provider and the cloud customer. They have documentation on what they will take care of, as well as what you as a customer need to secure yourself. For the work that is your responsibility, the cloud provider will typically give you lots of advice on good practices. This is a reasonable model for managing security across organizational interfaces – and one we should adopt with other business relationships too. 

The most mature software vendors already work like this: they publish descriptions of their own security practices that you can read, and they offer advice on how you should set up integrations to stay secure. The less mature ones lack both the transparency and the guidance. 

This does not necessarily mean you should stay away from them (unless they are very bad or using them would increase the risk in unacceptable ways). It means you should work with them to find good risk mitigations across organizational interfaces. Some of the work has to be done by them, some by you. Bringing the shared responsibility for security into contracts across your entire value chain will help grow security maturity in the market as a whole, and benefit everyone. 

Questionnaires are mostly useless – but transparency and shared responsibility are not. 

In Miessler’s vendor security 2.0 post there is a question about what vendor security 3.0 will look like. I think that is when we have transparency and shared responsibility established across our entire value chain. Reaching this cybersecurity Nirvana of resilience will be a long journey – but every journey starts with a first step. That first step is to turn the focus to how you integrate with vendors and how you manage the risk of that integration – and that is a step we can take today. 

Do you consider security when buying a SaaS subscription?

tl;dr: SaaS apps often have poor security. Before deciding to use one, do a quick security review. Read privacy statements, ask for security docs, and test authentication practices, crypto and console.log information leaks before deciding whether to trust the app. This post gives you a handy checklist to breeze through your SaaS pre-trial security review.

Over the last year I’ve been involved in buying SaaS access for lots of services in a corporate setting. My stake in the evaluation has been security. Of course, we can’t run active attacks or the like when evaluating software we might buy, but we can read privacy statements, look for strange things in the HTML, and test the login process if they allow you to set up a free account or a trial account (most of them do). Here are the challenges I’ve generally found:

  • Big name players know what they are doing, but they have so many options for setup that you need to be careful what you select, especially if you have specific compliance needs.
  • Smaller firms (including startups)  don’t offer a lot of documentation, and quite often when talking to them they don’t really have much in place in terms of security management (they may still make great software, it is just harder to trust it in an enterprise environment)
  • Authentication blunders are very common. From HR support systems to payroll to IT management tools, we’ve found a lot of bad practices:
    • Ineffective password policies (5 digits and digits only, anyone?)
    • A theoretical password policy in place, but with validation performed only client-side (meaning it is easy to trick the server into setting a very weak password, or even no password in some cases, should you so desire)
    • Lack of cookie security for session cookies (missing HttpOnly and Secure flags, allowing cookie theft through XSS or man-in-the-middle attacks – a scripted check for this is sketched after this list)
    • Poor password reset processes. In one case I found that the supposedly random string used for a password reset link was a base64 encoding of my username… how hard is that to abuse?
  • When you ask about development practices and security testing, many firms will lack such processes but try to explain that they implicitly have them because their developers are so smart (even with the authentication blunders mentioned above).
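
As promised above, a quick way to script the session cookie check – a minimal sketch using the third-party requests library (the URL is a placeholder; point it at a page that actually sets a session cookie, such as a login endpoint):

```python
import requests

def check_cookie_flags(url: str) -> None:
    """Report cookies set by the page that lack the Secure/HttpOnly flags."""
    resp = requests.get(url, timeout=10)
    # Inspect the raw Set-Cookie headers; the parsed cookie jar hides the flags.
    for header in resp.raw.headers.getlist("Set-Cookie"):
        name = header.split("=", 1)[0]
        attrs = {part.strip().lower() for part in header.split(";")[1:]}
        missing = [flag for flag in ("secure", "httponly") if flag not in attrs]
        if missing:
            print(f"Cookie {name!r} is missing: {', '.join(missing)}")

check_cookie_flags("https://example.com/login")  # placeholder URL
```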

Obviously, at some point, security will be so bad that you have to say “No, we cannot buy this shiny thing, because it is rotten on the inside”. This is not a very popular decision in some cases, because the person requesting the service probably has some reason to request it.

Don’t put your data and business at risk by using a SaaS product you can’t trust. By doing a simple due diligence job you can at least identify services you definitely should be avoiding!

In order to evaluate SaaS security without spending too much time on it I’ve come up with a process I find works pretty well. So, here’s a simple way to sort the terrible from the average!

  1. Read the privacy statement and the terms and conditions. Not the whole thing, that is what lawyers are for (if you have some), but scan it and look for anything security related. Usually they will try to explain how they protect your personal data. If it is a clear and understandable explanation, they usually know what they are doing. If not, they usually don’t.
  2. Look for security or compliance documentation, or request it from their sales/support team. Specific questions to consider are:
    1. Do they offer encryption at rest (if this is reasonable to expect)?
    2. Do they explain how they store passwords?
    3. Do they use trustworthy data centers, and have some form of compliance proof for good practice for their data centers? SOC-2 reports, ISO certificates, etc?
    4. Do they guarantee limited access for insiders?
    5. Do they explain what cryptographic controls they use for TLS, signatures and at-rest encryption – that is, how they perform key management and exchange, which ciphers they allow, and what key strength they require?
    6. Do they say anything about vulnerability management and security testing?
    7. Do they say anything about incident handling?
  3. Perform some super-lightweight testing. Don’t break the law but do your own due diligence on the app:
    1. Create a free account if possible, and test if the authentication practices seem sound:
      • Test the “I forgot my password” functionality to see if the password reset link has an expiry time, and if it is an unguessable link (a quick check for one common blunder is sketched after this list)
      • Try changing your password to something really bad, like “password” or “dog”. Try replaying POST requests if stopped by client-side validation (this can be done directly from the dev tools in Firefox, no hacker tool necessary)
      • Try logging in many times with the wrong password to see what happens (warning, lockout… or, most likely, nothing)
    2. Check their website certificate practices by running sslscan, or by using the online version at ssllabs.com. If they support really weak ciphers, it is an indicator that they are not on the ball. If they support SSL (pre TLS 1.0) it is probably a completely outdated service.
    3. Check for client-side libraries with known vulnerabilities. If they are using ancient versions of jQuery and other client side libraries, they are likely at risk of being hacked. This goes for WordPress sites as well, including their plugins.
    4. Check for information leaks by opening the browser console (Ctrl + Shift + I in most browsers). If the console logs a lot of internal information coming from the backend, they probably don’t have the best security practices.
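
As a tiny helper for the reset-link test above: this checks whether a supposedly random token is just a base64 encoding of something predictable, like your username or email (the token below is a made-up example of exactly that blunder):

```python
import base64
import binascii

def base64_reveals(token: str, candidates: list) -> str:
    """Return the candidate hidden inside a base64 'random' token, if any."""
    try:
        padded = token + "=" * (-len(token) % 4)  # restore stripped padding
        decoded = base64.b64decode(padded).decode("utf-8", errors="replace")
    except (binascii.Error, ValueError):
        return ""
    for candidate in candidates:
        if candidate.lower() in decoded.lower():
            return candidate
    return ""

# Made-up token from a reset link; it decodes to the user's email address.
print(base64_reveals("am9obi5kb2VAZXhhbXBsZS5jb20", ["john.doe@example.com"]))
```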

So, should you dump a service if any of these tests “fail”? Probably not. Most sites have some weak points, and sometimes that is a tradeoff for some reason that may be perfectly legitimate. But if there are a lot of these “bad signs” in the air and you would be running some critical process or storing your company secrets in the app, you will be better off finding another service to suit your needs – or at least you will be able to sleep better at night.

 

Why you should be reading privacy statements before using a web site

If you are like most people, you don’t read privacy statements. They are boring, often generic, and seem to be created to protect businesses from lawsuits rather than to inform customers about how they protect their privacy. Still, when you know what to look for to make up your mind about “is it OK to use this product”, such statements are helpful.

If somebody was wiretapping all of your phone calls you wouldn’t be happy. Why, then, should you accept surveillance when you use other services? Most people do, and while they may have a feeling that their browsing is “monitored”, they may not have full insight into what people can do with the data they collect, or how much data they actually get access to. 

Even so, there is much to be learned from looking at a privacy statement. If you are like most people, you are not afraid of sharing things on the internet, but you still don’t want the platforms you use to abuse the information you share. In addition, you would like to know what you are sharing. It is obvious that you are sharing a photo when you include it in a Facebook status update – but is it obvious that you are sharing your phone number and location when you are using a browser add-on? The former we are generally OK with (it is our decision to share); the latter not so much – we are typically tricked into sharing such information without even knowing that we do it.

Here’s an example of a privacy statement that is both good and bad: http://hola.org/legal/privacy. It is good in that it is relatively explicit about what information it collects (basically everything they can get their hands on), and it is bad because they collect everything they can get their hands on. Hola is a “VPN” service that markets itself as a security and privacy product. Yet their website does not use SSL, their so-called VPN service does not use encryption but is really an unencrypted proxy network, they accept users that register with “password” as their password, and so on… So much for security. So, here’s a bit on what they collect (taken from their privacy policy):

  • So-called anonymous information: approximate geo-location, hardware specs, browser type and version, date of software installation (their add-on I presume), the date you last used their services, operating system type and version, OS language, registry entries (really??), URL requests, and time stamps.
  • Personal information: IP address, name, email, screen names, payment info, and “other information we may ask for”. In addition, you can sign up with your Facebook profile, from which they will collect username, email, profile picture, birthday, gender and preferences. When anonymous information is linked to personal information, it is treated as personal information. (OK…?)
  • Other information: information that is publicly available as a result of using the service (their so-called VPN network) may be accessed by other users as a cache on your device. This is basically your browser history.

Would you use this service after seeing all of these things? They collect as much as they can about you, and they have pretty lax security. The next thing in their privacy statement that should be of interest is “Information we share”. What they call anonymous information they share with whoever they please – meaning marketing people. They may also share it for “research purposes”. Note that the anonymous information is probably enough to fingerprint exactly who you are, and to track you around the web afterwards using tracking cookies. This is pretty bad.

They also state when they share “personal information”. It includes the usual reason: legal obligations (like subpoenas and court orders). In addition, they may share it to detect, prevent or address fraud, security issues, violations of policy or other technical issues (basically, this can be whatever you like it to be), and to enforce the privacy policy or any other agreements between the user and them. And finally, the best reason they share personal information: to protect against harm to the rights, property or safety of the company, its partners, users or the public. So basically, they collect as much as they want about you, and they share it with whoever they like for whatever reasons they like. Would anyone use such a service? According to their web page, they have 125 million users.

125 million users accept that their personal data is being harvested, analysed and shared at will by a company that provides “VPN” with no encryption and that accepts the use of “password” as password when signing up for their service.

So, here’s the take-away:

  • Read the privacy policy, look specifically for:
    • What they collect
    • How they collect it
    • What they use the information for
    • With whom they share the information
    • How they secure the information
  • Think about what this means for the things that are important to your privacy. Do you accept that they do the stuff they do?
  • What is the worst-case use of that information if the service provider is hacked? Identity theft? Incriminating cases for blackmail? Political profiling? Credibility building for phishing or other scams? The more information they gather, the worse the potential impact.
  • Finally, never trust someone claiming to sell a security product that obviously does not follow good security practice. No SSL, accepting weak passwords? Take your business elsewhere, it is not worth the risk.

Do SCADA vulnerabilities matter?

Sometimes we talk to people who are responsible for operating distributed control systems. These are sometimes linked up to remote access solutions for a variety of reasons. Still, the same people often do not understand that vulnerabilities are still being found in mature systems, and they often fail to take the typically simple actions needed to safeguard those systems.

For example, a new vulnerability was recently discovered for the Siemens Simatic CP 343-1 family. Siemens has published a description of the vulnerability, together with a firmware update to fix the problem: see Siemens.com for details.

So, are there any CP 343’s facing the internet? A quick trip to Shodan shows that, yes, indeed, there are lots of them. Everywhere, more or less.
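
If you want to repeat the exercise, the official shodan Python library can run the same search programmatically. A sketch, assuming you have an API key (the query string is a guess at the device banner – tune it to what you are after):

```python
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder key

# Hypothetical query; adjust it to the banner the devices actually expose.
results = api.search("Siemens Simatic CP 343-1")
print("Devices found:", results["total"])
for match in results["matches"][:10]:
    country = match.get("location", {}).get("country_name", "n/a")
    print(match["ip_str"], country)
```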

[Screenshot: Shodan search results for internet-facing Simatic CP 343 devices]

Now, if you have a look at the Siemens site, you will see that the patch was available from the release date of the vulnerability, 27 November 2015. What, then, is the average time to deploy patches in a control system environment? There are no patch Tuesdays. In practice, such systems are patched somewhere between monthly and never, with a bias towards never. That means the bad guys have plenty of opportunities to exploit your systems before a patch is deployed.

This simple example reinforces that we should stick to the basics:

  • Know the threat landscape and your barriers
  • Use architectures that protect your vulnerable systems
  • Do not use remote access where it is not needed
  • Reward good security behaviors among employees and sanction bad attitudes
  • Create a risk mitigation plan based on the threat landscape – and stick to it in practice too

 

New security requirements to safety instrumented systems in IEC 61511

IEC 61511 is undergoing revision, and one of the more welcome changes is the inclusion of cyber security clauses. According to a presentation held by functional safety expert Dr. Angela Summers at the Mary Kay Instrument Symposium in January 2015, the following clauses are now included in the new draft (the standard is planned to be issued in 2016):

  • Clause 8.2.4: Description of identified [security] threats for determination of requirements for additional risk reduction. There shall also be a description of measures taken to reduce or remove the hazards.
  • Clause 11.2.12: The SIS design shall provide the necessary resilience against the identified security risks

What does this mean for asset owners? It obviously makes it a requirement to perform a cyber security risk assessment for safety instrumented systems (SIS). Such information asset risk assessments should, of course, be performed in any case for automation and safety systems. This, however, makes keeping security under control necessary for compliance with IEC 61511 – something that is often overlooked today, as described in this previous post. Further, when performing a security study, it is important that human factors and organizational factors are also taken into account – a good technical perimeter defense does not help if the users are not up to the task or lack sufficient awareness of the security problem.

With respect to organizational context, the new Clause 11.2.12 is particularly interesting, as it will require security awareness and organizational resilience planning to be integrated into functional safety management planning. As noted by many others, we have seen a sharp rise in attacks on SCADA systems over the past few years – these security requirements will bring the reliability and security fields together and ensure better overall risk management for important industrial assets. These benefits, however, will only be achieved if practitioners take the full weight of the new requirements on board.

Gas station’s tank monitoring systems open to cyber attacks

Darkreading.com brought news about a project to set up a free honeypot tool for monitoring attacks against gas tank monitoring systems. Researchers have found attacks against such systems at several locations in the United States (read about it @darkreading). Interestingly, many of these systems for monitoring tank levels and the like are internet-facing with no protection whatsoever – not even passwords. Attacks have so far only been of the cyberpunk type – changing a product’s name and the like; no intelligent attacks have been observed.

If we dwell on this situation a bit, we have to consider who would be interested in attacking gas station chains at the SCADA level. Obviously, if you can somehow halt the operation of all gas stations in a country, you limit people’s mobility. In addition, you obviously harm the gas stations’ business. Two of the most obvious attack motivations may thus be “sabotage against the nation as a whole” as part of a larger campaign, and pure criminal activity, for example using ransomware to halt gasoline sales until a ransom is paid. The latter is perhaps the more likely of the two threats.

So – what should the gas stations do? Obviously, some technical barriers are missing when the system is completely open and facing the internet. The immediate solution would be to protect all network traffic with VPN tunneling, and to require a password for accessing the SCADA interfaces. Hopefully this will be done soon. The worrying aspect is that gas stations are not the only installation type with very weak security – there are many potential targets for black hats that are very easy to reach. The more connected our world becomes through the integration of #IoT into our lives, the more important basic security measures become. Hopefully this will be realized not only by equipment vendors, but also by consumers.

The false sense of security people gain from firewalls

Firewalls are important for maintaining security. On that, I suppose, almost all of us agree. A firewall is, however, not the final solution to the cyber security problem. First, there is the chance of bad guys pushing malware over traffic that is actually allowed through the firewall (people visiting bad web sites, for example). Then there is the chance that the firewall itself is set up the wrong way. Finally, there is the possibility that people bring their horrible stuff inside the walled garden on USB sticks, on their own devices hooked up to the network, or similar. People running both IT and automation systems tend to be aware of all of these issues – and probably most users too. On the other hand, maybe not – but they should be aware of them and avoid doing obviously stupid stuff.

Then there is the problem of the social engineer. For a skilled con artist it is easy to trick almost anyone by bribing them, using temptations (drugs, sex, money, fame, prestige, power, etc.) or blackmailing them into helping an evil outsider. For some reason, companies tend to overlook this very human weakness in their defense layers. You normally do not find much mention of social engineering in operating policies, training and risk assessments for corporations running production-critical IT systems, such as industrial control systems. Recent studies have shown that as many as 25% of people receiving phishing e-mails actually click on links to websites with malware downloads. Tricksters are becoming more skilled – and the language in phishing e-mails has improved tremendously since the Viagra e-mail spam period of ten years ago. This can be summarized in a “tricking the dog” drawing.


Stuff that makes organizations easier to penetrate using social engineering includes:

  • Low employee loyalty due to underpay, bad working environment and psychotic bosses
  • Stressed employees and organizations in a state of constant overload
  • Lack of understanding of the production processes and what is critical
  • Insufficient confidentiality about IT infrastructure – allowing systems to be analyzed from the outside
  • Lack of active follow-up of policies and practices such that security awareness erodes over time

Despite this being well known, few organizations actually do something about it. The best defense against the social engineering attack vector may very well be a security awareness focus from the organization, combined with efforts to create a good working environment and happy employees. That should be a win-win for both employees and the employer.

Does safety engineering require security engineering?

Safety critical control systems are developed with respect to reliability requirements, often following a reliability standard such as IEC 61508 or CENELEC EN 50128. These standards put requirements on development practices and activities with regard to creating software that works the way it is intended based on the expected input, and where availability and integrity are of paramount importance. However, these standards do not address information security. Some of the practices required by reliability standards do help in removing bugs and design flaws – which to a large extent also removes security vulnerabilities – but they do not explicitly express such concerns. Reliability engineering is about building trust in the intended functionality of the system. Security is about the absence of unintended functionality.

Consider a typical safety critical system installed in an industrial process, such as an overpressure protection system. Such a system may consist of a pressure transmitter, a logic unit (i.e. a computer) and some final elements. This simple system measures the pressure and transmits it to the computer, typically over a hardwired analog connection. The computer then decides if the system is within a safe operating region, or above a set point for stopping operation. If we are in the unsafe region, the computer tells the final element to trip the process, for example by flipping an electrical circuit breaker or closing a valve. Reliability standards that include software development requirements focus on how development should work to ensure that whenever the sensor transmits a pressure above the threshold, the computer will tell the process to stop. Further, the computer is connected over a network to an engineering station, which is used for such things as updating the algorithm in the control system and changing the threshold limits.
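
To make the loop concrete, here is a toy sketch of that trip logic (purely illustrative – the setpoint and names are invented, and a real SIS implements this in certified hardware and software):

```python
TRIP_SETPOINT_BAR = 50.0  # invented threshold for illustration

def logic_unit(pressure_bar: float) -> str:
    """Toy logic unit: trip the process when pressure leaves the safe region."""
    if pressure_bar >= TRIP_SETPOINT_BAR:
        return "TRIP"  # final element opens the breaker or closes the valve
    return "RUN"

# Transmitter readings fed to the logic unit, e.g. once per scan cycle:
for reading in (42.0, 49.9, 53.2):
    print(reading, logic_unit(reading))
```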

What if someone wants to put the system out of order without anyone noticing? The software’s access control would be a crucial barrier against anyone tampering with the functionality. Reliability standards do not talk about how to avoid weak authentication schemes, although they talk about access management in general. You may very well be compliant with the reliability standard – yet have very weak protection against compromise of the access control. For example, the coder may use a “getuser()”-style call in the authentication part of the software without violating the reliability standard’s requirements. This is a very insecure way of getting user credentials, since it trusts whatever identity the environment reports, and it should generally be avoided. If such a practice is used, a hacker with access to the network could with relative ease gain admin access to the system and change, for example, set points – or worse, recalibrate the pressure sensor to report wrong readings, something that was actually done in the Stuxnet case.
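
To illustrate the class of flaw (in Python rather than C, since the original call is only sketched): Python’s getpass.getuser() reads the username from environment variables, so any local process can claim to be anyone. Its own documentation warns against using it for access control – yet the analogous pattern keeps showing up in authentication code:

```python
import getpass
import os

# getpass.getuser() checks the LOGNAME/USER/LNAME/USERNAME environment
# variables, in order: it reports who the caller *claims* to be.
print(getpass.getuser())                 # e.g. 'operator'

# Any process can rewrite its own environment before "authenticating":
for var in ("LOGNAME", "USER", "LNAME", "USERNAME"):
    os.environ[var] = "admin"
print(getpass.getuser())                 # now 'admin' – no credentials involved
```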

In other words: as long as someone might be interested in harming your operation, your safety system needs security built in – and that is not coming for free through reliability engineering. And there is always someone out to get you – for sport, for money, or just because they do not like you. Managing security is an important part of managing your business risk, so do not neglect this issue while worrying only about the reliability of intended functionality.