Do you consider security when purchasing control system equipment?

SCADA security has received a great deal of attention lately, both in mainstream media and within the security community. There are good reasons for this, as a growing number of manufacturing and energy companies are seeing information security threats become more important. SANS recently issued a whitepaper on industrial control systems and security, based on a survey of professionals across all continents. The whitepaper contains many interesting results; these are three of the most notable findings:

  • 15% of respondents who had identified security breaches found that current contractors, consultants or suppliers were responsible for the breach
  • Only 35% of respondents actively include security requirements in their purchasing process
  • The number of breaches in industrial control systems seems to be increasing (9% of respondents had identified such breaches in the 2014 survey vs. 17% in the 2015 survey)

It is not very surprising that external parties with legitimate access to SCADA networks are the source of many actual breaches. These parties are normally outside the asset owner’s management control, and they may have very different policies, awareness and protections at their end. One notable, much publicized example was the Target breach of late 2013. In that case, the attack vector was a phishing e-mail sent to representatives of one of Target’s HVAC vendors. The vendor connected to Target’s IT network over a VPN tunnel, unknowingly carrying the malware into the network. The advanced attack managed to siphon off credit card data belonging to millions of customers, uploading it to an FTP server located in Russia. The media have described this attack in detail and from many angles, and it is an interesting case study on advanced persistent threats.

This is obviously a cause for concern: how do you protect yourself against attack vectors that use your vendors as the point of entry? Of course, managing credentials and granting access only to the systems the vendor actually needs is a good starting point. But what can you do to influence what they do on “the other side” of the contract interface?

First of all, security awareness training should not be reserved for engineers and operators. The same type of awareness training, and the same understanding of the business risk related to cyber attacks, should be provided to your purchasers. From reliability engineering we have long known that obtaining items that comply with requirements can be challenging; if the requirements are not even articulated, no compliance can be expected. Including security requirements in purchasing and contracts should therefore be an important priority, and it is probably a good idea to anchor this in your company’s security policy.

The next obvious question is perhaps: what kind of requirements should I put to the vendor? This depends on your situation and the risk to your assets, but the requirements should cover technology, after-sales updating and service, security practices, and awareness expectations for companies providing services. If your HVAC vendor has low security awareness, its people are more likely to fall for a phishing attempt, putting your control system assets at risk. Due diligence should thus include cyber security requirements; this is really no different from the other quality and risk management controls we normally integrate into our purchasing processes.

Is security management still following the “whack-a-mole” philosophy of yesteryear?

Anyone following the news can see that cyber security is a growing problem for society, from private individuals to government institutions to corporations small and large. Traditional defense of information assets has followed a simplistic perimeter-defense way of thinking, with an incident response team additionally responsible for finding and fixing problems after they have occurred. Modern security thinking, emerging over the last decade, has largely abandoned this approach because it is very inefficient at protecting information assets. The more holistic view of security management is often referred to as “cyber intelligence”, as in this article at Darkreading.com. It has emerged by combining developments in software security, criminology, military counter-insurgency tactics and risk management. The change was summed up nicely at the last RSA security summit in one sentence:

There is no perimeter.

The meaning of this is that setting up a defense consisting of firewalls and anti-virus protection is a good thing to do, but by no means a solution to all problems; even with these kinds of technologies present, breaches are inevitable. Still, many organizations follow the whack-a-mole way of thinking:

  • Invest in antivirus and firewall tools
  • Buy an intrusion detection system
  • If a breach is discovered, disconnect the computer from the network and re-image it to remove the infection

There are many reasons why this does not work. Here are three of them. First, viruses today often mutate from one infection to the next, making signature-based AV more or less useless against advanced malware. Second, most attacks live on the network for an extended time before delivering a payload, essentially invisible to both users and automated network intrusion detection tools. Finally, there are always people with legitimate access to the information assets who can be influenced to initiate an attack, knowingly or not (typically referred to as social engineering). Basically, you don’t know what hit you until it is too late.
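
To see why exact signature matching collapses under mutation, consider the toy sketch below, written in C. It is purely illustrative: the “signature” is a trivial checksum rather than a real AV signature, and the payload string is invented. The weakness it demonstrates is the one mutating malware exploits: a single flipped bit is enough to defeat an exact-match signature.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy "signature": a trivial checksum over the payload bytes. Real AV
     * signatures are more sophisticated, but exact-match schemes share the
     * weakness shown here. */
    static uint32_t checksum(const unsigned char *data, size_t len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum = sum * 31u + data[i];
        return sum;
    }

    int main(void)
    {
        unsigned char payload[] = "malicious payload v1";
        uint32_t known_bad = checksum(payload, sizeof payload);

        payload[0] ^= 0x01;  /* the "mutation": one flipped bit */
        uint32_t mutated = checksum(payload, sizeof payload);

        printf("signature match after mutation: %s\n",
               mutated == known_bad ? "yes (detected)" : "no (evaded)");
        return 0;
    }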

The worst thing is probably that there is no direct cure. The good news is that good risk management and defense strategies can make your systems much harder targets and help you cope with the threat far better. Basic risk management thinking gets you a long way: the starting point is identifying potential weaknesses, threats and vulnerabilities in all parts of your information system lifecycle. This applies even during development (if the software or hardware is yours) or procurement; you need to assess the dangers and prepare mitigation plans. A mitigation plan should not be simply reactive (whack-the-mole) but proactive, asking “how can we minimize the risk to a level we consider acceptable, taking both probabilities and consequences into account?” To have an informed opinion on this, you need to determine not only the potential impact of an attack but also its credibility. That requires reviewing who the attackers are, how the outside world affects the situation, what the attackers’ motivations and capabilities are, whether there is a cost-benefit trade-off for them, and so on. It is from this view that the term “cyber intelligence” comes. Having such information at hand, together with a lifecycle-oriented mitigation plan, puts you in a position to build a resilient organization that is not easily played off the field by the bad guys.
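
As a minimal sketch of this kind of probability-and-consequence screening, here is a toy example in C. Everything in it is an assumption made for illustration – the 1–5 ordinal scales, the acceptance threshold and the scenario names are invented, not taken from any standard or survey.

    #include <stdio.h>

    /* A threat scenario ranked on simple ordinal scales. */
    struct scenario {
        const char *name;
        int likelihood;   /* 1 (rare) .. 5 (almost certain) */
        int consequence;  /* 1 (minor) .. 5 (catastrophic)  */
    };

    int main(void)
    {
        const int acceptable_risk = 8;  /* assumed acceptance criterion */
        const struct scenario scenarios[] = {
            { "phishing of service vendor",         4, 4 },
            { "USB malware on engineering station", 3, 5 },
            { "opportunistic web defacement",       3, 1 },
        };
        const size_t n = sizeof scenarios / sizeof scenarios[0];

        /* Screen each scenario: risk = likelihood x consequence. */
        for (size_t i = 0; i < n; i++) {
            int risk = scenarios[i].likelihood * scenarios[i].consequence;
            printf("%-36s risk=%2d %s\n", scenarios[i].name, risk,
                   risk > acceptable_risk ? "-> mitigate proactively"
                                          : "-> accept and monitor");
        }
        return 0;
    }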

New security requirements for safety instrumented systems in IEC 61511

IEC 61511 is undergoing revision, and one of the more welcome changes is the inclusion of cyber security clauses. According to a presentation given by functional safety expert Dr. Angela Summers at the Mary Kay Instrument Symposium in January 2015, the following clauses are now included in the new draft (the standard is planned for issue in 2016):

  • Clause 8.2.4: Description of identified [security] threats for determination of requirements for additional risk reduction. There shall also be a description of measures taken to reduce or remove the hazards.
  • Clause 11.2.12: The SIS design shall provide the necessary resilience against the identified security risks.

What does this mean for asset owners? It obviously makes it a requirement to perform a cyber security risk assessment for the safety instrumented systems (SIS). Such information asset risk assessments should, of course, be performed in any case for automation and safety systems; what is new is that keeping security under control now becomes necessary for compliance with IEC 61511 – something that is often overlooked today, as described in this previous post. Further, when performing a security study, it is important that human and organizational factors are also taken into account: a good technical perimeter defense does not help if the users are not up to the task and lack sufficient awareness of the security problem.

With respect to organizational context, the new Clause 11.2.12 is particularly interesting, as it will require security awareness and organizational resilience planning to be integrated into functional safety management planning. As many others have noted, we have seen a sharp rise in attacks on SCADA systems over the past few years; these security requirements will bring the reliability and security fields together and ensure better overall risk management for important industrial assets. These benefits, however, will only be achieved if practitioners take the full weight of the new requirements on board.

Gas stations’ tank monitoring systems open to cyber attacks

Darkreading.com brought news about a project to set up a free honeypot tool for monitoring attacks against gas tank monitoring systems. Researchers have found attacks against such systems at several locations in the United States (read about it @darkreading). Interestingly, many of these systems for monitoring tank levels and the like are internet-facing with no protection whatsoever – not even passwords. Attacks have so far only been of the vandalism type – changing a product’s name and the like; no intelligent attacks have been observed.

If we dwell on this situation a bit, we have to consider who would be interested in attacking gas station chains at the SCADA level. Obviously, if you can somehow halt the operation of all gas stations in a country, you limit people’s mobility. In addition, you harm the gas stations’ business. The two most obvious attack motivations may thus be sabotage against the nation as a whole, as part of a larger campaign, and pure criminal activity, for example using ransomware to halt gasoline sales until a ransom is paid. The latter is perhaps the more likely of the two threats.

So what should the gas stations do? Obviously, some technical barriers are missing when the system is completely open and internet-facing. The immediate solution would be to protect all network traffic with VPN tunneling and to require a password for accessing the SCADA interfaces. Hopefully this will be done soon. The worrying aspect is that gas stations are not the only installation type with very weak security; there are many potential targets that are very easy for black hats to reach. The more connected our world becomes through the integration of #IoT into our lives, the more important basic security measures become. Hopefully this will be realized not only by equipment vendors, but also by consumers.

The false sense of security people gain from firewalls

Firewalls are important for maintaining security; on that, I suppose almost all of us agree. They are, however, not the final solution to the cyber security problem. First, there is the chance of bad guys pushing malware over traffic that is actually allowed through the firewall (people visiting bad websites, for example). Then there is the chance that the firewall itself is configured incorrectly. Finally, there is the possibility that people bring their horrible stuff inside the walled garden on USB sticks, on their own devices hooked up to the network, or similar. People running both IT and automation systems tend to be aware of all of these issues – and probably most users are too. Or maybe not; but they should be aware of them, and avoid doing obviously stupid things.

Then there is the problem of the social engineer. For a skilled con artist it is easy to trick almost anyone by bribing them, using temptations (drugs, sex, money, fame, prestige, power, etc.) or blackmailing them into helping an evil outsider. For some reason, companies tend to overlook this very human weakness in their defense layers. You normally do not find much mention of social engineering in the operating policies, training and risk assessments of corporations running production-critical IT systems, such as industrial control systems. Recent studies have shown that as many as 25% of people receiving phishing e-mails actually click on links to websites with malware downloads. Tricksters are becoming more skilled, and the language in phishing e-mails has improved tremendously since the Viagra spam period of ten years ago.

Stuff that makes organizations easier to penetrate using social engineering includes:

  • Low employee loyalty due to underpayment, a bad working environment and psychotic bosses
  • Stressed employees and organizations in a state of constant overload
  • Lack of understanding of the production processes and what is critical
  • Insufficient confidentiality about IT infrastructure, allowing systems to be analyzed from the outside
  • Lack of active follow-up of policies and practices, such that security awareness erodes over time

Although this is well known, few organizations actually do something about it. The best defense against the social engineering attack vector may very well be an organizational focus on security awareness, combined with efforts to create a good working environment and happy employees. That should be a win-win for both employees and employer.

Does safety engineering require security engineering?

Safety critical control systems are developed with respect to reliability requirements, often following a reliability standard such as IEC 61508 or CENELEC EN 50128. These standards put requirements on development practices and activities aimed at creating software that works the way it is intended based on the expected input, where availability and integrity are of paramount importance. However, these standards do not address information security. Some of the practices required by reliability standards do help remove bugs and design flaws – which to a large extent also removes security vulnerabilities – but they do not explicitly express such concerns. Reliability engineering is about building trust in the intended functionality of the system. Security is about the absence of unintended functionality.

Consider a typical safety critical system installed in an industrial process, such as an overpressure protection system. Such a system may consist of a pressure transmitter, a logic unit (i.e. a computer) and some final elements. This simple system measures the pressure and transmits it to the computer, typically over a hardwired analog connection. The computer then decides whether the system is within the safe operating region or above a set point for stopping operation. If we are in the unsafe region, the computer tells the final element to trip the process, for example by flipping an electrical circuit breaker or closing a valve. Reliability standards that include software development requirements focus on ensuring that whenever the sensor transmits a pressure above the threshold, the computer will tell the process to stop. In addition, the computer is connected over a network to an engineering station, which is used for tasks such as updating the algorithm in the control system and changing the threshold limits.
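
To make the trip logic concrete, here is a minimal sketch in C. It is hypothetical: the setpoint value, the function names and the hardcoded “measurement” are stand-ins for the real transmitter input and final element interfaces, invented for illustration only.

    #include <stdbool.h>
    #include <stdio.h>

    #define TRIP_SETPOINT_BAR 150.0  /* assumed trip threshold */

    /* Stand-in for sampling the hardwired analog input from the transmitter. */
    static double read_pressure_bar(void) { return 155.2; }

    /* Stand-in for commanding the final element (breaker or valve). */
    static void trip_process(void) { puts("final element: trip"); }

    int main(void)
    {
        double pressure = read_pressure_bar();
        bool unsafe = pressure > TRIP_SETPOINT_BAR;  /* above the set point? */
        if (unsafe)
            trip_process();  /* e.g. open the breaker or close the valve */
        return 0;
    }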

What if someone wants to put the system out of order without anyone noticing? The software’s access control would be a crucial barrier against anyone tampering with the functionality. Reliability standards do not describe how to avoid weak authentication schemes, although they talk about access management in general. You may very well be compliant with the reliability standard yet have very weak protection against compromise of the access control. For example, the coder may rely on a call like “getlogin()” in C in the authentication part of the software without violating the reliability standard’s requirements. This is a very insecure way of establishing a user’s identity, as it merely trusts the login name the session reports, and it should generally be avoided. If such a practice is used, a hacker with access to the network could with relative ease gain admin access to the system and change, for example, set points – or worse, recalibrate the pressure sensor to report wrong readings, something that was actually done in the Stuxnet case.
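
As a sketch of how weak such a scheme can be, consider the following C fragment. It is hypothetical – the user name and surrounding program are invented for illustration – but the core call is real: it “authenticates” by trusting the login name the session reports through POSIX getlogin(), so no secret is ever verified.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Weak: trusts the session-reported login name instead of verifying a
     * credential. Whoever controls the session can often influence this
     * value, and no password or key is ever checked. */
    static int is_authorized_weak(void)
    {
        const char *user = getlogin();  /* may be spoofed, may be NULL */
        return user != NULL && strcmp(user, "engineer") == 0;
    }

    int main(void)
    {
        if (is_authorized_weak())
            puts("set point change accepted");  /* no credential checked */
        else
            puts("access denied");
        return 0;
    }

A real engineering interface would instead verify an actual credential, and preferably authenticate the engineering station itself to the logic unit.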

In other words, as long as someone might be interested in harming your operation, your safety system needs security built in – and that does not come for free through reliability engineering. And there is always someone out to get you: for sport, for money, or simply because they do not like you. Managing security is an important part of managing your business risk, so do not neglect this issue while worrying only about the reliability of intended functionality.