CCSK Domain 1: Cloud Computing Concepts and Architecture

Recently I participated in a one-day class on the curriculum for the “Certificate of Cloud Security Knowledge”, held by Peter HJ van Eijk in Trondheim as part of the conference Sikkerhet og Sårbarhet 2019 (Norwegian for “Security and Vulnerability 2019”). The workshop was interesting, and the instructor was good at creating interactive discussions – making it much better than the typical PowerPoint overdose of commercial professional training sessions. There is a certification exam that I have not yet taken, and I decided I should document my notes on my blog; perhaps others can find some use for them too.

The CCSK exam closely follows a document made by the Cloud Security Alliance (CSA) called “CSA Security Guidance for Critical Areas of Focus in Cloud Computing v4.0” – a document you can download for free from the CSA webpage. They also lean on ENISA’s “Cloud Computing Risk Assessment”, which is also a free download.

Cloud computing isn’t about who owns the compute resources (someone else’s computer) – it is about providing scale and cost benefits through rapid elasticity, self-service, shared resource pools and a shared security responsibility model.

The way I’ll do these blog posts is that I’ll first share my notes, and then give a quick comment on what the whole thing means from my point of view (which may not really be that relevant to the CCSK exam if you came here for a shortcut to that).

Introduction to D1 (Cloud Concepts and Architecture)

Domain 1 contains 4 sections:  

  • Defining cloud computing 
  • The cloud logical model 
  • Cloud conceptual, architectural and reference model 
  • Cloud security and compliance scope, responsibilities and models 

NIST definition of cloud computing: a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. 

A Cloud User is the person or organization requesting computational resources. The Cloud Provider is the person or organization offering the resources. 

Key techniques to create a cloud:  

  • Abstraction: we abstract resources from the underlying infrastructure to create resource pools  
  • Orchestration: coordination of delivering resources out of the pool on demand.  

Clouds are multitenant by nature. Consumers are segregated and isolated but share resource pools.  

Cloud computing models 

The CSA’s foundational model of cloud computing is the NIST model. A more in-depth reference model is taken from ISO/IEC. The guidance talks mostly about the NIST model and doesn’t dive into the ISO/IEC model; the NIST model is probably sufficient for most definition needs.

Cloud computing has 5 characteristics:

  1. Shared resource pool (compute resources in a pool that consumers can pull from)
  2. Rapid elasticity (can scale up and down quickly)
  3. Broad network access
  4. On-demand self-service (management plane, APIs)
  5. Measured service (pay-as-you-go)

Cloud computing has 3 service models:

  • Software as a Service (SaaS): like Cybehave or Salesforce
  • Platform as a Service (PaaS): like WordPress or AWS Elastic Beanstalk
  • Infrastructure as a Service (IaaS): like VMs running in Google Cloud

Cloud computing has 4 deployment models:

  • Public Cloud: pool shared by anyone
  • Private Cloud: pool shared within an organization
  • Hybrid Cloud: connection between two clouds, commonly used when an on-prem datacenter connects to a public cloud
  • Community Cloud: pool shared by a community, for example insurance companies that have formed some form of consortium

Models for discussing cloud security

The CSA document discusses multiple types of models in a somewhat incoherent manner. The types of models it mentions can be categorized as follows:

  • Conceptual models: descriptions to explain concepts, such as the logical model from the CSA.  
  • Controls models: like CCM 
  • Reference architectures: templates for implementing security controls 
  • Design patterns: solutions to particular problems 

The document also outlines a simple cloud security process model 

  • Identify security and compliance requirements, and existing controls 
  • Select provider, service and deployment models 
  • Define the architecture 
  • Assess the security controls 
  • Identify control gaps 
  • Design and implement controls to fill gaps 
  • Manage changes over time 
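As a toy illustration, the “identify control gaps” step in the process model above is essentially a set difference between required and existing controls. A minimal sketch in Python – the control names are made-up placeholders, not taken from the CSA guidance:

```python
# Sketch of the "identify control gaps" step in the CSA cloud security
# process model. Control names are illustrative placeholders only.

# Controls demanded by security and compliance requirements
required_controls = {"encryption-at-rest", "mfa", "audit-logging", "network-segmentation"}

# Controls already assessed as present for the chosen provider/architecture
existing_controls = {"mfa", "audit-logging"}

# Gap analysis: requirements not met by any existing control
control_gaps = required_controls - existing_controls

print(sorted(control_gaps))  # ['encryption-at-rest', 'network-segmentation']
```

The gaps then feed directly into the next step: design and implement controls to fill them, and revisit the sets as requirements and architecture change over time.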

The CSA logical model

This model explains 4 “layers” of a cloud environment and introduces some “funny words”:

  • Infrastructure: the core components in computing infrastructure, such as servers, storage and networks 
  • Metastructure: protocols and mechanisms providing connections between infrastructure and the other layers 
  • Infostructure: The data and information (database records, file storage, etc) 
  • Applistructure: The applications deployed in the cloud and the underlying applications used to build them. 

The key difference between traditional IT and cloud is the metastructure. Cloud metastructure contains the management plane components.  

Another key feature of cloud is that each layer tends to double. For example, the physical infrastructure is managed by the cloud provider, but the cloud consumer will establish a virtual infrastructure that also needs to be managed (at least in the case of IaaS). 

Cloud security scope and responsibilities 

The responsibility for security domains maps to the access the different stakeholders have to each layer in the architecture stack.  

  • SaaS: the cloud provider is responsible for perimeter, logging and application security; the consumer may only have access to provision users and manage entitlements 
  • PaaS: the provider is typically responsible for platform security and the consumer is responsible for the security of the solutions deployed on the platform. Configuring the offered security features is often left to the consumer.  
  • IaaS: cloud provider is responsible for hypervisors, host OS, hardware and facilities, consumer for guest OS and up in the stack.  

The shared responsibility model leaves us with two focus areas:  

  • Cloud providers should clearly document internal security management and security controls available to consumers.  
  • Consumers should create a responsibility matrix to make sure controls are followed up by one of the parties 
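A responsibility matrix can be as simple as a lookup table. Below is a minimal Python sketch; the domains and assignments are illustrative assumptions condensed from the service-model descriptions above, not an authoritative CSA mapping:

```python
# Illustrative responsibility matrix: which party covers each security
# domain under each service model. Entries are simplified examples.

matrix = {
    "SaaS": {"application security": "provider", "user provisioning": "consumer"},
    "PaaS": {"platform security": "provider", "deployed solutions": "consumer"},
    "IaaS": {"hypervisor/hardware": "provider", "guest OS and up": "consumer"},
}

def responsible_party(service_model: str, domain: str) -> str:
    """Look up who is accountable for a domain, defaulting to 'unassigned'
    so gaps in the matrix become visible instead of silently ignored."""
    return matrix.get(service_model, {}).get(domain, "unassigned")

print(responsible_party("IaaS", "guest OS and up"))  # consumer
print(responsible_party("SaaS", "patching"))         # unassigned -> a gap to resolve
```

The useful property of the default value is that any control nobody has claimed shows up explicitly, which is exactly the "who does what" gap the shared responsibility model is meant to expose.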

Two compliance tools exist from the CSA and are recommended for mapping security controls:  

  • The Consensus Assessment Initiative Questionnaire (CAIQ) 
  • The Cloud Controls Matrix (CCM) 

#2cents

This domain is introductory and provides some terminology for discussing cloud computing. The key aspects from a risk management point of view are:

  • Cloud creates new risks that need to be managed, especially as it introduces more companies involved in maintaining security of the full stack compared to a fully in-house-managed stack. Requirements, contracts and audits become important tools.
  • The NIST model is more or less universally used in cloud discussions in practice. The service models are known to most IT practitioners, at least on the operations side.
  • The CSA guidance correctly designates the “metastructure” as the new kid on the block. The practical incarnation of this is APIs and console access (e.g. gcloud at the API level and Google Cloud Console at the “management plane” level). From a security point of view this means that maintaining the security of local tools and credentials used against the control plane becomes very important, as well as identity and access management for the control plane in general.

In addition to the “who does what” problem that can occur with a shared security model, the self-service and fast-scaling properties of cloud computing often lead to “new and shiny” being pushed faster than security is aware of. An often overlooked part of “pushing security left” is that we also need to push both knowledge and accountability together with the ability to access the management plane (or parts of it through API’s or the cloud management console).

Does cyber insurance make sense?

Insurance relies on pooled risk: when a business is exposed to a risk it feels is not manageable with internal controls, the risk can be transferred to an insurer through an insurance contract. For events that are unlikely to hit a very large number of insurance customers at once, this model makes sense. Pooling the risk allows the insurer to earn investment returns on the premiums paid by the customers, and the customers get their financial losses covered in case of a claim situation. This works very well in many cases, but insurers will naturally try to limit their liabilities through “omissions clauses” – things that are not covered by the insurance policy. The omissions will typically include catastrophic systemic events that the insurance pool would not have the means to cover because a large number of customers would be hit simultaneously. They will also include conditions with the individual customer that void coverage – often referred to as pre-existing conditions. A typical example of the former is damage due to acts of war or natural disasters; for these events, the insured would have to buy extra coverage, if it is offered at all. An example of the latter omission type, the pre-existing condition, would be diseases the insured had before entering into a health insurance contract.

Risk pooling works well for protecting the solvency of insurers when issuing policies covering rare, independent events with high individual impact – but it is harder to design for events where systemic risk is involved. Should insurers cover the effects of large-scale virus infections like WannaCry under normal cyber-insurance policies? Can they? What would the aggregate financial cost of a coordinated cyberattack be on a society when major functions collapse – such as the Petya case in Ukraine? Can insurers cover the cost of such events?

How does this translate into cyber insurance? There are several interesting aspects to think about, in both omissions categories. Let us start with the systemic risk: what happens to the insurance pool if all customers file claims simultaneously? Each claim typically exceeds the premiums paid by any single customer. Therefore, a cyberattack that spreads to large portions of the internet is hard to insure while keeping the insurer’s risk under control. Take for example the WannaCry ransomware attack in May; within a day more than 200,000 computers in 150 countries were infected. The Petya attack that followed in June caused similar reactions, but the number of infected computers is estimated to be much lower. While WannaCry still looks like a poorly implemented cybercrime campaign intended to make money for the attacker, the Petya ransomware seems to have been a targeted cyberweapon used against Ukraine; the rest was most likely collateral damage. But for Ukrainian companies, the government and computer users this was a major attack: it took down systems belonging to critical infrastructure providers, halted airport operations, affected the government and took hold of payment terminals in stores; the attack was a major threat to the entire society. What could a local insurer have done if it had covered most of those entities against any and all cyberattacks? It would most likely not have been able to pay out, and would have gone bankrupt.

A topic that came up in security forums after the WannaCry attack was the “pre-existing condition” in cyber insurance. Many policies include “human error” in the omissions clauses – basically saying that you are not covered if you are breached through a phishing e-mail. Some policies also include “unpatched systems” as an omission clause: if you have not patched, you are not covered. Not all policies are like that, and underwriters will typically gauge a firm’s cybersecurity maturity before entering into an insurance contract covering conditions that are heavily influenced by security culture. These are risks that are hard to include in quantitative risk calculations; the data are simply not there.

Insurance is definitely a reasonable control for mature companies, but there is little value in paying premiums if the business doesn’t satisfy the omissions clauses. For small businesses it pays off to focus on the fundamentals first, and then add a reasonable insurance policy.

For insurance companies it is important to find reasonable risk-pooling measures to better cover large-scale infections like WannaCry. Because this is a serious threat to many businesses, the absence of meaningful insurance products will hamper economic growth overall. It is also important for insurers to get a grasp on individual omissions clauses, because in cyber risk management the thinking around the “pre-existing condition” is flawed: security practice and culture are dynamic and evolving, which means that coverage omissions should be based on the current state rather than a survey conducted prior to policy renewal.

 

Handling suppliers with low security awareness

Supply chain risk – in cyberspace

Cyber supply chain risk is a difficult area to manage. According to NIST, 80% of all breaches originate in the supply chain – a number given by Jon Boyens in a presentation at the 2016 RSA Conference – meaning it should be a definite priority of any security-conscious organization to try and manage that risk. A lot of big companies have been breached through suppliers with poor information security practices, for example Target and Home Depot.

Your real attack surface includes the people you do business with – and those that they do business with again. And this is not all within your span of control!

Most companies do not have any form of cybersecurity screening of their suppliers. Considering the facts above, this seems like a very bad idea. Why is this so?

A lot of people think cybersecurity is difficult. The threat landscape itself is difficult to assess unless you have the tools and knowledge to do so. Most companies don’t have that sort of competence in-house, and they are often unaware that they are lacking know-how in a critical risk governance area.

Why are suppliers important when it comes to cybersecurity? The most important factor is that you trust your suppliers, and you may already have shared authentication secrets with them. Consider the following scenarios:

  1. Your HVAC service provider has VPN access to your network in order to troubleshoot the HVAC system in your office. What if hackers gain control over your HVAC vendor’s computers? Then they also have access to your network.
  2. A supplier that you frequently communicate with has been hacked. You receive an email from one of your contacts in this firm, asking if you can verify your customer information by logging into their web based self-service solution. What is the chance you would do that, provided the web page looks professional? You would at least click the link.
  3. You are discussing a contract proposal with a supplier. After emailing back and forth about the details for a couple of weeks he sends you a download link to proposed contract documents from his legal department. Do you click?

All of these are real cases, and all of them were successful for the cybercriminals wanting access to a bigger corporation’s network. The technical set-up was not exploited; in the HVAC case the supplier’s login credentials were stolen and abused (this was the Target attack, which exposed roughly 40 million payment cards along with personal data for some 70 million customers). In the other two cases an existing trust relationship was used to increase the credibility of a spear-phishing attack.

To counter social engineering, most companies offer “cybersecurity awareness training”. That can be helpful, and it can reduce how easy it is to trick employees into performing dangerous actions. But when the criminals leverage an existing trust relationship, this kind of training is unlikely to have much effect. Further, your awareness training probably covers only your own organization. Through established buyer-supplier relationships the attack surface is not limited to your own organization; it expands to include all the organizations you do business with – and the people they do business with in turn. This quickly grows into a very large network. You obviously cannot manage the whole network, but what you can do is evaluate the risk of using a particular supplier, and use that to determine which security controls to apply to the relationship with that supplier.

Screening the contextual risk of supplier organizations

What, then, determines the supplier risk level? Obviously internal affairs within the supplier’s organization are important, but at least in the early screening of potential suppliers this information is not available, and the supplier may be reluctant to reveal too much about his or her company. This means you can only evaluate the external context of the supplier. As it turns out, there are several indicators you can use to gauge the likelihood of a supplier breach. The main factors include:

  • Main locations of the supplier’s operations, including corporate functions
  • The size of the company
  • The sector the company operates in

In addition to these factors, which can help determine how likely the organization is to be breached, you should consider what kind of information about your company the supplier would possess. Obviously, somebody with VPN login credentials to your network would be of more concern than a restaurant where you order overtime food for your employees. Of special concern should be suppliers or partners with access to critical business secrets, with login credentials, or with access to critical application programming interfaces.

Going back to the external context of the supplier: why is the location of the supplier’s operations important? It turns out that the number of malware campaigns a company is exposed to is strongly correlated with the political risk of the countries where the firm operates. Firms operating in countries with high crime rates, significant corruption and dubious attitudes to democracy and freedom of speech also tend to be attacked more from the outside. They are also more likely to run unlicensed software, e.g. pirated versions of Windows, leaving them more vulnerable to those attacks.

The size of the company is also an interesting indicator. Smaller companies, i.e. fewer than 250 employees, see a lower fraction of their incoming communication being malicious. At the same time, the defenses of these companies are often weak; many of them lack processes for managing information security, and a lot of companies in this group have no internal cybersecurity expertise.

Medium-sized companies (250–500 employees) receive more malicious communications. These companies often lack formal cybersecurity programs too, and competence may also be missing here, especially on the process side of the equation. For example, few companies in this category have established an information security management system.

Larger companies still receive large amounts of malicious communications, but they tend to have better defense systems, including management processes. Small and medium-sized businesses therefore pose a higher threat for value-chain exploitation than larger, more established companies do.

Also, the sector the supplier operates in is a determining factor for the external context risk. Sectors that are particularly exposed to cyberattacks include:

  • Retail
  • Public sector and governmental agencies
  • Business services (consulting companies, lawyers, accountants, etc.)

Here the topic of “what information do I share” comes in. You are probably not very likely to share internal company data with a retailer unless you are part of the retailer’s supply chain. If you are, then you should be thinking about some controls, especially if the retailer is a small or medium-sized business.

For many companies the “business services” category is of key interest. These are service providers that you would often share critical information with. Consulting companies gain access to strategic information and your IT network, and get to know many key stakeholders in your company. Lawyers obviously have access to confidential information. Accountants are trusted, have access to information and perhaps also to your ERP systems. Business service providers often get high levels of access, and they are often targeted by cybercriminals and other hackers; this is good reason to be vigilant with managing security in the buyer-supplier relationship.
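The contextual factors discussed above (location, size, sector) plus the level of access granted can be combined into a coarse screening score. The sketch below is illustrative only: the weights, categories and thresholds are my own assumptions for demonstration, not taken from any published model.

```python
# Coarse contextual risk score for a supplier, combining the external
# factors from the text plus the access granted. All scores and
# thresholds are illustrative assumptions.

LOCATION_RISK = {"low": 1, "medium": 2, "high": 3}   # political risk of operating countries
SIZE_RISK = {"large": 1, "medium": 3, "small": 2}    # mid-size firms score worst per the text
SECTOR_RISK = {"other": 1, "retail": 2, "government": 2, "business services": 3}
ACCESS_RISK = {"none": 0, "data": 2, "credentials": 3}

def supplier_risk(location: str, size: str, sector: str, access: str) -> str:
    """Sum the factor scores and bucket the supplier into a risk category."""
    score = (LOCATION_RISK[location] + SIZE_RISK[size]
             + SECTOR_RISK[sector] + ACCESS_RISK[access])
    if score >= 8:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

print(supplier_risk("high", "medium", "business services", "credentials"))  # high
print(supplier_risk("low", "large", "other", "none"))                       # low
```

The point of such a model is not precision but consistency: every supplier gets screened against the same contextual factors, and the resulting category drives which controls you apply.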

Realistic assessments require up to date threat intelligence

There are more factors that come into play when selecting a supplier for your firm than security. Say you have an evaluation scheme that takes into account:

  • Financials
  • Capacity
  • Service level
  • And now… cybersecurity

If the risk of using a supplier is considered unreasonably high, you may end up selecting a supplier that is more expensive, or offers a lower level of service, than the “best” supplier with a high perceived risk. Therefore it becomes important that the coarse contextual risk assessment is based on up-to-date threat models, even for the macro indicators discussed above.

Historical data show that the threat impact of company size remains relatively stable over time. Big companies tend to have better governance than small ones. On the positive side, smaller companies tend to be more interested in cooperating on risk governance than bigger players are. The size factor, in other words, is usually not problematic to keep up to date in a contextual threat model.

Political risk is more volatile. Political changes in countries can happen quickly, and their effects can be subtle but important for the cybersecurity context. This factor depends on up-to-date threat intelligence, primarily available from open sources. When you establish a contextual threat model, you should therefore take care to keep it current with political risk factors that change at least quarterly, and can even change abruptly in the case of revolutions, terror attacks or other major events causing social unrest. A slower stream is legislative processes that affect how businesses deal with cyber threats, also at the governmental level. Key uncertainties in this field today include the access of intelligence organizations to communications data, and the evolution of privacy laws.

The sector influence on cyber threat levels also changes dynamically. Here threat intelligence is not as easy to access, but some open sources do exist. Open intel sources that can be used to adjust the assessment of business sector risk are:

  • General business news and financial market trends
  • Threat intelligence reports from cybersecurity firms
  • Company annual reports
  • Regulations affecting the sector, as also mentioned under political risk
  • Vulnerability reports for business-critical software important to each sector

In addition to this, less open sources of interest would be:

  • Contacts working within the sectors with access to trend data on cyber threats (e.g. sysadmins in key companies’ IT departments)
  • Sensors in key networks (often operated by government security organizations), sharing of information typically occurs in CERT organizations

Obviously, staying on top of the threat landscape is a challenging undertaking but failing to do so can lead to weak risk assessments that would influence business decisions the wrong way. Understanding the threat landscape is thus a business investment where the expected returns are long-term and hard to measure.

How to take action

How should you, as a purchaser, use this information about supplier threats? Consider the situation where you have access to a sound contextual threat model and are able to sort suppliers into broad risk categories, e.g. low, medium and high risk. How can you use that information to improve your own risk governance and reduce your exposure to supply chain cyber threats?

First, you should establish a due diligence practice for cybersecurity, requiring more scrutiny in high-risk situations than in low-risk ones. Here is one way to categorize the governance practices for supply chain cyber risk – but this is only a suggested structure; the actual activities should be adapted to your company’s needs and capabilities.

| Practice | Low risk supplier | Medium risk supplier | High risk supplier |
| --- | --- | --- | --- |
| Require review of supplier’s policy for information security | No | Yes | Yes |
| State minimum supplier security requirements (antivirus, firewalls, updated software, training) | Yes | Yes | Yes |
| Require right to audit supplier for cybersecurity compliance | No | To be considered | Yes |
| Establish cooperation for incident handling | No | To be considered | Yes |
| Require external penetration test including social engineering prior to and during business relationship | No | No | To be considered |
| Agree on communication channels for security incidents related to buyer-supplier relationship | Yes | Yes | Yes |
| Require ISO 27001 or similar certification | No | No | To be considered |
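A table like the one above translates naturally into a lookup that can be embedded in a procurement workflow. A minimal sketch, with practice names shortened and the mapping simplified from the table (anything marked “To be considered” is treated here as required for illustration):

```python
# Map a supplier's risk category to the due diligence practices to apply.
# Practice names are shortened; the mapping is a simplified illustration.

PRACTICES = {
    "low":    {"minimum security requirements", "incident communication channels"},
    "medium": {"minimum security requirements", "incident communication channels",
               "policy review"},
    "high":   {"minimum security requirements", "incident communication channels",
               "policy review", "right to audit", "incident handling cooperation"},
}

def required_practices(risk_category: str) -> set:
    """Return the set of governance practices for a given risk category."""
    return PRACTICES[risk_category]

# Practices that are added when a supplier moves from medium to high risk
print(sorted(required_practices("high") - required_practices("medium")))
```

Encoding the table this way makes reclassification cheap: when a supplier’s contextual risk changes, the delta in required practices falls out of a set difference.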

If you found this post interesting, please share it with your contacts – and let me know what you think in the comments!

Why “secure iframes” on http sites are bad for security

Earlier this year it was reported that half of the web is now served over HTTPS (Wired.com). Still, quite a number of sites try to keep things on http and serve secure content only in embedded parts of the site. There are two approaches to this:

  • A form embedded in an iframe served over https (not terrible but still a bad idea)
  • A form that loads over http and submits over https (this is terrible)

The form that loads on an http page and submits to an https endpoint is security-wise meaningless, because a man-in-the-middle attacker on the http page can inject script that reads (or redirects) the data entered into the form before it is ever submitted. The security added by https on the submission is therefore lost.

Users are slowly but surely being trained to look for this padlock symbol and the “https” protocol when interacting with web pages and applications. 

The “secure iframe” is slightly better because the form itself is served over https, so a man-in-the-middle cannot easily read its contents. This is aided by iframe sandboxing in modern browsers (see some info about this in Chrome here), although older browsers may be less secure because they lack the sandboxing function. Client-side restrictions can, however, be manipulated.

One of the big problems in security is lack of awareness of security risks. To counter this, browsers today mark login forms, payment forms, etc. on http sites as insecure. If you load your iframe over https on an http page, the browser will still warn the user (even though the actual content is not submitted insecurely). This counteracts the learned (positive) behavior of looking for a green padlock symbol and the https protocol. Two potential bad effects:

  • Users start to ignore the unison cry of “only submit data when you see the green padlock” – which will be great for phishing agents and other scammers. This may be “good for business” in the short run, but it certainly is bad for society as a whole, and for your business in the long run.
  • Users will not trust your login form because it looks insecure and they choose not to trust your site – which is good for the internet and bad for your business.

Takeaways from this:

  • Serve all pages that interact with users in any form over https
  • Do not use mixed content in the same page. Just don’t do it.
  • While you are at it: don’t support weak ciphers and vulnerable crypto. That is also bad for karma, and good for criminals.
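As a sketch of how the takeaways above might be enforced in practice, here is an illustrative nginx configuration (the domain and certificate paths are placeholders, and your setup may differ) that redirects all plain-http traffic to https, disables legacy TLS versions and adds an HSTS header so browsers stop trying http at all:

```nginx
# Redirect everything on port 80 to https - never serve content over http
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

# Serve the site only over TLS
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/certs/example.com.pem;    # placeholder path
    ssl_certificate_key /etc/ssl/private/example.com.key;  # placeholder path
    ssl_protocols TLSv1.2 TLSv1.3;  # no weak/legacy protocol versions
    # Tell browsers to use https for this host for the next year
    add_header Strict-Transport-Security "max-age=31536000" always;
}
```

With a setup like this there is no http page left to embed “secure iframes” into, which sidesteps the mixed-content problem entirely.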

How do leaked cyber weapons change the threat landscape for businesses?

Recently, a group called Shadow Brokers released hundreds of megabytes of tools claimed to stem from the NSA and other intelligence organizations. Ars has written extensively on the subject: https://arstechnica.com/security/2017/04/nsa-leaking-shadow-brokers-just-dumped-its-most-damaging-release-yet/. The leaked code is available on github.com/misterch0c/shadowbroker. The exploits target several Microsoft products still in service (and commonly used), as well as the SWIFT banking network. Adding to the speculation is the fact that Microsoft silently patched vulnerabilities claimed to be zero-days in the leaked code prior to the actual leak. But what does all of this mean for “the rest of us”?

Analysis shows that lifecycle management of software needs to be proactive, considering the security features of new products against the threat landscape prior to end-of-life for existing systems as a best practice. The threat from secondary adversaries may be increasing due to availability of new tools, and the intelligence agencies have also demonstrated willingness to target organizations in “friendly” countries; nation state actors should thus include domestic ones in threat modeling. 

There are two key questions we need to ask and try to answer:

  1. Should threat models include domestic nation state actors, including illegal use of intelligence capabilities against domestic targets?
  2. Does the availability of the leaked tools increase the threat from secondary actors, e.g. organized crime groups?

Taking on the first issue first: should we include domestic intelligence in threat models for “normal” businesses? Let us examine the C-I-A triad from this perspective.

  • Confidentiality: are domestic intelligence organizations interested in stealing intellectual property or obtaining intelligence on critical personnel within the firm? This tends to be supply-chain driven if you are not yourself the direct target, or data collection may occur due to innocent links to other organizations that are being targeted by the intelligence unit.
  • Integrity (data manipulation): if your supply chain is involved in activities drawing sufficient attention to require offensive operations, including non-cyber operations, integrity breaches are possible. Activities involving terrorism funding or illegal arms trade would increase the likelihood of such interest from authorities.
  • Availability: nation state actors are not the typical adversary that will use DoS-type attacks, unless it is to mask other intelligence activities by drawing response capabilities to the wrong frontier.

The probability of APT activities from domestic intelligence is for most firms still low. The primary sectors where this could be a concern are critical infrastructure and financial institutions. Also firms involved in the value chains of illegal arms trade, funding of terrorism or human trafficking are potential targets but these firms are often not aware of their role in the illegal business streams of their suppliers and customers.

The second question was whether the leak poses an increased threat from other adversary types, such as organized crime groups. Organized crime groups run structured operations across multiple sectors, both legal and illegal. They tend to be opportunistic, and any new tools that can support their primary cybercrime activities will most likely be put to use quickly. The typical high-risk activities include credit card and payment fraud, document fraud and identity theft, illicit online trade including stolen intellectual property, and extortion schemes using direct blackmail or malware. The leaked tools can support several of these activities, including extortion schemes and information theft. This indicates that the risk level does in fact increase with the leak of additional exploit packages.

How should we now apply this knowledge in our security governance?

  • The tools use exploits in older versions of operating systems. Keeping systems up-to-date remains crucial. New versions of Windows tend to come with improved security. Migration prior to the end-of-life of the previous version should be considered.
  • In risk assessments, domestic intelligence should be considered together with foreign intelligence and proxy actors. Stakeholder and value chain links remain key drivers for this type of threat.
  • Organized crime: targeted threats are value chain driven. Firms with exposure and old infrastructure most likely face increased risk due to the new cyberweapons available to organized crime groups.
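To make the patching recommendation above actionable, an inventory check can flag systems that are past, or approaching, end-of-life. A minimal sketch in Python, assuming a hypothetical host inventory; the end-of-life dates used here are illustrative and should always be verified against the vendor's official lifecycle documentation:

```python
from datetime import date

# Illustrative end-of-life dates per OS version (verify against vendor
# lifecycle pages before relying on them).
EOL_DATES = {
    "windows-7": date(2020, 1, 14),
    "windows-10": date(2025, 10, 14),
}

def flag_eol_systems(inventory, today, warn_days=180):
    """Return (host, os, days_left) for hosts past or within warn_days of EOL."""
    flagged = []
    for host, os_version in inventory:
        eol = EOL_DATES.get(os_version)
        if eol is None:
            continue  # unknown OS: handle separately in a real inventory
        days_left = (eol - today).days
        if days_left < warn_days:
            flagged.append((host, os_version, days_left))
    return flagged

# Example: one legacy host and one current host.
inventory = [("hr-laptop-01", "windows-7"), ("dev-ws-02", "windows-10")]
print(flag_eol_systems(inventory, today=date(2019, 11, 1)))
# → [('hr-laptop-01', 'windows-7', 74)]
```

In a real environment the inventory would come from an asset management system rather than a hard-coded list, but even this trivial report turns "keep systems up-to-date" into something measurable.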

How to embed security awareness in business processes

All businesses have processes for their operations. These can be production, sales, support, IT, procurement, auditing, and so on.

All businesses also need risk management. Traditional risk management has focused on financial risks, as well as HSE risks. These governance activities are also legal requirements in most countries. Recently cybersecurity has also caught mainstream attention, thanks to heavy (and often exaggerated) media coverage of breaches. Cyber threats are a real risk amplifier for any data-centric business. Therefore, they need to be dealt with as part of risk management. In many businesses this is, however, still not the case, as discussed in detail in this excellent Forbes article.

Most employees do not even get basic cybersecurity training at work. Is that an indicator that businesses have not embedded security practices in their day-to-day business?

  1. One common mistake many business leaders make is to view cybersecurity as an IT issue alone. Obviously, IT plays a big role here but the whole organization must pull the load together.
  2. Another mistake a leader can make is to view security as a “set and forget” thing.  It is unlikely that this would be the case for HSE risks, and even less so for financial risks.

The key to operating with a reasonable risk level is to embed risk management in all business processes. This includes activities such as:

  • Identify and evaluate risks to the business related to the business process in question
  • Design controls where appropriate. Evaluate controls up against other business objectives as well as security
  • Plan recovery and incident handling
  • Monitor risk (e.g. measurements, auditing, reporting)
  • Get your people processes right (e.g. roles and responsibilities, hiring, firing, training, leadership, performance management)
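The activities above can be anchored in something as simple as a structured risk register. A minimal sketch in Python; the field names and the 1–5 likelihood/impact scale are assumptions for illustration, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One entry in a simple risk register (illustrative fields)."""
    process: str           # business process the risk belongs to
    description: str
    likelihood: int        # 1 (rare) .. 5 (almost certain)
    impact: int            # 1 (negligible) .. 5 (severe)
    controls: list = field(default_factory=list)
    owner: str = "unassigned"

    @property
    def score(self):
        return self.likelihood * self.impact

def top_risks(register, threshold=12):
    """Return risks at or above the threshold, highest score first."""
    return sorted(
        (r for r in register if r.score >= threshold),
        key=lambda r: r.score,
        reverse=True,
    )

register = [
    Risk("procurement", "supplier account compromise", 3, 4, ["vendor vetting"]),
    Risk("IT", "ransomware via phishing", 4, 4, ["patching", "awareness training"]),
]
for r in top_risks(register):
    print(r.process, r.description, r.score)
```

The point is not the tooling but the discipline: every business process gets its risks written down, scored, owned, and reviewed alongside the monitoring and people activities listed above.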

What does the security aware organization look like?

Boiling this down to practice, what would be some key characteristics of a business that has successfully embedded security in its operations?

Cybersecurity would be a standard part of the agenda for board meetings. The directors would review security governance together with other governance issues, and also think about how security can be a growth enhancer for the business in the markets they are operating.

Procurement considers security when selecting suppliers. They identify cybersecurity threats together with other supply chain risks and act accordingly. They ensure baseline requirements are included in the assessment.

The CISO does not report to the head of IT. The CISO should report directly to the CEO and be regularly involved in strategic business decisions within all aspects of operations.

The company has an internal auditing system that includes cybersecurity. It has based its security governance on an established framework and created standardized ways of measuring compliance, ranging from automated audits of IT system logs to employee surveys and policy compliance audits.
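The "automated audits of IT system logs" mentioned above can start very small. A hedged sketch, assuming a plain-text authentication log with one event per line; the log format and the alert threshold are invented for illustration:

```python
import re
from collections import Counter

# Hypothetical log line format: "2019-05-01T10:00:00 FAILED_LOGIN user=alice"
FAILED = re.compile(r"FAILED_LOGIN user=(\w+)")

def failed_login_report(log_lines, threshold=3):
    """Count failed logins per user and flag those at or above the threshold."""
    counts = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1
    return {user: n for user, n in counts.items() if n >= threshold}

log = [
    "2019-05-01T10:00:00 FAILED_LOGIN user=alice",
    "2019-05-01T10:00:05 FAILED_LOGIN user=alice",
    "2019-05-01T10:00:09 FAILED_LOGIN user=alice",
    "2019-05-01T10:01:00 LOGIN_OK user=bob",
]
print(failed_login_report(log))  # {'alice': 3}
```

A real auditing system would feed results like these into standardized compliance reports, alongside employee surveys and policy audits, rather than printing them to a console.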

Human resources is seen as a key department for security management. Not only are they involved in designing role requirements, performance management tools and training materials, but they also help leaders build a security aware culture in the company. HR should also be a key resource for evaluating M&A activities when it comes to cultural fit, including cybersecurity culture.

If you are looking to improve your company’s cybersecurity governance, introducing best practices based on a framework or a standard is a great starting point. See How to build up your information security management system in accordance with ISO 27001 for practical tips on how to do that.

The [Cyber] Barbarians are at the [Internet] Gateways?

If you follow security news in media you get the impression that there are millions of super-evil super-intelligent nation state and hacktivist hackers constantly attacking you, and you specifically, in order to ruin your day, your business, your life, and perhaps even the lives of everyone you have ever known. Is this true? Are there hordes of barbarians targeting you specifically? Probably not.

 

Monsters waiting outside your gates to attack at first opportunity? That may be, but it is most likely not because they think your infrastructure is particularly tasty!
So what is the reality? The reality is that the threat landscape is foggy; it is hard to get a clear view. What is obviously true, though, is that you can easily fall victim to cyber criminals – although it is less likely that they are targeting you specifically. Of course, if you are the CEO of a big defense contractor, or you are the CIO of a large energy conglomerate – you are most likely specifically targeted by lots of bad (depending on perspective) guys – but most people don’t hold such positions, and most companies are not being specifically targeted. But all companies are potential targets of automated criminal supply chains.

The most credible cyber threats to the majority of companies and individuals are the following:

  • Phishing attacks with direct financial fraud intention (e.g. credit card fraud)
  • Non-targeted data theft for the sake of later monetization (typically user accounts traded on criminal marketplaces)
  • Ransomware attacks aimed at extorting money

None of these attacks are targeted. They may be quite intelligent, nevertheless. Cybercriminals are often quite sophisticated, and they are in many cases “divisions” in organized crime groups that are also active in more traditional crime, such as human trafficking, drug and illegal weapons trade, etc. Sometimes these groups may even have capabilities that mirror those of state-run intelligence organizations. In the service of organized crime, they develop smart malware that can evade anti-virus software, analyze user behaviors and generally maximize the return on their criminal investment in self-replicating worms, botnets and other tools of the cybercrime trade.

We know how to protect ourselves against this threat from the automated hordes of non-targeted barbarians trying to leach money from us all. If we keep our software patched, avoid giving end-users admin rights, and use whitelists to prevent unauthorized software from running – we won’t stop organized crime. But we will make their automated supply chain leach from someone else’s piggybank; these simple security management practices stop practically all non-targeted attacks. So much for the hordes of barbarians.
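The whitelisting control mentioned above can be illustrated with a trivial allowlist check. A sketch, assuming executables are identified by their SHA-256 hash; the allowlist contents here are made up, and real application whitelisting is done by dedicated OS features or products rather than a script like this:

```python
import hashlib

# Illustrative allowlist: SHA-256 digests of approved executables.
ALLOWED_HASHES = {
    hashlib.sha256(b"approved-binary-contents").hexdigest(),
}

def is_execution_allowed(binary_bytes):
    """Allow execution only if the binary's hash is on the allowlist."""
    digest = hashlib.sha256(binary_bytes).hexdigest()
    return digest in ALLOWED_HASHES

print(is_execution_allowed(b"approved-binary-contents"))  # True
print(is_execution_allowed(b"unknown-malware-payload"))   # False
```

The design point is default-deny: anything not explicitly approved is blocked, which is exactly why whitelisting defeats the non-targeted, automated attacks described above.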

These groups may also work on behalf of actual spies in some cases – they may in practice be the same people. So, the criminal writing the most intelligent antivirus-evading new ransomware mutation may also be the one actively targeting your energy conglomerate’s infrastructure and engineering zero-day exploits. Defending against that is much more difficult – because of the targeting. But then they aren’t hordes of barbarians or an army of ogres anymore. They are agents hiding in the shadows.

Bottom line – stop crying wolf all the time. Stick to good practices. Knowing what you have and what you value is the starting point. Build defense-in-depth based on your reality. That will keep your security practices and controls balanced, allowing you to keep building value instead of drowning in fear of the cyber hordes at your internet gateways.