This has been the first week back at work after 3 weeks of vacation. Vacation was mostly spent playing with the kids, relaxing on the beach and building a garden fence. Then Monday morning came and reality came back, demanding a solid dose of coffee.
A wave of phishing attacks. One of those led to a lightweight investigation, which found the phishing site set up for credential capture on a hacked WordPress site (as usual). This time the hacked site was a Malaysian site selling testosterone and doping products… and digging around on that site, a colleague of mine found the hackers’ uploaded webshell. A gem with lots of hacking batteries included.
Next task: due diligence of a SaaS vendor, testing password reset. Found out they are using Base64 encoded userID’s as “random tokens” for password reset – meaning it is possible to reset the password for any user. The vendor has been notified (they are hopefully working on it).
Surfing Facebook, there’s an ad for a productivity tool. Curious as I am, I create an account, and out of habit I try to set a very weak password (12345). The app accepts it. Logging in to a fancy app, I can then, by forced browsing, look at the data of all users. No authorization checks. And by the way, there is no way to change your password, or reset it if you forget it. This is a commercial product. Don’t forget to do some due diligence, people.
Phishing for credentials?
Phishing is a hacker’s workhorse, and for compromising an enterprise it is by far the most effective tool, especially against firms that are not using two-factor authentication. Phishing campaigns tend to come in bursts, and these need to be handled by the helpdesk or some other IT team. With all the spam filters in the world, plus regular awareness training, you can reduce the number of compromised accounts – but some attempts will still succeed. This is why the right response is not to assume you can stop every malicious email or train every user to always be vigilant; the solution is primarily multifactor authentication. Sure, it is possible to bypass many forms of it, but doing so is far more difficult than simply stealing a username and a password.
Another good idea is to use a password manager. It will not offer to fill in passwords on sites that aren’t actually on the domain they pretend to be.
To secure against phishing, don’t rely on awareness training and spam filters only. Turn on 2FA and use a password manager for all passwords. #infosec
The password reset thing was interesting. First, I registered an account on this app with a Mailinator email address and the password “passw0rd”. Promising… Then I tried the “I forgot” link on the login page to see if the password recovery flow was broken – and it really was, in a very obvious way. Password reset links are typically sent by email. Here’s how it should work:
You are sent a one-time link to recover your password. The link should contain an unguessable token and should be disabled once clicked. The link should also expire after a certain time, for example one hour.
This one sent a link that did not expire and that worked several times in a row. And the unguessable token? It looked something like this: “MTAxMjM0”. Hm… that’s too short to be a random sequence worth anything at all. Trying to identify whether this is a hash or something encoded, the first thing we try is to decode it from Base64 – and behold – we get a 6-digit number (101234 in this case, not the userID from this app). Creating a new account and repeating the exercise yields the next number (like 101235). In other words, given a reset link of the form /password/iforgot/token/MTAxMjM0, we can simply Base64 encode a sequence of numbers and reset the passwords of every user.
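The whole exercise, and what the app should have done instead, fits in a few lines of Python. The token value is the one from the post; the secrets-based generation at the end is a sketch of the standard fix (a long random token, stored server-side with an expiry and a single-use flag), not the vendor’s actual implementation.

```python
import base64
import secrets

# The "unguessable" token from the reset link decodes to a plain number.
token = "MTAxMjM0"
print(base64.b64decode(token).decode())  # → 101234

# An attacker can therefore mint valid-looking tokens for any user number:
def forge_token(user_number: int) -> str:
    return base64.b64encode(str(user_number).encode()).decode()

print(forge_token(101235))  # → MTAxMjM1

# What a sane reset flow uses instead: a token from a CSPRNG, impossible
# to enumerate, paired server-side with an expiry time and a used-flag.
reset_token = secrets.token_urlsafe(32)
print(len(reset_token))  # 43 URL-safe characters
```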
Was this a hobbyist app made by a hobbyist developer? No, it is an enterprise app used by big firms. Does it contain personal data? Oh, yes. They have been notified, and I’m waiting for feedback from them on how soon they will have deployed a fix.
Broken access control
The case with the non-random random reset token is an example of broken authentication. But before the week is over we also need an example of broken access control. Another web app, another dumpster fire. This was a post shared on social media that looked like an interesting product. I created an account. Password this time: 12345. It worked. Of course it did…
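For completeness, here is a sketch of the minimal server-side check that would have rejected “12345”. The length threshold and the tiny blocklist are illustrative assumptions; a real deployment should check against a large breached-password list and rate-limit login attempts as well.

```python
# Illustrative minimal password policy check (not the app's actual code).
COMMON_PASSWORDS = {"12345", "123456", "password", "passw0rd", "qwerty"}

def password_acceptable(password: str, min_length: int = 12) -> bool:
    if len(password) < min_length:
        return False  # too short to resist brute force
    if password.lower() in COMMON_PASSWORDS:
        return False  # on the blocklist of well-known weak passwords
    return True

print(password_acceptable("12345"))                     # → False
print(password_acceptable("correct horse battery st"))  # → True
```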
This time there is no password reset function to test, but I suspect if there had been one it wouldn’t have been better than the one just described above.
This app had a forced browsing vulnerability. It was a project tracking app. Logging in and creating a project, I got a URL of the following kind: /project/52/dashboard. I changed 52 to 25 – and found the project goals of somebody planning an event in Brazil. With budgets and all. The developer has been notified.
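The missing piece is an object-level authorization check: every request for /project/&lt;id&gt;/dashboard must verify that the logged-in user is actually entitled to that project, not just that the URL is well-formed. A minimal framework-free sketch (the ownership table and names are hypothetical, for illustration only):

```python
# Hypothetical sketch of the authorization check the app lacked.
class AuthorizationError(Exception):
    pass

# Illustrative in-memory ownership table (a real app would query a database).
PROJECT_OWNERS = {52: "alice", 25: "carlos"}

def get_project_dashboard(current_user: str, project_id: int) -> str:
    owner = PROJECT_OWNERS.get(project_id)
    if owner != current_user:  # the object-level authorization check
        raise AuthorizationError("user may not access this project")
    return f"dashboard for project {project_id}"

print(get_project_dashboard("alice", 52))   # alice's own project: allowed
try:
    get_project_dashboard("alice", 25)      # forced browsing attempt
except AuthorizationError as e:
    print("blocked:", e)
```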
Always check the security of the apps you would like to use. And always turn on maximum security on authentication (use a password manager, use 2FA everywhere). Don’t get pwnd. #infosec
Recently I participated in a one-day class on the contents required for the “Certificate of Cloud Security Knowledge” held by Peter HJ van Eijk in Trondheim as part of the conference Sikkerhet og Sårbarhet 2019 (Norwegian for: Security and Vulnerability 2019). The one-day workshop was interesting and the instructor was good at creating interactive discussions – making it much better than the typical PowerPoint overdose of commercial professional training sessions. There is a certification exam that I have not yet taken, and I decided I should document my notes on my blog; perhaps others can find some use for them too.
The CCSK exam closely follows a document made by the Cloud Security Alliance (CSA) called “CSA Security Guidance for Critical Areas of Focus in Cloud Computing v4.0” – a document you can download for free from the CSA webpage. They also lean on ENISA’s “Cloud Computing Risk Assessment”, which is also a free download.
The way I’ll do these blog posts is that I’ll first share my notes, and then give a quick comment on what the whole thing means from my point of view (which may not really be that relevant to the CCSK exam if you came here for a shortcut to that).
Introduction to D1 (Cloud Concepts and Architecture)
Domain 1 contains 4 sections:
Defining cloud computing
The cloud logical model
Cloud conceptual, architectural and reference model
Cloud security and compliance scope, responsibilities and models
NIST definition of cloud computing: a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.
A Cloud User is the person or organization requesting computational resources. The Cloud Provider is the person or organization offering the resources.
Key techniques to create a cloud:
Abstraction: we abstract resources from the underlying infrastructure to create resource pools
Orchestration: coordination of delivering resources out of the pool on demand.
Clouds are multitenant by nature. Consumers are segregated and isolated but share resource pools.
Cloud computing models
The CSA’s foundational model of cloud computing is the NIST model. A more in-depth reference model is taken from ISO/IEC. The guidance talks mostly about the NIST model and doesn’t dive into the ISO/IEC model, which is probably sufficient for most definition needs.
Cloud computing has 5 characteristics:
Shared resource pool (compute resources in a pool that consumers can pull from)
Rapid elasticity (can scale up and down quickly)
Broad network access
On-demand self-service (management plane, API’s)
Measured service (pay-as-you-go)
Cloud computing has 3 service models:
Software as a Service (SaaS): like Cybehave or Salesforce
Platform as a Service (PaaS): like WordPress or AWS Elastic Beanstalk
Infrastructure as a Service (IaaS): like VM’s running in Google Cloud
Cloud computing has 4 deployment models:
Public Cloud: pool shared by anyone
Private Cloud: pool shared within an organization
Hybrid Cloud: connection between two clouds, commonly used when an on-prem datacenter connects to a public cloud
Community Cloud: pool shared by a community, for example insurance companies that have formed some form of consortium
Models for discussing cloud security
The CSA document discusses multiple model types in a somewhat incoherent manner. The types of models it mentions can be categorized as follows:
Conceptual models: descriptions used to explain concepts, such as the CSA logical model.
Controls models: like CCM
Reference architectures: templates for implementing security controls
Design patterns: solutions to particular problems
The document also outlines a simple cloud security process model:
Identify security and compliance requirements, and existing controls
Select provider, service and deployment models
Define the architecture
Assess the security controls
Identify control gaps
Design and implement controls to fill gaps
Manage changes over time
The CSA logical model
This model explains 4 “layers” of a cloud environment and introduces some “funny words”:
Infrastructure: the core components in computing infrastructure, such as servers, storage and networks
Metastructure: protocols and mechanisms providing connections between infrastructure and the other layers
Infostructure: The data and information (database records, file storage, etc)
Applistructure: The applications deployed in the cloud and the underlying applications used to build them.
The key difference between traditional IT and cloud is the metastructure. Cloud metastructure contains the management plane components.
Another key feature of cloud is that each layer tends to double. For example, infrastructure is managed by the cloud provider, but the cloud consumer will establish a virtual infrastructure that will also need to be managed (at least in the case of IaaS).
Cloud security scope and responsibilities
The responsibility for security domains maps to the access the different stakeholders have to each layer in the architecture stack.
SaaS: the cloud provider is responsible for perimeter, logging, and application security, and the consumer may only have access to provision users and manage entitlements
PaaS: the provider is typically responsible for platform security and the consumer is responsible for the security of the solutions deployed on the platform. Configuring the offered security features is often left to the consumer.
IaaS: cloud provider is responsible for hypervisors, host OS, hardware and facilities, consumer for guest OS and up in the stack.
The shared responsibility model leaves us with two focus areas:
Cloud providers should clearly document internal security management and security controls available to consumers.
Consumers should create a responsibility matrix to make sure controls are followed up by one of the parties
Two compliance tools exist from the CSA and are recommended for mapping security controls:
The Consensus Assessment Initiative Questionnaire (CAIQ)
The Cloud Controls Matrix (CCM)
This domain is introductory and provides some terminology for discussing cloud computing. The key aspects from a risk management point of view are:
Cloud creates new risks that need to be managed, especially as it involves more companies in maintaining the security of the full stack than a fully in-house managed stack does. Requirements, contracts and audits become important tools.
The NIST model is more or less universally used in cloud discussions in practice. The service models are known to most IT practitioners, at least on the operations side.
The CSA guidance correctly designates the “metastructure” as the new kid on the block. The practical incarnation of this is API’s and console access (e.g. gcloud at API level and Google Cloud Console on “management plane” level). From a security point of view this means that maintaining security of local control libraries becomes very important, as well as identity and access management for the control plane in general.
In addition to the “who does what” problem that can occur with a shared security model, the self-service and fast-scaling properties of cloud computing often lead to “new and shiny” being pushed faster than security is aware of. An often overlooked part of “pushing security left” is that we also need to push both knowledge and accountability together with the ability to access the management plane (or parts of it through API’s or the cloud management console).
Insurance relies on pooled risk; when a business is exposed to a risk it feels cannot be managed with internal controls, the risk can be transferred to the capital markets through an insurance contract. For events that are unlikely to hit a very large number of insurance customers at once, this model makes sense. The pooled risk allows the insurer to earn returns on the premiums paid by the customers, and the customers get their financial losses covered in a claim situation. This works very well in many cases, but insurers will naturally try to limit their liabilities through “omissions clauses”: things that are not covered by the insurance policy. The omissions will typically include catastrophic systemic events that the insurance pool would not have the means to cover because a large number of customers would be hit simultaneously. They will also include conditions with the individual customer that void the insurance coverage – often referred to as pre-existing conditions. A typical example of the former is damage due to acts of war or natural disasters. For these events, the insured would have to buy extra coverage, if it is offered at all. An example of the latter omission type, the pre-existing condition, would be diseases the insured had before entering into a health insurance contract.
How does this translate into cyber insurance? There are several interesting aspects to think about, in both omissions categories. Let us start with the systemic risk – what happens to the insurance pool if all customers issue claims simultaneously? Each claim typically exceeds the premiums paid by any single customer. Therefore, a cyberattack that spreads to large portions of the internet is hard to insure while keeping the insurer’s risk under control. Take for example the WannaCry ransomware attack in May; within a day more than 200,000 computers in 150 countries were infected. The Petya attack following in June caused similar reactions, but the number of infected computers is estimated to be much lower. While WannaCry still looks like a poorly implemented cybercrime campaign intended to make money for the attackers, the Petya ransomware seems to have been a targeted cyberweapon used against Ukraine; the rest was most likely collateral damage. But for Ukrainian companies, the government and computer users this was a major attack: it took down systems belonging to critical infrastructure providers, it halted airport operations, it affected the government, it took hold of payment terminals in stores; the attack was a major threat to the entire society. What could a local insurer have done if it had covered most of those entities against any and all cyberattacks? It would most likely not have been able to pay out, and would have gone bankrupt.
A case that came up in security forums after the WannaCry attack was “pre-existing condition” in cyber insurance. Many policies had included “human error” in the omissions clauses; basically, saying that you are not covered if you are breached through a phishing e-mail. Some policies also include an “unpatched system” as an omission clause; if you have not patched, you are not covered. Not all policies are like that, and underwriters will typically gauge a firm’s cyber security maturity before entering into an insurance contract covering conditions that are heavily influenced by security culture. These are risks that are hard to include in quantitative risk calculations; the data are simply not there.
Insurance is definitely a reasonable control for mature companies, but there is little value in paying premiums if the business doesn’t satisfy the omissions clauses. For small businesses it will pay off to focus on the fundamentals first, and then to expand with a reasonable insurance policy.
For insurance companies it is important to find reasonable risk pooling measures to better cover large-scale infections like WannaCry. Because this is a serious threat to many businesses, not having meaningful insurance products in place will hamper economic growth overall. It is also important for insurers to get a grasp on individual omissions clauses – because in cyber risk management the thinking around “pre-existing condition” is flawed – security practice and culture is a dynamic and evolving thing, which means that the coverage omissions should be based on current states rather than a survey conducted prior to policy renewal.
Cyber supply chain risk is a difficult area to manage. According to NIST, 80% of all breaches originate in the supply chain, meaning it should be a definite priority of any security-conscious organization to try to manage that risk. That number was given in a presentation by Jon Boyens at the 2016 RSA Conference. A lot of big companies have been breached due to suppliers with poor information security practices, for example Target and Home Depot.
Most companies do not have any form of cybersecurity screening of their suppliers. Considering the facts above, this seems like a very bad idea. Why is this so?
A lot of people think cybersecurity is difficult. The threat landscape itself is difficult to assess unless you have the tools and knowledge to do so. Most companies don’t have that sort of competence in-house, and they are often unaware that they are lacking know-how in a critical risk governance area.
Why are suppliers important when it comes to cybersecurity? The most important factor is that you trust your suppliers, and you may already have shared authentication secrets with them. Consider the following scenarios:
Your HVAC service provider has VPN access to your network in order to troubleshoot the HVAC system in your office. What if hackers gain control over your HVAC vendor’s computer? Then they also have access to your network.
A supplier that you frequently communicate with has been hacked. You receive an email from one of your contacts in this firm, asking if you can verify your customer information by logging into their web based self-service solution. What is the chance you would do that, provided the web page looks professional? You would at least click the link.
You are discussing a contract proposal with a supplier. After emailing back and forth about the details for a couple of weeks he sends you a download link to proposed contract documents from his legal department. Do you click?
All of these are real use cases. All of them were successful for the cybercriminals wanting access to a bigger corporation’s network. The technical set-up was not exploited; in the HVAC case, the login credentials of the supplier were stolen and abused (this was the Target attack, resulting in the theft of roughly 40 million customer payment cards). In the other two cases an existing trust relationship was used to increase the credibility of a spear-phishing attack.
To counter social engineering, most companies offer “cybersecurity awareness training”. That can be helpful, and it can reduce how easy it is to trick employees into performing dangerous actions. But when the criminals leverage an existing trust relationship, this kind of training is unlikely to have any effect. Further, your awareness training probably covers only your own organization. Through established buyer-supplier relationships, the initial attack surface is not only your own organization; it expands to include all the organizations you do business with. And their attack surface, in turn, includes the people they do business with. This quickly expands to a very large network. You obviously cannot manage the whole network – but what you can do is evaluate the risk of using a particular supplier, and use that to determine which security controls to apply to the relationship with that supplier.
Screening the contextual risk of supplier organizations
What then determines the supplier risk level? Obviously, internal affairs within the supplier’s organization are important, but at least in the early screening of potential suppliers this information is not available. The supplier may also be reluctant to reveal too much information about their company. This means you can only evaluate the external context of the supplier. As it turns out, there are several indicators you can use to gauge the likelihood of a supplier breach. Main factors include:
Main locations of the supplier’s operations, including corporate functions
The size of the company
The sector the company operates in
In addition to these factors, which can help determine how likely the organization is to be breached, you should consider what kind of information about your company the supplier would possess. Obviously, somebody with VPN login credentials to your network would be of more concern than a restaurant where you order overtime food for you employees. Of special concern should be suppliers or partners with access to critical business secrets, with login credentials, or with access to critical application programming interfaces.
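The screening logic described here can be sketched as a coarse scoring function. This is purely illustrative and not from the original post: the weights, thresholds and category names are assumptions chosen only to show how location, size, sector and access level might combine into a risk category.

```python
# Illustrative supplier risk screening sketch; all weights are assumptions.
HIGH_RISK_SECTORS = {"public sector", "business services"}

def supplier_risk(country_political_risk: str,   # "low" | "medium" | "high"
                  employees: int,
                  sector: str,
                  has_credentials_or_secrets: bool) -> str:
    score = {"low": 0, "medium": 1, "high": 2}[country_political_risk]
    if employees < 500:                 # small/medium firms: weaker defenses
        score += 1
    if sector.lower() in HIGH_RISK_SECTORS:
        score += 1
    if has_credentials_or_secrets:      # VPN access, shared secrets, API keys
        score += 2
    if score >= 4:
        return "high"
    return "medium" if score >= 2 else "low"

print(supplier_risk("high", 120, "business services", True))  # → high
print(supplier_risk("low", 5000, "manufacturing", False))     # → low
```

The point of such a sketch is only to make the screening repeatable; the actual factors and weights should be calibrated against the threat intelligence discussed below.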
Going back to the external context of the supplier; why is the location of the supplier’s operations important? It turns out that the number of malware campaigns a company is exposed to is strongly correlated with the political risk in the countries where the firm operates. Firms operating in countries with a high crime rate, significant corruption and dubious attitudes to democracy and freedom of speech also tend to be attacked more from the outside. They are also more likely to run unlicensed software, e.g. pirated versions of Windows – leaving them more vulnerable to those attacks.
The size of the company is also an interesting indicator. Smaller companies, i.e. less than 250 employees, have a lower fraction of their incoming communication being malicious. At the same time, the defense of these companies is often weak; many of them lack processes for managing information security, and a lot of companies in this group do not have internal cybersecurity expertise.
The medium sized companies (250-500 employees) receive more malicious communications. These companies often lack formal cybersecurity programs too, and competence may also be missing here, especially on the process side of the equation. For example, few companies in this category have established an information security management system.
Larger companies still receive large amounts of malicious communications but they tend to have better defense systems, including management processes. The small and medium sized business therefore pose a higher threat for value chain exploitation than larger more established companies do.
The sector the supplier operates in is also a determining factor for the external context risk. Sectors that are particularly exposed to cyberattacks include:
Public sector and governmental agencies
Business services (consulting companies, lawyers, accountants, etc.)
Here the topic of “what information do I share” comes in. You are probably not very likely to share internal company data with a retailer unless you are part of the retailer’s supply chain. If you are, then you should be thinking about some controls, especially if the retailer is a small or medium-sized business.
For many companies the “business services” category is of key interest. These are service providers that you would often share critical information with. Consulting companies gain access to strategic information and to your IT network, and get to know a lot of key stakeholders in your company. Lawyers would obviously have access to confidential information. Accountants would be trusted, with access to information and perhaps also to your ERP systems. Business service providers often get high levels of access, and they are often targeted by cybercriminals and other hackers; this is a good reason to be vigilant about managing security in the buyer-supplier relationship.
Realistic assessments require up to date threat intelligence
There are more factors that come into play when selecting a supplier for your firm than security. Say you have an evaluation scheme that takes into account:
And now… cybersecurity
If the risk of using a supplier is considered unreasonably high, you may end up selecting a supplier that is more expensive, or offers a lower level of service, than the “best” supplier with the high perceived risk. It therefore becomes important that the coarse contextual risk assessment is performed based on up-to-date threat models, even for the macro indicators discussed above.
Looking at historical data shows that the threat impact of company size remains relatively stable over time. Big companies tend to have better governance than small ones. On the positive side for smaller companies is that they tend to be more interested in cooperating on risk governance than bigger players are. This, however, is usually not problematic when it comes to understanding the threat context.
Political risk is more volatile. Political changes in countries can happen quickly, and the effects of political change can be subtle but important for the cybersecurity context. This factor depends on up-to-date threat intelligence, primarily available from open sources. This means that when you establish a contextual threat model, you should take care to keep it up to date with political risk factors, which change at least on a quarterly basis and can even change abruptly in the case of revolutions, terror attacks or other major events causing social unrest. A slower stream would be legislative processes that affect how businesses deal with cyber threats, also at the governmental level. Key uncertainties in this field today include the access of intelligence organizations to communications data, and the evolution of privacy laws.
The sector’s influence on cyber threat levels also changes dynamically. Here threat intelligence is not as easy to access, but some open sources do exist. Open intel sources that can be taken into account to adjust the assessment of business sector risk are:
General business news and financial market trends
Threat intelligence reports from cybersecurity firms
Company annual reports
Regulations affecting the sector, as also mentioned under political risk
Vulnerability reports for business-critical software important to each sector
In addition to this, less open sources of interest would be:
Contacts working within the sectors with access to trend data on cyber threats (e.g. sysadmins in key companies’ IT departments)
Sensors in key networks (often operated by government security organizations), sharing of information typically occurs in CERT organizations
Obviously, staying on top of the threat landscape is a challenging undertaking but failing to do so can lead to weak risk assessments that would influence business decisions the wrong way. Understanding the threat landscape is thus a business investment where the expected returns are long-term and hard to measure.
How to take action
How should you, as a purchaser, use this information about supplier threats? Consider the situation where you have access to a sound contextual threat model and are able to sort supplier companies into broad risk categories, e.g. low, medium and high. How can you use that information to improve your own risk governance and reduce the exposure to supply chain cyber threats?
First, you should establish a due diligence practice for cybersecurity. You should require more scrutiny for high-risk situations than low-risk ones. Here is one way to categorize the governance practices for supply chain cyber risks – but this is only a suggested structure. The actual activities should be adapted to your company’s needs and capabilities.
Low risk supplier:
Require review of supplier’s policy for information security
State minimum supplier security requirements (antivirus, firewalls, updated software, training)
Medium risk supplier:
Require right to audit supplier for cybersecurity compliance (to be considered)
Establish cooperation for incident handling (to be considered)
High risk supplier:
Require external penetration test including social engineering prior to and during the business relationship (to be considered)
Agree on communication channels for security incidents related to the buyer-supplier relationship
Require ISO 27001 or similar certification (to be considered)
If you found this post interesting, please share it with your contacts – and let me know what you think in the comments!
Earlier this year it was reported that half of the web is now served over SSL (Wired.com). Still, quite a number of sites try to keep things on http and serve secure content only in embedded parts of the site. There are two approaches to this:
A form embedded in an iframe served over https (not terrible but still a bad idea)
A form that loads over http and submits over https (this is terrible)
The form that loads on the http site and submits to an https site is meaningless security-wise: a man-in-the-middle attacker on the http page can snoop on the data directly as it is entered into the form, so the security added by https on the submission is lost.
The “secure iframe” is slightly better because the form is served over https and a man-in-the-middle cannot easily read its contents. This is aided by iframe sandboxing in modern browsers (see some info about this in Chrome here), although older browsers may not be as secure because the sandboxing function was not included. Client-side restrictions can, however, be manipulated.
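The first anti-pattern is easy to detect mechanically: any form rendered on an http page is suspect, even if its action points at https, because the page carrying the form can be tampered with in transit. A minimal stdlib-only sketch (the example URL is hypothetical):

```python
# Flag forms rendered on http pages, regardless of where they submit to.
from html.parser import HTMLParser

class FormScanner(HTMLParser):
    def __init__(self, page_scheme: str):
        super().__init__()
        self.page_scheme = page_scheme
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag != "form":
            return
        action = dict(attrs).get("action", "")
        if self.page_scheme == "http":
            # http page + https action: the submission is encrypted, but a
            # man-in-the-middle can replace the form itself before it renders.
            self.findings.append(f"form on http page (action={action!r})")

html_doc = '<html><body><form action="https://example.com/login"></form></body></html>'
scanner = FormScanner(page_scheme="http")
scanner.feed(html_doc)
print(scanner.findings)
```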
One of the big problems with security is lack of awareness about security risks. To counter this, browsers today indicate that login forms, payment forms, etc. on http sites are insecure. If you load your iframe over https on an http site, the browser will still warn the user (even though the actual content is not submitted insecurely). This counteracts the learned (positive) behavior of looking for a green padlock symbol and the https protocol. Two potential bad effects:
Users start to ignore the unison cry of “only submit data when you see the green padlock” – which will be great for phishing agents and other scammers. This may be “good for business” in the short run, but it certainly is bad for society as a whole, and for your business in the long run.
Users will not trust your login form because it looks insecure and they choose not to trust your site – which is good for the internet and bad for your business.
Takeaways from this:
Serve all pages that interact with users in any form over https
Do not use mixed content in the same page. Just don’t do it.
While you are at it: don’t support weak ciphers and vulnerable crypto. That is also bad for karma, and good for criminals.
Recently, a group called Shadow Brokers released hundreds of megabytes of tools claimed to stem from the NSA and other intelligence organizations. Ars has written extensively on the subject: https://arstechnica.com/security/2017/04/nsa-leaking-shadow-brokers-just-dumped-its-most-damaging-release-yet/. The leaked code is available on github.com/misterch0c/shadowbroker. The exploits target several Microsoft products still in service (and commonly used), as well as the SWIFT banking network. Adding speculation to the case is the fact that Microsoft silently patched vulnerabilities claimed to be zero-days in the leaked code prior to the actual leak. But what does all of this mean for “the rest of us”?
There are two key questions we need to ask and try to answer:
Should threat models include domestic nation state actors, including illegal use of intelligence capabilities against domestic targets?
Does the availability of the leaked tools increase the threat from secondary actors, e.g. organized crime groups?
Taking the first question first: should we include domestic intelligence in threat models for “normal” businesses? Let us examine the C-I-A triad from this perspective.
Confidentiality: are domestic intelligence organizations interested in stealing intellectual property or obtaining intelligence on critical personnel within the firm? Such interest tends to be supply-chain driven if you are not yourself the direct target, or data collection may occur because of innocent links to other organizations that are being targeted by the intelligence unit.
Integrity (data manipulation): if your supply chain is involved in activities drawing sufficient attention to require offensive operations, including non-cyber operations, integrity breaches are possible. Activities involving terrorism funding or illegal arms trade would increase the likelihood of such interest from authorities.
Availability: nation state actors are not the typical adversary that will use DoS-type attacks, unless it is to mask other intelligence activities by drawing response capabilities to the wrong frontier.
For most firms, the probability of APT activity from domestic intelligence remains low. The primary sectors where this could be a concern are critical infrastructure and financial institutions. Firms involved in the value chains of illegal arms trade, terrorism funding or human trafficking are also potential targets, but they are often unaware of their role in the illegal business streams of their suppliers and customers.
The second question was whether the leak poses an increased threat from other adversary types, such as organized crime groups. Organized crime groups run structured operations across multiple sectors, both legal and illegal. They tend to be opportunistic, and any new tools that can support their primary cybercrime activities will most likely be put to use quickly. The typical high-risk activities include credit card and payment fraud, document fraud and identity theft, illicit online trade including stolen intellectual property, and extortion schemes by direct blackmail or use of malware. The leaked tools can support several of these activities, including extortion schemes and information theft. This indicates that the risk level does in fact increase with the leak of additional exploit packages.
How should we now apply this knowledge in our security governance?
The tools use exploits in older versions of operating systems. Keeping systems up-to-date remains crucial. New versions of Windows tend to come with improved security. Migration prior to end-of-life of previous version should be considered.
In risk assessments, domestic intelligence should be considered together with foreign intelligence and proxy actors. Stakeholder and value chain links remain key drivers for this type of threat.
Organized crime: targeted threats are value-chain driven. Firms with exposure and old infrastructure most likely face increased risk due to the new cyberweapons now available to organized crime groups.
All businesses have processes for their operations. These can be production, sales, support, IT, procurement, auditing, and so on.
All businesses also need risk management. Traditional risk management has focused on financial risks, as well as HSE risks. These governance activities are also legal requirements in most countries. Recently cybersecurity has also caught mainstream attention, thanks to heavy (and often exaggerated) media coverage of breaches. Cyber threats are a real risk amplifier for any data-centric business, and therefore need to be dealt with as part of risk management. In many businesses this is, however, still not the case, as discussed in detail in this excellent Forbes article.
One common mistake many business leaders make is to view cybersecurity as an IT issue alone. Obviously, IT plays a big role here but the whole organization must pull the load together.
Another mistake a leader can make is to view security as a “set and forget” activity. Few leaders would treat HSE risks that way, and even fewer financial risks.
The key to operating with a reasonable risk level is to embed risk management in all business processes. This includes activities such as:
Identify and evaluate risks to the business related to the business process in question
Design controls where appropriate. Evaluate controls against other business objectives as well as security
Get your people processes right (e.g. roles and responsibilities, hiring, firing, training, leadership, performance management)
What does the security aware organization look like?
Boiling this down to practice, what would be some key characteristics of a business that has successfully embedded security in its operations?
Cybersecurity would be a standard part of the agenda for board meetings. The directors would review security governance together with other governance issues, and also think about how security can be a growth enhancer for the business in the markets they are operating.
Procurement considers security when selecting suppliers. They identify cybersecurity threats together with other supply chain risks and act accordingly. They ensure baseline requirements are included in the assessment.
The CISO does not report to the head of IT. The CISO should report directly to the CEO and be regularly involved in strategic business decisions within all aspects of operations.
The company has an internal auditing system that includes cybersecurity. It has based its security governance on an established framework and created standardized ways of measuring compliance, ranging from automated audits of IT system logs to employee surveys and policy compliance audits.
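The “automated audits of IT system logs” mentioned above can be as simple as a scheduled script. The sketch below is a hypothetical example: it counts failed SSH logins per source IP in syslog-style auth-log lines and flags IPs above a threshold. The log format, field names and threshold are assumptions for illustration, not a prescribed standard.

```python
# Minimal sketch of an automated log audit: count failed SSH logins per
# source IP and flag IPs at or above a threshold. Illustrative only;
# real audits should also handle log rotation, timestamps and IPv6.
import re
from collections import Counter

FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")


def flag_suspicious(lines, threshold=3):
    """Return {ip: failure_count} for IPs with >= threshold failed logins."""
    hits = Counter()
    for line in lines:
        match = FAILED.search(line)
        if match:
            hits[match.group(1)] += 1
    return {ip: n for ip, n in hits.items() if n >= threshold}


log = [
    "Apr 20 10:01:02 host sshd[1]: Failed password for root from 203.0.113.5 port 22 ssh2",
    "Apr 20 10:01:05 host sshd[1]: Failed password for admin from 203.0.113.5 port 22 ssh2",
    "Apr 20 10:01:09 host sshd[1]: Failed password for root from 203.0.113.5 port 22 ssh2",
    "Apr 20 10:02:00 host sshd[1]: Accepted password for alice from 198.51.100.7 port 22 ssh2",
]
print(flag_suspicious(log))  # {'203.0.113.5': 3}
```

Hooked up to a scheduler and a ticketing system, even a check this small turns log review from an ad-hoc task into a measurable, repeatable control.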
Human resources is seen as a key department for security management. Not only are they involved in designing role requirements, performance management tools and training materials, but they are heavily involved in helping leaders build a security-aware culture in the company. HR should also be a key resource for evaluating M&A activities when it comes to cultural fit, including cybersecurity culture.