This post is based on the excellent mindmap posted on taosecurity.blogspot.com – detailing the different fields of cybersecurity. The author (Richard) said he was not really comfortable with the risk assessment portion. I have tried to change the presentation of that portion – into the more standard thinking about risk stemming from ISO 31000 rather than security tradition.
Red team and blue team activities are presented under penetration testing in the original mind map. I agree that the presentation there is a bit off – red team is about pentesting, whereas blue team is the defensive side. In normal risk management lingo these terms aren’t that common, which is why I left them out of the mind map for risk assessment. For an excellent discussion of these terms, see this post by Daniel Miessler: https://danielmiessler.com/study/red-blue-purple-teams/#gs.aVhyZis.
The map shown here breaks down the risk assessment process into the following containers:
There are of course many links between risk assessments and other security related activities. Risk monitoring and communication processes connect these dots.
Threat intelligence is also essential for understanding the context – which in turn dictates the attack scenarios and the credibility needed to prioritize risks. Threat intelligence entails many activities, as indicated by the original mind map. One source of intel from ops that is missing on that map, by the way, is threat hunting. That too ties into risk identification.
I have also singled out security ops as it is essential for risk monitoring. This is required on the tactical level to evaluate whether risk treatments are effective.
Further, “scorecards” is used here as a name for strategic management – and integration into strategic management and governance is necessary to ensure effective risk management and to involve the right parts of the organization.
After being home on paternal leave 80% of the week and working 20% of the week, I will be switching percentages from tomorrow. That means more time to get hands-on with security. I’ve recently switched from risk management consulting to a pure security position within a fast-growing organization with a very IT-centric culture. Working one day a week in this environment has been a great way to get an impression of the organization and its context, and now the real work begins. I think habits from the consulting world will be beneficial to everyone involved. Here’s how.
Slipping into someone else’s shoes
Consulting is about understanding the unarticulated problems, and getting to the core through intelligent questions. That is the essence of it; the good consultant understands that context is everything, and that the perception of context differs depending on the shoes you wear. This goes for strategy development, for risk management in general, and definitely for cybersecurity.
Use your analytics for (almost) everything
As a consultant you must be able to back up your claims. Your recommendations are expensive to get, and they’d better be worth the price. Often you will make recommendations that are uncomfortable to decision makers – because of cost, challenged assumptions, or simply because they are not aligned with their gut feeling.
This is why consultants must be ready to back up their claims with two essential big guns: a convincing approach to analysis, and solid data. Further, to add to the credibility of the recommendations, the methods and data should be described together with the uncertainties surrounding both.
Working in security means that you are trying to protect assets – some tangible, but most are not. The recommendations you make usually carry a cost, and to convince your stakeholders that your recommendations are meaningful you need to provide the methods and the data to make them compelling. Which brings us to the next step…
Always make an effort to communicate with purpose
Analysis and data become useless without communication. This is the high-stakes part of consulting: communicating with clients, stakeholders, and internal and external subject matter experts – not only to present your facts, but to support the whole process. Understanding context is never a one-way street; it is a multifaceted, multichannel communication challenge. Understanding data and uncertainties often requires multidisciplinary input. This requires questions to be asked, provocations to be made and conversations to be had. Presenting your recommendations requires public speaking skills. And following up requires perseverance, empathy and prioritization.
In cybersecurity you deal with a number of groups, each with their own perspectives. Involving the right people at the right time is key to any successful security program, ranging from optimizing automated security testing during software integration to teaching support staff about social engineering awareness.
And that leaves one more thing: learning
If there is one thing consulting teaches you, it is that you have a lot to learn. With every challenge you find another topic to dive into, another blank spot in your know-how. Consultants are experts at thriving outside their comfort zones – that is what you need to do to help clients solve complex issues you have never seen before. You must constantly reinvent yourself, you must constantly remain curious, and you must process new information every day, in every interaction you have.
Cybersecurity requires learning all the time. One thing that strikes me when looking at new attack patterns is the creativity and ingenious engineering of bad guys. Not all attacks are great, not all malware is complex, but their ability to distill an understanding of people’s behaviors into attack patterns that are hard to detect, deny and understand is truly inspiring; to beat the adversaries we can never stop learning.
Disclosing vulnerabilities is part of handling your risk exposure. Many times, web vulnerabilities are found by security firms scanning large portions of the web, or they may be reported by independent security researchers who have taken an interest in your site.
How companies deal with such reported vulnerabilities will usually take one of the following three paths:
Fix the issue, tell your customers what happened, and let them know what their risk exposure is
Fix the issue but try to keep it a secret.
Threaten the reporter of the vulnerability, claim that there was never any risk regardless of the facts, refuse to disclose details
Number 2 is perhaps still the norm, unfortunately. Number 1 is ideal. Number 3 is bad.
If you want to see an example of ideal disclosure, this Wired.com article about revealing password hashes in source shows how it should be done.
A different case was the Norwegian grocery chain REMA 1000, where a security researcher reported a lack of authentication between frontend and backend, exposing the entire database of customer data. They chose to go with route 3. The result: media backlash, angry consumers and the worst quarterly results since…, well, probably forever.
Security awareness training is one of many strategies used by companies to reduce their security risks. It seems like an obvious thing to do, considering the fact that almost every attack contains some form of social engineering as the initial perimeter breach. In most cases it is a phishing e-mail.
Security awareness training is often cast as mandatory training for all employees, with little customization or role-based adaptation. As discussed previously, this can have detrimental effects on the effectiveness of the training, on your employees’ motivation, and on the security culture as a whole. Only when we manage to deliver a message adapted to both skill and motivation levels can we hope to be successful in our awareness training programs: When does cybersecurity awareness training actually work?
So, while many employees will need training on identifying malicious links in e-mails, or on understanding that they should not use the same password for every user account, other employees may have a higher level of security understanding – typically an understanding linked to the role they have and the responsibilities they take on. So, while the awareness training for your salesforce may look quite similar to the awareness training you give to your managers and your customer service specialists, the security awareness discussions you need to have with your more technical teams may look completely different. They already know about password strength. They already understand how to spot shaky URLs and strange domains. But what they may not understand (without having thought about it and trained for it) is how their work practices can make products and services less secure – forcing us to rely even more on awareness training for the less technically inclined coworkers, customers and suppliers. One example of a topic for a security conversation with developers is the use of authentication information during development and how this information is treated throughout the code’s evolution. Basically, how to avoid keeping your secrets where bad guys can find them because you never considered the fact that they are still there – more or less hidden in plain sight. Like this example, with hardcoded passwords in old versions of a git repository: Avoid keeping sensitive info in a code repo – how to remove files from git version history
So, how can you plan your security conversations to target the audience well? For this you do need to do some up-front work – as any good teacher would tell you to do for all students. People differ in skills, knowledge, motivation for compliance, and motivation to learn. This means that tailoring your message to be as effective as possible is going to be very hard, and still very necessary.
The following 5-step process can be helpful in planning your content, delivery method and follow-up for a more effective awareness training session.
First you need to specify the roles in the organization that you want to convey your message to. What would the role holders expect from a good security awareness training? What are the responsibilities of these roles? Are the responsibilities well understood in the organization, both by the people holding these roles and by the organization as a whole? Clarity here will help, but if the organization is less mature, understanding that fact will also help you target your training. A key objective of awareness training should be to facilitate role clarification and identify expectations that always exist, but sometimes implicitly rather than explicitly.
When the role has been clarified, as well as the expectations attached to it, you need to consider the skillsets the role holders have. Are they experts in log analysis from your sysadmin department? Don’t insult them by stressing that it is important to log authentication attempts – this sort of thing kills motivation and makes key team members hostile to your security culture project. For technical specialists, use their own insights about deficiencies to target the training. Look also to external clues about technical skill levels and policy compliance – security audit reports and audit logs are great starting points, in addition to talking to some of the key employees. But remember: always start with the people before you dive into technical artefacts. And don’t overdo it – you are trying to get a grasp of the general level of understanding in your audience, not evaluate them for a new job.
The next point should be to consider the atmosphere in the group you are talking to. Are they motivated to work with policies and stick with the program? Do they oppose the security rules of the company? If so, do you understand why? Make sure role models understand they are role models. Make sure policies do make sense, also for your more technical people. If there is a lack of leadership as an underlying reason for low motivation to get on board the security train, work with the senior leadership to address this. Get the leadership in place, and focus on motivation before extra skills – nobody will operationalize new skills if they do not agree with the need to do so, or at least understand why it makes sense for the company as a whole. You need both to get the whole leadership team on board, and you probably need to show quite some leadership yourself too to pull off a successful training event in a low motivation type of environment.
Your organization hopefully has articulated security objectives. For a more in-depth discussion of objectives, see this post on ISO 27001. Planning in-depth security awareness training without a clear picture of the objectives the organization hopes to achieve is like starting an expedition without knowing where you are trying to end up. It is going to be painful, time-consuming, costly and probably not very useful. When you do have the objectives in place, assess how the roles in question are going to support them. What are the activities and outcomes expected? What are the skillsets required? Why are these skillsets required, and are they achievable from the starting point? When you are able to ask these questions, you are starting to get a grip not only on the right curriculum but also on the depth level you should aim for.
When you have gone through this whole planning exercise to boil down the necessary curriculum and the level of detail at which you should be talking about it, you are ready to state the learning goals for your training sessions. Learning goals are written expressions of what your students should gain from the training, in terms of abilities they acquire. These goals make it easier for you to develop the material using the thinking of “backwards course design”, and they make it easier to evaluate the effectiveness of your training approach.
Finally, remember that the training outcomes do not come from coursework, e-learning or reading scientific papers. They come from practice and the operationalization of the ideas discussed in training, and they come from culture – when practice is so second nature that it becomes “the way we do things around here”.
To achieve that you need training, you need leadership, and you need people with the right skills and attitudes for their jobs. That means that in order to succeed with security, the whole organization must pull the load together – which makes security not only IT’s responsibility but everybody’s. And perhaps most of all, it is the responsibility of the CEO and the board of directors. In many cases, lack of awareness in the trenches – in the form of no secure dev practices, bad authentication routines and insufficient testing – stems from a lack of security prioritization by the board.
Europol has recently released its 2017 Serious and Organised Crime Threat Assessment (SOCTA) for the EU. In this report they identify five key threats to Europe from organized crime groups. In addition to cybercrime itself, the report highlights illicit drug crime, migrant smuggling, organized property crime and labor market crime. Cybercriminal activities are often integral to, or support, the other key operations of organized crime groups.
Key tools of organized crime groups are
Counterintelligence against law enforcement
Violence and extortion
They carry out crimes through currency counterfeiting and various cybercrimes, including child exploitation, payment fraud, data trade and malware campaigns. Sports corruption is also a major area for organized criminals, who draw profits from the gambling markets.
Document fraud is increasing and is a significant threat to Europe. It is an enabler of many types of criminal activity, including terrorism. Fraudulent documents are increasingly traded online.
Document fraud is one of the key drivers of identity theft. Document fraud can be necessary to facilitate other criminal activities, and cyberattacks may be used to steal credentials used to obtain documents.
Trade in illicit goods is increasing, and much of this trade is conducted on darknet sites. Key products are drugs, illegal firearms and malware. Other crime-as-a-service segments are also of interest, like botnets for hire, ransomware-as-a-service and exploit coding. Europol sees crime-as-a-service as a growing threat to society, according to the SOCTA 2017 report. In particular, the growth in ransomware targeting not only individuals but also public and private organizations is worrying.
Geopolitical events are driving changes in organized crime in Europe. Conflicts close to European borders are influencing crime through migration, need for illicit goods, as well as European targets being picked by non-European fighters performing terrorist acts in Europe. Cybercrime is one source of funding for such terror groups, in addition to cybercrime being an enabler of the organized crime groups that support the needs of terrorism through illicit firearms trade, trade in drugs and narcotics and human trafficking.
Pulling EUROPOL’s intelligence into your cybersecurity threat context
What does this mean for European businesses? Depending on your exposure, technology base and value chain, this may affect the threat landscape for your organization.
Increasing the direct threat level, e.g. ransomware and payment fraud
Supply chain effects, including money laundering schemes
Threats to your intellectual property
Corruption affecting your markets, including partners, owners, suppliers and customers
Potential investments from money laundering schemes into your infrastructure
If growth in the activities of organized crime groups affects your threat landscape, it may also mean that you need to rethink your cybersecurity defense priorities. Is availability still the main concern, or are confidentiality issues coming to the forefront?
Most organizations have password policies that require users to change their passwords every XX days, and that they use a minimum (or sometimes fixed!) length, and a combination of capital and small letters, numbers and special symbols. But what exactly makes a password “strong” or difficult to guess?
Entropy can be used to measure the complexity of an information string – or, if you like, the number of possible combinations within the given “rule” for constructing the string. To calculate the information entropy of your password, use this formula:
ENTROPY = (Length of password) × LOG (Size of character set) / LOG (2)
So, comparing an 8-character password using only lower-case letters with one using a combination of upper and lower case, we get an entropy of 37 bits in the first case and 45 bits in the latter. This means the latter is harder to crack using brute-force attacks – but how much so? (Higher entropy is better.) Open security research has made a brute-force time calculator that we can use to estimate this. The estimate is based on benchmarks for common cracking tools on a regular consumer-grade PC. Assuming a salted, SHA-hashed password, we get about 7 hours to crack the first but 2000 hours to crack the latter – entropy is obviously a big deal. As we see from the formula above, increasing the character set size is one way to increase entropy; the other is increasing the length of the password itself. Note that, in terms of cracking, using some symbols or characters not normally found in words is necessary to avoid dictionary-based attacks – these brute-force times are worst-case times seen from the attacker’s perspective, i.e. the time it takes to exhaust the entire character space.
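The formula can be sketched in a few lines of Python (a minimal illustration; truncating to whole bits reproduces the figures quoted above):

```python
import math

def password_entropy(charset_size: int, length: int) -> float:
    """Password entropy in bits: length * log2(size of character set)."""
    return length * math.log(charset_size) / math.log(2)

# 8-character password, lower-case letters only (26-character set):
print(int(password_entropy(26, 8)))  # 37 bits
# 8-character password, lower- and upper-case letters (52-character set):
print(int(password_entropy(52, 8)))  # 45 bits
```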
What is better – more characters or longer passwords?
Turning to some basic maths, we can use the entropy formula to compare the effects of increasing character set size versus password length. Entropy is proportional to the logarithm of the character set size – which means its growth rate with character set size c is proportional to 1/c. When c is large, this derivative approaches zero; increasing the set size pays off for small sets, but the value of doing so shrinks as the set grows larger.
The effect of increasing password length, however, is linear: for a given charset size, each added character contributes the same number of bits. What does this mean in practice?
Add complexity up to a certain level – enough to also take dictionary attacks off the table as an efficient way to brute-force the password
Increase length after that instead of adding more complexity
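The diminishing returns can be checked numerically. In this sketch, the set sizes 62 and 94 are illustrative stand-ins for an alphanumeric set and the full printable ASCII set:

```python
import math

def entropy(charset_size: int, length: int) -> float:
    """Password entropy in bits for a given character set size and length."""
    return length * math.log2(charset_size)

# Bits gained by growing the character set of an 8-character password:
print(entropy(52, 8) - entropy(26, 8))  # doubling the set (26 -> 52) adds exactly 8 bits
print(entropy(94, 8) - entropy(62, 8))  # adding 32 symbols (62 -> 94) adds only ~4.8 bits
# Bits gained by one extra lower-case character instead:
print(entropy(26, 9) - entropy(26, 8))  # ~4.7 bits for every character of added length
```

Growing an already-large set buys less and less, while each extra character keeps adding the same amount.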
Using the brute force time calculator, we estimate the following exhaustion times:
Lower case letters, 8-character password: 7 hours to crack
Lower case and upper case letters, 8-character password: 2000 hours to crack
Lower case letters, 16-character password: 189 million years to crack
Lower and upper case letters, 16-character password: 12 trillion years to crack
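These exhaustion times can be roughly reproduced with a back-of-the-envelope calculation. The guess rate below (about 7.5 million salted SHA hashes per second on a consumer-grade PC) is an assumption inferred from the quoted figures, not a measured benchmark:

```python
# Assumed cracking rate for salted SHA hashes on a consumer-grade PC -
# an assumption chosen to roughly match the calculator's figures.
GUESSES_PER_SECOND = 7.5e6

def exhaustion_time_hours(charset_size: int, length: int) -> float:
    """Worst-case time to try the entire character space, in hours."""
    return charset_size ** length / GUESSES_PER_SECOND / 3600

print(round(exhaustion_time_hours(26, 8)))         # ~8 hours
print(round(exhaustion_time_hours(52, 8)))         # ~2000 hours
print(exhaustion_time_hours(26, 16) / (24 * 365))  # on the order of 10^8 years
```

Note how the 16-character passwords jump from hours to geological timescales even with a small character set.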
Logical conclusion: use passphrases with some added complexity. This makes a brute-force attack on your password extremely difficult.
In CISO circles the term “shadow IT” is commonly used for when employees use private accounts, devices and networks to conduct work outside of the company’s IT policies. People often do this because they feel they don’t have the freedom to get the job done within the rules.
Reasons why people do their business in the IT shadows
I’ll nominate 3 main reasons why people tend to use private and unauthorized tools and services in companies and public service. Then let’s look at what we can do about it, because this is a serious expansion of the organization’s attack surface! And we don’t want that, do we?
I believe (based on experience) the 3 main reasons are:
The tools they are provided with are hard to use, impractical or not available
They do not understand the security implications and have not internalized what secure behaviors really are
The always-on culture is making the distinction between “work” and “personal” foggy; people don’t see that risks they are willing to take in their personal lives are also affecting their organizations that typically will have a completely different risk context
How to avoid the shadow IT rabbit hole of vulnerabilities
First of all, don’t treat your employees and co-workers as idiots. IT security is very often about locking everything down and hardening machines and services. If you go too far in this direction you make it very hard for people to do their jobs, and you can end up driving them into the far riskier practice of inventing their own workarounds using unauthorized solutions – like private email accounts. Make sure controls are balanced, and don’t forget that security is there to protect productivity – it is not the key product of most organizations. Therefore, your risk governance must ensure that you:
Select risk-based controls – don’t lock everything down by default
Provide your employees with the solutions they need to do their jobs
Remember that no matter how much you harden your servers, the human factor still remains.
Second, make people your most important security assets. Build a security aware culture. This has to be done by training, by leadership and by grassroots engagement in your organization.
Third, and for now last, disconnect. Allow people to disconnect. Encourage it. Introduce separations between the private and what is work or for your organization. This is important because the threat contexts of the private sphere and the organizational sphere are in most cases very different. This is also the most difficult part of the management equation: allowing flexible work but ensuring there is a divide between “work” and “life”. This is what work-life balance means for security; it allows people to maintain different contexts for different parts of their lives.
Phishing e-mails are the most common way for a hacker to breach the initial attack surface. Filters and blacklisting technologies have been less than effective in stopping such threats, and it is up to the cybersecurity training and awareness of the user to ensure safe choices are made. Now phishermen have new ideas about making their bait more trustworthy: hijacking existing mail threads and piggybacking on existing interpersonal trust. I received an e-mail from a contact who told me he realized he’d fallen for a scam the second he submitted his username and password to the phoney login site he was led to. Here’s a (somewhat edited) excerpt of the e-mail thread leading him into the phisherman’s trap.
From: Jim Salesman
To: Danny Customer
Subject: Re: confirm order details
thank you for your purchase. Please download and check these documents.
With best regards,
From: Danny Customer
To: Jim Salesman
Subject: Re: confirm order details
I agree to the conditions as you have suggested. Make sure the part serial numbers are indicated correctly on the labels.
With best regards,
——- (after multiple e-mails back and forth)
Where does the link lead?
The link does not lead to a Google page, despite claiming to be a Google Docs file. The lack of Google branding in the download section is another indicator. The URL is “ehbd-dot-ml/hbdesigns/gibberish/” and is served over plain http – no security. It displays a selection of “login credentials” to choose from.
My friend realized the mistake the moment he hit “submit”. He then called his company’s IT department, and was told to change his passwords and run a virus scan. That was the right thing to do. But why is this dangerous?
Giving hackers access to your e-mail makes it easy for them to:
Read your e-mails and attachments
Impersonate you by sending e-mails as you
Hijack other accounts where your e-mail address is used to reset your password
Phishing scammers are skilled at exploiting the established trust between you and your contacts. Always be suspicious of links in e-mails, even from people you know. Before clicking, always check:
Does the URL look reasonable?
Does the branding (logos etc.) look right for the contents?
Is the site it leads to secured when you would expect it to be? All major service providers will only serve https – not http
Is the domain name strange? The .ml top domain is the national domain for Mali in Africa. Google Docs does not use that as the default login site domain.
When performing the risk and vulnerability assessment required by the new IEC 61511 standard, make sure the level of detail is just right for your application. Normally the system integrator operates at the architectural level, meaning signal validation in software components should probably already have been dealt with. On the other hand, upgrading and maintaining the system throughout the entire lifecycle has to be looked into. Just enough detail can be hard to aim for, but digging too deep is costly, and staying too shallow doesn’t help your decision making. Therefore, planning the depth level of the security assessment should be a priority from the very beginning!
Starting with the context – having the end in mind
The purpose of including cybersecurity requirements in a safety instrumented system design is to make sure the reliability of the system is not threatened by security incidents. That reliability requires each safety instrumented function (SIF) to perform its intended task at the right moment; we are concerned with the availability and the integrity of the system.
In order to understand the threats to your system you need to start with the company and its place in the world, and in the supply chain. What does the company do? Consider an oil producer active in a global upstream market – producing offshore, onshore, as well as from unconventional sources such as tar-sands, arctic fields and shale oil. The company is also investing heavily in Iraq, including areas recently captured from ISIS. Furthermore, on the owner side of this company you find a Russian oligarch, who is known to be close to the Kremlin, as a majority stock holder. The firm is listed on the Hong Kong stock Market. Its key suppliers are Chinese engineering firms and steel producers, and its top customers are also Chinese government-backed companies. How does all of this affect the threat landscape as it applies to this firm?
The firm is involved in activities that may trigger the interest of hacktivists:
Unconventional oil production
Arctic oil production
It also operates in an area that can make it a target for terrorist groups – one of the most politically unstable regions in the world, where the world’s largest military powers have, to some degree, opposing interests. This could potentially draw the interest of both terrorist groups and nation state hackers. It is also worth noting that the company is on good terms with both the Russian and Chinese governments, two countries often accused of using state-sponsored hackers to target companies in the west. The largest nation state threat to this oil company may thus come from western countries, including the one headed by Donald Trump. He has been quite silent on cybersecurity after taking office, but issued statements during his 2016 campaign hinting at a more aggressive build-up of offensive capacities. The company itself should thus at least expect the interest of script kiddies, hacktivists, cybercriminals, terrorists, nation states and insiders. These groups have quite varying capacities, and the SIS is typically hard to get at due to multiple firewalls and network segregation. Our main focus should thus be on hacktivists, terrorists and nation states – with cybercriminals and insiders acting as proxies (knowingly or not).
The end in mind: keeping safety-critical systems reliable even under attack – or at least making attacks an insignificant contributor to unreliability.
Granularity of security assessment
Our goal of this discussion was to find the right depth level for risk and vulnerability assessments under IEC 61511. If we start with the threat actors and their capabilities, we observe some interesting issues:
Nation states: capable of injecting unknown features into firmware and application software at the production stage, including through human infiltration of engineering teams. In some countries these can even be “features” sanctioned by the producer. Actual operations can include cyber-physical incursions with real asset destruction.
Terrorists: infiltration of vendors is less likely. Typical capabilities are APTs using phishing to breach the attack surface, and availability attacks through DDoS, provided the SIS can be reached. Physical attack is also highly likely.
Cybercriminals: similar to terrorists, but may also have more advanced capabilities. Can also act out of own interest, e.g. through extortion schemes.
Hacktivists: unlikely to threaten firmware and software integrity. Not likely to desire asset damage as that can easily lead to pollution, which is in conflict with their likely motivations. DDoS attacks can be expected, SIS usually not exposed.
Some of these actors have serious capabilities, and it is possible that they will be used if the political climate warrants it. As we are most likely relying on procured systems from established vendors, using limited-variability languages for the SIS, we have little influence over the low-level software engineering. Configurations, choice of blocks and any inclusion of custom-designed software blocks are another story. Our assessment should thus, at the least, include the following aspects:
Procurement – setting security requirements and general information security requirements, and managing the follow-up process and cross-organizational competence management.
Software components – criticality assessment. Extra testing requirements to vendors. Risk assessment including configuration items.
Hardware – tampering risk, exposure to physical attacks, ports and access points, network access points including wireless (VSAT, microwave, GSM, WiFi)
Organizational security risks: project organization, operations organization. Review of roles and responsibilities, criticality of key personnel, workload aspects, contractual interfaces, third-party personnel.
This post does not give a general procedure for deciding the depth of analysis, but it does outline important factors. Always start with the context to judge both impact and the expected actions of threat actors. Use this to determine the capabilities of the main threat actors, which will help you decide the granularity level of your assessment. The things that are outside of your control should not be neglected either, but treated as uncertainty points that may influence the security controls you need to put in place.