Extending the risk assessment mind map for information security

This post is based on the excellent mind map posted on taosecurity.blogspot.com detailing the different fields of cybersecurity. The author (Richard) said he was not really comfortable with the risk assessment portion. I have tried to change the presentation of that portion into the more standard thinking about risk stemming from ISO 31000, rather than from security tradition.

Red team and blue team activities are presented under penetration testing in the original mind map. I agree that the presentation there is a bit off – red teaming is about pentesting, whereas the blue team is the defensive side. In normal risk management lingo these terms aren’t that common, which is why I left them out of the risk assessment mind map. For an excellent discussion of these terms, see this post by Daniel Miessler: https://danielmiessler.com/study/red-blue-purple-teams/#gs.aVhyZis.

Suggested presentation of the risk assessment mind map – wrapping it in typical risk assessment activity descriptions

The map shown here breaks down the risk assessment process into the following containers:

  • Context description
  • Risk identification
  • Risk analysis
  • Treatment planning

There are of course many links between risk assessments and other security-related activities; risk monitoring and communication processes connect these dots.

Threat intelligence is also essential for understanding the context – which in turn dictates the attack scenarios and the credibility needed to prioritize risks. Threat intelligence entails many activities, as indicated by the original mind map. One source of intel from ops that is missing on that map, by the way, is threat hunting. That also ties into risk identification.

I have also singled out security ops, as it is essential for risk monitoring. This is required at the tactical level to evaluate whether risk treatments are effective.

Further, “scorecards” is used here as a label for strategic management – integration with strategic management and governance is necessary to ensure effective risk management and to involve the right parts of the organization.

4 habits from consulting every security professional should steal

After being home on paternity leave 80% of the week and working 20% of the week, I will be switching percentages from tomorrow. That means more time to get hands-on with security. I’ve recently switched from risk management consulting to a pure security position in a fast-growing organization with a very IT-centric culture. Working one day a week in this environment has been great for getting an impression of the organization and its context, and now the real work begins. I think habits from the consulting world will be beneficial to everyone involved. Here’s how.

 

Successful consultants must be good not only in their technical area of expertise, but also at moving around in unknown territory in client organizations while navigating complex issues with many stakeholders – these are habituated skills that security professionals should adopt.

 

Slipping into someone else’s shoes

Consulting is about understanding the unarticulated problems, and getting to the core through intelligent questions. That is the essence of it; the good consultant understands that context is everything, and that the perception of context differs depending on the shoes you wear. This goes for strategy development, for risk management in general, and definitely for cybersecurity.

Use your analytics for (almost) everything

As a consultant you must be able to back up your claims. Your recommendations are expensive to get, and they’d better be worth the price. Often your recommendations will be uncomfortable for decision makers – due to cost, challenged assumptions, or misalignment with their gut feeling.

This is why consultants must be ready to back up their claims with two essential big guns: a convincing approach to analysis, and solid data. Further, to add to the credibility of the recommendations, the methods and data should be described together with the uncertainties surrounding both.

Working in security means that you are trying to protect assets – some tangible, but most are not. The recommendations you make usually carry a cost, and to convince your stakeholders that your recommendations are meaningful you need to provide the methods and the data to make them compelling. Which brings us to the next step…

Always make an effort to communicate with purpose

Analysis and data become useless without communication. This is the high-stakes part of consulting: communicating with clients, stakeholders, and internal and external subject matter experts – not only to present your facts, but to support the whole process. Understanding context is never a one-way street; it is a multifaceted, multichannel communication challenge. Understanding data and uncertainties often requires multidisciplinary input, which means questions must be asked, provocations made and conversations had. Presenting your recommendations requires public speaking skills. And following up requires perseverance, empathy and prioritization.

In cybersecurity you deal with a number of groups, each with their own perspectives. Involving the right people at the right time is key to any successful security program, ranging from optimizing automated security testing during software integration to teaching support staff about social engineering awareness.

And that leaves one more thing: learning

If there is one thing consulting teaches you, it is that you have a lot to learn. With every challenge you find another topic to dive into, another white spot in your know-how. Consultants are experts at thriving outside their comfort zones – that is what you need to do to help clients solve complex issues you have never seen before. You must constantly reinvent, you must constantly remain curious, and you must process new information every day, in every interaction you have.

Cybersecurity requires learning all the time. One thing that strikes me when looking at new attack patterns is the creativity and ingenious engineering of bad guys. Not all attacks are great, not all malware is complex, but their ability to distill an understanding of people’s behaviors into attack patterns that are hard to detect, deny and understand is truly inspiring; to beat the adversaries we can never stop learning.

How do you tell your audience that somebody found a vulnerability on your site?

Disclosing vulnerabilities is part of handling your risk exposure. Many times, web vulnerabilities are found by security firms scanning large portions of the web; reports may also come from independent security researchers who have taken an interest in your site.

Ignoring the communication issues around vulnerability disclosure can cost you a lot. Working on maturity at the top is a high ROI activity!

How companies deal with such reported vulnerabilities will usually take one of the following three paths:

  1. Fix the issue, tell your customers what happened, and let them know what their risk exposure is.
  2. Fix the issue, but try to keep it a secret.
  3. Threaten the reporter of the vulnerability, claim that there was never any risk regardless of the facts, and refuse to disclose details.

Number 2 is unfortunately perhaps still the norm. Number 1 is ideal. Number 3 is bad.

If you want to see an example of ideal disclosure, this Wired.com article about revealing password hashes in source shows how it should be done.

A different case was the Norwegian grocery chain REMA 1000, where a security researcher reported a lack of authentication between frontend and backend, exposing the entire database of customer data. They chose to go with route 3. The result: media backlash, angry consumers and the worst quarterly results since…, well, probably forever.

So, what separates the businesses that do it the right way from those that choose the path of the rambling, angry and ignorant? It is about maturity and skills at the top. This is why boards and top management need to care about information security – it is a key business issue.

 

 

Can cybersecurity culture be measured, and how can it drive national policy?

Background

NorSIS has studied what they term cybersecurity culture in Norway. The purpose of their study is to help design effective cybersecurity practices and to understand which security regulations Norwegians will typically accept.

The study sets out to measure culture, a concept that does not easily lend itself to quantification or simple KPIs. The attempt is based on a survey sent to a group of people representative of the Norwegian population.

The key insights sought by the study are summarized in 4 research questions:

  1. What characterizes the Norwegian cybersecurity culture?
  2. To what degree does cybersecurity education influence behaviors and awareness?
  3. How do Norwegians relate and react to cyber risks?
  4. To which degree do individuals take responsibility for the safety and security of cyberspace?

 

Thanks to Bjarte Malmedal for sending me a nice hardcopy of the report he wrote with Hanne Eggen Røislien – you should follow him on Twitter for insightful security discussions!

 

The cultural dimension

NorSIS does not fall into the trap of reducing culture to behaviors alone, but attempts to treat the cultural dimension as a set of norms, beliefs and practices influenced in various ways. They define 8 core issues that influence the cybercultural fabric of society:

  • Collectivism
  • Governance and control
  • Trust
  • Risk perception
  • Techno-optimism and digitalization
  • Competence
  • Interest
  • Behaviors

The discussion of these core issues that follows is sensible and logical. The authors then summarize some results from their questionnaires, mapping answers to the 8 core issues. For example, they report that only 18% of the respondents say they have little interest in IT and technology.

Competence and learning

Surprisingly, the report states that 59% of respondents report having received cybersecurity training sometime in the last 2 years (without specifying further what this entails). They also look into how people prefer to learn about security.

The authors take the perspective that many children are not receiving the cybersecurity guidance they need because only half the adult population has received cybersecurity training.

The report also notes that training is unlikely to relate the security of cyberspace as a whole to the security of individual devices.

Risk perception

A key finding in the report is that 7 of 10 respondents think they expose themselves to threats online. They associate this risk exposure with external factors rather than their own actions. Further, 6 of 10 feel confident about their own ability to identify what is and isn’t safe to do online.

The highest fear factors are found when doing online banking and using online government services. This is perhaps because it is during these activities that users interact with their most sensitive data.

Behavioral patterns

Most people report that they think about how safe a website is before using it; only 18% say they don’t think about this. The ability to actually assess this most likely varies, yet 61% report feeling competent to make such assessments.

Another interesting finding is that people report deliberately breaking security rules at work; 14% in the private sector, 8% in the public sector, and men report doing this more than women.

Risk-taking behaviors should be expected in any large group of people, and the self-reported numbers are reasonable when compared to other studies about motivation and willingness to follow corporate norms.

Study conclusions

The report draws some main conclusions based on the data gathered. One is about education, where the authors feel confident that positive security behaviors correlate with security education. They argue that it should be a government responsibility to educate the population about security, e.g. by making it part of the school curriculum.

Regarding the surveillance–privacy tension in cybersecurity governance, the authors conclude that people mostly support giving the police the authority and tools to fight cybercrime, but they do not believe they will get any help by going to the police. Only 13% of victims of cybercrime file a police report.

They further propose policy for government action; primarily strengthening security education in the school system, and giving law enforcement further tools to fight cybercrime.

My thoughts on this

This report is an interesting piece of work, in many respects confirming with data the assumptions security professionals tend to make about people in general, and perhaps about the “typical user”.

The research questions asked at the outset of the report are perhaps implicitly answered through data and interpretations of those data. I will try to add my impression based on the report, and based on my personal experience from the corporate world.

What characterizes the Norwegian cybersecurity culture?

Norwegians are tech savvy – in the sense that they use technology. The report indicates that a lot of people are confident about their own use of technology, and most people believe they can assess what is safe and not safe to do online. When the report drills down into some behavioral aspects, there are issues that may paint a somewhat different picture.

  • People still use the same password on many services, although many report sounder practices. It is not unlikely that this self-reporting is skewed because people answer what they know they should be doing, instead of what they are actually doing.
  • People feel at risk when using online services, but still most people do not back up their data more often than every month, 15% report they never back up data, and 10% say they don’t even know. If the “correct answer” bias is affecting the results here, the situation is likely worse than this in practice. Think about the question: “how often do you check the oil on your car?”. Most people would like to say they do this regularly, like every month – but we all know that is not true.

The question asked about backup was actually how often people back up data that is important to them. I have a suspicion that a lot of people have never thought about what data is important. Is it the pictures of the grandchildren? Is it your financial documents, insurance papers, etc? Is it the recipe collection you keep in Microsoft OneNote? Most people will never have thought about this. A lot of people also believe nothing bad can happen as long as they store their files in the cloud. Beliefs are thus often formed without the competence needed to form informed decisions about value and risk.

My conclusion is that Norwegians are feeling quite confident about their own security practices, without necessarily having very good practices. Overconfidence is often a sign of insufficient know-how, which for the population as a whole is probably the case.

To what degree does cybersecurity education influence behaviors and awareness?

The effectiveness of cybersecurity education is a big area of debate, especially in the corporate world, and it has also been discussed at length in academia. You can read my take on when awareness training actually works here: https://safecontrols.blog/2017/02/16/when-does-cybersecurity-awareness-training-actually-work/.

Awareness training is often about practices – knowing what to do. Then comes motivation and the habituation of that information – how can you make theory into practice, how can you make a conscious effort into habit and second nature? I think two important things are at play here that we tend to underestimate; building on a feeling of responsibility for the collective good (which is also one of the 8 core issues of cybersecurity culture as defined in the NorSIS report), and creating skills that lower the effort barrier for secure practices. People who feel the use of IT is difficult are unlikely to change their existing habits before the “difficulty barrier” has been reduced.

This is where schools can play a role, as NorSIS suggests – but that is also a major challenge given the current state of affairs, at least in Norwegian schools. I have been arranging an after-school coding activity for elementary school pupils for a couple of years (mostly based on Scratch, and some Python). What is very visible in those sessions is that socio-economic background correlates to a very large degree with children’s technical know-how. A lot of the teachers also lack the know-how, and perhaps the interest, to be an equalizing factor when it comes to technology, although political efforts do exist to make technology a more central topic in schools. In this regard Norway is currently lagging behind similar nations, like Sweden or the United Kingdom, where IT plays a bigger and more fundamental role in education.

How do Norwegians relate and react to cyber risks?

People worry about cyber risks, and they worry more the older they get. Another interesting aspect is that people are worried about being subject to online credit card fraud, whereas using debit or credit cards online is one of the behaviors with lower perceived risk scores in the study. Further, using online banking is seen as a low risk activity – which correlates well with banks being seen as “secure”.

Ironically, “using email” is only perceived as slightly more risky than using online banking – in spite of social engineering through e-mail being the primary initial attack vector for 30 years, and still going strong.

They also conclude that having received cybersecurity education does not necessarily change how people perceive online risks, and that this is at odds with how many security professionals view the effects of awareness training. This does not come as a surprise – changing feelings by transfer of facts is not likely a good strategy, and risk perception at the personal level is typically based on feelings, as the report also correctly states. Changing risk perception requires continuity, leadership and the challenging of assumptions among peers – it requires the evolution of culture, and that is a slow beast to move. Training is only one of many levers to pull to achieve that.

To which degree do individuals take responsibility for the safety and security of cyberspace?

Creating botnets would be really hard if all devices were patched, hardened and all users careful to avoid social engineering schemes. This is not something most people are thinking about when they dismiss the prompt to update their iOS version for the n’th time.

Most people probably don’t realize that it is the collective security of all connected devices combined that makes up the security landscape of the internet as a whole. Further, it is easy to fall into the thinking trap that “there are so many computers that my actions have no impact” – more or less like the “my vote doesn’t count” reasoning among voters who stay at home on election day.

NorSIS sees education as a possible medicine, and that is definitely part of the story. Perhaps that educational effort should be distributed among many different curriculums – languages, social sciences, IT, mathematics – to help form a consensus about why individual actions count for the safety of the many.

Summary of the summary

  • The NorSIS report on Norwegian cybersecurity culture is an ambitious project trying to highlight how society as a whole deals with security practices, beliefs, education and perceptions
  • The report indicates that interest and motivation are key drivers of positive security behaviors, and of know-how
  • There is an indication that education works in driving good behaviors. Security training seems to be less effective in changing risk perception. This should not be surprising given what we know about change processes in corporate environments: transfer of know-how is not enough to change attitudes and norms.
  • There is a clear recommendation to increase security competence through the educational system. This seems well-founded and something all nations should consider.

Security Awareness: A 5-step process to making your training program role based and relevant

Security awareness training is one of many strategies used by companies to reduce their security risks. It seems like an obvious thing to do, considering the fact that almost every attack contains some form of social engineering as the initial perimeter breach. In most cases it is a phishing e-mail.

Security awareness training is often cast as mandatory training for all employees, with little customization or role-based adaptation. As discussed previously, this can have detrimental effects on the effectiveness of training, on your employees’ motivation, and on the security culture as a whole. Only when we manage to deliver a message adapted to both skill and motivation levels can we hope to be successful in our awareness training programs: When does cybersecurity awareness training actually work?

Many employees will need training on identifying malicious links in e-mails, or on understanding that they should not reuse the same password on every account. Other employees may have a higher level of security understanding – typically an understanding linked to the role they have and the responsibilities they take. So while the awareness training for your salesforce may look quite similar to the training you give to your managers and customer service specialists, the security awareness discussions you need to have with your more technical teams may look completely different. They already know about password strength. They already know how to spot shaky URLs and strange domains.

What they may not understand (without having thought about it and trained for it) is how their work practices can make products and services less secure – forcing us to rely even more on awareness training for the less technically inclined coworkers, customers and suppliers. One example of a topic for a security conversation with developers is the use of authentication information during development, and how this information is treated as the code evolves. Basically: how to avoid keeping your secrets where bad guys can find them because you never considered the fact that they are still there – more or less hidden in plain sight. Like this example, with hardcoded passwords in old versions of a git repository: Avoid keeping sensitive info in a code repo – how to remove files from git version history
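As a minimal sketch of the developer-side practice discussed here – reading credentials from the environment instead of hardcoding them – the following shell snippet uses invented variable names and a placeholder value:

```shell
#!/bin/sh
# Sketch: keep credentials out of the source tree so they never enter
# version history. All names and values below are invented for illustration.

# Anti-pattern -- a secret literal committed with the code:
#   DB_PASSWORD="hunter2"        # lives forever in git history

# Better -- take the secret from the runtime environment (CI secret store,
# deployment tooling, etc.). A placeholder fallback is used for this demo:
DB_PASSWORD="${DB_PASSWORD:-placeholder-for-demo}"

# Downstream commands only ever see the variable, never a literal:
echo "password loaded from environment (length ${#DB_PASSWORD})"
```

In a real setup the variable would be injected by the secret store or deployment tooling, so no secret literal ever touches the repository.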

So, how can you plan your security conversations to target the audience in a good way? For this you need to do some up-front work – as any good teacher would tell you to do for all students: people differ in skills, knowledge, motivation for compliance, and motivation to learn. This means that tailoring your message for maximum effect is going to be very hard, yet still very necessary.

The following 5-step process can be helpful in planning your content, delivery method and follow-up for a more effective awareness training session.

A 5-step process for preparing your awareness training sessions. 

First you need to specify the roles in the organization that you want to convey your message to. What would the role holders expect from good security awareness training? What are the responsibilities of these roles? Are those responsibilities well understood, both by the people holding the roles and by the organization as a whole? Clarity here will help, but if the organization is less mature, understanding that fact will help you target your training. A key objective of awareness training should be to facilitate role clarification and to surface expectations that always exist, though sometimes implicitly rather than explicitly.

When the role has been clarified, as well as the expectations attached to it, you need to consider the skillsets involved. Are they experts in log analysis from your sysadmin department? Don’t insult them by stressing that it is important to log authentication attempts – this sort of thing kills motivation and makes key team members hostile to your security culture project. For technical specialists, use their own insights about deficiencies to target the training. Look also to external clues about technical skill levels and policy compliance – security audit reports and audit logs are great starting points, in addition to talking to some of the key employees. But remember: always start with the people before you dive into technical artefacts. And don’t overdo it – you are trying to get a grasp of the general level of understanding in your audience, not evaluate them for a new job.

The next point is to consider the atmosphere in the group you are talking to. Are they motivated to work with policies and stick with the program? Do they oppose the security rules of the company? If so, do you understand why? Make sure role models understand they are role models. Make sure policies make sense, also for your more technical people. If a lack of leadership is an underlying reason for low motivation to get on board the security train, work with senior leadership to address this. Get the leadership in place, and focus on motivation before extra skills – nobody will operationalize new skills if they do not agree with the need to do so, or at least understand why it makes sense for the company as a whole. You need to get the whole leadership team on board, and you will probably need to show quite a bit of leadership yourself to pull off a successful training event in a low-motivation environment.

Your organization hopefully has articulated security objectives. For a more in-depth discussion on objectives, see this post on ISO 27001. Planning in-depth security awareness training without a clear picture of the objectives the organization is hoping to achieve is like starting an expedition without knowing where you are trying to end up: painful, time-consuming, costly and probably not very useful. When you do have the objectives in place, assess how the roles in question are going to support them. What are the activities and outcomes expected? What are the skillsets required? Why are these skillsets required, and are they achievable given the starting point? When you are able to ask these questions you are starting to get a grip not only on the right curriculum, but also on the depth level you should aim for.

When you have gone through this whole planning exercise to boil down the necessary curriculum and the level of detail you should be talking at, you are ready to state the learning goals for your training sessions. Learning goals are written expressions of what your students should gain from the training, in terms of the abilities they acquire. These goals make it easier for you to develop the material using the thinking of “backwards course design”, and they make it easier to evaluate the effectiveness of your training approach.

Finally, remember that training outcomes do not come from coursework, e-learning or reading scientific papers. They come from practice, from operationalization of the ideas discussed in training, and from culture – when practice is so second nature that it becomes “the way we do things around here”.

To achieve that you need training, you need leadership, and you need people with the right skills and attitudes for their jobs. In order to succeed with security, the whole organization must pull the load together – which makes security not only IT’s responsibility but everybody’s. And perhaps most of all, it is the responsibility of the CEO and the board of directors. In many cases, lack of awareness in the trenches – in the form of missing secure dev practices, bad authentication routines and insufficient testing – stems from a lack of security prioritization by the board.



 

Avoid keeping sensitive info in a code repo – how to remove files from git version history

One of the vulnerabilities that is really easy to exploit is when people leave super-sensitive information in source code – and you get your hands on this source code. In early prototyping a lot of people will hardcode passwords and certificate keys in their code, and remove them later when moving to production code. Sometimes they are not even removed from production. But even when you do remove them, this sensitive information can linger in your version history. What if your app is an open source app where you are sharing the code on GitHub? You probably don’t want to share your passwords…

Don’t let bad guys get the key to your databases and other valuable files by searching old versions of your code in the repository.

Getting this sensitive info out of your repository is not as easy as deleting the file from the repo and adding it to the .gitignore file – because this does not touch your version history. What you need to do is this:

  • Merge any remote changes into your local repo, to make sure you don’t remove the work of your team if they have committed after your own last merge/commit
  • Remove the file history for your sensitive files from your local repo using the filter-branch command:

git filter-branch --force --index-filter \
'git rm --cached --ignore-unmatch PATH-TO-YOUR-FILE-WITH-SENSITIVE-DATA' \
--prune-empty --tag-name-filter cat -- --all

Although the command above looks somewhat scary, it is not that hard to dig out – you can find it in the GitHub docs. When that’s done, there are only a few more things to do:

  • Add the files in question to your .gitignore file
  • Force-push to the remote repo (git push origin --force --all)
  • Tell all your collaborators to clone the repo as a fresh start, to avoid them merging the sensitive files back in
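To see the whole sequence work end to end, here is a self-contained sketch against a throwaway repository. The repo name, file names and commit messages are invented for the demo; the filter-branch invocation itself is the standard one from GitHub’s documentation:

```shell
#!/bin/sh
# Demo: a file deleted in a later commit is still reachable in history
# until filter-branch rewrites every revision. All names are made up.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q demo && cd demo
git config user.email "demo@example.com"
git config user.name "Demo User"

echo "readme" > README
echo "password=supersecret" > config.secret
git add README config.secret
git commit -qm "initial commit (accidentally includes a secret)"

git rm -q config.secret
git commit -qm "remove secret file (still present in history!)"

# The secret is still reachable in the previous revision:
git show HEAD~1:config.secret        # prints the secret line

# Scrub the file from every commit on every branch:
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch --force --index-filter \
  'git rm --cached --ignore-unmatch config.secret' \
  --prune-empty --tag-name-filter cat -- --all

# No revision on the rewritten branch mentions the file any more:
git log --oneline -- config.secret
echo "history rewritten"
```

Note that filter-branch keeps a backup of the old history under refs/original/ – clean that up (and repack) before force-pushing, or the “removed” data is still sitting in your local repo.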

Also, if you have actually pushed sensitive info to a remote repository, particularly if it is an open source publicly available one, make sure you change all passwords and certificates that were included previously – this info should be considered compromised.



Cybercrime one of 5 top organized crime threats to Europe according to EUROPOL

Europol has recently released its 2017 Serious and Organised Crime Threat Assessment (SOCTA) for the EU. In this report they identify 5 key threats to Europe from organized crime groups. In addition to cybercrime itself, the report highlights illicit drug crimes, migrant smuggling, organized property crime and labor market crime. Cybercriminal activities are often integral to, or supportive of, the other key operations of organized crime groups.

Organized crime groups are highly adaptable, and cybercrime is now an enabler of much of their more traditional criminal business. Threat intelligence becomes a key part of any defense strategy when the adversary is a powerful and diverse organization.

Key tools of organized crime groups are:

  • Corruption
  • Counterintelligence against law enforcement
  • Money laundering
  • Document fraud
  • Online trade
  • Technology
  • Violence and extortion

They carry out crimes through currency counterfeiting and various cybercrimes, including child exploitation, payment fraud, data trade and malware campaigns. Sports corruption is also a major area for organized criminals, who draw profits from the gambling markets.

Document fraud is increasing and is a significant threat to Europe. It is an enabler of many types of criminal activities, including terrorism. Fraudulent documents are increasingly traded online.

Document fraud is one of the key drivers of identity theft. Document fraud can be necessary to facilitate other criminal activities, and cyberattacks may be used to steal credentials used to obtain documents.

Trade in illicit goods is increasing, and a lot of this trade is conducted on darknet sites. Key products are drugs, illegal firearms and malware. Other crime-as-a-service segments are also of interest, like botnets for hire, ransomware-as-a-service and exploit coding. Europol sees crime-as-a-service as a growing threat to society, according to the SOCTA 2017 report. In particular, the growth in ransomware targeting not only individuals but also public and private organizations is worrying.

Geopolitical events are driving changes in organized crime in Europe. Conflicts close to European borders are influencing crime through migration, need for illicit goods, as well as European targets being picked by non-European fighters performing terrorist acts in Europe. Cybercrime is one source of funding for such terror groups, in addition to cybercrime being an enabler of the organized crime groups that support the needs of terrorism through illicit firearms trade, trade in drugs and narcotics and human trafficking.

Pulling EUROPOL’s intelligence into your cybersecurity threat context

What does this mean for European businesses? Depending on your exposure, technology base and value chain, this may affect the threat landscape for your organization.

  • Increasing the direct threat level, e.g. ransomware and payment fraud
  • Supply chain effects, including money laundering schemes
  • Threats to your intellectual property
  • Corruption affecting your markets, including partners, owners, suppliers and customers
  • Potential investments from money laundering schemes into your infrastructure

If growth in the activities of organized crime groups affects your threat landscape, it may also mean that you need to rethink your cybersecurity defense priorities. Is availability still the main threat, or are confidentiality issues coming to the forefront?


Hashtag bots spreading spam and malicious links

Automation is a part of social media today. Bots can help locate, aggregate and share interesting content. They can also be used to spread spam and malicious links. 

Try any popular hashtag and it is quite likely a bot will retweet you. Some of these bots have lots of followers – potentially reaching a lot of possible fraud victims. Here's one example: the Twitter account @thehackerbot intends to retweet hacker news. One of its triggers is the hashtag #hacked. 

The hacker bot

It does retweet a lot of hacker news. But then we also have this:

From Russia with love – spam retweeted by a bot

So, if you want to create a retweet bot, it is a great opportunity to work on your machine learning and AI skills – teach your bot to filter out spam. 
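Before reaching for machine learning, even a crude heuristic pass helps. Here is a toy sketch of such a filter – the blocklist phrases are hypothetical examples, and this is not a real Twitter API client:

```python
# Illustrative spam filter for a retweet bot: score candidate tweets
# before retweeting, and skip the ones that look like spam.
SPAM_MARKERS = ("free followers", "hot singles", "click here", "bit.ly")  # hypothetical blocklist

def looks_like_spam(text: str) -> bool:
    lowered = text.lower()
    # Any blocklisted phrase is an immediate red flag
    hits = sum(marker in lowered for marker in SPAM_MARKERS)
    # Hashtag stuffing is another common spam signal
    too_many_tags = lowered.count("#") > 5
    return hits > 0 or too_many_tags

print(looks_like_spam("New post on #hacked credentials and response"))  # False
print(looks_like_spam("Hot singles near you! click here #hacked"))      # True
```

A real bot would feed signals like these into a trained classifier instead of hard-coded rules, but the filtering step sits in the same place in the pipeline.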

Is complexity better than length when it comes to passwords?

Most organizations have password policies that require users to change their passwords every XX days, and to use a minimum (or sometimes fixed!) length and a combination of uppercase and lowercase letters, numbers and special symbols. But what exactly makes a password "strong", or difficult to guess?

Entropy can be used to measure the complexity of an information string – or, if you want, the number of possible combinations within the given "rule" for constructing the string. To calculate the information entropy of your password, use this formula:

ENTROPY = (LOG(size of character set) / LOG(2)) × (length of password)

So, comparing a password using only lower case letters with one using a combination of upper and lower case, we get an entropy of 37 bits in the first case and 45 bits in the latter (higher entropy is better). This means the latter is harder to crack using brute force – but how much so? Open security research has made a brute force time calculator that we can use to estimate that, based on benchmarks for common cracking tools running on a regular consumer grade PC. Assuming a salted, SHA-hashed password, we get about 7 hours to crack the first and 2000 hours to crack the latter – entropy is obviously a big deal. As we see from the formula above, increasing the character set size is one way to increase entropy; the other is increasing the length of the password itself. Note that using symbols or characters not normally found in words is still necessary to avoid dictionary-based attacks – the brute force times here are "worst-case times" from the attacker's perspective, i.e. the time it takes to exhaust the entire character space.
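The two example calculations above can be reproduced in a few lines (a sketch; 26 and 52 are the lowercase-only and mixed-case English alphabet sizes):

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """Entropy of a password drawn uniformly from a set of charset_size characters."""
    return math.log(charset_size) / math.log(2) * length

print(int(entropy_bits(26, 8)))  # 37 bits: lowercase only, 8 characters
print(int(entropy_bits(52, 8)))  # 45 bits: lower + upper case, 8 characters
```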

What is better – more characters or longer passwords?

Turning to some basic maths, we can use the entropy formula to compare the effects of increasing character set size versus password length. Entropy is proportional to the logarithm of the character set size – which means the marginal entropy gain from growing a set of size c is proportional to 1/c. When c is large, this approaches zero; increasing the set size is efficient while the set is small, but the value of doing so shrinks as the set grows larger.
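A quick numeric check of this diminishing-returns effect, using the entropy formula above:

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    return math.log2(charset_size) * length

# Growing the charset (8-char password): doubling 26 -> 52 adds 8 bits,
# but the next 26 characters (52 -> 78) add only ~4.7 bits.
print(round(entropy_bits(52, 8) - entropy_bits(26, 8), 2))  # 8.0
print(round(entropy_bits(78, 8) - entropy_bits(52, 8), 2))  # 4.68

# Growing the length (52-char set): every extra character adds the same bits.
print(round(entropy_bits(52, 9) - entropy_bits(52, 8), 2))  # 5.7
```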

The effect of increasing character set size on entropy is best when the charset is still small. 

The effect of increasing password length, however, is linear: for a given charset size, each extra character adds the same number of bits regardless of how long the password already is. What does this mean in practice?

  • Add complexity up to a certain level – enough that dictionary attacks are no longer an efficient way to brute-force the password
  • Increase length after that instead of including more complexity

Using the brute force time calculator, we estimate the following exhaustion times:

  • Lower case letters, 8-character password: 7 hours to crack
  • Lower case and upper case letters, 8-character password: 2000 hours to crack
  • Lower case letters, 16-character password: 189 million years to crack
  • Lower and upper case letters, 16-character password: 12 trillion years to crack
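These figures scale with the size of the keyspace. A sketch calibrated to the 7-hour lowercase-8 baseline (the calibration constant is an assumption taken from the estimate above) reproduces their relative magnitudes:

```python
# Brute-force exhaustion time scales with keyspace = charset_size ** length.
# Calibrated (an assumption) to the ~7-hour estimate for an 8-character
# lowercase password from the calculator above.
BASELINE_HOURS = 7.0
BASELINE_KEYSPACE = 26 ** 8

def exhaust_hours(charset_size: int, length: int) -> float:
    return BASELINE_HOURS * charset_size ** length / BASELINE_KEYSPACE

print(round(exhaust_hours(52, 8)))            # 1792 hours - same order as the ~2000 h above
print(f"{exhaust_hours(26, 16) / 8766:.1e}")  # 1.7e+08 years, i.e. hundreds of millions
```

Note how doubling the charset multiplies the effort by 2^8 = 256 for an 8-character password, while doubling the length multiplies it by the entire original keyspace.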

Logical conclusion: use passphrases with some added complexity. This makes a brute-force attack on your password extremely difficult.

Top of the iceberg: politicians’ private email accounts and shadow IT

In CISO circles the term "shadow IT" is commonly used to describe employees using private accounts, devices and networks to conduct work outside the company's IT policies. People often do this because they feel they don't have the freedom to get the job done within the rules.

If you deny your people a well-stacked toolbox, they will bring their own. That may not be the best solution for your security. 

This is not only for low-level clerks and helpdesk ninjas: top level managers are known to do this a lot, including politicians. Hillary Clinton probably lost the presidential election at least partially due to her poor security awareness. Now Vice President Mike Pence has also been outed as a "private email wielding public servant" – and he was hacked too. Why do people do this?

Reasons why people do their business in the IT shadows

I’ll nominate 3 main reasons why people tend to use private and unauthorized tools and services in companies and public service. Then let’s look at what we can do about it, because this is a serious expansion of the organization’s attack surface! And we don’t want that, do we?

I believe (based on experience) the 3 main reasons are:

  1. The tools they are provided with are hard to use, impractical or not available
  2. They do not understand the security implications and have not internalized what secure behaviors really are
  3. The always-on culture is blurring the distinction between "work" and "personal"; people don't see that the risks they are willing to take in their personal lives also affect their organizations, which typically have a completely different risk context

How to avoid the shadow IT rabbit hole of vulnerabilities

First of all, don't treat your employees and co-workers as idiots. IT security is very often about locking everything down and hardening machines and services. If you go too far in this direction you make it very hard for people to do their jobs, and you can end up driving them into the far riskier practice of inventing their own workarounds using unauthorized solutions – like private email accounts. Make sure controls are balanced, and don't forget that security is there to protect productivity – it is not the key product of most organizations. Therefore, your risk governance must:

  • Select risk-based controls – don’t lock everything down by default
  • Provide your employees with the solutions they need to do their jobs
  • Remember that no matter how much you harden your servers, the human factor still remains.

Second, make people your most important security assets. Build a security aware culture. This has to be done by training, by leadership and by grassroots engagement in your organization.

Third, and finally for now: disconnect. Allow people to disconnect. Encourage it. Introduce a separation between what is private and what is work or for your organization. This is important because the threat contexts of the private sphere and the organizational sphere are in most cases very different. This is also the most difficult part of the management equation: allowing flexible work while ensuring there is a divide between "work" and "life". This is what work-life balance means for security; it allows people to maintain different contexts for different parts of their lives.