Security should be an organization-wide effort. That means getting everyone to play the same game, requiring IT to stop thinking about “users as internal threats” and start thinking about “internal customers as security enhancers”. This can only be achieved by using balanced security measures, involving the internal customers (users) through sharing the risk picture, and putting risk-based thinking behind security planning to drive rational and balanced decisions. For many organizations with a pure compliance focus this can be a challenging journey – but the reward at the end is an organization better equipped to tackle a dynamic threat landscape.
Users have traditionally been seen as a major threat in information security settings. The insider threat is very real, but this does not mean that the user is the threat as such. There has recently been much discussion about how we can achieve a higher degree of cybersecurity maturity in organizations, and whether cybersecurity awareness training really works. This post does not give you the answers, but it describes some downsides of the compliance-oriented tradition. The challenge is to find a good balance between controls and compliance on one side, and driving a positive security culture on the other.
Most accidents involve “human error” as part of the accident chain, much like most security breaches also involve some form of human error – typically a user failing to spot a social engineering attempt in a situation where the security technology is also unable to make a good protection decision. Email is still the most common malware delivery method, and phishing would not work without humans on the other end. This is the picture your security department is used to seeing: the user performs some action that allows the attacker to penetrate the organization. Hence, the user is a threat. The supposed cure is cybersecurity awareness training teaching users not to open attachments from sketchy sources, not to click those links, not to use weak passwords, and so on. The problem is that this only partially works. Some people have even gone so far as to say it is completely useless.
The other part of the story is the user who reports that his or her computer is misbehaving, that some resources have become unavailable, or who forwards spear-phishing attempts. Those users are complying with policy and allowing the organization to spot potential recon or attack attempts before the fact, or at least relatively soon after a breach. These users are security enhancers – exactly what security awareness training is trying to create, beyond merely making users a little bit less dangerous.
Because people do risky things when given the chance, the typical IT department answer to the insider threat is to lock down every workstation as much as possible – to “harden it”, i.e. to make the attack surface smaller. This attack surface view, however, only considers the technology, not the social component. If you lock down the systems more than users feel is necessary, they will probably start opposing company policies. They will not report suspicious activities as often anymore. They will go through the motions of your awareness training, but little behavioral change will be seen afterwards. You risk that shadow IT starts to take hold of your business – that employees use their private cloud accounts, portable apps or private computers to do their jobs – because the tools they feel they need are locked down, made inflexible or simply unavailable by the IT department in order to “reduce the attack surface”. So not only do you risk priming your employees for social engineering attacks (angry employees are easier to manipulate) and making your staff less able to benefit from your training courses, but you may also be significantly increasing the technical attack surface through shadow IT.
So what is the solution – to allow users to do whatever they want on the network, give them admin rights and apply no controls? Obviously a bad idea. The keywords are balanced measures, involvement and risk-based thinking.
Balanced: there must be a balance between security and productivity. A full lockdown may be required for information objects of high value to the firm and with credible attack scenarios, but not every piece of data and every operation is in that category.
Involvement: people need to understand why security measures are in place to make sense of the measures. Most security measures are impractical to people just wanting to get the job done. Understanding the implications of a breach and the cost-benefit ratio of the measures in place greatly helps people motivate themselves to do what feels slightly impractical.
Risk based thinking: measures must be adequate to the risk posed to the organization and not exaggerated. The risk picture must be shared with the employees as part of the security communication – this is a core leadership responsibility and the foundation of security aware cultures.
In the end it comes down to respect. Respect other people for what they do, and what value they bring to the organization. Think of them as customers instead of users. Only drug dealers and IT departments refer to their customers as users (quoted from somewhere forgotten on the internet).
The EU is ramping up its focus on privacy with a new regulation that will be implemented into local legislation in the EEA from 2018. The changes are huge for some countries, and in particular the sanctions the new law makes available to authorities should be cause for concern for businesses that have not adapted. Shockingly, a Norwegian survey shows that 1 in 3 business leaders have not even heard of the new legislation, and 80% of the respondents have not made any effort to learn about the new requirements and their implications for their business (read the DN article here in Norwegian: http://www.dn.no/nyheter/2017/02/18/1149/Teknologi/norske-ledere-uvitende-om-ny-personvernlov). The Norwegian Data Protection Authority calls this “shocking”, noting that all businesses will face new requirements and that it is the duty of business leaders to orient themselves about this and act to comply with the new rules.
Here’s a short form of key requirements in the new regulation:
You need to do a risk assessment for privacy and data protection of personal data. The risk assessment should consider the risk to the data subject – the person the data is about – not only the business. If the potential consequences of a data breach are high for the data subject, the authorities should be involved in discussions on how to mitigate the risk.
All new solutions need to build privacy protections into the design. The highest level of data protection in a software’s settings must be used as default, meaning you can only collect a minimum of data by default unless the user actively changes the settings to allow you to collect more data. This will have large implications for many cloud providers that by default collect a lot of data. See for example here, how Google Maps is collecting location data and tracking the user’s location: https://safecontrols.blog/2017/02/18/physically-tracking-people-using-their-cloud-service-accounts/
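As a rough illustration of what “highest level of data protection by default” can mean in a software design, here is a minimal Python sketch. The class, flags and data categories are all invented for illustration – the point is simply that every optional collection flag defaults to off, so the service collects the minimum unless the user actively opts in:

```python
from dataclasses import dataclass

# Hypothetical consent settings for a cloud service, illustrating
# "data protection by default": every optional collection flag is
# off unless the user actively opts in.
@dataclass
class PrivacySettings:
    store_account_email: bool = True    # needed to deliver the service at all
    collect_location: bool = False      # opt-in only
    collect_usage_analytics: bool = False
    share_with_partners: bool = False

    def allowed_data_categories(self):
        """Return the data categories this configuration permits collecting."""
        categories = ["account_email"]
        if self.collect_location:
            categories.append("location")
        if self.collect_usage_analytics:
            categories.append("usage_analytics")
        if self.share_with_partners:
            categories.append("partner_sharing")
        return categories

# With the defaults, only the minimum is collected:
print(PrivacySettings().allowed_data_categories())  # ['account_email']
```

A provider that today ships with analytics and location tracking enabled would, under this principle, have to invert those defaults and ask the user to switch them on.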
All services run by authorities, and most services run by private companies, will require the organization to assign a data protection officer responsible for compliance with the GDPR and for communicating with the authorities. This applies to all businesses that handle personal data at a certain scale and frequency in their operations – meaning that in practice most businesses must have a data protection officer. It is permissible to hire a third party for this role instead of having an employee fill the position.
The new regulation also applies to non-European businesses that offer services to Europe.
The new rules also apply to data processing service providers and subcontractors. That means cloud providers must follow these rules too, even when the service is used by a customer who must also comply in their own right.
There will be new rules about communication of data breaches – both to the data protection authorities and to the affected data subjects. All breaches that have implications for individuals must be reported to the data protection authorities within 72 hours of becoming aware of the breach.
The data subjects hold the keys to your use of their data. If you store data about a person and this person orders you to delete their personal data, you must do so. You are also required to let the person transfer personal data to another service provider in a commonly used file format if so requested.
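A minimal sketch of what honoring a data portability request could look like. The record structure and field names below are invented for illustration; a real implementation would gather everything linked to the data subject from your actual systems:

```python
import json

# Hypothetical record store; in a real system this would query your
# databases for everything linked to the data subject.
records = {
    "subject_id": "12345",
    "name": "Kari Nordmann",
    "email": "kari@example.com",
    "orders": [{"order_id": 1, "item": "widget"}],
}

def export_for_portability(subject_records: dict) -> str:
    """Serialize a data subject's records to JSON – a commonly used,
    machine-readable format suitable for transfer to another provider."""
    return json.dumps(subject_records, indent=2, ensure_ascii=False)

print(export_for_portability(records))
```

Deletion requests are the mirror image: the same data inventory that makes export possible is what lets you find and erase every copy.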
The new regulation also provides the authorities with the ability to impose very large fines – up to 20 million Euros or up to 4% of global annual turnover, whichever is greater. This is, however, a maximum and not likely to be the normal sanction. A warning letter would be the start, then audits from the data protection authorities. Fines can be issued but will most likely be within the common practice of corporate fines in the country in question.
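As a quick illustration of how the fine ceiling scales with company size, the cap is the greater of a fixed amount and a share of turnover. A minimal sketch (the turnover figures are made up):

```python
def gdpr_fine_cap(global_annual_turnover_eur: float) -> float:
    """Upper bound on a GDPR fine: the greater of EUR 20 million
    or 4% of global annual turnover."""
    return max(20_000_000, 0.04 * global_annual_turnover_eur)

# A smaller firm is bounded by the fixed 20M EUR ceiling (4% would only be 2M):
print(gdpr_fine_cap(50_000_000))
# A large multinational is bounded by the 4% rule instead:
print(gdpr_fine_cap(2_000_000_000))  # 80 million EUR
```

In other words, for any business with global turnover above 500 million EUR, the percentage rule dominates.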
“Personal data should be processed in a manner that ensures appropriate security and confidentiality of the personal data, including for preventing unauthorised access to or use of personal data and the equipment used for the processing.”
This means that you should implement reasonable controls for ensuring the confidentiality, integrity and availability of these data and the processing facilities (software, networks, hardware, and also the people involved in processing the data). It would be a very good idea to implement at least a reasonable information security management system, following good practices such as those described in ISO 27001. If you want a roadmap to an ISO 27001-compliant management system, see this post summarizing the key aspects: https://safecontrols.blog/2017/02/12/getting-started-with-information-management-systems-based-on-iso-27001/.
There are two big trends in the power utilities business today – with opposing signs:
Addition of micro-producers and microgrids, making consumers less bound to the large grid operators
Increasing integration of power grids over large distances, allowing mega-powerplants to serve enormous areas
Both trends will have impact on grid resilience; the microgrids are usually connected to regional grids in order to sell surplus power, and the mega plants obviously require large grid investments as well. When we seek to understand the effect on resilience we need to examine two types of events:
Large-scale random events threatening the regularity of the power transmission capability
Large-scale attack by SCADA hackers that knock out production and transmission capacities over extended areas
We will not perform a structured risk assessment here but we will rather look at some possible effects of these trends when it comes to power regularity and (national?) security.
Recent events that are interesting to know about
Mega-plants and increasing grid integration
Power plants are in the wind, literally speaking. The push to bring renewables to the market is driving concrete large-scale investments. Currently several interesting projects are moving ahead:
Fosen Vind: Europe’s largest onshore wind park under construction in central Norway with a total capacity of 1000 MW: https://en.wikipedia.org/wiki/Fosen_Vind. This wind park requires investments in the core cable for the grid, and coincides with increased capacity in transfer lines between Scandinavia and continental Europe (Germany and the Baltics)
In addition to this, we see that NERC, the American organization responsible for the reliability of the power grids in the United States, Canada and parts of Mexico, is working to include Mexico as a full member. This will very likely lead to increased integration of power transmission capacities across the U.S.–Mexico border, at least at the organizational and grid management levels.
Random faults and large-scale network effects
What happens to the transmission capacity when random faults occur? This depends on the redundancy built into the network, and the capacities of the remaining lines when one or more paths fail. As more of the energy mix moves towards renewables we are going to be even more dependent on a reliable transmission grid; renewable energy is hard to store, and the cost of high-capacity storage will add to the energy price, making renewable sources less competitive compared with fossil fuels.
If we start relying on mega plants, this is also going to make us depend more on a reliable grid. The network effects would have to be investigated using methods like Monte Carlo simulations (RAM analysis) but what we should expect is:
Mega plants will require redundancy in intercontinental grid connections to avoid blackouts if one route is down
Areas without access to base load energy supply would be more vulnerable than those that can supply their own energy locally
Prices will fluctuate over larger areas when energy production is centralized
Micro-grids and micro-production should alleviate some of the increased vulnerability for small consumers (like private households) but are unlikely to be an effective buffer for industrial consumers
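The kind of Monte Carlo exercise mentioned above can be sketched very simply. The toy example below estimates the blackout probability for a region fed by redundant transmission routes versus a single route; the failure probabilities are invented for illustration, and a real RAM analysis would model capacities, repair times and load flows:

```python
import random

def blackout_probability(n_routes: int, p_route_fail: float,
                         trials: int = 100_000, seed: int = 1) -> float:
    """Monte Carlo estimate of the probability that ALL transmission
    routes to a region fail in the same period (a blackout),
    assuming independent route failures."""
    rng = random.Random(seed)
    blackouts = 0
    for _ in range(trials):
        if all(rng.random() < p_route_fail for _ in range(n_routes)):
            blackouts += 1
    return blackouts / trials

# Invented figure: each route has a 5% chance of failing in a given period.
print(blackout_probability(1, 0.05))  # ~0.05 – single route, no redundancy
print(blackout_probability(2, 0.05))  # ~0.0025 – redundancy helps a lot
```

Even this toy model shows why mega plants demand redundant grid connections: a second independent route cuts the blackout probability by more than an order of magnitude, while correlated failures (storms, coordinated cyberattacks) would erode exactly that benefit.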
Coordinated cyber warfare campaigns
Recent international events have brought cyber warfare to the forefront of politics. Recently it was suggested at the RSA conference that deterrence through information sharing and openness does not work, and since we are not able to deny state-sponsored hackers entry, we need to respond in force to such attacks – including armed military response in the physical world.
Recent cyberattacks in this domain have been reported from conflict zones. The reports receiving the most attention in the media are those coming out of Ukraine, where the authorities have accused Russia of being responsible for a series of cyber-attacks, including the one that caused a major blackout in parts of Ukraine in December 2015. For a good summary of the Ukrainian situation, see this post on the SANS cybersecurity blog.
Increasing cooperation across national borders can increase our resilience, but at the same time it will let the effects of attacks spread to larger regions. Depending on the security architecture of the network as a whole, attackers could be capable of compromising entire continents, potentially damaging the defense capabilities of the affected countries severely as population morale is hit by the loss of critical infrastructure.
What should we do now?
There are many positive outcomes of increased integration and very large renewable energy producers – but we should not disregard the risks, including the political ones. When building such plants and the grids necessary to serve customers, we need to ensure sufficient redundancy exists to cope with partial outages in a reasonable manner. We should also build our grids with a robust security architecture, with auditable rules to ensure security management is on par across borders. This is the strength of NERC. Cyber resilience considerations should also be made for other parts of the world. Perhaps it is time to lay the groundwork for international conventions on grid reliability and security before we end up connecting all our continents to the same electrical network.
Cybersecurity awareness training has become a central activity in many firms. It takes time, requires planning and management follow-up, and is very often mandatory for all employees. But does it work? That depends – first and foremost on people’s feelings towards cybersecurity.
A very informal survey in my network shows that most people don’t receive any awareness training at all at work, and among those that do, more say it does not change their behavior than think it has had a positive impact.
At the end of last year I participated in a local meeting in the Norwegian Association for Quality and Risk Management, where I heard a very interesting talk by Maria Bartnes (Twitter: @mariabartnes) from SINTEF on user behaviors and cybersecurity training. She argued that training is only effective if people are motivated for the training – and for that they need to have beliefs and goals that are well aligned with the organization they are a part of. She portrayed this in a matrix of employee stereotypes, with “feelings towards policies and company goals” on one axis and “risk understanding” on the other – which I found a very effective way of communicating the fact that all employees are not created equal 🙂 . People range from technical risk experts who love the company and the policies they work under, to people who don’t understand risk at all and at the same time feel angry or resentful towards both their company and its policies – and everything in between.
Another issue is that many organizations tend to make training mandatory and the same for all. It makes little sense to force your experts to sit through basic introductions that are second nature to them anyway – a lot of knowledge workers experience this when HR departments push e-learning modules to all employees.
What does it all mean?
Some people have argued that security awareness training is completely useless. This is probably going a bit too far, but there are clear limits to what can be achieved by “training” of any kind when it comes to changing people’s behaviors. We use computers by habit – the way we act when we read e-mails, search the internet, write Word documents or compile code is all “second nature” once you are experienced. Changing those habits is hard, and it does not happen automagically through training.
Focusing on motivation and feelings is a good start – without the motivation to do so, it is very unlikely that users that exhibit risky behaviors will make any effort to change those behaviors.
Continuous effort is needed to change behaviors, to create new habits. This means that employees must not only receive the knowledge about the “why” and the “how”, but they must also attain practical knowledge by doing. When we realize that, we see that it becomes very important not to demotivate employees who already have positive feelings about cybersecurity. Forcing the highly motivated and technically competent to take very basic e-learning lessons may kill that motivation – and thus increase your organization’s risk exposure.
It also becomes very important to motivate those that are feeling resentful, both the technically competent ones, and those in the “worst-case corner” of resentful and low technical competency. Motivation comes before technical know-how.
For cybersecurity awareness training to have a positive effect it is thus necessary to tailor the contents to each employee based on skills and motivation. Further, the real work starts after the training – it is the act of “doing” that changes habits, not the mere presentation of information about phishing e-mails and strong passwords. This means you need leadership, and you need change agents.
Use your technically skilled and highly motivated people as change agents. They can help motivate others, and they can exemplify good behaviors. Let these super cyber users support management, and educate management. And bring the managers on board to follow up security regularly – not to outsource it to the IT department. Bringing abuse cases up for discussion in meetings can help, as can publicly praising employees who make an effort to bring the maturity of their own security practices, and of the company as a whole, to a new level.
Make sure you adapt your training to both the motivation and the technical skills of those who receive it. See maturity work in the area of cybersecurity as a part of your organization’s continuous improvement program – embed it in the way your organization works instead of relying solely on information campaigns. Use change agents and inspiring leaders in your organization to change the way the organization behaves, from the individual to the firm as a whole. That is the only way to succeed in building security awareness that actually changes behaviors.
Maintaining security is an ongoing process which requires coordinated effort by the whole organization. Without backing from top management and buy-in through the ranks there is little chance of building up resilience against cyber attacks. As organization complexity increases and value creation becomes distributed, it will be necessary to have an integrated approach to security; your company needs an information security management system. ISO 27001 is an international standard that sets requirements to such a system based on what has been internationally recognized as best practice.
ISO 27001 [external link] is a management system standard that follows many of the same principles as other ISO standards such as ISO 9001 for quality management. Assuming that the client has an ISO 9001 compliant system in place, the information security management system should be built on the existing processes and workflows. This means that existing auditing systems and reporting requirements should be extended, rather than building everything from scratch.
The following are key elements of information security management system establishment. First we look at the activities that need to be performed in the order of appearance of requirements in ISO 27001. Afterwards, we summarize the bare minimum that you will have to do in a table.
Under which regulatory regimes does the organization operate?
Who are the main threat actors based on the external context? (Script kiddies, hacktivists, cyber criminals, nation states, etc.)
Internal stakeholder definitions
Who are the system owners?
Who are the system users?
Which process owners depend the most on the information assets?
Who is responsible for maintaining security?
Identify main information assets
What are the critical information objects?
Why are they critical in the context of operations?
Are there assets that require security due to external stakeholder situations (legal or commercial requirements, or other risk drivers)?
The most efficient approach for this type of context development is a working meeting with the organization’s top management where these key issues are identified.
Policy development and leadership
(ISO 27001 Section 5)
Top management must be involved in policy development, and promote its integration in the overall management system of the organization
A policy should be developed and be sanctioned and signed by top management. The policy shall include the following:
A commitment to compliance with infosec requirements, and to continuous improvement. The policy should therefore refer to the organization’s existing systems for compliance measurement and continuous improvement processes, as well as to internal information security standards with more practical requirements.
The policy shall be documented and made available and communicated to all users
Top management shall assign responsibility and authority for follow-up of information security, and for reporting to top management. In most organizations a single role is recommended for this, and a person competent in both the organization’s core activity and in information security principles should take this role. In most commercial organizations this role is designated as CISO.
Policy objectives should conform to the requirements of Clause 6.2 of ISO 27001. In order to identify these objectives when building a new system, it is recommended to write the policy after an initial risk and vulnerability assessment has been performed.
Recommended practice is to develop the policy in cooperation with the assigned CISO (if one exists at this point). A policy document should be written and discussed with top management before it is adopted. The policy should be dated, and an expiry date should be set in order to guarantee regular reviews (this is not an ISO 27001 requirement but is considered good practice for security critical process documents).
Information security risk management planning
(ISO 27001 Section 6)
Define a process for information security risk assessment. The recommended elements of this process:
Requirements for documentation of [USERS, HARDWARE, SOFTWARE, NETWORKS]
Requirements for performing risk assessments
Risk acceptance criteria. It is recommended to keep this at a coarse level and use qualitative descriptors
HAZID-type risk identification (use of guidewords)
Control planning methodology (ref. to Annex A of ISO 27001)
Perform a risk assessment for all applicable systems (scope definition → HAZID → risk ranking → risk treatment planning)
Produce a statement of applicability for the controls in Annex A of ISO 27001
Formulate infosec objectives (ref policy development). These objectives should be measurable, or at least possible to evaluate with respect to performance. The objectives should align well with the overall criticality of the information assets (ref. risk context). Annex A of ISO 27001 is a good guidance point for developing objectives. Also, the organization should not choose objectives that are inconsistent with the maturity and capabilities of the organization.
The risk assessment procedure should be written in a practical way, such that the organization can apply it with the available resources. It should include examples of format for reporting, and also the recommended guidewords/threat descriptors.
A key difficulty in infosec risk assessments is the risk ranking. This has been approached in several ways, varying from using “complexity of attack vector” as a proxy for probability together with generic impact ratings, to context-related impact assessments in operationally relevant categories such as revenue loss, legal and litigation consequences, or reputation loss. The probability dimension can also be treated using aggressor profiling techniques, which is recommended for sophisticated organizations with a good understanding of the threat landscape. You can read more about that technique in this blog post from 2015: https://safecontrols.blog/2015/09/08/profiling-of-hackers-presented-at-esrel-2015/
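The “complexity as a proxy for probability” approach can be illustrated with a toy risk matrix. The scales, scores and thresholds below are invented for illustration – a real procedure would define them in the risk assessment procedure and the risk acceptance criteria:

```python
# Toy qualitative risk ranking: attack complexity stands in for
# probability (low complexity -> high likelihood), combined with a
# generic impact rating. All scales and thresholds are invented.
COMPLEXITY_TO_LIKELIHOOD = {"low": 3, "medium": 2, "high": 1}
IMPACT_SCORE = {"minor": 1, "moderate": 2, "severe": 3}

def risk_rank(attack_complexity: str, impact: str) -> str:
    """Combine likelihood (via attack complexity) and impact into a rank."""
    score = COMPLEXITY_TO_LIKELIHOOD[attack_complexity] * IMPACT_SCORE[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# A severe-impact scenario with a low-complexity attack vector ranks high:
print(risk_rank("low", "severe"))    # high
# The same impact requiring a sophisticated attacker ranks lower:
print(risk_rank("high", "severe"))   # medium
```

The weakness of this shortcut is visible in the example: a determined, capable aggressor makes “high complexity” a poor stand-in for “low likelihood”, which is exactly why aggressor profiling is worth the extra effort for exposed organizations.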
Support: competence, awareness and communication
(ISO 27001 Section 7)
The organization must perform a competence requirements mapping with respect to infosec for the various roles in the organization. This work should be performed in cooperation with the organization’s HR department, and set verifiable requirements for groups of employees. Responsibility for following up this type of competence should be given, preferably to the HR director or similar. Typical employee groups would be:
HR and middle management
Information system users
Specific roles (CISO, internal auditor, etc.)
The organization must develop an awareness program. The awareness program should as a minimum include:
Making employees aware of the policy
Why complying with the policy and the procedures is necessary and beneficial
Implications of non-compliance (up to and including employee termination and criminal charges in serious circumstances, depending on local legislation)
Information security aspects should be included in the communication plans for both internal and external communication.
For document control and similar processes, it is assumed that the organization has an appropriate system. If not, see ISO 27001 Section 7, Clause 7.5.3, as well as the ISO 9001 requirements.
The awareness program should be made the responsibility of either the CISO or the training manager/HR. These departments must cooperate on this issue.
The communication plan for information security can be integrated in other communication plans but shall be approved by the CISO. It is recommended to develop a specific plan for information security that other communication plans can refer to. This is especially relevant for communications during incident handling, which may require tight stakeholder cooperation and maintaining good public relations and media contacts.
Operations and Performance Monitoring
(ISO 27001 Section 8-9)
The organization must implement the risk mitigating controls and document their performance. Much of the evidence can be extracted from data produced by technological barrier functions, whereas other measures may be necessary to document organizational controls.
Information security aspects should be included in the organization’s change management procedures (ref. ISO 9001 requirements)
Information security monitoring should be implemented based on the controls and objectives
Information security auditing should be included in the internal auditing program. It is recommended to build on the existing system, and to include competence requirements for the subject matter expert assisting the head auditor (ref. competence management and HR processes). Some extra reading about auditing and what it is good for can be found here – written in the context of reliability engineering, but equally applicable to cybersecurity: Why functional safety audits are useful
Include infosec in management review. In particular ensure efficient reporting on infosec objectives. It is recommended to create a simple and standardized reporting format (e.g. a dashboard) for this use.
Improvement
(ISO 27001 Section 10)
Include infosec into the existing non-conformance system
Assign CISO as owner of infosec related deviations
Activity summary and sequence
Building a management system requires multiple activities that have interdependencies, as well as dependencies on other management system artifacts. The following sequence is a suggested path to developing an information security management system from scratch in a robust organization.
Note that it should be expected that some iterations will be needed, especially on:
Policy and objectives
Risk assessment procedure and risk and vulnerability study (the procedure is updated based on experience with the method)
Objectives and measurements will need to be reviewed and updated based on experience
Note also that a consultant has been included in the “People” category. For organizations that do not have sufficient in-house competence in management system development it can be beneficial to contract a knowledgeable consultant to help with the project. For organizations with sufficient in-house capacity this is not necessary, and it is not a requirement for compliance with ISO 27001.
Customers/users, organization charts, suppliers, partner lists, etc.
Information in technical note on Context: stakeholders. Should include who, why, what and how with respect to the information security risk.
Network topologies, asset lists, document systems
Prioritized inventory description as section in technical note on Context.
Threat actor assessment
Outputs from previous activities.
News and general media. Experience from previous incidents.
Open security assessments from police and intelligence communities.
List of threat actor categories with descriptions of motivations and capabilities.
Risk procedure development
Risk assessment procedure document
Scope definition for risk assessment
Context note with inventory.
Topology drawings. Organization charts. Use cases.
Use of guidewords for each scope node, ref risk assessment procedure.
Risk identification table (HAZID table)
Mitigation planning (including ISO 27001 Annex A review)
HAZID table with risk ranking.
List of actions and controls to be evaluated or implemented.
HAZID table and risk mitigation results.
Risk and vulnerability report.
Statement of applicability
Review each control in Annex A
Context note. Risk and vulnerability report.
Statement of applicability (report)
Suggest objectives based on previous activities and maturity of the organization
Risk assessment, context, statement of applicability
Information security objectives, including measurement and review requirements in technical note or procedure.
Review of objectives with key stakeholders
Revised objective note.
Develop draft policy for information security.
Objectives, statement of applicability, risk and vulnerability report, context, policy templates.
Review draft policy in meeting with top management. Top leadership needs to be involved and take ownership, headed by the CISO.
HR Integration: competence management
Develop competence requirements for roles
Updated competence requirements in role descriptions
Develop awareness program, tailored to competence requirements of groups.
Updated role descriptions
Awareness program plan
Internal auditing requirements
Update internal auditing requirements
Infosec policy and procedures, objectives
Updated audit plans and competence requirements for subject matter expert
Update change management system and management’s annual review reporting requirements
Infosec policy and objectives
Updated change management procedure
Updated reporting format to top management.
CISO (recommended to be done internally unless the consultant’s assistance is needed)
After the management system has been established, it is recommended to perform an internal requirements audit to identify gaps.
After the system has been in operation for 6 months an internal security audit with focus on evidence of use is recommended.
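To make the HAZID-style risk identification and ranking referenced above a bit more concrete, here is a minimal sketch in Python. The guidewords, scoring scales and register entries are all hypothetical examples; your own risk assessment procedure defines the real ones.

```python
from dataclasses import dataclass, field

# Hypothetical guidewords used as prompts for each scope node during
# risk identification -- adapt these to your own procedure.
GUIDEWORDS = ["unauthorized access", "data loss", "service outage", "tampering"]

@dataclass
class RiskEntry:
    scope_node: str      # e.g. a system or process from the context note
    guideword: str       # the prompt used during identification
    description: str
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    impact: int          # 1 (negligible) .. 5 (severe)
    actions: list = field(default_factory=list)

    @property
    def ranking(self) -> int:
        """Simple likelihood x impact product used for prioritization."""
        return self.likelihood * self.impact

# Two made-up example entries in the risk register (HAZID table).
register = [
    RiskEntry("e-mail server", "unauthorized access",
              "Phishing leads to credential theft", 4, 4,
              ["MFA rollout", "awareness training"]),
    RiskEntry("file share", "data loss",
              "Ransomware encrypts shared documents", 3, 5,
              ["offline backups", "endpoint hardening"]),
]

# Highest-ranked risks drive mitigation planning and the Annex A review.
for entry in sorted(register, key=lambda e: e.ranking, reverse=True):
    print(entry.scope_node, entry.guideword, entry.ranking, entry.actions)
```

Sorting by the likelihood-times-impact product gives a simple prioritization feeding into the mitigation planning and statement of applicability activities.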
Summing up what you just read
You have determined your company needs a security management system. This blog post gives you a blueprint for building one from scratch. Keep in mind that the system with its processes, governing documents and role descriptions only provides a framework to work within. The key to getting value from this process is actually starting to use the system.
Building a management system from scratch is a big undertaking, and for many companies it makes more sense to do it piece by piece. Start with a minimal solution, start using it, and improve the processes and documents based on your experience. That is much better than trying to build the system to be fully compliant from day 1 – and you will start to see real benefits much sooner.
A term that is often used in the cybersecurity community is threat hunting. This is the activity of hunting for intruders in your computer systems, and then locking them out. In the more extreme cases it can also involve attacking them back – but this is illegal in most countries. Threat hunting involves several activities you can do to find hackers on your network. The reason we need this is that the threats are, to some extent, intelligent operators who adapt to the defenses you set up in your network – they find workarounds for each new hurdle you throw at them. Therefore, the defense needs to get smart and use a wide arsenal of analysis techniques to find the threats, meaning analysis of data that can indicate an intrusion has occurred. Data on user behavior, logins, changes to files, errors, and so on can be found in the system logs. In addition to what can be automated (looking for peaks in network traffic, etc.), threat hunting will always include some manual investigative work by the analyst – both to understand the context more deeply, and perhaps to apply statistical and data science tools to special cases. Based on successful hunts, automated signals can be added to improve future resilience. The interplay between automated red flags, context intelligence and data science is shown below.
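As a toy example of one such automated red flag, the “peaks in network traffic” check could be sketched like this. The traffic numbers and the deviation threshold are made up for illustration; a real implementation would use robust statistics over much longer baselines.

```python
import statistics

def traffic_red_flags(hourly_bytes, threshold=2.5):
    """Flag hours whose traffic deviates more than `threshold` standard
    deviations from the mean -- a simple automatable red flag that an
    analyst can then investigate manually for context."""
    mean = statistics.fmean(hourly_bytes)
    stdev = statistics.pstdev(hourly_bytes)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(hourly_bytes)
            if abs(v - mean) / stdev > threshold]

# Hypothetical hourly traffic volumes (bytes); hour 5 is a large spike
# that could indicate bulk data exfiltration.
traffic = [1200, 1100, 1300, 1250, 1150, 98000, 1220, 1180]
print(traffic_red_flags(traffic))  # prints [5]
```

A hit from a check like this is not proof of an intrusion, only a starting point for the manual, context-driven part of the hunt.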
The conglomerate joint venture deal: a potential source of an advanced persistent threat?
Johnny the Hunter was going to work as usual in the morning. He got a cup of coffee and sat down at his computer to start his day. Like most office workers, Johnny first skimmed his e-mails and checked his Twitter feed for any interesting news. He noticed one e-mail that stood out, from one of the sysadmins, who told him that one of the application servers had rebooted without any good reason last night. No functionality had been lost, and no significant downtime was recorded – it was just a simple reboot. The logs on the server did not show any suspicious activity.
This triggered Johnny’s curiosity – what had caused the reboot? Was it some random hardware issue? Was it a software bug causing a kernel crash? Probably not, that would have been recorded in the server logs.
Johnny decided to make this the starting point for a hunt. First, he checked all automated surveillance systems; there were a few orange flags (detected abnormal activity but not something considered critical). He decided he needed to review the newest intelligence data they had on the threat landscape. There was nothing from the typical providers that caught his attention, so he turned to the intranet to check if something was going on internally in the company. He noticed the CEO had posted a video explaining that they were negotiating with an Asian conglomerate about buying up one of the conglomerate’s competitors as a joint venture. They had not yet agreed on who would be the controlling company in the joint venture. He didn’t notice any other big news.
He then called HR to ask if there were any new hires onboarding that had anything to do with the Asian deal. The HR director told him that they had several applicants, all coming from the Asian conglomerate, and that they were all highly qualified. It seemed a waste of talent not to hire at least one of them, but the CSO had told HR to hold off.
Johnny decided to start looking at network logs from the last 2 years to establish a baseline, and then to look for anomalies after negotiations about the buy-up started. For this he collected logs not only from the application servers, e-mail servers, web servers and network security devices, but also news items and social media posts. He decided he would use supervised learning to correlate news events with network anomalies, and called up Sin Jing, the head of their internal big data and machine learning R&D unit, to discuss how best to do this.
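Before training a supervised model, a natural first sanity check is whether the two time series correlate at all. Here is a minimal sketch using a plain Pearson correlation over hypothetical daily counts; real data would come from the logs and media feeds described above.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical daily counts: deal-related news items vs. anomalous
# network events flagged the same day.
news_events   = [0, 2, 1, 0, 3, 0, 4, 1]
net_anomalies = [1, 5, 3, 0, 7, 1, 9, 2]

r = pearson(news_events, net_anomalies)
print(f"correlation: {r:.2f}")  # a strong positive correlation suggests a lead
```

A high correlation on its own does not prove causation, but it justifies the more expensive supervised-learning step and the deeper manual investigation that follows.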
Using a range of techniques, Johnny investigated behaviors and found a correlation between news items and strange network activity over the last 4 months. Prior to that there was no such correlation. He also traced the activity to two user accounts in the accounting department, and the activity was always conducted over VPN outside of normal office hours. He had a lead on the threat actors – and decided to discuss it with the HR department to assess whether this was an insider threat, or whether the accounts had simply been compromised without being detected by their endpoint security solutions.
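The account-level lead could come from a simple filter over parsed log records. A hedged sketch, where the account names, timestamps and office hours are all hypothetical:

```python
from datetime import datetime

# Hypothetical parsed log records: (account, timestamp, via_vpn).
# Real entries would be extracted from the VPN and authentication logs.
events = [
    ("acct-017", datetime(2021, 3, 2, 2, 14), True),
    ("acct-017", datetime(2021, 3, 2, 10, 5), False),
    ("acct-023", datetime(2021, 3, 3, 23, 41), True),
    ("acct-044", datetime(2021, 3, 3, 14, 30), True),
]

def off_hours_vpn(events, start=8, end=18):
    """Return accounts with VPN activity outside office hours
    (before `start` or at/after `end` o'clock)."""
    suspects = set()
    for account, ts, via_vpn in events:
        if via_vpn and not (start <= ts.hour < end):
            suspects.add(account)
    return sorted(suspects)

print(off_hours_vpn(events))  # prints ['acct-017', 'acct-023']
```

Filters like this only narrow the field; whether the flagged accounts belong to insiders or were compromised externally still requires the human follow-up described in the story.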
This is threat hunting – and for the most advanced threats it is the only way to decrease detection time, and to effectively reduce the attack surface.