Does the AI Act make it illegal for European companies to use AI?

The AI Act does not make it illegal to use AI, but it does regulate many of the use cases. As is typical for EU legislation, it makes a lot of assessment, documentation and governance mandatory – at least for so-called “high-risk” use cases. A short high-level summary is available here: https://artificialintelligenceact.eu/high-level-summary/.

The main points of the AI Act

  • The AI Act classifies AI systems based on risk. There are four levels: unacceptable (banned use cases), high-risk (allowed, but with a lot of paperwork and controls), limited risk (for example chatbots, which must be transparent about being AI), and minimal risk (unregulated, for example spam filters). A small classification sketch follows after this list.
  • The AI Act has rules for companies using AI (deployers), but more rules for companies making AI systems (providers). Your personal hobby use and development is not regulated.
  • General-purpose AI (GPAI) models (basically, models capable of solving many tasks, such as the models behind AI agents able to execute commands via APIs) come with requirements to provide technical documentation and instructions for use, respect copyright, and publish a summary of the content used for training. Open-source models only need the copyright policy and the training-data summary, unless the model poses systemic risk. GPAI models with systemic risk additionally need threat modeling and adversarial testing, incident reporting, and reasonable cybersecurity controls.
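As a minimal sketch of how the four risk tiers could be tracked in an internal AI inventory (the use cases, tier assignments and obligation lists below are illustrative assumptions, not text from the Act):

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned practices
    HIGH = "high"                  # Annex I / Annex III systems
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # unregulated

# Illustrative internal inventory; the tier assignments are assumptions for this sketch
AI_INVENTORY = {
    "social-scoring-engine": RiskTier.UNACCEPTABLE,
    "cv-screening-for-hiring": RiskTier.HIGH,      # employee management (Annex III)
    "customer-support-chatbot": RiskTier.LIMITED,  # must disclose that it is AI
    "email-spam-filter": RiskTier.MINIMAL,
}

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["risk management system", "data governance", "technical documentation",
                    "logging", "human oversight", "quality management system"],
    RiskTier.LIMITED: ["tell users they are interacting with AI", "mark generated content"],
    RiskTier.MINIMAL: [],
}

for system, tier in AI_INVENTORY.items():
    print(f"{system}: {tier.value} -> {OBLIGATIONS[tier]}")
```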

Banned AI systems

The unacceptable category exists to protect you against the worst abuses: systems made for mass surveillance, social credit scoring, predictive crime profiling of individuals, manipulation of people’s decisions, and so on.

High-risk AI systems

Systems that are safety critical are considered high-risk, including a long list of systems already regulated under other EU legislation, such as important components in machinery, aircraft, cars and medical devices (Annex I of the AI Act contains the full list). There is also an Annex III, listing particular high-risk systems, including AI used for employee management, immigration decisions and safety-critical components in critical infrastructure. OK – it is quite important that we can trust all of this, so perhaps a bit of governance and oversight is not so bad? At the same time, the important cases are perhaps also the areas where we would expect the most benefit from using technology to make things better, more efficient and cheaper. So, what are makers and users of high-risk AI systems required to do? Let’s begin with the makers. They need to:

  • Create a risk management system
  • Perform data governance, to make sure training and validation data sets are appropriate and of good quality
  • Create technical documentation to demonstrate compliance (this can be interpreted in many ways)
  • Design the system for record keeping (logging), so that risks at national level and substantial modifications can be identified throughout the system’s lifecycle
  • Create instructions for use for downstream deployers
  • Design the system so that users can implement human oversight
  • Ensure acceptable levels of cybersecurity, robustness and accuracy
  • Establish a quality management system

Most of these requirements should be part of any serious software or product development.

Limited risk

For limited risk systems, the main requirement is transparency: the user must be told that they are interacting with artificial intelligence. The transparency requirement is regulated in Article 50 of the AI Act. Content generated by AI systems must be marked as such, including deep-fakes. There is an exception for satirical or artistic content (to avoid making the art less enjoyable, but you still have to be honest about AI being part of the content), and also for “assistive editing functions”, like asking an LLM to help you edit a piece of text you wrote.

Risk management requirements for “high-risk” systems

The first requirement for developers of “high-risk” AI systems is to have a risk management system. The system must ensure that risk management activities follow the lifecycle of the AI system. The key requirements for this system are:

  • Identify and analyze potential risks to health, safety or fundamental rights
  • Estimate and evaluate the risks
  • Adopt measures to manage the risks down to acceptable levels, following the ALARP (as low as reasonably practicable) principle
  • The system shall be tested to identify the most appropriate risk management measures
  • The developer must consider whether the AI system can have negative effects for people under the age of 18 years, or other vulnerable groups

In other words, the developer needs to perform risk assessments and follow up on these. Most companies are used to performing risk assessments, but the term “fundamental rights” is perhaps less common here, except in privacy assessments under the GDPR. The fundamental rights requirements are detailed in Article 27. The EU has a Charter of Fundamental Rights covering dignity, freedoms, equality, solidarity, citizens’ rights and justice. The AI Office will publish tools to simplify the fundamental rights assessment for AI system developers.

AI-based glucose level regulation in diabetes patients (a fictitious example)

Consider an AI system used to optimize blood glucose regulation in patients with type 1 diabetes. The system works in a closed loop and automatically adjusts continuous insulin infusion through an insulin pump. The system measures blood glucose levels, but also senses activity level and environmental factors such as humidity, temperature and altitude. It also uses image recognition, via a small camera, to detect what the patient is eating as early as possible, including interpreting menu items in a restaurant before the food is ordered. Using this system, the developer claims to remove the hassle of carbohydrate calculations and manual insulin adjustments, to reduce the time the patient spends with too high or too low glucose levels, and to avoid the body’s typical delayed insulin-glucose response through feedforward mechanisms based on the predictive powers of the AI.

Can AI-based systems make it unnecessary for patients to look at their phone to keep treatment under control?

For a system like this, how could one approach the risk management requirements? Let’s first consider the risk categories and establish acceptance criteria.

Health and safety (for the patient):

  • Critical: Death or severe patient injuries: unacceptable
  • High severity: Serious symptoms related to errors in glucose level adjustment (such as hyperglycemia with very high glucose levels): should occur very rarely
  • Medium: Temporary hypoglycemia (low blood sugar) or hyperglycemia (elevated blood sugar): acceptable if the frequency is lower than in manually regulated patients (e.g. once per month)
  • Low: Annoyances requiring the patient to perform manual adjustments: should occur less than weekly

If we compile this into a risk matrix representation, we get:

Severity | Weekly | Yearly | Decades
Critical | Unacceptable | Unacceptable | Unacceptable
High | Unacceptable | Unacceptable | ALARP
Medium | ALARP | Acceptable | Acceptable
Low | Acceptable | Acceptable | Acceptable
Example risk acceptance matrix for health and safety effects due to adverse AI events
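As a minimal sketch (the code structure and function name are illustrative, not from any standard), the acceptance matrix above could be encoded so that every assessed risk is automatically compared against it:

```python
# Acceptance matrix from the table above: (severity, frequency) -> verdict
ACCEPTANCE = {
    ("critical", "weekly"): "unacceptable",
    ("critical", "yearly"): "unacceptable",
    ("critical", "decades"): "unacceptable",
    ("high", "weekly"): "unacceptable",
    ("high", "yearly"): "unacceptable",
    ("high", "decades"): "ALARP",
    ("medium", "weekly"): "ALARP",
    ("medium", "yearly"): "acceptable",
    ("medium", "decades"): "acceptable",
    ("low", "weekly"): "acceptable",
    ("low", "yearly"): "acceptable",
    ("low", "decades"): "acceptable",
}

def evaluate(severity: str, frequency: str) -> str:
    """Return the acceptance verdict for an assessed risk."""
    return ACCEPTANCE[(severity.lower(), frequency.lower())]

# Temporary hyperglycemia estimated at roughly monthly falls between the
# "weekly" and "yearly" columns; using the conservative column here:
print(evaluate("Medium", "Weekly"))  # -> ALARP
```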

Fundamental rights (for the patient and people in the vicinity of the patient). A fundamental rights assessment should be performed at the beginning of development, and updated with major feature or capability changes. Key questions:

  • Will use of the system reveal your health data to others?
  • Will the sensors in the system process data about others who have not consented, or where there is no legal basis for collecting the data?

We are not performing the fundamental rights assessment here, but if there are risks to fundamental rights, mitigations need to be put in place.

Let’s consider some risk factors related to patient safety. We can use the MIT AI Risk Repository as a starting point for selecting relevant checklist items to trigger identification of relevant risks. The taxonomy of AI risks has seven main domains:

  1. Discrimination and toxicity
  2. Privacy and security
  3. Misinformation
  4. Malicious actors and misuse
  5. Human-computer interaction
  6. Socioeconomic and environmental harms
  7. AI system safety, failures and limitations

In our fictitious glucose regulation system, we consider primarily domain 7 (AI system safety, failures and limitations) and domain 2 (privacy and security).

  • AI system safety, failures and limitations (7)
    • AI possessing dangerous capabilities (7.2)
      • Self-proliferation: the AI system changes its operational confines, evades safeguards due to its own internal decisions
    • Lack of capability or robustness (7.3)
      • Lack of capability or skill: the quality of the decisions is not good enough
      • Out-of-distribution inputs: input data is outside the validity for the trained AI model
      • Oversights and undetected bugs: lack of safeguards to catch bugs or prevent unintended use
      • Unusual changes or perturbations in input data (low noise robustness)
    • Lack of transparency and interpretability (7.4)
      • Frustrating audits: lack of compliance with relevant standards, so the system cannot be assessed.
  • Privacy and security (2)
    • Compromise privacy by obtaining, leaking or correctly inferring personal data (2.1)
      • PII memorization: Models inadvertently memorizing or producing personal data present in training data
      • Prompt injection: Compromise of privacy by prompt-based attacks on the AI model
    • AI system vulnerability exploitation (2.2)
      • Physical or network-based attack: can lead to manipulation of model weights and system prompts
      • Toolchain and dependency vulnerabilities (vulnerabilities in software)

To assess the AI system for these risks, the process would follow standard risk management practice (a sketch of a documented finding follows the list):

  • Describe the system and its context
  • Break down the system into parts or use cases
  • Assess each part or use case, as well as interactions between parts to identify hazards
  • Document the findings with cause, consequence and existing safeguards
  • Evaluate probability and severity, and compare with the acceptance criteria
  • Identify mitigations
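A minimal sketch of how such a documented finding could be captured as a structured record (the field names are illustrative, not mandated by the AI Act):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskRecord:
    risk_id: str
    category: str              # e.g. a reference to the MIT AI Risk taxonomy
    description: str
    causes: List[str]
    consequence: str
    existing_safeguards: List[str]
    severity: str              # critical / high / medium / low
    frequency: str             # weekly / yearly / decades
    verdict: str = "not evaluated"        # filled in against the acceptance matrix
    mitigations: List[str] = field(default_factory=list)
```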

Let’s consider a particular risk for our glucose regulator:

RISK CATEGORY: (7.3) Lack of capability or skill.

  • Possible risk: the system makes the wrong decision about insulin injection rate due to lack of capabilities.
  • Possible causes: insufficient training data, insufficient testing.

Consequence: over time it can lead to frequent hypo- or hyperglycemia, causing long-term patient complications and injury.

Probability: determining this would require testing, or an assessment of the training and testing regime.

Suggested decision: provide extra safeguards based on blood glucose level measurements, and let the patient take over and adjust manually if the glucose regulation is detected to be outside of expected performance bounds. Use this while performing testing to assess the reliability of the model’s inference, before allowing fully automatic regulation.
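To illustrate the suggested safeguard (all thresholds, units and function names below are hypothetical assumptions, not clinical guidance), a simple bounds check around the model’s dosing decision could look like this:

```python
# Hypothetical safeguard around the AI dosing decision. Thresholds and names are
# illustrative only and not clinical guidance.
SAFE_GLUCOSE_RANGE = (3.9, 10.0)  # mmol/L, expected performance band (assumed)
MAX_BASAL_RATE = 2.0              # units/hour, hard cap independent of the model (assumed)

def safeguarded_dose(model_dose: float, measured_glucose: float):
    """Return (dose, mode); hand control back to the patient outside expected bounds."""
    low, high = SAFE_GLUCOSE_RANGE
    if measured_glucose < low:
        # Hypoglycemia risk: do not deliver the model's dose, alert the patient.
        return 0.0, "manual (low glucose alert)"
    if measured_glucose > high or model_dose > MAX_BASAL_RATE:
        # Outside expected performance bounds: cap the dose and require manual review.
        return min(model_dose, MAX_BASAL_RATE), "manual (review required)"
    return model_dose, "automatic"

print(safeguarded_dose(model_dose=1.2, measured_glucose=6.5))   # stays automatic
print(safeguarded_dose(model_dose=3.5, measured_glucose=12.0))  # capped, manual review
```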

Key take-aways

  1. The AI Act puts requirements on developers and users of AI systems.
  2. For high-risk systems, a robust risk management system must be put in place.
  3. AI risk is an active field of research. A good resource is the MIT AI Risk Repository and its taxonomy.

Further reading

AI Risk repository

AI Act Explorer

The security sweet spot: avoid destroying your profitability with excessive security controls

Excessive security controls applied when the organization isn’t ready cause friction and destroy value. Learn to identify your organization’s security sweet spot and avoid making the security team the most unpopular group in your company.

Many cybersecurity professionals are good at security but bad at supporting their organizations. When security takes priority over the mission of the organization, your security team may be just as bad for business as the adversary. Security paranoia will often lead to symptoms such as:

  • Security controls introducing so much friction that people can’t get much done. The best employees give up and become disengaged or leave.
  • Mentioning IT to people makes them angry. The IT department in general, and the security team in particular, is hated by everyone.
  • IT security policies are full of threats of disciplinary actions, including reporting employees to the police and firing them.

Security, when done wrong, can be quite toxic. When security aligns with the culture and mission of the organization, it creates value. When it is abrasive and misaligned, it destroys value. Paranoia is destructive.

An illustrative graph showing that the more security you add, the better it is, until it isn’t.

The minimum on the green line on the graph is perhaps the sweet spot for how much security to apply. The difficulty is in finding it. It is also not a fixed point: as the maturity of the organization develops, the sweet spot moves towards the right on the graph. Higher maturity in the organization will allow you to tighten security without destroying value through friction, inefficiencies and misalignment.

As the organization’s workflows and competence mature, it can benefit from tightening security

If you want to kick-start security improvements at work, consider e-mailing this article to your manager with your own take on what your organization’s security sweet spot is.

Finding your sweet spot and translating it into security controls

Finding the sweet spot can be challenging. You want to challenge the organization, and help it grow its security maturity, without causing value destruction and disengagement. To achieve this, it is helpful to think about 3 dimensions in your security strategy:

  1. Business process risk
  2. Lean process flow with minimal waste
  3. Capacity for change

Being profitable, keeping an engaged workforce, and maintaining a high level of security requires a good understanding of cyber risk, digital work processes that do not get in the way of the organization’s goals, and a motivated workforce that welcomes change. If you are starting in a completely different place, tightening security can easily destroy more value than it protects.

Understanding your business process cyber risk is necessary so that you can prioritize what needs to be protected. There are many methods available to assess risks or threats to a system. The result is a list of risks, with a description of possible causes and consequences, an evaluation of likelihood and severity, and suggested security controls or mitigations to reduce the risk. No matter what process you use to create the risk overview, you will need to:

  • Describe the system you are studying and what about it is important to protect.
  • Identify events that can occur and disturb the system
  • Evaluate the risk contribution from the elements
  • Find risk treatments

If the risk to your business value from cyber attacks is very high, it would indicate a need for tighter security. If the risk is not too worrying, less security tightness may be appropriate.

The next step is about your workflows. Do you have work processes with low friction? Securing a cumbersome process is really difficult. Before you apply more security controls, focus on simplifying and optimizing the processes such that they become lean, reliable and joyful to work with. Get rid of the waste! If you are far from being lean and streamlined, be careful about how much security you apply.

The final point is the capacity for change. If the workforce is not too strained, has a clear understanding of the strategic goals, and feels rewarded for contributing to the organization’s mission, the capacity for change will typically be high. You can then introduce more security measures without destroying value or causing a lot of frustration. If this is not in place, establishing it is a precondition for going deep on security measures.

To summarize – make sure you have efficient value creation processes and enough capacity for change before you apply a lot of security improvements. If your organization sees a high risk from cyber attacks, but has low process efficiency and limited capacity for change, it would be a good approach to apply basic security controls, and focus on improving the efficiency and capacity for change before doing further tightening. That does mean operating with higher risk than desired for some time, but there is no way to rush change in an organization that is not ready for it.
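As a rough illustration of this prioritization logic (the scoring scale and thresholds are invented for this sketch, not a standard), the three dimensions could be combined into a simple readiness check before tightening security:

```python
def security_pace(risk: int, process_efficiency: int, capacity_for_change: int) -> str:
    """All inputs on a 1-5 scale; the thresholds are illustrative, not a standard."""
    readiness = min(process_efficiency, capacity_for_change)  # the weakest link limits you
    if risk >= 4 and readiness <= 2:
        return "apply basic controls only; improve efficiency and capacity first"
    if readiness >= 4:
        return "the organization can absorb significant tightening"
    return "tighten gradually, one control at a time"

# High cyber risk, strained workforce, undocumented workflows (like the firm below):
print(security_pace(risk=5, process_efficiency=2, capacity_for_change=2))
```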

Security growth through continuous improvement is the way.


The balancing act

Consider an engineering company that provides engineering services for the energy sector. They are worried about cyber attacks delaying their projects, which could cause big financial problems. The company is stretched, with most engineers routinely working 60-hour weeks. The workflows are highly dependent on the knowledge of individuals, and not much is documented or standardized. The key IT systems they have are Windows PCs and Office 365, as well as CAD software.

The CEO has engaged a security consulting company to review the cybersecurity posture of the firm. The consulting report shows that the company is not very robust against cyber attacks and that security awareness is low. The cyber risk level is high.

The CEO, herself an experienced mechanical engineer, introduces a security improvement program that will require heavy standardization, introduce new administrative software and processes, and limit the engineers’ personal freedom in their choice of working methods. She meets massive opposition, and one of the most senior and well-respected engineering managers says: “This is a distraction, we have never seen a cyber attack before. We already work a lot of overtime, and cannot afford to spend time on other things than our core business – which is engineering.” The other lead engineers support this view.

The CEO calls the consultants up again, and explains that she will have difficulties introducing a lot of changes, especially in the middle of a big project for one of the key customers. She asks what the most important security measures would be, and gets a list of key measures that should be implemented, such as least-privilege access, multifactor authentication and patching. The CEO then makes a plan to roll out MFA first, and then to focus on working with the engineers to improve the workflows and reduce “waste”. With a step-by-step approach they see some security wins, and after 12 months the organization is in a much healthier state:

  • Engineers no longer log on to their PCs as administrators for daily work
  • MFA is used everywhere, and 90% of logons are now SSO through Entra ID
  • They have documented, standardized and optimized some of the work processes that they do often. This has freed up a lot of time, 60-hour weeks are no longer the norm.
  • The CEO has renewed focus on strategic growth for the company, and everyone knows what the mission is, and what they are trying to achieve. Staff motivation is much higher than before.

Thanks to the CEO’s good understanding of the organization, and helpful input from the security consultants, the actual security posture is vastly improved, even with few new security controls implemented. The sweet spot has taken a giant leap to the right on the attacker-paranoia graph, and the firm is set to start its maturity growth journey towards improved cybersecurity.

The key take-aways

  • Don’t apply more security tightness than the organization can take. That will be destructive.
  • Assess the security needs and capacity by evaluating risk, business process efficiency and capacity for change
  • Prioritize based on risk and capacity, improve continuously instead of trying to take unsustainable leaps in security maturity

Impact of OT attacks: death, environmental disasters and collapsing supply chains

Securing operational technologies (OT) is different from securing enterprise IT systems. Not because the technologies themselves are so different – but the consequences are. OT systems are used to control all sorts of systems we rely on for modern society to function: oil tankers, high-speed trains, nuclear power plants. What is the worst thing that could happen if hackers take control of the information and communication technology used to operate and safeguard such systems? Obviously, the consequences could be significantly worse than a data leak showing who the customers of an adult dating site are. Death is generally worse than embarrassment.

No electricity could be a consequence of an OT attack.

When people think about cybersecurity, they typically think about confidentiality. IT security professionals will take a more complete view of data security by considering not only confidentiality, but also integrity and availability. For most enterprise IT systems, the consequences of hacking are financial, and sometimes also legal. Think about data breaches involving personal data – we regularly see stories about companies, and also government agencies, being fined for lack of privacy protections. This kind of thinking is often brought into industrial domains; people doing risk assessments describe consequences in terms such as “unauthorized access to data” or “data could be changed by an unauthorized individual”.

The real consequences we worry about are physical. Can a targeted attack cause a major accident at an industrial plant, leaking poisonous chemicals into the surroundings or starting a huge fire? Can damage to manufacturing equipment disrupt important supply-chains, thereby causing shortages of critical goods such as fuels or food? That is the kind of consequences we should worry about, and these are the scenarios we need to use when prioritizing risks.

Let’s look at three steps we can take to make cyber risks in the physical world more tangible.

Step 1 – connect the dots in your inventory

Two important tools for cyber defenders of all types are the network topology and the asset inventory. If you do not have that type of visibility in place, you can’t defend your systems – you need to know what you have in order to defend it. A network topology is typically a drawing showing you what your network consists of: network segments, servers, laptops, switches, and also OT equipment like PLCs (programmable logic controllers), pressure transmitters and HMIs (human-machine interfaces – typically the software used to interact with the sensors and controllers in an industrial plant). Here’s a simple example:

An example of a simplified network topology

A drawing like this would be instantly recognizable to anyone working with IT or OT systems. In addition, you would typically want an inventory describing all your hardware, as well as all the software running on it. In an environment where things change often, this should be generated dynamically. In OT systems, however, these often exist as static files, such as Excel sheets manually compiled by engineers during system design, and they are highly likely to be out of date after some time because they are not updated when changes happen.

Performing a risk assessment based on these two descriptions is a common exercise. The problem is that it is very hard to connect this information to the physical consequences we want to safeguard against. We need to know what the “equipment under control” is, and what it is used for. For example, the above network may be used to operate a batch chemical reactor running an exothermic reaction, that is, a reaction that produces heat. Such reactions need cooling; if not, the system could overheat, and potentially explode if the reaction produces gaseous products. We can’t see that information from the IT-type documentation alone; we need to connect it to the physical world.

Let’s say the system above is controlling a reactor that has a heat-producing reaction. This reactor needs cooling, which is provided by supplying cooling water to a jacket outside the actual reactor vessel. A controller opens and closes a valve based on a temperature measurement in order to maintain a safe temperature. This controller is the “Temperature Control PLC” in the drawing above. Knowing this makes the physical risk visible.

Without knowing what our OT system controls, we would be led to think about the CIA triad, not really considering that the real consequences could be a severe explosion that could kill nearby workers, destroy assets, release dangerous chemicals into the environment, and even damage neighboring properties. Unfortunately, lack of inventory control, especially connecting industrial IT systems to the physical assets they control, is a very common problem (disclaimer – this is an article from DNV, where I am employed) across many industries.

An example of a physical system: a continuous stirred-tank reactor (CSTR) for producing a chemical in a batch-type process.

Step 1 – connect the dots: For every server, switch, transmitter, PLC and so on in your network, you need to know what jobs these items help perform. Only then can you understand the potential consequences of a cyberattack against the OT system.
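A minimal sketch of what “connecting the dots” could look like in an asset inventory, linking each component to the equipment it controls and the worst-case physical consequence (all entries are illustrative):

```python
# Illustrative inventory entries linking IT/OT assets to the physical world
ASSET_INVENTORY = [
    {
        "asset": "Temperature Control PLC",
        "network_segment": "process control network",
        "equipment_under_control": "cooling water valve, reactor jacket",
        "function": "keeps the exothermic batch reactor at a safe temperature",
        "worst_case_consequence": "loss of cooling -> runaway reaction -> explosion",
    },
    {
        "asset": "SCADA server",
        "network_segment": "supervisory network",
        "equipment_under_control": "operator view and setpoints for the reactor",
        "function": "HMI and historian for the batch process",
        "worst_case_consequence": "manipulated setpoints or blinded operators",
    },
]

for entry in ASSET_INVENTORY:
    print(f"{entry['asset']}: {entry['worst_case_consequence']}")
```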

Step 2 – make friends with physical domain experts

If you work in OT security, you need to master a lot of complexity. You are perhaps an expert in industrial protocols, ladder logic programming, or building adversarial threat models? Understanding the security domain is itself a challenge, and expecting security experts to also be experts in all the physical domains they touch is unrealistic. You can’t expect OT security experts to know the details of all the technologies described as “equipment under control” in ISO standards. Should your SOC analyst be a trained chemical engineer as well as an OT security expert? Should she know the details of how steel strength decreases with temperature when a structure is engulfed in a jet fire? Of course not – nobody can be an expert at everything.

This is why risk assessments have to be collaborative; you need to make sure you get the input from relevant disciplines when considering risk scenarios. Going back to the chemical reactor discussed above, a social engineering incident scenario could be as follows.

John, who works as a plant engineer, receives a phishing e-mail that he falls for. Believing the attachment to be a legitimate instruction for re-calibration of the temperature sensor used in the reactor cooling control system, he follows the recipe in the attachment. It tells him to download a Python file from a Dropbox folder and execute it on the SCADA server. By doing so, he calibrates the temperature sensor to report a temperature 10 degrees lower than what it really measures. The script also installs a backdoor on the SCADA server, allowing hackers to take full control of it over the Internet.

The consequences of this could include overpressurizing the reactor, causing a deadly explosion. The loss of cooling on the reactor is exactly the kind of detail a chemical engineer would react to, helping us understand the potential physical consequences. Make friends with domain experts.
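A toy simulation (the numbers and control logic are simplified assumptions, not a real control system) shows why the -10 degree calibration offset is dangerous: the controller believes the reactor is cooler than it really is and keeps the cooling valve closed while the actual temperature climbs.

```python
# Toy model of the reactor cooling loop under a -10 degree calibration attack.
SETPOINT = 80.0              # degrees C, desired reactor temperature (assumed)
CALIBRATION_OFFSET = -10.0   # injected via the malicious "re-calibration" script

def cooling_valve_open(reported_temp: float) -> bool:
    """Simple on/off controller: open the cooling valve above the setpoint."""
    return reported_temp > SETPOINT

real_temp = 80.0
for minute in range(10):
    reported = real_temp + CALIBRATION_OFFSET
    cooling = cooling_valve_open(reported)
    # Exothermic reaction adds heat; cooling, when on, removes more than is produced.
    real_temp += -3.0 if cooling else 2.0
    print(f"t={minute:2d} min  real={real_temp:5.1f} C  reported={reported:5.1f} C  cooling={'on' if cooling else 'off'}")

# Cooling only starts once the real temperature is already about 10 degrees above
# the setpoint - at that point an independent safety system has to catch the deviation.
```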

Another important aspect of domain expertise is knowing the safety barriers. The example above was lacking several safety features that would be mandatory in most locations, such as a passive pressure-relief system that works without the involvement of any digital technologies. In many locations it is also mandatory to have a process shutdown system: a control system with separate sensors, PLCs and networks that intervenes to stop a developing accident, using actuators put in place only for safety use, in order to avoid common-cause failures between normal production systems and safety-critical systems. Lack of awareness of such systems can sometimes make OT security experts exaggerate the probability of the most severe consequences.

Step 2 – Make friends with domain experts. By involving the right domain expertise, you can get a realistic picture of the physical consequences of a scenario.

Step 3 – Respond in context

If you find yourself having to defend industrial systems against attacks, you need an incident response plan. This is no different from an enterprise IT environment; here, too, you need an incident response plan that takes the operational context into account. A key difference, though, is that for physical plants your response plan may actually involve taking physical action, such as manually opening and closing valves. Obviously, this needs to be planned – and exercised.

If welding will be a necessary part of handling your incident, coordination with the industrial operations side had better be part of your incident response playbooks.

Even attacks that do not affect OT systems directly may lead to operational changes in the industrial environment. Hydro, for example, was hit by a ransomware attack in 2019, crippling its enterprise IT systems. This forced the company to turn to manual operation of its aluminum production plants. There are lessons here for us all: we need to think about how to minimize impact not only after an attack, but also during the response phase, which may be quite extensive.

Scenario-based playbooks can be of great help in planning as well as execution of the response. When creating the playbook, we should:

  • describe the scenario in sufficient detail to estimate affected systems
  • ask what it will take to return to operations if affected systems have to be taken out of service

The latter question would be very difficult for an OT security expert to answer alone. Again, you need your domain experts. In terms of the cyber incident response plan, this leads to information on who to contact during response, who has the authority to make decisions about when to move to the next steps, and so on. For example, if you need to switch to manual operations in order to continue with recovery of control system ICT equipment in a safe way, this has to be part of your playbook.

Step 3 – Plan and exercise incident response playbooks together with industrial operations. If valves need to be turned, or new pipe bypasses welded on as part of your response activities, this should be part of your playbook.
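A minimal sketch of what a scenario-based playbook entry could look like when physical actions are included (all systems, contacts and steps are placeholders):

```python
# Illustrative playbook entry; all names and steps are placeholders.
PLAYBOOK_COOLING_SENSOR_COMPROMISE = {
    "scenario": "Calibration of the reactor temperature sensor manipulated via the SCADA server",
    "affected_systems": ["SCADA server", "Temperature Control PLC"],
    "immediate_actions": [
        "switch reactor cooling to manual operation (field operator opens bypass valve)",
        "isolate the SCADA server from the network",
        "verify that the independent shutdown system is healthy",
    ],
    "decision_authority": "plant manager (production stop), CISO (containment)",
    "domain_contacts": ["process engineer on call", "control system vendor"],
    "return_to_operations": "re-calibrate the sensor from a trusted baseline, restore SCADA from a known-good image",
}

for step in PLAYBOOK_COOLING_SENSOR_COMPROMISE["immediate_actions"]:
    print("-", step)
```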

OT security is about saving lives, protecting the environment and avoiding asset damage

In the discussion above there was not much mention of the CIA triad (confidentiality, integrity and availability), although, seen from the OT system’s point of view, that is still the level we operate at. We still need to ensure that only authorized personnel have access to our systems, that data is protected in transit and in storage, and that a packet storm isn’t going to take our industrial network down. The point is that we need to better articulate the consequences of security breaches in the OT system.

Step 1 – know what you have. It is often not enough to know what IT components you have in your system. You need to know what they are controlling too. This is important for understanding the risk related to a compromise of the asset, but also for planning how to respond to an attack.

Step 2 – make friends with domain experts. They can help you understand if a compromised asset could lead to a catastrophic scenario, and what it would take for an attacker to make that happen. Domain experts can also help you understand independent safety barriers that are part of the design, so you don’t exaggerate the probability of the worst-case scenarios.

Step 3 – plan your response with the industrial context in mind. Use the insight of the domain experts (that you are now friends with) to make practical playbooks – which may include physical actions that need to be taken on the factory floor by welders or process operators.

Does cyber insurance make sense?

Insurance relies on pooled risk: when a business is exposed to a risk it feels it cannot manage with internal controls, the risk can be transferred to the capital markets through an insurance contract. For events that are unlikely to hit a very large number of insurance customers at once, this model makes sense. The pooled risk allows the insurer to create capital gains on the premiums paid by the customers, and the customers get their financial losses covered in a claim situation. This works very well in many cases, but insurers will naturally try to limit their liabilities through “omissions clauses”: things that are not covered by the insurance policy. The omissions will typically include catastrophic systemic events that the insurance pool would not have the means to cover, because a large number of customers would be hit simultaneously. They will also include conditions at the individual customer that void the coverage – often referred to as pre-existing conditions. A typical example of the former is damage due to acts of war or natural disasters; for these events, the insured would have to buy extra coverage, if it is offered at all. An example of the latter, the pre-existing condition, would be diseases the insured had before entering into a health insurance contract.

Risk pooling works well for protecting the solvency of insurers when issuing policies covering rare, independent events with high individual impact – but it is harder to design for covering events where systemic risk is involved. Should insurers cover the effects of large-scale virus infections like WannaCry under normal cyber-insurance policies? Can they? What would the aggregate financial cost of a coordinated cyber-attack on a society be when major functions collapse – such as the Petya case in Ukraine? Can insurers cover the cost of such events?

How does this translate into cyber insurance? There are several interesting aspects to think about, in both omissions categories. Let us start with the systemic risk – what happens to the insurance pool if all customers issue claims simultaneously? Each claim typically exceeds the premiums paid by any single customer. Therefore, a cyberattack that spreads to large portions of the internet is hard to insure while keeping the insurer’s risk under control. Take for example the WannaCry ransomware attack in May 2017; within a day, more than 200,000 computers in 150 countries were infected. The Petya attack that followed in June caused similar reactions, but the number of infected computers is estimated to be much lower. While WannaCry still looks like a poorly implemented cybercrime campaign intended to make money for the attacker, the Petya ransomware seems to have been a targeted cyberweapon used against Ukraine; the rest was most likely collateral damage. But for Ukrainian companies, the government and computer users this was a major attack: it took down systems belonging to critical infrastructure providers, it halted airport operations, it affected government services, it took hold of payment terminals in stores; the attack was a major threat to the entire society. What could a local insurer have done if it had covered most of those entities against any and all cyberattacks? It would most likely not have been able to pay out, and would have gone bankrupt.
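A back-of-the-envelope calculation (all numbers invented for illustration) shows why correlated claims break the pool: independent incidents are comfortably covered by the premiums, while a worm that hits a large share of the customer base in the same year is not.

```python
CUSTOMERS = 10_000
ANNUAL_PREMIUM = 5_000    # per customer, illustrative
AVERAGE_CLAIM = 300_000   # per incident, illustrative

pool = CUSTOMERS * ANNUAL_PREMIUM                           # 50 million in premiums

# Normal year: roughly 1% of customers file independent claims
normal_year_payout = int(CUSTOMERS * 0.01) * AVERAGE_CLAIM  # 30 million
# Systemic event: a worm hits 30% of the customer base in the same year
worm_year_payout = int(CUSTOMERS * 0.30) * AVERAGE_CLAIM    # 900 million

print("normal year covered:", normal_year_payout <= pool)   # True
print("worm year covered:  ", worm_year_payout <= pool)     # False - the pool is wiped out
```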

A case that came up in security forums after the WannaCry attack was “pre-existing condition” in cyber insurance. Many policies had included “human error” in the omissions clauses; basically, saying that you are not covered if you are breached through a phishing e-mail.  Some policies also include an “unpatched system” as an omission clause; if you have not patched, you are not covered. Not all policies are like that, and underwriters will typically gauge a firm’s cyber security maturity before entering into an insurance contract covering conditions that are heavily influenced by security culture. These are risks that are hard to include in quantitative risk calculations; the data are simply not there.

Insurance is definitely a reasonable control for mature companies, but there is little value in paying premiums if the business cannot stay clear of the omissions clauses. For small businesses it will pay off to focus on the fundamentals first, and then add a reasonable insurance policy.

For insurance companies it is important to find reasonable risk pooling measures to better cover large-scale infections like WannaCry. Because this is a serious threat to many businesses, not having meaningful insurance products in place will hamper economic growth overall. It is also important for insurers to get a grasp on individual omissions clauses – because in cyber risk management the thinking around “pre-existing conditions” is flawed: security practice and culture is a dynamic, evolving thing, which means that coverage omissions should be based on the current state rather than on a survey conducted prior to policy renewal.