Machine safety – what is it? 

Machines can be dangerous. Many occupational accidents involve machinery, and keeping users safe requires attention to design, operation and training as well as to maintenance planning. 

In Europe, the safety of machinery is regulated by Directive 2006/42/EC, known as the Machinery Directive. It is mandatory in all EU member states as well as in Norway, Liechtenstein and Iceland. 

The directive requires manufacturers of machines to identify hazards and design the machine so that the risks are removed or controlled. Only machines conforming to the directive may be sold and used in the EU. 

In practice, risks that cannot be designed out must be treated using safety functions in the control system. These should be designed in accordance with recognized standards; the recommended ones are ISO 13849-1 and IEC 62061. The two are different but equivalent in terms of safety: the former defines five performance levels (a, b, c, d, e) and the latter uses three safety integrity levels (SIL 1-3). The most common risk analysis approach for defining PL or SIL requirements is the risk graph. 
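
For illustration, here is a minimal Python sketch of an ISO 13849-1 style risk graph for determining a required performance level (PLr) from severity, exposure and avoidability. The S/F/P mapping below is the commonly reproduced Annex A graph; treat it as an assumption and verify against the standard text before using it for real decisions.

    # Hedged sketch of an ISO 13849-1 style risk graph (verify against the standard).
    # S: severity of injury (1 = slight/reversible, 2 = serious/irreversible)
    # F: frequency/duration of exposure (1 = seldom/short, 2 = frequent/long)
    # P: possibility of avoiding the hazard (1 = possible, 2 = scarcely possible)
    RISK_GRAPH = {
        (1, 1, 1): "a", (1, 1, 2): "b",
        (1, 2, 1): "b", (1, 2, 2): "c",
        (2, 1, 1): "c", (2, 1, 2): "d",
        (2, 2, 1): "d", (2, 2, 2): "e",
    }

    def required_pl(s: int, f: int, p: int) -> str:
        """Look up the required performance level for S, F, P in {1, 2}."""
        return RISK_GRAPH[(s, f, p)]

    # Example: serious injury, frequent exposure, avoidance scarcely
    # possible -> the highest requirement, PLr = e
    print(required_pl(2, 2, 2))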

By conforming to the directive – in practice through application of these standards together with the general principles in ISO 12100 – you can put the CE mark on the machine and declare that it is safe to use. Through these practices we safeguard our people, and can be confident that the machine will not be the cause of someone losing a loved one. 

Four golden practices for functional safety management

Managing functional safety activities and ensuring high integrity of instrumented barriers is not fundamentally different from other project management activities. This means that functional safety management should be integrated into the overall project planning, management and controlling activities. At the next ESREL conference I will be presenting a paper on this topic, written in cooperation with several colleagues at Lloyd’s Register Consulting, but here is a sneak peek at the four golden practices.

Golden practice 1 – Planning of functional safety should be a group activity involving all relevant organizations

Management of functional safety should be planned for the asset as a system, taking the whole lifecycle into account. Normally, the scope is split between a number of organizations and persons (owner, engineering, vendors, consultants, etc.). In order to plan activities and responsibilities so that they can be integrated into all these different organizations’ activities, a common planning session at the outset of a project is a good way to coordinate activities and align priorities. Such a meeting should be facilitated by a competent functional safety expert. The results of functional safety planning should then be integrated into each organization’s project plan.

Golden practice 2 – Competence mapping and training development

Each company involved in the safety lifecycle shall have competence requirements for each role related to the work to be done. Mapping of competence of the employees should be performed in order to identify gaps, and training plans developed to make sure such gaps are closed. In assessing competence requirements, the factors described in Chapter 5 in the Norwegian Oil and Gas Association’s Guideline 070 should be used as a basis.

Golden practice 3 – Functional safety requirements in contracts

Include functional safety requirements in contracts across all interfaces, with clear descriptions of the expected level of involvement, as well as deliverables such as hardware, software and documentation of these in accordance with project requirements. The contract should require all parties to prepare for and participate in audits and functional safety assessments as needed by the project. A simple reference to a standard may be legally binding, but it leaves unclear exactly what the priorities are and which activities each organization is responsible for.

Golden practice 4 – Constructive auditing

Consider the need for audits of partners and vendors based on project risk (non-conformance risks, schedule risks and the cost impact of such slips). If vendors have responsibility for development and engineering activities, auditing of these vendors should be considered. Functional safety audits should be integrated into the project’s overall plan.

Implementing the golden practices does not guarantee a problem-free project, but the chances of high performance will certainly improve if you adopt them in your next project. Golden practice 1 in particular – treating functional safety planning as a cross-organizational activity – is beneficial for establishing a common understanding and common goals for everyone involved.

Building barriers against major accidents

We all have someone we love – a life partner, kids, friends, family or even a dog. These are the most important beings in our lives, and we care deeply about their wellbeing. We trust employers to make workplaces safe, so that the people most important to us can come home safely from work every day. Some workplaces have inherent dangers that expose people to unacceptable risks unless they are handled well. How do we manage the most severe accident risks, such as explosion risk on an offshore oil platform, nuclear accidents, or releases of toxic chemicals, as in the horrific 1984 Bhopal accident?

When we build and operate such plants we need to know what the hazards are, and we need to plan barriers to stop accident scenarios from developing. Risk management is thus integral to all sound engineering activity. A good description of a risk management process is given in ISO 31000 – such a process consists of several steps that should be familiar to practicing engineers and plant managers.

In the figure you can see this risk process explained. First of all, it is necessary to establish the context so that we can understand the impact of the risk – that is, we need to ask questions such as:

  • What is the business environment we are operating in?
  • Who will be present and exposed to the risk?
  • What type of training do these people have?
  • Where is the plant located?
  • Etc., etc.

Then, we work to identify risks. In a process plant this activity is typically done in a number of workshop meetings such as design reviews and, perhaps most importantly, the HAZOP (hazard and operability study). The risks identified are then analyzed, to see what the overall risk to the asset and the people operating it is. Based on the risk analysis, the risk is evaluated against acceptance criteria: is the risk acceptable, or do we need to devise some scheme to lower it?

In most cases where major accident hazards are possible, some form of risk treatment is necessary. In fact, an overall principle for barriers against major accident hazards (MAHs), embedded in many countries’ legislation, is:

“A single failure shall not lead directly to an unacceptable outcome.”

This leads us directly to the next natural line of thought: we need to build barriers into our process to stop accidents from happening, or at least to make sure the accident development path is changed to avoid unacceptable outcomes.

Common practice in process engineering is to require two barriers against accident scenarios; these shall differ in working principle and each be able to stop an accident from occurring on its own. In practice, one of these barriers would typically be a mechanical system not relying on electronics at all – such as a spring-loaded pressure relief valve. The other barrier is typically implemented in an automation system as a safety trip. It is to this latter barrier type, the Safety Instrumented Function (SIF), that we apply the concept of safety integrity levels (SIL) and the reliability standards IEC 61511 and IEC 61508.

Taking overpressure in a pressure vessel as an example, we can see how these barriers work to stop an accident from occurring. Assume a pressure vessel has a single feed coming from a higher-pressure source, where the pressure is reduced before entry into the vessel by a pressure reduction valve (a choke valve). As long as the design pressure (the maximum allowable working pressure, MAWP) of the pressure vessel is below the pressure of the source, we have a potential for overpressurizing the tank. This is always dangerous – and particularly so if the contents are flammable (hydrocarbon gases, anyone?) or toxic (try googling methyl isocyanate). Clearly, in this situation, a single error in the choke valve can lead to a large release of dangerous material. Such errors may be due to material failure of the valve (e.g. fatigue), maloperation, or a control system error if the valve is an actuated valve used as the final element in a control loop, for example for production rate control. Process safety standards such as ISO 10418 and API RP 14C require such pressure vessels to be equipped with pressure safety valves that relieve the pressure to a safe location when the design pressure is exceeded (typically the gas is burnt in a controlled flaring process). That is one barrier. Another barrier would be to install a pressure transmitter on the tank, and a safety valve that shuts off the supply of gas from the pressure source. This valve and measurement should be connected to a control system that is independent of the normal process control system – to prevent a failure in the control system from also disabling the barrier function.
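
As an aside, the trip logic of such a safety instrumented function is conceptually simple. Here is a minimal Python sketch of the high-pressure trip described above; the tag names and the setpoint are hypothetical, and a real SIF would of course run in a certified safety logic solver, not in ordinary application code.

    # Minimal sketch of the high-pressure trip (SIF) described above.
    # Tag names and setpoint are hypothetical illustration values.
    TRIP_SETPOINT_BARG = 50.0  # assumed trip pressure, set below the vessel MAWP

    def high_pressure_trip(pt_101_barg: float) -> dict:
        """Close the inlet shutoff valve on high pressure.
        The valve is assumed fail-safe (de-energize to close), so the logic
        outputs an energize/de-energize command rather than open/close."""
        trip = pt_101_barg >= TRIP_SETPOINT_BARG
        return {
            "XV_101_energized": not trip,  # de-energized -> spring closes valve
            "high_pressure_alarm": trip,
        }

    print(high_pressure_trip(48.2))  # normal operation: valve held open
    print(high_pressure_trip(53.7))  # trip: valve de-energized, alarm raised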

To sum up: by systematically identifying risks and evaluating them against acceptance criteria, we have a good basis for introducing barriers. All accident scenarios should be controlled with at least two independent barriers, where one of them should be instrumented and the other preferably not. Instrumented functions should be in addition to the basic control system to avoid common cause failures. The Safety Instrumented System (SIS) should be designed in accordance with applicable reliability standards to ensure sufficient integrity. Finally, the design must comply with local regulations and required industry practice and guidance – such as applicable international or local standards.

Do you trust your numbers?

People tend to rely more on numbers than on other types of «proof» of goodness. Where those numbers come from seems to play less of a role. Of course, a number picked out of thin air is just as worthless as a Greek government bond – but why do we then seem to trust a promise, as long as someone has put a number on it? Many people have discussed this in many settings before, but one of my favorites is this 2012 blog post at the American Mathematical Society by Jean Joseph.

This kind of “everything is fine because the numbers say so” thinking is very much present in functional safety; an overfocus on probability calculations is common. I believe there are several reasons for this. First, engineers like quantitative measures – and there are good and sound methodologies for performing reliability calculations. Second, we tend to trust what numbers say more than qualitative information that we perceive to be less accurate.

A SIL requirement consists of four types of requirements, the practical implications of which depend on the integrity level sought. The four types of requirements are described below.

The quantitative requirements are probability calculations. We tend to overfocus on these at the expense of the others. The quality of these calculations depends on the quality of the input data (failure rates) – and the quality of such data can be very hard to verify.

Semi-quantitative requirements are in most cases expressed as the required redundancy (hardware fault tolerance) and the safe failure fraction. To build in the necessary robustness in a safety function, redundancy is required to ensure a single failure does not lead to a dangerous failure of the safety function. The required redundancy depends on the SIL of the function, as well as the fraction of failures that will lead to a safe state directly (the so-called safe failure fraction, SFF). In practice, we see somewhat less focus on this than on the probability calculations themselves (PFD).
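
To illustrate how redundancy changes the numbers, here is a small Python sketch using simplified low-demand PFD approximations of the kind given in IEC 61508-6. The failure rate, test interval and beta factor below are assumptions for illustration, not data for any real device.

    # Simplified low-demand PFD approximations (in the spirit of IEC 61508-6).
    # lambda_du: dangerous undetected failure rate (per hour)
    # tau: proof test interval (hours); beta: common cause failure fraction

    def pfd_1oo1(lambda_du: float, tau: float) -> float:
        return lambda_du * tau / 2

    def pfd_1oo2(lambda_du: float, tau: float, beta: float = 0.1) -> float:
        # Independent double failure term plus a common cause term.
        independent = ((1 - beta) * lambda_du * tau) ** 2 / 3
        common_cause = beta * lambda_du * tau / 2
        return independent + common_cause

    lam = 5e-7    # assumed dangerous undetected failure rate per hour
    tau = 8760.0  # assumed yearly proof test
    print(f"1oo1 PFD: {pfd_1oo1(lam, tau):.1e}")  # ~2.2e-03, SIL 2 range
    print(f"1oo2 PFD: {pfd_1oo2(lam, tau):.1e}")  # ~2.2e-04, common cause dominates

Note how the common cause term dominates the redundant architecture; the beta factor is often the most important (and most uncertain) number in the calculation.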

Software requirements depend on the required SIL and the type of software development involved. Software competence among system users and system integrators is typically lower than their hardware competence. This causes the software requirement setting and compliance assessment to be delegated to the software vendor without much oversight from the integrator or user. This is a competence-based weakness in the lifecycle in such cases that we cannot capture in the numbers we calculate.

Qualitative requirements concern how we work with the SIS development process itself, including managing changes and ensuring that systematic errors are not introduced. An important part of this work, and of the requirements we need to meet, is ensuring that all activities are performed by personnel competent for their roles.

If we are going to trust the probabilities calculated, we need to trust that the right level of redundancy exists. We need to trust that software developers create their code in a way that makes the existence of bugs with potential dangerous outcomes very unlikely. We need to trust that everybody involved in the SIS development has the right level of competence and experience, and that the organizations involved have systems in place to properly manage the development process and all its requirements. A simple probability estimate does not tell us much, unless it is born in the context of a properly managed SIS development process.

Contracts, interfaces and safety integrity

What do contract structures have to do with the safety of an industrial plant? A whole lot, actually. First, let us consider how contract structures regulate who does what on a large engineering and construction project. Normally, there will be an operator company that wants to build a new plant, be it a refinery, a chemical plant or an offshore oil platform. Such companies do not normally perform planning and construction themselves, nor do they break the work down into many small work packages. They outsource the engineering, construction and installation to a large contractor in the form of an EPC contract. The contractor is then responsible for planning, engineering and construction in accordance with contract requirements. Such contract requirements consist of many commercial and legal provisions, as well as a large range of technical regulations. On the technical side, the plant has to be engineered and built in accordance with applicable laws and regulations for the location where the plant is to be commissioned and used, as well as with company policies and standards as defined by the operating company.

What is the structure of the EPC contractor’s organization then, and how does this structure influence the safety of the final design? There is a lot of variation out there, but the following are common to all large projects:

  • A mix of employees and contractors working for the EPC company
  • Separation of engineering scope into EPC contractor scope and vendor scopes
  • Interface management is always a challenge

So – the situation we have is that long-term competence management is difficult due to a large number of contractors being involved. Communication is challenging due to many organizational interfaces. There is a significant risk of scope overlap or scope mismatch between vendor scopes. Finally, some interfaces will work well, and some will not.

Management of functional safety is a lifecycle activity that ties into many parts of the overall EPC scope. Hence, it is critical that everyone involved understands what his or her responsibilities are. Unfortunately, the competence level of the various players in this field is highly variable, and an overall competence management scheme is hard to implement. The closest tool available across company interfaces is the functional safety audit – a tool that seems to be largely underutilized.

Contracts tend to include functional safety requirements simply by reference to a standard. This may be sufficient where both parties fully comprehend what this means for the scope of work, but most likely there will be a need for clarification regarding the split of the scope even in this case. In order to make interface management easier (or even feasible), the scope split should be included in the contract, together with requirements for communication across interfaces and for role descriptions with proper competence requirements. This would then be easier to work with for the people involved, including HR, procurement, quality assurance, HSE and other management roles.

A quest for knowledge – and the usefulness of your HR department in functional safety management

Most firms claim that their people are their most important asset. Whether this has any effect on operations is another matter – some actually mean it, while others seem to do little to keep their people well-equipped for the tasks they need to do.

When it comes to functional safety, competence management is a very important part of the game. In many projects, one of the major challenges is getting the right information and documentation from suppliers. Why is this so difficult? It comes down to work processes, communication and knowledge, as discussed in a previous post. One requirement common to IEC 61508 and IEC 61511 is that every role involved in the safety lifecycle should be competent to perform its function. In practice, this is only fulfilled if each of these roles has a clear description of competence requirements and expectations, of how competence will be assessed, and of how knowledge will be created for the role.

There are many ways of training your people, and this is a huge part of the field of HR. Most likely, people in your company’s HR functions actually know a great deal about planning, organizing and executing competence development programs. Involving them in your functional safety management planning can thus be a good idea! A few key issues to think about:

  • What are your key roles (package engineer, procurement specialist(!), instrument engineer, project manager, etc., etc.)?
  • What are the requirements for each of these key roles?
  • How do you check if they have the right competence? (peer assessment, tests, interviews, experience, etc.)?
  • What training resources do you have available? (Courses, e-learning, on-the-job-training, self-study, etc.)?
  • How often do you need to reassess competence?
  • Who is responsible for this system? (HR, project manager, functional safety engineer, etc.)?

A company that has this firmly in place will most likely be able to steer its supply chain and help suppliers gain confidence and knowledge – vastly improving communication across interfaces, and thereby also the quality of cross-organizational work.

Taking the human factor into account when setting SIL requirements

A well-known fact from accident investigations is that the human factor plays a huge role. In many large accidents, the inquiry will cite organizational factors, leadership focus, procedures and training as important factors in a complex picture involving both human and technological factors. In the oil and gas industry it has been found that more than half of the gas leaks detected offshore are down to human factors and errors made during operation, maintenance or startup. On the other hand, humans may also play the role of the safeguard: an operator may choose to shut down a unit behaving suspiciously before any dangerous situation occurs, a vehicle driver may slow down to avoid relying heavily on the ABS system when braking on icy roads, an electrician may suggest replacing a discolored socket that otherwise works fine. All of these are human actions that lower the risk. The human thus always enters the risk picture, and can both enhance and threaten the safety of an asset. This all depends on leadership, training, organizational maturity and attitudes. How do we deal with this in the context of safety integrity levels?

There are many practices. Thorough methodologies for analyzing human performance as part of barrier systems are available, such as human reliability analysis (HRA), developed first in the nuclear industry but now commonplace in many sectors (petroleum, chemical industry, aviation and transport). At the other end of the scale are the extremes of assuming “humans always fail to do the correct thing” or “humans always do the right thing”. When performing a SIL allocation analysis using typical methods such as layers of protection analysis (LOPA) or the risk graph (both described in IEC 61511), an important thing to consider is: can the bad outcome be avoided by human intervention? In many cases humans can intervene, and then we need a notion of how reliable the human is. Human performance is influenced by many factors, and these factors are analyzed in depth in the framework of HRA. During a LOPA, a very detailed analysis of the human contribution is usually not within the scope, and a simpler approach is taken. However, there are some important questions we can bring from the HRA toolbox that will help us build more trust in the numbers we use in the LOPA, or in the credit we give this barrier element in the risk graph:

  • Is the operator well-trained and is the task easy to understand?
  • Does the operator have the necessary experience?
  • Does the organization have a positive safety culture?
  • Are there many tasks to handle at once and no clear priorities?
  • Is the situation stressful?
  • Does the operator have time to comprehend the situation, analyze the next action and execute before it is too late?

In many cases the operator will be well-trained exactly for the accident scenarios in question. Also, if designed correctly, the alarm system will provide clear alarm prioritization and helpful messages – but it is always good to challenge this, because the quality of alarm design varies a lot in practice. The situation is almost always stressful if the consequence of the accident is grave and there is some confusion about the situation, but training can do wonders in such situations by instilling reflex operating steps – think of basic field skills training in the military. The last question is always important: does the operator have enough time? How much time is enough is hard to pin down with a fixed limit; for simple situations 10-15 minutes may be sufficient, whereas for more complex situations perhaps a full hour would be needed for human intervention to be a trustworthy barrier element. Companies may have different guidelines regarding these factors – it should always be considered whether these guidelines are in line with current knowledge of human performance. No reaction times shorter than 15 minutes should be allowed in the analysis if credit is given to the operator. For unusual scenarios, as is the case for “low-demand” safety functions, a PFD of the human intervention lower than 10% should not be used.
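
To see how operator credit enters the arithmetic, here is a minimal LOPA-style sketch in Python. All frequencies and PFDs are invented for illustration, and the operator response is capped at the 10% PFD floor discussed above.

    # Minimal LOPA-style sketch: crediting operator intervention as an IPL.
    # All numbers are illustrative assumptions, not recommended values.
    initiating_frequency = 0.1  # assumed initiating events per year
    target_frequency = 1e-5     # assumed tolerable outcome frequency per year

    ipl_pfds = {
        "operator response to alarm": 0.1,  # never claim better than 0.1
        "mechanical relief valve": 0.01,
    }

    mitigated = initiating_frequency
    for name, pfd in ipl_pfds.items():
        mitigated *= pfd
    print(f"Frequency after IPLs: {mitigated:.1e} per year")  # 1.0e-04

    # A SIF must close the remaining gap between mitigated and target:
    required_sif_pfd = target_frequency / mitigated
    print(f"Required SIF PFD: {required_sif_pfd:.1e}")  # 1.0e-01 -> SIL 1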

Giving credit to human intervention in SIL allocation is good practice – but the credit given should be realistic based on what we know about how humans react in these situations. Due to the large uncertainty, especially when performing a “quick-and-dirty” shortcut analysis such as discussed above, conservative values for human error should be assumed.

Also note that when a human action is included as an “independent protection layer” in a LOPA, the integrity of the entire barrier system includes this action as well. This means that in order to have control over barrier integrity, the company must carefully manage the underlying factors, such as organizational maturity, safety leadership and competence management. Increased attention to these factors in internal hazard reviews could lead to improved safety performance; perhaps the number of accidents with human error as a root cause could be significantly reduced through more structured inclusion of human elements in barrier management thinking.

How independent should your FSA leader be?

Functional safety assessment (FSA) is a mandatory independent review/audit of functional safety work, required by most reliability standards. In line with good auditing practice, the FSA leader should be independent of the project development. What exactly does this mean? Practice varies from company to company, from sector to sector and even from project to project. It seems reasonable to require a greater degree of independence for projects where the risks managed through the SIS are more serious. IEC 61511 requires (Clause 5.2.6.1.2) that functional safety assessments are conducted with “at least one senior competent person not involved in the project design team”. In a note to this clause the standard remarks that the planner should consider the independence of the assessment team (among other things). This is hardly conclusive.

If we go to the mother standard IEC 61508, requirements are slightly more explicit, as given by Clause 8.2.15 of IEC 61508-1:2010, which states that the level of independence shall be linked to perceived consequence class and required SILs. For major accident hazards, two categories are used in IEC 61508:

  • Class C: death of several people
  • Class D: very many people killed

For class C the standard accepts the use of an FSA team from an “independent department”, whereas for class D only an “independent organization” is acceptable. Further, also for class C, an independent organization should be used if the degree of complexity is high, the design is novel, or the design organization lacks experience with the particular type of design. There are also requirements based on systematic capability in terms of SIL, but in the context of industrial processes those are normally less stringent than the consequence-based requirements for FSA team independence. The standard also specifies that compliance with sector-specific standards, such as IEC 61511, would make a different basis for considering independence acceptable.

In this context, the definitions of “independent department” and “independent organization” are given in Part 4 of the standard. An independent department is separate and distinct from the departments responsible for activities that take place during the specified phase of the overall system or software lifecycle subject to the assessment. This also means that the line managers of those departments should not be the same person. An independent organization is separated, by management and other resources, from the organizations responsible for activities that take place during the lifecycle phase. In practice, this means that the organization leading a HAZOP or LOPA should not perform the FSA for the same project if there are potential major accident hazards within the scope, and preferably not if there are any significant fatal accident risks in the project. Considering the requirement of separate management and resource access, it is not a non-conformity if two different legal entities within the same corporate structure perform the different activities, provided they have separate budgets and leadership teams.

If we consider another sector-specific standard, EN 50129 for RAMS management in the European railway sector, we see that similar independence requirements exist for third-party validation activities. Figure 6 in that standard seemingly allows the assessor to be part of the same organization as one involved in SIS development, but for this situation it further requires that the assessor is authorized by the national safety authority, is completely independent from the project team, and reports directly to the safety authorities. In practice, the independent assessor is in most cases from an independent organization.

It is thus highly recommended to have an FSA team from a separate organization for all major SIS developments intended to handle serious risks to personnel; this is in line with common auditing practice in other fields.

Why is this important? Because we are all humans. If we feel ownership of a certain process or product, or affiliation with an organization, it will inevitably be more difficult for us to point out what is not so good. We do not want to hurt people we work with by stating that their work is not good enough – even if we know that inferior quality in a safety instrumented system may actually lead to workers getting killed later. If we look to another field with the same type of challenges but potentially more guidance on independence, we can refer to the Sarbanes-Oxley Act of 2002 from the United States. The SEC has issued guidelines on auditor independence and what should be assessed. Specifically, they include:

  1. Will a relationship with the auditor create a mutual or conflicting interest with their audit client?
  2. Will the relationship place the auditor in the position of auditing his/her own work?
  3. Will the relationship result in the auditor acting as management or an employee of the audit client?
  4. Will the relationship result in the auditor being put in a position where he/she will act as an advocate for the audit client?

It would be prudent to consider at least these questions if considering using an organization that is already involved in the lifecycle phase subject to the FSA.

What is the demand rate on a safety function?

When we estimate the reliability of a safety instrumented function, we distinguish between “low demand” functions and “high demand” or “continuous demand” functions. These are all safety-critical functions, but their nature differs in terms of how frequently they must act on the system under study.

Consider for example the braking system on a train – the brakes need to work every time they are used, for every curve and every station. The train driver will activate the brakes several times every hour. Obviously, this is a “high demand” system. As an example of the opposite, think of systems that monitor some process and only act if a dangerous state is detected. A common example is an over-temperature trip on a heating system: if the temperature becomes too high, the system shuts off power to the heater through a circuit breaker (assuming it is an electrical heater). Nobody designs a system with the intention that it should overheat, so this function only needs to activate when a specific scenario occurs. Whether this is a “low demand” or “high demand” function depends on how often the function must work – and this again depends both on the “intrinsic frequency” of overheating and on other protection measures that may exist, such as independent alarms or special operator training and procedures.

If you think of the stove guard installed in your kitchen that monitors overheating in the range area, what would the demand rate be? If we assume you are a 25-year-old person who functions normally as long as you are sober, you would not forget to turn off a plate on the stove more than once per year. In addition, you may get really drunk 10 times per year, and you cook on some of those occasions, with a higher probability of forgetting – say this also leads to one forgotten plate per year. You then have an initiating event rate of 2 per year. Is this the demand rate on the stove guard function? It depends. Do you have any other measures that help you reduce the fire risk?

Typically you would have a smoke detector with an alarm, and possibly the stove guard would also give you a pre-alarm. The smoke detector is completely independent of the stove guard – and if it goes off, you would react to it. This reduces the demand on the stove guard if you look at it solely as a way to stop fires from occurring (smoke comes before fire). We now assume that the smoke alarm fails to do its job 1 out of 10 times, and that you or a (sober) friend would otherwise always react correctly in this situation – it is normally easy to identify smoke coming from the kitchen. Then we have reduced the demand on the stove guard to 2 x 0.1 = 0.2 times per year. This is safely in the “low demand” bracket. We deliberately did not count the pre-alarm on the stove guard itself, because it can have common cause failures with the core functionality of the stove guard – if one fails, the other fails too.

The next natural question to ask is: how reliable must the stove guard be? We may conservatively assume that every 10th time there is a real demand on the guard and it fails, there will be a fire that can kill you and destroy the house. This risk is quite severe, so say you would only accept your house burning down on average every 10,000 years, statistically speaking. This is your “acceptance criterion”: you accept 0.0001 fires per year from this potential source. The demand rate is 0.2 per year, and only 1 in 10 unprotected demands becomes a fire, giving 0.2 x 0.1 = 0.02 potential fires per year if the guard never works. The allowable probability of failure on demand for the stove guard is then 0.0001 / 0.02 = 0.005. This means we should require the system to have SIL 2 performance, with a PFD of at most 0.005, given this acceptance criterion and a system developed in accordance with IEC 61508.
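
The arithmetic can be laid out step by step; here is a small Python sketch using the same assumed numbers as above:

    # Stove guard demand rate and required PFD, using the assumptions above.
    forget_sober = 1.0     # forgotten plates per year while sober
    forget_drunk = 1.0     # forgotten plates per year after drinking
    smoke_alarm_pfd = 0.1  # smoke alarm + response fails 1 out of 10 times

    demand_rate = (forget_sober + forget_drunk) * smoke_alarm_pfd
    print(f"Demand rate on the stove guard: {demand_rate:.1f} per year")  # 0.2

    fire_given_failure = 0.1     # 1 in 10 unprotected demands becomes a fire
    acceptable_fire_rate = 1e-4  # one fire per 10,000 years on average

    required_pfd = acceptable_fire_rate / (demand_rate * fire_given_failure)
    print(f"Required PFD: {required_pfd:.3f}")  # 0.005 -> SIL 2 band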

As a side note, the Norwegian research institute SINTEF has tested some stove guards. They tested 7 different types and concluded that only 3 of them worked well. The reliability of the devices also depends on installation (location of sensors). This means that expecting close to SIL 3 performance seems unreasonable for the solutions on the market today. The SINTEF report can be found on the Norwegian Directorate for Civil Protection’s website.

Electrical isolation of ignition sources on offshore installations

One of the typical major accident scenarios considered when building and operating an offshore drilling or production rig is a gas leak that is ignited, leading to a jet fire or, even worse, an explosion. For this scenario to happen we need three things (from the fire triangle):

  • Flammable material (the gas)
  • Oxygen (air)
  • An ignition source

The primary protection against such accidents is containment of flammable materials; avoiding leaks is the top safety priority offshore. As a lot of this equipment is outdoors (or in “naturally ventilated areas”, as standards tend to call it), removing the “air” is not an option. Removing ignition sources therefore becomes very important in the event of a gas leak. The technical system used to achieve this consists of a large number of gas detectors distributed across the installation, which report detected gas to a controller, which in turn sends a signal to shut down all potential ignition sources (i.e. non-Ex-certified equipment; see the ATEX directive for details).

Since this is the barrier between “not much happening” and “a major disaster”, the reliability of this ignition source control is very important. Ignition sources are normally electrical systems not specifically designed to avoid ignition (i.e. equipment without Ex certification). In order to have sufficient reliability in this set-up, the number of isolation points should be kept to a minimum; this means that the non-Ex equipment should be grouped in distribution boards such that an incomer breaker can isolate the whole group, instead of doing it at the individual consumer level. This is much more reliable, as the probability of failure on demand (PFD) will contain an additive term for each of the breakers included:

PFD = PFD(detector) + PFD(logic) + Σ PFD(breaker)

Consider a situation where you have 100 consumers, the dangerous undetected failure rate for the breakers is 10^-7 failures per hour of operation, and the breakers are proof tested every 24 months. The contribution from a single breaker is then

PFD(breaker) = λ_DU x τ / 2 = 10^-7 x (8760 x 2) / 2 = 0.000876

If we then have 6 breakers that need to open for full isolation, the breakers contribute about 0.005 to the PFD (which means that, with reliable gas detectors and logic solver, a full loop can still satisfy a SIL 2 requirement). If we have 100 breakers, the contribution to the PFD is about 0.09 – and the best we can hope for is SIL 1.
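
The same sums can be written out in a few lines of Python, using the assumed failure rate and proof test interval from above:

    # Ignition source isolation: additive breaker contributions to the PFD.
    lambda_du = 1e-7            # dangerous undetected failures per hour (assumed)
    test_interval_h = 8760 * 2  # proof test every 24 months

    pfd_breaker = lambda_du * test_interval_h / 2
    print(f"Per breaker: {pfd_breaker:.6f}")  # 0.000876

    for n_breakers in (6, 100):
        total = n_breakers * pfd_breaker
        print(f"{n_breakers} breakers: {total:.4f}")
    # 6 breakers:   0.0053 -> a full loop can still meet SIL 2
    # 100 breakers: 0.0876 -> SIL 1 at best, even before detectors and logic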