Treat your people well if you want your safety systems to work

Reliability standards all state requirements for planning and managing functional safety work throughout the lifecycle. There are good and bad ways of doing this, and there is room for interpretation of the requirements in the standards. I’ve previously suggested four golden practices for good functional safety management, based on experience with what does not work in complex project organizations in real-world offshore projects. This time we turn to another important aspect of maintaining focus, drive and high quality in all types of projects: how to plan and run a project with the well-being of the project members and stakeholders in mind.

“Making a small mistake due to stress and overload can have severe consequences when you are designing a barrier system.”

All resource-constrained projects run the risk of sliding into organizational overload. Overload occurs when individuals and organizations are fully saturated with workload; a situation where additional stress cannot be handled without decreasing performance. For an individual or organization in an overload state, dysfunctional behaviors and dramatically reduced productivity can be expected. Four important factors related to perceived stress levels are competence, comfort, confidence and control. If these “4 C’s” are disrupted, stress increases rapidly and the organization or individual may go into a state of overload. There are many contributors to perceived stress levels for individuals; some of them may be called baseline factors, such as the type of organization and the organizational culture. Factors typically affecting individual stress levels within a project group are:

•    Scope creep without adequate resource allocation – leading to unrealistically high workloads

•    Need to apply technologies not understood by team members, and without appropriate training

•    Inadequate sponsor and management support

•    Lack of change management capabilities within project team

•    Management pushing for high-risk project without being willing to accept potential failures.

So, how would an overload situation affect the outcomes of SIL work? Would the system be passed through to operations with insufficient quality, or would lower-quality products be caught in one or several verification activities and be rectified before the system is put into operation? Both scenarios are quite likely to occur. If overload occurs in early phases of the project, it is likely that errors will sneak into the requirement-setting phase. There are fewer formal checks on the requirements themselves than on the implementation. The most likely verification activity to catch an error from the requirement phase would be the functional safety assessment (FSA). It is recommended to have one FSA just after the requirement phase, but many projects skip this and wait until further into the SIS development part of the lifecycle. Discovering errors in requirements late in the project would have significant schedule and cost impact. Worse yet, finding errors made in the requirements phase is much more difficult than finding implementation errors. In software development, it is well known that about 40–50% of bugs originate from errors in specification. Without specific verification activities on the requirement setting, this is likely to be even worse. For an interesting discussion on software bugs and specifications, see this old blog post from Tyner Blain: http://tynerblain.com/blog/2006/01/22/where-bugs-come-from/.

One type of error would be to include protection layers that are not really dependable in the risk assessment, generating a lower SIL requirement than necessary. Another would be to overlook identified hazards, such that there are “holes in the safety net” and effectively zero integrity for that type of scenario. Consequences of organizational overload can thus be severe in a project related to functional safety.

Strategies to counter risk of overload within a project are related to active management and control, both on the people level, as well as on measures of progress and cost control. Practices that are known to reduce likelihood of overload conditions on individual or group levels are:

•    Make few changes, and carefully manage changes when they are necessary

•    Check time constraints and resource allocation – management should not assign unrealistic time constraints on tasks. Most project management frameworks include the notion of “slack planning” to incorporate this task related schedule risk

•    Keep sustaining sponsors actively involved. Lack of interest from senior sponsors is very visible to project members and may hurt the energy levels available to perform at the expected level

•    Improve change capacity by managing talent within the project actively; this includes training, offering career possibilities when project nears delivery as well as succession planning for key roles.

From these bullet points, which are taken from PMI recommendations for handling overload risk, we conclude that taking care of your people is essential for good functional safety. Good competence management is key to achieving this; lack of confidence in one’s abilities is one of the most common stressors in projects with a high degree of time pressure. Allocating resources such that there are not enough manhours available just adds fuel to the fire. I have previously referred to this as “stupid resource planning”, and unfortunately this is quite common, especially when there are schedule slips in projects.

To sum it up – make sure your people are not exposed to negative stressors over time. Maintain a positive attitude as a project manager – you are the leader of your people working on the project. In that respect – the most important thing of all is – “catch your people doing something right” – give praise whenever it is deserved. That takes the edge off of people and motivates them – the best cure there is against negative stress.

Favourite boots for walking

My good colleague Ida has been reading my blog, and she told me it was great, but… it is lacking an important type of post that she was used to reading on other great blogs. So, in honor of Ida, I bring to you, a one-time only happening: the outfit of the day. I really promise never to do this again 😉

Here, the safety consultant is wearing McKinley hiking boots, pants from Nordheim, and a soft-shell from Craft that is a trusted ally on short hiking trips. Nothing is like starting a rainy summer day in the forest instead of the office!

Machine safety – what is it? 

Machines can be dangerous. Many occupational accidents are related to use of machinery, and taking care of safety requires attention to the user in design, operation and training as well as when planning maintenance. 

  
In Europe there is a directive regulating the safety of machinery, namely 2006/42/EC. This directive is known as the machinery directive and has been made mandatory in all EU member states as well as Norway, Liechtenstein and Iceland. 

The directive requires producers of machines to identify hazards and design the machine such that the risks are removed or controlled. Only machines conforming to the directive can be sold and used in the EU. 

In practice, risks must be treated using safety functions in the control system. These should be designed in accordance with recognized standards; the recommended standards are ISO 13849-1 and IEC 62061. They are different in approach but considered equivalent in terms of safety: the former defines five performance levels (a, b, c, d, e) and the latter uses three safety integrity levels. The most common risk analysis approach for defining PL or SIL requirements is the risk graph.
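To make the risk graph idea concrete, it can be sketched as a simple lookup from the assessed parameters to a required level. The parameter labels (C, F, P, W) follow the usual risk graph pattern, but the table entries below are hypothetical and not a calibration from ISO 13849-1, IEC 62061 or any other standard:

```python
# Illustrative risk graph lookup. The calibration below is hypothetical and
# NOT taken from any standard; real risk graphs are calibrated per company.

RISK_GRAPH = {
    # (consequence, frequency/exposure, avoidance possibility, demand rate)
    ("C2", "F1", "P1", "W2"): "SIL 1",
    ("C2", "F2", "P2", "W2"): "SIL 2",
    ("C3", "F2", "P2", "W3"): "SIL 3",
}

def required_level(c, f, p, w):
    """Return the required integrity level, or None if no requirement applies."""
    return RISK_GRAPH.get((c, f, p, w))

print(required_level("C2", "F2", "P2", "W2"))  # SIL 2
```

The point of the table form is traceability: every SIL (or PL) requirement can be traced back to a documented parameter assessment.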

By conforming to the directive – basically through application of these standards together with the general principles in ISO 12100 – you can put the CE mark on the machine and declare it safe to use. Through these practices we safeguard our people, and can be confident that the machine will not be the cause of someone losing a loved one.

Four golden practices for functional safety management

Managing functional safety activities and ensuring high integrity of instrumented barriers is not fundamentally different from other project management activities. This means that functional safety management should be integrated into the overall project planning, management and controlling activities. I will be presenting a paper written in cooperation with several colleagues at Lloyd’s Register Consulting at the next ESREL conference on this topic, but here is a sneak peek at the four golden practices.

Golden practice 1 – Planning of functional safety should be a group activity involving all relevant organizations

Management of functional safety should be planned for the asset as a system, taking the whole lifecycle into account. Normally, the scope is split between a number of organizations and persons (owner, engineering, vendors, consultants, etc.). In order to plan activities and responsibilities such that it can be integrated into all these different organizations’ activities, a common planning session at the outset of a project is a good practice to coordinate activities and align priorities. Such a meeting should be facilitated by a competent functional safety expert. The results of functional safety planning should then be integrated into each organization’s project plan.

Golden practice 2 – Competence mapping and training development

Each company involved in the safety lifecycle shall have competence requirements for each role related to the work to be done. Mapping of competence of the employees should be performed in order to identify gaps, and training plans developed to make sure such gaps are closed. In assessing competence requirements, the factors described in Chapter 5 in the Norwegian Oil and Gas Association’s Guideline 070 should be used as a basis.

Golden practice 3 – Functional safety requirements in contracts

Include functional safety requirements in contracts across all interfaces, with clear descriptions of the expected level of involvement, as well as deliverables such as hardware, software and documentation of such in accordance with project requirements. It should be included in the contract that all parties are required to prepare for and participate in audits and functional safety assessments as needed by the project. A simple reference to a standard may be legally binding, but with only such a reference it is unclear exactly what the priorities are and which activities each organization shall take care of.

Golden practice 4 – Constructive auditing

Consider the need for audits of partners and vendors based on project risk (non-conformance risks, schedule risks and the cost impact of such slips). If vendors have responsibility for development and engineering activities, auditing of these vendors should be considered. Functional safety audits should be integrated into the project’s overall plan.

Implementing the golden practices does not ensure a problem-free project, but the chances of high performance will certainly be improved by adopting them in your next project. Golden practice 1 – looking at functional safety planning as a cross-organizational activity – is especially beneficial for establishing a common understanding and common goals for everyone involved.

Building barriers against major accidents

We all have someone we love – a life partner, kids, friends, family or even a dog. These are the most important things in our lives – and we care deeply about the wellbeing of these special people (and animals) in our lives. We trust employers to make workplaces safe such that our most important ones can come back safely from work every day. Some workplaces have inherent dangers that are exposing people to unacceptable risks unless handled in a good way. How do we manage the most severe accident risks, such as explosion risk on an offshore oil platform, nuclear accidents, or releases of toxic chemicals, such as the horrific 1984 Bhopal accident?

When we build and operate such plants we need to know what the hazards are, and we need to plan barriers to avoid accident scenarios from developing. Risk management is thus integral to all sound engineering activity. A good description of a risk management process is given in ISO 31000 – such a process consists of several steps that should be familiar to practicing engineers and plant managers.

In the figure you can see this risk process explained. First of all, it is necessary to establish the context so that we can understand the impact of the risk – that is, we need to ask questions such as:

  • What is the business environment we are operating in?
  • Who will be present and exposed to the risk?
  • What type of training do these people have?
  • Where is the plant located?
  • Etc., etc.

Then, we work to identify risks. In a process plant this activity is typically done in a number of workshop meetings such as design reviews and, maybe most important, the HAZOP (hazard and operability study). The risks identified are then analyzed, to see what the overall risk to the asset and the people operating it is. Based on the risk analysis, the risk is evaluated against acceptance criteria: is the risk acceptable, or do we need to devise some scheme to lower it?
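The evaluation step boils down to comparing an analyzed scenario frequency against an acceptance criterion. A minimal sketch of that comparison is shown below; the tolerable frequency is an assumed illustrative value, not a regulatory limit:

```python
# Minimal sketch of risk evaluation against an acceptance criterion.
# The tolerable frequency is an assumed illustrative value only.

TOLERABLE_FREQUENCY = 1e-4  # tolerable frequency of the unwanted outcome, per year

def risk_acceptable(event_frequency_per_year):
    """True if the analyzed frequency meets the acceptance criterion."""
    return event_frequency_per_year <= TOLERABLE_FREQUENCY

print(risk_acceptable(2e-3))  # False -> risk treatment is needed
```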

In most cases where major accident hazards are possible, some form of risk treatment is necessary. In fact, an overall principle for barriers against major accident hazards (MAH’s) that is included in many legislations is:

“A single failure shall not lead directly to an unacceptable outcome.”

This leads us directly to our next natural line of thought; we need to build barriers into our process to stop accidents from happening, or to at least make sure an accident development path is changed to avoid unacceptable outcomes.

Common practice in process engineering is to require two barriers against accident scenarios, and these shall be different in working principle and be able to independently stop an accident from occurring. In practice, one of these barriers would typically be a mechanical system not relying on electronics at all – such as a spring-loaded pressure relief valve. The other barrier is typically implemented in an automation system as a safety trip. It is to this latter barrier type, the Safety Instrumented Function (SIF), that we apply the concept of safety integrity levels (SIL) and the reliability standards IEC 61511 and IEC 61508.

Taking overpressure in a pressure vessel as an example, we can see how these barriers work to stop an accident from occurring. Assume a pressure vessel has a single feed coming from a higher-pressure source, where the pressure is reduced before entry into the vessel by a pressure reduction valve (a choke valve). As long as the design pressure (the maximum allowable working pressure, MAWP) of the pressure vessel is below the pressure of the source, we have a potential for overpressurizing the tank. This is always dangerous – and particularly so if the contents are flammable (hydrocarbon gases, anyone?) or toxic (try googling methyl isocyanate). Clearly, in this situation, a single error in the choke valve can lead to a large release of dangerous material. Such errors may be due to material failure of the valve (e.g. fatigue), maloperation, or a control system error if the valve is an actuated valve used as the final element in a control system, for example for production rate control.

Process safety standards such as ISO 10418 and API RP 14C require such pressure vessels to be equipped with pressure safety valves, which will release the pressure to a safe location when the design pressure is exceeded (typically the gas is burnt in a controlled flaring process). That is one barrier. Another barrier would be to install a pressure transmitter on the tank, together with a safety valve that will shut off the supply of gas from the pressure source. This valve and measurement should be connected to a control system that is independent of the normal process control system – to avoid a failure in the control system also disabling the barrier function.
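The effect of the two independent barriers in the overpressure example can be illustrated with simple frequency arithmetic. All rates and PFD values below are assumptions for illustration only, not data from any source:

```python
# Illustrative frequency arithmetic for the overpressure example: two
# independent barriers reduce the frequency of the unwanted outcome
# multiplicatively. All numbers are assumptions for illustration only.

demand_rate = 0.1  # choke valve failures per year leading to an overpressure demand
pfd_psv = 1e-2     # probability that the pressure safety valve fails on demand
pfd_sif = 1e-2     # probability that the instrumented shutdown function fails on demand

# Independence is assumed (different working principles, separate systems):
loss_frequency = demand_rate * pfd_psv * pfd_sif
print(f"{loss_frequency:.1e} per year")  # 1.0e-05 per year
```

The multiplication is only valid because the barriers are independent – which is exactly why the standards insist on different working principles and separation from the process control system.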

To sum it up; by systematically identifying risks and evaluating them against acceptance criteria we have a good background for introducing barriers. All accident scenarios should be controlled with at least two independent barriers, where one of them should be instrumented and the other one preferably not. Instrumented functions should be in addition to the basic control system to avoid common cause failures. The Safety Instrumented System (SIS) should be designed in accordance with applicable reliability standards to ensure sufficient integrity. Finally – the design must comply with local regulations and required industry practice and guidance – such as applicable international or local standards.

Do you trust your numbers?

People tend to rely more on numbers than on other types of «proof» of goodness. Where those numbers come from seems to play less of a role. Of course, a number picked out of thin air is just as worthless as a Greek government bond – but why do we then seem to trust a promise as long as someone has put a number on it? Several people have discussed this in many settings before, but one of my favorites is this blog post at the American Mathematical Society from 2012 by Jean Joseph.

This “everything is fine because the numbers say so” thinking is very much present in functional safety; overfocus on probability calculations is common. I believe there are several reasons for this. First, engineers like quantitative measures – and there are good and sound methodologies for performing reliability calculations. We tend to trust what numbers say more than qualitative information that we perceive as less accurate.

A SIL requirement consists of four types of requirements – the practical implications of which depend on the integrity level sought. The four types of requirements are illustrated below.

The quantitative requirements are probability calculations. We tend to overfocus on these at the expense of the others. The quality of these calculations depends on the quality of the input data (failure rates) – and the quality of such data can be very hard to verify.

Semi-quantitative requirements are in most cases expressed as the required redundancy (hardware fault tolerance) and the safe failure fraction. To build in the necessary robustness in a safety function, redundancy is required to ensure a single failure does not lead to a dangerous failure of the safety function. The required redundancy depends on the SIL of the function, as well as the fraction of failures that will lead to a safe state directly (the so-called safe failure fraction, SFF). In practice, we see somewhat less focus on this than on the probability calculations themselves (PFD).
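As a sketch of how redundancy enters the numbers, the commonly used low-demand approximations for single and duplicated channels are shown below. These are the usual simplified expressions (dangerous undetected failures only, perfect proof testing, beta-factor model for common cause), and the input values are assumed for illustration:

```python
# Sketch of the common low-demand PFD approximations for 1oo1 and 1oo2
# architectures, simplified to dangerous undetected failures with perfect
# proof testing and a beta-factor common cause model. Inputs are assumed.

def pfd_1oo1(lam_du, ti):
    """Average PFD of a single channel: lambda_DU * TI / 2."""
    return lam_du * ti / 2

def pfd_1oo2(lam_du, ti, beta=0.1):
    """Average PFD of a redundant (1oo2) pair with common-cause fraction beta."""
    independent = ((1 - beta) * lam_du * ti) ** 2 / 3
    common_cause = beta * lam_du * ti / 2
    return independent + common_cause

lam_du = 2e-6  # dangerous undetected failure rate per hour (assumed)
ti = 8760      # proof test interval: one year, in hours

print(f"1oo1: {pfd_1oo1(lam_du, ti):.1e}")  # roughly an order of magnitude
print(f"1oo2: {pfd_1oo2(lam_du, ti):.1e}")  # better with redundancy
```

Note how the common cause term dominates the 1oo2 result: the calculated benefit of redundancy hinges on the assumed beta factor, which is itself a judgment call – another reason not to trust the numbers blindly.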

Software requirements depend on the required SIL and the type of software development involved. Software competence among system users and system integrators is typically lower than their hardware competence. This causes software requirement setting and compliance assessment to be delegated to the software vendor without much oversight from the integrator or user. In such cases this is a competence-based weakness in the lifecycle that we cannot capture in the numbers we calculate.

Qualitative requirements include how we work with the SIS development process itself, including managing changes, and ensuring systematic errors are not introduced. An important part of this work and the requirements we need to meet is to ensure that personnel competent for their roles perform all activities.

If we are going to trust the probabilities calculated, we need to trust that the right level of redundancy exists. We need to trust that software developers create their code in a way that makes the existence of bugs with potential dangerous outcomes very unlikely. We need to trust that everybody involved in the SIS development has the right level of competence and experience, and that the organizations involved have systems in place to properly manage the development process and all its requirements. A simple probability estimate does not tell us much, unless it is born in the context of a properly managed SIS development process.

Contracts, interfaces and safety integrity

What do contract structures have to do with the safety of an industrial plant? A whole lot, actually. First, let us consider how contract structures regulate who does what on a large engineering and construction project. Normally, there will be an operator company that wants to build a new plant, be it a refinery, a chemical plant or an offshore oil platform. Such companies do not normally perform planning and construction themselves, nor do they plan what has to be done and separate this into many small work packages. They outsource the engineering, construction and installation to a large contractor – in the form of an EPC contract. The contractor is then responsible for planning, engineering and construction in accordance with contract requirements. Such contract requirements will consist of many commercial and legal provisions, as well as a large range of technical regulations. On the technical side, the plant has to be engineered and built in accordance with applicable laws and regulations for the location where the plant is to be commissioned and used, as well as with company policies and standards, as defined by the operating company.

What is the structure of the EPC contractor’s organization then, and how does this structure influence the safety of the final design? There is a lot of variation out there, but common to all large projects is:

  • A mix of employees and contractors working for the EPC company
  • Separation of engineering scope into EPC contractor scope and vendor scopes
  • Interface management is always a challenge

So – the situation we have is that long-term competence management is difficult due to a large number of contractors being involved. Communication is challenging due to many organizational interfaces. There is a significant risk of scope overlap or scope mismatch between vendor scopes. Finally, some interfaces will work well, and some will not.

Management of functional safety is a lifecycle activity that ties into many parts of the overall EPC scope. Hence, it is critical that everyone involved understands what his or her responsibilities are. Unfortunately, the competence level of the various players in this field is highly variable, and an overall competence management scheme is hard to implement. The closest tool available across company interfaces is the functional safety audit – a tool that seems to be largely underutilized.

Contracts tend to include functional safety requirements simply by reference to a standard. This may be sufficient where both parties fully comprehend what this means for the scope of work, but most likely there will be a need for clarification regarding the split of scope, even in this case. In order to make interface management easier (or even feasible), the scope split should be included in the contract, as well as requirements for communication across interfaces and the existence of role descriptions with proper competence requirements. This would then be easier to work with for the people involved, including HR, procurement, quality assurance, HSE and other management roles.

A quest for knowledge – and the usefulness of your HR department in functional safety management

Most firms claim that their people are their most important asset. Whether this has any effect on operations is another matter – some actually mean it, while others do not do much about keeping their people well-equipped for the tasks they need to do.


When it comes to functional safety, competence management is a very important part of the game. In many projects, one of the major challenges is getting the right information and documentation from suppliers. Why is this so difficult? It comes down to work processes, communication and knowledge, as discussed in a previous post. One requirement common to IEC 61508 and IEC 61511 is that every role involved in the safety lifecycle should be competent to perform its function. In practice, this is only fulfilled if each of these roles has a clear description of competence requirements and expectations, of how competence will be assessed, and of how knowledge will be created for the role.

There are many ways of training your people, and this is a huge part of the field of HR. Most likely, people in your company’s HR functions actually know a great deal about planning, organizing and executing competence development programs. Involving them in your functional safety management planning can thus be a good idea! A few key issues to think about:

  • What are your key roles (package engineer, procurement specialist(!), instrument engineer, project manager, etc., etc.)?
  • What are the requirements for each of these key roles?
  • How do you check if they have the right competence? (peer assessment, tests, interviews, experience, etc.)?
  • What training resources do you have available? (Courses, e-learning, on-the-job-training, self-study, etc.)?
  • How often do you need to reassess competence?
  • Who is responsible for this system? (HR, project manager, functional safety engineer, etc.)?

A firm that has this in place will most likely be able to steer its supply chain and help suppliers gain confidence and knowledge – vastly improving communication across interfaces and thereby also the quality of cross-organizational work.

Taking the human factor into account when setting SIL requirements

A well-known fact from accident investigations is that the human factor plays a huge role. In many large accidents, the enquiry will mention organizational factors, leadership focus, procedures and training as important factors in a complex picture involving both human and technological factors. In the oil and gas industry it has been found that more than half of the gas leaks detected offshore are down to human factors and errors made during operation, maintenance or startup. On the other hand, humans may also play the role of the safeguard: an operator may choose to shut down a unit behaving suspiciously before any dangerous situation occurs, a vehicle driver may slow down to avoid relying heavily on the ABS system when braking on icy roads, an electrician may suggest exchanging a discolored socket that is otherwise well-functioning. All of these are human actions that lower risk. The human thus always comes into the risk picture and can both enhance and threaten the safety of an asset. This all depends on leadership, training, organizational maturity and attitudes. How do we deal with this in the context of safety integrity levels?

There are many practices. Thorough methodologies for analysis of human performance as part of barrier systems are available, such as human reliability analysis (HRA), developed first in the nuclear industry but now commonplace in many sectors (petroleum, chemical industry, aviation and transport). At the other extreme are the blanket assumptions that “humans always fail to do the correct thing” or “humans always do the right thing”. When performing a SIL allocation analysis using typical methods such as layers of protection analysis (LOPA) or the risk graph (both described in IEC 61511), an important thing to consider is: can the bad outcome be avoided by human intervention? In many cases humans can intervene, and then we need a notion of how reliable the human is. Human performance is influenced by many factors, and these factors are analyzed in depth in the framework of HRA. During a LOPA, a very detailed analysis of the human contribution is usually not within scope, and a simpler approach is taken. However, there are some important questions we can bring from the HRA toolbox that will help us build more trust in the numbers we use in the LOPA, or in the trust we put in this barrier element in the risk graph:

  • Is the operator well-trained and is the task easy to understand?
  • Does the operator have the necessary experience?
  • Does the organization have a positive safety culture?
  • Are there many tasks to handle at once and no clear priorities?
  • Is the situation stressful?
  • Does the operator have time to comprehend the situation, analyze the next action and execute before it is too late?

In many cases the operator will be well-trained exactly for the accident scenarios in question. Also, if designed correctly, there will be clear alarm prioritization and helpful messages from the alarm system – but it is always good to challenge this, because the quality of alarm design varies a lot in practice. The situation is almost always stressful if the consequence of the accident is grave and there is some confusion about the situation, but training can do wonders for handling such situations by resorting to reflex operating steps – think of basic training of field skills in the military. The last question is always important – does the operator have enough time? What “enough time” is can be hard to pin down; for simple situations 10–15 minutes may be sufficient, whereas for more complex situations a full hour may be needed for human intervention to be a trustworthy barrier element. Companies may have different guidelines regarding these factors – it should always be considered whether these guidelines are in line with current knowledge of human performance. If credit is given to the operator, reaction times shorter than 15 minutes should not be assumed in the analysis, and for unusual scenarios, as is the case for “low-demand” safety functions, a PFD for the human intervention lower than 10% should not be used.
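The floor on operator credit can be expressed in a minimal LOPA-style calculation. The 10% floor on human-intervention PFD follows the guidance above; every other number below is an illustrative assumption:

```python
# Minimal LOPA-style sketch of crediting operator intervention. The 10% floor
# on human-intervention PFD follows the guidance in the text; all other
# numbers are illustrative assumptions.

OPERATOR_PFD_FLOOR = 0.1  # never credit the operator with a PFD better than 10%

def credit_operator(claimed_pfd):
    """Clamp a claimed human-intervention PFD at the allowed floor."""
    return max(claimed_pfd, OPERATOR_PFD_FLOOR)

def mitigated_frequency(initiating_frequency_per_year, layer_pfds):
    """Multiply the initiating event frequency through independent layers."""
    freq = initiating_frequency_per_year
    for pfd in layer_pfds:
        freq *= pfd
    return freq

# Initiating event at 0.1/yr, one mechanical layer (PFD 0.01), operator
# response claimed at 0.01 but clamped to 0.1:
f = mitigated_frequency(0.1, [0.01, credit_operator(0.01)])
print(f"{f:.1e} per year")  # 1.0e-04 per year
```

Clamping the claimed value makes the conservatism explicit and auditable, rather than buried in the analyst's judgment.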

Giving credit to human intervention in SIL allocation is good practice – but the credit given should be realistic based on what we know about how humans react in these situations. Due to the large uncertainty, especially when performing a “quick-and-dirty” shortcut analysis such as discussed above, conservative values for human error should be assumed.

Also note that when a human action is included as an “independent protection layer” in a LOPA, the integrity of the entire barrier system includes this action as well. This means that in order to have control over barrier integrity, the company must carefully manage the underlying factors such as organizational maturity, safety leadership and competence management. Increased attention to these factors in internal hazard reviews could lead to improved safety performance; maybe the number of accidents with human error as a root cause could be significantly reduced through more structured inclusion of human elements in barrier management thinking.

How independent should your FSA leader be?

Functional safety assessment is a mandatory 3rd party review/audit for functional safety work, and is required by most reliability standards. In line with good auditing practice, the FSA leader should be independent of the project development. Exactly what does this mean? Practice varies from company to company, from sector to sector and even from project to project. It seems reasonable to require a greater degree of independence for projects where the risks managed through the SIS are more serious. IEC 61511 requires (Clause 5.2.6.1.2) that functional safety assessments are conducted with “at least one senior competent person not involved in the project design team”. In a note to this clause the standard remarks that the planner should consider the independence of the assessment team (among other things). This is hardly conclusive.

If we go to the mother standard IEC 61508, the requirements are slightly more explicit: Clause 8.2.15 of IEC 61508-1:2010 states that the level of independence shall be linked to the perceived consequence class and the required SILs. For major accident hazards, two categories are used in IEC 61508:

  • Class C: death to several people
  • Class D: very many people killed

For class C the standard accepts an FSA team from an “independent department”, whereas for class D only an “independent organization” is acceptable. Further, also for class C, an independent organization should be used if the degree of complexity is high, the design is novel, or the design organization lacks experience with this particular type of design. There are also requirements based on systematic capability in terms of SIL, but in the context of industrial processes these are normally less stringent than the consequence-based requirements for FSA team independence. The standard also allows compliance with sector-specific standards, such as IEC 61511, to serve as an alternative basis for deciding on an acceptable level of independence.

In this context, the definitions of “independent department” and “independent organization” are given in Part 4 of the standard. An independent department is separate and distinct from the departments responsible for activities which take place during the specified phase of the overall system or software lifecycle subject to the validation activity. This also means that the line managers of those departments should not be the same person. An independent organization is separated by management and other resources from the organizations responsible for activities taking place during the lifecycle phase. In practice this means that the organization leading a HAZOP or LOPA should not perform the FSA for the same project if there are potential major accident hazards within the scope, and preferably also not if there are any significant fatal accident risks in the project. Considering the requirement of separate management and resource access, it is not a non-conformity if two different legal entities within the same corporate structure perform the different activities, provided they have separate budgets and leadership teams.
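The decision rule described above can be summarized in a short sketch. This is purely an illustration of the logic as presented in this article – the class labels and escalation flags mirror the text, and it is not a normative implementation of IEC 61508.

```python
# Sketch of the FSA independence selection logic described above
# (IEC 61508-1 consequence classes C and D). Illustrative only;
# function and flag names are this sketch's own, not the standard's.

def required_fsa_independence(consequence_class, complex_design=False,
                              novel_design=False, inexperienced_team=False):
    """Return the minimum independence level for the FSA team."""
    if consequence_class == "D":   # very many people killed
        return "independent organization"
    if consequence_class == "C":   # death to several people
        # Class C escalates to an independent organization when the
        # design is complex or novel, or the design organization
        # lacks experience with this type of design.
        if complex_design or novel_design or inexperienced_team:
            return "independent organization"
        return "independent department"
    raise ValueError("only major accident classes C and D handled here")


print(required_fsa_independence("C"))                     # independent department
print(required_fsa_independence("C", novel_design=True))  # independent organization
```

The escalation flags are worth noticing: even a “lower” consequence class can demand full organizational independence once novelty or complexity enters the picture.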

If we consider another sector-specific standard, EN 50129 for safety-related electronic systems in the European railway sector, we see that similar independence requirements exist for third-party validation activities. Figure 6 in that standard seemingly allows the assessor to be part of the same organization as one involved in the SIS development, but in that case requires that the assessor holds an authorization from the national safety authority, is completely independent from the project team, and reports directly to the safety authorities. In practice, the independent assessor is in most cases from an independent organization.

It is thus highly recommended to have an FSA team from a separate organization for all major SIS developments intended to handle serious risks to personnel; this is in line with common auditing practice in other fields.

Why is this important? Because we are all human. If we feel ownership of a certain process or product, or affiliation with an organization, it will inevitably be more difficult for us to point out what is not so good. We do not want to hurt people we work with by stating that their work is not good enough – even though we know that inferior quality in a safety instrumented system may actually lead to workers getting killed later. If we look to another field with the same type of challenges but more explicit guidance on independence, we can refer to the Sarbanes-Oxley Act of 2002 in the United States. The SEC has issued guidelines on auditor independence and what should be assessed. Specifically, they include:

  1. Will a relationship with the auditor create a mutual or conflicting interest with their audit client?
  2. Will the relationship place the auditor in the position of auditing his/her own work?
  3. Will the relationship result in the auditor acting as management or an employee of the audit client?
  4. Will the relationship result in the auditor being put in a position where he/she will act as an advocate for the audit client?

It would be prudent to consider at least these questions when contemplating the use of an organization that is already involved in the lifecycle phase subject to the FSA.