Testing is an integral part of the operation and maintenance of equipment with a SIL rating. Testing is necessary to ensure that the safety instrumented function actually achieves its intended integrity. First of all, the assumed test frequency (the number of hours between each proof test, τ) has a direct impact on the calculated probability of failure on demand (PFD) for a given rate of dangerous undetected failures, λDU:
PFD = λDU × τ/2
This formula gives the average PFD for a single component. For redundant configurations things become more complicated, but let us stick to the single-component case for the sake of simplicity. Obviously, if we cut the number of hours between tests in half, we cut the PFD in half. So, the more often we test, the better it is – right? No – wrong!
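The relationship can be sketched in a few lines of code. The failure rate below is an illustrative value, not a figure from the text:

```python
def pfd_avg(lambda_du: float, tau_hours: float) -> float:
    """Average PFD for a single component: PFD = lambda_DU * tau / 2."""
    return lambda_du * tau_hours / 2

# Assumed dangerous undetected failure rate (per hour) for illustration.
lambda_du = 2e-6

annual = pfd_avg(lambda_du, 8760)      # proof test once a year
semiannual = pfd_avg(lambda_du, 4380)  # proof test twice a year

# Halving the test interval halves the calculated PFD.
assert abs(semiannual - annual / 2) < 1e-12
print(annual, semiannual)
```

This is exactly the linear effect described above: on paper, more frequent testing always looks better.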
Why is that wrong? Testing has two negative sides:
- It stops production, which means it stops cash flow, which means it costs money there and then
- It is a source of errors itself, either through increased wear on the system or, more likely, through human error – for example, forgetting to put a system back into automatic mode after testing is done
Of course, this does not mean that we should not test – testing is absolutely necessary to make sure the safety function works. Also, over time we can use the results from testing of our functions in the SIS to check whether the assumed failure rates are correct. What it does mean is that we need to find the right balance between the good side and the bad side of testing. In practice, annual testing is often used, and this may well be a sweet spot for test frequency. Sometimes engineers are tempted to increase the test frequency to avoid trouble with PFD numbers after they have bought inferior equipment. People working on the installations tend to strongly oppose this – and rightly so. Buy good components, and test with a reasonable frequency to minimize the impact of the bad sides of testing.
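One way to think about this balance is as a cost trade-off: testing more often reduces the risk contribution from the PFD but increases the cost of lost production. The model and all numbers below are assumptions for illustration only (a hypothetical cost per test, a hypothetical monetized risk cost), not values from the text:

```python
import math

HOURS_PER_YEAR = 8760

def annual_cost(tau_hours: float, lambda_du: float,
                cost_per_test: float, risk_cost: float) -> float:
    """Assumed model: yearly cost = testing cost + monetized risk.

    Testing cost scales with tests per year; risk cost scales with PFD.
    """
    pfd = lambda_du * tau_hours / 2
    tests_per_year = HOURS_PER_YEAR / tau_hours
    return cost_per_test * tests_per_year + risk_cost * pfd

def optimal_tau(lambda_du: float, cost_per_test: float,
                risk_cost: float) -> float:
    """Closed-form minimum of annual_cost, from d(cost)/d(tau) = 0."""
    return math.sqrt(2 * cost_per_test * HOURS_PER_YEAR
                     / (risk_cost * lambda_du))

# Illustrative numbers: lambda_DU = 2e-6 /h, 100 000 per test,
# 10 million as the monetized risk weight on PFD.
tau_star = optimal_tau(2e-6, 100_000, 1e7)

# Sanity check: the optimum really is cheaper than testing twice as
# often or half as often.
assert annual_cost(tau_star, 2e-6, 100_000, 1e7) <= \
       annual_cost(tau_star / 2, 2e-6, 100_000, 1e7)
assert annual_cost(tau_star, 2e-6, 100_000, 1e7) <= \
       annual_cost(tau_star * 2, 2e-6, 100_000, 1e7)
print(round(tau_star))  # roughly annual, in line with common practice
```

With these (entirely assumed) inputs the optimal interval comes out at roughly a year, which matches the intuition that annual testing is often a reasonable compromise. A real analysis would of course use plant-specific costs and include the human-error contribution from the tests themselves.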