Proficiency Deficiency in ISO 17025

Audits and Proficiency Test Deficiencies

You do it.  I do it.  Everyone does it.  What is "it"?  We make mistakes.  In the technical and business world, quality systems include a regularly administered audit component that provides a means for finding these "opportunities for improvement."  The Modal Shop’s calibration product group manager recently attended a regional meeting of NCSLI.  Of particular interest and usefulness was an excellent presentation by Mr. Robert Knake of A2LA addressing the “Most Common Deficiencies” found during the ISO 17025 assessment audit process.  We thank Mr. Knake and the A2LA organization for approval to reprint his presentation.  Let’s look at the number 10 deficiency: infractions of ISO 17025 Section 5.9, Assuring the Quality of Test and Calibration Results.

This clause (ISO 17025 Section 5.9, Assuring the Quality of Test and Calibration Results) safeguards the technical quality of your calibrations in terms of uncertainty and repeatable performance.  Often this means running a trial test with a control sample and comparing the results to the expected traceable results.  In the sound and vibration community, this is the practice of a daily calibration of a verification sensor.  To expand the trial beyond the scope of your own people, systems, and environment, it also involves annual participation in an inter-laboratory comparison proficiency test, in which a control sensor is circulated “round-robin” style to be tested and compared among various independent laboratories of suitable capability and certification.  Both means of testing have the value of comparing data to expected results (day by day and also lab to lab) to examine variance beyond the expected.

The daily verification test is generally carried out each day, prior to using a specific calibration system, by recording the results and the deviation from the traceable value for a controlled verification sensor.  Note: daily verification should also be carried out before any trained but infrequent resource is allowed to calibrate (for example, cross-trained technicians or engineering resources responding to peak demand workloads).  This ensures that his or her procedural and calibration skills are conformant within the expected uncertainty distribution.  Plotting these verification results over time yields a classic control chart showing daily variation compared to upper and lower control limits.  It is advisable to review these plots weekly or monthly to see trends over time.  If a daily verification exceeds the control limits, a prescribed action plan needs to be initiated immediately.  Typically the cause is a simple mistake that is easily remedied, but the process and controls are imperative to ensure quality calibration output.
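The control chart described above can be sketched in a few lines of code.  This is a minimal illustration, not a prescribed method: the sensitivity readings are hypothetical, and it assumes simple mean ± 3-sigma control limits, whereas your quality procedure defines the actual limits.

```python
# Minimal sketch of a daily-verification control chart check.
# History values are hypothetical accelerometer sensitivities in mV/g.
from statistics import mean, stdev

def control_limits(history, k=3.0):
    """Lower/upper control limits as mean +/- k standard deviations."""
    m, s = mean(history), stdev(history)
    return m - k * s, m + k * s

def check_daily_verification(history, today):
    """Return (in_control, lcl, ucl) for today's verification reading."""
    lcl, ucl = control_limits(history)
    return lcl <= today <= ucl, lcl, ucl

# Hypothetical example data
history = [10.02, 10.01, 10.03, 9.99, 10.00, 10.02, 9.98, 10.01]
in_control, lcl, ucl = check_daily_verification(history, today=10.01)
```

A reading outside the (lcl, ucl) band would flag the day's verification for the prescribed action plan.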

First, all calibrations since the last valid verification are suspect until an immediate and thorough root cause analysis is completed.  It is therefore advisable to get a good handle on the all-important “root cause.”  Your in-house quality system will have a standard form and procedure for dealing with nonconformance, and an unacceptable variance in a calibration system’s output is clearly grounds for a quality concern.  While the form will direct you through immediate steps toward simple corrective actions, the key in this case is the root cause analysis.  Assuming “the easy problems” are ruled out first (power on, cables and mounting correct), you will need to go through steps like comparing a few other verification sensors.  We recommend keeping a set of three in-house verification sensors.  Checking units 2 and 3 of the set will show whether the unacceptable variance runs across all verification sensors or only a certain one; valid performance from the other two gives you the confidence to isolate the behavior to that one.  If all three show the problem, the root cause analysis moves to the system level.  Because you will likely be opening the measurement system, you will want help from your engineering team or vendor support for system troubleshooting.  This is often accomplished by observing performance while substituting system pieces on a component-by-component basis until the trouble is isolated.  Troubleshooting becomes significantly more complex at the code or firmware level, yet it is ultimately handled in a similar piecewise fashion.
Among calibration system users, do-it-yourselfers need to be able to verify down to the firmware/code level any time system code is changed, while purchasers of commercial systems can, thankfully, rely on the quality system and reputation for excellence of their chosen manufacturer and stick to electrical or mechanical component testing.  Once the root cause is identified, corrective actions at the system level and preventive actions at the procedural level can be implemented to ensure the problem is remedied.
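The three-sensor isolation step above follows a simple decision rule, sketched below.  The deviation values and the acceptance limit are hypothetical placeholders; your own procedure supplies the real criteria.

```python
# Hedged sketch of the three-sensor fault-isolation logic.
# deviations: percent deviation from the traceable value for each
# of the three in-house verification sensors (hypothetical numbers).

def isolate_fault(deviations, limit):
    """Classify an out-of-limits event as a sensor or system problem."""
    out = [abs(d) > limit for d in deviations]
    if all(out):
        return "system"  # all three sensors fail -> suspect the system
    if any(out):
        # only some sensors fail -> suspect those specific sensors
        bad = [str(i + 1) for i, flagged in enumerate(out) if flagged]
        return "sensor(s) " + ", ".join(bad)
    return "none"        # everything within limits

verdict = isolate_fault([0.2, 2.5, 0.1], limit=1.0)  # -> "sensor(s) 2"
```

If the verdict is "system", the component-by-component substitution described above begins.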

To ensure that your entire system performs adequately to the uncertainty norms of the industry, an annual interlaboratory comparison (ILC) for proficiency is common practice.  While a frequency is not specifically defined in the ISO 17025 standard, annual participation is often selected and is part of the A2LA R103 requirements document.  Your data and system results will be compared with the results of other organizations calibrating the same sensor, allowing you to see where you fall both in terms of calibration value and uncertainty.  The data, the variance, and the subsequent discussion among participants form a mechanism for checks and balances, dialog, and continuous improvement for the entire industry.  Participation in an ILC can be arranged through reputable calibration system vendors or a body like the National Association for Proficiency Testing; you can also contact your national metrology lab (see list of links on cover page) or an accreditation body like A2LA for a recommendation.
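One common way ILC results are scored (this scoring is an assumption here, not stated in the article, though it is widely used in proficiency testing, e.g. per ISO 13528) is the normalized error, or E_n, which combines the value difference and the reported uncertainties.  The numbers below are hypothetical.

```python
# Hedged sketch of a normalized-error (E_n) score for an ILC result.
# |E_n| <= 1 is generally considered satisfactory.
from math import sqrt

def e_n(lab_value, ref_value, lab_uncertainty, ref_uncertainty):
    """Normalized error; uncertainties are expanded (k=2) uncertainties."""
    return (lab_value - ref_value) / sqrt(lab_uncertainty**2 + ref_uncertainty**2)

# Hypothetical example: your lab measures 10.05 mV/g (U = 0.10 mV/g);
# the reference lab reports 10.00 mV/g (U = 0.05 mV/g).
score = e_n(10.05, 10.00, 0.10, 0.05)  # about 0.45 -> satisfactory
```

A score near or beyond ±1 would prompt the same kind of root cause investigation described earlier, now at the lab-to-lab level.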