by John P. Clark and A. Harper Shull – Westinghouse Savannah River Company
Aiken, South Carolina 29808
The information contained in this article was developed during the course of work under Contract No. DE-AC09- 89SR18035 with the U. S. Department of Energy. By acceptance of this paper, the publisher and/or recipient acknowledges the U. S. Government’s right to retain a nonexclusive, royalty-free license in and to any copyright covering this paper along with the right to reproduce, and to authorize others to reproduce all or part of the copyrighted paper WSRC-MS-96-0405.
Introduction
It is important to remember that measurements are estimates of parameters of interest and these estimates contain random and systematic errors. These errors decrease our certainty of knowing the “true value” of the material measured. Hence, all measurements have uncertainty. For any measurement to have value, the magnitude of those errors must be known. Dr. John Keenan Taylor, the dean of analytical laboratory quality assurance, describes the necessity of reporting uncertainty estimates with measurements as follows:
“Quantitative measurements are always estimates of the value of the measurand and involve some level of uncertainty. The measurements must be made so that the limits of uncertainty can be assigned within a stated probability. Without such an assignment, no logical use can be made of the data. To achieve this, measurements must be made in such a way as to provide statistical predictability.”[2]
For decades, measurement quality has been described by accuracy and precision, and knowledge of these parameters has been required and/or recommended by regulations and guides for good laboratory practices in analytical chemistry laboratories. Collectively, these parameters have been referred to as measurement uncertainty. Measurement control (MC) programs (MCPs) and measurement assurance programs (MAPs) are means for determining and controlling the accuracy and precision of a laboratory’s measurements. Regulations and guides often allow for interpretation of what is necessary to assure measurement quality and how it is accomplished. Consequently, great diversity often exists between different laboratories’ measurement quality control programs.
Even though accuracy and precision are the most common descriptors of measurement quality, they are often misused and/or misunderstood. Industrial measurement personnel, especially chemists, are far from a consensus on the use and application of these terms. Both between and within large laboratories there have been differing opinions on their use and definition. Consequently, when chemists quantify accuracy and precision, the values vary according to the assumptions made, and so do the values reported to customers.
For example, in describing the precision of an alpha counting method, the author observed that several different values were quoted as the relative standard deviation (RSD), as shown in Table 1. The chemist who developed the method stated in the procedure that the method had a 2% RSD. A monthly quality control report stated that the method had a 4% RSD. The annual report of methods’ biases and precisions stated that the method had a 6% RSD. Which is the correct estimate of precision for this method?
Source of Precision      %RSD
Method’s Procedure       2.0%
Monthly QC Report        4.0%
Yearly QC Report         6.0%

Table 1. Estimates of Method’s %RSD
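The spread among the Table 1 values reflects different data-collection conditions, not different formulas: each figure is the same %RSD statistic, computed from replicate QC results gathered under progressively broader conditions. A minimal Python sketch of the statistic itself (the replicate values below are made up for illustration, not the author’s data):

```python
import statistics

def percent_rsd(results):
    """Relative standard deviation (%RSD) of a set of replicate results."""
    return 100.0 * statistics.stdev(results) / statistics.mean(results)

# Hypothetical replicate alpha-count results (counts/min) from one analyst
# on one day -- the tightly controlled conditions described in the text.
same_day = [1012.0, 998.0, 1005.0, 1021.0, 990.0]
print(f"Repeatability %RSD: {percent_rsd(same_day):.1f}%")
```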
Another example of the difference between the accuracy and precision claimed and that demonstrated occurred in an inter-laboratory sample exchange program the author coordinated. Over 60 laboratories analyzed samples of the same materials several times. The individual laboratories usually demonstrated tight precision, but the between-laboratory variation in the accuracy of their measurements was an order of magnitude greater than the within-laboratory precision they demonstrated and stated. Such large variations are not uncommon; most participants in external sample exchange or performance testing programs have had a similar experience.
The reason in both cases is quite elementary. Laboratory measurements are the products of measurement systems composed of instruments, operators, reagents, and standards that are influenced by temperature, humidity, barometric pressure, etc. Variations in any or all of these parameters decrease the certainty of measuring the “true value.” These variations may occur randomly and/or systematically. The degree to which these variables are controlled determines the magnitude of the accuracy and precision estimates for a measurement. Insufficient knowledge of the sources of variation and their effects on the quality of measurements is often the root cause of incorrectly quantified accuracy and precision estimates.
Measurement Quality Regulations and National Standards
Regulatory agencies require analytical laboratories to have quality assurance programs to ensure the reliability of the measurements they produce. The Department of Energy, in DOE Order 5633.3B, requires nuclear material measurements to be covered by MCPs that provide accuracy and precision estimates. A comprehensive MCP should demonstrate the reliability of measurement data, quantify the performance of the measurement system, and ensure the measurements are suitable, or fit, for their intended use. This order uses the phrase “accuracy and precision” more than a dozen times, yet in the specific requirements section it states, as a minimum, for an “Analytical QC program . . . Data from routine measurements shall be analyzed statistically to determine and ensure accuracy and precision of the measurements.”[3] Much latitude is given to individual laboratories in “what” they do and “how” they go about providing assurance that their measurements meet performance requirements.
The Environmental Protection Agency (EPA) is much more prescriptive in the measurement controls it requires of analytical chemistry laboratories. It works from a policy of setting “Data Quality Objectives” and specifying the methods, MCs, and sample handling requirements for the laboratories that will analyze samples for its programs.
Industrial laboratories that support their plant’s established processes are usually not regulated as tightly as the EPA regulates contractor laboratories. Often, the customers of industrial chemistry laboratories are satisfied with the analytical services being provided, and they may value other quality attributes, such as cost, turnaround time, and back-up capability, more highly than accuracy and precision. However, if the laboratory’s measurements of process samples differ from process specifications, the laboratory is usually suspected of having problems before the process is. When this occurs, rework is required, including repeat measurements of current samples and/or additional samples. Occasionally, the customer will send samples to an outside laboratory. When laboratories do not agree, and they often do not, each laboratory must attempt to justify its results using information from its MCP. Statisticians may become involved to determine whether the differences are statistically significant, which requires knowledge of the measurement uncertainties associated with the reported values. This is where Dr. Taylor’s emphasis on making measurements so that the limits of uncertainty can be assigned within a stated probability becomes essential.
There are different levels of MCPs, and the data quality information they provide varies according to the level being used. It is important to define key measurement quality descriptors before the different levels of MCPs and other MCs are discussed.
Definitions
Eurachem’s Working Group on Uncertainty in Chemical Measurement prepared the publication “Quantifying Uncertainty in Analytical Measurement” in 1995. It uses International Organization for Standardization (ISO) measurement terminology to standardize the words used to describe measurement quality as it relates to analytical chemistry.[4]
Accuracy of measurement is closeness of the agreement between the result of a measurement and a true value of the analyte.
“Accuracy” is a qualitative concept.
The term “precision” should not be used for “accuracy.”
Precision is the closeness of agreement between independent test results obtained under stipulated conditions.
Precision depends only on the distribution of random errors and does not relate to the true value or specified value.
The measure of precision is usually expressed in terms of imprecision and computed as a standard deviation of the test results. Less precision is reflected by a larger standard deviation.
Quantitative measures of precision depend critically on the stipulated conditions. Repeatability and reproducibility conditions are particular sets of extreme stipulated conditions (more will be said about these measures of precision below).
The Eurachem team that put this document together chose their words carefully in their attempt to standardize the terminology used to describe measurement quality. In the notes listed after the definitions, the authors stress that the term precision should not be used to describe accuracy; doing so would be incorrect if a series of measurements were biased and the bias was not removed. Fig. 1 gives a graphic representation of the distribution of a series of results relative to the reference value of a standard. The average value is significantly biased, and the individual results have a wide distribution.
Fig. 1. Distribution of a Series of Measurements
To convey information to a customer about the quality of a single measurement or an average, knowledge of both systematic effects (bias) and precision is needed. This information must also be quantified to be of real value. Therefore, it would be useful if a single term were available to combine this information. The term most commonly used is uncertainty. It is defined as follows in the same Eurachem publication.
Uncertainty of a measurement is a parameter associated with the result of a measurement that characterizes the dispersion of the values that could reasonably be attributed to the assay.
The parameter may be, for example a standard deviation (or a given multiple of it), or the width of a confidence interval.
Uncertainty of measurement comprises many components. Some of these components may be evaluated from the statistical distribution of the results of a series of measurements and can be characterized by experimental standard deviations. The other components, which can also be characterized by standard deviations, are evaluated from assumed probability distributions based on experience or other information.
It is understood that the result of the measurement is the best estimate of the value of the assay and that all components of uncertainty, including those arising from systematic effects, such as components associated with corrections and reference standards contribute to the dispersion.
The statistical term used to describe the phenomenon of random variation is variance; it is quantified from the deviations of individual measurements about the average, or mean, of a series of measurements. Hence, the standard deviation is the statistic used to quantify the precision of measurements in the analytical chemistry laboratory. The systematic uncertainty is a function of the difference between the average of several measurements and the true or reference value. This difference is the systematic variation that affects all of the measurements in the same direction and is usually referred to as “bias.” Fig. 2 shows the bias and precision in a graphic representation of a series of pH measurements. If the bias is not removed from the results reported to the customer, it will contribute to the uncertainty of the measurements. Therefore, the random variation, quantified at 3 standard deviations (s), and the systematic variation, expressed as bias, are added to give an uncertainty estimate of 0.11 pH units. This paper will not go into the actual computations; the Eurachem publication cited above is an excellent resource for chemists seeking guidance on quantifying the uncertainties of their assays.
Fig. 2. Example of pH Measurement Uncertainty
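As a rough numerical sketch of the Fig. 2 example (the individual bias and standard deviation values below are assumptions chosen only to reproduce the 0.11 pH unit total quoted in the text):

```python
# Assumed component values for illustration; only the 0.11 total comes from the text.
s = 0.03        # standard deviation of repeated pH measurements (assumed)
bias = 0.02     # mean measured pH minus reference pH (assumed)

random_part = 3 * s                       # random variation quantified at 3 s
total_uncertainty = random_part + bias    # bias left in the reported results
print(f"Total uncertainty: {total_uncertainty:.2f} pH units")  # -> 0.11
```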
In the nuclear community, analytical laboratories’ bias and precision reports provide statisticians with measurement data to propagate limits of error (LOE) for nuclear material inventory differences (IDs). When propagated LOEs are compared to LOEs based on historic IDs, they are always smaller, because measurement personnel have difficulty identifying all sources of uncertainty in their measurement systems. A major source of uncertainty in analytical chemistry measurement systems that is often overlooked is the uncertainty of the standards. Fig. 3 illustrates the total maximum uncertainty for the pH example; the uncertainty of the standards, as well as the systematic error, contributes to the uncertainty of the method. This is done because the reference value of a particular standard could be biased by as much as 0.01 pH units, giving a total uncertainty of 0.12 pH units. If the chemist were merely trying to estimate the average uncertainty of the method over several different standards, the root sum square (RSS) technique would be appropriate.
Fig. 3. Total Uncertainty Includes Standard’s Uncertainty
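Extending the same hedged sketch, the 0.01 pH unit allowance for the standard’s reference value is added linearly to reach the 0.12 pH unit maximum shown in Fig. 3; if only an average uncertainty over several standards is wanted, the random components can instead be combined by RSS. One plausible reading of that calculation:

```python
import math

s_method = 0.03     # method standard deviation (assumed, as in the Fig. 2 sketch)
bias = 0.02         # method bias (assumed)
u_standard = 0.01   # possible bias in a standard's reference value (from the text)

# Maximum total uncertainty: every component added linearly
maximum = 3 * s_method + bias + u_standard                       # 0.12 pH units

# Average uncertainty over several standards: random parts combined by RSS
average = math.sqrt((3 * s_method) ** 2 + u_standard ** 2) + bias

print(f"Maximum: {maximum:.2f}  Average (RSS): {average:.3f} pH units")
```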
Measurement Control Programs
American National Standard N15.51-1990 lists the objectives of MCPs: “The goal of any measurement control program is to document and quantify the performance of each analytical measurement system and to provide for detection and correction of adverse changes.” It is important that the level of performance required and the consequences of using faulty data are known. Lastly, MCPs should provide data for establishing the uncertainty (or limit of error) associated with each measurement.[5] This is one of the national standards dealing with measurement quality published by the American National Standards Institute. The standard was prepared by the Institute of Nuclear Materials Management and is titled “Measurement Control Program — Nuclear Materials — Analytical Chemistry Laboratory.” It stresses the importance of developing models for both the physical measurement process and the error structure. The chemist needs to understand both the physical and error models to develop an adequate MCP. It is also essential to base the MCP on an independent, matrix-matched (verisimilitude) standard having known uncertainties.
It is imperative that chemists understand the physical measurement model and the error model associated with a specific method. For the chemist to adequately quantify the quality of a measurement process, the principles and limitations of the physical model must be understood. To quantify the uncertainty of a measurement, the error model must be understood. Developers or manufacturers of the common laboratory instruments usually define the parameters within which a measurement system functions. Hence, the chemist must consider particular measurement system parameters in developing specific MC procedures for that system.
A Co-Operation on International Traceability in Analytical Chemistry (CITAC) working group has prepared CITAC Guide 1, “International Guide to Quality in Analytical Chemistry – An Aid to Accreditation.”[6] It is also a good resource for information on measurements and MC. Fig. 4 is a facsimile of a flow chart on page 18 of that publication. It shows a typical analytical chemistry process and illustrates the role of calibration in relation to quality control. It is a graphical representation of modeling the physical measurement process and is similar to the recommendations in N15.51 above.
It is important for the chemist to set up the MCP using the same measurement model that is used for samples. The MC standards shown on the right side of the chart must go through all of the pre-treatment steps the samples go through, so that they capture the random and systematic variations the samples experience. This is the key to developing reliable estimates of the uncertainty of a laboratory measurement. However, not all MCPs follow this principle, as will be seen in the four levels of MCPs discussed later.
Fig. 4. Measurement System Flow Chart
Measurement Control Goals
Variation is the most important concept that must be mastered to fully understand measurements and their uncertainty. Recall that there are two basic types. The first is random variation (from common causes), produced by making measurements at different times, under different conditions, and with other varying parameters. Random variation is usually normally distributed about its mean, is estimated by the standard deviation statistic, and is often referred to as measurement precision or repeatability. The second type is systematic variation (from special causes), which is generally a fixed effect due to a distinct cause and is estimated by the difference between an average value and a standard’s value. It is often referred to as bias or systematic uncertainty. Mastery of these concepts provides the foundation for thoroughly understanding measurement uncertainty.
As chemists gain knowledge of the magnitude of each source of variation’s effect on the measurement process, they can invest appropriate time and effort in controlling and reducing the major sources of variation and consequently reduce the uncertainty of their measurements.
Types of Controls Recommended for MC Programs
Different MCs are often recommended in guides or company QA manuals; examples are listed in Table 2. Each may be able to provide estimates of a method’s accuracy or precision. Table 2 delineates which measurement qualities each MC can determine and under what conditions. Controls like these are often requested by customers who want assurance that they are obtaining reliable data from the laboratory. It must be remembered that these controls have their limitations.
Measurement Control Technique       Accuracy   Precision
Bench or check standards            Yes        Yes
Blind standards                     Yes        Yes
Split samples                       No         Yes*
Replicate measurements              No         Yes
External sources or laboratories    Yes        Yes*
Quality control charts              No**       Yes
Spike of known concentration        Yes        Yes*

* Yes, if several analyses are run on different samples over time.
** No for QC charts that compare current data with historic data. (Yes, accuracy can be estimated if results are plotted against a known value.)

Table 2. Techniques for Determining Measurement Accuracy and/or Precision
Four Levels of MCPs
Level 1 is a simple tolerance approach that involves a “go, no-go” decision. Typically, logbooks are used with tolerance limits that were not established by data analysis. If the analyst’s measurement of a check standard falls within the limits, sample measurements proceed. Such programs usually have no requirements for establishing measurement uncertainties.
Level 2 typically uses control charting programs based on statistical process control (SPC) methods for data analysis. These programs do not include measurement assurance methods and thus do not fully determine or control measurement uncertainty; they rarely include the uncertainty of standards. This approach is most commonly used when an acceptable batch of product is used as a control standard and current measurements are plotted against a historical “target” value, as shown in Fig. 5. Variation in the measurement process is monitored, and the data are used to estimate measurement system precision. Control charts based on laboratory measurements of process samples capture both the process and measurement system variations. A minimal sketch of how such historically based limits are computed follows Fig. 5.
Fig. 5. Control Chart Monitors Measurement Variations
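In a Level 2 program, the “target” and the limits come entirely from historical measurements of the control material, and current results are judged against them; no reference value or standard uncertainty enters the calculation, which is why this level cannot give a full uncertainty estimate. The data below are illustrative only:

```python
import statistics

# Historical measurements of the control batch (illustrative values)
history = [10.12, 10.08, 10.15, 10.10, 10.05, 10.11, 10.09, 10.14, 10.07, 10.13]

target = statistics.mean(history)            # historical "target" value
s = statistics.stdev(history)

warning = (target - 2 * s, target + 2 * s)   # 2-sigma warning limits
alarm = (target - 3 * s, target + 3 * s)     # 3-sigma alarm limits

def classify(result):
    """Compare a current control measurement against the historical limits."""
    if not alarm[0] <= result <= alarm[1]:
        return "out of control"
    if not warning[0] <= result <= warning[1]:
        return "warning"
    return "in control"

print(classify(10.26))
```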
Level 3 MCPs often duplicate the measurement process and evaluate the measurements of a quality control (QC) standard against limits about the standard’s reference value. They are typically used to determine measurement system bias and precision, and the data collected can be used to estimate the measurement process uncertainty. They often use warning limits at the 2 s level and alarm limits at the 3 s level. Many are computer-based, “real-time” MCPs that “lock out” the method if a QC result falls outside the control limits, until another QC standard is successfully analyzed. Some have built-in diagnostics to detect changes in bias and precision and to test for significant differences between standards.
Most of these programs use some of the control chart rules for detecting abnormal variation in the measurement process.[7] Some of these rules are shown in Table 3; a sketch of how such rules might be checked in software follows the table. Unless a method has stable variation and is unbiased, the chemist will spend a great deal of time investigating violations of many of these rules. The rules work better for monitoring a stable industrial process than an unstable analytical chemistry method. They are also more applicable to the Level 2 and Level 4 MCPs that plot control standard measurements about the historical average, or mean, measurement value.
Rule #   Rule
1        1 point above +3 sigma
2        2 of 3 points above +2 sigma
3        4 of 5 points above +1 sigma
4        8 consecutive points above center line
5        1 point below –3 sigma
6        2 of 3 points below –2 sigma
7        4 of 5 points below –1 sigma
8        8 consecutive points below center line
9        15 points inside ±1 sigma
10       8 points outside ±1 sigma

Table 3. Rules for Out-of-Control Conditions
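Rules such as those in Table 3 are straightforward to automate. The sketch below (generic Python, not the JTIPMAP™ implementation) checks two of them, Rule 1 and Rule 4, against control-chart points expressed as signed multiples of sigma from the center line:

```python
def rule_1(z_scores):
    """Rule 1: any single point above +3 sigma."""
    return any(z > 3 for z in z_scores)

def rule_4(z_scores):
    """Rule 4: eight consecutive points above the center line."""
    run = 0
    for z in z_scores:
        run = run + 1 if z > 0 else 0
        if run >= 8:
            return True
    return False

# Illustrative sequence of standardized QC results (in sigma units)
points = [0.4, 1.1, 0.2, 0.9, 0.5, 0.8, 1.3, 0.6, 0.7, 0.1]
print(rule_1(points), rule_4(points))   # False True
```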
Level 4 MCPs are called Process Measurement Assurance Programs (PMAPs); they determine and control measurement uncertainty by making control measurements with control standards having certified values. A PMAP duplicates the measurement process (method), as shown in Fig. 4 above, to evaluate method and environmental effects on measurement uncertainty. PMAPs require independent standards so that systematic uncertainty due to changes in the calibration and reference standards, as well as the control standard, is detected. Calibration intervals can be based on the analysis of historical data. A PMAP can be used to determine whether a laboratory measurement system is capable of meeting customer tolerances. In determining the total uncertainty of the measurement process, a PMAP includes the uncertainty of the standards, using the techniques discussed for Fig. 3 above. A good PMAP provides for analysis of the data for the current measurement uncertainty and has the testing capability to detect changes in random and systematic variation, thus determining when to adjust or calibrate.
Example of a PMAP MCP
Commercial PMAP software was evaluated in a demonstration program for the Department of Energy and reported in Westinghouse Savannah River Company document WSRC-MS-96-0032.[8] Some screen captures from the JTIPMAP™ software are shown below as examples of how control charts are set up and MC data are evaluated to maximize the information available in an MCP. This software, like any commercially available software, has its strengths and weaknesses. The Savannah River Site (SRS) has several types of MCPs and uses commercial and custom-developed software in its various laboratories. None is endorsed or recommended as a standard for this or any other DOE site. Many of the PMAP principles are applied in the SRS MCPs.
Several screens from the software are shown below, depicting how a JTIPMAP™ control chart is set up, data are entered, results are plotted, data sets are statistically evaluated, and a total uncertainty estimate is generated. Screens from the evaluation of a thenoyltrifluoroacetone (TTA) extraction and alpha counting method are shown. This PMAP was set up using existing QC data that were normalized, because several different standards were used during the data collection period. Optimally, only one standard is used as the non-varying element of the MCP. The TTA alpha standards had large uncertainties that contributed significantly to the total uncertainty.
Fig. 6. Card 1–Control Set-Up (Main)
There were 750 quality control standard (QCS) analysis records from January 1, 1995, through March 31, 1996, available for evaluation. The data were downloaded from a spreadsheet after the results from several different standards were normalized. Once a PMAP is established, MC data are entered each time an analyst uses the method to analyze unknown samples for a customer. The results of the QCS analyses are evaluated against up to three sets of control limits in the PMAP, which are shown in Fig. 11 below. These computer-based control charts are dynamic: they are updated with each additional data point, and the current means of the QCS data are recalculated with each new result entered into the computer.
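The normalization mentioned above simply means dividing each measured QCS result by the reference value of the particular standard that was analyzed, so results from different standards can be pooled on one chart centered near 1.0. A minimal sketch (the record layout and values are assumptions, not the actual SRS data format):

```python
# Each record pairs a measured QCS result with the reference value of the
# particular standard analyzed (values are illustrative).
records = [
    {"measured": 51.2, "reference": 50.0},
    {"measured": 99.1, "reference": 100.0},
    {"measured": 24.6, "reference": 25.0},
]

normalized = [r["measured"] / r["reference"] for r in records]
print([round(v, 3) for v in normalized])   # [1.024, 0.991, 0.984]
```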
The software is user friendly and prompts for basic information when setting up the initial PMAP for a laboratory method. Fig. 6 shows the first of four cards to be completed. This card prompts for information on the method and measurement, specifies the number of significant figures, and indicates the units. QC measurements can be entered as actual values, as a ratio of measured to reference values, or as a deviation from the reference value. Only the number of standard deviations (s) to be used for the reference limits and the graph scaling are required to complete the setup.
Fig. 7 shows Card 2, which requires the following information for the control standard: the reference value (1 if QCS assay data are normalized), the systematic uncertainty of the standard(s) (drift with use and time), the variability expressed as the number of standard deviations used in the control charts (random error or precision), and the identity of a “check standard” that may be used off-line as an independent reference standard. The values obtained on the check standard may be tracked on a separate PMAP.
Fig. 7. Card 2- Set-Up (Standards Info.)
Card 3, shown in Fig. 8, contains three sets of limits and the scaling information for the PMAP control chart. Reference limits are required (they are normally set by the best technicians, who fully understand the effects of variables on the method’s performance); production limits are optional (all of the other analysts make production measurements under routine lab conditions); and tolerance limits represent the maximum permissible error that can be tolerated by the customer.
Fig. 8. Card 3–Control Set-Up (Limits)
Fig. 9 shows Card 4, which is used to input optional information for the graphs of the QCS data. Each control chart will contain up to 40 QCS measurements. Once this card is completed, all of its information will be included on every chart. This information may include the cognizant chemist in charge of the method and the person to be contacted if the method goes out of control. The method ID and related information should be included here, so that every control chart printed for hard-copy storage as a permanent record carries all the necessary information.
Fig. 9. Card 4 — Control Set-Up (Other)
Fig. 10 shows the control chart for the TTA method; it is an example of what the analyst sees when using the PMAP program. Results of individual analyses can be classified as values generated by either production or reference personnel (see the key in Fig. 11 below). Each classification has its own control limits, and the software continuously updates the current mean values and plots them on the on-screen chart along with the reference value of the standard. This allows the analyst to see the magnitude of the current average bias relative to the reference value. The control limits are plotted about the means of the reference and production QC standards run during the previous calibration interval. If a method is biased, the control chart still has symmetrical limits about the historical means, rather than the reference value, which better lend themselves to the control rules listed in Table 3 above. Off-the-chart data can be read in the data box shown in Fig. 12 after clicking on the QCS number, as shown in Fig. 10.
Fig. 10. TTA Alpha Spectrometry Chart
• Two types of QC data: reference (required for limits) and production (optional)
• Two classes of data: rejected data and data included in calculations
• Up to three sets of limits
• Up to two types of means, plus the reference value
• Out-of-control data are marked

Fig. 11. Key to PMAP Control Chart Elements
Fig. 12. Single Data Point Entry Card
Fig. 13 shows the “data entry grid.” It can be used to input data as they are generated or to receive data imported from spreadsheets. An Excel™ spreadsheet must be saved as a comma-delimited document with a “csv” suffix. All data can be imported as reference data; if that is done, only the mean line for the reference data is shown. When in the control chart mode, the data grid can be viewed by clicking the “grid icon” in the menu bar. Single QC measurements can also be entered using the data entry form shown in Fig. 12, which is also pulled down from the menu. The “notes” section should include information on why a measurement was out of control. This card also allows the analyst to reject a data point from the calculations while still showing it on the control chart. Many levels of MCPs fail to capture the many attempts it may take to correct a measurement system when a QCS indicates it is out of control; this feature is very valuable if analysts use it.
Fig. 13. Grid Form For Entering QC Data
Fig. 14 is another control chart; it clearly shows the method is biased high for the reference data. The highest center line is the average of the reference data collected since the last calibration. The lowest, lighter center line is the reference value. The screen displays only the last 20 data points, while printouts of the QC data display 40 points per page. For QC results that are off the chart, the control bar at the bottom of the chart will tab across the chart and display the values in the window in the control bar. The 30th data point has been selected on the control chart, and a value of 0.682 is shown in the window. It is excluded from the computation of the mean of the reference QC measurements.
Fig. 14. Control Chart of TTA QCs Data
The most useful feature of the JTIPMAP™ MCP is shown in Fig. 15. Here the total uncertainty for the method is calculated from the standard deviation and mean of the current reference QC data and the uncertainty of the standard, and it is centered about the mean value. All 20 of the reference QC data points collected were used to determine the current method precision of 0.0344, or 3.44% RSD. The bias is 1.47%. The random uncertainties of the method and the standard were combined by the root-sum-of-squares method, and the standard’s systematic variance was added linearly to the combined random errors. Note that the uncertainties are expressed as +0.1232 and –0.09381; these values are equivalent to +12.32% and –9.38% uncertainties. If bias corrections are not made to the reported results, the larger uncertainty estimate of ±12.32% must be reported to provide a total uncertainty estimate at the 3 standard deviation level. The calibration function also reports the bias and precision statistics for the production QCS. Note in Fig. 15 that the production bias of 0.969% is smaller than the reference bias of 1.47%; however, the production standard deviation of 4.82% is much larger than the 3.44% value for the reference QCS. If the production values were used to calculate the total uncertainty, it would be close to ±18%.
Fig. 15. Form for Uncertainty Calculations
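The structure of the Fig. 15 calculation can be sketched as follows: the method’s random component and the standard’s random component are combined by root sum of squares, the standard’s systematic component is added linearly, and the resulting interval is centered on the biased mean, which is why the limits are asymmetric about the reference value (+12.32% versus –9.38% in the figure). In the sketch below the method statistics are the ones quoted in the text, but the standard’s uncertainty components are placeholders, not the values used by the software:

```python
import math

rsd_method = 0.0344   # current method precision from the reference QC data (from text)
bias = 0.0147         # current method bias (from text)

# Placeholder values for the standard's uncertainty components (assumed)
rsd_standard = 0.010  # standard's random uncertainty
sys_standard = 0.005  # standard's systematic uncertainty

k = 3                 # coverage at 3 standard deviations
u = k * math.sqrt(rsd_method**2 + rsd_standard**2) + sys_standard

# Limits are placed about the biased mean, so they are asymmetric about the
# reference value: the bias adds on one side and subtracts on the other.
upper = bias + u
lower = u - bias
print(f"+{upper:.4f} / -{lower:.4f}")
```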
Fig. 16 shows the screen for using t-tests on biases and F-tests on variances (precision) to determine the significance of changes in these parameters between calibration periods. The software allows great flexibility in selecting any number of combinations of data summaries. This screen shows that the difference between the reference standard deviations of two separate data sets is statistically significant. The software calculates the table values using the appropriate degrees of freedom for each data set and can make comparisons for both the reference and production QC data. By using the same MCP software, a laboratory is able to achieve consistency in its estimates of uncertainty for the analytical work it performs.
Fig. 16. Bias and Precision Testing
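The significance tests in Fig. 16 correspond to standard two-sample statistics: an F test comparing the variances (precision) of two calibration periods’ reference QC data, and a t test comparing their means (biases). A minimal sketch with SciPy, using illustrative data rather than the software’s internal calculations:

```python
import numpy as np
from scipy import stats

# Normalized reference QC results from two calibration periods (illustrative)
period_1 = np.array([1.012, 0.987, 1.005, 1.020, 0.995, 1.008, 0.991, 1.015])
period_2 = np.array([1.030, 1.062, 0.978, 1.044, 0.955, 1.071, 0.982, 1.049])

# F test on the ratio of sample variances (two-sided p value)
f = period_2.var(ddof=1) / period_1.var(ddof=1)
dfn, dfd = len(period_2) - 1, len(period_1) - 1
p_var = 2 * min(stats.f.sf(f, dfn, dfd), stats.f.cdf(f, dfn, dfd))

# Welch t test on the means (biases)
t, p_bias = stats.ttest_ind(period_1, period_2, equal_var=False)

print(f"F = {f:.2f} (p = {p_var:.3f});  t = {t:.2f} (p = {p_bias:.3f})")
```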
PMAP Features and Benefits
The discussion of the JTIPMAP™ software has been cursory, but it is adequate to demonstrate the differences between the levels of MCPs and to point out that the major value of a PMAP is in determining the total uncertainty of the analytical chemistry measurement system. Time and space do not permit discussing some of the more useful features of the software, such as the tolerance testing and calibration history analysis features; additional information can be obtained from reference 8 below. The essential features of a PMAP include the following:
Requires independent calibration and matrix matched control standards, so drifts or biases can easily be determined
Includes “tolerance limits” to determine if the measurement process is capable and continues to meet customer requirements
Goes beyond statistical process control (SPC) by considering all uncertainties (e.g. standards and calibration) in calculating an overall estimate of uncertainty
Installed at calibration of the measurement system to assure calibration is maintained
Determines when instrument or standards need adjusting (avoids tinkering)
Automatically plots current means of QCs against three sets of limits for convenient visual evaluation of method performance
Determines and controls measurement process uncertainties
Provides analysis of measurement uncertainty histories (e.g., t [bias] and F [precision] statistical tests)
Automatically determines calibration uncertainty from the random and systematic errors of calibrations, standards, and historical data
Can evaluate analyst performance against an established standard
Conclusions & Summary
Measurements are estimates and have uncertainty. Dr. Taylor provides a warning: without knowledge of a measurement’s uncertainty, the measurement cannot logically be used. Therefore, it is important for the chemist to develop MCPs in a manner that captures realistic estimates of the total uncertainty associated with each analytical chemistry method. The uncertainty estimates should capture the normal variations that surround the generation of measurement values with an analytical method, including the equipment, standards, measurement personnel, reagents, environmental conditions, etc. If bias corrections are not made, then the uncertainty estimates should be expanded to include the systematic error in the measurements made with that method. Lastly, the uncertainty of the standard must be included in the uncertainty estimate. Fig. 3 represents the optimum model for determining total measurement uncertainty. If the laboratory is interested in the average uncertainty for a method, then the systematic errors can be combined with the random errors using the RSS method.
Earlier, the author noted that three different values for the RSD of a method had been reported at different times, all based on QC data. The estimates differed because they included different sources of variation. The chemist had one analyst make a series of measurements on one day, using the same equipment, reagents, and standards, to determine the precision of the method. This form of precision is called repeatability; with most of the variables controlled, it was possible to obtain a precision, expressed as an RSD, of 2%. When the method was used for a month by shift personnel, the precision degraded to an RSD of 4%, due to different analysts and varying environmental conditions. After 12 months of QCS measurements had been collected and evaluated, the precision estimate expanded to an RSD of 6%, due to additional variation from reagents, new analysts, different equipment, and the extremes of summer and winter temperatures. These last two estimates of precision are examples of reproducibility. The 6% figure implies that variation within the laboratory might cause future measurements to deviate from the current measurement by up to ±6% about 68% of the time, that is, at the one standard deviation level.
The last two estimates of precision were generated with a Level 3 MCP that had been established to determine bias and precision. To follow Dr. Taylor’s recommendation of reporting an uncertainty estimate with a stated probability, the 6% would have to be multiplied by a factor corresponding to the desired level of confidence. Since the PMAP calculations are done at 3 standard deviations, the corresponding expanded uncertainty would be 18% (the 6% RSD multiplied by 3). This value reflects only the random variation of the method during the year the data were collected; it does not include the uncertainty of the standards or address any systematic variation. The TTA method’s uncertainty estimate based on three months of data, as shown in Fig. 15, is 12.1%. That estimate does include the uncertainty of the standard and assumes that no bias corrections are made to data reported to the customer. Regardless of which approach is used, the chemist should know what assumptions have been made in generating the uncertainty estimates associated with the measurements.
In summary, there are different levels of MCPs. It is important to implement an MCP that provides assurance that measurements are fit for their intended use and that produces adequate estimates of the total uncertainty associated with laboratory measurements, so the customer can use the data logically.
References
1. Clark, John P. and Shull, A. Harper, Westinghouse Savannah River Co. internal record MS-96-0405, 1996.
2. Taylor, John Keenan, Quality Assurance of Chemical Measurements, Lewis Publishers, 1987.
3. U.S. Department of Energy, “Control and Accountability of Nuclear Materials,” Order 5633.3B, September 7, 1994.
4. “Quantifying Uncertainty in Analytical Measurement,” Appendix B, Definitions, ISBN 0-948926-08-2, Eurachem English publication, 1995.
5. ANSI N15.51-1990, “For Nuclear Materials Management — Measurement Control Program — Nuclear Materials — Analytical Chemistry Laboratory,” American National Standards Institute, New York, NY, October 1990.
6. CITAC Guide 1, “International Guide to Quality in Analytical Chemistry – An Aid to Accreditation,” ISBN 0-948926-09-0, English first edition, 1995.
7. Western Electric Company, Inc., Statistical Quality Control Handbook, Delmar Printing Co., Charlotte, NC, 1989.
8. Clark, J. P. and Shull, A. H., “Uncertainty Demonstration Program on MC&A Measurement Systems,” Proceedings of the INMM 36th Annual Meeting, Naples, FL, 1996.