by Jerry Everhart – Vice President of Quality and Measurement Technologies
JTI Systems, Inc. Rio Rancho, New Mexico
Introduction
Uncertainty determination of mass measurement is required both for the calibration of mass standards (weight calibration) and for products that are quantified by mass measurements. An understanding of the uncertainty components of mass measurements, and of the methods of data collection and analysis, is often more critical than the frequently discussed and published equations for determining uncertainty. This understanding is crucial to evaluating mass measurement results against tolerance classifications or product specifications.
Calibrating and adjusting mass standards without this understanding produces less accurate standards. Manufacturing products by adjusting mass quantities can deliver products that are outside design specifications if the uncertainty components are unknown. The question of when to adjust is answered by understanding and using the analysis methods.
Components of Mass Uncertainties
Mass measurements are affected by several factors in the measurement process, including:
the balance or mass comparator's ability to repeat measurements;
environmental effects on the ability of the balance or mass comparator to make measurements;
the procedure used to obtain the measurement results;
the operator's technique and skill in making weighings;
the stability of the standards and weighing equipment;
the uncertainties of the standards used;
the ability to determine the density of the air being displaced;
the ability to determine the density or volume of the standard and of the item being weighed;
and the buoyancy effect of the displaced air on the volume of the standard and the volume of the item being weighed.
The factors that affect mass measurements and the appropriate control requirements for various levels of accuracy may be reviewed in the section "Technical Criteria for Mass Calibrations," NIST Handbook 150. [1] The factors affecting mass uncertainty can be categorized as:
Measurement Process Variation
Systematic Bias or Change in Bias
Uncertainty in the Mass Standards Used
Measurement Process Variation is the measure of the repeatability of the total measurement process to determine mass values at a designated load and is most often described using the standard deviation equation. This analysis assumes a normal distribution curve and does not allow for changes or drift in the mean values (bias).
Standard Deviation (s) = √( Σ(x − x̄)² / (n − 1) )

where x = individual measurements
n = number of measurements
x̄ = mean value
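As a sketch in Python of the standard deviation calculation above (the function name and the sample readings are illustrative, not from the paper):

```python
import math

def std_dev(measurements):
    """Sample standard deviation of repeated measurements (n - 1 divisor)."""
    n = len(measurements)
    mean = sum(measurements) / n
    return math.sqrt(sum((x - mean) ** 2 for x in measurements) / (n - 1))

# Illustrative: ten repeated readings of a nominal 100 g load, in grams
readings = [100.0021, 100.0019, 100.0023, 100.0020, 100.0018,
            100.0022, 100.0021, 100.0019, 100.0024, 100.0020]
print(std_dev(readings))  # repeatability of the weighing process, in grams
```

Note that this is the repeatability of the total process at one load; it does not account for drift in the mean (bias), which is treated separately below.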
In the calibration of standard weights, procedures are selected to compensate for balance drift and changes in value per division (sensitivity), thus minimizing possible bias in the assigned mass values. These procedures, such as the Double Substitution and Three Ones designs, are described in the NIST publication Handbook for the Quality Assurance of Metrological Measurements, Handbook 145.[2] Calibration of mass standards at the highest level of accuracy is performed using weighing designs and the NIST Mass Measurement Assurance Program.[3] The need for these procedures is determined by the level of uncertainty desired.
The procedures categorize uncertainty as Type A and Type B.[4] Type A uncertainty is a statistical evaluation of the measurement process and the standards used in the calibration process, considered to be measurement process variation. Type B uncertainty is based on scientific judgment and calculations that cannot be directly verified by measurements. An example of Type B is the possible uncertainty due to air buoyancy as a result of an undetermined density of the standard mass being calibrated. Both Type A and Type B are treated as variation, and the resulting uncertainty is expressed as the estimated standard deviation, calculated as the square root of the sum of squares.
Measurement Process Variation
Uc = √(Us² + Sp² + Ub²)

where Us = random component of the standards used (Type A standard deviation)
Sp = standard deviation of the measurement process (Type A, from control charts)
Ub = Type B estimated standard deviation (scientific judgment)
The value for Uc is the combined uncertainty of the measurement process. The combined uncertainty is multiplied by a factor of 2 to obtain the expanded uncertainty which provides an approximate 95 % confidence level.
Expanded Uncertainty U = 2(Uc)
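The combined and expanded uncertainty calculations can be sketched in Python as follows (the function names and the milligram values are illustrative assumptions, not from the paper):

```python
import math

def combined_uncertainty(us, sp, ub):
    """Root-sum-of-squares of the standards' random component (Us),
    the process standard deviation (Sp), and the Type B estimate (Ub)."""
    return math.sqrt(us ** 2 + sp ** 2 + ub ** 2)

def expanded_uncertainty(uc, k=2):
    """Coverage factor k = 2 gives an approximate 95 % confidence level."""
    return k * uc

# Illustrative values in milligrams
uc = combined_uncertainty(us=0.010, sp=0.020, ub=0.005)
U = expanded_uncertainty(uc)
print(uc, U)
```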
The Measurement Process Variation formula conforms to the International Committee for Weights and Measures (CIPM) recommendations for the expression of uncertainties. These uncertainties and assigned calibration values reflect the unit under calibration at the time of calibration. This method assumes that all components of uncertainty are random and that all biases are corrected or reported, as in primary-level calibration laboratories using drift-free procedures, with no compensation made for instability due to use and time.
Most industrial and defense laboratories must assign calibration intervals and compensate At Time of Test uncertainties for use and time. In some applications of production weighing processes, it is not practical to report biases or correction factors, thus the uncertainty statements include bias and compensation for drift. Systematic bias or change in bias can result from lack of drift compensation in the calibration or measurement process or instability of the item or standard due to use or time. In some cases, it is not practical to compensate for the bias or systematic error, for example, a balance non-linearity or corner load error in a direct weighing process.
In product manufacturing, the bias is often due to the instability of the part being loaded with a net mass. This instability of the tare weight can occur when the tare container or part is hygroscopic and changes with the environment. A bias may also be due to an excessive volume difference between the item being weighed and the density basis of the balance (8.0 g/cm³) or the standards used.
The problem with systematic bias errors is not how to apply them to measurement uncertainty statements, but how to find them and determine their size. Significant systematic errors that are undefined can destroy your level of confidence in the measurement results. Process Measurement Assurance Programs [4] are useful for duplicating the measurement process to determine and control the variability and bias using certified check (control) standards and control charts. It is important to realize that these uncorrected errors produce much larger increases in uncertainty than random variations, which combine by root-sum-of-squares.
A change in bias can result from change in the calibrated value due to time or use. Mass standards can change value from use, usually seen as a decrease in mass. An increase in mass values can occur when the mass standards have been contaminated by their surroundings. In direct production measurement processes, the change is most often due to balance calibration or standard weight change. Well designed measurement assurance programs will detect these changes. The prediction of change and compensation of both the uncertainty statement and the calibration interval can be accomplished if calibration history is maintained and analyzed.
Uncertainty in the mass standards used to establish the random variation and the systematic bias described above is also a component of the uncertainty. It is worth noting that uncertainty statements from NIST prior to January 1994 are based on 3s (99%) confidence and must be divided by three prior to performing the combined uncertainty calculation. After January 1994, the uncertainty statements are based on 2s (95%) confidence. It is always important to read and evaluate the uncertainty statements of your standards.
The task of collecting data from measurements and performing analysis to determine measurement uncertainty requires the use of standards with sufficiently small uncertainties that do not significantly increase the uncertainty results of the calibration or product measurement.
Ratio of Standard Uncertainty to Process Uncertainty:
Combined uncertainty: Uc = √(Up² + Us²)

where Up = process standard deviation
Us = standard uncertainty (based on one standard deviation)

10 to 1 ratio: Uc = √(100² + 10²) = 100.5 (increase of 0.5%)
4 to 1 ratio: Uc = √(100² + 25²) = 103.1 (increase of 3.1%)
1 to 1 ratio: Uc = √(100² + 100²) = 141.4 (increase of 41.4%)
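The three ratios above can be reproduced with a short calculation (the process standard deviation of 100 is the same illustrative value used in the text, so the increase in units equals the increase in percent):

```python
import math

def combined(up, us):
    """Combined uncertainty from process (Up) and standard (Us) components."""
    return math.sqrt(up ** 2 + us ** 2)

up = 100.0  # process standard deviation, arbitrary units
for ratio in (10, 4, 1):
    us = up / ratio  # standard uncertainty at the given ratio
    uc = combined(up, us)
    print(f"{ratio} to 1: Uc = {uc:.1f} (increase of {uc - up:.1f}%)")
```

This makes the practical point concrete: at a 10-to-1 ratio the standards contribute almost nothing, while at 1-to-1 they inflate the combined uncertainty by over 41 %.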
The illustration above of the ratio of standards uncertainty to process uncertainty assumes that all systematic uncertainties are corrected and that no compensation is made for change due to time or use. It is crucial that the technique and procedure remain the same for the measurement, calibration, and production processes. The standards should be selected to ensure that all factors affecting measurement accuracy are evaluated. Variation and bias must be evaluated according to the standards selected. The standards should be used at varying times of the day to sample environmental influence on the measurement process. If more than one operator performs measurements, each operator should be included in the uncertainty determination measurements.
Methods of Mass Uncertainty Determination
The At Time of Test method determines the mass value and uncertainty at the time of evaluation without compensation for use or change per time. At Time of Test is frequently used when there is a lack of control over how the calibrated mass standard or balance is used and when it will be returned for calibration. NIST and the State Weights and Measures laboratories provide calibration results based on analysis of the customer's mass value at the time it is compared to the standards. This comparison is performed using measurement assurance methods that establish and control the capability of the calibration process.
Since NIST has little control over when or even if the mass standards will be returned for calibration, the stability of the standard mass over time and use can not be established. The At Time of Test calibration value and uncertainty reflect these conditions.
It is important to note that many calibration laboratories perform calibrations without the rigor of NIST and without the use of measurement assurance programs. In these laboratories, calibration uncertainties are at best established by repeated measurements with similar standards as a measure of repeatability, combined with the uncertainty of the reference standards to estimate the uncertainty in the mass values. These uncertainty estimates often do not evaluate factors such as density difference, air buoyancy measurement capability, environmental effects on equipment and standards, operator techniques, and stability of reference or working standards. At Time of Test calibration methods do not use the history of calibration results (values) to evaluate the long-term capability of the calibration process.
The From Calibration History method of uncertainty determination provides a long-term analysis of the ability to reproduce calibration results. The analysis of calibration results over time allows the laboratory to evaluate its capability to repeat the determination of calibration values. This analysis often reveals that the repeatability predicted by At Time of Test analysis is smaller than the actual ability to reproduce calibration results over extended time periods. An additional benefit is analysis of the capability to clean weights and maintain their mass values. This method also provides the information necessary to estimate calibration intervals and the adjustments required.
Uncertainty analysis from the calibration history requires that calibration data be recorded and maintained. The as-left calibrated values and the as-found calibrated values must be recorded before adjustments are made to those values. It is the analysis of the difference between as-left and as-found values that determines the stability of the calibration and whether the uncertainty predictions made at the time of test were accurate. Charting calibration values (results) over time (Figure 1) is a good way to visualize the stability of the item being calibrated. It also demonstrates the variability of the calibration process over time. However, visualizing calibration results does not by itself provide analysis and determination of calibration stability over time or separate the change per time from calibration repeatability (variation). An analysis of the differences between as-left and as-found values is required to predict change over time and long-term calibration repeatability.
To separate the variability from drift per time, it is necessary to use some basic statistical tools from the traditional Statistical Process Control (SPC) tool box. Range calculations are used to establish the difference between the two calibration values. The difference between the as-left value and the subsequent as-found value provides a range value (r). After the next calibration, the new as-left value is used to determine the next range value. The sum of the range values divided by the number of range values provides the average range (r-bar). The average range is multiplied by a constant based on the number of subgroups in each range. The subgroup size is one, so the constant 2.66 is used to estimate the random variability of the calibration process. Dividing the result by three yields an estimate of variability approximating one standard deviation (Figure 2).
Figure 2
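The moving-range estimate described above can be sketched in Python (the function name and the as-left/as-found values are illustrative; the 2.66/3 factor is the one given in the text, equivalent to dividing the average range by 1.128):

```python
def sigma_from_history(as_left, as_found):
    """Estimate one standard deviation of the calibration process from
    as-left / as-found differences: sigma ~ (average range x 2.66) / 3.
    Each as_found[i] is the value found at the calibration following
    the calibration that left as_left[i]."""
    ranges = [abs(f - l) for l, f in zip(as_left, as_found)]
    r_bar = sum(ranges) / len(ranges)       # average range
    return r_bar * 2.66 / 3                 # approximates one standard deviation

# Illustrative history for a standard, deviations from nominal in milligrams
as_left  = [0.012, 0.010, 0.013, 0.011, 0.012]
as_found = [0.015, 0.013, 0.010, 0.014, 0.009]
print(sigma_from_history(as_left, as_found))
```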
After the estimate of standard deviation is determined, it should be compared to the At Time of Test standard deviation. If the uncertainty from historical data is significantly larger than the predicted uncertainty, an investigation should establish which factors affecting the weighing process were not evaluated by the At Time of Test method.
The determination of the stability or shift in the calibrated values over time and use can also be evaluated using the differences from as left to as found values. The algebraic average of the differences provides the average drift per calibration interval. A more exacting rate of change is determined by the slope of the correlation using least squares analysis.
y = mx + c, where m is the slope, or change per unit time.
The analysis of the slope of the differences (Figure 3) estimates the amount of change in bias, or systematic shift, per time. This change should be compared to the variability to determine how much is random versus systematic. If the rate of change is significant, the calibration interval should be investigated and shortened, if practical. If it is not significant, the calibration interval may be increased to reduce the cost of calibration. It is important to realize that if a significant change occurs, the customer may not be aware that they have an increased uncertainty. Adjustments made to mass standards or weighing equipment without knowledge of the random uncertainty versus the systematic change per time often lead to mis-adjustments. If the value being adjusted into tolerance is a result of random variability, the adjustment will create a bias error. If adjustments are made at sequential calibrations, the random variability will dramatically increase. Knowledge of the process uncertainty is required to make adjustments to calibrated values.
Figure 3
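A least-squares slope fit of the as-found minus as-left differences against time can be sketched as follows (the function name and the yearly data are illustrative assumptions, not from the paper):

```python
def drift_slope(times, diffs):
    """Least-squares slope m of (as-found minus as-left) differences
    against time; m estimates the drift per unit time."""
    n = len(times)
    t_mean = sum(times) / n
    d_mean = sum(diffs) / n
    num = sum((t - t_mean) * (d - d_mean) for t, d in zip(times, diffs))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den

# Illustrative: differences in milligrams at yearly calibration intervals
times = [1, 2, 3, 4, 5]                           # years
diffs = [-0.002, -0.005, -0.006, -0.009, -0.011]  # as found minus as left
print(drift_slope(times, diffs))                  # estimated drift per year
```

A negative slope here would suggest the standard is losing mass with use, the common case noted earlier; the size of the slope relative to the random variability indicates whether the calibration interval should be shortened.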
Process measurement assurance programs provide continual methods to determine the uncertainty of the measurement process. A measurement assurance program uses check standards, better understood as control standards, in the measurement process to evaluate the total process. The control standards have previously established values that are used to see how accurately the measurement process can determine those values. The control standards are injected into the calibration or production process and measured as if they were the product (or calibration). The results of these measurements are immediately evaluated against established control limits to determine the proficiency of the process. The control charts (Figure 4) contain the data used to establish the process uncertainty. If a bias or systematic change occurs, it will be detected. Changes in the process variability will also be detected by the control charting.
Figure 4
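The control-limit check described above can be sketched in Python (the function name, the k = 3 limit, and the gram values are illustrative assumptions, not from the paper):

```python
def in_control(measured, established, sigma, k=3):
    """Compare a control-standard measurement to its established value;
    the process is flagged if the result falls outside +/- k sigma limits."""
    return abs(measured - established) <= k * sigma

# Illustrative: control standard with established value 50.0002 g,
# process standard deviation 0.0001 g
print(in_control(50.00035, 50.0002, 0.0001))  # True: within the 3-sigma limits
```

In practice each such result would also be plotted on the control chart, so that trends and shifts in bias are visible in addition to single out-of-limit points.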
A properly designed measurement assurance program will include measurements of the working standards against external standards to assure the working standards' stability. The calibration of the control standard and weighing equipment will also be assured by the measurement assurance methods. The environmental effect on the measuring equipment, standards, buoyancy, and procedure is determined and controlled. Operator performance and skills can also be assured by a properly designed program. The uncertainties of the mass measurements can be established by combining the uncertainties from the control chart with the uncertainty of the standards used. This is performed using root-sum-of-squares for the random variation portions and correcting or compensating for the systematic portions.
Establishing the measurement uncertainties as products are manufactured eliminates the need for repeated and costly inspections. The Measurement Assurance Method of uncertainty determination provides the added benefit of assuring both the measurement process and the capabilities of the standards.
References
[1] Crickenberger, J. M., Calibration Laboratories Technical Guide, NIST Handbook 150-2, (1994).
[2] Oppermann, H. V., and Taylor, J. K., Handbook for the Quality Assurance of Metrological Measurements, NIST Handbook 145, (1986).
[3] Croarkin, C., Measurement Assurance Programs, Part II: Development and Implementation, NBS Special Publication 676-II, (1984).
[4] Taylor, B. N., and Kuyatt, C. E., Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results, NIST Technical Note 1297, (1993).
[5] Harris, G. L., Implementation of the Guide to the Expression of Uncertainty in Measurement for State Calibration Laboratories, 1994 NCSL Workshop & Symposium, (1994).
Jerry Everhart is a metrologist with more than 25 years experience in standards and calibration technology. He has conducted seminars on process measurement assurance programs for the American Society of Quality Control, National Conference of Standards Laboratories, Ohio Quality and Productivity, Westinghouse, EG&G, Rockwell International, General Electric, Mason & Hanger-Silas Mason, Inc., Martin Marietta, Allied-Signal, Sandia National Laboratories, 3M Corporation, the National Institute of Standards and Technology, and the Department of Energy. JTI Systems conducts seminars on setting up PMAPs. This paper was presented at the 1995 Measurement Science Conference.