Definitions
“Accuracy,” as defined in the ANSI/ISA—51.1—1979 (R1993) standard Process Instrumentation Terminology, is:
The degree of conformity of an indicated value to a recognized accepted standard value, or ideal value.
Two common methods of rating or expressing accuracy are:
- As a percent of scale length (or percent of full scale). This rating method is most commonly employed with instruments equipped with an analog meter.
- As a percent of actual output reading. This rating method has become more popular for instruments provided with a digital meter.
The above referenced standard also notes that:
As a performance specification, accuracy (or reference accuracy) shall be assumed to mean accuracy rating of the device, when used at reference operating conditions. Accuracy rating includes the combined effects of conformity (linearity), hysteresis, dead band, and repeatability errors.
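To make the two rating conventions concrete, here is a minimal Python sketch; the function names and the example figures (a 0-100 ppm analog instrument rated at ± 2% of full scale, and a digital instrument rated at ± 2% of reading ± 1 least significant digit) are illustrative rather than taken from any particular specification.

```python
def band_percent_of_full_scale(reading, full_scale, pct):
    """Error band when accuracy is rated as a percent of full scale:
    the bound is fixed at pct% of the span, wherever the reading falls."""
    err = full_scale * pct / 100.0
    return reading - err, reading + err

def band_percent_of_reading(reading, pct, lsd=0.0):
    """Error band when accuracy is rated as a percent of the actual reading,
    optionally plus one least significant digit (common for digital meters)."""
    err = reading * pct / 100.0 + lsd
    return reading - err, reading + err

lo, hi = band_percent_of_full_scale(50.0, 100.0, 2.0)
print(f"analog, 50 ppm reading:  {lo:.1f} to {hi:.1f} ppm")   # 48.0 to 52.0 ppm

lo, hi = band_percent_of_reading(50.0, 2.0, lsd=0.1)
print(f"digital, 50 ppm reading: {lo:.1f} to {hi:.1f} ppm")   # 48.9 to 51.1 ppm
```

Low in the range the two conventions diverge sharply: at a 10 ppm reading, the percent-of-full-scale band is still ± 2 ppm, while the percent-of-reading band shrinks to roughly ± 0.3 ppm.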
What Accuracy Is Not
In the world of gas detection instrumentation, as elsewhere, confusion exists. Consensus standards are helpful, of course, but cannot be expected to address every practical issue. Moreover, critical terms can have very different meanings in contexts outside of gas detection.
Accuracy is not minimum detectability
“Minimum detectability” is simply the lowest meter reading or other type of instrument output that can be unambiguously discriminated from noise. [Some agencies set a standard that minimum detectability must be at least 2-2.5 times the noise level.]
Note that any data garnered at the level of minimum detectability will not be very accurate. For example, in a typical case, the minimum detectability of a particular instrument, provided with an analog meter, is given as 1% of full scale, and accuracy is ± 2% of full scale. Thus, for a 0-100 ppm scale, the minimum detectable reading of 1 ppm would actually be 1 ppm ± 2 ppm—hardly a useful measurement.
Similarly, on a digital unit, the minimum detectability of a particular instrument is often given as the least significant digit. On a commonly used 3½ digit meter, for a range of 0-199.9 ppm, this would be 0.1 ppm. In this case, accuracy is specified at ± 2% of reading ± 1 least significant digit. Here, the minimum detectable reading of 0.1 ppm would actually be 0.1 ppm ± 0.002 ppm ± 0.1 ppm. Technically better than the analog example, but still of little value.
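Restating that arithmetic as a relative uncertainty makes the point explicit; the sketch below uses the same illustrative specifications as the two examples above.

```python
# Analog example: 0-100 ppm scale, minimum detectability 1% of full scale,
# accuracy +/- 2% of full scale.
analog_reading = 1.0                       # ppm, the minimum detectable reading
analog_error = 0.02 * 100.0                # ppm, 2% of full scale
print(f"analog:  {analog_reading} ppm +/- {analog_error:.1f} ppm "
      f"({100 * analog_error / analog_reading:.0f}% relative uncertainty)")

# Digital example: 3.5-digit meter, 0-199.9 ppm range, LSD = 0.1 ppm,
# accuracy +/- 2% of reading +/- 1 LSD.
digital_reading = 0.1                      # ppm, the minimum detectable reading
digital_error = 0.02 * digital_reading + 0.1
print(f"digital: {digital_reading} ppm +/- {digital_error:.3f} ppm "
      f"({100 * digital_error / digital_reading:.0f}% relative uncertainty)")
```

The relative uncertainty works out to about 200% for the analog case and about 102% for the digital case, which is what makes readings at the minimum detectable level of so little quantitative value.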
Even so, knowing the minimum detectability of an instrument can be helpful in situations when “go/no-go” readings are of interest. Given a properly calibrated instrument, the smallest observable response would be—by definition—the minimum detectable level, and would indicate at least the presence of the analyte in question (any interferences notwithstanding).
Of course, such practices should only be done when instruments with more appropriate sensitivity are not available.
Accuracy is not precision
“Precision” as defined in ASTM Standard D 1356—05 Standard Terminology Relating to Sampling and Analysis of Atmospheres is:
The degree of agreement of repeated measurements of the same property, expressed in terms of dispersion of test results about the mean result obtained by repetitive testing of a homogeneous sample under specified conditions.
A classic illustration, courtesy of the Montgomery County, KY schools, shows a target with many arrows closely clustered around the bull’s-eye. This scenario is both accurate and precise. If many arrows were closely clustered far from the bull’s-eye, the situation would be precise, but not accurate. If arrows were scattered all over the target, those closest to the bull’s-eye would be accurate, but the archery session as a whole was not precise.
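The same distinction can be put numerically: the bias of a set of repeated readings (how far their mean sits from the true value) reflects accuracy, while their dispersion about the mean reflects precision. A minimal sketch with made-up readings:

```python
import statistics

true_value = 50.0  # ppm, the "bull's-eye"

# Hypothetical repeated readings of the same 50 ppm sample.
accurate_and_precise = [49.8, 50.1, 50.0, 49.9, 50.2]
precise_not_accurate = [54.9, 55.1, 55.0, 54.8, 55.2]   # tight cluster, wrong place
accurate_not_precise = [45.0, 55.0, 48.0, 52.0, 50.0]   # scattered around the truth

for label, readings in [("accurate and precise", accurate_and_precise),
                        ("precise, not accurate", precise_not_accurate),
                        ("accurate, not precise", accurate_not_precise)]:
    bias = statistics.mean(readings) - true_value   # closeness to truth (accuracy)
    spread = statistics.stdev(readings)             # dispersion (precision)
    print(f"{label:22s} bias = {bias:+.2f} ppm, std dev = {spread:.2f} ppm")
```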
However, while this understanding of precision holds in the fields of science, engineering, and statistics, the term as used in the world of computing can mean either of the following (a short sketch follows the list):
- The number of significant decimal digits or bits by which a particular value is expressed. For example, a calculation which rounds to three digits is said to have a working precision or rounding precision of 3.
- The units of the least significant digit of a measurement. For example, if a measurement is 25.371 meters, then its precision is millimeters (one unit in the last place, or “ulp,” is 1 mm).
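Here is a brief sketch of those two computing senses; the values are chosen purely for illustration:

```python
from decimal import Decimal

# Rounding precision: a calculation carried to three significant digits.
value = 2.0 / 3.0
print(f"{value:.3g}")   # 0.667 -- a working (rounding) precision of 3

# Unit in the last place: 25.371 has three digits after the decimal point,
# so one unit in the last place is 10**-3 of a meter, i.e. 1 mm.
measurement = Decimal("25.371")
print(Decimal(10) ** measurement.as_tuple().exponent)   # 0.001
```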
Accuracy is not resolution
“Resolution” as defined in the ANSI/ISA—51.1—1979 (R1993) standard Process Instrumentation Terminology is:
The least interval between two adjacent discrete details which can be distinguished one from the other.
We can invoke another use of “resolution” to help visualize this concept. In the computer world, “resolution” is a measure of the sharpness of an image or of the fineness with which a device (such as a video display, printer, or scanner) can produce or record such an image. Resolution in this context is usually expressed as the total number or density of pixels in the image—typically as dots per inch or dots per millimeter.
For example, a 600-dpi (dots per inch) printer is one that is capable of printing 600 distinct dots in a line 1 inch long. Thus, it can print 360,000 dots per square inch.
Now, draw the analogy between all these “distinct dots” and the least significant digit on a digital meter. We referred earlier to a 3½ digit meter, set up for a measuring range of 0-199.9 ppm. Our least significant digit here is 0.1 ppm. There is no way to read a digital meter beyond this least significant digit. Or, to put it another way, the resolution of the meter is 0.1 ppm.
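Put differently, whatever concentration the sensor actually delivers, the display quantizes it to the nearest 0.1 ppm step. A minimal sketch of that rounding (the values are illustrative):

```python
def displayed(value_ppm: float) -> float:
    """Quantize a concentration to the nearest 0.1 ppm, the display's least significant digit."""
    return round(value_ppm, 1)

print(displayed(37.6349))   # 37.6
print(displayed(0.04))      # 0.0 -- below the 0.1 ppm resolution, indistinguishable from zero
print(displayed(0.16))      # 0.2
```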
Clearly, attempting to measure 0.1 ppm on this meter would not be accurate. As shown earlier, for an instrument accuracy specified at ± 2% of reading ± 1 least significant digit, a measurement at the level of resolution would render 0.1 ppm ± 0.002 ppm ± 0.1 ppm.
If we consider resolution as it might apply to an instrument equipped with an analog meter, best practice dictates that even though an eagle-eyed observer might be able to interpolate between the actual divisions on the meter, resolution in this case is limited to the smallest meter division. Analog meters are rarely provided with more than 100 scale divisions, yielding a typical resolution of 1% of full scale.
Given a 0-100 ppm analog meter, on an instrument with a stated accuracy of ± 2% of full scale, a reading at the resolution point would be 1 ppm ± 2 ppm.
Further Issues
The preceding discussion left out, for reasons of clarity and brevity, certain other factors that should be considered in any rigorous presentation of accuracy as it applies to gas detection.
Absolute method, definitive method, and reference method
An analytical measurement is concerned with determining the amount of a given analyte in a defined mass or volume of the sample. If the analyte cannot directly be counted or measured, then a macroscopic parameter must be found which is functionally related to the amount (concentration) of the analyte.
Interscan, as a matter of course, has always included the disclaimer “Limited to the accuracy of the calibration standard” in any designation of the accuracy of our gas analyzers. This is an important point, since our instruments—and most other gas detection devices—must be calibrated against a known standard, commonly called a “span gas.” Furthermore, for best performance, most gas detection instruments should also be zeroed with a zero gas.
In analytical work, this type of instrumental measurement is referred to as a “reference method,” in contrast to an absolute method or a definitive method.
- An “absolute method” is a method of chemical analysis that bases characterization completely on standards defined in terms of physical properties.
- A “definitive method” [per IUPAC Compendium of Chemical Terminology 2nd Edition (1997)] is a method of exceptional scientific status which is sufficiently accurate to stand alone in the determination of a given property for the certification of a reference material. Such a method must have a firm theoretical foundation so that systematic error is negligible relative to the intended use.
- A “reference method” [per IUPAC Compendium of Chemical Terminology 2nd Edition (1997)] is a method having small, estimated inaccuracies relative to the end use requirement. The accuracy of a reference method must be demonstrated through direct comparison with a definitive method or with a primary reference material.
Commercial gas calibration standards are NIST-traceable, and can be obtained with stated accuracies of ± 2 percent, or even better in certain cases. Permeation devices are also NIST-traceable, and are often the only available standards for certain gases; however, because the carrier gas flow rate and the temperature of the permeation oven must be carefully controlled, they introduce additional sources of error.
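As a sketch of why those two parameters matter, the concentration delivered by a permeation device is typically computed from the gravimetrically certified permeation rate, the permeant's molecular weight, and the dilution flow; the specific compound and figures below are illustrative only.

```python
def permeation_ppm(rate_ng_per_min: float,
                   molecular_weight: float,
                   flow_ml_per_min: float,
                   molar_volume_l: float = 24.45) -> float:
    """Concentration (ppm, v/v) delivered by a permeation device.

    rate_ng_per_min : certified permeation rate of the tube (ng/min)
    molecular_weight: molecular weight of the permeant (g/mol)
    flow_ml_per_min : total dilution gas flow (mL/min)
    molar_volume_l  : molar volume of an ideal gas, ~24.45 L/mol at 25 C
    """
    return rate_ng_per_min * molar_volume_l / (molecular_weight * flow_ml_per_min)

# Illustrative: a chlorine dioxide (MW ~67.45) tube emitting 250 ng/min
# into 1,000 mL/min of dilution air.
print(round(permeation_ppm(250, 67.45, 1000), 3))   # ~0.091 ppm

# A 5% error in the dilution flow shifts the delivered concentration directly;
# a temperature shift changes the emission rate with a similar effect.
print(round(permeation_ppm(250, 67.45, 950), 3))    # ~0.095 ppm
```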
Either way, there is more to be worried about in a gas concentration measurement than the inherent accuracy of the instrument itself.
Errors
If we wish to determine the overall accuracy of our gas detection measurement, we must somehow combine all the known sources of error. For the most part, errors deriving from the instrument and errors deriving from the calibration method can be considered “additive.”
- An “additive error” is an error that is added to the true value, and does not depend on the true value itself. Thus, the result of the measurement is the sum of the true value and the additive error(s).
Consider the example of a 50 ppm reading taken on a digital instrument with a stated accuracy of ± 2% of reading ± 1 least significant digit, and a measuring range of 0-199.9 ppm. Also include the fact that you calibrated the instrument with a 50 ppm standard, having an accuracy of ± 2%.
This becomes 50 ppm ± 1 ppm ± 0.1 ppm ± 1 ppm. Simplifying this, the true value of your measurement lies somewhere between 47.9 and 52.1 ppm.
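A minimal sketch of that bookkeeping, treating the worst case as the simple sum of the individual error terms (the figures are those of the example above):

```python
reading = 50.0                 # ppm, displayed value
inst_error = 0.02 * reading    # +/- 2% of reading            -> +/- 1 ppm
lsd_error = 0.1                # +/- 1 least significant digit (0-199.9 ppm range)
span_error = 0.02 * reading    # +/- 2% calibration standard  -> +/- 1 ppm

total_error = inst_error + lsd_error + span_error
print(f"{reading:.1f} ppm +/- {total_error:.1f} ppm -> true value between "
      f"{reading - total_error:.1f} and {reading + total_error:.1f} ppm")
# 50.0 ppm +/- 2.1 ppm -> true value between 47.9 and 52.1 ppm
```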
It is sometimes asked what concentration of span gas should be used to calibrate the instrument. Unfortunately, there is not a simple answer. Best analytical practice is to calibrate and measure at or very near the same point, but this is not easily achievable in all cases, and would defeat the purpose of any instrument with a reasonably useful measuring range.
In the real world, instrument linearity is specified, and a calibration curve, that is, a table or graph of the instrument’s measured response over its range against a known source, can be provided. Moreover, instrument accuracy ratings include linearity, which is just a special case of conformity.
However, if a calibration standard is employed that forces the user to read the instrument in a very low region of the measuring range in order to make the calibration setting, the diminished accuracy of that reading will manifest itself as systematic error.
- A “systematic error” is an error that is constant in a series of repetitions of the same experiment or observation.
Contrast this with a random error.
- A “random error” is the fluctuating part of the overall error that varies from measurement to measurement.
Generally, systematic errors are more insidious than random errors, because their magnitude cannot be reduced by simple repetition of the measurement procedure. In this case, the calibration error derived from too low a calibration standard will also be a multiplicative error, as it is proportional to the true value of the quantity being measured, and will get worse, the higher the instrument reading.
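As a sketch of the effect, suppose (purely for illustration) that calibrating at too low a point left the span setting 5% high; the percentage error is then constant across the range, but the absolute offset grows with the reading:

```python
# Illustrative figure only: the span adjustment was made at a point where the
# reading used for calibration was off by 5%, so the span setting ends up 5% high.
span_scale_error = 0.05

for true_ppm in (2.0, 20.0, 100.0):
    indicated = true_ppm * (1.0 + span_scale_error)   # multiplicative, systematic error
    print(f"true {true_ppm:6.1f} ppm -> indicated {indicated:6.1f} ppm "
          f"(offset {indicated - true_ppm:+.1f} ppm)")
```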
Environmental effects, such as changes in ambient temperature and pressure, will affect nearly all gas detection methods and must be taken into account. Many analytical methods are also affected by changes in ambient humidity.
For highly reactive gases such as hydrazine and chlorine dioxide, chemisorption effects on all instrument wetted parts, including sample probes, can affect the measurements. Likewise, accumulated water and particulate in wetted areas can impair accuracy.
And, virtually unique to gas detection is the problem that other compounds can produce similar outputs on the instrument, causing so-called interferences. The prudent instrument user should seek out comprehensive interference data as it pertains to the application at hand.
Finally, one could argue that a particular gas detection measurement suffers from some sampling error, in that the portion of air being sampled is not truly representative of the area in question. While there may be rare examples of this phenomenon occurring, the conditions producing such an anomaly would likely be known to the individual performing the measurement. In nearly all cases, though, we can assume that the gas molecules will properly diffuse throughout the environment, so that a sample taken anywhere can be considered representative.
Conclusion
There are many factors involved in obtaining accurate gas detection measurements, and they extend well beyond the instrument itself. Calibration standards, calibration methods, ambient conditions, chemisorption, entrained water and particulate, and interferences can all play a part in destroying accuracy.
We stand ready to work with you, to help you achieve the most accurate results—in a cost-effective manner—for your gas detection application.