Accuracy and precision are crucial properties of your measurements when you’re relying on data to draw conclusions. Both concepts apply to a series of measurements from a measurement system and relate to types of measurement error.
Measurement systems quantify characteristics for data collection. They include the instruments, software, and personnel necessary to assess the property of interest. For example, a research project studying bone density will devise a measurement system to produce accurate and precise measurements of bone density.
If your project involves collecting data for research or quality management, your measurement system must produce data that are both accurate and precise. After all, if you can’t trust the data you collect, you can’t trust the results!
While people often use accuracy and precision interchangeably in everyday conversation, they have distinct definitions in statistics, the scientific method, engineering, and quality management. Learn more about them!
Definition of Accuracy
Accuracy assesses whether a series of measurements are correct on average. For example, if a part has an accepted length of 5mm, a series of accurate data will have an average right around 5mm.
In statistical terms, accuracy is an absence of bias. In other words, measurements are not systematically too high or too low. However, accuracy tells you nothing about the distance from the target.
Please note that I’ve seen numerous incorrect definitions of accuracy on the Internet. Accuracy doesn’t assess how close measurements are to the target. Instead, it evaluates the “correct on average” aspect. You can have data that are correct on average but fall relatively far from the proper value. That still counts as accuracy!
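As a minimal sketch with hypothetical numbers (assuming the accepted length is 5mm), you can check the "correct on average" aspect by comparing the average of repeated measurements to the target:

```python
# Hypothetical repeated measurements of a part whose accepted length is 5mm.
measurements = [4.7, 5.4, 4.6, 5.3, 5.1, 4.9]
target = 5.0

# Accuracy is about the average: bias is the systematic offset from the target.
mean = sum(measurements) / len(measurements)
bias = mean - target
print(f"mean = {mean:.2f}, bias = {bias:+.2f}")  # bias near zero -> accurate on average
```

Notice that these made-up values are accurate (the bias is essentially zero) even though individual measurements stray as far as 0.4mm from the target.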
Definition of Precision
Precision indicates how close the measurements are to each other. Each measurement in a series has a component of random error. This error causes them to differ to some extent even when measuring the same item. For example, repeatedly measuring the same 5mm part will produce a spread of values.
In this manner, precision relates to reproducibility or repeatability. How reproducible are the data when you measure the same thing multiple times? High precision measurements are closer together than low precision measurements.
Measurements are precise when you measure the same item multiple times and the values are close to each other. However, precision tells you nothing about whether the measured values are near the correct value. Measurements can be close to each other but far from the proper value.
Precision relates to the variability of the measurements. Low precision corresponds with high random error in the measurements.
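Because precision is about variability, a natural summary statistic for it is the standard deviation of repeated measurements. Here's a small sketch with two hypothetical series measuring the same 5mm part:

```python
import statistics

# Two hypothetical series of repeat measurements of the same 5mm part.
high_precision = [5.18, 5.21, 5.19, 5.20, 5.22]  # tight spread, but biased high
low_precision = [4.6, 5.5, 4.8, 5.3, 4.9]        # centered near 5, but scattered

# Precision is about the spread: a smaller standard deviation means
# more repeatable measurements.
print(statistics.stdev(high_precision))  # small -> precise
print(statistics.stdev(low_precision))   # large -> imprecise
```

Note that the precise series here is also the biased one: a small standard deviation says nothing about closeness to the 5mm target.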
Examples of Accuracy vs. Precision
You might think that accurate data would also be precise, and vice versa! But that’s not necessarily true.
Accuracy assesses whether the measurements find the target value on average, but it does not indicate the distance from the target. You can have data that are correct on average but fall relatively far from the target.
For example, a project measures the heights of people, but the measuring tape has too few marks. The personnel guess the values between the lines by eye and are correct on average, but there’s high variation around the average. These measurements are not repeatable even though they’re correct overall.
On the other hand, you can have very precise measurements that are close to each other but off target on average.
For example, imagine your bathroom scale reads too high consistently. You can take repeated measurements of your weight that are very consistent, but on the whole they’re just too high. The data are precise because they’re repeatable, but they are inaccurate because they are systematically biased high.
A valid measurement system is both accurate and precise. In these cases, the data are correct on average, and they are close to the correct value. For example, if the weights from your bathroom scale center on the proper value and are close together, you have a valid scale!
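To make the contrast concrete, here's a small simulation (with a hypothetical true weight and made-up error sizes) that generates readings from three scales and summarizes each with a bias (accuracy) and a standard deviation (precision):

```python
import random
import statistics

random.seed(1)
true_weight = 70.0  # kg, hypothetical true value

def measure(bias, noise, n=50):
    """Simulate n scale readings with a systematic bias and random error."""
    return [true_weight + bias + random.gauss(0, noise) for _ in range(n)]

systems = {
    "accurate & precise": measure(bias=0.0, noise=0.1),
    "accurate, imprecise": measure(bias=0.0, noise=2.0),
    "precise, inaccurate": measure(bias=1.5, noise=0.1),
}

for name, data in systems.items():
    est_bias = statistics.mean(data) - true_weight  # accuracy: offset on average
    spread = statistics.stdev(data)                 # precision: repeatability
    print(f"{name:22s} bias={est_bias:+.2f}  sd={spread:.2f}")
```

Only the first scale is a valid measurement system: its readings center on the true value and cluster tightly around it.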
Accuracy vs. Precision on a Dartboard
The classic way to represent these concepts is by using darts on a dartboard! For simplicity, I’ll refer to the accepted or correct value as the target. You want your measurements to hit this target!
Picture darts scattered widely across the board but averaging out on the bullseye. That dartboard represents accurate data because they average out to be on target. However, they are not precise!
Next, picture darts clustered tightly together but off in one corner. That dartboard depicts precise data because they’re close to each other. However, they are systematically off target!
Finally, picture a tight cluster centered on the bullseye. That dartboard shows data that are both accurate and precise. The darts hit the target on average and are close together. These are the measurements you want!
How to Remember Accuracy vs. Precision
Here’s a handy mnemonic device for remembering which term corresponds to which concept.
- aCcuracy = Correct. Are the measurements correct on average?
- pRecision = reproducible, repeatable. When you measure the same item multiple times, do you obtain similar values?
How to Test Accuracy and Precision
You can use measurement systems analysis methods to test the accuracy and precision of your data. These analyses are specialized procedures that I’ll describe only briefly here. Scientific experiments and quality control studies typically invest a respectable amount of time and money assessing their measurement systems. Again, they need to trust their data before they can trust the results!
Calibration studies test the accuracy of your measurement system. Typically, these studies measure items with a range of known properties multiple times and compare the measured values to known values. This process determines whether the measurements are correct on average or biased. If the data are biased high or low, you can recalibrate the device to center on the proper values.
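Here's a sketch of that comparison using made-up calibration data (hypothetical reference standards and readings): each known value is measured several times, and the bias at each point is the measured average minus the known value.

```python
import statistics

# Hypothetical calibration data: reference standards with known lengths (mm),
# each measured three times on the instrument under test.
known = {
    5.0: [5.12, 5.09, 5.11],
    10.0: [10.10, 10.12, 10.08],
    20.0: [20.11, 20.09, 20.13],
}

# Bias at each reference point: measured average minus the known value.
biases = {true_value: statistics.mean(readings) - true_value
          for true_value, readings in known.items()}

for true_value, bias in biases.items():
    print(f"known = {true_value:5.1f}mm, bias = {bias:+.3f}mm")

# A consistent positive bias (about +0.1mm in this made-up data) suggests
# the instrument reads high and should be recalibrated.
```

A roughly constant bias across the range, as in this example, points to a simple offset that recalibration can remove.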
Gage R&R (repeatability and reproducibility) studies test the precision of your measurement system. Specifically, they determine the sources of measurement variability using an ANOVA method. Typically, gage R&R studies tell you whether your measurements have too much variability and where to target your corrective measures. They determine how much variability originates in the devices and personnel, allowing you to identify the source of problematic variability.
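Real gage R&R studies estimate variance components with a crossed ANOVA (parts × operators, usually including their interaction), typically in dedicated software. As a simplified illustration with made-up data, you can see the idea by pooling the within-cell variance (repeatability: the device) and comparing it to the spread of the operator averages (reproducibility: the personnel):

```python
import statistics

# Hypothetical gage R&R layout: 2 operators each measure 3 parts 3 times.
# data[operator][part] -> list of repeat readings (mm)
data = {
    "op_A": {"p1": [5.01, 5.03, 5.02], "p2": [5.51, 5.49, 5.50], "p3": [4.80, 4.82, 4.81]},
    "op_B": {"p1": [5.06, 5.08, 5.07], "p2": [5.55, 5.57, 5.56], "p3": [4.86, 4.84, 4.85]},
}

# Repeatability (equipment variation): pooled variance of the repeat
# readings within each operator/part cell.
cell_vars = [statistics.variance(reps)
             for parts in data.values() for reps in parts.values()]
repeatability_var = statistics.mean(cell_vars)

# Reproducibility (appraiser variation): variance of the operator averages.
op_means = [statistics.mean([x for reps in parts.values() for x in reps])
            for parts in data.values()]
reproducibility_var = statistics.variance(op_means)

print(f"repeatability variance:   {repeatability_var:.5f}")
print(f"reproducibility variance: {reproducibility_var:.5f}")
```

In this made-up data, reproducibility dominates: the two operators disagree with each other far more than the device disagrees with itself, so corrective measures should target operator technique rather than the instrument.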
For a related topic about measurement quality, read about Reliability vs Validity.