Random error and systematic error are the two main types of measurement error. Measurement error occurs when the measured value differs from the true value of the quantity being measured.
No matter how careful you are, you can never measure something perfectly; a little uncertainty is a normal part of the process. In science, we call this measurement error. It's not a sign that we did something wrong; it's an inherent part of measuring things. Statisticians also refer to it as experimental error or observational error.
There are two types of measurement error:
- Random error occurs due to chance. Even if we do everything correctly for each measurement, we’ll get slightly different results when measuring the same item multiple times.
- Systematic error is when the measurement system makes the same kind of mistake every time it measures something. Often, that happens because of a problem with the tool we’re using or the way we’re doing the experiment. For example, a miscalibrated caliper might consistently report widths as larger than they actually are.
Researchers must assess measurement error in scientific studies because too much of it reduces the validity and reliability of their experiment.
In this post, I’ll explain the differences between random vs systematic error, provide examples, and explore how they occur and ways to reduce them.
Random Error
Random error is a type of measurement error that is caused by the natural variability in the measurement process. It is unpredictable and occurs equally in both directions (e.g., too high and too low) relative to the correct value. It is usually caused by factors such as limitations in the measuring instrument, fluctuations in environmental conditions, and slight procedural variations.
Statisticians often refer to random error as “noise” because it can interfere with the true value (or “signal”) of what you’re trying to measure. If you can keep the random error low, you can collect more precise data.
For example, imagine you want to measure the height of a tree using a measuring tape. The tree’s height is 10 feet, but due to variations in the measuring tape, the angle you look at the tape, the sun in your eyes, the wind blowing the tape, etc., you get slightly different measurements each time you measure it. The first measurement is 10.2 feet, the second is 9.9 feet, and the third is 10.1 feet. These differences are due to random error.
Unlike systematic error, random error can be estimated and reduced by using statistics to analyze repeated measurements. To do this, use the same measurement device to measure the same object at least ten times. Then find the average and the standard deviation. Although there are several ways to report random error, a standard method is to write the mean plus or minus two times the standard deviation.
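As a quick sketch of that procedure, here's how you might compute the mean ± 2·SD report in Python. The ten measurement values are hypothetical, chosen to resemble the tree example:

```python
import statistics

# Ten hypothetical repeated measurements of the same tree (feet).
measurements = [10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 10.1, 9.9, 10.0, 10.2]

mean = statistics.mean(measurements)
sd = statistics.stdev(measurements)  # sample standard deviation

# Report the result as mean plus or minus two standard deviations.
print(f"{mean:.2f} ± {2 * sd:.2f} feet")  # → 10.05 ± 0.32 feet
```

The ± 2·SD band is a common shorthand: for roughly normal noise, it covers about 95% of the measurements.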
To see how random error affects a measurement system’s precision, you can perform a Gage R&R study.
Example
Let’s return to the tree height example to illustrate random error. The correct height for this tree is 10 feet.
This graph shows how the measurements randomly cluster around the true value of 10. They have no pattern. The red diamond is the average of the 30 data points, and it is pretty close to the correct value because the positive and negative errors cancel each other out.
Random error primarily affects precision, which is the degree to which repeated measurements of the same thing under similar conditions produce the same result. Additionally, random error mainly affects Reliability in an Experiment. Learn more about Accuracy vs. Precision.
Reducing Random Error
Random error is unavoidable in research, even if you try to control everything perfectly. However, there are simple ways to reduce it, such as:
- Take repeated measurements: If you take multiple measurements of the same thing, you can average them together to get a more precise result.
- Increase your sample size: The more data points you have, the less random error will affect your results. That’s why larger sample sizes are generally better than smaller ones regarding precision and statistical power.
- Increase the precision of measuring instruments: Use more precise instruments or calibrate them regularly.
- Control other variables: In controlled experiments, keep everything as consistent as possible so that extraneous factors don’t introduce random error into your measurements. By controlling all relevant variables, you can minimize sources of error and get more accurate results.
Taking the average of multiple measurements reduces the random error by canceling out the positive and negative errors. This property is a form of the law of large numbers. Learn more about the Law of Large Numbers.
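You can see this canceling-out effect in a small simulation. The sketch below (with an assumed true height of 10 feet and an assumed noise level of 0.2 feet) draws zero-mean random errors and shows that the average of more repeated measurements typically lands closer to the true value:

```python
import random
import statistics

random.seed(42)
TRUE_HEIGHT = 10.0  # assumed true value (feet)

def measure():
    # One noisy measurement: true value plus zero-mean random error.
    return TRUE_HEIGHT + random.gauss(0, 0.2)

# With more measurements, the average typically drifts toward the true value.
for n in (5, 50, 500):
    avg = statistics.mean(measure() for _ in range(n))
    print(f"n={n:>3}: average error = {abs(avg - TRUE_HEIGHT):.3f}")
```

Because the errors are random, the error for any single run isn't guaranteed to shrink monotonically, but over many runs the average error falls in proportion to the square root of the sample size.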
For example, averaging our multiple tree measurements produced a mean close to the correct value. For additional improvements, researchers can measure the tree during calm and stable meteorological conditions to reduce distracting factors. And they can use a more precise measuring tape with finer units marked out. They might even use a specialized rig to hold and measure trees if they need high precision.
Systematic Error
Systematic error is a measurement error that occurs consistently in the same direction. It can be a constant difference or one that varies in a relationship with the actual value of the measurement. Statisticians refer to the former as an offset error and the latter as a scale factor error. In either case, there is a persistent factor that predictably affects all measurements. Systematic errors create bias in your data.
Many factors can cause systematic error, including errors in the measurement instrument calibration, a bias in the measurement process, or external factors that influence the measurement process in a consistent non-random manner.
For example, imagine you want to weigh objects in an experiment. Unfortunately, the scale has a calibration error. It always shows the weight to be 1 kilogram heavier than the true weight. Alternatively, the scale might consistently add a percentage to the correct value. Either way, this difference between the actual and measured values is systematically wrong.
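The two flavors of systematic error from the scale example can be sketched as functions. The 1 kg offset and 10% inflation figures are the hypothetical values from the example above:

```python
def offset_scale(true_weight):
    # Offset error: the scale always adds a constant 1 kg.
    return true_weight + 1.0

def scale_factor_scale(true_weight):
    # Scale factor error: the scale inflates every reading by 10%,
    # so the error grows with the true value.
    return true_weight * 1.10

print(offset_scale(5.0))        # 5 kg object reads as 6 kg
print(scale_factor_scale(5.0))  # 5 kg object reads as 5.5 kg
```

Note the difference: the offset error is the same 1 kg whether the object weighs 1 kg or 100 kg, while the scale factor error grows with the true value.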
That’s a simple example, but imagine more complex scenarios.
A survey might have a systematic error due to a cognitive bias, such as the framing effect, where the wording unduly influences the participants. Perhaps the survey’s language is unintentionally prejudicial in some manner, causing people to react more negatively to survey items than they really feel.
In other cases, the expectations of the measurer and the subject can influence the measurements!
Example
Let’s return to the tree example to illustrate systematic error.
In this graph, the data points are systematically too high relative to the true value of 10. They cluster around the wrong value. For any given measurement, you can predict that the error will be positive, making them non-random. Furthermore, unlike the random error graph, the mean is also wrong for these data. Because the errors are all positive, averaging them doesn’t cancel them out. As an aside, the range of values in this example looks much smaller than in the previous graph, but that’s only due to the graph scaling.
Systematic error mainly affects accuracy, which is how close the average of a set of measurements is to the correct value. It also affects validity in research because the instrument isn’t measuring what you think it is measuring.
Reducing Systematic Error
To reduce systematic errors, you can use the following methods in your study:
- Triangulation: use multiple techniques to record observations so you’re not relying on only one instrument or method.
- Regular calibration: frequently comparing what the instrument records with the value of a known, standard quantity reduces the likelihood of systematic errors affecting your study.
- Blinding: hiding the condition assignment from participants and researchers reduces systematic bias from experimenter expectancies and from cues in the experimental situation that might prompt participants to behave in a certain way or give particular responses, even when those responses don’t reflect their true thoughts or behaviors.
Unfortunately, there are many possible sources of systematic error, each requiring a unique solution. So, a comprehensive list is impossible. Some instances will require a lot of investigation. More on that in the next section!
Random Error vs Systematic Error: Which is Worse?
Both types can be problematic, but systematic error is generally considered worse than random error. Systematic error affects all measurements consistently in the same direction, leading to biased results. Random error, on the other hand, affects measurements in different directions, canceling out the errors in the long run.
Systematic error is tricky to detect and fix. Even if you take many measurements and average them, the error remains. Unlike random error, averaging and larger sample sizes don’t reduce systematic error, and you can’t use math alone to eliminate it or even to know it’s there. To minimize systematic error, you can try the following:
- Look carefully at the way you’re doing the experiment and try to figure out what might be causing the error. Then, change the procedure or conditions to fix it.
- Compare your results to studies using different equipment or methods. If their results differ from yours, it could signal systematic error in your experiment.
- Try using a known value to check your measurements. This process is called calibration.
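The contrast with random error, and the value of calibration, can be shown in a short simulation. The sketch below assumes a biased instrument with a hypothetical constant 1 kg offset plus random noise. Averaging many readings removes the noise but leaves the bias; measuring a known reference standard exposes the bias so it can be corrected:

```python
import random
import statistics

random.seed(1)
TRUE_VALUE = 10.0  # known reference standard (e.g., a calibration weight)
BIAS = 1.0         # hypothetical constant offset error

# A biased, noisy instrument: systematic offset plus random noise.
readings = [TRUE_VALUE + BIAS + random.gauss(0, 0.2) for _ in range(1000)]

# Averaging cancels the random noise but NOT the systematic offset.
avg = statistics.mean(readings)
print(f"average reading: {avg:.2f}")  # near 11.0, not 10.0

# Calibration: measuring a known standard reveals the bias,
# which can then be subtracted out.
estimated_bias = avg - TRUE_VALUE
corrected = [r - estimated_bias for r in readings]
print(f"corrected mean: {statistics.mean(corrected):.2f}")  # → 10.00
```

This is the logic behind regular calibration: without a known reference value, the 1 kg offset would be invisible no matter how many readings you averaged.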