The standard error of the mean (SEM) is a bit mysterious. You’ll frequently find it in your statistical output. Is it a measure of variability? How does the standard error of the mean compare to the standard deviation? How do you interpret it?
In this post, I answer all these questions about the standard error of the mean, show how it relates to sample size considerations and statistical significance, and explain how the concept extends to other types of standard errors. In fact, I view standard errors as the doorway from descriptive statistics to inferential statistics. You’ll see how that works!
Standard Deviation versus the Standard Error of the Mean
Both the standard deviation (SD) and the standard error of the mean (SEM) measure variability. However, after that initial similarity, they’re vastly different!
Let’s start with the more familiar standard deviation. The calculation for this statistic compares each observation in a dataset to the mean. Consequently, the standard deviation assesses how data points spread out around the mean.
The standard error of the mean also measures variability, but the variability of what exactly?
The standard error of the mean is the variability of sample means in a sampling distribution of means.
Okay, let’s break that down so it’s easier to understand!
Inferential statistics uses samples to estimate the properties of entire populations. The standard error of the mean involves fundamental concepts in inferential statistics—namely repeated sampling and sampling distributions. SEMs are a crucial component of that process.
If you want to learn more about the differences between these two statistics, read my post that covers the topic in detail: Differences between SD and SE.
Related post: Measures of Variability
Sampling Distributions and the Standard Error of the Mean
Imagine you draw a random sample of 50 from a population, measure a property, and calculate the mean. Now, suppose you repeat that study many times. You repeatedly draw random samples of the same size, calculate the mean for each sample, and graph all the means on a histogram. Ultimately, the histogram displays the distribution of sample means for random samples of size 50 for the characteristic you’re measuring.
Statisticians call this type of distribution a sampling distribution. And, because we’re calculating the mean, it’s the sampling distribution of the mean. There’s a different sampling distribution for each sample size.
This distribution is the sampling distribution for the above experiment. Remember that the curve describes the distribution of sample means and not individual observations. Like other distributions, sampling distributions have a central location and variability around that center.
- The center falls on the population mean because random sampling tends to converge on this value.
- The variability, or spread, describes how far sample means tend to fall from the population mean.
The wider the distribution, the further the sample means tend to fall from the population mean. That’s not good when you’re using sample means to estimate population means! You want narrow sampling distributions where sample means fall near the population mean.
The variability of the sampling distribution is the standard error of the mean! More specifically, the SEM is the standard deviation of the sampling distribution. For the example sampling distribution, the SEM is 3. We’ll interpret that value shortly.
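The repeated sampling process above is easy to sketch in code. Here's a minimal simulation with a made-up population: I've assumed a normal population with mean 100 and standard deviation 21.2, chosen so that the SEM for samples of size 50 works out to roughly 3, matching the example value.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: mean 100, standard deviation 21.2
# (chosen so that 21.2 / sqrt(50) is roughly 3, matching the example SEM).
pop_mean, pop_sd, n = 100, 21.2, 50

# Draw 100,000 random samples of size 50 and compute each sample's mean.
samples = rng.normal(pop_mean, pop_sd, size=(100_000, n))
sample_means = samples.mean(axis=1)

# The standard deviation of those sample means is the standard error of the mean.
empirical_sem = sample_means.std()
print(round(empirical_sem, 2))  # close to 3
```

Plot `sample_means` as a histogram and you get the sampling distribution of the mean: centered on the population mean, with a spread equal to the SEM.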
Related post: Descriptive versus Inferential Statistics
SEM and the Precision of Sample Estimates
Because the SEM assesses how far your sample mean is likely to fall from the population mean, it evaluates how closely your sample estimates the population, which statisticians refer to as precision. Learn more about the statistical differences between accuracy and precision.
That’s crucial information for inferential statistics!
When you have a sample and calculate its mean, you know that it won’t equal the population mean exactly. Sampling error is the difference between the sample mean and the population mean. When using a sample to estimate the population, you want to know how wrong the sample estimate is likely to be. Specifically, you’re hoping that the sampling error is small. You want your sample mean to be close to the population parameter. Hello SEM!
Fortunately, you don’t need to repeat your study an insane number of times to obtain the standard error of the mean. Statisticians know how to estimate the properties of sampling distributions mathematically, as you’ll see later in this post. Consequently, you can assess the precision of your sample estimates without performing the repeated sampling.
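In practice, you estimate the SEM from a single sample by dividing the sample standard deviation by the square root of the sample size. A minimal sketch, using a made-up sample of ten measurements:

```python
import math
import statistics

# Hypothetical sample of 10 measurements.
data = [102, 98, 107, 95, 100, 104, 99, 101, 96, 103]

s = statistics.stdev(data)       # sample standard deviation (n - 1 denominator)
sem = s / math.sqrt(len(data))   # standard error of the mean
print(round(sem, 2))             # 1.17
```

If you're using SciPy, `scipy.stats.sem` performs the same calculation directly.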
Interpreting the Standard Error of the Mean
Let’s return to the standard deviation briefly because interpreting it helps us understand the standard error of the mean. The value for the standard deviation indicates the standard or typical distance that an observation falls from the sample mean using the original data units. Larger values correspond with broader distributions and signify that data points are likely to fall farther from the sample mean.
For the standard error of the mean, the value indicates how far sample means are likely to fall from the population mean using the original measurement units. Again, larger values correspond to wider distributions.
For a SEM of 3, we know that the typical difference between a sample mean and the population mean is 3.
We could stop there. However, statistical software uses SEMs to calculate p-values and confidence intervals. Often, these statistics are more helpful than the standard error of the mean. As I mentioned, the SEM is the doorway that opens up to these standard tools of inferential statistics.
Related posts: Sample Statistics are Always Wrong (to Some Extent)! and How Hypothesis Tests Work
Standard Error of the Mean and Sample Size
I’m sure you’ve always heard that larger sample sizes are better. The reason becomes apparent when you understand how to calculate the standard error of the mean.
Here’s the equation for the standard error of the mean:

SEM = s / √N
The numerator (s) is the sample standard deviation, which represents the variability present in the data. The denominator is the square root of the sample size (N), which is an adjustment for the amount of data.
Imagine that you start a study but then increase the sample size. During this process, the numerator won’t change much because the variability in the underlying population is a constant. However, the denominator increases because it contains the sample size. The total effect is that the standard error of the mean declines as the sample size increases.
Because the denominator is the square root of the sample size, quadrupling the sample size cuts the standard error in half.
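That square-root relationship is easy to verify numerically. Here's a quick check, assuming a sample standard deviation of 12 for illustration:

```python
import math

def sem(s, n):
    """Standard error of the mean: s / sqrt(n)."""
    return s / math.sqrt(n)

s = 12  # assumed sample standard deviation

print(sem(s, 25))   # 2.4
print(sem(s, 100))  # 1.2 -- quadrupling n cuts the SEM in half
```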
The SEM equation quantifies how larger samples produce more precise estimates!
Mathematical and Graphical Illustration of Precision
For this example, I’ll use the distribution properties for IQ scores. These scores have a mean of 100 and a standard deviation of 15. I’ll calculate the SEM using that standard deviation for sample sizes of 25 and 100:

SEM = 15 / √25 = 3

SEM = 15 / √100 = 1.5
As expected, quadrupling the sample size cuts the SEM in half. We know that the larger sample size produces a smaller standard error of the mean (1.5 vs. 3), indicating more precise estimates. Let’s see it graphically.
The probability distribution plot displays the sampling distributions for sample sizes of 25 and 100. Both distributions center on 100 because that is the population mean. However, notice how the blue distribution (N=100) clusters more tightly around the actual population mean, indicating that sample means tend to be closer to the true value. The red distribution (N=25) is more likely to have sample means further away from the population mean.
Again, smaller standard errors signify more precise estimates of a population parameter.
Additionally, smaller standard errors of the mean translate to smaller p-values and narrower confidence intervals, both of which are desirable properties. Consequently, even if you’re not interpreting SEMs directly, they’re helping you out!
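For instance, a rough large-sample 95% confidence interval for the mean is the sample mean plus or minus about 1.96 standard errors. This is a normal-approximation sketch using the IQ example's summary statistics; exact intervals use the t-distribution:

```python
import math

# Summary statistics from the IQ example: n = 100, SD = 15,
# with an assumed sample mean of 100.
mean, s, n = 100, 15, 100

sem = s / math.sqrt(n)   # 1.5
margin = 1.96 * sem      # about 2.94

print((round(mean - margin, 2), round(mean + margin, 2)))  # (97.06, 102.94)
```

A smaller SEM shrinks the margin on both sides, which is exactly the narrower confidence interval described above.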
Finally, I’ve been writing about the standard error of the mean. However, standard errors (SEs) exist for other population parameters, such as the population proportion, correlation, regression coefficients, etc. For all these parameters, their standard errors assess the precision of the sample estimates and help calculate their p-values and confidence intervals!
Even though you don’t necessarily need to interpret the standard error of the mean itself, I hope you see how it is crucial for inferential statistics!