The standard error of the mean (SEM) is a bit mysterious. You’ll frequently find it in your statistical output. Is it a measure of variability? How does the standard error of the mean compare to the standard deviation? How do you interpret it?
In this post, I answer all these questions about the standard error of the mean, show how it relates to sample size considerations and statistical significance, and explain the general concept of other types of standard errors. In fact, I view standard errors as the doorway from descriptive statistics to inferential statistics. You’ll see how that works!
Standard Deviation versus the Standard Error of the Mean
Both the standard deviation (SD) and the standard error of the mean (SEM) measure variability. However, after that initial similarity, they’re vastly different!
Let’s start with the more familiar standard deviation. The calculation for this statistic compares each observation in a dataset to the mean. Consequently, the standard deviation assesses how data points spread out around the mean.
The standard error of the mean also measures variability, but the variability of what exactly?
The standard error of the mean is the variability of sample means in a sampling distribution of means.
Okay, let’s break that down so it’s easier to understand!
Inferential statistics uses samples to estimate the properties of entire populations. The standard error of the mean involves fundamental concepts in inferential statistics—namely repeated sampling and sampling distributions. SEMs are a crucial component of that process.
If you want to learn more about the differences between these two statistics, read my post about that topic specifically, Differences between SD and SE.
Related post: Measures of Variability
Sampling Distributions and the Standard Error of the Mean
Imagine you draw a random sample of 50 from a population, measure a property, and calculate the mean. Now, suppose you repeat that study many times. You repeatedly draw random samples of the same size, calculate the mean for each sample, and graph all the means on a histogram. Ultimately, the histogram displays the distribution of sample means for random samples of size 50 for the characteristic you’re measuring.
Statisticians call this type of distribution a sampling distribution. And, because we’re calculating the mean, it’s the sampling distribution of the mean. There’s a different sampling distribution for each sample size.
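If you'd like to see this repeated-sampling process in action, here's a minimal simulation sketch in Python. The population mean and standard deviation below are made-up values chosen purely for illustration; the point is the process, not the specific numbers.

```python
# Simulate a sampling distribution of the mean (hypothetical population values).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)

pop_mean, pop_sd = 50, 10   # made-up population parameters for illustration
sample_size = 50            # size of each random sample
n_samples = 10_000          # number of repeated samples to draw

# Each row is one random sample; take the mean of every row.
sample_means = rng.normal(pop_mean, pop_sd, size=(n_samples, sample_size)).mean(axis=1)

# The histogram of these means is the sampling distribution of the mean.
plt.hist(sample_means, bins=50)
plt.xlabel("Sample mean")
plt.ylabel("Frequency")
plt.title("Sampling distribution of the mean (samples of size 50)")
plt.show()

# The standard deviation of the sample means approximates the SEM.
print(sample_means.std(ddof=1))   # close to 10 / sqrt(50), about 1.41
```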
This distribution is the sampling distribution for the above experiment. Remember that the curve describes the distribution of sample means and not individual observations. Like other distributions, sampling distributions have a central location and variability around that center.
- The center falls on the population mean because random sampling tends to converge on this value.
- The variability, or spread, describes how far sample means tend to fall from the population mean.
The wider the distribution, the further the sample means tend to fall from the population mean. That’s not good when you’re using sample means to estimate population means! You want narrow sampling distributions where sample means fall near the population mean.
The variability of the sampling distribution is the standard error of the mean! More specifically, the SEM is the standard deviation of the sampling distribution. For the example sampling distribution, the SEM is 3. We’ll interpret that value shortly.
Related post: Descriptive versus Inferential Statistics
SEM and the Precision of Sample Estimates
Because the SEM assesses how far your sample mean is likely to fall from the population mean, it evaluates how closely your sample estimates the population, which statisticians refer to as precision. Learn more about the statistical differences between accuracy and precision.
That’s crucial information for inferential statistics!
When you have a sample and calculate its mean, you know that it won’t equal the population mean exactly. Sampling error is the difference between the sample mean and the population mean. When using a sample to estimate the population, you want to know how wrong the sample estimate is likely to be. Specifically, you’re hoping that the sampling error is small. You want your sample mean to be close to the population parameter. Hello SEM!
Fortunately, you don’t need to repeat your study an insane number of times to obtain the standard error of the mean. Statisticians know how to estimate the properties of sampling distributions mathematically, as you’ll see later in this post. Consequently, you can assess the precision of your sample estimates without performing the repeated sampling.
Related posts: Populations, Parameters, and Samples in Inferential Statistics and Interpreting P-values
Interpreting the Standard Error of the Mean
Let’s return to the standard deviation briefly because interpreting it helps us understand the standard error of the mean. The value for the standard deviation indicates the standard or typical distance that an observation falls from the sample mean using the original data units. Larger values correspond with broader distributions and signify that data points are likely to fall farther from the sample mean.
For the standard error of the mean, the value indicates how far sample means are likely to fall from the population mean using the original measurement units. Again, larger values correspond to wider distributions.
For a SEM of 3, we know that the typical difference between a sample mean and the population mean is 3.
We could stop there. However, statistical software uses SEMs to calculate p-values and confidence intervals. Often, these statistics are more helpful than the standard error of the mean. As I mentioned, the SEM is the doorway that opens up to these standard tools of inferential statistics.
Related posts: Sample Statistics are Always Wrong (to Some Extent)! and How Hypothesis Tests Work
Standard Error of the Mean and Sample Size
I’m sure you’ve always heard that larger sample sizes are better. The reason becomes apparent when you understand how to calculate the standard error of the mean.
Here’s the equation for the standard error of the mean.
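SEM = s / √N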
The numerator (s) is the sample standard deviation, which represents the variability present in the data. The denominator is the square root of the sample size (N), which is an adjustment for the amount of data.
Imagine that you start a study but then increase the sample size. During this process, the numerator won’t change much because the variability in the underlying population is a constant. However, the denominator increases because it contains the sample size. The total effect is that the standard error of the mean declines as the sample size increases.
Because the denominator is the square root of the sample size, quadrupling the sample size cuts the standard error in half.
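In symbols, s / √(4N) = s / (2√N), which is half of s / √N.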
The SEM equation quantifies how larger samples produce more precise estimates!
Mathematical and Graphical Illustration of Precision
For this example, I’ll use the distribution properties for IQ scores, which have a mean of 100 and a standard deviation of 15. I’ll calculate the SEM for sample sizes of 25 and 100.
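Plugging those values into the SEM formula:
- For a sample size of 25: SEM = 15 / √25 = 15 / 5 = 3
- For a sample size of 100: SEM = 15 / √100 = 15 / 10 = 1.5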
As expected, quadrupling the sample size cuts the SEM in half. We know that the larger sample size produces a smaller standard error of the mean (1.5 vs. 3), indicating more precise estimates. Let’s see it graphically.
The probability distribution plot displays the sampling distributions for sample sizes of 25 and 100. Both distributions center on 100 because that is the population mean. However, notice how the blue distribution (N=100) clusters more tightly around the actual population mean, indicating that sample means tend to be closer to the true value. The red distribution (N=25) is more likely to have sample means further away from the population mean.
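If you'd like to recreate a plot like this one, here's a minimal Python sketch. It assumes approximately normal sampling distributions centered on the population mean of 100, with standard deviations equal to the SEMs of 3 and 1.5 calculated above.

```python
# Sketch of the two sampling distributions for the IQ example.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

pop_mean = 100
x = np.linspace(88, 112, 500)

# Sampling distributions of the mean for N = 25 (SEM = 3) and N = 100 (SEM = 1.5).
plt.plot(x, norm.pdf(x, loc=pop_mean, scale=3), color="red", label="N = 25 (SEM = 3)")
plt.plot(x, norm.pdf(x, loc=pop_mean, scale=1.5), color="blue", label="N = 100 (SEM = 1.5)")
plt.axvline(pop_mean, color="red", linestyle="--", label="Population mean")
plt.xlabel("Sample mean IQ score")
plt.ylabel("Probability density")
plt.legend()
plt.show()
```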
Again, smaller standard errors signify more precise estimates of a population parameter.
Additionally, smaller standard errors of the mean translate to smaller p-values and narrower confidence intervals, both of which are desirable properties. Consequently, even if you’re not interpreting SEMs directly, they’re helping you out!
Other SEs
Finally, I’ve been writing about the standard error of the mean. However, standard errors (SEs) exist for other population parameters, such as the population proportion, correlation, and regression coefficients. For all these parameters, the standard errors assess the precision of the sample estimates and help calculate their p-values and confidence intervals!
To see how the standard error of the mean is used in the calculations for p-values, confidence intervals, and margins of error, read my posts about How Confidence Intervals Work and Margin of Error.
Even though you don’t necessarily need to interpret the standard error of the mean itself, I hope you see how it is crucial for inferential statistics!
Hi Jim,
Thank you for your swift response! I work in health economics, where we run probabilistic analyses on economic models that measure the cost-effectiveness of new health interventions. One of the key parameters in these models is the price of the drug or the cost of health staff; we always use the cost at the national level. However, it varies regionally from the list prices, so we have to assume a standard error to run a probabilistic analysis.
Your response is interesting and very helpful, I am sure it will prompt discussion among colleagues.
Kind regards,
Charlotte
Hello Jim,
Thank you for your very informative article. I often see it assumed in economic models that the standard error of a parameter is 10% of its mean when the measure of spread is unknown. What is your opinion on this? I have been looking for where this assumption might come from but have been unsuccessful.
Looking forward to hearing your thoughts 🙂
Hi Charlotte,
I have not heard of that practice. What is the context for making that assumption? Normally, you can obtain the standard errors of the coefficient when you’re fitting the model. You don’t need to guess at its value. So, I’m unsure why anyone would need an approximation. Is it for cases where someone doesn’t have the data?
I haven’t looked into whether this approach works, but my guess is no. Variability in general won’t correlate with the mean. They’re separate measures. You can have wide or narrow distributions around the mean. Then, with standard errors, you have to factor in the fact that the SE will shrink as the sample size increases regardless of the overall variability. So, if you have two models that include the same parameters, but one model has many more observations, then its SEs will be smaller than the model with fewer observations even though they’re assessing the same parameters!
I hope that helps!
Thanks for this post, Jim. It is so good to get an intuitive understanding of these statistics. One of my stats books says that two SEMs are almost always roughly equal to one standard deviation. Can you explain this? In the scientific literature, I’ve seen both the standard deviation and two SEMs used, for example as error bars around a point estimate.
Another point related to the graphs above: if you have one sample mean (and SEM) calculated from a smaller number of samples, and another sample mean (and its SEM) calculated from a larger number of samples, wouldn’t the one using the smaller number of samples be more likely to have a mean that differs from the real population mean? And in that case, I would presume that its wider SEM would be wide enough to accommodate the “real” population mean. (In other words, it might be a more useful illustration if the dotted red line were shifted to the left or right of the population mean a little, which might better reflect what it would look like if it was obtained from a smaller number of samples. In that case, its wider SEM would still capture where the “real” population mean is.) I hope that makes sense.
Hi Jeremy,
I think a lot of the answers to your questions will be clear if you look at the section titled Standard Error of the Mean and Sample Size, and the next section, in this post. I work through the SEM calculations.
I don’t know why your stats book would say that two SEMs roughly equal the standard deviation. That’s an overly simplistic “rule,” and I disagree with it. You can look at the calculations and know the precise relationship. The trick to understanding the relationship between the standard deviation and SEM is that SEM has the SD in the numerator and the square root of the sample size in the denominator. Theoretically, SD = SEM when you have a sample size of one. Of course, you can’t calculate the SD with only one observation. As the sample size increases, SEM drops relative to the SD.
When you have a sample size of 4, SD is exactly twice the SEM. That’s because the denominator is the square root of 4 = 2. Literally dividing the SD in half! But, n=4 is tiny! Even if you have only 20 observations, SEM will be less than 1/4 of the standard deviation. Look at the examples I work through for more clarification. But, no, I disagree with the idea that SEM equals roughly half the SD. I don’t know why your book says that!
In terms of using SD vs. SEM for a margin of error, the SEM is the correct one to use. Or use the CI. Imagine you have two samples with a mean of 10. However, one mean is based on 10 observations while the other is based on 100. Clearly, the precision of those estimates will be dramatically different! The SD does not capture that difference. However, SEM (or CI) appropriately factors in the difference in sample size. Using the SD is not appropriate for that purpose.
If you’re referring to the red dotted line in my graph, that represents the true population mean. You can’t shift that off the correct mean. Normally you don’t know the true population mean. However, as the graph illustrates, when you have a smaller sample size, there’s a larger probability that your sample mean will be further away from the true population mean. In other words, the SEM isn’t wider to accommodate the true population mean. That is a fixed value that doesn’t change (but it is unknown). The wider SEM accommodates the larger sampling error associated with smaller samples. Again, your sample mean is likely to be further away from the true population mean with smaller samples. That’s what the sampling distributions in those graphs represent.
I hope that helps clarify!
Great explanation. Instead of reading two books, I read your post and everything is now crystal clear. Thank you.
This was the only thing that helped me to understand. Thank you!
Sir,
Thank you very much for your kind, prompt reply. DrPKS
Wow, I am a student studying statistics and this really helped a lot. However, I am still confused: why in the SEM formula do we take the square root of n? The evidence given was ” ”, which only describes the nature of the formula, assuming that dividing by the square root of n is true. Is there any other mathematical proof for this formula?
Excellent article btw
Hi Marcus,
I’ll have to see if I can track down the specific derivation of that formula.
Hi Jim, Your approach to imparting knowledge to its seekers is really wonderful! Heartfelt congratulations. The graphs, anecdotes, frequent mention of related topics, and stress on concepts instead of formulas are very good. I also suggest doing away with Greek. It really looks foreign. Why don’t you statisticians decide on that?
Your words, “I view standard errors as the doorway from descriptive statistics to inferential statistics,” ring in my ears. Such words are very welcome.
Yours Dr P K Sukumaran
Hello and thanks so much for your kind words! I’m so glad I can be a part of your statistical journey!
Hi Jim, I started reading your articles a week back and have become a fan. All of them are wonderful, and I just enjoy reading them. Thanks so much once again for your efforts in simplifying stats. Expecting more and more great topics!!
I had a question on the standard error of the mean formula. You mention it as “The numerator (σ) is the sample standard deviation, which represents the variability present in the data.” But doesn’t the symbol σ represent the population standard deviation? As it’s a Greek letter, and we use Greek letters to represent population parameters, or is my understanding wrong?
Regards
Umer
Hi Umer,
Thanks so much for your kind words! I’m glad that I can be a part of your statistical journey!
Ah, you caught me being a little lazy! You are absolutely correct about using Greek letters for population parameters. I had an image of the equation using sigma, but you’re right that it should be an “s” for the sample standard deviation. I’ll make a new image of the formula shortly and replace it! You have a great eye for detail!
When should we use the SEM? Should we always report it?
Hi Khalil, unless you have a specific need for SEM, you often don’t need to report it. If you report the p-value and/or confidence interval, they contain the SEM information in them. Consequently, you don’t absolutely need to report the SEM.
Great post Jim
Thanks, Collinz!!
A very clear article on the standard error of the mean.
Thank you, Glenn!
Is the sample mean − μ a matter of precision or accuracy?
Hi,
SEM relates to precision. While these two words might seem synonymous, they have distinct statistical meanings. Take a look at the last graph with the two sampling distributions.
Precision relates to how close the actual values come to the target value. In this case, the target value is 100 because that is the population mean. The tighter distribution indicates that more values will fall closer to the population mean. Hence, you’ll obtain more precise estimates with the narrower distribution. The width of the distribution assesses precision in this context.
Accuracy relates to whether the “aim” is on the correct value. Both of the distributions center on 100 and, hence, both are completely accurate. However, if one had a peak that was shifted to the right or left of 100, it would be inaccurate. You can have a mix of accurate/inaccurate and precise/imprecise properties.
In statistics, accuracy relates to bias, or the tendency to be systematically too high or too low. Precision relates to the spread of the values.
Thank you, Jim. I got more knowledge from your post.
Hi Gemechu,
I’m so glad to hear this blog post was helpful! 🙂