The difference between a standard deviation and a standard error can seem murky. Let’s clear that up in this post!
Standard deviation (SD) and standard error (SE) both measure variability. High values of either statistic indicate more dispersion. However, that’s where the similarities end. The standard deviation is not the same as the standard error.
Here are the key differences between the two:
- Standard deviation: Quantifies the variability of values in a dataset. It assesses how far a data point likely falls from the mean.
- Standard error: Quantifies the variability between samples drawn from the same population. It assesses how far a sample statistic likely falls from a population parameter.
Let’s move on to graphical examples of both statistics so you can understand the differences intuitively. Then you’ll learn how to calculate both the standard deviation and standard error.
Learn more about measures of variability.
Examples of Standard Error vs. Standard Deviation
In the following examples, I use graphs to highlight the differences between standard deviation and standard error. Remember that the SD measures variability within a sample and compares data points to the mean. Conversely, the SE measures variability between samples and compares sample estimates to population parameters.
For these examples, I use statistical software to sample values randomly from a normal distribution with a mean of 100 and standard deviation of 15, which is the distribution of IQ scores.
Standard Deviation
Imagine you draw a random sample of 10 people and measure their IQs. You can plot their scores on an individual values plot. Visually, we can see the spread of the data points around the mean in the graph below. The red diamond is the sample mean.
The standard deviation mathematically measures the variability. More specifically, it assesses the distances between each data point and the sample mean.
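As a quick sketch of this example, the snippet below draws a hypothetical sample of 10 IQ scores from that N(100, 15) population and computes the sample mean and SD (the seed and sample are illustrative, not the ones behind the graph):

```python
import numpy as np

rng = np.random.default_rng(seed=1)  # seed chosen only for reproducibility

# Draw one random sample of 10 IQ scores from N(mean=100, sd=15)
sample = rng.normal(loc=100, scale=15, size=10)

mean = sample.mean()
sd = sample.std(ddof=1)  # ddof=1 gives the sample SD (divides by n - 1)

print(f"sample mean = {mean:.1f}, sample SD = {sd:.1f}")
```

The `ddof=1` argument matters: NumPy's default divides by n, while the sample standard deviation divides by n − 1.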
Learn more about the standard deviation.
Standard Error
Now, imagine we draw ten random samples, and each one has ten observations. Even though the samples are all subsets of a common population, their means are bound to differ due to sampling error.
The graph below displays ten random samples drawn from the same population.
The red diamonds indicate the sample means. As you can see, the means fluctuate up and down between the samples.
The standard error of the mean measures the variability between sample means.
Learn more about the standard error of the mean.
Standard Deviation vs. Standard Error in Distributions
A crucial point is that while both statistics quantify variability in a distribution of values, they apply to different distributions. Let’s drill down on that aspect a bit more.
Suppose you draw a single random sample and graph its distribution of values with the curve below.
Each point on the curve represents a data value. The peak represents the mean, while the width is the sample variability. The standard deviation quantifies the width for a distribution of data values. Wider curves indicate that data points fall further from the mean and correspond to higher standard deviations.
Similarly, the standard error also measures the width of a distribution, but which distribution?
Imagine you draw many random samples from the same population, calculate their means, and graph those means in the distribution below.
Statisticians refer to this type of distribution as a sampling distribution. In this type of distribution, each point on the curve is a sample mean rather than an individual data value. The central peak is a population parameter (e.g., the population mean). When n > 1, sampling distributions are narrower than the distribution of individual values. Learn more about sampling distributions.
The standard error quantifies the width of a sampling distribution. Smaller SEs correspond to narrower curves, indicating that sample means tend to fall relatively close to the population mean. That’s fantastic when you’re using a sample to estimate the properties of a population! Learn more about the differences between sample statistics and population parameters.
In this manner, standard errors evaluate the precision of a sample’s estimate. Smaller SEs represent greater precision.
Confidence intervals and margins of error also evaluate the precision of sample estimates, and they do so by incorporating the standard error in their calculations.
Learn more about confidence intervals and margin of error.
Differences Between Calculating the SD and SE
Let’s quickly cover the differences between finding these two statistics. Read my articles about the standard deviation and the standard error for more in-depth information about both.
The sample standard deviation (s) formula below quantifies the difference between each data point and the sample mean.
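In symbols, for a sample of N values $x_i$ with sample mean $\bar{x}$:

```latex
s = \sqrt{\frac{\sum_{i=1}^{N} (x_i - \bar{x})^2}{N - 1}}
```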
In a nutshell, the formula finds the average squared difference between the data points and the sample mean, and then takes the square root of that. For more information about how this formula works, read about calculating the standard deviation.
Finding the standard error of the mean involves taking the standard deviation above and dividing it by the square root of the sample size, as shown in the formula below.
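Written out, with s being the sample standard deviation and N the sample size:

```latex
SE_{\bar{x}} = \frac{s}{\sqrt{N}}
```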
These formulas lead to the final difference between the standard deviation and the standard error, the sample size’s effect on the two statistics.
The standard deviation does not tend to increase or decrease as the sample size (N) increases. N is in the denominator, but as it increases the numerator also increases, producing no net tendency to change.
However, the standard error tends to decrease as N increases. This decrease occurs because s is in the numerator and tends to stay constant while N increases in the denominator. Hence, the standard error quantifies how larger sample sizes produce more precise estimates!
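A quick sketch makes this concrete. Holding the population SD of IQ scores fixed at 15, the SE of the mean shrinks as N grows:

```python
import numpy as np

sigma = 15  # population SD of IQ scores

# The SD of each sample hovers around sigma regardless of N,
# while the SE of the mean shrinks as N grows.
for n in (10, 100, 1000):
    se = sigma / np.sqrt(n)
    print(f"N = {n:4d}: SE of the mean = {se:.2f}")
```

Quadrupling the sample size only halves the SE, because N sits under a square root.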
Summary of the Differences
Finally, the table provides a quick overview of the differences between the standard deviation and standard error.
| | Standard Deviation | Standard Error |
|---|---|---|
| Measures variability | Within a sample | Between samples |
| Defines width of a | Distribution of individual values | Sampling distribution |
| Assesses distances between | Data values and sample mean | Sample statistics and population parameter (i.e., accuracy) |
| As sample size increases, there is | No tendency to change | A tendency to decrease |
Hi Jim, thanks for this blog.
My question is: when I am creating a robust method, for example in a survey where I have the estimated proportion of persons and the sample size, my professor commented something like this: "the sampling distribution of the sample proportion won't give the true standard deviation, and therefore the SD should be maximized," and he gave us SD(max) = 1/(2√n).
Now time has passed and I still haven't figured out when to use this SD and when to use the formula above. It would be very helpful if you have any comments on this.
Thank you in advance,
Hi Ana,
Your professor is correct that there is a formula for a maximum standard deviation of the sampling distribution for the proportion, which we call the standard error (SE) of the proportion. This maximum occurs when the proportion equals 0.5. As the proportion moves from 0.5 towards either 0 or 1, its standard error decreases.
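As a rough sketch (with a hypothetical sample size of n = 400), the SE of a proportion is √(p(1 − p)/n), which peaks at p = 0.5, where it equals the 1/(2√n) maximum your professor gave:

```python
import numpy as np

n = 400  # hypothetical survey sample size

# SE of a proportion: sqrt(p * (1 - p) / n), maximized at p = 0.5
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    se = np.sqrt(p * (1 - p) / n)
    print(f"p = {p}: SE = {se:.4f}")

se_max = 0.5 / np.sqrt(n)  # the professor's 1 / (2 * sqrt(n)) maximum
print(f"maximum SE = {se_max:.4f}")
```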
You use the SE of the proportion to calculate the margin of error (MOE), which is what I assume you’re doing.
Many surveys will report only the maximum MOE for all survey items, which corresponds to a proportion of 0.5. However, this is a conservative approach: when the proportion for a specific item doesn't equal 0.5, the maximum MOE is wider than necessary.
You can report only the maximum MOE if you want. However, you can also report the MOE for specific items as needed using their individual proportions. You’ll obtain a narrower MOE (which is good) when the proportion does not equal 0.5. However, if you want to report only one MOE for your entire survey, use the maximum value.
For more information, please read my post about the Margin of Error. In it, I go over the formulas and MOEs for various proportions and how the MOE changes. Note that I use a different form of the equation than the one in your comment, but it produces the same MOEs.
Hi Jim. Is there any relationship between confidence intervals and the SE?
Hi Destiny,
Yes, confidence intervals are built using standard errors! For more information, read my post about Confidence Intervals where I show how SEs are incorporated.
Jim,
I’m retired and trying to keep my brain going. I’m currently analyzing home sales in my area within communities that compete with my development. I’m using Excel. In the Multiple Linear Regression formula, do you ALWAYS add the Standard Error? I thought I read somewhere that Excel factored the Standard Error into its calculations, so it wasn’t necessary for me to do it. At one point when I did add in the Standard Error, it seemed to create a much higher value for my home than the market would bear. Any thoughts from you would be most appreciated. Thank you. Tim
Hi Tim, there are various standard errors in regression. For example, there are SEs for the regression coefficients and the constant. However, I’m guessing that you’re referring to the standard error of the regression (SER), which is essentially the standard deviation of the residuals. You can take 2× the SER and add and subtract that from a predicted value to obtain a range that approximates a 95% prediction interval. Search for “standard error of the regression” using the search box near the top-right margin of my website to read more about it. I’m on my phone and can’t easily include the links for you.
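To illustrate that 2×SER rule of thumb with entirely made-up home-sale data (these prices and square footages are hypothetical, not Tim's figures), a simple-regression sketch looks like this:

```python
import numpy as np

rng = np.random.default_rng(seed=3)  # illustrative seed

# Hypothetical home sales: price vs. square footage
sqft = rng.uniform(1000, 3000, size=50)
price = 50_000 + 150 * sqft + rng.normal(0, 20_000, size=50)

# Fit a simple linear regression (slope and intercept)
slope, intercept = np.polyfit(sqft, price, deg=1)
residuals = price - (intercept + slope * sqft)

# Standard error of the regression: residual SD with n - 2 degrees of freedom
ser = np.sqrt((residuals ** 2).sum() / (len(sqft) - 2))

# Approximate 95% prediction range for a 2,000 sq ft home
pred = intercept + slope * 2000
print(f"predicted price = {pred:,.0f} +/- {2 * ser:,.0f}")
```

Note this ±2×SER range is only an approximation; exact prediction intervals also account for uncertainty in the fitted coefficients.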
In scientific reports that compare means of groups, I believe it is most appropriate to state means +- standard errors, but I often see means +- standard deviations. Which is most correct?
Hi Harry,
When you’re comparing means between groups, you actually aren’t interested in either the standard deviations or the standard errors of the means! Yes, I know they’re often reported in those cases. You should take those as just potentially interesting information about the sample. However, neither is directly helpful for comparing group means.
For comparing group means, you’re most interested in the confidence interval of the mean difference. If that CI excludes zero (i.e., no difference), then your results are statistically significant. To calculate that CI, you (or your statistical software) first need to calculate the standard error of the mean difference. Consequently, the SE of the mean difference is the most germane statistic after the CI of the difference. Learn more about the confidence interval for the mean difference.
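For two independent groups, one common (unpooled) form of that standard error, with group SDs $s_1, s_2$ and sample sizes $n_1, n_2$, is:

```latex
SE_{\bar{x}_1 - \bar{x}_2} = \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}
```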
I think the reason why reports also include the standard deviation is that it’s one of the useful pieces of information on its own, after the mean difference and its CI, and the group means themselves.