
Difference Between Standard Deviation and Standard Error

By Jim Frost

The difference between a standard deviation and a standard error can seem murky. Let’s clear that up in this post!

Standard deviation (SD) and standard error (SE) both measure variability. High values of either statistic indicate more dispersion. However, that’s where the similarities end. The standard deviation is not the same as the standard error.

Here are the key differences between the two:

  • Standard deviation: Quantifies the variability of values in a dataset. It assesses how far a data point likely falls from the mean.
  • Standard error: Quantifies the variability between samples drawn from the same population. It assesses how far a sample statistic likely falls from a population parameter.

Let’s move on to graphical examples of both statistics so you can understand the differences intuitively. Then you’ll learn how to calculate both the standard deviation and standard error.

Learn more about measures of variability.

Examples of Standard Error vs. Standard Deviation

In the following examples, I use graphs to highlight the differences between standard deviation and standard error. Remember that the SD measures variability within a sample and compares data points to the mean. Conversely, the SE measures variability between samples and compares sample estimates to population parameters.

For these examples, I use statistical software to sample values randomly from a normal distribution with a mean of 100 and standard deviation of 15, which is the distribution of IQ scores.
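For instance, a minimal sketch of that setup in Python with NumPy (one possible tool; the choice of software and the random seed are arbitrary) looks like this:

```python
import numpy as np

# Draw 10 simulated IQ scores from a normal distribution with
# mean 100 and standard deviation 15. The seed is arbitrary and
# only makes the example reproducible.
rng = np.random.default_rng(42)
sample = rng.normal(loc=100, scale=15, size=10)
print(np.round(sample, 1))
```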

Standard Deviation

Imagine you draw a random sample of 10 people and measure their IQs. You can plot their scores on an individual values plot. Visually, we can see the spread of the data points around the mean in the graph below. The red diamond is the sample mean.

Graph that illustrates the standard deviation.

The standard deviation mathematically measures the variability. More specifically, it assesses the distances between each data point and the sample mean.

Learn more about the standard deviation.

Standard Error

Now, imagine we draw ten random samples, and each one has ten observations. Even though the samples are all subsets of a common population, their means are bound to differ due to sampling error.

The graph below displays ten random samples drawn from the same population.

Graph that illustrates the standard error.

The red diamonds indicate the sample means. As you can see, the means fluctuate up and down between the samples.

The standard error of the mean measures the variability between sample means.

Learn more about the standard error of the mean.
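A rough sketch of that between-sample variability, under the same assumed N(100, 15) setup rather than the exact code behind the graphs, draws the ten samples, computes their means, and measures how much those means vary:

```python
import numpy as np

rng = np.random.default_rng(42)

# Ten random samples, each with ten observations, from the same population.
samples = rng.normal(loc=100, scale=15, size=(10, 10))
sample_means = samples.mean(axis=1)

print(np.round(sample_means, 1))   # the means fluctuate from sample to sample
print(sample_means.std(ddof=1))    # spread of the means: an empirical standard error
```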

Standard Deviation vs. Standard Error in Distributions

A crucial point is that while both statistics quantify variability in a distribution of values, they apply to different distributions. Let’s drill down on that aspect a bit more.

Suppose you draw a single random sample and graph its distribution of values with the curve below.

Graph that displays the distribution of IQ scores.

Each point on the curve represents a data value. The peak represents the mean, while the width reflects the sample’s variability. The standard deviation quantifies the width for a distribution of data values. Wider curves indicate that data points fall further from the mean and correspond to higher standard deviations.

Similarly, the standard error also measures the width of a distribution, but which distribution?

Imagine you draw many random samples from the same population, calculate their means, and graph those means in the distribution below.

Graph that displays the sampling distribution of the mean for IQ scores.

Statisticians refer to this type of distribution as a sampling distribution. In this type of distribution, each point on the curve is a sample mean rather than an individual data value. The central peak is a population parameter (e.g., the population mean). When n > 1, sampling distributions are narrower than the distribution of individual values. Learn more about sampling distributions.

The standard error quantifies the width of a sampling distribution. Smaller SEs correspond to narrower curves, indicating that sample means tend to fall relatively close to the population mean. That’s fantastic when you’re using a sample to estimate the properties of a population! Learn more about the differences between sample statistics and population parameters.
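To put a number on that narrowing, a sketch that again assumes the N(100, 15) IQ population and samples of 10 can compare the spread of many simulated sample means against the theoretical value:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10

# Means of 100,000 simulated samples of size 10 from N(100, 15).
many_means = rng.normal(loc=100, scale=15, size=(100_000, n)).mean(axis=1)

print(many_means.std())      # ~4.7: width of the sampling distribution of the mean
print(15 / np.sqrt(n))       # theoretical standard error, 15 / sqrt(10) ≈ 4.74
```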

In this manner, standard errors evaluate the precision of a sample’s estimate. Smaller SEs represent greater precision.

Confidence intervals and margins of error also evaluate the precision of sample estimates, and they do so by incorporating the standard error in their calculations.

Learn more about confidence intervals and margin of error.
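As a rough sketch of that connection, assuming the usual t-based interval rather than any particular software’s output, a 95% confidence interval for a mean is the sample mean plus or minus a t multiplier times the standard error:

```python
import numpy as np
from scipy import stats

# Made-up IQ scores for illustration.
sample = np.array([92, 81, 107, 118, 99, 103, 95, 121, 88, 110], dtype=float)

n = len(sample)
se = sample.std(ddof=1) / np.sqrt(n)        # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)       # two-sided 95% multiplier

ci_low = sample.mean() - t_crit * se
ci_high = sample.mean() + t_crit * se
print(round(ci_low, 1), round(ci_high, 1))  # 95% confidence interval for the mean
```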

Differences Between Calculating the SD and SE

Let’s quickly cover the differences between finding these two statistics. Read my articles about the standard deviation and the standard error for more in-depth information about both.

The sample standard deviation (s) formula below quantifies the difference between each data point and the sample mean.
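Written out, with N denoting the sample size to match the discussion below, the formula is

$$ s = \sqrt{\frac{\sum_{i=1}^{N} (x_i - \bar{x})^2}{N - 1}} $$

where each $x_i$ is a data value and $\bar{x}$ is the sample mean.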

In a nutshell, the formula finds the average squared difference between the data points and the sample mean, and then takes the square root of that. For more information about how this formula works, read about calculating the standard deviation.
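In code, that recipe takes only a few lines. This sketch, using made-up IQ scores, checks the manual calculation against NumPy’s built-in sample standard deviation (ddof=1 supplies the N - 1 denominator):

```python
import numpy as np

# Made-up IQ scores for illustration.
sample = np.array([92, 81, 107, 118, 99, 103, 95, 121, 88, 110], dtype=float)

squared_diffs = (sample - sample.mean()) ** 2             # squared distance from the sample mean
s_manual = np.sqrt(squared_diffs.sum() / (len(sample) - 1))

print(s_manual, np.std(sample, ddof=1))                   # the two values match
```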

Finding the standard error of the mean involves taking the standard deviation above and dividing it by the square root of the sample size, as shown in the formula below.

$$ SE_{\bar{x}} = \frac{s}{\sqrt{N}} $$

These formulas lead to the final difference between the standard deviation and the standard error, the sample size’s effect on the two statistics.

The standard deviation does not tend to increase or decrease as the sample size (N) increases. N is in the denominator, but as it increases the numerator also increases, producing no net tendency to change.

However, the standard error tends to decrease as N increases. This decrease occurs because s is in the numerator and tends to stay constant while N increases in the denominator. Hence, the standard error quantifies how larger sample sizes produce more precise estimates!
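A quick way to see both tendencies is to compute s and the SE on progressively larger samples from the same assumed N(100, 15) population:

```python
import numpy as np

rng = np.random.default_rng(42)

for n in (10, 100, 1_000, 10_000):
    sample = rng.normal(loc=100, scale=15, size=n)
    s = sample.std(ddof=1)        # hovers near 15 regardless of n
    se = s / np.sqrt(n)           # keeps shrinking as n grows
    print(f"N = {n:>6}  s = {s:6.2f}  SE = {se:6.3f}")
```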

Summary of the Differences

Finally, the table provides a quick overview of the differences between the standard deviation and standard error.

                              Standard Deviation                   Standard Error
Measures variability          Within a sample                      Between samples
Defines the width of a        Distribution of individual values    Sampling distribution
Assesses distances between    Data values and the sample mean      Sample statistics and a population parameter (i.e., accuracy)
As the sample size increases  No tendency to change                A tendency to decrease

 



Comments

  1. Destiny Timothy says

    September 3, 2022 at 6:45 am

    Hi Jim. Is there any relationship between confidence intervals and the SE?

    • Jim Frost says

      September 3, 2022 at 3:36 pm

      Hi Destiny,

      Yes, confidence intervals are built using standard errors! For more information, read my post about Confidence Intervals where I show how SEs are incorporated.

  2. Tim Rieckhoff says

    August 3, 2022 at 3:48 pm

    Jim,
I’m retired and trying to keep my brain going. I’m currently analyzing home sales in my area within communities that compete with my development. I’m using Excel. In the Multiple Linear Regression formula, do you ALWAYS add the Standard Error? I thought I read somewhere that Excel factored the Standard Error into its calculations, so it wasn’t necessary for me to do it. At one point, when I did add in the Standard Error, it seemed to create a much higher value for my home than the market would bear. Any thoughts from you would be most appreciated. Thank you. Tim

    • Jim Frost says

      August 4, 2022 at 12:32 am

Hi Tim, there are various standard errors in regression. For example, there are SEs for the regression coefficients and the constant. However, I’m guessing that you’re referring to the standard error of the regression (SER), which is essentially the standard deviation of the residuals. You can take 2X the SER and add it to and subtract it from a predicted value to obtain a range that approximates a 95% prediction interval. Use the search box near the top-right margin of my website to read about prediction intervals. Also search for standard error of the regression on my website to read more about it. I’m on my phone and can’t easily include the links for you.

  3. HARRY FRED DOWNEY says

    May 31, 2022 at 8:28 am

    In scientific reports that compare means of groups, I believe it is most appropriate to state means +- standard errors, but I often see means +- standard deviations. Which is most correct?

    • Jim Frost says

      June 2, 2022 at 11:11 pm

      Hi Harry,

When you’re comparing means between groups, you actually aren’t interested in either the standard deviations or the standard errors of the means! Yeah, I know they’re often reported in those cases. You should take those as just potentially interesting information about the samples. However, neither is directly helpful for comparing group means.

For comparing group means, you’re most interested in the confidence interval of the mean difference. If that CI excludes zero (i.e., no difference), then your results are statistically significant. To calculate that CI of the mean difference, you (or your statistical software) first need to calculate the standard error of the mean difference. Consequently, the SE of the mean difference is the most germane statistic after the CI of the difference itself. Learn more about the confidence interval for the mean difference.

I think the reason reports also include the standard deviation is that it’s a useful piece of information on its own, after the mean difference and its CI, and the group means themselves.


