Statistics By Jim

Making statistics intuitive

Difference Between Standard Deviation and Standard Error

By Jim Frost

The difference between a standard deviation and a standard error can seem murky. Let’s clear that up in this post!

Standard deviation (SD) and standard error (SE) both measure variability. High values of either statistic indicate more dispersion. However, that’s where the similarities end. The standard deviation is not the same as the standard error.

Here are the key differences between the two:

  • Standard deviation: Quantifies the variability of values in a dataset. It assesses how far a data point likely falls from the mean.
  • Standard error: Quantifies the variability between samples drawn from the same population. It assesses how far a sample statistic likely falls from a population parameter.

Let’s move on to graphical examples of both statistics so you can understand the differences intuitively. Then you’ll learn how to calculate both the standard deviation and standard error.

Learn more about measures of variability.

Examples of Standard Error vs. Standard Deviation

In the following examples, I use graphs to highlight the differences between standard deviation and standard error. Remember that an SD measures the variability within a sample and compares data points to the mean. Conversely, an SE measures the variability between samples and compares sample estimates to population parameters.

For these examples, I use statistical software to sample values randomly from a normal distribution with a mean of 100 and standard deviation of 15, which is the distribution of IQ scores.
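The article doesn't name the software, but as a rough sketch you can reproduce this kind of random draw with Python's standard library (the seed and sample size here are my own illustrative choices):

```python
import random
import statistics

random.seed(42)  # illustrative seed so the draws are reproducible

# Draw one random sample of 10 simulated IQ scores from a
# normal population with mean 100 and standard deviation 15.
sample = [random.gauss(100, 15) for _ in range(10)]

sample_mean = statistics.mean(sample)
sample_sd = statistics.stdev(sample)  # sample SD (n - 1 denominator)

print(f"Sample mean: {sample_mean:.1f}")
print(f"Sample SD:   {sample_sd:.1f}")
```

Rerunning with a different seed gives a different sample, which is exactly the sample-to-sample variation the later sections discuss.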

Standard Deviation

Imagine you draw a random sample of 10 people and measure their IQs. You can plot their scores on an individual values plot. Visually, we can see the spread of the data points around the mean in the graph below. The red diamond is the sample mean.

Graph that illustrates the standard deviation.

The standard deviation mathematically measures the variability. More specifically, it assesses the distances between each data point and the sample mean.

Learn more about the standard deviation.

Standard Error

Now, imagine we draw ten random samples, and each one has ten observations. Even though the samples are all subsets of a common population, their means are bound to differ due to sampling error.

The graph below displays ten random samples drawn from the same population.

Graph that illustrates the standard error.

The red diamonds indicate the sample means. As you can see, the means fluctuate up and down between the samples.

The standard error of the mean measures the variability between sample means.
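The ten-samples experiment is easy to replicate. This sketch (the seed and sizes are my own choices, not from the article) draws ten samples of ten from the same population and shows how their means fluctuate:

```python
import random
import statistics

random.seed(1)  # illustrative seed

# Draw ten independent samples (n = 10 each) from the same
# Normal(100, 15) population and record each sample's mean.
sample_means = [
    statistics.mean(random.gauss(100, 15) for _ in range(10))
    for _ in range(10)
]

# The means fluctuate from sample to sample due to sampling error.
print([round(m, 1) for m in sample_means])

# The standard deviation of these ten means is a (crude) estimate
# of the standard error of the mean.
print(f"SD of the sample means: {statistics.stdev(sample_means):.2f}")
```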

Learn more about the standard error of the mean.

Standard Deviation vs. Standard Error in Distributions

A crucial point is that while both statistics quantify variability in a distribution of values, they apply to different distributions. Let’s drill down on that aspect a bit more.

Suppose you draw a single random sample and graph its distribution of values with the curve below.

Graph that displays the distribution of IQ scores.

Each point on the curve represents a data value. The peak represents the mean, while the width is the sample variability. The standard deviation quantifies the width for a distribution of data values. Wider curves indicate that data points fall further from the mean and correspond to higher standard deviations.

Similarly, the standard error also measures the width of a distribution, but which distribution?

Imagine you draw many samples from the same population, calculate their means, and graph those means in the distribution below.

Graph that displays the sampling distribution of the mean for IQ scores.

Statisticians refer to this type of distribution as a sampling distribution. In this type of distribution, each point on the curve is a sample mean rather than an individual data value. The central peak is a population parameter (e.g., the population mean). When n > 1, sampling distributions are narrower than the distribution of individual values. Learn more about sampling distributions.

The standard error quantifies the width of a sampling distribution. Smaller SEs correspond to narrower curves, indicating that sample means tend to fall relatively close to the population mean. That’s fantastic when you’re using a sample to estimate the properties of a population! Learn more about the differences between sample statistics and population parameters.
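You can check this relationship by simulation. The sketch below (seed and sample counts are illustrative assumptions) approximates the sampling distribution with many sample means and compares its width to the theoretical standard error, sigma / sqrt(n):

```python
import math
import random
import statistics

random.seed(3)  # illustrative seed

n = 10        # observations per sample
draws = 2000  # number of samples used to approximate the sampling distribution

# Approximate the sampling distribution of the mean: draw many samples
# from the Normal(100, 15) population and keep each sample's mean.
means = [
    statistics.mean(random.gauss(100, 15) for _ in range(n))
    for _ in range(draws)
]

# The width of this distribution (the standard error) should be close
# to sigma / sqrt(n) = 15 / sqrt(10).
empirical_se = statistics.stdev(means)
theoretical_se = 15 / math.sqrt(n)

print(f"Empirical SE:   {empirical_se:.2f}")
print(f"Theoretical SE: {theoretical_se:.2f}")  # about 4.74
```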

In this manner, standard errors evaluate the precision of a sample’s estimate. Smaller SEs represent greater precision.

Confidence intervals and margins of error also evaluate the precision of sample estimates, and they do so by incorporating the standard error in their calculations.

Learn more about confidence intervals and margin of error.

Differences Between Calculating the SD and SE

Let’s quickly cover the differences between finding these two statistics. Read my articles about the standard deviation and the standard error for more in-depth information about both.

The sample standard deviation (s) formula below quantifies the difference between each data point and the sample mean.

s = √[ Σ(xᵢ − x̄)² / (n − 1) ]

In a nutshell, the formula finds the average squared difference between the data points and the sample mean, and then takes the square root of that. For more information about how this formula works, read about calculating the standard deviation.
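Those steps translate directly into code. The dataset below is made up for illustration (it is not taken from the article's graphs):

```python
import math

# Hand-compute the sample standard deviation for a small made-up dataset.
data = [96, 104, 126, 88, 102, 110, 78, 92, 115, 99]
n = len(data)

mean = sum(data) / n

# Average squared difference from the mean, using the n - 1 denominator,
# then take the square root.
variance = sum((x - mean) ** 2 for x in data) / (n - 1)
s = math.sqrt(variance)

print(f"s = {s:.2f}")
```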

Finding the standard error of the mean involves taking the standard deviation above and dividing it by the square root of the sample size, as shown in the formula below.

Standard error of the mean formula: SE = s / √n
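In code, that division is a one-liner. The dataset here is an illustrative made-up example (n = 10):

```python
import math
import statistics

# Standard error of the mean: the sample standard deviation
# divided by the square root of the sample size.
data = [96, 104, 126, 88, 102, 110, 78, 92, 115, 99]
s = statistics.stdev(data)
se = s / math.sqrt(len(data))

print(f"s  = {s:.2f}")   # about 13.82
print(f"SE = {se:.2f}")  # about 4.37
```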

These formulas lead to the final difference between the standard deviation and the standard error, the sample size’s effect on the two statistics.

The standard deviation does not tend to increase or decrease as the sample size (N) increases. N appears in the denominator, but as it grows, the sum of squared deviations in the numerator grows along with it, producing no net tendency to change.

However, the standard error tends to decrease as N increases. This decrease occurs because s in the numerator tends to stay constant while √N in the denominator grows. Hence, the standard error quantifies how larger sample sizes produce more precise estimates!
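A quick simulation makes this contrast concrete. In this sketch (seed and sample sizes are my own illustrative choices), the sample SD hovers near the population value of 15 while the SE keeps shrinking:

```python
import math
import random
import statistics

random.seed(7)  # illustrative seed

# For growing sample sizes, compare the sample SD (stable near 15)
# with the standard error s / sqrt(N) (shrinking toward zero).
results = {}
for n in (10, 100, 1000, 10000):
    sample = [random.gauss(100, 15) for _ in range(n)]
    s = statistics.stdev(sample)
    results[n] = (s, s / math.sqrt(n))
    print(f"N = {n:>5}: SD = {results[n][0]:5.2f}, SE = {results[n][1]:.2f}")
```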

Summary of the Differences

Finally, the table provides a quick overview of the differences between the standard deviation and standard error.

                              Standard Deviation                   Standard Error
Measures variability          Within a sample                      Between samples
Defines the width of a        Distribution of individual values    Sampling distribution
Assesses distances between    Data values and the sample mean      Sample statistics and a population parameter (i.e., accuracy)
As sample size increases      No tendency to change                Tends to decrease

 



Comments

  1. HARRY FRED DOWNEY says

    May 31, 2022 at 8:28 am

In scientific reports that compare means of groups, I believe it is most appropriate to state means ± standard errors, but I often see means ± standard deviations. Which is most correct?

    • Jim Frost says

      June 2, 2022 at 11:11 pm

      Hi Harry,

When you’re comparing means between groups, you actually aren’t interested in either the standard deviations or the standard errors of the means! Yes, I know they’re often reported in those cases. Take them as potentially interesting information about the samples. However, neither is directly helpful for comparing group means.

For comparing group means, you’re most interested in the confidence interval of the mean difference. If that CI excludes zero (i.e., no difference), then your results are statistically significant. To calculate that CI, you (or your statistical software) first need to calculate the standard error of the mean difference. Consequently, the SE of the mean difference is the most germane statistic after the CI of the difference. Learn more about the confidence interval for the mean difference.

I think reports also include the standard deviation because it’s a useful piece of information on its own, after the mean difference, its CI, and the group means themselves.



    Copyright © 2022 · Jim Frost · Privacy Policy