What is a Sampling Distribution?
A sampling distribution of a statistic is a type of probability distribution created by drawing many random samples of a given size from the same population. These distributions help you understand how a sample statistic varies from sample to sample.
Sampling distributions are essential for inferential statistics because they allow you to understand a specific sample statistic in the broader context of other possible values. Crucially, they let you calculate probabilities associated with your sample.
Sampling distributions describe the assortment of values for all manner of sample statistics. While the sampling distribution of the mean is the most common type, they can characterize other statistics, such as the median, standard deviation, range, correlation, and test statistics in hypothesis tests. I focus on the mean in this post.
For this post, I’ll show you sampling distributions for both normal and nonnormal data and demonstrate how they change with the sample size. I conclude with a brief explanation of how hypothesis tests use them.
Let’s start with a simple example and move on from there!
Sampling Distribution of the Mean Example
For starters, I want you to fully understand the concept of a sampling distribution. So, here’s a simple example!
Imagine you draw a random sample of 10 apples. Then you calculate the mean of that sample as 103 grams. That’s one sample mean from one sample. However, you realize that if you were to draw another sample, you’d obtain a different mean. A third sample would produce yet another mean. And so on.
With this in mind, suppose you decide to collect 50 random samples of the same apple population. Each sample contains 10 apples, and you calculate the mean for each sample.
Repeated Apple Samples
At this point, you have 50 sample means for apple weights. You plot these sample means in the histogram below to display your sampling distribution of the mean.
This histogram shows that our initial sample mean of 103 falls near the center of the sampling distribution. Means occur most frequently in this range: 18 of the 50 samples (36%) fall within the middle bar. However, other samples from the same population have higher and lower means. The frequency of means is highest at the center of the sampling distribution and tapers off in both directions. None of our 50 sample means fall outside the range of 85 to 118. Consequently, it is very unusual to obtain sample means outside this range.
Typically, you don’t know the population parameters. Instead, you use samples to estimate them. However, we know the parameters for this simulation because I’ve set the population to follow a normal distribution with a mean (µ) weight of 100 grams and a standard deviation (σ) of 15 grams. Those are the parameters of the apple population from which we’ve been sampling.
Notice how the histogram centers on the population mean of 100, and sample means become rarer further away. It’s also a reasonably symmetric distribution. Those are features of many sampling distributions. This distribution isn’t particularly smooth because 50 samples is a small number for this purpose, as you’ll see.
Related post: Interpreting Histograms
I used Excel to create this example. I had it randomly draw 50 samples with a sample size of 10 from a population with µ = 100 and σ = 15.
If you want to simulate 50 random samples yourself, use the link at the end of this post to download the Excel file.
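If you'd rather script this than use Excel, here's a minimal Python sketch of the same simulation (numpy and matplotlib are assumed; the seed is arbitrary and only makes the run repeatable):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)  # arbitrary seed for repeatability

mu, sigma = 100, 15    # apple population parameters from the post
n_samples, n = 50, 10  # 50 samples of 10 apples each

# Draw all 50 samples at once; each row is one sample of 10 apples.
sample_means = rng.normal(mu, sigma, size=(n_samples, n)).mean(axis=1)

plt.hist(sample_means, bins=10, edgecolor="black")
plt.xlabel("Sample mean weight (grams)")
plt.ylabel("Frequency")
plt.title("Sampling distribution of the mean (50 samples, n = 10)")
plt.show()
```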
Let’s learn more about the central tendency and the variability in sampling distributions.
Related posts: Measures of Central Tendency and Measures of Variability
Sampling Distributions of the Mean for Normal Distributions
As you saw in the apple example, sampling distributions have their own overall shape, central tendency, and variability. Let's start exploring this for cases where the parent distribution is normal.
When the parent distribution is normally distributed, its sampling distributions will also be normal (symmetrical) and have specific properties for the central tendency and variability.
|  | Mean | Standard Deviation |
| --- | --- | --- |
| Parent Distribution | µ | σ |
| Sampling Distribution | µ | σ / √n |
Where,
- µ and σ are the population parameters for the mean and standard deviation, respectively.
- n is the sample size.
Notice how the mean of the parent population is also the central value for the sampling distribution.
However, the variabilities differ. The variability of the parent distribution is a fixed value (σ), while the variability of a sampling distribution depends on both σ and the sample size (n). From the formula, the variability of a parent distribution differs from that of its sampling distributions whenever n > 1. Additionally, each sampling distribution has a unique spread that depends on its sample size.
Statisticians refer to the standard deviation for a sampling distribution as the standard error. Because we’re assessing the mean, the variability of that distribution is the standard error of the mean.
In summary, sampling distributions center on the population parameter while the standard error defines the width.
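Written out as a formula (this simply restates the table above), the sampling distribution of the mean for a normal parent population is:

```latex
\bar{x} \sim N\!\left(\mu,\ \frac{\sigma^{2}}{n}\right),
\qquad \text{SE}(\bar{x}) = \frac{\sigma}{\sqrt{n}}
```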
Related posts: The Normal Distribution and Parameters vs. Statistics
Returning to Our Apple Simulation
Let’s return to our apple example. We know what statistical theory and its equations say. Now let’s use random sampling to see how reality compares. We’ll also get to see nice graphs!
Recall that we specified that the apple population follows a normal distribution with a mean of 100 and a standard deviation of 15. We used a sample size of 10, which appears in the standard error of the mean. Therefore, we’d expect the sampling distribution to center on µ = 100 and to have a standard error of 15 / √10 = 4.743.
We’ll rerun our previous apple sampling simulation but on a massive scale. This time I’ll draw 500,000 samples instead of just 50. To perform this simulation, I’ll use the Statistics101 simulation software. I include links for this giftware and my scripts at the end of this post. Try it yourself!
This simulation follows the same process as the Excel version. It draws random samples from a population with a mean of 100 and a standard deviation of 15. Each sample has a size of 10. It calculates the sample means and plots them using a histogram. This setup is basically our previous simulation on steroids!
I also have the simulation software calculate the mean and standard error of the sample means, which should be close to the theoretical values of 100 and 4.743. Please remember that these statistics are the mean and standard error of the sample means, not the individual observations.
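If you don't have Statistics101 handy, here's a rough Python stand-in for the same experiment (a sketch, not the actual script; numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(seed=2)  # arbitrary seed

mu, sigma, n = 100, 15, 10
n_samples = 500_000

# 500,000 samples of size 10; average across each row (one sample per row).
sample_means = rng.normal(mu, sigma, size=(n_samples, n)).mean(axis=1)

print(f"Mean of the sample means:   {sample_means.mean():.3f}")       # theory: 100
print(f"Standard error of the mean: {sample_means.std(ddof=1):.3f}")  # theory: 4.743
```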
Simulation Results
Because this simulation draws so many more samples, it produces a smooth distribution curve that reveals the underlying function. This graph displays the distribution of sample means rather than individual values. It’s a histogram like the one in Excel, but with many more samples and very tiny bars!
This sampling distribution clearly follows a normal distribution. Additionally, the calculated mean of the samples and the standard error of the mean almost precisely match the theoretical values. Consequently, statistical equations can estimate the sampling distribution without drawing all those samples!
As you’ll see in a later section, this fact is crucial for hypothesis testing.
Standard Errors and Sample Sizes in a Sampling Distribution
As I mentioned above, the standard error of a sampling distribution depends on the sample size. Here’s the formula for the standard error of the mean: σ / √n
Notice that the square root of the sample size appears in the denominator. As the sample size increases, the denominator grows, which causes the standard error to decrease. Consequently, sampling distributions based on larger sample sizes have smaller standard errors and cluster more tightly around the central value.
For instance, quadrupling the sample size halves the standard error because √4 = 2. We’ll quadruple the sample size in the following simulation to see what happens!
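A quick arithmetic check of that claim:

```python
import math

sigma = 15
print(sigma / math.sqrt(10))  # 4.743... for n = 10
print(sigma / math.sqrt(40))  # 2.372... for n = 40, exactly half
```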
Related post: Standard Error of the Mean
Standard Error Comparison
Let’s return to the apple example to see this in action. This time we’ll collect 500,000 samples with 10 observations in each and another 500,000 samples where n = 40. Theoretically, this increase in sample size should reduce the standard error of the mean by half from 4.743 to 2.372.
Let’s run a simulation!
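In Python, the comparison might look like this (again a sketch standing in for the Statistics101 script):

```python
import numpy as np

rng = np.random.default_rng(seed=3)  # arbitrary seed
mu, sigma, n_samples = 100, 15, 500_000

for n in (10, 40):
    means = rng.normal(mu, sigma, size=(n_samples, n)).mean(axis=1)
    print(f"n = {n:2d}: mean of means = {means.mean():.3f}, "
          f"standard error = {means.std(ddof=1):.3f} "
          f"(theory: {sigma / np.sqrt(n):.3f})")
```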
As we’d expect, both sampling distributions center on the population mean of 100. However, the red curve for n = 40 is noticeably tighter than the grey curve for n = 10. Again, the actual means and standard errors almost exactly match the theoretical values.
What are the practical implications of this difference? The tighter sampling distribution indicates that sample means cluster closer to the actual population mean. Notice how the wider n = 10 spread has more sample means farther away from the population mean (100).
Hence, as you increase the sample size, the difference between your sample mean and the population mean tends to decrease. In other words, larger sample sizes produce more precise estimates!
I’m sure you already knew this old statistical adage, but now you see why that’s the case!
Related post: Precision vs. Accuracy
Sampling Distributions for Nonnormal Distributions
Sampling distributions for nonnormal data tend to follow the parent’s skewed distribution for very small sample sizes. However, as the sample size increases, the sampling distribution converges on a symmetric normal distribution with a mean of µ and a standard error of σ/√n, just as it does for normal distributions!
This convergence has a name—the central limit theorem. For more details, please read my post, Central Limit Theorem Explained. For this post, I’ll simply show an example of this convergence in action using another simulation with different sample sizes.
This simulation uses a body fat distribution that I measured during a study. These values follow a moderately skewed lognormal distribution. I also use this dataset in my post about identifying the distribution of your data.
I had the simulation software draw random samples from this skewed distribution 500,000 times for sample sizes of 5 and 20.
In the graph above, the gray color displays the skewed distribution of values in the parent population, which also corresponds to a sample size of 1. The red curve corresponds to a sample size of 5, while the blue curve relates to a sample size of 20. The red curve is still skewed, but the blue plot is not visibly skewed. You can see convergence on the normal distribution as sample size progressively increases from 1 to 20.
As sample sizes increase, the sampling distributions more closely approximate the normal distribution and become more tightly clustered around the population mean even for skewed, nonnormal data!
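If you want to reproduce this convergence yourself, here’s a hypothetical Python version of the experiment. The post doesn’t give the body fat distribution’s parameters beyond “moderately skewed lognormal,” so the values below are illustrative stand-ins:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=4)  # arbitrary seed

# Illustrative lognormal parameters -- stand-ins, not the study's values.
log_mu, log_sigma = 3.0, 0.35
n_samples = 500_000

for n, color in [(1, "gray"), (5, "red"), (20, "blue")]:
    # Each row is one sample of size n; its mean is one point in the
    # sampling distribution. n = 1 reproduces the parent distribution.
    means = rng.lognormal(log_mu, log_sigma, size=(n_samples, n)).mean(axis=1)
    plt.hist(means, bins=200, density=True, histtype="step",
             color=color, label=f"n = {n}")

plt.xlabel("Sample mean body fat (%)")
plt.ylabel("Density")
plt.legend()
plt.show()
```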
Related post: Skewed Distributions
Sampling Distributions in Hypothesis Tests
All hypothesis tests calculate a test statistic. Their calculations take your sample data and boil them down to a single number that indicates how your data compare to the null hypothesis. These are the z-scores, t-values, F-values, and chi-square values you probably know. These test statistics have known sampling distributions when the null hypothesis is true.
Learn more about Test Statistics and Populations, Parameters, and Samples in Inferential Statistics
As you saw earlier, it’s possible to accurately produce sampling distributions using equations rather than drawing many samples. Hypothesis tests take your sample data and do just that for the test statistic.
Then, the analysis takes your sample’s test statistic and places it within its sampling distribution. Because these distributions are a type of probability distribution, hypothesis tests can calculate probabilities related to the likelihood of obtaining your sample statistic if the null hypothesis is true. When that probability is sufficiently low, you can reject the null hypothesis.
For example, imagine performing a t-test and obtaining a t-value of 2. What does that mean?
To find out, place that statistic within the t-distribution, which is the sampling distribution of t-values when the null hypothesis is true. In this manner, you can see how unlikely your sample statistic is if the null is correct for the population, as shown below.
The probability plot indicates that t-values of 2 are somewhat rare when the null is true. Determining statistical significance requires several more tools that go beyond this post. However, you can read my related posts about test statistics and their sampling distributions to learn how they work together.
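For a numeric version of that idea, here’s a small scipy sketch. The degrees of freedom are a hypothetical value for illustration because the post doesn’t state them:

```python
from scipy import stats

t_value, df = 2.0, 15  # df is a hypothetical value for illustration

# Two-tailed probability of a t-value at least this extreme
# when the null hypothesis is true.
p = 2 * stats.t.sf(abs(t_value), df)
print(f"P(|t| >= {t_value}) with {df} df: {p:.3f}")  # roughly 0.064
```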
Simulation Resources
Download the Excel file: SamplingDistributionSimulation.
After opening the Excel file, press SHIFT+F9 to draw a new set of 50 samples. That will cause Excel to redraw the samples, recalculate the sample means, and create a new histogram. Each time you draw new samples, the graph will be somewhat different. You can alter the population’s mean and standard deviation by changing the values in the upper-left corner of the spreadsheet.
Download the Statistics101 giftware and my scripts for a normal distribution, changing sample sizes, and nonnormal distributions.
Abhinandan says
You are simply great at unwinding the facts with figures so that one can understand in their own way and align with your final inference.
Achieng Ineen Aram says
Thank you so much, you made me understand the sampling distribution. Great!
Judson says
Nice write-up, you’ve often helped me understand statistical concepts more thoroughly. Under “Standard Errors and Sample Sizes in a Sampling Distribution” you write “Here’s the formula for the standard error of the mean: µ / √n”. Did you mean sigma over square root of n?
Jim Frost says
Yes! Thanks for catching that! I had it correct earlier in the post. Not sure what happened, but I’ve corrected it.
Padraig says
Excellent post, Jim. One question: why didn’t you use x-bar for the sample mean?
Jim Frost says
Hi Padraig,
That’s a great question. The reason is that the properties of the true sampling distribution depend on the population parameters. If you know the population parameters, you can directly calculate the characteristics of the sampling distribution. However, it’s true that in practice you don’t know the population parameters. Instead, you’ll use sample estimates (x-bar, s, etc.) to calculate an estimated sampling distribution.
In this post, I used simulations where we know the parameters because I wanted to show that direct connection. Hence, I use the parameters. That’s what is truly happening even though we don’t have all the information. Unfortunately, we’re stuck with sample estimates!