The normal distribution is the most important probability distribution in statistics because it describes many natural phenomena. For example, heights, blood pressure, measurement error, and IQ scores all approximately follow the normal distribution. It is also known as the Gaussian distribution and the bell curve.

The normal distribution is a probability function that describes how the values of a variable are distributed. It is a symmetric distribution where most of the observations cluster around the central peak and the probabilities for values further away from the mean taper off equally in both directions. Extreme values in both tails of the distribution are similarly unlikely.

In this blog post, you’ll learn how to use the normal distribution, its parameters, and how to calculate Z-scores to standardize your data and find probabilities.

## Example of Normally Distributed Data: Heights

Height data are normally distributed. The distribution in this example fits real data that I collected from 14-year-old girls during a study.

As you can see, the distribution of heights follows the typical pattern for all normal distributions. Most girls are close to the average (1.512 meters). Small differences between an individual’s height and the mean occur more frequently than substantial deviations from the mean. The standard deviation is 0.0741 m, which indicates the typical distance that individual girls tend to fall from the mean height.

The distribution is symmetric. The number of girls shorter than average equals the number of girls taller than average. In both tails of the distribution, extremely short girls occur as infrequently as extremely tall girls.

## Parameters of the Normal Distribution

As with any probability distribution, the parameters for the normal distribution define its shape and probabilities entirely. The normal distribution has two parameters: the mean and the standard deviation. The normal distribution does not have just one form. Instead, the shape changes based on the parameter values, as shown in the graphs below.

### Mean

The mean is the central tendency of the distribution. It defines the location of the peak for normal distributions. Most values cluster around the mean. On a graph, changing the mean shifts the entire curve left or right on the X-axis.

### Standard deviation

The standard deviation is a measure of variability. It defines the width of the normal distribution. The standard deviation determines how far away from the mean the values tend to fall. It represents the typical distance between the observations and the average.

On a graph, changing the standard deviation either tightens or spreads out the width of the distribution along the X-axis. Larger standard deviations produce distributions that are more spread out.

When you have narrow distributions, the probabilities are higher that values won’t fall far from the mean. As you increase the spread of the distribution, the likelihood that observations will be further away from the mean also increases.
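To see this numerically, here's a quick sketch using Python's standard-library `statistics.NormalDist` (the means and standard deviations here are illustrative, not taken from the height data): the probability of falling within a fixed distance of the mean drops as the standard deviation grows.

```python
from statistics import NormalDist

# Probability that a value lands within +/- 5 units of the mean
# for normal distributions with increasingly large standard deviations.
probs = []
for sigma in (2, 5, 10):
    dist = NormalDist(mu=0, sigma=sigma)
    probs.append(dist.cdf(5) - dist.cdf(-5))

# Narrow distribution (sigma=2): ~0.99; wider (sigma=10): ~0.38.
```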

### Population parameters versus sample estimates

The mean and standard deviation are parameter values that apply to entire populations. For the normal distribution, statisticians signify the parameters by using the Greek symbol μ (mu) for the population mean and σ (sigma) for the population standard deviation.

Unfortunately, population parameters are usually unknown because it’s generally impossible to measure an entire population. However, you can use random samples to calculate estimates of these parameters. Statisticians represent sample estimates of these parameters using x̅ for the sample mean and s for the sample standard deviation.

**Related posts**: Measures of Central Tendency and Measures of Variability

## Common Properties for All Forms of the Normal Distribution

Despite the different shapes, all forms of the normal distribution have the following characteristic properties.

- They’re all symmetric. The normal distribution cannot model skewed distributions.
- The mean, median, and mode are all equal.
- Half of the population is less than the mean and half is greater than the mean.
- The Empirical Rule allows you to determine the proportion of values that fall within certain distances from the mean. More on this below!

While the normal distribution is essential in statistics, it is just one of many probability distributions, and it does not fit all populations. To learn how to determine whether the normal distribution provides the best fit to your sample data, read my posts about How to Identify the Distribution of Your Data and Assessing Normality: Histograms vs. Normal Probability Plots.

## The Empirical Rule for the Normal Distribution

When you have normally distributed data, the standard deviation becomes particularly valuable. You can use it to determine the proportion of the values that fall within a specified number of standard deviations from the mean. For example, in a normal distribution, 68% of the observations fall within +/- 1 standard deviation from the mean. This property is part of the Empirical Rule, which describes the percentage of the data that fall within specific numbers of standard deviations from the mean for bell-shaped curves.

| Mean +/- standard deviations | Percentage of data contained |
| --- | --- |
| 1 | 68% |
| 2 | 95% |
| 3 | 99.7% |

Let’s look at a pizza delivery example. Assume that a pizza restaurant has a mean delivery time of 30 minutes and a standard deviation of 5 minutes. Using the Empirical Rule, we can determine that 68% of the delivery times are between 25 and 35 minutes (30 +/- 5), 95% are between 20 and 40 minutes (30 +/- 2*5), and 99.7% are between 15 and 45 minutes (30 +/- 3*5). The chart below illustrates this property graphically.
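For readers who want to check the pizza numbers themselves, here's a short sketch with Python's standard-library `statistics.NormalDist`:

```python
from statistics import NormalDist

delivery = NormalDist(mu=30, sigma=5)  # mean 30 minutes, SD 5 minutes

# Proportion of deliveries within 1, 2, and 3 standard deviations of the mean.
within = {k: delivery.cdf(30 + k * 5) - delivery.cdf(30 - k * 5) for k in (1, 2, 3)}
# within[1] ~ 0.683, within[2] ~ 0.954, within[3] ~ 0.997
```

The exact values (68.27%, 95.45%, 99.73%) are where the rounded 68/95/99.7 figures come from.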

## Standard Normal Distribution and Standard Scores

As we’ve seen above, the normal distribution has many different shapes depending on the parameter values. However, the standard normal distribution is a special case of the normal distribution where the mean is zero and the standard deviation is 1. This distribution is also known as the Z-distribution.

A value on the standard normal distribution is known as a standard score or a Z-score. A standard score represents the number of standard deviations above or below the mean that a specific observation falls. For example, a standard score of 1.5 indicates that the observation is 1.5 standard deviations above the mean. On the other hand, a negative score represents a value below the average. The mean has a Z-score of 0.

Suppose you weigh an apple and it weighs 110 grams. There’s no way to tell from the weight alone how this apple compares to other apples. However, as you’ll see, after you calculate its Z-score, you know where it falls relative to other apples.

## Standardization: How to Calculate Z-scores

Standard scores are a great way to understand where a specific observation falls relative to the entire distribution. They also allow you to take observations drawn from normally distributed populations that have different means and standard deviations and place them on a standard scale. This standard scale enables you to compare observations that would otherwise be difficult.

This process is called standardization, and it allows you to compare observations and calculate probabilities across different populations. In other words, it permits you to compare apples to oranges. Isn’t statistics great!

To standardize your data, you need to convert the raw measurements into Z-scores.

To calculate the standard score for an observation, take the raw measurement, subtract the mean, and divide by the standard deviation. Mathematically, the formula for that process is the following:

Z = (X − μ) / σ

X represents the raw value of the measurement of interest. μ (mu) and σ (sigma) represent the parameters for the population from which the observation was drawn.

After you standardize your data, you can place them within the standard normal distribution. In this manner, standardization allows you to compare different types of observations based on where each observation falls within its own distribution.
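As a sketch of the standardization step in code (the 1.60 m height is a made-up example value; the mean and standard deviation come from the height data above):

```python
def z_score(x, mu, sigma):
    """Standardize a raw measurement: its distance from the mean in SD units."""
    return (x - mu) / sigma

# A hypothetical 1.60 m girl, using the height distribution's parameters.
z = z_score(1.60, mu=1.512, sigma=0.0741)
# z ~ 1.19: she is about 1.19 standard deviations taller than average.
```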

## Example of Using Standard Scores to Make an Apples to Oranges Comparison

Suppose we literally want to compare apples to oranges. Specifically, let’s compare their weights. Imagine that we have an apple that weighs 110 grams and an orange that weighs 100 grams.

If we compare the raw values, it’s easy to see that the apple weighs more than the orange. However, let’s compare their standard scores. To do this, we’ll need to know the properties of the weight distributions for apples and oranges. Assume that the weights of apples and oranges follow a normal distribution with the following parameter values:

| | Apples | Oranges |
| --- | --- | --- |
| Mean weight (grams) | 100 | 140 |
| Standard deviation (grams) | 15 | 25 |

Now we’ll calculate the Z-scores:

- Apple = (110 − 100) / 15 = 0.667
- Orange = (100 − 140) / 25 = −1.6

The Z-score for the apple (0.667) is positive, which means that our apple weighs more than the average apple. It’s not an extreme value by any means, but it is above average for apples. On the other hand, the orange has a fairly negative Z-score (-1.6). It’s pretty far below the mean weight for oranges. I’ve placed these Z-values in the standard normal distribution below.

While our apple weighs more than our orange, we are comparing a somewhat heavier than average apple to a downright puny orange! Using Z-scores, we’ve learned how each fruit fits within its own distribution and how they compare to each other.
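Those two calculations can be sketched directly:

```python
def z_score(x, mu, sigma):
    # Standardize: distance from the mean in standard deviation units.
    return (x - mu) / sigma

apple_z = z_score(110, mu=100, sigma=15)   # ~0.667: a somewhat heavy apple
orange_z = z_score(100, mu=140, sigma=25)  # -1.6: a downright puny orange
```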

## Finding Areas Under the Curve of a Normal Distribution

The normal distribution is a probability distribution. As with any probability distribution, the proportion of the area that falls under the curve between two points on a probability distribution plot indicates the probability that a value will fall within that interval. To learn more about this property, read my post about Understanding Probability Distributions.

Typically, I use statistical software to find areas under the curve. However, when you’re working with the normal distribution and convert values to standard scores, you can calculate areas by looking up Z-scores in a Standard Normal Distribution Table.

Because there are an infinite number of different normal distributions, publishers can’t print a table for each distribution. However, you can transform the values from any normal distribution into Z-scores, and then use a table of standard scores to calculate probabilities.

### Using a Table of Z-scores

Let’s take the Z-score for our apple (0.667) and use it to determine its weight percentile. A percentile is the proportion of a population that falls below a specific value. Consequently, to determine the percentile, we need to find the area that corresponds to the range of Z-scores that are less than 0.667. In the portion of the table below, the closest Z-score to ours is 0.65, which we’ll use.

The trick with these tables is to use the values in conjunction with the properties of the normal distribution to calculate the probability that you need. The table value indicates that the area of the curve between -0.65 and +0.65 is 48.43%. However, that’s not what we want to know. We want the area that is less than a Z-score of 0.65.

We know that the two halves of the normal distribution are mirror images of each other. So, if the area for the interval from -0.65 to +0.65 is 48.43%, then the range from 0 to +0.65 must be half of that: 48.43/2 = 24.215%. Additionally, we know that the area for all scores less than zero is half (50%) of the distribution.

Therefore, the area for all scores up to 0.65 = 50% + 24.215% = 74.215%

Our apple is at approximately the 74th percentile.

Below is a probability distribution plot produced by statistical software that shows the same percentile along with a graphical representation of the corresponding area under the curve. The value is slightly different because we used a Z-score of 0.65 from the table while the software uses the more precise value of 0.667.
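Both routes to the percentile, table and software, can be sketched with Python's standard-library `NormalDist`:

```python
from statistics import NormalDist

standard_normal = NormalDist(mu=0, sigma=1)

# Table route: half the distribution plus half of the central -0.65..+0.65 area.
table_percentile = 0.50 + 0.4843 / 2              # 0.74215

# Software route: the CDF evaluated at the more precise Z-score.
software_percentile = standard_normal.cdf(0.667)  # ~0.748
```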

## Other Reasons Why the Normal Distribution is Important

In addition to all of the above, there are several other reasons why the normal distribution is crucial in statistics.

- Some statistical hypothesis tests assume that the data follow a normal distribution. However, as I explain in my post about parametric and nonparametric tests, there’s more to it than only whether the data are normally distributed.
- Linear and nonlinear regression both assume that the residuals follow a normal distribution. Learn more in my post about assessing residual plots.
- The central limit theorem states that as the sample size increases, the sampling distribution of the mean follows a normal distribution even when the underlying distribution of the original variable is non-normal.
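A quick simulation illustrates the central limit theorem bullet. Here the underlying population is exponential (strongly right-skewed), yet the sample means still cluster around the population mean of 1.0:

```python
import random
from statistics import mean

random.seed(1)  # for reproducibility

# 2,000 sample means, each computed from a sample of 50 exponential draws.
sample_means = [
    mean(random.expovariate(1.0) for _ in range(50))
    for _ in range(2000)
]

grand_mean = mean(sample_means)  # close to the population mean of 1.0
```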

That was quite a bit about the normal distribution! Hopefully, you can understand that it is crucial because of the many ways that analysts use it.

safin ghoghabori says

Pretty much good!

Elizabeth says

Hi Jim,

This is great. I’ve got a class of kids with Chromebooks and I’m trying to teach with tools we have, namely Google Sheets. Excel uses many of the same stats functions. I don’t like to have them use any function unless I can really explain what it does. I want to know the math behind it. But some of the math is beyond what they would have. Still, I like them to have a visual idea of what’s happening. I think we rely too much on calculator/spreadsheet functions without really understanding what they do and how they work. Most of the time the functions are straightforward. But this one was weird.

I ran through 8 Stats books and I really didn’t get a good feeling of how it worked. I can approximate a normal distribution curve of a dataset using norm.dist(), but I wanted to know more about why it worked.

First we will look at a few generic datasets. Then they will pull in stock data and they will tell me if current stock prices fall within 1 standard deviation of a years worth of data. Fun.

Thanks!!

Elizabeth

Jim Frost says

Hi Elizabeth,

That sounds fantastic that you’re teaching them these tools! And, I entirely agree that we often rely too much on functions and numbers without graphing what we’re doing.

For this particular function, a graph would make it very clear. I do explain probability functions in the post that I link to in my previous comment, and I use graphs for both discrete and continuous distributions. Unfortunately, I don’t show a cumulative probability function (I should really add that!). For the example I describe, imagine the bell curve of a normal distribution, where the value of 42 is above the mean, and shade the curve for all values less than or equal to 42. You’re shading about 90.87% of the distribution for the cumulative probability.

That does sound like fun!

Elizabeth W Dillard says

Hi Jim,

This is really neat.

I’ve been looking at the formula norm.dist(x, Mean, StandardDev, False) in Excel and Google Sheets.

I’m trying to understand what it is actually calculating.

I’m just getting back into Statistics – and this one is stumping me.

This is where x is a point in the dataset

Thanks!

Jim Frost says

Hi Elizabeth,

I don’t use Excel for statistics, but I did take a look into this function.

Basically, you’re defining the parameters of a normal distribution (mean and standard deviation) and supplying an X-value that you’re interested in. You can use this Excel function to derive the cumulative probability for your X-value or the probability of that specific value. Here’s an example that Microsoft uses on its Help page for the norm.dist function.

Suppose you have a normal distribution with a mean of 40 and a standard deviation of 1.5, and you’re interested in the properties of the value 42 for this distribution. This function indicates that the cumulative probability for this value is approximately 0.9087. In other words, the probability that values in this distribution will be less than or equal to 42 is 90.87%. Said another way, values of 42 and less comprise about 90.87% of this distribution.

Alternatively, this Excel function can calculate the probability of an observation having the value of 42 exactly. There’s a caveat because this distribution is for a continuous variable, and it is unlikely that an observation will have a value of exactly 42 out to an infinite number of decimal places. So, these calculations use a small range of values that includes 42 and calculate the probability that a value falls within that small range. That’s the role of the probability density function (PDF). In this case, the probability of a value being 42 equals approximately 10.9%.
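For readers outside Excel, the same two quantities can be cross-checked with Python's standard-library `statistics.NormalDist` (this is my sketch, not Microsoft's example code):

```python
from statistics import NormalDist

dist = NormalDist(mu=40, sigma=1.5)

cumulative = dist.cdf(42)  # ~0.9088: P(X <= 42), Excel's cumulative=TRUE
density = dist.pdf(42)     # ~0.1093: the density, Excel's cumulative=FALSE
```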

For more information about PDFs, please read my post about Understanding Probability Distributions.

Z Table says

Hey Jim. This is a fantastic post. I came across a lot of people asking about the significance of the normal distribution (more people should) and I was looking for an answer that puts it as eloquently as you did. Thank you for writing this.

Jim Frost says

Hi, thank you so much! I really appreciate your kind words!

Sudhakar says

Excellent Jim, great explanation. I have a doubt, you used some software to calculate Z-score and to display graphs right, can you please let me know which software you used for the same?

Jim Frost says

Hi Sudhakar,

I’m using Minitab statistical software.

Thanks for reading!

Sule Suleiman Taura says

Great to have met someone like Jim who can explain statistics in plain language for everyone to understand. My other questions are: a) what is the function of a probability distribution, and b) how would one use a probability distribution?

Jim Frost says

Hi,

I’ve written a post all about probability distributions. I include the link to it in this post, but here it is again: Understanding Probability Distributions.

Bhaskar says

Very nice explanation .

Xavier says

Finally I found a post which explains normal distribution in plain english. It helped me a lot to understand the basic concepts. Thank you very much, Jim

Jim Frost says

You’re very welcome, Xavier! It’s great to hear that it was helpful!

Jimmy says

Hi Jim thanks for this. How large a number makes normal distribution?

Jim Frost says

Hi, I don’t understand your question. A sample of any size can follow a normal distribution. However, when your sample is very small, it’s hard to determine which distribution it follows. Additionally, there is no sample size that guarantees your data follows a normal distribution. For example, you can have a very large sample size that follows a skewed, non-normal distribution.

Are you possibly thinking about the central limit theorem? This theorem states that the sampling distribution of the mean follows a normal distribution if your sample size is sufficiently large. If this is what you’re asking about, read my post on the central limit theorem for more information.

Ranjan venkatesh says

best post ever, thanks a lot

Jim Frost says

Thanks, Ranjan!

Arjul Islam says

Great work

So much confusion cleared

mt says

thank you very much for this very good explanation of normal distribution

Jim Frost says

Thank you!

Rajendra Prabhu says

During my B.E. (an 8-semester course), we had “Engg. Maths.” for four semesters, and in one semester we had Prob & Stat (along with other topics), which was purely theoretical. Even though we had lots of exercises and problems, I could not digest it and didn’t know its practical significance (i.e., how and where to apply and use it). Again in my M.Tech (a 3-semester course) we had one subject, “Reliability Analysis and Design of Structures,” but this was relatively more practically oriented. While working in the ready-mix concrete industry and while doing my PhD in concrete, I came across this normal distribution concept, where concrete mix design is purely based on the standard deviation and Z-score, and concrete test results are also assessed statistically for performance monitoring, acceptance criteria, non-compliance, etc., where the normal distribution is the backbone. However, because of my thirst to gain knowledge and fully understand, a habit of browsing the internet (I wanted the confidence interval concept) led me to meet your website accidentally.

I observed your effort in explaining the topic in a simple, meaningful, and understandable manner, such that even a person with a non-science or non-engineering background can learn from scratch. That’s great.

My heartfelt gratitude, regards, and appreciation for your volunteering mentality (broad mind) in sharing your knowledge and experience with the global society in need.

Thank you once again,

Rajendra Prabhu

NASI says

THANK YOU FOR YOUR HELP

VERY USEFUL

williams kwarah says

thank you, very useful

Ali says

Jim, you truly love what you are doing, and you’re saving us at the same time. I just want to say thank you. I was about to give up on statistics because of formulas with no words.

Jim Frost says

Hi Ali, thank you so much! I really appreciate your kind words! Yes, I absolutely love statistics. I also love helping others learn and appreciate statistics as well. I don’t always agree with the usual way statistics is taught, so I wanted to provide an alternative!

Lucyna says

I was frustrated in my statistics learning by the lecturer’s focus on formulae. While obviously critical, they were done in isolation so I could not see the underlying rationale and where they fit in. Your posts make that very clear, and explain the context, the connections and limitations while also working through the calculations. Thank you.

Jim Frost says

Hi Lucyna,

First, I’m so happy to hear that my posts have been helpful! What you describe is exactly my goal for my website. So, your kind words mean so much to me! Thank you!

Noor Nawaz says

Nice work sir…

Sanjay Sinha says

Fantastic way of explaining

Jim Frost says

Thank you, Sanjay!

Qaz says

Sir, kindly guide me. I have panel data, and my variables are not all normally distributed. The data are in ratio form. My question is: for descriptive statistics and correlation analysis, do I need to use the raw data in its original form, and transformed data for regression analysis only?

Moreover, which transformation method should be used for ratios when the data are highly positively or negatively skewed? I tried log, difference, and reciprocal transformations, but could not achieve normality.

Kindly help me. Thank You

Mona says

Do natural phenomena such as hemoglobin levels or the weight of ants really follow a normal distribution? If you add up a large number of random events, you get a normal distribution.

Jim Frost says

To obtain a normal distribution, you need the random errors to have an equal probability of being positive and negative, and the errors need to be more likely small than large.

Many datasets will naturally follow the normal distribution. For example, the height data in this blog post are real data, and they follow the normal distribution. However, not all datasets and variables have that tendency. The weight data for the same subjects are not normally distributed. Those data are right skewed, which you can read about in my post about identifying the distribution of a dataset.

Carlos says

Hello Jim, first of all, your page is very good, it has helped me a lot to understand statistics.

Query: when I have a data set that is not normally distributed, should I first transform it to normal and then start working with it? Greetings from Chile, CLT

Jim Frost says

Hi Carlos,

This gets a little tricky. For one thing, it depends what you want do with the data. If you’re talking about hypothesis tests, you can often use the regular tests with non-normal data when you have a sufficiently large sample size. “Sufficiently large” isn’t really even that large. You can also use nonparametric tests for nonnormal data. There are several issues to consider, which I write about in my post that compares parametric and nonparametric hypothesis tests.

That should help clarify some of the issues. After reading that, let me know if you have any additional questions. Generally, I’m not a fan of transforming data because it completely changes the properties of your data.

Aashay Sukhthankar says

Hi Jim. What exactly do you mean by a true normal distribution. You’ve not used the word “true” anywhere in your post. Just plain normal distribution.

Jim Frost says

Hi Aashay, sorry about the confusing terminology. What I meant by true normal distribution is one that follows a normal distribution to mathematically perfect degree. For example, the graphs of all the normal distributions in this post are true normal distributions because the statistical software graphs them based on the equation for the normal distribution plus the parameter values for the inputs.

By the way, there is not one shape that corresponds to a true normal distribution. Instead, there are an infinite number and they’re all based on the infinite number of different means and standard deviations that you can input into the equation for the normal distribution.

Typically, data don’t follow the normal distribution exactly. A distribution test can determine whether the deviation from the normal distribution is statistically significant.

In the comment where I used this terminology, I was just trying to indicate how as a distribution deviated from a true normal distribution, the Empirical Rule also deviates.

I hope this helps.

Josh Pius says

I’m glad I stumbled across your blog. Wonderful work!! I’ve gained a new perspective on what statistics could mean to me

Jim Frost says

Hi Josh, that is awesome! My goal is to show that statistics can actually be exciting! So, your comment means a lot to me! Thanks!

Asis Kumar Dirghangi says

Excellent…..

Jim Frost says

Thank you, Asis!

Muhammad Arif says

Many Many thanks for help dear Jim sir!

Jim Frost says

You’re very welcome!

Muhammad Arif says

dear Jim, tell me please what is normality?. and how we can understand to use normal or any other distribution for a data set?

Jim Frost says

Hi Muhammad, you’re in the right place to find the information that you need! This blog post tells you all about the normal distribution. Normality simply refers to data that are normally distributed (i.e., the data follow the normal distribution).

I have links in this post to another post called Understand Probability Distributions that tells you about other distributions. And yet another link to a post that tells you How to Determine the Distribution of Your Data.

Masum Ahmed says

You are far better than my teachers. Thank you, Jim

Jim Frost says

Thank you, Masum!

John-Harold says

Another great post. Simple, clear and direct language and logic.

Jim Frost says

Thanks so much! That’s always my goal–so your kind words mean a lot to me!

Khursheed Ahmad Ganaie says

I was eagerly waiting for this topic ..

Normal distribution

Thanks a lot!

Jim Frost says

You’re very welcome, Khursheed!

Fernando Antunez says

Jim, it is my understanding that the normal distribution is unique and it is the one that follows the 68/95/99.7% rule to perfection. The rest of the distributions are “approximately” normal, as you say, when they get wider. They are still symmetric but not normal because they lost perfection to the empirical rule. I was taught this by a professor when I was doing my master’s in Stats

Jim Frost says

Hi Fernando, all normal distributions (for those cases where you input any values for the mean and standard deviation parameters) follow the Empirical Rule (68%, 95%, 99.7%). There are other symmetric distributions that aren’t quite normal distributions. I think you’re referring to these symmetric distributions that have thicker or thinner tails than normal distributions. Kurtosis measures the thickness of the tails. Distributions with high kurtosis have thicker tails and those with low kurtosis have thinner tails. If a distribution has thicker or thinner tails than a true normal distribution, then the Empirical Rule doesn’t hold true. How far off the rule is depends on how different the distribution is from a true normal distribution. Some of these distributions can be considered approximately normal.

However, this gets confusing because you can have true normal distributions that have wider spreads than other normal distributions. This spread doesn’t necessarily make them non-normal. The example of the wider distribution that I show in the Standard Deviation section is a true normal distribution. These wider normal distributions follow the Empirical Rule. If you have sample data and are trying to determine whether they follow a normal distribution, perform a normality test.

On the other hand, there are other distributions that are not symmetrical at all and are very different from the normal distribution. They’re different by more than just the thickness of the tails. For example, the lognormal distribution can model very skewed distributions. Some of these distributions are nowhere close to being approximately normal!

So, you can have a wide variety of non-normal distributions that range from approximately normal to not close at all!

MG says

Thank you very much for your great post. Cheers from MA

Jim Frost says

You’re very welcome! I’m glad it was helpful!