Descriptive and inferential statistics are two broad categories in the field of statistics. In this blog post, I show you how both types of statistics are important for different purposes. Interestingly, some of the statistical measures are similar, but the goals and methodologies are very different.

## Descriptive Statistics

Use descriptive statistics to summarize and graph the data for a group that you choose. This process allows you to understand that specific set of observations.

Descriptive statistics describe a sample. That’s pretty straightforward. You simply take a group that you’re interested in, record data about the group members, and then use summary statistics and graphs to present the group properties. With descriptive statistics, there is no uncertainty because you are describing only the people or items that you actually measure. You’re not trying to infer properties about a larger population.

The process involves taking a potentially large number of data points in the sample and reducing them down to a few meaningful summary values and graphs. This procedure allows us to gain insights and visualize the data rather than simply poring through row upon row of raw numbers!

### Common tools of descriptive statistics

Descriptive statistics frequently use the following statistical measures to describe groups:

**Central tendency**: Use the mean or the median to locate the center of the dataset. This measure tells you where most values fall.

**Dispersion**: How far out from the center do the data extend? You can use the range or standard deviation to measure the dispersion. A low dispersion indicates that the values cluster more tightly around the center. Higher dispersion signifies that data points fall further away from the center. We can also graph the frequency distribution.

**Skewness**: This measure tells you whether the distribution of values is symmetric or skewed.

You can present this summary information using both numbers and graphs. These are the standard descriptive statistics, but there are other descriptive analyses you can perform, such as assessing the relationships of paired data using correlation and scatterplots.
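As a quick sketch, these summary measures can all be computed with Python's standard library. The scores below are made-up values for illustration only:

```python
import statistics

# Hypothetical test scores for a single class (illustrative data only)
scores = [72, 85, 91, 67, 78, 88, 74, 95, 81, 69]

# Central tendency: where the center of the dataset lies
mean = statistics.mean(scores)
median = statistics.median(scores)

# Dispersion: how far the data extend from the center.
# pstdev() divides by n because we are describing the entire group.
spread = statistics.pstdev(scores)
data_range = max(scores) - min(scores)

print(f"Mean: {mean:.1f}, Median: {median:.1f}")
print(f"Std dev: {spread:.1f}, Range: {data_range}")
```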

**Related posts**: Measures of Central Tendency and Measures of Dispersion

### Example of descriptive statistics

Suppose we want to describe the test scores in a specific class of 30 students. We record all of the test scores and calculate the summary statistics and produce graphs. Here is the CSV data file: Descriptive_statistics.

| Statistic | Class value |
| --- | --- |
| Mean | 79.18 |
| Range | 66.21 – 96.53 |
| Proportion >= 70 | 86.7% |

These results indicate that the mean score of this class is 79.18. The scores range from 66.21 to 96.53, and the distribution is symmetrically centered around the mean. A score of at least 70 on the test is acceptable. The data show that 86.7% of the students have acceptable scores.

Collectively, this information gives us a pretty good picture of this specific class. There is no uncertainty surrounding these statistics because we gathered the scores for everyone in the class. However, we can’t take these results and extrapolate to a larger population of students.

We’ll do that later.

## Inferential Statistics

Inferential statistics takes data from a sample and makes inferences about the larger population from which the sample was drawn. Because the goal of inferential statistics is to draw conclusions from a sample and generalize them to a population, we need to have confidence that our sample accurately reflects the population. This requirement affects our process. At a broad level, we must do the following:

- Define the population we are studying.
- Draw a representative sample from that population.
- Use analyses that incorporate the sampling error.

We don’t get to pick a convenient group. Instead, random sampling allows us to have confidence that the sample represents the population. This process is a primary method for obtaining samples that mirror the population on average. Random sampling produces statistics, such as the mean, that do not tend to be too high or too low. Using a random sample, we can generalize from the sample to the broader population. Unfortunately, gathering a truly random sample can be a complicated process.
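As a minimal sketch of simple random sampling, assuming we have a complete list (sampling frame) of the population, Python's `random.sample` draws without replacement so that every member has an equal chance of selection:

```python
import random

# Hypothetical sampling frame: ID numbers for every population member
population = list(range(1, 1001))  # a population of 1,000 members

random.seed(42)  # fixed seed so the draw is reproducible
# Simple random sampling: each member is equally likely to be chosen
sample = random.sample(population, k=100)

print(len(sample))        # sample size of 100
print(len(set(sample)))   # drawn without replacement, so all unique
```

In practice, building the sampling frame itself is usually the hard part, as the post notes.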

**Related post**: Populations, Parameters, and Samples in Inferential Statistics

### Pros and cons of working with samples

You gain tremendous benefits by working with a random sample drawn from a population. In most cases, it is simply impossible to measure the entire population to understand its properties. The alternative is to gather a random sample and then use the methodologies of inferential statistics to analyze the sample data.

While samples are much more practical and less expensive to work with, there are tradeoffs. Typically, we learn about the population by drawing a relatively small sample from it. We are a very long way off from measuring all people or objects in that population. Consequently, when you estimate the properties of a population from a sample, the sample statistics are unlikely to equal the actual population value exactly.

For instance, your sample mean is unlikely to equal the population mean exactly. The difference between the sample statistic and the population value is the sampling error. Inferential statistics incorporate estimates of this error into the statistical results.
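A small simulation makes sampling error concrete. Here we invent a population so we know its true mean, then watch each sample mean miss that value by a different amount (all values are illustrative):

```python
import random
import statistics

random.seed(0)
# Hypothetical population of 10,000 values (illustrative only)
population = [random.gauss(100, 15) for _ in range(10_000)]
pop_mean = statistics.mean(population)

# Draw several random samples; the gap between each sample mean and
# the population mean is the sampling error.
for _ in range(3):
    sample = random.sample(population, k=50)
    sample_mean = statistics.mean(sample)
    print(f"sample mean = {sample_mean:.2f}, "
          f"sampling error = {sample_mean - pop_mean:+.2f}")
```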

In contrast, summary values in descriptive statistics are straightforward. The average score in a specific class is a known value because we measured all individuals in that class. There is no uncertainty.

**Related post**: Sample Statistics Are Always Wrong (to Some Extent)!

### Standard analysis tools of inferential statistics

The most common methodologies in inferential statistics are hypothesis tests, confidence intervals, and regression analysis. Interestingly, these inferential methods can produce similar summary values as descriptive statistics, such as the mean and standard deviation. However, as I’ll show you, we use them very differently when making inferences.

### Hypothesis tests

Hypothesis tests use sample data to answer questions like the following:

- Is the population mean greater than or less than a particular value?
- Are the means of two or more populations different from each other?

For example, if we study the effectiveness of a new medication by comparing the outcomes in a treatment and control group, hypothesis tests can tell us whether the drug’s effect that we observe in the sample is likely to exist in the population. After all, we don’t want to use the medication if it is effective only in our specific sample. Instead, we need evidence that it’ll be useful in the entire population of patients. Hypothesis tests allow us to draw these types of conclusions about entire populations.
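As a sketch of the treatment-vs-control comparison, the code below computes Welch's two-sample t statistic by hand with the standard library. The outcome scores are made up for illustration; in practice you would use a statistics library routine (such as `scipy.stats.ttest_ind`), which also returns the p-value:

```python
import math
import statistics

# Hypothetical outcome scores (illustrative data only)
treatment = [23, 27, 31, 25, 29, 33, 26, 30]
control   = [20, 22, 25, 21, 24, 19, 23, 22]

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

t = welch_t(treatment, control)
print(f"t = {t:.2f}")
# A |t| well above ~2 suggests the observed difference is unlikely to be
# sampling error alone; a library routine would give the exact p-value.
```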

**Related post**: Statistical Hypothesis Testing Overview

### Confidence intervals (CIs)

In inferential statistics, a primary goal is to estimate population parameters. These parameters are the unknown values for the entire population, such as the population mean and standard deviation. These parameter values are not only unknown but almost always unknowable. Typically, it’s impossible to measure an entire population. The sampling error I mentioned earlier produces uncertainty, or a margin of error, around our estimates.

Suppose we define our population as all high school basketball players. Then, we draw a random sample from this population and calculate the mean height of 181 cm. This sample estimate of 181 cm is the best estimate of the mean height of the population. However, it’s virtually guaranteed that our estimate of the population parameter is not exactly correct.

Confidence intervals incorporate the uncertainty and sampling error to create a range of values that the actual population value is likely to fall within. For example, a confidence interval of [176, 186] indicates that we can be confident that the real population mean falls within this range.
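A minimal sketch of the calculation, using made-up heights and the normal-approximation 95% interval (for small samples you would use a t critical value, which is slightly wider than 1.96):

```python
import math
import statistics

# Hypothetical sample of player heights in cm (illustrative data only)
heights = [178, 185, 176, 190, 179, 183, 174, 188, 181, 176,
           184, 179, 187, 175, 182, 180, 186, 177, 183, 181]

n = len(heights)
mean = statistics.mean(heights)
sem = statistics.stdev(heights) / math.sqrt(n)  # standard error of the mean

# Normal-approximation 95% confidence interval for the population mean
lower, upper = mean - 1.96 * sem, mean + 1.96 * sem
print(f"mean = {mean:.1f} cm, 95% CI = [{lower:.1f}, {upper:.1f}]")
```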

**Related post**: Understanding Confidence Intervals

### Regression analysis

Regression analysis describes the relationship between a set of independent variables and a dependent variable. This analysis incorporates hypothesis tests that help determine whether the relationships observed in the sample data actually exist in the population.

For example, consider a fitted regression model of the relationship between height and weight in adolescent girls. Because the relationship is statistically significant, we have sufficient evidence to conclude that this relationship exists in the population rather than just our sample.

**Related post**: When Should I Use Regression Analysis?

### Example of inferential statistics

For this example, suppose we conducted our study on test scores for a specific class as I detailed in the descriptive statistics section. Now we want to perform an inferential statistics study for that same test. Let’s assume it is a standardized statewide test. By using the same test, but now with the goal of drawing inferences about a population, I can show you how that changes the way we conduct the study and the results that we present.

In descriptive statistics, we picked the specific class that we wanted to describe and recorded all of the test scores for that class. Nice and simple. For inferential statistics, we need to define the population and then draw a random sample from that population.

Let’s define our population as 8th-grade students in public schools in the State of Pennsylvania in the United States. We need to devise a random sampling plan to help ensure a representative sample. This process can actually be arduous. For the sake of this example, assume that we are provided a list of names for the entire population, draw a random sample of 100 students from it, and obtain their test scores. Note that these students will not be in one class, but from many different classes in different schools across the state.

### Inferential statistics results

For inferential statistics, we can calculate the point estimate for the mean, standard deviation, and proportion for our random sample. However, it is staggeringly improbable that any of these point estimates are exactly correct, and there is no way to know for sure anyway. Because we can’t measure all subjects in this population, there is a margin of error around these statistics. Consequently, I’ll report the confidence intervals for the mean, standard deviation, and the proportion of satisfactory scores (>=70). Here is the CSV data file: Inferential_statistics.

| Statistic | Population Parameter Estimate (CIs) |
| --- | --- |
| Mean | 77.4 – 80.9 |
| Standard deviation | 7.7 – 10.1 |
| Proportion scores >= 70 | 77% – 92% |

Given the uncertainty associated with these estimates, we can be 95% confident that the population mean is between 77.4 and 80.9. The population standard deviation (a measure of dispersion) is likely to fall between 7.7 and 10.1. And, the population proportion of satisfactory scores is expected to be between 77% and 92%.
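As a sketch of how the proportion interval could arise, the code below computes a normal-approximation (Wald) 95% interval from assumed counts. The counts are hypothetical, chosen only to illustrate the calculation; exact or Wilson intervals are preferred for small samples or extreme proportions:

```python
import math

# Hypothetical sample result: 85 of 100 students scored >= 70 (illustrative)
successes, n = 85, 100
p_hat = successes / n

# Normal-approximation (Wald) 95% interval for a population proportion
se = math.sqrt(p_hat * (1 - p_hat) / n)
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"proportion = {p_hat:.0%}, 95% CI = [{lower:.0%}, {upper:.0%}]")
```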

## Differences between Descriptive and Inferential Statistics

As you can see, the difference between descriptive and inferential statistics lies in the process as much as it does the statistics that you report.

For descriptive statistics, we choose a group that we want to describe and then measure all subjects in that group. The statistical summary describes this group with complete certainty (outside of measurement error).

For inferential statistics, we need to define the population and then devise a sampling plan that produces a representative sample. The statistical results incorporate the uncertainty that is inherent in using a sample to understand an entire population.

A study using descriptive statistics is simpler to perform. However, if you need evidence that an effect or relationship between variables exists in an entire population rather than only your sample, you need to use inferential statistics.

Sol says

Many thanks for this post. You’re a godsend. Have you authored any books?

Jim Frost says

Hi Sol, You’re very welcome! And, that’s a timely question. I’m working on my first book at the moment!

Carlo Lauro says

Very useful presentation of the topic. What about their use in big data analysis?

ANN MARY CHACKO says

Thank you Jim for making things simpler and better. I am Ann, PhD Scholar from India

Jim Frost says

Hi Ann, you’re very welcome! I’m so glad that you find my posts to be helpful! I love India! I’ve been there several times!

Jerry Tuttle says

I have seen definitions of sample standard deviation in social science textbooks using an n denominator for descriptive statistics and an n-1 for inferential statistics. I have never seen a math book using the n denominator for descriptive. Any comment on why the social science world goes off on a different direction here?

Jim Frost says

Hi Jerry, I don’t know why social science takes that route. I can tell you that in statistics the correct formula to use for standard deviation depends on whether the data are the entire group or population or a sample from a larger population.

When the data are the entire group (descriptive statistics), the denominator is n. However, if you are using a sample to estimate the value of a population (inferential), you use n-1. This is because you need to account for the degrees of freedom that you use for the estimate.
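Python's standard library happens to encode exactly this distinction, which makes a handy illustration (the data values here are arbitrary):

```python
import statistics

data = [4, 8, 6, 5, 3, 7]  # illustrative values

# pstdev: divides by n -- use when the data ARE the whole group (descriptive)
descriptive_sd = statistics.pstdev(data)

# stdev: divides by n-1 -- use when estimating a population from a sample
# (inferential); the smaller denominator corrects the downward bias that
# comes from estimating the mean from the same data.
inferential_sd = statistics.stdev(data)

print(f"n denominator:   {descriptive_sd:.3f}")
print(f"n-1 denominator: {inferential_sd:.3f}")
# For the same data, the n-1 version is always slightly larger.
```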

Aayush says

Hello sir, l want to know that what is the need of interval estimation while already we have point estimation?

Jim Frost says

Hi Aayush, that is a great question! I talk about this in the Example of Inferential Statistics section. It is possible to calculate the point estimate for the population. However, it’s virtually guaranteed that this estimate is wrong by some amount. So, the question becomes, how far off is the point estimate likely to be?

Confidence intervals answer this question. The narrower the intervals, the more precise the estimate. With narrow intervals, you can be reasonably sure that the point estimate isn’t too far wrong. However, if the CI is wide, you know that you shouldn’t expect the point estimate to be too near the true value. In that case, don’t place too much confidence in the point estimate! Interval estimation provides additional information about the precision of the point estimate.

I hope this helps clarify things!

rama krishna reddy says

I am a data scientist,i enjoy while going through your articles.thank you jim.

Jim Frost says

Hi Rama, I’m glad that you find my posts to be helpful!

daboo says

thank u so much continuously i need such brief explanation about statistics therefore i need another material specially about Bayesian distribution b/c i.m post graduate class a thesis on maternal mortality approach of bayesian model

Anandaraj says

Very good one. Explains the basics well. Thanks

Evelyn says

Just discovered this website today very helpful. Thank you Jim..

Jim Frost says

Hi Evelyn, thank you for you kind words! I’m glad you found it to be helpful!

Carlo Lauro says

Still waiting for your reply

Jim Frost says

Hi Carlo, that’s a very broad question–I could write an entire book about that topic. Is there something more specific you want to know?

John Sneed says

This was a good introduction and an important help to me. I wish you had gone into a little more detail about standard deviation. I also wish there were a link to print this page. It is the kind I could go back to from time to time to refresh what I have learned. I am John and I am a PhD student in education. Thanks for this help.

Jim Frost says

Hi John, I’m happy to hear that you found this helpful. I’m also adding new content all the time. As for the standard deviation, I write about it in a different post about Measures of Variability. You might find that helpful.

Ndamona Namalemo says

Please help me on this assignment this is the following questions

1. Define the descriptive statistic and inferential statistics

2. The difference between descriptive statistics and inferential statistics

Jim Frost says

Hi Ndamona, the information you need to answer your questions is in this blog post. You’re in the right place!

SUBROTO CHATTERJEE says

Your blog explains statistics in a very student-friendly manner. Importantly, your explanations to various terminologies is nicely illustrated. Could you write more on multi-variate statistical analysis? Thanks.

Jim Frost says

Hi, thanks so much! I strive to make statistics as easy to understand as possible. Your nice comments mean a lot to me!

I’ll try to write more about multivariate analyses in the future.

Patrik Silva says

I am getting addicted to your blog, Jim Frost.

I think this is what should be taught at the first statistic class, before going to any math and formulas.

I am safe here, at least I know who can help me solving my doubts.

Thank you Jim, God bless you always.

Jim Frost says

Hi Patrik, you have no idea how much your kind comments mean to me! Thank you!

Patrik Silva says

You’re welcome Jim!

Your blog is pulling me into statistics every time I read any of your post.

Statistics is nice and beautiful.

I am a Geographer I like modelling. I would like to see some of your post talking something about spatial statistic, if you now something that might be useful.

I will be here every time with you teacher.

Thank you again, Jim.

karma says

Happened to discover your website recently and have been going through it. Very helpful!

Thank you.

Jim Frost says

Thanks, Karma! I’m glad you have found it to be helpful!

Nick says

Thank you so much for such great content! I use your posts frequently to grasp all the material currently studied in school. I do have one question I can not wrap my head around. Was hoping you could help explain.

I would certainly agree that we can gain value by analyzing random samples because it is sometimes impossible to measure the entire population. With that being said, let us for a moment consider methods described in textbooks: estimating population mean using the Z statistic (when pop. st. dev. is known) or t statistic (when pop. st. dev. is unknown). If we can not measure the entire population and are unable to get a population standard deviation or a population mean as result, how can we use these methods or construct a confidence interval if we actually know nothing about the population? Most problems in textbooks state (assume population mean is xxx or std. dev. is yyy). To me, this does not sound practical… How is this process done in industry?

Jim Frost says

Hi Nick,

I’m glad that my posts help you out!

You’re entirely correct about when to use t-values versus Z scores. Because you almost never know the population standard deviation, you never really use Z-tests in practice. After all, if you knew the population standard deviation, wouldn’t you probably also know the population mean? I don’t know why some statistics classes and textbooks use that test and assume you know the population standard deviation. I suppose it’s a little simpler case than using the t-distribution which changes depending on your degrees of freedom.

If you need to test hypotheses or find confidence intervals about a population mean and you’re using a sample, you’ll almost always use t-tests and t-values.

Arliezl D. Mancio says

i’d been reading several readings but still confused… Thank you so much for the informations you shared… And now everything is clear…

prem shankar Mishra says

Thank You Sir….! It’s really really nice, i have been found very simplistic way to understand the things which you have taken care of very well sir. thank you once again sir

Nick says

Jim,

Thank you for the insight! I wish someone told me this earlier. To follow up with another similar question, most example problems also state “assume alpha = 0.05.” Someone told me that in practice, we use alpha from similar research topics found in industry that pertains to your own. Would you agree with that statement?

-Nick

Jim Frost says

Hey again Nick,

You bet!!

As for significance levels, in the field, the most commonly used alpha by far is 0.05. I almost never see a different value. The most I see is that analysts will adjust the significance level when they’re making many comparisons, such as between the factor levels in an ANOVA.

I do agree with the practice of seeing what others in your industry have used and their rationale. For example, if a Type I error is particularly costly, dangerous, or bad in whatever way, you might change the significance level to 0.01. If a Type II error is particularly bad, you might change alpha to 0.10. Although, I’m always leery of increasing alpha from 0.05 to say 0.10. Simulation studies show that p-values near 0.05 actually reflect very weak evidence of an effect–so decreasing the strength of evidence you require (e.g., by increasing alpha from 0.05 to 0.10) doesn’t seem like a good idea. I cover this a bit at the end of my post about interpreting p-values. But, I can often imagine a need to lower alpha to something like 0.01.

So, I do agree with the principle, but I often don’t see it in practice. Although, I think 0.05 is often a good value to use, so that’s probably part of why it is so ubiquitous. It’s probably a good value to use unless you can identify a specific and important reason to use a different value. And, that information is what you might gain by looking at similar research topics in your industry.

Motlatsi says

hy Jim you are inspirational worldwide by helping us thank you so much im now a distinction student in statistics all because of you,you are a blessing to us

Jim Frost says

Hi Motlatsi,

Thank you so much for your very kind comment! I really appreciate it. I put a lot of work into my website because I want to make statistics easier to learn for all.

That all said, I’m sure you put in a lot of hard work learning statistics! Congratulations on being such a great student!

I wish you the very best!

Rosa M says

This was so unbelievably helpful! Thanks for making this so easy to understand!

Jim Frost says

You’re very welcome, Rosa! I’m glad it was helpful!

Ajay S says

Hi Sir,

Nice article, I had a question….

I have a dataset which is skewed to the right and when I perform “Descriptive Statistics” it provides MEAN as one of the parameter (Mean = sum/Number of data points), but when I fit the same data to a distribution and I found “Weibull” to be a best fit and calculate “Mean” [Mean of Weibull = Scale *Gamma(1+1/Beta)], now the “Descriptive Statistics” Mean and Weibull Mean have same value, how is this possible when the formulas of calculating Mean are different for each approach?

Jim Frost says

Hi,

Just a guess but either beta equals 1, or the descriptive statistics procedure simply uses the general calculation of the mean rather than the Weibull specific calculation.

MARIA SOCORRO QUIDER GUIBONE says

A great help for us who are studying statistics. Thank you for making it easier for us to understand this subject. God bless.

Jim Frost says

Thank you, Maria!

AMANUEL TAFESSE says

Thanks a lots for your clear and conscious note posts. I understood the better know-how on the area of descriptive and inferential statistics.

Jim Frost says

You’re very welcome, Amanuel. I appreciated your nice comment!

Zed says

Pretty cleared about this concept now, you are doing a great job

Jim Frost says

Thank you, Zed. I really appreciate the nice comment!