Nonparametric tests don’t require that your data follow the normal distribution. They’re also known as distribution-free tests and can provide benefits in certain situations. Typically, people who perform statistical hypothesis tests are more comfortable with parametric tests than nonparametric tests.
You’ve probably heard it’s best to use nonparametric tests if your data are not normally distributed—or something along these lines. That seems like an easy way to choose, but there’s more to the decision than that.
In this post, I’ll compare the advantages and disadvantages to help you decide between using the following types of statistical hypothesis tests:
 Parametric analyses to assess group means
 Nonparametric analyses to assess group medians
In particular, I’d like you to focus on one key reason to perform a nonparametric test that doesn’t get the attention it deserves! If you need a primer on the basics, read my hypothesis testing overview.
Related Pairs of Parametric and Nonparametric Tests
Nonparametric tests are a shadow world of parametric tests. In the table below, I show linked pairs of statistical hypothesis tests.
Parametric tests of means | Nonparametric tests of medians
1-sample t-test | 1-sample sign test, 1-sample Wilcoxon test
2-sample t-test | Mann-Whitney test
One-Way ANOVA | Kruskal-Wallis test, Mood’s median test
Factorial DOE with a factor and a blocking variable | Friedman test
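If you work in Python, SciPy implements most of these tests. Here’s a minimal sketch (the data are simulated, and the group means are my own arbitrary choices) that runs one linked pair from the table on the same two samples:

```python
# Sketch with simulated data: one linked pair from the table above,
# run on the same two samples using SciPy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)
group_b = rng.normal(loc=11.5, scale=2.0, size=30)

# Parametric member of the pair: 2-sample t-test on the means.
t_result = stats.ttest_ind(group_a, group_b)

# Nonparametric counterpart: Mann-Whitney test.
u_result = stats.mannwhitneyu(group_a, group_b)

print(f"t-test p = {t_result.pvalue:.4f}")
print(f"Mann-Whitney p = {u_result.pvalue:.4f}")
```

Both tests see the same data, but as you’ll read below, they assess different aspects of it (means versus medians).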
Advantages of Parametric Tests
Advantage 1: Parametric tests can provide trustworthy results with distributions that are skewed and nonnormal
Many people aren’t aware of this fact, but parametric analyses can produce reliable results even when your continuous data are nonnormally distributed. You just have to be sure that your sample size meets the requirements for each analysis in the table below. Simulation studies have identified these requirements. Read here for more information about these studies.
Parametric analyses | Sample size requirements for nonnormal data
1-sample t-test | Greater than 20 observations
2-sample t-test | Each group should have more than 15 observations
One-Way ANOVA | For 2-9 groups, each group should have more than 15 observations; for 10-12 groups, each group should have more than 20
You can use these parametric tests with nonnormally distributed data thanks to the central limit theorem. For more information about it, read my post: Central Limit Theorem Explained.
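To see this robustness in action, here’s a small simulation sketch (the exponential population and the sample size of 25 are my own arbitrary choices, not from the studies above): even with a strongly skewed population, the t-test’s false-positive rate stays near the nominal 5%.

```python
# Simulation sketch (arbitrary setup): both samples come from the SAME
# skewed exponential population, so every "significant" result is a
# false positive. With n = 25 per group, the rate stays near 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials = 25, 2000
false_positives = 0
for _ in range(trials):
    a = rng.exponential(scale=1.0, size=n)
    b = rng.exponential(scale=1.0, size=n)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

error_rate = false_positives / trials
print(f"Observed type I error rate: {error_rate:.3f}")
```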
Related posts: The Normal Distribution and How to Identify the Distribution of Your Data.
Advantage 2: Parametric tests can provide trustworthy results when the groups have different amounts of variability
It’s true that nonparametric tests don’t require data that are normally distributed. However, nonparametric tests have the disadvantage of an additional requirement that can be very hard to satisfy. The groups in a nonparametric analysis typically must all have the same variability (dispersion). Nonparametric analyses might not provide accurate results when variability differs between groups.
Conversely, parametric analyses, like the 2-sample t-test or one-way ANOVA, allow you to analyze groups that have unequal variances. In most statistical software, it’s as easy as checking the correct box! You don’t have to worry about groups having different amounts of variability when you use a parametric analysis.
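For example, in SciPy (simulated data; the group sizes and spreads are arbitrary illustrations), “checking the box” for unequal variances is literally one argument:

```python
# Sketch with simulated data: the two groups deliberately have very
# different spreads. equal_var=False selects Welch's t-test, which does
# not assume the groups share the same variance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
low_spread = rng.normal(loc=50, scale=2, size=40)
high_spread = rng.normal(loc=50, scale=12, size=40)

welch = stats.ttest_ind(low_spread, high_spread, equal_var=False)
print(f"Welch's t-test p-value: {welch.pvalue:.3f}")
```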
Related post: Measures of Variability
Advantage 3: Parametric tests have greater statistical power
In most cases, parametric tests have more power. If an effect actually exists, a parametric analysis is more likely to detect it.
Related post: Statistical Power and Sample Size
Advantages of Nonparametric Tests
Advantage 1: Nonparametric tests assess the median, which can be better for some study areas
Now we’re coming to my preferred reason for when to use a nonparametric test. The one that practitioners don’t discuss frequently enough!
For some datasets, nonparametric analyses provide an advantage because they assess the median rather than the mean. The mean is not always the better measure of central tendency for a sample. Even though you can perform a valid parametric analysis on skewed data, that doesn’t necessarily make it the better method. Let me explain using the distribution of salaries.
Salaries tend to follow a right-skewed distribution. The majority of wages cluster around the median, the point where half are above and half are below. However, a long tail stretches into the higher salary ranges, and it pulls the mean far away from the central median value. Right-skewed shapes like this are typical of salary distributions.
In distributions like these, if several very high-income individuals join the sample, the mean increases substantially even though incomes for most people don’t change. They still cluster around the median.
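A quick numeric sketch (with made-up salary figures, in thousands of dollars) shows how strongly a few high earners move the mean while barely touching the median:

```python
# Hypothetical salary figures in $1000s: adding two very high earners
# pulls the mean up sharply while the median barely moves.
import numpy as np

salaries = np.array([38, 42, 45, 47, 50, 52, 55, 58, 62, 70])
with_outliers = np.append(salaries, [400, 550])  # two high earners join

print(np.mean(salaries), np.median(salaries))            # 51.9, 51.0
print(np.mean(with_outliers), np.median(with_outliers))  # ~122.4, 53.5
```

The mean more than doubles, while the median shifts by only a couple of units.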
In this situation, parametric and nonparametric tests can give you different results, and both can be correct! Imagine two such skewed populations whose medians are similar but whose tails differ. If you draw a large random sample from each, the difference between the means can be statistically significant while the difference between the medians is not. Here’s how this works.
For skewed distributions, changes in the tail affect the mean substantially. Parametric tests can detect this mean change. Conversely, the median is relatively unaffected, and a nonparametric analysis can legitimately indicate that the median has not changed significantly.
You need to decide whether the mean or median is best for your study and which type of difference is more important to detect.
Related post: Determining which Measure of Central Tendency is Best for Your Data
Advantage 2: Nonparametric tests are valid when your sample size is small and your data are potentially nonnormal
Use a nonparametric test when your sample size isn’t large enough to satisfy the requirements in the table above and you’re not sure that your data follow the normal distribution. With small sample sizes, be aware that tests for normality can have insufficient power to produce useful results.
This situation is difficult. Nonparametric analyses tend to have lower power at the outset, and a small sample size only exacerbates that problem.
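To see why normality tests struggle with small samples, here’s a simulation sketch (the exponential population and n = 10 are my own arbitrary choices): even for clearly skewed data, the Shapiro-Wilk test frequently fails to flag nonnormality at this sample size.

```python
# Simulation sketch (arbitrary setup): draw many small samples (n = 10)
# from a clearly skewed exponential population and count how often the
# Shapiro-Wilk test detects the nonnormality at the 0.05 level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
trials, n = 1000, 10
detections = sum(
    stats.shapiro(rng.exponential(size=n)).pvalue < 0.05
    for _ in range(trials)
)
detection_rate = detections / trials
print(f"Nonnormality detected in {detection_rate:.0%} of small samples")
```

With samples this small, the test misses the skew a large share of the time, so a non-significant normality test is weak evidence that your data are actually normal.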
Advantage 3: Nonparametric tests can analyze ordinal data, ranked data, and outliers
Parametric tests can analyze only continuous data, and their findings can be unduly affected by outliers. Conversely, nonparametric tests can also analyze ordinal and ranked data without being tripped up by outliers.
Sometimes you can legitimately remove outliers from your dataset if they represent unusual conditions. However, sometimes outliers are a genuine part of the distribution for a study area, and you should not remove them.
You should verify the assumptions for nonparametric analyses because the various tests can analyze different types of data and have differing abilities to handle outliers.
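Here’s a sketch of that outlier resistance (invented numbers): a single wild value can wash out a t-test result while leaving the rank-based Mann-Whitney test untouched, because ranks only use the ordering of the values.

```python
# Sketch with invented data: one extreme outlier wrecks the t-test but
# leaves the rank-based Mann-Whitney result unchanged.
import numpy as np
from scipy import stats

control = np.array([12, 13, 14, 15, 15, 16, 17, 18, 19, 20], dtype=float)
treated = np.array([16, 17, 18, 19, 19, 20, 21, 22, 23, 24], dtype=float)

clean_t = stats.ttest_ind(control, treated).pvalue
clean_u = stats.mannwhitneyu(control, treated).pvalue

# Replace one treated value with a wild outlier. It was already the
# largest value, so the ranks (and the Mann-Whitney test) don't change.
treated_out = treated.copy()
treated_out[-1] = 500.0

outlier_t = stats.ttest_ind(control, treated_out).pvalue
outlier_u = stats.mannwhitneyu(control, treated_out).pvalue

print(f"t-test: {clean_t:.4f} -> {outlier_t:.4f}")
print(f"Mann-Whitney: {clean_u:.4f} -> {outlier_u:.4f}")
```

The t-test goes from significant to non-significant because the outlier inflates the variance, while the Mann-Whitney p-value is identical before and after.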
If your data use the ordinal Likert scale and you want to compare two groups, read my post about which analysis you should use to analyze Likert data.
Related post: Data Types and How to Use Them
Advantages and Disadvantages of Parametric and Nonparametric Tests
Many people believe that the decision between using parametric or nonparametric tests depends on whether your data are normally distributed. If you have a small dataset, the distribution can be a deciding factor. However, in many cases the distribution is not the critical issue, for the following reasons:
 Parametric analyses can analyze nonnormal distributions for many datasets.
 Nonparametric analyses have other firm assumptions that can be harder to meet.
The answer is often contingent upon whether the mean or median is a better measure of central tendency for the distribution of your data.
 If the mean is a better measure and you have a sufficiently large sample size, a parametric test usually is the better, more powerful choice.
 If the median is a better measure, consider a nonparametric test regardless of your sample size.
Lastly, if your sample size is tiny, you might be forced to use a nonparametric test. It would make me ecstatic if you collected a larger sample for your next study! As the table shows, the sample size requirements aren’t too large. If you have a small sample and must use a less powerful nonparametric analysis, you doubly lower the chances of detecting an effect.
Jovana says
Hi Jim,
Thank you for this nice explanation. I must consult with you regarding the situation I have with my data. I have 10 data sets (10 different metals), each data set consisting of 20 values (5 values in 4 seasons). These are measurements of metal concentrations in fish liver, and I want to assess whether there are seasonal variations. I tested the normality of the distributions and got a normal distribution for 7 metals and a non-normal distribution for 3. I tested the homogeneity of variance (Levene’s test) and found that 6 of the metals have homogeneous variance, while the other 4 metals (3 of which have a non-normal distribution) do not. Finally, my question is: should I use a parametric test (one-way ANOVA) for all 10 data sets, since the majority have normal distributions and homogeneous variance? Should I use a nonparametric test (Kruskal-Wallis H) since my data sets are not large (20 values)? Or should I test normally distributed data with a parametric test and non-normally distributed data with a nonparametric test?
Thank you in advance,
Kind regards,
Jovana
Pam says
Hi again Jim,
This time my query regarding missing data when sample size is low. How do we deal with missing dependent variables in a continuous data set observed at different time intervals?
Is multiple imputation a good option when data (sample) is missing at some time points and some were not detected due to method limitations. Some suggest replacing undetected data with the lowest possible value, such as 1/2 of the limit of detection instead of using zero. Can undetected data be treated as missing data?
I have looked up some multiple imputation methods in SPSS but am not sure how acceptable it is and how to report it if acceptable.
Please enlighten with your expertise.
Thank you in advance!
Jim Frost says
Hi Pam,
Generally speaking, the less data you have the more difficult it is to estimate missing data. The missing values also play a larger role because they’re part of a smaller set. I don’t have personal experience using SPSS’ missing data imputation. I’ve read about it and it sounds good, but I’m not sure about limitations.
I’m not really sure about the detection limits issue. For one thing, I’d imagine that it depends on whether the lowest detectable value is still large enough to be important to your study. In other words, if it is so low that you’re not missing anything important, it might not be a problem. Perhaps the lowest detectable value is so low that in practical terms it’s not different from zero. But, that might not be the case. Additionally, I’d imagine it also depends on how much of your data fall in that region. If you’re obtaining lots of missing values or zeroes because many of the observations fall within that range, it becomes more problematic. Consequently, how to address it becomes very context-sensitive, and I wouldn’t be able to give you a good answer. I’d consult with subject-area specialists and see how similar studies have handled it. Sorry I couldn’t give you a more specific answer.
Pam says
Great! Thanks Jim. This is really helpful.
Cheers!
Brittney says
Thank you so much for this article! I wasn’t planning on using statistics in my research, but my research took a turn and my committee wanted to see testable hypotheses…for paleontology! Ugh. But, this article and your website is incredibly useful in dusting off the stats in my brain!
Jim Frost says
Your kind words mean so much to me. Thank you, Brittney!
Pam says
Hi Jim,
Thank you for making statistics a lot easier to understand. I now understand that parametric tests can be performed on nonnormal data if the sample size is big enough, as indicated.
I have a few confusions regarding when and when not to perform a log transformation of skewed data.
When does the data have to be log-transformed to perform statistical analysis? Can parametric tests be done on log-transformed data, and how do we report the results after log transformation?
Do you have a blog post regarding this? Please provide your expert insights on these when possible.
Thank you
Jim Frost says
Hi Pam,
Yes, you can log transform data and use parametric analyses, although it does change a key aspect of the test. You can present the results as saying that the difference between the log-transformed means is statistically significant. Then, back-transform those values to the natural units and present those as well. Also, note that using log-transformed data changes the nature of the test so that it compares geometric means rather than the usual arithmetic means. Be sure that is acceptable. Also, check that the transformed data follow the normal distribution.
However, you generally don’t need to do this if you have a large enough sample size per group–as I point out in this post. Consider using transformations only when the data are severely skewed and/or you have a smaller sample size. Unfortunately, I don’t have a blog post on this process. However, unless you have a strong need to transform your data, I would not use that approach.
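To make the procedure concrete, here’s a small sketch (the lognormal data and parameters are invented for illustration): t-test the log-transformed values, then back-transform the log-scale means, which yields geometric rather than arithmetic means.

```python
# Sketch of the log-transform procedure with invented lognormal data.
# Note: exp(mean(log(x))) is the GEOMETRIC mean, not the arithmetic mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.lognormal(mean=3.0, sigma=0.5, size=30)
b = rng.lognormal(mean=3.4, sigma=0.5, size=30)

# Run the parametric test on the log-transformed values.
log_a, log_b = np.log(a), np.log(b)
result = stats.ttest_ind(log_a, log_b)

# Back-transform the log-scale means to the natural units.
geo_mean_a = np.exp(log_a.mean())
geo_mean_b = np.exp(log_b.mean())
print(f"p = {result.pvalue:.4f}")
print(f"Geometric means: {geo_mean_a:.1f} vs {geo_mean_b:.1f}")
```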
I hope this helps!
Mrinali says
Very helpful article. Nice explanation
Lynn says
Jim, your site in general and this page have helped me understand statistics so much better as a novice. Regarding the Wilcoxon, although super helpful in understanding the basics, I’m still unsure about how I can relate this to my study. It’s been loosely suggested to me by a peer that I use the Wilcoxon test, but I’m not sure how to confirm this.
I have 13 participants. They each watched Video 1 and answered 16 corresponding questions (8 for construct A and 8 for construct B). They then watched Video 2 and answered the same 16 questions (8 for construct A and 8 for construct B). The questions were 3-, 5-, and 7-point Likert scale questions.
I want to find the differences in ratings between Videos 1 and 2 for construct A, the differences in ratings between Videos 1 and 2 for construct B, and the highest rated Video in total (combining both constructs). Any advice? Thanks
Asmat says
It is really helpful article. I learned a lot. Thanks for posting.
Jim Frost says
You’re very welcome. I’m glad it was helpful!
Asmat says
Thanks Jim. Which posthoc test would you suggest in this case. I really appreciate it. Thanks.
Jim Frost says
The post-hoc test I’m most familiar with is the Games-Howell test, which is similar to Tukey’s test. I’m sure there are others, but I’m not familiar with them. For more information and an example of Welch’s ANOVA with this post-hoc test, read my post on Welch’s ANOVA.
Asmat says
Hi Jim,
I am dealing with 6 groups of a data set with different sample sizes. The minimum sample size of one group is 56 and the maximum is 350, and the other groups’ sample sizes are in between these two points. My data is not normal, and through Levene’s test I found that the variances are not equal. I think a comparison of means is more meaningful than medians here. Could you please guide me to select between Welch’s ANOVA and the Kruskal-Wallis test?
Thanks
Jim Frost says
Hi Asmat,
Given your large sample sizes, unequal variances, and the fact that you want to compare means, I’d use Welch’s ANOVA.
Best of luck with your analysis!
Ferhat says
Hi from Turkey
I have followed your posts for 6 months. Every article is better than the last. Thank you for making me love statistics.
Jim Frost says
Hi Ferhat, thank you so much! That means a lot to me!
John says
Hi Jim,
This is really an insightful article. I have a question though regarding my study. Can I still use a parametric test even if the distribution is not normal and the variances aren’t homogeneous? I checked those assumptions via the Shapiro-Wilk test and Levene’s F-test, and the results suggested that both assumptions were violated. Other online articles mentioned that if this is the case, I should use a nonparametric test, but I also read somewhere that one-way ANOVA would do. By the way, I have 3 groups with an equal number of observations, i.e., 21 for each group.
Thanks for your time.
Jim Frost says
Hi John,
If your sample size per group meets the requirements that I present in Advantage #1 for parametric tests, then nonnormal data are not a problem. These tests are robust to departures from normality as long as you have a sufficient number of observations per group.
As for unequal variances, you often have stricter requirements when you use nonparametric tests. This fact isn’t discussed much, but nonparametric tests typically require the same spread across groups. For t-tests and ANOVA, you have options that allow you to use them when variances are not equal. For example, for ANOVA you can use Welch’s ANOVA. For details on that method, read my post about Welch’s ANOVA.
Based on your sample size per group, you should be able to use ANOVA regardless of whether the data are normally distributed. If you suspect that the variances are not equal, you can use Welch’s ANOVA.
I hope this helps.
John says
Thanks a lot for your prompt response, Jim. Really appreciate it. I’ll check on Welch’s ANOVA, then. Again, many thanks!
jain says
My data of 350 observations doesn’t follow the normal distribution. Which one should I take, median or mean? How should it be reported? Should I report the mean, SD, CV, etc.?
Jim Frost says
Hi Jain,
The answer to this question depends on which measure best represents the middle of your distribution and what is important to the subject area. In general, the more skewed your distribution, the more you should consider using the median. Graph your data to help answer this question. Also, I’ve written a post about the different measures of central tendency that you should read!
I hope this helps!
Muhammad Nazir says
Thanks Respected Sir
I got your point. You are great.
Jim Frost says
You’re welcome. I’m glad I could help!
Muhammad Nazir says
There is no significant difference in pre-intervention scores of the groups, with p-value > 0.05, but when we look at the mean scores of the groups there are minor differences among them. In this case, can I use ANCOVA?
Jim Frost says
ANCOVA allows you to include a covariate (a continuous variable that might be correlated with the dependent variable) in the analysis along with your categorical variables (factors). Telling me about the means of the groups is not applicable to whether you should use ANCOVA specifically. Do you have a continuous independent variable to include in the analysis?
I’m not sure why you’re analyzing the pre-intervention scores? However, it is entirely normal to see differences between the group means when the p-value is greater than 0.05. That issue, though, does not relate to whether you should use ANCOVA or not.
If you have only the 5 groups and there are no other variables in your analysis, no, you can’t use ANCOVA because you don’t have a covariate. It seems like you should use one-way ANOVA. You can subtract the pretest scores from the posttest scores so you’re analyzing the differences by group. This process will tell you how the changes in the experimental groups compare to the change in the control group.
Muhammad Nazir says
Respected Sir, please answer my last two questions too.
Jim Frost says
I will, Muhammad. Please keep in mind that the website is something I do in my spare time. I try to answer all questions but sometimes it will take a day or two depending on what else I have going on.
Muhammad Nazir says
Thanks Great Sir
Muhammad Nazir says
Dear Jim Frost thanks for your kind reply,
Please also guide and answer my two questions more:
1. No significant difference was found among the covariates, with p > 0.05, before intervention. But there is a minor difference in their mean scores. In this case, can I use ANCOVA with covariates whose p > 0.05?
Is it okay that using ANCOVA will remove the initial differences found in the mean scores of the covariates, even though no significant difference was found (p > 0.05) before intervention?
2. In my experimental study the sample size is 50. There are 5 groups (4 experimental and 1 control group). I am using a randomized pretest-posttest control group design, but some people say this research design is not appropriate. Please guide me: is this design okay or not? If not, then please tell me the appropriate design.
I am giving different interventions to 4 experimental groups, No intervention to control group. Please reply immediately.
Jim Frost says
Hi Muhammad,
I’m a bit confused by your first question. Covariates are continuous variables, so there are not any significant differences between group means to assess. Covariates don’t assess the differences between the means of the levels of a categorical variable. Instead, you use the p-value to determine whether there is a significant relationship between the covariate and the dependent variable, in the same manner as for linear regression. Usually, if it is not significant, you don’t include it in the model. However, if theory strongly suggests that it should be in the model, it is OK to include it even when the p-value is greater than 0.05.
I don’t see why a pretest-posttest design would not be OK. But, I don’t have much information to go by. Why did they say it was not appropriate?
Muhammad Nazir says
Actually, I have only 10 subjects in each group, which is not greater than 15. That’s why I asked.
Jim Frost says
Hi Muhammad,
That size limit is only important when your data don’t follow a normal distribution. You said that your data do follow the normal distribution. So, it shouldn’t be a problem!
Muhammad Nazir says
I have 5 groups in an experimental study (4 experimental and 1 control). Sample size is 50 with 10 subjects in each group. All groups have normal distributions. Can I use a parametric test? Please reply immediately.
Jim Frost says
Hi Muhammad, given what you state, I see no reason why you couldn’t use a parametric test.
sam says
Hi Jim, thanks for the overview! Do you happen to have a source/reference I can refer to when using the claims you make as argumentation in my paper?
Jim Frost says
Hi Sam, I include a link in this post to a white paper about the sample size claims. You’ll find your answers there!
Mohammad Hasan says
Wonderful article…love all your articles…😃
Jim Frost says
Thank you, Mohammad! That means a lot to me!
david okurut says
I have benefited from your information. May God bless You.
Jim Frost says
Thank you, David! It makes me happy to hear that this has been helpful for you!
Anitha Suseelan.s. says
Very nice explanation of central tendencies
Jim Frost says
Thank you, Anitha!
Mosbah says
How can I cite this article?
Jim Frost says
Hi, there are several standard formats for electronic sources, such as MLA, APA, and Chicago style. You’ll need to check with your institution to determine which one you should use.
BIRUK AYALEW Wondem says
very nice
Lucas says
Great article. This is one of those statistical tests that took a while to understand. But you explained it very nicely!
Jim Frost says
Thank you so much Lucas!