Nonparametric tests don’t require that your data follow the normal distribution. They’re also known as distribution-free tests and can provide benefits in certain situations. Typically, people who perform statistical hypothesis tests are more comfortable with parametric tests than nonparametric tests.
You’ve probably heard it’s best to use nonparametric tests if your data are not normally distributed—or something along these lines. That seems like an easy way to choose, but there’s more to the decision than that.
In this post, I’ll compare the advantages and disadvantages to help you decide between using the following types of statistical hypothesis tests:
- Parametric analyses to assess group means
- Nonparametric analyses to assess group medians
In particular, I’d like you to focus on one key reason to perform a nonparametric test that doesn’t get the attention it deserves! If you need a primer on the basics, read my hypothesis testing overview.
Related Pairs of Parametric and Nonparametric Tests
Nonparametric tests are a shadow world of parametric tests. In the table below, I show linked pairs of statistical hypothesis tests.
| Parametric tests of means | Nonparametric tests of medians |
|---|---|
| 1-sample t-test | 1-sample Sign, 1-sample Wilcoxon |
| 2-sample t-test | Mann-Whitney test |
| One-Way ANOVA | Kruskal-Wallis, Mood's median test |
| Factorial DOE with one factor and one blocking variable | Friedman test |
Advantages of Parametric Tests
Advantage 1: Parametric tests can provide trustworthy results with distributions that are skewed and nonnormal
Many people aren’t aware of this fact, but parametric analyses can produce reliable results even when your continuous data are nonnormally distributed. You just have to be sure that your sample size meets the requirements for each analysis in the table below. Simulation studies have identified these requirements. Read here for more information about these studies.
| Parametric analyses | Sample size requirements for nonnormal data |
|---|---|
| 1-sample t-test | Greater than 20 observations |
| 2-sample t-test | Each group should have more than 15 observations |
| One-Way ANOVA | For 2-9 groups, each group should have more than 15 observations; for 10-12 groups, each group should have more than 20 observations |
You can use these parametric tests with nonnormally distributed data thanks to the central limit theorem. For more information about it, read my post: Central Limit Theorem Explained.
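If you'd like to see the central limit theorem in action, here's a minimal simulation sketch in Python; the exponential population and the settings are just illustrative assumptions. With a true null hypothesis, strongly skewed data, and a sample size above the guideline in the table, the t-test's false-positive rate stays near the nominal significance level.

```python
# Simulation sketch: 1-sample t-tests on skewed (exponential) data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, n_sims, alpha = 25, 10_000, 0.05

false_positives = 0
for _ in range(n_sims):
    # Skewed population: exponential with a true mean of 1.0
    sample = rng.exponential(scale=1.0, size=n)
    # Test the TRUE null hypothesis that the population mean is 1.0
    if stats.ttest_1samp(sample, popmean=1.0).pvalue < alpha:
        false_positives += 1

# With n = 25 (above the "greater than 20" guideline), this should
# land near the nominal 0.05 level despite the skewed data.
print(f"Observed Type I error rate: {false_positives / n_sims:.3f}")
```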
Related posts: The Normal Distribution and How to Identify the Distribution of Your Data.
Advantage 2: Parametric tests can provide trustworthy results when the groups have different amounts of variability
It’s true that nonparametric tests don’t require data that are normally distributed. However, nonparametric tests have the disadvantage of an additional requirement that can be very hard to satisfy. The groups in a nonparametric analysis typically must all have the same variability (dispersion). Nonparametric analyses might not provide accurate results when variability differs between groups.
Conversely, parametric analyses, like the 2-sample t-test or one-way ANOVA, allow you to analyze groups with unequal variances. In most statistical software, it’s as easy as checking the correct box! You don’t have to worry about groups having different amounts of variability when you use a parametric analysis.
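In SciPy, for example, that "checkbox" is the equal_var argument of ttest_ind: setting it to False runs Welch's t-test, which doesn't assume equal variances. A quick sketch with made-up data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=10.0, scale=1.0, size=30)  # low variability
group_b = rng.normal(loc=10.8, scale=4.0, size=30)  # high variability

# equal_var=False requests Welch's t-test, which does not assume
# equal group variances.
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")
```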
Related post: Measures of Variability
Advantage 3: Parametric tests have greater statistical power
In most cases, parametric tests have more power. If an effect actually exists, a parametric analysis is more likely to detect it.
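To make that concrete, here's a rough simulation sketch (made-up normal data, not a formal power analysis) comparing how often a 2-sample t-test and the Mann-Whitney test detect the same true shift:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, n_sims, alpha, shift = 20, 5_000, 0.05, 0.8

t_hits = mw_hits = 0
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(shift, 1.0, n)  # a real effect exists
    if stats.ttest_ind(a, b).pvalue < alpha:
        t_hits += 1
    if stats.mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha:
        mw_hits += 1

# Under normality, the t-test typically detects the effect slightly
# more often than the rank-based Mann-Whitney test.
print(f"t-test power:       {t_hits / n_sims:.3f}")
print(f"Mann-Whitney power: {mw_hits / n_sims:.3f}")
```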
Related post: Statistical Power and Sample Size
Advantages of Nonparametric Tests
Advantage 1: Nonparametric tests assess the median, which can be better for some study areas
Now we’re coming to my preferred reason to use a nonparametric test, the one that practitioners don’t discuss frequently enough!
For some datasets, nonparametric analyses provide an advantage because they assess the median rather than the mean. The mean is not always the better measure of central tendency for a sample. Even though you can perform a valid parametric analysis on skewed data, that doesn’t necessarily equate to being the better method. Let me explain using the distribution of salaries.
Salaries tend to follow a right-skewed distribution. The majority of wages cluster around the median, which is the point where half are above and half are below. However, a long tail stretches into the higher salary ranges, and it pulls the mean far away from the central median value. This shape is typical for salary distributions.
In distributions like this, if several very high-income individuals join the sample, the mean increases by a significant amount even though incomes for most people don’t change. They still cluster around the median.
In this situation, parametric and nonparametric tests can give you different results, and both can be correct! Picture two salary distributions that share the same median, but one has a longer right tail and, therefore, a higher mean. If you draw a large random sample from each population, the difference between the means is statistically significant. Despite this, the difference between the medians is not statistically significant. Here’s how this works.
For skewed distributions, changes in the tail affect the mean substantially. Parametric tests can detect this mean change. Conversely, the median is relatively unaffected, and a nonparametric analysis can legitimately indicate that the median has not changed significantly.
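Here's a small numeric sketch of that idea, using made-up lognormal data as a stand-in for salaries:

```python
import numpy as np

rng = np.random.default_rng(3)
# Made-up right-skewed "salaries" (lognormal stand-in)
salaries = rng.lognormal(mean=10.8, sigma=0.5, size=1_000)

print(f"Median: {np.median(salaries):,.0f}")
print(f"Mean:   {salaries.mean():,.0f}")

# Add a handful of very high earners: the mean jumps noticeably,
# while the median barely moves.
with_high_earners = np.append(salaries, [2_000_000] * 5)
print(f"Median after: {np.median(with_high_earners):,.0f}")
print(f"Mean after:   {with_high_earners.mean():,.0f}")
```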
You need to decide whether the mean or median is best for your study and which type of difference is more important to detect.
Related post: Determining which Measure of Central Tendency is Best for Your Data
Advantage 2: Nonparametric tests are valid when your sample size is small and your data are potentially nonnormal
Use a nonparametric test when your sample size isn’t large enough to satisfy the requirements in the table above and you’re not sure that your data follow the normal distribution. With small sample sizes, be aware that normality tests can have insufficient power to produce useful results.
This situation is difficult. Nonparametric analyses tend to have lower power at the outset, and a small sample size only exacerbates that problem.
Advantage 3: Nonparametric tests can analyze ordinal data, ranked data, and outliers
Parametric tests can analyze only continuous data, and their findings can be overly affected by outliers. Conversely, nonparametric tests can also analyze ordinal and ranked data, and they are not tripped up by outliers.
Sometimes you can legitimately remove outliers from your dataset if they represent unusual conditions. However, sometimes outliers are a genuine part of the distribution for a study area, and you should not remove them.
You should verify the assumptions for nonparametric analyses because the various tests can analyze different types of data and have differing abilities to handle outliers.
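As a quick sketch of the outlier point (again with made-up data), one wild observation can inflate the t-test's variance estimate enough to wash out a real difference, while the rank-based Mann-Whitney test simply treats it as the largest rank:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
a = rng.normal(10.0, 1.0, 15)
b = np.append(rng.normal(11.5, 1.0, 15), 80.0)  # one wild observation

# The outlier inflates the t-test's variance estimate and can hide a
# genuine group difference; Mann-Whitney only sees the largest rank.
print("t-test p:      ", stats.ttest_ind(a, b).pvalue)
print("Mann-Whitney p:", stats.mannwhitneyu(a, b, alternative="two-sided").pvalue)
```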
If your data use the ordinal Likert scale and you want to compare two groups, read my post about which analysis you should use to analyze Likert data.
Related posts: Data Types and How to Use Them and 5 Ways to Find Outliers in Your Data
Advantages and Disadvantages of Parametric and Nonparametric Tests
Many people believe that choosing between parametric and nonparametric tests depends on whether your data follow the normal distribution. If you have a small dataset, the distribution can be a deciding factor. However, in many cases, this issue is not critical because of the following:
- Parametric analyses can analyze nonnormal distributions for many datasets.
- Nonparametric analyses have other firm assumptions that can be harder to meet.
The answer is often contingent upon whether the mean or median is a better measure of central tendency for the distribution of your data.
- If the mean is a better measure and you have a sufficiently large sample size, a parametric test usually is the better, more powerful choice.
- If the median is a better measure, consider a nonparametric test regardless of your sample size.
Lastly, if your sample size is tiny, you might be forced to use a nonparametric test. It would make me ecstatic if you collect a larger sample for your next study! As the table shows, the sample size requirements aren’t too large. If you have a small sample and need to use a less powerful nonparametric analysis, it doubly lowers the chance of detecting an effect.
If you’re learning about hypothesis testing and like the approach I use in my blog, check out my eBook!
Sir, while comparing parametric and non-parametric methods, we miss the two real questions:
1) What if we use non-parametric tests under parametric conditions?
2) What if we use parametric tests under non-parametric conditions?
Please detail the error in the outcome as the real-life deterrent. Thanks
Hi, I touch on those issues in this post. Specifically:
1) Typically, non-parametric tests have less power than their parametric counterparts. For power reasons, you’ll want to use a parametric test when it’s valid. Using a nonparametric test in these conditions increases the Type II error rate (false negatives).
2) If you use a parametric test when a nonparametric test is appropriate, you’ll obtain inaccurate results. The Type I error rate won’t necessarily equal the significance level you define for the test. I’m not sure if there is a consistent direction of change in that error rate. I suspect that the Type I error rate can be higher or lower than the significance level depending on the nature of the violation.
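As a rough sketch of point 2 (a simulation with made-up normal data), a pooled-variance t-test applied when both the variances and the sample sizes are unequal produces a Type I error rate that drifts above or below the nominal level, depending on which group has the larger variance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_sims, alpha = 10_000, 0.05

def type1_rate(n1, sd1, n2, sd2):
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, sd1, n1)
        b = rng.normal(0.0, sd2, n2)  # same mean, so the null is true
        # equal_var=True forces the pooled-variance (classic) t-test
        if stats.ttest_ind(a, b, equal_var=True).pvalue < alpha:
            hits += 1
    return hits / n_sims

# Larger group has the SMALLER variance: error rate inflates above 0.05.
print(type1_rate(n1=50, sd1=1.0, n2=10, sd2=3.0))
# Larger group has the LARGER variance: error rate falls below 0.05.
print(type1_rate(n1=50, sd1=3.0, n2=10, sd2=1.0))
```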
I hope that helps.
Great article, thank you
But may I ask when it’s better to choose the mean or the median as the measure of central tendency for my data? Is there any guide?
Hi Maria,
Thanks for writing! In my post about measures of central tendency, I write about which measure is best for different situations, including choosing between the mean and median. I’d recommend reading it. In a nutshell, the mean is better when your data are symmetric, or at least not extremely skewed, while the median is better when your data are fairly skewed. In that post, I show why that’s the case.
Hi Jim. Very informative article. I would like to know one more thing.
Can we use parametric tests to analyse ordinal data? If so, in what circumstances? Please advise.
Hi Rafi,
That question has been behind many debates in statistics! In some cases, yes! In this post, I have a link near the end for an article I wrote about analyzing Likert scale data. The Likert scale is an ordinal scale. And for those data, you can use the parametric 2-sample t-test. That’s based on a thorough simulation study. However, I would not say that means you can always use parametric tests in all scenarios where you have ordinal data. There are probably requirements for sample sizes and the number of ordinal levels. At any rate, read that post about analyzing Likert data to get an idea of some of the issues and how it works out for 2-sample t-tests.
I hope that helps!
Hi Jim,
Thanks for the very informative article. It’s great to see all the hypothesis tests in one article, and I appreciate the detail and depth of the explanation.
One thing I’ve been stuck on is making the best choice between parametric and non-parametric tests when there are many varying features; under their influence, the distribution becomes highly uneven, making it hard to compare and harder to draw inferences.
But this is the actual case in practical applications when you want to do A/B testing. Real-life A/B testing involves dealing with distributions that vary greatly due to a high number of features (columns or variables).
For doing A/B testing with varying distributions in the two experiments, under conditions where multiple features are involved, would you recommend parametric or non-parametric statistical hypothesis tests?
(I have tried parametric statistical hypothesis tests, but it was getting hard to reach statistical significance because multiple features are involved. If I remove/ignore most of the variables, I may end up reaching statistical significance, but that may not be the intended purpose of A/B testing.)
Can you throw some light, please?
Hi Jim!
A researcher found that the majority of people who died during the pandemic had bought a new phone during the last year. What type of research is this? If his assumption is correct, which statistical test would be appropriate to analyze the data?
Please answer this question in detail. I will be really thankful to you.
Hi MahNoor, apparently this is a question from a test because someone else recently asked the identical question. I’m not going to do your test for you. However, I will point you toward a 2-sample proportions test, which will allow you to determine whether there is a difference in the proportion of fatalities between those who bought a new phone and those who didn’t.
Amazing thanks!
Hi Jim,
Thanks so much for explaining this all!
I want to compare the ages of two groups I have (one is only 17 people and one is 51 people). Because the first group is <20 people, do I need a Mann-Whitney U test, or can I just use a t-test here?
Many thanks!
Ben
Hi Ben,
Do you have any theoretical reasons or empirical data that suggests the population for the smaller group follows a nonnormal distribution? If you can reasonably assume that it follows a normal distribution, you can probably use a t-test. However, if you have any doubts about that, best to go with Mann-Whitney.
Hi Jim, thank you for the wonderful article! Can you tell me the special features of factorial designs? It would be very helpful.
Thanks heaps for this excellent overview.
However, I am a bit confused by ‘The groups in a nonparametric analysis typically must all have the same variability (dispersion).’
As far as I can remember, ANOVA, as a parametric test, assumes equal variances of the samples that will be tested.
Do you think I should stick with ANOVA if the samples are normally distributed but have unequal variances?
Hi Elzed,
If you have unequal variances, you can use Welch’s ANOVA. Click the link to read my post about it!
Thanks a bunch Jim !
Hi Jim,
Thanks for this article! I would like to kindly seek your advice.
I’m currently looking to filter out variables that are highly correlated so that I can remove one or the other before an analysis. I was thinking of using the nonparametric Spearman’s rank correlation; would that be correct? The data are continuous, with equal groups and more than 20 observations per group.
Hi Lisa,
You can use that or even just the regular Pearson’s correlation. If you’re performing regression analysis and worried about multicollinearity, you can fit the model with the variables and then check the VIFs.
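If it helps, here's a minimal sketch of both checks in Python; the file and column names are hypothetical placeholders:

```python
import pandas as pd
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical file of continuous predictor columns
df = pd.read_csv("predictors.csv")

# Spearman's rank correlation between two hypothetical predictors
rho, p = stats.spearmanr(df["x1"], df["x2"])
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")

# VIFs for a regression context; values above roughly 5-10 often flag
# troublesome multicollinearity. Add a constant so each auxiliary
# regression includes an intercept.
X = sm.add_constant(df)
for i, col in enumerate(X.columns):
    if col != "const":
        print(col, variance_inflation_factor(X.values, i))
```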
Hi Jim.
Thank you for your article; it was very helpful. I was wondering if you could help me. I’m currently doing my thesis and am carrying out a few statistical tests. One is an independent-samples t-test with one categorical independent variable (PP group 1, N = 57; PP group 2, N = 45) and one continuous dependent variable. However, my data violate the assumptions of normality and homogeneity of variance and have a few outliers. In this case, should I bootstrap my t-test or use the alternative nonparametric test (Mann-Whitney U)? How would I make this decision? What would the criteria be for using bootstrapping over the alternative nonparametric test?
Thanks in advance for any insight you can offer! 🙂
Hi Heather,
In your case, I would strongly consider using the t-test. In fact, there are specific reasons for not using a nonparametric test in your case.
Specifically, you have a large enough sample size in each group so that the central limit theorem kicks in (see the table in the post for sample size requirements). Even though the data in your groups are non-normal, the sampling distributions should follow a normal distribution, which gives you valid results. Additionally, t-tests can handle unequal variances. Just be sure that your statistical software uses the version of the t-test that does NOT assume equal variances.
While nonparametric tests don’t assume that your data follow a particular distribution, they do assume that the spread of the data in each group is the same. Because your data have different variances, it violates that assumption for nonparametric tests.
I’d use the t-test! You could also use bootstrapping, but a t-test should work fine.
Hi Jim, very good post (along with many others in your blog). Could you please provide a formal reference for the table of minimum sample sizes?
Thanks a lot!
Ben
Thanks a lot for the valuable information, but may I ask what you mean by a tiny sample size? Is it fewer than 30 observations?
Thank you.
Hello Jim, when did you publish this article? I would like to cite it for my school work
Hi Mukhles,
I’m glad this article was helpful for you! When you cite web pages, you actually use the date you accessed the article. See this link at Purdue University for Electronic Citations. Look in the section for “A Page on a Website.” Thanks!
Hi Jim, would you please answer one of my doubts? I’m badly stuck on it.
Hi Akshat,
Please find the blog post that is closest to the topic of your question. There is a search box in the right hand column part way down that can help you. Ask your question in the comments of the appropriate post and I’ll answer it!
Just wanted to add that the book “Nonparametric Statistical Inference, fifth edition” by Gibbons and Chakraborti (2010; CRC Press) has discussions about the power of some nonparametric tests, including Minitab Macro codes to simulate power. The updated edition (work in-progress) will discuss R codes. Hope this helps.
Hi Jim! Great article, it really helped me for my study.
Only problem now is that I need scientific papers for the statements made in your text, to refer to them in my study.
Specifically, I was wondering if you could provide me with the paper you used to draw this conclusion: “parametric tests have more power. If an effect actually exists, a parametric analysis is more likely to detect it.”
Thanks a lot!
Hi Julia,
Thanks for your kind words. I’m glad it was helpful!
It’s generally recognized that nonparametric tests have somewhat lower power compared to a similar parametric test. In other words, to have the same power as a similar parametric test, you’d need a somewhat larger sample size for the nonparametric test. That’s the tendency.
However, calculating the power for a nonparametric test and understanding the difference in power between specific parametric and nonparametric tests is difficult. The problem arises because the specific difference in power depends on the precise distribution of your data. That makes it impossible to state a constant power difference by test. In other words, the power difference doesn’t just depend on the tests themselves but also on the properties of your data.
For more information about these considerations, look at the following texts:
Walsh, J.E. (1962). Handbook of Nonparametric Statistics. New York: D. Van Nostrand.
Conover, W.J. (1980). Practical Nonparametric Statistics. New York: John Wiley & Sons.
Jim, do you have anything which describes how to estimate the power of a nonparametric test?
Hi Andrea,
Calculating power for nonparametric tests can be a bit complicated. For one thing, while nonparametric tests don’t require particular distributions, you need to know the distribution to be able to calculate statistical power for these tests. I don’t think many statistical packages have built in analyses for this type of power analysis. I’ve also heard of people using bootstrap methods or Monte Carlo simulations to come up with an answer. For these methods, you’ll still need either representative data or knowledge about the distribution.
Apparently, the pwr.boot function in R uses the bootstrap method to calculate power for nonparametric tests. Unfortunately, I have not used it myself, but it could be something to try. The problem is that you should not use data from a hypothesis test to calculate the power for that same hypothesis test. If the test was statistically significant, the power will be high. If the test was not significant, the power will be low. You don’t know the real power. So, I’m not sure about the rationale for using this command, but it is one approach.
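If you know, or are willing to assume, the distribution, a Monte Carlo estimate is straightforward to code yourself. Here's a minimal sketch for the Mann-Whitney test; the exponential populations and the effect size are placeholder assumptions you'd replace with values appropriate for your study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, n_sims, alpha = 30, 5_000, 0.05

hits = 0
for _ in range(n_sims):
    # ASSUMED population shapes; swap in whatever fits your subject area
    a = rng.exponential(scale=1.0, size=n)
    b = rng.exponential(scale=1.6, size=n)  # the effect you hope to detect
    if stats.mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha:
        hits += 1

print(f"Estimated power: {hits / n_sims:.3f}")
```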
Hi. I wanted to leave a comment . . .
Hi John,
Thanks for the heads-up. I tried sending you an email but it bounced.
Hi Jim,
Thank you for this nice explanation. I must consult with you regarding the situation I have with my data. I have 10 datasets (10 different metals), each consisting of 20 values (5 values in each of 4 seasons). These are measurements of metal concentrations in fish liver, and I want to assess whether there are seasonal variations. I tested the normality of the distributions and got a normal distribution for 7 metals and a nonnormal distribution for 3. I tested the homogeneity of variance (Levene’s test) and found that 6 of the metals have homogeneous variances, while the other 4 (3 of which have nonnormal distributions) do not. Finally, my question is: should I use a parametric test (one-way ANOVA) for all 10 datasets, since the majority have normal distributions and homogeneous variances? Should I use a nonparametric test (Kruskal-Wallis H) since my datasets are not large (20 values)? Or should I test the normally distributed data with a parametric test and the nonnormally distributed data with a nonparametric test?
Thank you in advance,
Kind regards,
Jovana
Hi again Jim,
This time my query is regarding missing data when the sample size is low. How do we deal with missing dependent variables in a continuous dataset observed at different time intervals?
Is multiple imputation a good option when data are missing at some time points and some values were not detected due to method limitations? Some suggest replacing undetected data with the lowest possible value, such as 1/2 of the limit of detection, instead of using zero. Can undetected data be treated as missing data?
I have looked up some multiple imputation methods in SPSS but am not sure how acceptable they are and how to report the results if they are acceptable.
Please enlighten with your expertise.
Thank you in advance!
Hi Pam,
Generally speaking, the less data you have the more difficult it is to estimate missing data. The missing values also play a larger role because they’re part of a smaller set. I don’t have personal experience using SPSS’ missing data imputation. I’ve read about it and it sounds good, but I’m not sure about limitations.
I’m not really sure about the detection limits issue. For one thing, I’d imagine that it depends on whether the lowest detectable value is still large enough to be important to your study. In other words, if it is so low that you’re not missing anything important, it might not be a problem. Perhaps the lowest detectable value is so low that in practical terms it’s not different from zero. But that might not be the case. Additionally, I’d imagine it also depends on how much of your data fall in that region. If you’re obtaining lots of missing values or zeroes because many of the observations fall within that range, it becomes more problematic. Consequently, how to address it becomes very context sensitive, and I wouldn’t be able to give you a good answer. I’d consult with subject-area specialists and see how similar studies have handled it. Sorry I couldn’t give you a more specific answer.
Great! Thanks Jim. This is really helpful.
Cheers!
Thank you so much for this article! I wasn’t planning on using statistics in my research, but my research took a turn and my committee wanted to see testable hypotheses…for paleontology! Ugh. But this article and your website are incredibly useful in dusting off the stats in my brain!
Your kind words mean so much to me. Thank you, Brittney!
Hi Jim,
Thank you for making statistics a lot easier to understand. I now understand that parametric tests can be performed on nonnormal data if the sample size is big enough, as indicated.
I have a few questions about when to log transform skewed data and when not to.
When do data have to be log transformed to perform a statistical analysis? Can parametric tests be done on log-transformed data, and how do we report the results after log transformation?
Do you have a blog post regarding this? Please provide your expert insights on these when possible.
Thank you
Hi Pam,
Yes, you can log transform data and use parametric analyses, although it does change a key aspect of the test. You can present the results by saying that the difference between the log-transformed means is statistically significant. Then, back-transform those values to the natural units and present those as well. Also, note that using log-transformed data changes the nature of the test so that it compares geometric means rather than the usual arithmetic means. Be sure that is acceptable. Also, check that the transformed data follow the normal distribution.
However, you generally don’t need to do this if you have a large enough sample size per group–as I point out in this post. Consider using transformations only when the data are severely skewed and/or you have a smaller sample size. Unfortunately, I don’t have a blog post on this process. However, unless you have a strong need to transform your data, I would not use that approach.
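If it's helpful, here's a minimal sketch of that workflow with made-up lognormal data: run the t-test on the log scale, then back-transform the log-scale means with exp() to report geometric means in natural units:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
group_a = rng.lognormal(mean=2.0, sigma=0.6, size=25)  # made-up skewed data
group_b = rng.lognormal(mean=2.4, sigma=0.6, size=25)

# Test on the log scale, where the data are closer to normal
log_a, log_b = np.log(group_a), np.log(group_b)
t_stat, p = stats.ttest_ind(log_a, log_b)
print(f"p = {p:.4f}")

# exp() of a log-scale mean is a geometric mean, so report these
# back-transformed values in natural units.
print(f"Geometric mean A: {np.exp(log_a.mean()):.2f}")
print(f"Geometric mean B: {np.exp(log_b.mean()):.2f}")
```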
I hope this helps!
Very helpful article. Nice explanation
Jim, your site in general and this page in particular have helped me understand statistics so much better as a novice. Regarding the Wilcoxon test, although this is super helpful for understanding the basics, I’m still unsure how to relate it to my study. It’s been loosely suggested by a peer that I use the Wilcoxon test, but I’m not sure how to confirm this.
I have 13 participants. They each watched Video 1 and answered 16 corresponding questions (8 for construct A and 8 for construct B). They then watched Video 2 and answered the same 16 questions. The questions used 3-, 5-, and 7-point Likert scales.
I want to find the differences in ratings between Videos 1 and 2 for construct A, the differences in ratings between Videos 1 and 2 for construct B, and the highest-rated video in total (combining both constructs). Any advice? Thanks
It is really helpful article. I learned a lot. Thanks for posting.
You’re very welcome. I’m glad it was helpful!
Thanks Jim. Which post-hoc test would you suggest in this case? I really appreciate it. Thanks.
The post-hoc test I’m most familiar with is the Games-Howell test, which is similar to Tukey’s test. I’m sure there are others, but I’m not familiar with them. For more information and an example of Welch’s with this post-hoc test, read my post on Welch’s ANOVA.
Hi Jim,
I am dealing with 6 groups in a dataset with different sample sizes. The minimum sample size of one group is 56, the maximum is 350, and the other groups’ sample sizes are between these two points. My data are not normal, and through Levene’s test I found that the variances are not equal. I think a comparison of means is more meaningful than medians here. Could you please guide me in choosing between Welch’s ANOVA and the Kruskal-Wallis test?
Thanks
Hi Asmat,
Given your large sample sizes, unequal variances, and the fact that you want to compare means, I’d use Welch’s ANOVA.
Best of luck with your analysis!
Hi from Turkey
I have followed your posts for 6 months. Every article is better than the last. Thank you for making me love statistics.
Hi Ferhat, thank you so much! That means a lot to me!
Hi Jim,
This is really an insightful article. I have a question, though, regarding my study. Can I still use a parametric test even if the distribution is not normal and the variances aren’t homogeneous? I checked those assumptions via the Shapiro-Wilk test and Levene’s F-test, and the results suggested that both assumptions were violated. Other online articles mentioned that if this is the case, I should use a non-parametric test, but I also read somewhere that one-way ANOVA would do. By the way, I have 3 groups with an equal number of observations, i.e., 21 for each group.
Thanks for your time.
Hi John,
If your sample size per group meets the requirements that I present in Advantage #1 for parametric tests, then nonnormal data are not a problem. These tests are robust to departures from normality as long as you have a sufficient number of observations per group.
As for unequal variances, you often have stricter requirements when you use nonparametric tests. This fact isn’t discussed much, but nonparametric tests typically require the same spread across groups. For t-tests and ANOVA, you have options that allow you to use them when variances are not equal. For example, for ANOVA you can use Welch’s ANOVA. For details on that method, read my post about Welch’s ANOVA.
Based on your sample size per group, you should be able to use ANOVA regardless of whether the data are normally distributed. If you suspect that the variances are not equal, you can use Welch’s ANOVA.
I hope this helps.
Thanks a lot for your prompt response, Jim. Really appreciate it. I’ll check on Welch’s ANOVA, then. Again, many thanks!
My data (n = 350) don’t follow a normal distribution. Which one should I use, the median or the mean? How should it be reported? Should I report the mean, SD, CV, etc.?
Hi Jain,
The answer to this question depends on which measure best represents the middle of your distribution and what is important to the subject area. In general, the more skewed your distribution, the more you should consider using the median. Graph your data to help answer this question. Also, I’ve written a post about the different measures of central tendency that you should read!
I hope this helps!
Thanks Respected Sir
I got your point. You are great.
You’re welcome. I’m glad I could help!
There is no significant difference in pre-intervention scores between groups (p-value > 0.05), but when we look at the mean scores of the groups, there are minor differences among them. In this case, can I use ANCOVA?
ANCOVA allows you include a covariate (a continuous variable that might be correlated with the dependent variable) in the analysis along with your categorical variables (factors). Telling me about the means of the groups is not applicable to whether you should use ANCOVA specifically. Do you have a continuous independent variable to include in the analysis?
I’m not sure why you’re analyzing the pre-intervention scores. It is entirely normal to see differences between the group means when the p-value is greater than 0.05. However, that issue does not relate to whether you should use ANCOVA or not.
If you have only the 5 groups and there are no other variables in your analysis, no you can’t use ANCOVA because you don’t have a covariate. Seems like you should use one-way ANOVA. You can subtract the pretest scores from the post-test scores so you’re analyzing the differences by group. This process will tell you how the changes in the experimental groups compare to the change in the control group.
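As a minimal sketch of that process (the file and column names are hypothetical placeholders), you can compute the change scores and run a one-way ANOVA on them:

```python
import pandas as pd
from scipy import stats

# Hypothetical dataset with columns: 'group', 'pre', 'post'
df = pd.read_csv("scores.csv")
df["change"] = df["post"] - df["pre"]  # post-test minus pretest

# One-way ANOVA on the change scores across the groups
samples = [g["change"].values for _, g in df.groupby("group")]
f_stat, p = stats.f_oneway(*samples)
print(f"F = {f_stat:.2f}, p = {p:.4f}")
```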
Respected Sir, please answer my last two questions too.
I will, Muhammad. Please keep in mind that the website is something I do in my spare time. I try to answer all questions but sometimes it will take a day or two depending on what else I have going on.
Thanks Great Sir
Dear Jim Frost, thanks for your kind reply.
Please also guide me and answer two more questions:
1. No significant difference was found among the covariates (p > 0.05) before the intervention, but there is a minor difference in their mean scores. In this case, can I use ANCOVA with covariates whose p-values are greater than 0.05?
Is it okay that using ANCOVA will remove the initial differences in the covariates’ mean scores, even though no significant difference (p > 0.05) was found before the intervention?
2. In my experimental study, the sample size is 50. There are 5 groups (4 experimental and 1 control group). I am using a randomized pretest-posttest control group design, but some people say this research design is not appropriate. Please guide me: is this design okay or not? If not, please tell me the appropriate design.
I am giving different interventions to the 4 experimental groups and no intervention to the control group. Please reply immediately.
Hi Muhammad,
I’m a bit confused by your first question. Covariates are continuous variables, so they don’t have group means to compare for significant differences. Instead, you use the p-value to determine whether there is a significant relationship between the covariate and the dependent variable, in the same manner as for linear regression. Usually, if it is not significant, you don’t include it in the model. However, if theory strongly suggests that it should be in the model, it is okay to include it even when the p-value is greater than 0.05.
I don’t see why a pretest-posttest would not be OK. But, I don’t have much information to go by. Why did they say it was not appropriate?
Actually, I have only 10 subjects in each group, which is not greater than 15. That’s why I asked.
Hi Muhammad,
That size limit is only important when your data don’t follow a normal distribution. You said that your data do follow the normal distribution. So, it shouldn’t be a problem!
I have 5 groups in an experimental study (4 experimental and 1 control). The sample size is 50, with 10 subjects in each group. All groups have normal distributions. Can I use a parametric test? Please reply immediately.
Hi Muhammad, given what you state, I see no reason why you couldn’t use a parametric test.
Hi Jim, thanks for the overview! Do you happen to have a source/reference I can refer to when using the claims you make as argumentation in my paper?
Hi Sam, I include a link in this post to a white paper about the sample size claims. You’ll find your answers there!
Wonderful article…love all your articles…😃
Thank you, Mohammad! That means a lot to me!
I have benefited from your information. May God bless You.
Thank you, David! It makes me happy to hear that this has been helpful for you!
Very nice explanation of central tendencies.
Thank you, Anitha!
How can I cite this article?
Hi, there are several standard formats for electronic sources, such as MLA, APA, and Chicago style. You’ll need to check with your institution to determine which one you should use.
very nice
Great article. This is one of those statistical tests that took a while to understand. But you explained it very nicely!
Thank you so much Lucas!