You’re probably familiar with data that follow the normal distribution. The normal distribution is that nice, familiar bell-shaped curve. Unfortunately, not all data are normally distributed or as intuitive to understand. You can picture the symmetric normal distribution, but what about the Weibull or Gamma distributions? This uncertainty might leave you feeling unsettled. In this post, I show you how to identify the probability distribution of your data.
You might think of nonnormal data as abnormal. However, in some areas, you should actually expect nonnormal distributions. For instance, income data are typically right skewed. If a process has a natural limit, data tend to skew away from the limit. For example, purity can’t be greater than 100%, which might cause the data to cluster near the upper limit and skew left towards lower values. On the other hand, drill holes can’t be smaller than the drill bit. The sizes of the drill holes might be right-skewed away from the minimum possible size.
Data that follow any probability distribution can be valuable. However, many people don't feel as comfortable with nonnormal data. Let's shed light on how to identify the distribution of your data!
We'll learn how to identify the probability distribution using body fat percentage data from middle school girls that I collected during an experiment. You can download the CSV data file: body_fat.
Related posts: Understanding Probability Distributions and The Normal Distribution
Graph the Raw Data
Let's plot the raw data to see what it looks like.
The histogram gives us a good overview of the data. At a glance, we can see that these data clearly are not normally distributed. They are right skewed. The peak is around 27%, and the distribution extends further into the higher values than to the lower values. Learn more about skewed distributions. Histograms can also identify bimodal distributions.
These data are not normal, but which probability distribution do they follow? Fortunately, statistical software can help us!
Related posts: Using Histograms to Understand Your Data, Dot Plots: Using, Examples, and Interpreting, and Assessing Normality: Histograms vs. Normal Probability Plots
Using Distribution Tests to Identify the Probability Distribution that Your Data Follow
Distribution goodness-of-fit tests are hypothesis tests that determine whether your sample data were drawn from a population that follows a hypothesized probability distribution. Like any statistical hypothesis test, distribution tests have a null hypothesis and an alternative hypothesis.
- H0: The sample data follow the hypothesized distribution.
- H1: The sample data do not follow the hypothesized distribution.
For distribution goodness-of-fit tests, small p-values indicate that you can reject the null hypothesis and conclude that your data were not drawn from a population with the specified distribution. However, we want to identify the probability distribution that our data follow rather than the distributions they don't follow! Consequently, distribution tests are a rare case where you look for high p-values to identify candidate distributions. Learn more about Goodness of Fit: Definition & Tests.
Before we test our data to identify the distribution, here are some measures you need to know:
Anderson-Darling statistic (AD): There are different distribution tests. The test I'll use for our data is the Anderson-Darling test. The Anderson-Darling statistic is the test statistic. It's like the t-value for t-tests or the F-value for F-tests. Typically, you don't interpret this statistic directly, but the software uses it to calculate the p-value for the test.
P-value: Distribution tests that have high p-values are suitable candidates for your dataโs distribution. Unfortunately, it is not possible to calculate p-values for some distributions with three parameters.
LRT P: If you are considering a three-parameter distribution, assess the LRT P to determine whether the third parameter significantly improves the fit compared to the associated two-parameter distribution. An LRT P value that is less than your significance level indicates a significant improvement over the two-parameter distribution. If you see a higher value, consider staying with the two-parameter distribution.
Note that this example covers continuous data. For categorical and discrete variables, you should use the chi-square goodness of fit test.
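By the way, if you don't have Minitab, you can run the same kind of test in other statistical software. Here's a minimal sketch in Python with SciPy; the sample below is simulated as a stand-in for the body fat data, and the goodness_of_fit function requires SciPy 1.10 or later:

```python
import numpy as np
from scipy import stats

# Simulated stand-in for the body fat data (n = 92).
rng = np.random.default_rng(42)
data = rng.lognormal(mean=3.32, sigma=0.24, size=92)

# Anderson-Darling test for normality. SciPy reports the AD statistic
# with critical values rather than an exact p-value.
ad = stats.anderson(data, dist='norm')
print(ad.statistic, ad.critical_values)

# For other distributions, SciPy 1.10+ approximates a p-value for the
# Anderson-Darling statistic via Monte Carlo simulation.
gof = stats.goodness_of_fit(stats.lognorm, data, statistic='ad')
print(gof.statistic, gof.pvalue)
```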
Goodness of Fit Test Results for the Distribution Tests
I'm using Minitab, which can test 14 probability distributions and two transformations all at once. Let's take a look at the output. We're looking for higher p-values in the Goodness-of-Fit Test table below.
As we expected, the Normal distribution does not fit the data. The p-value is less than 0.005, which indicates that we can reject the null hypothesis that these data follow the normal distribution.
The Box-Cox transformation and the Johnson transformation both have high p-values. If we need to transform our data to follow the normal distribution, the high p-values indicate that we can use these transformations successfully. However, we'll disregard the transformations because our goal is to identify the probability distribution of our data, not to transform the data.
The highest p-value is for the three-parameter Weibull distribution (>0.500). For the three-parameter Weibull, the LRT P is significant (0.000), which means that the third parameter significantly improves the fit.
The lognormal distribution has the next highest p-value of 0.345.
Let's consider the three-parameter Weibull distribution and lognormal distribution to be our top two candidates.
Related post: Understanding the Weibull Distribution
Using Probability Plots to Identify the Distribution of Your Data
Probability plots might be the best way to determine whether your data follow a particular distribution. If your data follow the straight line on the graph, the distribution fits your data. This process is simple to do visually. Informally, this process is called the "fat pencil" test. If all the data points line up within the area of a fat pencil laid over the center straight line, you can conclude that your data follow the distribution.
Probability plots are also known as quantile-quantile plots, or Q-Q plots. These plots are similar to Empirical CDF plots except that they transform the axes so the fitted distribution follows a straight line.
Q-Q plots are especially useful in cases where the distribution tests are too powerful. Distribution tests are like other hypothesis tests. As the sample size increases, the statistical power of the test also increases. With very large sample sizes, the test can have so much power that trivial departures from the distribution produce statistically significant results. In these cases, your p-value will be less than the significance level even when your data follow the distribution.
The solution is to assess Q-Q plots to identify the distribution of your data. If the data points fall along the straight line, you can conclude the data follow that distribution even if the p-value is statistically significant. Learn more about QQ Plots: Uses, Benefits & Interpreting.
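If you'd like to draw these plots yourself outside Minitab, here's a minimal sketch using scipy.stats.probplot. The sample is simulated as a stand-in for the body fat data, and the lognormal shape value is estimated from the logged data:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(42)
data = rng.lognormal(mean=3.32, sigma=0.24, size=92)  # stand-in sample

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

# Normal probability plot: points hug the line only if the data are normal.
stats.probplot(data, dist='norm', plot=ax1)
ax1.set_title('Normal')

# Lognormal probability plot: shape parameters go in sparams.
sigma = np.log(data).std()  # shape estimate from the logged data
stats.probplot(data, dist='lognorm', sparams=(sigma,), plot=ax2)
ax2.set_title('Lognormal')
plt.show()
```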
The probability plots below include the normal distribution, our top two candidates, and the gamma distribution.
The data points for the normal distribution don't follow the center line. However, the data points do follow the line very closely for both the lognormal and the three-parameter Weibull distributions. The gamma distribution doesn't follow the center line quite as well as the other two, and its p-value is lower. Again, it appears that the choice comes down to our top two candidates from before. How do we choose?
An Additional Consideration for Three-Parameter Distributions
Three-parameter distributions have a threshold parameter. The threshold parameter is also known as the location parameter. This parameter shifts the entire distribution left and right along the x-axis. The threshold/location parameter defines the smallest possible value in the distribution. You should use a three-parameter distribution only if the location truly is the lowest possible value. In other words, use subject-area knowledge to help you choose.
The threshold parameter for our data is 16.06038 (shown in the table below). This cutoff point defines the smallest value in the Weibull distribution. However, in the full population of middle school girls, it is unlikely that there is a strict cutoff at this value. Instead, lower values are possible even though they are less likely. Consequently, I'll pick the lognormal distribution.
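As a side note, if you're reproducing this outside Minitab, SciPy calls the threshold parameter loc. A sketch of fitting the three-parameter Weibull to a simulated stand-in sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.lognormal(mean=3.32, sigma=0.24, size=92)  # stand-in sample

# Three-parameter Weibull: shape, threshold (loc), and scale all estimated.
shape, threshold, scale = stats.weibull_min.fit(data)
print(f"shape={shape:.3f}, threshold={threshold:.3f}, scale={scale:.3f}")

# Force the threshold to zero to get the two-parameter Weibull instead.
shape2, _, scale2 = stats.weibull_min.fit(data, floc=0)
```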
Related post: Understanding the Lognormal Distribution
Parameter Values for Our Distribution
We've identified our distribution as the lognormal distribution. Now, we need to find the parameter values for it. Population parameters are the values that define the shape and location of the distribution. We just need to look at the distribution parameters table below!
Our body fat percentage data for middle school girls follow a lognormal distribution with a location of 3.32317 and a scale of 0.24188.
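To put these parameter values to work, note that Minitab's lognormal location and scale are the mean and standard deviation of the logged data. In SciPy, they map to scale = exp(mu) and the shape argument s. A short sketch using the estimates above:

```python
import numpy as np
from scipy import stats

# Minitab's lognormal location/scale are mu and sigma of the logged data.
mu, sigma = 3.32317, 0.24188
dist = stats.lognorm(s=sigma, scale=np.exp(mu))

print(dist.mean())                   # expected body fat percentage
print(dist.cdf(30.0))                # proportion expected below 30% body fat
print(dist.ppf([0.25, 0.50, 0.75]))  # quartiles of the fitted distribution
```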
Below, I created a probability distribution plot of our two top candidates using the parameter estimates. It displays the probability density functions for these distributions. You can see how the three-parameter Weibull distribution stops abruptly at the threshold/location value. However, the lognormal distribution continues to lower values.
Identifying the probability distribution that your data follow can be critical for analyses that are very sensitive to the distribution, such as capability analysis. In a future blog post, I'll show you what else you can do by simply knowing the distribution of your data. This post covers continuous data and continuous probability distributions. If you have discrete data, read my post about Goodness-of-Fit Tests for Discrete Distributions.
Finally, I'll close this post with a graph that compares the raw data to the fitted distribution that we identified.
Note: I wrote a different version of this post that appeared elsewhere. I’ve completely rewritten and updated it for my blog site.
Sam says
If I have data and I am told in a question what the 50th, 75th, and 95th percentiles are, how can I then identify its distribution?
KALYAN DAWAR says
If my data is very big, how can I identify its distribution, and what are the techniques? Please describe briefly.
Jim Frost says
Hi Kalyan,
It’s not entirely clear what you’re asking exactly. I think you’re asking about how to identify the distribution of your data when you have a very large dataset.
So, as a refresher: the problem with using distribution tests to identify the distribution of a very large dataset is that they'll have very high statistical power and will find trivial departures from a distribution to be statistically significant. That makes the distribution hard to identify!
In those cases, I recommend using QQ Plots. Click the link to learn more.
Anurag says
Hello
How do we identify the distribution when we are dealing with censored data?
Please let me know
Thank You
Anurag says
Hello
My data are [6,10,3,2,16,1,17,11,4,5], and I want to know the distribution and estimate its parameters. As these points look completely random and don't seem to follow any particular distribution, I tried a non-parametric method (KDE) and obtained the KDE plot, but I am not able to understand how to interpret that plot or how to proceed with estimating the parameters from here.
Please help me proceed further with this or if there is any other way or method to deal with this problem. Please let me know
Thank You
Leyla DepretBixio says
Hello Jim,
How can I test for different distributions in SAS, like you are doing in Minitab?
thank you very much
Leyla
Gustavo says
Hi Jim, thanks for your answer!!!
Yes, my data are purity measurements, and the UCL is 98.5 because this is the acceptance limit. Values lower than 98.5 are not expected and must have a justification for why they happened. In the data set I have 60 values that are lower than the UCL. Most of them, around 40 samples, are higher than 97, and only 9 are lower than 95, with 84 being the lowest value.
If I remove these justified data points from my analysis, would it bias the analysis?
About the p-value you are right, it was a typo I am looking for p-value greater than 0.05
I checked the probability plots and none of them came anywhere close to following a straight line. I guess this is happening because of the high skewness.
If you could help with this problem, I would appreciate it so much.
Jim Frost says
Hi Gustavo,
Ah, so that’s NOT an Upper Control Limit (UCL) then. It’s actually the LOWER control limit or LCL. That’s what confused me.
Assuming that the data below the LCL are correct (that is, they are out of spec, but the measurements are valid), then you should leave them in your dataset. However, if you have reason to believe that the measurements themselves are not valid due to some error, then you can take them out. But if they're valid, they represent part of the distribution of outcomes and you should leave them in.
The skewness by itself isn't the problem because some probability distributions can fit skewed data. It's probably the specific shape of the skewness that is causing problems. The probability plots transform the axes so even skewed data can follow the straight line on the graph. They don't have to follow the line perfectly. Do the "fat pencil test" that I describe here (I'm talking about normal distributions there, but it also applies to probability (Q-Q) plots for other distributions).
I’m assuming that you also checked to see if any transformations can make your data normal? If not, look for that. But I’m guessing you did because it’s right there in the output with the other distributions.
Also, did you check to be sure that your data are in statistical control using a control chart? Out of control data can cause problems like this.
If all the above checks out, then it gets tougher.
I’d do some research and see what others in a similar subject area have done. Someone else might have figured it out!
If that doesn't work, you might need to look into other methods that I've heard of but am not overly familiar with, such as nonparametric or bootstrapped capability analysis. Those methods should be able to handle data that don't have an identifiable distribution. Unfortunately, Minitab can't do those types.
Unfortunately, that's all I've got! Hopefully, one of those suggestions works.
Gustavo says
Hi, i read your post and it was very helpful, but I am still having some troubles while analyzing my data.
My data set is process yield in %, and the closer to 100% the better. The data set has around 1100 samples, and only 60 of them are smaller than 98.5, which is my UCL. So my data are highly skewed to the left (skewness = -8). I would like to run a capability test, but as I cannot find a suitable distribution for my data set, I think the capability test may give some inconsistent results.
When I run a probability distribution test in Minitab, no distribution gives me a p-value greater than 0.005. So what should I do?
When I have a distribution that has a natural limit, as 100% is the max value I can get, which approach should I use to treat or analyse the data?
Jim Frost says
Hi Gustavo,
If close to 100% is better, why is the UCL at only 98.5%? Are these purity measurements by any chance? Those tend to be left-skewed when 100% is desired.
Also, just to be clear, you state you're looking for a p-value greater than 0.005, but that should be 0.05.
Here’s one possibility to consider. You have a very large sample size with n=1100. That gives these distribution tests very high statistical power. Consequently, they can detect trivial departures from any given probability distribution. Check the probability plots and see if they tend to follow the straight line. That’s the approach I recommend, particularly for very large samples like yours. I talk about that in this post in the section titled, “Using Probability Plots . . .”. If the dots follow the line fairly well, go with that distribution despite the p-value.
If you still can’t identify a distribution, let me know and I’ll think about other possibilities.
For capability analysis, choosing the correct distribution matters. Using the wrong one will definitely mess up the results! Capability analysis is sensitive to that.
Jeremy says
Hi Jim, It is interesting to see that Minitab tests 14 possible distributions! But as I understand it, there’s more than one “version” of any given distribution—for example a “normal” distribution is a bell-shaped curve, but may be slightly taller, or slightly wider and fatter than some other normal distribution curve, and still be “normal” within limits at least. My question is, in your body fat data for example, if you sampled body fat at a different school and you still got a lognormal curve but one that was wider and not quite as tall, would the probability of having a value between 20 and 24% (for example) still be the same? Or would it vary based on how squeezed or stretched your lognormal curve is (while still being lognormal)? Does the software compute this based on some ideal lognormal curve or use the actual data?
Jim Frost says
Hi Jeremy,
That’s a great question.
The first thing that I'd point out is that there is not one Normal distribution or any other distribution. There are an infinite number of normal distributions. They share some characteristics, such as being symmetrical, having a single peak in the center, and tapering off equally in both directions from the mean. However, they can be taller and narrower or shorter and wider, and have the majority of their values fall in entirely different places than other normal distributions. The same concept applies to lognormal and other distributions.
For this reason, I try to say things like, the data follow A normal distribution. Or A lognormal distribution. Rather than saying the data follow the normal or lognormal distribution because there isn’t one of each. Instead, the body fat percentage data follow a lognormal distribution with specific parameters.
To answer your question, yes, if I had taken a sample from another school, I would've likely gotten a slightly different distribution. It could've been squeezed or stretched a bit as you describe. That other lognormal distribution would've produced a somewhat different probability for values falling within that interval. Like many things we do in statistics, we're using samples to estimate populations. A key notion in inferential statistics is that those estimates will vary from sample to sample. The quality of our estimates depends on how good our sample is. How large is it? Does it represent the population? Etc.
The software estimates these parameters using maximum likelihood estimation (MLE). Likelihood estimation is a process that calculates how likely a population with particular parameters is to produce a random sample with your sample’s properties. Maximizing that likelihood function simply means choosing the population parameters that are MOST likely to have produced your sample’s characteristics. It performs that process for all distributions. Then you need to determine which distribution best fits your data.
So, with this example, we end up with a lognormal distribution providing the best fit with specific parameters that were most likely to produce our sample.
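If you're curious what that looks like in practice, here's a bare-bones sketch of maximum likelihood estimation in Python. The data are simulated as a stand-in; a real analysis would use the actual sample:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(42)
data = rng.lognormal(mean=3.32, sigma=0.24, size=92)  # stand-in sample

# Negative log-likelihood for a lognormal with parameters (mu, sigma).
def nll(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    return -np.sum(stats.lognorm.logpdf(data, s=sigma, scale=np.exp(mu)))

# Maximizing the likelihood = minimizing the negative log-likelihood.
result = optimize.minimize(nll, x0=[1.0, 1.0], method='Nelder-Mead')
print(result.x)

# For the lognormal, the MLE has a closed form: the mean and (population)
# standard deviation of the logged data. Use it as a check.
print(np.log(data).mean(), np.log(data).std())
```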
Aron Haracska says
Hi Jim!
I've read through your forum and I purchased two of your books, and everything is fantastic. Thank you for all the great information!
However, I do have a question (which popped up during my scientific research) that I couldn’t seem to find an answer to: how do I interpret the results of OLS when the dependent variable has been transformed using a Box-Cox transformation? (It was necessary, since the residuals were extremely non-normal, but this fixed the issue)
More specifically, I’m looking to answer the following questions:
1) Do my independent variables have a significant effect on the dependent variable?
2) What's the direction of the effect of my significant independent variables (positive/negative)?
3) What's the order of my independent variables by strength of effect? (e.g.: Which independent variable has the strongest effect, and which one has the weakest effect?)
Please note that I'm not trying to build a predictive model; I just want to know what the important independent variables are in my model, their direction of effect, and their ordinal strength (strongest – 2nd strongest – … – 2nd weakest – weakest). Also, when looking at their "ordinal strength" (my own words, haha), am I correct in assuming that I should be looking at their standardized coefficients?
Or, for this purpose, is the normality of my residuals important at all? The significant independent variables do change after the Box-Cox transformation of the dependent variable, I just don’t know which model (transformed or untransformed DV) answers my research questions better…
Sorry for the long post, keep up the good work!
Thanks!
Jim Frost says
Hi Aron,
Thanks for writing and thanks so much for buying two of my books. If you happen to have bought my regression book, go to Chapter 9 and look for a section titled, “Using Data Transformations to Fix Problems.” There’s a sub-section in it about “How to Interpret the Results for Transformed Data.” I think the entire section will be helpful for you but particularly the interpretation one.
In the transformations section, I note how transformations are my solution of last resort. They can fix problems but, as you're finding, they complicate interpretation. So, you have non-normal residuals. Hopefully you've tried other solutions to fix that, such as specifying a better model, for example, one that properly models curvature. However, sometimes that's just not possible. In that case, a transformation might be the best choice possible. Another option would be trying a generalized linear model that doesn't necessarily require your residuals to follow a normal distribution but allows them to follow other distributions.
But back to transformations. If you’re stuck using a transformation, the results apply to the transformed data, and you need to describe them that way. For example, you might say there is a significant, positive relationship between the predictor and the Box-Cox transformed response variable. And in that case, the coefficients explain changes in the transformed response. It’s just not as intuitive understanding what the results mean. Some software can automatically back transform the numbers to help you understand some of it, but you’re not really seeing the true relationships.
If you were developing a predictive model, many of these concerns would be lessened because you wouldn't need to understand the explanatory roles of each variable. However, you would still need to back transform the predicted values. The predicted values you get "out of the box" will be in transformed units. Additionally, the margin of error (prediction intervals) might be drastically different depending on your predictor values. The transformation will make the transformed prediction intervals nice and constant, but that won't necessarily be true for the back transformed PIs. So, you'd need to back transform those too. Again, some software does that automatically. I know Minitab statistical software does that.
Understanding the predictive power of each predictor is complicated by the transformation because the standardized coefficients apply to the transformed data. In non-transformed models, you’re correct that standardized coefficients are a good measure to consider. Another good measure is the change in R-squared when the variable is added to the model last. However, the R-squared for your model applies to the transformed DV.
I guess for your overall question about how essential the transformation is to use, like many things in statistics, it depends on all the details. If your residuals are severely non-normal, then it's important. However, if they're only mildly non-normal, not so much. What I'd do is graph your residuals using a normal probability plot (aka a Q-Q plot) and use the "fat pencil test" I describe in the linked post. BTW, that post is about Q-Q plots and data distributions, but it applies to your residuals as well.
I hope that helps clarify some of the issues! Transformations can help matters but they do cause complications for interpretation.
Sayed Jawid says
6) As a proxy for exposure to benzene (a known human carcinogen) you collect 30 samples (one sample from 30 individuals who work at an oil refinery) looking for phenol in the urine. The measure is usually reported as mg/g of creatinine. The mean concentration of all the samples was 252.5 mg/g of creatinine. This is worrying to you because you know that values above 250 indicate an overexposure to benzene. You look at the descriptive statistics and find that the standard deviation in the sample is 75, the range is 500 (2-502), and the interquartile range is 50 (57-107)
a. Looking at the standard deviation, range, and IQR what do you suspect about the distribution of the data?
b. What is the standard error of the mean for this sample?
c. What is the 95% confidence interval of the mean?
d. In your own words, what can you say about the sample you have collected with respect to the mean you have calculated, the 95% CI, and the levels at which we become concerned about overexposure (250mg/g creatinine).
Jim Frost says
Hi Sayed,
I’m not going to answer your homework question for you, but I’ll provide some suggestions and posts I’ve written that will help you answer them yourself. That’s the path to true learning!
One key thing you need to do is determine the general shape of your distribution. At the most basic level, that means determining whether it is symmetrical (e.g., normally distributed) or skewed. That’s easy if you have the data and can graph it. However, if you just have the summary statistics, you can still draw some conclusions. For tips on determining the shape of the distribution, read my post about Skewed Distributions. To help answer that, you’ll need to know what the median is and compare it to the mean. If the median is not provided, you know it falls somewhere within the IQR. I’ll give you the hint that you have reason to believe it is skewed and not normal. Or the dataset might contain one or more extreme outliers.
Read Standard error of the mean to see how to calculate and interpret it.
Learn how to use the SEM to calculate and interpret the 95% Confidence Interval.
By understanding the IQR and quartiles, you can determine what percentage of the sample is below 107 (the upper IQR value).
I hope that helps!
Nishanth says
Hello Jim
All (30) data points are 6.5 & 6.6.
P-Value is <0.005
The individual distributions show p-values of <0.005 and <0.010.
How do I choose a (non-normal) distribution for calculating process capabilities? Below are the values for reference.
Goodness of Fit Test

| Distribution | AD | P | LRT P |
| --- | --- | --- | --- |
| Normal | 12.101 | <0.005 | |
| Box-Cox Transformation | 12.101 | <0.005 | |
| Lognormal | 12.101 | <0.005 | |
| Exponential | 22.715 | <0.003 | |
| 2-Parameter Exponential | 14.362 | <0.010 | 0.000 |
| Weibull | 14.524 | <0.010 | |
| Smallest Extreme Value | 14.524 | <0.010 | |
| Largest Extreme Value | 11.028 | <0.010 | |
| Gamma | 12.246 | <0.005 | |
| Logistic | 11.973 | <0.005 | |
| Loglogistic | 11.973 | <0.005 | |
ML Estimates of Distribution Parameters

| Distribution | Location | Shape | Scale | Threshold |
| --- | --- | --- | --- | --- |
| Normal* | 6.57600 | | 0.04314 | |
| Box-Cox Transformation* | 12302.42739 | | 397.08665 | |
| Lognormal* | 1.88341 | | 0.00659 | |
| Exponential | | | 6.57600 | |
| 2-Parameter Exponential | | | 0.07755 | 6.49845 |
| Weibull | | 278.12723 | 6.59360 | |
| Smallest Extreme Value | 6.59364 | | 0.02355 | |
| Largest Extreme Value | 6.55247 | | 0.04785 | |
| Gamma | | 23582.81096 | 0.00028 | |
| Logistic | 6.58577 | | 0.02289 | |
| Loglogistic | 1.88490 | | 0.00350 | |

* Scale: Adjusted ML estimate
Your response is much appreciated.
Jim Frost says
Hi Nishanth,
That's a tough dataset you have! The p-values are all significant, which indicates that none of the distributions fit. However, I notice you don't include some of the distributions with more parameters (e.g., the three-parameter Weibull). You should check those. Also, the Johnson transformation is not included.
If you can’t find any distribution that the data fit, or get a successful transformation, you might need a nonparametric approach. Or a bootstrapping approach. Unfortunately, your data just don’t follow any of the listed distributions!
Taewoo Ko says
There is a mention that “The p-value is less than 0.005, which indicates that we can reject the null hypothesis that these data follow the normal distribution.”
Can the above statement be rephrased to say that if the p-value is greater than 0.005, we can be sure that the actual data follow the null hypothesis?
I would like to understand the difference between the statements "we can accept the null hypothesis" and "we failed to reject the null hypothesis".
Jim Frost says
Hi Taewoo,
Thanks for writing with your great question!
First, I should clarify that the correct cutoff value is 0.05. When the p-value is less than or equal to 0.05 for a normality test, we can reject the null hypothesis and conclude that the data do not follow a normal distribution.
Distribution tests are unusual among hypothesis tests. For almost all other tests, we want p-values to be low and significant, and draw conclusions when they are. However, for distribution tests, it's a good sign when p-values are high. We fail to reject the null.
However, we never say that we accept the null. Why not? Well, it has to do with being unable to prove a negative. All we can say is that we have not seen evidence that the data do not follow the normal distribution. However, we can never prove that negative. Perhaps our sample size is too small to detect the difference, or the data are too noisy? I wrote a post about this very issue that you should read: Failing to Reject the Null Hypothesis. That should help you understand why that is the correct wording!
Gemechu Asfaw says
How do we identify whether the data follow a binomial, Poisson, or other distribution, rather than just whether they are normal or not?
Jim Frost says
Hi Gemechu,
That’s a great question. I’ve written a post that covers exactly that and I discuss both the binomial and Poisson distributions, along with others. Please read my post, Goodness-of-Fit Tests for Discrete Distributions.
Jenny Taylor says
Hi Jim, if a dataset has a skewness of -0.3, can we still consider it to be approximately normally distributed? Is the Jarque-Bera test a good way to verify whether the distribution of a dataset is 'normal'? Thank you.
Mounika Tripurari says
If my data are not following any distribution, can I say they approximately follow a Weibull distribution using the probability plot? If yes, can you share any reference document?
Jim Frost says
Hi Mounika,
If your data are not following any distribution, I'm not sure why you'd be able to say they're following a Weibull distribution? Are you saying that the p-value is significant but the dots on the probability plot follow the straight line? It's hard to tell from what you wrote. If that's the case, you can conclude that the data follow the distribution. That usually happens when you have a large dataset.
shamshul othman says
If the continuous data fit a distribution type other than the normal distribution, say Weibull, making it a parametric test, can we do ANOVA just like with the normal distribution?
Jim Frost says
Hi Shamshul,
Generally speaking, when we are talking about parametric tests, they assume that the data follow the normal distribution specifically. There are exceptions, but ANOVA does assume normality. However, when your data exceed a certain sample size, these analyses are valid with nonnormal data. For more information about this and a table with the sample sizes, please see my post about parametric vs. nonparametric analyses. I include ANOVA in that table.
Collinz says
Hope you are doing great.
In your hypothesis testing ebook, you clearly expressed that there's no need to worry about the normality assumption provided the dataset is large.
Now I see you emphasizing the need to determine the distribution of the data.
Under what circumstances do I need to determine the distribution of my data so that I can make transformations before proceeding to hypothesis testing?
In other words, when do I have to worry about the normality of my data?
It's because after reading your ebook, I clearly noticed that normality is not a big issue I should pay attention to when my sample data is huge.
Jim Frost says
Hi Collinz,
You’re quite right that many hypothesis tests don’t require the data to follow a normal distribution when you have a large enough sample. And, an important note, the sample size doesn’t have to be huge. Often you don’t need more than 15-20 observations per group to be able to waive the normality assumption. At any rate, on to answering your question!
There are other situations where knowing the distribution is crucial. Often these are situations where you want to determine probabilities of outcomes falling within particular ranges. For example, capability analysis determines a process's capability of producing parts that fall within the spec limits. Or, perhaps you want to calculate percentiles for your data using the probability distribution function. In these cases, you need to know which distribution best fits your data. In fact, it'll often be obvious that the data don't follow the normal distribution (as with the data in this example), and then the next step becomes determining which distribution your data follow.
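For instance, once you've settled on a distribution and its parameters, those probability calculations are one-liners. Here's a sketch using the lognormal fit from this post; the spec limits are hypothetical:

```python
import numpy as np
from scipy import stats

# Fitted lognormal from the post; the spec limits below are hypothetical.
dist = stats.lognorm(s=0.24188, scale=np.exp(3.32317))
lsl, usl = 20.0, 35.0

# Proportion of the population expected inside the spec limits.
within = dist.cdf(usl) - dist.cdf(lsl)
print(f"{within:.1%} of values expected within the spec limits")
print(dist.ppf(0.95))  # 95th percentile of the fitted distribution
```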
Thanks for the great question! And, I hope that helps clarify it.
eli says
Hi Jim i have a question
Why do we need other continuous distributions if everything just converges to normal? Why do we need to define other distributions?
Jim Frost says
Hi Eli/Asya,
Continuous distributions don't necessarily converge to normality. As I describe in this post, some continuous distributions are naturally nonnormal. Gathering larger and larger samples for these inherently nonnormal distributions won't produce a normal distribution.
I think you're referring to the central limit theorem. This theorem states that sampling distributions of the mean will approximate the normal distribution even when the population distribution is not normal. The fact that this occurs is very helpful in allowing you to use some hypothesis tests even when the distribution of values is not normal. For more information, read my post about the central limit theorem.
However, sometimes you need to understand the properties of the distribution of values and not the sampling distribution, which are very different things. Consequently, there are occasions when you need to identify the distribution of your data!
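If you want to see the central limit theorem in action, a quick simulation makes it concrete. This sketch uses an arbitrary exponential population purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# 10,000 samples of n = 40 from a right-skewed exponential population.
means = rng.exponential(scale=2.0, size=(10_000, 40)).mean(axis=1)

# Individual values are skewed, but the sample means are close to normal,
# centered on 2.0 with standard error roughly 2.0 / sqrt(40).
print(means.mean(), means.std())
```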
DB says
Thanks Jim for the wonderful article. I am new to the DS field and am trying to find ways to proceed on a project that I am working on. I have a dataset, say X, which is the number of hits our website receives, captured every hour (shall we call it the independent variable?). I also have Y1 and Y2, which are the dependent variables. Here Y1 is the CPU utilization and Y2 is the memory utilization of our servers. My objective is to calculate the expected CPU and memory utilization of, say, next month in relation to the volume, X, we receive.
When I plot X (I am unable to paste the picture here), it shows proper daily and weekly seasonality. In a day, the graph rises to a max peak around 11 am and dips down, and again reaches another peak around 2 pm. So it's kind of two bell curves in a day. This pattern repeats day after day. Also, the curves are similar on weekdays and weekends. Now, I used fbprophet to forecast X using past values of X.
Also, the Y1 CPU values make similar patterns; I am able to plot Y1 as well and forecast it using fbprophet.
However, I am in a situation where I need to find out the exact correlation between X and Y1 and Y2, as well as the correlation between Y1 & Y2 themselves and combinations of these 3…
I tried the add_regressor() method of fbprophet to influence the forecasts of Y1 and Y2. The resulting forecast values are much closer to the actuals (training data). However, I am not convinced by this approach. I need to mathematically derive the correlation between X and Y1, X and Y2, Y1 and Y2, and X and Y1 and Y2.
I checked the Pearson correlation; the number is positive 0.025 between X and Y1. I tried ANOVA with Excel and it shows a negative -1.025 (it says CPU is inversely correlated to volume), which is unbelievable because I expect only a positive correlation between X and Y1. I did Granger causality and it says X precedes Y1, which means my hypothesis that "Volume contributes to CPU" is true.
I am wondering how I can use a kind of moment generating function to exactly forecast or calculate values of Y1 and Y2 WITHOUT using forecasting models like ARIMA, etc.
I need to be able to calculate, with the least error margin, the values of Y1 and Y2 given the value of X.
Please advise me the best approach I need to take .
Thanks in advance
DB
Paul says
I have been stuck on a very important project for a long time, knowing in the back of my mind that if I could just learn what type of distribution my data set came from, I could make leaps and bounds worth of progress. I'm so glad I finally googled this problem directly and came upon this article.
I can't stress enough how valuable this blog post is to what I'm working on. Thank you so much, Jim.
Happybean says
Hello Jim,
Thank you for your input. I am wondering what it means if I have a distribution where the mean and standard deviation are really close together. Is this an indication of something? The data are exponentially distributed.
Diogo says
Hi Jim,
Thank you very much for your post! It helped me and a lot of other people out a lot!
Cheers,
Diogo
Manuel Soler Ortiz says
It definitely helped! I appreciate your detailed answer. Through it and the links provided, I even managed to work out a couple of follow-up questions I was ruminating on!
Since I started meddling with statistics, I've been under the impression that the hard part is developing the mindset to appropriately understand the results… without it, one tends to just "believe" the numbers. I thank you kindly for helping me understand.
Keep up the good work!
Jim Frost says
Hi Manuel,
I’m so glad to hear that! It’s easy to just go with the numbers. You learning how it all works is fantastic. I always think subject-area knowledge plus enough statistical knowledge to understand what the numbers are telling you, plus their limitations, is a crucial combination! Always glad to help!
Manuel Soler Ortiz says
I appreciate your detailed response, and the links provided allowed me to work out a couple of follow-up questions!
Since I started meddling with statistics and (theoretically) learned to use the tools I required, I've felt it takes time and practice to get the mindset needed to properly understand statistical results… and by default one tends to "believe" the numbers instead of understanding them! Thank you kindly for the attention.
Keep up the good work!
Manuel Soler Ortiz says
Hi Jim!
Thank you very much for your blog. Since I found it, I know where to search if I'm in dire need of statistical enlightenment!
I just noticed this article, and it left me wondering… If the best-fit distribution is chosen as the one with the highest p-value, doesn't that mean we're accepting the null hypothesis? This aspect of goodness-of-fit tests has always puzzled me.
I've skimmed through the comments and you address this somewhat, indicating that, technically, with high p-values "your sample provides insufficient evidence to conclude that the population follows a distribution other than the normal distribution". If we accept the distribution with the highest p-value as the best-fit distribution, but formally speaking we shouldn't accept the null hypothesis, how strong, then, is the evidence given by goodness-of-fit tests?
Thanks again, and sorry for the long question
Jim Frost says
Hi Manuel,
That is a great question. As I mention, this is an unusual case where we look for higher p-values. However, it’s important to note that a high p-value is just one factor. You still need to incorporate your subject area knowledge. Notice that in this post, I don’t go with the distribution that has the highest p-value (3-parameter Weibull p > 0.500). Instead I go with the lognormal distribution, which has a high (0.345) p-value but not the highest. As I discuss near the end, I use subject area knowledge to choose between the two. So, it’s not just the p-value.
Also, bear in mind that you're looking at a range of distribution tests. Presumably some of those tests will reject the null hypothesis and help rule out some distributions. Notice in this example (which uses real data) that low p-values rule out many distributions, which helps greatly in narrowing down the possibilities. Consequently, we're not only picking by high p-values, we're also using low p-values to rule out possibilities.
Also consider that statistical power is important. For hypothesis tests in general, when you have a small sample size, your statistical power is lower, which means it is easier to obtain high p-values. Normally, that is good because it protects you from jumping to conclusions based on the larger sampling error that tends to happen in smaller samples. I write about the protective function of high p-values. However, in this scenario with distribution tests, where you want high p-values, an underpowered study can lead you in the wrong direction. Do keep an eye out for small sample sizes. I point out in this post that small sample sizes can cause these distribution tests to fail to identify departures from a specific distribution. Using the probability plots can help you identify some cases where a small sample deviates from the distribution being tested but the p-value is not significant. I discuss that in this post, but really focus on it in my post about using normal probability plots to assess normality. While that can help in some cases, you should always strive to have a larger sample size. I start to worry when it's smaller than 20.
I hope that helps!
Poonam Yerunkar says
Hello Jim,
Thanks a million for this wonderful article.
Honestly, I am also one of those people who are not very comfortable with distributions other than normal ones.
I was working on some data for which the distributions were very different from normal, and we wanted to perform linear regression. So, to even apply transformations to get to a normal shape, the first step was to identify the original distribution.
Your article helped me learn something new and very important.
Thanks again for sharing !
Regards,
Poonam
Jim Frost says
Hi Poonam,
I'm happy to hear that it was helpful! Thanks for writing!
Amogh Bharadwaj says
Hello Jim, thank you so much for this brilliant article. I am looking forward to the use cases after knowing the underlying distribution. Is the article up yet? Thank you!
Jim Frost says
Hi Amogh,
Thanks for the reminder! It’s not up yet but I do need to get around to writing it!
Anmol says
This is a wonderful article for a student like myself who is just beginning a statistics-oriented career. I want to know how to generate those 95% CI plots (% fat vs. percent). Further, I'm assuming that whenever such an activity needs to be done, we would have to start off with the frequency distribution and then transition to the probability distribution, correct? And is this probability distribution the same as the pdf? Please help me clear up my doubts.
Collin M says
Hello Jim, thank you for making statistics so easy to understand. I am pleased to inform you that I have managed to buy all three of your books, and I hope they will be of much help to me…
And it would be great if you could also write about how to report research work for publication.
Hana says
Okay, thank you so much. I really got the concept from your explanation; it is very clear!
Jim Frost says
Thanks, Hana! I’m so glad to hear that!
Hana says
Thanks, Jim, for the interesting and useful article. Do you recommend any alternative to Minitab, maybe an R package or other free software?
Jim Frost says
Hi Hana,
I don’t have a good sense for what other software, particularly free, would be best. I’m sure it’s doable in R though.
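That said, here's a rough sketch of the general idea in Python with SciPy, which is free. It fits candidate distributions one at a time and screens them with a Kolmogorov-Smirnov test. Note that testing against fitted parameters makes the p-values optimistic, so treat this as a rough screen rather than a substitute for the Anderson-Darling workflow shown in the post:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.lognormal(mean=3.32, sigma=0.24, size=92)  # stand-in sample

# Fit each candidate by maximum likelihood, then screen with a KS test.
for name in ['norm', 'lognorm', 'weibull_min', 'gamma', 'expon']:
    params = getattr(stats, name).fit(data)
    stat, p = stats.kstest(data, name, args=params)
    print(f"{name:12s} KS={stat:.3f} p={p:.3f}")
```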
Salwa says
Hi Jim
Thanks for this text
I want to ask you
How do I run a goodness-of-fit test in R when the distribution is not one of the defaults?
ChadL says
Hello Jim,
Thank you for this awesome article; it is very helpful. Quick question here: what should I look for when comparing the distribution of one sample against the distribution of another sample?
The end goal is to ensure that they are similar, so I imagine I want to make sure that their means are the same (an ANOVA test) and that their variances are the same (F-Test).
Jim Frost says
Hi Chad,
There’s a distinction between identifying the distribution of your data (Normal vs. Weibull, Lognormal, etc.) and estimating the properties of your distribution. Although, identifying the distribution does involve estimating the properties for each type of distribution.
The methods you describe would help you determine whether those two properties (means and variances) are different. Just be mindful of the statistical power of these tests. If you have particularly small sample sizes, the tests won't be sensitive enough to unequal means or variances. Failing to reject the null doesn't necessarily prove they're equal.
Additionally, testing the means using ANOVA assumes that the variances are equal unless you use Welch’s ANOVA. I write more about this in an article about Welch’s ANOVA versus the typical F-test ANOVA.
Awe says
Good question. Once you know the distribution of your data, you can better predict uncertainties, such as the likelihood of events occurring and their corresponding impacts. You can also make some meaningful decisions by setting categories.
adekanmbidende says
Thank you for the brilliant explanation. Please, what else can I do by simply knowing the distribution of my data?
SITHARA SASIDHARAN says
Hi Jim,
Very good explanation. Thank you so much for your effort. I have downloaded the Minitab software, but unfortunately I couldn't find the goodness-of-fit tab. Where can I find it? Kindly reply.
Hong says
Hello Jim, your article is very clear and easy to understand for a newbie in stats. I'm looking forward to the article that shows what I can do by simply knowing the distribution of my data. Have you already published it? If yes, can you send me the link?
Thanks again,
Hans says
How can I understand the p-value in distribution identification with a goodness-of-fit test?
For example, when the p-value is 0.45 for the normal distribution, does it mean the data have a 45% probability of fitting the normal distribution? Is that right?
Thank you very much!
Jim Frost says
Hi Hans,
When the p-value is less than your significance level, you reject the null hypothesis. That’s the general rule. In this case, the null hypothesis states that the data follow a specific distribution, such as the normal distribution. Consequently, if the p-value is greater than the significance level, you fail to reject the null hypothesis. Your data favor the notion that it follows the distribution you are assessing. In your case, the p-value of 0.45 indicates you can reasonably assume that your data follow the normal distribution.
As for the precise meaning of the p-value, it indicates the probability of obtaining your observed sample or more extreme if the null hypothesis is true. Your sample doesn’t perfectly follow the normal distribution. No sample follows it perfectly. There’s always some deviation. The deviation between the distribution of your sample and the normal distribution, and more extreme deviations, have a 45% chance of occurring if the null hypothesis is true (i.e., that the population distribution is normally distributed). In other words, your sample is not unusual if the population is normally distributed. Hence, our conclusion that your sample follows a normal distribution. Technically, we’d say that your sample provides insufficient evidence to conclude that the population follows a distribution other than the normal distribution.
P-values are commonly misinterpreted in the manner that you state. For more information, read my post about interpreting p-values correctly.
Hanna says
Thanks Jim!
Unfortunately, I can't find Minitab in the CRAN repository. Is there any other way to download the package? Is it available for R version 3.5.1?
Hanna
Jim Frost says
Hi Hanna!
Minitab is an entirely separate statistical software package–like SPSS (but different). It’s not an R function. Sorry about the confusion!
Hanna says
Hi Jim,
Thanks for this explanation! Is Minitab a function or a package? I'm wondering how you performed the goodness-of-fit tests for multiple distributions.
Many thanks,
Hanna
Jim Frost says
Hi Hanna,
Minitab is a statistical software package. Performing the various goodness-of-fit tests all at once is definitely a convenience. However, you can certainly try them one at a time. I’m not sure how other packages handle that.
Lakshay Guglani says
Hi Jim
Hope you are in the best of health. I had a query regarding the application part while modelling the severity of an event, say claim sizes in an insurance company: which distribution would be an ideal choice?
Gamma or Lognormal?
As far as I could make sense of it, the lognormal is preferable for modelling when dealing with units whose unit size is very small, e.g., alpha particles emitted per minute. Am I on the right lines?
Thanks a ton
Jim Frost says
Hi Lakshay,
I don’t know enough about claim sizes to be able to say–that’s not my field of expertise. You’ll probably need to do some research and try fitting some distributions to your data to see which one fits the best. I show you how to do that in this blog post.
Many distributions can model very small units. It’s more the overall shape of the distribution that is the limiting factor. Lognormal distributions are particularly good at modeling skewed distributions. I show an example of a lognormal distribution in this post. However, other distributions can model skewed distributions, such as the Weibull distribution. So, it depends on the precise shape of the skewness.
In general, the Weibull distribution is a very flexible distribution that can fit a wide variety of shapes. That would be a good distribution to start with if I had to name just one (besides the normal distribution). However, you should assess other distributions. Even though the Weibull distribution is very flexible, it did not provide the best fit for my real world data that I show in this post.
I hope this helps!
Anurag Chakraborty says
Is there any difference between a distribution (hypothesis) test and a goodness-of-fit test? Or are they the same thing?
Jim Frost says
Hi Anurag,
They’re definitely related. However, goodness-of-fit is a broader term. It includes distribution tests but it also includes measures such as R-squared, which assesses how well a regression model fits the data.
A distribution test is a more specific term that applies to tests that determine how well a probability distribution fits sample data.
Distribution tests are a subset of goodness-of-fit tests.
I hope this helps!
Anurag Chakraborty says
Hello Jim,
Excellent article, and I found it very helpful. I opened the CSV data file of body fat % in Excel and found there were 92 separate data points.
Could you please let me know whether these data are discrete or continuous, if you don't mind me asking? Thank you.
Jim Frost says
Hi Anurag,
The data are recorded as percentages. Therefore, they are continuous data.
Peter Moses says
Hi Jim, how are you? I really wish to thank you for your indefatigable efforts in sharing your publications with the world. They help me so very much to prepare fully for my university education.
Furthermore, I would like to have a copy of your blog posts.
thank you.
Jim Frost says
Hi Peter,
Thanks so much for writing. I really appreciate your kind words!
The good news is that I’ll be writing a series of ebooks that goes far beyond what I can cover in my blog posts. I’ve completed the first one, which is an ebook about regression analysis. I’ll be working on others. The next one up is an introduction to statistics.
Asmat says
Thank you so much for your detailed email. I really appreciate it.
Asmat says
Hi Jim,
Thanks for your detailed reply. I am using cross-sectional continuous data on inputs (11 variables) used in a crop production system. Variability exists within the dataset due to different levels of input consumption in farming systems, and in some cases some inputs are even zero. Should I go for any grouping of the data? If yes, what kind of grouping approach should I use? I am basically interested in uncertainty analysis of inputs (fuel and chemical consumption) and sensitivity analysis of the desired output and associated environmental impacts. It would be great if you could guide me.
Thanks.
Jim Frost says
Hi Asmat,
Given the very specific details of your data and goals for your study, I think you’ll need to discuss this with someone who can sit down with you and go over all of it and give it the time that your study deserves. There’s just not enough information for me to go on and I don’t have the time, unfortunately, to really look into it.
One thing I can say is that if you’re trying to link your inputs to changes in the output, consider using regression analysis. For regression analysis, you only need to worry about the distribution of your residuals rather than your inputs and outputs. Regression is all about linking changes in the inputs to changes in the output. Read my post about when to use regression analysis for more information. It sounds like that might be the goal of your analysis, but I’m not sure.
Best of luck with your study!
Andre Konski says
Thanks for the reply. No, I am trying to determine the distribution of my survival curve from a published analysis. I was able to identify the survival probabilities from the published graph. The Minitab program only allows for the importation of one column. The distribution looks like a Weibull distribution, but the Minitab results showed a normal distribution had the highest p-value, which didn't make sense.
Jim Frost says
Ok, in your original comment you didn't mention that you were using a published graph. I don't fully understand what you're trying to do, and it's impossible to give a concrete reply without the full details. However, below are some things to consider.
Analysts often need to use their process knowledge to help them determine which distribution is appropriate. Perhaps that’s a consideration here? I also don’t know how different the p-values are. Small differences are not meaningful. Additionally, in some cases, Weibull distributions can approximate a normal distribution. Consequently, there might be only a small difference between those distributions.
But, it’s really hard to say. There’s just not enough information.
Asmat says
Hi Jim,
Thanks for your reply.
Yes, on the basis of p-values only, I am concluding that the data do not follow any distribution.
My sample size is 1366, with 11 variables. None follows the normal distribution. I tried the Box-Cox transformation and checked normality again using the p-value.
After transformation, the data points of some variables largely follow the line, but some data points deviate from the line either at the beginning or at the end.
However, for some variables, the data points largely follow the line even without transformation, with some points deviating at the ends. Thanks.
Jim Frost says
Hi Asmat,
You have a particularly large sample size. Consequently, you might need to focus more on the probability plots rather than the p-values. I suggest you read the section in this post that is titled “Using Probability Plots to Identify the Distribution of Your Data.” It describes how the additional power that distribution tests have with large sample sizes can detect meaningless deviations from a distribution. In those cases, using probability plots might be a better approach.
After you look at the probability plots, if an untransformed distribution fits well, I’d use that, otherwise go with the transformed.
You didn’t mention what you’re using these data for but be aware that some hypothesis tests are robust to departures from the normal distribution.
Asmat says
Hi Jim,
After reading your blog, I tried Minitab to check the distribution of my data, but surprisingly it does not follow any of the listed probability distributions. Could you please advise me on how to move forward? Thanks.
Jim Frost says
Hi Asmat,
Before I can attempt to answer your question, I need to ask you several questions about your data.
What type of data are you talking about?
Did the Box-Cox transformation or Johnson transformation produce a good fit?
What is your sample size?
Are you primarily going by p-values? If so, do any of the probability plots look good? “Good” means that the data points largely follow the line. There's the informal “fat pencil test”: if you lay a fat pencil over the line, do the data points stay within it?
Andre Konski says
Jim
I enjoyed this blog. I tried to determine the distribution and parameters of a survival curve by importing it into Minitab. Minitab only allows one column, while the survival curve has time on the x-axis and survival probability on the y-axis. How does one find the type of curve and the parameters of a survival curve?
Jim Frost says
Hi Andre,
Sorry about the delay in replying!
If I'm understanding your question correctly, the answer is that survival curves are not part of the distribution-identification process that I show in this blog post. However, you can find the proper analyses in the Reliability/Survival menu in Minitab. That menu contains distribution analyses designed specifically for failure data.
Additionally, there are other analyses in the Reliability/Survival path including the following:
Stat > Reliability/Survival > Probit Analysis.
And, if you’re using accelerated testing: Stat > Reliability/Survival > Accelerated Life Testing.
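If you ever need to do this outside Minitab, here's a rough sketch in Python with SciPy that fits a two-parameter Weibull to failure times. The times below are made-up placeholders, and this simple approach assumes complete (uncensored) failure data, which the Reliability/Survival tools handle more properly:

```python
import numpy as np
from scipy import stats

# Hypothetical failure times read off a published curve; substitute your values
times = np.array([12.1, 15.3, 18.7, 21.0, 24.8, 30.2, 33.5, 41.9])

# Fit a two-parameter Weibull (threshold fixed at zero) to complete failure data
shape, loc, scale = stats.weibull_min.fit(times, floc=0)
print(f"shape = {shape:.2f}, scale = {scale:.2f}")
```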
I hope this helps!
Jerry says
Hi Jim, in the next-to-last graph in your post (the distribution plots), you say the Weibull plot stops abruptly at the location value of 3.32. Yet it appears to stop at more like 13-ish. Did I misunderstand something, or is the graph incorrect? Also, what is the ‘scale’ metric in Weibull plots? Thanks –
Jim Frost says
Hi Jerry, the Weibull distribution actually stops at the threshold value of ~16. The threshold value shifts the distribution along the X-axis relative to zero. Consequently, a threshold of 16 indicates that the lowest possible value of the distribution is 16. Without the threshold parameter, the Weibull distribution starts at zero.
The scale parameter is similar to a measure of dispersion. For a given shape, it indicates how spread out the values are.
Here’s a nice site that shows the effect of the shape, scale, and threshold parameters for the Weibull distribution.
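If you'd rather experiment with those parameters directly, here's a minimal sketch in Python with SciPy. Note that SciPy calls the threshold loc, and the parameter values below are made up purely for illustration:

```python
from scipy import stats

# SciPy's `loc` plays the role of Minitab's threshold parameter
base    = stats.weibull_min(c=1.5, loc=0,  scale=10)   # starts at 0
shifted = stats.weibull_min(c=1.5, loc=16, scale=10)   # starts at 16
wider   = stats.weibull_min(c=1.5, loc=16, scale=20)   # same start, more spread

print(base.support())                # (0.0, inf)
print(shifted.support())             # (16.0, inf): nothing below the threshold
print(wider.std() > shifted.std())   # True: larger scale means more dispersion
```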
Alice zhang says
Hi, Jim. Thank you so much for your detailed reply. Is the null hypothesis in Minitab that the data follow the specified distribution? So with a larger p-value, we cannot reject the null hypothesis. Another question is about the correlation coefficient (PPCC value): can it denote the goodness-of-fit of each distribution? Thanks.
Jim Frost says
Hi Alice, you’re very welcome!
Yes, as I detail in this post, the null hypothesis states that the data follow the specific distribution. Consequently, a low p-value suggests that you should reject the null and conclude that the data do not follow that distribution. Reread this post for more information about that aspect of these distribution tests.
Unless Minitab has changed something that I’m unaware of, you do not need to worry about PPCC when interpreting the probability plots. Again, reread this post to learn how to interpret the probability plots. With such a large sample size, it will be more important for you to interpret the probability plots rather than the p-values.
If you want to learn about PPCC for other reasons, here’s a good source of information about it: Probability Plot Correlation Coefficient.
Best of luck with your analysis!
Alice zhang says
Hi, Jim. I have also tried the calculation with 1000 data points, but the p-values are extremely small. The p-values for almost all distributions are less than 0.005. Do you have any suggestions about this?
Jim Frost says
Yes, this is the issue that I described in my first response to you. With so many data points (even 1000 is a large dataset) these tests are very powerful. Trivial departures from the distribution will produce a low p-value. That’s why you’ll likely need to focus on the probability plots for each distribution.
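To see this effect for yourself, here's a minimal sketch in Python with SciPy. It uses D'Agostino's normality test as a stand-in for Anderson-Darling, and a t-distribution as data with a trivial departure from normality:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# A t-distribution with 30 df is nearly indistinguishable from a normal curve
small = stats.t.rvs(df=30, size=100, random_state=rng)
large = stats.t.rvs(df=30, size=1_000_000, random_state=rng)

# Same distribution, very different conclusions as the sample size grows
print(stats.normaltest(small).pvalue)  # typically well above 0.05
print(stats.normaltest(large).pvalue)  # essentially 0: rejects normality
```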
Alice zhang says
Jim, thanks for your detailed reply. Actually, I have tried the probability plots, and several distributions perform almost the same. I used Minitab to calculate the p-value, but the software said that it is out of stock. There are no results for the p-value. Do you know how to deal with this? Are the data (1,000,000 points) too much for the p-value calculation? Thanks.
Jim Frost says
I’m not sure. I think it might be out of memory. You should contact their technical support to find out for sure. They’ll know. Their support is quite good. You’ll reach a real person very quickly who can help you.
To answer your question, no, it’s not possible to have too many data points to calculate the p-value mathematically. But, it’s possible that the program can’t handle such a large dataset. I’m not sure about that.
Alice zhang says
Hi, Jim. Thanks for your detailed explanation. Actually, I have no experience with Minitab. I have a large matrix (1,000,000 × 13), but I found that when I went to Stat > Quality Tools > Individual Distribution Identification in Minitab, it can only analyze a single column of data. And it seems to take a long time with 1,000,000 data points. Do you have any suggestions about how to find the appropriate distribution for 13 columns with 1,000,000 data points?
Jim Frost says
Hi Alice,
Yes, that tool analyzes individual columns only. It assesses the distribution of a single variable. If you’re looking for some sort of multivariate distribution analysis, it won’t do that.
I think Minitab is good software, but it can struggle with extremely large datasets like yours.
One thing to be aware of is that with so many data points, the distribution tests become extremely powerful. They will be so powerful that they can detect trivial departures from a distribution. In other words, your data might follow a specific distribution, but the test is so powerful that it will reject the null hypothesis that it follows that distribution. For such a large dataset, pay particular attention to the probability plots! If those look good but the p-value is significant, go with the graph!
Rashmi Tiwari says
Hi Jim, very nicely explained. Thank you so much for your effort.
Are your blogs available in a printable version?
Jim Frost says
Hi Rashmi, thank you so much for your kind words. I really appreciate them!
I’m working on ebooks that contain the blog material plus a lot more! The first one should be available early next year.
Hani Hani says
Hello. I have a question. I have genome data that has lots of zeros. I want to check the distribution of these data. They could be continuous or discrete. We can use ks.test, but that is for continuous distributions. Is there any way to check whether the data follow a specific discrete distribution? Thank you
Jim Frost says
Hi, there are distribution tests for discrete data. I’d start by reading my post about goodness-of-fit tests for discrete distributions and see if that helps.
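For a rough idea of how such a test works, here's a minimal chi-square sketch in Python with SciPy that compares observed counts against a fitted Poisson distribution. The counts are simulated placeholders, and data with lots of zeros may instead call for a zero-inflated model:

```python
import numpy as np
from scipy import stats

# Simulated placeholder counts; substitute your genome data
rng = np.random.default_rng(7)
counts = rng.poisson(2.0, size=500)

# Expected Poisson frequencies, with lambda estimated from the data
lam = counts.mean()
k = np.arange(counts.max() + 1)
observed = np.bincount(counts, minlength=k.size).astype(float)
expected = stats.poisson.pmf(k, lam) * counts.size
expected[-1] += stats.poisson.sf(k[-1], lam) * counts.size  # fold the upper tail in

# ddof=1 because lambda was estimated; pool sparse cells in a real analysis
stat, p = stats.chisquare(observed, expected, ddof=1)
print(stat, p)  # a high p-value gives no evidence against the Poisson fit
```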
Rainy says
Hi Jim,
Great sharing. I have performed distribution identification on my nonnormal data; however, none of the distributions fits my data well. All the p-values are < 0.05. What different approaches can I use to study the distribution before I perform a capability analysis? Thanks for your help.
Alice Pettersson says
Hi Jim! I'm trying to test the distribution of my data in SPSS and have used the one-sample Kolmogorov-Smirnov test, which tests for the normal, uniform, Poisson, or exponential distribution. None of them fit my data… How do I proceed? I don't know how to work in R or Minitab, so do you know if there's another test in SPSS I can do, or do I have to learn a new program? I need to know the distribution to be able to choose the right model for the GLM test I'm going to do.
Jim Frost says
Hi Alice! Unfortunately, I haven't used SPSS in quite some time and I'm not familiar with its distribution testing capabilities. The one additional distribution that I'd check is the Weibull distribution. That is a particularly flexible distribution that can fit many different shapes, but I don't see it in your list.
Maria says
Very good explanation!!
Jim Frost says
Thank you, Maria!
Olayan Albalawi says
Great explanation, thanks Jim.
Jim Frost says
Thank you, Olayan!
Wilbrod Ntawiha says
Thanks Jim. I am going to try the same implementation using Stata and/or R.
Jim Frost says
You’re very welcome Wilbrod! Best of luck with your analysis!
Charles says
Hi Jim,
Great article!
What software are you using to evaluate the distribution of the data?
Jim Frost says
Hi Charles, thanks and I’m glad you found it helpful! I’m using Minitab statistical software.
Ricardo says
Hello Jim, what kind of statistics software do you use?
Jim Frost says
Hi Ricardo, I’m using Minitab statistical software.
Muhammad Arif says
What a fantastic example, Jim!
Jim Frost says
Thank you so much, Muhammad!