Outliers are unusual values in your dataset, and they can distort statistical analyses and violate their assumptions. Unfortunately, all analysts will confront outliers and be forced to make decisions about what to do with them. Given the problems they can cause, you might think that it’s best to remove them from your data. But, that’s not always the case. Removing outliers is legitimate only for specific reasons.

Outliers can be very informative about the subject area and the data collection process. It’s essential to understand how outliers occur and whether they might happen again as a normal part of the process or study area. Unfortunately, resisting the temptation to remove outliers inappropriately can be difficult: outliers increase the variability in your data, which decreases statistical power, so excluding them can make your results statistically significant when they otherwise would not be.

In my previous post, I showed five methods you can use to identify outliers. However, identification is just the first step. Deciding how to handle outliers depends on investigating their underlying cause.

In this post, I’ll help you decide whether you should remove outliers from your dataset and how to analyze your data when you can’t remove them. The proper action depends on what causes the outliers. In broad strokes, there are three causes for outliers—data entry or measurement errors, sampling problems and unusual conditions, and natural variation.

Let’s go over these three causes!

## Data Entry and Measurement Errors and Outliers

Errors can occur during measurement and data entry. During data entry, typos can produce weird values. Imagine that we’re measuring the height of adult men and gather the following dataset.

In this dataset, the value of 10.8135 is clearly an outlier. Not only does it stand out, but it’s an impossible height. Examining the numbers more closely, we suspect that an extra zero was typed by accident. Hopefully, we can either go back to the original record or even remeasure the subject to determine the correct height.

These types of errors are easy cases to understand. If you determine that an outlier value is an error, correct the value when possible. That can involve fixing the typo or possibly remeasuring the item or person. If that’s not possible, you must delete the data point because you know it’s an incorrect value.
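As a quick illustration, here’s how you might screen for impossible values programmatically. This is a minimal Python sketch: only the 10.8135 typo comes from the example above, the other heights are invented, and the plausible range is an assumption you’d set from subject-area knowledge.

```python
# Hypothetical heights in meters for adult men; 10.8135 is the typo
# discussed above, and the other values are invented for illustration.
heights = [1.7752, 1.8135, 1.7488, 1.6920, 10.8135, 1.8021]

# A plausible range for adult height (a subject-area assumption,
# not a statistical rule).
LOW, HIGH = 1.2, 2.5

# Flag physically impossible values so they can be investigated,
# corrected, or removed.
suspects = [h for h in heights if not LOW <= h <= HIGH]
print(suspects)  # [10.8135]
```

Flagged values should be traced back to the original records before you correct or delete them.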

## Sampling Problems Can Cause Outliers

Inferential statistics use samples to draw conclusions about a specific population. Studies should carefully define a population and then draw a random sample specifically from it. That’s the process by which a study can learn about a population.

Unfortunately, your study might accidentally obtain an item or person that is not from the target population. This can happen in several ways. Unusual events or characteristics can deviate from the defined population, the experimenter might measure the item or subject under abnormal conditions, or you can accidentally collect an item that falls outside your target population and therefore has unusual characteristics.

**Related post**: Inferential vs. Descriptive Statistics

### Examples of Sampling Problems

Let’s bring this to life with several examples!

Suppose a study assesses the strength of a product. The researchers define the population as the output of the standard manufacturing process. The normal process includes standard materials, manufacturing settings, and conditions. If something unusual happens during a portion of the study, such as a power failure or a machine setting drifting off the standard value, it can affect the products. These abnormal manufacturing conditions can cause outliers by creating products with atypical strength values. Products manufactured under these unusual conditions do not reflect your target population of products from the normal process. Consequently, you can legitimately remove these data points from your dataset.

During a bone density study that I participated in as a scientist, I noticed an outlier in the bone density growth for a subject. Her growth value was very unusual. The study’s subject coordinator discovered that the subject had diabetes, which affects bone health. Our study’s goal was to model bone density growth in pre-adolescent girls with no health conditions that affect bone growth. Consequently, her data were excluded from our analyses because she was not a member of our target population.

If you can establish that an item or person does not represent your target population, you can remove that data point. However, you must be able to attribute a specific cause or reason for why that sample item does not fit your target population.

## Natural Variation Can Produce Outliers

The previous causes of outliers are bad things. They represent different types of problems that you need to correct. However, natural variation can also produce outliers—and it’s not necessarily a problem.

All data distributions have a spread of values. Extreme values can occur, but they have lower probabilities. If your sample size is large enough, you’re bound to obtain unusual values. In a normal distribution, approximately 1 in 370 observations will fall at least three standard deviations from the mean. And even in smaller datasets, random chance can produce extreme values. In other words, the process or population you’re studying might produce weird values naturally. There’s nothing wrong with these data points. They’re unusual, but they are a normal part of the data distribution.
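You can verify that probability directly. A quick sketch, assuming SciPy is available:

```python
from scipy import stats

# Two-tailed probability that a normal observation falls at least
# three standard deviations from the mean.
p = 2 * stats.norm.sf(3)
print(f"p = {p:.4f}")             # ≈ 0.0027
print(f"about 1 in {1 / p:.0f}")  # roughly 1 in 370

# In a sample of 10,000 observations, you'd expect around 27 of them.
print(f"expected in n=10,000: {10_000 * p:.0f}")
```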

**Related post**: Normal Distribution and Measures of Variability

### Example of Natural Variation Causing an Outlier

For example, I fit a model that uses historical U.S. Presidential approval ratings to predict how later historians would ultimately rank each President. It turns out that a President’s lowest approval rating predicts the historians’ rankings. However, one data point severely affects the model. President Truman doesn’t fit it. He had an abysmal lowest approval rating of 22%, but later historians gave him a relatively good rank of #6. If I remove that single observation, the R-squared increases by over 30 percentage points!

However, there was no justifiable reason to remove that point. While it was an oddball, it accurately reflects the potential surprises and uncertainty inherent in the political system. If I remove it, the model makes the process appear more predictable than it actually is. Even though this unusual observation is influential, I left it in the model. It’s bad practice to remove data points simply to produce a better fitting model or statistically significant results.

If the extreme value is a legitimate observation that is a natural part of the population you’re studying, you should leave it in the dataset. I’ll explain how to analyze datasets that contain outliers you can’t exclude shortly!

To learn more about the example above, read my article about it, Understanding Historians’ Rankings of U.S. Presidents using Regression Models.
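To see how much a single influential observation can move R-squared, here’s a small synthetic sketch using NumPy. The data are invented for illustration (they are not the presidential rankings data):

```python
import numpy as np

rng = np.random.default_rng(0)

# A clean linear relationship plus one observation that badly
# violates the trend (all values simulated).
x = np.arange(10, dtype=float)
y = 2 * x + rng.normal(0, 1, size=10)
y[9] = 0.0  # the "Truman-like" point: far from the fitted line

def r_squared(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1 - residuals.var() / y.var()

print(f"R-squared with the influential point:    {r_squared(x, y):.2f}")
print(f"R-squared without the influential point: {r_squared(x[:9], y[:9]):.2f}")
```

Dropping the point improves the fit dramatically, but as argued above, a better fit alone is not a valid reason to remove it.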

## Guidelines for Dealing with Outliers

Sometimes it’s best to keep outliers in your data. They can capture valuable information that is part of your study area. Retaining these points can be hard, particularly when it reduces statistical significance! However, excluding extreme values solely due to their extremeness can distort the results by removing information about the variability inherent in the study area. You’re forcing the subject area to appear less variable than it is in reality.

When considering whether to remove an outlier, you’ll need to evaluate if it appropriately reflects your target population, subject-area, research question, and research methodology. Did anything unusual happen while measuring these observations, such as power failures, abnormal experimental conditions, or anything else out of the norm? Is there anything substantially different about an observation, whether it’s a person, item, or transaction? Did measurement or data entry errors occur?

If the outlier in question is:

- A measurement error or data entry error, correct the error if possible. If you can’t fix it, remove that observation because you know it’s incorrect.
- Not a part of the population you are studying (i.e., unusual properties or conditions), you can legitimately remove the outlier.
- A natural part of the population you are studying, you should not remove it.

When you decide to remove outliers, document the excluded data points and explain your reasoning. You must be able to attribute a specific cause for removing outliers. Another approach is to perform the analysis with and without these observations and discuss the differences. Comparing results in this manner is particularly useful when you’re unsure about removing an outlier and when there is substantial disagreement within a group over this question.
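A minimal sketch of that with-and-without comparison, using an invented sample:

```python
import statistics

# Invented measurements with one extreme value.
data = [4.1, 3.9, 4.3, 4.0, 4.2, 9.5]

with_outlier = statistics.mean(data)
without_outlier = statistics.mean(sorted(data)[:-1])  # drop the largest

print(f"mean with the outlier:    {with_outlier:.2f}")   # 5.00
print(f"mean without the outlier: {without_outlier:.2f}")  # 4.10
```

Reporting both results, along with the reasoning for any exclusion, lets readers judge how sensitive your conclusions are to that decision.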

## Statistical Analyses that Can Handle Outliers

What do you do when you can’t legitimately remove outliers, but they violate the assumptions of your statistical analysis? You want to include them but don’t want them to distort the results. Fortunately, there are various statistical analyses up to the task. Here are several options you can try.

Nonparametric hypothesis tests are robust to outliers. For these alternatives to the more common parametric tests, outliers won’t necessarily violate their assumptions or distort their results.
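For example, the Mann-Whitney U test (a nonparametric counterpart to the two-sample t-test) works on ranks, so an extreme value counts only as “the largest,” not by its magnitude. A sketch with invented data, assuming SciPy is available:

```python
from scipy import stats

# Two invented groups; group_b contains one extreme value.
group_a = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2]
group_b = [6.0, 6.2, 5.9, 6.1, 6.3, 14.0]

# The test ranks all values together; 14.0 contributes the same rank
# whether it is 14.0 or 7.0, so it can't dominate the result.
u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
```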

In regression analysis, you can try transforming your data or using a robust regression analysis available in some statistical packages.

Finally, bootstrapping techniques use the sample data as they are and don’t make assumptions about distributions.
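For instance, SciPy’s `bootstrap` function can build a confidence interval for the median without any distributional assumptions (a sketch with invented data):

```python
import numpy as np
from scipy import stats

# Invented sample containing one legitimate extreme value.
sample = np.array([2.1, 2.4, 2.2, 2.6, 2.3, 2.5, 2.2, 8.7])

# Resample with replacement many times and take the percentile
# interval of the bootstrapped medians.
res = stats.bootstrap((sample,), np.median,
                      confidence_level=0.95, method="percentile")
print(res.confidence_interval)
```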

These types of analyses allow you to capture the full variability of your dataset without violating assumptions and skewing results.

Fred says

Hi Jim,

This information about outliers is incredibly helpful and so easy to understand. I intend on using some of this information in my dissertation. I am hoping to find a source to reference this information. Is this information also located in any of your books?

Thank you so much, Fred

Jim Frost says

Hi Fred,

Thanks for your kind words! I’m so glad it is helpful!

I talk about outliers in my Hypothesis Testing book because they can affect the results greatly!

Melissa says

Hello Jim, thank you for the thorough explanation and resources!

I’m having an issue with flagging the outliers in my dataset. For instance, the outliers are being flagged with ‘0’ and a dot, rather than being flagged with ‘0’ and 1’s.

Steve says

Hi Jim,

Suppose I have a dataset of 3, 4, 4, 5, 9. The average is 5. However, 9 is sort of an outlier compared to the first four numbers. Is there some kind of weighted average that will give the 9 less influence and bring the overall average closer to 4, which is the average of the first four numbers?

Jim Frost says

Hi Steve, how about the median? See my post about measures of central tendency for more details!
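A quick check of Steve’s numbers in Python:

```python
import statistics

data = [3, 4, 4, 5, 9]

print(statistics.mean(data))    # 5
print(statistics.median(data))  # 4 -- the 9 barely moves it
```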

Faheem Jan says

Hi Jim, I have time series data in a 24×2192 matrix, in which each row is one of the 24 hours of the day and each column represents a single day. A boxplot suggests there are many outliers in the dataset. The first five years (of six years of data in total) are used as the validation set and the last year as the test set. When I measure the accuracy of the model, the mean absolute error and mean absolute percentage error are quite large, so my supervisor suggested it may be due to outliers and recommended a moving window filter method, but I could not implement it in R. Please help me implement such a method in R, or suggest another outlier treatment method that would reduce my forecast error.

Regards,

abdullah says

I have secondary data for 120 companies, and around 20 of them are outliers. The differences are large in ACP and CCC. I want to treat these outliers. Can you suggest how?

Jim Frost says

Hi Abdullah,

Your first question should be whether they are truly outliers or part of the natural variation. If they are outliers, then you need to wonder why you’re obtaining so many of them! That’s very unusual.

There’s no way I can tell whether they’re truly outliers and whether you need to do anything about them. But follow the guidelines I present: learn about them, why they occur, etc. Determine whether they’re natural variation in the subject area or truly outliers caused by one of the reasons I discuss. Making that determination will help you decide what to do. Also, look at similar studies to see how they handled them.

Wamiti says

Hi Jim,

I am working with 36 paired samples from 18 study sites, one pair from each site from the dry and wet seasons. These are measures of biomass of invertebrates. One observation in the wet season is an outlier (it has a value of 5.52 g compared to the mean of 1.45 g). While most other samples were dominated by insect invertebrates, this one was dominated by snails! Like the other invertebrates, snails also constitute (or potentially constitute) the diet of my study species, a waterbird. This outlier, from your notes, is a natural condition since it forms part of what waterbirds may feed on. A normality test of the wet season dataset gives a Shapiro-Wilk value of 0.83 and a p-value of 0.004 without normalization, while the dry season had W = 0.84, p-value = 0.007.

I guess I should include this outlier in the analysis since it’s a natural condition, and make notes/observations that sampling may have happened in a site with specific conditions that favor the survival of the snail species in question. Do you agree? Any other thoughts will be appreciated, and many thanks for your educative and enlightening posts. We take our time to read through because they are valuable.

Jim Frost says

Hi Wamiti,

So, first a caveat. These types of decisions always use a large amount of subject area knowledge. And, that’s not exactly my area.

With that out of the way, it sure sounds like natural variability to me. However, consider how you’re defining your population or study area. If the observation falls outside what you’re defining as the population you’re studying, that’s another reason to exclude it. Like in the bone density study I was involved in: we defined our target population as healthy individuals with no diseases that affect bone density. One of our potential subjects had diabetes, which affects the bones. We had to exclude her from the study because she wasn’t part of our target population, even though people with diabetes are part of the larger population. Of course, our results applied only to those without a condition that affects bone density.

So, factor that in too. How are you defining your study area? To what population do you want to generalize your findings?

It’s hard for me to give you a concrete answer! But that’s the type of issue I’d think about. Is it natural variation in your target population? Or is it outside your target population?

I hope that helps at least somewhat. Discussions with someone else in the field, or assessing similar studies to see how others have handled similar situations, might be helpful.

Tiffany says

Hello, thanks so much for your explanation! I have a couple of questions since my colleagues and I are having some trouble dealing with variability in our cell-based assays:

1. We plan to use Grubbs test on 4 replicate values to remove outliers. Is it okay to proceed with our computations even if some set-ups have an outlier removed (i.e. we’ll be averaging 3 values only), while others may not have any outliers (we’ll be averaging 4 values since there’s no outlier)?

2. Which statistical value should we compute for to ensure that our trials are valid? We are looking at B-score, Z-score, Z-factor, Z’-score, but we are not quite sure what the difference of these are and which one is more appropriate for cell assays done in 96-well plates.

Thank you so much! Would mean a lot if you could share your insights and expertise since we have no statistician on our team. 🙂

Jim Frost says

Hi Tiffany,

Why are you already so sure that you’ll be removing values from the set-ups? Would they definitely be outliers for removal or a part of the natural variation? Would you investigate and try to understand the cause of these outliers?

I’d recommend investigating the outliers, understanding the reason they occurred, and then determining whether some identifiable event or problem occurred with the set-up that makes the value invalid because it’s not part of the normal variation. If there’s no identifiable reason, it might just be part of the normal variability. Typically, you don’t remove values only because they’re unusual. Usually, you need some identifiable situation or condition that caused them to be invalid because they don’t represent the population/conditions that you’re studying.

I’m asking these questions because it seems like you’re already planning to remove a large proportion of values without understanding what is causing them. You don’t want to remove them if they’re valid values that just happen to be a bit unusual (but a normal part of the variation). On the other hand, if they truly are invalid values, then you’d want to understand why you’re getting so many of them!

I also think that performing a Grubbs test (or any hypothesis test) on a sample size of 4 is problematic!

Please read my related post about 5 Ways to Find Outliers. In that article, I write about methods such as Z-scores and the Grubbs test, and particularly their limitations. Note that with a sample size of only 4, your maximum Z-score can be only 1.5, which won’t be flagged as an outlier. I’m not familiar with using the Z-factor, aka Z prime and Z’, to find outliers. My understanding is that it’s an effect size for differences between sample means, typically used to identify potentially interesting effects. I’m not sure how or whether you can use it to identify outliers. It is different from Z-values.
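That cap on the Z-score follows from a known bound: in a sample of size n, no point’s absolute Z-score (computed with the sample standard deviation) can exceed (n − 1)/√n, which equals 1.5 when n = 4. A quick demonstration with the most lopsided sample possible:

```python
import math
import statistics

# The most extreme sample of size 4: three identical values plus one other.
data = [0, 0, 0, 1]

mean = statistics.mean(data)   # 0.25
sd = statistics.stdev(data)    # sample SD (n - 1 denominator)
z = (max(data) - mean) / sd
bound = (len(data) - 1) / math.sqrt(len(data))

print(z)      # 1.5
print(bound)  # 1.5 -- the theoretical maximum for n = 4
```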

I hope that helps at least point you in the right direction!

KG says

Hello,

Thank you for this post. I have a question that is not really about outliers, but I can’t figure out what keywords to search to find any answers. I am working with a very large dataset relating fishing effort to spatial location. The sampling unit is individual fishing boats; all fishing boats were surveyed on random days with the goal of capturing 20% of the population. When a boat was surveyed, variables collected included the target fish species, how much they caught, how many anglers were aboard the boat, how many days they were out fishing before returning to shore, and the “block” they were fishing in. I want to look at summary statistics by block and species, but I have numerous instances where only one boat was recorded fishing in a given block. I am unsure as to whether I should drop any blocks that have only one observation or even any blocks with fewer than three observations. Do you have any suggestions or can you point me in the right direction? Thank you in advance!

Jim Frost says

Hi KG,

I’m going to assume you mean blocks in the experimental sense, where you group observations by similar conditions to reduce variability. Typically, though, blocks contain nuisance variables that you’re not particularly interested in but need to control for. However, you indicate that you want to understand the summary statistics by block, so perhaps you mean something else? Are they geographic regions?

That’s a bit of a tricky situation. It doesn’t sound like you have enough observations for the blocks. The more complex answer depends on how precise your estimates must be, the variability in the data, and, potentially, the type of analyses you might want to perform. More simply, I’d say that even three is too few. The problem is that estimates of the mean with only three data points are too unstable. While there’s no concrete answer that covers all situations (see the more complex answer), a good rule of thumb is that you probably would want 10 data points per mean. In a pinch, you can go a bit lower but I wouldn’t go too much lower.

Is there a way to combine blocks meaningfully?

Marco De Nardi says

Thanks for the feedback.

Marco

Marco De Nardi says

Hi Jim,

as part of a technical assistance project in an Eastern European country, we are developing a framework (institutional, technical, and IT) to regularly collect and analyze data on milk quality from milk producers (total bacterial count, somatic cell counts, etc.) in specific regions of the country. These data are then aggregated into quarterly periods (3 months each), and geometric means are calculated. Specific thresholds for these parameters indicate whether producers are producing milk according to quality standards. Looking at the dataset, there are clearly outliers (individual farms) influencing the mean calculation. I don’t feel comfortable removing these data (they could very well be true values), and I am reflecting on the best way to analyze the data (calculating the mean) while taking the outliers’ effect into consideration. What would you suggest?

Thanks for the feedback

Marco

Jim Frost says

Hi Marco,

The key question you need to ask yourself is whether these outliers are a normal part of the population you’re studying? Populations have a normal amount of variability. Some populations have a lot of variability. If these farms are unusual but a part of your defined population, I’d lean towards leaving them in.

However, if there is something unusual about them that makes them demonstrably not a part of the population you define, then you have reason to exclude them. For example, you might define your population as farms that use method X for producing milk. If some farms use method Y, it would be legitimate to remove them. Or perhaps there was some other unusual circumstance that affected their milk which is not a part of the study.

So, it comes down to a close understanding of your study, the target population, and the specific farms.

If you do leave in these outliers (which it sounds like you’re leaning towards), you might need to use another type of analysis, such as a nonparametric analysis, which compares medians rather than means. For more information, read my post about parametric versus nonparametric analyses.

Boruch Fishman says

Hi.

I’ve been measuring the correlation between the number of international adoptions countries make and their cases of coronavirus per million. T-tests comparing the COVID-19-per-million rate in the 35 countries that adopt to the rate in all 214 countries showed a significantly different population. Likewise, the correlation was positive with Pearson coefficients. When I looked at the association in regression equations (using 32 randomly picked countries), the effect of international adoptions was sometimes significant depending on which other explanatory variables I included in the equation. The association was even significant in a mediator analysis. And my whole paper explains why the association should be significant. However, the dataset of international adoptions per country has a big outlier: the USA, which adopts the most and has a large COVID-19-per-million rate. When I take out the USA, international adoptions is not significant in the regression equations.

I don’t yet have the software for bootstrapping. However, I found equations which suggest that if I add about 40-50 countries, my results will be significant regardless of the distribution.

But will a maneuver like this pass peer review? How can I focus in statistically on the measurements in the USA?

Melisa says

If you are analyzing an entire data set (descriptive) rather than a sample of the group, is there any reason to remove outliers?

Jim Frost says

Hi Melisa,

I’d say that usually you don’t. Descriptive statistics assumes that you want to understand that particular group. And, if that particular group has an outlier, then understanding that group would suggest leaving the outlier in. So, I’m having a difficult time thinking why you’d want to remove an outlier in that case. If you’re tempted to use that group to understand a larger picture, and that’s the motivation for removing an outlier, that’s not descriptive statistics. You’re simply describing a group with outliers and all. Removing an outlier would be an incomplete/inaccurate description of that group.

However, I suppose it’s possible that if a measurement was invalid, that could be a legitimate reason. For example, imagine you’re measuring a group for some characteristic but if the measurement device was incorrectly calibrated for one subject, it might be valid to remove that value from the group. If you can show that the measurement is invalid (doesn’t represent the subject), that’s probably a good reason to exclude it for a descriptive study because it will make the description inaccurate.

I guess that would be the main reason in my mind for excluding data from a descriptive study. If a measurement doesn’t accurately reflect the subject due to some glitch or temporary condition, you wouldn’t want to include it because it makes the description of the group as whole inaccurate.

Ana says

Thanks so much for this. Do you have any recommended reading on this that would also be something I could cite in order to justify my choice of not removing my outliers? I am studying a particular group (classical musicians) and in a sample of over 700 I have 12 outliers for one of the measures. When I look at them individually across 9 other measures I have no reason to lean towards the possibility of measurement error (although not sure how to justify this entirely). But it looks to me that they engaged well with the 9 measures of the survey and that these are legitimate observations. Those 12 don’t affect the assumptions but do indeed change the means a bit. Thanks so much for your thoughts.

Louis says

Thanks Jim for fast reply.

My question is only a theoretical one. Of course, I am aware of the need to understand the why of those outliers, and unless there is a solid reason, I would keep the data. Maybe I can be a bit more precise: imagine a field case study where several treatments were performed in a design with randomized blocks (10 blocks, i.e., the 10 replicates). Each block contains at least 20 plants. As one block represents the statistical unit, measures performed on plants should be averaged. Now, one series of measures is performed on, let’s say, 10 plants in each block, but the averaged data show a significant outlier in block 1 compared with the other blocks. Later, another series of measures (a different analysis than previously) is performed on 5 plants, different ones than the 10 in the first analysis. The averaged data show a significant outlier in block 2 compared with the other blocks. How should I deal with the outliers in this case? Assuming the outliers should be removed (whatever the reason is): should I remove blocks 1 and 2 from my entire dataset? Should I remove only the data from block 1 in the first analysis and block 2 in the second analysis, because they were performed on distinct groups of individuals? Or should I consider the most important analysis (for example, the first one, with the outlier in block 1) and remove the block 1 data from the second analysis as well? My question also concerns dealing with outliers when variables are dependent or independent. I know my question is a bit strange; it is only for curiosity.

Thanks a lot for your time !

Louis says

Thanks Jim for this interesting post. I have a little theoretical and very basic question: imagine a trial with 10 replicates per condition (each replicate contains, let’s say, 20 individuals) and 2 evaluated independent variables (independent because measures were done on different groups of individuals within each replicate). I am working on the assumption that these outliers come from natural variation. There is, for example, a significant outlier in replicate 1 for variable 1, and one significant outlier in replicate 2 for variable 2. How should I deal with those outliers? Could I remove them independently for each variable, or should I connect them between variables, i.e., remove the data from replicates 1 and 2 for both variables? Then, maybe it would also be interesting if you could say a few words about when the variables are dependent. Thanks a lot for your reply!

Jim Frost says

Hi Louis,

If I understand your question correctly, I’d remove only the outliers if you determine that they really need to be removed. I wouldn’t remove observations in other replicates just because a different replicate has an outlier.

As for the dependent case, I’m assuming you mean multiple observations on the same individuals. In that case, if you know that one observation is an outlier, yes, you’d probably remove that individual completely. As usual, you’d want to investigate why it is an outlier. It would be strange if the same individual has regular values and then suddenly one is an outlier. Maybe it’s a fixable data entry error and you just need to correct it? Or perhaps that’s just normal fluctuation for an individual that you’re capturing. So, investigate the underlying cause.

But, if you determine it is an outlier, it seems likely you’d need to remove the individual entirely.

king mofasa says

Thanks very much for your reply. I will certainly keep them in.

King Mofasa says

Thanks for taking the time to explain this in simple words. I would like to ask the following question: regression analysis is sensitive to multivariate outliers. Most of the references I have reviewed suggest that multivariate outliers should be removed. I don’t see any other suggestion anywhere other than removal. I am hesitant to remove them as the cases seem valid, just different from the rest of the cases. I personally find myself against removing ANY valid outlier. Is there a way to keep multivariate outliers and at least winsorize them (similar to univariate outliers)?

Jim Frost says

Hi King,

Your hesitancy is very wise. I talk about this in this article, but it’s important to distinguish between unusual values that are caused by some sort of problem (unusual conditions, data entry errors, a subject from the wrong population, etc.) and unusual values that are caused by the natural variation in the process you’re studying. I mention the regression case where one observation was very unusual when it came to predicting the eventual ranking of U.S. Presidents by historians. However, that unusual value was a normal part of the process, so I left it in.

That’s the important distinction that you need to evaluate for these outliers. If they’re valid, then you don’t want to remove them using any method. They provide important information about the natural variability in your subject-area. If you remove them, you’re losing that important information.

I hope that helps! My regression ebook discusses outliers, unusual values, and leverage points in much greater detail from the perspective of regression analysis specifically. It might be helpful.

Tamara says

Hi Jim,

Can you explain the process of winsorizing to deal with outliers that are not measurement errors or mistakes but outliers that are true from the data set?

I had a few outliers in my data set and I winsorized the outliers by changing the outliers to the largest and smallest values that are nearest to them which are not outliers themselves.

Thank you for your continued knowledge about statistical techniques.

Jim Frost says

Hi Tamara,

Winsorizing is a process that either reduces the weight of an unusual value or changes it to a value that is not so unusual. It sounds like you used a method that changes them to less extreme values.
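For reference, SciPy implements this as `scipy.stats.mstats.winsorize`. A sketch with invented numbers showing the replacement (rather than removal) of the top value:

```python
import numpy as np
from scipy.stats import mstats

# Invented sample with one high outlier.
data = np.array([12, 14, 13, 15, 14, 13, 12, 15, 14, 40])

# Replace the top 10% of values (here, just the 40) with the
# largest remaining value instead of deleting them.
wins = mstats.winsorize(data, limits=(0, 0.10))
print(wins)  # the 40 becomes 15; nothing is removed
```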

I’m not a big fan of this process for several reasons. For one thing, I think it’s vital to learn why these outliers are happening. You never know what valuable information you might learn about your study area through this investigation. Ideally, you should determine whether these points are valid data or not.

If they are a valid part of the population you’re studying, changing these values will mean that your sample doesn’t reflect the true variability in the population and could lead you to draw incorrect conclusions. In short, you’ll draw conclusions based on an assumption that there is less variability than what actually exists.

If the data are not a valid part of the population (measurement/data entry error, unusual conditions, drawn from a different population, etc.), then those points should not be included at all.

So, in my mind, Winsorizing doesn’t address either condition, and it reduces your chances of learning something new about your population.

Methods that reduce/remove outliers will usually increase the power of your test and make the results look stronger. It can be hard to resist the temptation to use automated methods, but whenever possible you should really look into each outlier and learn what is happening.

I suppose the case for Winsorizing is this: if, for whatever reason, you cannot assess the outliers and make the determination I describe, but you highly suspect they’re invalid values and can’t prove it, Winsorizing might be a better middle-ground approach than simply removing suspected but unproven bad data. You’d have to carefully weigh that decision based on the data you have about the outliers and your subject-area knowledge.

I hope this helps!

Youssef Karam says

Hi Jim,

Please, I am a student carrying out a study on the compensation of directors and how this compensation is mainly affected by the performance of the firm. 3 outliers were noticed (graphically and by parameters), where 2 of them are related to years of experience and age, and one is related to %profit, knowing that profit is also included as a variable among others (sales, market value,…). Which of the 3 do you recommend keeping, knowing that a significant improvement was achieved (Multiple R, R-squared, error, adjusted R-squared, parameters, standard errors of parameters, p-values) when omitting all 3 of them?

Thank you

Youssef

Lauren says

Hi Jim,

Thank you so much for your reply. That all makes sense – your reasoning is articulated very well! It is a tricky area but I feel a lot more confident with the decision I have made now from reading this. Thank you again.

Kind regards,

Lauren.

Lauren says

Hi,

I did an experiment and through visual inspection I have identified 7 major outliers from the data set. Most of the outliers belong to one participant who appears to have found the experiment particularly hard. Is this a justified reason for not including these outliers in further analysis?

Thanks,

Lauren.

Jim Frost says

Hi Lauren,

This can be tricky! One thing you need to ask yourself is whether your population normally contains individuals who find the task particularly hard. In other words, does the individual in question simply represent the normal range of skill in the population you’re studying? If yes, then you should include the participant because s/he represents part of the variability in the population.

However, if there is some underlying condition or factor that makes it so the participant doesn’t reflect the normal range of abilities, you can consider removing them. I’m thinking of things like a medical/psychological condition that makes the subject not part of your target population. For example, I excluded the girl with diabetes from the bone density study because we were studying bone density in girls who didn’t have conditions that affected it. Or perhaps there were unusual conditions that made that particular session more difficult: fire alarms, interruptions, etc.

There should be some reason for excluding this participant beyond the task just being extra hard. If you can say it was extra hard BECAUSE he has a condition that makes it hard to focus, and people with that condition are not the population under study, OK. Or, it was extra hard because fire alarms kept going off during the test. That’s OK too. But if it was extra hard just because the subject was on the low end of the distribution of abilities, I’d say that’s typically not enough reason by itself.

John Grenci says

Hey Jim, what about dealing with zeros? Particularly where you have many of them? Say 50% of your values are zeros. The context, in my case anyway, is that we are talking about scrap steel per bar of steel (some have it, some don’t). Leaving them in would give a weird distribution for modeling, but taking them out would not only remove much of the data set, it would defeat the whole essence of cause and effect. Do we standardize in some way?

I have your latest book. I do not recall if you address that. I have not gotten through all of it 🙂. Typing this from work. Thanks, John

Hui says

Hi Jim,

When we have a dataset to deal with, which should we treat first: missing data or outliers?

Jim Frost says

Hi Hui,

You should determine how you’ll handle missing data before you even begin data collection. After you collect the data, you can assess outliers.

If you’re going to toss out observations with missing data, it’s probably easier to do that first and then assess outliers, but the order probably doesn’t matter too much.

However, if you’re going to estimate the values of the missing data, it’s probably better to generate those estimates after removing the outliers because the outliers will affect those estimates. Here’s the logic for removing outliers first. By removing outliers, you’ve explicitly decided that those values should not affect the results, which includes the process of estimating missing values.

Both cases suggest removing outliers first, but it’s more critical if you’re estimating the values of missing data. I’m not sure there is much literature on this issue, but you should determine whether studies in your field employ standard procedures regarding this question.
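As a rough illustration of that ordering (the numbers are made up, and a simple 1.5×IQR rule stands in for whatever outlier investigation you actually perform), this sketch removes the outlier first and only then imputes the missing values, so the excluded value cannot distort the estimates:

```python
import numpy as np

# Made-up data: np.nan marks missing values; 250 is a suspected outlier
# that investigation has shown should be excluded.
data = np.array([12.0, 14.0, np.nan, 13.0, 250.0, 15.0, np.nan, 14.0])

# Step 1: remove the outlier first (a simple 1.5*IQR rule for brevity).
obs = data[~np.isnan(data)]
q1, q3 = np.percentile(obs, [25, 75])
iqr = q3 - q1
keep = np.isnan(data) | ((data >= q1 - 1.5 * iqr) & (data <= q3 + 1.5 * iqr))
cleaned = data[keep]

# Step 2: estimate the missing values from the remaining data, so the
# excluded outlier cannot inflate the imputed values.
imputed = np.where(np.isnan(cleaned), np.nanmedian(cleaned), cleaned)
print(imputed)
```

If you imputed before removing the 250, the mean-based or model-based estimates would be pulled toward the outlier, which is the logic described above.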

Marlon says

Jim, how can you prove that a dataset is normally distributed using Excel?

Mohd Shehzoor Hussain says

Thank you for your reply Jim.

Can we use the median and IQR to measure central tendency and variability if the data are skewed? If yes, I have two questions.

1) what tests are there to measure the change in median and IQR?

2) what do we do with the outliers?

Mohd Shehzoor Hussain says

Hi Jim, how do you get standard deviations for data set without a proper bell curve due to outliers?

Jim Frost says

Hi Mohd,

You can calculate standard deviations using the usual formula regardless of the distribution. However, only in the normal distribution does the SD have a special meaning that you can relate to probabilities. If your data are highly skewed, it can affect the standard deviations you’d expect to see and what counts as an outlier. It’s always important to graph the distribution of your data to help you understand things like outliers. I often make the point throughout my books that using graphs and numerical measures together helps you better understand your data. This is a good illustration of that principle: the meaning of SDs as a measure of how unusual a value is makes much more sense when you can see the distribution of your data in a graph.
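A quick made-up example of why this matters: a single extreme value inflates the mean and standard deviation while barely moving the median and IQR, which is why the median/IQR pair is often more informative for skewed data. This sketch uses only Python’s standard library:

```python
import statistics

# Made-up right-skewed sample with one extreme value (60).
data = [10, 11, 11, 12, 12, 13, 13, 14, 60]

mean = statistics.mean(data)      # pulled upward by the 60
sd = statistics.stdev(data)       # inflated by the 60
median = statistics.median(data)  # barely affected
q1, _, q3 = statistics.quantiles(data, n=4)
iqr = q3 - q1                     # also barely affected

print(mean, sd, median, iqr)
```

Graphing these data (a histogram or boxplot) makes the same point visually: the bulk of the values sit between 10 and 14, and the 60 stands alone.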

Thanks for writing!

Jimoh, S. O. says

Thank you very much for this post. It is very clear and informative.

Helge says

Thank you! Yes, we were not interested in individuals with ongoing infections, so it seems legitimate to conclude that the 19 were not our target population. I use Stata for my analyses, and I added the command “vce(robust)” to the syntax to apply robust standard errors to account for any kind of violation of assumptions. Is it possible to say whether this was a good or bad idea? 🙂 I understand that robust standard errors account for heteroscedasticity. But since I also used the lrtest syntax (likelihood ratio test) to examine whether variables were heteroscedastic or homoscedastic, and added the syntax “residuals(independent, by(variable)” to allow for heteroscedasticity for the heteroscedastic variables, might it be unnecessary to use robust standard errors on top of allowing for heteroscedasticity?

Thank you so much. 🙂

Jim Frost says

My hope would be that after dropping those 19 cases with ongoing infections you won’t need to use anything outside the normal analysis. I’d only make adjustments for outliers or heteroscedasticity if you see evidence in the residual plots that there is a need to do so.

Unfortunately, I don’t use Stata and I’m not familiar with their commands. So, I don’t know which ones would be appropriate should there be the issues you describe. But, here’s the general approach that I recommend.

Start with regular analysis first and see how that works. Check the residual plots and assumptions. Typically, when you use an alternative method there is some sort of tradeoff. Don’t go down that road unless residual analysis indicates you need to! If you find that you do need to make adjustments, the residual plots should give you an idea of what needs fixing. Start with minimal corrections and escalate only as needed.
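As a rough sketch of that workflow in Python (made-up data; a standardized-residual cutoff of 3 stands in for actually inspecting the residual plots), fit the ordinary analysis first and look at the residuals before reaching for robust alternatives:

```python
import numpy as np

# Made-up data: an exact linear trend with one injected outlier.
x = np.arange(20, dtype=float)
y = 2.0 + 0.5 * x
y[10] += 5.0                      # one unusual observation

# Fit ordinary least squares first, then inspect the residuals
# before reaching for robust methods.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
standardized = resid / resid.std(ddof=2)

# Flag observations with unusually large standardized residuals.
flagged = np.where(np.abs(standardized) > 3)[0]
print(flagged)                    # prints [10]
```

Only if this kind of diagnostic shows real problems would you escalate to robust standard errors, transformations, or robust estimators, and then with the minimal correction the residuals call for.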

Helge says

Thank you for your swift answer. The 19 were removed due to suspected ongoing infection (e.g. having a cold, HIV or hepatitis), as the variable was a biomarker for inflammation. So the decision was made based on the idea that ongoing infection would bias the biomarkers we were looking at. I will however run the analyses with and without the 19 and compare results, as you suggest. Thank you very much, I really appreciate your work, Jim.

Jim Frost says

Hi Helge,

Ah, so knowing that additional information makes all the difference! It sounds like removing them is the correct action. That’s the additional mystery I suspected was there!

In this post, I talk about this as a sampling problem. It’s similar to the situation I describe with the bone density study and the subject who had diabetes.

It sounds like in your study you’ve defined your target population as something like healthy individuals, or individuals without a condition that affects inflammation. That’s the target population you’re studying. In this light, an individual with a condition that affects inflammation is not part of that target population. So, if you identify these conditions, those are the specific reasons you can attribute to those individuals.

Based on that information, I’d concur in principle with removing those observations. The results from your study should inform you about the target population. In your report, I’d be sure to clearly define the target population and then explain how you excluded individuals outside of that population. The study is designed to understand the healthy/normal population and not individuals with conditions that affect inflammation.

Additionally, with this information, it’s probably not necessary to run the analyses with and without those subjects–unless doing so is informative in some way.

You’re very welcome! And I’m really glad that you wrote back with the follow-up information. Your study provides a great example for other readers by highlighting the issues real-world studies face involving outliers and deciding how to handle them.

Helge says

Thank you so much for explaining this subject so well! I hope it is okay to ask one question: In multilevel models with a number of extreme outliers in a medium-sized dataset (19 of 147 subjects above the 95th percentile on a continuous variable), would you say that the multilevel modeling technique is robust enough to handle the outliers? Or should these 19 be removed in order not to violate assumptions?

Jim Frost says

Hi Helge,

Multilevel modeling is a generalization of linear least squares modeling. As such, it is highly sensitive to outliers.

Whether you should remove these observations is a separate matter, though. Use the principles outlined in this blog post to help you make that decision. Do these subjects represent your target population as it is defined for your study? Removing even several outliers is a big deal, so removing 19 would go far beyond that! On the face of it, removing all 19 doesn’t sound like a good idea. But, as you hopefully gathered from this blog post, answering that question depends on a lot of subject-area knowledge and a really close investigation of the observations in question. It’s not possible to give you a blanket answer.

I’d recommend really thinking about the target population for your study and taking a very close look at these observations. How is it that you obtained so many subjects above the 95th percentile? Maybe they do represent your target population, and you wouldn’t want to artificially reduce the variability. Keep in mind, simply being an extreme value is not enough by itself to warrant exclusion. You need to find an additional reason that you can attribute to every data point you exclude.

Again, it’s hard for me to imagine removing 19 observations, or 13% of your dataset! It seems like there must be more to the story here. It’s impossible for me to say what it is, of course, but you should investigate.

If you leave some or all of these outliers in the dataset, you might need to change something about the analysis. However, you should try the regular analysis first, and then check the residual plots and assess the regression assumptions. If you’re lucky, your model might satisfy the assumptions and you won’t need to make adjustments.

If your model does violate assumptions, you can try transforming the data or possibly using a robust regression analysis, which you can find in some statistical software packages. These techniques reduce the impact of outliers so that they are less likely to violate the assumptions.
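For readers without such a package handy, here is a minimal sketch of one classic robust approach: iteratively reweighted least squares with Huber weights. The data are made up, and the tuning constant 1.345 is the conventional choice for the Huber loss:

```python
import numpy as np

def huber_irls(X, y, k=1.345, iters=25):
    """Iteratively reweighted least squares with Huber weights:
    observations with large scaled residuals get weight k/u < 1."""
    w = np.ones(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        resid = y - X @ beta
        # Robust scale estimate via the median absolute residual.
        scale = max(np.median(np.abs(resid)) / 0.6745, 1e-8)
        u = np.abs(resid) / scale
        w = np.minimum(1.0, k / np.maximum(u, 1e-12))
    return beta

# Made-up data: an exact line plus one gross outlier.
x = np.arange(20, dtype=float)
y = 1.0 + 2.0 * x
y[3] += 30.0
X = np.column_stack([np.ones_like(x), x])
beta = huber_irls(X, y)
print(beta)  # close to the true intercept 1 and slope 2 despite the outlier
```

The point is the tradeoff Jim describes: rather than deleting the observation, the method automatically gives it less weight, so the outlier stays in the dataset but no longer dominates the fit.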

Another thing to consider is comparing the results with and without the outliers to understand how they change the outcomes. As I mention in this post, if a research group is in disagreement or completely unsure about what to do, you can analyze the data both ways and report the differences.

Best of luck with your study!