
Statistics By Jim

Making statistics intuitive


distributions

Mean Absolute Deviation: Definition, Finding & Formula

By Jim Frost 4 Comments

What is the Mean Absolute Deviation?

The mean absolute deviation (MAD) is a measure of variability that indicates the average distance between observations and their mean. MAD uses the original units of the data, which simplifies interpretation. Larger values signify that the data points spread out further from the average. Conversely, lower values correspond to data points bunching closer to it. The mean absolute deviation is also known as the mean deviation and average absolute deviation.
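To make the definition concrete, here is a minimal Python sketch that computes MAD for a small made-up dataset (the values and the `mean_absolute_deviation` helper are illustrative, not from the post):

```python
def mean_absolute_deviation(data):
    """Average distance between each observation and the mean."""
    mean = sum(data) / len(data)
    return sum(abs(x - mean) for x in data) / len(data)

values = [4, 8, 6, 10, 12]
# mean = 8; absolute deviations = 4, 0, 2, 2, 4; MAD = 12 / 5 = 2.4
print(mean_absolute_deviation(values))
```

Because MAD stays in the data's original units, the result (2.4) reads directly as "observations sit 2.4 units from the mean on average."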

Filed Under: Basics Tagged With: choosing analysis, conceptual, distributions

Stem and Leaf Plot: Making, Reading & Examples

By Jim Frost 2 Comments

What is a Stem and Leaf Plot?

Stem and leaf plots display the shape and spread of a continuous data distribution. These graphs are similar to histograms, but instead of using bars, they show digits. They are particularly valuable tools during exploratory data analysis and can help you identify your distribution's central tendency, variability, and skewness, as well as outliers. Stem and leaf plots are also known as stemplots.

Filed Under: Graphs Tagged With: choosing analysis, distributions, interpreting results

Skewed Distribution: Definition & Examples

By Jim Frost 5 Comments

What is a Skewed Distribution?

A skewed distribution occurs when one tail is longer than the other. Skewness defines the asymmetry of a distribution. Unlike the familiar normal distribution with its bell-shaped curve, these distributions are asymmetric. The two halves of the distribution are not mirror images because the data are not distributed equally on both sides of the distribution’s peak.

Filed Under: Basics Tagged With: conceptual, distributions, graphs

Range of a Data Set

By Jim Frost 1 Comment

The range of a data set is the difference between the maximum and the minimum values. It measures variability using the same units as the data. Larger values represent greater variability.

The range is the easiest measure of dispersion to calculate and interpret in statistics, but it has some limitations. In this post, I’ll show you how to find the range mathematically and graphically, interpret it, explain its limitations, and clarify when to use it.

Filed Under: Basics Tagged With: conceptual, distributions, graphs, interpreting results

Z-score: Definition, Formula, and Uses

By Jim Frost 11 Comments

A z-score measures the distance between a data point and the mean using standard deviations. Z-scores can be positive or negative. The sign tells you whether the observation is above or below the mean. For example, a z-score of +2 indicates that the data point falls two standard deviations above the mean, while a -2 signifies it is two standard deviations below the mean. A z-score of zero indicates that the data point equals the mean. Statisticians also refer to z-scores as standard scores, and I’ll use those terms interchangeably.
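A quick sketch of the standardization formula, z = (x − mean) / SD, with illustrative numbers (not from the post):

```python
def z_score(x, mean, std_dev):
    """Distance of x from the mean, in standard deviation units."""
    return (x - mean) / std_dev

# A value of 130 from a distribution with mean 100 and SD 15:
print(z_score(130, 100, 15))  # 2.0 -> two standard deviations above the mean
print(z_score(70, 100, 15))   # -2.0 -> two standard deviations below the mean
```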

Filed Under: Basics Tagged With: conceptual, distributions, Excel, probability

Relative Frequencies and Their Distributions

By Jim Frost 5 Comments

A relative frequency indicates how often a specific kind of event occurs within the total number of observations. It is a type of frequency that uses percentages, proportions, and fractions.

In this post, learn about relative frequencies, the relative frequency distribution, and its cumulative counterpart.
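As a minimal sketch, relative frequencies are just counts divided by the total number of observations. The categorical sample below is made up for illustration:

```python
from collections import Counter

observations = ["red", "blue", "red", "green", "red", "blue"]
counts = Counter(observations)
total = len(observations)
# Each relative frequency is a proportion of the whole sample.
relative = {category: count / total for category, count in counts.items()}
print(relative)  # red: 0.5, blue and green: smaller proportions summing to 0.5
```

Note that the relative frequencies always sum to 1 (or 100%), which is what makes them comparable across samples of different sizes.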

Filed Under: Basics Tagged With: conceptual, distributions, graphs

Empirical Rule: Definition & Formula

By Jim Frost 1 Comment

What is the Empirical Rule?

The empirical rule in statistics, also known as the 68-95-99.7 rule, states that for normal distributions, 68% of observed data points will lie within one standard deviation of the mean, 95% will fall within two standard deviations, and 99.7% will occur within three standard deviations.
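These percentages can be checked numerically: for a normal distribution, the proportion of values within k standard deviations of the mean is erf(k / √2). A quick stdlib check (the `within_k_sd` helper is mine, for illustration):

```python
import math

def within_k_sd(k):
    """Proportion of a normal distribution within k SDs of the mean."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"within {k} SD: {within_k_sd(k):.4f}")
# ~0.6827, ~0.9545, ~0.9973 -- the 68-95-99.7 rule
```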

Filed Under: Probability Tagged With: conceptual, distributions, graphs

Interquartile Range (IQR): How to Find and Use It

By Jim Frost 19 Comments

What is the Interquartile Range (IQR)?

The interquartile range (IQR) measures the spread of the middle half of your data. It is the range for the middle 50% of your sample. Use the IQR to assess the variability where most of your values lie. Larger values indicate that the central portion of your data spreads out further. Conversely, smaller values show that the middle values cluster more tightly.

In this post, learn what the interquartile range means and the many ways to use it! I’ll show you how to find the interquartile range, use it to measure variability, graph it in boxplots to assess distribution properties, use it to identify outliers, and test whether your data are normally distributed.

The interquartile range is one of several measures of variability. To learn about the others and how the IQR compares, read my post, Measures of Variability.

Interquartile Range Definition

To visualize the interquartile range, imagine dividing your data into quarters. Statisticians refer to these quarters as quartiles and label them from low to high as Q1, Q2, Q3, and Q4. The lowest quartile (Q1) covers the smallest quarter of values in your dataset. The upper quartile (Q4) comprises the highest quarter of values. The interquartile range is the middle half of the data that lies between the upper and lower quartiles. In other words, the interquartile range includes the 50% of data points that are above Q1 and below Q4. The IQR is the red area in the graph below, containing Q2 and Q3 (not labeled).

Graph that illustrates the interquartile range (IQR) as a measure of variability.

When measuring variability, statisticians prefer using the interquartile range instead of the full data range because extreme values and outliers affect it less. Typically, use the IQR with a measure of central tendency, such as the median, to understand your data’s center and spread. This combination creates a fuller picture of your data’s distribution.

Unlike the more familiar mean and standard deviation, the interquartile range and the median are robust measures. Outliers do not strongly influence either statistic because they don’t depend on every value. Additionally, like the median, the interquartile range is superb for skewed distributions. For normal distributions, you can use the standard deviation to determine the percentage of observations that fall specific distances from the mean. However, that doesn’t work for skewed distributions, and the IQR is an excellent alternative.

Related posts: Quartiles: Definition, Finding, and Using, Median: Definition and Uses, and What are Robust Statistics?

How to Find the Interquartile Range (IQR) by Hand

The formula for finding the interquartile range takes the third quartile value and subtracts the first quartile value.

IQR = Q3 – Q1

Equivalently, the interquartile range is the region between the 75th and 25th percentile (75 – 25 = 50% of the data).

Using the IQR formula, we need to find the values for Q3 and Q1. To do that, simply order your data from low to high and split the values into four equal portions.

I’ve divided the dataset below into quartiles. The interquartile range extends from the Q1 value to the Q3 value. For this dataset, the interquartile range is 39 – 20 = 19.

Dataset that shows how to find the interquartile range (IQR)

Note that different methods and statistical software programs will find slightly different Q1 and Q3 values, which affects the interquartile range. These variations stem from alternate ways of finding percentiles. For details about that, read my post about Percentiles: Interpretations and Calculations.
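For instance, Python's standard library lets you pick the percentile method explicitly. Its "exclusive" method is the same convention as Excel's QUARTILE.EXC, so this sketch (with made-up data, not the dataset from the illustration) shows one such choice:

```python
import statistics

data = [10, 14, 15, 18, 20, 21, 24, 27, 30, 35, 41]
# n=4 splits the data into quartiles; "exclusive" matches QUARTILE.EXC.
q1, q2, q3 = statistics.quantiles(data, n=4, method="exclusive")
iqr = q3 - q1
print(q1, q3, iqr)  # -> 15.0 30.0 15.0
```

Switching to `method="inclusive"` (Excel's QUARTILE.INC) would give slightly different cut points for the same data, which is exactly the variation described above.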

How to Find the Interquartile Range using Excel

All statistical software packages will identify the interquartile range as part of their descriptive statistics. Here, I’ll show you how to find it using Excel because most readers can access this application.

To follow along, download the Excel file: IQR. This dataset is the same as the one I use in the illustration above. This file also includes the interquartile range calculations for finding outliers and the IQR normality test described later in this post.

In Excel, you’ll need to use the QUARTILE.EXC function, which has the following arguments: QUARTILE.EXC(array, quart)

  • Array: Cell range of numeric values.
  • Quart: Quartile you want to find.

In my spreadsheet, the data are in cells A2:A20. Consequently, I’ll use the following syntax to find Q1 and Q3, respectively:

  • =QUARTILE.EXC(A2:A20,1)
  • =QUARTILE.EXC(A2:A20,3)

As with my example of finding the interquartile range by hand, Excel indicates that Q3 is 39 and Q1 is 20, so IQR = 39 – 20 = 19.

Related post: Descriptive Statistics in Excel

Using Boxplots to Graph the Interquartile Range

Boxplots are a great way to visualize interquartile ranges and their relation to the median and the overall distribution. These graphs display ranges of values based on quartiles and show asterisks for outliers that fall outside the whiskers. Boxplots work by splitting your data into quarters.

Let’s look at the boxplot anatomy before getting to the example. Notice how it divides your data into quartiles.

Diagram of boxplots that displays the interquartile range (IQR).

The box in the boxplot is your interquartile range! It contains 50% of your data. By comparing the size of these boxes, you can understand your data’s variability. More dispersed distributions have wider boxes.

Additionally, find where the median line falls within each interquartile box. If the median is closer to one side or the other of the box, it’s a skewed distribution. When the median is near the center of the interquartile range, your distribution is symmetric.

For example, in the boxplot below, method 3 has the highest variability in scores and is left-skewed. Conversely, method 2 has a tighter distribution that is symmetrical, although it also has an outlier—read the next section for more about that!

Example of a boxplot that displays scores by teaching method.

Related post: Box Plots Explained with Examples

Using the IQR to Find Outliers

The interquartile range can help you identify outliers. With many other methods of finding outliers, the outliers themselves influence the calculations, potentially causing you to miss them. Fortunately, interquartile ranges are relatively robust against outlier influence and avoid this problem. This method also does not assume the data follow the normal distribution or any other distribution. That’s why using the IQR to find outliers is one of my favorite methods!

To find outliers, you’ll need to know your data’s IQR, Q1, and Q3 values. Take these values and input them into the equations below. Statisticians call the result for each equation an outlier gate. I’ve included these calculations in the IQR example Excel file.

Q1 − 1.5 * IQR: Lower outlier gate.

Q3 + 1.5 * IQR: Upper outlier gate.

Using the same example dataset, I’ll calculate the two outlier gates. For that dataset, the interquartile range is 19, Q1 = 20, and Q3 = 39.

Lower outlier gate: 20 – 1.5 * 19 = -8.5

Upper outlier gate: 39 + 1.5 * 19 = 67.5

Then look for values in the dataset that are below the lower gate or above the upper gate. For the example dataset, there are no outliers. All values fall between these two gates.
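The gate calculations above translate directly to code. This sketch reuses the example's Q1 = 20 and Q3 = 39; the `find_outliers` helper and the test datasets are hypothetical:

```python
q1, q3 = 20, 39
iqr = q3 - q1  # 19
lower_gate = q1 - 1.5 * iqr
upper_gate = q3 + 1.5 * iqr
print(lower_gate, upper_gate)  # -8.5 and 67.5, matching the hand calculation

def find_outliers(data):
    """Flag values that fall outside the outlier gates."""
    return [x for x in data if x < lower_gate or x > upper_gate]

print(find_outliers([5, 20, 31, 39, 60]))  # [] -- all values within the gates
print(find_outliers([5, 20, 31, 39, 95]))  # [95] -- above the upper gate
```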

Boxplots typically use this method to identify outliers and display asterisks when they exist. In the teaching method boxplot above, notice that the Method 2 group has an outlier. The researchers should investigate that value.

Related post: Five Ways to Find Outliers

Using the Interquartile Range to Test Normality

You can even use the interquartile range as a simple test to determine whether your data are normally distributed. When data follow a normal distribution, the interquartile range will have specific properties. The image below highlights these properties. Specifically, in our calculations below, we’ll use the fact that, for a normal distribution, Q1 and Q3 fall 0.67 standard deviations (σ) below and above the mean, respectively.

Image shows how a probability distribution function relates to a boxplot.
By Jhguch at en.wikipedia, CC BY-SA 2.5, Link

You can assess whether your IQR is consistent with a normal distribution. However, this test should not replace a formal normality hypothesis test.

To perform this test, you’ll need to know the sample standard deviation (s) and sample mean (x̅). Input these values into the formulas for Q1 and Q3 below.

  • Q1 = x̅ − (s * 0.67)
  • Q3 = x̅ + (s * 0.67)

Compare these calculated values to your data’s actual Q1 and Q3 values. If they are notably different, your data might not follow the normal distribution.

We’ll return to our example dataset from before. Our actual Q1 and Q3 are 20 and 39, respectively.

The sample average is 31.3, and its standard deviation is 14.1. I’ll input those values into the equations.

Q1 = 31.3 – (14.1 * 0.67) = 21.9

Q3 = 31.3 + (14.1 * 0.67) = 40.7

The calculated values are pretty close to the actual data values, suggesting that our data follow the normal distribution. I’ve included these calculations in the IQR example spreadsheet.
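These calculations are easy to script. The sketch below uses the sample statistics quoted above (mean 31.3, SD 14.1) and compares the predicted quartiles to the actual Q1 = 20 and Q3 = 39:

```python
mean, sd = 31.3, 14.1

# Quartiles a normal distribution with this mean and SD would predict:
predicted_q1 = mean - sd * 0.67
predicted_q3 = mean + sd * 0.67
print(round(predicted_q1, 1), round(predicted_q3, 1))  # 21.9 and 40.7

# Differences from the actual quartiles (20 and 39) are small, which is
# consistent with an approximately normal distribution.
print(round(predicted_q1 - 20, 1), round(predicted_q3 - 39, 1))
```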

Related posts: Understanding the Normal Distribution and How to Identify the Distribution of Your Data

Filed Under: Basics Tagged With: conceptual, distributions, Excel

Standard Deviation: Interpretations and Calculations

By Jim Frost 8 Comments

The standard deviation (SD) is a single number that summarizes the variability in a dataset. It represents the typical distance between each data point and the mean. Smaller values indicate that the data points cluster closer to the mean—the values in the dataset are relatively consistent. Conversely, higher values signify that the values spread out further from the mean. Data values become more dissimilar, and extreme values become more likely.
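A quick sketch with Python's standard library and two made-up datasets illustrates this interpretation:

```python
import statistics

consistent = [48, 49, 50, 51, 52]   # values cluster near the mean of 50
spread_out = [30, 40, 50, 60, 70]   # same mean, but values spread out further

# statistics.stdev computes the sample standard deviation.
print(statistics.stdev(consistent))
print(statistics.stdev(spread_out))
```

Both datasets have a mean of 50, but the second has ten times the standard deviation, so its values are far less consistent.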

Filed Under: Basics Tagged With: conceptual, distributions, graphs

What is the Mean and How to Find It: Definition & Formula

By Jim Frost 1 Comment

What is the Mean?

The mean in math and statistics summarizes an entire dataset with a single number representing the data’s center point or typical value. It is also known as the arithmetic mean, and it is the most common measure of central tendency. It is frequently called the “average.”

Filed Under: Basics Tagged With: conceptual, distributions, graphs

Gamma Distribution: Uses, Parameters & Examples

By Jim Frost 17 Comments

What is the Gamma Distribution?

The gamma distribution is a continuous probability distribution that models right-skewed data. Statisticians have used this distribution to model cancer rates, insurance claims, and rainfall. Additionally, the gamma distribution is similar to the exponential distribution, and you can use it to model the same types of phenomena: failure times, wait times, service times, etc.

Filed Under: Probability Tagged With: conceptual, distributions, graphs

Exponential Distribution: Uses, Parameters & Examples

By Jim Frost 6 Comments

What is the Exponential Distribution?

The exponential distribution is a right-skewed continuous probability distribution that models variables in which small values occur more frequently than higher values. It is a unimodal distribution where small values have relatively high probabilities, which consistently decline as data values increase. Statisticians use the exponential distribution to model the amount of change in people’s pockets, the length of phone calls, and sales totals for customers. In all these cases, small values are more likely than larger values.

Filed Under: Probability Tagged With: conceptual, distributions, graphs

Weibull Distribution: Uses, Parameters & Examples

By Jim Frost 6 Comments

What is a Weibull Distribution?

The Weibull distribution is a continuous probability distribution that can fit an extensive range of distribution shapes. Like the normal distribution, the Weibull distribution is unimodal and describes probabilities associated with continuous data. However, unlike the normal distribution, it can also model skewed data. In fact, its extreme flexibility allows it to model both left- and right-skewed data.

Filed Under: Probability Tagged With: conceptual, distributions, graphs

Poisson Distribution: Definition & Uses

By Jim Frost 11 Comments

What is the Poisson Distribution?

The Poisson distribution is a discrete probability distribution that describes probabilities for counts of events that occur in a specified observation space. It is named after Siméon Denis Poisson.

In statistics, count data represent the number of events or characteristics over a given length of time, area, volume, etc. For example, you can count the number of cigarettes smoked per day, meteors seen per hour, the number of defects in a batch, and the occurrence of a particular crime by county.
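As a sketch, Poisson probabilities come straight from the probability mass function, P(k) = e^(−λ) λ^k / k!. The rate of 3 events per interval below is an assumption for illustration:

```python
import math

def poisson_pmf(k, lam):
    """Probability of observing exactly k events, given an average rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam = 3  # e.g., an assumed average of 3 meteors seen per hour
for k in range(5):
    print(k, round(poisson_pmf(k, lam), 4))
```

The probabilities over all possible counts sum to 1, and for λ = 3 the counts k = 2 and k = 3 are equally likely, a property of the Poisson pmf when λ is an integer.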

Filed Under: Probability Tagged With: conceptual, distributions, graphs

Dot Plots: Using, Examples, and Interpreting

By Jim Frost Leave a Comment

Use dot plots to display the distribution of your sample data when you have continuous variables. These graphs stack dots along the horizontal X-axis to represent the frequencies of different values. More dots indicate greater frequency. Each dot represents a set number of observations.

Filed Under: Graphs Tagged With: analysis example, choosing analysis, data types, distributions, interpreting results

Chebyshev’s Theorem in Statistics

By Jim Frost 19 Comments

Chebyshev’s Theorem estimates the minimum proportion of observations that fall within a specified number of standard deviations from the mean. This theorem applies to a broad range of probability distributions. Chebyshev’s Theorem is also known as Chebyshev’s Inequality.
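The bound itself is a one-line formula: at least 1 − 1/k² of observations lie within k standard deviations of the mean, for any distribution. A minimal sketch (the helper name is mine):

```python
def chebyshev_min_proportion(k):
    """Minimum proportion of observations within k SDs of the mean."""
    if k <= 1:
        raise ValueError("Chebyshev's bound is only informative for k > 1")
    return 1 - 1 / k ** 2

for k in (2, 3, 4):
    print(f"within {k} SDs: at least {chebyshev_min_proportion(k):.1%}")
# at least 75.0%, 88.9%, and 93.8%, regardless of the distribution
```

Compare these guaranteed minimums to the empirical rule's 95% and 99.7% for two and three SDs: the normal distribution does much better than the worst case Chebyshev allows.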

Filed Under: Basics Tagged With: choosing analysis, distributions, probability

Coefficient of Variation in Statistics

By Jim Frost 32 Comments

The coefficient of variation (CV) is a relative measure of variability that indicates the size of a standard deviation in relation to its mean. It is a standardized, unitless measure that allows you to compare variability between disparate groups and characteristics. It is also known as the relative standard deviation (RSD).

In this post, you will learn about the coefficient of variation, how to calculate it, when it is particularly useful, and when to avoid it.
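Because CV = SD / mean, it takes only a few lines to compare relative variability across different units. The height and weight data below are made up for illustration:

```python
import statistics

def coefficient_of_variation(data):
    """Relative standard deviation: SD divided by the mean."""
    return statistics.stdev(data) / statistics.mean(data)

heights_cm = [160, 165, 170, 175, 180]
weights_kg = [55, 62, 70, 78, 90]

# CV is unitless, so centimeters and kilograms can be compared directly.
print(f"heights CV: {coefficient_of_variation(heights_cm):.3f}")
print(f"weights CV: {coefficient_of_variation(weights_kg):.3f}")
```

In this example, the weights have the larger CV, so they vary more relative to their mean even though the two variables use different units.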

Filed Under: Basics Tagged With: conceptual, distributions

How the Chi-Squared Test of Independence Works

By Jim Frost 21 Comments

Chi-squared tests of independence determine whether a relationship exists between two categorical variables. Do the values of one categorical variable depend on the value of the other categorical variable? If the two variables are independent, knowing the value of one variable provides no information about the value of the other variable.

I’ve previously written about Pearson’s chi-square test of independence using a fun Star Trek example. Are the uniform colors related to the chances of dying? You can test the notion that the infamous red shirts have a higher likelihood of dying. In that post, I focused on the purpose of the test, applied it to this example, and interpreted the results.

In this post, I’ll take a bit of a different approach. I’ll show you the nuts and bolts of how to calculate the expected values, chi-square value, and degrees of freedom. Then you’ll learn how to use the chi-squared distribution in conjunction with the degrees of freedom to calculate the p-value.
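As a sketch of those mechanics (expected counts from row and column totals, then the chi-square statistic), here is a small pure-Python version. The 2x2 table is made up for illustration, not the Star Trek data:

```python
def chi_square_statistic(table):
    """Sum of (observed - expected)^2 / expected over every cell."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    chi_sq = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns.
            expected = row_totals[i] * col_totals[j] / grand_total
            chi_sq += (observed - expected) ** 2 / expected
    return chi_sq

# Rows: two groups; columns: two outcomes (hypothetical counts).
table = [[20, 30], [40, 10]]
print(round(chi_square_statistic(table), 3))
# Degrees of freedom for an r x c table: (r - 1) * (c - 1) = 1 here.
```

Comparing the statistic against the chi-squared distribution with those degrees of freedom then yields the p-value, as the post goes on to explain.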

Filed Under: Hypothesis Testing Tagged With: analysis example, distributions, interpreting results

Low Power Tests Exaggerate Effect Sizes

By Jim Frost 14 Comments

If your study has low statistical power, it will exaggerate the effect size. What?!

Statistical power is the ability of a hypothesis test to detect an effect that exists in the population. Clearly, a high-powered study is a good thing just for being able to identify these effects. Low power reduces your chances of discovering real findings. However, many analysts don’t realize that low power also inflates the effect size. Learn more about Statistical Power.

In this post, I show why this unexpected relationship between power and exaggerated effect sizes exists. I’ll also tie it to other issues, such as the bias of effects published in journals and other matters about statistical power. I think this post will be eye-opening and thought-provoking! As always, I’ll use many graphs rather than equations.
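The phenomenon can be sketched with a small simulation. The setup below is my own simplified assumption (a one-sample z-test with known SD = 1, a small true effect, and a tiny sample), not the post's example, but it shows the mechanism: only samples with unusually large observed effects clear the significance cutoff, so the significant studies overestimate the true effect.

```python
import random
import statistics

random.seed(42)
true_effect, n, trials = 0.2, 10, 5000
# |sample mean| must exceed this for two-sided significance at alpha = 0.05;
# we track the positive tail, as journals tend to publish positive findings.
critical = 1.96 / n ** 0.5

significant_estimates = []
for _ in range(trials):
    sample_mean = statistics.mean(random.gauss(true_effect, 1) for _ in range(n))
    if sample_mean > critical:
        significant_estimates.append(sample_mean)

power = len(significant_estimates) / trials
inflated = statistics.mean(significant_estimates)
print(f"power ~ {power:.2f}; mean significant estimate ~ {inflated:.2f} "
      f"vs true effect {true_effect}")
```

With power this low, every significant estimate necessarily exceeds the cutoff (about 0.62), which is already more than triple the true effect of 0.2.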

Filed Under: Hypothesis Testing Tagged With: conceptual, distributions, graphs

Revisiting the Monty Hall Problem with Hypothesis Testing

By Jim Frost 22 Comments

The Monty Hall Problem is where Monty presents you with three doors, one of which contains a prize. He asks you to pick one door, which remains closed. Monty opens one of the other doors that does not have the prize. This process leaves two unopened doors—your original choice and one other. He allows you to switch from your initial choice to the other unopened door. Do you accept the offer?

If you accept his offer to switch doors, you’re twice as likely to win (about 67% versus 33%) as you are if you stay with your original choice.

Mind-blowing, right?

The solution to the Monty Hall Problem is tricky and counter-intuitive. It tripped up many experts back in the 1980s. However, the correct answer to the Monty Hall Problem is now well established using a variety of methods. It has been proven mathematically, demonstrated with computer simulations, and confirmed by empirical experiments, including on television by both the Mythbusters (CONFIRMED!) and James May’s Man Lab. You won’t find any statisticians who disagree with the solution.

In this post, I’ll explore aspects of this problem that have arisen in discussions with some stubborn resisters to the notion that you can increase your chances of winning by switching!

The Monty Hall problem provides a fun way to explore issues that relate to hypothesis testing. I’ve got a lot of fun lined up for this post, including the following!

  • Using a computer simulation to play the game 10,000 times.
  • Assessing sampling distributions to compare the 66% hypothesis to another contender.
  • Performing a power and sample size analysis to determine the number of times you need to play the Monty Hall game to get an answer.
  • Conducting an experiment by playing the game repeatedly myself, recording the results, and using a proportions hypothesis test to draw conclusions!
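A computer simulation of the game described above can be sketched in a few lines. The door setup and player strategies follow the rules in the post; the function names are mine:

```python
import random

def play(switch, rng):
    """Play one Monty Hall game; return True if the player wins."""
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty opens a door that is neither the player's pick nor the prize.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(1)
games = 10_000
switch_wins = sum(play(switch=True, rng=rng) for _ in range(games)) / games
stay_wins = sum(play(switch=False, rng=rng) for _ in range(games)) / games
print(f"switch: {switch_wins:.3f}, stay: {stay_wins:.3f}")  # ~0.667 vs ~0.333
```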

Filed Under: Hypothesis Testing Tagged With: analysis example, conceptual, distributions, interpreting results



    Copyright © 2023 · Jim Frost · Privacy Policy