Percent error compares an estimate to a correct value and expresses the difference between them as a percentage. This statistic allows analysts to understand the size of the error relative to the true value. It is also known as percentage error and % error. [Read more…] about Percent Error
Accuracy and precision are crucial properties of your measurements when you’re relying on data to draw conclusions. Both concepts apply to a series of measurements from a measurement system.
Measurement systems facilitate the quantification of characteristics for data collection. They include a collection of instruments, software, and personnel necessary to assess the property of interest. For example, a research project studying bone density will devise a measurement system to produce accurate and precise measurements of bone density. [Read more…] about Accuracy vs Precision
A control group in an experiment does not receive the treatment. Instead, it serves as a comparison group for the treatments. Researchers compare the results of a treatment group to the control group to determine the effect size, also known as the treatment effect. [Read more…] about Control Group in an Experiment
The range of a data set is the difference between the maximum and the minimum values. It measures variability using the same units as the data. Larger values represent greater variability.
The range is the easiest measure of dispersion to calculate and interpret in statistics, but it has some limitations. In this post, I’ll show you how to find the range mathematically and graphically, interpret it, explain its limitations, and clarify when to use it. [Read more…] about Range of a Data Set
A z-score measures the distance between a data point and the mean using standard deviations. Z-scores can be positive or negative. The sign tells you whether the observation is above or below the mean. For example, a z-score of +2 indicates that the data point falls two standard deviations above the mean, while a -2 signifies it is two standard deviations below the mean. A z-score of zero indicates that the data point equals the mean. Statisticians also refer to z-scores as standard scores, and I’ll use those terms interchangeably. [Read more…] about Z-score: Definition, Formula, and Uses
Pascal’s triangle is a number pattern that fits in a triangle. It is named after Blaise Pascal, a French mathematician, and it has many beneficial mathematic and statistical properties, including finding the number of combinations and expanding binomials.
To make Pascal’s triangle, start with a 1 at the top. Then work your way down in a triangular pattern. Each value in the triangle is the sum of the two values above it. The animation below depicts how to calculate the values in Pascal’s triangle.
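The construction described above can be sketched in Python. The function name `pascal_triangle` is my own, not a standard library function:

```python
def pascal_triangle(num_rows):
    """Build the first num_rows rows of Pascal's triangle.

    Every row starts and ends with 1; each interior value is
    the sum of the two values above it in the previous row.
    """
    rows = []
    for n in range(num_rows):
        row = [1] * (n + 1)
        for k in range(1, n):
            row[k] = rows[n - 1][k - 1] + rows[n - 1][k]
        rows.append(row)
    return rows

for row in pascal_triangle(5):
    print(row)
# [1]
# [1, 1]
# [1, 2, 1]
# [1, 3, 3, 1]
# [1, 4, 6, 4, 1]
```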
Navigating Pascal’s Triangle
The notation for Pascal’s triangle is the following:
- n = the row number. The top of the pyramid is row zero. The next row down with the two 1s is row 1, and so on.
- k = the column or item number. k = 0 for the left-most values and increases by one as you move right.
The notation for an entry in Pascal’s triangle at row n and column k is the following:
Caution: it’s easy to forget that the top of the triangle is row 0 and that the first 1 in any row is item or column 0.
Using Pascal’s Triangle to Find the Number of Combinations
This number pattern has many intriguing and valuable properties. Because this is a statistics blog, I’ll start with its ability to find combinations. In probability theory, combinations are a selection of outcomes where order does not matter. For example, a pizza with ham, mushroom, and pepperoni is a combination. You can change the order of those ingredients, but it’s still the same pizza.
When calculating probabilities, you’ll often need to find the number of combinations given several parameters. The standard notation for combinations is nCr, where:
- n = the number of options
- r = the size of the combination
You can use Pascal’s triangle to find the number of combinations without repetition, which means the outcomes cannot repeat. To use Pascal’s triangle to find the number of combinations, look in row n, column r.
Suppose we want to find the number of pizza combinations using five possible ingredients (n = 5), and we’ll only include three on the pizza (r = 3). And you can only use each ingredient once—no double pepperoni!
To use Pascal’s triangle to find the number of combinations for 5C3, look in row 5, column 3.
There are 10 combinations for the specified parameters!
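You can confirm the triangle lookup with Python’s `math.comb`, which computes combinations without repetition directly:

```python
import math

# 5 ingredients, choose 3 for the pizza:
# same as row 5, column 3 of Pascal's triangle
print(math.comb(5, 3))  # 10
```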
Pascal’s Triangle and Binomial Expansion
In algebra, binomial expansion describes expanding (x + y)ⁿ to a sum of terms of the form axᵇyᶜ, where:
- b and c are nonnegative integers
- n = b + c
- a = the coefficient of each term, a positive integer.
For example, (x + y)⁴ = x⁴ + 4x³y + 6x²y² + 4xy³ + y⁴
Notice that the coefficients in the equation are: 1, 4, 6, 4, 1.
Using Pascal’s triangle, you can find the coefficient values of a binomial expansion by looking at row n, column b. For our example, n = 4 and b ranges from 4 to 0.
For our example binomial expansion, we need to look at the 4th row. Then work our way through the b values, 4 to 0. Voila! Pascal’s triangle provides the coefficients for the binomial expansion!
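As a quick sketch, `math.comb` reproduces row 4 of the triangle, and a numerical spot check at arbitrary values of x and y confirms the coefficients match the expansion of (x + y)⁴:

```python
import math

# Row 4 of Pascal's triangle: the binomial coefficients for (x + y)^4
row_4 = [math.comb(4, k) for k in range(5)]
print(row_4)  # [1, 4, 6, 4, 1]

# Spot check: evaluate both sides of the expansion at x=2, y=3
x, y = 2, 3
lhs = (x + y) ** 4
rhs = sum(math.comb(4, k) * x ** (4 - k) * y ** k for k in range(5))
print(lhs == rhs)  # True
```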
Other Uses for Pascal’s Triangle
Because my site is primarily about statistics and probability, I’ll only touch on several other patterns and ways to use Pascal’s triangle.
When the 1st element of a row (the first number after the leading 1) is a prime number, you can evenly divide all numbers in that row (except the 1s) by it.
Natural numbers, triangular numbers, and more!
When you left justify Pascal’s triangle, the columns represent various types of numbers.
When you left justify the rows, the diagonals in Pascal’s triangle sum to the Fibonacci sequence.
Powers of 2
The sum of each row equals 2ⁿ, where n = the row number.
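A quick check of the row-sum property using binomial coefficients:

```python
import math

# Each row n of Pascal's triangle sums to 2**n
for n in range(6):
    row_sum = sum(math.comb(n, k) for k in range(n + 1))
    print(f"row {n}: sum = {row_sum}, 2^{n} = {2 ** n}")
```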
Hockey Stick Pattern
Start at any of the 1s at either edge of the triangle. Work your way down a diagonal. At any point, bend your path downward. That last value equals the sum of the previous values.
For example, in the top left hockey stick (light blue), 1 + 4 + 10 = 15
The same pattern holds for all other hockey sticks in Pascal’s triangle.
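The hockey stick pattern can be checked numerically. In the light-blue example, the diagonal values 1, 4, and 10 are C(3,3), C(4,3), and C(5,3), and the bent value 15 is C(6,4):

```python
import math

# Sum down a diagonal: C(3,3) + C(4,3) + C(5,3) = 1 + 4 + 10
diagonal = [math.comb(n, 3) for n in range(3, 6)]
print(diagonal, sum(diagonal))  # [1, 4, 10] 15

# The value at the bend equals that sum
print(math.comb(6, 4))  # 15
```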
Robust statistics provide valid results across a broad variety of conditions, including assumption violations, the presence of outliers, and various other problems. The term “robust statistic” applies both to a statistic (i.e., median) and statistical analyses (i.e., hypothesis tests and regression). [Read more…] about What are Robust Statistics?
A relative frequency indicates how often a specific kind of event occurs within the total number of observations. It is a type of frequency that uses percentages, proportions, and fractions.
In this post, learn about relative frequencies, the relative frequency distribution, and its cumulative counterpart. [Read more…] about Relative Frequencies and Their Distributions
The interquartile range (IQR) measures the spread of the middle half of your data. It is the range for the middle 50% of your sample. Use the IQR to assess the variability where most of your values lie. Larger values indicate that the central portion of your data spread out further. Conversely, smaller values show that the middle values cluster more tightly.
In this post, learn what the interquartile range means and the many ways to use it! I’ll show you how to find the interquartile range, use it to measure variability, graph it in boxplots to assess distribution properties, use it to identify outliers, and test whether your data are normally distributed.
The interquartile range is one of several measures of variability. To learn about the others and how the IQR compares, read my post, Measures of Variability.
Interquartile Range Overview
To visualize the interquartile range, imagine dividing your data into quarters. Statisticians refer to these quarters as quartiles and label them from low to high as Q1, Q2, Q3, and Q4. The lowest quartile (Q1) covers the smallest quarter of values in your dataset. The upper quartile (Q4) comprises the highest quarter of values. The interquartile range is the middle half of the data that lies between the upper and lower quartiles. In other words, the interquartile range includes the 50% of data points that are above Q1 and below Q4. The IQR is the red area in the graph below, containing Q2 and Q3 (not labeled).
When measuring variability, statisticians prefer using the interquartile range instead of the full data range because extreme values and outliers affect the IQR less. Typically, use the IQR with a measure of central tendency, such as the median, to understand your data’s center and spread. This combination creates a fuller picture of your data’s distribution.
Unlike the more familiar mean and standard deviation, the interquartile range and the median are robust measures. Outliers do not strongly influence either statistic because they don’t depend on every value. Additionally, like the median, the interquartile range is superb for skewed distributions. For normal distributions, you can use the standard deviation to determine the percentage of observations that fall specific distances from the mean. However, that doesn’t work for skewed distributions, and the IQR is an excellent alternative.
How to Find the IQR by Hand
The formula for calculating the interquartile range takes the third quartile value and subtracts the first quartile value.
IQR = Q3 – Q1
Equivalently, the interquartile range is the region between the 75th and 25th percentile (75 – 25 = 50% of the data).
Using the IQR formula, we need to find the values for Q3 and Q1. To do that, simply order your data from low to high and split the values into four equal portions.
I’ve divided the dataset below into quartiles. The interquartile range extends from the Q1 value to the Q3 value. For this dataset, the interquartile range is 39 – 20 = 19.
Note that different methods and statistical software programs will find slightly different Q1 and Q3 values, which affects the interquartile range. These variations stem from alternate ways of finding percentiles. For details about that, read my post about Percentiles: Interpretations and Calculations.
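To see one of those methods in action, here is a sketch using Python’s `statistics.quantiles`, whose default "exclusive" method I believe uses the same positioning as Excel’s QUARTILE.EXC. The dataset below is hypothetical, not the one from the illustration above:

```python
import statistics

# Hypothetical ordered dataset (not the example dataset above)
data = [10, 20, 30, 40, 50, 60, 70]

# n=4 returns the three quartile cut points Q1, Q2, Q3
q1, q2, q3 = statistics.quantiles(data, n=4, method="exclusive")
iqr = q3 - q1
print(q1, q3, iqr)  # 20.0 60.0 40.0
```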
Finding the Interquartile Range using Excel
All statistical software packages will identify the interquartile range as part of their descriptive statistics. Here, I’ll show you how to find it using Excel because most readers can access this application.
To follow along, download the Excel file: IQR. This dataset is the same as the one I use in the illustration above. This file also includes the interquartile range calculations for finding outliers and the IQR normality test described later in this post.
In Excel, you’ll need to use the QUARTILE.EXC function, which has the following arguments: QUARTILE.EXC(array, quart)
- Array: Cell range of numeric values.
- Quart: Quartile you want to find.
In my spreadsheet, the data are in cells A2:A20. Consequently, I’ll use the following syntax to find Q1 and Q3, respectively:

=QUARTILE.EXC(A2:A20, 1)

=QUARTILE.EXC(A2:A20, 3)
As with my example of finding the interquartile range by hand, Excel indicates that Q3 is 39 and Q1 is 20. IQR = 39 – 20 = 19
Related post: Descriptive Statistics in Excel
Using Boxplots to Graph the Interquartile Range
Boxplots are a great way to visualize interquartile ranges and their relation to the median and the overall distribution. These graphs display ranges of values based on quartiles and show asterisks for outliers that fall outside the whiskers. Boxplots work by splitting your data into quarters.
Let’s look at the boxplot anatomy before getting to the example. Notice how it divides your data into quartiles.
The box in the boxplot is your interquartile range! It contains 50% of your data. By comparing the size of these boxes, you can understand your data’s variability. More dispersed distributions have wider boxes.
Additionally, find where the median line falls within each interquartile box. If the median is closer to one side or the other of the box, it’s a skewed distribution. When the median is near the center of the interquartile range, your distribution is symmetric.
For example, in the boxplot below, method 3 has the highest variability in scores and is left-skewed. Conversely, method 2 has a tighter distribution that is symmetrical, although it also has an outlier—read the next section for more about that!
Related post: Boxplots versus Individual Value Plots
Using the IQR to Find Outliers
The interquartile range can help you identify outliers. For other methods of finding outliers, the outliers themselves influence the calculations, potentially causing you to miss them. Fortunately, interquartile ranges are relatively robust against outlier influence and can avoid this problem. This method also does not assume the data follow the normal distribution or any other distribution. That’s why using the IQR to find outliers is one of my favorite methods!
To find outliers, you’ll need to know your data’s IQR, Q1, and Q3 values. Take these values and input them into the equations below. Statisticians call the result for each equation an outlier gate. I’ve included these calculations in the IQR example Excel file.
Q1 − 1.5 * IQR: Lower outlier gate.
Q3 + 1.5 * IQR: Upper outlier gate.
Using the same example dataset, I’ll calculate the two outlier gates. For that dataset, the interquartile range is 19, Q1 = 20, and Q3 = 39.
Lower outlier gate: 20 – 1.5 * 19 = -8.5
Upper outlier gate: 39 + 1.5 * 19 = 67.5
Then look for values in the dataset that are below the lower gate or above the upper gate. For the example dataset, there are no outliers. All values fall between these two gates.
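The gate calculation for the example values (Q1 = 20, Q3 = 39, IQR = 19) can be sketched as follows; the final dataset here is hypothetical, added only to show the filtering step:

```python
q1, q3 = 20, 39
iqr = q3 - q1  # 19

lower_gate = q1 - 1.5 * iqr
upper_gate = q3 + 1.5 * iqr
print(lower_gate, upper_gate)  # -8.5 67.5

# Flag observations outside the gates (hypothetical values)
data = [5, 22, 31, 39, 70]
outliers = [x for x in data if x < lower_gate or x > upper_gate]
print(outliers)  # [70]
```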
Boxplots typically use this method to identify outliers and display asterisks when they exist. In the teaching method boxplot above, notice that the Method 2 group has an outlier. The researchers should investigate that value.
Related post: Five Ways to Find Outliers
Using the Interquartile Range to Test Normality
You can even use the interquartile range as a simple test to determine whether your data are normally distributed. When data follow a normal distribution, the interquartile range will have specific properties. The image below highlights these properties. Specifically, in our calculations below, we’ll use the standard deviations (σ) that correspond to the interquartile range, -0.67 and 0.67.
You can assess whether your IQR is consistent with a normal distribution. However, this test should not replace a formal normality hypothesis test.
To perform this test, you’ll need to know the sample standard deviation (s) and sample mean (x̅). Input these values into the formulas for Q1 and Q3 below.
- Q1 = x̅ − (s * 0.67)
- Q3 = x̅ + (s * 0.67)
Compare these calculated values to your data’s actual Q1 and Q3 values. If they are notably different, your data might not follow the normal distribution.
We’ll return to our example dataset from before. Our actual Q1 and Q3 are 20 and 39, respectively.
The sample average is 31.3, and its standard deviation is 14.1. I’ll input those values into the equations.
Q1 = 31.3 – (14.1 * 0.67) = 21.9
Q3 = 31.3 + (14.1 * 0.67) = 40.7
The calculated values are pretty close to the actual data values, suggesting that our data follow the normal distribution. I’ve included these calculations in the IQR example spreadsheet.
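The comparison above (mean 31.3, standard deviation 14.1, actual Q1 = 20 and Q3 = 39) can be sketched in a few lines:

```python
mean, sd = 31.3, 14.1

# Expected quartiles if the data were normally distributed
expected_q1 = mean - sd * 0.67
expected_q3 = mean + sd * 0.67
print(round(expected_q1, 1), round(expected_q3, 1))  # 21.9 40.7

# Actual quartiles from the example dataset for comparison
actual_q1, actual_q3 = 20, 39
```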
In statistics, the median is the value that splits an ordered list of data values in half. Half the values are below it and half are above—it’s right in the middle of the dataset. The median is the same as the second quartile or the 50th percentile. It is one of several measures of central tendency. [Read more…] about Median Definition and Uses
The standard deviation is a single number that summarizes the variability in a dataset. It represents the typical distance between each data point and the mean. Smaller values indicate that the data points cluster closer to the mean—the values in the dataset are relatively consistent. Conversely, higher values signify that the values spread out further from the mean. Data values become more dissimilar, and extreme values become more likely. [Read more…] about Standard Deviation: Interpretations and Calculations
In statistics, the mean summarizes an entire dataset with a single number representing the data’s center point or typical value. It is also known as the arithmetic average, and it is one of several measures of central tendency. It is likely the measure of central tendency with which you’re most familiar! Learn how to calculate the mean, and when it is and is not a good statistic to use!
How Do You Find the Mean?
Finding the mean is very simple. Just add all the values and divide by the number of observations—the formula is below.
For example, if the heights of five people are 48, 51, 52, 54, and 56 inches, their average height is 52.2 inches.
(48 + 51 + 52 + 54 + 56) / 5 = 52.2
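The same calculation in Python:

```python
heights = [48, 51, 52, 54, 56]

# Add all the values and divide by the number of observations
mean = sum(heights) / len(heights)
print(mean)  # 52.2
```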
When Do You Use the Mean?
Ideally, the mean indicates the region where most values in a distribution fall. Statisticians refer to it as the central location of a distribution. You can think of it as the tendency of data to cluster around a middle value. The histogram below illustrates the average accurately finding the center of the data’s distribution.
However, the mean does not always find the center of the data. It is sensitive to skewed data and extreme values. For example, when the data are skewed, it can miss the mark. In the histogram below, the average is outside the area with the most common values.
This problem occurs because outliers have a substantial impact on the mean. Extreme values in an extended tail pull it away from the center. As the distribution becomes more skewed, the average is drawn further away from the center.
In these cases, the mean can be misleading because it might not be near the most common values. Consequently, it’s best to use the average to measure the central tendency when you have a symmetric distribution.
For skewed distributions, it’s often better to use the median, which uses a different method to find the central location. Note that the mean provides no information about the variability present in a distribution. To evaluate that characteristic, assess the standard deviation.
Related post: Measures of Central Tendency: Mean, Median, and Mode
Using Sample Means to Estimate Population Means
In statistics, analysts often use a sample average to estimate a population mean. For small samples, the sample mean can differ greatly from the population. However, as the sample size grows, the law of large numbers states that the sample average is likely to be close to the population value.
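A small simulation sketch of that convergence, assuming a uniform(0, 100) population whose mean is 50: as the sample size grows, the sample average lands closer to the population mean.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# Population: uniform(0, 100), so the population mean is 50
for size in (10, 1_000, 100_000):
    sample = [random.uniform(0, 100) for _ in range(size)]
    sample_mean = sum(sample) / size
    print(f"n = {size:>7}: sample mean = {sample_mean:.2f}")
```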
Hypothesis tests, such as t-tests and ANOVA, use samples to determine whether population means are different. Statisticians refer to this process of using samples to estimate the properties of entire populations as inferential statistics.
Related post: Descriptive Statistics Vs. Inferential Statistics
The R programming language is a powerful and free statistical software tool that analysts use frequently.
The R programming language is open source software: the R community develops and maintains it, and users can download it for free.
Being open source provides many advantages, including the following:
- New statistical methods are quickly available because the R community is vast and active.
- The source code for each function is freely available and everybody can review it.
- Using the R programming language is free! That’s a significant advantage over relatively expensive statistical tools, such as SAS, STATA, and SPSS.
In this article, I give you a brief introduction to the strengths of the R programming language by applying basic statistical concepts to a real dataset using R functions. [Read more…] about Introduction to Statistics Using the R Programming Language
Excel can calculate correlation coefficients and a variety of other statistical analyses. Even if you don’t use Excel regularly, this post is an excellent introduction to calculating and interpreting correlation.
In this post, I provide step-by-step instructions for having Excel calculate Pearson’s correlation coefficient, and I’ll show you how to interpret the results. Additionally, I include links to relevant statistical resources I’ve written that provide intuitive explanations. Together, we’ll analyze and interpret an example dataset! [Read more…] about Using Excel to Calculate Correlation
The law of large numbers states that as the number of trials increases, sample values tend to converge on the expected result. The two forms of this law lay the foundation for both statistics and probability theory.
In this post, I explain both forms of the law, simulate them in action, and explain why they’re crucial for statistics and probability! [Read more…] about Law of Large Numbers
Chebyshev’s Theorem estimates the minimum proportion of observations that fall within a specified number of standard deviations from the mean. This theorem applies to a broad range of probability distributions. Chebyshev’s Theorem is also known as Chebyshev’s Inequality. [Read more…] about Chebyshev’s Theorem in Statistics
Spearman’s correlation in statistics is a nonparametric alternative to Pearson’s correlation. Use Spearman’s correlation for data that follow curvilinear, monotonic relationships and for ordinal data. Statisticians also refer to Spearman’s rank order correlation coefficient as Spearman’s ρ (rho).
In this post, I’ll cover what all that means so you know when and why you should use Spearman’s correlation instead of the more common Pearson’s correlation. [Read more…] about Spearman’s Correlation Explained
Effect sizes in statistics quantify the differences between group means and the relationships between variables. While analysts often focus on statistical significance using p-values, effect sizes determine the practical importance of the findings. [Read more…] about Effect Sizes in Statistics
Descriptive statistics summarize your dataset, painting a picture of its properties. These properties include various central tendency and variability measures, distribution properties, outlier detection, and other information. Unlike inferential statistics, descriptive statistics only describe your dataset’s characteristics and do not attempt to generalize from a sample to a population. [Read more…] about Descriptive Statistics in Excel
My background includes working on scientific projects as the data guy. In these positions, I was responsible for establishing valid data collection procedures, collecting usable data, and statistically analyzing and presenting the results. In this post, I describe the excitement of being a statistician helping expand the limits of human knowledge, what I learned about applied statistics and data analysis during the first big project in my career, and the challenges along the way! [Read more…] about Using Applied Statistics to Expand Human Knowledge