Autocorrelation is the correlation between two observations at different points in a time series. For example, values that are separated by an interval might have a strong positive or negative correlation. When these correlations are present, they indicate that past values influence the current value. Analysts use the autocorrelation and partial autocorrelation functions to understand the properties of time series data, fit the appropriate models, and make forecasts. [Read more…] about Autocorrelation and Partial Autocorrelation in Time Series Data
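The lag-k autocorrelation described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the post; the series and function names are made up, and the series is chosen so adjacent values are similar, producing a positive lag-1 autocorrelation.

```python
def autocorr(series, lag):
    """Sample autocorrelation of `series` at the given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t + lag] - mean)
              for t in range(n - lag))
    return cov / var

# A slowly oscillating series: each value leans on its neighbor,
# so past values clearly influence the current value.
data = [1, 2, 3, 4, 5, 4, 3, 2, 1, 2, 3, 4, 5, 4, 3, 2]
r1 = autocorr(data, 1)   # positive lag-1 autocorrelation
```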
Using Combinations to Calculate Probabilities
Combinations in probability theory and other areas of mathematics refer to a sequence of outcomes where the order does not matter. For example, when you’re ordering a pizza, it doesn’t matter whether you order it with ham, mushrooms, and olives or olives, mushrooms, and ham. You’re getting the same pizza! [Read more…] about Using Combinations to Calculate Probabilities
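The pizza idea translates directly into a count of topping combinations. A quick sketch using Python's `math.comb` (the topping counts are illustrative, not from the post):

```python
from math import comb

# Choosing 3 toppings from 8 available; order doesn't matter,
# so ham-mushrooms-olives and olives-mushrooms-ham count once.
n_combinations = comb(8, 3)        # 8! / (3! * 5!) = 56

# Probability that a randomly built 3-topping pizza matches
# one specific favorite combination:
p_favorite = 1 / n_combinations
```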
Law of Large Numbers
What is the Law of Large Numbers in Statistics?
The Law of Large Numbers is a cornerstone concept in statistics and probability theory. This law asserts that as the number of trials or samples increases, the observed outcomes tend to converge closer to the expected value. [Read more…] about Law of Large Numbers
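The convergence the law describes is easy to see in a simulation. A minimal sketch with fair die rolls (expected value 3.5); the sample sizes and seed are arbitrary choices for illustration:

```python
import random

random.seed(42)

def mean_of_rolls(n):
    """Average of n fair six-sided die rolls."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

expected = 3.5
small_sample_mean = mean_of_rolls(20)       # can stray from 3.5
large_sample_mean = mean_of_rolls(200_000)  # hugs 3.5 closely
```

As the number of rolls grows, the observed mean settles toward the expected value of 3.5.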
Chebyshev’s Theorem in Statistics
Chebyshev’s Theorem estimates the minimum proportion of observations that fall within a specified number of standard deviations from the mean. This theorem applies to a broad range of probability distributions. Chebyshev’s Theorem is also known as Chebyshev’s Inequality. [Read more…] about Chebyshev’s Theorem in Statistics
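The bound itself is one formula: at least 1 − 1/k² of observations fall within k standard deviations of the mean, for any distribution with k > 1. A one-function sketch:

```python
def chebyshev_min_proportion(k):
    """Minimum proportion of observations within k standard
    deviations of the mean, per Chebyshev's Theorem (k > 1)."""
    return 1 - 1 / k**2

within_2_sd = chebyshev_min_proportion(2)   # at least 75%
within_3_sd = chebyshev_min_proportion(3)   # at least 8/9, ~88.9%
```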
Using Permutations to Calculate Probabilities
Permutations in probability theory and other branches of mathematics refer to sequences of outcomes where the order matters. For example, 9-6-8-4 is a permutation of a four-digit PIN because the order of numbers is crucial. When calculating probabilities, it’s frequently necessary to calculate the number of possible permutations to determine an event’s probability.
In this post, I explain permutations and show how to calculate the number of permutations both with repetition and without repetition. Finally, we’ll work through a step-by-step example problem that uses permutations to calculate a probability. [Read more…] about Using Permutations to Calculate Probabilities
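The PIN example maps directly onto the two permutation counts. A sketch using Python's `math.perm` (the deliberately simple numbers are illustrative):

```python
from math import perm

# Without repetition: ordered arrangements of r items from n distinct items.
pin_no_repeat = perm(10, 4)   # 10 * 9 * 8 * 7 = 5040 PINs with unique digits

# With repetition: each of the r positions can hold any of the n items.
pin_with_repeat = 10 ** 4     # 10,000 possible 4-digit PINs

# Probability of guessing one specific PIN when digits may repeat:
p_guess = 1 / pin_with_repeat
```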
Understanding Historians’ Rankings of U.S. Presidents using Regression Models
Historians rank the U.S. Presidents from best to worst using all the historical knowledge at their disposal. Frequently, groups such as C-Span ask these historians to rank the Presidents and average the results together to help reduce bias. The idea is to produce a set of rankings that incorporates a broad range of historians, a vast array of information, and a historical perspective. These rankings include informed assessments of each President’s effectiveness, leadership, moral authority, administrative skills, economic management, vision, and so on. [Read more…] about Understanding Historians’ Rankings of U.S. Presidents using Regression Models
Spearman’s Correlation Explained
Spearman’s correlation in statistics is a nonparametric alternative to Pearson’s correlation. Use Spearman’s correlation for data that follow curvilinear, monotonic relationships and for ordinal data. Statisticians also refer to Spearman’s rank order correlation coefficient as Spearman’s ρ (rho).
In this post, I’ll cover what all that means so you know when and why you should use Spearman’s correlation instead of the more common Pearson’s correlation. [Read more…] about Spearman’s Correlation Explained
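Spearman's ρ is just Pearson's correlation applied to the ranks of the data. A bare-bones sketch for tie-free data (the ranking helper below is simplified and does not average tied ranks; the cubic data are illustrative):

```python
def simple_ranks(values):
    """1-based ranks, assuming no tied values."""
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's rho: Pearson's correlation of the ranks."""
    return pearson(simple_ranks(x), simple_ranks(y))

# A curvilinear but monotonic relationship: Pearson understates it,
# while the ranks agree perfectly, so Spearman's rho is ~1.
x = [1, 2, 3, 4, 5]
y = [1, 8, 27, 64, 125]   # y = x**3
rho = spearman(x, y)
```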
Effect Sizes in Statistics
Effect sizes in statistics quantify the differences between group means and the relationships between variables. While analysts often focus on statistical significance using p-values, effect sizes determine the practical importance of the findings. [Read more…] about Effect Sizes in Statistics
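One widely used effect size for the difference between two group means is Cohen's d. A minimal sketch with made-up data (the groups and values below are illustrative only):

```python
def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled
    sample standard deviation of two independent groups."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

control = [10, 12, 11, 13, 12, 11]
treatment = [13, 15, 14, 16, 15, 14]
d = cohens_d(treatment, control)   # large standardized difference
```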
Proxy Variables: The Good Twin of Confounding Variables
Proxy variables are easily measurable variables that analysts include in a model in place of a variable that cannot be measured or is difficult to measure. A proxy variable may not be of any great interest itself, but it correlates closely with the variable of interest. [Read more…] about Proxy Variables: The Good Twin of Confounding Variables
Multiplication Rule for Calculating Probabilities
The multiplication rule in probability allows you to calculate the joint probability of multiple events occurring together using known probabilities of those events individually. There are two forms of this rule: the specific and general multiplication rules.
In this post, learn about when and how to use both the specific and general multiplication rules. Additionally, I’ll use and explain the standard notation for probabilities throughout, helping you learn how to interpret it. We’ll work through several example problems so you can see them in action. There’s even a bonus problem at the end! [Read more…] about Multiplication Rule for Calculating Probabilities
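The two forms of the rule fit in a few lines of arithmetic. A sketch using the standard coin and card examples (my own illustrative choices, not necessarily the examples in the post):

```python
# Specific rule (independent events): P(A and B) = P(A) * P(B)
# Two fair coin flips both landing heads:
p_two_heads = 0.5 * 0.5                 # 0.25

# General rule (dependent events): P(A and B) = P(A) * P(B | A)
# Drawing two aces from a 52-card deck without replacement --
# the second draw's probability depends on the first:
p_two_aces = (4 / 52) * (3 / 51)        # 1/221
```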
Exponential Smoothing for Time Series Forecasting
Exponential smoothing is a forecasting method for univariate time series data. This method produces forecasts that are weighted averages of past observations where the weights of older observations exponentially decrease. Forms of exponential smoothing extend the analysis to model data with trends and seasonal components. [Read more…] about Exponential Smoothing for Time Series Forecasting
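The exponentially decreasing weights come from one recurrence: each smoothed value blends the newest observation (weight α) with the previous smoothed value (weight 1 − α). A sketch of simple exponential smoothing; the demand series, α, and initialization choice are illustrative assumptions:

```python
def simple_exp_smoothing(series, alpha):
    """One-step-ahead smoothing: each level is alpha-weighted toward the
    latest observation and (1 - alpha) toward the previous level, so
    older observations' weights decay exponentially."""
    level = series[0]            # common initialization: first observation
    levels = [level]
    for obs in series[1:]:
        level = alpha * obs + (1 - alpha) * level
        levels.append(level)
    return levels

demand = [100, 105, 102, 108, 110, 107]
fitted = simple_exp_smoothing(demand, alpha=0.3)
next_forecast = fitted[-1]       # forecast for the next period
```

Trend and seasonal components require the extended forms (Holt's and Holt-Winters methods), which build on this same recurrence.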
Descriptive Statistics in Excel
Descriptive statistics summarize your dataset, painting a picture of its properties. These properties include various central tendency and variability measures, distribution properties, outlier detection, and other information. Unlike inferential statistics, descriptive statistics only describe your dataset’s characteristics and do not attempt to generalize from a sample to a population. [Read more…] about Descriptive Statistics in Excel
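The post computes these summaries in Excel; purely as a cross-check of what the measures are, the same central tendency and variability statistics can be sketched with Python's `statistics` module (the dataset is made up for illustration):

```python
import statistics

data = [4, 8, 6, 5, 3, 7, 9, 5, 6, 8]

summary = {
    "mean": statistics.mean(data),      # central tendency
    "median": statistics.median(data),  # robust central tendency
    "stdev": statistics.stdev(data),    # sample standard deviation
    "range": max(data) - min(data),     # crude variability measure
}
```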
Using Contingency Tables to Calculate Probabilities
Contingency tables are a great way to classify outcomes and calculate different types of probabilities. These tables contain rows and columns that display bivariate frequencies of categorical data. Analysts also refer to contingency tables as crosstabulation (cross tabs), two-way tables, and frequency tables.
Statisticians use contingency tables for a variety of reasons. I love these tables because they both organize your data and allow you to answer a diverse set of questions. In this post, I focus on using them to calculate different types of probabilities. These probabilities include joint, marginal, and conditional probabilities. [Read more…] about Using Contingency Tables to Calculate Probabilities
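The three probability types come straight from the table's cell counts, row/column totals, and grand total. A sketch using a small hypothetical 2×2 table (the categories and counts are invented for illustration):

```python
# Hypothetical 2x2 contingency table: gender by product preference.
table = {("male", "likes"): 30, ("male", "dislikes"): 20,
         ("female", "likes"): 35, ("female", "dislikes"): 15}
total = sum(table.values())   # grand total: 100

# Joint probability: P(male AND likes) -- one cell over the grand total.
p_joint = table[("male", "likes")] / total

# Marginal probability: P(likes) -- a column total over the grand total.
p_likes = (table[("male", "likes")] + table[("female", "likes")]) / total

# Conditional probability: P(likes | male) -- a cell over its row total.
p_likes_given_male = table[("male", "likes")] / (
    table[("male", "likes")] + table[("male", "dislikes")])
```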
Probability Definition and Fundamentals
What is Probability?
Probability is the likelihood of an event happening. Probability theory analyzes the chances of events occurring. You can think of probabilities as being the following:
- The long-term proportion of times an event occurs during a random process.
- The propensity for a particular outcome to occur.
Common terms for describing probabilities include likelihood, chances, and odds. [Read more…] about Probability Definition and Fundamentals
Using Applied Statistics to Expand Human Knowledge
My background includes working on scientific projects as the data guy. In these positions, I was responsible for establishing valid data collection procedures, collecting usable data, and statistically analyzing and presenting the results. In this post, I describe the excitement of being a statistician helping expand the limits of human knowledge, what I learned about applied statistics and data analysis during the first big project in my career, and the challenges along the way! [Read more…] about Using Applied Statistics to Expand Human Knowledge
Variance Inflation Factors (VIFs)
Variance Inflation Factors (VIFs) measure the correlation among independent variables in least squares regression models. Statisticians refer to this type of correlation as multicollinearity. Excessive multicollinearity can cause problems for regression models.
In this post, I focus on VIFs and how they detect multicollinearity, why they’re better than pairwise correlations, how to calculate VIFs yourself, and interpreting VIFs. If you need a refresher about the types of problems that multicollinearity causes and how to fix them, read my post: Multicollinearity: Problems, Detection, and Solutions. [Read more…] about Variance Inflation Factors (VIFs)
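The defining formula is VIF = 1 / (1 − R²), where R² comes from regressing one independent variable on all the others. In the simplest case of just two predictors, that R² equals their squared Pearson correlation, which lets a sketch stay short (the data below are invented to be strongly collinear):

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

x1 = [1, 2, 3, 4, 5, 6, 7, 8]
x2 = [2, 4, 5, 9, 10, 13, 14, 17]   # strongly correlated with x1

# With only two predictors, R-squared from regressing x1 on x2
# is just their squared correlation.
r_squared = pearson(x1, x2) ** 2
vif = 1 / (1 - r_squared)   # values above ~5-10 commonly flag trouble
```

With more than two predictors, the R² in the formula comes from a full multiple regression of each predictor on the rest, which is where VIFs improve on pairwise correlations.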
Assessing a COVID-19 Vaccination Experiment and Its Results
Moderna has announced encouraging preliminary results for their COVID-19 vaccine. In this post, I assess the available data and explain what the vaccine’s effectiveness really means. I also look at Moderna’s experimental design and examine how it incorporates statistical procedures and concepts that I discuss throughout my blog posts and books. [Read more…] about Assessing a COVID-19 Vaccination Experiment and Its Results
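Vaccine effectiveness in a trial like this is defined as 1 minus the ratio of the attack rate in the vaccinated group to the attack rate in the placebo group. A sketch of the arithmetic; the case counts below are hypothetical, not Moderna's actual trial data:

```python
# Hypothetical trial counts -- NOT Moderna's actual results.
cases_vaccine, n_vaccine = 5, 15_000
cases_placebo, n_placebo = 90, 15_000

rate_vaccine = cases_vaccine / n_vaccine
rate_placebo = cases_placebo / n_placebo

# Effectiveness = 1 - relative risk (vaccinated vs. placebo).
effectiveness = 1 - rate_vaccine / rate_placebo
```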
P-Values, Error Rates, and False Positives
In my post about how to interpret p-values, I emphasize that p-values are not an error rate. The number one misinterpretation of p-values is that they are the probability of the null hypothesis being correct.
The correct interpretation is that p-values indicate the probability of observing your sample data, or more extreme data, when you assume the null hypothesis is true. If you don’t solidly grasp that correct interpretation, please take a moment to read that post first.
Hopefully, that’s clear.
Unfortunately, one part of that blog post confuses some readers. In that post, I explain how p-values are not a probability, or error rate, of a hypothesis. I then show how that misinterpretation is dangerous because it overstates the evidence against the null hypothesis. [Read more…] about P-Values, Error Rates, and False Positives
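The distinction can be made concrete with a simulation: when the null hypothesis is true, a 5%-level test rejects about 5% of the time in the long run, but no individual p-value is that error rate. A sketch using repeated fair-coin experiments and a normal-approximation test (the setup, seed, and sample sizes are illustrative choices):

```python
import math
import random

random.seed(7)

def z_test_p_fair_coin(heads, n):
    """Two-sided normal-approximation p-value for H0: P(heads) = 0.5."""
    se = (0.25 / n) ** 0.5
    z = abs(heads / n - 0.5) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Simulate many experiments where the null is TRUE (the coin is fair)
# and count how often the test rejects at alpha = 0.05.
trials, n = 2000, 100
false_positives = sum(
    z_test_p_fair_coin(sum(random.random() < 0.5 for _ in range(n)), n) < 0.05
    for _ in range(trials))
fp_rate = false_positives / trials   # long-run rate near the 5% level
```

The long-run rejection rate hovers near the significance level; each individual p-value, by contrast, says nothing directly about the probability that its null hypothesis is true.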
How to Perform Regression Analysis using Excel
Excel can perform various statistical analyses, including regression analysis. It is a great option because nearly everyone can access Excel. This post is an excellent introduction to performing and interpreting regression analysis, even if Excel isn’t your primary statistical software package. [Read more…] about How to Perform Regression Analysis using Excel
Coefficient of Variation in Statistics
The coefficient of variation (CV) is a relative measure of variability that indicates the size of a standard deviation in relation to its mean. It is a standardized, unitless measure that allows you to compare variability between disparate groups and characteristics. It is also known as the relative standard deviation (RSD).
In this post, you will learn about the coefficient of variation, how to calculate it, know when it is particularly useful, and when to avoid it. [Read more…] about Coefficient of Variation in Statistics
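The calculation is simply the standard deviation divided by the mean, usually expressed as a percentage. A sketch that uses the CV to compare variability across two characteristics measured in different units (the height and weight values are made up for illustration):

```python
import statistics

def coefficient_of_variation(data):
    """CV as a percentage: sample standard deviation relative to the mean."""
    return statistics.stdev(data) / statistics.mean(data) * 100

# Different units and scales, yet the unitless CVs are directly comparable.
heights_cm = [160, 165, 170, 175, 180]
weights_kg = [55, 62, 70, 78, 85]

cv_height = coefficient_of_variation(heights_cm)
cv_weight = coefficient_of_variation(weights_kg)   # more relative spread
```

Because the mean sits in the denominator, the CV breaks down when the mean is near zero or the data lack a meaningful zero point, which is one of the cautions the post covers.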