The empirical rule in statistics, also known as the 68-95-99.7 rule, states that for normal distributions, approximately 68% of observed data points will lie within one standard deviation of the mean, approximately 95% will fall within two standard deviations, and approximately 99.7% will occur within three standard deviations. [Read more…] about Empirical Rule: Definition, Formula, and Uses
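As a quick sketch, you can verify the 68-95-99.7 percentages yourself using the standard normal CDF, since P(|Z| < k) = erf(k / √2):

```python
import math

def within_k_sigma(k: float) -> float:
    """Probability that a normal variable falls within k standard
    deviations of its mean: P(|Z| < k) = erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"within {k} sigma: {within_k_sigma(k):.4f}")
# within 1 sigma: 0.6827
# within 2 sigma: 0.9545
# within 3 sigma: 0.9973
```

The printed values show why the rule's round numbers are approximations: the exact probabilities are 68.27%, 95.45%, and 99.73%.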
The gamma distribution is a continuous probability distribution that models right-skewed data. Statisticians have used this distribution to model cancer rates, insurance claims, and rainfall. Additionally, the gamma distribution is similar to the exponential distribution, and you can use it to model the same types of phenomena: failure times, wait times, service times, etc. [Read more…] about Gamma Distribution
The exponential distribution is a continuous probability distribution that models variables in which small values occur more frequently than large values. Small values have relatively high probabilities, which consistently decline as data values increase. Statisticians use the exponential distribution to model the amount of change in people’s pockets, the length of phone calls, and sales totals for customers. In all these cases, small values are more likely than larger values. [Read more…] about Exponential Distribution
The Weibull distribution is a continuous probability distribution that can fit an extensive range of distribution shapes. Like the normal distribution, the Weibull distribution describes the probabilities associated with continuous data. However, unlike the normal distribution, it can also model skewed data. In fact, its extreme flexibility allows it to model both left- and right-skewed data. [Read more…] about Weibull Distribution
The Poisson distribution is a discrete probability distribution that describes probabilities for counts of events that occur in a specified observation space. It is named after Siméon Denis Poisson.
In statistics, count data represent the number of events or characteristics over a given length of time, area, volume, etc. For example, you can count the number of cigarettes smoked per day, meteors seen per hour, the number of defects in a batch, and the occurrence of a particular crime by county.
Ladislaus Bortkiewicz, a Russian economist, used this probability distribution to analyze the annual count of Prussian army officer deaths caused by horse kicks from 1875 to 1894.
Count data have discrete values consisting of non-negative integers (0, 1, 2, 3, etc.), and their distributions are frequently skewed. These characteristics make using statistical analyses designed for continuous data (e.g., t-tests, least squares regression) potentially problematic.
The distribution below reflects a study area that averages 2.24 counts during the observation period. You can see the distribution itself consists of discrete counts and is right-skewed.
If only we had a special probability distribution designed for this type of data . . . cue the Poisson distribution!
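As a minimal sketch, here is what the Poisson probability mass function looks like at the sample mean of 2.24 mentioned above; the declining tail of probabilities past the mode illustrates the right-skew:

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) for a Poisson distribution with mean lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 2.24  # mean count from the study area described above
for k in range(8):
    print(f"P(X = {k}) = {poisson_pmf(k, lam):.4f}")
```

The probabilities rise to a peak at k = 2 and then trail off slowly to the right, which is the skewed, discrete shape the Poisson distribution is built for.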
The Poisson distribution is defined by a single parameter, lambda (λ), which is the mean number of occurrences during an observation unit. A rate of occurrence is simply the mean count per standard observation period. For example, a call center might receive an average of 32 calls per hour.
To estimate lambda, simply calculate the sample’s mean rate of occurrence. Lambda is also a parameter for the exponential and gamma distributions. These three distributions all model different aspects of a Poisson process. Read my posts about the exponential distribution and gamma distribution to learn about their relationship with the Poisson distribution.
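Estimating lambda really is just averaging. A quick sketch with hypothetical hourly call counts for the call center example:

```python
# Hypothetical hourly call counts for a call center.
# The lambda estimate is simply the sample mean.
calls_per_hour = [29, 35, 31, 33, 30, 34, 32, 32]
lam_hat = sum(calls_per_hour) / len(calls_per_hour)
print(lam_hat)  # 32.0 calls per hour
```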
Related post: Understanding Probability Distributions
Using the Poisson Distribution in Statistical Analyses
Analysts frequently use this probability distribution for quality control, survival analysis, and insurance analysis.
The Poisson distribution can help you estimate probabilities for counts of occurrences. For example, it can calculate the likelihood of horse kicks killing three or more Prussian officers in a year.
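To make the horse-kick example concrete, here is a sketch of that calculation. The rate of 0.61 deaths per corps per year is used here as an illustrative value (it is often cited for Bortkiewicz's data); P(X ≥ 3) is the complement of P(X ≤ 2):

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) for a Poisson distribution with mean lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 0.61  # illustrative annual rate of horse-kick deaths per corps
p_3_or_more = 1 - sum(poisson_pmf(k, lam) for k in range(3))
print(f"P(X >= 3) = {p_3_or_more:.4f}")  # about 0.024
```

At such a low rate, three or more deaths in a year is a genuinely rare event, with roughly a 2.4% chance.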
Hypothesis tests that use the Poisson distribution assess the rate of occurrence. For example, Poisson Rate Tests can determine whether the difference between the count of customer complaints per day at two stores is statistically significant.
Poisson regression models determine how changes in the independent variables correspond to changes in the counts of events that the dependent variable measures. For example, these models can evaluate how multiple independent variables predict the count of gold medals that countries win in the Olympics.
Normal Approximation of the Poisson Distribution
The normal distribution can adequately approximate the Poisson distribution when the mean (λ) is approximately 20 or more. The normal approximation uses lambda and the square root of lambda for its mean and standard deviation, respectively. In general, as lambda increases, the distribution becomes less skewed and increasingly approximates the normal distribution, as shown below.
The probability plot below shows a normal distribution that closely follows a Poisson distribution with a lambda of 25.
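You can check this closeness numerically. The sketch below compares the exact Poisson cumulative probabilities at λ = 25 with the normal approximation (mean 25, standard deviation √25 = 5), using a continuity correction of 0.5:

```python
import math

lam = 25

def poisson_cdf(x: int, lam: float) -> float:
    """Exact P(X <= x) by summing the Poisson pmf."""
    return sum(math.exp(-lam) * lam**k / math.factorial(k)
               for k in range(x + 1))

def normal_cdf(x: float, mu: float, sigma: float) -> float:
    """Normal cumulative distribution function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

for x in (20, 25, 30):
    exact = poisson_cdf(x, lam)
    approx = normal_cdf(x + 0.5, lam, math.sqrt(lam))  # continuity correction
    print(f"P(X <= {x}): exact {exact:.4f}, normal approx {approx:.4f}")
```

The two sets of probabilities agree to within about 0.01 to 0.02 at λ = 25, which is why the points on the probability plot hug the line so closely.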
Related post: Normal Distribution
Requirements for the Poisson Distribution
A variable follows a Poisson distribution when the following conditions are true:
- Data are counts of events.
- All events are independent.
- The average rate of occurrence does not change during the period of interest.
The last two points relate to an assumption that statisticians refer to as Independent and Identically Distributed (IID) Data.
Comparing the Poisson and Binomial Distributions
The Poisson and binomial distributions are similar because they both model the occurrence of events. However, the Poisson distribution places no upper bound on the count per observation unit. For example, while the number of meteors observed per hour might fall within a typical range, the Poisson distribution does not impose an upper limit.
Conversely, the binomial distribution calculates the probability of an event occurring a particular number of times in a set number of trials. Specifically, it calculates the likelihood of X events happening within N trials. For the binomial distribution, the number of events (X) cannot be greater than the number of trials. For example, it can calculate the probability of getting seven heads during ten coin tosses. Obviously, the number of heads cannot exceed the number of coin tosses.
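A sketch of both ideas: the coin-toss probability uses the binomial formula directly, and a second comparison shows that when the number of trials is large and the event probability is small, the Poisson distribution (with λ = np) closely approximates the binomial:

```python
import math

# Binomial: probability of exactly 7 heads in 10 fair coin tosses.
p_heads = math.comb(10, 7) * 0.5**10
print(f"P(7 heads in 10 tosses) = {p_heads:.4f}")  # 120/1024, about 0.1172

# Poisson approximates binomial when n is large and p is small.
n, p, k = 1000, 0.005, 7
lam = n * p  # 5.0
binom = math.comb(n, k) * p**k * (1 - p)**(n - k)
poisson = math.exp(-lam) * lam**k / math.factorial(k)
print(f"binomial {binom:.4f} vs Poisson {poisson:.4f}")
```

The agreement in the second comparison reflects the Poisson distribution's origin as a limit of the binomial, which is also why it has no upper bound: the implicit number of trials grows without limit.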
Related post: Binomial and other Distributions for Binary Data
Combinations in probability theory and other areas of mathematics refer to a sequence of outcomes where the order does not matter. For example, when you’re ordering a pizza, it doesn’t matter whether you order it with ham, mushrooms, and olives or olives, mushrooms, and ham. You’re getting the same pizza! [Read more…] about Using Combinations to Calculate Probabilities
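Because order doesn't matter, counting combinations reduces to a single function call. A sketch with a hypothetical menu of 10 toppings:

```python
import math

# {ham, mushrooms, olives} and {olives, mushrooms, ham} count once:
# order doesn't matter, so choosing 3 toppings from 10 is a combination.
three_topping_pizzas = math.comb(10, 3)
print(three_topping_pizzas)  # 120
```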
Permutations in probability theory and other branches of mathematics refer to sequences of outcomes where the order matters. For example, 9-6-8-4 is a permutation of a four-digit PIN because the order of numbers is crucial. When calculating probabilities, it’s frequently necessary to calculate the number of possible permutations to determine an event’s probability.
In this post, I explain permutations and show how to calculate the number of permutations both with repetition and without repetition. Finally, we’ll work through a step-by-step example problem that uses permutations to calculate a probability. [Read more…] about Using Permutations to Calculate Probabilities
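As a taste of those calculations, here is a sketch counting four-digit PINs with and without repeated digits, and the resulting probability of guessing one:

```python
import math

# With repetition: any of 10 digits can fill each of the 4 positions.
pins_with_repetition = 10 ** 4
print(pins_with_repetition)  # 10000

# Without repetition: ordered arrangements of 4 distinct digits.
pins_without_repetition = math.perm(10, 4)
print(pins_without_repetition)  # 5040

# Probability of guessing a specific PIN (repetition allowed) in one try.
print(1 / pins_with_repetition)  # 0.0001
```

Because 9-6-8-4 and 4-8-6-9 are different PINs, order matters, which is exactly what makes these permutations rather than combinations.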
The multiplication rule in probability allows you to calculate the probability of multiple events occurring together using known probabilities of those events individually. There are two forms of this rule, the specific and general multiplication rules.
In this post, learn about when and how to use both the specific and general multiplication rules. Additionally, I’ll use and explain the standard notation for probabilities throughout, helping you learn how to interpret it. We’ll work through several example problems so you can see them in action. There’s even a bonus problem at the end! [Read more…] about Multiplication Rule for Calculating Probabilities
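A minimal sketch of both forms of the rule, using two standard textbook-style examples (independent coin tosses for the specific rule, drawing cards without replacement for the general rule):

```python
# Specific multiplication rule (independent events):
# P(A and B) = P(A) * P(B)
p_two_heads = 0.5 * 0.5  # two heads in two fair coin tosses
print(p_two_heads)  # 0.25

# General multiplication rule (dependent events):
# P(A and B) = P(A) * P(B | A)
# Drawing two aces from a 52-card deck without replacement.
p_two_aces = (4 / 52) * (3 / 51)
print(round(p_two_aces, 5))  # 0.00452
```

Note how the second factor changes in the general rule: after the first ace is drawn, only 3 aces remain among 51 cards, so P(B | A) ≠ P(B).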
Contingency tables are a great way to classify outcomes and calculate different types of probabilities. These tables contain rows and columns that display bivariate frequencies of categorical data. Analysts also refer to contingency tables as crosstabulation (cross tabs), two-way tables, and frequency tables.
Statisticians use contingency tables for a variety of reasons. I love these tables because they both organize your data and allow you to answer a diverse set of questions. In this post, I focus on using them to calculate different types of probabilities. These probabilities include joint, marginal, and conditional probabilities. [Read more…] about Using Contingency Tables to Calculate Probabilities
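A sketch of those three probability types, computed from a small hypothetical 2×2 table of exam results:

```python
# Hypothetical counts: rows = group, columns = exam outcome.
table = {
    ("male", "pass"): 30, ("male", "fail"): 20,
    ("female", "pass"): 35, ("female", "fail"): 15,
}
total = sum(table.values())  # 100

# Joint probability: P(female and pass)
p_joint = table[("female", "pass")] / total  # 0.35

# Marginal probability: P(pass), summed across the rows
p_pass = (table[("male", "pass")] + table[("female", "pass")]) / total  # 0.65

# Conditional probability: P(pass | female) = P(female and pass) / P(female)
p_female = (table[("female", "pass")] + table[("female", "fail")]) / total
p_cond = p_joint / p_female  # 0.35 / 0.50 = 0.70

print(p_joint, p_pass, p_cond)
```

The joint probability uses one cell, the marginal sums a full row or column, and the conditional divides a joint probability by a marginal, which is why a single well-organized table answers all three kinds of questions.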
Probability theory analyzes the likelihood of events occurring. You can think of probabilities as being the following:
- The long-term proportion of times an event occurs during a random process.
- The propensity for a particular outcome to occur.
Common terms for describing probabilities include likelihood, chances, and odds. [Read more…] about Probability Fundamentals