
Statistics By Jim

Making statistics intuitive


Blog

Independent and Dependent Variables: Differences & Examples

By Jim Frost 6 Comments

Independent variables and dependent variables are the two fundamental types of variables in statistical modeling and experimental designs. Analysts use these variables to understand the relationships between them and to estimate effect sizes. What effect does one variable have on another?

In this post, learn the definitions of independent and dependent variables, how to identify each type, how they differ between different types of studies, and see examples of them in use. [Read more…] about Independent and Dependent Variables: Differences & Examples

Filed Under: Regression Tagged With: conceptual, experimental design

Standard Deviation: Interpretations and Calculations

By Jim Frost 2 Comments

The standard deviation (SD) is a single number that summarizes the variability in a dataset. It represents the typical distance between each data point and the mean. Smaller values indicate that the data points cluster closer to the mean—the values in the dataset are relatively consistent. Conversely, higher values signify that the values spread out further from the mean. Data values become more dissimilar, and extreme values become more likely. [Read more…] about Standard Deviation: Interpretations and Calculations
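As a quick illustration (my sketch, not from the post itself), here is how you might compute a standard deviation in R; the five height values are made up:

heights <- c(48, 51, 52, 54, 56)   # made-up heights in inches
mean(heights)  # 52.2, the center the deviations are measured from
sd(heights)    # about 3.03, the typical distance from the mean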

Filed Under: Basics Tagged With: conceptual, distributions, graphs

What is the Mean in Statistics?

By Jim Frost Leave a Comment

In statistics, the mean summarizes an entire dataset with a single number representing the data’s center point or typical value. It is also known as the arithmetic average, and it is one of several measures of central tendency. It is likely the measure of central tendency with which you’re most familiar! Learn how to calculate the mean, and when it is and is not a good statistic to use!

How Do You Find the Mean?

Finding the mean is very simple. Just add all the values and divide by the number of observations—the formula is below.

\bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n}

For example, if the heights of five people are 48, 51, 52, 54, and 56 inches, their average height is 52.2 inches.

(48 + 51 + 52 + 54 + 56) / 5 = 261 / 5 = 52.2
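You can verify this arithmetic in R (a minimal sketch using the same made-up heights):

heights <- c(48, 51, 52, 54, 56)
sum(heights) / length(heights)  # 261 / 5 = 52.2, matching the formula
mean(heights)                   # same result with the built-in function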

When Do You Use the Mean?

Ideally, the mean indicates the region where most values in a distribution fall. Statisticians refer to it as the central location of a distribution. You can think of it as the tendency of data to cluster around a middle value. The histogram below illustrates the average accurately finding the center of the data’s distribution.

[Figure: Histogram of a symmetric distribution with the mean at the center.]

However, the mean does not always find the center of the data. It is sensitive to skewed data and extreme values. For example, when the data are skewed, it can miss the mark. In the histogram below, the average is outside the area with the most common values.

[Figure: Histogram of a skewed distribution where the mean falls away from the most common values.]

This problem occurs because outliers have a substantial impact on the mean. Extreme values in an extended tail pull it away from the center. As the distribution becomes more skewed, the average is drawn further away from the center.

In these cases, the mean can be misleading because it might not be near the most common values. Consequently, it's best to use the average to measure the central tendency when you have a symmetric distribution.

For skewed distributions, it’s often better to use the median, which uses a different method to find the central location. Note that the mean provides no information about the variability present in a distribution. To evaluate that characteristic, assess the standard deviation.
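As a small demonstration of this point (my example, not from the post), you can simulate right-skewed data in R and compare the two measures; the exponential distribution is an arbitrary choice:

set.seed(1)                     # for reproducibility
skewed <- rexp(1000, rate = 1)  # right-skewed sample
mean(skewed)    # pulled toward the long right tail (roughly 1.0)
median(skewed)  # nearer the most common values (roughly 0.7)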

Related post: Measures of Central Tendency: Mean, Median, and Mode

Using Sample Means to Estimate Population Means

In statistics, analysts often use a sample average to estimate a population mean. For small samples, the sample mean can differ substantially from the population mean. However, as the sample size grows, the law of large numbers states that the sample average is likely to be close to the population value.

Hypothesis tests, such as t-tests and ANOVA, use samples to determine whether population means are different. Statisticians refer to this process of using samples to estimate the properties of entire populations as inferential statistics.
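For instance, a one-sample t-test in R uses a sample mean to draw a conclusion about a population mean. This is a minimal sketch with simulated data, not an example from the post:

set.seed(2)
x <- rnorm(30, mean = 100, sd = 15)  # simulated sample of 30 observations
t.test(x, mu = 95)  # tests whether the population mean differs from 95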

Related post: Descriptive Statistics Vs. Inferential Statistics

In statistics, we usually use the arithmetic mean, which is the type I focus on in this post. However, there are other types of means, including the geometric mean. Read my post about the geometric mean to learn more.

Filed Under: Basics Tagged With: conceptual, distributions, graphs

Gamma Distribution: Uses, Parameters & Examples

By Jim Frost 11 Comments

What is the Gamma Distribution?

The gamma distribution is a continuous probability distribution that models right-skewed data. Statisticians have used this distribution to model cancer rates, insurance claims, and rainfall. Additionally, the gamma distribution is similar to the exponential distribution, and you can use it to model the same types of phenomena: failure times, wait times, service times, etc. [Read more…] about Gamma Distribution: Uses, Parameters & Examples
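As a hedged sketch (my illustration, with arbitrary parameter values), you can draw gamma-distributed values in R and see the right skew:

set.seed(3)
x <- rgamma(1000, shape = 2, scale = 3)  # arbitrary shape and scale
mean(x)  # near shape * scale = 6
hist(x, breaks = 30, main = "Gamma(shape = 2, scale = 3)")  # right-skewed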

Filed Under: Probability Tagged With: conceptual, distributions, graphs

Exponential Distribution: Uses, Parameters & Examples

By Jim Frost 6 Comments

What is the Exponential Distribution?

The exponential distribution is a right-skewed continuous probability distribution that models variables in which small values occur more frequently than higher values. Small values have relatively high probabilities, which consistently decline as data values increase. Statisticians use the exponential distribution to model the amount of change in people’s pockets, the length of phone calls, and sales totals for customers. In all these cases, small values are more likely than larger values. [Read more…] about Exponential Distribution: Uses, Parameters & Examples
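A minimal sketch in R (my example; the 5-minute mean call length is made up):

set.seed(4)
calls <- rexp(1000, rate = 1/5)  # exponential with a mean of 5 minutes
mean(calls)  # close to 5
hist(calls, breaks = 30)  # small values dominate, with a long right tail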

Filed Under: Probability Tagged With: conceptual, distributions, graphs

Weibull Distribution: Uses, Parameters & Examples

By Jim Frost 5 Comments

What is a Weibull Distribution?

The Weibull distribution is a continuous probability distribution that can fit an extensive range of distribution shapes. Like the normal distribution, the Weibull distribution describes the probabilities associated with continuous data. However, unlike the normal distribution, it can also model skewed data. In fact, its extreme flexibility allows it to model both left- and right-skewed data. [Read more…] about Weibull Distribution: Uses, Parameters & Examples
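To see that flexibility, here is a short R sketch of mine with arbitrary parameter values; small shape values give right skew, while larger ones give mildly left-skewed data:

x1 <- rweibull(1000, shape = 1.5, scale = 1)  # right-skewed
x2 <- rweibull(1000, shape = 5, scale = 1)    # mildly left-skewed
hist(x1, breaks = 30)
hist(x2, breaks = 30)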

Filed Under: Probability Tagged With: conceptual, distributions, graphs

Poisson Distribution: Definition & Uses

By Jim Frost 11 Comments

What is the Poisson Distribution?

The Poisson distribution is a discrete probability distribution that describes probabilities for counts of events that occur in a specified observation space. It is named after Siméon Denis Poisson.

In statistics, count data represent the number of events or characteristics over a given length of time, area, volume, etc. For example, you can count the number of cigarettes smoked per day, meteors seen per hour, the number of defects in a batch, and the occurrence of a particular crime by county. [Read more…] about Poisson Distribution: Definition & Uses
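A small sketch in R (my example; the mean of 2 events per interval is arbitrary):

dpois(0:5, lambda = 2)  # P(X = 0) through P(X = 5) for a mean of 2
rpois(10, lambda = 2)   # ten simulated counts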

Filed Under: Probability Tagged With: conceptual, distributions, graphs

Introduction to Statistics Using the R Programming Language

By Joachim Schork 20 Comments

The R programming language is a powerful and free statistical software tool that analysts use frequently.

R is open source software: the R community develops and maintains it, and anyone can download it for free.

Being open source provides many advantages, including the following:

  • New statistical methods are quickly available because the R community is vast and active.
  • The source code for each function is freely available and everybody can review it.
  • Using the R programming language is free! That's a significant advantage over relatively expensive statistical tools, such as SAS, Stata, and SPSS.

In this article, I give you a brief introduction to the strengths of the R programming language by applying basic statistical concepts to a real dataset using R functions. [Read more…] about Introduction to Statistics Using the R Programming Language
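To give a flavor of what that looks like (a minimal sketch of mine, using R's built-in mtcars dataset rather than the article's data):

data(mtcars)                # built-in example dataset
summary(mtcars$mpg)         # five-number summary plus the mean
sd(mtcars$mpg)              # standard deviation
cor(mtcars$mpg, mtcars$wt)  # correlation between mileage and weight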

Filed Under: Basics

Scatterplots: Using, Examples, and Interpreting

By Jim Frost 2 Comments

Use scatterplots to show relationships between pairs of continuous variables. These graphs display symbols at the X, Y coordinates of the data points for the paired variables. Scatterplots are also known as scattergrams and scatter charts. [Read more…] about Scatterplots: Using, Examples, and Interpreting
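A minimal R sketch (my example, using the built-in mtcars data rather than the post's dataset):

plot(mtcars$wt, mtcars$mpg,
     xlab = "Weight (1,000 lbs)", ylab = "Miles per gallon",
     main = "Scatterplot of MPG vs. weight")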

Filed Under: Graphs Tagged With: analysis example, choosing analysis, data types, interpreting results

Pie Charts: Using, Examples, and Interpreting

By Jim Frost Leave a Comment

Use pie charts to compare the sizes of categories to the entire dataset. To create a pie chart, you must have a categorical variable that divides your data into groups. These graphs consist of a circle (i.e., the pie) with slices representing subgroups. The size of each slice is proportional to the relative size of each category out of the whole. [Read more…] about Pie Charts: Using, Examples, and Interpreting
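For example, a pie chart takes one call in R; this sketch uses made-up category counts:

counts <- c(Apples = 10, Bananas = 6, Cherries = 4)  # made-up categories
pie(counts, main = "Share of each fruit in the dataset")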

Filed Under: Graphs Tagged With: analysis example, choosing analysis, data types, interpreting results

Bar Charts: Using, Examples, and Interpreting

By Jim Frost 4 Comments

Use bar charts to compare categories when you have at least one categorical or discrete variable. Each bar represents a summary value for one discrete level, where longer bars indicate higher values. Types of summary values include counts, sums, means, and standard deviations. Bar charts are also known as bar graphs. [Read more…] about Bar Charts: Using, Examples, and Interpreting
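A minimal R sketch with made-up summary values:

group_means <- c(A = 4.2, B = 5.1, C = 3.8)  # made-up group means
barplot(group_means, ylab = "Group mean", main = "Bar chart of group means")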

Filed Under: Graphs Tagged With: analysis example, choosing analysis, data types, interpreting results

Line Charts: Using, Examples, and Interpreting

By Jim Frost 2 Comments

Use line charts to display a series of data points that are connected by lines. Analysts use line charts to emphasize changes in a metric on the vertical Y-axis by another variable on the horizontal X-axis. Often, the X-axis reflects time, but not always. Line charts are also known as line plots. [Read more…] about Line Charts: Using, Examples, and Interpreting
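A minimal R sketch with a made-up monthly metric:

sales <- c(12, 15, 14, 18, 21, 19)  # made-up values for six months
plot(sales, type = "l", xlab = "Month", ylab = "Sales",
     main = "Line chart of sales over time")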

Filed Under: Graphs Tagged With: analysis example, choosing analysis, data types, interpreting results

Dot Plots: Using, Examples, and Interpreting

By Jim Frost Leave a Comment

Use dot plots to display the distribution of your sample data when you have continuous variables. These graphs stack dots along the horizontal X-axis to represent the frequencies of different values. More dots indicate greater frequency. Each dot represents a set number of observations. [Read more…] about Dot Plots: Using, Examples, and Interpreting
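One way to draw this in base R is stripchart() with stacking (a sketch of mine with simulated data):

set.seed(5)
values <- round(rnorm(50, mean = 10, sd = 2))  # simulated, rounded data
stripchart(values, method = "stack", pch = 16, xlab = "Value",
           main = "Dot plot")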

Filed Under: Graphs Tagged With: analysis example, choosing analysis, data types, distributions, interpreting results

Empirical Cumulative Distribution Function (CDF) Plots

By Jim Frost Leave a Comment

Use an empirical cumulative distribution function plot to display the data points in your sample from lowest to highest against their percentiles. These graphs require continuous variables and allow you to derive percentiles and other distribution properties. This function is also known as the empirical CDF or ECDF. [Read more…] about Empirical Cumulative Distribution Function (CDF) Plots
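A minimal R sketch with simulated data:

set.seed(6)
x <- rnorm(100)
plot(ecdf(x), main = "Empirical CDF")  # steps from lowest to highest value
quantile(x, 0.5)                       # percentiles come from the same data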

Filed Under: Graphs Tagged With: analysis example, choosing analysis, data types, interpreting results

Contour Plots: Using, Examples, and Interpreting

By Jim Frost Leave a Comment

Use contour plots to display the relationship between two independent variables and a dependent variable. The graph shows values of the Z variable for combinations of the X and Y variables. The X and Y values are displayed along the X and Y-axes, while contour lines and bands represent the Z value. The contour lines connect combinations of the X and Y variables that produce equal values of Z. [Read more…] about Contour Plots: Using, Examples, and Interpreting
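A minimal R sketch with a made-up Z surface:

x <- seq(-3, 3, length.out = 50)
y <- seq(-3, 3, length.out = 50)
z <- outer(x, y, function(a, b) exp(-(a^2 + b^2)))  # made-up Z values
contour(x, y, z, xlab = "X", ylab = "Y", main = "Contour plot of Z")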

Filed Under: Graphs Tagged With: choosing analysis, data types, interpreting results

Using Excel to Calculate Correlation

By Jim Frost Leave a Comment

Excel can calculate correlation coefficients and a variety of other statistical analyses. Even if you don’t use Excel regularly, this post is an excellent introduction to calculating and interpreting correlation.

In this post, I provide step-by-step instructions for having Excel calculate Pearson’s correlation coefficient, and I’ll show you how to interpret the results. Additionally, I include links to relevant statistical resources I’ve written that provide intuitive explanations. Together, we’ll analyze and interpret an example dataset! [Read more…] about Using Excel to Calculate Correlation
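The post itself walks through Excel, but for readers who prefer R, the same Pearson's correlation is a single call; this sketch uses made-up paired values, not the post's dataset:

x <- c(1, 2, 3, 4, 5)
y <- c(2.1, 3.9, 6.2, 7.8, 10.1)  # made-up paired values
cor(x, y, method = "pearson")     # Pearson's correlation coefficient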

Filed Under: Basics Tagged With: analysis example, Excel, graphs, interpreting results

Standard Error of the Mean (SEM)

By Jim Frost 24 Comments

The standard error of the mean (SEM) is a bit mysterious. You’ll frequently find it in your statistical output. Is it a measure of variability? How does the standard error of the mean compare to the standard deviation? How do you interpret it?

In this post, I answer all these questions about the standard error of the mean, show how it relates to sample size considerations and statistical significance, and explain the general concept of other types of standard errors. In fact, I view standard errors as the doorway from descriptive statistics to inferential statistics. You’ll see how that works! [Read more…] about Standard Error of the Mean (SEM)
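As a quick preview (my sketch, with simulated data), the SEM is the sample standard deviation divided by the square root of the sample size:

set.seed(7)
x <- rnorm(25, mean = 50, sd = 10)  # simulated sample
sd(x) / sqrt(length(x))             # standard error of the mean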

Filed Under: Hypothesis Testing Tagged With: conceptual, graphs, interpreting results

Autocorrelation and Partial Autocorrelation in Time Series Data

By Jim Frost 4 Comments

Autocorrelation is the correlation between two observations at different points in a time series. For example, values that are separated by an interval might have a strong positive or negative correlation. When these correlations are present, they indicate that past values influence the current value. Analysts use the autocorrelation and partial autocorrelation functions to understand the properties of time series data, fit the appropriate models, and make forecasts.

In this post, I cover both the autocorrelation function and partial autocorrelation function. You’ll learn about the differences between these functions and what they can tell you about your data. In later posts, I’ll show you how to incorporate this information in regression models of time series data and other time-series analyses.

Autocorrelation and Partial Autocorrelation Basics

Autocorrelation is the correlation between two values in a time series. In other words, the time series data correlate with themselves—hence, the name. We talk about these correlations using the term “lags.” Analysts record time-series data by measuring a characteristic at evenly spaced intervals—such as daily, monthly, or yearly. The number of intervals between the two observations is the lag. For example, the lag between the current and previous observation is one. If you go back one more interval, the lag is two, and so on.

In mathematical terms, the observations at y_t and y_{t−k} are separated by k time units, where k is the lag. Depending on the nature of the data, this lag can be days, quarters, or years. When k = 1, you're assessing adjacent observations. For each lag, there is a correlation.

The autocorrelation function (ACF) assesses the correlation between observations in a time series for a set of lags. The ACF for time series y is given by Corr(y_t, y_{t−k}), k = 1, 2, …

Analysts typically use graphs to display this function.

Related posts: Time Series Analysis Introduction and Interpreting Correlations

Autocorrelation Function (ACF)

Use the autocorrelation function (ACF) to identify which lags have significant correlations, understand the patterns and properties of the time series, and then use that information to model the time series data. From the ACF, you can assess the randomness and stationarity of a time series. You can also determine whether trends and seasonal patterns are present.

In an ACF plot, each bar represents the size and direction of the correlation. Bars that extend across the red line are statistically significant.

Randomness/White Noise

For random data, autocorrelations should be near zero for all lags. Analysts also refer to this condition as white noise. Non-random data have at least one significant lag. When the data are not random, it’s a good indication that you need to use a time series analysis or incorporate lags into a regression analysis to model the data appropriately.

[Figure: Autocorrelation function plot for random data.]

This ACF plot indicates that these time series data are random.
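You can reproduce this kind of check in R (a minimal sketch of mine); base R's acf() draws the plot, with dashed significance bounds rather than a red line:

set.seed(8)
white_noise <- rnorm(100)  # random data
acf(white_noise)  # autocorrelations near zero at every lag = white noise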

Stationarity

Stationarity means that the time series has no trend, a constant variance, a constant autocorrelation pattern, and no seasonal pattern. The autocorrelation function declines to near zero rapidly for a stationary time series. In contrast, the ACF drops slowly for a non-stationary time series.

[Figure: Autocorrelation function plot of stationary time series data.]

In this chart for a stationary time series, notice how the autocorrelations decline to non-significant levels quickly.

Trends

When trends are present in a time series, shorter lags typically have large positive correlations because observations closer in time tend to have similar values. The correlations taper off slowly as the lags increase.

[Figure: Autocorrelation function plot for metal sales, indicating that a trend is present.]

In this ACF plot for metal sales, the autocorrelations decline slowly. The first five lags are significant.

Seasonality

When seasonal patterns are present, the autocorrelations are larger for lags at multiples of the seasonal frequency than for other lags.

When a time series has both a trend and seasonality, the ACF plot displays a mixture of both effects. That’s the case in the autocorrelation function plot for the carbon dioxide (CO2) dataset from NIST. This dataset contains monthly mean CO2 measurements at the Mauna Loa Observatory. Download the CO2_Data.

[Figure: Autocorrelation function plot of the carbon dioxide data.]

Notice how you can see the wavy correlations for the seasonal pattern and the slowly diminishing lags of a trend.
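R happens to ship a comparable built-in series, datasets::co2 (monthly Mauna Loa CO2 readings), so you can reproduce this pattern yourself; note that it is not necessarily the identical NIST file:

acf(co2, lag.max = 48)  # slow decline (trend) plus a 12-month wave (seasonality)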

Partial Autocorrelation Function (PACF)

The partial autocorrelation function is similar to the ACF except that it displays only the correlation between two observations that the shorter lags between those observations do not explain. For example, the partial autocorrelation for lag 3 is only the correlation that lags 1 and 2 do not explain. In other words, the partial correlation for each lag is the unique correlation between those two observations after partialling out the intervening correlations.

As you saw, the autocorrelation function helps assess the properties of a time series. In contrast, the partial autocorrelation function (PACF) is more useful during the specification process for an autoregressive model. Analysts use partial autocorrelation plots to specify regression models with time series data and Auto Regressive Integrated Moving Average (ARIMA) models. I’ll focus on that aspect in posts about those methods.

Related post: Using Moving Averages to Smooth Time Series Data

For this post, I’ll show you a quick example of a PACF plot. Typically, you will use the ACF to determine whether an autoregressive model is appropriate. If it is, you then use the PACF to help you choose the model terms.

This partial autocorrelation plot displays data from the southern oscillations dataset from NIST. The southern oscillations refer to changes in the barometric pressure near Tahiti that predict El Niño. Download the southern_oscillations_data.

[Figure: Partial autocorrelation plot for the southern oscillations data.]

On the graph, the partial autocorrelations for lags 1 and 2 are statistically significant. The subsequent lags are nearly significant. Consequently, this PACF suggests fitting either a second- or third-order autoregressive model.
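You can see the same logic on simulated data (a hedged sketch of mine, not the NIST dataset): for an autoregressive process of order 2, the PACF shows significant spikes at lags 1 and 2 and then cuts off:

set.seed(9)
ar2 <- arima.sim(model = list(ar = c(0.6, 0.3)), n = 200)  # AR(2) process
pacf(ar2)  # spikes at lags 1 and 2, then a cutoff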

By assessing the autocorrelation and partial autocorrelation patterns in your data, you can understand the nature of your time series and model it!

Filed Under: Time Series Tagged With: analysis example, conceptual, graphs

Using Combinations to Calculate Probabilities

By Jim Frost 6 Comments

Combinations in probability theory and other areas of mathematics refer to a sequence of outcomes where the order does not matter. For example, when you’re ordering a pizza, it doesn’t matter whether you order it with ham, mushrooms, and olives or olives, mushrooms, and ham. You’re getting the same pizza! [Read more…] about Using Combinations to Calculate Probabilities
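Sticking with the toppings idea (my sketch; the eight available toppings are made up), R counts combinations with choose():

choose(8, 3)  # 56 ways to pick 3 toppings from 8 when order doesn't matter
factorial(8) / (factorial(3) * factorial(5))  # same result from the formula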

Filed Under: Probability Tagged With: analysis example, choosing analysis, conceptual

Law of Large Numbers

By Jim Frost 4 Comments

The law of large numbers states that as the number of trials increases, sample values tend to converge on the expected result. The two forms of this law lay the foundation for both statistics and probability theory.

In this post, I explain both forms of the law, simulate them in action, and explain why they’re crucial for statistics and probability! [Read more…] about Law of Large Numbers
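As a small preview of that simulation idea (my sketch, with simulated coin flips):

set.seed(10)
flips <- rbinom(10000, size = 1, prob = 0.5)  # fair coin flips
running_mean <- cumsum(flips) / seq_along(flips)
plot(running_mean, type = "l", xlab = "Number of flips",
     ylab = "Proportion of heads")
abline(h = 0.5, lty = 2)  # the expected value the sample converges toward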

Filed Under: Basics Tagged With: conceptual, probability

