Statistics By Jim

Making statistics intuitive

Understanding Interaction Effects in Statistics

By Jim Frost 480 Comments

What are Interaction Effects?

An interaction effect occurs when the effect of one variable depends on the value of another variable. Interaction effects are common in regression models, ANOVA, and designed experiments. In this post, I explain interaction effects, the interaction effect test, how to interpret interaction models, and describe the problems you can face if you don’t include them in your model. [Read more…] about Understanding Interaction Effects in Statistics
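As a quick, hypothetical illustration (not from the post), the sketch below simulates data in which the slope of one predictor depends on the level of another and then fits a model with an interaction term. All coefficients are invented for demonstration purposes.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate data where the effect of x1 depends on x2 (an interaction); coefficients are made up.
n = 1000
x1 = rng.uniform(0, 10, n)
x2 = rng.integers(0, 2, n)                        # a binary moderator variable
y = 5 + 1.0 * x1 + 2.0 * x2 + 1.5 * x1 * x2 + rng.normal(0, 1, n)

# Fit y = b0 + b1*x1 + b2*x2 + b3*(x1*x2) by ordinary least squares.
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print("b0, b1, b2, b3 ≈", np.round(coefs, 2))     # b3 estimates the interaction effect (~1.5)
```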

Filed Under: Regression Tagged With: analysis example, conceptual, graphs, interpreting results

Using Log-Log Plots to Determine Whether Size Matters

By Jim Frost 3 Comments

Log-log plots display data in two dimensions where both axes use logarithmic scales. When one variable changes as a constant power of another, a log-log graph shows the relationship as a straight line. In this post, I’ll show you why these graphs are valuable and how to interpret them. [Read more…] about Using Log-Log Plots to Determine Whether Size Matters
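As a minimal sketch (not part of the post), the snippet below shows why a power relationship, y = a·x^b, plots as a straight line on log-log axes: taking logs gives log(y) = log(a) + b·log(x). The coefficients are invented for illustration.

```python
import numpy as np

# Hypothetical power-law relationship y = a * x**b (coefficients invented for illustration).
a, b = 2.5, 0.75
x = np.linspace(1, 1000, 200)
y = a * x**b

# On log-log axes the relationship is linear: log10(y) = log10(a) + b * log10(x).
slope, intercept = np.polyfit(np.log10(x), np.log10(y), 1)
print(f"slope ≈ {slope:.2f} (the exponent b), intercept ≈ {intercept:.2f} (log10(a))")
```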

Filed Under: Regression Tagged With: analysis example, graphs, interpreting results

Flu Shots, How Effective Are They?

By Jim Frost

With the arrival of Fall in the Northern hemisphere, it’s flu season again.

Do you debate getting a flu shot every year? I do get flu shots every year. I realize that they’re not perfect, but I figure they’re a low-cost way to reduce my chances of a crummy week suffering from the flu.

The media report that flu shots have an effectiveness of approximately 68%. But what does that mean exactly? What is the absolute reduction in risk? Are there long-term benefits?

In this blog post, I explore the effectiveness of flu shots from a statistical viewpoint. We’ll statistically analyze the data ourselves to go beyond the simplified accounts that the media presents. I’ll also model the long-term outcomes you can expect with regular flu vaccinations. By the time you finish this post, you’ll have a crystal clear picture of flu shot effectiveness. Some of the results surprised me! [Read more…] about Flu Shots, How Effective Are They?

Filed Under: Hypothesis Testing Tagged With: analysis example, distributions, graphs, interpreting results

Use Control Charts with Hypothesis Tests

By Jim Frost 17 Comments

Typically, quality improvement analysts use control charts to assess business processes and don’t have hypothesis tests in mind. Do you know how control charts provide tremendous benefits in other settings and with hypothesis testing? Spoilers—control charts check an assumption that we often forget about for hypothesis tests! [Read more…] about Use Control Charts with Hypothesis Tests

Filed Under: Hypothesis Testing Tagged With: assumptions, graphs, quality improvement

Understand Precision in Predictive Analytics to Avoid Costly Mistakes

By Jim Frost 9 Comments

Precision in predictive analytics refers to how close the model’s predictions are to the observed values. The more precise the model, the closer the data points are to the predictions. When you have an imprecise model, the observations tend to be further away from the predictions, thereby reducing the usefulness of the predictions. If you have a model that is not sufficiently precise, you risk making costly mistakes! [Read more…] about Understand Precision in Predictive Analytics to Avoid Costly Mistakes

Filed Under: Regression Tagged With: analysis example, conceptual, graphs, interpreting results

Heteroscedasticity in Regression Analysis

By Jim Frost 63 Comments

Heteroscedasticity means unequal scatter. In regression analysis, we talk about heteroscedasticity in the context of the residuals or error term. Specifically, heteroscedasticity is a systematic change in the spread of the residuals over the range of measured values. Heteroscedasticity is a problem because ordinary least squares (OLS) regression assumes that all residuals are drawn from a population that has a constant variance (homoscedasticity).

To satisfy the regression assumptions and be able to trust the results, the residuals should have a constant variance. In this blog post, I show you how to identify heteroscedasticity, explain what produces it and the problems it causes, and work through an example that demonstrates several solutions. [Read more…] about Heteroscedasticity in Regression Analysis
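As a rough sketch (not from the post), the snippet below simulates heteroscedastic data, fits an ordinary least squares line, and shows the residual spread growing across the fitted range. The data-generating process is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate data whose error spread grows with x (heteroscedasticity); the model is made up.
n = 500
x = rng.uniform(1, 10, n)
y = 3 + 2 * x + rng.normal(0, 0.5 * x, n)   # error standard deviation increases with x

# Fit ordinary least squares and inspect the residuals.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)

# Compare residual spread in the lower and upper halves of the predictor's range.
low, high = x < np.median(x), x >= np.median(x)
print(f"residual SD (small fitted values): {residuals[low].std():.2f}")
print(f"residual SD (large fitted values): {residuals[high].std():.2f}")
```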

Filed Under: Regression Tagged With: assumptions, conceptual, graphs

World Travel, Rough Roads, and Manually Adjusting Graph Scales!

By Jim Frost 2 Comments

As my family and I were being rattled around in a four-wheel drive vehicle in the remote Osa Peninsula in Costa Rica, it struck me that traveling to exotic locations is just like manually adjusting the scales on graphs! That’s probably not what you were expecting, but let me explain! Unlike most of my statistical blog posts, this one gets a bit philosophical! [Read more…] about World Travel, Rough Roads, and Manually Adjusting Graph Scales!

Filed Under: Fun Tagged With: conceptual, graphs

How to Interpret Regression Models that have Significant Variables but a Low R-squared

By Jim Frost 85 Comments


Does your regression model have a low R-squared? That seems like a problem—but it might not be. Learn what a low R-squared does and does not mean for your model. [Read more…] about How to Interpret Regression Models that have Significant Variables but a Low R-squared

Filed Under: Regression Tagged With: conceptual, graphs

How to Identify the Distribution of Your Data

By Jim Frost 106 Comments

You’re probably familiar with data that follow the normal distribution. The normal distribution is that nice, familiar bell-shaped curve. Unfortunately, not all data are normally distributed or as intuitive to understand. You can picture the symmetric normal distribution, but what about the Weibull or Gamma distributions? This uncertainty might leave you feeling unsettled. In this post, I show you how to identify the probability distribution of your data. [Read more…] about How to Identify the Distribution of Your Data

Filed Under: Hypothesis Testing Tagged With: distributions, graphs

How Hypothesis Tests Work: Significance Levels (Alpha) and P values

By Jim Frost 45 Comments

Hypothesis testing is a vital process in inferential statistics where the goal is to use sample data to draw conclusions about an entire population. In the testing process, you use significance levels and p-values to determine whether the test results are statistically significant.

You hear about results being statistically significant all of the time. But, what do significance levels, P values, and statistical significance actually represent? Why do we even need to use hypothesis tests in statistics? [Read more…] about How Hypothesis Tests Work: Significance Levels (Alpha) and P values

Filed Under: Hypothesis Testing Tagged With: conceptual, graphs, interpreting results

Hypothesis Testing and Confidence Intervals

By Jim Frost 20 Comments

Confidence intervals and hypothesis testing are closely related because both rely on the same underlying methodology. Additionally, there is a close connection between significance levels and confidence levels. Indeed, there is such a strong link between them that hypothesis tests and the corresponding confidence intervals always agree about statistical significance.

A confidence interval is calculated from a sample and provides a range of values that likely contains the unknown value of a population parameter. To learn more about confidence intervals in general, how to interpret them, and how to calculate them, read my post about Understanding Confidence Intervals.

In this post, I demonstrate how confidence intervals work using graphs and concepts instead of formulas. In the process, I compare and contrast significance and confidence levels. You’ll learn how confidence intervals are similar to significance levels in hypothesis testing. You can even use confidence intervals to determine statistical significance.

Read the companion post for this one: How Hypothesis Tests Work: Significance Levels (Alpha) and P-values. In that post, I use the same graphical approach to illustrate why we need hypothesis tests, how significance levels and P-values can determine whether a result is statistically significant, and what that actually means.

Significance Level vs. Confidence Level

Let’s delve into how confidence intervals incorporate the margin of error. As in the previous post, I’ll use the same type of sampling distribution that showed us how hypothesis tests work. This sampling distribution is based on the t-distribution, our sample size, and the variability in our sample. Download the CSV data file: FuelsCosts.

There are two critical differences between the sampling distribution graphs for significance levels and confidence intervals: the value on which the distribution centers and the portion we shade.

The significance level chart centers on the null value, and we shade the outside 5% of the distribution.

Conversely, the confidence interval graph centers on the sample mean, and we shade the center 95% of the distribution.

[Figure: probability distribution plot displaying the 95% confidence interval for the fuel cost dataset.]

The shaded range of sample means [267, 394] covers 95% of this sampling distribution. This range is the 95% confidence interval for our sample data. We can be 95% confident that the population mean for fuel costs falls between 267 and 394.

Confidence Intervals and the Inherent Uncertainty of Using Sample Data

The graph emphasizes the role of uncertainty around the point estimate and centers on our sample mean. If the population mean equals our sample mean, the means of random samples drawn from this population (N=25) will fall within this range 95% of the time.

We don’t know whether our sample mean is near the population mean. However, we know that the sample mean is an unbiased estimate of the population mean. An unbiased estimate does not tend to be too high or too low. It’s correct on average. Confidence intervals are correct on average because they use sample estimates that are correct on average. Given what we know, the sample mean is the most likely value for the population mean.

Given the sampling distribution, it would not be unusual for other random samples drawn from the same population to have means that fall within the shaded area. In other words, given that we did, in fact, obtain the sample mean of 330.6, it would not be surprising to get other sample means within the shaded range.

If these other sample means would not be unusual, we must conclude that these other values are also plausible candidates for the population mean. There is inherent uncertainty when using sample data to make inferences about the entire population. Confidence intervals help gauge the degree of uncertainty, also known as the margin of error.
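To make the “correct on average” idea concrete, here’s a small simulation sketch (not part of the original analysis): it draws many samples of N=25 from a hypothetical population, builds a 95% confidence interval from each one, and counts how often the intervals capture the true mean. The population parameters are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical population (parameters invented for illustration).
true_mean, true_sd, n = 330, 150, 25
t_crit = stats.t.ppf(0.975, df=n - 1)           # critical t-value for a 95% interval

covered, n_sims = 0, 10_000
for _ in range(n_sims):
    sample = rng.normal(true_mean, true_sd, n)
    se = sample.std(ddof=1) / np.sqrt(n)        # standard error of the mean
    margin = t_crit * se
    covered += sample.mean() - margin <= true_mean <= sample.mean() + margin

print(f"Coverage: {covered / n_sims:.3f} (should be close to 0.95)")
```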

Related post: Sampling Distributions

Confidence Intervals and Statistical Significance

If you want to determine whether your hypothesis test results are statistically significant, you can use either P-values with significance levels or confidence intervals. These two approaches always agree.

The relationship between the confidence level and the significance level for a hypothesis test is as follows:

Confidence level = 1 – Significance level (alpha)

For example, if your significance level is 0.05, the equivalent confidence level is 95%.

Both of the following conditions represent statistically significant results:

  • The P-value in a hypothesis test is smaller than the significance level.
  • The confidence interval excludes the null hypothesis value.

Further, it is always true that when the P-value is less than your significance level, the interval excludes the value of the null hypothesis.

In the fuel cost example, our hypothesis test results are statistically significant because the P-value (0.03112) is less than the significance level (0.05). Likewise, the 95% confidence interval [267, 394] excludes the null hypothesis value (260). Using either method, we draw the same conclusion.
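As a sketch of how the two criteria line up in practice, the snippet below reconstructs the fuel cost example from the summary statistics reported in this post (sample mean 330.6, N=25, null value 260). The sample standard deviation of roughly 154 is inferred from the reported margin of error, so treat the numbers as approximate.

```python
import numpy as np
from scipy import stats

# Summary statistics from the post; the standard deviation is inferred, so results are approximate.
mean, sd, n = 330.6, 154.0, 25
null_value, alpha = 260, 0.05

se = sd / np.sqrt(n)                              # standard error of the mean
t_stat = (mean - null_value) / se
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)   # two-tailed P-value

t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
ci = (mean - t_crit * se, mean + t_crit * se)

print(f"P-value = {p_value:.4f} -> significant? {p_value < alpha}")
print(f"95% CI = ({ci[0]:.0f}, {ci[1]:.0f}) -> excludes {null_value}? {not (ci[0] <= null_value <= ci[1])}")
```

Both checks flag the result as statistically significant, which is exactly the agreement described in the next section.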

Hypothesis Testing and Confidence Intervals Always Agree

The hypothesis testing and confidence interval results always agree. To understand the basis of this agreement, remember how confidence levels and significance levels function:

  • A confidence level determines the distance between the sample mean and the confidence limits.
  • A significance level determines the distance between the null hypothesis value and the critical regions.

Both of these concepts specify a distance from the mean to a limit. Surprise! These distances are precisely the same length.

A 1-sample t-test calculates this distance as follows:

Distance = critical t-value × standard error of the mean

Interpreting these statistics in detail goes beyond the scope of this article, but using this equation, the distance for our fuel cost example is $63.57.
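Here’s a short sketch of that calculation, again using the sample standard deviation of about 154 inferred from the post’s reported values (so the result is approximate):

```python
import numpy as np
from scipy import stats

n, sd, alpha = 25, 154.0, 0.05                  # sd inferred from the post's reported margin of error
sem = sd / np.sqrt(n)                           # standard error of the mean
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)   # critical t-value for a two-tailed test

distance = t_crit * sem
print(f"Distance (margin of error) ≈ ${distance:.2f}")   # approximately $63.57
```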

P-value and significance level approach: If the sample mean is more than $63.57 from the null hypothesis mean, the sample mean falls within the critical region, and the difference is statistically significant.

Confidence interval approach: If the null hypothesis mean is more than $63.57 from the sample mean, the interval does not contain this value, and the difference is statistically significant.

Of course, they always agree!

The two approaches always agree as long as the same hypothesis test generates the P-values and confidence intervals and uses equivalent confidence levels and significance levels.

Related posts: Standard Error of the Mean and Critical Values

I Really Like Confidence Intervals!

In statistics, analysts often emphasize using hypothesis tests to determine statistical significance. Unfortunately, a statistically significant effect might not always be practically meaningful. For example, a significant effect can be too small to be important in the real world. Confidence intervals help you navigate this issue!

Similarly, the margin of error in a survey tells you how close you can expect the survey results to be to the true population value.

Learn more about this distinction in my post about Practical vs. Statistical Significance.

Learn how to use confidence intervals to compare group means!

Finally, learn about bootstrapping in statistics to see an alternative to traditional confidence intervals that does not use probability distributions and test statistics. In that post, I create bootstrapped confidence intervals.


Filed Under: Hypothesis Testing Tagged With: conceptual, graphs, interpreting results

How t-Tests Work: t-Values, t-Distributions, and Probabilities

By Jim Frost 12 Comments

T-tests are statistical hypothesis tests that you use to analyze one or two sample means. Depending on the t-test that you use, you can compare a sample mean to a hypothesized value, the means of two independent samples, or the difference between paired samples. In this post, I show you how t-tests use t-values and t-distributions to calculate probabilities and test hypotheses.

As usual, I’ll provide clear explanations of t-values and t-distributions using concepts and graphs rather than formulas! If you need a primer on the basics, read my hypothesis testing overview. [Read more…] about How t-Tests Work: t-Values, t-Distributions, and Probabilities

Filed Under: Hypothesis Testing Tagged With: conceptual, graphs, probability

How to Interpret the Constant (Y Intercept) in Regression Analysis

By Jim Frost 91 Comments


The constant term in regression analysis is the value at which the regression line crosses the y-axis. The constant is also known as the y-intercept. That sounds simple enough, right? Mathematically, the regression constant really is that simple. However, the difficulties begin when you try to interpret the meaning of the y-intercept in your regression output. [Read more…] about How to Interpret the Constant (Y Intercept) in Regression Analysis

Filed Under: Regression Tagged With: conceptual, graphs, interpreting results

How F-tests work in Analysis of Variance (ANOVA)

By Jim Frost 47 Comments

Analysis of variance (ANOVA) uses F-tests to statistically assess the equality of means when you have three or more groups. In this post, I’ll answer several common questions about the F-test.

  • How do F-tests work?
  • Why do we analyze variances to test means?

I’ll use concepts and graphs to answer these questions about F-tests in the context of a one-way ANOVA example. I’ll use the same approach that I use to explain how t-tests work. If you need a primer on the basics, read my hypothesis testing overview.

To learn more about ANOVA tests, including the more complex forms, read my ANOVA Overview.

[Read more…] about How F-tests work in Analysis of Variance (ANOVA)

Filed Under: ANOVA Tagged With: conceptual, graphs, probability

Check Your Residual Plots to Ensure Trustworthy Regression Results!

By Jim Frost 63 Comments

Use residual plots to check the assumptions of an OLS linear regression model. If you violate the assumptions, you risk producing results that you can’t trust. Residual plots display the residual values on the y-axis and fitted values, or another variable, on the x-axis. After you fit a regression model, it is crucial to check the residual plots. If your plots display unwanted patterns, you can’t trust the regression coefficients and other numeric results.

In this post, I explain the conceptual reasons why residual plots help ensure that your regression model is valid. I’ll also show you what to look for and how to fix the problems. [Read more…] about Check Your Residual Plots to Ensure Trustworthy Regression Results!

Filed Under: Regression Tagged With: assumptions, conceptual, graphs

When is Easter this Year?

By Jim Frost 22 Comments

When is Easter in 2022? I ask this question every year! The next Easter occurs on April 17, 2022. And then, in the next year, Easter falls on April 9, 2023. I have a hard time remembering when it occurs in any given year. I think that March Easters are both early and unusual. Is that true?

Being a statistician, my first thought is to study the distribution of Easter dates. By analyzing the distribution, we can determine which dates are rare and which are common. How unusual are Easter dates in March? Are there patterns in the dates? [Read more…] about When is Easter this Year?

Filed Under: Fun Tagged With: distributions, graphs

Statistics, Exoplanets, and the Search for Earthlike Planets

By Jim Frost 3 Comments

I love astronomy! The discovery of thousands of exoplanets has made it only more exciting. You often hear about the really weird planets in the news. You know, things like low density puffballs, hot Jupiters, rogue planets, planets that orbit their star in hours, and even a Jupiter mass planet that is one huge diamond! As neat as these discoveries are, I also want to know how Earth fits in. [Read more…] about Statistics, Exoplanets, and the Search for Earthlike Planets

Filed Under: Fun Tagged With: distributions, graphs
