
Statistics By Jim

Making statistics intuitive


Hypothesis Testing

Sampling Distribution: Definition, Formula & Examples

By Jim Frost 5 Comments

What is a Sampling Distribution?

A sampling distribution of a statistic is a type of probability distribution created by drawing many random samples of a given size from the same population. These distributions help you understand how a sample statistic varies from sample to sample.
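The idea is easy to simulate. The sketch below (my own illustrative numbers, not from the post) draws 5,000 random samples of size 25 from a hypothetical normal population and examines how the sample means vary:

```python
import random
import statistics

random.seed(1)

# Hypothetical population: normal with mean 10 and standard deviation 2.
population_mean, population_sd = 10, 2

# Draw 5,000 random samples of size n = 25 and record each sample's mean.
n, n_samples = 25, 5000
sample_means = [
    statistics.mean(random.gauss(population_mean, population_sd) for _ in range(n))
    for _ in range(n_samples)
]

# The sampling distribution of the mean centers on the population mean,
# and its standard deviation (the standard error) is near sigma / sqrt(n).
center = statistics.mean(sample_means)
spread = statistics.stdev(sample_means)
print(center)  # close to 10
print(spread)  # close to 2 / sqrt(25) = 0.4
```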

Filed Under: Hypothesis Testing Tagged With: conceptual, distributions, graphs

Critical Value: Definition, Finding & Calculator

By Jim Frost Leave a Comment

What is a Critical Value?

A critical value defines regions in the sampling distribution of a test statistic. These values play a role in both hypothesis tests and confidence intervals. In hypothesis tests, critical values determine whether the results are statistically significant. For confidence intervals, they help calculate the upper and lower limits.
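For a concrete sense of how this works, here is a minimal sketch using scipy (a two-tailed z-test at α = 0.05; the choice of test and level is my illustration, not from the post):

```python
from scipy import stats

# Two-tailed z-test at significance level alpha = 0.05:
# the critical values cut off alpha/2 = 0.025 in each tail.
alpha = 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)  # upper critical value
print(z_crit)  # ~1.96

# A test statistic beyond +/- z_crit falls in the rejection region.
# The same critical value builds a 95% confidence interval:
# estimate +/- z_crit * standard_error.
```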

Filed Under: Hypothesis Testing Tagged With: conceptual, distributions, graphs

Chi-Square Table

By Jim Frost 2 Comments

This chi-square table provides the critical values for chi-square (χ²) hypothesis tests. The column and row intersections are the right-tail critical values for a given probability and degrees of freedom.
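If you prefer software to a printed table, scipy can reproduce any cell of such a table. For example (the probability 0.05 and df = 4 are arbitrary illustrative choices):

```python
from scipy.stats import chi2

# Right-tail critical value for probability 0.05 and 4 degrees of freedom,
# i.e., the value at that row/column intersection in a chi-square table.
crit = chi2.isf(0.05, df=4)
print(round(crit, 3))  # 9.488
```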

Filed Under: Hypothesis Testing Tagged With: distributions, graphs

Z-table

By Jim Frost 7 Comments

Z-Score Table

A z-table, also known as the standard normal table, provides the area under the curve to the left of a z-score. This area represents the probability that z-values will fall within a region of the standard normal distribution. Use a z-table to find probabilities corresponding to ranges of z-scores and to find p-values for z-tests.
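A statistical library can stand in for the printed table. For instance, with scipy (the z-score 1.96 is just an illustrative choice):

```python
from scipy.stats import norm

# Area to the left of z = 1.96: what a z-table lookup returns.
left_area = norm.cdf(1.96)
print(round(left_area, 4))  # ~0.975

# Two-tailed p-value for a z-test statistic of 1.96:
# twice the area in the right tail beyond |z|.
p_value = 2 * norm.sf(abs(1.96))
print(round(p_value, 4))  # ~0.05
```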

Filed Under: Hypothesis Testing Tagged With: distributions, graphs

T-Distribution Table of Critical Values

By Jim Frost 5 Comments

This t-distribution table provides the critical t-values for both one-tailed and two-tailed t-tests, as well as confidence intervals. Learn how to use this t-table with the information, examples, and illustrations below the table.
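The same lookups can be done with scipy. This sketch (df = 20 and α = 0.05 are arbitrary choices) reproduces a one-tailed and a two-tailed table entry:

```python
from scipy.stats import t

df = 20

# One-tailed critical value at alpha = 0.05.
t_one = t.ppf(1 - 0.05, df)

# Two-tailed critical value at alpha = 0.05 (0.025 in each tail),
# also used for a 95% confidence interval with 20 df.
t_two = t.ppf(1 - 0.025, df)

print(round(t_one, 3), round(t_two, 3))  # 1.725 2.086
```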

Filed Under: Hypothesis Testing Tagged With: distributions

Test Statistic: Definition, Types & Formulas

By Jim Frost 2 Comments

What is a Test Statistic?

A test statistic assesses how consistent your sample data are with the null hypothesis in a hypothesis test. Test statistic calculations take your sample data and boil them down to a single number that quantifies how much your sample diverges from the null hypothesis. As a test statistic value becomes more extreme, it indicates larger differences between your sample data and the null hypothesis.
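As a concrete example of boiling a sample down to a single number, here is a one-sample t-statistic computed by hand (the data and the null value of 100 are hypothetical):

```python
import math
import statistics

# Hypothetical sample; null hypothesis: population mean equals 100.
sample = [102, 98, 105, 99, 104, 101, 103, 97, 106, 100]
mu_0 = 100

n = len(sample)
x_bar = statistics.mean(sample)   # sample mean
s = statistics.stdev(sample)      # sample standard deviation

# One-sample t-statistic: the sample's divergence from the null
# hypothesis, measured in standard-error units.
t_stat = (x_bar - mu_0) / (s / math.sqrt(n))
print(round(t_stat, 3))  # ~1.567
```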

Filed Under: Hypothesis Testing Tagged With: conceptual, interpreting results

Paired T Test: Definition & When to Use It

By Jim Frost 5 Comments

What is a Paired T Test?

Use a paired t-test when each subject has a pair of measurements, such as a before and after score. A paired t-test determines whether the mean change for these pairs is significantly different from zero. This test is an inferential statistics procedure because it uses samples to draw conclusions about populations.

A paired t-test is also known as a paired-samples t-test or a dependent-samples t-test. These names reflect the fact that the two samples are paired, or dependent, because they contain the same subjects. Conversely, an independent-samples t-test contains different subjects in the two samples.
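A minimal sketch of the test (hypothetical before/after scores; scipy's `ttest_rel` is one common implementation) also shows that a paired t-test is equivalent to a one-sample t-test on the within-subject differences:

```python
from scipy import stats

# Hypothetical before/after scores for the same ten subjects.
before = [72, 80, 65, 90, 75, 68, 84, 77, 70, 88]
after  = [75, 84, 66, 95, 74, 73, 89, 80, 76, 91]

# Paired t-test: is the mean within-subject change different from zero?
t_stat, p_value = stats.ttest_rel(after, before)

# Equivalent formulation: one-sample t-test on the paired differences.
diffs = [a - b for a, b in zip(after, before)]
t_alt, p_alt = stats.ttest_1samp(diffs, 0)

print(round(t_stat, 3), round(p_value, 4))
```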

Filed Under: Hypothesis Testing Tagged With: analysis example, assumptions, choosing analysis, interpreting results

Independent Samples T Test: Definition, Using & Interpreting

By Jim Frost 3 Comments

What is an Independent Samples T Test?

Use an independent samples t test when you want to compare the means of precisely two groups—no more and no less! Typically, you perform this test to determine whether two population means are different. This procedure is an inferential statistical hypothesis test, meaning it uses samples to draw conclusions about populations. The independent samples t test is also known as the two sample t test.
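As a quick illustration (hypothetical data; the Welch variant shown here does not assume equal group variances):

```python
from scipy import stats

# Hypothetical scores for two independent groups (different subjects).
group_a = [23, 25, 28, 30, 26, 27, 24, 29]
group_b = [31, 33, 29, 35, 32, 30, 34, 36]

# Welch's t-test (equal_var=False) skips the equal-variances assumption;
# set equal_var=True for the classic pooled two-sample t-test.
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(round(t_stat, 3), round(p_value, 4))
```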

Filed Under: Hypothesis Testing Tagged With: analysis example, assumptions, choosing analysis, interpreting results

Standard Error of the Mean (SEM)

By Jim Frost 24 Comments

The standard error of the mean (SEM) is a bit mysterious. You’ll frequently find it in your statistical output. Is it a measure of variability? How does the standard error of the mean compare to the standard deviation? How do you interpret it?

In this post, I answer all these questions about the standard error of the mean, show how it relates to sample size considerations and statistical significance, and explain the general concept of other types of standard errors. In fact, I view standard errors as the doorway from descriptive statistics to inferential statistics. You’ll see how that works!
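The computation itself is simple: the SEM is the sample standard deviation divided by the square root of the sample size. A sketch with made-up measurements:

```python
import math
import statistics

# Hypothetical sample of 20 measurements.
sample = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 4.4, 3.7, 4.3, 4.0,
          4.6, 3.9, 4.1, 4.2, 3.8, 4.0, 4.4, 4.1, 3.9, 4.3]

n = len(sample)
s = statistics.stdev(sample)  # sample standard deviation
sem = s / math.sqrt(n)        # standard error of the mean

# The SEM shrinks as n grows: quadrupling the sample size halves it.
print(round(sem, 4))
```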

Filed Under: Hypothesis Testing Tagged With: conceptual, graphs, interpreting results

Assessing a COVID-19 Vaccination Experiment and Its Results

By Jim Frost 35 Comments

Moderna has announced encouraging preliminary results for their COVID-19 vaccine. In this post, I assess the available data and explain what the vaccine’s effectiveness really means. I also look at Moderna’s experimental design and examine how it incorporates statistical procedures and concepts that I discuss throughout my blog posts and books.

Filed Under: Hypothesis Testing Tagged With: coronavirus, interpreting results

P-Values, Error Rates, and False Positives

By Jim Frost 39 Comments

In my post about how to interpret p-values, I emphasize that p-values are not an error rate. The number one misinterpretation of p-values is that they are the probability of the null hypothesis being correct.

The correct interpretation is that p-values indicate the probability of observing your sample data, or more extreme, when you assume the null hypothesis is true. If you don’t solidly grasp that correct interpretation, please take a moment to read that post first.

Hopefully, that’s clear.

Unfortunately, one part of that blog post confuses some readers. In that post, I explain how p-values are not a probability, or error rate, of a hypothesis. I then show how that misinterpretation is dangerous because it overstates the evidence against the null hypothesis.

Filed Under: Hypothesis Testing Tagged With: conceptual, probability

New eBook Release! Hypothesis Testing: An Intuitive Guide

By Jim Frost 10 Comments

I’m thrilled to release my new book! Hypothesis Testing: An Intuitive Guide for Making Data Driven Decisions.

Filed Under: Hypothesis Testing Tagged With: ebook

Failing to Reject the Null Hypothesis

By Jim Frost 66 Comments

Failing to reject the null hypothesis is an odd way to state that the results of your hypothesis test are not statistically significant. Why the peculiar phrasing? “Fail to reject” sounds like one of those double negatives that writing classes taught you to avoid. What does it mean exactly? There’s an excellent reason for the odd wording!

In this post, learn what it means when you fail to reject the null hypothesis and why that’s the correct wording. While accepting the null hypothesis sounds more straightforward, it is not statistically correct!

Filed Under: Hypothesis Testing Tagged With: conceptual

Understanding Significance Levels in Statistics

By Jim Frost 30 Comments

Significance levels in statistics are a crucial component of hypothesis testing. However, unlike other values in your statistical output, the significance level is not something that statistical software calculates. Instead, you choose the significance level. Have you ever wondered why?

In this post, I’ll explain the significance level conceptually, why you choose its value, and how to choose a good value. Statisticians also refer to the significance level as alpha (α).

Filed Under: Hypothesis Testing Tagged With: conceptual

How the Chi-Squared Test of Independence Works

By Jim Frost 21 Comments

Chi-squared tests of independence determine whether a relationship exists between two categorical variables. Do the values of one categorical variable depend on the value of the other categorical variable? If the two variables are independent, knowing the value of one variable provides no information about the value of the other variable.

I’ve previously written about Pearson’s chi-square test of independence using a fun Star Trek example. Are the uniform colors related to the chances of dying? You can test the notion that the infamous red shirts have a higher likelihood of dying. In that post, I focused on the purpose of the test, applied it to this example, and interpreted the results.

In this post, I’ll take a bit of a different approach. I’ll show you the nuts and bolts of how to calculate the expected values, chi-square value, and degrees of freedom. Then you’ll learn how to use the chi-squared distribution in conjunction with the degrees of freedom to calculate the p-value.
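Those nuts and bolts can be sketched in a few lines (the 2×2 counts below are hypothetical, not the Star Trek data; scipy's `chi2_contingency` serves as a cross-check):

```python
from scipy.stats import chi2, chi2_contingency

# Hypothetical 2x2 contingency table of observed counts.
observed = [[20, 30],
            [25, 25]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Expected count per cell: (row total * column total) / grand total.
expected = [[r * c / grand for c in col_totals] for r in row_totals]

# Chi-square statistic: sum of (observed - expected)^2 / expected.
chi_sq = sum(
    (o - e) ** 2 / e
    for o_row, e_row in zip(observed, expected)
    for o, e in zip(o_row, e_row)
)
df = (len(observed) - 1) * (len(observed[0]) - 1)  # (rows-1)(cols-1)
p_value = chi2.sf(chi_sq, df)                      # right-tail probability

# Cross-check against scipy's built-in test (no Yates' correction).
stat, p, dof, exp = chi2_contingency(observed, correction=False)
print(round(chi_sq, 3), round(p_value, 3))
```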

Filed Under: Hypothesis Testing Tagged With: analysis example, distributions, interpreting results

How to Test Variances in Excel

By Jim Frost 7 Comments

Use a variances test to determine whether the variability of two groups differs. In this post, we’ll work through a two-sample variances test that Excel provides. Even if Excel isn’t your primary statistical software, this post provides an excellent introduction to variance tests. Excel refers to this analysis as F-Test Two-Sample for Variances.

Filed Under: Hypothesis Testing Tagged With: analysis example, Excel, interpreting results

How to do t-Tests in Excel

By Jim Frost 114 Comments

Excel can perform various statistical analyses, including t-tests. It is an excellent option because nearly everyone can access Excel. This post is a great introduction to performing and interpreting t-tests even if Excel isn’t your primary statistical software package.

In this post, I provide step-by-step instructions for using Excel to perform t-tests. Importantly, I also show you how to select the correct form of t-test, choose the right options, and interpret the results. I also include links to additional resources I’ve written, which present clear explanations of relevant t-test concepts that you won’t find in Excel’s documentation. And, I use an example dataset for us to work through and interpret together!

Filed Under: Hypothesis Testing Tagged With: analysis example, Excel, interpreting results

Low Power Tests Exaggerate Effect Sizes

By Jim Frost 14 Comments

If your study has low statistical power, it will exaggerate the effect size. What?!

Statistical power is the ability of a hypothesis test to detect an effect that exists in the population. Clearly, a high-powered study is a good thing because it can identify these effects. Low power reduces your chances of discovering real findings. However, many analysts don’t realize that low power also inflates the estimated effect size. Learn more about Statistical Power.

In this post, I show how this unexpected relationship between power and exaggerated effect sizes exists. I’ll also tie it to other issues, such as the bias of effects published in journals and other matters about statistical power. I think this post will be eye-opening and thought provoking! As always, I’ll use many graphs rather than equations.
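The effect is easy to reproduce by simulation. This sketch (my own illustrative setup: a true effect of 0.3 standard deviations, n = 20, and a z-test) keeps only the statistically significant studies and compares their average effect estimate to the truth:

```python
import random
import statistics

random.seed(42)

# Hypothetical setup: the true effect is a 0.3 SD mean shift, but each
# study uses only n = 20, giving low power for a two-tailed z-test.
true_effect, n, z_crit = 0.3, 20, 1.96
se = 1 / n ** 0.5  # standard error of the effect estimate (sigma = 1)

all_effects, significant_effects = [], []
for _ in range(20000):
    # Observed effect estimate from one simulated study.
    observed = random.gauss(true_effect, se)
    all_effects.append(observed)
    if abs(observed / se) > z_crit:  # statistically significant?
        significant_effects.append(observed)

# Averaged over all studies, the estimates are unbiased...
print(round(statistics.mean(all_effects), 3))          # near 0.3
# ...but among only the *significant* studies, the effect is exaggerated.
print(round(statistics.mean(significant_effects), 3))  # well above 0.3
```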

Filed Under: Hypothesis Testing Tagged With: conceptual, distributions, graphs

Revisiting the Monty Hall Problem with Hypothesis Testing

By Jim Frost 22 Comments

The Monty Hall Problem is where Monty presents you with three doors, one of which contains a prize. He asks you to pick one door, which remains closed. Monty opens one of the other doors that does not have the prize. This process leaves two unopened doors—your original choice and one other. He allows you to switch from your initial choice to the other unopened door. Do you accept the offer?

If you accept his offer to switch doors, you’re twice as likely to win (66% versus 33%) as if you stay with your original choice.

Mind-blowing, right?

The solution to the Monty Hall Problem is tricky and counterintuitive. It tripped up many experts back in the 1980s. However, the correct answer to the Monty Hall Problem is now well established using a variety of methods. It has been proven mathematically, with computer simulations, and with empirical experiments, including on television by both the Mythbusters (CONFIRMED!) and James May’s Man Lab. You won’t find any statisticians who disagree with the solution.

In this post, I’ll explore aspects of this problem that have arisen in discussions with some stubborn resisters to the notion that you can increase your chances of winning by switching!

The Monty Hall problem provides a fun way to explore issues that relate to hypothesis testing. I’ve got a lot of fun lined up for this post, including the following!

  • Using a computer simulation to play the game 10,000 times.
  • Assessing sampling distributions to compare the 66% hypothesis to another contender.
  • Performing a power and sample size analysis to determine the number of times you need to play the Monty Hall game to get an answer.
  • Conducting an experiment by playing the game repeatedly myself, recording the results, and using a proportions hypothesis test to draw conclusions!
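For the simulation item above, a minimal sketch looks like this (10,000 games per strategy; my own code, not the post's):

```python
import random

random.seed(7)

def play(switch: bool) -> bool:
    """Play one Monty Hall game; return True if the player wins the prize."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that is neither the player's pick nor the prize.
    monty = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != monty)
    return pick == prize

games = 10000
stay_wins = sum(play(switch=False) for _ in range(games)) / games
switch_wins = sum(play(switch=True) for _ in range(games)) / games
print(stay_wins)    # near 1/3
print(switch_wins)  # near 2/3
```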

Filed Under: Hypothesis Testing Tagged With: analysis example, conceptual, distributions, interpreting results

Using Confidence Intervals to Compare Means

By Jim Frost 60 Comments

To determine whether the difference between two means is statistically significant, analysts often compare the confidence intervals for those groups. If those intervals overlap, they conclude that the difference between groups is not statistically significant. If there is no overlap, the difference is significant.

While this visual method of assessing the overlap is easy to perform, it regrettably comes at the cost of reducing your ability to detect differences. Fortunately, there is a simple solution to this problem that allows you to perform a quick visual assessment without diminishing the power of your analysis.

In this post, I’ll start by showing you the problem in action and explain why it happens. Then, we’ll proceed to an easy alternative method that avoids this problem.
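To see the problem in miniature, this sketch (hypothetical data, chosen so the intervals overlap) compares the overlap rule with an actual t-test:

```python
from scipy import stats

# Hypothetical samples from two groups (chosen so the 95% CIs overlap).
group_a = [10.2, 9.8, 11.1, 10.5, 9.9, 10.8, 10.4, 10.0]
group_b = [10.6, 11.2, 10.5, 11.4, 10.8, 11.1, 10.4, 11.0]

def mean_ci(data, confidence=0.95):
    """Confidence interval for one group's mean."""
    n = len(data)
    m = sum(data) / n
    h = stats.t.ppf((1 + confidence) / 2, n - 1) * stats.sem(data)
    return m - h, m + h

ci_a, ci_b = mean_ci(group_a), mean_ci(group_b)
overlap = ci_a[1] > ci_b[0] and ci_b[1] > ci_a[0]

# The overlap rule would call this non-significant,
# but the two-sample t-test disagrees.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(overlap, round(p_value, 4))  # overlap is True, yet p < 0.05
```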

Filed Under: Hypothesis Testing Tagged With: conceptual, graphs, interpreting results



    Copyright © 2023 · Jim Frost · Privacy Policy