
Statistics By Jim

Making statistics intuitive


Hypothesis Testing

Fisher's Exact Test: Using & Interpreting

By Jim Frost

Fisher's exact test determines whether a statistically significant association exists between two categorical variables.

For example, does a relationship exist between gender (Male/Female) and voting Yes or No on a referendum?
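To make the idea concrete, here is a minimal sketch of a two-sided Fisher's exact test in plain Python (standard library only), computed from the hypergeometric distribution. The 2×2 table in the usage line is hypothetical, not from any real referendum:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def table_prob(x):
        # Hypergeometric probability of x in cell "a" given the fixed margins.
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = table_prob(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    # Sum the probabilities of all tables as extreme or more extreme than observed.
    return sum(p for x in range(lo, hi + 1)
               if (p := table_prob(x)) <= p_obs + 1e-12)

# Hypothetical table: males voting Yes/No = 3/1, females voting Yes/No = 1/3.
p_value = fisher_exact_p(3, 1, 1, 3)
```

With counts this small, p ≈ 0.486, so the hypothetical table shows no significant association.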


Z Test: Uses, Formula & Examples

By Jim Frost

What is a Z Test?

Use a Z test when you need to compare group means. Use the 1-sample analysis to determine whether a population mean differs from a hypothesized value, or use the 2-sample version to determine whether two population means differ.
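As an illustration of the 1-sample version, here is a short Python sketch assuming a known population standard deviation (all numbers are hypothetical):

```python
import math

def one_sample_z(xbar, mu0, sigma, n):
    """1-sample z test: z statistic and two-sided p-value (sigma known)."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    # Two-sided p-value from the standard normal CDF via the error function.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical: sample mean 103 from n = 50, testing mu0 = 100 with sigma = 10.
z, p = one_sample_z(103, 100, 10, 50)
```

Here z ≈ 2.12 with p ≈ 0.034, which would be significant at the conventional 0.05 level.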


Statistical Significance: Definition & Meaning

By Jim Frost

What is Statistical Significance?

Alpha represents the level of statistical significance.

Statistical significance is the goal for most researchers analyzing data. But what does statistically significant mean? Why and when is it important to consider? How do P values fit in with statistical significance? I’ll answer all these questions in this blog post!

Evaluate statistical significance when using a sample to estimate an effect in a population. It helps you determine whether your findings are the result of chance or an actual effect of a variable of interest.


Statistical Inference: Definition, Methods & Example

By Jim Frost

What is Statistical Inference?

Statistical inference is the process of using a sample to infer the properties of a population. Statistical procedures use sample data to estimate the characteristics of the whole population from which the sample was drawn.

Scientists typically want to learn about a population. When studying a phenomenon, such as the effects of a new medication or public opinion, understanding the results at a population level is much more valuable than understanding only the comparatively few participants in a study.

Unfortunately, populations are usually too large to measure fully. Consequently, researchers must use a manageable subset of that population to learn about it.

By using procedures that can make statistical inferences, you can estimate the properties and processes of a population. More specifically, sample statistics can estimate population parameters. Learn more about the differences between sample statistics and population parameters.

For example, imagine that you are studying a new medication. As a scientist, you’d like to understand the medicine’s effect in the entire population rather than just a small sample. After all, knowing the effect on a handful of people isn’t very helpful for the larger society!

Consequently, you are interested in making a statistical inference about the medicine’s effect in the population.

Read on to see how to do that! I’ll show you the general process for making a statistical inference and then cover an example using real data.

Related post: Descriptive vs. Inferential Statistics

How to Make Statistical Inferences

In its simplest form, the process of making a statistical inference requires you to do the following:

  1. Draw a sample that adequately represents the population.
  2. Measure your variables of interest.
  3. Use appropriate statistical methodology to generalize your sample results to the population while accounting for sampling error.

Of course, that’s the simple version. In real-world experiments, you might need to form treatment and control groups, administer treatments, and reduce other sources of variation. In more complex cases, you might need to create a model of a process. There are many details in the process of making a statistical inference! Learn how to incorporate statistical inference into scientific studies.

Statistical inference requires specialized sampling methods that tend to produce representative samples. If the sample does not resemble the larger population you're studying, you can't trust any inferences drawn from it. Consequently, using an appropriate method to obtain your sample is crucial. Learn more about Sampling Methods and Representative Samples.

After obtaining a representative sample, you’ll need to use a procedure that can make statistical inferences. While you might have a sample that looks similar to the population, it will never be identical to it. Statisticians refer to the differences between a sample and the population as sampling error. Any effect or relationship you see in your sample might actually be sampling error rather than a true finding. Inferential statistics incorporate sampling error into the results. Learn more about Sampling Error.

Common Inferential Methods

The following are four standard procedures that can make statistical inferences.

  • Hypothesis Testing: Uses representative samples to assess two mutually exclusive hypotheses about a population. Statistically significant results suggest that the sample effect or relationship exists in the population after accounting for sampling error.
  • Confidence Intervals: A range of values likely containing the population value. This procedure evaluates the sampling error and adds a margin around the estimate, giving an idea of how wrong it might be.
  • Margin of Error: Comparable to a confidence interval but usually for survey results.
  • Regression Modeling: An estimate of the process that generates the outcomes in the population.

Example Statistical Inference

Let’s look at a real flu vaccine study for an example of making a statistical inference. The scientists for this study want to evaluate whether a flu vaccine effectively reduces flu cases in the general population. However, the general population is much too large to include in their study, so they must use a representative sample to make a statistical inference about the vaccine’s effectiveness.

The Monto et al. study* covers the 2007-2008 flu season, following participants aged 18-49 from January to April. The researchers selected ~1,100 participants and randomly assigned them to the vaccine and placebo groups. After tracking them through the flu season, they recorded the number of flu infections in each group, as shown below.

Treatment  Flu count  Group size  Percent infected
Placebo    35         325         10.8%
Vaccine    28         813         3.4%
Effect                            7.4%

Monto Study Findings

From the table above, 10.8% of the unvaccinated got the flu, while only 3.4% of the vaccinated caught it. The apparent effect of the vaccine is 10.8% – 3.4% = 7.4%. While that seems to show a vaccine effect, it might be a fluke due to sampling error. We’re assessing only 1,100 people out of a population of millions. We need to use a hypothesis test and confidence interval (CI) to make a proper statistical inference.

While the details go beyond this introductory post, here are two statistical inferences we can make using a 2-sample proportions test and CI.

  1. The p-value of the test is < 0.0005. The evidence strongly favors the hypothesis that the vaccine effectively reduces flu infections in the population after accounting for sampling error.
  2. Additionally, the confidence interval for the effect size is 3.7% to 10.9%. Our study found a sample effect of 7.4%, but it is unlikely to equal the population effect exactly due to sampling error. The CI identifies a range that is likely to include the population effect.
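Those two inferences can be sketched in plain Python (standard library only). This is an illustrative implementation of a standard 2-sample proportions z-test with a normal-approximation confidence interval, not necessarily the exact software procedure used for the study:

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sample proportions z-test and 95% CI for the difference p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    # Pooled proportion for the test statistic (under H0: p1 == p2).
    pp = (x1 + x2) / (n1 + n2)
    se_pooled = math.sqrt(pp * (1 - pp) * (1 / n1 + 1 / n2))
    z = diff / se_pooled
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    # Unpooled standard error for the confidence interval.
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z_crit = 1.959964  # 97.5th percentile of the standard normal
    return diff, z, p_value, (diff - z_crit * se, diff + z_crit * se)

# Monto et al. data: placebo 35/325 infected, vaccine 28/813 infected.
diff, z, p, (lo, hi) = two_prop_ztest(35, 325, 28, 813)
print(f"effect = {diff:.1%}, z = {z:.2f}, p = {p:.1e}, CI = {lo:.1%} to {hi:.1%}")
```

The resulting p-value is well below 0.0005, and the 95% CI runs from about 3.7% to 10.9%, matching the two inferences above.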

For more information about this and other flu vaccine studies, read my post about Flu Vaccine Effectiveness.

In conclusion, by using a representative sample and the proper methodology, we made a statistical inference about vaccine effectiveness in an entire population.

Reference

Monto AS, Ohmit SE, Petrie JG, Johnson E, Truscon R, Teich E, Rotthoff J, Boulton M, Victor JC. Comparative efficacy of inactivated and live attenuated influenza vaccines. N Engl J Med. 2009;361(13):1260-7.


How to Find the P value: Process and Calculations

By Jim Frost

P values are everywhere in statistics. They're in all types of hypothesis tests. But how do you calculate a p-value? Unsurprisingly, the precise calculations depend on the test, but a general process applies to finding any p-value.

In this post, you'll learn how to find the p-value. I'll start by showing you the general process for all hypothesis tests and then move on to a step-by-step example showing the calculations. This post includes a calculator so you can apply what you learn.


What is Power in Statistics?

By Jim Frost

Power in statistics is the probability that a hypothesis test can detect an effect in a sample when that effect exists in the population. It is the sensitivity of a hypothesis test: when an effect exists in the population, how likely is the test to detect it in your sample?
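As a rough sketch under simple assumptions (1-sample z-test with known sigma, hypothetical numbers), power can be computed directly from the normal distribution. This two-sided approximation ignores the negligible probability of rejecting in the far tail:

```python
from math import sqrt
from statistics import NormalDist

def power_one_sample_z(delta, sigma, n, alpha=0.05):
    """Approximate power of a two-sided 1-sample z-test for a true shift delta."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)   # two-sided critical value
    shift = delta / (sigma / sqrt(n))    # true effect in standard-error units
    return 1 - nd.cdf(z_crit - shift)    # P(reject H0 | true shift = delta)

# Hypothetical: detect a 5-point shift when sigma = 15 and n = 50.
power = power_one_sample_z(5, 15, 50)
```

This yields power of roughly 0.65: with 50 observations, the test would detect a shift of this size about 65% of the time.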


Chi-Square Goodness of Fit Test: Uses & Examples

By Jim Frost

The chi-square goodness of fit test evaluates whether proportions of categorical or discrete outcomes in a sample follow a population distribution with hypothesized proportions. In other words, when you draw a random sample, do the observed proportions follow the values that theory suggests?
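The statistic itself is simple to compute. Here is a Python sketch with hypothetical data, testing whether a die is fair by comparing the statistic against the chi-square critical value for 5 degrees of freedom at the 0.05 significance level (11.07):

```python
# Hypothetical: a die rolled 60 times; a fair die implies 10 expected per face.
observed = [8, 12, 9, 11, 13, 7]
expected = [10] * 6

# Chi-square goodness-of-fit statistic: sum of (O - E)^2 / E over the categories.
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi2, 1))  # 2.8, well below 11.07: no evidence the die is unfair
```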


Sampling Error: Definition, Sources & Minimizing

By Jim Frost

What is Sampling Error?

Sampling error is the difference between a sample statistic and the population parameter it estimates. It is a crucial consideration in inferential statistics, where you use a sample to estimate the properties of an entire population.


Inter-Rater Reliability: Definition, Examples & Assessing

By Jim Frost

What is Inter-Rater Reliability?

Inter-rater reliability measures the agreement between subjective ratings by multiple raters, inspectors, judges, or appraisers. It answers the question: is the rating system consistent? High inter-rater reliability indicates that multiple raters' ratings for the same item are consistent. Conversely, low reliability means they are inconsistent.
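One common way to quantify this is Cohen's kappa, which adjusts raw agreement for the agreement expected by chance. Here is a small Python sketch with hypothetical pass/fail ratings; the full post may cover other measures as well:

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: product of each rater's marginal proportion per category.
    expected = sum(
        (rater1.count(c) / n) * (rater2.count(c) / n)
        for c in set(rater1) | set(rater2)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical pass/fail ratings for 10 items by two inspectors.
r1 = ["P", "P", "P", "F", "F", "P", "F", "P", "P", "F"]
r2 = ["P", "P", "F", "F", "F", "P", "F", "P", "P", "P"]
kappa = cohens_kappa(r1, r2)
```

Here the raters agree on 8 of 10 items, but kappa is about 0.58, indicating only moderate agreement once chance is accounted for.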


Margin of Error: Formula and Interpreting

By Jim Frost

What is the Margin of Error?

The margin of error (MOE) for a survey tells you how close you can expect the survey results to be to the correct population value. For example, a survey indicates that 72% of respondents favor Brand A over Brand B with a 3% margin of error. In this case, the actual population percentage that prefers Brand A likely falls within the range of 72% ± 3%, or 69% to 75%.
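For a proportion, the margin of error follows directly from the normal approximation. A minimal Python sketch (the survey numbers are hypothetical):

```python
import math

def margin_of_error(p_hat, n, z_crit=1.959964):
    """95% margin of error for a sample proportion (normal approximation)."""
    return z_crit * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical survey: 72% favor Brand A among n = 861 respondents.
moe = margin_of_error(0.72, 861)
print(f"{moe:.1%}")  # 3.0%
```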


Null Hypothesis: Definition, Rejecting & Examples

By Jim Frost

What is a Null Hypothesis?

The null hypothesis in statistics states that there is no difference between groups or no relationship between variables. It is one of two mutually exclusive hypotheses about a population in a hypothesis test.


Confidence Intervals: Interpreting, Finding & Formulas

By Jim Frost

What is a Confidence Interval?

A confidence interval (CI) is a range of values that is likely to contain the value of an unknown population parameter. These intervals represent a plausible domain for the parameter given the characteristics of your sample data. Confidence intervals are derived from sample statistics and are calculated using a specified confidence level.


F-table

By Jim Frost

These F-tables provide the critical values for right-tailed F-tests. Your F-test result is statistically significant when its test statistic is greater than the critical value.


Sampling Distribution: Definition, Formula & Examples

By Jim Frost

What is a Sampling Distribution?

A sampling distribution of a statistic is a type of probability distribution created by drawing many random samples of a given size from the same population. These distributions help you understand how a sample statistic varies from sample to sample.
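You can see this directly by simulation. The following Python sketch builds a hypothetical population, draws many samples of size 25, and shows that the sample means cluster around the population mean with a standard deviation close to sigma / sqrt(n) = 15 / 5 = 3:

```python
import random
import statistics

random.seed(1)  # reproducible hypothetical population
population = [random.gauss(100, 15) for _ in range(100_000)]

# Sampling distribution of the mean: draw many samples of n = 25 and
# record the mean of each one.
means = [statistics.mean(random.sample(population, 25)) for _ in range(2_000)]

center = statistics.mean(means)   # close to the population mean, 100
spread = statistics.stdev(means)  # close to sigma / sqrt(n) = 15 / 5 = 3
```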


Critical Value: Definition, Finding & Calculator

By Jim Frost

What is a Critical Value?

A critical value defines regions in the sampling distribution of a test statistic. These values play a role in both hypothesis tests and confidence intervals. In hypothesis tests, critical values determine whether the results are statistically significant. For confidence intervals, they help calculate the upper and lower limits.
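For the normal distribution, Python's standard library can compute critical values directly via the inverse CDF, so no table lookup is strictly needed:

```python
from statistics import NormalDist

# Critical value for a two-sided z-test at alpha = 0.05: the point that
# cuts off 2.5% in the upper tail of the standard normal distribution.
alpha = 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)
print(round(z_crit, 2))  # 1.96
```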


Chi-Square Table

By Jim Frost

This chi-square table provides the critical values for chi-square (χ²) hypothesis tests. The column and row intersections are the right-tail critical values for a given probability and degrees of freedom.


Z-table

By Jim Frost

Z-Score Table

A z-table, also known as the standard normal table, provides the area under the curve to the left of a z-score. This area represents the probability that z-values will fall within a region of the standard normal distribution. Use a z-table to find probabilities corresponding to ranges of z-scores and to find p-values for z-tests.
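The value a z-table provides can also be computed directly. A short Python sketch using the error function:

```python
import math

def std_normal_cdf(z):
    """Area under the standard normal curve to the left of z (a z-table lookup)."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(std_normal_cdf(1.96), 4))  # 0.975
```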


T-Distribution Table of Critical Values

By Jim Frost

This t-distribution table provides the critical t-values for one-tailed and two-tailed t-tests and for confidence intervals. Learn how to use the t-table with the information, examples, and illustrations below it.


Test Statistic: Definition, Types & Formulas

By Jim Frost

What is a Test Statistic?

A test statistic assesses how consistent your sample data are with the null hypothesis in a hypothesis test. Test statistic calculations take your sample data and boil them down to a single number that quantifies how much your sample diverges from the null hypothesis. As a test statistic value becomes more extreme, it indicates larger differences between your sample data and the null hypothesis.


Paired T Test: Definition & When to Use It

By Jim Frost

What is a Paired T Test?

Use a paired t-test when each subject has a pair of measurements, such as a before and after score. A paired t-test determines whether the mean change for these pairs is significantly different from zero. This test is an inferential statistics procedure because it uses samples to draw conclusions about populations.

Paired t-tests are also known as paired-sample or dependent-samples t-tests. These names reflect the fact that the two samples are paired, or dependent, because they contain the same subjects. Conversely, an independent-samples t-test contains different subjects in the two samples.
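The core calculation reduces to a 1-sample t-test on the per-subject differences. Here is a Python sketch with hypothetical before/after scores; it returns the t statistic and degrees of freedom, which you would then compare against a t-table:

```python
import math
import statistics

def paired_t(before, after):
    """Paired t statistic: mean of the differences divided by its standard error."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1  # t statistic and degrees of freedom

# Hypothetical before/after test scores for 5 subjects.
t, df = paired_t([10, 12, 9, 11, 13], [12, 15, 11, 12, 16])
```

With t of about 5.88 on 4 degrees of freedom, the mean change is significant at the 0.05 level (the two-tailed critical value is 2.776).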


Copyright © 2023 · Jim Frost