Power in statistics is the probability that a hypothesis test detects an effect in a sample when that effect exists in the population. In other words, power is the sensitivity of a hypothesis test: when an effect exists in the population, how likely is the test to detect it in your sample? [Read more…] about What is Power in Statistics?
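As a rough illustration (not code from the post itself), here is one way to approximate the power of a two-sided one-sample z-test in Python; the effect size, sample size, and significance level are all made-up values:

```python
from scipy.stats import norm

def z_test_power(effect_size, n, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test.

    effect_size is the true mean shift in standard-deviation units
    (Cohen's d). Power is the chance the test statistic lands in the
    rejection region when that effect really exists.
    """
    z_crit = norm.ppf(1 - alpha / 2)      # e.g. 1.96 for alpha = 0.05
    shift = effect_size * n ** 0.5        # how far the test statistic shifts
    # Probability of rejecting in either tail under the true effect
    return norm.cdf(shift - z_crit) + norm.cdf(-shift - z_crit)

print(round(z_test_power(0.5, 30), 3))   # medium effect, n = 30: about 0.78
```

Increasing the sample size or the true effect size pushes the power toward 1, which is why power analysis is usually run before collecting data.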
The chi-square goodness of fit test evaluates whether proportions of categorical or discrete outcomes in a sample follow a population distribution with hypothesized proportions. In other words, when you draw a random sample, do the observed proportions follow the values that theory suggests? [Read more…] about Chi-Square Goodness of Fit Test: Uses & Examples
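To make that concrete, here is a short sketch using scipy's chisquare function; the die-roll counts are invented for illustration:

```python
from scipy.stats import chisquare

# Hypothetical die-roll data: 90 rolls, hypothesized uniform proportions
observed = [16, 18, 16, 14, 12, 14]
expected = [15] * 6  # 90 rolls / 6 faces

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")
# A large p-value means the observed proportions are consistent
# with the hypothesized (uniform) distribution.
```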
What is Sampling Error?
Sampling error is the difference between a sample statistic and the population parameter it estimates. It is a crucial consideration in inferential statistics where you use a sample to estimate the properties of an entire population. [Read more…] about Sampling Error: Definition, Sources & Minimizing
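A quick simulation (illustrative only) makes the idea tangible: build a population with a known mean, draw one sample, and see how far the sample mean lands from the parameter:

```python
import random
import statistics

random.seed(1)
# A known population: its true mean is the parameter we try to estimate
population = [random.gauss(100, 15) for _ in range(100_000)]
mu = statistics.fmean(population)

sample = random.sample(population, 50)
sampling_error = statistics.fmean(sample) - mu
print(f"sampling error of the mean: {sampling_error:+.2f}")
# A different random sample would give a different error;
# larger samples tend to produce smaller sampling error.
```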
What is Inter-Rater Reliability?
Inter-rater reliability measures the agreement between subjective ratings by multiple raters, inspectors, judges, or appraisers. It answers the question, is the rating system consistent? High inter-rater reliability indicates that multiple raters’ ratings for the same item are consistent. Conversely, low reliability means they are inconsistent. [Read more…] about Inter-Rater Reliability: Definition, Examples & Assessing
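One common way to quantify this agreement for two raters is Cohen's kappa. The sketch below is a minimal pure-Python version with invented pass/fail ratings, not code from the post:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters: observed agreement corrected for chance."""
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical pass/fail ratings of 8 items by two inspectors
a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"]
print(round(cohens_kappa(a, b), 3))  # 1 = perfect agreement, 0 = chance level
```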
What is the Margin of Error?
The margin of error (MOE) for a survey tells you how close you can expect the survey results to be to the correct population value. For example, a survey indicates that 72% of respondents favor Brand A over Brand B with a 3% margin of error. In this case, the actual population percentage that prefers Brand A likely falls within the range of 72% ± 3%, or 69% to 75%. [Read more…] about Margin of Error: Formula and Interpreting
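For proportions like this, the MOE comes from the normal approximation: z times the standard error of the proportion. Here is a sketch; the sample size of 1,000 is an assumption for illustration, not a figure from the example:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate MOE for a proportion at ~95% confidence (normal approximation)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical survey: 72% prefer Brand A among n = 1000 respondents
moe = margin_of_error(0.72, 1000)
print(f"72% ± {moe:.1%}")  # roughly ± 2.8%
```

Because the MOE shrinks with the square root of n, cutting it from about 2.8% to 2% would take roughly twice as many respondents.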
What is a Null Hypothesis?
The null hypothesis in statistics states that there is no difference between groups or no relationship between variables. It is one of two mutually exclusive hypotheses about a population in a hypothesis test. [Read more…] about Null Hypothesis: Definition, Rejecting & Examples
What is a Confidence Interval?
A confidence interval (CI) is a range of values that is likely to contain the value of an unknown population parameter. These intervals represent a plausible domain for the parameter given the characteristics of your sample data. Confidence intervals are derived from sample statistics and are calculated using a specified confidence level. [Read more…] about Confidence Intervals: Interpreting, Finding & Formulas
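As an illustration (with made-up measurement data), a 95% CI for a mean can be computed from the t-distribution like this:

```python
from scipy import stats

# Hypothetical sample of part lengths (mm)
sample = [102.1, 99.8, 101.3, 100.7, 98.9, 101.9, 100.2, 99.5, 100.9, 101.1]
n = len(sample)
mean = sum(sample) / n
sem = stats.sem(sample)  # standard error of the mean

# 95% CI using the t-distribution with n - 1 degrees of freedom
low, high = stats.t.interval(0.95, df=n - 1, loc=mean, scale=sem)
print(f"95% CI for the mean: ({low:.2f}, {high:.2f})")
```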
These F-tables provide the critical values for right-tail F-tests. Your F-test results are statistically significant when the test statistic is greater than the critical value. [Read more…] about F-table
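Instead of reading the table, you can also compute any entry directly with scipy; the degrees of freedom here are arbitrary examples:

```python
from scipy.stats import f

# Right-tail critical value for alpha = 0.05 with df1 = 3, df2 = 20
critical = f.ppf(0.95, dfn=3, dfd=20)
print(round(critical, 3))  # about 3.10, matching the table
# An F statistic above this value is significant at the 0.05 level.
```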
A sampling distribution of a statistic is a type of probability distribution created by drawing many random samples of a given size from the same population. These distributions help you understand how a sample statistic varies from sample to sample. [Read more…] about Sampling Distribution
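A short simulation (illustrative, not from the post) shows this in action: the means of repeated samples form their own distribution, centered on the population mean with a spread of about sigma divided by the square root of n:

```python
import random
import statistics

random.seed(42)
# Population with known mean 50 and standard deviation 10
population = [random.gauss(50, 10) for _ in range(100_000)]

# Draw many samples of size 25 and record each sample's mean
sample_means = [
    statistics.fmean(random.sample(population, 25)) for _ in range(2_000)
]

# The means cluster around 50, with spread close to sigma / sqrt(n) = 2
print(round(statistics.fmean(sample_means), 1))
print(round(statistics.stdev(sample_means), 1))
```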
A critical value defines regions in the sampling distribution of a test statistic. These values play a role in both hypothesis tests and confidence intervals. In hypothesis tests, critical values determine whether the results are statistically significant. For confidence intervals, they help calculate the upper and lower limits. [Read more…] about Critical Value
This chi-square table provides the critical values for chi-square (χ²) hypothesis tests. The column and row intersections are the right-tail critical values for a given probability and degrees of freedom. [Read more…] about Chi-Square Table
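Each table entry can be reproduced in code with scipy's chi2.ppf; for instance, the familiar value for alpha = 0.05 and 4 degrees of freedom:

```python
from scipy.stats import chi2

# Right-tail critical value for alpha = 0.05 with 4 degrees of freedom
print(round(chi2.ppf(0.95, df=4), 3))  # 9.488, matching the table entry
```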
A z-table, also known as the standard normal table, provides the area under the curve to the left of a z-score. This area represents the probability that z-values will fall within a region of the standard normal distribution. Use a z-table to find probabilities corresponding to ranges of z-scores and to find p-values for z-tests. [Read more…] about Z-table
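The same lookups can be done in code; scipy's norm.cdf plays the role of the z-table:

```python
from scipy.stats import norm

# Area to the left of z = 1.96, exactly what a z-table reports
print(round(norm.cdf(1.96), 4))  # 0.975
# Probability of falling between -1.96 and 1.96
print(round(norm.cdf(1.96) - norm.cdf(-1.96), 2))  # 0.95
```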
This t-distribution table provides the critical t-values for both one-tailed and two-tailed t-tests, and confidence intervals. Learn how to use this t-table with the information, examples, and illustrations below the table. [Read more…] about T-Distribution Table of Critical Values
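If you prefer code to tables, t.ppf returns the same critical values; remember to split alpha across both tails for a two-tailed test:

```python
from scipy.stats import t

# Two-tailed critical value for alpha = 0.05 with 20 degrees of freedom:
# put alpha/2 in each tail, so look up the 97.5th percentile.
print(round(t.ppf(0.975, df=20), 3))  # 2.086
```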
What is a Test Statistic?
A test statistic assesses how consistent your sample data are with the null hypothesis in a hypothesis test. Test statistic calculations take your sample data and boil them down to a single number that quantifies how much your sample diverges from the null hypothesis. As a test statistic value becomes more extreme, it indicates larger differences between your sample data and the null hypothesis. [Read more…] about Test Statistic
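For a concrete (invented) example, a one-sample t-test boils eight measurements down to a single t value:

```python
from scipy import stats

# Hypothetical sample; null hypothesis: population mean = 100
sample = [104, 98, 103, 107, 101, 99, 105, 102]

t_stat, p_value = stats.ttest_1samp(sample, popmean=100)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# The t statistic counts how many standard errors the sample mean
# lies from the null value; a more extreme t means a smaller p-value.
```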
Use a paired t-test when each subject has a pair of measurements, such as a before and after score. A paired t-test determines whether the mean change for these pairs is significantly different from zero. This test is an inferential statistics procedure because it uses samples to draw conclusions about populations.
Paired t-tests are also known as dependent samples t-tests. The two samples are dependent because they contain the same subjects. Conversely, an independent samples t-test contains different subjects in the two samples. [Read more…] about Paired T Test
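A minimal example with made-up before/after scores, using scipy's ttest_rel:

```python
from scipy import stats

# Hypothetical before/after scores for the same 8 subjects
before = [72, 68, 75, 70, 66, 74, 71, 69]
after = [75, 70, 78, 74, 69, 75, 74, 72]

t_stat, p_value = stats.ttest_rel(before, after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Equivalent to a one-sample t-test on the paired differences vs zero.
```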
Use an independent samples t-test when you want to compare the means of precisely two groups—no more and no less! Typically, you perform this test to determine whether two population means are different. This procedure is an inferential statistical hypothesis test, meaning it uses samples to draw conclusions about populations. The independent samples t-test is also known as the two-sample t-test. [Read more…] about Independent Samples T Test
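Here is a sketch with fabricated data for two groups, using scipy's ttest_ind:

```python
from scipy import stats

# Hypothetical strength measurements from two independent groups
group_a = [23.1, 24.5, 22.8, 25.0, 23.7, 24.2, 23.3, 24.8]
group_b = [21.9, 22.4, 21.5, 23.0, 22.1, 21.8, 22.6, 22.2]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# ttest_ind assumes equal variances by default; pass equal_var=False
# for Welch's t-test when that assumption is doubtful.
```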
The standard error of the mean (SEM) is a bit mysterious. You’ll frequently find it in your statistical output. Is it a measure of variability? How does the standard error of the mean compare to the standard deviation? How do you interpret it?
In this post, I answer all these questions about the standard error of the mean, show how it relates to sample size considerations and statistical significance, and explain the general concept of other types of standard errors. In fact, I view standard errors as the doorway from descriptive statistics to inferential statistics. You’ll see how that works! [Read more…] about Standard Error of the Mean (SEM)
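Before diving in, the core computation is simple: the SEM is just the sample standard deviation divided by the square root of the sample size. A sketch with invented data:

```python
import statistics

sample = [14.2, 15.1, 13.8, 14.9, 15.4, 14.0, 14.6, 15.2]
n = len(sample)

sd = statistics.stdev(sample)  # variability of individual values
sem = sd / n ** 0.5            # variability of the sample mean
print(f"SD = {sd:.3f}, SEM = {sem:.3f}")
# Quadrupling the sample size halves the SEM, while the SD stays put.
```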
Moderna has announced encouraging preliminary results for their COVID-19 vaccine. In this post, I assess the available data and explain what the vaccine’s effectiveness really means. I also look at Moderna’s experimental design and examine how it incorporates statistical procedures and concepts that I discuss throughout my blog posts and books. [Read more…] about Assessing a COVID-19 Vaccination Experiment and Its Results
In my post about how to interpret p-values, I emphasize that p-values are not an error rate. The number one misinterpretation of p-values is that they are the probability of the null hypothesis being correct.
The correct interpretation is that p-values indicate the probability of observing your sample data, or data more extreme, when you assume the null hypothesis is true. If you don’t solidly grasp that correct interpretation, please take a moment to read that post first.
Hopefully, that’s clear.
Unfortunately, one part of that blog post confuses some readers. In that post, I explain how p-values are not a probability, or error rate, of a hypothesis. I then show how that misinterpretation is dangerous because it overstates the evidence against the null hypothesis. [Read more…] about P-Values, Error Rates, and False Positives
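A simulation helps separate the two ideas: when the null is true, the significance level alpha (not the p-value) is the long-run false positive rate. This sketch fabricates many experiments in which no effect exists:

```python
import random
from scipy import stats

random.seed(0)
alpha = 0.05
trials = 2_000
false_positives = 0

# Both groups come from the same population, so the null is true by construction
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

print(false_positives / trials)
# Roughly 0.05: alpha is the false positive rate over many studies;
# an individual p-value is not.
```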
I’m thrilled to release my new ebook! Hypothesis Testing: An Intuitive Guide for Making Data Driven Decisions. [Read more…] about New eBook Release! Hypothesis Testing: An Intuitive Guide