
Statistics By Jim

Making statistics intuitive


Using Post Hoc Tests with ANOVA

By Jim Frost 59 Comments

Post hoc tests are an integral part of ANOVA. When you use ANOVA to test the equality of at least three group means, statistically significant results indicate that not all of the group means are equal. However, ANOVA results do not identify which particular differences between pairs of means are significant. Use post hoc tests to explore differences between multiple group means while controlling the experiment-wise error rate.

In this post, I’ll show you what post hoc analyses are and the critical benefits they provide, and help you choose the correct one for your study. Additionally, I’ll show why failing to control the experiment-wise error rate casts severe doubt on your results. [Read more…] about Using Post Hoc Tests with ANOVA
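The arithmetic behind that experiment-wise error rate is worth seeing. As an illustrative sketch (assuming independent comparisons, each tested at a significance level of 0.05), the chance of at least one false positive grows quickly with the number of pairwise comparisons:

```python
# Sketch: how the experiment-wise (familywise) error rate grows with the
# number of comparisons, assuming independent tests at alpha = 0.05.

def familywise_error_rate(num_comparisons, alpha=0.05):
    """P(at least one false positive) = 1 - (1 - alpha)^k."""
    return 1 - (1 - alpha) ** num_comparisons

# Six groups yield 15 pairwise comparisons.
for k in (1, 3, 6, 15):
    print(f"{k:>2} comparisons -> error rate {familywise_error_rate(k):.3f}")
```

With six groups and their 15 pairwise comparisons, the chance of at least one false positive exceeds 50%, which is exactly why post hoc tests control the error rate across the whole family of comparisons.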

Filed Under: ANOVA Tagged With: analysis example, choosing analysis, conceptual, graphs, interpreting results

When Can I Use One-Tailed Hypothesis Tests?

By Jim Frost 13 Comments

One-tailed hypothesis tests offer the promise of more statistical power compared to an equivalent two-tailed design. While there is some debate about when you can use a one-tailed test, the consensus among statisticians is that you should use two-tailed tests unless you have concrete reasons for using a one-tailed test.

In this post, I discuss when you should and should not use one-tailed tests. I’ll cover the different schools of thought and offer my own opinion. [Read more…] about When Can I Use One-Tailed Hypothesis Tests?

Filed Under: Hypothesis Testing Tagged With: assumptions, conceptual

One-Tailed and Two-Tailed Hypothesis Tests Explained

By Jim Frost 49 Comments

Choosing whether to perform a one-tailed or a two-tailed hypothesis test is one of the methodology decisions you might need to make for your statistical analysis. This choice can have critical implications for the types of effects it can detect, the statistical power of the test, and potential errors.

In this post, you’ll learn about the differences between one-tailed and two-tailed hypothesis tests and their advantages and disadvantages. I include examples of both types of statistical tests. In my next post, I cover the decision between one- and two-tailed tests in more detail.
[Read more…] about One-Tailed and Two-Tailed Hypothesis Tests Explained

Filed Under: Hypothesis Testing Tagged With: analysis example, conceptual, interpreting results

Central Limit Theorem Explained

By Jim Frost 52 Comments

The central limit theorem in statistics states that, given a sufficiently large sample size, the sampling distribution of the mean for a variable will approximate a normal distribution regardless of that variable’s distribution in the population.

Unpacking the meaning from that complex definition can be difficult. That’s the topic for this post! I’ll walk you through the various aspects of the central limit theorem (CLT) definition, and show you why it is vital in statistics. [Read more…] about Central Limit Theorem Explained
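To see the theorem in action, here is a minimal simulation sketch using only Python's standard library: it draws repeated samples from a skewed (exponential) population, yet the sample means still cluster around the population mean with a spread close to the theoretical standard error.

```python
# Sketch: simulating the central limit theorem with a skewed population.
# Draw many samples from an exponential distribution (decidedly non-normal)
# and watch the sample means cluster around the population mean.
import random
import statistics

random.seed(42)  # reproducible illustration

population_mean = 1.0  # exponential with rate 1 has mean 1
sample_size = 40
num_samples = 2000

sample_means = [
    statistics.fmean(random.expovariate(1.0) for _ in range(sample_size))
    for _ in range(num_samples)
]

# By the CLT, the means should center near 1.0 with a standard error of
# roughly 1 / sqrt(40), about 0.158.
print(f"mean of sample means: {statistics.fmean(sample_means):.3f}")
print(f"std dev of sample means: {statistics.stdev(sample_means):.3f}")
```

Plotting `sample_means` as a histogram would show the familiar bell shape emerging, even though the underlying population is strongly right-skewed.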

Filed Under: Basics Tagged With: assumptions, conceptual, distributions, graphs

Introduction to Bootstrapping in Statistics with an Example

By Jim Frost 74 Comments

Bootstrapping is a statistical procedure that resamples a single dataset to create many simulated samples. This process allows you to calculate standard errors, construct confidence intervals, and perform hypothesis testing for numerous types of sample statistics. Bootstrap methods are alternative approaches to traditional hypothesis testing and are notable for being easier to understand and valid for more conditions.

In this blog post, I explain bootstrapping basics, compare bootstrapping to conventional statistical methods, and explain when it can be the better method. Additionally, I’ll work through an example using real data to create bootstrapped confidence intervals. [Read more…] about Introduction to Bootstrapping in Statistics with an Example
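As a small illustration of the resampling idea (the dataset below is made up for demonstration), here is a percentile bootstrap confidence interval for a mean, built with only Python's standard library:

```python
# Sketch: a percentile bootstrap confidence interval for a mean.
import random
import statistics

random.seed(0)  # reproducible illustration

data = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.8, 4.1, 5.3]

num_resamples = 5000
boot_means = []
for _ in range(num_resamples):
    # Resample the dataset with replacement, same size as the original.
    resample = random.choices(data, k=len(data))
    boot_means.append(statistics.fmean(resample))

# The 2.5th and 97.5th percentiles of the bootstrap distribution
# form the 95% percentile confidence interval.
boot_means.sort()
lower = boot_means[int(0.025 * num_resamples)]
upper = boot_means[int(0.975 * num_resamples)]
print(f"sample mean: {statistics.fmean(data):.2f}")
print(f"95% bootstrap CI: ({lower:.2f}, {upper:.2f})")
```

No normality assumption is needed here; the interval comes straight from the empirical distribution of resampled means.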

Filed Under: Hypothesis Testing Tagged With: analysis example, assumptions, choosing analysis, conceptual, distributions, graphs, interpreting results

Confounding Variables Can Bias Your Results

By Jim Frost 57 Comments

Omitted variable bias occurs when a regression model leaves out relevant independent variables, which are known as confounding variables. This condition forces the model to attribute the effects of omitted variables to variables that are in the model, which biases the coefficient estimates. [Read more…] about Confounding Variables Can Bias Your Results

Filed Under: Regression Tagged With: assumptions, conceptual

Sample Statistics Are Always Wrong (to Some Extent)!

By Jim Frost 11 Comments

Here’s some shocking information for you—sample statistics are always wrong! When you use samples to estimate the properties of populations, you never obtain the correct values exactly. Don’t worry. I’ll help you navigate this issue using a simple statistical tool! [Read more…] about Sample Statistics Are Always Wrong (to Some Extent)!

Filed Under: Basics Tagged With: conceptual

Populations, Parameters, and Samples in Inferential Statistics

By Jim Frost 18 Comments

Inferential statistics lets you draw conclusions about populations by using small samples. Consequently, inferential statistics provide enormous benefits because typically you can’t measure an entire population.

However, to gain these benefits, you must understand the relationship between populations, subpopulations, population parameters, samples, and sample statistics.

In this blog post, I discuss these concepts, and how to obtain representative samples using random sampling.

Related post: Difference between Descriptive and Inferential Statistics
[Read more…] about Populations, Parameters, and Samples in Inferential Statistics

Filed Under: Basics Tagged With: conceptual

Types of Errors in Hypothesis Testing

By Jim Frost 4 Comments

Hypothesis tests use sample data to make inferences about the properties of a population. You gain tremendous benefits by working with random samples because it is usually impossible to measure the entire population.

However, there are tradeoffs when you use samples. The samples we use are typically a minuscule percentage of the entire population. Consequently, they occasionally misrepresent the population severely enough to cause hypothesis tests to make errors.

In this blog post, you will learn about the two types of errors in hypothesis testing, their causes, and how to manage them. [Read more…] about Types of Errors in Hypothesis Testing
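A quick simulation sketch makes Type I errors concrete: when both samples come from the same population, every statistically significant result is a false positive, and at a significance level of 0.05 roughly 5% of tests should be significant. The z-test with a known standard deviation below is a simplification for illustration:

```python
# Sketch: simulating the Type I error rate. Both groups are drawn from the
# SAME population, so any "significant" result is a false positive.
import random
from math import sqrt

random.seed(7)  # reproducible illustration

n, sigma, trials = 30, 1.0, 4000
critical_z = 1.96  # two-sided test at the 5% level
false_positives = 0

for _ in range(trials):
    a = [random.gauss(0, sigma) for _ in range(n)]
    b = [random.gauss(0, sigma) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    z = diff / (sigma * sqrt(2 / n))  # standard error of the difference
    if abs(z) > critical_z:
        false_positives += 1

print(f"observed Type I error rate: {false_positives / trials:.3f}")
```

The observed rate lands near the 0.05 we chose: the significance level is precisely the Type I error rate we agree to tolerate.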

Filed Under: Hypothesis Testing Tagged With: conceptual

Practical vs. Statistical Significance

By Jim Frost 22 Comments

You’ve just performed a hypothesis test and your results are statistically significant. Hurray! These results are important, right? Not so fast. Statistical significance does not necessarily mean that the results are practically significant in a real-world sense of importance.

In this blog post, I’ll talk about the differences between practical significance and statistical significance, and how to determine if your results are meaningful in the real world.
[Read more…] about Practical vs. Statistical Significance

Filed Under: Hypothesis Testing Tagged With: conceptual, interpreting results

Normal Distribution in Statistics

By Jim Frost 143 Comments

The normal distribution is the most important probability distribution in statistics because it fits many natural phenomena. For example, heights, blood pressure, measurement error, and IQ scores follow the normal distribution. It is also known as the Gaussian distribution and the bell curve.

The normal distribution is a probability function that describes how the values of a variable are distributed. It is a symmetric distribution where most of the observations cluster around the central peak and the probabilities for values further away from the mean taper off equally in both directions. Extreme values in both tails of the distribution are similarly unlikely.

In this blog post, you’ll learn how to use the normal distribution, about its parameters, and how to calculate Z-scores to standardize your data and find probabilities. [Read more…] about Normal Distribution in Statistics
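As a small preview of the Z-score idea, here is a sketch using the standard library's `NormalDist`; the mean of 100 and standard deviation of 15 are the conventional values for IQ scores:

```python
# Sketch: standardizing a value with a Z-score and finding its cumulative
# probability under a normal distribution.
from statistics import NormalDist

mean, std_dev = 100, 15  # conventional IQ parameters
value = 130

z = (value - mean) / std_dev  # Z-score: distance from the mean in SDs
iq = NormalDist(mu=mean, sigma=std_dev)

print(f"Z-score: {z:.1f}")                         # 2.0
print(f"P(X <= {value}): {iq.cdf(value):.4f}")     # ~0.9772
print(f"P(X > {value}): {1 - iq.cdf(value):.4f}")  # ~0.0228
```

An IQ of 130 sits two standard deviations above the mean, so only about 2.3% of the distribution lies beyond it.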

Filed Under: Basics Tagged With: conceptual, distributions, graphs, probability

Understanding Probability Distributions

By Jim Frost 62 Comments

A probability distribution is a function that describes the likelihood of obtaining the possible values that a random variable can assume. In other words, the values of the variable vary based on the underlying probability distribution.

Suppose you draw a random sample and measure the heights of the subjects. As you measure heights, you can create a distribution of heights. This type of distribution is useful when you need to know which outcomes are most likely, the spread of potential values, and the likelihood of different results.

In this blog post, you’ll learn about probability distributions for both discrete and continuous variables. I’ll show you how they work and examples of how to use them. [Read more…] about Understanding Probability Distributions
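For the discrete case, a fair six-sided die makes a minimal sketch: each outcome has probability 1/6, and the probabilities across all possible values must sum to 1.

```python
# Sketch: a discrete probability distribution for a fair six-sided die.
from fractions import Fraction

die = {face: Fraction(1, 6) for face in range(1, 7)}

total = sum(die.values())                           # must equal 1
p_at_least_five = die[5] + die[6]                   # P(X >= 5)
expected_value = sum(face * p for face, p in die.items())

print(f"total probability: {total}")        # 1
print(f"P(X >= 5): {p_at_least_five}")      # 1/3
print(f"expected value: {expected_value}")  # 7/2
```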

Filed Under: Basics Tagged With: conceptual, data types, distributions, graphs, interpreting results, probability

Interpreting Correlation Coefficients

By Jim Frost 73 Comments

A correlation between variables indicates that as one variable changes in value, the other variable tends to change in a specific direction. Understanding that relationship is useful because we can use the value of one variable to predict the value of the other variable. For example, height and weight are correlated—as height increases, weight also tends to increase. Consequently, if we observe an individual who is unusually tall, we can predict that their weight is also above average. [Read more…] about Interpreting Correlation Coefficients
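As a sketch of how the correlation coefficient is computed (the height and weight pairs below are made up for illustration), Pearson's r is the covariance of the two variables scaled by their standard deviations:

```python
# Sketch: computing Pearson's correlation coefficient from scratch.
from math import sqrt

def pearson_r(xs, ys):
    """Covariance of xs and ys divided by the product of their SDs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Made-up height/weight pairs with a strong positive relationship.
heights_cm = [155, 160, 165, 170, 175, 180, 185]
weights_kg = [52, 58, 62, 68, 73, 80, 86]

print(f"Pearson r: {pearson_r(heights_cm, weights_kg):.3f}")
```

A value near +1 reflects the near-perfect linear trend in this toy data; real height/weight data would show a weaker, but still clearly positive, correlation.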

Filed Under: Basics Tagged With: conceptual, graphs, interpreting results

Estimating a Good Sample Size for Your Study Using Power Analysis

By Jim Frost 43 Comments

Determining a good sample size for a study is always an important issue. After all, using the wrong sample size can doom your study from the start. Fortunately, power analysis can find the answer for you. Power analysis combines statistical analysis, subject-area knowledge, and your requirements to help you derive the optimal sample size for your study.

Statistical power in a hypothesis test is the probability that the test will detect an effect that actually exists. As you’ll see in this post, both under-powered and over-powered studies are problematic. Let’s learn how to find a good sample size for your study! [Read more…] about Estimating a Good Sample Size for Your Study Using Power Analysis
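As a rough sketch of the calculation involved, the normal-approximation formula for comparing two means gives a per-group sample size from the effect size, significance level, and desired power. Dedicated power-analysis software refines this using the t distribution, so treat the numbers as approximate:

```python
# Sketch: normal-approximation sample size for a two-sample comparison
# of means, given Cohen's d, alpha, and desired power.
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """n per group = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2, rounded up."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A medium effect (d = 0.5) at 80% power and alpha = 0.05.
print(sample_size_per_group(0.5))  # 63 per group under this approximation
```

Note how the required n scales with the inverse square of the effect size: halving the effect you want to detect roughly quadruples the sample size.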

Filed Under: Hypothesis Testing Tagged With: analysis example, conceptual, graphs, interpreting results

Measures of Variability: Range, Interquartile Range, Variance, and Standard Deviation

By Jim Frost 58 Comments

A measure of variability is a summary statistic that represents the amount of dispersion in a dataset. How spread out are the values? While a measure of central tendency describes the typical value, measures of variability define how far away the data points tend to fall from the center. We talk about variability in the context of a distribution of values. A low dispersion indicates that the data points tend to be clustered tightly around the center. High dispersion signifies that they tend to fall further away.

In statistics, variability, dispersion, and spread are synonyms that denote the width of the distribution. Just as there are multiple measures of central tendency, there are several measures of variability. In this blog post, you’ll learn why understanding the variability of your data is critical. Then, I explore the most common measures of variability—the range, interquartile range, variance, and standard deviation. I’ll help you determine which one is best for your data. [Read more…] about Measures of Variability: Range, Interquartile Range, Variance, and Standard Deviation
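Here is a minimal sketch computing all four measures with Python's standard library on a small made-up dataset:

```python
# Sketch: the four measures of variability on an illustrative dataset.
import statistics

data = [4, 7, 8, 10, 12, 14, 15, 18, 21, 25]

value_range = max(data) - min(data)
q1, q2, q3 = statistics.quantiles(data, n=4)  # quartiles (exclusive method)
iqr = q3 - q1                                 # spread of the middle 50%
variance = statistics.variance(data)          # sample variance (n - 1 divisor)
std_dev = statistics.stdev(data)              # square root of the variance

print(f"range: {value_range}")
print(f"IQR: {iqr}")
print(f"variance: {variance:.2f}")
print(f"standard deviation: {std_dev:.2f}")
```

The range uses only the two most extreme points, while the IQR and standard deviation summarize the spread of the bulk of the data, which is why they are usually the more informative choices.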

Filed Under: Basics Tagged With: conceptual, distributions, graphs

Measures of Central Tendency: Mean, Median, and Mode

By Jim Frost 101 Comments

A measure of central tendency is a summary statistic that represents the center point or typical value of a dataset. These measures indicate where most values in a distribution fall and are also referred to as the central location of a distribution. You can think of it as the tendency of data to cluster around a middle value. In statistics, the three most common measures of central tendency are the mean, median, and mode. Each of these measures calculates the location of the central point using a different method.

Choosing the best measure of central tendency depends on the type of data you have. In this post, I explore these measures of central tendency, show you how to calculate them, and how to determine which one is best for your data. [Read more…] about Measures of Central Tendency: Mean, Median, and Mode
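A minimal sketch with Python's standard library shows all three measures on a made-up, deliberately skewed dataset; note how the outlier pulls the mean but not the median:

```python
# Sketch: mean, median, and mode on an illustrative right-skewed dataset.
import statistics

data = [2, 3, 3, 4, 5, 5, 5, 6, 7, 20]

mean = statistics.fmean(data)     # sensitive to the outlier (20)
median = statistics.median(data)  # middle value; robust to the outlier
mode = statistics.mode(data)      # most frequent value

print(f"mean: {mean}")      # 6.0
print(f"median: {median}")  # 5.0
print(f"mode: {mode}")      # 5
```

The single extreme value drags the mean a full unit above the median, a small demonstration of why the median is often preferred for skewed data.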

Filed Under: Basics Tagged With: conceptual, distributions, graphs

Difference between Descriptive and Inferential Statistics

By Jim Frost 85 Comments

Descriptive and inferential statistics are two broad categories in the field of statistics. In this blog post, I show you how both types of statistics are important for different purposes. Interestingly, some of the statistical measures are similar, but the goals and methodologies are very different. [Read more…] about Difference between Descriptive and Inferential Statistics

Filed Under: Basics Tagged With: conceptual

Learn How Anecdotal Evidence Can Trick You!

By Jim Frost 1 Comment

Anecdotal evidence is a story told by individuals. It comes in many forms that can range from product testimonials to word of mouth. It’s often testimony, or a short account, about the truth or effectiveness of a claim. Typically, anecdotal evidence focuses on individual results, is driven by emotion, and is presented by individuals who are not subject area experts. [Read more…] about Learn How Anecdotal Evidence Can Trick You!

Filed Under: Basics Tagged With: conceptual

The Importance of Statistics

By Jim Frost 38 Comments

The field of statistics is the science of learning from data. Statistical knowledge helps you use the proper methods to collect the data, employ the correct analyses, and effectively present the results. Statistics is a crucial process behind how we make discoveries in science, make decisions based on data, and make predictions. Statistics allows you to understand a subject much more deeply. [Read more…] about The Importance of Statistics

Filed Under: Basics Tagged With: conceptual

Statistical Hypothesis Testing Overview

By Jim Frost 47 Comments

In this blog post, I explain why you need to use statistical hypothesis testing and help you navigate the essential terminology. Hypothesis testing is a crucial procedure to perform when you want to make inferences about a population using a random sample. These inferences include estimating population properties such as the mean, differences between means, proportions, and the relationships between variables.

[Read more…] about Statistical Hypothesis Testing Overview

Filed Under: Hypothesis Testing Tagged With: conceptual


    Copyright © 2021 · Jim Frost · Privacy Policy