Effect sizes in statistics quantify the differences between group means and the relationships between variables. While analysts often focus on statistical significance using p-values, effect sizes determine the practical importance of the findings. [Read more…] about Effect Sizes in Statistics
Proxy Variables: The Good Twin of Confounding Variables
Proxy variables are easily measurable variables that analysts include in a model in place of a variable that cannot be measured or is difficult to measure. A proxy variable may not be of any great interest itself, but it has a close correlation with the variable of interest. [Read more…] about Proxy Variables: The Good Twin of Confounding Variables
Multiplication Rule for Calculating Probabilities
The multiplication rule in probability allows you to calculate the probability of multiple events occurring together using known probabilities of those events individually. There are two forms of this rule, the specific and general multiplication rules.
In this post, learn about when and how to use both the specific and general multiplication rules. Additionally, I’ll use and explain the standard notation for probabilities throughout, helping you learn how to interpret it. We’ll work through several example problems so you can see them in action. There’s even a bonus problem at the end! [Read more…] about Multiplication Rule for Calculating Probabilities
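As a quick preview of the notation (with my own illustrative numbers, not an example from the full post), the two forms look like this:

$$P(A \text{ and } B) = P(A) \times P(B) \quad \text{(specific rule: independent events)}$$

$$P(A \text{ and } B) = P(A) \times P(B \mid A) \quad \text{(general rule: any events)}$$

For instance, the probability of drawing two aces in a row from a standard deck without replacement uses the general rule: $\frac{4}{52} \times \frac{3}{51} \approx 0.0045$.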
Using Contingency Tables to Calculate Probabilities
Contingency tables are a great way to classify outcomes and calculate different types of probabilities. These tables contain rows and columns that display bivariate frequencies of categorical data. Analysts also refer to contingency tables as crosstabulation (cross tabs), two-way tables, and frequency tables.
Statisticians use contingency tables for a variety of reasons. I love these tables because they both organize your data and allow you to answer a diverse set of questions. In this post, I focus on using them to calculate different types of probabilities. These probabilities include joint, marginal, and conditional probabilities. [Read more…] about Using Contingency Tables to Calculate Probabilities
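As a rough sketch of the idea in R (the counts below are invented for illustration and are not from the post), you can store a contingency table as a matrix and read the three types of probabilities straight off it:

# Hypothetical 2x2 contingency table of gender versus exam result (invented counts)
counts <- matrix(c(30, 20, 10, 40), nrow = 2, byrow = TRUE,
                 dimnames = list(Gender = c("Female", "Male"),
                                 Result = c("Pass", "Fail")))
total <- sum(counts)                                # 100 observations in total
counts["Female", "Pass"] / total                    # joint probability P(Female and Pass) = 0.30
sum(counts["Female", ]) / total                     # marginal probability P(Female) = 0.50
counts["Female", "Pass"] / sum(counts["Female", ])  # conditional probability P(Pass | Female) = 0.60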
Probability Definition and Fundamentals
What is Probability?
The definition of probability is the likelihood of an event happening. Probability theory analyzes the chances of events occurring. You can think of probabilities as being the following:
- The long-term proportion of times an event occurs during a random process.
- The propensity for a particular outcome to occur.
Common terms for describing probabilities include likelihood, chances, and odds. [Read more…] about Probability Definition and Fundamentals
Variance Inflation Factors (VIFs)
Variance Inflation Factors (VIFs) measure the correlation among independent variables in least squares regression models. Statisticians refer to this type of correlation as multicollinearity. Excessive multicollinearity can cause problems for regression models.
In this post, I focus on VIFs and how they detect multicollinearity, why they’re better than pairwise correlations, how to calculate VIFs yourself, and interpreting VIFs. If you need a refresher about the types of problems that multicollinearity causes and how to fix them, read my post: Multicollinearity: Problems, Detection, and Solutions. [Read more…] about Variance Inflation Factors (VIFs)
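As a minimal sketch of the do-it-yourself calculation (using R's built-in mtcars data as a stand-in example, not one from the post), the VIF for a predictor is 1 / (1 − R²), where R² comes from regressing that predictor on the other predictors:

# Model with several correlated predictors (mtcars ships with R)
model <- lm(mpg ~ wt + hp + disp, data = mtcars)
# VIF for wt: regress wt on the other predictors and compute 1 / (1 - R^2)
r2_wt <- summary(lm(wt ~ hp + disp, data = mtcars))$r.squared
1 / (1 - r2_wt)
# The vif() function in the car package reports the same quantity for every predictor
# car::vif(model)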
P-Values, Error Rates, and False Positives
In my post about how to interpret p-values, I emphasize that p-values are not an error rate. The number one misinterpretation of p-values is that they are the probability of the null hypothesis being correct.
The correct interpretation is that p-values indicate the probability of observing your sample data, or more extreme data, when you assume the null hypothesis is true. If you don’t solidly grasp that correct interpretation, please take a moment to read that post first.
Hopefully, that’s clear.
Unfortunately, one part of that blog post confuses some readers. In that post, I explain how p-values are not a probability, or error rate, of a hypothesis. I then show how that misinterpretation is dangerous because it overstates the evidence against the null hypothesis. [Read more…] about P-Values, Error Rates, and False Positives
Coefficient of Variation in Statistics
The coefficient of variation (CV) is a relative measure of variability that indicates the size of a standard deviation in relation to its mean. It is a standardized, unitless measure that allows you to compare variability between disparate groups and characteristics. It is also known as the relative standard deviation (RSD).
In this post, you will learn about the coefficient of variation, how to calculate it, know when it is particularly useful, and when to avoid it. [Read more…] about Coefficient of Variation in Statistics
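As a quick sketch of the calculation (with a made-up vector rather than data from the post):

# Coefficient of variation: standard deviation relative to the mean, often reported as a percentage
x <- c(12, 15, 14, 10, 13, 16, 11)  # hypothetical measurements
sd(x) / mean(x) * 100               # about 17: the standard deviation is roughly 17% of the mean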
Independent and Dependent Samples in Statistics
When comparing groups in your data, you can have either independent or dependent samples. The type of samples in your experimental design impacts sample size requirements, statistical power, the proper analysis, and even your study’s costs. Understanding the implications of each type of sample can help you design a better experiment. [Read more…] about Independent and Dependent Samples in Statistics
Independent and Identically Distributed Data (IID)
Having independent and identically distributed (IID) data is a common assumption for statistical procedures and hypothesis tests. But what does that mouthful of words actually mean? That’s the topic of this post! And, I’ll provide helpful tips for determining whether your data are IID. [Read more…] about Independent and Identically Distributed Data (IID)
Using Moving Averages to Smooth Time Series Data
Moving averages can smooth time series data, reveal underlying trends, and identify components for use in statistical modeling. Smoothing is the process of removing random variations that appear as coarseness in a plot of raw time series data. It reduces the noise to emphasize the signal that can contain trends and cycles. Analysts also refer to the smoothing process as filtering the data. [Read more…] about Using Moving Averages to Smooth Time Series Data
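As a minimal sketch (simulated data, not an example from the post), base R's stats::filter computes a centered moving average that you can overlay on the raw series:

# Simulate a noisy monthly series with an upward trend
set.seed(1)
raw <- ts(0.5 * (1:48) + rnorm(48, sd = 3), frequency = 12)
# Centered 12-term moving average reduces the noise and reveals the trend
smoothed <- stats::filter(raw, filter = rep(1 / 12, 12), sides = 2)
plot(raw, col = "grey", main = "Raw series with 12-term moving average")
lines(smoothed, lwd = 2)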
A Tour of Survival Analysis
Note: this is a guest post by Alexander Moreno, a Computer Science PhD student at the Georgia Institute of Technology. He blogs at www.boostedml.com
Survival analysis is an important subfield of statistics and biostatistics. These methods involve modeling the time to a first event such as death. In this post we give a brief tour of survival analysis. We first describe the motivation for survival analysis, and then describe the hazard and survival functions. We follow this with non-parametric estimation via the Kaplan Meier estimator. Then we describe Cox’s proportional hazard model and after that Aalen’s additive model. Finally, we conclude with a brief discussion.
Why Survival Analysis: Right Censoring
Modeling first event times is important in many applications. This could be time to death for severe health conditions or time to failure of a mechanical system. If one always observed the event time and it was guaranteed to occur, one could model the distribution directly. For instance, in the non-parametric setting, one could use the empirical cumulative distribution function to estimate the probability of death by some time. In the parametric setting one could do non-negative regression.
However, in some cases one might not observe the event time: this is generally called right censoring. In clinical trials with death as the event, this occurs when one of the following happens: 1) participants drop out of the study, 2) the study reaches a pre-determined end time and some participants have survived until the end, or 3) the study ends when a certain number of participants have died. In each case, after the surviving participants have left the study, we don’t know what happens to them. We then have the question:
- How can we model the empirical distribution or do non-negative regression when for some individuals, we only observe a lower bound on their event time?

The above figure illustrates right censoring. For participant 1 we see when they died. Participant 2 dropped out, and we know that they survived until then, but don’t know what happened afterwards. For participant 3, we know that they survived until the pre-determined study end, but again don’t know what happened afterwards.
The Survival Function and the Hazard
Two of the key tools in survival analysis are the survival function and the hazard. The survival function describes the probability of the event not having happened by a time $t$. The hazard describes the instantaneous rate of the first event at any time $t$.

More formally, let $T$ be the event time of interest, such as the death time. Then the survival function is $S(t) = P(T > t)$. We can also note that this is related to the cumulative distribution function $F(t) = P(T \le t)$ via $S(t) = 1 - F(t)$.

For the hazard, the probability of the first event time being in the small interval $[t, t + dt)$, given survival up to $t$, is $P(t \le T < t + dt \mid T \ge t)$. This is illustrated in the following figure.

Rearranging terms and taking limits, we obtain the hazard

$$h(t) = \lim_{dt \to 0} \frac{P(t \le T < t + dt \mid T \ge t)}{dt} = \frac{f(t)}{S(t)},$$

where $f(t)$ is the density function of $T$ and the second equality follows from applying Bayes’ theorem. By rearranging again and solving a differential equation, we can use the hazard to compute the survival function via

$$S(t) = \exp\left(- \int_0^t h(u)\, du \right).$$

The key question then is how to estimate the hazard and/or survival function.
Non-Parametric Estimation with Kaplan Meier
In non-parametric survival analysis, we want to estimate the survival function without covariates, and with censoring. If we didn’t have censoring, we could start with the empirical CDF

$$\hat{F}(t) = \frac{1}{n} \sum_{i=1}^{n} 1\{T_i \le t\}.$$

This equation is a succinct representation of the question: how many people have died by time $t$? The survival function would then be $\hat{S}(t) = 1 - \hat{F}(t)$: how many people are still alive? However, we can’t answer this question as posed when some people are censored by time $t$.

While we don’t necessarily know how many people have survived by an arbitrary time $t$, we do know how many people in the study are still at risk. We can use this instead. Partition the study time into $0 \le t_1 < t_2 < \cdots < t_k$, where each $t_i$ is either an event time or a censoring time for a participant. Assume that participants can only experience the event at the observed event times. Let $n_i$ be the number of people at risk just before time $t_i$. Assuming no one dies at exactly the same time (no ties), we can look at each time someone died. We say that the probability of dying at that specific time is $1/n_i$, and say that the probability of dying at any other time is $0$. We can then say that the probability of surviving at any event time $t_i$, given survival at the previous candidate event times, is $1 - 1/n_i$. The probability of surviving up to a time $t$ is then

$$\hat{S}(t) = \prod_{i:\ t_i \le t,\ t_i \text{ an event time}} \left( 1 - \frac{1}{n_i} \right).$$

We call this [1] the Kaplan Meier estimator. Under mild assumptions, including that participants have independent and identically distributed event times and that censoring and event times are independent, this gives a consistent estimator. The next figure gives an example of the Kaplan Meier estimator for a simple case.

Kaplan Meier R Example
In R we can use the Surv and survfit functions from the survival package to fit a Kaplan Meier model. We can also use ggsurvplot from the survminer package to make plots. Here we will use the ovarian cancer dataset from the survival package. We will stratify based on treatment group assignment.
library(survminer)
library(survival)
# Survival object from follow-up time (futime) and censoring status (fustat)
kaplan_meier <- Surv(time = ovarian[['futime']], event = ovarian[['fustat']])
# Kaplan-Meier fit stratified by treatment group (rx), with log confidence intervals
kaplan_meier_treatment <- survfit(kaplan_meier ~ rx, data = ovarian,
                                  type = 'kaplan-meier', conf.type = 'log')
# Plot the stratified survival curves with confidence intervals
ggsurvplot(kaplan_meier_treatment, conf.int = TRUE)

Semi-Parametric Regression with Cox’s Proportional Hazards Model
Kaplan Meier makes sense when we don’t have covariates, but often we want to model how some covariates affect death risk. For instance, how does one’s weight affect death risk? One way to do this is to assume that covariates have a multiplicative effect on the hazard. This leads us to Cox’s proportional hazards model, which involves the following functional form for the hazard:

$$h(t \mid x) = h_0(t) \exp(x^T \beta).$$

The baseline hazard $h_0(t)$ describes how the average person’s risk evolves over time. The relative risk $\exp(x^T \beta)$ describes how covariates affect the hazard. In particular, a unit increase in a covariate $x_j$ leads to an increase of the hazard by a factor of $\exp(\beta_j)$.

Because of the non-parametric nuisance term $h_0(t)$, it is difficult to maximize the full likelihood for $\beta$ directly. Cox’s insight [2] was that the assignment probabilities given the death times contain most of the information about $\beta$, and the remaining terms contain most of the information about $h_0(t)$. The assignment probabilities give the following partial likelihood

$$L(\beta) = \prod_{i:\ \text{event at } t_i} \frac{\exp(x_i^T \beta)}{\sum_{j \in R(t_i)} \exp(x_j^T \beta)},$$

where $R(t_i)$ is the set of participants still at risk just before time $t_i$. We can then maximize this to get an estimator of $\beta$. In [3, 4] they show that this estimator is consistent and asymptotically normal.
Cox Proportional Hazards R Example
In R, we can use the Surv and coxph functions from the survival package. For the ovarian cancer dataset, we notice from the Kaplan Meier example that the treatment effect does not look proportional. Under a proportional hazards assumption, the curves would follow the same pattern and steadily diverge; instead, they move apart and then move back together. Further, treatment does seem to lead to different survival patterns over shorter time horizons. We should therefore not use it as a covariate, but we can stratify on it. In R we can regress on age and ECOG performance status (ecog.ps), stratifying by treatment.
# Cox model: age and ECOG performance status as covariates, stratified by treatment (rx)
cox_fit <- coxph(Surv(futime, fustat) ~ age + ecog.ps + strata(rx), data = ovarian)
summary(cox_fit)
which gives the following results
Call:
coxph(formula = Surv(futime, fustat) ~ age + ecog.ps + strata(rx),
data = ovarian)
n= 26, number of events= 12
coef exp(coef) se(coef) z Pr(>|z|)
age 0.13853 1.14858 0.04801 2.885 0.00391 **
ecog.ps -0.09670 0.90783 0.62994 -0.154 0.87800
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
exp(coef) exp(-coef) lower .95 upper .95
age 1.1486 0.8706 1.0454 1.262
ecog.ps 0.9078 1.1015 0.2641 3.120
Concordance= 0.819 (se = 0.058 )
Likelihood ratio test= 12.71 on 2 df, p=0.002
Wald test = 8.43 on 2 df, p=0.01
Score (logrank) test = 12.24 on 2 df, p=0.002
This suggests that age has a significant multiplicative effect on the hazard of death: a one-year increase in age multiplies the instantaneous risk by a factor of about 1.15 (exp(0.139)).
Aalen’s Additive Model
Cox regression makes two strong assumptions: 1) that covariate effects are constant over time, and 2) that effects are multiplicative. Aalen’s additive model [5] relaxes the first and replaces the second with the assumption that effects are additive. Here the hazard takes the form

$$h(t \mid x) = \beta_0(t) + \beta_1(t) x_1 + \cdots + \beta_p(t) x_p.$$

As this is a linear model, we can estimate the cumulative regression functions $B_j(t) = \int_0^t \beta_j(s)\, ds$ using a least squares type procedure.
Aalen’s Additive Model R Example
In R we can use the timereg package and the aalen function to estimate cumulative regression functions, which we can also plot.
library(timereg)
data(sTRACE)
# Fit Aalen's additive hazard model; the event indicator here is status == 9
out <- aalen(Surv(time, status == 9) ~ age + sex + diabetes + chf + vf,
             data = sTRACE, max.time = 7, n.sim = 100)
summary(out)
# Plot the six estimated cumulative regression functions in a 2x3 grid
par(mfrow = c(2, 3))
plot(out)
This gives us
Additive Aalen Model
Test for nonparametric terms
Test for non-significant effects
Supremum-test of significance p-value H_0: B(t)=0
(Intercept) 7.29 0.00
age 8.63 0.00
sex 2.95 0.01
diabetes 2.31 0.24
chf 5.30 0.00
vf 2.95 0.03
Test for time invariant effects
Kolmogorov-Smirnov test
(Intercept) 0.57700
age 0.00866
sex 0.11900
diabetes 0.16200
chf 0.12900
vf 0.43500
p-value H_0:constant effect
(Intercept) 0.00
age 0.00
sex 0.18
diabetes 0.43
chf 0.06
vf 0.02
Cramer von Mises test
(Intercept) 0.875000
age 0.000179
sex 0.017700
diabetes 0.041200
chf 0.053500
vf 0.434000
p-value H_0:constant effect
(Intercept) 0.00
age 0.00
sex 0.29
diabetes 0.42
chf 0.02
vf 0.05
Call:
aalen(formula = Surv(time, status == 9) ~ age + sex + diabetes +
chf + vf, data = sTRACE, max.time = 7, n.sim = 100)
The results first test whether the cumulative regression functions are non-zero, and then whether the effects are constant. The plots of the cumulative regression functions are given below.

Discussion
In this post we did a brief tour of several methods in survival analysis. We first described why right censoring requires us to develop new tools. We then described the survival function and the hazard. Next we discussed the non-parametric Kaplan Meier estimator and the semi-parametric Cox regression model. We concluded with Aalen’s additive model.
[1] Kaplan, Edward L., and Paul Meier. “Nonparametric estimation from incomplete observations.” Journal of the American statistical association 53, no. 282 (1958): 457-481.
[2] Cox, David R. “Regression models and life-tables.” In Breakthroughs in statistics, pp. 527-541. Springer, New York, NY, 1992.
[3] Tsiatis, Anastasios A. “A large sample study of Cox’s regression model.” The Annals of Statistics 9, no. 1 (1981): 93-108.
[4] Andersen, Per Kragh, and Richard David Gill. “Cox’s regression model for counting processes: a large sample study.” The annals of statistics (1982): 1100-1120.
[5] Aalen, Odd. “A model for nonparametric regression analysis of counting processes.” In Mathematical statistics and probability theory, pp. 1-25. Springer, New York, NY, 1980.
Time Series Analysis Introduction
Time series analysis tracks characteristics of a process at regular time intervals. It’s a fundamental method for understanding how a metric changes over time and forecasting future values. Analysts use time series methods in a wide variety of contexts. [Read more…] about Time Series Analysis Introduction
Failing to Reject the Null Hypothesis
Failing to reject the null hypothesis is an odd way to state that the results of your hypothesis test are not statistically significant. Why the peculiar phrasing? “Fail to reject” sounds like one of those double negatives that writing classes taught you to avoid. What does it mean exactly? There’s an excellent reason for the odd wording!
In this post, learn what it means when you fail to reject the null hypothesis and why that’s the correct wording. While accepting the null hypothesis sounds more straightforward, it is not statistically correct! [Read more…] about Failing to Reject the Null Hypothesis
Understanding Significance Levels in Statistics
Significance levels in statistics are a crucial component of hypothesis testing. However, unlike other values in your statistical output, the significance level is not something that statistical software calculates. Instead, you choose the significance level. Have you ever wondered why?
In this post, I’ll explain the significance level conceptually, why you choose its value, and how to choose a good value. Statisticians also refer to the significance level as alpha (α). [Read more…] about Understanding Significance Levels in Statistics
Guidelines for Removing and Handling Outliers in Data
Outliers are unusual values in your dataset, and they can distort statistical analyses and violate their assumptions. Unfortunately, all analysts will confront outliers and be forced to make decisions about what to do with them. Given the problems they can cause, you might think that it’s best to remove them from your data. But, that’s not always the case. Removing outliers is legitimate only for specific reasons. [Read more…] about Guidelines for Removing and Handling Outliers in Data
5 Ways to Find Outliers in Your Data
Outliers are data points that are far from other data points. In other words, they’re unusual values in a dataset. Outliers are problematic for many statistical analyses because they can cause tests to either miss significant findings or distort real results.
Unfortunately, there are no strict statistical rules for definitively identifying outliers. Finding outliers depends on subject-area knowledge and an understanding of the data collection process. While there is no solid mathematical definition, there are guidelines and statistical tests you can use to find outlier candidates. [Read more…] about 5 Ways to Find Outliers in Your Data
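As one hedged illustration of such a guideline (the interquartile range rule, applied to made-up numbers rather than data from the post):

# Flag values more than 1.5 interquartile ranges beyond the quartiles
x <- c(5, 7, 8, 6, 9, 7, 6, 35)  # hypothetical data with one suspicious value
q <- quantile(x, c(0.25, 0.75))
iqr <- q[2] - q[1]
x[x < q[1] - 1.5 * iqr | x > q[2] + 1.5 * iqr]  # flags 35 as a candidate, pending subject-area judgment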
Low Power Tests Exaggerate Effect Sizes
If your study has low statistical power, it will exaggerate the effect size. What?!
Statistical power is the ability of a hypothesis test to detect an effect that exists in the population. Clearly, a high-powered study is a good thing just for being able to identify these effects. Low power reduces your chances of discovering real findings. However, many analysts don’t realize that low power also inflates the estimated effect sizes of the significant findings you do detect. Learn more about Statistical Power.
In this post, I show how this unexpected relationship between power and exaggerated effect sizes exists. I’ll also tie it to other issues, such as the bias of effects published in journals and other matters about statistical power. I think this post will be eye-opening and thought provoking! As always, I’ll use many graphs rather than equations. [Read more…] about Low Power Tests Exaggerate Effect Sizes
Revisiting the Monty Hall Problem with Hypothesis Testing
The Monty Hall Problem is where Monty presents you with three doors, one of which contains a prize. He asks you to pick one door, which remains closed. Monty opens one of the other doors that does not have the prize. This process leaves two unopened doors—your original choice and one other. He allows you to switch from your initial choice to the other unopened door. Do you accept the offer?
If you accept his offer to switch doors, you’re twice as likely to win (66% versus 33%) as if you stay with your original choice.
Mind-blowing, right?
The solution to the Monty Hall Problem is tricky and counter-intuitive, and it tripped up many experts back in the 1980s. However, the correct answer is now well established using a variety of methods. It has been proven mathematically, with computer simulations, and with empirical experiments, including on television by both the MythBusters (CONFIRMED!) and James May’s Man Lab. You won’t find any statisticians who disagree with the solution.
In this post, I’ll explore aspects of this problem that have arisen in discussions with some stubborn resisters to the notion that you can increase your chances of winning by switching!
The Monty Hall problem provides a fun way to explore issues that relate to hypothesis testing. I’ve got a lot of fun lined up for this post, including the following!
- Using a computer simulation to play the game 10,000 times (a minimal sketch of such a simulation appears after this list).
- Assessing sampling distributions to compare the 66% hypothesis to another contender.
- Performing a power and sample size analysis to determine the number of times you need to play the Monty Hall game to get an answer.
- Conducting an experiment by playing the game repeatedly myself, recording the results, and using a proportions hypothesis test to draw conclusions! [Read more…] about Revisiting the Monty Hall Problem with Hypothesis Testing
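As a taste of the simulation idea (a minimal sketch of my own, not the code from the full post), here is one way to play the game 10,000 times in R. Because Monty always opens a losing door, switching wins exactly when your first pick was wrong:

# Simulate 10,000 Monty Hall games
set.seed(1)
n_games <- 10000
prize <- sample(1:3, n_games, replace = TRUE)       # door hiding the prize
first_pick <- sample(1:3, n_games, replace = TRUE)  # contestant's initial choice
mean(first_pick == prize)  # staying wins only when the first pick was right: about 1/3
mean(first_pick != prize)  # switching wins whenever the first pick was wrong: about 2/3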
Causation in Statistics: Hill’s Criteria
Causation indicates that an event affects an outcome. Do fatty diets cause heart problems? If you study for a test, does it cause you to get a higher score?
In statistics, causation is a bit tricky. As you’ve no doubt heard, correlation doesn’t necessarily imply causation. An association or correlation between variables simply indicates that the values vary together. It does not necessarily suggest that changes in one variable cause changes in the other variable. Proving causality can be difficult.
If correlation does not prove causation, what statistical test do you use to assess causality? That’s a trick question because no statistical analysis can make that determination. In this post, learn about why you want to determine causation and how to do that. [Read more…] about Causation in Statistics: Hill’s Criteria