What is Predictive Validity?
Predictive validity is the degree to which a test score or construct scale predicts a criterion variable measuring a future outcome, behavior, or performance. Evaluating predictive validity involves assessing the correlation between the pre-test score and the subsequent criterion outcome.
Researchers frequently assess predictive validity in psychology, job performance, and academic settings.

Examples of predictive validity include the following:
- A psychological personality inventory that predicts future behaviors.
- A survey of risk factors that predicts the chance of developing a disease.
- A pre-hire assessment that predicts job performance after a year.
- SAT and GRE scores that predict future grade point averages (GPAs).
In all these predictive validity examples, you have a preliminary assessment followed by a completely independent assessment that uses a different measurement system (e.g., SAT scores → GPAs).
Additionally, each example focuses on a very specific criterion the study defines. For example, the researchers might correlate SAT scores with college freshman GPA. For job performance, HR needs to devise a meaningful performance measure they can correlate with the pre-hire test.
The designers of the initial assessment typically intend to use its predictive ability to improve decision-making about things such as health choices, hiring decisions, and student selection. Consequently, evaluating its effectiveness is essential. For example, if you’re developing a survey of heart disease risk factors to help individuals determine and reduce their risk, it’s crucial that the instrument actually predicts heart disease outcomes.
If the assessment instrument has low predictive validity, it can lead people to make the wrong decisions.
Predictive validity is one of three subtypes of criterion validity. Criterion validity evaluates how well a test score correlates with an outcome measured in the past (retrospective), present (concurrent), or future (predictive).
Learn more about Validity in Research: Types and Examples, Criterion Validity, and Concurrent Validity.
Evaluating Predictive Validity
Evaluating predictive validity involves a pre-test that investigators measure before the outcome occurs. The outcome itself is measured at a later point using a different assessment process than the pre-test. Then analysts find the correlation between the two measures.
A test has predictive validity when a large correlation exists between test scores and the criterion variable. In a general sense, higher absolute correlations are better. A near zero correlation indicates that the assessment instrument does not predict the outcome—it has poor predictive validity.
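As a quick sketch of this evaluation, the following Python snippet computes the Pearson correlation between hypothetical pre-test scores and a criterion measured later, along with the proportion of criterion variance the test explains (r²). The data values are made up for illustration:

```python
import numpy as np

# Hypothetical data: pre-hire test scores for ten employees and
# their job performance ratings one year later (made-up values).
pre_test = np.array([62, 71, 55, 80, 68, 74, 59, 85, 66, 77])
performance = np.array([3.1, 3.6, 2.8, 4.2, 3.0, 3.9, 2.9, 4.5, 3.3, 4.0])

# Pearson correlation between the pre-test and the criterion variable.
r = np.corrcoef(pre_test, performance)[0, 1]

# r squared: the proportion of criterion variance the pre-test explains.
r_squared = r ** 2

print(f"r = {r:.2f}, r-squared = {r_squared:.2f}")
```

A large absolute value of r here would suggest good predictive validity; a near-zero value would suggest the pre-test does not predict the outcome.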
The correlation can be strongly positive or strongly negative. Some sources state that only strong positive correlations are acceptable, but strong negative correlations are also predictive. In practice, however, positive correlations are more common.
Learn more about Interpreting Correlation Coefficients.
The question you’re probably asking is, how strong does the correlation need to be?
Subject-area knowledge provides the answer. Compare the correlation magnitude to similar studies and incorporate contextual information to determine whether it represents satisfactory predictive validity. In the next section, I cover an example using SAT scores showing how to do that.
Be aware that many social science and psychological predictive validity assessments produce low correlations—frequently r < 0.5, which accounts for less than 25% of the variance (R² = r²). Predicting human behavior is difficult! Consequently, a single test often produces lackluster results when it tries to predict something as complex and multifaceted as job performance or freshman GPA.
Finally, consider how a selection process might restrict a sample’s range of test scores. Suppose a selection process chooses people only with SAT or pre-hire test scores greater than a specific value. In that case, your sample will be more homogenous than the general population. Range restriction reduces the correlation with the criterion, making predictive validity appear worse than it actually is.
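To see the range-restriction effect concretely, here is a small simulation (all numbers are hypothetical). The true population correlation between test scores and the criterion is set to about 0.5, but when we observe the criterion only for applicants above a cutoff score, the observed correlation shrinks:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate applicant test scores and later performance with a
# true population correlation of about 0.5 (hypothetical values).
n = 10_000
test = rng.normal(100, 15, n)
z = (test - 100) / 15  # standardized test scores
performance = 0.5 * z + rng.normal(0, np.sqrt(1 - 0.5 ** 2), n)

# Correlation in the full, unrestricted applicant pool.
r_full = np.corrcoef(test, performance)[0, 1]

# Range restriction: only applicants scoring above 110 are selected,
# so the criterion is observed only for this homogeneous group.
hired = test > 110
r_restricted = np.corrcoef(test[hired], performance[hired])[0, 1]

print(f"full-range r = {r_full:.2f}, restricted r = {r_restricted:.2f}")
```

The restricted sample yields a noticeably smaller correlation even though the test's true predictive power is unchanged, which is why range restriction makes predictive validity appear worse than it actually is.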
Predictive Validity Example—SAT Scores
Let’s assess the predictive validity of using SAT scores to predict first-year college GPAs. The SAT is the pre-test, and freshman GPAs are the criterion variable. College admissions offices use SAT scores as part of the selection process.
Studies consistently find that SAT scores by themselves have a correlation of r ≈ 0.50 with first-year college GPAs, accounting for only about 25% of the variability in GPAs. However, researchers consider SATs to have high predictive validity because of the context.
Consider that SATs are taken on a single day during high school and attempt to predict first-year GPAs in an entirely different college setting. This single test taken at one point in time predicts freshman GPAs as effectively as four years of cumulative high school GPAs!