What is Concurrent Validity?
Concurrent validity is the degree to which assessment scores correlate with a criterion variable when researchers measure both variables at approximately the same time (i.e., concurrently). This method validates an assessment instrument by comparing its scores to a previously validated test or variable.
The assessment is a psychological inventory, measurement instrument, or test that measures a latent construct. Because you can’t observe the construct directly, you can’t simply look at the scores and determine whether they accurately reflect the intended construct.
The criterion is another test or a variable that quantifies an outcome, behavior, or performance that is an observable manifestation of the construct.
Concurrent validity evaluates whether the construct and criterion correlate as theory states. Do the test scores correlate with measurable, real-world results in a pattern consistent with theory? For example, does a job interview test correlate with job performance? If so, it provides evidence that the assessment measures what it was designed to measure.
Researchers frequently assess concurrent validity in education, sociology, and psychology.
Examples of concurrent validity include the following:
- Aggressive tendency inventory scores that correlate with currently observed aggressive behaviors.
- Student evaluations of teachers that correlate with a professional assessment.
- SAT Math scores that correlate with a new math assessment.
Typically, the designers of an assessment develop it to provide essential diagnostic information and support decision-makers. If the test does not measure the expected construct (i.e., it is invalid), it can mislead those decision-makers. Consequently, assessing concurrent validity is crucial.
Concurrent validity is one of three subtypes of criterion validity. Criterion validity evaluates how well a test score correlates with a criterion measured in the future (predictive), present (concurrent), or past (retrospective).
Learn more about Validity in Research: Types and Examples, Criterion Validity, and Predictive Validity
Evaluating Concurrent Validity
Assessing concurrent validity requires that an accepted standard of comparison exists. You need a documented measure that correlates with the construct in a known manner. If such a validation variable does not exist, you can’t assess concurrent validity. Remember that we’re validating a new assessment by correlating it to a previously validated measure (the criterion).
If the test scores and the criterion correlate as theory predicts, it suggests that the instrument truly quantifies the construct it was designed to measure. That gets to the core of validity’s definition.
Conversely, if there is no correlation, or the correlation has the wrong sign, it suggests that the assessment does not measure the expected construct and is therefore invalid.
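As a minimal sketch of this evaluation, the following Python snippet correlates hypothetical scores from a new assessment with scores on a previously validated criterion measured at the same time. The score values and variable names are illustrative assumptions, not data from a real study.

```python
# Minimal sketch: correlating new assessment scores with a previously
# validated criterion measured concurrently. All values are hypothetical.
from scipy.stats import pearsonr

# Scores from the new assessment (one value per participant)
new_test_scores = [72, 85, 90, 65, 78, 88, 70, 95, 60, 82]

# Scores on the previously validated criterion, collected at the same time
criterion_scores = [70, 80, 92, 68, 75, 85, 66, 97, 58, 84]

# Pearson correlation between the new test and the criterion
r, p_value = pearsonr(new_test_scores, criterion_scores)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")

# A strong correlation with the theoretically expected sign supports
# concurrent validity; a near-zero or wrong-sign correlation does not.
```

Pearson’s r assumes roughly continuous scores with a linear relationship; for an ordinal criterion, a rank-based coefficient such as Spearman’s rho may be more appropriate.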
Learn more about Interpreting Correlation Coefficients.
Concurrent Validity Examples
In some cases, researchers assess concurrent validity by administering two tests simultaneously. They want to determine whether a new test correlates with a previously validated test. Frequently, they compare two tests to see if they can replace an old test with a new one. The new one might be easier, cheaper, or faster to implement, but they must ensure it is valid.
For example, suppose a school pays external specialists to evaluate its teachers. Because the administrators are considering a less expensive process, they ask the students to assess the teachers at the same time as the specialists. If the student evaluations correlate highly with the professional assessments, the new process exhibits concurrent validity.
Other researchers might want to assess concurrent validity to determine whether an assessment is an effective diagnostic tool.
For example, researchers compare reading assessment scores to observed reading skills in the classroom. If the assessment scores have a high correlation with currently observed reading skills, the test is valid. Educators can use it to identify students who require an intervention without needing to observe all students in person.
In other cases, researchers evaluate concurrent validity when they’re ultimately interested in predicting future outcomes.
For example, HR analysts might administer a potential pre-hire test to current employees and correlate those scores with their current performance evaluations. Ultimately, HR wants the pre-hire test to predict the future performance of new hires and use it to make hiring decisions. However, assessing concurrent validity allows them to accelerate the development and validation process. After all, if the test is not valid for current employees, it probably won’t predict the performance of new hires. They can get back to the drawing board more quickly!
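As a rough sketch of this scenario, assuming hypothetical data and column names (test_score, performance_rating), the analysts might compute a rank correlation between the proposed pre-hire test and current performance ratings, since evaluations are often ordinal:

```python
# Sketch of the HR scenario above using hypothetical data.
# Column names and values are illustrative assumptions.
import pandas as pd
from scipy.stats import spearmanr

employees = pd.DataFrame({
    # Proposed pre-hire test administered to current employees
    "test_score": [55, 67, 81, 49, 73, 90, 62, 77, 58, 85],
    # Their current performance evaluations (ordinal 1-5 ratings)
    "performance_rating": [2, 3, 4, 2, 4, 5, 3, 4, 2, 5],
})

# Spearman's rank correlation suits ordinal performance ratings
rho, p_value = spearmanr(employees["test_score"],
                         employees["performance_rating"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")

# A substantial positive correlation supports moving forward with the
# test, pending predictive validation with actual new hires.
```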