What is concurrent validity?

We assess the concurrent validity of a measurement procedure when two different measurement procedures are carried out at the same time. Concurrent validity is established when the scores from a new measurement procedure are directly related to the scores from a well-established measurement procedure for the same construct; that is, there is a consistent relationship between the scores from the two measurement procedures. This gives us confidence that the two measurement procedures are measuring the same thing (i.e., the same construct). Take the following example:

Study #1
Test effectiveness, intellectual ability, and concurrent validity

Let's imagine that we are interested in determining test effectiveness; that is, we want to create a new measurement procedure for intellectual ability, but we are unsure whether it will be as effective as existing, well-established measurement procedures, such as the 11+ entrance exams, Mensa, ACTs (American College Tests), or SATs (Scholastic Aptitude Tests). However, we want to create a new measurement procedure that is much shorter, reducing the demands on students whilst still measuring their intellectual ability.

A sample of students completes the two tests (e.g., the Mensa test and the new measurement procedure), with little, if any, interval between them. We want to know whether the new measurement procedure really measures intellectual ability. If it does, we need to show a strong, consistent relationship between the scores from the new measurement procedure and the scores from the well-established measurement procedure. This is often measured using a correlation.

The scores must differentiate individuals in the same way on both measurement procedures; that is, a student who gets a high score on the Mensa test (i.e., the well-established measurement procedure) should also get a high score on the new measurement procedure. This should be mirrored for students who get medium and low scores (i.e., the relationship between the scores should be consistent). If the relationship is inconsistent or weak, the new measurement procedure does not demonstrate concurrent validity.

What is predictive validity?

Assessing predictive validity involves establishing that the scores from a measurement procedure (e.g., a test or survey) make accurate predictions about the construct they represent (e.g., constructs like intelligence, achievement, burnout, depression, etc.). Such predictions must be made in accordance with theory; that is, theories should tell us how scores from a measurement procedure predict the construct in question. In order to test for predictive validity, the well-established measurement procedure (i.e., the criterion) must be taken after the new measurement procedure. By after, we typically mean quite some time between the two measurements (i.e., weeks, if not months or years). Take the following example:

Study #2
Student admissions, intellectual ability, academic performance, and predictive validity

Universities often use ACT (American College Test) or SAT (Scholastic Aptitude Test) scores to help them with student admissions because there is strong predictive validity between these tests of intellectual ability and academic performance. Here, academic performance is measured in terms of freshman (i.e., first year) GPA (grade point average) scores at university (GPA scores are broadly analogous to UK honours degree classifications; e.g., 2:2, 2:1, 1st class). This is important because if these pre-university tests of intellectual ability (i.e., ACT, SAT, etc.) did not predict academic performance (i.e., GPA) at university, they would be a poor measurement procedure for selecting the right students.

However, let's imagine that we are only interested in finding the brightest students, and we feel that a test of intellectual ability designed specifically for this would be better than using ACT or SAT tests. For the purpose of this example, let's imagine that this advanced test of intellectual ability is a new measurement procedure that is the equivalent of the Mensa test, which is designed to detect the highest levels of intellectual ability. Therefore, a sample of students takes the new test just before they go off to university. After one year, the GPA scores of these students are collected. The aim is to assess whether there is a strong, consistent relationship between the scores from the new measurement procedure (i.e., the intelligence test) and the scores from the criterion measure (i.e., the GPA scores). This is often measured using a correlation. If such a strong, consistent relationship is demonstrated, we can say that the new measurement procedure (i.e., the new intelligence test) has predictive validity.

To test the correlation between two sets of scores, we would recommend that you read the articles on the Pearson correlation coefficient and Spearman's rank-order correlation in the Data Analysis section of Lærd Dissertation, which show you how to run these statistical tests, interpret the output from them, and write up the results.


