Convergent validity and concurrent validity both indicate how well a test score corresponds to another variable.
However, convergent validity indicates how well one measure corresponds to other measures of the same or similar constructs. These measures do not have to be obtained at the same time.
Concurrent validity instead assesses how well a measure aligns with a benchmark, or “gold standard,” which can be a ground truth or another validated measure. Both measurements should be taken at the same time.
Continue reading: What is the difference between concurrent validity and convergent validity?
Convergent validity and discriminant validity (or divergent validity) are both forms of construct validity. They are both used to determine whether a test is measuring the thing it’s supposed to.
However, each form of validity tells you something slightly different about a test:
- Convergent validity indicates whether the results of a test correspond to other measures of a similar construct. In theory, there should be a high correlation between two tests that measure the same thing.
- Discriminant validity instead indicates whether a test differs from measures of unrelated constructs. There should be a low correlation between two tests that measure different things.
If a test is measuring what it is supposed to, it should correspond to other tests that measure the same thing while differing from tests that measure other things. To assess these two qualities, you must determine both convergent and discriminant validity.
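In practice, both forms of evidence are often summarized with correlation coefficients. The sketch below uses made-up scores (every scale name and number is hypothetical) to show the expected pattern: a high correlation with a measure of the same construct and a near-zero correlation with a measure of a different construct.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical scores for five participants
new_anxiety_scale   = [12, 18, 9, 22, 15]   # the test being validated
established_anxiety = [14, 19, 10, 24, 16]  # same construct
life_satisfaction   = [26, 23, 17, 17, 17]  # different construct

r_convergent   = pearson_r(new_anxiety_scale, established_anxiety)
r_discriminant = pearson_r(new_anxiety_scale, life_satisfaction)

print(f"Convergent r:   {r_convergent:.2f}")   # high: same construct
print(f"Discriminant r: {r_discriminant:.2f}") # low: different construct
```

With these toy numbers, the convergent correlation is close to 1 while the discriminant correlation is close to 0, which is the pattern that supports construct validity.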
Continue reading: Why are convergent and discriminant validity often evaluated together?
Construct validity evaluates how well a test reflects the concept it’s designed to measure.
Criterion validity captures how well a test correlates with another “gold standard” measure or outcome of the same construct.
Although both construct validity and criterion validity reflect the validity of a measure, they are not the same. Construct validity is generally considered the overarching concern of measurement validity; criterion validity can therefore be considered a form of evidence for construct validity.
Continue reading: What is the difference between construct and criterion validity?
Construct validity assesses how well a test reflects the phenomenon it’s supposed to measure. Construct validity cannot be directly measured; instead, you must gather evidence in favor of it.
This evidence comes in the form of other types of validity, including face validity, content validity, criterion validity, convergent validity, and discriminant validity. The stronger the evidence across these types, the more confident you can be that you are measuring what you intended to measure.
Continue reading: How do you measure construct validity?
Concurrent validity and predictive validity are both types of criterion validity. Both assess how well one test corresponds to another, theoretically related, test or outcome. However, the key difference is when each test is conducted:
- Concurrent validity compares one measure to a second, well-established measure that acts as a gold standard. Both measures should be obtained at the same time, or concurrently.
- Predictive validity instead captures how well a measure corresponds to a measure taken later in time (i.e., how well one measure predicts a future measure).
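The statistic used is the same in both cases; only the timing of the criterion differs. The sketch below uses made-up scores (every measure name and number is hypothetical) to correlate a new test once with a gold-standard rating from the same session (concurrent) and once with an outcome measured months later (predictive).

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical scores on a new short screening test
new_test = [5, 12, 8, 15, 3, 10]

# Concurrent validity: gold-standard ratings from the SAME session
clinician_same_day = [6, 11, 9, 16, 4, 12]

# Predictive validity: symptom severity measured six months LATER
severity_six_months = [4, 14, 7, 17, 5, 9]

r_concurrent = pearson_r(new_test, clinician_same_day)
r_predictive = pearson_r(new_test, severity_six_months)

print(f"Concurrent r: {r_concurrent:.2f}")
print(f"Predictive r: {r_predictive:.2f}")
```

A high correlation in the first case is concurrent evidence; a high correlation in the second shows the test forecasts a later outcome.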
Continue reading: What is the difference between concurrent and predictive validity?
A construct is a phenomenon that cannot be directly measured, such as intelligence, anxiety, or happiness. Researchers must instead approximate constructs using related, measurable variables.
The process of defining how a construct will be measured is called operationalization. Constructs are common in psychology and other social sciences.
To evaluate how well a test measures the construct it’s supposed to, researchers determine construct validity. Face validity, content validity, criterion validity, convergent validity, and discriminant validity all provide evidence of construct validity.
Continue reading: What is a construct?
Criterion validity measures how well a test corresponds to another measure, or criterion. The two types of criterion validity are concurrent and predictive validity.
Continue reading: What are the two types of criterion validity?
Construct validity assesses how well a test measures the concept it was meant to measure, whereas predictive validity evaluates to what degree a test can predict a future outcome or behavior.
Continue reading: What is the difference between construct validity and predictive validity?
The interview type with the highest predictive validity differs based on the goal of the interview.
- Generally speaking, a structured interview has the highest predictive validity.
- Unstructured interviews have the lowest predictive validity, especially in recruitment or job performance settings.
- Semi-structured interviews have adequate predictive validity but not as high as structured interviews.
Situational questions, work sample requests, and questions about past behavior are the best question types for job interviews.
When designing job interview questions, make sure to minimize bias and to also account for other types of validity, such as construct validity and content validity.
You can use QuillBot’s Grammar Checker to make sure your interview questions are error-free.
Continue reading: Which type of interview has been shown to have the highest predictive validity?