A mediator (or mediating variable) is a variable that lies on the causal pathway between an independent and a dependent variable; that is, it explains how they are connected.
For example, the dependent variable “academic performance” is influenced by the independent variable “exercise” via the mediator variable “stress.” Exercise reduces stress, which in turn improves academic performance. Stress therefore mediates the relationship.
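Mediation like this can be illustrated with a quick simulation. The sketch below uses hypothetical, randomly generated data (the variable names and effect sizes are invented for illustration): it regresses performance on exercise alone to get the total effect, then adds the mediator (stress) and shows that the direct effect of exercise shrinks toward zero, which is the classic signature of mediation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical data: exercise reduces stress, and stress reduces performance.
exercise = rng.normal(size=n)
stress = -0.8 * exercise + rng.normal(size=n)
performance = -0.5 * stress + rng.normal(size=n)

def slopes(y, predictors):
    """Least-squares coefficients of y on the predictors (intercept dropped)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# Total effect of exercise on performance (positive, via reduced stress)
total = slopes(performance, [exercise])[0]

# Direct effect of exercise once the mediator (stress) is controlled for
direct, _ = slopes(performance, [exercise, stress])

print(f"total effect:  {total:.2f}")   # roughly 0.4 (= -0.8 * -0.5)
print(f"direct effect: {direct:.2f}")  # near 0: stress mediates the effect
```

With the mediator held constant, almost none of the exercise effect remains, matching the idea that exercise influences performance *through* stress.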
A moderator (or moderating variable) affects the strength or direction of the relationship between an independent and a dependent variable; in other words, it changes how the independent variable influences the dependent variable.
For example, the relationship between the dependent variable “mental health” and the independent variable “social media use” may be influenced by the moderator “age.” The impact that social media has on mental health depends on someone’s age.
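In a regression, moderation is typically modeled with an interaction term. The sketch below uses hypothetical, randomly generated data (the variable names and effect sizes are invented for illustration): the true effect of social media use on mental health is made to depend on age, and fitting a model with a social media × age interaction recovers that dependence.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical data: the effect of social media use on mental health
# depends on age (the moderator).
social_media = rng.normal(size=n)
age = rng.normal(size=n)
mental_health = (-0.2 + 0.3 * age) * social_media + rng.normal(size=n)

# Regress on both main effects plus their interaction.
X = np.column_stack([np.ones(n), social_media, age, social_media * age])
coef, *_ = np.linalg.lstsq(X, mental_health, rcond=None)

print(f"interaction coefficient: {coef[3]:.2f}")  # near 0.3: age moderates the effect
```

A nonzero interaction coefficient is the statistical counterpart of the verbal claim that "the impact of social media on mental health depends on age."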
The expected influence of mediator and moderator variables can be captured in a conceptual framework.
Continue reading: What is the difference between mediator and moderator variables?
A variable is something that can take on different values. A study includes independent and dependent variables as well as control variables and confounding variables, all of which can influence its results.
Dependent variables represent the outcome of a study. Researchers measure how they change under different values of the independent variable(s).
Independent variables are manipulated by the researcher to observe their effect on dependent variables.
Control variables are variables that are held constant to isolate the effect of the independent variable.
Confounding variables are variables that have not been controlled for that may influence a study’s results.
The expected relationship between these variables can be illustrated using a conceptual framework.
Continue reading: What is the difference between dependent variables, independent variables, control variables, and confounding variables?
The literature review, conceptual framework, and theoretical framework are all important steps in defining a research project.
A literature review is conducted early in the research process. Its purpose is to describe the current state of a research area, identify gaps, and emphasize the relevance of your own research question or study.
A theoretical framework is the lens through which a research question is viewed and answered. Different fields have their own assumptions, methods, and interpretations related to the same phenomenon that influence the choice of a theoretical framework.
Consider a neuroscientist and a social psychologist studying the construct “love.” They will each take a different approach, applying specialized methods and interpretations. In other words, they each use a unique theoretical framework that is guided by the existing theories of their field.
A conceptual framework describes the variables relevant to a study and how they relate to one another. This may include dependent and independent variables as well as any confounding variables that could influence results.
Continue reading: What is the difference between a conceptual framework, a theoretical framework, and a literature review?
You may encounter different terms for independent and dependent variables in different contexts. Some common synonyms for dependent variables are as follows:
- Dependent measure
- Outcome
- Response variable
- Predicted variable
- Output variable
- Measured variable
Continue reading: What is a dependent variable synonym?
Independent and dependent variables are called by various names across different contexts and fields. Some common synonyms for independent variables include the following:
- Predictor variable
- Regressor
- Covariate
- Manipulated variable
- Explanatory variable
- Exposure variable
- Feature
- Input variable
Continue reading: What is an independent variable synonym?
An outcome variable, or outcome measure, is another term for a dependent variable.
Dependent variables are the outcome or response that is measured in a study. Independent variables are manipulated by the researcher, and changes in the dependent variable are recorded and analyzed. An experiment explores cause-and-effect relationships between dependent and independent variables.
Continue reading: What is an outcome variable?
An experiment is a study that attempts to establish a cause-and-effect relationship between an independent and dependent variable.
In experimental design, the researcher first forms a hypothesis. They then test this hypothesis by manipulating an independent variable while controlling for potential confounds that could influence results. Changes in the dependent variable are recorded, and data are analyzed to determine if the results support the hypothesis.
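The logic of this design can be sketched with a simulated experiment. The example below uses hypothetical data and an invented effect size: participants are randomly assigned to a treatment (the independent variable), an outcome (the dependent variable) is measured, and the difference in group means estimates the causal effect, since random assignment balances potential confounds on average.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Hypothetical experiment: randomly assign participants to treatment (1)
# or control (0), the manipulated independent variable.
treatment = rng.permutation([0] * (n // 2) + [1] * (n // 2))

# Measure the dependent variable; the true treatment effect here is +1.5.
outcome = 5.0 + 1.5 * treatment + rng.normal(size=n)

# Random assignment balances confounds on average, so the difference in
# group means estimates the causal effect of the treatment.
effect = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()
print(f"estimated effect: {effect:.2f}")  # close to the true effect of 1.5
```

In a real study the analysis would also include a significance test of this difference; the point here is only that manipulation plus random assignment is what licenses the causal interpretation.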
Nonexperimental research does not involve the manipulation of an independent variable, so it cannot establish cause-and-effect relationships. Nonexperimental studies include correlational designs and observational research.
Continue reading: What is an experiment?
Test validity refers to whether a test or measure actually measures the thing it’s supposed to. Construct validity is considered the overarching concern of test validity; other types of validity provide evidence of construct validity and thus the overall test validity of a measure.
Experimental validity concerns whether a true cause-and-effect relationship exists in an experimental design (internal validity) and how well findings generalize to the real world (external validity and ecological validity).
Verifying that an experiment has both test and experimental validity is imperative to ensuring meaningful and generalizable results.
Continue reading: What is the difference between test validity and experimental validity?
Psychology and other social sciences often involve the study of constructs—phenomena that cannot be directly measured—such as happiness or stress.
Because we cannot directly measure a construct, we must instead operationalize it, or define how we will approximate it using observable variables. These variables could include behaviors, survey responses, or physiological measures.
Validity is the extent to which a test or instrument actually captures the construct it’s been designed to measure. Researchers must demonstrate that their operationalization properly captures a construct by providing evidence of multiple types of validity, such as face validity, content validity, criterion validity, convergent validity, and discriminant validity.
When you find evidence of different types of validity for an instrument, you're building a case for its construct validity: you can be fairly confident it's measuring what it's supposed to.
In short, validity helps researchers ensure that they’re measuring what they intended to, which is especially important when studying constructs that cannot be directly measured and instead must be operationally defined.
Continue reading: Why is validity so important in psychology research?
In short, yes! The terms discriminant validity and divergent validity are often used synonymously to refer to whether a test yields different results than other tests that measure unrelated concepts. However, “discriminant validity” is the more commonly used and accepted term.
Continue reading: Are discriminant and divergent validity the same thing?