What are the 3 chapters of a research proposal?
A research proposal has three main parts: the introduction, the literature review, and the methods section.
Construct validity evaluates how well a test reflects the concept it’s designed to measure.
Criterion validity captures how well a test correlates with another “gold standard” measure or outcome of the same construct.
Although both construct validity and criterion validity reflect the validity of a measure, they are not the same. Construct validity is generally considered the overarching concern of measurement validity; criterion validity can therefore be considered a form of evidence for construct validity.
A variable is something that can take on different values. A study may include independent variables, dependent variables, control variables, and confounding variables that can influence its results.
Dependent variables represent the outcome of a study. Researchers measure how they change under different values of the independent variable(s).
Independent variables are manipulated by the researcher to observe their effect on dependent variables.
Control variables are variables that are held constant to isolate the effect of the independent variable.
Confounding variables are variables that have not been controlled for that may influence a study’s results.
The expected relationship between these variables can be illustrated using a conceptual framework.
The literature review, conceptual framework, and theoretical framework are all important steps in defining a research project.
A literature review is conducted early in the research process. Its purpose is to describe the current state of a research area, identify gaps, and emphasize the relevance of your own research question or study.
A theoretical framework is the lens through which a research question is viewed and answered. Different fields have their own assumptions, methods, and interpretations related to the same phenomenon that influence the choice of a theoretical framework.
Consider a neuroscientist and a social psychologist studying the construct “love.” They will each take a different approach, applying specialized methods and interpretations. In other words, they each use a unique theoretical framework that is guided by the existing theories of their field.
A conceptual framework describes the variables relevant to a study and how they relate to one another. This may include dependent and independent variables as well as any confounding variables that could influence results.
Independent and dependent variables are called by various names across different contexts and fields. Some common synonyms for independent variables include explanatory variable, predictor variable, treatment variable, and manipulated variable.
You may encounter different terms for independent and dependent variables in different contexts. Some common synonyms for dependent variables include response variable, outcome variable, criterion variable, and measured variable.
An outcome variable, or outcome measure, is another term for a dependent variable.
Dependent variables are the outcome or response that is measured in a study. Independent variables are manipulated by the researcher, and changes in the dependent variable are recorded and analyzed. An experiment explores cause-and-effect relationships between dependent and independent variables.
An experiment is a study that attempts to establish a cause-and-effect relationship between an independent and dependent variable.
In experimental design, the researcher first forms a hypothesis. They then test this hypothesis by manipulating an independent variable while controlling for potential confounds that could influence results. Changes in the dependent variable are recorded, and data are analyzed to determine if the results support the hypothesis.
Nonexperimental research does not involve the manipulation of an independent variable. Nonexperimental studies therefore cannot establish a cause-and-effect relationship. Nonexperimental studies include correlational designs and observational research.
Test validity refers to whether a test or measure actually measures the thing it’s supposed to. Construct validity is considered the overarching concern of test validity; other types of validity provide evidence of construct validity and thus the overall test validity of a measure.
Experimental validity concerns whether a true cause-and-effect relationship exists in an experimental design (internal validity) and how well findings generalize to the real world (external validity and ecological validity).
Verifying that an experiment has both test and experimental validity is imperative to ensuring meaningful and generalizable results.
Psychology and other social sciences often involve the study of constructs—phenomena that cannot be directly measured—such as happiness or stress.
Because we cannot directly measure a construct, we must instead operationalize it, or define how we will approximate it using observable variables. These variables could include behaviors, survey responses, or physiological measures.
Validity is the extent to which a test or instrument actually captures the construct it’s been designed to measure. Researchers must demonstrate that their operationalization properly captures a construct by providing evidence of multiple types of validity, such as face validity, content validity, criterion validity, convergent validity, and discriminant validity.
When you find evidence of different types of validity for an instrument, you’re building a case for its construct validity—you can be fairly confident it’s measuring the thing it’s supposed to.
In short, validity helps researchers ensure that they’re measuring what they intended to, which is especially important when studying constructs that cannot be directly measured and instead must be operationally defined.
In short, yes! The terms discriminant validity and divergent validity are often used synonymously to refer to whether a test yields different results than other tests that measure unrelated concepts. However, “discriminant validity” is the more commonly used and accepted term.
Convergent validity and concurrent validity both indicate how well a test score and another variable compare to one another.
Convergent validity indicates how well one measure corresponds to other measures of the same or similar constructs. These measures do not have to be obtained at the same time.
Concurrent validity instead assesses how well a measure aligns with a benchmark or “gold standard,” which can be a ground truth or another validated measure. Both measurements should be taken at the same time.
Convergent validity and discriminant validity (or divergent validity) are both forms of construct validity. They are both used to determine whether a test is measuring the thing it’s supposed to.
However, each form of validity tells you something slightly different about a test:
Convergent validity: whether the test corresponds to other measures of the same or similar constructs
Discriminant validity: whether the test yields different results from measures of unrelated constructs
If a test is measuring what it is supposed to, it should correspond to other tests that measure the same thing while differing from tests that measure other things. To assess these two qualities, you must determine both convergent and discriminant validity.
A mediator (or mediating variable) is a variable that falls between an independent and a dependent variable; that is, it connects them.
For example, the dependent variable “academic performance” is influenced by the independent variable “exercise” via the mediator variable “stress.” Exercise reduces stress, which in turn improves academic performance. Stress therefore mediates the relationship.
A moderator (or moderating variable) influences how an independent variable influences a dependent variable; in other words, it impacts their relationship.
For example, the relationship between the dependent variable “mental health” and the independent variable “social media use” may be influenced by the moderator “age.” The impact that social media has on mental health depends on someone’s age.
The expected influence of mediator and moderator variables can be captured in a conceptual framework.
Construct validity assesses how well a test reflects the phenomenon it’s supposed to measure. Construct validity cannot be directly measured; instead, you must gather evidence in favor of it.
This evidence comes in the form of other types of validity, including face validity, content validity, criterion validity, convergent validity, and divergent validity. The stronger the evidence across these measures, the more confident you can be that you are measuring what you intended to.
Concurrent validity and predictive validity are both types of criterion validity. Both assess how well one test corresponds to another, theoretically related, test or outcome. However, the key difference is when each test is conducted:
Concurrent validity: the test and the criterion measure are taken at the same time
Predictive validity: the test is taken first, and the criterion (a future outcome or behavior) is measured later
A construct is a phenomenon that cannot be directly measured, such as intelligence, anxiety, or happiness. Researchers must instead approximate constructs using related, measurable variables.
The process of defining how a construct will be measured is called operationalization. Constructs are common in psychology and other social sciences.
To evaluate how well a measure captures the construct it’s supposed to, researchers determine construct validity. Face validity, content validity, criterion validity, convergent validity, and discriminant validity all provide evidence of construct validity.
Criterion validity measures how well a test corresponds to another measure, or criterion. The two types of criterion validity are concurrent and predictive validity.
Construct validity assesses how well a test measures the concept it was meant to measure, whereas predictive validity evaluates to what degree a test can predict a future outcome or behavior.
The interview type with the highest predictive validity differs based on the goal of the interview.
Situational questions, work sample requests, and questions about past behavior are the best question types for job interviews.
When designing job interview questions, make sure to minimize bias and to also account for other types of validity, such as construct validity and content validity.
You can use QuillBot’s Grammar Checker to make sure your interview questions are error-free.
To ensure high external validity, it’s important to draw a sample that’s representative of the population you want to generalize to. It’s always best to choose a probability sampling (also known as random sampling) method for this.
The most popular probability sampling methods are stratified sampling, systematic sampling, simple random sampling, and cluster sampling.
A probability sampling method also increases other types of validity, such as internal validity, and it reduces bias.
Random assignment can increase external validity, but it has a bigger impact on internal validity.
Random assignment helps to reduce confounding variables and ensures that the treatment and control groups are comparable in all aspects except for the independent variable.
This increases the confidence that any observed differences between the groups can be attributed to the treatment rather than other factors, which means an increase in internal validity.
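As a minimal illustration, random assignment can be as simple as shuffling a list in Python (the participant IDs and group sizes here are hypothetical):

```python
import random

# Hypothetical participant IDs
participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

# Shuffling makes group membership independent of participant characteristics
random.shuffle(participants)

# Split the shuffled list into two comparable groups
midpoint = len(participants) // 2
treatment_group = participants[:midpoint]
control_group = participants[midpoint:]

print("Treatment:", treatment_group)
print("Control:", control_group)
```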
Random assignment can also improve external validity because it prevents researchers from inadvertently selecting participants who may be more or less likely to respond to the treatment.
However, the external validity may still be limited by sampling bias if the participants are not representative of the target population, which is why choosing the appropriate sampling method is also important to ensure external validity.
A probability sampling method, such as simple random sampling, stratified sampling, cluster sampling, or systematic sampling, is always the best choice.
Content validity and criterion validity are two types of validity in research:
Content validity: how well a test covers all relevant aspects of the construct it’s designed to measure
Criterion validity: how well a test correlates with another “gold standard” measure or outcome of the same construct
Content validity and predictive validity are two types of validity in research:
Content validity: how well a test covers all relevant aspects of the construct it’s designed to measure
Predictive validity: to what degree a test can predict a future outcome or behavior
Qualitative and quantitative research methods are used to investigate different types of research questions.
Quantitative methods are best if you want to test a hypothesis, measure variables numerically, or analyze data statistically.
Qualitative methods are best if you want to explore ideas, understand experiences, or describe a phenomenon in depth.
Case studies have historically been used in psychology to understand rare conditions. For example, Phineas Gage was a railroad worker who had an iron rod driven through his head in an accident and miraculously survived. However, this accident drastically altered his personality and behavior for the remaining 12 years of his life.
Detailed studies of Phineas Gage helped scientists realize that different areas of the brain have specific functions. This famous case study is an example of how studying one individual in detail can provide insights that drive the formation of broader theories.
Though case studies can be classified in many ways, the most common types are intrinsic, instrumental, and collective case studies.
Intrinsic case studies focus on a specific subject (i.e., case). The point of such studies is to learn more about this specific subject rather than to generalize findings.
Instrumental case studies also focus on a single subject, but the intent is to generalize findings to a broader population.
Collective case studies have the same purpose as instrumental case studies—to use findings to increase one’s understanding of a broader topic—but they include multiple cases.
An interrupted time series design is a quasi-experimental research method. It is similar to a pretest-posttest design, but multiple data points, called a time series, are collected for a participant before and after an intervention is administered. The intervention “interrupts” the time series of observations.
If scores taken after the intervention are consistently different from scores taken before the intervention, a researcher can conclude that the intervention was successful. Considering multiple measurements helps reduce the impact of external factors.
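As a rough sketch of this logic, the following Python snippet compares the level of a hypothetical time series before and after an intervention. (Real interrupted time series analyses typically model trends and autocorrelation, e.g., with segmented regression, rather than just comparing means.)

```python
import statistics

# Hypothetical weekly symptom scores for one participant (lower = better)
before = [52, 50, 53, 51, 52, 49]  # time series before the intervention
after = [45, 43, 44, 42, 44, 41]   # time series after the intervention

# A consistent shift in level after the intervention suggests it had an effect
print("Mean before:", statistics.mean(before))
print("Mean after:", statistics.mean(after))
print("Change in level:", statistics.mean(after) - statistics.mean(before))
```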
Regression discontinuity design is a quasi-experimental approach that compares two groups of participants that are separated based on an arbitrary threshold. This method assumes that people immediately above and immediately below this threshold are quite similar. Any subsequent differences between these groups can therefore be attributed to interventions that one group does or does not receive.
For example, imagine you’re testing the efficacy of a cholesterol medication. You administer this medication only to patients whose cholesterol levels exceed 200 mg/dL. You then compare heart health indicators of patients with cholesterol levels slightly over 200 mg/dL, who do receive the medication, to patients with cholesterol levels slightly below 200 mg/dL, who do not receive the medication. If the heart health of the former group improves relative to the latter group, you may conclude that the treatment worked.
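A minimal sketch of this comparison, using made-up patient data, might look like the following. (In practice, researchers fit regression lines on each side of the cutoff rather than simply comparing means.)

```python
# Hypothetical records: (cholesterol level in mg/dL, heart health score)
patients = [
    (196, 71), (198, 70), (199, 72),  # just below 200: no medication
    (201, 78), (203, 79), (204, 77),  # 200 and above: received medication
]

threshold = 200
bandwidth = 5  # only compare patients within 5 mg/dL of the threshold

below = [score for level, score in patients
         if threshold - bandwidth <= level < threshold]
above = [score for level, score in patients
         if threshold <= level <= threshold + bandwidth]

# Because patients near the cutoff are assumed to be nearly identical,
# the gap in mean outcomes estimates the treatment effect at the threshold
effect = sum(above) / len(above) - sum(below) / len(below)
print("Estimated treatment effect at the cutoff:", effect)
```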
A pretest-posttest design is a quasi-experimental research design. Two data points are collected for a participant: one from before an intervention is introduced and one from after an intervention. A difference in these scores may indicate that the intervention was effective.
For example, imagine you complete a depression inventory before and after a 6-week therapy program. An improvement in your score may indicate that the program worked.
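As a minimal sketch with hypothetical scores, the comparison amounts to looking at each participant’s change. (A paired t-test would typically be used to check whether the change is statistically significant.)

```python
import statistics

# Hypothetical depression inventory scores (lower = better) for five participants
pretest = [28, 31, 25, 33, 29]
posttest = [21, 26, 22, 27, 24]

# Pretest-posttest designs compare each participant's two scores directly
differences = [post - pre for pre, post in zip(pretest, posttest)]
print("Mean change:", statistics.mean(differences))  # negative = improvement
```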
In a true experiment, participants are randomly assigned to different study conditions. A quasi-experiment lacks this random assignment.
True experiments are also usually conducted in controlled laboratory settings, which facilitates control of confounding variables that may impact study results. Quasi-experimental designs often collect data in real-world settings, which increases external validity but reduces control of confounds.
Finally, both true experiments and quasi-experiments generally involve the manipulation of an independent variable to determine its causal effect on a dependent variable. However, in a quasi-experimental study, researchers may have less control over this manipulation (e.g., they may be studying the impact of an intervention or treatment that has already happened).
Practical or ethical concerns may prevent researchers from using a true experimental design:
Practical concerns that prevent researchers from conducting a true experiment may include the cost of a study or the time required to design the experiment and collect and analyze data.
Ethical concerns may also limit the feasibility of true experimental research. It would be unethical to intentionally prevent study participants from accessing medication or other treatments that the researcher knows would benefit them.
In these cases, a quasi-experimental design may be more appropriate.
The four main types of mixed methods research designs differ in when the quantitative and qualitative data are collected and analyzed:
Convergent parallel design: quantitative and qualitative data are collected at the same time and analyzed separately before the results are merged
Explanatory sequential design: quantitative data are collected and analyzed first, followed by qualitative data that help explain the results
Exploratory sequential design: qualitative data are collected and analyzed first, and the findings inform a subsequent quantitative phase
Embedded design: one type of data plays a supporting role within a design based primarily on the other type
Mixed methods research questions combine qualitative methods and quantitative methods to answer a research question. Examples of mixed methods research questions include the following:
Data collection is the process of gathering data (measurements, observations, and other information) to answer a research question. Though many different methods of data collection exist, all are systematic and follow a procedure defined before data collection begins. Data can be qualitative or quantitative.
Operationalization is when you define how a variable will be measured. Operationalization is especially important in fields like psychology that involve the study of more abstract ideas (e.g., “fear”).
Because fear is a construct that cannot be directly measured, a researcher must define how they will represent it. For example, in studies involving mice, fear is often operationalized as “how often a mouse freezes (i.e., stops moving) during an experiment.”
Operationalization can be used to turn an abstract concept into a numerical form for use in quantitative research.
Some operationalizations are better than others. It is important to consider both reliability and validity (how consistent and accurate a measurement is, respectively) when operationalizing a construct.
Ecological validity is a subtype of external validity that is specifically concerned with the extent to which the study environment, tasks, and conditions reflect the real-world settings in which the behavior naturally occurs.
External validity also consists of population validity, which refers to the extent to which the results of a study can be generalized to the larger population from which the sample was drawn.
There are many types of qualitative research. The following are five common approaches:
Ethnography: immersing yourself in a group or culture to observe its behavior firsthand
Grounded theory: developing new theories that are grounded in systematically collected data
Phenomenology: examining how individuals experience a particular phenomenon
Narrative research: using participants’ stories to understand how they make meaning of their experiences
Case study: investigating a single subject, group, or event in depth
Choosing the right approach depends on the research question you are studying.
Member checking is when participants are allowed to review their data or results to confirm accuracy. This process can happen during or after data collection.
In qualitative research, data are often collected through interviews or observations. Allowing a participant to review their data can help build trust and ensure that their thoughts and experiences are being accurately expressed.
Qualitative data are generally narrative in nature. They may include interview transcripts or experimenter observations. Different approaches exist to analyze qualitative data, but common steps are as follows:
Prepare and organize the data (e.g., transcribe interviews)
Review and code the data
Identify themes and patterns across codes
Interpret the themes and report the findings
Common qualitative data analysis techniques include content analysis, thematic analysis, and discourse analysis.
Grounded theory is a systematic approach that can be applied in qualitative research. Its goal is to create new theories that are grounded in data.
With a grounded theory approach, data collection and analysis occur at the same time, and the emerging analysis guides what data you collect next (this is called theoretical sampling). This approach can be helpful when you are conducting research in a new area and do not have a hypothesis related to study outcomes.
Triangulation involves using a combination of data or techniques to answer a research question. Triangulation can help you confirm the validity of your findings. This can be helpful in qualitative research, which is often subjective and vulnerable to bias.
Types of triangulation include the following:
Data triangulation: using data from different times, places, or people
Investigator triangulation: involving multiple researchers in collecting or analyzing the data
Theory triangulation: applying different theoretical perspectives to the same data
Methodological triangulation: using different methods to study the same phenomenon
Anonymity and confidentiality are both important aspects of research ethics.
Anonymity means that researchers do not collect personal information that can be used to identify a participant, so no one’s responses can be linked to their identity.
Confidentiality means that only the researchers conducting a study can link study responses or data to individual participants.
If you run a study and do not know who your participants are (i.e., you collect no identifying information), your data are anonymous. If you know who your participants are but no one else does (i.e., you collect identifying information but don’t publish it), your data are confidential.
An institutional review board (IRB) is a committee that reviews proposed studies involving human participants to ensure research ethics are being followed. In most countries, a study must be approved by an IRB before data can be collected.
An IRB is sometimes called a research ethics board (REB), an ethical review board (ERB), or an independent ethics committee (IEC).
The National Institutes of Health (NIH) has defined seven principles to protect clinical research participants and promote research ethics:
Social and clinical value: the scientific advances of a research study should justify the costs or risks of conducting this research.
Scientific validity: a study should be designed to address an answerable question using feasible and accepted research methods.
Fair subject selection: participants should be selected based on the scientific aims of the study and should not be included or excluded for reasons unrelated to research goals.
Favorable risk-benefit ratio: the potential risks to participants should be minimized and should be outweighed by potential benefits.
Independent review: an independent review panel should ensure a study is ethical before research begins.
Informed consent: participants should decide whether to voluntarily participate in a study after learning about its research question, methods, potential risks, and benefits.
Respect for potential and enrolled subjects: individuals should be treated with respect throughout the research process.
The American Psychological Association (APA) has five principles to guide psychologists in conducting ethical research and scientific work.
Beneficence and nonmaleficence: protect the welfare of research participants and do no harm.
Fidelity and responsibility: serve the best interests of society and the specific communities impacted by research and scientific work.
Integrity: conduct and teach psychology in an accurate and honest manner.
Justice: ensure that all people have equal access to the benefits of psychology services and research.
Respect for people’s rights and dignity: show consideration for people’s dignity and their right to privacy, confidentiality, and autonomy.
Research ethics are principles that guide scientists, helping them distinguish right from wrong when conducting research. Research ethics help protect the people involved in scientific studies and ensure the integrity of scientific research.
Yes, stratified sampling is a random sampling method (also known as a probability sampling method). Within each stratum, a random sample is drawn, which ensures that each member of a stratum has an equal chance of being selected.
You can’t use an ANOVA test if your dependent variable is nominal data. The dependent variable needs to be continuous (interval or ratio data).
The independent variable for an ANOVA should be categorical (either nominal or ordinal data).
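As an illustration, here’s a minimal one-way ANOVA in Python using SciPy, with made-up reaction times as the continuous dependent variable and three treatment conditions as the categorical independent variable:

```python
from scipy import stats

# Continuous (ratio) dependent variable: hypothetical reaction times in ms,
# grouped by a nominal independent variable (three treatment conditions)
group_a = [512, 498, 530, 505, 521]
group_b = [480, 475, 492, 468, 470]
group_c = [540, 552, 534, 548, 545]

# One-way ANOVA: the DV values are continuous; the IV defines the groups
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```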
A pre-experimental design is a simple research process that happens before the actual experimental design takes place. The goal is to obtain preliminary results to gauge whether the financial and time investment of a true experiment will be worth it.
An experimental design diagram is a visual representation of the research design, showing the relationships among the variables, conditions, and participants. It helps researchers to:
The four principles of experimental design are:
Data at the nominal level of measurement is qualitative.
Nominal data is used to identify or classify individuals, objects, or phenomena into distinct categories or groups, but it does not have any inherent numerical value or order.
You can use numerical labels to replace textual labels (e.g., 1 = male, 2 = female, 3 = nonbinary), but these numerical labels are arbitrary and carry no quantitative meaning. You could assign the labels in any order (e.g., 1 = female, 2 = nonbinary, 3 = male). This means you can’t use these numerical labels for calculations.
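A quick Python sketch (with hypothetical codes) shows why: averaging nominal labels produces a meaningless number, whereas counting categories is appropriate.

```python
from collections import Counter
import statistics

# Nominal labels coded as numbers: 1 = male, 2 = female, 3 = nonbinary
gender_codes = [1, 2, 2, 3, 1, 2]

# Misleading: the "average gender" has no interpretation
print(statistics.mean(gender_codes))  # 1.83... -- not meaningful

# Appropriate: report category frequencies (or the mode) instead
print(Counter(gender_codes))  # Counter({2: 3, 1: 2, 3: 1})
```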
Randomization is a crucial component of experimental design, and it’s important for several reasons:
It reduces the influence of confounding variables.
It ensures that the treatment and control groups are comparable at the start of the study.
It minimizes selection bias and other research biases.
It strengthens the confidence that observed differences can be attributed to the treatment.
In experimental design, the two main groups are:
The treatment (or experimental) group, which receives the treatment or intervention
The control group, which does not receive the treatment
In other words, the control group is used as a baseline to compare with the treatment group, which receives the experimental treatment or intervention.
A within-participant design, also known as a repeated-measures design, is a type of experimental design where the same participants are assigned to multiple groups or conditions. Some advantages of this design are:
Fewer participants are needed, because each participant experiences every condition.
Individual differences are controlled for, since each participant serves as their own baseline.
Statistical power increases, because variability between participants is removed from comparisons.
It’s important to note that within-participant designs also have some limitations, such as increased risk of order effects (where the order of conditions affects the outcome) and carryover effects (where the effects of one condition persist into another condition).
Cluster sampling usually harms internal validity, especially if you use multiple clustering stages. The results are also more likely to be biased and invalid, especially if the clusters don’t accurately represent the population. Lastly, cluster sampling is often much more complex than other sampling methods.
Cluster sampling is generally less expensive and more efficient than other sampling methods. It is also one of the probability sampling methods (or random sampling methods), which contributes to high external validity.
In all three types of cluster sampling, you start by dividing the population into clusters before drawing a random sample of clusters for your research. The next steps depend on the type of cluster sampling:
Single-stage cluster sampling: every member of the selected clusters is included in the sample.
Double-stage cluster sampling: a random sample of members is drawn from within each selected cluster.
Multistage cluster sampling: the clustering and sampling process is repeated in successive stages within the selected clusters.
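As a minimal sketch (with hypothetical classrooms as clusters), the first two types might look like this in Python:

```python
import random

# Hypothetical population organized into clusters (e.g., classrooms)
clusters = {
    "class_a": ["a1", "a2", "a3", "a4"],
    "class_b": ["b1", "b2", "b3", "b4"],
    "class_c": ["c1", "c2", "c3", "c4"],
    "class_d": ["d1", "d2", "d3", "d4"],
}

# Step 1 (all types): randomly select clusters
chosen = random.sample(list(clusters), k=2)

# Single-stage: include every member of the selected clusters
single_stage = [m for name in chosen for m in clusters[name]]

# Double-stage: randomly sample members within each selected cluster
double_stage = [m for name in chosen for m in random.sample(clusters[name], k=2)]

print("Single-stage sample:", single_stage)
print("Double-stage sample:", double_stage)
```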
No, nominal data can only be assigned to categories that have no inherent order to them.
Categorical data with categories that can be ordered in a meaningful way is called ordinal data.
Proportionate sampling in stratified sampling is a technique where the sample size from each stratum is proportional to the size of that stratum in the overall population.
This ensures that each stratum is represented in the sample in the same proportion as it is in the population, representing the population’s overall structure and diversity in the sample.
For example, the population you’re investigating consists of approximately 60% women, 30% men, and 10% people with a different gender identity. With proportionate sampling, your sample would have a similar distribution instead of equal parts.
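As a quick sketch, here’s how those proportions would translate into stratum sample sizes in Python (the total sample size of 200 is hypothetical):

```python
# Proportionate stratified sampling: stratum sizes mirror the population
population_shares = {"women": 0.60, "men": 0.30, "other": 0.10}
total_sample_size = 200

sample_sizes = {
    stratum: round(share * total_sample_size)
    for stratum, share in population_shares.items()
}
print(sample_sizes)  # {'women': 120, 'men': 60, 'other': 20}
```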
Disproportionate sampling in stratified sampling is a technique where the sample sizes for each stratum are not proportional to their sizes in the overall population.
Instead, the sample size for each stratum is determined based on specific research needs, such as ensuring sufficient representation of small subgroups to draw statistical conclusions.
For example, the population you’re interested in consists of approximately 60% women, 30% men, and 10% people with a different gender identity. With disproportionate sampling, your sample would have 33% women, 33% men, and 33% people with a different gender identity. The sample’s distribution does not match the population’s.
Stratified sampling and systematic sampling are both probability sampling methods used to obtain representative samples from a population, but they differ significantly in their approach and execution: stratified sampling divides the population into relevant subgroups (strata) and draws a random sample from each, whereas systematic sampling selects subjects from a single list at a regular interval, starting from a random point.
Simple random sampling is a common probability sampling technique.
In probability sampling, each individual in the population has a known, nonzero chance of being selected for the sample. With simple random sampling, individuals are chosen from a list at random, giving everyone an equal chance of selection, which makes it a probability sampling method.
Other examples of probability sampling are stratified sampling, systematic sampling, and cluster sampling. Examples of nonprobability sampling are convenience sampling, quota sampling, self-selection sampling, snowball sampling, and purposive sampling.
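As a minimal illustration, simple random sampling can be sketched in a few lines of Python (the population size and sample size here are hypothetical):

```python
import random

# Hypothetical sampling frame: a numbered list of the whole population
population = list(range(1, 1001))  # IDs 1 through 1000

# Simple random sampling: every individual has an equal chance of selection
sample = random.sample(population, k=50)
print(sample[:10])  # first 10 selected IDs
```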
Simple random sampling is one of the most commonly used probability sampling methods.
The most important pros of simple random sampling are:
It minimizes sampling bias and tends to produce representative samples.
It is straightforward to understand and implement.
It supports generalization from the sample to the population.
The most important cons of simple random sampling are:
It requires a complete list of the population (a sampling frame).
It can be time-consuming and expensive for large populations.
Small subgroups may be underrepresented by chance.
Systematic sampling is sometimes used in place of simple random sampling because it’s easier to implement.
With systematic sampling, you only draw one random number and then select subjects at regular intervals. This is especially helpful when the population is large.
Topics for action research in education are:
Examples of action research papers are:
The research design is the backbone of your research project. It includes research objectives, the types of sources you will consult (i.e., primary vs secondary), data collection methods, and data analysis techniques.
A thorough and well-executed research design can facilitate your research and act as a guide throughout both the research process and the thesis or dissertation writing process.
The research process comprises five steps.
Once you’ve written your proposal, you may need your advisor’s approval of your plan before you can dive into the research process.
Construct validity refers to the extent to which a study measures the underlying concept or construct that it is supposed to measure.
Internal validity refers to the extent to which observed changes in the dependent variable are caused by the manipulation of the independent variable rather than other factors, such as extraneous variables or research biases.
When a study has high ecological validity, the findings are more likely to generalize to real-world situations, making them more applicable and useful for practical purposes, such as improving witness testimony and investigative procedures.
High ecological validity minimizes the influence of factors that can affect results, such as laboratory settings or overly structured procedures, which can lead to biases or unrepresentative data.
Ecological validity is a subtype of external validity.
As you research, write down citation information for any sources you plan to use. Record quotes and ideas carefully, along with the page numbers where you found them. You can write them on note cards, on paper, or in a digital document.
When writing your first draft, include enough citation information in the text to ensure accurate referencing. After finishing the draft, you can go through your paper and add the full citations, following the style guide.
QuillBot’s Citation Generator can help you automatically generate in-text citations and a reference list for your paper.
Finally, use QuillBot’s Plagiarism Checker to double-check your work and avoid plagiarism.
Most research papers contain at least an introduction and sections for methodology, results, discussion, and references. Many also include an abstract and a literature review. Some other common elements are a title page, a table of contents, tables and figures, and appendices.
These are three major mistakes to avoid when writing a research proposal:
You can use a formula to calculate the sampling interval in systematic sampling, which is a probability sampling method where the researcher systematically selects subjects for their sample at a regular interval.
You can calculate the sampling interval (n) by dividing the total population by the desired sample size.
In some cases, people might use a different letter to indicate the sampling interval (e.g., k). This does not affect how the formula is used.
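As a minimal sketch (with a hypothetical population of 1,000 and a desired sample of 100), the formula and the selection step look like this in Python:

```python
import random

# Hypothetical sampling frame of 1,000 people; we want a sample of 100
population = list(range(1, 1001))
sample_size = 100

# Sampling interval = total population / desired sample size
interval = len(population) // sample_size  # 10

# Choose a random starting point, then select at regular intervals
start = random.randrange(interval)
sample = population[start::interval]
print("Interval:", interval, "| sample size:", len(sample))
```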
Systematic sampling is a probability sampling method, which typically ensures a lower risk of bias than nonprobability sampling methods.
However, systematic sampling can be vulnerable to sampling bias, especially if the starting point isn’t truly random. The choice of sampling interval can also introduce bias: if the interval coincides with a periodic pattern in the population list, certain types of individuals may be systematically over- or underrepresented in the sample.
Purposive sampling is often chosen over systematic sampling in situations where the researcher wants to select subjects that have specific traits that are needed in their sample.
It is inappropriate to use systematic random sampling when your population has a periodic or cyclic order. This could result in only including individuals with a specific characteristic (e.g., age) in your sample.
Systematic sampling is a random sampling method. Another name for random sampling is probability sampling.
In systematic sampling, the researcher chooses a random starting point in a list of the population (e.g., by using a random number generator) before selecting subjects for the sample at a regular sampling interval (n). The random starting point and regular interval ensure the random nature of this sampling method.
The 12 main threats to internal validity are:
There are several ways to counter these threats to internal validity, for example, through randomization, the addition of control groups, and blinding.
Before you can conduct a research project, you must first decide what topic you want to focus on. In the first step of the research process, identify a topic that interests you. The topic can be broad at this stage and will be narrowed down later.
Do some background reading on the topic to identify potential avenues for further research, such as gaps and points of debate, and to lay a more solid foundation of knowledge. You will narrow the topic to a specific focal point in step 2 of the research process.
Content validity and face validity are both types of measurement validity. Both aim to ensure that the instrument is measuring what it’s supposed to measure.
However, content validity focuses on how well the instrument covers the entire construct, whereas face validity focuses on the overall superficial appearance of the instrument.
The best way for a researcher to judge the face validity of items on a measure is by asking both other experts and test participants to evaluate the instrument.
The combination of experts with background knowledge and research experience, along with test participants who form the target audience of the instrument, provides a good idea of the instrument’s face validity.
Face validity refers to the extent to which a research instrument appears to measure what it’s supposed to measure. For example, a questionnaire created to measure customer loyalty has high face validity if the questions are strongly and clearly related to customer loyalty.
Construct validity refers to the extent to which a tool or instrument actually measures a construct, rather than just its surface-level appearance.
Ordinal is the second level of measurement. It has two main properties:
The data can be ranked in a logical order.
The differences between adjacent categories are not equal or meaningful.
The variable age can be measured at the ordinal or ratio level. Age in years is ratio data, whereas age brackets (e.g., 18–34, 35–49, 50+) are ordinal data.
Ordinal data and ratio data are similar because they can both be ranked in a logical order. However, for ratio data, the differences between adjacent scores are equal and there’s a true, meaningful zero.
Ordinal data and interval data are similar because they can both be ranked in a logical order. However, for interval data, the differences between adjacent scores are equal.
Ordinal data is usually considered qualitative in nature. The data can be numerical, but the differences between categories are not equal or meaningful. This means you can’t use them to calculate a mean or measures of variability like the standard deviation; the median and mode are more appropriate measures of central tendency.
Nominal data and ordinal data are similar because they can both be grouped into categories. However, ordinal data can be ranked in a logical order (e.g., low, medium, high), whereas nominal data can’t (e.g., male, female, nonbinary).
Data at the nominal level of measurement typically describes categorical or qualitative descriptive information, such as gender, religion, or ethnicity.
Contrary to ordinal data, nominal data doesn’t have an inherent order to it, so you can’t rank the categories in a meaningful order.