What are the alternative names for the false dichotomy fallacy?
The false dichotomy fallacy is also known as the false dilemma fallacy or the either-or fallacy.
Deductive reasoning is considered stronger than inductive reasoning in a specific sense:
If a deductive argument’s premises are factually correct, and its structure is valid, then its conclusion is guaranteed to be true.
An inductive argument, in contrast, can only suggest the strong likelihood of its conclusion.
Inductive reasoning and deductive reasoning are the two most prominent approaches to critical thinking and argumentation. Each plays a crucial role in reasoning and argumentation, but they serve different functions:
An example of analogical reasoning in everyday life is the expression “Love is a battlefield.” This analogy emphasizes the challenges, conflicts, and emotional turmoil that can occur in relationships. It suggests that navigating romantic relationships requires strategy, resilience, and sometimes sacrifice, much like a physical battle.
To determine the strength of analogical reasoning, the most important question to ask is whether the similarities between the two situations or entities being compared are relevant and meaningful to the conclusion being drawn.
Analogical reasoning and the representative heuristic both involve making judgments based on similarities between objects or situations, but there is a key difference:
Analogical reasoning is sometimes considered a subcategory of inductive reasoning because it involves generalizing from specific instances to derive broader principles or patterns. However, some argue that analogical reasoning is distinct from induction because it involves drawing conclusions based on similarities between cases rather than generalizing from specific instances.
Along with abductive reasoning, they are forms of ampliative reasoning (in contrast to deductive reasoning).
The opposite of black-and-white thinking is often referred to as seeing “shades of gray” or recognizing nuance. This mindset involves appreciating subtleties and complexity and acknowledging a spectrum of possibilities.
Pushing back against the cognitive bias of black-and-white thinking enables us to form deeper and more balanced judgments about the world. Appreciating nuance and complexity helps us guard against logical fallacies such as false dichotomies.
Nuanced thinking involves recognizing that situations, ideas, and individuals are complex and typically have a combination of strengths and weaknesses, allowing for flexibility, understanding, and appreciation of diverse viewpoints and interpretations.
This is closely related to the idea of “seeing shades of gray,” an idiom often used in contrast to black-and-white thinking. This metaphor conveys the idea of considering and acknowledging multiple perspectives, recognizing complexities and nuances rather than interpreting everything in extreme terms.
In psychology, the term splitting describes a defense mechanism that involves thinking about people in extreme terms (e.g., seeing a person as completely good and later deciding that person is completely evil). Whereas black-and-white thinking is a cognitive bias that pertains to reasoning and affects humans in general, splitting involves human relationships and is associated with specific mental health conditions.
Thinking in extremes makes people susceptible to logical fallacies that involve exaggerated and simplistic representations of an issue, such as the false dilemma fallacy.
Binary thinking, or black-and-white thinking, involves categorizing ideas, people, and situations into two distinct, often opposite, groups. “Binary” in this context refers to a classification system that acknowledges only two possibilities, ignoring a spectrum that exists in between. This bias can lead to logical fallacies such as the either-or fallacy.
A basic premise is a fundamental assumption or principle that serves as the foundation of an argument or theory. Basic premises are often implicit and taken for granted, serving as starting points from which logical deductions or inferences are made (e.g., “We assume, as a basic premise, that causing unnecessary suffering is morally wrong”).
A premise is the basis for an argument. It is a foundational element upon which further conclusions or deductions are made. Premises play an especially important role in syllogisms, which express deductive reasoning.
Modus ponens arguments are always valid based on their logical structure, which ensures the conclusion logically follows from the premises.
However, for an argument to be both valid and sound, the premises must also be true. Validity refers to the argument’s structure ensuring the conclusion follows from the premises, while soundness refers to the argument’s validity plus the actual truth of the premises.
Modus ponens is not a logical fallacy; it is a valid form of deductive reasoning. Also known as “affirming the antecedent,” it employs a straightforward logical structure:
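Premise 1: If P, then Q.
Premise 2: P.
Conclusion: Therefore, Q.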
However, flawed attempts at forming a syllogism often result in formal logical fallacies, such as denying the antecedent, which resembles modus ponens in form but fails to provide logical certainty:
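Premise 1: If P, then Q.
Premise 2: Not P.
Conclusion: Therefore, not Q.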
Although the two arguments look similar, denying the antecedent is an invalid form of argument.
“Modus tollens” translates to “method of denying” in English.
In contrast, the Latin term “modus ponens” means “method of affirming.” Both refer to types of syllogisms.
A contrapositive negates and reverses a conditional (if–then) statement. For example, the contrapositive for the statement “If P, then Q” is “If not Q, then not P.”
Modus tollens validates the contrapositive, demonstrating that “not P” follows logically from “not Q” as follows:
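Premise 1: If P, then Q.
Premise 2: Not Q.
Conclusion: Therefore, not P.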
Modus tollens is not a logical fallacy; it is a valid approach to deductive reasoning.
However, syllogisms such as modus tollens are often conflated with formal logical fallacies (or non sequitur fallacies).
The two fallacies that are most easily conflated with modus tollens are affirming the consequent and denying the antecedent.
“Syllogism” has several near-synonyms:
Our AI Rewriter can help you find synonyms for words like “syllogism.”
The word “syllogism” is pronounced SIL-uh-jiz-uhm (IPA: /ˈsɪləˌdʒɪzəm/).
A literary syllogism mirrors formal logic by presenting two premises, often implicit, followed by a conclusion, enhancing a narrative’s depth and complexity.
For example, in To Kill a Mockingbird, Atticus Finch’s argument that all humans are created equal, coupled with evidence of Tom Robinson’s innocence, leads to the conclusion that Tom should be acquitted.
There are three main types of syllogisms in classical logic:
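These are categorical syllogisms, hypothetical syllogisms, and disjunctive syllogisms.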
The main distinction between them is the relationships expressed by their premises.
An example of deductive reasoning in real life is a student forming conclusions about shapes and angles based on the laws of geometry.
Deductive reasoning applies a general rule to a specific case to draw a conclusion.
Deductive reasoning is a crucial part of critical thinking, especially in domains such as philosophy, mathematics, and science. It allows us to make predictions and evaluate theories objectively.
Deductive arguments provide frameworks for testing hypotheses (typically developed through inductive reasoning) and allow us to establish conclusions with logical certainty.
Hypothetical syllogisms express deductive reasoning, beginning with relatively general premises and inferring specific conclusions. All three major categories of syllogisms (hypothetical syllogisms, disjunctive syllogisms, and categorical syllogisms) are deductive.
In contrast, inductive reasoning begins with specific observations and infers relatively broad conclusions.
In symbolic logic, the validity of a disjunctive syllogism can be proved using a truth table. This table expresses all truth values (i.e., true or false, expressed as T or F) of the premises and conclusion under all possible conditions.
| P | Q | P ∨ Q (“Either P or Q.”) | ¬P (“Not P.”) | Conclusion (“Therefore, Q.”) |
|---|---|---|---|---|
| T | T | T | F | T |
| T | F | T | F | F |
| F | T | T | T | T |
| F | F | F | T | F |
This truth table demonstrates that disjunctive syllogisms are valid by showing that when both premises are true (which occurs only in row three), the conclusion is also true.
An example of a disjunctive syllogism in media would be the narrator of a science documentary explaining, “Either the observed celestial object is a comet, or it is an asteroid. It has a tail, which comets have but asteroids do not; therefore, it is a comet.”
Note: Examples of “either–or” arguments seen in the media typically aren’t syllogisms. Arguments found in media discourse are typically examples of inductive reasoning. (When inductive arguments present exaggerated binary options and ignore nuance, they exemplify the either-or fallacy or the false dilemma fallacy.)
In media, reductio ad absurdum arguments can be used to demonstrate logical contradictions in policies or positions. For example, a news commentator might make the following argument against government surveillance:
“If total security requires total surveillance, then the government must monitor its own surveillance activities continuously to be consistent. This leads to the absurd conclusion that there must be an infinite number of layers of surveillance, each monitoring the previous layer.”
The Greek philosopher Zeno is renowned for his early examples of reductio ad absurdum, presented in the form of paradoxes. Zeno’s paradoxes challenged assumptions about time and space, laying the groundwork for later philosophers to formalize reductio ad absurdum.
Reductio ad absurdum is used in philosophy to uncover flaws and inconsistencies in various theories and beliefs.
For example, the following reductio ad absurdum argument is inspired by Immanuel Kant:
“If moral relativism is true and all moral beliefs are equally valid, then the beliefs that ‘helping others is a moral duty’ and ‘helping others is never a moral duty’ must both be valid. This leads to a contradiction, as an action cannot be both a moral duty and not a moral duty simultaneously.”
This argument exposes how moral relativism defies the law of non-contradiction, encouraging further examination and refinement of moral theories.
In debates, loaded questions are used to discredit opponents and force them into a defensive position.
Examples of loaded questions used in debate:
As an underhanded debate tactic, loaded questions are logical fallacies. They can be considered a form of circular reasoning.
You can use the QuillBot Paraphraser to improve the clarity of sentences and avoid ambiguity.
A classic example of a loaded question fallacy is “Have you stopped [bad behavior] yet?” For example, “Have you stopped cheating on your taxes yet?”
This logical fallacy is characterized by its assumptions. It is designed to get the respondent to either become defensive or agree with an assertion they either don’t believe or don’t want to admit.
Loaded questions are defined by their inherent assumptions or assertions that may not be agreed upon by the person being questioned. These assumptions are often unwarranted or unproven, leading the respondent into a rhetorical trap. The question is structured in such a way that any direct answer would implicitly confirm the assumption, thereby putting the respondent at a disadvantage.
This logical fallacy assumes the very thing it attempts to prove, making it a form of circular reasoning or begging the question.
Antonyms for ambiguity include clarity, precision, certainty, lucidity, and explicitness. These words describe a state of being clearly defined and easy to understand.
In contrast, ambiguity describes the condition of being unclear or having multiple meanings.
You can use QuillBot’s Paraphrasing Tool to help you vary your vocabulary to reflect your intended meaning.
Ambiguity is pronounced am-bih-GYOO-ih-tee (/ˌæm.bɪˈɡjuː.ɪ.ti/). Understanding ambiguity is an essential part of critical thinking and helps avoid logical fallacies such as the equivocation fallacy and the motte and bailey fallacy.
The Ethics of Ambiguity is a book by feminist philosopher Simone de Beauvoir. It explores existentialist ethics, focusing on the ambiguity inherent in human existence and challenging the idea of absolute truths.
Having tolerance for ambiguity means being comfortable with uncertain and unclear situations. It involves the ability to accept, or even embrace, situations with multiple possible interpretations or outcomes.
The opposite is black-and-white thinking, the tendency to view people, situations, and ideas in absolute terms.
A major premise is one of the two premises in a syllogism. It is a broad statement expressing a generalization or a principle accepted as true. The major premise always comes first in a syllogism and contains the predicate of the conclusion.
For example, in the syllogism “All dogs have fur. Fido is a dog. Therefore, Fido has fur,” the major premise is “All dogs have fur.”
The word “amphiboly” is pronounced am-FIH-buh-lee (IPA: /æmˈfɪbəli/).
It is the name of a linguistic error as well as a logical fallacy (i.e., the amphiboly fallacy).
A fallacy of ambiguity occurs when an argument relies on ambiguous language or unclear definitions to mislead. These fallacies often exploit the vagueness or multiple meanings of terms to make an argument seem strong when it is not.
Fallacies in this category include the following:
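- The equivocation fallacy (trading on a word’s multiple meanings)
- The amphiboly fallacy (trading on ambiguous sentence structure)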
The amphiboly fallacy involves using the confusing syntax of a sentence to prove a point. Whereas many logical fallacies result from reasoning errors, the amphiboly fallacy stems directly from linguistic ambiguity—whether due to a mistake or an intentional misuse of language.
Its name is based on the term “amphiboly”: syntactic ambiguity that results in a sentence having multiple possible interpretations.
The term “motte and bailey” originates from the fortifications of medieval castles. A motte (a raised mound) provided a strong, defensible position, while a bailey (an enclosed courtyard) offered more accessible but less defensible space.
The motte and bailey fallacy is named after this castle design because, like the tactic of switching between an easily defensible position (the motte) and a more accessible but less defensible position (the bailey), it involves switching between extreme and moderate positions in an argument.
The motte and bailey fallacy and the straw man fallacy both involve misrepresenting an argument, but the main difference lies in their tactics:
The motte and bailey fallacy can include coherent and logically sound points, but the strategy of shifting back and forth between two different claims is considered intellectually dishonest and makes an argument unsound overall. In other words, using this strategy is considered an informal logical fallacy.
The word “dichotomy” refers to a division or contrast between two things that are (or are represented as being) opposed or entirely different.
The false dichotomy fallacy occurs when someone presents a situation as having only two possible outcomes or options when there are more alternatives available.
Dichotomies are valid when, considering all scenarios, only two options are indeed possible.
Here are some examples of legitimate dichotomies:
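- An integer is either even or odd.
- A statement is either true or false (in classical logic).
- An entity is either living or non-living.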
Here is an example of how the word “dichotomy” can be used accurately in a sentence:
“The professor discussed the dichotomy between living and non-living entities, teaching students to distinguish between organisms that exhibit all characteristics of life and those that do not.”
The false dichotomy fallacy occurs when an issue is presented as if it had only two mutually exclusive possibilities, even though it is actually more complex. This fallacy is also called the false dilemma fallacy.
False equivalence fallacies and false analogy fallacies both involve arguing a point by making faulty comparisons. However, there is a key difference:
Both the false equivalence fallacy and the false dilemma fallacy present flawed reasoning by oversimplifying complex situations or comparisons, but there is a difference:
Logical fallacies that involve false comparisons include the following:
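- The false equivalence fallacy
- The false analogy fallacy
- The false dilemma fallacy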
The conjunction fallacy is typically considered a type of heuristic or cognitive bias. These are mental shortcuts that people use to make judgments and decisions. The conjunction fallacy specifically refers to the tendency to incorrectly believe that the conjunction of two events is more likely than one of the events occurring alone.
In psychology, the conjunction rule states that the likelihood of two events happening together cannot exceed the likelihood of either event happening independently.
This principle is fundamental to understanding logical reasoning and decision-making processes, particularly in contexts where individuals assess the likelihood of compound events.
The conjunction fallacy occurs when a person mistakenly believes the opposite: that two events are more likely to occur together than independently.
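Expressed formally, the conjunction rule states that P(A ∧ B) ≤ P(A) and P(A ∧ B) ≤ P(B). For instance, the probability that a stranger is both a teacher and a chess player cannot exceed the probability that they are a teacher.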
The conjunction fallacy occurs when someone believes two events are more likely to occur together than separately. This error in judgment often arises in situations where individuals assess the likelihood of combined events without correctly applying the principle that the probability of joint occurrences cannot exceed the probability of individual occurrences.
Examples of the burden of proof principle can be seen in many everyday contexts.
For example, if a person claims, “Astrology accurately predicts personality,” the person who makes this assertion must provide supporting evidence in order to make a compelling argument. This responsibility to provide evidence is the burden of proof.
If instead of offering evidence, the speaker challenges others to disprove the claim (e.g., “Astrology accurately predicts personality, and you can’t prove that it doesn’t”), this constitutes a logical fallacy known as the burden of proof fallacy.
In a debate, the person who makes a claim bears the burden of proof for that particular claim.
If one party makes a claim without supporting evidence and suggests that it must be assumed to be true unless someone else can disprove it, this person has committed the burden of proof fallacy.
There are two logical fallacies that involve essentially reversing the burden of proof:
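- The burden of proof fallacy
- The appeal to ignorance fallacy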
The fallacy of division incorrectly assumes that the properties of a whole apply to its parts.
Its counterpart is the fallacy of composition, which assumes that the properties of parts apply to the whole. These are not two forms of the same fallacy but distinct and essentially opposite errors.
The fallacy of division could also be compared to the ecological fallacy, which similarly involves making assumptions about the parts from the whole. However, the ecological fallacy applies strictly to the misuse of statistical data.
The fallacy of division bears similarities to other logical fallacies that involve overgeneralization:
The is-ought problem is related to the naturalistic fallacy, but there is a key difference:
The term “naturalistic fallacy” was coined by British analytic philosopher G. E. Moore in his 1903 work Principia Ethica. Moore argued against defining moral qualities such as “goodness” on the basis of observations about nature.
David Hume did not use the term “naturalistic fallacy.” However, Hume’s thoughts on the problem of “is” vs. “ought” (first explored in A Treatise of Human Nature) influenced later discussions on the relationship between facts and values, including critiques of the naturalistic fallacy.
A non-fallacious argument can include the idea of what is “natural” or “unnatural” along with specific, evidence-based reasons.
However, an appeal to nature fallacy claims that something is good because it’s natural, or bad because it’s unnatural, without any justification.
The appeal to novelty fallacy and the appeal to modernity fallacy are near opposites of the appeal to nature fallacy. Both contrast with the appeal to nature fallacy because they value newness for its own sake:
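- The appeal to novelty fallacy assumes that something is better simply because it is new.
- The appeal to modernity fallacy assumes that something is better simply because it is modern or current.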
The cherry picking fallacy is evident in the selective presentation of data. Examples can be found in areas such as scientific research and business:
In its annual report, a company emphasizes its achievements and obscures negative data: “This year, we expanded our customer base by 30%, making it our most successful year in terms of growth.” Although the report includes a comprehensive section on financial performance, it uses complex language and formatting that makes it less obvious that the company is also experiencing a downward trend in profit margins and an increase in operational costs.
As the example demonstrates, cherry picking is often applied to data to convey a specific narrative, aiming to validate a hypothesis or portray an organization more favorably than merited.
Both the cherry picking fallacy and card stacking (sometimes called stacking the deck) mislead by presenting one-sided information, but while the two can overlap, there are key differences:
While card stacking is deliberate, committing the cherry picking fallacy doesn’t require intentionality.
The cherry picking fallacy is similar to the hasty generalization fallacy and the Texas sharpshooter fallacy, which also involve arguing from poorly chosen data. However, there are key differences:
The following example of an appeal to pity fallacy demonstrates how this fallacy replaces reasoned analysis with sympathy-inducing imagery:
Legislators debate a proposed bill that would require users to register online accounts with their legal names and government-issued IDs. A proponent of the bill tells the story of one teenager who was bullied online and argues, “Too many of our young people are bullied online by anonymous users, and too many of their lives have been ruined. We must protect our children from such dangers if we have any humanity.”
This example of an appeal to pity fallacy focuses exclusively on descriptions of online bullying and its effects on children without addressing the proposed bill’s logistics, potential efficacy, or implications for free speech and privacy.
The appeal to pity fallacy and the red herring fallacy have similarities and differences:
The appeal to pity fallacy is also known as argumentum ad misericordiam, which is Latin for “argument from compassion or pity.” It involves evoking sympathy to sidestep the core issues of an argument and avoid presenting solid evidence or reasoning.
To avoid the hasty generalization fallacy, apply critical thinking and scrutinize evidence carefully, using the following strategies:
A fallacy that contrasts with hasty generalization fallacy is the slothful induction fallacy.
The hasty generalization fallacy and the appeal to anecdote differ in scope and in the type of evidence used to draw conclusions:
The appeal to ignorance fallacy can take two forms:
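- Claiming that something must be true because it hasn’t been proven false
- Claiming that something must be false because it hasn’t been proven true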
Both forms of the fallacy make the same essential error, misconstruing the absence of contrary evidence as definitive proof.
The lack of definitive proof that cryptids, such as Bigfoot, do not exist is sometimes presented as evidence that they do exist. This line of argumentation is an example of the appeal to ignorance fallacy that one might encounter in everyday life.
The appeal to ignorance fallacy is often countered with the maxim “Absence of evidence is not evidence of absence.” A lack of evidence may merely reflect the current limitations of our knowledge; it does not necessarily mean that evidence will never be discovered.
An argument can include an appeal to relevant traditions without committing the appeal to tradition fallacy. There is a key difference between arguments that consider traditions and arguments that commit the fallacy:
Both the appeal to tradition fallacy and the appeal to emotion fallacy can leverage social pressures and sentiments, but they do so in different ways:
The opposite of the appeal to tradition fallacy is the appeal to novelty fallacy, also known by its Latin name, argumentum ad novitatem.
The appeal to novelty fallacy occurs when an argument assumes that something is superior simply because it is new or modern.
Like the appeal to tradition fallacy, it relies on the timing of an idea or practice to prove its merits.
The ad populum fallacy asserts that a claim is true solely because it’s popular. This fallacy typically occurs in an argument that disregards the need for evidence or sound reasoning, relying instead on the human tendency to conform to prevailing opinions.
In politics, the ad populum fallacy can compel conformity through either desire (e.g., the desire to belong to the winning party) or fear (e.g., the fear of the stigma of supporting an unpopular candidate).
One historical example of ad populum reasoning is the Red Scare phenomenon in the United States. During periods of strong anti-communist sentiment in the twentieth century, many United States citizens were accused of being communists, often based on accusations without any other evidence. The fear of communism and the pressure to conform to anti-communist sentiments led to snowballing accusations and blacklisting.
The core problem with the equivocation fallacy is its deceptive nature. An argument that commits this fallacy is misleading because it uses a word in multiple ways without acknowledging the different meanings.
The equivocation fallacy can lead an audience to accept a conclusion that seems to be supported by the premises but is actually based on a semantic trick.
Examples of equivocation fallacies can be found in many advertisements. In particular, advertisements for products marketed as natural, environmentally friendly, or healthy often commit the equivocation fallacy.
“Feeling tired? Pick up a can of NutriBuzz, the healthy energy drink. It’s designed to energize you to pursue a healthy lifestyle, so you can hit the gym and stay active.”
This advertisement initially suggests that NutriBuzz is a “healthy” product, implying that its ingredients are beneficial. However, the primary benefit mentioned is an energy boost to support a “healthy” lifestyle (i.e., exercise), which doesn’t necessarily make the drink itself healthy in terms of ingredients.
Genetic fallacies are similar to ad hominem fallacies in that they are both fallacies of relevance that focus on the source of an argument rather than criticizing it in terms of facts and reasoning. However, there is a difference:
Fallacies of relevance, also known as red herring fallacies, divert attention from the core issues of an argument, dismissing an opposing view based on irrelevant information. Examples include the following:
Arguments that commit logical fallacies can be misleading because they typically resemble valid or sound arguments on a superficial level, while they actually present conclusions that aren’t adequately supported by their premises.
Fallacious arguments are often effective at misleading an audience because they fall into convincing patterns of errors that people tend to make based on emotional instincts, cognitive biases, and heuristic decision-making patterns.
The no true Scotsman fallacy is inherently fallacious when used to arbitrarily dismiss counterexamples that disprove a general claim. However, arguments that look similar at a glance aren’t always fallacious. The soundness or fallaciousness of the argument depends on the nature of the claim and the definitions involved.
If a claim is made about a category based on well-defined, objective, and agreed-upon criteria, then refining a definition to exclude a counterexample that doesn’t meet those criteria typically isn’t considered fallacious.
No true Scotsman arguments are fallacious because they arbitrarily redefine criteria to exclude counterexamples rather than addressing the substance of counterarguments. This technique allows one to avoid engaging with evidence in an intellectually dishonest manner, rendering the debate useless.
The appeal to purity fallacy and the no true Scotsman fallacy are closely related, but the appeal to purity fallacy is broader:
The false dilemma fallacy is also known as the false dichotomy, false binary, or either-or fallacy.
The false dilemma fallacy artificially limits choices, creating a situation where it seems there are only two mutually exclusive options. This fallacy rules out the possibility of any alternative, including combined or middle-ground solutions.
The following strategies can help you avoid committing the false dilemma fallacy:
Post hoc and non sequitur fallacies both involve the concept of “following.” However, post hoc fallacies are related to the chronological sequence of events, whereas non sequitur fallacies are related to the logical connection between statements.
To accurately distinguish between the two fallacies, assess whether the argument’s focus is chronological (post hoc) or logical (non sequitur).
Examples of non sequitur fallacies, also known as formal fallacies, aren’t easy to find in daily life because they typically occur in formal disciplines such as logic, mathematics, and physics. The following example illustrates the nature of a non sequitur fallacy:
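Premise 1: All squares are rectangles.
Premise 2: This shape is a rectangle.
Conclusion: Therefore, this shape is a square.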
More specifically, this example falls into the subcategory of the fallacy of the undistributed middle, in which the middle term in the premises doesn’t cover all possible cases, leading to a faulty conclusion.
The direct opposite of the fallacy of composition is the fallacy of division.
A related concept is the ecological fallacy, an error in statistical analysis where conclusions about individuals are wrongly inferred from group-level data. While not the exact opposite of the fallacy of composition, the ecological fallacy also involves the unwarranted transfer of qualities between parts and wholes.
The fallacy of composition can be considered a type of hasty generalization fallacy.
Cognitive biases and logical fallacies are distinct but related concepts that both involve errors in reasoning.
Logical fallacies sometimes result from, or appeal to, cognitive biases.
To identify a false cause fallacy, look for the following mistakes in an argument:
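- A causal claim based only on the order of events (one event happening before another doesn’t prove it caused the other)
- A causal claim based only on correlation (two variables occurring together doesn’t prove that one causes the other)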
In the correlation–causation fallacy, a perceived association between two variables is wrongly assumed to imply a cause-and-effect relationship. It’s important to understand the differences between correlation and causation:
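- Correlation means that two variables tend to occur or change together.
- Causation means that a change in one variable directly produces a change in the other.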
The maxim “correlation does not imply causation” is often used to rebut the correlation–causation fallacy. Observing an association between two variables does not necessarily indicate a causal link.
False cause fallacies assume a causal relationship between events, as demonstrated in the following examples:
There are several types of false cause fallacies that have specific names, including the post hoc fallacy and the cum hoc fallacy.
The following fictional scenario is an example of the base rate fallacy:
A SETI (search for extraterrestrial intelligence) program develops an algorithm with 99% accuracy for identifying alien signals among cosmic noise, where the actual occurrence of alien signals is estimated to be only 1 in a million. When the algorithm flags a signal as alien, the media reports that alien life has been contacted. This assumption is based on the algorithm’s high accuracy rate, but it ignores the extremely low probability that the signal is from alien life.
In this example, the media commits the base rate fallacy by ignoring statistical reality and focusing on a specific incident. Given the base rate of 1 alien signal in a million, the vast majority of flagged signals are false positives.
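The false-positive math can be made concrete with Bayes’ theorem. The following Python sketch assumes, for illustration, that the algorithm’s “99% accuracy” means a 99% true-positive rate and a 1% false-positive rate:

```python
# Bayes' theorem for the SETI scenario (illustrative assumptions:
# "99% accuracy" = 99% true-positive rate and 1% false-positive rate).
base_rate = 1e-6        # prior: 1 in a million signals is genuinely alien
true_positive = 0.99    # P(flagged | alien)
false_positive = 0.01   # P(flagged | not alien)

# Total probability that any given signal gets flagged
p_flagged = true_positive * base_rate + false_positive * (1 - base_rate)

# Posterior probability that a flagged signal is actually alien
posterior = (true_positive * base_rate) / p_flagged
print(f"P(alien | flagged) = {posterior:.6f}")  # ~0.0001, i.e., about 0.01%
```

Under these assumptions, roughly 1 in 10,000 flagged signals is genuine, which is why the vast majority are false positives.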
To avoid being influenced by the base rate fallacy, consider the following strategies:
- Apply Bayesian reasoning: Start with initial probabilities and systematically update them with new evidence to balance general data with specific information.
The term “cost-benefit fallacy” is not a formally recognized logical fallacy, but it might be used to refer to errors in cost-benefit analysis.
Cost-benefit analysis is a framework for systematically evaluating the advantages and disadvantages of investments, policies, and other decisions in fields such as economics, public policy, and healthcare.
Mistakes in cost-benefit analyses can include the following:
- Time horizon: misjudging the appropriate timeframe for analysis
The appeal to emotion fallacy is problematic because it replaces logic and evidence with emotionally charged content.
Including evocative language and imagery in an argument is an acceptable rhetorical strategy. However, an argument is rendered unsound when an emotional appeal is used to distract from the main points of the argument.
All ecological fallacies have the following traits:
The ecological fallacy can occur in the field of epidemiology when individual risk factors or health outcomes are inferred from population-level data. Consider the following example:
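Suppose that countries with higher average salt consumption also report higher rates of hypertension. Concluding that a particular individual who eats a high-salt diet must therefore have hypertension commits the ecological fallacy: the group-level correlation doesn’t establish individual-level risk.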
Logical fallacies that are common in research include the following:
The either-or fallacy is an informal logical fallacy because it is a content-level error that occurs in inductive arguments. Inductive arguments reason from specific observations to propose general principles. If an inductive argument commits an informal fallacy, it is called “unsound.”
By contrast, formal fallacies are structural errors that occur in formal (or deductive) arguments and make the argument “invalid.”
The either-or fallacy is also known as “false dilemma” or “false dichotomy.” These terms are used interchangeably to describe a common logical fallacy that limits options to just two, overlooking the potential for middle-ground solutions or a spectrum of possibilities.
To avoid the either-or fallacy, consider the following questions:
Post hoc fallacies can be recognized by the following attributes:
Post hoc and hasty generalization fallacies both involve jumping to conclusions, but there is a difference between the two.
The post hoc fallacy could be considered a subcategory of the hasty generalization fallacy that focuses specifically on causation and timing.
The post hoc fallacy and the non sequitur fallacy are sometimes conflated, but they are fundamentally different.
The following scenario is an example of the post hoc fallacy:
A country introduces new environmental regulations. Shortly afterward, there is a downturn in the economy. Some politicians argue that the new regulations caused the economic decline, neglecting other global economic factors at play.
The argument is fallacious because it assumes that the order of events is sufficient to prove causation. Although it’s possible that the regulations affected the economy, they can’t be assumed to be the main or sole cause of the economic downturn without further evidence.
The logical fallacy “tu quoque” is pronounced /ˈtuː ˈkwoʊkwiː/ (too-kwoh-kwee).
Other accepted pronunciations include the following:
The tu quoque fallacy is a specific kind of ad hominem fallacy.
Both belong to the category of fallacies of relevance, also known as red herring fallacies.
The tu quoque fallacy and whataboutism sometimes overlap, but they have distinct characteristics.
Both are typically considered informal logical fallacies or specious approaches to argumentation.
Cognitive biases describe flawed thought processes, whereas logical fallacies describe errors in argumentation.
A cognitive bias describes a common error in judgment. Examples of cognitive biases include confirmation bias (i.e., the tendency to seek out information that confirms one’s beliefs) and the halo effect (i.e., the tendency to assume that someone who exhibits one positive attribute, such as beauty, also has another positive attribute, such as honesty).
A logical fallacy is a type of flawed argument. Many logical fallacies either result from or intentionally appeal to cognitive biases.
Yes, an appeal to ignorance is a type of logical fallacy. It involves asserting that because something hasn’t been proven true, it must be false, or because something hasn’t been proven false, it must be true (e.g., “Scientists can’t prove that the Egyptian pyramids don’t have extraterrestrial origins, so they must have been built with extraterrestrial help”).
There is an aphorism that is often used to counter arguments from ignorance: “Absence of evidence is not evidence of absence.”
A similar mistake is the burden of proof fallacy, which occurs when someone makes a claim but doesn’t offer evidence, instead claiming that others must disprove it (e.g., “There’s a secret society manipulating world governments. Prove me wrong”).
Ad hominem is the informal logical fallacy of attacking a person instead of refuting an argument. Based on the Latin for “to the person,” ad hominem arguments focus on irrelevant criticisms of an individual rather than making a good-faith rebuttal.
Name-calling is one common form of ad hominem fallacy. It’s used to dismiss an argument by simply ridiculing the individual presenting it (e.g., “Now that we’ve heard the bleeding-heart proposals from my naive young colleague, let’s move on to discussing realistic solutions”).
The sunk cost fallacy can lead to an escalation of commitment (or commitment bias).
An escalation of commitment stems from fallacious sunk cost reasoning and entails committing even more time, money, effort, emotions, or conviction to a failed decision in a futile attempt to recover what has been lost.
Common types of fallacies, or errors in reasoning, that are found in research include the following:
Not all slippery slope arguments are fallacious.
There are several ways to debunk slippery slope fallacies:
To effectively respond to a straw man fallacy, identify and explain the misrepresentation as precisely as possible. Restate your original argument accurately to dispel any misconceptions, and ask the other party to address your argument directly, rather than the distorted version. This approach not only highlights the fallacy but also refocuses the discussion on the substantive points of the debate.
The straw man fallacy disrupts productive discourse and makes it difficult to resolve problems by shifting focus away from the most relevant issues. Committing the straw man fallacy also causes a speaker to lose credibility, as it typically demonstrates a degree of intellectual dishonesty.
The straw man fallacy can be considered a subcategory of red herring fallacy.
Straw man arguments are the simplified, distorted, or fabricated versions of an opponent’s stance that are presented in debates where the straw man fallacy is committed.
Although many sources use circular reasoning fallacy and begging the question interchangeably, others point out that there is a subtle difference between the two:
In other words, we could say begging the question is a form of circular reasoning.
The circular reasoning fallacy is a logical fallacy in which the evidence used to support a claim assumes that the claim is true, resulting in a self-reinforcing but ultimately unconvincing argument. For instance, someone might argue, “This brand is the best (conclusion) because it’s superior to all other brands on the market (premise).”
Argumentum ad hominem is a Latin phrase meaning “argument against the person.” Ad hominem arguments, often referred to in daily life as “personal attacks,” distract from the main point of an argument by unfairly criticizing the person making it.
Ad hominem is the name of a logical fallacy, but the term can also refer to a general insult that’s not part of a logical argument.
A fallacious ad hominem argument shifts the focus away from the main topic by making irrelevant personal attacks.
Not all personal criticisms are ad hominem fallacies. In some contexts, critiques of an individual’s character are relevant to an argument.
Ad hominem is a persuasive technique that attempts to sway an audience’s opinion by criticizing an individual’s personal characteristics.
When used to sidestep the main topic of an argument, an ad hominem is an informal logical fallacy. The use of an ad hominem attack is often intended to manipulate. It can be an obstacle to productive debate.
The complex question fallacy and begging the question fallacy are similar in that they are both based on assumptions. However, there is a difference between them:
In other words, begging the question is about drawing a conclusion based on an assumption, while a complex question involves asking a question that presupposes the answer to a prior question.
The red herring fallacy hinders constructive dialogue and prevents meaningful progress in addressing the central issues of a discussion.
The intentional use of red herrings and other fallacies can mislead and manipulate the audience by drawing attention to unrelated topics or emotions, potentially swaying opinions without addressing the substance of the original argument.
The halo effect is important in marketing because it means that an individual product characteristic can influence how consumers perceive the product’s other characteristics.
A product may be perceived as being high quality if the packaging looks expensive, for instance—even if this isn’t the case. Conversely, the halo effect can work in the other direction (the horn effect) and negatively impact sales if the packaging of a high-quality product looks too cheap.
The horn effect is the halo effect in reverse. While the halo effect makes us more likely to make positive judgments about someone or something based on a single positive characteristic, the horn effect makes us more likely to make negative judgments based on a negative characteristic.
For instance, the horn effect might lead you to unconsciously decide against asking a new colleague for help because you formed a negative first impression of them based on the way they were dressed when you were introduced.
The term cognitive bias describes a broad range of ways in which our experiences and beliefs affect our judgments and decisions. These include mental shortcuts or heuristics involving preconceptions that enable us to quickly process and understand new information.
But cognitive bias can cause us to misinterpret events and facts, and misread people’s intentions. It can also be a root cause of research bias.
Common forms of cognitive bias include:
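- Confirmation bias
- The halo effect and the horn effect
- The observer-expectancy effect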
The observer-expectancy effect is a cognitive bias in which researchers inadvertently influence their study participants. It can involve elements of the Pygmalion effect as well as the halo or horn effects, and it is related to the idea of self-fulfilling prophecies, which can have either a positive or negative impact.
This is one of the reasons experimental design is so important when crafting a research proposal: careful design reduces the risk that this effect will color the eventual results.
The Rosenthal effect is another name for the Pygmalion effect. It describes how a teacher, leader, or coach can improve the performance of those they are leading by consistently having, and expressing, high expectations of them.
It is named after Robert Rosenthal, one of the two researchers (along with Lenore Jacobson) who first described the effect. In short, it shows how low expectations of others can lead them to perform badly, while high expectations can lead to higher performance.
Affirming the consequent is invalid because it assumes a specific cause for an outcome that can have multiple causes. Consider the formula for affirming the consequent:
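Premise 1: If P, then Q.
Premise 2: Q.
Conclusion: Therefore, P.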
The above syllogism is fallacious because Q can be true for reasons other than P. The mistake lies in assuming a single cause for an effect or trait.
For example:
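Premise 1: If it rained last night, the ground is wet.
Premise 2: The ground is wet.
Conclusion: Therefore, it rained last night. (The ground could instead be wet because of sprinklers or morning dew.)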
You can avoid committing the affirming the consequent fallacy by remembering that in hypothetical syllogisms, the antecedent should be affirmed instead.
The correct way to form a valid affirmative hypothetical syllogism is:
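Premise 1: If P, then Q.
Premise 2: P.
Conclusion: Therefore, Q.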
In this correct form of the syllogism, called modus ponens (or “affirming the antecedent”), the fact that the antecedent (P) is true logically requires that the consequent (Q) is also true.
Affirming the consequent and denying the antecedent are both logical fallacies that occur in hypothetical syllogisms, but the two fallacies have different forms.
Affirming the consequent takes the following form:
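Premise 1: If P, then Q.
Premise 2: Q.
Conclusion: Therefore, P.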
Denying the antecedent takes the following form:
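Premise 1: If P, then Q.
Premise 2: Not P.
Conclusion: Therefore, not Q.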
Post hoc ergo propter hoc is a Latin phrase meaning “after this, therefore because of this.” It refers to the logical fallacy of assuming that because Event B follows Event A, Event A caused Event B. This error is often referred to as the post hoc fallacy.
Denying the antecedent is a logical fallacy because the absence of one potential cause doesn’t mean that no other causes exist.
Consider the following example:
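Premise 1: If it is raining, the ground is wet.
Premise 2: It is not raining.
Conclusion: Therefore, the ground is not wet.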
This argument is clearly faulty because the ground could be wet for many reasons other than rain (e.g., lawn sprinklers). In other words, the conclusion is not solely dependent on the premise.
Denying the antecedent is an invalid argument form. In other words, it is a formal logical fallacy.
In logic, the term “invalid” describes a type of argument in which the premises do not guarantee the truth of the conclusion, even if all the premises are true. In the fallacy of denying the antecedent, it is possible that the expected outcome could occur without one specific cause being true.
Consider the following example:
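Premise 1: If an animal is a bird, then it lays eggs.
Premise 2: This animal is not a bird.
Conclusion: Therefore, this animal does not lay eggs.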
It is clear that this argument is invalid. The animal could be an insect or a reptile or many other animals. The conclusion is not guaranteed by the premises.
A real-life example of denying the antecedent is the following argument:
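Premise 1: If Maria is a professor, then she has a PhD.
Premise 2: Maria is not a professor.
Conclusion: Therefore, Maria does not have a PhD.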
This is an invalid argument because the fact that Maria is not a professor does not necessarily mean she does not have a PhD. Maria might be someone who has a PhD but chose a non-academic career path.
In commercials, weasel words like “up to,” “virtually,” and “helps” are often used. These words allow companies to make claims about their product without providing details that could later be challenged.
For instance, “This cream helps reduce the appearance of wrinkles” implies assistance without guaranteeing wrinkle elimination.
Use QuillBot’s Paraphrasing Tool to find ways to express your exact meaning and avoid ambiguous language.
Common weasel words include:
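- “Up to”
- “Virtually”
- “Helps”
- “Possibly”
- “Reportedly”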
These words allow a speaker or writer to avoid making firm commitments or statements that might later be challenged.
Try QuillBot’s Paraphraser to vary your word choice to communicate clearly and directly.
Weasel words (i.e., words that are unhelpfully vague, such as “possibly” and “reportedly”) should be avoided because they can diminish the clarity and honesty of communication, leading to misunderstandings and a lack of trust. Avoiding these words can enhance the transparency and trustworthiness of your statements.
Try QuillBot’s Paraphraser to find the right words to communicate your message.