A fallacy is the use of invalid or otherwise faulty reasoning, or "wrong moves"[1] in the construction of an argument.[2][3] A fallacious argument may be deceptive by appearing to be better than it really is. Some fallacies are committed intentionally to manipulate or persuade by deception, while others are committed unintentionally due to carelessness or ignorance. The soundness of legal arguments depends on the context in which the arguments are made.[4]

Fallacies are commonly divided into "formal" and "informal". A formal fallacy can be expressed neatly in a standard system of logic, such as propositional logic,[2] while an informal fallacy originates in an error in reasoning other than an improper logical form.[5] Arguments containing informal fallacies may be formally valid, but still fallacious.[6]

A special case is a mathematical fallacy, an intentionally invalid mathematical proof, often with the error subtle and somehow concealed. Mathematical fallacies are typically crafted and exhibited for educational purposes, usually taking the form of spurious proofs of obvious contradictions.


Fallacies are defects that weaken arguments. Fallacious arguments are very common and can be persuasive in common use. They may even be "unsubstantiated assertions that are often delivered with a conviction that makes them sound as though they are proven facts".[7] Informal fallacies in particular are found frequently in mass media such as television and newspapers.[8] It is important to understand what fallacies are so that one can recognize them in one's own writing and in that of others. Avoiding fallacies strengthens one's ability to produce sound arguments.

It can be difficult to evaluate whether an argument is fallacious, as arguments exist along a continuum of soundness and an argument that has several stages or parts might have some sound sections and some fallacious ones.[9]

"Fallacious arguments usually have the deceptive appearance of being good arguments."[11] Recognizing fallacies in everyday arguments may be difficult, since arguments are often embedded in rhetorical patterns that obscure the logical connections between statements. Informal fallacies may also exploit the emotional, intellectual, or psychological weaknesses of the audience. Learning to recognize fallacies develops the reasoning skills needed to expose weak links between premises and conclusions, and so to better discern what merely appears to be true from what is true.

Argumentation theory provides a different approach to understanding and classifying fallacies. In this approach, an argument is regarded as an interactive protocol between individuals that attempts to resolve their disagreements. The protocol is regulated by certain rules of interaction, so violations of these rules are fallacies.

Fallacies are used in place of valid reasoning to communicate a point with the intention to persuade. Examples in the mass media today include, but are not limited to, propaganda, advertisements, politics, newspaper editorials and opinion-based "news" shows.

Systems of classification

Because of their variety of structure and application, fallacies are challenging to classify in a way that satisfies all practitioners. Fallacies can be classified strictly by either their structure or their content, for example as formal fallacies or informal fallacies, respectively. The classification of informal fallacies may be subdivided into categories such as linguistic, relevance through omission, relevance through intrusion, and relevance through presumption.[12] Alternatively, fallacies may be classified by the process by which they occur: material fallacies (content), verbal fallacies (linguistic), and again formal fallacies (error in inference). In turn, material fallacies may be placed into the more general category of informal fallacies, while verbal fallacies may fall under either the formal or the informal classification. Compare equivocation, a word- or phrase-based ambiguity (e.g. "he is mad", which may mean either that he is angry or that he is clinically insane), with the fallacy of composition, a premise- and inference-based ambiguity (e.g. "this must be a good basketball team because each of its members is an outstanding player").[13]

Even the definitions of the classes may not be unique. For example, Whately treats material fallacies as a complement to logical fallacies, which makes them synonymous with informal fallacies, while others consider them to be a subclass of informal fallacies, as mentioned above.


Aristotle was the first to systematize logical errors into a list, since being able to refute an opponent's thesis is one way of winning an argument.[14] Aristotle's Sophistical Refutations (De Sophisticis Elenchis) identifies thirteen fallacies, divided into two major types: linguistic fallacies, which depend on language, and non-linguistic fallacies, which do not.[15][16] These are called verbal fallacies and material fallacies, respectively. A material fallacy is an error in what the arguer is talking about, while a verbal fallacy is an error in how the arguer is talking; verbal fallacies are those in which a conclusion is obtained by improper or ambiguous use of words.[17] An example of a language-dependent fallacy is a debate as to who among humanity are the learners: the wise or the ignorant.[18] An example of a language-independent fallacy is:

  1. "Coriscus is different from Socrates."
  2. "Socrates is a man."
  3. "Therefore, Coriscus is different from a man."[19]
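
The invalidity of this argument form can be checked with a counterexample, here sketched as a toy Python model in which individuals are represented as sets of properties (the property names are illustrative, not part of the original text):

```python
# Toy model: each individual is a set of properties.
coriscus = {"man", "from_corinth"}
socrates = {"man", "philosopher"}

premise_1 = coriscus != socrates       # "Coriscus is different from Socrates"
premise_2 = "man" in socrates          # "Socrates is a man"
conclusion = "man" not in coriscus     # "Therefore, Coriscus is different from a man"

# Both premises hold, yet the conclusion is false, so the form is invalid:
# differing from Socrates in some respect does not mean differing in every respect.
print(premise_1, premise_2, conclusion)  # True True False
```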

Whately's grouping

Richard Whately defines a fallacy broadly as, "any argument, or apparent argument, which professes to be decisive of the matter at hand, while in reality it is not".[20]

Whately divided fallacies into two groups: logical and material. According to Whately, logical fallacies are arguments where the conclusion does not follow from the premises. Material fallacies are not logical errors because the conclusion does follow from the premises. He then divided the logical group into two groups: purely logical and semi-logical. The semi-logical group included all of Aristotle's sophisms except ignoratio elenchi, petitio principii, and non causa pro causa, which are in the material group.[21]

Other systems of classification

Of the other classifications of fallacies, the most famous are those of Francis Bacon and J. S. Mill. Bacon (Novum Organum, Aph. 33, 38 sqq.) divided fallacies into four Idola (Idols, i.e. False Appearances), which summarize the various kinds of mistakes to which the human intellect is prone. With these should be compared the Offendicula of Roger Bacon, contained in the Opus maius, pt. i. J. S. Mill discussed the subject in book v. of his Logic, and Jeremy Bentham's Book of Fallacies (1824) contains valuable remarks. See Richard Whately's Logic, bk. v.; A. de Morgan, Formal Logic (1847); A. Sidgwick, Fallacies (1883); and other textbooks.

Formal fallacy

A formal fallacy, deductive fallacy, logical fallacy or non sequitur (Latin for "it does not follow") is a flaw in the structure of a deductive argument which renders the argument invalid. The flaw can be neatly expressed in a standard system of logic.[2] Such an argument is always considered to be wrong. The presence of a formal fallacy does not imply anything about the argument's premises or its conclusion: both may actually be true, or may even be more probable as a result of the argument, but the deductive argument is still invalid because the conclusion does not follow from the premises in the manner described.

By extension, an argument can contain a formal fallacy even if the argument is not a deductive one; for instance, an inductive argument that incorrectly applies principles of probability or causality can be said to commit a formal fallacy. Other authors restrict the term: "Since deductive arguments depend on formal properties and inductive arguments don't, formal fallacies apply only to deductive arguments."[5]

A logical form such as "A and B" is independent of any particular conjunction of meaningful propositions. Logical form alone can guarantee that, given true premises, a true conclusion must follow. However, formal logic makes no such guarantee if any premise is false; the conclusion can then be either true or false. Any formal error or logical fallacy similarly invalidates the deductive guarantee: for the conclusion to be guaranteed true, the argument must be valid and all of its premises must be true.

The term logical fallacy is in a sense self-contradictory, because logic refers to valid reasoning, whereas a fallacy is the use of poor reasoning. For this reason, the term formal fallacy is preferred. In informal discourse, however, logical fallacy is used to mean any argument which is problematic for any reason.

The term non sequitur denotes a general formal fallacy, often meaning one which does not belong to any named subclass of formal fallacies like affirming the consequent.
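
Because a formal fallacy is a flaw in logical form alone, its invalidity can be demonstrated mechanically. The sketch below, a minimal illustration rather than a standard tool, enumerates all truth assignments for the well-known invalid form affirming the consequent ("If P then Q; Q; therefore P") and finds the assignment on which both premises are true but the conclusion is false:

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'P -> Q' is false only when P is true and Q is false."""
    return (not p) or q

# Affirming the consequent: premises "P -> Q" and "Q"; conclusion "P".
# The form is invalid iff some truth assignment makes every premise true
# while the conclusion is false.
counterexamples = [
    (p, q) for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p
]

print(counterexamples)  # [(False, True)]
```

The single row P = False, Q = True makes both premises true and the conclusion false, which is exactly what it means for the form to be invalid regardless of subject matter.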

Common examples

Ecological fallacy

An ecological fallacy is committed when one draws an inference from data based on the premise that qualities observed for groups necessarily hold for individuals; for example, "if countries with more Protestants tend to have higher suicide rates, then Protestants must be more likely to commit suicide."[22]
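
The failure of the group-to-individual inference can be made concrete with hypothetical numbers (the figures below are purely illustrative and not drawn from the cited study): the region with the larger Protestant share can show the higher overall rate even though, within every region, Protestants have the lower individual rate.

```python
# Purely illustrative figures demonstrating the ecological fallacy.
regions = {
    #            (share_protestant, rate_protestant, rate_other)  rates per 100
    "Region A": (0.8, 3.0, 4.0),
    "Region B": (0.2, 1.0, 2.0),
}

# Overall rate of each region is the population-weighted average.
overall = {
    name: share * rate_p + (1 - share) * rate_o
    for name, (share, rate_p, rate_o) in regions.items()
}

print({name: round(rate, 1) for name, rate in overall.items()})
# {'Region A': 3.2, 'Region B': 1.8}: the more-Protestant region has the
# higher overall rate, yet rate_protestant < rate_other in both regions.
```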

The fallacy fork

Maarten Boudry argues that formal, deductive fallacies rarely occur in real life, and that arguments that would be fallacious in formally deductive terms are not necessarily so once context and prior probabilities are taken into account, which makes the argument defeasible and inductive. For a given fallacy, one must either characterize it by means of a deductive argumentation scheme, which rarely applies (the first prong of the fork), or relax definitions and add nuance to take the actual intent and context of the argument into account (the other prong). To argue, for example, that one became nauseated after eating a mushroom because the mushroom was poisonous could be an instance of the post hoc ergo propter hoc fallacy; it would not be, however, if one were actually arguing inductively and probabilistically that the mushroom likely caused the illness, since some mushrooms are poisonous, it is possible to misidentify a mushroom as edible, one does not usually feel nauseated, and so on.[23]

Informal fallacy

In contrast to a formal fallacy, an informal fallacy originates in a reasoning error other than a flaw in the logical form of the argument.[5] A deductive argument containing an informal fallacy may be formally valid,[6] but still remain rationally unpersuasive. Nevertheless, informal fallacies apply to both deductive and non-deductive arguments.

Though the form of the argument may be relevant, fallacies of this type are the "types of mistakes in reasoning that arise from the mishandling of the content of the propositions constituting the argument".[24]

Faulty generalization

A special subclass of the informal fallacies is the set of faulty generalizations, also known as inductive fallacies. Here the most important issue concerns inductive strength or methodology (for example, statistical inference). In the absence of sufficient evidence, drawing conclusions based on induction is unwarranted and fallacious. With the backing of empirical evidence, however, the conclusions may become warranted and convincing (at which point the arguments are no longer considered fallacious).

Hasty generalization

For instance, a hasty generalization is making assumptions about a whole group or range of cases based on a sample that is inadequate (usually because it is atypical or just too small). Stereotypes about people ("frat boys are drunkards", "grad students are nerdy", "women don't enjoy sports", etc.) are a common example of this fallacy.

Hasty generalization often follows a pattern such as:

X is true for A.
X is true for B.
Therefore, X is true for C, D, etc.

While never a valid logical deduction, if such an inference can be made on statistical grounds, it may nonetheless be convincing. This is because with enough empirical evidence, the generalization is no longer a hasty one.
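
One simple way to see when such an inference stops being hasty is the standard margin of error for an observed proportion, which shrinks as the sample grows. The sketch below assumes the usual normal approximation with a 95% confidence multiplier of 1.96; the numbers are illustrative only.

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for an observed proportion p_hat from n samples."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# The same observed proportion (60%) warrants very different generalizations
# depending on how many cases were actually examined.
for n in (5, 50, 5000):
    print(n, round(margin_of_error(0.6, n), 3))
```

With 5 cases the interval spans roughly plus or minus 43 percentage points, far too wide to support a claim about the whole group; with 5000 cases it narrows to about 1.4 points, at which point the generalization rests on statistical grounds rather than on a hasty leap.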

Relevance fallacy

The fallacies of relevance are a broad class of informal fallacies, generically represented by missing the point: presenting an argument that may be sound but fails to address the issue in question.

Argumentum ex silentio

An argument from silence features an unwarranted conclusion advanced based on the absence of data.

Examples of informal fallacies

Post hoc (false cause)

This fallacy gets its name from the Latin phrase "post hoc, ergo propter hoc," which translates as "after this, therefore because of this." Definition: assuming that because B comes after A, A caused B. Sometimes one event really does cause another that comes later; for example, if I register for a class, and my name later appears on the roll, the first event caused the one that came later. But sometimes two events that seem related in time are not really related as cause and effect. That is, correlation is not the same thing as causation.
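
A toy simulation can show how temporal succession plus correlation can arise with no causal link at all: below, a hidden common factor (the variable names are illustrative) drives both an earlier event A and a later event B, yet A does not cause B.

```python
import random

random.seed(0)

# A hidden common cause (say, hot weather) drives both A (observed first)
# and B (observed later); A does not cause B.
data = []
for _ in range(1000):
    hidden = random.random()             # common cause
    a = hidden + random.gauss(0, 0.1)    # e.g. ice-cream sales
    b = hidden + random.gauss(0, 0.1)    # e.g. sunburn cases, seen afterwards
    data.append((a, b))

# Pearson correlation, computed by hand to keep the sketch dependency-free.
n = len(data)
mean_a = sum(a for a, _ in data) / n
mean_b = sum(b for _, b in data) / n
cov = sum((a - mean_a) * (b - mean_b) for a, b in data) / n
var_a = sum((a - mean_a) ** 2 for a, _ in data) / n
var_b = sum((b - mean_b) ** 2 for _, b in data) / n
r = cov / (var_a * var_b) ** 0.5

print(round(r, 2))  # a strong positive correlation despite no causal link
```

Concluding from such data that A caused B, merely because B reliably follows A, is precisely the post hoc inference.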

Slippery slope

Definition: The arguer claims that a sort of chain reaction, usually ending in some dire consequence, will take place, but in fact there is not enough evidence for that assumption. The arguer asserts that if we take even one step onto the "slippery slope," we will end up sliding all the way to the bottom; he or she assumes we can't stop halfway down the hill.[25]

False analogy

This error in reasoning occurs when claims are supported by unsound comparisons, hence the false analogy's informal nickname of the "apples and oranges" fallacy.[26]

Measurement fallacy

Some of the fallacies described above may be committed in the context of measurement. Where mathematical fallacies are subtle mistakes in reasoning leading to invalid mathematical proofs, measurement fallacies are unwarranted inferential leaps involved in the extrapolation of raw data to a measurement-based value claim. The ancient Greek Sophist Protagoras was one of the first thinkers to propose that humans can generate reliable measurements through his "human-measure" principle and the practice of dissoi logoi (arguing multiple sides of an issue).[27][28] This history helps explain why measurement fallacies are informed by informal logic and argumentation theory.

Knowledge value measurement fallacy

Increasing availability and circulation of big data are driving proliferation of new metrics for scholarly authority,[29][30] and there is lively discussion regarding the relative usefulness of such metrics for measuring the value of knowledge production in the context of an "information tsunami".[31]

For example, anchoring fallacies can occur when unwarranted weight is given to data generated by metrics that the arguers themselves acknowledge are flawed. For instance, limitations of the journal impact factor (JIF) are well documented,[32] and even JIF pioneer Eugene Garfield notes, "while citation data create new tools for analyses of research performance, it should be stressed that they supplement rather than replace other quantitative- and qualitative-indicators."[33] To the extent that arguers disregard the acknowledged limitations of JIF-generated data in evaluative judgments, or leave behind Garfield's "supplement rather than replace" caveat, they court commission of anchoring fallacies.

A naturalistic fallacy can occur for example in the case of sheer quantity metrics based on the premise "more is better"[31] or, in the case of developmental assessment in the field of psychology, "higher is better."[34]

A false analogy occurs when claims are supported by unsound comparisons between data points. For example, the Scopus and Web of Science bibliographic databases have difficulty distinguishing between citations of scholarly work that are arms-length endorsements, ceremonial citations, or negative citations (indicating the citing author withholds endorsement of the cited work).[29] Hence, measurement-based value claims premised on the uniform quality of all citations may be questioned on false analogy grounds.

Consider, as another example, Academic Analytics' Faculty Scholarly Productivity Index, which purports to measure overall faculty productivity yet does not capture data based on citations in books. This creates the possibility that low productivity measurements using the tool may constitute argument from silence fallacies, to the extent that such measurements are supported by the absence of book citation data.

Ecological fallacies can be committed when one measures scholarly productivity of a sub-group of individuals (e.g. "Puerto Rican" faculty) via reference to aggregate data about a larger and different group (e.g. "Hispanic" faculty).[35]

Intentional fallacy

Sometimes a speaker or writer uses a fallacy intentionally. In any context, including academic debate, a conversation among friends, political discourse, advertising, or for comedic purposes, the arguer may use fallacious reasoning to try to persuade the listener or reader, by means other than offering relevant evidence, that the conclusion is true.

Examples of this include the speaker or writer:[36]

  1. Diverting the argument to unrelated issues with a red herring (Ignoratio elenchi)
  2. Insulting someone's character (argumentum ad hominem)
  3. Assuming the conclusion of an argument, a kind of circular reasoning, also called "begging the question" (petitio principii)
  4. Making jumps in logic (non sequitur)
  5. Identifying a false cause and effect (post hoc ergo propter hoc)
  6. Asserting that everyone agrees (argumentum ad populum, bandwagoning)
  7. Creating a "false dilemma" ("either-or fallacy") in which the situation is oversimplified
  8. Selectively using facts (card-stacking)
  9. Making false or misleading comparisons (false equivalence and false analogy)
  10. Generalizing quickly and sloppily (hasty generalization)

In humor, errors of reasoning are used for comical purposes. Groucho Marx used fallacies of amphiboly, for instance, to make ironic statements; Gary Larson and Scott Adams employed fallacious reasoning in many of their cartoons. Wes Boyer and Samuel Stoddard have written a humorous essay teaching students how to be persuasive by means of a whole host of informal and formal fallacies.[37]

Assessment — pragmatic theory

According to the pragmatic theory,[38] a fallacy can in some instances be an error, the use of a heuristic (a short version of an argumentation scheme) to jump to a conclusion. In other, more worrying instances, it is a tactic or ploy used inappropriately in argumentation to try to get the best of a speech partner unfairly. There are always two parties to an argument containing a fallacy: the perpetrator and the intended victim. The dialogue framework required to support the pragmatic theory of fallacy is built on the presumption that argumentative dialogue has both an adversarial component and a collaborative component. A dialogue has individual goals for each participant, but also collective (shared) goals that apply to all participants. A fallacy of the second kind is seen as more than a simple violation of a rule of reasonable dialogue; it is also a deceptive tactic of argumentation, based on sleight of hand. Aristotle explicitly compared contentious reasoning to unfair fighting in athletic contests, but the roots of the pragmatic theory go back even further, to the Sophists. The pragmatic theory finds its roots in the Aristotelian conception of a fallacy as a sophistical refutation, but it also supports the view that many of the types of arguments traditionally labelled as fallacies are in fact reasonable techniques of argumentation that can be used, in many cases, to support legitimate goals of dialogue. Hence, on the pragmatic approach, each case needs to be analyzed individually, to determine from the textual evidence whether the argument is fallacious or reasonable.

References

  1. ^ van Eemeren, Frans; Garssen, Bart; Meuffels, Bert (2009). Fallacies and Judgments of Reasonableness: Empirical Research Concerning the Pragma-Dialectical Discussion Rules. Dordrecht: Springer. doi:10.1007/978-90-481-2614-9. ISBN 978-90-481-2613-2.
  2. ^ a b c Harry J. Gensler, The A to Z of Logic (2010:p74). Rowman & Littlefield, ISBN 9780810875968
  3. ^ Woods, John (2004). "Who Cares About the Fallacies?". The Death of Argument. Applied Logic Series. 32. pp. 3–23. doi:10.1007/978-1-4020-2712-3_1. ISBN 9789048167005.
  4. ^ Bustamente, Thomas; Dahlman, Christian, eds. (2015). Argument types and fallacies in legal argumentation. Heidelberg: Springer International Publishing. p. x. ISBN 978-3-319-16147-1.
  5. ^ a b c "Informal Fallacies, Northern Kentucky University". Retrieved 2013-09-10.
  6. ^ a b Dowden, Bradley. "Fallacy". Internet Encyclopedia of Philosophy. Retrieved 17 February 2016.
  7. ^ McMullin, Rian E. (2000). The new handbook of cognitive therapy techniques ([Rev. ed.] ed.). New York: W. W. Norton. ISBN 978-0393703139. OCLC 41580357.
  8. ^ McMurtry, John (December 1990). "The mass media: An analysis of their system of fallacy". Interchange. 21 (4): 49–66. doi:10.1007/BF01810092.
  9. ^ DeLancey, Craig, Ph.D. "Evaluating Arguments—Distinguishing between reasonable and fallacious tactics" (PDF). oswego.edu. self-published. Archived from the original (PDF) on 2013-09-03. Retrieved 7 March 2018.
  10. ^ a b Damer, T. Edward; Rudinow, J.; Barry, V. E.; Munson, R.; Black, A.; Salmon, M. H.; Cederblom, J.; Paulsen, D.; Epstein, R. L.; Kernberger, C.; others (2009), Attacking Faulty Reasoning: A Practical Guide to Fallacy-free Arguments (6th ed.), Belmont, California: Wadsworth, ISBN 978-0-495-09506-4, retrieved 2016-02-24. See also Wikipedia article on the book
  11. ^ Damer 2009,[10] page 52.
  12. ^ Pirie, Madsen (2006). How to Win Every Argument: The Use and Abuse of Logic. A&C Black. p. 46. ISBN 978-0-8264-9006-3. Retrieved 10 September 2015.
  13. ^ "fallacy". Encyclopædia Britannica. Retrieved 13 June 2017.
  14. ^ Frans, van Eemeren; Bart, Garssen; Bert, Meuffels (2009). "1". Fallacies and judgements of reasonableness, Empirical Research Concerning the Pragma-Dialectical Discussion Rules. Dordrecht: Springer Science+Business Media B.V. p. 2. ISBN 978-90-481-2613-2.
  15. ^ "Aristotle's original 13 fallacies". The Non Sequitur. 2008-03-13. Retrieved 2013-05-28.
  16. ^ "Aristotle's 13 fallacies". www.logiclaw.co.uk. Retrieved 2017-12-12.
  17. ^ "PHIL 495: Philosophical Writing (Spring 2008), Texas A&M University". Archived from the original on 2008-09-05. Retrieved 2013-09-10.
  18. ^ Frans, van Eemeren; Bart, Garssen; Bert, Meuffels (2009). "1". Fallacies and judgements of reasonableness, Empirical Research Concerning the Pragma-Dialectical Discussion Rules. Dordrecht: Springer Science+Business Media B.V. p. 3. ISBN 978-90-481-2613-2.
  19. ^ Frans, van Eemeren; Bart, Garssen; Bert, Meuffels (2009). "1". Fallacies and judgements of reasonableness, Empirical Research Concerning the Pragma-Dialectical Discussion Rules. Dordrecht: Springer Science+Business Media B.V. p. 4. ISBN 978-90-481-2613-2.
  20. ^ Frans H. van Eemeren, Bart Garssen, Bert Meuffels (2009). Fallacies and Judgments of Reasonableness: Empirical Research Concerning the Pragma-Dialectical Discussion Rules, p.8. ISBN 9789048126149.
  21. ^ Coffey, P. (1912). The Science of Logic. Longmans, Green, and Company. p. 302. LCCN 12018756. Retrieved 2016-02-22.
  22. ^ Freedman, David A. (2004). Michael S. Lewis-Beck & Alan Bryman & Tim Futing Liao, ed. Encyclopedia of Social Science Research Methods. Thousand Oaks, CA: Sage. pp. 293–295. ISBN 978-0761923633.
  23. ^ Boudry, Maarten (2017). "The Fallacy Fork: Why It's Time to Get Rid of Fallacy Theory". Skeptical Inquirer. 41 (5): 46–51.
  24. ^ Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic (12 ed.). Pearson Education, Inc. ISBN 978-0-13-189834-9. p.125
  25. ^ "The Most Common Logical Fallacies". www.webpages.uidaho.edu. Retrieved 2017-12-12.
  26. ^ Kornprobst, Markus (2007). "Comparing Apples and Oranges? Leading and Misleading Uses of Historical Analogies". Millennium — Journal of International Studies. 36: 29–49. doi:10.1177/03058298070360010301. Archived from the original on 30 October 2013. Retrieved 29 October 2013.
  27. ^ Schiappa, Edward (1991). Protagoras and Logos: A Study in Greek Philosophy and Rhetoric. Columbia, SC: University of South Carolina Press. ISBN 978-0872497580.
  28. ^ Protagoras (1972). The Older Sophists. Indianapolis, IN: Hackett Publishing Co. ISBN 978-0872205567.
  29. ^ a b Meho, Lokman I. (2007). "The Rise and Rise of Citation Analysis". Physics World. January: 32–36. arXiv:physics/0701012. Bibcode:2007physics...1012M.
  30. ^ Jensen, Michael (June 15, 2007). Riley, Michael G., ed. "The New Metrics of Scholarly Authority". The Chronicle of Higher Education. The Chron. ISSN 0009-5982. OCLC 1554535. Retrieved 28 October 2013.
  31. ^ a b Baveye, Phillippe C. (2010). "Sticker Shock and Looming Tsunami: The High Cost of Academic Serials in Perspective". Journal of Scholarly Publishing. 41 (2): 191–215. doi:10.1353/scp.0.0074.
  32. ^ National Communication Journal (2013). Impact Factors, Journal Quality, and Communication Journals: A Report for the Council of Communication Associations (PDF). Washington, D.C.: National Communication Association. Archived from the original (PDF) on April 4, 2016. Retrieved 2016-02-22.
  33. ^ Garfield, Eugene (1993). "What Citations Tell us About Canadian Research". Canadian Journal of Library and Information Science. 18 (4): 34.
  34. ^ Stein, Zachary (October 2008). "Myth Busting and Metric Making: Refashioning the Discourse about Development". Integral Leadership Review. 8 (5). Archived from the original on October 30, 2013. Retrieved October 28, 2013.
  35. ^ Allen, Henry L. (1997). "Faculty Workload and Productivity: Ethnic and Gender Disparities" (PDF). NEA 1997 Almanac of Higher Education: 39. Retrieved October 29, 2013.
  36. ^ Shewan, Edward (2003). "Soundness of Argument". Applications of Grammar: Principles of Effective Communication (2nd ed.). Christian Liberty Press. ISBN 978-1-930367-28-9. Retrieved February 22, 2016.
  37. ^ Boyer, Wes; Stoddard, Samuel. "How to Be Persuasive". Rink Works. Retrieved December 5, 2012.
  38. ^ Walton, Douglas N. (1995). A Pragmatic Theory of Fallacy. Tuscaloosa: University of Alabama Press. p. 324. ISBN 9780817307981.

Further reading

  • C. L. Hamblin, Fallacies, Methuen London, 1970. reprinted by Vale Press in 1998 as ISBN 0-916475-24-7.
  • Hans V. Hansen; Robert C. Pinto (1995). Fallacies: classical and contemporary readings. Penn State Press. ISBN 978-0-271-01417-3.
  • Frans van Eemeren; Bart Garssen; Bert Meuffels (2009). Fallacies and Judgments of Reasonableness: Empirical Research Concerning the Pragma-Dialectical Discussion. Springer. ISBN 978-90-481-2613-2.
  • Douglas N. Walton, Informal logic: A handbook for critical argumentation. Cambridge University Press, 1989.
  • Douglas, Walton (1987). Informal Fallacies. Amsterdam: John Benjamins.
  • Walton, Douglas (1995). A Pragmatic Theory of Fallacy. Tuscaloosa: University of Alabama Press.
  • Walton, Douglas (2010). "Why Fallacies Appear to Be Better Arguments than They Are". Informal Logic. 30 (2): 159–184.
  • John Woods (2004). The death of argument: fallacies in agent based reasoning. Springer. ISBN 978-1-4020-2663-8.
  • Fearnside, W. Ward and William B. Holther, Fallacy: The Counterfeit of Argument, 1959.
  • Vincent F. Hendricks, Thought 2 Talk: A Crash Course in Reflection and Expression, New York: Automatic Press / VIP, 2005, ISBN 87-991013-7-8
  • D. H. Fischer, Historians' Fallacies: Toward a Logic of Historical Thought, Harper Torchbooks, 1970.
  • Warburton Nigel, Thinking from A to Z, Routledge 1998.
  • Sagan, Carl, "The Demon-Haunted World: Science As a Candle in the Dark". Ballantine Books, March 1997 ISBN 0-345-40946-9, 480 pgs. 1996 hardback edition: Random House, ISBN 0-394-53512-X, xv+457 pages plus addenda insert (some printings). Ch.12.


Ad hominem

Ad hominem (Latin for "to the person"), short for argumentum ad hominem, is a fallacious argumentative strategy whereby genuine discussion of the topic at hand is avoided by attacking the character, motive, or another attribute of the person making the argument, or of persons associated with the argument, rather than the substance of the argument itself. The terms ad mulierem and ad feminam have been used specifically when the person receiving the criticism is female.

However, its original meaning was an argument "calculated to appeal to the person addressed more than to impartial reason". Fallacious ad hominem reasoning is categorized among informal fallacies, more precisely as a genetic fallacy, a subcategory of fallacies of irrelevance.

Argument from authority

An argument from authority (argumentum ab auctoritate), also called an appeal to authority, or argumentum ad verecundiam, is a form of defeasible argument in which a claimed authority's support is used as evidence for an argument's conclusion. It is well known as a fallacy, though it is used in a cogent form when all sides of a discussion agree on the reliability of the authority in the given context.

Argumentum ad populum

In argumentation theory, an argumentum ad populum (Latin for "argument to the people") is a fallacious argument that concludes that a proposition must be true because many or most people believe it, often concisely encapsulated as: "If many believe so, it is so."

This type of argument is known by several names, including appeal to the masses, appeal to belief, appeal to the majority, appeal to democracy, appeal to popularity, argument by consensus, consensus fallacy, authority of the many, bandwagon fallacy, vox populi, and in Latin as argumentum ad numerum ("appeal to the number"), fickle crowd syndrome, and consensus gentium ("agreement of the clans"). It is also the basis of a number of social phenomena, including communal reinforcement and the bandwagon effect. The Chinese proverb "three men make a tiger" concerns the same idea.

This fallacy is similar in structure to certain other fallacies that involve a confusion between the justification of a belief and its widespread acceptance by a given group of people. When an argument uses the appeal to the beliefs of a group of supposed experts, it takes on the form of an appeal to authority; if the appeal is to the beliefs of a group of respected elders or the members of one's community over a long period of time, then it takes on the form of an appeal to tradition.

One who commits this fallacy may assume that individuals commonly analyze and edit their beliefs and behaviors. This is often not the case. (See conformity.)

The argumentum ad populum can be a valid argument in inductive logic; for example, a poll of a sizeable population may find that 100% prefer a certain brand of product over another. A cogent (strong) argument can then be made that the next person to be considered will also very likely prefer that brand (though not with certainty, since there could be exceptions), and the poll is valid evidence for that claim. However, it is unsuitable as proof in deductive reasoning, for instance to say that the poll proves the preferred brand superior to the competition in its composition, or that everyone prefers that brand to the other.

Association fallacy

An association fallacy is an informal inductive fallacy of the hasty-generalization or red-herring type, which asserts, by irrelevant association and often by appeal to emotion, that qualities of one thing are inherently qualities of another. Two types of association fallacy are sometimes referred to as guilt by association and honor by association.

Begging the question

Begging the question is an informal fallacy that occurs when an argument's premises assume the truth of the conclusion, instead of supporting it. It is a type of circular reasoning: an argument that requires that the desired conclusion be true. This often occurs in an indirect way such that the fallacy's presence is hidden, or at least not easily apparent.

The phrase begging the question originated in the 16th century as a mistranslation of the Latin petitio principii, which actually translates to "assuming the initial point". In modern vernacular usage, "begging the question" is often used to mean "raising the question" or "dodging the question". In contexts that demand strict adherence to a technical definition of the term, many consider these usages incorrect.

Cherry picking

Cherry picking, suppressing evidence, or the fallacy of incomplete evidence is the act of pointing to individual cases or data that seem to confirm a particular position while ignoring a significant portion of related cases or data that may contradict that position. It is a kind of fallacy of selective attention, the most common example of which is the confirmation bias. Cherry picking may be committed intentionally or unintentionally. This fallacy is a major problem in public debate.

The term is based on the perceived process of harvesting fruit, such as cherries. The picker would be expected to select only the ripest and healthiest fruits. An observer who sees only the selected fruit may thus wrongly conclude that most, or even all, of the tree's fruit is in similarly good condition. This can also give a false impression of the quality of the fruit (since it is only a sample and not a representative one).

Cherry picking has a negative connotation as the practice neglects, overlooks or directly suppresses evidence that could lead to a complete picture.

A concept sometimes confused with cherry picking is the idea of gathering only the fruit that is easy to harvest, while ignoring other fruit that is higher up on the tree and thus more difficult to obtain (see low-hanging fruit).

Cherry picking can be found in many logical fallacies. For example, the "fallacy of anecdotal evidence" tends to overlook large amounts of data in favor of data known personally, "selective use of evidence" rejects material unfavorable to an argument, while a false dichotomy picks only two options when more are available. Cherry picking can refer to the selection of data or data sets so that a study or survey will give desired, predictable results, which may be misleading or even completely contrary to reality.
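The selective-attention pattern can be illustrated with a toy data set; the numbers below are invented purely for illustration:

```python
import statistics

# Hypothetical trial results: improvement scores for a treatment,
# where negative values mean the patient got worse.
results = [4, -3, 5, -6, 2, -4, 3, -5, 1, -2]

# A cherry-picker reports only the favorable cases.
favorable = [r for r in results if r > 0]

print(statistics.mean(results))    # the full picture: slightly negative
print(statistics.mean(favorable))  # the cherry-picked picture: positive
```

The full sample suggests the treatment is mildly harmful, while the cherry-picked subset suggests it clearly works; both numbers are "true", but only one is representative.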

Correlation does not imply causation

In statistics, many statistical tests calculate correlations between variables and when two variables are found to be correlated, it is tempting to assume that this shows that one variable causes the other. That "correlation proves causation" is considered a questionable cause logical fallacy when two events occurring together are taken to have established a cause-and-effect relationship. This fallacy is also known as cum hoc ergo propter hoc, Latin for "with this, therefore because of this", and "false cause". A similar fallacy, that an event that followed another was necessarily a consequence of the first event, is the post hoc ergo propter hoc (Latin for "after this, therefore because of this.") fallacy.

For example, in a widely studied case, numerous epidemiological studies showed that women taking combined hormone replacement therapy (HRT) also had a lower-than-average incidence of coronary heart disease (CHD), leading doctors to propose that HRT was protective against CHD. But randomized controlled trials showed that HRT caused a small but statistically significant increase in risk of CHD. Re-analysis of the data from the epidemiological studies showed that women undertaking HRT were more likely to be from higher socio-economic groups (ABC1), with better-than-average diet and exercise regimens. The use of HRT and decreased incidence of coronary heart disease were coincident effects of a common cause (i.e. the benefits associated with a higher socioeconomic status), rather than a direct cause and effect, as had been supposed.

As with any logical fallacy, identifying that the reasoning behind an argument is flawed does not imply that the resulting conclusion is false. In the instance above, if the trials had found that hormone replacement therapy does in fact reduce the incidence of coronary heart disease, the assumption of causality would have been correct, although the logic behind the assumption would still have been flawed. Indeed, a few go further, using correlation as a basis for testing a hypothesis to try to establish a true causal relationship; examples are the Granger causality test, convergent cross mapping, and the Liang-Kleeman information flow.
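A small simulation with invented data shows how a hidden common cause, like the socioeconomic status in the HRT example, can produce a strong correlation between two variables that do not affect each other at all:

```python
import random

random.seed(0)

# Toy model: a hidden confounder Z drives both X and Y.
# X and Y have no direct causal link to each other.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]   # X caused by Z plus noise
y = [zi + random.gauss(0, 1) for zi in z]   # Y caused by Z, not by X

def corr(a, b):
    """Pearson correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

print(corr(x, y))  # around 0.5, despite no direct causal link
```

Here the theoretical correlation is 0.5 (Z contributes half the variance of each variable), so an observer who infers "X causes Y" from the correlation alone commits the cum hoc fallacy.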

False dilemma

A false dilemma is a type of informal fallacy in which something is falsely claimed to be an "either/or" situation, when in fact there is at least one additional option.

A false dilemma can arise intentionally, when a fallacy is used in an attempt to force a choice or outcome. The opposite of this fallacy is false compromise. For example, what is described as the "TINA factor" in elections is often in reality a false dilemma, as there are about 3 to 25 electoral candidates for most electoral seats.

The false dilemma fallacy can also arise simply by accidental omission of additional options rather than by deliberate deception. For example, "Stacey spoke out against capitalism, therefore she must be a communist" (she may be neither capitalist nor communist). "Roger opposed an atheist argument against Christianity, so he must be a Christian" (it is assumed that the opposition by itself makes him a Christian, when Roger might be an atheist who disagrees with the logic of some particular argument against Christianity). Additionally, it can be the result of a habitual tendency, whatever the cause, to view the world with limited sets of options.

Some philosophers and scholars believe that "unless a distinction can be made rigorous and precise it isn't really a distinction". An exception is analytic philosopher John Searle, who called it an incorrect assumption that produces false dichotomies. Searle insists that "it is a condition of the adequacy of a precise theory of an indeterminate phenomenon that it should precisely characterize that phenomenon as indeterminate; and a distinction is no less a distinction for allowing for a family of related, marginal, diverging cases." Similarly, when two options are presented, they often are, although not always, two extreme points on some spectrum of possibilities; this may lend credence to the larger argument by giving the impression that the options are mutually exclusive of each other, even though they need not be. Furthermore, the options in false dichotomies typically are presented as being collectively exhaustive, in which case the fallacy may be overcome, or at least weakened, by considering other possibilities, or perhaps by considering a whole spectrum of possibilities, as in fuzzy logic.

Faulty generalization

A faulty generalization is a conclusion about all or many instances of a phenomenon that has been reached on the basis of just one or just a few instances of that phenomenon. It is an example of jumping to conclusions. For example, we may generalize about all people, or all members of a group, based on what we know about just one or just a few people. If we meet an angry person from a given country X, we may suspect that most people in country X are often angry. If we see only white swans, we may suspect that all swans are white. Faulty generalizations may lead to further incorrect conclusions. We may, for example, conclude that citizens of country X are genetically inferior, or that poverty is generally the fault of the poor.

Expressed in more precise philosophical language, a fallacy of defective induction is a conclusion that has been made on the basis of weak premises. Unlike fallacies of relevance, in fallacies of defective induction, the premises are related to the conclusions yet only weakly buttress the conclusions. A faulty generalization is thus produced. This inductive fallacy is any of several errors of inductive inference.

Formal fallacy

In philosophy, a formal fallacy, deductive fallacy, logical fallacy or non sequitur (Latin for "it does not follow") is a pattern of reasoning rendered invalid by a flaw in its logical structure that can neatly be expressed in a standard logic system, for example propositional logic. It is defined as a deductive argument that is invalid. The argument itself could have true premises, but still have a false conclusion. Thus, a formal fallacy is a fallacy where deduction goes wrong, and is no longer a logical process. However, this may not affect the truth of the conclusion since validity and truth are separate in formal logic.

While a logical argument is a non sequitur if, and only if, it is invalid, the term "non sequitur" typically refers to those types of invalid arguments which do not constitute formal fallacies covered by particular terms (e.g. affirming the consequent). In other words, in practice, "non sequitur" refers to an unnamed formal fallacy.

A special case is a mathematical fallacy, an intentionally invalid mathematical proof, often with the error subtle and somehow concealed. Mathematical fallacies are typically crafted and exhibited for educational purposes, usually taking the form of spurious proofs of obvious contradictions.

A formal fallacy is contrasted with an informal fallacy, which may have a valid logical form and yet be unsound because one or more premises are false.

Gambler's fallacy

The gambler's fallacy, also known as the Monte Carlo fallacy or the fallacy of the maturity of chances, is the mistaken belief that, if something happens more frequently than normal during a given period, it will happen less frequently in the future (or vice versa). In situations where the outcome being observed is truly random and consists of independent trials of a random process, this belief is false. The fallacy can arise in many situations, but is most strongly associated with gambling, where it is common among players.

The term "Monte Carlo fallacy" originates from the best-known example of the phenomenon, which occurred in the Monte Carlo Casino in 1913.
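The independence claim is easy to check with a simulation: after a run of three heads, a fair coin still comes up heads about half the time, so no outcome is ever "due":

```python
import random

random.seed(1)

# Simulate a long sequence of fair coin flips (True = heads).
flips = [random.random() < 0.5 for _ in range(200_000)]

# Collect the outcome immediately following every streak of three heads.
after_streak = [flips[i] for i in range(3, len(flips))
                if flips[i - 3] and flips[i - 2] and flips[i - 1]]

# Independence means this stays near 0.5; tails is not "due".
print(sum(after_streak) / len(after_streak))
```

The gambler's fallacy would predict a value well below 0.5 after a heads streak; independent trials have no memory, so the simulation contradicts it.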

Irrelevant conclusion

Irrelevant conclusion, also known as ignoratio elenchi (Latin for 'ignoring refutation') or missing the point, is the informal fallacy of presenting an argument whose conclusion fails to address the issue in question, whether or not the argument itself is logically valid and sound. It falls into the broad class of relevance fallacies.

Irrelevant conclusion should not be confused with formal fallacy, an argument whose conclusion does not follow from its premises.

List of fallacies

In reasoning to argue a claim, a fallacy is reasoning that is evaluated as logically incorrect, undermining the argument's logical validity and permitting its recognition as unsound. Regardless of their soundness, all registers and manners of speech can demonstrate fallacies.

Because of their variety of structure and application, fallacies are challenging to classify so as to satisfy all practitioners. Fallacies can be classified strictly by either their structure or content, such as classifying them as formal fallacies or informal fallacies, respectively. The classification of informal fallacies may be subdivided into categories such as linguistic, relevance through omission, relevance through intrusion, and relevance through presumption. On the other hand, fallacies may be classified by the process by which they occur, such as material fallacies (content), verbal fallacies (linguistic), and again formal fallacies (error in inference). In turn, material fallacies may be placed into the more general category of informal fallacies, while formal fallacies may be clearly placed into the more precise category of logical (deductive) fallacies. Yet verbal fallacies may be placed into either informal or deductive classifications; compare equivocation, which is a word- or phrase-based ambiguity (e.g. "he is mad", which may refer to either him being angry or clinically insane), to the fallacy of composition, which is a premise- and inference-based ambiguity (e.g. "this must be a good basketball team because each of its members is an outstanding player").

Faulty inferences in deductive reasoning are common formal or logical fallacies. As the nature of inductive reasoning is based on probability, a fallacious inductive argument, or one that is potentially misleading, is often classified as "weak".

The conscious or habitual use of fallacies as rhetorical devices is prevalent when the desire to persuade focuses more on communication and eliciting common agreement than on the correctness of the reasoning. The effective use of a fallacy by an orator may be considered clever, but by the same token, the reasoning of that orator should be recognized as unsound, and thus the orator's claim, supported by an unsound argument, will be regarded as unfounded and dismissed.

No true Scotsman

No true Scotsman or appeal to purity is an informal fallacy in which one attempts to protect a universal generalization from counterexamples by changing the definition in an ad hoc fashion to exclude the counterexample. Rather than denying the counterexample or rejecting the original claim, this fallacy modifies the subject of the assertion to exclude the specific case or others like it by rhetoric, without reference to any specific objective rule ("no true Scotsman would do such a thing"; i.e., those who perform that action are not part of our group and thus criticism of that action is not criticism of the group).

Post hoc ergo propter hoc

Post hoc ergo propter hoc (Latin: "after this, therefore because of this") is a logical fallacy that states "Since event Y followed event X, event Y must have been caused by event X." It is often shortened simply to post hoc fallacy.

A logical fallacy of the questionable cause variety, it is subtly different from the fallacy cum hoc ergo propter hoc ("with this, therefore because of this"), in which two events occur simultaneously or the chronological ordering is insignificant or unknown.

Post hoc is a particularly tempting error because correlation appears to suggest causality. The fallacy lies in a conclusion based solely on the order of events, rather than taking into account other factors potentially responsible for the result that might rule out the connection.

A simple example is "the rooster crows immediately before sunrise; therefore the rooster causes the sun to rise."

Red herring

A red herring is something that misleads or distracts from a relevant or important issue. It may be either a logical fallacy or a literary device that leads readers or audiences towards a false conclusion. A red herring might be intentionally used, such as in mystery fiction or as part of rhetorical strategies (e.g., in politics), or it could be inadvertently used during argumentation.

The term was popularized in 1807 by English polemicist William Cobbett, who told a story of having used a kipper (a strong-smelling smoked fish) to divert hounds from chasing a hare.

Straw man

A straw man is a form of argument and an informal fallacy based on giving the impression of refuting an opponent's argument, while actually refuting an argument that was not presented by that opponent. One who engages in this fallacy is said to be "attacking a straw man."

The typical straw man argument creates the illusion of having completely refuted or defeated an opponent's proposition through the covert replacement of it with a different proposition (i.e., "stand up a straw man") and the subsequent refutation of that false argument ("knock down a straw man") instead of the opponent's proposition.

Straw man tactics in the United Kingdom can be known as an Aunt Sally, after a pub game of the same name, where patrons threw sticks or battens at a post to knock off a skittle balanced on top.

Sunk cost

In economics and business decision-making, a sunk cost is a cost that has already been incurred and cannot be recovered (also known as retrospective cost).

Sunk costs are sometimes contrasted with prospective costs, which are future costs that may be incurred or changed if an action is taken. In that regard, both retrospective and prospective costs could be either fixed costs (continuous for as long as the business is in operation and unaffected by output volume) or variable costs (dependent on volume). However, many economists consider it a mistake to classify sunk costs as "fixed" or "variable." For example, if a firm sinks $400 million on an enterprise software installation, that cost is "sunk" because it was a one-time expense and cannot be recovered once spent. A "fixed" cost would be monthly payments made as part of a service contract or licensing deal with the company that set up the software. The upfront irretrievable payment for the installation should not be deemed a "fixed" cost, with its cost spread out over time. Sunk costs should be kept separate. The "variable costs" for this project might include data centre power usage, for example.

In traditional microeconomic theory, only prospective (future) costs are relevant to an investment decision. The fields of traditional economics propose that economic actors should not let sunk costs influence their decisions. Doing so would not be rationally assessing a decision exclusively on its own merits. Alternatively, a decision-maker might make rational decisions according to their own incentives, outside of efficiency or profitability. This is considered to be an incentive problem and is distinct from a sunk cost problem. Evidence from behavioral economics suggests this theory may fail to predict real-world behavior. Sunk costs do, in fact, influence actors' decisions because humans are prone to loss aversion and framing effects.

Sunk costs should not affect the rational decision-maker's best choice. However, until a decision-maker irreversibly commits resources, the prospective cost is an avoidable future cost and is properly included in any decision-making processes.

For instance, if someone is considering preordering movie tickets but has not actually purchased them yet, the cost remains avoidable. If the price of the tickets then rises beyond the value the buyer places on them, the buyer should treat the change as a prospective cost in the decision-making process and re-evaluate the decision accordingly.
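The distinction between sunk and prospective costs can be sketched with hypothetical numbers; the figures below are invented for illustration:

```python
# Hypothetical figures, for illustration only.
ticket_price_paid = 15   # sunk: already spent, non-refundable
value_of_movie = 10      # enjoyment of the movie, in dollar terms
cost_of_going = 5        # prospective: travel, time, etc.

# A rational choice compares only prospective costs and benefits;
# the $15 already paid is lost whether we go or stay home.
net_if_go = value_of_movie - cost_of_going   # +5
net_if_stay = 0

decision = "go" if net_if_go > net_if_stay else "stay"
print(decision)  # "go" -- but not because of the $15 already spent
```

Reasoning "I must go or the $15 is wasted" would add the sunk cost back into the comparison; since it appears identically on both sides of the ledger, it cancels out and cannot change the rational choice.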

Tu quoque

Tu quoque (Latin for "you also"), or the appeal to hypocrisy, is a fallacy that intends to discredit the opponent's argument by asserting the opponent's failure to act consistently in accordance with its conclusion(s).

This page is based on a Wikipedia article written by its contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.