Inductive reasoning

Inductive reasoning is a method of reasoning in which the premises are viewed as supplying some evidence for the truth of the conclusion; this is in contrast to deductive reasoning. While the conclusion of a deductive argument is certain, the truth of the conclusion of an inductive argument may be probable, based upon the evidence given.[1]

Many dictionaries define inductive reasoning as the derivation of general principles from specific observations, though there are many inductive arguments that do not have that form.[2]

Comparison with deductive reasoning

Argument terminology used in logic
Argument terminology

Inductive reasoning is a form of argument that—in contrast to deductive reasoning—allows for the possibility that a conclusion can be false, even if all of the premises are true.[3] Instead of being valid or invalid, inductive arguments are either strong or weak, according to how probable it is that the conclusion is true.[4] We may call an inductive argument plausible, probable, reasonable, justified or strong, but never certain or necessary. Logic affords no bridge from the probable to the certain.

The futility of attaining certainty through some critical mass of probability can be illustrated with a coin-toss exercise. Suppose someone shows us a coin and tests whether it is a fair one or two-headed. They flip the coin ten times, and ten times it comes up heads. At this point, there is strong reason to believe the coin is two-headed: the chance of ten heads in a row from a fair coin is about 0.000976, less than one in one thousand. Then, after 100 flips, every toss has come up heads. Now there is "virtual" certainty that the coin is two-headed. Still, one can neither logically nor empirically rule out that the next toss will produce tails; no matter how many times in a row it comes up heads, this remains the case. If one programmed a machine to flip a coin over and over continuously, at some point the result would be a string of 100 heads. In the fullness of time, all combinations will appear.

As for the slim prospect of getting ten out of ten heads from a fair coin (the outcome that made the coin appear biased), many may be surprised to learn that any particular sequence of ten heads or tails, such as H-H-T-T-H-T-H-H-H-T, is exactly as unlikely, and yet one such sequence occurs in every trial of ten tosses. That means every possible ten-toss result has the same probability as getting ten out of ten heads, namely 0.000976. Whatever sequence one records, that exact sequence had a chance of 0.000976 of occurring.
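The arithmetic above can be checked directly. This short sketch (illustrative only) computes the probabilities exactly with rational numbers:

```python
from fractions import Fraction

# Probability of any one specific sequence of 10 fair-coin tosses,
# including ten heads in a row: (1/2)**10 = 1/1024.
p_specific = Fraction(1, 2) ** 10
print(p_specific)         # 1/1024
print(float(p_specific))  # 0.0009765625, i.e. less than one in a thousand

# After 100 tosses the same reasoning gives (1/2)**100: astronomically
# small, yet still strictly greater than zero, so a tails on the next
# toss can never be ruled out logically.
p_hundred = Fraction(1, 2) ** 100
print(p_hundred > 0)      # True
```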

An argument is deductive when the conclusion is necessary given the premises. That is, the conclusion cannot be false if the premises are true.

If a deductive conclusion follows duly from its premises, then it is valid; otherwise, it is invalid (to say an argument is invalid is not to say its conclusion is false; it may have a true conclusion, just not on account of the premises). An examination of the following examples will show that the relationship between premises and conclusion is such that the truth of the conclusion is already implicit in the premises. Bachelors are unmarried because we say they are; we have defined them so. Socrates is mortal because we have included him in a set of beings that are mortal. The conclusion of a valid deductive argument is already contained in the premises, since its truth is strictly a matter of logical relations; it cannot say more than its premises. Inductive premises, on the other hand, draw their substance from fact and evidence, and the conclusion accordingly makes a factual claim or prediction. Its reliability varies in proportion to the evidence. Induction seeks to reveal something new about the world; one could say that induction wants to say more than is contained in the premises.

To better see the difference between inductive and deductive arguments, consider that it would not make sense to say, "All rectangles so far examined have four right angles, so the next one I see will have four right angles." This would treat logical relations as something factual and discoverable, and thus variable and uncertain. Likewise, speaking deductively, we may permissibly say, "All unicorns can fly; I have a unicorn named Charlie; therefore Charlie can fly." This deductive argument is valid because the logical relations hold; we are not interested in their factual soundness.

Inductive reasoning is inherently uncertain. It only deals in the extent to which, given the premises, the conclusion is credible according to some theory of evidence. Examples include a many-valued logic, Dempster–Shafer theory, or probability theory with rules for inference such as Bayes' rule. Unlike deductive reasoning, it does not rely on universals holding over a closed domain of discourse to draw conclusions, so it can be applicable even in cases of epistemic uncertainty (technical issues with this may arise however; for example, the second axiom of probability is a closed-world assumption).[5]

Another crucial difference between these two types of argument is that deductive certainty is impossible in non-axiomatic systems such as reality, leaving inductive reasoning as the primary route to (probabilistic) knowledge of such systems.[6]

Given that "if A is true then that would cause B, C, and D to be true", an example of deduction would be "A is true therefore we can deduce that B, C, and D are true". An example of induction would be "B, C, and D are observed to be true therefore A might be true". A is a reasonable explanation for B, C, and D being true.

For example:

A large enough asteroid impact would create a very large crater and cause a severe impact winter that could drive the non-avian dinosaurs to extinction.
We observe that there is a very large crater in the Gulf of Mexico dating to very near the time of the extinction of the non-avian dinosaurs.
Therefore, it is possible that this impact could explain why the non-avian dinosaurs became extinct.

Note, however, that the asteroid explanation for the mass extinction is not necessarily correct. Other events with the potential to affect global climate also coincide with the extinction of the non-avian dinosaurs, for example the release of volcanic gases (particularly sulfur dioxide) during the formation of the Deccan Traps in India.

Another example of an inductive argument:

All biological life forms that we know of depend on liquid water to exist.
Therefore, if we discover a new biological life form it will probably depend on liquid water to exist.

This argument could have been made every time a new biological life form was found, and would have been correct every time; however, it is still possible that in the future a biological life form not requiring liquid water could be discovered. As a result, the argument may be stated less formally as:

All biological life forms that we know of depend on liquid water to exist.
All biological life probably depends on liquid water to exist.

A classical example of an incorrect inductive argument was presented by John Vickers:

All of the swans we have seen are white.
Therefore, we know that all swans are white.

The correct conclusion would be: we expect all swans to be white.

Succinctly put: deduction is about certainty/necessity; induction is about probability.[7] Any single assertion will answer to one of these two criteria. Another approach to the analysis of reasoning is that of modal logic, which deals with the distinction between the necessary and the possible in a way not concerned with probabilities among things deemed possible.

The philosophical definition of inductive reasoning is more nuanced than a simple progression from particular/individual instances to broader generalizations. Rather, the premises of an inductive logical argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; that is, they suggest truth but do not ensure it. In this manner, there is the possibility of moving from general statements to individual instances (for example, statistical syllogisms, discussed below).

Note that the definition of inductive reasoning described here differs from mathematical induction, which, in fact, is a form of deductive reasoning. Mathematical induction is used to provide strict proofs of the properties of recursively defined sets.[8] The deductive nature of mathematical induction derives from its basis in a non-finite number of cases, in contrast with the finite number of cases involved in an enumerative induction procedure like proof by exhaustion. Both mathematical induction and proof by exhaustion are examples of complete induction. Complete induction is a masked type of deductive reasoning.

History

Ancient philosophy

For a move from particular to universal, Aristotle in the 300s BCE used the Greek word epagogé, which Cicero translated into the Latin word inductio.[9] In the 300s CE, Sextus Empiricus maintained that all knowledge derives from sensory experience and concluded in his Outlines of Pyrrhonism that induction cannot justify the acceptance of universal statements as true.[9]

Early modern philosophy

In 1620, the early modern philosopher Francis Bacon repudiated the value of mere experience and enumerative induction alone. His method of inductivism required that minute and many-varied observations uncovering the natural world's structure and causal relations be coupled with enumerative induction in order to have knowledge beyond the present scope of experience. Inductivism therefore required enumerative induction as a component.

The empiricist David Hume's 1740 stance found enumerative induction to have no rational, let alone logical, basis; instead, induction was a custom of the mind and an everyday requirement to live. While observations, such as the motion of the sun, could be coupled with the principle of the uniformity of nature to produce conclusions that seemed certain, the problem of induction arose from the fact that the uniformity of nature was not a logically valid principle. Hume was sceptical of the application of enumerative induction and reason to reach certainty about unobservables, and especially of the inference of causality from the fact that modifying an aspect of a relationship prevents or produces a particular outcome.

Awakened from "dogmatic slumber" by a German translation of Hume's work, Kant sought to explain the possibility of metaphysics. In 1781, Kant's Critique of Pure Reason introduced rationalism as a path toward knowledge distinct from empiricism. Kant sorted statements into two types. Analytic statements are true by virtue of the arrangement of their terms and meanings, thus analytic statements are tautologies, merely logical truths, true by necessity. Whereas synthetic statements hold meanings to refer to states of facts, contingencies. Finding it impossible to know objects as they truly are in themselves, however, Kant concluded that the philosopher's task should not be to try to peer behind the veil of appearance to view the noumena, but simply that of handling phenomena.

Reasoning that the mind must contain its own categories for organizing sense data, making experience of space and time possible, Kant concluded that the uniformity of nature was an a priori truth.[10] A class of synthetic statements that was not contingent but true by necessity, was then synthetic a priori. Kant thus saved both metaphysics and Newton's law of universal gravitation, but as a consequence discarded scientific realism and developed transcendental idealism. Kant's transcendental idealism gave birth to the movement of German idealism. Hegel's absolute idealism subsequently flourished across continental Europe.

Late modern philosophy

Positivism, developed by Saint-Simon and promulgated in the 1830s by his former student Comte, was the first late modern philosophy of science. In the aftermath of the French Revolution, fearing society's ruin, Comte opposed metaphysics. Human knowledge had evolved from religion to metaphysics to science, said Comte, which had flowed from mathematics to astronomy to physics to chemistry to biology to sociology—in that order—describing increasingly intricate domains. All of society's knowledge had become scientific, with questions of theology and of metaphysics being unanswerable. Comte found enumerative induction reliable as a consequence of its grounding in available experience. He asserted the use of science, rather than metaphysical truth, as the correct method for the improvement of human society.

According to Comte, scientific method frames predictions, confirms them, and states laws—positive statements—irrefutable by theology or by metaphysics. Regarding experience as justifying enumerative induction by demonstrating the uniformity of nature,[10] the British philosopher John Stuart Mill welcomed Comte's positivism, but thought scientific laws susceptible to recall or revision, and he withheld his support from Comte's Religion of Humanity. Comte was confident in treating scientific law as an irrefutable foundation for all knowledge, and believed that churches, honouring eminent scientists, ought to focus public mindset on altruism—a term Comte coined—to apply science for humankind's social welfare via sociology, Comte's leading science.

During the 1830s and 1840s, while Comte and Mill were the leading philosophers of science, William Whewell found enumerative induction not nearly as convincing, and, despite the dominance of inductivism, formulated "superinduction".[11] Whewell argued that "the peculiar import of the term Induction" should be recognised: "there is some Conception superinduced upon the facts", that is, "the Invention of a new Conception in every inductive inference". The creation of Conceptions is easily overlooked and prior to Whewell was rarely recognised.[11] Whewell explained:

"Although we bind together facts by superinducing upon them a new Conception, this Conception, once introduced and applied, is looked upon as inseparably connected with the facts, and necessarily implied in them. Having once had the phenomena bound together in their minds in virtue of the Conception, men can no longer easily restore them back to detached and incoherent condition in which they were before they were thus combined."[11]

These "superinduced" explanations may well be flawed, but their accuracy is suggested when they exhibit what Whewell termed consilience—that is, simultaneously predicting the inductive generalizations in multiple areas—a feat that, according to Whewell, can establish their truth. Perhaps to accommodate the prevailing view of science as inductivist method, Whewell devoted several chapters to "methods of induction" and sometimes used the phrase "logic of induction", despite the fact that induction lacks rules and cannot be trained.[11]

In the 1870s, the originator of pragmatism, C S Peirce performed vast investigations that clarified the basis of deductive inference as a mathematical proof (as, independently, did Gottlob Frege). Peirce recognized induction but always insisted on a third type of inference that Peirce variously termed abduction or retroduction or hypothesis or presumption.[12] Later philosophers termed Peirce's abduction, etc, Inference to the Best Explanation (IBE).[13]

Contemporary philosophy

Bertrand Russell

Having highlighted Hume's problem of induction, John Maynard Keynes posed logical probability as its answer, or as near a solution as he could arrive at.[14] Bertrand Russell found Keynes's Treatise on Probability the best examination of induction, and believed that if read with Jean Nicod's Le Probleme logique de l'induction as well as R B Braithwaite's review of Keynes's work in the October 1925 issue of Mind, that would cover "most of what is known about induction", although the "subject is technical and difficult, involving a good deal of mathematics".[15] Two decades later, Russell proposed enumerative induction as an "independent logical principle".[16][17] Russell found:

"Hume's skepticism rests entirely upon his rejection of the principle of induction. The principle of induction, as applied to causation, says that, if A has been found very often accompanied or followed by B, then it is probable that on the next occasion on which A is observed, it will be accompanied or followed by B. If the principle is to be adequate, a sufficient number of instances must make the probability not far short of certainty. If this principle, or any other from which it can be deduced, is true, then the causal inferences which Hume rejects are valid, not indeed as giving certainty, but as giving a sufficient probability for practical purposes. If this principle is not true, every attempt to arrive at general scientific laws from particular observations is fallacious, and Hume's skepticism is inescapable for an empiricist. The principle itself cannot, of course, without circularity, be inferred from observed uniformities, since it is required to justify any such inference. It must, therefore, be, or be deduced from, an independent principle not based on experience. To this extent, Hume has proved that pure empiricism is not a sufficient basis for science. But if this one principle is admitted, everything else can proceed in accordance with the theory that all our knowledge is based on experience. It must be granted that this is a serious departure from pure empiricism, and that those who are not empiricists may ask why, if one departure is allowed, others are forbidden. These, however, are not questions directly raised by Hume's arguments. What these arguments prove—and I do not think the proof can be controverted—is that induction is an independent logical principle, incapable of being inferred either from experience or from other logical principles, and that without this principle, science is impossible."[17]

Gilbert Harman

In a 1965 paper, Gilbert Harman explained that enumerative induction is not an autonomous phenomenon, but is simply a disguised consequence of Inference to the Best Explanation (IBE).[13] IBE is otherwise synonymous with C S Peirce's abduction.[13] Many philosophers of science espousing scientific realism have maintained that IBE is the way that scientists develop approximately true scientific theories about nature.[18]

Criticism

Thinkers as far back as Sextus Empiricus have criticised inductive reasoning.[19] The classic philosophical critique of the problem of induction was given by the Scottish philosopher David Hume.[20]

Although the use of inductive reasoning demonstrates considerable success, the justification for its application has been questionable. Recognizing this, Hume highlighted the fact that our mind often draws conclusions from relatively limited experiences that appear correct but which are actually far from certain. In deduction, the truth value of the conclusion is based on the truth of the premise. In induction, however, the dependence of the conclusion on the premise is always uncertain. For example, let us assume that all ravens are black. The fact that there are numerous black ravens supports the assumption. Our assumption, however, becomes invalid once it is discovered that there are white ravens. Therefore, the general rule "all ravens are black" is not the kind of statement that can ever be certain. Hume further argued that it is impossible to justify inductive reasoning: it cannot be justified deductively, so our only option is to justify it inductively; and since that argument is circular, he concluded, with the help of Hume's fork, that our use of induction is unjustifiable.[21]

Hume nevertheless stated that even if induction were proved unreliable, we would still have to rely on it. So instead of a position of severe skepticism, Hume advocated a practical skepticism based on common sense, where the inevitability of induction is accepted.[22] Bertrand Russell illustrated Hume's skepticism in a story about a turkey, fed every morning without fail, who following the laws of induction concluded that this feeding would always continue, but then his throat was cut on Thanksgiving Day.[23]

In 1963, Karl Popper wrote, "Induction, i.e. inference based on many observations, is a myth. It is neither a psychological fact, nor a fact of ordinary life, nor one of scientific procedure."[24][25] Popper's 1972 book Objective Knowledge—whose first chapter is devoted to the problem of induction—opens, "I think I have solved a major philosophical problem: the problem of induction".[25] In Popper's schema, enumerative induction is "a kind of optical illusion" cast by the steps of conjecture and refutation during a problem shift.[25] An imaginative leap, the tentative solution is improvised, lacking inductive rules to guide it.[25] The resulting, unrestricted generalization is deductive, an entailed consequence of all explanatory considerations.[25] Controversy continued, however, with Popper's putative solution not generally accepted.[26]

More recently, inductive inference has been shown to be capable of arriving at certainty, but only in rare instances, as in programs of machine learning in artificial intelligence (AI).[27] Popper's stance on induction being an illusion has been falsified: enumerative induction exists. Even so, inductive reasoning is overwhelmingly absent from science.[27] Although much-talked of nowadays by philosophers, abduction, or IBE, lacks rules of inference and the inferences reached by those employing it are arrived at with human imagination and creativity.[27]

Biases

Inductive reasoning is also known as hypothesis construction because any conclusions made are based on current knowledge and predictions. As with deductive arguments, biases can distort the proper application of inductive argument, thereby preventing the reasoner from forming the most logical conclusion based on the clues. Examples of these biases include the availability heuristic, confirmation bias, and the predictable-world bias.

The availability heuristic causes the reasoner to depend primarily upon information that is readily available to him or her. People have a tendency to rely on information that is easily accessible in the world around them. For example, in surveys, when people are asked to estimate the percentage of people who died from various causes, most respondents choose the causes that have been most prevalent in the media such as terrorism, murders, and airplane accidents, rather than causes such as disease and traffic accidents, which have been technically "less accessible" to the individual since they are not emphasized as heavily in the world around them.

The confirmation bias is based on the natural tendency to confirm rather than to deny a current hypothesis. Research has demonstrated that people are inclined to seek solutions to problems that are more consistent with known hypotheses rather than attempt to refute those hypotheses. Often, in experiments, subjects will ask questions that seek answers that fit established hypotheses, thus confirming these hypotheses. For example, if it is hypothesized that Sally is a sociable individual, subjects will naturally seek to confirm the premise by asking questions that would produce answers confirming that Sally is, in fact, a sociable individual.

The predictable-world bias revolves around the inclination to perceive order where it has not been proved to exist, either at all or at a particular level of abstraction. Gambling, for example, is one of the most popular examples of predictable-world bias. Gamblers often begin to think that they see simple and obvious patterns in the outcomes and therefore believe that they are able to predict outcomes based upon what they have witnessed. In reality, however, the outcomes of these games are difficult to predict and highly complex in nature. In general, people tend to seek some type of simplistic order to explain or justify their beliefs and experiences, and it is often difficult for them to realise that their perceptions of order may be entirely different from the truth.[28]

Types

The following are types of inductive argument. Notice that while similar, each has a different form.

Generalization

A generalization (more accurately, an inductive generalization) proceeds from a premise about a sample to a conclusion about the population.

The proportion Q of the sample has attribute A.
Therefore:
The proportion Q of the population has attribute A.
Example

There are 20 balls—either black or white—in an urn. To estimate their respective numbers, you draw a sample of four balls and find that three are black and one is white. A good inductive generalization would be that there are 15 black and five white balls in the urn.
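As a minimal sketch, the projection in the urn example is just the sample proportion scaled up to the known population size; the figures below are those of the example:

```python
# Project the sample proportion onto the population (urn example).
population = 20                        # total balls in the urn
sample = {"black": 3, "white": 1}      # the four balls drawn
n = sum(sample.values())

# Estimated count per colour = population * (sample count / sample size).
estimate = {colour: round(population * count / n)
            for colour, count in sample.items()}
print(estimate)  # {'black': 15, 'white': 5}
```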

How much the premises support the conclusion depends upon (a) the number in the sample group, (b) the number in the population, and (c) the degree to which the sample represents the population (which may be achieved by taking a random sample). The hasty generalization and the biased sample are generalization fallacies.

Statistical and inductive generalization

Of a sizeable random sample of voters surveyed, 66% support Measure Z.
Therefore, approximately 66% of voters support Measure Z.

This is a statistical generalization,[29] also called a sample projection.[30] The measure is highly reliable within a well-defined margin of error, provided the sample is large and random. It is readily quantifiable. Compare the preceding argument with the following: "Six of the ten people in my book club are Libertarians. Therefore, about 60% of people are Libertarians." The argument is weak because the sample is non-random and the sample size is very small.
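The sense in which such an argument is "readily quantifiable" can be illustrated with the classical 95% margin of error for a sample proportion, z·sqrt(p(1−p)/n). The sample sizes below are illustrative assumptions, not figures from the text:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Classical margin of error for a sample proportion at ~95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# A large random sample of 1,000 voters at 66% support: tight margin.
print(round(margin_of_error(0.66, 1000), 3))  # 0.029

# The ten-person book club: the same formula gives a huge margin,
# and the sample is not random in the first place.
print(round(margin_of_error(0.60, 10), 3))    # 0.304
```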

So far this year, his son's Little League team has won six of ten games.
By season’s end, they will have won about 60% of the games.

This is an inductive generalization. The inference is less reliable than the statistical generalization: first, because the sample events are non-random, and second, because it is not reducible to mathematical expression. Statistically speaking, there is simply no way to know, measure, and calculate the circumstances affecting performance that will obtain in the future. On a philosophical level, the argument relies on the presupposition that the operation of future events will mirror the past. In other words, it takes for granted a uniformity of nature, an unproven principle that cannot be derived from the empirical data itself. Arguments that tacitly presuppose this uniformity are sometimes called Humean after the philosopher who was first to subject them to philosophical scrutiny.[31]

Statistical syllogism

A statistical syllogism proceeds from a generalization to a conclusion about an individual.

90% of graduates from Excelsior Preparatory school go on to University.
Bob is a graduate of Excelsior Preparatory school.
Bob will go on to University.

This is a statistical syllogism.[32] Even though one cannot be sure Bob will attend university, we can be fully assured of the exact probability for this outcome (given no further information). Arguably the argument is too strong and might be accused of "cheating." After all, the probability is given in the premise. Typically, inductive reasoning seeks to formulate a probability. Two dicto simpliciter fallacies can occur in statistical syllogisms: "accident" and "converse accident".

Simple induction

Simple induction proceeds from a premise about a sample group to a conclusion about another individual.

Proportion Q of the known instances of population P has attribute A.
Individual I is another member of P.
Therefore:
There is a probability corresponding to Q that I has A.

This is a combination of a generalization and a statistical syllogism, where the conclusion of the generalization is also the first premise of the statistical syllogism.

Enumerative induction

The basic form of inductive inference, simply induction, reasons from particular instances to all instances, and is thus an unrestricted generalization.[33] If one observes 100 swans, and all 100 were white, one might infer a universal categorical proposition of the form All swans are white. As this reasoning form's premises, even if true, do not entail the conclusion's truth, this is a form of inductive inference. The conclusion might be true, and might be thought probably true, yet it can be false. Questions regarding the justification and form of enumerative inductions have been central in philosophy of science, as enumerative induction has a pivotal role in the traditional model of the scientific method.

All life forms so far discovered are composed of cells.
All life forms are composed of cells.

This is enumerative induction, aka simple induction or simple predictive induction. It is a subcategory of inductive generalization. In everyday practice, this is perhaps the most common form of induction. For the preceding argument, the conclusion is tempting but makes a prediction well in excess of the evidence. First, it assumes that life forms observed until now can tell us how future cases will be: an appeal to uniformity. Second, the concluding All is a very bold assertion. A single contrary instance foils the argument. And last, to quantify the level of probability in any mathematical form is problematic.[34] By what standard do we measure our Earthly sample of known life against all (possible) life? For suppose we do discover some new organism—let’s say some microorganism floating in the mesosphere, or better yet, on some asteroid—and it is cellular. Doesn't the addition of this corroborating evidence oblige us to raise our probability assessment for the subject proposition? It is generally deemed reasonable to answer this question "yes," and for a good many this "yes" is not only reasonable but incontrovertible. So then just how much should this new data change our probability assessment? Here, consensus melts away, and in its place arises a question about whether we can talk of probability coherently at all without numerical quantification.
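One classical, and much-debated, attempt to put a number on such assessments is Laplace's rule of succession, under which observing s positive instances in n trials gives a next-instance probability of (s + 1)/(n + 2). The sketch below uses an invented observation count; it shows that each corroborating case raises the value only slightly and never reaches 1:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule: P(next instance positive) = (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# Suppose one million life forms observed so far, all cellular
# (an invented count for illustration).
before = rule_of_succession(10**6, 10**6)
after = rule_of_succession(10**6 + 1, 10**6 + 1)  # one new cellular organism

print(before)          # 1000001/1000002
print(after > before)  # True: corroboration nudges the value up...
print(after < 1)       # True: ...but never to certainty
```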

All life forms so far discovered have been composed of cells.
The next life form discovered will be composed of cells.

This is enumerative induction in its weak form. It truncates "all" to a mere single instance and, by making a far weaker claim, considerably strengthens the probability of its conclusion. Otherwise, it has the same shortcomings as the strong form: its sample population is non-random, and quantification methods are elusive.

Argument from analogy

The process of analogical inference involves noting the shared properties of two or more things and from this basis inferring that they also share some further property:[35]

P and Q are similar in respect to properties a, b, and c.
Object P has been observed to have further property x.
Therefore, Q probably has property x also.

Analogical reasoning is very frequent in common sense, science, philosophy and the humanities, but sometimes it is accepted only as an auxiliary method. A refined approach is case-based reasoning.[36]

Mineral A is an igneous rock often containing veins of quartz and most commonly found in South America in areas of ancient volcanic activity.
Additionally, mineral A is soft stone suitable for carving into jewelry.
Mineral B is an igneous rock often containing veins of quartz and most commonly found in South America in areas of ancient volcanic activity.
Mineral B is probably a soft stone suitable for carving into jewelry.

This is analogical induction, according to which things alike in certain ways are more prone to be alike in other ways. This form of induction was explored in detail by philosopher John Stuart Mill in his System of Logic, wherein he states:

"There can be no doubt that every resemblance [not known to be irrelevant] affords some degree of probability, beyond what would otherwise exist, in favour of the conclusion."[37]

Analogical induction is a subcategory of inductive generalization because it assumes a pre-established uniformity governing events. Analogical induction requires an auxiliary examination of the relevancy of the characteristics cited as common to the pair. In the preceding example, if I add the premise that both stones were mentioned in the records of early Spanish explorers, this common attribute is extraneous to the stones and does not contribute to their probable affinity.

A pitfall of analogy is that features can be cherry-picked: while two objects may show striking similarities, they may also possess other characteristics, not identified in the analogy, that are sharply dissimilar. Thus, analogy can mislead if not all relevant comparisons are made.

Causal inference

A causal inference draws a conclusion about a causal connection based on the conditions of the occurrence of an effect. Premises about the correlation of two things can indicate a causal relationship between them, but additional factors must be confirmed to establish the exact form of the causal relationship.

Prediction

A prediction draws a conclusion about a future individual from a past sample.

Proportion Q of observed members of group G have had attribute A.
Therefore:
There is a probability corresponding to Q that other members of group G will have attribute A when next observed.
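The schema above can be read as using the observed proportion directly as a predictive probability. A minimal sketch of that reading (the function and sample names are illustrative):

```python
def predictive_probability(observed, has_attribute):
    """Enumerative prediction: take the proportion Q of observed group
    members with the attribute as the probability that the next member
    observed will have it."""
    matches = sum(1 for member in observed if has_attribute(member))
    return matches / len(observed)

# Nine of ten observed swans were white, so Q = 0.9.
swans = ["white"] * 9 + ["black"]
q = predictive_probability(swans, lambda colour: colour == "white")
```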

Bayesian inference

As a logic of induction rather than a theory of belief, Bayesian inference does not determine which beliefs are a priori rational, but rather determines how we should rationally change the beliefs we have when presented with evidence. We begin by committing to a prior probability for a hypothesis based on logic or previous experience and, when faced with evidence, we adjust the strength of our belief in that hypothesis in a precise manner using Bayesian logic.
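The coin-toss example from the introduction can illustrate this adjustment. Assuming, for illustration, a 50/50 prior between a fair and a two-headed coin, each observed head shifts belief by Bayes' rule:

```python
def bayes_update(prior, likelihood):
    """One step of Bayes' rule: posterior is prior times likelihood,
    renormalised over the hypotheses."""
    unnormalised = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

belief = {"fair": 0.5, "two-headed": 0.5}   # assumed prior
heads = {"fair": 0.5, "two-headed": 1.0}    # P(heads | hypothesis)

for _ in range(10):                         # ten heads in a row
    belief = bayes_update(belief, heads)

# belief["two-headed"] is now 1024/1025, about 0.999 -- strong, never 1.
```

After ten heads the posterior for "two-headed" is 1024/1025; no finite run of heads drives it to exactly 1, matching the earlier observation that induction affords no bridge from the probable to the certain.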

Inductive inference

Around 1960, Ray Solomonoff founded the theory of universal inductive inference, a theory of prediction based on observations, for example, predicting the next symbol based upon a given series of symbols. This is a formal inductive framework that combines algorithmic information theory with the Bayesian framework. Universal inductive inference is based on solid philosophical foundations,[38] and can be considered as a mathematically formalized Occam's razor. Fundamental ingredients of the theory are the concepts of algorithmic probability and Kolmogorov complexity.
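A drastically simplified, computable toy of this idea can be sketched by weighting repeating-pattern hypotheses by two to the power of minus their length and predicting the next symbol by posterior-weighted vote. This is only an illustration of the Occam-style weighting; the true Solomonoff prior sums over all programs and is incomputable, and all names here are invented:

```python
from itertools import product

def toy_universal_predict(observed, alphabet="01", max_pattern_len=4):
    """Toy sketch of Occam-weighted prediction: each hypothesis is a
    repeating pattern, with prior 2**-len(pattern), so shorter (simpler)
    patterns dominate.  NOT real Solomonoff induction, which is incomputable."""
    weights = {}
    for n in range(1, max_pattern_len + 1):
        for pattern in map("".join, product(alphabet, repeat=n)):
            stream = pattern * (len(observed) // n + 2)
            if stream.startswith(observed):          # consistent with the data
                weights[pattern] = 2.0 ** -n
    votes = {symbol: 0.0 for symbol in alphabet}
    for pattern, w in weights.items():
        votes[(pattern * (len(observed) + 1))[len(observed)]] += w
    total = sum(votes.values())
    return {symbol: v / total for symbol, v in votes.items()}

prediction = toy_universal_predict("010101")   # favours '0' as the next symbol
```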

References

  1. ^ Copi, I.M.; Cohen, C.; Flage, D.E. (2006). Essentials of Logic (Second ed.). Upper Saddle River, NJ: Pearson Education. ISBN 978-0-13-238034-8.
  2. ^ "Deductive and Inductive Arguments", Internet Encyclopedia of Philosophy, It is worth noting that some dictionaries and texts define "deduction" as reasoning from the general to specific and define "induction" as reasoning from the specific to the general. However, there are many inductive arguments that do not have that form, for example, 'I saw her kiss him, really kiss him, so I'm sure she's having an affair.'
  3. ^ John Vickers. The Problem of Induction. The Stanford Encyclopedia of Philosophy.
  4. ^ Herms, D. "Logical Basis of Hypothesis Testing in Scientific Research" (PDF).
  5. ^ Kosko, Bart (1990). "Fuzziness vs. Probability". International Journal of General Systems. 17 (1): 211–40. doi:10.1080/03081079008935108.
  6. ^ "Kant's Account of Reason". Stanford Encyclopedia of Philosophy : Kant's account of reason. Metaphysics Research Lab, Stanford University. 2018.
  7. ^ Introduction to Logic. Harry J. Gensler, Routledge, 2002. p. 268
  8. ^ Chowdhry, K.R. (2 January 2015). Fundamentals of Discrete Mathematical Structures (3rd ed.). PHI Learning Pvt. Ltd. p. 26. ISBN 9788120350748. Retrieved 1 December 2016.
  9. ^ a b Stefano Gattei, Karl Popper's Philosophy of Science: Rationality without Foundations (New York: Routledge, 2009), ch. 2 "Science and philosophy", pp. 28–30.
  10. ^ a b Wesley C Salmon, "The uniformity of Nature", Philosophy and Phenomenological Research, 1953 Sep;14(1):39–48, [39].
  11. ^ a b c d Roberto Torretti, The Philosophy of Physics (Cambridge: Cambridge University Press, 1999), 219–21[216].
  12. ^ Roberto Torretti, The Philosophy of Physics (Cambridge: Cambridge University Press, 1999), pp. 226, 228–29.
  13. ^ a b c Ted Poston "Foundationalism", § b "Theories of proper inference", §§ iii "Liberal inductivism", Internet Encyclopedia of Philosophy, 10 Jun 2010 (last updated): "Strict inductivism is motivated by the thought that we have some kind of inferential knowledge of the world that cannot be accommodated by deductive inference from epistemically basic beliefs. A fairly recent debate has arisen over the merits of strict inductivism. Some philosophers have argued that there are other forms of nondeductive inference that do not fit the model of enumerative induction. C.S. Peirce describes a form of inference called 'abduction' or 'inference to the best explanation'. This form of inference appeals to explanatory considerations to justify belief. One infers, for example, that two students copied answers from a third because this is the best explanation of the available data—they each make the same mistakes and the two sat in view of the third. Alternatively, in a more theoretical context, one infers that there are very small unobservable particles because this is the best explanation of Brownian motion. Let us call 'liberal inductivism' any view that accepts the legitimacy of a form of inference to the best explanation that is distinct from enumerative induction. For a defense of liberal inductivism, see Gilbert Harman's classic (1965) paper. Harman defends a strong version of liberal inductivism according to which enumerative induction is just a disguised form of inference to the best explanation".
  14. ^ David Andrews, Keynes and the British Humanist Tradition: The Moral Purpose of the Market (New York: Routledge, 2010), pp. 63–65.
  15. ^ Bertrand Russell, The Basic Writings of Bertrand Russell (New York: Routledge, 2009), "The validity of inference"], pp. 157–64, quote on p. 159.
  16. ^ Gregory Landini, Russell (New York: Routledge, 2011), p. 230.
  17. ^ a b Bertrand Russell, A History of Western Philosophy (London: George Allen and Unwin, 1945 / New York: Simon and Schuster, 1945), pp. 673–74.
  18. ^ Stathis Psillos, "On Van Fraassen's critique of abductive reasoning", Philosophical Quarterly, 1996 Jan;46(182):31–47, [31].
  19. ^ Sextus Empiricus, Outlines of Pyrrhonism. Trans. R.G. Bury, Harvard University Press, Cambridge, Massachusetts, 1933, p. 283.
  20. ^ David Hume (1910) [1748]. An Enquiry concerning Human Understanding. P.F. Collier & Son. ISBN 978-0-19-825060-9. Archived from the original on 31 December 2007. Retrieved 27 December 2007.
  21. ^ Vickers, John. "The Problem of Induction" (Section 2). Stanford Encyclopedia of Philosophy. 21 June 2010
  22. ^ Vickers, John. "The Problem of Induction" (Section 2.1). Stanford Encyclopedia of Philosophy. 21 June 2010.
  23. ^ The story by Russell is found in Alan Chalmers, What is this thing Called Science?, Open University Press, Milton Keynes, 1982, p. 14
  24. ^ Popper, Karl R.; Miller, David W. (1983). "A proof of the impossibility of inductive probability". Nature. 302 (5910): 687–88. Bibcode:1983Natur.302..687P. doi:10.1038/302687a0.
  25. ^ a b c d e Donald Gillies, "Problem-solving and the problem of induction", in Rethinking Popper (Dordrecht: Springer, 2009), Zuzana Parusniková & Robert S Cohen, eds, pp. 103–05.
  26. ^ Ch 5 "The controversy around inductive logic" in Richard Mattessich, ed, Instrumental Reasoning and Systems Methodology: An Epistemology of the Applied and Social Sciences (Dordrecht: D. Reidel Publishing, 1978), pp. 141–43.
  27. ^ a b c Donald Gillies, "Problem-solving and the problem of induction", in Rethinking Popper (Dordrecht: Springer, 2009), Zuzana Parusniková & Robert S Cohen, eds, p. 111: "I argued earlier that there are some exceptions to Popper's claim that rules of inductive inference do not exist. However, these exceptions are relatively rare. They occur, for example, in the machine learning programs of AI. For the vast bulk of human science both past and present, rules of inductive inference do not exist. For such science, Popper's model of conjectures which are freely invented and then tested out seems to be more accurate than any model based on inductive inferences. Admittedly, there is talk nowadays in the context of science carried out by humans of 'inference to the best explanation' or 'abductive inference', but such so-called inferences are not at all inferences based on precisely formulated rules like the deductive rules of inference. Those who talk of 'inference to the best explanation' or 'abductive inference', for example, never formulate any precise rules according to which these so-called inferences take place. In reality, the 'inferences' which they describe in their examples involve conjectures thought up by human ingenuity and creativity, and by no means inferred in any mechanical fashion, or according to precisely specified rules".
  28. ^ Gray, Peter (2011). Psychology (Sixth ed.). New York: Worth. ISBN 978-1-4292-1947-1.
  29. ^ Schaum’s Outlines, Logic, Second Edition. John Nolt, Dennis Rohatyn, Archille Varzi. McGraw-Hill, 1998. p. 223
  30. ^ Schaum’s Outlines, Logic, p. 230
  31. ^ Introduction to Logic. Gensler p. 280
  32. ^ Introduction to Logic. Harry J. Gensler, Routledge, 2002. p. 268
  33. ^ Churchill, Robert Paul (1990). Logic: An Introduction (2nd ed.). New York: St. Martin's Press. p. 355. ISBN 978-0-312-02353-9. OCLC 21216829. In a typical enumerative induction, the premises list the individuals observed to have a common property, and the conclusion claims that all individuals of the same population have that property.
  34. ^ Schaum’s Outlines, Logic, pp. 243–35
  35. ^ Baronett, Stan (2008). Logic. Upper Saddle River, NJ: Pearson Prentice Hall. pp. 321–25.
  36. ^ For more information on inferences by analogy, see Juthe, 2005.
  37. ^ A System of Logic. Mill 1843/1930. p. 333
  38. ^ Rathmanner, Samuel; Hutter, Marcus (2011). "A Philosophical Treatise of Universal Induction". Entropy. 13 (6): 1076–136. arXiv:1105.5721. Bibcode:2011Entrp..13.1076R. doi:10.3390/e13061076.

Further reading

  • Cushan, Anna-Marie (1983/2014). Investigation into Facts and Values: Groundwork for a theory of moral conflict resolution. [Thesis, Melbourne University], Ondwelle Publications (online): Melbourne.
  • Herms, D. "Logical Basis of Hypothesis Testing in Scientific Research" (PDF).
  • Kemerling, G. (27 October 2001). "Causal Reasoning".
  • Holland, J.H.; Holyoak, K.J.; Nisbett, R.E.; Thagard, P.R. (1989). Induction: Processes of Inference, Learning, and Discovery. Cambridge, MA: MIT Press. ISBN 978-0-262-58096-0.
  • Holyoak, K.; Morrison, R. (2005). The Cambridge Handbook of Thinking and Reasoning. New York: Cambridge University Press. ISBN 978-0-521-82417-0.

Action model learning

Action model learning (sometimes abbreviated action learning) is an area of machine learning concerned with the creation and modification of a software agent's knowledge about the effects and preconditions of the actions that can be executed within its environment. This knowledge is usually represented in a logic-based action description language and used as input for automated planners.

Learning action models is important when goals change. When an agent has acted for a while, it can use its accumulated knowledge about actions in the domain to make better decisions. Thus, learning action models differs from reinforcement learning: it enables reasoning about actions instead of expensive trials in the world. Action model learning is a form of inductive reasoning, where new knowledge is generated based on the agent's observations. It differs from standard supervised learning in that correct input/output pairs are never presented, nor are imprecise action models explicitly corrected.

The usual motivation for action model learning is that manually specifying action models for planners is often a difficult, time-consuming, and error-prone task (especially in complex environments).
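Under full observability, a crude form of this learning can be sketched by intersecting observed state transitions: propositions present in every pre-state become candidate preconditions, and consistent state differences become effects. The action name and data below are invented for illustration; practical systems must also cope with noise and partial observability.

```python
def learn_action_model(transitions):
    """Induce a STRIPS-style model of one action from observed (pre, post)
    state pairs, each state a frozenset of propositions."""
    pre_states = [pre for pre, _ in transitions]
    preconditions = frozenset.intersection(*pre_states)
    add_effects = frozenset.intersection(*[post - pre for pre, post in transitions])
    del_effects = frozenset.intersection(*[pre - post for pre, post in transitions])
    return {"pre": preconditions, "add": add_effects, "del": del_effects}

# Two hypothetical observations of an 'open_door' action.
observations = [
    (frozenset({"door_closed", "have_key"}),
     frozenset({"door_open", "have_key"})),
    (frozenset({"door_closed", "have_key", "lights_on"}),
     frozenset({"door_open", "have_key", "lights_on"})),
]
model = learn_action_model(observations)
# add = {"door_open"}, del = {"door_closed"}, pre includes "door_closed"
```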

Argument from analogy

Argument from analogy is a special type of inductive argument, whereby perceived similarities are used as a basis to infer some further similarity that has yet to be observed. Analogical reasoning is one of the most common methods by which human beings attempt to understand the world and make decisions. When a person has a bad experience with a product and decides not to buy anything further from the producer, this is often a case of analogical reasoning. It is also implicit in much of science; for instance, experiments on laboratory rats typically proceed on the basis that some physiological similarities between rats and humans entail some further similarity (e.g. possible reactions to a drug).

Backward induction

Backward induction is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. It proceeds by first considering the last time a decision might be made and choosing what to do in any situation at that time. Using this information, one can then determine what to do at the second-to-last time of decision. This process continues backwards until one has determined the best action for every possible situation (i.e. for every possible information set) at every point in time. It was first used by Zermelo in 1913 to prove that chess has pure optimal strategies.

In the mathematical optimization method of dynamic programming, backward induction is one of the main methods for solving the Bellman equation. In game theory, backward induction is a method used to compute subgame perfect equilibria in sequential games. The only difference is that optimization involves just one decision maker, who chooses what to do at each point of time, whereas game theory analyzes how the decisions of several players interact. That is, by anticipating what the last player will do in each situation, it is possible to determine what the second-to-last player will do, and so on. In the related fields of automated planning and scheduling and automated theorem proving, the method is called backward search or backward chaining. In chess it is called retrograde analysis.
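A standard classroom illustration is the subtraction game ("take 1, 2 or 3 stones; whoever takes the last stone wins"), solved below by working backwards from the empty pile. The code is a sketch of the method, not tied to any particular source:

```python
def solve_subtraction_game(n, moves=(1, 2, 3)):
    """Backward induction: decide for every pile size whether the player to
    move can win, starting from the terminal position (0 stones = loss for
    the mover, since the previous player took the last stone) and working
    backwards to larger piles."""
    wins = [False] * (n + 1)
    best_move = [None] * (n + 1)
    for pile in range(1, n + 1):
        for m in moves:
            if m <= pile and not wins[pile - m]:   # move to a losing position
                wins[pile], best_move[pile] = True, m
                break
    return wins, best_move

wins, best_move = solve_subtraction_game(21)
# Multiples of 4 are losing positions for the player to move.
```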

Backward induction has been used to solve games as long as the field of game theory has existed. John von Neumann and Oskar Morgenstern suggested solving zero-sum, two-person games by backward induction in their Theory of Games and Economic Behavior (1944), the book which established game theory as a field of study.

Case-based reasoning

Case-based reasoning (CBR), broadly construed, is the process of solving new problems based on the solutions of similar past problems. An auto mechanic who fixes an engine by recalling another car that exhibited similar symptoms is using case-based reasoning. A lawyer who advocates a particular outcome in a trial based on legal precedents or a judge who creates case law is using case-based reasoning. So, too, an engineer copying working elements of nature (practicing biomimicry), is treating nature as a database of solutions to problems. Case-based reasoning is a prominent type of analogy solution making.

It has been argued that case-based reasoning is not only a powerful method for computer reasoning, but also a pervasive behavior in everyday human problem solving; or, more radically, that all reasoning is based on past cases personally experienced. This view is related to prototype theory, which is most deeply explored in cognitive science.
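The retrieval step at the heart of CBR can be sketched as a nearest-neighbour lookup over past cases; the mechanic scenario and similarity measure below are illustrative assumptions:

```python
def retrieve_and_reuse(case_base, problem, similarity):
    """Minimal CBR sketch: retrieve the most similar past case and reuse
    its solution.  A full CBR cycle would also revise the solution and
    retain the new case for future reuse."""
    best = max(case_base, key=lambda case: similarity(case["problem"], problem))
    return best["solution"]

def jaccard(a, b):
    """Similarity between two symptom sets: shared over total symptoms."""
    return len(a & b) / len(a | b)

# Toy mechanic's case base: symptom sets -> past diagnoses.
cases = [
    {"problem": {"no_start", "clicking"}, "solution": "dead battery"},
    {"problem": {"no_start", "fuel_smell"}, "solution": "flooded engine"},
]
diagnosis = retrieve_and_reuse(cases, {"no_start", "clicking", "dim_lights"}, jaccard)
# Reuses the solution of the most similar past case.
```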

Confabulation (neural networks)

A confabulation, also known as a false, degraded, or corrupted memory, is a stable pattern of activation in an artificial neural network or neural assembly that does not correspond to any previously learned patterns. The same term is also applied to the (nonartificial) neural mistake-making process leading to a false memory (confabulation).

Constant conjunction

Constant conjunction is a phrase used in philosophy as a variant or near synonym for causality and induction. It can be construed to contradict a more common phrase: correlation is not causation.

Deductive reasoning

Deductive reasoning, also called deductive logic or logical deduction, is the process of reasoning from one or more statements (premises) to reach a logically certain conclusion. Deductive reasoning goes in the same direction as that of the conditionals, and links premises with conclusions. If all premises are true, the terms are clear, and the rules of deductive logic are followed, then the conclusion reached is necessarily true.

Deductive reasoning ("top-down logic") contrasts with inductive reasoning ("bottom-up logic") in the following way: in deductive reasoning, a conclusion is reached reductively by applying general rules which hold over the entirety of a closed domain of discourse, narrowing the range under consideration until only the conclusion(s) is left. In inductive reasoning, the conclusion is reached by generalizing or extrapolating from specific cases to general rules, i.e., there is epistemic uncertainty. However, the inductive reasoning mentioned here is not the same as induction used in mathematical proofs – mathematical induction is actually a form of deductive reasoning.

Deductive reasoning differs from abductive reasoning by the direction of the reasoning relative to the conditionals. Deductive reasoning goes in the same direction as that of the conditionals, whereas abductive reasoning goes in the opposite direction to that of the conditionals.

Eduction

Eduction may refer to:

Eduction (geology)

A type of inductive inference from premises mentioning particulars to a conclusion mentioning another particular. See inductive reasoning.

Aspirator (pump)

Fact, Fiction, and Forecast

Fact, Fiction, and Forecast is a book by Nelson Goodman in which he explores some problems regarding scientific law and counterfactual conditionals and presents his New Riddle of Induction. Hilary Putnam described the book as "one of the few books that every serious student of philosophy in our time has to have read." According to Jerry Fodor, "it changed, probably permanently, the way we think about the problem of induction, and hence about a constellation of related problems like learning and the nature of rational decision." Noam Chomsky and Hilary Putnam attended some of the lectures on which the book is based as undergraduate students at the University of Pennsylvania, leading to a lifelong debate between the two over the question of whether the problems presented in the book imply that there must be an innate ordering of hypotheses.

Imperfect induction

Imperfect induction is the process of inferring what is characteristic of a whole group from a sample of that group.

Inductionism

Inductionism is the scientific philosophy where laws are "induced" from sets of data. As an example, one might measure the strength of electrical forces at varying distances from charges and induce the inverse square law of electrostatics. This concept is considered one of the two pillars of the old view of the philosophy of science, together with verifiability. An application of inductionism can show how experimental evidence can confirm or inductively justify the belief in generalization and the laws of nature.

The early form of inductionism is associated with the philosophies of thinkers such as Francis Bacon. It is also said to be based on Newtonian physics. This is evident in Isaac Newton's Rules of Reasoning in Philosophy, which articulated his belief that it is imperative to cover the unobservably small features of the world through a methodology that has a strong empirical base. Here, the speculative hypothesis was replaced by induction from premises obtained through observation and experiment.

Inductive reasoning aptitude

Inductive reasoning aptitude (also called differentiation or inductive learning ability) measures how well a person can identify a pattern within a large amount of data. It involves applying the rules of logic when inferring general principles from a constellation of particulars.

Measurement is generally done in a timed test by showing four pictures or words and asking the test taker to identify which of the pictures or words does not belong in the set. The test taker is shown a large number of sets of various degrees of difficulty. The measurement is made by timing how many of these a person can properly identify in a set period of time. The test resembles the game 'Which of These Is Not Like the Others'.

Inductive reasoning is very useful for scientists, auto mechanics, system integrators, lawyers, network engineers, medical doctors, system administrators and members of all fields where substantial diagnostic or data interpretation work is needed. Inductive reasoning aptitude is also useful for learning a graphical user interface quickly, because highly inductive people are very good at seeing others' categorization schemes. Inductive reasoning aptitude is often counter-productive in fields like sales where tolerance is very important, because highly inductive people tend to be good at seeing faults in others.

Inverse resolution

Inverse resolution is an inductive reasoning technique that involves inverting the resolution operator.

Pessimistic induction

In the philosophy of science, the pessimistic induction, also known as the pessimistic meta-induction, is an argument which seeks to rebut scientific realism, particularly the scientific realist's notion of epistemic optimism.

Petals Around the Rose

Petals Around the Rose is a challenging mathematical puzzle in which the object is to work out the formula by which a number is derived from the roll of a set of five or six dice. It is often used as an exercise in inductive reasoning. The puzzle became popular in computer circles in the mid-1970s, particularly through an anecdote recounted in Personal Computing which depicts Bill Gates working out the solution in an airport.

Problem of induction

The problem of induction is the philosophical question of whether inductive reasoning leads to knowledge understood in the classic philosophical sense, highlighting the apparent lack of justification for:

Generalizing about the properties of a class of objects based on some number of observations of particular instances of that class (e.g., the inference that "all swans we have seen are white, and, therefore, all swans are white", before the discovery of black swans) or

Presupposing that a sequence of events in the future will occur as it always has in the past (e.g., that the laws of physics will hold as they have always been observed to hold). Hume called this the principle of uniformity of nature.

The problem calls into question all empirical claims made in everyday life or through the scientific method, and, for that reason, the philosopher C. D. Broad said that "induction is the glory of science and the scandal of philosophy." Although the problem arguably dates back to the Pyrrhonism of ancient philosophy, as well as the Carvaka school of Indian philosophy, David Hume popularized it in the mid-18th century.

Reason

Reason is the capacity of consciously making sense of things, establishing and verifying facts, applying logic, and changing or justifying practices, institutions, and beliefs based on new or existing information. It is closely associated with such characteristically human activities as philosophy, science, language, mathematics and art, and is normally considered to be a distinguishing ability possessed by humans.

Reason, or an aspect of it, is sometimes referred to as rationality.

Reasoning is associated with thinking, cognition, and intellect. The philosophical field of logic studies ways in which humans reason formally through argument. Reasoning may be subdivided into forms of logical reasoning (forms associated with the strict sense): deductive reasoning, inductive reasoning, abductive reasoning; and other modes of reasoning considered more informal, such as intuitive reasoning and verbal reasoning. Along these lines, a distinction is often drawn between logical, discursive reasoning (reason proper), and intuitive reasoning, in which the reasoning process through intuition—however valid—may tend toward the personal and the subjectively opaque. In some social and political settings logical and intuitive modes of reasoning may clash, while in other contexts intuition and formal reason are seen as complementary rather than adversarial. For example, in mathematics, intuition is often necessary for the creative processes involved with arriving at a formal proof, arguably the most difficult of formal reasoning tasks.

Reasoning, like habit or intuition, is one of the ways by which thinking moves from one idea to a related idea. For example, reasoning is the means by which rational individuals understand sensory information from their environments, or conceptualize abstract dichotomies such as cause and effect, truth and falsehood, or ideas regarding notions of good or evil. Reasoning, as a part of executive decision making, is also closely identified with the ability to self-consciously change, in terms of goals, beliefs, attitudes, traditions, and institutions, and therefore with the capacity for freedom and self-determination. In contrast to the use of "reason" as an abstract noun, a reason is a consideration given which either explains or justifies events, phenomena, or behavior. Reasons justify decisions, reasons support explanations of natural phenomena; reasons can be given to explain the actions (conduct) of individuals.

Using reason, or reasoning, can also be described more plainly as providing good, or the best, reasons. For example, when evaluating a moral decision, "morality is, at the very least, the effort to guide one's conduct by reason—that is, doing what there are the best reasons for doing—while giving equal [and impartial] weight to the interests of all those affected by what one does."

Psychologists and cognitive scientists have attempted to study and explain how people reason, e.g. which cognitive and neural processes are engaged, and how cultural factors affect the inferences that people draw. The field of automated reasoning studies how reasoning may or may not be modeled computationally. Animal psychology considers the question of whether animals other than humans can reason.

Rule induction

Rule induction is an area of machine learning in which formal rules are extracted from a set of observations. The rules extracted may represent a full scientific model of the data, or merely represent local patterns in the data.
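One of the simplest rule-induction procedures is Holte's 1R, which builds, for each attribute, a rule mapping each attribute value to its majority label, then keeps the single most accurate attribute. The miniature version and toy data below are illustrative only, not a production learner:

```python
from collections import Counter, defaultdict

def one_rule(examples, labels):
    """1R in miniature: one candidate rule per attribute (value -> majority
    label); return the attribute whose rule fits the training data best."""
    best = None
    for attr in examples[0]:
        table = defaultdict(Counter)
        for example, label in zip(examples, labels):
            table[example[attr]][label] += 1
        rule = {value: counts.most_common(1)[0][0] for value, counts in table.items()}
        correct = sum(rule[example[attr]] == label
                      for example, label in zip(examples, labels))
        if best is None or correct > best[0]:
            best = (correct, attr, rule)
    return best[1], best[2]

# Toy observations: outlook predicts the label perfectly, windiness does not.
X = [{"outlook": "sunny", "windy": "yes"}, {"outlook": "sunny", "windy": "no"},
     {"outlook": "rain", "windy": "yes"}, {"outlook": "rain", "windy": "no"}]
y = ["stay_in", "stay_in", "go_out", "go_out"]
attribute, rule = one_rule(X, y)
# Induces: outlook = sunny -> stay_in, outlook = rain -> go_out.
```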

Textual case-based reasoning

Textual case-based reasoning is a subtopic of case-based reasoning (CBR), a popular area in artificial intelligence. CBR suggests ways to use past experiences to solve future similar problems, requiring that past experiences be structured in a form similar to attribute-value pairs. This leads to the investigation of textual descriptions for knowledge exploration, whose output will, in turn, be used to solve similar problems.

This page is based on a Wikipedia article written by its authors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.