The problem of induction is the philosophical question of whether inductive reasoning leads to knowledge understood in the classic philosophical sense, highlighting the apparent lack of justification for generalizing about the properties of a class of objects based on some number of observations of particular instances of that class, or for presupposing that a sequence of events in the future will occur as it always has in the past.
The problem calls into question all empirical claims made in everyday life or through the scientific method, and, for that reason, the philosopher C. D. Broad said that "induction is the glory of science and the scandal of philosophy." Although the problem arguably dates back to the Pyrrhonism of ancient philosophy, as well as the Cārvāka school of Indian philosophy, David Hume popularized it in the mid-18th century.
In inductive reasoning, one makes a series of observations and infers a new claim based on them. For instance, from a series of observations that a woman walks her dog by the market at 8 am on Monday, it seems valid to infer that next Monday she will do the same, or that, in general, the woman walks her dog by the market every Monday. That next Monday the woman walks by the market merely adds to the series of observations; it does not prove she will walk by the market every Monday. First of all, it is not certain, regardless of the number of observations, that the woman always walks by the market at 8 am on Monday. In fact, David Hume would even argue that we cannot claim it is "more probable", since this still requires the assumption that the past predicts the future.
Bertrand Russell illustrated this point in The Problems of Philosophy:

Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.
In several later publications the example is presented as a story about a turkey, fed every morning without fail, which, following the laws of induction, concludes that this will continue, only to have its throat cut on Thanksgiving Day.
Pyrrhonian skeptic Sextus Empiricus first questioned the validity of inductive reasoning, positing that a universal rule could not be established from an incomplete set of particular instances. He wrote:
When they propose to establish the universal from the particulars by means of induction, they will effect this by a review of either all or some of the particulars. But if they review some, the induction will be insecure, since some of the particulars omitted in the induction may contravene the universal; while if they are to review all, they will be toiling at the impossible, since the particulars are infinite and indefinite.
The focus upon the gap between the premises and conclusion present in the above passage appears different from Hume's focus upon the circular reasoning of induction. However, Weintraub claims in The Philosophical Quarterly that although Sextus's approach to the problem appears different, Hume's approach was actually an application of another argument raised by Sextus:
Those who claim for themselves to judge the truth are bound to possess a criterion of truth. This criterion, then, either is without a judge's approval or has been approved. But if it is without approval, whence comes it that it is trustworthy? For no matter of dispute is to be trusted without judging. And, if it has been approved, that which approves it, in turn, either has been approved or has not been approved, and so on ad infinitum.
Although the criterion argument applies to both deduction and induction, Weintraub believes that Sextus's argument "is precisely the strategy Hume invokes against induction: it cannot be justified, because the purported justification, being inductive, is circular." She concludes that "Hume's most important legacy is the supposition that the justification of induction is not analogous to that of deduction." She ends with a discussion of Hume's implicit sanction of the validity of deduction, which Hume describes as intuitive in a manner analogous to modern foundationalism.
The Cārvāka, a materialist and skeptic school of Indian philosophy, used the problem of induction to point out the flaws in using inference as a way to gain valid knowledge. They held that, since inference needed an invariable connection between the middle term and the predicate, and since there was no way to establish this invariable connection, the efficacy of inference as a means of valid knowledge could never be stated.
The 9th-century Indian skeptic Jayarasi Bhatta also attacked inference, along with all means of knowledge, and showed by a type of reductio argument that there was no way to conclude universal relations from the observation of particular instances.
Medieval writers such as al-Ghazali and William of Ockham connected the problem with God's absolute power, asking how we can be certain that the world will continue behaving as expected when God could at any moment miraculously cause the opposite. Duns Scotus, however, argued that inductive inference from a finite number of particulars to a universal generalization was justified by "a proposition reposing in the soul, 'Whatever occurs in a great many instances by a cause that is not free, is the natural effect of that cause.'" Some 17th-century Jesuits argued that although God could create the end of the world at any moment, it was necessarily a rare event and hence our confidence that it would not happen very soon was largely justified.
David Hume, a Scottish thinker of the Enlightenment era, is the philosopher most often associated with induction. His formulation of the problem of induction can be found in An Enquiry concerning Human Understanding, §4. Here, Hume introduces his famous distinction between "relations of ideas" and "matters of fact". Relations of ideas are propositions which can be derived from deductive logic and which can be found in fields such as geometry and algebra. Matters of fact, meanwhile, are not verified through the workings of deductive logic but by experience. Specifically, matters of fact are established by making an inference about causes and effects from repeatedly observed experience. While relations of ideas are supported by reason alone, matters of fact must rely on the connection of a cause and effect through experience. Causes and effects cannot be linked through a priori reasoning; instead, the mind posits a "necessary connection" that depends on the "uniformity of nature."
Hume situates his introduction to the problem of induction in A Treatise of Human Nature within his larger discussion on the nature of causes and effects (Book I, Part III, Section VI). He writes that reasoning alone cannot establish the grounds of causation. Instead, the human mind imputes causation to phenomena after repeatedly observing a connection between two objects. For Hume, establishing the link between causes and effects relies not on reasoning alone, but on the observation of "constant conjunction" throughout one's sensory experience. From this discussion, Hume goes on to present his formulation of the problem of induction in A Treatise of Human Nature, writing "there can be no demonstrative arguments to prove, that those instances, of which we have had no experience, resemble those, of which we have had experience."
In other words, the problem of induction can be framed in the following way: we cannot apply a conclusion about a particular set of observations to a more general set of observations. While deductive logic allows one to arrive at a conclusion with certainty, inductive logic can only provide a conclusion that is probably true. It is therefore mistaken to frame the difference between deductive and inductive logic as one between reasoning from the general to the specific and reasoning from the specific to the general, a common misperception; by the standards of logic, the difference is that deductive reasoning arrives at certain conclusions while inductive reasoning arrives at probable ones. Hume's treatment of induction helps to establish the grounds for probability, as he writes in A Treatise of Human Nature that "probability is founded on the presumption of a resemblance betwixt those objects, of which we have had experience, and those, of which we have had none" (Book I, Part III, Section VI).
Therefore, Hume establishes induction as the very grounds for attributing causation. There might be many effects which stem from a single cause. Over repeated observation, one establishes that a certain set of effects are linked to a certain set of causes. However, the future resemblance of these connections to connections observed in the past depends on induction. Induction allows one to conclude that "Effect A2" was caused by "Cause A2" because a connection between "Effect A1" and "Cause A1" was observed repeatedly in the past. Given that reason alone cannot be sufficient to establish the grounds of induction, Hume implies that induction must be accomplished through imagination. One does not make an inductive inference through a priori reasoning, but through an imaginative step automatically taken by the mind.
Hume does not challenge that induction is performed by the human mind automatically, but rather hopes to show more clearly how much human inference depends on inductive, not a priori, reasoning. He does not deny future uses of induction, but shows that it is distinct from deductive reasoning, that it helps to ground causation, and that its validity merits deeper inquiry. Hume offers no solution to the problem of induction himself, instead leaving it to other thinkers and logicians to argue for the validity of induction as an ongoing dilemma for philosophy. A key issue with establishing the validity of induction is that one is tempted to use an inductive inference as a form of justification itself: people commonly justify the validity of induction by pointing to the many instances in the past when induction proved to be accurate. For example, one might argue that it is valid to use inductive inference in the future because this type of reasoning has yielded accurate results in the past. However, this argument relies on an inductive premise itself, namely that past observations of induction being valid mean that future uses of induction will also be valid. Thus, many attempted justifications of induction tend to be circular.
Nelson Goodman's Fact, Fiction, and Forecast presented a different description of the problem of induction in the chapter entitled "The New Riddle of Induction". Goodman proposed the new predicate "grue": something is grue if and only if it has been (or will be, according to a scientific, general hypothesis) observed to be green before a certain time t, or is blue if observed after that time. The "new" problem of induction is: since all emeralds we have ever seen are both green and grue, why do we suppose that after time t we will find green but not grue emeralds? The problem raised here is that two different inductions will be true and false under the same conditions.
Goodman, however, points out that the predicate "grue" only appears more complex than the predicate "green" because we have defined grue in terms of blue and green. If we had always been brought up to think in terms of "grue" and "bleen" (where bleen is blue before time t, or green thereafter), we would intuitively consider "green" to be a crazy and complicated predicate. Goodman believed that which scientific hypotheses we favour depends on which predicates are "entrenched" in our language.
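Goodman's symmetry point can be made concrete in a few lines of code. The following sketch is purely illustrative (the cutoff year and function names are hypothetical): it defines "grue" and "bleen" from "green" and "blue", then shows that "green" is definable from "grue" and "bleen" in exactly the same way.

```python
# Hypothetical illustration of Goodman's grue/bleen predicates.
T = 2030  # the arbitrary cutoff time t (an assumed value for illustration)

def grue(colour: str, year: int) -> bool:
    """Grue: green if observed before t, blue if observed after."""
    return colour == "green" if year < T else colour == "blue"

def bleen(colour: str, year: int) -> bool:
    """Bleen: blue if observed before t, green if observed after."""
    return colour == "blue" if year < T else colour == "green"

def green_from_grue_bleen(colour: str, year: int) -> bool:
    """Green defined from grue/bleen: grue before t, bleen thereafter.
    This mirrors how grue was defined from green/blue, showing that
    neither pair of predicates is intrinsically simpler."""
    return grue(colour, year) if year < T else bleen(colour, year)

# The reconstructed "green" agrees with the ordinary predicate at all times:
for colour in ("green", "blue"):
    for year in (2020, 2040):
        assert green_from_grue_bleen(colour, year) == (colour == "green")
```

The assertions at the end verify the symmetry: a speaker raised on grue/bleen could reconstruct "green" as a time-dependent predicate, just as we reconstruct "grue" from our own vocabulary.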
W. V. O. Quine offers a practical solution to this problem by making the metaphysical claim that only predicates that identify a "natural kind" (i.e. a real property of real things) can be legitimately used in a scientific hypothesis. R. Bhaskar also offers a practical solution to the problem. He argues that the problem of induction only arises if we deny the possibility of a reason for the predicate, located in the enduring nature of something. For example, we know that all emeralds are green, not because we have only ever seen green emeralds, but because the chemical make-up of emeralds ensures that they must be green. If we were to change that structure, they would not be green. For instance, emeralds are a kind of green beryl, made green by trace amounts of chromium and sometimes vanadium. Without these trace elements, the gems would be colourless.
Although induction is not made by reason, Hume observes that we nonetheless perform it and improve from it. He proposes a descriptive explanation for the nature of induction in §5 of the Enquiry, titled "Skeptical solution of these doubts". It is by custom or habit that one draws the inductive connection described above, and "without the influence of custom we would be entirely ignorant of every matter of fact beyond what is immediately present to the memory and senses". The result of custom is belief, which is instinctual and much stronger than imagination alone.
David Stove's argument for induction, based on the statistical syllogism, was presented in The Rationality of Induction and was developed from an argument put forward by one of Stove's heroes, the late Donald Cary Williams (formerly Professor at Harvard) in his book The Ground of Induction. Stove argued that it is a statistical truth that the great majority of the possible subsets of specified size (as long as this size is not too small) are similar to the larger population to which they belong. For example, the majority of the subsets containing 3000 ravens which you can form from the raven population are similar to the population itself (and this applies no matter how large the raven population is, as long as it is not infinite). Consequently, Stove argued that if you find yourself with such a subset then the chances are that this subset is one of the ones that are similar to the population, and so you are justified in concluding that it is likely that this subset "matches" the population reasonably closely. The situation would be analogous to drawing a ball out of a barrel of balls, 99% of which are red: in such a case you have a 99% chance of drawing a red ball. Similarly, when taking a sample of ravens the probability is very high that the sample is one of the matching or "representative" ones. So as long as you have no reason to think that your sample is an unrepresentative one, you are justified in thinking that it probably (although not certainly) is.
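The statistical claim at the heart of Stove's argument can be illustrated numerically. This is a rough Monte Carlo sketch, not Stove's own derivation; the population size, the 95% share of black ravens, and the sample size of 3000 are all hypothetical figures chosen for illustration.

```python
# Illustration of the claim that nearly all large subsets resemble
# their population: we repeatedly draw samples of 3000 "ravens" from a
# population in which 95% are black, and count how often the sample
# proportion lands within 2 percentage points of the true proportion.
import random

random.seed(0)
BLACK_SHARE = 0.95   # hypothetical true proportion of black ravens
SAMPLE_SIZE = 3000   # subset size, as in Stove's example

def sample_share() -> float:
    """Proportion of black ravens in one random sample of SAMPLE_SIZE."""
    black = sum(random.random() < BLACK_SHARE for _ in range(SAMPLE_SIZE))
    return black / SAMPLE_SIZE

trials = 1000
close = sum(abs(sample_share() - BLACK_SHARE) <= 0.02 for _ in range(trials))
print(close / trials)  # close to 1: nearly every sample "matches" the population
```

With a sample of 3000 the standard deviation of the sample proportion is about 0.004, so a 2-point tolerance is roughly five standard deviations: essentially every sample is "representative" in Stove's sense, whatever the (finite) size of the underlying population.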
An intuitive answer to Hume would be to say that a world inaccessible to any inductive procedure would simply not be conceivable. This intuition was taken into account by Keith Campbell by considering that, to be built, a concept must be reapplied, which demands a certain continuity in its object of application and consequently some openness to induction. Recently, Claudio Costa has noted that a future can only be a future of its own past if it holds some identity with it. Moreover, the nearer a future is to the point of junction with its past, the greater the similarities it tends to involve. Consequently – contra Hume – some form of principle of homogeneity (causal or structural) between future and past must be warranted, which would make some inductive procedure always possible.
Karl Popper, a philosopher of science, sought to solve the problem of induction. He argued that science does not use induction, and induction is in fact a myth. Instead, knowledge is created by conjecture and criticism. The main role of observations and experiments in science, he argued, is in attempts to criticize and refute existing theories.
According to Popper, the problem of induction as usually conceived is asking the wrong question: it is asking how to justify theories given that they cannot be justified by induction. Popper argued that justification is not needed at all, and seeking justification "begs for an authoritarian answer". Instead, Popper said, what should be done is to look for and correct errors. Popper regarded theories that have survived criticism as better corroborated in proportion to the amount and stringency of the criticism, but, in sharp contrast to the inductivist theories of knowledge, emphatically as less likely to be true. Popper held that seeking theories with a high probability of being true was a false goal that is in conflict with the search for knowledge. Science should seek theories that are on the one hand most probably false (which is the same as saying that they are highly falsifiable, so that there are many ways they could turn out to be wrong), but for which, on the other, all actual attempts to falsify them have so far failed (so that they are highly corroborated).
Wesley C. Salmon criticizes Popper on the grounds that predictions need to be made both for practical purposes and in order to test theories. That means Popperians need to make a selection from the number of unfalsified theories available to them, which is generally more than one. Popperians would wish to choose well-corroborated theories, in their sense of corroboration, but face a dilemma: either they are making the essentially inductive claim that a theory's having survived criticism in the past means it will be a reliable predictor in the future; or Popperian corroboration is no indicator of predictive power at all, so there is no rational motivation for their preferred selection principle.
David Miller has criticized this kind of criticism by Salmon and others because it makes inductivist assumptions. Popper does not say that corroboration is an indicator of predictive power. The predictive power is in the theory itself, not in its corroboration. The rational motivation for choosing a well-corroborated theory is that it is simply easier to falsify: Well-corroborated means that at least one kind of experiment (already conducted at least once) could have falsified (but did not actually falsify) the one theory, while the same kind of experiment, regardless of its outcome, could not have falsified the other. So it is rational to choose the well-corroborated theory: It may not be more likely to be true, but if it is actually false, it is easier to get rid of when confronted with the conflicting evidence that will eventually turn up. Accordingly, it is wrong to consider corroboration as a reason, a justification for believing in a theory or as an argument in favor of a theory to convince someone who objects to it.
... the theory to be developed in the following pages stands directly opposed to all attempts to operate with the ideas of inductive logic.
Induction, i.e. inference based on many observations, is a myth. It is neither a psychological fact, nor a fact of ordinary life, nor one of scientific procedure.
The actual procedure of science is to operate with conjectures: to jump to conclusions – often after one single observation
Tests proceed partly by way of observation, and observation is thus very important; but its function is not that of producing theories. It plays its role in rejecting, eliminating, and criticizing theories
I propose to replace ... the question of the sources of our knowledge by the entirely different question: 'How can we hope to detect and eliminate error?'
The black swan theory or theory of black swan events is a metaphor that describes an event that comes as a surprise, has a major effect, and is often inappropriately rationalized after the fact with the benefit of hindsight. The term is based on an ancient saying that presumed black swans did not exist – a saying that became reinterpreted to teach a different lesson after black swans were discovered in the wild.
The theory was developed by Nassim Nicholas Taleb to explain:
The disproportionate role of high-profile, hard-to-predict, and rare events that are beyond the realm of normal expectations in history, science, finance, and technology.
The non-computability of the probability of the consequential rare events using scientific methods (owing to the very nature of small probabilities).
The psychological biases that blind people, both individually and collectively, to uncertainty and to a rare event's massive role in historical affairs.

Unlike the earlier and broader "black swan problem" in philosophy (i.e. the problem of induction), Taleb's "black swan theory" refers only to unexpected events of large magnitude and consequence and their dominant role in history. Such events, considered extreme outliers, collectively play vastly larger roles than regular occurrences. More technically, in the scientific monograph 'Silent Risk', Taleb mathematically defines the black swan problem as "stemming from the use of degenerate metaprobability".

Circular reasoning
Circular reasoning (Latin: circulus in probando, "circle in proving"; also known as circular logic) is a logical fallacy in which the reasoner begins with what they are trying to end with. The components of a circular argument are often logically valid because if the premises are true, the conclusion must be true. Circular reasoning is not a formal logical fallacy but a pragmatic defect in an argument whereby the premises are just as much in need of proof or evidence as the conclusion, and as a consequence the argument fails to persuade. Other ways to express this are that there is no reason to accept the premises unless one already believes the conclusion, or that the premises provide no independent ground or evidence for the conclusion. Begging the question is closely related to circular reasoning, and in modern usage the two generally refer to the same thing.

Circular reasoning is often of the form: "A is true because B is true; B is true because A is true." Circularity can be difficult to detect if it involves a longer chain of propositions.
Academic Douglas Walton used the following example of a fallacious circular argument:
Wellington is in New Zealand.
Therefore, Wellington is in New Zealand.

He notes that, although the argument is deductively valid, it cannot prove that Wellington is in New Zealand because it contains no evidence that is distinct from the conclusion. The context – that of an argument – means that the proposition does not meet the requirement of proving the statement; thus, it is a fallacy. He proposes that the context of a dialogue determines whether a circular argument is fallacious: if it forms part of an argument, then it is. Citing Cederblom and Paulsen (1986:109), Hugh G. Gauch observes that non-logical facts can be difficult to capture formally:
'Whatever is less dense than water will float, because whatever is less dense than water will float' sounds stupid, but 'Whatever is less dense than water will float, because such objects won't sink in water' might pass.

Counterinduction
In logic, counterinduction is a measure that helps to call something into question by developing something against which it can be compared. Paul Feyerabend argued for counterinduction as a way to test scientific theories that go unchallenged simply because there are no structures within the scientific paradigm (positivism) for challenging itself (see Crotty, 1998, p. 39). For instance, Feyerabend is quoted as saying the following:
"Therefore, the first step in our criticism of customary concepts and customary reactions is to step outside the circle and either to invent a new conceptual system, for example, a new theory, that clashes with the most carefully established observational results and confounds the most plausible theoretical principles, or to import such a system from outside science, from religion, from mythology, from the ideas of incompetents, or the ramblings of madmen." (Feyerabend, 1993, pp. 52-3)
This reflects the pluralistic methodology that Feyerabend espouses in support of counterinductive methods. Paul Feyerabend's anarchist theory popularized the notion of counterinduction.
Most of the time when counterinduction is mentioned, it is not presented as a valid rule. Instead, it is given as a refutation of Max Black's proposed inductive justification of induction, since the counterinductive justification of counterinduction is formally identical to the inductive justification of induction. For further information, see Problem of induction.

Discounted cash flow
In finance, discounted cash flow (DCF) analysis is a method of valuing a project, company, or asset using the concepts of the time value of money. All future cash flows are estimated and discounted by using cost of capital to give their present values (PVs). The sum of all future cash flows, both incoming and outgoing, is the net present value (NPV), which is taken as the value of the cash flows in question.

Using DCF analysis to compute the NPV takes as input cash flows and a discount rate and gives as output a present value. The opposite process takes cash flows and a price (present value) as inputs, and provides as output the discount rate; this is used in bond markets to obtain the yield.
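The discounting mechanics can be sketched in a few lines. This is a minimal illustration, not a production valuation tool; the cash-flow figures and the 10% discount rate below are hypothetical.

```python
# Minimal sketch of discounting cash flows to a net present value (NPV).
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value: each cash flow t periods out is divided by (1 + rate)**t.
    cash_flows[0] occurs now, cash_flows[1] in one period, and so on."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# An outlay of 1000 now, returning 500 in each of the next three years,
# discounted at a hypothetical 10% cost of capital:
value = npv(0.10, [-1000, 500, 500, 500])
print(round(value, 2))  # 243.43
```

A positive NPV, as here, means the discounted inflows exceed the initial outlay at the chosen discount rate; the "opposite process" mentioned above would instead solve for the rate that makes this sum equal a given price.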
Discounted cash flow analysis is widely used in investment finance, real estate development, corporate financial management and patent valuation. It was used in industry as early as the 1700s or 1800s, was widely discussed in financial economics in the 1960s, and became widely used in U.S. courts in the 1980s and 1990s.

Epistemological idealism
Epistemological idealism is a subjectivist position in epistemology that holds that what one knows about an object exists only in one's mind. It is opposed to epistemological realism.

Fact, Fiction, and Forecast
Fact, Fiction, and Forecast is a book by Nelson Goodman in which he explores some problems regarding scientific law and counterfactual conditionals and presents his New Riddle of Induction. Hilary Putnam described the book as "one of the few books that every serious student of philosophy in our time has to have read." According to Jerry Fodor, "it changed, probably permanently, the way we think about the problem of induction, and hence about a constellation of related problems like learning and the nature of rational decision." Noam Chomsky and Hilary Putnam attended some of the lectures on which the book is based as undergraduate students at the University of Pennsylvania, leading to a lifelong debate between the two over whether the problems presented in the book imply that there must be an innate ordering of hypotheses.

Fallibilism
Broadly speaking, fallibilism (from Medieval Latin: fallibilis, "liable to err") is the philosophical claim that no belief can have justification which guarantees the truth of the belief. However, not all fallibilists believe that fallibilism extends to all domains of knowledge.

Hierarchical epistemology
Hierarchical epistemology is a theory of knowledge which posits that beings have different access to reality depending on their ontological rank.

Inductive reasoning
Inductive reasoning is a method of reasoning in which the premises are viewed as supplying some evidence for the truth of the conclusion; this is in contrast to deductive reasoning. While the conclusion of a deductive argument is certain, the truth of the conclusion of an inductive argument may be probable, based upon the evidence given. Many dictionaries define inductive reasoning as the derivation of general principles from specific observations, though some sources find this usage "outdated".

Information source
An information source is a person, thing, or place from which information comes, arises, or is obtained, and which might then inform a person about something or provide knowledge about it. Information sources are divided into distinct categories: primary, secondary, tertiary, and so on.

Lawrence A. Boland
Lawrence Arthur Boland (born 1939 in Peoria, Illinois) is a professor of economics at Simon Fraser University.
Boland is critical of the neoclassical research program. He has attempted to draw out the unstated assumptions of neoclassical economics and submit them to methodological scrutiny. His key criticisms of traditional economics center on the problem of induction, methodological individualism, and the acquisition of knowledge.

List of epistemologists
This is a list of epistemologists, that is, people who theorize about the nature of knowledge, belief formation and the nature of justification.

List of unsolved problems in philosophy
This is a list of some of the major unsolved problems in philosophy. Clearly, unsolved philosophical problems exist in the lay sense (e.g. "What is the meaning of life?", "Where did we come from?", "What is reality?", etc.). However, professional philosophers generally accord serious philosophical problems specific names or questions, which indicate a particular method of attack or line of reasoning. As a result, broad and untenable topics become manageable. It would therefore be beyond the scope of this article to categorize "life" (and similar vague categories) as an unsolved philosophical problem.

Logical reasoning
Two kinds of logical reasoning can be distinguished in addition to formal deduction: induction and abduction. Given a precondition or premise, a conclusion or logical consequence and a rule or material conditional that implies the conclusion given the precondition, one can explain the following.
Deductive reasoning determines whether the truth of a conclusion can be determined for that rule, based solely on the truth of the premises. Example: "When it rains, things outside get wet. The grass is outside, therefore: when it rains, the grass gets wet." Mathematical logic and philosophical logic are commonly associated with this type of reasoning.
Inductive reasoning attempts to support a determination of the rule. It hypothesizes a rule after numerous examples are taken to be a conclusion that follows from a precondition in terms of such a rule. Example: "The grass got wet numerous times when it rained; therefore, the grass always gets wet when it rains." While they may be persuasive, these arguments are not deductively valid (see the problem of induction). Science is associated with this type of reasoning.
Abductive reasoning, a.k.a. inference to the best explanation, selects a cogent set of preconditions. Given a true conclusion and a rule, it attempts to select some possible premises that, if true also, can support the conclusion, though not uniquely. Example: "When it rains, the grass gets wet. The grass is wet. Therefore, it might have rained." This kind of reasoning can be used to develop a hypothesis, which in turn can be tested by additional reasoning or data. Diagnosticians, detectives, and scientists often use this type of reasoning.

Nelson Goodman
Henry Nelson Goodman (7 August 1906 – 25 November 1998) was an American philosopher, known for his work on counterfactuals, mereology, the problem of induction, irrealism, and aesthetics.

New riddle of induction
Grue and bleen are examples of logical predicates coined by Nelson Goodman in Fact, Fiction, and Forecast to illustrate the "new riddle of induction" – a successor to Hume's original problem. These predicates are unusual because their application is time-dependent; many have tried to solve the new riddle on those terms, but Hilary Putnam and others have argued such time-dependency depends on the language adopted, and in some languages it is equally true for natural-sounding predicates such as "green." For Goodman they illustrate the problem of projectible predicates and ultimately, which empirical generalizations are law-like and which are not.
Goodman's construction and use of grue and bleen illustrates how philosophers use simple examples in conceptual analysis.

Outline of scientific method
The following outline is provided as an overview of and topical guide to scientific method:
Scientific method – body of techniques for investigating phenomena and acquiring new knowledge, as well as for correcting and integrating previous knowledge. It is based on observable, empirical, reproducible, measurable evidence, and subject to the laws of reasoning.

Pseudoskepticism
Pseudoskepticism (or pseudoscepticism) is a philosophical or scientific position which appears to be that of skepticism or scientific skepticism but which in reality fails to be so.

Statistical syllogism
A statistical syllogism (or proportional syllogism or direct inference) is a non-deductive syllogism. It argues, using inductive reasoning, from a generalization true for the most part to a particular case.