Explanatory power

This article deals with explanatory power in the context of the philosophy of science. For a statistical measure of explanatory power, see coefficient of determination or mean squared prediction error.

Explanatory power is the ability of a hypothesis or theory to effectively explain the subject matter it pertains to. The opposite of explanatory power is explanatory impotence.

In the past, various criteria or measures for explanatory power have been proposed. In particular, one hypothesis, theory, or explanation can be said to have more explanatory power than another about the same subject matter

  • if more facts or observations are accounted for;
  • if it changes more "surprising facts" into "a matter of course" (following Peirce);
  • if more details of causal relations are provided, leading to high accuracy and precision in the description;
  • if it offers greater predictive power, i.e., if it offers more details about what we should expect to see, and what we should not;
  • if it depends less on authorities and more on observations;
  • if it makes fewer assumptions;
  • if it is more falsifiable, i.e., more testable by observation or experiment (following Popper).

Recently, David Deutsch proposed that theorists should seek explanations that are hard to vary.

By this he means that a hard-to-vary explanation provides specific details which fit together so tightly that it is impossible to change any one detail without affecting the whole theory.

Overview

Figure: carbon-cycle diagram. Deutsch says that the truth consists of detailed and "hard to vary assertions about reality".

Philosopher and physicist David Deutsch offers a criterion for a good explanation that he says may be just as important to scientific progress as learning to reject appeals to authority, and adopting formal empiricism and falsifiability. To Deutsch, these aspects of a good explanation, and more, are contained in any theory that is specific and "hard to vary". He believes that this criterion helps eliminate "bad explanations" which continuously add justifications, and can otherwise avoid ever being truly falsified.[1] An explanation that is hard to vary but does not survive a critical test can be considered falsified.[1]

Examples

Deutsch takes examples from Greek mythology. He describes how very specific, and even somewhat falsifiable, theories were provided to explain how the gods' sadness caused the seasons. Alternatively, Deutsch points out, one could have just as easily explained the seasons as resulting from the gods' happiness - making it a bad explanation because it is so easy to arbitrarily change details.[1] Without Deutsch's criterion, the 'Greek gods explanation' could have just kept adding justifications. This same criterion, of being "hard to vary", may be what makes the modern explanation for the seasons a good one: none of the details - about the Earth orbiting the Sun, tilted at a certain angle, in a certain orbit - can be easily modified without changing the theory's coherence.[1]

Relation to other criteria

It can be argued that the "hard to vary" criterion is closely related to Occam's razor: both imply logical consistency and a minimum of assumptions.

The philosopher Karl Popper acknowledged that it is logically possible to avoid falsification of a hypothesis by changing its details to deflect any criticism, adopting the term immunizing stratagem from Hans Albert.[2] Popper argued that scientific hypotheses should be subjected to methodological testing to select for the strongest hypothesis.[3]

References

  1. David Deutsch, "A new way of explaining explanation".
  2. Ray S. Percival (2012), The Myth of the Closed Mind: Explaining Why and How People Are Rational, Chicago, p. 206.
  3. Karl R. Popper (1934), The Logic of Scientific Discovery, Routledge Classics (2004 ed.), p. 20.
Alternative hypothesis

In statistical hypothesis testing, the alternative hypothesis (also called the maintained hypothesis or research hypothesis) and the null hypothesis are the two rival hypotheses compared by a statistical hypothesis test.

In science, two rival hypotheses can be compared by their explanatory power and predictive power.
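
As a minimal sketch of how the two rival hypotheses are compared in practice (assuming NumPy and SciPy are available; the data and effect size below are made up purely for illustration), a one-sample t-test contrasts a null hypothesis H0: mean = 0 against the two-sided alternative H1: mean ≠ 0:

    # Minimal sketch: null vs. alternative hypothesis in a one-sample t-test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    sample = rng.normal(loc=0.3, scale=1.0, size=50)  # hypothetical data with a small true effect

    # H0: the population mean is 0.  H1 (alternative): the population mean is not 0.
    t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)

    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
    if p_value < 0.05:
        print("Reject H0 in favour of the alternative hypothesis.")
    else:
        print("Fail to reject H0.")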

Astrology

Astrology is a pseudoscience that claims to divine information about human affairs and terrestrial events by studying the movements and relative positions of celestial objects. Astrology has been dated to at least the 2nd millennium BCE, and has its roots in calendrical systems used to predict seasonal shifts and to interpret celestial cycles as signs of divine communications. Many cultures have attached importance to astronomical events, and some—such as the Hindus, Chinese, and the Maya—developed elaborate systems for predicting terrestrial events from celestial observations. Western astrology, one of the oldest astrological systems still in use, can trace its roots to 19th–17th century BCE Mesopotamia, from which it spread to Ancient Greece, Rome, the Arab world and eventually Central and Western Europe. Contemporary Western astrology is often associated with systems of horoscopes that purport to explain aspects of a person's personality and predict significant events in their lives based on the positions of celestial objects; the majority of professional astrologers rely on such systems.

Throughout most of its history, astrology was considered a scholarly tradition and was common in academic circles, often in close relation with astronomy, alchemy, meteorology, and medicine. It was present in political circles and is mentioned in various works of literature, from Dante Alighieri and Geoffrey Chaucer to William Shakespeare, Lope de Vega, and Calderón de la Barca.

Following the end of the 19th century and the wide-scale adoption of the scientific method, astrology has been challenged successfully on both theoretical and experimental grounds, and has been shown to have no scientific validity or explanatory power. Astrology thus lost its academic and theoretical standing, and common belief in it has largely declined. While polls have demonstrated that approximately one quarter of American, British, and Canadian people say they continue to believe that star and planet positions affect their lives, astrology is now recognized as a pseudoscience—a belief that is incorrectly presented as scientific.

Engel curve

In microeconomics, an Engel curve describes how household expenditure on a particular good or service varies with household income. There are two varieties of Engel curves. Budget share Engel curves describe how the proportion of household income spent on a good varies with income. Alternatively, Engel curves can also describe how real expenditure varies with household income. They are named after the German statistician Ernst Engel (1821–1896), who was the first to investigate this relationship between goods expenditure and income systematically in 1857. The best-known single result from Engel's 1857 article is Engel's law, which states that the poorer a family is, the larger the budget share it spends on nourishment.
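
As a hedged illustration of a budget-share Engel curve (the data below are simulated, and the linear-in-log-income functional form is an assumption chosen for simplicity, not a claim about Engel's own method), the following sketch regresses the share of income spent on food on log income; a negative slope is consistent with Engel's law:

    # Sketch: fitting a budget-share Engel curve w = a + b*log(income) on simulated data.
    import numpy as np

    rng = np.random.default_rng(1)
    income = rng.uniform(1_000, 10_000, size=500)             # hypothetical household incomes
    food_share = 0.9 - 0.08 * np.log(income) + rng.normal(0, 0.02, size=500)

    X = np.column_stack([np.ones_like(income), np.log(income)])
    coef, *_ = np.linalg.lstsq(X, food_share, rcond=None)     # least-squares fit
    a, b = coef
    print(f"intercept = {a:.3f}, slope on log(income) = {b:.3f}")  # expect b < 0 (Engel's law)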

Explanation

An explanation is a set of statements usually constructed to describe a set of facts and to clarify the causes, context, and consequences of those facts. This description may establish rules or laws, and may clarify the existing rules or laws in relation to the objects or phenomena examined. The components of an explanation can be implicit, and interwoven with one another.

An explanation is often underpinned by an understanding or norm that can be represented by different media such as music, text, and graphics. Thus, an explanation is subject to interpretation and discussion.

In scientific research, explanation is one of several purposes for empirical research. Explanation is a way to uncover new knowledge, and to report relationships among different aspects of studied phenomena. Explanation attempts to answer the "why" and "how" questions. Explanations have varied explanatory power. The formal hypothesis is the theoretical tool used to verify explanation in empirical research.

Granger causality

The Granger causality test is a statistical hypothesis test for determining whether one time series is useful in forecasting another, first proposed in 1969. Ordinarily, regressions reflect "mere" correlations, but Clive Granger argued that causality in economics could be tested for by measuring the ability to predict the future values of a time series using prior values of another time series. Since the question of "true causality" is deeply philosophical, and because of the post hoc ergo propter hoc fallacy of assuming that one thing preceding another can be used as a proof of causation, econometricians assert that the Granger test finds only "predictive causality".

A time series X is said to Granger-cause Y if it can be shown, usually through a series of t-tests and F-tests on lagged values of X (and with lagged values of Y also included), that those X values provide statistically significant information about future values of Y.
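
As a rough sketch (assuming the statsmodels library is installed; the simulated series, lag order, and result-dictionary keys such as "ssr_ftest" reflect that package's conventions and should be checked against its documentation), the following tests whether lagged values of x improve forecasts of y:

    # Sketch of a Granger causality test, assuming statsmodels is available.
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(2)
    n = 300
    x = rng.normal(size=n)
    y = np.zeros(n)
    for t in range(2, n):                      # y depends on lagged x, so x should Granger-cause y
        y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 2] + rng.normal(scale=0.5)

    # Column order matters: the test asks whether the second column Granger-causes the first.
    data = np.column_stack([y, x])
    results = grangercausalitytests(data, maxlag=3)

    # Each lag's entry contains several statistics, including an F-test on the lagged x terms.
    f_stat, p_value, _, _ = results[2][0]["ssr_ftest"]
    print(f"lag 2: F = {f_stat:.2f}, p = {p_value:.4f}")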

Granger also stressed that some studies using "Granger causality" testing in areas outside economics reached "ridiculous" conclusions. "Of course, many ridiculous papers appeared", he said in his Nobel lecture. However, it remains a popular method for causality analysis in time series due to its computational simplicity. The original definition of Granger causality does not account for latent confounding effects and does not capture instantaneous and non-linear causal relationships, though several extensions have been proposed to address these issues.

Horror vacui (physics)

In physics, horror vacui, or plenism, commonly stated as "nature abhors a vacuum", is a postulate attributed to Aristotle, who articulated a belief, later criticized by the atomism of Epicurus and Lucretius, that nature contains no vacuums because the denser surrounding material continuum would immediately fill the rarity of an incipient void. He also argued against the void in a more abstract sense (as "separable"), for example, that by definition a void, itself, is nothing, and following Plato, nothing cannot rightly be said to exist. Furthermore, insofar as it would be featureless, it could neither be encountered by the senses, nor could its supposition lend additional explanatory power. Hero of Alexandria challenged the theory in the first century CE, but his attempts to create an artificial vacuum failed. The theory was debated in the context of 17th-century fluid mechanics, by Thomas Hobbes and Robert Boyle, among others, and through the early 18th century by Sir Isaac Newton and Gottfried Leibniz.

Johansen test

In statistics, the Johansen test, named after Søren Johansen, is a procedure for testing cointegration of several, say k, I(1) time series. This test permits more than one cointegrating relationship so is more generally applicable than the Engle–Granger test which is based on the Dickey–Fuller (or the augmented) test for unit roots in the residuals from a single (estimated) cointegrating relationship.

There are two types of Johansen test, one based on the trace statistic and one based on the maximum eigenvalue statistic, and their inferences may differ slightly. The null hypothesis for the trace test is that the number of cointegration vectors is r = r* < k, vs. the alternative that r = k. Testing proceeds sequentially for r* = 1, 2, etc., and the first non-rejection of the null is taken as an estimate of r. The null hypothesis for the "maximum eigenvalue" test is as for the trace test, but the alternative is r = r* + 1; again, testing proceeds sequentially for r* = 1, 2, etc., with the first non-rejection used as an estimator for r.

Just like a unit root test, there can be a constant term, a trend term, both, or neither in the model. For a general VAR(p) model

x_t = μ + Φ D_t + Π_1 x_{t−1} + ⋯ + Π_p x_{t−p} + ε_t,    t = 1, …, T,

there are two possible specifications for error correction, that is, two vector error correction models (VECMs):

1. The long-run VECM:

Δx_t = μ + Φ D_t + Γ_1 Δx_{t−1} + ⋯ + Γ_{p−1} Δx_{t−p+1} + Π x_{t−p} + ε_t,

where Γ_i = Π_1 + ⋯ + Π_i − I,    i = 1, …, p − 1.

2. The transitory VECM:

Δx_t = μ + Φ D_t + Γ_1 Δx_{t−1} + ⋯ + Γ_{p−1} Δx_{t−p+1} + Π x_{t−1} + ε_t,

where Γ_i = −(Π_{i+1} + ⋯ + Π_p),    i = 1, …, p − 1.

The two specifications are equivalent: in both VECMs,

Π = Π_1 + ⋯ + Π_p − I.

Inferences are drawn on Π, and they will be the same, as will the explanatory power.
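
A rough sketch of the test on simulated data follows, assuming statsmodels' coint_johansen function is available; the attribute names lr1/lr2 (trace and maximum-eigenvalue statistics) and cvt/cvm (their critical values) follow that implementation and should be verified against its documentation:

    # Sketch: Johansen cointegration test on two simulated I(1) series, assuming statsmodels.
    import numpy as np
    from statsmodels.tsa.vector_ar.vecm import coint_johansen

    rng = np.random.default_rng(3)
    n = 500
    common_trend = np.cumsum(rng.normal(size=n))            # shared stochastic trend -> cointegration
    series1 = common_trend + rng.normal(scale=0.5, size=n)
    series2 = 2.0 * common_trend + rng.normal(scale=0.5, size=n)
    data = np.column_stack([series1, series2])

    # det_order=0 includes a constant; k_ar_diff is the number of lagged differences in the VECM.
    result = coint_johansen(data, det_order=0, k_ar_diff=1)

    print("trace statistics:         ", result.lr1)   # compared against result.cvt critical values
    print("max-eigenvalue statistics:", result.lr2)   # compared against result.cvm critical values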

Mean squared prediction error

In statistics, the mean squared prediction error (MSPE) or mean squared error of the predictions of a smoothing or curve fitting procedure is the expected value of the squared difference between the fitted values implied by the predictive function ĝ and the values of the (unobservable) true function g. It is an inverse measure of the explanatory power of ĝ and can be used in the process of cross-validation of an estimated model.

If the smoothing or fitting procedure has projection matrix (i.e., hat matrix) L, which maps the observed values vector y to the predicted values vector ŷ via ŷ = Ly, then

MSPE(L) = (1/n) Σ_{i=1}^{n} E[(g(x_i) − ĝ(x_i))²].

The MSPE can be decomposed into two terms: the mean of squared biases of the fitted values and the mean of variances of the fitted values:

MSPE(L) = (1/n) Σ_{i=1}^{n} (E[ĝ(x_i)] − g(x_i))² + (1/n) Σ_{i=1}^{n} Var(ĝ(x_i)).

Knowledge of g is required in order to calculate the MSPE exactly; otherwise, it can be estimated.
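
Since g is known in a simulation, the MSPE and its bias-variance decomposition can be approximated directly by Monte Carlo; the sketch below uses only NumPy, with a made-up smooth function and a cubic polynomial fit standing in for the smoothing procedure:

    # Sketch: Monte Carlo estimate of the MSPE of a polynomial smoother on a known function g.
    import numpy as np

    rng = np.random.default_rng(4)
    x = np.linspace(0, 1, 50)
    g = np.sin(2 * np.pi * x)              # the "true" function, known here because we simulate
    degree, sigma, n_rep = 3, 0.3, 2000

    fits = np.empty((n_rep, x.size))
    for r in range(n_rep):
        y = g + rng.normal(scale=sigma, size=x.size)   # noisy observations
        coeffs = np.polyfit(x, y, deg=degree)          # the fitting procedure
        fits[r] = np.polyval(coeffs, x)                # fitted values g_hat(x_i)

    mspe = np.mean((fits - g) ** 2)                    # mean over points and replications
    bias_sq = np.mean((fits.mean(axis=0) - g) ** 2)    # mean of squared biases
    variance = np.mean(fits.var(axis=0))               # mean of variances
    print(f"MSPE ~ {mspe:.4f} = bias^2 {bias_sq:.4f} + variance {variance:.4f}")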

Model selection

Model selection is the task of selecting a statistical model from a set of candidate models, given data. In the simplest cases, a pre-existing set of data is considered. However, the task can also involve the design of experiments such that the data collected is well-suited to the problem of model selection. Given candidate models of similar predictive or explanatory power, the simplest model is most likely to be the best choice (Occam's razor).
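
As a hedged illustration of that trade-off between fit and simplicity (the data and candidate models below are made up, and the AIC is computed by hand under a Gaussian-error assumption rather than taken from any particular library), the sketch compares polynomial models of increasing degree:

    # Sketch: choosing among polynomial models by AIC (lower is better), balancing fit and simplicity.
    import numpy as np

    rng = np.random.default_rng(5)
    x = np.linspace(-1, 1, 80)
    y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(scale=0.3, size=x.size)  # true model is quadratic

    n = x.size
    for degree in range(1, 7):
        coeffs = np.polyfit(x, y, deg=degree)
        rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
        k = degree + 2                                  # coefficients plus the error variance
        aic = n * np.log(rss / n) + 2 * k               # Gaussian-likelihood AIC up to a constant
        print(f"degree {degree}: AIC = {aic:.1f}")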

Konishi & Kitagawa (2008, p. 75) state, "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling". Relatedly, Cox (2006, p. 197) has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".

Model selection may also refer to the problem of selecting a few representative models from a large set of computational models for the purpose of decision making or optimization under uncertainty.

Notion (philosophy)

A notion in philosophy is a reflection in the mind of real objects and phenomena in their essential features and relations. Notions are usually described in terms of scope and content. This is because notions are often created in response to empirical observations (or experiments) of covarying trends among variables.

Notion is the common translation for Begriff as used by Hegel in his Science of Logic (1816).

Opinion

An opinion is a judgment, viewpoint, or statement that is not conclusive.

Ordinary least squares

In statistics, ordinary least squares (OLS) is a type of linear least squares method for estimating the unknown parameters in a linear regression model. OLS chooses the parameters of a linear function of a set of explanatory variables by the principle of least squares: minimizing the sum of the squares of the differences between the observed dependent variable (values of the variable being predicted) in the given dataset and those predicted by the linear function.

Geometrically, this is seen as the sum of the squared distances, parallel to the axis of the dependent variable, between each data point in the set and the corresponding point on the regression surface – the smaller the differences, the better the model fits the data. The resulting estimator can be expressed by a simple formula, especially in the case of a simple linear regression, in which there is a single regressor on the right side of the regression equation.
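
That closed-form estimator, β̂ = (XᵀX)⁻¹Xᵀy, can be computed directly; the sketch below (NumPy only, with simulated data chosen purely for illustration) solves the normal equations rather than explicitly inverting XᵀX:

    # Sketch: ordinary least squares via the normal equations, (X'X) beta_hat = X'y.
    import numpy as np

    rng = np.random.default_rng(6)
    n = 200
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)
    y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(scale=0.5, size=n)

    X = np.column_stack([np.ones(n), x1, x2])            # design matrix with an intercept column
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)         # OLS estimate (X'X)^{-1} X'y
    residuals = y - X @ beta_hat
    print("estimated coefficients:", beta_hat)
    print("residual sum of squares:", residuals @ residuals)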

The OLS estimator is consistent when the regressors are exogenous, and optimal in the class of linear unbiased estimators when the errors are homoscedastic and serially uncorrelated. Under these conditions, the method of OLS provides minimum-variance mean-unbiased estimation when the errors have finite variances. Under the additional assumption that the errors are normally distributed, OLS is the maximum likelihood estimator.

OLS is used in fields as diverse as economics (econometrics), data science, political science, psychology and engineering (control theory and signal processing).

Philosophical logic

Philosophical logic refers to those areas of philosophy in which recognized methods of logic have traditionally been used to solve or advance the discussion of philosophical problems. Among these, Sybil Wolfram highlights the study of argument, meaning, and truth, while Colin McGinn presents identity, existence, predication, necessity and truth as the main topics of his book on the subject.

Philosophical logic also addresses extensions and alternatives to traditional, "classical" logic known as "non-classical" logics. These receive more attention in texts such as John P. Burgess's Philosophical Logic, the Blackwell Companion to Philosophical Logic, or the multi-volume Handbook of Philosophical Logic edited by Dov M. Gabbay and Franz Guenthner.

Psychological egoism

Psychological egoism is the view that humans are always motivated by self-interest and selfishness, even in what seem to be acts of altruism. It claims that, when people choose to help others, they do so ultimately because of the personal benefits that they themselves expect to obtain, directly or indirectly, from so doing. This is a descriptive rather than normative view, since it only makes claims about how things are, not how they ought to be. It is, however, related to several other normative forms of egoism, such as ethical egoism and rational egoism.

A specific form of psychological egoism is psychological hedonism, the view that the ultimate motive for all voluntary human action is the desire to experience pleasure or to avoid pain. Many discussions of psychological egoism focus on this type, but the two are not the same: theorists have explained behavior motivated by self-interest without using pleasure and pain as the final causes of behavior. Psychological hedonism argues that actions are driven by a need for pleasure both immediately and in the future. However, immediate gratification can be sacrificed for a chance of greater, future pleasure. Further, humans are not motivated to strictly avoid pain and only pursue pleasure; instead, humans will endure pain to achieve the greatest net pleasure. Accordingly, all actions are tools for increasing pleasure or decreasing pain, even those defined as altruistic and those that do not cause an immediate change in satisfaction levels.

Scientific theory

A scientific theory is an explanation of an aspect of the natural world that can be repeatedly tested and verified in accordance with the scientific method, using accepted protocols of observation, measurement, and evaluation of results. Where possible, theories are tested under controlled conditions in an experiment. In circumstances not amenable to experimental testing, theories are evaluated through principles of abductive reasoning. Established scientific theories have withstood rigorous scrutiny and embody scientific knowledge.

The meaning of the term scientific theory (often contracted to theory for brevity) as used in the disciplines of science is significantly different from the common vernacular usage of theory. In everyday speech, theory can imply an explanation that represents an unsubstantiated and speculative guess, whereas in science it describes an explanation that has been tested and widely accepted as valid. These different usages are comparable to the opposing usages of prediction in science versus common speech, where it denotes a mere hope.

The strength of a scientific theory is related to the diversity of phenomena it can explain and its simplicity. As additional scientific evidence is gathered, a scientific theory may be modified and ultimately rejected if it cannot be made to fit the new findings; in such circumstances, a more accurate theory is then required. This does not mean that all theories can be fundamentally changed (well-established foundational theories such as evolution, heliocentric theory, cell theory, and the theory of plate tectonics, for example, are unlikely to be). In certain cases, the less-accurate unmodified scientific theory can still be treated as a theory if it is useful (due to its sheer simplicity) as an approximation under specific conditions. A case in point is Newton's laws of motion, which can serve as an approximation to special relativity at velocities that are small relative to the speed of light.

Scientific theories are testable and make falsifiable predictions. They describe the causes of a particular natural phenomenon and are used to explain and predict aspects of the physical universe or specific areas of inquiry (for example, electricity, chemistry, and astronomy). Scientists use theories to further scientific knowledge, as well as to facilitate advances in technology or medicine.

As with other forms of scientific knowledge, scientific theories are both deductive and inductive, aiming for predictive and explanatory power.

The paleontologist Stephen Jay Gould wrote that "...facts and theories are different things, not rungs in a hierarchy of increasing certainty. Facts are the world's data. Theories are structures of ideas that explain and interpret facts."

Stone Tape

The Stone Tape theory is the speculation that ghosts and hauntings are analogous to tape recordings, and that mental impressions during emotional or traumatic events can be projected in the form of energy, "recorded" onto rocks and other items and "replayed" under certain conditions. The idea draws inspiration from, and shares similarities with, the views of 19th-century intellectuals and psychic researchers such as Charles Babbage, Eleanor Sidgwick and Edmund Gurney. The concept was popularized by a 1972 Christmas ghost story called The Stone Tape, produced by the BBC. Following the film's popularity, the idea and the term "stone tape" were retrospectively and inaccurately attributed to the British archaeologist turned parapsychologist T. C. Lethbridge, who believed that ghosts were not spirits of the deceased but simply non-interactive recordings similar to a movie.

The eclipse of Darwinism

Julian Huxley used the phrase "the eclipse of Darwinism" to describe the state of affairs prior to what he called the modern synthesis, when evolution was widely accepted in scientific circles but relatively few biologists believed that natural selection was its primary mechanism. Historians of science such as Peter J. Bowler have used the same phrase as a label for the period within the history of evolutionary thought from the 1880s to around 1920, when alternatives to natural selection were developed and explored—as many biologists considered natural selection to have been a wrong guess on Charles Darwin's part, or at least as of relatively minor importance. An alternative term, the interphase of Darwinism, has been proposed to avoid the largely incorrect implication that the putative eclipse was preceded by a period of vigorous Darwinian research.

While there had been multiple explanations of evolution including vitalism, catastrophism, and structuralism through the 19th century, four major alternatives to natural selection were in play at the turn of the 20th century:

  • Theistic evolution was the belief that God directly guided evolution.
  • Neo-Lamarckism was the idea that evolution was driven by the inheritance of characteristics acquired during the life of the organism.
  • Orthogenesis was the belief that organisms were affected by internal forces or laws of development that drove evolution in particular directions.
  • Mutationism was the idea that evolution was largely the product of mutations that created new forms or species in a single step.

Theistic evolution largely disappeared from the scientific literature by the end of the 19th century as direct appeals to supernatural causes came to be seen as unscientific. The other alternatives had significant followings well into the 20th century; mainstream biology largely abandoned them only when developments in genetics made them seem increasingly untenable, and when the development of population genetics and the modern synthesis demonstrated the explanatory power of natural selection. Ernst Mayr wrote that as late as 1930 most textbooks still emphasized such non-Darwinian mechanisms.

Two-source hypothesis

The two-source hypothesis (or 2SH) is an explanation for the synoptic problem, the pattern of similarities and differences between the three Gospels of Matthew, Mark, and Luke. It posits that the Gospel of Matthew and the Gospel of Luke were based on the Gospel of Mark and a hypothetical sayings collection from the Christian oral tradition called Q.

The two-source hypothesis emerged in the 19th century. B. H. Streeter definitively stated the case in 1924, adding that two other sources, referred to as M and L, lie behind the material in Matthew and Luke respectively. The strengths of the hypothesis are its explanatory power regarding the shared and non-shared material in the three gospels; its weaknesses lie in the exceptions to those patterns, and in the hypothetical nature of its proposed collection of Jesus-sayings. Later scholars have advanced numerous elaborations and variations on the basic hypothesis, and even completely alternative hypotheses. Nevertheless, "the 2SH commands the support of most biblical critics from all continents and denominations."

When Streeter's two additional sources, M and L, are taken into account, this hypothesis is sometimes referred to as the four-document hypothesis.

William Diver

William Diver (July 20, 1921 – August 31, 1995) was an American linguist. He was the founder of the Columbia School of Linguistics, which is named after Columbia University, where he received his Ph.D. in comparative Indo-European linguistics.

Although his background lay mainly in the linguistics of ancient languages, his approach to linguistics was distinctly modern and scientific. His lectures were sprinkled with references to the history and the methodology of science. He believed that science is explanation, not description or prediction, and he compared the explanatory power of the Copernican astronomical system with the explanatory weakness of the epicycles of the Ptolemaic system, both of which had equal descriptive and predictive power. He also believed that the purpose of language was chiefly communication, and his linguistic analyses reflected that orientation, along with that of human psychology and physiology. In other words, those orientations helped him to explain why languages take the forms they do.

During Diver's career, most popular schools of linguistic thought tended towards pure formalism, based on traditional categories and entities such as the parts of speech and the sentence. While these schools rejected prescriptivism and the idealization of the standard language, Diver stood almost alone in rejecting traditional entities that had no specific function, such as the syllable, and mechanistic interpretations of "government" or "agreement". He analyzed language as a form of human behavior, rather than as an idealized expression of truth. The article on the Columbia School of Linguistics gives more details and successful applications of Diver's methodology.
