Empirical evidence

Empirical evidence is the information received by means of the senses, particularly by observation and documentation of patterns and behavior through experimentation.[1] The term comes from the Greek word for experience, ἐμπειρία (empeiría).

After Immanuel Kant, it is common in philosophy to call knowledge gained in this way a posteriori knowledge, in contrast to a priori knowledge.

Evidence pyramid representing different types of evidence and their general reliability

Meaning

Empirical evidence is information that verifies the truth (accurate correspondence with reality) or falsity (inaccuracy) of a claim. In the empiricist view, one can claim to have knowledge only when based on empirical evidence (although some empiricists believe that there are other ways of gaining knowledge). This stands in contrast to the rationalist view, under which reason or reflection alone is considered evidence for the truth or falsity of some propositions.[2] Empirical evidence is information acquired by observation or experimentation, in the form of recorded data, which may be the subject of analysis (e.g. by scientists). Such records are primary sources of empirical evidence. Secondary sources describe, discuss, interpret, comment upon, analyze, evaluate, summarize, and process primary sources; they can be articles in newspapers or popular magazines, book or movie reviews, or articles found in scholarly journals that discuss or evaluate someone else's original research.[2]

Empirical evidence may be synonymous with the outcome of an experiment. In this regard, an empirical result is a unified confirmation. In this context, the term semi-empirical is used for qualifying theoretical methods that use, in part, basic axioms or postulated scientific laws and experimental results. Such methods are opposed to theoretical ab initio methods, which are purely deductive and based on first principles.

In science, empirical evidence is required for a hypothesis to gain acceptance in the scientific community. Normally, this validation is achieved by the scientific method of forming a hypothesis, experimental design, peer review, reproduction of results, conference presentation, and journal publication. This requires rigorous communication of hypothesis (usually expressed in mathematics), experimental constraints and controls (expressed necessarily in terms of standard experimental apparatus), and a common understanding of measurement.

Statements and arguments depending on empirical evidence are often referred to as a posteriori ("following experience") as distinguished from a priori (preceding it). A priori knowledge or justification is independent of experience (for example "All bachelors are unmarried"), whereas a posteriori knowledge or justification is dependent on experience or empirical evidence (for example "Some bachelors are very happy"). The notion that the distinction between a posteriori and a priori is tantamount to the distinction between empirical and non-empirical knowledge comes from Kant's Critique of Pure Reason.[3]

The standard positivist view of empirically acquired information has been that observation, experience, and experiment serve as neutral arbiters between competing theories. However, since the 1960s, a persistent critique, most associated with Thomas Kuhn,[4] has argued that these methods are influenced by prior beliefs and experiences. Consequently, two scientists observing, experiencing, or experimenting on the same event cannot be expected to make the same theory-neutral observations, and observation may therefore be unable to serve as a theory-neutral arbiter. Theory-dependence of observation means that, even if there were agreed methods of inference and interpretation, scientists may still disagree on the nature of empirical data.[5]

Footnotes

  1. ^ Pickett 2011, p. 585
  2. ^ a b Feldman 2001, p. 293
  3. ^ Craig 2005, p. 1
  4. ^ Kuhn 1970
  5. ^ Bird 2013

References

  • Bird, Alexander (2013). Zalta, Edward N. (ed.). "Thomas Kuhn". Stanford Encyclopedia of Philosophy. Section 4.2 Perception, Observational Incommensurability, and World-Change. Retrieved 25 January 2012.
  • Craig, Edward (2005). "a posteriori". The Shorter Routledge Encyclopedia of Philosophy. Routledge. ISBN 9780415324953.
  • Feldman, Richard (2001) [1999]. "Evidence". In Audi, Robert (ed.). The Cambridge Dictionary of Philosophy (2nd ed.). Cambridge, UK: Cambridge University Press. pp. 293–294. ISBN 978-0521637220.
  • Kuhn, Thomas S. (1970) [1962]. The Structure of Scientific Revolutions (2nd ed.). Chicago: University of Chicago Press. ISBN 978-0226458045.
  • Pickett, Joseph P., ed. (2011). "Empirical". The American Heritage Dictionary of the English Language (5th ed.). Houghton Mifflin. ISBN 978-0-547-04101-8.

5261 Eureka

5261 Eureka is the first Mars trojan discovered. It was discovered by David H. Levy and Henry Holt at Palomar Observatory on June 20, 1990. It trails Mars (at the L5 point) at a distance varying by only 0.3 AU during each revolution (with a secular trend superimposed, changing the distance from 1.5–1.8 AU around 1850 to 1.3–1.6 AU around 2400). Minimum distances from the Earth, Venus, and Jupiter are 0.5, 0.8, and 3.5 AU, respectively.

Long-term numerical integration shows that the orbit is stable. Kimmo A. Innanen and Seppo Mikkola note that "contrary to intuition, there is clear empirical evidence for the stability of motion around the L4 and L5 points of all the terrestrial planets over a timeframe of several million years".

Since the discovery of 5261 Eureka, the Minor Planet Center has recognized three other asteroids as Martian trojans: 1999 UJ7 at the L4 point, 1998 VF31 at the L5 point, and 2007 NS2, also at the L5 point. At least five other asteroids in near-1:1 resonances with Mars have been discovered, but they do not exhibit trojan behavior. They are 2001 FR127, 2001 FG24, (36017) 1999 ND43, 1998 QH56 and (152704) 1998 SD4. Due to close orbital similarities, most of the other, smaller, members of the L5 group are hypothesized to be fragments of 5261 Eureka that were detached after it was spun up by the YORP effect (consistent with its rotational period of 2.69 h).

The infrared spectrum for 5261 Eureka is typical for an A-type asteroid, but the visual spectrum is consistent with an evolved form of achondrite called an angrite. A-class asteroids are tinted red in hue, with a moderate albedo. The asteroid is located deep within a stable Lagrangian zone of Mars, which is considered indicative of a primordial origin—meaning the asteroid has most likely been in this orbit for much of the history of the Solar System.

Ali Qushji

Ala al-Dīn Ali ibn Muhammed (1403 – 16 December 1474), known as Ali Qushji (Ottoman Turkish/Persian: علی قوشچی, kuşçu, Turkish for "falconer"; Latin: Ali Kushgii), was an astronomer, mathematician and physicist originally from Samarkand, who settled in the Ottoman Empire some time before 1472. As a disciple of Ulugh Beg, he is best known for the development of astronomical physics independent from natural philosophy, and for providing empirical evidence for the Earth's rotation in his treatise, Concerning the Supposed Dependence of Astronomy upon Philosophy. In addition to his contributions to Ulugh Beg's famous work Zij-i-Sultani and to the founding of the Sahn-ı Seman Medrese, one of the first centers for the study of various traditional Islamic sciences in the Ottoman caliphate, Ali Qushji was also the author of several scientific works and textbooks on astronomy.

Alternative stable state

In ecology, the theory of alternative stable states (sometimes termed alternate stable states or alternative stable equilibria) predicts that ecosystems can exist under multiple "states" (sets of unique biotic and abiotic conditions). These alternative states are non-transitory and therefore considered stable over ecologically-relevant timescales. Ecosystems may transition from one stable state to another, in what is known as a state shift (sometimes termed a phase shift or regime shift), when perturbed. Due to ecological feedbacks, ecosystems display resistance to state shifts and therefore tend to remain in one state unless perturbations are large enough. Multiple states may persist under equal environmental conditions, a phenomenon known as hysteresis. Alternative stable state theory suggests that discrete states are separated by ecological thresholds, in contrast to ecosystems which change smoothly and continuously along an environmental gradient.
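
A minimal numerical sketch of the idea (not from the original article) uses a toy one-dimensional model, dx/dt = -(x - 1)(x - 2)(x - 3), which has stable equilibria at x = 1 and x = 3 separated by an unstable threshold at x = 2; which state the system settles into depends on which side of the threshold a perturbation leaves it. The model and all numbers are illustrative assumptions, not an ecological model from the literature.

```python
def simulate(x0, steps=20_000, dt=0.001):
    """Euler integration of the toy bistable model dx/dt = -(x - 1)(x - 2)(x - 3)."""
    x = x0
    for _ in range(steps):
        x += dt * -(x - 1.0) * (x - 2.0) * (x - 3.0)
    return x

# Perturbations that leave the system on different sides of the threshold at x = 2
# relax to different stable states, even though the governing dynamics are identical.
for x0 in (1.6, 1.9, 2.1, 2.4):
    print(f"start at {x0} -> settles near {simulate(x0):.3f}")
```

The sharp dependence of the final state on which side of x = 2 the system starts from is the toy analogue of an ecological threshold separating alternative stable states.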

Andrica's conjecture

Andrica's conjecture (named after Dorin Andrica) is a conjecture regarding the gaps between prime numbers.

The conjecture states that the inequality

\[ \sqrt{p_{n+1}} - \sqrt{p_n} < 1 \]

holds for all \(n\), where \(p_n\) is the \(n\)th prime number. If \(g_n = p_{n+1} - p_n\) denotes the \(n\)th prime gap, then Andrica's conjecture can also be rewritten as

\[ g_n < 2\sqrt{p_n} + 1. \]
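
As an illustration only (not part of the original article), the following Python sketch checks the inequality for consecutive primes below an arbitrary bound of 10,000; the helper name primes_up_to is invented for this example.

```python
from math import isqrt, sqrt

def primes_up_to(limit):
    """Return all primes <= limit using a basic sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, isqrt(limit) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

# Check sqrt(p_{n+1}) - sqrt(p_n) < 1 for every pair of consecutive primes found.
primes = primes_up_to(10_000)
differences = [sqrt(q) - sqrt(p) for p, q in zip(primes, primes[1:])]
print(f"largest difference observed: {max(differences):.4f}")  # about 0.6709, for the gap 7 -> 11
assert all(d < 1 for d in differences)
```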

Communal reinforcement

Communal reinforcement is a social phenomenon in which a concept or idea is repeatedly asserted in a community, regardless of whether sufficient empirical evidence has been presented to support it. Over time, the concept or idea is reinforced to become a strong belief in many people's minds, and may be regarded by the members of the community as fact. Often, the concept or idea may be further reinforced by publications in the mass media, books, or other means of communication. The phrase "millions of people can't all be wrong" is indicative of the common tendency to accept a communally reinforced idea without question, which often aids in the widespread acceptance of factoids. A similar-sounding but distinct term is community reinforcement, a behavioral method used in the treatment of drug addiction.

Empirical research

Empirical research is research using empirical evidence. It is a way of gaining knowledge by means of direct and indirect observation or experience. Empiricism values such research more than other kinds. Empirical evidence (the record of one's direct observations or experiences) can be analyzed quantitatively or qualitatively. Quantifying the evidence or making sense of it in qualitative form, a researcher can answer empirical questions, which should be clearly defined and answerable with the evidence collected (usually called data). Research design varies by field and by the question being investigated. Many researchers combine qualitative and quantitative forms of analysis to better answer questions which cannot be studied in laboratory settings, particularly in the social sciences and in education.

In some fields, quantitative research may begin with a research question (e.g., "Does listening to vocal music during the learning of a word list have an effect on later memory for these words?") which is tested through experimentation. Usually, a researcher has a certain theory regarding the topic under investigation. Based on this theory, statements or hypotheses will be proposed (e.g., "Listening to vocal music has a negative effect on learning a word list."). From these hypotheses, predictions about specific events are derived (e.g., "People who study a word list while listening to vocal music will remember fewer words on a later memory test than people who study a word list in silence."). These predictions can then be tested with a suitable experiment. Depending on the outcomes of the experiment, the theory on which the hypotheses and predictions were based will be supported or not, or may need to be modified and then subjected to further testing.
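
As a sketch of how such a prediction might be tested (the data below are invented for illustration, and SciPy's two-sample t-test is just one of several analyses a researcher could choose):

```python
import numpy as np
from scipy import stats

# Hypothetical recall scores (words remembered out of 20); values are made up for illustration.
music_group = np.array([9, 11, 8, 10, 12, 7, 9, 10])        # studied while listening to vocal music
silence_group = np.array([12, 14, 11, 13, 15, 12, 10, 13])  # studied in silence

# Two-sample t-test of the prediction that the music group remembers fewer words.
t_stat, p_value = stats.ttest_ind(music_group, silence_group, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value would count as empirical evidence consistent with the hypothesis;
# otherwise the theory may need to be modified and tested again.
```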

Giffen good

In economics and consumer theory, a Giffen good is a product that people consume more of as the price rises and vice versa—violating the basic law of demand in microeconomics. For any other sort of good, as the price of the good rises, the substitution effect makes consumers purchase less of it and more of substitute goods; for most goods, the income effect (the effective decline in available income as more is spent on existing units of the good) reinforces this decline in demand. But a Giffen good is so strongly an inferior good in the minds of consumers (being more in demand at lower incomes) that this contrary income effect more than offsets the substitution effect, and the net effect of the good's price rise is to increase demand for it.
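
The reasoning above is commonly summarized with the Slutsky equation; the following is a standard textbook formulation added here for clarity, not text from the original article.

```latex
% Slutsky decomposition of the demand response x(p, m) to a price change:
%   total effect = substitution effect + income effect
\[
  \frac{\partial x}{\partial p}
  = \underbrace{\frac{\partial x^{h}}{\partial p}}_{\text{substitution effect } (\le 0)}
  \;-\; \underbrace{x \, \frac{\partial x}{\partial m}}_{\text{income effect}}
\]
% For an inferior good, \(\partial x / \partial m < 0\), so the income term above is positive.
% A Giffen good is the extreme case in which that positive income term outweighs the
% negative substitution term, giving \(\partial x / \partial p > 0\): demand rises with price.
```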

Homosociality

In sociology, homosociality means same-sex relationships that are not of a romantic or sexual nature, such as friendship, mentorship, or others. The opposite of homosocial is heterosocial, describing a preference for non-sexual relations with the opposite sex. In group relations involving more than two individuals, the relation can be homosocial (involving same-sex social relations), bisocial (involving social relations with both sexes), or heterosocial (involving only the opposite sex).

The term homosocial was popularized by Eve Sedgwick in her discussion of male homosocial desire. Jean Lipman-Blumen had earlier (1976) defined homosociality as a preference for members of one's own sex – a social rather than a sexual preference.

Koolakamba

The Koolakamba or Kooloo-Kamba is a purported hybrid species of chimpanzees and gorillas. This alleged hybrid ape has been reported in Africa as early as the mid-19th century, though no empirical evidence has been found to substantiate its existence, and it has no entry in the NCBI taxonomy database. The Koolakamba was referenced in the mid-19th century in French work by Franquet (1852, as cited by Shea, 1984) and in some descriptive work of Paul Du Chaillu from 1860, 1861, 1867, and 1899, some of which was republished in 1969 (Explorations and Adventures in Equatorial Africa).

Leukoedema

Leukoedema is a blue, grey or white appearance of mucosae, particularly the buccal mucosa (the inside of the cheeks); it may also occur on the mucosa of the larynx or vagina. It is a harmless and very common condition. Because it is so common, it has been argued that it may in fact represent a variation of the normal appearance rather than a disease, but empirical evidence suggests that leukoedema is an acquired condition caused by local irritation. It is found more commonly in black-skinned people and tobacco users. The term is derived from the Greek words λευκός leukós, "white", and οἴδημα oídēma, "swelling".

Outline of engineering

The following outline is provided as an overview of and topical guide to engineering:

Engineering is the scientific discipline and profession that applies scientific theories, mathematical methods, and empirical evidence to design, create, and analyze technological solutions cognizant of safety, human factors, physical laws, regulations, practicality, and cost.

Positive statement

In the social sciences and philosophy, a positive or descriptive statement concerns what "is", "was", or "will be", and contains no indication of approval or disapproval (what should be). Positive statements are thus the opposite of normative statements. A positive statement is based on empirical evidence; examples include "An increase in taxation will result in less consumption" and "A fall in the supply of petrol will lead to an increase in its price". However, a positive statement can be factually incorrect: "The moon is made of green cheese" is empirically false, but it is still a positive statement, as it is a statement about what is, not what should be.

Postmodernism (international relations)

Postmodern international relations is an approach that has been part of international relations scholarship since the 1980s. Although there are various strands of thinking, a key element to postmodernist theories is a distrust of any account of human life which claims to have direct access to the truth. Postmodern international relations theory critiques theories like Marxism that provide an overarching metanarrative to history. Key postmodern thinkers include Jean-François Lyotard, Michel Foucault, and Jacques Derrida.

A criticism made of postmodern approaches to international relations is that they place too much emphasis on theoretical notions and are generally not concerned with empirical evidence.

Randomized experiment

In science, randomized experiments are experiments that allow the greatest reliability and validity of statistical estimates of treatment effects. Randomization-based inference is especially important in experimental design and in survey sampling.
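
A minimal sketch of the logic, with simulated data and a hypothetical treatment effect of 2.0 chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n = 1_000

# Randomly assign half of the units to treatment (1) and half to control (0).
treatment = rng.permutation(np.repeat([0, 1], n // 2))

# Simulated outcomes: a baseline of 5.0, a made-up true treatment effect of 2.0, plus noise.
outcome = 5.0 + 2.0 * treatment + rng.normal(0.0, 1.0, size=n)

# Because assignment was random, the simple difference in group means is an
# unbiased estimate of the average treatment effect.
effect_estimate = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()
print(f"estimated treatment effect: {effect_estimate:.2f}")  # close to 2.0
```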

Scientific evidence

Scientific evidence is evidence which serves to either support or counter a scientific theory or hypothesis. Such evidence is expected to be empirical and to be interpreted in accordance with the scientific method. Standards for scientific evidence vary according to the field of inquiry, but the strength of scientific evidence is generally based on the results of statistical analysis and the strength of scientific controls.

Skepticism

Skepticism (American English) or scepticism (British English, Australian English, and Canadian English) is generally any questioning attitude or doubt towards one or more items of putative knowledge or belief. It is often directed at domains, such as the supernatural, morality (moral skepticism), religion (skepticism about the existence of God), or knowledge (skepticism about the possibility of knowledge, or of certainty). Formally, skepticism as a topic occurs in the context of philosophy, particularly epistemology, although it can be applied to any topic such as politics, religion, and pseudoscience.

Philosophical skepticism comes in various forms. Radical forms of skepticism deny that knowledge or rational belief is possible and urge us to suspend judgment on many or all controversial matters. More moderate forms of skepticism claim only that nothing can be known with certainty, or that we can know little or nothing about the "big questions" in life, such as whether God exists or whether there is an afterlife. Religious skepticism is "doubt concerning basic religious principles (such as immortality, providence, and revelation)". Scientific skepticism concerns testing beliefs for reliability, by subjecting them to systematic investigation using the scientific method, to discover empirical evidence for them.

Spectrum bias

In biostatistics, spectrum bias refers to the phenomenon that the performance of a diagnostic test may vary in different clinical settings because each setting has a different mix of patients. Because the performance may be dependent on the mix of patients, performance at one clinic may not be predictive of performance at another clinic. These differences are interpreted as a kind of bias. Mathematically, the spectrum bias is a sampling bias and not a traditional statistical bias; this has led some authors to refer to the phenomenon as spectrum effects, whilst others maintain it is a bias if the true performance of the test differs from that which is 'expected'. Usually the performance of a diagnostic test is measured in terms of its sensitivity and specificity and it is changes in these that are considered when referring to spectrum bias. However, other performance measures such as the likelihood ratios may also be affected by spectrum bias.

Generally spectrum bias is considered to have three causes. The first is due to a change in the case-mix of those patients with the target disorder (disease) and this affects the sensitivity. The second is due to a change in the case-mix of those without the target disorder (disease-free) and this affects the specificity. The third is due to a change in the prevalence, and this affects both the sensitivity and specificity. This final cause is not widely appreciated, but there is mounting empirical evidence as well as theoretical arguments which suggest that it does indeed affect a test's performance.
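
For reference, the two performance measures discussed here are defined from the standard 2×2 table of test result against true disease status; the block below restates those textbook definitions and is not taken from the original article.

```latex
% TP, FN: test-positive and test-negative counts among the diseased;
% TN, FP: test-negative and test-positive counts among the disease-free.
\[
  \text{sensitivity} = \frac{TP}{TP + FN},
  \qquad
  \text{specificity} = \frac{TN}{TN + FP}
\]
% Spectrum bias is the observation that these are not fixed properties of the test:
% changing the case-mix of the diseased group shifts sensitivity, changing the case-mix
% of the disease-free group shifts specificity, and changes in prevalence can shift both.
```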

Examples where the sensitivity and specificity change between different sub-groups of patients may be found with the carcinoembryonic antigen test and urinary dipstick tests.

Diagnostic test performances reported by some studies may be artificially overestimated if the study has a case-control design in which a healthy population ('fittest of the fit') is compared with a population with advanced disease ('sickest of the sick'); that is, two extreme populations are compared, rather than typical healthy and diseased populations.

If properly analyzed, recognition of heterogeneity of subgroups can lead to insights about the test's performance in varying populations.

Theory of multiple intelligences

The theory of multiple intelligences differentiates human intelligence into specific 'modalities', rather than seeing intelligence as dominated by a single general ability. Howard Gardner proposed this model in his 1983 book Frames of Mind: The Theory of Multiple Intelligences. According to the theory, an intelligence 'modality' must fulfill eight criteria:

potential for brain isolation by brain damage

place in evolutionary history

presence of core operations

susceptibility to encoding (symbolic expression)

a distinct developmental progression

the existence of savants, prodigies and other exceptional people

support from experimental psychology

support from psychometric findings

Gardner proposed eight abilities that he held to meet these criteria:

musical-rhythmic,

visual-spatial,

verbal-linguistic,

logical-mathematical,

bodily-kinesthetic,

interpersonal,

intrapersonal,

naturalistic

He later suggested that existential and moral intelligences may also be worthy of inclusion.

Although the distinction between intelligences has been set out in great detail, Gardner opposes the idea of labeling learners as having a specific intelligence. Gardner maintains that his theory should "empower learners", not restrict them to one modality of learning. According to Gardner, an intelligence is "a biopsychological potential to process information that can be activated in a cultural setting to solve problems or create products that are of value in a culture." According to a 2006 study, each of the domains proposed by Gardner involves a blend of the general g factor, cognitive abilities other than g, and, in some cases, non-cognitive abilities or personality characteristics.

Wet bias

The term wet bias refers to the phenomenon whereby some weather forecasters (usually deliberately) report a higher probability of precipitation (in particular, of rain) than the probability they believe to be correct (and the probability borne out by empirical evidence), in order to increase the usefulness and actionability of their forecast. The Weather Channel has been empirically shown to have, and has also admitted to having, a wet bias for low probabilities of precipitation (for instance, a 5% probability may be reported as a 20% probability), but not for high probabilities (so a 60% probability will be reported as a 60% probability). Some local TV stations have been shown to have a significantly greater wet bias, often reporting a 100% probability of precipitation in cases where it rains only 70% of the time.
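
Wet bias is detected empirically through calibration checks that compare reported probabilities with observed outcome frequencies. The sketch below uses entirely made-up numbers (a 20% reported probability against a true 5% chance of rain) to show the basic computation; it does not reproduce any broadcaster's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_forecasts = 5_000

# Made-up scenario: the forecaster says "20% chance of rain" on days when the true chance is 5%.
reported_probability = 0.20
rained = rng.random(n_forecasts) < 0.05  # simulate whether it actually rained each day

observed_frequency = rained.mean()
print(f"reported: {reported_probability:.0%}, observed: {observed_frequency:.1%}")
# A well-calibrated forecaster's reported probabilities match observed frequencies;
# a persistent gap like this one is the signature of a wet bias.
```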

This page is based on Wikipedia articles written by their contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.