# Null

Null may refer to:

## Science, technology, and mathematics

### Computing

• Null (SQL) (or NULL), a special marker and keyword in SQL indicating that something has no value
• Null character, the zero-valued ASCII character, also designated by NUL, often used as a terminator, separator or filler. This symbol has no visual representation
• Null device, a special computer file, named /dev/null on Unix systems, that discards all data written to it
• Null modem, a specially wired serial communications cable
• Null pointer (sometimes written NULL, nil, or None), used in computer programming for an uninitialized, undefined, empty, or meaningless value
• Null string, the unique string of length zero (in computer science and formal language theory)
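The computing senses above name distinct concepts that are easy to conflate. A short Python sketch (Python chosen purely for illustration) of how a null reference, the null character, and the null string differ:

```python
# None is Python's null reference: the absence of any object.
missing = None

# The null character is a real character whose code point is zero.
nul = "\x00"

# The null (empty) string has length zero and contains no characters.
empty = ""

print(missing is None)   # True: a null reference is identically None
print(ord(nul))          # 0: the null character's code point
print(len(empty))        # 0: the empty string has length zero
print(nul == empty)      # False: a one-character NUL string is not empty
```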

## Other uses

• Null and void, having no legal validity
• Null-A, a term used in science fiction, referring to Korzybski's notation ${\displaystyle {\overline {A}}}$ as an abbreviation for "non-Aristotelian" logic
• Stunde Null, a term used in Germany to mark the end of the Second World War

## See also

• Ø (disambiguation)
• Null symbol (disambiguation)
• 0 (disambiguation)
• Nil (disambiguation)
• Nul (disambiguation)
# Aleph number

In mathematics, and in particular set theory, the aleph numbers are a sequence of numbers used to represent the cardinality (or size) of infinite sets that can be well-ordered. They are named after the symbol used to denote them, the Hebrew letter aleph (${\displaystyle \aleph }$) (though in older mathematics books the letter aleph is often printed upside down by accident, partly because a Monotype matrix for aleph was mistakenly constructed the wrong way up).

The cardinality of the natural numbers is ${\displaystyle \aleph _{0}}$ (read aleph-naught or aleph-zero; the term aleph-null is also sometimes used), the next larger cardinality is aleph-one ${\displaystyle \aleph _{1}}$, then ${\displaystyle \aleph _{2}}$ and so on. Continuing in this manner, it is possible to define a cardinal number ${\displaystyle \aleph _{\alpha }}$ for every ordinal number ${\displaystyle \alpha }$, as described below.

The concept and notation are due to Georg Cantor, who defined the notion of cardinality and realized that infinite sets can have different cardinalities.

The aleph numbers differ from the infinity (${\displaystyle \infty }$) commonly found in algebra and calculus. Alephs measure the sizes of sets; infinity, on the other hand, is commonly defined as an extreme limit of the real number line (applied to a function or sequence that "diverges to infinity" or "increases without bound"), or an extreme point of the extended real number line.

# Chi-squared test

A chi-squared test, also written as χ² test, is any statistical hypothesis test in which the sampling distribution of the test statistic is a chi-squared distribution when the null hypothesis is true. Without other qualification, "chi-squared test" is often used as shorthand for Pearson's chi-squared test. The chi-squared test is used to determine whether there is a significant difference between the expected frequencies and the observed frequencies in one or more categories.

In the standard applications of this test, the observations are classified into mutually exclusive classes, and some theory, the null hypothesis, gives the probability that an observation falls into each class. The purpose of the test is to evaluate how likely the observed classification would be, assuming the null hypothesis is true.

Chi-squared tests are often constructed from a sum of squared errors, or through the sample variance. Test statistics that follow a chi-squared distribution arise from an assumption of independent normally distributed data, which is valid in many cases due to the central limit theorem. A chi-squared test can be used to attempt rejection of the null hypothesis that the data are independent.

Also considered a chi-squared test is a test in which this is asymptotically true, meaning that the sampling distribution (if the null hypothesis is true) can be made to approximate a chi-squared distribution as closely as desired by making the sample size large enough.
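As a minimal sketch of Pearson's chi-squared statistic, consider testing whether a die is fair. The observed counts below are invented for illustration; the expected counts follow from the null hypothesis that each face is equally likely:

```python
# Pearson's chi-squared statistic for a hypothetical fair-die experiment.
observed = [18, 22, 16, 25, 19, 20]          # 120 rolls, counts per face
expected = [sum(observed) / 6] * 6           # 20 per face under H0

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# With 6 categories there are 5 degrees of freedom; the 5% critical
# value of the chi-squared distribution with df = 5 is about 11.07.
critical = 11.07
print(round(chi2, 2), chi2 > critical)       # 2.5 False: do not reject H0
```

Here the statistic (2.5) falls well below the critical value, so these hypothetical data give no grounds to reject the fair-die hypothesis.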

# Homotopy

In topology, two continuous functions from one topological space to another are called homotopic (from Greek ὁμός homós "same, similar" and τόπος tópos "place") if one can be "continuously deformed" into the other, such a deformation being called a homotopy between the two functions. A notable use of homotopy is the definition of homotopy groups and cohomotopy groups, important invariants in algebraic topology. In practice, there are technical difficulties in using homotopies with certain spaces; algebraic topologists therefore work with compactly generated spaces, CW complexes, or spectra.

# Kernel (linear algebra)

In mathematics, and more specifically in linear algebra and functional analysis, the kernel (also known as null space or nullspace) of a linear map L : V → W between two vector spaces V and W is the set of all elements v of V for which L(v) = 0, where 0 denotes the zero vector in W. That is,

${\displaystyle \ker(L)=\left\{\mathbf {v} \in V\mid L(\mathbf {v} )=\mathbf {0} \right\}{\text{.}}}$
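The definition can be checked concretely for a small singular matrix. The matrix and vectors below are chosen purely for illustration; the second row is twice the first, so the map collapses the plane onto a line and every multiple of (2, −1) is sent to the zero vector:

```python
# Kernel of the linear map L(v) = A v for a small singular matrix A.
A = [[1, 2],
     [2, 4]]

def apply(matrix, v):
    """Multiply a matrix by a vector, both given as plain lists."""
    return [sum(row[i] * v[i] for i in range(len(v))) for row in matrix]

v = [2, -1]                      # a basis vector of ker(L)
print(apply(A, v))               # [0, 0]: v lies in the kernel
print(apply(A, [1, 1]))          # [3, 6]: this vector does not
```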

# Kolmogorov–Smirnov test

In statistics, the Kolmogorov–Smirnov test (K–S test or KS test) is a nonparametric test of the equality of continuous (or discontinuous) one-dimensional probability distributions that can be used to compare a sample with a reference probability distribution (one-sample K–S test), or to compare two samples (two-sample K–S test). It is named after Andrey Kolmogorov and Nikolai Smirnov.

The Kolmogorov–Smirnov statistic quantifies a distance between the empirical distribution function of the sample and the cumulative distribution function of the reference distribution, or between the empirical distribution functions of two samples. The null distribution of this statistic is calculated under the null hypothesis that the sample is drawn from the reference distribution (in the one-sample case) or that the samples are drawn from the same distribution (in the two-sample case). In the one-sample case, the distribution considered under the null hypothesis may be continuous, purely discrete, or mixed. In the two-sample case, the distribution considered under the null hypothesis is a continuous distribution but is otherwise unrestricted.

The two-sample K–S test is one of the most useful and general nonparametric methods for comparing two samples, as it is sensitive to differences in both location and shape of the empirical cumulative distribution functions of the two samples.

The Kolmogorov–Smirnov test can be modified to serve as a goodness-of-fit test. In the special case of testing for normality of the distribution, samples are standardized and compared with a standard normal distribution. This is equivalent to setting the mean and variance of the reference distribution equal to the sample estimates, and using these to define the specific reference distribution changes the null distribution of the test statistic. Various studies have found that, even in this corrected form, the test is less powerful for testing normality than the Shapiro–Wilk test or Anderson–Darling test. However, these other tests have their own disadvantages; for instance, the Shapiro–Wilk test is known not to work well in samples with many identical values.
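The one-sample statistic itself is simple to compute by hand. The sketch below, with made-up data, measures the largest vertical gap between a sample's empirical CDF and the uniform distribution on [0, 1] as the reference (F(x) = x):

```python
# One-sample Kolmogorov–Smirnov statistic against a uniform reference.
sample = sorted([0.05, 0.20, 0.35, 0.50, 0.90])
n = len(sample)

def uniform_cdf(x):
    """CDF of the uniform distribution on [0, 1]."""
    return min(max(x, 0.0), 1.0)

# The ECDF jumps at each observation, so the supremum of the gap is
# attained just before or at a jump; check both sides of every point.
d = max(
    max((i + 1) / n - uniform_cdf(x), uniform_cdf(x) - i / n)
    for i, x in enumerate(sample)
)
print(round(d, 2))   # 0.3, attained just below x = 0.50
```

The null distribution of this statistic would then be used to decide whether a gap of 0.3 is surprising for a sample of size 5.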

# Normalnull

Normalnull ("standard zero") or Normal-Null (abbreviated N. N. or NN) is an outdated official vertical datum used in Germany. Elevations using this reference system were marked "Meter über Normal-Null" ("meters above standard zero"). Normalnull has been replaced by Normalhöhennull (NHN).

# Null (SQL)

Null (or NULL) is a special marker used in Structured Query Language to indicate that a data value does not exist in the database. Introduced by the creator of the relational database model, E. F. Codd, SQL Null serves to fulfil the requirement that all true relational database management systems (RDBMS) support a representation of "missing information and inapplicable information". Codd also introduced the use of the lowercase Greek omega (ω) symbol to represent Null in database theory. In SQL, NULL is a reserved word used to identify this marker.

A null should not be confused with a value of 0. A null indicates the absence of a value, which is not the same thing as a value of zero, just as a lack of an answer is not the same thing as an answer of "no". For example, consider the question "How many books does Adam own?" The answer may be "zero" (we know that he owns none) or "null" (we do not know how many he owns). In a database table, the column reporting this answer would start out with no value (marked by Null), and it would not be updated with the value "zero" until we have ascertained that Adam owns no books.

SQL null is a state, not a value. This usage is quite different from most programming languages, where a null value of a reference means it does not point to any object.
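The three-valued logic that NULL introduces can be seen directly with the standard-library sqlite3 module (the table and data are invented, echoing the book-counting example above). `copies = NULL` evaluates to NULL, which is not true, so the comparison never matches; `IS NULL` tests for the marker itself:

```python
# SQL NULL behaviour demonstrated with an in-memory SQLite database.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE books (owner TEXT, copies INTEGER)")
con.execute("INSERT INTO books VALUES ('Adam', NULL)")   # count unknown

# Comparing with NULL yields NULL, which is not true: no rows match.
print(con.execute(
    "SELECT count(*) FROM books WHERE copies = NULL").fetchone()[0])   # 0

# IS NULL tests for the marker itself: the row matches.
print(con.execute(
    "SELECT count(*) FROM books WHERE copies IS NULL").fetchone()[0])  # 1
```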

# Null character

The null character (also null terminator or null byte) is a control character with the value zero.

It is present in many character sets, including ISO/IEC 646 (or ASCII), the C0 control code, the Universal Coded Character Set (or Unicode), and EBCDIC. It is available in nearly all mainstream programming languages. It is often abbreviated as NUL (or NULL, though in some contexts that term is used for the null pointer, a different object).

The original meaning of this character was like NOP: when sent to a printer or a terminal, it does nothing (some terminals, however, incorrectly display it as space). When electromechanical teleprinters were used as computer output devices, one or more null characters were sent at the end of each printed line to allow time for the mechanism to return to the first printing position on the next line. On punched tape, the character is represented with no holes at all, so a new unpunched tape is initially filled with null characters, and often text could be "inserted" at a reserved space of null characters by punching the new characters into the tape over the nulls.

Today the character has much more significance in C and its derivatives and in many data formats, where it serves as a reserved character used to signify the end of a string, often called a null-terminated string. This allows the string to be any length with only the overhead of one byte; the alternative of storing a count requires either a string length limit of 255 or an overhead of more than one byte (there are other advantages/disadvantages described under null-terminated string).
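The null-terminated convention can be sketched with Python bytes (the buffer contents are invented): a C-style string's logical length is simply the number of bytes before the first NUL, regardless of what follows in the buffer.

```python
# A null-terminated (C-style) string modelled with Python bytes.
buffer = b"hello\x00garbage after the terminator"

def c_strlen(data: bytes) -> int:
    """Count bytes up to, but not including, the first null byte."""
    length = data.find(b"\x00")
    return length if length != -1 else len(data)

print(c_strlen(buffer))            # 5: only the bytes before the NUL count
print(buffer[:c_strlen(buffer)])   # b'hello'
```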

# Null hypothesis

In inferential statistics, the null hypothesis is a general statement or default position that there is nothing new happening: for example, that there is no association among groups, or no relationship between two measured phenomena. Testing (accepting, approving, rejecting, or disproving) the null hypothesis, and thus concluding that there are or are not grounds for believing that there is a relationship between two phenomena (e.g. that a potential treatment has a measurable effect), is a central task in the modern practice of science; the field of statistics gives precise criteria for rejecting a null hypothesis.

The null hypothesis is generally assumed to be true until evidence indicates otherwise.

In statistics, it is often denoted H0, pronounced as "H-nought", "H-null", or "H-zero" (or, even, by some, "H-oh"), with the subscript being the digit 0.

The concept of a null hypothesis is used differently in two approaches to statistical inference. In the significance testing approach of Ronald Fisher, a null hypothesis is rejected if the observed data are significantly unlikely to have occurred if the null hypothesis were true. In this case, the null hypothesis is rejected and an alternative hypothesis is accepted in its place. If the data are consistent with the null hypothesis, then the null hypothesis is not rejected. In neither case is the null hypothesis or its alternative proven; the null hypothesis is tested with data and a decision is made based on how likely or unlikely the data are. This is analogous to the legal principle of presumption of innocence, in which a suspect or defendant is assumed to be innocent (null is not rejected) until proven guilty (null is rejected) beyond a reasonable doubt (to a statistically significant degree).

In the hypothesis testing approach of Jerzy Neyman and Egon Pearson, a null hypothesis is contrasted with an alternative hypothesis and the two hypotheses are distinguished on the basis of data, with certain error rates. This approach is used in formulating answers in research.

Statistical inference can be done without a null hypothesis, by specifying a statistical model corresponding to each candidate hypothesis and using model selection techniques to choose the most appropriate model. (The most common selection techniques are based on either Akaike information criterion or Bayes factor.)

# P-value

In statistical hypothesis testing, the p-value or probability value is the probability of obtaining test results at least as extreme as the results actually observed during the test, assuming that the null hypothesis is correct. The use of p-values in statistical hypothesis testing is common in many fields of research such as physics, economics, finance, political science, psychology, biology, criminal justice, criminology, and sociology. The misuse of p-values is a controversial topic in metascience.

Italicisation, capitalisation and hyphenation of the term varies. For example, AMA style uses "P value", APA style uses "p value", and the American Statistical Association uses "p-value".
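An exact p-value is easy to compute for a small discrete example. The numbers below are invented: suppose a coin lands heads 9 times in 12 tosses. Under the null hypothesis of a fair coin the number of heads is Binomial(12, 0.5), and the two-sided p-value sums the probabilities of all outcomes at least as extreme (by symmetry, 9 or more heads plus 3 or fewer):

```python
# Exact two-sided binomial p-value for 9 heads in 12 tosses of a fair coin.
from math import comb

n, heads = 12, 9

def prob(k):
    """P(exactly k heads in n fair tosses)."""
    return comb(n, k) * 0.5 ** n

p_value = sum(prob(k) for k in range(heads, n + 1)) \
        + sum(prob(k) for k in range(0, n - heads + 1))
print(round(p_value, 3))   # 0.146: not significant at the usual 5% level
```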

# Power (statistics)

The power of a binary hypothesis test is the probability that the test rejects the null hypothesis (H0) when a specific alternative hypothesis (H1) is true. Statistical power ranges from 0 to 1; as power increases, the probability of making a type II error (wrongly failing to reject the null hypothesis) decreases. For a type II error probability of β, the corresponding statistical power is 1 − β. For example, if experiment 1 has a statistical power of 0.7 and experiment 2 has a statistical power of 0.95, then experiment 1 is more likely than experiment 2 to produce a type II error, and experiment 2 is the more reliable of the two in that respect. Power can equivalently be thought of as the probability of accepting the alternative hypothesis (H1) when it is true, that is, the ability of a test to detect a specific effect if that effect actually exists. That is,

${\displaystyle {\text{power}}=\Pr {\big (}{\text{reject }}H_{0}\mid H_{1}{\text{ is true}}{\big )}.}$

If ${\displaystyle H_{1}}$ is not an equality but rather simply the negation of ${\displaystyle H_{0}}$ (so for example with ${\displaystyle H_{0}:\mu =0}$ for some unobserved population parameter ${\displaystyle \mu ,}$ we have simply ${\displaystyle H_{1}:\mu \neq 0}$) then power cannot be calculated unless probabilities are known for all possible values of the parameter that violate the null hypothesis. Thus one generally refers to a test's power against a specific alternative hypothesis.

As the power increases, there is a decreasing probability of a type II error, also referred to as the false negative rate (β) since the power is equal to 1 − β. A similar concept is the type I error probability, also referred to as the "false positive rate" or the level of a test under the null hypothesis.

Power analysis can be used to calculate the minimum sample size required so that one can be reasonably likely to detect an effect of a given size. For example: "how many times do I need to toss a coin to conclude it is rigged by a certain amount?" Power analysis can also be used to calculate the minimum effect size that is likely to be detected in a study using a given sample size. In addition, the concept of power is used to make comparisons between different statistical testing procedures: for example, between a parametric test and a nonparametric test of the same hypothesis.

In the context of binary classification, the power of a test is called its statistical sensitivity, its true positive rate, or its probability of detection.
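The power formula above can be evaluated in closed form for a simple case. The sketch below assumes a one-sided z-test with known standard deviation, H0: μ = 0 against the specific alternative H1: μ = δ; all numbers are illustrative, not from the text:

```python
# Power of a one-sided z-test under a specific alternative hypothesis.
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

alpha, delta, sigma, n = 0.05, 0.5, 1.0, 25
z_crit = 1.645                  # upper 5% point of the standard normal

# Under H1 the test statistic is normal with mean delta*sqrt(n)/sigma,
# so power = P(Z > z_crit - delta*sqrt(n)/sigma).
power = 1 - phi(z_crit - delta * sqrt(n) / sigma)
print(round(power, 2))          # about 0.8 for these illustrative numbers
```

This is the calculation behind power analysis: solving the same relation for n gives the minimum sample size needed to reach a target power against the chosen alternative.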

# Spoilt vote

In voting, a ballot is considered spoilt, spoiled, void, null, informal, invalid or stray if a law declares or an election authority determines that it is invalid and thus not included in the vote count. This may occur accidentally or deliberately. The total number of spoilt votes in a United States election has been called the residual vote. In Australia, such votes are generally referred to as informal votes, and in Canada they are referred to as rejected votes.

In some jurisdictions spoilt votes are counted and reported.

# Statistical hypothesis testing

A statistical hypothesis is a hypothesis that is testable on the basis of observing a process that is modeled via a set of random variables. A statistical hypothesis test, sometimes called confirmatory data analysis, is a method of statistical inference. Commonly, two statistical data sets are compared, or a data set obtained by sampling is compared against a synthetic data set from an idealized model. A hypothesis is proposed for the statistical relationship between the two data sets, and this is compared as an alternative to an idealized null hypothesis that proposes no relationship between the two data sets. The comparison is deemed statistically significant if the relationship between the data sets would be an unlikely realization of the null hypothesis according to a threshold probability: the significance level. Hypothesis tests are used when determining what outcomes of a study would lead to a rejection of the null hypothesis for a pre-specified level of significance.

The process of distinguishing between the null hypothesis and the alternative hypothesis is aided by considering two conceptual types of error. The first type of error occurs when the null hypothesis is wrongly rejected; the second occurs when the null hypothesis is wrongly not rejected. (The two are known as type I and type II errors.)

Hypothesis tests based on statistical significance are another way of expressing confidence intervals (more precisely, confidence sets). In other words, every hypothesis test based on significance can be obtained via a confidence interval, and every confidence interval can be obtained via a hypothesis test based on significance.

Significance-based hypothesis testing is the most common framework for statistical hypothesis testing. An alternative framework is to specify a set of statistical models, one for each candidate hypothesis, and then use model selection techniques to choose the most appropriate model. The most common selection techniques are based on either the Akaike information criterion or the Bayes factor.

# Statistical significance

In statistical hypothesis testing, statistical significance is a way of quantifying how unlikely an experimental result would be if the null hypothesis were true. More precisely, a study's defined significance level, denoted by ${\displaystyle \alpha }$, is the probability of the study rejecting the null hypothesis, given that the null hypothesis is true; and the p-value of a result, ${\displaystyle p}$, is the probability of obtaining a result at least as extreme, given that the null hypothesis is true. The result is statistically significant, by the standards of the study, when ${\displaystyle p\leq \alpha }$. The significance level for a study is chosen before data collection, and is typically set to 5% or much lower, depending on the field of study.

In any experiment or observation that involves drawing a sample from a population, there is always the possibility that an observed effect would have occurred due to sampling error alone. But if the p-value of an observed effect is less than (or equal to) the significance level, an investigator may conclude that the effect reflects the characteristics of the whole population, thereby rejecting the null hypothesis.

This technique for testing the statistical significance of results was developed in the early 20th century. The term significance does not imply importance here, and the term statistical significance is not the same as research, theoretical, or practical significance. For example, the term clinical significance refers to the practical importance of a treatment effect.

# Statistics

Statistics is the discipline that concerns the collection, organization, display, analysis, interpretation and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments.


When census data cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling assures that inferences and conclusions can reasonably extend from the sample to the population as a whole. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation.

Two main statistical methods are used in data analysis: descriptive statistics, which summarize data from a sample using indexes such as the mean or standard deviation, and inferential statistics, which draw conclusions from data that are subject to random variation (e.g., observational errors, sampling variation). Descriptive statistics are most often concerned with two sets of properties of a distribution (sample or population): central tendency (or location) seeks to characterize the distribution's central or typical value, while dispersion (or variability) characterizes the extent to which members of the distribution depart from its center and each other. Inferences on mathematical statistics are made under the framework of probability theory, which deals with the analysis of random phenomena.
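The descriptive measures named above (central tendency and dispersion) are available directly in Python's standard-library statistics module; the sample below is invented for illustration:

```python
# Descriptive statistics: central tendency and dispersion of a small sample.
import statistics

sample = [2, 4, 4, 4, 5, 5, 7, 9]
print(statistics.mean(sample))     # 5: arithmetic mean (a location measure)
print(statistics.median(sample))   # 4.5: middle value (another location measure)
print(statistics.stdev(sample))    # sample standard deviation (dispersion)
```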

A standard statistical procedure involves the test of the relationship between two statistical data sets, or between a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the statistical relationship between the two data sets, and this is compared as an alternative to an idealized null hypothesis of no relationship between the two data sets. Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false, given the data that are used in the test. Working from a null hypothesis, two basic forms of error are recognized: type I errors (the null hypothesis is falsely rejected, giving a "false positive") and type II errors (the null hypothesis fails to be rejected and an actual relationship between populations is missed, giving a "false negative"). Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis.

Measurement processes that generate statistical data are also subject to error. Many of these errors are classified as random (noise) or systematic (bias), but other types of errors (e.g., blunders, such as when an analyst reports incorrect units) can also occur. The presence of missing data or censoring may result in biased estimates, and specific techniques have been developed to address these problems.

The earliest writings on probability and statistics, statistical methods drawing from probability theory, date back to Arab mathematicians and cryptographers, notably Al-Khalil (717–786) and Al-Kindi (801–873). In the 18th century, statistics also started to draw heavily from calculus. In more recent years, statistics has relied more heavily on statistical software to carry out analyses such as descriptive statistics.

# Syllable

A syllable is a unit of organization for a sequence of speech sounds. It is typically made up of a syllable nucleus (most often a vowel) with optional initial and final margins (typically, consonants). Syllables are often considered the phonological "building blocks" of words. They can influence the rhythm of a language, its prosody, its poetic metre and its stress patterns. Speech can usually be divided up into a whole number of syllables: for example, the word ignite is composed of two syllables: ig and nite.

Syllabic writing began several hundred years before the first letters. The earliest recorded syllables are on tablets written around 2800 BC in the Sumerian city of Ur. This shift from pictograms to syllables has been called "the most important advance in the history of writing".

A word that consists of a single syllable (like English dog) is called a monosyllable (and is said to be monosyllabic). Similar terms include disyllable (and disyllabic; also bisyllable and bisyllabic) for a word of two syllables; trisyllable (and trisyllabic) for a word of three syllables; and polysyllable (and polysyllabic), which may refer either to a word of more than three syllables or to any word of more than one syllable.

# Trent Reznor

Michael Trent Reznor (born May 17, 1965) is an American singer, songwriter, musician, record producer, and film score composer. He is the founder, lead vocalist, and principal songwriter of the industrial rock band Nine Inch Nails, which he founded in 1988 and of which he was the sole official member until adding long-time collaborator Atticus Ross as a permanent member in 2016. His first release under the Nine Inch Nails name, the 1989 album Pretty Hate Machine, was a commercial and critical success. He has since released nine Nine Inch Nails studio albums. He left Interscope Records in 2007 and was an independent recording artist until signing with Columbia Records in 2012.

Reznor was associated with the bands Option 30, The Urge, The Innocent, and Exotic Birds in the mid-1980s. Outside of Nine Inch Nails, he has contributed to the albums of artists such as Marilyn Manson and Saul Williams. He and his wife, Mariqueen Maandig, are members of the post-industrial group How to Destroy Angels, with Atticus Ross and long-time Nine Inch Nails graphic designer Rob Sheridan.

Reznor and Ross scored the David Fincher films The Social Network (2010), The Girl with the Dragon Tattoo (2011), and Gone Girl (2014), winning the Academy Award for Best Original Score for The Social Network and the Grammy Award for Best Score Soundtrack for Visual Media for The Girl with the Dragon Tattoo. They also scored the 2018 film Bird Box. In 1997, Reznor appeared in Time's list of the year's most influential people, and Spin magazine described him as "the most vital artist in music".

# Type I and type II errors

In statistical hypothesis testing, a type I error is the rejection of a true null hypothesis (also known as a "false positive" finding or conclusion), while a type II error is the non-rejection of a false null hypothesis (also known as a "false negative" finding or conclusion). Much of statistical theory revolves around the minimization of one or both of these errors, though the complete elimination of either is a statistical impossibility whenever outcomes are not deterministic.

# Void (law)

In law, void means of no legal effect. An action, document, or transaction which is void is of no legal effect whatsoever: an absolute nullity, which the law treats as if it had never existed or happened. The term void ab initio, which means "to be treated as invalid from the outset", comes from adding the Latin phrase ab initio ("from the beginning") as a qualifier. For example, in many jurisdictions, a contract signed under duress is treated as being void ab initio. The frequent combination "null and void" is a legal doublet.

This page is based on Wikipedia articles written by their contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.