In probability theory, the normal (or Gaussian or Gauss or Laplace–Gauss) distribution is a very common continuous probability distribution. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known.^{[1]}^{[2]} A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate.
The normal distribution is useful because of the central limit theorem. In its most general form, under some conditions (which include finite variance), it states that averages of independently drawn random variables converge in distribution to the normal; that is, they become normally distributed when the number of observations is sufficiently large. Physical quantities that are expected to be the sum of many independent processes (such as measurement errors) often have distributions that are nearly normal.^{[3]} Moreover, many results and methods (such as propagation of uncertainty and least squares parameter fitting) can be derived analytically in explicit form when the relevant variables are normally distributed.
The normal distribution is sometimes informally called the bell curve. However, many other distributions are bell-shaped (such as the Cauchy, Student's t, and logistic distributions).
The probability density of the normal distribution is

f(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}},

where \mu is the mean or expectation of the distribution (and also its median and mode), \sigma is the standard deviation, and \sigma^2 is the variance.
Normal Distribution

Probability density function (the red curve is the standard normal distribution) and cumulative distribution function [plots not shown]

| | |
|---|---|
| Notation | \mathcal{N}(\mu, \sigma^2) |
| Parameters | \mu \in \mathbb{R} = mean (location); \sigma^2 > 0 = variance (squared scale) |
| Support | x \in \mathbb{R} |
| CDF | \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right] |
| Quantile | \mu + \sigma\sqrt{2}\,\operatorname{erf}^{-1}(2p - 1) |
| Mean | \mu |
| Median | \mu |
| Mode | \mu |
| Variance | \sigma^2 |
| Skewness | 0 |
| Ex. kurtosis | 0 |
| Entropy | \frac{1}{2}\ln(2\pi e \sigma^2) |
| MGF | \exp(\mu t + \sigma^2 t^2/2) |
| CF | \exp(i\mu t - \sigma^2 t^2/2) |
| Fisher information | \operatorname{diag}\!\left(1/\sigma^2,\; 1/(2\sigma^4)\right) |
The simplest case of a normal distribution is known as the standard normal distribution. This is the special case when \mu = 0 and \sigma = 1, and it is described by this probability density function:

\varphi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}.
The factor 1/\sqrt{2\pi} in this expression ensures that the total area under the curve is equal to one.^{[4]} The factor 1/2 in the exponent ensures that the distribution has unit variance (i.e. the variance is equal to one), and therefore also unit standard deviation. This function is symmetric around x = 0, where it attains its maximum value 1/\sqrt{2\pi} and has inflection points at x = +1 and x = -1.
Authors may differ also on which normal distribution should be called the "standard" one. Gauss defined the standard normal as having variance \sigma^2 = 1/2, that is

\varphi(x) = \frac{e^{-x^2}}{\sqrt{\pi}}.
Stigler^{[5]} goes even further, defining the standard normal with variance \sigma^2 = \frac{1}{2\pi}:

\varphi(x) = e^{-\pi x^2}.
Every normal distribution is a version of the standard normal distribution whose domain has been stretched by a factor \sigma (the standard deviation) and then translated by \mu (the mean value):

f(x \mid \mu, \sigma^2) = \frac{1}{\sigma}\,\varphi\!\left(\frac{x - \mu}{\sigma}\right).
The probability density must be scaled by 1/\sigma so that the integral is still 1.
If Z is a standard normal deviate, then X = \sigma Z + \mu will have a normal distribution with expected value \mu and standard deviation \sigma. Conversely, if X is a normal deviate with parameters \mu and \sigma^2, then Z = (X - \mu)/\sigma will have a standard normal distribution. This variate is called the standardized form of X.
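As a brief illustration (not part of the original article), the following Python sketch uses NumPy to build a normal deviate from a standard one and then standardize it back; the parameter values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 10.0, 2.5                  # assumed example parameters

z = rng.standard_normal(100_000)       # Z ~ N(0, 1)
x = mu + sigma * z                     # X = mu + sigma*Z  ~  N(mu, sigma^2)
print(x.mean(), x.std())               # close to 10.0 and 2.5

z_back = (x - mu) / sigma              # standardized form of X
print(z_back.mean(), z_back.std())     # close to 0 and 1
```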
Every normal distribution is the exponential of a quadratic function:

f(x) = e^{a x^2 + b x + c},

where a < 0 and c = \frac{b^2}{4a} + \frac{1}{2}\ln\!\left(-\frac{a}{\pi}\right). In this form, the mean value is \mu = -b/(2a), and the variance is \sigma^2 = -1/(2a). For the standard normal distribution, a = -1/2, b = 0, and c = -\frac{1}{2}\ln(2\pi).
The probability density of the standard Gaussian distribution (standard normal distribution, with zero mean and unit variance) is often denoted with the Greek letter \phi (phi).^{[6]} The alternative form of the Greek letter phi, \varphi, is also used quite often.
The normal distribution is often referred to as N(\mu, \sigma^2) or \mathcal{N}(\mu, \sigma^2).^{[7]} Thus when a random variable X is distributed normally with mean \mu and variance \sigma^2, one may write

X \sim \mathcal{N}(\mu, \sigma^2).
Some authors advocate using the precision \tau as the parameter defining the width of the distribution, instead of the deviation \sigma or the variance \sigma^2. The precision is normally defined as the reciprocal of the variance, \tau = 1/\sigma^2.^{[8]} The formula for the distribution then becomes

f(x) = \sqrt{\frac{\tau}{2\pi}}\, e^{-\tau(x-\mu)^2/2}.
This choice is claimed to have advantages in numerical computations when \sigma^2 is very close to zero, and to simplify formulas in some contexts, such as in the Bayesian inference of variables with a multivariate normal distribution.
Alternatively, the reciprocal of the standard deviation \tau' = 1/\sigma might be defined as the precision, in which case the expression of the normal distribution becomes

f(x) = \frac{\tau'}{\sqrt{2\pi}}\, e^{-\tau'^2 (x-\mu)^2/2}.
According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for the quantiles of the distribution.
Normal distributions form an exponential family with natural parameters \theta_1 = \mu/\sigma^2 and \theta_2 = -1/(2\sigma^2), and natural statistics x and x^2. The dual, expectation parameters for the normal distribution are \eta_1 = \mu and \eta_2 = \mu^2 + \sigma^2.
The normal distribution is the only absolutely continuous distribution whose cumulants beyond the first two (i.e., other than the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for a specified mean and variance.^{[9]}^{[10]} Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other.^{[11]}^{[12]}
The normal distribution is a subclass of the elliptical distributions. The normal distribution is symmetric about its mean, and is nonzero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the weight of a person or the price of a share. Such variables may be better described by other distributions, such as the lognormal distribution or the Pareto distribution.
The value of the normal density is practically zero when the value x lies more than a few standard deviations away from the mean (e.g., a spread of three standard deviations covers all but 0.27% of the total distribution). Therefore, it may not be an appropriate model when one expects a significant fraction of outliers—values that lie many standard deviations away from the mean—and least squares and other statistical inference methods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a more heavy-tailed distribution should be assumed and the appropriate robust statistical inference methods applied.
The Gaussian distribution belongs to the family of stable distributions, which are the attractors of sums of independent, identically distributed random variables whether or not the mean or variance is finite. Except for the Gaussian, which is a limiting case, all stable distributions have heavy tails and infinite variance. It is one of the few distributions that are stable and that have probability density functions that can be expressed analytically, the others being the Cauchy distribution and the Lévy distribution.
The normal distribution with density f(x) (mean \mu and standard deviation \sigma > 0) has the following properties:
Furthermore, the density \varphi of the standard normal distribution (i.e. \mu = 0 and \sigma = 1) also has the following properties:
The plain and absolute moments of a variable X are the expected values of X^p and |X|^p, respectively. If the expected value \mu of X is zero, these parameters are called central moments. Usually we are interested only in moments with integer order p.
If X has a normal distribution, these moments exist and are finite for any p whose real part is greater than −1. For any non-negative integer p, the plain central moments are:^{[16]}

\operatorname{E}\!\left[(X - \mu)^p\right] = \begin{cases} 0 & \text{if } p \text{ is odd,} \\ \sigma^p (p - 1)!! & \text{if } p \text{ is even.} \end{cases}
Here n!! denotes the double factorial, that is, the product of all numbers from n to 1 that have the same parity as n.
The central absolute moments coincide with plain central moments for all even orders, but are nonzero for odd orders. For any non-negative integer p,

\operatorname{E}\!\left[|X - \mu|^p\right] = \sigma^p (p - 1)!! \cdot \begin{cases} \sqrt{2/\pi} & \text{if } p \text{ is odd,} \\ 1 & \text{if } p \text{ is even.} \end{cases}

(A short numerical check of these formulas follows the moments table below.)
The last formula is valid also for any non-integer p > -1. When the mean \mu \neq 0, the plain and absolute moments can be expressed in terms of the confluent hypergeometric functions {}_1F_1 and U.
These expressions remain valid even if p is not an integer. See also generalized Hermite polynomials.
| Order | Non-central moment \operatorname{E}[X^p] | Central moment \operatorname{E}[(X-\mu)^p] |
|---|---|---|
| 1 | \mu | 0 |
| 2 | \mu^2 + \sigma^2 | \sigma^2 |
| 3 | \mu^3 + 3\mu\sigma^2 | 0 |
| 4 | \mu^4 + 6\mu^2\sigma^2 + 3\sigma^4 | 3\sigma^4 |
| 5 | \mu^5 + 10\mu^3\sigma^2 + 15\mu\sigma^4 | 0 |
| 6 | \mu^6 + 15\mu^4\sigma^2 + 45\mu^2\sigma^4 + 15\sigma^6 | 15\sigma^6 |
| 7 | \mu^7 + 21\mu^5\sigma^2 + 105\mu^3\sigma^4 + 105\mu\sigma^6 | 0 |
| 8 | \mu^8 + 28\mu^6\sigma^2 + 210\mu^4\sigma^4 + 420\mu^2\sigma^6 + 105\sigma^8 | 105\sigma^8 |
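As a hedged numerical check (not from the original article), this Python sketch compares Monte Carlo estimates of the central moments with the double-factorial formula above; the parameter values are assumed for illustration.

```python
import numpy as np
from scipy.special import factorial2

rng = np.random.default_rng(1)
mu, sigma = 0.0, 1.5                       # assumed example parameters
x = rng.normal(mu, sigma, 2_000_000)

for p in range(1, 7):
    empirical = np.mean((x - mu) ** p)
    exact = sigma**p * factorial2(p - 1) if p % 2 == 0 else 0.0
    print(p, round(empirical, 3), round(exact, 3))
```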
The expectation of X conditioned on the event that X lies in an interval [a, b] is given by

\operatorname{E}[X \mid a < X \le b] = \mu - \sigma^2\,\frac{f(b) - f(a)}{F(b) - F(a)},

where f and F respectively are the density and the cumulative distribution function of X. For b = \infty this is known as the inverse Mills ratio. Note that above, the density f of X is used instead of the standard normal density as in the inverse Mills ratio, so here we have \sigma^2 instead of \sigma.
The Fourier transform of a normal density f with mean \mu and standard deviation \sigma is^{[17]}

\hat{f}(t) = \int_{-\infty}^{\infty} f(x)\, e^{-itx}\, dx = e^{-i\mu t}\, e^{-\frac{1}{2}(\sigma t)^2},

where i is the imaginary unit. If the mean \mu = 0, the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on the frequency domain, with mean 0 and standard deviation 1/\sigma. In particular, the standard normal distribution \varphi is an eigenfunction of the Fourier transform.
In probability theory, the Fourier transform of the probability distribution of a real-valued random variable X is closely connected to the characteristic function \varphi_X(t) of that variable, which is defined as the expected value of e^{itX}, as a function of the real variable t (the frequency parameter of the Fourier transform). This definition can be analytically extended to a complex-valued variable t.^{[18]} The relation between both is:

\varphi_X(t) = \hat{f}(-t).
The moment generating function of a real random variable X is the expected value of e^{tX}, as a function of the real parameter t. For a normal distribution with density f, mean \mu and deviation \sigma, the moment generating function exists and is equal to

M(t) = \operatorname{E}[e^{tX}] = e^{\mu t + \frac{1}{2}\sigma^2 t^2}.
The cumulant generating function is the logarithm of the moment generating function, namely

g(t) = \ln M(t) = \mu t + \tfrac{1}{2}\sigma^2 t^2.

Since this is a quadratic polynomial in t, only the first two cumulants are nonzero, namely the mean \mu and the variance \sigma^2.
The cumulative distribution function (CDF) of the standard normal distribution, usually denoted with the capital Greek letter \Phi (phi), is the integral

\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\, dt.
The related error function \operatorname{erf}(x) gives the probability of a random variable with normal distribution of mean 0 and variance 1/2 falling in the range [-x, x]; that is

\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2}\, dt.
These integrals cannot be expressed in terms of elementary functions, and are often said to be special functions. However, many numerical approximations are known; see below.
The two functions are closely related, namely

\Phi(x) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right].
For a generic normal distribution with density f, mean \mu and deviation \sigma, the cumulative distribution function is

F(x) = \Phi\!\left(\frac{x - \mu}{\sigma}\right) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x - \mu}{\sigma\sqrt{2}}\right)\right].
The complement of the standard normal CDF, Q(x) = 1 - \Phi(x), is often called the Q-function, especially in engineering texts.^{[19]}^{[20]} It gives the probability that the value of a standard normal random variable X will exceed x: P(X > x). Other definitions of the Q-function, all of which are simple transformations of \Phi, are also used occasionally.^{[21]}
The graph of the standard normal CDF \Phi has 2-fold rotational symmetry around the point (0, 1/2); that is, \Phi(-x) = 1 - \Phi(x). Its antiderivative (indefinite integral) is

\int \Phi(x)\, dx = x\,\Phi(x) + \varphi(x) + C.
The CDF of the standard normal distribution can be expanded by integration by parts into a series:

\Phi(x) = \frac{1}{2} + \varphi(x)\left[x + \frac{x^3}{3} + \frac{x^5}{3\cdot 5} + \cdots + \frac{x^{2n+1}}{(2n+1)!!} + \cdots\right],

where !! denotes the double factorial.
An asymptotic expansion of the CDF for large x can also be derived using integration by parts; see Error function#Asymptotic expansion.^{[22]}
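As a hedged illustration (not from the original article), the Python sketch below evaluates a truncated version of the double-factorial series and compares it with SciPy's implementation of \Phi; the helper name phi_series and the number of terms are assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import factorial2

def phi_series(x, n_terms=40):
    """Standard normal CDF via the double-factorial series (good for moderate |x|)."""
    k = np.arange(n_terms)
    terms = x ** (2 * k + 1) / factorial2(2 * k + 1)
    return 0.5 + norm.pdf(x) * terms.sum()

for x in (0.5, 1.0, 2.0, 3.0):
    print(x, phi_series(x), norm.cdf(x))
```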
About 68% of values drawn from a normal distribution are within one standard deviation σ of the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations. This fact is known as the 68-95-99.7 (empirical) rule, or the 3-sigma rule.
More precisely, the probability that a normal deviate lies in the range between \mu - n\sigma and \mu + n\sigma is given by

F(\mu + n\sigma) - F(\mu - n\sigma) = \Phi(n) - \Phi(-n) = \operatorname{erf}\!\left(\frac{n}{\sqrt{2}}\right).
To 12 significant figures, the values for n = 1, 2, \ldots, 6 are given in the table below; a short numerical check follows it.^{[23]}
| n | F(\mu + n\sigma) - F(\mu - n\sigma) | 1 - [F(\mu + n\sigma) - F(\mu - n\sigma)] | OEIS |
|---|---|---|---|
| 1 | 0.682689492137 | 0.317310507863 | OEIS: A178647 |
| 2 | 0.954499736104 | 0.045500263896 | OEIS: A110894 |
| 3 | 0.997300203937 | 0.002699796063 | OEIS: A270712 |
| 4 | 0.999936657516 | 0.000063342484 | |
| 5 | 0.999999426697 | 0.000000573303 | |
| 6 | 0.999999998027 | 0.000000001973 | |
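As a hedged check (not from the original article), the table values can be reproduced from the erf relation above with the Python standard library:

```python
from math import erf, sqrt

# Probability that a normal deviate lies within n standard deviations of the mean.
for n in range(1, 7):
    p = erf(n / sqrt(2))
    print(n, f"{p:.12f}", f"{1 - p:.12f}")
```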
The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:

\Phi^{-1}(p) = \sqrt{2}\,\operatorname{erf}^{-1}(2p - 1), \qquad p \in (0, 1).
For a normal random variable with mean \mu and variance \sigma^2, the quantile function is

F^{-1}(p) = \mu + \sigma\,\Phi^{-1}(p) = \mu + \sigma\sqrt{2}\,\operatorname{erf}^{-1}(2p - 1), \qquad p \in (0, 1).
The quantile \Phi^{-1}(p) of the standard normal distribution is commonly denoted as z_p. These values are used in hypothesis testing, construction of confidence intervals and Q-Q plots. A normal random variable X will exceed \mu + z_p\sigma with probability 1 - p, and will lie outside the interval \mu \pm z_p\sigma with probability 2(1 - p). In particular, the quantile z_{0.975} is 1.96; therefore a normal random variable will lie outside the interval \mu \pm 1.96\sigma in only 5% of cases.
The following table gives the quantile z_p such that X will lie in the range \mu \pm z_p\sigma with a specified probability p. These values are useful to determine tolerance intervals for sample averages and other statistical estimators with normal (or asymptotically normal) distributions.^{[24]}^{[25]} NOTE: the following table shows \sqrt{2}\,\operatorname{erf}^{-1}(p) = \Phi^{-1}\!\left(\frac{p+1}{2}\right), not \Phi^{-1}(p) as defined above. A short computational check using the quantile function follows the table.
| p | z_p | p | z_p |
|---|---|---|---|
| 0.80 | 1.281551565545 | 0.999 | 3.290526731492 |
| 0.90 | 1.644853626951 | 0.9999 | 3.890591886413 |
| 0.95 | 1.959963984540 | 0.99999 | 4.417173413469 |
| 0.98 | 2.326347874041 | 0.999999 | 4.891638475699 |
| 0.99 | 2.575829303549 | 0.9999999 | 5.326723886384 |
| 0.995 | 2.807033768344 | 0.99999999 | 5.730728868236 |
| 0.998 | 3.090232306168 | 0.999999999 | 6.109410204869 |
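As a hedged illustration (not from the original article), the tabulated values can be recomputed with SciPy's quantile function, using the relation noted above that z_p is the (1+p)/2 quantile of the standard normal distribution:

```python
from scipy.stats import norm

# z_p such that a normal variate lies in mu +/- z_p*sigma with probability p.
for p in (0.80, 0.90, 0.95, 0.99, 0.999):
    z = norm.ppf((1 + p) / 2)
    print(p, round(z, 12))
```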
For small p, the quantile function has the useful asymptotic expansion

\Phi^{-1}(p) = -\sqrt{\ln\frac{1}{p^2} - \ln\ln\frac{1}{p^2} - \ln(2\pi)} + o(1).
In the limit when \sigma tends to zero, the probability density f(x) eventually tends to zero at any x \ne \mu, but grows without limit if x = \mu, while its integral remains equal to 1. Therefore, the normal distribution cannot be defined as an ordinary function when \sigma = 0.
However, one can define the normal distribution with zero variance as a generalized function; specifically, as Dirac's "delta function" \delta translated by the mean \mu, that is f(x) = \delta(x - \mu). Its CDF is then the Heaviside step function translated by the mean \mu, namely

F(x) = \begin{cases} 0 & \text{if } x < \mu, \\ 1 & \text{if } x \ge \mu. \end{cases}
The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution. More specifically, suppose X_1, \ldots, X_n are independent and identically distributed random variables with the same arbitrary distribution, zero mean, and variance \sigma^2, and let Z be their mean scaled by \sqrt{n}:

Z = \sqrt{n}\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right).

Then, as n increases, the probability distribution of Z will tend to the normal distribution with zero mean and variance \sigma^2.
The theorem can be extended to variables that are not independent and/or not identically distributed if certain constraints are placed on the degree of dependence and the moments of the distributions.
Many test statistics, scores, and estimators encountered in practice contain sums of certain random variables in them, and even more estimators can be represented as sums of random variables through the use of influence functions. The central limit theorem implies that those statistical parameters will have asymptotically normal distributions.
The central limit theorem also implies that certain distributions can be approximated by the normal distribution, for example the binomial, Poisson, chi-squared and Student's t distributions for large values of their respective parameters.
Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution.
A general upper bound for the approximation error in the central limit theorem is given by the Berry–Esseen theorem, improvements of the approximation are given by the Edgeworth expansions.
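As a hedged illustration of the theorem (not from the original article), the Python sketch below scales averages of uniform random variables and checks their closeness to the standard normal with a Kolmogorov–Smirnov test; sample sizes and seeds are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(2)
n, reps = 1_000, 20_000

# Averages of n i.i.d. Uniform(0,1) draws, centred and scaled; Var(U) = 1/12.
means = rng.random((reps, n)).mean(axis=1)
z = (means - 0.5) * np.sqrt(n) / np.sqrt(1 / 12)

print(kstest(z, norm.cdf))   # large p-value: indistinguishable from N(0, 1)
```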
Of all probability distributions over the reals with a specified mean \mu and variance \sigma^2, the normal distribution N(\mu, \sigma^2) is the one with maximum entropy.^{[27]} If X is a continuous random variable with probability density f(x), then the entropy of X is defined as^{[28]}^{[29]}^{[30]}

H(X) = -\int_{-\infty}^{\infty} f(x)\ln f(x)\, dx,

where f(x)\ln f(x) is understood to be zero whenever f(x) = 0. This functional can be maximized, subject to the constraints that the distribution is properly normalized and has a specified variance, by using variational calculus. A functional with two Lagrange multipliers is defined:

L = -\int f(x)\ln f(x)\, dx - \lambda_0\left(1 - \int f(x)\, dx\right) - \lambda\left(\sigma^2 - \int f(x)(x - \mu)^2\, dx\right),

where f(x) is, for now, regarded as some density function with mean \mu and standard deviation \sigma.
At maximum entropy, a small variation \delta f(x) about f(x) will produce a variation \delta L about L which is equal to 0:

0 = \delta L = \int \delta f(x)\left[-\ln f(x) - 1 + \lambda_0 + \lambda(x - \mu)^2\right] dx.

Since this must hold for any small \delta f(x), the term in brackets must be zero, and solving for f(x) yields:

f(x) = e^{\lambda_0 - 1 + \lambda(x - \mu)^2}.

Using the constraint equations to solve for \lambda_0 and \lambda yields the density of the normal distribution:

f(x \mid \mu, \sigma) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}}.
The family of normal distributions is closed under linear transformations: if X is normally distributed with mean μ and standard deviation σ, then the variable Y = aX + b, for any real numbers a and b, is also normally distributed, with mean aμ + b and standard deviation aσ.
Also if X_{1} and X_{2} are two independent normal random variables, with means μ_{1}, μ_{2} and standard deviations σ_{1}, σ_{2}, then their sum X_{1} + X_{2} will also be normally distributed,^{[proof]} with mean μ_{1} + μ_{2} and variance σ_{1}^{2} + σ_{2}^{2}.
In particular, if X and Y are independent normal deviates with zero mean and variance σ^{2}, then X + Y and X − Y are also independent and normally distributed, with zero mean and variance 2σ^{2}. This is a special case of the polarization identity.^{[31]}
Also, if X_{1}, X_{2} are two independent normal deviates with mean μ and deviation σ, and a, b are arbitrary real numbers, then the variable

X_3 = \frac{aX_1 + bX_2 - (a + b)\mu}{\sqrt{a^2 + b^2}} + \mu

is also normally distributed with mean μ and deviation σ. It follows that the normal distribution is stable (with exponent α = 2).
More generally, any linear combination of independent normal deviates is a normal deviate.
For any positive integer n, any normal distribution with mean μ and variance σ^{2} is the distribution of the sum of n independent normal deviates, each with mean μ/n and variance σ^{2}/n. This property is called infinite divisibility.^{[32]}
Conversely, if X_{1} and X_{2} are independent random variables and their sum X_{1} + X_{2} has a normal distribution, then both X_{1} and X_{2} must be normal deviates.^{[33]}
This result is known as Cramér's decomposition theorem, and is equivalent to saying that the convolution of two distributions is normal if and only if both are normal. Cramér's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily closely.^{[34]}
Bernstein's theorem states that if X and Y are independent and X + Y and X − Y are also independent, then both X and Y must necessarily have normal distributions.^{[35]}^{[36]}
More generally, if X_{1}, ..., X_{n} are independent random variables, then two distinct linear combinations \sum a_k X_k and \sum b_k X_k will be independent if and only if all X_k are normal and \sum a_k b_k \sigma_k^2 = 0, where \sigma_k^2 denotes the variance of X_k.^{[35]}
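As a hedged Monte Carlo illustration of the closure of the normal family under linear combinations (not from the original article), the sketch below checks the mean, variance, and normality of aX_1 + bX_2; coefficients and parameters are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import normaltest

rng = np.random.default_rng(3)
a, b = 2.0, -0.5                                  # assumed example coefficients
mu1, s1, mu2, s2 = 1.0, 2.0, -3.0, 0.5

x1 = rng.normal(mu1, s1, 200_000)
x2 = rng.normal(mu2, s2, 200_000)
y = a * x1 + b * x2                               # should be N(a*mu1 + b*mu2, a^2*s1^2 + b^2*s2^2)

print(y.mean(), a * mu1 + b * mu2)                # means agree
print(y.var(), a**2 * s1**2 + b**2 * s2**2)       # variances agree
print(normaltest(y).pvalue)                       # no evidence against normality
```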
The Hellinger distance between two normal distributions N(\mu_1, \sigma_1^2) and N(\mu_2, \sigma_2^2) is equal to

H^2(P, Q) = 1 - \sqrt{\frac{2\sigma_1\sigma_2}{\sigma_1^2 + \sigma_2^2}}\; e^{-\frac{(\mu_1 - \mu_2)^2}{4(\sigma_1^2 + \sigma_2^2)}}.
If X is distributed normally with mean μ and variance σ^{2}, then
If X_{1} and X_{2} are two independent standard normal random variables with mean 0 and variance 1, then
The split normal distribution is most directly defined in terms of joining scaled sections of the density functions of different normal distributions and rescaling the density to integrate to one. The truncated normal distribution results from rescaling a section of a single density function.
The notion of the normal distribution, being one of the most important distributions in probability theory, has been extended far beyond the standard framework of the univariate (that is, one-dimensional) case. All these extensions are also called normal or Gaussian laws, so a certain ambiguity in names exists.
A random variable X has a two-piece normal distribution if it has a density of the form

f(x) = \begin{cases} A\, e^{-(x - \mu)^2/(2\sigma_1^2)} & \text{if } x \le \mu, \\ A\, e^{-(x - \mu)^2/(2\sigma_2^2)} & \text{if } x \ge \mu, \end{cases}
\qquad A = \sqrt{\frac{2}{\pi}}\,\frac{1}{\sigma_1 + \sigma_2},

where μ is the mode and σ_{1} and σ_{2} are the standard deviations of the distribution to the left and right of the mode respectively.
The mean, variance and third central moment of this distribution have been determined;^{[46]} in particular,

\operatorname{E}(X) = \mu + \sqrt{\frac{2}{\pi}}\,(\sigma_2 - \sigma_1), \qquad \operatorname{V}(X) = \left(1 - \frac{2}{\pi}\right)(\sigma_2 - \sigma_1)^2 + \sigma_1\sigma_2,

where E(X) and V(X) denote the mean and variance; an expression for the third central moment T(X) is given in the reference.
One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice. In such cases a possible extension would be a richer family of distributions, having more than two parameters and therefore being able to fit the empirical distribution more accurately. Examples of such extensions are:
Normality tests assess the likelihood that the given data set {x_{1}, ..., x_{n}} comes from a normal distribution. Typically the null hypothesis H_{0} is that the observations are distributed normally with unspecified mean μ and variance σ^{2}, versus the alternative H_{a} that the distribution is arbitrary. Many tests (over 40) have been devised for this problem, the more prominent of them are outlined below:
It is often the case that we don't know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample (x_1, \ldots, x_n) from a normal N(\mu, \sigma^2) population we would like to learn the approximate values of the parameters \mu and \sigma^2. The standard approach to this problem is the maximum likelihood method, which requires maximization of the log-likelihood function:

\ln L(\mu, \sigma^2) = \sum_{i=1}^{n} \ln f(x_i \mid \mu, \sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2.

Taking derivatives with respect to \mu and \sigma^2 and solving the resulting system of first order conditions yields the maximum likelihood estimates:

\hat{\mu} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2.
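As a hedged illustration (not from the original article), the following Python sketch computes these maximum likelihood estimates, together with the bias-corrected sample variance discussed below; the simulated parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(5.0, 3.0, 1_000)        # assumed sample from N(5, 9)

n = x.size
mu_hat = x.mean()                      # maximum likelihood estimate of mu
var_hat = np.mean((x - mu_hat) ** 2)   # ML estimate of sigma^2 (divides by n)
s2 = var_hat * n / (n - 1)             # unbiased sample variance (Bessel's correction)

print(mu_hat, var_hat, s2)
```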
The estimator \hat{\mu} is called the sample mean, since it is the arithmetic mean of all observations. The statistic \bar{x} is complete and sufficient for μ, and therefore by the Lehmann–Scheffé theorem, \hat{\mu} is the uniformly minimum variance unbiased (UMVU) estimator.^{[47]} In finite samples it is distributed normally:

\hat{\mu} \sim \mathcal{N}(\mu, \sigma^2/n).
The variance of this estimator is equal to the μμ-element of the inverse Fisher information matrix \mathcal{I}^{-1}. This implies that the estimator is finite-sample efficient. Of practical importance is the fact that the standard error of \hat{\mu} is proportional to 1/\sqrt{n}, that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials in Monte Carlo simulations.
From the standpoint of the asymptotic theory, \hat{\mu} is consistent, that is, it converges in probability to μ as n → ∞. The estimator is also asymptotically normal, which is a simple corollary of the fact that it is normal in finite samples:

\sqrt{n}(\hat{\mu} - \mu) \xrightarrow{d} \mathcal{N}(0, \sigma^2).
The estimator \hat{\sigma}^2 is called the sample variance, since it is the variance of the sample (x_1, ..., x_n). In practice, another estimator is often used instead of \hat{\sigma}^2. This other estimator is denoted s^2, and is also called the sample variance, which represents a certain ambiguity in terminology; its square root s is called the sample standard deviation. The estimator s^2 differs from \hat{\sigma}^2 by having (n − 1) instead of n in the denominator (the so-called Bessel's correction):

s^2 = \frac{n}{n - 1}\hat{\sigma}^2 = \frac{1}{n - 1}\sum_{i=1}^{n}(x_i - \bar{x})^2.
The difference between s^2 and \hat{\sigma}^2 becomes negligibly small for large n. In finite samples however, the motivation behind the use of s^2 is that it is an unbiased estimator of the underlying parameter σ^2, whereas \hat{\sigma}^2 is biased. Also, by the Lehmann–Scheffé theorem the estimator s^2 is uniformly minimum variance unbiased (UMVU),^{[47]} which makes it the "best" estimator among all unbiased ones. However it can be shown that the biased estimator \hat{\sigma}^2 is "better" than s^2 in terms of the mean squared error (MSE) criterion. In finite samples both s^2 and \hat{\sigma}^2 have a scaled chi-squared distribution with (n − 1) degrees of freedom:

s^2 \sim \frac{\sigma^2}{n - 1}\,\chi^2_{n-1}, \qquad \hat{\sigma}^2 \sim \frac{\sigma^2}{n}\,\chi^2_{n-1}.
The first of these expressions shows that the variance of s^2 is equal to 2σ^4/(n − 1), which is slightly greater than the σσ-element of the inverse Fisher information matrix \mathcal{I}^{-1}, namely 2σ^4/n. Thus, s^2 is not an efficient estimator for σ^2, and moreover, since s^2 is UMVU, we can conclude that the finite-sample efficient estimator for σ^2 does not exist.
Applying the asymptotic theory, both estimators s^2 and \hat{\sigma}^2 are consistent, that is, they converge in probability to σ^2 as the sample size n → ∞. The two estimators are also both asymptotically normal:

\sqrt{n}(s^2 - \sigma^2) \xrightarrow{d} \mathcal{N}(0, 2\sigma^4), \qquad \sqrt{n}(\hat{\sigma}^2 - \sigma^2) \xrightarrow{d} \mathcal{N}(0, 2\sigma^4).
In particular, both estimators are asymptotically efficient for σ^{2}.
By Cochran's theorem, for normal distributions the sample mean \hat{\mu} and the sample variance s^2 are independent, which means there can be no gain in considering their joint distribution. There is also a converse theorem: if in a sample the sample mean and sample variance are independent, then the sample must have come from the normal distribution. The independence between \hat{\mu} and s can be employed to construct the so-called t-statistic:

t = \frac{\hat{\mu} - \mu}{s/\sqrt{n}}.
This quantity t has the Student's t-distribution with (n − 1) degrees of freedom, and it is an ancillary statistic (independent of the value of the parameters). Inverting the distribution of this t-statistic will allow us to construct the confidence interval for μ;^{[48]} similarly, inverting the χ^2 distribution of the statistic s^2 will give us the confidence interval for σ^2:^{[49]}

\mu \in \left[\hat{\mu} - t_{n-1,1-\alpha/2}\,\frac{s}{\sqrt{n}},\; \hat{\mu} + t_{n-1,1-\alpha/2}\,\frac{s}{\sqrt{n}}\right] \approx \hat{\mu} \pm |z_{\alpha/2}|\,\frac{s}{\sqrt{n}},
\qquad
\sigma^2 \in \left[\frac{(n-1)s^2}{\chi^2_{n-1,1-\alpha/2}},\; \frac{(n-1)s^2}{\chi^2_{n-1,\alpha/2}}\right] \approx s^2 \pm |z_{\alpha/2}|\,\frac{\sqrt{2}}{\sqrt{n}}\,s^2,
where t_{k,p} and \chi^2_{k,p} are the pth quantiles of the t- and χ^2-distributions respectively. These confidence intervals are of the confidence level 1 − α, meaning that the true values μ and σ^2 fall outside of these intervals with probability (or significance level) α. In practice people usually take α = 5%, resulting in the 95% confidence intervals. The approximate formulas in the display above were derived from the asymptotic distributions of \hat{\mu} and s^2. The approximate formulas become valid for large values of n, and are more convenient for manual calculation since the standard normal quantiles z_{α/2} do not depend on n. In particular, the most popular value of α = 5% results in z_{0.025} = 1.96.
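As a hedged illustration (not from the original article), the Python sketch below builds the exact t-based interval for μ and the χ²-based interval for σ² from a simulated sample; all parameter choices are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.normal(5.0, 3.0, 30)
n, alpha = x.size, 0.05

xbar, s2 = x.mean(), x.var(ddof=1)

# t-based confidence interval for mu
t = stats.t.ppf(1 - alpha / 2, df=n - 1)
ci_mu = (xbar - t * np.sqrt(s2 / n), xbar + t * np.sqrt(s2 / n))

# chi-squared-based confidence interval for sigma^2
chi_lo = stats.chi2.ppf(alpha / 2, df=n - 1)
chi_hi = stats.chi2.ppf(1 - alpha / 2, df=n - 1)
ci_var = ((n - 1) * s2 / chi_hi, (n - 1) * s2 / chi_lo)

print(ci_mu, ci_var)
```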
Bayesian analysis of normally distributed data is complicated by the many different possibilities that may be considered:
The formulas for the non-linear regression cases are summarized in the conjugate prior article.
The following auxiliary formula is useful for simplifying the posterior update equations, which otherwise become fairly tedious.
This equation rewrites the sum of two quadratics in x by expanding the squares, grouping the terms in x, and completing the square. Note the following about the complex constant factors attached to some of the terms:
A similar formula can be written for the sum of two vector quadratics: If x, y, z are vectors of length k, and A and B are symmetric, invertible matrices of size k \times k, then

(y - x)'A(y - x) + (x - z)'B(x - z) = (x - c)'(A + B)(x - c) + (y - z)'\left(A^{-1} + B^{-1}\right)^{-1}(y - z),

where

c = (A + B)^{-1}(Ay + Bz).
Note that the form x′Ax is called a quadratic form and is a scalar:

x'Ax = \sum_{i,j} a_{ij}\, x_i x_j.

In other words, it sums up all possible combinations of products of pairs of elements from x, with a separate coefficient for each. In addition, since x_i x_j = x_j x_i, only the sum a_{ij} + a_{ji} matters for any off-diagonal elements of A, and there is no loss of generality in assuming that A is symmetric. Furthermore, if A is symmetric, then x'Ay = y'Ax.
Another useful formula is the decomposition of a sum of squared deviations about an arbitrary point \mu into deviations about the sample mean:

\sum_{i=1}^{n}(x_i - \mu)^2 = \sum_{i=1}^{n}(x_i - \bar{x})^2 + n(\bar{x} - \mu)^2,

where \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i.
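As a quick hedged check (not from the original article), this identity can be verified numerically in Python for an arbitrary reference point:

```python
import numpy as np

rng = np.random.default_rng(6)
x, mu = rng.normal(2.0, 1.0, 500), 2.3   # mu here is an arbitrary reference point

lhs = np.sum((x - mu) ** 2)
rhs = np.sum((x - x.mean()) ** 2) + x.size * (x.mean() - mu) ** 2
print(np.isclose(lhs, rhs))              # True: the identity holds for any mu
```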
For a set of i.i.d. normally distributed data points X of size n where each individual point x follows x \sim \mathcal{N}(\mu, \sigma^2) with known variance σ^2, the conjugate prior distribution of the mean is also normally distributed.
This can be shown more easily by rewriting the variance as the precision, i.e. using τ = 1/σ^2. Then if x \sim \mathcal{N}(\mu, 1/\tau) and the prior is \mu \sim \mathcal{N}(\mu_0, 1/\tau_0), we proceed as follows.
First, the likelihood function is (using the formula above for the sum of differences from the mean):
Then, we proceed as follows:
In the above derivation, we used the formula above for the sum of two quadratics and eliminated all constant factors not involving μ. The result is the kernel of a normal distribution, with mean \frac{n\tau\bar{x} + \tau_0\mu_0}{n\tau + \tau_0} and precision n\tau + \tau_0, i.e.

p(\mu \mid X) \sim \mathcal{N}\!\left(\frac{n\tau\bar{x} + \tau_0\mu_0}{n\tau + \tau_0},\; \frac{1}{n\tau + \tau_0}\right).
This can be written as a set of Bayesian update equations for the posterior parameters in terms of the prior parameters:
That is, to combine n data points with total precision of nτ (or equivalently, total variance of σ^2/n) and mean of values \bar{x}, derive a new total precision simply by adding the total precision of the data to the prior total precision, and form a new mean through a precision-weighted average, i.e. a weighted average of the data mean and the prior mean, each weighted by the associated total precision. This makes logical sense if the precision is thought of as indicating the certainty of the observations: in the distribution of the posterior mean, each of the input components is weighted by its certainty, and the certainty of this distribution is the sum of the individual certainties. (For the intuition of this, compare the expression "the whole is (or is not) greater than the sum of its parts". In addition, consider that the knowledge of the posterior comes from a combination of the knowledge of the prior and likelihood, so it makes sense that we are more certain of it than of either of its components.)
The above formula reveals why it is more convenient to do Bayesian analysis of conjugate priors for the normal distribution in terms of the precision. The posterior precision is simply the sum of the prior and likelihood precisions, and the posterior mean is computed through a precision-weighted average, as described above. The same formulas can be written in terms of variance by reciprocating all the precisions, yielding messier formulas.
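As a hedged sketch of these update equations (not from the original article), the helper below implements the precision-weighted posterior for the mean with known variance; the function name and parameter values are illustrative assumptions.

```python
import numpy as np

def posterior_mean_known_variance(x, tau, mu0, tau0):
    """Conjugate update for the mean of a normal with known precision tau = 1/sigma^2.

    Prior: mu ~ N(mu0, 1/tau0).  Returns the posterior mean and posterior precision.
    """
    x = np.asarray(x)
    n = x.size
    tau_post = tau0 + n * tau                                  # precisions add
    mu_post = (tau0 * mu0 + n * tau * x.mean()) / tau_post     # precision-weighted average
    return mu_post, tau_post

rng = np.random.default_rng(7)
data = rng.normal(3.0, 2.0, 50)
print(posterior_mean_known_variance(data, tau=1 / 4.0, mu0=0.0, tau0=0.1))
```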
For a set of i.i.d. normally distributed data points X of size n where each individual point x follows x \sim \mathcal{N}(\mu, \sigma^2) with known mean μ, the conjugate prior of the variance has an inverse gamma distribution or a scaled inverse chi-squared distribution. The two are equivalent except for having different parameterizations. Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience. The prior for σ^2 is as follows:
The likelihood function from above, written in terms of the variance, is:

p(X \mid \sigma^2) = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\!\left(-\frac{S}{2\sigma^2}\right),

where

S = \sum_{i=1}^{n}(x_i - \mu)^2.
Then:
The above is also a scaled inverse chisquared distribution where
or equivalently
Reparameterizing in terms of an inverse gamma distribution, the result is:
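As a hedged sketch (not from the original article), the helper below performs the equivalent conjugate update in the inverse-gamma parameterization rather than the scaled inverse chi-squared used in the text; the function name and hyperparameter values are illustrative assumptions.

```python
import numpy as np

def posterior_variance_known_mean(x, mu, alpha0, beta0):
    """Conjugate update for sigma^2 with known mean mu.

    Prior: sigma^2 ~ Inverse-Gamma(alpha0, beta0).
    Posterior: Inverse-Gamma(alpha0 + n/2, beta0 + 0.5 * sum((x - mu)^2)).
    """
    x = np.asarray(x)
    alpha_post = alpha0 + x.size / 2
    beta_post = beta0 + 0.5 * np.sum((x - mu) ** 2)
    return alpha_post, beta_post

rng = np.random.default_rng(8)
data = rng.normal(1.0, 3.0, 200)
a, b = posterior_variance_known_mean(data, mu=1.0, alpha0=2.0, beta0=2.0)
print(a, b, b / (a - 1))      # posterior mean of sigma^2 (for a > 1), near 9
```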
For a set of i.i.d. normally distributed data points X of size n where each individual point x follows x \sim \mathcal{N}(\mu, \sigma^2) with unknown mean μ and unknown variance σ^2, a combined (multivariate) conjugate prior is placed over the mean and variance, consisting of a normal-inverse-gamma distribution. Logically, this originates as follows:
The priors are normally defined as follows:
The update equations can be derived, and look as follows:
The respective numbers of pseudo-observations add the number of actual observations to them. The new mean hyperparameter is once again a weighted average, this time weighted by the relative numbers of observations. Finally, the update for the scale hyperparameter is similar to the case with known mean, but in this case the sum of squared deviations is taken with respect to the observed data mean rather than the true mean, and as a result a new "interaction term" needs to be added to take care of the additional error source stemming from the deviation between the prior mean and the data mean.
The occurrence of normal distribution in practical problems can be loosely classified into four categories:
Certain quantities in physics are distributed normally, as was first demonstrated by James Clerk Maxwell. Examples of such quantities are:
Approximately normal distributions occur in many situations, as explained by the central limit theorem. When the outcome is produced by many small effects acting additively and independently, its distribution will be close to normal. The normal approximation will not be valid if the effects act multiplicatively (instead of additively), or if there is a single external influence that has a considerably larger magnitude than the rest of the effects.
I can only recognize the occurrence of the normal curve – the Laplacian curve of errors – as a very abnormal phenomenon. It is roughly approximated to in certain distributions; for this reason, and on account of its beautiful simplicity, we may, perhaps, use it as a first approximation, particularly in theoretical investigations.
There are statistical methods to empirically test that assumption, see the above Normality tests section.
In regression analysis, lack of normality in residuals simply indicates that the model postulated is inadequate in accounting for the tendency in the data and needs to be augmented; in other words, normality in residuals can always be achieved given a properly constructed model.
In computer simulations, especially in applications of the Monte Carlo method, it is often desirable to generate values that are normally distributed. The algorithms listed below all generate standard normal deviates, since an N(μ, σ^2) variate can be generated as X = μ + σZ, where Z is standard normal. All these algorithms rely on the availability of a random number generator U capable of producing uniform random variates.
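As a hedged sketch of one such algorithm (not necessarily the method singled out by the original article), the Box–Muller transform below produces standard normal deviates from uniform variates; the seed and sample size are arbitrary assumptions.

```python
import numpy as np

def box_muller(n, rng=None):
    """Generate n standard normal deviates from uniform variates (Box-Muller transform)."""
    rng = rng or np.random.default_rng()
    u1 = 1.0 - rng.random(n)               # in (0, 1], avoids log(0)
    u2 = rng.random(n)
    r = np.sqrt(-2.0 * np.log(u1))
    return r * np.cos(2.0 * np.pi * u2)    # the matching sine gives a second independent deviate

z = box_muller(100_000, np.random.default_rng(9))
print(z.mean(), z.std())                   # close to 0 and 1

# A N(mu, sigma^2) variate is then obtained as X = mu + sigma * Z.
x = 4.0 + 1.5 * z
```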
The standard normal CDF is widely used in scientific and statistical computing.
The values Φ(x) may be approximated very accurately by a variety of methods, such as numerical integration, Taylor series, asymptotic series and continued fractions. Different approximations are used depending on the desired level of accuracy.
Shore (1982) introduced simple approximations that may be incorporated in stochastic optimization models of engineering and operations research, like reliability engineering and inventory analysis. Denoting p = Φ(z), the simplest approximation for the quantile function is:

z = \Phi^{-1}(p) \approx 5.5556\left[1 - \left(\frac{1 - p}{p}\right)^{0.1186}\right], \qquad p \ge 1/2.
This approximation delivers for z a maximum absolute error of 0.026 (for 0.5 ≤ p ≤ 0.9999, corresponding to 0 ≤ z ≤ 3.719). For p < 1/2 replace p by 1 − p and change the sign. Another approximation, somewhat less accurate, is the single-parameter approximation:
The latter had served to derive a simple approximation for the loss integral of the normal distribution, defined by
This approximation is particularly accurate for the far right tail (maximum error of 10^{−3} for z ≥ 1.4). Highly accurate approximations for the CDF, based on Response Modeling Methodology (RMM, Shore, 2011, 2012), are shown in Shore (2005).
Some more approximations can be found at: Error function#Approximation with elementary functions. In particular, small relative error on the whole domain for the CDF and the quantile function as well, is achieved via an explicitly invertible formula by Sergei Winitzki in 2008.
Some authors^{[61]}^{[62]} attribute the credit for the discovery of the normal distribution to de Moivre, who in 1738^{[nb 2]} published in the second edition of his "The Doctrine of Chances" the study of the coefficients in the binomial expansion of (a + b)^{n}. De Moivre proved that the middle term in this expansion has the approximate magnitude of \frac{2}{\sqrt{2\pi n}}, and that "If m or ½n be a Quantity infinitely great, then the Logarithm of the Ratio, which a Term distant from the middle by the Interval ℓ, has to the middle Term, is -\frac{2\ell\ell}{n}."^{[63]} Although this theorem can be interpreted as the first obscure expression for the normal probability law, Stigler points out that de Moivre himself did not interpret his results as anything more than the approximate rule for the binomial coefficients, and in particular de Moivre lacked the concept of the probability density function.^{[64]}
In 1809 Gauss published his monograph "Theoria motus corporum coelestium in sectionibus conicis solem ambientium" where among other things he introduces several important statistical concepts, such as the method of least squares, the method of maximum likelihood, and the normal distribution. Gauss used M, M′, M′′, … to denote the measurements of some unknown quantity V, and sought the "most probable" estimator of that quantity: the one that maximizes the probability φ(M − V) · φ(M′ − V) · φ(M′′ − V) · … of obtaining the observed experimental results. In his notation φΔ is the probability law of the measurement errors of magnitude Δ. Not knowing what the function φ is, Gauss requires that his method should reduce to the wellknown answer: the arithmetic mean of the measured values.^{[nb 3]} Starting from these principles, Gauss demonstrates that the only law that rationalizes the choice of arithmetic mean as an estimator of the location parameter, is the normal law of errors:^{[65]}
where h is "the measure of the precision of the observations". Using this normal law as a generic model for errors in the experiments, Gauss formulates what is now known as the nonlinear weighted least squares (NWLS) method.^{[66]}
Although Gauss was the first to suggest the normal distribution law, Laplace made significant contributions.^{[nb 4]} It was Laplace who first posed the problem of aggregating several observations in 1774,^{[67]} although his own solution led to the Laplacian distribution. It was Laplace who first calculated the value of the integral \int e^{-t^2}\, dt = \sqrt{\pi} in 1782, providing the normalization constant for the normal distribution.^{[68]} Finally, it was Laplace who in 1810 proved and presented to the Academy the fundamental central limit theorem, which emphasized the theoretical importance of the normal distribution.^{[69]}
It is of interest to note that in 1809 an American mathematician Adrain published two derivations of the normal probability law, simultaneously and independently from Gauss.^{[70]} His works remained largely unnoticed by the scientific community, until in 1871 they were "rediscovered" by Abbe.^{[71]}
In the middle of the 19th century Maxwell demonstrated that the normal distribution is not just a convenient mathematical tool, but may also occur in natural phenomena:^{[72]} "The number of particles whose velocity, resolved in a certain direction, lies between x and x + dx is

N\,\frac{1}{\alpha\sqrt{\pi}}\; e^{-\frac{x^2}{\alpha^2}}\, dx."
Since its introduction, the normal distribution has been known by many different names: the law of error, the law of facility of errors, Laplace's second law, Gaussian law, etc. Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with normal having its technical meaning of orthogonal rather than "usual".^{[73]} However, by the end of the 19th century some authors^{[nb 5]} had started using the name normal distribution, where the word "normal" was used as an adjective – the term now being seen as a reflection of the fact that this distribution was seen as typical, common – and thus "normal". Peirce (one of those authors) once defined "normal" thus: "...the 'normal' is not the average (or any other kind of mean) of what actually occurs, but of what would, in the long run, occur under certain circumstances."^{[74]} Around the turn of the 20th century Pearson popularized the term normal as a designation for this distribution.^{[75]}
Many years ago I called the Laplace–Gaussian curve the normal curve, which name, while it avoids an international question of priority, has the disadvantage of leading people to believe that all other distributions of frequency are in one sense or another 'abnormal'.
Also, it was Pearson who first wrote the distribution in terms of the standard deviation σ as in modern notation. Soon after this, in 1915, Fisher added the location parameter to the formula for the normal distribution, expressing it in the way it is written nowadays:

df = \frac{1}{\sigma\sqrt{2\pi}}\; e^{-\frac{(x - m)^2}{2\sigma^2}}\, dx.
The term "standard normal", which denotes the normal distribution with zero mean and unit variance came into general use around the 1950s, appearing in the popular textbooks by P.G. Hoel (1947) "Introduction to mathematical statistics" and A.M. Mood (1950) "Introduction to the theory of statistics".^{[76]}
The name "Gaussian distribution" refers to Carl Friedrich Gauss, who introduced the distribution in 1809 as a way of rationalizing the method of least squares as outlined above. Among English speakers, both "normal distribution" and "Gaussian distribution" are in common use, with different terms preferred by different communities.
In probability theory, the central limit theorem (CLT) establishes that, in some situations, when independent random variables are added, their properly normalized sum tends toward a normal distribution (informally a "bell curve") even if the original variables themselves are not normally distributed. The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions.
For example, suppose that a sample is obtained containing a large number of observations, each observation being randomly generated in a way that does not depend on the values of the other observations, and that the arithmetic mean of the observed values is computed. If this procedure is performed many times, the central limit theorem says that the distribution of the average will be closely approximated by a normal distribution. A simple example of this is that if one flips a coin many times the probability of getting a given number of heads in a series of flips will approach a normal curve, with mean equal to half the total number of flips in each series. (In the limit of an infinite number of flips, it will equal a normal curve.)
The central limit theorem has a number of variants. In its common form, the random variables must be identically distributed. In variants, convergence of the mean to the normal distribution also occurs for nonidentical distributions or for nonindependent observations, given that they comply with certain conditions.
The earliest version of this theorem, that the normal distribution may be used as an approximation to the binomial distribution, is now known as the de Moivre–Laplace theorem.
In more general usage, a central limit theorem is any of a set of weak-convergence theorems in probability theory. They all express the fact that a sum of many independent and identically distributed (i.i.d.) random variables, or alternatively, random variables with specific types of dependence, will tend to be distributed according to one of a small set of attractor distributions. When the variance of the i.i.d. variables is finite, the attractor distribution is the normal distribution. In contrast, the sum of a number of i.i.d. random variables with power law tail distributions decreasing as |x|^{-\alpha - 1} where 0 < α < 2 (and therefore having infinite variance) will tend to an alpha-stable distribution with stability parameter (or index of stability) of α as the number of variables grows.
Chi-squared distribution

In probability theory and statistics, the chi-squared distribution (also chi-square or χ²-distribution) with k degrees of freedom is the distribution of a sum of the squares of k independent standard normal random variables. The chi-squared distribution is a special case of the gamma distribution and is one of the most widely used probability distributions in inferential statistics, notably in hypothesis testing or in construction of confidence intervals. When it is being distinguished from the more general noncentral chi-squared distribution, this distribution is sometimes called the central chi-squared distribution.
The chisquared distribution is used in the common chisquared tests for goodness of fit of an observed distribution to a theoretical one, the independence of two criteria of classification of qualitative data, and in confidence interval estimation for a population standard deviation of a normal distribution from a sample standard deviation. Many other statistical tests also use this distribution, such as Friedman's analysis of variance by ranks.
Elliptical distribution

In probability and statistics, an elliptical distribution is any member of a broad family of probability distributions that generalize the multivariate normal distribution. Intuitively, in the simplified two- and three-dimensional case, the joint distribution forms an ellipse and an ellipsoid, respectively, in iso-density plots.
In statistics, the normal distribution is used in classical multivariate analysis, while elliptical distributions are used in generalized multivariate analysis, for the study of symmetric distributions with tails that are heavy, like the multivariate tdistribution, or light (in comparison with the normal distribution). Some statistical methods that were originally motivated by the study of the normal distribution have good performance for general elliptical distributions (with finite variance), particularly for spherical distributions (which are defined below). Elliptical distributions are also used in robust statistics to evaluate proposed multivariatestatistical procedures.
Folded normal distribution

The folded normal distribution is a probability distribution related to the normal distribution. Given a normally distributed random variable X with mean μ and variance σ^{2}, the random variable Y = |X| has a folded normal distribution. Such a case may be encountered if only the magnitude of some variable is recorded, but not its sign. The distribution is called "folded" because probability mass to the left of x = 0 is folded over by taking the absolute value. In the physics of heat conduction, the folded normal distribution is a fundamental solution of the heat equation on the upper plane (i.e. a heat kernel).
The probability density function (PDF) is given by

f_Y(x) = \frac{1}{\sigma\sqrt{2\pi}}\left(e^{-\frac{(x - \mu)^2}{2\sigma^2}} + e^{-\frac{(x + \mu)^2}{2\sigma^2}}\right)

for x ≥ 0, and 0 everywhere else. An alternative formulation is given by

f_Y(x) = \sqrt{\frac{2}{\pi\sigma^2}}\; e^{-\frac{x^2 + \mu^2}{2\sigma^2}} \cosh\!\left(\frac{\mu x}{\sigma^2}\right),

where cosh is the hyperbolic cosine function. It follows that the cumulative distribution function (CDF) is given by:

F_Y(x) = \frac{1}{2}\left[\operatorname{erf}\!\left(\frac{x + \mu}{\sigma\sqrt{2}}\right) + \operatorname{erf}\!\left(\frac{x - \mu}{\sigma\sqrt{2}}\right)\right]

for x ≥ 0, where erf() is the error function. This expression reduces to the CDF of the half-normal distribution when μ = 0.
The mean of the folded distribution is then

\mu_Y = \sigma\sqrt{\tfrac{2}{\pi}}\; e^{-\mu^2/(2\sigma^2)} + \mu\left[1 - 2\Phi\!\left(-\frac{\mu}{\sigma}\right)\right],

or equivalently

\mu_Y = \sigma\sqrt{\tfrac{2}{\pi}}\; e^{-\mu^2/(2\sigma^2)} + \mu\,\operatorname{erf}\!\left(\frac{\mu}{\sigma\sqrt{2}}\right),

where \Phi is the normal cumulative distribution function:

\Phi(x) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right].

The variance then is expressed easily in terms of the mean:

\sigma_Y^2 = \mu^2 + \sigma^2 - \mu_Y^2.
Both the mean (μ) and variance (σ^{2}) of X in the original normal distribution can be interpreted as the location and scale parameters of Y in the folded distribution.
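As a hedged Monte Carlo check of the mean formula above (not from the original article), the Python sketch below folds a normal sample and compares the empirical mean with the closed-form expression; the parameter values are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
mu, sigma = 1.0, 2.0                           # assumed example parameters

y = np.abs(rng.normal(mu, sigma, 1_000_000))   # folded normal sample
mean_formula = sigma * np.sqrt(2 / np.pi) * np.exp(-mu**2 / (2 * sigma**2)) \
    + mu * (1 - 2 * norm.cdf(-mu / sigma))

print(y.mean(), mean_formula)                  # the two agree closely
```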
Generalized normal distribution

The generalized normal distribution or generalized Gaussian distribution (GGD) is either of two families of parametric continuous probability distributions on the real line. Both families add a shape parameter to the normal distribution. To distinguish the two families, they are referred to below as "version 1" and "version 2". However, this is not a standard nomenclature.
Half-normal distribution

In probability theory and statistics, the half-normal distribution is a special case of the folded normal distribution.
Let X follow an ordinary normal distribution N(0, σ^{2}); then Y = |X| follows a half-normal distribution. Thus, the half-normal distribution is a fold at the mean of an ordinary normal distribution with mean zero.
Jarque–Bera test

In statistics, the Jarque–Bera test is a goodness-of-fit test of whether sample data have the skewness and kurtosis matching a normal distribution. The test is named after Carlos Jarque and Anil K. Bera. The test statistic is always nonnegative. If it is far from zero, it signals the data do not have a normal distribution.
The test statistic JB is defined as

JB = \frac{n - k + 1}{6}\left[S^2 + \frac{1}{4}(C - 3)^2\right],

where n is the number of observations (or degrees of freedom in general); S is the sample skewness, C is the sample kurtosis, and k is the number of regressors (being 1 outside a regression context):

S = \frac{\hat{\mu}_3}{\hat{\sigma}^3}, \qquad C = \frac{\hat{\mu}_4}{\hat{\sigma}^4},
where \hat{\mu}_3 and \hat{\mu}_4 are the estimates of the third and fourth central moments, respectively, \bar{x} is the sample mean, and \hat{\sigma}^2 is the estimate of the second central moment, the variance.
If the data comes from a normal distribution, the JB statistic asymptotically has a chisquared distribution with two degrees of freedom, so the statistic can be used to test the hypothesis that the data are from a normal distribution. The null hypothesis is a joint hypothesis of the skewness being zero and the excess kurtosis being zero. Samples from a normal distribution have an expected skewness of 0 and an expected excess kurtosis of 0 (which is the same as a kurtosis of 3). As the definition of JB shows, any deviation from this increases the JB statistic.
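As a hedged illustration (not from the original article), SciPy's implementation of the test can be applied to a normal and a deliberately skewed sample; the samples are simulated assumptions.

```python
import numpy as np
from scipy.stats import jarque_bera

rng = np.random.default_rng(11)
normal_sample = rng.normal(size=5_000)
skewed_sample = rng.exponential(size=5_000)

print(jarque_bera(normal_sample))   # large p-value: consistent with normality
print(jarque_bera(skewed_sample))   # tiny p-value: normality rejected
```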
For small samples the chisquared approximation is overly sensitive, often rejecting the null hypothesis when it is true. Furthermore, the distribution of pvalues departs from a uniform distribution and becomes a rightskewed unimodal distribution, especially for small pvalues. This leads to a large Type I error rate. The table below shows some pvalues approximated by a chisquared distribution that differ from their true alpha levels for small samples.
| True α level | n = 20 | n = 30 | n = 50 | n = 70 | n = 100 |
|---|---|---|---|---|---|
| 0.1 | 0.307 | 0.252 | 0.201 | 0.183 | 0.1560 |
| 0.05 | 0.1461 | 0.109 | 0.079 | 0.067 | 0.062 |
| 0.025 | 0.051 | 0.0303 | 0.020 | 0.016 | 0.0168 |
| 0.01 | 0.0064 | 0.0033 | 0.0015 | 0.0012 | 0.002 |
(These values have been approximated using Monte Carlo simulation in Matlab)
In MATLAB's implementation, the chisquared approximation for the JB statistic's distribution is only used for large sample sizes (> 2000). For smaller samples, it uses a table derived from Monte Carlo simulations in order to interpolate pvalues.
Kurtosis

In probability theory and statistics, kurtosis (from Greek: κυρτός, kyrtos or kurtos, meaning "curved, arching") is a measure of the "tailedness" of the probability distribution of a real-valued random variable. In a similar way to the concept of skewness, kurtosis is a descriptor of the shape of a probability distribution and, just as for skewness, there are different ways of quantifying it for a theoretical distribution and corresponding ways of estimating it from a sample from a population. Depending on the particular measure of kurtosis that is used, there are various interpretations of kurtosis, and of how particular measures should be interpreted.
The standard measure of kurtosis, originating with Karl Pearson, is based on a scaled version of the fourth moment of the data or population. This number is related to the tails of the distribution, not its peak; hence, the sometimes-seen characterization as "peakedness" is mistaken. For this measure, higher kurtosis is the result of infrequent extreme deviations (or outliers), as opposed to frequent modestly sized deviations.
The kurtosis of any univariate normal distribution is 3. It is common to compare the kurtosis of a distribution to this value. Distributions with kurtosis less than 3 are said to be platykurtic, although this does not imply the distribution is "flattopped" as sometimes reported. Rather, it means the distribution produces fewer and less extreme outliers than does the normal distribution. An example of a platykurtic distribution is the uniform distribution, which does not produce outliers. Distributions with kurtosis greater than 3 are said to be leptokurtic. An example of a leptokurtic distribution is the Laplace distribution, which has tails that asymptotically approach zero more slowly than a Gaussian, and therefore produces more outliers than the normal distribution. It is also common practice to use an adjusted version of Pearson's kurtosis, the excess kurtosis, which is the kurtosis minus 3, to provide the comparison to the normal distribution. Some authors use "kurtosis" by itself to refer to the excess kurtosis. For the reason of clarity and generality, however, this article follows the nonexcess convention and explicitly indicates where excess kurtosis is meant.
Alternative measures of kurtosis are: the Lkurtosis, which is a scaled version of the fourth Lmoment; measures based on four population or sample quantiles. These are analogous to the alternative measures of skewness that are not based on ordinary moments.
Log-normal distribution

In probability theory, a log-normal (or lognormal) distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed. Thus, if the random variable X is log-normally distributed, then Y = ln(X) has a normal distribution. Likewise, if Y has a normal distribution, then the exponential function of Y, X = exp(Y), has a log-normal distribution. A random variable which is log-normally distributed takes only positive real values. The distribution is occasionally referred to as the Galton distribution or Galton's distribution, after Francis Galton. The log-normal distribution has also been associated with other names, such as McAlister, Gibrat and Cobb–Douglas. A log-normal process is the statistical realization of the multiplicative product of many independent random variables, each of which is positive. This is justified by considering the central limit theorem in the log domain. The log-normal distribution is the maximum entropy probability distribution for a random variate X for which the mean and variance of ln(X) are specified.
Logit-normal distribution

In probability theory, a logit-normal distribution is a probability distribution of a random variable whose logit has a normal distribution. If Y is a random variable with a normal distribution, and P is the standard logistic function, then X = P(Y) has a logit-normal distribution; likewise, if X is logit-normally distributed, then Y = logit(X) = log(X/(1 − X)) is normally distributed. It is also known as the logistic normal distribution, which often refers to a multinomial logit version (e.g.).
A variable might be modeled as logitnormal if it is a proportion, which is bounded by zero and one, and where values of zero and one never occur.
Matrix normal distribution

In statistics, the matrix normal distribution or matrix Gaussian distribution is a probability distribution that is a generalization of the multivariate normal distribution to matrix-valued random variables.
Mode (statistics)

The mode of a set of data values is the value that appears most often. If X is a discrete random variable, the mode is the value x (i.e., X = x) at which the probability mass function takes its maximum value. In other words, it is the value that is most likely to be sampled.
Like the statistical mean and median, the mode is a way of expressing, in a (usually) single number, important information about a random variable or a population. The numerical value of the mode is the same as that of the mean and median in a normal distribution, and it may be very different in highly skewed distributions.
The mode is not necessarily unique to a given discrete distribution, since the probability mass function may take the same maximum value at several points x1, x2, etc. The most extreme case occurs in uniform distributions, where all values occur equally frequently.
When the probability density function of a continuous distribution has multiple local maxima it is common to refer to all of the local maxima as modes of the distribution. Such a continuous distribution is called multimodal (as opposed to unimodal). A mode of a continuous probability distribution is often considered to be any value x at which its probability density function has a locally maximum value, so any peak is a mode. In symmetric unimodal distributions, such as the normal distribution, the mean (if defined), median and mode all coincide. For samples, if it is known that they are drawn from a symmetric unimodal distribution, the sample mean can be used as an estimate of the population mode.
Multivariate normal distribution

In probability theory and statistics, the multivariate normal distribution, multivariate Gaussian distribution, or joint normal distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions. One definition is that a random vector is said to be k-variate normally distributed if every linear combination of its k components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit theorem. The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables each of which clusters around a mean value.
Normal-Wishart distribution

In probability theory and statistics, the normal-Wishart distribution (or Gaussian-Wishart distribution) is a multivariate four-parameter family of continuous probability distributions. It is the conjugate prior of a multivariate normal distribution with unknown mean and precision matrix (the inverse of the covariance matrix).
Skew normal distribution

In probability theory and statistics, the skew normal distribution is a continuous probability distribution that generalises the normal distribution to allow for non-zero skewness.
Slash distribution

In probability theory, the slash distribution is the probability distribution of a standard normal variate divided by an independent standard uniform variate. In other words, if the random variable Z has a normal distribution with zero mean and unit variance, the random variable U has a uniform distribution on [0, 1], and Z and U are statistically independent, then the random variable X = Z / U has a slash distribution. The slash distribution is an example of a ratio distribution. The distribution was named by William H. Rogers and John Tukey in a paper published in 1972.
The probability density function (pdf) is

f(x) = \frac{\varphi(0) - \varphi(x)}{x^2},

where φ(x) is the probability density function of the standard normal distribution. The result is undefined at x = 0, but the discontinuity is removable:

f(0) = \lim_{x \to 0} f(x) = \frac{\varphi(0)}{2} = \frac{1}{2\sqrt{2\pi}}.
The most common use of the slash distribution is in simulation studies. It is a useful distribution in this context because it has heavier tails than a normal distribution, but it is not as pathological as the Cauchy distribution.
Truncated normal distribution

In probability and statistics, the truncated normal distribution is the probability distribution derived from that of a normally distributed random variable by bounding the random variable from either below or above (or both). The truncated normal distribution has wide applications in statistics and econometrics. For example, it is used to model the probabilities of the binary outcomes in the probit model and to model censored data in the Tobit model.
Wrapped normal distribution

In probability theory and directional statistics, a wrapped normal distribution is a wrapped probability distribution that results from the "wrapping" of the normal distribution around the unit circle. It finds application in the theory of Brownian motion and is a solution to the heat equation for periodic boundary conditions. It is closely approximated by the von Mises distribution, which, due to its mathematical simplicity and tractability, is the most commonly used distribution in directional statistics.
Z-test

A Z-test is any statistical test for which the distribution of the test statistic under the null hypothesis can be approximated by a normal distribution. Because of the central limit theorem, many test statistics are approximately normally distributed for large samples. For each significance level, the Z-test has a single critical value (for example, 1.96 for 5% two-tailed), which makes it more convenient than the Student's t-test, which has separate critical values for each sample size. Therefore, many statistical tests can be conveniently performed as approximate Z-tests if the sample size is large or the population variance is known. If the population variance is unknown (and therefore has to be estimated from the sample itself) and the sample size is not large (n < 30), the Student's t-test may be more appropriate.
If T is a statistic that is approximately normally distributed under the null hypothesis, the next step in performing a Ztest is to estimate the expected value θ of T under the null hypothesis, and then obtain an estimate s of the standard deviation of T. After that the standard score Z = (T − θ) / s is calculated, from which onetailed and twotailed pvalues can be calculated as Φ(−Z) (for uppertailed tests), Φ(Z) (for lowertailed tests) and 2Φ(−Z) (for twotailed tests) where Φ is the standard normal cumulative distribution function.
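As a hedged illustration of this procedure (not from the original article), the Python sketch below runs a one-sample Z-test on simulated data with a known population standard deviation; all numerical choices are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(12)
x = rng.normal(10.3, 2.0, 100)       # sample; population sigma assumed known (= 2)

mu0, sigma = 10.0, 2.0               # null-hypothesis mean and known standard deviation
z = (x.mean() - mu0) / (sigma / np.sqrt(x.size))

p_two_sided = 2 * norm.cdf(-abs(z))  # two-tailed p-value
print(z, p_two_sided)
```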
This page is based on a Wikipedia article written by its contributors. Text is available under the CC BY-SA 3.0 license; additional terms may apply. Images, videos and audio are available under their respective licenses.