Cumulative distribution function

In probability theory and statistics, the cumulative distribution function (CDF) of a real-valued random variable $X$, or just distribution function of $X$, evaluated at $x$, is the probability that $X$ will take a value less than or equal to $x$.

In the case of a continuous distribution, it gives the area under the probability density function from minus infinity to $x$. Cumulative distribution functions are also used to specify the distribution of multivariate random variables.

Figure: Cumulative distribution function for the exponential distribution.
Figure: Cumulative distribution function for the normal distribution.

Definition

The cumulative distribution function of a real-valued random variable $X$ is the function given by[1]:p. 77

$F_X(x) = \operatorname{P}(X \leq x)$     (Eq.1)

where the right-hand side represents the probability that the random variable $X$ takes on a value less than or equal to $x$. The probability that $X$ lies in the semi-closed interval $(a, b]$, where $a < b$, is therefore[1]:p. 84

$\operatorname{P}(a < X \leq b) = F_X(b) - F_X(a)$     (Eq.2)

In the definition above, the "less than or equal to" sign, "≤", is a convention; it is not universally used (e.g., Hungarian literature uses "<"), but the distinction is important for discrete distributions. The proper use of tables of the binomial and Poisson distributions depends upon this convention. Moreover, important formulas like Paul Lévy's inversion formula for the characteristic function also rely on the "less than or equal" formulation.

If treating several random variables $X, Y, \ldots$, the corresponding letters are used as subscripts, while, if treating only one, the subscript is usually omitted. It is conventional to use a capital $F$ for a cumulative distribution function, in contrast to the lower-case $f$ used for probability density functions and probability mass functions. This applies when discussing general distributions: some specific distributions have their own conventional notation, for example the normal distribution uses $\Phi$ and $\phi$ instead of $F$ and $f$, respectively.

The CDF of a continuous random variable $X$ can be expressed as the integral of its probability density function $f_X$ as follows:[1]:p. 86

$F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt.$

In the case of a random variable $X$ which has distribution having a discrete component at a value $b$,

$\operatorname{P}(X = b) = F_X(b) - \lim_{x \to b^{-}} F_X(x).$

If $F_X$ is continuous at $b$, this equals zero and there is no discrete component at $b$.

Properties

Figure: From top to bottom, the cumulative distribution function of a discrete probability distribution, continuous probability distribution, and a distribution which has both a continuous part and a discrete part.

Every cumulative distribution function $F_X$ is non-decreasing[1]:p. 78 and right-continuous[1]:p. 79, which makes it a càdlàg function. Furthermore,

$\lim_{x \to -\infty} F_X(x) = 0, \qquad \lim_{x \to +\infty} F_X(x) = 1.$

Every function with these four properties is a CDF, i.e., for every such function, a random variable can be defined such that the function is the cumulative distribution function of that random variable.

If $X$ is a purely discrete random variable, then it attains values $x_1, x_2, \ldots$ with probability $p_i = \operatorname{P}(X = x_i)$, and the CDF of $X$ will be discontinuous at the points $x_i$ and constant in between:

$F_X(x) = \operatorname{P}(X \leq x) = \sum_{x_i \leq x} \operatorname{P}(X = x_i) = \sum_{x_i \leq x} p_i.$

If the CDF $F_X$ of a real valued random variable $X$ is continuous, then $X$ is a continuous random variable; if furthermore $F_X$ is absolutely continuous, then there exists a Lebesgue-integrable function $f_X(x)$ such that

$F_X(b) - F_X(a) = \operatorname{P}(a < X \leq b) = \int_a^b f_X(x)\,dx$

for all real numbers $a$ and $b$. The function $f_X$ is equal to the derivative of $F_X$ almost everywhere, and it is called the probability density function of the distribution of $X$.

Examples

As an example, suppose $X$ is uniformly distributed on the unit interval $[0, 1]$. Then the CDF of $X$ is given by

$F_X(x) = \begin{cases} 0 & \text{if } x < 0 \\ x & \text{if } 0 \leq x \leq 1 \\ 1 & \text{if } x > 1 \end{cases}$

Suppose instead that $X$ takes only the discrete values 0 and 1, with equal probability. Then the CDF of $X$ is given by

$F_X(x) = \begin{cases} 0 & \text{if } x < 0 \\ 1/2 & \text{if } 0 \leq x < 1 \\ 1 & \text{if } x \geq 1 \end{cases}$
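To make the two examples concrete, here is a small Python sketch (illustrative only, not part of the original article) implementing both CDFs directly from their piecewise definitions:

```python
def cdf_uniform01(x: float) -> float:
    """CDF of X uniformly distributed on [0, 1]."""
    if x < 0:
        return 0.0
    if x > 1:
        return 1.0
    return x

def cdf_fair_coin(x: float) -> float:
    """CDF of X taking the values 0 and 1 with equal probability."""
    if x < 0:
        return 0.0
    if x < 1:
        return 0.5
    return 1.0

assert cdf_uniform01(0.3) == 0.3
assert cdf_fair_coin(0.999) == 0.5  # constant between the jumps at 0 and 1
assert cdf_fair_coin(1.0) == 1.0    # right-continuous: the jump at 1 is included
```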

Derived functions

Complementary cumulative distribution function (tail distribution)

Sometimes, it is useful to study the opposite question and ask how often the random variable is above a particular level. This is called the complementary cumulative distribution function (ccdf) or simply the tail distribution or exceedance, and is defined as

$\bar{F}_X(x) = \operatorname{P}(X > x) = 1 - F_X(x).$

This has applications in statistical hypothesis testing, for example, because the one-sided p-value is the probability of observing a test statistic at least as extreme as the one observed. Thus, provided that the test statistic, T, has a continuous distribution, the one-sided p-value is simply given by the ccdf: for an observed value $t$ of the test statistic

$p = \operatorname{P}(T \geq t) = \operatorname{P}(T > t) = 1 - F_T(t).$
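For instance, a minimal SciPy sketch (assuming, purely for illustration, a test statistic that is standard normal under the null hypothesis):

```python
# One-sided p-value via the ccdf: SciPy exposes it as the survival
# function sf, where sf(t) = 1 - cdf(t).
from scipy.stats import norm

t_observed = 1.96                   # hypothetical observed statistic
p_one_sided = norm.sf(t_observed)   # P(T >= t) under the null
print(p_one_sided)                  # ~0.025
```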

In survival analysis, $\bar{F}_X(x)$ is called the survival function and denoted $S(x)$, while the term reliability function is common in engineering.

Properties
  • As $x \to \infty$, $\bar{F}_X(x) \to 0$, and in fact $\bar{F}_X(x) = o(1/x)$ provided that $\operatorname{E}[|X|]$ is finite.
Proof: Assuming $X$ has a density function $f_X$, for any $c > 0$,

$\operatorname{E}[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx \geq \int_{-\infty}^{c} x f_X(x)\,dx + c \int_{c}^{\infty} f_X(x)\,dx.$

Then, on recognizing $\bar{F}_X(c) = \int_{c}^{\infty} f_X(x)\,dx$ and rearranging terms,

$0 \leq c\,\bar{F}_X(c) \leq \operatorname{E}[X] - \int_{-\infty}^{c} x f_X(x)\,dx \to 0 \text{ as } c \to \infty,$

as claimed.

Folded cumulative distribution

Figure: Example of the folded cumulative distribution for a normal distribution function with an expected value of 0 and a standard deviation of 1.

While the plot of a cumulative distribution often has an S-like shape, an alternative illustration is the folded cumulative distribution or mountain plot, which folds the top half of the graph over,[3][4] thus using two scales, one for the upslope and another for the downslope. This form of illustration emphasises the median and dispersion (specifically, the mean absolute deviation from the median[5]) of the distribution or of the empirical results.
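A minimal sketch of how such a plot can be produced (assuming NumPy, SciPy and Matplotlib; the fold is simply min(F, 1 − F)):

```python
# Folded CDF ("mountain plot") of the standard normal: the top half of
# the CDF is folded down, so the curve peaks at 0.5 at the median.
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

x = np.linspace(-4, 4, 401)
F = norm.cdf(x)
folded = np.minimum(F, 1.0 - F)   # fold the upper half of the graph over

plt.plot(x, folded)
plt.xlabel("x")
plt.ylabel("min(F(x), 1 - F(x))")
plt.title("Folded CDF of N(0, 1)")
plt.show()
```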

Inverse distribution function (quantile function)

If the CDF $F$ is strictly increasing and continuous then $F^{-1}(p)$, $p \in [0,1]$, is the unique real number $x$ such that $F(x) = p$. In such a case, this defines the inverse distribution function or quantile function.

Some distributions do not have a unique inverse (for example in the case where $f_X(x) = 0$ for all $a < x < b$, causing $F_X$ to be constant). This problem can be solved by defining, for $p \in [0,1]$, the generalized inverse distribution function:

$F^{-1}(p) = \inf\{x \in \mathbb{R} : F(x) \geq p\}.$

  • Example 1: The median is $F^{-1}(0.5)$.
  • Example 2: Put $\tau = F^{-1}(0.95)$. Then we call $\tau$ the 95th percentile.

Some useful properties of the inverse cdf (which are also preserved in the definition of the generalized inverse distribution function) are:

  1. $F^{-1}$ is nondecreasing
  2. $F^{-1}(p) \leq x$ if and only if $p \leq F(x)$
  3. If $Y$ has a $U[0,1]$ distribution then $F^{-1}(Y)$ is distributed as $F$. This is used in random number generation using the inverse transform sampling method.
  4. If $\{Y_\alpha\}$ is a collection of independent $F$-distributed random variables defined on the same sample space, then there exist random variables $X_\alpha$ such that $X_\alpha$ is distributed as $U[0,1]$ and $F^{-1}(X_\alpha) = Y_\alpha$ with probability 1 for all $\alpha$.

The inverse of the cdf can be used to translate results obtained for the uniform distribution to other distributions.
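As an illustration, here is a minimal Python sketch of inverse transform sampling (property 3 above), targeting, by way of example, the exponential distribution with rate lam, whose inverse CDF has the closed form $-\ln(1-p)/\lambda$:

```python
# Inverse transform sampling sketch: if U ~ Uniform[0,1], then F^{-1}(U)
# has CDF F. Here F(x) = 1 - exp(-lam*x), the exponential CDF.
import math
import random

def sample_exponential(lam: float) -> float:
    u = random.random()              # U ~ Uniform[0, 1)
    return -math.log(1.0 - u) / lam  # F^{-1}(u)

samples = [sample_exponential(2.0) for _ in range(100_000)]
print(sum(samples) / len(samples))   # close to the true mean 1/lam = 0.5
```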

Multivariate case

Definition for two random variables

When dealing simultaneously with more than one random variable the joint cumulative distribution function can also be defined. For example, for a pair of random variables $X, Y$, the joint CDF $F_{XY}$ is given by[1]:p. 89

$F_{X,Y}(x, y) = \operatorname{P}(X \leq x, Y \leq y)$     (Eq.3)

where the right-hand side represents the probability that the random variable $X$ takes on a value less than or equal to $x$ and that $Y$ takes on a value less than or equal to $y$.

Definition for more than two random variables

For $N$ random variables $X_1, \ldots, X_N$, the joint CDF $F_{X_1,\ldots,X_N}$ is given by

$F_{X_1,\ldots,X_N}(x_1, \ldots, x_N) = \operatorname{P}(X_1 \leq x_1, \ldots, X_N \leq x_N)$     (Eq.4)

Interpreting the $N$ random variables as a random vector $\mathbf{X} = (X_1, \ldots, X_N)^T$ yields a shorter notation:

$F_{\mathbf{X}}(\mathbf{x}) = \operatorname{P}(X_1 \leq x_1, \ldots, X_N \leq x_N).$
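As a numerical illustration (assuming a recent SciPy in which multivariate_normal exposes a cdf method), the joint CDF of a bivariate normal can be evaluated directly:

```python
# P(X <= 0, Y <= 0) for a bivariate normal with correlation 0.5;
# the exact value is 1/4 + arcsin(0.5)/(2*pi) = 1/3.
from scipy.stats import multivariate_normal

mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.5], [0.5, 1.0]])
print(mvn.cdf([0.0, 0.0]))  # approximately 0.3333
```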

Properties

Every multivariate CDF is:

  1. Monotonically non-decreasing for each of its variables,
  2. Right-continuous in each of its variables.

Complex case

Complex random variable

The generalization of the cumulative distribution function from real to complex random variables is not obvious because expressions of the form $\operatorname{P}(Z \leq 1 + 2i)$ make no sense. However, expressions of the form $\operatorname{P}(\Re(Z) \leq 1, \Im(Z) \leq 3)$ make sense. Therefore, we define the cumulative distribution of a complex random variable via the joint distribution of its real and imaginary parts:

$F_Z(z) = F_{\Re(Z),\Im(Z)}(\Re(z), \Im(z)) = \operatorname{P}(\Re(Z) \leq \Re(z), \Im(Z) \leq \Im(z)).$

Complex random vector

Generalization of Eq.4 yields

$F_{\mathbf{Z}}(\mathbf{z}) = \operatorname{P}(\Re(Z_1) \leq \Re(z_1), \Im(Z_1) \leq \Im(z_1), \ldots, \Re(Z_N) \leq \Re(z_N), \Im(Z_N) \leq \Im(z_N))$

as definition for the CDF of a complex random vector $\mathbf{Z} = (Z_1, \ldots, Z_N)^T$.

Use in statistical analysis

The concept of the cumulative distribution function makes an explicit appearance in statistical analysis in two (similar) ways. Cumulative frequency analysis is the analysis of the frequency of occurrence of values of a phenomenon less than a reference value. The empirical distribution function is a formal direct estimate of the cumulative distribution function for which simple statistical properties can be derived and which can form the basis of various statistical hypothesis tests. Such tests can assess whether there is evidence against a sample of data having arisen from a given distribution, or evidence against two samples of data having arisen from the same (unknown) population distribution.

Kolmogorov–Smirnov and Kuiper's tests

Figure: Cumulative distribution function of the gamma distribution.

The Kolmogorov–Smirnov test is based on cumulative distribution functions and can be used to test whether two empirical distributions are different or whether an empirical distribution differs from an ideal distribution. The closely related Kuiper's test is useful if the domain of the distribution is cyclic, as in day of the week. For instance, Kuiper's test might be used to see if the number of tornadoes varies during the year or if sales of a product vary by day of the week or day of the month.
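A brief SciPy sketch of both variants of the Kolmogorov–Smirnov test (the sample data here are synthetic, for illustration only):

```python
import numpy as np
from scipy.stats import kstest, ks_2samp, norm

rng = np.random.default_rng(0)
a = rng.normal(size=500)            # sample from N(0, 1)
b = rng.normal(loc=0.3, size=500)   # sample from a shifted normal

print(kstest(a, norm.cdf))  # one-sample: compare a against the N(0,1) CDF
print(ks_2samp(a, b))       # two-sample: are a and b from the same distribution?
```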

See also

References

  1. ^ a b c d e f Park, Kun Il (2018). Fundamentals of Probability and Stochastic Processes with Applications to Communications. Springer. ISBN 978-3-319-68074-3.
  2. ^ Zwillinger, Daniel; Kokoska, Stephen (2010). CRC Standard Probability and Statistics Tables and Formulae. CRC Press. p. 49. ISBN 978-1-58488-059-2.
  3. ^ Gentle, J.E. (2009). Computational Statistics. Springer. ISBN 978-0-387-98145-1. Retrieved 2010-08-06.
  4. ^ Monti, K.L. (1995). "Folded Empirical Distribution Function Curves (Mountain Plots)". The American Statistician. 49: 342–345. doi:10.2307/2684570. JSTOR 2684570.
  5. ^ Xue, J. H.; Titterington, D. M. (2011). "The p-folded cumulative distribution function and the mean absolute deviation from the p-quantile". Statistics & Probability Letters. 81 (8): 1179–1182. doi:10.1016/j.spl.2011.03.014.

ARGUS distribution

In physics, the ARGUS distribution, named after the particle physics experiment ARGUS, is the probability distribution of the reconstructed invariant mass of a decayed particle candidate in continuum background.

Beta rectangular distribution

In probability theory and statistics, the beta rectangular distribution is a probability distribution that is a finite mixture distribution of the beta distribution and the continuous uniform distribution. The support of the distribution is indicated by the parameters a and b, which are the minimum and maximum values respectively. The distribution provides an alternative to the beta distribution such that it allows more density to be placed at the extremes of the bounded interval of support. Thus it is a bounded distribution that allows for outliers to have a greater chance of occurring than does the beta distribution.

Cantor distribution

The Cantor distribution is the probability distribution whose cumulative distribution function is the Cantor function.

This distribution has neither a probability density function nor a probability mass function, since although its cumulative distribution function is a continuous function, the distribution is not absolutely continuous with respect to Lebesgue measure, nor does it have any point-masses. It is thus neither a discrete nor an absolutely continuous probability distribution, nor is it a mixture of these. Rather it is an example of a singular distribution.

Its cumulative distribution function is continuous everywhere but horizontal almost everywhere, so it is sometimes referred to as the Devil's staircase, although that term has a more general meaning.

Empirical distribution function

In statistics, an empirical distribution function is the distribution function associated with the empirical measure of a sample. This cumulative distribution function is a step function that jumps up by 1/n at each of the n data points. Its value at any specified value of the measured variable is the fraction of observations of the measured variable that are less than or equal to the specified value.

The empirical distribution function is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution, according to the Glivenko–Cantelli theorem. A number of results exist to quantify the rate of convergence of the empirical distribution function to the underlying cumulative distribution function.
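A minimal NumPy sketch of the construction described above (a step function that rises by 1/n at each of the n sorted data points):

```python
import numpy as np

def ecdf(sample):
    """Return the empirical distribution function of the sample."""
    xs = np.sort(np.asarray(sample))
    n = len(xs)
    def F_hat(x):
        # fraction of observations less than or equal to x
        return np.searchsorted(xs, x, side="right") / n
    return F_hat

F_hat = ecdf([0.2, 0.5, 0.9])
print(F_hat(0.5))   # 2/3: two of the three observations are <= 0.5
```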

Fréchet distribution

The Fréchet distribution, also known as inverse Weibull distribution, is a special case of the generalized extreme value distribution. It has the cumulative distribution function

$\operatorname{P}(X \leq x) = e^{-x^{-\alpha}} \quad \text{if } x > 0,$

where $\alpha > 0$ is a shape parameter. It can be generalised to include a location parameter $m$ (the minimum) and a scale parameter $s > 0$ with the cumulative distribution function

$\operatorname{P}(X \leq x) = e^{-\left(\frac{x - m}{s}\right)^{-\alpha}} \quad \text{if } x > m.$

It is named for Maurice Fréchet, who wrote a related paper in 1927; further work was done by Fisher and Tippett in 1928 and by Gumbel in 1958.
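As a quick sanity check (assuming SciPy, where the Fréchet distribution is exposed as invweibull with shape parameter c playing the role of α):

```python
import math
from scipy.stats import invweibull

alpha, x = 2.0, 1.5
print(math.exp(-x ** (-alpha)))   # CDF formula quoted above, ~0.641
print(invweibull.cdf(x, alpha))   # should agree
```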

Gamma/Gompertz distribution

In probability and statistics, the Gamma/Gompertz distribution is a continuous probability distribution. It has been used as an aggregate-level model of customer lifetime and a model of mortality risks.

Half-logistic distribution

In probability theory and statistics, the half-logistic distribution is a continuous probability distribution—the distribution of the absolute value of a random variable following the logistic distribution. That is, for

$X = |Y|$

where $Y$ is a logistic random variable, $X$ is a half-logistic random variable.
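For $x > 0$, the CDF of $X$ follows in one step from the logistic CDF $F_Y(y) = (1 + e^{-y})^{-1}$; the following standard derivation is added here for clarity:

$F_X(x) = \operatorname{P}(|Y| \leq x) = F_Y(x) - F_Y(-x) = \frac{1}{1 + e^{-x}} - \frac{e^{-x}}{1 + e^{-x}} = \frac{1 - e^{-x}}{1 + e^{-x}}.$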

Joint probability distribution

Given random variables , that are defined on a probability space, the joint probability distribution for is a probability distribution that gives the probability that each of falls in any particular range or discrete set of values specified for that variable. In the case of only two random variables, this is called a bivariate distribution, but the concept generalizes to any number of random variables, giving a multivariate distribution.

The joint probability distribution can be expressed either in terms of a joint cumulative distribution function or in terms of a joint probability density function (in the case of continuous variables) or joint probability mass function (in the case of discrete variables). These in turn can be used to find two other types of distributions: the marginal distribution giving the probabilities for any one of the variables with no reference to any specific ranges of values for the other variables, and the conditional probability distribution giving the probabilities for any subset of the variables conditional on particular values of the remaining variables.

Kumaraswamy distribution

In probability and statistics, the Kumaraswamy's double bounded distribution is a family of continuous probability distributions defined on the interval [0,1]. It is similar to the Beta distribution, but much simpler to use especially in simulation studies due to the simple closed form of both its probability density function and cumulative distribution function. This distribution was originally proposed by Poondi Kumaraswamy for variables that are lower and upper bounded.

Location–scale family

In probability theory, especially in mathematical statistics, a location–scale family is a family of probability distributions parametrized by a location parameter and a non-negative scale parameter. For any random variable $X$ whose probability distribution function belongs to such a family, the distribution function of $Y \stackrel{d}{=} a + bX$ also belongs to the family (where $\stackrel{d}{=}$ means "equal in distribution"—that is, "has the same distribution as"). Moreover, if $X$ and $Y$ are two random variables whose distribution functions are members of the family, and assuming 1) existence of the first two moments and 2) $X$ has zero mean and unit variance, then $Y$ can be written as $Y \stackrel{d}{=} \mu_Y + \sigma_Y X$, where $\mu_Y$ and $\sigma_Y$ are the mean and standard deviation of $Y$.

In other words, a class $\Omega$ of probability distributions is a location–scale family if for all cumulative distribution functions $F \in \Omega$ and any real numbers $a \in \mathbb{R}$ and $b > 0$, the distribution function $G(x) = F(a + bx)$ is also a member of $\Omega$.
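SciPy's loc/scale parameters follow this convention; a small illustrative sketch using the normal family as the example:

```python
# If F is the standard CDF of the family, the member with location a and
# scale b has CDF x -> F((x - a) / b).
from scipy.stats import norm

a, b, x = 1.0, 2.0, 2.5
print(norm.cdf(x, loc=a, scale=b))   # family member evaluated directly
print(norm.cdf((x - a) / b))         # same value via the standard CDF
```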

In decision theory, if all alternative distributions available to a decision-maker are in the same location–scale family, and the first two moments are finite, then a two-moment decision model can apply, and decision-making can be framed in terms of the means and the variances of the distributions.

Logistic distribution

In probability theory and statistics, the logistic distribution is a continuous probability distribution. Its cumulative distribution function is the logistic function, which appears in logistic regression and feedforward neural networks. It resembles the normal distribution in shape but has heavier tails (higher kurtosis). The logistic distribution is a special case of the Tukey lambda distribution.

Normal distribution

In probability theory, the normal (or Gaussian or Gauss or Laplace–Gauss) distribution is a very common continuous probability distribution. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate.

The normal distribution is useful because of the central limit theorem. In its most general form, under some conditions (which include finite variance), it states that averages of samples of observations of random variables independently drawn from independent distributions converge in distribution to the normal, that is, they become normally distributed when the number of observations is sufficiently large. Physical quantities that are expected to be the sum of many independent processes (such as measurement errors) often have distributions that are nearly normal. Moreover, many results and methods (such as propagation of uncertainty and least squares parameter fitting) can be derived analytically in explicit form when the relevant variables are normally distributed.

The normal distribution is sometimes informally called the bell curve. However, many other distributions are bell-shaped (such as the Cauchy, Student's t-, and logistic distributions).

The probability density of the normal distribution is

$f(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}}$

where $\mu$ is the mean or expectation of the distribution, $\sigma$ is the standard deviation, and $\sigma^2$ is the variance.

Probability distribution

In probability theory and statistics, a probability distribution is a mathematical function that provides the probabilities of occurrence of different possible outcomes in an experiment. In more technical terms, the probability distribution is a description of a random phenomenon in terms of the probabilities of events. For instance, if the random variable X is used to denote the outcome of a coin toss ("the experiment"), then the probability distribution of X would take the value 0.5 for X = heads, and 0.5 for X = tails (assuming the coin is fair). Examples of random phenomena can include the results of an experiment or survey.

A probability distribution is specified in terms of an underlying sample space, which is the set of all possible outcomes of the random phenomenon being observed. The sample space may be the set of real numbers or a set of vectors, or it may be a list of non-numerical values; for example, the sample space of a coin flip would be {heads, tails} .

Probability distributions are generally divided into two classes. A discrete probability distribution (applicable to the scenarios where the set of possible outcomes is discrete, such as a coin toss or a roll of dice) can be encoded by a discrete list of the probabilities of the outcomes, known as a probability mass function. On the other hand, a continuous probability distribution (applicable to the scenarios where the set of possible outcomes can take on values in a continuous range (e.g. real numbers), such as the temperature on a given day) is typically described by probability density functions (with the probability of any individual outcome actually being 0). The normal distribution is a commonly encountered continuous probability distribution. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures.

A probability distribution whose sample space is one-dimensional (for example real numbers, list of labels, ordered labels or binary) is called univariate, while a distribution whose sample space is a vector space of dimension 2 or more is called multivariate. A univariate distribution gives the probabilities of a single random variable taking on various alternative values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector – a list of two or more random variables – taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. The multivariate normal distribution is a commonly encountered multivariate distribution.

Q-Weibull distribution

In statistics, the q-Weibull distribution is a probability distribution that generalizes the Weibull distribution and the Lomax distribution (Pareto Type II). It is one example of a Tsallis distribution.

Quantile function

In probability and statistics, the quantile function, associated with a probability distribution of a random variable, specifies the value of the random variable such that the probability of the variable being less than or equal to that value equals the given probability. It is also called the percent-point function or inverse cumulative distribution function.

Survival function

The survival function is a function that gives the probability that a patient, device, or other object of interest will survive beyond any given specified time. The survival function is also known as the survivor function or reliability function. The term reliability function is common in engineering, while the term survival function is used in a broader range of applications, including human mortality. Another name for the survival function is the complementary cumulative distribution function.

Type-2 Gumbel distribution

In probability theory, the Type-2 Gumbel probability density function is

$f(x \mid a, b) = a b\, x^{-a-1} e^{-b x^{-a}}$

for $0 < x < \infty$.

This implies that it is similar to the Weibull distributions, substituting $b = \lambda^{-k}$ and $a = -k$. Note, however, that a positive $k$ (as in the Weibull distribution) would yield a negative $a$, which is not allowed here as it would yield a negative probability density.

For $0 < a \leq 1$ the mean is infinite. For $0 < a \leq 2$ the variance is infinite.

The cumulative distribution function is

$F(x \mid a, b) = e^{-b x^{-a}}.$

The moments $\operatorname{E}[X^k]$ exist for $k < a$.

The special case b = 1 yields the Fréchet distribution.

Based on The GNU Scientific Library, used under GFDL.

Uniform distribution (continuous)

In probability theory and statistics, the continuous uniform distribution or rectangular distribution is a family of symmetric probability distributions such that for each member of the family, all intervals of the same length on the distribution's support are equally probable. The support is defined by the two parameters, a and b, which are its minimum and maximum values. The distribution is often abbreviated U(a,b). It is the maximum entropy probability distribution for a random variable X under no constraint other than that it is contained in the distribution's support.

Weibull distribution

In probability theory and statistics, the Weibull distribution is a continuous probability distribution. It is named after Swedish mathematician Waloddi Weibull, who described it in detail in 1951, although it was first identified by Fréchet (1927) and first applied by Rosin & Rammler (1933) to describe a particle size distribution.

This page is based on a Wikipedia article.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.