Cumulant

In probability theory and statistics, the cumulants κn of a probability distribution are a set of quantities that provide an alternative to the moments of the distribution. The moments determine the cumulants in the sense that any two probability distributions whose moments are identical will have identical cumulants as well, and similarly the cumulants determine the moments.

The first cumulant is the mean, the second cumulant is the variance, and the third cumulant is the same as the third central moment. But fourth and higher-order cumulants are not equal to central moments. In some cases theoretical treatments of problems in terms of cumulants are simpler than those using moments. In particular, when two or more random variables are statistically independent, the nth-order cumulant of their sum is equal to the sum of their nth-order cumulants. In addition, the third and higher-order cumulants of a normal distribution are zero, and it is the only distribution with this property.

Just as for moments, where joint moments are used for collections of random variables, it is possible to define joint cumulants.

Definition

The cumulants of a random variable X are defined using the cumulant-generating function K(t), which is the natural logarithm of the moment-generating function:

K(t) = log E[e^{tX}].

The cumulants κn are obtained from a power series expansion of the cumulant generating function:

K(t) = ∑_{n=1}^{∞} κn t^n/n! = κ1 t + κ2 t^2/2! + κ3 t^3/3! + ⋯ .

This expansion is a Maclaurin series, so the n-th cumulant can be obtained by differentiating the above expansion n times and evaluating the result at zero:[1]

κn = K^(n)(0).

If the moment-generating function does not exist, the cumulants can be defined in terms of the relationship between cumulants and moments discussed later.
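
As a quick illustration of this definition (a sketch, not from the article), the following sympy snippet recovers the cumulants of a Poisson distribution by differentiating the logarithm of its moment-generating function; the symbol mu denotes the Poisson mean.

```python
# Illustrative only: cumulants as derivatives of K(t) = log M(t) at t = 0.
import sympy as sp

t, mu = sp.symbols('t mu', positive=True)
M = sp.exp(mu * (sp.exp(t) - 1))   # moment-generating function of Poisson(mu)
K = sp.log(M)                      # cumulant-generating function

for n in range(1, 5):
    kappa_n = sp.simplify(sp.diff(K, t, n).subs(t, 0))
    print(n, kappa_n)              # prints mu for every n, as expected for Poisson
```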

Alternative definition of the cumulant generating function

Some writers[2][3] prefer to define the cumulant-generating function as the natural logarithm of the characteristic function, which is sometimes also called the second characteristic function,[4][5]

H(t) = log E[e^{itX}] = ∑_{n=1}^{∞} κn (it)^n/n!.

An advantage of H(t)—in some sense the function K(t) evaluated for purely imaginary arguments—is that E(eitX) is well defined for all real values of t even when E(etX) is not well defined for all real values of t, such as can occur when there is "too much" probability that X has a large magnitude. Although the function H(t) will be well defined, it will nonetheless mimic K(t) in terms of the length of its Maclaurin series, which may not extend beyond (or, rarely, even to) linear order in the argument t, and in particular the number of cumulants that are well defined will not change. Nevertheless, even when H(t) does not have a long Maclaurin series, it can be used directly in analyzing and, particularly, adding random variables. Both the Cauchy distribution (also called the Lorentzian) and more generally, stable distributions (related to the Lévy distribution) are examples of distributions for which the power-series expansions of the generating functions have only finitely many well-defined terms.

Uses in statistics

Working with cumulants can have an advantage over using moments because for statistically independent random variables X and Y,

K_{X+Y}(t) = log E[e^{t(X+Y)}] = log(E[e^{tX}] E[e^{tY}]) = K_X(t) + K_Y(t),

so that each cumulant of a sum of independent random variables is the sum of the corresponding cumulants of the addends. That is, when the addends are statistically independent, the mean of the sum is the sum of the means, the variance of the sum is the sum of the variances, the third cumulant (which happens to be the third central moment) of the sum is the sum of the third cumulants, and so on for each order of cumulant.
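
The following numerical sketch (an illustration, not part of the article) checks this additivity with sample cumulants: scipy.stats.kstat computes k-statistics, which are unbiased estimators of the first four cumulants.

```python
# Illustrative check that cumulants of a sum of independent variables add.
import numpy as np
from scipy.stats import kstat

rng = np.random.default_rng(0)
x = rng.poisson(3.0, size=200_000)       # independent samples
y = rng.exponential(2.0, size=200_000)

for n in (1, 2, 3, 4):
    lhs = kstat(x + y, n)                # cumulant estimate for the sum
    rhs = kstat(x, n) + kstat(y, n)      # sum of the individual estimates
    print(n, round(lhs, 3), round(rhs, 3))   # agree up to sampling error
```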

A distribution with given cumulants κn can be approximated through an Edgeworth series.

Cumulants of some discrete probability distributions

  • The constant random variables X = μ. The cumulant generating function is K(t) = μt. The first cumulant is κ1 = K′(0) = μ and the other cumulants are zero, κ2 = κ3 = κ4 = ... = 0.
  • The Bernoulli distributions (number of successes in one trial with probability p of success). The cumulant generating function is K(t) = log(1 − p + pe^t). The first cumulants are κ1 = K′(0) = p and κ2 = K″(0) = p(1 − p). The cumulants satisfy the recursion formula κ_{n+1} = p(1 − p) dκn/dp (see the sympy check after this list).
  • The geometric distributions (number of failures before one success with probability p of success on each trial). The cumulant generating function is K(t) = log(p / (1 + (p − 1)e^t)). The first cumulants are κ1 = K′(0) = p^{−1} − 1 and κ2 = K″(0) = κ1 p^{−1}. Substituting p = (μ + 1)^{−1} gives K(t) = −log(1 + μ(1 − e^t)) and κ1 = μ.
  • The Poisson distributions. The cumulant generating function is K(t) = μ(e^t − 1). All cumulants are equal to the parameter: κ1 = κ2 = κ3 = ... = μ.
  • The binomial distributions (number of successes in n independent trials with probability p of success on each trial). The special case n = 1 is a Bernoulli distribution. Every cumulant is just n times the corresponding cumulant of the corresponding Bernoulli distribution. The cumulant generating function is K(t) = n log(1 − p + pe^t). The first cumulants are κ1 = K′(0) = np and κ2 = K″(0) = κ1(1 − p). Substituting p = μ·n^{−1} gives K′(t) = ((μ^{−1} − n^{−1})e^{−t} + n^{−1})^{−1} and κ1 = μ. The limiting case n^{−1} = 0 is a Poisson distribution.
  • The negative binomial distributions (number of failures before n successes with probability p of success on each trial). The special case n = 1 is a geometric distribution. Every cumulant is just n times the corresponding cumulant of the corresponding geometric distribution. The derivative of the cumulant generating function is K′(t) = n((1 − p)^{−1}e^{−t} − 1)^{−1}. The first cumulants are κ1 = K′(0) = n(p^{−1} − 1) and κ2 = K″(0) = κ1 p^{−1}. Substituting p = (μ·n^{−1} + 1)^{−1} gives K′(t) = ((μ^{−1} + n^{−1})e^{−t} − n^{−1})^{−1} and κ1 = μ. Comparing these formulas to those of the binomial distributions explains the name 'negative binomial distribution'. The limiting case n^{−1} = 0 is a Poisson distribution.
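
A small symbolic check of the Bernoulli recursion stated in the list above (a sketch using only the cgf given there; everything else is illustrative scaffolding):

```python
# Verify kappa_{n+1} = p*(1 - p)*d(kappa_n)/dp for the Bernoulli cgf.
import sympy as sp

t, p = sp.symbols('t p')
K = sp.log(1 - p + p * sp.exp(t))

kappa = [sp.simplify(sp.diff(K, t, n).subs(t, 0)) for n in range(1, 6)]
for n in range(len(kappa) - 1):
    residual = sp.simplify(p * (1 - p) * sp.diff(kappa[n], p) - kappa[n + 1])
    print(n + 1, residual)    # prints 0 for each n, confirming the recursion
```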

Introducing the variance-to-mean ratio

ε = σ^2/μ = κ2/κ1,

the above probability distributions get a unified formula for the derivative of the cumulant generating function:

K′(t) = μ(1 + ε(e^{−t} − 1))^{−1}.

The second derivative is

K″(t) = με e^{−t}(1 + ε(e^{−t} − 1))^{−2},

confirming that the first cumulant is κ1 = K′(0) = μ and the second cumulant is κ2 = K″(0) = με. The constant random variables X = μ have ε = 0. The binomial distributions have ε = 1 − p so that 0 < ε < 1. The Poisson distributions have ε = 1. The negative binomial distributions have ε = p^{−1} so that ε > 1. Note the analogy to the classification of conic sections by eccentricity: circles ε = 0, ellipses 0 < ε < 1, parabolas ε = 1, hyperbolas ε > 1.
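
A brief symbolic check (illustration only, using the unified derivative reconstructed above) that K′(0) = μ and K″(0) = με:

```python
# Illustrative check of the unified derivative K'(t) = mu / (1 + eps*(exp(-t) - 1)).
import sympy as sp

t, mu, eps = sp.symbols('t mu epsilon')
Kprime = mu / (1 + eps * (sp.exp(-t) - 1))

print(sp.simplify(Kprime.subs(t, 0)))              # mu          -> kappa_1
print(sp.simplify(sp.diff(Kprime, t).subs(t, 0)))  # mu*epsilon  -> kappa_2
```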

Cumulants of some continuous probability distributions

  • For the normal distribution with expected value μ and variance σ^2, the cumulant generating function is K(t) = μt + σ^2t^2/2. The first and second derivatives of the cumulant generating function are K′(t) = μ + σ^2·t and K″(t) = σ^2. The cumulants are κ1 = μ, κ2 = σ^2, and κ3 = κ4 = ... = 0. The special case σ^2 = 0 is a constant random variable X = μ.
  • The cumulants of the uniform distribution on the interval [−1, 0] are κn = Bn/n, where Bn is the n-th Bernoulli number.
  • The cumulants of the exponential distribution with rate parameter λ are κn = λ^{−n}(n − 1)!.
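
As a sanity check of the exponential-distribution entry (a sketch assuming the cgf K(t) = −log(1 − t/λ) for rate parameter λ):

```python
# Illustrative check that kappa_n = (n - 1)! / lambda**n for the exponential distribution.
import sympy as sp

t, lam = sp.symbols('t lam', positive=True)
K = -sp.log(1 - t / lam)          # cgf of the exponential distribution with rate lam

for n in range(1, 6):
    kappa_n = sp.simplify(sp.diff(K, t, n).subs(t, 0))
    print(n, kappa_n, sp.factorial(n - 1) / lam**n)   # the two expressions agree
```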

Some properties of the cumulant generating function

The cumulant generating function K(t), if it exists, is infinitely differentiable and convex, and passes through the origin. Its first derivative ranges monotonically in the open interval from the infimum to the supremum of the support of the probability distribution, and its second derivative is strictly positive everywhere it is defined, except for the degenerate distribution of a single point mass. The cumulant-generating function exists if and only if the tails of the distribution are majorized by an exponential decay, that is (see Big O notation),

F(x) = O(e^{cx}) as x → −∞, and 1 − F(x) = O(e^{dx}) as x → +∞, for some constants c > 0 and d < 0,

where F is the cumulative distribution function. The cumulant-generating function will have vertical asymptote(s) at the infimum of such c, if such an infimum exists, and at the supremum of such d, if such a supremum exists; otherwise it will be defined for all real numbers.

If the support of a random variable X has finite upper or lower bounds, then its cumulant-generating function y = K(t), if it exists, approaches asymptote(s) whose slope is equal to the supremum and/or infimum of the support,

respectively, lying above both these lines everywhere. (The integrals

yield the y-intercepts of these asymptotes, since K(0) = 0.)

For a shift of the distribution by c, K_{X+c}(t) = K_X(t) + ct. For a degenerate point mass at c, the cgf is the straight line K(t) = ct, and more generally, K_{X+Y}(t) = K_X(t) + K_Y(t) if and only if X and Y are independent and their cgfs exist; (subindependence and the existence of second moments sufficing to imply independence.[6])

The natural exponential family of a distribution may be realized by shifting or translating K(t), and adjusting it vertically so that it always passes through the origin: if f is the pdf with cgf K(t) = log M(t), and f|θ is its natural exponential family, then f|θ(x) = (1/M(θ)) e^{θx} f(x), and K|θ(t) = K(t + θ) − K(θ).

If K(t) is finite for a range t1 < Re(t) < t2 with t1 < 0 < t2, then K(t) is analytic and infinitely differentiable for t1 < Re(t) < t2. Moreover, for t real and t1 < t < t2, K(t) is strictly convex and K′(t) is strictly increasing.
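
A numeric sketch (not from the article) of these properties for the exponential distribution with rate 1, whose cgf K(t) = −log(1 − t) is finite only for t < 1:

```python
# Illustrative: convexity of K, monotone K', and the vertical asymptote at t = 1.
import numpy as np

t = np.linspace(-3.0, 0.999, 400)
K = -np.log(1.0 - t)              # cgf of Exp(1), defined for t < 1
Kprime = 1.0 / (1.0 - t)          # first derivative, strictly increasing
Ksecond = 1.0 / (1.0 - t) ** 2    # second derivative, strictly positive

assert np.all(np.diff(Kprime) > 0)    # K' increasing, hence K convex
assert np.all(Ksecond > 0)
print(K[-1], Kprime[-1])              # both grow without bound as t -> 1
```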

Some properties of cumulants

Invariance and equivariance

The first cumulant is shift-equivariant; all of the others are shift-invariant. This means that, if we denote by κn(X) the n-th cumulant of the probability distribution of the random variable X, then for any constant c:

κ1(X + c) = κ1(X) + c, and
κn(X + c) = κn(X) for n ≥ 2.

In other words, shifting a random variable (adding c) shifts the first cumulant (the mean) and doesn't affect any of the others.

Homogeneity

The n-th cumulant is homogeneous of degree n, i.e. if c is any constant, then

κn(cX) = c^n κn(X).

Additivity

If X and Y are independent random variables then κn(X + Y) = κn(X) + κn(Y).
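
The following sympy sketch illustrates the shift and homogeneity rules, assuming only that the cgf of cX is K(ct) and the cgf of X + c is K(t) + ct; the gamma cgf below is an arbitrary illustrative choice.

```python
# Illustrative check of shift-invariance and homogeneity of degree n.
import sympy as sp

t, c, k, theta = sp.symbols('t c k theta', positive=True)
K = -k * sp.log(1 - theta * t)    # cgf of a gamma distribution (example only)

def cumulant(expr, n):
    return sp.simplify(sp.diff(expr, t, n).subs(t, 0))

for n in range(1, 5):
    scaled = cumulant(K.subs(t, c * t), n)    # cumulants of c*X
    shifted = cumulant(K + c * t, n)          # cumulants of X + c
    print(n,
          sp.simplify(scaled - c**n * cumulant(K, n)),   # 0: homogeneity of degree n
          sp.simplify(shifted - cumulant(K, n)))         # c for n = 1, else 0: shift rule
```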

A negative result

Given the results for the cumulants of the normal distribution, it might be hoped to find families of distributions for which κm = κm+1 = ⋯ = 0 for some m > 3, with the lower-order cumulants (orders 3 to m − 1) being non-zero. There are no such distributions.[7] The underlying result here is that the cumulant generating function cannot be a finite-order polynomial of degree greater than 2.

Cumulants and moments

The moment generating function is given by:

M(t) = E[e^{tX}] = 1 + ∑_{n=1}^{∞} μ′n t^n/n!.

So the cumulant generating function is the logarithm of the moment generating function,

K(t) = log M(t).

The first cumulant is the expected value; the second and third cumulants are respectively the second and third central moments (the second central moment is the variance); but the higher cumulants are neither moments nor central moments, but rather more complicated polynomial functions of the moments.

The moments can be recovered in terms of cumulants by evaluating the n-th derivative of e^{K(t)} at t = 0,

μ′n = M^(n)(0) = (d^n/dt^n) e^{K(t)} |_{t=0}.

Likewise, the cumulants can be recovered in terms of moments by evaluating the n-th derivative of log M(t) at t = 0,

κn = K^(n)(0) = (d^n/dt^n) log M(t) |_{t=0}.

The explicit expression for the n-th moment in terms of the first n cumulants, and vice versa, can be obtained by using Faà di Bruno's formula for higher derivatives of composite functions. In general, we have

μ′n = ∑_{k=1}^{n} Bn,k(κ1, ..., κ_{n−k+1}),
κn = ∑_{k=1}^{n} (−1)^{k−1} (k − 1)! Bn,k(μ′1, ..., μ′_{n−k+1}),

where Bn,k are incomplete (or partial) Bell polynomials.

In like manner, if the mean is given by μ = E[X], the central moment generating function is given by

C(t) = E[e^{t(X − μ)}] = e^{−μt} M(t) = exp(K(t) − μt),

and the n-th central moment is obtained in terms of cumulants as

μn = C^(n)(0) = (d^n/dt^n) exp(K(t) − μt) |_{t=0}.

Also, for n > 1, the n-th cumulant in terms of the central moments is

κn = K^(n)(0) = (d^n/dt^n) (log C(t) + μt) |_{t=0} = (d^n/dt^n) log C(t) |_{t=0}.

The n-th moment μ′n is an n-th-degree polynomial in the first n cumulants. The first few expressions are:

μ′1 = κ1
μ′2 = κ2 + κ1^2
μ′3 = κ3 + 3κ2κ1 + κ1^3
μ′4 = κ4 + 4κ3κ1 + 3κ2^2 + 6κ2κ1^2 + κ1^4
μ′5 = κ5 + 5κ4κ1 + 10κ3κ2 + 10κ3κ1^2 + 15κ2^2κ1 + 10κ2κ1^3 + κ1^5

The "prime" distinguishes the moments μ′n from the central moments μn. To express the central moments as functions of the cumulants, just drop from these polynomials all terms in which κ1 appears as a factor:

μ1 = 0
μ2 = κ2
μ3 = κ3
μ4 = κ4 + 3κ2^2
μ5 = κ5 + 10κ3κ2

Similarly, the n-th cumulant κn is an n-th-degree polynomial in the first n non-central moments. The first few expressions are:

κ1 = μ′1
κ2 = μ′2 − μ′1^2
κ3 = μ′3 − 3μ′2μ′1 + 2μ′1^3
κ4 = μ′4 − 4μ′3μ′1 − 3μ′2^2 + 12μ′2μ′1^2 − 6μ′1^4

To express the cumulants κn for n > 1 as functions of the central moments, drop from these polynomials all terms in which μ′1 appears as a factor:

κ2 = μ2
κ3 = μ3
κ4 = μ4 − 3μ2^2

To express the cumulants κn for n > 2 as functions of the standardized central moments, also set μ2 = 1 in the polynomials:

κ3 = μ3
κ4 = μ4 − 3

The cumulants are also related to the moments by the following recursion formula:

κn = μ′n − ∑_{m=1}^{n−1} (n−1 choose m−1) κm μ′_{n−m}.

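A minimal sketch implementing this recursion (the helper name cumulants_from_moments is illustrative, not from the article). The raw moments of the Poisson distribution with mean 1, which are the Bell numbers, serve as a check, since all cumulants of that distribution equal 1.

```python
# Illustrative: convert raw moments mu[1..N] (with mu[0] = 1) into cumulants.
from math import comb

def cumulants_from_moments(mu):
    kappa = [0.0] * len(mu)
    for n in range(1, len(mu)):
        kappa[n] = mu[n] - sum(comb(n - 1, m - 1) * kappa[m] * mu[n - m]
                               for m in range(1, n))
    return kappa

# Moments of Poisson(1) are the Bell numbers; its cumulants are all 1.
print(cumulants_from_moments([1, 1, 2, 5, 15, 52]))   # [0.0, 1, 1, 1, 1, 1]
```
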
Cumulants and set-partitions

These polynomials have a remarkable combinatorial interpretation: the coefficients count certain partitions of sets. A general form of these polynomials is

μ′n = ∑_π ∏_{B ∈ π} κ_{|B|}

where

  • π runs through the list of all partitions of a set of size n;
  • "B ∈ π" means B is one of the "blocks" into which the set is partitioned; and
  • |B| is the size of the set B.

Thus each monomial is a constant times a product of cumulants in which the sum of the indices is n (e.g., in the term κ3 κ2^2 κ1, the sum of the indices is 3 + 2 + 2 + 1 = 8; this appears in the polynomial that expresses the 8th moment as a function of the first eight cumulants). Each term corresponds to a partition of the integer n. The coefficient in each term is the number of partitions of a set of n members that collapse to that partition of the integer n when the members of the set become indistinguishable.
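
An illustrative sketch of this set-partition formula, enumerating the partitions with sympy's multiset_partitions (the function name below is hypothetical):

```python
# Illustrative: the n-th raw moment as a sum over set partitions of products of cumulants.
import sympy as sp
from sympy.utilities.iterables import multiset_partitions

kappa = sp.symbols('kappa1:7')        # formal cumulants kappa1 ... kappa6

def moment_from_cumulants(n):
    total = 0
    for partition in multiset_partitions(list(range(n))):
        term = 1
        for block in partition:
            term *= kappa[len(block) - 1]    # one factor kappa_{|B|} per block B
        total += term
    return sp.expand(total)

print(moment_from_cumulants(3))   # kappa1**3 + 3*kappa1*kappa2 + kappa3
```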

Cumulants and combinatorics

Further connection between cumulants and combinatorics can be found in the work of Gian-Carlo Rota and Jianhong (Jackie) Shen, where links to invariant theory, symmetric functions, and binomial sequences are studied via umbral calculus.[8]

Joint cumulants

The joint cumulant of several random variables X1, ..., Xn is defined by a similar cumulant generating function

K(t1, ..., tn) = log E[exp(t1X1 + ⋯ + tnXn)].

A consequence is that

κ(X1, ..., Xn) = ∑_π (|π| − 1)! (−1)^{|π|−1} ∏_{B ∈ π} E(∏_{i ∈ B} Xi)

where π runs through the list of all partitions of { 1, ..., n }, B runs through the list of all blocks of the partition π, and |π| is the number of parts in the partition. For example,

κ(X) = E(X),
κ(X, Y) = E(XY) − E(X)E(Y),
κ(X, Y, Z) = E(XYZ) − E(XY)E(Z) − E(XZ)E(Y) − E(YZ)E(X) + 2E(X)E(Y)E(Z).

If any of these random variables are identical, e.g. if X = Y, then the same formulae apply, e.g.

κ(X, X, Z) = E(X^2 Z) − 2E(XZ)E(X) − E(X^2)E(Z) + 2E(X)^2 E(Z),

although for such repeated variables there are more concise formulae. For zero-mean random vectors,

κ(X, Y) = E(XY),
κ(X, Y, Z) = E(XYZ),
κ(X, Y, Z, W) = E(XYZW) − E(XY)E(ZW) − E(XZ)E(YW) − E(XW)E(YZ).

The joint cumulant of just one random variable is its expected value, and that of two random variables is their covariance. If some of the random variables are independent of all of the others, then any cumulant involving two (or more) independent random variables is zero. If all n random variables are the same, then the joint cumulant is the n-th ordinary cumulant.

The combinatorial meaning of the expression of moments in terms of cumulants is easier to understand than that of cumulants in terms of moments:

E(X1 ⋯ Xn) = ∑_π ∏_{B ∈ π} κ(Xi : i ∈ B).

For example:

E(XYZ) = κ(X, Y, Z) + κ(X, Y)κ(Z) + κ(X, Z)κ(Y) + κ(Y, Z)κ(X) + κ(X)κ(Y)κ(Z).

Another important property of joint cumulants is multilinearity:

κ(X + W, Z1, Z2, ...) = κ(X, Z1, Z2, ...) + κ(W, Z1, Z2, ...).

Just as the second cumulant is the variance, the joint cumulant of just two random variables is the covariance. The familiar identity

var(X + Y) = var(X) + 2 cov(X, Y) + var(Y)

generalizes to cumulants:

κn(X + Y) = ∑_{j=0}^{n} (n choose j) κ(X, ..., X, Y, ..., Y)   (with j copies of X and n − j copies of Y).
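
A numerical sketch (illustration only, with hypothetical helper names) that estimates the third joint cumulant from samples via the n = 3 formula above, and checks that it vanishes, up to sampling error, when one variable is independent of the others:

```python
# Illustrative estimator of kappa(X, Y, Z) from samples, using the partition formula.
import numpy as np

def joint_cumulant3(x, y, z):
    E = lambda a: float(np.mean(a))
    return (E(x * y * z) - E(x * y) * E(z) - E(x * z) * E(y)
            - E(y * z) * E(x) + 2 * E(x) * E(y) * E(z))

rng = np.random.default_rng(1)
x = rng.normal(size=500_000)
y = x + rng.normal(size=500_000)       # dependent on x
z = rng.gamma(2.0, size=500_000)       # independent of both

print(joint_cumulant3(x, y, z))        # ~ 0, since z is independent of x and y
print(joint_cumulant3(x, y, x * y))    # generally nonzero
```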

Conditional cumulants and the law of total cumulance

The law of total expectation and the law of total variance generalize naturally to conditional cumulants. The case n = 3, expressed in the language of (central) moments rather than that of cumulants, says

μ3(X) = E(μ3(X ∣ Y)) + μ3(E(X ∣ Y)) + 3 cov(E(X ∣ Y), var(X ∣ Y)).

In general,[9]

κ(X1, ..., Xn) = ∑_π κ( κ(Xπ1 ∣ Y), ..., κ(Xπb ∣ Y) )

where

  • the sum is over all partitions π of the set { 1, ..., n } of indices, and
  • π1, ..., πb are all of the "blocks" of the partition π; the expression κ(Xπm ∣ Y) indicates the joint cumulant, given Y, of the random variables whose indices are in that block of the partition.

Relation to statistical physics

In statistical physics many extensive quantities – that is, quantities that are proportional to the volume or size of a given system – are related to cumulants of random variables. The deep connection is that in a large system an extensive quantity like the energy or number of particles can be thought of as the sum of (say) the energy associated with a number of nearly independent regions. The fact that the cumulants of these nearly independent random variables will (nearly) add makes it reasonable that extensive quantities should be expected to be related to cumulants.

A system in equilibrium with a thermal bath at temperature T can occupy states of energy E. The energy E can be considered a random variable, having a probability density. The partition function of the system is

Z(β) = ∑_i exp(−βEi),

where β = 1/(kT), k is Boltzmann's constant, and the notation ⟨·⟩ has been used rather than E[·] for the expectation value to avoid confusion with the energy, E. The Helmholtz free energy is then

F(β) = −β^{−1} log Z(β),

and is clearly very closely related to the cumulant generating function for the energy. The free energy gives access to all of the thermodynamic properties of the system via its first, second and higher-order derivatives, such as its internal energy, entropy, and specific heat. Because of the relationship between the free energy and the cumulant generating function, all these quantities are related to cumulants, e.g. the energy and specific heat are given by

⟨E⟩ = −∂ log Z/∂β   and   C = ∂⟨E⟩/∂T = ⟨⟨E^2⟩⟩/(kT^2),

where ⟨⟨E^2⟩⟩ = ⟨E^2⟩ − ⟨E⟩^2 symbolizes the second cumulant of the energy. The free energy is often also a function of other variables such as the magnetic field or chemical potential μ, e.g.

Ω = −β^{−1} log ∑_i exp(−β(Ei − μNi)),

where N is the number of particles and Ω is the grand potential. Again the close relationship between the definition of the free energy and the cumulant generating function implies that various derivatives of this free energy can be written in terms of joint cumulants of E and N.
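
A toy sketch (not from the article) making the connection concrete for a two-level system with energies 0 and ε: the mean energy and heat capacity follow from the first two cumulants of the energy, i.e. from derivatives of log Z with respect to β.

```python
# Illustrative: <E> and C from the first two derivatives of log Z for a two-level system.
import sympy as sp

beta, e, kB, T = sp.symbols('beta epsilon k_B T', positive=True)
Z = 1 + sp.exp(-beta * e)                      # partition function
logZ = sp.log(Z)

mean_E = sp.simplify(-sp.diff(logZ, beta))     # first cumulant of the energy
var_E = sp.simplify(sp.diff(logZ, beta, 2))    # second cumulant of the energy

C = sp.simplify((var_E / (kB * T**2)).subs(beta, 1 / (kB * T)))   # heat capacity
print(mean_E)
print(C)
```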

History

The history of cumulants is discussed by Anders Hald.[10][11]

Cumulants were first introduced by Thorvald N. Thiele, in 1889, who called them semi-invariants.[12] They were first called cumulants in a 1932 paper[13] by Ronald Fisher and John Wishart. Fisher was publicly reminded of Thiele's work by Neyman, who also noted previously published citations of Thiele that had been brought to Fisher's attention.[14] Stephen Stigler has said that the name cumulant was suggested to Fisher in a letter from Harold Hotelling. In a paper published in 1929,[15] Fisher had called them cumulative moment functions. The partition function in statistical physics was introduced by Josiah Willard Gibbs in 1901. The free energy is often called Gibbs free energy. In statistical mechanics, cumulants are also known as Ursell functions, relating to a publication in 1927.

Cumulants in generalized settings

Formal cumulants

More generally, the cumulants of a sequence { mn : n = 1, 2, 3, ... }, not necessarily the moments of any probability distribution, are, by definition,

1 + ∑_{n=1}^{∞} mn t^n/n! = exp( ∑_{n=1}^{∞} κn t^n/n! ),

where the values of κn for n = 1, 2, 3, ... are found formally, i.e., by algebra alone, in disregard of questions of whether any series converges. All of the difficulties of the "problem of cumulants" are absent when one works formally. The simplest example is that the second cumulant of a probability distribution must always be nonnegative, and is zero only if all of the higher cumulants are zero. Formal cumulants are subject to no such constraints.

Bell numbers

In combinatorics, the n-th Bell number is the number of partitions of a set of size n. All of the cumulants of the sequence of Bell numbers are equal to 1. The Bell numbers are the moments of the Poisson distribution with expected value 1.
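
A quick symbolic check (illustration only): the raw moments of the Poisson distribution with mean 1, read off from its moment-generating function, are indeed the Bell numbers.

```python
# Illustrative: moments of Poisson(1) are the Bell numbers 1, 1, 2, 5, 15, 52, 203, ...
import sympy as sp

t = sp.symbols('t')
M = sp.exp(sp.exp(t) - 1)      # moment-generating function of Poisson(1)
moments = [sp.simplify(sp.diff(M, t, n).subs(t, 0)) for n in range(7)]
print(moments)                 # [1, 1, 2, 5, 15, 52, 203]
```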

Cumulants of a polynomial sequence of binomial type

For any sequence { κn : n = 1, 2, 3, ... } of scalars in a field of characteristic zero, being considered formal cumulants, there is a corresponding sequence { μ′n : n = 1, 2, 3, ... } of formal moments, given by the polynomials above. For those polynomials, construct a polynomial sequence in the following way. Out of the polynomial

make a new polynomial in these plus one additional variable x:

and then generalize the pattern. The pattern is that the numbers of blocks in the aforementioned partitions are the exponents on x. Each coefficient is a polynomial in the cumulants; these are the Bell polynomials, named after Eric Temple Bell.

This sequence of polynomials is of binomial type. In fact, no other sequences of binomial type exist; every polynomial sequence of binomial type is completely determined by its sequence of formal cumulants.

Free cumulants

In the above moment-cumulant formula

E(X1 ⋯ Xn) = ∑_π ∏_{B ∈ π} κ(Xi : i ∈ B)

for joint cumulants, one sums over all partitions of the set { 1, ..., n }. If instead, one sums only over the noncrossing partitions, then, by solving these formulae for the cumulants in terms of the moments, one gets free cumulants rather than the conventional cumulants treated above. These free cumulants were introduced by Roland Speicher[16] and play a central role in free probability theory.[17] In that theory, rather than considering independence of random variables, defined in terms of tensor products of algebras of random variables, one considers instead free independence of random variables, defined in terms of free products of algebras.[17]

The ordinary cumulants of degree higher than 2 of the normal distribution are zero. The free cumulants of degree higher than 2 of the Wigner semicircle distribution are zero.[17] This is one respect in which the role of the Wigner distribution in free probability theory is analogous to that of the normal distribution in conventional probability theory.

References

  1. ^ Weisstein, Eric W. "Cumulant". From MathWorld – A Wolfram Web Resource. http://mathworld.wolfram.com/Cumulant.html
  2. ^ Kendall, M. G., Stuart, A. (1969) The Advanced Theory of Statistics, Volume 1 (3rd Edition). Griffin, London. (Section 3.12)
  3. ^ Lukacs, E. (1970) Characteristic Functions (2nd Edition). Griffin, London. (Page 27)
  4. ^ Lukacs, E. (1970) Characteristic Functions (2nd Edition). Griffin, London. (Section 2.4)
  5. ^ Aapo Hyvarinen, Juha Karhunen, and Erkki Oja (2001) Independent Component Analysis, John Wiley & Sons. (Section 2.7.2)
  6. ^ Hamedani, G. G.; Volkmer, Hans; Behboodian, J. (2012-03-01). "A note on sub-independent random variables and a class of bivariate mixtures". Studia Scientiarum Mathematicarum Hungarica. 49 (1): 19–25. doi:10.1556/SScMath.2011.1183.
  7. ^ Lukacs, E. (1970) Characteristic Functions (2nd Edition), Griffin, London. (Theorem 7.3.5)
  8. ^ Rota, G.-C.; Shen, J. (2000). "On the Combinatorics of Cumulants". Journal of Combinatorial Theory. Series A. 91 (1–2): 283–304. doi:10.1006/jcta.1999.3017.
  9. ^ Brillinger, D.R. (1969). "The Calculation of Cumulants via Conditioning". Annals of the Institute of Statistical Mathematics. 21: 215–218. doi:10.1007/bf02532246.
  10. ^ Hald, A. (2000) "The early history of the cumulants and the Gram–Charlier series", International Statistical Review, 68 (2): 137–153. (Reprinted in Steffen L. Lauritzen, ed. (2002). Thiele: Pioneer in Statistics. Oxford University Press. ISBN 978-0-19-850972-1.)
  11. ^ Hald, Anders (1998). A History of Mathematical Statistics from 1750 to 1930. New York: Wiley. ISBN 0-471-17912-4.
  12. ^ H. Cramér (1946) Mathematical Methods of Statistics, Princeton University Press, Section 15.10, p. 186.
  13. ^ Fisher, R. A.; Wishart, J. (1932). "The derivation of the pattern formulae of two-way partitions from those of simpler patterns". Proceedings of the London Mathematical Society, Series 2, 33: 195–208. doi:10.1112/plms/s2-33.1.195.
  14. ^ Neyman, J. (1956): ‘Note on an Article by Sir Ronald Fisher,’ Journal of the Royal Statistical Society, Series B (Methodological), 18, pp. 288–94.
  15. ^ Fisher, R. A. (1929). "Moments and Product Moments of Sampling Distributions". Proceedings of the London Mathematical Society. 30: 199–238. doi:10.1112/plms/s2-30.1.199.
  16. ^ Speicher, Roland (1994), "Multiplicative functions on the lattice of non-crossing partitions and free convolution", Mathematische Annalen, 298 (4): 611–628
  17. ^ a b c Novak, Jonathan; Śniady, Piotr (2011). "What Is a Free Cumulant?". Notices of the American Mathematical Society. 58 (2): 300–301. ISSN 0002-9920.

Binder parameter

The Binder parameter or Binder cumulant in statistical physics, also known as the fourth-order cumulant, is defined as the kurtosis of the order parameter s, U_L = 1 − ⟨s^4⟩_L / (3⟨s^2⟩_L^2). It is frequently used to determine phase transition points accurately in numerical simulations of various models.

The phase transition point is usually identified by comparing the behavior of U_L as a function of the temperature T for different values of the system size L. The transition temperature is the unique point where the different curves cross in the thermodynamic limit. This behavior is based on the fact that in the critical region, T ≈ Tc, the Binder parameter behaves as a universal function of (T − Tc)L^{1/ν}, where ν is the critical exponent of the correlation length.

Accordingly, the cumulant may also be used to identify the universality class of the transition by determining the value of the critical exponent of the correlation length.

In the thermodynamic limit, at the critical point, the value of the Binder parameter depends on boundary conditions, the shape of the system, and anisotropy of correlations.

Bispectrum

In mathematics, in the area of statistical analysis, the bispectrum is a statistic used to search for nonlinear interactions.

Central moment

In probability theory and statistics, a central moment is a moment of a probability distribution of a random variable about the random variable's mean; that is, it is the expected value of a specified integer power of the deviation of the random variable from the mean. The various moments form one set of values by which the properties of a probability distribution can be usefully characterised. Central moments are used in preference to ordinary moments, computed in terms of deviations from the mean instead of from zero, because the higher-order central moments relate only to the spread and shape of the distribution, rather than also to its location.

Sets of central moments can be defined for both univariate and multivariate distributions.

Dynamic light scattering

Dynamic light scattering (DLS) is a technique in physics that can be used to determine the size distribution profile of small particles in suspension or polymers in solution. In the scope of DLS, temporal fluctuations are usually analyzed by means of the intensity or photon auto-correlation function (also known as photon correlation spectroscopy or quasi-elastic light scattering). In the time domain analysis, the autocorrelation function (ACF) usually decays starting from zero delay time, and faster dynamics due to smaller particles lead to faster decorrelation of scattered intensity trace. It has been shown that the intensity ACF is the Fourier transformation of the power spectrum, and therefore the DLS measurements can be equally well performed in the spectral domain. DLS can also be used to probe the behavior of complex fluids such as concentrated polymer solutions.

Exponential dispersion model

In probability and statistics, the class of exponential dispersion models (EDM) is a set of probability distributions that represents a generalisation of the natural exponential family.

Exponential dispersion models play an important role in statistical theory, in particular in generalized linear models because they have a special structure which enables deductions to be made about appropriate statistical inference.

Free probability

Free probability is a mathematical theory that studies non-commutative random variables. The "freeness" or free independence property is the analogue of the classical notion of independence, and it is connected with free products.

This theory was initiated by Dan Voiculescu around 1986 in order to attack the free group factors isomorphism problem, an important unsolved problem in the theory of operator algebras. Given a free group on some number of generators, we can consider the von Neumann algebra generated by the group algebra, which is a type II1 factor. The isomorphism problem asks whether these are isomorphic for different numbers of generators. It is not even known if any two free group factors are isomorphic. This is similar to Tarski's free group problem, which asks whether two different non-abelian finitely generated free groups have the same elementary theory.

Later connections to random matrix theory, combinatorics, representations of symmetric groups, large deviations, quantum information theory and other theories were established. Free probability is currently undergoing active research.

Typically the random variables lie in a unital algebra A such as a C*-algebra or a von Neumann algebra. The algebra comes equipped with a noncommutative expectation, a linear functional φ: A → C such that φ(1) = 1. Unital subalgebras A1, ..., Am are then said to be freely independent if the expectation of the product a1...an is zero whenever each aj has zero expectation, lies in an Ak, and no adjacent aj's come from the same subalgebra Ak. Random variables are freely independent if they generate freely independent unital subalgebras.

One of the goals of free probability (still unaccomplished) was to construct new invariants of von Neumann algebras and free dimension is regarded as a reasonable candidate for such an invariant. The main tool used for the construction of free dimension is free entropy.

The relation of free probability with random matrices is a key reason for the wide use of free probability in other subjects. Voiculescu introduced the concept of freeness around 1983 in an operator algebraic context; at the beginning there was no relation at all with random matrices. This connection was only revealed later in 1991 by Voiculescu; he was motivated by the fact that the limit distribution which he found in his free central limit theorem had appeared before in Wigner's semi-circle law in the random matrix context.

The free cumulant functional (introduced by Roland Speicher) plays a major role in the theory. It is related to the lattice of noncrossing partitions of the set { 1, ..., n } in the same way in which the classic cumulant functional is related to the lattice of all partitions of that set.

Inverse Gaussian distribution

In probability theory, the inverse Gaussian distribution (also known as the Wald distribution) is a two-parameter family of continuous probability distributions with support on (0,∞).

Its probability density function is given by

f(x; μ, λ) = sqrt(λ/(2πx^3)) exp(−λ(x − μ)^2/(2μ^2 x))

for x > 0, where μ > 0 is the mean and λ > 0 is the shape parameter.

As λ tends to infinity, the inverse Gaussian distribution becomes more like a normal (Gaussian) distribution. The inverse Gaussian distribution has several properties analogous to a Gaussian distribution. The name can be misleading: it is an "inverse" only in that, while the Gaussian describes a Brownian motion's level at a fixed time, the inverse Gaussian describes the distribution of the time a Brownian motion with positive drift takes to reach a fixed positive level.

Its cumulant generating function (logarithm of the characteristic function) is the inverse of the cumulant generating function of a Gaussian random variable.

To indicate that a random variable X is inverse Gaussian-distributed with mean μ and shape parameter λ we write X ~ IG(μ, λ).

K-statistic

In statistics, a k-statistic is a minimum-variance unbiased estimator of a cumulant.

Kullback's inequality

In information theory and statistics, Kullback's inequality is a lower bound on the Kullback–Leibler divergence expressed in terms of the large deviations rate function. If P and Q are probability distributions on the real line, such that P is absolutely continuous with respect to Q, i.e. P << Q, and whose first moments exist, then

D_KL(P ‖ Q) ≥ Ψ_Q*(μ′1(P)),

where Ψ_Q* is the rate function, i.e. the convex conjugate of the cumulant-generating function, of Q, and μ′1(P) is the first moment of P.

The Cramér–Rao bound is a corollary of this result.

Law of total cumulance

In probability theory and mathematical statistics, the law of total cumulance is a generalization to cumulants of the law of total probability, the law of total expectation, and the law of total variance. It has applications in the analysis of time series. It was introduced by David Brillinger.

It is most transparent when stated in its most general form, for joint cumulants, rather than for cumulants of a specified order for just one random variable. In general, we have

κ(X1, ..., Xn) = ∑_π κ( κ(Xi : i ∈ B ∣ Y) : B ∈ π ),

where

  • κ(X1, ..., Xn) is the joint cumulant of X1, ..., Xn, and
  • the sum is over all partitions π of the set { 1, ..., n } of indices, with κ(· ∣ Y) denoting a conditional cumulant given Y.

Mean squared displacement

In statistical mechanics, the mean squared displacement (MSD, also mean square displacement, average squared displacement, or mean square fluctuation) is a measure of the deviation of the position of a particle with respect to a reference position over time. It is the most common measure of the spatial extent of random motion, and can be thought of as measuring the portion of the system "explored" by the random walker. In the realm of biophysics and environmental engineering, the Mean Squared Displacement is measured over time to determine if a particle is spreading solely due to diffusion, or if an advective force is also contributing. Another relevant concept, the Variance-Related Diameter (VRD, which is twice the square root of MSD), is also used in studying the transportation and mixing phenomena in the realm of environmental engineering. It prominently appears in the Debye–Waller factor (describing vibrations within the solid state) and in the Langevin equation (describing diffusion of a Brownian particle). The MSD is defined as

MSD ≡ ⟨|x(t) − x0|^2⟩ = (1/N) ∑_{i=1}^{N} |x_i(t) − x_i(0)|^2,

where N is the number of particles to be averaged, x_i(0) is the reference position of each particle, and x_i(t) is the position of each particle at time t.

Moment-generating function

In probability theory and statistics, the moment-generating function of a real-valued random variable is an alternative specification of its probability distribution. Thus, it provides the basis of an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the moment-generating functions of distributions defined by the weighted sums of random variables. However, not all random variables have moment-generating functions.

As its name implies, the moment generating function can be used to compute a distribution’s moments: the nth moment about 0 is the nth derivative of the moment-generating function, evaluated at 0.

In addition to real-valued distributions (univariate distributions), moment-generating functions can be defined for vector- or matrix-valued random variables, and can even be extended to more general cases.

The moment-generating function of a real-valued distribution does not always exist, unlike the characteristic function. There are relations between the behavior of the moment-generating function of a distribution and properties of the distribution, such as the existence of moments.

Natural exponential family

In probability and statistics, a natural exponential family (NEF) is a class of probability distributions that is a special case of an exponential family (EF). Every distribution possessing a moment-generating function is a member of a natural exponential family, and the use of such distributions simplifies the theory and computation of generalized linear models.

Noncentral chi-squared distribution

In probability theory and statistics, the noncentral chi-squared or noncentral χ² distribution is a generalization of the chi-squared distribution. This distribution often arises in the power analysis of statistical tests in which the null distribution is (perhaps asymptotically) a chi-squared distribution; important examples of such tests are the likelihood-ratio tests.

Skewness

In probability theory and statistics, skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean. The skewness value can be positive or negative, or undefined.

For a unimodal distribution, negative skew commonly indicates that the tail is on the left side of the distribution, and positive skew indicates that the tail is on the right. In cases where one tail is long but the other tail is fat, skewness does not obey a simple rule. For example, a zero value means that the tails on both sides of the mean balance out overall; this is the case for a symmetric distribution, but can also be true for an asymmetric distribution where one tail is long and thin, and the other is short but fat.

Some popular intuitions about skewness are not correct. As a 2005 journal article points out:

Many textbooks teach a rule of thumb stating that the mean is right of the median under right skew, and left of the median under left skew. This rule fails with surprising frequency. It can fail in multimodal distributions, or in distributions where one tail is long but the other is heavy. Most commonly, though, the rule fails in discrete distributions where the areas to the left and right of the median are not equal.

Super-resolution optical fluctuation imaging

Super-resolution optical fluctuation imaging (SOFI) is a post-processing method for the calculation of super-resolved images from recorded image time series that is based on the temporal correlations of independently fluctuating fluorescent emitters.

SOFI has been developed for super-resolution of biological specimens that are labelled with independently fluctuating fluorescent emitters (organic dyes, fluorescent proteins). In comparison to other super-resolution microscopy techniques such as STORM or PALM that rely on single-molecule localization and hence only allow one active molecule per diffraction-limited area (DLA) and timepoint, SOFI does not require controlled photoswitching and/or photoactivation, nor long imaging times. Nevertheless, it still requires fluorophores that cycle through two distinguishable states, either real on-/off-states or states with different fluorescence intensities. In mathematical terms, SOFI imaging relies on the calculation of cumulants, for which two distinct approaches exist. An image can be calculated via auto-cumulants, which by definition rely only on the information of each pixel itself, or via an improved method that utilizes the information of different pixels through the calculation of cross-cumulants. Both methods can increase the final image resolution significantly, although the cumulant calculation has its limitations. In fact, SOFI is able to increase the resolution in all three dimensions.

Tweedie distribution

In probability and statistics, the Tweedie distributions are a family of probability distributions which include the purely continuous normal and gamma distributions, the purely discrete scaled Poisson distribution, and the class of mixed compound Poisson–gamma distributions which have positive mass at zero, but are otherwise continuous. For any random variable Y that obeys a Tweedie distribution, the variance var(Y) relates to the mean E(Y) by the power law

var(Y) = a [E(Y)]^p,

where a and p are positive constants.

The Tweedie distributions were named by Bent Jørgensen after Maurice Tweedie, a statistician and medical physicist at the University of Liverpool, UK, who presented the first thorough study of these distributions in 1984.

Uniform distribution (continuous)

In probability theory and statistics, the continuous uniform distribution or rectangular distribution is a family of symmetric probability distributions such that for each member of the family, all intervals of the same length on the distribution's support are equally probable. The support is defined by the two parameters, a and b, which are its minimum and maximum values. The distribution is often abbreviated U(a,b). It is the maximum entropy probability distribution for a random variable X under no constraint other than that it is contained in the distribution's support.

Ursell function

In statistical mechanics, an Ursell function or connected correlation function is a cumulant of a random variable. It is also called a connected correlation function as it can often be obtained by summing over connected Feynman diagrams (the sum over all Feynman diagrams gives the correlation functions).

The Ursell function was named after Harold Ursell, who introduced it in 1927.
