Linear differential equation

In mathematics, a linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is an equation of the form

$$a_0(x)y + a_1(x)y' + a_2(x)y'' + \cdots + a_n(x)y^{(n)} = b(x),$$

where $a_0(x), \ldots, a_n(x)$ and $b(x)$ are arbitrary differentiable functions that do not need to be linear, and $y', \ldots, y^{(n)}$ are the successive derivatives of an unknown function y of the variable x.

This is an ordinary differential equation (ODE). A linear differential equation may also be a linear partial differential equation (PDE), if the unknown function depends on several variables, and the derivatives that appear in the equation are partial derivatives. In this article, only ordinary differential equations are considered.

A linear differential equation or a system of linear equations such that the associated homogeneous equations have constant coefficients may be solved by quadrature, which means that the solutions may be expressed in terms of integrals. This is also true for a linear equation of order one, with non-constant coefficients. An equation of order two or higher with non-constant coefficients cannot, in general, be solved by quadrature. For order two, Kovacic's algorithm allows deciding whether there are solutions in terms of integrals, and computing them if there are any.

The solutions of linear differential equations with polynomial coefficients are called holonomic functions. This class of functions is stable under sums, products, differentiation, and integration, and contains many usual functions and special functions, such as the exponential function, the logarithm, sine, cosine, inverse trigonometric functions, the error function, Bessel functions and hypergeometric functions. Their representation by the defining differential equation and initial conditions makes most operations of calculus algorithmic on these functions, such as computation of antiderivatives, limits, asymptotic expansion, and numerical evaluation to any precision, with a certified error bound.

Basic terminology

The highest order of derivation that appears in a differential equation is the order of the equation. The term b(x), which does not depend on the unknown function and its derivatives, is sometimes called the constant term of the equation (by analogy with algebraic equations), even when this term is a non-constant function. If the constant term is the zero function, then the differential equation is said to be homogeneous, as it is a homogeneous polynomial in the unknown function and its derivatives. The equation obtained by replacing, in a linear differential equation, the constant term by the zero function is the associated homogeneous equation. A differential equation has constant coefficients if only constant functions appear as coefficients in the associated homogeneous equation.

A solution of a differential equation is a function that satisfies the equation. The solutions of a homogeneous linear differential equation form a vector space. In the ordinary case, this vector space has a finite dimension, equal to the order of the equation. All solutions of a linear differential equation are found by adding to a particular solution any solution of the associated homogeneous equation.

Linear differential operator

A basic differential operator of order i is a mapping that maps any differentiable function to its ith derivative, or, in the case of several variables, to one of its partial derivatives of order i. It is commonly denoted

$$\frac{d^i}{dx^i}$$

in the case of univariate functions, and

$$\frac{\partial^{i_1 + \cdots + i_n}}{\partial x_1^{i_1} \cdots \partial x_n^{i_n}}$$

in the case of functions of n variables. The basic differential operators include the derivative of order 0, which is the identity mapping.

A linear differential operator (abbreviated, in this article, as linear operator or, simply, operator) is a linear combination of basic differential operators, with differentiable functions as coefficients. In the univariate case, a linear operator has thus the form[1]

$$a_0(x) + a_1(x)\frac{d}{dx} + \cdots + a_n(x)\frac{d^n}{dx^n},$$

where $a_0(x), \ldots, a_n(x)$ are differentiable functions, and the nonnegative integer n is the order of the operator (if $a_n(x)$ is not the zero function).

Let L be a linear differential operator. The application of L to a function f is usually denoted Lf or Lf(x), if one needs to specify the variable (this must not be confused with a multiplication). A linear differential operator is a linear operator, since it maps sums to sums and the product by a scalar to the product by the same scalar.

As the sum of two linear operators is a linear operator, as well as the product (on the left) of a linear operator by a differentiable function, the linear differential operators form a vector space over the real numbers or the complex numbers (depending on the nature of the functions that are considered). They also form a free module over the ring of differentiable functions.

The language of operators allows a compact writing for differential equations: if

$$L = a_0(x) + a_1(x)\frac{d}{dx} + \cdots + a_n(x)\frac{d^n}{dx^n}$$

is a linear differential operator, then the equation

$$a_0(x)y + a_1(x)y' + \cdots + a_n(x)y^{(n)} = b(x)$$

may be rewritten

$$Ly = b(x).$$

There may be several variants to this notation; in particular, the variable of differentiation may appear explicitly or not in y and the right-hand side of the equation, such as $Ly(x) = b(x)$ or $Ly = b$.

The kernel of a linear differential operator is its kernel as a linear mapping, that is the vector space of the solutions of the (homogeneous) differential equation $Ly = 0$.

In the case of an ordinary differential operator of order n, Carathéodory's existence theorem implies that, under very mild conditions, the kernel of L is a vector space of dimension n, and that the solutions of the equation $Ly(x) = b(x)$ have the form

$$c_1 y_1 + \cdots + c_n y_n,$$

where $c_1, \ldots, c_n$ are arbitrary numbers. Typically, the hypotheses of Carathéodory's theorem are satisfied in an interval I, if the functions $b, a_0, \ldots, a_n$ are continuous in I, and there is a positive real number k such that $|a_n(x)| > k$ for every x in I.

Homogeneous equation with constant coefficients

A homogeneous linear differential equation has constant coefficients if it has the form

$$a_0 y + a_1 y' + a_2 y'' + \cdots + a_n y^{(n)} = 0,$$

where $a_0, \ldots, a_n$ are (real or complex) numbers. In other words, it has constant coefficients if it is defined by a linear operator with constant coefficients.

The study of these differential equations with constant coefficients dates back to Leonhard Euler, who introduced the exponential function $e^x$, which is the unique solution of the equation $f' = f$ such that $f(0) = 1$. It follows that the nth derivative of $e^{cx}$ is $c^n e^{cx}$, and this allows solving homogeneous linear differential equations rather easily.

Let

$$a_0 y + a_1 y' + a_2 y'' + \cdots + a_n y^{(n)} = 0$$

be a homogeneous linear differential equation with constant coefficients (that is, $a_0, \ldots, a_n$ are real or complex numbers).

Searching solutions of this equation that have the form $e^{\alpha x}$ is equivalent to searching the constants $\alpha$ such that

$$a_0 e^{\alpha x} + a_1 \alpha e^{\alpha x} + a_2 \alpha^2 e^{\alpha x} + \cdots + a_n \alpha^n e^{\alpha x} = 0.$$

Factoring out $e^{\alpha x}$ (which is never zero) shows that $\alpha$ must be a root of the characteristic polynomial

$$a_0 + a_1 t + a_2 t^2 + \cdots + a_n t^n$$

of the differential equation.

When these roots are all distinct, one has n distinct solutions that are not necessarily real, even if the coefficients of the equation are real. These solutions can be shown to be linearly independent, by considering the Vandermonde determinant of the values of these solutions at x = 0, ..., n – 1. Together they form a basis of the vector space of solutions of the differential equation (that is, the kernel of the differential operator).

Example

The equation

$$y'''' - 2y''' + 2y'' - 2y' + y = 0$$

has the characteristic equation

$$z^4 - 2z^3 + 2z^2 - 2z + 1 = 0.$$

This has zeros i, −i, and 1 (multiplicity 2). The solution basis is thus

$$e^{ix},\ e^{-ix},\ e^x,\ xe^x.$$

A real basis of solutions is thus

$$\cos x,\ \sin x,\ e^x,\ xe^x.$$
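
This example can be checked mechanically; the following is a minimal sketch assuming SymPy is available (the equation and expected basis are exactly those above):

```python
# Solve y'''' - 2y''' + 2y'' - 2y' + y = 0 with SymPy and confirm that the
# general solution is spanned by cos x, sin x, e^x and x*e^x.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x, 4) - 2*y(x).diff(x, 3) + 2*y(x).diff(x, 2)
            - 2*y(x).diff(x) + y(x), 0)
print(sp.dsolve(ode, y(x)))
# y(x) == C1*cos(x) + C2*sin(x) + (C3 + C4*x)*exp(x), up to constant naming
```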

In the case where the characteristic polynomial has only simple roots, the preceding provides a complete basis of the solutions vector space. In the case of multiple roots, more linearly independent solutions are needed for having a basis. These have the form

$$x^k e^{\alpha x},$$

where k is a nonnegative integer, $\alpha$ is a root of the characteristic polynomial of multiplicity m, and k < m. For proving that these functions are solutions, one may remark that if $\alpha$ is a root of the characteristic polynomial of multiplicity m, the characteristic polynomial may be factored as $P(t)(t - \alpha)^m$. Thus, applying the differential operator of the equation is equivalent to applying first m times the operator $\frac{d}{dx} - \alpha$, and then the operator that has P as characteristic polynomial. By the exponential shift theorem,

$$\left(\frac{d}{dx} - \alpha\right)\left(x^k e^{\alpha x}\right) = k x^{k-1} e^{\alpha x},$$

and thus one gets zero after k + 1 applications of $\frac{d}{dx} - \alpha$.

As, by the fundamental theorem of algebra, the sum of the multiplicities of the roots of a polynomial equals the degree of the polynomial, the number of the above solutions equals the order of the differential equation, and these solutions form a basis of the vector space of the solutions.

In the common case where the coefficients of the equation are real, it is generally more convenient to have a basis of the solutions consisting of real-valued functions. Such a basis may be obtained from the preceding basis by remarking that, if a + ib is a root of the characteristic polynomial, then a − ib is also a root, of the same multiplicity. Thus a real basis is obtained by using Euler's formula, and replacing $x^k e^{(a+ib)x}$ and $x^k e^{(a-ib)x}$ by $x^k e^{ax}\cos(bx)$ and $x^k e^{ax}\sin(bx)$.

Second-order case

A homogeneous linear differential equation of the second order may be written

$$y'' + ay' + by = 0,$$

and its characteristic polynomial is

$$r^2 + ar + b.$$

If a and b are real, there are three cases for the solutions, depending on the discriminant $D = a^2 - 4b$. In all three cases, the general solution depends on two arbitrary constants $c_1$ and $c_2$.

  • If D > 0, the characteristic polynomial has two distinct real roots $\alpha$ and $\beta$. In this case, the general solution is $c_1 e^{\alpha x} + c_2 e^{\beta x}$.
  • If D = 0, the characteristic polynomial has a double root $-a/2$, and the general solution is $(c_1 + c_2 x)e^{-ax/2}$.
  • If D < 0, the characteristic polynomial has two complex conjugate roots $\alpha \pm \beta i$, and the general solution is $c_1 e^{(\alpha + \beta i)x} + c_2 e^{(\alpha - \beta i)x}$, which may be rewritten in real terms, using Euler's formula, as $e^{\alpha x}(c_1 \cos(\beta x) + c_2 \sin(\beta x))$.

Finding the solution y(x) satisfying $y(0) = d_1$ and $y'(0) = d_2$, one equates the values of the above general solution at 0 and its derivative there to $d_1$ and $d_2$, respectively. This results in a linear system of two linear equations in the two unknowns $c_1$ and $c_2$. Solving this system gives the solution for a so-called Cauchy problem, in which the values at 0 for the solution of the differential equation and its derivative are specified.
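
The three discriminant cases and the Cauchy problem can be illustrated concretely; here is a minimal SymPy sketch (the values a = 2, b = 5, d1 = 1, d2 = 0 are illustrative, chosen so that D < 0):

```python
# Cauchy problem for y'' + a*y' + b*y = 0 with y(0) = d1, y'(0) = d2.
# With a = 2, b = 5 the discriminant D = a**2 - 4*b = -16 < 0, so the
# solution oscillates inside an exponentially decaying envelope.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

a, b = 2, 5
d1, d2 = 1, 0

ode = sp.Eq(y(x).diff(x, 2) + a*y(x).diff(x) + b*y(x), 0)
sol = sp.dsolve(ode, y(x), ics={y(0): d1, y(x).diff(x).subs(x, 0): d2})
print(sol)  # y(x) == (cos(2*x) + sin(2*x)/2)*exp(-x)
```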

Non-homogeneous equation with constant coefficients

A non-homogeneous equation of order n with constant coefficients may be written

$$a_0 y + a_1 y' + a_2 y'' + \cdots + a_n y^{(n)} = f(x),$$

where $a_0, \ldots, a_n$ are real or complex numbers, f is a given function of x, and y is the unknown function (for the sake of simplicity, "(x)" will be omitted in the following).

There are several methods for solving such an equation. The best method depends on the nature of the function f that makes the equation non-homogeneous. If f is a linear combination of exponential and sinusoidal functions, then the exponential response formula may be used. If, more generally, f is a linear combination of functions of the form $x^n e^{ax}$, $x^n \cos(ax)$, and $x^n \sin(ax)$, where n is a nonnegative integer and a a constant (which need not be the same in each term), then the method of undetermined coefficients may be used. Still more generally, the annihilator method applies when f satisfies a homogeneous linear differential equation, typically, a holonomic function.

The most general method is the variation of constants, which is presented here.

The general solution of the associated homogeneous equation

$$a_0 y + a_1 y' + a_2 y'' + \cdots + a_n y^{(n)} = 0$$

is

$$y = u_1 y_1 + \cdots + u_n y_n,$$

where $y_1, \ldots, y_n$ is a basis of the vector space of the solutions and $u_1, \ldots, u_n$ are arbitrary constants. The method of variation of constants takes its name from the fact that, instead of considering $u_1, \ldots, u_n$ as constants, they are considered as functions that have to be determined for making y a solution of the non-homogeneous equation. For this purpose, one adds the constraints

$$0 = u_1' y_1 + u_2' y_2 + \cdots + u_n' y_n$$
$$0 = u_1' y_1' + u_2' y_2' + \cdots + u_n' y_n'$$
$$\vdots$$
$$0 = u_1' y_1^{(n-2)} + u_2' y_2^{(n-2)} + \cdots + u_n' y_n^{(n-2)},$$

which imply (by product rule and induction)

$$y^{(i)} = u_1 y_1^{(i)} + \cdots + u_n y_n^{(i)}$$

for i = 1, ..., n – 1, and

$$y^{(n)} = u_1 y_1^{(n)} + \cdots + u_n y_n^{(n)} + u_1' y_1^{(n-1)} + u_2' y_2^{(n-1)} + \cdots + u_n' y_n^{(n-1)}.$$

Replacing in the original equation y and its derivatives by these expressions, and using the fact that $y_1, \ldots, y_n$ are solutions of the original homogeneous equation, one gets

$$f = a_n\left(u_1' y_1^{(n-1)} + \cdots + u_n' y_n^{(n-1)}\right).$$

One has thus a system of n linear equations in $u_1', \ldots, u_n'$, which can be solved by any method of linear algebra. Then the computation of antiderivatives gives $u_1, \ldots, u_n$, and then $y = u_1 y_1 + \cdots + u_n y_n$.

As antiderivatives are defined up to the addition of a constant, one finds again that the general solution of the non-homogeneous equation is the sum of an arbitrary solution and the general solution of the associated homogeneous equation.
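
For a concrete instance, the equation y″ + y = tan x has a right-hand side that undetermined coefficients cannot handle, but variation of constants applies; a minimal SymPy sketch (the hint string is SymPy's name for this method; if omitted, SymPy chooses a method itself):

```python
# Variation of constants for y'' + y = tan(x); the homogeneous basis is
# cos(x), sin(x), and u1', u2' are solved from the 2x2 linear system.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x, 2) + y(x), sp.tan(x))
sol = sp.dsolve(ode, y(x),
                hint='nth_linear_constant_coeff_variation_of_parameters')
print(sol)
```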

First-order equation with variable coefficients

Example
Solving the equation

$$y'(x) + \frac{y(x)}{x} = 3x.$$

The associated homogeneous equation $y'(x) + \frac{y(x)}{x} = 0$ gives

$$\frac{y'}{y} = -\frac{1}{x},$$

that is

$$y = \frac{c}{x}.$$

Dividing the original equation by one of these solutions gives

$$xy' + y = 3x^2.$$

That is

$$(xy)' = 3x^2,$$

$$xy = x^3 + c,$$

and

$$y(x) = x^2 + \frac{c}{x}.$$

For the initial condition

$$y(1) = \alpha,$$

one gets the particular solution

$$y(x) = x^2 + \frac{\alpha - 1}{x}.$$

The general form of a linear ordinary differential equation of order 1 is, after having divided by the coefficient of $y'(x)$,

$$y'(x) = f(x)y(x) + g(x).$$

In the case of a homogeneous equation (that is, g(x) is the zero function), the equation may be rewritten as (omitting "(x)" for the sake of simplification)

$$\frac{y'}{y} = f,$$

that may easily be integrated as

$$\log y = k + F,$$

where k is an arbitrary constant of integration and

$$F = \int f\,dx$$

is an antiderivative of f. Thus, the general solution of the homogeneous equation is

$$y = ce^F,$$

where $c = e^k$ is an arbitrary constant.

For solving the non-homogeneous equation, one may multiply it by the multiplicative inverse $e^{-F}$ of a solution of the homogeneous equation. This gives

$$y'e^{-F} - yfe^{-F} = ge^{-F}.$$

As $-f = -F'$, the product rule allows rewriting the equation as

$$\frac{d}{dx}\left(ye^{-F}\right) = ge^{-F}.$$

Thus, the general solution is

$$y = ce^F + e^F \int ge^{-F}\,dx,$$

where c is a constant of integration, and F is any antiderivative of f (changing the antiderivative amounts to changing the constant of integration).
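
Both the general formula and the worked example above are easy to verify; a minimal SymPy check (α denotes the symbolic initial value, as in the example):

```python
# Check the worked example y' + y/x = 3x with y(1) = alpha: the solution
# should be x**2 + (alpha - 1)/x, matching the integrating-factor formula.
import sympy as sp

x = sp.symbols('x', positive=True)
alpha = sp.symbols('alpha')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x) + y(x)/x, 3*x)
sol = sp.dsolve(ode, y(x), ics={y(1): alpha})
print(sp.expand(sol.rhs))  # x**2 + alpha/x - 1/x
```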

System of linear differential equations

A system of linear differential equations consists of several linear differential equations that involve several unknown functions. In general one restricts the study to systems such that the number of unknown functions equals the number of equations.

An arbitrary linear ordinary differential equation and a system of such equations can be converted into a first order system of linear differential equations by adding variables for all but the highest order derivatives. That is, if $y', y'', \ldots, y^{(k)}$ appear in an equation, one may replace them by new unknown functions $y_1, \ldots, y_k$ that must satisfy the equations $y_1 = y'$ and $y_{i+1} = y_i'$ for i = 1, ..., k – 1.

A linear system of the first order, which has n unknown functions and n differential equations, may normally be solved for the derivatives of the unknown functions. If this is not the case, it is a differential-algebraic system, and this is a different theory. Therefore, the systems that are considered here have the form

$$y_1'(x) = b_1(x) + a_{1,1}(x)y_1 + \cdots + a_{1,n}(x)y_n$$
$$\vdots$$
$$y_n'(x) = b_n(x) + a_{n,1}(x)y_1 + \cdots + a_{n,n}(x)y_n,$$

where $b_i$ and the $a_{i,j}$ are functions of x. In matrix notation, this system may be written (omitting "(x)")

$$\mathbf{y}' = A\mathbf{y} + \mathbf{b}.$$

The solving method is similar to that of a single first-order linear differential equation, but with complications stemming from noncommutativity of matrix multiplication.

Let

$$\mathbf{y}' = A\mathbf{y}$$

be the homogeneous equation associated to the above matrix equation. Its solutions form a vector space of dimension n, and are therefore the columns of a square matrix of functions $U(x)$, whose determinant is not the zero function. If n = 1, or A is a matrix of constants, or, more generally, if A is differentiable and commutes with its derivative, then one may choose for U the exponential of an antiderivative $B = \int A\,dx$ of A. In fact, in these cases, one has

$$\frac{d}{dx}e^B = Ae^B.$$

In the general case there is no closed-form solution for the homogeneous equation, and one has to use either a numerical method, or an approximation method such as Magnus expansion.

Knowing the matrix U, the general solution of the non-homogeneous equation is

$$\mathbf{y}(x) = U(x)\mathbf{y_0} + U(x)\int U^{-1}(x)\mathbf{b}(x)\,dx,$$

where the column matrix $\mathbf{y_0}$ is an arbitrary constant of integration.

If initial conditions are given as

$$\mathbf{y}(x_0) = \mathbf{y_0},$$

the solution that satisfies these initial conditions is

$$\mathbf{y}(x) = U(x)U^{-1}(x_0)\mathbf{y_0} + U(x)\int_{x_0}^{x} U^{-1}(t)\mathbf{b}(t)\,dt.$$
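
In the constant-coefficient case, one may take $U(x) = e^{Ax}$, so the homogeneous solution is $e^{A(x - x_0)}\mathbf{y_0}$; a minimal numerical sketch with an illustrative matrix, assuming NumPy and SciPy are available:

```python
# Solve the homogeneous system y' = A*y, y(0) = y0, via the matrix
# exponential y(x) = expm(A*x) @ y0, for a sample constant matrix A.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # constant coefficients, eigenvalues -1, -2
y0 = np.array([1.0, 0.0])      # initial condition y(0)

for xv in (0.0, 0.5, 1.0):
    print(xv, expm(A * xv) @ y0)
```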

Higher order with variable coefficients

A linear ordinary differential equation of order one with variable coefficients may be solved by quadrature, which means that the solutions may be expressed in terms of integrals. This is not the case for order at least two. This is the main result of Picard–Vessiot theory, a theory that was initiated by Émile Picard and Ernest Vessiot, and whose recent developments are called differential Galois theory.

The impossibility of solving by quadrature can be compared with the Abel–Ruffini theorem, which states that an algebraic equation of degree at least five cannot, in general, be solved by radicals. This analogy extends to the proof methods and motivates the denomination of differential Galois theory.

Similarly to the algebraic case, the theory allows deciding which equations may be solved by quadrature, and if possible solving them. However, for both theories, the necessary computations are extremely difficult, even with the most powerful computers.

Nevertheless, the case of order two with rational coefficients has been completely solved by Kovacic's algorithm.

Cauchy–Euler equation

Cauchy–Euler equations are examples of equations of any order, with variable coefficients, that can be solved explicitly. These are the equations of the form

$$x^n y^{(n)}(x) + a_{n-1}x^{n-1}y^{(n-1)}(x) + \cdots + a_0 y(x) = 0,$$

where $a_0, \ldots, a_{n-1}$ are constant coefficients.
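
For x > 0, substituting the trial solution $y = x^m$ reduces a Cauchy–Euler equation to a polynomial equation in m; a minimal SymPy sketch on an illustrative second-order instance:

```python
# Cauchy-Euler example: x**2*y'' - 2*x*y' + 2*y = 0. Trying y = x**m gives
# m*(m-1) - 2*m + 2 = (m-1)*(m-2) = 0, hence the solution basis x, x**2.
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

ode = sp.Eq(x**2*y(x).diff(x, 2) - 2*x*y(x).diff(x) + 2*y(x), 0)
print(sp.dsolve(ode, y(x)))  # y(x) == C1*x + C2*x**2
```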

Holonomic functions

A holonomic function, also called a D-finite function, is a function that is a solution of a homogeneous linear differential equation with polynomial coefficients.

Most functions that are commonly considered in mathematics are holonomic or quotients of holonomic functions. In fact, holonomic functions include polynomials, algebraic functions, logarithm, exponential function, sine, cosine, hyperbolic sine, hyperbolic cosine, inverse trigonometric and inverse hyperbolic functions, and many special functions such as Bessel functions and hypergeometric functions.

Holonomic functions have several closure properties; in particular, sums, products, derivatives and integrals of holonomic functions are holonomic. Moreover, these closure properties are effective, in the sense that there are algorithms for computing the differential equation of the result of any of these operations, knowing the differential equations of the input.[2]

The usefulness of the concept of holonomic functions results from Zeilberger's theorem, which follows.[2]

A holonomic sequence is a sequence of numbers that may be generated by a recurrence relation with polynomial coefficients. The coefficients of the Taylor series at a point of a holonomic function form a holonomic sequence. Conversely, if the sequence of the coefficients of a power series is holonomic, then the series defines a holonomic function (even if the radius of convergence is zero). There are efficient algorithms for both conversions, that is, for computing the recurrence relation from the differential equation, and vice versa.[2]

It follows that, if one represents (in a computer) holonomic functions by their defining differential equations and initial conditions, most calculus operations can be done automatically on these functions, such as derivative, indefinite and definite integral, fast computation of Taylor series (thanks to the recurrence relation on its coefficients), evaluation to a high precision with certified bound of the approximation error, limits, localization of singularities, asymptotic behavior at infinity and near singularities, proof of identities, etc.[3]
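
As a tiny illustration of the series-to-recurrence correspondence: the exponential function satisfies y′ − y = 0, and its Taylor coefficients $c_n$ therefore satisfy the holonomic recurrence $(n + 1)c_{n+1} = c_n$; a minimal sketch:

```python
# Generate the Taylor coefficients of exp(x) (solution of y' - y = 0)
# from the holonomic recurrence (n + 1)*c[n+1] = c[n], with c[0] = 1.
from fractions import Fraction

c = [Fraction(1)]
for n in range(8):
    c.append(c[n] / (n + 1))

print(c[:5])  # [1, 1, 1/2, 1/6, 1/24], i.e. c_n = 1/n!
```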

References

  1. ^ Gershenfeld 1999, p. 9
  2. ^ a b c Zeilberger, Doron (1990). "A holonomic systems approach to special functions identities". Journal of Computational and Applied Mathematics. 32 (3): 321–368.
  3. ^ Benoit, A.; Chyzak, F.; Darrasse, A.; Gerhold, S.; Mezzarobba, M.; Salvy, B. (September 2010). "The Dynamic Dictionary of Mathematical Functions (DDMF)". In International Congress on Mathematical Software (pp. 35–41). Springer, Berlin, Heidelberg.
  • Birkhoff, Garrett & Rota, Gian-Carlo (1978), Ordinary Differential Equations, New York: John Wiley and Sons, Inc., ISBN 0-471-07411-X
  • Gershenfeld, Neil (1999), The Nature of Mathematical Modeling, Cambridge, UK.: Cambridge University Press, ISBN 978-0-521-57095-4
  • Robinson, James C. (2004), An Introduction to Ordinary Differential Equations, Cambridge, UK.: Cambridge University Press, ISBN 0-521-82650-0

Adjoint equation

An adjoint equation is a linear differential equation, usually derived from its primal equation using integration by parts. Gradient values with respect to a particular quantity of interest can be efficiently calculated by solving the adjoint equation. Methods based on the solution of adjoint equations are used in wing shape optimization, fluid flow control and uncertainty quantification. For example, starting from an Itō stochastic differential equation, integrating by parts under an Euler scheme yields another equation involving a random variable; the latter is an adjoint equation.

Airy function

In the physical sciences, the Airy function (or Airy function of the first kind) Ai(x) is a special function named after the British astronomer George Biddell Airy (1801–1892). The function Ai(x) and the related function Bi(x) are linearly independent solutions to the differential equation

$$y'' - xy = 0,$$

known as the Airy equation or the Stokes equation. This is the simplest second-order linear differential equation with a turning point (a point where the character of the solutions changes from oscillatory to exponential).
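
The defining equation is easy to check numerically; a minimal sketch assuming SciPy, whose special-function module provides Ai and Bi:

```python
# Check numerically that Ai satisfies y'' = x*y: compare x*Ai(x) against a
# central-difference estimate of Ai''(x). scipy.special.airy returns
# (Ai, Ai', Bi, Bi').
import numpy as np
from scipy.special import airy

x = np.linspace(-5.0, 5.0, 11)
ai = airy(x)[0]

h = 1e-4
fd = (airy(x + h)[0] - 2*ai + airy(x - h)[0]) / h**2  # ~ Ai''(x)
print(np.max(np.abs(fd - x*ai)))  # small residual from discretization
```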

The Airy function is the solution to Schrödinger's equation for a particle confined within a triangular potential well and for a particle in a one-dimensional constant force field. For the same reason, it also serves to provide uniform semiclassical approximations near a turning point in the WKB approximation, when the potential may be locally approximated by a linear function of position. The triangular potential well solution is directly relevant for the understanding of many semiconductor devices.

The Airy function also underlies the form of the intensity near an optical directional caustic, such as that of the rainbow. Historically, this was the mathematical problem that led Airy to develop this special function.

A different function that is also named after Airy is important in microscopy and astronomy; it describes the pattern, due to diffraction and interference, produced by a point source of light (one which is much smaller than the resolution limit of a microscope or telescope).

Bernoulli differential equation

In mathematics, an ordinary differential equation of the form

$$y' + P(x)y = Q(x)y^n$$

is called a Bernoulli differential equation, where n is any real number with $n \neq 0$ and $n \neq 1$. It is named after Jacob Bernoulli, who discussed it in 1695. Bernoulli equations are special because they are nonlinear differential equations with known exact solutions. A famous special case of the Bernoulli equation is the logistic differential equation.
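
The substitution $u = y^{1-n}$ turns a Bernoulli equation into a linear one; for the logistic equation $y' = y - y^2$ (a Bernoulli equation with n = 2), u = 1/y satisfies u′ = 1 − u. A minimal SymPy check:

```python
# The logistic equation y' = y - y**2 is a Bernoulli equation with n = 2;
# SymPy solves it exactly, giving the familiar sigmoid family.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x), y(x) - y(x)**2)
print(sp.dsolve(ode, y(x)))  # e.g. y(x) == 1/(C1*exp(-x) + 1)
```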

Floquet theory

Floquet theory is a branch of the theory of ordinary differential equations relating to the class of solutions to periodic linear differential equations of the form

$$\dot{x} = A(t)x,$$

with $A(t)$ a piecewise continuous periodic function with period T, and defines the state of the stability of solutions.

The main theorem of Floquet theory, Floquet's theorem, due to Gaston Floquet (1883), gives a canonical form for each fundamental matrix solution of this common linear system. It gives a coordinate change $y = Q^{-1}(t)x$ with $Q(t + 2T) = Q(t)$ that transforms the periodic system to a traditional linear system with constant, real coefficients.

In solid-state physics, the analogous result is known as Bloch's theorem.

Note that the solutions of the linear differential equation form a vector space. A matrix $\phi(t)$ is called a fundamental matrix solution if all columns are linearly independent solutions. A matrix $\Phi(t)$ is called a principal fundamental matrix solution if all columns are linearly independent solutions and there exists $t_0$ such that $\Phi(t_0)$ is the identity. A principal fundamental matrix can be constructed from a fundamental matrix using $\Phi(t) = \phi(t)\phi^{-1}(t_0)$. The solution of the linear differential equation with the initial condition $x(0) = x_0$ is $x(t) = \phi(t)\phi^{-1}(0)x_0$, where $\phi(t)$ is any fundamental matrix solution.

Fuchsian theory

The Fuchsian theory of linear differential equations, which is named after Lazarus Immanuel Fuchs, provides a characterization of various types of singularities and the relations among them.

At any ordinary point of a homogeneous linear differential equation of order n, there exists a fundamental system of n linearly independent power series solutions. A non-ordinary point is called a singularity. At a singularity, the maximal number of linearly independent power series solutions may be less than the order of the differential equation.

Fundamental matrix (linear differential equation)

In mathematics, a fundamental matrix of a system of n homogeneous linear ordinary differential equations

$$\dot{x}(t) = A(t)x(t)$$

is a matrix-valued function $\Psi(t)$ whose columns are linearly independent solutions of the system. Then every solution to the system can be written as $x(t) = \Psi(t)c$, for some constant vector c (written as a column vector of height n).

One can show that a matrix-valued function $\Psi$ is a fundamental matrix of $\dot{x}(t) = A(t)x(t)$ if and only if $\dot{\Psi}(t) = A(t)\Psi(t)$ and $\Psi(t)$ is a non-singular matrix for all t.

Homogeneous differential equation

A differential equation can be homogeneous in either of two respects.

A first-order differential equation is said to be homogeneous if it may be written

$$y' = \frac{f(x, y)}{g(x, y)},$$

where f and g are homogeneous functions of the same degree of x and y. In this case, the change of variable y = ux leads to an equation of the form

$$\frac{dx}{x} = h(u)\,du,$$

which is easy to solve by integration of the two members.

Otherwise, a differential equation is homogeneous if it is a homogeneous function of the unknown function and its derivatives. In the case of linear differential equations, this means that there are no constant terms. The solutions of any linear ordinary differential equation of any order may be deduced by integration from the solution of the homogeneous equation obtained by removing the constant term.

Lazarus Fuchs

Lazarus Immanuel Fuchs (5 May 1833 – 26 April 1902) was a Jewish-German mathematician who contributed important research in the field of linear differential equations. He was born in Moschin (Mosina) (located in Grand Duchy of Posen) and died in Berlin, Germany. He was buried in Schöneberg in the St. Matthew's Cemetery. His grave in section H is preserved and listed as a grave of honour of the State of Berlin.

He is the eponym of Fuchsian groups and functions, and the Picard–Fuchs equation. A singular point a of a linear differential equation

$$y'' + p(x)y' + q(x)y = 0$$

is called Fuchsian if p and q are meromorphic at the point a, and have poles of orders at most 1 and 2, respectively. According to a theorem of Fuchs, this condition is necessary and sufficient for the regularity of the singular point, that is, to ensure the existence of two linearly independent solutions of the form

$$y_j = \sum_{n=0}^{\infty} a_{j,n}(x - a)^{n + \sigma_j}, \quad a_{j,0} \neq 0,\ j = 1, 2,$$

where the exponents $\sigma_j$ can be determined from the equation. In the case when $\sigma_1 - \sigma_2$ is an integer this formula has to be modified.

Another well-known result of Fuchs is Fuchs's conditions, the necessary and sufficient conditions for a non-linear differential equation to be free of movable singularities.

Lazarus Fuchs was the father of Richard Fuchs, a German mathematician.

Magnus expansion

In mathematics and physics, the Magnus expansion, named after Wilhelm Magnus (1907–1990), provides an exponential representation of the solution of a first order homogeneous linear differential equation for a linear operator. In particular it furnishes the fundamental matrix of a system of linear ordinary differential equations of order n with varying coefficients. The exponent is aggregated as an infinite series whose terms involve multiple integrals and nested commutators.

Nonlinear system

In mathematics and science, a nonlinear system is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists because most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems.

Typically, the behavior of a nonlinear system is described in mathematics by a nonlinear system of equations, which is a set of simultaneous equations in which the unknowns (or the unknown functions in the case of differential equations) appear as variables of a polynomial of degree higher than one or in the argument of a function which is not a polynomial of degree one.

In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be defined as nonlinear, regardless of whether known linear functions appear in the equations. In particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it.

As nonlinear dynamical equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization). This works well up to some accuracy and some range for the input values, but some interesting phenomena such as solitons, chaos, and singularities are hidden by linearization. It follows that some aspects of the dynamic behavior of a nonlinear system can appear to be counterintuitive, unpredictable or even chaotic. Although such chaotic behavior may resemble random behavior, it is in fact not random. For example, some aspects of the weather are seen to be chaotic, where simple changes in one part of the system produce complex effects throughout. This nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology.

Some authors use the term nonlinear science for the study of nonlinear systems. This is disputed by others:

Using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals.

Picard–Vessiot theory

In differential algebra, Picard–Vessiot theory is the study of the differential field extension generated by the solutions of a linear differential equation, using the differential Galois group of the field extension. A major goal is to describe when the differential equation can be solved by quadratures in terms of properties of the differential Galois group. The theory was initiated by Émile Picard and Ernest Vessiot from about 1883 to 1904.

Kolchin (1973) and van der Put & Singer (2003) give detailed accounts of Picard–Vessiot theory.

Quantum algorithm for linear systems of equations

The quantum algorithm for linear systems of equations, designed by Aram Harrow, Avinatan Hassidim, and Seth Lloyd, is a quantum algorithm formulated in 2009 for solving linear systems. The algorithm estimates the result of a scalar measurement on the solution vector to a given linear system of equations.

The algorithm is one of the main fundamental algorithms expected to provide a speedup over their classical counterparts, along with Shor's factoring algorithm, Grover's search algorithm and quantum simulation. Provided the linear system is sparse and has a low condition number $\kappa$, and that the user is interested in the result of a scalar measurement on the solution vector, instead of the values of the solution vector itself, then the algorithm has a runtime of $O(\log(N)\kappa^2)$, where N is the number of variables in the linear system. This offers an exponential speedup over the fastest classical algorithm, which runs in $O(N\kappa)$ (or $O(N\sqrt{\kappa})$ for positive semidefinite matrices).

An implementation of the quantum algorithm for linear systems of equations was first demonstrated in 2013 by Cai et al., Barz et al. and Pan et al. in parallel. The demonstrations consisted of simple linear equations on specially designed quantum devices. The first demonstration of a general-purpose version of the algorithm appeared in 2018 in the work of Zhao et al.

Due to the prevalence of linear systems in virtually all areas of science and engineering, the quantum algorithm for linear systems of equations has the potential for widespread applicability.

Regular singular point

In mathematics, in the theory of ordinary differential equations in the complex plane $\mathbb{C}$, the points of $\mathbb{C}$ are classified into ordinary points, at which the equation's coefficients are analytic functions, and singular points, at which some coefficient has a singularity. Then amongst singular points, an important distinction is made between a regular singular point, where the growth of solutions is bounded (in any small sector) by an algebraic function, and an irregular singular point, where the full solution set requires functions with higher growth rates. This distinction occurs, for example, between the hypergeometric equation, with three regular singular points, and the Bessel equation, which is in a sense a limiting case, but where the analytic properties are substantially different.

Riemann's differential equation

In mathematics, Riemann's differential equation, named after Bernhard Riemann, is a generalization of the hypergeometric differential equation, allowing the regular singular points (RSPs) to occur anywhere on the Riemann sphere, rather than merely at 0, 1, and $\infty$. The equation is also known as the Papperitz equation.

The hypergeometric differential equation is a second-order linear differential equation which has three regular singular points, 0, 1 and $\infty$. That equation admits two linearly independent solutions; near a singularity $z_s$, the solutions take the form $x^s f(x)$, where $x = z - z_s$ is a local variable, and f is locally holomorphic with $f(0) \neq 0$. The real number s is called the exponent of the solution at $z_s$. Let α, β and γ be the exponents of one solution at 0, 1 and $\infty$ respectively; and let α′, β′ and γ′ be those of the other. Then

$$\alpha + \alpha' + \beta + \beta' + \gamma + \gamma' = 1.$$

By applying suitable changes of variable, it is possible to transform the hypergeometric equation: Applying Möbius transformations will adjust the positions of the RSPs, while other transformations (see below) can change the exponents at the RSPs, subject to the exponents adding up to 1.

Siegel G-function

In mathematics, the Siegel G-functions are a class of functions in transcendental number theory introduced by C. L. Siegel. They satisfy a linear differential equation with polynomial coefficients, and the coefficients of their power series expansion lie in a fixed algebraic number field and have heights of at most exponential growth.

Surrogate data testing

Surrogate data testing (or the method of surrogate data) is a statistical proof-by-contradiction technique, similar to parametric bootstrapping, used to detect non-linearity in a time series. The technique involves specifying a null hypothesis describing a linear process and then generating several surrogate data sets according to that hypothesis using Monte Carlo methods. A discriminating statistic is then calculated for the original time series and all the surrogate sets. If the value of the statistic is significantly different for the original series than for the surrogate sets, the null hypothesis is rejected and non-linearity is assumed.

The particular surrogate data testing method to be used is directly related to the null hypothesis. Usually this is similar to the following: The data is a realization of a stationary linear system, whose output has been possibly measured by a monotonically increasing, possibly nonlinear (but static) function. Here, linear means that each value is linearly dependent on past values or on present and past values of some independent identically distributed (i.i.d.) process, usually also Gaussian. This is equivalent to saying that the process is of ARMA type. In the case of fluxes (continuous mappings), linearity of the system means that it can be expressed by a linear differential equation. In this hypothesis, the static measurement function is one which depends only on the present value of its argument, not on past ones.

Von Bertalanffy function

The von Bertalanffy growth function (VBGF), or von Bertalanffy curve, is a type of growth curve model for a time series and is named after Ludwig von Bertalanffy. It is a special case of the generalised logistic function. The growth curve is used to model mean length from age in animals. The function is commonly applied in ecology to model fish growth.

The model can be written as the following:

$$L(a) = L_\infty\left(1 - e^{-k(a - t_0)}\right),$$

where $a$ is age, $k$ is the growth coefficient, $t_0$ is a value used to calculate size when age is zero, and $L_\infty$ is asymptotic size. It is the solution of the following linear differential equation:

$$\frac{dL}{da} = k(L_\infty - L).$$
Wronskian

In mathematics, the Wronskian (or Wrońskian) is a determinant introduced by Józef Hoene-Wroński (1776) and named by Thomas Muir (1882, Chapter XVIII). It is used in the study of differential equations, where it can sometimes show linear independence in a set of solutions.
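
For n solutions $y_1, \ldots, y_n$ of a linear differential equation, the Wronskian is the determinant of the matrix whose rows are the functions and their first n − 1 derivatives; a nonzero Wronskian certifies linear independence. A minimal SymPy sketch for the basis cos x, sin x of y″ + y = 0:

```python
# Wronskian of cos(x) and sin(x): det([[cos, sin], [-sin, cos]]) = 1,
# which is nonzero, so the two solutions are linearly independent.
import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.cos(x), sp.sin(x)
W = sp.Matrix([[y1, y2],
               [sp.diff(y1, x), sp.diff(y2, x)]]).det()
print(sp.simplify(W))  # 1
```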
