Approximation

An approximation is anything that is similar but not exactly equal to something else.

Etymology and usage

The word approximation is derived from Latin approximatus, from proximus meaning very near and the prefix ap- (ad- before p) meaning to.[1] Words like approximate, approximately and approximation are used especially in technical or scientific contexts. In everyday English, words such as roughly or around are used with a similar meaning.[2] It is often found abbreviated as approx.

The term can be applied to various properties (e.g., value, quantity, image, description) that are nearly, but not exactly correct; similar, but not exactly the same (e.g., the approximate time was 10 o'clock).

Although approximation is most often applied to numbers, it is also frequently applied to such things as mathematical functions, shapes, and physical laws.

In science, approximation can refer to using a simpler process or model when the correct model is difficult to use. An approximate model is used to make calculations easier. Approximations might also be used if incomplete information prevents use of exact representations.

The type of approximation used depends on the available information, the degree of accuracy required, the sensitivity of the problem to this data, and the savings (usually in time and effort) that can be achieved by approximation.

Mathematics

Approximation theory is a branch of mathematics, and a quantitative part of functional analysis. Diophantine approximation deals with approximations of real numbers by rational numbers. Approximation usually occurs when an exact form or an exact numerical value is unknown or difficult to obtain. However, some known form may exist and may be able to represent the real form so that no significant deviation can be found. It is also used when a number is not rational, such as the number π, which is often shortened to 3.14159, or √2, which is often shortened to 1.414.

Numerical approximations sometimes result from using a small number of significant digits. Calculations are likely to involve rounding errors leading to approximation. Log tables, slide rules and calculators produce approximate answers to all but the simplest calculations. The results of computer calculations are normally an approximation expressed in a limited number of significant digits, although they can be programmed to produce more precise results.[3] Approximation can occur when a decimal number cannot be expressed in a finite number of binary digits.
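For instance, a short Python check (an illustrative sketch, not part of the original article) shows that the decimal 0.1 has no exact finite binary representation, so the stored value, and any arithmetic on it, is an approximation:

  from decimal import Decimal

  # The value actually stored for the literal 0.1 is the nearest representable
  # binary floating-point number, not 0.1 itself.
  print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625
  print(0.1 + 0.2)      # 0.30000000000000004, a rounding error from the approximations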

Related to approximation of functions is the asymptotic value of a function, i.e. the value as one or more of a function's parameters becomes arbitrarily large. For example, the sum (k/2) + (k/4) + (k/8) + ... + (k/2^n) is asymptotically equal to k. Unfortunately, no consistent notation is used throughout mathematics: some texts use ≈ to mean approximately equal and ~ to mean asymptotically equal, whereas other texts use the symbols the other way around.
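A brief numerical check of the sum above (an illustrative sketch, not from the source text) shows the partial sums approaching k as n grows:

  k = 1.0
  for n in (4, 8, 16, 32):
      partial = sum(k / 2**i for i in range(1, n + 1))   # (k/2) + (k/4) + ... + (k/2^n)
      print(n, partial)   # approaches k = 1.0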

As another example, in order to accelerate the convergence rate of evolutionary algorithms, fitness approximation, which builds a model of the fitness function from which promising search steps can be chosen, is a common solution.

Science

Approximation arises naturally in scientific experiments. The predictions of a scientific theory can differ from actual measurements. This can be because there are factors in the real situation that are not included in the theory. For example, simple calculations may not include the effect of air resistance. Under these circumstances, the theory is an approximation to reality. Differences may also arise because of limitations in the measuring technique. In this case, the measurement is an approximation to the actual value.

The history of science shows that earlier theories and laws can be approximations to some deeper set of laws. Under the correspondence principle, a new scientific theory should reproduce the results of older, well-established, theories in those domains where the old theories work.[4] The old theory becomes an approximation to the new theory.

Some problems in physics are too complex to solve by direct analysis, or progress could be limited by available analytical tools. Thus, even when the exact representation is known, an approximation may yield a sufficiently accurate solution while reducing the complexity of the problem significantly. Physicists often approximate the shape of the Earth as a sphere even though more accurate representations are possible, because many physical characteristics (e.g., gravity) are much easier to calculate for a sphere than for other shapes.

Approximation is also used to analyze the motion of several planets orbiting a star. This is extremely difficult due to the complex interactions of the planets' gravitational effects on each other.[5] An approximate solution is effected by performing iterations. In the first iteration, the planets' gravitational interactions are ignored, and the star is assumed to be fixed. If a more precise solution is desired, another iteration is then performed, using the positions and motions of the planets as identified in the first iteration, but adding a first-order gravity interaction from each planet on the others. This process may be repeated until a satisfactorily precise solution is obtained.

The use of perturbations to correct for the errors can yield more accurate solutions. Simulations of the motions of the planets and the star also yield more accurate solutions.

The most common versions of philosophy of science accept that empirical measurements are always approximations—they do not perfectly represent what is being measured.

The error-tolerance property of several applications (e.g., graphics applications) allows use of approximation (e.g., lowering the precision of numerical computations) to improve performance and energy efficiency.[6] This approach of using deliberate, controlled approximation for achieving various optimizations is referred to as approximate computing.

Unicode

Symbols used to denote items that are approximately equal are wavy or dotted equals signs.[7]

  • ≈ (U+2248, almost equal to)
  • ≉ (U+2249, not almost equal to)
  • ≃ (U+2243), a combination of "≈" and "=", also used to indicate asymptotically equal to
    • ≒ (U+2252), which is used like "≈" in Japan, Taiwan, and Korea
    • ≓ (U+2253), a reversed variation of "≒"
  • ≅ (U+2245), another combination of "≈" and "=", which is used to indicate isomorphism or congruence
  • ≊ (U+224A), yet another combination of "≈" and "=", used to indicate equivalence or approximate equivalence
  • ∼ (U+223C), which is also sometimes used to indicate proportionality
  • ∽ (U+223D), which is also sometimes used to indicate proportionality
  • ≐ (U+2250, approaches the limit), which can be used to represent the approach of a variable, y, to a limit; as in the common syntax, y ≐ 0
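The code points above can be listed programmatically; a small Python sketch (illustrative only):

  import unicodedata

  # Print each code point from the list above with its glyph and official Unicode name.
  for cp in (0x2248, 0x2249, 0x2243, 0x2252, 0x2253, 0x2245, 0x224A, 0x223C, 0x223D, 0x2250):
      print(f"U+{cp:04X}  {chr(cp)}  {unicodedata.name(chr(cp))}")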

LaTeX Symbols

≈ (\approx), usually to indicate approximation between numbers, like π ≈ 3.14.
≉ (\not\approx), usually to indicate that numbers are not approximately equal (1 ≉ 2).
≃ (\simeq), usually to indicate asymptotic equivalence between functions, like f(n) ≃ 3n². So writing π ≃ 3.14 would be wrong, despite wide use.
∼ (\sim), usually to indicate proportionality between functions; the example from the line above becomes f(n) ∼ 3n².
≅ (\cong), usually to indicate congruence between figures, like △ABC ≅ △DEF.
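A minimal LaTeX fragment (an illustrative sketch, not part of the original article) showing how these commands are typeset:

  \documentclass{article}
  \usepackage{amssymb}   % extra relation symbols
  \begin{document}
  % Each relation from the list above, in a single display equation.
  \[ \pi \approx 3.14, \quad 1 \not\approx 2, \quad f(n) \simeq 3n^2,
     \quad n^2 + n \sim n^2, \quad \triangle ABC \cong \triangle DEF \]
  \end{document}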

References

  1. ^ The Concise Oxford Dictionary, Eighth edition 1990, ISBN 0-19-861243-5
  2. ^ Longman Dictionary of Contemporary English, Pearson Education Ltd 2009, ISBN 978 1 4082 1532 6
  3. ^ Numerical Computation Guide
  4. ^ Encyclopædia Britannica
  5. ^ The three body problem
  6. ^ Mittal, Sparsh (May 2016). "A Survey of Techniques for Approximate Computing". ACM Comput. Surv. ACM. 48 (4): 62:1–62:33. doi:10.1145/2893356.
  7. ^ "Mathematical Operators – Unicode" (PDF). Retrieved 2013-04-20.

Adiabatic process

An adiabatic process occurs without transfer of heat or mass of substances between a thermodynamic system and its surroundings. In an adiabatic process, energy is transferred to the surroundings only as work. The adiabatic process provides a rigorous conceptual basis for the theory used to expound the first law of thermodynamics, and as such it is a key concept in thermodynamics.

Some chemical and physical processes occur so rapidly that they may be conveniently described by the term "adiabatic approximation", meaning that there is not enough time for the transfer of energy as heat to take place to or from the system. By way of example, the adiabatic flame temperature is an idealization that uses the "adiabatic approximation" so as to provide an upper limit calculation of temperatures produced by combustion of a fuel. The adiabatic flame temperature is the temperature that would be achieved by a flame if the process of combustion took place in the absence of heat loss to the surroundings.

In meteorology and oceanography, adiabatic cooling produces condensation of moisture or salinity, and the parcel becomes oversaturated. The excess must therefore be removed, and the process becomes a pseudo-adiabatic process in which the liquid water or salt that condenses is assumed to be removed as soon as it is formed, by idealized instantaneous precipitation. The pseudoadiabatic process is only defined for expansion, since a parcel that is compressed becomes warmer and remains undersaturated.

Approximation algorithm

In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to NP-hard optimization problems with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are also many approximation algorithms that provide an additive guarantee on the quality of the returned solution. A notable example of an approximation algorithm that provides both is the classic approximation algorithm of Lenstra, Shmoys and Tardos for Scheduling on Unrelated Parallel Machines.
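As an illustration of a multiplicative guarantee (a classic textbook example, not taken from the text above), the greedy matching-based algorithm for minimum vertex cover returns a cover at most twice the optimal size; a minimal Python sketch:

  def vertex_cover_2approx(edges):
      # Greedy maximal-matching heuristic: a classic 2-approximation for
      # minimum vertex cover (illustrative sketch).
      cover = set()
      for u, v in edges:
          if u not in cover and v not in cover:
              # Both endpoints of an uncovered edge are taken; any optimal
              # cover must contain at least one of them, giving the factor 2.
              cover.add(u)
              cover.add(v)
      return cover

  # Path 1-2-3-4: the optimum cover {2, 3} has size 2, and the sketch
  # returns a cover of size at most 4.
  print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))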

The design and analysis of approximation algorithms crucially involves a mathematical proof certifying the quality of the returned solutions in the worst case. This distinguishes them from heuristics such as annealing or genetic algorithms, which find reasonably good solutions on some inputs, but provide no clear indication at the outset on when they may succeed or fail.

There is widespread interest in theoretical computer science to better understand the limits to which we can approximate certain famous optimization problems. For example, one of the long-standing open questions in computer science is to determine whether there is an algorithm that outperforms the 1.5 approximation algorithm of Christofides to the Metric Traveling Salesman Problem. The desire to understand hard optimization problems from the perspective of approximability is motivated by the discovery of surprising mathematical connections and broadly applicable techniques to design algorithms for hard optimization problems. One well-known example of the former is the Goemans-Williamson algorithm for Maximum Cut which solves a graph theoretic problem using high dimensional geometry.

Approximation error

The approximation error in some data is the discrepancy between an exact value and some approximation to it. An approximation error can occur because:

  • the measurement of the data is not precise owing to the instruments (e.g., the actual length of a piece of paper is 4.5 cm, but since the ruler only shows whole centimetres, it is read as 5 cm), or
  • approximations are used instead of the real data (e.g., 3.14 instead of π).

In the mathematical field of numerical analysis, the numerical stability of an algorithm indicates how the error is propagated by the algorithm.
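A minimal Python sketch of absolute and relative error for the π example above (illustrative, with hypothetical variable names):

  import math

  exact = math.pi
  approx = 3.14

  absolute_error = abs(exact - approx)           # about 0.0016
  relative_error = absolute_error / abs(exact)   # about 0.00051
  print(absolute_error, relative_error)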

Approximation theory

In mathematics, approximation theory is concerned with how functions can best be approximated with simpler functions, and with quantitatively characterizing the errors introduced thereby. Note that what is meant by best and simpler will depend on the application.

A closely related topic is the approximation of functions by generalized Fourier series, that is, approximations based upon summation of a series of terms based upon orthogonal polynomials.

One problem of particular interest is that of approximating a function in a computer mathematical library, using operations that can be performed on the computer or calculator (e.g. addition and multiplication), such that the result is as close to the actual function as possible. This is typically done with polynomial or rational (ratio of polynomials) approximations.

The objective is to make the approximation as close as possible to the actual function, typically with an accuracy close to that of the underlying computer's floating point arithmetic. This is accomplished by using a polynomial of high degree, and/or narrowing the domain over which the polynomial has to approximate the function.

Narrowing the domain can often be done through the use of various addition or scaling formulas for the function being approximated. Modern mathematical libraries often reduce the domain into many tiny segments and use a low-degree polynomial for each segment.
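As a rough illustration (a sketch, not the method of any particular library), NumPy can fit a low-degree polynomial to a function over a narrowed segment and report the worst-case error:

  import numpy as np

  # Fit a degree-5 polynomial to sin(x) on the narrowed domain [0, pi/4].
  x = np.linspace(0.0, np.pi / 4, 200)
  coeffs = np.polyfit(x, np.sin(x), deg=5)
  approx = np.polyval(coeffs, x)

  # Maximum error over the segment; raising the degree or narrowing the
  # domain further drives this toward floating-point accuracy.
  print(np.max(np.abs(approx - np.sin(x))))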

Approximations of π

Approximations for the mathematical constant pi (π) in the history of mathematics reached an accuracy within 0.04% of the true value before the beginning of the Common Era (Archimedes). In Chinese mathematics, this was improved to approximations correct to what corresponds to about seven decimal digits by the 5th century.

Further progress was not made until the 15th century (Jamshīd al-Kāshī). Early modern mathematicians reached an accuracy of 35 digits by the beginning of the 17th century (Ludolph van Ceulen), and 126 digits by the 19th century (Jurij Vega), surpassing the accuracy required for any conceivable application outside of pure mathematics.

The record of manual approximation of π is held by William Shanks, who calculated 527 digits correctly in the years preceding 1873. Since the middle of the 20th century, the approximation of π has been the task of electronic digital computers; as of November 2016, the record was 22.4 trillion digits. (For a comprehensive account, see Chronology of computation of π.) In March 2019, Emma Haruka Iwao, a Google employee from Japan, calculated π to a new world-record length of 31 trillion digits with the help of the company's cloud computing service.

Binomial distribution

In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes–no question, and each with its own boolean-valued outcome: success/yes/true/one (with probability p) or failure/no/false/zero (with probability q = 1 − p).

A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance.

The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and is widely used.
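A small standard-library sketch (illustrative) comparing hypergeometric probabilities with their binomial approximation when N is much larger than n:

  from math import comb

  N, K, n = 10_000, 3_000, 10   # population size, successes in population, sample size
  p = K / N

  for k in range(n + 1):
      hyper = comb(K, k) * comb(N - K, n - k) / comb(N, n)   # sampling without replacement
      binom = comb(n, k) * p**k * (1 - p)**(n - k)           # with-replacement approximation
      print(k, round(hyper, 5), round(binom, 5))             # the two columns nearly agree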

Boussinesq approximation (water waves)

In fluid dynamics, the Boussinesq approximation for water waves is an approximation valid for weakly non-linear and fairly long waves. The approximation is named after Joseph Boussinesq, who first derived it in response to the observation by John Scott Russell of the wave of translation (also known as solitary wave or soliton). The 1872 paper of Boussinesq introduces the equations now known as the Boussinesq equations.

The Boussinesq approximation for water waves takes into account the vertical structure of the horizontal and vertical flow velocity. This results in non-linear partial differential equations, called Boussinesq-type equations, which incorporate frequency dispersion (as opposed to the shallow water equations, which are not frequency-dispersive). In coastal engineering, Boussinesq-type equations are frequently used in computer models for the simulation of water waves in shallow seas and harbours.

While the Boussinesq approximation is applicable to fairly long waves – that is, when the wavelength is large compared to the water depth – the Stokes expansion is more appropriate for short waves (when the wavelength is of the same order as the water depth, or shorter).

Fast inverse square root

Fast inverse square root, sometimes referred to as Fast InvSqrt() or by the hexadecimal constant 0x5F3759DF, is an algorithm that estimates ​1⁄√x, the reciprocal (or multiplicative inverse) of the square root of a 32-bit floating-point number x in IEEE 754 floating-point format. This operation is used in digital signal processing to normalize a vector, i.e., scale it to length 1. For example, computer graphics programs use inverse square roots to compute angles of incidence and reflection for lighting and shading. The algorithm is best known for its implementation in 1999 in the source code of Quake III Arena, a first-person shooter video game that made heavy use of 3D graphics. The algorithm only started appearing on public forums such as Usenet in 2002 or 2003. At the time, it was generally computationally expensive to compute the reciprocal of a floating-point number, especially on a large scale; the fast inverse square root bypassed this step.

The algorithm accepts a 32-bit floating-point number as the input and stores a halved value for later use. Then, treating the bits representing the floating-point number as a 32-bit integer, a logical shift right by one bit is performed and the result subtracted from the magic number 0x5F3759DF, which is a floating-point representation of an approximation of √(2^127). This results in the first approximation of the inverse square root of the input. Treating the bits again as a floating-point number, it runs one iteration of Newton's method, yielding a more precise approximation.
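A Python transcription of the steps just described (the original is C code from Quake III Arena; this port, using the standard struct module, is illustrative only):

  import struct

  def fast_inverse_sqrt(x: float) -> float:
      half_x = 0.5 * x
      # Reinterpret the 32-bit float's bits as an unsigned integer.
      i = struct.unpack('<I', struct.pack('<f', x))[0]
      # Logical shift right by one bit, subtracted from the magic constant.
      i = 0x5F3759DF - (i >> 1)
      # Reinterpret the bits as a float: the first approximation.
      y = struct.unpack('<f', struct.pack('<I', i))[0]
      # One iteration of Newton's method refines the estimate.
      return y * (1.5 - half_x * y * y)

  print(fast_inverse_sqrt(4.0))   # roughly 0.5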

The algorithm was originally attributed to John Carmack, but an investigation showed that the code had deeper roots in both the hardware and software side of computer graphics. Adjustments and alterations passed through both Silicon Graphics and 3dfx Interactive, with Gary Tarolli's implementation for the SGI Indigo as the earliest known use. It is not known how the constant was originally derived, though investigation has shed some light on possible methods.

Finite difference

A finite difference is a mathematical expression of the form f (x + b) − f (x + a). If a finite difference is divided by b − a, one gets a difference quotient. The approximation of derivatives by finite differences plays a central role in finite difference methods for the numerical solution of differential equations, especially boundary value problems.

Certain recurrence relations can be written as difference equations by replacing iteration notation with finite differences.

Today, the term "finite difference" is often taken as synonymous with finite difference approximations of derivatives, especially in the context of numerical methods. Finite difference approximations are finite difference quotients in the terminology employed above.
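A short sketch (illustrative) of forward and central difference quotients used to approximate a derivative:

  import math

  def forward_difference(f, x, h):
      return (f(x + h) - f(x)) / h            # error of order h

  def central_difference(f, x, h):
      return (f(x + h) - f(x - h)) / (2 * h)  # error of order h^2

  h = 1e-4
  print(forward_difference(math.sin, 1.0, h),
        central_difference(math.sin, 1.0, h),
        math.cos(1.0))   # exact derivative of sin at 1.0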

Finite differences have also been the topic of study as abstract self-standing mathematical objects, such as in works by George Boole (1860), L. M. Milne-Thomson (1933), and Károly Jordan (1939), tracing its origins back to one of Jost Bürgi's algorithms (c. 1592) and others including Isaac Newton. In this viewpoint, the formal calculus of finite differences is an alternative to the calculus of infinitesimals.

Iterative method

In computational mathematics, an iterative method is a mathematical procedure that uses an initial guess to generate a sequence of improving approximate solutions for a class of problems, in which the n-th approximation is derived from the previous ones. A specific implementation of an iterative method, including the termination criteria, is an algorithm of the iterative method. An iterative method is called convergent if the corresponding sequence converges for given initial approximations. A mathematically rigorous convergence analysis of an iterative method is usually performed; however, heuristic-based iterative methods are also common.

In contrast, direct methods attempt to solve the problem by a finite sequence of operations. In the absence of rounding errors, direct methods would deliver an exact solution (like solving a linear system of equations by Gaussian elimination). Iterative methods are often the only choice for nonlinear equations. However, iterative methods are often useful even for linear problems involving a large number of variables (sometimes of the order of millions), where direct methods would be prohibitively expensive (and in some cases impossible) even with the best available computing power.
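As an illustrative sketch (not from the source), a Jacobi iteration for a small diagonally dominant linear system, the kind of problem a direct method such as Gaussian elimination would solve exactly:

  # Solve A x = b iteratively; the exact solution is x = [2.0, 1.0].
  A = [[4.0, 1.0],
       [1.0, 3.0]]
  b = [9.0, 5.0]

  x = [0.0, 0.0]                 # initial guess
  for _ in range(50):            # termination criterion: a fixed iteration count
      x = [(b[0] - A[0][1] * x[1]) / A[0][0],
           (b[1] - A[1][0] * x[0]) / A[1][1]]   # both entries use the previous x
  print(x)                       # converges toward [2.0, 1.0]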

Least squares

The method of least squares is a standard approach in regression analysis to approximate the solution of overdetermined systems, i.e., sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the residuals made in the results of every single equation.

The most important application is in data fitting. The best fit in the least-squares sense minimizes the sum of squared residuals (a residual being: the difference between an observed value, and the fitted value provided by a model). When the problem has substantial uncertainties in the independent variable (the x variable), then simple regression and least-squares methods have problems; in such cases, the methodology required for fitting errors-in-variables models may be considered instead of that for least squares.

Least-squares problems fall into two categories: linear or ordinary least squares and nonlinear least squares, depending on whether or not the residuals are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis; it has a closed-form solution. The nonlinear problem is usually solved by iterative refinement; at each iteration the system is approximated by a linear one, and thus the core calculation is similar in both cases.
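A minimal NumPy sketch (illustrative) of an ordinary least-squares fit of a line to an overdetermined set of equations:

  import numpy as np

  # Four observations, two unknowns (slope a and intercept c): overdetermined.
  x = np.array([0.0, 1.0, 2.0, 3.0])
  y = np.array([0.1, 0.9, 2.2, 2.9])

  A = np.vstack([x, np.ones_like(x)]).T          # design matrix [x, 1]
  (a, c), residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
  print(a, c)   # parameters minimizing the sum of squared residuals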

Polynomial least squares describes the variance in a prediction of the dependent variable as a function of the independent variable and the deviations from the fitted curve.

When the observations come from an exponential family and mild conditions are satisfied, least-squares estimates and maximum-likelihood estimates are identical. The method of least squares can also be derived as a method of moments estimator.

The following discussion is mostly presented in terms of linear functions but the use of least squares is valid and practical for more general families of functions. Also, by iteratively applying local quadratic approximation to the likelihood (through the Fisher information), the least-squares method may be used to fit a generalized linear model.

The least-squares method is usually credited to Carl Friedrich Gauss (1795), but it was first published by Adrien-Marie Legendre (1805).

Linearized gravity

Linearized gravity is an approximation scheme in general relativity in which the nonlinear contributions from the spacetime metric are ignored, simplifying the study of many problems while still producing useful approximate results.

Numerical analysis

Numerical analysis is the study of algorithms that use numerical approximation (as opposed to general symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). Numerical analysis naturally finds application in all fields of engineering and the physical sciences, but in the 21st century the life sciences, social sciences, medicine, business and even the arts have also adopted elements of scientific computation. As an aspect of mathematics and computer science that generates, analyzes, and implements algorithms, numerical analysis has grown alongside the revolution in computing power, which has made realistic mathematical models commonplace in science and engineering; increasingly complex numerical analysis is required to provide solutions to these more involved models of the world. Ordinary differential equations appear in celestial mechanics (planets, stars and galaxies); numerical linear algebra is important for data analysis; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology.

Before the advent of modern computers, numerical methods often depended on hand interpolation in large printed tables. Since the mid 20th century, computers calculate the required functions instead. These same interpolation formulas nevertheless continue to be used as part of the software algorithms for solving differential equations.

One of the earliest mathematical writings is a Babylonian tablet from the Yale Babylonian Collection (YBC 7289), which gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square. Being able to compute the sides of a triangle (and hence, being able to compute square roots) is extremely important, for instance, in astronomy, carpentry and construction.

Numerical analysis continues this long tradition of practical mathematical calculations. Much like the Babylonian approximation of the square root of 2, modern numerical analysis does not seek exact answers, because exact answers are often impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors.
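A short sketch (illustrative) of Heron's method, an iterative scheme in the same spirit as the Babylonian approximation of √2:

  # Heron's (Babylonian) method for the square root of 2.
  x = 1.0
  for _ in range(5):
      x = 0.5 * (x + 2.0 / x)   # average the guess with 2 / guess
  print(x)   # 1.41421356..., close to the tablet's sexagesimal value 1;24,51,10 ≈ 1.414213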

Order of approximation

In science, engineering, and other quantitative disciplines, orders of approximation refer to formal or informal expressions for how accurate an approximation is. In formal expressions, the ordinal number used before the word order refers to the highest term in the series expansion used in the approximation. The choice of series expansion depends on the scientific method used to investigate a phenomenon. The expression order of approximation is expected to indicate progressively more refined approximations of a function in a specified interval. If a quantity is constant within the whole interval, approximating it with a second-order Taylor series will not increase the accuracy. Thus the numbers zeroth, first, second etc. used formally in the above meaning do not directly give information about percent error or significant figures.

This formal usage of order of approximation corresponds to the order of the power series representing the error, which is the first nonzero higher derivative of the error. The expressions a zeroth-order approximation, a first-order approximation, a second-order approximation, and so forth are used as fixed phrases.

The omission of the word order leads to phrases that have less formal meaning. Phrases like first approximation or to a first approximation may refer to a roughly approximate value of a quantity. The phrase to a zeroth approximation indicates a wild guess. The expression order of approximation is sometimes informally used to mean the number of significant figures, in increasing order of accuracy, or to the order of magnitude. However, this may be confusing as these formal expressions do not directly refer to the order of derivatives.

Formally, an nth-order approximation is one where the order of magnitude of the error is at most x^(n+1), where x is the small parameter of the expansion; in terms of big O notation, the error is O(x^(n+1)). In the case of a smooth function, the nth-order approximation is a polynomial of degree n, which is obtained by truncating the Taylor series to this degree.
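A numerical illustration (a sketch, not from the text) that truncating the Taylor series at degree n leaves an error shrinking like x^(n+1); here the first-order approximation of exp near 0 has an error of order x^2:

  import math

  # First-order (linear) approximation of exp at 0 is 1 + x; error is O(x^2).
  for x in (0.1, 0.01, 0.001):
      error = abs(math.exp(x) - (1 + x))
      print(x, error)   # the error drops by about 100x when x drops by 10x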

Pi Day

Pi Day is an annual celebration of the mathematical constant π (pi). Pi Day is observed on March 14 (3/14 in the month/day format) since 3, 1, and 4 are the first three significant digits of π. In 2009, the United States House of Representatives supported the designation of Pi Day.

Pi Approximation Day is observed on July 22 (22/7 in the day/month format), since the fraction 22⁄7 is a common approximation of π, which is accurate to two decimal places and dates from Archimedes.

Two Pi Day, also known as Tau Day, is lightly observed on June 28 (6/28 in the month/day format).

Stirling's approximation

In mathematics, Stirling's approximation (or Stirling's formula) is an approximation for factorials. It is a good approximation, leading to accurate results even for small values of n. It is named after James Stirling, though it was first stated by Abraham de Moivre.

The version of the formula typically used in applications is

ln(n!) = n ln n − n + O(ln n)

(in big O notation, as n → ∞), or, by changing the base of the logarithm (for instance in the worst-case lower bound for comparison sorting),

log₂(n!) = n log₂ n − n log₂ e + O(log₂ n).

Specifying the constant in the O(ln n) error term gives (1/2) ln(2πn), yielding the more precise formula:

n! ~ √(2πn) (n/e)^n,

where the sign ~ means that the two quantities are asymptotic: their ratio tends to 1 as n tends to infinity.

One may also give simple bounds valid for all positive integers n, rather than only for large n:

√(2πn) (n/e)^n ≤ n! ≤ e √n (n/e)^n

for n ≥ 1. These follow from the more precise error bounds discussed below.
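A quick numerical check (illustrative) of the asymptotic formula against the exact factorial:

  import math

  def stirling(n):
      return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

  for n in (5, 10, 20):
      print(n, math.factorial(n), stirling(n),
            stirling(n) / math.factorial(n))   # the ratio tends to 1 as n grows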

Stochastic approximation

Stochastic approximation algorithms are recursive update rules that can be used, among other things, to solve optimization problems and fixed point equations (including standard linear systems) when the collected data is subject to noise. In engineering, optimization problems are often of this type when you do not have a mathematical model of the system (which can be too complex) but still would like to optimize its behavior by adjusting certain parameters.

For this purpose, you can do experiments or run simulations to evaluate the performance of the system at given values of the parameters. Stochastic approximation algorithms have also been used in the social sciences to describe collective dynamics: fictitious play in learning theory and consensus algorithms can be studied using their theory.

Stochastic approximation methods are a family of iterative stochastic optimization algorithms that attempt to find zeroes or extrema of functions which cannot be computed directly, but only estimated via noisy observations. This situation is common, for instance, when taking noisy measurements of empirical data, or when computing parameters of a statistical model.

Mathematically, the goal of these algorithms is to understand properties of a function f(θ) = E[F(θ, ξ)], which is the expected value of a function F depending on a random variable ξ, but to do so without evaluating f directly. Instead, the algorithms use random samples of F(θ, ξ) to efficiently approximate properties of f, such as zeros or extrema.

The earliest, and prototypical, algorithms of this kind are the Robbins-Monro and Kiefer-Wolfowitz algorithms introduced respectively in 1951 and 1952.
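An illustrative Robbins-Monro sketch (with a hypothetical toy objective): finding the root of f(θ) = θ − 2 when only noisy evaluations are available:

  import random

  def noisy_f(theta):
      # Noisy observation of f(theta) = theta - 2; the root is theta = 2.
      return (theta - 2.0) + random.gauss(0.0, 0.1)

  theta = 0.0
  for n in range(1, 10_001):
      a_n = 1.0 / n                  # step sizes with sum a_n = inf and sum a_n^2 < inf
      theta -= a_n * noisy_f(theta)  # Robbins-Monro update
  print(theta)   # close to 2.0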

Taylor's theorem

In calculus, Taylor's theorem gives an approximation of a k-times differentiable function around a given point by a k-th order Taylor polynomial. For analytic functions the Taylor polynomials at a given point are finite-order truncations of its Taylor series, which completely determines the function in some neighborhood of the point. It can be thought of as the extension of linear approximation to higher-order polynomials, and in the case k = 2 it is often referred to as a quadratic approximation. The exact content of "Taylor's theorem" is not universally agreed upon. Indeed, there are several versions of it applicable in different situations, and some of them contain explicit estimates on the approximation error of the function by its Taylor polynomial.

Taylor's theorem is named after the mathematician Brook Taylor, who stated a version of it in 1712. Yet an explicit expression of the error was not provided until much later on by Joseph-Louis Lagrange. An earlier version of the result was already mentioned in 1671 by James Gregory.

Taylor's theorem is taught in introductory-level calculus courses and is one of the central elementary tools in mathematical analysis. Within pure mathematics it is the starting point of more advanced asymptotic analysis, and it is commonly used in more applied fields of numerics, as well as in mathematical physics. Taylor's theorem also generalizes to multivariate and vector-valued functions between spaces of any dimensions n and m. This generalization of Taylor's theorem is the basis for the definition of so-called jets, which appear in differential geometry and partial differential equations.
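A short sketch (illustrative) comparing partial sums of the Taylor series of sin about 0 with the true value:

  import math

  def taylor_sin(x, terms):
      # Partial sum of the Taylor series of sin about 0: x - x^3/3! + x^5/5! - ...
      return sum((-1)**i * x**(2*i + 1) / math.factorial(2*i + 1)
                 for i in range(terms))

  x = 0.5
  for terms in (1, 2, 3):
      print(terms, taylor_sin(x, terms), math.sin(x))   # rapid improvement with more terms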

WKB approximation

In mathematical physics, the WKB approximation or WKB method is a method for finding approximate solutions to linear differential equations with spatially varying coefficients. It is typically used for a semiclassical calculation in quantum mechanics in which the wavefunction is recast as an exponential function, semiclassically expanded, and then either the amplitude or the phase is taken to be changing slowly.

The name is an initialism for Wentzel–Kramers–Brillouin. It is also known as the LG or Liouville–Green method. Other often-used letter combinations include JWKB and WKBJ, where the "J" stands for Jeffreys.
