# Time domain

Time domain refers to the analysis of mathematical functions, physical signals, or time series of economic or environmental data with respect to time. In the time domain, the signal or function's value is known for all real numbers in the case of continuous time, or at various separate instants in the case of discrete time. An oscilloscope is a tool commonly used to visualize real-world signals in the time domain. A time-domain graph shows how a signal changes with time, whereas a frequency-domain graph shows how much of the signal lies within each given frequency band over a range of frequencies.

The Fourier transform relates a function in the time domain to its representation in the frequency domain. The component frequencies, spread across the frequency spectrum, appear as peaks in the frequency-domain representation.

## Origin of term

The use of the contrasting terms time domain and frequency domain developed in U.S. communication engineering in the late 1940s, with the terms appearing together without definition by 1950.[1] When an analysis uses the second or one of its multiples as a unit of measurement, then it is in the time domain. When an analysis concerns reciprocal units such as the hertz, then it is in the frequency domain.

## References

1. ^ Lee, Y. W.; Cheatham, T. P., Jr.; Wiesner, J. B. (1950). "Application of Correlation Analysis to the Detection of Periodic Signals in Noise". Proceedings of the IRE. 38 (10): 1165–1171. doi:10.1109/JRPROC.1950.233423.
# Acoustic impedance

Acoustic impedance and specific acoustic impedance are measures of the opposition that a system presents to the acoustic flow resulting from an acoustic pressure applied to the system. The SI unit of acoustic impedance is the pascal second per cubic metre (Pa·s/m³) or the rayl per square metre (rayl/m²), while that of specific acoustic impedance is the pascal second per metre (Pa·s/m) or the rayl. In this article the symbol rayl denotes the MKS rayl. There is a close analogy with electrical impedance, which measures the opposition that a system presents to the electrical flow resulting from an electrical voltage applied to the system.
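For a plane travelling wave, the specific acoustic impedance of a medium reduces to the product of its density and the speed of sound in it. A minimal sketch (the numerical values for air and sea water are approximate, illustrative figures, not exact constants):

```python
# Specific acoustic impedance of a medium for a plane travelling wave:
# z = rho * c (density times speed of sound), in Pa*s/m (the MKS rayl).
def specific_acoustic_impedance(density_kg_m3: float, speed_m_s: float) -> float:
    return density_kg_m3 * speed_m_s

# Approximate values for air at 20 degrees C:
z_air = specific_acoustic_impedance(1.204, 343.0)      # ~413 rayl
# Sea water presents far more opposition: ~1025 kg/m^3 at ~1500 m/s.
z_water = specific_acoustic_impedance(1025.0, 1500.0)  # ~1.5e6 rayl
print(z_air, z_water)
```

The five-orders-of-magnitude gap between the two values is why an air–water interface reflects almost all incident sound.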

# Autocorrelation

Autocorrelation, also known as serial correlation, is the correlation of a signal with a delayed copy of itself as a function of delay. Informally, it is the similarity between observations as a function of the time lag between them. The analysis of autocorrelation is a mathematical tool for finding repeating patterns, such as the presence of a periodic signal obscured by noise, or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies. It is often used in signal processing for analyzing functions or series of values, such as time domain signals.

Different fields of study define autocorrelation differently, and not all of these definitions are equivalent. In some fields, the term is used interchangeably with autocovariance.

Unit root processes, trend stationary processes, autoregressive processes, and moving average processes are specific forms of processes with autocorrelation.
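The use of autocorrelation to recover a periodic signal obscured by noise can be sketched with the sample autocorrelation (normalized by the lag-0 autocovariance); the signal, noise level, and period below are illustrative assumptions:

```python
import numpy as np

def sample_autocorrelation(x, max_lag):
    """Sample autocorrelation r(k) = c(k)/c(0), where c(k) is the
    (biased) sample autocovariance at lag k."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    c0 = np.dot(x, x) / n
    return np.array([np.dot(x[:n - k], x[k:]) / n / c0 for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
t = np.arange(1000)
period = 50
# A sinusoid buried in noise with four times its variance:
signal = np.sin(2 * np.pi * t / period) + rng.normal(scale=2.0, size=t.size)

r = sample_autocorrelation(signal, 120)
# The hidden periodicity reappears as a peak in r near the true period.
peak_lag = int(np.argmax(r[25:100])) + 25
print(peak_lag)
```

Even though the periodicity is invisible in a plot of the raw samples, the autocorrelation peaks near lag 50.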

# Chipless RFID

Chipless RFID tags are RFID tags that do not require a microchip in the transponder.

RFIDs offer longer range and the ability to be read automatically, unlike barcodes, which require a human operator for interrogation. The main challenge to their adoption is cost: the design and fabrication of the ASICs needed for RFID are the major component of a tag's cost, so removing the IC altogether can significantly reduce it. The major challenges in designing chipless RFID are data encoding and transmission.

# Computational electromagnetics

Computational electromagnetics, computational electrodynamics or electromagnetic modeling is the process of modeling the interaction of electromagnetic fields with physical objects and the environment.

It typically involves using computationally efficient approximations to Maxwell's equations and is used to calculate antenna performance, electromagnetic compatibility, radar cross section and electromagnetic wave propagation when not in free space.

A specific part of computational electromagnetics deals with electromagnetic radiation scattered and absorbed by small particles.

# Control theory

Control theory in control systems engineering is a subfield of mathematics that deals with the control of continuously operating dynamical systems in engineered processes and machines. The objective is to develop a control model for controlling such systems using a control action in an optimum manner without delay or overshoot and ensuring control stability.

To do this, a controller with the requisite corrective behaviour is required. This controller monitors the controlled process variable (PV) and compares it with the reference or set point (SP). The difference between the actual and desired value of the process variable, called the error signal, or SP-PV error, is applied as feedback to generate a control action that brings the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. Control theory is the basis of the advanced type of automation that revolutionized manufacturing, aircraft, communications and other industries: feedback control, which is usually continuous and involves taking measurements with a sensor and making calculated adjustments to keep the measured variable within a set range by means of a "final control element", such as a control valve.

Extensive use is made of a diagrammatic style known as the block diagram. In it, the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system.
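The SP-PV feedback loop can be sketched in a few lines: a discrete PI controller driving a simple first-order process toward a set point. The gains, time constant, and process model here are illustrative assumptions, not tuned values:

```python
def simulate_pi_control(sp=1.0, kp=2.0, ki=1.0, dt=0.01, steps=2000):
    """Drive a first-order process toward set point `sp` with a PI controller."""
    pv = 0.0         # process variable (starts away from the set point)
    integral = 0.0   # accumulated error for the integral term
    tau = 0.5        # time constant of the illustrative process
    for _ in range(steps):
        error = sp - pv                    # the SP-PV error signal
        integral += error * dt
        u = kp * error + ki * integral     # control action from the controller
        # First-order process dynamics: d(pv)/dt = (-pv + u) / tau
        pv += dt * (-pv + u) / tau
    return pv

print(simulate_pi_control())  # settles near the set point of 1.0
```

The integral term is what removes the steady-state offset a purely proportional controller would leave.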

Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell. Control theory was further advanced by Edward Routh in 1874, by Charles Sturm, and in 1895 by Adolf Hurwitz, all of whom contributed to the establishment of control stability criteria. From 1922 onwards, PID control theory was developed by Nicolas Minorsky.

Although a major application of control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs.

# Cross-correlation

In signal processing, cross-correlation is a measure of similarity of two series as a function of the displacement of one relative to the other. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomography, averaging, cryptanalysis, and neurophysiology.

The cross-correlation is similar in nature to the convolution of two functions. In an autocorrelation, which is the cross-correlation of a signal with itself, there will always be a peak at a lag of zero, and its size will be the signal energy.
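Searching a long signal for a shorter, known feature via the sliding dot product can be sketched with `numpy.correlate`; the template, noise level, and offset are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
feature = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # short, known template
long_signal = rng.normal(scale=0.1, size=200)    # noisy background
true_offset = 120
long_signal[true_offset:true_offset + feature.size] += feature

# Sliding dot product of the signal with the template ("valid" lags only):
xcorr = np.correlate(long_signal, feature, mode="valid")
estimated_offset = int(np.argmax(xcorr))
print(estimated_offset)  # 120: the lag where the template best matches
```

The peak of the cross-correlation lands at the lag where the template lines up with its buried copy, which is exactly the "searching for a known feature" use described above.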

In probability and statistics, the term cross-correlations is used for referring to the correlations between the entries of two random vectors ${\displaystyle \mathbf {X} }$ and ${\displaystyle \mathbf {Y} }$, while the correlations of a random vector ${\displaystyle \mathbf {X} }$ are considered to be the correlations between the entries of ${\displaystyle \mathbf {X} }$ itself, those forming the correlation matrix (matrix of correlations) of ${\displaystyle \mathbf {X} }$. If each of ${\displaystyle \mathbf {X} }$ and ${\displaystyle \mathbf {Y} }$ is a scalar random variable which is realized repeatedly in temporal sequence (a time series), then the correlations of the various temporal instances of ${\displaystyle \mathbf {X} }$ are known as autocorrelations of ${\displaystyle \mathbf {X} }$, and the cross-correlations of ${\displaystyle \mathbf {X} }$ with ${\displaystyle \mathbf {Y} }$ across time are temporal cross-correlations.

Furthermore, in probability and statistics the definition of correlation always includes a standardising factor in such a way that correlations have values between −1 and +1.

If ${\displaystyle X}$ and ${\displaystyle Y}$ are two independent random variables with probability density functions ${\displaystyle f}$ and ${\displaystyle g}$, respectively, then the probability density of the difference ${\displaystyle Y-X}$ is formally given by the cross-correlation (in the signal-processing sense) ${\displaystyle f\star g}$; however this terminology is not used in probability and statistics. In contrast, the convolution ${\displaystyle f*g}$ (equivalent to the cross-correlation of ${\displaystyle f(t)}$ and ${\displaystyle g(-t)}$) gives the probability density function of the sum ${\displaystyle X+Y}$.

# Finite-difference time-domain method

Finite-difference time-domain or Yee's method (named after the Chinese American applied mathematician Kane S. Yee, born 1934) is a numerical analysis technique used for modeling computational electrodynamics (finding approximate solutions to the associated system of differential equations). Since it is a time-domain method, FDTD solutions can cover a wide frequency range with a single simulation run, and treat nonlinear material properties in a natural way.

The FDTD method belongs in the general class of grid-based differential numerical modeling methods (finite difference methods). The time-dependent Maxwell's equations (in partial differential form) are discretized using central-difference approximations to the space and time partial derivatives. The resulting finite-difference equations are solved in either software or hardware in a leapfrog manner: the electric field vector components in a volume of space are solved at a given instant in time; then the magnetic field vector components in the same spatial volume are solved at the next instant in time; and the process is repeated over and over again until the desired transient or steady-state electromagnetic field behavior is fully evolved.
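The leapfrog update described above can be sketched in one dimension. This is a minimal illustration in normalized units (c = 1, unit grid spacing), with a Courant number, grid size, and source pulse chosen purely for demonstration:

```python
import numpy as np

nx, steps = 400, 300
S = 0.5                  # Courant number c*dt/dx; stable for S <= 1 in 1D
ez = np.zeros(nx)        # electric field, sampled at integer grid points
hy = np.zeros(nx - 1)    # magnetic field, sampled at half-integer points

for n in range(steps):
    # Leapfrog step 1: update H from the spatial derivative of E...
    hy += S * (ez[1:] - ez[:-1])
    # ...step 2: update E (interior points) from the spatial derivative of H.
    ez[1:-1] += S * (hy[1:] - hy[:-1])
    # Soft source: inject a Gaussian pulse near the left side of the grid.
    ez[50] += np.exp(-((n - 40) / 12.0) ** 2)

# With S <= 1 the explicit scheme is stable: the pulse propagates
# across the grid without the fields blowing up.
print(np.abs(ez).max())
```

Eliminating `hy` from the two updates recovers the discrete wave equation, which is why the staggered leapfrog arrangement is stable under the Courant condition.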

# Fourier transform

The Fourier transform (FT) decomposes (analyzes) a function of time (a signal) into its constituent frequencies. This is similar to the way a musical chord can be expressed in terms of the volumes and frequencies (or pitches) of its constituent notes. The term Fourier transform refers to both the frequency domain representation and the mathematical operation that associates the frequency domain representation to a function of time. The Fourier transform of a function of time is itself a complex-valued function of frequency, whose magnitude (modulus) represents the amount of that frequency present in the original function, and whose argument is the phase offset of the basic sinusoid in that frequency. The Fourier transform is not limited to functions of time, but the domain of the original function is commonly referred to as the time domain. There is also an inverse Fourier transform that mathematically synthesizes the original function (of time) from its frequency domain representation.

Linear operations performed in one domain (time or frequency) have corresponding operations in the other domain, which are sometimes easier to perform. The operation of differentiation in the time domain corresponds to multiplication by the frequency, so some differential equations are easier to analyze in the frequency domain. Also, convolution in the time domain corresponds to ordinary multiplication in the frequency domain (see Convolution theorem). After performing the desired operations, transformation of the result can be made back to the time domain. Harmonic analysis is the systematic study of the relationship between the frequency and time domains, including the kinds of functions or operations that are "simpler" in one or the other, and has deep connections to many areas of modern mathematics.
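The convolution theorem can be checked numerically with the DFT (for which convolution is circular): convolving directly and multiplying pointwise in the frequency domain give the same result. A minimal sketch with numpy:

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.normal(size=64)
g = rng.normal(size=64)

# Circular convolution computed directly from the definition...
direct = np.array([sum(f[m] * g[(n - m) % 64] for m in range(64))
                   for n in range(64)])
# ...and via the convolution theorem: FFT, pointwise multiply, inverse FFT.
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

print(np.allclose(direct, via_fft))  # True
```

The FFT route costs O(n log n) rather than the O(n²) of the direct sum, which is why "transform, multiply, transform back" is the standard way to perform large convolutions.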

Functions that are localized in the time domain have Fourier transforms that are spread out across the frequency domain and vice versa, a phenomenon known as the uncertainty principle. The critical case for this principle is the Gaussian function, of substantial importance in probability theory and statistics as well as in the study of physical phenomena exhibiting normal distribution (e.g., diffusion). The Fourier transform of a Gaussian function is another Gaussian function. Joseph Fourier introduced the transform in his study of heat transfer, where Gaussian functions appear as solutions of the heat equation.

The Fourier transform can be formally defined as an improper Riemann integral, making it an integral transform, although this definition is not suitable for many applications requiring a more sophisticated integration theory. For example, many relatively simple applications use the Dirac delta function, which can be treated formally as if it were a function, but the justification requires a mathematically more sophisticated viewpoint. The Fourier transform can also be generalized to functions of several variables on Euclidean space, sending a function of 3-dimensional 'position space' to a function of 3-dimensional momentum (or a function of space and time to a function of 4-momentum). This idea makes the spatial Fourier transform very natural in the study of waves, as well as in quantum mechanics, where it is important to be able to represent wave solutions as functions of either position or momentum and sometimes both. In general, functions to which Fourier methods are applicable are complex-valued, and possibly vector-valued. Still further generalization is possible to functions on groups, which, besides the original Fourier transform on ℝ or ℝn (viewed as groups under addition), notably includes the discrete-time Fourier transform (DTFT, group = ℤ), the discrete Fourier transform (DFT, group = ℤ mod N) and the Fourier series or circular Fourier transform (group = S1, the unit circle ≈ closed finite interval with endpoints identified). The latter is routinely employed to handle periodic functions. The fast Fourier transform (FFT) is an algorithm for computing the DFT.

# Frequency domain

In electronics, control systems engineering, and statistics, the frequency domain refers to the analysis of mathematical functions or signals with respect to frequency, rather than time. Put simply, a time-domain graph shows how a signal changes over time, whereas a frequency-domain graph shows how much of the signal lies within each given frequency band over a range of frequencies. A frequency-domain representation can also include information on the phase shift that must be applied to each sinusoid in order to be able to recombine the frequency components to recover the original time signal.

A given function or signal can be converted between the time and frequency domains with a pair of mathematical operators called transforms. An example is the Fourier transform, which converts a time function into a sum or integral of sine waves of different frequencies, each of which represents a frequency component. The "spectrum" of frequency components is the frequency-domain representation of the signal. The inverse Fourier transform converts the frequency-domain function back to the time function. A spectrum analyzer is a tool commonly used to visualize electronic signals in the frequency domain.
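The "spectrum" of frequency components can be made concrete with a short numpy sketch: sampling a sum of two sinusoids and reading off the peaks of the DFT magnitude. The sampling rate and the two frequencies are illustrative choices:

```python
import numpy as np

fs = 1000                        # sampling rate, Hz (illustrative)
t = np.arange(0, 1, 1 / fs)      # one second of samples
# A signal with components at 50 Hz and 120 Hz:
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(x))        # magnitude per frequency bin
freqs = np.fft.rfftfreq(x.size, 1 / fs)  # the frequency of each bin
# The two largest peaks sit exactly at the component frequencies.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))  # [50.0, 120.0]
```

This is the frequency-domain representation of the signal: the time-domain plot shows an irregular wiggle, while the spectrum shows two clean peaks whose heights reflect the component amplitudes.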

Some specialized signal processing techniques use transforms that result in a joint time–frequency domain, with the instantaneous frequency being a key link between the time domain and the frequency domain.

# G.729.1

G.729.1 is an 8-32 kbit/s embedded speech and audio codec providing bitstream interoperability with G.729, G.729 Annex A and G.729 Annex B. Its official name is G.729-based embedded variable bit rate codec: An 8-32 kbit/s scalable wideband coder bitstream interoperable with G.729.

This codec has been designed to provide better quality and more flexibility than the existing ITU-T G.729 speech coding standard.

G.729.1 is scalable in bit rate, acoustic bandwidth and complexity.

In addition it offers various encoder and decoder modes, including the support of both 8 and 16 kHz input/output sampling frequency, compatibility with G.729B, and reduced algorithmic delay.

The bitstream of G.729.1 is structured into 12 hierarchical layers.

The first layer (or core layer) at 8 kbit/s follows the G.729 format.

The second layer (an additional 4 kbit/s, for a total of 12 kbit/s) is a narrowband enhancement layer. The third layer (an additional 2 kbit/s, for a total of 14 kbit/s) is a bandwidth extension layer. Further layers (in 2 kbit/s steps) are wideband enhancement layers.

The G.729.1 output bandwidth is 50–4000 Hz at 8 and 12 kbit/s, and 50–7000 Hz from 14 to 32 kbit/s. G.729.1 is also known as G.729 Annex J and G.729EV, where EV stands for Embedded Variable (bit rate).
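The layer description above implies a simple bitrate ladder from 8 to 32 kbit/s; it can be sketched directly (the helper name is illustrative):

```python
# Cumulative bit rates of the 12 hierarchical layers of G.729.1:
# an 8 kbit/s core, +4 kbit/s narrowband enhancement, then 2 kbit/s steps.
def g7291_bitrate(layers: int) -> int:
    """Total bit rate in kbit/s when the first `layers` layers are kept."""
    assert 1 <= layers <= 12
    if layers == 1:
        return 8                    # core layer (G.729 format)
    return 12 + 2 * (layers - 2)    # +4 for layer 2, then +2 per extra layer

print([g7291_bitrate(n) for n in range(1, 13)])
# [8, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32]
```

Because the layers are hierarchical, a network node can truncate the bitstream at any layer boundary and the decoder still produces speech at the corresponding rate.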

The G.729.1 algorithm is based on a three-stage coding structure: embedded Code-excited linear prediction (CELP) coding of the lower band (50–4000 Hz), parametric coding of the higher band (4000–7000 Hz) by Time-Domain Bandwidth Extension (TDBWE), and enhancement of the full band (50–7000 Hz) by a predictive transform coding technique referred to as Time-Domain Aliasing Cancellation (TDAC).

As of January 1, 2017, the patent terms of most licensed patents under the G.729 Consortium have expired; the remaining unexpired patents are usable on a royalty-free basis.

# Impulse response

In signal processing, the impulse response, or impulse response function (IRF), of a dynamic system is its output when presented with a brief input signal, called an impulse. More generally, an impulse response is the reaction of any dynamic system in response to some external change. In both cases, the impulse response describes the reaction of the system as a function of time (or possibly as a function of some other independent variable that parameterizes the dynamic behavior of the system).

In all these cases, the dynamic system and its impulse response may be actual physical objects, or may be mathematical systems of equations describing such objects.

Since the impulse function contains all frequencies, the impulse response defines the response of a linear time-invariant system for all frequencies.
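That the impulse response fully characterizes a discrete linear time-invariant system can be demonstrated numerically: for any input, the system's output equals the convolution of the input with the impulse response. A sketch using an illustrative first-order recursive filter:

```python
import numpy as np

# An illustrative LTI system: y[n] = a*y[n-1] + x[n] (first-order filter).
a = 0.5
def system(x):
    y = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        y[n] = (a * y[n - 1] if n > 0 else 0.0) + x[n]
    return y

# Feed in a unit impulse and record the output: the impulse response.
impulse = np.zeros(32); impulse[0] = 1.0
h = system(impulse)   # equals a**n for this filter

# For any other input, convolving with h reproduces the system's output.
x = np.random.default_rng(3).normal(size=32)
direct = system(x)
via_h = np.convolve(x, h)[:32]
print(np.allclose(direct, via_h))  # True
```

This is the practical payoff of the statement above: measure the response to one brief "kick" and the behavior for every input follows by convolution.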

# Integral transform

In mathematics, an integral transform maps an equation from its original domain into another domain where it might be manipulated and solved much more easily than in the original domain. The solution is then mapped back to the original domain using the inverse of the integral transform.
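In general terms, an integral transform ${\displaystyle T}$ sends a function ${\displaystyle f}$ to a new function ${\displaystyle (Tf)(u)}$ by integrating it against a kernel ${\displaystyle K}$:

${\displaystyle (Tf)(u)=\int _{t_{1}}^{t_{2}}f(t)\,K(t,u)\,dt.}$

The Fourier transform, for example, is the special case with kernel ${\displaystyle K(t,u)=e^{-iut}}$ (up to normalization) and limits ${\displaystyle \pm \infty }$; the inverse transform integrates against a second kernel that undoes the mapping.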

# Ljung–Box test

The Ljung–Box test (named for Greta M. Ljung and George E. P. Box) is a type of statistical test of whether any of a group of autocorrelations of a time series are different from zero. Instead of testing randomness at each distinct lag, it tests the "overall" randomness based on a number of lags, and is therefore a portmanteau test.

This test is sometimes known as the Ljung–Box Q test, and it is closely connected to the Box–Pierce test (which is named after George E. P. Box and David A. Pierce). In fact, the Ljung–Box test statistic was described explicitly in the paper that led to the use of the Box–Pierce statistic, and from which that statistic takes its name. The Box–Pierce test statistic is a simplified version of the Ljung–Box statistic for which subsequent simulation studies have shown poor performance.
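The Ljung–Box statistic is ${\displaystyle Q=n(n+2)\sum _{k=1}^{h}{\hat {\rho }}_{k}^{2}/(n-k)}$, where ${\displaystyle n}$ is the sample size, ${\displaystyle {\hat {\rho }}_{k}}$ the sample autocorrelation at lag ${\displaystyle k}$, and ${\displaystyle h}$ the number of lags tested; under the null of no autocorrelation, Q is approximately chi-squared with h degrees of freedom. A minimal numpy sketch (the white-noise sample is illustrative):

```python
import numpy as np

def ljung_box_q(x, h):
    """Ljung-Box Q statistic over lags 1..h."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    c0 = np.dot(x, x) / n
    q = 0.0
    for k in range(1, h + 1):
        rk = np.dot(x[:n - k], x[k:]) / n / c0   # sample autocorrelation at lag k
        q += rk * rk / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(4)
white = rng.normal(size=500)
q = ljung_box_q(white, 10)
# For white noise, Q ~ chi-squared(10); values below the 0.95 quantile
# (about 18.3) do not reject the null of "overall" randomness.
print(q)
```

Testing all h lags at once, rather than each lag separately, is what makes this a portmanteau test.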

The Ljung–Box test is widely applied in econometrics and other applications of time series analysis. A similar assessment can be also carried out with the Breusch–Godfrey test and the Durbin–Watson test.

# Modified discrete cosine transform

The modified discrete cosine transform (MDCT) is a lapped transform based on the type-IV discrete cosine transform (DCT-IV): it is designed to be performed on consecutive blocks of a larger dataset, where subsequent blocks are overlapped so that the last half of one block coincides with the first half of the next block.

This overlapping, in addition to the energy-compaction qualities of the DCT, makes the MDCT especially attractive for signal compression applications, since it helps to avoid artifacts stemming from the block boundaries. As a result of these advantages, the MDCT is employed in most modern lossy audio formats, including MP3, AC-3, Vorbis, Windows Media Audio, ATRAC, Cook, AAC, Opus, and LDAC.

The MDCT was proposed by Princen, Johnson, and Bradley in 1987, following earlier (1986) work by Princen and Bradley to develop the MDCT's underlying principle of time-domain aliasing cancellation (TDAC). (There also exists an analogous transform, the MDST, based on the discrete sine transform, as well as other, rarely used, forms of the MDCT based on different types of DCT or DCT/DST combinations.)

In MP3, the MDCT is not applied to the audio signal directly, but rather to the output of a 32-band polyphase quadrature filter (PQF) bank. The output of this MDCT is postprocessed by an alias reduction formula to reduce the typical aliasing of the PQF filter bank. Such a combination of a filter bank with an MDCT is called a hybrid filter bank or a subband MDCT. AAC, on the other hand, normally uses a pure MDCT; only the (rarely used) MPEG-4 AAC-SSR variant (by Sony) uses a four-band PQF bank followed by an MDCT. Similar to MP3, ATRAC uses stacked quadrature mirror filters (QMF) followed by an MDCT.
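The TDAC principle can be demonstrated with a naive, windowless MDCT/IMDCT sketch: inverting a single block does not recover its samples (time-domain aliasing), but overlap-adding the inverses of two 50%-overlapped blocks cancels the aliasing exactly. The block size and direct matrix evaluation are illustrative (real codecs use windowed, FFT-based implementations):

```python
import numpy as np

N = 16  # half-block size: the MDCT maps 2N inputs to N outputs

def mdct(x):
    """2N samples -> N coefficients (naive direct evaluation)."""
    n = np.arange(2 * N)
    k = np.arange(N)
    basis = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))
    return basis @ x

def imdct(X):
    """N coefficients -> 2N samples; the result is time-aliased."""
    n = np.arange(2 * N)
    k = np.arange(N)
    basis = np.cos(np.pi / N * (n[:, None] + 0.5 + N / 2) * (k[None, :] + 0.5))
    return basis @ X / N

rng = np.random.default_rng(5)
x = rng.normal(size=3 * N)          # three half-blocks -> two overlapping 2N blocks
y0 = imdct(mdct(x[:2 * N]))         # block covering samples [0, 2N)
y1 = imdct(mdct(x[N:3 * N]))        # block covering samples [N, 3N)

# A single IMDCT does not recover its block (time-domain aliasing)...
print(np.allclose(y0, x[:2 * N]))                 # False
# ...but overlap-adding the two inverses cancels the aliasing in the overlap.
print(np.allclose(y0[N:] + y1[:N], x[N:2 * N]))   # True
```

Despite producing only N outputs per 2N inputs, the 50% overlap means the MDCT is not lossy: the aliasing introduced by each block is exactly cancelled by its neighbors.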

# Orthogonal frequency-division multiple access

Orthogonal frequency-division multiple access (OFDMA) is a multi-user version of the popular orthogonal frequency-division multiplexing (OFDM) digital modulation scheme. Multiple access is achieved in OFDMA by assigning subsets of subcarriers to individual users. This allows simultaneous low-data-rate transmission from several users.
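Assigning subcarrier subsets to users can be sketched as a toy OFDMA downlink: each user's symbols are placed on its subcarriers, one IFFT produces the shared time-domain signal, and the receiver separates users with an FFT. The subcarrier counts, user names, and QPSK mapping are illustrative assumptions:

```python
import numpy as np

n_subcarriers = 64
# Disjoint subcarrier subsets, one per user (illustrative allocation):
users = {"alice": range(0, 16), "bob": range(16, 48), "carol": range(48, 64)}

rng = np.random.default_rng(6)
def qpsk(bits):
    """Map pairs of bits to unit-energy QPSK symbols."""
    return (1 - 2 * bits[0::2] + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

# Transmitter: fill each user's subcarriers with that user's symbols.
freq = np.zeros(n_subcarriers, dtype=complex)
tx_symbols = {}
for user, carriers in users.items():
    bits = rng.integers(0, 2, size=2 * len(carriers))
    tx_symbols[user] = qpsk(bits)
    freq[list(carriers)] = tx_symbols[user]
time_signal = np.fft.ifft(freq)   # one OFDM symbol carrying all users at once

# Receiver: FFT back and pick out each user's own subcarriers.
rx = np.fft.fft(time_signal)
ok = all(np.allclose(rx[list(c)], tx_symbols[u]) for u, c in users.items())
print(ok)  # True
```

Because the subcarriers are orthogonal, each user's symbols come back untouched by the others', which is the sense in which OFDMA achieves multiple access.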

# Partial autocorrelation function

In time series analysis, the partial autocorrelation function (PACF) gives the partial correlation of a stationary time series with its own lagged values, regressed on the values of the time series at all shorter lags. It contrasts with the autocorrelation function, which does not control for other lags.

This function plays an important role in data analysis aimed at identifying the extent of the lag in an autoregressive model. Its use was introduced as part of the Box–Jenkins approach to time series modelling, whereby by plotting the partial autocorrelation function one can determine the appropriate lag p in an AR(p) model or in an extended ARIMA(p,d,q) model.
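One way to compute the PACF follows its definition directly: for each lag k, regress the series on its k most recent lags and keep the coefficient of the lag-k term (the effect of lag k with all shorter lags controlled for). A sketch using least squares, with an illustrative AR(1) process:

```python
import numpy as np

def pacf(x, max_lag):
    """PACF via regression: the lag-k partial autocorrelation is the
    coefficient of x[t-k] when regressing x[t] on x[t-1], ..., x[t-k]."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    out = []
    for k in range(1, max_lag + 1):
        y = x[k:]
        # Design matrix: one column per lag, from lag 1 up to lag k.
        X = np.column_stack([x[k - j:len(x) - j] for j in range(1, k + 1)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        out.append(beta[-1])
    return np.array(out)

# For an AR(1) process x[t] = phi*x[t-1] + noise, the PACF is ~phi at
# lag 1 and ~0 at every longer lag -- the signature used to pick p.
rng = np.random.default_rng(7)
phi = 0.8
x = np.zeros(5000)
for t in range(1, len(x)):
    x[t] = phi * x[t - 1] + rng.normal()
p = pacf(x, 5)
print(np.round(p, 2))
```

The sharp cutoff after lag 1 is exactly the Box–Jenkins diagnostic: an AR(p) process has a PACF that cuts off after lag p.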

# State-space representation

In control engineering, a state-space representation is a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations or difference equations. State variables are variables whose values evolve through time in a way that depends on the values they have at any given time and also depends on the externally imposed values of input variables. Output variables’ values depend on the values of the state variables.

The "state space" is the Euclidean space in which the variables on the axes are the state variables. The state of the system can be represented as a vector within that space.

To abstract from the number of inputs, outputs and states, these variables are expressed as vectors. Additionally, if the dynamical system is linear, time-invariant, and finite-dimensional, then the differential and algebraic equations may be written in matrix form. The state-space method is characterized by a significant algebraization of general system theory, which makes it possible to use Kronecker vector-matrix structures; these structures can be applied efficiently to the study of systems with or without modulation.

The state-space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With ${\displaystyle p}$ inputs and ${\displaystyle q}$ outputs, we would otherwise have to write down ${\displaystyle q\times p}$ Laplace transforms to encode all the information about a system. Unlike the frequency-domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. The state-space model is used in many different areas; in econometrics, for example, it can be used for forecasting stock prices and numerous other variables.
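The matrix form of a linear time-invariant model can be sketched in discrete time as x[k+1] = A·x[k] + B·u[k], y[k] = C·x[k] + D·u[k]. The matrices below model an illustrative damped second-order system, not any specific plant:

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [-0.1, 0.9]])     # state transition matrix
B = np.array([[0.0], [0.1]])    # input matrix
C = np.array([[1.0, 0.0]])      # output matrix: observe the first state
D = np.array([[0.0]])           # no direct feedthrough

def simulate(u_seq, x0=np.zeros(2)):
    """Propagate the state equations for an input sequence u_seq."""
    x = x0.copy()
    outputs = []
    for u in u_seq:
        outputs.append((C @ x + D @ [u]).item())   # y[k] = C x[k] + D u[k]
        x = A @ x + B @ [u]                        # x[k+1] = A x[k] + B u[k]
    return np.array(outputs)

# Step response: a constant unit input drives the output toward 1.
y = simulate(np.ones(200))
print(y[-1])
```

The same four matrices encode every input-output pair at once, which is the compactness the multiple-input, multiple-output argument above refers to.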

# Time-domain reflectometer

A time-domain reflectometer (TDR) is an electronic instrument that uses time-domain reflectometry to characterize and locate faults in metallic cables (for example, twisted pair wire or coaxial cable). It can also be used to locate discontinuities in a connector, printed circuit board, or any other electrical path. The equivalent device for optical fiber is an optical time-domain reflectometer.
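The arithmetic behind a TDR measurement is simple: a pulse travels down the cable, partially reflects at an impedance discontinuity, and the fault distance follows from half the round-trip delay at the cable's propagation velocity. A sketch with illustrative values:

```python
C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s

def fault_distance(round_trip_s: float, velocity_factor: float) -> float:
    """Distance to a discontinuity: half the round trip at the cable's speed."""
    return velocity_factor * C_VACUUM * round_trip_s / 2

def reflection_coefficient(z_load: float, z0: float) -> float:
    """Reflection at a discontinuity: 0 for a match, -1 for a short."""
    return (z_load - z0) / (z_load + z0)

# A 1.0 microsecond round trip on coax with a velocity factor of 0.66:
print(fault_distance(1e-6, 0.66))           # ~99 m
print(reflection_coefficient(0.0, 50.0))    # -1.0: a short circuit inverts the pulse
```

The sign and size of the returned pulse also identify the fault type: a short inverts the pulse, an open returns it upright, and a matched termination returns nothing.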

# Ultrasound

Ultrasound is sound waves with frequencies higher than the upper audible limit of human hearing. Ultrasound is not different from "normal" (audible) sound in its physical properties, except that humans cannot hear it. This limit varies from person to person and is approximately 20 kilohertz (20,000 hertz) in healthy young adults. Ultrasound devices operate with frequencies from 20 kHz up to several gigahertz.

Ultrasound is used in many different fields. Ultrasonic devices are used to detect objects and measure distances. Ultrasound imaging or sonography is often used in medicine. In the nondestructive testing of products and structures, ultrasound is used to detect invisible flaws. Industrially, ultrasound is used for cleaning, mixing, and accelerating chemical processes. Animals such as bats and porpoises use ultrasound for locating prey and obstacles. Scientists are also studying ultrasound using graphene diaphragms as a method of communication.
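The distance-measurement use mentioned above is plain echo ranging: distance equals speed of sound times the round-trip delay, divided by two. A sketch with an illustrative delay:

```python
SPEED_OF_SOUND_AIR = 343.0   # m/s, approximate at 20 degrees C

def echo_distance(echo_delay_s: float,
                  speed_m_s: float = SPEED_OF_SOUND_AIR) -> float:
    """Distance to a reflector from the round-trip echo delay."""
    return speed_m_s * echo_delay_s / 2

# An ultrasonic ranging pulse whose echo returns after 10 ms:
print(echo_distance(0.010))  # ~1.7 m
```

The same relation underlies medical sonography, with the speed of sound in tissue (~1540 m/s) in place of the value for air.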

This page is based on Wikipedia articles written by their contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.