Additive white Gaussian noise (AWGN) is a basic noise model used in information theory to mimic the effect of many random processes that occur in nature. The modifiers denote specific characteristics:

• Additive because it is added to any noise that might be intrinsic to the information system.
• White refers to the idea that it has uniform power across the frequency band for the information system. It is an analogy to the color white which has uniform emissions at all frequencies in the visible spectrum.
• Gaussian because it has a normal distribution in the time domain with an average time domain value of zero.

Wideband noise comes from many natural noise sources, such as the thermal vibrations of atoms in conductors (referred to as thermal noise or Johnson–Nyquist noise), shot noise, black-body radiation from the earth and other warm objects, and from celestial sources such as the Sun. The central limit theorem of probability theory indicates that the sum of many such random processes will tend to have a distribution called Gaussian or normal.

AWGN is often used as a channel model in which the only impairment to communication is a linear addition of wideband or white noise with a constant spectral density (expressed as watts per hertz of bandwidth) and a Gaussian distribution of amplitude. The model does not account for fading, frequency selectivity, interference, nonlinearity or dispersion. However, it produces simple and tractable mathematical models which are useful for gaining insight into the underlying behavior of a system before these other phenomena are considered.

The AWGN channel is a good model for many satellite and deep space communication links. It is not a good model for most terrestrial links because of multipath, terrain blocking, interference, etc. However, for terrestrial path modeling, AWGN is commonly used to simulate background noise of the channel under study, in addition to multipath, terrain blocking, interference, ground clutter and self interference that modern radio systems encounter in terrestrial operation.

## Channel capacity

The AWGN channel is represented by a series of outputs ${\displaystyle Y_{i}}$ at discrete time event index ${\displaystyle i}$. ${\displaystyle Y_{i}}$ is the sum of the input ${\displaystyle X_{i}}$ and noise, ${\displaystyle Z_{i}}$, where ${\displaystyle Z_{i}}$ is independent and identically distributed and drawn from a zero-mean normal distribution with variance ${\displaystyle N}$ (the noise). The ${\displaystyle Z_{i}}$ are further assumed to not be correlated with the ${\displaystyle X_{i}}$.

${\displaystyle Z_{i}\sim {\mathcal {N}}(0,N)\,\!}$
${\displaystyle Y_{i}=X_{i}+Z_{i}.\,\!}$
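
This discrete-time model is easy to simulate directly; the following sketch (using NumPy, with illustrative values P = 1 for the signal power and N = 0.25 for the noise variance) adds i.i.d. zero-mean Gaussian noise to an input sequence:

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn_channel(x, noise_var):
    """Add i.i.d. zero-mean Gaussian noise of variance noise_var to the input."""
    z = rng.normal(0.0, np.sqrt(noise_var), size=len(x))
    return x + z

# Transmit a unit-power Gaussian input through the channel.
n = 100_000
x = rng.normal(0.0, 1.0, size=n)      # input with power P = 1
y = awgn_channel(x, noise_var=0.25)   # noise variance N = 0.25

# Since X and Z are independent, the received power is close to P + N = 1.25.
print(np.mean(y**2))
```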

The capacity of the channel is infinite unless the noise variance ${\displaystyle N}$ is nonzero and the ${\displaystyle X_{i}}$ are sufficiently constrained. The most common constraint on the input is the so-called "power" constraint, requiring that for a codeword ${\displaystyle (x_{1},x_{2},\dots ,x_{k})}$ transmitted through the channel, we have:

${\displaystyle {\frac {1}{k}}\sum _{i=1}^{k}x_{i}^{2}\leq P,}$

where ${\displaystyle P}$ represents the maximum channel power. Therefore, the channel capacity for the power-constrained channel is given by:

${\displaystyle C=\max _{f(x){\text{ s.t. }}E\left(X^{2}\right)\leq P}I(X;Y)\,\!}$

where ${\displaystyle f(x)}$ is the distribution of ${\displaystyle X}$. Expanding ${\displaystyle I(X;Y)}$ in terms of the differential entropy:

{\displaystyle {\begin{aligned}I(X;Y)&=h(Y)-h(Y|X)\\&=h(Y)-h(X+Z|X)\\&=h(Y)-h(Z|X)\end{aligned}}\,\!}

But ${\displaystyle X}$ and ${\displaystyle Z}$ are independent, therefore:

${\displaystyle I(X;Y)=h(Y)-h(Z)\,\!}$

Evaluating the differential entropy of a Gaussian gives:

${\displaystyle h(Z)={\frac {1}{2}}\log(2\pi eN)\,\!}$

Because ${\displaystyle X}$ and ${\displaystyle Z}$ are independent, ${\displaystyle E(Z)=0}$, and ${\displaystyle E(X^{2})\leq P}$, the second moment of their sum ${\displaystyle Y}$ is bounded:

${\displaystyle E(Y^{2})=E((X+Z)^{2})=E(X^{2})+2E(X)E(Z)+E(Z^{2})\leq P+N\,\!}$

From this bound, and the property that the Gaussian maximizes differential entropy for a given second moment, we infer that

${\displaystyle h(Y)\leq {\frac {1}{2}}\log(2\pi e(P+N))\,\!}$

Therefore, the channel capacity is given by the highest achievable bound on the mutual information:

${\displaystyle I(X;Y)\leq {\frac {1}{2}}\log(2\pi e(P+N))-{\frac {1}{2}}\log(2\pi eN)\,\!}$

where ${\displaystyle I(X;Y)}$ is maximized, and the bound achieved, when:

${\displaystyle X\sim {\mathcal {N}}(0,P)\,\!}$

Thus the channel capacity ${\displaystyle C}$ for the AWGN channel is given by:

${\displaystyle C={\frac {1}{2}}\log \left(1+{\frac {P}{N}}\right)\,\!}$
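
As a sketch, this capacity formula (in bits per channel use, using the base-2 logarithm) is a one-liner; the SNR value below is an arbitrary example:

```python
import math

def awgn_capacity(P, N):
    """Capacity of the discrete-time AWGN channel, in bits per channel use."""
    return 0.5 * math.log2(1.0 + P / N)

# At an SNR of P/N = 15 (about 11.8 dB) the capacity is exactly 2 bits per use.
print(awgn_capacity(15.0, 1.0))  # → 2.0
```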

### Channel capacity and sphere packing

Suppose that we are sending messages through the channel with index ranging from ${\displaystyle 1}$ to ${\displaystyle M}$, the number of distinct possible messages. If we encode the ${\displaystyle M}$ messages into codewords of length ${\displaystyle n}$ (one symbol per channel use), then we define the rate ${\displaystyle R}$ as:

${\displaystyle R={\frac {\log M}{n}}\,\!}$

A rate is said to be achievable if there is a sequence of codes so that the maximum probability of error tends to zero as ${\displaystyle n}$ approaches infinity. The capacity ${\displaystyle C}$ is the highest achievable rate.

Consider a codeword of length ${\displaystyle n}$ sent through the AWGN channel with noise level ${\displaystyle N}$. When received, each component of the codeword vector has been perturbed by noise of variance ${\displaystyle N}$, and its mean is the codeword sent. The vector is very likely to be contained in a sphere of radius ${\displaystyle {\sqrt {n(N+\epsilon )}}}$ around the codeword sent. If we decode by mapping every received vector onto the codeword at the center of this sphere, then an error occurs only when the received vector is outside of this sphere, which is very unlikely.

Each codeword vector has an associated sphere of received codeword vectors which are decoded to it, and each such sphere must map uniquely onto a codeword. Because these spheres therefore must not intersect, we are faced with the problem of sphere packing. How many distinct codewords can we pack into our ${\displaystyle n}$-dimensional received-vector space? The received vectors have a maximum energy of ${\displaystyle n(P+N)}$ and therefore must occupy a sphere of radius ${\displaystyle {\sqrt {n(P+N)}}}$. Each codeword sphere has radius ${\displaystyle {\sqrt {nN}}}$. The volume of an n-dimensional sphere is directly proportional to ${\displaystyle r^{n}}$, so the maximum number of uniquely decodable spheres that can be packed into our sphere with transmission power P is:

${\displaystyle {\frac {(n(P+N))^{\frac {n}{2}}}{(nN)^{\frac {n}{2}}}}=2^{{\frac {n}{2}}\log(1+P/N)}\,\!}$

By this argument, the rate R can be no more than ${\displaystyle {\frac {1}{2}}\log(1+P/N)}$.
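
Numerically, the sphere-counting expression reproduces this rate bound for any block length; the values of n, P, and N below are arbitrary illustrations:

```python
import math

def max_codewords(n, P, N):
    """Sphere-packing bound on the number of decodable codewords
    for block length n, signal power P, noise power N."""
    return (n * (P + N)) ** (n / 2) / (n * N) ** (n / 2)

n, P, N = 100, 15.0, 1.0
count = max_codewords(n, P, N)
rate = math.log2(count) / n
print(rate)  # equals 0.5 * log2(1 + P/N) = 2 bits per channel use
```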

### Achievability

In this section, we show achievability of the upper bound on the rate from the last section.

A codebook, known to both encoder and decoder, is generated by selecting codewords of length n, i.i.d. Gaussian with variance ${\displaystyle P-\epsilon }$ and mean zero. For large n, the empirical variance of the codebook will be very close to the variance of its distribution, thereby avoiding violation of the power constraint probabilistically.

Received messages are decoded to a message in the codebook which is uniquely jointly typical. If there is no such message or if the power constraint is violated, a decoding error is declared.

Let ${\displaystyle X^{n}(i)}$ denote the codeword for message ${\displaystyle i}$, while ${\displaystyle Y^{n}}$ is, as before, the received vector. Define the following three events:

1. Event ${\displaystyle U}$: the power of the received message is larger than ${\displaystyle P}$.
2. Event ${\displaystyle V}$: the transmitted and received codewords are not jointly typical.
3. Event ${\displaystyle E_{j}}$: for some ${\displaystyle j\neq i}$, the pair ${\displaystyle (X^{n}(j),Y^{n})}$ is in the jointly typical set ${\displaystyle A_{\epsilon }^{(n)}}$, which is to say that an incorrect codeword is jointly typical with the received vector.

An error therefore occurs if ${\displaystyle U}$, ${\displaystyle V}$ or any of the ${\displaystyle E_{j}}$ occur. By the law of large numbers, ${\displaystyle P(U)}$ goes to zero as n approaches infinity, and by the joint asymptotic equipartition property (AEP) the same applies to ${\displaystyle P(V)}$. Therefore, for a sufficiently large ${\displaystyle n}$, ${\displaystyle P(U)}$ and ${\displaystyle P(V)}$ are each less than ${\displaystyle \epsilon }$. Since ${\displaystyle X^{n}(i)}$ and ${\displaystyle X^{n}(j)}$ are independent for ${\displaystyle i\neq j}$, we have that ${\displaystyle X^{n}(j)}$ and ${\displaystyle Y^{n}}$ are also independent. Therefore, by the joint AEP, ${\displaystyle P(E_{j})\leq 2^{-n(I(X;Y)-3\epsilon )}}$. This allows us to bound ${\displaystyle P_{e}^{(n)}}$, the probability of error, as follows:

{\displaystyle {\begin{aligned}P_{e}^{(n)}&\leq P(U)+P(V)+\sum _{j\neq i}P(E_{j})\\&\leq \epsilon +\epsilon +\sum _{j\neq i}2^{-n(I(X;Y)-3\epsilon )}\\&\leq 2\epsilon +(2^{nR}-1)2^{-n(I(X;Y)-3\epsilon )}\\&\leq 2\epsilon +(2^{3n\epsilon })2^{-n(I(X;Y)-R)}\\&\leq 3\epsilon \end{aligned}}}

Therefore, for ${\displaystyle R<I(X;Y)-3\epsilon }$, ${\displaystyle P_{e}^{(n)}}$ goes to zero as n approaches infinity. Therefore, there is a code of rate R arbitrarily close to the capacity derived earlier.

### Coding theorem converse

Here we show that rates above the capacity ${\displaystyle C={\frac {1}{2}}\log(1+{\frac {P}{N}})}$ are not achievable.

Suppose that the power constraint is satisfied for a codebook, and further suppose that the messages follow a uniform distribution. Let ${\displaystyle W}$ be the input messages and ${\displaystyle {\hat {W}}}$ the output messages. Thus the information flows as:

${\displaystyle W\longrightarrow X^{(n)}(W)\longrightarrow Y^{(n)}\longrightarrow {\hat {W}}}$

Making use of Fano's inequality gives:

${\displaystyle H(W|{\hat {W}})\leq 1+nRP_{e}^{(n)}=n\epsilon _{n}}$ where ${\displaystyle \epsilon _{n}\rightarrow 0}$ as ${\displaystyle P_{e}^{(n)}\rightarrow 0}$

Let ${\displaystyle X_{i}}$ be the encoded message of codeword index i. Then:

{\displaystyle {\begin{aligned}nR&=H(W)\\&=I(W;{\hat {W}})+H(W|{\hat {W}})\\&\leq I(W;{\hat {W}})+n\epsilon _{n}\\&\leq I(X^{(n)};Y^{(n)})+n\epsilon _{n}\\&=h(Y^{(n)})-h(Y^{(n)}|X^{(n)})+n\epsilon _{n}\\&=h(Y^{(n)})-h(Z^{(n)})+n\epsilon _{n}\\&\leq \sum _{i=1}^{n}h(Y_{i})-h(Z^{(n)})+n\epsilon _{n}\\&\leq \sum _{i=1}^{n}I(X_{i};Y_{i})+n\epsilon _{n}\end{aligned}}}

Let ${\displaystyle P_{i}}$ be the average power of the codeword of index i:

${\displaystyle P_{i}={\frac {1}{2^{nR}}}\sum _{w}x_{i}^{2}(w)\,\!}$

where the sum is over all input messages ${\displaystyle w}$. Since ${\displaystyle X_{i}}$ and ${\displaystyle Z_{i}}$ are independent, the expected power of ${\displaystyle Y_{i}}$ is, for noise level ${\displaystyle N}$:

${\displaystyle E(Y_{i}^{2})=P_{i}+N\,\!}$

And, since a Gaussian maximizes differential entropy for a given variance, we have that

${\displaystyle h(Y_{i})\leq {\frac {1}{2}}\log(2\pi e(P_{i}+N))\,\!}$

Therefore,

{\displaystyle {\begin{aligned}nR&\leq \sum (h(Y_{i})-h(Z_{i}))+n\epsilon _{n}\\&\leq \sum \left({\frac {1}{2}}\log(2\pi e(P_{i}+N))-{\frac {1}{2}}\log(2\pi eN)\right)+n\epsilon _{n}\\&=\sum {\frac {1}{2}}\log(1+{\frac {P_{i}}{N}})+n\epsilon _{n}\end{aligned}}}

We may apply Jensen's inequality to ${\displaystyle \log(1+x)}$, a concave (downward) function of x, to get:

${\displaystyle {\frac {1}{n}}\sum _{i=1}^{n}{\frac {1}{2}}\log \left(1+{\frac {P_{i}}{N}}\right)\leq {\frac {1}{2}}\log \left(1+{\frac {1}{n}}\sum _{i=1}^{n}{\frac {P_{i}}{N}}\right)\,\!}$

Because each codeword individually satisfies the power constraint, the average also satisfies the power constraint. Therefore,

${\displaystyle {\frac {1}{n}}\sum _{i=1}^{n}{\frac {P_{i}}{N}}\leq {\frac {P}{N}}\,\!}$

which we may use to simplify the inequality above and get:

${\displaystyle {\frac {1}{2}}\log \left(1+{\frac {1}{n}}\sum _{i=1}^{n}{\frac {P_{i}}{N}}\right)\leq {\frac {1}{2}}\log \left(1+{\frac {P}{N}}\right)\,\!}$

Therefore, it must be that ${\displaystyle R\leq {\frac {1}{2}}\log \left(1+{\frac {P}{N}}\right)+\epsilon _{n}}$. As ${\displaystyle \epsilon _{n}\rightarrow 0}$, R must be less than a value arbitrarily close to the capacity derived earlier.

## Effects in time domain

Zero-Crossings of a Noisy Cosine

In serial data communications, the AWGN mathematical model is used to model the timing error caused by random jitter (RJ).

The graph to the right shows an example of timing errors associated with AWGN. The variable Δt represents the uncertainty in the zero crossing. As the amplitude of the AWGN is increased, the signal-to-noise ratio decreases. This results in increased uncertainty Δt.[1]

When affected by AWGN, the average number of either positive going or negative going zero-crossings per second at the output of a narrow bandpass filter when the input is a sine wave is:

${\displaystyle {\frac {\mathrm {positive\ zero\ crossings} }{\mathrm {second} }}={\frac {\mathrm {negative\ zero\ crossings} }{\mathrm {second} }}}$
${\displaystyle =f_{0}{\sqrt {\frac {\mathrm {SNR} +1+{\frac {B^{2}}{12f_{0}^{2}}}}{\mathrm {SNR} +1}}}}$

where

• f0 = the center frequency of the filter
• B = the filter bandwidth
• SNR = the signal-to-noise power ratio in linear terms
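
A quick numeric check of this formula (the filter parameters below are hypothetical examples, not from the cited reference):

```python
import math

def zero_crossing_rate(f0, B, snr):
    """Average positive-going (or negative-going) zero crossings per second
    for a sine wave plus AWGN at the output of a narrow bandpass filter."""
    return f0 * math.sqrt((snr + 1 + B**2 / (12 * f0**2)) / (snr + 1))

# Hypothetical example: 10 kHz center frequency, 1 kHz bandwidth, SNR = 10.
rate = zero_crossing_rate(f0=10e3, B=1e3, snr=10.0)
print(rate)  # slightly above f0; it approaches f0 exactly as SNR grows
```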

## Effects in phasor domain

AWGN Contributions in the Phasor Domain

In modern communication systems, bandlimited AWGN cannot be ignored. When modeling bandlimited AWGN in the phasor domain, statistical analysis reveals that the amplitudes of the real and imaginary contributions are independent variables which follow the Gaussian distribution model. When combined, the resultant phasor's magnitude is a Rayleigh distributed random variable while the phase is uniformly distributed from 0 to 2π.

The graph to the right shows an example of how bandlimited AWGN can affect a coherent carrier signal. The instantaneous response of the Noise Vector cannot be precisely predicted, however its time-averaged response can be statistically predicted. As shown in the graph, we confidently predict that the noise phasor will reside inside the 1σ circle about 38% of the time; the noise phasor will reside inside the 2σ circle about 86% of the time; and the noise phasor will reside inside the 3σ circle about 98% of the time.[1]
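
The quoted percentages follow from the cumulative distribution function of the Rayleigh-distributed noise magnitude, ${\displaystyle P(R\leq k\sigma )=1-e^{-k^{2}/2}}$, which a few lines verify (the exact values round to about 39%, 86% and 99%):

```python
import math

def prob_inside(k):
    """Probability that a circular-Gaussian noise phasor lies inside the
    k-sigma circle: the Rayleigh CDF evaluated at k * sigma."""
    return 1.0 - math.exp(-k**2 / 2.0)

for k in (1, 2, 3):
    print(k, round(prob_inside(k), 3))  # ≈ 0.393, 0.865, 0.989
```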

## References

1. ^ a b McClaning, Kevin, Radio Receiver Design, Noble Publishing Corporation
Anscombe transform

In statistics, the Anscombe transform, named after Francis Anscombe, is a variance-stabilizing transformation that transforms a random variable with a Poisson distribution into one with an approximately standard Gaussian distribution. The Anscombe transform is widely used in photon-limited imaging (astronomy, X-ray) where images naturally follow the Poisson law. The Anscombe transform is usually used to pre-process the data in order to make the standard deviation approximately constant. Then denoising algorithms designed for the framework of additive white Gaussian noise are used; the final estimate is then obtained by applying an inverse Anscombe transformation to the denoised data.
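
A minimal sketch of this pre-processing step, using the standard form of the transform, ${\displaystyle 2{\sqrt {x+3/8}}}$ (the Poisson means below are arbitrary examples):

```python
import numpy as np

def anscombe(x):
    """Anscombe variance-stabilizing transform for Poisson-distributed data."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

# After the transform, the standard deviation is approximately 1
# regardless of the Poisson mean (for means that are not too small).
rng = np.random.default_rng(1)
stds = {lam: float(anscombe(rng.poisson(lam, 200_000)).std()) for lam in (10, 50, 200)}
for lam, s in stds.items():
    print(lam, round(s, 3))
```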

Block-matching and 3D filtering

Block-matching and 3D filtering (BM3D) is a 3-D block-matching algorithm used primarily for noise reduction in images.

Bruitparif

Bruitparif is a non-profit environmental organization responsible for monitoring the environmental noise in the Paris agglomeration. It was founded in 2004.

Constant false alarm rate

Constant false alarm rate (CFAR) detection refers to a common form of adaptive algorithm used in radar systems to detect target returns against a background of noise, clutter and interference.

Convolutional code

In telecommunication, a convolutional code is a type of error-correcting code that generates parity symbols via the sliding application of a boolean polynomial function to a data stream. The sliding application represents the 'convolution' of the encoder over the data, which gives rise to the term 'convolutional coding'. The sliding nature of the convolutional codes facilitates trellis decoding using a time-invariant trellis. Time invariant trellis decoding allows convolutional codes to be maximum-likelihood soft-decision decoded with reasonable complexity.

The ability to perform economical maximum likelihood soft decision decoding is one of the major benefits of convolutional codes. This is in contrast to classic block codes, which are generally represented by a time-variant trellis and therefore are typically hard-decision decoded. Convolutional codes are often characterized by the base code rate and the depth (or memory) of the encoder ${\displaystyle [n,k,K]}$. The base code rate is typically given as ${\displaystyle n/k}$, where ${\displaystyle n}$ is the input data rate and ${\displaystyle k}$ is the output symbol rate. The depth is often called the "constraint length" ${\displaystyle K}$, where the output is a function of the current input as well as the previous ${\displaystyle K-1}$ inputs. The depth may also be given as the number of memory elements ${\displaystyle v}$ in the polynomial or the maximum possible number of states of the encoder (typically : ${\displaystyle 2^{v}}$).

Convolutional codes are often described as continuous. However, it may also be said that convolutional codes have arbitrary block length, rather than being continuous, since most real-world convolutional encoding is performed on blocks of data. Convolutionally encoded block codes typically employ termination. The arbitrary block length of convolutional codes can also be contrasted to classic block codes, which generally have fixed block lengths that are determined by algebraic properties.

The code rate of a convolutional code is commonly modified via symbol puncturing. For example, a convolutional code with a 'mother' code rate ${\displaystyle n/k=1/2}$ may be punctured to a higher rate of, for example, ${\displaystyle 7/8}$ simply by not transmitting a portion of code symbols. The performance of a punctured convolutional code generally scales well with the amount of parity transmitted. The ability to perform economical soft decision decoding on convolutional codes, as well as the block length and code rate flexibility of convolutional codes, makes them very popular for digital communications.
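
As an illustrative sketch of the sliding polynomial application (not tied to any particular standard), here is a rate-1/2 encoder with constraint length K = 3 and the classic textbook generator polynomials 7 and 5 (octal):

```python
def conv_encode(bits, K=3, gens=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder, constraint length K = 3,
    generator polynomials 7 and 5 (octal)."""
    state = 0                      # holds the previous K-1 input bits
    out = []
    for b in bits:
        reg = (b << (K - 1)) | state          # current input + memory
        for g in gens:
            out.append(bin(reg & g).count("1") % 2)  # parity of tapped bits
        state = reg >> 1           # slide the encoder window by one bit
    return out

print(conv_encode([1, 0, 1, 1]))  # → [1, 1, 1, 0, 0, 0, 0, 1]
```

Each input bit produces two output bits, and each output depends on the current input plus the previous K − 1 = 2 inputs, as described above.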

Estimation theory

Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.

In estimation theory, two approaches are generally considered.

The probabilistic approach (described in this article) assumes that the measured data is random with probability distribution dependent on the parameters of interest

The set-membership approach assumes that the measured data vector belongs to a set which depends on the parameter vector.

Gaussian noise

Gaussian noise, named after Carl Friedrich Gauss, is statistical noise having a probability density function (PDF) equal to that of the normal distribution, which is also known as the Gaussian distribution. In other words, the values that the noise can take on are Gaussian-distributed.

The probability density function ${\displaystyle p}$ of a Gaussian random variable ${\displaystyle z}$ is given by:

${\displaystyle p_{G}(z)={\frac {1}{\sigma {\sqrt {2\pi }}}}e^{-{\frac {(z-\mu )^{2}}{2\sigma ^{2}}}}}$

where ${\displaystyle z}$ represents the grey level, ${\displaystyle \mu }$ the mean value and ${\displaystyle \sigma }$ the standard deviation.
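
For reference, the density is a one-liner:

```python
import math

def gaussian_pdf(z, mu=0.0, sigma=1.0):
    """Probability density of a Gaussian random variable with mean mu
    and standard deviation sigma."""
    return math.exp(-((z - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

print(gaussian_pdf(0.0))  # peak of the standard normal, 1/sqrt(2*pi) ≈ 0.3989
```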

A special case is white Gaussian noise, in which the values at any pair of times are identically distributed and statistically independent (and hence uncorrelated). In communication channel testing and modelling, Gaussian noise is used as additive white noise to generate additive white Gaussian noise.

In telecommunications and computer networking, communication channels can be affected by wideband Gaussian noise coming from many natural sources, such as the thermal vibrations of atoms in conductors (referred to as thermal noise or Johnson–Nyquist noise), shot noise, black-body radiation from the earth and other warm objects, and from celestial sources such as the Sun.

Principal sources of Gaussian noise in digital images arise during acquisition, e.g. sensor noise caused by poor illumination and/or high temperature, and/or transmission, e.g. electronic circuit noise. In digital image processing Gaussian noise can be reduced using a spatial filter, though when smoothing an image, an undesirable outcome may be the blurring of fine-scaled image edges and details, because they also correspond to blocked high frequencies. Conventional spatial filtering techniques for noise removal include: mean (convolution) filtering, median filtering and Gaussian smoothing.

In control theory, the linear–quadratic–Gaussian (LQG) control problem is one of the most fundamental optimal control problems. It concerns linear systems driven by additive white Gaussian noise. The problem is to determine an output feedback law that is optimal in the sense of minimizing the expected value of a quadratic cost criterion. Output measurements are assumed to be corrupted by Gaussian noise and the initial state, likewise, is assumed to be a Gaussian random vector.

Under these assumptions an optimal control scheme within the class of linear control laws can be derived by a completion-of-squares argument. This control law which is known as the LQG controller, is unique and it is simply a combination of a Kalman filter (a linear–quadratic state estimator (LQE)) together with a linear–quadratic regulator (LQR). The separation principle states that the state estimator and the state feedback can be designed independently. LQG control applies to both linear time-invariant systems as well as linear time-varying systems, and constitutes a linear dynamic feedback control law that is easily computed and implemented: the LQG controller itself is a dynamic system like the system it controls. Both systems have the same state dimension.

A deeper statement of the separation principle is that the LQG controller is still optimal in a wider class of possibly nonlinear controllers. That is, utilizing a nonlinear control scheme will not improve the expected value of the cost functional. This version of the separation principle is a special case of the separation principle of stochastic control which states that even when the process and output noise sources are possibly non-Gaussian martingales, as long as the system dynamics are linear, the optimal control separates into an optimal state estimator (which may no longer be a Kalman filter) and an LQR regulator.

In the classical LQG setting, implementation of the LQG controller may be problematic when the dimension of the system state is large. The reduced-order LQG problem (fixed-order LQG problem) overcomes this by fixing a priori the number of states of the LQG controller. This problem is more difficult to solve because it is no longer separable. Also, the solution is no longer unique. Despite these facts, numerical algorithms are available to solve the associated optimal projection equations which constitute necessary and sufficient conditions for a locally optimal reduced-order LQG controller.

LQG optimality does not automatically ensure good robustness properties. The robust stability of the closed loop system must be checked separately after the LQG controller has been designed. To promote robustness some of the system parameters may be assumed stochastic instead of deterministic. The associated more difficult control problem leads to a similar optimal controller of which only the controller parameters are different.

Finally, the LQG controller is also used to control perturbed non-linear systems.

List of unsolved problems in information theory

This article lists some unsolved problems in information theory which are separated into source coding and channel coding. There are also related unsolved problems in philosophy.

Maximal-ratio combining

In telecommunications, maximum-ratio combining (MRC) is a method of diversity combining in which:

• the signals from each channel are added together,
• the gain of each channel is made proportional to the rms signal level and inversely proportional to the mean square noise level in that channel, and
• different proportionality constants are used for each channel.

It is also known as ratio-squared combining and predetection combining. Maximum-ratio combining is the optimum combiner for independent additive white Gaussian noise channels.

MRC can restore a signal to its original shape. The technique was invented by American engineer Leonard R. Kahn in 1954.
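
A minimal sketch of the combining rule described above, assuming known per-branch signal amplitudes and noise variances (the numbers are purely illustrative); with these weights the combined SNR is the sum of the branch SNRs:

```python
import numpy as np

def mrc_combine(amplitudes, noise_vars, observations):
    """Maximum-ratio combining: weight each branch by its signal amplitude
    divided by its noise power, then sum the weighted observations."""
    w = np.asarray(amplitudes) / np.asarray(noise_vars)
    return float(w @ np.asarray(observations))

a = np.array([1.0, 0.5])   # branch signal amplitudes (illustrative)
nv = np.array([0.1, 0.2])  # branch noise variances (illustrative)

# Combined SNR equals the sum of per-branch SNRs: (w·a)^2 / (w^2·nv) = Σ a²/nv.
branch_snr = a**2 / nv
print(branch_snr.sum())         # → 11.25
print(mrc_combine(a, nv, a))    # noiseless unit symbol: output = summed SNR
```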

MRC has also been found in the field of neuroscience, where it has been shown that neurons in the retina scale their dependence on two sources of input in proportion to the signal-to-noise ratio of the inputs.

Median filter

The median filter is a non-linear digital filtering technique, often used to remove noise from an image or signal. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image). Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise (but see discussion below), also having applications in signal processing.
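
A minimal 1-D sketch of the idea (real image pipelines use optimized 2-D implementations such as those in image-processing libraries); note how the isolated impulse is removed while the step edge survives:

```python
def median_filter(signal, width=3):
    """1-D median filter: replace each sample with the median of the window
    centred on it (edges handled by clipping the window to the signal)."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = sorted(signal[max(0, i - half): i + half + 1])
        out.append(window[len(window) // 2])
    return out

# The lone impulse ("salt" noise) is removed; the step edge is preserved.
print(median_filter([0, 0, 9, 0, 0, 5, 5, 5]))  # → [0, 0, 0, 0, 0, 5, 5, 5]
```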

Optimal projection equations

In control theory, optimal projection equations constitute necessary and sufficient conditions for a locally optimal reduced-order LQG controller.

The Linear-Quadratic-Gaussian (LQG) control problem is one of the most fundamental optimal control problems. It concerns uncertain linear systems disturbed by additive white Gaussian noise, incomplete state information (i.e. not all the state variables are measured and available for feedback), also disturbed by additive white Gaussian noise, and quadratic costs. Moreover, the solution is unique and constitutes a linear dynamic feedback control law that is easily computed and implemented. Finally, the LQG controller is also fundamental to the optimal perturbation control of non-linear systems.

The LQG controller itself is a dynamic system, like the system it controls. Both systems have the same state dimension. Therefore, implementing the LQG controller may be problematic if the dimension of the system state is large. The reduced-order LQG problem (fixed-order LQG problem) overcomes this by fixing a priori the number of states of the LQG controller. This problem is more difficult to solve because it is no longer separable. Also, the solution is no longer unique. Despite these facts, numerical algorithms are available to solve the associated optimal projection equations.

Phase-shift keying

Phase-shift keying (PSK) is a digital modulation process which conveys data by changing (modulating) the phase of a constant frequency reference signal (the carrier wave). The modulation is accomplished by varying the sine and cosine inputs at a precise time. It is widely used for wireless LANs, RFID and Bluetooth communication.

Any digital modulation scheme uses a finite number of distinct signals to represent digital data. PSK uses a finite number of phases, each assigned a unique pattern of binary digits. Usually, each phase encodes an equal number of bits. Each pattern of bits forms the symbol that is represented by the particular phase. The demodulator, which is designed specifically for the symbol-set used by the modulator, determines the phase of the received signal and maps it back to the symbol it represents, thus recovering the original data. This requires the receiver to be able to compare the phase of the received signal to a reference signal – such a system is termed coherent (and referred to as CPSK).

CPSK requires a complicated demodulator, because it must extract the reference wave from the received signal and keep track of it, to compare each sample to. Alternatively, the phase shift of each symbol sent can be measured with respect to the phase of the previous symbol sent. Because the symbols are encoded in the difference in phase between successive samples, this is called differential phase-shift keying (DPSK). DPSK can be significantly simpler to implement than ordinary PSK, as it is a 'non-coherent' scheme, i.e. there is no need for the demodulator to keep track of a reference wave. A trade-off is that it has more demodulation errors.

Process gain

In a spread-spectrum system, the process gain (or "processing gain") is the ratio of the spread (or RF) bandwidth to the unspread (or baseband) bandwidth. It is usually expressed in decibels (dB).

For example, if a 1 kHz signal is spread to 100 kHz, the process gain expressed as a numerical ratio would be 100000/1000 = 100. Or in decibels, 10 log10(100) = 20 dB.
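
The example above can be reproduced directly:

```python
import math

def process_gain_db(spread_bw, baseband_bw):
    """Process gain of a spread-spectrum system, in dB."""
    return 10 * math.log10(spread_bw / baseband_bw)

# 1 kHz signal spread to 100 kHz: ratio 100, i.e. 20 dB.
print(process_gain_db(100e3, 1e3))  # → 20.0
```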

Note that process gain does not reduce the effects of wideband thermal noise. It can be shown that a direct-sequence spread-spectrum (DSSS) system has exactly the same bit error behavior as a non-spread-spectrum system with the same modulation format. Thus, on an additive white Gaussian noise (AWGN) channel without interference, a spread system requires the same transmitter power as an unspread system, all other things being equal.

Unlike a conventional communication system, however, a DSSS system does have a certain resistance against narrowband interference, as the interference is not subject to the process gain of the DSSS signal, and hence the signal-to-interference ratio is improved.

In frequency modulation (FM), the processing gain can be expressed as

${\displaystyle G_{\text{p}}={\cfrac {1.5B_{\text{n}}(\Delta f)^{2}}{W^{3}}},}$

where:

Gp is the processing gain,
Bn is the noise bandwidth,
Δf is the peak frequency deviation,
W is the sinusoidal modulating frequency.

Shrinkage Fields (image restoration)

Shrinkage fields is a random field-based machine learning technique that aims to perform high quality image restoration (denoising and deblurring) using low computational overhead.

Signal-to-interference ratio

The signal-to-interference ratio (SIR or S/I), also known as the carrier-to-interference ratio (CIR or C/I), is the quotient between the average received modulated carrier power S or C and the average received co-channel interference power I, i.e. cross-talk, from other transmitters than the useful signal.

The CIR resembles the carrier-to-noise ratio (CNR or C/N), which is the signal-to-noise ratio (SNR or S/N) of a modulated signal before demodulation. A distinction is that interfering radio transmitters contributing to I may be controlled by radio resource management, while N involves noise power from other sources, typically additive white Gaussian noise (AWGN).

WGN

WGN may refer to:

World's Greatest Newspaper, former slogan of the Chicago Tribune and the namesake for the WGN broadcasting outlets in Chicago, Illinois.

WGN (AM), a radio station (720 AM) licensed to Chicago, Illinois, United States

WGN-TV, a television station (channel 9.1 virtual/19 digital) licensed to Chicago, Illinois, United States

WGN America, a cable television network based in Chicago, Illinois, United States

WFMT, a radio station (98.7 FM) licensed to Chicago, Illinois, United States, which operates on the frequency formerly belonging to the Tribune-owned FM station that used the call sign WGNB from 1945 until 1953

Shaoyang Wugang Airport, IATA code WGN

The ICAO airline designator for Western Global Airlines

WGN, Journal of the International Meteor Organization

Water-pouring algorithm

The water-pouring algorithm is a technique used in digital communications systems for allocating power among different channels in multicarrier schemes. It was described by Robert G. Gallager in 1968 along with the water-pouring theorem, which proves its optimality for channels having additive white Gaussian noise (AWGN) and intersymbol interference (ISI).

For this reason, it is a standard baseline algorithm for various digital communications systems. The intuition that gives the algorithm its name is to think of the communication medium as if it were a water container with an uneven bottom. Each of the available channels is then a section of the container; the height of its floor is given by the reciprocal of the frequency-dependent SNR for that channel, so the sections with the best SNR are the deepest.

To allocate power, imagine pouring water into this container (the amount depends on the desired maximum average transmit power). After the water level settles, the largest amount of water is in the deepest sections of the container. This implies allocating more power to the channels with the most favourable SNR. Note, however, that the power allocated to each channel is not a fixed proportion of the total but varies nonlinearly with the maximum average transmit power.
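The settling of the water level can be sketched with a simple bisection search on the level. The function name, the bisection approach, and the example gains are illustrative choices, not prescribed by the source:

```python
def water_filling(gains, total_power, iters=100):
    """Distribute total_power over channels with the given (linear) SNR gains.

    Channel i is a section of the container whose floor height is 1/gains[i];
    pouring water to level mu gives that channel power max(0, mu - 1/gains[i]).
    The level mu is found by bisection so the allocations sum to total_power.
    """
    floors = [1.0 / g for g in gains]
    lo, hi = min(floors), max(floors) + total_power
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if sum(max(0.0, mu - f) for f in floors) > total_power:
            hi = mu  # water level too high: pour less
        else:
            lo = mu  # water level too low: pour more
    mu = 0.5 * (lo + hi)
    return [max(0.0, mu - f) for f in floors]

# The deeper section (better SNR) ends up holding more of the water (power):
allocation = water_filling([4.0, 1.0], total_power=1.0)
```

Note that a channel whose floor lies above the final water level receives exactly zero power, which is how the algorithm switches off very poor channels at low transmit power.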

White noise

In signal processing, white noise is a random signal having equal intensity at different frequencies, giving it a constant power spectral density. The term is used, with this or similar meanings, in many scientific and technical disciplines, including physics, acoustical engineering, telecommunications, and statistical forecasting. White noise refers to a statistical model for signals and signal sources, rather than to any specific signal. White noise draws its name from white light, although light that appears white generally does not have a flat power spectral density over the visible band.

In discrete time, white noise is a discrete signal whose samples are regarded as a sequence of serially uncorrelated random variables with zero mean and finite variance; a single realization of white noise is a random shock. Depending on the context, one may also require that the samples be independent and have an identical probability distribution (in other words, independent and identically distributed random variables are the simplest representation of white noise). In particular, if each sample has a normal distribution with zero mean, the signal is said to be additive white Gaussian noise.

The samples of a white noise signal may be sequential in time, or arranged along one or more spatial dimensions. In digital image processing, the pixels of a white noise image are typically arranged in a rectangular grid, and are assumed to be independent random variables with uniform probability distribution over some interval. The concept can also be defined for signals spread over more complicated domains, such as a sphere or a torus.
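A discrete-time realization of such a sequence can be sketched with the standard library alone; the checks below (sample mean near zero, sample variance near one, lag-1 autocorrelation near zero) are an illustrative way to verify the properties described above, with the seed and sample count chosen arbitrarily:

```python
import random
import statistics

# One realization of discrete-time white Gaussian noise:
# i.i.d. samples with zero mean and unit variance.
random.seed(0)  # fixed seed so the run is reproducible
n = 100_000
noise = [random.gauss(0.0, 1.0) for _ in range(n)]

sample_mean = statistics.fmean(noise)
sample_var = statistics.fmean(x * x for x in noise)
# Serial uncorrelatedness: the lag-1 autocovariance should be near zero.
lag1 = statistics.fmean(noise[i] * noise[i + 1] for i in range(n - 1))
```

The same construction with `random.uniform` in place of `random.gauss` would still be white noise (uncorrelated, zero mean after centering) but not Gaussian, illustrating that whiteness and Gaussianity are independent properties.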

An infinite-bandwidth white noise signal is a purely theoretical construction. The bandwidth of white noise is limited in practice by the mechanism of noise generation, by the transmission medium and by finite observation capabilities. Thus, random signals are considered "white noise" if they are observed to have a flat spectrum over the range of frequencies that are relevant to the context. For an audio signal, the relevant range is the band of audible sound frequencies (between 20 and 20,000 Hz). Such a signal is heard by the human ear as a hissing sound, resembling the /sh/ sound in "ash". In music and acoustics, the term "white noise" may be used for any signal that has a similar hissing sound.

The term white noise is sometimes used in the context of phylogenetically based statistical methods to refer to a lack of phylogenetic pattern in comparative data. It is sometimes used analogously in nontechnical contexts to mean "random talk without meaningful contents".


This page is based on a Wikipedia article written by its contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.