Alignment level

The alignment level in an audio signal chain or on an audio recording is a defined anchor point that represents a reasonable or typical level. It does not represent a particular sound level or signal level or digital representation, but it can be defined as corresponding to particular levels in each of these domains.

For example, alignment level is commonly 0 dBu (equal to 0.775 volts RMS) in broadcast chains, and in professional audio it is commonly "0 VU", which is +4 dBu (equal to approximately 1.228 volts RMS), where the signal exists as an analogue voltage. Under normal conditions the "0 VU" reference allowed for a headroom of 18 dB or more above the reference level without significant distortion. This is largely because almost all analogue professional audio equipment used slow-responding VU meters which, by design and by specification, responded to an average level rather than to peak levels.

On digital recordings for programme exchange, alignment level is most commonly −18 dB FS (18 dB below full scale), in accordance with EBU recommendations. Digital equipment must use peak-reading metering to avoid the severe distortion caused by the signal exceeding 'full scale', the maximum digital level. 24-bit original or master recordings commonly place alignment level at −24 dB FS to allow extra headroom, which can then be reduced to match the available headroom of the final medium by audio level compression.

FM broadcasts usually have only 9 dB of headroom, as recommended by the EBU. Digital broadcasts could operate with 18 dB of headroom, given their low noise floor even in difficult reception areas, but they currently operate in a state of confusion: some transmit at maximum level while others operate at much lower levels, even though they carry material that has been compressed for compatibility with the lower dynamic range of FM transmission.
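The levels mentioned above can be related numerically; a minimal sketch (function names are illustrative, not from any standard):

```python
import math

DBU_REF_VOLTS = 0.7746  # 0 dBu reference: sqrt(0.6) V RMS, approximately 0.775 V

def dbu_to_volts(dbu):
    """Convert a level in dBu to its RMS voltage."""
    return DBU_REF_VOLTS * 10 ** (dbu / 20)

def headroom_db(alignment_dbfs):
    """Headroom between an alignment level (in dB FS) and full scale (0 dB FS)."""
    return 0.0 - alignment_dbfs

print(round(dbu_to_volts(0), 3))   # 0 dBu  -> 0.775 V RMS (broadcast alignment)
print(round(dbu_to_volts(4), 3))   # +4 dBu -> 1.228 V RMS ("0 VU")
print(headroom_db(-18))            # EBU alignment level leaves 18 dB of headroom
```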

Alignment level as used in the EBU

The EBU does not use the term "alignment level" for the levelling of actual programme signals. In EBU documents, "alignment level" simply defines −18 dBFS as the level of the Alignment Signal: a 1 kHz sine tone, or 997 Hz in the digital domain.
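As an illustrative sketch (not an EBU implementation), a digital alignment tone can be generated by scaling a 997 Hz sine so that it sits 18 dB below full scale; note that conventions differ on whether sine levels are quoted as peak or RMS dBFS, and peak scaling is assumed here:

```python
import math

def alignment_tone(freq_hz=997.0, level_dbfs=-18.0, sample_rate=48000, seconds=1.0):
    """Generate a sine tone whose peak amplitude sits level_dbfs below full scale.
    (Conventions differ on whether sine levels are quoted as peak or RMS dBFS.)"""
    amplitude = 10 ** (level_dbfs / 20)  # peak amplitude relative to full scale 1.0
    n = int(sample_rate * seconds)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]

tone = alignment_tone()
peak = max(abs(s) for s in tone)
print(round(20 * math.log10(peak), 1))  # about -18.0 dBFS
```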

The reason for alignment level

Using alignment level rather than maximum permitted level as the reference point allows more sensible headroom management throughout the audio chain, so that quality is only sacrificed through compression as late as possible.

Loudness wars have caused a general fall in audio quality, initially on radio stations and more recently on CDs. As radio stations competed for attention, and to raise the listener figures on which their advertising revenue is based, they used audio compression to give their sound more impact. They used level compressors, and in particular multi-band compressors that compress different frequency bands independently. Such compressors usually incorporate fast-acting limiters to eliminate brief peaks, since brief peaks, though they may not contribute much to perceived loudness, limit the modulation level that can be applied to FM transmissions in particular if serious clipping and distortion are to be avoided.

Digital broadcasting has changed all this: stations are no longer found by tuning across the band, so the loudest stations no longer stand out, and a low noise level is guaranteed regardless of signal level, so full modulation is no longer necessary to ensure acceptable clarity in poor reception areas. Many professionals feel that wider adoption and understanding of alignment level throughout the audio industry could help bring modulation levels down, leaving headroom to cope with brief peaks, and encourage a different form of level compression that reduces dynamic range on programmes where this is considered desirable but does not remove the brief peaks which add 'sparkle' and contribute to clearer sound. CDs in particular have suffered a loss of quality since they were introduced, through the widespread use of fast limiting, which, given their very low noise level, is quite unnecessary.

Digital audio players, such as the iPod, demonstrate the need for a common alignment level. While tracks taken from recent CDs sound loud enough, many older recordings (such as Pink Floyd albums, which notably allowed generous headroom for wide dynamic range and rarely reach peak digital level) are far too quiet, even at full volume. Older audio systems typically incorporated 12 dB of 'overvolume', meaning that the loudness of a quiet recording could be turned up to make maximum use of amplifier output even if peak level was never reached on the recording. Modern devices, however, tend to produce maximum output at full volume only on recordings that reach full-scale digital level. If extra gain is added, then playing a modern CD after listening to a well-recorded older one is likely to deafen, requiring the volume control to be turned down by a huge amount. Again, the adoption of a common alignment level (early CDs allowed around 18 dB of headroom by common consent) would make sense, improving quality and usability and ending the loudness war.

Making compression a listening option

The incorporation of (switchable) level compression in domestic and in-car music systems would allow higher quality on systems capable of wide dynamic range and in situations that allowed realistic reproduction. Such compression systems have been suggested and tried from time to time, but are not in widespread use — a 'chicken and egg' problem, since producers feel they must make programmes and recordings that sound good in cars with high ambient noise or on cheap low-power music systems. In the UK, some DAB receivers do incorporate a menu setting for automatic loudness compensation which adds extra gain on BBC Radio 3 and BBC Radio 4, to allow for the fact that these programmes adopt lower levels than, for example, the pop station Radio 1. Some television receivers also have a menu setting for loudness normalisation, aimed at helping to reduce excessive loudness on advertisements. However, there is no common agreement to reduce compression and limiting and leave these tasks to the receiver.

Anchor point

In audio and recording, what is known colloquially as an anchor point is a center position in a stereo mix reserved for only three or four important tracks. Most modern pop productions are anchored by lead (vocals and soloing instruments), bass, kick drum, and snare drum. These are usually within a few degrees of center (horizontal) and front (proximity or depth) in the mix. Exceptions include early stereo recordings using "stereo-switching" (a three-way switch allowing only left output, right output, or both) rather than pan pots, such as the Beatles's "Strawberry Fields Forever" and Jimi Hendrix's "Purple Haze". Examples of tracks using anchor points include The Breeders's "Cannonball", The Cure's "Catch", Lady Gaga's "Just Dance", Lily Allen's "The Fear", Radiohead's "Airbag", Squarepusher's "Star Time 2", Stone Roses's "One Love", and Weezer's "My Name Is Jonas".

Audio bit depth

In digital audio using pulse-code modulation (PCM), bit depth is the number of bits of information in each sample, and it directly corresponds to the resolution of each sample. Examples of bit depth include Compact Disc Digital Audio, which uses 16 bits per sample, and DVD-Audio and Blu-ray Disc which can support up to 24 bits per sample.

In basic implementations, variations in bit depth primarily affect the noise level from quantization error—thus the signal-to-noise ratio (SNR) and dynamic range. However, techniques such as dithering, noise shaping and oversampling mitigate these effects without changing the bit depth. Bit depth also affects bit rate and file size.
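The relationship between bit depth and quantization noise can be sketched with the standard rule of thumb for an ideal converter: roughly 6.02 dB of SNR per bit, plus 1.76 dB for a full-scale sine wave.

```python
def quantization_snr_db(bits):
    """Theoretical SNR of an ideal N-bit quantizer for a full-scale sine:
    approximately 6.02 * N + 1.76 dB."""
    return 6.02 * bits + 1.76

print(round(quantization_snr_db(16), 2))  # 98.08 dB (CD audio)
print(round(quantization_snr_db(24), 2))  # 146.24 dB (DVD-Audio / Blu-ray)
```

Dithering and noise shaping redistribute this noise rather than change the total predicted by the formula.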

Bit depth is only meaningful in reference to a PCM digital signal. Non-PCM formats, such as lossy compression formats, do not have associated bit depths.

Audio normalization

Audio normalization is the application of a constant amount of gain to an audio recording to bring the amplitude to a target level (the norm). Because the same amount of gain is applied across the entire recording, the signal-to-noise ratio and relative dynamics are unchanged.

Two principal types of audio normalization exist. Peak normalization adjusts the recording based on the highest signal level present in the recording. Loudness normalization adjusts the recording based on perceived loudness.

Normalization differs from dynamic range compression, which applies varying levels of gain over a recording to fit the level within a minimum and maximum range. Normalization adjusts the gain by a constant value across the entire recording.
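Peak normalization as described can be sketched in a few lines (illustrative only):

```python
def peak_normalize(samples, target_peak=1.0):
    """Apply one constant gain so the largest absolute sample hits target_peak.
    SNR and relative dynamics are unchanged because every sample is scaled alike."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # silence: nothing to scale
    gain = target_peak / peak
    return [s * gain for s in samples]

print(peak_normalize([0.1, -0.25, 0.05]))  # peak 0.25 scaled to 1.0 -> [0.4, -1.0, 0.2]
```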

Normalization is one of the functions commonly provided by a digital audio workstation.

Audio system measurements

Audio system measurements are made for several purposes. Designers take measurements so that they can specify the performance of a piece of equipment. Maintenance engineers make them to ensure equipment is still working to specification, or to ensure that the cumulative defects of an audio path are within limits considered acceptable. Some aspects of measurement and specification relate only to intended usage. Audio system measurements often accommodate psychoacoustic principles to measure the system in a way that relates to human hearing.


dBFS

Decibels relative to full scale (dBFS or dB FS) is a unit of measurement for amplitude levels in digital systems, such as pulse-code modulation (PCM), which have a defined maximum peak level. The unit is similar to the units dBov and dBO.

The level of 0 dBFS is assigned to the maximum possible digital level. For example, a signal that reaches 50% of the maximum level has a level of −6 dBFS, which is 6 dB below full scale. Conventions differ for root mean square (RMS) measurements, but all peak measurements smaller than the maximum are negative levels.
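The dBFS figures above follow from the usual 20·log10 relation; a minimal sketch:

```python
import math

def amplitude_to_dbfs(amplitude, full_scale=1.0):
    """Express a peak amplitude in dB relative to full scale (0 dBFS)."""
    return 20 * math.log10(abs(amplitude) / full_scale)

print(round(amplitude_to_dbfs(1.0), 2))  # 0.0 dBFS: maximum digital level
print(round(amplitude_to_dbfs(0.5), 2))  # -6.02 dBFS: 50% of full scale
```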

A digital signal that does not contain any samples at 0 dBFS can still clip when converted to analog form due to the signal reconstruction process interpolating between samples. This can be prevented by careful digital-to-analog converter circuit design. Measurements of the true inter-sample peak levels are notated as dBTP or dB TP ("decibels true peak").

Headroom (audio signal processing)

In digital and analog audio, headroom refers to the amount by which the signal-handling capabilities of an audio system exceed a designated nominal level. Headroom can be thought of as a safety zone allowing transient audio peaks to exceed the nominal level without damaging the system or the audio signal, e.g., via clipping. Standards bodies differ in their recommendations for nominal level and headroom.

Line level

Line level is the specified strength of an audio signal used to transmit analog sound between audio components such as CD and DVD players, television sets, audio amplifiers, and mixing consoles.

Line level sits amongst other signal strengths: weaker signals, such as those from microphones and instrument pickups, and stronger signals, such as those used to drive headphones and loudspeakers. The "strength" of these various signals does not necessarily refer to the output voltage of the source device; it also depends on the source's output impedance and output power capability.
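The common nominal line levels can be expressed as voltages from their reference units; −10 dBV (consumer) and +4 dBu (professional) are the usual figures:

```python
def dbv_to_volts(dbv):
    """dBV is referenced to 1 V RMS."""
    return 10 ** (dbv / 20)

def dbu_to_volts(dbu):
    """dBu is referenced to 0.7746 V RMS (the voltage giving 1 mW into 600 ohms)."""
    return 0.7746 * 10 ** (dbu / 20)

print(round(dbv_to_volts(-10), 3))  # consumer line level: about 0.316 V RMS
print(round(dbu_to_volts(4), 3))    # professional line level: about 1.228 V RMS
```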

Loudness war

The loudness war (or loudness race) refers to the trend of increasing audio levels in recorded music which many critics believe reduces sound quality and listener enjoyment. Increasing loudness was first reported as early as the 1940s, with respect to mastering practices for 7" singles. The maximum peak level of analog recordings such as these is limited by varying specifications of electronic equipment along the chain from source to listener, including vinyl and Compact Cassette players. The issue garnered renewed attention starting in the 1990s with the introduction of digital signal processing capable of producing further loudness increases.

With the advent of the Compact Disc (CD), music is encoded to a digital format with a clearly defined maximum peak amplitude. Once the maximum amplitude of a CD is reached, loudness can be increased still further through signal processing techniques such as dynamic range compression and equalization. Engineers can apply an increasingly high ratio of compression to a recording until it more frequently peaks at the maximum amplitude. In extreme cases, efforts to increase loudness can result in clipping and other audible distortion. Modern recordings that use extreme dynamic range compression and other measures to increase loudness therefore can sacrifice sound quality to loudness. The competitive escalation of loudness has led music fans and members of the musical press to refer to the affected albums as "victims of the loudness war."

Nominal level

Nominal level is the operating level at which an electronic signal processing device is designed to operate. The electronic circuits that make up such equipment are limited in the maximum signal they can handle and the low-level internally generated electronic noise they add to the signal. The difference between the internal noise and the maximum level is the device's dynamic range. The nominal level is the level that these devices were designed to operate at, for best dynamic range and adequate headroom. When a signal is chained with improper gain staging through many devices, the dynamic range of the signal is reduced.

In audio, a related measurement, signal-to-noise ratio, is usually defined as the difference between the nominal level and the noise floor, leaving the headroom as the difference between nominal and maximum output. It is important to realize that the measured level is a time average, meaning that the peaks of audio signals regularly exceed the measured average level. The headroom measurement defines how far the peak levels can stray from the nominal measured level before clipping. The difference between the peaks and the average for a given signal is the crest factor.
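The crest factor described above can be computed directly; a pure sine wave, for example, has a crest factor of sqrt(2), about 3.01 dB:

```python
import math

def crest_factor_db(samples):
    """Crest factor: peak level minus RMS (average) level, in dB."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# One full cycle of a sine, sampled at 1000 points.
sine = [math.sin(2 * math.pi * i / 1000) for i in range(1000)]
print(round(crest_factor_db(sine), 2))  # 3.01 dB
```

Real programme material typically has a much higher crest factor than a sine, which is why headroom above the nominal (average) level matters.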

There is some confusion over the use of the term "nominal", which is often used incorrectly to mean "average or typical". The relevant definition in this case is "as per design"; gain is applied to make the average signal level correspond to the designed, or nominal, level.

Peak programme meter

A peak programme meter (PPM) is an instrument used in professional audio that indicates the level of an audio signal.

Different kinds of PPM fall into broad categories:

True peak programme meter. This shows the peak level of the waveform no matter how brief its duration.

Quasi peak programme meter (QPPM). This only shows the true level of the peak if it exceeds a certain duration, typically a few milliseconds. On peaks of shorter duration, it indicates less than the true peak level. The extent of the shortfall is determined by the 'integration time'.

Sample peak programme meter (SPPM). This is a PPM for digital audio that shows only peak sample values, not the true waveform peaks (which may fall between samples and be up to 3 dB higher in amplitude). It may have either a 'true' or a 'quasi' integration characteristic.

Over-sampling peak programme meter. This is a sample PPM in which the signal has first been over-sampled, typically by a factor of four, to alleviate the problem with a basic sample PPM.

In professional usage, where consistent level measurements are needed across an industry, audio level meters often comply with a detailed formal standard. This ensures that all compliant meters indicate the same level for a given audio signal. The principal standard for PPMs is IEC 60268-10. It describes two different quasi-PPM designs that have roots in meters originally developed in the 1930s for the AM radio broadcasting networks of Germany (Type I) and the United Kingdom (Type II). The term peak programme meter usually refers to these IEC-specified types and similar designs. Though originally designed for monitoring analogue audio signals, these PPMs are now also used with digital audio.
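The inter-sample-peak problem that motivates the over-sampling PPM can be demonstrated with a worst-case signal: a full-scale sine at one quarter of the sample rate, sampled at 45° phase, where every sample lands 3 dB below the true waveform peak.

```python
import math

# Full-scale sine at fs/4, sampled at 45 degrees phase: every sample lands at
# +/-0.707 of full scale, so a sample PPM under-reads the true peak by ~3 dB.
fs = 48000
samples = [math.sin(2 * math.pi * (fs / 4) * i / fs + math.pi / 4) for i in range(48)]
sample_peak_db = 20 * math.log10(max(abs(s) for s in samples))
print(round(sample_peak_db, 2))  # -3.01 dBFS, though the waveform truly peaks at 0 dBFS
```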

PPMs do not provide effective loudness monitoring. Newer types of meter do, and there is now a push within the broadcasting industry to move away from the traditional level meters in this article to two new types: loudness meters based on EBU Tech. 3341 and oversampling true PPMs. The former would be used to standardise broadcast loudness to −23 LUFS and the latter to prevent digital clipping.

Programme level

Programme level refers to the signal level that an audio source is transmitted or recorded at, and is important in audio if listeners of Compact Discs (CDs), radio and television are to get the best experience, without excessive noise in quiet periods or distortion of loud sounds. Programme level is often measured using a peak programme meter or a VU meter.

The level of an audio signal is among the most basic of measurements, and yet widespread misunderstanding and disagreement about programme levels have become arguably the greatest single obstacle to high quality sound reproduction.


ReplayGain

ReplayGain is a proposed standard published by David Robinson in 2001 to measure the perceived loudness of audio in computer audio formats such as MP3 and Ogg Vorbis. It allows media players to normalize loudness for individual tracks or albums. This avoids the common problem of having to manually adjust volume levels between tracks when playing audio files from albums that have been mastered at different loudness levels.

Although this de facto standard is now formally known as ReplayGain, it was originally known as Replay Gain and is sometimes abbreviated RG.

ReplayGain is supported in a large number of media software and portable devices.

Sensor fusion

Sensor fusion is the combining of sensory data, or data derived from disparate sources, such that the resulting information has less uncertainty than would be possible if these sources were used individually. Uncertainty reduction in this case can mean more accurate, more complete, or more dependable, or refer to the result of an emerging view, such as stereoscopic vision (calculation of depth information by combining two-dimensional images from two cameras at slightly different viewpoints).

The data sources for a fusion process are not required to originate from identical sensors. One can distinguish direct fusion, indirect fusion, and fusion of the outputs of the former two. Direct fusion is the fusion of sensor data from a set of heterogeneous or homogeneous sensors, soft sensors, and history values of sensor data, while indirect fusion uses information sources like a priori knowledge about the environment and human input.
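A minimal sketch of direct fusion, assuming two independent sensor readings with known variances, uses inverse-variance weighting (a common textbook approach, not tied to any particular system):

```python
def fuse_two_estimates(x1, var1, x2, var2):
    """Inverse-variance weighted fusion of two independent measurements.
    The fused variance is smaller than either input variance, illustrating
    the uncertainty reduction that defines sensor fusion."""
    w1 = 1 / var1
    w2 = 1 / var2
    fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    fused_var = 1 / (w1 + w2)
    return fused, fused_var

# Two equally trusted sensors reading 10.0 and 12.0 (variance 4.0 each):
value, variance = fuse_two_estimates(10.0, 4.0, 12.0, 4.0)
print(value, variance)  # 11.0 2.0 -- averaged estimate, halved variance
```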

Sensor fusion is also known as (multi-sensor) data fusion and is a subset of information fusion.

Signal-to-noise ratio

Signal-to-noise ratio (abbreviated SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. SNR is defined as the ratio of signal power to the noise power, often expressed in decibels. A ratio higher than 1:1 (greater than 0 dB) indicates more signal than noise.

While SNR is commonly quoted for electrical signals, it can be applied to any form of signal, for example isotope levels in an ice core, biochemical signaling between cells, or financial trading signals. Signal-to-noise ratio is sometimes used metaphorically to refer to the ratio of useful information to false or irrelevant data in a conversation or exchange. For example, in online discussion forums and other online communities, off-topic posts and spam are regarded as "noise" that interferes with the "signal" of appropriate discussion.

The signal-to-noise ratio, the bandwidth, and the channel capacity of a communication channel are connected by the Shannon–Hartley theorem.
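A minimal sketch of computing SNR in decibels from sample sequences (illustrative, not a standardised measurement procedure):

```python
import math

def snr_db(signal, noise):
    """SNR = 10 * log10(signal power / noise power), with power as mean square."""
    p_signal = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    return 10 * math.log10(p_signal / p_noise)

signal = [1.0, -1.0] * 50   # mean-square power 1.0
noise = [0.1, -0.1] * 50    # mean-square power 0.01
print(round(snr_db(signal, noise), 1))  # 20.0 dB: signal power is 100x noise power
```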

Total harmonic distortion analyzer

A total harmonic distortion analyzer calculates the total harmonic content of a sine wave with some distortion, expressed as total harmonic distortion (THD). A typical application is to determine the THD of an amplifier by using a very-low-distortion sine wave input and examining the output. The figure measured will include noise, and any contribution from imperfect filtering out of the fundamental frequency. Harmonic-by-harmonic levels, without wideband noise, can be measured with a more complex wave analyser.
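A simplified THD computation can be sketched by correlating the signal against the fundamental and each harmonic (single-bin DFTs); a real analyzer also accounts for noise, windowing, and filter imperfections:

```python
import math

def thd(samples, sample_rate, fundamental_hz, n_harmonics=5):
    """Estimate THD = sqrt(sum of harmonic amplitudes squared) / fundamental amplitude.
    Each amplitude comes from a single-bin DFT (correlation with a cosine/sine pair)."""
    n = len(samples)

    def amplitude(freq):
        re = sum(s * math.cos(2 * math.pi * freq * i / sample_rate)
                 for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * freq * i / sample_rate)
                 for i, s in enumerate(samples))
        return 2 * math.hypot(re, im) / n

    # Only harmonics below the Nyquist frequency are meaningful.
    harmonic_amps = [amplitude(k * fundamental_hz)
                     for k in range(2, n_harmonics + 2)
                     if k * fundamental_hz < sample_rate / 2]
    return math.sqrt(sum(a * a for a in harmonic_amps)) / amplitude(fundamental_hz)

fs = 8000
# A 1 kHz sine with a 10% third harmonic should give a THD of about 0.10 (10%).
wave = [math.sin(2 * math.pi * 1000 * i / fs) + 0.1 * math.sin(2 * math.pi * 3000 * i / fs)
        for i in range(fs)]
print(round(thd(wave, fs, 1000), 3))  # 0.1
```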

Another application is measurement of the effectiveness of an electronic filter with an extremely narrow passband, such as a notch filter in a parametric equalizer.

Transmission level point

In telecommunication, a transmission level point (TLP) is a physical test point in an electronic circuit, typically a transmission channel, where a test signal may be inserted or measured. Typically, various parameters, such as the power of the signal, noise, or test tones inserted are specified or measured at the TLP.

The nominal transmission level at a TLP is a function of system design and is an expression of the design gain or attenuation (loss).

Voice-channel transmission levels at test points are measured in decibels relative to one milliwatt (dBm) at a frequency of approximately 1000 hertz. The dBm is an absolute level with respect to 1 mW. The TLP is thus characterized by the relation:

TLP = dBm − dBm0

When the nominal signal power is 0 dBm at the TLP, the test point is called a zero transmission level point, or zero-dBm TLP. In general, the term TLP is commonly used as if it were a unit, preceded by the nominal level for the test point. For example, the expression 0 TLP refers to a 0 dBm TLP. At a −16 TLP, a measured level of 0 dBm is +16 dBm0.

The level at a TLP where an end instrument, such as a telephone set, is connected is usually specified as 0 dBm.
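The dBm0 arithmetic above can be sketched as follows (the function name is illustrative):

```python
def dbm0(measured_dbm, tlp_dbm):
    """Refer a level measured at a test point to the zero transmission level point:
    dBm0 = measured dBm minus the TLP value (a rearrangement of TLP = dBm - dBm0)."""
    return measured_dbm - tlp_dbm

print(dbm0(0, -16))   # a 0 dBm reading at a -16 TLP corresponds to +16 dBm0
print(dbm0(-16, -16)) # nominal level at the same point reads as 0 dBm0
```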

Weighting filter

A weighting filter is used to emphasize or suppress some aspects of a phenomenon compared to others, for measurement or other purposes.
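As an illustration, the magnitude response of the common A-weighting curve (one well-known audio weighting filter) can be evaluated from its standard pole frequencies, normalised here to 0 dB at 1 kHz:

```python
import math

def a_weight_db(f):
    """A-weighting response in dB, normalised to 0 dB at 1 kHz,
    using the standard pole frequencies 20.6, 107.7, 737.9 and 12194 Hz."""
    def r_a(freq):
        f2 = freq * freq
        return (12194.0**2 * f2**2) / (
            (f2 + 20.6**2)
            * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
            * (f2 + 12194.0**2)
        )
    return 20 * math.log10(r_a(f) / r_a(1000.0))

print(round(a_weight_db(1000), 2))  # 0.0 by construction
print(round(a_weight_db(100), 1))   # about -19.1 dB: low frequencies are suppressed
```

This reflects the filter's purpose: emphasizing the midrange frequencies to which human hearing is most sensitive while suppressing low and very high frequencies.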

This page is based on Wikipedia articles written by their respective authors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.