Chroma subsampling

Chroma subsampling is the practice of encoding images by using lower resolution for chroma information than for luma information, taking advantage of the human visual system's lower acuity for color differences than for luminance.[1]

It is used in many video encoding schemes – both analog and digital – and also in JPEG encoding.

Rationale

[Image: In full size, this image shows the difference between four subsampling schemes. Note how similar the color images appear. The lower row shows the resolution of the color information.]

Digital signals are often compressed to reduce file size and save transmission time. Since the human visual system is much more sensitive to variations in brightness than in color, a video system can be optimized by devoting more bandwidth to the luma component (usually denoted Y') than to the color difference components Cb and Cr. In compressed images, for example, the 4:2:2 Y'CbCr scheme requires two-thirds the bandwidth of (4:4:4) R'G'B'. This reduction results in almost no visual difference as perceived by the viewer.

How subsampling works

Because the human visual system is less sensitive to the position and motion of color than luminance,[2] bandwidth can be optimized by storing more luminance detail than color detail. At normal viewing distances, there is no perceptible loss incurred by sampling the color detail at a lower rate. In video systems, this is achieved through the use of color difference components. The signal is divided into a luma (Y') component and two color difference components (chroma).

In human vision there are three channels for color detection, and for many color systems three "channels" are sufficient for representing most colors, for example red, green, and blue, or magenta, yellow, and cyan. But there are other ways to represent the color. In many video systems, the three channels are luminance and two chroma channels. In video, the luma and chroma components are formed as a weighted sum of gamma-corrected (tristimulus) R'G'B' components instead of linear (tristimulus) RGB components. As a result, luma must be distinguished from luminance. There is some "bleeding" of luminance and color information between the luma and chroma components in video; the error is greatest for highly saturated colors, and is noticeable between the magenta and green bars of a color-bar test pattern that has chroma subsampling applied. However, this bleeding should not be attributed to the engineering approximation just described: similar bleeding can occur even with gamma = 1, where reversing the order of gamma correction and forming the weighted sum can make no difference. The real cause is that the chroma can influence the luma specifically at the pixels where the subsampling placed no chroma sample. Interpolation may then put chroma values there that are incompatible with the luma value at that pixel, and the further post-processing of that Y'CbCr into R'G'B' is what ultimately produces false luminance upon display.

[Image: Original without chroma subsampling. 200% zoom.]

[Image: After chroma subsampling (compressed with the Sony Vegas DV codec, box filtering applied).]

Sampling systems and ratios

The subsampling scheme is commonly expressed as a three-part ratio J:a:b (e.g. 4:2:2), or as four parts if an alpha channel is present (e.g. 4:2:2:4), describing the number of luma and chroma samples in a conceptual region that is J pixels wide and 2 pixels high. The parts are (in their respective order):

  • J: horizontal sampling reference (width of the conceptual region). Usually, 4.
  • a: number of chrominance samples (Cr, Cb) in the first row of J pixels.
  • b: number of changes of chrominance samples (Cr, Cb) between the first and second rows of J pixels.
  • Alpha: horizontal factor (relative to first digit). May be omitted if alpha component is not present, and is equal to J when present.

This notation is not valid for all combinations and has exceptions, e.g. 4:1:0 (where the height of the region is not 2 pixels but 4 pixels, so if 8 bits/component are used the media would be 9 bits/pixel) and 4:2:1.
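
As an informal illustration of how the notation maps onto sampling factors, the sketch below (a hypothetical helper, not part of any standard; it deliberately excludes the exceptional cases 4:1:0 and 4:2:1 noted above) converts a regular J:a:b triple into horizontal and vertical chroma subsampling factors:

    # Horizontal/vertical chroma subsampling factors for regular J:a:b schemes.
    # The exceptional cases 4:1:0 and 4:2:1 are not covered by this rule.
    def chroma_factors(j, a, b):
        horizontal = j // a              # e.g. 4:2:2 -> chroma halved horizontally
        vertical = 1 if b == a else 2    # b == 0: each chroma row spans two pixel rows
        return horizontal, vertical

    print(chroma_factors(4, 2, 2))  # (2, 1): half horizontal, full vertical
    print(chroma_factors(4, 2, 0))  # (2, 2): half horizontal, half vertical
    print(chroma_factors(4, 1, 1))  # (4, 1): quarter horizontal, full vertical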

An explanatory image of different chroma subsampling schemes can be seen at http://lea.hamradio.si/~s51kq/subsample.gif (source: "Basics of Video", http://lea.hamradio.si/~s51kq/V-BAS.HTM), or in more detail in Chrominance Subsampling in Digital Images by Douglas Kerr.

[Diagram: each Y'CbCr example decomposes into a full-resolution Y' plane plus subsampled (Cr, Cb) planes.]

Scheme   J   a   b   Chroma resolution relative to luma
4:1:1    4   1   1   ¼ horizontal, full vertical
4:2:0    4   2   0   ½ horizontal, ½ vertical
4:2:2    4   2   2   ½ horizontal, full vertical
4:4:4    4   4   4   full horizontal, full vertical
4:4:0    4   4   0   full horizontal, ½ vertical

The mapping examples given are only theoretical and for illustration. Also note that the diagram does not indicate any chroma filtering, which should be applied to avoid aliasing.

To calculate the required bandwidth factor relative to 4:4:4 (or 4:4:4:4), sum all the factors and divide the result by 12 (or by 16, if alpha is present).
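
As a worked example, a minimal sketch of this calculation (a hypothetical helper, not from any standard):

    # Bandwidth factor of a J:a:b(:alpha) scheme relative to 4:4:4 (or 4:4:4:4).
    def bandwidth_factor(j, a, b, alpha=None):
        parts = [j, a, b] + ([alpha] if alpha is not None else [])
        full = 16 if alpha is not None else 12
        return sum(parts) / full

    print(bandwidth_factor(4, 2, 2))     # 8/12 ~ 0.667: two-thirds of 4:4:4
    print(bandwidth_factor(4, 2, 0))     # 6/12 = 0.5: half the bandwidth
    print(bandwidth_factor(4, 2, 2, 4))  # 12/16 = 0.75 once alpha is included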

Types of sampling and subsampling

4:4:4

Each of the three Y'CbCr components has the same sample rate, thus there is no chroma subsampling. This scheme is sometimes used in high-end film scanners and cinematic post production.

Note that "4:4:4" may instead be referring to R'G'B' color space, which implicitly also does not have any chroma subsampling. Formats such as HDCAM SR can record 4:4:4 R'G'B' over dual-link HD-SDI.

4:2:2

The two chroma components are sampled at half the sample rate of luma: the horizontal chroma resolution is halved. This reduces the bandwidth of an uncompressed video signal by one-third with little to no visual difference.

Many high-end digital video formats and interfaces use this scheme.

4:2:1

This sampling mode is not expressible in J:a:b notation. '4:2:1' is an obsolete term from a previous notational scheme, and very few software or hardware codecs use it. Cb horizontal resolution is half that of Cr (and a quarter of the horizontal resolution of Y).

4:1:1

In 4:1:1 chroma subsampling, the horizontal color resolution is quartered, and the bandwidth is halved compared to no chroma subsampling. Initially, 4:1:1 chroma subsampling of the DV format was not considered to be broadcast quality and was only acceptable for low-end and consumer applications.[3][4] However, DV-based formats (some of which use 4:1:1 chroma subsampling) have been used professionally in electronic news gathering and in playout servers. DV has also been sporadically used in feature films and in digital cinematography.

In the NTSC system, if the luma is sampled at 13.5 MHz, then the Cr and Cb signals will each be sampled at 3.375 MHz, which corresponds to a maximum Nyquist bandwidth of 1.6875 MHz, whereas a traditional high-end broadcast analog NTSC encoder would have Nyquist bandwidths of 1.5 MHz and 0.5 MHz for the I/Q channels. However, in most equipment, especially cheap TV sets and VHS/Betamax VCRs, the chroma channels have only about 0.5 MHz of bandwidth for both Cr and Cb (or equivalently for I/Q). Thus the DV system actually provides superior color bandwidth compared to the best composite analog specifications for NTSC, despite having only 1/4 of the chroma bandwidth of a "full" digital signal.
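
The chroma figures above follow directly from the sampling rates; a quick arithmetic check (assuming ideal sampling):

    luma_rate = 13.5e6            # NTSC luma sampling rate, Hz
    chroma_rate = luma_rate / 4   # 4:1:1: each chroma channel at one-quarter rate
    nyquist = chroma_rate / 2     # maximum representable (Nyquist) bandwidth

    print(chroma_rate / 1e6)  # 3.375 (MHz)
    print(nyquist / 1e6)      # 1.6875 (MHz)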

Formats that use 4:1:1 chroma subsampling include the DV-based formats discussed above.

4:2:0

In 4:2:0, the horizontal sampling is doubled compared to 4:1:1, but as the Cb and Cr channels are only sampled on each alternate line in this scheme, the vertical resolution is halved. The data rate is thus the same. This fits reasonably well with the PAL color encoding system, since PAL has only half the vertical chrominance resolution of NTSC. It would also fit extremely well with the SECAM color encoding system, since, like that format, 4:2:0 stores and transmits only one color channel per line (the other channel being recovered from the previous line). However, little equipment has actually been produced that outputs a SECAM analogue video signal. In general, SECAM territories either have to use a PAL-capable display or a transcoder to convert the PAL signal to SECAM for display.

Different variants of 4:2:0 chroma configurations are found in a range of video formats.

Cb and Cr are each subsampled by a factor of 2 both horizontally and vertically.
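
A minimal sketch of this 2×2 reduction using a simple box filter (averaging), ignoring the siting differences among the variants described below:

    import numpy as np

    def subsample_420(chroma):
        """Average each 2x2 block of a chroma plane (simple box filter)."""
        h, w = chroma.shape
        blocks = chroma[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
        return blocks.mean(axis=(1, 3))

    cb = np.arange(16, dtype=float).reshape(4, 4)
    print(subsample_420(cb))  # one chroma sample per 2x2 block of the original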

There are three variants of 4:2:0 schemes, having different horizontal and vertical siting.[7]

  • In MPEG-2, Cb and Cr are co-sited horizontally. Cb and Cr are sited between pixels in the vertical direction (sited interstitially).
  • In JPEG/JFIF, H.261, and MPEG-1, Cb and Cr are sited interstitially, halfway between alternate luma samples.
  • In 4:2:0 DV, Cb and Cr are co-sited in the horizontal direction. In the vertical direction, they are co-sited on alternating lines.

Most digital video formats corresponding to PAL use 4:2:0 chroma subsampling, with the exception of DVCPRO25, which uses 4:1:1 chroma subsampling. Both the 4:1:1 and 4:2:0 schemes halve the bandwidth compared to no chroma subsampling.

With interlaced material, 4:2:0 chroma subsampling can result in motion artifacts if it is implemented the same way as for progressive material. The luma samples are derived from separate time intervals, while the chroma samples would be derived from both time intervals; it is this difference that can result in motion artifacts. The MPEG-2 standard allows for an alternate interlaced sampling scheme in which 4:2:0 is applied to each field (not to both fields at once). This solves the problem of motion artifacts, but reduces the vertical chroma resolution by half and can introduce comb-like artifacts in the image.

[Image: Original. This image shows a single field; the moving text has some motion blur applied to it.]

[Image: 4:2:0 progressive sampling applied to moving interlaced material. Note that the chroma leads and trails the moving text. This image shows a single field.]

[Image: 4:2:0 interlaced sampling applied to moving interlaced material. This image shows a single field.]

In the 4:2:0 interlaced scheme, however, the vertical resolution of the chroma is roughly halved, since the chroma samples effectively describe an area 2 samples wide by 4 samples tall instead of 2×2. In addition, the spatial displacement between the two fields can result in the appearance of comb-like chroma artifacts.

[Image: Original still image.]

[Image: 4:2:0 progressive sampling applied to a still image. Both fields are shown.]

[Image: 4:2:0 interlaced sampling applied to a still image. Both fields are shown.]

If the interlaced material is to be de-interlaced, the comb-like chroma artifacts (from 4:2:0 interlaced sampling) can be removed by blurring the chroma vertically.[8]
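
A minimal sketch of such a vertical chroma blur, assuming a simple three-tap [¼, ½, ¼] kernel (the cited reference describes the general idea, not this exact filter):

    import numpy as np

    def blur_chroma_vertically(chroma):
        """Apply a [0.25, 0.5, 0.25] vertical filter with edge replication."""
        padded = np.pad(chroma, ((1, 1), (0, 0)), mode='edge')
        return 0.25 * padded[:-2] + 0.5 * padded[1:-1] + 0.25 * padded[2:]

    cb = np.array([[0.0, 8.0], [8.0, 0.0], [0.0, 8.0]])  # comb-like pattern
    print(blur_chroma_vertically(cb))  # the vertical comb is smoothed out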

4:1:0

This ratio is possible, and some codecs support it, but it is not widely used. It uses half the vertical and one-fourth the horizontal color resolution, with only one-eighth of the bandwidth of the maximum color resolution. Uncompressed video in this format with 8-bit quantization uses 10 bytes for every macropixel (which is 4×2 pixels). It has the equivalent chrominance bandwidth of a PAL I signal decoded with a delay-line decoder, and is still very much superior to NTSC.

  • Some video codecs may operate at 4:1:0.5 or 4:1:0.25 as an option, to achieve quality similar to VHS.

3:1:1

Used by Sony in their HDCAM High Definition recorders (not HDCAM SR). In the horizontal dimension, luma is sampled at three-quarters of the full HD sampling rate: 1440 samples per row instead of 1920. Chroma is sampled at 480 samples per row, a third of the luma sampling rate.

In the vertical dimension, both luma and chroma are sampled at the full HD sampling rate (1080 samples vertically).

Out-of-gamut colors

One of the artifacts that can occur with chroma subsampling is that out-of-gamut colors can arise upon chroma reconstruction. Suppose the image consists of alternating 1-pixel red and black lines and the subsampling omits the chroma for the black pixels. Chroma from the red pixels will be reconstructed onto the black pixels, causing the new pixels to have positive red and negative green and blue values. As displays cannot output negative light, these negative values are effectively clipped, and the resulting luma value is too high.[9] Similar artifacts arise in the less artificial example of gradation near a fairly sharp red/black boundary.
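
The effect can be reproduced numerically. The sketch below uses the common BT.601 color-difference definitions (an assumption; exact scale factors vary by standard), averages the chroma of a red/black pixel pair, and reconstructs R'G'B' values with negative green and blue:

    # BT.601-style conversions; exact scale factors vary by standard.
    def rgb_to_ycbcr(r, g, b):
        y = 0.299 * r + 0.587 * g + 0.114 * b
        return y, (b - y) / 1.772, (r - y) / 1.402

    def ycbcr_to_rgb(y, cb, cr):
        r = y + 1.402 * cr
        b = y + 1.772 * cb
        g = (y - 0.299 * r - 0.114 * b) / 0.587
        return r, g, b

    y_red, cb_red, cr_red = rgb_to_ycbcr(1.0, 0.0, 0.0)  # a red pixel
    y_blk, cb_blk, cr_blk = rgb_to_ycbcr(0.0, 0.0, 0.0)  # a black pixel
    # Subsampling averages the pair's chroma; each pixel keeps its own luma.
    cb_avg, cr_avg = (cb_red + cb_blk) / 2, (cr_red + cr_blk) / 2
    print(ycbcr_to_rgb(y_blk, cb_avg, cr_avg))  # positive R, negative G and B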

Filtering during subsampling can also cause colors to go out of gamut.

Terminology

The term Y'UV refers to an analog encoding scheme while Y'CbCr refers to a digital encoding scheme. One difference between the two is that the scale factors on the chroma components (U, V, Cb, and Cr) are different. However, the term YUV is often used erroneously to refer to Y'CbCr encoding. Hence, expressions like "4:2:2 YUV" always refer to 4:2:2 Y'CbCr since there simply is no such thing as 4:x:x in analog encoding (such as YUV).

In a similar vein, the term luminance and the symbol Y are often used erroneously to refer to luma, which is denoted with the symbol Y'. Note that the luma (Y') of video engineering deviates from the luminance (Y) of color science (as defined by CIE). Luma is formed as the weighted sum of gamma-corrected (tristimulus) RGB components. Luminance is formed as a weighted sum of linear (tristimulus) RGB components.
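
Taking the BT.601 weights as a concrete example (a sketch; other standards such as BT.709 use different coefficients, and the gamma value below is only illustrative), the two quantities disagree for saturated colors because the weighting and the nonlinearity are applied in opposite orders:

    GAMMA = 2.2  # illustrative; real transfer functions are more complex

    def luma(r_p, g_p, b_p):
        """Weighted sum of gamma-corrected components (BT.601 weights)."""
        return 0.299 * r_p + 0.587 * g_p + 0.114 * b_p

    def luminance(r, g, b):
        """Weighted sum of linear components (same weights, linear light)."""
        return 0.299 * r + 0.587 * g + 0.114 * b

    r, g, b = 1.0, 0.0, 1.0  # saturated magenta, linear light
    r_p, g_p, b_p = (c ** (1 / GAMMA) for c in (r, g, b))
    print(luma(r_p, g_p, b_p))                # 0.413: luma Y'
    print(luminance(r, g, b) ** (1 / GAMMA))  # ~0.67: gamma-corrected luminance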

In practice, the CIE symbol Y is often incorrectly used to denote luma. In 1993, SMPTE adopted Engineering Guideline EG 28, clarifying the two terms. Note that the prime symbol ' is used to indicate gamma correction.

Similarly, the chroma/chrominance of video engineering differs from the chrominance of color science. The chroma/chrominance of video engineering is formed from weighted tristimulus components, not linear components. In video engineering practice, the terms chroma, chrominance, and saturation are often used interchangeably to refer to chrominance.

History

Chroma subsampling was developed in the 1950s by Alda Bedford for RCA's color television system, which evolved into the NTSC standard; luma-chroma separation had been developed earlier, in 1938, by Georges Valensi.

Through studies, he showed that the human eye has high resolution only for black and white, somewhat less for "mid-range" colors like yellows and greens, and much less for colors on the end of the spectrum, reds and blues. Using this knowledge allowed RCA to develop a system in which they discarded most of the blue signal after it came from the camera, keeping most of the green and only some of the red; this is chroma subsampling in the YIQ color space, and is roughly analogous to 4:2:1 subsampling, in that it has decreasing resolution for luma, yellow/green, and red/blue.

See also

References

  1. ^ Winkler, S.; van den Branden Lambrecht, C. J.; Kunt, M. (2001). "Vision and Video: Models and Applications". In Christian J. van den Branden Lambrecht (ed.), Vision Models and Applications to Image and Video Processing. Springer. p. 209. ISBN 978-0-7923-7422-0.
  2. ^ Livingstone, Margaret (2002). "The First Stages of Processing Color and Luminance: Where and What". Vision and Art: The Biology of Seeing. New York: Harry N. Abrams. pp. 46–67. ISBN 0-8109-0406-3.
  3. ^ Jennings, Roger; Bertel Schmitt (1997). "DV vs. Betacam SP". DV Central. Retrieved 2008-08-29.
  4. ^ Wilt, Adam J. (2006). "DV, DVCAM & DVCPRO Formats". adamwilt.com. Retrieved 2008-08-29.
  5. ^ Clint DeBoer (2008-04-16). "HDMI Enhanced Black Levels, xvYCC and RGB". Audioholics. Retrieved 2013-06-02.
  6. ^ "Digital Color Coding" (PDF). Telairity. Retrieved 2013-06-02.
  7. ^ Poynton, Charles (2008). "Chroma Subsampling Notation" (PDF). Poynton.com. Retrieved 2008-10-01.
  8. ^ Munsil, Don; Stacey Spears (2003). "DVD Player Benchmark - Chroma Upsampling Error". Secrets of Home Theater and High Fidelity. Retrieved 2008-08-29.
  9. ^ Chan, Glenn (May–June 2008). "Towards Better Chroma Subsampling". GlennChan.info. SMPTE Journal. Retrieved 2008-08-29.
  • Poynton, Charles. "YUV and luminance considered harmful: A plea for precise terminology in video" [2]
  • Poynton, Charles. "Digital Video and HDTV: Algorithms and Interfaces". U.S.: Morgan Kaufmann Publishers, 2003.
  • Kerr, Douglas A. "Chrominance Subsampling in Digital Images" [3]
Apple ProRes

Apple ProRes is a high-quality (although still lossy) video compression format developed by Apple Inc. for use in post-production that supports up to 8K. It is the successor of the Apple Intermediate Codec and was introduced in 2007 with Final Cut Studio 2. It is widely used as a final delivery format for HD broadcast files in commercials, features, Blu-ray, and streaming.

Broadcast quality

Broadcast quality is a term stemming from quad videotape to denote the quality achieved by professional video cameras and time base correctors (TBCs) used for broadcast television, usually in standard definition. As the standards for commercial television broadcasts have changed from analog television using analog video to digital television using digital video, the term has generally fallen out of use. Manufacturers have used it to describe both professional and prosumer or "semi-professional" devices. The minimum requirements for such a camera typically include three CCDs, relatively low-compression analog or digital recording capability with little or no chroma subsampling, and the ability to be genlocked. The advantages of three CCDs include better color definition in shadows, better overall low-light sensitivity, and reduced noise compared to single-CCD systems. With continuing improvements in image sensors, resolution, recording media, and codecs, by 2006 the term no longer carried much weight in the marketplace. The term is also used in its literal sense in broadcasting jargon when judging the fitness of audio or video for broadcast.

Chrominance

Chrominance (chroma or C for short) is the signal used in video systems to convey the color information of the picture, separately from the accompanying luma signal (or Y′ for short). Chrominance is usually represented as two color-difference components: U = B′ − Y′ (blue − luma) and V = R′ − Y′ (red − luma). Each of these difference components may have scale factors and offsets applied to it, as specified by the applicable video standard.
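
A minimal sketch of these definitions (scale factors and offsets omitted, since they vary by standard; BT.601 luma weights are assumed):

    def color_difference(r_p, g_p, b_p):
        """Form luma and unscaled color-difference components from R'G'B'."""
        y = 0.299 * r_p + 0.587 * g_p + 0.114 * b_p  # luma (BT.601 weights)
        return y, b_p - y, r_p - y                   # Y', U = B'-Y', V = R'-Y'

    print(color_difference(1.0, 1.0, 1.0))  # a grey carries zero chrominance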

In composite video signals, the U and V signals modulate a color subcarrier signal, and the result is referred to as the chrominance signal; the phase and amplitude of this modulated chrominance signal correspond approximately to the hue and saturation of the color. In digital-video and still-image color spaces such as Y′CbCr, the luma and chrominance components are digital sample values.

Separating RGB color signals into luma and chrominance allows the bandwidth of each to be determined separately. Typically, the chrominance bandwidth is reduced in analog composite video by reducing the bandwidth of a modulated color subcarrier, and in digital systems by chroma subsampling.

Color image pipeline

An image pipeline or video pipeline is the set of components commonly used between an image source (such as a camera, a scanner, or the rendering engine in a computer game) and an image renderer (such as a television set, a computer screen, a computer printer, or a cinema screen), or for performing any intermediate digital image processing consisting of two or more separate processing blocks. An image/video pipeline may be implemented as computer software, in a digital signal processor, on an FPGA, or as a fixed-function ASIC. In addition, analog circuits can be used to do many of the same functions.

Typical components include image sensor corrections (including "debayering", i.e. demosaicing the data from a Bayer-filter sensor), noise reduction, image scaling, gamma correction, image enhancement, colorspace conversion (between formats such as RGB, YUV or YCbCr), chroma subsampling, framerate conversion, image compression/video compression (such as JPEG), and computer data storage/data transmission.

Typical goals of an imaging pipeline may be perceptually pleasing end-results, colorimetric precision, a high degree of flexibility, low cost/low CPU utilization/long battery life, or reduction in bandwidth/file size.

Some functions may be algorithmically linear; mathematically, such elements can be connected in any order without changing the end result. Because digital computers use finite-precision arithmetic, however, this is not true in practice. Other elements may be non-linear or time-variant. In both cases, there is often one sequence of components, or a few, that makes sense for optimum precision as well as minimum hardware cost and CPU load.

Digital cinematography

Digital cinematography is the process of capturing (recording) a motion picture using digital image sensors rather than film stock. As digital technology has improved in recent years, this practice has become dominant. Since the mid-2010s, most movies across the world have been captured as well as distributed digitally. Many vendors have brought products to market, including traditional film camera vendors like Arri and Panavision, as well as new vendors like RED, Blackmagic, Silicon Imaging, and Vision Research, and companies which have traditionally focused on consumer and broadcast video equipment, like Sony, GoPro, and Panasonic.

As of 2017, professional 4K digital film cameras are approximately equal to 35mm film in their resolution and dynamic range capacity; however, digital footage still has a slightly different look from analog film. Some filmmakers still prefer analog picture formats to achieve the desired results.

Human visual system model

A human visual system model (HVS model) is used by image processing, video processing and computer vision experts to deal with biological and psychological processes that are not yet fully understood. Such a model is used to simplify the behaviours of what is a very complex system. As our knowledge of the true visual system improves, the model is updated.

The psychovisual aspect of such models concerns the psychology of vision.

It is common to think of "taking advantage" of the HVS model to produce desired effects. Examples include colour television: originally it was thought that colour television required too high a bandwidth for the then-available technology. Then it was noticed that the colour resolution of the HVS was much lower than the brightness resolution; this allowed colour to be squeezed into the signal by chroma subsampling. Another example is image compression, like JPEG. The HVS model says that we cannot see very fine high-frequency detail, so in JPEG those frequency components can be quantised coarsely without a perceptible loss of quality. Similar concepts are applied in audio compression, where sound frequencies inaudible to humans are filtered out.
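
A rough sketch of the JPEG idea, using an 8×8 DCT and simply discarding the highest-frequency coefficients as a crude stand-in for the very coarse quantisation JPEG applies to them (illustrative only, not the actual JPEG algorithm):

    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(block):
        return dct(dct(block.T, norm='ortho').T, norm='ortho')

    def idct2(coeffs):
        return idct(idct(coeffs.T, norm='ortho').T, norm='ortho')

    i, j = np.indices((8, 8))
    block = 0.5 + 0.5 * np.sin(i / 4) * np.cos(j / 4)  # a smooth 8x8 block
    coeffs = dct2(block)

    keep = np.add.outer(np.arange(8), np.arange(8)) < 8  # low frequencies only
    approx = idct2(coeffs * keep)
    print(np.abs(block - approx).max())  # tiny error: the loss is imperceptible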

Several HVS features are derived from evolution, when we needed to defend ourselves or hunt for food. We often see demonstrations of HVS features when we are looking at optical illusions.

ICtCp

ICTCP, ICtCp, or ITP is a color representation format specified in the Rec. ITU-R BT.2100 standard that is used as a part of the color image pipeline in video and digital photography systems for high dynamic range (HDR) and wide color gamut (WCG) imagery. It was developed by Dolby Laboratories. The format is derived from an associated RGB color space by a coordinate transformation that includes two matrix transformations and an intermediate nonlinear transfer function that is informally known as gamma pre-correction. The transformation produces three signals called I, CT, and CP. The ICTCP transformation can be used with RGB signals derived from either the perceptual quantizer (PQ) or hybrid log-gamma (HLG) nonlinearity functions, but is most commonly associated with the PQ function (which was also developed by Dolby).

The I ("intensity") component is a luma component that represents the brightness of the video, and CT and CP are blue-yellow (named from tritanopia) and red-green (named from protanopia) chroma components.The ICTCP color representation scheme is conceptually related to the LMS color space, as the color transformation from RGB to ICTCP is defined by first converting RGB to LMS with a 3×3 matrix transformation, then applying the nonlinearity function, and then converting the nonlinear signals to ICTCP using another 3×3 matrix transformation.

List of Avid DNxHD resolutions

This is a list of Avid DNxHD resolutions, mainly available in multiple HD encoding resolutions based on the frame size and frame rate of the media being encoded. The list below shows the available encoding choices for each of the available frame size and frame rate combinations. Its sister codec, Avid DNxHR, supports resolutions beyond Full HD.

Luma (video)

In video, luma represents the brightness in an image (the "black-and-white" or achromatic portion of the image). Luma is typically paired with chrominance. Luma represents the achromatic image, while the chroma components represent the color information. Converting R′G′B′ sources (such as the output of a three-CCD camera) into luma and chroma allows for chroma subsampling: because human vision has finer spatial sensitivity to luminance ("black and white") differences than chromatic differences, video systems can store and transmit chromatic information at lower resolution, optimizing perceived detail at a particular bandwidth.

Macroblock

Macroblock is a processing unit in image and video compression formats based on linear block transforms, such as the discrete cosine transform (DCT). A macroblock typically consists of 16×16 samples, and is further subdivided into transform blocks, and may be further subdivided into prediction blocks. Formats which are based on macroblocks include JPEG, where they are called MCU blocks, H.261, MPEG-1 Part 2, H.262/MPEG-2 Part 2, H.263, MPEG-4 Part 2, and H.264/MPEG-4 AVC. In H.265/HEVC, the macroblock as a basic processing unit has been replaced by the coding tree unit.

NETVC

The Internet Video Codec (NETVC) is a standardization project for a royalty-free video codec hosted by the IETF. It is intended to provide a royalty-free alternative to industry standards such as MPEG-4 and HEVC that require licensing payments for many uses.

The group has put together a list of criteria to be met by the new video standard. The VP9-based format AOMedia Video 1 (AV1) from the Alliance for Open Media is the primary contender for standardisation by the NetVC working group. The October 2015 basic draft requirements for NETVC are support for a bit depth of 8 to 10 bits per sample, 4:2:0 chroma subsampling, 4:4:4 YUV, low-coding-delay capability, feasible real-time software decoder and encoder implementations, temporal scalability, and error-resilience tools. The October 2015 optional draft requirements are support for a bit depth of up to 16 bits per sample, 4:2:2 chroma subsampling, RGB video, auxiliary channel planes, high dynamic range, and parallel processing tools. On March 24, 2015, Xiph.org's Daala codec was presented to the IETF as a candidate for NETVC, and Daala coding techniques have been proposed for inclusion into NETVC. On July 22, 2015, Cisco Systems' Thor video codec was presented to the IETF as another candidate. Thor is being developed by Cisco Systems and uses some Cisco elements that are also used by HEVC; the Constrained Low-Pass Filter (CLPF) and motion compensation used in Thor were tested with Daala. Other partners are now also involved in the development of NETVC at the IETF.

Panasonic Lumix DC-GH5

The Panasonic Lumix DC-GH5 is a Micro Four Thirds mirrorless interchangeable-lens camera body announced by Panasonic on 4 January 2017. It is the first mirrorless camera capable of shooting 4K-resolution video with 10-bit color and 4:2:2 chroma subsampling, along with recording in 4K 60p or 50p (though only in 8-bit). It also captures both 4K and Full HD without time limits. On September 28, 2017, Panasonic released firmware update 2.0, which added support for Hybrid Log-Gamma (HLG) recording, along with a higher-bit-rate 400 Mbit/s All-I recording mode. The later-released sister model Panasonic Lumix DC-GH5S is a more specialized filmmakers' camera that adds greater low-light sensitivity, a multi-aspect image sensor, and expanded DCI 4K options. It has a 10-megapixel sensor without in-body stabilisation.

The Panasonic GH5S is an even more video-centric variant of the GH5: it can shoot either DCI or UHD 4K footage natively (i.e. where one capture pixel = one output pixel) at up to 60p. As well as the ability to shoot DCI 4K at higher frame rates, Panasonic claim the GH5S's larger pixels and 'Dual Native ISO' sensor will shoot significantly better footage in low light.

Rec. 2100

ITU-R Recommendation BT.2100, more commonly known by the abbreviations Rec. 2100 or BT.2100, defines various aspects of high dynamic range (HDR) video such as display resolution (HDTV and UHDTV), frame rate, chroma subsampling, bit depth, color space, and optical transfer function. It was posted on the International Telecommunication Union (ITU) website on July 4, 2016. Rec. 2100 expands on several aspects of Rec. 2020.

Rec. 601

ITU-R Recommendation BT.601, more commonly known by the abbreviations Rec. 601 or BT.601 (or its former name, CCIR 601) is a standard originally issued in 1982 by the CCIR (an organization which has since been renamed as the International Telecommunication Union – Radiocommunication sector) for encoding interlaced analog video signals in digital video form. It includes methods of encoding 525-line 60 Hz and 625-line 50 Hz signals, both with an active region covering 720 luminance samples and 360 chrominance samples per line. The color encoding system is known as YCbCr 4:2:2.

The Rec. 601 video raster format has been re-used in a number of later standards, including the ISO/IEC MPEG and ITU-T H.26x compressed formats – although compressed formats for consumer applications usually use chroma subsampling reduced from the 4:2:2 sampling specified in Rec. 601 to 4:2:0.

The standard has been revised several times in its history. Its edition 7, referred to as BT.601-7, was approved in March 2011 and was formally published in October 2011.

SMPTE 356M

SMPTE 356M is a SMPTE specification for a professional video format consisting of MPEG-2 video with only I-frames and 4:2:2 chroma subsampling. Eight AES3 audio streams are also included, usually containing 24-bit PCM audio samples. SMPTE 356M requires up to 50 Mbit/s of bandwidth.

This format is described in the document SMPTE 356M-2001, "Type D-10 Stream Specifications — MPEG-2 4:2:2P @ ML for 525/60 and 625/50". The technology specified in SMPTE 356M is also known as D10 or D-10, and is called IMX by Sony.

Subsampling

Subsampling or sub-sampling may refer to:

  • Sampling (statistics)
  • Replication (statistics)
  • Downsampling in signal processing
  • Chroma subsampling
  • Sub-sampling (chemistry)

VP9

VP9 is an open and royalty-free video coding format developed by Google.

VP9 is the successor to VP8 and competes mainly with MPEG's High Efficiency Video Coding (HEVC/H.265).

At first, VP9 was mainly used on Google's video platform YouTube. The emergence of the Alliance for Open Media, of which Google is a part, and its support for the ongoing development of the successor AV1, led to growing interest in the format.

In contrast to HEVC, VP9 support is common among web browsers (see HTML5 video § Browser support). The combination of VP9 video and Opus audio in the WebM container, as served by YouTube, is supported by roughly four-fifths of the browser market (mobile included) as of June 2018. The two holdouts among major browsers are the discontinued Internet Explorer (unlike its successor Edge) and Safari (both desktop and mobile versions). Android has supported VP9 since version 4.4 KitKat.

Parts of the format are covered by patents held by Google. The company grants free usage of its own related patents based on reciprocity, i.e. as long as the user does not engage in patent litigations.

XAVC

XAVC is a recording format introduced by Sony on October 30, 2012. It is licensed to companies that want to make XAVC products.

YUV

YUV is a color encoding system typically used as part of a color image pipeline. It encodes a color image or video taking human perception into account, allowing reduced bandwidth for chrominance components and thereby typically enabling transmission errors or compression artifacts to be more effectively masked by human perception than with a "direct" RGB representation. Other color encodings have similar properties, and the main reason to implement or investigate properties of Y′UV would be for interfacing with analog or digital television or photographic equipment that conforms to certain Y′UV standards.

The scope of the terms Y′UV, YUV, YCbCr, YPbPr, etc., is sometimes ambiguous and overlapping. Historically, the terms YUV and Y′UV were used for a specific analog encoding of color information in television systems, while YCbCr was used for digital encoding of color information suited for video and still-image compression and transmission such as MPEG and JPEG. Today, the term YUV is commonly used in the computer industry to describe file-formats that are encoded using YCbCr.

The Y′UV model defines a color space in terms of one luma component (Y′) and two chrominance (UV) components. The Y′UV color model is used in the PAL composite color video (excluding PAL-N) standard. Previous black-and-white systems used only luma (Y′) information. Color information (U and V) was added separately via a sub-carrier so that a black-and-white receiver would still be able to receive and display a color picture transmission in the receiver's native black-and-white format.

Y′ stands for the luma component (the brightness) and U and V are the chrominance (color) components; luminance is denoted by Y and luma by Y′ – the prime symbols (') denote gamma compression, with "luminance" meaning physical linear-space brightness, while "luma" is (nonlinear) perceptual brightness.

The YPbPr color model used in analog component video and its digital version YCbCr used in digital video are more or less derived from it, and are sometimes called Y′UV. (CB/PB and CR/PR are deviations from grey on blue–yellow and red–cyan axes, whereas U and V are blue–luminance and red–luminance differences respectively.) The Y′IQ color space used in the analog NTSC television broadcasting system is related to it, although in a more complex way. The YDbDr color space used in the analog SECAM and PAL-N television broadcasting systems is also related.

As for etymology, Y, Y′, U, and V are not abbreviations. The use of the letter Y for luminance can be traced back to the choice of XYZ primaries. This lends itself naturally to the usage of the same letter in luma (Y′), which approximates a perceptually uniform correlate of luminance. Likewise, U and V were chosen to differentiate the U and V axes from those in other spaces, such as the x and y chromaticity space.

This page is based on Wikipedia articles written by their contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.