Deinterlacing

Deinterlacing is the process of converting interlaced video, such as common analog television signals or 1080i format HDTV signals, into a non-interlaced form.

An interlaced video frame consists of two fields captured in sequence, one scanned from the odd lines of the image sensor and the other from the even lines. Analog television employed this technique because it required less transmission bandwidth while avoiding the perceived flicker that a similar frame rate would produce with progressive scan. CRT-based displays were able to display interlaced video correctly due to their completely analogue nature. Newer displays are inherently digital, in that the display comprises discrete pixels. Consequently, the two fields need to be combined into a single frame, which leads to various visual defects. The deinterlacing process should try to minimize these.

Deinterlacing has been researched for decades and employs complex processing algorithms; however, consistent results have been very hard to achieve.[1][2]

Background

Both video and photographic film capture a series of frames (still images) in rapid succession; however, television systems read the captured image by serially scanning the image sensor by lines (rows). In analog television, each frame is divided into two consecutive fields, one containing all even lines, another with the odd lines. The fields are captured in succession at a rate twice that of the nominal frame rate. For instance, PAL and SECAM systems have a rate of 25 frames/s or 50 fields/s, while the NTSC system delivers 29.97 frames/s or 59.94 fields/s. This process of dividing frames into half-resolution fields at double the frame rate is known as interlacing.

Since the interlaced signal contains the two fields of a video frame shot at two different times, it enhances motion perception for the viewer and reduces flicker by taking advantage of the persistence of vision effect. This results in an effective doubling of time resolution as compared with non-interlaced footage (for frame rates equal to field rates). However, an interlaced signal requires a display that is natively capable of showing the individual fields in sequential order, and only traditional CRT-based TV sets can display an interlaced signal directly, owing to their electronic scanning and lack of a fixed pixel grid.

Most modern displays, such as LCD, DLP and plasma displays, are not able to work in interlaced mode, because they are fixed-resolution displays and support only progressive scanning. In order to display an interlaced signal on such displays, the two interlaced fields must be converted to one progressive frame with a process known as deinterlacing. However, when the two fields taken at different points in time are re-combined into a full frame displayed at once, visual defects called interlace artifacts or combing occur around moving objects in the image. A good deinterlacing algorithm should try to avoid interlacing artifacts as much as possible without sacrificing image quality in the process, which is hard to achieve consistently. There are several techniques available that extrapolate the missing picture information; however, these fall more into the category of intelligent frame creation and require complex algorithms and substantial processing power.

Deinterlacing techniques require complex processing and thus can introduce a delay into the video feed. While not generally noticeable, this can result in the display of older video games lagging behind controller input. Many TVs thus have a "game mode" in which minimal processing is done in order to maximize speed at the expense of image quality. Deinterlacing is only partly responsible for such lag; scaling also involves complex algorithms that take milliseconds to run.

Progressive source material

An interlaced video signal can also carry progressive scan material, and the deinterlacing process should take this into account as well.

Typical movie material is shot on 24 frame/s film; when converting film to interlaced video using telecine, each film frame can be presented by two progressive segmented frames (PsF). This format does not require a complex deinterlacing algorithm, because each field contains a part of the very same progressive frame. However, to match the 50-field interlaced PAL/SECAM or 59.94/60-field interlaced NTSC signal, frame rate conversion must be performed using various "pulldown" techniques; most advanced TV sets can restore the original 24 frame/s signal using an inverse telecine process. Another option is to speed up 24-frame film by 4% (to 25 frames/s) for PAL/SECAM conversion; this method is still widely used for DVDs, as well as television broadcasts (SD and HD) in the PAL markets.
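
As a rough illustration of pulldown and its inverse, the following Python sketch (a minimal example; the frame labels and the exact cadence phase are illustrative assumptions) shows how four 24 frame/s film frames are spread across ten NTSC fields by 2:3 pulldown, and why a naive weave of the resulting frames shows combing on two of every five frames.

  # Illustrative sketch of 2:3 ("3:2") pulldown: four film frames A, B, C, D
  # are spread across ten NTSC fields, i.e. five interlaced video frames.
  film_frames = ["A", "B", "C", "D"]      # four frames of 24 frame/s film
  pulldown_counts = [2, 3, 2, 3]          # each film frame fills 2 or 3 fields

  fields = []
  for frame, count in zip(film_frames, pulldown_counts):
      fields.extend([frame] * count)      # -> A A B B B C C D D D

  # Pair consecutive fields into interlaced frames (top field, bottom field).
  video_frames = [(fields[i], fields[i + 1]) for i in range(0, len(fields), 2)]
  print(video_frames)  # [('A','A'), ('B','B'), ('B','C'), ('C','D'), ('D','D')]

  # Inverse telecine recognises this repeating cadence and reassembles the
  # original four frames; the mixed frames ('B','C') and ('C','D') are the
  # ones a naive weave would display with combing.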

DVDs can either encode movies using one of these methods, or store original 24 frame/s progressive video and use MPEG-2 decoder tags to instruct the video player how to convert it to the interlaced format. Most movies on Blu-ray discs preserve the original non-interlaced 24 frame/s film rate and allow output in the progressive 1080p24 format directly to display devices, with no conversion necessary.

Some 1080i HDV camcorders also offer a PsF mode with cinema-like frame rates of 24 or 25 frame/s. TV productions can also use special film cameras which operate at 25 or 30 frame/s; such material does not need frame rate conversion for broadcasting in the intended video system format.

Deinterlacing methods

Deinterlacing requires the display to buffer one or more fields and recombine them into full frames. In theory this could be as simple as capturing one field and combining it with the next field to be received, producing a single frame. However, the originally recorded signal was produced as a series of fields, and any motion of the subjects during the short period between the fields is captured in the recording. When the fields are combined into a single frame, the slight differences between them due to this motion result in a "combing" effect, where alternate lines are slightly displaced from each other.

There are various methods to deinterlace video, each producing different problems or artifacts of its own. Some methods produce far fewer artifacts than others.

Most deinterlacing techniques can be grouped into three categories, each with its own approach. The first group, field combination deinterlacers, take the even and odd fields and combine them into one frame, which is then displayed. The second group, field extension deinterlacers, extend each field (with only half the lines) to the entire screen to make a frame. The third type uses a combination of both and falls under the banner of motion compensation, among other names.

Modern deinterlacing systems therefore buffer several fields and use techniques like edge detection in an attempt to find the motion between the fields. This is then used to interpolate the missing lines from the original field, reducing the combing effect.[3]

Field combination deinterlacing

Weaving
  • Weaving combines two consecutive fields into one frame by interleaving their lines (see the sketch after this list). This method causes no problems when the image has not changed between fields, but any change results in artifacts known as "combing", where the lines of one field do not line up with the lines of the other, forming a jagged edge. This technique retains full vertical resolution at the expense of half the temporal resolution (motion).
Blending
  • Blending averages consecutive fields and displays the result as one frame (also illustrated in the sketch after this list). Combing is avoided because the two images lie on top of each other, but this instead leaves an artifact known as ghosting. The image loses both vertical and temporal resolution. Blending is often combined with a vertical resize so that the output has no numerical loss in vertical resolution; the problem is that there is still a quality loss, because the image has been downsized and then upsized, and this loss of detail makes the image look softer. Blending also loses half the temporal resolution, since two motion fields are combined into one frame.
  • Selective blending, also called smart blending or motion-adaptive blending, is a combination of weaving and blending. Areas that have not changed from frame to frame need no processing and are simply woven; only the areas that have changed are blended. This retains full vertical resolution and half the temporal resolution, and it produces fewer artifacts than either weaving or blending alone because of the selective combination of both techniques.
  • Inverse telecine: Telecine is used to convert a 24 frame/s motion picture source to interlaced TV video in countries that use the NTSC video system at 30 frames per second. Countries that use PAL at 25 frames per second do not use telecine, since motion picture sources are simply sped up 4% to reach the needed 25 frames per second. If telecine was used, it is possible to reverse the algorithm to obtain the original non-interlaced footage at its slower frame rate. For this to work, the exact telecine pattern must be known or guessed. Unlike most other deinterlacing methods, when it works, inverse telecine can perfectly recover the original progressive video stream.
  • Telecide-style algorithms: If the interlaced footage was generated from progressive frames at a slower frame rate (e.g. "cartoon pulldown"), then the exact original frames can be recovered by copying the missing field from a matching previous or next frame. In cases where there is no match (e.g. brief cartoon sequences with an elevated frame rate), the filter falls back on another deinterlacing method such as blending or line doubling. This means that the worst case for Telecide is occasional frames with ghosting or reduced resolution. By contrast, when more sophisticated motion-detection algorithms fail, they can introduce pixel artifacts that are unfaithful to the original material. For telecined video, decimation can be applied as a post-process to reduce the frame rate, and this combination is generally more robust than a simple inverse telecine, which fails when differently interlaced footage is spliced together.
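
As a concrete illustration of the weaving and blending methods above, the following Python/NumPy sketch builds a frame from a pair of fields. It is a minimal sketch, not a reference implementation: the field layout (top field carrying the even-numbered lines), the frame sizes, and the choice to implement blending as a vertical average of the woven lines (one common way of mixing the two fields) are all assumptions.

  import numpy as np

  def weave(top_field, bottom_field):
      # Interleave two fields (each h/2 x w) into one h x w frame.
      # Assumes the top field carries the even-numbered lines; real sources
      # may use the opposite field parity.
      h2, w = top_field.shape
      frame = np.empty((h2 * 2, w), dtype=top_field.dtype)
      frame[0::2] = top_field
      frame[1::2] = bottom_field
      return frame

  def blend(top_field, bottom_field):
      # Weave the fields, then average each line with the line below it.
      # Because adjacent lines come from different capture instants, this
      # trades combing for ghosting and softens vertical detail.
      frame = weave(top_field, bottom_field).astype(np.float32)
      blended = frame.copy()
      blended[:-1] = (frame[:-1] + frame[1:]) / 2.0
      return blended.astype(top_field.dtype)

  # Example with two 288-line fields of a 576-line PAL frame:
  top = np.random.randint(0, 256, (288, 720), dtype=np.uint8)
  bottom = np.random.randint(0, 256, (288, 720), dtype=np.uint8)
  print(weave(top, bottom).shape, blend(top, bottom).shape)  # (576, 720) each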

Field extension deinterlacing

Half-sizing

Half-sizing displays each interlaced field on its own, resulting in a video with half the vertical resolution of the original, unscaled. Since nothing is discarded, this method retains all of the recorded detail of each field and all temporal resolution, but it is understandably not used for regular viewing because of its false aspect ratio. However, it can be successfully used to apply video filters which expect a noninterlaced frame, such as those exploiting information from neighbouring pixels (e.g., sharpening).

Line doubling

Line doubling takes the lines of each interlaced field (consisting of only even or odd lines) and doubles them, filling the entire frame. This results in the video having a frame rate identical to the field rate, but with each frame having half the vertical resolution, i.e. the resolution of the single field it was made from. Line doubling prevents combing artifacts but causes a noticeable reduction in picture quality, since each displayed frame is built from one field and carries only half the original vertical detail. This is noticeable mostly on stationary objects, which appear to bob up and down; these techniques are therefore also called bob deinterlacing and linear deinterlacing. Line doubling retains horizontal and temporal resolution at the expense of vertical resolution and bobbing artifacts on stationary and slowly moving objects. A variant of this method discards one field of each frame, halving temporal resolution.
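
A minimal sketch of line doubling and bob deinterlacing, under the same illustrative assumptions as the field combination sketch above: each field is expanded to a full frame either by repeating its lines or by interpolating the missing lines from their neighbours, so every field becomes one output frame.

  import numpy as np

  def bob(field, interpolate=True):
      # Expand one field (h/2 x w) into a full frame (h x w).
      # interpolate=False simply repeats each line (plain line doubling);
      # interpolate=True fills the missing lines with the average of the
      # neighbouring lines, which reduces visible bobbing on static detail.
      # Field parity handling is simplified for illustration.
      h2, w = field.shape
      frame = np.empty((h2 * 2, w), dtype=np.float32)
      frame[0::2] = field
      if interpolate:
          below = np.vstack([field[1:], field[-1:]]).astype(np.float32)
          frame[1::2] = (field.astype(np.float32) + below) / 2.0
      else:
          frame[1::2] = field
      return frame.astype(field.dtype)

  # Each interlaced frame yields two output frames, one per field, so the
  # output frame rate equals the input field rate:
  top = np.random.randint(0, 256, (288, 720), dtype=np.uint8)
  bottom = np.random.randint(0, 256, (288, 720), dtype=np.uint8)
  print(bob(top).shape, bob(bottom).shape)  # (576, 720) (576, 720)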

Line doubling is sometimes confused with deinterlacing in general, or with interpolation (image scaling) which uses spatial filtering to generate extra lines and hence reduce the visibility of pixelation on any type of display.[4] The terminology 'line doubler' is used more frequently in high end consumer electronics, while 'deinterlacing' is used more frequently in the computer and digital video arena.

Motion detection

The best picture quality is obtained by combining traditional field combination methods (weaving and blending) with field extension methods (bob or line doubling) to create a high-quality progressive video sequence; the best algorithms also try to predict the direction and the amount of image motion between subsequent fields in order to better blend the two fields together.

One of the basic hints to the direction and amount of motion would be the direction and length of combing artifacts in the interlaced signal. More advanced implementations would employ algorithms similar to block motion compensation used in video compression; deinterlacers that use this technique are often superior because they can use information from many fields, as opposed to just one or two. This requires powerful hardware to achieve realtime operation.

For example, if two fields had a person's face moving to the left, weaving would create combing, and blending would create ghosting. Advanced motion compensation (ideally) would see that the face in several fields is the same image, just moved to a different position, and would try to detect direction and amount of such motion. The algorithm would then try to reconstruct the full detail of the face in both output frames by combining the images together, moving parts of each subfield along the detected direction by the detected amount of movement.

Motion compensation needs to be combined with scene change detection, otherwise it will attempt to find motion between two completely different scenes. A poorly implemented motion compensation algorithm would interfere with natural motion and could lead to visual artifacts which manifest as "jumping" parts in what should be a stationary or a smoothly moving image.
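
The sketch below is a deliberately simplified motion-adaptive deinterlacer in the spirit of the above, not a motion-compensated one: it only detects motion by comparing same-parity fields and switches per pixel between weaving and interpolation. The threshold value, field layout and function signature are illustrative assumptions.

  import numpy as np

  def motion_adaptive(prev_top, cur_top, cur_bottom, threshold=12):
      # Build a full frame for the current field pair. Pixels whose
      # same-parity fields (prev_top vs. cur_top) barely differ are taken
      # from a weave of cur_top and cur_bottom; pixels that changed are
      # interpolated from cur_top alone, avoiding combing on moving objects.
      h2, w = cur_top.shape
      woven = np.empty((h2 * 2, w), dtype=np.float32)
      woven[0::2] = cur_top
      woven[1::2] = cur_bottom

      # Bob version of the current top field (missing lines interpolated).
      bobbed = woven.copy()
      below = np.vstack([cur_top[1:], cur_top[-1:]]).astype(np.float32)
      bobbed[1::2] = (cur_top.astype(np.float32) + below) / 2.0

      # Per-pixel motion mask, expanded to full frame height so it lines up
      # with the woven lines.
      diff = np.abs(cur_top.astype(np.int16) - prev_top.astype(np.int16))
      moving = np.repeat(diff > threshold, 2, axis=0)

      return np.where(moving, bobbed, woven).astype(cur_top.dtype)

A production-quality deinterlacer would additionally estimate motion vectors, interpolate along detected edges, and reset its state on scene changes, as described above.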

Where deinterlacing is performed

Deinterlacing of an interlaced video signal can be done at various points in the TV production chain.

Progressive media

Deinterlacing is required for interlaced archive programs when the broadcast format or media format is progressive, as in EDTV 576p or HDTV 720p50 broadcasting, or mobile DVB-H broadcasting; there are two ways to achieve this.

  • Production – The interlaced video material is converted to progressive scan during program production. This should typically yield the best possible quality, since videographers have access to expensive and powerful deinterlacing equipment and software and can deinterlace at the best possible quality, probably manually choosing the optimal deinterlacing method for each frame.
  • Broadcasting – Real-time deinterlacing hardware converts interlaced programs to progressive scan immediately prior to broadcasting. Since the processing time is constrained by the frame rate and no human input is available, the quality of conversion is most likely inferior to the pre-production method; however, expensive and high-performance deinterlacing equipment may still yield good results when properly tuned.

Interlaced media

When the broadcast format or media format is interlaced, real-time deinterlacing should be performed by embedded circuitry in a set-top box, television, external video processor, DVD or DVR player, or TV tuner card. Since consumer electronics equipment is typically far cheaper, has considerably less processing power and uses simpler algorithms compared to professional deinterlacing equipment, the quality of deinterlacing may vary broadly and typical results are often poor even on high-end equipment.

Using a computer for playback and/or processing potentially allows a broader choice of video players and/or editing software not limited to the quality offered by the embedded consumer electronics device, so at least theoretically higher deinterlacing quality is possible, especially if the user can pre-convert interlaced video to progressive scan before playback using advanced and time-consuming deinterlacing algorithms (i.e. employing the "production" method).

However, the quality of both free and commercial consumer-grade software may not be up to the level of professional software and equipment. Also, most users are not trained in video production; this often causes poor results as many people do not know much about deinterlacing and are unaware that the frame rate is half the field rate. Many codecs/players do not even deinterlace by themselves and rely on the graphics card and video acceleration API to do proper deinterlacing.

Concerns over effectiveness

The European Broadcasting Union has argued against the use of interlaced video in production and broadcasting, recommending 720p50 (50 frames per second) as the current production format and working with the industry to introduce 1080p50 as a future-proof production standard, which offers higher vertical resolution, better quality at lower bitrates, and easier conversion to other formats such as 720p50 and 1080i50.[5][6] The main argument is that no matter how complex the deinterlacing algorithm may be, the artifacts in the interlaced signal cannot be completely eliminated because some information is lost between frames.

Yves Faroudja, the founder of Faroudja Labs and Emmy Award winner for his achievements in deinterlacing technology, has stated that "interlace to progressive does not work" and advised against using interlaced signal.[2][7]

References

  1. ^ Jung, J.H.; Hong, S.H. (2011). "Deinterlacing method based on edge direction refinement using weighted maximum frequent filter". Proceedings of the 5th International Conference on Ubiquitous Information Management and Communication. ACM. ISBN 978-1-4503-0571-6.
  2. ^ a b Philip Laven (January 26, 2005). "EBU Technical Review No. 301 (January 2005)". EBU. Archived from the original on June 16, 2006.
  3. ^ U.S. Patent 4,698,675. http://patft1.uspto.gov/netacgi/nph-Parser?patentnumber=4698675
  4. ^ PC Magazine. "PCMag Definition: Deinterlace".
  5. ^ "EBU R115-2005: FUTURE HIGH DEFINITION TELEVISION SYSTEMS" (PDF). EBU. May 2005. Archived (PDF) from the original on 2009-05-27. Retrieved 2009-05-24.
  6. ^ "10 things you need to know about... 1080p/50" (PDF). EBU. September 2009. Retrieved 2010-06-26.
  7. ^ Philip Laven (January 25, 2005). "EBU Technical Review No. 300 (October 2004)". EBU. Archived from the original on June 7, 2011.

Common Intermediate Format

CIF (Common Intermediate Format or Common Interchange Format), also known as FCIF (Full Common Intermediate Format), is a standardized format for the picture resolution, frame rate, color space, and color subsampling of digital video sequences used in video teleconferencing systems. It was first defined in the H.261 standard in 1988.

As the word "common" in its name implies, CIF was designed as a common compromise format that is relatively easy to convert for use with either PAL or NTSC standard displays and cameras. CIF defines a video sequence with a resolution of 352 × 288, which has a simple relationship to the PAL picture size, but with a frame rate of 30000/1001 (roughly 29.97) frames per second like NTSC, with color encoded using a YCbCr representation with 4:2:0 color sampling. It is thus a compromise between the PAL and NTSC schemes: a picture size that corresponds most easily to PAL, combined with the frame rate of NTSC. The compromise was established as a way to reach international agreement so that video conferencing systems in different countries could communicate with each other without needing two separate modes for displaying the received video.

The simple way to convert NTSC video to CIF is to capture every other field (e.g., the top fields) of interlaced video, downsample it by 2:1 horizontally to convert 704 samples per line to 352 samples per line, and upsample it vertically by a ratio of 6:5 to convert 240 lines to 288 lines. The simple way to convert PAL video to CIF is to similarly capture every other field, downsample it horizontally by 2:1, and introduce some jitter in the frame rate by skipping or repeating frames as necessary. Since H.261 systems typically operated at low bit rates, they also typically operated at low frame rates by skipping many of the camera source frames, so introducing some jitter in the frame rate tended not to be noticeable. More sophisticated conversion schemes (e.g., using deinterlacing to improve the vertical resolution from an NTSC camera) could also be used in higher quality systems.
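
The sample-count arithmetic of the conversion just described can be summarised in a few lines of Python (the actual filtering and resampling steps are omitted; the variable names are illustrative):

  # Sample-count arithmetic for converting one NTSC field to CIF.
  ntsc_field_lines = 240          # active lines in one NTSC field
  ntsc_samples_per_line = 704     # active samples per line

  cif_width = ntsc_samples_per_line // 2    # 2:1 horizontal downsample -> 352
  cif_height = ntsc_field_lines * 6 // 5    # 6:5 vertical upsample     -> 288
  print(cif_width, cif_height)              # 352 288

  # A PAL field already has 288 active lines, so only the 2:1 horizontal
  # downsample is needed, plus the frame-rate adjustment (25 -> ~29.97).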

In contrast to the CIF compromise that originated with the H.261 standard, there are two variants of SIF (Source Input Format), which was first defined in the MPEG-1 standard. SIF is otherwise very similar to CIF: on 525-line ("NTSC") based systems it is 352 × 240 with a frame rate of 30000/1001 frames per second, and on 625-line ("PAL") based systems it has the same picture size as CIF (352 × 288) but a frame rate of 25 frames per second.

Some references to CIF are intended to refer only to its resolution (352 × 288), without intending to refer to its frame rate.

The YCbCr color representation had been previously defined in the first standard digital video source format, CCIR 601, in 1982. However, CCIR 601 uses 4:2:2 color sampling, which subsamples the Cb and Cr components only horizontally. H.261 additionally used vertical color subsampling, resulting in what is known as 4:2:0.

QCIF means "Quarter CIF". To have one quarter of the area, as "quarter" implies, the height and width of the frame are halved.

Terms also used are SQCIF (Sub Quarter CIF, sometimes subQCIF), 4CIF (4 × CIF) and 16CIF (16 × CIF). The resolutions of these formats are SQCIF 128 × 96, QCIF 176 × 144, CIF 352 × 288, 4CIF 704 × 576, and 16CIF 1408 × 1152.

xCIF pixels are not square, instead having a native aspect ratio of 12:11, as with the standard for 625-line systems (see CCIR 601). On square-pixel displays (e.g., computer screens and many modern televisions) xCIF rasters should be rescaled so that the picture covers a 4:3 area, in order to avoid a "stretched" look: CIF content expanded horizontally by 12:11 results in a 4:3 raster of 384 × 288 square pixels.

The CIF and QCIF picture dimensions were specifically chosen to be multiples of 16 because of the way that discrete cosine transform based video compression/decompression was handled in H.261, using 16 × 16 macroblocks and 8 × 8 transform blocks. So a CIF-size image (352 × 288) contains 22 × 18 macroblocks and a QCIF image (176 × 144) contains 11 × 9 macroblocks. The 16 × 16 macroblock concept was later also used in other compression standards such as MPEG-1, MPEG-2, MPEG-4 Part 2, H.263, and H.264/MPEG-4 AVC.

DirectX Video Acceleration

DirectX Video Acceleration (DXVA) is a Microsoft API specification for the Microsoft Windows and Xbox 360 platforms that allows video decoding to be hardware accelerated. The pipeline allows certain CPU-intensive operations such as iDCT, motion compensation and deinterlacing to be offloaded to the GPU. DXVA 2.0 allows more operations, including video capturing and processing operations, to be hardware accelerated as well.

DXVA works in conjunction with the video rendering model used by the video card. DXVA 1.0, which was introduced as a standardized API with Windows 2000 and is currently available on Windows 98 or later, can use either the overlay rendering mode or VMR 7/9. DXVA 2.0, available only on Windows Vista, Windows 7, Windows 8 and later OSs, integrates with Media Foundation (MF) and uses the Enhanced Video Renderer (EVR) present in MF.

Faroudja

Faroudja Labs was a San Francisco-based IP and research company founded by Yves Faroudja. Faroudja Labs should not be confused with Faroudja Enterprises, Yves Faroudja's latest venture.

Faroudja specialized in video processing algorithms and products. Its technologies for deinterlacing and inverse telecine have received great acclaim within the consumer electronics industry and have been widely used in many electronic devices, such as TV sets, set top boxes and video processors.

Faroudja's work generated more than 65 patents, was licensed to consumer electronics companies, and earned three Technology & Engineering Emmy Awards (one for advanced encoding techniques, a lifetime achievement award for Yves Faroudja, and one for HDTV upconversion used in network broadcast applications), as well as numerous other awards.

Since 2007, the Faroudja brand and all associated video processing IP have been part of STMicroelectronics, an international semiconductor company, which now uses the technology in System-on-Chip (SoC) products.

Field (video)

In video, a field is one of the many still images which are displayed sequentially to create the impression of motion on the screen. Two fields comprise one video frame. When the fields are displayed on a video monitor they are "interlaced" so that the content of one field will be used on all of the odd-numbered lines on the screen and the other field will be displayed on the even lines. Converting fields to a still frame image requires a process called deinterlacing, in which the missing lines are duplicated or interpolated to recreate the information that would have been contained in the discarded field. Since each field contains only half of the information of a full frame, however, deinterlaced images do not have the resolution of a full frame.

In order to increase the resolution of video images, therefore, new schemes have been created that capture full-frame images for each frame. Video composed of such frames is called progressive scan video.

Video shot with a standard video camera format such as S-VHS or Mini-DV is often interlaced when created, whereas video shot with a film-based camera is almost always progressive. Free-to-air analog TV was mostly broadcast as interlaced material because the trade-off of spatial resolution for frame rate reduced flickering on cathode ray tube (CRT) televisions. High-definition digital television (see: HDTV) today can be broadcast terrestrially or distributed through a cable system in either interlaced (1080i) or progressive scan formats (720p or 1080p). Most prosumer camcorders can record in progressive scan formats.

In video editing, it is crucial to know which of the two (odd or even) fields is "dominant." Selecting edit points on the wrong field can result in a "flash" at each edit point and playing the video fields in reverse order creates a flickering image.

Filter (video)

A video filter is a software component that performs some operation on a multimedia stream. Multiple filters can be used in a chain, known as a filter graph, in which each filter receives input from its upstream filter, processes the input and outputs the processed video to its downstream filter.

With regard to video encoding, three categories of filters can be distinguished:

prefilters: used before encoding

intrafilters: used while encoding (and are thus an integral part of a video codec)

postfilters: used after decoding

Frame grabber

A frame grabber is an electronic device that captures (i.e., "grabs") individual, digital still frames from an analog video signal or a digital video stream. It is usually employed as a component of a computer vision system, in which video frames are captured in digital form and then displayed, stored, transmitted, analyzed, or combinations of these.

Historically, frame grabber expansion cards were the predominant way to interface cameras to PCs. Other interface methods have emerged since then, with frame grabbers (and in some cases, cameras with built-in frame grabbers) connecting to computers via interfaces such as USB, Ethernet and IEEE 1394 ("FireWire"). Early frame grabbers typically had only enough memory to store a single digitized video frame, whereas many modern frame grabbers can store multiple frames.

Modern frame grabbers often are able to perform functions beyond capturing a single video input. For example, some devices capture audio in addition to video, and some provide, and concurrently capture frames from, multiple video inputs. Other operations may be performed as well, such as deinterlacing, text or graphics overlay, image transformations (e.g., resizing, rotation, mirroring), and conversion to JPEG or other compressed image formats. To satisfy the technological demands of applications such as radar acquisition, manufacturing and remote guidance, some frame grabbers can capture images at high frame rates, high resolutions, or both.

HDV

HDV is a format for recording of high-definition video on DV cassette tape. The format was originally developed by JVC and supported by Sony, Canon, and Sharp. The four companies formed the HDV consortium in September 2003.

Conceived as an affordable high definition format for digital camcorders, HDV quickly caught on with many amateur and professional videographers due to its low cost, portability, and image quality acceptable for many professional productions.

HDV and HDV logo are trademarks of JVC and Sony.

Herringbone

Herringbone can refer to:

Herring-Bone (card game), a game of patience

Herringbone (cloth), a woven pattern of tweed or twill cloth

Herringbone (horse), a Thoroughbred racehorse

Herringbone cross-stratification, a sedimentary structure in geology that is formed from back-and-forth tidal water flow

Herringbone gear, a type of gear

Herringbone pattern, a pattern of floor tiling or paving

Herringbone seating, a pattern of airliner seating

A bonding pattern of brickwork, also known as opus spicatum

Herringbone stitch

A type of braided hairstyle, which is also known as a fishtail braid

A distortion pattern from deinterlacing video called mouse teeth

A method of counting used with the unary numeral system

A technique of moving one's skis while cross-country skiing

Herringbone milking shed

Herringbone, another name for the medical condition scintillating scotoma

Interlaced video

Interlaced video (also known as Interlaced scan) is a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth. The interlaced signal contains two fields of a video frame captured at two different times. This enhances motion perception to the viewer, and reduces flicker by taking advantage of the phi phenomenon.

This effectively doubles the time resolution (also called temporal resolution) as compared to non-interlaced footage (for frame rates equal to field rates). Interlaced signals require a display that is natively capable of showing the individual fields in a sequential order. CRT displays and ALiS plasma displays are made for displaying interlaced signals.

Interlaced scan refers to one of two common methods for "painting" a video image on an electronic display screen (the other being progressive scan) by scanning or displaying each line or row of pixels. This technique uses two fields to create a frame. One field contains all odd-numbered lines in the image; the other contains all even-numbered lines.

A Phase Alternating Line (PAL)-based television set display, for example, scans 50 fields every second (25 odd and 25 even). The two sets of 25 fields work together to create a full frame every 1/25 of a second (or 25 frames per second), but with interlacing create a new half frame every 1/50 of a second (or 50 fields per second). To display interlaced video on progressive scan displays, playback applies deinterlacing to the video signal (which adds input lag).

The European Broadcasting Union has argued against interlaced video in production and broadcasting. They recommend 720p 50 fps (frames per second) for the current production format—and are working with the industry to introduce 1080p 50 as a future-proof production standard. 1080p 50 offers higher vertical resolution, better quality at lower bitrates, and easier conversion to other formats, such as 720p 50 and 1080i 50. The main argument is that no matter how complex the deinterlacing algorithm may be, the artifacts in the interlaced signal cannot be completely eliminated because some information is lost between frames.

Despite arguments against it, television standards organizations continue to support interlacing. It is still included in digital video transmission formats such as DV, DVB, and ATSC. New video compression standards like High Efficiency Video Coding are optimized for progressive scan video, but sometimes do support interlaced video.

LiVES

LiVES (LiVES Editing System) is a free video editing software and VJ tool, released under the GNU General Public License version 3 or later. There are binary versions available for most popular Linux distributions (including Debian, Ubuntu, Fedora, Suse, Gentoo, Slackware, Arch Linux and Mandriva). There are also ports for BSD, and it will run under Solaris and IRIX. It has been compiled under OS X Leopard, but not thoroughly tested on that platform.

MEncoder

MEncoder is a free command line transcoding tool released under the GNU General Public License. It is a sibling of MPlayer, and can convert all the formats that MPlayer understands into a variety of compressed and uncompressed formats using different codecs. MEncoder is included in the MPlayer distribution.

Non-local means

Non-local means is an algorithm in image processing for image denoising. Unlike "local mean" filters, which take the mean value of a group of pixels surrounding a target pixel to smooth the image, non-local means filtering takes a mean of all pixels in the image, weighted by how similar these pixels are to the target pixel. This results in much greater post-filtering clarity and less loss of detail in the image compared with local mean algorithms. Compared with other well-known denoising techniques, the "method noise" (i.e. the error introduced by the denoising process) of non-local means looks more like white noise, which is desirable because it is typically less disturbing in the denoised product. Recently, non-local means has been extended to other image processing applications such as deinterlacing, view interpolation, and depth map regularization.
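
The following is a minimal, unoptimised Python sketch of the idea for a small grayscale image; as in practical implementations, the patch comparison is restricted to a search window rather than literally every pixel. The patch size, search radius and filtering parameter h are illustrative assumptions.

  import numpy as np

  def nl_means(img, patch=3, search=7, h=10.0):
      # Each output pixel is a weighted average of pixels in a search
      # window, weighted by how similar the patch around each candidate
      # pixel is to the patch around the target pixel. This brute-force
      # version is very slow; real implementations use integral images or
      # FFT tricks to speed up the patch comparisons.
      img = img.astype(np.float32)
      pad = patch // 2
      padded = np.pad(img, pad, mode="reflect")
      rows, cols = img.shape
      out = np.zeros_like(img)
      for i in range(rows):
          for j in range(cols):
              ref = padded[i:i + patch, j:j + patch]
              weight_sum = 0.0
              value = 0.0
              for di in range(max(0, i - search), min(rows, i + search + 1)):
                  for dj in range(max(0, j - search), min(cols, j + search + 1)):
                      cand = padded[di:di + patch, dj:dj + patch]
                      dist2 = float(np.mean((ref - cand) ** 2))
                      wgt = float(np.exp(-dist2 / (h * h)))
                      weight_sum += wgt
                      value += wgt * img[di, dj]
              out[i, j] = value / weight_sum
      return out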

Nvidia PureVideo

PureVideo is Nvidia's hardware SIP core that performs video decoding. PureVideo is integrated into some of the Nvidia GPUs, and it supports hardware decoding of multiple video codec standards: MPEG-2, VC-1, H.264, and HEVC. PureVideo occupies a considerable amount of a GPU's die area and should not be confused with Nvidia NVENC. In addition to video decoding on chip, PureVideo offers features such as edge enhancement, noise reduction, deinterlacing, dynamic contrast enhancement and color enhancement.

Progressive scan

Progressive scanning (alternatively referred to as noninterlaced scanning) is a format of displaying, storing, or transmitting moving images in which all the lines of each frame are drawn in sequence. This is in contrast to interlaced video used in traditional analog television systems where only the odd lines, then the even lines of each frame (each image called a video field) are drawn alternately, so that only half the number of actual image frames are used to produce video. The system was originally known as "sequential scanning" when it was used in the Baird 240 line television transmissions from Alexandra Palace, United Kingdom in 1936. It was also used in Baird's experimental transmissions using 30 lines in the 1920s. Progressive scanning is universally used in computer screens in the 2000s.

Uncompressed video

Uncompressed video is digital video that either has never been compressed or was generated by decompressing previously compressed digital video. It is commonly used by video cameras, video monitors, video recording devices (including general purpose computers), and in video processors that perform functions such as image resizing, image rotation, deinterlacing, and text and graphics overlay. It is conveyed over various types of baseband digital video interfaces, such as HDMI, DVI, DisplayPort and SDI. Standards also exist for carriage of uncompressed video over computer networks.

Some HD video cameras output uncompressed video, whereas others compress the video using a lossy compression method such as MPEG or H.264. In any lossy compression process, some of the video information is removed, which creates compression artifacts and reduces the quality of the resulting decompressed video. When editing video, it is preferred to work with video that has never been compressed (or was losslessly compressed) as this maintains the best possible quality, with compression performed after completion of editing.

Video

Video is an electronic medium for the recording, copying, playback, broadcasting, and display of moving visual media. Video was first developed for mechanical television systems, which were quickly replaced by cathode ray tube (CRT) systems, which were in turn replaced by flat panel displays of several types.

Video systems vary in display resolution, aspect ratio, refresh rate, color capabilities and other qualities. Analog and digital variants exist and can be carried on a variety of media, including radio broadcast, magnetic tape, optical discs, computer files, and network streaming.

Video processing

In electronics engineering, video processing is a particular case of signal processing, in particular image processing, which often employs video filters and where the input and output signals are video files or video streams. Video processing techniques are used in television sets, VCRs, DVDs, video codecs, video players, video scalers and other devices. For example, TV sets from different manufacturers often differ mainly in their industrial design and video processing.

Weaving (disambiguation)

Weaving is assembling threads into cloth.

Weaving or weave may also refer to:

Weaving (surname), a surname (and list of people with the name)

Weave (digital printing)

Weaving (horse), behavior pattern

Weaving (knitting)

Weaving (mythology), a literary theme

Weave (Forgotten Realms), a fictional magic-producing fabric in Forgotten Realms

Basket weaving

Hair weave

Mozilla Weave

Weaving, field combination deinterlacing of television images

Weaving, program transformation in Aspect-oriented programming

Weaving, grade-separation in vehicular traffic

Xilleon

The Broadcom Xilleon video processor (previously branded as ATI Xilleon and later AMD Xilleon) is a SoC combining a MIPS 4Kc CPU with an ASIC for accelerated video decoding, for use in set-top boxes and digital TVs, providing MPEG-2 decoding and other functions for the major worldwide broadcast systems (including PAL, NTSC, SECAM and ATSC).

The Xilleon line consists of four products, models 210D/H, 226, 240S/H, and 260 respectively with slightly different features including HD deinterlacing, 3D comb filter, dynamic contrast, noise reduction, sharpness, color control, and integrated 2D graphics acceleration.

After AMD announced the completion of its acquisition of ATI Technologies in the third quarter of 2006, the Xilleon products were sold under the AMD brand as AMD Xilleon.

The next generation of AVIVO, named Unified Video Decoder (UVD), was revealed to be based on the Xilleon video processor, providing hardware decoding of the H.264 and VC-1 video codec standards.

A new line of Xilleon video processors for flat panel LCD TVs, named Xilleon panel processors, with four models 410, 411, 420 and 421, was announced at CES 2008. These support 1080p video resolution and feature advanced motion estimation, motion compensation and frame rate conversion based on enhanced phase-plane correlation technology, which converts 24 or 60 Hz input video signals to the 100 or 120 Hz refresh rates used in most LCD TVs by creating additional frames to produce smoother motion. AMD signed an agreement with DivX, Inc. in January 2008 to allow several of the future Xilleon video processors to implement hardware DivX video decoding with DivX certification. However, as a result of company restructuring, AMD divested the digital TV chipset business starting from the second quarter of 2008.

On August 25, 2008 the Xilleon line was sold to the semiconductor company Broadcom.
