Television standards conversion is the process of converting a video signal from one television system to another, most commonly from NTSC to PAL or vice versa. This is done so that television programs produced in one nation may be viewed in a nation with a different standard. The video is fed through a video standards converter that changes it to the other video system.
Converting between different numbers of lines and different frame rates in video pictures is a complex technical problem. However, the international exchange of television programming makes standards conversion necessary and in many cases mandatory.
The first known case of television standards conversion was in Europe a few years after World War II, mainly with the RTF (France) and the BBC (UK) trying to exchange their 441-line and 405-line programming.
Perhaps the most technically challenging conversion is PAL to NTSC.
The two TV standards are, for all practical purposes, temporally and spatially incompatible with each other. Aside from the differing line counts, converting from a format with only 50 fields per second to one that requires 60 poses difficulty: every second, an additional 10 fields must be generated, so the converter has to create new frames (from the existing input) in real time.
TV contains many hidden signals. One signal type that is not transferred, except on some very expensive converters, is the closed captioning signal. Teletext signals do not need to be transferred, but the captioning data stream should be if it is technologically possible to do so.
With HDTV broadcasting this is less of an issue, since for the most part it only means passing the captioning datastream on to the new material. However, DVB and ATSC have significantly different captioning datastream types.
The subsampling in a video system is usually expressed as a three-part ratio. The three terms of the ratio are: the number of brightness ("luminance", "luma", or Y) samples, followed by the number of samples of the two color ("chroma") components, U/Cb then V/Cr, for each complete sample area.
For quality comparison, only the ratio between those values is important, so 4:4:4 could easily be called 1:1:1; however, traditionally the value for brightness is always 4, with the rest of the values scaled accordingly.
The sampling principles above apply to both digital and analog television.
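As an illustrative sketch (not from the article), the J:a:b notation above can be turned into a data-rate comparison by counting samples over the conventional J-pixel-wide, two-row reference region; the function name and region size here are assumptions for the example.

```python
def data_fraction(j, a, b):
    """Fraction of the 4:4:4 data rate implied by a J:a:b subsampling ratio.

    Counts samples over the conventional J-pixel-wide, two-row region:
    j luma samples per row, `a` chroma samples (per component) on the
    first row and `b` on the second.
    """
    luma = 2 * j              # one Y sample per pixel, two rows
    chroma = 2 * (a + b)      # Cb and Cr each contribute a + b samples
    full = 2 * j * 3          # 4:4:4 reference: Y, Cb, Cr all at full rate
    return (luma + chroma) / full

# 4:2:2 halves the chroma data; 4:2:0 halves it again.
print(data_fraction(4, 4, 4))  # 1.0
print(data_fraction(4, 2, 0))  # 0.5
```

This makes concrete why, for quality comparison, only the ratio between the terms matters: scaling all three terms together leaves the fraction unchanged.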
The "3:2 pulldown" conversion process for 24 frame/s film to television (telecine) creates a slight error in the video signal compared to the original film frames. This is one reason why NTSC films viewed on typical home equipment may not appear as smooth as when viewed in a cinema. The phenomenon is particularly apparent during slow, steady camera movements which appear slightly jerky when telecined. This process is commonly referred to as telecine judder.
PAL material to which 2:2:2:2:2:2:2:2:2:2:2:3 pulldown has been applied suffers from a similar lack of smoothness, though this effect is not usually called telecine judder.
In effect, every 12th film frame is displayed for the duration of 3 PAL fields (60 milliseconds) whereas the other 11 frames are all displayed for the duration of 2 PAL fields (40 milliseconds). This causes a slight "hiccup" in the video about twice a second.
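The arithmetic of that cadence can be checked with a small illustrative sketch (the variable names are mine, not the article's): twelve film frames fill exactly half a second of 50-field PAL, with one 60 ms "hiccup".

```python
# Each PAL field lasts 20 ms (50 fields per second).
cadence = [2] * 11 + [3]                 # fields held per film frame
durations_ms = [n * 20 for n in cadence]

assert sum(cadence) == 25                # 12 film frames -> 25 fields = 0.5 s
assert durations_ms[:11] == [40] * 11    # 11 frames shown for 40 ms each
assert durations_ms[-1] == 60            # the "hiccup" frame: 60 ms
```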
Television systems converters must avoid creating telecine judder effects during the conversion process. Avoiding this judder is of economic importance, as a substantial amount of NTSC (60 Hz, technically 29.97 frame/s) material originates from film and will have this problem when converted to PAL or SECAM (both 50 Hz, 25 frame/s).
Perhaps the most basic television standards conversion technique is simply to point a camera of one standard at a monitor displaying the other. Ireland used this method to derive its 405-line service from the 625-line service; RTÉ relied on it during the latter years of its use of the 405-line system.
A standards converter originally provided the 405-line service, but according to more than one former RTÉ engineering source the converter blew up, and afterwards the 405-line service was provided by a 405-line camera pointed at a monitor. This is not the best conversion technique, but it can work when going from a higher resolution to a lower one at the same frame rate. Slow phosphors are required on both orthicons.
The first video standards converters were analog. That is, a special professional video camera that used a video camera tube would be pointed at a cathode ray tube video monitor. Both the camera and the monitor could be switched to either NTSC or PAL, to convert both ways. Robert Bosch GmbH's Fernseh division made a large three rack analog video standards converter. These were the high-end converters of the 1960s and 1970s. Image Transform in Universal City, California, used the Fernseh converter and in the 1980s made their own custom digital converter. This was also a larger three-rack device. As digital memory size became larger in smaller packages, converters became the size of a microwave oven. Today one can buy a very small consumer converter for home use.
The Apollo moon missions (late 1960s, early 1970s) used slow-scan television (SSTV) as opposed to normal bandwidth television; this was mostly done to save battery power (and transmission bandwidth, since the SSTV video from the Apollo missions was multiplexed with all other voice and telemetry communications from the spacecraft). The camera used only 7 watts of power.
Later Apollo missions featured color field-sequential cameras that output 60-frame/s video, each frame corresponding to one of the RGB primary colors. This method was nominally compatible with black-and-white NTSC but incompatible with color NTSC; in fact, even monochrome compatibility was marginal. A monochrome set could have reproduced the pictures, but they would have flickered terribly, since the camera's complete color video ran at only 10 frame/s. Also, Doppler shift in the lunar signal would have caused pictures to tear and flip. For these reasons, the Apollo moon pictures required special conversion techniques.
The conversion steps were completely electromechanical, and they took place in nearly real time. First, the downlink station corrected the pictures for Doppler shift. Next, in an analog disc recorder, the downlink station recorded and replayed every video field six times. On the six-track recorder, recording and playback took place simultaneously. After the recorder, analog video processors added the missing components of the NTSC color signal.
The conversion delay lasted only some 10 seconds. Then color moon pictures left the downlink station for world distribution.
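The six-fold field replication described above can be sketched as follows; this is a minimal illustration (function and variable names are assumptions), showing how a slow-scan field rate is stretched to 60 fields per second.

```python
def replicate_fields(fields, factor=6):
    """Repeat every incoming field `factor` times, as the Apollo downlink
    disc recorder did to stretch slow-scan video to a 60 field/s rate."""
    out = []
    for field in fields:
        out.extend([field] * factor)
    return out

# One second of 10 field/s slow-scan video becomes 60 fields.
one_second_of_sstv = [f"field{i}" for i in range(10)]
print(len(replicate_fields(one_second_of_sstv)))  # 60
```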
This conversion technique may become popular with manufacturers of HDTV → NTSC and HDTV → PAL converter boxes for the ongoing global conversion to HDTV.
In a typical image transmission setup, all stationary images are transmitted at full resolution. Moving pictures are perceived at lower resolution, depending on the complexity of the inter-frame image content.
When one uses Nyquist subsampling as a standards conversion technique, the horizontal and vertical resolution of the material are reduced – this is an excellent method for converting HDTV to standard definition television, but it works very poorly in reverse.
The Nyquist subsampling method of systems conversion only works for HDTV to Standard Definition Television, so as a standards conversion technology it has a very limited use. Phase Correlation is usually preferred for HDTV to standard definition conversion.
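The downconversion idea can be sketched in one dimension: low-pass filter first, then decimate, so the reduced-resolution signal respects the new Nyquist limit. This is a minimal sketch assuming a crude box (moving-average) filter as the anti-alias step; real converters use far better filters.

```python
def downconvert(samples, factor=2):
    """Box-filter then decimate a scanline: average each group of `factor`
    samples so detail above the new Nyquist limit is attenuated before
    the resolution is reduced."""
    return [
        sum(samples[i:i + factor]) / factor
        for i in range(0, len(samples) - factor + 1, factor)
    ]

print(downconvert([0, 2, 4, 6, 8, 10]))  # [1.0, 5.0, 9.0]
```

Running the process in reverse (upsampling) cannot restore detail that was never sampled, which is why the method works so poorly for standard definition to HDTV.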
There is a large difference in frame rate between film (24.0 frames per second) and NTSC (approximately 29.97 frames per second). Unlike the two other most common video formats, PAL and SECAM, this difference cannot be overcome by a simple speed-up, because the required 25% speed-up would be clearly noticeable.
To convert 24 frame/s film to 29.97 frame/s NTSC (presented as 59.94 interlaced fields per second), a process called "3:2 pulldown" is used: the film is first slowed by 0.1% to 23.976 frame/s (the audio is slowed imperceptibly to match), and then every other film frame is held for three interlaced fields instead of two, mapping four film frames onto ten fields. This produces irregularities in the sequence of images which some people can perceive as a stutter during slow, steady camera pans in the source material. See telecine for more details.
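The cadence can be sketched as follows (an illustrative toy with assumed names, not broadcast code): four film frames map onto ten interlaced fields, alternating two and three fields per frame.

```python
def pulldown_3_2(frames):
    """Expand film frames into interlaced fields with a 2:3 cadence."""
    fields = []
    for i, frame in enumerate(frames):
        held = 2 if i % 2 == 0 else 3   # A:2, B:3, C:2, D:3, ...
        for _ in range(held):
            parity = "top" if len(fields) % 2 == 0 else "bottom"
            fields.append((frame, parity))
    return fields

out = pulldown_3_2(["A", "B", "C", "D"])
print(len(out))             # 10 fields per 4 film frames
print([f for f, _ in out])  # ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
```

The odd frames spanning three fields (and hence both field parities) are exactly what produces the judder described above.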
For viewing native PAL or SECAM material (such as European television series and some European movies) on NTSC equipment, a standards conversion has to take place. There are basically two ways to accomplish this.
To reduce the 625-line signal to 525, less expensive converters drop 100 lines. These converters maintain picture fidelity by evenly spacing removed lines. (For example, the system might discard every sixth line from each PAL field. After the 50th discard, this process would stop. By then the system would have passed the viewable area of the field. In the following field, the process would repeat, completing one frame.) To create the five additional frames, the converter repeats every fifth frame.
If there is little inter-frame motion, this conversion algorithm is fast, inexpensive and effective, and many inexpensive consumer television system converters have employed it. Yet in practice, most video features significant inter-frame motion, so more modern or expensive equipment may use sophisticated techniques to reduce conversion artefacts.
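The two steps described, evenly spaced line dropping plus frame repetition, can be sketched like this; a toy model with assumed names, using a 600-line frame for round numbers.

```python
def drop_lines(frame):
    """Discard every sixth line, evenly reducing vertical resolution."""
    return [line for i, line in enumerate(frame) if i % 6 != 5]

def convert_25_to_30(frames):
    """Repeat every fifth frame so five input frames yield six outputs."""
    out = []
    for i, frame in enumerate(frames):
        reduced = drop_lines(frame)
        out.append(reduced)
        if i % 5 == 4:
            out.append(reduced)  # the repeated frame
    return out

inp = [["line"] * 600 for _ in range(25)]  # one second of 25 frame/s video
outp = convert_25_to_30(inp)
print(len(outp), len(outp[0]))             # 30 500
```

The repeated frames are what cause the visible stutter on inter-frame motion that the following techniques try to avoid.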
The most basic and literal way to double lines is to repeat each scanline, though the results are generally very crude. Linear interpolation uses digital interpolation to recreate the missing lines in an interlaced signal, and the resulting quality depends on the technique used. Generally the "bob" version of a linear deinterlacer interpolates only within a single field, rather than merging information from adjacent fields, to preserve the smoothness of motion, resulting in a frame rate equal to the field rate (i.e. a 60i signal would be converted to 60p). Motion-adaptive deinterlacers combine the two approaches, applying intra-field interpolation in moving areas and field merging in static areas, which improves overall sharpness.
Interfield Interpolation is a technique in which new frames are created by blending adjacent frames, rather than repeating a single frame. This is more complex and computationally expensive than linear interpolation, because it requires the interpolator to have knowledge of the preceding and the following frames to produce an intermediate blended frame. Deinterlacing may also be required in order to produce images which can be interpolated smoothly. Interpolation can also be used to reduce the number of scanlines in the image by averaging the colour and intensity of pixels on neighbouring lines, a technique similar to Bilinear filtering, but applied to only one axis.
There are simple 2-line and 4-line converters. The 2-line converter creates a new line by comparing two adjacent lines, whereas a 4-line model averages four lines to synthesize a fifth. Interfield interpolation reduces judder, but at the expense of picture smearing: the greater the blending applied to smooth out the judder, the greater the smear caused by the blending.
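Both ideas, blending adjacent frames into an intermediate one and averaging neighbouring scanlines, reduce to simple weighted averages. This is a minimal sketch with assumed names, operating on flat pixel lists.

```python
def blend_frames(a, b, weight=0.5):
    """Interfield interpolation: form an intermediate picture by weighted
    blending of two adjacent frames (given as pixel lists)."""
    return [pa * (1 - weight) + pb * weight for pa, pb in zip(a, b)]

def interpolate_line(above, below):
    """2-line converter: synthesize a missing scanline from its neighbours."""
    return [(pa + pb) / 2 for pa, pb in zip(above, below)]

print(blend_frames([0, 100], [100, 100]))    # [50.0, 100.0]
print(interpolate_line([10, 20], [30, 40]))  # [20.0, 30.0]
```

Raising `weight` toward a moving frame smooths judder at the cost of exactly the smearing described above, since pixels from two different instants are mixed.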
Some more advanced techniques measure the nature and degree of inter-frame motion in the source, and use adaptive algorithms to blend the image based on the results. Some such techniques are known as motion compensation algorithms, and are computationally much more expensive than the simpler techniques, thus requiring more powerful hardware to be effective in real-time conversion.
Adaptive Motion algorithms capitalize on the way the human eye and brain process moving images – in particular, detail is perceived less clearly on moving objects.
Adaptive interpolation requires the converter to analyze multiple successive fields and to detect the amount and type of motion in different areas of the picture.
Adaptive Motion Interpolation has many variations and is commonly found in midrange converters. The quality and cost depend upon the accuracy in analyzing the type and amount of motion, and on the selection of the most appropriate algorithm for processing that motion.
Block matching involves dividing the image into mosaic blocks – say perhaps for the sake of explanation, 8x8 pixels. The blocks are then stored in memory. The next field read out is also divided up into the same number and size of mosaic blocks. The converter's computer then goes to work and starts matching up blocks. The blocks that stayed in the same relative position (read: there was no motion in this part of the image) receive relatively little processing.
When panning from left to right is taking place (over say 10 fields) it is safe to assume that the 11th field will be similar or very close.
The technique is highly effective, but it requires a tremendous amount of computing power. Consider a block of only 8x8 pixels: for each block, the computer has 64 possible directions and 64 pixels to be matched to the block in the next field. Also consider that the greater the motion, the further out the search must be conducted. Just to find an adjacent block in the next field entails searching 9 candidate blocks (a 3×3 neighbourhood); 2 blocks out requires a search and match of 25 blocks, and 3 blocks out it grows to 49, and so on.
The type of motion can exponentially compound the computing power required. Consider a rotating object, where a simple straight-line motion vector is of little help in predicting where the next block should match. It can quickly be seen that the more inter-frame motion is introduced, the greater the processing power required. This is the general concept of block matching; block-match converters can vary widely in price and performance depending on the attention to detail and complexity.
A curious artifact of block matching owes to the size of the block itself. If a moving object is smaller than the mosaic block, the entire block still gets moved. In most cases this is not an issue, but consider a thrown baseball: the ball itself has a high motion vector, but the background that makes up the rest of the block might have no motion at all. The background is transported in the moved block as well, based on the motion vector of the baseball, so what you might see is the ball with a small patch of outfield tagging along. Because it is in motion, the block may be "soft", depending upon what additional techniques were used, and barely noticeable unless you are looking for it.
Block matching requires a staggering amount of processing horsepower, but today's microprocessors are making it a viable solution.
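The search described above can be sketched as a brute-force sum-of-absolute-differences (SAD) match; an illustrative toy (names and the tiny search window are assumptions), not a production motion estimator.

```python
def sad(a, b):
    """Sum of absolute differences between two equal-sized 2-D blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def extract(frame, y, x, n):
    """Cut an n x n block out of a frame (list of pixel rows)."""
    return [row[x:x + n] for row in frame[y:y + n]]

def best_match(block, frame, n, cy, cx, search=2):
    """Exhaustively test every offset within +/- `search` pixels of
    (cy, cx) and return the lowest-SAD position and its cost."""
    best_pos, best_cost = (cy, cx), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            if 0 <= y <= len(frame) - n and 0 <= x <= len(frame[0]) - n:
                cost = sad(block, extract(frame, y, x, n))
                if cost < best_cost:
                    best_cost, best_pos = cost, (y, x)
    return best_pos, best_cost

# A 2x2 block that was at (0, 0) has moved to (1, 1) in the next field.
next_field = [[0, 0, 0, 0],
              [0, 9, 8, 0],
              [0, 7, 6, 0],
              [0, 0, 0, 0]]
block = [[9, 8], [7, 6]]
print(best_match(block, next_field, 2, 0, 0))  # ((1, 1), 0)
```

The quadratic growth of the candidate count with the search radius is exactly the 9 → 25 → 49 progression described in the text.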
Phase correlation is perhaps the most computationally complex of the general algorithms.
Phase correlation's success lies in the fact that it copes effectively with rapid and random motion. It does not easily get confused by the rotating or twirling objects that confuse most other kinds of standards converters. Phase correlation is elegant as well as technically and conceptually complex. Its operation derives from applying a Fourier transform to each field of video.
A fast Fourier transform (FFT) is an efficient algorithm for computing the Fourier transform of discrete values (in this case image pixels). When applied to a finite set of samples, it expresses the data, and hence any changes (motion) between fields, in terms of frequency components.
Since the result of the FFT represents only the inter-frame changes in terms of frequency distribution, there is far less data that has to be processed in order to calculate the motion vectors.
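As a sketch of the whole pipeline, here is a minimal phase correlation in NumPy (its availability is assumed); it recovers the translation between two fields from the phase of their cross-power spectrum.

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the cyclic (dy, dx) shift mapping image `a` onto image `b`."""
    fa = np.fft.fft2(a)
    fb = np.fft.fft2(b)
    cross = fb * np.conj(fa)
    cross /= np.abs(cross) + 1e-12   # discard magnitude, keep phase only
    corr = np.fft.ifft2(cross).real  # a sharp peak appears at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(int(v) for v in peak)

a = np.random.default_rng(0).random((16, 16))
b = np.roll(a, shift=(2, 3), axis=(0, 1))  # "motion": 2 down, 3 right
print(phase_correlate(a, b))               # (2, 3)
```

Normalizing away the magnitude is what makes the peak sharp regardless of image content, which is why the method tolerates rapid motion so well.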
A digital television adapter, also known as a coupon-eligible converter box (CECB) or digital-to-analog converter box, is a device that receives a digital television (DTV) transmission by means of an antenna and converts that signal into an analog television signal that can be received and displayed on an analog television.
These boxes cheaply convert HDTV (16:9, at 720 or 1080 lines) to standard definition (NTSC or PAL, at 4:3). Very little is known about the specific conversion technologies used by these converter boxes in the PAL and NTSC zones.
Because this is a downconversion, very little image quality loss is perceived by viewers at the recommended viewing distance with most TV sets.
A lot of cross-format television conversion is done offline. There are several DVD packages that offer offline PAL ↔ NTSC conversion – including cross conversion (technically MPEG ↔ DTV) from the myriad of MPEG-based web video formats.
Cross conversion can use any method commonly in use for TV system format conversion, but typically (in order to reduce complexity and memory use) it is left up to the codec to do the conversion. Most modern DVDs are converted 525 ↔ 625 lines in this way, as it is very economical for most programming that originates at EDTV resolution.
The 25th Academy Awards ceremony was held on March 19, 1953. It took place at the RKO Pantages Theatre in Hollywood, and the NBC International Theatre in New York City.
It was the first Academy Awards ceremony to be televised, and the first ceremony to be held in Hollywood and New York City simultaneously. It was also the only year that the New York ceremonies were held in the NBC International Theatre on Columbus Circle, which was shortly thereafter demolished and replaced by the New York Coliseum convention center.

A major upset occurred when the heavily favored High Noon lost to Cecil B. DeMille's The Greatest Show on Earth, eventually considered among the worst films to have won the Academy Award for Best Picture. The American film magazine Premiere listed the film among the 10 worst Oscar winners, and the British film magazine Empire rated it #3 on their list of the 10 worst Oscar winners. It has the lowest spot on Rotten Tomatoes' list of the 81 films to win Best Picture. Of all the films nominated for the Oscar this year, only High Noon and Singin' in the Rain would show up 46 years later on the American Film Institute list of the greatest American films of the 20th century. For a film that received only two nominations, Singin' in the Rain went on to be named the greatest American musical film of all time and, in the 2007 American Film Institute updated list, the fifth greatest American film of all time, while High Noon was ranked twenty-seventh on the same 2007 list.
The Bad and the Beautiful won five awards, the most wins ever for a film not nominated for Best Picture. It was also the second Academy Awards in which a film not nominated for Best Picture received the most awards of the evening, excluding years where there were ties for the most wins. The only other film to do this was The Thief of Bagdad at the 13th Academy Awards; as of the 91st Academy Awards, it has not happened since.
Until Spotlight won only Best Picture and Best Original Screenplay at the 88th Academy Awards, this was the last year in which the Best Picture winner won just two Oscars. It was also the second of three years to date in which two films not nominated for Best Picture received more nominations than the winner (The Bad and the Beautiful and Hans Christian Andersen, both with six). This occurred again at the 79th Academy Awards.
Shirley Booth became the last person born in the 19th century to win an Oscar in a Leading Role. She is also the first woman in her 50s to win the award, at the age of 54 (the second woman in her 50s to win, Julianne Moore, was 54 when awarded at the 87th Academy Awards).
John Ford's fourth win for Best Director set a record for the most wins in this category that remains unmatched to this day.
For the first time since the introduction of Supporting Actor and Actress awards in 1936, Best Picture, Best Director, and all four acting Oscars went to six different films. This has happened only three times since, at the 29th Academy Awards for 1956, the 78th for 2005, and the 85th for 2012.

42nd Academy Awards
The 42nd Academy Awards were presented April 7, 1970, at the Dorothy Chandler Pavilion in Los Angeles, California. For the second year in a row, there was no official host. Awards were presented by seventeen "Friends of Oscar": Bob Hope, John Wayne, Barbra Streisand, Fred Astaire, Jon Voight, Myrna Loy, Clint Eastwood, Raquel Welch, Candice Bergen, James Earl Jones, Katharine Ross, Cliff Robertson, Ali MacGraw, Barbara McNair, Elliott Gould, Claudia Cardinale, and Elizabeth Taylor. This was the first Academy Awards ceremony to be broadcast via satellite to an international audience, but only outside North America. Mexico and Brazil were the sole countries to broadcast the event live.

This is currently the highest rated of the televised Academy Awards ceremonies, according to Nielsen ratings. The record, as of 2019, remains unbroken thanks to the emergence of the Super Bowl as the biggest annual event of awards season.
Midnight Cowboy became the first – and so far, the only – X-rated film to win the Academy Award for Best Picture. Its rating has since been downgraded to R. The previous year had seen the only G-rated film to win Best Picture, Carol Reed's Oliver!.
They Shoot Horses, Don't They? set an Oscar record by receiving nine nominations without one for Best Picture.
This was the last time until the 68th Academy Awards in which none of the four acting winners had appeared in Best Picture nominees, as well as the first time where every acting nomination, as well as every major nominated film, was in color.

Digital image processing
In computer science, digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems.

Fernseh
The Fernseh AG television company was registered in Berlin on July 3, 1929 by John Logie Baird, Robert Bosch and other partners with an initial capital of 100,000 Reichsmark. Fernseh AG did research and manufacturing of television equipment.

Motion compensation
Motion compensation is an algorithmic technique used to predict a frame in a video, given the previous and/or future frames, by accounting for motion of the camera and/or objects in the video. It is employed in the encoding of video data for video compression, for example in the generation of MPEG-2 files. Motion compensation describes a picture in terms of the transformation of a reference picture to the current picture. The reference picture may be previous in time or even from the future. When images can be accurately synthesized from previously transmitted/stored images, the compression efficiency can be improved.

Motion interpolation
Motion interpolation or motion-compensated frame interpolation (MCFI) is a form of video processing in which intermediate animation frames are generated between existing ones by means of interpolation, in an attempt to make animation more fluid and to compensate for display motion blur.

Phase correlation
Phase correlation is an approach to estimate the relative translative offset between two similar images (digital image correlation) or other data sets. It is commonly used in image registration and relies on a frequency-domain representation of the data, usually calculated by fast Fourier transforms. The term is applied particularly to a subset of cross-correlation techniques that isolate the phase information from the Fourier-space representation of the cross-correlogram.

Reverse Standards Conversion
Reverse Standards Conversion or RSC is a process developed by a team led by James Insell at the BBC for the restoration of video recordings which have already been converted between different video standards using early conversion techniques.

Video standards converter
A video standards converter is a video device that converts NTSC to PAL and/or PAL to NTSC.
The PAL TV signals may be transcoded to or from SECAM.
Video standards converters are primarily used so television shows can be viewed in nations with different video standards.
With the use of high-definition television, new digital video standards converters came on the market. Some were down converters only, HDTV to PAL or NTSC. Others could both up and down convert: HDTV to standard definition: PAL or NTSC and vice versa.