Frame rate

Frame rate (expressed in frames per second or fps) is the frequency (rate) at which consecutive images, called frames, appear on a display. The term applies equally to film and video cameras, computer graphics, and motion capture systems. Frame rate may also be called the frame frequency, and may be expressed in hertz.

Human vision

The temporal sensitivity and resolution of human vision vary depending on the type and characteristics of the visual stimulus, and they differ between individuals. The human visual system can process 10 to 12 images per second and perceive them individually, while higher rates are perceived as motion.[1] Modulated light (such as a computer display) is perceived as stable by the majority of participants in studies when the rate exceeds roughly 50 to 90 Hz. This perception of modulated light as steady is known as the flicker fusion threshold. However, when the modulated light is non-uniform and contains an image, the flicker fusion threshold can be much higher, in the hundreds of hertz.[2] With regard to image recognition, people have been found to recognize a specific image in an unbroken series of different images, each of which lasts as little as 13 milliseconds.[3] Persistence of vision sometimes accounts for a very short, single-millisecond visual stimulus having a perceived duration of between 100 ms and 400 ms. Multiple stimuli that are very short are sometimes perceived as a single stimulus; for example, a 10 ms green flash of light immediately followed by a 10 ms red flash is perceived as a single yellow flash.[4]

Film and video

Silent films

Early silent films had stated frame rates anywhere from 16 to 24 frames per second (fps),[5] but since the cameras were hand-cranked, the rate often changed during the scene to fit the mood. Projectionists could also change the frame rate in the theater by adjusting a rheostat controlling the voltage powering the film-carrying mechanism in the projector.[6] Film companies often intended that theaters show their silent films at higher frame rates than they were filmed at.[7] These frame rates were sufficient to create a sense of motion, but the motion appeared jerky. To minimize the perceived flicker, projectors employed dual- and triple-blade shutters, so that each frame was displayed two or three times, increasing the flicker rate to 48 or 72 hertz and reducing eye strain. Thomas Edison said that 46 frames per second was the minimum needed for the eye to perceive motion: "Anything less will strain the eye."[8][9] In the mid to late 1920s, the frame rate for silent films increased to between 20 and 26 fps.[8]

Sound films

When sound film was introduced in 1926, variations in film speed were no longer tolerated, as the human ear is more sensitive to changes in audio frequency than the eye is to changes in frame rate. Many theaters had shown silent films at 22 to 26 fps, which is why the industry chose 24 fps for sound film as a compromise.[10] From 1927 to 1930, as various studios updated their equipment, the rate of 24 fps became standard for 35 mm sound film.[1] At 24 fps, the film travels through the projector at a rate of 456 millimetres (18.0 in) per second. This allowed simple two-blade shutters to give a projected series of 48 images per second, satisfying Edison's recommendation. Many modern 35 mm film projectors use three-blade shutters to give 72 images per second; each frame is flashed on screen three times.[8]
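
The arithmetic behind these figures is simple; here is a minimal Python sketch, assuming the standard 19 mm frame pitch of 4-perforation 35 mm film (a figure not stated above):

    # Film transport speed and projector flash rate for 35 mm sound film.
    FRAME_PITCH_MM = 19.0   # assumed: 4-perf 35 mm frame pitch (4 x 4.75 mm)
    FRAME_RATE = 24         # frames per second

    print(FRAME_RATE * FRAME_PITCH_MM)   # 456.0 mm of film per second

    for blades in (2, 3):
        print(blades, "blade shutter:", FRAME_RATE * blades, "flashes/s")
    # 2 blade shutter: 48 flashes/s
    # 3 blade shutter: 72 flashes/s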

Animation

[Animation: a cartoon of a galloping horse, displayed at 12 drawings per second; the fast motion is on the edge of being objectionably jerky.]

In drawn animation, moving characters are often shot "on twos", that is to say, one drawing is shown for every two frames of film (which usually runs at 24 frames per second), meaning there are only 12 drawings per second.[11] Even though the image update rate is low, the fluidity is satisfactory for most subjects. However, when a character is required to perform a quick movement, it is usually necessary to revert to animating "on ones", as "twos" are too slow to convey the motion adequately. A blend of the two techniques keeps the eye fooled without unnecessary production cost.[12]
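
The exposure pattern is easy to express in code. A minimal sketch (the expose helper is invented purely for illustration):

    def expose(drawings, on=2):
        # Repeat each drawing `on` times: on=1 is "ones", on=2 is "twos".
        return [d for d in drawings for _ in range(on)]

    drawings = list(range(12))        # 12 drawings for one second of action
    frames = expose(drawings, on=2)   # shot "on twos"
    print(len(frames))                # 24 film frames, i.e. one second at 24 fps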

Animation for most "Saturday morning cartoons" is produced as cheaply as possible, and is most often shot on "threes", or even "fours", i.e. three or four frames per drawing. This translates to only 8 or 6 drawings per second, respectively. Anime is also usually drawn on threes.[13][14]

Modern video standards

Due to the mains frequency of electric grids, analog television broadcasting was developed with frame rates of 50 Hz (most of the world) or 60 Hz (US, Japan, South Korea). The enormous rotating mass of power-station generators gave them great rotational inertia, making the power mains frequency extremely stable, so circuits were developed for television cameras to lock onto that frequency as their primary reference.

The introduction of color television technology made it necessary to lower the 60 fps frequency by 0.1% to avoid "dot crawl", an annoying display artifact that appeared on legacy black-and-white displays in highly color-saturated areas. Lowering the frame rate by 0.1% greatly reduced this undesirable effect.

The video transmission standards of North America, Japan, and South Korea are still based on 60 ÷ 1.001 ≈ 59.94 images per second. Two image sizes are typically used: 1920 × 540 (1080i) and 1280 × 720 (720p). Confusingly, interlaced formats are customarily stated at half their image rate (29.97 fps) and double their image height, but these figures are purely a convention; in each format, approximately 60 images per second are produced. 1080i produces 59.94 1920 × 540 images per second, each squashed to half height in the photographic process and stretched back to fill the screen on playback in a television set. The 720p format produces 59.94 full 1280 × 720 images per second, so no squeezing or expansion of the image is necessary.
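
A short sketch confirming this arithmetic (plain Python, no external libraries):

    field_rate = 60 / 1.001          # NTSC color correction: 0.1% slowdown
    print(round(field_rate, 4))      # 59.9401 images per second

    frame_rate = field_rate / 2      # customary interlaced "frame" rate
    print(round(frame_rate, 4))      # 29.97

    print(1920 * 540)                # 1036800 pixels in one 1080i half-height image
    print(1280 * 720)                # 921600 pixels in one full 720p image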

This confusion was industry-wide in the early days of digital video software, with much software written on the incorrect assumption that only 29.97 images were expected each second. While it was true that each picture element was polled and sent only 29.97 times per second, the pixel location immediately below it was polled 1/60th of a second later, as part of a completely separate image for the next 1/60-second interval.

Film, at its native 24 fps rate, cannot be displayed on such systems without a pulldown process, which often leads to "judder": to convert 24 frames per second into 60 frames per second, every odd frame is doubled and every even frame is tripled. This creates uneven, stroboscopic-looking motion. Other conversions involve similarly uneven frame multiplication. Newer video standards support 120, 240, or 300 frames per second, so frames can be evenly multiplied for common frame rates such as 24 fps film and 30 fps video, as well as 25 and 50 fps video in the case of 300 fps displays. These standards also support video that is natively at higher frame rates, and video with interpolated frames between its native frames.[15] Some modern films are experimenting with frame rates higher than 24 fps, such as 48 and 60 fps.[16]
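
A minimal sketch of the uneven frame-repetition cadence described above (the function name is invented for illustration):

    def pulldown_24_to_60(frames):
        # Show each film frame alternately 2 then 3 times: 2 + 3 = 5 video
        # frames for every 2 film frames, i.e. 24 fps -> 60 fps.
        out = []
        for i, frame in enumerate(frames):
            out.extend([frame] * (2 if i % 2 == 0 else 3))
        return out

    video = pulldown_24_to_60(list(range(24)))  # one second of film
    print(len(video))   # 60 video frames; the uneven 2/3 cadence causes judder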

Frame rate in electronic camera specifications may refer to the maximum possible number of frames per second; in practice, other settings (such as exposure time) may reduce the actual frequency to a lower number.

References

  1. ^ a b Read, Paul; Meyer, Mark-Paul; Gamma Group (2000). Restoration of motion picture film. Conservation and Museology. Butterworth-Heinemann. pp. 24–26. ISBN 978-0-7506-2793-1.
  2. ^ Davis, James (2015). "Humans perceive flicker artefacts at 500 Hz". Scientific Reports. 5: 7861. doi:10.1038/srep07861. PMC 4314649. PMID 25644611.
  3. ^ Potter, Mary C. (December 28, 2013). "Detecting meaning in RSVP at 13 ms per picture". Attention, Perception, & Psychophysics. 76 (2): 270–279. doi:10.3758/s13414-013-0605-z. PMID 24374558.
  4. ^ Robert Efron (1973). "Conservation of temporal information by perceptual systems". Perception & Psychophysics. 14 (3): 518–530. doi:10.3758/bf03211193.
  5. ^ Brown, Julie (2014). "Audio-visual Palimpsests: Resynchronizing Silent Films with 'Special' Music". In Neumeyer, David (ed.). The Oxford Handbook of Film Music Studies. Oxford University Press. p. 588. ISBN 978-0195328493.
  6. ^ Kerr, Walter (1975). Silent Clowns. Knopf. p. 36. ISBN 978-0394469072.
  7. ^ Card, James (1994). Seductive cinema: the art of silent film. Knopf. p. 53. ISBN 978-0394572185.
  8. ^ a b c Brownlow, Kevin (Summer 1980). "Silent Films: What Was the Right Speed?". Sight & Sound. 49 (3): 164–167. Archived from the original on 8 July 2011. Retrieved 2 May 2012.
  9. ^ Elsaesser, Thomas; Barker, Adam (1990). Early cinema: space, frame, narrative. BFI Publishing. p. 284. ISBN 978-0-85170-244-5.
  10. ^ TWiT Netcast Network (2017-03-30), How 24 FPS Became Standard, retrieved 2017-03-31
  11. ^ Chew, Johnny. "What Are Ones, Twos, and Threes in Animation?". Lifewire. Retrieved August 8, 2018.
  12. ^ Whitaker, Harold; Halas, John; updated by Sito, Tom (2009). Timing for animation (2nd ed.). Amsterdam: Elsevier/Focal Press. p. 52. ISBN 978-0240521602. Retrieved August 8, 2018.
  13. ^ "Shot on threes (ones, twos, etc.) - Anime News Network". www.animenewsnetwork.com.
  14. ^ CLIP STUDIO (12 February 2016). "CLIP STUDIO PAINT アニメーション機能の使い方" [How to use the animation features of CLIP STUDIO PAINT] – via YouTube.
  15. ^ Armstrong, M; Flynn, D; Hammond, M; Jolly, S; Salmon, R (September 2008). High Frame-Rate Television. BBC White Paper WHP 169.
  16. ^ Jon Fingas (November 27, 2014), "James Cameron's 'Avatar' sequels will stick to 48 frames per second", Engadget, retrieved April 15, 2017

480i

480i is a shorthand name for the video mode used for standard-definition analog or digital television in the Caribbean, Myanmar, Japan, South Korea, Taiwan, the Philippines, Laos, Western Sahara, and most of the Americas (with the exception of Argentina, Paraguay, and Uruguay). The 480 identifies a vertical resolution of 480 lines, and the i identifies it as an interlaced resolution. The field rate, which is 60 Hz (or 59.94 Hz when used with NTSC color), is sometimes included when identifying the video mode, i.e. 480i60; another notation, endorsed by both the International Telecommunication Union in BT.601 and SMPTE in SMPTE 259M, includes the frame rate, as in 480i/30. The other common standard, used in the rest of the world, is 576i.

In analogue contexts, this resolution is often called "525 lines". It is mandated by CCIR Systems M and J, which are usually paired with NTSC color; this pairing led to the "NTSC" name often being used, inaccurately, to refer to this video mode. Other color encodings have also been used with System M, notably PAL-M in Brazil.

576i

576i is a standard-definition video mode originally used for broadcast television in most countries of the world where the utility frequency for electric power distribution is 50 Hz. Because of its close association with the color encoding system, it is often referred to simply as PAL, PAL/SECAM, or SECAM when compared to its 60 Hz (typically; see PAL-M) NTSC-color-encoded counterpart, 480i. In digital applications it is usually referred to as "576i"; in analogue contexts it is often called "625 lines". The aspect ratio is usually 4:3 in analogue transmission and 16:9 in digital transmission.

The 576 identifies a vertical resolution of 576 lines, and the i identifies it as an interlaced resolution. The field rate, which is 50 Hz, is sometimes included when identifying the video mode, i.e. 576i50; another notation, endorsed by both the International Telecommunication Union in BT.601 and SMPTE in SMPTE 259M, includes the frame rate, as in 576i/25.

Its basic parameters common to both analogue and digital implementations are: 576 scan lines or vertical pixels of picture content, 25 frames (giving 50 fields) per second.

In analogue use, 49 additional lines without image content are added to the displayed frame of 576 lines to allow time for older cathode ray tube circuits to retrace for the next frame, giving 625 lines per frame. Digital information not intended for display as part of the image can be transmitted in the non-displayed lines; teletext, other services, and test signals are often implemented there.
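
A quick check of how these line counts and rates fit together; the resulting 15,625 lines per second is the standard 625-line horizontal scan rate:

    ACTIVE_LINES = 576
    BLANKING_LINES = 49
    FRAME_RATE = 25                    # frames per second

    total = ACTIVE_LINES + BLANKING_LINES
    print(total)                       # 625 lines per frame
    print(FRAME_RATE * 2)              # 50 fields per second
    print(total * FRAME_RATE)          # 15625 lines per second (15.625 kHz)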

Analogue television signals have no pixels; they are rastered in scan lines, but along each line the signal is continuous. In digital applications, the number of pixels per line is an arbitrary choice as long as it satisfies the sampling theorem. Values above about 500 pixels per line are enough for conventional free-to-air television; DVB-T, DVD, and DV allow higher values such as 704 or 720.

The video format can be transported by major digital television formats, ATSC, DVB and ISDB, and on DVD, and it supports aspect ratios of standard 4:3 and anamorphic 16:9.

576p

576p is the shorthand name for a video display resolution. The p stands for progressive scan, i.e. non-interlaced, the 576 for a vertical resolution of 576 pixels, usually with a horizontal resolution of 720 or 704 pixels. The frame rate can be given explicitly after the letter.

720p

720p (1280×720 px; also called HD Ready or standard HD) is a progressive HDTV signal format with 720 horizontal lines and an aspect ratio (AR) of 16:9, normally known as widescreen HDTV (1.78:1). All major HDTV broadcasting standards (such as SMPTE 292M) include a 720p format, which has a resolution of 1280×720; there are also other formats, including HDV Playback and AVCHD for camcorders, that use 720p images with the standard HDTV resolution. The frame rate is standards-dependent; for conventional broadcasting it is 50 progressive frames per second in former PAL/SECAM countries (Europe, Australia, and others) and 59.94 frames per second in former NTSC countries (North America, Japan, Brazil, and others).

The number 720 stands for the 720 horizontal scan lines of image display resolution (also known as 720 pixels of vertical resolution). The p stands for progressive scan, i.e. non-interlaced. When broadcast at 60.00 frames per second, 720p features the highest temporal resolution possible under the ATSC and DVB standards. The term assumes a widescreen aspect ratio of 16:9, thus implying a resolution of 1280×720 px (approximately 0.92 megapixels).

720i (720 lines interlaced) is an erroneous term found in numerous sources and publications. Typically, it is a typographical error in which the author is referring to the 720p HDTV format. However, in some cases it is incorrectly presented as an actual alternative format to 720p. No proposed or existing broadcast standard permits 720 interlaced lines in a video frame at any frame rate.

Common Intermediate Format

CIF (Common Intermediate Format or Common Interchange Format), also known as FCIF (Full Common Intermediate Format), is a standardized format for the picture resolution, frame rate, color space, and color subsampling of digital video sequences used in video teleconferencing systems. It was first defined in the H.261 standard in 1988.

As the word "common" in its name implies, CIF was designed as a common compromise format to be relatively easy to convert for use either with PAL or NTSC standard displays and cameras. CIF defines a video sequence with a resolution of 352 × 288, which has a simple relationship to the PAL picture size, but with a frame rate of 30000/1001 (roughly 29.97) frames per second like NTSC, with color encoded using a YCbCr representation with 4:2:0 color sampling. It was designed as a compromise between PAL and NTSC schemes, since it uses a picture size that corresponds most easily to PAL, but uses the frame rate of NTSC. The compromise was established as a way to reach international agreement so that video conferencing systems in different countries could communicate with each other without needing two separate modes for displaying the received video. The simple way to convert NTSC video to CIF is to capture every other field (e.g., the top fields) of interlaced video, downsample it by 2:1 horizontally to convert 704 samples per line to 352 samples per line, and upsample it vertically by a ratio of 6:5 vertically to convert 240 lines to 288 lines. The simple way to convert PAL video to CIF is to similarly capture every other field, downsample it horizontally by 2:1, and introduce some jitter in the frame rate by skipping or repeating frames as necessary. Since H.261 systems typically operated at low bit rates, they also typically operated at low frame rates by skipping many of the camera source frames, so introducing some jitter in the frame rate tended not to be noticeable. More sophisticated conversion schemes (e.g., using deinterlacing to improve the vertical resolution from an NTSC camera) could also be used in higher quality systems.

In contrast to the CIF compromise that originated with the H.261 standard, there are two variants of the SIF (Source Input Format) that was first defined in the MPEG-1 standard. SIF is otherwise very similar to CIF. SIF on 525-line ("NTSC") based systems is 352 × 240 with a frame rate of 30000/1001 frames per second, and on 625-line ("PAL") based systems, it has the same picture size as CIF (352 × 288) but with a frame rate of 25 frames per second.

Some references to CIF are intended to refer only to its resolution (352 × 288), without intending to refer to its frame rate.

The YCbCr color representation had been previously defined in the first standard digital video source format, CCIR 601, in 1982. However, CCIR 601 uses 4:2:2 color sampling, which subsamples the Cb and Cr components only horizontally. H.261 additionally used vertical color subsampling, resulting in what is known as 4:2:0.

QCIF means "Quarter CIF". To have one quarter of the area, as "quarter" implies, the height and width of the frame are halved.

Terms also used are SQCIF (Sub Quarter CIF, sometimes subQCIF), 4CIF (4 × CIF) and 16CIF (16 × CIF). The resolutions for all of these formats are summarized in the table below.

Format   Luma resolution
SQCIF    128 × 96
QCIF     176 × 144
CIF      352 × 288
4CIF     704 × 576
16CIF    1408 × 1152

xCIF pixels are not square, instead having a native aspect ratio of 12:11, as with the standard for 625-line systems (see CCIR 601). On square-pixel displays (e.g., computer screens and many modern televisions) xCIF rasters should be rescaled so that the picture covers a 4:3 area, in order to avoid a "stretched" look: CIF content expanded horizontally by 12:11 results in a 4:3 raster of 384 × 288 square pixels.

The CIF and QCIF picture dimensions were specifically chosen to be multiples of 16 because of the way that discrete cosine transform based video compression/decompression was handled in H.261, using 16 × 16 macroblocks and 8 × 8 transform blocks. So a CIF-size image (352 × 288) contains 22 × 18 macroblocks and a QCIF image (176 × 144) contains 11 × 9 macroblocks. The 16 × 16 macroblock concept was later also used in other compression standards such as MPEG-1, MPEG-2, MPEG-4 Part 2, H.263, and H.264/MPEG-4 AVC.
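
A quick check of these macroblock counts:

    MB = 16  # macroblock edge in pixels

    for name, (w, h) in [("CIF", (352, 288)), ("QCIF", (176, 144))]:
        print(name, w // MB, "x", h // MB, "macroblocks")
    # CIF 22 x 18 macroblocks
    # QCIF 11 x 9 macroblocks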

Deinterlacing

Deinterlacing is the process of converting interlaced video, such as common analog television signals or 1080i format HDTV signals, into a non-interlaced form.

An interlaced video frame consists of two fields captured in sequence, one scanning the odd-numbered lines of the image sensor and the other the even-numbered lines. Analog television employed this technique because it required less transmission bandwidth and reduced the perceived flicker that a similar frame rate would produce with progressive scan. CRT-based displays were able to display interlaced video correctly due to their completely analogue nature. Newer displays are inherently digital, in that the display comprises discrete pixels. Consequently, the two fields need to be combined into a single frame, which leads to various visual defects. The deinterlacing process should try to minimize these.

Deinterlacing has been researched for decades and employs complex processing algorithms; however, consistent results have been very hard to achieve.
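
As an illustration of the basic task, here is a minimal sketch of two classic approaches, "weave" (interleave the two fields into one frame) and "bob" (line-double each field), assuming NumPy is available; the function names are invented, and real deinterlacers are far more sophisticated:

    import numpy as np

    def weave(top_field, bottom_field):
        # Interleave two fields into one frame: fine for static scenes,
        # but moving objects show "combing" because the fields were
        # captured at different times.
        h, w = top_field.shape
        frame = np.empty((2 * h, w), dtype=top_field.dtype)
        frame[0::2] = top_field
        frame[1::2] = bottom_field
        return frame

    def bob(field):
        # Line-double one field: no combing, but halves vertical detail.
        return np.repeat(field, 2, axis=0)

    top = np.zeros((288, 720), dtype=np.uint8)      # e.g. one 576i field
    bottom = np.ones((288, 720), dtype=np.uint8)
    print(weave(top, bottom).shape)   # (576, 720)
    print(bob(top).shape)             # (576, 720)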

Fillrate

The term pixel fillrate refers to the number of pixels a video card can render to the screen and write to video memory in a second; texture fillrate refers to the number of texture map elements (texels) a GPU can map to pixels in a second. Pixel fillrates are given in megapixels per second or, for newer cards, gigapixels per second, and are obtained by multiplying the number of Raster Output Units (ROPs) by the clock frequency of the graphics processing unit (GPU). Texture fillrate is obtained by multiplying the number of Texture Mapping Units (TMUs) by the GPU clock frequency, and is given in mega- or gigatexels per second. However, there is no full agreement on how to calculate and report fillrates; another possible method is to multiply the number of pixel pipelines by the clock frequency.
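
The two formulas reduce to simple multiplications. A sketch with purely hypothetical hardware numbers:

    def pixel_fillrate(rops, clock_hz):
        # Pixel fillrate = ROPs x GPU clock, in pixels per second.
        return rops * clock_hz

    def texture_fillrate(tmus, clock_hz):
        # Texture fillrate = TMUs x GPU clock, in texels per second.
        return tmus * clock_hz

    clock = 1.5e9  # hypothetical 1.5 GHz GPU clock
    print(pixel_fillrate(32, clock) / 1e9)     # 48.0 Gpixels/s
    print(texture_fillrate(128, clock) / 1e9)  # 192.0 Gtexels/s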

The results of these multiplications correspond to a theoretical number. The actual fillrate depends on many other factors. In the past, the fillrate was used as an indicator of performance by video card manufacturers such as ATI and NVIDIA; however, the importance of the fillrate as a measure of performance has declined as the bottleneck in graphics applications has shifted. For example, today, the number and speed of unified shader processing units have gained attention. Scene complexity can be increased by overdrawing, which happens when "an object is drawn to the frame buffer, and then another object (such as a wall) is drawn on top of it, covering it up. The time spent drawing the first object was wasted because it isn't visible." When a sequence of scenes is extremely complex (many pixels have to be drawn for each scene), the frame rate for the sequence may drop. When designing graphics-intensive applications, one can determine whether the application is fillrate-limited (or shader-limited) by seeing if the frame rate increases dramatically when the application runs at a lower resolution or in a smaller window.

Film frame

In filmmaking, video production, animation, and related fields, a frame is one of the many still images which compose the complete moving picture. The term is derived from the fact that, from the beginning of modern filmmaking until toward the end of the 20th century, and in many places still up to the present, the single images have been recorded on a strip of photographic film, a medium which historically grew rapidly in length; each image on such a strip looks rather like a framed picture when examined individually.

The term may also be used more generally as a noun or verb to refer to the edges of the image as seen in a camera viewfinder or projected on a screen. Thus, the camera operator can be said to keep a car in frame by panning with it as it speeds past.

High-definition television

High-definition television (HDTV) is a television system providing an image resolution substantially higher than that of standard-definition television. It can be either analog or digital. HDTV is the current standard video format used in most broadcasts: terrestrial broadcast television, cable television, satellite television, Blu-rays, and streaming video.

HDTV may be transmitted in various formats:

720p (HD ready): 1280×720p: 921,600 pixels (~0.92 MP) per frame

1080i (full HD): 1920×1080i: 1,036,800 pixels (~1.04 MP) per field or 2,073,600 pixels (~2.07 MP) per frame

1080p (full HD): 1920×1080p: 2,073,600 pixels (~2.07 MP) per frame

Some countries also use a non-standard CEA resolution, such as 1440×1080i: 777,600 pixels (~0.78 MP) per field or 1,555,200 pixels (~1.56 MP) per frame. The letter "p" here stands for progressive scan, while "i" indicates interlaced; the pixel arithmetic is illustrated in the sketch below.
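
These per-frame and per-field pixel counts follow directly from the dimensions; a quick check:

    formats = {
        "720p": (1280, 720),
        "1080i": (1920, 1080),
        "1080p": (1920, 1080),
        "1440x1080i": (1440, 1080),
    }
    for name, (w, h) in formats.items():
        pixels = w * h
        note = " (half of that per field)" if name.endswith("i") else ""
        print(name, format(pixels, ","), "pixels per frame" + note)
    # 720p 921,600 pixels per frame
    # 1080i 2,073,600 pixels per frame (half of that per field)
    # 1080p 2,073,600 pixels per frame
    # 1440x1080i 1,555,200 pixels per frame (half of that per field)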

When transmitted at two megapixels per frame, HDTV provides about five times as many pixels as SD (standard-definition television). The increased resolution provides for a clearer, more detailed picture. In addition, progressive scan and higher frame rates result in a picture with less flicker and better rendering of fast motion. HDTV as it is known today first started official broadcasting in 1989 in Japan, under the MUSE/Hi-Vision analog system. HDTV was widely adopted worldwide in the late 2000s.

High frame rate

In motion picture technology—either film or video—high frame rate (HFR) refers to higher frame rates than typical prior practice.

The frame rate for motion picture film cameras was typically 24 frames per second (fps), with multiple flashes of each frame during projection to prevent flicker. Analog television and video employed interlacing, in which only half of the image (known as a video field) was recorded and played back or refreshed at once, but at twice the rate that would be possible for progressive video of the same bandwidth; this resulted in smoother playback than progressive video, which is more similar to how celluloid film works. The field rate of analog television and video systems was typically 50 or 60 fields per second. The use of frame rates higher than 24 fps for feature motion pictures, and higher than 30 fps for other applications, is an emerging trend.

Interlaced video

Interlaced video is a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth. The interlaced signal contains two fields of a video frame captured at two different times. This enhances motion perception for the viewer, and reduces flicker by taking advantage of the phi phenomenon.

This effectively doubles the time resolution (also called temporal resolution) as compared to non-interlaced footage (for frame rates equal to field rates). Interlaced signals require a display that is natively capable of showing the individual fields in a sequential order. CRT displays and ALiS plasma displays are made for displaying interlaced signals.

Interlaced scan refers to one of two common methods for "painting" a video image on an electronic display screen (the other being progressive scan) by scanning or displaying each line or row of pixels. This technique uses two fields to create a frame. One field contains all odd-numbered lines in the image; the other contains all even-numbered lines.

A Phase Alternating Line (PAL)-based television set display, for example, scans 50 fields every second (25 odd and 25 even). The two sets of fields work together to create a full frame every 1/25 of a second (25 frames per second), while a new half-frame (field) appears every 1/50 of a second (50 fields per second). To display interlaced video on progressive scan displays, playback applies deinterlacing to the video signal, which adds input lag.

The European Broadcasting Union has argued against interlaced video in production and broadcasting. It recommends 720p 50 fps (frames per second) as the current production format, and is working with the industry to introduce 1080p 50 as a future-proof production standard. 1080p 50 offers higher vertical resolution, better quality at lower bitrates, and easier conversion to other formats, such as 720p 50 and 1080i 50. The main argument is that no matter how complex the deinterlacing algorithm may be, the artifacts in the interlaced signal cannot be completely eliminated, because some information is lost between frames.

Despite arguments against it, television standards organizations continue to support interlacing. It is still included in digital video transmission formats such as DV, DVB, and ATSC. New video compression standards like High Efficiency Video Coding are optimized for progressive scan video, but sometimes do support interlaced video.

MEncoder

MEncoder is a free command-line transcoding tool released under the GNU General Public License. It is a sibling of MPlayer and can convert all the formats that MPlayer understands into a variety of compressed and uncompressed formats using different codecs. MEncoder is included in the MPlayer distribution.

Movie camera

The movie camera, film camera or cine-camera is a type of photographic camera which takes a rapid sequence of photographs on an image sensor or on film. In contrast to a still camera, which captures a single snapshot at a time, the movie camera takes a series of images; each image constitutes a "frame". This is accomplished through an intermittent mechanism. The frames are later played back in a movie projector at a specific speed, called the frame rate (number of frames per second). While viewing at a particular frame rate, a person's eyes and brain merge the separate pictures to create the illusion of motion. Since the 2000s, film-based movie cameras have been largely (but not completely) replaced by digital movie cameras.

NTSC

NTSC, named after the National Television System Committee, is the analog television color system that was introduced in North America in 1954 and, until digital conversion, was used in most of the Americas (except Brazil, Argentina, Paraguay, Uruguay, and French Guiana); Myanmar; South Korea; Taiwan; the Philippines; Japan; and some Pacific island nations and territories.

The first NTSC standard was developed in 1941 and had no provision for color. In 1953 a second NTSC standard was adopted, which allowed color television broadcasting compatible with the existing stock of black-and-white receivers. NTSC was the first widely adopted broadcast color system and remained dominant until the 2000s, when it started to be replaced with different digital standards such as ATSC and others.

Most countries using the NTSC standard, as well as those using other analog television standards, have switched to, or are in the process of switching to, newer digital television standards; at least four different standards are in use around the world. North America, parts of Central America, and South Korea have adopted or are adopting the ATSC standards, while other countries (such as Japan) have adopted or are adopting other standards. After nearly 70 years, the majority of over-the-air NTSC transmissions in the United States ceased on January 1, 2010, and by August 31, 2011 in Canada and most other NTSC markets. The majority of NTSC transmissions ended in Japan on July 24, 2011, with the Japanese prefectures of Iwate, Miyagi, and Fukushima ending the next year. After a pilot program in 2013, most full-power analog stations in Mexico left the air on ten dates in 2015, with some 500 low-power and repeater stations allowed to remain in analog until the end of 2016. Digital broadcasting allows higher-resolution television, but digital standard-definition television continues to use the frame rate and number of lines of resolution established by the analog NTSC standard.

Rec. 2100

ITU-R Recommendation BT.2100, more commonly known by the abbreviations Rec. 2100 or BT.2100, defines various aspects of high dynamic range (HDR) video such as display resolution (HDTV and UHDTV), frame rate, chroma subsampling, bit depth, color space, and optical transfer function. It was posted on the International Telecommunication Union (ITU) website on July 4, 2016. Rec. 2100 expands on several aspects of Rec. 2020.

Refresh rate

The refresh rate (most commonly the "vertical refresh rate" or "vertical scan rate" for cathode ray tubes) is the number of times per second that display hardware updates its buffer. This is distinct from frame rate: the refresh rate includes the repeated drawing of identical frames, while the frame rate measures how often a video source can feed an entire frame of new data to a display.

For example, most movie projectors advance from one frame to the next 24 times each second, but each frame is illuminated two or three times, using a shutter in front of the lamp, before the next frame is projected. As a result, the movie projector runs at 24 frames per second but has a 48 or 72 Hz refresh rate.

On cathode ray tube (CRT) displays, increasing the refresh rate decreases flickering, thereby reducing eye strain. However, if a refresh rate is specified that is beyond what is recommended for the display, damage to the display can occur. For computer programs or telemetry, the term is also applied to how frequently a datum is updated with a new external value from another source (for example, a shared public spreadsheet or hardware feed).

Scrolling

In computer displays, filmmaking, television production, and other kinetic displays, scrolling is sliding text, images or video across a monitor or display, vertically or horizontally. "Scrolling", as such, does not change the layout of the text or pictures, but moves (pans or tilts) the user's view across what is apparently a larger image that is not wholly seen. A common television and movie special effect is to scroll credits, while leaving the background stationary. Scrolling may take place completely without user intervention (as in film credits) or, on an interactive device, be triggered by touchscreen or a keypress and continue without further intervention until a further user action, or be entirely controlled by input devices.

Scrolling may take place in discrete increments (perhaps one or a few lines of text at a time), or continuously (smooth scrolling). Frame rate is the speed at which an entire image is redisplayed. It is related to scrolling in that changes to text and image position can only happen as often as the image can be redisplayed. When frame rate is a limiting factor, one smooth scrolling technique is to blur images during movement that would otherwise appear to "jump".

Telecine

Telecine is the process of transferring motion picture film into video; it is performed in a color suite. The term is also used to refer to the equipment used in the post-production process.

Telecine enables a motion picture, captured originally on film stock, to be viewed with standard video equipment, such as television sets, video cassette recorders (VCR), DVD, Blu-ray Disc or computers. Initially, this allowed television broadcasters to produce programmes using film, usually 16mm stock, but transmit them in the same format, and quality, as other forms of television production. Furthermore, telecine allows film producers, television producers and film distributors working in the film industry to release their products on video and allows producers to use video production equipment to complete their filmmaking projects. Within the film industry, it is also referred to as a TK, because TC is already used to designate timecode.

Time-lapse photography

Time-lapse photography is a technique in which the frequency at which film frames are captured (the frame rate) is much lower than the frequency used to view the sequence. When played at normal speed, time appears to move faster and thus to lapse. For example, an image of a scene may be captured at 1 frame per second but played back at 30 frames per second; the result is an apparent 30-fold speed increase. In a similar manner, film can also be played back at a much lower rate than it was captured at, slowing down otherwise fast action, as in slow motion or high-speed photography.
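
The apparent speed change is just the ratio of the playback rate to the capture rate; a minimal sketch:

    def speed_factor(capture_fps, playback_fps):
        # Apparent speed-up when playback rate differs from capture rate.
        return playback_fps / capture_fps

    print(speed_factor(1, 30))    # 30.0: classic time-lapse
    print(speed_factor(300, 30))  # 0.1: high-speed capture, 10x slow motion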

Processes that would normally appear subtle and slow to the human eye, e.g. the motion of the sun and stars in the sky or the growth of a plant, become very pronounced. Time-lapse is the extreme version of the cinematography technique of undercranking. Stop motion animation is a comparable technique; a subject that does not actually move, such as a puppet, can repeatedly be moved manually by a small distance and photographed. Then the photographs can be played back as a film at a speed that shows the subject appearing to move.

This page is based on Wikipedia articles written by their contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.