In filmmaking, video production, animation, and related fields, a frame is one of the many still images which compose the complete moving picture. The term is derived from the fact that, from the beginning of modern filmmaking toward the end of the 20th century, and in many places still up to the present, the single images have been recorded on a strip of photographic film; each image on such a strip looks rather like a framed picture when examined individually.
The term may also be used more generally as a noun or verb to refer to the edges of the image as seen in a camera viewfinder or projected on a screen. Thus, the camera operator can be said to keep a car in frame by panning with it as it speeds past.
When the moving picture is displayed, each frame is flashed on a screen for a short time (nowadays, usually 1/24, 1/25 or 1/30 of a second) and then immediately replaced by the next one. Persistence of vision blends the frames together, producing the illusion of a moving image.
The frame is also sometimes used as a unit of time, so that a momentary event might be said to last six frames; the actual duration depends on the frame rate of the system, which varies according to the video or film standard in use. In North America and Japan, 30 frames per second (fps) is the broadcast standard, with 24 frames/s now common in production for high-definition video shot to look like film. In much of the rest of the world, 25 frames/s is standard.
In systems historically based on NTSC standards, for reasons originally related to the color subcarrier in NTSC color TV systems, the exact frame rate is actually (3579545 / 227.5) / 525 = 29.97002616 fps.[a] This leads to many synchronization problems which are unknown outside the NTSC world, and also brings about hacks such as drop-frame timecode.
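The arithmetic behind the exact NTSC rate can be checked directly; the constants below are the color subcarrier frequency in hertz, the subcarrier cycles per scan line, and the line count, exactly as quoted above.

```python
# NTSC frame rate derived from the color subcarrier frequency.
subcarrier_hz = 3_579_545      # color subcarrier frequency (Hz)
cycles_per_line = 227.5        # subcarrier cycles per scan line
lines_per_frame = 525          # total scan lines per frame

line_rate_hz = subcarrier_hz / cycles_per_line   # ~15734.26 lines/s
frame_rate = line_rate_hz / lines_per_frame      # ~29.97 frames/s

print(round(frame_rate, 8))    # 29.97002616
```

The result is close to, but not exactly, 30 fps, which is why long NTSC recordings drift against wall-clock time unless drop-frame timecode is used.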
In film projection, 24 fps is the norm, except in some special venue systems, such as IMAX, Showscan and Iwerks 70, where 30, 48 or even 60 frame/s have been used. Silent films and 8 mm amateur movies used 16 or 18 frame/s.
In a strip of movie film, individual frames are separated by frame lines. Normally, 24 frames are needed for one second of film. In ordinary filming, the frames are photographed automatically, one after the other, in a movie camera. In special effects or animation filming, the frames are often shot one at a time.
The size of a film frame varies, depending on the still film format or the motion picture film format. In the smallest 8 mm amateur format for motion pictures film, it is only about 4.8 by 3.5 mm, while an IMAX frame is as large as 69.6 by 48.5 mm. The larger the frame size is in relation to the size of the projection screen, the sharper the image will appear.
The size of the film frame of motion picture film also depends on the location of the holes, the size of the holes, the shape of the holes, and the location and type of sound stripe.
The most common film format, 35 mm, has a frame size of 36 by 24 mm when used in a still 35 mm camera where the film moves horizontally, but the frame size varies when used for motion pictures where the film moves vertically (with the exception of VistaVision and Technirama, where the film moves horizontally). Using a 4-perf pulldown, there are exactly 16 frames in one foot of 35 mm film, leading to film frames sometimes being counted in terms of "feet and frames". The maximum frame size is 18 by 24 mm (silent/full aperture), but this is significantly reduced by the application of sound track(s). A system called KeyKode is often used to identify specific physical film frames in a production.
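With exactly 16 frames per foot of 4-perf 35 mm film, converting a frame count to "feet and frames" is simple integer arithmetic. The helper below is a hypothetical illustration of that bookkeeping, not part of any standard tool.

```python
FRAMES_PER_FOOT = 16  # 4-perf pulldown on 35 mm film

def feet_and_frames(frame_count: int) -> str:
    """Express an absolute frame count as feet+frames."""
    feet, frames = divmod(frame_count, FRAMES_PER_FOOT)
    return f"{feet}+{frames:02d}"

# One second of 24 fps film is a foot and a half:
print(feet_and_frames(24))   # 1+08
```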
Historically, video frames were represented as analog waveforms in which varying voltages represented the intensity of light in an analog raster scan across the screen. Analog blanking intervals separated video frames in the same way that frame lines did in film. For historical reasons, most systems used an interlaced scan system in which the frame typically consisted of two video fields sampled over two slightly different periods of time. This meant that a single video frame was usually not a good still picture of the scene, unless the scene being shot was completely still.
With the dominance of digital technology, modern video systems now represent the video frame as a rectangular raster of pixels, either in an RGB color space or a color space such as YCbCr, and the analog waveform is typically found nowhere other than in legacy I/O devices.
Video frames are typically identified using SMPTE time code.
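A non-drop-frame SMPTE-style time code labels each frame as HH:MM:SS:FF. The sketch below converts an absolute frame number at an integer frame rate; it is an illustrative helper, not SMPTE's specification, and drop-frame NTSC code requires additional frame-number skipping not shown here.

```python
def to_timecode(frame: int, fps: int = 24) -> str:
    """Non-drop-frame SMPTE-style timecode for an integer frame rate."""
    ff = frame % fps                 # frame within the current second
    total_seconds = frame // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(to_timecode(1000, fps=24))   # 00:00:41:16
```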
The frame is composed of picture elements, just like a chess board. Each horizontal set of picture elements is known as a line. The picture elements in a line are transmitted as sine signals, where a pair of dots, one dark and one light, can be represented by a single sine cycle. The product of the number of lines and the maximum number of sine signals per line is known as the total resolution of the frame. The higher the resolution, the more faithful the displayed image is to the original image. But higher resolution introduces technical problems and extra cost, so a compromise must be struck in system design between satisfactory image quality and affordable price.
The key parameter determining the lowest resolution still satisfactory to viewers is the viewing distance, i.e. the distance between the eyes and the monitor. The total resolution is inversely proportional to the square of the distance: if d is the distance, r is the required minimum resolution and k is the proportionality constant which depends on the size of the monitor, then r = k / d².
Since the number of lines is approximately proportional to the resolution per line, the total resolution is proportional to the square of the number of lines, and the above relation can also be written as n = √k / d, where n is the number of lines. That means that the required number of lines is proportional to the height of the monitor (through the constant k) and inversely proportional to the viewing distance.
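The stated proportionalities can be sketched numerically: halving the viewing distance quadruples the required total resolution but only doubles the required number of lines. The constant k here is arbitrary and purely illustrative.

```python
k = 1.0e6  # illustrative proportionality constant (depends on monitor size)

def required_resolution(d: float) -> float:
    return k / d**2          # total resolution ~ 1/d^2

def required_lines(d: float) -> float:
    return k**0.5 / d        # lines ~ sqrt(resolution) ~ 1/d

print(required_resolution(1.0) / required_resolution(2.0))  # 4.0
print(required_lines(1.0) / required_lines(2.0))            # 2.0
```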
In moving pictures (TV) the number of frames scanned per second is known as the frame rate. The higher the frame rate, the better the sense of motion. But again, increasing the frame rate introduces technical difficulties, so the frame rate is fixed at 25 (System B/G) or 29.97 (System M). To improve the sense of motion it is customary to scan the very same frame in two consecutive phases. In each phase only half of the lines are scanned: only the odd-numbered lines in the first phase and only the even-numbered lines in the second. Each scan is known as a field, so the field rate is twice the frame rate.
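Interlacing can be sketched as simple line slicing: the first field takes the odd-numbered lines, the second the even-numbered ones (numbering lines from 1, as the text does). The toy 10-line frame below is purely illustrative.

```python
frame_lines = [f"line {n}" for n in range(1, 11)]  # a toy 10-line frame

field_1 = frame_lines[0::2]  # lines 1, 3, 5, ... (odd-numbered)
field_2 = frame_lines[1::2]  # lines 2, 4, 6, ... (even-numbered)

# Two fields per frame: a 25 Hz frame rate gives a 50 Hz field rate.
assert len(field_1) + len(field_2) == len(frame_lines)
print(field_1[:3])   # ['line 1', 'line 3', 'line 5']
```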
In System B the number of lines is 625 and the frame rate is 25. The maximum video bandwidth is 5 MHz. Since each sine cycle conveys one dark–light dot pair, the maximum number of sine signals the system is theoretically capable of transmitting per second equals its bandwidth in hertz:
The system is able to transmit 5 000 000 sine signals in a second. Since the frame rate is 25, the maximum number of sine signals per frame is 200 000. Dividing this number by the number of lines gives the maximum number of sine signals in a line which is 320. (Actually about 19% of each line is devoted to auxiliary services. So the number of maximum useful sine signals is about 260.)
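The chain of divisions in the System B figures, including the roughly 19% of each line lost to blanking and auxiliary services, works out as follows:

```python
bandwidth_hz = 5_000_000    # maximum video bandwidth, System B
frame_rate = 25             # frames per second
lines_per_frame = 625

sines_per_frame = bandwidth_hz // frame_rate          # 200_000
sines_per_line = sines_per_frame // lines_per_frame   # 320
useful_per_line = round(sines_per_line * (1 - 0.19))  # 259, i.e. about 260

print(sines_per_frame, sines_per_line, useful_per_line)   # 200000 320 259
```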
A still frame is a single static image taken from a film or video, which are kinetic (moving) images. Still frames are also called freeze frames, video prompts, previews or, misleadingly, thumbnails, keyframes, poster frames, or screen shots/grabs/captures/dumps. Freeze frames are widely used on video platforms and in video galleries to show viewers a preview or a teaser. Many video platforms display by default a frame from the midpoint of the video; some offer the option to choose a different frame individually.
In criminal investigations it has become common practice to publish still frames from surveillance videos in order to identify suspects and to find more witnesses. Videos of the 9/11 attacks have often been discussed frame by frame for various interpretations. In medical diagnostics it is very useful to examine still frames of magnetic resonance imaging videos.
Some humor in animation is based on the fourth wall aspect of the film frame itself, with some animation showing characters leaving what is assumed to be the edge of the film, or the film malfunctioning. The latter gag is often used in live-action films as well. This hearkens back to some early cartoons, in which characters were aware that they were in a cartoon – specifically, that they could look at the credits and be aware of something that isn't part of the story as presented.
120 is a popular film format for still photography introduced by Kodak for their Brownie No. 2 in 1901. It was originally intended for amateur photography but was later superseded in this role by 135 film. 120 film and its close relative, 220 film, survive to this day as the only medium format films that are readily available to both professionals and amateur enthusiasts. As of December 2018, all production of 220 film has stopped or paused worldwide. The only remaining stocks are from the last Fujifilm production run (2018), and they are mostly found in Japan.

135 film
135 is photographic film in a film format used for still photography. It is a cartridge film with a film gauge of 35 mm (1.4 in), typically used for hand-held photography in 35 mm film cameras. Its engineering standard is controlled by ISO 1007. The term 135 (ISO 1007) was introduced by Kodak in 1934 as a designation for the cassette for 35 mm film, specifically for still photography. It quickly grew in popularity, surpassing 120 film by the late 1960s to become the most popular photographic film size. Despite competition from formats such as 828, 126, 110, and APS, it remains so today.
135 camera film always comes perforated with Kodak Standard perforations.
The size of the 135 film frame has been adopted by many high-end digital single-lens reflex and digital mirrorless cameras, commonly referred to as "full frame". Even though the format is much smaller than historical medium format and large format film, it is much larger than the image sensors in most compact cameras and smart phone cameras.

Contax N Digital
The Contax N Digital was a six-megapixel digital SLR camera produced by Contax in Japan. The camera was announced in late 2000, and began to be sold in spring 2002, after several delays. The camera received mixed reviews from the press, and was withdrawn from the market within a year of its introduction.
It was noteworthy for being the first full-frame digital SLR, with an imaging chip the full size of a 135 film frame. All previous digital SLRs had a smaller sensor, giving a cropped view (see magnification factor). The imaging sensor was a Philips FTF3020-C, which had previously been used in the Jenoptik Eyelike medium format digital back. Pentax also planned to use the sensor in a full-frame digital SLR, the Pentax MZ-D, but abandoned work on the prototype in late 2001. The sensor featured ISO settings as low as ISO 25, but the reviews noted that it had a relatively high noise level above ISO 100. The next full-frame digital SLRs were the Canon EOS-1Ds of late 2002, followed by Kodak's DCS Pro 14n in 2003. Nikon and Sony introduced full-frame models in 2007 and 2008 respectively.
The N Digital was based on the short-lived Contax N range of 35mm film SLRs, and used the Contax N-mount lens system. Nine lenses, made by Carl Zeiss, were produced for this mount. There were three Contax N-mount cameras – two 35mm film SLR bodies, plus the N Digital – all of which are now discontinued. Contax's parent company Kyocera withdrew from the digital imaging market in 2005.

Depth-of-field adapter
A depth-of-field adapter (often shortened to DOF adapter) is used to achieve shallow depth of field on a video camera whose fixed lens or interchangeable lens selection is limited or economically prohibitive for achieving such an effect. A DOF adapter could theoretically be used on a multitude of platforms, although it is most useful on prosumer digital camcorders where high resolution is a capability but the sensor size is still small enough to elicit use of the adapter. The term 35mm adapter is common, since most designs use a focusing screen the size of a 35mm film frame (24×36 mm) and interface with lenses designed for 35mm cameras. The use of adapters has decreased largely due to the video function available on newer DSLR cameras.

Digital cinematography
Digital cinematography is the process of capturing (recording) a motion picture using digital image sensors rather than through film stock. As digital technology has improved in recent years, this practice has become dominant. Since the mid-2010s, most movies across the world have been captured as well as distributed digitally. Many vendors have brought products to market, including traditional film camera vendors like Arri and Panavision, as well as new vendors like RED, Blackmagic, Silicon Imaging, Vision Research and companies which have traditionally focused on consumer and broadcast video equipment, like Sony, GoPro, and Panasonic.
As of 2017, professional 4K digital film cameras are approximately equal to 35mm film in their resolution and dynamic range capacity; however, digital video still has a slightly different look from analog film. Some filmmakers still prefer to use analogue picture formats to achieve the desired results.

Go motion
Go motion is a variation of stop motion animation which incorporates motion blur into each frame involving motion. It was co-developed by Industrial Light & Magic and Phil Tippett. Stop motion animation can create a disorienting and distinctive staccato effect, because the animated object is perfectly sharp in every frame, since each frame of the animation was actually shot when the object was perfectly still. Real moving objects in similar scenes of the same movie will have motion blur, because they moved while the shutter of the camera was open. Filmmakers use a variety of techniques to simulate motion blur, such as moving the model slightly during the exposure of each film frame or using a glass plate smeared with petroleum jelly in front of the camera lens to blur the moving areas.

Image sensor format
In digital photography, the image sensor format is the shape and size of the image sensor.
The image sensor format of a digital camera determines the angle of view of a particular lens when used with a particular sensor. Because the image sensors in many digital cameras are smaller than the 24 mm × 36 mm image area of full-frame 35-mm cameras, a lens of a given focal length gives a narrower field of view in such cameras.
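This narrowing of the field of view is usually quantified as a crop factor, the ratio of the sensor diagonals. The APS-C dimensions below (about 23.6 mm × 15.7 mm) are one typical example rather than a single fixed standard, so the resulting factor is approximate.

```python
from math import hypot

full_frame_diag = hypot(36.0, 24.0)   # ~43.27 mm (24 x 36 mm frame)
aps_c_diag = hypot(23.6, 15.7)        # ~28.35 mm (typical APS-C sensor)

crop_factor = full_frame_diag / aps_c_diag
print(round(crop_factor, 2))          # 1.53

# A 50 mm lens on this sensor frames like a ~76 mm lens on full frame:
print(round(50 * crop_factor))        # 76
```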
Sensor size is often expressed as optical format in inches, although other measures are also used.
Lenses produced for 35-mm film cameras may mount well on the digital bodies, but the larger image circle of the 35-mm system lens allows unwanted light into the camera body, and the smaller size of the image sensor compared to 35-mm film format results in cropping of the image. This latter effect is known as field of view crop. The format size ratio (relative to the 35-mm film format) is known as the field of view crop factor, crop factor, lens factor, focal length conversion factor, focal length multiplier or lens multiplier.

Kodak DCS 100
The Kodak Professional Digital Camera System or DCS, later unofficially named DCS 100, was the first commercially available digital single-lens reflex (DSLR) camera. It was a customized camera back bearing the digital image sensor, mounted on a Nikon F3 body and released by Kodak in May 1991; the company had previously shown the camera at photokina in 1990. Aimed at the photojournalism market in order to improve the speed with which photographs could be transmitted back to the studio or newsroom, the DCS had a resolution of 1.3 megapixels. The DCS 100 was publicly presented for the first time in Arles (France), at the Journées de l'Image Pro, by Ray H. DeMoulin, the worldwide president of the Eastman Kodak Company. 453 international journalists attended this presentation, which took place in the Palais des Congrès of Arles.
The predecessor to the commercial Digital Camera System (DCS) was prototyped in the spring of 1987 at Kodak Research Labs. A 1.3-megapixel imager had been produced by Kodak's Microelectronics Technology Division, and the logical next step was to build a high-resolution digital imaging system around it. The DCS prototype was developed for trials by the Associated Press. Kodak researchers chose the Nikon F3HP SLR because it was the most widely used professional camera at the time.
A number of key problems had to be solved:
How to accurately position the image sensor in the camera's film plane?
How to synchronize the camera's mechanical shutter exposure period with the image sensor's electronic integration period?
Which lenses would provide sharp images without aliasing artifacts?
What feedback could be provided to help the photographer get the right exposure, given that many scenes had higher dynamic range than the imager?
How were digital images to be stored?
Where was sufficient power to be sourced?
The F3HP had motor drive contacts that provided signals sufficient for electronic synchronization. A set of candidate lenses underwent MTF testing, and the best-matched lenses were selected. The batteries and a hard drive were integrated into a tethered remote unit worn on the shoulder while the photographer worked. The A/D converter output was processed to generate an exposure histogram for the photographer. Finally, since the 1.3 MP imager was smaller than the full 35mm film frame, colored templates were added to the viewfinder to indicate the area the imager would capture.
The prototype system was tested extensively in 1987 and 1988 by AP photographers and in studies comparing its performance to film systems. There was enough enthusiasm for the system to undertake a commercial version. An early version was shown at photokina in 1990 and the product was launched in May 1991.
The DCS 100 retained many of the characteristics of the prototype, including a separate shoulder carried Digital Storage Unit (DSU) to store and to visualize the images, and to house the batteries. The DSU contained a 200 megabyte hard disk drive that could store up to 156 images without compression, or up to 600 images using a JPEG compatible compression board that was offered later as an optional extra. An external keyboard allowed entry of captions and other image information.
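The storage figures are mutually consistent, as a quick per-image budget shows. This sketch assumes the decimal reading of the 200 MB disk quoted above and roughly one byte per pixel from the 1.3-megapixel imager; both are assumptions, not documented specifications.

```python
disk_bytes = 200 * 10**6    # 200 MB DSU hard disk (decimal-megabyte assumption)
images = 156                # uncompressed image capacity

bytes_per_image = disk_bytes / images
print(round(bytes_per_image))              # 1282051 bytes per image

# Close to one byte per pixel for a 1.3-megapixel sensor:
print(round(bytes_per_image / 1.3e6, 2))   # 0.99
```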
The Kodak Professional Digital Camera System was available with two different digital format backs. The DC3 color back used a custom color filter array layout. The DM3 monochrome back had no color filter array. A few DM3 backs were manufactured without IR filters.
Internally, it had a 3.5" SCSI hard drive and connected to a computer via an external SCSI interface. It appeared as a non-disk SCSI device and could be accessed by a TWAIN-based plugin for Photoshop 3.
There were many variants of the DCS 100: models with different buffers, monochrome and color versions, and transmission versions with keyboard and modem.
The system was marketed at a retail price of $20,000. A total of 987 units were sold.

Kodak DCS Pro 14n
The Kodak Professional DCS Pro 14n is a professional Nikon F80 based F-mount digital SLR produced by Eastman Kodak. It was announced at the photographic trade show photokina in Germany during September 2002; production examples became available in May 2003.
Featuring a 13.89-megapixel (4560 × 3048 pixels total) full-frame 24 × 36 mm CMOS sensor, the DCS Pro 14n was the third full-frame digital SLR to reach the market, after the unsuccessful and short-lived Contax N Digital and the successful Canon EOS-1Ds. All previous digital SLRs had sensors smaller than a film frame and thus had a crop factor larger than 1.0, making a wide-angle field of view difficult to achieve.
In September 2003 Kodak announced the availability of a memory upgrade from 256 MB to 512 MB to DCS Pro 14n owners. The 512 MB version of the camera is often unofficially referred to as Kodak Professional DCS Pro 14n 512. A monochrome variant, known unofficially as Kodak Professional DCS Pro 14n m and based on the same CMOS image sensor, existed as well.
The DCS Pro 14n was replaced by the Kodak Professional DCS Pro SLR/n, released in 2004, which was a similar but improved model. In particular, the new camera featured an improved image sensor and better power management, and it came with 512 MiB of buffer memory pre-installed. For around US$1,800, existing owners of the DCS Pro 14n could order a camera upgrade from Kodak, comprising the new image sensor and memory upgrade. These upgraded cameras were officially referred to as Kodak Professional DCS Pro 14nx by Kodak. Except for the power management and name plate, they were basically the same as the DCS Pro SLR/n.
Medium format (film)
Medium format has traditionally referred to a film format in still photography and the related cameras and equipment that use film. Nowadays, the term applies to film and digital cameras that record images on media larger than 24 mm × 36 mm (0.94 in × 1.42 in) (full-frame) (used in 35 mm (1.4 in) photography), but smaller than 4 in × 5 in (100 mm × 130 mm) (which is considered to be large-format photography).
In digital photography, medium format refers either to cameras adapted from medium-format film photography uses or to cameras making use of sensors larger than that of a 35 mm film frame. Often, medium-format film cameras can be retrofitted with digital camera backs, converting them to digital cameras, but some of these digital backs, especially early models, use sensors smaller than a 35 mm film frame. As of 2013, medium-format digital photography sensors were available in sizes of up to 40.3 by 53.7 mm, with 60 million pixels for use with commonly available professional medium-format cameras. Sensors used in special applications such as spy satellites can be even larger but are not necessarily described as medium-format equipment.

In the film world, medium format has moved from being the most widely used film size (the 1900s through 1950s) to a niche used by professionals and some amateur enthusiasts, but one which is still substantially more popular than large format. In digital photography, medium format has been a very expensive option, with lower-cost options such as the Fujifilm GFX 50R still retailing for $4,500. While at one time a variety of medium-format film sizes were produced, today the vast majority of medium-format film is produced in the 6 cm 120 and 220 sizes. Other sizes are mainly produced for use in antique cameras, and many people assume 120/220 film is being referred to when the term medium format is used.
The general rule with consumer cameras—as opposed to specialized industrial, scientific, and military equipment—is the more cameras sold, the more sophisticated the automation features available. Medium-format cameras made since the 1950s are generally less automated than smaller cameras made at the same time, having high image quality as their primary advantage. For example, autofocus became available in consumer 35 mm cameras in 1977, but did not reach medium format until the late 1990s, and has never been available in a consumer large format camera.

Open matte
Open matte is a filming technique that involves matting out the top and bottom of the film frame in the movie projector (known as a soft matte) for the widescreen theatrical release and then scanning the film without a matte (at Academy ratio) for a full screen home video release.
Usually, non-anamorphic 4-perf films are filmed directly on the entire full frame silent aperture gate (1.33:1). When a married print is created, this frame is slightly re-cropped by the frame line and optical soundtrack down to Academy ratio (1.37:1). The movie projector then uses an aperture mask to soft matte the Academy frame to the intended aspect ratio (1.85:1 or 1.66:1). When the 4:3 full-screen video master is created, many filmmakers may prefer to use the full Academy frame ("open matte") instead of creating a pan and scan version from within the 1.85 framing. Because the framing is increased vertically in the open matte process, the decision to use it needs to be made prior to shooting, so that the camera operator can frame for 1.85:1 and "protect" for 4:3; otherwise unintended objects such as boom microphones, cables, and light stands may appear in the open matte frame, thus requiring some amount of pan and scan in some or all scenes. Additionally, the un-matted 4:3 version will often throw off an otherwise tightly-framed shot and add an inordinate amount of headroom above actors (particularly with 1.85:1).
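The vertical cropping involved can be computed from the aspect ratios alone: matting a 1.37:1 Academy frame to 1.85:1 keeps the full width but only a fraction of the height.

```python
academy = 1.37      # Academy ratio (width / height)
widescreen = 1.85   # intended theatrical ratio

# With the width fixed, visible height scales as 1/aspect-ratio:
height_fraction = academy / widescreen
print(round(height_fraction, 2))   # 0.74, i.e. about 26% of the height is matted off
```

This is the extra picture area that reappears, above and below the theatrical framing, in an open matte home video transfer.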
Open matte is less common for films presented in 2.20:1 or 2.39:1; those usually employ pan and scan instead. Many films over the years have used this technique, the most prominent of which include Schindler's List, Titanic, and Top Gun. Stanley Kubrick also used this technique for his last five films (A Clockwork Orange (1971), Barry Lyndon (1975), The Shining (1980), Full Metal Jacket (1987) and Eyes Wide Shut (1999)). James Cameron's Terminator 2: Judgment Day (1991) is also an example of open matte.

Panoramic photography
Panoramic photography is a technique of photography, using specialized equipment or software, that captures images with horizontally elongated fields of view. It is sometimes known as wide format photography. The term has also been applied to a photograph that is cropped to a relatively wide aspect ratio, like the familiar letterbox format in wide-screen video.
While there is no formal division between "wide-angle" and "panoramic" photography, "wide-angle" normally refers to a type of lens, but using this lens type does not necessarily make an image a panorama. An image made with an ultra wide-angle fisheye lens covering the normal film frame of 1.33:1 is not automatically considered to be a panorama. An image showing a field of view approximating, or greater than, that of the human eye – about 160° by 75° – may be termed panoramic. This generally means it has an aspect ratio of 2:1 or larger, the image being at least twice as wide as it is high. The resulting images take the form of a wide strip. Some panoramic images have aspect ratios of 4:1 and sometimes 10:1, covering fields of view of up to 360 degrees. Both the aspect ratio and coverage of field are important factors in defining a true panoramic image.
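The 2:1 rule of thumb is easy to state as a predicate; the function name below is purely illustrative.

```python
def is_panoramic(width: float, height: float) -> bool:
    """True if the image is at least twice as wide as it is high."""
    return width / height >= 2.0

print(is_panoramic(36, 24))    # False: the standard 3:2 film frame
print(is_panoramic(100, 25))   # True: a 4:1 strip
```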
Photo-finishers and manufacturers of Advanced Photo System (APS) cameras use the word "panoramic" to define any print format with a wide aspect ratio, not necessarily photos that encompass a large field of view.

Pathécolor
Pathécolor, later renamed Pathéchrome, was an early mechanical stencil-based film tinting process for movies developed by Segundo de Chomón for Pathé in the early 20th century. One of the last feature films to use the process was the British revue film Elstree Calling (1930).
The Pathécolor stencil process should not be confused with the later Pathécolor, Pathé Color and Color by Pathé trade names seen in screen credits and advertising materials. Like Metrocolor, WarnerColor and Color by DeLuxe, these were simply rebrandings, for advertising purposes, of the use of Eastman Kodak's Eastmancolor color negative film for the original photography. The stencil process was not a color photography process and did not use color film. Like computer-based film colorization processes, it was a way of arbitrarily adding selected colors to films originally photographed and printed in black-and-white.
Each frame of an extra print of the black-and-white film to be colored was rear-projected onto a sheet of frosted glass, as in rotoscoping. An operator used a blunt stylus to trace the outlines of areas of the projected image that were to be tinted one particular color. The stylus was connected to a reducing pantograph that caused a sharp blade to cut corresponding outlines through the actual film frame, creating the stencil for that color in that frame. This had to be done for each individual frame, and as many different stencil films had to be made as there were different colors to be added. Each of the final projection prints was matched up with one of the stencil films and run through a machine that applied the corresponding dye through the stencil. This operation was repeated using each of the different stencils and dyes in turn.

Pentax MZ-D
The Pentax MZ-D, also known by its internal code name of MR-52, was a prototype digital single-lens reflex camera from Pentax of Japan. It was announced at photokina in September 2000 and was demonstrated to the press at the Photo Marketing Association (PMA) show in January 2001. In October 2001, Pentax cancelled the camera, stating "The cost of manufacturing the prototype SLR 6-megapixel digital camera meant it was not a viable product for our target market." The MZ-D was derived from the top-of-the-line Pentax film camera of the time, the MZ-S. To give space for the extra circuitry and battery power required for a digital camera, the MZ-D incorporated the size and shape of the MZ-S's optional battery booster and vertical grip. It also shared the Pentax KAF2 lens mount.
The MZ-D was to use the same 6-megapixel (3072×2048) full 135 film frame sized (24×36 mm) CCD (model FTF3020-C) from Philips that was used by the Contax N Digital. The lack of success of that camera and the image quality problems it displayed suggest to some that Pentax may have had other reasons than cost to cancel the MZ-D project. Michael Reichmann of The Luminous Landscape stated, "For whatever their reasons Pentax decided that they couldn't build a camera with this chip, while Contax decided to forge ahead. To save face? Possibly. We'll likely never know for sure." In 2003, Pentax released a different design, the 6 MP APS-C sensor size DSLR Pentax *istD.

Slow motion
Slow motion (commonly abbreviated as slo-mo or slow-mo) is an effect in film-making whereby time appears to be slowed down. It was invented by the Austrian priest August Musger in the early 20th century.
Typically this style is achieved when each film frame is captured at a rate much faster than it will be played back. When replayed at normal speed, time appears to be moving more slowly. A term for creating slow motion film is overcranking, which refers to hand cranking an early camera at a faster rate than normal (i.e. faster than 24 frames per second). Slow motion can also be achieved by playing normally recorded footage at a slower speed. This technique is more often applied to video subjected to instant replay than to film. A third technique, now common with computer software post-processing (with programs like Twixtor), is to fabricate digitally interpolated frames to smoothly transition between the frames that were actually shot. Motion can be slowed further by combining techniques, interpolating between overcranked frames. The traditional method for achieving super-slow motion is high-speed photography, a more sophisticated technique that uses specialized equipment to record fast phenomena, usually for scientific applications.
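The overcranking relationship is just a ratio of frame rates: footage captured at 120 fps and played back at 24 fps runs five times slower. The helper below is a hypothetical illustration of that arithmetic.

```python
def slowdown_factor(capture_fps: float, playback_fps: float) -> float:
    """How many times longer the action appears when replayed."""
    return capture_fps / playback_fps

print(slowdown_factor(120, 24))       # 5.0

# A 2-second event captured at 120 fps plays back over 10 seconds:
print(2 * slowdown_factor(120, 24))   # 10.0
```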
Slow motion is ubiquitous in modern filmmaking. It is used by a diverse range of directors to achieve diverse effects. Some classic subjects of slow-motion include:
Athletic activities of all kinds, to demonstrate skill and style.
To recapture a key moment in an athletic game, typically shown as a replay.
Natural phenomena, such as a drop of water hitting a glass.

Slow motion can also be used for artistic effect, to create a romantic or suspenseful aura or to stress a moment in time. Vsevolod Pudovkin, for instance, used slow motion in a suicide scene in The Deserter, in which a man jumping into a river seems sucked down by the slowly splashing waves. Another example is Face/Off, in which John Woo used the same technique in the movements of a flock of flying pigeons. The Matrix notably applied the effect to action scenes through the use of multiple cameras, as well as mixing slow motion with live action in other scenes. Japanese director Akira Kurosawa was a pioneer of this technique in his 1954 movie Seven Samurai. American director Sam Peckinpah was another classic lover of slow motion. The technique is especially associated with explosion effect shots and underwater footage.

The opposite of slow motion is fast motion. Cinematographers refer to fast motion as undercranking, since it was originally achieved by cranking a hand-cranked camera slower than normal. It is often used for comic or occasional stylistic effect. Extreme fast motion is known as time-lapse photography: a frame of, say, a growing plant is taken every few hours; when the frames are played back at normal speed, the plant is seen to grow before the viewer's eyes.
The concept of slow motion may have existed before the invention of the motion picture: the Japanese theatrical form Noh employs very slow movements.

Society of Motion Picture and Television Engineers
The Society of Motion Picture and Television Engineers (SMPTE), founded in 1916 as the Society of Motion Picture Engineers (SMPE), is a global professional association of engineers, technologists, and executives working in the media and entertainment industry. An internationally recognized standards organization, SMPTE has published more than 800 Standards, Recommended Practices, and Engineering Guidelines for broadcast, filmmaking, digital cinema, audio recording, information technology (IT), and medical imaging. In addition to developing and publishing technical standards documents, SMPTE publishes the SMPTE Motion Imaging Journal, provides networking opportunities for its members, produces academic conferences and exhibitions, and performs other industry-related functions.
SMPTE Membership is open to any individual or organization with interest in the subject matter.
SMPTE standards documents are copyrighted and may be purchased from the SMPTE website, or other distributors of technical standards. Standards documents may be purchased by the general public. Significant standards promulgated by SMPTE include:
All film and television transmission formats and media, including digital.
Physical interfaces for transmission of television signals and related data (such as SMPTE timecode and the Serial Digital Interface (SDI))
SMPTE color bars
Test card patterns and other diagnostic tools
The Material eXchange Format, or MXF
SMPTE ST 2110

SMPTE's educational and professional development activities include technical presentations at regular meetings of its local Sections, annual and biennial conferences in the US and Australia, and the SMPTE Motion Imaging Journal. The society sponsors many awards, the oldest of which are the SMPTE Progress Medal, the Samuel Warner Memorial Medal, and the David Sarnoff Medal. SMPTE also has a number of Student Chapters and sponsors scholarships for college students in the motion imaging disciplines.
SMPTE is a 501(c)(3) non-profit charitable organization.
Related organizations include:
Advanced Television Systems Committee (ATSC)
Moving Picture Experts Group (MPEG)
Joint Photographic Experts Group (JPEG)
ITU Radiocommunication Sector (formerly known as the CCIR)
ITU Telecommunication Sector (formerly known as the CCITT)
Digital Video Broadcasting
BBC Research Department
European Broadcasting Union (EBU)

Stereo camera
A stereo camera is a type of camera with two or more lenses, with a separate image sensor or film frame for each lens. This allows the camera to simulate human binocular vision, and therefore gives it the ability to capture three-dimensional images, a process known as stereo photography. Stereo cameras may be used for making stereoviews and 3D pictures for movies, or for range imaging. The distance between the lenses in a typical stereo camera (the intra-axial distance) is about the same as the distance between a person's eyes (the intra-ocular distance), roughly 6.35 cm, though a longer baseline (greater inter-camera distance) produces more extreme three-dimensionality.
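Range imaging with a stereo pair rests on the pinhole triangulation relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the lenses, and d the disparity (pixel offset of a feature between the two views). A minimal sketch, with illustrative numbers that are assumptions rather than values from any real camera:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Pinhole stereo model: depth Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: lens separation in metres;
    disparity_px: horizontal pixel offset of the same point in both images."""
    return focal_px * baseline_m / disparity_px

# 800 px focal length, the classic 6.35 cm baseline, 10 px disparity:
print(depth_from_disparity(800, 0.0635, 10))  # ≈ 5.08 metres
```

Note how a longer baseline increases disparity for the same depth, which is why a wider inter-camera distance yields the "more extreme" depth effect described above.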
In the 1950s, stereo cameras gained some popularity with the Stereo Realist and similar cameras that employed 135 film to make stereo slides.
3D pictures following the theory behind stereo cameras can also be made more inexpensively by taking two pictures with the same camera, moving it a few inches to the left or right between shots. If the pair is presented so that each eye sees a different image, the result will appear to be 3D. This method has problems with objects that move between the two exposures, though it works well with still-life subjects.
Stereo cameras are sometimes mounted in cars to detect the lane's width and the proximity of an object on the road.
Not all two-lens cameras are used for taking stereoscopic photos. A twin-lens reflex camera uses one lens to project the image onto a focusing/composition screen and the other to capture the image on film. These are usually in a vertical configuration. Examples include the vintage Rolleiflex and modern twin-lens models such as the Mamiya C330.

Telecine
Telecine is the process of transferring motion picture film into video and is performed in a color suite. The term is also used to refer to the equipment used in the post-production process.
Telecine enables a motion picture, captured originally on film stock, to be viewed with standard video equipment, such as television sets, video cassette recorders (VCRs), DVD and Blu-ray Disc players, or computers. Initially, this allowed television broadcasters to produce programmes using film, usually 16 mm stock, but transmit them in the same format and quality as other forms of television production. Furthermore, telecine allows film producers, television producers and film distributors working in the film industry to release their products on video, and allows producers to use video production equipment to complete their filmmaking projects. Within the film industry, it is also referred to as a TK, because TC is already used to designate timecode.

Three-two pull down
Three-two pull down (3:2 pull down) is a term used in filmmaking and television production for the post-production process of transferring film to video.
It converts 24 frames per second into 29.97 frames per second. Roughly speaking, it converts every four film frames into five video frames, combined with a slight slowdown in speed. Film runs at a standard rate of 24 frames per second, whereas NTSC video has a signal frame rate of 29.97 frames per second. Every interlaced video frame consists of two fields. In three-two pull down, the telecine adds a third video field (a half frame) to every second video frame, though the untrained eye cannot see the addition of this extra field. The film frames A–D are true or original images, since each was photographed as a complete frame. In the resulting NTSC footage, the A, B, and D video frames are original frames, while the third and fourth frames are created by blending fields from different film frames.
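The cadence described above can be sketched directly: each film frame alternately contributes two or three fields, and the fields are then paired into interlaced video frames. This is an illustrative model of the field pattern only, not a telecine implementation:

```python
def three_two_pulldown(frames):
    """Expand film frames into NTSC video frames using the 2:3 cadence:
    each film frame alternately yields 2 or 3 fields, and fields are
    then grouped in pairs to form interlaced video frames."""
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    # Pair consecutive fields into interlaced video frames.
    return [tuple(fields[j:j + 2]) for j in range(0, len(fields), 2)]

print(three_two_pulldown(["A", "B", "C", "D"]))
# [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
```

Four film frames become five video frames, as the text states; the third ('B', 'C') and fourth ('C', 'D') video frames mix fields from two different film frames, which is why only the A, B, and D video frames are original images.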