Autostereoscopy is any method of displaying stereoscopic images (adding binocular perception of 3D depth) without requiring the viewer to wear special headgear or glasses. Because headgear is not required, it is also called "glasses-free 3D" or "glassesless 3D". Two broad approaches are currently used to accommodate motion parallax and wider viewing angles: eye tracking, and displaying multiple views so that the display does not need to sense where the viewers' eyes are located.
Many organizations have developed autostereoscopic 3D displays, ranging from experimental displays in university departments to commercial products, and using a range of different technologies. The method of creating autostereoscopic flat panel video displays using lenses was mainly developed in 1985 by Reinhard Boerner at the Heinrich Hertz Institute (HHI) in Berlin. Prototypes of single-viewer displays were already being presented in the 1990s, by Sega AM3 (Floating Image System) and the HHI. Nowadays, this technology has been developed further mainly by European and Japanese companies. One of the best-known 3D displays developed by HHI was the Free2C, a display with very high resolution and very good comfort achieved by an eye tracking system and a seamless mechanical adjustment of the lenses. Eye tracking has been used in a variety of systems in order to limit the number of displayed views to just two, or to enlarge the stereoscopic sweet spot. However, as this limits the display to a single viewer, it is not favored for consumer products.
Currently, most flat-panel displays employ lenticular lenses or parallax barriers that redirect imagery to several viewing regions; however, this comes at the cost of reduced image resolution. When the viewer's head is in a suitable position, a different image is seen by each eye, giving a convincing illusion of 3D. Such displays can have multiple viewing zones, thereby allowing multiple users to view the image at the same time, though they may also exhibit dead zones where only a non-stereoscopic or pseudoscopic image can be seen, if at all.
A parallax barrier is a device placed in front of an image source, such as a liquid crystal display, to allow it to show a stereoscopic image or multiscopic image without the need for the viewer to wear 3D glasses. The principle of the parallax barrier was independently invented by Auguste Berthier, who published first but produced no practical results, and by Frederic E. Ives, who made and exhibited the first known functional autostereoscopic image in 1901. About two years later, Ives began selling specimen images as novelties, the first known commercial use.
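The barrier geometry follows from similar triangles: light from two adjacent pixel columns passing through one slit must diverge to the viewer's two eyes at the design distance. A minimal sketch of this two-view design calculation follows; the pixel pitch, eye separation, and viewing distance are illustrative assumptions, not figures from any real display.

```python
# Simplified parallax-barrier geometry (thin slits, small angles).
# All numbers are illustrative assumptions, not from any real product.

def barrier_design(pixel_pitch_mm, eye_sep_mm=65.0, view_dist_mm=500.0):
    """Return (barrier gap, slit pitch) for a two-view parallax barrier.

    Similar triangles: two adjacent pixel columns (one per eye,
    separation p) seen through one slit must map onto the two eyes
    (separation e) at the design viewing distance d, so
    p / gap = e / (d - gap).
    """
    p, e, d = pixel_pitch_mm, eye_sep_mm, view_dist_mm
    gap = p * d / (e + p)                 # barrier-to-pixel distance
    slit_pitch = 2 * p * (d - gap) / d    # slightly under two pixel pitches
    return gap, slit_pitch

gap, pitch = barrier_design(pixel_pitch_mm=0.1)
```

Note that the slit pitch comes out slightly less than twice the pixel pitch; this small deficit is what makes the viewing zones for all pixel columns converge at the same head position.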
In the early 2000s, Sharp developed the electronic flat-panel application of this old technology to commercialization, briefly selling two laptops with the world's only 3D LCD screens. These displays are no longer available from Sharp but are still being manufactured and further developed by other companies. Similarly, Hitachi released the first 3D mobile phone for the Japanese market, distributed by KDDI. In 2009, Fujifilm released the FinePix Real 3D W1 digital camera, which features a built-in autostereoscopic LCD measuring 2.8 in (71 mm) diagonal. The Nintendo 3DS video game console family uses a parallax barrier for 3D imagery; on a newer revision, the New Nintendo 3DS, this is combined with an eye tracking system.
The principle of integral photography, which uses a two-dimensional (X-Y) array of many small lenses to capture a 3-D scene, was introduced by Gabriel Lippmann in 1908. Integral photography is capable of creating window-like autostereoscopic displays that reproduce objects and scenes life-size, with full parallax and perspective shift and even the depth cue of accommodation, but the full realization of this potential requires a very large number of very small high-quality optical systems and very high bandwidth. Only relatively crude photographic and video implementations have yet been produced.
One-dimensional arrays of cylindrical lenses were patented by Walter Hess in 1912. By replacing the line and space pairs in a simple parallax barrier with tiny cylindrical lenses, Hess avoided the light loss that dimmed images viewed by transmitted light and that made prints on paper unacceptably dark. An additional benefit is that the position of the observer is less restricted, as the substitution of lenses is geometrically equivalent to narrowing the spaces in a line-and-space barrier.
Philips solved a significant problem with electronic displays in the mid-1990s by slanting the cylindrical lenses with respect to the underlying pixel grid. Based on this idea, Philips produced its WOWvx line until 2009, running up to 2160p (a resolution of 3840×2160 pixels) with 46 viewing angles. Lenny Lipton's company, StereoGraphics, produced displays based on the same idea, citing a much earlier patent for the slanted lenticulars. Other companies active in this area include Magnetic3d and Zero Creative.
With rapid advances in optical fabrication, digital processing power, and computational models for human perception, a new generation of display technology is emerging: compressive light field displays. These architectures explore the co-design of optical elements and compressive computation while taking particular characteristics of the human visual system into account. Compressive display designs include dual- and multilayer devices that are driven by algorithms such as computed tomography, non-negative matrix factorization, and non-negative tensor factorization.
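As a toy illustration of the factorization step, the sketch below uses Lee–Seung multiplicative updates, one standard non-negative matrix factorization algorithm, to approximate a small non-negative "light field" matrix as a product of two non-negative factors. Real compressive displays factor a full 4D light field through the multiplicative transmittances of stacked physical layers; this deliberately simplified example only shows the numerical core.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(T, rank, iters=500):
    """Lee-Seung multiplicative updates: T ~ W @ H with W, H >= 0.

    Multiplicative updates preserve non-negativity, which matters for
    displays because layer transmittances cannot be negative.
    """
    m, n = T.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ T) / (W.T @ W @ H + 1e-9)
        W *= (T @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy target: 4 views x 6 pixels of non-negative intensities.
T = rng.random((4, 6))
W, H = nmf(T, rank=2)
err = np.linalg.norm(T - W @ H) / np.linalg.norm(T)
```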
Dimension Technologies released a range of commercially available 2D/3D switchable LCDs in 2002 using a combination of parallax barriers and lenticular lenses. SeeReal Technologies has developed a holographic display based on eye tracking. CubicVue exhibited a color filter pattern autostereoscopic display at the Consumer Electronics Association's i-Stage competition in 2009.
There are a variety of other autostereo systems as well, such as volumetric display, in which the reconstructed light field occupies a true volume of space, and integral imaging, which uses a fly's-eye lens array.
The term automultiscopic display has recently been introduced as a shorter synonym for the lengthy "multi-view autostereoscopic 3D display", as well as for the earlier, more specific "parallax panoramagram". The latter term originally indicated a continuous sampling along a horizontal line of viewpoints, e.g., image capture using a very large lens or a moving camera and a shifting barrier screen, but it later came to include synthesis from a relatively large number of discrete views.
Sunny Ocean Studios, located in Singapore, has been credited with developing an automultiscopic screen that can display autostereo 3D images from 64 different reference points.
A fundamentally new approach to autostereoscopy, called HR3D, has been developed by researchers from MIT's Media Lab. Used in a device like the Nintendo 3DS, it would consume half as much power, potentially doubling battery life without compromising screen brightness or resolution. It also offers a wider viewing angle and maintains the 3D effect even when the screen is rotated.
Movement parallax refers to the fact that the view of a scene changes with movement of the head. Thus, different images of the scene are seen as the head is moved from left to right, or up and down.
Many autostereoscopic displays are single-view displays and are thus not capable of reproducing the sense of movement parallax, except for a single viewer in systems capable of eye tracking.
Some autostereoscopic displays, however, are multi-view displays, and are thus capable of providing the perception of left-right movement parallax. Eight and sixteen views are typical for such displays. While it is theoretically possible to simulate the perception of up-down movement parallax, no current display systems are known to do so, and the up-down effect is widely seen as less important than left-right movement parallax. One consequence of not including parallax about both axes becomes more evident as objects increasingly distant from the plane of the display are presented: as the viewer moves closer to or farther away from the display, such objects will more obviously exhibit the effects of perspective shift about one axis but not the other, appearing variously stretched or squashed to a viewer not positioned at the optimum distance from the display.
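For a multi-view display, the mapping from a viewer's horizontal eye position to the view seen can be sketched as below. The viewing-zone width and view count are illustrative assumptions; real displays repeat their views across adjacent zones, which is why the index wraps.

```python
def view_index(x_mm, zone_width_mm=300.0, n_views=8):
    """Which of n_views repeating horizontal views an eye at x sees.

    The viewing zone is divided evenly among the views, and the view
    pattern repeats in neighboring zones, so the index wraps around.
    """
    view_width = zone_width_mm / n_views          # 37.5 mm per view here
    return int(x_mm // view_width) % n_views

left = view_index(100.0)    # left eye position (mm), assumed
right = view_index(165.0)   # right eye, ~65 mm further right
# The two eyes land in different views, producing stereo parallax;
# moving the head shifts both indices, producing movement parallax.
```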
A three-dimensional stereoscopic film (also known as three-dimensional film, 3D film or S3D film) is a motion picture that enhances the illusion of depth perception, hence adding a third dimension. The most common approach to the production of 3D films is derived from stereoscopic photography. In this approach, a regular motion picture camera system is used to record the images as seen from two perspectives (or computer-generated imagery generates the two perspectives in post-production), and special projection hardware or eyewear is used to limit the visibility of each image to the viewer's left or right eye only. 3D films are not limited to theatrical releases; television broadcasts and direct-to-video films have also incorporated similar methods, especially since the advent of 3D television and Blu-ray 3D.
3D films have existed in some form since 1915, but had been largely relegated to a niche in the motion picture industry because of the costly hardware and processes required to produce and display a 3D film, and the lack of a standardized format for all segments of the entertainment business. Nonetheless, 3D films were prominently featured in the 1950s in American cinema, and later experienced a worldwide resurgence in the 1980s and 1990s driven by IMAX high-end theaters and Disney-themed venues. 3D films became increasingly successful throughout the 2000s, peaking with the success of 3D presentations of Avatar in December 2009, after which 3D films again decreased in popularity. Certain directors have also taken more experimental approaches to 3D filmmaking, most notably celebrated auteur Jean-Luc Godard in his films 3x3D and Goodbye to Language.

AMD HD3D
HD3D is AMD's stereoscopic 3D API. HD3D exposes a quad buffer to game and software developers, allowing native stereoscopic 3D rendering.
An open HD3D SDK is available, although for now only DirectX 9, 10, and 11 are supported. Support for HDMI 3D, DisplayPort 3D, and DVI 3D displays is included in the latest AMD Catalyst.
AMD's quad-buffer API is supported by the following AMD products: the Radeon HD 5000, HD 6000, and HD 7000 series, and A-Series APUs.

Atomtronics
Atomtronics is an emerging sub-field of ultracold atomic physics which encompasses a broad range of topics featuring guided atomic matter waves. The systems typically include components analogous to those found in electronic or optical systems, such as beam splitters and transistors. Applications range from studies of fundamental physics to the development of practical devices.

Cryoprotectant
A cryoprotectant is a substance used to protect biological tissue from freezing damage (i.e. that due to ice formation). Arctic and Antarctic insects, fish and amphibians create cryoprotectants (antifreeze compounds and antifreeze proteins) in their bodies to minimize freezing damage during cold winter periods. Cryoprotectants are also used to preserve living materials in the study of biology and to preserve food products.

Dolby 3D
Dolby 3D (formerly known as Dolby 3D Digital Cinema) is a marketing name for a system from Dolby Laboratories, Inc. to show three-dimensional motion pictures in a digital cinema.

Ferroelectric liquid crystal display
A ferroelectric liquid crystal display (FLCD) is a display technology based on the ferroelectric properties of chiral smectic liquid crystals, as proposed in 1980 by Clark and Lagerwall. The FLCD did not make many inroads as a direct-view display device: manufacturing larger FLCDs was problematic, making them unable to compete with direct-view LCDs based on nematic liquid crystals using the twisted nematic field effect or in-plane switching. Today, the FLCD is used in reflective microdisplays based on liquid crystal on silicon (LCoS) technology. Using ferroelectric liquid crystal (FLC) in FLCoS technology allows a much smaller display area, which eliminates the problems of manufacturing larger-area FLC displays. Additionally, the dot pitch or pixel pitch of such displays can be as low as 6 µm, giving a very high-resolution display in a small area. To produce color and grey-scale, time multiplexing is used, exploiting the sub-millisecond switching time of the ferroelectric liquid crystal.
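Back-of-the-envelope arithmetic shows why sub-millisecond switching matters for time multiplexing: with field-sequential color and binary-weighted grey-scale bit planes, the shortest subframe is a small fraction of the frame time. The refresh rate, color count, and bit depth below are illustrative assumptions, not a real drive scheme.

```python
def shortest_bitplane_us(refresh_hz=60, n_colors=3, bits=5):
    """Shortest binary-weighted bit-plane duration (in microseconds) for
    field-sequential color plus grey-scale on a binary microdisplay.

    Each frame is split into n_colors fields, and each field into bit
    planes weighted 1, 2, 4, ..., summing to 2**bits - 1 time units.
    """
    frame_us = 1e6 / refresh_hz
    return frame_us / (n_colors * (2**bits - 1))

t = shortest_bitplane_us()
# The liquid crystal must switch faster than t (~0.18 ms here), which
# is why sub-millisecond FLC switching enables this scheme.
```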
These microdisplays find applications in 3D head mounted displays (HMD), image insertion in surgical microscopes and electronic viewfinders where direct-view LCDs fail to provide more than 600 ppi resolution.
Ferroelectric LCoS also finds commercial use in structured illumination for 3D metrology and in super-resolution microscopy, and FLCDs appear in several commercial products. The fast switching of FLCs additionally allows the construction of optical switches and shutters in printer heads.

Integral imaging
Integral imaging is an autostereoscopic and multiscopic three-dimensional imaging technique that captures and reproduces a light field by using a two-dimensional array of microlenses, sometimes called a fly's-eye lens, normally without the aid of a larger overall objective or viewing lens. In capture mode, each microlens allows an image of the subject as seen from the viewpoint of that lens's location to be acquired. In reproduction mode, each microlens allows each observing eye to see only the area of the associated micro-image containing the portion of the subject that would have been visible through that space from that eye's location. The optical geometry can perhaps be visualized more easily by substituting pinholes for the microlenses, as has actually been done for some demonstrations and special applications.
The result is a visual reproduction complete with all significant depth cues, including parallax in all directions, perspective that changes with the position and distance of the observer, and, if the lenses are small enough and the images of sufficient quality, the cue of accommodation — the adjustments of eye focus required to clearly see objects at different distances. Unlike the voxels in a true volumetric display, the image points perceived through the microlens array are virtual and have only a subjective location in space, allowing a scene of infinite depth to be displayed without resorting to an auxiliary large magnifying lens or mirror.
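The capture geometry can be sketched with pinholes in place of microlenses, as the text suggests: each pinhole projects a scene point to a slightly different position within its elemental image, and this per-cell offset (the parallax that encodes depth) shrinks as the point moves farther away. A toy one-dimensional model with assumed dimensions:

```python
def elemental_positions(X, Z, pinhole_xs, gap):
    """Project a scene point (X, Z) through each pinhole onto a sensor
    plane a distance `gap` behind the pinhole array.

    Pinholes sit at z = 0, the scene at z = Z > 0, and the sensor at
    z = -gap; a straight ray through pinhole u lands at
    x = u + (u - X) * gap / Z.
    """
    return [u + (u - X) * gap / Z for u in pinhole_xs]

pinholes = [-2.0, -1.0, 0.0, 1.0, 2.0]   # mm, illustrative pitch
near = elemental_positions(0.0, 50.0, pinholes, gap=3.0)
far = elemental_positions(0.0, 500.0, pinholes, gap=3.0)
# Per-cell offsets (image position minus pinhole position) are ten
# times larger for the near point than for the far point.
```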
Integral imaging was partly inspired by barrier grid autostereograms and in turn partly inspired lenticular printing.

MasterImage 3D
MasterImage 3D is a company that develops stereoscopic 3D systems for theaters, and auto-stereoscopic 3D displays for mobile devices.

Multi-primary color display
A multi-primary color (MPC) display is a display that can reproduce a wider color gamut than conventional displays. In addition to the standard RGB (red, green, and blue) subpixels, the technology utilizes additional colors, such as yellow, magenta, and cyan, and thus increases the range of displayable colors that the human eye can see.
Sharp's Quattron is the brand name of an LCD color display technology that utilizes a yellow fourth color subpixel. It is used in Sharp's Aquos LCD TV product line, particularly in models with screens 40 inches across and larger.

Organic light-emitting transistor
An organic light-emitting transistor (OLET) is a form of transistor that emits light. These transistors have potential for digital displays and on-chip optical interconnects. The OLET is a new light-emission concept, providing planar light sources that can be easily integrated into substrates like silicon, glass, and paper using standard microelectronic techniques. OLETs differ from OLEDs in that an active matrix can be made entirely of OLETs, whereas OLEDs must be combined with switching elements such as TFTs.

Pseudoscope
A pseudoscope is a binocular optical instrument that reverses depth perception. It is used to study human stereoscopic perception. Objects viewed through it appear inside out; for example, a box on a floor would appear as a box-shaped hole in the floor.
It typically uses sets of optical prisms, or periscopically arranged mirrors, to swap the view of the left eye with that of the right eye.

Screenless video
Screenless video is any system for transmitting visual information from a video source without the use of a screen. Screenless computing systems can be divided into three groups: Visual Image, Retinal Direct, and Synaptic Interface.

Stereo display
A stereo display (also 3D display) is a display device capable of conveying depth perception to the viewer by means of stereopsis for binocular vision.

Stereoscopic Video Coding
3D video coding is one of the processing stages required to deliver stereoscopic content to the home. There are three techniques used to achieve stereoscopic video:
Color shifting (anaglyph)
Pixel subsampling (side-by-side, checkerboard, quincunx)
Enhanced video stream coding (2D+Delta, 2D+Metadata, 2D plus depth)
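Two of these techniques can be sketched directly in a few lines of NumPy. The tiny images and the side-by-side packing layout here are toy assumptions; real systems operate on full video frames and signal the chosen layout in stream metadata.

```python
import numpy as np

def anaglyph(left, right):
    """Color shifting: take the red channel from the left image and
    the green/blue channels from the right (classic red-cyan anaglyph)."""
    out = right.copy()
    out[..., 0] = left[..., 0]
    return out

def side_by_side(left, right):
    """Pixel subsampling: halve each view's horizontal resolution and
    pack both into a single frame of the original size."""
    return np.concatenate([left[:, ::2], right[:, ::2]], axis=1)

# Toy 4x8 RGB views: a red-ish left image and a blue-ish right image.
L = np.zeros((4, 8, 3), dtype=np.uint8); L[..., 0] = 255
R = np.zeros((4, 8, 3), dtype=np.uint8); R[..., 2] = 255
a = anaglyph(L, R)          # red from L, blue from R in every pixel
sbs = side_by_side(L, R)    # left half holds L, right half holds R
```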
Stereoscopic spectroscopy
Stereoscopic spectroscopy is a type of imaging spectroscopy that can extract a few spectral parameters over a complete image plane simultaneously. A stereoscopic spectrograph is similar to a normal spectrograph except that (A) it has no slit, and (B) multiple spectral orders (often including the non-dispersed zero order) are collected simultaneously. The individual images are blurred by the spectral information present in the original data. The images are recombined using stereoscopic algorithms similar to those used to find ground feature altitudes from parallax in aerial photography.
Stereoscopic spectroscopy is a special case of the more general field of tomographic spectroscopy. Both types of imaging use an analogy between the data space of imaging spectrographs and the conventional 3-space of the physical world. Each spectral order in the instrument produces an image plane analogous to the view from a camera with a particular look angle through the data space, and recombining the views allows recovery of (some aspects of) the spectrum at every location in the image.

Stereoscopy
Stereoscopy (also called stereoscopics, or stereo imaging) is a technique for creating or enhancing the illusion of depth in an image by means of stereopsis for binocular vision. The word stereoscopy derives from the Greek στερεός (stereos), meaning 'firm, solid', and σκοπέω (skopeō), meaning 'to look, to see'. Any stereoscopic image is called a stereogram. Originally, stereogram referred to a pair of stereo images which could be viewed using a stereoscope.
Most stereoscopic methods present two offset images separately to the left and right eye of the viewer. These two-dimensional images are then combined in the brain to give the perception of 3D depth. This technique is distinguished from 3D displays that display an image in three full dimensions, allowing the observer to increase information about the 3-dimensional objects being displayed by head and eye movements.
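The geometric basis of stereopsis can be made concrete with the standard triangulation formula for a rectified stereo pair, Z = f·B/d, where f is the focal length, B the baseline between the two viewpoints, and d the disparity (horizontal offset) between matching points. The numbers below are illustrative assumptions.

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Triangulate depth from the horizontal offset (disparity) between
    matching points in a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_mm / disparity_px

# Illustrative values: an 800 px focal length and a 65 mm baseline
# (roughly human eye separation); a 10 px disparity then corresponds
# to a point about 5.2 m away. Larger disparity means a nearer point.
z = depth_from_disparity(focal_px=800.0, baseline_mm=65.0, disparity_px=10.0)
```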
Utility fog
Utility fog (coined by Dr. John Storrs Hall in 1993) is a hypothetical collection of tiny robots that can replicate a physical structure. As such, it is a form of self-reconfiguring modular robotics.

Virtual retinal display
A virtual retinal display (VRD), also known as a retinal scan display (RSD) or retinal projector (RP), is a display technology that draws a raster display (like a television) directly onto the retina of the eye. The user sees what appears to be a conventional display floating in space in front of them.

XpanD 3D
XPAND 3D developed active-shutter 3D solutions for multiple purposes. The company was founded by Maria Costeira and Ami Dror in 1995 as X6D Limited, and deployed its systems in over 15,000 cinemas worldwide.