Autostereoscopy

Autostereoscopy is any method of displaying stereoscopic images (adding binocular perception of 3D depth) without the use of special headgear or glasses on the part of the viewer. Because headgear is not required, it is also called "glasses-free 3D" or "glassesless 3D". There are two broad approaches currently used to accommodate motion parallax and wider viewing angles: eye-tracking, and multiple views so that the display does not need to sense where the viewers' eyes are located.[1]

Examples of autostereoscopic display technologies include lenticular lenses, parallax barriers, volumetric displays, and holographic and light-field displays.

[Image: New Nintendo 3DS. The Nintendo 3DS uses parallax barrier autostereoscopy to display a 3D image.]

Technology

Many organizations have developed autostereoscopic 3D displays, ranging from experimental displays in university departments to commercial products, using a range of different technologies.[2] The method of creating autostereoscopic flat-panel video displays using lenses was mainly developed in 1985 by Reinhard Boerner at the Heinrich Hertz Institute (HHI) in Berlin.[3] Prototypes of single-viewer displays were already being presented in the 1990s by Sega AM3 (Floating Image System)[4] and by the HHI. Since then, the technology has been developed further, mainly by European and Japanese companies. One of the best-known 3D displays developed by HHI was the Free2C, which achieved very high resolution and very good viewing comfort through an eye-tracking system and seamless mechanical adjustment of the lenses. Eye tracking has been used in a variety of systems to limit the number of displayed views to just two or to enlarge the stereoscopic sweet spot. However, because it restricts the display to a single viewer, this approach is not favored for consumer products.

Currently, most flat-panel displays employ lenticular lenses or parallax barriers that redirect imagery to several viewing regions, at the cost of image resolution. When the viewer's head is in a suitable position, each eye sees a different image, giving a convincing illusion of 3D. Such displays can have multiple viewing zones, allowing multiple users to view the image at the same time, though they may also exhibit dead zones in which only a non-stereoscopic or pseudoscopic image, if any, can be seen.
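
As a rough illustration of how a two-view barrier or lenticular panel is fed, the following Python sketch interleaves a left-eye and a right-eye image column by column. The even/odd column split and the function name are illustrative assumptions; real panels differ in barrier pitch, subpixel layout and view count.

```python
import numpy as np

def interleave_two_views(left, right):
    """Naive two-view interleaving for a parallax-barrier display.

    left, right: H x W x 3 arrays of the same shape. Even pixel columns
    are taken from the left-eye image and odd columns from the right-eye
    image, mirroring how a barrier or lenslet directs alternate columns
    to alternate eyes. Horizontal resolution per eye is halved.
    """
    assert left.shape == right.shape
    out = np.empty_like(left)
    out[:, 0::2] = left[:, 0::2]   # columns visible to the left eye
    out[:, 1::2] = right[:, 1::2]  # columns visible to the right eye
    return out

# Example: two flat test images, 4 x 8 pixels.
h, w = 4, 8
left = np.full((h, w, 3), 255, dtype=np.uint8)   # white left view
right = np.zeros((h, w, 3), dtype=np.uint8)      # black right view
print(interleave_two_views(left, right)[0, :, 0])  # alternating 255 / 0
```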

Parallax barrier

[Figure: Parallax barrier vs. lenticular screen. Comparison of parallax-barrier and lenticular autostereoscopic displays; the figure is not to scale.]

A parallax barrier is a device placed in front of an image source, such as a liquid crystal display, to allow it to show a stereoscopic image or multiscopic image without the need for the viewer to wear 3D glasses. The principle of the parallax barrier was independently invented by Auguste Berthier, who published first but produced no practical results,[5] and by Frederic E. Ives, who made and exhibited the first known functional autostereoscopic image in 1901.[6] About two years later, Ives began selling specimen images as novelties, the first known commercial use.

In the early 2000s, Sharp brought the electronic flat-panel application of this old technology to commercialization, briefly selling two laptops with the world's only 3D LCD screens.[7] These displays are no longer available from Sharp but are still being manufactured and further developed by other companies. Similarly, Hitachi released the first 3D mobile phone for the Japanese market, distributed by KDDI.[8][9] In 2009, Fujifilm released the FinePix Real 3D W1 digital camera, which features a built-in autostereoscopic LCD measuring 2.8 in (71 mm) diagonally. The Nintendo 3DS video game console family uses a parallax barrier for 3D imagery; on a newer revision, the New Nintendo 3DS, this is combined with an eye-tracking system.

Integral photography and lenticular arrays

The principle of integral photography, which uses a two-dimensional (X-Y) array of many small lenses to capture a 3-D scene, was introduced by Gabriel Lippmann in 1908.[10][11] Integral photography is capable of creating window-like autostereoscopic displays that reproduce objects and scenes life-size, with full parallax and perspective shift and even the depth cue of accommodation, but the full realization of this potential requires a very large number of very small high-quality optical systems and very high bandwidth. Only relatively crude photographic and video implementations have yet been produced.
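
As a toy illustration of the capture principle (not of any particular integral camera), the sketch below replaces the microlenses with a one-dimensional pinhole array and projects a few scene points onto the elemental image behind each pinhole. All dimensions, the hit-count "sensor", and the function name are invented for the example.

```python
import numpy as np

def capture_elemental_images(points, pitch=1.0, gap=2.0,
                             n_lenses=8, px_per_lens=16):
    """Toy integral-imaging capture with a pinhole array (1-D for brevity).

    points : iterable of (x, z) scene points, z > 0 in front of the array.
    pitch  : spacing between pinholes (same units as x).
    gap    : distance from the pinhole plane to the sensor behind it.
    Each pinhole k at x_k projects every point onto its own strip of the
    sensor, producing one elemental image per pinhole.
    """
    sensor = np.zeros((n_lenses, px_per_lens), dtype=int)
    centers = (np.arange(n_lenses) - (n_lenses - 1) / 2) * pitch
    for x, z in points:
        for k, xk in enumerate(centers):
            # Similar triangles: the image of the point behind pinhole k
            # lands at lateral offset -gap * (x - xk) / z from the pinhole.
            u = -gap * (x - xk) / z
            px = int(round(u / pitch * px_per_lens + px_per_lens / 2))
            if 0 <= px < px_per_lens:
                sensor[k, px] += 1
    return sensor

scene = [(0.0, 10.0), (1.5, 20.0)]      # two points at different depths
print(capture_elemental_images(scene))   # parallax: offsets differ per lens
```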

One-dimensional arrays of cylindrical lenses were patented by Walter Hess in 1912.[12] By replacing the line and space pairs in a simple parallax barrier with tiny cylindrical lenses, Hess avoided the light loss that dimmed images viewed by transmitted light and that made prints on paper unacceptably dark.[13] An additional benefit is that the position of the observer is less restricted, as the substitution of lenses is geometrically equivalent to narrowing the spaces in a line-and-space barrier.

Philips solved a significant problem with electronic displays in the mid-1990s by slanting the cylindrical lenses with respect to the underlying pixel grid.[14] Based on this idea, Philips produced its WOWvx line until 2009, running up to 2160p (a resolution of 3840×2160 pixels) with 46 viewing angles.[15] Lenny Lipton's company, StereoGraphics, produced displays based on the same idea, citing a much earlier patent for the slanted lenticulars. Magnetic3D and Zero Creative have also produced displays of this type.[16]
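
The following sketch follows the general form of the subpixel-to-view mapping used for slanted lenticulars: each subpixel shows the view corresponding to its position under its lens, with the lens phase shifting slightly from row to row. The nine views, the lens width of 4.5 subpixels, and the slant of one-third of a subpixel per row are illustrative assumptions rather than the parameters of any specific product.

```python
import numpy as np

def view_map(rows, cols, n_views=9, x_sub=4.5, slant=1/3, offset=0.0):
    """Assign a view index to each RGB subpixel of a slanted-lenticular panel.

    x_sub  : subpixel columns covered by one lens along a row.
    slant  : horizontal lens shift, in subpixel columns, per pixel row.
    The view number of subpixel column k in pixel row l is proportional
    to the fractional position of that subpixel under its (slanted) lens.
    """
    k = np.arange(cols * 3)          # subpixel columns (R, G, B, R, ...)
    l = np.arange(rows)[:, None]     # pixel rows
    phase = (k + offset - slant * l) % x_sub   # position under the lens
    return np.floor(phase / x_sub * n_views).astype(int)

vm = view_map(rows=4, cols=4)
print(vm)   # each subpixel now knows which of the 9 views it must show
```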

Compressive light field displays

With rapid advances in optical fabrication, digital processing power, and computational models for human perception, a new generation of display technology is emerging: compressive light field displays. These architectures explore the co-design of optical elements and compressive computation while taking particular characteristics of the human visual system into account. Compressive display designs include dual-layer[17] and multilayer[18][19][20] devices that are driven by algorithms such as computed tomography, non-negative matrix factorization, and non-negative tensor factorization.
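
As a minimal sketch of the factorization idea behind dual-layer compressive displays, the code below runs standard multiplicative-update non-negative matrix factorization on a toy "light field" matrix; each rank-one term corresponds to one time frame in which the rear layer shows a column of F and the front layer a row of G. The angles-by-pixels matrix layout and the chosen rank are assumptions made for this example, not the exact formulation of the cited papers.

```python
import numpy as np

def nmf(L, rank=3, iters=200, eps=1e-9):
    """Rank-constrained non-negative matrix factorization, L ~ F @ G.

    In a dual-layer compressive display, each of the `rank` time frames
    shows one column of F on the rear layer and one row of G on the front
    layer; their products, summed over the frames by the eye, approximate
    the target light field matrix L (here: angles x pixels).
    Standard Lee-Seung multiplicative updates for the Frobenius norm.
    """
    m, n = L.shape
    rng = np.random.default_rng(0)
    F = rng.random((m, rank)) + eps
    G = rng.random((rank, n)) + eps
    for _ in range(iters):
        G *= (F.T @ L) / (F.T @ F @ G + eps)
        F *= (L @ G.T) / (F @ G @ G.T + eps)
    return F, G

# Toy target "light field": 8 viewing angles x 32 pixels, non-negative.
L = np.abs(np.sin(np.outer(np.linspace(0, 3, 8), np.linspace(0, 6, 32))))
F, G = nmf(L, rank=3)
print("relative error:", np.linalg.norm(L - F @ G) / np.linalg.norm(L))
```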

Autostereoscopic content creation and conversion

Tools for the instant conversion of existing 3D movies to autostereoscopic formats have been demonstrated by Dolby, Stereolabs and Viva3D.[21][22][23]
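
The internal algorithms of these tools are proprietary; as a generic illustration of multi-view synthesis, the sketch below warps a single image into horizontally shifted views using a per-pixel depth map (depth-image-based rendering in its crudest form, with no filtering or hole filling). The linear depth-to-disparity mapping is an assumption made for the example.

```python
import numpy as np

def synthesize_view(img, depth, shift_px):
    """Crude depth-image-based rendering: shift pixels horizontally.

    img      : H x W array (one channel for brevity).
    depth    : H x W array in [0, 1]; larger = closer to the viewer.
    shift_px : maximum disparity in pixels for the closest depth.
    Nearer pixels are displaced more, producing parallax. Disoccluded
    pixels are left at 0; real converters fill these holes.
    """
    h, w = img.shape
    out = np.zeros_like(img)
    cols = np.arange(w)
    for y in range(h):
        new_cols = (cols + np.round(depth[y] * shift_px)).astype(int)
        valid = (new_cols >= 0) & (new_cols < w)
        out[y, new_cols[valid]] = img[y, cols[valid]]
    return out

img = np.tile(np.arange(16, dtype=float), (4, 1))        # simple gradient
depth = np.tile(np.linspace(0, 1, 16), (4, 1))           # right side "closer"
views = [synthesize_view(img, depth, s) for s in (-3, 0, 3)]  # 3 viewpoints
```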

Other

Dimension Technologies released a range of commercially available 2D/3D switchable LCDs in 2002 using a combination of parallax barriers and lenticular lenses.[24][25] SeeReal Technologies has developed a holographic display based on eye tracking.[26] CubicVue exhibited a color filter pattern autostereoscopic display at the Consumer Electronics Association's i-Stage competition in 2009.[27][28]

There are a variety of other autostereo systems as well, such as volumetric display, in which the reconstructed light field occupies a true volume of space, and integral imaging, which uses a fly's-eye lens array.

The term automultiscopic display has recently been introduced as a shorter synonym for the lengthy "multi-view autostereoscopic 3D display",[29] as well as for the earlier, more specific "parallax panoramagram". The latter term originally indicated a continuous sampling along a horizontal line of viewpoints, e.g., image capture using a very large lens or a moving camera and a shifting barrier screen, but it later came to include synthesis from a relatively large number of discrete views.

Sunny Ocean Studios, located in Singapore, has been credited with developing an automultiscopic screen that can display autostereo 3D images from 64 different reference points.[30]

A fundamentally new approach to autostereoscopy, called HR3D, has been developed by researchers at MIT's Media Lab. Used with devices like the Nintendo 3DS, it would consume half as much power, roughly doubling battery life, without compromising screen brightness or resolution. It also offers other advantages, such as a wider viewing angle, and it maintains the 3D effect even when the screen is rotated.[31]

Movement parallax: single view vs. multi-view systems

Movement parallax refers to the fact that the view of a scene changes with movement of the head. Thus, different images of the scene are seen as the head moves from left to right and up and down.

Many autostereoscopic displays are single-view displays and are thus not capable of reproducing the sense of movement parallax, except for a single viewer in systems capable of eye tracking.

Some autostereoscopic displays, however, are multi-view displays and are thus capable of providing the perception of left-right movement parallax.[32] Eight and sixteen views are typical for such displays. While it is theoretically possible to simulate the perception of up-down movement parallax, no current display systems are known to do so, and the up-down effect is widely seen as less important than left-right movement parallax. One consequence of omitting parallax about one axis becomes more evident the farther objects are presented from the plane of the display: as the viewer moves closer to or farther from the display, such objects exhibit the effects of perspective shift about one axis but not the other, appearing variously stretched or squashed to a viewer not positioned at the optimum distance from the display.
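
A minimal sketch of the rendering side of such a display is shown below: camera positions for an eight-view, horizontal-parallax-only rig, plus a helper that reports which two views a viewer at a given lateral head position would receive. The one-interocular-distance view spacing and the nearest-view selection are illustrative assumptions; in a real display the optics, not software, route the views to the eyes.

```python
import numpy as np

def camera_offsets(n_views=8, eye_separation=0.065):
    """Horizontal camera positions for an n-view, parallax-only display.

    Views are spaced one interocular distance apart and centred on the
    screen axis, so any adjacent pair forms a valid stereo pair; there is
    no vertical offset, reflecting the left-right-only parallax of
    typical multi-view panels.
    """
    idx = np.arange(n_views) - (n_views - 1) / 2
    return idx * eye_separation       # metres along the baseline

def views_seen(head_x, offsets, half_eye=0.0325):
    """Return the (left, right) view indices nearest each eye of a viewer
    whose head centre is at lateral position head_x (metres)."""
    left = int(np.argmin(np.abs(offsets - (head_x - half_eye))))
    right = int(np.argmin(np.abs(offsets - (head_x + half_eye))))
    return left, right

offsets = camera_offsets()
print(offsets)                      # 8 camera positions, 6.5 cm apart
print(views_seen(0.05, offsets))    # which two views this viewer receives
```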

References

  1. ^ Dodgson, N.A. (August 2005). "Autostereoscopic 3D Displays". IEEE Computer. 38 (8): 31–36. doi:10.1109/MC.2005.252. ISSN 0018-9162.
  2. ^ Holliman, N.S. (2006). Three-Dimensional Display Systems (PDF). ISBN 0-7503-0646-7.
  3. ^ Boerner, R. (1985). "3D-Bildprojektion in Linsenrasterschirmen" (in German).
  4. ^ Electronic Gaming Monthly, issue 93 (April 1997), page 22
  5. ^ Berthier, Auguste. (May 16 and 23, 1896). "Images stéréoscopiques de grand format" (in French). Cosmos 34 (590, 591): 205–210, 227-233 (see 229-231)
  6. ^ Ives, Frederic E. (1902). "A novel stereogram". Journal of the Franklin Institute. 153: 51–52. doi:10.1016/S0016-0032(02)90195-X. Reprinted in Benton, Selected Papers on Three-Dimensional Displays.
  7. ^ "2D/3D Switchable Displays" (PDF). Sharp white paper. Archived (PDF) from the original on 30 May 2008. Retrieved 2008-06-19.
  8. ^ "Woooケータイ H001 | 2009年 | 製品アーカイブ | au by KDDI". Au.kddi.com. Archived from the original on 2010-05-04. Retrieved 2010-06-15.
  9. ^ "Hitachi Comes Up with 3.1-Inch 3D IPS Display". News.softpedia.com. 2010-04-12. Retrieved 2010-06-15.
  10. ^ Lippmann, G. (2 March 1908). "Épreuves réversibles. Photographies intégrales". Comptes Rendus de l'Académie des Sciences. 146 (9): 446–451. Reprinted in Benton "Selected Papers on Three-Dimensional Displays"
  11. ^ Frédo Durand; MIT CSAIL. "Reversible Prints. Integral Photographs" (PDF). Retrieved 2011-02-17. (This crude English translation of Lippmann's 1908 paper will be more comprehensible if the reader bears in mind that "dark room" and "darkroom" are the translator's mistaken renderings of "chambre noire", the French equivalent of the Latin "camera obscura", and should be read as "camera" in the thirteen places where this error occurs.)
  12. ^ Hess, Walter. US Patent 1,128,979, "Stereoscopic picture", filed 1 June 1912, patented 16 February 1915. Hess filed several similar patent applications in Europe in 1911 and 1912, which resulted in several patents issued in 1912 and 1913.
  13. ^ Benton, Stephen (2001). Selected Papers on Three-Dimensional Displays. Milestone Series. MS 162. SPIE Optical Engineering Press. p. xx-xxi.
  14. ^ van Berkel, Cees (1997). "Characterisation and optimisation of 3D-LCD module design". Proc. SPIE. 3012: 179–186. doi:10.1117/12.274456.
  15. ^ Fermoso, Jose (2008-10-01). "Philips' 3D HDTV Might Destroy Space-Time Continuum, Wallets | Gadget Lab". Wired.com. Archived from the original on 3 June 2010. Retrieved 2010-06-15.
  16. ^ "xyZ 3D Displays - Autostereoscopic 3D TV - 3D LCD - 3D Plasma - No Glasses 3D". Xyz3d.tv. Archived from the original on 2010-04-20. Retrieved 2010-06-15.
  17. ^ Lanman, D.; Hirsch, M.; Kim, Y.; Raskar, R. (2010). "Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization".
  18. ^ Wetzstein, G.; Lanman, D.; Heidrich, W.; Raskar, R. (2011). "Layered 3D: Tomographic Image Synthesis for Attenuation-based Light Field and High Dynamic Range Displays". ACM Transactions on Graphics (SIGGRAPH).
  19. ^ Lanman, D.; Wetzstein, G.; Hirsch, M.; Heidrich, W.; Raskar, R. (2011). "Polarization Fields: Dynamic Light Field Display using Multi-Layer LCDs". ACM Transactions on Graphics (SIGGRAPH Asia).
  20. ^ Wetzstein, G.; Lanman, D.; Hirsch, M.; Raskar, R. (2012). "Tensor Displays: Compressive Light Field Synthesis using Multilayer Displays with Directional Backlighting". ACM Transactions on Graphics (SIGGRAPH).
  21. ^ Chinnock, Chris (April 11, 2014). "NAB 2014 – Dolby 3D Details Partnership with Stereolabs". Display Central. Archived from the original on April 23, 2014. Retrieved July 19, 2016.
  22. ^ "Viva3D autostereo output for glasses-free 3D monitors". ViewPoint 3D. Retrieved July 19, 2016.
  23. ^ Robin C. Colclough. "Viva3D Real-time Stereo Vision: Stereo conversion & depth determination with mixed 3D graphics" (PDF). ViewPoint 3D. Retrieved July 19, 2016.
  24. ^ Smith, Tom (2002-06-14). "Review : Dimension Technologies 2015XLS". BlueSmoke. Retrieved 25 March 2010.
  25. ^ McAllister, David F. (February 2002). "Stereo & 3D Display Technologies, Display Technology" (PDF). In Hornak, Joseph P. Encyclopedia of Imaging Science and Technology, 2 Volume Set (Hardcover). 2. New York: Wiley & Sons. pp. 1327–1344. ISBN 978-0-471-33276-3.
  26. ^ Ooshita, Junichi (2007-10-25). "SeeReal Technologies Exhibits Holographic 3D Video Display, Targeting Market Debut in 2009". TechOn!. Retrieved 23 March 2010.
  27. ^ "CubicVue LLC : i-stage". I-stage.ce.org. 1999-02-22. Retrieved 2010-06-15.
  28. ^ Heater, Brian (2010-03-23). "Nintendo Says Next-Gen DS Will Add a 3D Display". PC Magazine.
  29. ^ Akenine-Möller, Tomas (2006). Rendering Techniques 2006. A K Peters, Ltd. p. 73.
  30. ^ Pop, Sebastian (2010-02-03). "Sunny Ocean Studios Fulfills No-Glasses 3D Dream". Softpedia.
  31. ^ "Better glasses-free 3-D: A fundamentally new approach". Physorg.com. Retrieved 2012-03-04.
  32. ^ Dodgson, N.A.; Moore, J. R.; Lang, S. R. (1999). "Multi-View Autostereoscopic 3D Display". CiteSeerX 10.1.1.42.7623.

3D film

A three-dimensional stereoscopic film (also known as three-dimensional film, 3D film or S3D film) is a motion picture that enhances the illusion of depth perception, hence adding a third dimension. The most common approach to the production of 3D films is derived from stereoscopic photography. In this approach, a regular motion picture camera system is used to record the images as seen from two perspectives (or computer-generated imagery generates the two perspectives in post-production), and special projection hardware or eyewear is used to limit the visibility of each image to the viewer's left or right eye only. 3D films are not limited to theatrical releases; television broadcasts and direct-to-video films have also incorporated similar methods, especially since the advent of 3D television and Blu-ray 3D.

3D films have existed in some form since 1915, but had been largely relegated to a niche in the motion picture industry because of the costly hardware and processes required to produce and display a 3D film, and the lack of a standardized format for all segments of the entertainment business. Nonetheless, 3D films were prominently featured in the 1950s in American cinema, and later experienced a worldwide resurgence in the 1980s and 1990s driven by IMAX high-end theaters and Disney-themed venues. 3D films became increasingly successful throughout the 2000s, peaking with the success of 3D presentations of Avatar in December 2009, after which 3D films again decreased in popularity. Certain directors have also taken more experimental approaches to 3D filmmaking, most notably celebrated auteur Jean-Luc Godard in his films 3x3D and Goodbye to Language.

AMD HD3D

HD3D is AMD's stereoscopic 3D API. HD3D exposes a quad buffer to game and software developers, allowing native 3D rendering.

An open HD3D SDK is available, although for now only DirectX 9, 10, and 11 are supported. Support for HDMI 3D, DisplayPort 3D, and DVI 3D displays is included in the latest AMD Catalyst drivers.

AMD's quad-buffer API is supported on the following AMD products: Radeon HD 5000, HD 6000, and HD 7000 series GPUs and A-Series APUs.
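
The HD3D SDK itself is not shown here; as a generic illustration of what a quad buffer provides, the sketch below uses PyOpenGL and GLUT to request a stereo context and render separate left and right back buffers. It only runs on drivers and displays that expose quad-buffered stereo, and the clear colors stand in for real per-eye scenes.

```python
# A minimal sketch of quad-buffered stereo in OpenGL via PyOpenGL.
# This is NOT the HD3D SDK; it only illustrates the quad-buffer concept:
# separate left and right back buffers that the driver presents to a
# 3D-capable display. Requires stereo-capable hardware and drivers.
from OpenGL.GL import (glDrawBuffer, glClear, glClearColor,
                       GL_BACK_LEFT, GL_BACK_RIGHT, GL_COLOR_BUFFER_BIT)
from OpenGL.GLUT import (glutInit, glutInitDisplayMode, glutCreateWindow,
                         glutDisplayFunc, glutSwapBuffers, glutMainLoop,
                         GLUT_DOUBLE, GLUT_RGB, GLUT_STEREO)

def display():
    glDrawBuffer(GL_BACK_LEFT)        # render the left-eye view
    glClearColor(1.0, 0.0, 0.0, 1.0)  # placeholder: clear to red
    glClear(GL_COLOR_BUFFER_BIT)
    glDrawBuffer(GL_BACK_RIGHT)       # render the right-eye view
    glClearColor(0.0, 0.0, 1.0, 1.0)  # placeholder: clear to blue
    glClear(GL_COLOR_BUFFER_BIT)
    glutSwapBuffers()                 # driver pairs the two buffers

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_STEREO)  # request quad buffer
glutCreateWindow(b"quad-buffer stereo sketch")
glutDisplayFunc(display)
glutMainLoop()
```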

Atomtronics

Atomtronics is an emerging sub-field of ultracold atomic physics which encompasses a broad range of topics featuring guided atomic matter waves. The systems typically include components analogous to those found in electronic or optical systems, such as beam splitters and transistors. Applications range from studies of fundamental physics to the development of practical devices.

Cryoprotectant

A cryoprotectant is a substance used to protect biological tissue from freezing damage (i.e. that due to ice formation). Arctic and Antarctic insects, fish and amphibians create cryoprotectants (antifreeze compounds and antifreeze proteins) in their bodies to minimize freezing damage during cold winter periods. Cryoprotectants are also used to preserve living materials in the study of biology and to preserve food products.

Dolby 3D

Dolby 3D (formerly known as Dolby 3D Digital Cinema) is a marketing name for a system from Dolby Laboratories, Inc. to show three-dimensional motion pictures in a digital cinema.

Ferroelectric liquid crystal display

Ferroelectric liquid crystal display (FLCD) is a display technology based on the ferroelectric properties of chiral smectic liquid crystals, as proposed in 1980 by Clark and Lagerwall. The FLCD did not make many inroads as a direct-view display device: manufacturing larger FLCDs was problematic, making them unable to compete against direct-view LCDs based on nematic liquid crystals using the twisted nematic field effect or in-plane switching. Today, the FLCD is used in reflective microdisplays based on liquid crystal on silicon (LCoS) technology. Using ferroelectric liquid crystal (FLC) in FLCoS technology allows a much smaller display area, which avoids the problems of manufacturing larger-area FLC displays. Additionally, the dot pitch (pixel pitch) of such displays can be as low as 6 µm, giving a very high-resolution display in a small area. To produce color and grey-scale, time multiplexing is used, exploiting the sub-millisecond switching time of the ferroelectric liquid crystal.

These microdisplays find applications in 3D head mounted displays (HMD), image insertion in surgical microscopes and electronic viewfinders where direct-view LCDs fail to provide more than 600 ppi resolution.

Ferroelectric LCoS also finds commercial use in structured illumination for 3D metrology and in super-resolution microscopy. Some commercial products use FLCDs; their fast switching allows building optical switches and shutters in printer heads.

Integral imaging

Integral imaging is an autostereoscopic and multiscopic three-dimensional imaging technique that captures and reproduces a light field by using a two-dimensional array of microlenses, sometimes called a fly's-eye lens, normally without the aid of a larger overall objective or viewing lens. In capture mode, each microlens allows an image of the subject as seen from the viewpoint of that lens's location to be acquired. In reproduction mode, each microlens allows each observing eye to see only the area of the associated micro-image containing the portion of the subject that would have been visible through that space from that eye's location. The optical geometry can perhaps be visualized more easily by substituting pinholes for the microlenses, as has actually been done for some demonstrations and special applications.

The result is a visual reproduction complete with all significant depth cues, including parallax in all directions, perspective that changes with the position and distance of the observer, and, if the lenses are small enough and the images of sufficient quality, the cue of accommodation — the adjustments of eye focus required to clearly see objects at different distances. Unlike the voxels in a true volumetric display, the image points perceived through the microlens array are virtual and have only a subjective location in space, allowing a scene of infinite depth to be displayed without resorting to an auxiliary large magnifying lens or mirror.

Integral imaging was partly inspired by barrier grid autostereograms and in turn partly inspired lenticular printing.

MasterImage 3D

MasterImage 3D is a company that develops stereoscopic 3D systems for theaters, and auto-stereoscopic 3D displays for mobile devices.

Multi-primary color display

Multi-primary color (MPC) display is a display that can reproduce a wider color gamut than conventional displays. In addition to the standard RGB (red, green and blue) color subpixels, the technology utilizes additional colors, such as yellow, magenta and cyan, and thus increases the range of displayable colors that the human eye can see. Sharp's Quattron is the brand name of an LCD color display technology that utilizes a yellow fourth color subpixel. It is used in Sharp's Aquos LCD TV product line, particularly in models with screens 40 inches across and larger.

Organic light-emitting transistor

An organic light-emitting transistor (OLET) is a form of transistor that emits light. These transistors have potential for digital displays and on-chip optical interconnects. OLET is a new light-emission concept, providing planar light sources that can be easily integrated into substrates like silicon, glass, and paper using standard microelectronic techniques. OLETs differ from OLEDs in that an active matrix can be made entirely of OLETs, whereas OLEDs must be combined with switching elements such as TFTs.

Pseudoscope

A pseudoscope is a binocular optical instrument that reverses depth perception. It is used to study human stereoscopic perception. Objects viewed through it appear inside out; for example, a box on a floor would appear as a box-shaped hole in the floor.

It typically uses sets of optical prisms, or periscopically arranged mirrors to swap the view of the left eye with that of the right eye.

Screenless video

Screenless video is any system for transmitting visual information from a video source without the use of a screen. Screenless computing systems can be divided into three groups: Visual Image, Retinal Direct, and Synaptic Interface.

Stereo display

A stereo display (also 3D display) is a display device capable of conveying depth perception to the viewer by means of stereopsis for binocular vision.

Stereoscopic Video Coding

3D video coding is one of the processing stages required to deliver stereoscopic content to the home. Three techniques are used to achieve stereoscopic video (a minimal sketch of side-by-side packing follows the list):

Color shifting (anaglyph)

Pixel subsampling (side-by-side, checkerboard, quincunx)

Enhanced video stream coding (2D+Delta, 2D+Metadata, 2D plus depth)
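
As an illustration of the second technique, the code below packs two views into one frame-compatible side-by-side image by keeping every second column of each view. Real encoders typically low-pass filter before subsampling, so the bare column decimation here is an illustrative simplification.

```python
import numpy as np

def pack_side_by_side(left, right):
    """Frame-compatible side-by-side packing (one of the pixel-subsampling
    formats listed above). Each view is subsampled to half horizontal
    resolution and the halves are placed in one ordinary video frame,
    which existing 2D distribution chains can carry unchanged."""
    half_l = left[:, ::2]    # keep every second column of the left view
    half_r = right[:, ::2]   # keep every second column of the right view
    return np.concatenate([half_l, half_r], axis=1)

left = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
right = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
frame = pack_side_by_side(left, right)
print(frame.shape)   # (1080, 1920, 3): a normal HD frame carrying both views
```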

Stereoscopic spectroscopy

Stereoscopic spectroscopy is a type of imaging spectroscopy that can extract a few spectral parameters over a complete image plane simultaneously. A stereoscopic spectrograph is similar to a normal spectrograph except that (A) it has no slit, and (B) multiple spectral orders (often including the non-dispersed zero order) are collected simultaneously. The individual images are blurred by the spectral information present in the original data. The images are recombined using stereoscopic algorithms similar to those used to find ground feature altitudes from parallax in aerial photography.

Stereoscopic spectroscopy is a special case of the more general field of tomographic spectroscopy. Both types of imaging use an analogy between the data space of imaging spectrographs and the conventional 3-space of the physical world. Each spectral order in the instrument produces an image plane analogous to the view from a camera with a particular look angle through the data space, and recombining the views allows recovery of (some aspects of) the spectrum at every location in the image.

Stereoscopy

Stereoscopy (also called stereoscopics, or stereo imaging) is a technique for creating or enhancing the illusion of depth in an image by means of stereopsis for binocular vision. The word stereoscopy derives from the Greek στερεός (stereos), meaning 'firm, solid', and σκοπέω (skopeō), meaning 'to look, to see'. Any stereoscopic image is called a stereogram. Originally, stereogram referred to a pair of stereo images which could be viewed using a stereoscope.

Most stereoscopic methods present two offset images separately to the left and right eye of the viewer. These two-dimensional images are then combined in the brain to give the perception of 3D depth. This technique is distinguished from 3D displays that display an image in three full dimensions, allowing the observer to increase information about the 3-dimensional objects being displayed by head and eye movements.

Utility fog

Utility fog (coined by Dr. John Storrs Hall in 1993) is a hypothetical collection of tiny robots that can replicate a physical structure. As such, it is a form of self-reconfiguring modular robotics.

Virtual retinal display

A virtual retinal display (VRD), also known as a retinal scan display (RSD) or retinal projector (RP), is a display technology that draws a raster display (like a television) directly onto the retina of the eye. The user sees what appears to be a conventional display floating in space in front of them.

XpanD 3D

XPAND 3D developed active-shutter 3D solutions for multiple purposes. The company was founded by Maria Costeira and Ami Dror in 1995 as X6D Limited. The company deployed its systems in over 15,000 cinemas worldwide.
