2D to 3D conversion

2D to 3D video conversion (also called 2D to stereo 3D conversion or stereo conversion) is the process of transforming 2D ("flat") film into 3D form, which in almost all cases is stereoscopic; it is therefore the process of creating imagery for each eye from a single 2D image.

2D to 3D conversion
Process type: digital and print
Industrial sector(s): film and television, print production
Main technologies or sub-processes: computer software
Product(s): movies, television shows, social media, printed images

Overview

2D-to-3D conversion adds the binocular disparity depth cue to digital images perceived by the brain and thus, if done properly, greatly improves the immersive effect of stereo video compared with 2D video. However, to be successful, the conversion must be done with sufficient accuracy and correctness: the quality of the original 2D images should not deteriorate, and the introduced disparity cue should not contradict the other cues the brain uses for depth perception. If done properly and thoroughly, the conversion produces stereo video of quality similar to "native" stereo video, which is shot in stereo and accurately adjusted and aligned in post-production.[1]

Two approaches to stereo conversion can be loosely defined: high-quality semiautomatic conversion for cinema and high-quality 3DTV, and low-quality automatic conversion for cheap 3DTV, VOD and similar applications.

Re-rendering of computer animated films

Computer animated 2D films made with 3D models can be re-rendered in stereoscopic 3D by adding a second virtual camera if the original data is still available. This is technically not a conversion; therefore, such re-rendered films have the same quality as films originally produced in stereoscopic 3D. Examples of this technique include the re-release of Toy Story and Toy Story 2. Revisiting the original computer data for the two films took four months, with an additional six months needed to add the 3D.[2] However, not all CGI films are re-rendered for 3D re-release because of the costs, time required, lack of skilled resources or missing computer data.

Importance and applicability

With the increase of films released in 3D, 2D to 3D conversion has become more common. The majority of non-CGI stereo 3D blockbusters are converted fully or at least partially from 2D footage. Even Avatar contains several scenes shot in 2D and converted to stereo in post-production.[3] The reasons for shooting in 2D instead of stereo are financial, technical and sometimes artistic:[1][4]

  • Stereo post-production workflow is much more complex and not as well-established as 2D workflow, requiring more work and rendering.
  • Professional stereoscopic rigs are much more expensive and bulky than customary monocular cameras. Some shots, particularly action scenes, can only be shot with relatively small 2D cameras.
  • Stereo cameras can introduce various mismatches in the stereo image (such as vertical parallax, tilt, color shift, and reflections and glare in different positions) that must be fixed in post-production anyway because they ruin the 3D effect. Such correction can sometimes be as complex as stereo conversion itself.
  • Stereo cameras can betray practical effects used during filming. For example, some scenes in the Lord of the Rings film trilogy were filmed using forced perspective to allow two actors to appear to be different physical sizes. The same scene filmed in stereo would reveal that the actors were not the same distance from the camera.
  • By their very nature, stereo cameras have restrictions on how far the camera can be from the filmed subject and still provide acceptable stereo separation. For example, the simplest way to film a scene set on the side of a building might be to use a camera rig from across the street on a neighboring building, using a zoom lens. However, while the zoom lens would provide acceptable image quality, the stereo separation would be virtually nil over such a distance.

Even in the case of stereo shooting, conversion can frequently be necessary. Besides the mentioned hard-to-shoot scenes, there are situations when mismatches in stereo views are too big to adjust, and it is simpler to perform 2D to stereo conversion, treating one of the views as the original 2D source.

General problems

Regardless of the particular algorithms, all conversion workflows must solve the following tasks:[4][5]

  1. Allocation of the "depth budget" – defining the range of permitted disparity or depth, which depth value corresponds to the screen position (the so-called "convergence point"), and the permitted distance ranges for out-of-the-screen effects and behind-the-screen background objects. If an object in the stereo pair occupies exactly the same spot in both eyes' images, it appears on the screen surface at zero parallax. Objects in front of the screen are said to be in negative parallax, and background imagery behind the screen is in positive parallax. Corresponding negative or positive horizontal offsets are applied to object positions in the left- and right-eye images (see the sketch after this list).
  2. Control of comfortable disparity depending on scene type and motion – too much parallax or conflicting depth cues may cause eye strain and nausea.
  3. Filling of uncovered areas – the left and right views show the scene from slightly different angles, so parts of objects, or entire objects, covered by the foreground in the original 2D image become visible in the stereo pair. Sometimes the background surfaces are known or can be estimated and should be used to fill uncovered areas. Otherwise, the unknown areas must be filled in by an artist or inpainted, since exact reconstruction is not possible.
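
The parallax sign convention in task 1 can be made concrete with a small sketch. The following Python function is illustrative only: the names and the linear depth-to-parallax mapping are assumptions, not part of any cited workflow.

```python
# A minimal sketch of the parallax sign convention, assuming a
# normalized depth value (0.0 = nearest, 1.0 = farthest); the linear
# mapping and all names are illustrative assumptions.

def eye_offsets(depth, convergence_depth, max_parallax_px):
    """Horizontal pixel offsets for the left- and right-eye images.

    Points at convergence_depth land on the screen (zero parallax),
    nearer points get negative parallax (in front of the screen),
    farther points get positive parallax (behind the screen).
    """
    parallax = (depth - convergence_depth) * max_parallax_px
    # Each eye takes half the total offset, in opposite directions.
    return -parallax / 2.0, parallax / 2.0

# Example: an object well in front of the convergence plane.
left_dx, right_dx = eye_offsets(0.2, 0.5, 30.0)
# parallax = -9 px: negative, so the object appears in front of the screen.
```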

High quality conversion methods should also deal with many typical problems including:

  • Translucent objects
  • Reflections
  • Fuzzy semitransparent object borders – such as hair, fur, foreground out-of-focus objects, thin objects
  • Film grain (real or artificial) and similar noise effects
  • Scenes with fast erratic motion
  • Small particles – rain, snow, explosions and so on.

Quality semiautomatic conversion

Depth-based conversion

Most semiautomatic methods of stereo conversion use depth maps and depth-image-based rendering.[4][5]

The idea is that a separate auxiliary picture known as the "depth map" is created for each frame or for a series of homogeneous frames to indicate the depths of objects present in the scene. The depth map is a separate grayscale image with the same dimensions as the original 2D image, with various shades of gray indicating the depth of every part of the frame. While depth mapping can produce a fairly potent illusion of 3D objects in the video, it inherently does not support semi-transparent objects or areas, nor does it allow explicit handling of occlusions, so these and similar issues must be dealt with via a separate method.

[Figure: 2D plus depth – an example of a depth map]
[Figure: Generating and reconstructing 3D shapes from single or multi-view depth maps or silhouettes with deep generative networks[6]]

The major steps of depth-based conversion methods are:

  1. Depth budget allocation – how much total depth in the scene and where the screen plane will be.
  2. Image segmentation, creation of mattes or masks, usually by rotoscoping. Each important surface should be isolated. The level of detail depends on the required conversion quality and budget.
  3. Depth map creation. Each isolated surface is assigned a depth map, and the separate depth maps are composed into a scene depth map. This is an iterative process requiring adjustment of objects, shapes and depth, and visualization of intermediate results in stereo. Depth micro-relief (3D shape) is added to the most important surfaces to prevent the "cardboard" effect, in which stereo imagery looks like a combination of flat images merely set at different depths.
  4. Stereo generation based on 2D-plus-depth together with any supplemental information such as clean plates, restored backgrounds, transparency maps, etc. When the process is complete, left and right images will have been created. Usually the original 2D image is treated as the center image, so that two stereo views are generated. However, some methods propose using the original image as one eye's image and generating only the other eye's image, to minimize conversion cost.[4] During stereo generation, pixels of the original image are shifted to the left or to the right depending on the depth map, the maximum selected parallax, and the screen surface position (see the sketch after this list).
  5. Reconstruction and painting of any uncovered areas not filled by the stereo generator.
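
As a rough illustration of step 4, the sketch below shifts pixels horizontally according to a normalized depth map. This is a drastically simplified stand-in for real depth-image-based rendering: occlusion ordering and sub-pixel accuracy are ignored, hole filling (step 5) is left to the caller, and every name is an assumption.

```python
# A simplified depth-image-based rendering sketch (step 4), assuming
# depth in [0, 1] with 0 = nearest and 0.5 on the screen plane.
import numpy as np

def render_view(image, depth, max_parallax_px, eye_sign):
    """image: (H, W, 3) uint8 frame; depth: (H, W) floats in [0, 1];
    eye_sign: -1 for the left-eye view, +1 for the right-eye view."""
    h, w = depth.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    shift = (eye_sign * (depth - 0.5) * max_parallax_px).astype(int)
    xs = np.arange(w)
    for y in range(h):
        tx = np.clip(xs + shift[y], 0, w - 1)
        out[y, tx] = image[y]
        filled[y, tx] = True
    # Never-written pixels are the "uncovered areas" of step 5; here
    # they stay black, where a real pipeline would inpaint them.
    return out, ~filled

# left, holes_l = render_view(frame, depth_map, 30, eye_sign=-1)
# right, holes_r = render_view(frame, depth_map, 30, eye_sign=+1)
```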

Stereo can be presented in any format for preview purposes, including anaglyph.
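
For instance, a red-cyan anaglyph preview can be composed in a few lines; this sketch assumes equal-size 8-bit RGB left and right views.

```python
# A quick red-cyan anaglyph preview of a stereo pair; a rough sketch
# assuming equal-size 8-bit arrays in R, G, B channel order.
import numpy as np

def anaglyph(left, right):
    """Red channel from the left view, green and blue from the right."""
    out = right.copy()
    out[..., 0] = left[..., 0]
    return out
```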

The most time-consuming steps are image segmentation/rotoscoping, depth map creation and uncovered area filling. The latter is especially important for the highest quality conversion.

There are various automation techniques for depth map creation and background reconstruction. For example, automatic depth estimation can be used to generate initial depth maps for certain frames and shots.[7]

People engaged in such work may be called depth artists.[8]

Multi-layering

A development of depth mapping, multi-layering works around the limitations of depth mapping by introducing several layers of grayscale depth masks to implement limited semi-transparency. Similar to a simple masking technique,[9] multi-layering involves applying a depth map to more than one "slice" of the flat image, resulting in a much better approximation of depth and protrusion. The more layers that are processed separately per frame, the higher the quality of the 3D illusion tends to be (a compositing sketch follows).
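
A minimal sketch of the idea, assuming each layer carries its own RGBA slice and per-layer depth map; the renderer argument and all names are illustrative assumptions.

```python
# Back-to-front compositing of separately rendered layers, so that
# semi-transparent regions blend correctly; illustrative only.
import numpy as np

def composite_layers(layers, render_layer):
    """layers: iterable of (rgba, depth) pairs, ordered farthest first;
    render_layer: any per-layer stereo renderer returning RGBA uint8."""
    result = None
    for rgba, depth in layers:
        view = render_layer(rgba, depth).astype(float)
        if result is None:
            result = view
            continue
        alpha = view[..., 3:4] / 255.0
        # "Over" operator: nearer layer over the accumulated result.
        result[..., :3] = view[..., :3] * alpha + result[..., :3] * (1.0 - alpha)
    return result.astype(np.uint8)
```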

Other approaches

3D reconstruction and re-projection may also be used for stereo conversion. This involves creating a 3D model of the scene, extracting the original image surfaces as textures for the 3D objects and, finally, rendering the 3D scene from two virtual cameras to acquire stereo video. The approach works well enough for scenes with static rigid objects, such as urban shots with buildings and interior shots, but has problems with non-rigid bodies and soft fuzzy edges.[3]

Another method is to set up both left and right virtual cameras, both offset from the original camera but splitting the offset difference, then painting out occlusion edges of isolated objects and characters. This essentially amounts to clean-plating several background, midground, and foreground elements.

Binocular disparity can also be derived from simple geometry.[10]
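
As a worked example (a textbook pinhole-stereo relation, not drawn from the cited source): two parallel cameras with baseline B and focal length f (in pixels) see a point at depth Z with horizontal disparity d = f·B / Z. Disparity thus falls off inversely with distance, which is why very distant subjects yield almost no stereo separation.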

Automatic conversion

Depth from motion

It is possible to automatically estimate depth using different types of motion. In the case of camera motion, a depth map of the entire scene can be calculated. Object motion can also be detected, and moving areas can be assigned smaller depth values than the background. Occlusions provide information on the relative positions of moving surfaces.[11][12]
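
A rough sketch of the idea using dense optical flow, assuming a laterally moving camera so that flow magnitude is approximately inverse to distance; this heuristic and all names are assumptions, not a production algorithm.

```python
# "Depth from motion" via dense optical flow (OpenCV Farneback);
# assumes lateral camera motion so larger flow ~ nearer surface.
import cv2
import numpy as np

def depth_from_motion(prev_gray, next_gray):
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude = np.linalg.norm(flow, axis=2)
    # Normalize and invert: 0.0 = nearest, 1.0 = farthest, matching
    # the depth-map convention used in the sketches above.
    return 1.0 - magnitude / (magnitude.max() + 1e-6)
```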

Depth from focus

Approaches of this type are also called "depth from defocus" and "depth from blur".[11][13] In "depth from defocus" (DFD) approaches, the depth information is estimated from the amount of blur of the considered object, whereas "depth from focus" (DFF) approaches compare the sharpness of an object over a range of images taken at different focus distances in order to determine its distance to the camera. DFD needs only two or three images at different focus settings to work properly, whereas DFF needs at least 10 to 15 images but is more accurate.
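
A compact depth-from-focus sketch under the assumptions above: given a focal stack ordered from near to far focus, the slice in which a pixel is sharpest indicates its depth. All names are illustrative.

```python
# Depth from focus (DFF): pick, per pixel, the focal-stack slice with
# the highest local sharpness (smoothed squared Laplacian).
import cv2
import numpy as np

def depth_from_focus(stack):
    """stack: list of (H, W) uint8 images, ordered near-to-far focus."""
    sharpness = []
    for img in stack:
        lap = cv2.Laplacian(img.astype(np.float64), cv2.CV_64F)
        sharpness.append(cv2.GaussianBlur(lap * lap, (9, 9), 0))
    idx = np.argmax(np.stack(sharpness), axis=0)
    # Scale slice indices to [0, 1]: 0.0 = nearest, 1.0 = farthest.
    return idx.astype(np.float64) / max(len(stack) - 1, 1)
```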

If the sky is detected in the processed image, it can also be taken into account that more distant objects, besides being hazy, should appear more desaturated and more bluish because of the thicker layer of air in between.[13]

Depth from perspective

The idea of the method is based on the fact that parallel lines, such as railroad tracks and roadsides, appear to converge with distance, eventually reaching a vanishing point at the horizon. Finding this vanishing point gives the farthest point of the whole image.[11][13]

The more the lines converge, the farther away they appear to be. Thus, for the depth map, the area between two neighboring vanishing lines can be approximated with a gradient plane.
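
A toy version of this gradient idea, assuming the vanishing point has already been estimated (e.g. by intersecting Hough-detected lines, which is omitted here):

```python
# Radial depth gradient toward a vanishing point; purely illustrative.
import numpy as np

def gradient_depth(h, w, vanishing_point):
    """vanishing_point: (row, col) of the estimated vanishing point."""
    vy, vx = vanishing_point
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - vy, xs - vx)
    # 1.0 (farthest) at the vanishing point, 0.0 at the pixel most
    # distant from it.
    return 1.0 - dist / dist.max()
```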

References

  1. ^ a b Barry Sandrew. "2D – 3D Conversion Can Be Better Than Native 3D"
  2. ^ Murphy, Mekado (October 1, 2009). "Buzz and Woody Add a Dimension". The New York Times. Retrieved February 18, 2010.
  3. ^ a b Mike Seymour. Art of Stereo conversion: 2D to 3D
  4. ^ a b c d Scott Squires. 2D to 3D Conversions
  5. ^ a b Jon Karafin. State-of-the-Art 2D to 3D Conversion and Stereo VFX International 3D Society University. Presentation from the October 21, 2011 3DU-Japan event in Tokyo.
  6. ^ "Soltani, A. A., Huang, H., Wu, J., Kulkarni, T. D., & Tenenbaum, J. B. Synthesizing 3D Shapes via Modeling Multi-View Depth Maps and Silhouettes With Deep Generative Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1511-1519)".
  7. ^ YUVsoft. 2D–to–Stereo 3D Conversion Process
  8. ^ Mike Eisenberg (31 October 2011). "Interview with 3D Artist Adam Hlavac". Screen Rant. Retrieved 28 December 2015.
  9. ^ Cutler, James. "Masking Multiple Layers in Adobe Photoshop". Archived from the original on January 18, 2012.
  10. ^ Converting a 2D picture to a 3D Lenticular Print
  11. ^ a b c Dr. Lai-Man Po. Automatic 2D-to-3D Video Conversion Techniques for 3DTV Department of Electronic Engineering, City University of Hong Kong. 13 April 2010
  12. ^ Automatic 2D to 2D-plus-Depth conversion sample for a camera motion scene
  13. ^ a b c Qingqing We. "Converting 2D to 3D: A Survey" (PDF). Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, the Netherlands. Archived from the original (PDF) on 2012-04-15.
3D film

A three-dimensional stereoscopic film (also known as three-dimensional film, 3D film or S3D film) is a motion picture that enhances the illusion of depth perception, hence adding a third dimension. The most common approach to the production of 3D films is derived from stereoscopic photography. In this approach, a regular motion picture camera system is used to record the images as seen from two perspectives (or computer-generated imagery generates the two perspectives in post-production), and special projection hardware or eyewear is used to limit the visibility of each image to the viewer's left or right eye only. 3D films are not limited to theatrical releases; television broadcasts and direct-to-video films have also incorporated similar methods, especially since the advent of 3D television and Blu-ray 3D.

3D films have existed in some form since 1915, but had been largely relegated to a niche in the motion picture industry because of the costly hardware and processes required to produce and display a 3D film, and the lack of a standardized format for all segments of the entertainment business. Nonetheless, 3D films were prominently featured in the 1950s in American cinema, and later experienced a worldwide resurgence in the 1980s and 1990s driven by IMAX high-end theaters and Disney-themed venues. 3D films became increasingly successful throughout the 2000s, peaking with the success of 3D presentations of Avatar in December 2009, after which 3D films again decreased in popularity. Certain directors have also taken more experimental approaches to 3D filmmaking, most notably celebrated auteur Jean-Luc Godard in his films 3x3D and Goodbye to Language.

AMD HD3D

HD3D is AMD's stereoscopic 3D API. HD3D exposes a quad buffer to game and software developers, allowing native stereoscopic 3D rendering.

An open HD3D SDK is available, although for now only DirectX 9, 10 and 11 are supported. Support for HDMI 3D, DisplayPort 3D and DVI 3D displays is included in the latest AMD Catalyst drivers.

AMD's quad-buffer API is supported on the following AMD products: Radeon HD 5000 Series, Radeon HD 6000 Series, Radeon HD 7000 Series, and A-Series APUs.

Bubblegram

A bubblegram (also known as laser crystal, 3D crystal engraving or vitrography) is a solid block of glass or transparent plastic that has been exposed to laser beams to generate three-dimensional designs inside. The image is composed of many small points of fracture or other visible deformations and appears to float inside the block.

Correspondence problem

The correspondence problem refers to the problem of ascertaining which parts of one image correspond to which parts of another image, where differences are due to movement of the camera, the elapse of time, and/or movement of objects in the photos.
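
For a rectified stereo pair, a minimal block-matching sketch makes the problem concrete: for a patch in one image, search along the same row of the other image for the best match. This is an illustrative baseline, not a state-of-the-art matcher, and all names are assumptions.

```python
# Sum-of-squared-differences block matching along one scanline;
# assumes a rectified grayscale pair and an interior pixel (y, x).
import numpy as np

def match_disparity(left, right, y, x, patch=5, max_disp=64):
    h = patch // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    best_ssd, best_d = np.inf, 0
    for d in range(max_disp):
        if x - d - h < 0:
            break
        cand = right[y - h:y + h + 1,
                     x - d - h:x - d + h + 1].astype(float)
        ssd = np.sum((ref - cand) ** 2)
        if ssd < best_ssd:
            best_ssd, best_d = ssd, d
    return best_d  # horizontal disparity of the best correspondence
```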

Digital 3D

Digital 3D is a non-specific 3D standard in which films, television shows, and video games are presented and shot in digital 3D technology or later processed in digital post-production to add a 3D effect.

One of the first studios to use digital 3D was Walt Disney Pictures. In promoting their first CGI animated film Chicken Little, they trademarked the phrase Disney Digital 3-D and teamed up with RealD to present the film in 3D in the United States. Over 62 theaters in the US were retrofitted to use the RealD Cinema system. The 2008 animated feature Bolt was the first movie to be animated and rendered for digital 3D, whereas Chicken Little had been converted after it was finished.

Even though some critics and fans were skeptical about digital 3D, it has gained in popularity. There are now several competing digital 3D formats, including Dolby 3D, XpanD 3D, Panavision 3D, MasterImage 3D and IMAX 3D. The first home video game console capable of 3D was the Sega Master System, for which a limited number of titles could deliver 3D.

Dolby 3D

Dolby 3D (formerly known as Dolby 3D Digital Cinema) is a marketing name for a system from Dolby Laboratories, Inc. to show three-dimensional motion pictures in a digital cinema.

Identity FX

Identity FX, a post-production division of Identity Studios, Inc., is a visual effects (VFX) company and stereoscopic 3D design studio located in Los Angeles. The company specializes in full-service visual effects and stereoscopic 3D conversion in post-production.

For nearly a decade, Identity FX has created visuals for award-winning feature films, TV shows, and commercial campaigns. This North Hollywood-based VFX company has completed visual effects, stereo conversion, and native stereo optimization work for more than one hundred titles, including such projects as The Amazing Spider-Man, Prometheus, Conan the Barbarian, Green Lantern, The Chronicles of Narnia: The Voyage of the Dawn Treader, Hancock, Transformers, U2 3D, Paramount Parks' 4D Borg Adventure, and the RealD demos.

MasterImage 3D

MasterImage 3D is a company that develops stereoscopic 3D systems for theaters, and auto-stereoscopic 3D displays for mobile devices.

Pseudoscope

A pseudoscope is a binocular optical instrument that reverses depth perception. It is used to study human stereoscopic perception. Objects viewed through it appear inside out; for example, a box on a floor would appear as a box-shaped hole in the floor.

It typically uses sets of optical prisms, or periscopically arranged mirrors to swap the view of the left eye with that of the right eye.

RealD 3D

RealD 3D is a digital stereoscopic projection technology made and sold by RealD. It is currently the most widely used technology for watching 3D films in theaters (cinemas). Worldwide, RealD 3D is installed in more than 26,500 auditoriums by approximately 1,200 exhibitors in 72 countries as of June 2015.

Reliance MediaWorks

Reliance MediaWorks Limited (RMW) is a Film and Entertainment Services Company and a member of the Reliance Group.

The company is one of India's leading film and entertainment services companies, with a presence across several media businesses including theatrical exhibition of films, television content production and distribution, and film and media services. The company's facilities are MPAA-certified. Services provided by the company include motion picture processing and DI, digital distribution, audio restoration and image enhancement, 2D to 3D conversion, digital mastering, studio and equipment rentals, visual effects, animation, and post-production for TV advertisements. RMW's operations are spread across India, the UK and the US. RMW's television venture BIG Synergy is engaged in the television programming industry, housing popular shows such as Kaun Banega Crorepati and Indian Idol.

Reliance MediaWorks’ sound stages have also been utilized for events such as The Filmfare Awards, the movies Singham and Agneepath, and numerous television commercials.

SilhouetteFX

SilhouetteFX began as a rotoscoping tool for the visual effects industry. It has since been expanded to include capabilities for paint, warping and morphing, 2D to 3D conversion, and alternative matting methods. As of v6, SilhouetteFX retains all of the aforementioned capabilities, now embedded in a node-based digital compositing application.

Stereoautograph

The Stereoautograph is a complex opto-mechanical measurement instrument for the evaluation of analog or digital photograms. It is based on the stereoscopic effect, using two aerial photos or two photograms of the topography or of buildings taken from different standpoints.

It was invented by Eduard von Orel in 1907. The photograms or photographic plates are oriented by pass points measured in the field or on the building. This procedure can be carried out digitally (by methods of triangulation and projective geometry) or iteratively (by repeated angle corrections using congruent rays). The accuracy of modern autographs is about 0.001 mm.

Well-known instruments include those of Wild Heerbrugg (Leica), e.g. the analog A7 and B8 of the 1980s and the digital autographs beginning in the 1990s, as well as special instruments from Zeiss and Contraves.

Stereographer

A stereographer is a professional in the field of stereoscopy and visual effects using the art and techniques of stereo photography, 3D photography, or stereoscopic 3D film to create a visual perception of a 3-dimensional image from a flat surface.

Stereoscopic Displays and Applications

Stereoscopic Displays and Applications (SD&A) is an academic technical conference in the field of stereoscopic 3D imaging. The conference started in 1990 and is held annually. The conference is held as part of the annual Electronic Imaging: Science and Technology Symposium organised by the Society for Imaging Science and Technology (IS&T).

Stereoscopic Video Coding

3D video coding is one of the processing stages required to deliver stereoscopic content to the home. Three techniques are used to achieve stereoscopic video:

  • Color shifting (anaglyph)
  • Pixel subsampling (side-by-side, checkerboard, quincunx)
  • Enhanced video stream coding (2D+Delta, 2D+Metadata, 2D plus depth)
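
As an example of the pixel-subsampling family, side-by-side packing halves each view's horizontal resolution so both fit in a single 2D frame; a minimal sketch assuming equal-size RGB arrays:

```python
# Side-by-side frame packing: drop every other column of each view,
# then place the halves next to each other in one frame.
import numpy as np

def pack_side_by_side(left, right):
    return np.concatenate([left[:, ::2], right[:, ::2]], axis=1)
```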

Stereoscopic acuity

Stereoscopic acuity, also stereoacuity, is the smallest detectable depth difference that can be seen in binocular vision.

Stereoscopic spectroscopy

Stereoscopic spectroscopy is a type of imaging spectroscopy that can extract a few spectral parameters over a complete image plane simultaneously. A stereoscopic spectrograph is similar to a normal spectrograph except that (A) it has no slit, and (B) multiple spectral orders (often including the non-dispersed zero order) are collected simultaneously. The individual images are blurred by the spectral information present in the original data. The images are recombined using stereoscopic algorithms similar to those used to find ground feature altitudes from parallax in aerial photography.

Stereoscopic spectroscopy is a special case of the more general field of tomographic spectroscopy. Both types of imaging use an analogy between the data space of imaging spectrographs and the conventional 3-space of the physical world. Each spectral order in the instrument produces an image plane analogous to the view from a camera with a particular look angle through the data space, and recombining the views allows recovery of (some aspects of) the spectrum at every location in the image.

XpanD 3D

XPAND 3D developed active-shutter 3D solutions for multiple purposes. The company was founded by Maria Costeira and Ami Dror in 1995 as X6D Limited. Its systems were deployed in over 15,000 cinemas worldwide.
