3D computer graphics

3D computer graphics, or three-dimensional computer graphics (in contrast to 2D computer graphics), are graphics that use a three-dimensional representation of geometric data (often Cartesian) stored in the computer for the purposes of performing calculations and rendering 2D images. The resulting images may be stored for later viewing or displayed in real time.

3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, 2D applications may use 3D techniques to achieve effects such as lighting, and 3D may use 2D rendering techniques.

3D computer graphics are often referred to as 3D models. Apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences: a 3D model is the mathematical representation of any three-dimensional object, and a model is not technically a graphic until it is displayed. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations. With 3D printing, 3D models are similarly rendered into a physical three-dimensional representation, with limits on how closely the physical object can match the virtual model.[1]


William Fetter was credited with coining the term computer graphics in 1961[2][3] to describe his work at Boeing. One of the first displays of computer animation was Futureworld (1976), which included an animation of a human face and a hand that had originally appeared in the 1972 experimental short A Computer Animated Hand, created by University of Utah students Edwin Catmull and Fred Parke.[4]

3D computer graphics software began appearing for home computers in the late 1970s. The earliest known example is 3D Art Graphics, a set of 3D computer graphics effects, written by Kazumasa Mitazawa and released in June 1978 for the Apple II.[5][6]


3D computer graphics creation falls into three basic phases:

  1. 3D modeling – the process of forming a computer model of an object's shape
  2. Layout and animation – the placement and movement of objects within a scene
  3. 3D rendering – the computer calculations that, based on light placement, surface types, and other qualities, generate the image


The model describes the process of forming the shape of an object. The two most common sources of 3D models are those that an artist or engineer originates on the computer with some kind of 3D modeling tool, and models scanned into a computer from real-world objects. Models can also be produced procedurally or via physical simulation. At its simplest, a 3D model is formed from points called vertices (or vertexes) that define the shape and form polygons. A polygon is an area formed from at least three vertices (a triangle); a polygon of n points is an n-gon.[7] The overall integrity of the model and its suitability for use in animation depend on the structure of the polygons.
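The vertex-and-polygon structure described above can be sketched minimally: a model is little more than a list of points plus polygons that index into that list. The coordinates here are illustrative, not from any particular format.

```python
# A minimal sketch of how a 3D model can be stored: vertices are
# (x, y, z) points, and each polygon is a tuple of indices into the
# vertex list. Three indices make a triangle, the simplest polygon.

vertices = [
    (0.0, 0.0, 0.0),  # vertex 0
    (1.0, 0.0, 0.0),  # vertex 1
    (0.0, 1.0, 0.0),  # vertex 2
]

# One triangle (a 3-gon) referencing the vertices above by index.
polygons = [(0, 1, 2)]

def polygon_points(poly):
    """Resolve a polygon's vertex indices back to coordinates."""
    return [vertices[i] for i in poly]

print(polygon_points(polygons[0]))
```

Storing indices rather than repeated coordinates is what lets neighboring polygons share vertices, which is why the structure of the polygons affects the integrity of the whole model.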

Materials and textures

Materials and textures are properties that the render engine uses to render the model. In an unbiased render engine such as Blender's Cycles, materials tell the engine how to treat light when it hits the surface. Textures give the material color using a color or albedo map, or add surface features using a bump or normal map. A texture can also deform the model itself using a displacement map.
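At its core, applying a color (albedo) map means mapping a surface's UV coordinates to a texel in an image. A hedged sketch, with a made-up 2x2 "texture" standing in for a real image and nearest-neighbor lookup standing in for the filtering a real engine would use:

```python
# Illustrative albedo-map lookup: UV coordinates in [0, 1] are scaled
# to texel indices. The tiny 2x2 texture here is a stand-in for an image.

texture = [
    [(255, 0, 0), (0, 255, 0)],    # row 0: red, green
    [(0, 0, 255), (255, 255, 0)],  # row 1: blue, yellow
]

def sample_nearest(tex, u, v):
    """Nearest-neighbor lookup: scale UV to the texel grid and clamp."""
    h, w = len(tex), len(tex[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return tex[y][x]

print(sample_nearest(texture, 0.1, 0.1))  # lands in the top-left texel
```

Bump, normal, and displacement maps are sampled the same way; they just feed the looked-up value into shading or geometry instead of color.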

Layout and animation

Before rendering into an image, objects must be laid out in a scene. This defines spatial relationships between objects, including location and size. Animation refers to the temporal description of an object (i.e., how it moves and deforms over time). Popular methods include keyframing, inverse kinematics, and motion capture; these techniques are often used in combination. As with animation, physical simulation also specifies motion.
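Keyframing, the first of the methods mentioned, can be sketched in a few lines: positions are specified only at key times, and frames in between are interpolated. The linear interpolation and the sample keyframes below are illustrative; production tools use richer curves.

```python
# Keyframing sketch: a dictionary maps key times to positions, and
# in-between frames are linearly interpolated.

keyframes = {0.0: (0.0, 0.0, 0.0), 2.0: (4.0, 0.0, 0.0)}  # time -> (x, y, z)

def interpolate(keys, t):
    times = sorted(keys)
    if t <= times[0]:
        return keys[times[0]]
    if t >= times[-1]:
        return keys[times[-1]]
    for t0, t1 in zip(times, times[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)  # fraction of the way from t0 to t1
            p0, p1 = keys[t0], keys[t1]
            return tuple(x0 + a * (x1 - x0) for x0, x1 in zip(p0, p1))

print(interpolate(keyframes, 1.0))  # halfway: (2.0, 0.0, 0.0)
```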


Rendering converts a model into an image either by simulating light transport to get photo-realistic images, or by applying an art style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). This step is usually performed using 3D computer graphics software or a 3D graphics API. Altering the scene into a suitable form for rendering also involves 3D projection, which displays a three-dimensional image in two dimensions. Although 3D modeling and CAD software may perform 3D rendering as well (e.g. Autodesk 3ds Max or Blender), exclusive 3D rendering software also exists.
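The 3D projection step mentioned above can be illustrated with the simplest case, a pinhole (perspective) projection with the camera at the origin looking down the negative z axis. The focal length and sample point are assumptions for the sketch.

```python
# Minimal perspective projection: a 3D point is mapped onto the 2D image
# plane by dividing by its distance along the view axis, so farther
# points land closer to the image center.

def project(point, focal=1.0):
    x, y, z = point
    if z >= 0:
        raise ValueError("point is behind the camera")
    # Perspective divide (camera looks down -z, so depth is -z).
    return (focal * x / -z, focal * y / -z)

print(project((1.0, 2.0, -2.0)))  # (0.5, 1.0)
```

Real pipelines wrap this divide in a 4x4 projection matrix and follow it with clipping and a viewport transform, but the perspective divide is the heart of displaying three dimensions in two.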

Far left: A 3D rendering with ray tracing and ambient occlusion using Blender and YafaRay

Center left: A 3D model of a Dunkerque-class battleship rendered with flat shading

Center right: During the 3D rendering step, the number of reflections light rays can take, as well as various other attributes, can be tailored to achieve a desired visual effect. Rendered with Cobalt.

Far right: Experience Curiosity, a real-time web application which leverages 3D rendering capabilities of browsers (WebGL)



3D computer graphics software produces computer-generated imagery (CGI) through 3D modeling and 3D rendering or produces 3D models for analytic, scientific and industrial purposes.


3D modeling software is a class of 3D computer graphics software used to produce 3D models. Individual programs of this class are called modeling applications or modelers.

3D modelers allow users to create and alter models via their 3D mesh. Users can add, subtract, stretch and otherwise change the mesh to their desire. Models can be viewed from a variety of angles, usually simultaneously. Models can be rotated and the view can be zoomed in and out.

3D modelers can export their models to files, which can then be imported into other applications as long as the metadata are compatible. Many modelers allow importers and exporters to be plugged-in, so they can read and write data in the native formats of other applications.

Most 3D modelers contain a number of related features, such as ray tracers and other rendering alternatives and texture mapping facilities. Some also contain features that support or allow animation of models. Some may be able to generate full-motion video of a series of rendered scenes (i.e. animation).

Computer-aided design

Computer-aided design software may employ the same fundamental 3D modeling techniques that 3D modeling software uses, but its goal differs. It is used in computer-aided engineering, computer-aided manufacturing, finite element analysis, product lifecycle management, 3D printing and computer-aided architectural design.

Complementary tools

After producing video, studios then edit or composite the video using programs such as Adobe Premiere Pro or Final Cut Pro at the mid-level, or Autodesk Combustion, Digital Fusion, or Shake at the high end. Match moving software is commonly used to match live video with computer-generated video, keeping the two in sync as the camera moves.

Use of real-time computer graphics engines to create a cinematic production is called machinima.


There are a multitude of websites designed to help, educate and support 3D graphic artists. Some are managed by software developers and content providers, but there are standalone sites as well. These communities allow members to seek advice, post tutorials, provide product reviews, or post examples of their own work.

Differences with other types of computer graphics

Distinction from photorealistic 2D graphics

Not all computer graphics that appear 3D are based on a wireframe model. 2D computer graphics with 3D photorealistic effects are often achieved without wireframe modeling and are sometimes indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster graphics on transparent layers. Visual artists may also copy or visualize 3D effects and manually render photorealistic effects without the use of filters.

Pseudo-3D and true 3D

Some video games use restricted projections of three-dimensional environments, such as isometric graphics or virtual cameras with fixed angles, either as a way to improve performance of the game engine, or for stylistic and gameplay concerns. Such games are said to use pseudo-3D graphics. By contrast, games using 3D computer graphics without such restrictions are said to use true 3D.

See also

Graphics and software

Fields of use


  1. ^ "3D computer graphics". ScienceDaily. Retrieved 2019-01-19.
  2. ^ "An Historical Timeline of Computer Graphics and Animation". Archived from the original on 2008-03-10. Retrieved 2009-07-22.
  3. ^ "Computer Graphics".
  4. ^ "Pixar founder's Utah-made Hand added to National Film Registry". The Salt Lake Tribune. December 28, 2011. Retrieved January 8, 2012.
  5. ^ "Brutal Deluxe Software". www.brutaldeluxe.fr.
  6. ^ "PROJECTS AND ARTICLES Retrieving Japanese Apple II programs". Archived from the original on 2016-10-05. Retrieved 2017-03-26.
  7. ^ Simmons, Bruce. "n-gon". MathWords. Retrieved 2018-11-30.

External links

3D modeling

In 3D computer graphics, 3D modeling is the process of developing a mathematical representation of any surface of an object (either inanimate or living) in three dimensions via specialized software. The product is called a 3D model. Someone who works with 3D models may be referred to as a 3D artist. It can be displayed as a two-dimensional image through a process called 3D rendering or used in a computer simulation of physical phenomena. The model can also be physically created using 3D printing devices.

Models may be created automatically or manually. The manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting.

3D modeling software is a class of 3D computer graphics software used to produce 3D models. Individual programs of this class are called modeling applications or modelers.


AC3D is a 3D design program which has been available since 1994. The software is used by designers for modeling 3D graphics for games and simulations; most notably it is used by the scenery creators at Laminar Research on the X-Plane simulator. The .ac format has also been used in FlightGear for scenery objects and aircraft models.

Algorithms-Aided Design (AAD)

Algorithms-Aided Design (AAD) is the use of specific algorithm editors to assist in the creation, modification, analysis, or optimization of a design. The algorithm editors are usually integrated with 3D modeling packages and read several programming languages, both scripted and visual (RhinoScript, Grasshopper, MEL, C#, Python). Algorithms-Aided Design allows designers to overcome the limitations of traditional CAD software and 3D computer graphics software, reaching a level of complexity beyond the human ability to interact with digital objects directly. The acronym first appeared in the book AAD Algorithms-Aided Design, Parametric Strategies using Grasshopper, published by Arturo Tedeschi in 2014.

Autodesk Maya

Autodesk Maya, commonly shortened to Maya, is a 3D computer graphics application that runs on Windows, macOS and Linux, originally developed by Alias Systems Corporation (formerly Alias|Wavefront) and currently owned and developed by Autodesk, Inc. It is used to create interactive 3D applications, including video games, animated films, TV series, and visual effects.

Autodesk Softimage

Autodesk Softimage, or simply Softimage, is a discontinued 3D computer graphics application for producing 3D computer graphics, 3D modeling, and computer animation. Now owned by Autodesk and formerly titled Softimage|XSI, the software has been predominantly used in the film, video game, and advertising industries for creating computer-generated characters, objects, and environments.

Released in 2000 as the successor to Softimage|3D, Softimage|XSI was developed by its eponymous company, then a subsidiary of Avid Technology. On October 23, 2008, Autodesk acquired the Softimage brand and 3D animation assets from Avid for approximately $35 million, thereby ending Softimage Co. as a distinct entity. In February 2009, Softimage|XSI was rebranded Autodesk Softimage.

A free version of the software, called Softimage Mod Tool, was developed for the game modding community to create games using the Microsoft XNA toolset for PC and Xbox 360, or to create mods for games using Valve Corporation's Source engine, Epic Games's Unreal Engine and others. It was discontinued with the release of Softimage 2014.

On March 4, 2014, it was announced that Autodesk Softimage would be discontinued after the release of the 2015 version, providing product support until April 30, 2016.

Bind pose

In computer animation, a bind pose, also known as a T-pose, is a default pose for a 3D model's skeleton before it is animated.

Constructive solid geometry

Constructive solid geometry (CSG) (formerly called computational binary solid geometry) is a technique used in solid modeling. Constructive solid geometry allows a modeler to create a complex surface or object by using Boolean operators to combine simpler objects, potentially generating visually complex objects from a few primitive ones. In 3D computer graphics and CAD, CSG is often used in procedural modeling. CSG can also be performed on polygonal meshes, and may or may not be procedural and/or parametric.
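One common way to realize the CSG Boolean operators is with signed distance functions, where a negative value means a point is inside the solid: union is a pointwise minimum, intersection a maximum, and difference combines a shape with its operand's negation. A sketch under that representation, with illustrative spheres:

```python
# CSG via signed distance functions (negative = inside the solid):
#   union(a, b)        -> min(a, b)
#   intersection(a, b) -> max(a, b)
#   difference(a, b)   -> max(a, -b)

import math

def sphere(center, radius):
    return lambda p: math.dist(p, center) - radius

def union(a, b):        return lambda p: min(a(p), b(p))
def intersection(a, b): return lambda p: max(a(p), b(p))
def difference(a, b):   return lambda p: max(a(p), -b(p))

s1 = sphere((0, 0, 0), 1.0)
s2 = sphere((1, 0, 0), 1.0)
lens = intersection(s1, s2)  # the lens-shaped overlap of the two spheres

print(lens((0.5, 0, 0)) < 0)  # True: the midpoint lies inside both spheres
```

Mesh-based CSG implementations instead clip and stitch the polygon boundaries of the operands, but the Boolean semantics are the same.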

Contrast CSG with polygon mesh modeling and box modeling.

Image plane

In 3D computer graphics, the image plane is that plane in the world which is identified with the plane of the display monitor used to view the image that is being rendered. It is also referred to as screen space. If one makes the analogy of taking a photograph to rendering a 3D image, the surface of the film is the image plane. In this case, the viewing transformation is a projection that maps the world onto the image plane. A rectangular region of this plane, called the viewing window or viewport, maps to the monitor. This establishes the mapping between pixels on the monitor and points (or rather, rays) in the 3D world. The plane is not usually an actual geometric object in a 3D scene, but instead is usually a collection of target coordinates or dimensions that are used during the rasterization process so the final output can be displayed as intended on the physical screen.
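The viewport mapping described above is a simple rescaling: a point in the viewing window (here assumed to be the square from -1 to 1 in both axes, i.e. normalized device coordinates) is scaled to the monitor's pixel grid. The window bounds and resolution are assumptions for the sketch.

```python
# Map a point on the image plane's viewing window ([-1, 1] x [-1, 1])
# to a pixel on a width x height monitor. The y axis is flipped because
# screen rows conventionally grow downward.

def to_pixel(x, y, width, height):
    px = (x + 1.0) / 2.0 * width
    py = (1.0 - (y + 1.0) / 2.0) * height
    return (int(px), int(py))

print(to_pixel(0.0, 0.0, 640, 480))  # image-plane center -> (320, 240)
```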

In optics, the image plane is the plane that contains the object's projected image, and lies beyond the back focal plane.

List of 3D computer graphics software

This list of 3D graphics software contains software packages related to the development and exploitation of 3D computer graphics. For a comparison see Comparison of 3D computer graphics software.

List of 3D rendering software

This page provides a list of 3D rendering software. This is not the same as 3D modeling software, which involves the creation of 3D models; the software listed below instead produces realistic rendered visualisations of such models. Also not included are general-purpose packages which can have their own built-in rendering capabilities; these can be found in the List of 3D computer graphics software and List of 3D animation software. See 3D computer graphics software for more discussion about the distinctions.

MASSIVE (software)

MASSIVE (Multiple Agent Simulation System in Virtual Environment) is a high-end computer animation and artificial intelligence software package used for generating crowd-related visual effects for film and television.

Morph target animation

Morph target animation, per-vertex animation, shape interpolation, shape keys, or blend shapes is a method of 3D computer animation used together with techniques such as skeletal animation. In a morph target animation, a "deformed" version of a mesh is stored as a series of vertex positions. In each key frame of an animation, the vertices are then interpolated between these stored positions.
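The interpolation described above is straightforward to sketch: the base mesh and a deformed target store the same vertices, and each frame blends between them by a weight. The two-vertex shapes below are illustrative.

```python
# Morph target (blend shape) sketch: interpolate every vertex between a
# base mesh and a stored "deformed" target by a weight in [0, 1].

base   = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
target = [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]  # every vertex raised by 1

def blend(base, target, weight):
    return [tuple(b + weight * (t - b) for b, t in zip(bv, tv))
            for bv, tv in zip(base, target)]

print(blend(base, target, 0.5))  # halfway between the two shapes
```

Real systems blend several targets at once (e.g. facial expressions), summing weighted offsets from the base, but each target works exactly as above.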

Nvidia RTX

Nvidia RTX is a development platform for rendering graphics that was created by Nvidia, primarily for real-time ray tracing. Ray tracing is typically utilized in instances where image creation is not display time sensitive (like films), meaning that applications such as video games have had to rely on rasterization for their rendering. RTX facilitates a new development in computer graphics of generating images that react to lighting, shadows, reflections and such in real time. RTX runs on Nvidia Volta- and Turing-based GPUs, specifically utilizing the Tensor cores (and new RT cores on Turing) on the architectures for ray tracing acceleration. Nvidia worked with Microsoft to integrate RTX support with Microsoft's DirectX Raytracing API (DXR). RTX is currently available through Nvidia OptiX and for Microsoft DirectX, and is in development for Vulkan.

Pixar RenderMan

Pixar RenderMan (formerly PhotoRealistic RenderMan) is proprietary photorealistic 3D rendering software produced by Pixar Animation Studios. Pixar uses RenderMan to render their in-house 3D animated movie productions and it is also available as a commercial product licensed to third parties.

On May 30, 2014, Pixar announced it would offer a free non-commercial version of RenderMan that would be available to download in August 2014. The product's release was postponed to early 2015. As of March 23, 2015, RenderMan is available for non-commercial use.

Polygon (computer graphics)

Polygons are used in computer graphics to compose images that are three-dimensional in appearance. Usually (but not always) triangular, polygons arise when an object's surface is modeled, vertices are selected, and the object is rendered in a wire frame model. This is quicker to display than a shaded model; thus the polygons are a stage in computer animation. The polygon count refers to the number of polygons being rendered per frame.

Radiosity (computer graphics)

In 3D computer graphics, radiosity is an application of the finite element method to solving the rendering equation for scenes with surfaces that reflect light diffusely. Unlike rendering methods that use Monte Carlo algorithms (such as path tracing), which handle all types of light paths, typical radiosity only accounts for paths (represented by the code "LD*E") which leave a light source and are reflected diffusely some number of times (possibly zero) before hitting the eye. Radiosity is a global illumination algorithm in the sense that the illumination arriving on a surface comes not just directly from the light sources, but also from other surfaces reflecting light. Radiosity is viewpoint independent, which increases the calculations involved, but makes them useful for all viewpoints.
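The finite element formulation boils down to the radiosity equation B_i = E_i + rho_i * sum_j F_ij * B_j, where B is each patch's radiosity, E its emission, rho its diffuse reflectivity, and F_ij the form factor between patches. A toy two-patch solve by simple iteration; the emission, reflectivity, and form-factor numbers are made up for illustration:

```python
# Solve the radiosity system B_i = E_i + rho_i * sum_j F[i][j] * B_j
# by repeated substitution (Jacobi iteration) for a toy two-patch scene.

E   = [1.0, 0.0]      # patch 0 emits light, patch 1 does not
rho = [0.5, 0.8]      # diffuse reflectivities
F   = [[0.0, 0.4],    # F[i][j]: fraction of patch i's light reaching j
       [0.4, 0.0]]

B = E[:]              # initial guess: radiosity = emission only
for _ in range(50):   # iterate toward the fixed point
    B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(2))
         for i in range(2)]

print(round(B[0], 3), round(B[1], 3))
```

The iteration converges because each bounce loses energy (rho * F < 1); the unlit patch 1 ends up with nonzero radiosity purely from light reflected off patch 0, which is exactly the global-illumination behavior described above.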

Radiosity methods were first developed in about 1950 in the engineering field of heat transfer. They were later refined specifically for the problem of rendering computer graphics in 1984 by researchers at Cornell University and Hiroshima University. Notable commercial radiosity engines are Enlighten by Geomerics (used for games including Battlefield 3 and Need for Speed: The Run); 3ds Max; form•Z; LightWave 3D; and the Electric Image Animation System.

Remo 3D

Remo 3D is a 3D computer graphics application specialized in creating 3D models for realtime visualization. As opposed to many other 3D modeling products, which are primarily intended for rendering, Remo 3D focuses on supporting realtime features such as full control of the model scene graph and modification of features like degrees-of-freedom (DOF) nodes, levels of detail (LOD), switches, etc. Remo 3D's primary file format is OpenFlight, and it allows importing from and exporting to different file formats. This makes Remo 3D suitable for creating realtime 3D models intended for use in virtual reality software, simulators, and computer games.

The product is developed by the Swedish company Remograph, and it has been on the market since 2005. It has users worldwide, both private and governmental, in defence and civil industries.

Remo 3D has been described in several independent articles, for instance at the vr-news and modsim sites, as well as in the Defence Management Journal. Remo 3D is developed using OpenSceneGraph and FLTK, and is scriptable using the Lua programming language.

Skeletal animation

Skeletal animation is a technique in computer animation in which a character (or other articulated object) is represented in two parts: a surface representation used to draw the character (called skin or mesh) and a hierarchical set of interconnected bones (called the skeleton or rig) used to animate (pose and keyframe) the mesh. While this technique is often used to animate humans or more generally for organic modeling, it only serves to make the animation process more intuitive, and the same technique can be used to control the deformation of any object, such as a door, a spoon, a building, or a galaxy. When the animated object is more general than, for example, a humanoid character, the set of bones may not be hierarchical or interconnected, but simply represents a higher-level description of the motion of the part of the mesh or skin it influences.
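A common way the skeleton deforms the skin is linear blend skinning: each skin vertex follows a weighted average of where its influencing bones would carry it. A sketch in 2D, with bone transforms reduced to translations to keep it short; the weights and offsets are illustrative:

```python
# Linear blend skinning sketch: a vertex's deformed position is the
# weighted sum of the positions each influencing bone would move it to.
# Bone transforms are plain 2D translations here for brevity.

def skin_vertex(vertex, influences):
    """influences: list of (weight, bone_translation) pairs summing to 1."""
    x, y = vertex
    out_x = sum(w * (x + tx) for w, (tx, ty) in influences)
    out_y = sum(w * (y + ty) for w, (tx, ty) in influences)
    return (out_x, out_y)

# A vertex influenced equally by two bones: one stays put, one moves up 2.
print(skin_vertex((1.0, 0.0), [(0.5, (0.0, 0.0)), (0.5, (0.0, 2.0))]))
```

In production rigs the translations become full bone matrices, but the weighted blend per vertex is the same idea, which is what lets a smooth skin bend across joints.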

The technique was introduced in 1988 by Nadia Magnenat Thalmann, Richard Laperrière, and Daniel Thalmann. This technique is used in virtually all animation systems, where simplified user interfaces allow animators to control often complex algorithms and a huge amount of geometry, most notably through inverse kinematics and other "goal-oriented" techniques. In principle, however, the intention of the technique is never to imitate real anatomy or physical processes, but only to control the deformation of the mesh data.

Vertex (geometry)

In geometry, a vertex (plural: vertices or vertexes) is a point where two or more curves, lines, or edges meet. As a consequence of this definition, the point where two lines meet to form an angle and the corners of polygons and polyhedra are vertices.

This page is based on a Wikipedia article written by authors (here).
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.