Computer animation

Computer animation is the process used for digitally generating animated images. The more general term computer-generated imagery (CGI) encompasses both static scenes and dynamic images, while computer animation only refers to the moving images. Modern computer animation usually uses 3D computer graphics, although 2D computer graphics are still used for stylistic, low-bandwidth, and faster real-time renderings. Sometimes the target of the animation is the computer itself; other times it is film.

Computer animation is essentially a digital successor to the stop motion techniques using 3D models and the traditional animation techniques using frame-by-frame animation of 2D illustrations. Computer-generated animations are more controllable than other, more physically based processes, such as constructing miniatures for effects shots or hiring extras for crowd scenes, and they allow the creation of images that would not be feasible using any other technology. Computer animation can also allow a single graphic artist to produce such content without the use of actors, expensive set pieces, or props. To create the illusion of movement, an image is displayed on the computer monitor and repeatedly replaced by a new image that is similar to it but advanced slightly in time (usually at a rate of 24, 25 or 30 frames/second). This technique is identical to how the illusion of movement is achieved with television and motion pictures.

For 3D animations, objects (models) are built on the computer monitor (modeled) and 3D figures are rigged with a virtual skeleton. For 2D figure animations, separate objects (illustrations) and separate transparent layers are used with or without that virtual skeleton. Then the limbs, eyes, mouth, clothes, etc. of the figure are moved by the animator on key frames. The differences in appearance between key frames are automatically calculated by the computer in a process known as tweening or morphing. Finally, the animation is rendered.[1]

For 3D animations, all frames must be rendered after the modeling is complete. For 2D vector animations, the rendering process is the key frame illustration process, while tweened frames are rendered as needed. For pre-recorded presentations, the rendered frames are transferred to a different format or medium, like digital video. The frames may also be rendered in real time as they are presented to the end-user audience. Low-bandwidth animations transmitted via the internet (e.g. Adobe Flash, X3D) often use software on the end-user's computer to render in real time as an alternative to streaming or pre-loaded high-bandwidth animations.

An example of computer animation produced using the motion capture technique

Explanation

To trick the eye and the brain into thinking they are seeing a smoothly moving object, the pictures should be drawn at around 12 frames per second or faster.[2] (A frame is one complete image.) With rates above 75-120 frames per second, no improvement in realism or smoothness is perceivable due to the way the eye and the brain both process images. At rates below 12 frames per second, most people can detect jerkiness associated with the drawing of new images that detracts from the illusion of realistic movement.[3] Conventional hand-drawn cartoon animation often uses 15 frames per second in order to save on the number of drawings needed, but this is usually accepted because of the stylized nature of cartoons. To produce more realistic imagery, computer animation demands higher frame rates.
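
As a simple illustration of these trade-offs, the short Python sketch below (illustrative only; the frame rates and shot length are taken as examples) relates a frame rate to the time each frame stays on screen and to the number of frames that must be produced for a shot.

# Illustrative sketch: relate frame rate to frame duration and frame count.

def frame_duration_ms(fps):
    """Time each frame stays on screen, in milliseconds."""
    return 1000.0 / fps

def frames_for_shot(seconds, fps):
    """Number of frames that must be drawn or rendered for a shot."""
    return round(seconds * fps)

for fps in (12, 15, 24, 30):
    print(f"{fps:>2} fps: {frame_duration_ms(fps):5.1f} ms per frame, "
          f"{frames_for_shot(10, fps)} frames for a 10-second shot")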

Films seen in theaters in the United States run at 24 frames per second, which is sufficient to create the illusion of continuous movement. For high resolution, adapters are used.

History

Early digital computer animation was developed at Bell Telephone Laboratories in the 1960s by Edward E. Zajac, Frank W. Sinden, Kenneth C. Knowlton, and A. Michael Noll.[4] Other digital animation was also practiced at the Lawrence Livermore National Laboratory.[5]

In 1967, a computer animation named "Hummingbird" was created by Charles Csuri and James Shaffer.[6]

In 1968, a computer animation called "Kitty" was created with BESM-4 by Nikolai Konstantinov, depicting a cat moving around.[7]

In 1971, a computer animation called "Metadata" was created, showing various shapes.[8]

An early step in the history of computer animation was the sequel to the 1973 film Westworld, a science-fiction film about a society in which robots live and work among humans.[9] The sequel, Futureworld (1976), used 3D wire-frame imagery, featuring a computer-animated hand and face both created by University of Utah graduates Edwin Catmull and Fred Parke.[10] This imagery originally appeared in their student film A Computer Animated Hand, which they completed in 1972.[11][12]

Developments in CGI technologies are reported each year at SIGGRAPH,[13] an annual conference on computer graphics and interactive techniques that is attended by thousands of computer professionals each year.[14] Developers of computer games and 3D video cards strive to achieve the same visual quality on personal computers in real-time as is possible for CGI films and animation. With the rapid advancement of real-time rendering quality, artists began to use game engines to render non-interactive movies, which led to the art form Machinima.

The very first full length computer animated television series was ReBoot,[15] which debuted in September 1994; the series followed the adventures of characters who lived inside a computer.[16] The first feature-length computer animated film was Toy Story (1995), which was made by Pixar.[17][18][19] It followed an adventure centered around toys and their owners. This groundbreaking film was also the first of many fully computer-animated movies.[18]

Animation methods

In this .gif of a 2D Flash animation, each 'stick' of the figure is keyframed over time to create motion.

In most 3D computer animation systems, an animator creates a simplified representation of a character's anatomy, which is analogous to a skeleton or stick figure.[20] Its segments are arranged into a default position known as a bind pose. The position of each segment of the skeletal model is defined by animation variables, or Avars for short. In human and animal characters, many parts of the skeletal model correspond to the actual bones, but skeletal animation is also used to animate other things, such as facial features (though other methods for facial animation exist).[21] The character "Woody" in Toy Story, for example, uses 700 Avars (100 in the face alone). The computer does not usually render the skeletal model directly (it is invisible), but it does use the skeletal model to compute the exact position and orientation of the character, which is eventually rendered into an image. Thus, by changing the values of Avars over time, the animator creates motion by making the character move from frame to frame.

There are several methods for generating the Avar values to obtain realistic motion. Traditionally, animators manipulate the Avars directly.[22] Rather than set Avars for every frame, they usually set Avars at strategic points (frames) in time and let the computer interpolate or tween between them in a process called keyframing. Keyframing puts control in the hands of the animator and has roots in hand-drawn traditional animation.[23]
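
A minimal Python sketch of this idea follows (the Avar name, key frame numbers and values are invented for the example): the animator sets values only at key frames, and the computer tweens every frame in between by linear interpolation.

# Illustrative keyframing sketch: Avar values are set at key frames only,
# and intermediate frames are tweened by linear interpolation.
# The Avar "elbow_bend" and its keyframe values are hypothetical.

keyframes = {0: 0.0, 12: 45.0, 24: 10.0}   # frame number -> Avar value (degrees)

def tween(keyframes, frame):
    """Linearly interpolate an Avar value for any frame between key frames."""
    frames = sorted(keyframes)
    if frame <= frames[0]:
        return keyframes[frames[0]]
    if frame >= frames[-1]:
        return keyframes[frames[-1]]
    for f0, f1 in zip(frames, frames[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return keyframes[f0] + t * (keyframes[f1] - keyframes[f0])

# The animator set 3 key frames; the computer fills in all 25 frames.
for frame in range(25):
    print(frame, round(tween(keyframes, frame), 1))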

In contrast, a newer method called motion capture makes use of live action footage.[24] When computer animation is driven by motion capture, a real performer acts out the scene as if they were the character to be animated.[25] The performer's motion is recorded to a computer using video cameras and markers, and that performance is then applied to the animated character.[26]

Each method has its advantages, and as of 2007, games and films are using either or both of these methods in productions. Keyframe animation can produce motions that would be difficult or impossible to act out, while motion capture can reproduce the subtleties of a particular actor.[27] For example, in the 2006 film Pirates of the Caribbean: Dead Man's Chest, Bill Nighy provided the performance for the character Davy Jones. Even though Nighy does not appear in the movie himself, the movie benefited from his performance by recording the nuances of his body language, posture, facial expressions, etc. Thus motion capture is appropriate in situations where believable, realistic behavior and action is required, but the types of characters required exceed what can be done with conventional costuming.

Modeling

3D computer animation combines 3D models of objects and programmed or hand "keyframed" movement. These models are constructed out of geometrical vertices, faces, and edges in a 3D coordinate system. Objects are sculpted much like real clay or plaster, working from general forms to specific details with various sculpting tools. Unless a 3D model is intended to be a solid color, it must be painted with "textures" for realism. A bone/joint animation system is set up to deform the CGI model (e.g., to make a humanoid model walk). In a process known as rigging, the virtual marionette is given various controllers and handles for controlling movement.[28] Animation data can be created using motion capture, or keyframing by a human animator, or a combination of the two.[29]
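
A minimal sketch of how such a model might be stored is given below in Python (the cube data and the translate helper are purely illustrative, not any particular package's format): a mesh is a list of vertices in a 3D coordinate system plus faces that index into it, and a rig ultimately drives transforms applied to those vertices.

# Illustrative mesh sketch: a unit cube stored as vertices and faces.
# Each face is a list of indices into the vertex list. Names are hypothetical.

vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # bottom four corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # top four corners
]
faces = [
    (0, 1, 2, 3), (4, 5, 6, 7),                   # bottom, top
    (0, 1, 5, 4), (2, 3, 7, 6),                   # front, back
    (1, 2, 6, 5), (0, 3, 7, 4),                   # right, left
]

def translate(vertices, dx, dy, dz):
    """Move the whole model; a rig would drive many such transforms."""
    return [(x + dx, y + dy, z + dz) for (x, y, z) in vertices]

vertices = translate(vertices, 0.0, 2.0, 0.0)     # e.g. lift the cube upward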

3D models rigged for animation may contain thousands of control points; for example, "Woody" from Toy Story uses 700 specialized animation controllers. Rhythm and Hues Studios labored for two years to create Aslan in the movie The Chronicles of Narnia: The Lion, the Witch and the Wardrobe, which had about 1,851 controllers (742 in the face alone). In the 2004 film The Day After Tomorrow, designers had to create the effects of extreme weather with the help of video references and accurate meteorological facts. For the 2005 remake of King Kong, actor Andy Serkis was used to help designers pinpoint the gorilla's prime location in the shots, and his expressions were used to model "human" characteristics onto the creature. Serkis had earlier provided the voice and performance for Gollum in J. R. R. Tolkien's The Lord of the Rings trilogy.

Equipment

A ray-traced 3-D model of a jack inside a cube, and the jack alone below.

Computer animation can be created with a computer and animation software. Some impressive animation can be achieved even with basic programs; however, the rendering can take a lot of time on an ordinary home computer.[30] Professional animators of movies, television and video games can make photorealistic animation with high detail. This level of quality for movie animation would take hundreds of years to create on a home computer. Instead, many powerful workstation computers are used.[31] Graphics workstation computers use two to four processors, are a lot more powerful than a typical home computer, and are specialized for rendering. A large number of workstations (known as a "render farm") are networked together to effectively act as a giant computer.[32] The result is a computer-animated movie that can be completed in about one to five years (however, this process is not composed solely of rendering). A workstation typically costs $2,000 to $16,000, with the more expensive stations being able to render much faster due to the more technologically advanced hardware that they contain. Professionals also use digital movie cameras, motion/performance capture, bluescreens, film editing software, props, and other tools used for movie animation.
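
The division of labour on a render farm can be sketched very simply in Python (illustrative only; real render managers are far more elaborate, and the node names, frame range and chunk size below are invented): the frame range of a shot is split into chunks and each chunk is assigned to a different networked workstation.

# Illustrative render-farm sketch: split a frame range across workstations.

nodes = ["node01", "node02", "node03", "node04"]
first_frame, last_frame, chunk = 1, 240, 10

jobs = []
for start in range(first_frame, last_frame + 1, chunk):
    end = min(start + chunk - 1, last_frame)
    node = nodes[len(jobs) % len(nodes)]        # round-robin assignment
    jobs.append((node, start, end))

for node, start, end in jobs:
    print(f"{node}: render frames {start} to {end}")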

Facial animation

The realistic modeling of human facial features is both one of the most challenging and most sought-after elements in computer-generated imagery. Computer facial animation is a highly complex field where models typically include a very large number of animation variables.[33] Historically speaking, the first SIGGRAPH tutorials on the state of the art in facial animation, held in 1989 and 1990, proved to be a turning point in the field by bringing together and consolidating multiple research elements and sparking interest among a number of researchers.[34]

The Facial Action Coding System (with 46 "action units" such as "lip bite" or "squint"), which had been developed in 1976, became a popular basis for many systems.[35] As early as 2001, MPEG-4 included 68 Face Animation Parameters (FAPs) for lips, jaws, etc. The field has made significant progress since then, and the use of facial microexpressions has increased.[35][36]

In some cases, an affective space such as the PAD emotional state model can be used to assign specific emotions to the faces of avatars.[37] In this approach, the PAD model is used as a high-level emotional space, and the lower-level space is the MPEG-4 Facial Animation Parameters (FAPs). A mid-level Partial Expression Parameters (PEP) space is then used in a two-level structure: the PAD-PEP mapping and the PEP-FAP translation model.[38]
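
A very rough Python sketch of the two-level idea follows. All parameter names and numeric mappings below are invented for illustration and are not taken from the cited model: a high-level emotion expressed in PAD space is first mapped to a few mid-level partial-expression parameters, which are then translated to low-level FAP-style controls.

# Purely illustrative two-level mapping sketch (all values are made up):
# PAD (pleasure, arousal, dominance) -> mid-level PEP-style parameters
# -> low-level FAP-style controls.

def pad_to_pep(pleasure, arousal, dominance):
    """Hypothetical mapping from PAD space to partial expression parameters."""
    return {
        "smile":      max(0.0, pleasure),
        "brow_raise": max(0.0, arousal),
        "eye_open":   0.5 + 0.5 * dominance,
    }

def pep_to_fap(pep):
    """Hypothetical translation from PEP-style to FAP-style controls."""
    return {
        "lip_corner_stretch": 60 * pep["smile"],
        "eyebrow_raise":      40 * pep["brow_raise"],
        "eyelid_open":        30 * pep["eye_open"],
    }

print(pep_to_fap(pad_to_pep(pleasure=0.8, arousal=0.3, dominance=0.1)))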

Realism

Realism in computer animation can mean making each frame look photorealistic, in the sense that the scene is rendered to resemble a photograph, or making the characters' animation believable and lifelike.[39] Computer animation can also be realistic with or without photorealistic rendering.[40]

One of the greatest challenges in computer animation has been creating human characters that look and move with the highest degree of realism. Part of the difficulty in making pleasing, realistic human characters is the uncanny valley, the concept that, beyond a certain point, the human audience tends to have an increasingly negative emotional response as a human replica looks and acts more and more human. Films that have attempted photorealistic human characters, such as The Polar Express,[41][42][43] Beowulf,[44] and A Christmas Carol,[45][46] have been criticized as "creepy" and "disconcerting".

The goal of computer animation is not always to emulate live action as closely as possible, so many animated films instead feature characters who are anthropomorphic animals, fantasy creatures and characters, superheroes, or otherwise have non-realistic, cartoon-like proportions.[47] Computer animation can also be tailored to mimic or substitute for other kinds of animation, like traditional stop-motion animation (as shown in Flushed Away or The Lego Movie). Some of the long-standing basic principles of animation, like squash & stretch, call for movement that is not strictly realistic, and such principles still see widespread application in computer animation.[48]

Films

CGI film made using Machinima

CGI short films have been produced as independent animation since 1976.[49] An early example of an animated feature film to incorporate CGI animation was the 1983 Japanese anime film Golgo 13: The Professional.[50] The popularity of computer animation (especially in the field of special effects) skyrocketed during the modern era of U.S. animation.[51] The first completely computer-animated movie was Toy Story (1995), but VeggieTales (made in 1993) was the first American fully 3D computer-animated series sold directly to video; its success inspired other animation series, such as ReBoot in 1994.

Animation studios

Some notable producers of computer-animated feature films include:

Web animations

The popularity of websites that allow members to upload their own movies for others to view has created a growing community of amateur computer animators.[52] With utilities and programs often included free with modern operating systems, many users can make their own animated movies and shorts. Several free and open-source animation software applications exist as well. The ease with which these animations can be distributed has also attracted professional animation talent. Companies such as PowToon and GoAnimate attempt to bridge the gap by giving amateurs access to professional animations as clip art.

The oldest (most backward compatible) web-based animations are in the animated GIF format, which can be uploaded and seen on the web easily.[53] However, the raster graphics format of GIF animations slows the download and frame rate, especially with larger screen sizes. The growing demand for higher quality web-based animations was met by a vector graphics alternative that relied on the use of a plugin. For decades, Flash animations were the most popular format, until the web development community abandoned support for the Flash player plugin. Web browsers on mobile devices and mobile operating systems never fully supported the Flash plugin.

By this time, internet bandwidth and download speeds had increased, making raster graphic animations more convenient. Some of the more complex vector graphic animations rendered more slowly, and therefore had a lower frame rate, than some of the raster graphic alternatives. Many of the GIF and Flash animations were converted to digital video formats, which were compatible with mobile devices and reduced file sizes via video compression technology. However, compatibility was still problematic, as some of the popular video formats such as Apple's QuickTime and Microsoft Silverlight required plugins. YouTube, the most popular video viewing website, also relied on the Flash plugin to deliver digital video in the Flash Video format.

The latest alternatives are HTML5 compatible animations. Technologies such as JavaScript and CSS animations made sequencing the movement of images in HTML5 web pages more convenient. SVG animations offered a vector graphic alternative to the original Flash graphic format, SmartSketch. YouTube offers an HTML5 alternative for digital video. APNG (Animated PNG) offered a raster graphic alternative to animated GIF files that enables multi-level transparency not available in GIFs.

Detailed examples and pseudocode

In 2D computer animation, moving objects are often referred to as "sprites." A sprite is an image that has a location associated with it. The location of the sprite is changed slightly, between each displayed frame, to make the sprite appear to move.[54] The following pseudocode makes a sprite move from left to right:

var int x := 0, y := screenHeight / 2;
while x < screenWidth
    drawBackground()
    drawSpriteAtXY(x, y)   // draw on top of the background
    x := x + 5             // move to the right
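
The same loop can be written as runnable code. The sketch below is one possible realization in Python, assuming the pygame library is available; the window size, speed, and the red square standing in for the sprite are placeholders rather than part of the original pseudocode.

# Runnable version of the sprite-motion pseudocode above, using pygame.
import pygame

pygame.init()
screen_width, screen_height = 640, 480
screen = pygame.display.set_mode((screen_width, screen_height))
clock = pygame.time.Clock()

sprite = pygame.Surface((32, 32))
sprite.fill((255, 0, 0))              # a plain red square stands in for the sprite

x, y = 0, screen_height // 2
running = True
while running and x < screen_width:
    for event in pygame.event.get():  # allow the window to be closed
        if event.type == pygame.QUIT:
            running = False
    screen.fill((0, 0, 0))            # draw the background
    screen.blit(sprite, (x, y))       # draw the sprite on top of the background
    pygame.display.flip()
    x += 5                            # move to the right
    clock.tick(30)                    # about 30 frames per second

pygame.quit()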

Computer animation uses different techniques to produce animations. Most frequently, sophisticated mathematics is used to manipulate complex three-dimensional polygons, apply "textures", lighting and other effects to the polygons, and finally render the complete image. A sophisticated graphical user interface may be used to create the animation and arrange its choreography. Another technique, called constructive solid geometry, defines objects by conducting boolean operations on regular shapes, and has the advantage that animations may be accurately produced at any resolution.
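
The boolean approach can be illustrated with signed distance functions, one common way of realizing constructive solid geometry; the shapes and operations below are a simplified Python sketch, not a full CSG system. Union, intersection and difference of two primitives reduce to min and max, and because the shapes are defined mathematically they can be evaluated at any resolution.

# Illustrative constructive solid geometry sketch using signed distance
# functions: a point is inside a shape where its distance value is negative.
import math

def sphere(cx, cy, cz, r):
    return lambda x, y, z: math.dist((x, y, z), (cx, cy, cz)) - r

def box(cx, cy, cz, half):
    return lambda x, y, z: max(abs(x - cx), abs(y - cy), abs(z - cz)) - half

def union(a, b):      return lambda x, y, z: min(a(x, y, z), b(x, y, z))
def intersect(a, b):  return lambda x, y, z: max(a(x, y, z), b(x, y, z))
def difference(a, b): return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# A cube with a spherical bite taken out of one corner.
shape = difference(box(0, 0, 0, 1.0), sphere(1.0, 1.0, 1.0, 0.8))
print(shape(0, 0, 0) < 0)          # True: the centre of the cube is still solid
print(shape(0.9, 0.9, 0.9) < 0)    # False: this corner has been cut away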

Computer-assisted vs. computer-generated

To animate means, figuratively, to "give life to". There are two basic methods that animators commonly use to accomplish this.

Computer-assisted animation is usually classed as two-dimensional (2D) animation. Drawings are either hand drawn (pencil to paper) or interactively drawn (on the computer) using different assisting appliances, and are then placed into specific software packages. Within the software package, the creator places drawings into different key frames which fundamentally create an outline of the most important movements.[55] The computer then fills in the "in-between frames", a process commonly known as tweening.[56] Computer-assisted animation employs new technologies to produce content faster than is possible with traditional animation, while still retaining the stylistic elements of traditionally drawn characters or objects.[57]

Examples of films produced using computer-assisted animation are The Little Mermaid, The Rescuers Down Under, Beauty and the Beast, Aladdin, The Lion King, Pocahontas, The Hunchback of Notre Dame, Hercules, Mulan, The Road to El Dorado and Tarzan.

Computer-generated animation is known as three-dimensional (3D) animation. Creators design an object or character with an X, a Y and a Z axis; unlike computer-assisted animation, no pencil-to-paper drawings are involved. The object or character is then taken into animation software, where key framing and tweening are carried out as in computer-assisted animation, along with many techniques that have no counterpart in traditional animation. Animators can break physical laws by using mathematical algorithms to cheat mass, force and gravity. Fundamentally, time scale and quality are two major aspects that are enhanced by using computer-generated animation. Another positive aspect of CGA is that a flock of creatures can be made to act independently when created as a group. An animal's fur can be programmed to wave in the wind and lie flat when it rains instead of programming each strand of hair separately.[57]

A few examples of computer-generated animation movies are Toy Story, Frozen, and Shrek.


References

Citations

  1. ^ Sito 2013, p. 232.
  2. ^ Masson 1999, p. 148.
  3. ^ Parent 2012, pp. 100–101, 255.
  4. ^ Masson 1999, pp. 390–394.
  5. ^ Sito 2013, pp. 69–75.
  6. ^ [1]
  7. ^ [2]
  8. ^ [3]
  9. ^ Masson 1999, p. 404.
  10. ^ Masson 1999, pp. 282–288.
  11. ^ Sito 2013, p. 64.
  12. ^ Means 2011.
  13. ^ Sito 2013, pp. 97–98.
  14. ^ Sito 2013, pp. 95–97.
  15. ^ Sito 2013, p. 188.
  16. ^ Masson 1999, p. 430.
  17. ^ Masson 1999, p. 432.
  18. ^ a b Masson 1999, p. 302.
  19. ^ "Our Story", Pixar, 1986-2013. Retrieved on 2013-02-15. "The Pixar Timeline, 1979 to Present". Pixar. Archived from the original on 2015-09-05.
  20. ^ Parent 2012, pp. 193–196.
  21. ^ Parent 2012, pp. 324–326.
  22. ^ Parent 2012, pp. 111–118.
  23. ^ Sito 2013, p. 132.
  24. ^ Masson 1999, p. 118.
  25. ^ Masson 1999, pp. 94–98.
  26. ^ Masson 1999, p. 226.
  27. ^ Masson 1999, p. 204.
  28. ^ Parent 2012, p. 289.
  29. ^ Beane 2012, p. 2-15.
  30. ^ Masson 1999, p. 158.
  31. ^ Sito 2013, p. 144.
  32. ^ Sito 2013, p. 195.
  33. ^ Masson 1999, pp. 110–116.
  34. ^ Parke & Waters 2008, p. xi.
  35. ^ a b Magnenat Thalmann & Thalmann 2004, p. 122.
  36. ^ Pereira & Ebrahimi 2002, p. 404.
  37. ^ Pereira & Ebrahimi 2002, pp. 60–61.
  38. ^ Paiva, Prada & Picard 2007, pp. 24–33.
  39. ^ Masson 1999, pp. 160–161.
  40. ^ Parent 2012, pp. 14–17.
  41. ^ Zacharek, Stephanie (2004-11-10). "The Polar Express". Salon. Retrieved 2015-06-08.
  42. ^ Herman, Barbara (2013-10-30). "The 10 Scariest Movies and Why They Creep Us Out". Newsweek. Retrieved 2015-06-08.
  43. ^ Clinton, Paul (2004-11-10). "Review: 'Polar Express' a creepy ride". CNN. Retrieved 2015-06-08.
  44. ^ "Digital Actors in 'Beowulf' Are Just Uncanny". The New York Times. November 14, 2007.
  45. ^ Neumaier, Joe (November 5, 2009). "Blah, humbug! 'A Christmas Carol's 3-D spin on Dickens well done in parts but lacks spirit". New York Daily News. Retrieved October 10, 2015.
  46. ^ Williams, Mary Elizabeth (November 5, 2009). "Disney's 'A Christmas Carol': Bah, humbug!". Salon.com. Archived from the original on January 11, 2010. Retrieved October 10, 2015.
  47. ^ Sito 2013, p. 7.
  48. ^ Sito 2013, p. 59.
  49. ^ Masson 1999, p. 58.
  50. ^ Beck, Jerry (2005). The Animated Movie Guide. Chicago Review Press. p. 216. ISBN 1569762228.
  51. ^ Masson 1999, p. 52.
  52. ^ Sito 2013, pp. 82, 89.
  53. ^ Kuperberg 2002, pp. 112–113.
  54. ^ Masson 1999, p. 123.
  55. ^ Masson 1999, p. 115.
  56. ^ Masson 1999, p. 284.
  57. ^ a b Roos, Dave (2013). "How Computer Animation Works". HowStuffWorks. Retrieved 2013-02-15.

Works cited

  • Beane, Andy (2012). 3D Animation Essentials. Indianapolis, Indiana: John Wiley & Sons. ISBN 978-1-118-14748-1.
  • Kuperberg, Marcia (2002). A Guide to Computer Animation: For TV, Games, Multimedia and Web. Focal Press. ISBN 0-240-51671-0.
  • Magnenat Thalmann, Nadia; Thalmann, Daniel (2004). Handbook of Virtual Humans. Wiley Publishing. ISBN 0-470-02316-3.
  • Masson, Terrence (1999). CG 101: A Computer Graphics Industry Reference. Digital Fauxtography Inc. ISBN 0-7357-0046-X.
  • Means, Sean P. (December 28, 2011). "Pixar founder's Utah-made Hand added to National Film Registry". The Salt Lake Tribune. Retrieved January 8, 2012.
  • Paiva, Ana; Prada, Rui; Picard, Rosalind W. (2007). "Facial Expression Synthesis using PAD Emotional Parameters for a Chinese Expressive Avatar". Affective computing and intelligent interaction. Springer Science+Business Media. ISBN 3-540-74888-1.
  • Parent, Rick (2012). Computer Animation: Algorithms and Techniques. Ohio: Elsevier. ISBN 978-0-12-415842-9.
  • Pereira, Fernando C. N.; Ebrahimi, Touradj (2002). The MPEG-4 Book. New Jersey: IMSC Press. ISBN 0-13-061621-4.
  • Parke, Frederic I.; Waters, Keith (2008). Computer Facial Animation (2nd ed.). Massachusetts: A.K. Peters, Ltd. ISBN 1-56881-448-8.
  • Sito, Tom (2013). Moving Innovation: A History of Computer Animation. Massachusetts: MIT Press. ISBN 978-0-262-01909-5.


Animation

Animation is a method in which pictures are manipulated to appear as moving images. In traditional animation, images are drawn or painted by hand on transparent celluloid sheets to be photographed and exhibited on film. Today, most animations are made with computer-generated imagery (CGI). Computer animation can be very detailed 3D animation, while 2D computer animation can be used for stylistic reasons, low bandwidth or faster real-time renderings. Other common animation methods apply a stop motion technique to two and three-dimensional objects like paper cutouts, puppets or clay figures.

Commonly the effect of animation is achieved by a rapid succession of sequential images that minimally differ from each other. The illusion—as in motion pictures in general—is thought to rely on the phi phenomenon and beta movement, but the exact causes are still uncertain.

Analog mechanical animation media that rely on the rapid display of sequential images include the phénakisticope, zoetrope, flip book, praxinoscope and film. Television and video are popular electronic animation media that originally were analog and now operate digitally. For display on the computer, techniques like animated GIF and Flash animation were developed.

Animation is more pervasive than many people realise. Apart from short films, feature films, animated gifs and other media dedicated to the display of moving images, animation is also heavily used for video games, motion graphics and special effects. Animation is also prevalent in information technology interfaces. The physical movement of image parts through simple mechanics, for instance the moving images in magic lantern shows, can also be considered animation. The mechanical manipulation of puppets and objects to emulate living beings has a very long history in automata. Automata were popularised by Disney as animatronics.

Animators are artists who specialize in creating animation.

Animusic

Animusic is an American company specializing in the 3D visualization of MIDI-based music. Founded by Wayne Lytle, it is incorporated in New York, with offices in Texas and California. The initial name of the company was Visual Music, but it was changed to Animusic in 1995.

The company is known for its Animusic compilations of computer-generated animations, based on MIDI events processed to simultaneously drive the music and on-screen action, leading to and corresponding to every sound. The animated short "Pipe Dream" was shown at SIGGRAPH's Electronic Theater in 2001. Unlike many other music visualizations, the music drives the animation. While other productions may animate figures or characters to the music, the animated models in Animusic are created first, and are then programmed to follow what the music "tells them" to do. 'Solo cams' featured on the Animusic DVD show how each instrument plays through a piece of music from beginning to end.
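
The general idea of music-driven animation can be sketched in a few lines of Python (illustrative only; the note events, instrument names, and anticipation timing below are invented and do not describe Animusic's actual software): note events are read first, and animation actions are scheduled from them so that every movement leads up to and lines up with a sound.

# Illustrative sketch of music-driven animation: MIDI-style note events
# are turned into a schedule of on-screen actions. All values are made up.

note_events = [                      # (time in seconds, instrument, pitch)
    (0.0, "marble_drum", 60),
    (0.5, "marble_drum", 64),
    (1.0, "laser_harp", 72),
]

ANTICIPATION = 0.3                   # start moving this long before the note sounds

actions = []
for time, instrument, pitch in note_events:
    actions.append((time - ANTICIPATION, instrument, f"wind_up_for_{pitch}"))
    actions.append((time, instrument, f"strike_{pitch}"))

for when, instrument, action in sorted(actions):
    print(f"{when:5.2f}s  {instrument}: {action}")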

Many of the instruments appear to be robotic or play themselves using seemingly curious methods to produce and visualize the original compositions. The animations typically feature dramatically-lit rooms or landscapes.

The music of Animusic is principally pop-rock based, consisting of straightforward sequences of triggered samples and digital patches mostly played "dry" (with few effects). There are no lyrics or voices, save for the occasional chorus synthesizer. According to the director's comments on Animusic 2, most instrument sounds are generated with software synthesizers on a music workstation (see Software Programs for more info). Many sounds resemble stock patches available on digital keyboards, subjected to some manipulation, such as pitch or playback speed, to enhance the appeal of their timbre.

Cel shading

Cel shading or toon shading is a type of non-photorealistic rendering designed to make 3-D computer graphics appear to be flat by using less shading color instead of a shade gradient or tints and shades. Cel-shading is often used to mimic the style of a comic book or cartoon and/or give it a characteristic paper-like texture. There are similar techniques that can make an image look like a sketch, an oil painting or an ink painting. It is somewhat recent, appearing from around the beginning of the twenty-first century. The name comes from cels (short for celluloid), the clear sheets of acetate, which are painted on for use in traditional 2D animation.
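
The flat look is usually obtained by quantizing the lighting into a small number of bands rather than shading continuously. The Python sketch below illustrates that single step with invented values; a real renderer would apply it per pixel inside a shader.

# Illustrative cel-shading sketch: instead of a continuous shade gradient,
# the diffuse lighting term is snapped to a few discrete bands.

def lambert(normal, light_dir):
    """Standard diffuse term: cosine of the angle between unit normal and light."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

def cel_shade(intensity, bands=3):
    """Quantize a 0..1 intensity into a small number of flat bands."""
    return min(bands - 1, int(intensity * bands)) / (bands - 1)

# A surface facing partly away from the light still lands in one flat band.
print(cel_shade(lambert((0.0, 0.5, 0.866), (0.0, 1.0, 0.0))))   # 0.5 (the middle band)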

Computer Animation Production System

The Computer Animation Production System (CAPS) was a digital ink and paint system used in animated feature films, the first at a major studio, designed to replace the expensive process of transferring animated drawings to cels using India ink or xerographic technology, and painting the reverse sides of the cels with gouache paint. Using CAPS, enclosed areas and lines could be easily colored in the digital computer environment using an unlimited palette. Transparent shading, blended colors, and other sophisticated techniques could be extensively used that were not previously available.

The completed digital cels were composited over scanned background paintings and camera or pan movements were programmed into a computer exposure sheet simulating the actions of old style animation cameras. Additionally, complex multiplane shots giving a sense of depth were possible. Unlike the analog multiplane camera, the CAPS multiplane cameras were not limited by artwork size. Extensive camera movements never before seen were incorporated into the films. The final version of the sequence was composited and recorded onto film. Since the animation elements existed digitally, it was easy to integrate other types of film and video elements, including three-dimensional computer animation.
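
Compositing a digital cel over a scanned background amounts to standard per-pixel alpha blending; the following Python sketch uses made-up pixel values and is not the actual CAPS software, but it shows the basic "over" operation.

# Illustrative "over" compositing sketch: a digital cel with transparency is
# layered over a scanned background painting, one pixel at a time.

def over(cel_rgb, cel_alpha, bg_rgb):
    """Composite one cel pixel over one background pixel (alpha in 0..1)."""
    return tuple(cel_alpha * c + (1.0 - cel_alpha) * b
                 for c, b in zip(cel_rgb, bg_rgb))

background = (40, 80, 160)          # a pixel of the scanned background painting
cel_paint  = (220, 30, 30)          # a painted pixel of the character cel
print(over(cel_paint, 0.0, background))   # fully transparent: background shows through
print(over(cel_paint, 1.0, background))   # fully opaque: cel paint covers it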

CAPS was a proprietary collection of software, scanning camera systems, servers, networked computer workstations, and custom desks developed by The Walt Disney Company together with Pixar in the late-1980s. It succeeded in reducing labor costs for ink and paint and post-production processes of traditionally animated feature films produced by Walt Disney Animation Studios. It also provided an entirely new palette of digital tools for the film-makers.

Game physics

Computer animation physics or game physics involves the introduction of the laws of physics into a simulation or game engine, particularly in 3D computer graphics, for the purpose of making the effects appear more realistic to the observer. Typically, simulation physics is only a close approximation to actual physics, and computation is performed using discrete values.

There are several elements that form components of simulation physics including the physics engine, program code that is used to simulate Newtonian physics within the environment, and collision detection, used to solve the problem of determining when any two or more physical objects in the environment cross each other's path.
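
Both elements can be sketched briefly in Python (the bodies, radii, and time step below are illustrative, not taken from any particular engine): a physics step integrates Newtonian motion with discrete values, and collision detection checks whether two spheres overlap.

# Illustrative game-physics sketch: discrete Newtonian integration plus
# a simple sphere-sphere collision test. All values are made up.
import math

GRAVITY = (0.0, -9.81, 0.0)

def step(position, velocity, dt):
    """Advance one body by one discrete time step (semi-implicit Euler)."""
    velocity = tuple(v + g * dt for v, g in zip(velocity, GRAVITY))
    position = tuple(p + v * dt for p, v in zip(position, velocity))
    return position, velocity

def spheres_collide(center_a, radius_a, center_b, radius_b):
    """Two spheres cross each other's path if their centres are closer
    than the sum of their radii."""
    return math.dist(center_a, center_b) <= radius_a + radius_b

pos, vel = (0.0, 10.0, 0.0), (2.0, 0.0, 0.0)
for _ in range(60):                       # simulate one second at 60 steps per second
    pos, vel = step(pos, vel, 1.0 / 60.0)
print(spheres_collide(pos, 0.5, (2.0, 5.0, 0.0), 0.5))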

Havok (company)

Havok (legally Telekinesys Research Ltd.) is an Irish software company founded in 1998 by Hugh Reynolds and Steven Collins, based in Dublin, Ireland and owned by Microsoft. It has partnerships with Activision, Electronic Arts, Nintendo, Microsoft, Sony, Bethesda Softworks and Ubisoft.

Its cross-platform technology is available for PlayStation 2, PlayStation 3, PlayStation 4, PlayStation Portable, Xbox, Xbox 360, Xbox One, Wii, Wii U, GameCube, and PCs. Havok’s technology has been used in more than 150 game titles, including Half-Life 2, Halo 2, Dark Souls, Mafia III, Tony Hawk's Project 8, The Elder Scrolls IV: Oblivion, Age of Empires III, Vanquish, Lost Planet 2, Fallout 3 and Super Smash Bros. Brawl. Havok products have also been used to drive special effects in movies such as Poseidon, The Matrix, Troy, Kingdom of Heaven and Charlie and the Chocolate Factory. Havok provides the dynamics driving for Autodesk 3ds Max.

Intel announced the acquisition of Havok in a press release on September 14, 2007. On October 2, 2015 Intel sold Havok to Microsoft for an undisclosed amount.

Inbetweening

Inbetweening or tweening is a key process in all types of animation, including computer animation. It is the process of generating intermediate frames between two images, called key frames, to give the appearance that the first image evolves smoothly into the second image. Inbetweens are the drawings which create the illusion of motion.
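
As a small illustration (Python; the positions, frame count, and easing curve are arbitrary), in-between positions of a drawing can be generated between two key frames, here with a simple ease-in/ease-out curve rather than strictly even spacing.

# Illustrative inbetweening sketch: generate intermediate positions between
# two key frames, with an ease-in/ease-out curve. Values are arbitrary.

def ease_in_out(t):
    """Smoothstep: slow out of the first key and slow into the second."""
    return t * t * (3.0 - 2.0 * t)

def inbetweens(key_a, key_b, count):
    """Positions for the frames strictly between two key drawings."""
    frames = []
    for i in range(1, count + 1):
        t = ease_in_out(i / (count + 1))
        frames.append(tuple(a + t * (b - a) for a, b in zip(key_a, key_b)))
    return frames

# Two key positions of a bouncing ball, with 4 in-betweens requested.
print(inbetweens((0.0, 0.0), (100.0, 40.0), 4))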

List of animated short films

This is a list of animated short films. The list is organized by decade and year, and then alphabetically. The list includes theatrical, television, and direct-to-video films with a runtime of less than 40 minutes. For a list of films with a runtime of over 40 minutes, see List of animated feature films.

Live action

Live action is a form of cinematography or videography that uses photography instead of animation. Some works combine live action with animation. Live-action is used to define film, video games or similar visual media. Photorealistic animation, particularly modern computer animation, is sometimes erroneously described as “live-action” as in the case of some media reports about Disney's 2019 remake of The Lion King. According to the Cambridge English Dictionary, live action "[involves] real people or animals, not models, or images that are drawn, or produced by computer".

Morph target animation

Morph target animation, per-vertex animation, shape interpolation, shape keys, or blend shapes is a method of 3D computer animation used together with techniques such as skeletal animation. In a morph target animation, a "deformed" version of a mesh is stored as a series of vertex positions. In each key frame of an animation, the vertices are then interpolated between these stored positions.
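
A minimal Python sketch follows (the base mesh and target positions are invented for the example): each morph target stores a full set of vertex positions, and a frame's vertices are interpolated between the base and the target by a blend weight.

# Illustrative morph-target (blend shape) sketch: vertices are interpolated
# between stored positions. The two shapes below are invented.

base_mesh    = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]   # "neutral" shape
morph_target = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.4)]   # "deformed" shape

def blend(base, target, weight):
    """Interpolate every vertex between the base mesh and a morph target."""
    return [tuple(b + weight * (t - b) for b, t in zip(vb, vt))
            for vb, vt in zip(base, target)]

for frame, weight in enumerate((0.0, 0.25, 0.5, 0.75, 1.0)):
    print(frame, blend(base_mesh, morph_target, weight))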

Pose to pose animation

Pose to pose is a term used in animation, for creating key poses for characters and then inbetweening them in intermediate frames to make the character appear to move from one pose to the next. Pose-to-pose is used in traditional animation as well as computer-based 3D animation. The opposite concept is straight ahead animation where the poses of a scene are not planned, which results in more loose and free animation, though with less control over the animation's timing.

Prix Ars Electronica

The Prix Ars Electronica is one of the best known and longest running yearly prizes in the field of electronic and interactive art, computer animation, digital culture and music. It has been awarded since 1987 by Ars Electronica (Linz, Austria).

In 2005, the Golden Nica, the highest prize, was awarded in six categories: "Computer Animation/Visual Effects," "Digital Musics," "Interactive Art," "Net Vision," "Digital Communities" and the "u19" award for "freestyle computing." Each Golden Nica came with a prize of €10,000, apart from the u19 category, where the prize was €5,000. In each category, there are also Awards of Distinction and Honorary Mentions.

The Golden Nica is a replica of the Greek Nike of Samothrace. It is a handmade wooden statuette, plated with gold, so each trophy is unique: approximately 35 cm high, with a wingspan of about 20 cm, all on a pedestal. "Prix Ars Electronica" is a phrase composed of French, Latin and Spanish words, loosely translated as "Electronic Arts Prize."

Scanimate

Scanimate is the name for an analog computer animation (video synthesizer) system developed from the late 1960s to the 1980s by Computer Image Corporation of Denver, Colorado.

The 8 Scanimate systems were used to produce much of the video-based animation seen on television through most of the 1970s and early 1980s in commercials, promotions, and show openings. One of the major advantages the Scanimate system had over film-based animation and computer animation was the ability to create animations in real time. Because of this, the speed with which animation could be produced on the system, as well as its range of possible effects, helped it to supersede film-based animation techniques for television graphics. By the mid-1980s, it was superseded by digital computer animation, which produced sharper images and more sophisticated 3D imagery.

Animations created on Scanimate and similar analog computer animation systems have a number of characteristic features that distinguish them from film-based animation: The motion is extremely fluid, using all 60 fields per second (in NTSC format video) or 50 fields (in PAL format video) rather than the 24 frames per second that film uses; the colors are much brighter and more saturated; and the images have a very "electronic" look that results from the direct manipulation of video signals through which the Scanimate produces the images.

Skeletal animation

Skeletal animation is a technique in computer animation in which a character (or other articulated object) is represented in two parts: a surface representation used to draw the character (called skin or mesh) and a hierarchical set of interconnected bones (called the skeleton or rig) used to animate (pose and keyframe) the mesh. While this technique is often used to animate humans or more generally for organic modeling, it only serves to make the animation process more intuitive, and the same technique can be used to control the deformation of any object—such as a door, a spoon, a building, or a galaxy. When the animated object is more general than, for example, a humanoid character, the set of bones may not be hierarchical or interconnected, but it just represents a higher level description of the motion of the part of mesh or skin it is influencing.

The technique was introduced in 1988 by Nadia Magnenat Thalmann, Richard Laperrière, and Daniel Thalmann. This technique is used in virtually all animation systems, where simplified user interfaces allow animators to control often complex algorithms and a huge amount of geometry, most notably through inverse kinematics and other "goal-oriented" techniques. In principle, however, the intention of the technique is never to imitate real anatomy or physical processes, but only to control the deformation of the mesh data.
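
A minimal 2D Python sketch of the idea follows; the bone lengths, angles, and blend weights are invented, and the skinning step is a deliberate simplification of linear blend skinning rather than a faithful implementation. A two-bone "arm" is posed with forward kinematics, and a skin vertex near the joint blends the positions it would have if it rigidly followed each bone.

# Illustrative skeletal-animation sketch: forward kinematics on two bones,
# plus a simplified blend for one skin vertex. All values are made up.
import math

def bone_end(origin, angle, length):
    """World-space end point of a bone from its origin and absolute angle."""
    return (origin[0] + length * math.cos(angle),
            origin[1] + length * math.sin(angle))

shoulder = (0.0, 0.0)
upper_angle = math.radians(40)                 # pose of the upper bone
lower_angle = upper_angle + math.radians(-60)  # lower bone bends at the elbow

elbow = bone_end(shoulder, upper_angle, 2.0)
wrist = bone_end(elbow, lower_angle, 1.5)

# Positions the vertex would take if it rigidly followed each bone,
# blended 50/50 because it sits near the joint.
follow_upper = bone_end(shoulder, upper_angle, 1.9)   # near the end of bone 1
follow_lower = bone_end(elbow, lower_angle, 0.1)      # near the start of bone 2
skin_vertex = tuple(0.5 * a + 0.5 * b for a, b in zip(follow_upper, follow_lower))

print("elbow:", elbow, "wrist:", wrist, "skin vertex:", skin_vertex)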

Sony Pictures Imageworks

Sony Pictures Imageworks Inc. is a Canadian visual effects and computer animation company headquartered in Vancouver, British Columbia, with an additional office in Culver City, California. SPI is a unit of Sony Pictures Entertainment's Motion Picture Group. The company has been recognized by the Academy of Motion Picture Arts and Sciences with Oscars for its work on Spider-Man 2 and the computer-animated short film The ChubbChubbs!, and has received many other nominations for its work.

SPI has provided visual effects for many films; the most recent include Spider-Man: Homecoming, Kingsman: The Golden Circle, and The Meg. It has also provided services for several of director Robert Zemeckis' films, including Contact, Cast Away, The Polar Express, and Beowulf.

Since the foundation of its sister company Sony Pictures Animation in 2002, SPI would go on to animate nearly all of SPA's films, including Open Season, Surf's Up, The Emoji Movie, Spider-Man: Into the Spider-Verse, and films in the Cloudy with a Chance of Meatballs, Smurfs and Hotel Transylvania franchises, in addition to animating films for other studios such as Arthur Christmas for Aardman Animations (co-produced by SPA), Storks and Smallfoot for the Warner Animation Group, and The Angry Birds Movie and its sequel for Rovio Animation.

Tau Films

Tau Films is an American visual effects and animation company that was founded in 2014 by John Hughes, Mandeep Singh, and Walt Jones. It has locations in Los Angeles, Kuala Lumpur, Vancouver, and Hyderabad. Tau Films has worked on movies such as Evil Nature (2015), Baahubali: The Beginning (2015), and 2.0 (2018). In addition, it produces special venue/ride films, such as Racing Legends at Ferrari Land in 2017 and The Lost Temple in 2014 (which was nominated for Outstanding Visual Effects in a Special Venue Project at the 13th Visual Effects Society Awards in 2015). It also produces virtual reality episodic storytelling such as Delusion: Lies Within.

Timeline of computer animation in film and television

This is a chronological list of films and television programs that have been recognised as being pioneering in their use of computer animation.

Twelve basic principles of animation

Disney's twelve basic principles of animation were introduced by the Disney animators Ollie Johnston and Frank Thomas in their 1981 book The Illusion of Life: Disney Animation. Johnston and Thomas in turn based their book on the work of the leading Disney animators from the 1930s onwards, and their effort to produce more realistic animations. The main purpose of the principles was to produce an illusion of characters adhering to the basic laws of physics, but they also dealt with more abstract issues, such as emotional timing and character appeal.

The book and some of its principles have been adopted by some traditional studios, and have been referred to by some as the "Bible of animation." In 1999 this book was voted number one of the "best animation books of all time" in an online poll. Though originally intended to apply to traditional, hand-drawn animation, the principles still have great relevance for today's more prevalent computer animation.

Will Vinton

William Gale Vinton (November 17, 1947 – October 4, 2018) was an American animator and filmmaker. He won an Oscar for his work alongside several Emmy Awards and Clio Awards for his studio's work.


This page is based on a Wikipedia article written by authors (here).
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.