Physically based animation is an area of interest within computer graphics concerned with the simulation of physically plausible behaviors at interactive rates. Advances in physically based animation are often motivated by the need to include complex, physically inspired behaviors in video games, interactive simulations, and movies. Although off-line simulation methods exist to solve nearly all of the problems studied in physically based animation, those methods are intended for applications that demand physical accuracy and can afford slow, detailed computation. In contrast, techniques in physically based animation favor physical plausibility, numerical stability, and visual appeal over physical accuracy. Physically based animation is often limited to loose approximations of physical behaviors because of the strict time constraints imposed by interactive applications. The target frame rate for interactive applications such as games and simulations is often 25–60 Hz, with only a small fraction of the time allotted to an individual frame remaining for physical simulation. Simplified models of physical behaviors are generally preferred if they are more efficient, easier to accelerate (through pre-computation, clever data structures, or SIMD/GPGPU), or satisfy desirable mathematical properties (such as unconditional stability or volume conservation when a soft body undergoes deformation). Fine details are not important when the overriding goal of a visualization is aesthetic appeal or the maintenance of player immersion, since these details are often difficult for humans to notice or are otherwise impossible to distinguish at human scales.
Physically based animation is now common in movies and video games, and many techniques were pioneered during the development of early special effects scenes and game engines. Star Trek II: The Wrath of Khan famously used particle systems in the Genesis explosion scene to create the visual effect of a flaming shockwave engulfing a planet. Despite being released before physics engines were a common feature in games, System Shock incorporated rigid body physics in its engine and was widely considered innovative for this feature and the novel sense of interaction it afforded players. Valve later developed Half-Life and used rigid body physics to create environmental puzzles for the player, such as obstacles that could not be reached without stacking boxes. Half-Life 2 featured a more advanced physics engine that incorporated constrained systems such as pulleys or levers with more environmental puzzles to showcase these features. Physics engines are now much more common in games, and their frequent appearance has motivated research in physically based animation by companies such as Nvidia.
Physically based animation is common in games and simulations where users expect to interact with the environment. Physics engines such as Havok, PhysX, and Bullet exist as separately developed products to be licensed and included in games. In games such as Angry Birds or World of Goo, physically based animation is itself the primary game mechanic, and players are expected to interact with or create physically simulated systems in order to achieve goals. Elements of physics puzzles appear in many games that belong to other genres but feature physically based simulation. Allowing physical interaction with the environment through physically based animation promotes non-linear solutions to puzzles by players, and can sometimes result in solutions to problems presented in games that were not deliberately included by level designers. Simulations used for purposes other than entertainment, such as military simulations, also make use of physically based animation to portray realistic situations and maintain the immersion of users. Many techniques in physically based animation are designed with GPGPU implementations in mind or can otherwise be extended to benefit from graphics hardware, which can make physically based simulations fast enough for gaming. GPU time is often reserved for rendering, however, and frequent data transfers between the host and device can easily become a performance bottleneck.
Simulations can be performed offline (that is, apart from when they are viewed) in the development of special effects for movies. Speed is therefore not strictly a necessity in the production of special effects, but it is still desirable for reasonably responsive feedback and because the hardware required for slower methods is more expensive. Physically based animation is nevertheless still preferred because slower, more accurate methods can be costly and limiting. The physical accuracy of small details in a special effect is rarely meaningful to its visual appeal; insisting on it restricts the amount of control that artists and directors can exert over behavior and increases the monetary cost and time required to achieve results. It is necessary to be able to dictate the high-level behavior of physically inspired effects in movies in order to achieve a desired artistic direction, but scripting physical behaviors on the level of small details can be unfeasible when fluids, smoke, or many individual objects are involved. Physically based animation generally affords more artist control over the appearance of simulated results and is also more convenient when desired effects might bend or defy physics.
Simplified rigid body physics is relatively cheap and easy to implement, which is why it appeared in interactive games and simulations earlier than most other techniques. Rigid bodies are assumed to undergo no deformation during simulation, so rigid body motion between time steps can be described as a translation and rotation, traditionally using affine transformations stored as 4x4 matrices. Alternatively, quaternions can be used to store rotations and vectors can be used to store the object's offset from the origin. The most computationally expensive aspects of rigid body dynamics are collision detection, correcting interpenetration between bodies and the environment, and handling resting contact. Rigid bodies are commonly simulated iteratively, with back-tracking to correct error using smaller time steps. Resting contact between multiple rigid bodies (as is the case when rigid bodies fall into piles or are stacked) can be particularly difficult to handle efficiently and may require complex contact and shock propagation graphs to resolve using impulse-based methods. When simulating large numbers of rigid bodies, simplified geometries or convex hulls are often used to represent their boundaries for the purpose of collision detection and response (since this is generally the bottleneck in simulation).
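As a minimal sketch of the quaternion-based representation described above, an unconstrained rigid body can be advanced with a semi-implicit Euler step, re-normalizing the quaternion each step to counteract numerical drift. The function and variable names here are illustrative, not taken from any particular engine, and rotational inertia and collisions are omitted for brevity:

```python
import numpy as np

def quat_mul(q, r):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def step_rigid_body(pos, vel, quat, omega, force, mass, dt):
    """Semi-implicit Euler step for an unconstrained rigid body.
    `quat` stores the orientation; `omega` is the angular velocity."""
    vel = vel + (force / mass) * dt
    pos = pos + vel * dt
    # dq/dt = 0.5 * (0, omega) * q; re-normalize to fight drift.
    dq = 0.5 * quat_mul(np.array([0.0, *omega]), quat)
    quat = quat + dq * dt
    quat = quat / np.linalg.norm(quat)
    return pos, vel, quat, omega
```

Production engines add inertia tensors, impulse-based collision response, and constraint solvers on top of this basic integration loop.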
Soft bodies can easily be implemented using spring-mesh systems. Spring-mesh systems are composed of individually simulated particles that are attracted to each other by simulated spring forces and experience resistance from simulated dampeners. Arbitrary geometries can be more easily simulated by applying spring and dampener forces to the nodes of a lattice and deforming the object with the lattice. However, explicit solutions to these systems are not very numerically stable, and their behavior is extremely difficult to control through spring parameters. Techniques that allow for physically plausible and visually appealing soft bodies, are numerically stable, and can be configured well by artists were prohibitively expensive in early gaming history, which is why soft bodies were not as common as rigid bodies. Integration using Runge-Kutta methods can increase the numerical stability of unstable techniques such as spring meshes, or finer time steps can be used for simulation (although this is more costly and cannot make spring meshes stable for arbitrarily large forces). Techniques such as shape matching and position based dynamics address these problems with interactive games and simulations in mind. Position based dynamics is used in mainstream game engines such as Bullet, Havok, and PhysX. Unconditional stability and ease of configuration are particularly desirable properties of soft body simulations that can be difficult to achieve with spring-mesh systems, although spring meshes are still often used in games because of their simplicity and speed.
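A spring-mesh step can be sketched as follows, using semi-implicit Euler integration as a cheap compromise between the unstable explicit Euler mentioned above and costlier implicit methods. This is a 2D illustration with hypothetical names; real systems add collision handling and many more springs:

```python
import numpy as np

def step_spring_mesh(x, v, springs, masses, dt, g=np.array([0.0, -9.8])):
    """One semi-implicit Euler step of a 2D mass-spring system.
    `springs` is a list of (i, j, rest_length, stiffness, damping)."""
    f = masses[:, None] * g  # gravity on every particle
    for i, j, rest, k, c in springs:
        d = x[j] - x[i]
        length = np.linalg.norm(d)
        n = d / length
        # Hooke's law plus a dashpot along the spring axis.
        fs = (k * (length - rest) + c * np.dot(v[j] - v[i], n)) * n
        f[i] += fs
        f[j] -= fs
    v = v + (f / masses[:, None]) * dt  # update velocity first...
    x = x + v * dt                      # ...then position (semi-implicit)
    return x, v
```

Updating velocity before position gives noticeably better stability than explicit Euler at the same cost, though large stiffness values or time steps can still blow up, which is the motivation for the position based methods mentioned above.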
Computational fluid dynamics can be expensive, and interactions between multiple fluid bodies or with external objects/forces can require complex logic to evaluate. Fluid simulation is generally achieved in video games by simulating only the height of bodies of water to create the effect of waves, ripples, or other surface features. For relatively free bodies of liquid, Lagrangian or semi-Lagrangian methods are often used to speed up the simulation by treating particles as finite elements of fluid (or carriers of physical properties) and approximating the Navier-Stokes equations. It is uncommon to simulate full bodies of fluid in games, although surface features may be simulated using similar methods, and fluid simulations may be used to generate textures or height fields that allow water to be rendered in real time without real-time simulation (this is commonly done for large bodies of water in games). Fluid simulations can be computed on commodity graphics hardware through GPGPU, and height fields that produce wave-like behavior can be computed efficiently using Lattice Boltzmann methods. Alternatively, surface features and waves can be simulated as particles, with a height field generated from the simulated particles in real time; this also allows for efficient two-way interaction between the fluid and floating objects.
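The height-field approach can be illustrated with a tiny 1D water surface in which each column accelerates toward the average of its neighbours, a simple discretization of the wave equation. This is a hedged sketch with illustrative names, not any engine's actual API; games typically run the 2D analogue on the GPU:

```python
import numpy as np

def step_height_field(h, v, dt, c=1.0, damping=0.99):
    """Advance a 1D height-field water surface one step.
    `h` holds column heights, `v` their vertical velocities; `c` scales
    wave speed and `damping` bleeds off energy so ripples die out."""
    avg = (np.roll(h, 1) + np.roll(h, -1)) * 0.5  # periodic neighbours
    v = (v + (avg - h) * (c * dt)) * damping      # accelerate toward average
    h = h + v * dt
    return h, v
```

Because each column only looks at its immediate neighbours, the update parallelizes trivially, which is one reason height fields remain popular for real-time water.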
Particle systems are an extremely popular technique for creating visual effects in movies and games because of their ease of implementation, efficiency, extensibility, and artist control. The update cycle of a particle system usually consists of three phases: generation, simulation, and extinction. These phases respectively consist of introducing new particles, simulating them through the next time step, and removing particles that have exceeded their life-span. The physical and visual attributes of particles are usually randomized on generation, with the range and distribution of attributes controlled by the artist. Particle systems can further be made to generate particle systems themselves to create more complex and dynamic effects, and their high-level behavior can be choreographed through a framework of operators as in the canonical Sims paper. Early games that rendered systems of particles suffered from clipping artifacts when particles partially intersected geometry in the environment, and this artifact was especially noticeable for large particles (which were often used to stand in for smoke). Soft particles address these artifacts through careful shading and manipulation of the transparency of particles, such that particles become more transparent as they approach surfaces.
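The generation/simulation/extinction cycle described above can be sketched as a minimal emitter. The class and attribute names are illustrative only; real systems also randomize visual attributes and usually run on the GPU:

```python
import random

class Particle:
    def __init__(self, pos, vel, life):
        self.pos, self.vel, self.life = pos, vel, life

class ParticleSystem:
    """Minimal emitter following the generate / simulate / extinguish cycle."""
    def __init__(self, rate, gravity=-9.8):
        self.rate, self.gravity = rate, gravity
        self.particles = []

    def update(self, dt):
        # Generation: spawn new particles with randomized attributes.
        for _ in range(self.rate):
            vel = (random.uniform(-1, 1), random.uniform(2, 5))
            self.particles.append(
                Particle((0.0, 0.0), vel, random.uniform(1, 2)))
        # Simulation: integrate each live particle through the time step.
        for p in self.particles:
            p.vel = (p.vel[0], p.vel[1] + self.gravity * dt)
            p.pos = (p.pos[0] + p.vel[0] * dt, p.pos[1] + p.vel[1] * dt)
            p.life -= dt
        # Extinction: cull particles whose life-span has expired.
        self.particles = [p for p in self.particles if p.life > 0]
```

Because generation and extinction bound the population, the cost per frame stays roughly constant, which is part of why particle systems scale so well.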
In physically based animation, flocking refers to a technique that models the complex behavior of birds, schools of fish, and swarms of insects using virtual forces. These virtual forces simulate the tendency of flock members to match velocity with their neighbors, avoid collisions and crowding, and move toward the center of the group. In these simulations, individual members of the flock (sometimes called boids, short for bird-oid) act without collaboration, using only information about the position and velocity of their peers, to create the illusion of synchronized group behavior efficiently. Flocking can also be used to efficiently approximate the behavior of crowds of humans, and methods based on flocking are often used for crowds of NPCs in gaming. Unreal and Half-Life were among the first games to implement flocking, which was used to model the behavior of birds and flying creatures present in outdoor levels.
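The three classic rules can be sketched as a per-boid steering force. The weights and neighbourhood radius below are arbitrary illustrative values; tuning them changes the character of the flock:

```python
import numpy as np

def boid_acceleration(i, pos, vel, radius=5.0,
                      w_coh=0.01, w_sep=0.1, w_ali=0.05):
    """Steering force for boid i from the three classic flocking rules:
    cohesion, separation, and velocity alignment."""
    d = pos - pos[i]
    dist = np.linalg.norm(d, axis=1)
    mask = (dist > 0) & (dist < radius)   # visible neighbours only
    if not mask.any():
        return np.zeros(2)
    coh = d[mask].mean(axis=0)                          # move toward the group
    sep = -(d[mask] / dist[mask, None]**2).sum(axis=0)  # avoid crowding
    ali = vel[mask].mean(axis=0) - vel[i]               # match neighbour velocity
    return w_coh * coh + w_sep * sep + w_ali * ali
```

Note that each boid consults only local neighbours, so the global flocking pattern is emergent rather than scripted, which is what keeps the technique cheap.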
Characters in games and simulations are traditionally animated through methods such as keyframing that define animations as compositions of smaller, static motions sequenced to convey more complex behavior. Visually, these static methods cannot easily convey complex interactions with the environment and make lifelike character motion difficult to accomplish. Techniques in physically based character animation achieve dynamic animations that respond to user interaction, external events, and the environment by optimizing motions toward specified goals given physically based constraints such as energy minimization. The adoption of physically based character animation, as opposed to more static methods, has been slow in the gaming industry due to the increased cost and complexity associated with its use. Physically based character animation has been used in the Skate series of video games and in the independently developed first-person shooter StarForge.
Animation is a method in which pictures are manipulated to appear as moving images. In traditional animation, images are drawn or painted by hand on transparent celluloid sheets to be photographed and exhibited on film. Today, most animations are made with computer-generated imagery (CGI). Computer animation can be very detailed 3D animation, while 2D computer animation can be used for stylistic reasons, low bandwidth or faster real-time renderings. Other common animation methods apply a stop motion technique to two and three-dimensional objects like paper cutouts, puppets or clay figures.
Commonly the effect of animation is achieved by a rapid succession of sequential images that minimally differ from each other. The illusion—as in motion pictures in general—is thought to rely on the phi phenomenon and beta movement, but the exact causes are still uncertain.
Analog mechanical animation media that rely on the rapid display of sequential images include the phénakisticope, zoetrope, flip book, praxinoscope and film. Television and video are popular electronic animation media that originally were analog and now operate digitally. For display on the computer, techniques like animated GIF and Flash animation were developed.
Animation is more pervasive than many people realise. Apart from short films, feature films, animated gifs and other media dedicated to the display of moving images, animation is also heavily used for video games, motion graphics and special effects. Animation is also prevalent in information technology interfaces. The physical movement of image parts through simple mechanics – for instance, the moving images in magic lantern shows – can also be considered animation. The mechanical manipulation of puppets and objects to emulate living beings has a very long history in automata. Automata were popularised by Disney as animatronics.
Animators are artists who specialize in creating animation.

Animation in the United States during the silent era
Animated films in the United States date back to at least 1906 when Vitagraph released Humorous Phases of Funny Faces. Although early animations were rudimentary they rapidly became more sophisticated with such classics as Gertie the Dinosaur in 1914, Felix the Cat, and Koko the Clown.
Originally a novelty, some early animated silents depicted magic acts or were strongly influenced by the comic strip. Later, they were distributed along with newsreels. Early animation films, like their live-action silent cousins, would come with a musical score to be played by an organist or even an orchestra in larger theatres.

Animation in the United States in the television era
Television animation developed from the success of animated movies in the first half of the 20th century. The state of animation changed dramatically in the three decades starting with the post-World War II proliferation of television. While studios gave up on the big-budget theatrical short cartoons that thrived in the 1930s and 1940s, new television animation studios would thrive based on the economy and volume of their output. By the end of the 1970s and 1980s, most of the Golden Age animators had retired or died, and their younger successors were ready to change the industry and the way that animation was perceived.

Bill Westenhofer
Bill Westenhofer is a visual effects supervisor for Rhythm and Hues Studios, for which he has worked since 1994. His hometown is Brookfield, Connecticut, where he graduated from Brookfield High School in 1986. He then earned a Bachelor of Science in Computer Science and Engineering from Bucknell University in 1990. Westenhofer also received his master's degree from the School of Engineering and Applied Sciences at George Washington University in 1995, where he studied the use of dynamics in physically based animation.

In 1994, he joined Rhythm & Hues as a technical director, and Westenhofer's lighting and effects animation work was featured in Batman Forever and numerous commercials. He was promoted to CG supervisor for Speed 2: Cruise Control, and continued in that role for Spawn, Mouse Hunt, Kazaam and Waterworld. His other VFX supervisor credits include Elf, The Rundown, Stuart Little 2, Men in Black II, Cats & Dogs, Along Came a Spider, Frequency, Stuart Little, and Babe: Pig in the City.
In 2005, Westenhofer supervised a team of 400 digital artists on The Chronicles of Narnia: The Lion, the Witch and the Wardrobe, which was nominated for an Academy Award for Best Visual Effects. He later won both the BAFTA and the Academy Award twice, for the 2007 release The Golden Compass and, in 2013, for 2012's Life of Pi. During the Academy Awards ceremony, when Westenhofer brought up Rhythm and Hues' financial troubles during his speech, his microphone was cut off, which prompted many protests by the visual effects industry. He had intended to say:
"What I was trying to say up there is that at a time when visual effects movies are dominating the box office, visual effects companies are struggling," Westenhofer told reporters. "And I wanted to point out that we aren't technicians. Visual effects is not just a commodity that's being done by people pushing buttons. We're artists, and if we don't find a way to fix the business model, we start to lose the artistry. If anything, 'Life of Pi' shows that we're artists and not just technicians."
Westenhofer also worked as VFX supervisor for Wonder Woman, released in June 2017.

Golden age of American animation
The golden age of American animation was a period in the history of U.S. animation that began with the advent of sound cartoons in 1928 and continued until around 1972, by which time theatrical animated shorts had begun losing to the newer medium of television animation.
Many popular characters emerged from this period, including Bugs Bunny, Mickey Mouse, Daffy Duck, Donald Duck, Goofy, Popeye, Tom and Jerry, Porky Pig, Betty Boop, Woody Woodpecker, Droopy, Mighty Mouse, Mr. Magoo, Pink Panther, Elmer Fudd, the Fox and the Crow, George and Junior, Wile E. Coyote and the Road Runner, Barney Bear, and the first animated adaptation of Superman, Casper and Little Lulu, among others.
Feature length animation also began during this period, most notably with Walt Disney's first films: Snow White and the Seven Dwarfs, Pinocchio, Fantasia, Dumbo and Bambi. Animation also began on television with the first animated series from 1949 to the early 1960s.

James F. O'Brien
James F. O'Brien is a computer graphics researcher and professor of Computer Science and Electrical Engineering at the University of California, Berkeley.
He is also co-founder and chief science officer of Avametric, a company developing software for virtual clothing try-on.
In 2015, he received a Scientific and Technical Achievement Award from the Academy of Motion Picture Arts and Sciences.

Jonathan Shewchuk
Jonathan Richard Shewchuk is a Professor in Computer Science at the University of California, Berkeley.
He obtained his B.S. in Physics and Computing Science from Simon Fraser University in 1990, and his M.S. and Ph.D. in Computer Science from Carnegie Mellon University, the latter in 1997.
He conducts research in scientific computing, computational geometry (especially mesh generation, numerical robustness, and surface reconstruction), numerical methods, and physically based animation.
He is also the author of Three Sins of Authors In Computer Science And Math.
In 2003 he was awarded the J. H. Wilkinson Prize for Numerical Software for writing Triangle, a software package that computes high-quality unstructured triangular meshes.
He appears in the online course videos of the CS 61B: Data Structures class at the University of California, Berkeley.

Mickey Mousing
In animation and film, "Mickey Mousing" (synchronized, mirrored, or parallel scoring) is a film technique that syncs the accompanying music with the actions on screen. It has been described as "matching movement to music" or "the exact segmentation of the music analogue to the picture." The term comes from the early and mid-production Walt Disney films, where the music almost completely works to mimic the animated motions of the characters. Mickey Mousing may use music to "reinforce an action by mimicking its rhythm exactly. ... Frequently used in the 1930s and 1940s, especially by Max Steiner, it is somewhat out of favor today, at least in serious films, because of overuse. However, it can still be effective if used imaginatively". Mickey Mousing and synchronicity help structure the viewing experience, indicate how much events should impact the viewer, and provide information not present on screen. The technique "enable[s] the music to be seen to 'participate' in the action and for it to be quickly and formatively interpreted...and [to] also intensify the experience of the scene for the spectator." Mickey Mousing may also create unintentional humor, and can be used in parody or self-reference.
It is often not the music that is synced to the animated action, but the other way around. This is especially so when the music is a classical or other well-known piece. In such cases, the music for the animation is pre-recorded, and an animator will have an exposure sheet with the beats marked on it, frame by frame, and can time the movements accordingly. In the 1940 film Fantasia, the musical piece The Sorcerer's Apprentice, composed in the 1890s, contains a fragment that is used to accompany the actions of Mickey himself. At one point Mickey, as the apprentice, seizes an axe and chops an enchanted broom to pieces so that it will stop carrying water to a pit. The visual action is synchronized exactly to crashing chords in the music.

Modern animation in the United States
Modern animation of the United States from the late 1980s and 1990s onward is sometimes referred to as the "renaissance age of American animation". During this period, many large American entertainment companies reformed and reinvigorated their animation departments following a general decline during the 1960s to 1980s. The United States has had a profound effect on animation worldwide. Since the late 1990s and the 2000s, traditional animation has gradually lost ground to digital and Flash animation, and this current period is sometimes called the "millennium age of American animation".

Next Limit Technologies
Next Limit Technologies is a computer software company headquartered in Madrid, Spain. Founded in 1998 by engineers Victor Gonzalez and Ignacio Vargas, the firm develops technologies in the field of digital simulation and visualization. This software can be applied to professional fields including engineering and digital content. In December 2016, the XFlow division was acquired by Dassault Systèmes.

Realsoft 3D
Realsoft 3D is a modeling and raytracing application created by Realsoft Graphics Oy. Originally called Real 3D, it was developed for the Amiga computer and later also for Linux, Irix, Mac OS X and Microsoft Windows.
It was initially written in 1983 on the Commodore 64 by two Finnish brothers, Juha and Vesa Meskanen. Development of Real 3D began in earnest in 1985, when Juha Meskanen started his studies at the Lahti University of Applied Sciences, Finland. Juha's brother Vesa joined the development and left his university career to start the Realsoft company in 1989.

Soft-body dynamics
Soft-body dynamics is a field of computer graphics that focuses on visually realistic physical simulations of the motion and properties of deformable objects (or soft bodies). The applications are mostly in video games and films. Unlike in simulation of rigid bodies, the shape of soft bodies can change, meaning that the relative distance of two points on the object is not fixed. While the relative distances of points are not fixed, the body is expected to retain its shape to some degree (unlike a fluid). The scope of soft body dynamics is quite broad, including simulation of soft organic materials such as muscle, fat, hair and vegetation, as well as other deformable materials such as clothing and fabric. Generally, these methods only provide visually plausible emulations rather than accurate scientific/engineering simulations, though there is some crossover with scientific methods, particularly in the case of finite element simulations. Several physics engines currently provide software for soft-body simulation.
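The idea that a soft body's points may move relative to each other while the body still retains its shape can be illustrated with a position based dynamics distance constraint, which directly nudges two particle positions back toward a rest distance. This is a minimal sketch with illustrative names; full solvers iterate over many such constraints per frame:

```python
import numpy as np

def project_distance_constraint(p1, p2, rest, w1=1.0, w2=1.0, stiffness=1.0):
    """Position based dynamics: move two particles so that their distance
    returns toward `rest`. The weights w1, w2 are inverse masses, so a
    heavier particle (smaller w) moves less."""
    d = p2 - p1
    length = np.linalg.norm(d)
    if length == 0.0 or w1 + w2 == 0.0:
        return p1, p2
    correction = stiffness * (length - rest) * d / (length * (w1 + w2))
    return p1 + w1 * correction, p2 - w2 * correction
```

Because the correction acts on positions rather than forces, the result cannot overshoot and explode the way a stiff spring can, which is the source of the unconditional stability that makes these methods attractive for games.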