Layered Performance Animation with Correlation Maps
EUROGRAPHICS 2007 / D. Cohen-Or and P. Slavík (Guest Editors) Volume 26 (2007), Number 3

Layered Performance Animation with Correlation Maps
Michael Neff 1, Irene Albrecht 2 and Hans-Peter Seidel 2
1 University of California, Davis, U.S.A.
2 MPI Informatik, Saarbrücken, Germany

Abstract
Performance has a spontaneity and aliveness that can be difficult to capture in more methodical animation processes such as keyframing. Access to performance animation has traditionally been limited to either low degree of freedom characters or required expensive hardware. We present a performance-based animation system for humanoid characters that requires no special hardware, relying only on mouse and keyboard input. We deal with the problem of controlling such a high degree of freedom model with low degree of freedom input through the use of correlation maps, which employ 2D mouse input to modify a set of expressively relevant character parameters. Control can be continuously varied by rapidly switching between these maps. We present flexible techniques for varying and combining these maps and a simple process for defining them. The tool is highly configurable, presenting suitable defaults for novices and supporting a high degree of customization and control for experts. Animation can be recorded in a single pass, or multiple layers can be used to increase detail. Results from a user study indicate that novices are able to produce reasonable animations within their first hour of using the system. We also show more complicated results for walking and a standing character that gestures and dances.

Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Animation

1. Introduction
In traditional animation, two main approaches are used: keyframing and straight-ahead animation [TJ81]. With keyframing, poses are set at particular points in time and in-between poses are added later to create continuous motion.
In straight-ahead animation, no keyframes are used; rather, the final frames are simply drawn in sequence. Keyframing is considered preferable for planned, controlled motion, while straight-ahead animation often produces more free, spontaneous and exciting movement. Most computational animation tools are based on the keyframe approach, and the research community has paid comparably less attention to providing a computational equivalent to straight-ahead animation. Performance animation provides a good computational parallel to straight-ahead animation, yet it is difficult to build such systems, particularly when the aim is to control a complex character with limited input degrees of freedom (DOFs). This paper presents a performance-based animation system in which an animator uses a mouse to interactively control the movements of a 3D humanoid character. We tackle this problem by using a meaningful, reduced-DOF parameterization of character pose and employing correlation maps to link several of these DOFs to a single input parameter. Multiple correlation maps may be active at one time. Correlation maps encode two kinds of relationships: correlations between movements in input space and movements of the character, and correlations between various character pose parameters. For example, a downward movement in input space might cause a character to hunch over (correlation between input space and character space), and the rotation of the spine may also cause the collar bones to rotate and knees to bend (correlation within character space). The system allows correlation maps to be easily built and combined so that limited 2D mouse input can be used to control complex character movement.
Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA.
Correlation maps are normally defined
only over a subset of the pose parameters. The parameterization, consisting of balance adjustments, IK handles, spine curves, joint DOFs, etc. (Table 2), is a key component of the efficacy of the system. One of the key observations of this work is that correlations in movement often hold for only a short duration. For this reason, the system allows users to rapidly switch between correlation maps while interacting, essentially changing the character rig on the fly. It is this ability to rapidly change control that allows a low-DOF input device to control a high-DOF character, providing for a variety of movement changes during a single interaction. As well, the tool is highly configurable, allowing the user to define new maps, specify arbitrary combinations of maps and define how she wishes to invoke the maps during interaction. Animations are normally recorded in multiple layers, where the main structure of the motion is specified during a first interaction and refinements are added during subsequent passes. An innovative feature of the system is the introduction of overlays, which allow the animator to define correlations across layers, avoiding the need to synchronize performance passes. An overlay can be viewed as a dynamic character filter that will adjust the warps it makes to character motion based on the already recorded data. All system input is via a mouse and keyboard, providing near-universal accessibility. A mouse is used here not because it is necessarily the ideal input device for animation, but because it offers a particularly difficult test case, with only two DOFs of input. Techniques that work with such an impoverished input device will hopefully extend gracefully to higher-DOF input devices such as game controllers that provide additional DOFs. The main movements targeted include a range of standing motions including gesturing, night-club dancing, and walking.
The system is particularly effective for rapidly exploring the movement space and improvising spontaneous animations. To evaluate the system, we performed a novice user study which indicated that people with no previous animation experience were able to create two reasonable gesture animations matching short video sequences within their first hour of using the system, including training time. The quality of these animations ranged from rough to quite good, which seems reasonable for such short exposure to a new instrument (cf. learning a piano). Key contributions of this work include:
- The identification of a good set of movement parameters for character control.
- An effective interface design for real-time interaction and a discussion of the trade-offs involved in the different types of mappings.
- The introduction of overlays for modifying the style of recorded performance motion.
- A design that both provides appropriate defaults for novices and allows experts to extend the power of the tool.

2. Background
Researchers have developed a number of interesting, useful and fun tools for performance animation. In comparing these approaches, it is useful to consider the range of movement that can be modeled, the number of degrees of freedom of the character that is controlled, and the input device used. Our current prototype maintains the character in a standing or walking position, although this is not a general restriction of the approach. Compared to other approaches, our system generally features a larger range of movement on a more complex humanoid model while using minimal input DOFs. Perlin [Per95] introduced a computer puppet system in which the animator invokes predefined actions that are defined in script files using sine- and noise-based interpolation functions and can be smoothly combined at run time. The system offers a flexible range of movement, but the control is at a higher level than in our approach. Laszlo et al.
[LvdPF00] and van de Panne [vdP01] use mouse input to directly control simple two-dimensional characters that are physically simulated. The use of physical simulation amplifies the two-DOF mouse input because the speed of input will change the motion due to momentum. [LNS05] extend this work with the use of predictive lookaheads that are displayed in the interaction space, making it easier to predict the outcome of control input. The animator can essentially select the desired position of the character, rather than needing to generate an abstract input curve. Oore et al. [OTH02a, OTH02b] present a performance animation system based on a novel interface consisting of two six-DOF trackers embedded in cylinders. Like our system, they use a layered approach to animation, but map layers to particular body regions with a normal performance ordering. Oore et al. opt for a literal input mapping, aligning the cylinders with the character's bones, while we employ a more complex set of user-definable mappings that can be rapidly switched between at runtime. In the work of Dontcheva et al. [DYP03], users control a performance animation system by moving props in space which are mapped to character DOFs. Mappings are either defined explicitly or inferred from similarities between prop and character movement. By layering several passes of acting, the user adds detail to the animation. The differences between this system and our system stem mainly from differences in character parameterization: Dontcheva et al. map user input directly to rotational and translational character DOFs, while we use more abstract correlations. This allows us to animate complex characters. We provide tools that allow an animator to explicitly define mappings, but we do not support inference. Our system is highly configurable and the
user can switch between different mappings within one animation pass. Motion doodles [TBvdP04] use a sketch-based interface to control a two-DOF character that can move through a three-dimensional environment. The path of the character can be drawn, and cursive gestures are used to specify different character movements. They use a preset mapping and deal with a different range of movements than our system. Our system has some basic similarity to the blend shape method (e.g. [JTDP03]) commonly used in facial animation. While both methods blend input to create output, blend shapes work with full versions of the final target output (i.e. a mesh), whereas our system operates on a subset of parameters, specified over multiple layers, that are then used to calculate the final pose in a separate algorithmic stage. Blend shapes have no inherent mapping from input space to parameter value(s), while this is fundamental to our approach. Spatial keyframing [IMH05] embeds whole body poses at particular locations in input space. The system continuously interpolates between these poses as the pointer is moved. This provides simple, intuitive control, but the range of possible movement is limited. By tying spatial location to (interpolated) poses, spatial keyframing uses absolute mappings. In contrast, our system also offers relative mappings, where the input mouse location is interpreted relative to the current character pose, and control over any subset of character parameters. This allows for intuitive layering of animation passes and results in greater flexibility through the reuse of partial body correlations that can be found in multiple movements, versus more restrictive full body poses. Numerous computer puppetry systems have been developed that use motion capture technology to drive the movement of a character in real time (e.g. [Stu98]).
These systems use expensive, high-DOF input systems, whereas we are interested in employing the cheap, low-DOF input technology people already have access to. In addition, when these technologies use literal mappings, which is the norm, they require the animator to be able to actually make the movements he wishes to animate. We are interested in using simpler mappings to allow an animator to create movements he may or may not be personally capable of performing. Other approaches are based on replaying motion capture data. FootSee [YP03] uses a floor pressure sensor as input, while marker-based [CH05] and markerless [RSH 05] motion capture systems have been developed that measure a performer's movement and then reconstruct it using prerecorded motion capture data. Such approaches have the potential of creating high quality movement, but rely on the user's ability to perform the desired movements and potentially limit the user's control through reliance on prerecorded motion clips. Terra and Metoyer [TM04] present a system in which the desired key values are defined separately, but the user can interactively define the timing for translatory aspects (body translation, IK handles) of the motion. In our work, the angles and timing are both defined through performance, and both rotational and translatory data are controlled. Yamane and Nakamura [YN03] present a fast IK system that allows arbitrary points on a character to be pinned or dragged. While not a performance system per se, it allows for rapid pose manipulation while authoring animations. Neff and Fiume [NF06] developed a system for modeling expressive posture based on the arts literature that has also been applied to gesture modeling [NKAS07]. We find their parameterization of character pose provides useful handles for interactively controlling a character and use it here (see Table 2 and Section 4).

3. Basic Interaction and Workflow
A user creates animation interactively by moving his mouse pointer in the screen space of the character. The mouse movements are converted to changes in body pose through correlation maps. Each correlation map defines how mouse movement in a particular dimension (x or y) varies one or more parameters of a character's pose. IK and balance algorithms convert these parameters into a final pose. Often, multiple correlation maps will be active at any given time. The interaction palette defines which correlation maps are currently active and allows different active sets to be mapped to three different mouse buttons. Switching mouse buttons during a drag allows the user to very quickly change the correlation maps he is using without introducing any discontinuities in the motion. The interaction palette allows the user to specify up to ten correlation maps that can be triggered at the same time, define which mouse button activates which subset of maps, and alter other parameters such as map gain and the use of relative or absolute mappings. More details are given in Section 5 and an example is shown in Figure 1. Keyboard hotkeys are used to switch between correlation maps and even to switch whole palettes and mouse mappings.

Correlation Map Definition
Formally, a correlation map consists of a set of correlation entries, where each entry is a linear map from one-dimensional mouse input (either x or y) to a scalar parameter that is used in defining the state of the skeleton. A correlation entry is defined by four values per dimension: two mouse coordinates in either the x or y dimension, m_A and m_B, and two character parameter values, p_A and p_B, corresponding to the input mouse values. A simple linear function transforms an input value, m, to a character parameter value, p:

p = f(m) = p_A + (p_B - p_A) * (m - m_A) / (m_B - m_A)    (1)

Correlation entries are quick to specify and simple to invert.
The latter property is important for implementing relative mappings as discussed below.
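As a concrete sketch, Equation 1 and its inverse can be written as a small class. This is a minimal Python illustration; the class and method names are our own, not taken from the system described in the paper.

```python
class CorrelationEntry:
    """Linear map from one mouse dimension to a scalar pose parameter (Eq. 1)."""

    def __init__(self, m_a, m_b, p_a, p_b):
        # (m_a, p_a) and (m_b, p_b) are the two calibration points that
        # define the correlation entry.
        self.m_a, self.m_b = m_a, m_b
        self.p_a, self.p_b = p_a, p_b

    def apply(self, m):
        # p = p_A + (p_B - p_A) * (m - m_A) / (m_B - m_A)
        return self.p_a + (self.p_b - self.p_a) * (m - self.m_a) / (self.m_b - self.m_a)

    def invert(self, p):
        # Inverse map: recover the mouse value that would yield parameter p.
        # This is the property that makes relative mappings cheap to implement.
        return self.m_a + (self.m_b - self.m_a) * (p - self.p_a) / (self.p_b - self.p_a)
```

For example, an entry calibrated with mouse range [0, 100] and parameter range [0, 1] maps a mouse value of 50 to a parameter value of 0.5, and `invert` recovers the mouse value from the parameter.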
Correlation Map | Mouse DOF | Description
RHand | XY | Position constraint on right hand.
Two Hands | XY | Position of left (or right) hand; the other hand mirrors it.
LHand | XY | Position constraint on left hand.
Two Hand Vert | Y | Vertical movement of the two hands.
Twist | X | Full body rotation.
Twist | X | Second copy (allows a different scale).
Crunch | Y | Downward C bend of spine.
Beauty Line | X | S-curve through body; coronal plane.
Lean | X | Sideways lean.
Shoulders Vert | Y | Up and down movement of the collar bones.
Table 1: A sample palette from our user study.

Defining Correlation Maps
A number of correlation maps have been defined for the system. These provide a generic, flexible range of input useful for gesturing, night-club-style dancing and walking. Different sets of these maps have been defined on different palettes, which the user can freely switch between using hotkeys or GUI menus. The user can also change which correlation maps are included on any particular palette. An example palette designed for gesturing and provided to novice users in our test scenario is detailed in Table 1 and will be discussed in a later section. While flexible, the set of pre-built correlation maps may not meet all of a user's needs. A second interface, named the mapping definition palette, is provided to allow experienced users to interactively define new correlation maps. The interface allows all the low-level body parameters to be controlled through a GUI. To define a correlation map, the user first decides on the set of pose parameters he wishes to control. He then places a marker in input space to define the m_A and m_B input values and uses the GUI to define the correlated pose parameters, which can be previewed on the character. A correlation map is thus defined in a few clicks and saved for future use. The effective pose parameterization and real-time updates of the skeleton pose make this a simple task.
Previously defined correlation maps can be activated while designing a new map to allow the animator to ensure that the new map combines effectively with previous maps. Once defined, the new correlation maps can be loaded into the interaction palette and used with all the pre-existing maps. Advanced users will thus move back and forth between the two palettes, first experimenting with control mappings, then extending the set of control mappings, then authoring more animation, etc. The highly configurable nature of the tool allows it to adapt to different users' needs and skill levels.

Workflow
The animator must first decide on a set of correlation maps to use for the motion sequence. This can be done by either experimenting with the tool or reflecting on the nature of the movement. The animator configures the interaction palette so that it contains the desired maps, they are triggered by his preferred mouse buttons and any keyboard hot switching has been specified. All of this configuration data can be saved. The animator can then rehearse the motion to become familiar with the mappings and hotkey layout. To record, the animator clicks the record button and lays down a base layer of motion. Although not restricted to this, the base layer normally consists of hand movements in space, often combined with posture deformations and possibly head movements. This reflects the definitional nature of hand movement in determining gestures and the fact that posture changes are often correlated with the movement of the hands. Some participants in our study preferred to first specify posture deformation, and this remains an option. During any recording phase, the system can play either video or audio in the background to allow the animator to align movements with the character's text or example video. The animator makes multiple takes of the initial layer until satisfied. Once specified, the base layer can be replayed and additional movements added on top.
These movements may include further arm and posture movement, adjustment of arm swivel and hand depth, (additional) head movement, rotation of forearm and hand angles, adjustments to balance and pelvic twist, etc. Additional movements can be added in two ways. The animator can perform new mouse input, or the animator can invoke overlays: correlation maps which are driven by the recorded mouse input from previous layer(s). Overlays allow additional body movements to be automatically synchronized with previously recorded movements, avoiding the challenging problem of trying to perform a new motion in synchrony with a previous pass, and will be described in detail in Section 5. The animator can also reduce the speed of playback to make it easier to combine new features with the timing of previously recorded layers, and to reduce the need for fast mouse movements. The combination of the new layer and the previous motion is updated and displayed in real time. Each loop of input is recorded on a separate layer. The animator can turn on or off any of the recorded layers when generating the animation. This represents a more flexible form of undo, allowing the animator to use only the best interaction passes. It also allows him to examine the different components of the animation in isolation by turning layers on and off at will.
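The layered recording and toggling workflow above can be sketched as follows. This is a minimal illustration with hypothetical names; for simplicity, the values requested by the enabled layers are combined with a plain average, one of the blend rules the paper defines in its Table 3.

```python
# Sketch of layered recording with per-layer toggling (hypothetical structure).
class Layer:
    def __init__(self, name):
        self.name = name
        self.enabled = True       # layers can be switched off, a flexible undo
        self.samples = {}         # frame -> {parameter_name: requested_value}

    def record(self, frame, params):
        # Store the parameter values requested on this layer at this frame.
        self.samples[frame] = dict(params)

def combined_pose(layers, frame):
    """Average each parameter over the enabled layers that touch it."""
    requests = {}
    for layer in layers:
        if not layer.enabled:
            continue
        for param, value in layer.samples.get(frame, {}).items():
            requests.setdefault(param, []).append(value)
    return {param: sum(vals) / len(vals) for param, vals in requests.items()}
```

For instance, if a base layer records a crunch of 0.4 and a detail layer records 0.8 at the same frame, the averaged result is 0.6; disabling the detail layer restores the base value of 0.4.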
Description | # params
Spinal deformations in the coronal and sagittal plane following S and C curves. | 1 per dimension
Spinal twist. | 1
Right and left hand positions. | 3 per hand
Swivel angle of arms. | 1 per arm
Forearm rotation. | 1 per arm
Hand rotation. | 2 per hand
Vertical and horizontal movement of the collar bones. | 2
Gaze direction and tilt control for head. | 3
Lateral and forward/backward centre of mass shifts. | 2
Knee bends. | 1 per knee
Pelvic twists. | 1
Foot positions. | 3 per foot
Table 2: Character Parameterization: the reduced set of parameters used to characterize character pose.

4. Mapping Design
4.1. Character Parameterization
A critical issue in making effective correlation maps is ensuring that the controlled parameters are expressively relevant. Directly controlling the character's DOFs is not ideal, both because this requires too many parameters to specify a character's pose (48 for our skeleton, not including hand shape) and, more importantly, because individual DOFs do not have clear expressive meanings. At the same time, parameters that are too high level, such as a full character pose, limit the animator's control over the movements that can be expressed in the system. Also worth considering, linear combinations of joint angles will not in general produce certain desirable outputs, such as a particular end-effector path in space. We adopted the low-level parameter set specified in [NF06], which has a maximal set of 33 DOFs to define a skeleton pose and provides parameters that are expressively salient, based on research in the arts literature. Most interactions rely heavily on a subset of eleven of these parameters: six DOFs for hand positions, three DOFs for spine configuration and two DOFs for collar bones. The full parameter set is summarized in Table 2. The use of automatic balance adjustment and the availability of balance offset parameters has proved very important in creating lively motions.

Absolute vs. Relative Mappings
By definition, every correlation map has an absolute embedding in input space. This means that a particular location in input space corresponds to a particular parameter value in character space. If absolute mappings are used, the starting location of a mouse drag defines the initial configuration of the character. While using the system, an animator can switch between absolute and relative mappings. A relative mapping takes the current character position as the starting point for a movement and uses changes in mouse movement to deform the character from there. Formally, an absolute mapping is defined by Equation 1 and a relative mapping is defined as:

p = g(m) = f(m - f^-1(p_0))    (2)

where p_0 is the value of the parameter being controlled at the start of interaction. Absolute mappings require the user to be aware of the location of their mouse in input space. An advantage of absolute mappings is that they blend well with other absolute mappings with similar spatial relationships. Invoking an absolute mapping can cause a jump in character state, which is sometimes useful. For instance, if the entire input space corresponds to variations of a severely hunched back, invoking this mapping will instantly hunch a previously erect character. Relative mappings, conversely, work to adjust the character from its current position and do not require the animator to be aware of their input location. They work well for interacting on top of already recorded motion and are also used as the default for general interaction.

Average (relative): p(t) = p_0(t_0) + (1/k) * sum_{i=0..k-1} (p_i(t) - p_i(t_0))
Average (absolute): p(t) = (1/k) * sum_{i=0..k-1} p_i(t)
Add (relative): p(t) = p_0(t) + sum_{i=1..k-1} (p_i(t) - p_i(t_0))
Add (absolute): p(t) = p_0(t) + sum_{i=1..k-1} p_i(t)
Table 3: Blend rules for different inputs controlling the same parameter. Each formula is used to calculate p(t), the cumulative result of the k different requested values for the parameter. p_i(t) represents the i-th requested value for the parameter at time t. p_0 is from the first input layer.
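Equations 2 and 3 can be sketched together in a few lines. This is a minimal Python illustration under our own naming; the real system applies these maps per correlation entry and per input dimension.

```python
def linear_map(m, m_a, m_b, p_a, p_b):
    """f from Eq. 1: linear map from mouse value m to parameter value p."""
    return p_a + (p_b - p_a) * (m - m_a) / (m_b - m_a)

def linear_map_inv(p, m_a, m_b, p_a, p_b):
    """f^-1: the inverse map, needed by the relative mapping."""
    return m_a + (m_b - m_a) * (p - p_a) / (p_b - p_a)

def relative_map(m, p0, m_a, m_b, p_a, p_b):
    # Eq. 2: p = g(m) = f(m - f^-1(p0)). The current parameter value p0
    # anchors the mapping, so interaction deforms the character from its
    # present pose rather than jumping to an absolute configuration.
    return linear_map(m - linear_map_inv(p0, m_a, m_b, p_a, p_b),
                      m_a, m_b, p_a, p_b)

def blended_absolute_map(m, p0, t, blend_frames, m_a, m_b, p_a, p_b):
    # Eq. 3: p = f(m - c(t) * f^-1(p0)), where c(t) falls linearly from 1
    # to 0 over blend_frames, so a newly activated absolute mapping eases
    # in instead of causing a jump in character state.
    c = max(0.0, 1.0 - t / blend_frames)
    return linear_map(m - c * linear_map_inv(p0, m_a, m_b, p_a, p_b),
                      m_a, m_b, p_a, p_b)
```

At t = 0 the blended map behaves like the relative mapping of Eq. 2; once t reaches the blend duration it reduces to the pure absolute map f(m).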
In practice, we blend in absolute mappings when they are activated to avoid motion jumps:

p = f(m - c(t) * f^-1(p_0))    (3)

where c(t) linearly transitions from 1 to 0 over a specified blend duration, currently ten frames. A given parameter value may be varied by multiple inputs. This can occur if the same input is varied on multiple animation layers or, in occasional cases, when it is useful to have both the x and y input dimensions of a particular correlation map vary the same character parameter. These various parameter adjustments must be combined. We define two blending rules, one that averages input and one that adds it. These can also be defined in a relative or absolute sense. Table 3 summarizes the four update rules. A useful addition to the system would be to allow a new stretch of input to replace a previous section.

Correlation Map Design
Correlation maps can be defined at varied levels, and there is a trade-off between easy-to-use, high-level control and
more flexible, lower-level control. For instance, it is possible to build high-level correlation maps that make it very easy to control specific gestures, such as a shrug, or generate specific movements, like walking, but most of the correlation maps provided in the system are closer to the low-level parameters. Such low-level maps are generic and can easily be combined with each other to control a wide range of movement. Using simple maps and then combining them to effect more complex control also allows the scale of each component to be varied independently (more on scale below). In practice, it is very common to combine multiple correlation maps that are driven by the same input DOF, for instance mapping lateral hand movement, torso twists and balance adjustments to horizontal input.

Mapping Categories
The input-to-character-space mappings fall into three categories: direct spatial, spatially based and abstract. A direct spatial mapping connects an input parameter to a character parameter such that the screen location of the input is the same as the location of the character parameter (e.g. grabbing a hand and moving it about). Our head tracking and wrist position controls come close to this category, but in each case we chose to violate the exact constraint to provide more intuitive control. Hand movement is defined in chest space, and the direct mapping will be violated as the chest rotates. The adjustment of head movement is scaled to make it easier to control. The second category is spatially based. For instance, a mouse move to the left can cause a character to twist to the left, but there is no direct alignment between the mouse pointer and the location of a body part. Most mappings in the system fit into this category, as they provide intuitive control and tend to layer well with other spatially based mappings. In abstract mappings, there is no direct relationship between the spatial movement of the character and the movement of the mouse.
Forearm rotation is an example of this, and it could be associated with either input dimension. These mappings are rare.

5. Advanced Input
5.1. Configuring the Animation Interface
Palettes
In the interaction palette, a section of which is shown in Figure 1, the correlation maps are arranged on vertical channels, one map per channel. We refer to the set of currently available correlation maps as a palette, which contains up to ten channels. Palettes can be predefined, and an animator can switch between them as needed during animation. When defining the palette, the entry on any channel can be changed by selecting any of the available correlation maps from a drop-down menu. Table 1 shows a sample palette from our user study. It contains a combination of hand position controls and posture controls, which is typical for early interaction passes.
Figure 1: Correlation maps are arranged in the palette interface in columns. The name of the map is at the bottom of the column. The middle portion of the column controls how the map behaves and the top portion determines which mouse buttons activate the correlation map.
Other palettes may contain additional postural deformations, head movement, fine tuning of hand and forearm rotation, arm swivel, etc. A key feature of the system is that an animator will not normally use a single channel at a time, but combine multiple channels during a single interaction. As well, when switching between input mappings during an animation sequence, the animator will normally switch between sets of channels. For instance, an interaction run might begin with the animator combining right hand movement with a large twist and a slight beauty line. Part way through, the animator might switch to two-hand movement with a smaller twist and a torso crunch by switching mouse buttons, and then switch to a third mapping or even back to the first.
The power of the system comes, to a large degree, from being able to overload multiple channels during a single interaction pass.
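The palette mechanics above can be sketched as a small data structure. This is a minimal illustration with hypothetical names; the actual interface exposes further per-channel options.

```python
# Sketch of an interaction palette: up to ten channels, each holding a
# correlation map, with per-mouse-button flags so the active set of maps
# can change mid-drag simply by switching buttons.
BUTTONS = ("left", "middle", "right")

class Channel:
    def __init__(self, map_name, buttons=(), scale=1.0, relative=True):
        self.map_name = map_name
        self.buttons = set(buttons)  # mouse buttons that invoke this channel
        self.scale = scale           # gain applied to the correlation map
        self.relative = relative     # relative vs. absolute mapping

class Palette:
    MAX_CHANNELS = 10

    def __init__(self, channels):
        assert len(channels) <= self.MAX_CHANNELS
        self.channels = channels

    def active_maps(self, button):
        """Correlation maps invoked when dragging with the given button."""
        return [c.map_name for c in self.channels if button in c.buttons]
```

For example, with RHand on the left button, Twist on left and middle, and Crunch on middle, dragging with the left button drives RHand plus Twist; pressing the middle button mid-drag seamlessly switches control to Twist plus Crunch.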
Switching Control Mappings
The palette interface is used by the animator to specify which subset of channels can be invoked with each mouse button during an interaction run. The top three rows of each channel contain checkboxes corresponding to each of the three mouse buttons (left, middle, right), as can be seen in Figure 1. Enabling one of these checkboxes means that when the user drags over the character window with the selected button, the corresponding channel mapping will be invoked. The user can map as many channels as desired to a particular mouse button and associate different channels with different buttons. This is useful for very quickly switching control mappings while interacting with the character. By changing the button depressed, the nature of the control can be changed during a single mouse drag. As an example, this can be used to have a character rotate with the first portion of an arm movement, then stop rotating while seamlessly continuing the arm movement, and finally add a torso crunch to accompany a later arm movement. A set of correlation maps is normally ideal for controlling a very short segment of motion, so switching rapidly between them is essential for providing adequate control of longer sequences. Keyboard keys can also be used to switch mouse button mappings. By default, the first four palette channels are used for different combinations of hand movements. These are often combined with posture deformations, which can be stored in the latter six channels. The number keys can be used to switch the active hand(s) while maintaining the same posture controls. The key to the left of the "1" switches off all hand control. The keys 1 through 4 enable the corresponding hand mapping. None of these five keys changes the button setup for the right six channels.
This allows the keyboard to be used for rapid switching between hands, without affecting the selected posture channels, while the mouse buttons are used to switch between postural controls. Any rig consisting of a palette, mouse maps, scale values (see Movement Scale below) and relative or absolute mappings can also be tied to a keyboard hotkey. This allows rapid tool switching during interaction by pressing a single key. Quick mapping changes are what allow two-DOF input to be used to control a wide range of high-DOF character movement.

Advanced Channel Options
The behaviour of the individual channels can be adjusted through the interface using the parameters and controls summarized in Table 4. Scale and overlays are explained below.

Record | Highlighted when the channel is being recorded. Can turn on to record an overlay.
Absolute | Check box to allow the channel to be switched between absolute and relative mapping.
Overlay | Highlighted when the channel is applied as an overlay. Can turn on and off.
Overlay Absolute | Check box to specify whether the overlay is absolute or relative.
Scale | Adjusts the gain on the channel.
Name | Dropdown displaying the current correlation map and allowing other correlation maps to be selected.
Table 4: Properties associated with each channel.

Movement Scale
The scale value acts as a gain on the correlation map. A scale s is applied directly to the parameter values used to define a correlation map in Eq. 1, such that p_A' = s * p_A and p_B' = s * p_B. Changing the scale alters the amount of movement in input space required for a given pose change. Scale values can also be used to vary the contribution of different channels that are combined together on a given mouse button, allowing, for instance, small and large amounts of twist on different buttons. Specifying a negative scale inverts a mapping. A hand movement combined with a normal "crunch" will have the spine curl down in synchrony with a downward arm movement; with a negative crunch, the spine will curl up.
Each conveys a different, important expressive intent.

Overlays

One of the most challenging tasks in a layered approach to interactive animation is performing a new layer so that it synchronizes with the movements on a previous layer. Overlays are a recognition of the need to correlate additional body parameters with previously recorded ones. Any combination of channels can be invoked as an overlay. During playback, an active overlay channel will modify the character's movement based on the channel's correlation map, but rather than using interactive data from the user as input, it will use the mouse input from one or more specified, previously recorded layers. An animator can try an overlay and then decide whether or not to record it. The channel scale can also be adjusted interactively during playback, allowing it to be varied continuously over different portions of the animation. By default, overlays use an absolute mapping, but relative mappings are also possible. Overlays allow very different posture deformations to be applied to a given sequence of motion to create different characterizations. Unlike a static default posture, overlays are dynamic, changing based on the previously recorded motion. Such overlays act essentially as dynamic character filters, useful for making broad, stylistic changes.

Editing Operations

Editing occurs at multiple levels within the system. At the coarsest level, an animator can turn on or off any of the layers that have been recorded. This allows multiple takes
of any interaction to be recorded and the best performances to be selected. Individual channels within a layer can also be disabled after recording. Overlays represent an additional form of editing. Consistent with the performance paradigm, the most difficult types of edits are those that would require changes to the input mouse data (i.e., the performance). If the timing is off, we offer no performance-based way of editing it; this is best handled with off-line time warping, or by redoing the performance. If a recorded value needs to be changed, such as a hand position at a certain point in time, the user can record an offset to this data, making use of the blending rules defined in Table 3. Motion complexity can also be increased by layering posture deformations in this way. Another method for changing recorded data would be to allow short sections to be re-performed, replacing the previous data and blending at the ends. This replace-and-blend edit has not been implemented, but is a straightforward addition.

6. System

The set of parameters used to define a character's pose includes both world space and joint space values. The underlying animation engine satisfies these constraints using a combination of feedback-based balance adjustment, fast inverse kinematics and forward kinematics, based on an implementation of the system described by Neff and Fiume [NF06]. Specifically, an analytic IK routine is used to solve for the angles in the lower body kinematic chain. Balance is adjusted by feeding error values back to adjust the ankle angles and move the character to a desired balance point. Simple two-limb IK is used to position the wrists at desired constraint points. Our implementation differs from Neff and Fiume's in two significant ways. First, we use forward kinematics instead of optimization to control the torso. This means that a character will not adjust his spine to reach a target beyond his grasp, but will simply point in its direction.
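Two-limb IK of the kind used here for wrist placement has a standard closed-form solution. As a sketch only (a planar version of our own, not the paper's implementation), the elbow angle follows from the law of cosines and the shoulder angle from the target direction:

```python
import math

def two_limb_ik(tx, ty, l1, l2):
    """Planar two-limb analytic IK: return (shoulder, elbow) angles in
    radians placing the wrist at (tx, ty); one of the two solutions."""
    d = math.hypot(tx, ty)
    # Clamp to the reachable annulus so out-of-reach targets degrade
    # gracefully to pointing toward the target at full extension.
    d = max(abs(l1 - l2), min(l1 + l2, d))
    cos_elbow = (d * d - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    shoulder = math.atan2(ty, tx) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

The clamp mirrors the behaviour described in the text: a target beyond the character's grasp yields a fully extended limb pointing toward the target rather than a failure.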
This restriction behaved naturally in the movement tests we performed. The second difference is that wrist constraints are defined in the character's chest frame, rather than the world frame. This allows the character's hands to move with him if twists or other deformations are applied in later layers. The system also implements automatic collar bone adjustment based on the height of the hands.

Data Recording

Determining which data to store is a significant technical decision. We elect to store the original mouse data for each interaction run, along with the related data needed to reconstruct the motion, such as which correlation maps were active. This design decision provides maximum flexibility in editing the motion after it has been recorded. A hierarchical structure is used to organize the data. An interaction manager is responsible for recording and playing back all data. The interaction manager contains a set of interaction records. Each interaction record corresponds to one record/playback loop (one layer). These records store the 2D mouse samples for all interactions during the loop, and a sequence of interaction runs. A run corresponds to the period from one mouse click to a release. Runs store the correlation maps that are active during the run, offset values for each correlation entry (f⁻¹(p₀)), and scale samples for each correlation map at each time step during the run. Storing this data allows any interaction run to be turned on or off, and likewise any correlation map or channel. Maintaining this data also allows the blending rules to be arbitrarily changed after the data has been recorded.

7. System Evaluation

The system is evaluated with a novice user study and by using it to create a range of animations.

User Study

The system is designed foremost for the creation of spontaneous, improvised animation that has the free, chaotic feel of classical straight-ahead animation.
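The recording hierarchy described above (interaction manager, records, runs) might be organized as in the following sketch. All class and field names are hypothetical, and only the on/off replay logic from the text is shown:

```python
# Hypothetical sketch of the recording hierarchy: a manager holds one
# record per layer; each record holds runs (one mouse click to release),
# and each run stores enough state to replay or re-blend the motion.

class InteractionRun:
    def __init__(self, maps):
        self.maps = maps        # correlation maps active during the run
        self.offsets = {}       # per-entry offsets, e.g. f^-1(p0)
        self.samples = []       # (time, x, y) 2D mouse samples
        self.scales = []        # per-map scale value at each time step
        self.enabled = True     # runs can be turned on or off later

class InteractionRecord:        # one record/playback loop (one layer)
    def __init__(self):
        self.runs = []

class InteractionManager:
    def __init__(self):
        self.records = []       # one record per recorded layer

    def replay_inputs(self, layer):
        """Mouse samples of a layer's enabled runs, e.g. the input an
        overlay channel would feed through its correlation map."""
        return [s for run in self.records[layer].runs
                if run.enabled for s in run.samples]
```

Because the raw mouse samples, active maps, offsets and scales are all retained, a run can be disabled, an overlay re-driven, or the blending rules changed without re-performing anything.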
We wished to perform a user study to evaluate the system, but free improvisation is difficult to measure. We decided instead to use a task that lies outside the sweet spot of our system: the recreation of specific performances from video clips using the original audio. This task is particularly challenging in a performance-based system as it requires precise synchrony with the source audio. It has several advantages, however: it is well defined and easy to explain to subjects, and it decouples creativity from system usability, as subjects were not required to be creative. Having everybody animate the same movements also made it easier to compare user results. In the user study, 11 subjects (4 female, 7 male) with different levels of animation experience recreated brief performances based on video of two actors. In order to compare our approach to traditional keyframing, we asked them to animate the two sequences using both our system and Curious Labs' Poser. Half of the participants started with Poser; the others used our system first. After a brief training session on a given system (5-20 min), every subject created two animations using each tool. Due to time constraints, creation time for each animation was limited to roughly 20 min. Participants were allowed to view the movies as often as they wished in a movie player and to play the audio in both systems (but not the video). Users of our system were not allowed to define new correlation maps, but were asked to use those predefined on some simple palettes, such as the sample included previously. These palettes were created before the actors were recorded and were not customized to suit the actors' movements. After the experiment, participants filled out a questionnaire and compared the two systems with regard to several aspects
such as ease of use, level of detail, or satisfaction with the animations produced.

Figure 2: Frames from a dance animation that was generated with the system in real time, in a single pass.

The evaluation produced only a small number of statistically significant statements, due to the small number of participants and the fact that personal preferences seem to play a strong role in the type of tool a user prefers. The statistically significant results were that subjects felt our system encourages creativity more than Poser, and that they strongly preferred it for sketching out the initial performance. Several results fell just below statistical significance: users seemed to favour Poser for fine-tuning and for adding detail, there was a tendency to prefer an interactive over an offline approach to animation, and participants appeared to believe that our system produces more natural animations and is better suited for modeling style and expressiveness. Typical beginner problems included overly large amplitudes and a certain jerkiness of mouse movements visible in the resulting animations. Amplitude can be regulated by scaling the mappings, and trajectories could easily be smoothed. More experienced animators appreciated the ability to define their own mouse button assignments, and what they considered the greater naturalness of the movements from our system, particularly with respect to timing. It is worth noting that the well-defined task did not suppress personal preferences in tool use. In our system, some users would lay down an initial layer quite quickly and then try to refine it, while others would work to define a good mapping and then rehearse the movements multiple times before arriving at a final recording. Some of the best and worst results from both systems are included in the accompanying video. In every case, we show the results of the same user in each system.
The quality of results varies more across users than across systems.

Sample Animations

The accompanying video also includes some short clips made by an intermediate user of the system who was also involved in its design. These include gesture animations, a bow, walking, and a dance sequence. Several frames from the dance sequence are shown in Figure 2, although the motion can be better evaluated in the accompanying video. In the first gesture animation, hand position and posture deformation were modeled together and forearm rotation was added on a separate layer. A second gesture animation shows how a given hand movement track can be given very different style by overlaying different posture deformations. For the bow, posture deformation was recorded on one layer and arm movement was added on a second layer. The walking sequence is performed using three pairs of mirrored correlation maps: one pair controls the foot movement and balance adjustments, one the accompanying torso twists, and one the arm movements. In each pair, one member corresponds to the right step and one to the left. Walking is controlled by assigning the right-step maps to one mouse button and the left-step maps to another. The horizontal directions of each mapping are reversed, so that holding down one mouse button and making a forward arc will cause the character to take a right step forward, while holding the other mouse button and making a backward arc will cause the character to take a left step forward. Thus, by making back and forth arcs in input space, the character can be made to walk forwards or backwards. Ankle bends and toe rolls are not currently supported in the system, which reduces the realism of the foot movement. All motions in the extended dance sequence were recorded in real time on one layer. The snippet shows the beginning of a 1.5-minute animation that was recorded in a single take on the fourth try.
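The walking control described above reduces to a small mapping rule: one button invokes the right-step maps, the other the left-step maps with the horizontal input axis reversed. A schematic sketch, with button and label names invented for illustration:

```python
# Schematic of the mirrored walking maps: reversing the horizontal axis
# for the left-step button means alternating forward/backward arcs,
# combined with button swaps, drive alternating forward steps.

def step_direction(button, dx):
    """Map a horizontal mouse motion dx to (stepping foot, direction)."""
    if button == "left_step_button":
        dx = -dx  # left-step maps reverse the horizontal direction
    foot = "right" if button == "right_step_button" else "left"
    return foot, ("forward" if dx > 0 else "backward")
```

So a forward arc on one button and a backward arc on the other both advance the character, one foot at a time.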
While not a scientific result, it is worth noting that these types of free movement are especially fun to create in the system. It is also interesting how exploratory the process is: interesting movement patterns were often discovered by defining a mapping for one purpose and then interacting with it and finding new possibilities.

8. Discussion and Conclusion

Our system, and likely performance animation in general, performs very well for certain tasks while others are more difficult. Our system is easiest to use when creating free, spontaneous motion, a task that is difficult in approaches like keyframing. Controlling the motion envelope (the timing of transitions) also appears to be easier with direct control. Overlays provide a useful method for making large-scale changes to the style of a motion in a very controlled way; this feature of our system is not an aspect of performance animation in general. Layering reduces the
number of DOFs that need to be controlled in one pass and allows detail to be added. Precise editing of already recorded values is more difficult in our system, and precisely synchronizing movements with prerecorded audio is a challenging performance task. The latter is due to the difficulty of predicting when the alignment point will arrive; preview techniques, such as ghost previews [DYP03] that show the already recorded motion slightly ahead of time, may make this significantly easier. Non-performance-based tools, such as those used for processing motion capture, should combine well with performance data. They would allow a performance to be time warped to align with particular phrasing and also allow particular values to be adjusted more easily. Worthwhile extensions to the system include filtering the mouse input to smooth out unwanted jerkiness and adding a "replace and blend" editing option. In summary, we have presented a flexible and highly configurable tool for performance animation of complex characters that requires only a mouse and keyboard as input. The system can produce a good range of character movements. It is particularly useful for roughing out motions, creating quick prototypes and exploring the movement space. As users become more skilled, we feel they will also be able to produce a range of quality results. The system also provides a way to create free, spontaneous animations, such as the dance sequence, that would be difficult to generate in any other way. We believe such systems occupy an important niche in the range of animation tools.

References

[CH05] CHAI J., HODGINS J. K.: Performance animation from low-dimensional control signals. ACM Transactions on Graphics 24, 3 (Aug. 2005).
[DYP03] DONTCHEVA M., YNGVE G., POPOVIĆ Z.: Layered acting for character animation. ACM Transactions on Graphics 22, 3 (2003).
[IMH05] IGARASHI T., MOSCOVICH T., HUGHES J.: Spatial keyframing for performance-driven animation.
In Proc. ACM SIGGRAPH / Eurographics Symposium on Computer Animation 2005 (July 2005).
[JTDP03] JOSHI P., TIEN W. C., DESBRUN M., PIGHIN F.: Learning controls for blend shape based realistic facial animation. In 2003 ACM SIGGRAPH / Eurographics Symposium on Computer Animation (Aug. 2003).
[LNS05] LASZLO J., NEFF M., SINGH K.: Predictive feedback for interactive control of physics-based characters. In Eurographics (2005).
[LvdPF00] LASZLO J., VAN DE PANNE M., FIUME E.: Interactive control for physically-based animation. In Proceedings of SIGGRAPH 2000 (2000).
[NF06] NEFF M., FIUME E.: Methods for exploring expressive stance. Graphical Models 68, 2 (2006).
[NKAS07] NEFF M., KIPP M., ALBRECHT I., SEIDEL H.-P.: Gesture modeling and animation based on a probabilistic recreation of speaker style. ACM Transactions on Graphics (2007). To appear.
[OTH02a] OORE S., TERZOPOULOS D., HINTON G.: A desktop input device and interface for interactive 3D character animation. In Graphics Interface '02 (2002).
[OTH02b] OORE S., TERZOPOULOS D., HINTON G.: Local physical models for interactive character animation. Computer Graphics Forum 21, 3 (2002).
[Per95] PERLIN K.: Real time responsive animation with personality. IEEE Transactions on Visualization and Computer Graphics 1, 1 (1995).
[RSH*05] REN L., SHAKHNAROVICH G., HODGINS J. K., PFISTER H., VIOLA P.: Learning silhouette features for control of human motion. ACM Transactions on Graphics 24, 4 (Oct. 2005).
[Stu98] STURMAN D. J.: Computer puppetry. Computer Graphics and Applications (1998).
[TBvdP04] THORNE M., BURKE D., VAN DE PANNE M.: Motion doodles: an interface for sketching character motion. ACM Transactions on Graphics 23, 3 (2004).
[TJ81] THOMAS F., JOHNSTON O.: The Illusion of Life: Disney Animation. Abbeville Press, New York.
[TM04] TERRA S. C. L., METOYER R. A.: Performance timing for keyframe animation.
In 2004 ACM SIGGRAPH / Eurographics Symposium on Computer Animation (July 2004).
[vdP01] VAN DE PANNE M.: Motion playground.
[YN03] YAMANE K., NAKAMURA Y.: Natural motion animation through constraining and deconstraining at will. IEEE Transactions on Visualization and Computer Graphics 9, 3 (2003).
[YP03] YIN K., PAI D. K.: FootSee: an interactive animation system. In Proceedings of the 2003 ACM SIGGRAPH / Eurographics Symposium on Computer Animation (2003), Eurographics Association.
Develop Computer Animation
Name: Block: A. Introduction 1. Animation simulation of movement created by rapidly displaying images or frames. Relies on persistence of vision the way our eyes retain images for a split second longer
Tutorial 13: Object Animation
Tutorial 13: Object Animation In this tutorial we will learn how to: Completion time 40 minutes Establish the number of frames for an object animation Rotate objects into desired positions Set key frames
Pro/E Design Animation Tutorial*
MAE 377 Product Design in CAD Environment Pro/E Design Animation Tutorial* For Pro/Engineer Wildfire 3.0 Leng-Feng Lee 08 OVERVIEW: Pro/ENGINEER Design Animation provides engineers with a simple yet powerful
INSTRUCTOR WORKBOOK Quanser Robotics Package for Education for MATLAB /Simulink Users
INSTRUCTOR WORKBOOK for MATLAB /Simulink Users Developed by: Amir Haddadi, Ph.D., Quanser Peter Martin, M.A.SC., Quanser Quanser educational solutions are powered by: CAPTIVATE. MOTIVATE. GRADUATE. PREFACE
Video, film, and animation are all moving images that are recorded onto videotape,
See also Data Display (Part 3) Document Design (Part 3) Instructions (Part 2) Specifications (Part 2) Visual Communication (Part 3) Video and Animation Video, film, and animation are all moving images
Performance Driven Facial Animation Course Notes Example: Motion Retargeting
Performance Driven Facial Animation Course Notes Example: Motion Retargeting J.P. Lewis Stanford University Frédéric Pighin Industrial Light + Magic Introduction When done correctly, a digitally recorded
Research-Grade Research-Grade. Capture
Research-Grade Research-Grade Motion Motion Capture Capture The System of Choice For Resear systems have earned the reputation as the gold standard for motion capture among research scientists. With unparalleled
0 Introduction to Data Analysis Using an Excel Spreadsheet
Experiment 0 Introduction to Data Analysis Using an Excel Spreadsheet I. Purpose The purpose of this introductory lab is to teach you a few basic things about how to use an EXCEL 2010 spreadsheet to do
Module 3 Crowd Animation Using Points, Particles and PFX Linker for creating crowd simulations in LightWave 8.3
Module 3 Crowd Animation Using Points, Particles and PFX Linker for creating crowd simulations in LightWave 8.3 Exercise 2 Section A Crowd Control Crowd simulation is something you see in movies every
FAMOS 6.0 What s new?
FAMOS 6.0 What s new? 1 vom 22. Dezember 2008 What s new? FAMOS more than signal analysis Visualization Analysis Presentation The strength of FAMOS simple fast even with high amount of data sets The pocket
Animation. Basic Concepts
Animation Basic Concepts What is animation? Animation is movement of graphics or text Some common uses of animation include: Advertising o Example: Web site advertisements that are animated to attract
CIS 536/636 Introduction to Computer Graphics. Kansas State University. CIS 536/636 Introduction to Computer Graphics
2 Lecture Outline Animation 2 of 3: Rotations, Quaternions Dynamics & Kinematics William H. Hsu Department of Computing and Information Sciences, KSU KSOL course pages: http://bit.ly/hgvxlh / http://bit.ly/evizre
imc FAMOS 6.3 visualization signal analysis data processing test reporting Comprehensive data analysis and documentation imc productive testing
imc FAMOS 6.3 visualization signal analysis data processing test reporting Comprehensive data analysis and documentation imc productive testing www.imcfamos.com imc FAMOS at a glance Four editions to Optimize
A Desktop Input Device and Interface for Interactive 3D Character Animation
A Desktop Input Device and Interface for Interactive 3D Character Animation Sageev Oore Department of Computer Science University of Toronto Demetri Terzopoulos Department of Computer Science New York
Creating Hyperlinks & Buttons InDesign CS6
Creating Hyperlinks & Buttons Adobe DPS, InDesign CS6 1 Creating Hyperlinks & Buttons InDesign CS6 Hyperlinks panel overview You can create hyperlinks so that when you export to Adobe PDF or SWF in InDesign,
Introducing your Intelligent Monitoring Software. Designed for security.
Introducing your Intelligent Monitoring Software. Designed for security. RealShot Manager Advance www.sonybiz.net/videosecurity Simple, flexible and scalable HD-ready Intelligent Monitoring Software from
Short Presentation. Topic: Locomotion
CSE 888.14 Advanced Computer Animation Short Presentation Topic: Locomotion Kang che Lee 2009 Fall 1 Locomotion How a character moves from place to place. Optimal gait and form for animal locomotion. K.
Intelligent Monitoring Software
Intelligent Monitoring Software IMZ-NS101 IMZ-NS104 IMZ-NS109 IMZ-NS116 IMZ-NS132 click: sony.com/sonysports sony.com/security Stunning video and audio brought to you by the IPELA series of visual communication
Solar Tracking Controller
Solar Tracking Controller User Guide The solar tracking controller is an autonomous unit which, once configured, requires minimal interaction. The final tracking precision is largely dependent upon the
COMP 150-04 Visualization. Lecture 15 Animation
COMP 150-04 Visualization Lecture 15 Animation History of animation The function of animation Illustrate steps of a complex process Illustrate cause and effect, context Show trends over time, tell a story
Linkage 3.2. User s Guide
Linkage 3.2 User s Guide David Rector Wednesday, April 06, 2016 Table of Contents Table of Contents... 2 Installation... 3 Running the Linkage Program... 3 Simple Mechanism Tutorial... 5 Mouse Operations...
Intro to 3D Animation Using Blender
Intro to 3D Animation Using Blender Class Instructor: Anthony Weathersby Class Objectives A primer in the areas of 3D modeling and materials An introduction to Blender and Blender s toolset Course Introduction
D animation. Advantages of 2-D2. Advantages of 3-D3. Related work. Key idea. Applications of Computer Graphics in Cel Animation.
Page 1 Applications of Computer Graphics in Cel Animation 3-D D and 2-D 2 D animation Adam Finkelstein Princeton University COS 426 Spring 2003 Homer 3-D3 Homer 2-D2 Advantages of 3-D3 Complex lighting
GAZETRACKERrM: SOFTWARE DESIGNED TO FACILITATE EYE MOVEMENT ANALYSIS
GAZETRACKERrM: SOFTWARE DESIGNED TO FACILITATE EYE MOVEMENT ANALYSIS Chris kankford Dept. of Systems Engineering Olsson Hall, University of Virginia Charlottesville, VA 22903 804-296-3846 [email protected]
Wednesday, March 30, 2011 GDC 2011. Jeremy Ernst. Fast and Efficient Facial Rigging(in Gears of War 3)
GDC 2011. Jeremy Ernst. Fast and Efficient Facial Rigging(in Gears of War 3) Fast and Efficient Facial Rigging in Gears of War 3 Character Rigger: -creates animation control rigs -creates animation pipeline
Animation. The Twelve Principles of Animation
Animation The Twelve Principles of Animation Image 01. Public Domain. 1 Principles of Animation Image 02. Public Domain. In their book, The Illusion of Life, Ollie Johnston and Frank Thomas present the
