Computer Animation for Articulated 3D Characters

Szilárd Kiss

October 15, 2002

Abstract

We present a review of the computer animation literature, concentrating mainly on articulated characters and on systems offering at least some degree of interactivity or real-time simulation. Advances in techniques such as key-framing, motion capture (also known as mocap), dynamics, inverse kinematics (IK), controller systems, and even neural networks are discussed. We analyze these methods from several aspects: semantics, data input, animation types, generating animations, combining animations, varying animations, and target type; the path-planning perspective is also taken into account for the reviewed animation systems.

1 Introduction

This report is an overview of motion (animation) control methods for articulated figures. These figures consist of rigid links (segments in the terminology of H-anim [42], a standard for articulated human figures) connected by joints with 1, 2 or 3 degrees of freedom (DOFs). The control methods use kinematics (time-based joint rotation values) or dynamics (force-based simulation of movement) to drive the articulated figures; the concepts and basics of both are presented in many animation textbooks [61] [59]. Besides the task of animating, which can be quite complex by itself, some of the control methods presented take into account the context (environment) in which the character is animated, such as environment objects and obstacles, or even group and social behaviour [76]. The methods discussed differ not only in their motion control approach, but also in their animation characteristics (quality, speed, etc.), and thus they suit different target applications. This research will be integrated into my own work via an animation editor which is currently being developed.
The animation editor complements a geometry editor [46] and an H-anim based bone hierarchy editor [47] to provide a system capable of producing the avatar or agent bodies needed to populate virtual environments. Since the aim is not just a simple animation editor but a whole animation system that can be used for specifying basic movements and creating combinations of movements, the goal of this report was to find previous research dealing with animation in general, and with combining animations and qualitatively assessing them in particular. In our system, interactivity is the key requirement, which calls for speed, and more importantly, for animation generation and combination possibilities that result in compelling animation sequences. The outcome of this review is formulated in section 6.

2 Animation data

Traditionally, computer animation data is stored as key-frame data. This data structure has two components: the keys, which specify relative time moments, and the animation data values (rotation angles, displacements, etc.) at those moments [58] [61]. These data values can be created manually (by talented animators) or captured automatically using different types of capture hardware [62] [75] or simpler mechanical devices such as a bicycle [17]. Mechanical, force-based measurements were used in the past for (medical) musculoskeletal measurements and analysis, such as the walking characteristics analysis presented in [43]. Newer magnetic or optical commercial motion capture systems use markers for tracking coordinates, and even mechanical (or optical fiber) wearable systems exist for capturing movement; the latter are more cumbersome but provide a greater area of movement freedom. Markerless, image-based motion capture systems are also starting to emerge, where the input is one or more video camera feeds [28]. Motion capture systems usually store data as absolute coordinate values, but there are motion capture file formats that specify rotation data, which can be extracted from the absolute coordinates to be used with hierarchically articulated characters.

Next to body animation data, special motion capture systems are also used to capture facial animation data: [67] uses this technique to identify MPEG-4 Facial Animation Parameters (FAPs) in captured data and reuse them with 3D and cartoon heads consisting of a fixed skeleton and animated with spring (muscle) simulations. Motion capture data can be used for directly animating (directing in real time) characters, as in the tennis game and virtual dancing in [45] and [62], even enhanced with additional motion/gesture recognition routines [26]. This type of animation may be useful for interactive systems, television, etc.
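The key-frame structure described at the start of this section, keys plus data values with interpolation in between, can be sketched as follows. This is a minimal illustration assuming linear interpolation and a single scalar channel; the channel name and key values are hypothetical:

```python
def sample_keyframes(keys, t):
    """Linearly interpolate a key-framed channel (e.g. one joint rotation
    angle) at time t. `keys` is a list of (time, value) pairs sorted by time."""
    if t <= keys[0][0]:
        return keys[0][1]          # clamp before the first key
    if t >= keys[-1][0]:
        return keys[-1][1]         # clamp after the last key
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)   # normalized position between the keys
            return v0 + a * (v1 - v0)

# Hypothetical knee-angle channel: 0 deg at t=0, 60 deg at t=0.5, 0 deg at t=1.
knee = [(0.0, 0.0), (0.5, 60.0), (1.0, 0.0)]
```

Production systems typically use smoother (spline or quaternion) interpolation; the data layout is the same.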
If the data is applied to characters later, then it is stored in key-frame format, usually as rotation angle data. There are systems that extract motion key-frames from any kind of data source (manual, IK, procedural, mocap) and apply that data to their purposes, for instance qualitative variance [23] or composing animations [64] [13]; these methods are described in section 3.2 and its subsections.

2.1 Animation data types

Several distinctions can be applied to animation data. One of them is reusability. Some data can be reused in the sense of re-run cycles (like walking or running [81] [18]) or by simply repeating a certain movement sequence when needed. There are also animation sequences that depend on the character's properties (position, orientation) relative to target entities and thus cannot be used in a pre-recorded manner; these are animations like grasping or pointing [40], which are usually achieved using IK or similar techniques described in section 3.

3 Generating animation

Table 3.1 enumerates and roughly classifies the animation methods that are discussed in the rest of this section. The classification dimensions are:

1. Interactivity/Speed. The higher this value, the more usable the system is in interactive virtual systems. Simple motion combination methods are the easiest and fastest form of animation.

2. Reusability/Speed. This dimension rewards motion control methods that reuse recorded movements, not requiring new motion data generated from scratch.

3. Generality. Physics-based and IK motion systems score higher, since in theory they can produce an unlimited number of motion types, while procedural systems are limited by the motion data they possess (a quality as well as quantity limitation).

4. Quality. Physics-based and qualitative variance methods score higher by producing unique, context-sensitive, adapted motions.

3.1 Physical simulation

A widespread method for animating articulated figures is kinematics. It is based on the motion properties time, position and velocity. The term forward kinematics is used when joint rotations are specified as a function of time. The term inverse kinematics refers to the problem of finding the forward kinematics values when the end point (end effector) of the character articulation is known. There are many different methods to solve this problem, and this attention has transformed the concept into a well-known animation (simulation) alternative. The most common examples of forward and inverse kinematics are key-frame editors (which interpolate forward kinematics values from a few intermediate values) and the possibility, present in many modeling/animation packages [30], of fixing the extremities of articulated figures while moving the other components, or of dragging the extremities themselves (in which case IK methods resolve the joint rotations).
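For a planar two-link limb, both directions can be written in closed form. A hedged sketch (the link lengths and the choice of the elbow-down branch are illustrative assumptions, not taken from any cited system):

```python
import math

def forward(l1, l2, q1, q2):
    """Forward kinematics of a planar two-link limb:
    joint angles -> end-effector position."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

def inverse(l1, l2, x, y):
    """Analytic IK: end-effector position -> one joint-angle solution
    (the 'elbow-down' branch; a mirrored 'elbow-up' solution also exists)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    q2 = math.acos(max(-1.0, min(1.0, c2)))   # clamp against rounding error
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2
```

Full articulated figures have redundant DOFs, so their IK has infinitely many solutions and requires the numerical or optimization methods discussed below.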
Dynamic simulations are force-based methods that take into account the forces that occur in articulated characters as well as additional properties or events such as velocities, mass, torques, and collisions [40]. This animation method is computationally expensive; it is used for instance in realistic simulations [37] [83] [78] and in usability tests [57] [5]. With these simulation methods, the end result generally consists of time-dependent values which represent the forward kinematics angle data that can be used to drive the target articulated character. The IK problem has been approached from different angles. There are numerical and analytical methods [54] [5], but there are also simpler, intuitive, non-mathematical methods aiming for speed gains: [34] tries to approximate target points with forward kinematics, while [30] describes techniques where the hierarchies (limbs) are inverted.
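One intuitive iterative scheme in this spirit is cyclic coordinate descent (CCD), shown here as a generic illustration rather than the specific method of any cited paper: each joint in turn is rotated so that the end effector swings toward the target. The chain lengths, start pose and target below are illustrative assumptions:

```python
import math

def fk_points(lengths, angles):
    """Joint positions of a planar chain; each angle is relative to the
    previous link (forward kinematics)."""
    pts, x, y, a = [(0.0, 0.0)], 0.0, 0.0, 0.0
    for l, q in zip(lengths, angles):
        a += q
        x, y = x + l * math.cos(a), y + l * math.sin(a)
        pts.append((x, y))
    return pts

def ccd_sweep(lengths, angles, target):
    """One CCD pass: rotate each joint, last to first, so that the end
    effector lands on the ray from that joint to the target."""
    tx, ty = target
    for i in reversed(range(len(angles))):
        pts = fk_points(lengths, angles)
        jx, jy = pts[i]          # joint being adjusted
        ex, ey = pts[-1]         # current end effector
        angles[i] += math.atan2(ty - jy, tx - jx) - math.atan2(ey - jy, ex - jx)
    return angles

lengths, angles = [1.0, 1.0, 1.0], [0.3, 0.3, 0.3]
for _ in range(20):              # a few sweeps suffice for a reachable target
    ccd_sweep(lengths, angles, (1.5, 1.5))
```

CCD needs no Jacobian or matrix inversion, which is why heuristic schemes like it are popular when speed matters more than optimal posture.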

Table 3.1 (classification dimensions for different animation methods) positions the methods below along the Interactivity/Speed, Quality, Reusability/Speed and Generality dimensions.

Procedural animation (based on data):
- Fourier transformations to extract and alter qualitative features
- image and signal processing techniques: filtering motion features
- signal processing techniques for emotional transforms
- learning and applying motion styles through HMMs and analogy
- LMA properties based alterations (for effort and shape)
- knowledge-driven alterations for specific subdomains
- noise functions for qualitative variance
- constraints and displacement maps
- constraint-based retargeting with a spacetime approach
- motion warping (time-based alteration)
- simulated annealing (scaling & fine-tuning)
- path-based motion alteration
- motion prototypes annotated with behavioral tags
- combination based on height (foot elevation) criteria
- 2nd order derivative to extract and combine basic motions
- priority (importance) levels and weights based combination
- segmenting data and recombining with transitional motions
- combining non-overlapping data based on spacetime constraints
- timewarping to match motion sequences
- fade-in, fade-out functions
- curve blending functions
- interpolation between motion sequences
- grouping mutually exclusive motions for faster combination

Physical simulation (control systems):
- dynamic Newton equations system
- mass data and moments of inertia with motion equations
- inverse dynamics with energy minimization constraints
- decentralized, agent-based approach (forces = agents)
- inverse kinetics (IK with mass)
- muscles as motion actuators with sensor feedback
- proportional-derivative (damped spring) controllers (PD)
- PD servos for dynamic simulation of joint torques
- Genetic Programming method to create PD controllers
- closed-loop method for balancing in open-loop walking
- dynamics system with feedback control method
- simplified inverse dynamics with mass (Newton-Euler eq.)
- iterative numerical integration for IK
- IK constraints with forward kinematics and feedback
- inverting hierarchies (articulations)
- sagittal elevation resolution method for IK
- direct manipulation
- Neural Networks based learning from examples/feedback
- Machine Learning (adaptive state machines)

[73] describes a molecular-simulation-based IK algorithm used to calculate the motion of a skeleton's joints in real time, using 5 magnetic tracking sensors as end effectors. The algorithm iteratively performs numerical integration of constrained motion in a Lagrangian framework. [8] describes a heuristic, iterative approach to IK resolution. In this approach, each functional segment of a limb hierarchy (in their specific case a robot arm) is regarded as a set of agents, yielding a decentralized approach. The constraints are regarded as forces, and each force is represented by an agent. Four types of forces are used: goto (target), pain (operation range), relax (equilibrium point) and orientation (for grabbing). Path smoothing must be applied after the IK resolution to eliminate jitter. [78] uses kinematic constraints for physics-based simulation of artificial animals (fish). Inverse dynamics is applied by solving the motion equations (the Newtonian laws of motion), resulting in forces that are physically correct but not realistic. Constrained optimization is used to find motions requiring less energy (an open-loop controller). Another method presented for physical animation is motion synthesis using muscles as motion actuators, which allows the integration of virtual sensor feedback. Finite-state machines are used to add and remove constraint equations. The paper [14] presents a concept called inverse kinetics, a variant of IK incorporating center-of-mass information. Besides the pseudo-inverse Jacobian matrix calculations for IK, the mass distribution information is considered in the calculations to provide center-of-mass position control and posture optimization for a more physically correct appearance. The same idea is described in more detail in [15]. Another type of control system is the proportional-derivative (damped spring) controller [52] [34], PD for short.
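A minimal damped-spring PD loop driving a single joint toward a target angle might look as follows; the gains, the unit moment of inertia and the explicit Euler integration are illustrative assumptions, not taken from the cited systems:

```python
def pd_torque(kp, kd, q, q_target, dq, dq_target=0.0):
    """Damped-spring control law: a torque proportional to the position
    error plus a derivative term that damps the velocity error."""
    return kp * (q_target - q) + kd * (dq_target - dq)

# Drive one joint of unit moment of inertia toward 1.0 rad with Euler steps.
q, dq, dt = 0.0, 0.0, 0.01
for _ in range(2000):
    tau = pd_torque(kp=40.0, kd=10.0, q=q, q_target=1.0, dq=dq)
    dq += tau * dt               # unit inertia: angular acceleration = torque
    q += dq * dt
```

Raising kp stiffens the spring (faster but more oscillatory response); raising kd adds damping.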
A PD controller generates a force or torque proportional to the differences in positions and velocities between source and target coordinates: a proportional (spring) term that minimizes position error and a derivative (damping) term that minimizes velocity error. Similar is the work presented in [52], where an open-loop cyclic motion (walking) is extended with closed-loop feedback for the inclusion of balancing. A finite state automaton is used for balancing feedback, while a PD controller is used for the walking simulation. [39] also uses the PD technique for servos to compute joint torques for the dynamic simulation of human athletics movements (running, bicycling, vaulting). Manually constructed control systems (incorporating intuition, observations and biomechanical data, and built using a control techniques toolbox) are used for characters and environment objects. The user can change certain high-level properties of the simulation (speed, orientation, etc.). They also use mass-spring cloth simulation to add secondary motion to the animations.

[71] describes a dynamics method with a musculoskeletal basis structure and uses a dynamic system of Newton equations to resolve the animation. The system is called dynamic because equations can be added or removed, and attributes changed, on the fly. Articulation relationships are modeled with constraint equations, which are also dynamically alterable. The dynamics system in [49] uses, as an efficiency enhancement, a feedback control scheme with a two-stage collision response algorithm. The collision response algorithm consists of a system for calculating the velocity change at impact and a second system for calculating forces and counter-forces for non-penetrating objects. The user can specify kinematic trajectories for selected DOFs. [41] describes an inverse dynamics technique applied to a simplified skeleton with mass information attached to the body segments. A recursive algorithm based on the Newton-Euler motion equations is used to calculate the force and torque of the joint actuators. A walking gait controller is presented in [72] that generates walking motion along a curved path and over uneven terrain with high-level path specification. The high-level parameters are step length, step height, heading direction and toe-out. A sagittal elevation motion algorithm is used to specify the foot-floor interaction model, using the sagittal elevation angle for realistic contact. The motion algorithm takes into account the properties of parent joints and segments (rotation, body properties) to calculate position and positioning data relative to the basis key-frame data. [37] and [83] present human running and human diving animation methods, respectively, based on the same principles. They use mass data and moments of inertia in combination with a commercial package that generates the motion equations. The movements are separated into phases, and a state machine is used to control the action selection mechanism. The movement actions in the case of diving are more diverse than in the case of running, including jumping and twisting next to running and balancing. [53] presents interactive techniques for controlling physically-based animated characters using the mouse and the keyboard. By assigning mouse movements or keyboard shortcuts to DOFs, direct manipulation of limited-complexity characters (2D, few DOFs) is possible. The input devices also impose limitations; for instance, a mouse can control 4 DOFs at a time (the control procedure is time-dependent).
It also uses a state automaton to combine the animations. Another category of controllers stems from the field of artificial intelligence and neural networks (NN). [36] uses neural networks for learning motion (from examples or positioning feedback), but applies them only to simple articulations (no humans): a pendulum (3 limbs), a car (learning to park), a lunar module (learning to land) and a dolphin (learning to swim). Similar is their earlier work [35], concentrating on automatic locomotion learning for fish and snake-like creatures with highly flexible, many-DOF joints, but also lacking the complexity that is necessary for human articulated figures. Their work was certainly inspired by the pioneering work in this area conducted by Sims [70]. Similar is the approach taken by [34] to achieve physical simulation with the aid of Genetic Programming (GP) to produce controller programs. Key marks (a subset of key frames and tracks) are used to specify the goals of an animation, compared with spline control points which have uncertainty incorporated through values and time. From this, a proportional-derivative controller is calculated. [7] presents a data-driven NN approach that aims to grasp the essence and intrinsic variability of human movement. They use a recurrent RPROP artificial neural network with state neurons and weighted cost functions, in combination with a pre-processing normalization step according to a mannequin based on average human characteristics, with the aim of learning and simulating human movements.

[27] describes physics-based animation using adaptive state machines as controllers. The state machines adapt through machine learning techniques, and they use balancing, pre and post states for optimal control. Controller selection is based on a query and a weighted appropriateness result, controllers having scopes and weights that define their capabilities. The technique is presented as a more general, longer-term solution than data blending and warping, but it is rather a question of physical simulation versus a data-based approach. [29] uses high-level cognitive modeling for action planning. Domain knowledge is represented by preconditions, actions and effects, while characters are directed by goals. The AI formalism of situation calculus is used to describe changing worlds using sorted first-order logic, which means using the possible successor state axiom instead of the effect axiom (what the world can turn into). The paper presents an animation method hierarchy (a pyramid) with geometric, kinematic, physical, behavioral and cognitive modeling representing higher and higher levels of animation techniques. They use cognitive modeling to direct behavioral animation. [40] presents another type of domain knowledge. Virtual touch sensors are used in the hands of virtual humans to simulate grasping behavior. Small spheres are used as collision sensors in combination with grasping knowledge (deduced from the grasped object's properties) for a physically correct grasping sequence. A particular method for character animation is presented in [63]. A timed, parametric, stochastic and conditional production L-system, expanded with force fields, synthetic vision and audition, is used to simulate the development and behavior of static objects, plants and autonomous creatures. An extra iteration step is used to derive the state of objects at increasing time moments.
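Stripped of the timed, parametric and conditional machinery of [63], the rewriting core of an L-system reduces to parallel symbol substitution; a minimal sketch (the classic algae rules are a textbook example, not from the cited system):

```python
def lsystem(axiom, rules, iterations):
    """Expand an L-system: each iteration rewrites every symbol in parallel
    via its production rule; symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Lindenmayer's classic algae model: A -> AB, B -> A.
algae = lsystem("A", {"A": "AB", "B": "A"}, 5)
```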
The animation system used is a real-time multi-L-system interpreter enhanced with ray-traced image production. Emergent behaviors can arise from the use of virtual sensor functions in the conditions of production rules. A physically based virtual sensor is the tactile sensor (the other two virtual sensors are the image-based visual sensor and the acoustic sensor), which evaluates the global force field at a certain position. [20] uses different levels of detail for physical simulations. Similar to graphics LOD techniques, different simulation levels are used for animation: dynamic simulation, hybrid kinematic/dynamic simulation and point-mass simulation. These levels are selected according to the interaction probabilities. Objects have a space of influence, determining collision possibilities and thus the need for a detailed dynamic simulation. The switching of simulation levels is done during simulation steps that match certain criteria for a smooth transition, such as the character being airborne. Another possible simulation LOD selection criterion the authors mention is a cost/benefit heuristic, which requires the developer to specify importance values in the simulation environment.

3.2 Procedural animation

We divide the procedural animation methods from two points of view. First, we look at the combination techniques for motion data (the bottom of the procedural animation group in table 3.1), which provide the easiest interactivity possibilities by simply reusing a motion library. In addition to combining motion sequences, there is another category of motion generating methods (the top of that group) which aim to alter the motion data based on properties defined for the character, its mental model, behavior, etc. The latter methods are generally used together with motion combination techniques; this category is thus possibly less interactive than the motion combining methods alone (for instance by being more computationally demanding and not performing in real time), but it also provides more qualitative variance in the generated motion sequences. An animation method that does not fit the following classifications is the impostor technique presented in [2]. The aim there was to produce real-time (fast) animations for geometrically complex characters, possibly multiple characters at once. The authors therefore use the 2D sprite technique in 3D by pre-generating snapshot pictures of the complex characters at several levels of detail and mapping these onto billboards instead of really animating the characters. Re-rendering is controlled by a temporal coherence model: only if threshold values (differences between the snapshot and the animated character) are exceeded is the impostor re-rendered.

Combining animation data

Combining animation sequences (motion capture data) can be a tedious task if done by hand. Therefore, automatic motion combination techniques have emerged, from the simplest to the most complex, from fading functions to overlapping and blending techniques. One of the simplest motion combination methods is the use of fade-in and fade-out functions [65] [54], or the motion parameter curve blending method used by [82]. Similar is the method presented by [3], where motions are segmented into motion phases that can be named, stored and executed separately, and possibly connected via transitional motions.
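A fade-in/fade-out join of two sampled motion channels can be sketched as follows; the linear fade weights over a fixed overlap are an illustrative choice (the cited systems use smoother ease curves):

```python
def crossfade(seq_a, seq_b, overlap):
    """Join two sampled motion channels: fade seq_a out while fading seq_b
    in over `overlap` frames, with linear weights."""
    head, tail = seq_a[:-overlap], seq_b[overlap:]
    blended = []
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)            # fade-in weight rises 0 -> 1
        a = seq_a[len(seq_a) - overlap + i]    # tail of the outgoing motion
        b = seq_b[i]                           # head of the incoming motion
        blended.append((1 - w) * a + w * b)
    return head + blended + tail
```

Applied per joint channel, this produces a smooth transition whenever the two motions end and begin in roughly compatible poses; conflicting poses need the weighting schemes discussed below.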
Slightly more complex is the motion interpolation technique used by [72] to combine walking movement sequences into data with height changes at foot contact that fit the terrain variations the character is walking on. In addition to the simple fade-in and fade-out functions, [65] uses radial basis functions to interpolate between distinct motion data sequences. The approach is based on a verbs-and-adverbs analogy, where the original motion (verb) is interpolated according to some property (adverb) for motion transition purposes. This way, a smooth transition is achieved between motion sequences. In an earlier publication [66] they use spacetime constraints for the same purpose, to reassemble non-overlapping motion sequences. Although motion data is used, it is broken up and converted into a dynamics formulation incorporating spacetime and IK constraints, and an energy (torque) minimizing approach is used to enhance the naturalness of the generated movement. [21] and [22] use motion prototypes, a dictionary of gestures, to create animations from text and dialogue theory. The gesture motions are annotated and synchronized for animating dialogues using parallel transition networks (PaT-Nets); see section 5.3 for a description of this and other animation architectures.

However, these methods work satisfactorily only with non-conflicting motion sequences. When the motion data sets try to act upon the same target joints, the results can be unexpected. To cope with this problem, different methods for combining motion data were proposed, based mainly on different weighting and summing techniques. [40] [69] use priority (importance) weights and weight-based combination of conflicting data in the initiating and terminating movement phases, with an ease-in and ease-out technique based on cubic step functions, to achieve smoothness next to a meaningful combination of movements. These methods were collected in a software library [13] for the management of animation data combinations. In this library, actions have scope (affected joints) and priority (relative importance) properties, while agents (articulated characters) also have scope: they can accept or reject actions. The blending of motions is executed depending on the action type, motion or posture. [64] uses the traditional animators' knowledge that human motions are created from combinations of temporally overlapping gestures and stances, but goes one step further and uses groups of mutually exclusive animations which compose levels with different importance weights, achieving, next to motion combination, a certain speed gain by simplifying the search for conflicting motions. [12] uses a range of motion generator techniques: dynamics, motion capture (live motion), motion capture playback and IK motion tasks, where the motion generators can be used in combination (one technique per motion generator, each acting on a set of joints). In their case, conflicting joints are summed using priority levels and weights. A particular case of motion combination is presented in [44], where cyclic motion capture data is used in combination with a PD controller. The controller is responsible for executing the linear and angular acceleration of state variables over time as a result of the path planning algorithm.

Altering (varying) animation data

If motion sequences are reused frequently, the repetitions lend a monotone feeling to the animation.
Since motion data (motion libraries) are limited, researchers have tried to introduce variations into motion data to achieve non-repetitive movement sequences. They achieved this in various ways, ranging from noise functions to B-splines and from IK to neural networks.

Qualitative variations

The first set of methods discussed uses key-frame data and alters it qualitatively using different functions and methods. The goal is to introduce qualitative variations into the animation data, making non-repetitive re-use of the same data set possible. [79] applies Fourier transformations to motion data and uses frequency analysis to extract basic factors (like walking) and qualitative factors (like brisk, slow, etc.). Interpolation and extrapolation in the frequency domain are then used to create new motion types from neutral data and emotional mood variations. The new motions can be generated continuously, and additional manual control possibilities exist to influence the animation by hand. [1] performs emotional transforms on motion data using signal processing techniques. They analyze and extract neutral motion and emotional differences from motion data and apply these to other types of neutral data. The varying components used (extracted) are speed and spatial amplitude.
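These frequency-domain alterations share one core idea: transform a motion channel, rescale selected frequency bins, and transform back. A naive sketch (the plain DFT and the single cutoff rule are illustrative simplifications; the cited systems are considerably more elaborate):

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def scale_high_band(x, cutoff, gain):
    """Rescale every frequency bin at or above `cutoff` (counting mirrored
    bins): gain < 1 smooths a motion channel, gain > 1 exaggerates detail."""
    X = dft(x)
    n = len(X)
    for k in range(n):
        if min(k, n - k) >= cutoff:    # distance to DC, mirror-aware
            X[k] *= gain
    return idft(X)

# One slow cycle is below the cutoff and survives the smoothing untouched.
motion = [math.sin(2 * math.pi * t / 16) for t in range(16)]
smoothed = scale_high_band(motion, cutoff=4, gain=0.0)
```

Low bins carry the gross movement and high bins the subtleties, which is exactly why band-wise scaling reads as a style change rather than a different motion.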

[82] uses motion warping techniques to alter key-frame data. By warping the motion parameter curves to animator-specified constraint key-frames, local variations are introduced in the motion data without changing the fine structure of the original motion. Motion sequences are combined by overlapping and blending the parameter curves. The technique can be used to create libraries of reusable motion by specifying a small number of constraint key-frames. [19] uses similar techniques from the image and signal processing domain as complementary techniques to key-framing, motion capture and procedural animation. The techniques are multiresolution motion filtering, multitarget motion interpolation with dynamic timewarping, waveshaping and motion displacement mapping. In multiresolution motion filtering, the low frequencies contain the general motion, while the high frequencies contain the detail, subtleties and noise. By filtering certain bands, different styles of motion can be obtained. Timewarping techniques are used to match movement patterns regardless of the animation time, resulting in the combination (warping) of movement sequences. Motion waveshaping is used to introduce motion limits or to alter movement styles. Motion displacement mapping is used to locally alter a movement sequence without changing its global aspect. The authors state that the most important use of this latter technique would be the alteration of a grasping movement towards a different target object (although the new target object should be close to the original one). [16] uses Style Hidden Markov Models (HMMs) to learn patterns, in particular the style of movements, from motion capture data and then uses these Style HMMs to generate motions with different style and scope. The goal is to extract a parameterized model that covers the generic behavior of a family of motion capture data.
The Style HMMs are applied using analogy rules; the technique is applicable to long motion data sequences and is regarded by the authors as an alternative to motion libraries.

New motion from data

A second set of methods could be defined as methods using motion data as the basis for new motion calculations, such as noise functions or knowledge-driven methods; even physical simulation methods are sometimes used. [64] uses noise functions on existing key-frame data in combination with IK-based motion calculations to simulate vivid, natural movements from existing key-frame motion data. A hybrid numerical IK method is presented in [54] for interactive motion editing. Constraints are specified on the key-frame motion data, to which B-splines are fitted. These fittings result in hierarchical displacement maps that are used for motion alterations, from which the animation data is calculated (using IK) by eliminating redundant coordinates. Example constraints specify bending below doorways, terrain following, motion combination and even motion retargeting (the next group of motion altering methods). [18] uses interactive, knowledge-driven altering techniques for running data, based on empirical knowledge (relations between parameters), physical knowledge (for the body trajectory) and constraint knowledge (for flight/support states and stance/swing phases). Example parameter relations are presented, such as: the increase in velocity is linear in the increase in step length up to a certain speed (24 km/h), after which only the step frequency is increased to further increase the velocity. Other relations concern the level of expertise (experts have a greater stride length) or the relation between leg length and the Froude number (inertial force / gravitational force). [23] uses movement analysis and alteration based on LMA (Laban Movement Analysis) properties. The effort and shape properties are used (the other three properties used by LMA are body, space and relationship) to introduce qualitative differences (mood, internal state). Key-frame data is used from various sources; the shape parameter influences the key-frame values, while the effort parameter influences the execution speed of the key-frame data, specifying also the frame distribution between key points. The altered key-frames are regarded as via-point and goal type end-effectors for an interactive IK system. The same requirements for virtual human animation are presented in an earlier work [4]. Based on an analysis of animation methods, the paper concludes that both empirical data and cognitive properties are important in animation, resulting in an approach based on LMA and human properties like personality, emotions and temperament. The authors express the view that for directing attention and attention-dependent motions (walking, reaching), sensing actions and interleaved motions are needed.

Motion retargeting

A different set of methods are the ones that concentrate on motion retargeting: the motion data is altered to be usable for articulated characters of different sizes and types. [10] parameterizes the motion capture data and applies it to other target articulations. The second order derivative is used to search for significant changes in motion, analyzing zero-crossings to extract basic action segments, which are later interpolated using cubic splines to yield new motion sequences. Different types of tags are used for action participant objects. [31] retargets motion to new characters using specific features (contact points) of the original motion as constraints.
Their argument is that procedural and simulation retargeting produce new data without the qualities of the original detailed mocap or key-frame data. They use a spacetime approach which eliminates the spatial jerkiness of frame-based retargeting methods through look-ahead and look-behind techniques (anticipating constraints) over the whole motion sequence, resulting in smooth motion data. [38] uses a two-step motion adaptation method for new characters. The first step consists of geometric scaling enhanced with mass scaling; in the second step the parameters are fine-tuned using a simulated annealing technique (progressive adaptation of multivariate functions towards target states). Although the method is applied to control systems and not to motion data like the other methods in this section, it is still relevant from the retargeting perspective. One method which differs in scope from the previous ones, presented in [32], is the use of different paths to retarget the motion data. Paths represented as editable B-spline curves are altered, and foot constraints are used to eliminate foot skating (a method also presented in earlier work [31]).
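As a toy illustration of the retargeting idea (and not the constraint-based methods of [31] or [38]), the simplest baseline scales the root trajectory of a motion by the ratio of the target and source characters' leg lengths, so step sizes match the new body, while joint rotations are copied unchanged. The function name and the uniform-scaling assumption are ours:

```python
def retarget_root_trajectory(root_positions, source_leg_len, target_leg_len):
    """Scale recorded (x, y, z) root positions by the leg-length ratio.

    A naive baseline: joint rotations are copied unchanged, and a real
    retargeting method would add contact constraints on top of this to
    avoid foot skating.
    """
    s = target_leg_len / source_leg_len
    return [(x * s, y * s, z * s) for (x, y, z) in root_positions]

# A character with legs twice as long takes steps twice as large:
# retarget_root_trajectory([(0.0, 0.9, 0.0), (0.3, 0.9, 0.6)], 0.9, 1.8)
# -> [(0.0, 1.8, 0.0), (0.6, 1.8, 1.2)]
```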

3.3 Alternative classifications

There are of course other types of animation classification. For instance, [74] and [75] use a physical/geometrical classification instead of the physical/procedural one, in which kinematics is listed as a geometrical animation type next to procedural animation methods. [29] defines a pyramid hierarchy of geometric, kinematic, physical, behavioral and cognitive animation methods, each building on top of the previous. Animation can also be classified from the point of view of target systems. Film and animation production software uses complex physical simulations to make compelling animations, since its target objective is not the real-time applications market. On-line environments and games, on the other hand, have the real-time requirement; they need fast, reusable animations and thus orient more towards procedural animation.

4 Movement semantics

Semantic analysis and semantic generation of movement are important components in the behavior (brain) specification for animated characters, as stated by [3] [4], where this semantics is used as a method to alter movement sequences. [18] uses a high-level motion control system based on dynamic parameters in combination with empirical, physical and constraint knowledge to procedurally generate walking/running animation data. Others, like [40], use object-grasping knowledge for sensor-based grasping simulation of different types of objects in virtual environments. Classical cartoon creators use different animation sequences to emphasize the movement semantics of their cartoon characters. Computer character animators can use the same techniques when creating key-frame animations [51] to achieve expressiveness and believable behavior. Additionally, film theory concepts can be used for animations [56], mainly for staging, viewpoint animation, etc. On the other hand, there are exact medical and psychological-social measurements for the semantics of human movement.
Regarding medical measurements, [43] presents a series of walking characteristics, while [81] discusses the phases of walking, forces and moments, and muscle activity resulting from human gait analysis. Psychological textbooks analyze different human poses, assigning meanings to them. McNeill [60] analyzed the meaning of gestures and the gesturing behavior of humans during dialogs. He categorized arm gestures into five types (iconics, metaphorics, deictics, beats, emblems) and studied their connections to dialogue characteristics. Similarly, human facial movements were analyzed by Ekman [25] to extract the expression of different psychological effects such as mood or state of mind from face configurations (features), resulting in the Facial Action Coding System (FACS). These results found their way into computer animation. [21], and more recently the BEAT system [22], uses the McNeill gesture and dialogue semantics to generate the gesturing behavior of autonomous characters. [21] uses gestural annotations, gestural lexicon look-up and timing from speech generation to produce the animation. Similarly, [22] creates gestures from speech and dialogue rules. An arbitrary animation system is wrapped with behavior tags and used in synchronization with speech and the semantic context of the text
to produce the gesture animations. Even FACS data is used to generate the facial expressions for the simulated dialogues. [80], [55] and other facial animation systems rely heavily on Ekman's findings in creating facial expressions. The Waters facial model [80] uses muscle simulations and the FACS system to model facial expressions and emotions. [21] uses the same FACS system to generate facial expressions (lip shapes, conversational signals, punctuators, manipulators and emblems) as animated expressions accompanying dialogue utterances. Behavioral animation is also gaining momentum. Simple animal behaviors [78], complex human dialogue behaviors [22] and some of the motion alteration methods from section use behavior, moods and emotions to influence the created animation, making it more expressive, natural and meaningful.

5 Applications, tools, systems

5.1 Editor types

Historically, 3D editor systems used different views for editing: the 3+1 approach, representing the scene from top, front, side and 3D views [61]. There are, however, other approaches, where the editor interface is inserted into the 3D environment, breaking away from the multiple viewpoints and from the WIMP interface towards a fully integrated 3D interface [33] [47]. Classical key-frame operations in commercial modeling packages handle environment animations besides articulated characters. Since they do not produce interactive environments, this is a logical animation approach. There are also software packages that concentrate only on the animation of articulated characters, and the resulting animations can be applied in interactive environments. However, these animations are still rigid, and running them repeatedly will convey unnaturalness to the environment. These packages have different tools enabling easy foot positioning. [30] presents a number of these approaches as used by commercial packages.
The first method is IK with locks, in which 3D character articulations are locked (pinned down) and IK calculations are used to compute joint rotations when the character is moved. A typical example is pinning the foot to the floor to eliminate foot skating. Some packages offer the same pinning functionality for forward kinematics, where the user manually positions the unconstrained articulations to achieve the desired result. A similar animation possibility is dragging limb extremities, where internal joint rotations are resolved by IK. A more advanced possibility is the footstep generator: the animator specifies footprints on the ground, and the software animates the character's movements to match the footprints. This calculation is usually a physical simulation which has to be refined, for instance by adding personality or individual characteristics. An unusual method is the use of inverse (or broken) hierarchies, based on the forward kinematics animation of inverse limbs, typically the legs and arms. It is called broken because, to achieve inverse hierarchies with non-repeating leaves, the torso has to be separated from the hierarchies and animated (kept in synchronization with the rest of the character) separately. This can be somewhat cumbersome, but the advantage
is that the foot can be positioned without IK. Generally, when characters are being animated, it is also important to have references to previous frames. Some software has a built-in onion-skin effect, or helps by allowing dummy objects to be placed as markers. Another approach to animation, for which intuitive interfaces are also starting to emerge, is the use of key-frame or mocap data together with tools to test manipulation possibilities on these data (combination and alteration techniques) as an interactive system would do in real time. Such a system is presented in [19], where filters can be used to alter animations. A complex, high-level animation interface is presented in [68] which allows the directing of multiple actors and cameras in real time. Keywords serve as high-level controls predefining actions, while text-based sentences are used for high-level actions. There are also tools for preprogramming actions, since there are many virtual humans to control. A scripting tool records all actions played by an actor and plays them back as requested; actors can be programmed one by one. [33] uses a network of interrelated objects and constraints to visually edit the properties of virtual environments, including animation. The interesting aspect is that the edited environment and the controls are all three-dimensional, making the editing interface integrated into the environment. [67] presents a facial animation editor based on motion capture data and editing constraints representing co-activation laws, symmetry laws and animation limits, where the animation itself is driven by emotion parameters.

5.2 Planning

A classical path planning algorithm is A*, used by [44]. They use an orthogonal projection of the environment onto a 2D plane, divided into cells. Collision detection and path planning are done based on the occupied/free cells of the plane. The free cells are transformed into a graph, to which the A* algorithm is applied.
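The grid-based A* scheme just described can be sketched in a few lines. The occupancy-grid representation and 4-connected neighborhood below are illustrative assumptions, not details of [44]:

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* over a 2D occupancy grid (0 = free cell, 1 = occupied cell),
    with a 4-connected neighborhood and a Manhattan-distance heuristic."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    tie = itertools.count()               # breaks f-value ties deterministically
    frontier = [(h(start), 0, next(tie), start, None)]
    best_g = {start: 0}
    parent = {}                           # doubles as the closed set
    while frontier:
        _, g, _, cur, prev = heapq.heappop(frontier)
        if cur in parent:
            continue                      # already expanded with a better cost
        parent[cur] = prev
        if cur == goal:                   # walk parents back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        x, y = cur
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= n[0] < len(grid) and 0 <= n[1] < len(grid[0])
                    and grid[n[0]][n[1]] == 0
                    and g + 1 < best_g.get(n, float("inf"))):
                best_g[n] = g + 1
                heapq.heappush(frontier, (g + 1 + h(n), g + 1, next(tie), n, cur))
    return None                           # goal unreachable through free cells
```

On the 3x3 grid [[0,0,0],[1,1,0],[0,0,0]], astar(grid, (0, 0), (2, 0)) routes around the blocked middle row through the single free cell at (1, 2).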
Variations on this method exist, for instance boundary-cell skipping for collision avoidance or multilevel cell maps [6]. A planning method used frequently in games takes advantage of space subdivision techniques, with different types of subdivision. It can be applied to both 2D planes and 3D space, providing both a notation for freely navigable space and a collision detection mechanism. [77] uses an octree division of free and occupied spaces in a virtual environment to notate free space for path planning, with dynamic possibilities for updating the octree data due to animations, or for adding and removing objects when the content changes. [63] uses an autonomous planning method based on a reactive, timed behavioral L-system that handles visual and tactile virtual sensors. The sensor values are used in production rules to determine the behavior of the characters. [50] provides a method of high-level animation by specifying approximate paths and synchronization constraints. Their system automatically handles collisions and animation using forward dynamics and a synchronization mechanism that depends on the relative positions of objects to each other rather than a common coordinate system. Splines are used for motion (path) specification and are
altered by scripting depending on the synchronization constraint states. [11] presents an autonomous virtual human directing system based on a synthetic vision sensor (the viewpoint of the virtual human). The autonomous humans are directed by a behavioral system at the motivational level (disregarding the behavior system), the task level (if the behavioral system allows it) and the direct motor level (influencing the modality of the current action), according to the information extracted from the visual sensors using vision techniques. [29] uses a reactive path planning method based on the possible successor state axiom instead of the effect axiom (a "what the world can turn into" metaphor). Their characters are directed by goals and domain knowledge, incorporating actions, preconditions and effects. For the actual path planning, a sensor- and interval-based (effect range) narrowing of the possible world states (regarded as a tree structure), and thus of the plan sequences, is used as a preliminary step for faster resolution. The following methods are complementary to path planning, but necessary to execute the selected paths. [72] presents gait generation along curved paths and uneven terrain with high-level path control. The high-level, intuitive parameters used are step length, step height, heading direction and toe-out. Similarly, [32] retargets motion data to different paths, using constraints to eliminate foot skating. The path is represented by a B-spline curve, which is editable to produce new movement paths. [37] uses a path following method based on redirecting the runner character to face the point it will reach in a two-second animation cycle. This redirection allows a dynamic path planning method to be used.

5.3 Animation systems (architectures)

For real-time virtual humans, [45] presents a number of criteria or guidelines recommended to achieve good animation results.
These include a low polygon count, fast skin deformations, non-physical skeleton animation, etc. Given that the statement was made in 1998 with reference to SGI workstation hardware, it remains partially valid for today's ever more powerful PC environments, especially in settings where multiple characters must be animated in complex environments with complex behavior. However, physics-based methods are also approaching the real-time level and will start to emerge, especially in professional settings. [3] and [9] describe parallel finite-state machine controllers called Parallel Transition Networks (PaT-Nets), used together with additional parameters and information coded in a higher-level representation called the Parameterized Action Representation (PAR). A PAR is defined as the description of a single action in terms of objects, agent, applicability conditions, preparatory specifications, execution steps, manner, termination conditions and post-assertions. This is used as a sense-control-act architecture to animate smart avatars, whose reactive behavior is manipulated in real time to be locally adaptive. The architecture integrates autonomy control levels (to optimize lower-level reactivity or plan higher-level complex tasks), gesture control based on an object-specific relational table and non-verbal communication, attention control directed by visual sensing requirements, and locomotion with anticipation for favorable positioning in interaction scenarios. The paper [4] presents in detail their attention
control method based on sensing events, which simulates deliberate, spontaneous and idle attention or behavior using an automata system controlled by an arbitrating mechanism. Part of the animation architecture presented previously is also used by [21] for the synchronization of gaze and hand movements with dialogues. Gesture PaT-Nets send hand and arm timing, position and shape to the animation system, which produces a file of motions depending on this data. Motions are categorized as beats and gestures: beats are based on speech rhythm, while gestures cover the deictic, iconic and metaphoric gestures based on meaning representations. Beats are often superimposed on other gestures, and coarticulation (two gestures without relaxation), preparation and foreshortening (anticipation) effects are also reproduced using the PaT-Nets synchronization architecture. A further extension of the system is discussed in [22], where an automatic text-to-dialogue conversion method is used to generate synchronized animation and speech from the semantic context and a behavior-tagged animation system. [64] uses a distributed multi-agent system (MAS) with blackboard agent synchronization for animating multi-user virtual environments. Multiple animation engine values at each LAN and a global behavior engine value are used for local animation frame synchronization (a "one mind, multiple bodies" or parallel-universes analogy). For consistency, all environment objects behave as agents to avoid multiple locking (similar to database techniques). The aim of the animation system presented in [68] and [69] is to provide integrated virtual humans with (multiple-source) facial and body animation and speech, as well as a straightforward interface for designers and directors to handle these virtual humans. Speech-animation synchronization is done using duration information from the text-to-speech system.
The system is built upon a server/client architecture using a virtual human message protocol that enables connection over networks and makes it possible for many clients to control the virtual humans, which thus behave as virtual actors. Keywords are used as high-level controls predefining actions, and text-based sentences for high-level actions. Tools for preprogramming actions and a scripting tool to record and play back all actions played by an actor are used to exploit the system's possibilities. The animation management methods used for this directing system are described in detail in [13]. [48] presents an animation framework for general types of animation (not just bipeds) with a combination of techniques (kinematics, dynamics and control theory). The input desired velocity and heading are transformed by a feedback controller into forces and torques, which are distributed to the legs in contact with the ground by a linear programming algorithm. Forward dynamics is used to calculate the actual body displacement, while a kinematic gait controller calculates the stepping pattern. The animation framework is tested in uneven-terrain and target-following experiments. With such complex systems it is possible to achieve even the animation for text-to-scene conversion systems such as [24], or text-to-animation systems, as is already being done by the BEAT system [22].
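The velocity-tracking control loop described for [48] (a feedback controller producing forces, forward dynamics integrating them into displacement) can be illustrated, in heavily simplified form, with a proportional controller driving a point mass. The gain, mass and time-step values are our own illustrative choices, not taken from the paper:

```python
def track_velocity(desired_v, steps=200, dt=0.01, mass=1.0, gain=40.0):
    """Drive a point mass toward a desired velocity.

    Each step, a feedback controller turns the velocity error into a
    force, and explicit-Euler forward dynamics turns that force into
    velocity and displacement. This stands in for the much richer
    pipeline of [48] (torque distribution, gait control) only
    conceptually.
    """
    x, v = 0.0, 0.0
    for _ in range(steps):
        force = gain * (desired_v - v)   # feedback control on the velocity error
        v += (force / mass) * dt         # forward dynamics: integrate acceleration
        x += v * dt                      # ...then velocity, into displacement
    return x, v
```

After two simulated seconds the velocity has converged to the target, while the displacement records the distance covered during the ramp-up.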

6 Conclusions

During the review of the different animation systems we looked at the applicability of the described systems to our constraints, which are inherent in the setup we work with. We use internet-enabled 3D virtual reality technologies to provide wide user access, so the target platforms are desktop computers with consumer graphics cards. Desktop hardware provides ever increasing computational and graphics power, but complex real-time simulation methods are not yet feasible for interactive, web-based applications. We analyzed the animation systems from three points of view: animation quality, computational overhead and reusability, since our aim is a complete system that provides compelling animation with a small computational overhead and allows the created animations to be reused. The systems analyzed against these criteria were themselves complex systems that presented results for all three considerations or only for some of them; either way, they were considered in our review. The two main categories of animation methods with regard to movement data production are motion-capture-based and simulation-based animation methods. One of these forms the basis of an animation system, while additional methods provide enhancements to the animations produced by the base method. Regarding animation quality, there are two subconsiderations: variation and naturalness. Motion capture methods capture motion performed by humans, thus they also capture the naturalness of the movements, and not only when performed by movement experts. The drawback lies in the fact that the motion sequences are invariant, and in case of reuse the repetition is easily observable. On the other hand, simulation methods produce variable movement data, but they often lack the naturalness of captured movement data, and are therefore mainly used in simulating robots.
The second consideration is computational overhead. Here motion capture methods are at an advantage, since they do not need computations to produce the movement data for each animation sequence; they simply use prerecorded data. The price of the variety provided by simulation-based animation methods is their high computing requirements, making them too expensive for certain application areas. From the first two considerations we can see that motion capture is more favorable for our purposes, but animation sequences based solely on motion capture data still have a negative property: the resulting animations are invariant. The third criterion we looked at is therefore reusability, with emphasis on motion data analysis and handling. Simulation-based systems always produce new data, so this criterion does not apply to them. With a little computational overhead (less than what simulation methods need), different methods can be used to analyze motion capture data, extract features, style, basic motions, etc., and use this new motion knowledge to modify the motion capture data in a natural, compelling way to produce varied animations. Our conclusion is that motion capture methods in combination with motion analysis and alteration methods provide quality, naturalness and reusability with a small footprint on speed, and thus on interactivity, since the computational
overhead is small. Motion data analysis and processing is particularly important with the advent of movement capture methods from 2D video images, especially if the movement is extracted from general-purpose video material where motions are not executed separately and expressively, with motion capture in mind. We need qualitative analysis to break captured movements (regardless of their source) into basic movements that represent separate actions, allowing a variety of combinations, together with qualitative variation of the animation data based on movement parameters and psychological (mood, emotion) factors, to simulate compelling animations for agent and avatar 3D characters. Regarding the actual motion analysis and alteration methods, many options exist. However, in the light of possibly using complex movement data captured from video sources, an obvious choice would be a motion segmentation method based on motion curve parameters, where the derivatives indicate movement endpoints, making the separation of movement sequences into basic movements possible. This method would, however, have to be extended to extract the overlapping movement sequences of video-based movement capture data, since in that case the movements are most likely mixed.

References

[1] Kenji Amaya, Armin Bruderlin, and Tom Calvert. Emotion from Motion. In Graphics Interface 96.
[2] Amaury Aubel, Ronan Boulic, and Daniel Thalmann. Animated Impostors for Real-Time Display of Numerous Virtual Humans. In Virtual Worlds 98, pages 14-28.
[3] Norman I. Badler, Rama Bindiganavale, Juliet Bourne, Jan Allbeck, Jianping Shi, and Martha Palmer. Real Time Virtual Humans. In Proceedings of the International Conference on Digital Media Futures 99. British Computer Society.
[4] Norman I. Badler, Diane Chi, and Sonu Chopra. Virtual Human Animation Based on Movement Observation and Cognitive Behavior Models. In Computer Animation 99.
[5] Norman I. Badler, Cary B. Phillips, and Bonnie L. Webber.
Simulating Humans: Computer Graphics, Animation, and Control. Oxford University Press.
[6] Srikanth Bandi and Daniel Thalmann. Space discretization for efficient human navigation. In Proc. Eurographics 98, Computer Graphics Forum, volume 17(3).
[7] Y. Bellan, M. Costa, G. Ferrigno, F. Lombardi, L. Macchiarulo, A. Montuori, E. Pasero, and C. Rigotti. Artificial Neural Networks for Motion Emulation in Virtual Environments. In CAPTECH 98. Springer.
[8] Uwe Beyer and Frank Śmieja. A Heuristic Approach to the Inverse Differential Kinematics Problem. In Journal of Intelligent Robotic Systems: Theory and Applications, volume 18(4).
[9] Rama Bindiganavale, William Schuler, Jan M. Allbeck, Norman I. Badler, Aravind K. Joshi, and Martha Palmer. Dynamically Altering Agent Behaviors Using Natural Language Instructions. In Autonomous Agents 2000.
[10] Ramamani N. Bindiganavale. Building Parameterized Action Representations from Observation. PhD thesis.
[11] Bruce M. Blumberg and Tinsley A. Galyean. Multi-Level Direction of Autonomous Creatures for Real-Time Virtual Environments. In Computer Graphics (Siggraph 95 Proceedings), pages 47-54.
[12] Vincent Bonnafous, Eric Menou, Jean-Pierre Jessel, and René Caubet. Cooperative and Concurrent Blending Motion Generators. In International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG 2001).
[13] Ronan Boulic, Pascal Bécheiraz, Luc Emering, and Daniel Thalmann. Integration of Motion Control Techniques for Virtual Human and Avatar Real-Time Animation. In VRST 97. ACM.
[14] Ronan Boulic, Ramon Mas, and Daniel Thalmann. Inverse Kinetics for Center of Mass Position Control and Posture Optimization. In European Workshop on Combined Real and Synthetic Image Processing for Broadcast and Video Production.
[15] Ronan Boulic, Ramon Mas, and Daniel Thalmann. Robust Position Control of the Center of Mass with Second Order Inverse Kinetics. In Computers & Graphics Journal.
[16] Matthew Brand and Aaron Hertzmann. Style Machines. In Computer Graphics (Siggraph 00 Proceedings).
[17] David C. Brogan, Ronald A. Metoyer, and Jessica K. Hodgins. Dynamically Simulated Characters in Virtual Environments. In IEEE Computer Graphics and Applications, volume 15(5), pages 58-69.
[18] Armin Bruderlin and Tom Calvert. Knowledge-Driven, Interactive Animation of Human Running. In Graphics Interface 96.
[19] Armin Bruderlin and Lance Williams. Motion Signal Processing. In Computer Graphics (Siggraph 95 Proceedings).
[20] Deborah A. Carlson and Jessica K. Hodgins.
Simulation Levels of Detail for Real-time Animation. In Graphics Interface 97.
[21] Justine Cassell, Catherine Pelachaud, Norman Badler, Mark Steedman, Brett Achorn, Tripp Becket, Brett Douville, Scott Prevost, and Matthew Stone. Animated Conversation: Rule-based Generation of Facial Expression, Gesture & Spoken Intonation for Multiple Conversational Agents. In Computer Graphics (Siggraph 94 Proceedings). ACM Press.
[22] Justine Cassell, Hannes Högni Vilhjálmsson, and Timothy Bickmore. BEAT: the Behavior Expression Animation Toolkit. In Computer Graphics (Siggraph 01 Proceedings).
[23] Diane Chi, Monica Costa, Liwei Zhao, and Norman Badler. The EMOTE Model for Effort and Shape. In Computer Graphics (Siggraph 00 Proceedings).
[24] Bob Coyne and Richard Sproat. WordsEye: An Automatic Text-to-Scene Conversion System. In Computer Graphics (Siggraph 01 Proceedings).
[25] P. Ekman and W. V. Friesen. The Facial Action Coding System (FACS): A Technique for the Measurement of Facial Action. Consulting Psychologists Press.
[26] Luc Emering, Ronan Boulic, and Daniel Thalmann. Interacting with Virtual Humans through Body Actions. In IEEE Journal of Computer Graphics and Applications, volume 18(1). IEEE.
[27] Petros Faloutsos, Michiel van de Panne, and Demetri Terzopoulos. Composable Controllers for Physics-Based Character Animation. In Computer Graphics (Siggraph 01 Proceedings).
[28] P. Fua, R. Plänkers, and D. Thalmann. Realistic Human Body Modeling. In Fifth International Symposium on the 3-D Analysis of Human Movement.
[29] John Funge, Xiaoyuan Tu, and Demetri Terzopoulos. Cognitive Modeling: Knowledge, Reasoning and Planning for Intelligent Characters. In Computer Graphics (Siggraph 99 Proceedings), pages 29-38.
[30] George Maestri. Learning to Walk: The Theory and Practice of 3D Character Motion. character animation/walking/learning to walk.htm.
[31] Michael Gleicher. Retargetting Motion to New Characters. In Computer Graphics (Siggraph 98 Proceedings), pages 33-42.
[32] Michael Gleicher. Motion path editing. In Symposium on Interactive 3D Techniques.
[33] Enrico Gobbetti and Jean-Francis Balaguer. An Integrated Environment to Visually Construct 3D Animations. In Computer Graphics (Siggraph 95 Proceedings).
[34] Larry Israel Gritz. Evolutionary Controller Synthesis for 3-D Character Animation. PhD thesis.
[35] Radek Grzeszczuk and Demetri Terzopoulos. Automated Learning of Muscle-Actuated Locomotion Through Control Abstraction.
In Computer Graphics (Siggraph 95 Proceedings), pages 63-70.
[36] Radek Grzeszczuk, Demetri Terzopoulos, and Geoffrey Hinton. NeuroAnimator: Fast Neural Network Emulation and Control of Physics-Based Models. In Computer Graphics (Siggraph 98 Proceedings), pages 9-20.
[37] Jessica K. Hodgins. Three-Dimensional Human Running. In Proceedings of the IEEE Conference on Robotics and Automation.
[38] Jessica K. Hodgins and Nancy S. Pollard. Adapting Simulated Behaviors for New Characters. In Computer Graphics (Siggraph 97 Proceedings).
[39] Jessica K. Hodgins, Wayne L. Wooten, David C. Brogan, and James F. O'Brien. Animating Human Athletics. In Computer Graphics (Siggraph 95 Proceedings), pages 71-78.
[40] Zhiyong Huang. Motion control for human animation. PhD thesis.
[41] Zhiyong Huang, Nadia Magnenat-Thalmann, and Daniel Thalmann. Interactive Human Motion Control Using a Closed-form of Direct and Inverse Dynamics. In Proc. Pacific Graphics 94.
[42] Humanoid Animation Working Group, Web3D Consortium. H-Anim: Specification for a Standard VRML Humanoid, version
[43] Verne T. Inman, Henry J. Ralston, Frank Todd, and Jean C. Lieberman. Human Walking. Williams & Wilkins.
[44] James J. Kuffner, Jr. Goal-Directed Navigation for Animated Characters Using Real-Time Path Planning and Control. In CAPTECH 98. Springer.
[45] Prem Kalra, Nadia Magnenat-Thalmann, Laurent Moccozet, Gael Sannier, Amaury Aubel, and Daniel Thalmann. Real-Time Animation of Realistic Virtual Humans. In IEEE Journal of Computer Graphics and Applications, volume 18(5), pages 42-56.
[46] Szilárd Kiss. Web Based VRML Modeling. In IV2001 Information Visualisation, London, England.
[47] Szilárd Kiss. 3D Character Modeling in Virtual Reality. In IV2002 Information Visualisation, London, England.
[48] Evangelos Kokkevis, Dimitri Metaxas, and Norman I. Badler. Autonomous Animation and Control of Four-Legged Animals. In Graphics Interface 95.
[49] Evangelos Kokkevis, Dimitri Metaxas, and Norman I. Badler. User-Controlled Physics-Based Animation for Articulated Figures. In Computer Animation 96.
[50] Alexis Lamouret and Marie-Paule Gascuel. Scripting Interactive Physically-Based Motions with Relative Paths and Synchronization.
In Graphics Interface 95.
[51] John Lasseter. Principles of Traditional Animation Applied to 3D Computer Animation. In Proc. Siggraph 87. ACM Press.
[52] Joseph Laszlo, Michiel van de Panne, and Eugene Fiume. Limit Cycle Control and its Application to the Animation of Balancing and Walking. In Computer Graphics (Siggraph 96 Proceedings).
[53] Joseph Laszlo, Michiel van de Panne, and Eugene Fiume. Interactive Control for Physically-Based Animation. In Computer Graphics (Siggraph 00 Proceedings).
[54] Jehee Lee and Sung Yong Shin. A Hierarchical Approach to Interactive Motion Editing for Human-like Figures. In Computer Graphics (Siggraph 99 Proceedings), pages 39-48.
[55] Yuencheng Lee, Demetri Terzopoulos, and Keith Waters. Realistic Modeling for Facial Animation. In Computer Graphics (Siggraph 95 Proceedings), pages 55-62.
[56] Ruqian Lu and Songmao Zhang. Automatic Generation of Computer Animation: Using AI for Movie Animation, volume 2160, LNAI. Springer.
[57] M. Grosso, R. Quach, E. Otani, J. Zhao, S. Wei, P.-H. Ho, J. Lu, and N. I. Badler. Anthropometry for Computer Graphics Human Figures. Technical Report MS-CIS-89-71.
[58] George Maestri. Digital Character Animation 2, volume 1: Essential Techniques. New Riders Publishing.
[59] George Maestri. Digital Character Animation 2, volume 2: Advanced Techniques. New Riders Publishing.
[60] D. McNeill. Hand and Mind: What Gestures Reveal About Thought. University of Chicago Press.
[61] Stuart Mealing. The Art and Science of Computer Animation. Oxford.
[62] Nadia Magnenat-Thalmann and Daniel Thalmann. Computer Animation. In Handbook of Computer Science. CRC Press.
[63] Hansrudi Noser and Daniel Thalmann. The Animation of Autonomous Actors Based on Production Rules. In Proc. Computer Animation 96. IEEE Computer Society Press.
[64] Ken Perlin and Athomas Goldberg. Improv: A System for Scripting Interactive Actors in Virtual Worlds. In Computer Graphics (Siggraph 96 Proceedings).
[65] Charles Rose, Bobby Bodenheimer, and Michael F. Cohen. Verbs and Adverbs: Multidimensional Motion Interpolation Using Radial Basis Functions. In IEEE Journal of Computer Graphics and Applications.
[66] Charles Rose, Brian Guenter, Bobby Bodenheimer, and Michael F. Cohen.
Efficient Generation of Motion Transitions using Spacetime Constraints. In Computer Graphics (Siggraph 96 Proceedings), pages , [67] Zsófia Ruttkay, Paul ten Hagen, Han Noot, and Mark Savenije. Facial Animation by Synthesis of Captured and Artificial Data. In CAPTECH 98, pages , [68] G. Sannier, S. Balcisoy, N. Magnenat-Thalmann, and D. Thalmann. An Interactive Interface for Directing Virtual Humans. In Proc. ISCIS 98. IOS Press,

23 [69] G. Sannier, S. Balcisoy, N. Magnenat-Thalmann, and D. Thalmann. VHD: A System for Directing Real-Time Virtual Actors. In The Visual Computer, volume 15, No 7/8, pages Springer, [70] Karl Sims. Evolving Virtual Creatures. In Computer Graphics (Siggraph 94 Proceedings), pages ACM Press, [71] A. James Stewart and James F. Cremer. Beyond Keyframing: An Algorithmic Approach to Animation. In Graphics Interface 92, [72] Harold C. Sun and Dimitris N. Metaxas. Automatic Gait Generation. In Computer Graphics (Siggraph 01 Proceedings), pages , [73] Wen Tang, Marc Cavazza, Dale Mountain, and Rae Earnshaw. Real-Time Inverse Kinematics through Constrained Dynamics. In CAPTECH 98, pages Springer, [74] Daniel Thalmann. A New Generation of Synthetic Actors: the Real-time and Interactive Perceptive Actors. In Proc. Pacific Graphics 96, [75] Daniel Thalmann. Physical, Behavioral, and Sensor-Based Animation. In Graphicon 96, [76] Daniel Thalmann. The Foundations to Build a Virtual Human Society. In Intelligent Virtual Agents Workshop (IVA2001), volume 2190, LNAI. Springer, [77] Daniel Thalmann, Hansrudi Noser, and Zhiyong Huang. Autonomous Virtual Actors based on Virtual Sensors. In Creating Personalities for Synthetic Actors: Towards Autonomous Personality Agents, volume 1195, LNAI, pages Springer, [78] Xiaoyuan Tu. Artificial Animals for Computer Animation: Biomechanics, Locomotion, Perception, and Behavior, volume 1635 of LNCS. Springer, [79] Munetoshi Unuma, Ken Anjyo, and Ryozo Takeuchi. Fourier Principles for Emotion-based Human Figure Animation. In Computer Graphics (Siggraph 95 Proceedings), pages 91 96, [80] Keith Waters. A muscle model for animation three-dimensional facial expression. In Proc. Siggraph 87, pages ACM Press, [81] Michael W. Whittle. Musculo-Skeletal Applications of Three-Dimensional Analysis. In Paul Allard and Ian A.F. Stokes, editors, Three-Dimensional Analysis of Human Movement. Human Kinetics Publishers, [82] Andrew Witkin and Zoran Popović. 
Motion Warping. In Computer Graphics (Siggraph 95 Proceedings), pages , [83] Wayne L. Wooten and Jessica K. Hodgins. Animation of Human Diving. In Graphics Interface 95,


More information

Master of Science in Computer Science

Master of Science in Computer Science Master of Science in Computer Science Background/Rationale The MSCS program aims to provide both breadth and depth of knowledge in the concepts and techniques related to the theory, design, implementation,

More information

Introduction to Computer Graphics

Introduction to Computer Graphics Introduction to Computer Graphics Torsten Möller TASC 8021 778-782-2215 [email protected] www.cs.sfu.ca/~torsten Today What is computer graphics? Contents of this course Syllabus Overview of course topics

More information

Finite Element Method (ENGC 6321) Syllabus. Second Semester 2013-2014

Finite Element Method (ENGC 6321) Syllabus. Second Semester 2013-2014 Finite Element Method Finite Element Method (ENGC 6321) Syllabus Second Semester 2013-2014 Objectives Understand the basic theory of the FEM Know the behaviour and usage of each type of elements covered

More information

Course Overview. CSCI 480 Computer Graphics Lecture 1. Administrative Issues Modeling Animation Rendering OpenGL Programming [Angel Ch.

Course Overview. CSCI 480 Computer Graphics Lecture 1. Administrative Issues Modeling Animation Rendering OpenGL Programming [Angel Ch. CSCI 480 Computer Graphics Lecture 1 Course Overview January 14, 2013 Jernej Barbic University of Southern California http://www-bcf.usc.edu/~jbarbic/cs480-s13/ Administrative Issues Modeling Animation

More information

Wednesday, March 30, 2011 GDC 2011. Jeremy Ernst. Fast and Efficient Facial Rigging(in Gears of War 3)

Wednesday, March 30, 2011 GDC 2011. Jeremy Ernst. Fast and Efficient Facial Rigging(in Gears of War 3) GDC 2011. Jeremy Ernst. Fast and Efficient Facial Rigging(in Gears of War 3) Fast and Efficient Facial Rigging in Gears of War 3 Character Rigger: -creates animation control rigs -creates animation pipeline

More information

Robotics. Chapter 25. Chapter 25 1

Robotics. Chapter 25. Chapter 25 1 Robotics Chapter 25 Chapter 25 1 Outline Robots, Effectors, and Sensors Localization and Mapping Motion Planning Motor Control Chapter 25 2 Mobile Robots Chapter 25 3 Manipulators P R R R R R Configuration

More information

RIA : 2013 Market Trends Webinar Series

RIA : 2013 Market Trends Webinar Series RIA : 2013 Market Trends Webinar Series Robotic Industries Association A market trends education Available at no cost to audience Watch live or archived webinars anytime Learn about the latest innovations

More information

DIRECT ORBITAL DYNAMICS: USING INDEPENDENT ORBITAL TERMS TO TREAT BODIES AS ORBITING EACH OTHER DIRECTLY WHILE IN MOTION

DIRECT ORBITAL DYNAMICS: USING INDEPENDENT ORBITAL TERMS TO TREAT BODIES AS ORBITING EACH OTHER DIRECTLY WHILE IN MOTION 1 DIRECT ORBITAL DYNAMICS: USING INDEPENDENT ORBITAL TERMS TO TREAT BODIES AS ORBITING EACH OTHER DIRECTLY WHILE IN MOTION Daniel S. Orton email: [email protected] Abstract: There are many longstanding

More information