Emotional Communicative Body Animation for Multiple Characters
Emotional Communicative Body Animation for Multiple Characters

Arjan Egges, Nadia Magnenat-Thalmann
MIRALab - University of Geneva
24, Rue General-Dufour, 1211 Geneva, Switzerland
{egges,thalmann}@miralab.unige.ch

Abstract

Current body animation systems for Interactive Virtual Humans are mostly procedural or key-frame based. Although such methods provide a high degree of flexibility, it is often not possible to create animations that are as realistic as those obtained with a motion capture system. Simply using motion-captured animation segments instead of key-framed gestures is not a good solution either, since virtual human animation systems also specify parameters of gesture that affect style, such as expressing emotions or stressing a part of a speech sequence. In this paper, we describe an animation system that allows for the synthesis of realistic communicative body motions according to an emotional state, while still retaining the flexibility of procedural gesture synthesis systems. These motions are constructed as a blend of idle motions and gesture animations. Based on an animation specified for only a few joints, the dependent joint motions are calculated automatically and in real time. Realistic balance shifts adapted from motion capture data are generated on-the-fly, resulting in a fully controllable body animation, adaptable according to individual characteristics and directly playable on different characters at the same time.

Keywords: Computer Animation, Interactive Virtual Humans, Individuality

1 Introduction

Although virtual models of humans continue to improve both in real-time and non-real-time applications, controlling and animating them realistically still remains a difficult task.
Techniques for capturing human motions and automatically adapting the obtained animations are maturing, but a gap still exists between animation engines and the systems that control them, in particular Interactive Virtual Human (IVH) systems. In this paper, we will focus on two different types of motions:

Idle Motions: An important aspect of an animation system is how to deal with a scenario where several animations are played sequentially for various actors. In nature there exists no motionless character, yet in computer animation we often encounter cases where the absence of planned actions, such as waiting for another actor to finish his/her part, is implemented as a stopped/frozen animation. A flexible idle motion generator is required to provide realistic motions even when no action is planned.

Nonverbal (Communicative) Body Motions: Nonverbal body motions (gestures) are generally synthesized procedurally by an IVH system. As a result, the gestures are often not defined for all joints, but only for the arm, hand and head joints. However, when real humans move their arms, this affects other joints as well, such as the spine and the shoulders. Generally, such motions are not defined in a high-level gesture specification. There is a need for a system that can automatically calculate these dependent joint movements in real time and add them to the gesture animations.
Figure 1: Overview of an Interactive Virtual Human animation system.

In our previous work [11, 12], we have developed an idle motion generator that constructs realistic idle animations from motion capture data while still allowing a high degree of control by the animator or IVH control mechanism. We will give a short overview of this system in Section 3.2. Depending on which database of animations is used, different types of individuals can be portrayed through the body motions. In this paper, we provide an extension of the idle motion engine that generates different motions depending on the emotional state of an IVH. Additionally, we will describe a technique to automatically calculate dependent joint motions given a gesture animation specified for only a few joints (coming, for example, from a procedural gesture synthesis system). The resulting body motion is then integrated with a facial expression synthesizer to produce realistic virtual human behaviour in synchrony with speech.

2 Background

Over the last years, considerable research has been done to develop systems that can simulate Interactive Virtual Humans (IVHs). Figure 1 shows an overview of what an animation system for simulating IVHs generally looks like. We will now discuss relevant research that has already been done in this area. One of the best-known systems that can produce gesture animations from text is BEAT [6]. BEAT allows animators to input typed text that they wish to be spoken by an animated human figure, and to obtain speech and behavioural characteristics as output. The MAX system, developed by Kopp and Wachsmuth [18], automatically generates gesture animations based on an XML specification of the output. In MAX, the gesture animations are generated procedurally (not from motion-captured sequences). The work of Hartmann et al. [15] specifies a system that automatically generates hand and arm gestures from conversation transcripts using predefined key-frames.
However, hand and arm gestures are not the only way in which the human body communicates. For example, Kendon [16] shows a hierarchy in the organization of movements such that the smaller limbs such as the fingers and hands engage in more frequent movements, while the trunk and lower limbs change relatively rarely. More specifically, posture shifts and other general body movements appear to mark the points of change between one major unit of communicative activity and another [28]. Cassell et al. [7] describe experiments to determine more precisely when posture shifts should take place during communication and they applied their technique to the REA agent [5], resulting in a virtual character being capable of performing posture shifts. All of the previously mentioned systems focus mainly on how to produce communicative body behaviour from language or a formal description of a communicative act and not so much on the realism of the resulting animations.
In order to provide realistic animations while still retaining (some) flexibility, motion-captured data can be used and adapted to construct new animations. There is a broad range of algorithms for motion synthesis from motion data, although they are seldom directed at generating idle motions or gestures. Kovar et al. [19] proposed a technique called motion graphs for generating animations and transitions based on a motion database. Li et al. [22] divided the motion into textons modelled using linear dynamic systems. Additionally, Kim et al. [17] propose a method that analyses sound and can generate rhythmic motions based on the beats recognised in the audio. Pullen and Bregler [27] proposed to assist the process of building key-frame animation by automatically generating the overall motion of the character based on a subset of joints animated by the user. Lee et al. [21] propose a motion synthesis method based on example motions that can be obtained through motion capture. Relevant work has also been done by Arikan et al. [3, 2], who define motion graphs based on an annotated motion database. Recent work by Stone et al. [30] describes a system that uses motion capture data to produce new gesture animations. The system is based on communicative units that combine both speech and gestures. The existing combinations in the motion capture database are used to construct new animations from new utterances. This method does result in natural-looking animations, but in order to provide a wide range of motions, a coherent performance from the motion-captured person is required. Also, the style and shape of the motions are not directly controllable, contrary to procedural animation methods.
Especially when one desires to generate motions that reflect a certain style, emotion or individuality, a highly flexible animation engine is required that allows for a precise definition of how the movement should take place, while still retaining the motion realism that can be obtained using motion capture techniques. The EMOTE model [8] aims to control gesture motions using effort and shape parameters. As a result, gestures can be adapted to express emotional content or to stress a part of what is communicated. Currently no method exists that allows such expressive gestures while still producing a natural-looking final animation. Our method adapts gesture animation sequences after they have been fully constructed. Consequently, parameters such as effort and shape still form a part of the final animation. However, a trade-off always has to be made between ensuring the exact execution of the effort and shape characteristics and having a realistic-looking motion. It is generally assumed that an emotional state can be viewed as a set of dimensions. The number of these dimensions varies among researchers. For example, Ekman [13] has identified six common expressions of emotion: fear, disgust, anger, sadness, surprise and joy, and in the OCC appraisal model [24], there are 22 emotions. Our system is based on an emotion representation called the activation-evaluation space [29], which defines emotions along two axes on a circle (see Figure 2), where the distance from the centre defines the power of the emotion. This emotion space allows for an easy mapping of different types of motions. Additionally, different discrete emotions can be placed on the disc [9], which makes it possible to link the activation-evaluation model with other multidimensional emotion models, such as the OCC emotions or Ekman's expressions. The remainder of this paper is organised as follows.
Section 3 will present an overview of the idle motion synthesizer, together with an extension that makes it possible to synthesize idle motions according to an emotional state. Section 4 will present our method to automatically create realistic gesture motions based on animations defined for only a few joints. Finally, we will show how the gesture animations and idle motions are integrated, as well as some results on different characters (Section 5).

3 Emotional Idle Motions

In this section, we will describe a system that can automatically produce realistic idle motions from segmented motion clips according to an emotional state. After we introduce the animation model that is used, we will give an overview of the idle motion engine. For a more detailed description of the idle motion generator, please see our previous work [11]. In Section 3.3 we will then present an extension of this system that allows for emotional idle motions.

3.1 Animation Model

There exist many techniques for animating virtual characters. Two very commonly used techniques are:

Key-frames: an animation is constructed from a set of key-frames (manually designed by an animator or generated automatically) by using interpolation techniques. Although this method results in very flexible animations, the realism of the animations is low unless a lot of time is invested.

Pre-recorded animations: an animation is recorded using a motion capture/tracking system such as Vicon or MotionStar. The animation realism is high, but the resulting animation is usually not very flexible,
although methods have been developed to overcome part of this problem [4].

Figure 2: Activation-evaluation emotion disc.

A method like Principal Component Analysis (PCA) can determine dependencies between variables in a data set. The result of PCA is a matrix (constructed from a set of eigenvectors) that converts a set of partially dependent variables into another set of variables that are maximally independent. The PC variables are ordered corresponding to their occurrence in the dataset: low PC indices indicate a high occurrence in the dataset; higher PC indices indicate a lower occurrence. As such, PCA is also used to reduce the dimension of a set of variables, by removing the higher PC indices from the variable set. We will use the results of the PCA later on for synthesizing the dependent joint motions (see Section 4). For our analysis, we perform the PCA on a subset of H-Anim joints. In order to do that, we need to convert each frame of the animation sequences in the data set into an N-dimensional vector. For representing rotations, we use the exponential map representation [14]. In this representation, a rotation is represented by a vector r ∈ R^3, as a rotation with angle ||r|| around the axis r/||r||. The exponential map representation of a rotation is very useful for motion interpolation [25, 1], because it allows us to perform linear operations on rotations. In our case the linearity of the exponential map representation is crucial, since the PCA only works in the linear domain. Any rotation matrix can be written in the exponential map representation, and any exponential map representation (modulo 2π) is a rotation. Grassia [14] provides an extensive overview of the advantages and disadvantages of various representations of rotations, including the exponential map. Using the exponential map representation for a joint rotation, a posture consisting of m joint rotations and a global root translation can be represented by a vector v ∈ R^(3m+3).
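To make the representation concrete, here is a minimal NumPy sketch (not from the paper) of converting an exponential-map vector into a rotation matrix via Rodrigues' formula, and of the linear operation the representation enables. Averaging two exponential-map vectors is exact here because both rotations share an axis; in general it is an approximation that works well for nearby rotations.

```python
import numpy as np

def exp_map_to_matrix(r):
    """Convert an exponential-map vector r (angle = ||r||, axis = r/||r||)
    into a 3x3 rotation matrix via Rodrigues' formula."""
    theta = np.linalg.norm(r)
    if theta < 1e-8:
        return np.eye(3)  # near-zero vector: identity rotation
    ax = r / theta
    # Cross-product (skew-symmetric) matrix of the rotation axis.
    K = np.array([[0.0, -ax[2], ax[1]],
                  [ax[2], 0.0, -ax[0]],
                  [-ax[1], ax[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Linearity in action: interpolate two rotations by averaging their vectors.
r_a = np.zeros(3)                       # identity
r_b = np.array([0.0, 0.0, np.pi / 2])   # 90 degrees about the z axis
R = exp_map_to_matrix(0.5 * r_a + 0.5 * r_b)  # 45 degrees about z
```

It is exactly this vector-space behaviour that lets the PCA operate on concatenated joint rotations as if they were ordinary linear variables.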
In our case, one posture/key-frame is represented by 25 joint rotations and one root joint translation, resulting in a vector of dimension 78. We have applied a PCA on a large set of motion-captured postures, resulting in a PC space of equal dimension.

3.2 Idle Motion Synthesis

We have separately recorded the motions of ten people of both genders while they were in a conversation. This provides us with motion data of both gestures and idle motions. In the recorded data, we have observed three common types of idle behaviour:

Posture shifts: This kind of idle behaviour concerns the shifting from one resting posture to another, for example shifting balance while standing, or moving to a different lying or sitting position.

Continuous small posture variations: Because of breathing, maintaining equilibrium, and so on, the human body constantly makes small movements. When such movements are lacking in virtual characters, they look significantly less lively.

Supplemental idle motions: These motions generally concern interacting with one's own body, for example touching the face or hair, or putting a hand in a pocket.
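The PCA step on 78-dimensional posture vectors can be sketched as follows; this is an illustrative reconstruction (random placeholder data stands in for the motion capture database, and the function names are ours, not the paper's):

```python
import numpy as np

# Assume `frames` is an (n_frames, 78) array of postures: 25 exponential-map
# joint rotations (3 values each) plus a 3-D root translation per frame.
rng = np.random.default_rng(0)
frames = rng.normal(size=(1000, 78))  # placeholder for mocap postures

mean = frames.mean(axis=0)
# Rows of Vt are the principal components, ordered by decreasing variance
# (i.e. by how much of the dataset's motion they account for).
_, _, Vt = np.linalg.svd(frames - mean, full_matrices=False)

def to_pc_space(posture):
    """Project a 78-D posture vector onto the PC basis."""
    return Vt @ (posture - mean)

def from_pc_space(pc):
    """Reconstruct a 78-D posture from its PC coefficients."""
    return Vt.T @ pc + mean
```

Because the basis is orthonormal, projecting a posture into PC space and back reconstructs it exactly; dimensionality reduction then amounts to zeroing the trailing PC coefficients before reconstructing.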
3.2.1 Balance Shifting

Humans need to change posture once in a while due to factors such as fatigue. Between these posture changes, a person is in a resting posture. We can identify different categories of resting postures; in the case of standing, for example: balance on the left foot, balance on the right foot, or rest on both feet. Given a recording of someone standing, we can extract the animation segments that form the transitions between each of these categories¹. These animation segments together form a database that is used to synthesize balancing animations. For the database to be usable, at least one animation is needed for every possible category transition. However, more than one animation for each transition is better, since this creates more variation in the motions later on. In order to generate new animations, recorded clips from the database are blended and modified to ensure a smooth transition. Once the transitions between the different postures have been calculated, the creation of new animations consists of simply requesting the correct key frame from the database during the animation. This means that the database can be used to control many different virtual humans at the same time. For each virtual human, a different motion program is defined that describes the sequence of animation segments that are to be played. This motion program does not contain any real motion data but only references to transitions in the database. Therefore it can be constructed and updated on-the-fly.

3.2.2 Small Posture Variations

Apart from the balance-shifting postures, small variations in posture also greatly improve the realism of animations. Due to factors such as breathing, small muscle contractions, etc., humans can never maintain the exact same posture. As a basis for the synthesis of these small posture variations, we use the Principal Component representation of each key-frame.
Since the variations are applied to the Principal Components and not directly to the joint parameters, this method generates randomised variations that still take into account the dependencies between joints. Additionally, because the PCA factors out the dependencies between the variables in the data, the PCs themselves are maximally independent; as such, we can treat them separately when generating posture variations. The variations can be generated either by applying a Perlin noise function [26] on the PCs or by applying the method described in our previous work [11]. Small posture variations are very important when the character is clearly visible on the screen. For crowds, however, characters that are further away do not need such small variations, since they will not be visible anyway. In the case of many virtual humans in one scene, it suffices to synthesize these variations for the characters that are close to the camera.

3.3 Emotional Idle Motions

In the evaluation-activation space, an emotional state e is defined as a 2-dimensional vector:

e = [e_e, e_a], where −1 ≤ e_e, e_a ≤ 1 and e_e^2 + e_a^2 ≤ 1    (1)

When using a discrete list of n different emotion dimensions (for example based on OCC), a mapping function f : R^n → R^2 that respects the conditions in Equation 1 has to be defined. In this way, the emotional state representation that is used in this paper can be linked with frameworks of emotion simulation such as the one presented in our previous work [10].

3.3.1 Emotional Balance Shifting

The extension of the idle motion engine currently focuses mainly on the balance shift synthesizer. The animations in the animation database are extended with additional emotional information. For each animation segment, we define the change of emotional content (if any) by specifying, for both the start and end points of the animation, a 2-dimensional interval on the activation-evaluation circle.
Figure 3 shows some examples of possible intervals and related postures on the activation-evaluation circle. So, given an emotional state [e_e, e_a], the idle motion synthesizer automatically selects animations that have a target interval containing this point in the emotion space. In order to make sure that it is always possible to make a balance shift regardless of the emotional content, a set of neutral posture shifts is added as well, so that when no suitable target interval can be selected, a posture shift is still possible.

¹ In the current configuration this segmentation is done manually; however, automatic segmentation methods also exist [23].
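The constraint of Equation 1 and the interval-based selection with a neutral fallback could be sketched as follows. This is a hypothetical illustration: the segment fields (`ee_min`, `ee_max`, `ea_min`, `ea_max`) and function names are ours, not the paper's data format.

```python
import math
import random

def clamp_emotion(e_e, e_a):
    """Enforce Equation 1: project an (evaluation, activation) pair
    back onto the unit disc if it falls outside."""
    norm = math.hypot(e_e, e_a)
    if norm > 1.0:
        e_e, e_a = e_e / norm, e_a / norm
    return e_e, e_a

def select_shift(segments, e_e, e_a, neutral, rng=random):
    """Pick a balance-shift segment whose target interval on the
    activation-evaluation disc contains the current emotional state;
    fall back to a neutral posture shift when no interval matches."""
    e_e, e_a = clamp_emotion(e_e, e_a)
    candidates = [s for s in segments
                  if s["ee_min"] <= e_e <= s["ee_max"]
                  and s["ea_min"] <= e_a <= s["ea_max"]]
    return rng.choice(candidates) if candidates else neutral

# Hypothetical database: each segment stores its target interval.
segments = [
    {"name": "slump",  "ee_min": -1.0, "ee_max": 0.0, "ea_min": -1.0, "ea_max": 0.0},
    {"name": "bounce", "ee_min": 0.0, "ee_max": 1.0, "ea_min": 0.0, "ea_max": 1.0},
]
neutral = {"name": "neutral"}
```

The neutral fallback guarantees the property stated above: a balance shift is always possible, even for emotional states no recorded segment covers.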
Figure 3: Different intervals of emotional states together with example postures.

3.3.2 Adapting the Pause Length

The balance motion synthesizer, together with facial expressions, can portray postures and shifts that change according to evaluation (positive or negative), together with the activation level. An additional option that we have implemented is the automatic adaptation of the pause length according to the activation level. A higher activation level results in shorter pauses (and thus more shifts). In our system the length of a pause is determined using a minimum length p_m and a maximum offset p_o. A random value between p_m and p_m + p_o is then chosen as the final pause length. In order to adapt this value according to the emotional state, the value of p_o is replaced by p'_o, which is calculated as follows:

p'_o = (α − e_a)^β · p_o, with α ≥ 1, β ≥ 0    (2)

where α and β are parameters that define how the offset length adaptation should be applied. In our system these values are set by default to α = 1 and β = 1. The application of the length adaptation can be dynamically switched on and off, so that there is no interference when pauses are required to have specific lengths (for example during speech).

4 Natural Gestures

As discussed in Section 2, body gesture synthesis systems often generate gestures that are defined as specific arm movements derived from a more conceptual representation of gesture. Examples are: raise left arm, point at an object, and so on. Translating such higher-level specifications of gestures into animations often results in motions that look mechanical, since the motions are only defined for a few joints, whereas in motion-captured animations, each joint motion also has an influence on other joints. For example, by moving the head from left to right, some shoulder and spine movements normally occur as well. However, motion-captured animations generally do not provide the flexibility that is required by gesture synthesis systems.
Such systems would greatly benefit from a method that can automatically calculate, in real time, believable movements for the joints that depend on the gesture. Methods that can calculate dependent joint motions already exist; see for example the work of Pullen and Bregler [27]. In their work, they adapt key-framed motions with motion-captured data, depending on a specification of which degrees of freedom are to be used as the basis for comparison with the motion capture data. In this paper, we present a novel method that uses the Principal Components to create more natural-looking motions. Our method is not as general as the previously discussed work, but it works very well within the upper-body gesture domain. Furthermore, it is a very simple method and therefore suited for real-time applications. The Principal Components are ordered in such a way that lower PC indices indicate high occurrence in the data and higher PC indices indicate low occurrence in the data. This allows, for example, compressing animations by only retaining the lower PC indices. Animations that are close to the ones in the database that was used for the PCA will have higher PC indices that are mostly zero (see Figure 4 for an example). An animation that is very different from what is in the database will have more noise in the higher PC indices to compensate
for the difference (see Figure 5). If one assumes that the database used for the PCA is representative of the general motions expressed by humans during communication, then the higher PC indices represent the part of the animation that is unnatural (or: not frequently occurring in the animation database). When we remove these higher PC indices or apply a scaling filter (such as the one displayed in Figure 6), this generates an error in the final animation. However, since the scaling filter removes the unnatural part of the animation, the result is a motion that actually contains the movements of dependent joints. By varying the PC index where the scaling filter starts, one can define how close the resulting animation should be to the original key-framed animation. To calculate the motions of dependent joints, only a scaling function has to be applied. Therefore this method is very well suited for real-time applications. A disadvantage is that when applying the scaling function to the global PC vectors, translation problems can occur. In order to eliminate these translation artefacts, we have also performed a PCA on the upper-body joints only (which does not contain the root joint translation). The scaling filter is then only applied to the upper-body PC vector. This solution works very well since, in our case, the dependent joint movements are calculated for upper-body gestures only, whereas the rest of the body is animated using the idle motion engine. Figure 7 shows some examples of original frames versus frames where the PC scaling filter was applied.

Figure 4: (Absolute) PC values of a posture extracted from a motion-captured animation sequence.
Figure 5: (Absolute) PC values of a posture modelled by hand for a few joints.
Figure 6: An example of a scaling filter that can be applied to the PC vector representation of a posture.
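A hedged sketch of such a scaling filter follows; the linear ramp shape and the `start`/`width` parameters are illustrative assumptions, since the paper only shows the filter of Figure 6 graphically:

```python
import numpy as np

def pc_scaling_filter(pc, start=20, width=10):
    """Attenuate the higher principal components of a posture: PCs below
    index `start` pass unchanged, PCs beyond `start + width` are zeroed,
    with a linear ramp in between."""
    idx = np.arange(len(pc))
    # Ramp falls linearly from 1 at `start` to 0 at `start + width`.
    ramp = np.clip(1.0 - (idx - start) / width, 0.0, 1.0)
    scale = np.where(idx < start, 1.0, ramp)
    return pc * scale
```

Raising `start` keeps the result closer to the original key-framed animation; lowering it pulls the posture further toward the motions represented in the PCA database.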
Figure 7: Two examples of key-frame postures designed for a few joints, and the same postures after application of the Principal Component scaling filter.
Figure 8: Integration of gesture animations, dependent joint motion synthesis and idle motion synthesis.

5 Integration and Results

In order to integrate the emotional idle motions and the gesture animations, we use the blending library that was developed in our earlier work [12]. This library allows us to perform weighted animation blending operations in the exponential map space. Additionally, a set of modifiers is provided that allows scaling, flipping and stretching of animations, among others. Figure 8 shows the general process of the animation synthesis and blending. The idle motion engine runs continuously, providing the IVH with continuous idle motions. Gesture animations are adapted so that the dependent joint movements are also included, and are blended in on-the-fly. The blending parameters, such as the weights, are currently set to default values: for the lower body, 100% of the idle motion is used, whereas for the upper body 75% of the gesture animation is used and 25% of the idle motion. Each gesture animation is designed in such a way that it starts and ends in the H-Anim neutral posture. A blending fade-in and fade-out of 500 ms is also applied to the gesture animation, in order to avoid unnatural transitions. In our case, these values and percentages worked well for most of the gestures. However, some animations might need different values if some parts of the animation are very important and should not be changed. The PC scaling filter is applied to the upper body only and starts at PC index 20 (of a total of 48). The final body animation is then synchronized with a speech signal and a face animation [20] (see Figure 9).
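The per-frame weighted blend with fade-in/fade-out can be sketched as below. This is a minimal assumption-laden sketch, not the blending library of [12]: the linear fade and the per-frame formulation are ours, and postures are treated as plain exponential-map vectors.

```python
import numpy as np

def blend_frame(idle, gesture, t, duration, fade=0.5, w_gesture=0.75):
    """Blend an upper-body gesture posture into the idle posture in
    exponential-map space. The gesture weight ramps linearly from 0 up to
    w_gesture over the first `fade` seconds, and back down to 0 over the
    last `fade` seconds, to avoid unnatural transitions."""
    ramp = min(1.0, t / fade, (duration - t) / fade)
    w = w_gesture * max(0.0, ramp)
    return (1.0 - w) * np.asarray(idle, float) + w * np.asarray(gesture, float)
```

With the defaults above, the middle of a gesture mixes 75% gesture and 25% idle motion for the upper body, matching the proportions quoted in the text; the lower body would simply bypass this blend and use the idle motion directly.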
6 Conclusions and Future Work

We have presented an idle motion engine that can produce realistic-looking animations according to an emotional state, based on motion-captured animation segments. Additionally, we have shown a method to automatically determine the movements of dependent joints, given a key-frame animation specified for only a few joints. Our system then blends both animations, resulting in a final animation that can be synchronised with facial animation and speech. While our method improves the realism of animations with communicative and emotional content, it still retains the flexibility that is required for IVH systems. Because of the methods that are used, all of the operations are performed in real time. The system works very well for generic gesture animations, although
there is still room for improvement. Since the system currently uses default parameters for the blending procedure, some important parts of the gesture animations might be affected when blended with the idle motions. A solution could be to include a tag in the gesture specification indicating whether a part of a gesture (or a set of joints) should not be affected by movements from other animations (such as idle motions). Our future work will focus on how these blending parameters can be easily integrated with high-level gesture synthesizers, while still ensuring that controlling the gesture sequence remains simple. Another limitation of our method is that the dependent joint calculation only works if a representative database is used. Finally, if one desires to simulate many different emotional states, this also means that a large database of motions is required. In order to further improve the flexibility of the system, we will investigate whether it is possible to apply interpolation techniques between the motions, so that not all emotional states need to be recorded. For example, by interpolating between extremely happy and neutral motions, the medium happy sequences could be constructed automatically.

Figure 9: Some frames of gesture and idle motion sequences played on different 3D models, synchronised with facial animation and speech.

7 Acknowledgements

This work has been developed through the support of the HUMAINE Network of Excellence (Human-Machine Interaction Network on Emotion, IST), funded under the Sixth Framework Programme, and by the OFES.

References

[1] Marc Alexa. Linear combination of transformations. In SIGGRAPH 2002.
[2] O. Arikan and D. Forsyth. Interactive motion generation from examples. In Proceedings of ACM SIGGRAPH 2002.
[3] Okan Arikan, David A. Forsyth, and James F. O'Brien. Motion synthesis from annotations. ACM Transactions on Graphics, 22(3).
[4] Armin Bruderlin and Lance Williams. Motion signal processing.
Computer Graphics, 29(Annual Conference Series):97-104.
[5] J. Cassell, T. Bickmore, M. Billinghurst, L. Campbell, K. Chang, H. Vilhjálmsson, and H. Yan. Embodiment in conversational interfaces: Rea. In Proceedings of the CHI 99 Conference.
[6] J. Cassell, H. Vilhjálmsson, and T. Bickmore. BEAT: the Behavior Expression Animation Toolkit. In Proceedings of SIGGRAPH.
[7] Justine Cassell, Yukiko I. Nakano, Timothy W. Bickmore, Candace L. Sidner, and Charles Rich. Non-verbal cues for discourse structure. In Proceedings of the Association for Computational Linguistics Annual Conference (ACL), July 2001.
[8] D. Chi, M. Costa, L. Zhao, and N. Badler. The EMOTE model for effort and shape. In SIGGRAPH 2000.
[9] R. Cowie, E. Douglas-Cowie, S. Savvidou, E. McMahon, M. Sawey, and M. Schröder. Feeltrace: An instrument for recording perceived emotion in real time. In ISCA Workshop on Speech and Emotion, pages 19-24, Northern Ireland.
[10] A. Egges, S. Kshirsagar, and N. Magnenat-Thalmann. Generic personality and emotion simulation for conversational agents. Computer Animation and Virtual Worlds, 15(1):1-13.
[11] A. Egges, T. Molet, and N. Magnenat-Thalmann. Personalised real-time idle motion synthesis. In Pacific Graphics 2004.
[12] A. Egges, R. Visser, and N. Magnenat-Thalmann. Example-based idle motion synthesis in a real-time application. In CAPTECH Workshop, pages 13-19.
[13] P. Ekman. Emotion in the Human Face. Cambridge University Press, New York.
[14] F. Sebastian Grassia. Practical parameterization of rotations using the exponential map. Journal of Graphics Tools, 3(3):29-48.
[15] Björn Hartmann, Maurizio Mancini, and Catherine Pelachaud. Formational parameters and adaptive prototype instantiation for MPEG-4 compliant gesture synthesis. In Computer Animation 2002.
[16] Adam Kendon. Some relationships between body motion and speech: an analysis of one example. In Aron Wolfe Siegman and Benjamin Pope, editors, Studies in Dyadic Communication. New York: Pergamon.
[17] T. H. Kim, S. I. Park, and S. Y. Shin. Rhythmic-motion synthesis based on motion-beat analysis. ACM Transactions on Graphics, 22(3).
[18] S. Kopp and I. Wachsmuth. Synthesizing multimodal utterances for conversational agents. Computer Animation and Virtual Worlds, 15(1):39-52.
[19] L. Kovar, M. Gleicher, and F. Pighin. Motion graphs. In Proc. SIGGRAPH 2002.
[20] S. Kshirsagar, S. Garchery, and N. Magnenat-Thalmann. Feature point based mesh deformation applied to MPEG-4 facial animation.
In Proceedings of Deform2000, pages 23-34, Geneva, Switzerland, November.
[21] Jehee Lee, Jinxiang Chai, Paul Reitsma, Jessica K. Hodgins, and Nancy Pollard. Interactive control of avatars animated with human motion data. In Proceedings of SIGGRAPH 2002, July 2002.
[22] Y. Li, T. Wang, and H. Y. Shum. Motion texture: A two-level statistical model for character motion synthesis. In Proc. SIGGRAPH 2002.
[23] Meinard Mueller, Tido Roeder, and Michael Clausen. Efficient content-based retrieval of motion capture data. In Proceedings of SIGGRAPH 2005.
[24] Andrew Ortony, Gerald L. Clore, and Allan Collins. The Cognitive Structure of Emotions. Cambridge University Press.
[25] F. C. Park and Bahram Ravani. Smooth invariant interpolation of rotations. ACM Transactions on Graphics, 16(3), July.
[26] Ken Perlin. An image synthesizer. In Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques. ACM Press.
[27] K. Pullen and C. Bregler. Motion capture assisted animation: Texturing and synthesis. In Proc. SIGGRAPH 2002.
[28] A. Scheflen. Communicational Structure. Bloomington: Indiana University Press.
[29] H. Schlosberg. A scale for judgement of facial expressions. Journal of Experimental Psychology, 29.
[30] Matthew Stone, Doug DeCarlo, Insuk Oh, Christian Rodriguez, Adrian Stere, Alyssa Lees, and Christoph Bregler. Speaking with hands: Creating animated conversational characters from recordings of human performance. In SIGGRAPH 2004, 2004.