Transferring the Rig and Animations from a Character to Different Face Models
COMPUTER GRAPHICS forum, Volume 27 (2008), number 8

Verónica Costa Orvalho 1, Ernesto Zacur 2 and Antonio Susin 1
1 Laboratorio de Simulación Dinámica, Universitat Politècnica de Catalunya, Barcelona, Spain
2 Universitat Pompeu Fabra, Barcelona, Spain

Abstract

We introduce a facial deformation system that allows artists to define and customize a facial rig and later apply the same rig to different face models. The method uses a set of landmarks that define specific facial features and deforms the rig anthropometrically. We find the correspondence of the main attributes of a source rig, transfer them to different three-dimensional (3D) face models and automatically generate a sophisticated facial rig. The method is general and can be used with any type of rig configuration. We show how the landmarks, combined with other deformation methods, can adapt different influence objects (NURBS surfaces, polygon surfaces, lattices) and skeletons from a source rig to individual face models, allowing high-quality geometric or physically-based animations. We describe how it is possible to deform the source facial rig, apply the same deformation parameters to different face models and obtain unique expressions. We enable reuse of existing animation scripts and show how shapes mix nicely with one another in different face models. We describe how our method can easily be integrated into an animation pipeline. We end with the results of tests done with major film and game companies to show the strength of our proposal.

Keywords: facial animation, rigging, skinning, geometric deformation

ACM CCS: I.3.7 Computer Graphics: Three-Dimensional Graphics and Realism. Animation

1. Introduction

Facial animation is a key element to transmit individuality and personality to a character in films and video games.
For animators, creating realistic facial movement that transmits emotion is very challenging. To obtain such realism, the traditional animation pipeline requires that each character be separately rigged by hand. The rigging process is analogous to setting up the strings that control a puppet. After rigging, we obtain a virtual character with a set of shapes, representing facial expressions (see Figure 1c and f), and a set of animation controls (see Figure 1b and e). Digital artists set the value of the control parameters to create animations. We contacted several film and game studios in the USA, Canada and Europe, and confirmed that facial animation is still an unsolved problem in most animation pipelines. In practice, artists use geometric deformation [CHP89, DN06, WG97] or physically-based approaches [KHYS02, SL96, WF95] to animate the face models. Physical simulation involves the deformation of muscles and skin to capture details and realistic motion [SSRMF06]. This approach is computationally expensive, labour intensive and lacks direct control over the resulting skin deformation. Thus, geometric deformation approaches remain commonly used, because they are easier to control. Today, to animate a character, an experienced computer-generated (CG) artist has to model each facial rig by hand, making it impossible to reuse the same rig in different facial models. The task is further complicated when a minor artistic change to the facial topology forces the rigging process to restart from scratch. This creates a bottleneck in any CG production and has motivated research into automated methods to accelerate the process [JTDP03]. The challenge can be solved if we successfully answer the following question:

Journal compilation © 2008 The Eurographics Association and Blackwell Publishing Ltd. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Submitted November 2006; revised February 2007; accepted May 2008.
Figure 1: In this example, we transfer the character setup from the source model (SM) to the target model (TM). The character rig includes 19 joints and a NURBS surface behind the mouth: (a) SM, (b) SM rig, (c) SM shape, (d) TM, (e) TM rig, (f) TM shape (3D models courtesy of Dygra).

Would it be possible to use the rig created for one character in other characters?

We are interested in simplifying the process of facial character rigging for CG productions. With our proposal, an artist can define and customize a facial rig and later apply the same rig to different face models. We present a deformation method to transfer the inner structure of a face model from a source model to a target model. The method is based on a nonlinear warp transform [Boo89] and the use of facial feature landmarks. The method is general: it can be used with any type of rig configuration. We tag the source model with landmarks on its surface (the skin) and automatically deform it, together with the animation controls, facial expressions and skeleton. Because all models share the same set of attributes, we don't need to develop unique scripts for each face. We can transfer the rig parameters, enabling reuse of existing animation scripts. We can build models with an underlying anatomical structure (skin, muscle and skeleton) for human heads, or other types of inner structure to animate fantastic creatures. We can use geometric deformation or physically-based simulation to deform the skin of a face model. We use influence objects, like polygon surfaces that can represent muscles, to obtain the desired deformation. We also prepare models suitable for real-time animation. The following section provides an overview of the related work. Section 3 gives an overview of our approach. Section 4 defines our source model: the generic rig.
Section 5 describes the deformation method that transfers the generic rig attributes to different face models. Section 6 details how to animate a three-dimensional (3D) face model. We conclude with a discussion of our results in Section 7 and contributions in Section 8.

2. Background and Related Work

Facial animation is based on ideas pioneered by F. Parke in the 1970s [Par72]. Traditional approaches to animating facial models usually rely on an artist to create the key movements and then blend between those movements to obtain a fluid motion. Facial animation in video games is either poor or omitted due to limited resources to produce real-time results. In films, facial animation remains a challenge because reusing the same facial rig in different models is a very complex and time-consuming task for artists. The rigging process is a key step within an animation pipeline and defines the quality and speed of a CG production. Different approaches, both geometric and physically-based, have been studied to help improve this process [CBC 05]. Modelling and animation of deformable objects have been applied to different fields [ACWK04, BK04]. Noh et al. [NN01] proposed several methods for transferring animations between different face models. The surface correspondence is obtained by specifying point pairs on the models. Pighin et al. [PLS 98] presented a method to interactively mark corresponding facial features in several photographs of a person and to deform a generic face model using radial basis functions. Chadwick et al. [CHP89] presented a method for layered construction of flexible animated characters using free-form deformations (FFD) based on Sederberg and Parry [SP86]. This method does not require setting corresponding features on the geometries. Komatsu [Kom88] also used FFD for skin deformation. Mark Henne [Hen90] used a layered approach, where implicit fields simulated body tissue. Singh et al. [SOP95] used implicit functions to simulate skin behaviour.
Turner et al. [TT93] used an elastic skin model for character animation. Wu et al. [WKT96] studied skin wrinkling. Other interesting approaches were introduced for high-level geometric control and deformation over 3D models [Coq90, HHK92, SF98]. None of these methods attempted to model individual 3D muscles. Chen and Zeltzer [CZ92] presented a realistic biomechanical muscle-based model, using a finite element method to simulate individual muscles. Skin is generally modelled as a geometric surface, whose points move as facial expressions change. Koch et al. [KGC 96] described a system for simulating facial surgery using finite element models. Physically-based simulation has been integrated into facial modelling by Lee et al. [LTW95]. Dynamic models include damped mass-spring systems [LTW95], finite element models [CGC 02, CZ92, KGC 96] and modal analysis [JP02]. Physically-based simulations do not provide an animator with direct and intuitive control over the skin appearance, so it is necessary to combine physically-based approaches with geometric deformation methods. Pratscher et al. [PCLS05] defined an approach to anatomical modelling that generates musculature from a predefined structure, which is well suited to the human body. None of these methods include animation controls that automate the facial character setup process in film productions.

3. Our Approach

We begin with two 3D face models. The first one is an artist-sculpted 3D model that includes a control skeleton. The skeleton can have any number of influence objects that represent animation controls or geometric surfaces, for instance, to simulate muscles. This structure is called a character rig. We refer to this 3D face model as the source model. Section 4 details the source model, called the generic rig, that we use to illustrate our technique. Section 6.3 shows a source model created by the Dygra film studio and how our method can be integrated within a film production pipeline. The second model is an artist-sculpted 3D model or scanned face that we call the target model. This model does not have a character rig associated with it. The source and target models can have different descriptors. For example, one of them could be defined as a polygonal mesh and the other as a NURBS surface. Both surfaces share the face appearance without needing a point-to-point correspondence.
Multiple poses of the source model can be sculpted, generating different facial expressions, called shapes. We are able to map these poses onto the target model (see Section 6.1) and allow the animators to intuitively control the final skin appearance (see Section 6.2). Thus, shapes will mix nicely to generate convincing facial animations. After applying our deformation method, the geometry of the influence objects in the source model is reshaped to fit the target model surface. Also, the skeleton and additional animation controls are relocated in the target model. Thus, an entire rig can be rapidly applied to new characters to produce fast and accurate animations. Given this rig, we can apply any physically-based method or geometric deformation technique that responds to changes of the influence objects, skeleton or animation control shapes. Geometric deformations are commonly preferred over physically-based methods, because they are easier to manipulate and allow animators direct control over the 3D model. After applying our method, the target model will have inherited the character rig from the source model (see Figure 1). The artist is now free to modify the target model: final geometry, influence objects, facial expressions (shapes), skeleton position and animation controls. Artists can also use the animation scripts of the source model in the target model to animate the character, or generate animations using both animation controls and sparse data interpolation of the shapes. This approach is simple and provides direct animation control. For more details on how to create facial animations and expressions see Section 6.3. Figure 2 shows an overview of the method.

4. The Generic Rig

To illustrate our method, we built a sophisticated 3D face model we call the generic rig R (see Figure 3).
The model is formed by different layers of abstraction: skin surface R_S, influence objects R_O, skeleton joints R_B, facial feature landmarks λ, shapes R_H, a skinning system and other components representing the eyes, teeth and tongue. We can assign different attributes to each of these layers: weight, texture, muscle stress, etc. [Hab04]. The generic rig R has been modelled manually and is a highly deformable structure of a face model based on physical anatomy. During the modelling process, we used facial features and regions to guarantee realistic animation and reduce artifacts (see Figure 3a). The surface R_S is the external geometry of the character, determining the skin of the face using polygonal surfaces composed of a set of vertices r and a topology that connects them (see Figure 3b). The generic rig is tagged with landmarks λ, distributed as a set of sparse anthropometric points. We use these landmarks to define specific facial features to guarantee correspondence between models. Our rig has 44 landmarks placed on the surface (see Figure 3c). Those 44 anatomical points are the most prominent and distinctive points on human-like face geometries [DMS87, FLMI87]. The skeleton R_B is a group of bones positioned under the skin. It defines the pose of the head and controls lower-level surface deformation. Our rig has 6 bones (see Figure 3f). The influence objects R_O are objects that influence the shape of the smooth skin and help artists control the 3D models. Some R_O include NURBS surfaces, NURBS curves, lattice deformers, cluster deformers, polygon meshes, etc. Figure 3e shows the geometric representation of the 11 muscles used in our rig. Of the 26 muscles that move the face, these 11 are the ones responsible for facial expressions [Fai87]. The shapes R_H are new 3D face models created by applying deformations over the geometry R_S of the character.
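The layered structure just described can be sketched as a simple data container. This is an illustrative sketch only; the field names and shapes are mine, and the paper's actual rig lives inside a 3D animation package rather than plain arrays.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class GenericRig:
    """Illustrative container for the rig layers described above.
    Names and types are hypothetical, not the paper's implementation."""
    skin: np.ndarray                # R_S: (n, 3) vertex positions of the skin surface
    topology: list                  # faces connecting the skin vertices
    landmarks: np.ndarray           # lambda: (44, 3) anthropometric points on the skin
    bones: np.ndarray               # R_B: (6, 3) joint positions of the skeleton
    muscles: list = field(default_factory=list)   # R_O: 11 influence surfaces
    shapes: dict = field(default_factory=dict)    # R_H: name -> (n, 3) posed vertices
    weights: np.ndarray = None      # per-vertex skinning weights

# The paper's generic rig: 1800 skin points, 44 landmarks, 6 bones.
rig = GenericRig(
    skin=np.zeros((1800, 3)),
    topology=[],
    landmarks=np.zeros((44, 3)),
    bones=np.zeros((6, 3)),
)
```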
Figure 2: Overview: (a) define the source and target model; (b) adapt the source model geometry to fit the target model; (c) transfer and bind the influence objects, skeleton, shapes and attributes from the source model to the target model. As a result, we obtain a model ready to animate.

Figure 3: The generic rig: (a) 3D textured model; (b) wireframe, 1800 points; (c) 44 landmarks; (d) weight distribution; (e) 11 polygon geometric surfaces, representing the face muscles; (f) 6 bones that constitute the skeleton.

A shape is represented by a deformation field defined on each vertex of R_S, always keeping the same topology of R. Shapes are usually modelled manually. They represent facial expressions and are used to create blend-shapes, which let you change the shape of one object into the shapes of other objects (see Figure 10). The interpolation between shapes results in facial animations.

5. Transferring the Generic Rig Structure

We introduce a method to automatically transfer the generic rig structure and components to individual 3D face models, which can be divided into three main steps: first, we deform the generic rig surface to match the geometry of the face model we want to control; then, we adapt the influence objects, skeleton and attributes of the generic rig to the 3D model; finally, we bind the transferred elements to the model and obtain a rig prepared for physically-based animation or geometric deformations (Figure 2 shows the relationship between the three steps).

The face model that inherits the generic rig setup is referred to as F. It is defined by a face surface F_S, which determines the face geometry and shape, and a set of landmarks φ placed on F_S. Like R_S from the generic rig, F_S is defined by a set of vertices f and a topology that defines the connectivity between the points.
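Blend-shape interpolation of the kind described above can be sketched as a weighted sum of per-shape displacement fields over the rest geometry. This generic formulation is mine, not the paper's Maya implementation; names are illustrative.

```python
import numpy as np

def blend_shapes(rest, shapes, weights):
    """Mix shapes over the rest mesh: v = rest + sum_k w_k * (shape_k - rest).
    rest: (n, 3) rest-pose vertices; shapes: dict name -> (n, 3) posed vertices;
    weights: dict name -> blend weight in [0, 1]."""
    out = rest.copy()
    for name, w in weights.items():
        out += w * (shapes[name] - rest)   # add the weighted deformation field
    return out
```

Animating the weights over time interpolates between shapes, which is what produces the facial animation.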
The landmarks are positioned manually by the artist, to guarantee correspondence with the generic rig landmarks (see Section 4). Even though the generic rig has 44 landmarks, it is not necessary to use them all to transfer the rig (see results in Figure 19). Starting with a landmarked face model F, the rest of the structure transfer is automated as detailed next.
5.1. Geometric Transformations

To deform the rig R into F, we use an interpolation technique named Thin Plate Splines (TPS) [Boo89], which is a special case of Radial Basis Function warping [CFB97]. The TPS is the two-dimensional (2D) analogue of the cubic spline in one dimension. In the physical setting from which the TPS name comes, a thin sheet of metal is bent in the direction orthogonal to its plane. To apply this idea to the problem of coordinate transformation, one interprets the lifting of the plate as a displacement of the three spatial coordinates. Thus, in general, three TPS are needed to specify a 3D coordinate transformation. Given a set of landmarks p_i, a weighted combination of thin plate splines centred about each data point gives the interpolation function that passes through the points exactly while minimizing the so-called bending energy. TPS works well with scattered, unstructured and unsorted data. It also works in our case, where we want to smoothly interpolate the deformation field that warps from the source landmarks to the target ones. Another important feature is that with strongly inter-correlated landmarks, which is the case in facial correspondence, we can use only a few of the landmarks and still obtain good deformation results. Equation (1) describes the generic form of the transformation based on n landmark points:

x' = Σ_{i=1}^{n} w_{ix} U(x, p_i) + a_{x0} + a_{xx} x + a_{xy} y + a_{xz} z
y' = Σ_{i=1}^{n} w_{iy} U(x, p_i) + a_{y0} + a_{yx} x + a_{yy} y + a_{yz} z    (1)
z' = Σ_{i=1}^{n} w_{iz} U(x, p_i) + a_{z0} + a_{zx} x + a_{zy} y + a_{zz} z

Following Bookstein [Boo89, RSS 01], we use the kernel function U(x, p_i) = |x − p_i|, which minimizes the bending energy of the deformation.
Using vector notation, we can state the following linear equation system, with the n × n matrix K defined as K_{i,j} = U(p_i, p_j), the 4 × 3 matrix A = (a_{x0} a_{y0} a_{z0}; a_{xx} a_{yx} a_{zx}; ...; a_{xz} a_{yz} a_{zz}) and the n × 3 matrix W = (...; w_{ix} w_{iy} w_{iz}; ...):

[ K    P ] [ W ]   [ Q ]
[ P^T  0 ] [ A ] = [ 0 ]    (2)

In Equation (2), P is an n × 4 matrix built by rows from the source coordinates, P = (...; 1 p_{ix} p_{iy} p_{iz}; ...), and Q is an n × 3 matrix built from the target coordinates, Q = (...; q_{ix} q_{iy} q_{iz}; ...). Solving for A and W returns the coefficients needed to compute the warping. The term KW + PA = Q ensures the exact matching of the source points onto the target points. The term P^T W = 0 is a boundary condition that regularizes the nonlinear warp so that its energy vanishes at infinity [RSS 01].

Figure 4: (a) TPS warp of a generic surface based on a reduced set of sparse landmarks (S_1: source surface, S_2: target surface, P: source landmarks, Q: target landmarks); (b) sticking of the source surface to the target surface after applying the TPS (see Section 5.3).

Figure 5: Human face warping process using 10 landmarks.

5.2. Surface Deformation

Given a set of source and target landmarks, p and q respectively, we denote the map correspondence defined in Equation (1) by

x' = TPS_p^q(x),    (3)

which for each point x minimizes the energy of the surface deformation. We will use the notation q = p^S, where q_i is the position of the point corresponding to p_i on the geometry S. Figure 4a shows a qualitative 2D representation of the deformation of a uniformly sampled surface into another surface, using a reduced set of sparse landmarks. Only the landmarks result in an exact deformation, while the rest of the surface points lie outside the target surface.
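A minimal NumPy sketch of fitting and evaluating the TPS of Equations (1) and (2), assuming the kernel U(x, p_i) = |x − p_i| as above; the function names are illustrative, not the paper's code.

```python
import numpy as np

def tps_fit(p, q):
    """Fit the 3D TPS of Equations (1)-(2) mapping landmarks p onto q.
    p, q: (n, 3) arrays of source and target landmarks.
    Returns W (n x 3) and A (4 x 3), the warp coefficients."""
    n = p.shape[0]
    # Kernel matrix K_ij = U(p_i, p_j), with U(x, p) = |x - p|
    K = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=2)
    P = np.hstack([np.ones((n, 1)), p])            # rows (1, p_ix, p_iy, p_iz)
    L = np.block([[K, P], [P.T, np.zeros((4, 4))]])
    rhs = np.vstack([q, np.zeros((4, 3))])         # right-hand side (Q; 0) of Eq. (2)
    sol = np.linalg.solve(L, rhs)
    return sol[:n], sol[n:]

def tps_warp(x, p, W, A):
    """Evaluate Equation (1) at points x: (m, 3)."""
    U = np.linalg.norm(x[:, None, :] - p[None, :, :], axis=2)  # U(x, p_i)
    return U @ W + np.hstack([np.ones((len(x), 1)), x]) @ A
```

Because the system enforces KW + PA = Q exactly, warping the source landmarks reproduces the target landmarks; all other points are interpolated smoothly.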
Figure 5 shows the deformation of the generic rig into a face model using 10 anthropometric landmarks.

5.3. Obtaining a Dense Correspondence Between Surfaces

To obtain an exact deformation of every surface point, where the origin surface matches the target surface, we apply a local deformation to every point of the origin surface. Then, we
project every point of the warped surface to the closest point of the target surface. As a result, we get the corresponding point on the target surface for every vertex of the origin surface. This is called a dense correspondence [HBH01] between surfaces. Figure 6 shows the dense correspondence between the generic rig and the target model. We define in our pipeline a new map function called Stick (STK) that computes the dense correspondence of the points r between the generic rig R and the face model F:

r_F = STK_{F_S}(TPS_λ^φ(r)).    (4)

This mapping can present undesirable folds in areas with high curvature or if the distance between origin and target points is large. Lorenz and Hilger worked on solutions to avoid these folds [HPL04, LK00]. Fortunately, we did not come across this problem in the many tests we performed on different face models, human and cartoon.

Figure 6: Human face deformation with 10 landmarks, TPS front view (left); close-up (middle); stick lines that represent the dense correspondence between the generic rig and the target model after applying STK (right).

Figure 7: Warping structure with (a) sparse correspondences (landmarks); (b) dense correspondences.

Figure 8: Cartoon deformation: (a) TPS and stick lines; (b) cartoon after STK; (c) muscle transfer front view; (d) muscle transfer side view.

Figure 9: Weight distribution between generic rig and target model: (a) weight distribution for the jaw bone (red is w = 0, blue is w = 1); (b) weight region distribution for the face.

5.4. Deforming Layer Structures

Based on the dense correspondence between R_S and F_S, we can deform the generic rig influence objects R_O and skeleton R_B. The sticking process avoids placing additional landmarks on the influence objects or on the skeleton structure, which would otherwise be a time-consuming process.
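The STK mapping of Equation (4) and the closest-point attribute lookup it enables can be sketched as follows. The sketch approximates "closest point on the surface" by the closest vertex, a simplification of mine; function names are illustrative.

```python
import numpy as np

def stick(warped, target_verts):
    """STK: snap each TPS-warped source vertex to its nearest target vertex,
    giving a dense correspondence between the two surfaces."""
    d = np.linalg.norm(warped[:, None, :] - target_verts[None, :, :], axis=2)
    nearest = np.argmin(d, axis=1)
    return target_verts[nearest], nearest

def transfer_attribute(stuck_src, src_attr, target_verts):
    """Copy a per-vertex attribute (e.g. a skinning weight or region label)
    from the stuck source surface to each target vertex via its closest point."""
    d = np.linalg.norm(target_verts[:, None, :] - stuck_src[None, :, :], axis=2)
    nearest = np.argmin(d, axis=1)
    return src_attr[nearest]
```

In a production implementation the projection would target the continuous surface (triangles, not vertices) and use a spatial index rather than the dense distance matrix shown here.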
Figure 7 shows that the warp based on dense correspondence keeps the relationship between the structure and the surfaces better than the warp based on sparse landmarks.

5.5. Attribute Transfer

The generic rig R has a set of attributes on the surface nodes r, defined as scalar or vector fields. We have to transfer each of these attributes to the surface F_S. For each surface vertex f_i, we find its closest point on the warped surface R_S (already stuck onto F_S), get the interpolated value and assign it to f_i. Figure 8 shows the result of transferring influence objects: geometric surfaces that represent muscles. Figure 9b shows a region labelling transfer. Figure 10 shows the result of transferring facial expressions (shapes). All figures show the attribute transfer from the generic rig to the cartoon model, with different triangulations.

Figure 10: Facial expression (shape) transfer: generic rig facial expression templates (first row); cartoon model facial expressions using the generic rig templates (second row).

5.6. Skinning

In animation, skinning is the process of binding deformable objects (influence objects and animation controls) to an underlying articulated skeleton. There are several approaches to skinning, varying in their degree of complexity and realism [Sch02]. After skinning, the deformable object that makes up the surface is called the character's skin. We used smooth skinning instead of other algorithms [Web00] because it is fast and effective, and has been used on many occasions for real-time and pre-rendered animations. The binding process involves assigning each vertex of the skin to one or more skeleton joints and setting the appropriate blending weights. Each attachment affects the vertex with a different strength. The weight is the degree of influence on each skin vertex during deformation. The output of the skinning process is a character model setup, with the skeleton, influence objects and animation controls ready to define the deformations of the 3D model and create animations. The positioning of the influence objects has two goals: build an inner structure that correctly reflects the character's appearance and enable the projected facial animations with minimum effort.

Our skinning method uses the generic rig weight attribute to attach the previously deformed skeleton, muscles and animation controls to the target face model F. The weights of the generic rig are carefully defined manually by an experienced artist to guarantee a correct deformation over the skin.

Weight transfer

Assigning the appropriate skin weight to a model is a crucial step in the skinning process. An unoptimized weight distribution can lead to undesirable deformations. Figure 9 shows the weight distribution of the generic rig and the weight distribution of the target face model F after applying our method. The steps to automatically transfer the weights from R to F are:

1. store the weight values of every influence object for each character skin vertex;
2. apply the TPS function and STK function to the generic rig R;
3. find the corresponding vertex between the generic rig R and the target face model F; and
4. copy the weight of every influence object to the corresponding vertex of the target face model F.

6. Animating 3D Face Models

Given a 3D face model (source model R) with its character setup and animation scripts (optional), we apply the method described in Section 5 to: fit the character setup from the source model onto the target model; manipulate the target model as if we were using a puppet; adjust animation parameters in the target model; and animate the model using the source model's pre-defined animations. To manipulate the model, we use the controls defined in the source rig that are transferred to the target model. After transferring the rig to the target model, artists can always adjust the different animation controls: clusters, bones, NURBS curves. We choose geometric deformation models, as this provides direct animation control. Physically-based methods can also be used to simulate human anatomy behaviour. Next, we detail how to transfer facial expressions and animations between models.

6.1. Facial Expressions

Human facial expressions are caused by the contraction of facial muscles. The skin modifies its initial shape depending on the underlying muscle and skeleton behaviour. To animate virtual characters the challenge is enormous, as the face is capable of producing about 5000 expressions. A character like Shrek, in the 2001 movie, had over 500 commands arranged by facial features.
Dick Walsh described that for the right brow there are raised, mad and sad poses, with at least 15
possible commands that activate not only the brow but also other parts of the face, which need to move in conjunction with it to produce a convincing expression. There are two principal ways to create facial expressions: 3D scanning or artist sculpting [Fai87, JTDP03]. To create animations using blend-shapes, every facial expression (shape) needs to have the same geometry as the face model at the rest position. It takes about 2–3 weeks and over 100 shapes to create the facial expressions and phonemes for a complex character. We ease this job by automatically transferring the shapes from the source model R to the target model F:

1. We start with the source model mesh R_S at the rest position and all its shapes R_H;
2. We calculate the displacement between every vertex of the source mesh R_S at the rest position and the source mesh shape R_H;
3. We transform the displacement using the linear part of the TPS between the source model and target model (Equation 1). This step is needed to keep the target model shapes proportional to the source model shapes in scale and rotation. For example, if the target model is 10 times bigger, the displacement will be 10 times larger, as this information is stored in the TPS:

x' = a_{xx} x + a_{xy} y + a_{xz} z
y' = a_{yx} x + a_{yy} y + a_{yz} z    (5)
z' = a_{zx} x + a_{zy} y + a_{zz} z

where the coefficients a_{xx} ... a_{zz} are the same as in Equation (1);
4. We find the corresponding vertex between the source model R and the target face model F;
5. We apply the displacement previously calculated to the target model mesh F, creating a new shape F_H. It does not matter if the topologies of R and F are different; and
6. We apply a smooth filter to guarantee continuity on the target model shape.
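Steps 2–5 above can be sketched as follows, assuming a precomputed vertex correspondence `corr` and the 3 × 3 linear part `A3` of the fitted TPS (Equation 5); the smoothing of step 6 is omitted, and the names are mine.

```python
import numpy as np

def transfer_shape(rest_src, posed_src, rest_tgt, corr, A3):
    """Transfer a blend shape from the source mesh to the target mesh.
    rest_src, posed_src: (n, 3) source mesh at rest and posed.
    rest_tgt: (m, 3) target mesh at rest.
    corr: (m,) index of the source vertex matched to each target vertex.
    A3: (3, 3) linear part of the fitted TPS (Equation 5)."""
    disp = posed_src - rest_src        # step 2: per-vertex source displacements
    disp = disp[corr] @ A3.T           # step 3: keep scale/rotation proportional
    return rest_tgt + disp             # step 5: the new target shape F_H
```

With a uniform scale in `A3` (e.g. a target model twice the size), the transferred displacements scale accordingly, as the text describes.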
Figure 10a shows different facial expression templates and Figure 10b shows the result of applying the generic rig's facial expressions to our cartoon model.

6.2. Facial Animation

The most common methods to animate a face are keyframe interpolation and motion capture. Keyframe interpolation is a completely geometric approach, where the whole face model is specified at given points in time, called keyframes. For instance, each keyframe can be a pose or an expression, and the in-between face model frames are generated by interpolation.

Figure 11: Method pipeline: (1) find surface correspondence based on the SM and TM landmarks; (2) obtain surface dense correspondence; (3) transfer attributes from the SM to the TM; (4) bind the attributes to the TM based on the SM weights.

One of the main difficulties of keyframe interpolation is combining separate animations that involve the same area of the face into one, like sleeping and crying. It requires laborious and intensive modelling [Hab03]. Motion capture allows capturing the movements of an actor's face and directly transferring the performance to a virtual face model [SSRMF06]. The disadvantage is that for each virtual character, a real actor has to perform the animation, which can be very inefficient during production. In addition, modifying the captured facial motions remains a challenge. We speed up the animation process and achieve facial motion by applying the animation scripts of the source model to the target face model. Scripts are generated by interpolating facial poses or creating local deformations using high-level controls, like NURBS curves, NURBS surfaces, clusters, etc. Facial motion can also be controlled at a lower level by applying physical simulation to muscles, which consists of determining muscle contraction over time [AHS02]. The main steps are:

1. Create templates: sculpt facial poses (shapes), apply local deformations or rotate/translate the skeleton structure;
2.
Tuning: adjust deformers to optimize the model;
3. Create script: determine the templates' keyframe sequence or use the source model's pre-defined scripts. If we use pre-defined scripts, it is necessary to apply the TPS method to them (see Section 5); and
4. Animate model: run the animation scripts on the target model.

6.3. Character Setup Pipeline

Our method pipeline (see Figure 11) allows transferring the character setup from a source model R to a target model F.
Figure 12: Geometric surface deformation: (a) source model (SM); (b) SM TPS; (c) SM STK; (d) target model (3D models courtesy of Dygra).

The input to the pipeline is the source model R rig. The output is the set of animation controls and animation scripts defined in the source model R, now created in the target model F. Figure 14 details the source model rig and the target model rigs transferred from it. The process starts by landmarking the source model R and the target face model F (see Figures 14b, f and j). Then we proceed with:

1. Surface correspondence: R_S' ← TPS_λ^φ(R_S). This function gives the correspondence between the source model mesh vertices R_S and the target model mesh vertices F_S; it ensures exact point matching at the landmarks (SM landmarks λ and TM landmarks φ) and smoothly interpolates the deformation of the other points (see Figure 12b).
2. Surface dense correspondence: r_F ← STK_{F_S}(R_S'). We obtain a dense correspondence to ensure an exact deformation of every surface point, where the source surface R_S matches the target surface F_S; we avoid placing additional landmarks (see Figure 12c).
3. Attribute transfer: f ← attributeTransfer(r_F). The attributes we transfer are: bones, influence objects, weights, shapes and animation scripts. Figures 14c, g and k detail the bones and influence objects.
4. Skinning: F ← skinning(F_S, F_O, F_B, R_w). We bind the deformable objects, influence objects and 3D model surface to the skeleton of the face model we want to animate. Figures 14d, h and l detail the weight distribution of the source and target models.
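The four pipeline steps can be strung together in a compact sketch. As before, closest-vertex projection stands in for true surface projection, step 4 (skinning) is left to the host animation package, and all names are illustrative rather than the paper's code.

```python
import numpy as np

def fit_tps(p, q):
    """Fit the TPS of Section 5.1 mapping landmarks p to q; return a warp function."""
    n = len(p)
    K = np.linalg.norm(p[:, None] - p[None, :], axis=2)
    P = np.hstack([np.ones((n, 1)), p])
    L = np.block([[K, P], [P.T, np.zeros((4, 4))]])
    sol = np.linalg.solve(L, np.vstack([q, np.zeros((4, 3))]))
    W, A = sol[:n], sol[n:]

    def warp(x):
        U = np.linalg.norm(x[:, None] - p[None, :], axis=2)
        return U @ W + np.hstack([np.ones((len(x), 1)), x]) @ A
    return warp

def transfer_rig(src_verts, src_lm, src_weights, tgt_verts, tgt_lm):
    """Pipeline steps 1-3: surface correspondence, dense correspondence (STK)
    and weight transfer from the source rig to the target model."""
    warp = fit_tps(src_lm, tgt_lm)                       # 1. surface correspondence
    warped = warp(src_verts)
    near = np.argmin(np.linalg.norm(warped[:, None] - tgt_verts[None, :],
                                    axis=2), axis=1)
    stuck = tgt_verts[near]                              # 2. dense correspondence
    back = np.argmin(np.linalg.norm(tgt_verts[:, None] - stuck[None, :],
                                    axis=2), axis=1)
    return stuck, src_weights[back]                      # 3. weight transfer
```

For a target that is a pure affine transform of the source, the fitted TPS reduces to that affine map and the transfer is exact; for general faces, the landmarks are matched exactly and the rest of the surface is interpolated.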
As a result of this process our target face model F: displays pre-defined animations, by using animation scripts from the source model R; has animation controls that artists can manipulate to create animations (see Figures 14g and k for a detail of the target model controls); and has all the blend shapes defined in R to help create new animations in F (see Figure 13 for an example of blend shapes).

Figure 13: In this example, we show 3 of the 25 shapes we transferred from the source model (SM) to the target models (TM). The first row details the SM; the second and third rows detail the TMs: (a) rest pose; (b) eyebrows up; (c) left eyebrow up; (d) angry expression, a mix of four different shapes: end of the eyebrow up, middle of the eyebrow down, inside of the eyebrow down and upper cheek up (3D models courtesy of Dygra).

7. Results

The deformation methods have been implemented in C++ as a plug-in for Maya 7.0. We have tested our methods on a number of examples. Figure 16 shows keyframes from a speech facial animation script. Figure 17 shows keyframes from an animation script with extreme facial expressions. The animation scripts were created by an artist on the source model and later transferred to both target models. To animate the cheeks, eyebrows and nose, the artist used the blend shapes defined in the source model to create the key expressions. There are 25 shapes defined in the source model that are transferred to the target models. To animate the lips during the speech in Figure 16, the artist manipulated the joints of the mouth and the NURBS surface attached to them. The eye and jaw positions were also created by manipulating the joints. We can see in the fourth, fifth and sixth columns that the models have different topology. Our generic rig has 1800 points, 44 landmarks, 6 bones and 11 polygonal surfaces that represent the muscles, and is based on human anatomy (see Figure 18). The human model is a 3D scan of a human face.
It has 1260 points and 10 landmarks (see Figure 19). Figure 19b displays the wireframe mesh. We use the 10 landmarks to transfer the rig structure (see Figure 19d). Figure 6 shows the warping process. The cartoon model has 1550 points and 44 landmarks (see Figure 20). Figure 20 shows the muscle transfer and Figure 9 shows the attribute transfer of the weight and region labels. Based on the weights of Figure 9a, Figure 10 shows the transfer of a facial expression.
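The shape mixing used for expressions such as the angry pose in Figure 13 (a mix of four shapes) amounts to adding weighted per-vertex deltas to the rest pose. A minimal sketch, with made-up vertex data and shape names:

```python
# Blend-shape mixing: rest pose plus a weighted sum of per-vertex
# deltas, one delta set per shape. Vertex data and names are made up.

def mix_shapes(rest, shapes, weights):
    """rest: list of (x, y, z); shapes: name -> deformed vertex list
    (same topology as rest); weights: name -> blend weight.
    Returns the mixed vertex list."""
    mixed = [list(v) for v in rest]
    for name, w in weights.items():
        for i, (r, s) in enumerate(zip(rest, shapes[name])):
            for d in range(3):
                mixed[i][d] += w * (s[d] - r[d])
    return [tuple(v) for v in mixed]
```

Because the transferred shapes share the target model's topology, the same weight curves in an animation script drive every model, which is what makes the scripts portable.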
Figure 14: In this example, the source model and the target models were hand-sculpted by an artist from Dygra Films. The rig of the source model was also created by an artist from Dygra Films. The rig includes 12 bones distributed along the head, seven bones around the mouth and one influence object. The influence object is a NURBS surface located behind the mouth, where the seven mouth bones are attached. The source model rig was automatically transferred and bound to the target models using the method described in Section 5. The source model (a) has 1455 points, target model (e) has 1610 points and target model (i) has 1291 points (3D models courtesy of Dygra).

The graphs in Figure 15 display the distance between the muscle and the skin surface points on the generic rig (solid line) and on the face model (dots). The results show that if the generic rig's topology resembles the target model, the output of the warping is better. To explore the limits of our method, Figure 21 confirms that the warping and landmark fitting work robustly on non-human faces with extreme facial appearance. We use 12 landmarks to transfer the rig structure to a goat (see Figure 21d).

8. Conclusion

We have presented a comprehensive method that speeds up the character setup within a facial animation pipeline,
because we drive all face models by deformation of the same source rig. This allows artists to create once and use many times. Our approach allows artists to transfer the facial expressions, animation controls and animation scripts created on a source model and reuse them on different models. In contrast with other methods [KHYS02] that landmark the skin, muscles and skull, we only landmark the skin surface, because we obtain a dense correspondence. This simplifies and eases the setup of the character.

Figure 15: Distance between the muscle and skin surfaces on the generic rig and on the model: (a) scanned human; (b) cartoon.

In film and game productions, artists are often given one base 3D face from which to make all new 3D faces, which are used as shapes and later become blend shapes. Thus, it is very common during production that artists are told they need to use a different 3D face, because it has better deformation details or simply looks better. Right now there is no tool built into 3D animation packages that transfers a complete rig from one mesh to another. As a result, all face models created so far would need to be remade to reflect the topology of the new face model. With our method, artists can make their 3D face models and, should the producer later want a different appearance, previous work will not be wasted. We have performed several tests with major film and video game companies in Canada and Spain and received very positive feedback. Everyone said that our method speeds up the rigging process, and most showed interest in integrating it into their production pipeline. We have tested the precision of the attribute transfer and the accuracy of the rig created in the target model.
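A measurement like the one plotted in Figure 15 can be sketched as follows: for each muscle-surface point, take the distance to its nearest skin vertex, and compare the resulting profile on the generic rig against the one on the warped model. This is an illustrative sketch with placeholder point sets, not the evaluation code used for the paper.

```python
# Evaluation sketch in the spirit of Figure 15: distance from each
# muscle point to the nearest skin vertex. Point sets are placeholders.

def nearest_distance(p, verts):
    """Distance from point p to the closest vertex in verts."""
    return min(sum((a - b) ** 2 for a, b in zip(p, v)) ** 0.5
               for v in verts)

def muscle_skin_profile(muscle_pts, skin_verts):
    """One distance sample per muscle point (e.g. in mm)."""
    return [nearest_distance(p, skin_verts) for p in muscle_pts]
```

Plotting this profile for the generic rig and for the target model gives the two curves compared in Figure 15; similar profiles indicate that the transferred muscles sit at the same depth under the skin as in the source rig.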
The companies' art directors approved the quality of the shapes that were automatically created in the target model. This is a crucial result: if the output still required a lot of tuning, the system would be useless in a production. Finally, we overcome the limitations of traditional facial rigging techniques and allow: reusing the character setup created for one model on different models, saving production time; scripts to be portable (as the models share a set of attributes, the scripts are valid for all of them); and a user to define and customize a source model, providing flexibility in the animation pipeline and between productions.

Acknowledgements

Special thanks go to João Orvalho for his review, unconditional support and motivation. We also thank Juan Nouche and Xenxo Álvarez Blanco from Dygra Films for their valuable comments, dedication, and for creating the 3D models, rigs and animations used to test the system.
Figure 16: This example shows keyframes from a facial animation that reproduces a speech. To create these animations, an artist created an animation script on the source model (first and fourth columns), based on mixing different blend shapes and manipulating the joints of the mouth and the NURBS surface. Then the animation script was applied to both target models (second, third, fifth and sixth columns) (3D models and animations courtesy of Dygra).
Figure 17: This example shows keyframes from a facial animation that tests extreme expressions. To create these animations, an artist created an animation script on the source model (first and fourth columns), based on mixing different blend shapes and manipulating the joints of the mouth and the NURBS surface. Then the animation script was applied to both target models (second, third, fifth and sixth columns) (3D models and animations courtesy of Dygra).
Figure 18: Generic rig: (a) textured; (b) wireframe; (c) 44 landmarks; (d) muscles.

Figure 19: Human: (a) textured; (b) wireframe; (c) 10 landmarks; (d) muscles.

Figure 20: Cartoon: (a) textured; (b) wireframe; (c) 44 landmarks; (d) muscles.

Figure 21: Animal: (a) textured; (b) wireframe; (c) 44 landmarks; (d) muscles.
References

[ACWK04] ANGELIDIS A., CANI M., WYVILL G., KING S.: Swirling-sweepers: Constant-volume modeling. Pacific Graphics 2004.

[AHS02] ALBRECHT I., HABER J., SEIDEL H.: Speech synchronization for physics-based facial animation. In Proceedings of WSCG 2002.

[BK04] BOTSCH M., KOBBELT L.: An intuitive framework for real-time freeform modeling. ACM Transactions on Graphics (SIGGRAPH 2004), 23, 3 (2004).

[Boo89] BOOKSTEIN F.: Principal warps: Thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11, 6 (1989).

[CBC*05] CAPELL S., BURKHART M., CURLESS B., DUCHAMP T., POPOVIĆ Z.: Physically based rigging for deformable characters. Eurographics/ACM SIGGRAPH Symposium on Computer Animation 2005.

[CFB97] CARR J., FRIGHT W., BEATSON R.: Surface interpolation with radial basis functions for medical imaging. IEEE Transactions on Medical Imaging, 16 (1997).

[CGC*02] CAPELL S., GREEN S., CURLESS B., DUCHAMP T., POPOVIĆ Z.: Interactive skeleton-driven dynamic deformation. SIGGRAPH 2002.

[CHP89] CHADWICK J., HAUMANN D., PARENT R.: Layered construction for deformable animated characters. SIGGRAPH 1989.

[Coq90] COQUILLART S.: Extended free-form deformations: A sculpturing tool for 3D geometric modeling. Proceedings of SIGGRAPH 1990, ACM Computer Graphics.

[CZ92] CHEN D., ZELTZER D.: Pump it up: Computer animation of a biomechanically based model of muscle using the finite element method. SIGGRAPH 1992.

[DMS98] DECARLO D., METAXAS D., STONE M.: An anthropometric face model using variational techniques. Proceedings of SIGGRAPH 1998.

[DN06] DENG Z., NEUMANN U.: Expressive facial animation synthesis and editing with phoneme-isomap control. Eurographics 2006.

[Fai87] FAIGIN G.: The Artist's Complete Guide to Facial Expressions. Watson-Guptill Publications, New York, 1987.

[FLMI87] FARKAS L., MUNRO I.: Anthropometric Facial Proportions in Medicine. Charles Thomas Ltd., USA, 1987.

[Hab03] HABER J.: Overview: Facial animation techniques. Eurographics 2003 Tutorial: Facial Modeling and Animation.

[Hab04] HABER J.: Anatomy of the human head. SIGGRAPH 2004 Course Notes: Facial Modeling and Animation.

[HBH01] HUTTON T., BUXTON B., HAMMOND P.: Dense surface point distribution models of the human face. IEEE Workshop on Mathematical Methods in Biomedical Image Analysis (2001).

[Hen90] HENNE M.: A Constraint-Based Skin Model for Human Figure Animation. Master's thesis, University of California Santa Cruz, 1990.

[HHK92] HSU W., HUGHES J., KAUFMAN H.: Direct manipulation of free-form deformations. Proceedings of SIGGRAPH 1992, ACM Press.

[HPL04] HILGER K. B., PAULSEN R. R., LARSEN R.: Markov random field restoration of point correspondences for active shape modelling. SPIE Medical Imaging, 5370 (2004).

[JP02] JAMES D., PAI D. K.: DyRT: Dynamic response textures for real time deformation simulation with graphics hardware. SIGGRAPH 2002.

[JTDP03] JOSHI P., TIEN W., DESBRUN M., PIGHIN F.: Learning controls for blend shape based realistic facial animation. Eurographics/SIGGRAPH Symposium on Computer Animation 2003, ACM Press.

[KGC*96] KOCH R., GROSS M., CARLS F., VON BÜREN D., FANKHAUSER G., PARISH Y.: Simulating facial surgery using finite element models. SIGGRAPH 1996.

[KHYS02] KÄHLER K., HABER J., YAMAUCHI H., SEIDEL H.: Head shop: Generating animated head models with anatomical structure. ACM SIGGRAPH/Eurographics Symposium on Computer Animation 2002.

[Kom88] KOMATSU K.: Human skin model capable of natural shape variation. The Visual Computer, 3 (March 1988), Springer Berlin/Heidelberg.

[LK00] LORENZ C., KRAHNSTÖVER N.: Generation of point-based 3D statistical shape models for anatomical objects. Computer Vision and Image Understanding, 77 (2000).

[LTW95] LEE Y., TERZOPOULOS D., WATERS K.: Realistic modeling for facial animation. SIGGRAPH 1995.

[NN01] NOH J., NEUMANN U.: Expression cloning. SIGGRAPH 2001, ACM Press.

[Par72] PARKE F.: Computer generated animation of faces. Proceedings of the ACM Annual Conference, 1972.

[PCLS05] PRATSCHER M., COLEMAN P., LASZLO J., SINGH K.: Outside-in anatomy based character rigging. Eurographics/SIGGRAPH Symposium on Computer Animation 2005.

[PLS*98] PIGHIN F., HECKER J., LISCHINSKI D., SZELISKI R., SALESIN D.: Synthesizing realistic facial expressions from photographs. Proceedings of SIGGRAPH 1998.

[RSS*01] ROHR K., STIEHL H., SPRENGEL R., BUZUG T., WEESE J., KUHN M.: Landmark-based elastic registration using approximating thin-plate splines. IEEE Transactions on Medical Imaging, 20 (2001).

[Sch02] SCHLEIFER J.: Character setup from rig mechanics to skin deformations: A practical approach. SIGGRAPH 2002, Course Notes.

[SF98] SINGH K., FIUME E.: Wires: A geometric deformation technique. Proceedings of SIGGRAPH 1998, ACM Computer Graphics.

[SL96] SZELISKI R., LAVALLÉE S.: Matching 3-D anatomical surfaces with non-rigid deformations using octree-splines. International Journal of Computer Vision, 18, 2 (1996).

[SOP95] SINGH K., OHYA J., PARENT R.: Human figure synthesis and animation for virtual space teleconferencing. Virtual Reality Annual International Symposium 1995, IEEE.

[SP86] SEDERBERG T., PARRY S.: Free-form deformation of solid geometric models. Proceedings of SIGGRAPH 1986, ACM Computer Graphics.

[SSRMF06] SIFAKIS E., SELLE A., ROBINSON-MOSHER A., FEDKIW R.: Simulating speech with a physics-based facial muscle model. Eurographics 2006.

[TT93] TURNER R., THALMANN D.: The elastic surface layer model for animated character construction. Proceedings of Computer Graphics International 1993.

[Web00] WEBER J.: Run-time skin deformation. Game Developers Conference 2000. Available at: animate.pdf

[WF95] WATERS K., FRISBIE J.: A coordinated muscle model for speech animation. Proceedings of Graphics Interface 1995.

[WG97] WILHELMS J., VAN GELDER A.: Anatomically based modeling. SIGGRAPH 1997.

[WKT96] WU Y., KALRA P., THALMANN N.: Simulation of static and dynamic wrinkles of skin. In Proceedings of Computer Animation 1996.
Digital 3D Animation
Elizabethtown Area School District Digital 3D Animation Course Number: 753 Length of Course: 1 semester 18 weeks Grade Level: 11-12 Elective Total Clock Hours: 120 hours Length of Period: 80 minutes Date
Computer Facial Animation: A Review
International Journal of Computer Theory and Engineering, Vol. 5, No. 4, August 2013 Computer Facial Animation: A Review Heng Yu Ping, Lili Nurliyana Abdullah, Puteri Suhaiza Sulaiman, and Alfian Abdul
Blender in Research & Education
Blender in Research & Education 1 Overview The RWTH Aachen University The Research Projects Blender in Research Modeling and scripting Video editing Blender in Education Modeling Simulation Rendering 2
PHYSIOLOGICALLY-BASED DETECTION OF COMPUTER GENERATED FACES IN VIDEO
PHYSIOLOGICALLY-BASED DETECTION OF COMPUTER GENERATED FACES IN VIDEO V. Conotter, E. Bodnari, G. Boato H. Farid Department of Information Engineering and Computer Science University of Trento, Trento (ITALY)
STEPHANIE POCKLINGTON
STEPHANIE POCKLINGTON CELL: 1-905-599-4385 EMAIL: [email protected] WEB: www.pockstudio.com EXPERIENCE Moving Picture Company (MPC) - September 2013 to present Montreal, QC Cinderella Assets - modeling
Rotation Matrices and Homogeneous Transformations
Rotation Matrices and Homogeneous Transformations A coordinate frame in an n-dimensional space is defined by n mutually orthogonal unit vectors. In particular, for a two-dimensional (2D) space, i.e., n
Learning an Inverse Rig Mapping for Character Animation
Learning an Inverse Rig Mapping for Character Animation Daniel Holden Jun Saito Taku Komura University of Edinburgh Marza Animation Planet University of Edinburgh Figure 1: Results of our method: animation
Subspace Analysis and Optimization for AAM Based Face Alignment
Subspace Analysis and Optimization for AAM Based Face Alignment Ming Zhao Chun Chen College of Computer Science Zhejiang University Hangzhou, 310027, P.R.China [email protected] Stan Z. Li Microsoft
SkillsUSA 2014 Contest Projects 3-D Visualization and Animation
SkillsUSA Contest Projects 3-D Visualization and Animation Click the Print this Section button above to automatically print the specifications for this contest. Make sure your printer is turned on before
Computer Graphics CS 543 Lecture 12 (Part 1) Curves. Prof Emmanuel Agu. Computer Science Dept. Worcester Polytechnic Institute (WPI)
Computer Graphics CS 54 Lecture 1 (Part 1) Curves Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) So Far Dealt with straight lines and flat surfaces Real world objects include
An Animation Definition Interface Rapid Design of MPEG-4 Compliant Animated Faces and Bodies
An Animation Definition Interface Rapid Design of MPEG-4 Compliant Animated Faces and Bodies Erich Haratsch, Technical University of Munich, [email protected] Jörn Ostermann, AT&T Labs
N. Burtnyk and M. Wein National Research Council of Canada
ntroduction Graphics and mage Processing nteractive Skeleton Techniques for Enhancing Motion Dynamics in Key Frame Animation N. Burtnyk and M. Wein National Research Council of Canada A significant increase
HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT
International Journal of Scientific and Research Publications, Volume 2, Issue 4, April 2012 1 HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT Akhil Gupta, Akash Rathi, Dr. Y. Radhika
CHAPTER 6 TEXTURE ANIMATION
CHAPTER 6 TEXTURE ANIMATION 6.1. INTRODUCTION Animation is the creating of a timed sequence or series of graphic images or frames together to give the appearance of continuous movement. A collection of
MMGD0203 Multimedia Design MMGD0203 MULTIMEDIA DESIGN. Chapter 3 Graphics and Animations
MMGD0203 MULTIMEDIA DESIGN Chapter 3 Graphics and Animations 1 Topics: Definition of Graphics Why use Graphics? Graphics Categories Graphics Qualities File Formats Types of Graphics Graphic File Size Introduction
Project 2: Character Animation Due Date: Friday, March 10th, 11:59 PM
1 Introduction Project 2: Character Animation Due Date: Friday, March 10th, 11:59 PM The technique of motion capture, or using the recorded movements of a live actor to drive a virtual character, has recently
