Motion Retargetting and Transition in Different Articulated Figures

Ming-Kai Hsieh    Bing-Yu Chen    Ming Ouhyoung
National Taiwan University

Abstract

In this paper, we present an interactive system to transfer and concatenate motions between different articulated figures, such as a human and a dog. The system first constructs a union skeleton that contains both bone structures, with the bone correspondence assigned manually by the user, and then aligns the initial poses of the two skeletons. With the initial poses aligned, motion data can be transferred between them. In addition, by blending similar poses of the two skeletons, we can produce a seamless transition sequence, which can be used to drive a meta-mesh and generate an animated morphing result.

1. Introduction

The process of animation creation is notorious for its time-consuming and tedious manual tweaking, and the results are often not good enough. Thus, how to reuse an existing motion (keyframe animation or motion capture data) is an important topic: it is useful to produce a base motion from another motion and then add characteristics by keyframing. Recently, with the advance and prevalence of motion capture techniques, more and more high-quality motion data can be obtained from the world wide web or from commercial companies. There are three main reuse techniques: first, motion retargetting, which adapts a motion to a different character; second, motion synthesis, which combines two or more motions to produce a new one; and third, motion transition, which generates a smooth transition between two motions so that they can be concatenated seamlessly.

Reusing motion data on another character is not a trivial task, especially when the two characters are different. If we want a human character to imitate the walking motion of a dog or the flying motion of a bird, the problems we encounter are that the characters may have different numbers of bones, different initial poses, or different topologies. The same problems (except for the topology problem) can occur even when dealing with the same type of character: different motion capture companies define different human skeletons, and different artists tend to define different animal skeletons according to their experience. If we can transfer motions between two different characters, we can therefore save animation production time. Moreover, a transition between two motions of different skeletons produces an animated morphing result, such as a walking human morphing into a running dog, which may be useful in the game and movie industries.

Figure 1: An overview of our system. The source skeleton is a dog, and the target skeleton is a man. In order to accomplish motion retargetting and transition between different skeletons, we have to define the correspondence between the source and target skeletons. Then, after aligning their initial poses, we can retarget the motion data between them. Moreover, motion transition can be done by constructing a meta-skeleton that contains both skeleton structures and then blending similar frames and bone lengths of the two motions.

Figure 1 gives an overview of our system.
In this figure, the source skeleton is a dog, and the target skeleton is a man. We can retarget the dog's walking motion to the man by adding the initial pose difference and the dog's walking motion to the man's character. We can then concatenate the man's and the dog's walking motions through their meta-skeleton. Figure 2 shows the system workflow.

Figure 2: System workflow (load skeletons from .asf files, find the skeleton correspondence by loading an alignment file or assigning it manually, align the initial poses, create the meta-skeleton, load the motions from .amc files, find the motion correspondence, and save the resulting motions as .amc files).

The remainder of this paper is organized as follows. Section 2 reviews previous motion retargetting and transition techniques. Sections 3 and 4 focus on the main algorithms. Section 5 shows several animation snapshots produced by our system. In Section 6, we conclude this paper and list some future work.

2. Related Work

Gleicher [5] exploited spacetime constraints to take the spatial and temporal constraints of the original motion into account. He formulated motion retargetting as one large constrained optimization problem and tried to reduce its computational cost. In the same paper, he also discussed the issues of transferring motion between different characters; using the same spacetime constraints technique, a human walking motion can be retargetted to a rigid can. Popović and Witkin [10] also used a spacetime constraints scheme to solve the motion transformation problem, taking dynamics into account. Their method can likewise map motion between different kinematic structures. The main idea is to manually simplify the original skeleton into a reduced skeleton and fit the original motion data to it; the computational cost of spacetime editing or transformation is lowered by using this simplified model, and after editing the resulting motion is fitted back onto the original skeleton. Lee and Shin [9] developed an interactive system that adapts existing human motions to desired features by satisfying a set of constraints induced by both the characters and the environment. They fuse a hierarchical curve fitting technique with their own inverse kinematics solver to accomplish the task. In contrast to [5], which solves motion retargetting as one large non-linear optimization problem, they decouple the problem into several manageable subproblems that can be solved efficiently.

The goal of motion transition is to seamlessly concatenate two motions. Most previous techniques focus on transitions between motions of the same skeleton. First, similar frames of the two motions are found using some distance metric; these frames are regarded as good transition points. Then the neighborhoods of the corresponding frames are blended according to ease-in/ease-out blend weights to produce a smooth result. Bruderlin and Williams [4] use dynamic programming and timewarping methods to produce transitions. They treat motions as time-varying continuous functions and apply signal processing techniques for motion blending and transition; after finding similar frames, displacement mapping is used to combine two motions without losing their continuous characteristics. Several other works, such as [9, 12], also use displacement mapping techniques to blend motions. Unuma et al. [11] use Fourier function approximation to represent the rotation angle of a bone over a period of time. The transition between two motions of the same character can then be produced by interpolating the Fourier parameters in the frequency domain and transforming back to the spatial domain.
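The generic same-skeleton transition recipe summarized above (find similar frames with a distance metric, then blend their neighborhoods with ease-in/ease-out weights) can be illustrated with a short sketch. The code below is our own illustration, not part of the system described in this paper, which (as Section 4 explains) relies on manually assigned frame correspondences; it assumes each clip is stored as a per-frame vector of joint rotations and uses a plain Euclidean pose distance.

```python
import numpy as np

def find_transition_frames(motion_a, motion_b):
    """Return the most similar pair of frames between two clips.

    motion_a, motion_b: arrays of shape (frames, dofs) holding per-frame
    joint rotations (e.g., Euler angles in radians).  A simple Euclidean
    pose distance is used; any per-frame metric could be substituted.
    """
    # Pairwise distance matrix: dist[i, j] = ||motion_a[i] - motion_b[j]||
    diff = motion_a[:, None, :] - motion_b[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    i, j = np.unravel_index(np.argmin(dist), dist.shape)
    return i, j, dist[i, j]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    walk_a = rng.uniform(-np.pi, np.pi, size=(120, 30))  # 120 frames, 30 DOFs
    walk_b = rng.uniform(-np.pi, np.pi, size=(150, 30))
    i, j, d = find_transition_frames(walk_a, walk_b)
    print(f"best transition point: frame {i} of A -> frame {j} of B (distance {d:.2f})")
```

The returned frame pair serves as the transition point around which the neighboring frames are blended.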
Ashraf and Wong [1] propose a semi-automatic labeling technique that finds the correspondence between two motions by analyzing the end-effector acceleration zero-crossings of user-specified joints. Their motion-state based transition can handle more categories of motion than other signal-based techniques, because the latter usually blend the whole-body movement, unlike their decoupled interpolation approach, which treats the upper and lower halves of the body separately. Later, they added inverse kinematics constraints to their system to correct foot-sliding problems on the fly [2, 3].

3. Motion Retargetting

A skeleton S is defined as a collection of bones b_i ∈ B, and a motion clip M_S(t) records the state of each bone (its translation and rotation angles) at time t, relative to the initial pose. Thus, to retarget motions between two different skeletons S and S′ (Figure 3), a straightforward idea is to align their initial poses first and then transfer the motion data from one to the other. First, we need a function that describes the corresponding bones of S and S′. Since the initial poses of the two skeletons may differ, we also have to align these initial poses at the beginning.

3.1. Skeleton Correspondence

In our system, users can assign three types of correspondence manually through our interface: (1) one-to-one correspondence (the most common case), (2) many-to-one correspondence (e.g., the two legs of a human may both be mapped to the tail fin of a fish), and (3) no correspondence (e.g., the tail of a dog cannot be mapped to any bone of a human).
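To make these definitions concrete, the sketch below shows one possible data model, using hypothetical class and field names of our own choosing: a skeleton S as a set of bones b_i, a motion clip M_S(t) as a list of per-frame bone states, and the user-assigned correspondence as a mapping that can express one-to-one, many-to-one, and no-correspondence cases.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class Bone:
    name: str
    parent: Optional[str]                      # None for the root bone
    length: float                              # bone length in the initial pose
    direction: Tuple[float, float, float]      # bone direction in the initial pose

@dataclass
class Skeleton:
    bones: Dict[str, Bone] = field(default_factory=dict)

    def children(self, name: str) -> List[str]:
        return [b.name for b in self.bones.values() if b.parent == name]

# One frame of a motion clip M_S(t): per-bone rotation angles relative to the
# initial pose (the root additionally carries a translation, omitted here).
Frame = Dict[str, Tuple[float, float, float]]
MotionClip = List[Frame]

# Correspondence assigned through the interface: each target bone maps to the
# list of source bones driving it.  An empty list means "no correspondence";
# a list with several entries expresses a many-to-one mapping.
Correspondence = Dict[str, List[str]]

# Example: both human legs drive the tail fin of a fish (many-to-one),
# the spine maps one-to-one, and the dorsal fin has no human counterpart.
example: Correspondence = {
    "tail_fin": ["left_leg", "right_leg"],
    "spine": ["spine"],
    "dorsal_fin": [],
}
```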
Figure 3: An illustration of motion retargetting.

Figure 4: Skeleton correspondence of a dog and a human. Corresponding bones are represented by the same number. Numbers enclosed in red squares represent many-to-one correspondences. Orange bones denote bones that have no correspondence.

Figure 4 shows an example of the skeleton correspondence between a dog and a human. Because we are dealing with different skeletons, fully automatic skeleton matching techniques that consider bone direction, topology, or dominance (the number of successors) do not work in most cases. In addition, we want to give artists the flexibility to retarget motions as they wish, so user intervention is still needed. Other semi-automatic matching techniques, such as [13], could also be incorporated into the system.

Figure 5: (a) The skeleton of a dog. (b) The skeleton of a human with its initial pose aligned to the dog.

3.2. Initial Pose Alignment

To align the initial poses of two skeletons, we recursively compute the rotation angle between each pair of corresponding bones from root to leaf and apply the result directly to the target skeleton before computing the next angle difference. This method only guarantees that the bone directions are aligned; it does not consider bone twist, so unexpected results can occur, such as the orientation of the human's hands shown in Figure 5. This happens because no fingers are mapped to the dog's limbs. In our current implementation, the user can adjust the bone twist through our interface.

3.3. Motion Data Transfer

With the initial poses of the two skeletons aligned, we can transfer motion data between them, but the local frames of corresponding bones may still differ (Figure 6). In this case, we transform the source rotation angles from the source bone's local coordinates to global coordinates, and then transform them back into the target bone's local coordinates.

Figure 6: Local frames of corresponding bones may not be the same even when the initial poses are aligned. The highlighted bones denote the corresponding bones, and the x, y, and z axes are colored red, green, and blue, respectively.

4. Motion Transition

Our system provides the ability to transfer motions between different skeletons through the bone correspondences. We also want to transition between motions of different skeletons, for example, morphing a running human into a flying bird. If two mesh models are parameterized with their initial poses aligned, we can morph a meta-mesh between these two models while it is animated by a meta-skeleton that contains both skeletons. In this paper, we only discuss skeleton morphing. The main idea of motion transition between different skeletons is as follows: (1) first, we construct a meta-skeleton that contains both skeletons using the bone correspondence information; (2) then, we retarget the source motion to the target with the techniques described in Section 3; (3) finally, once the correspondence between the two motions is found, we use motion blending to produce a skeleton morphing result.
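A minimal sketch of the two retargetting steps of Section 3 (initial pose alignment and local-to-global-to-local transfer of rotations) is given below, assuming rotations are stored as 3x3 matrices and that each bone's rest direction and rest local frame are known. It is an illustration of the idea rather than the authors' implementation: the alignment rotation is applied per bone here, whereas the full system propagates it to the whole subtree before computing the next difference, and bone twist is left for the user to correct, as noted in Section 3.2.

```python
import numpy as np

def rotation_between(u, v):
    """3x3 rotation matrix that rotates unit vector u onto unit vector v
    (Rodrigues' formula).  Twist about the bone axis is not determined by
    this construction, mirroring the limitation discussed in Section 3.2."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    c = float(np.dot(u, v))
    if c > 1.0 - 1e-8:                      # already aligned
        return np.eye(3)
    if c < -1.0 + 1e-8:                     # opposite: 180-degree turn about an orthogonal axis
        a = np.cross(u, [1.0, 0.0, 0.0])
        if np.linalg.norm(a) < 1e-8:
            a = np.cross(u, [0.0, 1.0, 0.0])
        a = a / np.linalg.norm(a)
        return 2.0 * np.outer(a, a) - np.eye(3)
    w = np.cross(u, v)
    K = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])
    return np.eye(3) + K + K @ K * (1.0 / (1.0 + c))

def align_initial_pose(src_dirs, tgt_dirs, bone_order):
    """Initial pose alignment (Section 3.2): walk the corresponding bones from
    root to leaf, rotate each target bone direction onto its source counterpart,
    and apply the result before processing the next bone.  src_dirs and tgt_dirs
    map bone names to rest-pose direction vectors (np arrays of shape (3,));
    in the full system the offset is also propagated to the bone's subtree."""
    offsets = {}
    for bone in bone_order:                 # root-to-leaf order
        offsets[bone] = rotation_between(tgt_dirs[bone], src_dirs[bone])
        tgt_dirs[bone] = offsets[bone] @ tgt_dirs[bone]
    return offsets

def transfer_rotation(R_src_local, L_src, L_tgt):
    """Motion data transfer (Section 3.3): re-express a rotation given in the
    source bone's local frame in the target bone's local frame by going through
    global coordinates.  L_src and L_tgt are the local-to-global rest frames."""
    return L_tgt.T @ (L_src @ R_src_local @ L_src.T) @ L_tgt
```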
4.1. Meta-Skeleton Construction

After the bone correspondence between the input skeletons is found, we can construct a meta-skeleton (union skeleton) that contains the information of both input skeletons. Without the meta-skeleton, we cannot generate a consistent transition result between different skeletons, because there may be several bones that have no correspondence. Figure 7 shows a meta-skeleton constructed from a dog and a human. Because the bone correspondence may sometimes produce ambiguity in the meta-skeleton, we have to prevent users from assigning ambiguous bone correspondences in the correspondence interface.

Figure 7: A meta-skeleton (the yellow one) constructed from a dog and a human.

4.2. Motion Correspondence

In order to obtain a better transition result, we need to find the correspondence between the two motions: blending frames with similar poses produces a smoother transition. However, since we are dealing with two different motions on two different skeletons, their postures remain different even when they perform the same type of motion (such as walking). In this case, signal-based or joint-based motion correspondence approaches may fail. Signal-based techniques [4, 11] often fall short of establishing meaningful correlations between motions with many DOFs (degrees of freedom), since they only consider one DOF at a time. Joint-position-based approaches [7, 8] take the whole-body motion into account, but lack the ability to find correspondences between different skeletons or dissimilar motions. In the current implementation, we therefore provide an interface for users to assign the corresponding frames manually.
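Step (3) of the transition pipeline, blending the corresponding frames and the bone lengths of the meta-skeleton with ease-in/ease-out weights, can be sketched as follows. This is a simplified illustration under our own assumptions: poses are stored as flat DOF vectors and interpolated componentwise, whereas in practice rotations are better interpolated on SO(3) (e.g., with quaternion slerp).

```python
import numpy as np

def ease_in_out(t):
    """Smoothstep blend weight: 0 at t = 0, 1 at t = 1, zero slope at both ends."""
    return t * t * (3.0 - 2.0 * t)

def blend_transition(frames_a, frames_b, lengths_a, lengths_b, n_blend):
    """Blend the last n_blend frames of clip A (already retargetted onto the
    meta-skeleton) into the first n_blend frames of clip B, interpolating both
    the pose and the bone lengths of the meta-skeleton."""
    blended_frames, blended_lengths = [], []
    for k in range(n_blend):
        w = ease_in_out((k + 1) / (n_blend + 1))          # ease-in/ease-out weight
        pose_a = frames_a[len(frames_a) - n_blend + k]    # tail of clip A
        pose_b = frames_b[k]                              # head of clip B
        blended_frames.append((1.0 - w) * pose_a + w * pose_b)
        blended_lengths.append((1.0 - w) * lengths_a + w * lengths_b)
    return np.array(blended_frames), np.array(blended_lengths)

if __name__ == "__main__":
    man_walk = np.zeros((100, 30))       # placeholder DOF tracks on the meta-skeleton
    dog_walk = np.ones((100, 30))
    man_lengths = np.full(15, 1.0)       # meta-skeleton bone lengths for the man
    dog_lengths = np.full(15, 0.6)       # meta-skeleton bone lengths for the dog
    frames, lengths = blend_transition(man_walk, dog_walk, man_lengths, dog_lengths, 20)
    print(frames.shape, lengths.shape)   # (20, 30) (20, 15)
```

Driving the meta-skeleton with the blended frames while its bone lengths are interpolated yields the skeleton morphing result; the corresponding frames can come from the manual assignment described in Section 4.2.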
5. Results

Figure 8: The interface of our system.

Figure 8 shows the interface of our system, which is developed with FLTK. The results of motion retargetting and transition are shown in Figures 9 and 10, respectively.

6. Conclusions

In this paper, we presented a system that can perform motion retargetting and transition between different skeletons. After the user sets the skeleton correspondence, the initial poses are aligned automatically and the motion data are then transferred directly. In addition, if motions for both input skeletons are provided, we can blend corresponding frames and bone lengths to produce a skeleton morphing result. However, there is still much work to do, including:

- The joint constraints of the retargetted skeleton must be taken into consideration to produce a plausible retargetting result.
- Spacetime constraints and inverse kinematics should be considered to correct the path of the motion and to handle other external constraints (which can be set manually by users).
- Collision detection should also be taken into account to prevent unnatural intersections of the body and limbs.

Figure 9: The motion retargetting results of our system. (a) Retargetting a dog's motion to a man. (b) Retargetting a shark's motion to a man.

Acknowledgments

This work was partially supported by the National Science Council of Taiwan.

References

[1] Golam Ashraf and Kok Cheong Wong. Generating consistent motion transition via decoupled framespace interpolation. Computer Graphics Forum (Eurographics 2000 Conference Proceedings), 19(3), 2000.

[2] Golam Ashraf and Kok Cheong Wong. Constrained framespace interpolation. In Proceedings of Computer Animation 2001, 2001.

[3] Golam Ashraf and Kok Cheong Wong. Semantic representation and correspondence for state-based motion transition. IEEE Transactions on Visualization and Computer Graphics, 2003.

[4] Armin Bruderlin and Lance Williams. Motion signal processing. In ACM SIGGRAPH 1995 Conference Proceedings, 1995.

[5] Michael Gleicher. Retargetting motion to new characters. In ACM SIGGRAPH 1998 Conference Proceedings, 1998.
Figure 10: Concatenating a man's walking motion with a dog's walking motion.

[6] Michael Gleicher, Hyun Joon Shin, Lucas Kovar, and Andrew Jepsen. Snap-together motion: Assembling run-time animations. In Proceedings of the 2003 ACM SIGGRAPH Symposium on Interactive 3D Graphics, 2003.

[7] Lucas Kovar, Michael Gleicher, and Frédéric Pighin. Motion graphs. ACM Transactions on Graphics (SIGGRAPH 2002 Conference Proceedings), 21(3), 2002.

[8] Jehee Lee, Jinxiang Chai, Paul S. A. Reitsma, Jessica K. Hodgins, and Nancy S. Pollard. Interactive control of avatars animated with human motion data. ACM Transactions on Graphics (SIGGRAPH 2002 Conference Proceedings), 21(3), 2002.

[9] Jehee Lee and Sung Yong Shin. A hierarchical approach to interactive motion editing for human-like figures. In ACM SIGGRAPH 1999 Conference Proceedings, 1999.

[10] Zoran Popović and Andrew Witkin. Physically based motion transformation. In ACM SIGGRAPH 1999 Conference Proceedings, 1999.

[11] Munetoshi Unuma, Ken Anjyo, and Ryozo Takeuchi. Fourier principles for emotion-based human figure animation. In ACM SIGGRAPH 1995 Conference Proceedings, 1995.

[12] Andrew Witkin and Zoran Popović. Motion warping. In ACM SIGGRAPH 1995 Conference Proceedings, 1995.

[13] Yonghong Zhao, Hong Yang Ong, Tiow Seng Tan, and Yongguan Xiao. Interactive control of component-based morphing. In Proceedings of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2003.