A simple footskate removal method for virtual reality applications
The Visual Computer manuscript No. (will be inserted by the editor)

A simple footskate removal method for virtual reality applications

Etienne Lyard, Nadia Magnenat-Thalmann
MIRALab, University of Geneva, Battelle, Batiment A, 7 route de Drize, CH-1227 Carouge, Switzerland
[email protected]

Received: 3/14/2007 / Revised version: 3/28/2007

Abstract Footskate is a common problem encountered in interactive applications dealing with virtual character animation. It has proven hard to fix without complex numerical methods which require expert skills to implement, along with a fair amount of user interaction to correct a motion. On the other hand, deformable bodies are increasingly used in Virtual Reality (VR) applications, allowing users to customize their avatar as they wish. This introduces the need to adapt motions without any help from a designer, as a casual user seldom has the skills required to drive the existing algorithms towards the right solution. In this paper, we present a simple method to remove footskate artifacts in VR applications. Unlike previous algorithms, our approach does not rely on the skeletal animation to perform the correction but rather on the skin. This ensures that the final foot planting really matches the virtual character's motion. The changes are applied to the root joint of the skeleton only, so that the resulting animation stays as close as possible to the original one. Finally, thanks to the simplicity of its formulation, it can be quickly and easily added to existing frameworks.

Key words Computer Animation, Motion Retargeting, Virtual Reality

1 Introduction

Virtual reality often uses motion captured data in order to animate the virtual humans who populate these environments. As it is very time consuming to capture
high quality motions, an existing clip is most likely to be reused for various applications. Thus, unless a clip is applied to only one given character, the motion must be adapted in order to match the new body's characteristics and dimensions. Even though very efficient and reliable methods have already been proposed to solve this problem, our intention is to focus on VR applications while taking into account the trade-offs between the quality of the results, the computational complexity and the ease of implementation. Such applications commonly have databases of 3D bodies, or even systems able to generate new bodies according to user requirements [22]. These bodies are animated on-the-fly according to user interaction by applying motions taken from a database or generated at runtime [16,15,7]. Users seldom have the skills required to perform the motion retargeting themselves, and moreover this is a problem they simply do not want to acknowledge. This is why we propose here a new footskate removal algorithm which can accommodate most of the usual skeleton simplifications and low quality motion clips used by VR applications, and yet still deliver high quality and reliable results with no user interaction. In order to cope with the degradation of the models imposed by VR designers, our algorithm does not rely on the skeletal animation, as is usually done, but rather on the skin motion. Indeed, it is not realistic to rely on a model with two points of contact for the skeleton's feet, as VR characters rarely have such a feature. Moreover, as the motions are extremely simplified, it is difficult to robustly estimate these points of contact from the motion data. Considering the skin motion itself directly gives a multiple-points-of-contact model, thus avoiding this difficult estimation step and ensuring a higher quality final animation.
2 Related work

In real human walking, the feet remain in a fixed position on the ground for short periods of time. Footskate is a common error in character animation in which a foot appears to slide on the ground plane. It is introduced by the fact that even though motions are recorded very accurately on real subjects [12,11] or very well designed by hand, a particular movement only matches the specific subject from which it was recorded or the character for which it was originally designed. Thus, when applying a motion to a different body, the global rotations and translations that position the character in the scene no longer match the motion of the limbs. Numerous approaches have been developed to address this mapping. It can be thought of as finding a motion that optimizes a given criterion (e.g. be as close as possible to the original motion) while complying with a set of constraints (e.g. no footskate should remain). Numerical methods estimate the required correction using global optimization algorithms (often referred to as spacetime optimization [26]) and inverse kinematics (IK). Such approaches have the advantage of taking the whole motion into account to perform the retargeting and hence deliver high quality results. Gleicher [9,10] was the first to apply this kind of optimization to the problem of motion editing and retargeting. He managed to keep the high frequency features of a motion by putting a spline layer between the optimization algorithm
and the actual values of the motion parameters. Lee and Shin [19] further refined this idea by adopting not a single spline per piece of motion, but hierarchies of splines, which made the final motion easier to control. Choi and Ko [6] used inverse rate control and managed to achieve the retargeting in real time, while Boulic et al. [4] proposed an approach based on null-space projections which allows strict priority levels to be enforced among the constraints. All of the works mentioned above, even though not explicitly focused on footskate, can be employed to address this issue. However, due to the large non-linear systems that must be solved at each iteration of the algorithm, they are quite slow compared to other techniques. Analytical IK methods were developed to quicken and facilitate the manipulation of human figures. Tolani et al. [25] demonstrated how to analytically manipulate a human limb with 7 degrees of freedom. This scheme was later employed by several works, which either used it alone [17] or in conjunction with optimization techniques, thus creating a class of hybrid systems [23]. This idea of hybrid systems, combining several techniques in order to speed up the computation, was also exploited by the most recent works in motion adaptation [24,1,18]. However, these works do not address the problem of footskate removal. Rather, they consider the physical aspects of a motion [24,1] or an innovative way to interact with a given clip [18]. Such approaches, even though they may be able to fix footskating, are not designed for this purpose, and improvements may still be found for this specific goal. The only recent work directly dealing with footskate is from Glardon et al. [8]; it takes only the skeleton into account and uses a numerical IK method to reposition the feet where they should stay put.
Unlike previous approaches, ours does not intend to make the motion comply with predefined constraints such as a given foot planting. Rather, we aim to calculate the actual displacement which corresponds to the limbs' motion, thus modifying the character's path. This may appear strange at first, but numerous VR applications (for instance a Virtual Try On) do not focus on the character's path as much as on the actual movements performed by its limbs. Indeed, in order to see how a garment fits on one body or another, the motion of their limbs must be similar, whereas their actual path is not of such great importance. Another novel aspect of our approach is that it relies fully on the skin to carry out the computation. This allows the use of any kind of skeletal hierarchy, whether or not it features both ball and heel joints [2]. Finally, we also present a new way to estimate the foot planting. Although quite simple, it has proven very robust during our tests on catwalk and tryout animations. The remainder of this paper is organized as follows: section 3 gives an outline of our method, which is explained further in sections 3.1 to 3.3. Section 4 presents the results we obtained using this method, and these results are discussed in section 5.

3 A footskate removal method for simplified characters

VR applications are quite different from entertainment productions. Indeed, VR aims at immersing a user in a real-time environment as similar as possible to reality.
Thus these applications are usually highly demanding in terms of performance. To meet these requirements, most labs have built their own VR platforms [21,13] which allow previously developed components to be re-used. These frameworks are optimized to ensure maximum performance at runtime, and most of the models assume numerous simplifications in order to allow for rich environments. For instance, VHD++, developed jointly by MIRALab and VRLab, does not allow a skeleton to be resized at runtime, which leaves the method proposed by [17] unusable within this context. Moreover, VR developers rarely focus on side artifacts and usually prefer to concentrate on the final user experience, which prevents them from implementing complex methods for little benefit. The method we propose here complies with the two previous statements in the sense that it can accommodate rigid skeletons and is very easy to implement. Thus it can be added with little time and effort to an existing VR framework. The method can be summarized as follows: first, foot plants are estimated, i.e. when each foot should be planted on the ground. Unlike most other approaches, our algorithm does not constrain the location where a foot is planted, but rather the frame at which this should happen. This way, the motion itself remains as close as possible to the original; only the path followed by the character is subject to a scale. The next stage is divided into two separate processes. A first treatment corrects the character's motion along the horizontal axis, and a second one adapts its vertical displacement. This choice was motivated by the observation that in most VR applications, the feet of the character remain rigid throughout the animations. This happens because the feet are attached to only one joint, again for optimization reasons.
Thus, as the skin is not deformed accurately, the feet will somewhat penetrate the ground regardless of the retargeting process applied. To correct this, our method accurately plants the feet where they should be in the horizontal plane, while in the vertical direction it minimizes the distance between the ground and the planted foot.

3.1 Feet motion analysis

Depending on the quality of a motion clip, it can be quite tricky to estimate how and when to plant a foot. If the motion were perfect, it would be enough to simply observe that a foot remaining static may be planted. However, feet are rarely motionless. Moreover, most of the clips that are reused in practice are far from perfect, and therefore such a simple criterion is insufficient. Previous works focused on proximity rules to extract the planting [3], k-nearest neighbour classifiers [14] or adaptive thresholds imposed on the location and velocity of the feet [8]. All the above mentioned approaches require some human interaction to perform the estimation: even [8] requires at least specifying the kind of motion being performed. As we mentioned previously, our goal is to discard this interaction stage. Instead, we apply a two-step estimation taking the root translation and foot vertices into account. The first step finds out which foot should be planted, while the second one refines which part of the sole should remain static. This is
achieved by first extracting from the skin mesh the vertices for which the speed has to be calculated. We then isolate the vertices belonging to the feet by using the skin attachment data. Finally, we remove the ones whose normal is not pointing downward, which leaves us with the sole. For clarity, we will use t to designate a frame index or a time interval, the unit corresponding to the actual time elapsed between two animation frames.

Foot selection Given the original root translation ΔR_t from time t to t+1 and v_i the vertex which is planted at frame t, we estimate which foot must remain planted at frame t+1 by considering the motion ΔR'_t induced when the sole vertex v_j remains planted during the next animation frame. By planting a vertex at time t, we mean that its global coordinates remain constant during the time interval [t - 1/2, t + 1/2]. ΔR'_t can thus simply be expressed as:

ΔR'_t = o(v_i, t) - o(v_i, t + δ) + o(v_j, t + δ) - o(v_j, t + 1)    (1)

o(v_i, t) being the offset at frame t of vertex v_i from the root, in world coordinates. For this estimation, we take δ = 1/2, and o(v_i, t + δ) is calculated by linear interpolation between t and t + 1. Once we have calculated ΔR'_t for all the sole vertices, we designate as static the vertex that maximizes the normalized dot product p:

p = (ΔR_t · ΔR'_t) / (|ΔR_t| |ΔR'_t|)

Indeed, a higher value of this dot product means that if v_j is static at frame t+1, the induced displacement will more closely resemble the original root motion. We discard the magnitude of the vectors because we are interested in where the character is going and not how far it goes.

Vertex selection The dot product criterion robustly tells us which foot must be planted. However, the actual vertex picked by this algorithm can sometimes be jerky, e.g. jump from the foot tip to the heel.
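As an illustration, the foot-selection criterion can be sketched as follows. This is a minimal sketch only; the function names and the array layout (one offset pair per candidate sole vertex) are our own, not from the paper.

```python
import numpy as np

def candidate_root_motion(o_i_t, o_i_half, o_j_half, o_j_next):
    # Eq. (1): root displacement implied by keeping vertex v_j planted,
    # with delta = 1/2 (offsets o(.) are root-relative world positions).
    return o_i_t - o_i_half + o_j_half - o_j_next

def select_planted_vertex(delta_r_orig, o_i_t, o_i_half, candidates):
    """Pick the sole vertex whose implied root motion best matches the
    direction (not magnitude) of the original root translation.
    candidates: list of (o_j_half, o_j_next) pairs, one per sole vertex."""
    best_j, best_p = -1, -np.inf
    for j, (o_j_half, o_j_next) in enumerate(candidates):
        dr = candidate_root_motion(o_i_t, o_i_half, o_j_half, o_j_next)
        norm = np.linalg.norm(dr) * np.linalg.norm(delta_r_orig)
        if norm == 0.0:
            continue
        p = float(np.dot(dr, delta_r_orig)) / norm  # cosine: direction only
        if p > best_p:
            best_j, best_p = j, p
    return best_j, best_p
```

Normalizing by both magnitudes makes p the cosine of the angle between the original and implied root motions, matching the paper's choice to ignore how far the character travels.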
The reason is that we picked the vertex which keeps the motion as close as possible to the original one, possibly keeping a bit of skating on its way. In order to overcome this issue, we add a second selection process applied to the vertices of the planted foot only. This second process uses the speed of the vertices to pick the right one. Indeed, if the original motion is not too bad, then the vertex which must be static at a given frame is most likely the one moving the least. The speed of each vertex is first smoothed over several frames in order to remove some of the data noise (in our experiments, 5 frames appeared to be a good compromise). Then, the least moving vertex is chosen as the static one. The result of this selection over a foot step can be seen in figure 1. We previously assumed that the static vertex in the previous frame must be known in order to estimate the one in the current frame. So for the first frame of the animation, we just use the speed criterion. One could think that the detection
would be less accurate because of this; however, we did not witness any setback during our experiments.

Fig. 1 View of the trajectory of the least moving point over the sole during one foot step. In black is a wireframe view of the sole of the character, in red are the vertices selected during the step, and the blue arrows show the transitions between each point.

This algorithm has proven quite efficient on the catwalk animations we tried it on. It is even possible to discard the dot product phase of the algorithm, but we noticed that this stage significantly improved the robustness of the detection by accurately tagging which foot must be planted. If the original animation clip is too poor, our algorithm may fail to figure out which vertex should be planted. In this case, one still has the possibility to manually label the vertices (or correct the output of the algorithm), as is the case for all previous motion retargeting methods. However, during our tests, this only happened on complex dance motions, for which it was hard even for the human eye to figure out which foot should be planted or not.

3.2 Root translation correction

As outlined previously, the retargeting is split into two phases, namely horizontal and vertical corrections. The horizontal correction introduces a drift of the character over the animation range in order to remove the footskating, while the vertical processing aims at minimizing the distance of the static vertices from the floor. These two separate steps use completely different approaches, which are outlined below.

Horizontal correction In order to calculate the corrected horizontal translation of the root joint between two frames, once again we use the motion of the vertices. In the previous section, we made the assumption that a static vertex remains so during a time interval of at least one frame, centered around the current time instant.
However, due to the low sampling rate of the motion data, which is often no more than 25 Hz, this assumption cannot be retained for the actual displacement of the root joint. Thus, we estimate when the transition between two static vertices
should happen, again using their speed. As we stated that the vertex with the lowest speed should remain static, we estimate the exact time instant between two frames when the transfer should occur. To do so, we approximate the speed of each vertex as follows: first, the speeds of the current and next static vertices v_i and v_j are calculated for frames t-1, t, t+1 and t+2. These velocities are then plotted in 2D and approximated using a Catmull-Rom spline [5], which yields two parametric curves V_i(q) and V_j(q), q ∈ [0,1], as depicted in figure 2. Eventually, the particular value q_t corresponding to the crossing between V_i and V_j is calculated by solving the cubic equation V_i(q) = V_j(q), which we did using the approach proposed by Nickalls [20].

Fig. 2 A conceptual view of the velocity estimation performed in order to determine the exact instant of the weight transfer between two fixed points.

Now that the exact time t + q_t when the weight transfer occurs is known, the actual positions of the vertices at this instant are to be calculated. To do so, the trajectory of the points between t and t+1 is first approximated, again using a Catmull-Rom spline. The parametric locations t_1 and t_2 of the points over these curves are given by their approximated speeds as follows:

t_i = ( ∫ from t to t+q_t of V_i(q) dq ) / ( ∫ from t to t+1 of V_i(q) dq ),  i = 1, 2

Having the two offsets o(v_i, t + q_t) and o(v_j, t + q_t) enables us to calculate the new root displacement between frames t and t+1 using formula (1), with δ = q_t. The translation computed during this step is valid only if the feet deform in a realistic way, which (in our experience) they seldom do. Often they remain rigid, and this creates a bad vertical translation while the weight is transferred from the heel to the toe during a foot step.
This is the reason why, as stated previously, the calculated root translation is only applied along the horizontal directions, as follows:

ΔR_t^horizontal = P ΔR_t

P being a 3D to 2D projection matrix.
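The weight-transfer timing and the horizontal projection can be sketched as follows. This is a minimal illustration under stated assumptions: we use uniform Catmull-Rom coefficients and NumPy's polynomial root finder in place of Nickalls' cubic solution, and the projection matrix assumes a y-up world, which the paper does not specify.

```python
import numpy as np

def catmull_rom_coeffs(p0, p1, p2, p3):
    # Cubic coefficients [c0, c1, c2, c3] of the uniform Catmull-Rom
    # segment between p1 and p2, parameterized by q in [0, 1].
    return np.array([
        p1,
        0.5 * (p2 - p0),
        0.5 * (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3),
        0.5 * (-p0 + 3.0 * p1 - 3.0 * p2 + p3),
    ])

def transfer_time(speeds_i, speeds_j):
    """Instant q_t in [0, 1] at which the speed curves V_i and V_j cross,
    given speed samples taken at frames t-1, t, t+1 and t+2."""
    d = catmull_rom_coeffs(*speeds_i) - catmull_rom_coeffs(*speeds_j)
    roots = np.roots(d[::-1])            # np.roots expects highest degree first
    real = [r.real for r in roots
            if abs(r.imag) < 1e-9 and 0.0 <= r.real <= 1.0]
    return min(real) if real else 0.5    # no crossing: fall back to mid-frame

# With a y-up world (our assumption), P keeps only the horizontal
# components of the corrected root displacement.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
```

For example, a linearly decelerating vertex crossing a linearly accelerating one is detected at the midpoint between the two frames; when several crossings fall inside the frame interval, the sketch keeps the earliest one.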
3.3 Vertical correction

The horizontal correction introduces some drift of the character compared to the original animation. This effect is desired, as it removes the footskating. However, in the vertical direction, no drift should take place, otherwise the body would soon be walking into the floor or in the air. Nor do we want to change the legs' configuration to enforce the correct height of the foot sole, because we want to remain as close as possible to the original animation of the limbs. Moreover, strictly enforcing the height of the static vertices to be zero would lead to cumbersome configurations of the legs in order to cope with the rigidity of the feet (remember that the feet seldom deform in VR applications), thus introducing ugly artifacts. Instead we chose to act on the root joint translation only, by minimizing the height of the static vertices over the animation. Thus, a little bit of penetration will remain afterwards, which is the price to pay if the feet are rigid and if we do not want to drastically change the look of the animation. We calculate a single offset and a scale to be applied to the root height trajectory so that static vertices remain as close as possible to the ground throughout the animation, as shown in figure 3. It is quite trivial to calculate the offset to be applied to the root trajectory: if we consider the height h_t in world coordinates of each static point, then the root offset ΔH is simply:

ΔH = -(1/N) Σ_{t=0}^{N-1} h_t

N being the number of frames of the animation. Once this offset is applied to the root trajectory, the mean of the static vertices' heights is thus zero. However, they still oscillate above and underneath the ground during the animation. This oscillation will be minimized by the calculation of the scaling factor α.
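Assuming per-frame root heights and static-vertex heights stored as NumPy arrays (our own layout, not from the paper), the offset ΔH and the variance-minimizing scaling factor α can be sketched together as:

```python
import numpy as np

def vertical_correction(root_h, rel_h):
    """Offset, then scale, the root height track so the planted vertices
    stay as close as possible to the ground.
    root_h: per-frame root joint height H_t.
    rel_h:  per-frame height l_t of the static vertex relative to the root."""
    root_h = np.asarray(root_h, dtype=float)
    l = np.asarray(rel_h, dtype=float)

    # Offset dH = -mean(h_t): the mean static-vertex height becomes zero.
    root_h = root_h - (root_h + l).mean()

    # Scale the oscillation r_t around the mean root height by the alpha
    # minimizing the variance sum((H_bar + alpha*r_t + l_t)^2).
    h_bar = root_h.mean()
    r = root_h - h_bar
    rr = np.sum(r * r)
    alpha = 0.0 if rr == 0.0 else -np.sum(r * (h_bar + l)) / rr
    return h_bar + alpha * r  # corrected per-frame root heights
```

For instance, a root trajectory that rises linearly over flat ground is flattened entirely, so the static vertices sit exactly on the floor; with rigid feet and varying relative heights, a small residual penetration remains, as the section notes.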
If we consider H̄ to be the average root height over the animation, then for each frame its actual height H_t can be written as an offset r_t from this mean value: H_t = H̄ + r_t. The variance σ² of the static points' heights can be expressed in terms of the average root height H̄, the scaling factor α and the relative height l_t of the fixed vertex with respect to the root as follows:

Nσ² = Σ_{t=0}^{N-1} h_t² = Σ_{t=0}^{N-1} (H̄ + α r_t + l_t)²

This variance is to be minimized with respect to the scaling factor α, and fortunately this amounts to minimizing a simple second order polynomial in the single unknown α. Indeed:

Nσ² = α² Σ_{t=0}^{N-1} r_t² + 2α Σ_{t=0}^{N-1} r_t (H̄ + l_t) + Σ_{t=0}^{N-1} (H̄ + l_t)²
Fig. 3 Scaling of the root joint height.

As the coefficient Σ r_t² of the quadratic term is always positive, the minimal variance is reached for:

α = - ( Σ_{t=0}^{N-1} r_t (H̄ + l_t) ) / ( Σ_{t=0}^{N-1} r_t² )

4 Results

Figure 4 exhibits the foot prints left by a retargeted walk (in blue) and by the original walk (in green). One can see that nearly all footskate is corrected. The minor sliding which remains is due to the estimation of the displacement of the root joint between two steps, as described in section 3.2. Indeed, because the weight transfer does not occur exactly on an animation frame, the feet may well slide a little between two frames when switching from one footplant to the other. Figure 5 shows snapshots of an original animation and its retargeted version. One can see that the drifting effect which removes the foot sliding actually occurs, while the height of the character is modified so that its feet stick to the floor as much as possible. Due to its simplicity, our method performs very swiftly and can be added to existing systems within moments. Thus, even if it cannot accommodate all the motions one may want to use within a VR framework, it can fix many existing clips with only a minimal amount of time and effort.

5 Conclusion

In this paper, we presented a method to remove the footskating of a motion clip which takes the character itself into account in order to improve the results and speed up the computations. Compared to previous approaches, this method fully relies on the skin, which is what the final animation will display, to perform the retargeting. It can be utilized by casual users, as no interaction or expertise is required to obtain good motions. The footskate removal preserves the original movements
of the limbs because the corrections are applied to the root joint translation only. Moreover, in most cases, it has the advantage of extracting the foot plants automatically, which avoids time-consuming manual work.

Fig. 4 Two foot prints left by a character animation: on the left (in green) the original clip and on the right (in blue) the retargeted clip. The residual sliding that one may notice on the blue prints is due to the estimation of the root motion between two steps, which makes both feet move at the same time in order to get a more realistic final motion.

Last but not least, in the future we would like to extend our method so that it can be applied to any kind of motion, in particular motions such as sliding and jumping. Indeed, during the foot plant extraction, we currently assume that
the character always has one foot anchored to the ground, which is not the case in such motions.

Fig. 5 Superimposed snapshots of a walk, with the original animation in red and the corrected one in green.

Acknowledgements We would like to thank Clementine Lo for proofreading this article, Nedjma Cadi and Marlene Arevalo for the demo movie, and Autodesk for their Maya donation. This work was funded by the European project LEAPFROG IP (FP6-NMP).

References

1. Abe, Y., Liu, K., Popovic, Z.: Momentum-based parameterization of dynamic character motion. In: Proceedings of the ACM Symposium on Computer Animation. ACM Press, New York (2004)
2. h-anim.org: The humanoid animation working group. Accessed May 2006
3. Bindiganavale, R., Badler, N.I.: Motion abstraction and mapping with spatial constraints. Lecture Notes in Computer Science 1537, 70 (1998). URL citeseer.ist.psu.edu/bindiganavale98motion.html
4. Boulic, R., Le Callennec, B., Herren, M., Bay, H.: Experimenting prioritized IK for motion editing. In: Proceedings of EUROGRAPHICS (2003)
5. Catmull, E., Rom, R.: A class of local interpolating splines. Computer Aided Geometric Design (1974)
6. Choi, K., Ko, H.: Online motion retargeting. Journal of Visualization and Computer Animation 11(5) (2000)
7. Egges, A., Molet, T., Magnenat-Thalmann, N.: Personalised real-time idle motion synthesis. In: PG '04: Proceedings of the 12th Pacific Conference on Computer Graphics and Applications. IEEE Computer Society, Washington, DC, USA (2004)
8. Glardon, P., Boulic, R., Thalmann, D.: Robust on-line adaptive footplant detection and enforcement. The Visual Computer 22(3) (2006)
9. Gleicher, M.: Motion editing with spacetime constraints. In: Proceedings of the Symposium on Interactive 3D Graphics. ACM Press (1997)
10. Gleicher, M.: Retargeting motion to new characters. In: Proceedings of SIGGRAPH 1998, Computer Graphics Proceedings, Annual Conference Series. ACM Press/ACM SIGGRAPH, New York (1998)
11. The Motion Analysis website. Accessed May 2006
12. The Vicon Peak website. Accessed May 2006
13. VR Juggler: open source virtual reality tools. Accessed May 2006
14. Ikemoto, L., Arikan, O., Forsyth, D.: Knowing when to put your foot down. In: SI3D '06: Proceedings of the 2006 Symposium on Interactive 3D Graphics and Games. ACM Press, New York, NY, USA (2006)
15. Kovar, L., Gleicher, M.: Flexible automatic motion blending with registration curves. In: SCA '03: Proceedings of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation. Eurographics Association, Aire-la-Ville, Switzerland (2003)
16. Kovar, L., Gleicher, M., Pighin, F.: Motion graphs. In: SIGGRAPH '02: Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques. ACM Press, New York, NY, USA (2002)
17. Kovar, L., Schreiner, J., Gleicher, M.: Footskate cleanup for motion capture editing. In: Proceedings of the ACM Symposium on Computer Animation. ACM Press, New York (2002)
18. Lam, W., Zou, F., Komura, T.: Motion editing with data glove. In: Proceedings of the SIGCHI International Conference on Advances in Computer Entertainment Technology. ACM Press, New York (2005)
19. Lee, J., Shin, S.: A hierarchical approach to interactive motion editing for human-like figures. In: Proceedings of SIGGRAPH 1999, Computer Graphics Proceedings, Annual Conference Series. ACM Press/ACM SIGGRAPH, New York (1999)
20. Nickalls, R.: A new approach to solving the cubic: Cardan's solution revealed. The Mathematical Gazette 77 (1993)
21.
Ponder, M., Papagiannakis, G., Molet, T., Magnenat-Thalmann, N., Thalmann, D.: VHD++ development framework: towards extendible, component based VR/AR simulation engine featuring advanced virtual character technologies (2003)
22. Seo, H., Magnenat-Thalmann, N.: An example-based approach to human body manipulation. Graphical Models 66(1) (2004)
23. Shin, H., Lee, J., Shin, S., Gleicher, M.: Computer puppetry: an importance-based approach. ACM Transactions on Graphics 20(2) (2001)
24. Tak, S., Ko, H.: A physically-based motion retargeting filter. ACM Transactions on Graphics 24(1) (2005)
25. Tolani, D., Goswami, A., Badler, N.I.: Real-time inverse kinematics techniques for anthropomorphic limbs. Graphical Models 62 (2000)
26. Witkin, A., Kass, M.: Spacetime constraints. In: Proceedings of SIGGRAPH 1988, Computer Graphics Proceedings, Annual Conference Series. ACM Press/ACM SIGGRAPH, New York (1988)
Etienne Lyard received his master's degree in computer graphics, vision and robotics from the University of Grenoble. The following year he worked as a research scholar at the Multisensory Computation Lab (Rutgers University). Since then, he has been a research assistant and PhD candidate at MIRALab (University of Geneva). His research interests include physics-based modeling, real-time applications and human motion.

Nadia Magnenat-Thalmann has pioneered research into virtual humans over the last 25 years. She obtained her PhD in Quantum Physics from the University of Geneva. From 1977 to 1989, she was a Professor at the University of Montreal, where she founded the research lab MIRALab. She was elected Woman of the Year in the Grand Montreal for her pioneering work on virtual humans, and presented Virtual Marilyn at the Modern Art Museum of New York. Since 1989, she has been a Professor at the University of Geneva, where she recreated the interdisciplinary MIRALab laboratory. With her 30 PhD students, she has authored and coauthored more than 300 research papers and books in the field of modeling virtual humans. She is presently coordinating three European research projects (INTERMEDIA, HAPTEX and 3D ANATOMICAL HUMANS).
Spatial Pose Trees: Creating and Editing Motions Using a Hierarchy of Low Dimensional Control Spaces
Eurographics/ ACM SIGGRAPH Symposium on Computer Animation (2006), pp. 1 9 M.-P. Cani, J. O Brien (Editors) Spatial Pose Trees: Creating and Editing Motions Using a Hierarchy of Low Dimensional Control
C O M P U C O M P T U T E R G R A E R G R P A H I C P S Computer Animation Guoying Zhao 1 / 66 /
Computer Animation Guoying Zhao 1 / 66 Basic Elements of Computer Graphics Modeling construct the 3D model of the scene Rendering Render the 3D model, compute the color of each pixel. The color is related
Background Animation Generator: Interactive Scene Design based on Motion Graph
Background Animation Generator: Interactive Scene Design based on Motion Graph Tse-Hsien Wang National Taiwan University [email protected] Bin-Yu Chen National Taiwan University [email protected]
Introduction to Engineering System Dynamics
CHAPTER 0 Introduction to Engineering System Dynamics 0.1 INTRODUCTION The objective of an engineering analysis of a dynamic system is prediction of its behaviour or performance. Real dynamic systems are
animation shape specification as a function of time
animation 1 animation shape specification as a function of time 2 animation representation many ways to represent changes with time intent artistic motion physically-plausible motion efficiency typically
CS 4204 Computer Graphics
CS 4204 Computer Graphics Computer Animation Adapted from notes by Yong Cao Virginia Tech 1 Outline Principles of Animation Keyframe Animation Additional challenges in animation 2 Classic animation Luxo
A Learning Based Method for Super-Resolution of Low Resolution Images
A Learning Based Method for Super-Resolution of Low Resolution Images Emre Ugur June 1, 2004 [email protected] Abstract The main objective of this project is the study of a learning based method
Animating reactive motion using momentum-based inverse kinematics
COMPUTER ANIMATION AND VIRTUAL WORLDS Comp. Anim. Virtual Worlds 2005; 16: 213 223 Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/cav.101 Motion Capture and Retrieval
Principal Components of Expressive Speech Animation
Principal Components of Expressive Speech Animation Sumedha Kshirsagar, Tom Molet, Nadia Magnenat-Thalmann MIRALab CUI, University of Geneva 24 rue du General Dufour CH-1211 Geneva, Switzerland {sumedha,molet,thalmann}@miralab.unige.ch
Metrics on SO(3) and Inverse Kinematics
Mathematical Foundations of Computer Graphics and Vision Metrics on SO(3) and Inverse Kinematics Luca Ballan Institute of Visual Computing Optimization on Manifolds Descent approach d is a ascent direction
Computer Animation in Future Technologies
Computer Animation in Future Technologies Nadia Magnenat Thalmann MIRALab, University of Geneva Daniel Thalmann Computer Graphics Lab, Swiss Federal Institute of Technology Abstract In this introductory
Tutorial: Biped Character in 3D Studio Max 7, Easy Animation
Tutorial: Biped Character in 3D Studio Max 7, Easy Animation Written by: Ricardo Tangali 1. Introduction:... 3 2. Basic control in 3D Studio Max... 3 2.1. Navigating a scene:... 3 2.2. Hide and Unhide
Graphics. Computer Animation 고려대학교 컴퓨터 그래픽스 연구실. kucg.korea.ac.kr 1
Graphics Computer Animation 고려대학교 컴퓨터 그래픽스 연구실 kucg.korea.ac.kr 1 Computer Animation What is Animation? Make objects change over time according to scripted actions What is Simulation? Predict how objects
Mocap in a 3D Pipeline
East Tennessee State University Digital Commons @ East Tennessee State University Undergraduate Honors Theses 5-2014 Mocap in a 3D Pipeline Logan T. Maides Follow this and additional works at: http://dc.etsu.edu/honors
Goldsmiths, University of London. Computer Animation. Goldsmiths, University of London
Computer Animation Goldsmiths, University of London Computer Animation Introduction Computer animation is about making things move, a key part of computer graphics. In film animations are created off-line,
We can display an object on a monitor screen in three different computer-model forms: Wireframe model Surface Model Solid model
CHAPTER 4 CURVES 4.1 Introduction In order to understand the significance of curves, we should look into the types of model representations that are used in geometric modeling. Curves play a very significant
FACIAL RIGGING FOR 3D CHARACTER
FACIAL RIGGING FOR 3D CHARACTER Matahari Bhakti Nendya 1, Eko Mulyanto Yuniarno 2 and Samuel Gandang Gunanto 3 1,2 Department of Electrical Engineering, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
Computer Animation. Lecture 2. Basics of Character Animation
Computer Animation Lecture 2. Basics of Character Animation Taku Komura Overview Character Animation Posture representation Hierarchical structure of the body Joint types Translational, hinge, universal,
Motion Capture Assisted Animation: Texturing and Synthesis
Motion Capture Assisted Animation: Texturing and Synthesis Katherine Pullen Stanford University Christoph Bregler Stanford University Abstract We discuss a method for creating animations that allows the
Fundamentals of Computer Animation
Fundamentals of Computer Animation Principles of Traditional Animation How to create maximum impact page 1 How to create maximum impact Early animators worked from scratch to analyze and improve upon silence
2-1 Position, Displacement, and Distance
2-1 Position, Displacement, and Distance In describing an object s motion, we should first talk about position where is the object? A position is a vector because it has both a magnitude and a direction:
CALIBRATION OF A ROBUST 2 DOF PATH MONITORING TOOL FOR INDUSTRIAL ROBOTS AND MACHINE TOOLS BASED ON PARALLEL KINEMATICS
CALIBRATION OF A ROBUST 2 DOF PATH MONITORING TOOL FOR INDUSTRIAL ROBOTS AND MACHINE TOOLS BASED ON PARALLEL KINEMATICS E. Batzies 1, M. Kreutzer 1, D. Leucht 2, V. Welker 2, O. Zirn 1 1 Mechatronics Research
CATIA V5 Tutorials. Mechanism Design & Animation. Release 18. Nader G. Zamani. University of Windsor. Jonathan M. Weaver. University of Detroit Mercy
CATIA V5 Tutorials Mechanism Design & Animation Release 18 Nader G. Zamani University of Windsor Jonathan M. Weaver University of Detroit Mercy SDC PUBLICATIONS Schroff Development Corporation www.schroff.com
VRSPATIAL: DESIGNING SPATIAL MECHANISMS USING VIRTUAL REALITY
Proceedings of DETC 02 ASME 2002 Design Technical Conferences and Computers and Information in Conference Montreal, Canada, September 29-October 2, 2002 DETC2002/ MECH-34377 VRSPATIAL: DESIGNING SPATIAL
Emotional Communicative Body Animation for Multiple Characters
Emotional Communicative Body Animation for Multiple Characters Arjan Egges, Nadia Magnenat-Thalmann MIRALab - University of Geneva 24, Rue General-Dufour, 1211 Geneva, Switzerland Telephone: +41 22 379
Computer Animation and Visualisation. Lecture 1. Introduction
Computer Animation and Visualisation Lecture 1 Introduction 1 Today s topics Overview of the lecture Introduction to Computer Animation Introduction to Visualisation 2 Introduction (PhD in Tokyo, 2000,
3D Modeling, Animation, and Special Effects ITP 215x (2 Units)
3D Modeling, Animation, and Special Effects ITP 215x (2 Units) Fall 2008 Objective Overview of developing a 3D animation from modeling to rendering: Basics of surfacing, lighting, animation, and modeling
Motion Graphs. It is said that a picture is worth a thousand words. The same can be said for a graph.
Motion Graphs It is said that a picture is worth a thousand words. The same can be said for a graph. Once you learn to read the graphs of the motion of objects, you can tell at a glance if the object in
Simple Predictive Analytics Curtis Seare
Using Excel to Solve Business Problems: Simple Predictive Analytics Curtis Seare Copyright: Vault Analytics July 2010 Contents Section I: Background Information Why use Predictive Analytics? How to use
Binary Image Scanning Algorithm for Cane Segmentation
Binary Image Scanning Algorithm for Cane Segmentation Ricardo D. C. Marin Department of Computer Science University Of Canterbury Canterbury, Christchurch [email protected] Tom
Automatic Labeling of Lane Markings for Autonomous Vehicles
Automatic Labeling of Lane Markings for Autonomous Vehicles Jeffrey Kiske Stanford University 450 Serra Mall, Stanford, CA 94305 [email protected] 1. Introduction As autonomous vehicles become more popular,
How To Develop Software
Software Engineering Prof. N.L. Sarda Computer Science & Engineering Indian Institute of Technology, Bombay Lecture-4 Overview of Phases (Part - II) We studied the problem definition phase, with which
Modelling 3D Avatar for Virtual Try on
Modelling 3D Avatar for Virtual Try on NADIA MAGNENAT THALMANN DIRECTOR MIRALAB UNIVERSITY OF GENEVA DIRECTOR INSTITUTE FOR MEDIA INNOVATION, NTU, SINGAPORE WWW.MIRALAB.CH/ Creating Digital Humans Vertex
Maya 2014 Basic Animation & The Graph Editor
Maya 2014 Basic Animation & The Graph Editor When you set a Keyframe (or Key), you assign a value to an object s attribute (for example, translate, rotate, scale, color) at a specific time. Most animation
Subspace Analysis and Optimization for AAM Based Face Alignment
Subspace Analysis and Optimization for AAM Based Face Alignment Ming Zhao Chun Chen College of Computer Science Zhejiang University Hangzhou, 310027, P.R.China [email protected] Stan Z. Li Microsoft
Blender Notes. Introduction to Digital Modelling and Animation in Design Blender Tutorial - week 9 The Game Engine
Blender Notes Introduction to Digital Modelling and Animation in Design Blender Tutorial - week 9 The Game Engine The Blender Game Engine This week we will have an introduction to the Game Engine build
If A is divided by B the result is 2/3. If B is divided by C the result is 4/7. What is the result if A is divided by C?
Problem 3 If A is divided by B the result is 2/3. If B is divided by C the result is 4/7. What is the result if A is divided by C? Suggested Questions to ask students about Problem 3 The key to this question
COMPUTING CLOUD MOTION USING A CORRELATION RELAXATION ALGORITHM Improving Estimation by Exploiting Problem Knowledge Q. X. WU
COMPUTING CLOUD MOTION USING A CORRELATION RELAXATION ALGORITHM Improving Estimation by Exploiting Problem Knowledge Q. X. WU Image Processing Group, Landcare Research New Zealand P.O. Box 38491, Wellington
Projectile motion simulator. http://www.walter-fendt.de/ph11e/projectile.htm
More Chapter 3 Projectile motion simulator http://www.walter-fendt.de/ph11e/projectile.htm The equations of motion for constant acceleration from chapter 2 are valid separately for both motion in the x
2After completing this chapter you should be able to
After completing this chapter you should be able to solve problems involving motion in a straight line with constant acceleration model an object moving vertically under gravity understand distance time
Character Animation from 2D Pictures and 3D Motion Data ALEXANDER HORNUNG, ELLEN DEKKERS, and LEIF KOBBELT RWTH-Aachen University
Character Animation from 2D Pictures and 3D Motion Data ALEXANDER HORNUNG, ELLEN DEKKERS, and LEIF KOBBELT RWTH-Aachen University Presented by: Harish CS-525 First presentation Abstract This article presents
mouse (or the option key on Macintosh) and move the mouse. You should see that you are able to zoom into and out of the scene.
A Ball in a Box 1 1 Overview VPython is a programming language that is easy to learn and is well suited to creating 3D interactive models of physical systems. VPython has three components that you will
6 EXTENDING ALGEBRA. 6.0 Introduction. 6.1 The cubic equation. Objectives
6 EXTENDING ALGEBRA Chapter 6 Extending Algebra Objectives After studying this chapter you should understand techniques whereby equations of cubic degree and higher can be solved; be able to factorise
Introduction to Computer Graphics Marie-Paule Cani & Estelle Duveau
Introduction to Computer Graphics Marie-Paule Cani & Estelle Duveau 04/02 Introduction & projective rendering 11/02 Prodedural modeling, Interactive modeling with parametric surfaces 25/02 Introduction
2.5 Physically-based Animation
2.5 Physically-based Animation 320491: Advanced Graphics - Chapter 2 74 Physically-based animation Morphing allowed us to animate between two known states. Typically, only one state of an object is known.
Path Tracking for a Miniature Robot
Path Tracking for a Miniature Robot By Martin Lundgren Excerpt from Master s thesis 003 Supervisor: Thomas Hellström Department of Computing Science Umeå University Sweden 1 Path Tracking Path tracking
ACTUATOR DESIGN FOR ARC WELDING ROBOT
ACTUATOR DESIGN FOR ARC WELDING ROBOT 1 Anurag Verma, 2 M. M. Gor* 1 G.H Patel College of Engineering & Technology, V.V.Nagar-388120, Gujarat, India 2 Parul Institute of Engineering & Technology, Limda-391760,
An Instructional Aid System for Driving Schools Based on Visual Simulation
An Instructional Aid System for Driving Schools Based on Visual Simulation Salvador Bayarri, Rafael Garcia, Pedro Valero, Ignacio Pareja, Institute of Traffic and Road Safety (INTRAS), Marcos Fernandez
Introduction to Computer Graphics. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2012
CSE 167: Introduction to Computer Graphics Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2012 Today Course organization Course overview 2 Course Staff Instructor Jürgen Schulze,
Kinematical Animation. [email protected] 2013-14
Kinematical Animation 2013-14 3D animation in CG Goal : capture visual attention Motion of characters Believable Expressive Realism? Controllability Limits of purely physical simulation : - little interactivity
Animations in Creo 3.0
Animations in Creo 3.0 ME170 Part I. Introduction & Outline Animations provide useful demonstrations and analyses of a mechanism's motion. This document will present two ways to create a motion animation
INTRUSION PREVENTION AND EXPERT SYSTEMS
INTRUSION PREVENTION AND EXPERT SYSTEMS By Avi Chesla [email protected] Introduction Over the past few years, the market has developed new expectations from the security industry, especially from the intrusion
In order to describe motion you need to describe the following properties.
Chapter 2 One Dimensional Kinematics How would you describe the following motion? Ex: random 1-D path speeding up and slowing down In order to describe motion you need to describe the following properties.
DINAMIC AND STATIC CENTRE OF PRESSURE MEASUREMENT ON THE FORCEPLATE. F. R. Soha, I. A. Szabó, M. Budai. Abstract
ACTA PHYSICA DEBRECINA XLVI, 143 (2012) DINAMIC AND STATIC CENTRE OF PRESSURE MEASUREMENT ON THE FORCEPLATE F. R. Soha, I. A. Szabó, M. Budai University of Debrecen, Department of Solid State Physics Abstract
Using Autodesk HumanIK Middleware to Enhance Character Animation for Games
Autodesk HumanIK 4.5 Using Autodesk HumanIK Middleware to Enhance Character Animation for Games Unlock your potential for creating more believable characters and more engaging, innovative gameplay with
Time Domain and Frequency Domain Techniques For Multi Shaker Time Waveform Replication
Time Domain and Frequency Domain Techniques For Multi Shaker Time Waveform Replication Thomas Reilly Data Physics Corporation 1741 Technology Drive, Suite 260 San Jose, CA 95110 (408) 216-8440 This paper
Adequate Theory of Oscillator: A Prelude to Verification of Classical Mechanics Part 2
International Letters of Chemistry, Physics and Astronomy Online: 213-9-19 ISSN: 2299-3843, Vol. 3, pp 1-1 doi:1.1852/www.scipress.com/ilcpa.3.1 212 SciPress Ltd., Switzerland Adequate Theory of Oscillator:
Template-based Eye and Mouth Detection for 3D Video Conferencing
Template-based Eye and Mouth Detection for 3D Video Conferencing Jürgen Rurainsky and Peter Eisert Fraunhofer Institute for Telecommunications - Heinrich-Hertz-Institute, Image Processing Department, Einsteinufer
MMGD0203 Multimedia Design MMGD0203 MULTIMEDIA DESIGN. Chapter 3 Graphics and Animations
MMGD0203 MULTIMEDIA DESIGN Chapter 3 Graphics and Animations 1 Topics: Definition of Graphics Why use Graphics? Graphics Categories Graphics Qualities File Formats Types of Graphics Graphic File Size Introduction
Chapter 1. Animation. 1.1 Computer animation
Chapter 1 Animation "Animation can explain whatever the mind of man can conceive. This facility makes it the most versatile and explicit means of communication yet devised for quick mass appreciation."
Constrained curve and surface fitting
Constrained curve and surface fitting Simon Flöry FSP-Meeting Strobl (June 20, 2006), [email protected], Vienna University of Technology Overview Introduction Motivation, Overview, Problem
Effective Use of Android Sensors Based on Visualization of Sensor Information
, pp.299-308 http://dx.doi.org/10.14257/ijmue.2015.10.9.31 Effective Use of Android Sensors Based on Visualization of Sensor Information Young Jae Lee Faculty of Smartmedia, Jeonju University, 303 Cheonjam-ro,
Overview of the Adobe Flash Professional CS6 workspace
Overview of the Adobe Flash Professional CS6 workspace In this guide, you learn how to do the following: Identify the elements of the Adobe Flash Professional CS6 workspace Customize the layout of the
An Interactive method to control Computer Animation in an intuitive way.
An Interactive method to control Computer Animation in an intuitive way. Andrea Piscitello University of Illinois at Chicago 1200 W Harrison St, Chicago, IL [email protected] Ettore Trainiti University of
Human Skeletal and Muscle Deformation Animation Using Motion Capture Data
Human Skeletal and Muscle Deformation Animation Using Motion Capture Data Ali Orkan Bayer Department of Computer Engineering, Middle East Technical University 06531 Ankara, Turkey [email protected]
A Reliability Point and Kalman Filter-based Vehicle Tracking Technique
A Reliability Point and Kalman Filter-based Vehicle Tracing Technique Soo Siang Teoh and Thomas Bräunl Abstract This paper introduces a technique for tracing the movement of vehicles in consecutive video
Virtual Humans on Stage
Chapter XX Virtual Humans on Stage Nadia Magnenat-Thalmann Laurent Moccozet MIRALab, Centre Universitaire d Informatique Université de Genève [email protected] [email protected] http://www.miralab.unige.ch/
A Short Tour of the Predictive Modeling Process
Chapter 2 A Short Tour of the Predictive Modeling Process Before diving in to the formal components of model building, we present a simple example that illustrates the broad concepts of model building.
Goal Seeking in Solid Edge
Goal Seeking in Solid Edge White Paper Goal Seeking in Solid Edge software offers a fast built-in method for solving complex engineering problems. By drawing and dimensioning 2D free-body diagrams, Goal
Automated Semi Procedural Animation for Character Locomotion
Master s Thesis Department of Information and Media Studies Aarhus University Automated Semi Procedural Animation for Character Locomotion Rune Skovbo Johansen (20020182) May 25, 2009 Abstract This thesis
the points are called control points approximating curve
Chapter 4 Spline Curves A spline curve is a mathematical representation for which it is easy to build an interface that will allow a user to design and control the shape of complex curves and surfaces.
Spatial Relationship Preserving Character Motion Adaptation
Spatial Relationship Preserving Character Motion Adaptation Edmond S.L. Ho Taku Komura The University of Edinburgh Chiew-Lan Tai Hong Kong University of Science and Technology Figure 1: Our system can
(Refer Slide Time: 1:42)
Introduction to Computer Graphics Dr. Prem Kalra Department of Computer Science and Engineering Indian Institute of Technology, Delhi Lecture - 10 Curves So today we are going to have a new topic. So far
Today. Keyframing. Procedural Animation. Physically-Based Animation. Articulated Models. Computer Animation & Particle Systems
Today Computer Animation & Particle Systems Some slides courtesy of Jovan Popovic & Ronen Barzel How do we specify or generate motion? Keyframing Procedural Animation Physically-Based Animation Forward
A method of generating free-route walk-through animation using vehicle-borne video image
A method of generating free-route walk-through animation using vehicle-borne video image Jun KUMAGAI* Ryosuke SHIBASAKI* *Graduate School of Frontier Sciences, Shibasaki lab. University of Tokyo 4-6-1
Sensor Modeling for a Walking Robot Simulation. 1 Introduction
Sensor Modeling for a Walking Robot Simulation L. France, A. Girault, J-D. Gascuel, B. Espiau INRIA, Grenoble, FRANCE imagis, GRAVIR/IMAG, Grenoble, FRANCE Abstract This paper proposes models of short-range
EM Clustering Approach for Multi-Dimensional Analysis of Big Data Set
EM Clustering Approach for Multi-Dimensional Analysis of Big Data Set Amhmed A. Bhih School of Electrical and Electronic Engineering Princy Johnson School of Electrical and Electronic Engineering Martin
Learning an Inverse Rig Mapping for Character Animation
Learning an Inverse Rig Mapping for Character Animation Daniel Holden Jun Saito Taku Komura University of Edinburgh Marza Animation Planet University of Edinburgh Figure 1: Results of our method: animation
Off-line Model Simplification for Interactive Rigid Body Dynamics Simulations Satyandra K. Gupta University of Maryland, College Park
NSF GRANT # 0727380 NSF PROGRAM NAME: Engineering Design Off-line Model Simplification for Interactive Rigid Body Dynamics Simulations Satyandra K. Gupta University of Maryland, College Park Atul Thakur
Computer Animation. CS 445/645 Fall 2001
Computer Animation CS 445/645 Fall 2001 Let s talk about computer animation Must generate 30 frames per second of animation (24 fps for film) Issues to consider: Is the goal to replace or augment the artist?
Structural Axial, Shear and Bending Moments
Structural Axial, Shear and Bending Moments Positive Internal Forces Acting Recall from mechanics of materials that the internal forces P (generic axial), V (shear) and M (moment) represent resultants
DESIGN, IMPLEMENTATION, AND COOPERATIVE COEVOLUTION OF AN AUTONOMOUS/TELEOPERATED CONTROL SYSTEM FOR A SERPENTINE ROBOTIC MANIPULATOR
Proceedings of the American Nuclear Society Ninth Topical Meeting on Robotics and Remote Systems, Seattle Washington, March 2001. DESIGN, IMPLEMENTATION, AND COOPERATIVE COEVOLUTION OF AN AUTONOMOUS/TELEOPERATED
