Context Based Variation of Character Animation by Physical Simulation

Gabriel Notman, School of Games Computing & Creative Technologies, The University of Bolton, Deane Rd., Bolton, BL3 5AB, United Kingdom. G.Notman@bolton.ac.uk
Philip Carlisle, School of Games Computing & Creative Technologies, The University of Bolton, Deane Rd., Bolton, BL3 5AB, United Kingdom. P.A.Carlisle@bolton.ac.uk
Roger Jackson, School of Games Computing & Creative Technologies, The University of Bolton, Deane Rd., Bolton, BL3 5AB, United Kingdom. R.Jackson@bolton.ac.uk

ABSTRACT
We present a review of existing methods for creating physically-based character animation and argue that the main benefit of these methods is the dynamic variation they add to defined motions. We also outline our proposed research into the dynamic stylization of character animation through physical simulation.

Categories and Subject Descriptors
I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling - Physically based modeling; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Animation; I.6.8 [Computer Graphics]: Types of Simulation - Animation.

General Terms
Algorithms, Theory.

Keywords
Character animation, physically-based animation, motion control.

1. INTRODUCTION
Physically-based simulation can be viewed as a powerful tool for animation. The complex motion and interaction of simulated objects can be produced automatically, creating animations which may be too difficult or time-consuming to produce manually. Physically-based character animation therefore presents several potential benefits for real-time systems such as video games. Ragdoll animation (see Lih-Hern et al. [15] for discussion) demonstrates the possibility of automatically creating dynamic animations (primarily falling) for a passive character model. By adding actuators which apply internal forces to the simulated model, a range of human motions can be produced. Because a description of the desired motion is required, these animations can no longer be considered automatic. We therefore argue that dynamic variation of the original animation is the main benefit of these physically-based methods; the variations are the result of the model reacting to external forces and to the context of the environment.

This paper provides a review of existing methods in this area and presents the direction of our research. The remainder of the paper is organized as follows: Section 2 reviews existing methods, Section 3 discusses these methods and their application to real-time systems, and Section 4 describes the direction of our research.

2. BACKGROUND
2.1 The Dynamics Model
A simplified dynamics representation of the human body, based on a rigid linked model, is used to simulate the motion and interaction (collisions) of the character. Armstrong & Green [2] is the earliest example we know of that uses this representation for human character animation. The body segments are represented by rigid bodies whose physical properties are based on measured values [9]. These segments are connected by joints which limit movement to between one and three angular degrees of freedom. The model can then be used as the skeletal basis, in methods such as matrix skinning [14], to drive the movement of the complex visual geometry. Physically-based animation of the model requires the addition of actuators or motors. Torque-based proportional-derivative servos (see Weinstein et al. [29] for discussion) have been widely used; by controlling and coordinating the behaviour of these motors, a large range of complex human motions can be produced.
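As a concrete illustration of this actuation model, the following minimal sketch computes a tracking torque for each joint with a proportional-derivative servo. The joint representation, names and gains are our own illustrative assumptions and are not taken from any of the cited systems.

```python
# Minimal sketch of a torque-based proportional-derivative (PD) servo,
# as commonly used to actuate the joints of a rigid linked model.
# All names and gain values here are illustrative assumptions.

def pd_torque(theta, theta_dot, theta_target, kp, kd):
    """Torque driving a joint angle towards a target pose.

    theta        -- current joint angle (radians)
    theta_dot    -- current joint angular velocity (radians/s)
    theta_target -- desired joint angle from the animation or controller
    kp, kd       -- proportional and derivative gains
    """
    return kp * (theta_target - theta) - kd * theta_dot


def actuate(joints, targets):
    """Compute a torque for every joint of the simulated model."""
    torques = {}
    for name, state in joints.items():
        torques[name] = pd_torque(state["angle"], state["velocity"],
                                  targets[name], state["kp"], state["kd"])
    return torques  # applied to the rigid bodies by the physics engine each step
```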
2.2 Constraint Optimization Methods
The space-time constraints method developed by Witkin & Kass [30] treated physically-based animation as a trajectory, or constraint, optimization problem. The method requires a description of the motion objectives, of the model and its actuators, and of any environmental constraints. A search is then conducted for a valid trajectory which achieves the motion objectives while minimizing constraint violation.
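In its general form, the optimization can be summarized as follows. This is a simplified restatement for illustration; the notation is chosen here and is not taken from any one of the cited formulations.

```latex
% Simplified space-time constraints formulation (illustrative notation).
% q(t): generalized coordinates of the model, \tau(t): actuator torques.
\begin{aligned}
\min_{q(t),\,\tau(t)}\quad & \int_{t_0}^{t_1} E\big(q(t),\dot q(t),\tau(t)\big)\,dt
  && \text{optimization criterion, e.g. } E=\lVert\tau\rVert^{2}\\
\text{subject to}\quad & M(q)\,\ddot q + C(q,\dot q) = \tau + F_{\text{ext}}
  && \text{equations of motion satisfied throughout}\\
& C_i\big(q(t_i)\big)=0 && \text{pose, contact and environment constraints.}
\end{aligned}
```

The optimization criteria discussed below correspond to different choices of the integrand E.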
Finer control of the resulting motion trajectory can be achieved by adding further constraints in the form of optimization criteria. The criterion of minimum torque was used by Lo & Metaxas [16] and Rose et al. [24], while Komura and colleagues [12] used a more complex criterion based on muscle activation levels. These optimization criteria direct the search towards solutions which minimize energy expenditure, on the argument that such results are more natural and realistic. Other work in this area includes that of Cohen [3], who extended the method to space-time windows, allowing the constraints and optimization criteria to be localized. Ngo & Marks [18] used a genetic algorithm to search for the optimal trajectory. Rose and colleagues [24] applied these methods to create animation transitions and cycles. Popovic [23] used them to edit and adapt motion captured animations. Komura and colleagues [13] combined these methods with a detailed musculoskeletal model to add physiological constraints, and demonstrated the effects of fatigue and injury.

2.3 Controller Methods
Controller-based methods may be categorized into three main approaches: finite-state machines, artificial neural networks, and motion-based controllers.

2.3.1 Finite-State Machine Controllers
Stewart & Cremer [27] created a finite-state machine based controller which was used to animate a biped model walking and climbing stairs. This approach has since been applied to recreate a range of human motions, including platform diving [31], human athletics (cycling, running and vaulting) [9], a virtual stuntman with a range of movements [5], and swimming [33]. The finite-state machine method is a procedural, or algorithmic, approach to the control problem, and the resulting controllers are sometimes referred to as behaviour controllers. The complex motion of a behaviour is broken down into a collection of simple motion tasks. Each state encapsulates a single motion task and contains either specific motor instructions [27] or a desired pose to achieve [5, 33]. Transitions between states are triggered either by timing or by the model reaching the desired positional state. By completing a series of these simple motion tasks, complex motions can be achieved. These behaviour controllers may also accept high-level parameters for further control of the motion, such as the velocity and direction parameters used by Hodgins and colleagues [9] for their running controller. Other recent work in this area includes controller frameworks and representations [6, 19, 25], and the work of Sok and colleagues [26], who developed controllers which learn from motion capture animations.
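A minimal sketch of such a state-machine controller is given below. The states, poses and transition rules are hypothetical, intended only to illustrate how simple motion tasks can be chained into a behaviour.

```python
# Minimal sketch of a finite-state machine "behaviour controller".
# Each state holds a desired pose (joint targets) and a transition rule:
# either a timeout or a condition on the model's current state.
# All poses, thresholds and state names below are illustrative assumptions.

class MotionState:
    def __init__(self, name, target_pose, duration=None, done=None):
        self.name = name
        self.target_pose = target_pose   # desired joint angles for this task
        self.duration = duration         # time-based transition, if set
        self.done = done                 # predicate on the model, if set

class BehaviourController:
    def __init__(self, states):
        self.states = states
        self.index = 0
        self.elapsed = 0.0

    def update(self, model, dt):
        """Return the pose the actuators should track this time step."""
        state = self.states[self.index]
        self.elapsed += dt
        timed_out = state.duration is not None and self.elapsed >= state.duration
        achieved = state.done is not None and state.done(model)
        if timed_out or achieved:
            self.index = (self.index + 1) % len(self.states)  # cycle the behaviour
            self.elapsed = 0.0
        return self.states[self.index].target_pose

# Example: a two-state stepping cycle (hypothetical poses and sensors).
walk = BehaviourController([
    MotionState("left_swing",  {"l_hip": 0.4, "r_hip": -0.2},
                done=lambda m: m["l_foot_contact"]),
    MotionState("right_swing", {"l_hip": -0.2, "r_hip": 0.4},
                done=lambda m: m["r_foot_contact"]),
])
```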
2.3.2 Artificial Neural Network Controllers
Artificial neural networks have also been used to control physically-based character models; this method can be viewed as an attempt to model the control function of the central nervous system. Reil & Husbands [22] used a recurrent neural network to control bipedal walking. Hase and colleagues [7] developed controllers for walking and running and demonstrated variations by modifying the underlying physical model. Ok & Kim [20] used evolutionary methods to develop the structure of their neural network and applied their controller to bipedal walking. These neural network controllers take sensory input describing the current state of the model and provide outputs to the motors in order to control its movement. The relationship between inputs and outputs is determined by the parameters of the neural network.

The controllers are trained by searching for viable parameters using machine learning methods (the examples given here use evolutionary methods). An accurate description of the desired motion, or a set of performance criteria, is critical to this training process, as it defines what the controller is trying to achieve. Controllers are scored using a fitness function which provides a numerical measure of performance based on this description and these criteria. Reil & Husbands [22] used the distance travelled and the maintenance of a minimum height for the centre of mass. The muscle model used by Hase and colleagues [7] allowed factors such as energy consumption and smoothness of movement to be considered. Ok & Kim [20] used a two-stage approach: in the first stage the distance travelled, the number of steps taken, and the model's stability were used; in the second, more complex criteria based on energy consumption, muscle fatigue and joint stress were used to develop the controllers further. Incomplete or poorly defined criteria may produce controllers which score highly but do not achieve the objectives in a desirable fashion. The conditions under which the controllers are evaluated are also important, as performance under other conditions is not guaranteed.

2.3.3 Motion-Based Controllers
A more recently developed method uses existing animation data, from motion capture or key-framed sources, to drive the physically-based model. This allows highly detailed animations, which may be difficult to define procedurally, to be combined with the benefits of physically-based animation. Zordan and colleagues [36, 35] demonstrated dynamic and reactive upper-body animations using motion capture data with inverse kinematic constraints. Oshita & Makinouchi [21] added comfort and static balance control. Komura and colleagues [11] focused on balancing during locomotion under external disturbance forces. Wrotek and colleagues [32] used a root spring for balance and world-space torques for greater stability. Kim and colleagues [10] used a global root controller to control the balance and movement of the entire model.

In order to track the specified motion accurately, the motor controllers must calculate the torques to be applied to the joints. This problem is complicated by the varying moments of inertia caused by the motion of other joints within the model. Allen and colleagues [1] addressed this by continually updating the motor parameters based on the current state of the model. An active balance control strategy must also be applied in order to maintain stability: Sok and colleagues [26] argue that, because of the dynamics simplifications used by the simulated model, simply tracking the recorded motion will result in instability. Strategies which work well during static stance may at times conflict with locomotive motions, due to the inherently unstable nature of bipedal locomotion. The method developed by da Silva and colleagues [4] attempts to preserve the style of the original motion while maintaining stability.
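The sketch below illustrates the tracking problem in its simplest form: per-joint PD tracking of a recorded reference pose, with gains scaled by an estimate of the inertia each joint must move, and a crude balance correction applied at the support ankle. The gain scaling, the data layout and the centre-of-mass correction are illustrative simplifications of our own, not a reproduction of any of the cited methods.

```python
# Illustrative sketch of a motion-based controller step: track a recorded
# reference pose with PD torques, scale gains by an inertia estimate, and
# add a crude balance correction at the support ankle. The keys and
# constants are assumptions made for illustration only.

def tracking_torques(joints, reference_pose, kp=300.0, kd=30.0):
    torques = {}
    for name, j in joints.items():
        # Scale gains by the inertia of the chain the joint drives, so that
        # heavier limbs receive proportionally stronger corrections.
        scale = j["outboard_inertia"]
        err = reference_pose[name] - j["angle"]
        torques[name] = scale * (kp * err - kd * j["velocity"])
    return torques

def balance_correction(com_offset, com_velocity, kb=800.0, db=80.0):
    """Extra ankle torque pushing the centre of mass back over the support
    (sign convention is arbitrary in this sketch)."""
    return -(kb * com_offset + db * com_velocity)

def control_step(joints, reference_pose, com_offset, com_velocity):
    torques = tracking_torques(joints, reference_pose)
    torques["support_ankle"] = torques.get("support_ankle", 0.0) \
        + balance_correction(com_offset, com_velocity)
    return torques
```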
Actively reacting to external forces while maintaining the style and objectives of the original motion presents further challenges. If the model is subjected to a large enough external force, it may not be possible to react realistically and maintain stability, and the model will lose balance and fall. Continuing with the original motion is then unlikely to be possible, so strategies for dealing with these failure conditions must be included. Both Zordan and colleagues [34] and Tang and colleagues [28] used forward simulation to predict the resulting state of the model after such events; a library of available motion clips is then searched to find an appropriate transition back to the original motion. Although their systems were a mix of kinematic and physically-based animation, these methods should be applicable to purely physical systems.

3. DISCUSSION
Which method is most suited to real-time applications and video games remains open for debate. We consider two main factors here. First, the method must be dynamic, so that the animation reacts to external forces and constraints. Secondly, we consider the amount of detailed control over the style of the motion to be important.

The constraint optimization methods, we argue, are least suited for this purpose, as their solutions are based on a set of predefined constraints and so are not dynamic. The constraints acting on the character model may change constantly depending on the current context of the environment, so applying these methods would require repeated searches for new solutions as the constraints change. Given the high cost of these searches, we do not believe this is a viable approach. Additionally, artistic control of the motion may be difficult to obtain through the definition of optimization functions.

Each of the three controller-based methods can be considered dynamic, in that they are capable of incorporating mechanisms for reacting to external forces and constraints. This leaves the question of how much artistic control over the style of motion each method allows. Both the finite-state machine and neural network controllers require some procedural definition of the desired motion. The motion style of the neural network controllers is defined by the fitness function used during training; as with the criteria used in constraint optimization, precise control of style may be difficult to achieve. The procedural definition required by finite-state machine controllers differs from established animation authoring methods, and we argue that animation artists may find authoring motion in this way challenging. The motion-based controller method therefore seems the most promising, as it allows the animation to be defined using existing, well-developed tools which give the animation artist the level of control they are accustomed to.

4. PROPOSED WORK
In our work we aim to create variations in a character's movement style by modifying the underlying physical and physiological properties of the model and its actuators. Several existing examples demonstrate this concept. Komura and colleagues [12] applied an adult walking motion to a model of a child with different segment properties and weaker muscles, which resulted in a new and unique walking motion.
Their later work [13] demonstrated the effects of fatigue and injury. Neff & Fiume [17] modelled muscle tension and relaxation to stylize motions, citing the previous use of these qualities in traditional animation. Hase and colleagues [7] developed controllers for walking and running and demonstrated variations by using several different physical models. The key difference between these existing methods and our proposed work is that we seek to develop a motion-based controller which can demonstrate the results of such modifications in real time.

Other relevant work is that of Hodgins and Pollard [8], who adapted several controllers for use with varied physical models. Their method modified the motor parameters, which had been tuned for the original model, to those required by the new model, using a set of scaling rules followed by a computationally expensive search for final tuning. However, the more recent work of Allen and colleagues [1] demonstrated that tuning the motor parameters to a specific model is not required, as they can be calculated accurately in real time.

To modify the movement style we look to develop a more muscle-based motor controller. This motor controller must still be capable of accurately tracking a specified motion, but will take additional parameters which act as constraints on the motor's function, creating variation in the original motion. The Hill muscle model (used by Komura et al. [12]) and the antagonistic model of Neff & Fiume [17] may serve as a good basis for this motor controller. The aim of the final system is to provide the animation artist with a toolset for creating unique and expressive variations of existing motions. The toolset will allow these motor-function parameters (and other physical properties) to be modified, and will demonstrate the effects of the changes in real time. The effects of injury, fatigue, slow or weak muscle response, tension and movement exaggeration are some of the areas we look to explore through this work.
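To make the idea concrete, the sketch below extends a PD-style actuator with muscle-like parameters (maximum strength, activation rate and fatigue) that constrain its output. The specific parameterization is our own illustrative assumption; it is not the Hill model or the antagonistic model themselves.

```python
# Illustrative sketch of the proposed idea: a motor that tracks a target
# pose but whose output is constrained by muscle-like parameters
# (maximum torque, activation rate, fatigue). This parameterization is an
# assumption made for illustration, not the Hill or antagonistic model.

class MuscleLikeMotor:
    def __init__(self, kp, kd, max_torque, activation_rate, fatigue_rate=0.0):
        self.kp, self.kd = kp, kd
        self.max_torque = max_torque            # strength limit
        self.activation_rate = activation_rate  # how quickly force can build up
        self.fatigue_rate = fatigue_rate        # strength lost per unit of effort
        self.activation = 0.0
        self.fatigue = 0.0

    def torque(self, angle, velocity, target, dt):
        desired = self.kp * (target - angle) - self.kd * velocity
        # Limited activation rate: weak or slow muscles lag behind the target.
        self.activation += (desired - self.activation) * min(1.0, self.activation_rate * dt)
        # Fatigue gradually reduces the available strength.
        strength = self.max_torque * max(0.0, 1.0 - self.fatigue)
        out = max(-strength, min(strength, self.activation))
        self.fatigue += self.fatigue_rate * abs(out) * dt
        return out
```

Lowering activation_rate or max_torque, or letting fatigue accumulate, would cause the tracked motion to lag and soften, which is the kind of context-dependent variation the proposed toolset is intended to expose to the artist.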
5. ACKNOWLEDGMENTS
The research reported here forms part of the author's wider PhD research funded by The University of Bolton. The author acknowledges the support of his PhD supervisors, Mr. Philip A. Carlisle (Director of Studies) and Dr. Roger G. Jackson.

6. REFERENCES
[1] Allen, B., Chu, D., Shapiro, A., and Faloutsos, P. 2007. On the beat!: timing and tension for dynamic characters. In Proceedings of the 2007 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (San Diego, California, August 2-4, 2007). Eurographics Association, Aire-la-Ville, Switzerland, 239-247.
[2] Armstrong, W. W. and Green, M. 1985. The dynamics of articulated rigid bodies for purposes of animation. Visual Computer, 1, 4, 231-240.
[3] Cohen, M. F. 1992. Interactive spacetime control for animation. In Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques, J. J. Thomas, Ed. SIGGRAPH '92. ACM, New York, NY, 293-302. DOI= http://doi.acm.org/10.1145/133994.134083
[4] da Silva, M., Abe, Y., and Popović, J. 2008. Interactive simulation of stylized human locomotion. ACM Trans. Graph. 27, 3, 1-10. DOI= http://doi.acm.org/10.1145/1360612.1360681
[5] Faloutsos, P., van de Panne, M., and Terzopoulos, D. 2001. The virtual stuntman: dynamic characters with a repertoire of autonomous motor skills. Computers and Graphics, 25, 6, 933-953.
[6] Faloutsos, P., van de Panne, M., and Terzopoulos, D. 2001. Composable controllers for physics-based character animation. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '01. ACM, New York, NY, 251-260. DOI= http://doi.acm.org/10.1145/383259.383287
[7] Hase, K., Miyashita, K., Ok, S., and Arakawa, Y. 2003. Human gait simulation with a neuromusculoskeletal model and evolutionary computation. Journal of Visualization and Computer Animation, 14, 2, 73-92.
[8] Hodgins, J. K. and Pollard, N. S. 1997. Adapting simulated behaviors for new characters. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '97. ACM Press/Addison-Wesley Publishing Co., New York, NY, 153-162. DOI= http://doi.acm.org/10.1145/258734.258822
[9] Hodgins, J. K., Wooten, W. L., Brogan, D. C., and O'Brien, J. F. 1995. Animating human athletics. In Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, S. G. Mair and R. Cook, Eds. SIGGRAPH '95. ACM, New York, NY, 71-78. DOI= http://doi.acm.org/10.1145/218380.218414
[10] Kim, S., Kim, M., and Park, M. 2006. Combining motion capture data with PD controllers for human-like animations. SICE-ICASE International Joint Conference, 904-907.
[11] Komura, T., Leung, H., and Kuffner, J. 2004. Animating reactive motions for biped locomotion. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology (Hong Kong, November 10-12, 2004). VRST '04. ACM, New York, NY, 32-40. DOI= http://doi.acm.org/10.1145/1077534.1077542
[12] Komura, T., Shinagawa, Y., and Kunii, T. L. 1997. A muscle-based feed-forward controller of the human body. Computer Graphics Forum, 16, 3, C165-C176.
[13] Komura, T., Shinagawa, Y., and Kunii, T. L. 2000. Creating and retargeting motion by the musculoskeletal human body model. Visual Computer, 16, 5, 254-270.
[14] Lewis, J. P., Cordner, M., and Fong, N. 2000. Pose space deformation: a unified approach to shape interpolation and skeleton-driven deformation. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '00. ACM Press/Addison-Wesley Publishing Co., New York, NY, 165-172. DOI= http://doi.acm.org/10.1145/344779.344862
[15] Lih-Hern, P., Siang, T. Y., Foo, W. C., and Kuan, W. L. 2006. ODECAL, a flexible open source rag doll simulation engine. Lecture Notes in Computer Science, 3942, 680-687.
[16] Lo, J. and Metaxas, D. 1999. Recursive dynamics and optimal control techniques for human motion planning. In Computer Animation Conf. Proc., 220-234.
[17] Neff, M. and Fiume, E. 2002. Modeling tension and relaxation for computer animation. In Computer Animation Conf. Proc., 81-88.
[18] Ngo, J. T. and Marks, J. 1993. Spacetime constraints revisited. In Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '93. ACM, New York, NY, 343-350. DOI= http://doi.acm.org/10.1145/166117.166160
[19] Nunes, R. F., Vidal, C. A., and Cavalcante-Neto, J. B. 2007. A flexible representation of controllers for physically-based animation of virtual humans. In Proceedings of the 2007 ACM Symposium on Applied Computing (Seoul, Korea, March 11-15, 2007). SAC '07. ACM, New York, NY, 30-36. DOI= http://doi.acm.org/10.1145/1244002.1244010
[20] Ok, S. and Kim, D. 2005. Evolution of the CPG with sensory feedback for bipedal locomotion. Lecture Notes in Computer Science, 3611, 2, 714-726.
[21] Oshita, M. and Makinouchi, A. 2001. A dynamic motion control technique for human-like articulated figures. Computer Graphics Forum, 20, 3, C/192-C/202.
[22] Reil, T. and Husbands, P. 2002. Evolution of central pattern generators for bipedal walking in a real-time physics environment. IEEE Transactions on Evolutionary Computation, 6, 2, 159-168.
[23] Popovic, Z. 2000. Editing dynamic properties of captured human motion. In Proceedings of the IEEE International Conference on Robotics and Automation, 1, 670-675.
[24] Rose, C., Guenter, B., Bodenheimer, B., and Cohen, M. F. 1996. Efficient generation of motion transitions using spacetime constraints. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '96. ACM, New York, NY, 147-154. DOI= http://doi.acm.org/10.1145/237170.237229
[25] Shapiro, A., Chu, D., Allen, B., and Faloutsos, P. 2007. A dynamic controller toolkit. In Proceedings of the 2007 ACM SIGGRAPH Symposium on Video Games (San Diego, California, August 4-5, 2007). Sandbox '07. ACM, New York, NY, 15-20. DOI= http://doi.acm.org/10.1145/1274940.1274945
[26] Sok, K. W., Kim, M., and Lee, J. 2007. Simulating biped behaviors from human motion data. ACM Trans. Graph. 26, 3 (Jul. 2007), 107. DOI= http://doi.acm.org/10.1145/1276377.1276511
[27] Stewart, J. A. and Cremer, J. F. 1992. Beyond keyframing: an algorithmic approach to animation. In Graphics Interface Conf. Proc., 273-281.
[28] Tang, B., Pan, Z., Zheng, L., and Zhang, M. 2006. Simulating reactive motions for motion capture animation. Lecture Notes in Computer Science, 4035, 530-537.
[29] Weinstein, R., Guendelman, E., and Fedkiw, R. 2008. Impulse-based control of joints and muscles. IEEE Transactions on Visualization and Computer Graphics, 14, 1 (Jan. 2008), 37-46.
[30] Witkin, A. and Kass, M. 1988. Spacetime constraints. SIGGRAPH Comput. Graph. 22, 4 (Aug. 1988), 159-168. DOI= http://doi.acm.org/10.1145/378456.378507
[31] Wooten, W. L. and Hodgins, J. K. 1996. Animation of human diving. Computer Graphics Forum, 15, 1, 3-13.
[32] Wrotek, P., Jenkins, O. C., and McGuire, M. 2006. Dynamo: dynamic, data-driven character control with adjustable balance. In Proceedings of the 2006 ACM SIGGRAPH Symposium on Videogames (Boston, Massachusetts, July 30-31, 2006). Sandbox '06. ACM, New York, NY, 61-70. DOI= http://doi.acm.org/10.1145/1183316.1183325
[33] Yang, P., Laszlo, J., and Singh, K. 2004. Layered dynamic control for interactive character swimming. In Proceedings of the 2004 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (Grenoble, France, August 27-29, 2004). Eurographics Association, Aire-la-Ville, Switzerland, 39-47. DOI= http://doi.acm.org/10.1145/1028523.1028529
[34] Zordan, V. B., Majkowska, A., Chiu, B., and Fast, M. 2005. Dynamic response for motion capture animation. In ACM SIGGRAPH 2005 Papers (Los Angeles, California, July 31 - August 4, 2005), J. Marks, Ed. SIGGRAPH '05. ACM, New York, NY, 697-701. DOI= http://doi.acm.org/10.1145/1186822.1073249
[35] Zordan, V. B. and Hodgins, J. K. 2002. Motion capture-driven simulations that hit and react. In Proceedings of the 2002 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (San Antonio, Texas, July 21-22, 2002). SCA '02. ACM, New York, NY, 89-96. DOI= http://doi.acm.org/10.1145/545261.545276
[36] Zordan, V. B. and Hodgins, J. K. 1999. Tracking and modifying upper-body human motion data with dynamic simulation. Computer Animation and Simulation, 13-22.