Annotation of Human Gesture using 3D Skeleton Controls
Quan Nguyen, Michael Kipp
DFKI, Saarbrücken, Germany

Abstract

The manual transcription of human gesture behavior from video for linguistic analysis is a work-intensive process that results in a rather coarse description of the original motion. We present a novel approach to transcribing gestural movements: by overlaying an articulated 3D skeleton onto the video frame(s), the human coder can replicate the original motions on a pose-by-pose basis by manipulating the skeleton. Our tool is integrated into the ANVIL tool so that both symbolic interval data and 3D pose data can be entered in a single tool. Our method allows relatively quick annotation of human poses, which has been validated in a user study. The resulting data are precise enough to create animations that match the original speaker's motion, which can be validated with a realtime viewer. The tool can be applied to a variety of research topics in the areas of conversation analysis, gesture studies and intelligent virtual agents.

1. Introduction

Transcribing human gesture movement from video is a necessary procedure for a number of research fields, including gesture research, sign language studies, anthropology and believable virtual characters. While our own research is motivated by the creation of believable virtual characters based on the empirical study of human movements, the resulting tools transfer well to other fields. Virtual characters are of particular interest for many application fields like computer games, movies and human-computer interaction. An essential research objective is to generate nonverbal behavior (gestures, body poses etc.), and a key prerequisite for this is to analyze real human behavior. The underlying motion data can be video recordings, manually animated characters or motion capture data.
Motion capture data is very precise, but it requires the human subject to act in a highly controlled lab environment (special suit, markers, cameras), the equipment is very expensive, and significant post-processing is necessary to clean the resulting data (Heloir et al., 2010). Traditional keyframe animation requires a high level of artistic expertise and is also very time-consuming. Motion data can also be acquired by manually annotating the video with symbolic labels on time intervals in a tool like ANVIL (Kipp, 2001; Kipp et al., 2007; Kipp, 2010b; Kipp, 2010a), as shown in Fig. 1. However, the encoded information is a rather coarse approximation of the original movement.

We present a novel technique for efficiently creating a gesture movement transcription using a 3D skeleton. By adjusting the skeleton to match single poses of the original speaker, the human coder can recreate the whole motion. Single pose matching is facilitated by overlaying the skeleton onto the respective video frame. Motion is created by interpolating between the annotated poses. Our tool is realized as an extension to the ANVIL software. ANVIL is a multi-layer video annotation tool where temporal events like words, gestures, and other actions can be transcribed on time-aligned tracks (Fig. 1). The encoded data can become quite complex, which is why ANVIL offers typed attributes for the encoding. Examples of similar tools are ELAN (Wittenburg and Sloetjes, 2006) and EXMARaLDA (Schmidt, 2004). Making our tool an extension of ANVIL fuses the advantages of traditional (symbolic) annotation tools and traditional 3D animation tools (like 3D Studio MAX, Maya or Blender): on the one hand, it allows poses to be encoded with the precision of 3D animation tools; on the other hand, temporal information and semantic meaning can be added, all in a single tool (Figure 2).
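The kind of time-aligned, typed annotation described above can be pictured as a small data model. This is a minimal sketch for illustration only; the field names and the query helper are assumptions, not ANVIL's actual file format or API:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """One ANVIL-style annotation: a typed event on a time-aligned track.
    Field and track names are illustrative, not ANVIL's real schema."""
    track: str        # e.g. "gesture.phrase" or "speech.words"
    start: float      # interval start in seconds
    end: float        # interval end in seconds
    attributes: dict = field(default_factory=dict)  # typed attributes

# A tiny annotation board with two tracks.
board = [
    Annotation("speech.words", 1.20, 1.55, {"token": "over"}),
    Annotation("gesture.phrase", 1.10, 2.40,
               {"handedness": "right", "lexeme": "pointing"}),
]

def events_in(board, track, t0, t1):
    """All annotations on a track overlapping the interval [t0, t1]."""
    return [a for a in board
            if a.track == track and a.start < t1 and a.end > t0]
```

The point of the sketch is that symbolic annotation captures intervals plus attributes, which is exactly the coarse approximation of movement that the 3D skeleton extension refines.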
Note that ANVIL has recently been extended to also display motion capture data with a 3D skeleton when such data is available (Kipp, 2010b). In the area of virtual characters, our pose-based data can immediately be used for extracting gesture lexicons that form the basis of procedural animation techniques (Neff et al., 2008), in conjunction with realtime character animation engines like EMBR (Heloir and Kipp, 2009). Moreover, empirical research disciplines that investigate human communication can use the resulting data and animations to validate their annotation and to create material for communication experiments.

2. Human Gesture Annotation with a 3D Skeleton

Our novel method of gesture annotation is based on the idea that a human coder can easily reconstruct the pose of a speaker from a simple 2D image (e.g. a frame in a movie). For this purpose we provide a 3D stick figure and an intuitive user interface that allows efficient coding. The human coder visually matches single poses of the original speaker with the 3D skeleton (Figure 2). This is supported by overlaying the skeleton on the video frame and by offering intuitive skeleton posing controls (Fig. 3). The system can then interpolate between the annotated poses to approximate the speaker's motion. Here we describe the relevant user interface controls, the overall workflow and how to export the annotated data.

Figure 1: The annotations in ANVIL are shown as colored boxes on the annotation board. Every annotation contains several pieces of information about the corresponding event.

Figure 2: Our ANVIL extension allows poses to be encoded with the precision of 3D animation tools. With ANVIL's conventional features, temporal information and semantic meaning can be added.

Controlling the skeleton
3D posing is difficult because it involves the manipulation of multiple joints with multiple degrees of freedom. The two methods of skeleton manipulation are forward kinematics (FK) and inverse kinematics (IK). Pose creation with FK, i.e. rotating single joints, is slow. In contrast, IK allows positioning the end effector (usually the hand or wrist) while all other joint angles of the arm are resolved automatically. In our tool, the coder can pose the skeleton by moving the hands to the desired position (IK). The coder can then fine-tune single joints using FK, changing arm swivel, elbow bend and hand orientation (Figure 4). For IK it is necessary to define kinematic chains, which can be done in the running system. By default, both arms are defined as kinematic chains, so both arms can be manipulated by the user. The underlying skeleton can be freely defined using the standard Collada format.

Limitations
Our controls are limited to posing the arms. The coder cannot change the head pose or specify facial expressions. There are also no controls for the upper body, e.g. to adjust the shoulders for shrugging or to produce hunched-over or upright postures. Likewise, no locomotion or leg positioning is possible.

Pose matching
For every new pose, our tool puts the current frame of the video in the background behind the skeleton (Figure 3). This screenshot serves as the reference for the pose to be annotated. Automatic alignment of skeleton and screenshot is performed by marking the shoulders: the tool then positions the screenshot so that it matches the skeleton size. To check the current pose in 3D space, the pose viewer window (Figure 5) offers three adjustable views (different camera positions and angles).
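The IK posing described above can be illustrated with a generic analytic two-bone solver. This is a sketch, not the tool's actual implementation (which the paper does not detail); the 2D simplification and the link lengths are assumptions for illustration:

```python
import math

def two_bone_ik(target_x, target_y, upper_len=0.3, lower_len=0.25):
    """Analytic two-bone IK in the plane: given a wrist target relative
    to the shoulder, return (shoulder_angle, elbow_angle) in radians.
    Link lengths are placeholder values, not taken from the paper."""
    dist = math.hypot(target_x, target_y)
    # Clamp the target to the reachable annulus of the arm.
    dist = max(abs(upper_len - lower_len), min(dist, upper_len + lower_len))
    # Law of cosines gives the elbow bend (0 = arm fully extended).
    cos_elbow = (upper_len**2 + lower_len**2 - dist**2) / (2 * upper_len * lower_len)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle = direction to target minus the inner triangle angle.
    cos_inner = (upper_len**2 + dist**2 - lower_len**2) / (2 * upper_len * dist)
    shoulder = math.atan2(target_y, target_x) - math.acos(max(-1.0, min(1.0, cos_inner)))
    return shoulder, elbow
```

Moving the end effector then amounts to calling the solver once per drag event; the remaining degrees of freedom (arm swivel, hand orientation) are exactly what the FK fine-tuning controls adjust afterwards.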
Additionally, the user can adjust the camera in the main editor window.

From poses to motion
The skeleton is animated by interpolating between poses in realtime. Using this animation, the coder can validate whether the specified movement matches the original motion. The coder can always improve the animation by adding new key poses. In the sequence view window, thumbnails of the poses are shown to allow intuitive viewing and editing (Figure 6(a) and Figure 6(b)).

Figure 3: The pose editor window takes the current frame from the video and places it in the background for direct reference.

Figure 4: The coder can pose the skeleton by moving the end effector (green). To adjust the final pose, the coder can correct the arm swivel (blue), bend the elbow (yellow) or change the hand orientation (red).

Figure 5: The pose viewer provides multiple views of the skeleton. Additionally, the user can zoom and rotate the camera in each view separately.

3. Evaluation

In an evaluation study we examined the intuitiveness and efficiency of our new annotation method. We recruited eight subjects (21-30 years) without prior annotation or animation experience. The task was to annotate a given gesture sequence (123 frames, approx. 5 sec). Subjects were instructed with a written manual and filled in a post-session questionnaire. Subjects took 19 minutes on average for the gesture sequence; at least 13 poses were annotated. We compared annotation times with the performance of an expert (one of the authors), which we took as the optimal performance. In addition, the expert performed conventional symbolic annotation (Fig. 7(a)). Symbolic and skeleton annotation are clearly similar in terms of time, even though symbolic annotation yields much coarser data. The learning curve of the non-expert subjects needed normalization because the different poses were of differing complexity. The complexity of a pose is measured by looking at the difference between two neighboring poses. The following formula describes this complexity: C = T + R
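The complexity measure C = T + R (T: distance covered by both end effectors; R: summed joint-angle change) can be made concrete with a small sketch. The pose representation here, dictionaries of effector positions and scalar joint angles, is an assumption for illustration; a full implementation would measure the angle between two 3D joint orientations (e.g. via quaternions):

```python
import math

def pose_complexity(base_pose, end_pose):
    """Complexity C = T + R of reaching end_pose from base_pose.
    A pose is modeled here as {'effectors': {name: (x, y, z)},
    'joints': {name: angle_radians}}, a simplification of the
    tool's real skeleton state, chosen for illustration."""
    # T: distance covered by the end effectors (e.g. the two wrists).
    T = sum(
        math.dist(base_pose["effectors"][name], end_pose["effectors"][name])
        for name in base_pose["effectors"]
    )
    # R: sum of all joint modifications (angle difference per joint).
    R = sum(
        abs(end_pose["joints"][name] - base_pose["joints"][name])
        for name in base_pose["joints"]
    )
    return T + R
```

Dividing each subject's annotation time by this score is what makes poses of different difficulty comparable on a single learning curve.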
Data export
Apart from the regular ANVIL file, the poses are stored in a format that allows easy reuse in animation systems but can also be analyzed by behavior researchers. We support two standard formats: Collada and BML (Behavior Markup Language). BML is a description language for human nonverbal and verbal behavior. Collada is a standard for exchanging data between 3D applications, supported by many 3D modeling tools like Maya, 3D Studio MAX or Blender. The animation data can be used to animate one's own skeletons in these tools or for realtime animation with animation engines like EMBR.

In the complexity formula, C stands for the complexity of a pose. T is the distance covered by both end effectors to reach the end pose from a base pose. R is the sum of all joint modifications, where a joint modification is the angle difference between two joint orientations (Nguyen, 2009). This means that a pose has the highest complexity if both arms are moved over the widest range and all joints are rotated by the maximal degree. The normalized curve (Fig. 7(b)) nicely shows that even within a single pose a significant learning effect is observable, which indicates that the interface is intuitive.

The subjective assessment of our application (by questionnaire) was very positive. It showed that the application was accepted and easy to use. Subjects often described the application as plausible and intuitive. Our application appears to be, at least with regard to subjective opinions, an intuitive interface. For instance, the subjects appreciated the film-strip look of the sequence view window, in the sense that its functionality was immediately clear. Additionally, annotation with this 3D extension was rated as easily
accessible. One reason was that the result can be seen directly: the possibility to see the animation of the annotated gesture immediately was highly motivating. We conclude that our method is regarded as intuitive in subjective ratings and appears to be highly learnable and efficient in coding. Note the impressive performance of the expert coder, who was able to code a whole gesture in approx. 1 minute.

Figure 6: Figure 6(a) shows the expanded view of the pose annotation board. The white areas symbolize the unannotated frames; the coder can add new key poses to improve the animation by clicking on these areas. The imaged areas are the annotated key frames. To keep an overview of all annotated key frames, the coder can fold this strip (Figure 6(b)). Additionally, the view can be zoomed in or out to see all frames at a glance.

4. Related Work

To the best of our knowledge, this is the first tool using 3D skeletons to transcribe human movement. Previous approaches to transcribing gesture rely on symbolic labels to describe joint angles or hand positions. In our own previous work (Kipp et al., 2007), we relied on the transcription of hand position (3 coordinates) and arm swivel to completely specify the arm configuration (without hand shape). We could show that our scheme was more efficient than the related Bern and FORM schemes, although it must be noted that those schemes offer a more complete annotation of the full-body configuration. The Bern scheme (Frey et al., 1983) is an early, purely descriptive scheme which is reliable to code (90-95% agreement) but has high annotation costs. For a gesture of, say, 3 seconds duration, the Bern system encodes 7 time points with 9 dimensions each (counting only the gesture-relevant ones), resulting in 63 attributes to code. FORM is a more recent descriptive gesture annotation scheme (Martell, 2002). It encodes positions by body part (left/right upper/lower arm, left/right hand) and has two tracks for each part, one for static locations and one for motions. For each position change of each body part, the start/end configurations are annotated. Coding reliability appears to be satisfactory, but, as with the Bern system, coding effort is very high: 20 hours of coding per minute of video. Of course, both FORM and the Bern system also encode other body data (head, torso, legs, shoulders etc.) that we do not consider. However, since the annotation effort for descriptive schemes is generally very high, we argue that annotation schemes must be targeted at the research question at hand to be manageable and to have impact in the desired area.

Other approaches import numerical data for statistical analysis or quantitative research. For instance, Segouat et al. import numerical data from video, generated by image processing, into ANVIL to analyze possible correlations between linguistic phenomena and numerical data (Segouat et al., 2006). Crasborn et al. import data glove signals into ELAN to analyze sign language gestures (Crasborn et al., 2006). ANVIL offers the possibility to import and visualize motion capture data for analysis (Kipp, 2010b). It shows motion curves of, e.g., the wrist joint's absolute position in space, its velocity and acceleration. These numerical data are useful for statistical analysis and quantitative research (Heloir et al., 2010). In contrast, our extension supports the annotation of poses and gestures, so that the annotated gesture or pose can be reproduced and reused to build a repertoire of gestures from a specific speaker.

5. Conclusions

We presented an extension to the ANVIL annotation tool for transcribing human gestures using 3D skeleton controls. We showed how our intuitive 3D controls allow the quick creation, editing and realtime viewing of poses and animations. The latter are automatically created using interpolation.
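The interpolation that turns annotated key poses into an animation can be sketched minimally as follows. The (time, joint-angle) representation is an assumption for illustration; the actual tool interpolates full 3D joint rotations, where spherical interpolation of quaternions would be the usual choice:

```python
def interpolate_pose(key_poses, t):
    """Linearly interpolate joint angles between annotated key poses.
    key_poses: time-sorted list of (time_seconds, {joint: angle}) pairs.
    Scalar angles are a simplification of the tool's 3D rotations."""
    if t <= key_poses[0][0]:
        return dict(key_poses[0][1])     # clamp before the first pose
    if t >= key_poses[-1][0]:
        return dict(key_poses[-1][1])    # clamp after the last pose
    for (t0, p0), (t1, p1) in zip(key_poses, key_poses[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)     # normalized blend weight
            return {j: (1 - a) * p0[j] + a * p1[j] for j in p0}
```

This is also why adding a key pose always improves the animation locally: it only changes the interpolation within the two neighboring segments.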
Apart from being useful in a computer animation context, the tool can be used for quantitative research on human gesture in fields like conversation analysis, gesture studies and anthropology. We also argued that the tool can be used in the field of intelligent virtual agents to build a repertoire of gesture templates from video recordings. Future work will investigate the use of more intuitive controls for posing the skeleton (e.g. using multitouch or other advanced input devices) and automating part of the posing using computer vision algorithms for detecting hands and shoulders. Additionally, we plan to provide more controls for the manipulation of shoulders (shrugging), leg poses and body postures.

Acknowledgements

Part of this research has been carried out within the framework of the Cluster of Excellence Multimodal Computing and Interaction (MMCI), sponsored by the German Research Foundation (DFG).
Figure 7: (a) Averaged annotation time; (b) averaged improvement in the course of annotation. The upper chart shows the averaged annotation time over all subjects with standard deviations per frame (black), in comparison to the annotation-scheme (blue) and skeleton-based annotation times of an expert. Figure 7(b) shows the averaged improvement in the course of annotation; values are divided by the complexity of the pose and show annotation time in seconds per unit of complexity.

6. References

Onno Crasborn, Hans Sloetjes, Eric Auer, and Peter Wittenburg. 2006. Combining video and numeric data in the analysis of sign languages with the ELAN annotation software. In Proceedings of the 2nd Workshop on the Representation and Processing of Sign Languages: Lexicographic Matters and Didactic Scenarios.

S. Frey, H. P. Hirsbrunner, A. Florin, W. Daw, and R. Crawford. 1983. A unified approach to the investigation of nonverbal and verbal behavior in communication research. In W. Doise and S. Moscovici, editors, Current Issues in European Social Psychology. Cambridge University Press, Cambridge.

Alexis Heloir and Michael Kipp. 2009. EMBR: A realtime animation engine for interactive embodied agents. In IVA '09: Proceedings of the 9th International Conference on Intelligent Virtual Agents, Berlin, Heidelberg. Springer-Verlag.

Alexis Heloir, Michael Neff, and Michael Kipp. 2010. Exploiting motion capture for virtual human animation: Data collection and annotation visualization. In Proc. of the Workshop on Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality.

Michael Kipp, Michael Neff, and Irene Albrecht. 2007. An annotation scheme for conversational gestures: How to economically capture timing and form. Journal on Language Resources and Evaluation, Special Issue on Multimodal Corpora, 41(3-4), December.

Michael Kipp. 2001. Anvil: A generic annotation tool for multimodal dialogue. In Proceedings of Eurospeech.

Michael Kipp. 2010a. Anvil: The video annotation research tool. In Jacques Durand, Ulrike Gut, and Gjert Kristofferson, editors, Handbook of Corpus Phonology. Oxford University Press.

Michael Kipp. 2010b. Multimedia annotation, querying and analysis in ANVIL. In Mark Maybury, editor, Multimedia Information Extraction, chapter 21. MIT Press.

Craig Martell. 2002. FORM: An extensible, kinematically-based gesture annotation scheme. In Proceedings of the Seventh International Conference on Spoken Language Processing (ICSLP '02), Denver.

Michael Neff, Michael Kipp, Irene Albrecht, and Hans-Peter Seidel. 2008. Gesture modeling and animation based on a probabilistic recreation of speaker style. ACM Transactions on Graphics, 27(1):1-24, March.

Quan Nguyen. 2009. Werkzeuge zur IK-basierten Gestenannotation mit Hilfe eines 3D-Skeletts [Tools for IK-based gesture annotation using a 3D skeleton]. Master's thesis, Saarland University.

T. Schmidt. 2004. Transcribing and annotating spoken language with EXMARaLDA. In Proceedings of the LREC Workshop on XML-based Richly Annotated Corpora.

Jérémie Segouat, Annelies Braffort, and Émilie Martin. 2006. Sign language corpus analysis: Synchronisation of linguistic annotation and numerical data.

P. Wittenburg, H. Brugman, A. Russel, A. Klassmann, and H. Sloetjes. 2006. ELAN: A professional framework for multimodality research. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC).
VACA: A Tool for Qualitative Video Analysis
VACA: A Tool for Qualitative Video Analysis Brandon Burr Stanford University 353 Serra Mall, Room 160 Stanford, CA 94305 USA [email protected] Abstract In experimental research the job of analyzing data
3D Animation Graphic Designer
Goal of the program The training program aims to develop the trainee to get him to the level of professional and creative in designing models three-dimensional and move with all respect to this art and
Course Descriptions: Undergraduate/Graduate Certificate Program in Data Visualization and Analysis
9/3/2013 Course Descriptions: Undergraduate/Graduate Certificate Program in Data Visualization and Analysis Seton Hall University, South Orange, New Jersey http://www.shu.edu/go/dava Visualization and
Production Design / Art Direction. TV Animation / Shorts
12 Head of 14 Head of Animation Studio 16 Top Creative Story Generates and develops story ideas, sequences, storyboards, elements and enhancements throughout production. TV Animation / Shorts Manages the
PRODUCT DATA. PULSE WorkFlow Manager Type 7756
PRODUCT DATA PULSE WorkFlow Manager Type 7756 PULSE WorkFlow Manager Type 7756 provides a range of tools for automating measurement and analysis tasks performed with Brüel & Kjær PULSE. This makes it particularly
Building Interactive Animations using VRML and Java
Building Interactive Animations using VRML and Java FABIANA SALDANHA TAMIOSSO 1,ALBERTO BARBOSA RAPOSO 1, LÉO PINI MAGALHÃES 1 2,IVAN LUIZ MARQUES RICARTE 1 1 State University of Campinas (UNICAMP) School
NORCO COLLEGE SLO to PLO MATRIX PLOs
SLO to PLO MATRX CERTF CATE/ Game Art: 3D Animation NAS686/NCE686 PROGR AM: ART-17: Beginning Drawing dentify and employ proper use of a variety of drawing materials. dentify, define, and properly use
Information Technology Career Field Pathways and Course Structure
Information Technology Career Field Pathways and Course Structure Courses in Information Support and Services (N0) Computer Hardware 2 145025 Computer Software 145030 Networking 2 145035 Network Operating
Hierarchical Clustering Analysis
Hierarchical Clustering Analysis What is Hierarchical Clustering? Hierarchical clustering is used to group similar objects into clusters. In the beginning, each row and/or column is considered a cluster.
Design of a six Degree-of-Freedom Articulated Robotic Arm for Manufacturing Electrochromic Nanofilms
Abstract Design of a six Degree-of-Freedom Articulated Robotic Arm for Manufacturing Electrochromic Nanofilms by Maxine Emerich Advisor: Dr. Scott Pierce The subject of this report is the development of
multimodality image processing workstation Visualizing your SPECT-CT-PET-MRI images
multimodality image processing workstation Visualizing your SPECT-CT-PET-MRI images InterView FUSION InterView FUSION is a new visualization and evaluation software developed by Mediso built on state of
University of Arkansas Libraries ArcGIS Desktop Tutorial. Section 2: Manipulating Display Parameters in ArcMap. Symbolizing Features and Rasters:
: Manipulating Display Parameters in ArcMap Symbolizing Features and Rasters: Data sets that are added to ArcMap a default symbology. The user can change the default symbology for their features (point,
A Tool for Generating Partition Schedules of Multiprocessor Systems
A Tool for Generating Partition Schedules of Multiprocessor Systems Hans-Joachim Goltz and Norbert Pieth Fraunhofer FIRST, Berlin, Germany {hans-joachim.goltz,nobert.pieth}@first.fraunhofer.de Abstract.
Journal of Industrial Engineering Research. Adaptive sequence of Key Pose Detection for Human Action Recognition
IWNEST PUBLISHER Journal of Industrial Engineering Research (ISSN: 2077-4559) Journal home page: http://www.iwnest.com/aace/ Adaptive sequence of Key Pose Detection for Human Action Recognition 1 T. Sindhu
Tutorial 13: Object Animation
Tutorial 13: Object Animation In this tutorial we will learn how to: Completion time 40 minutes Establish the number of frames for an object animation Rotate objects into desired positions Set key frames
BEDA: Visual analytics for behavioral and physiological data
BEDA: Visual analytics for behavioral and physiological data Jennifer Kim 1, Melinda Snodgrass 2, Mary Pietrowicz 1, Karrie Karahalios 1, and Jim Halle 2 Departments of 1 Computer Science and 2 Special
imc FAMOS 6.3 visualization signal analysis data processing test reporting Comprehensive data analysis and documentation imc productive testing
imc FAMOS 6.3 visualization signal analysis data processing test reporting Comprehensive data analysis and documentation imc productive testing www.imcfamos.com imc FAMOS at a glance Four editions to Optimize
Animation. Persistence of vision: Visual closure:
Animation Persistence of vision: The visual system smoothes in time. This means that images presented to the eye are perceived by the visual system for a short time after they are presented. In turn, this
Working With Animation: Introduction to Flash
Working With Animation: Introduction to Flash With Adobe Flash, you can create artwork and animations that add motion and visual interest to your Web pages. Flash movies can be interactive users can click
Tutorial: Biped Character in 3D Studio Max 7, Easy Animation
Tutorial: Biped Character in 3D Studio Max 7, Easy Animation Written by: Ricardo Tangali 1. Introduction:... 3 2. Basic control in 3D Studio Max... 3 2.1. Navigating a scene:... 3 2.2. Hide and Unhide
STRAND: Number and Operations Algebra Geometry Measurement Data Analysis and Probability STANDARD:
how August/September Demonstrate an understanding of the place-value structure of the base-ten number system by; (a) counting with understanding and recognizing how many in sets of objects up to 50, (b)
Overview of the Adobe Flash Professional CS6 workspace
Overview of the Adobe Flash Professional CS6 workspace In this guide, you learn how to do the following: Identify the elements of the Adobe Flash Professional CS6 workspace Customize the layout of the
Modelling 3D Avatar for Virtual Try on
Modelling 3D Avatar for Virtual Try on NADIA MAGNENAT THALMANN DIRECTOR MIRALAB UNIVERSITY OF GENEVA DIRECTOR INSTITUTE FOR MEDIA INNOVATION, NTU, SINGAPORE WWW.MIRALAB.CH/ Creating Digital Humans Vertex
What is Visualization? Information Visualization An Overview. Information Visualization. Definitions
What is Visualization? Information Visualization An Overview Jonathan I. Maletic, Ph.D. Computer Science Kent State University Visualize/Visualization: To form a mental image or vision of [some
CATIA V5 Tutorials. Mechanism Design & Animation. Release 18. Nader G. Zamani. University of Windsor. Jonathan M. Weaver. University of Detroit Mercy
CATIA V5 Tutorials Mechanism Design & Animation Release 18 Nader G. Zamani University of Windsor Jonathan M. Weaver University of Detroit Mercy SDC PUBLICATIONS Schroff Development Corporation www.schroff.com
Human-like Arm Motion Generation for Humanoid Robots Using Motion Capture Database
Human-like Arm Motion Generation for Humanoid Robots Using Motion Capture Database Seungsu Kim, ChangHwan Kim and Jong Hyeon Park School of Mechanical Engineering Hanyang University, Seoul, 133-791, Korea.
PDF hosted at the Radboud Repository of the Radboud University Nijmegen
PDF hosted at the Radboud Repository of the Radboud University Nijmegen The following full text is a publisher's version. For additional information about this publication click this link. http://hdl.handle.net/2066/60933
MovieClip, Button, Graphic, Motion Tween, Classic Motion Tween, Shape Tween, Motion Guide, Masking, Bone Tool, 3D Tool
1 CEIT 323 Lab Worksheet 1 MovieClip, Button, Graphic, Motion Tween, Classic Motion Tween, Shape Tween, Motion Guide, Masking, Bone Tool, 3D Tool Classic Motion Tween Classic tweens are an older way of
Animation Overview of the Industry Arts, AV, Technology, and Communication. Lesson Plan
Animation Overview of the Industry Arts, AV, Technology, and Communication Lesson Plan Performance Objective Upon completion of this assignment, the student will have a better understanding of career and
Mocap in Carrara - by CyBoRgTy
Mocap in Carrara - by CyBoRgTy Animating in Carrara can be a lot of fun, especially when you combine keyframe animation with your personally captured motions. As a hobbyist creating mocap, I like to use
Off-line programming of industrial robots using co-located environments
ISBN 978-1-84626-xxx-x Proceedings of 2011 International Conference on Optimization of the Robots and Manipulators (OPTIROB 2011) Sinaia, Romania, 26-28 Mai, 2011, pp. xxx-xxx Off-line programming of industrial
Character Animation Tutorial
Character Animation Tutorial 1.Overview 2.Modelling 3.Texturing 5.Skeleton and IKs 4.Keys 5.Export the character and its animations 6.Load the character in Virtools 7.Material & texture tuning 8.Merge
Automatic Labeling of Lane Markings for Autonomous Vehicles
Automatic Labeling of Lane Markings for Autonomous Vehicles Jeffrey Kiske Stanford University 450 Serra Mall, Stanford, CA 94305 [email protected] 1. Introduction As autonomous vehicles become more popular,
WORKSHOPS FOR PRIMARY SCHOOLS
WORKSHOPS FOR PRIMARY SCHOOLS Note: Times and prices will be amended where possible to suit school schedules. MEDIA ARTS MAKE-A-MOVIE WORKSHOP In this practical, hands-on course, students learn the step-by-step
