Robust and Automatic Optical Motion Tracking
Robust and Automatic Optical Motion Tracking

Alexander Hornung, Leif Kobbelt
Lehrstuhl für Informatik VIII, RWTH Aachen, Aachen

Abstract: Marker-based optical motion tracking is an established technique to capture and reconstruct the skeleton and motion of a subject. However, several practical problems still arise, mostly due to ambiguities caused by occluded or wrongly identified markers, which often make post-processing by a human user inevitable. We present techniques to make marker-based optical motion tracking an automatic and robust process. The aim of our framework is a self-calibrating system which automatically identifies rigid cliques of markers and recovers the skeleton topology and geometry of a tracked subject without any auxiliary information about the tracking setup. The gathered information is used to make the actual motion recording phase robust to marker occlusions by reconstructing missing limbs or joints of the subject using inverse kinematic methods. The resulting techniques provide a simple, general framework for optical motion tracking which minimizes the need for complex manual post-processing.

Keywords: Optical Motion Tracking, Self-calibration, Retargeting, Animation

1 Introduction

Capturing a real actor's motion plays an important role in computer animation as well as in motion analysis for medicine or sports science. Tracking the position and orientation of the subject's limbs allows the realistic reproduction and transfer of this motion to virtual characters with the same skeleton topology. Several approaches to track motion exist, like contour finding [CTMS03], or marker-based methods, where the trajectories of markers attached to the subject's limbs are tracked magnetically [OBBH00] or optically [HFP+00] (Fig. 1).

Figure 1: A subject's arm equipped with optical markers.

Although optical tracking is in general
the most reliable system in terms of accuracy and robustness to external influences, it suffers from two fundamental problems. On the one hand, optical markers are visually indistinguishable. Therefore we need appropriate methods to identify markers based on other
criteria in order to associate detected markers with the respective limbs. The second fundamental problem is that of occlusion. To reconstruct the three-dimensional position of a marker, it has to be seen by at least two cameras. This cannot be ensured for a freely moving actor. Hence we need methods to compensate for missing markers, so that we can still reconstruct the position and orientation of the actor's limbs even if a significant number of markers is occluded. Existing research and commercial solutions like our ART tracking system [ART] often need a considerable amount of time for manual calibration to allow for reliable and robust marker recognition and tracking. In contrast, this project focuses on methods to make optical motion tracking a completely automated pipeline which minimizes the necessity for intervention. Beginning with a self-calibrating system initialization, which uniquely identifies rigid cliques of markers, we automatically compute the underlying skeleton geometry and topology of a tracked subject. In particular, we do not constrain the degrees of freedom of the underlying model in any way, so that we are able to track arbitrary articulated bodies. Moreover, we reconstruct the complete position and orientation of limbs, in contrast to other methods which often have open degrees of freedom concerning the orientation of limbs, or which have to constrain the skeleton topology beforehand. During the actual motion recording phase we take advantage of this information to compensate for occluded markers, and to reconstruct the position and orientation of limbs and joints in an accurate and robust way.

2 Related Work

Several partial solutions increasing robustness and automation in optical motion tracking have been proposed. [RL02] present an automatic method to identify marker cliques.
However, they require an occlusion-free training sequence which is processed offline to determine marker cliques and model parameters like the skeleton structure. Our method does not impose constraints on the initial training sequence and provides permanent feedback, since it is computed online. Our work on marker tracking and on the dynamic identification of rigid marker cliques, both formulated as instances of a generic correspondence estimation problem, is based on [SLH91]. They present an elegant algorithm to associate the features in two images for applications in computer vision. The transfer of their method to the domain of optical motion tracking allows us to solve several tracking-related problems in a unified manner. [OBBH00] show how to estimate the structure and geometry of an unknown skeleton model. They describe a least squares fit of the input motion data of individual limbs to a rotary joint model. Other methods like [SPB+98] compute joints by estimating the rotation center of markers and their associated limbs. We use the technique of [OBBH00] since it results in higher accuracy and robustness with respect to noise. Approaches to make the actual recording of motion data more robust range from predicting future marker positions using a Kalman filter [DU03] or search space reductions based on other prediction quality measures [vLvR03] to resolving occlusions based on the skeletal model of the tracked person as described in [HFP+00]. Our method does not try to identify or reconstruct
markers based on predictions of future states, but focuses on their robust recognition based on generated marker signatures. This ensures a reliable identification even after occlusions lasting several frames, where prediction models possibly fail due to unconstrained movements of the tracked subject. We improve the actual tracking quality in the case of missing markers by applying methods of inverse kinematics to the computed skeleton, as presented in [TGB99]. They show how to reconstruct missing inner limbs of a skeleton up to one degree of freedom based on adjacent limbs in real time. We extend their solution to determine the remaining degree of freedom if only one additional marker on the lost limb is known. In recent work, [ZH03] use a force-based forward dynamic model to map optical motion tracking data to a body model. Their technique does not estimate the skeleton of the tracked subject and runs offline. However, they explicitly mention the benefits of a real-time system for motion capture. Commercial systems like [Vic] provide software tools for all phases of the tracking pipeline. However, such systems focus on setups with single markers attached to limbs, resulting in the restrictions mentioned above. [ART], a commonly used system for VR applications, provides only low-level tracking and marker recognition, without methods for automatic calibration, skeleton estimation, or robust tracking of articulated bodies.

3 Self-Calibration

To track an unknown subject's motion, the subject is equipped with a set of spherical optical markers m_1, ..., m_k (Fig. 1). As the subject moves, the tracking system reconstructs the 3D position of a marker at time t whenever it is seen by at least two cameras. Due to indistinguishable markers and occlusions, our input data consists of an unstructured set of detected markers M^t = {m^t_1, ..., m^t_k(t)} given by their corresponding 3D positions P^t = {p^t_1, ..., p^t_k(t)} at frame F_t, with k(t) <= k.
To calibrate and prepare the motion tracking system for the actual motion recording phase, we have to solve the two fundamental problems mentioned above: marker distinction (map m^t_i to m_j) and temporary marker occlusion (recognize m^{t+n}_i as m^t_j). We present a method to solve both problems as instances of a general correspondence estimation problem, resulting in a simple, coherent framework.

Figure 2: Eight markers are part of two distinct cliques, while one marker is tracked in isolation. Within each clique, the inter-marker distances are constant. Between the two sets M^{t-1} and M^t, three classes of markers have to be distinguished during continuous tracking: lost markers (empty dashed boxes), tracked markers, and new markers.

We solve this problem by finding corresponding elements in M^{t-1} and M^t. Every time a new marker is found, it is assigned to an unused global marker identity m_i.
Our methods for correspondence estimation are motivated by the work of [SLH91]. They presented an elegant approach to find a partial mapping between two sets of objects which minimizes the overall squared sum of some inter-object measurements, based on a singular value decomposition of a proximity matrix. [SLH91] used this method to find an assignment between feature points in two images. However, using their method in a more general sense, by formulating each tracking subproblem as a matching problem between two partially corresponding sets of objects, yields a simple framework to solve tracking-related tasks. We extended their method to work better with sets of objects where the actual number of corresponding elements can be arbitrary. The first low-level tracking task where we apply our correspondence-based method is the continuous tracking of markers between successive frames F_{t-1} and F_t. As mentioned above, our input data consists of unstructured sets of markers M^{t-1} and M^t and their corresponding 3D positions P^{t-1} and P^t. Due to occlusion, some of the markers in M^{t-1} will be lost in M^t, some will be trackable through both frames with slightly different positions, and some markers will be new in M^t (Fig. 2). Our modified correspondence estimation algorithm identifies these classes of markers by assigning corresponding positions between the two sets P^{t-1} and P^t. In particular, our algorithm finds a partial mapping of subsets of markers within the two sets M^{t-1} and M^t, allowing for vanishing or newly appearing markers. As depicted in Fig. 2, markers can be temporarily occluded from the tracking system and are therefore lost during the continuous tracking approach. Since markers cannot be distinguished, the system does not know whether a new marker was already tracked before and should thus be associated with the former identity. To resolve these ambiguities, one attaches not only one but several markers to every limb of the tracked person.
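To make the correspondence step concrete, the following is a minimal numpy sketch of a Scott-Longuet-Higgins-style partial matching between two frames. This is our illustration, not the authors' implementation; the function name, the Gaussian width sigma, and the acceptance threshold are assumptions.

```python
import numpy as np

def match_markers(P_prev, P_curr, sigma=0.05, threshold=0.5):
    """Partial marker matching between two frames in the spirit of [SLH91].

    P_prev: (m, 3) marker positions in frame t-1; P_curr: (n, 3) in frame t.
    Returns a list of (i, j) pairs; unmatched rows correspond to lost
    markers, unmatched columns to new markers.
    """
    # Gaussian proximity matrix: nearby marker pairs get values close to 1.
    d2 = ((P_prev[:, None, :] - P_curr[None, :, :]) ** 2).sum(axis=2)
    G = np.exp(-d2 / (2.0 * sigma ** 2))
    # Replace all singular values by 1 ("orthogonalize" the proximity matrix).
    U, _, Vt = np.linalg.svd(G)
    k = min(G.shape)
    P = U[:, :k] @ Vt[:k, :]
    # Accept a pair only if it dominates both its row and its column.
    pairs = []
    for i in range(G.shape[0]):
        j = int(np.argmax(P[i]))
        if P[i, j] >= threshold and int(np.argmax(P[:, j])) == i:
            pairs.append((i, j))
    return pairs
```

Markers of the previous frame left unmatched are classified as lost, unmatched markers of the current frame as new; only pairs that dominate both their row and their column of the orthogonalized proximity matrix are accepted, which allows the mapping to be partial.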
Markers located on the same limb form rigid cliques with characteristic invariant inter-marker distances, while distances to markers on other limbs change over time. We can think of attaching a string between each pair of markers. As we go from frame to frame, we record the length variation of each string. If a string is stretched too much, it rips (Fig. 3). In the end, only the strings within the rigid cliques remain.

Figure 3: Ripping strings. Initially all markers are connected to each other. By moving the respective cliques around, edges between different cliques are destroyed and only the final rigid cliques remain.

These constant distances to the other markers within the same clique form a unique distance pattern or signature Sig_i for every marker m_i. This makes it possible to identify a currently unknown marker m^t_j. Suppose this marker was already seen by the system before; then there exists some marker identity m_i with a unique distance pattern Sig_i corresponding to m^t_j. So after computing the set of distances of m^t_j to all other markers found in the same frame F_t, we can identify m^t_j as m_i by finding the distance
pattern Sig_i within this set. This is particularly difficult since signatures are often only partially available because of marker occlusions. Furthermore, signatures are often partially equal, since the range of possible marker distances within one rigid clique is restricted by marker sizes, the precision of the tracking system, and of course the wearability of a clique for the tracked subject. This problem of identifying temporarily occluded markers based on signatures is also solved by our correspondence estimation algorithm. During the run of our algorithm we continuously track markers, dynamically create these signatures to identify single markers and rigid cliques, and resolve ambiguities caused by temporarily occluded markers. As soon as a target number of rigid cliques is found, we automatically compute the position and orientation of each corresponding limb by embedding a local coordinate system into the corresponding clique of markers. At this point, the basic calibration of the tracking system is accomplished: we can reliably track a subject equipped with several sets of markers. In contrast to systems like [ART], this initialization is done completely automatically in real time, with permanent feedback to the user, which allows for maximum efficiency and flexibility. In the following step, the underlying skeleton model is automatically reconstructed for higher-level tracking tasks.

4 Skeleton Reconstruction

Each of the identified rigid cliques corresponds to a limb of the tracked subject. Several methods have been proposed to automatically reconstruct the underlying skeleton structure. Under the assumption of a skeleton model with rigid bones and rotational joints, [OBBH00] show how to robustly compute precise joint positions by solving a least squares system of motion measurements. Each limb l_i is associated with a time-varying local coordinate system.
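One standard way to embed such a local coordinate system into a clique of markers is a least-squares rigid fit of the observed marker positions to a reference shape of the clique. The sketch below uses the Kabsch/Procrustes method; this is our illustration of the idea, not necessarily the authors' exact procedure, and the function name is an assumption.

```python
import numpy as np

def clique_pose(P_ref, P_obs):
    """Estimate the rigid transform (R, t) with P_obs ~= P_ref @ R.T + t,
    i.e. the current pose of a marker clique relative to its reference
    shape (Kabsch/Procrustes fit)."""
    a = P_ref - P_ref.mean(axis=0)   # centered reference clique
    b = P_obs - P_obs.mean(axis=0)   # centered observed clique
    U, _, Vt = np.linalg.svd(a.T @ b)
    # Flip the last axis if the fit would otherwise be a reflection.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = P_obs.mean(axis=0) - R @ P_ref.mean(axis=0)
    return R, t
```

Evaluating this fit once per frame and clique yields exactly the time-varying rotation and translation of the corresponding limb.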
Thus there is a transform L^t_i = [R^t_i | t^t_i] which maps from l_i's current local coordinates to world coordinates. The joint between two limbs l_i and l_j has the property that it has constant local coordinates c_i with respect to l_i and constant local coordinates c_j with respect to l_j. The coordinates c_i and c_j are related to each other by the fact that they map to the same position in world coordinates, i.e., L^t_i c_i = L^t_j c_j for every frame F_t. For every possible pair of limbs and measurements in n frames this leads to an overdetermined system that we can solve for the local joint coordinates in the least squares sense:

    [ R^0_i       -R^0_j     ]             [ t^0_j     - t^0_i     ]
    [   ...         ...      ] [ c_i ]  =  [          ...          ]     (1)
    [ R^{n-1}_i   -R^{n-1}_j ] [ c_j ]     [ t^{n-1}_j - t^{n-1}_i ]

The skeleton structure can be computed by a minimum spanning tree connecting the limbs. Since every joint is defined by the two respective positions L^t_i c_i and L^t_j c_j, it can be computed during the actual tracking phase in the following section by averaging the two positions. Even if one limb is completely lost, all joint positions are still explicitly defined. The geometry of the bones is given by the distance between adjacent joints. The computed joint positions allow us to calculate a further type of signature to identify markers, since the distances between markers and joints associated with the same limb remain constant. We will exploit these additional signatures
in the following section. The extracted skeleton allows us to retarget the captured motion data to arbitrary objects with the same skeleton structure. Moreover, the computed model helps us to make the actual motion capturing phase more robust in cases of occluded markers.

5 Robust Motion Capture

During the actual motion capturing phase we are able to use several methods to make the tracking robust. We have a robust method to identify formerly lost markers, redundant orientation and position information for every limb, and a skeleton which reduces the degrees of freedom for every limb by imposing constraints from neighboring limbs. In this section we provide an example of how to exploit the skeleton structure to reconstruct lost limbs in cases where complete cliques of markers could not be tracked due to occlusions. Consider the case where two inner limbs are lost, like the forearm and upper arm of a tracked human body, a so-called human-arm-like chain (HAL chain). In this case the position of the missing inner joint can still be computed up to one degree of freedom. [TGB99] show that it has to lie on a circle defined by the intersection of two spheres (Fig. 4) given by the outer joint positions j_1 and j_2 (wrist and shoulder) and the two inner limb lengths l_1 and l_2. In practice it is very unlikely that both cliques of the lost limbs are completely occluded. In most cases there will be at least one additional marker position p available. This marker can be identified by its rigid distance to its corresponding joint. Knowing its constant distance d to the missing inner joint position, it is possible to define a third sphere centered at the marker's position p with radius d. The lost inner joint position j has to be the intersection of these three spheres.

Figure 4: A situation where the upper and lower arm are lost during tracking. The lost inner joint can be reconstructed by intersecting three spheres as described below.
The orange circle visualizes the intersection of two of them. The third sphere intersects this circle in two points, yielding the lost inner joint.

    j ∈ S(j_1, l_1) ∩ S(j_2, l_2) ∩ S(p, d)

Usually this yields two possible solutions. Additional markers constrain the remaining ambiguities in a similar way, until the correct joint position can be reconstructed exactly. Otherwise one can choose the most plausible of the remaining solutions, according to continuity assumptions or other heuristics.
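The three-sphere intersection above can be computed with classic trilateration. The following numpy sketch is our illustration of that computation (function and variable names are assumptions); degenerate configurations, e.g. collinear centers, are only minimally handled.

```python
import numpy as np

def intersect_three_spheres(c1, r1, c2, r2, c3, r3):
    """Return the (up to) two intersection points of three spheres with
    centers c1, c2, c3 and radii r1, r2, r3, or None if they do not meet."""
    # Build an orthonormal frame with origin c1 and x-axis towards c2.
    ex = (c2 - c1) / np.linalg.norm(c2 - c1)
    i = ex @ (c3 - c1)
    ey = c3 - c1 - i * ex
    ey = ey / np.linalg.norm(ey)
    ez = np.cross(ex, ey)
    d = np.linalg.norm(c2 - c1)
    j = ey @ (c3 - c1)
    # In this frame the sphere equations reduce to linear ones in x and y.
    x = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    y = (r1 ** 2 - r3 ** 2 + i ** 2 + j ** 2) / (2 * j) - (i / j) * x
    z2 = r1 ** 2 - x ** 2 - y ** 2
    if z2 < 0:
        return None          # the spheres do not intersect
    z = np.sqrt(z2)
    base = c1 + x * ex + y * ey
    return base + z * ez, base - z * ez
```

For the HAL chain this would be called with (j_1, l_1), (j_2, l_2), and (p, d); the two returned points are the candidate inner joint positions, between which additional markers or continuity assumptions decide.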
6 Results

In our current setup we use four ARTTrack1 cameras [ART], placed in the four upper corners of a rectangular room. While this setting allows us to track a freely moving person, it also results in very frequent marker occlusions. For example, while tracking the HAL chain in Figure 4 with 17 markers in 4 cliques, we had an average of 21% of the markers lost between two successive frames at a tracking rate of approximately 50 frames per second. In spite of this high percentage of lost markers it is still possible to capture the motion of a subject reliably, since reappearing markers and partially visible cliques are generally identified within a few frames. To compare the quality of the inverse kinematic technique for inner joints against the actual joint position, we measured the deviation (Fig. 5) between both estimates for the elbow position (Fig. 4). The average deviation of the reconstructed joint from the actual joint position is only about 8 mm, with a standard deviation of 7 mm. This is very close to the actual precision of the tracking system for the used marker size. The high peaks result from wrongly identified single markers, in which case the sphere S(p, d) has the wrong size and the computed circle intersections yield wrong joint positions. However, such errors can be identified easily by assuming a continuously moving subject. The wrong positions can be compensated by enforcing physically plausible movements of the joints.

Figure 5: The deviation between the joint position computed with the inverse kinematic method and the actual joint position for the elbow. The peaks correspond to falsely computed joint positions due to wrongly identified markers. However, they can be easily detected and compensated.

The necessary length of the initial phase for learning characteristic signatures depends on the presentation of cliques to the system.
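The clique-learning phase itself, the string-ripping procedure of Section 3, can be sketched as follows. This minimal illustration assumes markers have already been identified across frames; the function name and the rip tolerance are our assumptions, not the authors' values.

```python
import numpy as np

def find_rigid_cliques(frames, tol=0.01):
    """frames: list of (k, 3) arrays holding the positions of the same k
    identified markers over time. Returns the rigid cliques as sorted
    lists of marker indices: connected components of the graph of
    'unripped strings', i.e. near-constant pairwise distances."""
    k = frames[0].shape[0]
    dists = np.array([np.linalg.norm(f[:, None] - f[None, :], axis=2)
                      for f in frames])
    # A string 'rips' if its length varies by more than tol over the sequence.
    rigid = (dists.max(axis=0) - dists.min(axis=0)) <= tol
    # Collect connected components over the remaining strings.
    cliques, unvisited = [], set(range(k))
    while unvisited:
        stack = [unvisited.pop()]
        comp = set(stack)
        while stack:
            a = stack.pop()
            for b in range(k):
                if rigid[a, b] and b in unvisited:
                    unvisited.discard(b)
                    comp.add(b)
                    stack.append(b)
        cliques.append(sorted(comp))
    return sorted(cliques)
```

The more markers move relative to each other during this phase, the faster the inter-clique strings rip, which is why the presentation of the cliques to the system governs the length of the initial phase.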
In an optimal setting, the self-calibration phase can be accomplished with all marker cliques already attached to the subject. In the case of a high loss rate, like the above-mentioned 21% for our system, it can be more efficient to calibrate the marker cliques in such a way that a minimal number of occlusions is ensured, e.g., by attaching the marker cliques to the subject only after the initial clique-calibration step. For continuously tracked markers without frequent occlusions, the self-calibration is finished as soon as the last clique becomes visible. In cases where all markers are visible from the beginning, the calibration is completed after just a few frames. Since the system runs in real time, one has direct feedback about the tracking quality, during the initialization as well as during the final motion capture. The computation of the skeleton geometry and topology is also a matter of seconds. For the skeleton shown in Figure 6, with markers attached to 12 limbs, we generally consider motion sequences of between 20 and 60 seconds duration. The actual model estimation takes approximately 1 second for 60 seconds of recorded motion at 50 fps. For a good model approximation it is of
primary importance that the tracked subject exercises all degrees of freedom for each joint. If the initial phase was already performed with the subject equipped with markers, these measurements can already be used for the skeleton reconstruction. Figure 6 shows a few frames from a captured sequence using our system. Despite the fact that only 79% of the markers could be tracked between two successive frames, the marker identification and inverse kinematic methods allowed us to reliably record the motion of a person equipped with 12 cliques of markers.

Figure 6: A few frames from a captured motion sequence using our system.

7 Conclusions

We presented different methods to make optical motion tracking a robust and automatic process. Automation was achieved by creating a real-time capable self-calibration method, which learns characteristic marker signatures to identify rigid cliques of markers corresponding to limbs of the tracked subject. One key aspect of our method was the mapping of several crucial subproblems to different instances of one generic correspondence matching problem, resulting in a simple and robust algorithm. From the rigid cliques of markers the system automatically extracts the geometry and topology of the subject's skeleton. The robustness of the tracking process is improved by the reliable identification of markers even after long and frequent occlusions. Based on the computed skeleton, we showed how to apply inverse kinematic methods to reconstruct limb or joint positions in cases where an explicit clique-based computation is impossible due to an insufficient number of tracked markers. In the future we will integrate more sophisticated prediction filters for the movement of markers, which will improve the tracking during those periods where insufficient information is available to apply the presented inverse kinematic methods, as in the case of lost outer limbs.

References

[ART] ARTtrack1 & DTrack, A.R.T. advanced realtime tracking GmbH,
[CTMS03] Joel Carranza, Christian Theobalt, Marcus A. Magnor, and Hans-Peter Seidel. Free-viewpoint video of human actors. In ACM Transactions on Graphics, volume 22.
[DU03] Klaus Dorfmüller-Ulhaas. Robust optical user motion tracking using a Kalman filter. In 10th ACM Symposium on Virtual Reality Software and Technology.
[HFP+00] Lorna Herda, Pascal Fua, Ralf Plänkers, Ronan Boulic, and Daniel Thalmann. Skeleton-based motion capture for robust reconstruction of human motion. In Proc. Computer Animation.
[OBBH00] James F. O'Brien, Robert E. Bodenheimer, Gabriel J. Brostow, and Jessica K. Hodgins. Automatic joint parameter estimation from magnetic motion capture data. In Graphics Interface.
[RL02] Maurice Ringer and Joan Lasenby. A procedure for automatically estimating model parameters in optical motion capture. In British Machine Vision Conference.
[SLH91] Guy L. Scott and H. Christopher Longuet-Higgins. An algorithm for associating the features of two images. In Proc. R. Soc. London, volume 244, pages 21-26.
[SPB+98] Marius-Calin Silaghi, Ralf Plänkers, Ronan Boulic, Pascal Fua, and Daniel Thalmann. Local and global skeleton fitting techniques for optical motion capture. Lecture Notes in Computer Science, 1537:26-40.
[TGB99] Deepak Tolani, Ambarish Goswami, and Norman I. Badler. Real-time inverse kinematics techniques for anthropomorphic limbs. Graphical Models, 62.
[Vic] Vicon iQ, Vicon Motion Systems Ltd,
[vLvR03] Robert van Liere and Arjen van Rhijn. Search space reduction in optical tracking. In A. Kunz and J. Deisinger, editors, Ninth Eurographics Workshop on Virtual Environments.
[ZH03] Victor B. Zordan and Nicholas C. Van Der Horst. Mapping optical motion capture data to skeletal motion using a physical model. In D. Breen and M. Lin, editors, Eurographics/SIGGRAPH Symposium on Computer Animation, 2003.
Epipolar Geometry We consider two perspective images of a scene as taken from a stereo pair of cameras (or equivalently, assume the scene is rigid and imaged with a single camera from two different locations).
More informationTalking Head: Synthetic Video Facial Animation in MPEG-4.
Talking Head: Synthetic Video Facial Animation in MPEG-4. A. Fedorov, T. Firsova, V. Kuriakin, E. Martinova, K. Rodyushkin and V. Zhislina Intel Russian Research Center, Nizhni Novgorod, Russia Abstract
More informationColorado School of Mines Computer Vision Professor William Hoff
Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ 1 Introduction to 2 What is? A process that produces from images of the external world a description
More informationThe Scientific Data Mining Process
Chapter 4 The Scientific Data Mining Process When I use a word, Humpty Dumpty said, in rather a scornful tone, it means just what I choose it to mean neither more nor less. Lewis Carroll [87, p. 214] In
More informationA Cognitive Approach to Vision for a Mobile Robot
A Cognitive Approach to Vision for a Mobile Robot D. Paul Benjamin Christopher Funk Pace University, 1 Pace Plaza, New York, New York 10038, 212-346-1012 benjamin@pace.edu Damian Lyons Fordham University,
More informationPHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY
PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY V. Knyaz a, *, Yu. Visilter, S. Zheltov a State Research Institute for Aviation System (GosNIIAS), 7, Victorenko str., Moscow, Russia
More informationAutomatic Labeling of Lane Markings for Autonomous Vehicles
Automatic Labeling of Lane Markings for Autonomous Vehicles Jeffrey Kiske Stanford University 450 Serra Mall, Stanford, CA 94305 jkiske@stanford.edu 1. Introduction As autonomous vehicles become more popular,
More informationComputer Puppetry: An Importance-Based Approach
Computer Puppetry: An Importance-Based Approach HYUN JOON SHIN, JEHEE LEE, and SUNG YONG SHIN Korea Advanced Institute of Science & Technology and MICHAEL GLEICHER University of Wisconsin Madison Computer
More informationA Study on SURF Algorithm and Real-Time Tracking Objects Using Optical Flow
, pp.233-237 http://dx.doi.org/10.14257/astl.2014.51.53 A Study on SURF Algorithm and Real-Time Tracking Objects Using Optical Flow Giwoo Kim 1, Hye-Youn Lim 1 and Dae-Seong Kang 1, 1 Department of electronices
More informationFrequently Asked Questions About VisionGauge OnLine
Frequently Asked Questions About VisionGauge OnLine The following frequently asked questions address the most common issues and inquiries about VisionGauge OnLine: 1. What is VisionGauge OnLine? VisionGauge
More informationThe 3D rendering pipeline (our version for this class)
The 3D rendering pipeline (our version for this class) 3D models in model coordinates 3D models in world coordinates 2D Polygons in camera coordinates Pixels in image coordinates Scene graph Camera Rasterization
More informationLeast-Squares Intersection of Lines
Least-Squares Intersection of Lines Johannes Traa - UIUC 2013 This write-up derives the least-squares solution for the intersection of lines. In the general case, a set of lines will not intersect at a
More informationTaking Inverse Graphics Seriously
CSC2535: 2013 Advanced Machine Learning Taking Inverse Graphics Seriously Geoffrey Hinton Department of Computer Science University of Toronto The representation used by the neural nets that work best
More informationSpeed Performance Improvement of Vehicle Blob Tracking System
Speed Performance Improvement of Vehicle Blob Tracking System Sung Chun Lee and Ram Nevatia University of Southern California, Los Angeles, CA 90089, USA sungchun@usc.edu, nevatia@usc.edu Abstract. A speed
More informationA Study on M2M-based AR Multiple Objects Loading Technology using PPHT
A Study on M2M-based AR Multiple Objects Loading Technology using PPHT Sungmo Jung, Seoksoo Kim * Department of Multimedia Hannam University 133, Ojeong-dong, Daedeok-gu, Daejeon-city Korea sungmoj@gmail.com,
More informationClustering and scheduling maintenance tasks over time
Clustering and scheduling maintenance tasks over time Per Kreuger 2008-04-29 SICS Technical Report T2008:09 Abstract We report results on a maintenance scheduling problem. The problem consists of allocating
More informationMotion Retargetting and Transition in Different Articulated Figures
Motion Retargetting and Transition in Different Articulated Figures Ming-Kai Hsieh Bing-Yu Chen Ming Ouhyoung National Taiwan University lionkid@cmlab.csie.ntu.edu.tw robin@ntu.edu.tw ming@csie.ntu.edu.tw
More information5-Axis Test-Piece Influence of Machining Position
5-Axis Test-Piece Influence of Machining Position Michael Gebhardt, Wolfgang Knapp, Konrad Wegener Institute of Machine Tools and Manufacturing (IWF), Swiss Federal Institute of Technology (ETH), Zurich,
More informationSYNTHESIZING FREE-VIEWPOINT IMAGES FROM MULTIPLE VIEW VIDEOS IN SOCCER STADIUM
SYNTHESIZING FREE-VIEWPOINT IMAGES FROM MULTIPLE VIEW VIDEOS IN SOCCER STADIUM Kunihiko Hayashi, Hideo Saito Department of Information and Computer Science, Keio University {hayashi,saito}@ozawa.ics.keio.ac.jp
More informationIMD4003 3D Computer Animation
Contents IMD4003 3D Computer Animation Strange from MoCap G03 Correcting Animation in MotionBuilder Prof. Chris Joslin Overview This document covers how to correct animation (specifically rotations) in
More informationA method of generating free-route walk-through animation using vehicle-borne video image
A method of generating free-route walk-through animation using vehicle-borne video image Jun KUMAGAI* Ryosuke SHIBASAKI* *Graduate School of Frontier Sciences, Shibasaki lab. University of Tokyo 4-6-1
More informationanimation animation shape specification as a function of time
animation animation shape specification as a function of time animation representation many ways to represent changes with time intent artistic motion physically-plausible motion efficiency control typically
More informationTracking performance evaluation on PETS 2015 Challenge datasets
Tracking performance evaluation on PETS 2015 Challenge datasets Tahir Nawaz, Jonathan Boyle, Longzhen Li and James Ferryman Computational Vision Group, School of Systems Engineering University of Reading,
More informationHuman Skeletal and Muscle Deformation Animation Using Motion Capture Data
Human Skeletal and Muscle Deformation Animation Using Motion Capture Data Ali Orkan Bayer Department of Computer Engineering, Middle East Technical University 06531 Ankara, Turkey orkan@ceng.metu.edu.tr
More informationME 115(b): Solution to Homework #1
ME 115(b): Solution to Homework #1 Solution to Problem #1: To construct the hybrid Jacobian for a manipulator, you could either construct the body Jacobian, JST b, and then use the body-to-hybrid velocity
More informationLow-resolution Character Recognition by Video-based Super-resolution
2009 10th International Conference on Document Analysis and Recognition Low-resolution Character Recognition by Video-based Super-resolution Ataru Ohkura 1, Daisuke Deguchi 1, Tomokazu Takahashi 2, Ichiro
More informationSimultaneous Gamma Correction and Registration in the Frequency Domain
Simultaneous Gamma Correction and Registration in the Frequency Domain Alexander Wong a28wong@uwaterloo.ca William Bishop wdbishop@uwaterloo.ca Department of Electrical and Computer Engineering University
More informationCharacter Animation from a Motion Capture Database
Max-Planck-Institut für Informatik Computer Graphics Group Saarbrücken, Germany Character Animation from a Motion Capture Database Master Thesis in Computer Science Computer Science Department University
More informationEFFICIENT VEHICLE TRACKING AND CLASSIFICATION FOR AN AUTOMATED TRAFFIC SURVEILLANCE SYSTEM
EFFICIENT VEHICLE TRACKING AND CLASSIFICATION FOR AN AUTOMATED TRAFFIC SURVEILLANCE SYSTEM Amol Ambardekar, Mircea Nicolescu, and George Bebis Department of Computer Science and Engineering University
More informationA new Optical Tracking System for Virtual and Augmented Reality Applications
IEEE Instrumentation and Measurement Technology Conference Budapest, Hungary, May 21 23, 2001 A new Optical Tracking System for Virtual and Augmented Reality Applications Miguel Ribo VRVis Competence Center
More informationTraffic Monitoring Systems. Technology and sensors
Traffic Monitoring Systems Technology and sensors Technology Inductive loops Cameras Lidar/Ladar and laser Radar GPS etc Inductive loops Inductive loops signals Inductive loop sensor The inductance signal
More informationRobot Task-Level Programming Language and Simulation
Robot Task-Level Programming Language and Simulation M. Samaka Abstract This paper presents the development of a software application for Off-line robot task programming and simulation. Such application
More informationIntroduction Epipolar Geometry Calibration Methods Further Readings. Stereo Camera Calibration
Stereo Camera Calibration Stereo Camera Calibration Stereo Camera Calibration Stereo Camera Calibration 12.10.2004 Overview Introduction Summary / Motivation Depth Perception Ambiguity of Correspondence
More informationVRSPATIAL: DESIGNING SPATIAL MECHANISMS USING VIRTUAL REALITY
Proceedings of DETC 02 ASME 2002 Design Technical Conferences and Computers and Information in Conference Montreal, Canada, September 29-October 2, 2002 DETC2002/ MECH-34377 VRSPATIAL: DESIGNING SPATIAL
More informationLOCAL SURFACE PATCH BASED TIME ATTENDANCE SYSTEM USING FACE. indhubatchvsa@gmail.com
LOCAL SURFACE PATCH BASED TIME ATTENDANCE SYSTEM USING FACE 1 S.Manikandan, 2 S.Abirami, 2 R.Indumathi, 2 R.Nandhini, 2 T.Nanthini 1 Assistant Professor, VSA group of institution, Salem. 2 BE(ECE), VSA
More informationOPTIMIZATION MODEL OF EXTERNAL RESOURCE ALLOCATION FOR RESOURCE-CONSTRAINED PROJECT SCHEDULING PROBLEMS
OPTIMIZATION MODEL OF EXTERNAL RESOURCE ALLOCATION FOR RESOURCE-CONSTRAINED PROJECT SCHEDULING PROBLEMS Kuo-Chuan Shih Shu-Shun Liu Ph.D. Student, Graduate School of Engineering Science Assistant Professor,
More informationTracking and Recognition in Sports Videos
Tracking and Recognition in Sports Videos Mustafa Teke a, Masoud Sattari b a Graduate School of Informatics, Middle East Technical University, Ankara, Turkey mustafa.teke@gmail.com b Department of Computer
More information3D SCANNING: A NEW APPROACH TOWARDS MODEL DEVELOPMENT IN ADVANCED MANUFACTURING SYSTEM
3D SCANNING: A NEW APPROACH TOWARDS MODEL DEVELOPMENT IN ADVANCED MANUFACTURING SYSTEM Dr. Trikal Shivshankar 1, Patil Chinmay 2, Patokar Pradeep 3 Professor, Mechanical Engineering Department, SSGM Engineering
More informationBlender in Research & Education
Blender in Research & Education 1 Overview The RWTH Aachen University The Research Projects Blender in Research Modeling and scripting Video editing Blender in Education Modeling Simulation Rendering 2
More informationEpipolar Geometry and Visual Servoing
Epipolar Geometry and Visual Servoing Domenico Prattichizzo joint with with Gian Luca Mariottini and Jacopo Piazzi www.dii.unisi.it/prattichizzo Robotics & Systems Lab University of Siena, Italy Scuoladi
More informationReal-Time Tracking of Pedestrians and Vehicles
Real-Time Tracking of Pedestrians and Vehicles N.T. Siebel and S.J. Maybank. Computational Vision Group Department of Computer Science The University of Reading Reading RG6 6AY, England Abstract We present
More informationVEHICLE LOCALISATION AND CLASSIFICATION IN URBAN CCTV STREAMS
VEHICLE LOCALISATION AND CLASSIFICATION IN URBAN CCTV STREAMS Norbert Buch 1, Mark Cracknell 2, James Orwell 1 and Sergio A. Velastin 1 1. Kingston University, Penrhyn Road, Kingston upon Thames, KT1 2EE,
More informationDesign of a six Degree-of-Freedom Articulated Robotic Arm for Manufacturing Electrochromic Nanofilms
Abstract Design of a six Degree-of-Freedom Articulated Robotic Arm for Manufacturing Electrochromic Nanofilms by Maxine Emerich Advisor: Dr. Scott Pierce The subject of this report is the development of
More informationPart-Based Recognition
Part-Based Recognition Benedict Brown CS597D, Fall 2003 Princeton University CS 597D, Part-Based Recognition p. 1/32 Introduction Many objects are made up of parts It s presumably easier to identify simple
More informationA Reliability Point and Kalman Filter-based Vehicle Tracking Technique
A Reliability Point and Kalman Filter-based Vehicle Tracing Technique Soo Siang Teoh and Thomas Bräunl Abstract This paper introduces a technique for tracing the movement of vehicles in consecutive video
More informationCG T17 Animation L:CC, MI:ERSI. Miguel Tavares Coimbra (course designed by Verónica Orvalho, slides adapted from Steve Marschner)
CG T17 Animation L:CC, MI:ERSI Miguel Tavares Coimbra (course designed by Verónica Orvalho, slides adapted from Steve Marschner) Suggested reading Shirley et al., Fundamentals of Computer Graphics, 3rd
More informationJournal of Industrial Engineering Research. Adaptive sequence of Key Pose Detection for Human Action Recognition
IWNEST PUBLISHER Journal of Industrial Engineering Research (ISSN: 2077-4559) Journal home page: http://www.iwnest.com/aace/ Adaptive sequence of Key Pose Detection for Human Action Recognition 1 T. Sindhu
More informationData-driven Motion Estimation with Low-Cost Sensors
Data-driven Estimation with Low-Cost Sensors Liguang Xie 1, Mithilesh Kumar 1, Yong Cao 1,Denis Gracanin 1, Francis Quek 1 1 Computer Science Department Virginia Polytechnic Institute and State University,
More informationDINAMIC AND STATIC CENTRE OF PRESSURE MEASUREMENT ON THE FORCEPLATE. F. R. Soha, I. A. Szabó, M. Budai. Abstract
ACTA PHYSICA DEBRECINA XLVI, 143 (2012) DINAMIC AND STATIC CENTRE OF PRESSURE MEASUREMENT ON THE FORCEPLATE F. R. Soha, I. A. Szabó, M. Budai University of Debrecen, Department of Solid State Physics Abstract
More informationTracking of Small Unmanned Aerial Vehicles
Tracking of Small Unmanned Aerial Vehicles Steven Krukowski Adrien Perkins Aeronautics and Astronautics Stanford University Stanford, CA 94305 Email: spk170@stanford.edu Aeronautics and Astronautics Stanford
More informationPoker Vision: Playing Cards and Chips Identification based on Image Processing
Poker Vision: Playing Cards and Chips Identification based on Image Processing Paulo Martins 1, Luís Paulo Reis 2, and Luís Teófilo 2 1 DEEC Electrical Engineering Department 2 LIACC Artificial Intelligence
More informationUnderstanding Purposeful Human Motion
M.I.T Media Laboratory Perceptual Computing Section Technical Report No. 85 Appears in Fourth IEEE International Conference on Automatic Face and Gesture Recognition Understanding Purposeful Human Motion
More informationIn: Proceedings of RECPAD 2002-12th Portuguese Conference on Pattern Recognition June 27th- 28th, 2002 Aveiro, Portugal
Paper Title: Generic Framework for Video Analysis Authors: Luís Filipe Tavares INESC Porto lft@inescporto.pt Luís Teixeira INESC Porto, Universidade Católica Portuguesa lmt@inescporto.pt Luís Corte-Real
More informationAutodesk Fusion 360: Assemblies. Overview
Overview In this module you will learn how different components can be put together to create an assembly. We will use several tools in Fusion 360 to make sure that these assemblies are constrained appropriately
More informationThe Big Data methodology in computer vision systems
The Big Data methodology in computer vision systems Popov S.B. Samara State Aerospace University, Image Processing Systems Institute, Russian Academy of Sciences Abstract. I consider the advantages of
More informationSpatio-Temporally Coherent 3D Animation Reconstruction from Multi-view RGB-D Images using Landmark Sampling
, March 13-15, 2013, Hong Kong Spatio-Temporally Coherent 3D Animation Reconstruction from Multi-view RGB-D Images using Landmark Sampling Naveed Ahmed Abstract We present a system for spatio-temporally
More informationUsing LSI for Implementing Document Management Systems Turning unstructured data from a liability to an asset.
White Paper Using LSI for Implementing Document Management Systems Turning unstructured data from a liability to an asset. Using LSI for Implementing Document Management Systems By Mike Harrison, Director,
More informationTracking in flussi video 3D. Ing. Samuele Salti
Seminari XXIII ciclo Tracking in flussi video 3D Ing. Tutors: Prof. Tullio Salmon Cinotti Prof. Luigi Di Stefano The Tracking problem Detection Object model, Track initiation, Track termination, Tracking
More informationIntroduction to Quantum Computing
Introduction to Quantum Computing Javier Enciso encisomo@in.tum.de Joint Advanced Student School 009 Technische Universität München April, 009 Abstract In this paper, a gentle introduction to Quantum Computing
More informationT-REDSPEED White paper
T-REDSPEED White paper Index Index...2 Introduction...3 Specifications...4 Innovation...6 Technology added values...7 Introduction T-REDSPEED is an international patent pending technology for traffic violation
More informationIntroduction to Pattern Recognition
Introduction to Pattern Recognition Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr CS 551, Spring 2009 CS 551, Spring 2009 c 2009, Selim Aksoy (Bilkent University)
More informationStick It! Articulated Tracking using Spatial Rigid Object Priors
Stick It! Articulated Tracking using Spatial Rigid Object Priors Søren Hauberg and Kim Steenstrup Pedersen {hauberg, kimstp}@diku.dk, The escience Centre, Dept. of Computer Science, University of Copenhagen
More informationGraphics. Computer Animation 고려대학교 컴퓨터 그래픽스 연구실. kucg.korea.ac.kr 1
Graphics Computer Animation 고려대학교 컴퓨터 그래픽스 연구실 kucg.korea.ac.kr 1 Computer Animation What is Animation? Make objects change over time according to scripted actions What is Simulation? Predict how objects
More informationanimation shape specification as a function of time
animation 1 animation shape specification as a function of time 2 animation representation many ways to represent changes with time intent artistic motion physically-plausible motion efficiency typically
More informationOptical Tracking Using Projective Invariant Marker Pattern Properties
Optical Tracking Using Projective Invariant Marker Pattern Properties Robert van Liere, Jurriaan D. Mulder Department of Information Systems Center for Mathematics and Computer Science Amsterdam, the Netherlands
More informationA STRATEGIC PLANNER FOR ROBOT EXCAVATION' by Humberto Romero-Lois, Research Assistant, Department of Civil Engineering
A STRATEGIC PLANNER FOR ROBOT EXCAVATION' by Humberto Romero-Lois, Research Assistant, Department of Civil Engineering Chris Hendrickson, Professor, Department of Civil Engineering, and Irving Oppenheim,
More informationDYNAMIC RANGE IMPROVEMENT THROUGH MULTIPLE EXPOSURES. Mark A. Robertson, Sean Borman, and Robert L. Stevenson
c 1999 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or
More information