Model-based Avatar Head Animation for Interactive Virtual Reality Applications
1 Model-based Avatar Head Animation for Interactive Virtual Reality Applications. Emil M. Petriu, Dr. Eng., FIEEE; Marius D. Cordea, M.A.Sc.; Michel Bondy, M.A.Sc. School of Information Technology and Engineering, University of Ottawa, Ottawa, ON, K1N 6N5, Canada
2 Model-based video coding, also known as knowledge-based video coding, has emerged as a very low bit rate video compression technique. The principle of this compression is to generate a parametric model of the image at the emission end and to transmit only the parameters describing how the model changes in time. These differential parameters are then used to animate the model recovered at the reception end. This talk will review some basic elements of muscle-based avatar model animation and will present two aspects of the research work on interactive avatar head animation carried out in the SMRLab at the University of Ottawa: (i) real-time recovery of 3D motion parameters from sequences of 2D live images, and (ii) audio-video synchronization using the correlations between the lip shape and the properties of the performer's voice, which allows the audio stream to drive the lip contour shape in the animation process.
3 [Block diagram: the human owner interacts with the VIRTUAL SCENE through an avatar HCI (Human-Computer Interaction) interface. Computer-generated models (animation script, object interaction models, object shape & behavior models) drive virtual object manipulation within the scene; motion tracking and object recognition feed back into the scene. The HCI interface couples visual feedback(s) with video sensor(s) and structured light, tactile feedback(s) with tactile sensor(s), and force feedback(s) with force sensor(s).]
4 Face and lip animation using model-based audio and video coding
5 Muscle-based avatar modeling. Model-based coding is a very low bit rate video compression method for telepresence and multimedia communications. The principle is to generate a parametric model of the image acquired at the emission end and to transmit only the differential parameters describing how the model changes in time. These parameters are then used to animate the recovered model at the reception end; a schematic sketch of this transmit-only-the-deltas principle is given below. The 3D generic mesh model is molded by Marius Cordea's face image (on the left) to generate the 3D polygonal model (center).
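A minimal sketch (not the actual codec) of the model-based coding principle described above: the emission end sends only how the head parameters changed since the previous frame, and the reception end applies those differential parameters to its own copy of the model. The parameter names are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class HeadParams:
    # Illustrative pose parameters of the head model
    tx: float = 0.0
    ty: float = 0.0
    tz: float = 0.0
    rx: float = 0.0
    ry: float = 0.0
    rz: float = 0.0

def encode_frame(prev: HeadParams, current: HeadParams) -> dict:
    """Emission end: transmit only the differential parameters."""
    return {k: getattr(current, k) - getattr(prev, k) for k in vars(current)}

def decode_frame(model: HeadParams, delta: dict) -> HeadParams:
    """Reception end: animate the recovered model with the received deltas."""
    return HeadParams(**{k: getattr(model, k) + delta[k] for k in vars(model)})
```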
6 Facial muscles
7 Facial expressions are described using the Facial Action Coding System (FACS), which allows the movements of specific facial muscles to be controlled. It supports 46 Action Units (35 are muscle-controlled and 11 do not involve facial muscles).
8 By combining different muscle actions it becomes possible to obtain a variety of facial expressions for Marius' avatar: Neutral, Happy, Sad, Surprised. A simplified sketch of such a combination follows.
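A simplified sketch of combining facial action intensities into an expression. The actual system drives muscle models; here each Action Unit is assumed, for illustration only, to contribute a precomputed vertex-displacement field scaled by its intensity.

```python
import numpy as np

def apply_expression(neutral_vertices, au_displacements, au_intensities):
    """neutral_vertices: N x 3 mesh; au_displacements: {au_id: N x 3 displacement field};
    au_intensities: {au_id: weight in [0, 1]} describing the expression."""
    vertices = neutral_vertices.astype(float)
    for au_id, weight in au_intensities.items():
        vertices = vertices + weight * au_displacements[au_id]
    return vertices

# e.g. a "happy" expression might combine AU6 (cheek raiser) and AU12 (lip corner puller):
# happy = apply_expression(mesh, displacement_fields, {6: 0.7, 12: 0.9})
```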
9 TRACKING 3D FACE MOTIONS. Stochastic color-based detection and tracking of human faces: the skin color distribution of people with different skin colors forms a compact cluster with a regular shape in the rg (or HS) chromatic color space. ==> Human faces are therefore modeled as a Mixture of Gaussians (MOG) distribution in the 2D normalized color space, which supports face and non-face pixel classification (see the sketch below).
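A minimal sketch (assumed implementation, not the authors' code) of skin-color pixel classification in normalized rg space with a Mixture of Gaussians, assuming scikit-learn is available. The threshold value is illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def to_rg(image_rgb):
    """Convert an H x W x 3 RGB image to normalized (r, g) chromaticity pairs."""
    rgb = image_rgb.astype(np.float64).reshape(-1, 3)
    s = rgb.sum(axis=1, keepdims=True) + 1e-6      # avoid division by zero
    return rgb[:, :2] / s                          # r = R/(R+G+B), g = G/(R+G+B)

def train_skin_model(skin_patches, n_components=3):
    """Fit a MOG to rg samples taken from hand-labeled skin regions."""
    samples = np.vstack([to_rg(p) for p in skin_patches])
    return GaussianMixture(n_components=n_components, covariance_type="full").fit(samples)

def classify_skin(image_rgb, model, log_lik_threshold=-6.0):
    """Label each pixel as face/skin (True) or non-face (False) by MOG log-likelihood."""
    scores = model.score_samples(to_rg(image_rgb))
    return (scores > log_lik_threshold).reshape(image_rgb.shape[:2])
```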
10 Face shape evaluation. Once the first frame of the sequence is segmented based on color information, the face detection algorithm searches for faces based on shape. The oval shape of a face can be approximated by an elliptical outline. The region-based detection algorithm has the advantage over edge-based detection of being more robust against noise and changes in illumination. To find faces in images we search for elliptical components using a region-growing algorithm applied at a coarse resolution of the segmented image. Normally, we obtain one large connected area for the face, a smaller one for the hands, and small connected components in the background. Each connected component (CC) is approximated by its best-fit ellipse computed from its moments; the moments-based fit is sketched below. An ellipse is defined by its center of gravity (µx, µy), the lengths a and b of its minor and major axes (ellipse size), and its orientation θ. The face is selected as the largest blob. In order to increase computational speed, we use offline pre-computed elliptical outlines, called matching ellipses, as opposed to the detection ellipses, when fitting a matching ellipse to a detected face.
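A minimal sketch (assumed implementation, not the authors' code) of approximating a connected component by its best-fit ellipse from image moments.

```python
import numpy as np

def fit_ellipse_from_moments(mask):
    """mask: boolean H x W array of one connected component.
    Returns (mu_x, mu_y, a, b, theta): center, major/minor semi-axes, orientation."""
    ys, xs = np.nonzero(mask)
    mu_x, mu_y = xs.mean(), ys.mean()
    # Central second-order moments
    dx, dy = xs - mu_x, ys - mu_y
    mxx, myy, mxy = (dx * dx).mean(), (dy * dy).mean(), (dx * dy).mean()
    # Eigen-decomposition of the covariance gives axis lengths and orientation
    cov = np.array([[mxx, mxy], [mxy, myy]])
    eigvals, eigvecs = np.linalg.eigh(cov)              # ascending eigenvalues
    b, a = 2.0 * np.sqrt(eigvals)                       # minor, major semi-axes
    theta = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])    # orientation of the major axis
    return mu_x, mu_y, a, b, theta
```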
11 Tracking for 2½D Head Pose Recovery. In order to increase its robustness, our system combines several basic tracking methods in a unified framework used to compute the 2½D motion parameters of a head. We use both face color and contour matching techniques to track the 2D projection of the head. In order to find head models convenient for tracking, we exploit two geometrical properties: rigidity and symmetry. The inherent rigidity of the head is quite a convenient property for tracking, and its rough symmetry about the vertical axis passing through its center results in an approximately constant 2D head projection onto the image plane. This projection can be modeled by an ellipse with a fixed aspect ratio of 1.2 and a variable orientation. Once the head is detected, an elliptical outline is fitted to the head contour. Every time a new image becomes available, the tracker tries to fit the ellipse model from the previous image so as to best approximate the position of the head in the new image. Essentially, tracking consists of an update of the ellipse state to provide the best model match for the head in the new image. The state is updated by a hypothesize-and-test procedure in which the goodness of the match depends on the intensity gradients around the object's boundary and the color of the object's interior, as in the sketch below.
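A minimal sketch (assumed, not the authors' code) of the hypothesize-and-test ellipse update: candidate states are scored by intensity gradients along the boundary and by the skin-color content of the interior, and the best-scoring candidate becomes the new track. Only the center is perturbed here, and the interior is approximated by its bounding box, both for brevity.

```python
import numpy as np

def ellipse_boundary(state, n=64):
    """state = (cx, cy, b, theta); the aspect ratio a/b is fixed at 1.2."""
    cx, cy, b, theta = state
    a = 1.2 * b
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x = a * np.cos(t) * np.cos(theta) - b * np.sin(t) * np.sin(theta) + cx
    y = a * np.cos(t) * np.sin(theta) + b * np.sin(t) * np.cos(theta) + cy
    return x.astype(int), y.astype(int)

def score(state, grad_mag, skin_mask):
    """Goodness of match: edge strength on the contour plus skin color inside."""
    xb, yb = ellipse_boundary(state)
    h, w = grad_mag.shape
    xb, yb = np.clip(xb, 0, w - 1), np.clip(yb, 0, h - 1)
    boundary_term = grad_mag[yb, xb].mean()
    interior_term = skin_mask[yb.min():yb.max() + 1, xb.min():xb.max() + 1].mean()
    return boundary_term + interior_term            # relative weights omitted

def update_track(prev_state, grad_mag, skin_mask, deltas=(-4, 0, 4)):
    """Hypothesize small perturbations of the previous state and keep the best match."""
    cx, cy, b, theta = prev_state
    candidates = [(cx + dx, cy + dy, b, theta) for dx in deltas for dy in deltas]
    return max(candidates, key=lambda s: score(s, grad_mag, skin_mask))
```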
12 Real-time tracking of the avatar owner's face 2½D parameters: position and orientation. Marius' live face image is tracked by Marius' avatar.
13 Linear Kalman Filter (LKF) for 2½D Tracking. The measurement values obtained by tracking are naturally corrupted by noise, resulting in unstable tracking behaviour: the face ellipse will be jumpy and will easily lose the locked target. For instance, in a sequence with multiple faces the ellipse will jump from one face to another during scene motion instead of smoothly following the initial target as expected. Localization errors in the face tracking propagate to the recovered pose parameters; when used for synthesis, applying these pose computations to a 3D head model results in jerky movements of the animated head. To overcome this inconvenience we use an optimal discrete LKF to process the measurements of the tracking parameters for each frame. [Figure: the tracking process in the case of a locked target and of a target lost after a quick move.] The continuous linear imaging process is sampled at discrete time intervals by grabbing images at a constant rate. These images are then sequentially analyzed using an LKF to determine the motion trajectory of the face within a determined error range. The LKF is a recursive procedure that consists of two stages: time updates (prediction) and measurement updates (correction). At each iteration, the filter provides an optimal estimate of the current state using the current input measurement, and produces an estimate of the future state using the underlying state model. The values that we want to smooth and predict independently are the tracker state parameters. The tracker employs the LKF as a recursive motion prediction tool for the recovery of the 2½D head pose parameters; a sketch of the predict/correct recursion is given below.
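A minimal sketch (with assumed noise parameters, not the authors' filter) of a discrete linear Kalman filter smoothing one tracker state parameter, such as the ellipse center x, under a constant-velocity motion model.

```python
import numpy as np

class LKF1D:
    def __init__(self, dt=1.0, process_var=1e-2, meas_var=4.0):
        self.x = np.zeros(2)                        # state: [position, velocity]
        self.P = np.eye(2) * 1e3                    # initial uncertainty
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
        self.H = np.array([[1.0, 0.0]])             # only the position is measured
        self.Q = np.eye(2) * process_var
        self.R = np.array([[meas_var]])

    def predict(self):
        """Time update: propagate the state and its covariance."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]                            # predicted position for the tracker

    def correct(self, z):
        """Measurement update: blend the prediction with the noisy measurement z."""
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + (K @ (np.array([z]) - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                            # smoothed position
```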
14 Tracking a moving face in an image sequence. The diagrams show the estimated vs. measured position of the elliptical tracker and its confidence, for a sequence of 200 frames containing a moving face. The face is detected in the first 5 frames, then successfully tracked in the following frames.
15 Tracking for 3D Head Pose Recovery. The general problem of recovering 3D position parameters from 2D images can be solved using different 2D views of the 3D objects. If these 2D images are taken at the same time, the problem is solved by stereovision. Another approach, using monocular 2D images of moving objects, is known as Structure-From-Motion (SFM). Given 2D object images, the SFM problem aims to recover: the 3D object coordinates, the relative 3D camera-object motion, and the camera geometry (camera calibration). SFM assumes no prior knowledge about the 3D model, the motion, or the camera calibration; it aims to recover these 3D parameters from 2D observations over a sequence of images. The SFM framework consists of two main modules: a 2D feature tracker and a recursive 3D estimator.
16 Due to the perspective camera model, SFM is a nonlinear problem. Because of the continuous nature of visual information acquisition and the large amounts of data involved, we decided to use a recursive estimation technique. Least-squares techniques need a priori knowledge of the 3D object models and are not able to handle gross and systematic errors, or correlation in the measurements. A robust alternative to the least-squares methods is the Extended Kalman Filter (EKF), for the following reasons: it explicitly considers the observation uncertainties; it is fed with measurements recursively; it is a simple and robust solution to parameter estimation problems. Our SFM system recursively recovers the 3D structure, the 3D motion and the perspective camera geometry from feature correspondences over a sequence of 2D images. To speed up the calculations we use a motion model that simplifies the Jacobian. The EKF is used to solve the SFM problem, resulting in an accurate, stable and real-time solution. This EKF takes into consideration the nonlinear aspect of the mapping: we use a perspective camera model to reflect the mapping between the 3D world and its 2D projection.
17 Continuous 3D pose recovery using the Extended Kalman Filter. Initialization process; grab frame $n$; 2D-track each feature: $(u_{i,\mathrm{est}}, v_{i,\mathrm{est}}) \rightarrow (u_{i,\mathrm{real}}, v_{i,\mathrm{real}})$; 2D tracker estimate for the next frame: $(u_{i,\mathrm{real}}, v_{i,\mathrm{real}}) \rightarrow (u_{i,\mathrm{est+}}, v_{i,\mathrm{est+}})$; EKF estimate for the next frame, and SFM output: $(u_{i,\mathrm{real}}, v_{i,\mathrm{real}}) \rightarrow (u_{i,\mathrm{est+}}, v_{i,\mathrm{est+}})$, with state $x = (t_x, t_y, t_z, \omega_x, \omega_y, \omega_z, f, \dot{t}_x, \dot{t}_y, \dot{t}_z, \dot{\omega}_x, \dot{\omega}_y, \dot{\omega}_z, Z_i)$; calibrate the camera; adjust the structure; select the best estimate for frame $n+1$: $(u_{i,\mathrm{est+}}, v_{i,\mathrm{est+}})_{n+1}$; translate and rotate the model (modify the pose). A simplified EKF step is sketched below.
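A reduced sketch (not the authors' filter) of an EKF step for 3D pose recovery from 2D feature tracks. The state here is only (tx, ty, tz, wx, wy, wz, f); the full SFM filter above also carries translational and rotational velocities and the feature depths Zi. The Jacobian is obtained numerically for brevity.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def project(state, pts3d):
    """Perspective projection of model points under the current pose and focal length."""
    t, w, f = state[:3], state[3:6], state[6]
    cam = Rotation.from_rotvec(w).apply(pts3d) + t
    return (f * cam[:, :2] / cam[:, 2:3]).ravel()     # stacked (u1, v1, u2, v2, ...)

def numerical_jacobian(state, pts3d, eps=1e-5):
    """Central-difference Jacobian of the projection with respect to the state."""
    J = np.zeros((2 * len(pts3d), len(state)))
    for k in range(len(state)):
        d = np.zeros_like(state)
        d[k] = eps
        J[:, k] = (project(state + d, pts3d) - project(state - d, pts3d)) / (2 * eps)
    return J

def ekf_step(state, P, pts3d, measured_uv, Q, R):
    """One EKF iteration: constant-pose prediction, then correction with the 2D tracks."""
    P = P + Q                                         # predict (identity transition)
    H = numerical_jacobian(state, pts3d)
    innovation = measured_uv.ravel() - project(state, pts3d)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    state = state + K @ innovation
    P = (np.eye(len(state)) - K @ H) @ P
    return state, P
```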
18 Calibration. A computer-graphics-generated / animated 3D head model; extraction of the synthetic motion vector $(t_x^m, t_y^m, t_z^m, \theta_x^m, \theta_y^m, \theta_z^m)$; 2D live image of the synthetic 2D projection of the 3D head; human-aided point identification; 3D/2D EKF (IEKF) based 3D tracking; extraction of the estimated motion vector $(t_x^e, t_y^e, t_z^e, \theta_x^e, \theta_y^e, \theta_z^e)$; calibration errors.
19 The 3D model provides the initial structure parameters of the Kalman filter. Each 2D feature point corresponds to a structure point. As shown in the figure, the feature points are obtained by intersecting the 2D image plane with a ray rooted in the camera's center of projection and aimed at the 3D structure point on the head model. The typical point identification problem of 3D pose recovery from 2D images is solved in our case by identifying corresponding points in both the 2D live image of the subject and the 3D model of the subject's head. Identical point selection process on Marius' image and the corresponding 3D model projection.
20 Mesh fitting Point selection in the initial frame
21 Calibration during tracking
22 Calibration during tracking
23 True and recovered rotation angles: EKF-4 points (a) and EKF-5 points (b).
24 Examples of real-time animation of the face model using EKF tracking
25 AUDIO-STREAM DRIVEN LIP ANIMATION. Speech-driven lip animation: 1. Training phase. A reference video of a given speaker is recorded with the sound stream synchronized with the video stream. The reference video is recorded under laboratory conditions with little background noise, with makeup marking the lips, with proper lighting and with the speaker's head positioned squarely in front of the camera. These constraints (aside from low background noise) are needed only in the training stage. The audio signal is cut into frames and processed using cepstral analysis; a 20-element vector containing the cepstral coefficients describes each audio frame (see the sketch after this slide). The video stream is analyzed to determine the parameters of the lip model that best fit a given video frame. An audio-video mapping is created, associating a set of lip model parameters from the video analysis to each set of cepstral coefficients from the audio analysis. 2. Animation phase. This time there is no video analysis and therefore no need for makeup, proper lighting or careful camera alignment. The audio analysis finds the cepstral vector from the reference mapping that is most similar (by cepstral distance) to each of the input audio frames and uses the associated set of lip model parameters as the values to use in the animation of the video frame.
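A minimal sketch (assumed frame sizes, not the authors' code) of the audio analysis: the signal is cut into overlapping frames and each frame is described by a 20-element real-cepstrum vector.

```python
import numpy as np

def frame_signal(signal, frame_len=512, hop=256):
    """Split a 1-D audio signal into overlapping frames."""
    n = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.stack([signal[i * hop:i * hop + frame_len] for i in range(n)])

def cepstral_vector(frame, n_coeffs=20):
    """Real cepstrum of one windowed frame, truncated to the first n_coeffs coefficients."""
    windowed = frame * np.hamming(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed)) + 1e-10    # avoid log(0)
    cepstrum = np.fft.irfft(np.log(spectrum))
    return cepstrum[:n_coeffs]

def analyze(signal):
    """One 20-element cepstral vector per audio frame."""
    return np.array([cepstral_vector(f) for f in frame_signal(signal)])
```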
26 Parametric lip contour model. The parameters of the lip contour model are: xo, yo = the origin of the outside parabolas; xi, yi = the origin of the inside parabolas; Bo = outer height; Bi = inner height; Ao = outer width; Ai = inner width; D = depth of dip; C = width of dip; E = offset height of the cosine function; tordero = top outside parabola order; bordero = bottom outside parabola order; orderi = inside parabola order (same on both top and bottom). [Figure: the lip contour model used in the mapping.] The only parameters of the lip model that are associated with the cepstral coefficients are the outer width Ao and the outer height Bo. Relations can be found linking the parameter values of the inner contour of the lip model to the parameter values of the outer contour; therefore, estimating the inner contour values from the audio signal would be redundant. A sketch of evaluating the outer contour follows.
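A minimal sketch of evaluating the outer lip contour from the model parameters. The exact functional form (generalized parabolas plus a cosine "dip" on the top lip) is assumed here for illustration; only Ao, Bo and the parabola orders named on the slide are used, and the dip term is omitted.

```python
import numpy as np

def outer_lip_contour(Ao, Bo, tordero=2, bordero=2, n=200):
    """Return (x, y_top, y_bottom) for the outer lip outline, centered at the origin.
    Top and bottom halves are generalized parabolas y = +/- B * (1 - |x/A|**order)."""
    x = np.linspace(-Ao, Ao, n)
    y_top = Bo * (1.0 - np.abs(x / Ao) ** tordero)      # upper outer contour
    y_bottom = -Bo * (1.0 - np.abs(x / Ao) ** bordero)  # lower outer contour
    return x, y_top, y_bottom
```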
27 Mapping audio_cepstrum_parameters to video_lip_contour_parameters. During the training phase, the cepstral coefficients of every frame of the training video that also contains speech are associated with the optimal lip model parameters Ao and Bo. What has been created is a map telling the system that when the speaker produces this sound, his/her lips make this shape. In the animation phase, the mapping of reference cepstral coefficients to lip model parameters is used to estimate the lip positions of the synthetic face: the lip contour model is found from the cepstral parameters of the audio stream, as sketched below.
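A minimal sketch (assumed, not the authors' code) of the animation-phase lookup: each input cepstral vector is matched to the closest reference vector by Euclidean cepstral distance, and the associated (Ao, Bo) lip parameters drive the synthetic lips.

```python
import numpy as np

def build_mapping(ref_cepstra, ref_lip_params):
    """ref_cepstra: M x 20 array of training vectors; ref_lip_params: M x 2 array of (Ao, Bo)."""
    return np.asarray(ref_cepstra), np.asarray(ref_lip_params)

def lips_from_audio(input_cepstra, mapping):
    """Return an N x 2 array of (Ao, Bo) values, one per input audio frame."""
    ref_c, ref_p = mapping
    out = []
    for c in np.atleast_2d(input_cepstra):
        d = np.linalg.norm(ref_c - c, axis=1)     # cepstral distance to every reference frame
        out.append(ref_p[np.argmin(d)])           # pick the closest reference frame's lips
    return np.array(out)
```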
28 Examples of the lip model being molded to the shape of the speaker's lips. Comparing the speech-driven and the real lip shape for a female speaker saying the ten digits in French: zéro, un, deux, ..., neuf.
29 Comparing the speech-driven and the real lip shape for a male speaker saying the ten digits in French: zéro, un, deux, ..., neuf.
30 SMRLab Publications
Refereed Journal Papers
* M.D. Cordea, E.M. Petriu, N.D. Georganas, D.C. Petriu, T.E. Whalen, "Real-Time 2½D Head Pose Recovery for Model-Based Video-Coding," IEEE Trans. Instrum. Meas., Vol. 50, No. 4.
* M.D. Cordea, D.C. Petriu, E.M. Petriu, N.D. Georganas, T.E. Whalen, "3-D Head Pose Recovery for Interactive Virtual Reality Avatars," IEEE Trans. Instrum. Meas., Vol. 51, No. 4.
Conference Papers
* M.H. Assaf, M. Cordea, M. Bondy, C.M. Nafornita, E.M. Petriu, H.J.W. Spoelder, "Image/Voice Modeling and Synchronization for Model-Based Video-Telephony," Proc. ET&VS-IM/97, IEEE Workshop on Emergent Technol. and Virtual Systems for Instrum. Meas., Niagara Falls, Ont., 1997.
* M. Cordea, E.M. Petriu, D.C. Petriu, "Object-Oriented Face Animation for Model-Based Video Compression Applications," Proc. ICT'98, Intl. Conf. Telecom., Vol. 1, Porto Carras, Greece, 1998.
* H.J.W. Spoelder, E.M. Petriu, T. Whalen, D.C. Petriu, M. Cordea, "Knowledge-Based Animation of Articulated Anthropomorphic Models for Virtual Reality Applications," Proc. IMTC/99, IEEE Instrum. Meas. Technol. Conf., Venice, Italy, May 1999.
* M.D. Bondy, E.M. Petriu, M.D. Cordea, N.D. Georganas, D.C. Petriu, T.E. Whalen, "Model-based Face and Lip Animation for Interactive Virtual Reality Applications," Proc. ACM Multimedia 2001, Ottawa, ON, Sept. 2001.
Theses
* M.D. Cordea, "Real Time 3D Head Pose Recovery for Model Based Video Coding," M.A.Sc. Thesis.
* M. Bondy, "Voice Stream Based Lip Animation for Audio-Video Communication," M.A.Sc. Thesis, 2001.
