Robust Real-Time Face Tracking and Gesture Recognition


J. Heinzmann and A. Zelinsky
Department of Systems Engineering
Research School of Information Sciences and Engineering
Australian National University, Canberra, ACT 0200, Australia

Abstract

People naturally express themselves through facial gestures. We have implemented an interface that tracks a person's facial features robustly in real time (30Hz) and requires no artificial aids such as special illumination or facial makeup. Even if features become occluded, the system recovers tracking within a couple of frames after the features reappear in the image. On top of this fault-tolerant face tracker we have implemented real-time gesture recognition capable of distinguishing 12 different gestures, ranging from "yes", "no" and "maybe" to winks, blinks and "asleep".

1 Introduction

To advance the field of robotics, greater efforts are needed to improve the communication interface between humans and robots. An important area of this work is the recognition of gestures of the head and body and of facial expressions. Such body language is regarded as one of the most natural forms of human expression, and if we are to create intelligent robots, human body language must be understood. Robots that can communicate in this manner will be particularly useful to disabled people and to users who are inexperienced with traditional computer interfaces. A gesture interface could open the door to many applications, ranging from the control of machines to "helping hands" for the elderly and disabled. The crucial aspects of such systems are their robustness and their real-time capability. At present many computer vision systems still lack computational speed and cannot cope with tracking errors or the temporary occlusion of features.
We have overcome these problems by using dedicated computer vision hardware capable of real-time feature tracking and by developing specialised algorithms that make face tracking robust. The principle of our vision system is image correlation (template matching). Template matching is not directly applicable to face tracking, especially under the constraints that illumination is non-homogeneous and contrast-enhancing makeup cannot be used. The appearance (i.e. shape and shade) of facial features changes when the observed person's head moves, producing a large difference between the reference template and the features being tracked. This effect is compounded when people tilt their heads to the side, which rotates the features in the image. Tracking of a facial feature then fails, with no way to recover from the error.

Implementations of gesture interfaces based on image correlation have been reported previously [Azarbayejani et al., 1993; Hager and Toyama, 1996; 1995]. The work of [Azarbayejani et al., 1993] reported on the manipulation of a virtual-reality clone. This system automatically acquires its tracking templates at runtime using heuristic filters. With this approach it is not possible to determine the initial state of features (e.g. whether the eyes are open), and the position of the face can only be measured relative to the initial image. The [Hager and Toyama, 1996; 1995] implementation uses deformable template matching performed in software to track a face. This system does not exploit the geometry of the facial features to assist in tracking or error recovery, so only small rotations of the head are allowed. Related to the template-based approach is the method proposed by [Maurer and von der Malsburg, 1996]. Instead of using bitmap templates for tracking, this system calculates the local response to Gabor filters of different frequencies and orientations. The video frame rate required for this method is reported to be >10Hz.
However, even using reduced images of 128x128 pixels, the calculation time for one frame is 4 seconds. Like the Hager et al. system, it does not exploit the geometric relationship between features, and neither system can recover from tracking errors caused by the temporary occlusion of features. An alternative approach is to use specialised filters instead of templates to localise facial features. The filters are implemented as small programs that classify pixels, e.g. as belonging to the iris, the eyeball or the eyelid of an eye. The classifier usually uses heuristics or probabilistic techniques. A system controlling a clone in virtual reality using this approach is reported by [Saulnier et al., 1995]. To improve the reliability of the pixel classification, contrast-enhancing makeup has to be used, and the system runs at only 10Hz. Since the system does not exploit facial geometry, face tracking can fail when features are temporarily occluded. A system running theoretically at 100Hz is reported by [Gee and Cipolla, 1994]. This impressive performance is achieved because the system uses simple filters that are unable to validate the tracking results; thus, the tracker can neither detect tracking errors nor recover from them. Like the other systems, this tracker cannot pick up a face that has recently entered the image. Another face tracker, reported by [Jacquin and Eleftheriadis, 1995], uses an oval shape model to detect the location of the face in the image. With this general model, high robustness and person independence can be achieved, but the required computation is high and makes the approach unsuitable for real-time applications.

Reliable and rapid tracking of the face gives rise to the ability to recognise facial gestures. This should not be confused with work on understanding facial expressions [Darrel and Pentland, 1995; Black and Yacoob, 1995; Saulnier et al., 1995]. The work described in [Darrel and Pentland, 1995] is concerned with classifying expressions into states such as happiness, anger, fear, sadness or disgust. The system uses nearest-neighbour matching against a set of templates to classify the expression state. Templates can be acquired automatically or synthesised by the system. The detection of facial expressions runs at about 8Hz. The aim of the system reported in [Black and Yacoob, 1995] is similar, but the classification is based on the affine transformation parameters of individual templates for different facial features.
However, that system requires high-resolution images (560x420 pixels), so real-time performance could not be achieved.

Due to the real-time constraints of gesture recognition based on head motion, few projects have been undertaken in this area. [Darrel and Pentland, 1995] describe a system that can recognise hand and facial gestures based on the template matching system developed by [Azarbayejani et al., 1993]. The tracking output consists of different motion parameters of the face. Sequences of motion-vector parameters can be recorded, where the start and the end must be determined manually. Such a sequence forms a "space-time" gesture. During recognition, the most recent motion vectors are compared with the stored gestures using a dynamic time warping algorithm similar to that used in speech recognition. After each new motion vector is received, the history of motion vectors has to be matched against all gestures stored in the library, which has a heavy impact on the calculation time: using only two gestures of length 20 state vectors, the system can only run at 10Hz. The system we describe in this paper runs at 30Hz and can distinguish 12 different gestures. This is possible due to a specialised gesture model which requires little computation yet can still detect gestures robustly.

2 Real-time Vision System

The MEP tracking vision system manufactured by Fujitsu is designed for real-time tracking of multiple objects in the frames of an NTSC video stream. It consists of two VME-bus cards: a video module with a frame grabber and overlay memory for displaying graphics and text, and a tracking module which can track over 100 templates at video frame rate (30Hz for NTSC). A MC68040/25MHz processor card running VxWorks executes the application program and controls the vision system. Objects are tracked using template correlation.
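The template-correlation principle can be sketched in software as follows. This is an illustrative NumPy version of sum-of-absolute-differences matching, not the MEP hardware implementation; template magnification and the two fixed template sizes are omitted for clarity.

```python
import numpy as np

def distortion(template, frame, ox, oy):
    """Sum of absolute greyscale differences between the template and the
    frame patch at offset (ox, oy) -- low values indicate a good match."""
    size = template.shape[0]
    patch = frame[oy:oy + size, ox:ox + size]
    return int(np.abs(template.astype(int) - patch.astype(int)).sum())

def track(template, frame, search_window):
    """Return (distortion, ox, oy) for the offset inside the search window
    where the template matches with the lowest distortion.
    search_window = (x0, y0, x1, y1) with exclusive upper bounds."""
    x0, y0, x1, y1 = search_window
    best = None
    for oy in range(y0, y1):
        for ox in range(x0, x1):
            d = distortion(template, frame, ox, oy)
            if best is None or d < best[0]:
                best = (d, ox, oy)
    return best
```

Moving the search window along with the best match from frame to frame yields the basic tracking loop; the hardware performs this exhaustive search for over 100 templates per frame.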
Two template sizes are available, 8x8 or 16x16 pixels, and both can be magnified by a factor between 1 and 4 in the X- and Y-directions independently. The frames of the video stream are digitised by the video module and stored in dedicated video RAM, which can also be accessed by the tracking module. Next, the templates are matched within specified search windows of the image. The comparison of the template with an area of the digitised frame is done by cross-correlation using the mean absolute error method. The distortion, which is the sum of the absolute greyscale differences of corresponding pixels in the template and the video frame, indicates how similar the two images are: low distortions indicate a good match, while high distortions appear when the images are very different. The distortion D is given by Formula 1,

D(o_x, o_y) = sum_{x=0}^{Size-1} sum_{y=0}^{Size-1} | g_t(x, y) - g_f(o_x + m_x * x, o_y + m_y * y) |    (1)

where Size is the template size (8 or 16), g_t(x, y) is the grey value of the template at the specified coordinates, g_f(x, y) is the grey value of the pixel in the last frame, m_x and m_y are the magnifications of the template in the X- and Y-directions, and o_x and o_y are the offsets in the frame. To track the template of an object it is necessary to calculate the distortion not just at one point in the image but at a number of points within the search area. To track the motion of an object, the tracking module finds the position in the search window where the template matches with the lowest distortion. By moving the search window along according to the tracking results, objects can be tracked easily. This method works perfectly for objects that do not change their appearance or shade and that do not get occluded by other objects.

Figure 1: The tracked features and their references

3 Improving the System Robustness

Our basic idea for overcoming the problem of temporarily distorted or occluded features is to have the individual search windows help each other track their features. Since the geometric relationship between the features of a face is known, a lost search window can be relocated with the assistance of those that are still tracking. In a first approach, a two-dimensional model of the face is used in which the features are placed on a virtual grid. During face tracking, the search windows monitor their position relative to the other windows and readjust their coordinates if necessary. Figure 1 shows the face of a person, where the boxes mark the 9 features used for tracking and the lines indicate the connections between the search windows.

The robustness of the tracker depends heavily on the method used to fuse the tracking results of the vision system with the geometric data derived from the 2D face model. In difficult situations, when the head is turned and all the templates match with high distortions, it is not possible to determine from the distortion reported by the vision system whether or not a feature has been lost. Thus, a probabilistic approach must be chosen to merge the uncertain tracking data with the information derived from the 2D facial model. We believe that Kalman filters are an appropriate method for this problem.
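The mutual-assistance idea can be sketched as follows. This simple unweighted version, an illustration rather than the paper's method (which weights contributions probabilistically), re-estimates a lost feature from the still-tracking neighbours and the known grid offsets of the 2D face model.

```python
import numpy as np

def estimate_lost_feature(neighbour_positions, grid_offsets):
    """Re-estimate a lost feature's position from still-tracking neighbours.

    neighbour_positions -- current (x, y) positions of tracked neighbours
    grid_offsets        -- known displacement from each neighbour to the
                           lost feature in the 2D face model
    Returns the mean of the per-neighbour estimates."""
    estimates = [np.asarray(p, dtype=float) + np.asarray(o, dtype=float)
                 for p, o in zip(neighbour_positions, grid_offsets)]
    return np.mean(estimates, axis=0)
```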
3.1 Multiple Interdependent Kalman Filters

The Kalman filter is a recursive linear estimator which merges the measurements of sensors observing the environment with a system-state prediction derived from a system model [Bozic, 1979; Durrant-Whyte, 1995]. Kalman filters are used in many applications, such as the navigation of planes, missiles and mobile robots, where uncertain measurements from sensors that observe landmarks are used to localise a vehicle. By merging the sensor information with the internal model information, the Kalman filter guarantees an optimal estimate of the system's state vector. The use of Kalman filters in feature tracking has been reported previously by [McLauchlan et al., 1994] to estimate 3D structure from motion. In our approach, however, features are used to assist each other in tracking. [Hager and Toyama, 1996] proposed a similar idea using geometric constraints and feature states for tracking, but they use a binary switching method between winning and losing features to decide which features guide the others. Our approach uses all the features for tracking, weighted towards the features that are currently tracking best. In difficult situations, when the head is turned and all templates match with high distortions, our merging method behaves much more robustly than binary switching, so larger rotations of the head can be tracked than with previously reported methods.

An individual extended Kalman filter is applied to each feature. The tracking results provided by the vision system are used as sensor input, where the distortion is converted into a variance for the measurement. The results of the different templates are fed into a Selection unit, which selects the template with the lowest distortion. Only the coordinates and the associated variance of the selected template are forwarded to the Kalman filter.
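The per-coordinate fusion step can be sketched as a standard scalar Kalman update: the position predicted from the reference features is blended with the selected template's measurement, weighted by their variances. The distortion-to-variance mapping shown here is an assumed placeholder, not the system's actual conversion.

```python
def kalman_update(x_pred, p_pred, z, r):
    """One scalar Kalman update: merge the position predicted from the
    reference features (x_pred, variance p_pred) with the template
    measurement z (variance r).  The gain automatically weights whichever
    source is currently more certain."""
    k = p_pred / (p_pred + r)          # Kalman gain
    x = x_pred + k * (z - x_pred)      # fused position estimate
    p = (1.0 - k) * p_pred             # reduced uncertainty after fusion
    return x, p

def distortion_to_variance(d, scale=0.05):
    """Illustrative mapping from match distortion to measurement variance:
    the worse the match, the less the measurement is trusted.  The linear
    form and scale factor are assumptions, not taken from the paper."""
    return scale * d
```

With equal variances the update lands halfway between prediction and measurement; as the template distortion (and hence r) grows, the estimate leans towards the geometric prediction, which is exactly the weighting behaviour described above.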
Instead of a system model describing the behaviour of the feature itself, an estimate of the feature's position based on the positions of the reference features is fed into the Kalman filter. Equation 2 shows how a Merge unit derives a position estimate from the positions of the reference features (with their associated variances), the displacement vectors between each reference feature and the feature itself, and the movement vector m. The variance P_i associated with this position estimate is calculated from the variance of the last position and the variances of the reference positions according to Formula 3. Next, the position of the feature is calculated according to the general Kalman filter equations. These computations must be performed for each feature in each frame cycle; in the next cycle, the calculated position is used by the other features to estimate their own positions.

The system described thus far will track facial features, but only strictly limited turning and tilting of the head is possible. The rigid connections in the 2D facial model can force well-matching templates to leave their associated features in order to satisfy the geometric requirements of the 2D model. We overcome this problem by introducing a projection that deals with the deformations of the 2D model. The rotational angle and the compression factors c_x and c_y of the model in the X- and Y-directions are calculated in each frame. If the calculated values are used as raw input the system becomes unstable; however, a simple low-pass filter sorts these problems out.

3.2 Further Enhancing Robustness

At this stage the system is capable of robustly tracking the face over a rotational range of about 60 degrees. Further rotation causes some features to disappear and others to become highly distorted. Should all the features disappear, or deform in a way that prevents the system from tracking them, tracking fails completely and cannot recover even when the features reappear in the image. A similar situation arises at start-up, when the person may not yet be in the image or the face is located at coordinates different from those in the initialisation file. The search windows cannot lock onto the features and thus cannot guide the others to their correct locations. These situations are common in real-world applications, and manual initialisation of the feature coordinates in the first frame of a recorded video sequence cannot deal with this problem.
A mechanism is required to perform online recovery in real time. Another desirable characteristic is that the system should not need to switch between an initialisation mode, which tries to localise any feature, and a tracking mode, because such switching implies that the system can determine whether or not all features are lost. The single-mode solution proposed in this paper instead increases the system resources used to relocate a feature as the variance of its position increases. Additional search windows, called area search windows, which use the same template as the feature-tracking window, are added in the area surrounding the estimated feature position. The size of the area and the number of search windows used are functions of the feature's variance. Since system resources are limited, we must determine for each template how many area search windows to allocate. Our system distributes the available search windows according to the requirements and the uniqueness of the features: characteristic features like the eyes and eyebrows go first, while the poorly contrasted features around the mouth go last.

The last problem to be solved is deciding when to believe that a feature was actually lost and found by one of its area search windows, i.e. when the feature should be relocated to the position of a well-matching area search window. At the moment we apply a malus factor¹ to the distortion of the area search window and compare the resulting distortion to the distortion of the feature-tracking window. If the area distortion after application of the malus factor is smaller than the centre distortion, the feature is relocated by selecting the area search window's coordinates as one input of the Kalman filter.

Another problem to be addressed is the similarity of the two eyes and eyebrows. During the recovery phase the window of an eye can easily get stuck on the wrong eye, and the accompanying eyebrow will then also match very well on the wrong eyebrow.
The system would remain in this state with no chance of recovery unless the eye being tracked by the wrong window disappeared for some reason. For this reason a third eye is introduced into the system. The search window for the 3rd eye is placed at the coordinates obtained by adding the vector connecting the left eye to the right eye to the coordinates of the right eye; this is where the right eye would be if the right-eye window were in fact tracking the left eye. Its variance is set to the variance of the left eye, and the appropriate number of area search windows is added. This variance is usually large if the left eye is being searched for in the area of the ear, and low if both eyes are being tracked correctly, so the area search windows are used for recovery only when one eye appears to be lost. The decision whether or not an error has occurred is made in the same way as for the area search windows: the best distortion of the left eye is compared with the best distortion of the 3rd-eye search window multiplied by a malus factor². If the 3rd-eye search window detects an eye, the coordinates of the right eye are set to those of the 3rd eye and those of the left eye are set to those of the right eye. The search position of the 3rd-eye window alternates each frame, so both sides (left and right) of the facial feature network are checked at a frequency of 15Hz.

¹ The malus factor for area search windows is …
² For the 3rd eye the malus factor is …
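The relocation test and the 3rd-eye geometry can be sketched as follows. The malus values themselves are not reproduced here (they are lost from the source text), so the factor appears as a parameter.

```python
def should_relocate(centre_distortion, area_distortion, malus):
    """Relocate the feature onto an area search window only if that window's
    distortion, after the malus penalty, still beats the distortion of the
    current feature-tracking (centre) window."""
    return area_distortion * malus < centre_distortion

def third_eye_position(left_eye, right_eye):
    """Where the right eye would be if the right-eye window were actually
    stuck on the left eye: the left-to-right eye vector added to the
    right eye's current coordinates."""
    lx, ly = left_eye
    rx, ry = right_eye
    return (rx + (rx - lx), ry + (ry - ly))
```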

Figure 2: Various deformations of the feature grid

4 Gesture Recognition

Robust real-time face tracking opens up the possibility of recognising gestures based on motions of the head. Gesture recognition and face tracking are implemented as independent processes, both running at the NTSC video frame rate (30Hz). The design of the gesture recognition algorithms is determined by hard real-time constraints, since dozens of gestures must be compared with the data stream from the face-tracking module. Gesture recognition must also be flexible with respect to how a gesture is performed, including its timing and the amplitudes of its parameters, and the system should assign confidence values to the gestures it recognises. This allows higher-level processes to use the gestures as input and interpret them optimally.

To avoid the computationally expensive time-warping method, we developed a recursive method that takes only the current motion vector of the face into account; a set of finite state machines implicitly stores the previous tracking information. Gesture recognition is implemented as a two-layer system based on the decomposition of gestures into atomic actions. The lower layer recognises basic motion and state primitives called atomic actions. The output of an atomic action is its activation, a measure of the similarity between the recent motion vectors and the atomic action's definition. The system incorporates 22 predefined atomic actions whose activations are calculated in each frame cycle. The upper layer is concerned with recognising patterns in the activations of the set of atomic actions. A gesture is defined by a sequence of atomic actions together with time constraints on the occurrence of each of them. Each time the first atomic action of a gesture definition is activated, an instance of a finite state machine is generated dynamically.
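The two-layer scheme of atomic actions feeding dynamically created per-gesture state machines can be sketched as follows. The gesture names, atomic actions and frame-count time limits here are illustrative assumptions; the paper's own 22 atomic actions, activation measures and confidence scoring are not reproduced.

```python
class GestureFSM:
    """One dynamically created instance per candidate gesture occurrence.
    `definition` is a list of (atomic_action, max_wait_frames) pairs; the
    first pair is considered consumed at creation time."""
    def __init__(self, name, definition):
        self.name, self.definition = name, definition
        self.state, self.waited = 1, 0

    def step(self, active_actions):
        """Advance one frame; return 'done', 'alive' or 'dead'."""
        action, max_wait = self.definition[self.state]
        if action in active_actions:
            self.state += 1
            self.waited = 0
            if self.state == len(self.definition):
                return "done"          # full atomic-action sequence seen
        else:
            self.waited += 1
            if self.waited > max_wait:
                return "dead"          # time constraint violated
        return "alive"

def recognise(frames, gestures):
    """frames: iterable of sets of atomic actions active in each frame cycle.
    gestures: dict mapping gesture name -> definition list."""
    instances, recognised = [], []
    for active in frames:
        # spawn an instance whenever a gesture's first atomic action fires
        for name, definition in gestures.items():
            if definition[0][0] in active:
                instances.append(GestureFSM(name, definition))
        survivors = []
        for fsm in instances:
            status = fsm.step(active)
            if status == "done":
                recognised.append(fsm.name)
            elif status == "alive":
                survivors.append(fsm)
        instances = survivors
    return recognised
```

Because each instance only inspects the current frame's activations, the per-frame cost stays proportional to the number of live instances, avoiding the repeated history matching of the time-warping approach.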
This instance then observes the activation of the next atomic action in the gesture definition and makes the transition to the next state if the activation occurs within a given time frame. If no activation occurs within the time frame, the finite-state-machine instance is deleted from the system. When the state machine reaches its final state, the atomic action sequence has been completely recognised and the gesture is sent to the output together with a confidence measure of how well the observed pattern matched the gesture definition. Figure 3 shows a selection of gestures recognised by the system ("Look Down", "Look Left", "Turn to Left"). For a full description of the gesture recognition refer to [Heinzmann, 1996].

Figure 3: Example gestures

5 Experimental Results

The complete system was extensively tested in our laboratory. Sequences of several minutes of persons performing gestures were videotaped and used for analysis. The face tracker proved to be reliable under stable illumination and tolerated rotations of the face of up to 60 degrees. Figure 4 shows the output of the face tracker and the recognised gestures. The recognition rate for the gestures differs significantly. Complex but easy-to-track gestures like "yes" and "no" have the best recognition rate (95%). Gestures that are difficult to track, such as the turn gestures, are sometimes not identified due to tracking errors, but still have recognition rates over 70%. Short gestures such as winking and blinking are recognised whenever they are performed, though owing to their shortness the number of false detections can be as high as 20%.

Figure 4: Activation pattern of the atomic actions eyes.open, half.open and closed during a blink

6 Conclusion and Future Work

Our system is able to track the features of the face at 30Hz without special illumination or contrast-enhancing makeup. It automatically initialises tracking without any restrictions on the position of the face in the first frame. It can also lock onto temporarily occluded features as soon as they reappear in the image, and even if all features were occluded the system can recover within a few seconds. The gesture recognition module runs in parallel at video frame rate and is able to recognise 12 different gestures. By decomposing gestures into atomic actions, it also provides ways of measuring the conformity of a performed gesture to its gesture definition. In future work we plan to develop a 3D adaptive model of the face and to use dynamic template acquisition during tracking. Our ultimate aim is to use the facial gesture recognition system in a robotic system for the disabled.

Acknowledgements

This work has been funded by the Australian Research Council and by the generous support of Wind River Systems, supplier of VxWorks.

References

[Azarbayejani et al., 1993] A. Azarbayejani, T. Starner, B. Horowitz, and A. Pentland. Visually controlled graphics. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(6), 1993.

[Black and Yacoob, 1995] Black and Yacoob. Tracking and recognizing rigid and non-rigid facial motions using parametric models of image motion.
In Proceedings of ICCV'95, 1995.

[Bozic, 1979] S.M. Bozic. Digital and Kalman Filtering. Edward Arnold Publishers Ltd, 1979.

[Darrel and Pentland, 1995] T. Darrel and A.P. Pentland. Attention-driven expression and gesture analysis in an interactive environment. In Proceedings of the International Workshop on Automatic Face- and Gesture-Recognition '95, 1995.

[Durrant-Whyte, 1995] H. Durrant-Whyte. Autonomous guided vehicle technology. In Proceedings of the Joint Australia-Korea Workshop on Manufacturing Technology 1995, pages 51-60, 1995.

[Gee and Cipolla, 1994] A. Gee and R. Cipolla. Non-intrusive gaze tracking for human-computer interaction. IEEE, 1994.

[Hager and Toyama, 1995] G.D. Hager and K. Toyama. A framework for real-time window-based tracking using off-the-shelf hardware. Technical report, Yale University, Department of Computer Science, 1995.

[Hager and Toyama, 1996] G.D. Hager and K. Toyama. XVision: Combining image warping and geometric constraints for fast visual tracking. In Proceedings of ECCV'96, 1996.

[Heinzmann, 1996] J. Heinzmann. Real-time human face tracking and gesture recognition. Master's thesis, Department of Computer Science, University of Karlsruhe, 1996.

[Jacquin and Eleftheriadis, 1995] A. Jacquin and A. Eleftheriadis. Automatic location tracking of faces and facial features in video sequences. In Proceedings of the International Workshop on Automatic Face- and Gesture Recognition '95, 1995.

[Maurer and von der Malsburg, 1996] T. Maurer and C. von der Malsburg. Tracking and learning graphs and pose on image sequences of faces. In Proceedings of the International Conference on Automatic Face and Gesture Recognition '96, 1996.

[McLauchlan et al., 1994] P.F. McLauchlan, I.D. Reid, and D.W. Murray. Recursive affine structure and motion from image sequences. In Proceedings of ECCV'94, 1994.

[Saulnier et al., 1995] A. Saulnier, M.-L. Viaud, and D. Geldreich. Real-time facial analysis and synthesis chain. In Proceedings of the International Workshop on Automatic Face- and Gesture Recognition '95, pages 86-91, 1995.

PAC - Personality and Cognition: an interactive system for modelling agent scenarios

Lin Padgham and Guy Taylor
Department of Computer Science
Royal Melbourne Institute of Technology
Melbourne, VIC 3001, Australia
<linpa@cs.rmit.edu.au> <guy@yallara.cs.rmit.edu.au>

Abstract

PAC is an interactive system for experimenting with scenarios of agents, where the agents are modelled as having both cognition and personality, as well as a physical realisation. The aim of the system is to provide an environment where scenarios can quickly and easily be built up, varying aspects of agent personality (or emotions) and agent cognition (plans and beliefs). This allows us to experiment with different combinations of agents in different worlds and to investigate the effect of various parameters on the emergent behaviour of the agent system. Some aspects of the system are described more fully in [PT97].

1 Motivation

The motivation for the system is to provide a testbed where we can easily build up scenarios of agents in order to investigate the effect of modelling emotion and personality as one of the important aspects of agents. Some researchers are interested in modelling emotion and personality as a way of creating agents that are engaging for the human user of a system (e.g. [Bat94a; HR95]). Others, such as Toda [Tod82], also believe that emotions play a functional role in the behaviour of humans and animals, particularly behaviour within complex social systems. Certainly the introduction of emotions, and their interaction with goals at various levels, increases the complexity of the agents and social systems that can be modelled. Our hypothesis is that, just as the modelling of beliefs and goals has facilitated the building of complex agent systems, so the modelling of emotion and personality will enable a further step forward in the level of robustness and complexity that can be developed.
However, significant work is needed before we can expect to understand the functional role of emotion sufficiently to model it successfully in our software agents. The PAC system will facilitate experimentation with groups of agents in varying worlds, and will allow us to observe the effect of representing emotional characteristics and combinations of personality types under varying aspects of the simulated world. Our aim in this system is not to develop a full psychological model of emotions, personality or cognition, but rather to use a simplified model to study how such modelling might facilitate the building of more complex systems, or of more engaging and credible agents.

2 Emotional Model

We have based our model of the causes of emotion on a simple version of two models found in the literature. The first is that explored in [OCC88], and used by both Dyer [Dye87] and Bates [Bat94b] in their systems. In this model emotional reactions are caused by goal success and failure events. For instance, an agent which experiences a goal failure may feel unhappy, while one experiencing goal success may feel glad. Dyer [Dye87] develops a comprehensive lexicon of emotional states based on goal success and failure: for example, an agent which expects its goal to succeed will feel hopeful, an agent whose goal is achieved by the action of another agent will feel grateful, and an agent who expects its goal to be thwarted will feel apprehensive. Figure 2 shows examples of some emotions (modified from [Dye87]), indexed by contributing cause. We have currently implemented a simplified version of this model.

The second model we have used is based on what we call motivational concerns. These are long-term concerns which differ from the usual goals of rational agent systems in that they are not things which the agent acts to achieve, but rather something which the agent continually monitors in the background.
If a threat or opportunity related to a motivational concern arises, an emotional reaction is triggered, which leads to behaviour. Frijda and Swagerman [FS87] postulate emotions as processes which safeguard long-term persistent goals


More information

A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA

A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA N. Zarrinpanjeh a, F. Dadrassjavan b, H. Fattahi c * a Islamic Azad University of Qazvin - nzarrin@qiau.ac.ir

More information

Tracking Groups of Pedestrians in Video Sequences

Tracking Groups of Pedestrians in Video Sequences Tracking Groups of Pedestrians in Video Sequences Jorge S. Marques Pedro M. Jorge Arnaldo J. Abrantes J. M. Lemos IST / ISR ISEL / IST ISEL INESC-ID / IST Lisbon, Portugal Lisbon, Portugal Lisbon, Portugal

More information

False alarm in outdoor environments

False alarm in outdoor environments Accepted 1.0 Savantic letter 1(6) False alarm in outdoor environments Accepted 1.0 Savantic letter 2(6) Table of contents Revision history 3 References 3 1 Introduction 4 2 Pre-processing 4 3 Detection,

More information

CREATING EMOTIONAL EXPERIENCES THAT ENGAGE, INSPIRE, & CONVERT

CREATING EMOTIONAL EXPERIENCES THAT ENGAGE, INSPIRE, & CONVERT CREATING EMOTIONAL EXPERIENCES THAT ENGAGE, INSPIRE, & CONVERT Presented by Tim Llewellynn (CEO & Co-founder) Presented 14 th March 2013 2 Agenda Agenda and About nviso Primer on Emotions and Decision

More information

Real-Time Tracking of Pedestrians and Vehicles

Real-Time Tracking of Pedestrians and Vehicles Real-Time Tracking of Pedestrians and Vehicles N.T. Siebel and S.J. Maybank. Computational Vision Group Department of Computer Science The University of Reading Reading RG6 6AY, England Abstract We present

More information

An Energy-Based Vehicle Tracking System using Principal Component Analysis and Unsupervised ART Network

An Energy-Based Vehicle Tracking System using Principal Component Analysis and Unsupervised ART Network Proceedings of the 8th WSEAS Int. Conf. on ARTIFICIAL INTELLIGENCE, KNOWLEDGE ENGINEERING & DATA BASES (AIKED '9) ISSN: 179-519 435 ISBN: 978-96-474-51-2 An Energy-Based Vehicle Tracking System using Principal

More information

Building an Advanced Invariant Real-Time Human Tracking System

Building an Advanced Invariant Real-Time Human Tracking System UDC 004.41 Building an Advanced Invariant Real-Time Human Tracking System Fayez Idris 1, Mazen Abu_Zaher 2, Rashad J. Rasras 3, and Ibrahiem M. M. El Emary 4 1 School of Informatics and Computing, German-Jordanian

More information

Tracking in flussi video 3D. Ing. Samuele Salti

Tracking in flussi video 3D. Ing. Samuele Salti Seminari XXIII ciclo Tracking in flussi video 3D Ing. Tutors: Prof. Tullio Salmon Cinotti Prof. Luigi Di Stefano The Tracking problem Detection Object model, Track initiation, Track termination, Tracking

More information

CS231M Project Report - Automated Real-Time Face Tracking and Blending

CS231M Project Report - Automated Real-Time Face Tracking and Blending CS231M Project Report - Automated Real-Time Face Tracking and Blending Steven Lee, slee2010@stanford.edu June 6, 2015 1 Introduction Summary statement: The goal of this project is to create an Android

More information

A Learning Based Method for Super-Resolution of Low Resolution Images

A Learning Based Method for Super-Resolution of Low Resolution Images A Learning Based Method for Super-Resolution of Low Resolution Images Emre Ugur June 1, 2004 emre.ugur@ceng.metu.edu.tr Abstract The main objective of this project is the study of a learning based method

More information

HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT

HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT International Journal of Scientific and Research Publications, Volume 2, Issue 4, April 2012 1 HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT Akhil Gupta, Akash Rathi, Dr. Y. Radhika

More information

Professional Organization Checklist for the Computer Science Curriculum Updates. Association of Computing Machinery Computing Curricula 2008

Professional Organization Checklist for the Computer Science Curriculum Updates. Association of Computing Machinery Computing Curricula 2008 Professional Organization Checklist for the Computer Science Curriculum Updates Association of Computing Machinery Computing Curricula 2008 The curriculum guidelines can be found in Appendix C of the report

More information

A Short Introduction to Computer Graphics

A Short Introduction to Computer Graphics A Short Introduction to Computer Graphics Frédo Durand MIT Laboratory for Computer Science 1 Introduction Chapter I: Basics Although computer graphics is a vast field that encompasses almost any graphical

More information

Tracking performance evaluation on PETS 2015 Challenge datasets

Tracking performance evaluation on PETS 2015 Challenge datasets Tracking performance evaluation on PETS 2015 Challenge datasets Tahir Nawaz, Jonathan Boyle, Longzhen Li and James Ferryman Computational Vision Group, School of Systems Engineering University of Reading,

More information

The Scientific Data Mining Process

The Scientific Data Mining Process Chapter 4 The Scientific Data Mining Process When I use a word, Humpty Dumpty said, in rather a scornful tone, it means just what I choose it to mean neither more nor less. Lewis Carroll [87, p. 214] In

More information

RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 29 (2008) Indiana University

RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 29 (2008) Indiana University RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 29 (2008) Indiana University A Software-Based System for Synchronizing and Preprocessing Eye Movement Data in Preparation for Analysis 1 Mohammad

More information

Multi-ultrasonic sensor fusion for autonomous mobile robots

Multi-ultrasonic sensor fusion for autonomous mobile robots Multi-ultrasonic sensor fusion for autonomous mobile robots Zou Yi *, Ho Yeong Khing, Chua Chin Seng, and Zhou Xiao Wei School of Electrical and Electronic Engineering Nanyang Technological University

More information

VEHICLE TRACKING USING ACOUSTIC AND VIDEO SENSORS

VEHICLE TRACKING USING ACOUSTIC AND VIDEO SENSORS VEHICLE TRACKING USING ACOUSTIC AND VIDEO SENSORS Aswin C Sankaranayanan, Qinfen Zheng, Rama Chellappa University of Maryland College Park, MD - 277 {aswch, qinfen, rama}@cfar.umd.edu Volkan Cevher, James

More information

Computer Animation and Visualisation. Lecture 1. Introduction

Computer Animation and Visualisation. Lecture 1. Introduction Computer Animation and Visualisation Lecture 1 Introduction 1 Today s topics Overview of the lecture Introduction to Computer Animation Introduction to Visualisation 2 Introduction (PhD in Tokyo, 2000,

More information

Tracking of Small Unmanned Aerial Vehicles

Tracking of Small Unmanned Aerial Vehicles Tracking of Small Unmanned Aerial Vehicles Steven Krukowski Adrien Perkins Aeronautics and Astronautics Stanford University Stanford, CA 94305 Email: spk170@stanford.edu Aeronautics and Astronautics Stanford

More information

Taking Inverse Graphics Seriously

Taking Inverse Graphics Seriously CSC2535: 2013 Advanced Machine Learning Taking Inverse Graphics Seriously Geoffrey Hinton Department of Computer Science University of Toronto The representation used by the neural nets that work best

More information

Face Locating and Tracking for Human{Computer Interaction. Carnegie Mellon University. Pittsburgh, PA 15213

Face Locating and Tracking for Human{Computer Interaction. Carnegie Mellon University. Pittsburgh, PA 15213 Face Locating and Tracking for Human{Computer Interaction Martin Hunke Alex Waibel School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Abstract Eective Human-to-Human communication

More information

Classifying Manipulation Primitives from Visual Data

Classifying Manipulation Primitives from Visual Data Classifying Manipulation Primitives from Visual Data Sandy Huang and Dylan Hadfield-Menell Abstract One approach to learning from demonstrations in robotics is to make use of a classifier to predict if

More information

Introduction to Computer Graphics

Introduction to Computer Graphics Introduction to Computer Graphics Torsten Möller TASC 8021 778-782-2215 torsten@sfu.ca www.cs.sfu.ca/~torsten Today What is computer graphics? Contents of this course Syllabus Overview of course topics

More information

Monitoring Head/Eye Motion for Driver Alertness with One Camera

Monitoring Head/Eye Motion for Driver Alertness with One Camera Monitoring Head/Eye Motion for Driver Alertness with One Camera Paul Smith, Mubarak Shah, and N. da Vitoria Lobo Computer Science, University of Central Florida, Orlando, FL 32816 rps43158,shah,niels @cs.ucf.edu

More information

LOCAL SURFACE PATCH BASED TIME ATTENDANCE SYSTEM USING FACE. indhubatchvsa@gmail.com

LOCAL SURFACE PATCH BASED TIME ATTENDANCE SYSTEM USING FACE. indhubatchvsa@gmail.com LOCAL SURFACE PATCH BASED TIME ATTENDANCE SYSTEM USING FACE 1 S.Manikandan, 2 S.Abirami, 2 R.Indumathi, 2 R.Nandhini, 2 T.Nanthini 1 Assistant Professor, VSA group of institution, Salem. 2 BE(ECE), VSA

More information

CS Master Level Courses and Areas COURSE DESCRIPTIONS. CSCI 521 Real-Time Systems. CSCI 522 High Performance Computing

CS Master Level Courses and Areas COURSE DESCRIPTIONS. CSCI 521 Real-Time Systems. CSCI 522 High Performance Computing CS Master Level Courses and Areas The graduate courses offered may change over time, in response to new developments in computer science and the interests of faculty and students; the list of graduate

More information

A Demonstration of a Robust Context Classification System (CCS) and its Context ToolChain (CTC)

A Demonstration of a Robust Context Classification System (CCS) and its Context ToolChain (CTC) A Demonstration of a Robust Context Classification System () and its Context ToolChain (CTC) Martin Berchtold, Henning Günther and Michael Beigl Institut für Betriebssysteme und Rechnerverbund Abstract.

More information

Virtual Data Gloves : Interacting with Virtual Environments through Computer Vision

Virtual Data Gloves : Interacting with Virtual Environments through Computer Vision Virtual Data Gloves : Interacting with Virtual Environments through Computer Vision Richard Bowden (1), Tony Heap(2), Craig Hart(2) (1) Dept of M & ES (2) School of Computer Studies Brunel University University

More information

Real Time Simulation for Off-Road Vehicle Analysis. Dr. Pasi Korkealaakso Mevea Ltd., May 2015

Real Time Simulation for Off-Road Vehicle Analysis. Dr. Pasi Korkealaakso Mevea Ltd., May 2015 Real Time Simulation for Off-Road Vehicle Analysis Dr. Pasi Korkealaakso Mevea Ltd., May 2015 Contents Introduction Virtual machine model Machine interaction with environment and realistic environment

More information

AN IMPROVED DOUBLE CODING LOCAL BINARY PATTERN ALGORITHM FOR FACE RECOGNITION

AN IMPROVED DOUBLE CODING LOCAL BINARY PATTERN ALGORITHM FOR FACE RECOGNITION AN IMPROVED DOUBLE CODING LOCAL BINARY PATTERN ALGORITHM FOR FACE RECOGNITION Saurabh Asija 1, Rakesh Singh 2 1 Research Scholar (Computer Engineering Department), Punjabi University, Patiala. 2 Asst.

More information

Head Pose Estimation on Low Resolution Images

Head Pose Estimation on Low Resolution Images Head Pose Estimation on Low Resolution Images Nicolas Gourier, Jérôme Maisonnasse, Daniela Hall, James L. Crowley PRIMA, GRAVIR-IMAG INRIA Rhône-Alpes, 38349 St. Ismier. France. Abstract. This paper addresses

More information

MACHINE VISION MNEMONICS, INC. 102 Gaither Drive, Suite 4 Mount Laurel, NJ 08054 USA 856-234-0970 www.mnemonicsinc.com

MACHINE VISION MNEMONICS, INC. 102 Gaither Drive, Suite 4 Mount Laurel, NJ 08054 USA 856-234-0970 www.mnemonicsinc.com MACHINE VISION by MNEMONICS, INC. 102 Gaither Drive, Suite 4 Mount Laurel, NJ 08054 USA 856-234-0970 www.mnemonicsinc.com Overview A visual information processing company with over 25 years experience

More information

Robotics. Lecture 3: Sensors. See course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up to date information.

Robotics. Lecture 3: Sensors. See course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up to date information. Robotics Lecture 3: Sensors See course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up to date information. Andrew Davison Department of Computing Imperial College London Review: Locomotion Practical

More information

How To Get A Computer Engineering Degree

How To Get A Computer Engineering Degree COMPUTER ENGINEERING GRADUTE PROGRAM FOR MASTER S DEGREE (With Thesis) PREPARATORY PROGRAM* COME 27 Advanced Object Oriented Programming 5 COME 21 Data Structures and Algorithms COME 22 COME 1 COME 1 COME

More information

FPGA Implementation of Human Behavior Analysis Using Facial Image

FPGA Implementation of Human Behavior Analysis Using Facial Image RESEARCH ARTICLE OPEN ACCESS FPGA Implementation of Human Behavior Analysis Using Facial Image A.J Ezhil, K. Adalarasu Department of Electronics & Communication Engineering PSNA College of Engineering

More information

VECTORAL IMAGING THE NEW DIRECTION IN AUTOMATED OPTICAL INSPECTION

VECTORAL IMAGING THE NEW DIRECTION IN AUTOMATED OPTICAL INSPECTION VECTORAL IMAGING THE NEW DIRECTION IN AUTOMATED OPTICAL INSPECTION Mark J. Norris Vision Inspection Technology, LLC Haverhill, MA mnorris@vitechnology.com ABSTRACT Traditional methods of identifying and

More information

Limitations of Human Vision. What is computer vision? What is computer vision (cont d)?

Limitations of Human Vision. What is computer vision? What is computer vision (cont d)? What is computer vision? Limitations of Human Vision Slide 1 Computer vision (image understanding) is a discipline that studies how to reconstruct, interpret and understand a 3D scene from its 2D images

More information

Visual-based ID Verification by Signature Tracking

Visual-based ID Verification by Signature Tracking Visual-based ID Verification by Signature Tracking Mario E. Munich and Pietro Perona California Institute of Technology www.vision.caltech.edu/mariomu Outline Biometric ID Visual Signature Acquisition

More information

Professor, D.Sc. (Tech.) Eugene Kovshov MSTU «STANKIN», Moscow, Russia

Professor, D.Sc. (Tech.) Eugene Kovshov MSTU «STANKIN», Moscow, Russia Professor, D.Sc. (Tech.) Eugene Kovshov MSTU «STANKIN», Moscow, Russia As of today, the issue of Big Data processing is still of high importance. Data flow is increasingly growing. Processing methods

More information

Open Access A Facial Expression Recognition Algorithm Based on Local Binary Pattern and Empirical Mode Decomposition

Open Access A Facial Expression Recognition Algorithm Based on Local Binary Pattern and Empirical Mode Decomposition Send Orders for Reprints to reprints@benthamscience.ae The Open Electrical & Electronic Engineering Journal, 2014, 8, 599-604 599 Open Access A Facial Expression Recognition Algorithm Based on Local Binary

More information

Robot Task-Level Programming Language and Simulation

Robot Task-Level Programming Language and Simulation Robot Task-Level Programming Language and Simulation M. Samaka Abstract This paper presents the development of a software application for Off-line robot task programming and simulation. Such application

More information

Edge tracking for motion segmentation and depth ordering

Edge tracking for motion segmentation and depth ordering Edge tracking for motion segmentation and depth ordering P. Smith, T. Drummond and R. Cipolla Department of Engineering University of Cambridge Cambridge CB2 1PZ,UK {pas1001 twd20 cipolla}@eng.cam.ac.uk

More information

Colorado School of Mines Computer Vision Professor William Hoff

Colorado School of Mines Computer Vision Professor William Hoff Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ 1 Introduction to 2 What is? A process that produces from images of the external world a description

More information

E190Q Lecture 5 Autonomous Robot Navigation

E190Q Lecture 5 Autonomous Robot Navigation E190Q Lecture 5 Autonomous Robot Navigation Instructor: Chris Clark Semester: Spring 2014 1 Figures courtesy of Siegwart & Nourbakhsh Control Structures Planning Based Control Prior Knowledge Operator

More information

Head Tracking via Robust Registration in Texture Map Images

Head Tracking via Robust Registration in Texture Map Images BU CS TR97-020. To appear in IEEE Conf. on Computer Vision and Pattern Recognition, Santa Barbara, CA, 1998. Head Tracking via Robust Registration in Texture Map Images Marco La Cascia and John Isidoro

More information

An Instructional Aid System for Driving Schools Based on Visual Simulation

An Instructional Aid System for Driving Schools Based on Visual Simulation An Instructional Aid System for Driving Schools Based on Visual Simulation Salvador Bayarri, Rafael Garcia, Pedro Valero, Ignacio Pareja, Institute of Traffic and Road Safety (INTRAS), Marcos Fernandez

More information

Animation. Persistence of vision: Visual closure:

Animation. Persistence of vision: Visual closure: Animation Persistence of vision: The visual system smoothes in time. This means that images presented to the eye are perceived by the visual system for a short time after they are presented. In turn, this

More information

VEHICLE LOCALISATION AND CLASSIFICATION IN URBAN CCTV STREAMS

VEHICLE LOCALISATION AND CLASSIFICATION IN URBAN CCTV STREAMS VEHICLE LOCALISATION AND CLASSIFICATION IN URBAN CCTV STREAMS Norbert Buch 1, Mark Cracknell 2, James Orwell 1 and Sergio A. Velastin 1 1. Kingston University, Penrhyn Road, Kingston upon Thames, KT1 2EE,

More information

Vision In and Out of Vehicles: Integrated Driver and Road Scene Monitoring

Vision In and Out of Vehicles: Integrated Driver and Road Scene Monitoring Vision In and Out of Vehicles: Integrated Driver and Road Scene Monitoring Nicholas Apostoloff and Alexander Zelinsky The Australian National University, Robotic Systems Laboratory, Research School of

More information

3D Model based Object Class Detection in An Arbitrary View

3D Model based Object Class Detection in An Arbitrary View 3D Model based Object Class Detection in An Arbitrary View Pingkun Yan, Saad M. Khan, Mubarak Shah School of Electrical Engineering and Computer Science University of Central Florida http://www.eecs.ucf.edu/

More information

Synthetic Aperture Radar: Principles and Applications of AI in Automatic Target Recognition

Synthetic Aperture Radar: Principles and Applications of AI in Automatic Target Recognition Synthetic Aperture Radar: Principles and Applications of AI in Automatic Target Recognition Paulo Marques 1 Instituto Superior de Engenharia de Lisboa / Instituto de Telecomunicações R. Conselheiro Emídio

More information

Visual Tracking. Frédéric Jurie LASMEA. CNRS / Université Blaise Pascal. France. jurie@lasmea.univ-bpclermoaddressnt.fr

Visual Tracking. Frédéric Jurie LASMEA. CNRS / Université Blaise Pascal. France. jurie@lasmea.univ-bpclermoaddressnt.fr Visual Tracking Frédéric Jurie LASMEA CNRS / Université Blaise Pascal France jurie@lasmea.univ-bpclermoaddressnt.fr What is it? Visual tracking is the problem of following moving targets through an image

More information

3D Human Face Recognition Using Point Signature

3D Human Face Recognition Using Point Signature 3D Human Face Recognition Using Point Signature Chin-Seng Chua, Feng Han, Yeong-Khing Ho School of Electrical and Electronic Engineering Nanyang Technological University, Singapore 639798 ECSChua@ntu.edu.sg

More information

Gaze is not Enough: Computational Analysis of Infant s Head Movement Measures the Developing Response to Social Interaction

Gaze is not Enough: Computational Analysis of Infant s Head Movement Measures the Developing Response to Social Interaction Gaze is not Enough: Computational Analysis of Infant s Head Movement Measures the Developing Response to Social Interaction Lars Schillingmann (lars@ams.eng.osaka-u.ac.jp) Graduate School of Engineering,

More information

Masters in Information Technology

Masters in Information Technology Computer - Information Technology MSc & MPhil - 2015/6 - July 2015 Masters in Information Technology Programme Requirements Taught Element, and PG Diploma in Information Technology: 120 credits: IS5101

More information

PASSIVE DRIVER GAZE TRACKING WITH ACTIVE APPEARANCE MODELS

PASSIVE DRIVER GAZE TRACKING WITH ACTIVE APPEARANCE MODELS PASSIVE DRIVER GAZE TRACKING WITH ACTIVE APPEARANCE MODELS Takahiro Ishikawa Research Laboratories, DENSO CORPORATION Nisshin, Aichi, Japan Tel: +81 (561) 75-1616, Fax: +81 (561) 75-1193 Email: tishika@rlab.denso.co.jp

More information

Master of Science in Computer Science

Master of Science in Computer Science Master of Science in Computer Science Background/Rationale The MSCS program aims to provide both breadth and depth of knowledge in the concepts and techniques related to the theory, design, implementation,

More information

Automated Model Based Testing for an Web Applications

Automated Model Based Testing for an Web Applications Automated Model Based Testing for an Web Applications Agasarpa Mounica, Lokanadham Naidu Vadlamudi Abstract- As the development of web applications plays a major role in our day-to-day life. Modeling the

More information

Robot Perception Continued

Robot Perception Continued Robot Perception Continued 1 Visual Perception Visual Odometry Reconstruction Recognition CS 685 11 Range Sensing strategies Active range sensors Ultrasound Laser range sensor Slides adopted from Siegwart

More information

MMGD0203 Multimedia Design MMGD0203 MULTIMEDIA DESIGN. Chapter 3 Graphics and Animations

MMGD0203 Multimedia Design MMGD0203 MULTIMEDIA DESIGN. Chapter 3 Graphics and Animations MMGD0203 MULTIMEDIA DESIGN Chapter 3 Graphics and Animations 1 Topics: Definition of Graphics Why use Graphics? Graphics Categories Graphics Qualities File Formats Types of Graphics Graphic File Size Introduction

More information

In mathematics, there are four attainment targets: using and applying mathematics; number and algebra; shape, space and measures, and handling data.

In mathematics, there are four attainment targets: using and applying mathematics; number and algebra; shape, space and measures, and handling data. MATHEMATICS: THE LEVEL DESCRIPTIONS In mathematics, there are four attainment targets: using and applying mathematics; number and algebra; shape, space and measures, and handling data. Attainment target

More information

Face Model Fitting on Low Resolution Images

Face Model Fitting on Low Resolution Images Face Model Fitting on Low Resolution Images Xiaoming Liu Peter H. Tu Frederick W. Wheeler Visualization and Computer Vision Lab General Electric Global Research Center Niskayuna, NY, 1239, USA {liux,tu,wheeler}@research.ge.com

More information

Multi-view Intelligent Vehicle Surveillance System

Multi-view Intelligent Vehicle Surveillance System Multi-view Intelligent Vehicle Surveillance System S. Denman, C. Fookes, J. Cook, C. Davoren, A. Mamic, G. Farquharson, D. Chen, B. Chen and S. Sridharan Image and Video Research Laboratory Queensland

More information

Analecta Vol. 8, No. 2 ISSN 2064-7964

Analecta Vol. 8, No. 2 ISSN 2064-7964 EXPERIMENTAL APPLICATIONS OF ARTIFICIAL NEURAL NETWORKS IN ENGINEERING PROCESSING SYSTEM S. Dadvandipour Institute of Information Engineering, University of Miskolc, Egyetemváros, 3515, Miskolc, Hungary,

More information

How does the Kinect work? John MacCormick

How does the Kinect work? John MacCormick How does the Kinect work? John MacCormick Xbox demo Laptop demo The Kinect uses structured light and machine learning Inferring body position is a two-stage process: first compute a depth map (using structured

More information

Real-time Visual Tracker by Stream Processing

Real-time Visual Tracker by Stream Processing Real-time Visual Tracker by Stream Processing Simultaneous and Fast 3D Tracking of Multiple Faces in Video Sequences by Using a Particle Filter Oscar Mateo Lozano & Kuzahiro Otsuka presented by Piotr Rudol

More information

Object Recognition and Template Matching

Object Recognition and Template Matching Object Recognition and Template Matching Template Matching A template is a small image (sub-image) The goal is to find occurrences of this template in a larger image That is, you want to find matches of

More information

Computer Aided Systems

Computer Aided Systems 5 Computer Aided Systems Ivan Kuric Prof. Ivan Kuric, University of Zilina, Faculty of Mechanical Engineering, Department of Machining and Automation, Slovak republic, ivan.kuric@fstroj.utc.sk 1.1 Introduction

More information

BEHAVIOR BASED CREDIT CARD FRAUD DETECTION USING SUPPORT VECTOR MACHINES

BEHAVIOR BASED CREDIT CARD FRAUD DETECTION USING SUPPORT VECTOR MACHINES BEHAVIOR BASED CREDIT CARD FRAUD DETECTION USING SUPPORT VECTOR MACHINES 123 CHAPTER 7 BEHAVIOR BASED CREDIT CARD FRAUD DETECTION USING SUPPORT VECTOR MACHINES 7.1 Introduction Even though using SVM presents

More information

Static Environment Recognition Using Omni-camera from a Moving Vehicle

Static Environment Recognition Using Omni-camera from a Moving Vehicle Static Environment Recognition Using Omni-camera from a Moving Vehicle Teruko Yata, Chuck Thorpe Frank Dellaert The Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 USA College of Computing

More information

SoMA. Automated testing system of camera algorithms. Sofica Ltd

SoMA. Automated testing system of camera algorithms. Sofica Ltd SoMA Automated testing system of camera algorithms Sofica Ltd February 2012 2 Table of Contents Automated Testing for Camera Algorithms 3 Camera Algorithms 3 Automated Test 4 Testing 6 API Testing 6 Functional

More information

Speed Performance Improvement of Vehicle Blob Tracking System

Speed Performance Improvement of Vehicle Blob Tracking System Speed Performance Improvement of Vehicle Blob Tracking System Sung Chun Lee and Ram Nevatia University of Southern California, Los Angeles, CA 90089, USA sungchun@usc.edu, nevatia@usc.edu Abstract. A speed

More information

Hands-free navigation in VR environments by tracking the head. Sing Bing Kang. CRL 97/1 March 1997

Hands-free navigation in VR environments by tracking the head. Sing Bing Kang. CRL 97/1 March 1997 TM Hands-free navigation in VR environments by tracking the head Sing Bing Kang CRL 97/ March 997 Cambridge Research Laboratory The Cambridge Research Laboratory was founded in 987 to advance the state

More information

CS 4204 Computer Graphics

CS 4204 Computer Graphics CS 4204 Computer Graphics Computer Animation Adapted from notes by Yong Cao Virginia Tech 1 Outline Principles of Animation Keyframe Animation Additional challenges in animation 2 Classic animation Luxo

More information

Efficient Background Subtraction and Shadow Removal Technique for Multiple Human object Tracking

Efficient Background Subtraction and Shadow Removal Technique for Multiple Human object Tracking ISSN: 2321-7782 (Online) Volume 1, Issue 7, December 2013 International Journal of Advance Research in Computer Science and Management Studies Research Paper Available online at: www.ijarcsms.com Efficient

More information

A System for Capturing High Resolution Images

A System for Capturing High Resolution Images A System for Capturing High Resolution Images G.Voyatzis, G.Angelopoulos, A.Bors and I.Pitas Department of Informatics University of Thessaloniki BOX 451, 54006 Thessaloniki GREECE e-mail: pitas@zeus.csd.auth.gr

More information

MEng, BSc Computer Science with Artificial Intelligence
