Computer Vision Techniques for Man-Machine Interaction

James L. Crowley
I.N.P. Grenoble
46 Ave Félix Viallet
Grenoble, France

Abstract

Computer vision offers many new possibilities for making machines aware of and responsive to man. Manipulating objects is a natural means by which man operates on and shapes his environment. By tracking the hands of a person manipulating objects, computer vision allows any convenient object, including fingers, to be used as a computer input device. In the first part of this paper we describe experiments with techniques for watching the hands and recognizing gestures. Vision of the face is an important aspect of human-to-human communication. Computer vision makes it possible to track and recognize faces, detect speech acts, estimate focus of attention and recognize emotions through facial expressions. When combined with real time image processing and active control of camera parameters, these techniques can greatly reduce the communications bandwidth required for video-phone and video-conference communications. In the second part of this paper we describe techniques for detecting, tracking, and interpreting faces. By following the movements of humans within a room or building, computer vision makes it possible to migrate communications and access to information technology automatically. In the final section we mention work on detecting and tracking full-body motion. All of these systems use relatively simple techniques to describe the appearance of people in images. Such techniques are easily programmed to operate in real time on widely available personal computers. Each of the techniques has been integrated into a continuously operating system using a reactive architecture.

1 Looking at people: perception for man-machine interaction

One of the effects of the continued exponential growth in available computing power has been an exponential decrease in the cost of hardware for real time computer vision.
This trend has been accelerated by the recent integration of image acquisition and processing hardware for multi-media applications in personal computers. Lowered cost has meant more widespread experimentation in real time computer vision, creating a rapid evolution in robustness and reliability and the development of architectures for integrated vision systems [Crowley et al 1994]. Man-machine interaction provides a fertile applications domain for this technological evolution. The barrier between physical objects (paper, pencils, calculators) and their electronic counterparts limits both the integration of computing into human tasks and the population willing to adapt to the required input devices. Computer vision, coupled with video projection using low-cost devices, makes it possible for a human to use any convenient object, including fingers, as a digital input device. Computer vision can permit a machine to track, identify and watch the face of a user. This offers the possibility of reducing bandwidth for video-telephone applications, of following the attention of a user by tracking his fixation point, and of exploiting facial expression as an additional information channel between man and machine. Traditional computer vision techniques have been oriented toward using contrast contours (edges) to describe polyhedral objects. This approach has proved fragile even for man-made

objects in a laboratory environment, and inappropriate for watching deformable non-polyhedral objects such as hands and faces. Thus man-machine interaction requires computer vision scientists to "go back to basics" and design techniques adapted to the problem. The following sections describe experiments with techniques for watching hands and faces.

2 Looking at hands: gesture as an input device

Human gesture serves three functional roles [Cadoz 94]: semiotic, ergotic, and epistemic. The semiotic function of gesture is to communicate meaningful information. The structure of a semiotic gesture is conventional and commonly results from shared cultural experience. The good-bye gesture, American Sign Language, the operational gestures used to guide airplanes on the ground, and even the vulgar finger each illustrate the semiotic function of gesture. The ergotic function of gesture is associated with the notion of work. It corresponds to the capacity of humans to manipulate the real world, to create artifacts, or to change the state of the environment by direct manipulation. Shaping pottery from clay, wiping dust, etc. result from ergotic gestures. The epistemic function of gesture allows humans to learn from the environment through tactile experience. By moving your hand over an object, you appreciate its structure, and you may discover the material it is made of, as well as other properties. All three functions may be augmented using an instrument or tool. Examples include a handkerchief for the semiotic good-bye gesture, a turn-table for the ergotic gesture of shaping pottery, or a dedicated artifact to explore the world. In human-computer interaction, gesture has been primarily exploited for its ergotic function: typing on a keyboard, moving a mouse and clicking buttons.
The semiotic role of gesture has emerged effectively from pen computing and virtual reality: ergotic gestures applied to an electronic pen, to a data-glove or to a body-suit are transformed into meaningful expressions for the computer system. Special-purpose interaction languages have been defined, typically 2-D pen gestures as in the Apple Newton, or 3-D hand gestures to navigate in virtual spaces or to control objects remotely. Mice, data-gloves, and body-suits are artificial add-ons that wire the user to the computer. They are not real end-user instruments (as a hammer would be), but convenient tricks for computer scientists to sense human gesture. Computer vision can transform ordinary artifacts and even body parts into effective input devices. We are exploring the integration of appearance-based computer vision techniques to observe human gesture non-intrusively, in a fast and robust manner. As a working problem, we are studying such techniques in the context of a digital desk [Wellner et al 93]. A digital desk, illustrated in Figure 1, combines a projected computer screen with a real physical desk. The projection is easily obtained using a liquid-crystal "datashow" panel on a standard overhead projector. A video camera is set up to watch the workspace such that the surface of the projected image and the surface of the imaged area coincide. The transformation between the projected image and the observed image is a projection between two planes, and thus is easily described by an affine transformation. The workspace is populated by a manipulating hand and a number of physical and virtual objects which can be manipulated. Both physical and virtual devices act as tools whose manipulation is a communication channel between the user and the computer. The identity of the object which is manipulated carries a strong semiotic message. The manner in which the object is manipulated provides both semiotic and ergotic information.
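As an illustration of this calibration step, the projector-to-camera mapping can be estimated from a few point correspondences between projected and observed positions. The sketch below (a minimal example assuming NumPy; the calibration points are invented) fits the 2×3 affine parameter matrix by least squares. Note that a general plane-to-plane projection is projective rather than affine, so the affine form is an approximation that works best when the camera axis is roughly perpendicular to the desk.

```python
import numpy as np

def fit_affine(src, dst):
    # Least-squares affine map taking projected-image points (src) to
    # observed camera-image points (dst); src and dst are (N, 2), N >= 3.
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])  # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T  # 2x3 matrix: dst ~ M[:, :2] @ [x, y] + M[:, 2]

def apply_affine(M, pts):
    pts = np.asarray(pts, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]

# Three correspondences (illustrative values) determine the map exactly;
# more points give a least-squares fit.
corners_projected = [(0, 0), (640, 0), (0, 480)]
corners_observed = [(12, 20), (600, 35), (30, 470)]
M = fit_affine(corners_projected, corners_observed)
```

Once estimated, the same matrix maps any projected cursor position into camera coordinates, and its inverse maps an observed fingertip back onto the projected screen.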
Virtual objects are generated internally by the system and projected onto the workspace. Examples include cursors and other visual feedback symbols as well as shapes and words which may have meaning to the user. Virtual objects are easily created, but they lack the tactile (epistemic) feedback provided by physical objects. The vision system must be able to detect,

track and recognize the user's hands as well as the tools which he manipulates. Tools may be virtual objects, or any convenient physical object which has been previously presented to the system. The system must also be able to extract meaning from the way in which tools are manipulated.

Figure 1. A Digital Desk combines a video projector and a computer vision system to integrate manipulation of virtual and physical objects.

To recognize gestures, we describe the space of possible appearances which the hands and tools can present. This space is reduced by registering and normalizing a window around the hand and tool using multiple visual processes. The space is further reduced by compressing the set of all possible appearances to a subspace defined by its principal components. Registering and normalizing the images of the hand and objects requires tracking. We have experimented with a number of different approaches to tracking hands and recognizing their gestures, including the techniques described below for face tracking. These include color histogram matching, cross-correlation with images, and active contours. As with faces, by combining several techniques in such a way that they can be dynamically initiated and controlled, we arrive at a system which is both flexible and robust.

Figure 2. A photo of a hand on a desk (left), the probability of skin (middle), and the largest connected component of the thresholded probability of skin (right).

A gesture can be defined as a combination of a trajectory in a physical workspace and a sequence of configurations of the hand and tool. The physical trajectory of a hand is the time-sampled sequence of positions of the center of the hand in the workspace, (x(t), y(t)). To express the sequence of hand and tool configurations as a trajectory requires defining a parametric representation for such configurations.
Such a representation is provided by the principal components of a large collection of possible appearances. Appearance-based vision provides a technique to express the hand-tool configuration in terms of a multi-dimensional parameter vector. For each tool to be manipulated, an "appearance space" is built up from a large sample of images of the hand manipulating the tool, as for example in Figure 2. The appearance space contains one dimension for each pixel. In order to align the dimensions, the images are normalized in position and scale, which requires a reliable tracking process as described above. The possible set of appearances for a hand manipulating a tool is described as a manifold in the appearance space. The appearance space

can be reduced to a minimal subspace composed of an average image and a set of orthogonal basis images using principal components analysis (PCA), as shown in Figure 3.

Figure 3. A set of hand configurations.

The principal components of an appearance space are the eigenvectors of a square matrix formed by computing the inner product of all pairs of images in the set. The information provided by each principal component is given by the associated eigenvalue. The eigenvalues can be normalized so that their sum is 1. A convenient method to determine how many dimensions to conserve is to choose the M largest components such that the sum of their normalized eigenvalues is above a chosen threshold. The threshold determines the percentage of information conserved.

Figure 4. Principal components of the images from Figure 3.

Figure 5. A hand gesture as a trajectory in a space defined by two principal components.

Figure 5 gives an example of a gesture in the appearance space, defined as a trajectory in principal components space. To create this figure, the 11 dimensions which represent the view space have been projected onto the first two principal components. The projection of each observed image is represented by a cross. The sequence of hand-tool configurations is described as a trajectory in appearance space. A gesture space may be defined by the concatenation of the trajectory in appearance space and the physical work space. Alternatively, in many cases, the trajectory in physical workspace may be

used to express ergotic parameters, while the trajectory in appearance space provides semiotic "meaning". Among the advantages of an appearance-based approach using PCA is that it can be applied to deformable objects, such as hands and faces. We have found that trajectories in principal component space provide a simple and robust means for encoding hand gestures, facial expressions, and lip movements.

3 Looking at faces: making the machine aware of the user

Locating and normalizing a face is a process of tracking. A variety of methods for detecting and registering the position and scale of a face have been demonstrated in laboratory environments. However, each of these methods can fail in naturally occurring circumstances. A reliable tracking system for registration can be obtained by integrating and coordinating several complementary tracking processes [Crowley-Berard 97]. Integration and coordination are performed using a synchronous architecture in which a supervisor activates and controls visual processes in a cyclic manner [Crowley et al 1994]. In collaboration with France Telecom, we have developed a system that achieves robust face tracking by integrating a number of simple visual processes. Control of individual processes is made possible by the inclusion of a confidence factor accompanying each observation. Fusion of the results is made possible by the determination of an error estimate (a covariance matrix) for each observation. Composing a system from a redundant ensemble of processes permits the overall system to adapt automatically to a variety of operational circumstances. The system initializes itself by detecting the blinking of eyes, as shown in Figure 6. Because eyes blink in unison, blinking provides a simple means to detect and locate the presence of a face in an image. Blink detection operates intermittently, succeeding for one image every 5 to 10 seconds.
However, when a blink is detected, detection provides a reliable and precise determination of the location and size of a face. Thus blink detection is ideally suited to initializing other means of tracking, such as color histogram detection of skin and correlation masks for tracking of the mouth and eyes. Blink detection also provides a simple means to confirm (or contradict) the detection of a face.

Figure 6. A blink image with the low-resolution difference, bounding boxes of changed regions, and detected face position.

Once a face has been detected, it is possible to follow it by a number of techniques. One such technique is to detect the pixels with the same color (hue and saturation) as skin. Such detection can be performed using color histogram matching. Color histograms have been used in image processing for decades, particularly for segmenting multi-spectral satellite images and medical images. Schiele and Waibel [Schiele-Waibel 95] have demonstrated that skin can be reliably detected by normalizing the color vector, dividing out the luminance component. A 2-D joint histogram of the luminance-normalized color components (r, g) can be computed from a patch of an image known to be a sample of skin. For color components (R, G, B):

r = R / (R + G + B)    g = G / (R + G + B)

The histogram of normalized color gives the number of occurrences of each normalized color pair (r, g). This histogram must be periodically re-initialized to compensate for changes in ambient light, or for differences in skin color between users. Such a color sample is captured automatically whenever an eye blink has been detected with sufficient confidence. A normalized color histogram h(r, g), based on a sample of N pixels, gives the conditional probability of observing a color vector given that the pixel is an image of skin. Using Bayes' rule, we convert this to the conditional probability of skin given the color vector, p(skin | C). This allows us to construct a probability image in which each pixel is replaced by the probability that it is the projection of skin, as shown in Figure 7. The center of gravity of the probability of skin gives an estimate of the position of the face. The bounding rectangle gives an estimate of its size. A confidence factor is then computed by comparing the detected bounding box to an ideal width and height, using a normal probability law. The average width and height and the covariance matrix are obtained from test observations selected by hand.

Figure 7a. Bounding box and center of gravity superimposed on an image of a person in front of a computer screen. Figure 7b. In this image, each pixel from image 7a has been replaced by the probability that the pixel is skin. Figure 7c. The bounding box of the face region and the center of gravity.

Detection of a face by color is fast and reliable, but not always precise. Detection by blinking is precise, but requires capturing an image pair during a blink. Correlation can be used to complement these two techniques and to hold the face centered in the image as the head moves. Energy-normalized cross-correlation tracking can be shown to be optimal in the presence of additive Gaussian noise [Crowley-Martin 95].
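The skin-color histogram and the Bayes-rule conversion described above can be sketched as follows (a minimal NumPy sketch; the bin count and the prior p(skin) are illustrative assumptions, not values from the system described here):

```python
import numpy as np

def chromaticity(rgb):
    # Luminance-normalized components (r, g) for each pixel.
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0  # avoid division by zero on black pixels
    return rgb[..., :2] / s  # values in [0, 1]

def color_histogram(patch, bins=32):
    # Normalized 2-D histogram of (r, g); computed on a skin patch this
    # estimates p(color | skin), computed on the whole image it estimates p(color).
    rg = chromaticity(patch).reshape(-1, 2)
    h, _, _ = np.histogram2d(rg[:, 0], rg[:, 1], bins=bins,
                             range=[[0, 1], [0, 1]])
    return h / h.sum()

def skin_probability(image, h_skin, h_color, p_skin=0.3, bins=32):
    # Bayes' rule: p(skin | C) = p(C | skin) p(skin) / p(C).
    rg = chromaticity(image)
    idx = np.minimum((rg * bins).astype(int), bins - 1)
    p_c_skin = h_skin[idx[..., 0], idx[..., 1]]
    p_c = h_color[idx[..., 0], idx[..., 1]]
    p = np.divide(p_c_skin * p_skin, p_c,
                  out=np.zeros_like(p_c), where=p_c > 0)
    return np.clip(p, 0.0, 1.0)
```

The resulting probability image is what the center-of-gravity and bounding-box estimates operate on.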
The dominant noise in the case of face detection is neither Gaussian nor additive. However, when assisted by other detection processes, correlation tracking provides a technique which is inexpensive, relatively reliable, and formally analyzable. Correlation tracking processes are initiated by blink detection. The template for correlation is taken from the estimated position of the eyes. The search region for each tracker is estimated from the expected speed of the user's movements, measured in pixels per frame. This value can be kept quite small if the frame rate is kept high. Each reference template is a small neighborhood, W(m, n), of the image P(i, j) obtained during initialization just after blink detection. In subsequent images, the reference template is compared to the image neighborhood at (i, j). This comparison can be made by an energy-normalized inner product (normalized cross-correlation, NCC) or by a sum of squared differences (SSD):

SSD(i, j) = Σ_{m=0..N} Σ_{n=0..N} ( P_k(i+m, j+n) − W(m, n) )²

Comparison by SSD is observed to give slightly better results, but is more sensitive to changes in background lighting. This sensitivity is not a problem because the system continually re-initializes the template whenever a blink is detected.
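The SSD search over a small region can be sketched as follows (a minimal NumPy version; the search radius is an illustrative parameter, and a real tracker would run this on the live image stream):

```python
import numpy as np

def ssd_search(image, template, origin, radius):
    # Exhaustive SSD search within +/- radius of the previous corner
    # position; returns the corner (i, j) minimizing the SSD score.
    th, tw = template.shape
    i0, j0 = origin
    best_score, best_pos = None, None
    for i in range(max(0, i0 - radius), min(image.shape[0] - th, i0 + radius) + 1):
        for j in range(max(0, j0 - radius), min(image.shape[1] - tw, j0 + radius) + 1):
            d = image[i:i + th, j:j + tw] - template
            score = float((d * d).sum())
            if best_score is None or score < best_score:
                best_score, best_pos = score, (i, j)
    return best_pos, best_score
```

The center of the target is then the corner position plus half the mask size; keeping the radius small is what allows frame-rate operation.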

The estimated position of the target is determined by finding the position (i, j) at which the SSD measure is closest to zero. The actual center position is determined by adding half the size of the mask to the corner position (i, j). By keeping the search region small, the system can operate at a processing rate of 25 Hz. Figure 8 shows a typical map of the SSD values obtained when a template for the eye is correlated with a face. The local image of SSD values is inverted to provide the confidence factor and covariance. The covariance of the detection is estimated from the second moment of the inverted SSD values. A sharp correlation peak gives a small covariance, while a broader peak gives a larger spread in covariance. The confidence is estimated from the value of the minimum SSD. When this confidence measure drops below a threshold, the tracking process is halted or re-initialized.

Figure 8a. The correlation template is taken from the eye (detected by blink detection). Figure 8b. The eye is detected in later images by sum of squared differences (SSD). Figure 8c. Map of values from the sum of squared differences (white represents 0).

The perceptual processes of eye blink detection, color histogram matching, correlation tracking, and sound localization are complementary. Each process fails under different circumstances, and produces a different precision at a different computational cost. The fact that each process produces a confidence factor makes it possible to coordinate the different processes in order to maximize confidence and precision while minimizing computing cost. Figure 9 shows an example of the number of calls per second for the processes as the tracking moves through different states (labeled 1, 2 and 3).

Figure 9. Cycles per second for each of three processes, as the supervisor steps through the three states: blink detection (square), correlation (o), and histogram (+).
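The coordination policy can be sketched as a simple supervisor cycle (a hypothetical illustration: the process names, stub results, and the single confidence threshold are invented, not the actual control logic of the system):

```python
# Try the processes from cheapest to most expensive and accept the first
# estimate whose confidence factor clears the threshold.
def supervise(processes, threshold=0.5):
    # processes: list of (name, run) pairs; run() returns
    # (position_estimate, confidence_factor).
    for name, run in processes:
        estimate, confidence = run()
        if confidence >= threshold:
            return name, estimate, confidence
    return None  # every process failed; wait for the next blink to re-initialize

# Stub processes standing in for correlation, color histogram and blink:
processes = [
    ("correlation", lambda: ((120, 80), 0.2)),  # lost the target this frame
    ("color", lambda: ((118, 82), 0.9)),        # histogram still sees the face
    ("blink", lambda: ((0, 0), 0.0)),           # no blink observed yet
]
```

Because the cheap tracker is consulted first, the expensive fallbacks run only when confidence drops, which is the behavior plotted in Figure 9.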
Computer vision provides a means to greatly reduce the bandwidth requirements for video communications, while enhancing ease of use by permitting the system to keep the user centered in the image. Images of a talking face, transmitted for video communications, are highly repetitive. In such images, the face undergoes a limited set of deformations as the subject speaks and gestures. These deformations can be captured to form an orthogonal "basis space" of images. Such a space permits each individual image to be coded and transmitted as a relatively small vector of coefficients. As few as 15 such coefficients

(coded as 60 bytes) can be sufficient for quite realistic reconstruction of a talking face, provided that the face is registered and normalized in position and size.

4 Looking at walking

Walking is a periodic movement. The boundary contour of a walking person deforms in a cyclic and predictable manner. This deformation can be described with respect to a vector of boundary points using principal components analysis (PCA). With this technique, the principal components of the covariance matrix of the vector of boundary points are used to define the axes of an orthogonal space. A rhythmic movement, such as a person walking, can be described as a closed contour in this space. This contour captures the sequence of configurations of boundary points typical of walking. An example of a real-time system for detecting and tracking walking humans using PCA of the contour points has been developed at the University of Leeds [Baumberg-Hogg 94]. This system begins processing by subtracting an average background image from each image. Subtraction brings out regions where individuals are moving in the scene. However, a region where an image differs from the background can result from many sources: a passing cloud, rain, a passing car, a dog or a walking person. Each such source can produce a very large number of possible forms, making it difficult to detect only walking figures. The Leeds system solves this problem by comparing the contour of a detected region to a model for a walking human using principal component analysis. The subtracted image is compared to a threshold to produce a binary image (composed of 1's and 0's). Objects which were not present in the background image appear as regions of pixels with the value 1. Such regions are described as a vector of boundary points.
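The front end of this pipeline, background subtraction, thresholding and extraction of boundary points, can be sketched as follows (a minimal NumPy version; the threshold value is illustrative and the contour extraction is simplified to foreground pixels touching the background):

```python
import numpy as np

def foreground_mask(image, background, threshold=25.0):
    # Binary image: True where the frame differs from the average background.
    diff = np.abs(image.astype(float) - background.astype(float))
    return diff > threshold

def boundary_points(mask):
    # Boundary pixels: foreground pixels with at least one 4-connected
    # background neighbor (a simplified contour extraction).
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:]) & mask
    return np.argwhere(mask & ~interior)
```

The resulting (row, col) boundary vector is what would then be normalized and projected into the principal components space to test whether the region deforms like a walking figure.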
An example of the reconstructed contour superimposed on images of a person walking is shown in Figure 10.

Figure 10. Reconstructed contour superimposed on an image of a person walking. The contour is reconstructed as a weighted sum of principal components (figures provided by D. Hogg, U. Leeds).

Any rhythmic movement, such as walking, running, sitting or crawling, can be described by the projection of sequences of boundary point sets into a principal components space. Although the mathematical foundations of the technique are somewhat unusual, the technique itself is easily implemented on a standard workstation or personal computer, and can operate in real time if the computer is equipped with a sufficiently fast image acquisition card.

5 Conclusions

For many years, applications of computer vision and image processing have been held back by the high cost and limited availability of hardware with sufficient digitizing bandwidth and computing power. In addition, restrictions on computing power led some researchers to employ edge-based techniques which ultimately proved fragile and unreliable. This situation is now changing dramatically with the widespread introduction of computer workstations, and even

personal computers, equipped with on-board image digitizers and with sufficient computing power for real-time processing. This has made possible the use of appearance-based approaches to real time computer vision [Murase-Nayar 95], [Turk-Pentland 91]. Appearance-based computer vision is expensive in memory and computing power. However, these techniques are based on a solid theoretical foundation which makes it possible to predict failure rates and failure conditions. Furthermore, these techniques tend to have constant or linear computational complexity, which makes it possible to employ them in real time systems. The continued exponential growth in computing power has now reached the levels necessary to make inexpensive use of such techniques possible. Research in detecting, tracking and understanding the actions of people for surveillance applications has turned out to have important spin-offs for human-computer interaction. Techniques originally designed to monitor movements in a secure site can also be used to provide communications and access to information technology, to reduce bandwidth requirements for video-telephones, and to allow computer users to employ any convenient physical object as a computer entry device.

References

[Baumberg-Hogg 94] A. M. Baumberg and D. Hogg, "Learning Flexible Models from Image Sequences", European Conference on Computer Vision (ECCV '94), Stockholm, May 1994.
[Cadoz 94] C. Cadoz, Les réalités virtuelles, Dominos, Flammarion, 1994.
[Crowley et al 1994] J. L. Crowley and J. M. Bedrune, "Integration and Control of Reactive Visual Processes", European Conference on Computer Vision (ECCV '94), Stockholm, May 1994.
[Crowley-Martin 95] J. L. Crowley and J. Martin, "Experimental Comparison of Correlation Techniques", International Conference on Intelligent Autonomous Systems (IAS-4), Karlsruhe, 1995.
[Crowley-Berard 97] J. L. Crowley and F. Berard, "Multi-Modal Tracking of Faces for Video Communications", IEEE Conference on Computer Vision and Pattern Recognition (CVPR '97), San Juan, Puerto Rico, June 1997.
[Crowley-Martin 97] J. L. Crowley and J. Martin, "Visual Processes for Tracking and Recognition of Hand Gestures", International Workshop on Perceptual User Interfaces, Banff, Canada, October 1997.
[Krueger 91] M. Krueger, Artificial Reality II, Addison-Wesley, 1991.
[Murase-Nayar 95] H. Murase and S. K. Nayar, "Visual Learning and Recognition of 3-D Objects from Appearance", International Journal of Computer Vision, Vol. 14, No. 1, January 1995.
[Schiele-Waibel 95] B. Schiele and A. Waibel, "Gaze Tracking Based on Face Color", International Workshop on Face and Gesture Recognition, Zurich, 1995.
[Turk-Pentland 91] M. Turk and A. P. Pentland, "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, 3(1):71-86, 1991.
[Wellner et al 93] P. Wellner, W. Mackay and R. Gold, "Computer-Augmented Environments: Back to the Real World", special issue of Communications of the ACM, Vol. 36, No. 7, 1993.


More information

Practical Tour of Visual tracking. David Fleet and Allan Jepson January, 2006

Practical Tour of Visual tracking. David Fleet and Allan Jepson January, 2006 Practical Tour of Visual tracking David Fleet and Allan Jepson January, 2006 Designing a Visual Tracker: What is the state? pose and motion (position, velocity, acceleration, ) shape (size, deformation,

More information

International Journal of Advanced Information in Arts, Science & Management Vol.2, No.2, December 2014

International Journal of Advanced Information in Arts, Science & Management Vol.2, No.2, December 2014 Efficient Attendance Management System Using Face Detection and Recognition Arun.A.V, Bhatath.S, Chethan.N, Manmohan.C.M, Hamsaveni M Department of Computer Science and Engineering, Vidya Vardhaka College

More information

VEHICLE LOCALISATION AND CLASSIFICATION IN URBAN CCTV STREAMS

VEHICLE LOCALISATION AND CLASSIFICATION IN URBAN CCTV STREAMS VEHICLE LOCALISATION AND CLASSIFICATION IN URBAN CCTV STREAMS Norbert Buch 1, Mark Cracknell 2, James Orwell 1 and Sergio A. Velastin 1 1. Kingston University, Penrhyn Road, Kingston upon Thames, KT1 2EE,

More information

Low-resolution Character Recognition by Video-based Super-resolution

Low-resolution Character Recognition by Video-based Super-resolution 2009 10th International Conference on Document Analysis and Recognition Low-resolution Character Recognition by Video-based Super-resolution Ataru Ohkura 1, Daisuke Deguchi 1, Tomokazu Takahashi 2, Ichiro

More information

A Learning Based Method for Super-Resolution of Low Resolution Images

A Learning Based Method for Super-Resolution of Low Resolution Images A Learning Based Method for Super-Resolution of Low Resolution Images Emre Ugur June 1, 2004 emre.ugur@ceng.metu.edu.tr Abstract The main objective of this project is the study of a learning based method

More information

Colorado School of Mines Computer Vision Professor William Hoff

Colorado School of Mines Computer Vision Professor William Hoff Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ 1 Introduction to 2 What is? A process that produces from images of the external world a description

More information

Interactive person re-identification in TV series

Interactive person re-identification in TV series Interactive person re-identification in TV series Mika Fischer Hazım Kemal Ekenel Rainer Stiefelhagen CV:HCI lab, Karlsruhe Institute of Technology Adenauerring 2, 76131 Karlsruhe, Germany E-mail: {mika.fischer,ekenel,rainer.stiefelhagen}@kit.edu

More information

Design of Multi-camera Based Acts Monitoring System for Effective Remote Monitoring Control

Design of Multi-camera Based Acts Monitoring System for Effective Remote Monitoring Control 보안공학연구논문지 (Journal of Security Engineering), 제 8권 제 3호 2011년 6월 Design of Multi-camera Based Acts Monitoring System for Effective Remote Monitoring Control Ji-Hoon Lim 1), Seoksoo Kim 2) Abstract With

More information

Image Compression through DCT and Huffman Coding Technique

Image Compression through DCT and Huffman Coding Technique International Journal of Current Engineering and Technology E-ISSN 2277 4106, P-ISSN 2347 5161 2015 INPRESSCO, All Rights Reserved Available at http://inpressco.com/category/ijcet Research Article Rahul

More information

Chapter 5 Objectives. Chapter 5 Input

Chapter 5 Objectives. Chapter 5 Input Chapter 5 Input Describe two types of input List characteristics of a Identify various types of s Identify various types of pointing devices Chapter 5 Objectives Explain how voice recognition works Understand

More information

False alarm in outdoor environments

False alarm in outdoor environments Accepted 1.0 Savantic letter 1(6) False alarm in outdoor environments Accepted 1.0 Savantic letter 2(6) Table of contents Revision history 3 References 3 1 Introduction 4 2 Pre-processing 4 3 Detection,

More information

Part-Based Recognition

Part-Based Recognition Part-Based Recognition Benedict Brown CS597D, Fall 2003 Princeton University CS 597D, Part-Based Recognition p. 1/32 Introduction Many objects are made up of parts It s presumably easier to identify simple

More information

Face Model Fitting on Low Resolution Images

Face Model Fitting on Low Resolution Images Face Model Fitting on Low Resolution Images Xiaoming Liu Peter H. Tu Frederick W. Wheeler Visualization and Computer Vision Lab General Electric Global Research Center Niskayuna, NY, 1239, USA {liux,tu,wheeler}@research.ge.com

More information

OBJECT TRACKING USING LOG-POLAR TRANSFORMATION

OBJECT TRACKING USING LOG-POLAR TRANSFORMATION OBJECT TRACKING USING LOG-POLAR TRANSFORMATION A Thesis Submitted to the Gradual Faculty of the Louisiana State University and Agricultural and Mechanical College in partial fulfillment of the requirements

More information

Position Estimation Using Principal Components of Range Data

Position Estimation Using Principal Components of Range Data Position Estimation Using Principal Components of Range Data James L. Crowley, Frank Wallner and Bernt Schiele Project PRIMA-IMAG, INRIA Rhône-Alpes 655, avenue de l'europe 38330 Montbonnot Saint Martin,

More information

Mean-Shift Tracking with Random Sampling

Mean-Shift Tracking with Random Sampling 1 Mean-Shift Tracking with Random Sampling Alex Po Leung, Shaogang Gong Department of Computer Science Queen Mary, University of London, London, E1 4NS Abstract In this work, boosting the efficiency of

More information

Working With Animation: Introduction to Flash

Working With Animation: Introduction to Flash Working With Animation: Introduction to Flash With Adobe Flash, you can create artwork and animations that add motion and visual interest to your Web pages. Flash movies can be interactive users can click

More information

What is Visualization? Information Visualization An Overview. Information Visualization. Definitions

What is Visualization? Information Visualization An Overview. Information Visualization. Definitions What is Visualization? Information Visualization An Overview Jonathan I. Maletic, Ph.D. Computer Science Kent State University Visualize/Visualization: To form a mental image or vision of [some

More information

Automatic parameter regulation for a tracking system with an auto-critical function

Automatic parameter regulation for a tracking system with an auto-critical function Automatic parameter regulation for a tracking system with an auto-critical function Daniela Hall INRIA Rhône-Alpes, St. Ismier, France Email: Daniela.Hall@inrialpes.fr Abstract In this article we propose

More information

Cloud-Empowered Multimedia Service: An Automatic Video Storytelling Tool

Cloud-Empowered Multimedia Service: An Automatic Video Storytelling Tool Cloud-Empowered Multimedia Service: An Automatic Video Storytelling Tool Joseph C. Tsai Foundation of Computer Science Lab. The University of Aizu Fukushima-ken, Japan jctsai@u-aizu.ac.jp Abstract Video

More information

HSI BASED COLOUR IMAGE EQUALIZATION USING ITERATIVE n th ROOT AND n th POWER

HSI BASED COLOUR IMAGE EQUALIZATION USING ITERATIVE n th ROOT AND n th POWER HSI BASED COLOUR IMAGE EQUALIZATION USING ITERATIVE n th ROOT AND n th POWER Gholamreza Anbarjafari icv Group, IMS Lab, Institute of Technology, University of Tartu, Tartu 50411, Estonia sjafari@ut.ee

More information

Efficient Attendance Management: A Face Recognition Approach

Efficient Attendance Management: A Face Recognition Approach Efficient Attendance Management: A Face Recognition Approach Badal J. Deshmukh, Sudhir M. Kharad Abstract Taking student attendance in a classroom has always been a tedious task faultfinders. It is completely

More information

An Energy-Based Vehicle Tracking System using Principal Component Analysis and Unsupervised ART Network

An Energy-Based Vehicle Tracking System using Principal Component Analysis and Unsupervised ART Network Proceedings of the 8th WSEAS Int. Conf. on ARTIFICIAL INTELLIGENCE, KNOWLEDGE ENGINEERING & DATA BASES (AIKED '9) ISSN: 179-519 435 ISBN: 978-96-474-51-2 An Energy-Based Vehicle Tracking System using Principal

More information

B2.53-R3: COMPUTER GRAPHICS. NOTE: 1. There are TWO PARTS in this Module/Paper. PART ONE contains FOUR questions and PART TWO contains FIVE questions.

B2.53-R3: COMPUTER GRAPHICS. NOTE: 1. There are TWO PARTS in this Module/Paper. PART ONE contains FOUR questions and PART TWO contains FIVE questions. B2.53-R3: COMPUTER GRAPHICS NOTE: 1. There are TWO PARTS in this Module/Paper. PART ONE contains FOUR questions and PART TWO contains FIVE questions. 2. PART ONE is to be answered in the TEAR-OFF ANSWER

More information

Building an Advanced Invariant Real-Time Human Tracking System

Building an Advanced Invariant Real-Time Human Tracking System UDC 004.41 Building an Advanced Invariant Real-Time Human Tracking System Fayez Idris 1, Mazen Abu_Zaher 2, Rashad J. Rasras 3, and Ibrahiem M. M. El Emary 4 1 School of Informatics and Computing, German-Jordanian

More information

Automatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 269 Class Project Report

Automatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 269 Class Project Report Automatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 69 Class Project Report Junhua Mao and Lunbo Xu University of California, Los Angeles mjhustc@ucla.edu and lunbo

More information

3D Scanner using Line Laser. 1. Introduction. 2. Theory

3D Scanner using Line Laser. 1. Introduction. 2. Theory . Introduction 3D Scanner using Line Laser Di Lu Electrical, Computer, and Systems Engineering Rensselaer Polytechnic Institute The goal of 3D reconstruction is to recover the 3D properties of a geometric

More information

Human-Computer Interaction: Input Devices

Human-Computer Interaction: Input Devices Human-Computer Interaction: Input Devices Robert J.K. Jacob Department of Electrical Engineering and Computer Science Tufts University Medford, Mass. All aspects of human-computer interaction, from the

More information

Assessment. Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall

Assessment. Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall Automatic Photo Quality Assessment Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall Estimating i the photorealism of images: Distinguishing i i paintings from photographs h Florin

More information

PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY

PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY V. Knyaz a, *, Yu. Visilter, S. Zheltov a State Research Institute for Aviation System (GosNIIAS), 7, Victorenko str., Moscow, Russia

More information

A Genetic Algorithm-Evolved 3D Point Cloud Descriptor

A Genetic Algorithm-Evolved 3D Point Cloud Descriptor A Genetic Algorithm-Evolved 3D Point Cloud Descriptor Dominik Wȩgrzyn and Luís A. Alexandre IT - Instituto de Telecomunicações Dept. of Computer Science, Univ. Beira Interior, 6200-001 Covilhã, Portugal

More information

Interactive Projector Screen with Hand Detection Using LED Lights

Interactive Projector Screen with Hand Detection Using LED Lights Interactive Projector Screen with Hand Detection Using LED Lights Padmavati Khandnor Assistant Professor/Computer Science/Project Mentor Aditi Aggarwal Student/Computer Science/Final Year Ankita Aggarwal

More information

Speed Performance Improvement of Vehicle Blob Tracking System

Speed Performance Improvement of Vehicle Blob Tracking System Speed Performance Improvement of Vehicle Blob Tracking System Sung Chun Lee and Ram Nevatia University of Southern California, Los Angeles, CA 90089, USA sungchun@usc.edu, nevatia@usc.edu Abstract. A speed

More information

Very Low Frame-Rate Video Streaming For Face-to-Face Teleconference

Very Low Frame-Rate Video Streaming For Face-to-Face Teleconference Very Low Frame-Rate Video Streaming For Face-to-Face Teleconference Jue Wang, Michael F. Cohen Department of Electrical Engineering, University of Washington Microsoft Research Abstract Providing the best

More information

Computer Graphics. Geometric Modeling. Page 1. Copyright Gotsman, Elber, Barequet, Karni, Sheffer Computer Science - Technion. An Example.

Computer Graphics. Geometric Modeling. Page 1. Copyright Gotsman, Elber, Barequet, Karni, Sheffer Computer Science - Technion. An Example. An Example 2 3 4 Outline Objective: Develop methods and algorithms to mathematically model shape of real world objects Categories: Wire-Frame Representation Object is represented as as a set of points

More information

RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 29 (2008) Indiana University

RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 29 (2008) Indiana University RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 29 (2008) Indiana University A Software-Based System for Synchronizing and Preprocessing Eye Movement Data in Preparation for Analysis 1 Mohammad

More information

Adaptive Face Recognition System from Myanmar NRC Card

Adaptive Face Recognition System from Myanmar NRC Card Adaptive Face Recognition System from Myanmar NRC Card Ei Phyo Wai University of Computer Studies, Yangon, Myanmar Myint Myint Sein University of Computer Studies, Yangon, Myanmar ABSTRACT Biometrics is

More information

Learning Enhanced 3D Models for Vehicle Tracking

Learning Enhanced 3D Models for Vehicle Tracking Learning Enhanced 3D Models for Vehicle Tracking J.M. Ferryman, A. D. Worrall and S. J. Maybank Computational Vision Group, Department of Computer Science The University of Reading RG6 6AY, UK J.M.Ferryman@reading.ac.uk

More information

The Flat Shape Everything around us is shaped

The Flat Shape Everything around us is shaped The Flat Shape Everything around us is shaped The shape is the external appearance of the bodies of nature: Objects, animals, buildings, humans. Each form has certain qualities that distinguish it from

More information

Classifying Manipulation Primitives from Visual Data

Classifying Manipulation Primitives from Visual Data Classifying Manipulation Primitives from Visual Data Sandy Huang and Dylan Hadfield-Menell Abstract One approach to learning from demonstrations in robotics is to make use of a classifier to predict if

More information

CS231M Project Report - Automated Real-Time Face Tracking and Blending

CS231M Project Report - Automated Real-Time Face Tracking and Blending CS231M Project Report - Automated Real-Time Face Tracking and Blending Steven Lee, slee2010@stanford.edu June 6, 2015 1 Introduction Summary statement: The goal of this project is to create an Android

More information

Experiments with a Camera-Based Human-Computer Interface System

Experiments with a Camera-Based Human-Computer Interface System Experiments with a Camera-Based Human-Computer Interface System Robyn Cloud*, Margrit Betke**, and James Gips*** * Computer Science Department, Boston University, 111 Cummington Street, Boston, MA 02215,

More information

Journal of Industrial Engineering Research. Adaptive sequence of Key Pose Detection for Human Action Recognition

Journal of Industrial Engineering Research. Adaptive sequence of Key Pose Detection for Human Action Recognition IWNEST PUBLISHER Journal of Industrial Engineering Research (ISSN: 2077-4559) Journal home page: http://www.iwnest.com/aace/ Adaptive sequence of Key Pose Detection for Human Action Recognition 1 T. Sindhu

More information

Professor, D.Sc. (Tech.) Eugene Kovshov MSTU «STANKIN», Moscow, Russia

Professor, D.Sc. (Tech.) Eugene Kovshov MSTU «STANKIN», Moscow, Russia Professor, D.Sc. (Tech.) Eugene Kovshov MSTU «STANKIN», Moscow, Russia As of today, the issue of Big Data processing is still of high importance. Data flow is increasingly growing. Processing methods

More information

Communicating Agents Architecture with Applications in Multimodal Human Computer Interaction

Communicating Agents Architecture with Applications in Multimodal Human Computer Interaction Communicating Agents Architecture with Applications in Multimodal Human Computer Interaction Maximilian Krüger, Achim Schäfer, Andreas Tewes, Rolf P. Würtz Institut für Neuroinformatik, Ruhr-Universität

More information

Vibrations can have an adverse effect on the accuracy of the end effector of a

Vibrations can have an adverse effect on the accuracy of the end effector of a EGR 315 Design Project - 1 - Executive Summary Vibrations can have an adverse effect on the accuracy of the end effector of a multiple-link robot. The ability of the machine to move to precise points scattered

More information

The Visual Internet of Things System Based on Depth Camera

The Visual Internet of Things System Based on Depth Camera The Visual Internet of Things System Based on Depth Camera Xucong Zhang 1, Xiaoyun Wang and Yingmin Jia Abstract The Visual Internet of Things is an important part of information technology. It is proposed

More information

COMPUTING CLOUD MOTION USING A CORRELATION RELAXATION ALGORITHM Improving Estimation by Exploiting Problem Knowledge Q. X. WU

COMPUTING CLOUD MOTION USING A CORRELATION RELAXATION ALGORITHM Improving Estimation by Exploiting Problem Knowledge Q. X. WU COMPUTING CLOUD MOTION USING A CORRELATION RELAXATION ALGORITHM Improving Estimation by Exploiting Problem Knowledge Q. X. WU Image Processing Group, Landcare Research New Zealand P.O. Box 38491, Wellington

More information

Video compression: Performance of available codec software

Video compression: Performance of available codec software Video compression: Performance of available codec software Introduction. Digital Video A digital video is a collection of images presented sequentially to produce the effect of continuous motion. It takes

More information

MACHINE VISION MNEMONICS, INC. 102 Gaither Drive, Suite 4 Mount Laurel, NJ 08054 USA 856-234-0970 www.mnemonicsinc.com

MACHINE VISION MNEMONICS, INC. 102 Gaither Drive, Suite 4 Mount Laurel, NJ 08054 USA 856-234-0970 www.mnemonicsinc.com MACHINE VISION by MNEMONICS, INC. 102 Gaither Drive, Suite 4 Mount Laurel, NJ 08054 USA 856-234-0970 www.mnemonicsinc.com Overview A visual information processing company with over 25 years experience

More information

Virtual Data Gloves : Interacting with Virtual Environments through Computer Vision

Virtual Data Gloves : Interacting with Virtual Environments through Computer Vision Virtual Data Gloves : Interacting with Virtual Environments through Computer Vision Richard Bowden (1), Tony Heap(2), Craig Hart(2) (1) Dept of M & ES (2) School of Computer Studies Brunel University University

More information

Object Tracking System Using Approximate Median Filter, Kalman Filter and Dynamic Template Matching

Object Tracking System Using Approximate Median Filter, Kalman Filter and Dynamic Template Matching I.J. Intelligent Systems and Applications, 2014, 05, 83-89 Published Online April 2014 in MECS (http://www.mecs-press.org/) DOI: 10.5815/ijisa.2014.05.09 Object Tracking System Using Approximate Median

More information

An Instructional Aid System for Driving Schools Based on Visual Simulation

An Instructional Aid System for Driving Schools Based on Visual Simulation An Instructional Aid System for Driving Schools Based on Visual Simulation Salvador Bayarri, Rafael Garcia, Pedro Valero, Ignacio Pareja, Institute of Traffic and Road Safety (INTRAS), Marcos Fernandez

More information

Object tracking in video scenes

Object tracking in video scenes A Seminar On Object tracking in video scenes Presented by Alok K. Watve M.Tech. IT 1st year Indian Institue of Technology, Kharagpur Under the guidance of Dr. Shamik Sural Assistant Professor School of

More information

MVA ENS Cachan. Lecture 2: Logistic regression & intro to MIL Iasonas Kokkinos Iasonas.kokkinos@ecp.fr

MVA ENS Cachan. Lecture 2: Logistic regression & intro to MIL Iasonas Kokkinos Iasonas.kokkinos@ecp.fr Machine Learning for Computer Vision 1 MVA ENS Cachan Lecture 2: Logistic regression & intro to MIL Iasonas Kokkinos Iasonas.kokkinos@ecp.fr Department of Applied Mathematics Ecole Centrale Paris Galen

More information

Face detection is a process of localizing and extracting the face region from the

Face detection is a process of localizing and extracting the face region from the Chapter 4 FACE NORMALIZATION 4.1 INTRODUCTION Face detection is a process of localizing and extracting the face region from the background. The detected face varies in rotation, brightness, size, etc.

More information

Linear Codes. Chapter 3. 3.1 Basics

Linear Codes. Chapter 3. 3.1 Basics Chapter 3 Linear Codes In order to define codes that we can encode and decode efficiently, we add more structure to the codespace. We shall be mainly interested in linear codes. A linear code of length

More information

The Scientific Data Mining Process

The Scientific Data Mining Process Chapter 4 The Scientific Data Mining Process When I use a word, Humpty Dumpty said, in rather a scornful tone, it means just what I choose it to mean neither more nor less. Lewis Carroll [87, p. 214] In

More information

Force/position control of a robotic system for transcranial magnetic stimulation

Force/position control of a robotic system for transcranial magnetic stimulation Force/position control of a robotic system for transcranial magnetic stimulation W.N. Wan Zakaria School of Mechanical and System Engineering Newcastle University Abstract To develop a force control scheme

More information

Feature Tracking and Optical Flow

Feature Tracking and Optical Flow 02/09/12 Feature Tracking and Optical Flow Computer Vision CS 543 / ECE 549 University of Illinois Derek Hoiem Many slides adapted from Lana Lazebnik, Silvio Saverse, who in turn adapted slides from Steve

More information

State of Stress at Point

State of Stress at Point State of Stress at Point Einstein Notation The basic idea of Einstein notation is that a covector and a vector can form a scalar: This is typically written as an explicit sum: According to this convention,

More information

CERTIFICATE COURSE IN WEB DESIGNING

CERTIFICATE COURSE IN WEB DESIGNING CERTIFICATE COURSE IN WEB DESIGNING REGULATIONS AND SYLLABUS FOR CERTIFICATE COURSE IN WEB DESIGNING OFFERED BY BHARATHIAR UNIVERSITY,COIMBATORE FROM 2008-2009 UNDER THE UNIVERSITY INDUSTRY INTERACTION

More information

Implementation of Canny Edge Detector of color images on CELL/B.E. Architecture.

Implementation of Canny Edge Detector of color images on CELL/B.E. Architecture. Implementation of Canny Edge Detector of color images on CELL/B.E. Architecture. Chirag Gupta,Sumod Mohan K cgupta@clemson.edu, sumodm@clemson.edu Abstract In this project we propose a method to improve

More information

Static Environment Recognition Using Omni-camera from a Moving Vehicle

Static Environment Recognition Using Omni-camera from a Moving Vehicle Static Environment Recognition Using Omni-camera from a Moving Vehicle Teruko Yata, Chuck Thorpe Frank Dellaert The Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 USA College of Computing

More information

Screen Capture A Vector Quantisation Approach

Screen Capture A Vector Quantisation Approach Screen Capture A Vector Quantisation Approach Jesse S. Jin and Sue R. Wu Biomedical and Multimedia Information Technology Group School of Information Technologies, F09 University of Sydney, NSW, 2006 {jesse,suewu}@it.usyd.edu.au

More information

Incremental PCA: An Alternative Approach for Novelty Detection

Incremental PCA: An Alternative Approach for Novelty Detection Incremental PCA: An Alternative Approach for Detection Hugo Vieira Neto and Ulrich Nehmzow Department of Computer Science University of Essex Wivenhoe Park Colchester CO4 3SQ {hvieir, udfn}@essex.ac.uk

More information

Understanding Compression Technologies for HD and Megapixel Surveillance

Understanding Compression Technologies for HD and Megapixel Surveillance When the security industry began the transition from using VHS tapes to hard disks for video surveillance storage, the question of how to compress and store video became a top consideration for video surveillance

More information

System Identification for Acoustic Comms.:

System Identification for Acoustic Comms.: System Identification for Acoustic Comms.: New Insights and Approaches for Tracking Sparse and Rapidly Fluctuating Channels Weichang Li and James Preisig Woods Hole Oceanographic Institution The demodulation

More information

Accurate and robust image superresolution by neural processing of local image representations

Accurate and robust image superresolution by neural processing of local image representations Accurate and robust image superresolution by neural processing of local image representations Carlos Miravet 1,2 and Francisco B. Rodríguez 1 1 Grupo de Neurocomputación Biológica (GNB), Escuela Politécnica

More information

FCE: A Fast Content Expression for Server-based Computing

FCE: A Fast Content Expression for Server-based Computing FCE: A Fast Content Expression for Server-based Computing Qiao Li Mentor Graphics Corporation 11 Ridder Park Drive San Jose, CA 95131, U.S.A. Email: qiao li@mentor.com Fei Li Department of Computer Science

More information

How To Filter Spam Image From A Picture By Color Or Color

How To Filter Spam Image From A Picture By Color Or Color Image Content-Based Email Spam Image Filtering Jianyi Wang and Kazuki Katagishi Abstract With the population of Internet around the world, email has become one of the main methods of communication among

More information

Automatic Traffic Estimation Using Image Processing

Automatic Traffic Estimation Using Image Processing Automatic Traffic Estimation Using Image Processing Pejman Niksaz Science &Research Branch, Azad University of Yazd, Iran Pezhman_1366@yahoo.com Abstract As we know the population of city and number of

More information

Virtual Mouse Using a Webcam

Virtual Mouse Using a Webcam 1. INTRODUCTION Virtual Mouse Using a Webcam Since the computer technology continues to grow up, the importance of human computer interaction is enormously increasing. Nowadays most of the mobile devices

More information

Edge tracking for motion segmentation and depth ordering

Edge tracking for motion segmentation and depth ordering Edge tracking for motion segmentation and depth ordering P. Smith, T. Drummond and R. Cipolla Department of Engineering University of Cambridge Cambridge CB2 1PZ,UK {pas1001 twd20 cipolla}@eng.cam.ac.uk

More information

Making Machines Understand Facial Motion & Expressions Like Humans Do

Making Machines Understand Facial Motion & Expressions Like Humans Do Making Machines Understand Facial Motion & Expressions Like Humans Do Ana C. Andrés del Valle & Jean-Luc Dugelay Multimedia Communications Dpt. Institut Eurécom 2229 route des Crêtes. BP 193. Sophia Antipolis.

More information

balesio Native Format Optimization Technology (NFO)

balesio Native Format Optimization Technology (NFO) balesio AG balesio Native Format Optimization Technology (NFO) White Paper Abstract balesio provides the industry s most advanced technology for unstructured data optimization, providing a fully system-independent

More information

High Quality Image Magnification using Cross-Scale Self-Similarity

High Quality Image Magnification using Cross-Scale Self-Similarity High Quality Image Magnification using Cross-Scale Self-Similarity André Gooßen 1, Arne Ehlers 1, Thomas Pralow 2, Rolf-Rainer Grigat 1 1 Vision Systems, Hamburg University of Technology, D-21079 Hamburg

More information

Frequently Asked Questions About VisionGauge OnLine

Frequently Asked Questions About VisionGauge OnLine Frequently Asked Questions About VisionGauge OnLine The following frequently asked questions address the most common issues and inquiries about VisionGauge OnLine: 1. What is VisionGauge OnLine? VisionGauge

More information

The aims: Chapter 14: Usability testing and field studies. Usability testing. Experimental study. Example. Example

The aims: Chapter 14: Usability testing and field studies. Usability testing. Experimental study. Example. Example Chapter 14: Usability testing and field studies The aims: Explain how to do usability testing through examples. Outline the basics of experimental design. Discuss the methods used in usability testing.

More information