Real-time Detection of Nodding and Head-shaking by Directly Detecting and Tracking the Between-Eyes


Shinjiro Kawato and Jun Ohya
ATR Media Integration and Communications Research Laboratories
Seika-cho, Soraku-gun, Kyoto , JAPAN
{skawato,

Abstract

Among head gestures, nodding and head-shaking are very common and frequently used, so detecting them is basic to a visual understanding of human responses. However, they are difficult to detect in real time, because nodding and head-shaking are fairly small and fast head movements. In this paper, we propose an approach for detecting nodding and head-shaking in real time from a single color video stream by directly detecting and tracking a point between the eyes, which we call the between-eyes. Along a circle of a certain radius centered at the between-eyes, the pixel values go through two cycles of bright parts (forehead and nose bridge) and dark parts (eyes and brows). The output of the proposed circle-frequency filter has a local maximum at such characteristic points. To distinguish the true between-eyes from similar characteristic points in other face parts, we confirm it with eye detection. Once the between-eyes is detected, a small area around it is copied as a template and the system enters the tracking mode. Tracking is done not by searching around the previous position but by selecting among the circle-frequency filter candidates using the template; the template is then updated. Thanks to this tracking algorithm, the system tracks the between-eyes stably and accurately. It runs at 13 frames/sec without special hardware. By analyzing the movement of the point, we can detect nodding and head-shaking. Some experimental results are shown.

1. Introduction

Gesture recognition plays an important role in the advancement of human-computer interaction, since it provides a natural and efficient interface to computers.
Among head gestures, nodding and head-shaking are very common and frequently used, so their detection is basic to a visual understanding of human responses. However, they are difficult to detect in real time, because they are fairly small and fast head movements. Methods of estimating the 3D head pose from the skin and hair regions have been reported in [1][3]. These papers use area information rather than feature points, so the algorithms appear robust; however, the resolution is inadequate for detecting small movements like nodding or head-shaking. Feature-based tracking approaches for real-time 3D facial pose estimation have been reported in [2][10]. These papers track multiple feature points by template matching using a special correlation processor and feed them to a Kalman filter, although they do not mention how the templates are made. A facial motion vector is calculated by averaging all feature motion vectors [10]. In an example plot of a nodding gesture, we cannot read the motion speed and size; it is only qualitatively described. In this paper, we propose a simpler and more direct approach to detecting nodding and head-shaking in real time from a single color video stream, by directly detecting and tracking a point between the eyes, which we call the between-eyes. The general face-location problem in image sequences seems to have been solved using skin-color information [1][6][8]. We take this approach, but in a very simple way. Most previous research has tried to find the eyes first to detect face orientation [2][4][9], because the eyes are the most characteristic part of the face. We instead start from a different part. What is common to most people and easy to find over a wide range of face orientations? We claim that one possible candidate is the point between the eyes.
The between-eyes has dark parts (the eyes and eyebrows) on both sides, and it is comparatively bright on the upper side (forehead) and the lower side (nose bridge). This characteristic seems to be common to most people, and it remains visible over a wide range of face orientations. Moreover, the pattern around this point is fairly stable under any facial expression, whereas the eyes, eyebrows, and mouth are changeable.

We propose an image filter, the circle-frequency filter, to detect the between-eyes. Although the between-eyes can be robustly extracted with this filter, many other similar characteristic points are extracted as well. By evaluating other local features, we can limit them to one or several points. Unfortunately, we concluded from experiments that the between-eyes cannot be recognized using local features alone. Thus, we use the eyes to confirm which candidate point is the true between-eyes. Once the between-eyes is detected, eye detection is no longer needed for tracking. A small area around the between-eyes is copied as a template, and tracking is done by selecting the between-eyes from the points extracted by the circle-frequency filter using this template. Thanks to this tracking algorithm, the system tracks the between-eyes stably and accurately, and runs at 13 frames/sec without special hardware. By analyzing the movement of the point, we can detect nodding and head-shaking. Applying a special filter to detect feature points on a face is similar in spirit to applying the separability filter described in [7]; however, our approach detects the target point more directly.

In section 2, the details of the circle-frequency filter are described. In section 3, we describe a strategy for detecting and tracking the between-eyes using the circle-frequency filter, combined with local features and eye detection. In section 4, a simple algorithm for nodding and head-shaking recognition is presented. In section 5, the implementation and some experimental results are described. Section 6 concludes the paper.

2. Circle-frequency filter

Assume that f_i (i = 0, ..., N-1) are the pixel values along a circle centered at (x, y).
Then we can calculate their discrete Fourier transform as

    F_n = \sum_{k=0}^{N-1} f_k e^{2\pi i k n / N}.    (1)

The circle-frequency-n filter outputs \|F_n\|^2, the squared spectral power at frequency n, at (x, y). Hereafter n = 2, and we call it simply the circle-frequency filter.

Figure 1 shows an example of the f_i at the between-eyes (the white dot at the center). The radius of the circle is 7 pixels, and the 36 pixel values along it are plotted on the right graph. The plot starts at the forehead and goes counterclockwise. There are two cycles of dark and bright, so the output of the circle-frequency filter for this pattern becomes high, a basic property of the Fourier transform.

Figure 1. "Between-eyes" and the pixel values along a circle of radius 7 pixels

Figure 2 shows an example of the input (left) and output (right) images of the circle-frequency filter, where the filter radius is 7 pixels. The cross marks on the left image are the local maximum points of the output image. Although there are many local maximum points other than the between-eyes, the between-eyes robustly appears as a local maximum point, and we can reduce the number of candidates with simple local feature criteria, as described below.

Figure 2. Example of a circle-frequency filtered image and its local maximum points

3. Strategies for detecting and tracking the between-eyes

Figure 3 shows a flowchart of the system. We keep the processing simple to achieve a higher frame rate. The system starts in the detection mode. After getting a new color frame, a skin-color area is extracted (see section 3.1). In the tracking mode, instead of extracting the skin-color area, we select a small area around the previous point of the between-eyes as the search area, and the remaining processes are restricted to this area. This greatly reduces the processing time. The skin-color area or the search area is converted to a monochrome smoothed image, the circle-frequency filter is applied to this monochrome image, and the local maximum points of the output are extracted. We screen them with local feature conditions, leaving a few candidates for the true between-eyes.

In the detection mode, eyes are searched for around each candidate. When exactly one candidate has a pair of eyes, the system assumes it is the between-eyes and switches to the tracking mode after saving a small area around it as a template. Otherwise, it assumes that the detection failed and goes on to the next frame.

In the tracking mode, each candidate is compared with the template, and the best match is selected. If it satisfies a predefined matching criterion, the system assumes it is the between-eyes, and the template is updated. Otherwise, it assumes the between-eyes is lost and switches back to the detection mode.

At the end of each pass of the detection mode or the tracking mode, the system checks whether nodding or head-shaking has occurred.

3.1. Extraction of skin-color area

Skin color provides good information for extracting the face area [8][1][6]. Several kinds of color space have been proposed, and Terrillon discussed which is best for modeling a Gaussian skin-color distribution [6]. To keep the processing simple, we use a look-up table instead of a Gaussian model. Video cameras use an RGB representation.
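To make the filter of section 2 concrete, the circle-frequency response at a single point can be sketched as follows. This is a from-scratch illustration under our own assumptions (a 2-D grey-level array `img` indexed as `img[y][x]`, the paper's radius of 7 pixels and 36 samples), not the authors' implementation:

```python
import math

def circle_frequency_response(img, cx, cy, radius=7, samples=36, n=2):
    """Sample grey levels along a circle centered at (cx, cy) and
    return ||F_n||^2, the squared power of the frequency-n component
    of the discrete Fourier transform in Eq. (1)."""
    re = im = 0.0
    for k in range(samples):
        # walk around the circle, starting at the top (forehead side)
        theta = 2.0 * math.pi * k / samples
        x = int(round(cx + radius * math.sin(theta)))
        y = int(round(cy - radius * math.cos(theta)))
        f_k = img[y][x]
        # accumulate one DFT term at frequency n
        re += f_k * math.cos(2.0 * math.pi * k * n / samples)
        im += f_k * math.sin(2.0 * math.pi * k * n / samples)
    return re * re + im * im
```

On a pattern with two bright/dark cycles around the circle (like the between-eyes), the response is large; on a flat region it is essentially zero, which is why local maxima of this response are good candidate points.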
Chromatic colors (r, g), known as pure colors in the absence of brightness, are defined by a normalization:

    r = R / (R + G + B),    (2)
    g = G / (R + G + B).    (3)

We modify the (r, g) space to an (a, b) space as follows:

    a = r + g/2,    (4)
    b = (\sqrt{3}/2) g.    (5)

Figure 3. Flow chart

Figure 4. Color space model
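Equations (2)-(5) can be sketched directly in code. The function name and the zero-denominator guard are our own additions; the arithmetic follows the equations above:

```python
def rgb_to_ab(R, G, B):
    """Project an (R, G, B) color onto the (a, b) chromaticity plane
    used for the skin-color look-up table (Eqs. 2-5)."""
    s = R + G + B
    if s == 0:                      # guard: pure black has no chromaticity
        return 0.0, 0.0
    r = R / s                       # Eq. (2)
    g = G / s                       # Eq. (3)
    a = r + g / 2.0                 # Eq. (4)
    b = (3.0 ** 0.5 / 2.0) * g      # Eq. (5)
    return a, b
```

For example, pure red maps to (1, 0) and pure green to (0.5, \sqrt{3}/2), the two far corners of the triangle of Fig. 4.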

This means that a color (R, G, B) is projected onto the (a, b) plane as shown in Fig. 4. Any color is projected into the hatched regular-triangle region, and the range of (a, b) is from (0, 0) to (1, \sqrt{3}/2). We digitize the (a, b) space into squares of size 0.01 as a look-up table and build a histogram from face images. Because face images contain non-skin colors, we define skin color as the colors whose histogram value is over a threshold. The threshold value is determined empirically. In practice, we divide an image into blocks of 8x8 pixels. When more than half of the pixels of a block are skin-colored, we assume it is a skin-color block. The remaining processes are done on the skin-color block areas.

3.2. Extraction of candidate points

We convert the skin-color block area (plus one peripheral block for correct filtering) to a monochrome, smoothed, 1/4-size image by sub-sampling and averaging 5x5 pixel values. We then apply the circle-frequency filter and extract the local maximum points. The color-to-monochrome conversion equation is

    v = 0.299R + 0.587G + 0.114B.    (6)

To screen the candidates for the between-eyes, we impose two simple conditions. (1) The pixel values on the filter circle directly to the left and right of the candidate must be lower than the value of the candidate itself. (2) The real part of F_2 in Eq. (1) must be positive, which means that the phase of the signal f_i begins with a brighter part. These conditions are reasonable when the face rotation in the image plane is restricted to less than 45 degrees. After applying these conditions, the candidates are sorted by the output of the circle-frequency filter. The top three are tested further, by eye detection in the detection mode and by template matching in the tracking mode.

3.3. Confirmation by eye detection (in the detection mode)

Eye detection is done in a fairly simple way. A similar technique is used in [5].
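The two screening conditions of section 3.2 might look like the following sketch. The helper name, the array convention, and the sampling scheme are our assumptions, not the paper's code:

```python
import math

def passes_screening(img, cx, cy, radius=7, samples=36):
    """Apply the two candidate-screening conditions to a local
    maximum of the circle-frequency filter at (cx, cy)."""
    # condition (1): the circle pixels directly left and right of the
    # candidate must be darker than the candidate itself
    center = img[cy][cx]
    if img[cy][cx - radius] >= center or img[cy][cx + radius] >= center:
        return False
    # condition (2): Re(F_2) > 0, i.e. the signal around the circle
    # starts with a brighter part (forehead at the top)
    re = 0.0
    for k in range(samples):
        theta = 2.0 * math.pi * k / samples
        x = int(round(cx + radius * math.sin(theta)))
        y = int(round(cy - radius * math.cos(theta)))
        re += img[y][x] * math.cos(2.0 * math.pi * k * 2 / samples)
    return re > 0.0
```

A between-eyes-like pattern (bright above and below, dark left and right) passes both tests, while a flat region fails condition (1) immediately.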
First, rectangular areas on both sides of the between-eyes candidate point, where the eyes should be found, are extracted. The area size is defined in advance. The next step is to find a threshold level for each area to binarize the image. This is done by searching upward from the lowest grey level with a connected-component analysis. The threshold level is determined when the total number of pixels over all components that do not touch the border of the rectangular area exceeds a certain value (25 in the experiment). We exclude components on the border because they often belong to hair. Sometimes the eyebrows have the same grey level as the eyes, so from the extracted components we select the one within a certain size range (more than 7 and less than 30 pixels) and with the lowest position.

We applied this method to 344 cases of true between-eyes candidates in a preliminary experiment. In 308 cases both eyes were found. We measured two parameters in these cases: the distance D_e between the located eyes, and the angle A_e between the left and right eyes at the between-eyes. Table 1 lists these parameters.

Table 1: Statistics of eye parameters (n = 308).
                  av.    min.    max.
    D_e (pixels)
    A_e (degrees)

D_e varies proportionally with the image size, or the distance from the camera to the face. The image size does not change the angle A_e, but A_e varies more than we anticipated when the face turns up, down, or sideways; it exceeds 180 degrees in five cases. We looked at their distribution and defined the threshold values for the eye parameters as follows:

    31 < D_e < 42,    (7)
    115 < A_e < 180.    (8)

When only one candidate has eyes satisfying these conditions, we assume it is the between-eyes, and the pixels around it are copied from the monochrome smoothed image as a template. Otherwise, the between-eyes is not determined.

3.4. Confirmation by a template (in the tracking mode)

In the tracking mode, template matching is done for all the candidates, and the best match is selected.
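The eye-pair gate of section 3.3, Eqs. (7) and (8), can be sketched as follows. The function and its point-tuple interface are hypothetical; only the numeric thresholds come from the paper:

```python
import math

def eye_parameters_ok(left_eye, right_eye, between):
    """Check a detected eye pair against Eqs. (7) and (8):
    31 < D_e < 42 pixels and 115 < A_e < 180 degrees.
    Each argument is an (x, y) pixel coordinate."""
    (lx, ly), (rx, ry), (bx, by) = left_eye, right_eye, between
    d_e = math.hypot(rx - lx, ry - ly)        # distance between the eyes
    # angle at the between-eyes between the directions to each eye
    a1 = math.atan2(ly - by, lx - bx)
    a2 = math.atan2(ry - by, rx - bx)
    a_e = abs(math.degrees(a1 - a2))
    if a_e > 180.0:
        a_e = 360.0 - a_e
    return 31.0 < d_e < 42.0 and 115.0 < a_e < 180.0
```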
The matching index is not a normalized cross-correlation but simply the sum of the absolute values of the differences of corresponding pixels over the template. Even if a best match is found, when the matching index is larger than a predefined threshold, the system assumes that the between-eyes is out of the search area and switches to the detection mode. The template is updated for every frame. Therefore, the system can follow changes in the appearance of the between-eyes, as long as the change is gradual and the circle-frequency filter picks it out as a candidate.
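The candidate selection of section 3.4 can be sketched with a sum of absolute differences (SAD). The paper does not give the template size or the threshold, so both are parameters here; the function itself is our sketch, not the authors' code:

```python
def best_candidate(img, candidates, template, threshold):
    """Pick the between-eyes among the filter candidates by SAD
    against the current template. Returns (point, sad), or
    (None, None) when even the best match exceeds the threshold,
    i.e. tracking is lost and detection mode should resume."""
    th, tw = len(template), len(template[0])
    best, best_sad = None, None
    for (cx, cy) in candidates:
        # compare the template centred on the candidate point
        sad = 0
        for j in range(th):
            for i in range(tw):
                sad += abs(img[cy - th // 2 + j][cx - tw // 2 + i]
                           - template[j][i])
        if best_sad is None or sad < best_sad:
            best, best_sad = (cx, cy), sad
    if best_sad is not None and best_sad <= threshold:
        return best, best_sad
    return None, None
```

SAD is cheaper than normalized cross-correlation, which matters here: only the top three filter candidates are compared, so the whole confirmation step stays fast enough for the 13 frames/sec rate.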

4. Detection of nodding and head-shaking

As a result of tracking the between-eyes, we obtain its coordinate (x_i, y_i) for every frame i, where (x_i, y_i) is a pixel coordinate in the monochrome smoothed image. From a preliminary observation, we find that nodding and head-shaking are very similar movements, except that nodding appears in the y_i movement while head-shaking appears in the x_i movement. We have to distinguish them from repeated look-right and look-left, or repeated look-up and look-down.

Figure 5 shows typical nodding and head-shaking patterns, measured by the between-eyes tracking system. The unit of the time axis is 1/13 seconds.

Figure 5. Typical patterns of nodding and head-shaking

The head motion is not symmetric and typically involves two or three cycles. The deviation is plus/minus 4 or 5 pixels at maximum, although it depends on the imaging size. The deviation also does not last long, typically seconds.

The algorithms for nodding and head-shaking detection are identical, except that the former processes the y-coordinate and the latter the x-coordinate. The head-shaking detection algorithm is described here. First, each frame is categorized as being in a stable, transient, or extreme state. If

    max(x_{i+n}) - min(x_{i+n}) <= 2    (n = -2, ..., +2)    (9)

then frame i is in a stable state. If

    x_i = max(x_{i+n})    (n = -2, ..., +2)    (10)

or

    x_i = min(x_{i+n})    (n = -2, ..., +2)    (11)

then frame i is in an extreme state. Otherwise, the frame is in a transient state. The last frame and the one before it have no state yet. When the state changes from non-stable to stable, the evaluation process is triggered, so the detection has a two-frame delay. The evaluation is simple: if there are more than two extreme states between the current stable state and the previous stable state, and all adjacent extreme states differ by more than two pixels in the x coordinate, then the system assumes that head-shaking has occurred. In the case of look-left or look-right, there are stable states instead of extreme states between or after the transient states, so these gestures can be distinguished.

5. Implementation and experiment

We implemented the system (Fig. 3) on a Silicon Graphics O2 workstation with a 175 MHz CPU. Figure 6 shows an example of the processing result. The lower image is the input color image with some overlaid graphics. The image size is . A circle on the face marks the point the system assumes to be the between-eyes. A horizontal plot and a vertical plot show the movement of the y and x coordinates of the tracked between-eyes, respectively. Every dot is classified as either stable (yellow), transient (green), or extreme (red). These two plots show repeated nodding and head-shaking; all cases were detected correctly. When the system detects nodding, the lower left corner flashes green; for head-shaking, it flashes red (see the demo movie from ~skawato/). The upper left image is a monochrome smoothed image of the search area predicted from the previous point of the between-eyes; sub-sampling makes its size 1/4 that of the lower image. The two cross marks are the candidates of the between-eyes.

Figure 6. Example of the processing result

There are two images in the upper right corner. In the middle is the result of the circle-frequency filter applied to the left image, although the pixel values are normalized to 0-255.
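The frame-state rules of section 4, Eqs. (9)-(11), and the evaluation rule can be sketched as follows. The function names and the list-based interface are our own; the window size, thresholds, and state names follow the paper:

```python
def classify_states(xs):
    """Label each frame stable / extreme / transient from the tracked
    coordinate sequence, using a window of n = -2 ... +2 frames
    (Eqs. 9-11); the first and last two frames get no label."""
    states = [None] * len(xs)
    for i in range(2, len(xs) - 2):
        window = xs[i - 2:i + 3]
        if max(window) - min(window) <= 2:                   # Eq. (9)
            states[i] = "stable"
        elif xs[i] == max(window) or xs[i] == min(window):   # Eqs. (10), (11)
            states[i] = "extreme"
        else:
            states[i] = "transient"
    return states

def shake_detected(xs, states, start, end):
    """Evaluate the frames between two stable states: head-shaking is
    assumed when there are more than two extreme states and all
    adjacent extremes differ by more than two pixels."""
    extremes = [xs[i] for i in range(start, end) if states[i] == "extreme"]
    if len(extremes) <= 2:
        return False
    return all(abs(a - b) > 2 for a, b in zip(extremes, extremes[1:]))
```

Running the same two functions on the y-coordinate sequence gives the nodding detector; only the input axis changes.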

A small image on the upper edge is the current template of the between-eyes. The system runs at 13 frames/second.

To determine the detection and tracking ability, 150-frame sequences of three people (450 frames in total) were tested. The results are shown in Table 2. Each person looks up and down, looks left and right, and moves his neck. Give-up or false results occur when a subject looks aside at an extreme angle. Another give-up case occurs when a subject looks in a certain direction and a reflection on his glasses prevents detection of the true candidate. For subject C, the number of false results is large because a wrong point was tracked for a long time. Regardless, when a subject looks forward in an ordinary way, the system does not miss the between-eyes. Figure 7 shows the extreme face postures that the system can still follow in the tracking mode.

Table 2: Experimental results of tracking for three persons.
        success   give-up   false
    A
    B
    C

Figure 7. Limitation of face postures for tracking

6. Conclusion

We have proposed a method of detecting nodding and head-shaking, combining the detection algorithm with tracking of the between-eyes. To efficiently detect and track the between-eyes, we proposed the circle-frequency filter. Local maximum points of the filter output are candidate points for the between-eyes, and the true between-eyes is robustly included among these candidates. We examined local features for selecting the true between-eyes and concluded that local features can reduce the number of candidates; however, local features alone cannot determine which candidate is the true between-eyes, because similar patterns exist, so we use the eye locations for confirmation. Once the true between-eyes is detected, no confirmation with the eyes is needed from the next frame on, that is, in the tracking mode. A small area around the between-eyes is copied as a template, and this copy is used to select the true between-eyes in the tracking mode.
The template is updated after every successful tracking step. To reduce the processing area, we extract a skin-color area in the detection mode and predict a small search area from the previous point of the between-eyes in the tracking mode. This area restriction reduces the processing time significantly. We implemented the algorithm on a workstation, and it ran at 13 frames/second. The detection rate of nodding and head-shaking was satisfactory when subjects did not look aside at extreme angles. In the future, we will research the detection of individual face parts such as the eyes, eyebrows, nose, and mouth, and estimate the facial expression or the gaze direction.

References

[1] Q. Chen, H. Wu, T. Fukumoto, and M. Yachida. 3D head pose estimation without feature tracking. Proc. IEEE Int. Conf. on Automatic Face and Gesture Recognition.
[2] J. Heinzmann and A. Zelinsky. 3-D facial pose and gaze point estimation using a robust real-time tracking paradigm. Proc. IEEE Int. Conf. on Automatic Face and Gesture Recognition.
[3] K. Mase, Y. Watanabe, and Y. Suenaga. Headreader: Real-time motion detection of human head from image sequence. Trans. IEICE, J74-D-II(3).
[4] E. Petajan. Robust face feature analysis for automatic speech reading and character animation. Proc. IEEE Int. Conf. on Automatic Face and Gesture Recognition.
[5] R. Stiefelhagen, J. Yang, and A. Waibel. Tracking eyes and monitoring eye gaze. Proc. Workshop on Perceptual User Interfaces.
[6] J. C. Terrillon, M. David, and S. Akamatsu. Automatic detection of human faces in natural scene images by use of a skin color model and of invariant moments. Proc. IEEE Int. Conf. on Automatic Face and Gesture Recognition.
[7] O. Yamaguchi, K. Fukui, and K. Maeda. Face recognition using temporal image sequence. Proc. IEEE Int. Conf. on Automatic Face and Gesture Recognition.
[8] J. Yang, R. Stiefelhagen, U. Meier, and A. Waibel. A real-time face tracker. Proc. 3rd IEEE Workshop on Application of Computer Vision.
[9] J. Yang and A. Waibel. Real-time face and facial feature tracking and applications. Proc. Workshop on Audio-Visual Speech Processing.
[10] A. Zelinsky and J. Heinzmann. Real-time visual recognition of facial gestures for human-computer interaction. Proc. IEEE Int. Conf. on Automatic Face and Gesture Recognition, 1996.


More information

Mean-Shift Tracking with Random Sampling

Mean-Shift Tracking with Random Sampling 1 Mean-Shift Tracking with Random Sampling Alex Po Leung, Shaogang Gong Department of Computer Science Queen Mary, University of London, London, E1 4NS Abstract In this work, boosting the efficiency of

More information

Automatic Traffic Estimation Using Image Processing

Automatic Traffic Estimation Using Image Processing Automatic Traffic Estimation Using Image Processing Pejman Niksaz Science &Research Branch, Azad University of Yazd, Iran Pezhman_1366@yahoo.com Abstract As we know the population of city and number of

More information

Demo: Real-time Tracking of Round Object

Demo: Real-time Tracking of Round Object Page 1 of 1 Demo: Real-time Tracking of Round Object by: Brianna Bikker and David Price, TAMU Course Instructor: Professor Deepa Kundur Introduction Our project is intended to track the motion of a round

More information

Self-Portrait Steps Images taken from Andrew Loomis Drawing Head & Hands (there are many sites to download this out of print book for free)

Self-Portrait Steps Images taken from Andrew Loomis Drawing Head & Hands (there are many sites to download this out of print book for free) Self-Portrait Steps Images taken from Andrew Loomis Drawing Head & Hands (there are many sites to download this out of print book for free) First of all- put the idea of it doesn t look like me! out of

More information

RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 29 (2008) Indiana University

RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 29 (2008) Indiana University RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 29 (2008) Indiana University A Software-Based System for Synchronizing and Preprocessing Eye Movement Data in Preparation for Analysis 1 Mohammad

More information

Build Panoramas on Android Phones

Build Panoramas on Android Phones Build Panoramas on Android Phones Tao Chu, Bowen Meng, Zixuan Wang Stanford University, Stanford CA Abstract The purpose of this work is to implement panorama stitching from a sequence of photos taken

More information

UNIVERSITY OF CENTRAL FLORIDA AT TRECVID 2003. Yun Zhai, Zeeshan Rasheed, Mubarak Shah

UNIVERSITY OF CENTRAL FLORIDA AT TRECVID 2003. Yun Zhai, Zeeshan Rasheed, Mubarak Shah UNIVERSITY OF CENTRAL FLORIDA AT TRECVID 2003 Yun Zhai, Zeeshan Rasheed, Mubarak Shah Computer Vision Laboratory School of Computer Science University of Central Florida, Orlando, Florida ABSTRACT In this

More information

A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA

A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA N. Zarrinpanjeh a, F. Dadrassjavan b, H. Fattahi c * a Islamic Azad University of Qazvin - nzarrin@qiau.ac.ir

More information

Eye Tracking Instructions

Eye Tracking Instructions Eye Tracking Instructions [1] Check to make sure that the eye tracker is properly connected and plugged in. Plug in the eye tracker power adaptor (the green light should be on. Make sure that the yellow

More information

PHYSIOLOGICALLY-BASED DETECTION OF COMPUTER GENERATED FACES IN VIDEO

PHYSIOLOGICALLY-BASED DETECTION OF COMPUTER GENERATED FACES IN VIDEO PHYSIOLOGICALLY-BASED DETECTION OF COMPUTER GENERATED FACES IN VIDEO V. Conotter, E. Bodnari, G. Boato H. Farid Department of Information Engineering and Computer Science University of Trento, Trento (ITALY)

More information

Multivariate data visualization using shadow

Multivariate data visualization using shadow Proceedings of the IIEEJ Ima and Visual Computing Wor Kuching, Malaysia, Novembe Multivariate data visualization using shadow Zhongxiang ZHENG Suguru SAITO Tokyo Institute of Technology ABSTRACT When visualizing

More information

2 SYSTEM DESCRIPTION TECHNIQUES

2 SYSTEM DESCRIPTION TECHNIQUES 2 SYSTEM DESCRIPTION TECHNIQUES 2.1 INTRODUCTION Graphical representation of any process is always better and more meaningful than its representation in words. Moreover, it is very difficult to arrange

More information

A Reliability Point and Kalman Filter-based Vehicle Tracking Technique

A Reliability Point and Kalman Filter-based Vehicle Tracking Technique A Reliability Point and Kalman Filter-based Vehicle Tracing Technique Soo Siang Teoh and Thomas Bräunl Abstract This paper introduces a technique for tracing the movement of vehicles in consecutive video

More information

False alarm in outdoor environments

False alarm in outdoor environments Accepted 1.0 Savantic letter 1(6) False alarm in outdoor environments Accepted 1.0 Savantic letter 2(6) Table of contents Revision history 3 References 3 1 Introduction 4 2 Pre-processing 4 3 Detection,

More information

Evaluation of Optimizations for Object Tracking Feedback-Based Head-Tracking

Evaluation of Optimizations for Object Tracking Feedback-Based Head-Tracking Evaluation of Optimizations for Object Tracking Feedback-Based Head-Tracking Anjo Vahldiek, Ansgar Schneider, Stefan Schubert Baden-Wuerttemberg State University Stuttgart Computer Science Department Rotebuehlplatz

More information

Automatic Recognition Algorithm of Quick Response Code Based on Embedded System

Automatic Recognition Algorithm of Quick Response Code Based on Embedded System Automatic Recognition Algorithm of Quick Response Code Based on Embedded System Yue Liu Department of Information Science and Engineering, Jinan University Jinan, China ise_liuy@ujn.edu.cn Mingjun Liu

More information

Speed Performance Improvement of Vehicle Blob Tracking System

Speed Performance Improvement of Vehicle Blob Tracking System Speed Performance Improvement of Vehicle Blob Tracking System Sung Chun Lee and Ram Nevatia University of Southern California, Los Angeles, CA 90089, USA sungchun@usc.edu, nevatia@usc.edu Abstract. A speed

More information

Laser Gesture Recognition for Human Machine Interaction

Laser Gesture Recognition for Human Machine Interaction International Journal of Computer Sciences and Engineering Open Access Research Paper Volume-04, Issue-04 E-ISSN: 2347-2693 Laser Gesture Recognition for Human Machine Interaction Umang Keniya 1*, Sarthak

More information

Drowsy Driver Detection System

Drowsy Driver Detection System Drowsy Driver Detection System Design Project By: Neeta Parmar Instructor: Peter Hiscocks Department of Electrical and Computer Engineering, Ryerson University. 2002. All Rights Reserved. CERTIFICATE OF

More information

Mouse Control using a Web Camera based on Colour Detection

Mouse Control using a Web Camera based on Colour Detection Mouse Control using a Web Camera based on Colour Detection Abhik Banerjee 1, Abhirup Ghosh 2, Koustuvmoni Bharadwaj 3, Hemanta Saikia 4 1, 2, 3, 4 Department of Electronics & Communication Engineering,

More information

Implementation of OCR Based on Template Matching and Integrating it in Android Application

Implementation of OCR Based on Template Matching and Integrating it in Android Application International Journal of Computer Sciences and EngineeringOpen Access Technical Paper Volume-04, Issue-02 E-ISSN: 2347-2693 Implementation of OCR Based on Template Matching and Integrating it in Android

More information

Analecta Vol. 8, No. 2 ISSN 2064-7964

Analecta Vol. 8, No. 2 ISSN 2064-7964 EXPERIMENTAL APPLICATIONS OF ARTIFICIAL NEURAL NETWORKS IN ENGINEERING PROCESSING SYSTEM S. Dadvandipour Institute of Information Engineering, University of Miskolc, Egyetemváros, 3515, Miskolc, Hungary,

More information

Introduction to the Smith Chart for the MSA Sam Wetterlin 10/12/09 Z +

Introduction to the Smith Chart for the MSA Sam Wetterlin 10/12/09 Z + Introduction to the Smith Chart for the MSA Sam Wetterlin 10/12/09 Quick Review of Reflection Coefficient The Smith chart is a method of graphing reflection coefficients and impedance, and is often useful

More information

Tracking Moving Objects In Video Sequences Yiwei Wang, Robert E. Van Dyck, and John F. Doherty Department of Electrical Engineering The Pennsylvania State University University Park, PA16802 Abstract{Object

More information

First Steps with CoDeSys. Last update: 05.03.2004

First Steps with CoDeSys. Last update: 05.03.2004 Last update: 05.03.2004 CONTENT 1 STARTING CODESYS 3 2 WRITING THE FIRST PROGRAM 3 3 A VISUALIZATION FOR THIS 7 4 START THE TARGET SYSTEM 9 5 SETTINGS FOR ESTABLISHING THE CONNECTION 9 6 START THE PROJECT

More information

CS231M Project Report - Automated Real-Time Face Tracking and Blending

CS231M Project Report - Automated Real-Time Face Tracking and Blending CS231M Project Report - Automated Real-Time Face Tracking and Blending Steven Lee, slee2010@stanford.edu June 6, 2015 1 Introduction Summary statement: The goal of this project is to create an Android

More information

An Active Head Tracking System for Distance Education and Videoconferencing Applications

An Active Head Tracking System for Distance Education and Videoconferencing Applications An Active Head Tracking System for Distance Education and Videoconferencing Applications Sami Huttunen and Janne Heikkilä Machine Vision Group Infotech Oulu and Department of Electrical and Information

More information

Self-Calibrated Structured Light 3D Scanner Using Color Edge Pattern

Self-Calibrated Structured Light 3D Scanner Using Color Edge Pattern Self-Calibrated Structured Light 3D Scanner Using Color Edge Pattern Samuel Kosolapov Department of Electrical Engineering Braude Academic College of Engineering Karmiel 21982, Israel e-mail: ksamuel@braude.ac.il

More information

Talking Head: Synthetic Video Facial Animation in MPEG-4.

Talking Head: Synthetic Video Facial Animation in MPEG-4. Talking Head: Synthetic Video Facial Animation in MPEG-4. A. Fedorov, T. Firsova, V. Kuriakin, E. Martinova, K. Rodyushkin and V. Zhislina Intel Russian Research Center, Nizhni Novgorod, Russia Abstract

More information

Super-resolution method based on edge feature for high resolution imaging

Super-resolution method based on edge feature for high resolution imaging Science Journal of Circuits, Systems and Signal Processing 2014; 3(6-1): 24-29 Published online December 26, 2014 (http://www.sciencepublishinggroup.com/j/cssp) doi: 10.11648/j.cssp.s.2014030601.14 ISSN:

More information

A Computer Vision System for Monitoring Production of Fast Food

A Computer Vision System for Monitoring Production of Fast Food ACCV2002: The 5th Asian Conference on Computer Vision, 23 25 January 2002, Melbourne, Australia A Computer Vision System for Monitoring Production of Fast Food Richard Russo Mubarak Shah Niels Lobo Computer

More information

Whitepaper. Image stabilization improving camera usability

Whitepaper. Image stabilization improving camera usability Whitepaper Image stabilization improving camera usability Table of contents 1. Introduction 3 2. Vibration Impact on Video Output 3 3. Image Stabilization Techniques 3 3.1 Optical Image Stabilization 3

More information

Multimodal Biometric Recognition Security System

Multimodal Biometric Recognition Security System Multimodal Biometric Recognition Security System Anju.M.I, G.Sheeba, G.Sivakami, Monica.J, Savithri.M Department of ECE, New Prince Shri Bhavani College of Engg. & Tech., Chennai, India ABSTRACT: Security

More information

How To Fix Out Of Focus And Blur Images With A Dynamic Template Matching Algorithm

How To Fix Out Of Focus And Blur Images With A Dynamic Template Matching Algorithm IJSTE - International Journal of Science Technology & Engineering Volume 1 Issue 10 April 2015 ISSN (online): 2349-784X Image Estimation Algorithm for Out of Focus and Blur Images to Retrieve the Barcode

More information

Calculation of Minimum Distances. Minimum Distance to Means. Σi i = 1

Calculation of Minimum Distances. Minimum Distance to Means. Σi i = 1 Minimum Distance to Means Similar to Parallelepiped classifier, but instead of bounding areas, the user supplies spectral class means in n-dimensional space and the algorithm calculates the distance between

More information

3D Scanner using Line Laser. 1. Introduction. 2. Theory

3D Scanner using Line Laser. 1. Introduction. 2. Theory . Introduction 3D Scanner using Line Laser Di Lu Electrical, Computer, and Systems Engineering Rensselaer Polytechnic Institute The goal of 3D reconstruction is to recover the 3D properties of a geometric

More information

Common Core Unit Summary Grades 6 to 8

Common Core Unit Summary Grades 6 to 8 Common Core Unit Summary Grades 6 to 8 Grade 8: Unit 1: Congruence and Similarity- 8G1-8G5 rotations reflections and translations,( RRT=congruence) understand congruence of 2 d figures after RRT Dilations

More information

PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY

PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY V. Knyaz a, *, Yu. Visilter, S. Zheltov a State Research Institute for Aviation System (GosNIIAS), 7, Victorenko str., Moscow, Russia

More information

A Real-Time Driver Fatigue Detection System Based on Eye Tracking and Dynamic Template Matching

A Real-Time Driver Fatigue Detection System Based on Eye Tracking and Dynamic Template Matching Tamkang Journal of Science and Engineering, Vol. 11, No. 1, pp. 65 72 (28) 65 A Real-Time Driver Fatigue Detection System Based on Eye Tracking and Dynamic Template Matching Wen-Bing Horng* and Chih-Yuan

More information

Bernice E. Rogowitz and Holly E. Rushmeier IBM TJ Watson Research Center, P.O. Box 704, Yorktown Heights, NY USA

Bernice E. Rogowitz and Holly E. Rushmeier IBM TJ Watson Research Center, P.O. Box 704, Yorktown Heights, NY USA Are Image Quality Metrics Adequate to Evaluate the Quality of Geometric Objects? Bernice E. Rogowitz and Holly E. Rushmeier IBM TJ Watson Research Center, P.O. Box 704, Yorktown Heights, NY USA ABSTRACT

More information

Tutorial for Tracker and Supporting Software By David Chandler

Tutorial for Tracker and Supporting Software By David Chandler Tutorial for Tracker and Supporting Software By David Chandler I use a number of free, open source programs to do video analysis. 1. Avidemux, to exerpt the video clip, read the video properties, and save

More information

Effective Use of Android Sensors Based on Visualization of Sensor Information

Effective Use of Android Sensors Based on Visualization of Sensor Information , pp.299-308 http://dx.doi.org/10.14257/ijmue.2015.10.9.31 Effective Use of Android Sensors Based on Visualization of Sensor Information Young Jae Lee Faculty of Smartmedia, Jeonju University, 303 Cheonjam-ro,

More information

BLIND SOURCE SEPARATION OF SPEECH AND BACKGROUND MUSIC FOR IMPROVED SPEECH RECOGNITION

BLIND SOURCE SEPARATION OF SPEECH AND BACKGROUND MUSIC FOR IMPROVED SPEECH RECOGNITION BLIND SOURCE SEPARATION OF SPEECH AND BACKGROUND MUSIC FOR IMPROVED SPEECH RECOGNITION P. Vanroose Katholieke Universiteit Leuven, div. ESAT/PSI Kasteelpark Arenberg 10, B 3001 Heverlee, Belgium Peter.Vanroose@esat.kuleuven.ac.be

More information

Free Inductive/Logical Test Questions

Free Inductive/Logical Test Questions Free Inductive/Logical Test Questions (With questions and answers) JobTestPrep invites you to a free practice session that represents only some of the materials offered in our online practice packs. Have

More information

Low-resolution Image Processing based on FPGA

Low-resolution Image Processing based on FPGA Abstract Research Journal of Recent Sciences ISSN 2277-2502. Low-resolution Image Processing based on FPGA Mahshid Aghania Kiau, Islamic Azad university of Karaj, IRAN Available online at: www.isca.in,

More information

Video Tracking Software User s Manual. Version 1.0

Video Tracking Software User s Manual. Version 1.0 Video Tracking Software User s Manual Version 1.0 Triangle BioSystems International 2224 Page Rd. Suite 108 Durham, NC 27703 Phone: (919) 361-2663 Fax: (919) 544-3061 www.trianglebiosystems.com Table of

More information

Vision based Vehicle Tracking using a high angle camera

Vision based Vehicle Tracking using a high angle camera Vision based Vehicle Tracking using a high angle camera Raúl Ignacio Ramos García Dule Shu gramos@clemson.edu dshu@clemson.edu Abstract A vehicle tracking and grouping algorithm is presented in this work

More information

VEHICLE LOCALISATION AND CLASSIFICATION IN URBAN CCTV STREAMS

VEHICLE LOCALISATION AND CLASSIFICATION IN URBAN CCTV STREAMS VEHICLE LOCALISATION AND CLASSIFICATION IN URBAN CCTV STREAMS Norbert Buch 1, Mark Cracknell 2, James Orwell 1 and Sergio A. Velastin 1 1. Kingston University, Penrhyn Road, Kingston upon Thames, KT1 2EE,

More information

VIDEO COMMUNICATION SYSTEM-TECHNICAL DOCUMENTATION. Tracking Camera (PCSA-CTG70/CTG70P) PCS-G70/G70P All

VIDEO COMMUNICATION SYSTEM-TECHNICAL DOCUMENTATION. Tracking Camera (PCSA-CTG70/CTG70P) PCS-G70/G70P All Tracking Camera () PCS-G70/G70P All Introduction The Tracking Camera is a camera unit dedicated to the PCS-G70/G70P. It provides the Voice-Directional Detection function, the Face Recognition function,

More information

Palmprint Recognition. By Sree Rama Murthy kora Praveen Verma Yashwant Kashyap

Palmprint Recognition. By Sree Rama Murthy kora Praveen Verma Yashwant Kashyap Palmprint Recognition By Sree Rama Murthy kora Praveen Verma Yashwant Kashyap Palm print Palm Patterns are utilized in many applications: 1. To correlate palm patterns with medical disorders, e.g. genetic

More information

EMR-9 Quick Start Guide (00175)D 1

EMR-9 Quick Start Guide (00175)D 1 NAC Eye Mark Recorder EMR-9 Quick Start Guide May 2009 NAC Image Technology Inc. (00175)D 1 Contents 1. System Configurations 1.1 Standard configuration 1.2 Head unit variations 1.3 Optional items 2. Basic

More information

Efficient Background Subtraction and Shadow Removal Technique for Multiple Human object Tracking

Efficient Background Subtraction and Shadow Removal Technique for Multiple Human object Tracking ISSN: 2321-7782 (Online) Volume 1, Issue 7, December 2013 International Journal of Advance Research in Computer Science and Management Studies Research Paper Available online at: www.ijarcsms.com Efficient

More information

ALGEBRA. sequence, term, nth term, consecutive, rule, relationship, generate, predict, continue increase, decrease finite, infinite

ALGEBRA. sequence, term, nth term, consecutive, rule, relationship, generate, predict, continue increase, decrease finite, infinite ALGEBRA Pupils should be taught to: Generate and describe sequences As outcomes, Year 7 pupils should, for example: Use, read and write, spelling correctly: sequence, term, nth term, consecutive, rule,

More information

Analyses: Statistical Measures

Analyses: Statistical Measures Application Note 129 APPLICATION NOTE Heart Rate Variability 42 Aero Camino, Goleta, CA 93117 Tel (805) 685-0066 Fax (805) 685-0067 info@biopac.com www.biopac.com 05.22.14 Analyses: Statistical Measures

More information

Neural Network based Vehicle Classification for Intelligent Traffic Control

Neural Network based Vehicle Classification for Intelligent Traffic Control Neural Network based Vehicle Classification for Intelligent Traffic Control Saeid Fazli 1, Shahram Mohammadi 2, Morteza Rahmani 3 1,2,3 Electrical Engineering Department, Zanjan University, Zanjan, IRAN

More information

Face Model Fitting on Low Resolution Images

Face Model Fitting on Low Resolution Images Face Model Fitting on Low Resolution Images Xiaoming Liu Peter H. Tu Frederick W. Wheeler Visualization and Computer Vision Lab General Electric Global Research Center Niskayuna, NY, 1239, USA {liux,tu,wheeler}@research.ge.com

More information

A Beginners Guide to Track Laying.

A Beginners Guide to Track Laying. A Beginners Guide to Track Laying. I should first say that none of the material below is original. I have made use of many sources of information and have often copied directly from them. All I claim to

More information

AN IMPROVED DOUBLE CODING LOCAL BINARY PATTERN ALGORITHM FOR FACE RECOGNITION

AN IMPROVED DOUBLE CODING LOCAL BINARY PATTERN ALGORITHM FOR FACE RECOGNITION AN IMPROVED DOUBLE CODING LOCAL BINARY PATTERN ALGORITHM FOR FACE RECOGNITION Saurabh Asija 1, Rakesh Singh 2 1 Research Scholar (Computer Engineering Department), Punjabi University, Patiala. 2 Asst.

More information

Enhanced LIC Pencil Filter

Enhanced LIC Pencil Filter Enhanced LIC Pencil Filter Shigefumi Yamamoto, Xiaoyang Mao, Kenji Tanii, Atsumi Imamiya University of Yamanashi {daisy@media.yamanashi.ac.jp, mao@media.yamanashi.ac.jp, imamiya@media.yamanashi.ac.jp}

More information

Making Machines Understand Facial Motion & Expressions Like Humans Do

Making Machines Understand Facial Motion & Expressions Like Humans Do Making Machines Understand Facial Motion & Expressions Like Humans Do Ana C. Andrés del Valle & Jean-Luc Dugelay Multimedia Communications Dpt. Institut Eurécom 2229 route des Crêtes. BP 193. Sophia Antipolis.

More information

3D Viewer. user's manual 10017352_2

3D Viewer. user's manual 10017352_2 EN 3D Viewer user's manual 10017352_2 TABLE OF CONTENTS 1 SYSTEM REQUIREMENTS...1 2 STARTING PLANMECA 3D VIEWER...2 3 PLANMECA 3D VIEWER INTRODUCTION...3 3.1 Menu Toolbar... 4 4 EXPLORER...6 4.1 3D Volume

More information

Monitoring Head/Eye Motion for Driver Alertness with One Camera

Monitoring Head/Eye Motion for Driver Alertness with One Camera Monitoring Head/Eye Motion for Driver Alertness with One Camera Paul Smith, Mubarak Shah, and N. da Vitoria Lobo Computer Science, University of Central Florida, Orlando, FL 32816 rps43158,shah,niels @cs.ucf.edu

More information

Session 7 Bivariate Data and Analysis

Session 7 Bivariate Data and Analysis Session 7 Bivariate Data and Analysis Key Terms for This Session Previously Introduced mean standard deviation New in This Session association bivariate analysis contingency table co-variation least squares

More information

Drawing Lines with Pixels. Joshua Scott March 2012

Drawing Lines with Pixels. Joshua Scott March 2012 Drawing Lines with Pixels Joshua Scott March 2012 1 Summary Computers draw lines and circles during many common tasks, such as using an image editor. But how does a computer know which pixels to darken

More information

Mobile Multimedia Application for Deaf Users

Mobile Multimedia Application for Deaf Users Mobile Multimedia Application for Deaf Users Attila Tihanyi Pázmány Péter Catholic University, Faculty of Information Technology 1083 Budapest, Práter u. 50/a. Hungary E-mail: tihanyia@itk.ppke.hu Abstract

More information

OBJECT TRACKING USING LOG-POLAR TRANSFORMATION

OBJECT TRACKING USING LOG-POLAR TRANSFORMATION OBJECT TRACKING USING LOG-POLAR TRANSFORMATION A Thesis Submitted to the Gradual Faculty of the Louisiana State University and Agricultural and Mechanical College in partial fulfillment of the requirements

More information

Locating and Decoding EAN-13 Barcodes from Images Captured by Digital Cameras

Locating and Decoding EAN-13 Barcodes from Images Captured by Digital Cameras Locating and Decoding EAN-13 Barcodes from Images Captured by Digital Cameras W3A.5 Douglas Chai and Florian Hock Visual Information Processing Research Group School of Engineering and Mathematics Edith

More information

Practical Tour of Visual tracking. David Fleet and Allan Jepson January, 2006

Practical Tour of Visual tracking. David Fleet and Allan Jepson January, 2006 Practical Tour of Visual tracking David Fleet and Allan Jepson January, 2006 Designing a Visual Tracker: What is the state? pose and motion (position, velocity, acceleration, ) shape (size, deformation,

More information

Robust and accurate global vision system for real time tracking of multiple mobile robots

Robust and accurate global vision system for real time tracking of multiple mobile robots Robust and accurate global vision system for real time tracking of multiple mobile robots Mišel Brezak Ivan Petrović Edouard Ivanjko Department of Control and Computer Engineering, Faculty of Electrical

More information