A REAL-TIME VIDEO-BASED EYE TRACKING APPROACH FOR DRIVER ATTENTION STUDY
Computing and Informatics, Vol. 31, 2012

Xianping Fu, Ying Zang, Hongbo Liu
School of Information Science and Technology
Dalian Maritime University
Dalian, China
[email protected], [email protected]

Communicated by Steve Maybank

Abstract. Knowing the driver's point of gaze has significant potential to enhance driving safety; eye movements can be used as an indicator of the attention state of a driver. But the primary obstacle to integrating eye gaze into today's large-scale, real-world driving attention studies is the availability of a reliable, low-cost eye-tracking system. In this paper, we investigate such a real-time system for collecting a driver's eye gaze in a real-world driving environment. A novel eye-tracking approach is proposed based on a low-cost head-mounted eye tracker. Our approach first detects the corneal reflection and pupil edge points, and then fits the points with an ellipse. The proposed approach works across the different illumination and driving environments encountered by a simple, inexpensive head-mounted eye tracker, and can therefore be widely used in large-scale experiments. The experimental results illustrate that our approach can reliably estimate eye position with an average accuracy of 0.34 degrees of visual angle in indoor experiments and 2-5 degrees in real driving environments.

Keywords: Eye-tracking, driver attention, corneal reflection, random sample consensus

1 INTRODUCTION

The analysis of driver attention has long been a popular field of research in light of the potential for safety improvements [1, 2, 3]. Driver's eye gaze has been most
recently used as a driver workload metric [4], and as a proxy for driver attention [5, 6, 7]. Despite active research and significant progress in the last 30 years, eye detection and tracking remain challenging due to the individuality of eyes, occlusion, scale variability, location, and lighting conditions in real driving environments [8, 9]. Although eye tracking has been deployed in a number of research systems and, to a smaller degree, in consumer products, it has not reached its full potential. The primary obstacle to integrating these techniques into large-scale usage is that they have been either too invasive or too expensive for routine use. Eye-tracking systems can be divided into remote and head-mounted systems [10]. Each type of system has its respective advantages: for example, remote systems are less intrusive but not as accurate or flexible as head-mounted systems [11]. In driving environments, the driver's field of view is extremely wide: besides looking ahead, and looking left and right at objects beside the road, drivers sometimes even look back when reversing or monitoring a following vehicle. Therefore, for a remote system, it is difficult to provide a scene camera with such a large field of view and to calibrate eye gaze under such large-scale head movement. To address the driver's visual distraction problem in a large field of view (almost 360 degrees), the designed system uses a head-mounted eye tracker. Given this advance, the most significant remaining obstacles are cost and flexibility. In recent years, the price of high-quality digital camera technology has dropped precipitously, and new technology has made cameras lighter and more flexible than before. Some software implementations are integrated with specialized digital processors in cameras to obtain high-speed performance, which makes head-mounted devices more convenient and practical for use in real driving environments.
It is therefore possible to develop a widely available, reliable, high-speed eye-tracking algorithm that runs on general embedded computing hardware, in order to integrate eye tracking into everyday driver attention studies [12]. Towards this goal, we have developed a hybrid eye-tracking algorithm that integrates feature-based and model-based approaches, and made its implementation available for low-cost devices. The contribution of this paper has two parts. Firstly, we develop an eye-tracking algorithm with improved pupil center and corneal reflection detection, which enhances the performance of a head-mounted eye gaze tracking system. The pupil contour and corneal reflection are detected by a feature-based method, and then the pupil location, shape and size are calculated by an ellipse fitting method. Secondly, we propose a novel in-car calibration method. Four infrared reflection labels are fixed with stickers on the rear view mirror, center console, left side mirror and right side mirror, and observed by a scene camera with infrared illumination. The calibration is implemented and updated every time the driver looks at one of these labels for more than a 200 ms dwell time. The proposed method is more flexible than the laser pointer calibration method used for bioptic telescope aiming point tracking [13]. A similar method is implemented in an indoor simulation to test real-time performance. The rest of the paper is organized as follows. Related work on eye tracking algorithms and applications is reviewed in Section 2. Section 3 introduces the
structure of our proposed head-mounted eye tracking system. The proposed hybrid algorithms are presented in Section 4. In Section 5, our experimental results and some discussion are presented, and finally conclusions are given in Section 6.

2 RELATED WORKS

Eye tracking technology has been available for many years using a variety of methods, such as Purkinje-reflection-based systems, contact-lens-based eye coils, electrooculography, and corneal reflection [14]. In recent years, head-mounted and remote camera-based systems have been developed to allow more natural and less cumbersome gaze tracking. They make it possible to collect a real-time video record of eye movement. Eye-tracking algorithms can be classified into two approaches: feature-based and model-based [15, 16, 17]. Feature-based approaches detect and localize image features related to the position of the eye [12]. They have in common that a threshold is needed to decide when a feature is present or absent; the determination of an appropriate threshold is typically left as a free parameter adjusted by the user. The detected eye features vary widely across algorithms but most often rely on intensity levels or intensity gradients. For example, in infrared images created with the dark-pupil or bright-pupil technique, an appropriately set intensity threshold can be used to extract the region corresponding to the pupil, and the pupil center can be taken as the geometric center of this region. The intensity gradient can be used to detect the limbus in visible spectrum images or the pupil contour in infrared spectrum images, and an ellipse can then be fitted to these feature points. Model-based approaches, on the other hand, do not explicitly detect features but rather find the best fitting model that is consistent with the image.
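As a minimal illustration of the feature-based baseline just described (intensity threshold, then geometric center of the dark region), here is a Python sketch; the function name, the grey-level threshold of 50 and the list-of-lists image format are illustrative assumptions, not part of any published implementation:

```python
def dark_pupil_centroid(img, thresh=50):
    """Feature-based baseline: threshold the dark pixels and take the
    geometric center of the resulting region as the pupil center.
    img is a list of rows of grey values; thresh is an assumed cut-off."""
    xs = ys = n = 0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v < thresh:          # dark-pupil technique: pupil pixels are dark
                xs += x
                ys += y
                n += 1
    return (xs / n, ys / n) if n else None
```

In practice this centroid is only a rough guess (eyelashes and shadows also pass the threshold), which is why the paper refines it with contour detection and ellipse fitting.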
For example, integral-differential operators can be used to find the best-fitting circle or ellipse for the limbus and pupil contour [18]. This approach requires an iterative search of the model parameter space that maximizes the integral of the derivative along the contour of the circle or ellipse. A model-based approach can provide a more precise estimate of the pupil center and pupil contour than a feature-based approach, given that no feature-defining criterion is applied to the image data. However, it requires searching a complex parameter space that can be fraught with local minima [19], so gradient techniques cannot be used without a good initial guess for the model parameters. Thus, the gain in accuracy of a model-based approach comes at a significant cost in computational speed and flexibility. Notably, however, the use of multi-scale image processing methods in combination with a model-based approach holds promise for real-time performance. Infrared spectrum imaging is commonly used in eye tracking. Infrared imaging eliminates uncontrolled specular reflections by actively illuminating the eye with uniform, controlled infrared light not perceivable by the user. Infrared eye tracking typically utilizes either bright-pupil or dark-pupil techniques. Bright-pupil techniques illuminate the eye with a source that is on or very near the axis of the
camera. The result of such illumination is that the pupil is clearly demarcated as a bright region due to the photo-reflective nature of the back of the eye. Dark-pupil techniques illuminate the eye with an off-axis source such that the pupil is the darkest region in the image, while the sclera, iris and eyelids all reflect relatively more illumination. In either method, the first-surface specular reflection of the illumination source off the cornea (the outermost optical element of the eye) is also visible. A further benefit of infrared imaging is that the pupil, rather than the limbus, is the strongest feature contour in the image (Figure 2); both the sclera and the iris strongly reflect infrared light, while only the sclera strongly reflects visible light. Tracking the pupil contour is preferable given that the pupil contour is smaller and more sharply defined than the limbus. The vector between the pupil center and the corneal reflection (a bright dot on the cornea) is typically used as the dependent measure rather than the pupil center alone, because this vector difference is insensitive to slippage of the head-mounted device: both the camera and the source move simultaneously. Furthermore, due to its size, the pupil is less likely to be occluded by the eyelids. In this paper, we investigate a novel algorithm based on infrared spectrum imaging techniques and extend these techniques to visible spectrum imaging as well. The dark-pupil technique is considered.

3 SYSTEM STRUCTURE

We implement an eye-tracking algorithm with images captured from a head-mounted system. There are two cameras in this head-mounted system: one is an infrared camera with an IR (infrared) illumination source and an IR filter, which faces the driver's eye; the other is a scene camera fixed on the glasses frame. The structure of this head-mounted system is shown in Figure 1.
A common sunglasses frame and a cheap compact camera are used, with infrared illumination by an 850 nm LED fixed beside the eye camera. The IR filter is a band-pass filter.

4 EYE-TRACKING ALGORITHM

In this section, we propose an eye-tracking algorithm that combines feature-based and model-based approaches to achieve a good trade-off between run-time performance and accuracy for dark-pupil infrared illumination. The goal of the algorithm is to extract the location of the pupil center and the corneal reflection position so as to relate the vector difference between these measures to coordinates in the scene image. Li et al. proposed a pupil feature detection approach, namely Starburst [20]. Our algorithm improves on their work. The improvements consist in using the horizontal and vertical projections of the binary image to estimate the pupil center at the first and key frames, which differs from Li's random initial guess. The corneal reflection is eliminated from the image, and the pupil edge points are located using an iterative feature-based technique with only eight rays from the estimated
pupil center. Before ellipse fitting, a proximity-based approach is used to eliminate outliers. An ellipse is fitted to a subset of the detected inlier edge points using the Random Sample Consensus (RANSAC) paradigm. The best fitting parameters from this feature-based approach are then used to initialize a local model-based search for the ellipse parameters that maximize the fit to the image data.

Fig. 1. The low-cost head-mounted eye gaze tracker in our driving attention study

Fig. 2. Corneal reflection and the dark-pupil effect at different pupil positions in our head-mounted eye gaze tracker (the corneal reflection is the brightest point in the image and the pupil is the darkest region)

4.1 Noise Reduction

Due to the non-uniform illumination in the real driving environment and the use of a low-cost camera in this head-mounted eye tracker, we need to begin by reducing the noise present in the images. We reduce the shot noise by applying a 5 × 5 Gaussian filter with a standard deviation of 2 pixels.
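The 5 × 5, σ = 2 smoothing step can be sketched as follows; this is a plain pure-Python version for illustration only (a real-time system would use an optimized library routine), with zero padding at the borders as our own assumption:

```python
import math

def gaussian_kernel(size=5, sigma=2.0):
    """Normalized size x size Gaussian kernel (sigma in pixels)."""
    c = size // 2
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    s = sum(v for row in k for v in row)
    return [[v / s for v in row] for row in k]

def smooth(img, kernel):
    """Convolve a list-of-lists grey image with the kernel (zero padding)."""
    h, w = len(img), len(img[0])
    size, c = len(kernel), len(kernel) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky in range(size):
                for kx in range(size):
                    iy, ix = y + ky - c, x + kx - c
                    if 0 <= iy < h and 0 <= ix < w:
                        acc += img[iy][ix] * kernel[ky][kx]
            out[y][x] = acc
    return out
```

Because the kernel is normalized to sum to one, smoothing leaves flat image regions unchanged while suppressing isolated shot-noise pixels.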
4.2 Corneal Reflection Detection

As illustrated in Figure 2, the corneal reflection corresponds to one of the brightest regions in the eye image, and its round shape and size are almost fixed once the distance from the IR camera to the cornea is established by the hardware setup. Thus the corneal reflection can be obtained through a pixel intensity threshold and its geometrical character.

Fig. 3. The distorted shape of a corneal reflection. If the distortion ratio of the brightest region is larger than this one, it is not considered a corneal reflection

Note that because the cornea extends approximately to the limbus, we can limit our search for the corneal reflection to a square region of interest with a small window of pixels. To begin, a threshold is used to produce a binary image in which only values above this threshold are taken as corneal reflection candidates. However, a constant threshold across observers, and even within observers, is not optimal. Therefore an adaptive threshold, which decreases from the brightest pixel intensity value in each frame, is used to localize the corneal reflection [20]. Given its small size, the corneal reflection is approximately a circle in the image. Among the corneal reflection candidates, only blobs whose width-to-height ratio is less than 2 are processed, due to the round shape of the corneal reflection (as shown in Figure 3). Our corneal reflection detection algorithm is given below (Algorithm 1).

Algorithm 1 Corneal Reflection Detection Algorithm.
01. Input image; i = 0; threshold = brightest pixel intensity;
02. Do
03.   i = i + 1;
04.   threshold = threshold - 1;
05.   Binarize the image with threshold;
06.   s(i) = (area of largest blob) / (average area of blobs),
      counting only blobs whose width/height ratio is less than 2;
07. While (s(i) > s(i - 1));
08. Output the center coordinates of the corneal reflection at the
    threshold with the highest ratio s.
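A simplified Python sketch of Algorithm 1 follows (flood-fill blob areas plus the decreasing-threshold ratio test). For brevity it omits the width/height shape filter and scans every threshold rather than stopping at the first drop in the ratio; all names and these simplifications are our own:

```python
def blobs(img, thresh):
    """Areas of 4-connected regions of pixels >= thresh (iterative flood fill)."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for y in range(h):
        for x in range(w):
            if img[y][x] >= thresh and not seen[y][x]:
                stack, area = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx] and img[ny][nx] >= thresh):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return areas

def adaptive_cr_threshold(img):
    """Lower the threshold from the brightest intensity and keep the value
    that maximizes (largest blob area) / (mean blob area), as in Algorithm 1."""
    brightest = max(max(row) for row in img)
    best_t, best_ratio = brightest, 0.0
    for t in range(brightest, 0, -1):
        a = blobs(img, t)
        if not a:
            continue
        ratio = max(a) / (sum(a) / len(a))
        if ratio > best_ratio:
            best_ratio, best_t = ratio, t
    return best_t
```

On a synthetic frame with one large bright blob (the reflection) and one dim spurious glint, the selected threshold is the highest value at which the spurious glint separates from the background, which maximizes the area ratio.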
In this algorithm, the threshold decreases from the brightest intensity downwards. When the brightest threshold is adopted, the largest candidate region is likely attributable to the corneal reflection, as other specular reflections tend to be quite small, located off the cornea, and near the corners of the image where the eyelids meet. The ratio between the area of the largest candidate and the average area of the other regions is calculated as the threshold is lowered. At first, the ratio increases because the corneal reflection grows in size faster than the other areas; the intensity of the corneal reflection monotonically decreases towards its edges, which explains this growth. A lower threshold will, in general, also induce more false candidates. The ratio begins to drop as the false candidates become more prominent and the corneal reflection region becomes large. The threshold with the highest ratio is taken as optimal. The location of the corneal reflection is then given by the geometric center (x_c, y_c) of the largest region in the image using the adaptively determined threshold. While the approximate size of the corneal reflection can be derived from the thresholded region in the localization step, this region does not typically include the entire profile of the corneal reflection. To determine its full extent, we assume that the intensity profile of the corneal reflection follows a bivariate Gaussian distribution. Relating the radius r at which the average decline in intensity is maximal to the radius of maximal decline for a Gaussian (i.e. a radius of one standard deviation), the full extent of the corneal reflection is taken as 2.5r to capture 99% of the corneal reflection profile. Radial interpolation is then used to remove the corneal reflection.
First, the central pixel of the identified corneal reflection region is set to the average of the intensities along the contour of the region. Then, for each pixel between the center and the contour, the pixel intensity is determined via linear interpolation. An example of this process can be seen in Figure 4 (compare Figures 4 a) and 4 b)).

Fig. 4. a) The corneal reflection; b) its removal, modeling the reflection as a Gaussian with a radius of one standard deviation. The full extent of the corneal reflection is taken as 2.5r to capture 99% of its profile.
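The removal step (contour average at the center, linear radial interpolation outwards) can be sketched as follows; the 360-sample contour ring and the handling of out-of-bounds pixels are our own assumptions:

```python
import math

def remove_reflection(img, cx, cy, r):
    """Replace the corneal reflection inside radius r of (cx, cy):
    the center gets the mean intensity of the contour ring, and every
    interior pixel is linearly interpolated between that center value
    and the contour pixel in its own radial direction."""
    h, w = len(img), len(img[0])
    ring = []
    for a in range(360):                       # sample the contour ring
        x = int(round(cx + r * math.cos(math.radians(a))))
        y = int(round(cy + r * math.sin(math.radians(a))))
        if 0 <= y < h and 0 <= x < w:
            ring.append(img[y][x])
    centre = sum(ring) / len(ring)
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            d = math.hypot(x - cx, y - cy)
            if d >= r:
                continue
            if d == 0:
                out[y][x] = centre
                continue
            bx = int(round(cx + r * (x - cx) / d))  # contour pixel on this ray
            by = int(round(cy + r * (y - cy) / d))
            edge = img[by][bx] if 0 <= by < h and 0 <= bx < w else centre
            out[y][x] = centre + (d / r) * (edge - centre)
    return out
```

After this in-painting the bright spot no longer disturbs the intensity-derivative search for the pupil contour in the next step.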
4.3 Pupil Edge Points Detection

We have improved the feature-based method [20] to detect the pupil contour using a small neighborhood and a fixed set of eight rays. The initial guess of the pupil center is obtained from the horizontal and vertical projections of the image: since the pupil is the darkest region in the input eye image, and the corneal reflection is much smaller than the pupil, both projections show a trough that can be treated as an estimated pupil center. The projection results are shown in Figure 5. In the local region, the estimated pupil center should be located around the center of the image; therefore, in Figure 5 b), the troughs m and n are removed and trough k is treated as the horizontal position of the pupil center. The initial pupil center position is shown in Figure 5 c). Our algorithm to detect pupil contour features is given below (Algorithm 2).

Algorithm 2 Pupil Contour Features Detection Algorithm.
01. Input image;
02. E_pc = projection center, taken as the estimated pupil center;
03. α = 0;
04. Do
05.   α = α + 45;
06.   P_E = intensity-derivative points on the ray at angle α from E_pc,
      taken as estimated pupil contour points;
07.   β = 0;
08.   Do
09.     β = β + 5;
10.     [P_c] = intensity-derivative points on the rays at angle β from P_E,
        taken as estimated pupil contour points;
11.   While (β < 360)
12. While (α < 360)
13. Output the detected pupil contour feature points.

For each frame, a location is chosen that represents the best guess of the pupil center in the frame. For the first frame and key frames this is taken as the trough of the projections. The pupil shape is also assumed to be approximately circular, and its size is limited to a reasonable range. For subsequent frames, the location of the pupil center from the previous frame is used.
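A minimal sketch of the projection-based initial guess follows (the pupil being the darkest region, the projection troughs mark its center); the rejection of off-center troughs such as m and n in Figure 5 is omitted here:

```python
def projection_center(img):
    """Initial pupil-center guess: the pupil is the darkest region, so the
    troughs (minima) of the column and row intensity projections give an
    estimate of its x and y position. Returns (x, y)."""
    col_sums = [sum(row[x] for row in img) for x in range(len(img[0]))]
    row_sums = [sum(row) for row in img]
    return col_sums.index(min(col_sums)), row_sums.index(min(row_sums))
```

This reduces the initial guess to two one-dimensional searches, which is what makes it cheap enough to run on the first and key frames.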
Because the pupil contour frequently occupies very little of the image, instead of applying edge detection to the entire eye image or to a region of interest around the estimated pupil location, we detect pupil edges along a limited number of rays that extend from a central best guess of the pupil center. The proposed method to detect the pupil center is shown in Figure 6. After the pupil center is calculated, in the next frame only the neighborhood region of pixels around it, rather than the whole frame, is used to calculate the pupil center.
Fig. 5. The horizontal and vertical projection results used to locate the initial pupil center position (panels a), b), c))
When the pupil center is estimated, rays from this center are used to find pupil contour edge points, which are intensity-leap points along the rays. The rays from the estimated pupil center are limited to eight directions with an equal angle step, as shown in Figures 6 a) and 6 d). In Figure 6 a) there is a good estimated pupil center, so all eight rays reach the proper pupil edge. Figure 6 d) shows a wrong estimated pupil center: the estimated center is outside of the pupil, so only two of the eight rays reach the pupil edge. Because the horizontal and vertical projection results are used to locate the initial pupil center position within the neighbor region, almost all estimated pupil centers fall inside the pupil. This method takes advantage of the high-contrast elliptical profile of the pupil contour present in images taken with infrared illumination using the dark-pupil technique.

Fig. 6. Detected pupil contour using the two-step method: the first step draws eight rays radiating from the estimated pupil center; the second step draws rays from the detected pupil edge points. a) A good estimated pupil center; eight rays from the estimated pupil center reach the pupil edge. b) From the eight pupil edge points detected in a), a second round of rays from the pupil edge points is used to detect the pupil edge; in this figure, only two of the eight groups of rays are shown. c) The detected pupil contour. d) A wrong estimated pupil center. e) Two groups of rays from the detected pupil edge points are sufficient to detect the pupil edge points. f) The pupil edge points detected starting from the wrong estimated pupil center.

Next, the eight rays from the estimated pupil center are extended radially away from this starting point and independently evaluated pixel by pixel until a derivative threshold θ (θ = 20) is exceeded.
Given that we are using the dark-pupil technique, only positive derivatives (increasing intensity as the ray extends) are considered. When this threshold is exceeded, a feature point is defined at that location and the
processing along the ray is halted. If the ray extends to the border of the image, no feature point is defined. The eight candidate feature points of the initial rays are shown in Figure 6 a). For each of the eight candidate feature points whose distance from the starting point is less than 100, the feature detection process described above is repeated backwards from the feature point. However, these return rays are cast every 5 degrees and are limited to γ = ±50 degrees around the ray that originally generated the feature point. The motivation for limiting the return rays in this way is that if the candidate feature point is indeed on the pupil contour (as shown in Figure 6 b)), the returning rays will generate additional feature points on the opposite side of the pupil, all consistent with a single ellipse (the pupil contour). The two-stage feature detection process improves the robustness of the method to poor initial guesses of the starting point. This matters when an eye movement is made, as the eye can rapidly change position from frame to frame; it is especially true for images obtained at low frame rates. Such a case is shown in Figure 6 d). The feature points are biased towards the side of the pupil contour nearest to the initialization point; the second iteration of the ray process minimizes this bias, and the computational burden of two iterations is affordable, so the strategy is efficient. At this point an ellipse could be fitted to the candidate points. The detected feature locations for the second group of rays are shown in Figures 6 b) and 6 e). When the initial guess is a good estimate of the pupil center, for example during eye fixations, which occupy the majority of frames, only a single iteration is required.
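Stage one of the ray process can be sketched as follows (dark-pupil assumption: only positive intensity jumps above θ = 20 count as edges); the unit marching step and pixel rounding are our own choices, and the second, ±50-degree return stage is omitted for brevity:

```python
import math

THRESH = 20  # derivative threshold θ from the paper

def march_ray(img, x0, y0, angle_deg, thresh=THRESH):
    """Walk one ray from (x0, y0); return the first point where intensity
    rises by more than thresh between consecutive samples (dark-pupil edge),
    or None if the ray leaves the image first."""
    h, w = len(img), len(img[0])
    dx = math.cos(math.radians(angle_deg))
    dy = math.sin(math.radians(angle_deg))
    prev = img[int(y0)][int(x0)]
    t = 1.0
    while True:
        x, y = int(round(x0 + t * dx)), int(round(y0 + t * dy))
        if not (0 <= y < h and 0 <= x < w):
            return None                 # reached the border: no feature point
        cur = img[y][x]
        if cur - prev > thresh:         # positive derivative only (dark pupil)
            return (x, y)
        prev, t = cur, t + 1.0

def pupil_edge_points(img, cx, cy):
    """Stage one: eight rays at 45-degree steps from the estimated center."""
    pts = [march_ray(img, cx, cy, a) for a in range(0, 360, 45)]
    return [p for p in pts if p is not None]
```

With a good center estimate inside the pupil, all eight rays terminate on the pupil contour, giving the candidate points handed to the ellipse-fitting stage.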
4.4 Ellipse Fitting

There are two phases to obtaining the pupil contour by ellipse fitting based on the detected pupil edge points. The first is an outlier elimination algorithm; the second is a model-based ellipse fitting algorithm. Before fitting the data, it is desirable to eliminate outliers. To this end, we classify a group of unlabeled data into two classes: one consists of data that can be fitted well by an ellipse, and the other consists of data classified as outliers. Inliers are those sample points whose algebraic distance to the ellipse is less than some threshold; in other words, this is a two-class classification problem with prior knowledge of one of the classes. The threshold is derived from a probabilistic model of the error expected given the nature of our feature detector. The outlier elimination algorithm is a proximity-based algorithm, based on algebraic graph theory, that eliminates distant, isolated outliers. The model-based algorithm then fits an ellipse model to the inliers selected in the first phase. Since the first phase eliminates most of the outliers, the model-based algorithm can be applied effectively: it tests all data points against the ellipse model, classifies points that deviate saliently from the ellipse as outliers, and classifies the remaining points as inliers.
Outlier Elimination

An inlier is a sample in the data attributable to the mechanism being modeled, whereas an outlier is a sample generated through error, attributable to another mechanism not under consideration. In our application, inliers are the detected feature points that correspond to the pupil contour, and outliers are feature points that correspond to other contours, such as that between the eyelid and the eye. Assume that we have K pupil contour points f_i = [x_i, y_i], i = 1, 2, 3, ..., K, where K = N + M: N points lie on an ellipse with small amounts of noise (inliers), and M points are randomly scattered in the plane (outliers). We make two further assumptions about the data points: average distances between inliers are smaller than those between inliers and outliers, and inliers are the majority (> 50%) of the data set. Let

g_i = min_j { D(f_i, f_j) },  j ≠ i,  i, j ∈ {1, ..., K}     (1)

where D(f_i, f_j) is the distance between detected pupil feature points. We construct an adjacency graph based on proximity, using the distances between neighboring detected pupil feature points. The major connected component is considered to be composed of inliers; the other, smaller components are considered to be composed of outliers.

Model-Based Algorithm

Given a set of candidate feature points, the next step of the algorithm is to find the best fitting ellipse. In two-dimensional space, let p_1, p_2, ..., p_N be a set of N points, p_i = [x_i, y_i]^T. Let t = [x^2, xy, y^2, x, y, 1]^T; then we have the function

F(p, v) = t^T v = a x^2 + b xy + c y^2 + d x + e y + f = 0,     (2)

the implicit equation of the generic ellipse, characterized by the parameter vector v = [a, b, c, d, e, f]^T. The task is to find the parameter vector v_0 associated with the ellipse that fits p_1, ..., p_N best in the least squares sense, as the solution of the objective

min_v Σ_{i=1}^{N} [D(p_i, v)]^2     (3)

where D(p_i, v) is a suitable distance.
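The proximity test of Equation (1) can be sketched as follows; the factor-of-three cut on the median nearest-neighbour distance stands in for the paper's connected-component analysis and is purely our own simplification:

```python
import math

def min_neighbor_dists(pts):
    """g_i of Eq. (1): distance from each point to its nearest neighbour."""
    out = []
    for i, (xi, yi) in enumerate(pts):
        out.append(min(math.hypot(xi - xj, yi - yj)
                       for j, (xj, yj) in enumerate(pts) if j != i))
    return out

def drop_proximity_outliers(pts, factor=3.0):
    """Keep points whose nearest-neighbour distance is within `factor`
    times the median; distant, isolated points are treated as outliers.
    The factor is an assumed tuning constant, not from the paper."""
    g = min_neighbor_dists(pts)
    med = sorted(g)[len(g) // 2]
    return [p for p, gi in zip(pts, g) if gi <= factor * med]
```

Because inliers are assumed to be the majority, the median nearest-neighbour distance is dominated by the tightly spaced contour points, so isolated eyelid points fall well above the cut.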
We can achieve this goal by running an algorithm similar to RANSAC, an effective technique for model fitting in the presence of a large but unknown percentage of outliers in a measurement sample. However, RANSAC becomes computationally unacceptable when the percentage of outliers is high and the number of parameters in the model is large. The number of random samples required
is determined by Equation (4):

P = 1 − (1 − w^n)^k     (4)

where P is the probability of finding the correct model after running RANSAC k times, w is the proportion of inliers, and n is the minimum number of data points needed to fit the model. Assuming w = 0.5 and n = 5, guaranteeing P = 0.99 requires k = 146 iterations to fit an ellipse. The fitting algorithm itself also becomes computationally expensive when n is large [21]. However, since we have greatly decreased the percentage of outliers in the remaining data set by employing the proximity-based outlier detection algorithm, it is now feasible to run a RANSAC-type algorithm. RANSAC admits the possibility of outliers and uses only a subset of the data to fit the model. In detail, RANSAC is an iterative procedure that selects many small but random subsets of the data, uses each subset to fit a model, and finds the model that has the highest agreement with the data set as a whole; the subset of data consistent with this model is the consensus set. First, we use the entire set of inliers selected by the first-stage algorithm to fit an initial model, instead of randomly choosing the minimum number of points as in the original RANSAC, since the remaining outliers represent only a small percentage and are close to the inliers. Moreover, since our initialization is not random, it is unnecessary to run RANSAC many times. The following procedure is then repeated R times. First, five samples are randomly chosen from the detected feature set, this being the minimum sample size required to determine all the parameters of an ellipse. Singular Value Decomposition (SVD) on the conic constraint matrix generated from normalized feature-point coordinates is used to find the parameters of the ellipse that perfectly fits these five points.
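Equation (4) can be solved for k directly, reproducing the k = 146 quoted above:

```python
import math

def ransac_iterations(P=0.99, w=0.5, n=5):
    """Solve P = 1 - (1 - w**n)**k for k (Eq. (4)) and round up:
    k = log(1 - P) / log(1 - w**n)."""
    return math.ceil(math.log(1 - P) / math.log(1 - w ** n))
```

With w = 0.5 and n = 5 this gives 146 iterations; raising the inlier proportion w, as the proximity filter does, lowers the required iteration count sharply.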
If the parameters of the ellipse are imaginary, the ellipse center is outside of the image, or the major axis is greater than two times the minor axis, five different points are randomly chosen until this is no longer the case. Then, the number of candidate feature points in the data set that agree with this model (i.e. the inliers) is counted. After the necessary number of iterations, an ellipse is fitted to the largest consensus set (shown in Figure 7).

4.5 Mapping and Calibration

In order to calculate the user's point of gaze in the scene image, a mapping between locations in the scene image and an eye-position measure (e.g., the vector difference between the pupil center and the corneal reflection) must be determined. The typical procedure in eye-tracking methodology is to measure this relationship through a calibration procedure. During calibration, the user is required to look at a number of scene points whose positions in the scene image are known. While the user is fixating each scene point s = (x_s, y_s, 1), the eye position e = (x_e, y_e, 1) is measured (note the homogeneous coordinates). In this paper, the calculation
Fig. 7. Ellipse fitting results and the eye-position measure. The calibration is based on the mapping between locations in the scene image and the vector from the pupil center to the corneal reflection. Different pupil positions are shown in these figures

is based on the floating calibrator method [22], in which the light spot from a head-mounted laser pointer, projected on a wall while the head is scanning, is recorded by the scene camera in synchronization with the infrared eye camera. The difference is that our calibrators (infrared labels) are fixed at several locations: the left side mirror, right side mirror, rear view mirror, and center console. The driver looks at these calibrators to accomplish the calibration procedure before each experiment. Interpolation is performed for target positions where no samples were taken; thus, non-linear interpolation error can be minimized, even for wide-range tracking. We generate the mapping between the two sets of points using a linear homographic mapping. The calibration result can be updated every time the driver looks at the calibration labels during an experiment. Therefore, when the glasses slide down the nose bridge, the eyes squint due to lighting changes, or the seating position changes, the calibration is updated as soon as the driver looks at a side mirror, the rear view mirror or the center console.

5 EXPERIMENTAL RESULTS AND DISCUSSIONS

An eye-tracking evaluation was conducted to validate the performance of the algorithm. Two groups of experiments were carried out: one using an indoor simulation, where the calibration is more delicate, with floating calibrators and good illumination; the other in a real driving environment, where calibration is based on four labels and illumination is not uniform. The smoothing buffer size of the gaze data is 4 frames, i.e., every piece of gaze data is related to the previous three.
The scene camera is used to capture the calibration points (laser dots or infrared labels). The mapping relationship between the eye camera, the scene camera, and the real world is calibrated when the system is set up, i.e. when the cameras are installed on the glasses frame. During the indoor experiment, the frame rate is 25 frames per second on a PC with a 2.4 GHz Intel CPU and 4 GB RAM. Video was recorded from the head-mounted eye tracker described in Section 3 while three
subjects viewed a movie trailer projected on a white wall. Before viewing the trailer, each subject placed a laser pointer on the head-mounted tracker and scanned it randomly across the white wall while gazing at the moving laser dot. The distance from the wall is approximately 300 cm. The laser dots on the white wall are detected automatically by image processing and treated as floating calibrators. The evaluation was conducted twice for each user. After viewing the movie trailer, nine dots were projected on the wall; the subjects fixated these dots, and the calibrated eye gaze positions were computed. The evaluation result is shown in Figure 8; the average error is 0.34 degree.

For the real driving environment, a Car PC with a 1.5 GHz CPU and 2 GB RAM is used; the frame rate is 15 frames per second. Four infrared-reflective labels of different shapes are fixed on the two side mirrors, the rear view mirror, and the center console; the shapes are a cross, a dot, a vertical line, and a horizontal line. The scene camera can distinguish the four labels by their shapes and infrared reflection, and these labels are used as calibrators. The calibration procedure is not only performed before the experiment but can also be used to refine eye gaze during the experiment, especially when the glasses slide on the nose bridge, the eyes squint due to lighting changes, or the seating position changes. We use a dwell-time criterion: if the user keeps looking at a label for over 200 ms, i.e. the scene camera stays focused on the target label for over 200 ms, the label is recorded as a calibrator; in this case we assume the driver is deliberately looking at the label. Such a long dwell time ensures that an inadvertent glance across the labels is not mistaken for a calibration fixation.
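With at least four such fixation pairs (eye-position measure e, scene point s in homogeneous coordinates), the linear homographic mapping s ~ H e mentioned in Section 4.5 can be estimated by a direct linear transform. A minimal sketch, in which the function names and the plain (unnormalized) DLT are our choices rather than the authors' code:

```python
import numpy as np

def estimate_homography(eye_pts, scene_pts):
    """Estimate the 3x3 homography H with s ~ H e from >= 4 point
    correspondences via the direct linear transform (DLT)."""
    A = []
    for (xe, ye), (xs, ys) in zip(eye_pts, scene_pts):
        # each correspondence contributes two rows of the DLT system A h = 0
        A.append([-xe, -ye, -1, 0, 0, 0, xs*xe, xs*ye, xs])
        A.append([0, 0, 0, -xe, -ye, -1, ys*xe, ys*ye, ys])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)  # null vector, reshaped to H (up to scale)

def map_gaze(H, eye_pt):
    """Map an eye-position measure to a point of gaze in the scene image."""
    v = H @ np.array([eye_pt[0], eye_pt[1], 1.0])
    return v[:2] / v[2]          # back from homogeneous coordinates
```

During driving, re-running `estimate_homography` whenever the driver dwells on the mirror or console labels would realize the online calibration update described above.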
We compared the tracking results under different conditions in real-world driving. The pupil center and corneal reflection detection and the ellipse fitting results are shown in Figures 9 and 10. In some frames the pupil detection failed; the detection rate is 96.31% over all processed frames. We manually verified the gaze positions from the scene video and notes; the calibration points and the average tracking error under different illumination conditions are shown in Figure 11 and Table 1, with means and standard deviations. The average error is 2.95 degrees in usual light conditions, 4.81 degrees in sunlight, and 2.48 degrees at night; sunlight therefore degrades tracking accuracy, and the best result is obtained at night with infrared illumination.

6 CONCLUSIONS

In this paper, we focused on eye-tracking approaches for driver attention. A novel eye-tracking algorithm was proposed to collect a driver's eye gaze in real-world driving studies with a low-cost head-mounted tracker. Both the corneal reflection location and the pupil contour are detected through adaptive feature-based techniques. Horizontal and vertical projections of the binary image are used to estimate the pupil center; then eight radial rays from this center are iterated out to the pupil edge to obtain edge points. After outlier elimination, the RANSAC paradigm is applied to maximize the accuracy of ellipse fitting in the presence of gross feature-detection
Fig. 8. Verification of the proposed low-cost head-mounted eye gaze tracking. a) The 9 check points in the scene image (circles) and the tracking results (stars). b) Tracking errors at the 9 points for subjects 1, 2, and 3. The average error is 0.34 degree.
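The errors above are reported in degrees of visual angle; given the offset on the wall between the true and estimated gaze points and the roughly 300 cm viewing distance of the indoor setup, the conversion is straightforward. A hypothetical helper (the function name and cm units are our choices):

```python
import math

def angular_error_deg(gaze_xy, target_xy, distance_cm=300.0):
    """Angular error, in degrees of visual angle, between an estimated gaze
    point and a target point, both given in cm on the projection wall."""
    dx = gaze_xy[0] - target_xy[0]
    dy = gaze_xy[1] - target_xy[1]
    offset = math.hypot(dx, dy)           # straight-line offset on the wall
    return math.degrees(math.atan2(offset, distance_cm))
```

At 300 cm, the reported 0.34-degree average error corresponds to an on-wall offset of under 2 cm.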
errors. Finally, a model-based approach is applied to further refine the fit. We conducted a validation study which indicates that the algorithm performs well on video obtained from the low-cost head-mounted eye tracker. The average error of the verification experiments for three subjects is 0.34 degree in the indoor experiment. In the real-world driving environment, the average error is 2 to 5 degrees under different illumination conditions.

Fig. 9. Experimental results under sunlight: a), b), c). The pupil center, corneal reflection, and ellipse fitting results are still usable.

Fig. 10. Experimental results at night: a), b), c). When the surrounding light is weak, the infrared illumination makes the pupil center, corneal reflection, and ellipse fitting results very accurate.

Table 1. Experimental results in different real driving environments: for each check point, the original and estimated gaze positions (X, Y) and the mean and standard deviation of the tracking error under sunlight, normal, and night conditions.

Fig. 11. Experimental results in the real driving environment. a) Infrared reflection labels of different shapes are attached to the rear view mirror, center console, and left and right side mirrors; positions 1-12 are used as check points. b) Tracking errors at the 12 check points under different conditions. The average error is 2.95 degrees in usual light conditions, 4.81 degrees under sunlight, and 2.48 degrees at night.

Acknowledgment

This work is supported by the National Natural Science Foundation of China, the Fundamental Research Funds for the Central Universities (Grant Nos. 2011QN031, 2011JC006), the Liaoning Education Department Research Fund, the Dalian Science and Technology Fund (Grant No. 2010J21DW006), and an America Research to Prevent Blindness International Research Scholar Award (2010).

REFERENCES

[1] Kandil, F. Rotter, A. Lappe, M.: Car Drivers Attend to Different Gaze Targets when Negotiating Closed vs. Open Bends. Journal of Vision, Vol. 10, 2010, No. 4.
[2] Doshi, A. Trivedi, M.: On the Roles of Eye Gaze and Head Dynamics in Predicting Driver's Intent to Change Lanes. IEEE Transactions on Intelligent Transportation Systems, Vol. 10, 2009, No. 3.
[3] Burguillo, J. Rodriguez, P. Costa, E. Gil, F.: History-Based Self-Organizing Traffic Lights. Computing and Informatics, Vol. 28, 2009, No. 2.
[4] Reimer, B.: Impact of Cognitive Task Complexity on Drivers' Visual Tunnelling. Transportation Research Record: Journal of the Transportation Research Board, Vol. 2138, 2009, No. 1.
[5] Hammoud, R.: Passive Eye Monitoring: Algorithms, Applications and Experiments. Springer Verlag.
[6] Yao, K. Lin, W. Fang, C. Wang, J. Chang, S. Chen, S.: Real-Time Vision-Based Driver Drowsiness/Fatigue Detection System. Proceedings of the 2010 IEEE Vehicular Technology Conference (VTC 2010-Spring), 2010.
[7] Miyaji, M. Kawanaka, H. Oguri, K.: Driver's Cognitive Distraction Detection Using Physiological Features by the AdaBoost. Proceedings of the 12th International IEEE Conference on Intelligent Transportation Systems, Missouri 2009.
[8] Liu, R. Yuan, B.: Automatic Eye Feature Extraction in Human Face Images. Computing and Informatics, Vol. 20, 2001, No. 3.
[9] Lu, Y. Zhou, J.
Yu, S.: A Survey of Face Detection, Extraction and Recognition. Computing and Informatics, Vol. 22, 2003, No. 2.
[10] Yamazoe, H. Utsumi, A. Yonezawa, T. Abe, S.: Remote and Head-Motion-Free Gaze Tracking for Real Environments With Automated Head-Eye Model Calibrations. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Anchorage 2008, pp. 1-6.
[11] Zhu, Z. Ji, Q.: Novel Eye Gaze Tracking Techniques Under Natural Head Movement. IEEE Transactions on Biomedical Engineering, Vol. 54, 2007, No. 12.
[12] Miyake, T. Asakawa, T. Yoshida, T. Imamura, T. Zhang, Z.: Detection of View Direction with a Single Camera and Its Application Using Eye Gaze. Proceedings of the 35th Annual Conference of IEEE Industrial Electronics, Porto 2009.
[13] Fu, X. Luo, G. Peli, E.: Telescope Aiming Point Tracking Method for Bioptic Driving Surveillance. IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol. 18, 2010, No. 6.
[14] Villanueva, A. Cabeza, R.: Evaluation of Corneal Refraction in a Model of a Gaze Tracking System. IEEE Transactions on Biomedical Engineering, Vol. 55, 2008, No. 12.
[15] Hansen, D. Ji, Q.: In the Eye of the Beholder: A Survey of Models for Eyes and Gaze. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, 2010, No. 3.
[16] Li, Y. Wang, S. Ding, X.: Eye/Eyes Tracking Based on a Unified Deformable Template and Particle Filtering. Pattern Recognition Letters, Vol. 31, 2010, No. 11.
[17] Amiri, A. Fathy, M.: Video Shot Boundary Detection Using Generalized Eigenvalue Decomposition and Gaussian Transition Detection. Computing and Informatics, Vol. 30, 2011, No. 3.
[18] Doshi, A. Trivedi, M.: Investigating the Relationships Between Gaze Patterns, Dynamic Vehicle Surround Analysis, and Driver Intentions. Proceedings of the 2009 IEEE Intelligent Vehicles Symposium, Xi'an 2009.
[19] Sha, S. Jianer, C. Sanding, L.: A Fast Matching Algorithm Based on K-Degree Template. Proceedings of the 4th International Conference on Computer Science and Education, Nanning 2009.
[20] Li, D. Winfield, D. Parkhurst, D.: Starburst: A Hybrid Algorithm for Video-Based Eye Tracking Combining Feature-Based and Model-Based Approaches.
Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego 2005.
[21] Yu, J. Zheng, H. Kulkarni, S. Poor, H.: Outlier Elimination for Robust Ellipse and Ellipsoid Fitting. Proceedings of the 3rd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, Dutch Antilles 2009.
[22] Fu, X. Luo, G. Peli, E.: Tracking Telescope Aiming Point for Bioptic Driving Surveillance. Proceedings of the International Conference on Image Processing, Computer Vision, and Pattern Recognition, Las Vegas: WORLDCOMP 2009.
Xianping Fu received the Ph.D. degree in communication and information systems from Dalian Maritime University, Dalian, China. He is now a Professor at the Information Science and Technology College, Dalian Maritime University, Dalian, China. From 2008 to 2009 he was a postdoctoral fellow at Schepens Eye Research Institute, Harvard Medical School, Boston, MA. His research interests include the perception of natural scenes in engineering systems, including multimedia, image/video processing, and object recognition.

Ying Zang received her B.Sc. degree in computer science and technology from Liaoning University (China) in 2004 and her M.Sc. degree in computer science and technology from Dalian Maritime University. Her research interests include digital image processing and pattern recognition.

Hongbo Liu is a Professor at the School of Information Science and Technology at Dalian Maritime University, Dalian, China. He received his Ph.D. degree in computer science from Dalian University of Technology. His research interests include system modelling and optimization involving soft computing, probabilistic modeling, cognitive computing, machine learning, data mining, etc. In recent years he has been engaged in research and teaching on the design and analysis of computer algorithms, agent systems, intelligent robotics, brain and cognition, etc. He actively participates in and organises international conferences and workshops.
Designing eye tracking experiments to measure human behavior Eindhoven, The Netherlands August, 2010 Ricardo Matos Tobii Technology Steps involved in measuring behaviour 1. Formulate and initial question
Detection and Restoration of Vertical Non-linear Scratches in Digitized Film Sequences
Detection and Restoration of Vertical Non-linear Scratches in Digitized Film Sequences Byoung-moon You 1, Kyung-tack Jung 2, Sang-kook Kim 2, and Doo-sung Hwang 3 1 L&Y Vision Technologies, Inc., Daejeon,
A Robust Method for Solving Transcendental Equations
www.ijcsi.org 413 A Robust Method for Solving Transcendental Equations Md. Golam Moazzam, Amita Chakraborty and Md. Al-Amin Bhuiyan Department of Computer Science and Engineering, Jahangirnagar University,
Binary Image Scanning Algorithm for Cane Segmentation
Binary Image Scanning Algorithm for Cane Segmentation Ricardo D. C. Marin Department of Computer Science University Of Canterbury Canterbury, Christchurch [email protected] Tom
Usability Testing Jeliot 3- Program Visualization Tool: Evaluation Using Eye-Movement Tracking
Usability Testing Jeliot 3- Program Visualization Tool: Evaluation Using Eye-Movement Tracking Roman Bednarik University of Joensuu Connet course 281: Usability in Everyday Environment February 2005 Contents
MetropoGIS: A City Modeling System DI Dr. Konrad KARNER, DI Andreas KLAUS, DI Joachim BAUER, DI Christopher ZACH
MetropoGIS: A City Modeling System DI Dr. Konrad KARNER, DI Andreas KLAUS, DI Joachim BAUER, DI Christopher ZACH VRVis Research Center for Virtual Reality and Visualization, Virtual Habitat, Inffeldgasse
Real-Time Tracking of Pedestrians and Vehicles
Real-Time Tracking of Pedestrians and Vehicles N.T. Siebel and S.J. Maybank. Computational Vision Group Department of Computer Science The University of Reading Reading RG6 6AY, England Abstract We present
Automatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 269 Class Project Report
Automatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 69 Class Project Report Junhua Mao and Lunbo Xu University of California, Los Angeles [email protected] and lunbo
T-REDSPEED White paper
T-REDSPEED White paper Index Index...2 Introduction...3 Specifications...4 Innovation...6 Technology added values...7 Introduction T-REDSPEED is an international patent pending technology for traffic violation
An Iterative Image Registration Technique with an Application to Stereo Vision
An Iterative Image Registration Technique with an Application to Stereo Vision Bruce D. Lucas Takeo Kanade Computer Science Department Carnegie-Mellon University Pittsburgh, Pennsylvania 15213 Abstract
Advanced Methods for Pedestrian and Bicyclist Sensing
Advanced Methods for Pedestrian and Bicyclist Sensing Yinhai Wang PacTrans STAR Lab University of Washington Email: [email protected] Tel: 1-206-616-2696 For Exchange with University of Nevada Reno Sept. 25,
Impact of Feature Selection on the Performance of Wireless Intrusion Detection Systems
2009 International Conference on Computer Engineering and Applications IPCSIT vol.2 (2011) (2011) IACSIT Press, Singapore Impact of Feature Selection on the Performance of ireless Intrusion Detection Systems
DEVELOPMENT OF HYBRID VECTORIZING SOFTWARE FOR DIGITIZATION OF CADASTRAL MAPS
DEVELOPMENT OF HYBRID VECTORIZING SOFTWARE FOR DIGITIZATION OF CADASTRAL MAPS Byoungjun SEO, Jaejoon JEONG, Jaebin LEE and Prof. Yongil KIM, Korea ABSTRACT The Cadastral map is a basic data that prescribes
Processing the Image or Can you Believe what you see? Light and Color for Nonscientists PHYS 1230
Processing the Image or Can you Believe what you see? Light and Color for Nonscientists PHYS 1230 Optical Illusions http://www.michaelbach.de/ot/mot_mib/index.html Vision We construct images unconsciously
Character Image Patterns as Big Data
22 International Conference on Frontiers in Handwriting Recognition Character Image Patterns as Big Data Seiichi Uchida, Ryosuke Ishida, Akira Yoshida, Wenjie Cai, Yaokai Feng Kyushu University, Fukuoka,
High-accuracy ultrasound target localization for hand-eye calibration between optical tracking systems and three-dimensional ultrasound
High-accuracy ultrasound target localization for hand-eye calibration between optical tracking systems and three-dimensional ultrasound Ralf Bruder 1, Florian Griese 2, Floris Ernst 1, Achim Schweikard
