RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 29 (2008) Indiana University


A Software-Based System for Synchronizing and Preprocessing Eye Movement Data in Preparation for Analysis 1

Mohammad B. Afaneh, Visal Kith, and Tonya R. Bergeson 2

Speech Research Laboratory
Department of Psychological and Brain Sciences
Indiana University
Bloomington, Indiana

1 This work was supported by NIH-NIDCD Research Grant R21DC06682 and Training Grant T32DC. We thank Luis Hernández for his insightful comments on this project.
2 Babytalk Research Laboratory, Department of Otolaryngology Head and Neck Surgery, Indiana University School of Medicine. Correspondence concerning this article should be addressed to Tonya R. Bergeson, Ph.D., Department of Otolaryngology Head and Neck Surgery, Indiana University School of Medicine, 699 West Drive RR044, Indianapolis, IN.

A Software-Based System for Synchronizing and Preprocessing Eye Movement Data in Preparation for Analysis

Abstract. When upgrading or adding a new component to an existing system in a research lab, the problem of incompatibility often arises. In this paper, we present a software-based solution for integrating and synchronizing an eye-tracking system with the software used for stimulus presentation in an infant speech research lab. The algorithms developed and implemented in this system draw on several areas of image and data processing. Although the solution presented is specific to our particular setup, it can easily be applied to any similar setup that pairs an eye-tracking system with stimulus presentation software.

Introduction

Traditionally, researchers of visual attention and perception have made use of techniques such as monitoring a live view of the subject's head and face to get a rough idea of gaze direction (e.g., looking right, left, or center). With recent advances in eye-tracking technology, however, eye-tracking systems have been introduced and integrated in visual perception laboratories where both high accuracy and high resolution are necessary to investigate looking behavior towards a variety of visual scenes. The use of eye trackers has several advantages over the traditional methods. The first and most important advantage is increased accuracy (up to 0.5 degrees of visual angle). Second is the measurement of other useful information in addition to the direction of gaze (e.g., pupil diameter, blinks, head position, and pupil position). Another very important advantage is the ability to have external software analyze the eye tracker's output data. With such software, the detection of blinks, fixations, dwell times, and saccades can be done automatically.
A typical eye tracker consists of a camera, which is encircled by a ring of infrared LEDs, and a control unit in which the captured image is processed before being transferred to a PC or other interface device. The ring of LEDs illuminates the eye, and when placed on the axis of the camera lens it produces an interesting effect on the pupil: the subject's pupil appears as a bright object in the captured image, similar to the red-eye effect in photography. The infrared light also causes a reflection off the cornea. By computing the vector between the corneal reflection and the pupil center, the system can compute the direction of the subject's gaze.

In many laboratories, software specific to an operating system is used for presenting stimuli in experiments. For example, Habit software, which runs on Mac OS, is used in several infant laboratories (Cohen, Atkinson, & Chaput, 2004). Experiments run with Habit allow the duration of trials to be set online according to predefined criteria regarding the attention and behavior of the subject. To do this, Habit monitors the operator's keystrokes on a computer keyboard, which tell the software where the subject is looking (left, right, or center); according to these keystrokes, Habit determines the duration of the trial on the screen and moves on to the next trial. The output file of the program contains the direction of looks and cumulative look times in each trial.
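The pupil-center/corneal-reflection computation can be sketched as below. This is an illustrative Python sketch rather than the tracker's actual firmware, and the calibration gains gx and gy are hypothetical constants that a real system would estimate during a calibration routine.

```python
# Sketch: gaze direction from the pupil-center / corneal-reflection vector.
# Coordinates are pixel positions detected in the camera image.

def gaze_vector(pupil_center, corneal_reflection):
    """Vector from the corneal reflection to the pupil center, in pixels."""
    px, py = pupil_center
    cx, cy = corneal_reflection
    return (px - cx, py - cy)

def gaze_angles(pupil_center, corneal_reflection, gx=0.1, gy=0.1):
    """Map the pixel vector to horizontal/vertical gaze angles (degrees).

    gx and gy are hypothetical degrees-per-pixel calibration gains.
    """
    dx, dy = gaze_vector(pupil_center, corneal_reflection)
    return (gx * dx, gy * dy)

print(gaze_vector((120, 80), (100, 90)))  # → (20, -10)
```

As the pupil and the corneal reflection move together under head translation but apart under eye rotation, this difference vector is much more robust to small head movements than the raw pupil position.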

Problem Statement

In the infant laboratory setup described above, the Mac sends both video and audio stimulus output to a TV screen or monitor to be viewed by the subject. When an eye tracker is introduced into the system, the video output first has to pass through a scan converter device before continuing on its way to the TV screen. This device splits the video signal into two signals: one goes into the eye tracker control unit, and the other passes to the monitor to be displayed (see Figure 1).

[Figure 1 diagram: the Mac's stimulus video passes through the scan converter, which feeds both the TV/PC display and the eye tracker control unit (CU); the CU also receives video from the eye tracker camera, drives the eye and scene monitors, sends serial data to the eye-tracking PC, and a video frame overlay inserts frame numbers into the scene video captured by the PC.]

Figure 1. System model representing the interaction between a Mac, a PC, and an eye-tracker system. The Mac sends video and audio output to the TV by passing it through a scan converter. The scan converter splits the video signal into two: one goes to the TV and one goes to the eye-tracker control unit. The eye-tracker control unit superimposes crosshairs on the frames indicating the gaze target of the subject and then sends the video to the scene monitor to be captured by the PC. At the same time, the eye-tracker control unit also receives video input from an eye-tracker camera, which is displayed on the eye monitor.

The eye tracker obtains the video of the stimulus and then superimposes crosshairs on the frames indicating the gaze target of the subject. In such a setup the eye tracker gets the video as input, but does not know the start or end frame of a trial within an experiment. Also, the output file consists of rows of eye movement data, each of which corresponds to one frame (or 1/60th of a second) for the entire experiment. That is, the output file contains no separation between trials within each experiment.
The separation between trials is very important since statistical analyses are performed using data from each trial within an experiment. Finally, each video is time-stamped; we will take advantage of this timestamp as part of our solution.

Proposed Solution

The solution can be approached in two different ways: hardware or software. Each has its advantages and disadvantages. The advantage of software over hardware is usually cost, and the disadvantage is usually speed. Another important advantage of software-based solutions is portability. To avoid adding new hardware components to the system and to reduce cost, we chose the software approach.

The video output presented on the TV screen by Habit consists of pretest, habituation, and/or test phases of stimuli, each of which is called a trial. In between the different trials, a short video (the "attention getter") is repeated to draw the attention of the infant. The general idea behind our solution is to make use of the captured attention-getter video to detect the start and end frames of the different trials. The program can then extract this information, detect the rows in the output data file corresponding to these trials, and mark them as separate trials for the purpose of later analyses. Even though eye movement data usually exist for the majority of the trials, the output of the Habit software can still serve as a backup of the gaze direction results in cases where the eye tracker loses data or does not work properly. It also serves to mark the trials in the output file with their specific descriptions.

Finally, our hope was to minimize the requirement of user interaction and make the software as easy to use as possible. With this in mind, we chose to develop a program with a simple Graphical User Interface (GUI). Figure 2 shows the design and appearance of the GUI.

Figure 2. The GUI is designed to make the software as easy to use as possible. It contains step-by-step instructions for the user to follow through the data analysis.

Algorithm Discussion

There are two phases in the proposed algorithm. The first phase consists of the image processing and object recognition steps, which detect the different trials in the video. The second phase is the data processing phase, which includes reading, preprocessing, and then marking the rows of the eye movement data file.

Phase I

Figure 3 shows a flow chart of the basic steps in the first phase of our system solution. To make the algorithm clearer, we will describe each step in detail. The first step in this phase is to split the video file into a sequence of JPEG image files, with each file name containing the sequence number of that image in the original video (frame0001.jpg, frame0002.jpg, etc.). This video decomposition allows us to process and analyze each frame in its original sequential order. It also lets subsequent steps work with ordinary image files rather than extract individual frames from the original video file format.

The program then loops over all the image files in order, starting with the first frame (frame_number = 1). To tell the software whether to look for a trial start or end frame in the series of frames, we define a variable which is set to 1 when looking for the start frame of a trial (start_flag = 1) and set to 0 when looking for a trial end frame (start_flag = 0). If start_flag is set to 1, the algorithm first computes the correlation of the current frame with a reference image (defined prior to entering the loop). The reference image should represent the image displayed between the trials (see Figure 4). To find the start and end of trials, we calculate the correlation between the current frame in the video and this reference frame. In our case, however, a movie file was presented between trials, which meant that we could not rely on an exact match with just one reference image to detect the separation movie.
The more practical solution was to choose a correlation threshold of 0.8 instead of 1 (chosen by trial and error). This technique required only one reference image, one that is reasonably similar to the frames of the separation movie. The choice of a threshold less than 1 also accounted for any artifacts present in the frames. If the correlation is indeed less than 0.8, then we have detected the start frame of the first trial. Next, the algorithm calls another routine which recognizes and extracts the sequence number present in the current frame and stores it in the start frames array. The next step in the algorithm sets start_flag to 0 until we find the end frame of the trial. If start_flag equals 0, the algorithm computes the correlation between the current frame and the reference frame. If the correlation is larger than or equal to 0.8, it decides that the current frame is indeed the start of the separation movie. In this case, the algorithm calls the digit recognition routine to extract the sequence number present in the previous frame rather than the current frame, because the previous frame was the last frame of the trial and the current frame is not part of the trial. The algorithm then stores the digit recognition result in the end frames array.

Afterwards, start_flag is set to 1 so that the program searches for the start frame of the next trial, and so on until it detects the end frame of the last trial. Finally, the algorithm outputs the two arrays (the start frame array and the end frame array), each on a separate line of an output file chosen by the user as an argument to the algorithm. Table 1 is an example of the output. The first row represents the start frames and the second represents the end frames of the different trials, in order. In this case there were three trials in the experiment.

Figure 3. The Phase I flow chart shows the basic steps in detecting the start and end of each trial. First, the algorithm checks whether the variable start_flag is equal to 1. If it is, it searches for a start frame; otherwise it searches for an end frame. After detecting a start or end frame, the algorithm recognizes the sequence digits and stores them in an array.
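The Phase I loop can be sketched as follows. This is an illustrative Python sketch (the system itself was implemented in MATLAB): corr2 mirrors the flat 2-D Pearson correlation of MATLAB's corr2, frames are grayscale images given as 2-D lists, and read_digits stands in for the digit-recognition routine described in the Digit recognition section.

```python
# Sketch of the Phase I loop: correlate each frame with the attention-getter
# reference image and toggle start_flag to collect trial start/end numbers.
from math import sqrt

THRESHOLD = 0.8  # chosen by trial and error in the original system

def corr2(a, b):
    """Pearson correlation of two equal-sized 2-D grayscale images."""
    xs = [p for row in a for p in row]
    ys = [p for row in b for p in row]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0

def find_trials(frames, reference, read_digits):
    """Return (start_frames, end_frames) arrays of overlaid frame numbers."""
    starts, ends = [], []
    start_flag = 1
    for i, frame in enumerate(frames):
        r = corr2(frame, reference)
        if start_flag and r < THRESHOLD:            # left the attention getter
            starts.append(read_digits(frame))
            start_flag = 0
        elif not start_flag and r >= THRESHOLD:     # attention getter is back
            ends.append(read_digits(frames[i - 1])) # previous frame ends trial
            start_flag = 1
    return starts, ends
```

Note how the end-frame branch reads the digits from the previous frame, matching the rule that the frame on which the separation movie reappears is not part of the trial.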

Figure 4. The reference image represents the image displayed in between the trials. We are able to find the start and end frame of each trial by calculating the correlation between the current frame and this reference image. If the correlation is less than 0.8, then we have detected a start frame.

              Trial 1   Trial 2   Trial 3
Start Frames
End Frames

Table 1. Output of Start and End Frames for 3 Trials

Digit recognition

Digit detection and extraction from a particular frame are included in Figure 3. In our system, five digits were superimposed on each video frame, each frame corresponding to a row of data in the eye movement data file. These frame numbers are superimposed by an external hardware device (called a video frame overlay) which takes two inputs, the video output and the data stream from the eye-tracking system, and produces one output: the video with the superimposed frame numbers. The idea behind using such a device is to synchronize the video output with the eye movement data. By reading the frame number from a video frame, one can find the corresponding row of data in the eye movement file. In our solution, we use image processing techniques to detect the different trials in the video, and then go back to the data file and mark the lines corresponding to each trial in order to perform the analysis on each trial separately.
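The frame-number-to-row correspondence just described (one data row per overlaid frame number) can be sketched as follows. This is an illustrative Python sketch under an assumed file layout of one eye-movement data row per line; the original pipeline was implemented in MATLAB.

```python
# Sketch: tag each row of the eye-movement data file with its trial number,
# given the start/end overlay frame numbers found in Phase I. Rows that fall
# between trials (attention getter) are tagged with trial 0.

def mark_trials(rows, start_frames, end_frames):
    """Return (trial_number, row) pairs; overlay numbers are 1-based."""
    marked = []
    for i, row in enumerate(rows, start=1):
        trial = 0
        for t, (s, e) in enumerate(zip(start_frames, end_frames), start=1):
            if s <= i <= e:
                trial = t
                break
        marked.append((trial, row))
    return marked
```

For example, with five rows and a single trial spanning overlay frames 2 through 4, rows 2, 3, and 4 are tagged with trial 1 and the remaining rows with trial 0.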

Figure 5 shows an example of a sub-image taken from a video frame containing the superimposed frame numbers. We chose this frame to illustrate a serious problem: two frame numbers are overlapped on the same frame. This is caused by the down-sampling of the video due to the mismatch between the frame grabber capturing video at 30 Hz and the eye tracker outputting frames at 60 Hz. Because the video is interlaced, the odd rows of the image are captured on the first pass of the image capture while the even rows are captured on the second pass. One digit comes from the odd pass while the other comes from the even pass. We note, however, that this is a worst-case scenario, since in most images only the last digit (far right) of the sequence will be an overlap of two consecutive digits. The second-to-last digit will appear as an overlap every 5 frames, the third-to-last every 25 frames, and so on.

Figure 5. Frame numbers output by the eye tracker and superimposed on a captured frame. This figure shows the worst-case scenario, in which two frame numbers are overlapped on the same frame.

To solve this issue, for each frame we extract the odd rows and then extract the digits from this sub-image. There will still be an error in detecting the exact row representing the start or end frame of a trial, but the maximum error would be 2 frames per trial, which is negligible compared to the total number of frames in each trial. Figure 6 shows the even and odd rows extracted from the sub-image in Figure 5.

(a) Odd Row (b) Even Row

Figure 6. Odd and even row images extracted from the image in Figure 5. (a) A frame number in the odd rows. (b) A frame number in the even rows.
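The field separation amounts to taking every other scan line of the interlaced frame. A minimal Python sketch (images as 2-D lists; the original used MATLAB):

```python
# Sketch: split an interlaced frame into its two 60 Hz fields so that each
# sub-image carries digits from a single pass of the capture.

def odd_rows(image):
    """Rows 1, 3, 5, ... in 1-based terms, i.e. every other row from the top."""
    return image[0::2]

def even_rows(image):
    """Rows 2, 4, 6, ... in 1-based terms."""
    return image[1::2]

img = [[10], [20], [30], [40]]
print(odd_rows(img))   # → [[10], [30]]
print(even_rows(img))  # → [[20], [40]]
```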
In order to recognize all digits in a certain frame, we propose an algorithm which uses template-matching theory to recognize one digit at a time and output an array of five elements representing the frame sequence (see Figures 7 and 8). Since the digits are not located in exactly the same position in each experiment (possibly because of JPEG artifacts or random errors in the image capture process), the algorithm first searches for the area in which each digit is located.

Figure 7. Template images used in the template-matching algorithm to recognize each digit of a frame number.

Figure 8. The digit recognition algorithm is used to recognize each digit of a frame number. First, the algorithm searches for the area in which each digit is located. The variable digit_sequence then specifies how many digits need to be recognized. Note that, for optimization, not all digits are recognized every time.
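The core template-matching decision of Figure 8 can be sketched as below. This is an illustrative Python sketch (the original system was in MATLAB): regions and templates are toy 2-D grayscale lists standing in for the images of Figure 7, and the candidate-region search is omitted.

```python
# Sketch: classify one digit region by correlating it against the ten digit
# templates and taking the digit whose template correlates best.
from math import sqrt

def corr2(a, b):
    """Pearson correlation of two equal-sized 2-D grayscale images."""
    xs = [p for row in a for p in row]
    ys = [p for row in b for p in row]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0

def recognize_digit(region, templates):
    """Return the digit 0-9 whose template correlates best with the region."""
    return max(range(10), key=lambda d: corr2(region, templates[d]))
```

In the full algorithm this comparison is additionally run over every candidate region, and the region/template pair with the highest correlation fixes both the digit's location and its identity.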

Our approach was to search for the area with the highest correlation with each of the digit template images. That is, the correlation between each area and the image templates is calculated, and the highest correlation with the digits is stored for each candidate region. Across all the areas, the one with the highest of these correlations is chosen as the correct digit. This is done for each of the five digits in order to find all the exact regions in each frame. This solution was very accurate in recognizing all five digits. However, the processing time turned out to be much longer than we expected, so to optimize the algorithm we made the program extract all five digits only in cases where it was needed.

As mentioned above, each frame contained an overlap of two sequence numbers. So if a frame was overlapped by the numbers and 00002, then the following frame will most likely be stamped with an overlap of and , unless one or more frames were lost during the frame capture process. Thus, we could use an approximation method to calculate the number on each frame. In other words, the program could be modified to extract the digits from one frame and then use an offset to determine the frame sequence number on any subsequent frame. However, we found that this approximation method might not always yield the exact number on the frame, due to frames dropped during capture. For example, sometimes the approximation was but the exact number was . The difference between these two numbers varies depending on how many frames were dropped during video capture; if 7 frames were dropped, the difference would be 14. To account for the loss of captured frames and still reduce the running time of the algorithm, we check the difference between the approximation and the recognition of the last two digits. If the error is less than α, the approximation is added to the recognition of the first frame in that trial.
If the error is greater than or equal to α, then we perform recognition of all five digits using the algorithm in Figure 8, where α is defined as the maximum number of frames lost in each trial. Assuming that the maximum number of lost frames is 25, α = 25 * 2 = 50 (each frame is overlapped by two numbers). If the difference between the approximation and the recognition of these two digits is less than 50, there is no chance for the other digits to differ. An example in which the difference could be greater than 50 is one where the approximation is 1196 and the recognition is 04: comparing only the last two digits gives 96 - 04 = 92, so recognition is performed on all five digits to obtain the exact number. With this modification, the digit recognition algorithm described in Figure 8 usually recognizes only two digits instead of five, which improves processing speed by up to 60%.

Data Alignment and Analysis

The next step after detecting the start and end frames of each trial is to mark the rows in the eye movement data file according to the trial number. After this step, the data are ready for analysis using any eye-tracking analysis program capable of handling text files as data input. An example of such analysis software is ILAB (Gitelman, 2002).
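The α-based shortcut from the digit recognition section can be sketched as follows. This is a hedged Python sketch of one plausible reading of the scheme (the original was in MATLAB): recognize is a hypothetical stand-in for the template-matching routine, called with the number of digits to recognize, and ALPHA = 50 comes from 25 maximum lost frames times 2 overlaid numbers per captured frame.

```python
# Sketch: trust the cheap frame-number approximation when its last two
# digits agree with the recognized last two digits to within ALPHA;
# otherwise fall back to full five-digit template matching.

ALPHA = 50  # 25 lost frames * 2 overlaid numbers per captured frame

def frame_number(approximation, recognize):
    """Return the overlay frame number, recognizing two digits when safe."""
    last_two = recognize(2)                    # recognize only last 2 digits
    if abs(approximation % 100 - last_two) < ALPHA:
        return approximation                   # approximation is trustworthy
    return recognize(5)                        # recognize all five digits
```

With the paper's example values, an approximation of 1196 against a recognized 04 gives an error of 92 ≥ 50, so the sketch falls back to full recognition.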

After obtaining the fixation output file, we use it to calculate the fixations on predefined areas of interest (AOIs). The idea is simple: detect the coordinates of each fixation, and then determine in which area of interest it falls. This is done per trial because we are interested in analyzing each trial separately.

Future Directions

In the system solution proposed, we used MATLAB, which is known to be simple and powerful, though relatively slow in comparison to other programming languages. Because of this, we believe that migrating from MATLAB to a faster programming language (such as C/C++) would be an important improvement and would save execution time. Another important future development would be to convert the software to a real-time application so that it detects the trials online. This could also lead to the development of software that controls the stimulus according to the eye movements of the subject.

References

Cohen, L. B., Atkinson, D. J., and Chaput, H. H. (2004). Habit X: A new program for obtaining and organizing data in infant perception and cognition studies (Version 1.0). Austin: University of Texas.

Gitelman, D. R. (2002). ILAB: A program for post experimental eye movement analysis. Behavior Research Methods, Instruments, and Computers, 34(4).

Gonzalez, R. C., and Woods, R. E. (2000). Digital Image Processing (2nd ed.). Prentice Hall.

Jacob, R. J. K. (1991). The use of eye movements in human computer interaction techniques: What you look at is what you get. ACM Transactions on Information Systems, 9(3).

Jacob, R. J. K. (1995). Eye tracking in advanced interface design. In Barfield, W., and Furness, T. (Eds.), Virtual Environments and Advanced Interface Design. Oxford: Oxford University Press.

Pelz, J. B., Canosa, R. L., Kucharczyk, D., Babcock, J., Silver, A., and Konno, D. (2000). Portable eyetracking: A study of natural eye movements. Proceedings of the SPIE, Human Vision and Electronic Imaging. San Jose, CA: SPIE.
Young, L., and Sheena, D. (1975). Survey of eye movement recording methods. Behavior Research Methods and Instrumentation, 7.


More information

SNC-VL10P Video Network Camera

SNC-VL10P Video Network Camera SNC-VL10P Video Network Camera CHANGING THE WAY BUSINESS 2AM. WATCHING HIS NEW PRODUCTION LINE. 10,000 MILES AWAY. COMMUNICATES www.sonybiz.net/netstation CORPORATE COMMUNICATIONS SURVEILLANCE VIDEOCONFERENCING

More information

Comp 410/510. Computer Graphics Spring 2016. Introduction to Graphics Systems

Comp 410/510. Computer Graphics Spring 2016. Introduction to Graphics Systems Comp 410/510 Computer Graphics Spring 2016 Introduction to Graphics Systems Computer Graphics Computer graphics deals with all aspects of creating images with a computer Hardware (PC with graphics card)

More information

Research-Grade Research-Grade. Capture

Research-Grade Research-Grade. Capture Research-Grade Research-Grade Motion Motion Capture Capture The System of Choice For Resear systems have earned the reputation as the gold standard for motion capture among research scientists. With unparalleled

More information

Introduction to 3D Imaging

Introduction to 3D Imaging Chapter 5 Introduction to 3D Imaging 5.1 3D Basics We all remember pairs of cardboard glasses with blue and red plastic lenses used to watch a horror movie. This is what most people still think of when

More information

HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT

HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT International Journal of Scientific and Research Publications, Volume 2, Issue 4, April 2012 1 HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT Akhil Gupta, Akash Rathi, Dr. Y. Radhika

More information

How an electronic shutter works in a CMOS camera. First, let s review how shutters work in film cameras.

How an electronic shutter works in a CMOS camera. First, let s review how shutters work in film cameras. How an electronic shutter works in a CMOS camera I have been asked many times how an electronic shutter works in a CMOS camera and how it affects the camera s performance. Here s a description of the way

More information

Research of User Experience Centric Network in MBB Era ---Service Waiting Time

Research of User Experience Centric Network in MBB Era ---Service Waiting Time Research of User Experience Centric Network in MBB ---Service Waiting Time Content Content 1 Introduction... 1 1.1 Key Findings... 1 1.2 Research Methodology... 2 2 Explorer Excellent MBB Network Experience

More information

UvA facelab 5 extension for Presentation

UvA facelab 5 extension for Presentation UvA facelab 5 extension for Presentation a Presentation IEyeTracker2 interface Filename : UvA EyeTracker Author : B. Molenkamp 1/7 UvA facelab IEyeTracker2 interface for Presentation Installation of the

More information

Introduction to Digital Video

Introduction to Digital Video Introduction to Digital Video Significance of the topic With the increasing accessibility of technology for everyday people, things are starting to get digitalized: digital camera, digital cable, digital

More information

RIVA Megapixel cameras with integrated 3D Video Analytics - The next generation

RIVA Megapixel cameras with integrated 3D Video Analytics - The next generation RIVA Megapixel cameras with integrated 3D Video Analytics - The next generation Creating intelligent solutions with Video Analytics (VCA- Video Content Analysis) Intelligent IP video surveillance is one

More information

Frequently Asked Questions About VisionGauge OnLine

Frequently Asked Questions About VisionGauge OnLine Frequently Asked Questions About VisionGauge OnLine The following frequently asked questions address the most common issues and inquiries about VisionGauge OnLine: 1. What is VisionGauge OnLine? VisionGauge

More information

ACE: After Effects CC

ACE: After Effects CC Adobe Training Services Exam Guide ACE: After Effects CC Adobe Training Services provides this exam guide to help prepare partners, customers, and consultants who are actively seeking accreditation as

More information

Clearing the Way for VoIP

Clearing the Way for VoIP Gen2 Ventures White Paper Clearing the Way for VoIP An Alternative to Expensive WAN Upgrades Executive Overview Enterprises have traditionally maintained separate networks for their voice and data traffic.

More information

A System for Capturing High Resolution Images

A System for Capturing High Resolution Images A System for Capturing High Resolution Images G.Voyatzis, G.Angelopoulos, A.Bors and I.Pitas Department of Informatics University of Thessaloniki BOX 451, 54006 Thessaloniki GREECE e-mail: [email protected]

More information

Eye tracking in usability research: What users really see

Eye tracking in usability research: What users really see Printed in: Empowering Software Quality: How Can Usability Engineering Reach These Goals? Usability Symposium 2005: pp 141-152, OCG publication vol. 198. Eye tracking in usability research: What users

More information

ARTICLE. 10 reasons to switch to IP-based video

ARTICLE. 10 reasons to switch to IP-based video ARTICLE 10 reasons to switch to IP-based video Table of contents 1. High resolution 3 2. Easy to install 4 3. Truly digital 5 4. Camera intelligence 5 5. Fully integrated 7 6. Built-in security 7 7. Crystal-clear

More information

The Professional Software for Observational Research Relaxed Research - Significant Results

The Professional Software for Observational Research Relaxed Research - Significant Results The Professional Software for Observational Research Relaxed Research - Significant Results Get extensive results at your fingertips with INTERACT Are You ready for the next Great Discovery? Professional

More information

TRACKING DRIVER EYE MOVEMENTS AT PERMISSIVE LEFT-TURNS

TRACKING DRIVER EYE MOVEMENTS AT PERMISSIVE LEFT-TURNS TRACKING DRIVER EYE MOVEMENTS AT PERMISSIVE LEFT-TURNS Michael A. Knodler Jr. Department of Civil & Environmental Engineering University of Massachusetts Amherst Amherst, Massachusetts, USA E-mail: [email protected]

More information

User Manual. Tobii Eye Tracker ClearView analysis software. February 2006 Copyright Tobii Technology AB

User Manual. Tobii Eye Tracker ClearView analysis software. February 2006 Copyright Tobii Technology AB Web: www.tobii.com E-mail: [email protected] Phone: +46 (0)8 663 6003 User Manual Tobii Eye Tracker ClearView analysis software February 2006 Copyright Tobii Technology AB Contents User Manual 1 Tobii

More information

Cheap and easy PIN entering using eye gaze

Cheap and easy PIN entering using eye gaze Cheap and easy PIN entering using eye gaze Pawel Kasprowski, Katarzyna Harężlak Institute of Informatics Silesian University of Technology Gliwice, Poland {pawel.kasprowski,katarzyna.harezlak}@polsl.pl

More information

Arrayoptik für ultraflache Datensichtbrillen

Arrayoptik für ultraflache Datensichtbrillen Arrayoptik für ultraflache Datensichtbrillen Peter Schreiber, Fraunhofer Institut für Angewandte Optik und Feinmechanik IOF, Jena 1. Introduction 2. Conventional near-to-eye (N2E) projection optics - first

More information

4.2: Multimedia File Systems Traditional File Systems. Multimedia File Systems. Multimedia File Systems. Disk Scheduling

4.2: Multimedia File Systems Traditional File Systems. Multimedia File Systems. Multimedia File Systems. Disk Scheduling Chapter 2: Representation of Multimedia Data Chapter 3: Multimedia Systems Communication Aspects and Services Chapter 4: Multimedia Systems Storage Aspects Optical Storage Media Multimedia File Systems

More information

Glossary How to Support Institutionalization of a Mature UX Practice

Glossary How to Support Institutionalization of a Mature UX Practice Glossary How to Support Institutionalization of a Mature UX Practice of a Mature UX Practice Table of Contents A... 3 B... 3 C... 3 D... 4 E... 4 F... 4 G... 5 H... 5 I... 5 M... 6 0... 6 P... 6 R... 7

More information

Artificial Neural Network for Speech Recognition

Artificial Neural Network for Speech Recognition Artificial Neural Network for Speech Recognition Austin Marshall March 3, 2005 2nd Annual Student Research Showcase Overview Presenting an Artificial Neural Network to recognize and classify speech Spoken

More information

Data Analysis Methods: Net Station 4.1 By Peter Molfese

Data Analysis Methods: Net Station 4.1 By Peter Molfese Data Analysis Methods: Net Station 4.1 By Peter Molfese Preparing Data for Statistics (preprocessing): 1. Rename your files to correct any typos or formatting issues. a. The General format for naming files

More information

Introduction to Computer Graphics

Introduction to Computer Graphics Introduction to Computer Graphics Torsten Möller TASC 8021 778-782-2215 [email protected] www.cs.sfu.ca/~torsten Today What is computer graphics? Contents of this course Syllabus Overview of course topics

More information

Enhancing the SNR of the Fiber Optic Rotation Sensor using the LMS Algorithm

Enhancing the SNR of the Fiber Optic Rotation Sensor using the LMS Algorithm 1 Enhancing the SNR of the Fiber Optic Rotation Sensor using the LMS Algorithm Hani Mehrpouyan, Student Member, IEEE, Department of Electrical and Computer Engineering Queen s University, Kingston, Ontario,

More information

Voice Driven Animation System

Voice Driven Animation System Voice Driven Animation System Zhijin Wang Department of Computer Science University of British Columbia Abstract The goal of this term project is to develop a voice driven animation system that could take

More information

Data Management, Analysis Tools, and Analysis Mechanics

Data Management, Analysis Tools, and Analysis Mechanics Chapter 2 Data Management, Analysis Tools, and Analysis Mechanics This chapter explores different tools and techniques for handling data for research purposes. This chapter assumes that a research problem

More information

Tobii X2 Eye Trackers

Tobii X2 Eye Trackers Tobii X2 Eye Trackers Tobii X2 Eye Trackers The world s smallest, most versatile eye tracking system Just done a second session using Tobii X2 Eye Tracker, which was awesome. LOVE IT. Oliver Bradley, Global

More information

Release Notes. Tobii Studio 1.7 release. What s New? What has improved? Important Notice

Release Notes. Tobii Studio 1.7 release. What s New? What has improved? Important Notice W Release Notes Tobii Studio Analysis Software Tobii Studio 1.7 release This release comes with important improvements that will enhance the performance and functionality of your Tobii Studio analysis

More information

Building an Advanced Invariant Real-Time Human Tracking System

Building an Advanced Invariant Real-Time Human Tracking System UDC 004.41 Building an Advanced Invariant Real-Time Human Tracking System Fayez Idris 1, Mazen Abu_Zaher 2, Rashad J. Rasras 3, and Ibrahiem M. M. El Emary 4 1 School of Informatics and Computing, German-Jordanian

More information

Understanding Megapixel Camera Technology for Network Video Surveillance Systems. Glenn Adair

Understanding Megapixel Camera Technology for Network Video Surveillance Systems. Glenn Adair Understanding Megapixel Camera Technology for Network Video Surveillance Systems Glenn Adair Introduction (1) 3 MP Camera Covers an Area 9X as Large as (1) VGA Camera Megapixel = Reduce Cameras 3 Mega

More information

Poker Vision: Playing Cards and Chips Identification based on Image Processing

Poker Vision: Playing Cards and Chips Identification based on Image Processing Poker Vision: Playing Cards and Chips Identification based on Image Processing Paulo Martins 1, Luís Paulo Reis 2, and Luís Teófilo 2 1 DEEC Electrical Engineering Department 2 LIACC Artificial Intelligence

More information

History of eye-tracking in psychological research

History of eye-tracking in psychological research History of eye-tracking in psychological research In the 1950s, Alfred Yarbus showed the task given to a subject has a very large influence on the subjects eye movements. Yarbus also wrote about the relation

More information

ivms-4200 Client Software Technical Specification v1.02

ivms-4200 Client Software Technical Specification v1.02 ivms-4200 Client Software Technical Specification v1.02 Introduction ivms-4200 Client Software is a centralized video management software using a distributed structure for surveillance device control and

More information

Understanding The Face Image Format Standards

Understanding The Face Image Format Standards Understanding The Face Image Format Standards Paul Griffin, Ph.D. Chief Technology Officer Identix April 2005 Topics The Face Image Standard The Record Format Frontal Face Images Face Images and Compression

More information

To determine vertical angular frequency, we need to express vertical viewing angle in terms of and. 2tan. (degree). (1 pt)

To determine vertical angular frequency, we need to express vertical viewing angle in terms of and. 2tan. (degree). (1 pt) Polytechnic University, Dept. Electrical and Computer Engineering EL6123 --- Video Processing, S12 (Prof. Yao Wang) Solution to Midterm Exam Closed Book, 1 sheet of notes (double sided) allowed 1. (5 pt)

More information

Applied Science Laboratories. Model 504. Eye Tracker and Gaze Tracker. System Setup and Operations Manual

Applied Science Laboratories. Model 504. Eye Tracker and Gaze Tracker. System Setup and Operations Manual Applied Science Laboratories Model 504 Eye Tracker and Gaze Tracker System Setup and Operations Manual Original C. 2001 by Applied Science Group, Manual version 2.4 Revisions C. 2006 by Michael Wogan,

More information

DELL. Virtual Desktop Infrastructure Study END-TO-END COMPUTING. Dell Enterprise Solutions Engineering

DELL. Virtual Desktop Infrastructure Study END-TO-END COMPUTING. Dell Enterprise Solutions Engineering DELL Virtual Desktop Infrastructure Study END-TO-END COMPUTING Dell Enterprise Solutions Engineering 1 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL

More information

Tobii 50 Series. Product Description. Tobii 1750 Eye Tracker Tobii 2150 Eye Tracker Tobii x50 Eye Tracker

Tobii 50 Series. Product Description. Tobii 1750 Eye Tracker Tobii 2150 Eye Tracker Tobii x50 Eye Tracker Product Description Tobii 50 Series Tobii 1750 Eye Tracker Tobii 2150 Eye Tracker Tobii x50 Eye Tracker Web: www.tobii.com E-mail: [email protected] Phone: +46 (0)8 663 69 90 Copyright Tobii Technology AB,

More information

Interactive person re-identification in TV series

Interactive person re-identification in TV series Interactive person re-identification in TV series Mika Fischer Hazım Kemal Ekenel Rainer Stiefelhagen CV:HCI lab, Karlsruhe Institute of Technology Adenauerring 2, 76131 Karlsruhe, Germany E-mail: {mika.fischer,ekenel,rainer.stiefelhagen}@kit.edu

More information

How To Use Trackeye

How To Use Trackeye Product information Image Systems AB Main office: Ågatan 40, SE-582 22 Linköping Phone +46 13 200 100, fax +46 13 200 150 [email protected], Introduction TrackEye is the world leading system for motion

More information

CHAPTER 2: USING THE CAMERA WITH THE APP

CHAPTER 2: USING THE CAMERA WITH THE APP TABLE OF CONTENTS OVERVIEW... 1 Front of your camera... 1 Back of your camera... 2 ACCESSORIES... 3 CHAPTER 1: Navigating the Mobile Application... 4 Device List: How to Use this Page... 4 My Messages:

More information

NetClient software user manual

NetClient software user manual NetClient software user manual 1-1. General information Net Client is an application which provides users not only viewing and controling remote DVRs, but also receiving realtime event data or alarm signals

More information

A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA

A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA N. Zarrinpanjeh a, F. Dadrassjavan b, H. Fattahi c * a Islamic Azad University of Qazvin - [email protected]

More information

Thirukkural - A Text-to-Speech Synthesis System

Thirukkural - A Text-to-Speech Synthesis System Thirukkural - A Text-to-Speech Synthesis System G. L. Jayavardhana Rama, A. G. Ramakrishnan, M Vijay Venkatesh, R. Murali Shankar Department of Electrical Engg, Indian Institute of Science, Bangalore 560012,

More information

MMGD0203 Multimedia Design MMGD0203 MULTIMEDIA DESIGN. Chapter 3 Graphics and Animations

MMGD0203 Multimedia Design MMGD0203 MULTIMEDIA DESIGN. Chapter 3 Graphics and Animations MMGD0203 MULTIMEDIA DESIGN Chapter 3 Graphics and Animations 1 Topics: Definition of Graphics Why use Graphics? Graphics Categories Graphics Qualities File Formats Types of Graphics Graphic File Size Introduction

More information

KODAK KEYKODE NUMBERS TECHNOLOGY

KODAK KEYKODE NUMBERS TECHNOLOGY KODAK KEYKODE NUMBERS TECHNOLOGY Consider the amount of film that needs to be managed in a typical feature film. At 24 frames a second, 90 feet of film are running through the camera every minute. That

More information

Chapter I Model801, Model802 Functions and Features

Chapter I Model801, Model802 Functions and Features Chapter I Model801, Model802 Functions and Features 1. Completely Compatible with the Seventh Generation Control System The eighth generation is developed based on the seventh. Compared with the seventh,

More information

3D U ser I t er aces and Augmented Reality

3D U ser I t er aces and Augmented Reality 3D User Interfaces and Augmented Reality Applications Mechanical CAD 3D Animation Virtual Environments Scientific Visualization Mechanical CAD Component design Assembly testingti Mechanical properties

More information

ADVANCES IN AUTOMATIC OPTICAL INSPECTION: GRAY SCALE CORRELATION vs. VECTORAL IMAGING

ADVANCES IN AUTOMATIC OPTICAL INSPECTION: GRAY SCALE CORRELATION vs. VECTORAL IMAGING ADVANCES IN AUTOMATIC OPTICAL INSPECTION: GRAY SCALE CORRELATION vs. VECTORAL IMAGING Vectoral Imaging, SPC & Closed Loop Communication: The Zero Defect SMD Assembly Line Mark J. Norris Vision Inspection

More information

PRIMING OF POP-OUT AND CONSCIOUS PERCEPTION

PRIMING OF POP-OUT AND CONSCIOUS PERCEPTION PRIMING OF POP-OUT AND CONSCIOUS PERCEPTION Peremen Ziv and Lamy Dominique Department of Psychology, Tel-Aviv University [email protected] [email protected] Abstract Research has demonstrated

More information

CSE 237A Final Project Final Report

CSE 237A Final Project Final Report CSE 237A Final Project Final Report Multi-way video conferencing system over 802.11 wireless network Motivation Yanhua Mao and Shan Yan The latest technology trends in personal mobile computing are towards

More information

MH - Gesellschaft für Hardware/Software mbh

MH - Gesellschaft für Hardware/Software mbh E.d.a.s.VX Data acquisition on board road and track vehicles The E.d.a.s.VX System is designed for portable applications running on 12 Volts DC, and is capable of measuring at selectable rates up to 30,000,000

More information

Eye-Tracking with Webcam-Based Setups: Implementation of a Real-Time System and an Analysis of Factors Affecting Performance

Eye-Tracking with Webcam-Based Setups: Implementation of a Real-Time System and an Analysis of Factors Affecting Performance Universitat Autònoma de Barcelona Master in Computer Vision and Artificial Intelligence Report of the Master Project Option: Computer Vision Eye-Tracking with Webcam-Based Setups: Implementation of a Real-Time

More information

Towards Inferring Web Page Relevance An Eye-Tracking Study

Towards Inferring Web Page Relevance An Eye-Tracking Study Towards Inferring Web Page Relevance An Eye-Tracking Study 1, [email protected] Yinglong Zhang 1, [email protected] 1 The University of Texas at Austin Abstract We present initial results from a project,

More information

FSI Machine Vision Training Programs

FSI Machine Vision Training Programs FSI Machine Vision Training Programs Table of Contents Introduction to Machine Vision (Course # MVC-101) Machine Vision and NeuroCheck overview (Seminar # MVC-102) Machine Vision, EyeVision and EyeSpector

More information