Simulation of Mouse Using Human Face (HCI)


Snehal Dongre (1), Sachine Patil (2)
(1) Student of M.E. (C.E.), G.H. Raisoni College of Engineering & Management, Ahmednagar, Pune, India
(2) Asst. Prof., Dept. of I.T., G.H. Raisoni College of Engineering & Management, Wagholi, Pune, India

ABSTRACT: This project presents an application capable of replacing the mouse with the human face for interaction with a PC. Facial features (the eyes and the nose tip) are detected and tracked, and their movements are translated into mouse events: the coordinates of the nose tip in the video feed are mapped to the coordinates of the mouse pointer on the screen, and right/left eye blinks trigger right/left click events. The only external device is a webcam that supplies the video stream.

KEYWORDS: Cursor control system, Eye detection, Eye movement, Face detection, Human-Computer interaction, Support Vector Machine (SVM).

I. INTRODUCTION

In the past few years computing has become more powerful and less costly. With high-performance processors and inexpensive webcams, people are building real-time applications based on image processing, and interfaces driven by artificial intelligence are among them. Our work aims to let the human face interact with the machine: the system captures facial features through a webcam and interprets their movements as events communicated to the machine. It is intended to assist people with hand disabilities, sparing them the need to operate a physical mouse.

The nose tip was selected as the pointing device because of its location and shape. It lies in the middle of the face, which makes it the most comfortable feature for moving the mouse pointer and defining its coordinates; it sits on the axis about which the face turns; and its convex shape makes it easier to track as the face moves. The eyes were used to simulate clicks: the user fires click events by blinking. While other systems have relied on special devices (e.g. infrared cameras, sensors, microphones), we used an off-the-shelf webcam with moderate resolution and frame rate as the capturing device, to keep the program affordable for all users. We present an algorithm that detects and tracks the desired facial features and distinguishes deliberate eye blinks from involuntary ones.

II. RELATED WORK

With the growing attention to machine vision, interest in systems of this kind has increased proportionally. As mentioned above, many human features and monitoring devices have been used for this task, but in our survey we considered only work that relies on facial features and webcams. We found a large diversity in the facial features that were selected, in the way they were detected and tracked, and in the functionality they provided. Researchers chose different facial features (eye pupils, eyebrows, nose tip, lips, eyelid corners, mouth corners), each with a justification for that particular choice. Different detection techniques were applied (e.g. feature-based and image-based), with the goal of achieving more accurate results in less processing time. To control the mouse pointer, various points were tracked, ranging from the midpoint between the eyes or between the eyebrows to the nose tip. To simulate mouse clicks, eye blinks, mouth opening/closing, and sometimes eyebrow movement were used.
Each method we read about had some drawbacks: some used expensive equipment, some were not fast enough for real-time execution, and others were not robust and precise enough to replace the mouse. We tried to profit from the experience other researchers have gained in this field and added our own ideas, in order to produce an application that is fast, robust, and usable.

III. ALGORITHM

A. Design Considerations: Face Detection Algorithm Overview

Face detection methods fall into two main categories:
1. Feature-based methods, which first look for individual facial features (e.g. eyebrows, lips, eye pupils).
2. Image-based methods, which scan the image of interest with a window that looks for faces at all scales and locations.

B. Description of the Algorithm

Figure 1: Face detection general steps.

Our system assumes that people of different races share the same skin colour and differ only in the lighting intensity of that colour (e.g. in the HSB colour model, people have close H and S values but different B values). To build a suitable skin colour model based on this idea, we use the pure r and g values, i.e. the R and G values of the RGB colour model with brightness removed, calculated as:

r = R / (R + G + B) ... (1)
g = G / (R + G + B) ... (2)

We then map the r and g values into the (a, b) colour space:

a = r + g / 2 ... (3)
b = (√3 / 2) g ... (4)

The range of a is from 0 to 1, while the range of b is from 0 to √3/2 [12]. Plotting the a and b values of a large set of skin samples (see Section V) yields the following thresholds for considering a pixel a skin pixel:

0.49 < a < 0.59 ... (5)
0.24 < b < 0.29 ... (6)
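The skin test above is mechanical enough to state precisely. The following is a minimal sketch of the pixel classification implied by equations (1)-(6); the function and variable names are ours, not the paper's, and NumPy is assumed for the whole-frame variant.

```python
import numpy as np

def is_skin_pixel(R: int, G: int, B: int) -> bool:
    """Classify one 8-bit RGB pixel with thresholds (5) and (6)."""
    total = R + G + B
    if total == 0:
        return False                      # pure black has no chromaticity
    r = R / total                         # eq. (1)
    g = G / total                         # eq. (2)
    a = r + g / 2.0                       # eq. (3)
    b = (np.sqrt(3) / 2.0) * g            # eq. (4)
    return 0.49 < a < 0.59 and 0.24 < b < 0.29   # eqs. (5), (6)

def skin_mask(frame: np.ndarray) -> np.ndarray:
    """Vectorised variant: frame is H x W x 3 in RGB order."""
    f = frame.astype(np.float64)
    total = f.sum(axis=2)
    total[total == 0] = 1.0               # avoid division by zero
    r = f[..., 0] / total
    g = f[..., 1] / total
    a = r + g / 2.0
    b = (np.sqrt(3) / 2.0) * g
    return (0.49 < a) & (a < 0.59) & (0.24 < b) & (b < 0.29)
```

Note that OpenCV delivers frames in BGR order, so the channel indices would need swapping before applying skin_mask to a captured frame.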

We use a feature-based face detection method to reduce the area in which we look for the face, and thereby decrease execution time. To find face candidates, the Six-Segmented Rectangular (SSR) filter is used in the following way. First we calculate the integral image in one pass over the video frame, using the equations [3]:

s(x, y) = s(x, y-1) + i(x, y) ... (7)
ii(x, y) = ii(x-1, y) + s(x, y) ... (8)

where s(x, y) is the cumulative row sum, s(x, -1) = 0, and ii(-1, y) = 0. Figure 2 shows an ideal location of the SSR filter, where its centre is considered a face candidate.

Figure 2: Ideal SSR filter location for a face candidate. (x, y) is the location of the filter (its upper-left corner); the plus sign marks the centre of the filter, which is the face candidate.

In this ideal position the eyes fall in sectors S1 and S3, while the nose falls in sector S5. Since the eyes and eyebrows are darker than the between-the-eyes (BTE) region and the cheekbones, we deduce that:

S1 < S2 && S2 > S3 ... (9)
S1 < S4 && S3 < S6 ... (10)

Neighbouring face candidates are grouped into clusters, and a threshold relative to the size of the currently used SSR filter eliminates clusters that are too small. The centre of each sufficiently large cluster is computed as:

x = [Σ x(i)] / n ... (11)
y = [Σ y(i)] / n ... (12)

where i ranges over the pixels of the cluster and n is the cluster's area.
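As a concrete illustration, here is a small sketch of the integral image of equations (7)-(8) and the SSR candidate test of equations (9)-(10). The 3 x 2 sector layout follows Figure 2; the helper names are ours, and real use would scan the filter over many positions and scales.

```python
import numpy as np

def integral_image(gray: np.ndarray) -> np.ndarray:
    # Two cumulative sums implement the one-pass recurrences (7) and (8).
    return gray.astype(np.float64).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii: np.ndarray, x: int, y: int, w: int, h: int) -> float:
    # Sum over the w x h rectangle at upper-left (x, y), via four lookups.
    A = ii[y - 1, x - 1] if x > 0 and y > 0 else 0.0
    B = ii[y - 1, x + w - 1] if y > 0 else 0.0
    C = ii[y + h - 1, x - 1] if x > 0 else 0.0
    D = ii[y + h - 1, x + w - 1]
    return D - B - C + A

def is_face_candidate(ii: np.ndarray, x: int, y: int, w: int, h: int) -> bool:
    # Split the window into six sectors: S1 S2 S3 on top, S4 S5 S6 below.
    cw, rh = w // 3, h // 2
    S = [rect_sum(ii, x + c * cw, y + r * rh, cw, rh)
         for r in range(2) for c in range(3)]
    S1, S2, S3, S4, _, S6 = S
    # Eyes/eyebrows (S1, S3) darker than BTE (S2) and cheekbones (S4, S6):
    return S1 < S2 and S2 > S3 and S1 < S4 and S3 < S6   # eqs. (9), (10)

def cluster_center(pixels):
    # Centre of a candidate cluster, eqs. (11) and (12).
    xs, ys = zip(*pixels)
    n = len(pixels)
    return sum(xs) / n, sum(ys) / n
```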

Find Nose Tip: Now that we have located the eyes, the final step is to find the nose tip.

Figure 3: A human face with lines that define the facial feature proportions.

From figure 3 we can see that the blue lines define a square whose corners are the pupils and the corners of the mouth. The nose tip should fall inside this square, so it becomes our region of interest (ROI) for finding the nose tip (see fig. 4). The first step is therefore to extract the ROI; if the face is rotated, we rotate the ROI back so that the eyes are horizontally aligned.

Figure 4: The square that forms the ROI, and the ROI after extraction.

Using this idea, we locate the nose tip with intensity profiles. For the horizontal intensity profile we add to each line, vertically, the values of the lines that precede it in the ROI (see fig. 7). Since the nose bridge is brighter than the surrounding features, the values accumulate fastest at the bridge location; in other words, the maximum of the horizontal profile gives us the x coordinate of the nose tip.

IV. PSEUDO CODE

Step 1: Place the camera in front of the human face.
Step 2: Run face detection.
Step 3: Look for a face candidate. If one is found, go to Step 4; otherwise retry face detection.
Step 4: Find the eyes and the nose of the candidate.
Step 5: Carry out the processing of Steps 6-11.
Step 6: Connect the webcam, which delivers RGB output, to the machine.
Step 7: Enumerate all devices that deliver RGB output and select the first one.
Step 8: Point the camera at the face; once it is recognised as a human face, processing is enabled.
Step 9: Extract the regions of interest, i.e. the eye pupils and the nose tip.
Step 10: Process in real time, continuously fetching frames from the webcam.
Step 11: Move the mouse pointer according to the regions of interest on the face.
Step 12: End.
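The loop below is a hedged end-to-end sketch of these steps. It is not the paper's implementation: OpenCV's stock Haar cascade stands in for the skin-colour/SSR detector of Section III, track_nose_tip is a hypothetical placeholder for the intensity-profile tracker, and pyautogui (our assumption, not mentioned in the paper) moves the system cursor.

```python
import cv2
import pyautogui

def track_nose_tip(face):
    # Hypothetical placeholder: use the face-box centre instead of the
    # intensity-profile nose-tip localisation of Section V.
    x, y, w, h = face
    return x + w // 2, y + h // 2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)                       # Steps 1, 6-7: first webcam
screen_w, screen_h = pyautogui.size()

while True:                                     # Step 10: continuous frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.3, 5)      # Steps 2-3, 8
    if len(faces) > 0:                          # Step 3: retry if no face
        nx, ny = track_nose_tip(faces[0])       # Steps 4, 9
        h, w = gray.shape
        # Step 11: map the nose-tip position to screen coordinates,
        # mirroring x so the cursor follows head motion naturally.
        pyautogui.moveTo(int((1 - nx / w) * screen_w),
                         int(ny / h * screen_h))
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == 27:             # Esc = Step 12: end
        break
cap.release()
cv2.destroyAllWindows()
```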

V. SIMULATION RESULTS

To digitise the (a, b) space we round values to multiples of 0.01, giving 101 possible values for a and 88 for b. To build the skin colour model we extracted 735 skin pixel samples from each of 771 face images taken from [6, 7], for a total of 566,685 samples. For each sample we calculated the a and b values and incremented the counter for that value (a has 101 counters, b has 88). The most frequent values are taken to be skin pixel values, so after plotting the a and b counters (see figs. 5 and 6) we deduced thresholds (5) and (6) of Section III for considering a pixel a skin pixel.

Figure 5: A plot of the a counters over 566,685 skin samples.
Figure 6: A plot of the b counters over 566,685 skin samples.

For the vertical intensity profile we add to each column, horizontally, the values of the columns that precede it in the ROI (see fig. 8); as with the horizontal profile, the values accumulate fastest at the nose tip position, so the maximum gives us the y coordinate of the nose tip. From the horizontal and vertical profiles together we were able to locate the nose tip position, but unfortunately this method alone did not give accurate results, because a profile may contain several near-maximal values close to each other, and choosing the one that really marks the nose tip is difficult.

Each nose-bridge point (NBP) adds, through its S2 sector, a certain amount to the accumulated sum in the horizontal profile, but the NBP at the nostrils adds a smaller amount (the S2 sector of the NBP at the nostrils is darker than the S2 sectors of the other NBPs). So if we compute the first derivative of the NBPs' S2 sector values (i.e. of the maximum of the horizontal profile at each ROI line), we observe a local minimum at the nostril location (see fig. 9); we take the NBP corresponding to this local minimum as the nostril location, and then look for the nose tip above the nostrils. Since the nose tip is brighter than the other features, its S2 sector contributes more to the accumulated sum than those of other NBPs, which produces a local maximum in the first derivative (see fig. 9); the nose tip is therefore the NBP corresponding to the local maximum that lies above the local minimum.

Figure 7: The horizontal profile of the ROI from figure 4.
Figure 8: The vertical profile of the ROI from figure 4.
Figure 9: First derivative of the values of the NBPs' S2 sectors.

In tracking mode the results were very robust when the frame rate was 20 fps or higher: the user can move very fast without the program losing the facial features. Glasses that reflect light, however, cause bright spots that sometimes force the program to lose track of the eyes. For detection and tracking to be accurate and robust, the lighting must be frontal so that it spreads evenly over the face; side light causes false face detections and eventually disturbs the tracking process.
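A minimal NumPy sketch of the profile-based localisation follows, under two simplifications we are making ourselves: the ROI is assumed to be grayscale and already rotated so the eyes are horizontal, and the per-row S2 sector values of the NBPs are approximated by the per-row brightness maxima of the ROI rather than by re-running the SSR filter.

```python
import numpy as np

def nose_tip_from_profiles(roi: np.ndarray):
    """Coarse estimate: maxima of the horizontal and vertical profiles."""
    roi = roi.astype(np.float64)
    # Accumulating rows (resp. columns) makes the bright nose bridge grow
    # fastest, so the final accumulated sums peak at the bridge position.
    x = int(np.argmax(roi.sum(axis=0)))   # horizontal profile -> x
    y = int(np.argmax(roi.sum(axis=1)))   # vertical profile   -> y
    return x, y

def nose_tip_row_from_derivative(roi: np.ndarray) -> int:
    """Refinement via the first derivative of per-row brightness maxima."""
    row_max = roi.astype(np.float64).max(axis=1)  # stand-in for S2 values
    d = np.diff(row_max)                          # first derivative
    nostril_row = int(np.argmin(d))               # minimum: nostril row
    if nostril_row == 0:
        return 0
    return int(np.argmax(d[:nostril_row]))        # maximum above: nose tip
```

In practice one would search for a local minimum and local maximum rather than the global extrema used here; the global ones merely illustrate the idea on a clean ROI.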

VI. CONCLUSION AND FUTURE WORK

The results are very promising. The following conclusions were drawn from the experiments that were carried out. In detection mode, the eyes and nose tip were located accurately when the following conditions were fulfilled:
1. The face is not rotated more than 5° around the axis that passes through the nose tip (so that the eyes still fall in sectors S1 and S3 of the SSR filter).
2. The face is not rotated more than 30° around the axis that passes through the neck (towards a profile view).
3. Wearing glasses does not affect the detection process.
4. As for scale, it is best to sit about 35 cm from the camera; when the face is farther from the screen, the program may report false positives, especially against a cluttered background.

Future work may include improving tracking robustness under difficult lighting conditions, perhaps by using more sophisticated (and more expensive) capturing devices such as infrared cameras, which operate in the absence of light and give more accurate tracking results; adding a double left click (detected as a double left-eye blink) and a drag mode (enabled/disabled with a double right-eye blink); and adding voice commands to launch the program, start the detection process, and enable/disable controlling the mouse with the face.

REFERENCES

1. M. Mangaiyarkarasi and A. Geetha, "Cursor control system using facial expressions for human-computer interactions", International Journal of Emerging Technology in Computer Science & Electronics (IJETCSE), ISSN 0976-1353, Vol. 8, Issue 1, April 2014.
2. Shrunkhala Satish Wankhede, S. A. Chhabria and R. V. Dharaskar, "Controlling mouse cursor using eye movements", International Journal of Application or Innovation in Engineering & Management (IJAIEM), 2013.
3. Tarun Dhar Diwan and Rajesh Tiwari, "Automatic eye blink tracking and detection in an image sequence", International Journal of Computer Science and Information Technologies (IJCSIT), Vol. 2, No. 5, 2011.
4. Akhil Gupta, Akash Rathi and Y. Radhika, "Hands-free PC control: controlling of mouse cursor using eye movements", International Journal of Scientific and Research Publications, Vol. 2, Issue 4, April 2012.
5. V. Pradeep, "A simple algorithm for using face as a pointing device using OpenCV", International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 2, Issue 2, February 2012.
6. Erik Hjelmås and Boon Kee Low, "Face detection: a survey", Computer Vision and Image Understanding, Vol. 83, pp. 236-274, 2001.

BIOGRAPHY

Snehal Dongre is a student in the Computer Engineering Department, G. H. Raisoni College of Engineering and Management, Chas, Ahmednagar, Pune University. She received the B.E. degree in 2012 from Nagpur, MH, India. Her research interests include image processing, HCI, and algorithms.