A NATURAL HAND GESTURE HUMAN COMPUTER INTERFACE USING CONTOUR SIGNATURES

Paulo Peixoto
ISR - Institute of Systems and Robotics
Department of Electrical and Computer Engineering
University of Coimbra, Polo II - Pinhal de Marrocos, 3030 Coimbra
email: peixoto@isr.uc.pt

Joao Carreira
ISR - Institute of Systems and Robotics
Department of Electrical and Computer Engineering
University of Coimbra, Polo II - Pinhal de Marrocos, 3030 Coimbra
email: joaoluis@isr.uc.pt

ABSTRACT
Communication between humans and computers could benefit greatly from the introduction of more natural, non-intrusive forms of communication. As humans, we frequently use gestures in our daily routines, so the idea of employing natural hand gestures to control the way our computers work is very attractive. In this paper we propose a novel human computer interface based on hand gesture recognition using computer vision. The proposed method is not only concerned with replacing the mouse by using the hand as another pointing device, but also with adding a richer gesture vocabulary, allowing several frequent actions to be understood by the computer (hand gestures for dragging, closing a window, or simulating a simple mouse click). The proposed method also deals with two of the major difficulties associated with any gesture recognition task: how to determine the beginning and end of a gesture during a continuous hand trajectory, and how to deal with the spatial-temporal variations users produce when performing the same gesture each time.

KEY WORDS
Hand Gesture Recognition, Human Computer Interfaces, Computer Vision

1. Introduction

The WIMP (windows, icons, menus, pointers) paradigm, together with the mouse and the keyboard, has been decisive in the generalization of the use of computers. It provides users with a clear model of what actions and commands are possible and what their outcomes can be. It also allows users to have a sense of accomplishment and responsibility about their interactions with computer applications [8]. Under this paradigm, users express their intents to the computer with their hands, by pressing keys, clicking buttons and positioning the mouse. This is a rather unnatural and limiting way of interaction. As computers become more and more pervasive in our daily life, it is highly desirable that interaction with them does not fundamentally differ from interaction with other persons and with the rest of the world. That is the ground of Perceptual User Interfaces (PUI), which are concerned with extending human computer interaction to use all modalities of human perception. One of the most promising approaches for the early development of PUI is the use of vision-based interfaces, which perform online hand gesture recognition. The advantages of hands are their high precision and speed. Their capabilities for HCI have been thoroughly certified by the success of tools like mice, keyboards and joysticks: humans easily learn how to use them to execute the most diverse and complex tasks. Also, vision-based interfaces are unobtrusive and inexpensive, making them a good fit. In traditional HCI, most attempts have used some device, such as instrumented gloves, for incorporating gestures into the interface. If the goal is natural interaction in everyday situations, this might not be acceptable. However, a number of applications of hand gesture recognition for HCI exist. Most of them require restricted backgrounds and camera positions, and a small set of gestures, performed with one hand.
They can be classified as applications for pointing, presenting, digital desktops, virtual workbenches and VR. Gesture input can be categorized [4] into deictic gestures (pointing at an object or direction), mimetic gestures (accepting or refusing an action), and iconic gestures (defining an object or its features). Pavlovic et al. [7] noticed that, ideally, the naturalness of a human computer interface requires that any and every gesture performed by the user should be interpretable, but that the state of the art in vision-based gesture recognition is far from providing a satisfactory solution to this problem. A major reason is obviously the complexity associated with the analysis and recognition of gestures. A number of pragmatic solutions to gesture input in human computer interfaces have been proposed in the past, such as: the use of props or input devices (e.g., a pen or a data glove), restricting the object information (e.g., the silhouette of the hand), restricting the recognition situation (uniform background, restricted area) and restricting the set of gestures being understood. Liu and Lovell [5], for instance, proposed a system for tracking real-time hand gestures captured by a cheap web camera and a standard Intel Pentium based personal computer, without any specialized image processing hardware.

In this paper we describe a human computer interface based on hand gesture recognition using computer vision. Our main goal is to accomplish a natural interface, based on the recognition of a reasonable set of hand gestures, without sacrificing computational efficiency. The recognition is based on the analysis of the temporal variation of the hand contour. This variation allows a parametric representation of each gesture, which reduces the amount of data to process during the recognition step. The proposed setup consists of a small firewire camera located in front of the computer. This camera is used to acquire images of the user's hand. These images are used to segment, track and recognize a set of predefined hand gestures. Several types of gestures were considered: pointing gestures, which provide a very natural and intuitive method of engaging an interface, and command gestures, which make the execution of complex tasks fast and easy. By combining these two methods we hope to achieve the intuitiveness of the pointing system while maintaining the quick efficiency of the command gesture system.

2. Representing Dynamic Hand Gestures Using Contour Signature Images

The foundation of the proposed hand gesture recognition method is the parametric representation of each dynamic gesture as a recording of the temporal evolution of the contour of the hand: the contour signature image (CSI). Computing the CSI of each gesture requires three steps:

1. Hand segmentation. In this first step we identify the contour of the hand. Special care must be taken here, since the quality of the segmentation determines the performance of the proposed method.

2. Computing the contour signature of each hand image. The next step is the definition of the contour signature (CS) as the parametric representation of the contour. The contour signature is defined as the set of points on the contour expressed in polar coordinates. The origin of the coordinate system is the centroid of the segmented hand blob. Each contour signature is normalized so that it can be grouped with other contour signatures along time.

3. Computing the contour signature image. To start the gesture recognition process we need to compute the contour signature image. This image is a collection of contour signatures along time. The recognition process then becomes a traditional pattern recognition task: we need to verify the computed CSI against a set of previously recorded CSIs corresponding to the set of recognizable gestures (obtained by training).

2.1 Hand Segmentation Process

The success of the recognition process depends a great deal on the quality of the segmentation process. A good segmentation that allows for the full recovery of the hand's contour is essential to the success of the entire process. In this paper we do not intend to provide a solution to this complex problem; we simply assume that the segmentation process ensures the necessary quality for the identification of the contour of the hand. Another important requirement for the segmentation process is related to the proposed objective of real-time performance: it should be elaborate enough to produce a good quality segmentation and at the same time simple enough to enable a real-time implementation. We use the Continuously Adaptive Mean-shift algorithm (Camshift) [2] to track the user's hand position, orientation and scale. To use it, a probability distribution image of the skin color is needed.
Invariance to illumination changes and to different skin tones is also desirable. To achieve this, we first create a model of the desired hue using a color histogram, as in [1]. We use the Hue Saturation Value (HSV) color system, which corresponds to projecting the standard Red, Green, Blue (RGB) color space along its principal diagonal from white to black. The HSV space separates hue (color) from saturation (how concentrated the color is) and from intensity. The color model is created by taking a 16-bin 1-D histogram of the H (hue) channel in HSV space. For hand tracking via a skin color model, skin regions from the users are sampled by prompting them to select an area of their hand's skin with the mouse. The hues derived from the skin pixels in the image are sampled from the H channel and binned into the 1-D histogram. When sampling is complete, the histogram is saved for future use. The low selectivity of this method means that it works for most kinds of skin tones and produces solid blobs. The method can misbehave when the scene contains objects with colors similar to the skin color (like orange and red). When the user wears short sleeves an additional problem occurs: the segmented blob also includes the forearm. To solve this problem, another segmentation step, based on the statistics of hand dimensions, is used to separate the hand from the forearm. An example of the result of the segmentation process is shown in Figure 1. The skin color histogram (obtained by learning several samples of skin images) is used as a lookup table to convert incoming video pixels into a skin probability distribution image. This image is used by the tracker to identify potential hand pixels. We threshold the output of the tracker and apply morphological operators (erosion and dilation) to compute the hand's blob.
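As a concrete illustration of this pipeline, a minimal Python/OpenCV sketch might look like the following. It is not the authors' implementation: the ROI coordinates, the binary threshold value and the morphology iteration counts are assumptions, and the mouse-driven skin sampling described above is reduced here to a fixed window.

```python
# Sketch of the hue-histogram / Camshift segmentation stage (OpenCV).
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frame = cap.read()

# Assume the user selected this skin patch with the mouse (hypothetical ROI).
x, y, w, h = 300, 200, 40, 40
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)

# 16-bin 1-D histogram of the H channel, as described in the paper.
skin_hist = cv2.calcHist([hsv_roi], [0], None, [16], [0, 180])
cv2.normalize(skin_hist, skin_hist, 0, 255, cv2.NORM_MINMAX)

track_window = (x, y, w, h)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Histogram back-projection: skin probability distribution image.
    backproj = cv2.calcBackProject([hsv], [0], skin_hist, [0, 180], 1)
    # Camshift tracks the hand's position, orientation and scale.
    rot_rect, track_window = cv2.CamShift(backproj, track_window, term_crit)
    # Threshold + morphology (erosion, dilation) to obtain a solid hand blob.
    _, blob = cv2.threshold(backproj, 50, 255, cv2.THRESH_BINARY)
    blob = cv2.dilate(cv2.erode(blob, None, iterations=2), None, iterations=2)
```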

[Figure 1. Overview of the segmentation process: (a) hand tracking; (b) result of the color segmentation process (skin color probability distribution image); (c) final segmented blob after thresholding and application of morphological operators.]

[Figure 2. Example of the gesture Drag.]

[Figure 3. Example of the gesture Click.]

[Figure 4. The contour signature is defined using the polar coordinates of each point belonging to the contour of the hand. The center of the coordinate system is the centroid of the hand blob.]

2.2 Contour Signature Definition

Assuming that, after the segmentation process, we obtain a correct view of the hand blob contour, we can proceed to the next step: the determination of the contour signature of the hand. The contour of the segmented object, $S$, consists of a finite set of $N_i$ points on the image, $s_k$, that defines the basic shape of the hand (Figure 4): $S = \{s_k = (x_k, y_k),\ k = 1, \ldots, N_i\}$. We assume that the contour $S$ has the following properties: $S$ is closed, i.e. $s_1$ is next to $s_{N_i}$; $S$ has a depth of one single point; and $S$ is defined by traversing points in the clockwise direction.

The starting point for the definition of the contour signature is the polar coordinate representation of each point $s_k$ belonging to the contour of the segmented blob. The polar coordinates are defined in such a way that the origin of the coordinate system is the centroid $C = (c_x, c_y)^T$ of the segmented region $R$, defined as:

$$c_x = \frac{\sum_x \sum_y f(x,y)\,x}{\sum_x \sum_y f(x,y)} \quad \text{and} \quad c_y = \frac{\sum_x \sum_y f(x,y)\,y}{\sum_x \sum_y f(x,y)}, \quad \text{with } f(x,y) = \begin{cases} 1 & \text{if } (x,y) \in R \\ 0 & \text{otherwise} \end{cases} \quad (1)$$

Given the contour $S = (s_1, s_2, \ldots, s_{N_i})^T$ of the segmented hand in frame $i$, we can compute the coordinate $\rho_k$, which corresponds to the Euclidean distance of each point to the centroid of the segmented hand blob:

$$\rho_k = \|s_k - C\| = \sqrt{(x_k - c_x)^2 + (y_k - c_y)^2}, \quad k = 1, \ldots, N_i$$

We can also compute the $\theta_k$ coordinate of each point on the contour:

$$\theta_k = \arctan\frac{y_k - c_y}{x_k - c_x}, \quad k = 1, \ldots, N_i$$

These two coordinates can be expressed as two discrete functions, each of which defines a contour signature: $CS_\rho(i) = \{\rho_1, \rho_2, \ldots, \rho_{N_i}\}$ and $CS_\theta(i) = \{\theta_1, \theta_2, \ldots, \theta_{N_i}\}$. Since we want to guarantee an unequivocal representation of each gesture, we need to consider both contour signatures: one corresponding to the $\rho$ coordinate and another to the $\theta$ coordinate.

The length of a contour signature is determined by the number of points that compose the contour itself. Since this number of points varies over time, we need to normalize the dimension of the contour signature. To accomplish this we sub-sample the contour to obtain a fixed-length contour signature, using linear interpolation to improve the quality of the sub-sampling. For each new image we sub-sample both discrete functions $CS_\rho(i)$ and $CS_\theta(i)$ to obtain two new discrete functions, $CS_{N\rho}(i) = \{\rho_1, \rho_2, \ldots, \rho_N\}$ and $CS_{N\theta}(i) = \{\theta_1, \theta_2, \ldots, \theta_N\}$, both with a constant length $N$. To allow for a more reliable correspondence between the several recognizable gestures, the contour signature should be invariant to translation and scale changes in the image.
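A minimal NumPy sketch of the sub-sampling and normalization just described follows. It is a sketch under simplifying assumptions, not the authors' code: the blob centroid of Equation (1) is approximated by the mean of the contour points, and one particular mapping of θ to [0, 1] is assumed.

```python
# Illustrative computation of fixed-length, normalized contour signatures.
import numpy as np

def contour_signature(points, n_samples=75):
    """points: (N_i, 2) array of clockwise contour points (x_k, y_k)."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)                  # approximates the blob centroid (Eq. 1)
    d = pts - c
    rho = np.hypot(d[:, 0], d[:, 1])      # distance of each point to the centroid
    theta = np.arctan2(d[:, 1], d[:, 0])  # angle of each point

    # Start the signature at the point with maximum rho (see Section 2.2).
    start = int(np.argmax(rho))
    rho = np.roll(rho, -start)
    theta = np.roll(theta, -start)

    # Sub-sample (with linear interpolation) to a constant length N.
    src = np.linspace(0.0, 1.0, len(rho))
    dst = np.linspace(0.0, 1.0, n_samples)
    rho_n = np.interp(dst, src, rho)
    theta_n = np.interp(dst, src, np.unwrap(theta))  # unwrap avoids the +/- pi jump

    # Scale invariance: rho in [0, 1]; theta also rescaled to [0, 1) (one choice).
    rho_n /= rho_n.max()
    theta_n = np.mod(theta_n, 2.0 * np.pi) / (2.0 * np.pi)
    return rho_n, theta_n
```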

Translation invariance is accomplished naturally, since the contour is defined relative to a coordinate system with its origin at the centroid of the hand blob. The same is not true for scale: different distances from the camera to the hand imply different contour amplitudes. To solve this problem we simply normalize the $\rho_k$ coordinates to the range $0 \leq \rho_k \leq 1$, by dividing each $\rho_k$ by $\rho_{max} = \max(\rho_k)$, $k = 1, \ldots, N$. The contour signature corresponding to the $\theta$ coordinate does not have this problem, since the interval of variation of $\theta$ is independent of scale; however, for processing compatibility reasons the angles $\theta_k$ are also scaled so that $0 \leq \theta_k \leq 1$. Another important issue is the definition of which of the contour points should be considered the first point of the contour signature (i.e. which point is point $k = 1$). This point is defined as the point with the maximum $\rho$ coordinate.

[Figure 5. Illustration of the process used for the creation of a contour signature image ($CSI_\rho$). On each new frame $i$, a new contour signature ($CS_{\rho,i}$) is added to the image while the oldest one ($CS_{\rho,i-M}$) is discarded. In this example $M = 75$, which corresponds to a gesture with a duration of 3 seconds.]

2.3 Definition of the Contour Signature Images

After the acquisition of every new image, the corresponding contour signature is grouped with all previously computed contour signatures. This grouping results in an image that represents the dynamic variation of the contour of the gesture along time (Figure 5). The dimension of this image depends on the temporal duration of each gesture: if a gesture typically has a duration of $M$ frames and the assumed dimension of the contour signature is $N$, then the contour signature image has dimension $M \times N$. After the acquisition of every new image $i$, we can define the contour signature images $CSI_\rho^i$ and $CSI_\theta^i$ as follows:

$$CSI_\rho^i = \begin{bmatrix} \rho_i(1) & \rho_i(2) & \cdots & \rho_i(N) \\ \rho_{i-1}(1) & \rho_{i-1}(2) & \cdots & \rho_{i-1}(N) \\ \vdots & \vdots & & \vdots \\ \rho_{i-M+1}(1) & \rho_{i-M+1}(2) & \cdots & \rho_{i-M+1}(N) \end{bmatrix} \quad \text{and} \quad CSI_\theta^i = \begin{bmatrix} \theta_i(1) & \theta_i(2) & \cdots & \theta_i(N) \\ \theta_{i-1}(1) & \theta_{i-1}(2) & \cdots & \theta_{i-1}(N) \\ \vdots & \vdots & & \vdots \\ \theta_{i-M+1}(1) & \theta_{i-M+1}(2) & \cdots & \theta_{i-M+1}(N) \end{bmatrix}$$

where $\rho_j(n)$ and $\theta_j(n)$ denote the $n$-th element of the normalized contour signatures $CS_{N\rho}(j)$ and $CS_{N\theta}(j)$. At each instant of time, the construction of the contour signature images can be viewed as a scroll of each of the two images followed by the insertion of the newly computed contour signature in the first line of each image. This procedure gives us a parametric representation of each gesture. As can be seen, this process is far from being computationally demanding, which makes it suitable for real-time applications.

[Figure 6. Visual representation of three different contour signature images, corresponding to three different gestures: (a) Drag; (b) Click; (c) Close Window.]

Figure 6 presents three examples of contour signature images corresponding to three different gestures. In this specific case $M = 75$, which means that the represented gestures have a duration of about 3 seconds (assuming a standard video rate of 25 frames/second). The length of each contour signature is also $N = 75$. An interesting property that results from the way we represent each gesture is that gestures that are symmetrical with respect to the horizontal axis of the image are also represented by symmetrical contour signature images. This means that the recognition process can be programmed to automatically deal with this kind of symmetry.
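The scrolling update of the CSI can be sketched as a rolling M x N buffer, as below. This is only an illustration; the variable names and the stand-in signatures are assumptions.

```python
# Rough sketch of maintaining the contour signature images as rolling
# M x N buffers; the newest fixed-length signature goes into row 0.
import numpy as np

M, N = 75, 75                       # gesture duration (frames), signature length

def update_csi(csi, signature):
    """Scroll the CSI down one row and insert the newest signature on top."""
    return np.vstack([signature, csi[:-1]])

# Usage: one buffer per coordinate, updated once per frame.
csi_rho = np.zeros((M, N))
csi_theta = np.zeros((M, N))
rho_n, theta_n = np.random.rand(N), np.random.rand(N)   # stand-in signatures
csi_rho = update_csi(csi_rho, rho_n)                    # newest row is row 0
csi_theta = update_csi(csi_theta, theta_n)              # oldest row is dropped
```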

3. Gesture Recognition From Contour Signature Images

If we have available a database with the contour signature images of each recognizable gesture, then the recognition process becomes the process of finding a correct match between the contour signature image corresponding to the gesture being performed and each gesture in that database. Gesture recognition thus becomes a typical pattern recognition problem, and there are several methods in the literature that address this kind of problem. Our choice of method was based on two criteria: first, a method that promises good results in terms of recognition rates, and second, a method simple enough to be easily implemented in real time. We chose a derivation of principal component analysis (PCA) called RPCA, or robust principal component analysis [3]. One additional advantage of these methods is that the memory costs are small: instead of storing the entire set of training images we only store a small amount of information (the eigenvectors that define the eigenspace and the coordinates of each training sequence in that eigenspace). RPCA in particular improves the quality of the estimated eigenspace by reducing the influence of outliers (caused, for instance, by errors in the segmentation process).

For each of the gestures to be recognized by the system we capture several sequences of images in which the gesture is fully expressed. These sequences of images are used to compute the pairs of contour signature images $CSI_\rho^n$ and $CSI_\theta^n$. We use several instances of the same gesture in order to capture small variations of it (ideally performed by several different subjects). Since we are in fact using two different images, we construct two different eigenspaces, one for each contour signature image. The parametric representation of each gesture in the eigenspace allows us to classify the gesture being performed in a very efficient way: in the eigenspace, the correlation between two images corresponds to the distance between their projections [6]. This property is used to drive the recognition process. After every new frame is captured and segmented, the two contour signature images in $\rho$ and in $\theta$ are determined. Each new contour signature image has the average of the training images subtracted from it and is then projected onto the eigenspace defined by the training images. If $Y_\rho$ is the contour signature image of the gesture being recognized and $C_\rho$ is the average of the images in the training set, then the projection of the current gesture onto the eigenspace is given by:

$$z_\rho = [e_{\rho 1}, e_{\rho 2}, \ldots, e_{\rho k}]^T (Y_\rho - C_\rho)$$

where $k$ is the number of eigenvectors used to define the eigenspace. This process is not computationally demanding, since it only requires the computation of an inner product of the vector defining the current gesture with each of the $k$ eigenvectors that define the eigenspace. The recognition process consists in determining which of the reference contour signature images, obtained by training, best corresponds to the image generated by the gesture being recognized. However, due to several factors, such as errors in the segmentation process and variability among the instances of each trained gesture, that correspondence cannot be exact. To overcome this, we try to find the gesture that minimizes the distance between the point $z_\rho$ and the representation of each gesture in the eigenspace ($g_{\rho i}$):

$$d_\rho = \min_i \|z_\rho - g_{\rho i}\|, \quad i = 1, \ldots, N_g$$

where $N_g$ is the number of different gestures recognizable by the system.
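A minimal sketch of the training and projection steps is shown below. It uses plain PCA via SVD instead of the RPCA training the paper actually employs, purely to keep the example short; all variable names are illustrative.

```python
# Illustrative NumPy sketch of the eigenspace projection z = E (Y - C).
import numpy as np

def train_eigenspace(training_csis, k):
    """training_csis: list of M x N contour signature images."""
    X = np.stack([csi.ravel() for csi in training_csis])  # one row per image
    C = X.mean(axis=0)                                    # average training image
    # Principal directions of the centered data (top k right-singular vectors).
    _, _, Vt = np.linalg.svd(X - C, full_matrices=False)
    E = Vt[:k]                                            # k x (M*N) eigenvectors
    return E, C

def project(csi, E, C):
    """Project a contour signature image onto the eigenspace."""
    return E @ (csi.ravel() - C)                          # k inner products
```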
Since several training contour signature images are used for each gesture, we can pose the question: what is the distribution in the eigenspace of points belonging to different instances of the same gesture? Since the different images correspond to the same gesture, the differences should be small, which implies that their projections in the eigenspace will be concentrated in the same region. In a simplistic way, we consider that all such points are contained in a hypersphere, whose equation is obtained by fitting the data from each gesture. If the coordinates of the contour signature image in the eigenspace are given by $z_\rho = (z_{\rho 1}, z_{\rho 2}, \ldots, z_{\rho k})$, then we can relate this point to the projections of the training images using the following expression:

$$(z_{\rho 1} - zc^j_{\rho 1})^2 + (z_{\rho 2} - zc^j_{\rho 2})^2 + \cdots + (z_{\rho k} - zc^j_{\rho k})^2 = R_j^2$$

where $zc^j_{\rho 1}, zc^j_{\rho 2}, \ldots, zc^j_{\rho k}$ and $R_j$ are obtained by fitting the data from the several training instances of the same gesture $j$, with $j = 1, \ldots, N_g$. This computation is done for each of the contour signature images (both in $\rho$ and $\theta$). For each gesture, both distances are combined using the following expression:

$$d_g = \sqrt{d_\rho^2 + d_\theta^2}$$

If the smallest of these distances is below a certain threshold $L$, we assume that the gesture being performed is of the same type as the gesture corresponding to that nearest point. If, on the other hand, the minimal distance is above the threshold $L$, we consider that the gesture being performed is unknown.
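The final decision rule could be sketched as follows, assuming the per-gesture representation is the fitted hypersphere center in each eigenspace; the function and parameter names are assumptions, not the paper's code.

```python
# Sketch of the classification rule: combined rho/theta distance per gesture,
# nearest gesture wins, rejection if the best distance exceeds threshold L.
import numpy as np

def classify(z_rho, z_theta, centers_rho, centers_theta, L):
    """centers_*: (N_g, k) arrays of fitted hypersphere centers per gesture."""
    d_rho = np.linalg.norm(centers_rho - z_rho, axis=1)
    d_theta = np.linalg.norm(centers_theta - z_theta, axis=1)
    d_g = np.sqrt(d_rho**2 + d_theta**2)     # combined distance per gesture
    best = int(np.argmin(d_g))
    if d_g[best] > L:
        return None                          # unknown gesture
    return best                              # index of the recognized gesture
```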

4. Experimental Results

In order to evaluate the performance of the proposed method we carried out several experiments. We defined a set of ten hand gestures that enables the user to perform the following actions: pointing, dragging, clicking, open window, open start menu, page down, page up, open menu (equivalent to a right click on a mouse), enter and delete. Both static and dynamic gestures were considered (five of each). For some gestures additional information is also determined: for instance, for the gestures pointing and clicking we also compute the coordinates of the pointing finger in order to assign the pointing position to the mouse position. To determine the position of the pointing finger, a histogram of the ρ coordinate (the distance to the centroid) is used; the point of maximum variation corresponds to the pointing finger (a rough sketch of this idea follows at the end of this section).

The gesture recognition system was implemented on a PC (Pentium 4, 3 GHz) with Windows XP. The video images were captured using a firewire camera. All computations are made by the host computer and images are processed at the full frame rate. The interaction with the operating system is handled by a Windows XP service running in the background. Each of the gestures was trained using ten different instances of the same gesture, performed by two different persons. The results are summarized in Table 1. We define recognition rate as the percentage of gestures correctly recognized by the proposed method.

Gesture        Recognition Rate (%)
Pointing       93.64
Dragging       94.21
Clicking       96.42
Open Window    90.91
Start Menu     94.21
Page Down      96.42
Page Up        96.62
Open Menu      96.86
Enter          97.15
Delete         97.41

Table 1. Experimental results of the proposed method in terms of recognition rate (percentage of gestures correctly recognized).

As can be seen, the overall behavior of the method is very good. Practical experience has shown that in most of the cases where errors occurred the user was able to overcome the problem by seamlessly repeating the erroneous command.
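The fingertip localization mentioned above could be sketched as follows. This is one plausible reading of "point of maximum variation", not the authors' exact procedure.

```python
# Rough sketch: locate the pointing finger on the rho contour signature.
import numpy as np

def pointing_finger_index(rho_n):
    """rho_n: rho contour signature of length N."""
    # The fingertip is taken as the point of maximum variation of rho,
    # approximated here by the largest local gradient magnitude.
    grad = np.abs(np.gradient(rho_n))
    return int(np.argmax(grad))
```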
5. Discussion

In this paper we have presented a natural hand gesture human computer interface using contour signatures. The proposed method allows a natural interaction with the computer through very intuitive and simple gestures that control the most common tasks. Although the proposed method only allows the recognition of hand gestures that are fully identified by the analysis of their contour, we believe that, with some imagination, one can come up with a full set of different recognizable gestures that can be associated with the most common control actions on the computer. We plan to extend this kind of interface to new interaction paradigms where more than one user can freely interact with the computer, allowing for a more natural interaction with it. We envision a collaborative scenario that will enable teams to view, share, annotate, manage and make decisions on visual digital assets more effectively than with traditional computer interfaces. Every participant can freely interact with the computer, without the need for special external devices; instead, they can use a set of natural gestures as a modality for basic computer interaction. This approach will provide an elegant solution for a wide range of users and will improve the nature of computer-based collaboration.

6. Acknowledgements

The research described in this paper was financially supported by FCT under grant No. POSC/EEA-SRI/61451/2004.

References

[1] Gary R. Bradski. Computer vision face tracking for use in a perceptual user interface. In IEEE Workshop on Applications of Computer Vision, pages 214-219, October 1998.

[2] Intel Corporation. Intel open source computer vision library reference manual. http://www.intel.com/research/mrl/research/opencv

[3] F. De la Torre and M. Black. Robust principal component analysis for computer vision. In Proc. International Conference on Computer Vision, pages 362-369, Vancouver, Canada, 2001.

[4] Jin Liu, Siegmund Pastoor, Katharina Seifert, and Jörn Hurtienne. Three-dimensional PC: toward novel forms of human-computer interaction. In Three-Dimensional Video and Display: Devices and Systems, SPIE CR76, 2000.

[5] Nianjun Liu and Brian Lovell. MMX-accelerated real-time hand tracking system. In IVCNZ, pages 26-28, November 2001.

[6] H. Murase and S. Nayar. Visual learning and recognition of 3D objects from appearance. International Journal of Computer Vision, 5(24), 1995.

[7] V. I. Pavlovic, R. Sharma, and T. S. Huang. Visual interpretation of hand gestures for human-computer interaction: A review. IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), 7(19):677-695.

[8] Matthew Turk and George Robertson. Perceptual user interfaces. Communications of the ACM, 43(3), March 2000.