A Survey on User-Device Authentication on Emerging HCI Interfaces




Abstract: As user demand and the cost benefits of natural user interface technologies are hastening their wide adoption, computing devices that come equipped with these natural interaction interfaces are becoming ubiquitous. Consequently, authentication mechanisms on such devices are becoming an essential security component that allows a wider range of applications, including privacy- and security-sensitive ones, to be deployed on them. This paper presents a comprehensive review of existing user-device authentication approaches that are applicable on the new emerging interfaces and identifies the security and usability questions that shall be pursued in future work. In addition, a set of evaluation metrics to assess the security and usability of these approaches is derived, and a comparative evaluation of the different authentication approaches is performed. Finally, future research directions in this topic are discussed. The emerging interfaces include touch-sensitive displays, 2-D and 3-D cameras, voice interfaces, eye tracking platforms, and brain-computer interaction interfaces.

Index Terms: IEEEtran, journal, LaTeX, paper, template.

I. INTRODUCTION

Recent advances in natural user interface technology have remarkably transformed the way we interact with computing devices and the form factors of the computing devices themselves. We are now surrounded by many new categories of computing devices, including tablets, tabletops, wearable and augmented reality devices, with which we can interact more transparently and seamlessly using new forms of natural interfaces. These include, but are not limited to, touch gestures via multi-touch sensitive displays, body gestures via 2-D and 3-D cameras, gaze interaction via eye tracking platforms, voice via microphone, and brain signals via BCI headsets. As such, there is an endless list of applications that could be developed and operated on these interfaces and devices. However, without user-device authentication mechanisms in place, the security and privacy of users associated with sensitive information stored in or accessible from the device will be at risk of being compromised and violated. In addition, the types of applications users choose to deploy on these devices will be limited.

In 2012, Symantec performed an experiment to study the behavior of strangers who picked up unlocked phones in several cities of the US and Canada [1]. Interestingly, they found that 96% of the devices were accessed by the finders, whereas only 50% of the finders contacted the owner and provided contact information. Moreover, a recent study by Egelman et al. [2] indicated that at least one third of users stored sensitive data, e.g., SSN, last four digits of SSN, bank account, credit/debit card number, date of birth, email password, and home address, in their email accounts, which are most likely accessible from their smartphones, and the fact that this information could be found in their emails was a surprise to them. In addition, the same study also reported that users who did not lock their smartphones stored significantly less sensitive information in their email than those who locked their devices. This might imply that users who do not have a locking mechanism in place may be deterred from installing or using some sensitive applications, e.g., mobile payment.
On the other hand, the age-old text password mechanism is facing new usability challenges, in addition to its known security and usability issues, when deployed on new forms of computing devices that are not equipped with a physical keyboard. This in turn results in users ignoring the mechanism. According to a study of user locking behavior on Android touch-sensitive smartphones, only a small percentage of users (14%) chose to deploy the text password mechanism, whereas a much larger percentage (51%) chose Android's Pattern Lock as their unlocking mechanism [3]. More interestingly, there is currently no user authentication mechanism implemented on the off-the-shelf Google augmented reality headset, Google Glass, or on the Samsung smartwatch, Galaxy Gear. As such, there is a window of opportunity to develop user-device authentication mechanisms on emerging interfaces which can then be operated on new computing devices and platforms. And with the industrial effort to standardize a web authentication protocol driven by the FIDO alliance, not only can such a mechanism be used for local device authentication, it can also be used to initiate online or web authentication [4].

In response, a number of authentication alternatives that are applicable on these emerging interfaces have been proposed and investigated. In addition, several variants and incremental developments of these approaches have been suggested and their related studies have been conducted. Nevertheless, the studies related to these approaches are often performed and analyzed independently and diversely by experts in various communities. As a result, the true merit of the approaches cannot be established, since some essential aspects of usability and security were missing from their analysis [5]. The focus of this paper is to provide a systematic review of existing authentication approaches that are applicable on different types of emerging interfaces and to identify the key usability and security challenges of these approaches. In addition, a comprehensive set of criteria to assess the security and usability of user-device authentication mechanisms is presented, along with a comparative evaluation of these approaches. This information could help researchers refine authentication mechanisms that are inherently more secure and usable, with respect to the context of use, on new forms of computing devices. Furthermore, this paper could serve as a starting point to develop a holistic authentication approach on these new devices by taking advantage of the available interfaces to enhance the security provided to users without trading off usability. That is, as these new devices are equipped with various types of user interfaces, there is a larger amount of user interaction traits that can be silently observed and used by the authentication mechanism to verify the identity of users. A summary of the authentication approaches on emerging HCI interfaces reviewed in this paper is presented in Table I.

Fig. 1: Examples of devices equipped with emerging interfaces

TABLE I: Summary of authentication approaches

Interfaces | Examples of devices | Approaches | Related studies
Touch interface | Tablet, smartphone, smartwatch, multi-touch coffee table, and large interactive display | Textual password on a virtual keyboard [6], [7]; Android screen-lock pattern [8]-[10]; Microsoft picture [11], [12]; Multi-touch gesture [13]-[16]; Online signature [17]-[22]
2-D camera | Tablet, smartphone, smartwatch, and head-mounted device, e.g., Google Glass and Oculus Rift | Hand gesture authentication [23]; Face recognition [24]-[26]; Fingerprint recognition [27], [28]
3-D camera | Body motion control, e.g., Kinect, and hand motion control, e.g., Leap | In-air signature [29], [30]; SignWave [31], [32]; Body gesture [33], [34]
Voice | In-vehicle controller, smartwatch, and head-mounted device | Voice authentication [35]-[41]; Whisper authentication [42]
Eye tracking | Smartphone and in-vehicle controller | Gaze authentication [43]-[47]
Brain interface | EEG headset | Brain wave authentication [48]-[50]

A. Background

Computing devices have traditionally been equipped with a physical keyboard. Consequently, the textual password, which can be input by typing on the keyboard, has been used as the de facto user authentication mechanism, despite its known security and usability problems (a memorability issue in particular). However, the change in user interaction interfaces and the usage patterns of these new devices have brought new security threats and usability issues to this conventional text password mechanism. First, these devices are often used in a social context and in public spaces, where users are surrounded by handheld recording devices and public surveillance. In such situations, the assumption that users can enter a password or other shared-secret credentials privately and securely no longer holds. In particular, a study by Shukla et al. reported that 85% of PIN entries could be recovered within ten attempts using only information about the dynamics of the hand [51], [52]. Note that this information can be recorded easily using smartphones or wearable devices. Secondly, computing devices are no longer built around a physical keyboard as the main user interaction interface; instead, they employ other natural user interfaces (NUI). As a result, there is an inevitable end to the era of users inputting text passwords on a physical keyboard. Moreover, since these new interfaces are not designed for text-oriented applications, designing a usable mechanism to input a text password has proven non-trivial from both the user and the machine perspective. In particular, research has shown that entering a text password on a virtual keyboard is much slower and harder [53] than on a physical keyboard, due to the fat finger problem [54], user distraction [55], and inaccurate touch input precision [56], [57]. Consequently, most users do not choose text passwords as an unlocking mechanism on their multi-touch devices [11].
Therefore, a number of authentication alternatives on these interfaces have been proposed and investigated. In addition, many techniques that take advantage of these emerging interfaces to provide intrinsic security to users have been developed. That is, user interaction on these new interfaces provides much higher-dimensional data than on conventional interfaces, and this interaction information can be utilized as additional authentication factors. For example, with a physical keyboard, only key-press and key-release events are detected during user interaction, whereas, with a mouse, a time series of mouse movements and mouse-click events is continuously captured. With a multi-touch surface, not only can the system detect multiple touch points at a time, it can also collect other contextual information about each touch event, for example, the touch area, touch pressure, and touch shape.

With a voice interface, a voice signal, which is one-dimensional time-series data of voice amplitudes, is captured at a very high sampling rate. With two-dimensional or three-dimensional cameras, the output is a time series of part of the user's body interaction, e.g., whole-body and hand gestures. With an eye tracking system, a time series of gaze movements can also be detected. And with recent commercial brain-computer interfaces (BCI), the sensor system outputs neural signals resulting from the user's brain activities, which is a time series of multiple attributes recorded from several EEG sensors attached to the user's head surface at a sampling rate of up to 1 kHz. Using information extracted from user interaction on these emerging interfaces, several studies have shown that biometric traits of users can be derived and used to verify the identities of users based on who-they-are, in addition to what-they-know.

B. Outline of this paper

The remainder of the paper is organized as follows. First, we present a review of authentication approaches on touch interfaces in Section II, followed by the ones on 2-D and 3-D sensing cameras in Sections III and IV, respectively. Then, those on voice, eye tracking, and brain-computer interfaces are presented in Sections V, VI, and VII. Each of these sections begins by describing the hardware sensor technologies of the user interface and the device categories that come equipped with it. Then, authentication approaches that are applicable on the interface are reviewed, and for each alternative the security and usability benefits are analyzed. Security and usability evaluation metrics for user-device authentication mechanisms are presented in Section VIII. Lastly, in Section IX, we give concluding remarks and discuss future research directions for developing inherently secure and usable authentication approaches.

II. TOUCH INTERFACE

The touch-sensitive display has increasingly become a dominant human-computer interaction interface in several device categories, including handsets, tablets, large interactive displays, multi-touch coffee tables, and smartwatches. To communicate with devices via a touch-sensitive display, a user performs touch gestures on the display with one or more fingers from one or two hands. Nevertheless, the types of touch gestures that users can comfortably perform are typically restricted by the size of the device's display. For example, a large multi-touch display can accommodate more complex touch gestures, e.g., those performed with four or more fingers, as opposed to a smaller one where only simple gestures can be performed. In turn, the user's touch input is sensed by an array of capacitive or resistive sensors embedded within the display. In the context of a signal, a touch gesture is a time series of touch point sets where each point in the set has attributes such as x-y coordinates, a time stamp, and sometimes pressure information. With respect to authentication mechanisms that have been deployed or proposed, the authentication interaction varies from mimicking keyboard interaction (typing gestures) and mouse interaction (tap, draw, and drag) to multi-touch interaction.
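To make the data representation above concrete, the following sketch shows one minimal way such a gesture time series could be matched against enrolled templates. It is only an illustration: the (x, y, t, pressure) tuple format, the dynamic-time-warping comparison, and the acceptance threshold are assumptions made for the example and are not taken from any specific system surveyed here.

```python
import numpy as np

# One touch sample is (x, y, t, pressure); a touch gesture is a sequence of them.
# A multi-finger gesture would be one such sequence per finger.
def dtw_distance(a, b):
    """Length-normalized dynamic-time-warping cost between two x-y trajectories."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1, :2] - b[j - 1, :2])   # compare x-y positions
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)

def verify_gesture(probe, enrolled_templates, threshold=25.0):
    """Accept when the probe is close enough to the nearest enrolled template.

    The threshold is an illustrative value; a real system would tune it on
    enrollment data to trade off false acceptance against false rejection.
    """
    score = min(dtw_distance(probe, t) for t in enrolled_templates)
    return score <= threshold, score
```

In practice, the surveyed systems differ mainly in which features they extract from this time series and in how the dissimilarity score and decision threshold are learned from enrollment data.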
Textual password on a virtual keyboard: This is one practical implementation of user authentication mechanisms on a touch-sensitive display. In this approach, a virtual keyboard or keypad is rendered on the display, as illustrated in Figure 2, and a user is authenticated by typing an alphanumeric password on that virtual keyboard. The system authenticates users by comparing the typed password with the one set up previously. In addition, Draffin et al. [6] have shown that typing characteristics on a soft keyboard (the specific location touched on each key, the drift from finger down to finger up, the force of the touch, and the area of the press) can also be used to differentiate users. Similarly, a study conducted on 16 subjects demonstrated that typing gestures of text messages are unique to individuals and consistent over time [7]. This implies that soft-keyboard typing behavior can also be used as an additional authentication factor. However, the verification error of such systems when only a few characters are typed (a typical user's choice of password) is very high (32.3% for five keypresses in the first study, and ranging from 2% to 50% for a single word in the second study). Furthermore, while this approach enjoys the benefit of user familiarity, it also inherits several security drawbacks from the textual password approach, including shoulder surfing. In particular, it has been demonstrated that a PIN entry can be automatically recovered from a video recording of the hand dynamics while it is being entered on a smartphone (via Google Glass, hand-held recording devices, public surveillance, etc.) [51], [52]. While the proposed solution to this is to use a randomized keyboard layout as opposed to a static one, a user would have to spend twice the time it takes on a normal keyboard and, in addition, the solution does not prevent attacks where the screen content is also visible in the video. Moreover, users often select weak passwords in response to their inability to memorize multiple complex passwords. In addition, typing on a virtual keyboard has introduced new usability problems. In particular, entering text on a virtual keyboard is reportedly slower and harder than on a physical keyboard [53], due to the fat finger problem [54], user distraction [55], and inaccurate touch input precision [56], [57]. This in turn results in users ignoring this mechanism. According to Van Bruggen et al. [3], of the Android users who lock their devices, only 22% chose to deploy an alphanumeric password as their authentication mechanism.
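The micro-behavior features reported in [6] (touch location on the key, finger-down to finger-up drift, touch force, and touch area) can be summarized per keypress. The sketch below illustrates one plausible feature vector of this kind; the field names and the averaging into a profile are assumptions made for the example rather than the feature set of the cited system.

```python
from dataclasses import dataclass
import math

@dataclass
class KeyPress:
    key: str
    down_x: float   # touch-down position on the screen
    down_y: float
    up_x: float     # touch-up position
    up_y: float
    key_cx: float   # center of the key that was hit
    key_cy: float
    pressure: float # reported touch force
    area: float     # contact area

def keypress_features(kp: KeyPress):
    """Per-keypress behavioral features: offset from key center, drift, force, area."""
    offset_x = kp.down_x - kp.key_cx
    offset_y = kp.down_y - kp.key_cy
    drift = math.hypot(kp.up_x - kp.down_x, kp.up_y - kp.down_y)
    return [offset_x, offset_y, drift, kp.pressure, kp.area]

def typing_profile(keypresses):
    """Average the per-keypress features into a simple user profile vector."""
    feats = [keypress_features(kp) for kp in keypresses]
    n = len(feats)
    return [sum(col) / n for col in zip(*feats)]
```

A verifier could then compare the profile of a typing session against the enrolled profile with any distance measure and a tuned threshold, in the same spirit as the template-matching sketch above.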

Android screen-lock pattern: This graphical password scheme is a variant of draw-a-secret, proposed by Jermyn et al. [8] in 1999. The scheme has become a phone unlock mechanism that is more popular than the text password [3]: 78% and 22% of the users who locked their phones used a pattern and a password as their authentication mechanism, respectively, while 35% of all users did not enable any authentication mechanism. In this scheme, a user is asked to create and memorize an exact dot-connected pattern on a 3x3 grid, as shown in Figure 3. The user is then authenticated based purely on knowledge of the drawing pattern shared between the system and the user. As a result, an attacker who observes a user drawing a legitimate pattern could use that knowledge to gain access to the system. In addition, a recent study by Andriotis et al. [9], in 2013, showed that by combining physical evidence, i.e., oily residue or smudges, with knowledge of the pattern distribution, 54.54% of the patterns can be fully recovered. To address this problem, De Luca et al. [10] suggested that, in addition to knowledge of the pattern, the user's drawing behavior can be used as a second authentication credential. Currently, they achieved 77% accuracy with 19% FRR and 21% FAR in their 31-participant study. This performance is impressive but barely practical. One future direction is to improve the performance by developing a more effective recognition technique. In addition, a multi-session training strategy shall be studied, since many studies (including EEG [49], voice biometrics [58], and face [59]) have shown that it can model within-user variation more effectively. Lastly, a long-term study of the consistency of a user's drawing pattern shall be conducted, since the performance may improve over time once a user's drawing behavior stabilizes.

Fig. 2: Text password on touch devices

Fig. 3: Android screen-lock pattern

Fig. 4: Microsoft picture authentication

Microsoft picture authentication: This graphical password approach has been implemented in the Windows 8 operating system as an alternative user-device authentication mechanism on multi-touch display devices. In this approach, a user is asked to perform at least three sequential gestures, i.e., tap, line, and circle, on a background image of the user's choice, and is required to memorize the set of gestures as well as the positions at which these gestures were performed [11]. As a graphical password scheme, memorability is believed to be its advantage over alphanumeric password schemes. However, several studies have indicated that the entropy of user passwords is much smaller than the theoretical guarantee [11], [12], due in part to the well-known hotspot problem. In addition, the shoulder surfing attack is one shortcoming of this authentication mechanism, as with other something-you-know authentication schemes. One possible solution to overcome this limitation is to devise a challenge-response based system: for example, instead of using a single set of three sequential gestures on one particular image, the system may prompt a user to set up multiple sequences over multiple images and ask for only a subset of those sequences during verification (see the sketch below). The open problem with such an approach would then be how many sequences a user could memorize, i.e., the password interference problem, which shall be investigated.
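The following sketch illustrates the challenge-response idea suggested above for picture-gesture passwords: the user enrolls several gesture sequences, each tied to a different image, and at login the system asks for a random subset of them. The data layout, the subset size, and the gesture-matching tolerance are assumptions made for illustration; they are not part of the deployed Windows mechanism.

```python
import random

# Enrollment: image_id -> ordered list of gestures; a gesture here is
# (kind, x, y) with kind in {"tap", "line", "circle"} and a reference point.
enrolled = {
    "img_beach":  [("tap", 120, 80), ("line", 40, 200), ("circle", 300, 150)],
    "img_dog":    [("circle", 60, 60), ("tap", 220, 90), ("line", 180, 240)],
    "img_family": [("tap", 90, 300), ("tap", 250, 40), ("circle", 140, 160)],
}

def issue_challenge(k=2):
    """Ask the user to reproduce their gesture sequence on k randomly chosen images."""
    return random.sample(list(enrolled), k)

def gestures_match(a, b, tol=30):
    """Same gesture type and reference point within a pixel tolerance."""
    return a[0] == b[0] and abs(a[1] - b[1]) <= tol and abs(a[2] - b[2]) <= tol

def verify_response(challenge, response):
    """response: image_id -> gesture sequence entered by the user."""
    for image_id in challenge:
        entered = response.get(image_id, [])
        stored = enrolled[image_id]
        if len(entered) != len(stored):
            return False
        if not all(gestures_match(e, s) for e, s in zip(entered, stored)):
            return False
    return True
```

Because only a subset of the enrolled sequences is revealed per login, a single shoulder-surfed session no longer exposes the full credential, at the cost of a longer enrollment and the memorability concern noted above.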
Online signature: A finger-drawn signature is another plausible candidate, given that signatures are socially and legally accepted authentication credentials. In this scheme, a user is authenticated by drawing a legitimate signature on the touch-sensitive display using a fingertip. Systems that verify the authenticity of an online signature drawn with a stylus in a controlled environment have long been studied, and verification performance has been reported on datasets where the signatures are drawn using a stylus in such controlled settings [17]-[20]. In terms of verification performance when an online signature is drawn on personal mobile devices in an unsupervised setting, 3% EER was achieved with the algorithms proposed in [22], given that a user template is generated from enrolled samples across multiple sessions [21]. The study also showed that the drawn signature has an advantage over the 4-digit PIN in securing users against guessing attacks. In addition, the scheme has an intrinsic security advantage over shared-secret approaches against shoulder surfing attacks, as the system also leverages the user's signing behavior in verifying the user's identity. One direction for future work on this approach is to quantify the quality of enrolled signatures during the enrollment process, similar to a password strength meter, so that the system could warn users with weak signatures and prompt them to enter new ones.
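A signature "strength meter" of the kind proposed above could, for instance, look at how consistent the enrolled samples are with each other and how geometrically simple the signature is. The sketch below is a minimal illustration of that idea; the resampling length, the consistency measure, and both thresholds are arbitrary values chosen for the example, not values reported in the cited studies.

```python
import numpy as np

def resample(sig, n=64):
    """Resample an (N, 2) x-y trajectory to n points by linear interpolation."""
    sig = np.asarray(sig, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(sig))
    t_new = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(t_new, t_old, sig[:, k]) for k in range(2)])

def enrollment_quality(samples, consistency_limit=40.0, min_path_length=300.0):
    """Flag an enrollment set as weak if the samples disagree too much with each
    other or the signature is too short/simple to offer much variability."""
    resampled = [resample(s) for s in samples]
    # Pairwise mean point-to-point distance between enrolled samples.
    dists = [np.linalg.norm(a - b, axis=1).mean()
             for i, a in enumerate(resampled) for b in resampled[i + 1:]]
    consistency = float(np.mean(dists)) if dists else 0.0
    # Average drawn path length as a crude complexity proxy.
    complexity = float(np.mean([np.linalg.norm(np.diff(r, axis=0), axis=1).sum()
                                for r in resampled]))
    weak = consistency > consistency_limit or complexity < min_path_length
    return {"consistency": consistency, "complexity": complexity, "weak": weak}
```

Samples flagged as weak would trigger a re-enrollment prompt, analogous to a password strength meter rejecting short passwords.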

Multi-touch gesture authentication: This is an alternative authentication mechanism proposed by Sae-Bae et al. [13]-[15]. The approach is more specific to the touch interface, as it requires users to interact directly with the touch-sensitive display using multiple fingers simultaneously, as opposed to interacting through a pointing device. This natural and fluid multi-touch interaction can also be used as a proof of user identity. That is, the authentication credential can be drawn from something-you-know, which is the type of gesture password, as well as something-you-are, which is the movement characteristic of the user's fingertips that is unique among users. The biometric recognition performance of multi-touch gestures is reported at 4.37-19.23% EER, depending on the type of gesture being performed. In addition, the study has shown that the usability of the gestures, in terms of ease of use, excitement, pleasantness, and willingness to use, correlates well with verification performance. This is indeed an encouraging result towards a usable and secure authentication approach. Another study, by Shahzad and Liu, has also shown that gestures performed with 1-3 fingers, instead of 5-finger gestures, can be used for authentication [16]. One direction of future work in this research area is to improve recognition performance by developing more effective training strategies and recognition methods, as well as leveraging UI elements such as the background or visual feedback to enhance user consistency and memorability.

III. 2-D SENSING CAMERA INTERFACES

A 2-D camera is another sensor that is widely available on many off-the-shelf device categories, including mobile phones, tablets, smartwatches, and laptops, as well as Google Glass. In addition to its use as an image/video recorder, the sensor can be used as a communication channel between devices and users [60]. For example, a user can perform hand gestures to execute various device functionalities [61]-[63]. Regarding authentication methods that make use of a 2-D camera as a user interface, a number of approaches have been proposed.

Hand gesture authentication: This is one authentication approach that is applicable on a 2-D camera. In particular, Fong et al. have proposed to use a hand sign language as a mechanism to input textual password entries on 2-D camera interfaces [23] (see Figure 5 for an example of a hand sign alphabet that can be used to input a textual password). In turn, the system can make use of biometric information, i.e., hand shape and behavioral characteristics, as an additional authentication factor, thereby providing better security against shoulder surfing attacks or insiders. Technically, they make use of static images captured while a user performs the sign language to verify both the signer's identity and the signed content. The proposed signer and content verification was evaluated on a four-user set, and a best recognition performance of 93.75% was achieved. However, validation of the result on a larger set of users, as well as the usability and the time it takes unfamiliar users to memorize the sign language, has not yet been investigated.

Fig. 5: American sign language to input text password entry (http://www.iidc.indiana.edu/cedir/kidsweb/amachart.html)

Face recognition: This is one of the most well-known authentication mechanisms that use a 2-D camera as an interaction interface. In addition to airport security applications, the system has been offered as an unlocking mechanism on a number of operating system platforms, including Microsoft Windows, Ubuntu, and Android. Security of face recognition as an authentication mechanism: The verification performance of a face recognition system depends on the context of use. For still faces under uncontrolled illumination conditions, in the Face Recognition Vendor Test (FRVT) 2006 [24], 11% and 13% FRR were measured at 0.1% FAR on two datasets: Notre Dame and Sandia.
However, in the context of a mobile environment, face verification systems achieve much lower performance, at 10.9% HTER (Half Total Error Rate) [25], due to several factors including facial occlusion, pose, and facial expression. Figure 6 presents an example of a commercial face locking application. There is always a trade-off between the false acceptance and false rejection rates, especially when the system is employed in an uncontrolled usage scenario, and this could impact both the security and the usability of the system. In addition, the system is vulnerable to several security threats. For example, an attacker may fool the system by presenting a picture or video of the authorized user instead. To counter this attack, liveness detection mechanisms have been proposed: for example, the system may challenge the user to perform a task such as blinking an eye [26], or looking at and following a secret icon [64], in order to verify the liveness of the presented proof (the user's face). Nevertheless, the usability of these preventive mechanisms shall be investigated, since user adoption is an important factor for their success.

Fingerprint recognition: One plausible biometric authentication approach on this interface is to acquire a fingerprint impression from a camera and use it as a factor to authenticate users [27], [28]. However, one limitation of this approach is that the quality of a fingerprint impression taken with a camera is typically lower than one taken with a fingerprint scanner, due to several factors. These include variation in lighting conditions, the focus of the camera lens, image distortions, and scale and pose variations of the user's finger while its image is taken. As a result, verification performance can deteriorate significantly. The verification performance of fingerprint recognition with embedded cameras on mobile phones has been reported at 4.5% Equal Error Rate (EER) [27], as compared to around 0.02% when the fingerprint images are of very high quality [65]. In addition, its usability, in terms of verification time, failure-to-acquire rate, etc., and its vulnerability to replay attacks have not been addressed.

Apart from face and fingerprint recognition, there are other schemes that make use of the physical body to authenticate a user via a 2-D camera, including hand contour [66] and finger knuckle [67], [68], among others.

TABLE II: Authentication approaches on touch interfaces and their authentication factors

Approaches | Shared knowledge | Biometric traits | Refs
Textual password on a virtual keyboard | a sequence of alphanumeric characters | keystroke dynamics and typing patterns on the virtual keyboard | [6], [7]
Microsoft picture | a combination of tap, line, and circle gestures and their associated positions on a background image | - | [11]
Android screen-lock | a dot-connected pattern | swiping patterns | [10]
Online signature | a user's signature | signing patterns | [17]-[22]
Multi-touch gesture | a sequence of multi-touch gestures | multi-touch gesture patterns | [13]-[16]

Fig. 6: An example of a face locking application available on iOS (http://www.wired.com/2012/04/facevault-app-face-recognition/)

IV. 3-D SENSING CAMERA INTERFACES

A 3-D sensing camera, or depth camera, is another emerging interface that allows users to interact with computers in more tangible ways, i.e., ways that more directly reflect real-world interaction [69]. Depth-camera technology allows the depth of objects to be identified in addition to color information. Using this depth information, object tracking and segmentation algorithms can be more robust and accurate [70]. Therefore, it has been widely used as a human-computer interaction interface. Kinect, a body tracking game controller for the Xbox developed by Microsoft, and Leap Motion, a hand tracking computer controller, are two well-known examples of using a 3-D sensing camera as an interaction interface.

In-air signature: This is one authentication approach that has been proposed for the 3-D sensing camera. In this scheme, users are authenticated by writing their passwords in 3-D space. The system then accepts a user if the writing is close enough to the enrolled ones, and rejects it otherwise. The first variation of this approach made use of an accelerometer embedded in regular smartphone devices [29] as the sensor: a user signs his signature in 3-D space while holding the phone. Later, Tian et al. proposed a similar concept, KinWrite, using Kinect, a 3-D sensing camera, to track the fingertip motion while a user writes his password [30]. Their method for recognizing a user's 3-D signature achieved 100% precision at an average of 77% recall in their experiment involving 35 signatures from 18 subjects collected over a period of five months. Its usability and memorability, however, are still questionable and shall be investigated.

Gesture-based user authentication: This scheme was proposed by Lai et al. [33]. In this scheme, a user is authenticated by performing a whole-body gesture in front of a 3-D camera. The system then accepts the authentication interaction if and only if the input gesture is within close proximity of the enrolled gestures, i.e., the dissimilarity score between them is lower than a predefined threshold. The cross-session verification performance was reported at 10.5% and 1.5% for the pre-defined S-gesture and the user-defined gesture, respectively [34].
The system was also tested in different scenarios, including users wearing coats or carrying bags, where performance degradation was observed. One possible way to improve verification performance is to model intra-user variation across multiple sessions, which has proven successful in other biometric modalities, e.g., ECG [71] and speaker verification [72]. Another possibility is to develop a multi-factor authentication system by incorporating other contextual information, e.g., the face and a generic body skeleton model, which can be observed simultaneously by the same interface (a simple score-fusion sketch is given below). These are open challenges to be pursued in future work.

SignWave: This is a commercial authentication application that has been available in the Airspace store since 2013 [31]. Using this application, users can simply wave their hand in front of the Leap Motion controller to gain access to their computing devices. The application relies on a set of geometric features derived from the user's hand to authenticate the user. The application has been well received, having been downloaded more than a million times. Unfortunately, the security it offers is currently limited and it can be circumvented quite easily [32]. Several factors could contribute to this issue, including, but not limited to, the accuracy and precision of the raw information retrieved from the controller, the effectiveness of the verification algorithm, and the sensitivity of the controller itself to lighting and other environmental conditions. These are topics that shall be studied in order to develop a more robust solution for authenticating users by way of this hand-waving interaction. In addition, other visual information such as the hand's skin texture and appearance may be used as additional authentication factors to improve the system's verification accuracy.
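One way to combine the cues mentioned above (e.g., a body-gesture score, a face score, and a hand-geometry score captured by the same camera) is score-level fusion. The sketch below shows a weighted-sum fusion with per-modality score normalization; the weights, the calibration ranges, and the decision threshold are illustrative assumptions, not values taken from the cited systems.

```python
# Score-level fusion of similarity scores produced by different matchers.
# Each matcher is assumed to output "higher means more genuine" scores.

def min_max_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] using calibration bounds lo/hi."""
    if hi <= lo:
        raise ValueError("invalid calibration bounds")
    return min(max((score - lo) / (hi - lo), 0.0), 1.0)

def fuse_scores(scores, calibration, weights):
    """Weighted-sum fusion of normalized per-modality scores.

    scores:      {"gesture": 0.62, "face": 41.0, "hand": 0.8}
    calibration: {"gesture": (0.0, 1.0), "face": (0.0, 100.0), "hand": (0.0, 1.0)}
    weights:     {"gesture": 0.5, "face": 0.3, "hand": 0.2}
    """
    total_w = sum(weights[m] for m in scores)
    fused = sum(weights[m] * min_max_normalize(scores[m], *calibration[m])
                for m in scores) / total_w
    return fused

def accept(scores, calibration, weights, threshold=0.7):
    return fuse_scores(scores, calibration, weights) >= threshold
```

Requiring more than one modality to score well makes a geometric hand check harder to spoof in isolation, at the cost of a higher chance of rejecting genuine users, a trade-off discussed further in Section VIII.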

V. VOICE INTERFACES

The voice interface is a human-computer interaction platform that allows users to communicate with devices using hands-free, natural interaction. To enable this type of interaction, the system acquires an audio signal from the user using an embedded microphone and then translates the signal into machine commands. It has been used as one of the main communication channels between users and devices in several wearable device categories, including smartwatches, e.g., Samsung Galaxy Gear and Sony SmartWatch, and augmented reality headsets (with head-up display), e.g., Google Glass and Vuzix M100. In addition, it has been embedded in many handset devices, i.e., mobile phones, tablets, and laptops, as well as other smart devices, e.g., thermostats [73].

Voice authentication: In this approach, a user is authenticated by speaking at a normal level. Typically, voice authentication or speaker recognition systems are broadly classified into two variants: text-independent and text-dependent speaker verification [35]. The first refers to systems in which the speaking content of a user can be arbitrary [36], whereas the second refers to systems in which the speaking content has to be the same [37]. A system of the first type can prompt a user to speak a randomly assigned phrase and verify both the content and the speaker, thereby providing security against replay attacks. On the other hand, a system of the second type can use the speaking content as an additional authentication factor, thereby providing better recognition performance with lower false acceptance and false rejection rates; however, replay attacks become much more feasible [38]. In terms of verification performance, a wide range of results have been reported due to variations in devices, environmental factors, and verification techniques. For a mobile device dataset collected from 48 enrolled users and 40 impostors, Ram et al. [39] reported text-dependent verification performance at 7-11% equal error rate, depending on the experimental environment, when testing against samples of the same content spoken by impostors. Note that each sample in this experiment was a spoken ice-cream flavor phrase. For text-independent verification, Baloul et al. [40] reported 1-12% equal error rate, depending on system configuration and the number of training sentences, on the CMU PDA dataset collected from 16 users. Note that each sample in this experiment was a spoken sentence of 4 to 6 seconds, as opposed to the spoken phrase in the previous study. In terms of its usability for user authentication applications on personal computing devices, speech or voice authentication is reviewed as an insecure and intrusive choice in some situations (and device categories) [41].
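As a concrete illustration of text-independent speaker verification of the kind surveyed above, the sketch below enrolls a speaker with a small Gaussian mixture model over MFCC frames and scores a probe utterance against it with a log-likelihood ratio. The audio file names are hypothetical placeholders, the mixture size and threshold are arbitrary, and none of this reproduces the exact systems of [39] or [40]; it simply uses the standard MFCC/GMM recipe via librosa and scikit-learn.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path, sr=16000, n_mfcc=13):
    """Load an utterance and return its MFCC frames as an (n_frames, n_mfcc) array."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def enroll(utterance_paths, n_components=16):
    """Fit a diagonal-covariance GMM on all enrollment frames of one speaker."""
    frames = np.vstack([mfcc_frames(p) for p in utterance_paths])
    return GaussianMixture(n_components=n_components,
                           covariance_type="diag", random_state=0).fit(frames)

def verify(probe_path, speaker_gmm, background_gmm, threshold=0.5):
    """Accept if the log-likelihood ratio against a background model exceeds a threshold."""
    frames = mfcc_frames(probe_path)
    llr = speaker_gmm.score(frames) - background_gmm.score(frames)
    return llr > threshold, llr

# Hypothetical usage (file names are placeholders):
# speaker = enroll(["alice_enroll_1.wav", "alice_enroll_2.wav"])
# background = enroll(["other_speakers_pool.wav"], n_components=64)
# accepted, score = verify("alice_probe.wav", speaker, background)
```

A text-dependent system would additionally align the probe against the enrolled passphrase (e.g., with DTW over the same MFCC frames) so that both the content and the speaker are checked.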
In addition, several other issues could impact the security and usability of this authentication approach and should be taken into consideration prior to its deployment. These include human factors such as the Lombard effect, i.e., speakers increasing their vocal effort to compensate for the signal-to-noise ratio in noisy conditions [39], emotions, vocal organ illness, aging, and level of attention [74]; device and environmental factors such as the level of background noise [75], acoustic disturbances like echo, and the microphone frequency response; and attack vectors such as computerized imitation [74].

Whisper authentication: This is a derivative of voice authentication that is less intrusive to users and more resistant to the secret phrase being overheard. With respect to text-independent whisper authentication systems, the techniques from regular voice authentication systems must be modified due to the intrinsic difference between the signal spectra of whispered and normal speech [76]. In particular, Xiaohong and Heming [42] have demonstrated that adaptive fractional Fourier transform cepstral coefficients are a more effective feature set than MFCC coefficients for identifying a speaker from a whisper signal; they were able to achieve almost 100% identification accuracy. However, to the best of our knowledge, a study of the whisper signal as a biometric trait (or a text-independent whisper authentication system) has not been conducted. In addition, a usability study of whisper authentication approaches has never been performed.

VI. EYE TRACKING INTERFACES

The eye tracking interface is one of the latest user interface technologies expected to be used in consumer electronic devices and smart vehicles in the near future [77], [78]. With this technology, users can communicate with devices hands-free using gaze interaction. In general, there are two types of eye tracker hardware: wearable and fixed-station ones [79]. To accurately capture the user's eye movements, the sensor hardware may include optical sensors, near-infrared emitting diodes, and imaging devices.

Gaze authentication: With respect to user authentication applications, the idea of using gaze movement as a way for users to enter a secret into the authentication system has been proposed, and experiments to evaluate the usability of such systems have been conducted. In particular, in 2007, De Luca et al. proposed and evaluated gaze entry for PIN authentication using dwell-time, look-and-shoot, and gaze-gesture interaction techniques [43]. Another variation of authentication using gaze interaction is to use gaze entry as a way to enter the set of clicked points in a cued-recall graphical authentication scheme [44], [45]. Note that cued-recall graphical authentication is an authentication system in which a user is expected to choose and memorize a certain number of points on background images, one point per image. Its advantages include security against shoulder surfing attacks, since this type of interaction is hardly observable by bystanders. In addition, as eye movement is one of the behavioral biometric modalities that has been studied [46], gaze authentication systems could utilize this information to provide a secondary security layer on top of the user's gaze entry [47]. Moreover, other biometric traits that can be monitored non-intrusively, such as the face or iris, could also be used as part of the authentication factors to further strengthen the security and verification performance of the system. However, sensor calibration and sensitivity, user familiarity with the gaze entry system, the usability of the authentication interaction, and the problem of separating intended from unintended gaze interaction are challenges that shall be addressed [43], [44].
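To illustrate the dwell-time entry technique mentioned above, the sketch below turns a stream of gaze samples into PIN digits: a digit is registered when the gaze stays inside that key's on-screen region for longer than a dwell threshold. The keypad geometry and the 800 ms dwell threshold are assumptions made for the example and do not correspond to the parameters evaluated in [43].

```python
# Map a stream of (timestamp_ms, x, y) gaze samples to PIN digits using dwell time.

KEY_SIZE = 100  # each digit key is a 100 x 100 px square on a 3 x 4 keypad
KEYPAD = {str(d): ((d - 1) % 3 * KEY_SIZE, (d - 1) // 3 * KEY_SIZE) for d in range(1, 10)}
KEYPAD["0"] = (KEY_SIZE, 3 * KEY_SIZE)  # "0" sits under the middle column

def key_at(x, y):
    """Return the digit whose key region contains the gaze point, or None."""
    for digit, (kx, ky) in KEYPAD.items():
        if kx <= x < kx + KEY_SIZE and ky <= y < ky + KEY_SIZE:
            return digit
    return None

def dwell_pin_entry(gaze_samples, dwell_ms=800):
    """Emit a digit whenever the gaze dwells on the same key for dwell_ms."""
    digits, current, dwell_start = [], None, None
    for t, x, y in gaze_samples:
        digit = key_at(x, y)
        if digit != current:                   # gaze moved to a different key (or off-keypad)
            current, dwell_start = digit, t
        elif digit is not None and t - dwell_start >= dwell_ms:
            digits.append(digit)               # dwell threshold reached: register the digit
            current, dwell_start = None, None  # restart so the digit is not re-emitted immediately
    return "".join(digits)
```

Combining this entry channel with the eye-movement biometrics mentioned above would add a who-you-are factor on top of the entered PIN.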

VII. BRAIN COMPUTER INTERFACE

The Brain Computer Interface is another communication channel between users and machines [80] that does not require physical movement by the user. Currently, electroencephalographic (EEG) activity is used to measure the electrical signals of brain activity. Using this technology, many vendors have set out to offer commercial-grade BCI products, including NeuroSky and Intel [81].

Fig. 7: NeuroSky Mindset

Brain wave authentication: Motivated by the advancement of Brain Computer Interface (BCI) technology, J. Thorpe et al. envisioned Pass-thought authentication, a two-factor authentication method resistant to physical observation [48]. The first factor is a thought secret, whereas the second is the user's brain signal, which is unique among users. Later, in 2007, the feasibility of using brain activity for user authentication was studied by S. Marcel and J. Millán [49]. In particular, they set up an experiment to evaluate the biometric performance of such an approach by asking 9 users to perform three brain tasks while wearing an electrode cap that records the brain wave signal (EEG). For a given user, the experiment ran for three days, with four sessions per day and 5-10 minute breaks between sessions. They achieved a verification performance of 35.5% HTER (Half Total Error Rate) when training and testing samples are drawn from different days, and it degrades over time. However, the performance improves to 12.9% HTER when training samples are drawn from two days instead of one. This initial result shows some potential but needs major improvement in terms of both performance and usability. Recently, J. Chuang et al. [50] investigated the scheme using consumer-grade EEG headset devices (see Figure 7), as opposed to clinical-grade ones. In their paper, the performance, usability, and recall of user authentication using brain wave signals are studied. In particular, they designed 7 different brain tasks and performed a two-session experiment on 15 subjects. Each user is asked to select a secret for each of the four tasks in the first session and to recall it in the second session. In addition, in both sessions, a user is asked to perform the tasks and to answer a series of usability questions. The results demonstrated the feasibility of the scheme using consumer-grade hardware by presenting a performance improvement over the previous work and on a larger population. However, one challenge for research in this area is to design generic tasks that are both usable and secure. In addition, while an initial investigation of the robustness of the scheme against impersonation attacks has been conducted [82], the study is limited to a set of three attackers, whose thought processes might differ from the users' due to their native languages and the social environments by which they have been surrounded. Future work addressing this impersonation attack more intensively is warranted.
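The EEG signal described earlier (a multi-channel time series sampled at up to 1 kHz) is commonly summarized by band-power features before any matching is attempted. The sketch below computes per-channel power in the classic delta/theta/alpha/beta bands with Welch's method and compares a probe recording to an enrolled template by cosine similarity; the band limits, the similarity measure, and the threshold are illustrative assumptions, not the method of [49] or [50].

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(eeg, fs=256):
    """eeg: (n_channels, n_samples) array. Returns per-channel band powers, flattened."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)   # psd: (n_channels, n_freqs)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))      # mean power in the band per channel
    return np.concatenate(feats)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_eeg(probe, enrolled_recordings, fs=256, threshold=0.95):
    """Accept if the probe's band-power profile is close to the mean enrolled profile."""
    template = np.mean([band_power_features(r, fs) for r in enrolled_recordings], axis=0)
    score = cosine_similarity(band_power_features(probe, fs), template)
    return score >= threshold, score
```

In a pass-thought setting, the mental task being performed would additionally act as the what-you-know factor, with the band-power profile supplying the who-you-are factor.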
VIII. EVALUATION METRICS FOR USER-DEVICE AUTHENTICATION

Since the ultimate goal of an authentication mechanism is to prevent unauthorized access to a user's information, security benefit against various threat models is one of the most important criteria in evaluating authentication mechanisms. However, similar to any other security mechanism, an authentication mechanism will only be useful if it is adopted by users. Therefore, its usability aspects need to be investigated along with its security potential. With respect to an evaluation framework for web-based authentication mechanisms, a comprehensive list of criteria to assess security, usability, and deployability was proposed by Bonneau et al. [5]. In contrast to their paper, however, this paper focuses on user-device authentication. That is, the authentication process is expected to take place locally and with hardware sensors readily available on the device. Therefore, we do not consider deployability properties and other security properties that relate to remote or web authentication mechanisms rather than local ones. A summary of the security and usability criteria that shall be used to evaluate the different types of authentication mechanisms, depending on which factors are used to authenticate users, derived from their list, is as follows.

A. Security

Physical observation: An attacker in this threat model can observe, including record a video of, an authorized authentication interaction while it is being performed, before attempting to be authenticated by the system. Security against this threat model is becoming a more important feature of authentication mechanisms due to the change in usage patterns of computing devices. First, many devices are often used in a social context and in public spaces. In addition, users are surrounded not only by public surveillance systems but also by handheld recording devices. In such situations, the assumption that users can enter a password or other shared-secret credentials into their devices privately and securely no longer holds.

One direction for a knowledge-based authentication mechanism to prevent this attack is to use challenge-response approaches, where a user only presents partial information about the secret at authentication time. However, this benefit typically comes at the cost of reduced usability. Another direction is to also rely on other factors to authenticate users.

Targeted impersonation: An attacker in this threat model attempts to impersonate a victim by using knowledge of the victim's personal information. For example, it is known that many people use some form of their date of birth as their PIN; therefore, PIN-based authentication is highly vulnerable to this attack. In addition, some graphical approaches that allow users to select background images of their choice and perform interactions on them are also vulnerable to this attack, since many users will choose family photos and perform the interaction on the faces of their family members.

Random guessing (throttled or unthrottled): An attacker in this threat model does not have access to information about the authentication credential and attempts to perform the authentication interaction in order to gain legitimate access to the system. This is one of the most common threats, since ever more sensitive information is being stored on, or accessed from, personal computing devices. This information is another main motivation for device theft, which can lead to several malicious activities, e.g., financial fraud, identity theft, and data loss.

Cross-site attack: An attacker in this threat model attempts to use a credential compromised on one system on another system in order to gain access as an authorized user. This is a particularly serious threat for authentication systems that store physiological biometric traits without an appropriate template protection mechanism, since users cannot easily replace these traits once they are compromised. In addition, any knowledge-based authentication factor that is widely used across different sites is also vulnerable to this attack, since it is known that users often reuse such factors across sites. One countermeasure is for developers to deploy an appropriate hashing technique to protect the privacy of the stored credential.

B. Usability

Memorability: This characteristic refers to the amount of cognitive burden imposed on users in order to use the system, including the knowledge interference that can occur when users have to use multiple credentials for different systems or devices. This is an important usability metric to consider, particularly for mechanisms that rely on a secret shared between the user and the system as one of the authentication factors.

False rejection: This characteristic refers to the capability of the system to verify and grant access to genuine users when they present legitimate authentication credentials. Similar to memorability for shared-secret authentication approaches, this is a particularly important property for approaches that rely on biometric information as one of the authentication factors. That is, in biometric authentication, a user who presents a biometric trait whose representation (or feature set) is within a close proximity of the enrolled template will be granted access. As a result, two types of error can occur: false acceptance and false rejection. While the first could cause a security concern, the latter could lead to either an increase in the number of authentication attempts or to user rejection and temporary lockout, both of which cause usability issues. Therefore, for an ideal system, these two rates should be kept very low. For many biometric modalities, there are several environmental factors that could affect the verification performance of the system. These include, but are not limited to, the level of background noise (for voice), lighting conditions (for image or video), and humidity (for capacitive sensors such as fingerprint readers).
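The false acceptance and false rejection rates discussed above, and the EER and HTER figures quoted throughout this survey, can be computed directly from genuine and impostor score lists. The sketch below shows one straightforward way to do so, under the assumption that higher scores mean "more likely genuine"; the example score arrays are synthetic.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """False acceptance and false rejection rates at a given decision threshold."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    far = np.mean(impostor >= threshold)   # impostors wrongly accepted
    frr = np.mean(genuine < threshold)     # genuine users wrongly rejected
    return far, frr

def eer(genuine, impostor):
    """Equal Error Rate: sweep candidate thresholds and find where FAR is closest to FRR."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    rates = [far_frr(genuine, impostor, t) for t in thresholds]
    far, frr = min(rates, key=lambda r: abs(r[0] - r[1]))
    return (far + frr) / 2.0

def hter(genuine, impostor, threshold):
    """Half Total Error Rate: the average of FAR and FRR at a fixed threshold."""
    far, frr = far_frr(genuine, impostor, threshold)
    return (far + frr) / 2.0

# Synthetic example scores (higher = more similar to the enrolled template).
rng = np.random.default_rng(0)
genuine_scores = rng.normal(0.8, 0.1, 500)
impostor_scores = rng.normal(0.5, 0.1, 500)
print(eer(genuine_scores, impostor_scores), hter(genuine_scores, impostor_scores, 0.65))
```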
Ease of use: This characteristic refers to the user effort and efficiency involved in using an authentication mechanism, including the ease of learning it. Since there is an implicit economic cost involved when users decide on security issues [83], this is one of the most important properties for an authentication system to be adopted.

Typically, the authentication factors derived from an authentication interaction can be shared knowledge, the user's biometric traits, or a combination of both, where each factor has its own set of advantages and disadvantages. Based on the type of authentication credential, the evaluation of the authentication approaches reviewed in this paper is summarized in Table III; the security and usability evaluation criteria are the ones described above. Regarding fusion approaches, the more credentials used by the system, the harder the system is to circumvent. However, this also increases the chance of genuine and honest users being rejected. Therefore, it is always important to strike a balance in the security-usability trade-off.

TABLE III: Summary of security and usability for user-device authentication systems that rely on each type of authentication factor, where the evaluation criteria are derived from Bonneau's list [5]

Criteria | Knowledge: Textual | Knowledge: Graphical | Biometric: Physical | Biometric: Behavioral | Fusion approaches
Security advantages
- Physical observation | depends on the required user interaction and feedback system (all factor types)
- Targeted impersonation | Low | Medium | Low | Medium | The highest offered (at least)
- Random guessing | Low to medium | Low | High | Medium to high | The highest offered (at least)
- Cross-site attack | Medium | High | Low | High | The highest offered (at least)
Usability advantages
- Memorability | Low | Medium | High | High | The lowest offered
- Infrequent false rejection | High | High | Medium | Low | In between
- Ease of use | depends on the required user interaction and feedback system (all factor types)

IX. DISCUSSION AND FUTURE WORK

The capability of these emerging interfaces to detect a large amount of user interaction information has created a unique opportunity to redesign and develop authentication mechanisms that provide intrinsic security to the user. Specifically, much research has shown that it is feasible to design user authentication mechanisms that use this information to verify the identity of a user based on who-they-are, in addition to what-they-know. This is one interesting direction for enhancing the security of authentication mechanisms. In addition, many off-the-shelf computing devices entering the consumer market are equipped with multiple interfaces that enable users to communicate with the device using an array of interactions. Therefore, one research direction is to form a multi-factor authentication system by simultaneously using multiple interfaces to capture different authentication factors while users are performing an authentication interaction.

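Returning to the countermeasure noted for the cross-site attack, a salted, iterated hash prevents a credential stolen from one system's store from being read off and replayed elsewhere. The sketch below uses PBKDF2 from the Python standard library; the iteration count and salt length are illustrative, and a real deployment should follow current published guidance.

# Sketch: storing a knowledge-based credential as a salted, iterated hash
# instead of in plaintext. Parameters are illustrative only.
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative; follow current guidance for real systems

def enroll(secret: str):
    """Return (salt, digest) to store in place of the plaintext secret."""
    salt = os.urandom(16)  # a fresh random salt per user defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, ITERATIONS)
    return salt, digest

def verify(secret: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, ITERATIONS)
    return hmac.compare_digest(digest, stored)

salt, stored = enroll("1472")
print(verify("1472", salt, stored))   # True
print(verify("0000", salt, stored))   # False

Note that this applies to knowledge-based credentials; biometric templates, which cannot be replaced once leaked, require the template protection mechanisms mentioned above instead.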
TABLE III: Summary of security and usability for user-device authentication systems that rely on each type of authentication factor, with evaluation criteria derived from Bonneau's list [5]. Columns: Knowledge (Textual, Graphical), Biometric (Physical, Behavioral), and Fusion approaches.

Security advantages:
- Physical observation: depends on the required user interaction and feedback system (all factor types)
- Targeted impersonation: Textual: Low; Graphical: Medium; Physical: Low; Behavioral: Medium; Fusion: the highest offered (at least)
- Random guessing: Textual: Low to medium; Graphical: Low; Physical: High; Behavioral: Medium to high; Fusion: the highest offered (at least)
- Cross-site attack: Textual: Medium; Graphical: High; Physical: Low; Behavioral: High; Fusion: the highest offered (at least)

Usability advantages:
- Memorability: Textual: Low; Graphical: Medium; Physical: High; Behavioral: High; Fusion: the lowest offered
- Infrequent false rejection: Textual: High; Graphical: High; Physical: Medium; Behavioral: Low; Fusion: in between
- Ease of use: depends on the required user interaction and feedback system (all factor types)

For example, with gaze or eye movement as the authentication interaction, the geometry and appearance of the user's face could easily be captured at the same time using embedded 2-D and 3-D cameras and used as additional authentication factors, making the system much harder to circumvent. Moreover, studies have shown that each authentication approach has its own limitations in terms of the environmental conditions under which it can operate and its own weaknesses under different security threat models. For wearable and augmented devices, the use-case scenarios are even more diverse, and the environments and situations that surround the user are unrestricted. Therefore, another interesting direction for future work is to design a mechanism that allows users to enter the same secret in multiple ways, depending on the hardware sensors available on the device, the surrounding environment, and the threat model at that moment. For example, on Google Glass, users can communicate with the device in several ways, including voice commands, hand gestures, and tapping or sliding on the touchpad, and their suitability depends on the situation: the voice channel is not appropriate in places where people are expected to be quiet and is less effective in noisy places, while hand gestures are not robust in a dark place or against a cluttered background. A mechanism that lets the user operate in multiple ways would therefore increase the overall usability of the system.

Lastly, user effort and accessibility when authenticating to the system are other important factors to evaluate, in addition to the security provided to the user, since any security tool is useful only if users adopt it. Memorability is a further issue to consider when something-the-user-knows is part of the authentication credential. These factors should therefore be considered when designing and evaluating alternative authentication mechanisms.

REFERENCES

[1] The Symantec Smartphone Honey Stick Project. http://www.symantec.com/about/news/resources/press kits/detail.jsp?pkid=symantec-smartphone-honey-stick-project, 2012. [Online; accessed 07-Sep-2014].
[2] S. Egelman, S. Jain, R. S. Portnoff, K. Liao, S. Consolvo, and D. Wagner, Are you ready to lock? Understanding user motivations for smartphone locking behaviors, in Proceedings of the 2014 ACM SIGSAC conference on Computer & communications security, ACM, 2014.
[3] D. Van Bruggen, S. Liu, M. Kajzer, A. Striegel, C. R. Crowell, and J. D'Arcy, Modifying smartphone user locking behavior, in Proceedings of the Ninth Symposium on Usable Privacy and Security, p. 10, ACM, 2013.
[4] Fast IDentity Online Alliance. https://fidoalliance.org/, 2014. [Online; accessed 25-Sep-2014].
[5] J. Bonneau, C. Herley, P. C. Van Oorschot, and F. Stajano, The quest to replace passwords: A framework for comparative evaluation of web authentication schemes, in Security and Privacy (SP), 2012 IEEE Symposium on, pp. 553-567, IEEE, 2012.
[6] B. Draffin, J. Zhu, and J. Zhang, KeySens: Passive user authentication through micro-behavior modeling of soft keyboard interaction, in Mobile Computing, Applications, and Services, pp. 184-201, Springer, 2014.
[7] U. Burgbacher and K. Hinrichs, An implicit author verification system for text messages based on gesture typing biometrics, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '14, (New York, NY, USA), pp. 2951-2954, ACM, 2014.
[8] I. Jermyn, A. Mayer, F. Monrose, M. K. Reiter, A. D. Rubin, et al., The design and analysis of graphical passwords, in Proceedings of the 8th USENIX Security Symposium, pp. 1-14, Washington DC, 1999.
[9] P. Andriotis, T. Tryfonas, G. Oikonomou, and C. Yildiz, A pilot study on the security of pattern screen-lock methods and soft side channel attacks, in Proceedings of the sixth ACM conference on Security and privacy in wireless and mobile networks, pp. 1-6, ACM, 2013.
[10] A. De Luca, A. Hang, F. Brudy, C. Lindner, and H. Hussmann, Touch me once and I know it's you!: implicit authentication based on touch screen patterns, in Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems, pp. 987-996, ACM, 2012.
[11] Z. Zhao, G.-J. Ahn, J.-J. Seo, and H. Hu, On the security of picture gesture authentication, in Proceedings of the 22nd USENIX conference on Security, pp. 383-398, USENIX Association, 2013.
[12] Z. Min, B. Ryan, and S. Atkinson, The factors affect user behaviour in a picture-based user authentication system: PixelPin, in Computer Science and Convergence, pp. 31-42, Springer, 2012.
[13] N. Sae-Bae, K. Ahmed, K. Isbister, and N. Memon, Biometric-rich gestures: a novel approach to authentication on multi-touch devices, in Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems, pp. 977-986, ACM, 2012.
[14] N. Sae-Bae, N. Memon, and K. Isbister, Investigating multi-touch gestures as a novel biometric modality, in Biometrics: Theory, Applications and Systems (BTAS), 2012 IEEE Fifth International Conference on, pp. 156-161, IEEE, 2012.
[15] N. Sae-Bae, N. Memon, K. Isbister, and K. Ahmed, Multitouch gesture-based authentication, IEEE Transactions on Information Forensics and Security, vol. 9, no. 4, pp. 568-582, 2014.
[16] M. Shahzad, A. X. Liu, and A. Samuel, Secure unlocking of mobile touch screen devices by simple gestures: You can see it but you can not do it, in Proceedings of the 19th Annual International Conference on Mobile Computing & Networking, MobiCom '13, (New York, NY, USA), pp. 39-50, ACM, 2013.
[17] R. Plamondon and G. Lorette, Automatic signature verification and writer identification - the state of the art, Pattern Recognition, vol. 22, no. 2, pp. 107-131, 1989.
[18] M. Faundez-Zanuy, On-line signature recognition based on VQ-DTW, Pattern Recognition, vol. 40, no. 3, pp. 981-992, 2007.
[19] J. Ortega-Garcia, J. Fierrez-Aguilar, D. Simon, J. Gonzalez, M. Faundez-Zanuy, V. Espinosa, A. Satue, I. Hernaez, J.-J. Igarza, C. Vivaracho, et al., MCYT baseline corpus: a bimodal biometric database, IEE Proceedings - Vision, Image and Signal Processing, vol. 150, no. 6, pp. 395-401, 2003.
[20] A. Kholmatov and B. Yanikoglu, SUSIG: an on-line signature database, associated protocols and benchmark results, Pattern Analysis and Applications, vol. 12, no. 3, pp. 227-236, 2009.

[21] N. Sae-Bae and N. Memon, Online signature verification on mobile devices, Information Forensics and Security, IEEE Transactions on, vol. 9, pp. 933-947, June 2014.
[22] N. Sae-Bae and N. Memon, A simple and effective method for online signature verification, in Biometrics Special Interest Group (BIOSIG), 2013 International Conference of the, pp. 1-12, IEEE, 2013.
[23] S. Fong, Y. Zhuang, and I. Fister, A biometric authentication model using hand gesture images, Biomedical Engineering Online, vol. 12, no. 1, p. 111, 2013.
[24] P. J. Phillips, W. T. Scruggs, A. J. O'Toole, P. J. Flynn, K. W. Bowyer, C. L. Schott, and M. Sharpe, FRVT 2006 and ICE 2006 large-scale experimental results, Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 32, no. 5, pp. 831-846, 2010.
[25] S. Marcel, C. McCool, P. Matějka, T. Ahonen, J. Černocký, S. Chakraborty, V. Balasubramanian, S. Panchanathan, C. H. Chan, J. Kittler, et al., On the results of the first mobile biometry (MOBIO) face and speaker verification evaluation, in Recognizing Patterns in Signals, Speech, Images and Videos, pp. 210-225, Springer, 2010.
[26] G. Pan, Z. Wu, and L. Sun, Liveness detection for face recognition, Recent Advances in Face Recognition, pp. 236-252, 2008.
[27] M. O. Derawi, B. Yang, and C. Busch, Fingerprint recognition with embedded cameras on mobile phones, in Security and Privacy in Mobile Information and Communication Systems, pp. 136-147, Springer, 2012.
[28] B. Y. Hiew, A. B. J. Teoh, and O. S. Yin, A secure digital camera based fingerprint verification system, Journal of Visual Communication and Image Representation, vol. 21, no. 3, pp. 219-231, 2010.
[29] J. G. Casanova, C. S. Ávila, A. de Santos Sierra, G. B. del Pozo, and V. J. Vera, A real-time in-air signature biometric technique using a mobile device embedding an accelerometer, in Networked Digital Technologies, pp. 497-503, Springer, 2010.
[30] J. Tian, C. Qu, W. Xu, and S. Wang, KinWrite: Handwriting-based authentication using Kinect, in Proceedings of the 20th Annual Network & Distributed System Security Symposium, NDSS, 2013.
[31] Battelle SignWave™ Unlock App for Leap Motion Lets You Wave Goodbye to Passwords. http://www.marketwired.com/press-release/, 2013. [Online; accessed 09-Sep-2013].
[32] Hacking Leap Motion apps: Security researchers spoof biometric auto-login system. http://venturebeat.com/2013/08/13/, 2013. [Online; accessed 09-Sep-2013].
[33] K. Lai, J. Konrad, and P. Ishwar, Towards gesture-based user authentication, in Advanced Video and Signal-Based Surveillance (AVSS), 2012 IEEE Ninth International Conference on, pp. 282-287, IEEE, 2012.
[34] J. Wu, J. Konrad, and P. Ishwar, The value of multiple viewpoints in gesture-based user authentication, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 90-97, 2014.
[35] M. Rogowski, K. Saeed, M. Rybnik, M. Tabedzki, and M. Adamski, User authentication for mobile devices, in Computer Information Systems and Industrial Management, pp. 47-58, Springer, 2013.
[36] D. A. Reynolds and R. C. Rose, Robust text-independent speaker identification using Gaussian mixture speaker models, Speech and Audio Processing, IEEE Transactions on, vol. 3, no. 1, pp. 72-83, 1995.
[37] S. Hayakawa and F. Itakura, Text-dependent speaker recognition using the information in the higher frequency band, in Acoustics, Speech, and Signal Processing, 1994 IEEE International Conference on (ICASSP-94), vol. 1, pp. I-137, IEEE, 1994.
[38] S. Marinov and H. i Skövde, Text dependent and text independent speaker verification systems: technology and applications, 2003.
[39] R. H. Woo, A. Park, and T. J. Hazen, The MIT mobile device speaker verification corpus: data collection and preliminary experiments, in Speaker and Language Recognition Workshop (IEEE Odyssey 2006), pp. 1-6, IEEE, 2006.
[40] M. Baloul, E. Cherrier, and C. Rosenberger, Challenge-based speaker recognition for mobile authentication, in Biometrics Special Interest Group (BIOSIG), 2012 BIOSIG - Proceedings of the International Conference of the, pp. 1-7, IEEE, 2012.
[41] S. Trewin, C. Swart, L. Koved, J. Martino, K. Singh, and S. Ben-David, Biometric authentication on a mobile device: a study of user effort, error and task disruption, in Proceedings of the 28th Annual Computer Security Applications Conference, pp. 159-168, ACM, 2012.
[42] Q. Xiaohong and Z. Heming, Adaptive order of fractional Fourier transform for whispered speaker identification, in Automatic Control and Artificial Intelligence (ACAI 2012), International Conference on, pp. 363-366, IET, 2012.
[43] A. De Luca, R. Weiss, and H. Drewes, Evaluation of eye-gaze interaction methods for security enhanced PIN-entry, in Proceedings of the 19th Australasian Conference on Computer-Human Interaction: Entertaining User Interfaces, OZCHI '07, (New York, NY, USA), pp. 199-202, ACM, 2007.
[44] A. Forget, S. Chiasson, and R. Biddle, Shoulder-surfing resistance with eye-gaze entry in cued-recall graphical passwords, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1107-1110, ACM, 2010.
[45] A. Bulling, F. Alt, and A. Schmidt, Increasing the security of gaze-based cued-recall graphical passwords using saliency masks, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 3011-3020, ACM, 2012.
[46] O. V. Komogortsev, A. Karpov, L. R. Price, and C. Aragon, Biometric authentication via oculomotor plant characteristics, in Biometrics (ICB), 2012 5th IAPR International Conference on, pp. 413-420, IEEE, 2012.
[47] M. Brooks, C. Aragon, and O. Komogortsev, Poster: User centered design and evaluation of an eye movement-based biometric authentication system.
[48] J. Thorpe, P. C. van Oorschot, and A. Somayaji, Pass-thoughts: authenticating with our minds, in Proceedings of the 2005 workshop on New security paradigms, pp. 45-56, ACM, 2005.
[49] S. Marcel and J. d. R. Millán, Person authentication using brainwaves (EEG) and maximum a posteriori model adaptation, Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 29, no. 4, pp. 743-752, 2007.
[50] J. Chuang, C. W. Hamilton Nguyen, and B. Johnson, I think, therefore I am: Usability and security of authentication using brainwaves, in Proceedings of the Workshop on Usable Security, USEC, vol. 13, 2013.
[51] D. Shukla, R. Kumar, A. Serwadda, and V. V. Phoha, Beware, your hands reveal your secrets, in Proceedings of the 2014 ACM SIGSAC conference on Computer & communications security, ACM, 2014.
[52] Q. Yue, Z. Ling, X. Fu, B. Liu, W. Yu, and W. Zhao, My Google Glass sees your passwords!
[53] L. Findlater, J. O. Wobbrock, and D. Wigdor, Typing on flat glass: examining ten-finger expert typing patterns on touch surfaces, in Proceedings of the 2011 annual conference on Human factors in computing systems, CHI '11, (New York, NY, USA), pp. 2453-2462, ACM, 2011.
[54] D. Vogel and P. Baudisch, Shift: A technique for operating pen-based interfaces using touch, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07, (New York, NY, USA), pp. 657-666, ACM, 2007.
[55] H. Lü and Y. Li, Gesture avatar: A technique for operating mobile user interfaces using gestures, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '11, (New York, NY, USA), pp. 207-216, ACM, 2011.
[56] D. Wigdor, C. Forlines, P. Baudisch, J. Barnwell, and C. Shen, Lucid touch: A see-through mobile device, in Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, UIST '07, (New York, NY, USA), pp. 269-278, ACM, 2007.
[57] H.-Y. Chiang and S. Chiasson, Improving user authentication on mobile devices: A touchscreen graphical password, in Proceedings of the 15th international conference on Human-computer interaction with mobile devices and services, pp. 251-260, ACM, 2013.
[58] P. Kenny, G. Boulianne, P. Ouellet, and P. Dumouchel, Speaker and session variability in GMM-based speaker verification, Audio, Speech, and Language Processing, IEEE Transactions on, vol. 15, no. 4, pp. 1448-1460, 2007.
[59] R. Wallace, M. McLaren, C. McCool, and S. Marcel, Inter-session variability modelling and joint factor analysis for face authentication, in Biometrics (IJCB), 2011 International Joint Conference on, pp. 1-8, IEEE, 2011.
[60] V. I. Pavlovic, R. Sharma, and T. S. Huang, Visual interpretation of hand gestures for human-computer interaction: A review, Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 19, no. 7, pp. 677-695, 1997.
[61] X. Zabulis, H. Baltzakis, and A. Argyros, Vision-based hand gesture recognition for human-computer interaction, The Universal Access Handbook, LEA, 2009.
[62] Y. Wu and T. S. Huang, Vision-based gesture recognition: A review, Urbana, vol. 51, p. 61801, 1999.
[63] A. Erol, G. Bebis, M. Nicolescu, R. D. Boyle, and X. Twombly, Vision-based hand pose estimation: A review, Computer Vision and Image Understanding, vol. 108, no. 1, pp. 52-73, 2007.
[64] A. Boehm, D. Chen, M. Frank, L. Huang, C. Kuo, T. Lolic, I. Martinovic, and D. Song, SAFE: Secure authentication with face and eyes, in Proceedings of the International Conference on Security and Privacy in Mobile Information and Communication Systems, Citeseer, 2013.

[65] D. Gafurov, P. Bours, B. Yang, and C. Busch, GUC100 multi-scanner fingerprint database for in-house (semi-public) performance and interoperability evaluation, in Computational Science and Its Applications (ICCSA), 2010 International Conference on, pp. 303-306, IEEE, 2010.
[66] D. Schmidt, M. K. Chong, and H. Gellersen, HandsDown: hand-contour-based user identification for interactive surfaces, in Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, pp. 432-441, ACM, 2010.
[67] K. Cheng and A. Kumar, Contactless finger knuckle identification using smartphones, in Biometrics Special Interest Group (BIOSIG), 2012 BIOSIG - Proceedings of the International Conference of the, pp. 1-6, 2012.
[68] L. Zhang, L. Zhang, D. Zhang, and H. Zhu, Online finger-knuckle-print verification for personal authentication, Pattern Recognition, vol. 43, no. 7, pp. 2560-2571, 2010.
[69] Y. Duan, H. Deng, and F. Wang, Depth camera in human-computer interaction: An overview, in Intelligent Networks and Intelligent Systems (ICINIS), 2012 Fifth International Conference on, pp. 25-28, Nov 2012.
[70] Y. Sato, M. Saito, and H. Koike, Real-time input of 3D pose and gestures of a user's hand and its applications for HCI, in Virtual Reality, 2001. Proceedings. IEEE, pp. 79-86, IEEE, 2001.
[71] I. Odinaka, P. Lai, A. Kaplan, J. O'Sullivan, E. Sirevaag, and J. Rohrbaugh, ECG biometric recognition: A comparative analysis, 2012.
[72] R. Vogt and S. Sridharan, Explicit modelling of session variability for speaker verification, Computer Speech & Language, vol. 22, no. 1, pp. 17-38, 2008.
[73] Honeywell Wi-Fi Smart Thermostat with Voice Control review. http://reviews.cnet.com/smart-home/honeywell-wi-fi-smart/4505-9788 7-35827868.html, 2013. [Online; accessed 04-Dec-2013].
[74] T. Kinnunen and H. Li, An overview of text-independent speaker recognition: From features to supervectors, Speech Communication, vol. 52, no. 1, pp. 12-40, 2010.
[75] P. Tresadern, C. McCool, N. Poh, P. Matejka, A. Hadid, C. Levy, T. Cootes, and S. Marcel, Mobile biometrics (MOBIO): Joint face and voice verification for a mobile platform, IEEE Pervasive Computing, vol. 99, 2012.
[76] A. Neustein and H. A. Patil, Forensic Speaker Recognition: Law Enforcement and Counter-Terrorism. Springer, 2012.
[77] General Motors to fit eye-tracking technology that reveals when a driver is not paying attention to the road. http://www.dailymail.co.uk/sciencetech/article-2740588/, 2014. [Online; accessed 09-Sep-2014].
[78] Seeing Machines announces Samsung relationship. http://www.everyinvestor.co.uk/news/2014/09/08/seeing-machines-announces-samsung-relationship-9459/, 2014. [Online; accessed 09-Sep-2014].
[79] D. Li, D. Winfield, and D. J. Parkhurst, Starburst: A hybrid algorithm for video-based eye tracking combining feature-based and model-based approaches, in Computer Vision and Pattern Recognition Workshops, 2005. CVPR Workshops. IEEE Computer Society Conference on, pp. 79-79, IEEE, 2005.
[80] J. R. Wolpaw, N. Birbaumer, W. J. Heetderks, D. J. McFarland, P. H. Peckham, G. Schalk, E. Donchin, L. A. Quatrano, C. J. Robinson, T. M. Vaughan, et al., Brain-computer interface technology: a review of the first international meeting, IEEE Transactions on Rehabilitation Engineering, vol. 8, no. 2, pp. 164-173, 2000.
[81] Intel And Others To Enter Brain-Computer-Interface (BCI) Market, According To Mind Solutions, Inc. http://online.wsj.com/article/PR-CO-20140114-906425.html, 2014. [Online; accessed 14-Jan-2014].
[82] J. Chuang, T. Maillart, and B. Johnson, My thoughts are not your thoughts, in Proceedings of the Workshop on Usable Privacy & Security for wearable and domestic ubiquitous DEvices (UPSIDE '14), vol. 14, 2014.
[83] C. Herley, So long, and no thanks for the externalities: the rational rejection of security advice by users, in Proceedings of the 2009 workshop on New security paradigms workshop, pp. 133-144, ACM, 2009.