Suspect Identification Based on Descriptive Facial Attributes


Brendan F. Klare, Scott Klum, Joshua C. Klontz, Emma Taborsky, Tayfun Akgul, Anil K. Jain

Abstract

We present a method for using human-describable face attributes to perform face identification in criminal investigations. To enable this approach, a set of 46 facial attributes was carefully defined with the goal of capturing all describable and persistent facial features. Using crowd-sourced labor, a large corpus of face images was manually annotated with the proposed attributes. In turn, we train an automated attribute extraction algorithm to encode target repositories with the attribute information. Attribute extraction is performed using localized face components to improve extraction accuracy. Experiments are conducted to compare the use of attribute information, derived from crowd workers, to face sketch information, drawn by expert artists. In addition to removing the dependence on expert artists, the proposed method complements sketch-based face recognition by allowing investigators to immediately search face repositories without the time delay incurred by sketch generation.

1. Introduction

Despite the continued ubiquity of surveillance camera networks, a large number of crimes occur where only a witness description of a subject's appearance is available. The ability to accurately search a face database or videos from surveillance networks using verbal descriptions of a subject's facial appearance would have tremendous implications for the timely resolution of criminal and intelligence investigations. Pattern recognition technology should support such an identification paradigm, in which a human can describe the appearance of a subject's face to directly search a media repository. The goal of this research is to understand whether technology can currently support such a paradigm.
A major progression in the ability to search face image databases using verbal descriptions has been realized through a long line of research in matching hand-drawn facial sketches to photographs [19, 15, 22, 21, 23, 9, 7, 3].

B. Klare, S. Klum, J. Klontz, and E. Taborsky are with Noblis, Falls Church, VA, U.S.A. T. Akgul is with Istanbul Technical University, Istanbul, Turkey. A. Jain is with Michigan State University, East Lansing, MI, U.S.A.

While automated sketch recognition technology offers a clear advantage over the legacy approach of disseminating a sketch through media outlets, issues with the sketch generation process can limit the use of sketch recognition to only high-profile crimes. For example, while sketch recognition systems can leverage the expertise of a forensic sketch artist, they are equally limited by the requirement of having such an expert available to generate the sketch. Another limitation of sketch recognition is the time delay between when a crime occurs, when a sketch artist can be deployed, when the artist finishes eliciting enough information from the witness to draw the sketch, and when the sketch is finalized for dissemination. Such delays can prove costly in time-sensitive investigations. Finally, sketch-based face recognition is often hampered by noisy information provided by witnesses. A major reason for this is that a generated sketch provides no information regarding which regions of the face the witness feels most confident in describing. Because a witness may have varying degrees of confidence for different facial features, weighting (or removing) certain features to reflect the witness's confidence should improve the retrieval process. Despite these limitations, the use of hand-drawn sketches has several distinct advantages: sketch artists often have specialized training to elicit descriptions from witness memory, generated sketches can be disseminated to the public, and a sketch can be drawn with exact precision.
Thus, the work in this paper is meant to supplement, not supplant, the use of sketch recognition technology. The use of computer-generated facial composites partially addresses the aforementioned issues by allowing non-experts (i.e., non-forensic sketch artists) to leverage witness descriptions of a person of interest. Computer-generated facial composites typically provide a menu-based interface where each facial component (eyes, nose, mouth, etc.) may be selected to compose a rendered image of a suspect's face. Researchers have recently investigated algorithms that can match computer-generated composites to mug shot databases [5, 13]. However, despite the added benefit of having an image that can be disseminated to media outlets, searching face image databases using computer-generated composites is a convoluted process that can be greatly simplified. That is, if the end goal is to search a face database, then the generation of a composite is both unnecessary and

To appear: Proceedings of the 2014 IEEE/IAPR International Joint Conference on Biometrics

may even be a source of noise, since such a software system is not designed with the intention of performing automated face recognition. Furthermore, the issue of low-confidence regions still manifests with computer-generated composites; the output composite has no indication of the witness's confidence in a given facial region.

In this paper we perform an initial investigation into witness-based identification using facial attribute recognition. The objective is to use witness-provided attribute descriptions to search large-scale face image repositories (e.g., mug shot databases or videos from surveillance networks) of automatically extracted attribute information. This approach is motivated by the aforementioned deficiencies in sketch recognition algorithms. The goal of this paper is to address many of the fundamental challenges in using verbal descriptions of facial appearance to search media repositories for persons of interest [2]. Our research focuses on the scenario of manually labelled query images to measure the inherent feasibility of the proposed search paradigm.

A critical factor when using face attributes to search a database is the development of a set of attributes (features) which compactly, yet concisely, represent the face. As previous research has suggested [20], caricature recognition is enabled by targeting the postulated sparse encoding of predominant facial attributes in the human brain. The facial attributes developed in this project were motivated by such findings, and expand on other research related to matching caricatures to photographs [10]. Section 4 discusses the development of these attributes, the use of crowd-sourced annotation to label a corpus of data, and provides an analysis of the consistency and discriminability of the chosen attributes. Witness-based identification from facial attributes uses manually provided attributes as query information. However, practical applications require that the target gallery be automatically encoded with the facial attributes. In Section 5 we provide an algorithm to perform such automated extraction. The proposed algorithm operates by performing face component localization and alignment, followed by texture descriptor encoding and support vector regression. The results shown in Section 6 demonstrate that the proposed method has efficacy in searching face image databases. As such, we provide a sound basis for performing witness-based identification with the following advantages over sketch recognition: (i) facial attribute descriptions can be provided by non-experts using menu-driven software, (ii) face image databases can be immediately searched using attributes, and (iii) witness attribute search replaces the indirect path of computer/artist-generated sketches followed by mug shot retrieval. Other contributions of the research described in this paper include the development of a set of 46 carefully crafted attribute features, an algorithm for automatically extracting attributes from face images, and the ability to improve sketch recognition accuracy by fusing sketch recognition with facial attribute recognition.

Figure 1. Existing approaches to automated face identification of a suspect are generally limited to querying target repositories with either face images, or hand-drawn sketches from verbal descriptions. We propose a system for performing suspect identification based on described facial attributes.

2. Related Research

Two lines of face recognition research have motivated this study: sketch recognition and attribute-based face recognition.
The notion of automatically matching a hand-drawn sketch of a face to photographs was popularized by Tang et al. through a series of early papers that sought to synthesize a photograph from a sketch [19, 15, 21]. These approaches were evaluated on databases consisting of a photograph and a viewed sketch (or viewed composite). A viewed sketch refers to a hand-drawn sketch that was drawn while looking at a photograph of the subject. This scenario is hypothetical: if a high-quality photograph of a suspect were available, his sketch would not be needed. However, it was important for the problem to be presented in this manner to isolate the heterogeneity between sketches and photographs and perform an initial investigation of the problem. We are similarly motivated to isolate and understand the challenge of matching human (witness) provided facial descriptions to automatically derived features. Klare et al. extended the work of Tang and Wang by examining the case of matching sketches drawn from witness descriptions (i.e., forensic sketches) [9]. The increased difficulty of matching operational sketches necessitated an alternative algorithmic solution, which was provided in the form of encoding sketches and photos with feature descriptors, and extracting features that were consistent between the sketch and photo modalities [8]. This approach was used in several subsequent works that improved the state of the art in sketch recognition [23, 3]. More recently, the problem of matching computer-generated composites to photographs has been explored [5, 13].

A major contribution to the face recognition community has been the research by Kumar et al. on matching faces using attributes [14]. Motivated by the demonstrated properties of attribute-based representations in other pattern recognition problems, and the desire to semantically search for face images, their solution achieved notable recognition accuracies along with the desirable properties of a compact representation and human-interpretable features. Our work seeks to build on this previous work on attribute-based face recognition [14], and adapt it to the problem at hand: witness-based face identification. As such, our work can be differentiated in two ways.

Figure 2. The use of human-derived face attributes to query mug shot galleries and nearby surveillance cameras is proposed to aid in the timely generation of investigative leads in criminal investigations. This paper focuses on the application of using witness-provided descriptions as query, and a controlled gallery as the target. However, the other use case illustrated here (a low-quality image as query and surveillance imagery as target) is also supported by the research presented in this paper.
First, the proposed attribute-based facial features have been crafted with persistence and uniqueness properties in mind [6]. For example, the previous approach [14] to attribute-based face recognition used features that carry no identity information, such as black and white photo, flash, posed photo, or teeth visible. Other features used are rather subjective, such as attractive woman. By contrast, we have developed features with a human face representation expert to systematically describe the facial characteristics that most concisely convey the identity of a person of interest. The second differentiation is that our approach is motivated by the need to perform face recognition queries using descriptions from witnesses of a crime. As such, we primarily conduct experiments to understand the trade-offs between attribute-based face recognition and sketch recognition.

3. Datasets

The majority of the experiments in this paper were conducted on the CUHK Face Sketch FERET Database (CUFSF) [21, 23]. This database consists of 1,194 photographs from the FERET [17] database, and 1,194 hand-drawn sketches of the subjects in the photographs, which were generated at the Chinese University of Hong Kong. Each sketch in the CUFSF database was drawn by an artist while viewing the corresponding photograph. Thus, the appearance and structure of the sketches are highly accurate. Of these 1,194 subjects, 175 had an additional photograph in the FERET database with a time lapse from the original image. These additional 175 images were used in certain experiments.

4. Facial Attributes

The success of attribute-based face recognition hinges on the development of a set of facial attributes that both capture all the variations in facial appearance, and are terse enough for a witness to provide soon after a crime occurs. To yield such attributes, our research was performed in collaboration with an artist who specializes in the caricaturing of faces using a minimal number of features.
The result was a set of 46 facial attributes that capture component-level information (the appearance of the eyes, nose, mouth, etc.), the relationship between components (e.g., distance between the nose and mouth), and holistic information such as gender or wrinkles. The developed features are qualitative and categorical. Of the 46 features, 19 are binary with two feature categories (e.g., unibrow¹ vs. no unibrow). An additional 19 features have three categories (e.g., nose size small, normal, and large). Of the remaining features, six have four categories, one has five categories, and one has seven categories. The features with a large number of categories are generally holistic features that describe the appearance of hair or other specific attributes.

¹A unibrow is when the two eyebrows are connected.

Figure 3. Shown are images of notorious criminals, and their more predominant attribute features. For each subject, five human annotators labelled these images with all 46 attributes defined in this study. (a) Timothy McVeigh was consistently labelled as having a small mouth and a broken nose; (b) Ted Kaczynski was labelled as having buried eyes and eyebrows close to his eyes; and (c) Griselda Blanco was labelled as having round eyes and a turkey neck. The goal of this study is to understand whether attributes derived from amateurs can improve on limitations of hand-drawn facial sketches.

With a total of 46 attributes defined, the next step was to have the datasets described in Section 3 manually annotated. To obtain attribute-type labels from human annotators, we created Human Intelligence Tasks (HITs) using the Amazon Mechanical Turk (AMT) service. A HIT type was created for every attribute, and a photograph of the subject was displayed along with simple line drawings representing the different categories for each attribute. Annotators were then asked to choose the drawing that best matched the subject in the image. In most cases, an example photograph exhibiting the attribute type was provided to the annotators and displayed along with the line drawings. To eliminate any possible language barrier, tasks included a minimal amount of verbiage describing the attributes. Every HIT had three assignments, meaning each attribute type was labelled by three different Turk workers for every image. This variety of responses allows an attribute confidence level to be inferred from the amount of consensus. In order to better understand the nature of the attributes and the data collected from Turk workers, we measured the entropy and the consistency of the responses.
The entropy of an attribute estimates how much information that attribute might provide about a face. The greater the entropy of an attribute, the more potential it has to discriminate between faces. For a given attribute feature f, the entropy H(f) is defined as

H(f) = -Σ_{i=1}^{n_f} p(f_i) log2 p(f_i),

where n_f is the number of possible categories for the feature, and p(f_i) is the probability that the i-th category of feature f occurs. p(f_i) is measured empirically from the statistics of the manual annotations. The overall consistency is a simple way to assess how difficult or subjective an attribute is for human labelers. If all labelers chose the same attribute type for an image, that attribute is considered consistent for that image. If labelers chose two attribute types, it is labeled partial, and all three labelers disagreeing means the attribute is inconsistent. Figure 4 lists all the attributes used in this study, as well as the entropy and consistency of those features.

5. Automated Attribute Extraction

While the attributes for the query (person of interest) can be manually elicited from the source (witness memory or a low-quality image), the target gallery is typically very large and must be automatically processed. As such, algorithmic extraction of attribute labels from face images is required to support the proposed system. In this section we describe the process for attribute extraction.
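The entropy and consistency statistics described above can be computed directly from the crowd-sourced votes. The following is an illustrative sketch (not the authors' code), assuming three annotator votes per attribute per image:

```python
# Sketch: entropy and per-image consistency of crowd-sourced attribute labels.
from collections import Counter
from math import log2

def attribute_entropy(labels):
    """H(f) = -sum_i p(f_i) * log2 p(f_i), with p estimated empirically
    from the pooled annotation labels of one attribute."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def consistency(votes):
    """Classify one image's three votes as consistent / partial / inconsistent,
    based on how many distinct attribute types the labelers chose."""
    distinct = len(set(votes))
    return {1: "consistent", 2: "partial", 3: "inconsistent"}[distinct]
```

Averaging the per-image consistency outcomes over all 1,194 images yields the per-attribute consistency values reported in Figure 4.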
A summary of this approach is provided in Figure 5.

Figure 4. Listed are the names of all 46 facial attributes used in this study, along with their entropy and consistency based on manually assigned values. Consistency values shown are the average for a given attribute across all 1,194 images. The attributes are: Lip Thickness, Face Marks, Eye Slant, Sharp Eyes, Thick Eyebrows, Face Length, Ear Size, Small Eyes, Face Shape, Gender, Eye Separation, Smiling, Cheek Density, Baggy Eyes, Nose Size, Buried Eyes, Eyebrow Position, Bent Eyes, Ear Pitch, Eyelash Visibility, Chin Size, Almond Eyes, Nose-Eye Distance, Beard, Nose-Mouth Distance, Mouth Asymmetry, Mouth Width, Line Eyes, Forehead Size, Round Eyes, Nose Width, Sleepy Eyes, Eye Color, Widows Peak, Eyebrow Orientation, Hairstyle, Mouth Bite, Hair Density, Nose Orientation, Broken Nose, Neck Thickness, Forehead Wrinkles, Hair Color, Unibrow, Mustache, Glasses.

Figure 5. The process for automatically extracting attributes from face images: Input Image → Landmark Detection → Component Cropping → LBP Descriptor Representation → SVM Regression → Attribute Likelihoods. Each attribute regressor operates on the component cropping corresponding to the region of the face related to that attribute. For example, mouth attributes use descriptor representations extracted from the cropped mouth component.

The software implementation of the described algorithm was performed within the OpenBR framework [11]. The first step in automatic extraction of facial attributes is to localize the face. Localization is performed by detecting the eye locations using a pre-trained, black-box eye detector. An affine normalization transformation is applied to the image such that, after cropping the image to 192×240 pixels, the normalized eye locations are 34.5% in from the sides and 47.5% in from the top. Next, facial landmarks are localized in the normalized image using an active shape model (ASM) via the open source library STASM [16]. To improve the accuracy of detected landmarks, the ASM is seeded with the normalized eye locations detected in the previous step. The detected landmarks provide the information needed to extract the attributes from the corresponding region of the face. That is, if an attribute is only related to the eyes, it is important to not perform classification using other regions of the face. As such, based on the landmark information provided, a bounding box containing relevant landmarks is used to segment individual facial components (nose, mouth, eyes, hair, brow, and jaw). For attributes that describe a large region of the face (e.g., cheek density, gender), the normalized face image is simply cropped to a tighter region around the face. Each of the attributes used in this study is then assigned to one of the cropped regions.
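The eye-based normalization above is fully determined by the two eye correspondences: a similarity transform (rotation, scale, translation) has four parameters, and two point pairs give four equations. A minimal sketch, with the canonical eye positions taken from the stated geometry (34.5% in from each side, 47.5% down from the top of a 192×240 crop); the function name and calling convention are illustrative assumptions:

```python
# Sketch: solve the similarity transform that maps detected eye locations
# to canonical positions in a 192x240 normalized crop.
import numpy as np

W, H = 192, 240
EYE1 = (0.345 * W, 0.475 * H)        # canonical position of the first eye
EYE2 = ((1 - 0.345) * W, 0.475 * H)  # canonical position of the second eye

def similarity_transform(eye1, eye2):
    """Return the 2x3 matrix mapping detected eyes (eye1, eye2) to EYE1, EYE2.
    A point (x, y) maps to (a*x - b*y + tx, b*x + a*y + ty)."""
    (x1, y1), (x2, y2) = eye1, eye2
    (u1, v1), (u2, v2) = EYE1, EYE2
    # Two point correspondences -> four linear equations in (a, b, tx, ty).
    A = np.array([[x1, -y1, 1, 0],
                  [y1,  x1, 0, 1],
                  [x2, -y2, 1, 0],
                  [y2,  x2, 0, 1]], dtype=float)
    a, b, tx, ty = np.linalg.solve(A, np.array([u1, v1, u2, v2], dtype=float))
    return np.array([[a, -b, tx], [b, a, ty]])
```

The resulting matrix can be handed to any affine warping routine to produce the normalized crop.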
For example, all of the eye attributes are assigned to the eye cropping, holistic attributes are assigned to the face cropping, etc. After cropping the components, scale-space normalization is performed: the mouth, eyes, hair, and brow components are resized to 24×36 pixels; the jaw and face to 36×36 pixels; and the nose to 36×24 pixels. Empirical results show that discarding the aspect ratios had no noticeable impact on recognition performance. With each component cropped and aligned, the appearance is normalized using the Tan & Triggs [18] preprocessing pipeline to reduce illumination variations. The texture of the components is then encoded using uniform local binary pattern (LBP) [1] features with radius r = 1. Descriptor histograms are then computed for 8×8 patches (with a 4-pixel overlap), which are concatenated to form a single feature vector per component, x_c (where c specifies the component). The dimensionality of each x_c vector is reduced by projecting into a learned PCA subspace V_c, which preserves 95.0% of the training data variation. For each facial attribute, the set of n training vectors X_c = [x_c^1, ..., x_c^n] (in our case n is half the available dataset) is used to train an epsilon support vector regression (ε-SVR) function using the LibSVM library [4]. Attribute features from the same component share the same training input vectors; however, the target values y_a^i differ. For attribute a, y_a^i is the fraction of votes that manual annotators provided for that given feature. Thus, with three annotations per feature, the regression function can still leverage inconsistent labels, as y_a^i ∈ {0, 1/3, 2/3, 1}. The SVR parameters of the RBF kernel (C, γ) are computed automatically during training via cross-validation. Given the possible designations for the 46 attributes, a total of 86 SVRs are trained, based on the use of one-vs-all SVRs for the attributes with more than two possible labels.
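The texture-encoding step can be illustrated as follows. This is a minimal re-implementation of uniform LBP (radius 1, 8 neighbors) with patch-wise histograms, not the OpenBR code; patch and step sizes mirror the 8×8 patches with 4-pixel overlap described above:

```python
# Sketch: uniform LBP (r=1, 8 neighbors) histograms over overlapping patches.
import numpy as np

def _uniform_lut():
    # Map each 8-bit LBP code to one of 58 uniform bins; all non-uniform
    # codes (more than two 0/1 transitions around the circle) share bin 58.
    lut, nxt = np.full(256, 58, dtype=int), 0
    for code in range(256):
        bits = [(code >> i) & 1 for i in range(8)]
        transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
        if transitions <= 2:
            lut[code], nxt = nxt, nxt + 1
    return lut

LUT = _uniform_lut()

def lbp_codes(img):
    """8-neighbor, radius-1 LBP code for each interior pixel."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit   # bit set if neighbor >= center
    return code

def patch_histograms(img, patch=8, step=4):
    """Concatenated 59-bin uniform-LBP histograms over overlapping patches."""
    codes = LUT[lbp_codes(img)]
    feats = []
    for y in range(0, codes.shape[0] - patch + 1, step):
        for x in range(0, codes.shape[1] - patch + 1, step):
            feats.append(np.bincount(codes[y:y + patch, x:x + patch].ravel(),
                                     minlength=59))
    return np.concatenate(feats)
```

The concatenated histograms form the per-component vector x_c that is then PCA-projected and fed to the ε-SVRs.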
The output of the 86 regressors is concatenated into a final attribute feature vector v_i for image i. To generate the similarity between two faces i and j, several distance metrics were explored, including L2, L1, dot product, and weighting schemes using the entropy and consistency metrics discussed in Section 4. The L1 distance metric yielded the best results. Thus, the distance between faces i and j is computed as

d(i, j) = ||v_i - v_j||_1.

6. Experiments

6.1. Facial Attributes vs. Hand Drawn Sketches

The first set of experiments is designed to compare the use of facial attributes to hand-drawn sketches. Using the 1,194 FERET photographs, we randomly partitioned the dataset into training and testing sets using two-fold cross validation. Thus, one half of the images/subjects was available for training, the other half was available for testing, and this process was repeated two times (with the roles of training and testing sets interchanged); the average results are presented. The training portion of the dataset was used to train the automated attribute extraction system. No subjects used in training were used for testing. Recognition results were generated on the test data by using the human-provided attributes as query/probe, and the features automatically extracted from the photographs as the target/gallery. The baseline used was the MSU FaceSketchID sketch recognition algorithm [12], which achieves state-of-the-art accuracy in sketch recognition. The training data was used to train the discriminative subspaces in the baseline sketch recognition algorithm. The hand-drawn sketches from the testing partition were used as probes, and the corresponding photographs were used as the gallery (consistent with the attribute system). In both systems (the attribute-based and the sketch recognition systems), the gallery images are the same: photographs of subjects from the testing partition.

Figure 6. CMC accuracies comparing the use of facial attributes to hand-drawn sketches. As expected, the use of attributes provided by amateur annotators does not match the accuracy achieved through expert artists and a sketch recognition system. However, the attributes achieve good accuracy given that they contain limited information. Further, the attributes improve the accuracy of the sketch system when fused.

Figure 7. The accuracy of attribute feature categories extracted from different facial components. While the eyes are highly informative in standard face recognition, there is a lot of difficulty in categorically describing eyes, which likely explains their relatively low accuracy. For each component, we list the true accept rate at a fixed false accept rate of 1.0%.
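The retrieval step from Section 5 reduces to ranking the gallery by L1 distance between attribute vectors. A minimal sketch (array shapes are illustrative):

```python
# Sketch: rank gallery faces by L1 distance between attribute vectors.
import numpy as np

def rank_gallery(probe, gallery):
    """probe: attribute vector of the query face; gallery: one row per face.
    Returns gallery indices sorted from most to least similar."""
    d = np.abs(gallery - probe).sum(axis=1)   # d(i, j) = ||v_i - v_j||_1
    return np.argsort(d)
```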
The difference is the query information used to probe these gallery images. In the case of sketch recognition, hand-drawn sketches were created by expert artists looking at the gallery photograph (again, these are called viewed sketches). In the case of the attribute system, attribute features were selected by amateur AMT laborers. While this scenario is hypothetical, our main motivation in this experiment is to answer the following question: can information derived from non-experts (annotators) achieve comparable results to information derived by experts (sketch artists)?

The cumulative match characteristic (CMC) plot of the recognition accuracies of the attribute algorithm can be found in Figure 6. As expected, the accuracy of the attribute-based representation does not match that of the sketch-based recognition algorithm. This is because of the high precision of the sketches, and because nearly five years was spent developing the sketch recognition algorithm. However, it is notable that the proposed approach takes amateur annotators and is able to achieve identification accuracy similar to previously reported results on forensic sketches [7]. Further, the fusion with attributes increases the sketch recognition accuracy from 84.5% to 92.0% at Rank-1. Thus, while the intent of this research is to offer an alternate paradigm to leverage witness descriptions, the proposed method can also improve the accuracy of a well-tuned system. That is, even when an expert forensic artist is used to create a facial sketch, it may be valuable to use the verbal description as well.

6.2. Experiment 2: Comparison of Components

Next we compare the recognition accuracy of each facial component. We use manual annotations as the probe information, automated extractions as the gallery information, and two-fold cross validation in the same manner as the first experiment in Section 6.1.
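The paper reports that fusing the sketch and attribute matchers raises Rank-1 accuracy, but the fusion rule is not specified in this section. One common score-level scheme, shown here purely as an assumed illustration, is min-max normalization followed by a weighted sum (the weights are assumptions, not the authors' values):

```python
# Sketch: score-level fusion of sketch and attribute matcher outputs.
import numpy as np

def minmax(scores):
    """Rescale one probe's gallery scores to [0, 1]; constant scores map to 0."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse(sketch_scores, attribute_scores, w_sketch=0.5):
    """Fused similarity scores for one probe against the whole gallery."""
    return (w_sketch * minmax(sketch_scores)
            + (1 - w_sketch) * minmax(attribute_scores))
```

Per-probe normalization is important here because the two matchers produce scores on different scales (sketch matcher similarities vs. attribute-vector distances converted to similarities).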
Figure 7 lists the recognition accuracies (true accept rates at a fixed false accept rate of 1.0%) for each of the different facial components used in this study. As discussed in Section 5, each of the 46 attributes was assigned to one of the seven cropped facial regions (eyes, brow, nose, mouth, jaw, hair, and the entire face). The results listed here are the fusion of all attributes for a given component. Perhaps the most surprising result presented here is the relatively poor accuracy of the eye component compared to other interior facial features (nose, mouth, and brow). We believe this is due to the difficulty in characterizing eye variations into categorical features, given the complex shape of the eyes. In fact, this finding also agrees with recent research in computer-generated composite recognition, where the relative performance of the eye component was poor [5]. Because computer-generated composites are equally limited by a discrete set of options, the difficulty in capturing the complex shape of the eyes would similarly seem to be a limiting factor.

Figure 8. CMC accuracies when using human-provided facial attributes and machine-extracted attributes.

6.3. Experiment 3: Human vs. Machine

In our next experiment we measure the accuracy of manually labelled attributes vs. automatically extracted attributes. For this experiment, we had AMT annotators label the 175 mated photographs in FERET with attribute information. Thus, for 175 subjects among the 1,194 total subjects in the CUFSF database, we have manually labelled attribute information from two separate images. Two-fold cross validation was performed using the exact splits used in the previous experiments. Of the 175 subjects, only those who were in the test partition were used to generate recognition accuracies. The first image from the other subjects was used in the training process. This resulted in 84 testing subjects in the first split with a gallery of 604 subjects, and 91 testing subjects in the second split with a gallery of 508 subjects.
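The CMC curves reported in these experiments can be computed from a probe-by-gallery score matrix as follows (a minimal sketch; argument names are illustrative):

```python
# Sketch: cumulative match characteristic (CMC) from a similarity matrix.
import numpy as np

def cmc(score_matrix, true_ids):
    """score_matrix[p, g]: similarity of probe p to gallery face g (higher is
    better); true_ids[p]: gallery index of probe p's mate. Returns, for each
    rank r = 1..G, the fraction of probes whose mate is retrieved by rank r."""
    order = np.argsort(-score_matrix, axis=1)          # best match first
    ranks = np.array([np.where(order[p] == true_ids[p])[0][0]
                      for p in range(len(true_ids))])  # 0-based rank of mate
    n_gallery = score_matrix.shape[1]
    return np.array([(ranks < r).mean() for r in range(1, n_gallery + 1)])
```

The first element of the returned array is the Rank-1 accuracy quoted throughout Section 6.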
We compared three methods for matching attribute representations: (i) human-labelled versus human-labelled, (ii) human-labelled versus automatically extracted, and (iii) automatically extracted versus automatically extracted. In the case of the automatically (machine) extracted attributes, the extraction algorithm was trained on the training partition, which contained no subjects from the testing partition. Figure 8 shows the CMC results comparing human vs. automatic extraction. It is quite notable that machine-derived attributes achieve nearly twice the Rank-1 accuracy of human attributes when comparing human vs. human to machine vs. machine. This perhaps speaks to the difficulty that humans face in assigning discrete values to the attributes, whereas the algorithm can assign a numeric value based on the regression output. When comparing human vs. machine performance, an interesting observation can be made: when using human/manual attributes in the gallery and machine/automated attributes as probes, the accuracy is significantly worse than using human attributes as probes and machine attributes as gallery (as shown in Figure 8). While the only practical case (fortunately) is the use of machine-derived attributes in the gallery, the discrepancy between these results is quite informative. We believe this to be a manifestation of the discrete values provided by humans, versus the numeric values provided by machines. When the attribute values are numeric and continuous (as in the case of machine-derived attributes), subjects are more naturally separated in the feature space.

7. Conclusions

We have proposed a method for suspect identification using facial attribute descriptions. This was achieved by developing a set of 46 categorical facial attributes. In operational scenarios, the attribute values for a probe image would be provided by human witnesses, or derived from low-quality imagery.
In turn, these attribute descriptions could be used to search mug shot galleries or surveillance videos for investigative leads. To search such large repositories with manually derived query information, faces in the repositories must have their attributes automatically extracted; accordingly, we have developed an algorithm to automatically extract attribute information from face images. Experiments were conducted to compare the proposed attribute-based recognition paradigm to hand-drawn sketch recognition, as each method seeks to perform identification from witness descriptions. The proposed method does not achieve the same accuracy as sketch recognition, nor was it expected to meet this upper bound. Instead, we demonstrate that the proposed method achieves accuracies of roughly the same order as sketch recognition, and can also improve the accuracy of sketch recognition through fusion. Additional experiments demonstrate the strengths of automatically extracted features versus manually labelled features.

The results of our initial investigation into attribute-based suspect identification are very compelling, and have prompted us to broaden the investigation. We will explore the impact of face descriptions provided from human memory, to better understand the operational use cases of witness identification. The use of attributes from low quality imagery will also be explored, as commercial face recognition algorithms often fail on such imagery. By contrast, attribute-based suspect identification is premised on high recall matching, and thus should perform well in this scenario. Further, we will investigate the use of an incomplete set of attributes, to replicate scenarios in which a witness is unable to provide an exhaustive description. Finally, we will explore confidence-based matching, where users can provide a confidence in their attribute assignments.

Acknowledgements

Research from Brendan Klare, Josh Klontz, and Emma Taborsky was partially supported by the Noblis Sponsored Research program. Research from Anil Jain and Scott Klum was partially supported by NIJ grant no. IJ-CX-K057. Research from Tayfun Akgul was partially supported by The Scientific and Technological Research Council of Turkey (TÜBİTAK project #112E142).
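The score-level fusion with sketch recognition and the proposed confidence-based matching can be sketched as follows. This is a minimal illustration under assumed conventions (min-max score normalization, sum-rule fusion with a hypothetical weight, higher score meaning a better match), not the paper's actual fusion scheme.

```python
import numpy as np

def minmax(scores):
    """Map one matcher's gallery scores to [0, 1] (min-max normalization)."""
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)

def fuse(sketch_scores, attr_scores, w=0.7):
    """Sum-rule fusion of normalized sketch and attribute similarities;
    the weight w is a hypothetical tuning parameter."""
    return w * minmax(sketch_scores) + (1.0 - w) * minmax(attr_scores)

def attr_similarity(probe, gallery, confidence=None):
    """Confidence-weighted (negative) Euclidean distance over attributes.
    A zero weight drops an attribute, modeling an incomplete description."""
    probe = np.asarray(probe, dtype=float)
    gallery = np.asarray(gallery, dtype=float)
    if confidence is None:
        confidence = np.ones_like(probe)
    d2 = ((gallery - probe) ** 2 * np.asarray(confidence)).sum(axis=1)
    return -np.sqrt(d2)

# Toy gallery of 3 subjects described by 4 attributes.
gallery = np.array([[1, 0, 2, 1],
                    [0, 1, 1, 0],
                    [2, 2, 0, 1]])
probe = [1, 0, 2, 0]
conf = [1.0, 1.0, 1.0, 0.0]   # witness is unsure about the last attribute

attr = attr_similarity(probe, gallery, conf)
sketch = [0.4, 0.9, 0.1]      # hypothetical sketch-matcher scores
print(fuse(sketch, attr))     # fused similarity per gallery subject
```

Normalizing each matcher's scores before summing keeps either modality from dominating; the attribute weights double as both a confidence model and a mask for attributes the witness never described.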


More information

EFFICIENT DATA PRE-PROCESSING FOR DATA MINING

EFFICIENT DATA PRE-PROCESSING FOR DATA MINING EFFICIENT DATA PRE-PROCESSING FOR DATA MINING USING NEURAL NETWORKS JothiKumar.R 1, Sivabalan.R.V 2 1 Research scholar, Noorul Islam University, Nagercoil, India Assistant Professor, Adhiparasakthi College

More information

Recognition Method for Handwritten Digits Based on Improved Chain Code Histogram Feature

Recognition Method for Handwritten Digits Based on Improved Chain Code Histogram Feature 3rd International Conference on Multimedia Technology ICMT 2013) Recognition Method for Handwritten Digits Based on Improved Chain Code Histogram Feature Qian You, Xichang Wang, Huaying Zhang, Zhen Sun

More information

Predict the Popularity of YouTube Videos Using Early View Data

Predict the Popularity of YouTube Videos Using Early View Data 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050

More information

3D Model based Object Class Detection in An Arbitrary View

3D Model based Object Class Detection in An Arbitrary View 3D Model based Object Class Detection in An Arbitrary View Pingkun Yan, Saad M. Khan, Mubarak Shah School of Electrical Engineering and Computer Science University of Central Florida http://www.eecs.ucf.edu/

More information

Colour Image Segmentation Technique for Screen Printing

Colour Image Segmentation Technique for Screen Printing 60 R.U. Hewage and D.U.J. Sonnadara Department of Physics, University of Colombo, Sri Lanka ABSTRACT Screen-printing is an industry with a large number of applications ranging from printing mobile phone

More information

Multi-Factor Biometrics: An Overview

Multi-Factor Biometrics: An Overview Multi-Factor Biometrics: An Overview Jones Sipho-J Matse 24 November 2014 1 Contents 1 Introduction 3 1.1 Characteristics of Biometrics........................ 3 2 Types of Multi-Factor Biometric Systems

More information

VEHICLE TRACKING USING ACOUSTIC AND VIDEO SENSORS

VEHICLE TRACKING USING ACOUSTIC AND VIDEO SENSORS VEHICLE TRACKING USING ACOUSTIC AND VIDEO SENSORS Aswin C Sankaranayanan, Qinfen Zheng, Rama Chellappa University of Maryland College Park, MD - 277 {aswch, qinfen, rama}@cfar.umd.edu Volkan Cevher, James

More information

University of Glasgow Terrier Team / Project Abacá at RepLab 2014: Reputation Dimensions Task

University of Glasgow Terrier Team / Project Abacá at RepLab 2014: Reputation Dimensions Task University of Glasgow Terrier Team / Project Abacá at RepLab 2014: Reputation Dimensions Task Graham McDonald, Romain Deveaud, Richard McCreadie, Timothy Gollins, Craig Macdonald and Iadh Ounis School

More information

Online Play Segmentation for Broadcasted American Football TV Programs

Online Play Segmentation for Broadcasted American Football TV Programs Online Play Segmentation for Broadcasted American Football TV Programs Liexian Gu 1, Xiaoqing Ding 1, and Xian-Sheng Hua 2 1 Department of Electronic Engineering, Tsinghua University, Beijing, China {lxgu,

More information

Facial Expression Analysis and Synthesis

Facial Expression Analysis and Synthesis 1. Research Team Facial Expression Analysis and Synthesis Project Leader: Other Faculty: Post Doc(s): Graduate Students: Undergraduate Students: Industrial Partner(s): Prof. Ulrich Neumann, IMSC and Computer

More information

Fully Automatic Pose-Invariant Face Recognition via 3D Pose Normalization

Fully Automatic Pose-Invariant Face Recognition via 3D Pose Normalization MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Fully Automatic Pose-Invariant Face Recognition via 3D Pose Normalization Asthana, A.; Marks, T.K.; Jones, M.J.; Tieu, K.H.; Rohith, M.V. TR2011-074

More information

Part-Based Pedestrian Detection and Tracking for Driver Assistance using two stage Classifier

Part-Based Pedestrian Detection and Tracking for Driver Assistance using two stage Classifier International Journal of Research Studies in Science, Engineering and Technology [IJRSSET] Volume 1, Issue 4, July 2014, PP 10-17 ISSN 2349-4751 (Print) & ISSN 2349-476X (Online) Part-Based Pedestrian

More information

PARTIAL FINGERPRINT REGISTRATION FOR FORENSICS USING MINUTIAE-GENERATED ORIENTATION FIELDS

PARTIAL FINGERPRINT REGISTRATION FOR FORENSICS USING MINUTIAE-GENERATED ORIENTATION FIELDS PARTIAL FINGERPRINT REGISTRATION FOR FORENSICS USING MINUTIAE-GENERATED ORIENTATION FIELDS Ram P. Krish 1, Julian Fierrez 1, Daniel Ramos 1, Javier Ortega-Garcia 1, Josef Bigun 2 1 Biometric Recognition

More information

Introduction to Pattern Recognition

Introduction to Pattern Recognition Introduction to Pattern Recognition Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr CS 551, Spring 2009 CS 551, Spring 2009 c 2009, Selim Aksoy (Bilkent University)

More information

More Local Structure Information for Make-Model Recognition

More Local Structure Information for Make-Model Recognition More Local Structure Information for Make-Model Recognition David Anthony Torres Dept. of Computer Science The University of California at San Diego La Jolla, CA 9093 Abstract An object classification

More information

Machine Learning for Medical Image Analysis. A. Criminisi & the InnerEye team @ MSRC

Machine Learning for Medical Image Analysis. A. Criminisi & the InnerEye team @ MSRC Machine Learning for Medical Image Analysis A. Criminisi & the InnerEye team @ MSRC Medical image analysis the goal Automatic, semantic analysis and quantification of what observed in medical scans Brain

More information

TIETS34 Seminar: Data Mining on Biometric identification

TIETS34 Seminar: Data Mining on Biometric identification TIETS34 Seminar: Data Mining on Biometric identification Youming Zhang Computer Science, School of Information Sciences, 33014 University of Tampere, Finland Youming.Zhang@uta.fi Course Description Content

More information

Open-Set Face Recognition-based Visitor Interface System

Open-Set Face Recognition-based Visitor Interface System Open-Set Face Recognition-based Visitor Interface System Hazım K. Ekenel, Lorant Szasz-Toth, and Rainer Stiefelhagen Computer Science Department, Universität Karlsruhe (TH) Am Fasanengarten 5, Karlsruhe

More information

How To Use Neural Networks In Data Mining

How To Use Neural Networks In Data Mining International Journal of Electronics and Computer Science Engineering 1449 Available Online at www.ijecse.org ISSN- 2277-1956 Neural Networks in Data Mining Priyanka Gaur Department of Information and

More information