Deep Learning Face Representation by Joint Identification-Verification
Yi Sun 1, Yuheng Chen 2, Xiaogang Wang 3,4, Xiaoou Tang 1,4. 1 Department of Information Engineering, The Chinese University of Hong Kong. 2 SenseTime Group. 3 Department of Electronic Engineering, The Chinese University of Hong Kong. 4 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. [email protected] [email protected] [email protected] [email protected]

Abstract

The key challenge of face recognition is to develop effective feature representations for reducing intra-personal variations while enlarging inter-personal differences. In this paper, we show that this can be well solved with deep learning, using both face identification and verification signals as supervision. The Deep IDentification-verification features (DeepID2) are learned with carefully designed deep convolutional networks. The face identification task increases the inter-personal variations by drawing DeepID2 features extracted from different identities apart, while the face verification task reduces the intra-personal variations by pulling DeepID2 features extracted from the same identity together, both of which are essential to face recognition. The learned DeepID2 features generalize well to new identities unseen in the training data. On the challenging LFW dataset [11], 99.15% face verification accuracy is achieved. Compared with the best previous deep learning result [20] on LFW, the error rate has been significantly reduced by 67%.

1 Introduction

Faces of the same identity can look very different when presented in different poses, illuminations, expressions, ages, and occlusions. Such variations within the same identity can overwhelm the variations due to identity differences and make face recognition challenging, especially in unconstrained conditions. Therefore, reducing the intra-personal variations while enlarging the inter-personal differences is a central topic in face recognition.
This topic can be traced back to early subspace face recognition methods such as LDA [1], Bayesian face [16], and unified subspace [22, 23]. For example, LDA approximates inter- and intra-personal face variations by two scatter matrices and finds the projection directions that maximize the ratio between them. More recent studies have also targeted the same goal, either explicitly or implicitly. For example, metric learning [6, 9, 14] maps faces to a feature representation such that faces of the same identity are close to each other while those of different identities stay apart. However, these models are severely limited by their linear nature or shallow structures, while inter- and intra-personal variations are complex, highly nonlinear, and observed in a high-dimensional image space. In this work, we show that deep learning provides much more powerful tools to handle the two types of variations. Thanks to its deep architecture and large learning capacity, effective features for face recognition can be learned through hierarchical nonlinear mappings. We argue that it is essential to learn such features by using two supervisory signals simultaneously, i.e., the face identification and verification signals, and we refer to the learned features as Deep IDentification-verification features (DeepID2). Identification is to classify an input image into a large number of identity classes, while verification is to classify a pair of images as belonging to the same identity or not (i.e., binary classification). In the training stage, given an input face image with the identification signal, its DeepID2 features are extracted in the top hidden layer of the learned hierarchical nonlinear feature representation, and then mapped to one of a large number of identities through another function g(DeepID2). In the testing stage, the learned DeepID2 features can be generalized to other tasks (such as face verification) and to new identities unseen in the training data. The identification supervisory signal tends to pull apart the DeepID2 features of different identities, since they have to be classified into different classes. Therefore, the learned features have rich identity-related, or inter-personal, variations. However, the identification signal imposes a relatively weak constraint on DeepID2 features extracted from the same identity, since dissimilar DeepID2 features could still be mapped to the same identity through the function g(·). This leads to problems when DeepID2 features are generalized to new tasks and new identities at test time, where g is no longer applicable. We solve this by using an additional face verification signal, which requires that every two DeepID2 feature vectors extracted from the same identity be close to each other, while those extracted from different identities are kept apart. This strong per-element constraint on DeepID2 features can effectively reduce the intra-personal variations. On the other hand, using the verification signal alone (i.e., only distinguishing a pair of DeepID2 feature vectors at a time) is not as effective in extracting identity-related features as using the identification signal (i.e., distinguishing thousands of identities at a time). Therefore, the two supervisory signals emphasize different aspects of feature learning and should be employed together.
To characterize faces from different aspects, complementary DeepID2 features are extracted from various face regions and resolutions and are concatenated to form the final feature representation after PCA dimension reduction. Since the learned DeepID2 features are diverse among different identities while consistent within the same identity, they make the subsequent face recognition easier. Using the learned feature representation and a recently proposed face verification model [3], we achieve the highest reported face verification accuracy, 99.15%, on the challenging and extensively studied LFW dataset [11]. This is the first time that a machine provided with only the face region achieves an accuracy on par with the 99.20% accuracy of humans, who are presented with the entire LFW image, including both the face region and a large background area, for verification. In recent years, a great deal of effort has been made on face recognition with deep learning [5, 10, 18, 26, 8, 21, 20, 27]. Among the deep learning works, [5, 18, 8] learned features or deep metrics with the verification signal, while DeepFace [21] and our previous work DeepID [20] learned features with the identification signal and achieved accuracies around 97.45% on LFW. Our approach significantly improves on the state-of-the-art. The idea of jointly solving the classification and verification tasks was applied to general object recognition [15], with the focus on improving classification accuracy on fixed object classes rather than on hidden feature representations. Our work targets learning features that generalize well to new classes (identities) and to the verification task.

2 Identification-verification guided deep feature learning

We learn features with variants of deep convolutional neural networks (deep ConvNets) [12]. The convolution and pooling operations in deep ConvNets are specially designed to extract visual features hierarchically, from local low-level features to global high-level ones.
Our deep ConvNets have a similar structure to that in [20]. Each contains four convolutional layers, with local weight sharing [10] in the third and fourth convolutional layers. The ConvNet extracts a 160-dimensional DeepID2 feature vector at the last layer (the DeepID2 layer) of its feature extraction cascade. The DeepID2 layer is fully connected to both the third and fourth convolutional layers. We use rectified linear units (ReLU) [17] for neurons in the convolutional layers and the DeepID2 layer. An illustration of the ConvNet structure used to extract DeepID2 features is shown in Fig. 1 for an RGB input. When the size of the input region changes, the map sizes in the following layers change accordingly. The DeepID2 feature extraction process is denoted as f = Conv(x, θ_c), where Conv(·) is the feature extraction function defined by the ConvNet, x is the input face patch, f is the extracted DeepID2 feature vector, and θ_c denotes the ConvNet parameters to be learned.
Figure 1: The ConvNet structure for DeepID2 feature extraction.

DeepID2 features are learned with two supervisory signals. The first is the face identification signal, which classifies each face image into one of n (e.g., n = 8192) different identities. Identification is achieved by following the DeepID2 layer with an n-way softmax layer, which outputs a probability distribution over the n classes. The network is trained to minimize the cross-entropy loss, which we call the identification loss. It is denoted as

    Ident(f, t, θ_id) = −Σ_{i=1}^{n} p_i log p̂_i = −log p̂_t,    (1)

where f is the DeepID2 feature vector, t is the target class, and θ_id denotes the softmax layer parameters. p_i is the target probability distribution, where p_i = 0 for all i except p_t = 1 for the target class t, and p̂_i is the predicted probability distribution. To correctly classify all the classes simultaneously, the DeepID2 layer must form discriminative identity-related features (i.e., features with large inter-personal variations). The second is the face verification signal, which encourages DeepID2 features extracted from faces of the same identity to be similar. The verification signal directly regularizes the DeepID2 features and can effectively reduce the intra-personal variations. Commonly used constraints include the L1/L2 norm and cosine similarity. We adopt the following loss function based on the L2 norm, which was originally proposed by Hadsell et al. [7] for dimensionality reduction:

    Verif(f_i, f_j, y_ij, θ_ve) = (1/2) ||f_i − f_j||_2^2                  if y_ij = 1,
    Verif(f_i, f_j, y_ij, θ_ve) = (1/2) max(0, m − ||f_i − f_j||_2)^2      if y_ij = −1,    (2)

where f_i and f_j are DeepID2 feature vectors extracted from the two face images in comparison. y_ij = 1 means that f_i and f_j are from the same identity; in this case, the loss minimizes the L2 distance between the two DeepID2 feature vectors. y_ij = −1 means different identities, and Eq. (2) then requires their distance to be larger than a margin m. θ_ve = {m} is the parameter to be learned in the verification loss function.
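As an illustration, the two losses of Eq. (1) and Eq. (2) can be written in a few lines. The following is a hedged sketch in plain Python for a single sample or pair, not the authors' implementation:

```python
# Sketch (not the authors' code): the two DeepID2 supervisory losses.
import math

def ident_loss(logits, t):
    """Identification loss, Eq. (1): softmax cross-entropy.
    `logits` are the n-way outputs on top of the DeepID2 layer;
    `t` is the index of the target identity."""
    m = max(logits)                        # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    p_hat_t = exps[t] / sum(exps)          # predicted probability of target class
    return -math.log(p_hat_t)

def verif_loss(f_i, f_j, y_ij, margin):
    """Verification loss, Eq. (2): contrastive L2 loss.
    y_ij = 1 for a same-identity pair, y_ij = -1 otherwise."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(f_i, f_j)))
    if y_ij == 1:
        return 0.5 * dist ** 2             # pull same-identity features together
    return 0.5 * max(0.0, margin - dist) ** 2   # push different identities apart
```

A genuine pair is penalized by its squared distance, while an impostor pair already farther apart than the margin incurs zero loss, which is why only violating negative pairs affect training.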
Loss functions based on the L1 norm could have similar formulations [15]. The cosine similarity was used in [18] as

    Verif(f_i, f_j, y_ij, θ_ve) = (1/2) (y_ij − σ(wd + b))^2,    (3)

where d = f_i · f_j / (||f_i||_2 ||f_j||_2) is the cosine similarity between the DeepID2 feature vectors, θ_ve = {w, b} are learnable scaling and shifting parameters, σ is the sigmoid function, and y_ij is the binary target of whether the two compared face images belong to the same identity. All three loss functions are evaluated and compared in our experiments.

Our goal is to learn the parameters θ_c in the feature extraction function Conv(·), while θ_id and θ_ve are only parameters introduced to propagate the identification and verification signals during training. In the testing stage, only θ_c is used for feature extraction. The parameters are updated by stochastic gradient descent, with the identification and verification gradients weighted by a hyperparameter λ. Our learning algorithm is summarized in Tab. 1. The margin m in Eq. (2) is a special case: it cannot be updated by gradient descent, since gradient descent would collapse it to zero. Instead, m is fixed during gradient updates and re-estimated every N training pairs (N = 200,000 in our experiments) as the threshold on the feature distances ||f_i − f_j||_2 that minimizes the verification error over the previous N training pairs. Updating m is not included in Tab. 1 for simplicity.

Table 1: The DeepID2 feature learning algorithm.
input: training set χ = {(x_i, l_i)}, initialized parameters θ_c, θ_id, and θ_ve, hyperparameter λ, learning rate η(t), t ← 0
while not converged do
    t ← t + 1
    sample two training samples (x_i, l_i) and (x_j, l_j) from χ
    f_i = Conv(x_i, θ_c) and f_j = Conv(x_j, θ_c)
    ∇θ_id = ∂Ident(f_i, l_i, θ_id)/∂θ_id + ∂Ident(f_j, l_j, θ_id)/∂θ_id
    ∇θ_ve = λ · ∂Verif(f_i, f_j, y_ij, θ_ve)/∂θ_ve, where y_ij = 1 if l_i = l_j, and y_ij = −1 otherwise
    ∇f_i = ∂Ident(f_i, l_i, θ_id)/∂f_i + λ · ∂Verif(f_i, f_j, y_ij, θ_ve)/∂f_i
    ∇f_j = ∂Ident(f_j, l_j, θ_id)/∂f_j + λ · ∂Verif(f_i, f_j, y_ij, θ_ve)/∂f_j
    ∇θ_c = ∇f_i · ∂Conv(x_i, θ_c)/∂θ_c + ∇f_j · ∂Conv(x_j, θ_c)/∂θ_c
    update θ_id = θ_id − η(t) · ∇θ_id, θ_ve = θ_ve − η(t) · ∇θ_ve, and θ_c = θ_c − η(t) · ∇θ_c
end while
output: θ_c

Figure 2: Patches selected for feature extraction. The Joint Bayesian [3] face verification accuracy (%) using features extracted from each individual patch is shown below.

3 Face Verification

To evaluate the feature learning algorithm described in Sec. 2, DeepID2 features are embedded into the conventional face verification pipeline of face alignment, feature extraction, and face verification. We first use the recently proposed SDM algorithm [24] to detect 21 facial landmarks. The face images are then globally aligned by a similarity transformation according to the detected landmarks. We crop 400 face patches, which vary in position, scale, color channel, and horizontal flipping, according to the globally aligned faces and the positions of the facial landmarks. Accordingly, 400 DeepID2 feature vectors are extracted by a total of 200 deep ConvNets, each of which is trained to extract two 160-dimensional DeepID2 feature vectors from one particular face patch and its horizontally flipped counterpart, respectively, of each face.
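The patch handling above can be illustrated minimally. The following is a hedged sketch with a hypothetical list-of-rows image representation, not the authors' code: each ConvNet produces one feature vector for a patch and one for its mirror image.

```python
# Sketch (not the authors' code): a patch and its horizontally flipped
# counterpart, each fed through the same trained feature extractor.
def hflip(patch):
    """Horizontally flip an image patch given as a list of pixel rows."""
    return [list(reversed(row)) for row in patch]

def extract_pair_features(patch, convnet):
    """Return the two DeepID2 vectors for a patch and its mirror.
    `convnet` stands in for the trained extractor f = Conv(x, θc)."""
    return convnet(patch), convnet(hflip(patch))
```

The mirrored patch is treated as a distinct input, so each of the 200 ConvNets contributes two of the 400 feature vectors.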
To reduce the redundancy among the large number of DeepID2 features and make our system practical, we use the forward-backward greedy algorithm [25] to select a small number of effective and complementary DeepID2 feature vectors (25 in our experiment), which saves most of the feature extraction time at test time. Fig. 2 shows all 25 selected patches, from which 25 160-dimensional DeepID2 feature vectors are extracted and concatenated into a 4000-dimensional DeepID2 feature vector. The 4000-dimensional vector is further compressed to 180 dimensions by PCA for face verification. We then learn the Joint Bayesian model [3] for face verification based on the extracted DeepID2 features. Joint Bayesian has been successfully used to model the joint probability of two faces being the same or different persons [3, 4].
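The PCA compression step can be sketched as follows. This is a hedged, pure-Python illustration using power iteration with deflation; a practical system would use an optimized eigendecomposition library instead:

```python
# Sketch (not the authors' code): PCA projection of feature vectors onto the
# top-k principal components, via power iteration and deflation.
import random

def pca_compress(X, k, iters=200):
    """Project rows of X (n_samples x n_dims) onto the top-k principal
    components. Returns (projected data, components)."""
    random.seed(0)
    n, d = len(X), len(X[0])
    mean = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - mean[j] for j in range(d)] for row in X]
    # covariance matrix (d x d)
    C = [[sum(Xc[s][i] * Xc[s][j] for s in range(n)) / n for j in range(d)]
         for i in range(d)]
    comps = []
    for _ in range(k):
        v = [random.random() for _ in range(d)]
        for _ in range(iters):                 # power iteration
            w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
            norm = sum(x * x for x in w) ** 0.5 or 1.0
            v = [x / norm for x in w]
        comps.append(v)
        # deflate: remove the found component's contribution from C
        lam = sum(v[i] * sum(C[i][j] * v[j] for j in range(d)) for i in range(d))
        for i in range(d):
            for j in range(d):
                C[i][j] -= lam * v[i] * v[j]
    proj = [[sum(xc[j] * c[j] for j in range(d)) for c in comps] for xc in Xc]
    return proj, comps
```

In the paper's setting, X would hold 4000-dimensional concatenated DeepID2 vectors and k = 180.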
4 Experiments

We report face verification results on the LFW dataset [11], which is the de facto standard test set for face verification in unconstrained conditions. It contains 13,233 face images of 5749 identities collected from the Internet. For comparison purposes, algorithms typically report the mean face verification accuracy and the ROC curve on 6000 given face pairs in LFW. Though sound as a test set, it is inadequate for training, since the majority of identities in LFW have only one face image. Therefore, we rely on a larger outside dataset for training, as do all recent high-performance face verification algorithms [4, 2, 21, 20, 13]. In particular, we use the CelebFaces+ dataset [20] for training, which contains 202,599 face images of 10,177 identities (celebrities) collected from the Internet. The identities in CelebFaces+ and LFW are mutually exclusive. DeepID2 features are learned from the face images of 8192 identities randomly sampled from CelebFaces+ (referred to as CelebFaces+A), while the remaining face images of 1985 identities (referred to as CelebFaces+B) are used for the subsequent feature selection and for learning the face verification models (Joint Bayesian). When learning DeepID2 features on CelebFaces+A, CelebFaces+B is used as a validation set to decide the learning rate, the number of training epochs, and the hyperparameter λ. After that, CelebFaces+B is separated into a training set of 1485 identities and a validation set of 500 identities for feature selection. Finally, we train the Joint Bayesian model on the entire CelebFaces+B data and test on LFW using the selected DeepID2 features. We first evaluate various aspects of feature learning from Sec. 4.1 to Sec. 4.3 by using a single deep ConvNet to extract DeepID2 features from the entire face region.
Then the final system is constructed and compared with existing best-performing methods in Sec. 4.4.

4.1 Balancing the identification and verification signals

We investigate the interactions of the identification and verification signals on feature learning by varying λ from 0 to +∞. At λ = 0, the verification signal vanishes and only the identification signal takes effect. As λ increases, the verification signal gradually dominates the training process. At the other extreme, as λ → +∞, only the verification signal remains. The L2 norm verification loss in Eq. (2) is used for training. Figure 3 shows the face verification accuracy on the test set, comparing the learned DeepID2 features with the L2 norm and the Joint Bayesian model, respectively. It clearly shows that neither the identification nor the verification signal alone is optimal for learning features; instead, effective features come from an appropriate combination of the two. This phenomenon can be explained from the view of inter- and intra-personal variations, which can be approximated by LDA. According to LDA, the inter-personal scatter matrix is

    S_inter = Σ_{i=1}^{c} n_i (x̄_i − x̄)(x̄_i − x̄)^T,

where x̄_i is the mean feature of the i-th identity, x̄ is the mean of the entire dataset, n_i is the number of face images of the i-th identity, and c is the number of different identities. The intra-personal scatter matrix is

    S_intra = Σ_{i=1}^{c} Σ_{x ∈ D_i} (x − x̄_i)(x − x̄_i)^T,

where D_i is the set of features of the i-th identity and x̄_i is the corresponding mean. The inter- and intra-personal variances are the eigenvalues of the corresponding scatter matrices and are shown in Fig. 5; the corresponding eigenvectors represent different variation patterns. Both the magnitude and the diversity of feature variances matter in recognition: if all the feature variances concentrate on a small number of eigenvectors, the diversity of intra- or inter-personal variations is low. The features are learned with λ = 0, 0.05, and +∞, respectively.
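The two scatter matrices above can be computed directly from the learned features. The following is a hedged sketch, not the authors' analysis code:

```python
# Sketch (not the authors' code): LDA-style inter- and intra-personal
# scatter matrices, computed from features grouped by identity.
def scatter_matrices(features_by_identity):
    """features_by_identity: list of identities, each a list of feature vectors.
    Returns (S_inter, S_intra) as nested lists (d x d matrices)."""
    d = len(features_by_identity[0][0])
    all_feats = [f for cls in features_by_identity for f in cls]
    n_total = len(all_feats)
    grand_mean = [sum(f[j] for f in all_feats) / n_total for j in range(d)]
    S_inter = [[0.0] * d for _ in range(d)]
    S_intra = [[0.0] * d for _ in range(d)]
    for cls in features_by_identity:
        n_i = len(cls)
        mean_i = [sum(f[j] for f in cls) / n_i for j in range(d)]
        diff = [mean_i[j] - grand_mean[j] for j in range(d)]
        for a in range(d):
            for b in range(d):
                S_inter[a][b] += n_i * diff[a] * diff[b]   # between-identity scatter
        for f in cls:
            dev = [f[j] - mean_i[j] for j in range(d)]
            for a in range(d):
                for b in range(d):
                    S_intra[a][b] += dev[a] * dev[b]       # within-identity scatter
    return S_inter, S_intra
```

The eigenvalues of these two matrices give the variance spectra plotted in Fig. 5; note that S_inter + S_intra equals the total scatter of the features.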
The feature variances for each given λ are normalized by the corresponding mean feature variance. When only the identification signal is used (λ = 0), the learned features contain both diverse inter- and intra-personal variations, as shown by the long tails of the red curves in both figures. While diverse inter-personal variations help to distinguish different identities, large and diverse intra-personal variations are disturbing factors that make face verification difficult. When both the identification and verification signals are used with appropriate weighting (λ = 0.05), the diversity of the inter-personal variations is unchanged while the variations in a few main directions become even larger, as shown by the green curve on the left compared to the red one. At the same time, the intra-personal variations decrease in both diversity and magnitude, as shown by the green curve on the right. Therefore, both the inter- and intra-personal variations change in a direction that makes face verification easier. When λ further increases towards infinity, both the inter- and intra-personal variations collapse to the variations in only a few main directions, since without the identification signal, diverse features cannot be formed. With low diversity of inter-personal variations, distinguishing different identities becomes difficult, so the performance degrades significantly.

Figure 3: Face verification accuracy when varying the weighting parameter λ. λ is plotted in log scale.

Figure 4: Face verification accuracy of DeepID2 features learned with both the face identification and verification signals, where the number of training identities (shown in log scale) used for face identification varies. The result may be further improved with more than 8192 identities.

Figure 5: Spectrum of eigenvalues of the inter- and intra-personal scatter matrices. Best viewed in color.

Figure 6 shows the first two PCA dimensions of features learned with λ = 0, 0.05, and +∞, respectively. These features come from the six identities with the largest numbers of face images in LFW, marked by different colors. The figure further verifies our observations. When λ = 0 (left), different clusters are mixed together due to the large intra-personal variations, although the cluster centers are actually different. When λ increases to 0.05 (middle), the intra-personal variations are significantly reduced and the clusters become distinguishable. When λ further increases towards infinity (right), although the intra-personal variations further decrease, the cluster centers also begin to collapse and some clusters become significantly overlapped (e.g., the red, blue, and cyan clusters in Fig. 6, right), making them hard to distinguish again.

Figure 6: The first two PCA dimensions of DeepID2 features extracted from six identities in LFW.

4.2 Rich identity information improves feature learning

We investigate how the identity information contained in the identification supervisory signal influences the learned features. In particular, we experiment with an exponentially increasing number of identities used for identification during training, from 32 to 8192, while the verification signal is always generated from all 8192 training identities. Fig. 4 shows how the verification accuracies of the learned DeepID2 features (evaluated with the L2 norm and Joint Bayesian, respectively) vary on the test set with the number of identities used in the identification signal. It shows that identifying a large number (e.g., 8192) of identities is key to learning an effective DeepID2 feature representation. This observation is consistent with those in Sec. 4.1. The increasing number of identities provides richer identity information and helps to form DeepID2 features with diverse inter-personal variations, making the class centers of different identities more distinguishable.

4.3 Investigating the verification signals

As shown in Sec. 4.1, the verification signal with moderate intensity mainly takes the effect of reducing the intra-personal variations. To further verify this, we compare our L2 norm verification signal on all sample pairs with signals that only constrain either the positive or the negative sample pairs, denoted as L2+ and L2-, respectively. That is, L2+ only decreases the distances between DeepID2 features of the same identity, while L2- only increases the distances between DeepID2 features of different identities if they are smaller than the margin. The face verification accuracies of the learned DeepID2 features on the test set, measured by the L2 norm and Joint Bayesian respectively, are shown in Table 2. It also compares with the L1 norm and cosine verification signals, as well as with no verification signal (none). The identification signal is the same (classifying the 8192 identities) for all the comparisons.

Table 2: Comparison of different verification signals.
verification signal | L2 | L2+ | L2- | L1 | cosine | none
L2 norm (%)         |    |     |     |    |        |
Joint Bayesian (%)  |    |     |     |    |        |

DeepID2 features learned with the L2+ verification signal are only slightly worse than those learned with L2. In contrast, the L2- verification signal helps little in feature learning and gives almost the same result as using no verification signal at all. This is strong evidence that the effect of the verification signal is mainly to reduce the intra-personal variations.
Another observation is that the face verification accuracy improves in general whenever a verification signal is added on top of the identification signal, and the L2 norm outperforms the other verification metrics compared. This may be because all the other constraints are weaker than L2 and thus less effective in reducing the intra-personal variations. For example, the cosine similarity only constrains the angle between feature vectors, not their magnitudes.

4.4 Final system and comparison with other methods

Before learning Joint Bayesian, DeepID2 features are first projected to 180 dimensions by PCA. After PCA, the Joint Bayesian model is trained on the entire CelebFaces+B data and tested on the 6000 given face pairs in LFW, where the log-likelihood ratio given by Joint Bayesian is compared to a threshold optimized on the training data for face verification. Tab. 3 shows the face verification accuracy with an increasing number of face patches used to extract DeepID2 features, as well as the time needed to extract those DeepID2 features from each face with a single Titan GPU. We achieve 98.97% accuracy with all 25 selected face patches. The feature extraction process is also efficient, taking only 35 ms per face image. The face verification accuracy of each individual face patch is provided in Fig. 2. The short DeepID2 signature is extremely efficient for face identification and face image search, where a query image is matched against a large number of candidates.
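Optimizing the decision threshold on the training data, as described above, can be sketched as follows. This is a hedged illustration with hypothetical helper names, not the authors' code; it assumes a score (such as the Joint Bayesian log-likelihood ratio) where higher means more likely the same identity:

```python
# Sketch (not the authors' code): choosing the verification threshold that
# maximizes accuracy on labeled training pairs.
def optimize_threshold(same_scores, diff_scores):
    """same_scores: scores of same-identity pairs; diff_scores: scores of
    different-identity pairs. Returns the threshold with highest accuracy,
    predicting 'same identity' when score > threshold."""
    candidates = sorted(same_scores + diff_scores)
    total = len(same_scores) + len(diff_scores)
    best_t, best_acc = candidates[0], -1.0
    for t in candidates:
        correct = (sum(s > t for s in same_scores)      # genuine pairs accepted
                   + sum(s <= t for s in diff_scores))  # impostor pairs rejected
        acc = correct / total
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def verify(score, threshold):
    """Final verification decision for one face pair."""
    return score > threshold
```

Scanning only the observed scores as candidate thresholds is sufficient, since accuracy can only change at those values.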
Table 3: Face verification accuracy with DeepID2 features extracted from an increasing number of face patches.
# patches | accuracy (%) | time (ms)

Table 4: Accuracy comparison with the previous best results on LFW.
method                | accuracy (%)
High-dim LBP [4]      | 95.17 ± 1.13
TL Joint Bayesian [2] | 96.33 ± 1.08
DeepFace [21]         | 97.35 ± 0.25
DeepID [20]           | 97.45 ± 0.26
GaussianFace [13]     | 98.52 ± 0.66
DeepID2               | 99.15 ± 0.13

Figure 7: ROC comparison with the previous best results on LFW. Best viewed in color.

To further exploit the rich pool of DeepID2 features extracted from the large number of patches, we repeat the feature selection algorithm another six times, each time choosing DeepID2 features from the patches that have not been selected in previous feature selection steps. We then learn a Joint Bayesian model on each of the seven groups of selected features, respectively, and fuse the seven Joint Bayesian scores on each pair of compared faces by further learning an SVM. In this way, we achieve an even higher 99.15% face verification accuracy. The accuracy and ROC comparisons with previous state-of-the-art methods on LFW are shown in Tab. 4 and Fig. 7, respectively. We achieve the best results and improve on previous results by a large margin.

5 Conclusion

This paper has shown that the effects of the face identification and verification supervisory signals on deep feature representations coincide with the two aspects of constructing ideal features for face recognition, i.e., increasing inter-personal variations and reducing intra-personal variations, and that the combination of the two supervisory signals leads to significantly better features than either one alone. By embedding the learned features into the traditional face verification pipeline, we achieved a highly effective system with 99.15% face verification accuracy on LFW. The arXiv report of this paper was published in June 2014 [19].
References
[1] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. PAMI, 19.
[2] X. Cao, D. Wipf, F. Wen, G. Duan, and J. Sun. A practical transfer learning algorithm for face verification. In Proc. ICCV.
[3] D. Chen, X. Cao, L. Wang, F. Wen, and J. Sun. Bayesian face revisited: A joint formulation. In Proc. ECCV.
[4] D. Chen, X. Cao, F. Wen, and J. Sun. Blessing of dimensionality: High-dimensional feature and its efficient compression for face verification. In Proc. CVPR.
[5] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In Proc. CVPR.
[6] M. Guillaumin, J. Verbeek, and C. Schmid. Is that you? Metric learning approaches for face identification. In Proc. ICCV.
[7] R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In Proc. CVPR.
[8] J. Hu, J. Lu, and Y.-P. Tan. Discriminative deep metric learning for face verification in the wild. In Proc. CVPR.
[9] C. Huang, S. Zhu, and K. Yu. Large scale strongly supervised ensemble metric learning, with applications to face verification and retrieval. NEC Technical Report TR115.
[10] G. B. Huang, H. Lee, and E. Learned-Miller. Learning hierarchical representations for face verification with convolutional deep belief networks. In Proc. CVPR.
[11] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled Faces in the Wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst.
[12] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE.
[13] C. Lu and X. Tang. Surpassing human-level face verification performance on LFW with GaussianFace. Technical report, arXiv.
[14] A. Mignon and F. Jurie. PCCA: A new approach for distance learning from sparse pairwise constraints. In Proc. CVPR.
[15] H. Mobahi, R. Collobert, and J. Weston. Deep learning from temporal coherence in video. In Proc. ICML.
[16] B. Moghaddam, T. Jebara, and A. Pentland. Bayesian face recognition. PR, 33.
[17] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proc. ICML.
[18] Y. Sun, X. Wang, and X. Tang. Hybrid deep learning for face verification. In Proc. ICCV.
[19] Y. Sun, X. Wang, and X. Tang. Deep learning face representation by joint identification-verification. Technical report, arXiv.
[20] Y. Sun, X. Wang, and X. Tang. Deep learning face representation from predicting 10,000 classes. In Proc. CVPR.
[21] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. DeepFace: Closing the gap to human-level performance in face verification. In Proc. CVPR.
[22] X. Wang and X. Tang. Unified subspace analysis for face recognition. In Proc. ICCV.
[23] X. Wang and X. Tang. A unified framework for subspace face recognition. PAMI, 26.
[24] X. Xiong and F. De la Torre. Supervised descent method and its applications to face alignment. In Proc. CVPR.
[25] T. Zhang. Adaptive forward-backward greedy algorithm for learning sparse representations. IEEE Trans. Inf. Theory, 57.
[26] Z. Zhu, P. Luo, X. Wang, and X. Tang. Deep learning identity-preserving face space. In Proc. ICCV.
[27] Z. Zhu, P. Luo, X. Wang, and X. Tang. Deep learning and disentangling face representation by multi-view perceptron. In Proc. NIPS.
Is that you? Metric Learning Approaches for Face Identification
Is that you? Metric Learning Approaches for Face Identification Matthieu Guillaumin, Jakob Verbeek and Cordelia Schmid LEAR, INRIA Grenoble Laboratoire Jean Kuntzmann [email protected] Abstract
Introduction to Deep Learning and its applications in Computer Vision
Introduction to Deep Learning and its applications in Computer Vision Xiaogang Wang Department of Electronic Engineering, The Chinese University of Hong Kong Morning Tutorial (9:00 12:30) March 23, 2015
Lecture 6: Classification & Localization. boris. [email protected]
Lecture 6: Classification & Localization boris. [email protected] 1 Agenda ILSVRC 2014 Overfeat: integrated classification, localization, and detection Classification with Localization Detection. 2 ILSVRC-2014
The Role of Size Normalization on the Recognition Rate of Handwritten Numerals
The Role of Size Normalization on the Recognition Rate of Handwritten Numerals Chun Lei He, Ping Zhang, Jianxiong Dong, Ching Y. Suen, Tien D. Bui Centre for Pattern Recognition and Machine Intelligence,
Efficient online learning of a non-negative sparse autoencoder
and Machine Learning. Bruges (Belgium), 28-30 April 2010, d-side publi., ISBN 2-93030-10-2. Efficient online learning of a non-negative sparse autoencoder Andre Lemme, R. Felix Reinhart and Jochen J. Steil
Rectified Linear Units Improve Restricted Boltzmann Machines
Rectified Linear Units Improve Restricted Boltzmann Machines Vinod Nair [email protected] Geoffrey E. Hinton [email protected] Department of Computer Science, University of Toronto, Toronto, ON
Image and Video Understanding
Image and Video Understanding 2VO 710.095 WS Christoph Feichtenhofer, Axel Pinz Slide credits: Many thanks to all the great computer vision researchers on which this presentation relies on. Most material
Facebook Friend Suggestion Eytan Daniyalzade and Tim Lipus
Facebook Friend Suggestion Eytan Daniyalzade and Tim Lipus 1. Introduction Facebook is a social networking website with an open platform that enables developers to extract and utilize user information
Open-Set Face Recognition-based Visitor Interface System
Open-Set Face Recognition-based Visitor Interface System Hazım K. Ekenel, Lorant Szasz-Toth, and Rainer Stiefelhagen Computer Science Department, Universität Karlsruhe (TH) Am Fasanengarten 5, Karlsruhe
PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 4: LINEAR MODELS FOR CLASSIFICATION
PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 4: LINEAR MODELS FOR CLASSIFICATION Introduction In the previous chapter, we explored a class of regression models having particularly simple analytical
Face Recognition in Low-resolution Images by Using Local Zernike Moments
Proceedings of the International Conference on Machine Vision and Machine Learning Prague, Czech Republic, August14-15, 014 Paper No. 15 Face Recognition in Low-resolution Images by Using Local Zernie
Adaptive Face Recognition System from Myanmar NRC Card
Adaptive Face Recognition System from Myanmar NRC Card Ei Phyo Wai University of Computer Studies, Yangon, Myanmar Myint Myint Sein University of Computer Studies, Yangon, Myanmar ABSTRACT Biometrics is
Statistical Machine Learning
Statistical Machine Learning UoC Stats 37700, Winter quarter Lecture 4: classical linear and quadratic discriminants. 1 / 25 Linear separation For two classes in R d : simple idea: separate the classes
Class-specific Sparse Coding for Learning of Object Representations
Class-specific Sparse Coding for Learning of Object Representations Stephan Hasler, Heiko Wersing, and Edgar Körner Honda Research Institute Europe GmbH Carl-Legien-Str. 30, 63073 Offenbach am Main, Germany
Subspace Analysis and Optimization for AAM Based Face Alignment
Subspace Analysis and Optimization for AAM Based Face Alignment Ming Zhao Chun Chen College of Computer Science Zhejiang University Hangzhou, 310027, P.R.China [email protected] Stan Z. Li Microsoft
Introduction to Machine Learning CMU-10701
Introduction to Machine Learning CMU-10701 Deep Learning Barnabás Póczos & Aarti Singh Credits Many of the pictures, results, and other materials are taken from: Ruslan Salakhutdinov Joshua Bengio Geoffrey
AN IMPROVED DOUBLE CODING LOCAL BINARY PATTERN ALGORITHM FOR FACE RECOGNITION
AN IMPROVED DOUBLE CODING LOCAL BINARY PATTERN ALGORITHM FOR FACE RECOGNITION Saurabh Asija 1, Rakesh Singh 2 1 Research Scholar (Computer Engineering Department), Punjabi University, Patiala. 2 Asst.
Selecting Receptive Fields in Deep Networks
Selecting Receptive Fields in Deep Networks Adam Coates Department of Computer Science Stanford University Stanford, CA 94305 [email protected] Andrew Y. Ng Department of Computer Science Stanford
Recurrent Neural Networks
Recurrent Neural Networks Neural Computation : Lecture 12 John A. Bullinaria, 2015 1. Recurrent Neural Network Architectures 2. State Space Models and Dynamical Systems 3. Backpropagation Through Time
Lecture 6: CNNs for Detection, Tracking, and Segmentation Object Detection
CSED703R: Deep Learning for Visual Recognition (206S) Lecture 6: CNNs for Detection, Tracking, and Segmentation Object Detection Bohyung Han Computer Vision Lab. [email protected] 2 3 Object detection
arxiv:1312.6034v2 [cs.cv] 19 Apr 2014
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps arxiv:1312.6034v2 [cs.cv] 19 Apr 2014 Karen Simonyan Andrea Vedaldi Andrew Zisserman Visual Geometry Group,
Recognizing Cats and Dogs with Shape and Appearance based Models. Group Member: Chu Wang, Landu Jiang
Recognizing Cats and Dogs with Shape and Appearance based Models Group Member: Chu Wang, Landu Jiang Abstract Recognizing cats and dogs from images is a challenging competition raised by Kaggle platform
A Simple Introduction to Support Vector Machines
A Simple Introduction to Support Vector Machines Martin Law Lecture for CSE 802 Department of Computer Science and Engineering Michigan State University Outline A brief history of SVM Large-margin linear
How To Cluster
Data Clustering Dec 2nd, 2013 Kyrylo Bessonov Talk outline Introduction to clustering Types of clustering Supervised Unsupervised Similarity measures Main clustering algorithms k-means Hierarchical Main
Pedestrian Detection with RCNN
Pedestrian Detection with RCNN Matthew Chen Department of Computer Science Stanford University [email protected] Abstract In this paper we evaluate the effectiveness of using a Region-based Convolutional
Visualization by Linear Projections as Information Retrieval
Visualization by Linear Projections as Information Retrieval Jaakko Peltonen Helsinki University of Technology, Department of Information and Computer Science, P. O. Box 5400, FI-0015 TKK, Finland [email protected]
Administrivia. Traditional Recognition Approach. Overview. CMPSCI 370: Intro. to Computer Vision Deep learning
: Intro. to Computer Vision Deep learning University of Massachusetts, Amherst April 19/21, 2016 Instructor: Subhransu Maji Finals (everyone) Thursday, May 5, 1-3pm, Hasbrouck 113 Final exam Tuesday, May
Monotonicity Hints. Abstract
Monotonicity Hints Joseph Sill Computation and Neural Systems program California Institute of Technology email: [email protected] Yaser S. Abu-Mostafa EE and CS Deptartments California Institute of Technology
1816 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 15, NO. 7, JULY 2006. Principal Components Null Space Analysis for Image and Video Classification
1816 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 15, NO. 7, JULY 2006 Principal Components Null Space Analysis for Image and Video Classification Namrata Vaswani, Member, IEEE, and Rama Chellappa, Fellow,
Distance Metric Learning in Data Mining (Part I) Fei Wang and Jimeng Sun IBM TJ Watson Research Center
Distance Metric Learning in Data Mining (Part I) Fei Wang and Jimeng Sun IBM TJ Watson Research Center 1 Outline Part I - Applications Motivation and Introduction Patient similarity application Part II
An Analysis of Single-Layer Networks in Unsupervised Feature Learning
An Analysis of Single-Layer Networks in Unsupervised Feature Learning Adam Coates 1, Honglak Lee 2, Andrew Y. Ng 1 1 Computer Science Department, Stanford University {acoates,ang}@cs.stanford.edu 2 Computer
Principal components analysis
CS229 Lecture notes Andrew Ng Part XI Principal components analysis In our discussion of factor analysis, we gave a way to model data x R n as approximately lying in some k-dimension subspace, where k
Predict Influencers in the Social Network
Predict Influencers in the Social Network Ruishan Liu, Yang Zhao and Liuyu Zhou Email: rliu2, yzhao2, [email protected] Department of Electrical Engineering, Stanford University Abstract Given two persons
A comparative study on face recognition techniques and neural network
A comparative study on face recognition techniques and neural network 1. Abstract Meftah Ur Rahman Department of Computer Science George Mason University [email protected] In modern times, face
Component Ordering in Independent Component Analysis Based on Data Power
Component Ordering in Independent Component Analysis Based on Data Power Anne Hendrikse Raymond Veldhuis University of Twente University of Twente Fac. EEMCS, Signals and Systems Group Fac. EEMCS, Signals
Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches
Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches PhD Thesis by Payam Birjandi Director: Prof. Mihai Datcu Problematic
Comparison of Non-linear Dimensionality Reduction Techniques for Classification with Gene Expression Microarray Data
CMPE 59H Comparison of Non-linear Dimensionality Reduction Techniques for Classification with Gene Expression Microarray Data Term Project Report Fatma Güney, Kübra Kalkan 1/15/2013 Keywords: Non-linear
Stochastic Pooling for Regularization of Deep Convolutional Neural Networks
Stochastic Pooling for Regularization of Deep Convolutional Neural Networks Matthew D. Zeiler Department of Computer Science Courant Institute, New York University [email protected] Rob Fergus Department
Supervised Feature Selection & Unsupervised Dimensionality Reduction
Supervised Feature Selection & Unsupervised Dimensionality Reduction Feature Subset Selection Supervised: class labels are given Select a subset of the problem features Why? Redundant features much or
MACHINE LEARNING IN HIGH ENERGY PHYSICS
MACHINE LEARNING IN HIGH ENERGY PHYSICS LECTURE #1 Alex Rogozhnikov, 2015 INTRO NOTES 4 days two lectures, two practice seminars every day this is introductory track to machine learning kaggle competition!
Learning to Process Natural Language in Big Data Environment
CCF ADL 2015 Nanchang Oct 11, 2015 Learning to Process Natural Language in Big Data Environment Hang Li Noah s Ark Lab Huawei Technologies Part 1: Deep Learning - Present and Future Talk Outline Overview
Tattoo Detection for Soft Biometric De-Identification Based on Convolutional NeuralNetworks
1 Tattoo Detection for Soft Biometric De-Identification Based on Convolutional NeuralNetworks Tomislav Hrkać, Karla Brkić, Zoran Kalafatić Faculty of Electrical Engineering and Computing University of
SUCCESSFUL PREDICTION OF HORSE RACING RESULTS USING A NEURAL NETWORK
SUCCESSFUL PREDICTION OF HORSE RACING RESULTS USING A NEURAL NETWORK N M Allinson and D Merritt 1 Introduction This contribution has two main sections. The first discusses some aspects of multilayer perceptrons,
Lecture 2: The SVM classifier
Lecture 2: The SVM classifier C19 Machine Learning Hilary 2015 A. Zisserman Review of linear classifiers Linear separability Perceptron Support Vector Machine (SVM) classifier Wide margin Cost function
Analysis of kiva.com Microlending Service! Hoda Eydgahi Julia Ma Andy Bardagjy December 9, 2010 MAS.622j
Analysis of kiva.com Microlending Service! Hoda Eydgahi Julia Ma Andy Bardagjy December 9, 2010 MAS.622j What is Kiva? An organization that allows people to lend small amounts of money via the Internet
Classifying Manipulation Primitives from Visual Data
Classifying Manipulation Primitives from Visual Data Sandy Huang and Dylan Hadfield-Menell Abstract One approach to learning from demonstrations in robotics is to make use of a classifier to predict if
A Survey on Outlier Detection Techniques for Credit Card Fraud Detection
IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661, p- ISSN: 2278-8727Volume 16, Issue 2, Ver. VI (Mar-Apr. 2014), PP 44-48 A Survey on Outlier Detection Techniques for Credit Card Fraud
The Scientific Data Mining Process
Chapter 4 The Scientific Data Mining Process When I use a word, Humpty Dumpty said, in rather a scornful tone, it means just what I choose it to mean neither more nor less. Lewis Carroll [87, p. 214] In
Volume 2, Issue 9, September 2014 International Journal of Advance Research in Computer Science and Management Studies
Volume 2, Issue 9, September 2014 International Journal of Advance Research in Computer Science and Management Studies Research Article / Survey Paper / Case Study Available online at: www.ijarcsms.com
A Hybrid Neural Network-Latent Topic Model
Li Wan Leo Zhu Rob Fergus Dept. of Computer Science, Courant institute, New York University {wanli,zhu,fergus}@cs.nyu.edu Abstract This paper introduces a hybrid model that combines a neural network with
EM Clustering Approach for Multi-Dimensional Analysis of Big Data Set
EM Clustering Approach for Multi-Dimensional Analysis of Big Data Set Amhmed A. Bhih School of Electrical and Electronic Engineering Princy Johnson School of Electrical and Electronic Engineering Martin
Deformable Part Models with CNN Features
Deformable Part Models with CNN Features Pierre-André Savalle 1, Stavros Tsogkas 1,2, George Papandreou 3, Iasonas Kokkinos 1,2 1 Ecole Centrale Paris, 2 INRIA, 3 TTI-Chicago Abstract. In this work we
Chapter ML:XI (continued)
Chapter ML:XI (continued) XI. Cluster Analysis Data Mining Overview Cluster Analysis Basics Hierarchical Cluster Analysis Iterative Cluster Analysis Density-Based Cluster Analysis Cluster Evaluation Constrained
Tensor Methods for Machine Learning, Computer Vision, and Computer Graphics
Tensor Methods for Machine Learning, Computer Vision, and Computer Graphics Part I: Factorizations and Statistical Modeling/Inference Amnon Shashua School of Computer Science & Eng. The Hebrew University
Clustering. Danilo Croce Web Mining & Retrieval a.a. 2015/201 16/03/2016
Clustering Danilo Croce Web Mining & Retrieval a.a. 2015/201 16/03/2016 1 Supervised learning vs. unsupervised learning Supervised learning: discover patterns in the data that relate data attributes with
Convolutional Feature Maps
Convolutional Feature Maps Elements of efficient (and accurate) CNN-based object detection Kaiming He Microsoft Research Asia (MSRA) ICCV 2015 Tutorial on Tools for Efficient Object Detection Overview
Simplified Machine Learning for CUDA. Umar Arshad @arshad_umar Arrayfire @arrayfire
Simplified Machine Learning for CUDA Umar Arshad @arshad_umar Arrayfire @arrayfire ArrayFire CUDA and OpenCL experts since 2007 Headquartered in Atlanta, GA In search for the best and the brightest Expert
Automatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 269 Class Project Report
Automatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 69 Class Project Report Junhua Mao and Lunbo Xu University of California, Los Angeles [email protected] and lunbo
Predict the Popularity of YouTube Videos Using Early View Data
000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050
Unsupervised and supervised dimension reduction: Algorithms and connections
Unsupervised and supervised dimension reduction: Algorithms and connections Jieping Ye Department of Computer Science and Engineering Evolutionary Functional Genomics Center The Biodesign Institute Arizona
Behavior Analysis in Crowded Environments. XiaogangWang Department of Electronic Engineering The Chinese University of Hong Kong June 25, 2011
Behavior Analysis in Crowded Environments XiaogangWang Department of Electronic Engineering The Chinese University of Hong Kong June 25, 2011 Behavior Analysis in Sparse Scenes Zelnik-Manor & Irani CVPR
MVA ENS Cachan. Lecture 2: Logistic regression & intro to MIL Iasonas Kokkinos [email protected]
Machine Learning for Computer Vision 1 MVA ENS Cachan Lecture 2: Logistic regression & intro to MIL Iasonas Kokkinos [email protected] Department of Applied Mathematics Ecole Centrale Paris Galen
Assessment. Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall
Automatic Photo Quality Assessment Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall Estimating i the photorealism of images: Distinguishing i i paintings from photographs h Florin
THREE DIMENSIONAL REPRESENTATION OF AMINO ACID CHARAC- TERISTICS
THREE DIMENSIONAL REPRESENTATION OF AMINO ACID CHARAC- TERISTICS O.U. Sezerman 1, R. Islamaj 2, E. Alpaydin 2 1 Laborotory of Computational Biology, Sabancı University, Istanbul, Turkey. 2 Computer Engineering
Clarify Some Issues on the Sparse Bayesian Learning for Sparse Signal Recovery
Clarify Some Issues on the Sparse Bayesian Learning for Sparse Signal Recovery Zhilin Zhang and Bhaskar D. Rao Technical Report University of California at San Diego September, Abstract Sparse Bayesian
Fast Matching of Binary Features
Fast Matching of Binary Features Marius Muja and David G. Lowe Laboratory for Computational Intelligence University of British Columbia, Vancouver, Canada {mariusm,lowe}@cs.ubc.ca Abstract There has been
Module 5. Deep Convnets for Local Recognition Joost van de Weijer 4 April 2016
Module 5 Deep Convnets for Local Recognition Joost van de Weijer 4 April 2016 Previously, end-to-end.. Dog Slide credit: Jose M 2 Previously, end-to-end.. Dog Learned Representation Slide credit: Jose
Università degli Studi di Bologna
Università degli Studi di Bologna DEIS Biometric System Laboratory Incremental Learning by Message Passing in Hierarchical Temporal Memory Davide Maltoni Biometric System Laboratory DEIS - University of
Supervised and unsupervised learning - 1
Chapter 3 Supervised and unsupervised learning - 1 3.1 Introduction The science of learning plays a key role in the field of statistics, data mining, artificial intelligence, intersecting with areas in
Self Organizing Maps: Fundamentals
Self Organizing Maps: Fundamentals Introduction to Neural Networks : Lecture 16 John A. Bullinaria, 2004 1. What is a Self Organizing Map? 2. Topographic Maps 3. Setting up a Self Organizing Map 4. Kohonen
Document Image Retrieval using Signatures as Queries
Document Image Retrieval using Signatures as Queries Sargur N. Srihari, Shravya Shetty, Siyuan Chen, Harish Srinivasan, Chen Huang CEDAR, University at Buffalo(SUNY) Amherst, New York 14228 Gady Agam and
Feature Engineering in Machine Learning
Research Fellow Faculty of Information Technology, Monash University, Melbourne VIC 3800, Australia August 21, 2015 Outline A Machine Learning Primer Machine Learning and Data Science Bias-Variance Phenomenon
Bert Huang Department of Computer Science Virginia Tech
This paper was submitted as a final project report for CS6424/ECE6424 Probabilistic Graphical Models and Structured Prediction in the spring semester of 2016. The work presented here is done by students
Efficient Attendance Management: A Face Recognition Approach
Efficient Attendance Management: A Face Recognition Approach Badal J. Deshmukh, Sudhir M. Kharad Abstract Taking student attendance in a classroom has always been a tedious task faultfinders. It is completely
Linear Classification. Volker Tresp Summer 2015
Linear Classification Volker Tresp Summer 2015 1 Classification Classification is the central task of pattern recognition Sensors supply information about an object: to which class do the object belong
Deep Learning using Linear Support Vector Machines
Yichuan Tang [email protected] Department of Computer Science, University of Toronto. Toronto, Ontario, Canada. Abstract Recently, fully-connected and convolutional neural networks have been trained
A secure face tracking system
International Journal of Information & Computation Technology. ISSN 0974-2239 Volume 4, Number 10 (2014), pp. 959-964 International Research Publications House http://www. irphouse.com A secure face tracking
Machine Learning and Pattern Recognition Logistic Regression
Machine Learning and Pattern Recognition Logistic Regression Course Lecturer:Amos J Storkey Institute for Adaptive and Neural Computation School of Informatics University of Edinburgh Crichton Street,
More Local Structure Information for Make-Model Recognition
More Local Structure Information for Make-Model Recognition David Anthony Torres Dept. of Computer Science The University of California at San Diego La Jolla, CA 9093 Abstract An object classification
Blood Vessel Classification into Arteries and Veins in Retinal Images
Blood Vessel Classification into Arteries and Veins in Retinal Images Claudia Kondermann and Daniel Kondermann a and Michelle Yan b a Interdisciplinary Center for Scientific Computing (IWR), University
Parallel Data Selection Based on Neurodynamic Optimization in the Era of Big Data
Parallel Data Selection Based on Neurodynamic Optimization in the Era of Big Data Jun Wang Department of Mechanical and Automation Engineering The Chinese University of Hong Kong Shatin, New Territories,
A Data Mining Study of Weld Quality Models Constructed with MLP Neural Networks from Stratified Sampled Data
A Data Mining Study of Weld Quality Models Constructed with MLP Neural Networks from Stratified Sampled Data T. W. Liao, G. Wang, and E. Triantaphyllou Department of Industrial and Manufacturing Systems
Colour Image Segmentation Technique for Screen Printing
60 R.U. Hewage and D.U.J. Sonnadara Department of Physics, University of Colombo, Sri Lanka ABSTRACT Screen-printing is an industry with a large number of applications ranging from printing mobile phone
MULTI-LEVEL SEMANTIC LABELING OF SKY/CLOUD IMAGES
MULTI-LEVEL SEMANTIC LABELING OF SKY/CLOUD IMAGES Soumyabrata Dev, Yee Hui Lee School of Electrical and Electronic Engineering Nanyang Technological University (NTU) Singapore 639798 Stefan Winkler Advanced
