Efficient Approximate Similarity Search Using Random Projection Learning


Peisen Yuan¹, Chaofeng Sha¹, Xiaoling Wang², Bin Yang¹, and Aoying Zhou²

¹ School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai 200433, P.R. China
² Shanghai Key Laboratory of Trustworthy Computing, Software Engineering Institute, East China Normal University, Shanghai 200062, P.R. China

Abstract. Efficient similarity search on high-dimensional data is an important research topic in the database and information retrieval fields. In this paper, we propose a random projection learning approach to the approximate similarity search problem. First, the random projection technique of locality sensitive hashing is applied to generate high-quality binary codes. Then the binary codes are treated as labels, and a group of SVM classifiers is trained on the labeled data to predict the binary code for similarity queries. Experiments on real datasets demonstrate that our method substantially outperforms existing work in terms of preprocessing time and query processing.

1 Introduction

The similarity search, also known as the k-nearest neighbor (k-NN) query, is a classical problem and a core operation in the database and information retrieval fields. The problem has been extensively studied and applied in many areas, such as content-based multimedia retrieval, time series and scientific databases, and text documents. A common characteristic of these kinds of data is their high dimensionality. Similarity search on high-dimensional data is a big challenge due to its time and space demands. However, in many real applications, approximate results obtained under tighter time and space constraints can also satisfy users' requirements. For example, in content-based image retrieval, a similar image can be returned as the result.
Recently, researchers have proposed approximate methods for the similarity query, which can provide satisfying results with much improved efficiency [1, 5]. Locality sensitive hashing (LSH for short) [1] is an efficient way of processing similarity search approximately. The principle of LSH is that the more similar two objects are, the higher the probability that they fall into the same hashing bucket. The random projection technique of LSH is designed to approximately evaluate the cosine similarity between vectors; it transforms the high-dimensional data into much lower dimensions with compact bit vectors. However, LSH is a data-independent approach. Recently, learning-based data-aware methods, such as semantic hashing [6], have been proposed and improve the search

H. Wang et al. (Eds.): WAIM 2011, LNCS 6897. © Springer-Verlag Berlin Heidelberg 2011
efficiency with much shorter binary codes. The key of the learning-based approaches is designing a way to obtain binary codes for the data and the query. For measuring the quality of the binary code, the entropy maximizing criterion was proposed [6]. The state of the art among learning-based techniques is self-taught hashing (STH for short) [7], which converts the similarity search into a two-stage learning problem. The first stage is unsupervised learning of the binary code, and the second is supervised learning with two-class classifiers. In order to obtain binary codes satisfying the entropy maximizing criterion, the adjacency matrix of the k-NN graph of the dataset is constructed first. After solving the matrix by the binarised Laplacian Eigenmap, the median of the eigenvalues is set as the threshold for assigning the bit labels: if the eigenvalue is larger than the threshold, the corresponding label is 1, otherwise 0. In the second stage, the binary codes of the objects are taken as class labels, and classifiers are trained for predicting the binary code of the query. However, the time and space complexity of this preprocessing stage is considerably high.

In this paper, based on the filter-and-refine framework, we propose a random projection learning (RPL for short) approach for the approximate similarity search problem, which requires much less time and space for acquiring the binary codes in the preprocessing step. First, the random projection technique is used to obtain the binary codes of the data objects. The binary codes are used as the labels of the data objects, and then l SVM classifiers are trained, which are used to predict the binary labels of queries. We prove that the binary code after random projection satisfies the entropy maximizing criterion required by semantic hashing.
Theoretical analysis and an empirical study on real datasets are conducted, demonstrating that our method attains comparable effectiveness with much less time and space cost. To summarize, the main contributions of this paper are briefly outlined as follows:
- A random projection learning method for similarity search is proposed, and the approximate k-nearest neighbor query is studied.
- It is proved that the random projection of LSH satisfies the entropy maximizing criterion needed by semantic hashing.
- Extensive experiments are conducted to demonstrate the effectiveness and efficiency of our methods.

The rest of the paper is organized as follows. The random projection learning method for approximate similarity search is presented in Section 2. An extensive experimental evaluation is reported in Section 3. In Section 4, the related work is briefly reviewed. In Section 5, the conclusion is drawn and future work is summarized.

2 Random Projection Learning

2.1 The Framework

The processing framework of RPL is described in Figure 1(a). Given a data set S, the random projection technique of LSH is first used to obtain the binary codes. After that, the binary codes are treated as the labels of the data objects. Then l SVM classifiers
[Figure 1. The processing framework and the training with signatures: (a) processing framework of RPL; (b) SVM training on LSH signature vectors]

are trained, which are used to predict the binary labels of queries. The binary code after random projection satisfies the entropy maximizing criterion, as proved in the Appendix. To answer a query, the binary labels of the query are first predicted with these l classifiers. Then the similarities are evaluated in the Hamming space, and the objects whose Hamming distance to the query is less than a threshold are treated as candidates. The distances or similarities are evaluated on this candidate set, and the results are re-ranked and returned. In this paper, we consider the approximate k-NN query. Since there is no need to compute the k-NN graph, the Laplacian Eigenmap, or the median, our framework answers queries with much less preprocessing time and space consumption compared with STH [7]. Unless stated otherwise, cosine similarity and Euclidean distance are used as the similarity metric and the distance metric.

2.2 Random Projection

M. Charikar [2] proposed the random projection technique using random hyperplanes, which preserves the cosine similarity between vectors in the lower-dimensional space. Random projection is a locality sensitive hashing method for dimensionality reduction and a powerful tool for approximating the cosine similarity. Let u and v be two vectors in R^d and θ(u, v) the angle between them. The cosine similarity between u and v is defined as Eq. 1:

cosine(θ(u, v)) = (u · v) / (‖u‖ ‖v‖)    (1)

Given a vector u ∈ R^d, a random vector r is generated with each component drawn from the standard normal distribution N(0, 1). Each hash function h_r of the random projection LSH family H is defined as Eq. 2:

h_r(u) = 1 if r · u ≥ 0; 0 otherwise.    (2)
Given the hash family H and vectors u and v, Eq. 3 can be obtained [2]:

Pr[h_r(u) = h_r(v)] = 1 − θ(u, v)/π    (3)
From Eq. 3, the cosine similarity between u and v can be approximately evaluated with Eq. 4:

cosine(θ(u, v)) = cos((1 − Pr[h_r(u) = h_r(v)]) π)    (4)

For the random projection of LSH, l hash functions h_r1, …, h_rl are chosen from H. After hashing, the vector u can be represented by the signature s as Eq. 5:

s = {h_r1(u), …, h_rl(u)}.    (5)

The more similar the data vectors are, the higher the probability that they are projected to the same labels.

2.3 The Algorithm

The primary idea of RPL is that: (1) similar data vectors have similar binary codes after random projection, so the disparity between the binary code predicted for the query and those of the data vectors should be small; (2) the smaller the distance between two vectors, the higher the chance that they belong to the same class. Before introducing the query processing, the definitions used in the following sections are given first.

Definition 1. Hamming Distance. Given two binary vectors v1 and v2 with equal length L, their Hamming distance is defined as H_dist(v1, v2) = Σ_{i=1}^{L} v1[i] ⊕ v2[i].

Definition 2. Hamming Ball Coverset. Given an integer R, a binary vector v, and a vector set V_b, the Hamming Ball Coverset of v with radius R is denoted as BC_R(v) = {v_i | v_i ∈ V_b and H_dist(v, v_i) ≤ R}.

The intuitive meaning of the Hamming Ball Coverset is the set of all binary vectors in V_b whose Hamming distance to v is less than or equal to R.

Obtaining the Binary Vector. The algorithm using random projection to generate the binary codes is illustrated in Algorithm 1. First, a random matrix R_{d×l} is generated; the entries of each column vector of the matrix are chosen from N(0, 1) and the columns are normalized, i.e., Σ_{i=1}^{d} r_ij² = 1, j = 1, …, l. For each vector v of the set V, the inner products with each column vector of R_{d×l} are evaluated, so the signature of each v ∈ V is obtained as in Eq. 5. After projecting all the objects, a signature matrix S ∈ R^{n×l} is obtained: each row represents an object vector and each column an LSH signature bit.
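As an illustration of Eqs. 1–5, the following sketch (ours, not the authors' code; the dimension d, bit length l, and random seed are arbitrary choices) generates random-hyperplane signatures and recovers an approximate cosine similarity from them:

```python
import numpy as np

def lsh_signature(u, R):
    """One bit per random hyperplane: h_r(u) = 1 if r . u >= 0, else 0 (Eqs. 2, 5)."""
    return (R @ u >= 0).astype(np.uint8)

def approx_cosine(s_u, s_v):
    """Estimate cosine(theta(u, v)) from two signatures via Eqs. 3-4."""
    collision = np.mean(s_u == s_v)           # estimates Pr[h_r(u) = h_r(v)]
    return np.cos((1.0 - collision) * np.pi)  # Eq. 4

rng = np.random.default_rng(7)
d, l = 64, 512                                # arbitrary dimension and bit length
R = rng.standard_normal((l, d))               # entries drawn from N(0, 1)

u = rng.standard_normal(d)
v = u + 0.2 * rng.standard_normal(d)          # a vector similar to u

exact = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
approx = approx_cosine(lsh_signature(u, R), lsh_signature(v, R))
print(round(float(exact), 3), round(float(approx), 3))  # two close values
```

With l = 512 bits the estimate typically lands within a few hundredths of the exact value; longer signatures tighten it further, at linear cost in l.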
In order to access the signatures easily, an inverted list of binary vectors is built based on the LSH signatures. Each signature vector v_b of the corresponding object is a binary vector in {0, 1}^l. Finally, the inverted list of binary vectors (ILBV) is returned. The similarity value of Eq. 4 is normalized between 0 and 1.
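Definitions 1 and 2 and the inverted-list lookup can be illustrated with a small sketch (the object ids and bit patterns here are made up for illustration):

```python
def hamming_dist(v1, v2):
    """Definition 1: number of positions where the two bit vectors differ."""
    return sum(b1 != b2 for b1, b2 in zip(v1, v2))

def hamming_ball_coverset(q_v, ilbv, R):
    """Definition 2: BC_R(q_v), all stored codes within Hamming radius R of q_v."""
    return [oid for oid, v_b in ilbv.items() if hamming_dist(q_v, v_b) <= R]

# Toy inverted list of binary vectors: object id -> signature in {0,1}^l.
ilbv = {"a": [1, 0, 1, 1], "b": [1, 0, 0, 1], "c": [0, 1, 0, 0]}
print(hamming_ball_coverset([1, 0, 1, 1], ilbv, 1))  # ['a', 'b']
```

Object "c" differs from the query code in all four positions, so it falls outside the Hamming ball and is filtered out before any exact distance is computed.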
Algorithm 1. Random Projection of LSH
Input: object vector set V, V ⊂ R^d.
Output: inverted list of binary vectors (ILBV).
1  ILBV = ∅;
2  generate a normalized random matrix R_{d×l};
3  foreach v ∈ V do
4    v_b = ∅;
5    foreach column vector r of R_{d×l} do
6      h = r · v; val = sgn(h); v_b.append(val);
7    ILBV.add(v_b);
8  return ILBV;

Training Classifiers. The support vector machine (SVM) is a classic classification technique in the data mining field. In this paper, SVM classifiers are trained with the labeled data vectors. Given the labeled training data (x_i, y_i), i = 1, …, n, where x_i ∈ R^d and y_i ∈ {−1, 1}, the solution of the SVM is formalized as the following optimization problem:

min_{w,b,ξ} (1/2) w^T w + C Σ_{i=1}^{n} ξ_i    (6)
subject to y_i (w^T x_i + b) ≥ 1 − ξ_i, ξ_i ≥ 0.

The learning procedure after obtaining the binary labels is presented in Figure 1(b). The x-axis represents the binary vectors of the LSH signatures, and the y-axis represents the data objects. As illustrated in Figure 1(b), each column of the binary vectors is used as the class labels of the objects, and an SVM classifier is trained for it. Since the length of the signature vector is l, l SVM classifiers can be trained in this manner. (The labels {0, 1} can be transformed to {−1, 1} plainly.)

The classifier training algorithm used in RPL is described in Algorithm 2. For each column of the signature matrix, an SVM classifier is trained (lines 1–4). In this manner, l classifiers are trained. Finally, these l classifiers, denoted (w_j, b_j) for j = 1, …, l, are returned (line 5).

Processing Query. The procedure for processing the approximate k-NN query is summarized in Algorithm 3. The algorithm consists of two stages: filter and refine. In the filter stage, the binary vector of the query is obtained with the l SVM classifiers (lines 4–6). After that, the Hamming distances between the query and each binary vector in the ILBV are evaluated (lines 7–9) and sorted (line 10).
Objects whose Hamming distance is larger than R are filtered out in this stage. In the refine stage, the Euclidean distances are evaluated on the candidate set falling in the Hamming Ball Coverset with radius R (lines 11–14). Finally, the top-k results are sorted and returned (lines 15–16).
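The filter-and-refine flow can be sketched as follows. This is our simplification of the two stages: the query's bits are assumed already predicted by the classifiers, and the ids, vectors, and 2-bit signatures are hypothetical toy data:

```python
import numpy as np

def approx_knn(q_bits, q, ilbv, data, R, k):
    """Filter by the Hamming ball BC_R(q_bits), then refine by exact Euclidean distance."""
    candidates = [oid for oid, bits in ilbv.items()
                  if int(np.sum(np.array(bits) != np.array(q_bits))) <= R]
    candidates.sort(key=lambda oid: float(np.linalg.norm(data[oid] - q)))
    return candidates[:k]

data = {0: np.array([1.0, 0.0]), 1: np.array([0.9, 0.1]), 2: np.array([-1.0, 0.0])}
ilbv = {0: [1, 1], 1: [1, 1], 2: [0, 1]}   # toy 2-bit signatures
print(approx_knn([1, 1], np.array([0.85, 0.05]), ilbv, data, R=0, k=2))  # [1, 0]
```

Object 2 never reaches the exact-distance step: its signature disagrees with the query's code, so the cheap Hamming filter removes it, which is exactly where the speed-up over a linear scan comes from.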
Algorithm 2. SVM Training of RPL
Input: object vectors v_i ∈ V, i = 1, …, n, V ⊂ R^d; ILBV.
Output: l SVM classifiers.
1  for (j = 1; j ≤ l; j++) do
2    foreach v_i ∈ V do
3      v_b[i] = get(ILBV, v_i);
4    SVMTrain(v_i, v_b[i][j]);
5  return (w_j, b_j), j = 1, …, l;

Algorithm 3. Approximate k-NN Query Algorithm
Input: query q; l SVM classifiers; ILBV; Hamming ball radius R; integer k.
Output: top-k result list.
1  HammingResult = ∅;
2  Result = ∅;
3  Vector q_v = new Vector(); // the query binary vector
4  for (i = 1; i ≤ l; i++) do
5    b = SVMPredict(q, svm_i);
6    q_v = q_v.append(b);
7  foreach v_b ∈ ILBV do
8    H_dist = HammingBallDist(v_b, q_v);
9    HammingResult = HammingResult ∪ {H_dist};
10 sort(HammingResult);
11 select the vectors Ṽ = BC_R(q_v);
12 foreach v ∈ Ṽ do
13   distance = dist(v, q);
14   Result = Result ∪ {distance};
15 sort(Result);
16 return top-k result list;

2.4 Complexity Analysis

Suppose the object vectors lie in R^d and the length of the binary vector after random projection is l. Generating the matrix R_{d×l} takes O(dl) time and O(dl) space. Let z be the number of non-zero values per data object. Training the l SVM classifiers takes O(lzn) time or even less [8]. For query processing, Algorithm 3 predicts the l bits with the l SVM classifiers in O(lzn log n) time. Let the size of the Hamming Ball Coverset |BC_R(q)| = C; the evaluation of the candidates against the query takes O(Cl). The sorting step takes O(n log n). Therefore, the query complexity is O(lzn log n + Cl + n log n). The time and space complexity of the preprocessing step of STH is O(n² + n²k + lnkt + ln) and O(n²), respectively [7]. Thus the preprocessing time complexity of our method, O(dl), is far below O(n² + n²k + lnkt + ln), and its space complexity O(dl) is likewise far below O(n²), since l is much smaller than n.
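The per-bit training of Algorithm 2 can be sketched as below. This is our illustration, not the paper's code: a plain hinge-loss subgradient trainer stands in for the SVM package the paper uses [9], and all sizes, rates, and seeds are arbitrary. Because the bit labels are produced by linear hyperplanes, linear classifiers recover them well on the training data:

```python
import numpy as np

def train_bit_classifiers(X, S, epochs=300, lam=0.01, lr=0.1):
    """Train one linear classifier per signature column (stand-in for SVMTrain)."""
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])       # fold the bias term into the features
    W = np.zeros((S.shape[1], d + 1))
    for j in range(S.shape[1]):
        y = 2.0 * S[:, j] - 1.0                # map labels {0,1} -> {-1,+1}
        w = np.zeros(d + 1)
        for _ in range(epochs):
            viol = y * (Xb @ w) < 1            # hinge-loss violators
            grad = lam * w
            if viol.any():
                grad = grad - (y[viol, None] * Xb[viol]).mean(axis=0)
            w -= lr * grad
        W[j] = w
    return W

def predict_bits(W, x):
    """Predict a query's binary code with the l trained classifiers (SVMPredict)."""
    return (W @ np.append(x, 1.0) >= 0).astype(np.uint8)

rng = np.random.default_rng(0)
n, d, l = 300, 8, 4
X = rng.standard_normal((n, d))
Rp = rng.standard_normal((l, d))
S = (X @ Rp.T >= 0).astype(np.uint8)           # LSH signatures used as training labels
W = train_bit_classifiers(X, S)
preds = (np.hstack([X, np.ones((n, 1))]) @ W.T >= 0).astype(np.uint8)
print(float(np.mean(preds == S)))              # training-bit accuracy, typically high
```

At query time only l dot products are needed to produce the query's code, which is what keeps the filter stage cheap.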
3 Experimental Study

3.1 Experimental Setup

All the algorithms are implemented in Java SDK 1.6 and run on an Ubuntu PC with an Intel Core Duo CPU and 4 GB of main memory. The SVM Java package [9] is used, configured with a linear kernel function and otherwise default settings. The following two datasets are used.

WebKB [10] contains web pages collected from the computer science departments of various universities by the World Wide Knowledge Base project and is widely used for text mining. It includes 283 pages for training, mainly classified into five classes: faculty, staff, department, course, and project.

Reuters-21578 [11] is a collection of news documents used for text mining, which includes 21578 documents. Due to space limits when solving the k-NN matrix problem, we randomly select 283 documents covering eight classes: acq, crude, earn, grain, interest, money-fx, ship, and trade.

The vectors are generated with the TF-IDF vector model [12] after stopword removal and stemming. In total, 5 queries are issued on each dataset and the average results are reported. The statistics of the datasets are summarized in Table 1.

Table 1. Statistics of the datasets
Dataset        Document No.  Class No.
WebKB          283           5
Reuters-21578  283           8

For evaluating effectiveness, the k-precision metric is used, defined as k-precision = |ann_k ∩ enn_k| / k, where ann_k is the result list of the approximate k-NN query and enn_k is the exact k-NN result list obtained by linear scanning.

3.2 Effectiveness

To evaluate the query effectiveness of the approximate k-NN query algorithm, we test the k-precision while varying the bit length L and the radius R. In this experiment, the parameter k of the k-precision is set to 4. The binary length L varies from 4 to 128, and the radius R is selected between 1 and 12. The experimental results on the two datasets are presented in Figure 2.
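The k-precision used above can be computed as follows (the document ids are made up for illustration):

```python
def k_precision(ann, enn, k):
    """|ann_k intersect enn_k| / k: overlap between approximate and exact top-k lists."""
    return len(set(ann[:k]) & set(enn[:k])) / k

approx = ["d3", "d7", "d1", "d9"]     # approximate k-NN result list (ann_k)
exact  = ["d3", "d1", "d2", "d9"]     # exact k-NN result list (enn_k)
print(k_precision(approx, exact, 4))  # 0.75
```

Here three of the four approximate results appear in the exact top-4 list, giving a precision of 3/4.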
From Figure 2, we can observe that: (1) for a fixed radius R, the precision drops quickly as L increases. For example, when R = 8, the k-precision drops from 1.0 to 0.2 as the length L varies from 10 to 128 on the two datasets. (2) For a given L, increasing the radius improves the precision; for instance, when the length L is 5, the precision increases from 0.1 to 1.0 as the radius R varies from 1 to 12. The reason is that the bigger the radius R, the more candidates survive the filtering, and thus the higher the precision.
[Figure 2. k-precision on the two datasets: (a) WebKB; (b) Reuters]

[Figure 3. Query efficiency on the two datasets: (a) WebKB; (b) Reuters]

3.3 Efficiency

To evaluate the efficiency of RPL, the approximate k-NN query processing time is measured while varying the radius R and the bit length L. The results are shown in Figure 3. In this experiment, the radius R varies from 1 to 12, and the bit length is selected between 4 and 130.

From the performance evaluation, we can conclude that: (1) as the bit length L increases, the query processing time drops sharply, because there are fewer candidates; (2) the bigger the radius, the higher the query processing time, since there are more candidates. In either case the time is much smaller than that of a linear scan, which takes about 737 and 264 ms on the WebKB and Reuters datasets, respectively.

3.4 Comparison

Comparisons with STH [7] are made on both effectiveness and efficiency. STH takes about 9645 and 76 seconds on the WebKB and Reuters datasets, respectively, for preprocessing, i.e., evaluating the k-NN graph and the eigenvectors of the matrix. The preprocessing of our method, in contrast, takes only several milliseconds, since only a random matrix needs to be generated. In this comparison, the parameter k of the k-NN graph used in STH is set to 25.

The effectiveness comparison with STH is shown in Figure 4. The radius R is set to 3, and the parameter k to 3 and 10. The k-precision is reported for varying bit length L on the two datasets. The experimental results show that the precision of RPL is a bit lower
[Figure 4. Precision comparison with STH: (a) 3-precision; (b) 10-precision]

[Figure 5. Performance comparison with different R]

than that of STH, because STH uses the k-NN graph of the dataset, which takes the data relationships into account.

The efficiency comparison with STH is conducted and the results are shown in Figure 5. In this experiment, the bit length varies from 4 to 16, and the query processing time is reported for different radii R and bit lengths L. The results demonstrate that the performance of our method is better than that of STH, which indicates a better filtering ability.

4 Related Work

The similarity search problem is well studied in low-dimensional spaces with space-partitioning and data-partitioning index techniques [13–15]. However, the efficiency
degrades once the dimensionality grows larger than about 10 [16]. Therefore, researchers have proposed techniques that process the problem approximately. Approximate search methods improve efficiency by relaxing the precision requirement. MEDRANK [17] solves the nearest neighbor search with sorted lists and the TA algorithm [18] in an approximate way. Yao et al. [19] study the approximate k-NN query in relational databases. However, both MEDRANK and [19] focus on much lower-dimensional data (from 2 to 10).

Locality sensitive hashing is a well-known effective technique for approximate nearest neighbor search [1, 3], which ensures that the more similar two data objects are, the higher the probability that they are hashed into the same buckets. The near neighbors of a query can then be found in the candidate bucket with high probability.

Space filling techniques convert high-dimensional data into one dimension while preserving similarity. One such technique is the Z-order curve, built by connecting the z-values of the points [20]; the k-NN search for a query can thus be translated into a range query on the z-values. Another space filling technique is the Hilbert curve [21]. Based on the Z-order curve and LSH, Tao et al. [4] propose the LSB-tree for fast approximate nearest neighbor search in high-dimensional space. The random projection method of LSH is designed for approximately evaluating similarity between vectors; recently, CompactProjection [5] employs it for content-based image similarity search.

Recently, data-aware hashing techniques have been proposed which demand much fewer bits by using machine learning [6, 7, 22]. Semantic hashing [6] employs the Hamming distance of compact binary codes for semantic similarity search. Self-taught hashing [7] proposes a two-stage learning method for similarity search with much shorter bit length. Nevertheless, its preprocessing stage takes too much time and space for evaluating and storing the k-NN graph and solving the Laplacian Eigenmap problem.
It is not suitable for settings where the data evolve, and its scalability is poor, especially for large volumes of data. Motivated by the above research, the random projection learning method is proposed, which is proved to satisfy the entropy maximizing criterion.

5 Conclusions

In this paper, a learning-based framework for similarity search is studied. Within this framework, the data vectors are first randomly projected into binary codes, and the binary codes are then employed as labels. SVM classifiers are trained to predict the binary code of the query. We prove that the binary code after random projection satisfies the entropy maximizing criterion. The approximate k-NN query is effectively evaluated within this framework. Experimental results show that our method achieves better performance compared with the existing technique. Though the query effectiveness is a bit lower than that of STH, RPL has much lower time and space complexity in the preprocessing step and better query performance, which is very important in data-intensive environments.

Acknowledgments. This work is supported by NSFC grants (No and No. 6934), 973 program (No. 2CB3286), Shanghai International Cooperation
Fund Project (Project No ), Program for New Century Excellent Talents in University (No. NCET388) and Shanghai Leading Academic Discipline Project (No. B42).

References

1. Indyk, P., Motwani, R.: Approximate nearest neighbors: towards removing the curse of dimensionality. In: STOC (1998)
2. Charikar, M.S.: Similarity estimation techniques from rounding algorithms. In: STOC (2002)
3. Andoni, A., Indyk, P.: Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In: FOCS (2006)
4. Tao, Y., Yi, K., Sheng, C., Kalnis, P.: Quality and efficiency in high dimensional nearest neighbor search. In: SIGMOD (2009)
5. Min, K., Yang, L., Wright, J., Wu, L., Hua, X.S., Ma, Y.: Compact projection: simple and efficient near neighbor search with practical memory requirements. In: CVPR (2010)
6. Salakhutdinov, R., Hinton, G.: Semantic hashing. International Journal of Approximate Reasoning 50(7) (2009)
7. Zhang, D., Wang, J., Cai, D., Lu, J.: Self-taught hashing for fast similarity search. In: SIGIR (2010)
8. Joachims, T.: Training linear SVMs in linear time. In: SIGKDD (2006)
9. Chang, C.C., Lin, C.J.: LIBSVM: a library for support vector machines (2001), cjlin/libsvm
10. World Wide Knowledge Base project (2001), webkb/
11. Reuters-21578 (1999), html
12. Baeza-Yates, R.A., Ribeiro-Neto, B.A.: Modern Information Retrieval. Addison-Wesley, Reading (1999)
13. Bentley, J.L.: Multidimensional binary search trees used for associative searching. Communications of the ACM 18(9) (1975)
14. Guttman, A.: R-trees: a dynamic index structure for spatial searching. In: SIGMOD (1984)
15. Beckmann, N., Kriegel, H.P., Schneider, R., Seeger, B.: The R*-tree: an efficient and robust access method for points and rectangles. In: SIGMOD (1990)
16. Weber, R., Schek, H.J., Blott, S.: A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces. In: VLDB (1998)
17.
Fagin, R., Kumar, R., Sivakumar, D.: Efficient similarity search and classification via rank aggregation. In: SIGMOD (2003)
18. Fagin, R., Lotem, A., Naor, M.: Optimal aggregation algorithms for middleware. Journal of Computer and System Sciences 66(4) (2003)
19. Yao, B., Li, F., Kumar, P.: k nearest neighbor queries and kNN-joins in large relational databases (almost) for free. In: ICDE (2010)
20. Ramsak, F., Markl, V., Fenk, R., Zirkel, M., Elhardt, K., Bayer, R.: Integrating the UB-tree into a database system kernel. In: VLDB (2000)
21. Liao, S., Lopez, M., Leutenegger, S.: High dimensional similarity search with space filling curves. In: ICDE (2001)
22. Baluja, S., Covell, M.: Learning to hash: forgiving hash functions and applications. Data Mining and Knowledge Discovery 17(3) (2008)
23. Weiss, Y., Torralba, A., Fergus, R.: Spectral hashing. In: NIPS (2009)
Appendix: Theoretical Proof

The entropy H of a discrete random variable X with possible values x_1, …, x_n is defined as H(X) = Σ_{i=1}^{n} p(x_i) I(x_i) = −Σ_{i=1}^{n} p(x_i) log p(x_i), where I(x_i) is the self-information of x_i. For a binary random variable X, assume that P(X = 1) = p and P(X = 0) = 1 − p; the entropy of X can then be represented as Eq. 7:

H(X) = −p log p − (1 − p) log(1 − p),    (7)

which attains its maximum value when p = 1/2.

Semantic hashing [6] is an effective learning-based solution for similarity search. To ensure search efficiency, a good semantic hashing scheme should satisfy the entropy maximizing criterion [6, 7]. The intuitive meaning of the criterion is that the dataset is represented uniformly by each bit, thus maximizing the information carried by each bit, i.e., the bits are uncorrelated and each bit is expected to fire 50% of the time [23]. Thus, we state the following property for semantic hashing.

Property 1. Semantic hashing should satisfy the entropy maximizing criterion to ensure efficiency, i.e., each bit occurs with a chance of 50% and the bits are uncorrelated.

In the following, we prove that the binary code produced by the random projection of LSH naturally satisfies the entropy maximizing criterion. This is the key difference between our method and STH [7] in the binary code generation step; the latter needs considerable space and time.

Let R^d be a d-dimensional real space and let u ∈ R^d be normalized to a unit vector, i.e., Σ_{i=1}^{d} u_i² = 1. Suppose r ∈ R^d is a random vector whose components r_i are chosen independently from the standard normal distribution N(0, 1), i = 1, …, d. Let v = u · r; v is a random variable. Then the following lemma holds.

Lemma 1. Let the random variable v = u · r. Then v ∼ N(0, 1).

Proof. Since v = u · r, we have v = Σ_{i=1}^{d} u_i r_i. Since each r_i is drawn randomly and independently from the standard normal distribution N(0, 1), we have u_i r_i ∼ N(0, u_i²).
Thus, by the properties of the normal distribution, the random variable v ∼ N(0, Σ_{i=1}^{d} u_i²) = N(0, 1). Hence the lemma is proved.

From Lemma 1, the following corollary can be derived, which means that the bits of the binary code after random projection are uncorrelated.
Corollary 1. The bits of the binary code after random projection are independent.

Lemma 2. Let f(x) = sgn(x) be a function defined on the real numbers R, and let v = u · r. Set the random variable v' = f(v). Then P(v' = 1) = P(v' = 0) = 1/2.

The proof of Lemma 2 is omitted here due to space limits. Lemma 2 means that 1 and 0 occur with the same probability in the signature vector. Hence, on the basis of Corollary 1 and Lemma 2, we have the following corollary.

Corollary 2. The binary code after random projection satisfies the entropy maximizing criterion.
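The omitted argument for Lemma 2 is short; the following reconstruction (ours, not the authors') follows directly from Lemma 1 and the symmetry of the standard normal distribution:

```latex
\begin{proof}[Proof sketch of Lemma 2]
By Lemma 1, $v = u \cdot r \sim N(0,1)$, which is continuous and symmetric
about $0$. Hence
\[
  P(v' = 1) = P(v \ge 0) = \tfrac12, \qquad
  P(v' = 0) = P(v < 0) = \tfrac12 .
\]
Each bit therefore fires with probability $1/2$, so by Eq.~7 its entropy
attains the maximum value.
\end{proof}
```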