Exact Matrix Completion via Convex Optimization
Emmanuel J. Candès and Benjamin Recht

Applied and Computational Mathematics, Caltech, Pasadena, CA
Center for the Mathematics of Information, Caltech, Pasadena, CA

May 2008

Abstract

We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys

    m ≥ C n^{1.2} r log n

for some positive numerical constant C, then with very high probability, most n × n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.

Keywords. Matrix completion, low-rank matrices, convex optimization, duality in optimization, nuclear norm minimization, random matrices, noncommutative Khintchine inequality, decoupling, compressed sensing.

1 Introduction

In many practical problems of interest, one would like to recover a matrix from a sampling of its entries. As a motivating example, consider the task of inferring answers in a partially filled out survey. That is, suppose that questions are being asked to a collection of individuals. Then we can form a matrix where the rows index each individual and the columns index the questions. We collect data to fill out this table but unfortunately, many questions are left unanswered.
Is it possible to make an educated guess about what the missing answers should be? How can one make such a guess? Formally, we may view this problem as follows. We are interested in recovering a data matrix M with n_1 rows and n_2 columns but only get to observe a number m of its entries which is comparably much smaller than n_1 n_2, the total number of entries. Can one recover the matrix M from m of its entries? In general, everyone would agree that this is impossible without some additional information.
In many instances, however, the matrix we wish to recover is known to be structured in the sense that it is low-rank or approximately low-rank. (We recall for completeness that a matrix with n_1 rows and n_2 columns has rank r if its rows or columns span an r-dimensional space.) Below are two examples of practical scenarios where one would like to be able to recover a low-rank matrix from a sampling of its entries.

The Netflix problem. In the area of recommender systems, users submit ratings on a subset of entries in a database, and the vendor provides recommendations based on the user's preferences [28, 32]. Because users only rate a few items, one would like to infer their preference for unrated items. A special instance of this problem is the now famous Netflix problem [2]. Users (rows of the data matrix) are given the opportunity to rate movies (columns of the data matrix) but users typically rate only very few movies so that there are very few scattered observed entries of this data matrix. Yet one would like to complete this matrix so that the vendor (here, Netflix) might recommend titles that any particular user is likely to be willing to order. In this case, the data matrix of all user-ratings may be approximately low-rank because it is commonly believed that only a few factors contribute to an individual's tastes or preferences.

Triangulation from incomplete data. Suppose we are given partial information about the distances between objects and would like to reconstruct the low-dimensional geometry describing their locations. For example, we may have a network of low-power wirelessly networked sensors scattered randomly across a region. Suppose each sensor only has the ability to construct distance estimates based on signal strength readings from its nearest fellow sensors. From these noisy distance estimates, we can form a partially observed distance matrix.
We can then estimate the true distance matrix, whose rank will be equal to two if the sensors are located in a plane or three if they are located in three-dimensional space [24, 31]. In this case, we only need to observe a few distances per node to have enough information to reconstruct the positions of the objects.

These examples are of course far from exhaustive and there are many other problems which fall in this general category. For instance, we may have some very limited information about a covariance matrix of interest. Yet, this covariance matrix may be low-rank or approximately low-rank because the variables only depend upon a comparably smaller number of factors.

1.1 Impediments and solutions

Suppose for simplicity that we wish to recover a square n × n matrix M of rank r.¹ Such a matrix M can be represented by n² numbers, but it only has (2n − r)r degrees of freedom. This fact can be revealed by counting parameters in the singular value decomposition (the number of degrees of freedom associated with the description of the singular values and of the left and right singular vectors). When the rank is small, this is considerably smaller than n². For instance, when M encodes a 10-dimensional phenomenon, then the number of degrees of freedom is about 20n, offering a reduction in dimensionality by a factor about equal to n/20. When n is large (e.g. in the thousands or millions), the data matrix carries much less information than its ambient dimension

¹ We emphasize that there is nothing special about M being square and all of our discussion would apply to arbitrary rectangular matrices as well. The advantage of focusing on square matrices is a simplified exposition and a reduction in the number of parameters of which we need to keep track.
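The parameter count above is easy to illustrate numerically. The following numpy sketch (ours, not from the paper) builds a rank-r matrix and checks that its truncated SVD, roughly (2n − r)r numbers, reproduces it exactly, while naive storage needs n² numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 1000, 10
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # a rank-r matrix

# The truncated SVD describes M with r singular values plus the left and
# right singular vectors, on the order of (2n - r)r numbers in total.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
M_hat = (U[:, :r] * s[:r]) @ Vt[:r]

dof = (2 * n - r) * r    # 19900, roughly 20n for this 10-dimensional phenomenon
ambient = n * n          # 1,000,000 entries stored naively
```

Here the compression factor ambient/dof is about n/20 ≈ 50, as in the text.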
suggests. The problem is now whether it is possible to recover this matrix from a sampling of its entries without having to probe all the n² entries, or more generally collect n² or more measurements about M.

Which matrices?

In general, one cannot hope to be able to recover a low-rank matrix from a sample of its entries. Consider the rank-1 matrix M equal to

    M = e_1 e_n^*,   (1.1)

where here and throughout, e_i is the ith canonical basis vector in Euclidean space (the vector with all entries equal to 0 but the ith equal to 1). This matrix has a 1 in the top-right corner and all the other entries are 0. Clearly this matrix cannot be recovered from a sampling of its entries unless we pretty much see all the entries. The reason is that for most sampling sets, we would only get to see zeros so that we would have no way of guessing that the matrix is not zero. For instance, if we were to see 90% of the entries selected at random, then 10% of the time we would only get to see zeroes.

It is therefore impossible to recover all low-rank matrices from a set of sampled entries, but can one recover most of them? To investigate this issue, we introduce a simple model of low-rank matrices. Consider the singular value decomposition (SVD) of a matrix M,

    M = Σ_{k=1}^r σ_k u_k v_k^*,   (1.2)

where the u_k's and v_k's are the left and right singular vectors, and the σ_k's are the singular values (the roots of the eigenvalues of M^* M). Then we could think of a generic low-rank matrix as follows: the family {u_k}_{1≤k≤r} is selected uniformly at random among all families of r orthonormal vectors, and similarly for the family {v_k}_{1≤k≤r}. The two families may or may not be independent of each other. We make no assumptions about the singular values σ_k. In the sequel, we will refer to this model as the random orthogonal model. This model is convenient in the sense that it is both very concrete and simple, and useful in the sense that it will help us fix the main ideas. In the sequel, however, we will consider far more general models.
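One can sample from the random orthogonal model along the following lines (an illustrative numpy sketch; the QR-of-Gaussian construction is a standard way to draw uniformly random orthonormal frames and is our choice, not something specified in the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 3

# Orthonormalizing a Gaussian matrix via QR yields an orthonormal r-frame
# distributed uniformly at random.
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
sigma = np.array([5.0, 2.0, 1.0])   # the singular values are arbitrary

M = (U * sigma) @ V.T               # M = sum_k sigma_k u_k v_k^*
```

The two frames here are drawn independently, though the model allows them to be dependent.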
The question for now is whether or not one can recover such a generic matrix from a sampling of its entries.

Which sampling sets?

Clearly, one cannot hope to reconstruct any low-rank matrix M, even of rank 1, if the sampling set avoids any column or row of M. Suppose that M is of rank 1 and of the form x y^*, x, y ∈ R^n, so that the (i, j)th entry is given by M_ij = x_i y_j. Then if we do not have samples from the first row, for example, one could never guess the value of the first component x_1 by any method whatsoever; no information about x_1 is observed. There is
of course nothing special about the first row, and this argument extends to any row or column. To have any hope of recovering an unknown matrix, one needs at least one observation per row and one observation per column.

We have just seen that if the sampling is adversarial, e.g. one observes all of the entries of M but those in the first row, then one would not even be able to recover matrices of rank 1. But what happens for most sampling sets? Can one recover a low-rank matrix from almost all sampling sets of cardinality m? Formally, suppose that the set Ω of locations corresponding to the observed entries ((i, j) ∈ Ω if M_ij is observed) is a set of cardinality m sampled uniformly at random. Then can one recover a generic low-rank matrix M, perhaps with very large probability, from the knowledge of the value of its entries in the set Ω?

Which algorithm?

If the number of measurements is sufficiently large, and if the entries are sufficiently uniformly distributed as above, one might hope that there is only one low-rank matrix with these entries. If this were true, one would want to recover the data matrix by solving the optimization problem

    minimize  rank(X)
    subject to  X_ij = M_ij, (i, j) ∈ Ω,   (1.3)

where X is the decision variable and rank(X) is equal to the rank of the matrix X. The program (1.3) is a common-sense approach which simply seeks the simplest explanation fitting the observed data. If there were only one low-rank object fitting the data, this would recover M. This is unfortunately of little practical use because this optimization problem is not only NP-hard, but all known algorithms which provide exact solutions require time doubly exponential in the dimension n of the matrix in both theory and practice [14]. If a matrix has rank r, then it has exactly r nonzero singular values, so that the rank function in (1.3) is simply the number of nonvanishing singular values. In this paper, we consider an alternative which minimizes the sum of the singular values over the constraint set.
This sum is called the nuclear norm,

    ||X||_* = Σ_{k=1}^n σ_k(X),   (1.4)

where, here and below, σ_k(X) denotes the kth largest singular value of X. The heuristic optimization is then given by

    minimize  ||X||_*
    subject to  X_ij = M_ij, (i, j) ∈ Ω.   (1.5)

Whereas the rank function counts the number of nonvanishing singular values, the nuclear norm sums their amplitude and, in some sense, is to the rank functional what the convex l_1 norm is to the counting l_0 norm in the area of sparse signal recovery. The main point here is that the nuclear norm is a convex function and, as we will discuss in Section 1.4, can be optimized efficiently via semidefinite programming.

A first typical result

Our first result shows that, perhaps unexpectedly, this heuristic optimization recovers a generic M when the number of randomly sampled entries is large enough. We will prove the following:
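In numpy terms (an illustrative sketch, not from the paper), the nuclear norm of (1.4) is just the sum of the singular values, while the rank counts the nonvanishing ones:

```python
import numpy as np

X = np.diag([3.0, 4.0, 0.0])            # a rank-2 example matrix

s = np.linalg.svd(X, compute_uv=False)  # singular values, sorted descending
nuclear_norm = s.sum()                  # ||X||_* = sum_k sigma_k(X), cf. (1.4)
rank = int((s > 1e-10).sum())           # number of nonvanishing singular values
```

For this example the nuclear norm is 3 + 4 = 7 while the rank is 2; the norm is a convex function of X, the rank is not.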
Theorem 1.1 Let M be an n_1 × n_2 matrix of rank r sampled from the random orthogonal model, and put n = max(n_1, n_2). Suppose we observe m entries of M with locations sampled uniformly at random. Then there are numerical constants C and c such that if

    m ≥ C n^{5/4} r log n,   (1.6)

the minimizer to the problem (1.5) is unique and equal to M with probability at least 1 − c n^{−3}; that is to say, the semidefinite program (1.5) recovers all the entries of M with no error. In addition, if r ≤ n^{1/5}, then the recovery is exact with probability at least 1 − c n^{−3} provided that

    m ≥ C n^{6/5} r log n.   (1.7)

The theorem states that a surprisingly small number of entries are sufficient to complete a generic low-rank matrix. For small values of the rank, e.g. when r = O(1) or r = O(log n), one only needs to see on the order of n^{6/5} entries (ignoring logarithmic factors), which is considerably smaller than n², the total number of entries of a squared matrix. The real feat, however, is that the recovery algorithm is tractable and very concrete. Hence the contribution is twofold:

- Under the hypotheses of Theorem 1.1, there is a unique low-rank matrix which is consistent with the observed entries.
- Further, this matrix can be recovered by the convex optimization (1.5). In other words, for most problems, the nuclear norm relaxation is formally equivalent to the combinatorially hard rank minimization problem (1.3).

Theorem 1.1 is in fact a special instance of a far more general theorem that covers a much larger set of matrices M. We describe this general class of matrices and the precise recovery conditions in the next section.

1.2 Main results

As seen in our first example (1.1), it is impossible to recover a matrix which is equal to zero in nearly all of its entries unless we see all the entries of the matrix. To recover a low-rank matrix, this matrix cannot be in the null space of the sampling operator giving the values of a subset of the entries.
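The phenomenon behind the first bullet, that a fraction of the entries typically pins down a low-rank matrix, can be observed numerically even with a crude method. The sketch below is not the convex program (1.5): it is a simple non-convex "impute then truncate" heuristic that assumes the rank r is known, and it is included only as an illustration under those assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target
mask = rng.random((n, n)) < 0.7          # Omega: roughly 70% of entries observed

X = np.zeros((n, n))
for _ in range(1000):
    X[mask] = M[mask]                    # re-impose the observed entries
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]      # truncate back to the rank-r matrices

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

With this sampling density and rank the iterates typically converge to M; the point of the paper is that the convex program (1.5) achieves recovery with guarantees and without knowing r.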
Now it is easy to see that if the singular vectors of a matrix M are highly concentrated, then M could very well be in the null space of the sampling operator. For instance, consider the rank-2 symmetric matrix M given by

    M = Σ_{k=1}^2 σ_k u_k u_k^*,  u_1 = (e_1 + e_2)/√2,  u_2 = (e_1 − e_2)/√2,

where the singular values are arbitrary. Then this matrix vanishes everywhere except in the top-left 2 × 2 corner, and one would basically need to see all the entries of M to be able to recover this matrix exactly by any method whatsoever. There is an endless list of examples of this sort. Hence, we arrive at the notion that, somehow, the singular vectors need to be sufficiently spread, that is, uncorrelated with the standard basis, in order to minimize the number of observations needed to recover a low-rank matrix.² This motivates the following definition.

² Both the left and right singular vectors need to be uncorrelated with the standard basis. Indeed, the matrix e_1 v^* has its first row equal to v^* and all the others equal to zero. Clearly, this rank-1 matrix cannot be recovered unless we basically see all of its entries.
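A quick numpy check of this example (illustrative, not from the paper) confirms that the rank-2 matrix built from u_1 and u_2 is supported entirely on its top-left 2 × 2 corner:

```python
import numpy as np

n = 8
e1, e2 = np.eye(n)[0], np.eye(n)[1]
u1 = (e1 + e2) / np.sqrt(2)
u2 = (e1 - e2) / np.sqrt(2)
sigma = [3.0, 1.0]                 # the singular values are arbitrary

M = sigma[0] * np.outer(u1, u1) + sigma[1] * np.outer(u2, u2)

outside = M.copy()
outside[:2, :2] = 0.0              # zero out the top-left 2x2 corner ...
```

After zeroing the corner, nothing of M remains: any sampling set that misses those four entries carries no information about M.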
Definition 1.2 Let U be a subspace of R^n of dimension r and P_U be the orthogonal projection onto U. Then the coherence of U (vis-à-vis the standard basis (e_i)) is defined to be

    µ(U) ≡ (n/r) max_{1≤i≤n} ||P_U e_i||².   (1.8)

Note that for any subspace, the smallest µ(U) can be is 1, achieved, for example, if U is spanned by vectors whose entries all have magnitude 1/√n. The largest possible value for µ(U) is n/r, which would correspond to any subspace that contains a standard basis element. We shall be primarily interested in subspaces with low coherence, as matrices whose column and row spaces have low coherence cannot really be in the null space of the sampling operator. For instance, we will see that the random subspaces discussed above have nearly minimal coherence.

To state our main result, we introduce two assumptions about an n_1 × n_2 matrix M whose SVD is given by M = Σ_{1≤k≤r} σ_k u_k v_k^* and with column and row spaces denoted by U and V, respectively.

A0 The coherences obey max(µ(U), µ(V)) ≤ µ_0 for some positive µ_0.

A1 The n_1 × n_2 matrix Σ_{1≤k≤r} u_k v_k^* has a maximum entry bounded by µ_1 √(r/(n_1 n_2)) in absolute value for some positive µ_1.

The µ's above may depend on r and n_1, n_2. Moreover, note that A1 always holds with µ_1 = µ_0 √r since the (i, j)th entry of the matrix Σ_{1≤k≤r} u_k v_k^* is given by Σ_{1≤k≤r} u_ik v_jk, and by the Cauchy-Schwarz inequality,

    |Σ_{1≤k≤r} u_ik v_jk| ≤ √(Σ_{1≤k≤r} |u_ik|²) √(Σ_{1≤k≤r} |v_jk|²) ≤ µ_0 r / √(n_1 n_2).

Hence, for sufficiently small ranks, µ_1 is comparable to µ_0. As we will see in Section 2, for larger ranks, both subspaces selected from the uniform distribution and spaces constructed as the span of singular vectors with bounded entries are not only incoherent with the standard basis, but also obey A1 with high probability for values of µ_1 at most logarithmic in n_1 and/or n_2. Below we will assume that µ_1 is greater than or equal to 1.
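Definition 1.2 is straightforward to compute. The sketch below (ours, not from the paper) checks the two extremes named above: a maximally spread subspace, built here from Hadamard columns with entries ±1/√n, has coherence 1, while a subspace containing standard basis vectors has coherence n/r.

```python
import numpy as np

def coherence(U):
    """mu(U) of (1.8) for an n x r matrix U with orthonormal columns."""
    n, r = U.shape
    # ||P_U e_i||^2 equals the squared Euclidean norm of the ith row of U.
    return (n / r) * (U ** 2).sum(axis=1).max()

n, r = 16, 2
spiky = np.eye(n)[:, :r]       # subspace containing standard basis vectors
H = np.array([[1.0, 1.0], [1.0, -1.0]])
hadamard = np.kron(np.kron(H, H), np.kron(H, H))   # 16 x 16, entries +-1
flat = hadamard[:, :r] / np.sqrt(n)                # entries of magnitude 1/sqrt(n)
```

`coherence(spiky)` returns n/r = 8 and `coherence(flat)` returns 1, matching the stated extremes.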
We are now in a position to state our main result: if a matrix has row and column spaces that are incoherent with the standard basis, then nuclear norm minimization can recover this matrix from a random sampling of a small number of entries.

Theorem 1.3 Let M be an n_1 × n_2 matrix of rank r obeying A0 and A1, and put n = max(n_1, n_2). Suppose we observe m entries of M with locations sampled uniformly at random. Then there exist constants C, c such that if

    m ≥ C max(µ_1², µ_0^{1/2} µ_1, µ_0 n^{1/4}) n r (β log n)   (1.9)

for some β > 2, then the minimizer to the problem (1.5) is unique and equal to M with probability at least 1 − c n^{−β}. For r ≤ µ_0^{−1} n^{1/5} this estimate can be improved to

    m ≥ C µ_0 n^{6/5} r (β log n)   (1.10)

with the same probability of success.
Theorem 1.3 asserts that if the coherence is low, few samples are required to recover M. For example, if µ_0 = O(1) and the rank is not too large, then the recovery is exact with large probability provided that

    m ≥ C n^{6/5} r log n.   (1.11)

We give two illustrative examples of matrices with incoherent column and row spaces. This list is by no means exhaustive.

1. The first example is the random orthogonal model. For values of the rank r greater than log n, µ(U) and µ(V) are O(1) and µ_1 = O(log n), both with very large probability. Hence, the recovery is exact provided that m obeys (1.6) or (1.7). Specializing Theorem 1.3 to these values of the parameters gives Theorem 1.1. Hence, Theorem 1.1 is a special case of our general recovery result.

2. The second example is more general and, in a nutshell, simply requires that the components of the singular vectors of M are small. Assume that the u_j's and v_j's obey

    max_{ij} |⟨e_i, u_j⟩|² ≤ µ_B/n,  max_{ij} |⟨e_i, v_j⟩|² ≤ µ_B/n,   (1.12)

for some value of µ_B = O(1). Then the maximum coherence is at most µ_B since µ(U) ≤ µ_B and µ(V) ≤ µ_B. Further, we will see in Section 2 that A1 holds most of the time with µ_1 = O(√(log n)). Thus, for matrices with singular vectors obeying (1.12), the recovery is exact provided that m obeys (1.11) for values of the rank not exceeding µ_B^{−1} n^{1/5}.

1.3 Extensions

Our main result (Theorem 1.3) extends to a variety of other low-rank matrix completion problems beyond the sampling of entries. Indeed, suppose we have two orthonormal bases f_1, ..., f_n and g_1, ..., g_n of R^n, and that we are interested in solving the rank minimization problem

    minimize  rank(X)
    subject to  f_i^* X g_j = f_i^* M g_j, (i, j) ∈ Ω.   (1.13)

This comes up in a number of applications. As a motivating example, there has been a great deal of interest in the machine learning community in developing specialized algorithms for the multiclass and multitask learning problems (see, e.g., [1, 3, 5]).
In multiclass learning, the goal is to build multiple classifiers with the same training data to distinguish between more than two categories. For example, in face recognition, one might want to classify whether an image patch corresponds to an eye, nose, or mouth. In multitask learning, we have a large set of data, but have a variety of different classification tasks, and, for each task, only partial subsets of the data are relevant. For instance, in activity recognition, we may have acquired sets of observations of multiple subjects and want to determine if each observed person is walking or running. However, a different classifier is to be learned for each individual, and it is not clear how having access to the full collection of observations can improve classification performance. Multitask learning aims precisely to take advantage of the access to the full database to improve performance on the individual tasks.

In the abstract formulation of this problem for linear classifiers, we have K classes to distinguish and are given training examples f_1, ..., f_n. For each example, we are given partial labeling information about which classes it belongs or does not belong to. That is, for each example f_j
and class k, we may either be told that f_j belongs to class k, be told f_j does not belong to class k, or be provided no information about the membership of f_j to class k. For each class 1 ≤ k ≤ K, we would like to produce a linear function w_k such that w_k^* f_i > 0 if f_i belongs to class k and w_k^* f_i < 0 otherwise. Formally, we can search for the vector w_k that satisfies the equality constraints w_k^* f_i = y_ik, where y_ik = 1 if we are told that f_i belongs to class k, y_ik = −1 if we are told that f_i does not belong to class k, and y_ik is unconstrained if we are not provided information. A common hypothesis in the multitask setting is that the w_k corresponding to each of the classes together span a very low-dimensional subspace with dimension significantly smaller than K [1, 3, 5]. That is, the basic assumption is that W = [w_1, ..., w_K] is low-rank. Hence, the multiclass learning problem can be cast as (1.13) with observations of the form f_i^* W e_j.

To see that our theorem provides conditions under which (1.13) can be solved via nuclear norm minimization, note that there exist unitary transformations F and G such that e_j = F f_j and e_j = G g_j for each j = 1, ..., n. Hence, f_i^* X g_j = e_i^* (F X G^*) e_j. Then if the conditions of Theorem 1.3 hold for the matrix F M G^*, it is immediate that nuclear norm minimization finds the unique optimal solution of (1.13) when we are provided a large enough random collection of the inner products f_i^* M g_j. In other words, all that is needed is that the column and row spaces of M be respectively incoherent with the bases (f_i) and (g_i).

From this perspective, we additionally remark that our results likely extend to the case where one observes a small number of arbitrary linear functionals of a hidden matrix M. Set N = n² and let A_1, ..., A_N be an orthonormal basis for the linear space of n × n matrices with the usual inner product ⟨X, Y⟩ = trace(X^* Y).
Then we expect our results should also apply to the rank minimization problem

    minimize  rank(X)
    subject to  ⟨A_k, X⟩ = ⟨A_k, M⟩, k ∈ Ω,   (1.14)

where Ω ⊂ {1, ..., N} is selected uniformly at random. In fact, (1.14) is (1.3) when the orthobasis is the canonical basis (e_i e_j^*)_{1≤i,j≤n}. Here, those low-rank matrices which have small inner product with all the basis elements A_k may be recoverable by nuclear norm minimization. To avoid unnecessary confusion and notational clutter, we leave this general low-rank recovery problem for future work.

1.4 Connections, alternatives and prior art

Nuclear norm minimization is a recent heuristic introduced by Fazel in [18], and is an extension of the trace heuristic often used by the control community, see e.g. [6, 26]. Indeed, when the matrix variable is symmetric and positive semidefinite, the nuclear norm of X is the sum of the (nonnegative) eigenvalues and thus equal to the trace of X. Hence, for positive semidefinite unknowns, (1.5) would simply minimize the trace over the constraint set:

    minimize  trace(X)
    subject to  X_ij = M_ij, (i, j) ∈ Ω,
                X ⪰ 0.
This is a semidefinite program. Even for the general matrix M which may not be positive definite or even symmetric, the nuclear norm heuristic can be formulated in terms of semidefinite programming: for instance, the program (1.5) is equivalent to

    minimize  trace(W_1) + trace(W_2)
    subject to  X_ij = M_ij, (i, j) ∈ Ω,
                [W_1  X; X^*  W_2] ⪰ 0,

with optimization variables X, W_1 and W_2 (see, e.g., [18, 35]). There are many efficient algorithms and high-quality software available for solving these types of problems.

Our work is inspired by results in the emerging field of compressive sampling or compressed sensing, a new paradigm for acquiring information about objects of interest from what appears to be a highly incomplete set of measurements [11, 13, 17]. In practice, this means for example that high-resolution imaging is possible with fewer sensors, or that one can speed up signal acquisition time in biomedical applications by orders of magnitude, simply by taking far fewer specially coded samples. Mathematically speaking, we wish to reconstruct a signal x ∈ R^n from a small number of measurements y = Φx, y ∈ R^m, where m is much smaller than n; i.e. we have far fewer equations than unknowns. In general, one cannot hope to reconstruct x, but assume now that the object we wish to recover is known to be structured in the sense that it is sparse (or approximately sparse). This means that the unknown object depends upon a smaller number of unknown parameters. Then it has been shown that l_1 minimization allows recovery of sparse signals from remarkably few measurements: supposing Φ is chosen randomly from a suitable distribution, then with very high probability, all sparse signals with about k nonzero entries can be recovered from on the order of k log n measurements. For instance, if x is k-sparse in the Fourier domain, i.e. x is a superposition of k sinusoids, then it can be perfectly recovered with high probability by l_1 minimization from the knowledge of about k log n of its entries sampled uniformly at random [11].
From this viewpoint, the results in this paper greatly extend the theory of compressed sensing by showing that other types of interesting objects or structures, beyond sparse signals and images, can be recovered from a limited set of measurements. Moreover, the techniques for proving our main results build upon ideas from the compressed sensing literature together with probabilistic tools such as the powerful techniques of Bourgain and of Rudelson for bounding norms of operators between Banach spaces.

Our notion of incoherence generalizes the concept of the same name in compressive sampling. Notably, in [10], the authors introduce the notion of the incoherence of a unitary transformation. Letting U be an n × n unitary matrix, the coherence of U is given by

    µ(U) = n max_{j,k} |U_jk|².

This quantity ranges in value from 1, for a unitary transformation whose entries all have the same magnitude, to n, for the identity matrix. Using this notion, [10] showed that with high probability, a k-sparse signal could be recovered via linear programming from the observation of the inner product of the signal with m = Ω(µ(U) k log n) randomly selected columns of the matrix U. This result provided a generalization of the celebrated results about partial Fourier observations described in [11], a special case where µ(U) = 1. This paper generalizes the notion of incoherence to problems beyond the setting of sparse signal recovery.
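For intuition, the two extremes of this quantity are easy to check numerically (an illustrative sketch of ours, not from [10]): the normalized DFT matrix has entries of uniform magnitude 1/√n and achieves the minimal coherence 1, while the identity attains the maximal value n.

```python
import numpy as np

def unitary_coherence(U):
    """mu(U) = n * max_{j,k} |U_jk|^2 for an n x n unitary matrix U."""
    return U.shape[0] * np.abs(U).max() ** 2

n = 8
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT matrix: flat entries
I = np.eye(n)                            # identity: maximally concentrated
```

Here `unitary_coherence(F)` is 1 (up to rounding) and `unitary_coherence(I)` is n.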
In [27], the authors studied the nuclear norm heuristic applied to a related problem where partial information about a matrix M is available from m equations of the form

    ⟨A^(k), M⟩ = Σ_ij A^(k)_ij M_ij = b_k,  k = 1, ..., m,   (1.15)

where for each k, {A^(k)_ij}_ij is an i.i.d. sequence of Gaussian or Bernoulli random variables and the sequences {A^(k)} are also independent from each other (the sequences {A^(k)} and {b_k} are available to the analyst). Building on the concept of restricted isometry introduced in [12] in the context of sparse signal recovery, [27] establishes the first sufficient conditions for which the nuclear norm heuristic returns the minimum rank element in the constraint set. They prove that the heuristic succeeds with large probability whenever the number m of available measurements is greater than a constant times 2nr log n for n × n matrices. Although this is an interesting result, a serious impediment to this approach is that one needs to essentially measure random projections of the unknown data matrix, a situation which unfortunately does not commonly arise in practice. Further, the measurements in (1.15) give some information about all the entries of M, whereas in our problem, information about most of the entries is simply not available. In particular, the results and techniques introduced in [27] do not begin to address the matrix completion problem of interest to us in this paper. As a consequence, our methods are completely different; for example, they do not rely on any notions of restricted isometry. Instead, as we discuss below, we prove the existence of a Lagrange multiplier for the optimization (1.5) that certifies that the unique optimal solution is precisely the matrix that we wish to recover.

Finally, we would like to briefly discuss the possibility of other recovery algorithms when the sampling happens to be chosen in a very special fashion. For example, suppose that M is generic and that we precisely observe every entry in the first r rows and columns of the matrix.
Write M in block form as

    M = [M_11  M_12; M_21  M_22]

with M_11 an r × r matrix. In the special case that M_11 is invertible and M has rank r, it is easy to verify that M_22 = M_21 M_11^{−1} M_12. One can prove this identity by forming the SVD of M, for example. That is, if M is generic, the upper r × r block is invertible, and we observe every entry in the first r rows and columns, we can recover M. This result immediately generalizes to the case where one observes precisely r rows and r columns and the r × r matrix at the intersection of the observed rows and columns is invertible. However, this scheme has many practical drawbacks that stand in the way of a generalization to a completion algorithm from a general set of entries. First, if we miss any entry in these rows or columns, we cannot recover M, nor can we leverage any information provided by entries of M_22. Second, if the matrix has rank less than r, and we observe r rows and columns, a combinatorial search to find the collection that has an invertible square sub-block is required. Moreover, because of the matrix inversion, the algorithm is rather fragile to noise in the entries.

1.5 Notations and organization of the paper

The paper is organized as follows. We first argue in Section 2 that the random orthogonal model and, more generally, matrices with incoherent column and row spaces obey the assumptions of the general Theorem 1.3. To prove Theorem 1.3, we first establish sufficient conditions which guarantee
that the true low-rank matrix M is the unique solution to (1.5) in Section 3. One of these conditions is the existence of a dual vector obeying two crucial properties. Section 4 constructs such a dual vector and provides the overall architecture of the proof, which shows that, indeed, this vector obeys the desired properties provided that the number of measurements is sufficiently large. Surprisingly, as explored in Section 5, the existence of a dual vector certifying that M is unique is related to some problems in random graph theory, including the coupon collector's problem. Following this discussion, we prove our main result via several intermediate results which are all proven in Section 6. Section 7 introduces numerical experiments showing that matrix completion based on nuclear norm minimization works well in practice. Section 8 closes the paper with a short summary of our findings and a discussion of important extensions and improvements. In particular, we will discuss possible ways of improving the 1.2 exponent in (1.10) so that it gets closer to 1. Finally, the Appendix provides proofs of auxiliary lemmas supporting our main argument.

Before continuing, we provide here a brief summary of the notations used throughout the paper. Matrices are bold capital, vectors are bold lowercase, and scalars or entries are not bold. For instance, X is a matrix and X_ij its (i, j)th entry. Likewise, x is a vector and x_i its ith component. When we have a collection of vectors u_k ∈ R^n for 1 ≤ k ≤ d, we will denote by u_ik the ith component of the vector u_k, and [u_1, ..., u_d] will denote the n × d matrix whose kth column is u_k.

A variety of norms on matrices will be discussed. The spectral norm of a matrix is denoted by ||X||. The Euclidean inner product between two matrices is ⟨X, Y⟩ = trace(X^* Y), and the corresponding Euclidean norm, called the Frobenius or Hilbert-Schmidt norm, is denoted ||X||_F. That is, ||X||_F = ⟨X, X⟩^{1/2}. The nuclear norm of a matrix X is denoted ||X||_*. The maximum entry of X (in absolute value) is denoted by ||X||_∞ ≡ max_ij |X_ij|.
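All four matrix norms just listed are available through numpy (an illustrative sketch, not from the paper):

```python
import numpy as np

X = np.diag([3.0, 4.0])

spectral = np.linalg.norm(X, 2)        # ||X||: largest singular value -> 4
frobenius = np.linalg.norm(X, 'fro')   # ||X||_F = <X, X>^{1/2} -> 5
nuclear = np.linalg.svd(X, compute_uv=False).sum()   # ||X||_* -> 7
max_entry = np.abs(X).max()            # ||X||_inf: largest entry -> 4
```

For this diagonal example the singular values are 3 and 4, so the four norms are 4, 5, 7, and 4, respectively.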
For vectors, we will only consider the usual Euclidean l_2 norm, which we simply write as ||x||.

Further, we will also manipulate linear transformations which act on matrices, and will use calligraphic letters for these operators, as in A(X). In particular, the identity operator will be denoted by I. The only norm we will consider for these operators is their spectral norm (the top singular value), denoted by ||A|| = sup_{X : ||X||_F ≤ 1} ||A(X)||_F.

Finally, we adopt the convention that C denotes a numerical constant independent of the matrix dimensions, rank, and number of measurements, whose value may change from line to line. Certain special constants with precise numerical values will be ornamented with subscripts (e.g., C_R). Any exceptions to this notational scheme will be noted in the text.

2 Which matrices are incoherent?

In this section we restrict our attention to square n × n matrices, but the extension to rectangular n_1 × n_2 matrices immediately follows by setting n = max(n_1, n_2).

2.1 Incoherent bases span incoherent subspaces

Almost all n × n matrices M with singular vectors {u_k}_{1≤k≤r} and {v_k}_{1≤k≤r} obeying the size property (1.12) also satisfy the assumptions A0 and A1 with µ_0 = µ_B, µ_1 = C µ_B √(log n) for some positive constant C. As mentioned above, A0 holds automatically, but observe that A1 would not hold with a small value of µ_1 if two rows of the matrices [u_1, ..., u_r] and [v_1, ..., v_r] are identical
with all entries of magnitude $\sqrt{\mu_B/n}$, since it is not hard to see that in this case $\|\sum_k u_k v_k^*\|_\infty = \mu_B\, r/n$. Certainly, this example is constructed in a very special way, and should occur infrequently. We now show that it is generically unlikely. Consider the matrix

$$\sum_{k=1}^r \epsilon_k u_k v_k^*, \qquad (2.1)$$

where $\{\epsilon_k\}_{1 \le k \le r}$ is an arbitrary sign sequence. For almost all choices of sign sequences, A1 is satisfied with $\mu_1 = O(\mu_B \sqrt{\log n})$. Indeed, if one selects the signs uniformly at random, then for each $\beta > 0$,

$$\mathbb{P}\Bigl(\Bigl\|\sum_{k=1}^r \epsilon_k u_k v_k^*\Bigr\|_\infty \ge \mu_B \sqrt{8\beta r \log n}/n\Bigr) \le (2n^2)\, n^{-\beta}. \qquad (2.2)$$

This is of interest because suppose the low-rank matrix we wish to recover is of the form

$$M = \sum_{k=1}^r \lambda_k u_k v_k^* \qquad (2.3)$$

with scalars $\lambda_k$. Since the vectors $\{u_k\}$ and $\{v_k\}$ are orthogonal, the singular values of $M$ are given by $|\lambda_k|$ and the singular vectors by $\operatorname{sgn}(\lambda_k)\, u_k$ and $v_k$ for $k = 1, \ldots, r$. Hence, in this model A1 concerns the maximum entry of the matrix given by (2.1) with $\epsilon_k = \operatorname{sgn}(\lambda_k)$. That is to say, for most sign patterns, the matrix of interest obeys an appropriate size condition. We emphasize here that the only thing we assumed about the $u_k$'s and $v_k$'s was that they have small entries. In particular, they could be equal to each other, as would be the case for a symmetric matrix.

The claim (2.2) is a simple application of Hoeffding's inequality. The $(i,j)$th entry of (2.1) is given by $Z_{ij} = \sum_{1 \le k \le r} \epsilon_k u_{ik} v_{jk}$ and is a sum of $r$ zero-mean independent random variables, each bounded by $\mu_B/n$. Therefore,

$$\mathbb{P}(|Z_{ij}| \ge \lambda \mu_B \sqrt{r}/n) \le 2 e^{-\lambda^2/8}.$$

Setting $\lambda$ proportional to $\sqrt{\log n}$ and applying the union bound gives the claim.

To summarize, we say that $M$ is sampled from the incoherent basis model if it is of the form

$$M = \sum_{k=1}^r \epsilon_k \sigma_k u_k v_k^*; \qquad (2.4)$$

$\{\epsilon_k\}_{1 \le k \le r}$ is a random sign sequence, and $\{u_k\}_{1 \le k \le r}$ and $\{v_k\}_{1 \le k \le r}$ have maximum entries of size at most $\sqrt{\mu_B/n}$.
Lemma 2.1 There exist numerical constants $c$ and $C$ such that for any $\beta > 0$, matrices from the incoherent basis model obey the assumption A1 with $\mu_1 \le C \mu_B \sqrt{(\beta + 2) \log n}$ with probability at least $1 - c\, n^{-\beta}$.
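To see the claim in action, here is a small numerical check (our own sketch; the parameters and the choice of DCT basis are illustrative, not taken from the paper). Columns of the orthonormal DCT matrix have entries of size at most $\sqrt{2/n}$, so they satisfy the size property with $\mu_B \le 2$, and with random signs the maximum entry of $\sum_k \epsilon_k u_k v_k^*$ stays below the threshold in (2.2):

```python
import numpy as np
from scipy.fft import dct

n, r, beta = 256, 8, 3
rng = np.random.default_rng(1)

# An incoherent basis: the orthonormal DCT matrix has entries of size at
# most sqrt(2/n), i.e. mu_B <= 2 in the size property (1.12).
B = dct(np.eye(n), axis=0, norm='ortho')
U = B[:, rng.choice(n, size=r, replace=False)]
V = B[:, rng.choice(n, size=r, replace=False)]
mu_B = n * max(np.abs(U).max(), np.abs(V).max()) ** 2

# Random signs, as in the incoherent basis model (2.4).
eps = rng.choice([-1.0, 1.0], size=r)
F = (U * eps) @ V.T

# Claim (2.2): the maximum entry is below mu_B * sqrt(8 beta r log n) / n
# except with probability at most 2 n^(2 - beta).
bound = mu_B * np.sqrt(8 * beta * r * np.log(n)) / n
print(np.abs(F).max(), bound)
```

For this basis the check even holds deterministically, since each entry of $F$ is a sum of $r$ terms of size at most $\mu_B/n$; the point of (2.2) is that random signs give the much smaller $\sqrt{r}$ scaling.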
2.2 Random subspaces span incoherent subspaces

In this section, we prove that the random orthogonal model obeys the two assumptions A0 and A1 (with appropriate values for the $\mu$'s) with large probability.

Lemma 2.2 Set $\bar r = \max(r, \log n)$.³ Then there exist constants $C$ and $c$ such that the random orthogonal model obeys:

1. $\max_i \|P_U e_i\|^2 \le C\, \bar r/n$,

2. $\|\sum_{1 \le k \le r} u_k v_k^*\|_\infty \le C (\log n)\sqrt{\bar r}/n$

with probability $1 - c\, n^{-3} \log n$.

We note that an argument similar to the following proof would give that if $C$ is of the form $K\beta$, where $K$ is a fixed numerical constant, we can achieve a probability of at least $1 - c\, n^{-\beta}$ provided that $n$ is sufficiently large. To establish these facts, we make use of the standard result below [21].

Lemma 2.3 Let $Y_d$ be distributed as a chi-squared random variable with $d$ degrees of freedom. Then for each $t > 0$,

$$\mathbb{P}(Y_d - d \ge t\sqrt{2d} + t^2) \le e^{-t^2/2} \quad \text{and} \quad \mathbb{P}(Y_d - d \le -t\sqrt{2d}) \le e^{-t^2/2}. \qquad (2.5)$$

We will use (2.5) as follows: for each $\epsilon \in (0,1)$ we have

$$\mathbb{P}(Y_d/d \ge (1-\epsilon)^{-1}) \le e^{-\epsilon^2 d/4} \quad \text{and} \quad \mathbb{P}(Y_d/d \le 1-\epsilon) \le e^{-\epsilon^2 d/4}. \qquad (2.6)$$

We begin with the second assertion of Lemma 2.2, since it will imply the first as well. Observe that it follows from

$$\|P_U e_i\|^2 = \sum_{1 \le k \le r} u_{ik}^2 \qquad (2.7)$$

that $Z_r \equiv \|P_U e_i\|^2$ ($i$ fixed) is the squared Euclidean length of the first $r$ components of a unit vector uniformly distributed on the unit sphere in $n$ dimensions. Now suppose that $x_1, x_2, \ldots, x_n$ are i.i.d. $N(0,1)$. Then the distribution of a unit vector uniformly distributed on the sphere is that of $x/\|x\|$ and, therefore, the law of $Z_r$ is that of $Y_r/Y_n$, where $Y_r = \sum_{k \le r} x_k^2$.

Fix $\epsilon > 0$ and consider the event $A_{n,\epsilon} = \{Y_n/n \ge 1 - \epsilon\}$. For each $\lambda > 0$, it follows from (2.6) that

$$\begin{aligned}
\mathbb{P}(Z_r - r/n \ge \lambda \sqrt{2r}/n) &= \mathbb{P}(Y_r \ge [r + \lambda\sqrt{2r}]\, Y_n/n) \\
&\le \mathbb{P}(Y_r \ge [r + \lambda\sqrt{2r}]\, Y_n/n \ \text{and} \ A_{n,\epsilon}) + \mathbb{P}(A_{n,\epsilon}^c) \\
&\le \mathbb{P}(Y_r \ge [r + \lambda\sqrt{2r}]\,[1 - \epsilon]) + e^{-\epsilon^2 n/4} \\
&= \mathbb{P}\bigl(Y_r - r \ge \lambda\sqrt{2r}\,\bigl[1 - \epsilon - \epsilon\sqrt{r/2}/\lambda\bigr]\bigr) + e^{-\epsilon^2 n/4}.
\end{aligned}$$

Now pick $\epsilon = 4(n^{-1}\log n)^{1/2}$, $\lambda = 8\sqrt{2\log n}$, and assume that $n$ is sufficiently large so that $\epsilon(1 + \sqrt{r/2}/\lambda) \le 1/2$.

³ When $r \ge C'(\log n)^3$ for some positive constant $C'$, a better estimate is possible, namely, $\|\sum_{1 \le k \le r} u_k v_k^*\|_\infty \le C\sqrt{r \log n}/n$.
Then

$$\mathbb{P}(Z_r - r/n \ge \lambda\sqrt{2r}/n) \le \mathbb{P}(Y_r - r \ge (\lambda/2)\sqrt{2r}) + n^{-4}.$$

Assume now that $r \ge 4 \log n$ (which means that $\lambda \le 4\sqrt{2r}$). Then it follows from (2.5) that

$$\mathbb{P}(Y_r - r \ge (\lambda/2)\sqrt{2r}) \le \mathbb{P}(Y_r - r \ge (\lambda/4)\sqrt{2r} + (\lambda/4)^2) \le e^{-\lambda^2/32} = n^{-4}.$$

Hence

$$\mathbb{P}(Z_r - r/n \ge 16\sqrt{r \log n}/n) \le 2n^{-4}$$

and, therefore,

$$\mathbb{P}\bigl(\max_i \|P_U e_i\|^2 - r/n \ge 16\sqrt{r \log n}/n\bigr) \le 2n^{-3} \qquad (2.8)$$

by the union bound. Note that (2.8) establishes the first claim of the lemma (even for $r < 4\log n$, since in this case $Z_r \le Z_{4\log n}$).

It remains to establish the second claim. Notice that by symmetry, $E = \sum_{1 \le k \le r} u_k v_k^*$ has the same distribution as

$$F = \sum_{k=1}^r \epsilon_k u_k v_k^*,$$

where $\{\epsilon_k\}$ is an independent Rademacher sequence. It then follows from Hoeffding's inequality that, conditional on $\{u_k\}$ and $\{v_k\}$, we have

$$\mathbb{P}(|F_{ij}| > t) \le 2 e^{-t^2/2\sigma_{ij}^2}, \qquad \sigma_{ij}^2 = \sum_{1 \le k \le r} u_{ik}^2 v_{jk}^2.$$

Our previous results indicate that $\max_{ij} |v_{ij}|^2 \le (10 \log n)/n$ with large probability, and thus

$$\sigma_{ij}^2 \le \frac{10 \log n}{n}\, \|P_U e_i\|^2.$$

Set $\bar r = \max(r, \log n)$. Since $\|P_U e_i\|^2 \le C \bar r/n$ with large probability, we have $\sigma_{ij}^2 \le C (\log n)\, \bar r/n^2$ with large probability. Hence the marginal distribution of $F_{ij}$ obeys

$$\mathbb{P}(|F_{ij}| > \lambda \sqrt{\bar r}/n) \le 2 e^{-\gamma \lambda^2/\log n} + \mathbb{P}\bigl(\sigma_{ij}^2 \ge C (\log n)\, \bar r/n^2\bigr)$$

for some numerical constant $\gamma$. Picking $\lambda = \gamma' \log n$, where $\gamma'$ is a sufficiently large numerical constant, gives

$$\|F\|_\infty \le C (\log n)\sqrt{\bar r}/n$$

with large probability. Since $E$ and $F$ have the same distribution, the second claim follows.

The claim about the size of $\max_{ij} |v_{ij}|^2$ is straightforward, since our techniques show that for each $\lambda > 0$,

$$\mathbb{P}(Z_1 \ge \lambda(\log n)/n) \le \mathbb{P}(Y_1 \ge \lambda(1-\epsilon)\log n) + e^{-\epsilon^2 n/4}.$$

Moreover,

$$\mathbb{P}(Y_1 \ge \lambda(1-\epsilon)\log n) = \mathbb{P}\bigl(|x_1| \ge \sqrt{\lambda(1-\epsilon)\log n}\bigr) \le 2 e^{-\frac{1}{2}\lambda(1-\epsilon)\log n}.$$

If $n$ is sufficiently large so that $\epsilon \le 1/5$, this gives $\mathbb{P}(Z_1 \ge 10(\log n)/n) \le 3n^{-4}$ and, therefore,

$$\mathbb{P}\bigl(\max_{ij} |v_{ij}|^2 \ge 10(\log n)/n\bigr) \le 12\, n^{-3} \log n$$

since the maximum is taken over at most $4 n \log n$ pairs.
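The first claim of Lemma 2.2 is easy to observe numerically (an informal sketch of ours, with arbitrary sizes): the quantities $\|P_U e_i\|^2$, i.e. the squared row norms of an orthonormal basis of a random subspace, concentrate around $r/n$, well within the $r/n + 16\sqrt{r\log n}/n$ deviation established in (2.8).

```python
import numpy as np

n, r = 500, 10
rng = np.random.default_rng(2)

# Random orthogonal model: U is the column span of a Gaussian matrix, hence
# a uniformly distributed r-dimensional subspace of R^n.
Q, _ = np.linalg.qr(rng.standard_normal((n, r)))

# ||P_U e_i||^2 is the squared Euclidean norm of the ith row of Q.
leverage = np.sum(Q ** 2, axis=1)

# The scores sum to r (the trace of the projection P_U), and their
# maximum stays within the deviation bound (2.8).
print(leverage.max(), r / n + 16 * np.sqrt(r * np.log(n)) / n)
```

These quantities are the leverage scores of the subspace; the lemma says a random subspace has nearly flat leverage, which is exactly incoherence with the standard basis.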
3 Duality

Let $R_\Omega : \mathbb{R}^{n_1 \times n_2} \to \mathbb{R}^{|\Omega|}$ be the sampling operator which extracts the observed entries, $R_\Omega(X) = (X_{ij})_{ij \in \Omega}$, so that the constraint in (1.5) becomes $R_\Omega(X) = R_\Omega(M)$. Standard convex optimization theory asserts that $X$ is a solution to (1.5) if there exists a dual vector (or Lagrange multiplier) $\lambda \in \mathbb{R}^{|\Omega|}$ such that $R_\Omega^* \lambda$ is a subgradient of the nuclear norm at the point $X$, which we denote by

$$R_\Omega^* \lambda \in \partial \|X\|_* \qquad (3.1)$$

(see, e.g., [7]). Recall the definition of a subgradient of a convex function $f : \mathbb{R}^{n_1 \times n_2} \to \mathbb{R}$. We say that $Y$ is a subgradient of $f$ at $X_0$, denoted $Y \in \partial f(X_0)$, if

$$f(X) \ge f(X_0) + \langle Y, X - X_0\rangle \qquad (3.2)$$

for all $X$. Suppose $X_0 \in \mathbb{R}^{n_1 \times n_2}$ has rank $r$ with a singular value decomposition given by

$$X_0 = \sum_{1 \le k \le r} \sigma_k u_k v_k^*. \qquad (3.3)$$

With these notations, $Y$ is a subgradient of the nuclear norm at $X_0$ if and only if it is of the form

$$Y = \sum_{1 \le k \le r} u_k v_k^* + W, \qquad (3.4)$$

where $W$ obeys the following two properties:

(i) the column space of $W$ is orthogonal to $U \equiv \operatorname{span}(u_1, \ldots, u_r)$, and the row space of $W$ is orthogonal to $V \equiv \operatorname{span}(v_1, \ldots, v_r)$;

(ii) the spectral norm of $W$ is less than or equal to 1

(see, e.g., [23, 36]). To express these properties concisely, it is convenient to introduce the orthogonal decomposition $\mathbb{R}^{n_1 \times n_2} = T \oplus T^\perp$, where $T$ is the linear space spanned by elements of the form $u_k x^*$ and $y v_k^*$, $1 \le k \le r$, where $x$ and $y$ are arbitrary, and $T^\perp$ is its orthogonal complement. Note that $\dim(T) = r(n_1 + n_2 - r)$, precisely the number of degrees of freedom in the set of $n_1 \times n_2$ matrices of rank $r$. $T^\perp$ is the subspace of matrices spanned by the family $(x y^*)$, where $x$ (respectively $y$) is any vector orthogonal to $U$ (respectively $V$). The orthogonal projection $P_T$ onto $T$ is given by

$$P_T(X) = P_U X + X P_V - P_U X P_V, \qquad (3.5)$$

where $P_U$ and $P_V$ are the orthogonal projections onto $U$ and $V$. Note here that while $P_U$ and $P_V$ are matrices, $P_T$ is a linear operator mapping matrices to matrices. We also have

$$P_{T^\perp}(X) = (\mathcal{I} - P_T)(X) = (I_{n_1} - P_U)\, X\, (I_{n_2} - P_V),$$

where $I_d$ denotes the $d \times d$ identity matrix.
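The decomposition $T \oplus T^\perp$ and formula (3.5) are straightforward to implement. The sketch below (ours, with hypothetical dimensions) checks that $P_T$ is indeed an orthogonal projection and that $P_{T^\perp}$ is its complement.

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2, r = 30, 20, 4
U, _ = np.linalg.qr(rng.standard_normal((n1, r)))
V, _ = np.linalg.qr(rng.standard_normal((n2, r)))
PU, PV = U @ U.T, V @ V.T

def PT(X):
    # The projection onto T, per (3.5).
    return PU @ X + X @ PV - PU @ X @ PV

def PTperp(X):
    # The complementary projection: (I - P_U) X (I - P_V).
    return (np.eye(n1) - PU) @ X @ (np.eye(n2) - PV)

X = rng.standard_normal((n1, n2))
assert np.allclose(PT(PT(X)), PT(X))          # idempotent
assert np.allclose(PT(X) + PTperp(X), X)      # complementary
assert abs(np.sum(PT(X) * PTperp(X))) < 1e-8  # orthogonal components
```

Expanding $(I - P_U)X(I - P_V)$ recovers $X - P_T(X)$ term by term, which is how the two formulas above are consistent.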
With these notations, $Y \in \partial\|X_0\|_*$ if

(i′) $P_T(Y) = \sum_{1 \le k \le r} u_k v_k^*$,
(ii′) $\|P_{T^\perp}(Y)\| \le 1$.

Now that we have characterized the subgradient of the nuclear norm, the lemma below gives sufficient conditions for the uniqueness of the minimizer to (1.5).

Lemma 3.1 Consider a matrix $X_0 = \sum_{k=1}^r \sigma_k u_k v_k^*$ of rank $r$ which is feasible for the problem (1.5), and suppose that the following two conditions hold:

1. there exists a dual point $\lambda$ such that $Y = R_\Omega^* \lambda$ obeys

$$P_T(Y) = \sum_{k=1}^r u_k v_k^*, \qquad \|P_{T^\perp}(Y)\| < 1; \qquad (3.6)$$

2. the sampling operator $R_\Omega$ restricted to elements in $T$ is injective.

Then $X_0$ is the unique minimizer.

Before proving this result, we would like to emphasize that this lemma provides a clear strategy for proving our main result, namely, Theorem 1.3. Letting $M = \sum_{k=1}^r \sigma_k u_k v_k^*$, $M$ is the unique solution to (1.5) if the injectivity condition holds and if one can find a dual point $\lambda$ such that $Y = R_\Omega^* \lambda$ obeys (3.6).

The proof of Lemma 3.1 uses a standard fact which states that the nuclear norm and the spectral norm are dual to one another.

Lemma 3.2 For each pair $W$ and $H$, we have $\langle W, H\rangle \le \|W\| \, \|H\|_*$. In addition, for each $H$, there is a $W$ obeying $\|W\| = 1$ which achieves the equality.

A variety of proofs are available for this lemma, and an elementary argument is sketched in [27]. We now turn to the proof of Lemma 3.1.

Proof [of Lemma 3.1] Consider any perturbation $X_0 + H$ where $R_\Omega(H) = 0$. Then for any $W_0$ obeying (i)-(ii), $\sum_{k=1}^r u_k v_k^* + W_0$ is a subgradient of the nuclear norm at $X_0$ and, therefore,

$$\|X_0 + H\|_* \ge \|X_0\|_* + \Bigl\langle \sum_{k=1}^r u_k v_k^* + W_0,\, H \Bigr\rangle.$$

Letting $W = P_{T^\perp}(Y)$, we may write $\sum_{k=1}^r u_k v_k^* = R_\Omega^* \lambda - W$. Since $\|W\| < 1$ and $R_\Omega(H) = 0$, it then follows that

$$\|X_0 + H\|_* \ge \|X_0\|_* + \langle W_0 - W, H\rangle.$$

Now by construction,

$$\langle W_0 - W, H\rangle = \langle P_{T^\perp}(W_0 - W), H\rangle = \langle W_0 - W, P_{T^\perp}(H)\rangle.$$

We use Lemma 3.2 and set $W_0 = P_{T^\perp}(Z)$, where $Z$ is any matrix obeying $\|Z\| \le 1$ and $\langle Z, P_{T^\perp}(H)\rangle = \|P_{T^\perp}(H)\|_*$. Then $W_0 \in T^\perp$, $\|W_0\| \le 1$, and

$$\langle W_0 - W, H\rangle \ge (1 - \|W\|)\, \|P_{T^\perp}(H)\|_*,$$

which by assumption is strictly positive unless $P_{T^\perp}(H) = 0$. In other words, $\|X_0 + H\|_* > \|X_0\|_*$ unless $P_{T^\perp}(H) = 0$. Assume then that $P_{T^\perp}(H) = 0$, or equivalently that $H \in T$.
Then $R_\Omega(H) = 0$ implies that $H = 0$ by the injectivity assumption. In conclusion, $\|X_0 + H\|_* > \|X_0\|_*$ unless $H = 0$.
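The duality of Lemma 3.2 is easy to verify numerically (our own sketch): for $H$ with singular value decomposition $H = U\Sigma V^*$, the matrix $W = UV^*$ has unit spectral norm and achieves $\langle W, H\rangle = \|H\|_*$, while arbitrary matrices $W$ obey the inequality.

```python
import numpy as np

rng = np.random.default_rng(4)
H = rng.standard_normal((6, 5))
U, s, Vt = np.linalg.svd(H, full_matrices=False)
nuclear_H = np.sum(s)   # ||H||_* = sum of singular values

# <W, H> <= ||W|| ||H||_* for arbitrary W ...
for _ in range(100):
    W = rng.standard_normal((6, 5))
    assert np.sum(W * H) <= np.linalg.norm(W, 2) * nuclear_H + 1e-9

# ... with equality achieved by W = U V^*, which has spectral norm 1.
W = U @ Vt
assert np.isclose(np.linalg.norm(W, 2), 1.0)
assert np.isclose(np.sum(W * H), nuclear_H)
```

The equality case is exactly why $\sum_k u_k v_k^* + W$ with $\|W\| \le 1$ parameterizes the subdifferential in (3.4).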
4 Architecture of the proof

Our strategy to prove that $M = \sum_{1 \le k \le r} \sigma_k u_k v_k^*$ is the unique minimizer to (1.5) is to construct a matrix $Y$ which vanishes on $\Omega^c$ and obeys the conditions of Lemma 3.1 (and to show the injectivity of the sampling operator restricted to matrices in $T$ along the way). Set $P_\Omega$ to be the orthogonal projector onto the indices in $\Omega$, so that the $(i,j)$th component of $P_\Omega(X)$ is equal to $X_{ij}$ if $(i,j) \in \Omega$ and zero otherwise. Our candidate $Y$ will be the solution to

$$\text{minimize } \|X\|_F \quad \text{subject to } (P_T P_\Omega)(X) = \sum_{k=1}^r u_k v_k^*. \qquad (4.1)$$

The matrix $Y$ vanishes on $\Omega^c$, as otherwise it would not be an optimal solution, since $P_\Omega(Y)$ would obey the constraint and have a smaller Frobenius norm. Hence $Y = P_\Omega(Y)$ and $P_T(Y) = \sum_{k=1}^r u_k v_k^*$. Since the Pythagoras formula gives

$$\|Y\|_F^2 = \|P_T(Y)\|_F^2 + \|P_{T^\perp}(Y)\|_F^2 = \Bigl\|\sum_{k=1}^r u_k v_k^*\Bigr\|_F^2 + \|P_{T^\perp}(Y)\|_F^2 = r + \|P_{T^\perp}(Y)\|_F^2,$$

minimizing the Frobenius norm of $X$ amounts to minimizing the Frobenius norm of $P_{T^\perp}(X)$ under the constraint $P_T(X) = \sum_{k=1}^r u_k v_k^*$. Our motivation is twofold. First, the solution to the least-squares problem (4.1) has a closed form that is amenable to analysis. Second, by forcing $P_{T^\perp}(Y)$ to be small in the Frobenius norm, we hope that it will be small in the spectral norm as well, and establishing that $\|P_{T^\perp}(Y)\| < 1$ would prove that $M$ is the unique solution to (1.5).

To compute the solution to (4.1), we introduce the operator $\mathcal{A}_{\Omega T}$ defined by $\mathcal{A}_{\Omega T}(M) = P_\Omega P_T(M)$. Then, if $\mathcal{A}_{\Omega T}^* \mathcal{A}_{\Omega T} = P_T P_\Omega P_T$ has full rank when restricted to $T$, the minimizer to (4.1) is given by

$$Y = \mathcal{A}_{\Omega T} (\mathcal{A}_{\Omega T}^* \mathcal{A}_{\Omega T})^{-1}(E), \qquad E \equiv \sum_{k=1}^r u_k v_k^*. \qquad (4.2)$$

We clarify the meaning of (4.2) to avoid any confusion: $(\mathcal{A}_{\Omega T}^* \mathcal{A}_{\Omega T})^{-1}(E)$ is meant to be that element $F$ in $T$ obeying $(\mathcal{A}_{\Omega T}^* \mathcal{A}_{\Omega T})(F) = E$.

To summarize, the aims of our proof strategy are as follows. We must first show that $\mathcal{A}_{\Omega T}^* \mathcal{A}_{\Omega T} = P_T P_\Omega P_T$ is a one-to-one linear mapping from $T$ onto itself. In this case, $\mathcal{A}_{\Omega T} = P_\Omega P_T$ as a mapping from $T$ to $\mathbb{R}^{n_1 \times n_2}$ is injective. This is the second sufficient condition of Lemma 3.1.
Moreover, our ansatz for $Y$ given by (4.2) is then well-defined. Having established that $Y$ is well-defined, we will show that

$$\|P_{T^\perp}(Y)\| < 1,$$

thus proving the first sufficient condition.
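Numerically, the candidate (4.2) can be computed by solving the positive semidefinite system $(P_T P_\Omega P_T)(F) = E$ on $T$ with conjugate gradients and setting $Y = P_\Omega P_T(F)$. The sketch below (ours; the sizes and sampling are illustrative) checks the two defining properties: $Y$ vanishes off $\Omega$, and $P_T(Y) = E$.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(5)
n, r, m = 60, 2, 2400          # m observed entries out of n^2 = 3600

U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
E = U @ V.T                    # E = sum_k u_k v_k^*
PU, PV = U @ U.T, V @ V.T

mask = np.zeros(n * n, dtype=bool)
mask[rng.choice(n * n, size=m, replace=False)] = True
mask = mask.reshape(n, n)      # Omega, m entries sampled uniformly

PT = lambda X: PU @ X + X @ PV - PU @ X @ PV
POmega = lambda X: np.where(mask, X, 0.0)

# A = P_T P_Omega P_T on vectorized matrices; it is symmetric PSD, maps T
# to T, and is well-conditioned on T when m is large enough (Theorem 4.1).
A = LinearOperator((n * n, n * n),
                   matvec=lambda x: PT(POmega(PT(x.reshape(n, n)))).ravel())

# Solve (P_T P_Omega P_T)(F) = E, then set Y = P_Omega P_T (F) as in (4.2).
f, info = cg(A, E.ravel())
Y = POmega(PT(f.reshape(n, n)))

assert info == 0
assert np.allclose(Y[~mask], 0.0)         # Y vanishes on the complement of Omega
assert np.allclose(PT(Y), E, atol=1e-4)   # P_T(Y) = E
```

Conjugate gradients is appropriate here because $E$ lies in the range of the operator and the iterates never leave $T$, so the singularity of $P_T P_\Omega P_T$ on $T^\perp$ causes no trouble.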
4.1 The Bernoulli model

Instead of showing that the theorem holds when $\Omega$ is a set of size $m$ sampled uniformly at random, we prove the theorem for a subset $\Omega'$ sampled according to the Bernoulli model. Here and below, $\{\delta_{ij}\}_{1 \le i \le n_1,\, 1 \le j \le n_2}$ is a sequence of independent identically distributed 0/1 Bernoulli random variables with

$$\mathbb{P}(\delta_{ij} = 1) = p \equiv \frac{m}{n_1 n_2}, \qquad (4.3)$$

and define

$$\Omega' = \{(i,j) : \delta_{ij} = 1\}. \qquad (4.4)$$

Note that $\mathbb{E}|\Omega'| = m$, so that the average cardinality of $\Omega'$ is that of $\Omega$. Then, following the same reasoning as the argument developed in Section II.C of [11], one shows that the probability of failure under the uniform model is bounded by 2 times the probability of failure under the Bernoulli model; the failure event is the event on which the solution to (1.5) is not exact. Hence, we can restrict our attention to the Bernoulli model and, from now on, we will assume that $\Omega$ is given by (4.4). This is advantageous because the Bernoulli model admits a simpler analysis than uniform sampling, thanks to the independence between the $\delta_{ij}$'s.

4.2 The injectivity property

We study the injectivity of $\mathcal{A}_{\Omega T}$, which also shows that $Y$ is well-defined. To prove this, we will show that the linear operator $p^{-1} P_T(P_\Omega - p\mathcal{I})P_T$ has small operator norm, which we recall is $\sup_{\|X\|_F \le 1} p^{-1}\|P_T(P_\Omega - p\mathcal{I})P_T(X)\|_F$.

Theorem 4.1 Suppose $\Omega$ is sampled according to the Bernoulli model (4.3)-(4.4) and put $n = \max(n_1, n_2)$. Suppose that the coherences obey $\max(\mu(U), \mu(V)) \le \mu_0$. Then there is a numerical constant $C_R$ such that for all $\beta > 1$,

$$p^{-1}\,\|P_T P_\Omega P_T - p\, P_T\| \le C_R \sqrt{\frac{\mu_0\, nr(\beta \log n)}{m}} \qquad (4.5)$$

with probability at least $1 - 3n^{-\beta}$, provided that $C_R \sqrt{\mu_0\, nr(\beta \log n)/m} < 1$.

Proof Decompose any matrix $X$ as $X = \sum_{ab} \langle X, e_a e_b^*\rangle\, e_a e_b^*$, so that

$$P_T(X) = \sum_{ab} \langle P_T(X), e_a e_b^*\rangle\, e_a e_b^* = \sum_{ab} \langle X, P_T(e_a e_b^*)\rangle\, e_a e_b^*.$$

Hence, $P_\Omega P_T(X) = \sum_{ab} \delta_{ab} \langle X, P_T(e_a e_b^*)\rangle\, e_a e_b^*$, which gives

$$(P_T P_\Omega P_T)(X) = \sum_{ab} \delta_{ab} \langle X, P_T(e_a e_b^*)\rangle\, P_T(e_a e_b^*).$$

In other words,

$$P_T P_\Omega P_T = \sum_{ab} \delta_{ab}\, P_T(e_a e_b^*) \otimes P_T(e_a e_b^*).$$
It follows from the definition (3.5) of $P_T$ that

$$P_T(e_a e_b^*) = (P_U e_a) e_b^* + e_a (P_V e_b)^* - (P_U e_a)(P_V e_b)^*. \qquad (4.6)$$
This gives

$$\|P_T(e_a e_b^*)\|_F^2 = \langle P_T(e_a e_b^*), e_a e_b^*\rangle = \|P_U e_a\|^2 + \|P_V e_b\|^2 - \|P_U e_a\|^2\, \|P_V e_b\|^2 \qquad (4.7)$$

and since $\|P_U e_a\|^2 \le \mu(U)\, r/n_1$ and $\|P_V e_b\|^2 \le \mu(V)\, r/n_2$,

$$\|P_T(e_a e_b^*)\|_F^2 \le 2\mu_0\, r/\min(n_1, n_2). \qquad (4.8)$$

Now the fact that the operator $P_T P_\Omega P_T$ does not deviate from its expected value

$$\mathbb{E}(P_T P_\Omega P_T) = P_T(\mathbb{E}\, P_\Omega) P_T = P_T(p\mathcal{I})P_T = p\, P_T$$

in the spectral norm is related to Rudelson's selection theorem [29]. The first part of the theorem below may be found in [10], for example; see also [30] for a very similar statement.

Theorem 4.2 [10] Let $\{\delta_{ab}\}$ be independent 0/1 Bernoulli variables with $\mathbb{P}(\delta_{ab} = 1) = p = \frac{m}{n_1 n_2}$ and put $n = \max(n_1, n_2)$. Suppose that $\|P_T(e_a e_b^*)\|_F^2 \le 2\mu_0\, r/n$. Set

$$Z \equiv p^{-1} \Bigl\| \sum_{ab} (\delta_{ab} - p)\, P_T(e_a e_b^*) \otimes P_T(e_a e_b^*) \Bigr\| = \|p^{-1} P_T P_\Omega P_T - P_T\|.$$

1. There exists a constant $C_R'$ such that

$$\mathbb{E}\, Z \le C_R' \sqrt{\frac{\mu_0\, nr \log n}{m}} \qquad (4.9)$$

provided that the right-hand side is smaller than 1.

2. Suppose $\mathbb{E}\, Z \le 1$. Then for each $\lambda > 0$, we have

$$\mathbb{P}\Bigl(|Z - \mathbb{E}\, Z| > \lambda \sqrt{\frac{\mu_0\, nr \log n}{m}}\Bigr) \le 3 \exp\Bigl(-\gamma_0 \min\Bigl\{\lambda^2 \log n,\ \lambda \sqrt{\frac{m \log n}{\mu_0\, nr}}\Bigr\}\Bigr) \qquad (4.10)$$

for some positive constant $\gamma_0$.

As mentioned above, the first part, namely (4.9), is an application of an established result which states that if $\{y_i\}$ is a family of vectors in $\mathbb{R}^d$ and $\{\delta_i\}$ is a 0/1 Bernoulli sequence with $\mathbb{P}(\delta_i = 1) = p$, then

$$p^{-1}\, \mathbb{E}\Bigl\|\sum_i (\delta_i - p)\, y_i \otimes y_i\Bigr\| \le C \sqrt{\frac{\log d}{p}}\, \max_i \|y_i\|$$

for some $C > 0$, provided that the right-hand side is less than 1. The proof may be found in the cited literature, e.g., in [10]. Hence, the first part follows from applying this result to vectors of the form $P_T(e_a e_b^*)$ and using the available bound on $\|P_T(e_a e_b^*)\|_F$. The second part follows from Talagrand's concentration inequality, and may be found in the Appendix.

Set $\lambda = \sqrt{\beta/\gamma_0}$ and assume that $m > (\beta/\gamma_0)\, \mu_0\, nr \log n$. Then the left-hand side of (4.10) is bounded by $3n^{-\beta}$, and thus we have established that

$$Z \le C_R' \sqrt{\frac{\mu_0\, nr \log n}{m}} + \frac{1}{\sqrt{\gamma_0}} \sqrt{\frac{\mu_0\, nr\, \beta \log n}{m}}$$
with probability at least $1 - 3n^{-\beta}$. Setting $C_R = C_R' + 1/\sqrt{\gamma_0}$ finishes the proof.

Take $m$ large enough so that $C_R \sqrt{\mu_0 (nr/m) \log n} \le 1/2$. Then it follows from (4.5) that

$$\frac{p}{2}\,\|P_T(X)\|_F \le \|(P_T P_\Omega P_T)(X)\|_F \le \frac{3p}{2}\,\|P_T(X)\|_F \qquad (4.11)$$

for all $X$ with large probability. In particular, the operator $\mathcal{A}_{\Omega T}^* \mathcal{A}_{\Omega T} = P_T P_\Omega P_T$ mapping $T$ onto itself is well-conditioned, and hence invertible. An immediate consequence is the following:

Corollary 4.3 Assume that $C_R \sqrt{\mu_0\, nr(\log n)/m} \le 1/2$. With the same probability as in Theorem 4.1, we have

$$\|P_\Omega P_T(X)\|_F \le \sqrt{3p/2}\, \|P_T(X)\|_F. \qquad (4.12)$$

Proof We have $\|P_\Omega P_T(X)\|_F^2 = \langle X, (P_\Omega P_T)^*(P_\Omega P_T)X\rangle = \langle X, (P_T P_\Omega P_T)X\rangle$ and thus

$$\|P_\Omega P_T(X)\|_F^2 = \langle P_T X, (P_T P_\Omega P_T)X\rangle \le \|P_T(X)\|_F\, \|(P_T P_\Omega P_T)(X)\|_F,$$

where the inequality is due to Cauchy-Schwarz. The conclusion (4.12) follows from (4.11).

4.3 The size property

In this section, we explain how we will show that $\|P_{T^\perp}(Y)\| < 1$. This result will follow from five lemmas that we will prove in Section 6. Introduce

$$\mathcal{H} \equiv P_T - p^{-1} P_T P_\Omega P_T,$$

which obeys $\|\mathcal{H}(X)\|_F \le C_R \sqrt{\mu_0 (nr/m)\, \beta \log n}\; \|P_T(X)\|_F$ with large probability because of Theorem 4.1. For any matrix $X \in T$, $(P_T P_\Omega P_T)^{-1}(X)$ can be expressed in terms of the power series

$$(P_T P_\Omega P_T)^{-1}(X) = p^{-1}(X + \mathcal{H}(X) + \mathcal{H}^2(X) + \cdots),$$

for $\mathcal{H}$ is a contraction when $m$ is sufficiently large. Since $Y = P_\Omega P_T (P_T P_\Omega P_T)^{-1}\bigl(\sum_{1 \le k \le r} u_k v_k^*\bigr)$, $P_{T^\perp}(Y)$ may be decomposed as

$$P_{T^\perp}(Y) = p^{-1}(P_{T^\perp} P_\Omega P_T)(E + \mathcal{H}(E) + \mathcal{H}^2(E) + \cdots), \qquad E = \sum_{1 \le k \le r} u_k v_k^*. \qquad (4.13)$$

To bound the norm of the left-hand side, it is of course sufficient to bound the norms of the summands in the right-hand side. Taking the following five lemmas together establishes Theorem 1.3.

Lemma 4.4 Fix $\beta \ge 2$ and $\lambda \ge 1$. There is a numerical constant $C_0$ such that if $m \ge \lambda\, \mu_1^2\, nr\beta \log n$, then

$$\|p^{-1}(P_{T^\perp} P_\Omega P_T)E\| \le C_0\, \lambda^{-1/2} \qquad (4.14)$$

with probability at least $1 - n^{-\beta}$.

Lemma 4.5 Fix $\beta \ge 2$ and $\lambda \ge 1$. There are numerical constants $C_1$ and $c_1$ such that if $m \ge \lambda\, \mu_1 \max(\sqrt{\mu_0}, \mu_1)\, nr\beta \log n$, then

$$\|p^{-1}(P_{T^\perp} P_\Omega P_T)\mathcal{H}(E)\| \le C_1\, \lambda^{-1} \qquad (4.15)$$

with probability at least $1 - c_1 n^{-\beta}$.
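The power series suggests a direct numerical construction of the certificate: iterate $\mathcal{H}$ and sum. In the sketch below (our illustration, with a deliberately generous sampling rate $p = 0.9$ so that $\mathcal{H}$ is clearly a contraction), the truncated Neumann series recovers $P_T(Y) = E$ to high accuracy and produces $\|P_{T^\perp}(Y)\| < 1$, which is the size property we are after.

```python
import numpy as np

rng = np.random.default_rng(6)
n, r, p = 100, 2, 0.9          # generous sampling rate, chosen for illustration

U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
E = U @ V.T
PU, PV = U @ U.T, V @ V.T
PT = lambda X: PU @ X + X @ PV - PU @ X @ PV
PTperp = lambda X: X - PT(X)

# Bernoulli sampling model (4.3)-(4.4).
mask = rng.random((n, n)) < p
POmega = lambda X: np.where(mask, X, 0.0)

# H = P_T - p^{-1} P_T P_Omega P_T, a contraction on T for large m.
H = lambda X: PT(X) - PT(POmega(PT(X))) / p

# Truncated Neumann series: (P_T P_Omega P_T)^{-1}(E) = p^{-1} sum_j H^j(E).
F, term = np.zeros((n, n)), E.copy()
for _ in range(100):
    F += term
    term = H(term)
F /= p

Y = POmega(PT(F))
print(np.linalg.norm(PT(Y) - E, 'fro'), np.linalg.norm(PTperp(Y), 2))
```

After $k$ terms of the series, $P_T(Y) = E - \mathcal{H}^{k}(E)$, so the residual decays geometrically in the contraction factor of $\mathcal{H}$; the hard part of the paper is showing that $\|P_{T^\perp}(Y)\| < 1$ already at the far smaller sampling rates of Theorem 1.3.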
More informationImpact of Processing Costs on Service Chain Placement in Network Functions Virtualization
Ipact of Processing Costs on Service Chain Placeent in Network Functions Virtualization Marco Savi, Massio Tornatore, Giacoo Verticale Dipartiento di Elettronica, Inforazione e Bioingegneria, Politecnico
More informationComment on On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes
Coent on On Discriinative vs. Generative Classifiers: A Coparison of Logistic Regression and Naive Bayes Jing-Hao Xue (jinghao@stats.gla.ac.uk) and D. Michael Titterington (ike@stats.gla.ac.uk) Departent
More informationOptimal Resource-Constraint Project Scheduling with Overlapping Modes
Optial Resource-Constraint Proect Scheduling with Overlapping Modes François Berthaut Lucas Grèze Robert Pellerin Nathalie Perrier Adnène Hai February 20 CIRRELT-20-09 Bureaux de Montréal : Bureaux de
More informationApplying Multiple Neural Networks on Large Scale Data
0 International Conference on Inforation and Electronics Engineering IPCSIT vol6 (0) (0) IACSIT Press, Singapore Applying Multiple Neural Networks on Large Scale Data Kritsanatt Boonkiatpong and Sukree
More informationOnline Appendix I: A Model of Household Bargaining with Violence. In this appendix I develop a simple model of household bargaining that
Online Appendix I: A Model of Household Bargaining ith Violence In this appendix I develop a siple odel of household bargaining that incorporates violence and shos under hat assuptions an increase in oen
More information2. FINDING A SOLUTION
The 7 th Balan Conference on Operational Research BACOR 5 Constanta, May 5, Roania OPTIMAL TIME AND SPACE COMPLEXITY ALGORITHM FOR CONSTRUCTION OF ALL BINARY TREES FROM PRE-ORDER AND POST-ORDER TRAVERSALS
More informationA Scalable Application Placement Controller for Enterprise Data Centers
W WWW 7 / Track: Perforance and Scalability A Scalable Application Placeent Controller for Enterprise Data Centers Chunqiang Tang, Malgorzata Steinder, Michael Spreitzer, and Giovanni Pacifici IBM T.J.
More informationConsiderations on Distributed Load Balancing for Fully Heterogeneous Machines: Two Particular Cases
Considerations on Distributed Load Balancing for Fully Heterogeneous Machines: Two Particular Cases Nathanaël Cheriere Departent of Coputer Science ENS Rennes Rennes, France nathanael.cheriere@ens-rennes.fr
More informationInternational Journal of Management & Information Systems First Quarter 2012 Volume 16, Number 1
International Journal of Manageent & Inforation Systes First Quarter 2012 Volue 16, Nuber 1 Proposal And Effectiveness Of A Highly Copelling Direct Mail Method - Establishent And Deployent Of PMOS-DM Hisatoshi
More informationModels and Algorithms for Stochastic Online Scheduling 1
Models and Algoriths for Stochastic Online Scheduling 1 Nicole Megow Technische Universität Berlin, Institut für Matheatik, Strasse des 17. Juni 136, 10623 Berlin, Gerany. eail: negow@ath.tu-berlin.de
More informationAn Optimal Task Allocation Model for System Cost Analysis in Heterogeneous Distributed Computing Systems: A Heuristic Approach
An Optial Tas Allocation Model for Syste Cost Analysis in Heterogeneous Distributed Coputing Systes: A Heuristic Approach P. K. Yadav Central Building Research Institute, Rooree- 247667, Uttarahand (INDIA)
More informationGaussian Processes for Regression: A Quick Introduction
Gaussian Processes for Regression A Quick Introduction M Ebden, August 28 Coents to arkebden@engoacuk MOTIVATION Figure illustrates a typical eaple of a prediction proble given soe noisy observations of
More informationStable Learning in Coding Space for Multi-Class Decoding and Its Extension for Multi-Class Hypothesis Transfer Learning
Stable Learning in Coding Space for Multi-Class Decoding and Its Extension for Multi-Class Hypothesis Transfer Learning Bang Zhang, Yi Wang 2, Yang Wang, Fang Chen 2 National ICT Australia 2 School of
More informationAn improved TF-IDF approach for text classification *
Zhang et al. / J Zheiang Univ SCI 2005 6A(1:49-55 49 Journal of Zheiang University SCIECE ISS 1009-3095 http://www.zu.edu.cn/zus E-ail: zus@zu.edu.cn An iproved TF-IDF approach for text classification
More informationStandards and Protocols for the Collection and Dissemination of Graduating Student Initial Career Outcomes Information For Undergraduates
National Association of Colleges and Eployers Standards and Protocols for the Collection and Disseination of Graduating Student Initial Career Outcoes Inforation For Undergraduates Developed by the NACE
More informationImage restoration for a rectangular poor-pixels detector
Iage restoration for a rectangular poor-pixels detector Pengcheng Wen 1, Xiangjun Wang 1, Hong Wei 2 1 State Key Laboratory of Precision Measuring Technology and Instruents, Tianjin University, China 2
More informationThe individual neurons are complicated. They have a myriad of parts, subsystems and control mechanisms. They convey information via a host of
CHAPTER 4 ARTIFICIAL NEURAL NETWORKS 4. INTRODUCTION Artificial Neural Networks (ANNs) are relatively crude electronic odels based on the neural structure of the brain. The brain learns fro experience.
More informationAn Innovate Dynamic Load Balancing Algorithm Based on Task
An Innovate Dynaic Load Balancing Algorith Based on Task Classification Hong-bin Wang,,a, Zhi-yi Fang, b, Guan-nan Qu,*,c, Xiao-dan Ren,d College of Coputer Science and Technology, Jilin University, Changchun
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +
More informationResource Allocation in Wireless Networks with Multiple Relays
Resource Allocation in Wireless Networks with Multiple Relays Kağan Bakanoğlu, Stefano Toasin, Elza Erkip Departent of Electrical and Coputer Engineering, Polytechnic Institute of NYU, Brooklyn, NY, 0
More informationAn Approach to Combating Free-riding in Peer-to-Peer Networks
An Approach to Cobating Free-riding in Peer-to-Peer Networks Victor Ponce, Jie Wu, and Xiuqi Li Departent of Coputer Science and Engineering Florida Atlantic University Boca Raton, FL 33431 April 7, 2008
More informationGenerating Certification Authority Authenticated Public Keys in Ad Hoc Networks
SECURITY AND COMMUNICATION NETWORKS Published online in Wiley InterScience (www.interscience.wiley.co). Generating Certification Authority Authenticated Public Keys in Ad Hoc Networks G. Kounga 1, C. J.
More informationAudio Engineering Society. Convention Paper. Presented at the 119th Convention 2005 October 7 10 New York, New York USA
Audio Engineering Society Convention Paper Presented at the 119th Convention 2005 October 7 10 New York, New York USA This convention paper has been reproduced fro the authors advance anuscript, without
More informationModeling operational risk data reported above a time-varying threshold
Modeling operational risk data reported above a tie-varying threshold Pavel V. Shevchenko CSIRO Matheatical and Inforation Sciences, Sydney, Locked bag 7, North Ryde, NSW, 670, Australia. e-ail: Pavel.Shevchenko@csiro.au
More informationManaging Complex Network Operation with Predictive Analytics
Managing Coplex Network Operation with Predictive Analytics Zhenyu Huang, Pak Chung Wong, Patrick Mackey, Yousu Chen, Jian Ma, Kevin Schneider, and Frank L. Greitzer Pacific Northwest National Laboratory
More informationEnergy Efficient VM Scheduling for Cloud Data Centers: Exact allocation and migration algorithms
Energy Efficient VM Scheduling for Cloud Data Centers: Exact allocation and igration algoriths Chaia Ghribi, Makhlouf Hadji and Djaal Zeghlache Institut Mines-Téléco, Téléco SudParis UMR CNRS 5157 9, Rue
More information17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function
17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function, : V V R, which is symmetric, that is u, v = v, u. bilinear, that is linear (in both factors):
More informationThe Fundamentals of Modal Testing
The Fundaentals of Modal Testing Application Note 243-3 Η(ω) = Σ n r=1 φ φ i j / 2 2 2 2 ( ω n - ω ) + (2ξωωn) Preface Modal analysis is defined as the study of the dynaic characteristics of a echanical
More informationHOW CLOSE ARE THE OPTION PRICING FORMULAS OF BACHELIER AND BLACK-MERTON-SCHOLES?
HOW CLOSE ARE THE OPTION PRICING FORMULAS OF BACHELIER AND BLACK-MERTON-SCHOLES? WALTER SCHACHERMAYER AND JOSEF TEICHMANN Abstract. We copare the option pricing forulas of Louis Bachelier and Black-Merton-Scholes
More informationThe Velocities of Gas Molecules
he Velocities of Gas Molecules by Flick Colean Departent of Cheistry Wellesley College Wellesley MA 8 Copyright Flick Colean 996 All rights reserved You are welcoe to use this docuent in your own classes
More informationLinear Algebra Notes for Marsden and Tromba Vector Calculus
Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of
More informationPartitioning Data on Features or Samples in Communication-Efficient Distributed Optimization?
Partitioning Data on Features or Saples in Counication-Efficient Distributed Optiization? Chenxin Ma Industrial and Systes Engineering Lehigh University, USA ch54@lehigh.edu Martin Taáč Industrial and
More informationIEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, ACCEPTED FOR PUBLICATION 1. Secure Wireless Multicast for Delay-Sensitive Data via Network Coding
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, ACCEPTED FOR PUBLICATION 1 Secure Wireless Multicast for Delay-Sensitive Data via Network Coding Tuan T. Tran, Meber, IEEE, Hongxiang Li, Senior Meber, IEEE,
More informationBayes Point Machines
Journal of Machine Learning Research (2) 245 279 Subitted 2/; Published 8/ Bayes Point Machines Ralf Herbrich Microsoft Research, St George House, Guildhall Street, CB2 3NH Cabridge, United Kingdo Thore
More informationPERFORMANCE METRICS FOR THE IT SERVICES PORTFOLIO
Bulletin of the Transilvania University of Braşov Series I: Engineering Sciences Vol. 4 (53) No. - 0 PERFORMANCE METRICS FOR THE IT SERVICES PORTFOLIO V. CAZACU I. SZÉKELY F. SANDU 3 T. BĂLAN Abstract:
More informationThe Virtual Spring Mass System
The Virtual Spring Mass Syste J. S. Freudenberg EECS 6 Ebedded Control Systes Huan Coputer Interaction A force feedbac syste, such as the haptic heel used in the EECS 6 lab, is capable of exhibiting a
More informationEnrolment into Higher Education and Changes in Repayment Obligations of Student Aid Microeconometric Evidence for Germany
Enrolent into Higher Education and Changes in Repayent Obligations of Student Aid Microeconoetric Evidence for Gerany Hans J. Baugartner *) Viktor Steiner **) *) DIW Berlin **) Free University of Berlin,
More informationAn Improved Decision-making Model of Human Resource Outsourcing Based on Internet Collaboration
International Journal of Hybrid Inforation Technology, pp. 339-350 http://dx.doi.org/10.14257/hit.2016.9.4.28 An Iproved Decision-aking Model of Huan Resource Outsourcing Based on Internet Collaboration
More informationEnergy Proportionality for Disk Storage Using Replication
Energy Proportionality for Disk Storage Using Replication Jinoh Ki and Doron Rote Lawrence Berkeley National Laboratory University of California, Berkeley, CA 94720 {jinohki,d rote}@lbl.gov Abstract Energy
More informationImplementation of Active Queue Management in a Combined Input and Output Queued Switch
pleentation of Active Queue Manageent in a obined nput and Output Queued Switch Bartek Wydrowski and Moshe Zukeran AR Special Research entre for Ultra-Broadband nforation Networks, EEE Departent, The University
More informationA Gas Law And Absolute Zero Lab 11
HB 04-06-05 A Gas Law And Absolute Zero Lab 11 1 A Gas Law And Absolute Zero Lab 11 Equipent safety goggles, SWS, gas bulb with pressure gauge, 10 C to +110 C theroeter, 100 C to +50 C theroeter. Caution
More informationOnline Methods for Multi-Domain Learning and Adaptation
Online Methods for Multi-Doain Learning and Adaptation Mark Dredze and Koby Craer Departent of Coputer and Inforation Science University of Pennsylvania Philadelphia, PA 19104 USA {dredze,craer}@cis.upenn.edu
More informationEfficient Key Management for Secure Group Communications with Bursty Behavior
Efficient Key Manageent for Secure Group Counications with Bursty Behavior Xukai Zou, Byrav Raaurthy Departent of Coputer Science and Engineering University of Nebraska-Lincoln Lincoln, NE68588, USA Eail:
More information