On Computing Nearest Neighbors with Applications to Decoding of Binary Linear Codes


On Computing Nearest Neighbors with Applications to Decoding of Binary Linear Codes

Alexander May and Ilya Ozerov
Horst Görtz Institute for IT-Security, Ruhr-University Bochum, Germany
Faculty of Mathematics

Abstract. We propose a new decoding algorithm for random binary linear codes. The so-called information set decoding algorithm of Prange (1962) achieves worst-case complexity 2^{0.121n}. In the late 80s, Stern proposed a sort-and-match version of Prange's algorithm, on which all variants of the currently best known decoding algorithms are built. The fastest algorithm of Becker, Joux, May and Meurer (2012) achieves running time 2^{0.102n} in the full distance decoding setting and 2^{0.0494n} with half (bounded) distance decoding. In this work we point out that the sort-and-match routine in Stern's algorithm is carried out in a non-optimal way, since the matching is done in a two-step manner to realize an approximate matching up to a small number of error coordinates. Our observation is that such an approximate matching can be done by a variant of the so-called High Dimensional Nearest Neighbor Problem. Namely, out of two lists with entries from F_2^m we have to find a pair with closest Hamming distance. We develop a new algorithm for this problem with sub-quadratic complexity which might be of independent interest in other contexts. Using our algorithm for full distance decoding improves Stern's complexity from 2^{0.117n} to 2^{0.114n}. Since the techniques of Becker et al. apply to our algorithm as well, we eventually obtain the fastest decoding algorithm for binary linear codes with complexity 2^{0.097n}. In the half distance decoding scenario, we obtain a complexity of 2^{0.0473n}.

Keywords: linear codes, nearest neighbor problem, approximate matching, meet-in-the-middle

1 Introduction

The NP-hard decoding problem for random linear codes is one of the most fundamental combinatorial problems in coding and complexity theory.
Due to its purely combinatorial structure it is the major source for constructing cryptographic hardness assumptions that retain their hardness even in the presence of quantum computers.

(Supported by DFG as part of GRK 1817 Ubicrypt and SPP 1736 Big Data. Supported by DFG as part of SPP 1307 Algorithm Engineering.)

Almost all code-based cryptosystems, such as the McEliece encryption scheme [5], rely on the fact that random linear codes are hard to decode. The way cryptographers usually embed a trapdoor into code-based constructions is that they start with some structured code C, which allows for efficient decoding, and then use a linear transformation to obtain a scrambled code C'. The cryptographic transformation has to ensure that the scrambled code C' is indistinguishable from a purely random code. Unless somebody is able to unscramble the code, a cryptanalyst faces the problem of decoding a random linear code. Hence, for choosing appropriate security parameters for a cryptographic scheme it is crucial to know the best performance of generic decoding algorithms for linear codes. Also closely related to random linear codes is the so-called Learning Parity with Noise problem (LPN) that is frequently used in cryptography [9, ]. In LPN, one directly starts with a generator matrix that defines a random linear code C, and the LPN search problem is a decoding problem on C. Cryptographers usually prefer decision versions of hard problems for proving security of their schemes. However, for LPN there is a reduction from the decision to the search version that directly links the cryptographic hardness of the underlying schemes to the task of decoding random linear codes. The Learning with Errors problem (LWE), which was introduced for cryptographic purposes in the work of Regev [9, 0], can be seen as a generalization of LPN to codes defined over larger fields. LWE is closely related to well-studied problems in learning theory, and within the last decade it proved to be a fruitful source for many cryptographic constructions that provide new functionalities [5, 7, 6]. Although we focus in this work on random linear codes over F_2, we do not see any obstacles in transferring our techniques to larger finite fields F_q, as was done in [7].
Surely, our techniques will also lead to some improvement for arbitrary fields, but we believe that our improvements are best tailored to F_2: they are easiest to explain for the binary field, and we feel that binary codes define the most fundamental and most widely applied class of linear codes. Let us define some basics of linear codes and review the progress that decoding algorithms underwent. A random binary linear code C is a random k-dimensional subspace of F_2^n. Therefore, a code defines a mapping F_2^k -> F_2^n that maps a message m ∈ F_2^k to a codeword c ∈ F_2^n. On a noisy channel, a receiver gets an erroneous version x = c + e for some error vector e ∈ F_2^n. The decoding problem now asks for finding e, which in turn enables the reconstruction of c and m. Usually, we assume that during transmission of c not too many errors occurred, such that e has a small Hamming weight wt(e) and c is the closest codeword to x. This defines the search space of e, which is closely linked to the distance d of C. So naturally, the running time T(n, k, d) of a decoding algorithm is a function of all code parameters n, k and d. However, we know that asymptotically random linear codes reach the Gilbert-Varshamov bound k/n ≈ 1 - H(d/n), where H is the binary entropy function (see e.g. []). In the full distance decoding setting we

are looking for an error vector e s.t. wt(e) ≤ d, whereas in half distance decoding we have wt(e) ≤ ⌊(d-1)/2⌋. Thus, in both cases we can upper bound the running time by a function T(n, k) of n and k only. When we speak of worst-case running time, we maximize T(n, k) over all k, where the maximum is obtained for code rates k/n near 1/2. Usually, it suffices to compare worst-case complexities, since all known decoding algorithms with running times T_1, T_2 and worst-case complexities T_1(n) < T_2(n) satisfy T_1(n, k) < T_2(n, k) for all k. The simplest algorithm is to enumerate naively over e's search space and check whether x + e ∈ C. However, it was already noticed in 1962 by Prange [8] that the search space for e can be lowered considerably by applying simple linear algebra. Prange's algorithm consists of an enumeration step with exponential complexity and a Gaussian elimination step with only polynomial complexity. The worst-case complexity of Prange's algorithm is 2^{0.121n} in the full distance decoding case and 2^{0.0576n} with half distance decoding. In 1989, Stern [] noticed that Prange's enumeration step can be accelerated by enumerating two lists L, R within half of the search space, a typical time-memory trade-off. The lists L, R are then sorted, and one looks for a matching pair. This is a standard trick for many combinatorial search problems and is usually called a sort-and-match or meet-in-the-middle approach in the literature. Stern's algorithm, however, is unable to directly realize an approximate matching of lists, where one wants to find a pair of vectors from L × R with small Hamming distance. In Stern's algorithm this is solved by a non-optimal two-step approach, where one first matches vector pairs exactly on some portion of the coordinates, and then in a second step checks whether any of these pairs has the desired distance on all coordinates. Stern's algorithm led to a running time improvement to 2^{0.117n} (full distance) and 2^{0.0557n} (half distance), respectively.
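To make the naive enumeration concrete, here is a toy sketch in Python (our own illustration; all names and parameters are ours, and the code brute-forces a tiny code rather than implementing any of the algorithms discussed in this paper):

```python
import itertools
import random

# Toy illustration of naive decoding for a tiny random binary [n, k] code:
# enumerate error vectors e of increasing Hamming weight until x + e is a
# codeword. Exponential in n; this is the baseline the text improves upon.
random.seed(1)
n, k = 10, 4
G = [[random.randint(0, 1) for _ in range(n)] for _ in range(k)]  # generator matrix

def encode(msg):
    # codeword = msg * G over F_2 (inner product mod 2, column by column)
    return tuple(sum(m & g for m, g in zip(msg, col)) & 1 for col in zip(*G))

codewords = {encode(m) for m in itertools.product([0, 1], repeat=k)}

def naive_decode(x):
    # search e by increasing weight, so a closest codeword is found first
    for w in range(n + 1):
        for support in itertools.combinations(range(n), w):
            e = tuple(1 if i in support else 0 for i in range(n))
            c = tuple(xi ^ ei for xi, ei in zip(x, e))
            if c in codewords:
                return c, e

c = encode((1, 0, 1, 1))
x = list(c)
x[3] ^= 1                         # flip one position of the codeword
decoded, err = naive_decode(tuple(x))
```

Since the planted error has weight 1, the returned codeword lies within Hamming distance 1 of x.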
Our contribution: In this work, we propose a different type of matching algorithm for Stern's algorithm that directly recovers a pair from L × R with small Hamming distance. Fortunately, this problem is well known in different variants in many fields of computer science as the High Dimensional Nearest Neighbor Problem [6, 8, 3] or the Bichromatic Closest Pair Problem []. The best bounds that are known for the Hamming metric are due to Dubiner [6] and Valiant [4]. However, we were not able to apply Dubiner's algorithm to our decoding problem, since we have a bounded vector size, which is referred to as the limited amount of data case in [6]. Although it is stated in Dubiner's work [6] that his algorithm might also be applicable to the limited case, the analysis is only done for the unlimited amount of data case. The algorithm of Valiant [4] is only optimal in special cases (i.e. large Hamming distances) and unfortunately does not apply to our problem either. Thus we decided to give our own algorithmic solution, which gives us the required flexibility for choosing parameters that are tailored to the decoding setting. We provide a different and quite general algorithm for finding a pair of (limited or unlimited size) vectors in two lists that fulfills some consistency criterion, like e.g. in our case being close in some distance metric. The way we solve this

problem is by checking parts of the vectors locally, and thus sorting out vector pairs that violate consistency locally, since these pairs are highly unlikely to fulfill consistency globally. In our special case, where |L| = |R| and where we want to find the closest pair of vectors from F_2^m with Hamming distance γm, 0 ≤ γ ≤ 1/2, we obtain an algorithm that approaches sub-quadratic complexity |L|^{1/(1-γ)}. In our analysis in Sections 4 and 5 we propose an algorithm for bounded vector size and show that in the unlimited amount of data case (which is not important for the decoding problem) our algorithm approaches Dubiner's bound. Using this matching algorithm in the full distance decoding setting directly leads to an improved decoding algorithm for random binary linear codes with complexity 2^{0.114n}, as opposed to Stern's complexity 2^{0.117n}. In 2011, Stern's algorithm was improved by Bernstein, Lange and Peters to 2^{0.116n}. Then, using recent techniques from improved subset sum algorithms [, 3], May, Meurer, Thomae [4] and Becker, Joux, May, Meurer [4] further improved the running time to 2^{0.102n}. Fortunately, these techniques directly apply to our algorithm as well and result in a new worst-case running time as small as 2^{0.097n}. We obtain similar results in the case of half distance decoding (Theorem 3).

Fig. 1. History of Information Set Decoding: full distance decoding (FDD)

Fig. 2. History of Information Set Decoding: half distance decoding (HDD)

The paper is organized as follows. In Section 2, we elaborate a bit more on previous work and explain the basic idea of our matching approach. This leads to a sort-and-match decoding algorithm that we describe and analyze in Section 3, including the application of the improvement from Becker et al. [4]. Our decoding algorithm calls a new matching subroutine that finds a pair of

Fig. 3. Stern, Thm. 2, BJMM and Thm. 3 for BDD/FDD and all code rates k/n

vectors with minimal Hamming distance in two lists. We describe this matching procedure in Section 4.

2 Previous Work: Information Set Decoding

A binary linear code C is a k-dimensional subspace of F_2^n. Thus, C is generated by a matrix G ∈ F_2^{k×n} that defines a linear mapping F_2^k -> F_2^n. If G's entries are chosen independently and uniformly at random from F_2^{k×n} with the restriction that G has rank k, then we call C a random binary linear code. The distance d of C is defined by the minimum Hamming distance of two different codewords in C. Let H ∈ F_2^{(n-k)×n} be a basis of the kernel of C. If C is random, then it is not hard to see that the entries of H are also independently and uniformly distributed in F_2. Therefore, we have Hc^t = 0 for every c ∈ C. For simplicity, we omit from now on all transpositions of vectors and write c instead of c^t. For every erroneous codeword x = c + e with error vector e, we obtain Hx = He by linearity. We call s := Hx ∈ F_2^{n-k} the syndrome of a message x. In order to decode x, it suffices to find a low-weight vector e such that He = s. Once e is found, we can simply recover c from x. This process is called syndrome decoding, and the problem is known to be NP-hard. In the case of full distance decoding (FDD), we receive an arbitrary point x ∈ F_2^n and want to decode to the closest codeword in the Hamming metric. We want to argue that there is always a codeword within roughly Hamming

distance d. To see this, we observe that the syndrome equation He = s is solvable as long as the search space for e roughly equals 2^{n-k}. Hence, for weight-d vectors e ∈ F_2^n we obtain binom(n, d) ≈ 2^{n·H(d/n)} ≥ 2^{n-k}, where H is the binary entropy function. This implies H(d/n) ≥ 1 - k/n, a relation that is known as the Gilbert-Varshamov bound. Moreover, it is well known that random codes asymptotically reach the Gilbert-Varshamov bound []. This implies that for every x we can always expect to find a closest codeword within distance d. In the other case of half distance decoding (HDD), we obtain the promise that the error vector is within the error correction distance, i.e. wt(e) ≤ ⌊(d-1)/2⌋, which is for example the case in cryptographic settings, where an error is added artificially, e.g. by the encryption of a message. Since the decoding algorithms that we study use ω := wt(e) as an input, we run the algorithms in the range ω ∈ [0, d] or ω ∈ [0, ⌊(d-1)/2⌋], respectively. However, all our algorithms attain their maximum running time for the maximal weight ω = d, respectively ω = ⌊(d-1)/2⌋. In the following we assume that we know ω. Let us return to our syndrome decoding problem He = s. Naively, one can solve this equation by simply enumerating all weight-ω vectors e ∈ F_2^n in time Õ(binom(n, ω)).

Prange's Information Set decoding: At the beginning of the 60s, Prange showed that the use of linear algebra provides a significant speedup. Notice that we can simply reorder the positions of the error vector e by permuting the columns of H. For some column permutation π, let Q ∈ F_2^{(n-k)×(n-k)} denote the quadratic matrix at the right-hand side of π(H) = (·|Q). Assume that Q has full rank, which happens with constant probability, and define s̄ := Q^{-1}s and H̄ := Q^{-1}π(H) = (·|I) for an (n-k)×(n-k) identity matrix I. Let π(e) = e' + (0^k, e_q) with e' ∈ F_2^k × 0^{n-k} and e_q ∈ F_2^{n-k} be the permuted error vector. Assume that e' = 0^n. In this case, we call the first k (error-free) coordinates an information set. Having an information set, we can rewrite our syndrome equation as H̄π(e) = H̄e' + e_q = s̄, where wt(e') = 0 and wt(e_q) = ω.
Since e' is the zero vector, this simplifies to e_q = s̄. Thus we only have to check whether s̄ has the correct weight wt(s̄) = ω. Notice that all complexity in Prange's algorithm is moved to the initial permutation π of H's columns, whereas the remaining step has polynomial complexity. To improve upon the running time, it is reasonable to relax the restriction that the information set has no 1-entries in e', which was done in the work of Lee-Brickell [3]. Assume that the information set carries exactly p 1-positions. Then we enumerate over all binom(k, p) possible e' ∈ F_2^k × 0^{n-k} with weight p. Therefore, we can test whether wt(e_q) = wt(s̄ + H̄e') = ω - p. On the downside, this trade-off between lowering the complexity of finding a good π and enumerating weight-p vectors does not pay off: asymptotically in n, the trade-off achieves its optimum for p = 0.
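Prange's procedure in the p = 0 case described above can be sketched in a few lines of Python (our own toy implementation; the instance sizes and all names are illustrative, not cryptographic):

```python
import random

# A minimal sketch of Prange's information set decoding (p = 0): permute
# columns, bring the right block of H to the identity, and check whether the
# transformed syndrome already has the target weight.

def rref_right(H, s):
    """Row-reduce so the last (n-k) columns of H become the identity.
    Returns the transformed syndrome, or None if that block is singular."""
    H = [row[:] for row in H]
    s = s[:]
    m, n = len(H), len(H[0])
    for i in range(m):
        col = n - m + i
        piv = next((r for r in range(i, m) if H[r][col]), None)
        if piv is None:
            return None
        H[i], H[piv] = H[piv], H[i]
        s[i], s[piv] = s[piv], s[i]
        for r in range(m):
            if r != i and H[r][col]:
                H[r] = [a ^ b for a, b in zip(H[r], H[i])]
                s[r] ^= s[i]
    return s

def prange(H, s, w, iters=5000):
    """Search for e with He = s and wt(e) = w via random column permutations."""
    m, n = len(H), len(H[0])
    for _ in range(iters):
        perm = list(range(n))
        random.shuffle(perm)
        Hp = [[row[j] for j in perm] for row in H]
        sbar = rref_right(Hp, s)
        if sbar is None or sum(sbar) != w:
            continue  # permutation was not "good"; try again
        ep = [0] * (n - m) + sbar   # permuted error: zeros on the information set
        e = [0] * n
        for pos, j in enumerate(perm):
            e[j] = ep[pos]
        return e
    return None

# toy instance: plant an error of weight 2 and recover some weight-2 solution
random.seed(0)
n, k, w = 12, 6, 2
H = [[random.randint(0, 1) for _ in range(n)] for _ in range(n - k)]
planted = [0] * n
planted[1] = planted[7] = 1
s = [sum(h & x for h, x in zip(row, planted)) & 1 for row in H]
e_found = prange(H, s, w)
```

Note that any weight-w vector satisfying He = s is accepted, which mirrors the fact that the algorithm solves the syndrome decoding problem rather than recovering a specific planted error.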

On the positive side, we can improve on the simple enumeration of weight-p vectors by enumerating two lists of weight-p/2 vectors. This classical time-memory trade-off, usually called a Meet-in-the-Middle approach, allows to reduce the enumeration time from binom(k, p) to binom(k/2, p/2) by increasing the memory by the same amount. Such a Meet-in-the-Middle approach was introduced by Stern for Prange's Information Set decoding in 1989 [8]. In a nutshell, Stern's variant splits the first k columns of π(e) = e_1 + e_2 + (0^k, e_q), with e_q ∈ F_2^{n-k}, in two parts e_1 ∈ F_2^{k/2} × 0^{k/2} × 0^{n-k} and e_2 ∈ 0^{k/2} × F_2^{k/2} × 0^{n-k}. Additionally, we want a good permutation π to achieve

H̄(e_1 + e_2 + e_q) = s̄ with wt(e_1 + e_2) = p and wt(e_q) = ω - p.  (1)

Thus we have

H̄e_1 = H̄e_2 + s̄ + e_q.  (2)

Since wt(e_q) = ω - p, on all but ω - p of the n - k coordinates of the vectors we have

H̄e_1 = H̄e_2 + s̄.  (3)

Remember that Q was defined as the right-hand part of π(H). It can be shown that Q is invertible with constant probability over the choice of H and π. Thus inverting Q can be ignored in the computation of the time complexity Õ(·), which suppresses polynomial factors.

Definition 1. A permutation π is good if Q is invertible and if for π(e) there exists a solution (e_1, e_2) satisfying (1).

In Stern's algorithm, one computes for every candidate e_1 the left-hand side of Eq. (3) and stores the result in a sorted list L. Then one computes for every e_2 the right-hand side and looks whether the result is in L. But recall that the above equation only holds on all but ω - p coordinates. Thus, one cannot simply match a candidate solution to one entry in L. The solution to this problem in Stern's algorithm is to introduce another parameter l and to test whether there is an exact match on l out of all n - k coordinates. For those pairs (e_1, e_2) whose results match on l coordinates, one checks whether they in total match on all but ω - p coordinates. However, this two-step approach for approximately matching similar vectors introduces another probability that enters the running time, namely the probability that both vectors match on the chosen l coordinates.
Clearly, one would like to have an algorithm that directly addresses the approximate matching problem given by identity (3). Altogether, Stern's algorithm led to the first asymptotic improvement since Prange's algorithm. (Throughout the paper we ignore any rounding issues.)
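The two-step matching can be sketched generically as follows (our own toy version in Python; the names l and delta follow the text, everything else is ours):

```python
import random
from collections import defaultdict

# Stern-style two-step approximate matching: bucket on the first l
# coordinates (exact match), then check the full Hamming distance.
# A pair is found only if it happens to agree on the chosen l coordinates,
# which is exactly the probability loss discussed above.

def stern_match(L, R, l, delta):
    buckets = defaultdict(list)
    for u in L:
        buckets[u[:l]].append(u)
    out = []
    for v in R:
        for u in buckets.get(v[:l], []):
            if sum(a != b for a, b in zip(u, v)) == delta:
                out.append((u, v))
    return out

# plant a pair at distance 3 whose differing positions avoid the first l coords
random.seed(2)
m, l = 20, 6
u = tuple(random.randint(0, 1) for _ in range(m))
v = list(u)
for i in (8, 12, 15):
    v[i] ^= 1
v = tuple(v)
L = [u] + [tuple(random.randint(0, 1) for _ in range(m)) for _ in range(50)]
R = [v] + [tuple(random.randint(0, 1) for _ in range(m)) for _ in range(50)]
pairs = stern_match(L, R, l, 3)
```

Had the three flipped positions included one of the first l coordinates, the planted pair would have been missed by this run, illustrating why the chosen-l agreement probability enters the running time.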

3 Our Decoding Algorithm

3.1 Application to Stern's algorithm

Our main observation is that the matching in Stern's algorithm can be done in a smarter way by an algorithm for approximately matching vectors. We propose such an algorithm in Section 4. Our algorithm NearestNeighbor works on two lists L, R with uniformly distributed and pairwise independent entries from F_2^m, where one guarantees the existence of a pair from L × R that has small Hamming distance γm, 0 ≤ γ ≤ 1/2. On lists of equal size |L| = |R| we achieve a running time that approaches Õ(|L|^{1/(1-γ)}). Notice that our running time is sub-quadratic for any γ < 1/2. The smaller γ, the better our algorithm performs. In the case γ = 0, we achieve linear running time (up to logarithmic factors), which coincides with the simple sort-and-match routine. Our new matching algorithm immediately gives us an improved complexity for decoding random binary linear codes. First we proceed as in Stern's algorithm by enumerating all weight-p/2 candidates for e_1 ∈ F_2^{k/2} × 0^{k/2} × 0^{n-k}. We compute the left-hand side H̄e_1 of Eq. (3) and store the result in a list L. For the right-hand side we proceed similarly and store H̄e_2 + s̄ in a list R. Notice that by construction each pair (e_1, e_2) of enumerated error vectors yields an error vector e_1 + e_2 + (0^k, e_q) with e_q = H̄(e_1 + e_2) + s̄ such that identity (2) holds. Moreover, by construction we have wt(e_1 + e_2) = p. Thus all that remains is to find among all tuples (H̄e_1, H̄e_2 + s̄) ∈ L × R one with small Hamming distance ω - p. Our complete algorithm Decode is described in Algorithm 1. Before we analyze correctness and running time of algorithm Decode, we want to explain the idea of the subroutine NearestNeighbor.

Basic Idea of NearestNeighbor: Let m be the length of the vectors in L and R, and define a constant λ such that |L| = |R| = 2^{λm}. Assume the permutation π is good, and hence e_1, e_2 exist such that wt(e_1 + e_2) = p and wt(H̄e_1 + H̄e_2 + s̄) = ω - p =: γm for some 0 < γ < 1/2. Let u := H̄e_1 and v := H̄e_2 + s̄. Our algorithm NearestNeighbor gets the input L, R, γ and outputs a list C, where (u, v) ∈ C with overwhelming probability.
Thus our algorithm solves the following problem.

Definition 2 (NN problem). Let m ∈ N, 0 < γ < 1/2 and 0 < λ < 1. In the (m, γ, λ)-Nearest Neighbor (NN) problem, we are given γ and two lists L, R ⊂ F_2^m of equal size 2^{λm} with uniform and pairwise independent vectors. If there exists a pair (u, v) ∈ L × R with Hamming distance Δ(u, v) = γm, we have to output a list C that contains (u, v).

Notice that in Definition 2 the lists L, R themselves do not have to be independent; e.g. L = R is allowed. A naive algorithm solves the NN problem by simply computing the Hamming distance of each (u, v) ∈ L × R in quadratic time Õ(2^{2λm}). In the following we describe a sub-quadratic algorithm for the NN problem.
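As a reference point, the naive quadratic algorithm just mentioned is immediate to write down (illustrative Python; all names are ours):

```python
import itertools
import random

# Naive NN algorithm: scan all of L x R and keep pairs at the target
# Hamming distance. Time Õ(2^{2*lambda*m}), i.e. quadratic in the list size.

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def nn_naive(L, R, dist):
    return [(u, v) for u, v in itertools.product(L, R) if hamming(u, v) == dist]

random.seed(3)
m = 16
u = tuple(random.randint(0, 1) for _ in range(m))
v = tuple(b ^ 1 if i < 4 else b for i, b in enumerate(u))  # distance exactly 4
L = [u] + [tuple(random.randint(0, 1) for _ in range(m)) for _ in range(30)]
R = [v] + [tuple(random.randint(0, 1) for _ in range(m)) for _ in range(30)]
found = nn_naive(L, R, 4)
```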

Algorithm 1 Decode
1: procedure Decode
2:   Input: n, k, H ∈ F_2^{(n-k)×n}, x ∈ F_2^n
3:   Output: e ∈ F_2^n with He = Hx and wt(e) ≤ d (FDD), wt(e) ≤ ⌊(d-1)/2⌋ (HDD)
4:   s <- Hx    ▷ compute the syndrome
5:   d <- n·H^{-1}(1 - k/n)    ▷ H: bin. entropy function; the inverse H^{-1} maps to [0, 1/2]
6:   for ω <- 0 ... d do    ▷ FDD, or ω <- 0 ... ⌊(d-1)/2⌋ in the HDD case
7:     choose 0 < p < ω    ▷ we find an optimal choice of p numerically
8:     repeat poly(n)/P[π is good] many times
9:       π <- random permutation on F_2^n
10:      (·|Q) <- π(H)    ▷ permute columns, with Q ∈ F_2^{(n-k)×(n-k)}
11:      choose another permutation (goto line 9) if Q is not invertible
12:      H̄ <- Q^{-1}π(H) and s̄ <- Q^{-1}s
13:      L <- (H̄e_1 for all e_1 ∈ F_2^{k/2} × 0^{k/2} × 0^{n-k} with wt(e_1) = p/2)
14:      R <- (H̄e_2 + s̄ for all e_2 ∈ 0^{k/2} × F_2^{k/2} × 0^{n-k} with wt(e_2) = p/2)
15:      C <- NearestNeighbor(L, R, (ω - p)/(n - k))
16:      if ∃ (u, v) ∈ C ∩ (L × R) with Hamming distance Δ(u, v) = ω - p then
17:        find e_1, e_2 s.t. u = H̄e_1 and v = H̄e_2 + s̄    ▷ binary search in L, R
18:        return π^{-1}(e_1 + e_2 + (0^k, u + v))
19:      end if
20:    until
21:  end for
22: end procedure

Given an initial list pair L, R, our main idea is to create exponentially many pairs of sublists (L', R'). Each sublist is computed by first choosing a random partition A ⊂ [m] of the columns, of size m/2. We keep only those elements of L, R that have a certain relative Hamming weight h on the columns defined by A, for some 0 < h < 1/2 that only depends on λ. The parameter h will be chosen s.t. each of the L', R' has, in expectation, polynomially (in m) many elements. We create so many sublists (L', R') that with overwhelming probability there exists a pair of sublists with (u, v) ∈ L' × R'. For each sublist pair (L', R') we check naively for a possible good vector pair by computing the Hamming distance Δ(u, v) for all (u, v) ∈ L' × R'. Notice that this results only in a polynomial blow-up, because the list sizes are polynomial. We store all vector pairs (u, v) with the correct Hamming distance in the output list C. The idea of the algorithm is summarized in Fig. 4. We will discuss the algorithm NearestNeighbor in more detail in Section 4. The following theorem, which we prove in Section 5, states its correctness and time complexity.

Theorem 1.
For any constant ε > 0 and any λ < 1 - H(γ/2), NearestNeighbor solves the (m, γ, λ)-NN problem with overwhelming probability (over both the coins of the algorithm and the random choice of the input) in time Õ(2^{(y+ε)m}) with

y := (1 - γ)·(1 - H((H^{-1}(1 - λ) - γ/2)/(1 - γ))).
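The sublist idea behind NearestNeighbor can be sketched as follows (our own simplified toy version; the weight threshold and the repetition count are illustrative choices, not the optimized values from Theorem 1):

```python
import random

# Repeatedly pick a random half A of the coordinates, keep only vectors whose
# weight on A hits a target, and scan the small surviving sublists naively.

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def nn_sublists(L, R, dist, weight_on_A, repeats=5000, seed=0):
    rng = random.Random(seed)
    m = len(L[0])
    for _ in range(repeats):
        A = rng.sample(range(m), m // 2)   # random half of the columns
        Lp = [u for u in L if sum(u[i] for i in A) == weight_on_A]
        Rp = [v for v in R if sum(v[i] for i in A) == weight_on_A]
        for u in Lp:                        # naive scan of the small sublists
            for v in Rp:
                if hamming(u, v) == dist:
                    return u, v
    return None

# toy instance: balanced vectors of length 16, planted pair at distance 2
rng = random.Random(1)
m = 16
u = tuple(i % 2 for i in range(m))          # weight m/2
v = list(u)
v[0] ^= 1                                   # one 0 -> 1 flip
v[1] ^= 1                                   # one 1 -> 0 flip, weight preserved
v = tuple(v)
L = [u] + [tuple(rng.randint(0, 1) for _ in range(m)) for _ in range(40)]
R = [v] + [tuple(rng.randint(0, 1) for _ in range(m)) for _ in range(40)]
res = nn_sublists(L, R, dist=2, weight_on_A=m // 4)
```

Each repetition succeeds only if both planted vectors happen to have the target weight on the sampled half A, which is why exponentially many sublist pairs are needed in general.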

Fig. 4. Main idea of our algorithm NearestNeighbor: from L, R of size 2^{λm}, create exponentially many sublists (L', R') by choosing random partitions A; for at least one sublist pair we have (u, v) ∈ L' × R' w.o.p.

In Definition 2, we defined our list sizes |L| = |R| = 2^{λm} to be exponential in m, which is the cryptographically relevant scenario. A naive solution of the NN problem yields an exponent of 2λ, so we are interested in quotients y/λ < 2. In the following corollary we achieve a complexity of Õ(|L|^{1/(1-γ)}) in the case of polynomial list sizes |L| = |R|. This is the best-case scenario for our algorithm, which in the literature is often referred to as the unlimited amount of data case. Notice that the quotient y/λ is strictly increasing in λ until we reach the prerequisite bound λ = 1 - H(γ/2), beyond which our algorithm no longer works. Finding a better dependency on λ for the NN problem would immediately result in further improvements of the decoding bounds from Theorems 2 and 3.

Corollary 1. In the case of a list size |L| = |R| that is polynomial in m, we obtain a complexity exponent lim_{λ->0} y/λ = 1/(1-γ), i.e. our complexity is Õ(|L|^{1/(1-γ)}).

Proof. Notice that we defined the inverse of the binary entropy function as H^{-1}: [0,1] -> [0, 1/2], so that H^{-1}(1) = 1/2. The derivative of the binary entropy function is H'(x) = log_2((1-x)/x), and the derivative of its inverse is (H^{-1})'(λ) = 1/H'(H^{-1}(λ)). Write z(λ) := (H^{-1}(1-λ) - γ/2)/(1-γ), so that y = (1-γ)(1 - H(z(λ))). Since z(0) = 1/2 and H(1/2) = 1, both y and λ tend to 0 as λ -> 0, and we may apply L'Hospital's rule:

lim_{λ->0} y/λ = lim_{λ->0} (1-γ)·(-H'(z(λ)))·z'(λ) = lim_{λ->0} H'(z(λ))·(H^{-1})'(1-λ) = lim_{λ->0} H'(z(λ)) / H'(H^{-1}(1-λ)),

where we used z'(λ) = -(H^{-1})'(1-λ)/(1-γ). As λ -> 0, both z(λ) and H^{-1}(1-λ) tend to 1/2, so numerator and denominator again tend to 0, and a second application of L'Hospital's rule gives

lim_{λ->0} y/λ = lim_{λ->0} (H''(z(λ))·z'(λ)) / (H''(H^{-1}(1-λ))·(-(H^{-1})'(1-λ))) = lim_{λ->0} (H''(z(λ)) / H''(H^{-1}(1-λ))) · 1/(1-γ) = 1/(1-γ),

since H''(x) = -1/(ln 2 · x(1-x)) is continuous and nonzero at x = 1/2.
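The limit in Corollary 1 is easy to check numerically (our own sanity check, not part of the paper's proof; H is inverted on [0, 1/2] by bisection):

```python
from math import log2

# Numerical check that y(lambda)/lambda -> 1/(1-gamma) as lambda -> 0.

def H(x):
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def Hinv(t):
    lo, hi = 0.0, 0.5
    for _ in range(80):
        mid = (lo + hi) / 2
        if H(mid) < t:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def y(lam, gamma):
    return (1 - gamma) * (1 - H((Hinv(1 - lam) - gamma / 2) / (1 - gamma)))

gamma = 0.25
ratios = [y(lam, gamma) / lam for lam in (0.1, 0.01, 0.0001)]
# ratios decreases toward 1/(1 - gamma) = 4/3 as lambda shrinks
```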

In the following theorem we show that Decode is correct and prove its time complexity.

Theorem 2 (complexity and correctness). Decode solves the decoding problem with overwhelming probability in time O(2^{0.114n}) in the full and time O(2^{0.055n}) in the half distance decoding setting.

Proof. Let us define

γ := (ω - p)/(n - k),
r := H(ω/n) - (k/n)·H(p/k) - (1 - k/n)·H(γ),
μ := (k/(2n))·H(p/k), and
y := (1-γ)·(1 - H((H^{-1}(1 - μn/(n-k)) - γ/2)/(1-γ))).

We want to show that for any ε > 0, Decode solves the decoding problem with overwhelming probability in time

Õ(2^{rn} · (2^{μn} + 2^{(y+ε)(n-k)})).  (4)

In line 8 of Decode, we repeat poly(n)/P[π is good] times. A permutation π is good whenever p/2 ones are in the first k/2 columns, p/2 in the subsequent k/2 columns, and ω - p ones in the last n - k columns. Additionally, Q as defined in line 10 has to be invertible, which happens with constant probability. Therefore, the number of repetitions until we find a good π is

poly(n) · binom(n, ω) / (binom(k/2, p/2)^2 · binom(n-k, ω-p)) = Õ(2^{n·H(ω/n) - k·H(p/k) - (n-k)·H(γ)}) = Õ(2^{rn}).

We fix a repetition that leads to a good permutation π. In this repetition, we therefore have e_1, e_2 with wt(e_1 + e_2) = p and Δ(u, v) = ω - p =: γ(n-k), where u := H̄e_1 and v := H̄e_2 + s̄. In lines 13 and 14, the algorithm creates two lists of uniform and pairwise independent vectors of size n - k such that by construction u ∈ L and v ∈ R, and

|L| = |R| = binom(k/2, p/2) = Õ(2^{(k/2)·H(p/k)}) = Õ(2^{μn}).

Notice that μn = λ(n-k) controls the list size. By a suitably small choice of p one can always satisfy the prerequisite λ < 1 - H(γ/2) of Theorem 1. Thus we obtain an instance (L, R, γ) of the (n-k, γ, μn/(n-k))-NN problem, for which Theorem 1 guarantees to output a list C that contains the solution (u, v) with overwhelming probability. Notice that any vector pair (u, v) ∈ C ∩ (L × R) with a Hamming distance of ω - p solves our problem, because the corresponding e_1 with H̄e_1 = u and e_2 with H̄e_2 + s̄ = v have the property wt(e_1 + e_2) = p. This in turn leads to wt(e_1 + e_2 + (0^k, u + v)) = ω. In line 17, the e_1, e_2 can be

found by a binary search in slightly modified lists L, R that also store e_1 and e_2. Thus, with overwhelming probability, the algorithm outputs a solution to the decoding problem in line 18. Also by Theorem 1, an application of algorithm NearestNeighbor has time complexity Õ(2^{(y+ε)(n-k)}) for any ε > 0. Notice that this complexity is independent of whether we have a good permutation π or not. This complexity has to be added to the creation time of the input lists L, R. Recall that the loop has to be repeated Õ(2^{rn}) times, leading to the runtime of Eq. (4). Numerical optimization in the half distance decoding case yields time complexity O(2^{0.055n}) at the worst-case rate k/n, with the optimal p/n determined numerically. In the full distance decoding case we get a runtime of O(2^{0.114n}) at the worst-case rate.

3.2 Application to the BJMM algorithm

It is possible to apply our idea to the decoding algorithm of Becker, Joux, May and Meurer (BJMM) [4]. We already explained that BJMM is a variant of Stern's algorithm and thus a Meet-in-the-Middle algorithm that constructs two lists L, R. The major difference is that L, R are not directly enumerated as the lists L, R in Stern's algorithm. Instead, L, R are constructed in a more involved tree-based manner. This has the benefit that the list length is significantly smaller than in Stern's construction, which in turn leads to an improved running time. This similarity enables us to directly apply our technique to the BJMM algorithm: we simply have to replace in Decode the construction of L, R by the BJMM construction of L, R, on which we apply our NearestNeighbor algorithm. Notice that, as opposed to Section 3.1, not all possible vector pairs in C with the correct Hamming distance solve the decoding problem. The issue is that the corresponding e_1, e_2 do not necessarily have a Hamming distance of p. Thus, additionally to Δ(u, v) = ω - p, we have to verify that also Δ(e_1, e_2) = p holds. Algorithm 2 (DecodeBJMM) describes the application of our algorithm in the BJMM framework. Notice that up to line 12 the algorithm is identical to the one in [4].
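The exponent expressions from the proof of Theorem 2 are straightforward to evaluate numerically. The following sketch (our own check; the parameter point k/n = 0.5, p/n = 0.02 is a rough hand-picked choice, not the paper's optimum) evaluates r + max(μ, y·(n-k)/n) in the full distance setting:

```python
from math import log2

# Evaluate the FDD runtime exponent of Decode at one (non-optimized)
# parameter point, using the definitions from the proof of Theorem 2.

def H(x):
    return 0.0 if x <= 0.0 or x >= 1.0 else -x * log2(x) - (1 - x) * log2(1 - x)

def Hinv(t):
    lo, hi = 0.0, 0.5
    for _ in range(80):
        mid = (lo + hi) / 2
        if H(mid) < t:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def exponent_fdd(k, p):
    """k = k/n, p = p/n; full distance decoding, so w/n = H^{-1}(1 - k)."""
    w = Hinv(1 - k)
    gamma = (w - p) / (1 - k)
    r = H(w) - k * H(p / k) - (1 - k) * H(gamma)
    mu = (k / 2) * H(p / k)
    lam = mu / (1 - k)
    assert lam < 1 - H(gamma / 2)      # prerequisite of Theorem 1
    y = (1 - gamma) * (1 - H((Hinv(1 - lam) - gamma / 2) / (1 - gamma)))
    return r + max(mu, y * (1 - k))

# already below Prange's 2^{0.121n} even without optimizing k/n and p/n
e = exponent_fdd(0.5, 0.02)
```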
The only difference is the final step, from line 13 on. Instead of using the exact matching of Stern, we use our NearestNeighbor algorithm, which searches for two vectors that are close (i.e. have Hamming distance ω - p) on the remaining n - k - l_2 coordinates. Algorithm DecodeBJMM uses a subroutine BaseLists. As described in [4], BaseLists chooses a random partition of the first k + l_2 columns into two sets P_1, P_2 ⊂ [k + l_2] of equal size. Then a list B_1 is created that contains all vectors b_1 ∈ F_2^{k+l_2} × 0^{n-k-l_2} with wt(b_1) = p/8 + ε_1/4 + ε_2/2 that are zero on the coordinates from P_2. Analogously, a list B_2 is built that contains all vectors b_2 ∈ F_2^{k+l_2} × 0^{n-k-l_2} of the same Hamming weight as above, but with zeros on the coordinates from P_1. Thus, with inverse polynomial probability, a fixed vector of size k + l_2 with Hamming weight p/4 + ε_1/2 + ε_2 can be represented as a sum of an element of B_1 and an element of B_2. Repeating this choice a polynomial number of times guarantees the representation with overwhelming probability.
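The representation counts that drive this construction can be checked directly on toy parameters (our own illustration; the brute-force loop verifies the counting argument on an 8-coordinate example):

```python
import itertools
from math import comb, log2

# Counting representations: a weight-p vector on k+l coordinates splits as a
# sum of two weight-(p/2 + eps1) vectors in comb(p, p/2) * comb(k+l-p, eps1)
# ways (ones split as 1+0 or 0+1, eps1 zero positions split as 1+1).

def l2_bits(p, k_plus_l, eps1):
    return log2(comb(p, p // 2) * comb(k_plus_l - p, eps1))

def l1_bits(p, k_plus_l, eps1, eps2):
    w = p // 2 + eps1   # weight one level down the tree
    return log2(comb(w, w // 2) * comb(k_plus_l - w, eps2))

# brute-force check on a toy case: target t of weight 4 on 8 coordinates,
# summands of weight 4/2 + 1 = 3
t = (1, 1, 1, 1, 0, 0, 0, 0)
count = 0
for supp in itertools.combinations(range(8), 3):
    a = [1 if i in supp else 0 for i in range(8)]
    b = [x ^ y for x, y in zip(a, t)]
    if sum(b) == 3:
        count += 1
# count should equal comb(4, 2) * comb(4, 1) = 24
```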

Algorithm 2 DecodeBJMM
1: procedure DecodeBJMM
2:   Input: n, k, H ∈ F_2^{(n-k)×n}, x ∈ F_2^n
3:   Output: e ∈ F_2^n with He = Hx and wt(e) ≤ d (FDD), wt(e) ≤ ⌊(d-1)/2⌋ (HDD)
4:   s <- Hx
5:   d <- n·H^{-1}(1 - k/n)    ▷ H: bin. entropy function; the inverse H^{-1} maps to [0, 1/2]
6:   for ω <- 0 ... d do    ▷ FDD, or ω <- 0 ... ⌊(d-1)/2⌋ in the HDD case
7:     choose 0 < p, ε_1, ε_2 < ω and 0 < l_1 < l_2 < n - k    ▷ optimize numerically
8:     repeat poly(n)/P[π is good] many times
9:       π <- random permutation on F_2^n
10:      (·|Q) <- π(H)    ▷ permute columns, with Q ∈ F_2^{(n-k)×(n-k)}
11:      choose another permutation (goto line 9) if Q is not invertible
12:      H̄ <- Q^{-1}π(H) and s̄ <- Q^{-1}s
13:      choose t_L ∈ F_2^{l_2} and t_{L0}, t_{L1}, t_{R0} ∈ F_2^{l_1} uniformly at random
14:      t_R <- [s̄]^{l_2} + t_L    ▷ [·]^c restricts to the first c coordinates
15:      t_{R1} <- [s̄]^{l_1} + t_{L0} + t_{L1} + t_{R0}    ▷ [·]_c restricts to the last c coordinates
16:      L_0 <- BaseLists(H̄, p, ε_1, ε_2, t_{L0})    ▷ each BaseLists call returns a list of
17:      L_1 <- BaseLists(H̄, p, ε_1, ε_2, t_{L1})    ▷ b ∈ F_2^{k+l_2} × 0^{n-k-l_2} with
18:      R_0 <- BaseLists(H̄, p, ε_1, ε_2, t_{R0})    ▷ wt(b) = p/4 + ε_1/2 + ε_2
19:      R_1 <- BaseLists(H̄, p, ε_1, ε_2, t_{R1})    ▷ s.t. [H̄b]^{l_1} equals the target
20:      L <- ([H̄(x + y)]_{n-k-l_2} for all x ∈ L_0, y ∈ L_1 with [H̄(x + y)]^{l_2} = t_L)
21:      R <- ([H̄(x + y) + s̄]_{n-k-l_2} for all x ∈ R_0, y ∈ R_1 with [H̄(x + y)]^{l_2} = t_R)
22:      in lines 20, 21: only keep elements with wt(x + y) = p/2 + ε_1
23:      C <- NearestNeighbor(L, R, (ω - p)/(n - k - l_2))
24:      for all (u, v) ∈ C ∩ (L × R) with distance Δ(u, v) = ω - p do
25:        find e_1, e_2 s.t. u = [H̄e_1]_{n-k-l_2} and v = [H̄e_2 + s̄]_{n-k-l_2}
26:        if wt(e_1 + e_2) = p then
27:          return π^{-1}(e_1 + e_2 + (0^{k+l_2}, u + v))
28:        end if
29:      end for
30:    until
31:  end for
32: end procedure

BaseLists continues by computing the corresponding values H̄b for each element b ∈ B_1, stores these elements and sorts the list by these values. Eventually, an output list of vectors b ∈ F_2^{k+l_2} × 0^{n-k-l_2} with weight p/4 + ε_1/2 + ε_2 is computed by a standard meet-in-the-middle technique s.t. H̄b equals the input target value t on the first l_1 coordinates. The same technique is used in lines 20 and 21. In the computation of L, for each pair of vectors (x, y) ∈ L_0 × L_1 a list of sums x + y is obtained such that H̄(x + y) matches a uniformly chosen target value t_L on the first l_2 coordinates. After this step we also keep only those elements that have a certain Hamming weight of p/2 + ε_1, since by [4] the target solution splits into two vectors of this particular weight. The computation tree is illustrated in Figure 5.

Fig. 5. Computation tree of DecodeBJMM. Base level: weight p/4 + ε_1/2 + ε_2; construction of the base lists: Õ(2^{τn}); final size of the base lists: Õ(2^{2τn - l_1}). Top level: weight p/2 + ε_1; computation time for L, R: Õ(2^{4τn - l_1 - l_2}); final size of L, R: Õ(2^{μn}); representations: 2^{l_2} = Θ(binom(p, p/2)·binom(k + l_2 - p, ε_1)) and 2^{l_1} = Θ(binom(p/2 + ε_1, p/4 + ε_1/2)·binom(k + l_2 - p/2 - ε_1, ε_2)).

Theorem 3. DecodeBJMM solves the decoding problem with overwhelming probability in time O(2^{0.097n}) in the full distance decoding setting and time O(2^{0.0473n}) in the half distance decoding setting.

Proof. Let us define

γ := (ω - p)/(n - k - l_2),
r := H(ω/n) - ((k + l_2)/n)·H(p/(k + l_2)) - ((n - k - l_2)/n)·H(γ),
τ := ((k + l_2)/(2n))·H((p/4 + ε_1/2 + ε_2)/(k + l_2)),
μ := ((k + l_2)/n)·H((p/2 + ε_1)/(k + l_2)) - l_2/n, and
y := (1-γ)·(1 - H((H^{-1}(1 - μn/(n - k - l_2)) - γ/2)/(1-γ))).

We want to show that for any ε > 0 the decoding problem can be solved with overwhelming probability in time

Õ(2^{rn} · (2^{τn} + 2^{2τn - l_1} + 2^{4τn - l_1 - l_2} + 2^{μn} + 2^{(y+ε)(n-k-l_2)})).

The correctness and time complexity of the first part of the algorithm (i.e. the computation of L and R up to line 21) was already shown in [4]. Let us summarize the time complexity of this computation. First of all we have a loop that guarantees a good distribution with overwhelming probability. In our case, a good splitting would be p ones on the first k + l_2 coordinates and ω - p ones on the remaining n - k - l_2 coordinates. Thus the necessary number of repetitions is

Õ(binom(n, ω) / (binom(k + l_2, p) · binom(n - k - l_2, ω - p))) = Õ(2^{rn}).

The algorithm of BJMM [4] makes use of the so-called representation technique introduced by Joux and Howgrave-Graham []. The main idea is to blow up the search space such that each error vector can be represented as a sum of two vectors in many different ways. In the algorithm, all but one of these representations are filtered out using some restriction. Inside the loop, we therefore first need to make sure that 2^{l_2} is the number of representations on the top level, because we restrict to l_2 binary coordinates. On the top level we split the F_2^{k+l_2} vector with p ones into a sum of two vectors in F_2^{k+l_2} with p/2 + ε_1 ones each. Thus there are binom(p, p/2) ways to represent the ones as 1 + 0 or 0 + 1, and binom(k + l_2 - p, ε_1) ways to represent the zeros as 1 + 1 or 0 + 0. Hence we need to choose l_2 such that 2^{l_2} = Θ(binom(p, p/2) · binom(k + l_2 - p, ε_1)). On the bottom level, vectors with p/2 + ε_1 ones are represented as sums of two vectors with p/4 + ε_1/2 + ε_2 ones each. In this case there are binom(p/2 + ε_1, p/4 + ε_1/2) ways to represent the ones and binom(k + l_2 - p/2 - ε_1, ε_2) ways to represent the zeros. Thus we choose 2^{l_1} = Θ(binom(p/2 + ε_1, p/4 + ε_1/2) · binom(k + l_2 - p/2 - ε_1, ε_2)). The computation starts by creating the four base lists. In the first step two lists with vectors of size (k + l_2)/2 and p/8 + ε_1/4 + ε_2/2 ones are created, which takes time Õ(binom((k + l_2)/2, p/8 + ε_1/4 + ε_2/2)) = Õ(2^{τn}). These two lists are merged, considering each pair of one vector of the first list and one vector of the second list such that the first l_1 coordinates of the sum are a fixed value (i.e. restricting to one special representation).
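The merge used on each level of the tree can be sketched generically (our own toy version in Python; the random matrix A stands in for H̄, and all sizes are illustrative):

```python
import random
from collections import defaultdict

# The join used on each level of the tree: collect all sums x + y with
# [A(x + y)]^l equal to a target t, by bucketing on [Ax]^l and looking up
# [Ay]^l + t, instead of scanning all pairs.

random.seed(5)
n, rows, l = 12, 8, 3
A = [[random.randint(0, 1) for _ in range(n)] for _ in range(rows)]

def apply_mat(M, x):
    return tuple(sum(a & b for a, b in zip(row, x)) & 1 for row in M)

def join(L0, L1, t):
    buckets = defaultdict(list)
    for x in L0:
        buckets[apply_mat(A, x)[:l]].append(x)
    out = []
    for y in L1:
        # linearity: [A(x+y)]^l = t  <=>  [Ax]^l = [Ay]^l + t
        need = tuple(a ^ b for a, b in zip(apply_mat(A, y)[:l], t))
        for x in buckets.get(need, []):
            out.append(tuple(a ^ b for a, b in zip(x, y)))
    return out

L0 = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(30)]
L1 = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(30)]
t = (0, 0, 0)
merged = join(L0, L1, t)
```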
The number of elements in the base lists is therefore Õ(2^{2τn}/2^{l₁}) = Õ(2^{2τn−l₁}). In lines 10 and 11 the top-level lists L and R are computed from the base lists. In this step, the vectors are restricted on additional l₂ − l₁ coordinates, resulting in a total restriction on l₂ coordinates. Therefore, the time complexity is Õ((2^{2τn−l₁})² / 2^{l₂−l₁}) = Õ(2^{4τn−l₁−l₂}). In line 12 the algorithm restricts the lists L and R to those vectors with p/2 + ε₁ ones. Due to the fact that the vectors are restricted on fixed l₂ coordinates, the number of elements can be upper bounded by Õ(\binom{k+l}{p/2+ε₁} / 2^{l₂}) = Õ(2^{μn}). In the final step we have two lists of uniform (because the elements are a linear combination of the columns of H) and pairwise independent (because each element is computed by a linear combination of pairwise different columns) vectors in F_2^{n−k−l}. From the analysis in [4] we know that there are e₁, e₂ ∈ F_2^{k+l} × 0^{n−k−l} with wt(e₁ + e₂) = p such that u = [He₁]^{n−k−l} ∈ L and v = [He₂ + s]^{n−k−l} ∈ R (where [·]^{n−k−l} denotes the projection onto the last n − k − l coordinates), with wt(u + v) = ω − p. Therefore, by Theorem 1, NearestNeighbor outputs a list C that contains (u, v). The (e₁, e₂) can be found in line 15 by binary searching in slightly modified L, R that also contain x + y.

Thus π(e₁ + e₂ + (0^{k+l} ‖ (u + v))) with weight ω is a correct solution to the problem, because

H(e₁ + e₂ + (0^{k+l} ‖ (u + v))) = H(e₁ + e₂) + (0^{l} ‖ (u + v))
                                = H(e₁ + e₂) + (0^{l} ‖ [H(e₁ + e₂) + s]^{n−k−l})
                                = H(e₁ + e₂) + H(e₁ + e₂) + s = s.

Notice that [H(e₁ + e₂) + s]^{l} = 0^{l} holds by construction. By Theorem 1 the time complexity of NearestNeighbor is Õ(2^{(y+ε)(n−k−l)}). In the half distance decoding case, for the worst case k/n ≈ 0.45 we get a time complexity of O(2^{0.0473n}) with numerically optimized parameters p/n, ε₁/n and ε₂/n. In the full distance decoding setting, we have a runtime of O(2^{0.097n}) with k/n ≈ 0.4 and numerically optimized parameters p/n, ε₁/n and ε₂/n.

4 Solving the Nearest Neighbor Problem

In this section we will describe NearestNeighbor, an algorithm that solves the (m, γ, λ) Nearest Neighbor problem defined earlier. We will prove correctness and time complexity of our algorithm in the subsequent section. As already outlined in the previous section, given the input lists L and R with |L| = |R| = 2^{λm}, our idea is to create exponentially many sublists L', R' that are of expected polynomial size. The sublists are chosen such that with overwhelming probability our unknown solution (u, v) ∈ L × R with Δ(u, v) = γm is contained in at least one of these sublists. The sublists L' (resp. R') are defined as all elements of L (resp. R) that have a Hamming weight of h·m/2 on the columns defined by a random partition A ⊂ [m] of size m/2. In Lemma 3, we will prove that h := H^{−1}(1 − λ) with 0 ≤ h ≤ 1/2 is a suitable choice, because it leads to sublists of expected polynomial size. We will prove in Lemma 2 that the required number of sublists such that (u, v) is contained in one of these sublists is Õ(2^{ym}) with

y := (1 − γ)(1 − H((H^{−1}(1 − λ) − γ/2)/(1 − γ))).    (5)

We will make use of the fact that y > λ for any constants 0 < λ < 1, 0 < γ < 1/2, which can be verified numerically. There is still one problem to solve, because we never discussed how to compute the (L', R') given (L, R). A naive way to do so would be to traverse the original lists linearly and to check the weight condition. Unfortunately, this would have to be done for each sampled A, which would result in an overall complexity of Õ(2^{(y+λ)m}). Instead, as illustrated in Fig.
6, we do not proceed with the whole m coordinates at once, but first start with a strip {1, ..., α₁m} of the left hand side columns and filter only on that strip. The resulting list pairs (L₁, R₁) are still of

[Fig. 6. Step-by-step computation of NearestNeighbor: from the input lists L and R (size 2^{λm}), sample random partitions A of the first α₁m columns and keep the vectors of weight h·α₁m/2 on the A-columns, creating Õ(2^{yα₁m}) sublist pairs (L₁, R₁) of size O(2^{λ(1−α₁)m}); from each of these, sample random partitions Â of the next α₂m columns to create Õ(2^{yα₂m}) further sublist pairs (L₂, R₂), and solve recursively or finally naively.]

exponential, but smaller, size. In the second step, we proceed on the subsequent α₂m columns {α₁m + 1, ..., α₁m + α₂m} to generate sublists L₂, R₂ that contain all elements that in addition have a small Hamming weight on the second strip. The advantage of this technique is that we are able to use the smaller lists L₁, R₁ to construct L₂, R₂, again by traversing these lists, instead of using L, R for all the m columns. As illustrated in Fig. 7, we grow a search tree of constant depth t, where the leaves are pairs of lists (L_t, R_t), which were filtered on (α₁ + ... + α_t)m coordinates. We choose α₁ + ... + α_t = 1 to cover all coordinates. The choice of the α_j will be given in Theorem 1 and is basically done in a way to balance the costs in each level of the search tree, which allows us to solve the problem in time Õ(2^{(y+ε)m}) for any constant ε > 0. Notice that we never actually compute the Hamming distance between elements from the list pairs, except for the very last step, where we obtain list pairs (L_t, R_t) of small size Õ(2^{εm/2}), and compute the Hamming distance of all pairs by a naive quadratic algorithm, which has time complexity Õ(2^{εm}). Because we proceed analogously for any of the Õ(2^{ym}) sublists, we obtain an overall time complexity of Õ(2^{(y+ε)m}).

[Fig. 7. Example: computation tree of constant depth t with one good path (solid) among the sampled sublist pairs; all other paths are dashed.]

In total, our algorithm heavily relies on the observation that the property of the pair (u, v) with small Hamming distance Δ(u, v) = γm also holds locally on each of the t strips. In case that the differing coordinates of u, v would cluster in any of the strips α_jm, we would by mistake sort out the pair. However, this issue can be easily resolved by rerandomizing the position of the coordinates in both input lists L and R. Denote z := u + v ∈ F_2^m and the splitting z = (z₁, ..., z_t) ∈ F_2^{α₁m} × F_2^{α₂m} × ... × F_2^{α_tm} according to the t

strips. We will prove in Lemma 1 that after randomly permuting the columns a polynomial (in m) number of times, there will be one permutation such that wt(z_j) = γα_jm for all 1 ≤ j ≤ t. Furthermore, we want to enforce that wt(u) = wt(v) = m/2, which also has to hold on each of the t strips. Define u = (u₁, ..., u_t) and v = (v₁, ..., v_t) as above. We therefore want to make sure that wt(u_j) = wt(v_j) = α_jm/2 for all 1 ≤ j ≤ t. Notice that this can also be achieved by rerandomization. Concretely, we pick a uniformly random vector r ∈ F_2^m and add this vector to all elements of both input lists L, R. Notice that the Hamming distance of our solution pair (u, v) isn't changed by this operation. We will also show in Lemma 1 that after applying this process a polynomial (in m) number of times, the vectors u and v have the desired Hamming weight in at least one step.

Algorithm 3 NearestNeighbor
 1: procedure NearestNeighbor(L, R, γ)              ▷ L, R ⊆ F_2^m, 0 < γ < 1/2
 2:   compute the vector length m and the size λ from L, R
 3:   y := (1−γ)(1 − H((H^{−1}(1−λ) − γ/2)/(1−γ)))  ▷ as defined in Theorem 1
 4:   choose a constant ε > 0                        ▷ could also be an input
 5:   t := (log(y − λ + ε/2) − log(ε/2)) / (log y − log λ)  ▷ as defined in the proof of Theorem 1
 6:   α₁ := (y − λ + ε/2)/y                          ▷ as defined in the proof of Theorem 1
 7:   for 2 ≤ j ≤ t do
 8:     α_j := α_{j−1}·λ/y                           ▷ as defined in the proof of Theorem 1
 9:   end for
10:   for poly(m) uniformly random permutations π of [m] do
11:     for poly(m) unif. rand. r ∈ F_2^{α₁m} × ... × F_2^{α_tm} with wt. α_jm/2 on each strip do
12:       L' ← π(L) + r          ▷ permute columns and add r to all elements
13:       R' ← π(R) + r
14:       Remove all vectors from L', R' that are not of weight α_jm/2 on each strip
15:       return NearestNeighborRec(L', R', m, t, γ, λ, α₁, ..., α_t, y, ε, 1)
16:     end for
17:   end for
18: end procedure

Our algorithm NearestNeighbor starts by computing the length m of the vectors and a λ such that |L| = |R| = 2^{λm} from L and R. This list size is used to compute the repetition parameter y. The algorithm chooses a constant ε > 0 that is part of the asymptotic time complexity. The parameter ε also determines the number of strips t and the relative sizes of the strips α₁, ..., α_t.
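The parameter computations in lines 2–3 can be made concrete. The sketch below is an illustration under identity (5); implementing H^{−1} by bisection is our own implementation choice, and the sample values of λ and γ are assumptions. It also checks the numeric fact λ < y < 2λ (super-linear, yet sub-quadratic in the list size) claimed above.

```python
from math import log2

def H(x: float) -> float:
    """Binary entropy function."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def H_inv(val: float) -> float:
    """Inverse of H on [0, 1/2], computed by bisection (an implementation
    choice; only the value h = H^{-1}(1 - lambda) is prescribed)."""
    lo, hi = 0.0, 0.5
    for _ in range(80):
        mid = (lo + hi) / 2
        if H(mid) < val:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def y_exponent(lam: float, gamma: float) -> float:
    """Identity (5): y = (1-g) * (1 - H((H^{-1}(1-lam) - g/2) / (1-g)))."""
    h = H_inv(1 - lam)
    return (1 - gamma) * (1 - H((h - gamma / 2) / (1 - gamma)))

lam, gamma = 0.3, 0.2        # illustrative values with lam < 1 - H(gamma/2)
y = y_exponent(lam, gamma)
assert lam < y < 2 * lam     # super-linear but sub-quadratic in the list size
```

For these sample values y ≈ 0.40, i.e. strictly between |L| = 2^{0.3m} and the quadratic benchmark 2^{0.6m}.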
Eventually, the two input lists are rerandomized by permuting the columns and adding a random vector. Another recursive algorithm, NearestNeighborRec, is then called with the rerandomized lists as input. In the algorithm NearestNeighborRec we sample random partitions A of the columns, until it is guaranteed with overwhelming probability that the solution is in at least one of the created sublists. The sublists are created by naively

traversing the input lists for all vectors with a Hamming weight of H^{−1}(1−λ)·α_jm/2 on the columns defined by the random partition. We only continue if L' and R' don't grow too large, as defined in Lemma 3. In line 10, we apply the algorithm recursively on the subsequent α₂m columns, and so on. Eventually, we compare the final list pairs naively.

Algorithm 4 NearestNeighborRec
 1: procedure NearestNeighborRec(L, R, m, t, γ, λ, α₁, ..., α_t, y, ε, j)  ▷ init j = 1
 2:   if j = t + 1 then
 3:     run the naive algorithm to compute C, a list of correct pairs
 4:   end if
 5:   for Θ̃(2^{yα_jm}) times do
 6:     A ← partition(α_jm)      ▷ random partition of size α_jm/2 of the α_jm columns
 7:     L' ← all v_L ∈ L with wt. H^{−1}(1−λ)·α_jm/2 on the A-columns    ▷ naive search
 8:     R' ← all v_R ∈ R with wt. H^{−1}(1−λ)·α_jm/2 on the A-columns    ▷ naive search
 9:     if L', R' don't grow too large then        ▷ as defined in Lemma 3
10:       C ← C ∪ NearestNeighborRec(L', R', m, t, γ, λ, α₁, ..., α_t, y, ε, j + 1)  ▷ solve the problem recursively
11:     end if
12:   end for
13:   return C        ▷ output a list of correct pairs
14: end procedure

5 Analysis of Our Algorithm

In this section, we first show that NearestNeighbor achieves a good distribution in at least one of the polynomially many repetitions. We define a good computation path and show that NearestNeighbor has at least one of them with overwhelming probability over the coins of the algorithm. We continue by showing that the lists on that computation path achieve their expected size with overwhelming probability over the random choice of the input. We conclude with Theorem 1, which combines these results and shows the correctness and time complexity of NearestNeighbor.

Lemma 1 (good distribution). Let (L, R, γ) be an instance of an (m, γ, λ) Nearest Neighbor problem with unknown solution vectors (u, v) ∈ L × R. Let z := u + v and for any constant t let u := (u₁, ..., u_t), v := (v₁, ..., v_t), z := (z₁, ..., z_t) be a splitting of the vectors in t strips with sizes α_jm for all 1 ≤ j ≤ t with α₁ + ... + α_t = 1.
Then the double rerandomization of algorithm NearestNeighbor guarantees with overwhelming probability that wt(z_j) = γα_jm and wt(u_j) = wt(v_j) = α_jm/2 for all 1 ≤ j ≤ t in at least one of the rerandomized input lists.

Proof. In the first loop, random permutations π of the columns are chosen. Thus the probability for wt(z_j) = γα_jm for all 1 ≤ j ≤ t is

\binom{α₁m}{γα₁m} ··· \binom{α_tm}{γα_tm} / \binom{m}{γm}.

A cancellation of the common terms, an application of Stirling's formula and the fact that t is constant shows the claim. In the second loop we choose uniformly random r = (r₁, ..., r_t) ∈ F_2^{α₁m} × ... × F_2^{α_tm} with weight α_jm/2 on each of the t strips and add them to all elements in L and R. Fix one of the strips j and consider the vectors u_j, v_j. Let c₀₁ denote the number of columns such that u_j has a 0-coordinate and v_j has a 1-coordinate. Define c₀₀, c₁₀ and c₁₁ analogously. We define r to be good on the strip j, if it has exactly c_xy/2 ones in all four parts xy. The probability for that is

\binom{c₀₀}{c₀₀/2} \binom{c₀₁}{c₀₁/2} \binom{c₁₀}{c₁₀/2} \binom{c₁₁}{c₁₁/2} / \binom{α_jm}{α_jm/2}.

Notice that this is again inverse polynomial, because c₀₀ + c₀₁ + c₁₀ + c₁₁ = α_jm per definition. Thus the probability stays inverse polynomial for all t strips, since t is constant. We conclude that a good r solves the problem, because on each strip

wt(u_j + r_j) = wt(v_j + r_j) = c₀₀/2 + c₀₁/2 + c₁₀/2 + c₁₁/2 = α_jm/2.

In the following we use the notion of a good computation path inside the computation tree of our algorithm. See Fig. 7 for an example. In this figure the good path is marked with solid arrows, whereas all the other paths are marked as dashed arrows.

Definition 3 (good computation path). Let (u, v) ∈ L × R be the target solution. A computation path of NearestNeighbor is called good, if (u, v) is contained in all t sublist pairs from the root to a leaf.

Lemma 2 (correctness). Let t ∈ N be the constant depth of NearestNeighbor and λ < 1 − H(γ/2). Then the computation tree of NearestNeighbor has a good computation path with overwhelming probability over the coins of the algorithm.

Proof. By construction, the target solution (u, v) is contained in the initial list pair L × R on level 0 of the computation tree.
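The rerandomization of Lemma 1 is cheap to simulate. The following toy sketch (illustrative, not from the paper) applies one permutation-plus-translation step and confirms the invariant used above: permuting the columns and adding the same random r to both vectors never changes the Hamming distance of the solution pair.

```python
import random

def rerandomize(L, R, perm, r):
    """One rerandomization step (sketch of Lemma 1's setting): permute the
    columns of every vector by perm, then add (XOR) the same vector r."""
    def apply(v):
        return tuple(v[perm[i]] ^ r[i] for i in range(len(v)))
    return [apply(v) for v in L], [apply(v) for v in R]

def dist(u, v):
    """Hamming distance of two equal-length 0/1 tuples."""
    return sum(a != b for a, b in zip(u, v))

random.seed(7)
m = 12
u = tuple(random.randint(0, 1) for _ in range(m))
v = tuple(random.randint(0, 1) for _ in range(m))
perm = list(range(m)); random.shuffle(perm)
r = tuple(random.randint(0, 1) for _ in range(m))
(u2,), (v2,) = rerandomize([u], [v], perm, r)
assert dist(u2, v2) == dist(u, v)   # the pair's distance is invariant
```

Only the weights of the individual vectors change, which is exactly the degree of freedom the algorithm exploits to enforce wt(u_j) = wt(v_j) = α_jm/2 on every strip after polynomially many trials.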
In the following we show that if the solution is in one of the input lists on a level j, then with overwhelming probability it is also in one of the output lists on level j, which are either the input lists on level j + 1 or the input lists for the naive algorithm on the last level. Thus, if this holds for any 1 ≤ j ≤ t, we have a good path by induction. Let us create 2^{yα_jm} sublist pairs for each input list pair on level j, with y from (5). On each level j we therefore have a total of 2^{y(α₁+...+α_j)m} output list pairs, resulting in a total of 2^{ym} output list pairs on the last level.

Fix a level j and a target solution (u, v) in one of the input pairs (L, R) on that level. On this level, we work on a strip of size α_jm. Let u_j and v_j be the restrictions of u, resp. v, to that strip. Due to the rerandomization in our algorithm, it is guaranteed by Lemma 1 that wt(u_j) = wt(v_j) = α_jm/2 and that their Hamming distance is Δ(u_j, v_j) = γα_jm. Thus, if we look at the pairwise coordinates of u_j, v_j, this implies that we have exactly ((1−γ)/2)·α_jm (0,0)-pairs and (1,1)-pairs and exactly (γ/2)·α_jm (0,1)-pairs and (1,0)-pairs, respectively. We illustrate this input distribution of the target pair (u_j, v_j) in Fig. 8.

[Fig. 8. Input distribution on an α_jm-strip: u_j and v_j agree on ((1−γ)/2)·α_jm (0,0)-positions and ((1−γ)/2)·α_jm (1,1)-positions, and differ on (γ/2)·α_jm (0,1)-positions and (γ/2)·α_jm (1,0)-positions.]

The algorithm constructs sublists L', R' by choosing a random partition A of the α_jm-strip with |A| = α_jm/2. The algorithm only keeps those vectors of the input lists that have a relative Hamming weight of h := H^{−1}(1−λ) on the columns defined by A, a choice that will be justified in Lemma 3. The choice of h implies that the number of (1,0)-overlaps on the columns defined by A plus the number of (1,1)-overlaps on the columns defined by A is h·α_jm/2. This is also the case for the number of (0,1)-overlaps plus the number of (1,1)-overlaps. Finally, we also know that the sum of all overlaps (0,0), (0,1), (1,0) and (1,1) is α_jm/2. Compared to the input distribution, this leaves one degree of freedom, which we denote by a parameter 0 ≤ c ≤ h. We obtain the output distribution shown in Fig. 9.

[Fig. 9. Output distribution on the A-columns of an α_jm-strip: (h−c)·α_jm/2 (1,1)-overlaps, c·α_jm/2 (1,0)-overlaps, c·α_jm/2 (0,1)-overlaps and (1−h−c)·α_jm/2 (0,0)-overlaps.]

In order to compute the necessary number of repetitions, we have to compute the number of good partitions A that lead to an output distribution of Fig. 9 for any 0 ≤ c ≤ h. This can be computed by multiplying the number of choices we have for each overlap of zeros and ones for any possible value of c·α_jm/2, which is

Σ_{c·α_jm/2 = 0}^{h·α_jm/2}  \binom{(γ/2)α_jm}{(c/2)α_jm}² \binom{((1−γ)/2)α_jm}{((h−c)/2)α_jm} \binom{((1−γ)/2)α_jm}{((1−h−c)/2)α_jm}.
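For intuition, this count can be checked against identity (5) on a single strip of moderate concrete length. The sketch below (illustrative, with hypothetical small parameters) evaluates the sum exactly with integer binomials and compares the resulting repetition exponent with y; the two agree up to poly-size factors.

```python
from math import comb, log2

def H(x):
    """Binary entropy function."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def good_partitions(n, gamma, h):
    """Number of partitions A (|A| = n/2) of one strip of length n realizing
    the Fig. 9 output distribution for some c; the strip has gamma*n/2 each
    of (0,1)- and (1,0)-pairs and (1-gamma)*n/2 each of (0,0)- and
    (1,1)-pairs, and h is the relative filter weight on the A-columns."""
    d2 = round(gamma * n / 2)        # available (0,1)s = available (1,0)s
    a2 = round((1 - gamma) * n / 2)  # available (0,0)s = available (1,1)s
    total = 0
    for c2 in range(0, d2 + 1):      # c2 = c*n/2 picks from each differing type
        k11 = round(h * n / 2) - c2  # (1,1)-picks fixed by the weight condition
        k00 = n // 2 - 2 * c2 - k11  # (0,0)-picks fixed by |A| = n/2
        if k11 < 0 or k00 < 0:
            continue
        total += comb(d2, c2) ** 2 * comb(a2, k11) * comb(a2, k00)
    return total

n, gamma, h = 120, 0.2, 0.3          # hypothetical strip parameters
reps_exponent = log2(comb(n, n // 2) / good_partitions(n, gamma, h)) / n
y = (1 - gamma) * (1 - H((h - gamma / 2) / (1 - gamma)))
assert abs(reps_exponent - y) < 0.1  # matches up to poly(n) factors
```

Here comb(n, n//2) is the total number of partitions, so the ratio is exactly the expected number of repetitions for one strip, and its exponent converges to y as n grows.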

For a fixed c, it is for example possible to choose (c/2)·α_jm (0,1)-overlaps in the A-area from an overall number of (γ/2)·α_jm (0,1)s in the whole strip. Notice that some choices of c might lead to no possible choice for A, e.g. if c > γ. We determine a c that maximizes the number of good partitions. Numerical optimization shows that this maximum is obtained at c = γ/2. Notice that this is a valid choice for c (i.e. none of the binomial coefficients is zero) due to the restriction λ < 1 − H(γ/2) of Theorem 1, which implies h > γ/2. Thus the number of good partitions, up to polynomial factors, is

\binom{((1−γ)/2)α_jm}{((h−γ/2)/2)α_jm} \binom{(γ/2)α_jm}{(γ/4)α_jm}² \binom{((1−γ)/2)α_jm}{((1−h−γ/2)/2)α_jm} = Õ(2^{α_jm[(1−γ)H((h−γ/2)/(1−γ)) + γ]}).

The total number of partitions is \binom{α_jm}{α_jm/2} = Õ(2^{α_jm}). Thus the expected number r of repetitions until we find the correct pair is the total number of partitions divided by the number of good partitions, which is r = Õ(2^{yα_jm}); this justifies our choice of y in identity (5). It suffices to choose rm repetitions in order to find the correct pair with overwhelming probability, since the probability to not find the correct pair can be upper bounded by (1 − 1/r)^{rm} ≤ e^{−m}. Thus the probability that the algorithm goes wrong in any of its t levels on a good computation path can be upper bounded by t·e^{−m}, which is negligible. Notice that rm = Õ(2^{yα_jm}), since polynomial factors vanish in the Õ-notation.

Lemma 3 (list sizes). Let t ∈ N be the constant depth of NearestNeighbor and ε > 0. Consider a good computation path of depth t. Then with overwhelming probability the t lists inside the good computation path have sizes Õ(2^{λ(1−Σ_{i=1}^{j} α_i)m + (ε/2)m}) for all 1 ≤ j ≤ t and thus are not cut off by the algorithm.

Proof. Fix some 1 ≤ j ≤ t and an initial input list L with |L| = 2^{λm} (the argument is analogous for R). For each vector v_k ∈ L we define random variables X_k such that

X_k = 1 if v_k ∈ L_i for all 1 ≤ i ≤ j, and X_k = 0 otherwise.

Let X := Σ_{k=1}^{|L|} X_k. Thus the random variable X counts the number of elements in the output list L_j. Recall that NearestNeighbor restricts to relative weight h = H^{−1}(1−λ) on the columns in A.
Since we know that the computation path is good, there is one v_{k*} ∈ L with P[X_{k*} = 1] = 1. Notice that all the other elements are independent of v_{k*} and are uniformly chosen among all vectors with weight α_im/2 on each strip i. Thus for all v_k ∈ L \ {v_{k*}} we have

P[X_k = 1] = Π_{i=1}^{j} \binom{α_im/2}{hα_im/2}² / \binom{α_im}{α_im/2},    (3)

which is, for each of the first j strips, the number of vectors that have relative weight h on the A-columns divided by the number of all possible vectors. Thus the expected size of the output list is

E[X] = 1 + 2^{λm} Π_{i=1}^{j} \binom{α_im/2}{hα_im/2}² / \binom{α_im}{α_im/2}.    (4)

Notice that obviously E[X] ≥ 1. Applying Chebyshev's inequality, we get

P[ |X − E[X]| ≥ 2^{(ε/4)m} E[X] ] ≤ V[X] / (2^{(ε/2)m} E[X]²) ≤ 1 / (2^{(ε/2)m} E[X]) ≤ 2^{−(ε/2)m},

using V[X] = V[Σ_k X_k] = Σ_k V[X_k] = Σ_k (E[X_k²] − E[X_k]²) ≤ Σ_k E[X_k] = E[X]. We have V[Σ_k X_k] = Σ_k V[X_k], because the X_k are pairwise independent. Thus for both lists on each level we obtain P[X too large in any of the steps] ≤ 2t·2^{−(ε/2)m}, applying the union bound. From Stirling's formula it also follows that E[X] ≤ poly(m)·2^{λ(1−Σ_{i=1}^{j}α_i)m} for each 1 ≤ j ≤ t. Hence, with overwhelming probability, the list sizes are as claimed.

Now we are able to prove Theorem 1.

Theorem 1. For any constant ε > 0 and any λ < 1 − H(γ/2), NearestNeighbor solves the (m, γ, λ) NN problem with overwhelming probability over both the coins of the algorithm and the random choice of the input in time Õ(2^{(y+ε)m}) with

y := (1 − γ)(1 − H((H^{−1}(1 − λ) − γ/2)/(1 − γ))).

Proof. By Lemma 2, there is a path that includes the solution with overwhelming probability over the coins of the algorithm. Fix that path. We show that by Lemma 3, with overwhelming probability over the random choice of the input, NearestNeighbor's time complexity is the maximum of the times

Õ(2^{(λ(1−Σ_{i=1}^{j−1}α_i) + yΣ_{i=1}^{j}α_i + ε/2)m})    (6)

to create all sublists on level j, for all levels 1 ≤ j ≤ t, and the time

Õ(2^{(y+ε)m})    (7)

to naively solve the problem on the last level. From Lemma 3 we know that on each level j the sizes of the input lists are Õ(2^{λ(1−Σ_{i=1}^{j−1}α_i)m + (ε/2)m}). Notice that if the lists grow too large, we simply abort. On level j we construct Õ(2^{yα_jm}) new sublists for each input list so that we have a total number of Õ(2^{yΣ_{i=1}^{j}α_im}) sublists on this level. Each of these sublists is computed by naively searching through the input lists. Thus, we have


More information

Performance Evaluation of Machine Learning Techniques using Software Cost Drivers

Performance Evaluation of Machine Learning Techniques using Software Cost Drivers Perforance Evaluation of Machine Learning Techniques using Software Cost Drivers Manas Gaur Departent of Coputer Engineering, Delhi Technological University Delhi, India ABSTRACT There is a treendous rise

More information

Real Time Target Tracking with Binary Sensor Networks and Parallel Computing

Real Time Target Tracking with Binary Sensor Networks and Parallel Computing Real Tie Target Tracking with Binary Sensor Networks and Parallel Coputing Hong Lin, John Rushing, Sara J. Graves, Steve Tanner, and Evans Criswell Abstract A parallel real tie data fusion and target tracking

More information

Factored Models for Probabilistic Modal Logic

Factored Models for Probabilistic Modal Logic Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008 Factored Models for Probabilistic Modal Logic Afsaneh Shirazi and Eyal Air Coputer Science Departent, University of Illinois

More information

Software Quality Characteristics Tested For Mobile Application Development

Software Quality Characteristics Tested For Mobile Application Development Thesis no: MGSE-2015-02 Software Quality Characteristics Tested For Mobile Application Developent Literature Review and Epirical Survey WALEED ANWAR Faculty of Coputing Blekinge Institute of Technology

More information

- 265 - Part C. Property and Casualty Insurance Companies

- 265 - Part C. Property and Casualty Insurance Companies Part C. Property and Casualty Insurance Copanies This Part discusses proposals to curtail favorable tax rules for property and casualty ("P&C") insurance copanies. The syste of reserves for unpaid losses

More information

5.7 Chebyshev Multi-section Matching Transformer

5.7 Chebyshev Multi-section Matching Transformer /9/ 5_7 Chebyshev Multisection Matching Transforers / 5.7 Chebyshev Multi-section Matching Transforer Reading Assignent: pp. 5-55 We can also build a ultisection atching network such that Γ f is a Chebyshev

More information

Optimal Resource-Constraint Project Scheduling with Overlapping Modes

Optimal Resource-Constraint Project Scheduling with Overlapping Modes Optial Resource-Constraint Proect Scheduling with Overlapping Modes François Berthaut Lucas Grèze Robert Pellerin Nathalie Perrier Adnène Hai February 20 CIRRELT-20-09 Bureaux de Montréal : Bureaux de

More information

Data Streaming Algorithms for Estimating Entropy of Network Traffic

Data Streaming Algorithms for Estimating Entropy of Network Traffic Data Streaing Algoriths for Estiating Entropy of Network Traffic Ashwin Lall University of Rochester Vyas Sekar Carnegie Mellon University Mitsunori Ogihara University of Rochester Jun (Ji) Xu Georgia

More information

The Fundamentals of Modal Testing

The Fundamentals of Modal Testing The Fundaentals of Modal Testing Application Note 243-3 Η(ω) = Σ n r=1 φ φ i j / 2 2 2 2 ( ω n - ω ) + (2ξωωn) Preface Modal analysis is defined as the study of the dynaic characteristics of a echanical

More information

Modeling operational risk data reported above a time-varying threshold

Modeling operational risk data reported above a time-varying threshold Modeling operational risk data reported above a tie-varying threshold Pavel V. Shevchenko CSIRO Matheatical and Inforation Sciences, Sydney, Locked bag 7, North Ryde, NSW, 670, Australia. e-ail: Pavel.Shevchenko@csiro.au

More information

Energy Efficient VM Scheduling for Cloud Data Centers: Exact allocation and migration algorithms

Energy Efficient VM Scheduling for Cloud Data Centers: Exact allocation and migration algorithms Energy Efficient VM Scheduling for Cloud Data Centers: Exact allocation and igration algoriths Chaia Ghribi, Makhlouf Hadji and Djaal Zeghlache Institut Mines-Téléco, Téléco SudParis UMR CNRS 5157 9, Rue

More information

An Approach to Combating Free-riding in Peer-to-Peer Networks

An Approach to Combating Free-riding in Peer-to-Peer Networks An Approach to Cobating Free-riding in Peer-to-Peer Networks Victor Ponce, Jie Wu, and Xiuqi Li Departent of Coputer Science and Engineering Florida Atlantic University Boca Raton, FL 33431 April 7, 2008

More information

SOME APPLICATIONS OF FORECASTING Prof. Thomas B. Fomby Department of Economics Southern Methodist University May 2008

SOME APPLICATIONS OF FORECASTING Prof. Thomas B. Fomby Department of Economics Southern Methodist University May 2008 SOME APPLCATONS OF FORECASTNG Prof. Thoas B. Foby Departent of Econoics Southern Methodist University May 8 To deonstrate the usefulness of forecasting ethods this note discusses four applications of forecasting

More information

A CHAOS MODEL OF SUBHARMONIC OSCILLATIONS IN CURRENT MODE PWM BOOST CONVERTERS

A CHAOS MODEL OF SUBHARMONIC OSCILLATIONS IN CURRENT MODE PWM BOOST CONVERTERS A CHAOS MODEL OF SUBHARMONIC OSCILLATIONS IN CURRENT MODE PWM BOOST CONVERTERS Isaac Zafrany and Sa BenYaakov Departent of Electrical and Coputer Engineering BenGurion University of the Negev P. O. Box

More information

Multi-Class Deep Boosting

Multi-Class Deep Boosting Multi-Class Deep Boosting Vitaly Kuznetsov Courant Institute 25 Mercer Street New York, NY 002 vitaly@cis.nyu.edu Mehryar Mohri Courant Institute & Google Research 25 Mercer Street New York, NY 002 ohri@cis.nyu.edu

More information

A magnetic Rotor to convert vacuum-energy into mechanical energy

A magnetic Rotor to convert vacuum-energy into mechanical energy A agnetic Rotor to convert vacuu-energy into echanical energy Claus W. Turtur, University of Applied Sciences Braunschweig-Wolfenbüttel Abstract Wolfenbüttel, Mai 21 2008 In previous work it was deonstrated,

More information

Managing Complex Network Operation with Predictive Analytics

Managing Complex Network Operation with Predictive Analytics Managing Coplex Network Operation with Predictive Analytics Zhenyu Huang, Pak Chung Wong, Patrick Mackey, Yousu Chen, Jian Ma, Kevin Schneider, and Frank L. Greitzer Pacific Northwest National Laboratory

More information

Exercise 4 INVESTIGATION OF THE ONE-DEGREE-OF-FREEDOM SYSTEM

Exercise 4 INVESTIGATION OF THE ONE-DEGREE-OF-FREEDOM SYSTEM Eercise 4 IVESTIGATIO OF THE OE-DEGREE-OF-FREEDOM SYSTEM 1. Ai of the eercise Identification of paraeters of the euation describing a one-degree-of- freedo (1 DOF) atheatical odel of the real vibrating

More information

Analyzing Spatiotemporal Characteristics of Education Network Traffic with Flexible Multiscale Entropy

Analyzing Spatiotemporal Characteristics of Education Network Traffic with Flexible Multiscale Entropy Vol. 9, No. 5 (2016), pp.303-312 http://dx.doi.org/10.14257/ijgdc.2016.9.5.26 Analyzing Spatioteporal Characteristics of Education Network Traffic with Flexible Multiscale Entropy Chen Yang, Renjie Zhou

More information

Physics 211: Lab Oscillations. Simple Harmonic Motion.

Physics 211: Lab Oscillations. Simple Harmonic Motion. Physics 11: Lab Oscillations. Siple Haronic Motion. Reading Assignent: Chapter 15 Introduction: As we learned in class, physical systes will undergo an oscillatory otion, when displaced fro a stable equilibriu.

More information

ASIC Design Project Management Supported by Multi Agent Simulation

ASIC Design Project Management Supported by Multi Agent Simulation ASIC Design Project Manageent Supported by Multi Agent Siulation Jana Blaschke, Christian Sebeke, Wolfgang Rosenstiel Abstract The coplexity of Application Specific Integrated Circuits (ASICs) is continuously

More information

CLOSED-LOOP SUPPLY CHAIN NETWORK OPTIMIZATION FOR HONG KONG CARTRIDGE RECYCLING INDUSTRY

CLOSED-LOOP SUPPLY CHAIN NETWORK OPTIMIZATION FOR HONG KONG CARTRIDGE RECYCLING INDUSTRY CLOSED-LOOP SUPPLY CHAIN NETWORK OPTIMIZATION FOR HONG KONG CARTRIDGE RECYCLING INDUSTRY Y. T. Chen Departent of Industrial and Systes Engineering Hong Kong Polytechnic University, Hong Kong yongtong.chen@connect.polyu.hk

More information

Markov Models and Their Use for Calculations of Important Traffic Parameters of Contact Center

Markov Models and Their Use for Calculations of Important Traffic Parameters of Contact Center Markov Models and Their Use for Calculations of Iportant Traffic Paraeters of Contact Center ERIK CHROMY, JAN DIEZKA, MATEJ KAVACKY Institute of Telecounications Slovak University of Technology Bratislava

More information

SAMPLING METHODS LEARNING OBJECTIVES

SAMPLING METHODS LEARNING OBJECTIVES 6 SAMPLING METHODS 6 Using Statistics 6-6 2 Nonprobability Sapling and Bias 6-6 Stratified Rando Sapling 6-2 6 4 Cluster Sapling 6-4 6 5 Systeatic Sapling 6-9 6 6 Nonresponse 6-2 6 7 Suary and Review of

More information

The Application of Bandwidth Optimization Technique in SLA Negotiation Process

The Application of Bandwidth Optimization Technique in SLA Negotiation Process The Application of Bandwidth Optiization Technique in SLA egotiation Process Srecko Krile University of Dubrovnik Departent of Electrical Engineering and Coputing Cira Carica 4, 20000 Dubrovnik, Croatia

More information

Online Appendix I: A Model of Household Bargaining with Violence. In this appendix I develop a simple model of household bargaining that

Online Appendix I: A Model of Household Bargaining with Violence. In this appendix I develop a simple model of household bargaining that Online Appendix I: A Model of Household Bargaining ith Violence In this appendix I develop a siple odel of household bargaining that incorporates violence and shos under hat assuptions an increase in oen

More information

Introduction to Unit Conversion: the SI

Introduction to Unit Conversion: the SI The Matheatics 11 Copetency Test Introduction to Unit Conversion: the SI In this the next docuent in this series is presented illustrated an effective reliable approach to carryin out unit conversions

More information

Capacity of Multiple-Antenna Systems With Both Receiver and Transmitter Channel State Information

Capacity of Multiple-Antenna Systems With Both Receiver and Transmitter Channel State Information IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO., OCTOBER 23 2697 Capacity of Multiple-Antenna Systes With Both Receiver and Transitter Channel State Inforation Sudharan K. Jayaweera, Student Meber,

More information

Information Dimension and the Degrees of Freedom of the Interference Channel

Information Dimension and the Degrees of Freedom of the Interference Channel Inforation Diension and the Degrees of Freedo of the Interference Channel Yihong Wu, Shloo Shaai Shitz and Sergio Verdú Abstract The degrees of freedo of the K-user Gaussian interference channel deterine

More information

Enrolment into Higher Education and Changes in Repayment Obligations of Student Aid Microeconometric Evidence for Germany

Enrolment into Higher Education and Changes in Repayment Obligations of Student Aid Microeconometric Evidence for Germany Enrolent into Higher Education and Changes in Repayent Obligations of Student Aid Microeconoetric Evidence for Gerany Hans J. Baugartner *) Viktor Steiner **) *) DIW Berlin **) Free University of Berlin,

More information

Considerations on Distributed Load Balancing for Fully Heterogeneous Machines: Two Particular Cases

Considerations on Distributed Load Balancing for Fully Heterogeneous Machines: Two Particular Cases Considerations on Distributed Load Balancing for Fully Heterogeneous Machines: Two Particular Cases Nathanaël Cheriere Departent of Coputer Science ENS Rennes Rennes, France nathanael.cheriere@ens-rennes.fr

More information

The individual neurons are complicated. They have a myriad of parts, subsystems and control mechanisms. They convey information via a host of

The individual neurons are complicated. They have a myriad of parts, subsystems and control mechanisms. They convey information via a host of CHAPTER 4 ARTIFICIAL NEURAL NETWORKS 4. INTRODUCTION Artificial Neural Networks (ANNs) are relatively crude electronic odels based on the neural structure of the brain. The brain learns fro experience.

More information

Trading Regret for Efficiency: Online Convex Optimization with Long Term Constraints

Trading Regret for Efficiency: Online Convex Optimization with Long Term Constraints Journal of Machine Learning Research 13 2012) 2503-2528 Subitted 8/11; Revised 3/12; Published 9/12 rading Regret for Efficiency: Online Convex Optiization with Long er Constraints Mehrdad Mahdavi Rong

More information

An Optimal Task Allocation Model for System Cost Analysis in Heterogeneous Distributed Computing Systems: A Heuristic Approach

An Optimal Task Allocation Model for System Cost Analysis in Heterogeneous Distributed Computing Systems: A Heuristic Approach An Optial Tas Allocation Model for Syste Cost Analysis in Heterogeneous Distributed Coputing Systes: A Heuristic Approach P. K. Yadav Central Building Research Institute, Rooree- 247667, Uttarahand (INDIA)

More information

Markovian inventory policy with application to the paper industry

Markovian inventory policy with application to the paper industry Coputers and Cheical Engineering 26 (2002) 1399 1413 www.elsevier.co/locate/copcheeng Markovian inventory policy with application to the paper industry K. Karen Yin a, *, Hu Liu a,1, Neil E. Johnson b,2

More information

REQUIREMENTS FOR A COMPUTER SCIENCE CURRICULUM EMPHASIZING INFORMATION TECHNOLOGY SUBJECT AREA: CURRICULUM ISSUES

REQUIREMENTS FOR A COMPUTER SCIENCE CURRICULUM EMPHASIZING INFORMATION TECHNOLOGY SUBJECT AREA: CURRICULUM ISSUES REQUIREMENTS FOR A COMPUTER SCIENCE CURRICULUM EMPHASIZING INFORMATION TECHNOLOGY SUBJECT AREA: CURRICULUM ISSUES Charles Reynolds Christopher Fox reynolds @cs.ju.edu fox@cs.ju.edu Departent of Coputer

More information

Approximately-Perfect Hashing: Improving Network Throughput through Efficient Off-chip Routing Table Lookup

Approximately-Perfect Hashing: Improving Network Throughput through Efficient Off-chip Routing Table Lookup Approxiately-Perfect ing: Iproving Network Throughput through Efficient Off-chip Routing Table Lookup Zhuo Huang, Jih-Kwon Peir, Shigang Chen Departent of Coputer & Inforation Science & Engineering, University

More information

International Journal of Management & Information Systems First Quarter 2012 Volume 16, Number 1

International Journal of Management & Information Systems First Quarter 2012 Volume 16, Number 1 International Journal of Manageent & Inforation Systes First Quarter 2012 Volue 16, Nuber 1 Proposal And Effectiveness Of A Highly Copelling Direct Mail Method - Establishent And Deployent Of PMOS-DM Hisatoshi

More information

Image restoration for a rectangular poor-pixels detector

Image restoration for a rectangular poor-pixels detector Iage restoration for a rectangular poor-pixels detector Pengcheng Wen 1, Xiangjun Wang 1, Hong Wei 2 1 State Key Laboratory of Precision Measuring Technology and Instruents, Tianjin University, China 2

More information

Audio Engineering Society. Convention Paper. Presented at the 119th Convention 2005 October 7 10 New York, New York USA

Audio Engineering Society. Convention Paper. Presented at the 119th Convention 2005 October 7 10 New York, New York USA Audio Engineering Society Convention Paper Presented at the 119th Convention 2005 October 7 10 New York, New York USA This convention paper has been reproduced fro the authors advance anuscript, without

More information

Pricing Asian Options using Monte Carlo Methods

Pricing Asian Options using Monte Carlo Methods U.U.D.M. Project Report 9:7 Pricing Asian Options using Monte Carlo Methods Hongbin Zhang Exaensarbete i ateatik, 3 hp Handledare och exainator: Johan Tysk Juni 9 Departent of Matheatics Uppsala University

More information

Energy Proportionality for Disk Storage Using Replication

Energy Proportionality for Disk Storage Using Replication Energy Proportionality for Disk Storage Using Replication Jinoh Ki and Doron Rote Lawrence Berkeley National Laboratory University of California, Berkeley, CA 94720 {jinohki,d rote}@lbl.gov Abstract Energy

More information

Implementation of Active Queue Management in a Combined Input and Output Queued Switch

Implementation of Active Queue Management in a Combined Input and Output Queued Switch pleentation of Active Queue Manageent in a obined nput and Output Queued Switch Bartek Wydrowski and Moshe Zukeran AR Special Research entre for Ultra-Broadband nforation Networks, EEE Departent, The University

More information

Using Bloom Filters to Refine Web Search Results

Using Bloom Filters to Refine Web Search Results Using Bloo Filters to Refine Web Search Results Navendu Jain Departent of Coputer Sciences University of Texas at Austin Austin, TX, 78712 nav@cs.utexas.edu Mike Dahlin Departent of Coputer Sciences University

More information

PERFORMANCE METRICS FOR THE IT SERVICES PORTFOLIO

PERFORMANCE METRICS FOR THE IT SERVICES PORTFOLIO Bulletin of the Transilvania University of Braşov Series I: Engineering Sciences Vol. 4 (53) No. - 0 PERFORMANCE METRICS FOR THE IT SERVICES PORTFOLIO V. CAZACU I. SZÉKELY F. SANDU 3 T. BĂLAN Abstract:

More information

Method of supply chain optimization in E-commerce

Method of supply chain optimization in E-commerce MPRA Munich Personal RePEc Archive Method of supply chain optiization in E-coerce Petr Suchánek and Robert Bucki Silesian University - School of Business Adinistration, The College of Inforatics and Manageent

More information

Applying Multiple Neural Networks on Large Scale Data

Applying Multiple Neural Networks on Large Scale Data 0 International Conference on Inforation and Electronics Engineering IPCSIT vol6 (0) (0) IACSIT Press, Singapore Applying Multiple Neural Networks on Large Scale Data Kritsanatt Boonkiatpong and Sukree

More information

Lecture L26-3D Rigid Body Dynamics: The Inertia Tensor

Lecture L26-3D Rigid Body Dynamics: The Inertia Tensor J. Peraire, S. Widnall 16.07 Dynaics Fall 008 Lecture L6-3D Rigid Body Dynaics: The Inertia Tensor Version.1 In this lecture, we will derive an expression for the angular oentu of a 3D rigid body. We shall

More information

An Innovate Dynamic Load Balancing Algorithm Based on Task

An Innovate Dynamic Load Balancing Algorithm Based on Task An Innovate Dynaic Load Balancing Algorith Based on Task Classification Hong-bin Wang,,a, Zhi-yi Fang, b, Guan-nan Qu,*,c, Xiao-dan Ren,d College of Coputer Science and Technology, Jilin University, Changchun

More information

Comment on On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes

Comment on On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes Coent on On Discriinative vs. Generative Classifiers: A Coparison of Logistic Regression and Naive Bayes Jing-Hao Xue (jinghao@stats.gla.ac.uk) and D. Michael Titterington (ike@stats.gla.ac.uk) Departent

More information

The Benefit of SMT in the Multi-Core Era: Flexibility towards Degrees of Thread-Level Parallelism

The Benefit of SMT in the Multi-Core Era: Flexibility towards Degrees of Thread-Level Parallelism The enefit of SMT in the Multi-Core Era: Flexibility towards Degrees of Thread-Level Parallelis Stijn Eyeran Lieven Eeckhout Ghent University, elgiu Stijn.Eyeran@elis.UGent.be, Lieven.Eeckhout@elis.UGent.be

More information

The Research of Measuring Approach and Energy Efficiency for Hadoop Periodic Jobs

The Research of Measuring Approach and Energy Efficiency for Hadoop Periodic Jobs Send Orders for Reprints to reprints@benthascience.ae 206 The Open Fuels & Energy Science Journal, 2015, 8, 206-210 Open Access The Research of Measuring Approach and Energy Efficiency for Hadoop Periodic

More information

Budget-optimal Crowdsourcing using Low-rank Matrix Approximations

Budget-optimal Crowdsourcing using Low-rank Matrix Approximations Budget-optial Crowdsourcing using Low-rank Matrix Approxiations David R. Karger, Sewoong Oh, and Devavrat Shah Departent of EECS, Massachusetts Institute of Technology Eail: {karger, swoh, devavrat}@it.edu

More information

Work Travel and Decision Probling in the Network Marketing World

Work Travel and Decision Probling in the Network Marketing World TRB Paper No. 03-4348 WORK TRAVEL MODE CHOICE MODELING USING DATA MINING: DECISION TREES AND NEURAL NETWORKS Chi Xie Research Assistant Departent of Civil and Environental Engineering University of Massachusetts,

More information

A Fast Algorithm for Online Placement and Reorganization of Replicated Data

A Fast Algorithm for Online Placement and Reorganization of Replicated Data A Fast Algorith for Online Placeent and Reorganization of Replicated Data R. J. Honicky Storage Systes Research Center University of California, Santa Cruz Ethan L. Miller Storage Systes Research Center

More information

Stable Learning in Coding Space for Multi-Class Decoding and Its Extension for Multi-Class Hypothesis Transfer Learning

Stable Learning in Coding Space for Multi-Class Decoding and Its Extension for Multi-Class Hypothesis Transfer Learning Stable Learning in Coding Space for Multi-Class Decoding and Its Extension for Multi-Class Hypothesis Transfer Learning Bang Zhang, Yi Wang 2, Yang Wang, Fang Chen 2 National ICT Australia 2 School of

More information