Learnability and Stability in the General Learning Setting
Shai Shalev-Shwartz (TTI-Chicago), Ohad Shamir (The Hebrew University), Nathan Srebro (TTI-Chicago), Karthik Sridharan (TTI-Chicago)

Abstract

We establish that stability is necessary and sufficient for learning, even in the General Learning Setting where uniform convergence conditions are not necessary for learning, and where learning might only be possible with a non-ERM learning rule. This goes beyond previous work on the relationship between stability and learnability, which focused on supervised classification and regression, where learnability is equivalent to uniform convergence and it is enough to consider the ERM.

1 Introduction

We consider the General Setting of Learning [10], where we would like to minimize a population risk functional (stochastic objective)

  F(h) = E_{Z~D}[f(h; Z)]   (1)

where the distribution D of Z is unknown, based on an i.i.d. sample z_1, ..., z_m drawn from D (and full knowledge of the function f). This General Setting subsumes supervised classification and regression, certain unsupervised learning problems, density estimation and stochastic optimization. For example, in supervised learning z = (x, y) is an instance-label pair, h is a predictor, and f(h, (x, y)) = loss(h(x), y) is the loss functional.

For supervised classification and regression problems, it is well known that a problem is learnable (see the precise definition in Section 2) if and only if the empirical risks

  F_S(h) = (1/m) Σ_{i=1}^m f(h, z_i)   (2)

converge uniformly to their expectations [1]. If uniform convergence holds, then the empirical risk minimizer (ERM) is consistent, i.e. the population risk of the ERM converges to the optimal population risk, and the problem is learnable using the ERM. That is, learnability is equivalent to learnability by ERM, and so we can focus our attention solely on empirical risk minimizers.

Stability has also been suggested as an explicit alternate condition for learnability.
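The General Setting above is easy to instantiate in code. The following is a hedged sketch (the finite hypothesis grid, the coin-bias objective, and all function names are this illustration's own, not the paper's): an ERM rule simply returns the hypothesis minimizing the empirical average of an arbitrary bounded objective f(h, z).

```python
import random

# Illustrative sketch of the General Learning Setting (the setup and
# names are this example's own): H is a hypothesis domain, f(h, z) an
# arbitrary bounded objective, and the ERM rule returns a hypothesis
# minimizing the empirical risk F_S(h) = (1/m) sum_i f(h, z_i).

def empirical_risk(f, h, S):
    return sum(f(h, z) for z in S) / len(S)

def erm(f, H, S):
    """An ERM learning rule: argmin over h in H of F_S(h)."""
    return min(H, key=lambda h: empirical_risk(f, h, S))

# Toy instantiation: estimate a Bernoulli mean with squared loss,
# f(h, z) = (h - z)^2, over a grid of candidate hypotheses.
random.seed(0)
H = [i / 100 for i in range(101)]
f = lambda h, z: (h - z) ** 2
S = [1 if random.random() < 0.7 else 0 for _ in range(1000)]

h_hat = erm(f, H, S)
```

For this toy objective the population risk F(h) = E[(h - Z)^2] is minimized at the true mean, so for a large sample the ERM lands near 0.7.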
Intuitively, stability notions focus on particular algorithms, or learning rules, and measure their sensitivity to perturbations in the training set. In particular, it has been established that various forms of stability of the ERM are sufficient for learnability. Mukherjee et al [7] argue that since uniform convergence also implies stability of the ERM, and is necessary for (distribution independent) learning in the supervised classification and regression setting, stability of the ERM is necessary and sufficient for learnability in the supervised classification and regression setting. It is important to emphasize that this characterization of stability as necessary for learnability goes through uniform convergence, i.e. the chain of implications is:

  Learnable with ERM ⟹ Uniform Convergence ⟹ ERM Stable

However, the equivalence between (distribution independent) consistency of empirical risk minimization and uniform convergence is specific to supervised classification and regression. The results of Alon et al [1] establishing this equivalence do not always hold in the General Learning Setting. In particular, we recently showed that in strongly convex stochastic optimization problems, the ERM is stable and thus consistent, even though the empirical risks do not converge to their expectations uniformly (Example 7.1, taken from [9]). Since the other implications in the chain above still hold in the general learning setting (e.g., uniform convergence implies stability, and stability implies learnability by ERM), this example demonstrates that stability is a strictly more general sufficient condition for learnability.

A natural question is then whether, in the General Setting, stability is also necessary for learning. Here we establish that indeed, even in the general learning setting, (distribution independent) stability of the ERM is necessary and sufficient for (distribution independent) consistency of the ERM.
The situation is therefore as follows:

  Uniform Convergence ⟹ ERM Stable ⟺ Learnable with ERM

We emphasize that, unlike the arguments of Mukherjee et al [7], the proof of necessity does not go through uniform convergence, allowing us to deal also with settings beyond supervised classification and regression.

The discussion above concerns only stability and learnability of the ERM. In supervised classification and regression there is no need to go beyond the ERM, since learnability is equivalent to learnability by empirical risk minimization. But as we recently showed, there are learning problems in the General Setting which are learnable using some alternate learning rule, but in which the ERM is neither stable nor consistent (Example 7.2, taken from [9]). Stability of the ERM is therefore a sufficient, but not necessary, condition for learnability:

  Uniform Convergence ⟹ ERM Stable ⟺ Learnable with ERM ⟹ Learnable

This prompts us to study the stability properties of non-ERM learning rules. We establish that, even in the General Setting, any consistent and generalizing learning rule (i.e. where in addition to consistency, the empirical risk is also a good estimate of the population risk) must be asymptotically empirical risk minimizing (AERM, see the precise definition in Section 2). We thus focus on such rules and show that also for an AERM, stability is sufficient for consistency and generalization. The converse is a bit weaker for AERMs, though. We show that a strict notion of stability, which is necessary for ERM consistency, is not necessary for AERM consistency, and instead suggest a weaker notion (averaging out fluctuations across random training sets) that is necessary and sufficient for AERM consistency. Noting that any consistent learning rule can be transformed into a consistent and generalizing learning rule, we obtain a sharp characterization of learnability in terms of stability: learnability is equivalent to the existence of a stable AERM:

  Learnable ⟺ Exists Stable AERM ⟺ Learnable with AERM

2 The General Learning Setting

A learning problem is specified by a hypothesis domain H, an instance domain Z and an objective function (e.g. "loss functional" or "cost function") f : H × Z → R. Throughout this paper we assume the function is bounded by some constant B, i.e. |f(h, z)| ≤ B for all h ∈ H and z ∈ Z. A learning rule is a mapping A : Z* → H from sequences of instances in Z to hypotheses. We refer to sequences S = {z_1, ..., z_m} as sample sets, but it is important to remember that the order and multiplicity of instances may be significant.
A learning rule that does not depend on the order is said to be symmetric. We will generally consider samples S ~ D^m of i.i.d. draws from D.

A possible approach to learning is to minimize the empirical risk F_S(h). We say that a rule A is an ERM (Empirical Risk Minimizer) if it minimizes the empirical risk:

  F_S(A(S)) = F_S(ĥ_S) = min_{h∈H} F_S(h)   (3)

where we use F_S(ĥ_S) = min_{h∈H} F_S(h) to refer to the minimum empirical risk. But since there might be several hypotheses minimizing the empirical risk, ĥ_S does not refer to a specific hypothesis, and there might be many rules which are all ERMs.

We say that a rule A is an AERM (Asymptotic Empirical Risk Minimizer) with rate ε_erm(m) under distribution D if:

  E_{S~D^m}[F_S(A(S)) − F_S(ĥ_S)] ≤ ε_erm(m)   (4)

Here and whenever talking about a rate ε(m), we require it to be monotone decreasing with ε(m) → 0. A learning rule is universally an AERM with rate ε_erm(m) if it is an AERM with rate ε_erm(m) under all distributions D over Z.

Returning to our goal of minimizing the expected risk, we say a rule A is consistent with rate ε_cons(m) under distribution D if for all m,

  E_{S~D^m}[F(A(S)) − F(h*)] ≤ ε_cons(m)   (5)

where we denote F(h*) = inf_{h∈H} F(h). A rule is universally consistent with rate ε_cons(m) if it is consistent with rate ε_cons(m) under all distributions D over Z. A problem is said to be learnable if there exists some universally consistent learning rule for the problem. This definition of learnability, requiring a uniform rate for all distributions, is the relevant notion for studying learnability of a hypothesis class. It is a direct generalization of agnostic PAC-learnability [4] to Vapnik's General Setting of Learning as studied by Haussler [3] and others.

We say a rule A generalizes with rate ε_gen(m) under distribution D if for all m,

  E_{S~D^m}[|F(A(S)) − F_S(A(S))|] ≤ ε_gen(m)   (6)

A rule universally generalizes with rate ε_gen(m) if it generalizes with rate ε_gen(m) under all distributions D over Z.
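The quantities in (4)-(6) can be estimated by simulation. Below is a hedged sketch (the toy problem and all names are illustrative, not the paper's): for squared loss on Bernoulli data the population risk has a closed form, so we can Monte-Carlo estimate the consistency gap E[F(A(S)) − F(h*)] and the generalization gap E[|F(A(S)) − F_S(A(S))|] for the ERM, and watch both shrink as m grows.

```python
import random

# Hedged toy illustration (setup of this sketch's own): for
# f(h, z) = (h - z)^2 with z ~ Bernoulli(p), the population risk is
# F(h) = (h - p)^2 + p(1 - p), minimized at h* = p, and the ERM over
# the reals is the sample mean.

random.seed(1)
p = 0.5

def F(h):                      # population risk, known in closed form
    return (h - p) ** 2 + p * (1 - p)

def gaps(m, trials=2000):
    """Monte-Carlo estimates of the consistency and generalization gaps."""
    cons, gen = 0.0, 0.0
    for _ in range(trials):
        S = [1 if random.random() < p else 0 for _ in range(m)]
        h = sum(S) / m                       # ERM = sample mean
        F_S = sum((h - z) ** 2 for z in S) / m
        cons += F(h) - F(p)                  # consistency gap (>= 0)
        gen += abs(F(h) - F_S)               # generalization gap
    return cons / trials, gen / trials

cons10, gen10 = gaps(10)
cons100, gen100 = gaps(100)
```

For this problem both gaps decay polynomially in m (the consistency gap is exactly Var(sample mean) = p(1 − p)/m), matching the rate-based definitions above.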
We note that other authors sometimes define "consistency", and thus also "learnable", as a combination of our notions of consistency and generalization.

3 Stability

We define a sequence of progressively weaker notions of stability, all based on leave-one-out validation. For a sample S of size m, let S^{\i} = {z_1, ..., z_{i−1}, z_{i+1}, ..., z_m} be the sample of m − 1 points obtained by deleting the i-th observation of S. All our measures of stability concern the effect deleting z_i has on f(h, z_i), where h is the hypothesis returned by the learning rule. That is, all measures consider the magnitude of f(A(S^{\i}); z_i) − f(A(S); z_i).

Definition 1. A rule A is uniform-LOO stable with rate ε_stable(m) if for all samples S of m points and for all i:

  |f(A(S^{\i}); z_i) − f(A(S); z_i)| ≤ ε_stable(m)

Definition 2. A rule A is all-i-LOO stable with rate ε_stable(m) under distribution D if for all i:

  E_{S~D^m}[|f(A(S^{\i}); z_i) − f(A(S); z_i)|] ≤ ε_stable(m)

Definition 3. A rule A is LOO stable with rate ε_stable(m) under distribution D if:

  (1/m) Σ_{i=1}^m E_{S~D^m}[|f(A(S^{\i}); z_i) − f(A(S); z_i)|] ≤ ε_stable(m)

For symmetric learning rules, Definitions 2 and 3 are equivalent. Example 7.5 shows that the symmetry assumption is necessary, and the two definitions are not equivalent for non-symmetric learning rules. Our weakest notion of stability, which we show is still enough to ensure learnability, is:
Definition 4. A rule A is on-average-LOO stable with rate ε_stable(m) under distribution D if:

  |(1/m) Σ_{i=1}^m E_{S~D^m}[f(A(S^{\i}); z_i) − f(A(S); z_i)]| ≤ ε_stable(m)

We say that a rule is universally stable with rate ε_stable(m) if the stability property holds with rate ε_stable(m) for all distributions.

Claim 3.1. Uniform-LOO stability with rate ε_stable(m) implies all-i-LOO stability with rate ε_stable(m), which implies LOO stability with rate ε_stable(m), which implies on-average-LOO stability with rate ε_stable(m).

Relationship to Other Notions of Stability

Many different notions of stability, some under multiple names, have been suggested in the literature. In particular, our notion of all-i-LOO stability has been studied by several authors under different names: pointwise-hypothesis stability [2], CV_loo stability [7], and cross-validation-(deletion) stability [8]. All are equivalent, though the rate is sometimes defined differently. Other authors define stability with respect to replacing, rather than deleting, an observation. E.g., CV stability [5] and cross-validation-(replacement) stability [8] are analogous to all-i-LOO stability, and average stability [8] is analogous to on-average-LOO stability for symmetric learning rules. In general the deletion and replacement variants of stability are incomparable; in Appendix A we briefly discuss how the results in this paper change if replacement stability is used.

A much stronger notion is uniform stability [2], which is strictly stronger than any of our notions, and is sufficient for tight generalization bounds. However, this notion is far from necessary for learnability ([5] and Example 7.3 below). In the context of symmetric learning rules, all-i-LOO stability and LOO stability are equivalent. In order to treat non-symmetric rules more easily, we prefer working with LOO stability. For an elaborate discussion of the relationships between different notions of stability, see [5].
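For a single sample S, the per-index deviations d_i = f(A(S^{\i}); z_i) − f(A(S); z_i) determine all of the notions above: uniform-LOO stability controls max_i |d_i| over all samples, LOO stability the expectation of (1/m) Σ_i |d_i|, and on-average-LOO stability the magnitude of the expectation of (1/m) Σ_i d_i. A hedged sketch (the toy rule and names are this illustration's own, not the paper's), which also exhibits the ordering of Claim 3.1 on one sample:

```python
import random

# Hedged toy illustration: for the ERM of the squared loss
# f(h, z) = (h - z)^2 (i.e. the sample mean), compute the per-index
# leave-one-out deviations d_i = f(A(S \ i); z_i) - f(A(S); z_i) on one
# sample, and the three sample-level statistics that the stability
# definitions take suprema/expectations of.

random.seed(2)

f = lambda h, z: (h - z) ** 2
A = lambda S: sum(S) / len(S)      # ERM for squared loss: sample mean

m = 200
S = [random.random() for _ in range(m)]

d = []
for i in range(m):
    S_minus_i = S[:i] + S[i + 1:]
    d.append(f(A(S_minus_i), S[i]) - f(A(S), S[i]))

uniform_stat = max(abs(di) for di in d)     # behind Definition 1
loo_stat = sum(abs(di) for di in d) / m     # behind Definition 3
on_avg_stat = abs(sum(d) / m)               # behind Definition 4
```

On any sample, max_i |d_i| ≥ (1/m) Σ_i |d_i| ≥ |(1/m) Σ_i d_i|, mirroring the implications of Claim 3.1 (the definitions additionally take expectations over S, and a supremum over samples for Definition 1). Note also that each d_i is nonnegative here, as must be the case for an exact ERM.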
4 Main Results

We first establish that the existence of a stable AERM is sufficient for learning:

Theorem 4.1. If a rule is an AERM with rate ε_erm(m) and stable (under any of our definitions) with rate ε_stable(m) under D, then it is consistent and generalizes under D with rates

  ε_cons(m) ≤ 3ε_erm(m) + ε_stable(m+1) + 2B/m
  ε_gen(m) ≤ 4ε_erm(m) + ε_stable(m+1) + 6B/√m

Corollary 4.2. If a rule is universally an AERM and stable, then it is universally consistent and generalizing.

Seeking a converse to the above, we first note that it is not possible to obtain a converse for each distribution D separately, i.e. a converse to Theorem 4.1. In Example 7.6, we show a specific learning problem and distribution D in which the ERM (in fact, any AERM) is consistent, but not stable, even under our weakest notion of stability. However, we are able to obtain a converse to Corollary 4.2. That is, we establish that a universally consistent ERM, or even AERM, must also be stable. For exact ERMs we have:

Theorem 4.3. For an ERM the following are equivalent:
- Universal LOO stability.
- Universal consistency.
- Universal generalization.

Recall that for a symmetric rule, LOO stability and all-i-LOO stability are equivalent, and so consistency or generalization of a symmetric ERM (the typical case) also imply all-i-LOO stability. Theorem 4.3 only guarantees LOO stability as a necessary condition for consistency. Example 7.3 (adapted from [5]) establishes that we cannot strengthen the condition to uniform-LOO stability, or any stronger definition: there exists a learning problem for which the ERM is universally consistent, but not uniform-LOO stable. For AERMs, we obtain a weaker converse, ensuring only on-average-LOO stability:

Theorem 4.4. For an AERM, the following are equivalent:
- Universal on-average-LOO stability.
- Universal consistency.
- Universal generalization.

On-average-LOO stability is strictly weaker than LOO stability, but this is the best that can be assured.
In Example 7.4 we present a learning problem and an AERM that is universally consistent, but is not LOO stable. The exact rate conversions of Theorems 4.3 and 4.4 are specified in the corresponding proofs (Section 6), and are all polynomial. In particular, an ε_cons-universally consistent ε_erm-AERM is on-average-LOO stable with rate

  ε_stable(m) ≤ 3ε_erm(m−1) + 3ε_cons((m−1)^{1/4}) + 6B/√(m−1)

The above results apply only to AERMs, for which we also see that universal consistency and generalization are equivalent. Next we show that if in fact we seek universal consistency and generalization, then we must consider only AERMs:

Theorem 4.5. If a rule A is universally consistent with rate ε_cons(m) and generalizing with rate ε_gen(m), then it is universally an AERM with rate

  ε_erm(m) ≤ ε_gen(m) + 3ε_cons(m^{1/4}) + 4B/√m

Combining Theorems 4.4 and 4.5, we get that the existence of a universally on-average-LOO stable AERM is a necessary (and sufficient) condition for the existence of some universally consistent and generalizing rule. As we show in Example 7.7, there might still be a universally consistent learning rule (hence the problem is learnable by our definition) that is not stable even by our weakest definition (and is neither an AERM nor generalizing). Nevertheless, any universally consistent learning rule can be transformed into a universally consistent and generalizing learning rule (Lemma 6.11). Thus by Theorems 4.5 and 4.4 this rule must also be a stable AERM, establishing:
Theorem 4.6. A learning problem is learnable if and only if there exists a universally on-average-LOO stable AERM. In particular, if there exists an ε_cons-universally consistent rule, then there exists a rule that is ε_stable-on-average-LOO stable and an ε_erm-AERM, where:

  ε_erm(m) = 3ε_cons(m^{1/4}) + 7B/√m
  ε_stable(m) = 6ε_cons((m−1)^{1/4}) + 9B/√(m−1)   (7)

5 Comparison with Prior Work

5.1 Theorem 4.1 and Corollary 4.2

The consistency implication in Theorem 4.1 (specifically, that all-i-LOO stability of an AERM implies consistency) was established by Mukherjee et al [7, Theorem 3.5]. As for the generalization guarantee, Rakhlin et al [8] prove that for an ERM, on-average-LOO stability is equivalent to generalization. For more general learning rules, [2] attempted to show that all-i-LOO stability implies generalization. However, Mukherjee et al [7] (in remark 3, pg. 73) provide a simple counterexample and note that the proof of [2] is wrong, and in fact all-i-LOO stability alone is not enough to ensure generalization. To correct this, Mukherjee et al [7] introduced an additional condition, referred to as E_loo-err stability, which together with all-i-LOO stability ensures generalization. For AERMs, they use arguments specific to supervised learning, arguing that universal consistency implies uniform convergence, and establish generalization only via this route. And so, Mukherjee et al obtain a version of Corollary 4.2 that is specific to supervised learning. In summary, comparing Theorem 4.1 to previous work, our results extend the generalization guarantee also to AERMs in the general learning setting.

Rakhlin et al [8] also show that the replacement (rather than deletion) version of stability implies generalization for any rule (even a non-AERM), and hence consistency for AERMs. Recall that the deletion and replacement versions are not equivalent. We are not aware of strong converses for the replacement variant.
5.2 Converse Results

Mukherjee et al [7] argue that all-i-LOO stability of the ERM (in fact, of any AERM) is also necessary for ERM universal consistency, and thus learnability. However, their arguments are specific to supervised learning, and establish stability only via uniform convergence of F_S(h) to F(h), as discussed in the introduction. As we now know, in the general learning setting, ERM consistency is not equivalent to this uniform convergence, and furthermore, there might be a non-ERM universally consistent rule even though the ERM is not universally consistent. Therefore, our results here apply to the general learning setting and do not use uniform convergence arguments.

For an ERM, Rakhlin et al [8] show that generalization is equivalent to on-average-LOO stability, for any distribution and without resorting to uniform convergence arguments. This provides a partial converse to Theorem 4.1. However, our results extend to AERMs as well and, more importantly, provide a converse to AERM consistency rather than just generalization. This distinction between consistency and generalization is important, as there are situations with consistent but not stable AERMs (Example 7.6), or even universally consistent learning rules which are not stable, generalizing, nor AERMs (Example 7.7).

Another converse result that does not use uniform convergence arguments, but is specific only to the realizable binary learning setting, was given by Kutin and Niyogi [5]. They show that in this setting, for any distribution D, all-i-LOO stability of the ERM under D is necessary for ERM consistency under D. This is a much stronger form of converse, as it applies to each specific distribution separately rather than requiring universal consistency. However, not only is it specific to supervised learning, it further requires the distribution to be realizable (i.e. zero error is achievable). As we show in Example 7.6, a distribution-specific converse is not possible in the general setting.
All the papers cited above focus on symmetric learning rules, where all-i-LOO stability is equivalent to LOO stability. We prefer not to limit our attention to symmetric rules, and instead use LOO stability.

6 Detailed Results and Proofs

We first establish that for AERMs, on-average-LOO stability and generalization are equivalent, and that for ERMs the equivalence extends also to LOO stability. This extends the work of Rakhlin et al [8] from ERMs to AERMs, and with somewhat better rate conversions.

6.1 Equivalence of Stability and Generalization

It will be convenient to work with a weaker version of generalization as an intermediate step: we say a rule A on-average generalizes with rate ε_oag(m) under distribution D if for all m,

  |E_{S~D^m}[F(A(S)) − F_S(A(S))]| ≤ ε_oag(m)   (8)

It is straightforward to see that generalization implies on-average generalization with the same rate. We show that for AERMs, the converse is also true, and also that on-average generalization is equivalent to on-average stability, establishing the equivalence between generalization and on-average stability (for AERMs).

Lemma 6.1 (For AERMs: on-average generalization ⟺ on-average stability). Let A be an AERM with rate ε_erm(m) under D.
- If A is on-average generalizing with rate ε_oag(m), then it is on-average-LOO stable with rate ε_oag(m−1) + 2ε_erm(m−1) + 2B/m.
- If A is on-average-LOO stable with rate ε_stable(m), then it is on-average generalizing with rate ε_stable(m+1) + 2ε_erm(m) + 2B/m.

Proof. For the ERMs of S and S^{\i} we have |F_S(ĥ_S) − F_{S^{\i}}(ĥ_{S^{\i}})| ≤ 2B/m, and so, since A is an AERM:

  E[|F_S(A(S)) − F_{S^{\i}}(A(S^{\i}))|] ≤ 2ε_erm(m−1) + 2B/m   (9)
(generalization ⟹ stability) Applying (8) to S^{\i} and combining with (9), we have |E[F(A(S^{\i})) − F_S(A(S))]| ≤ ε_oag(m−1) + 2ε_erm(m−1) + 2B/m, which does not actually depend on i, hence:

  ε_oag(m−1) + 2ε_erm(m−1) + 2B/m ≥ |E_i[E[F(A(S^{\i})) − F_S(A(S))]]|   (10)
    = |E_{S^{\i},z_i}[E_i[f(A(S^{\i}), z_i)]] − E_S[(1/m) Σ_i f(A(S), z_i)]|
    = |E[E_i[f(A(S^{\i}), z_i) − f(A(S), z_i)]]|   (11)

where E_i denotes expectation over a uniformly random index i. This establishes on-average stability.

(stability ⟹ generalization) Bounding (11) by ε_stable(m) and working back, we get that (10) is also bounded by ε_stable(m). Combined with (9) we get

  |E[F(A(S^{\i})) − F_{S^{\i}}(A(S^{\i}))]| ≤ ε_stable(m) + 2ε_erm(m−1) + 2B/m

which establishes on-average generalization. ∎

Lemma 6.2 (AERM + on-average generalization ⟹ generalization). If A is an AERM with rate ε_erm(m) and on-average generalizes with rate ε_oag(m) under D, then A generalizes with rate ε_oag(m) + 2ε_erm(m) + 2B/√m under D.

Proof. Using the respective optimalities of ĥ_S and h*, we can bound:

  F_S(A(S)) − F(A(S))
    = [F_S(A(S)) − F_S(ĥ_S)] + [F_S(ĥ_S) − F_S(h*)] + [F_S(h*) − F(h*)] + [F(h*) − F(A(S))]
    ≤ [F_S(A(S)) − F_S(ĥ_S)] + [F_S(h*) − F(h*)] =: Y   (12)

where the final equality defines a new random variable Y. By Lemma 6.3 and the AERM guarantee we have E[|Y|] ≤ ε_erm(m) + B/√m. From Lemma 6.4 we can conclude that

  E[|F_S(A(S)) − F(A(S))|] ≤ 2E[|Y|] − E[F_S(A(S)) − F(A(S))] ≤ ε_oag(m) + 2ε_erm(m) + 2B/√m. ∎

Utility Lemma 6.3. For i.i.d. X_i with |X_i| ≤ B and X̄ = (1/m) Σ X_i, we have E[|X̄ − E[X̄]|] ≤ B/√m.

Proof. E[|X̄ − E[X̄]|] ≤ √(Var[X̄]) = √(Var[X_1]/m) ≤ B/√m. ∎

Utility Lemma 6.4. Let X, Y be random variables such that X ≤ Y almost surely. Then E[|X|] ≤ 2E[|Y|] − E[X].

Proof. Denote a₊ = max(0, a) and observe that X ≤ Y implies X₊ ≤ Y₊ (this holds when both have the same sign, and when X ≤ 0 ≤ Y, while Y < 0 < X is not possible). We therefore have E[X₊] ≤ E[Y₊] ≤ E[|Y|]. Also note that |X| = 2X₊ − X. We can now calculate: E[|X|] = E[2X₊ − X] = 2E[X₊] − E[X] ≤ 2E[|Y|] − E[X]. ∎

For an exact ERM, we get a stronger equivalence:

Lemma 6.5 (ERM + on-average-LOO stable ⟹ LOO stable).
If an exact ERM A is on-average-LOO stable with rate ε_stable(m) under D, then it is also LOO stable under D with the same rate.

Proof. By optimality of ĥ_S = A(S) and of ĥ_{S^{\i}}:

  f(ĥ_{S^{\i}}, z_i) − f(ĥ_S, z_i) = m[F_S(ĥ_{S^{\i}}) − F_S(ĥ_S)] + (m−1)[F_{S^{\i}}(ĥ_S) − F_{S^{\i}}(ĥ_{S^{\i}})] ≥ 0.   (13)

Then, using on-average-LOO stability:

  (1/m) Σ_i E[|f(ĥ_{S^{\i}}, z_i) − f(ĥ_S, z_i)|] = |(1/m) Σ_i E[f(ĥ_{S^{\i}}, z_i) − f(ĥ_S, z_i)]| ≤ ε_stable(m). ∎

Lemma 6.5 can be extended also to AERMs with rate o(1/m). However, for AERMs with a slower rate, of at least Ω(1/m), Example 7.4 establishes that this stronger converse is not possible. We have now established the stability ⟺ generalization parts of Theorems 4.1, 4.3 and 4.4 (in fact, even a slightly stronger converse than in Theorems 4.3 and 4.4, as it does not require universality).

6.2 A Sufficient Condition for Consistency

It is also fairly straightforward to see that generalization (or even on-average generalization) of an AERM implies its consistency:

Lemma 6.6 (AERM + generalization ⟹ consistency). If A is an AERM with rate ε_erm(m) and on-average generalizes with rate ε_oag(m) under D, then it is consistent with rate ε_oag(m) + ε_erm(m) under D.

Proof.

  E[F(A(S)) − F(h*)] = E[F(A(S)) − F_S(h*)]
    = E[F(A(S)) − F_S(A(S))] + E[F_S(A(S)) − F_S(h*)]
    ≤ E[F(A(S)) − F_S(A(S))] + E[F_S(A(S)) − F_S(ĥ_S)]
    ≤ ε_oag(m) + ε_erm(m). ∎

Combined with the results of Section 6.1, this completes the proof of Theorem 4.1 and the stability ⟹ consistency and generalization ⟹ consistency parts of Theorems 4.3 and 4.4.

6.3 Converse Direction

Lemma 6.1 already provides a converse result, establishing that stability is necessary for generalization. However, in order to establish that stability is also necessary for universal consistency, we must prove that universal consistency of an AERM implies universal generalization. Note that consistency under a specific distribution for an AERM does not imply generalization nor stability (Example 7.6). We must instead rely on universal consistency. The main tool we use is the following lemma:
Lemma 6.7 (Main Converse Lemma). If a problem is learnable, i.e. there exists a universally consistent rule A with rate ε_cons(m), then under any distribution,

  E[|F_S(ĥ_S) − F(h*)|] ≤ ε_emp(m)

where ε_emp(m) = 2ε_cons(m′) + 2B/√m + 2Bm′²/m, for any sequence m′ = m′(m) such that m′ → ∞ and m′ = o(√m).

Proof. Let I = {I_1, ..., I_{m′}} be a random sample of indexes in the range 1..m, where each I_i is independently and uniformly distributed, and I is independent of S. Let S′ = {z_{I_i}}, i.e. a sample of size m′ drawn from the uniform distribution over instances in S (with replacement). We first bound the probability that I has repeated indexes ("duplicates"):

  Pr(I has duplicates) ≤ Σ_{i=1}^{m′} (i−1)/m ≤ m′²/(2m)   (14)

Conditioned on not having duplicates in I, the sample S′ is actually distributed according to D^{m′}, i.e. it can be viewed as a sample from the original distribution. We therefore have, by universal consistency:

  E[F(A(S′)) − F(h*) | no dups] ≤ 2ε_cons(m′)   (15)

But viewing S′ as a sample drawn from the uniform distribution over instances in S, we also have:

  E[F_S(A(S′)) − F_S(ĥ_S)] ≤ ε_cons(m′)   (16)

Conditioned on having no duplicates in I, S \ S′ (i.e. those instances in S not chosen by I) is independent of S′, and |S \ S′| = m − m′, so by Lemma 6.3:

  E[|F(A(S′)) − F_{S\S′}(A(S′))|] ≤ B/√(m − m′)   (17)

Finally, if there are no duplicates, then for any hypothesis, and in particular for A(S′), we have:

  |F_S(A(S′)) − F_{S\S′}(A(S′))| ≤ 2Bm′/m   (18)

Combining (15), (16), (17) and (18), accounting for a maximal discrepancy of B when we do have duplicates, and assuming m′²/m ≤ 1/2, we get the desired bound. ∎

In the supervised learning setting, Lemma 6.7 is just an immediate consequence of learnability being equivalent to consistency and generalization of the ERM. However, the Lemma applies also in the General Setting, where universal consistency might be achieved only by a non-ERM. The Lemma states that if a problem is learnable, then even though the ERM might not be consistent (as in, e.g., Example 7.2), the empirical risk achieved by the ERM is in fact an asymptotically unbiased estimator of F(h*).
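The sub-sampling device behind Lemma 6.7 is easy to check numerically. A hedged sketch (function names are this illustration's own): draw m′ indices uniformly with replacement from {1, ..., m}, and compare the exact probability of a repeated index with the union bound m′²/(2m) from (14).

```python
# Hedged illustration of the sub-sampling step in Lemma 6.7 (names are
# this sketch's own). Drawing I_1, ..., I_m' independently and uniformly
# from {1, ..., m}, the exact probability of a duplicate is
# 1 - prod_{i=0}^{m'-1} (1 - i/m), which the lemma bounds by m'^2 / (2m).

def prob_duplicates(m, m_prime):
    p_distinct = 1.0
    for i in range(m_prime):
        p_distinct *= (1 - i / m)
    return 1 - p_distinct

def union_bound(m, m_prime):
    return m_prime ** 2 / (2 * m)

m, m_prime = 10_000, 10        # m' on the order of m^{1/4}, as in the rates
exact = prob_duplicates(m, m_prime)
bound = union_bound(m, m_prime)
```

Conditioned on no duplicates, the m′ chosen instances are an i.i.d. sample from D, which is what lets the lemma convert universal consistency on small sub-samples into a statement about F_S(ĥ_S).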
Equipped with Lemma 6.7, we are now ready to show that universal consistency of an AERM implies generalization, and that any universally consistent and generalizing rule must be an AERM. What we show is actually a bit stronger: if a problem is learnable, and so Lemma 6.7 holds, then for any distribution D separately, consistency of an AERM under D implies generalization under D, and any consistent and generalizing rule under D must be an AERM.

Lemma 6.8 (learnable + AERM + consistent ⟹ generalizing). If Lemma 6.7 holds with rate ε_emp(m), and A is an ε_erm-AERM and ε_cons-consistent under D, then it is generalizing under D with rate ε_emp(m) + ε_erm(m) + ε_cons(m).

Proof.

  E[|F_S(A(S)) − F(A(S))|] ≤ E[F_S(A(S)) − F_S(ĥ_S)] + E[F(A(S)) − F(h*)] + E[|F_S(ĥ_S) − F(h*)|]
    ≤ ε_erm(m) + ε_cons(m) + ε_emp(m). ∎

Lemma 6.9 (learnable + consistent + generalizing ⟹ AERM). If Lemma 6.7 holds with rate ε_emp(m), and A is ε_cons-consistent and ε_gen-generalizing under D, then it is an AERM under D with rate ε_emp(m) + ε_gen(m) + ε_cons(m).

Proof.

  E[F_S(A(S)) − F_S(ĥ_S)] ≤ E[|F_S(A(S)) − F(A(S))|] + E[F(A(S)) − F(h*)] + E[|F(h*) − F_S(ĥ_S)|]
    ≤ ε_gen(m) + ε_cons(m) + ε_emp(m). ∎

Lemma 6.8 establishes that universal consistency of an AERM implies universal generalization, and thus completes the proof of Theorems 4.3 and 4.4. Lemma 6.9 establishes Theorem 4.5. To get the rates in Theorem 4.5, we use m′ = m^{1/4} in Lemma 6.7.

Lemmas 6.6, 6.8 and 6.9 together establish an interesting relationship:

Corollary 6.10. For a (universally) learnable problem, for any distribution D and learning rule A, any two of the following imply the third:
- A is an AERM under D.
- A is consistent under D.
- A generalizes under D.

Note, however, that any one property by itself is possible, even universally:
- The ERM in Example 7.2 is neither consistent nor generalizing, despite the problem being learnable.
- Example 7.7 demonstrates a universally consistent learning rule which is neither generalizing nor an AERM.
- A rule returning a fixed hypothesis always generalizes, but of course need not be consistent nor an AERM.

In contrast, for learnable supervised classification and regression problems, it is not possible for a learning rule to be just universally consistent, without being an AERM and without generalizing. Nor is it possible for a learning rule to be a universal AERM for a learnable problem without being generalizing and consistent.

Corollary 6.10 can also provide a certificate of non-learnability. E.g., for the problem in Example 7.6 we show a specific distribution for which there is a consistent AERM that does not generalize. We can conclude that there is no universally consistent learning rule for the problem, as otherwise the corollary would be violated.
6.4 Existence of a Stable Rule

Theorems 4.5 and 4.4, which we just completed proving, already establish that for AERMs, universal consistency is equivalent to universal on-average-LOO stability. Existence of a universally on-average-LOO stable AERM is thus sufficient for learnability. In order to prove that it is also necessary, it is enough to show that the existence of a universally consistent learning rule implies the existence of a universally consistent AERM. This AERM must then be on-average-LOO stable by Theorem 4.4. We actually show how to transform a consistent rule into a consistent and generalizing rule. If this rule is universally consistent, then by Lemma 6.9 we can conclude it must be an AERM, and by Lemma 6.1 that it must be on-average-LOO stable.

Lemma 6.11. For any rule A there exists a rule A′ such that:
- A′ universally generalizes with rate 3B/√m.
- For any D, if A is ε_cons-consistent under D, then A′ is ε_cons(⌊√m⌋)-consistent under D.

Proof. For a sample S of size m, let S′ be the sub-sample consisting of the first m′ = ⌊√m⌋ observations in S. Define A′(S) = A(S′). That is, A′ applies A to only m′ of the m observations in S.

A′ generalizes: We can decompose:

  m(F_S(A(S′)) − F(A(S′))) = m′(F_{S′}(A(S′)) − F(A(S′))) + (m − m′)(F_{S\S′}(A(S′)) − F(A(S′)))

The first term is bounded by 2Bm′. As for the second term, S \ S′ is statistically independent of S′, and so we can use Lemma 6.3 to bound its expected magnitude, obtaining:

  E[|F_S(A(S′)) − F(A(S′))|] ≤ 2Bm′/m + ((m − m′)/m)·B/√(m − m′) ≤ 3B/√m   (19)

A′ is consistent: If A is consistent, then:

  E[F(A′(S)) − inf_{h∈H} F(h)] = E[F(A(S′)) − inf_{h∈H} F(h)] ≤ ε_cons(m′). ∎

Proof of the converse in Theorem 4.6: If there exists a universally consistent rule with rate ε_cons(m), then by Lemma 6.11 there exists a rule A′ which is universally consistent and generalizing. Choosing m′ = m^{1/4} in Lemma 6.7 and applying Lemmas 6.9 and 6.11, we get the rates specified in (7).

Remark. We can strengthen the above theorem to show the existence of an on-average-LOO stable, always AERM (i.e. a rule which for every sample approximately minimizes F_S(h)).
The new learning rule for this purpose chooses the hypothesis returned by the original rule whenever its empirical risk is small, and chooses an ERM otherwise. The proof is completed via Markov's inequality, bounding the probability that we do not choose the hypothesis returned by the original learning rule.

7 Examples

Our first example (taken from [9]) shows that uniform convergence is not necessary for ERM consistency, i.e. universal ERM consistency can hold without uniform convergence. Of course, this can also happen in trivial settings where there is one hypothesis h_0 which dominates all other hypotheses (i.e. f(h_0, z) < f(h, z) for all z and all h ≠ h_0) [10]. However, the example below demonstrates a non-trivial situation with ERM universal consistency but no uniform convergence: there is no dominating hypothesis, and finding the optimal hypothesis does require learning. In particular, unlike trivial problems with a dominating hypothesis, in the example below there is not even local uniform convergence, i.e. there is no uniform convergence even among hypotheses that are close to being population optimal.

Example 7.1. There exists a learning problem for which any ERM is universally consistent, but the empirical risks do not converge uniformly to their expectations.

Proof. Consider a (strongly) convex stochastic optimization problem given by:

  f(w; (x, α)) = ‖α ∗ (w − x)‖² + ‖w‖² = Σ_i α[i](w[i] − x[i])² + ‖w‖²

where w is the hypothesis, w and x are elements of a unit ball around the origin of a Hilbert space with a countably infinite orthonormal basis e_1, e_2, ..., α is an infinite binary sequence, ∗ denotes the element-wise product, α[i] is the i-th coordinate of α, w[i] := ⟨w, e_i⟩, and x[i] is defined similarly. In our other submission [9], we show that the ERM is stable, hence consistent. However, when x = 0 a.s. and the coordinates of α are i.i.d. uniform on {0, 1}, there is no uniform convergence, not even locally. To see why, note that for a random sample S of any finite size, with probability one there exists an excluded basis vector e_j such that α_i[j] = 0 for all (x_i, α_i) ∈ S.
For any t > 0, we have F(te_j) − F_S(te_j) = t/2, regardless of the sample size: the term ‖α ∗ te_j‖ vanishes on the sample, while in expectation E[α[j]] = 1/2. Setting t = 1 establishes sup_w (F(w) − F_S(w)) ≥ 1/2 even as m → ∞, and so there is no uniform convergence. Choosing t arbitrarily small, we see that even when F(te_j) is close to optimal, the deviations F(w) − F_S(w) still do not converge to zero as m → ∞.

Perhaps more surprisingly, the next example (also taken from [9]) shows that in the general setting, learnability might require using a non-ERM.

Example 7.2. There exists a learning problem with a universally consistent learning rule, but for which no ERM is universally consistent.

Proof. Consider the same hypothesis space and sample space as before, with:

f(w; z) = ‖α ∗ (w − x)‖² + ε² Σᵢ 2^{−i}(w[i] − 1)²,

where ε = 0.01. When x = 0 a.s. and α is i.i.d. uniform, then the ERM must have ŵ[j] = 1. To see why, note that
for an excluded e_j (which exists a.s.), increasing w[j] towards one decreases the objective. But since ŵ[j] = 1, we have F(ŵ) ≥ 1/2, while inf_w F(w) ≤ F(0) ≤ ε, and so F(ŵ) does not converge to inf_w F(w). On the other hand,

A(S) = argmin_w F_S(w) + (20/√m)‖w‖²

is a uniformly-LOO stable AERM, and hence by Theorem 4.1 universally consistent.

In the next three examples, we show that in a certain sense, Theorem 4.3 and Theorem 4.4 cannot be improved with stronger stability notions. Viewed differently, they also constitute separation results between our various stability notions, and show which are strictly stronger than the others. Example 7.4 also demonstrates the gap between supervised learning and the general learning setting, by presenting a learning problem and an AERM that is universally consistent, but not LOO stable.

Example 7.3. There exists a learning problem with a universally consistent and all-i-LOO stable learning rule, but there is no universally consistent and uniform LOO stable learning rule.

Proof. This example is taken from [5]. Consider the hypothesis space {0, 1}, the instance space {0, 1}, and the objective function f(h, z) = |h − z|. It is straightforward to verify that an ERM is a universally consistent learning rule. It is also universally all-i-LOO stable, because removing an instance can change the hypothesis only if the original sample had an equal number of 0's and 1's (plus or minus one), which happens with probability at most O(1/√m), where m is the sample size. However, it is not hard to see that the only uniform LOO stable learning rule, at least for large enough sample sizes, is a constant rule which always returns the same hypothesis h regardless of the sample. Such a learning rule is obviously not universally consistent.

Example 7.4. There exists a learning problem with a universally consistent (and average-LOO stable) AERM, which is not LOO stable.

Proof. Let the instance space, hypothesis space and objective function be as in Example 7.3.
Consider the following learning rule, based on a sample S = (z_1, …, z_m): if Σᵢ 1{z_i = 1}/m > 1/2 + √(log(4)/2m), return 1. If Σᵢ 1{z_i = 1}/m < 1/2 − √(log(4)/2m), return 0. Otherwise, return Parity(S) = (z_1 + … + z_m) mod 2. This learning rule is an AERM, with ε_erm(m) = √(2 log(4)/m). Since we have only two hypotheses, we have uniform convergence of F_S(h) to F(h) for any hypothesis. Therefore, our learning rule universally generalizes (with rate ε_gen(m) = √(log(4/δ)/2m)), and by Theorem 4.4, this implies that the learning rule is also universally consistent and average-LOO stable.

However, the learning rule is not LOO stable. Consider the uniform distribution on the instance space. By Hoeffding's inequality, |Σᵢ 1{z_i = 1}/m − 1/2| ≤ √(log(4)/2m) with probability at least 1/2, for any sample size. In that case, the returned hypothesis is the parity function (even when we remove an instance from the sample, assuming m ≥ 3). When this happens, it is not hard to see that for any i,

f(A(S^{\i}); z_i) − f(A(S); z_i) = 1{z_i = 1}(−1)^{Parity(S)}.

This implies that

E[(1/m) Σᵢ |f(A(S^{\i}); z_i) − f(A(S); z_i)|]   (20)

is lower bounded by

(1/2) E[(1/m) Σᵢ 1{z_i = 1} | parity is returned] ≥ (1/2)(1/2 − √(log(4)/2m)) = 1/4 − √(log(4)/8m),

which does not converge to zero with the sample size. Therefore, the learning rule is not LOO stable. Note that the proof implies that average-LOO stability cannot be replaced even by weaker stability notions than LOO stability. For instance, a natural stability notion, intermediate between average-LOO stability and LOO stability, is

E_{S∼D^m}[ |(1/m) Σᵢ (f(A(S^{\i}); z_i) − f(A(S); z_i))| ],   (21)

where the absolute value is now over the entire sum, but inside the expectation. In the example used in the proof, (21) is still lower bounded by the same quantity as (20), which does not converge to zero with the sample size.

Example 7.5. There exists a learning problem with a universally consistent and LOO stable AERM, which is not symmetric and is not all-i-LOO stable.

Proof. Let the instance space be [0, 1], the hypothesis space [0, 1] ∪ {2}, and the objective function f(h, z) = 1{h = z}.
Consider the following learning rule A: given a sample, check if the value z_1 appears more than once in the sample. If not, return z_1; otherwise return 2. Since F_S(2) = 0, and z_1 is returned only if this value constitutes a 1/m fraction of the sample, the rule above is an AERM with rate ε_erm(m) = 1/m. To see universal consistency, let Pr(z_1) = p. With probability (1 − p)^{m−1}, z_1 ∉ {z_2, …, z_m}, and the returned hypothesis is z_1, with F(z_1) = p. Otherwise, the returned hypothesis is 2, with F(2) = 0. Hence E_S[F(A(S))] ≤ p(1 − p)^{m−1}, which can be easily verified to be at most 1/(m − 1), so the learning rule is consistent with rate ε_cons(m) ≤ 1/(m − 1). To see LOO stability, notice that the learned hypothesis can change by deleting z_i, i > 1, only if z_i is the only instance in z_2, …, z_m equal to z_1. So ε_stable(m) ≤ 2/m (in fact, LOO stability holds even without the expectation). However, this learning rule is not all-i-LOO stable. For instance, for any continuous distribution, |f(A(S^{\1}); z_1) − f(A(S); z_1)| = 1 with probability 1, so it obviously cannot be all-i-LOO stable with respect to i = 1.

Next we show that for specific distributions, even ERM consistency does not imply even our weakest notion of stability.
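The rule A of Example 7.5 above is simple enough to check mechanically. A short sketch (an illustration, not code from the paper; the sample size is an arbitrary choice):

```python
import random

def A(sample):
    """Return z_1 if it appears only once in the sample, otherwise the 'safe' hypothesis 2."""
    return sample[0] if sample.count(sample[0]) == 1 else 2

def f(h, z):                                # f(h, z) = 1{h = z}
    return 1.0 if h == z else 0.0

random.seed(4)
S = [random.random() for _ in range(50)]    # continuous distribution: all values distinct a.s.

assert f(A(S), S[0]) == 1.0                 # A(S) = z_1, which scores 1{z_1 = z_1} = 1
assert f(A(S[1:]), S[0]) == 0.0             # A(S^{\1}) is z_2 or 2, never z_1 (a.s.)
for i in range(1, len(S)):                  # deleting any other single point leaves A(S) = z_1
    assert A(S[:i] + S[i + 1:]) == A(S)
```

The loop checks the LOO stability claim for i > 1 in the continuous case, while the two assertions before it exhibit the all-i-LOO failure at i = 1.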
[Figure 1: Implications of various properties of learning problems. Diagram nodes: Uniform Convergence; ERM Strict Consistency; ERM Stability; ERM Consistency; All AERM Stable and Consistent; Exists Stable AERM; Exists Consistent AERM; Learnability. Consistency refers to universal consistency and stability refers to universal on-average-LOO stability.]

Example 7.6. There exists a learning problem and a distribution on the instance space, such that the ERM (or any AERM) is consistent but is not average-LOO stable.

Proof. Let the instance space be [0, 1], let the hypothesis space consist of all finite subsets of [0, 1], and define the objective function as f(h, z) = 1{z ∉ h}. Consider any continuous distribution on the instance space. Since the underlying distribution D is continuous, we have F(h) = 1 for any hypothesis h. Therefore, any learning rule (including any AERM) will be consistent, with F(A(S)) = 1. On the other hand, the ERM here always achieves F_S(ĥ_S) = 0, so any AERM cannot generalize, or even on-average-generalize (by Lemma 6.2), hence cannot be average-LOO stable (by Lemma 6.1).

Finally, the following example shows that while learnability is equivalent to the existence of stable and consistent AERMs (Theorem 4.4 and Theorem 4.6), there might still exist other learning rules which are neither of the above.

Example 7.7. There exists a learning problem with a universally consistent learning rule, which is not average-LOO stable, not generalizing, and not an AERM.

Proof. Let the instance space be [0, 1], let the hypothesis space consist of all finite subsets of [0, 1], and the objective function be the indicator function f(h, z) = 1{z ∈ h}. Consider the following learning rule: given a sample S ⊆ [0, 1], the learning rule checks if there are any two identical instances in the sample. If so, the learning rule returns the empty set. Otherwise, it returns the sample. This learning rule is not an AERM, nor does it necessarily generalize or is average-LOO stable. Consider any continuous distribution on [0, 1].
The learning rule always returns a countable set A(S), with F_S(A(S)) = 1, while F_S(∅) = 0 (so it is not an AERM), and F(A(S)) = 0 (so it does not generalize). Also, f(A(S); z_i) = 1 while f(A(S^{\i}); z_i) = 0 with probability 1, so it is not average-LOO stable either. However, the learning rule is universally consistent. If the underlying distribution is continuous on [0, 1], then the returned hypothesis is S, which is countable, hence F(S) = 0 = inf_h F(h). For discrete distributions, let M_1 denote the proportion of instances in the sample which appear exactly once, and let M_0 be the probability mass of instances which did not appear in the sample. Using [6, Theorem 3], we have that for any δ, it holds with probability at least 1 − δ over a sample of size m that

|M_0 − M_1| ≤ O(√(log(1/δ)/m)),

uniformly for any discrete distribution. If this event occurs, then either M_1 < 1, or M_0 ≥ 1 − O(√(log(1/δ)/m)). In the first case, there are duplicate instances in the sample, so the returned hypothesis is the optimal ∅; in the second case, the returned hypothesis is the sample, which has total probability mass 1 − M_0 ≤ O(√(log(1/δ)/m)), and therefore F(A(S)) ≤ O(√(log(1/δ)/m)). As a result, regardless of the underlying distribution, with probability at least 1 − δ over the sample,

F(A(S)) ≤ O(√(log(1/δ)/m)).

Since the r.h.s. converges to 0 with m for any fixed δ, it is easy to see that the learning rule is universally consistent.

8 Discussion

In the familiar setting of supervised classification or regression, the question of learnability is reduced to that of uniform convergence of empirical risks to their expectations, and in turn to finiteness of the fat-shattering dimension [1]. Furthermore, due to the equivalence of learnability and uniform convergence, there is no need to look beyond the ERM. We recently showed [9] that the situation in the General Learning Setting is substantially more complex. Universal ERM consistency might not be equivalent to uniform convergence, and furthermore, learnability might be possible only with a non-ERM.
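To make the last point concrete, the kind of non-ERM that rescues learnability in Example 7.2 can be sketched in a finite-dimensional truncation. With x = 0, the regularized empirical objective is separable across coordinates, so its minimizer has a per-coordinate closed form. Everything below (sample size, dimension, the coordinate-wise formula) is an illustration under the objective as reconstructed in Example 7.2, not the paper's construction:

```python
import random

random.seed(2)
m, d, eps = 6, 1 << 12, 0.01
lam = 20 / m ** 0.5                       # regularization weight of the non-ERM rule
alphas = [[random.randint(0, 1) for _ in range(d)] for _ in range(m)]
j = next(c for c in range(d) if all(a[c] == 0 for a in alphas))   # excluded coordinate

def coord_min(a_i, i, lam):
    """Minimizer of a_i*w^2 + eps^2*2^{-i}*(w - 1)^2 + lam*w^2 (coordinates indexed from 1)."""
    c = eps ** 2 * 2.0 ** -(i + 1)
    return c / (a_i + c + lam)

a_j = sum(a[j] for a in alphas) / m       # = 0.0: the sample never constrains coordinate j

# The ERM (lam = 0) drives the excluded coordinate all the way to 1,
# which costs E[alpha[j]] * 1 = 1/2 in population risk ...
assert coord_min(a_j, j, 0.0) == 1.0
# ... while the regularized rule keeps it negligible:
assert coord_min(a_j, j, lam) < eps ** 2 / lam
```

The design point: the regularization weight vanishes as m grows, so the rule remains an AERM, while still overruling the spurious incentive to push excluded coordinates to one.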
We are therefore in need of a new understanding of the question of learnability that applies more broadly than just to supervised classification and regression. In studying learnability in the General Setting, Vapnik [10] focuses solely on empirical risk minimization, which we now know is not sufficient for understanding learnability (e.g. Example 7.2). Furthermore, for empirical risk minimization, Vapnik establishes uniform convergence as a necessary and sufficient condition not for ERM consistency, but rather for strict consistency of the ERM. We now know that even in rather non-trivial problems (e.g. Example 7.1, taken from [9]), where the ERM is consistent and generalizes, strict consistency does not hold. Furthermore, Example 7.1 also demonstrates that ERM stability guarantees ERM consistency, but not strict consistency, perhaps giving another indication that strict consistency might be too strict (this and other relationships are depicted in Figure 1). In Examples 7.1 and 7.2 we see that stability is a strictly more general sufficient condition for learnability. This makes stability an appealing candidate for understanding learnability in the more general setting. Indeed, we show that stability is not only sufficient, but is also necessary for learning, even in the General Learning Setting. A previous such characterization was based on uniform convergence and thus applied only to supervised classification and regression [7]. Extending the characterization beyond these settings is particularly interesting, since for supervised classification and regression the question of learnability is already essentially solved. Extending the characterization, without relying on uniform convergence, also allows us to frame stability as the core condition guaranteeing learnability, with uniform convergence only a sufficient, but not necessary, condition for stability (see Figure 1).

In studying the question of learnability and its relation to stability, we encounter several differences between this more general setting, and settings such as supervised classification and regression where learnability is equivalent to uniform convergence. We summarize some of these distinctions:

Perhaps the most important distinction is that in the General Setting learnability might be possible only with a non-ERM. In this paper we establish that if a problem is learnable, although it might not be learnable with an ERM, it must be learnable with some AERM. And so, in the General Setting we must look beyond empirical risk minimization, but not beyond asymptotic empirical risk minimization.

In supervised classification and regression, if one AERM is universally consistent then all AERMs are universally consistent. In the General Setting we must choose the AERM carefully.

In supervised classification and regression, a universally consistent rule must also generalize and be an AERM. In the General Setting, a universally consistent rule need not generalize nor be an AERM, as Example 7.7 demonstrates. However, Theorem 4.5 establishes that, even in the General Setting, if a rule is universally consistent and generalizing then it must be an AERM. This gives us another reason to not look beyond asymptotic empirical risk minimization, even in the General Setting.

The above distinctions can also be seen through Corollary 6.10, which concerns the relationship between AERM, consistency and generalization in learnable problems.
In the General Setting, any two of these conditions imply the third, but it is possible for any one condition to hold without the others. In supervised classification and regression, if a problem is learnable then generalization always holds (for any rule), and so universal consistency and AERM imply each other.

In supervised classification and regression, ERM inconsistency for some distribution is enough to establish non-learnability. Establishing non-learnability in the General Setting is trickier, since one must consider all AERMs. We show how Corollary 6.10 can provide a certificate for non-learnability, in the form of a rule that is consistent and an AERM for some specific distribution, but does not generalize (Example 7.6).

In the General Setting, universal consistency of an AERM only guarantees on-average-LOO stability, but not LOO stability as in the supervised classification setting [7]. As we show in Example 7.4, this is a real difference and not merely a deficiency of our proofs.

We have begun exploring the issue of learnability in the General Setting, and uncovered important relationships between learnability and stability. But many problems are left open. Throughout the paper we ignored the issue of getting high-confidence concentration guarantees. We chose to use convergence in expectation, and defined the rates as rates on the expectation. Since the objective f is bounded, convergence in expectation is equivalent to convergence in probability, and using Markov's inequality we can translate a rate of the form E[·] ≤ ε(m) into a low-confidence guarantee Pr(· > ε(m)/δ) ≤ δ. Can we also obtain exponential concentration results of the form Pr(· > ε(m) · polylog(1/δ)) ≤ δ? It is possible to construct examples in the General Setting in which convergence in expectation of the stability does not imply exponential concentration of consistency and generalization. Is it possible to show that exponential concentration of stability is equivalent to exponential concentration of consistency and generalization?
We showed that existence of an average-LOO stable AERM is necessary and sufficient for learnability (Theorem 4.6). Although specific AERMs might be universally consistent and generalizing without being LOO stable (Example 7.4), it might still be possible to show that for a learnable problem, there always exists some LOO stable AERM. This would tighten our converse result and establish existence of a LOO stable AERM as equivalent to learnability. Even existence of a LOO stable AERM is not as elegant and simple as having finite VC dimension, or fat-shattering dimension. It would be very interesting to derive equivalent but more combinatorial conditions for learnability.

References

[1] N. Alon, S. Ben-David, N. Cesa-Bianchi, and D. Haussler. Scale-sensitive dimensions, uniform convergence, and learnability. J. ACM, 44(4):615-631, 1997.
[2] O. Bousquet and A. Elisseeff. Stability and generalization. J. Mach. Learn. Res., 2:499-526, 2002.
[3] David Haussler. Decision theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation, 100(1):78-150, 1992.
[4] Michael J. Kearns, Robert E. Schapire, and Linda M. Sellie. Toward efficient agnostic learning. In Proc. of COLT 5, 1992.
[5] S. Kutin and P. Niyogi. Almost-everywhere algorithmic stability and generalization error. In Proc. of UAI 18, 2002.
[6] D.A. McAllester and R.E. Schapire. On the convergence rate of Good-Turing estimators. In Proc. of COLT 13, 2000.
[7] S. Mukherjee, P. Niyogi, T. Poggio, and R. M. Rifkin. Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization. Adv. Comput. Math., 25(1-3):161-193, 2006.
[8] S. Rakhlin, S. Mukherjee, and T. Poggio. Stability results in learning theory. Analysis and Applications, 3(4):397-419, 2005.
[9] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Stochastic convex optimization. In Proceedings of COLT 22, 2009.
[10] V.N. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
A Replacement Stability

We used the Leave-One-Out (LOO) versions of the stability notions throughout the paper; however, many of the results hold when we use the replacement versions instead. Here we briefly survey the differences in the main results as they apply to replacement-based stabilities. Let S^{(i)} denote the sample S with z_i replaced by some other z′_i drawn from the same unknown distribution D.

Definition 5. A rule A is uniform-RO stable with rate ε_stable(m) if for all samples S of m points and all z′, z′_1, …, z′_m ∈ Z:

(1/m) Σᵢ₌₁^m |f(A(S^{(i)}); z′) − f(A(S); z′)| ≤ ε_stable(m).

Definition 6. A rule A is on-average-RO stable with rate ε_stable(m) under distribution D if

|E_{S∼D^m; z′_1,…,z′_m∼D}[(1/m) Σᵢ₌₁^m (f(A(S^{(i)}); z_i) − f(A(S); z_i))]| ≤ ε_stable(m).

With the above definitions replacing uniform-LOO stability and on-average-LOO stability respectively, all theorems in Section 4 other than Theorem 4.3 hold (i.e. Theorem 4.1, Corollary 4.2, Theorem 4.4 and Theorem 4.6). We do not know how to obtain a replacement variant of Theorem 4.3: even for a consistent ERM, we can only guarantee on-average-RO stability (as in Theorem 4.4), but we do not know if this is enough to ensure RO stability. However, although for ERMs we can only obtain a weaker converse, we can guarantee the existence of an AERM that is not only on-average-RO stable but actually uniform-RO stable. That is, we get a much stronger variant of Theorem 4.6:

Theorem A.1. A learning problem is learnable if and only if there exists a uniform-RO stable AERM.

Proof. Clearly, if there exists any rule A that is uniform-RO stable and an AERM, then the problem is learnable, since the learning rule A is in fact universally consistent by Theorem 4.1. On the other hand, if there exists a rule A that is universally consistent, then consider the rule A′ as in the construction of Lemma 6.11. As shown in the lemma, this rule is consistent. Now note that A′ only uses the first √m samples of S.
Hence for i > √m we have A′(S^{(i)}) = A′(S), and so:

(1/m) Σᵢ₌₁^m |f(A′(S^{(i)}); z′) − f(A′(S); z′)| = (1/m) Σᵢ₌₁^{√m} |f(A′(S^{(i)}); z′) − f(A′(S); z′)| ≤ 2B/√m.

We thus showed that this rule is consistent, generalizes, and is (2B/√m)-uniform-RO stable.
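The sub-sampling construction used in this proof can be sketched directly (an illustration with an arbitrary stand-in base rule, not the paper's implementation):

```python
import math
import random

def A_prime(rule, sample):
    """A'(S) = A(S'): run the base rule A on only the first floor(sqrt(m)) observations."""
    return rule(sample[: math.isqrt(len(sample))])

def mean(sample):                           # an arbitrary base rule standing in for A
    return sum(sample) / len(sample)

random.seed(6)
m = 100
S = [random.random() for _ in range(m)]
k = math.isqrt(m)                           # size of the prefix S' actually used (here 10)

# Replacing any observation outside the prefix cannot change A'(S) at all,
# so at most sqrt(m) of the m replacement terms are nonzero, each at most 2B,
# giving average replacement stability 2B/sqrt(m):
for i in range(k, m):
    S_i = S[:i] + [random.random()] + S[i + 1:]
    assert A_prime(mean, S_i) == A_prime(mean, S)
```

Only the √m prefix indices can contribute to the replacement-stability average, which is exactly where the 2B/√m rate comes from.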
More informationResource Allocation in Wireless Networks with Multiple Relays
Resource Allocation in Wireless Networks with Multiple Relays Kağan Bakanoğlu, Stefano Toasin, Elza Erkip Departent of Electrical and Coputer Engineering, Polytechnic Institute of NYU, Brooklyn, NY, 0
More informationESTIMATING LIQUIDITY PREMIA IN THE SPANISH GOVERNMENT SECURITIES MARKET
ESTIMATING LIQUIDITY PREMIA IN THE SPANISH GOVERNMENT SECURITIES MARKET Francisco Alonso, Roberto Blanco, Ana del Río and Alicia Sanchis Banco de España Banco de España Servicio de Estudios Docuento de
More informationCalculating the Return on Investment (ROI) for DMSMS Management. The Problem with Cost Avoidance
Calculating the Return on nvestent () for DMSMS Manageent Peter Sandborn CALCE, Departent of Mechanical Engineering (31) 45-3167 sandborn@calce.ud.edu www.ene.ud.edu/escml/obsolescence.ht October 28, 21
More informationCooperative Caching for Adaptive Bit Rate Streaming in Content Delivery Networks
Cooperative Caching for Adaptive Bit Rate Streaing in Content Delivery Networs Phuong Luu Vo Departent of Coputer Science and Engineering, International University - VNUHCM, Vietna vtlphuong@hciu.edu.vn
More informationHow To Get A Loan From A Bank For Free
Finance 111 Finance We have to work with oney every day. While balancing your checkbook or calculating your onthly expenditures on espresso requires only arithetic, when we start saving, planning for retireent,
More informationSoftware Quality Characteristics Tested For Mobile Application Development
Thesis no: MGSE-2015-02 Software Quality Characteristics Tested For Mobile Application Developent Literature Review and Epirical Survey WALEED ANWAR Faculty of Coputing Blekinge Institute of Technology
More informationApplying Multiple Neural Networks on Large Scale Data
0 International Conference on Inforation and Electronics Engineering IPCSIT vol6 (0) (0) IACSIT Press, Singapore Applying Multiple Neural Networks on Large Scale Data Kritsanatt Boonkiatpong and Sukree
More informationA CHAOS MODEL OF SUBHARMONIC OSCILLATIONS IN CURRENT MODE PWM BOOST CONVERTERS
A CHAOS MODEL OF SUBHARMONIC OSCILLATIONS IN CURRENT MODE PWM BOOST CONVERTERS Isaac Zafrany and Sa BenYaakov Departent of Electrical and Coputer Engineering BenGurion University of the Negev P. O. Box
More informationA quantum secret ballot. Abstract
A quantu secret ballot Shahar Dolev and Itaar Pitowsky The Edelstein Center, Levi Building, The Hebrerw University, Givat Ra, Jerusale, Israel Boaz Tair arxiv:quant-ph/060087v 8 Mar 006 Departent of Philosophy
More informationADJUSTING FOR QUALITY CHANGE
ADJUSTING FOR QUALITY CHANGE 7 Introduction 7.1 The easureent of changes in the level of consuer prices is coplicated by the appearance and disappearance of new and old goods and services, as well as changes
More informationApproximately-Perfect Hashing: Improving Network Throughput through Efficient Off-chip Routing Table Lookup
Approxiately-Perfect ing: Iproving Network Throughput through Efficient Off-chip Routing Table Lookup Zhuo Huang, Jih-Kwon Peir, Shigang Chen Departent of Coputer & Inforation Science & Engineering, University
More information1 if 1 x 0 1 if 0 x 1
Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or
More informationABSTRACT KEYWORDS. Comonotonicity, dependence, correlation, concordance, copula, multivariate. 1. INTRODUCTION
MEASURING COMONOTONICITY IN M-DIMENSIONAL VECTORS BY INGE KOCH AND ANN DE SCHEPPER ABSTRACT In this contribution, a new easure of coonotonicity for -diensional vectors is introduced, with values between
More information5.7 Chebyshev Multi-section Matching Transformer
/9/ 5_7 Chebyshev Multisection Matching Transforers / 5.7 Chebyshev Multi-section Matching Transforer Reading Assignent: pp. 5-55 We can also build a ultisection atching network such that Γ f is a Chebyshev
More informationAnalyzing Spatiotemporal Characteristics of Education Network Traffic with Flexible Multiscale Entropy
Vol. 9, No. 5 (2016), pp.303-312 http://dx.doi.org/10.14257/ijgdc.2016.9.5.26 Analyzing Spatioteporal Characteristics of Education Network Traffic with Flexible Multiscale Entropy Chen Yang, Renjie Zhou
More informationAn Optimal Task Allocation Model for System Cost Analysis in Heterogeneous Distributed Computing Systems: A Heuristic Approach
An Optial Tas Allocation Model for Syste Cost Analysis in Heterogeneous Distributed Coputing Systes: A Heuristic Approach P. K. Yadav Central Building Research Institute, Rooree- 247667, Uttarahand (INDIA)
More informationA Gas Law And Absolute Zero Lab 11
HB 04-06-05 A Gas Law And Absolute Zero Lab 11 1 A Gas Law And Absolute Zero Lab 11 Equipent safety goggles, SWS, gas bulb with pressure gauge, 10 C to +110 C theroeter, 100 C to +50 C theroeter. Caution
More informationThe Velocities of Gas Molecules
he Velocities of Gas Molecules by Flick Colean Departent of Cheistry Wellesley College Wellesley MA 8 Copyright Flick Colean 996 All rights reserved You are welcoe to use this docuent in your own classes
More informationEndogenous Credit-Card Acceptance in a Model of Precautionary Demand for Money
Endogenous Credit-Card Acceptance in a Model of Precautionary Deand for Money Adrian Masters University of Essex and SUNY Albany Luis Raúl Rodríguez-Reyes University of Essex March 24 Abstract A credit-card
More informationA Strategic Approach to Software Protection U
  Strategic pproach to Software Protection U OZ SHY University of Haifa, Israel and Stockhol School of Econoics, Sweden ozshy@econ.haifa.ac.il JCQUES-FRNCË OIS THISSE CORE, Universite Catholique de Louvain,
More informationINTEGRATED ENVIRONMENT FOR STORING AND HANDLING INFORMATION IN TASKS OF INDUCTIVE MODELLING FOR BUSINESS INTELLIGENCE SYSTEMS
Artificial Intelligence Methods and Techniques for Business and Engineering Applications 210 INTEGRATED ENVIRONMENT FOR STORING AND HANDLING INFORMATION IN TASKS OF INDUCTIVE MODELLING FOR BUSINESS INTELLIGENCE
More informationEnergy Efficient VM Scheduling for Cloud Data Centers: Exact allocation and migration algorithms
Energy Efficient VM Scheduling for Cloud Data Centers: Exact allocation and igration algoriths Chaia Ghribi, Makhlouf Hadji and Djaal Zeghlache Institut Mines-Téléco, Téléco SudParis UMR CNRS 5157 9, Rue
More informationStandards and Protocols for the Collection and Dissemination of Graduating Student Initial Career Outcomes Information For Undergraduates
National Association of Colleges and Eployers Standards and Protocols for the Collection and Disseination of Graduating Student Initial Career Outcoes Inforation For Undergraduates Developed by the NACE
More informationCapacity of Multiple-Antenna Systems With Both Receiver and Transmitter Channel State Information
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO., OCTOBER 23 2697 Capacity of Multiple-Antenna Systes With Both Receiver and Transitter Channel State Inforation Sudharan K. Jayaweera, Student Meber,
More informationWork Travel and Decision Probling in the Network Marketing World
TRB Paper No. 03-4348 WORK TRAVEL MODE CHOICE MODELING USING DATA MINING: DECISION TREES AND NEURAL NETWORKS Chi Xie Research Assistant Departent of Civil and Environental Engineering University of Massachusetts,
More informationREQUIREMENTS FOR A COMPUTER SCIENCE CURRICULUM EMPHASIZING INFORMATION TECHNOLOGY SUBJECT AREA: CURRICULUM ISSUES
REQUIREMENTS FOR A COMPUTER SCIENCE CURRICULUM EMPHASIZING INFORMATION TECHNOLOGY SUBJECT AREA: CURRICULUM ISSUES Charles Reynolds Christopher Fox reynolds @cs.ju.edu fox@cs.ju.edu Departent of Coputer
More informationQuality evaluation of the model-based forecasts of implied volatility index
Quality evaluation of the odel-based forecasts of iplied volatility index Katarzyna Łęczycka 1 Abstract Influence of volatility on financial arket forecasts is very high. It appears as a specific factor
More informationCRM FACTORS ASSESSMENT USING ANALYTIC HIERARCHY PROCESS
641 CRM FACTORS ASSESSMENT USING ANALYTIC HIERARCHY PROCESS Marketa Zajarosova 1* *Ph.D. VSB - Technical University of Ostrava, THE CZECH REPUBLIC arketa.zajarosova@vsb.cz Abstract Custoer relationship
More informationOptimal Resource-Constraint Project Scheduling with Overlapping Modes
Optial Resource-Constraint Proect Scheduling with Overlapping Modes François Berthaut Lucas Grèze Robert Pellerin Nathalie Perrier Adnène Hai February 20 CIRRELT-20-09 Bureaux de Montréal : Bureaux de
More informationMarkovian inventory policy with application to the paper industry
Coputers and Cheical Engineering 26 (2002) 1399 1413 www.elsevier.co/locate/copcheeng Markovian inventory policy with application to the paper industry K. Karen Yin a, *, Hu Liu a,1, Neil E. Johnson b,2
More informationNon-Price Equilibria in Markets of Discrete Goods
Non-Price Equilibria in Markets of Discrete Goods (working paper) Avinatan Hassidi Hai Kaplan Yishay Mansour Noa Nisan ABSTRACT We study arkets of indivisible ites in which price-based (Walrasian) equilibria
More informationHOW CLOSE ARE THE OPTION PRICING FORMULAS OF BACHELIER AND BLACK-MERTON-SCHOLES?
HOW CLOSE ARE THE OPTION PRICING FORMULAS OF BACHELIER AND BLACK-MERTON-SCHOLES? WALTER SCHACHERMAYER AND JOSEF TEICHMANN Abstract. We copare the option pricing forulas of Louis Bachelier and Black-Merton-Scholes
More informationConsiderations on Distributed Load Balancing for Fully Heterogeneous Machines: Two Particular Cases
Considerations on Distributed Load Balancing for Fully Heterogeneous Machines: Two Particular Cases Nathanaël Cheriere Departent of Coputer Science ENS Rennes Rennes, France nathanael.cheriere@ens-rennes.fr
More informationInternational Journal of Management & Information Systems First Quarter 2012 Volume 16, Number 1
International Journal of Manageent & Inforation Systes First Quarter 2012 Volue 16, Nuber 1 Proposal And Effectiveness Of A Highly Copelling Direct Mail Method - Establishent And Deployent Of PMOS-DM Hisatoshi
More informationEnrolment into Higher Education and Changes in Repayment Obligations of Student Aid Microeconometric Evidence for Germany
Enrolent into Higher Education and Changes in Repayent Obligations of Student Aid Microeconoetric Evidence for Gerany Hans J. Baugartner *) Viktor Steiner **) *) DIW Berlin **) Free University of Berlin,
More informationThe Fundamentals of Modal Testing
The Fundaentals of Modal Testing Application Note 243-3 Η(ω) = Σ n r=1 φ φ i j / 2 2 2 2 ( ω n - ω ) + (2ξωωn) Preface Modal analysis is defined as the study of the dynaic characteristics of a echanical
More informationThe AGA Evaluating Model of Customer Loyalty Based on E-commerce Environment
6 JOURNAL OF SOFTWARE, VOL. 4, NO. 3, MAY 009 The AGA Evaluating Model of Custoer Loyalty Based on E-coerce Environent Shaoei Yang Econoics and Manageent Departent, North China Electric Power University,
More informationReconnect 04 Solving Integer Programs with Branch and Bound (and Branch and Cut)
Sandia is a ultiprogra laboratory operated by Sandia Corporation, a Lockheed Martin Copany, Reconnect 04 Solving Integer Progras with Branch and Bound (and Branch and Cut) Cynthia Phillips (Sandia National
More informationIEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, ACCEPTED FOR PUBLICATION 1. Secure Wireless Multicast for Delay-Sensitive Data via Network Coding
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, ACCEPTED FOR PUBLICATION 1 Secure Wireless Multicast for Delay-Sensitive Data via Network Coding Tuan T. Tran, Meber, IEEE, Hongxiang Li, Senior Meber, IEEE,
More informationF=ma From Problems and Solutions in Introductory Mechanics (Draft version, August 2014) David Morin, morin@physics.harvard.edu
Chapter 4 F=a Fro Probles and Solutions in Introductory Mechanics (Draft version, August 2014) David Morin, orin@physics.harvard.edu 4.1 Introduction Newton s laws In the preceding two chapters, we dealt
More informationExact Nonparametric Tests for Comparing Means - A Personal Summary
Exact Nonparametric Tests for Comparing Means - A Personal Summary Karl H. Schlag European University Institute 1 December 14, 2006 1 Economics Department, European University Institute. Via della Piazzuola
More informationModeling Cooperative Gene Regulation Using Fast Orthogonal Search
8 The Open Bioinforatics Journal, 28, 2, 8-89 Open Access odeling Cooperative Gene Regulation Using Fast Orthogonal Search Ian inz* and ichael J. Korenberg* Departent of Electrical and Coputer Engineering,
More informationAudio Engineering Society. Convention Paper. Presented at the 119th Convention 2005 October 7 10 New York, New York USA
Audio Engineering Society Convention Paper Presented at the 119th Convention 2005 October 7 10 New York, New York USA This convention paper has been reproduced fro the authors advance anuscript, without
More informationA magnetic Rotor to convert vacuum-energy into mechanical energy
A agnetic Rotor to convert vacuu-energy into echanical energy Claus W. Turtur, University of Applied Sciences Braunschweig-Wolfenbüttel Abstract Wolfenbüttel, Mai 21 2008 In previous work it was deonstrated,
More informationStable Learning in Coding Space for Multi-Class Decoding and Its Extension for Multi-Class Hypothesis Transfer Learning
Stable Learning in Coding Space for Multi-Class Decoding and Its Extension for Multi-Class Hypothesis Transfer Learning Bang Zhang, Yi Wang 2, Yang Wang, Fang Chen 2 National ICT Australia 2 School of
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES
MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES Contents 1. Random variables and measurable functions 2. Cumulative distribution functions 3. Discrete
More informationLecture L9 - Linear Impulse and Momentum. Collisions
J. Peraire, S. Widnall 16.07 Dynaics Fall 009 Version.0 Lecture L9 - Linear Ipulse and Moentu. Collisions In this lecture, we will consider the equations that result fro integrating Newton s second law,
More informationPerformance Evaluation of Machine Learning Techniques using Software Cost Drivers
Perforance Evaluation of Machine Learning Techniques using Software Cost Drivers Manas Gaur Departent of Coputer Engineering, Delhi Technological University Delhi, India ABSTRACT There is a treendous rise
More information