Learning to Search Better than Your Teacher


Kai-Wei Chang, University of Illinois at Urbana-Champaign, IL
Akshay Krishnamurthy, Carnegie Mellon University, Pittsburgh, PA
Alekh Agarwal, Microsoft Research, New York, NY
Hal Daumé III, University of Maryland, College Park, MD
John Langford, Microsoft Research, New York, NY

Abstract

Methods for learning to search for structured prediction typically imitate a reference policy, with existing theoretical guarantees demonstrating low regret compared to that reference. This is unsatisfactory in many applications where the reference policy is suboptimal and the goal of learning is to improve upon it. Can learning to search work even when the reference is poor? We provide a new learning to search algorithm, LOLS, which does well relative to the reference policy, but additionally guarantees low regret compared to deviations from the learned policy: a local-optimality guarantee. Consequently, LOLS can improve upon the reference policy, unlike previous algorithms. This enables us to develop structured contextual bandits, a partial information structured prediction setting with many potential applications.

Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 2015. JMLR: W&CP volume 37. Copyright 2015 by the author(s).

1. Introduction

In structured prediction problems, a learner makes joint predictions over a set of interdependent output variables and observes a joint loss. For example, in a parsing task, the output is a parse tree over a sentence. Achieving optimal performance commonly requires the prediction of each output variable to depend on neighboring variables. One approach to structured prediction is learning to search (L2S) (Collins & Roark, 2004; Daumé III & Marcu, 2005; Daumé III et al., 2009; Ross et al., 2011; Doppa et al., 2014; Ross & Bagnell, 2014), which solves the problem by:

1. converting structured prediction into a search problem with specified search space and actions;
2. defining structured features over each state to capture the interdependency between output variables;
3. constructing a reference policy based on training data;
4. learning a policy that imitates the reference policy.

Empirically, L2S approaches have been shown to be competitive with other structured prediction approaches both in accuracy and running time (see e.g. Daumé III et al. (2014)). Theoretically, existing L2S algorithms guarantee that if the learning step performs well, then the learned policy is almost as good as the reference policy, implicitly assuming that the reference policy attains good performance. Good reference policies are typically derived using labels in the training data, such as assigning each word to its correct POS tag. However, when the reference policy is suboptimal, which can arise for reasons such as computational constraints, nothing can be said for existing approaches.

This problem is most obviously manifest in a structured contextual bandit¹ setting. For example, one might want to predict how the landing page of a high profile website should be displayed; this involves many interdependent predictions: items to show, position and size of those items, font, color, layout, etc. It may be plausible to derive a quality signal for the displayed page based on user feedback, and we may have access to a reasonable reference policy (namely the existing rule-based system that renders the current web page). But applying L2S techniques results in nonsense: learning something almost as good as the existing policy is useless, as we can just keep using the current system and obtain that guarantee. Unlike the full feedback settings, label information is not even available during learning to define a substantially better reference. The goal of learning here is to improve upon the current system, which is most likely far from optimal.

¹ The key difference from (1) contextual bandits is that the action space is exponentially large (in the length of trajectories in the search space); and from (2) reinforcement learning is that a baseline reference policy exists before learning starts.

This naturally leads to the question: is learning to search useless when the reference policy is poor? This is the core question of the paper, which we address first with a new L2S algorithm, LOLS (Locally Optimal Learning to Search), in Section 2. LOLS operates in an online fashion and achieves a bound on a convex combination of regret-to-reference and regret-to-own-one-step-deviations. The first part ensures that good reference policies can be leveraged effectively; the second part ensures that even if the reference policy is very suboptimal, the learned policy is approximately locally optimal in a sense made formal in Section 3.

LOLS operates according to a general schematic that encompasses many past L2S algorithms (see Section 2), including Searn (Daumé III et al., 2009), DAgger (Ross et al., 2011) and AggreVaTe (Ross & Bagnell, 2014). A secondary contribution of this paper is a theoretical analysis of both good and bad ways of instantiating this schematic under a variety of conditions, including: whether the reference policy is optimal or not, and whether the reference policy is in the hypothesis class or not. We find that, while past algorithms achieve good regret guarantees when the reference policy is optimal, they can fail rather dramatically when it is not.
LOLS, on the other hnd, hs superior performnce to other L2S lgorithms when the reference policy performs poorly but locl hill-climbing in policy spce is effective. In Section 5, we empiriclly confirm tht LOLS cn significntly outperform the reference policy in prctice on relworld dtsets. In Section 4 we extend LOLS to ddress the structured contextul bndit setting, giving nturl modifiction to the lgorithm s well s the corresponding regret nlysis. The proofs of our min results, nd the detils of the costsensitive clssifier used in experiments re deferred to the ppendix. The lgorithm LOLS, the new kind of regret gurntee it stisfies, the modifictions for the structured contextul bndit setting, nd ll experiments re new here. [ ] [V ] [N ] [N N ] [N V ] [N V N],loss=0 [N V V],loss= Figure 1. An illustrtion of the serch spce of sequentil tgging exmple tht ssigns prt-of-speech tg sequence to the sentence John sw Mry. Ech stte represents prtil lbeling. The strt stte b = [ ] nd the set of end sttes E = {[N V N], [N V V ],...}. Ech end stte is ssocited with loss. A policy chooses n ction t ech stte in the serch spce to specify the next stte. 2. Lerning to Serch A structured prediction problem consists of n input spce X, n output spce Y, fixed but unknown distribution D over X Y, nd non-negtive loss function l(y, ŷ) R 0 which mesures the distnce between the true (y ) nd predicted (ŷ) outputs. The gol of structured lerning is to use N smples (x i, y i ) N i=1 to lern mpping f : X Y tht minimizes the expected structured loss under D. In the lerning to serch frmework, n input x X induces serch spce, consisting of n initil stte b (which we will tke to lso encode x), set of end sttes nd trnsition function tht tkes stte/ction pirs s, nd deterministiclly trnsitions to new stte s. For ech end stte e, there is corresponding structured output y e nd for convenience we define the loss l(e) = l(y, y e ) where y will be cler from context. 
We futher define feture generting function Φ tht mps sttes to feture vectors in R d. The fetures express both the input x nd previous predictions (ctions). Fig. 1 shows n exmple serch spce 2. An gent follows policy π Π, which chooses n ction A(s) t ech non-terminl stte s. An ction specifies the next stte from s. We consider policies tht only ccess stte s through its feture vector Φ(s), mening tht π(s) is mpping from R d to the set of ctions A(s). A trjectory is complete sequence of stte/ction pirs from the strting stte b to n end stte e. Trjectories cn be generted by repetedly executing policy π in the serch spce. Without loss of generlity, we ssume the lengths of trjectories re fixed nd equl to T. The expected loss of policy J(π) is the expected loss of the end stte of the trjectory e π, where e E is n end stte reched by following the policy 3. Throughout, expecttions re tken with 2 Dopp et l. (2014) discuss severl pproches for defining serch spce. The theoreticl properties of our pproch do not depend on which serch spce definition is used. 3 Some imittion lerning literture (e.g., (Ross et l., 2011; He et l., 2012)) defines the loss of policy s n ccumultion of the costs of sttes nd ctions in the trjectory generted by the policy. For simplicity, we define the loss only bsed on the end

3 x X s r e rollin one-step devitions e rollout e y e Y, l(y e )=0.8 y e Y, l(y e )=0.0 y e Y, l(y e )=0.2 Figure 2. An exmple serch spce. The explortion begins t the strt stte s nd chooses the middle mong three ctions by the roll-in policy twice. Grey nodes re not explored. At stte r the lerning lgorithm considers the chosen ction (middle) nd both one-step devitions from tht ction (top nd bottom). Ech of these devitions is completed using the roll-out policy until n end stte is reched, t which point the loss is collected. Here, we lern tht deviting to the top ction (insted of middle) t stte r decreses the loss by 0.2. respect to drws of (x, y) from the trining distribution, s well s ny internl rndomness in the lerning lgorithm. An optiml policy chooses the ction leding to the miniml expected loss t ech stte. For losses decomposble over the sttes in trjectory, generting n optiml policy is trivil given y (e.g., the sequence tgging exmple in (Dumé III et l., 2009)). In generl, finding the optiml ction t sttes not in the optiml trjectory cn be tricky (e.g., (Goldberg & Nivre, 2013; Goldberg et l., 2014)). Finlly, like most other L2S lgorithms, LOLS ssumes ccess to cost-sensitive clssifiction lgorithm. A costsensitive clssifier predicts lbel ŷ given n exmple x, nd receives loss c x (ŷ), where c x is vector contining the cost for ech possible lbel. In order to perform online updtes, we ssume ccess to no-regret online costsensitive lerner, which we formlly define below. Definition 1. Given hypothesis clss H : X [K], the regret of n online cost-sensitive clssifiction lgorithm which produces hypotheses h 1,..., h M on cost-sensitive exmple sequence {(x 1, c 1 ),..., (x M, c M )} is Regret CS M = M M c m (h m (x m )) min c m (h(x m )). m=1 h H m=1 An lgorithm is no-regret if Regret CS M = o(m). 
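Definition 1 can be made concrete with a small sketch (ours, not the paper's): given the logged cost vectors and the hypotheses an online learner actually played, the regret compares the learner's accumulated cost to the best single hypothesis in hindsight. Here hypotheses are plain functions and the class H is a finite list, both illustrative assumptions.

```python
def cost_sensitive_regret(costs, played, hypothesis_class, examples):
    """Regret of Definition 1: sum of c_m(h_m(x_m)) minus the smallest
    total cost any single hypothesis h in H would have accumulated.

    costs[m]  -- cost vector c_m (list indexed by label)
    played[m] -- hypothesis h_m the online learner used at round m
    """
    incurred = sum(c[h(x)] for c, h, x in zip(costs, played, examples))
    best = min(sum(c[h(x)] for c, x in zip(costs, examples))
               for h in hypothesis_class)
    return incurred - best

# two hypotheses over examples in {0, 1}: predict the input, or always 0
h_id = lambda x: x
h_zero = lambda x: 0
H = [h_id, h_zero]

examples = [0, 1, 1]
costs = [[0.0, 1.0],   # on x = 0, label 0 is free
         [1.0, 0.0],   # on x = 1, label 1 is free
         [1.0, 0.0]]
# a learner that always played h_zero pays 0 + 1 + 1 = 2; h_id pays 0
r = cost_sensitive_regret(costs, [h_zero] * 3, H, examples)
# r == 2.0
```

A no-regret learner is one whose `r` grows sublinearly in the number of rounds M.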
Such no-regret guarantees can be obtained, for instance, by applying the SECOC technique (Langford & Beygelzimer, 2005) on top of any importance weighted binary classification algorithm that operates in an online fashion, examples being the perceptron algorithm or online ridge regression.

Algorithm 1  Locally Optimal Learning to Search (LOLS)

Require: Dataset {x_i, y_i}_{i=1}^N drawn from D and β ≥ 0: a mixture parameter for roll-out.
1: Initialize a policy π̂_0.
2: for all i ∈ {1, 2, ..., N} (loop over each instance) do
3:   Generate a reference policy π^ref based on y_i.
4:   Initialize Γ = ∅.
5:   for all t ∈ {0, 1, 2, ..., T−1} do
6:     Roll-in by executing π_i^in = π̂_i for t rounds and reach s_t.
7:     for all a ∈ A(s_t) do
8:       Let π_i^out = π^ref with probability β, otherwise π̂_i.
9:       Evaluate cost c_{i,t}(a) by rolling out with π_i^out for T − t − 1 steps.
10:     end for
11:     Generate a feature vector Φ(x_i, s_t).
12:     Set Γ = Γ ∪ {⟨c_{i,t}, Φ(x_i, s_t)⟩}.
13:   end for
14:   π̂_{i+1} ← Train(π̂_i, Γ)  (Update).
15: end for
16: Return the average policy across π̂_0, π̂_1, ..., π̂_N.

LOLS (see Algorithm 1) learns a policy π̂ ∈ Π to approximately minimize J(π),⁴ assuming access to a reference policy π^ref (which may or may not be optimal). The algorithm proceeds in an online fashion generating a sequence of learned policies π̂_0, π̂_1, π̂_2, .... At round i, a structured sample (x_i, y_i) is observed, and the configuration of a search space is generated along with the reference policy π^ref. Based on (x_i, y_i), LOLS constructs T cost-sensitive multiclass examples using a roll-in policy π_i^in and a roll-out policy π_i^out. The roll-in policy is used to generate an initial trajectory and the roll-out policy is used to derive the expected loss. More specifically, for each decision point t ∈ [0, T), LOLS executes π_i^in for t rounds, reaching a state s_t ∼ π_i^in. Then, a cost-sensitive multiclass example is generated using the features Φ(s_t). Classes in the multiclass example correspond to available actions in state s_t. The cost c(a) assigned to action a is the difference in loss between taking action a and the best action:

c(a) = ℓ(e(a)) − min_{a'} ℓ(e(a')),   (2)

where e(a) is the end state reached with a roll-out by π_i^out after taking action a in state s_t. LOLS collects the T examples from the different roll-out points and feeds the set of examples Γ into an online cost-sensitive multiclass learner, thereby updating the learned policy from π̂_i to π̂_{i+1}. By default, we use the learned policy π̂_i for roll-in and a mixture

⁴ We can parameterize the policy π̂ using a weight vector w ∈ ℝ^d such that a cost-sensitive classifier can be used to choose an action based on the features at each state. We do not consider using different weight vectors at different states.
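The roll-in/roll-out procedure just described can be sketched in code. This is our own toy encoding of a search space (a 5-tuple of functions) with the online learner and Train abstracted away; it is a schematic of Algorithm 1's inner loop, not the paper's implementation.

```python
import random

def rollout_loss(state, policy, space):
    # complete the trajectory with `policy` and return the end-state loss
    actions, step, phi, is_end, loss = space
    while not is_end(state):
        state = step(state, policy(phi(state)))
    return loss(state)

def lols_round(space, start, learned, reference, T, beta):
    """One structured example (schematic): for each t, roll in with the
    learned policy, score every one-step deviation by a roll-out with the
    reference (prob. beta) or learned (prob. 1 - beta) policy, and emit
    the Eq. (2) costs."""
    actions, step, phi, is_end, loss = space
    gamma = []                                  # the example set Γ
    for t in range(T):
        s = start
        for _ in range(t):                      # roll-in: t learned steps
            s = step(s, learned(phi(s)))
        out = reference if random.random() < beta else learned
        raw = {a: rollout_loss(step(s, a), out, space) for a in actions(s)}
        best = min(raw.values())
        gamma.append((phi(s), {a: raw[a] - best for a in raw}))  # Eq. (2)
    return gamma   # fed to the online cost-sensitive learner (Train)

# toy space: tag two words with "N" or "V"; gold tags are ["N", "V"]
GOLD = ["N", "V"]
space = (lambda s: ["N", "V"],                        # actions A(s)
         lambda s, a: s + [a],                        # transition
         lambda s: tuple(s),                          # features Φ(s)
         lambda s: len(s) == 2,                       # end-state test
         lambda s: sum(a != b for a, b in zip(s, GOLD)))  # Hamming loss
ref = lambda phi: GOLD[len(phi)]                      # optimal reference
gamma = lols_round(space, [], learned=lambda phi: "N",
                   reference=ref, T=2, beta=1.0)
# at t = 0 the costs are {"N": 0, "V": 1}; at t = 1 (after the learned
# policy tagged the first word "N") they are {"N": 1, "V": 0}
```

With beta = 1.0 every roll-out uses the reference, so the example is deterministic; intermediate beta values mix reference and learned roll-outs as in the algorithm.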

policy for roll-out. For each roll-out, the mixture policy either executes π^ref to an end state with probability β or π̂_i with probability 1 − β. LOLS converts into a batch algorithm with a standard online-to-batch conversion where the final model π̄ is generated by averaging π̂_i across all rounds (i.e., picking one of π̂_1, ..., π̂_N uniformly at random).

Table 1. Effect of different roll-in and roll-out policies.

    roll-in \ roll-out |    Reference     | Mixture | Learned
    Reference          |             Inconsistent
    Learned            | Not locally opt. |  Good   |   RL

The strategies marked "Inconsistent" might generate a learned policy with large structured regret, and the strategy marked "Not locally opt." could be much worse than its one-step deviations. The strategy marked "RL" reduces the structured learning problem to a reinforcement learning problem, which is much harder. The strategy marked "Good" is favored.

3. Theoretical Analysis

In this section, we analyze LOLS and answer the questions raised in Section 1. Throughout this section we use π̄ to denote the average policy obtained by first choosing n ∈ [1, N] uniformly at random and then acting according to π̂_n. We begin by discussing the choices of roll-in and roll-out policies. Table 1 summarizes the results of using different strategies for roll-in and roll-out.

The Bad Choices

An obvious bad choice is to roll in and roll out with the learned policy, because the learner is then blind to the reference policy. It reduces the structured learning problem to a reinforcement learning problem, which is much harder. To build intuition, we show two other bad cases.

Roll-in with π^ref is bad. Rolling in with the reference policy causes the state distribution to be unrealistically good. As a result, the learned policy never learns to correct for previous mistakes, performing poorly at test time. A related discussion can be found at Theorem 2.1 in (Ross & Bagnell, 2010). We show a theorem below.

Theorem 1. For π_i^in = π^ref, there is a distribution D over (x, y) such that the induced cost-sensitive regret Regret^CS_M = o(M) but J(π̄) − J(π^ref) = Ω(1).

Proof. We demonstrate examples where the claim is true. We start with the case where π_i^out = π_i^in = π^ref. In this case, suppose we have one structured example, whose search space is defined as in Figure 3(a). From state s_1, there are two possible actions: a and b (we will use actions and features interchangeably since features uniquely identify actions here); the (optimal) reference policy takes action a. From state s_2, there are again two actions (c and d); the reference takes c. Finally, even though the reference policy would never visit s_3, from that state it chooses action f.

Figure 3. Counterexamples for π_i^in = π^ref and π_i^out = π^ref: (a) π_i^in = π_i^out = π^ref; (b) π_i^in = π^ref with a constrained representation; (c) π_i^out = π^ref. All three examples have 7 states. The loss of each end state is specified in the figure: in (a) and (b), ℓ(e_1) = 0, ℓ(e_2) = 10, ℓ(e_3) = 100 and ℓ(e_4) = 0; in (c), ℓ(e_1) = 1, ℓ(e_2) = 1 − ɛ, ℓ(e_3) = 1 + ɛ and ℓ(e_4) = 0. A policy chooses actions to traverse the search space until it reaches an end state. Legal policies are bit-vectors, so that a policy with weight on a goes up in s_1 of Figure 3(a) while weight on b sends it down. Since features uniquely identify actions in this case, we just mark the edges with the corresponding features for simplicity. The reference policy is bold-faced. In Figure 3(b), the features are the same on either branch from s_1, so that the learned policy can do no better than pick randomly between the two. In Figure 3(c), states s_2 and s_3 share the same feature set (i.e., Φ(s_2) = Φ(s_3)); therefore, a policy chooses the same set of actions at states s_2 and s_3. Please see text for details.

When rolling in with π^ref, cost-sensitive examples are generated only at state s_1 (if we take a one-step deviation at s_1) and s_2, but never at s_3 (since that would require two deviations, one at s_1 and one at s_3). As a result, we can never learn how to make predictions at state s_3. Furthermore, under roll-out with π^ref, both actions from state s_1 lead to a loss of zero. The learner can therefore learn to take action c at state s_2 and b at state s_1, and achieve zero cost-sensitive regret, thereby thinking it is doing a good job. Unfortunately, when this policy is actually run, it performs as badly as possible (by taking action e half the time in s_3), which results in the large structured regret.

Next we consider the case where π_i^out is either the learned policy or a mixture with π^ref. When applied to the example in Figure 3(b), our feature representation is not expressive enough to differentiate between the two actions at state s_1, so the learned policy can do no better than pick randomly between the top and bottom branches from this state. The algorithm either rolls in with π^ref on s_1 and generates a cost-sensitive example at s_2, or generates a cost-sensitive example at s_1 and then completes a roll-out with π_i^out. Crucially, the algorithm still never generates a cost-sensitive example at the state s_3 (since it would have already taken a one-step deviation to reach s_3 and is constrained to do a roll-out from s_3). As a result, if the learned policy were to

choose the action e in s_3, it leads to zero cost-sensitive regret but large structured regret.

Despite these negative results, rolling in with the learned policy is robust to both the above failure modes. In Figure 3(a), if the learned policy picks action b in state s_1, then we can roll in to the state s_3, then generate a cost-sensitive example and learn that f is a better action than e. Similarly, we also observe a cost-sensitive example in s_3 in the example of Figure 3(b), which clearly demonstrates the benefits of rolling in with the learned policy as opposed to π^ref.

Roll-out with π^ref is bad if π^ref is not optimal. When the reference policy is not optimal or the reference policy is not in the hypothesis class, roll-out with π^ref can make the learner blind to compounding errors. The following theorem holds. We state this in terms of local optimality: a policy is locally optimal if changing any one decision it makes never improves its performance.

Theorem 2. For π_i^out = π^ref, there is a distribution D over (x, y) such that the induced cost-sensitive regret Regret^CS_M = o(M) but π̄ has arbitrarily large structured regret to one-step deviations.

Proof. Suppose we have only one structured example, whose search space is defined as in Figure 3(c), and the reference policy chooses a or c depending on the node. If we roll out with π^ref, we observe expected losses 1 and 1 + ɛ for actions a and b at state s_1, respectively. Therefore, the policy with zero cost-sensitive classification regret chooses actions a and d depending on the node. However, a one-step deviation (a → b) does radically better and can be learned by instead rolling out with a mixture policy.

The above theorems show the bad cases and motivate a good L2S algorithm which generates a learned policy that competes with the reference policy and deviations from the learned policy. In the following section, we show that Algorithm 1 is such an algorithm.

Regret Guarantees

Let Q^π(s_t, a) represent the expected loss of executing action a at state s_t and then executing policy π until reaching an end state. T is the number of decisions required before reaching an end state. For notational simplicity, we use Q^π(s_t, π') as shorthand for Q^π(s_t, π'(s_t)), where π'(s_t) is the action that π' takes at state s_t. Finally, we use d_π^t to denote the distribution over states at time t when acting according to the policy π. The expected loss of a policy is:

J(π) = E_{s∼d_π^t} [Q^π(s, π)],   (3)

for any t ∈ [0, T]. In words, this is the expected cost of rolling in with π up to some time t, taking π's action at time t and then completing the roll-out with π.

Our main regret guarantee for Algorithm 1 shows that LOLS minimizes a combination of regret to the reference policy π^ref and regret to its own one-step deviations. In order to concisely present the result, we give an additional definition which captures the regret of our approach:

δ_N = (1/(NT)) Σ_{i=1}^N Σ_{t=1}^T E_{s∼d^t_{π̂_i}} [ Q^{π_i^out}(s, π̂_i) − ( β min_a Q^{π^ref}(s, a) + (1 − β) min_a Q^{π̂_i}(s, a) ) ],   (4)

where π_i^out = β π^ref + (1 − β) π̂_i is the mixture policy used to roll out in Algorithm 1. With these definitions in place, we can now state our main result for Algorithm 1.

Theorem 3. Let δ_N be as defined in Equation (4). The averaged policy π̄ generated by running N steps of Algorithm 1 with mixing parameter β satisfies

β (J(π̄) − J(π^ref)) + (1 − β) Σ_{t=1}^T ( J(π̄) − min_{π∈Π} E_{s∼d_π̄^t} [Q^π̄(s, π)] ) ≤ T δ_N.

It might appear that the LHS of the theorem combines one term which is a constant with another scaling with T. We point the reader to Lemma 1 in the appendix to see why the terms are comparable in magnitude. Note that the theorem does not assume anything about the quality of the reference policy, and it might be arbitrarily suboptimal.

Assuming that Algorithm 1 uses a no-regret cost-sensitive classification algorithm (recall Definition 1), the first term in the definition of δ_N converges to

ℓ* = min_{π∈Π} (1/(NT)) Σ_{i=1}^N Σ_{t=1}^T E_{s∼d^t_{π̂_i}} [Q^{π_i^out}(s, π)].

This observation is formalized in the next corollary.

Corollary 1. Suppose we use a no-regret cost-sensitive classifier in Algorithm 1. As N → ∞, δ_N → δ_class, where

δ_class = ℓ* − (1/(NT)) Σ_{i,t} E_{s∼d^t_{π̂_i}} [ β min_a Q^{π^ref}(s, a) + (1 − β) min_a Q^{π̂_i}(s, a) ].

When β = 1, so that LOLS becomes almost identical to AggreVaTe (Ross & Bagnell, 2014), δ_class arises solely due to the policy class Π being restricted. For other values of β ∈ (0, 1), the asymptotic gap does not always vanish even if the policy class is unrestricted, since ℓ* amounts to obtaining min_a Q^{π_i^out}(s, a) in each state. This corresponds to taking the minimum of an average rather than the average of the corresponding minimum values. In order to avoid this asymptotic gap, it seems desirable to have regrets to the reference policy and one-step deviations

controlled individually, which is equivalent to having the guarantee of Theorem 3 for all values of β in [0, 1] rather than a specific one. As we show in the next section, guaranteeing a regret bound to one-step deviations when the reference policy is arbitrarily bad is rather tricky and can take an exponentially long time. Understanding structures where this can be done more tractably is an important question for future research. Nevertheless, the result of Theorem 3 has interesting consequences in several settings, some of which we discuss next.

1. The second term on the left in the theorem is always non-negative by definition, so the conclusion of Theorem 3 is at least as powerful as the existing regret guarantee to the reference policy when β = 1. Since the previous works in this area (Daumé III et al., 2009; Ross et al., 2011; Ross & Bagnell, 2014) have only studied regret guarantees to the reference policy, the quantity we are studying is strictly more difficult.

2. The asymptotic regret incurred by using a mixture policy for roll-out might be larger than that using the reference policy alone, when the reference policy is near-optimal. How the combination of these factors manifests in practice is empirically evaluated in Section 5.

3. When the reference policy is optimal, the first term is non-negative. Consequently, the theorem demonstrates that our algorithm competes with one-step deviations in this case. This is true irrespective of whether π^ref is in the policy class Π or not.

4. When the reference policy is very suboptimal, then the first term can be negative. In this case, the regret to one-step deviations can be large despite the guarantee of Theorem 3, since the first negative term allows the second term to be large while the sum stays bounded. However, when the first term is significantly negative, then the learned policy has already improved upon the reference policy substantially! This ability to improve upon a poor reference policy by using a mixture policy for rolling out is an important distinction for Algorithm 1 compared with previous approaches.
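Theorem 3 is stated in terms of J(π) and Q^π(s, a), which in practice are only available through rollouts. A minimal Monte Carlo sketch of estimating both, on an invented two-step search space (all names, and the toy loss, are ours; state features are elided and policies act on raw states):

```python
import random

def rollout(state, policy, transition, is_end, loss):
    # follow `policy` from `state` to an end state and return its loss
    while not is_end(state):
        state = transition(state, policy(state))
    return loss(state)

def estimate_J(policy, start, transition, is_end, loss, n=2000, seed=0):
    # Monte Carlo estimate of J(pi): average end-state loss over rollouts
    random.seed(seed)
    return sum(rollout(start, policy, transition, is_end, loss)
               for _ in range(n)) / n

def estimate_Q(state, action, policy, transition, is_end, loss,
               n=2000, seed=0):
    # Q^pi(s, a): take `action` at `state`, then follow `policy`
    random.seed(seed)
    return sum(rollout(transition(state, action), policy,
                       transition, is_end, loss) for _ in range(n)) / n

# toy chain: two binary decisions, loss = number of 1s chosen
transition = lambda s, a: s + [a]
is_end = lambda s: len(s) == 2
loss = lambda s: sum(s)
coin = lambda s: random.choice([0, 1])   # uniformly random policy

J = estimate_J(coin, [], transition, is_end, loss)      # exact value: 1.0
Q = estimate_Q([], 1, coin, transition, is_end, loss)   # exact value: 1.5
```

The estimates concentrate around the exact values at the usual O(1/sqrt(n)) Monte Carlo rate.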
Overall, Theorem 3 shows that the learned policy is either competitive with the reference policy and nearly locally optimal, or improves substantially upon the reference policy.

Hardness of local optimality

In this section we demonstrate that the process of reaching a local optimum (under one-step deviations) can be exponentially slow when the initial starting policy is arbitrary. This reflects the hardness of learning to search problems when equipped with a poor reference policy, even if local rather than global optimality is considered a yardstick. We establish this lower bound for a class of algorithms substantially more powerful than LOLS.

We start by defining a search space and policy class. Our search space consists of trajectories of length T, with 2 actions available at each step of the trajectory. We use 0 and 1 to index the two actions. We consider policies whose only feature in a state is the depth of the state in the trajectory, meaning that the action taken by any policy π in state s_t depends only on t. Consequently, each policy can be indexed by a bit string of length T. For instance, the policy 010⋯0 executes action 0 in the first step of any trajectory, action 1 in the second step and 0 at all other levels. It is easily seen that two policies are one-step deviations of each other if the corresponding bit strings have a Hamming distance of 1.

To establish a lower bound, consider the following powerful algorithmic pattern. Given a current policy π, the algorithm examines the cost J(π') for all the one-step deviations π' of π. It then chooses the policy with the smallest cost as its new learned policy. Note that access to the actual costs J(π) makes this algorithm more powerful than existing L2S algorithms, which can only estimate costs of policies through rollouts on individual examples. Suppose this algorithm starts from an initial policy π̂_0. How long does it take for the algorithm to reach a policy π̂_i which is locally optimal compared with all its one-step deviations? We next present a lower bound for algorithms of this style.

Theorem 4. Consider any algorithm which updates policies only by moving from the current policy to a one-step deviation. Then there is a search space, policy class and cost function where any such algorithm must make Ω(2^T) updates before reaching a locally optimal policy. Specifically, the lower bound also applies to Algorithm 1.

The result shows that competing with the seemingly reasonable benchmark of one-step deviations may be very challenging from an algorithmic perspective, at least without assumptions on the search space, policy class, loss function, or starting policy. For instance, the construction used to prove Theorem 4 does not apply to Hamming loss.

4. Structured Contextual Bandit

We now show that a variant of LOLS can be run in a structured contextual bandit setting, where only the loss of a single structured label can be observed. As mentioned, this setting has applications to webpage layout, personalized search, and several other domains. At each round, the learner is given an input example x, makes a prediction ŷ and suffers a structured loss ℓ(y, ŷ). We assume that the structured losses lie in the interval [0, 1], that the search space has depth T and that there are at most K actions available at each state. As before, the algorithm has access to a policy class Π, and also to a reference policy π^ref. It is important to emphasize that the reference policy does not have access to the true label, and the goal

is improving on the reference policy. Our approach is based on the ɛ-greedy algorithm which is a common strategy in partial feedback problems. Upon receiving an example x_i, the algorithm randomly chooses whether to explore or exploit on this example. With probability 1 − ɛ, the algorithm chooses to exploit and follows the recommendation of the current learned policy. With the remaining probability, the algorithm performs a randomized variant of the LOLS update. A detailed description is given in Algorithm 2.

Algorithm 2  Structured Contextual Bandit Learning

Require: Examples {x_i}_{i=1}^N, reference policy π^ref, exploration probability ɛ and mixture parameter β ≥ 0.
1: Initialize a policy π̂_0, and set I = ∅.
2: for all i = 1, 2, ..., N (loop over each instance) do
3:   Obtain the example x_i, set explore = 1 with probability ɛ, set n_i = |I|.
4:   if explore then
5:     Pick a random time t ∈ {0, 1, ..., T−1}.
6:     Roll-in by executing π_i^in = π̂_{n_i} for t rounds and reach s_t.
7:     Pick a random action a_t ∈ A(s_t); let K = |A(s_t)|.
8:     Let π_i^out = π^ref with probability β, otherwise π̂_{n_i}.
9:     Roll-out with π_i^out for T − t − 1 steps to evaluate ĉ(a) = K ℓ(e(a_t)) 1[a = a_t].
10:    Generate a feature vector Φ(x_i, s_t).
11:    π̂_{n_i+1} ← Train(π̂_{n_i}, ⟨ĉ, Φ(x_i, s_t)⟩).
12:    Augment I = I ∪ {π̂_{n_i+1}}.
13:  else
14:    Follow the trajectory of a policy π drawn randomly from I to an end state e; predict the corresponding structured output y_{ie}.
15:  end if
16: end for

We assess the algorithm's performance via a measure of regret, where the comparator is a mixture of the reference policy and the best one-step deviation. Let π̄_i be the averaged policy based on all policies in I at round i, and let y_{ie} be the predicted label in either step 9 or step 14 of Algorithm 2. The average regret is defined as:

Regret = (1/N) Σ_{i=1}^N ( E[ℓ(y_i, y_{ie})] − β E[ℓ(y_i, y_{ie_ref})] − (1 − β) (1/T) Σ_{t=1}^T min_{π∈Π} E_{s∼d^t_{π̄_i}} [Q^{π̄_i}(s, π)] ).

Recalling our earlier definition of δ_i in (4), we bound the regret of Algorithm 2; the proof is in the appendix.

Theorem 5. Algorithm 2 with parameter ɛ satisfies:

Regret ≤ ɛ + (1/N) Σ_{i=1}^N δ_{n_i}.

With a no-regret learning algorithm, we expect

δ_i ≤ δ_class + cK √(log|Π| / i),   (5)

where |Π| is the cardinality of the policy class. This leads to the following corollary, with proof in the appendix.

Corollary 2. In the setup of Theorem 5, suppose further that the underlying no-regret learner satisfies (5). Then with probability at least 1 − 2/(N⁵K²T² log(N|Π|))³,

Regret = O( (KT)^{2/3} (log(N|Π|)/N)^{1/3} + T δ_class ).

5. Experiments

This section shows that LOLS is able to improve upon a suboptimal reference policy and provides empirical evidence to support the analysis in Section 3. We conducted experiments on the following three applications.

Cost-sensitive multiclass classification. For each cost-sensitive multiclass sample, each choice of label has an associated cost. The search space for this task is a binary search tree. The root of the tree corresponds to the whole set of labels. We recursively split the set of labels in half, until each subset contains only one label. A trajectory through the search space is a path from root to leaf in this tree. The loss of the end state is defined by the cost. An optimal reference policy can lead the agent to the end state with the minimal cost. We also show results of using a bad reference policy which arbitrarily chooses an action at each state. The experiments are conducted on the KDDCup 99 dataset⁵ generated from a computer network intrusion detection task. The dataset contains 5 classes, 4,898,431 training and 311,029 test instances.

Part-of-speech tagging. The search space for POS tagging is left-to-right prediction. Under Hamming loss the trivial optimal reference policy simply chooses the correct part of speech for each word. We train on 38k sentences and test on 11k from the Penn Treebank (Marcus et al., 1993). One can construct suboptimal or even bad reference policies, but under Hamming loss these are all equivalent to the optimal policy, because roll-outs by any fixed policy will incur exactly the same loss and the learner can immediately learn from one-step deviations.
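The binary label-tree search space used in the cost-sensitive multiclass task above can be sketched as follows. The range-based state encoding and the "left"/"right" action names are our own illustrative choices, not the paper's implementation.

```python
def tree_predict(costs, policy):
    """Walk an implicit binary tree over labels 0..K-1: each state is a
    contiguous label range, each action halves it, and a leaf is a single
    label whose cost is the end-state loss (a sketch of the construction
    described in the text)."""
    lo, hi = 0, len(costs)          # root state: the whole label set
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if policy((lo, hi)) == "left":
            hi = mid
        else:
            lo = mid
    return lo, costs[lo]            # leaf label and its cost

# an optimal reference policy steers toward the cheapest label
costs = [0.9, 0.1, 0.4, 0.7]
def reference(state):
    lo, hi = state
    mid = (lo + hi) // 2
    return "left" if min(costs[lo:mid]) <= min(costs[mid:hi]) else "right"

label, cost = tree_predict(costs, reference)
# label == 1, cost == 0.1: the minimum-cost label is reached
```

A bad reference, as used in the experiments, would simply pick "left" or "right" arbitrarily at each state.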
⁵ kddcup99/kddcup99.html.
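Before the remaining results, a quick aside on Algorithm 2 (Section 4): its exploration step observes only one roll-out loss per example, yet the importance-weighted cost ĉ(a) = K ℓ(e(a_t)) 1[a = a_t] of step 9 is an unbiased estimate of the full cost vector when a_t is uniform, since E[ĉ(a)] = K ℓ(a) (1/K) = ℓ(a). A small simulation (names ours) checks this:

```python
import random

def bandit_cost_estimate(true_losses, rng):
    # step 9 of Algorithm 2 (schematic): pick a_t uniformly from K actions,
    # observe only its roll-out loss, return c_hat(a) = K*loss*1[a == a_t]
    K = len(true_losses)
    a_t = rng.randrange(K)
    return [K * true_losses[a_t] if a == a_t else 0.0 for a in range(K)]

rng = random.Random(0)
losses = [0.2, 0.8, 0.5]     # hypothetical true roll-out losses
n = 30000
avg = [0.0, 0.0, 0.0]
for _ in range(n):
    c = bandit_cost_estimate(losses, rng)
    avg = [s + v / n for s, v in zip(avg, c)]
# avg is close to `losses` (within Monte Carlo error)
```

The K factor trades variance for unbiasedness, which is why the regret bounds above pay a polynomial price in K.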

8 roll-out roll-in Reference Mixture Lerned Reference is optiml Reference Lerned Reference is bd Reference Lerned Tble 2. The verge cost on cost-sensitive clssifiction dtset; columns re roll-out nd rows re roll-in. The best result is bold. SEARN chieves nd when the reference policy is optiml nd bd, respectively. LOLS is Lerned/Mixture nd highlighted in green. roll-out roll-in Reference Mixture Lerned Reference is optiml Reference Lerned roll-out roll-in Reference Mixture Lerned Reference is optiml Reference Lerned Reference is suboptiml Reference Lerned Reference is bd Reference Lerned Tble 4. The UAS score on dependency prsing dt set; columns re roll-out nd rows re roll-in. The best result is bold. SEARN chieves 84.0, 81.1, nd 63.4 when the reference policy is optiml, suboptiml, nd bd, respectively. LOLS is Lerned/Mixture nd highlighted in green. Tble 3. The ccurcy on POS tgging; columns re roll-out nd rows re roll-in. The best result is bold. SEARN chieves LOLS is Lerned/Mixture nd highlighted in green. Dependency prsing. A dependency prser lerns to generte tree structure describing the syntctic dependencies between words in sentence (McDonld et l., 2005; Nivre, 2003). We implemented hybrid trnsition system (Kuhlmnn et l., 2011) which prses sentence from left to right with three ctions: SHIFT, REDUCELEFT nd REDUCERIGHT. We used the non-deterministic orcle (Goldberg & Nivre, 2013) s the optiml reference policy, which leds the gent to the best end stte rechble from ech stte. We lso designed two suboptiml reference policies. A bd reference policy chooses n rbitrry legl ction t ech stte. A suboptiml policy pplies greedy selection nd chooses the ction which leds to good tree when it is obvious; otherwise, it rbitrrily chooses legl ction. (This suboptiml reference ws the defult reference policy used prior to the work on nondeterministic orcles. 
We used data from the Penn Treebank Wall Street Journal corpus with the standard split: sections 02-21 for training and section 23 for test. Performance is evaluated by UAS (unlabeled attachment score), which measures the fraction of words that pick the correct parent.

For each task and each reference policy, we compare 6 different combinations of roll-in (learned or reference) and roll-out (learned, mixture or reference) strategies. We also include SEARN in the comparison, since it has notable differences from LOLS. SEARN rolls in and out with a mixture in which a different policy is drawn for each state, while LOLS draws a policy once per example. SEARN uses a batch learner, while LOLS uses an online learner. The policy in SEARN is a mixture over the policies produced at each iteration; for LOLS, it suffices to keep just the most recent one. It is an open research question whether a theoretical guarantee analogous to Theorem 3 can be established for SEARN.

Our implementation is based on Vowpal Wabbit,^6 a machine learning system that supports online learning and learning to search. For LOLS's mixture policy, we set β = 0.5; we found that LOLS is not sensitive to β, and setting β = 0.5 works well in practice. For SEARN, we set the mixture parameter to 1 − (1 − α)^t, where t is the number of rounds and α = …. Unless stated otherwise, all the learners take 5 passes over the data.

Tables 2, 3 and 4 show the results on cost-sensitive multiclass classification, POS tagging and dependency parsing, respectively. The empirical results qualitatively agree with the theory. Rolling in with the reference policy is always bad. When the reference policy is optimal, doing roll-outs with the reference is a good idea; when the reference policy is suboptimal or bad, rolling out with the reference is a bad idea, and mixture roll-outs perform substantially better. LOLS also significantly outperforms SEARN on all tasks.

Acknowledgements

Part of this work was carried out while Kai-Wei, Akshay and Hal were visiting Microsoft Research.

6 vw/
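The roll-out sampling difference between LOLS and SEARN described above can be sketched as follows. `reference` and `learned` are stand-in policies, and this illustrates only the sampling scheme, not the Vowpal Wabbit implementation:

```python
import random

def lols_rollout_policy(reference, learned, beta=0.5, rng=random):
    # LOLS: draw ONE policy per example; every roll-out step of that
    # example then follows the same policy.
    return reference if rng.random() < beta else learned

def searn_style_rollout_policy(reference, learned, beta=0.5, rng=random):
    # SEARN-style: draw a fresh policy at EVERY state.
    def policy(state):
        drawn = reference if rng.random() < beta else learned
        return drawn(state)
    return policy

reference = lambda state: "reference-action"
learned = lambda state: "learned-action"

per_example = lols_rollout_policy(reference, learned, beta=0.5,
                                  rng=random.Random(0))
# Within one example, all of LOLS's roll-out actions come from one policy:
assert len({per_example(t) for t in range(10)}) == 1
```

With β = 0.5 the LOLS mixture roll-out follows the reference on half the examples and the current learned policy on the other half, which is the combination reported as Learned/Mixture in the tables.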

References

Abbott, H. L. and Katchalski, M. On the snake in the box problem. Journal of Combinatorial Theory, Series B, 45(1):13-24, 1988.

Cesa-Bianchi, N. and Lugosi, G. Prediction, Learning, and Games. Cambridge University Press, 2006.

Collins, Michael and Roark, Brian. Incremental parsing with the perceptron algorithm. In Proceedings of the Conference of the Association for Computational Linguistics (ACL), 2004.

Daumé III, Hal and Marcu, Daniel. Learning as search optimization: Approximate large margin methods for structured prediction. In Proceedings of the International Conference on Machine Learning (ICML), 2005.

Daumé III, Hal, Langford, John, and Marcu, Daniel. Search-based structured prediction. Machine Learning Journal, 2009.

Daumé III, Hal, Langford, John, and Ross, Stéphane. Efficient programmable learning to search. arXiv preprint, 2014.

Doppa, Janardhan Rao, Fern, Alan, and Tadepalli, Prasad. HC-Search: A learning framework for search-based structured prediction. Journal of Artificial Intelligence Research (JAIR), 50, 2014.

Goldberg, Yoav and Nivre, Joakim. Training deterministic parsers with non-deterministic oracles. Transactions of the ACL, 1, 2013.

Goldberg, Yoav, Sartorio, Francesco, and Satta, Giorgio. A tabular method for dynamic oracles in transition-based parsing. Transactions of the ACL, 2, 2014.

He, He, Daumé III, Hal, and Eisner, Jason. Imitation learning by coaching. In Neural Information Processing Systems (NIPS), 2012.

Kuhlmann, Marco, Gómez-Rodríguez, Carlos, and Satta, Giorgio. Dynamic programming algorithms for transition-based dependency parsers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Volume 1. Association for Computational Linguistics, 2011.

Langford, John and Beygelzimer, Alina. Sensitive error correcting output codes. In Learning Theory. Springer, 2005.

Marcus, Mitch, Marcinkiewicz, Mary Ann, and Santorini, Beatrice. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.

McDonald, Ryan, Pereira, Fernando, Ribarov, Kiril, and Hajic, Jan. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of the Joint Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT/EMNLP), 2005.

Nivre, Joakim. An efficient algorithm for projective dependency parsing. In International Workshop on Parsing Technologies (IWPT), 2003.

Ross, Stéphane and Bagnell, J. Andrew. Efficient reductions for imitation learning. In Proceedings of the Workshop on Artificial Intelligence and Statistics (AISTATS), 2010.

Ross, Stéphane and Bagnell, J. Andrew. Reinforcement and imitation learning via interactive no-regret learning. arXiv preprint, 2014.

Ross, Stéphane, Gordon, Geoff J., and Bagnell, J. Andrew. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Workshop on Artificial Intelligence and Statistics (AISTATS), 2011.

Zinkevich, Martin. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the International Conference on Machine Learning (ICML), 2003.


More information

Portfolio approach to information technology security resource allocation decisions

Portfolio approach to information technology security resource allocation decisions Portfolio pproch to informtion technology security resource lloction decisions Shivrj Knungo Deprtment of Decision Sciences The George Wshington University Wshington DC 20052 knungo@gwu.edu Abstrct This

More information

Small Business Cloud Services

Small Business Cloud Services Smll Business Cloud Services Summry. We re thick in the midst of historic se-chnge in computing. Like the emergence of personl computers, grphicl user interfces, nd mobile devices, the cloud is lredy profoundly

More information

Discovering General Logical Network Topologies

Discovering General Logical Network Topologies Discovering Generl Logicl Network Topologies Mrk otes McGill University, Montrel, Quebec Emil: cotes@ece.mcgill.c Michel Rbbt nd Robert Nowk Rice University, Houston, TX Emil: {rbbt, nowk}@rice.edu Technicl

More information

Health insurance exchanges What to expect in 2014

Health insurance exchanges What to expect in 2014 Helth insurnce exchnges Wht to expect in 2014 33096CAEENABC 02/13 The bsics of exchnges As prt of the Affordble Cre Act (ACA or helth cre reform lw), strting in 2014 ALL Americns must hve minimum mount

More information

The Velocity Factor of an Insulated Two-Wire Transmission Line

The Velocity Factor of an Insulated Two-Wire Transmission Line The Velocity Fctor of n Insulted Two-Wire Trnsmission Line Problem Kirk T. McDonld Joseph Henry Lbortories, Princeton University, Princeton, NJ 08544 Mrch 7, 008 Estimte the velocity fctor F = v/c nd the

More information

Assuming all values are initially zero, what are the values of A and B after executing this Verilog code inside an always block? C=1; A <= C; B = C;

Assuming all values are initially zero, what are the values of A and B after executing this Verilog code inside an always block? C=1; A <= C; B = C; B-26 Appendix B The Bsics of Logic Design Check Yourself ALU n [Arthritic Logic Unit or (rre) Arithmetic Logic Unit] A rndom-numer genertor supplied s stndrd with ll computer systems Stn Kelly-Bootle,

More information

On the Robustness of Most Probable Explanations

On the Robustness of Most Probable Explanations On the Robustness of Most Probble Explntions Hei Chn School of Electricl Engineering nd Computer Science Oregon Stte University Corvllis, OR 97330 chnhe@eecs.oregonstte.edu Adnn Drwiche Computer Science

More information

Math Review 1. , where α (alpha) is a constant between 0 and 1, is one specific functional form for the general production function.

Math Review 1. , where α (alpha) is a constant between 0 and 1, is one specific functional form for the general production function. Mth Review Vribles, Constnts nd Functions A vrible is mthemticl bbrevition for concept For emple in economics, the vrible Y usully represents the level of output of firm or the GDP of n economy, while

More information

Small Business Networking

Small Business Networking Why Network is n Essentil Productivity Tool for Any Smll Business TechAdvisory.org SME Reports sponsored by Effective technology is essentil for smll businesses looking to increse their productivity. Computer

More information

Fast Demand Learning for Display Advertising Revenue Management

Fast Demand Learning for Display Advertising Revenue Management Fst Demnd Lerning for Disply Advertising Revenue Mngement Drgos Florin Ciocn Vivek F Fris April 30, 2014 Abstrct The present pper is motivted by the network revenue mngement problems tht occur in online

More information

FDIC Study of Bank Overdraft Programs

FDIC Study of Bank Overdraft Programs FDIC Study of Bnk Overdrft Progrms Federl Deposit Insurnce Corportion November 2008 Executive Summry In 2006, the Federl Deposit Insurnce Corportion (FDIC) initited two-prt study to gther empiricl dt on

More information

icbs: Incremental Cost based Scheduling under Piecewise Linear SLAs

icbs: Incremental Cost based Scheduling under Piecewise Linear SLAs i: Incrementl Cost bsed Scheduling under Piecewise Liner SLAs Yun Chi NEC Lbortories Americ 18 N. Wolfe Rd., SW3 35 Cupertino, CA 9514, USA ychi@sv.nec lbs.com Hyun Jin Moon NEC Lbortories Americ 18 N.

More information