Learning to Search Better than Your Teacher


Kai-Wei Chang, University of Illinois at Urbana-Champaign, IL
Akshay Krishnamurthy, Carnegie Mellon University, Pittsburgh, PA
Alekh Agarwal, Microsoft Research, New York, NY
Hal Daumé III, University of Maryland, College Park, MD
John Langford, Microsoft Research, New York, NY

Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 2015. JMLR: W&CP volume 37. Copyright 2015 by the author(s).

Abstract

Methods for learning to search for structured prediction typically imitate a reference policy, with existing theoretical guarantees demonstrating low regret compared to that reference. This is unsatisfactory in many applications where the reference policy is suboptimal and the goal of learning is to improve upon it. Can learning to search work even when the reference is poor? We provide a new learning to search algorithm, LOLS, which does well relative to the reference policy, but additionally guarantees low regret compared to deviations from the learned policy: a local-optimality guarantee. Consequently, LOLS can improve upon the reference policy, unlike previous algorithms. This enables us to develop structured contextual bandits, a partial information structured prediction setting with many potential applications.

1. Introduction

In structured prediction problems, a learner makes joint predictions over a set of interdependent output variables and observes a joint loss. For example, in a parsing task, the output is a parse tree over a sentence. Achieving optimal performance commonly requires the prediction of each output variable to depend on neighboring variables. One approach to structured prediction is learning to search (L2S) (Collins & Roark, 2004; Daumé III & Marcu, 2005; Daumé III et al., 2009; Ross et al., 2011; Doppa et al., 2014; Ross & Bagnell, 2014), which solves the problem by:

1. converting structured prediction into a search problem with specified search space and actions;
2. defining structured features over each state to capture the interdependency between output variables;
3. constructing a reference policy based on training data;
4.
learning a policy that imitates the reference policy.

Empirically, L2S approaches have been shown to be competitive with other structured prediction approaches both in accuracy and running time (see e.g. Daumé III et al. (2014)). Theoretically, existing L2S algorithms guarantee that if the learning step performs well, then the learned policy is almost as good as the reference policy, implicitly assuming that the reference policy attains good performance. Good reference policies are typically derived using labels in the training data, such as assigning each word to its correct POS tag. However, when the reference policy is suboptimal, which can arise for reasons such as computational constraints, nothing can be said for existing approaches. This problem is most obviously manifest in a structured contextual bandit setting (the key difference from contextual bandits is that the action space is exponentially large, in the length of trajectories in the search space; and from reinforcement learning, that a baseline reference policy exists before learning starts). For example, one might want to predict how the landing page of a high-profile web
site should be displayed; this involves many interdependent predictions: items to show, position and size of those items, font, color, layout, etc. It may be plausible to derive a quality signal for the displayed page based on user feedback, and we may have access to a reasonable reference policy (namely the existing rule-based system that renders the current web page). But applying L2S techniques results in nonsense: learning something almost as good as the existing policy is useless, as we can just keep using the current system and obtain that guarantee. Unlike the full feedback settings, label information is not even available during learning to define a substantially better reference. The goal of learning here is to improve upon the current system, which is most likely far from optimal.

This naturally leads to the question: is learning to search useless when the reference policy is poor? This is the core question of the paper, which we address first with a new L2S algorithm, LOLS (Locally Optimal Learning to Search), in Section 2. LOLS operates in an online fashion and achieves a bound on a convex combination of regret-to-reference and regret-to-own-one-step-deviations. The first part ensures that good reference policies can be leveraged effectively; the second part ensures that even if the reference policy is very suboptimal, the learned policy is approximately locally optimal in a sense made formal in Section 3. LOLS operates according to a general schematic that encompasses many past L2S algorithms (see Section 2), including SEARN (Daumé III et al., 2009), DAgger (Ross et al., 2011) and AggreVaTe (Ross & Bagnell, 2014). A secondary contribution of this paper is a theoretical analysis of both good and bad ways of instantiating this schematic under a variety of conditions, including: whether the reference policy is optimal or not, and whether the reference policy is in the hypothesis class or not. We find that, while past algorithms achieve good regret guarantees when the reference policy is optimal, they can fail rather dramatically when it is not.
LOLS, on the other hand, has superior performance to other L2S algorithms when the reference policy performs poorly but local hill-climbing in policy space is effective. In Section 5, we empirically confirm that LOLS can significantly outperform the reference policy in practice on real-world datasets. In Section 4 we extend LOLS to address the structured contextual bandit setting, giving a natural modification to the algorithm as well as the corresponding regret analysis. The proofs of our main results, and the details of the cost-sensitive classifier used in experiments, are deferred to the appendix. The algorithm LOLS, the new kind of regret guarantee it satisfies, the modifications for the structured contextual bandit setting, and all experiments are new here.

Figure 1. An illustration of the search space of a sequential tagging example that assigns a part-of-speech tag sequence to the sentence "John saw Mary". States such as [ ], [V], [N], [N N], [N V] represent partial labelings; end states include [N V N] (loss 0) and [N V V]. The start state is b = [ ] and the set of end states is E = {[N V N], [N V V], ...}. Each end state is associated with a loss. A policy chooses an action at each state in the search space to specify the next state.

2. Learning to Search

A structured prediction problem consists of an input space X, an output space Y, a fixed but unknown distribution D over X × Y, and a non-negative loss function l(y*, ŷ) ∈ R≥0 which measures the distance between the true (y*) and predicted (ŷ) outputs. The goal of structured learning is to use N samples (x_i, y_i), i = 1, ..., N, to learn a mapping f : X → Y that minimizes the expected structured loss under D.

In the learning to search framework, an input x ∈ X induces a search space, consisting of an initial state b (which we will take to also encode x), a set of end states, and a transition function that takes state/action pairs s, a and deterministically transitions to a new state s'. For each end state e, there is a corresponding structured output y_e, and for convenience we define the loss l(e) = l(y*, y_e) where y* will be clear from context.
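The search space of Figure 1 can be sketched concretely. This is a minimal illustration, assuming a Hamming-style loss for the example; all names and encodings below are invented, not the paper's implementation.

```python
# A minimal sketch of the sequential-tagging search space in Figure 1,
# for the sentence "John saw Mary". All names here are illustrative.
TAGS = ["N", "V"]              # available actions at every state
SENTENCE = ["John", "saw", "Mary"]
GOLD = ["N", "V", "N"]         # true tag sequence y*

def transition(state, action):
    """Deterministically extend a partial labeling with one tag."""
    return state + [action]

def is_end(state):
    return len(state) == len(SENTENCE)

def loss(end_state):
    """Hamming loss against the gold sequence (an assumed choice)."""
    return sum(a != y for a, y in zip(end_state, GOLD))

def run_policy(policy, start=()):
    """Generate a trajectory by repeatedly executing a policy."""
    state = list(start)
    while not is_end(state):
        state = transition(state, policy(state))
    return state

# A policy maps each state to an action; this toy one always tags "N".
greedy_noun = lambda state: "N"
end = run_policy(greedy_noun)
print(end, loss(end))   # prints ['N', 'N', 'N'] 1 (misses the verb)
```

A trajectory here is exactly the state/action sequence from b = [ ] to an end state, and J(π) would be the expectation of `loss(run_policy(π))` over draws of sentences.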
We further define a feature generating function Φ that maps states to feature vectors in R^d. The features express both the input x and previous predictions (actions). Fig. 1 shows an example search space.²

An agent follows a policy π ∈ Π, which chooses an action a ∈ A(s) at each non-terminal state s. An action specifies the next state from s. We consider policies that only access state s through its feature vector Φ(s), meaning that π(s) is a mapping from R^d to the set of actions A(s). A trajectory is a complete sequence of state/action pairs from the starting state b to an end state e. Trajectories can be generated by repeatedly executing a policy π in the search space. Without loss of generality, we assume the lengths of trajectories are fixed and equal to T. The expected loss J(π) of a policy is the expected loss of the end state e_π of the trajectory, where e_π ∈ E is the end state reached by following the policy.³ Throughout, expectations are taken with respect to draws of (x, y) from the training distribution, as well as any internal randomness in the learning algorithm.

² Doppa et al. (2014) discuss several approaches for defining a search space. The theoretical properties of our approach do not depend on which search space definition is used.
³ Some imitation learning literature (e.g., (Ross et al., 2011; He et al., 2012)) defines the loss of a policy as an accumulation of the costs of states and actions in the trajectory generated by the policy. For simplicity, we define the loss only based on the end state. However, our theorems can be generalized.
Figure 2. An example search space. The exploration begins at the start state s and chooses the middle among three actions by the roll-in policy twice. Grey nodes are not explored. At state r the learning algorithm considers the chosen action (middle) and both one-step deviations from that action (top and bottom). Each of these deviations is completed using the roll-out policy until an end state is reached (the three completions reach end states y_e ∈ Y with losses 0.8, 0.0 and 0.2), at which point the loss is collected. Here, we learn that deviating to the top action (instead of middle) at state r decreases the loss by 0.2.

An optimal policy chooses the action leading to the minimal expected loss at each state. For losses decomposable over the states in a trajectory, generating an optimal policy is trivial given y* (e.g., the sequence tagging example in (Daumé III et al., 2009)). In general, finding the optimal action at states not in the optimal trajectory can be tricky (e.g., (Goldberg & Nivre, 2013; Goldberg et al., 2014)).

Finally, like most other L2S algorithms, LOLS assumes access to a cost-sensitive classification algorithm. A cost-sensitive classifier predicts a label ŷ given an example x, and receives a loss c_x(ŷ), where c_x is a vector containing the cost for each possible label. In order to perform online updates, we assume access to a no-regret online cost-sensitive learner, which we formally define below.

Definition 1. Given a hypothesis class H : X → [K], the regret of an online cost-sensitive classification algorithm which produces hypotheses h_1, ..., h_M on a cost-sensitive example sequence {(x_1, c_1), ..., (x_M, c_M)} is

    Regret^CS_M = Σ_{m=1}^M c_m(h_m(x_m)) − min_{h ∈ H} Σ_{m=1}^M c_m(h(x_m)).    (1)

An algorithm is no-regret if Regret^CS_M = o(M).
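Definition 1 can be made concrete with a toy computation. All data and hypotheses below are invented for illustration; the hypothesis class here is just two constant predictors.

```python
# A toy illustration of Definition 1: the regret of an online cost-sensitive
# learner is its cumulative cost minus the cumulative cost of the single best
# fixed hypothesis in hindsight. All values are made up.
def cs_regret(predictions, costs, hypothesis_class, xs):
    """predictions[m] is the label h_m(x_m); costs[m] is the cost vector c_m."""
    learner_cost = sum(c[y] for c, y in zip(costs, predictions))
    best_fixed = min(
        sum(c[h(x)] for c, x in zip(costs, xs)) for h in hypothesis_class
    )
    return learner_cost - best_fixed

# Two examples, three labels; two constant hypotheses.
xs = [0, 1]
costs = [[0.0, 1.0, 0.5], [1.0, 0.0, 0.5]]
H = [lambda x: 0, lambda x: 2]
# The learner pays 1.0 + 1.0 = 2.0; the best fixed hypothesis pays 1.0.
print(cs_regret([1, 0], costs, H, xs))  # prints 1.0
```

A no-regret learner is one whose value here grows sublinearly in the number of examples M.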
Such no-regret guarantees can be obtained, for instance, by applying the SECOC technique (Langford & Beygelzimer, 2005) on top of any importance-weighted binary classification algorithm that operates in an online fashion, examples being the perceptron algorithm or online ridge regression.

Algorithm 1 Locally Optimal Learning to Search (LOLS)
Require: Dataset {x_i, y_i}, i = 1, ..., N, drawn from D, and β ≥ 0: a mixture parameter for roll-out.
 1: Initialize a policy π̂_0.
 2: for all i ∈ {1, 2, ..., N} (loop over each instance) do
 3:   Generate a reference policy π_ref based on y_i.
 4:   Initialize Γ = ∅.
 5:   for all t ∈ {0, 1, 2, ..., T−1} do
 6:     Roll in by executing π_i^in = π̂_i for t rounds and reach s_t.
 7:     for all a ∈ A(s_t) do
 8:       Let π_i^out = π_ref with probability β, otherwise π̂_i.
 9:       Evaluate cost c_{i,t}(a) by rolling out with π_i^out for T − t − 1 steps.
10:     end for
11:     Generate a feature vector Φ(x_i, s_t).
12:     Set Γ = Γ ∪ {⟨c_{i,t}, Φ(x_i, s_t)⟩}.
13:   end for
14:   π̂_{i+1} ← Train(π̂_i, Γ) (Update).
15: end for
16: Return the average policy across π̂_0, π̂_1, ..., π̂_N.

LOLS (see Algorithm 1) learns a policy π̂ ∈ Π to approximately minimize J(π),⁴ assuming access to a reference policy π_ref (which may or may not be optimal). The algorithm proceeds in an online fashion, generating a sequence of learned policies π̂_0, π̂_1, π̂_2, .... At round i, a structured sample (x_i, y_i) is observed, and the configuration of a search space is generated along with the reference policy π_ref. Based on (x_i, y_i), LOLS constructs T cost-sensitive multiclass examples using a roll-in policy π_i^in and a roll-out policy π_i^out. The roll-in policy is used to generate an initial trajectory and the roll-out policy is used to derive the expected loss. More specifically, for each decision point t ∈ [0, T), LOLS executes π_i^in for t rounds, reaching a state s_t ∼ π_i^in. Then, a cost-sensitive multiclass example is generated using the features Φ(s_t). Classes in the multiclass example correspond to available actions in state s_t.
The cost c(a) assigned to an action a is the difference in loss between taking action a and the best action:

    c(a) = l(e(a)) − min_{a'} l(e(a')),    (2)

where e(a) is the end state reached with a roll-out by π_i^out after taking action a in state s_t. LOLS collects the T examples from the different roll-out points and feeds the set of examples Γ into an online cost-sensitive multiclass learner, thereby updating the learned policy from π̂_i to π̂_{i+1}. By default, we use the learned policy π̂_i for roll-in and a mixture policy for roll-out.

⁴ We can parameterize the policy π̂ using a weight vector w ∈ R^d such that a cost-sensitive classifier can be used to choose an action based on the features at each state. We do not consider using different weight vectors at different states.
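One LOLS decision point can be sketched on a toy two-step search space. This is an illustrative assumption-laden sketch: the search space, policies and losses are invented, and the mixture roll-out policy is drawn once per decision point here for simplicity (Algorithm 1 draws it inside the action loop).

```python
import random

# A sketch of one LOLS round (Algorithm 1) on a toy two-step search space.
# States are tuples of past actions; LOSS gives the loss of each end state.
ACTIONS = [0, 1]
LOSS = {(0, 0): 0.8, (0, 1): 0.0, (1, 0): 0.2, (1, 1): 1.0}

learned = lambda state: 0                          # current learned policy
reference = lambda state: 1 if len(state) == 0 else 0  # a (possibly suboptimal) reference
beta = 0.5

def rollout(state, policy):
    """Complete a trajectory with `policy` and return the end-state loss."""
    while len(state) < 2:
        state = state + (policy(state),)
    return LOSS[state]

def lols_costs(t, rng):
    """Roll in with the learned policy for t steps; for each action, complete
    the trajectory with the mixture roll-out policy and apply Eq. (2)."""
    s = ()
    for _ in range(t):
        s = s + (learned(s),)                       # roll-in
    out = reference if rng.random() < beta else learned  # mixture roll-out
    raw = {a: rollout(s + (a,), out) for a in ACTIONS}
    best = min(raw.values())
    return {a: raw[a] - best for a in ACTIONS}      # c(a) = l(e(a)) - min_a' l(e(a'))

print(lols_costs(t=0, rng=random.Random(0)))
```

The resulting cost vector, paired with Φ(s_t), is exactly one of the T cost-sensitive examples fed to the online learner in step 12 of Algorithm 1.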
Table 1. Effect of different roll-in (rows) and roll-out (columns) policies:

    roll-in \ roll-out | Reference        | Mixture      | Learned
    Reference          | Inconsistent     | Inconsistent | Inconsistent
    Learned            | Not locally opt. | Good         | RL

The strategies marked "Inconsistent" might generate a learned policy with large structured regret, and the strategy marked "Not locally opt." could be much worse than its one-step deviation. The strategy marked "RL" reduces the structured learning problem to a reinforcement learning problem, which is much harder. The strategy marked "Good" is favored.

For each roll-out, the mixture policy either executes π_ref to an end state with probability β, or π̂_i with probability 1 − β. LOLS converts into a batch algorithm with a standard online-to-batch conversion where the final model π̄ is generated by averaging π̂_i across all rounds (i.e., picking one of π̂_1, ..., π̂_N uniformly at random).

3. Theoretical Analysis

In this section, we analyze LOLS and answer the questions raised in Section 1. Throughout this section we use π̄ to denote the average policy obtained by first choosing n ∈ [1, N] uniformly at random and then acting according to π̂_n. We begin with discussing the choices of roll-in and roll-out policies. Table 1 summarizes the results of using different strategies for roll-in and roll-out.

3.1. The Bad Choices

An obvious bad choice is to roll in and roll out with the learned policy, because the learner is blind to the reference policy. It reduces the structured learning problem to a reinforcement learning problem, which is much harder. To build intuition, we show two other bad cases.

Roll-in with π_ref is bad. Rolling in with the reference policy causes the state distribution to be unrealistically good. As a result, the learned policy never learns to correct for previous mistakes, performing poorly when testing. A related discussion can be found at Theorem 2.1 in (Ross & Bagnell, 2010). We show a theorem below.

Theorem 1. For π_i^in = π_ref, there is a distribution D over (x, y) such that the induced cost-sensitive regret Regret^CS_M = o(M) but J(π̄) − J(π_ref) = Ω(1).

Proof.
We demonstrate examples where the claim is true. We start with the case where π_i^out = π_i^in = π_ref. In this case, suppose we have one structured example, whose search space is defined as in Figure 3(a).

Figure 3. Counterexamples for π_i^in = π_ref and π_i^out = π_ref. (a) π_i^in = π_i^out = π_ref: actions a, b from s_1, actions c, d from s_2, actions e, f from s_3, with end states e_1 (loss 0), e_2 (loss 10), e_3 (loss 100), e_4 (loss 0). (b) π_i^in = π_ref with a constrained representation: the same structure and losses. (c) π_i^out = π_ref: actions a, b from s_1, actions c, d from both s_2 and s_3, with end states e_1 (loss 1), e_2 (loss 1 − ɛ), e_3 (loss 1 + ɛ), e_4 (loss 0). All three examples have 7 states. The loss of each end state is specified in the figure. A policy chooses actions to traverse through the search space until it reaches an end state. Legal policies are bit vectors, so that a policy with weight on a goes up in s_1 of Figure 3(a) while weight on b sends it down. Since features uniquely identify actions of the policy in this case, we just mark the edges with the corresponding features for simplicity. The reference policy is boldfaced. In Figure 3(b), the features are the same on either branch from s_1, so that the learned policy can do no better than pick randomly between the two. In Figure 3(c), states s_2 and s_3 share the same feature set (i.e., Φ(s_2) = Φ(s_3)). Therefore, a policy chooses the same set of actions at states s_2 and s_3. Please see text for details.

From state s_1, there are two possible actions: a and b (we will use actions and features interchangeably, since features uniquely identify actions here); the (optimal) reference policy takes action a. From state s_2, there are again two actions (c and d); the reference takes c. Finally, even though the reference policy would never visit s_3, from that state it chooses action f. When rolling in with π_ref, the cost-sensitive examples are generated only at state s_1 (if we take a one-step deviation on s_1) and s_2, but never at s_3 (since that would require two deviations, one at s_1 and one at s_3). As a result, we can never learn how to make predictions at state s_3. Furthermore, under a roll-out with π_ref, both actions from state s_1 lead to a loss of zero.
The learner can therefore learn to take action c at state s_2 and b at state s_1, and achieve zero cost-sensitive regret, thereby thinking it is doing a good job. Unfortunately, when this policy is actually run, it performs as badly as possible (by taking action e half the time in s_3), which results in the large structured regret.

Next we consider the case where π_i^out is either the learned policy or a mixture with π_ref. When applied to the example in Figure 3(b), our feature representation is not expressive enough to differentiate between the two actions at state s_1, so the learned policy can do no better than pick randomly between the top and bottom branches from this state. The algorithm either rolls in with π_ref on s_1 and generates a cost-sensitive example at s_2, or generates a cost-sensitive example on s_1 and then completes a roll-out with π_i^out. Crucially, the algorithm still never generates a cost-sensitive example at the state s_3 (since it would have already taken a one-step deviation to reach s_3 and is constrained to do a roll-out from s_3). As a result, if the learned policy were to
choose the action e in s_3, it leads to zero cost-sensitive regret but a large structured regret.

Despite these negative results, rolling in with the learned policy is robust to both of the above failure modes. In Figure 3(a), if the learned policy picks action b in state s_1, then we can roll in to the state s_3, then generate a cost-sensitive example and learn that f is a better action than e. Similarly, we also observe a cost-sensitive example in s_3 in the example of Figure 3(b), which clearly demonstrates the benefits of rolling in with the learned policy as opposed to π_ref.

Roll-out with π_ref is bad if π_ref is not optimal. When the reference policy is not optimal, or the reference policy is not in the hypothesis class, rolling out with π_ref can make the learner blind to compounding errors. The following theorem holds. We state this in terms of local optimality: a policy is locally optimal if changing any one decision it makes never improves its performance.

Theorem 2. For π_i^out = π_ref, there is a distribution D over (x, y) such that the induced cost-sensitive regret Regret^CS_M = o(M) but π̄ has arbitrarily large structured regret to one-step deviations.

Proof. Suppose we have only one structured example, whose search space is defined as in Figure 3(c), and the reference policy chooses a or c depending on the node. If we roll out with π_ref, we observe expected losses 1 and 1 + ɛ for actions a and b at state s_1, respectively. Therefore, the policy with zero cost-sensitive classification regret chooses actions a and d depending on the node. However, a one-step deviation (a → b) does radically better, and can be learned by instead rolling out with a mixture policy.

The above theorems show the bad cases and motivate a good L2S algorithm which generates a learned policy that competes with the reference policy and deviations from the learned policy. In the following section, we show that Algorithm 1 is such an algorithm.

3.2. Regret Guarantees

Let Q^π(s_t, a) represent the expected loss of executing action a at state s_t and then executing policy π until reaching an end state.
T is the number of decisions required before reaching an end state. For notational simplicity, we use Q^π(s_t, π') as shorthand for Q^π(s_t, π'(s_t)), where π'(s_t) is the action that π' takes at state s_t. Finally, we use d_π^t to denote the distribution over states at time t when acting according to the policy π. The expected loss of a policy is:

    J(π) = E_{s ∼ d_π^t} [Q^π(s, π)],    (3)

for any t ∈ [0, T]. In words, this is the expected cost of rolling in with π up to some time t, taking π's action at time t, and then completing the roll-out with π.

Our main regret guarantee for Algorithm 1 shows that LOLS minimizes a combination of regret to the reference policy π_ref and regret to its own one-step deviations. In order to concisely present the result, we give an additional definition which captures the regret of our approach:

    δ_N = (1 / NT) Σ_{i=1}^N Σ_{t=1}^T E_{s ∼ d^t_{π̂_i}} [ Q^{π_i^out}(s, π̂_i) − ( β min_a Q^{π_ref}(s, a) + (1 − β) min_a Q^{π̂_i}(s, a) ) ],    (4)

where π_i^out = β π_ref + (1 − β) π̂_i is the mixture policy used to roll out in Algorithm 1. With these definitions in place, we can now state our main result for Algorithm 1.

Theorem 3. Let δ_N be as defined in Equation 4. The averaged policy π̄ generated by running N steps of Algorithm 1 with mixing parameter β satisfies

    β (J(π̄) − J(π_ref)) + (1 − β) Σ_{t=1}^T ( J(π̄) − min_{π ∈ Π} E_{s ∼ d^t_π̄} [Q^π̄(s, π)] ) ≤ T δ_N.

It might appear that the LHS of the theorem combines one term which is constant with another scaling with T. We point the reader to Lemma 1 in the appendix to see why the terms are comparable in magnitude. Note that the theorem does not assume anything about the quality of the reference policy, and it might be arbitrarily suboptimal. Assuming that Algorithm 1 uses a no-regret cost-sensitive classification algorithm (recall Definition 1), the first term in the definition of δ_N converges to

    l* = min_{π ∈ Π} (1 / NT) Σ_{i=1}^N Σ_{t=1}^T E_{s ∼ d^t_{π̂_i}} [Q^{π_i^out}(s, π)].

This observation is formalized in the next corollary.

Corollary 1. Suppose we use a no-regret cost-sensitive classifier in Algorithm 1.
As N, δ N δ clss, where δ clss = l 1 NT E s d ṱ πi [β min Q πref (s, ) i,t ] +(1 β) min Qˆπi (s, ). When we hve β = 1, so tht LOLS becomes lmost identicl to AGGREVATE (Ross & Bgnell, 2014), δ clss rises solely due to the policy clss Π being restricted. For other vlues of β (0, 1), the symptotic gp does not lwys vnish even if the policy clss is unrestricted, since l mounts to obtining min Q πout i (s, ) in ech stte. This corresponds to tking minimum of n verge rther thn the verge of the corresponding minimum vlues. In order to void this symptotic gp, it seems desirble to hve regrets to reference policy nd onestep devitions
controlled individually, which is equivalent to having the guarantee of Theorem 3 for all values of β in [0, 1] rather than a specific one. As we show in the next section, guaranteeing a regret bound to one-step deviations when the reference policy is arbitrarily bad is rather tricky and can take an exponentially long time. Understanding structures where this can be done more tractably is an important question for future research. Nevertheless, the result of Theorem 3 has interesting consequences in several settings, some of which we discuss next.

1. The second term on the left in the theorem is always non-negative by definition, so the conclusion of Theorem 3 is at least as powerful as the existing regret guarantee to the reference policy when β = 1. Since the previous works in this area (Daumé III et al., 2009; Ross et al., 2011; Ross & Bagnell, 2014) have only studied regret guarantees to the reference policy, the quantity we are studying is strictly more difficult.

2. The asymptotic regret incurred by using a mixture policy for roll-out might be larger than that using the reference policy alone, when the reference policy is near-optimal. How the combination of these factors manifests in practice is empirically evaluated in Section 5.

3. When the reference policy is optimal, the first term is non-negative. Consequently, the theorem demonstrates that our algorithm competes with one-step deviations in this case. This is true irrespective of whether π_ref is in the policy class Π or not.

4. When the reference policy is very suboptimal, then the first term can be negative. In this case, the regret to one-step deviations can be large despite the guarantee of Theorem 3, since the first negative term allows the second term to be large while the sum stays bounded. However, when the first term is significantly negative, then the learned policy has already improved upon the reference policy substantially! This ability to improve upon a poor reference policy by using a mixture policy for rolling out is an important distinction for Algorithm 1 compared with previous approaches.
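The notion of regret to one-step deviations can be checked mechanically on a toy example. In this sketch, policies are represented as bit strings over T decisions and all loss values are invented for illustration.

```python
from itertools import product

# A toy check of local optimality under one-step deviations, the notion used
# in Theorem 3: a policy is locally optimal if no single changed decision
# improves its loss. Policies here are length-T bit strings (illustrative).
T = 3

def one_step_deviations(policy):
    """All policies at Hamming distance 1 from `policy`."""
    for t in range(len(policy)):
        yield policy[:t] + (1 - policy[t],) + policy[t + 1:]

def is_locally_optimal(policy, loss_fn):
    return all(loss_fn(d) >= loss_fn(policy) for d in one_step_deviations(policy))

# A toy loss with a local optimum at (1, 1, 1) that is not global:
# the global minimum 0.0 sits at (0, 0, 0).
loss = {p: sum(p) * 0.1 for p in product([0, 1], repeat=T)}
loss[(1, 1, 1)] = 0.05
print(is_locally_optimal((1, 1, 1), loss.__getitem__))  # prints True
print(is_locally_optimal((0, 1, 1), loss.__getitem__))  # prints False
```

The second term on the left of Theorem 3 is exactly this kind of deviation regret, averaged over the roll-in state distribution rather than enumerated exhaustively.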
Overall, Theorem 3 shows that the learned policy is either competitive with the reference policy and nearly locally optimal, or improves substantially upon the reference policy.

3.3. Hardness of local optimality

In this section we demonstrate that the process of reaching a local optimum (under one-step deviations) can be exponentially slow when the initial starting policy is arbitrary. This reflects the hardness of learning to search problems when equipped with a poor reference policy, even if local rather than global optimality is considered the yardstick. We establish this lower bound for a class of algorithms substantially more powerful than LOLS.

We start by defining a search space and policy class. Our search space consists of trajectories of length T, with 2 actions available at each step of the trajectory. We use 0 and 1 to index the two actions. We consider policies whose only feature in a state is the depth of the state in the trajectory, meaning that the action taken by any policy π in state s_t depends only on t. Consequently, each policy can be indexed by a bit string of length T. For instance, the policy 010...0 executes action 0 in the first step of any trajectory, action 1 in the second step, and 0 at all other levels. It is easily seen that two policies are one-step deviations of each other if the corresponding bit strings have a Hamming distance of 1.

To establish a lower bound, consider the following powerful algorithmic pattern. Given a current policy π, the algorithm examines the cost J(π') for all the one-step deviations π' of π. It then chooses the policy with the smallest cost as its new learned policy. Note that access to the actual costs J(π) makes this algorithm more powerful than existing L2S algorithms, which can only estimate costs of policies through roll-outs on individual examples. Suppose this algorithm starts from an initial policy π̂_0. How long does it take for the algorithm to reach a policy π̂_i which is locally optimal compared with all its one-step deviations? We next present a lower bound for algorithms of this style.

Theorem 4.
Consider any algorithm which updates policies only by moving from the current policy to a one-step deviation. Then there is a search space, policy class and cost function where any such algorithm must make Ω(2^T) updates before reaching a locally optimal policy. Specifically, the lower bound also applies to Algorithm 1.

The result shows that competing with the seemingly reasonable benchmark of one-step deviations may be very challenging from an algorithmic perspective, at least without assumptions on the search space, policy class, loss function, or starting policy. For instance, the construction used to prove Theorem 4 does not apply to Hamming loss.

4. Structured Contextual Bandit

We now show that a variant of LOLS can be run in a structured contextual bandit setting, where only the loss of a single structured label can be observed. As mentioned, this setting has applications to webpage layout, personalized search, and several other domains. At each round, the learner is given an input example x, makes a prediction ŷ and suffers a structured loss l(y, ŷ). We assume that the structured losses lie in the interval [0, 1], that the search space has depth T, and that there are at most K actions available at each state. As before, the algorithm has access to a policy class Π, and also to a reference policy π_ref. It is important to emphasize that the reference policy does not have access to the true label, and the goal
is improving on the reference policy.

Algorithm 2 Structured Contextual Bandit Learning
Require: Examples {x_i}, i = 1, ..., N, a reference policy π_ref, an exploration probability ɛ, and a mixture parameter β ≥ 0.
 1: Initialize a policy π̂_0, and set I = ∅.
 2: for all i = 1, 2, ..., N (loop over each instance) do
 3:   Obtain the example x_i; set explore = 1 with probability ɛ; set n_i = |I|.
 4:   if explore then
 5:     Pick a random time t ∈ {0, 1, ..., T−1}.
 6:     Roll in by executing π_i^in = π̂_{n_i} for t rounds and reach s_t.
 7:     Pick a random action a_t ∈ A(s_t); let K = |A(s_t)|.
 8:     Let π_i^out = π_ref with probability β, otherwise π̂_{n_i}.
 9:     Roll out with π_i^out for T − t − 1 steps to evaluate ĉ(a) = K · l(e(a_t)) · 1[a = a_t].
10:     Generate a feature vector Φ(x_i, s_t).
11:     π̂_{n_i+1} ← Train(π̂_{n_i}, ĉ, Φ(x_i, s_t)).
12:     Augment I = I ∪ {π̂_{n_i+1}}.
13:   else
14:     Follow the trajectory of a policy π̂ drawn randomly from I to an end state e, and predict the corresponding structured output y_ie.
15:   end if
16: end for

Our approach is based on the ɛ-greedy algorithm, which is a common strategy in partial feedback problems. Upon receiving an example x_i, the algorithm randomly chooses whether to explore or exploit on this example. With probability 1 − ɛ, the algorithm chooses to exploit and follows the recommendation of the current learned policy. With the remaining probability, the algorithm performs a randomized variant of the LOLS update. A detailed description is given in Algorithm 2.

We assess the algorithm's performance via a measure of regret, where the comparator is a mixture of the reference policy and the best one-step deviation. Let π̄_i be the averaged policy based on all policies in I at round i, and let y_ie be the predicted label in either step 9 or step 14 of Algorithm 2. The average regret is defined as:

    Regret = (1 / N) Σ_{i=1}^N ( E[l(y_i, y_ie)] − β E[l(y_i, y_ie_ref)] − (1 − β) (1 / T) Σ_{t=1}^T min_{π ∈ Π} E_{s ∼ d^t_{π̄_i}} [Q^{π̄_i}(s, π)] )

Recalling our earlier definition of δ_i (Equation 4), we bound the regret of Algorithm 2, with proof in the appendix.
Theorem 5. Algorithm 2 with parameter ɛ satisfies:

    Regret ≤ ɛ + (1 / N) Σ_{i=1}^N δ_{n_i}.

With a no-regret learning algorithm, we expect

    δ_i ≤ δ_class + cK √(log |Π| / i),    (5)

where |Π| is the cardinality of the policy class. This leads to the following corollary, with proof in the appendix.

Corollary 2. In the setup of Theorem 5, suppose further that the underlying no-regret learner satisfies (5). Then with probability at least 1 − 2 / (N^5 K^2 T^2 (log(N|Π|))^3),

    Regret = O( (KT)^(2/3) (log(N|Π|) / N)^(1/3) + T δ_class ).

5. Experiments

This section shows that LOLS is able to improve upon a suboptimal reference policy, and provides empirical evidence to support the analysis in Section 3. We conducted experiments on the following three applications.

Cost-sensitive multiclass classification. For each cost-sensitive multiclass sample, each choice of label has an associated cost. The search space for this task is a binary search tree. The root of the tree corresponds to the whole set of labels. We recursively split the set of labels in half, until each subset contains only one label. A trajectory through the search space is a path from root to leaf in this tree. The loss of the end state is defined by the cost. An optimal reference policy can lead the agent to the end state with the minimal cost. We also show results of using a bad reference policy which arbitrarily chooses an action at each state. The experiments are conducted on the KDDCup 99 dataset⁵ generated from a computer network intrusion detection task. The dataset contains 5 classes, 4,898,431 training and 311,029 test instances.

Part of speech tagging. The search space for POS tagging is left-to-right prediction. Under Hamming loss, the trivial optimal reference policy simply chooses the correct part of speech for each word. We train on 38k sentences and test on 11k from the Penn Treebank (Marcus et al., 1993). One can construct suboptimal or even bad reference policies, but under Hamming loss these are all equivalent to the optimal policy, because roll-outs by any fixed policy will incur exactly the same loss and the learner can immediately learn from one-step deviations.
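The binary-tree search space for cost-sensitive classification described above can be sketched as follows. This is an illustrative sketch: the five class names are the usual KDDCup 99 categories, but the costs and helper names are invented.

```python
# A sketch of the binary-tree search space for cost-sensitive multiclass
# classification: each state is a set of candidate labels, split in half
# until a single label remains. Illustrative code, not the paper's system.
def descend(labels, policy):
    """Follow a root-to-leaf trajectory; each action picks one half."""
    state = list(labels)
    while len(state) > 1:
        mid = len(state) // 2
        halves = [state[:mid], state[mid:]]
        state = halves[policy(state)]   # action 0: left half, 1: right half
    return state[0]

def optimal_reference(costs):
    """An optimal reference policy steers toward the minimum-cost label."""
    def policy(state):
        mid = len(state) // 2
        left_best = min(costs[l] for l in state[:mid])
        right_best = min(costs[l] for l in state[mid:])
        return 0 if left_best <= right_best else 1
    return policy

# Invented costs over the five KDDCup 99 class names.
costs = {"dos": 0.2, "probe": 0.9, "r2l": 0.4, "u2r": 1.0, "normal": 0.0}
labels = sorted(costs)
print(descend(labels, optimal_reference(costs)))  # prints normal
```

A bad reference policy, by contrast, would return an arbitrary half at each state, which is exactly the suboptimal-reference condition tested in Table 2.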
⁵ kddcup99/kddcup99.html
Table 2. The average cost on the cost-sensitive classification dataset; columns are roll-out (Reference, Mixture, Learned) and rows are roll-in (Reference, Learned), under an optimal and a bad reference policy. The best result is bold. SEARN achieves … and … when the reference policy is optimal and bad, respectively. LOLS is Learned/Mixture and highlighted in green. [Numeric entries not recoverable.]

Table 3. The accuracy on POS tagging; columns are roll-out and rows are roll-in. The best result is bold. SEARN achieves …. LOLS is Learned/Mixture and highlighted in green. [Numeric entries not recoverable.]

Table 4. The UAS score on the dependency parsing data set; columns are roll-out and rows are roll-in, under optimal, suboptimal, and bad reference policies. The best result is bold. SEARN achieves 84.0, 81.1, and 63.4 when the reference policy is optimal, suboptimal, and bad, respectively. LOLS is Learned/Mixture and highlighted in green. [Per-cell entries not recoverable.]

Dependency parsing. A dependency parser learns to generate a tree structure describing the syntactic dependencies between words in a sentence (McDonald et al., 2005; Nivre, 2003). We implemented the hybrid transition system of (Kuhlmann et al., 2011), which parses a sentence from left to right with three actions: SHIFT, REDUCE-LEFT and REDUCE-RIGHT. We used the non-deterministic oracle (Goldberg & Nivre, 2013) as the optimal reference policy, which leads the agent to the best end state reachable from each state. We also designed two suboptimal reference policies. A bad reference policy chooses an arbitrary legal action at each state. A suboptimal policy applies greedy selection and chooses the action which leads to a good tree when it is obvious; otherwise, it arbitrarily chooses a legal action. (This suboptimal reference was the default reference policy used prior to the work on non-deterministic oracles.) We used data from the Penn Treebank Wall Street Journal corpus: the standard data split for training (sections 02-21) and test (section 23).
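The three parsing actions can be sketched as follows. This follows the usual arc-hybrid formulation; the encoding and action names' exact semantics are illustrative assumptions, not the paper's implementation.

```python
# A sketch of an arc-hybrid transition system with the three actions named
# above (SHIFT, REDUCE-LEFT, REDUCE-RIGHT). Words are 1..n; 0 is the root.
def parse(n_words, actions):
    """Apply a sequence of actions; return the head (parent) of each word."""
    stack, buffer = [0], list(range(1, n_words + 1))
    heads = {}
    for act in actions:
        if act == "SHIFT":               # move the next word onto the stack
            stack.append(buffer.pop(0))
        elif act == "REDUCE-LEFT":       # attach stack top to the buffer front
            heads[stack.pop()] = buffer[0]
        elif act == "REDUCE-RIGHT":      # attach stack top to the next stack item
            dep = stack.pop()
            heads[dep] = stack[-1]
    return heads

# "John saw Mary": John <- saw, Mary <- saw, saw <- root.
print(parse(3, ["SHIFT", "REDUCE-LEFT", "SHIFT", "SHIFT",
                "REDUCE-RIGHT", "REDUCE-RIGHT"]))
```

A policy here scores the legal actions from features of the stack/buffer state; an end state is reached when every word has been attached.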
The loss is evaluated in UAS (unlabeled attachment score), which measures the fraction of words that pick the correct parent. For each task and each reference policy, we compare 6 different combinations of roll-in (learned or reference) and roll-out (learned, mixture or reference) strategies. We also include SEARN in the comparison, since it has notable differences from LOLS. SEARN rolls in and out with a mixture in which a different policy is drawn for each state, while LOLS draws a policy once per example. SEARN uses a batch learner, while LOLS uses an online one. The policy in SEARN is a mixture over the policies produced at each iteration; for LOLS, it suffices to keep just the most recent one. It is an open research question whether an analogous theoretical guarantee to Theorem 3 can be established for SEARN. Our implementation is based on Vowpal Wabbit,6 a machine learning system that supports online learning and L2S. For LOLS's mixture policy, we set β = 0.5. We found that LOLS is not sensitive to β, and setting β to 0.5 works well in practice. For SEARN, we set the mixture parameter to 1 − (1 − α)^t, where t is the number of rounds and α = . Unless stated otherwise, all the learners take 5 passes over the data. Tables 2, 3 and 4 show the results on cost-sensitive multiclass classification, POS tagging and dependency parsing, respectively. The empirical results qualitatively agree with the theory. Rolling in with the reference policy is always bad. When the reference policy is optimal, doing roll-outs with the reference is a good idea. However, when the reference policy is suboptimal or bad, rolling out with the reference is a bad idea, and mixture roll-outs perform substantially better. LOLS also significantly outperforms SEARN on all tasks.

Acknowledgements. Part of this work was carried out while Kai-Wei, Akshay and Hal were visiting Microsoft Research.

6 vw/
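As a small side illustration, the UAS metric and the difference between the per-example mixture (LOLS) and the per-state mixture (SEARN) can be sketched as follows. This is a hedged sketch with invented names, not the Vowpal Wabbit implementation.

```python
import random

def uas(gold_heads, pred_heads):
    """Unlabeled attachment score: fraction of words whose predicted
    parent matches the gold parent."""
    correct = sum(pred_heads.get(w) == h for w, h in gold_heads.items())
    return correct / len(gold_heads)

def draw_rollout_policy(reference, learned, beta, per_state=False):
    """Mixture roll-out: with probability beta follow the reference,
    otherwise the learned policy. LOLS draws once per example
    (per_state=False); a SEARN-style mixture redraws at every state."""
    def pick():
        return reference if random.random() < beta else learned
    if per_state:
        return lambda *cfg: pick()(*cfg)   # fresh draw at each state
    chosen = pick()                        # one draw for the whole example
    return lambda *cfg: chosen(*cfg)
```

With `beta = 0.5`, half the roll-outs follow the reference and half the learned policy; setting `beta = 0` or `beta = 1` recovers pure learned or pure reference roll-outs.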
References

Abbott, H. L. and Katchalski, M. On the snake in the box problem. Journal of Combinatorial Theory, Series B, 45(1):13-24, 1988.

Cesa-Bianchi, N. and Lugosi, G. Prediction, Learning, and Games. Cambridge University Press, 2006.

Collins, Michael and Roark, Brian. Incremental parsing with the perceptron algorithm. In Proceedings of the Conference of the Association for Computational Linguistics (ACL), 2004.

Daumé III, Hal and Marcu, Daniel. Learning as search optimization: Approximate large margin methods for structured prediction. In Proceedings of the International Conference on Machine Learning (ICML), 2005.

Daumé III, Hal, Langford, John, and Marcu, Daniel. Search-based structured prediction. Machine Learning Journal, 2009.

Daumé III, Hal, Langford, John, and Ross, Stéphane. Efficient programmable learning to search. arXiv preprint, 2014.

Doppa, Janardhan Rao, Fern, Alan, and Tadepalli, Prasad. HC-Search: A learning framework for search-based structured prediction. Journal of Artificial Intelligence Research (JAIR), 50, 2014.

Goldberg, Yoav and Nivre, Joakim. Training deterministic parsers with non-deterministic oracles. Transactions of the ACL, 1, 2013.

Goldberg, Yoav, Sartorio, Francesco, and Satta, Giorgio. A tabular method for dynamic oracles in transition-based parsing. Transactions of the ACL, 2, 2014.

He, He, Daumé III, Hal, and Eisner, Jason. Imitation learning by coaching. In Neural Information Processing Systems (NIPS), 2012.

Kuhlmann, Marco, Gómez-Rodríguez, Carlos, and Satta, Giorgio. Dynamic programming algorithms for transition-based dependency parsers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Volume 1. Association for Computational Linguistics, 2011.

Langford, John and Beygelzimer, Alina. Sensitive error correcting output codes. In Learning Theory. Springer, 2005.

Marcus, Mitch, Marcinkiewicz, Mary Ann, and Santorini, Beatrice. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2), 1993.

McDonald, Ryan, Pereira, Fernando, Ribarov, Kiril, and Hajic, Jan. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of the Joint Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT/EMNLP), 2005.

Nivre, Joakim. An efficient algorithm for projective dependency parsing. In International Workshop on Parsing Technologies (IWPT), 2003.

Ross, Stéphane and Bagnell, J. Andrew. Efficient reductions for imitation learning. In Proceedings of the Workshop on Artificial Intelligence and Statistics (AISTATS), 2010.

Ross, Stéphane and Bagnell, J. Andrew. Reinforcement and imitation learning via interactive no-regret learning. arXiv preprint, 2014.

Ross, Stéphane, Gordon, Geoff J., and Bagnell, J. Andrew. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Workshop on Artificial Intelligence and Statistics (AISTATS), 2011.

Zinkevich, Martin. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the International Conference on Machine Learning (ICML), 2003.
arXiv:1502.02206v2 [cs.LG] 20 May 2015