The Power of Free Branching in a General Model of Backtracking and Dynamic Programming Algorithms

Size: px
Start display at page:

Download "The Power of Free Branching in a General Model of Backtracking and Dynamic Programming Algorithms"

Transcription

1 The Power of Free Brachig i a Geeral Model of Backtrackig ad Dyamic Programmig Algorithms SASHKA DAVIS IDA/Ceter for Computig Scieces Bowie, MD sashka.davis@gmail.com RUSSELL IMPAGLIAZZO Dept. of Computer Sciece UCSD russell@cs.ucsd.edu September 24, 2015 JEFF EDMONDS Dept. of Computer Sciece York Uiversity jeff@cs.yorku.ca Abstract The Prioritized brachig trees (pbt) model was defied i [ABBO + 05] to model recursive backtrackig ad a large sub-class of dyamic programmig algorithms. We exted this model to the prioritized Free Brachig Tree (pfbt) model by allowig the algorithm to also brach i order to try differet priority orderigs without eedig to commit to part of the solutio. I additio to capturig a much larger class of dyamic programmig algorithms, aother motivatio was that this class is closed uder a wider variety of reductios tha the origial model, ad so could be used to traslate lower bouds from oe problem to several related problems. First, we show that this free brachig adds a great deal of power by givig a surprisig ad geeral moderately expoetial (2 c for c < 1) upper boud i this model. This upper boud holds wheever the umber of possible data items is moderately expoetial, which icludes cases like k-sat with a restricted umber of clauses. We also give a 2 Ω( ) lower boud for 7-SAT ad a strogly expoetial lower boud for Subset Sum over vectors mod 2 both of which match our upper bouds for very similar problems. 1 Itroductio Algorithms are usually grouped pedagogically i terms of basic paradigms such as greedy algorithms, divide-ad-coquer algorithms, or dyamic programmig. There have bee may successful efforts over the years to formalize these paradigms ([BNR03, AB04, DI07, ABBO + 05, BODI11]). The models defied capture most of the algorithms ituitively i the give paradigm; lower bouds prove that these algorithms caot be improved (i terms of approximatio ratio or simpler form) without usig a stroger paradigm; ad lower bouds prove that other problems that ituitively should ot be solvable i the paradigm provably require expoetial size algorithms withi the model. This gives a formal distictio betwee the power of these differet paradigms. The geeral framework of all these model is that such a algorithm views the iput oe data item at a time, ad after each is forced to make a irrevocable decisio about the part of solutio related to the received data item, (for example, should the data item be icluded or ot i the solutio). The techique for lower bouds are iformatio-theoretic i that the algorithm does ot kow the rest of the iput before eedig to commit to part of the solutio. These techiques borrowed from those dealig with olie algorithms 1

2 [FW98], where a adversary chooses the order i which the items arrive. However, the differece here is that, while iputs arrive i a certai order, the algorithm determies that order, ot a adversary. A very successful formalizatio of greedy-like algorithms was developed i [BNR03] with the priority model. I this model, the algorithm is able to choose a priority order (greedy criteria) specifyig the order i which the data items are processed. [ABBO + 05] exteds these ideas to the prioritized brachig tree (pbt) model which allows the algorithm whe readig a data item to brach so that it ca make, ot just oe, but may differet irrevocable decisio about the read data item. Beig adaptive, each such brach is also allowed to specify a differet order i which to read the remaiig data items. The width of the program is the maximum umber of braches that are alive at ay give time. This relates to the ruig time of the algorithm. I order to keep this width dow the algorithm is allowed to later prue off a brach (ad back track) whe this computatio path does ot seem to be workig well. Surprisigly, the resultig model ca also simulate a large fragmet of dyamic programmig algorithms. This model was exteded further i [BODI11] to the prioritized brachig program model (pbp) as a way of ituitively capturig more of the power of dyamic programmig. Oe weakess of both the pbt ad pbp models is that they are ot closed uder reductios that itroduce dummy data items. I a typical reductio, we might traslate each data item for oe problem ito a gadget cosistig of a small set of correspodig data items for aother. However, may reductios also have gadgets that are always preset, idepedet of the actual iput. For example, whe we reduce 3-SAT to exactly-3-out-of-five SAT, we might itroduce two ew variables per clause, ad if the clause is satisfied by t 1 literals, we ca set 3 t of these ew variables to true. The set of ew variables ad which clauses they are i depeds oly o the umber of clauses i the origial formula, ot the structure of these clauses. Settig the value of oe of these dummy variables does costrai the origial satisfyig assigmet, but ot ay particular origial variable. Such a data item i the extreme is what we call a free data item. These are data items such that the irrevocably decisio that must be made about it does ot actually affect the solutio i ayway. The reaso addig these free data items to the pbt model icreases its power is because whe readig them the computatio is allowed to brach without eedig to read a data item that maters. I hidsight, ot allowig this is ot such a atural restrictio o the model ayway. We exted the pbt model to the prioritized Free Brachig Tree (pfbt) model by allowig the algorithm to also brach i order to try differet priority orderigs without readig a data time ad hece eedig to fix the part of the solutio relatig to this data item. I additio to closig the model uder reductios, it also allows for a much wider rage of atural algorithms. For example, oe such brach a algorithm for the Kapsack Problem might sort the data items accordig to price, i aother accordig to weight, ad i aother accordig to price per weight. The cotributios of this paper are both to develop a ew geeral proof techique for provig lower bouds i this seemigly ot much more powerful model ad for fidig ew surprisig algorithms showig that this chage i fact gives the model great deal of additioal (ufair) power. 
While the kow lower bouds for 3-SAT i the brachig program model are the tight 2 Ω(), we are oly able to prove a 2 Ω(1/2 ) lower boud (ad for 7 SAT ). The upper bouds show that this is due to the model s surprisig stregth, rather tha our iability to prove tight lower bouds. More precisely, ay problem whose set D of possible data items is relatively small, has a upper boud i the model of roughly expoetial i (). This covers the case of ay sparse SAT formula, where the umber of occurreces of each variable is bouded. There is some formal evidece that this is actually the most difficult case of SAT ([IPZ01]). The idea behid the ureasoable algorithm is as follows. Whe the algorithm reads data item D, it iadvertetly lears that all data items appearig before D i the priority orderig are ot i the istace. Beig allowed to try may such orderigs ad abado those that are ot lucky, the algorithm is able to lear all of the data items from the domai D that are ot i the istace ad i so doig effectively lear what the iput is. The algorithm is all-powerful with what it kows ad hece istatly kows the aswer. Havig 2

3 m log D free data item, i.e. those whose decisio does ot affect the solutio, further improves this upper boud i the free brachig model ad havig m = O( log D ), brigs the tree width dow to O( log D ) 3 eve without free brachig, i.e. i pbt. Figure 1 gives a summary of those results. Computatioal Problem D W i pbt W i pfbt W with m f.d.i. Geeral d 2 O() 2 O( log d) 2 O log d m+ log d k-sat O(1) clauses per var O(1) 2 O( log ) k-sat m clauses 2 k 1 2 O(k m 1 3 log ) Matrix Iversio O(1) 2 Ω() 2 Ω( ) 2 Ω 7-SAT O(1) clauses per var Matrix Parity 2 2 Ω() 2 Ω Subset Sum 2Ω(/log ) SS o carries or i pfbt 2 Ω() m+ 2 m+ Figure 1: The summery of the results i the paper. Here pbt ad pfbt stad for the two models of computatio prioritized Brachig Trees ad prioritized Free Brachig Tree. The prioritizig allows the model to specify a order i which to read the data items defiig the iput istace. Brachig allow the algorithm to make differet irrevocable decisios about the data items read ad to try differet such orderigs. The free braches allows the tree to brach without readig ad hece eedig to fix the part of the solutio relatig to this data item. Free data items, f.d.i., are data items whose irrevocable decisio does ot effect the solutio. Here D is the full domai of all possible data items that potetially could be used to defie the problem. The algorithm that uses free brachig ad/or free data items i a surprisig ad ufair way depeds o this domai beig small. W is short for the width of the brachig tree. This is a measure of how much time the algorithm uses. The problem SS o carries is the Subset Sum problem with the o-stadard chage that the sums are doe over GF-2. A slightly better lower boud ca be obtaied for this because the reductio to it does ot eed dummy/free data items. Similarly the lower boud is obtaied i the model pfbt which restrict the use of such dummy data items. The secod way of gettig aroud the challeges of doig reductios is to restrict the model pfbt to pfbt. At ay poit durig the curret computatio path, a adversary is allowed to completely reveal ay data items that the algorithm may use as a free data item makig them ito kow data items. Kowig that this kow data item is i the actual iput istace, the algorithm o loger has ay direct eed to receive the data item. Besides if the algorithm did receive it, it would be forced to make a irrevocable decisio about it, which it may or may ot be prepared to do. The oly remaiig purpose of receivig a kow data item is to iadvertetly lear what data items are ot i the istace (because they would appear before the kow data items i the priority order). The pfbt model is a restrictio of the pfbt model that does ot allow the priority orderig to mix kow ad ukow data items together. Havig removed their power, the 2 Ω() width lower boud for the matrix parity problem goes through i the pfbt model o matter how may kow/free data items there are. The reductio the gives the 2 Ω() width lower boud for Subset Sum i pfbt model. The paper is orgaized as follows. Sectio 2 formally defies the computatioal problems ad the pfbt model. It also motivates the model because of its ability to express olie, greedy, recursive backtrackig, ad dyamic programmig algorithms. Sectio 3 provides the upper bouds arisig from havig free brachig ad/or free data items ad small domai size. We also exted this algorithm to obtai the 3

4 subexpoetial algorithms whe there are few clauses. I Sectio 4 we preset the reductios from the hard problems 7-SAT ad Subset Sum to the easy problems Matrix Iversio ad Matrix Parity. I Sectio 5 we provide a geeral techique for provig lower bouds withi pfbt ad the use the techique to obtai the results for the matrix problems. The remaiig sectios after Sectio 2, ca be read i ay order. 2 Defiitios Here we give a formal defiitio of a computatioal problem ad also state the k-sat, the Matrix Iversio, the Matrix Parity Problem, ad the Subset Sum problem i the priority algorithm framework: A Geeral Computatioal Problem: A computatioal problem with priority model (D, Σ) ad a family of objective fuctios f : D Σ R is defied as follows. The iput I = D 1, D 2,...,D D is specified by a set of data items D i chose from the give domai D. A solutio S = σ 1, σ 2,...,σ Σ cosists of a decisio σ i Σ made about each data item D i i the istace. For a optimizatio problem, this solutio must be oe that maximizes f(i, S). For a search problem, f(i, S) {0, 1}. k-sat: The iput cosists of the AND of a set of clauses, each cotaiig at most k literals ad the output is a satisfyig assigmet, assumig there is oe. A data item cotais the ame x j of the variable ad the list of all clauses i which it participates. Urestricted: The umber of potetial data items i the domai is D = 2 2(2)k 1, because there are 2(2) k 1 clauses that the give variable x i might be i. m clauses: Restrictig the iput istace to cotai oly m clauses, does ot restrict the domai of data items D. O(1) clause per variable: If k-sat is restricted so that each variable is i at most O(1) clauses, the D = [2(2) k 1 ] O(1) = O(1). The Matrix Iversio Problem: This problem was itroduced i [ABBO + 05] ad modified by us slightly. The matrix M {0, 1} is fixed ad kow to the algorithm. It is defied by a fixed matrix M {0, 1} that is o-sigular, has exactly seve oes i each row, at most K oes i each colum, ad is a (r, 7, c)-boudary expader (defied i Sectio 5.2), with K Θ(1), c = 3 ad r Θ(). The iput to this problem is vector b {0, 1}. The output is x {0, 1} such that Mx = b over GF-2. Each data item cotais the ame x j of a variable ad the value of the at most K bits b i from b ivolvig this variable, i.e. those for which M i,j = 1. Note D = 2 K. The Matrix Parity Problem (MP): The iput cosists of a matrix M {0, 1}. The output x i, for each i [], is the parity of the i th row, amely x i = j [] M i,j. This is easy eough, but the problem is that this data is distributed i a awkward way. Each data item cotais the ame x j of a variable ad the j th colum of the matrix. Note D = 2. Subset Sum (SS): The iput cosists of a set of -bit itegers, each stored i a data item. The goal is to accept a subset of the data items that adds up to a fixed kow target T. Agai D = 2. Subset Sum No Carries: Aother versio of SS does the GF-2 sum bit-wise so that there are o carries to the ext bits. 4

5 2.1 Defiitios of Olie, Greedy, Recursive Back-Trackig, ad Dyamic Programmig Algorithms Ideally a formal model of a algorithmic paradigm should be able to capture the complete power ad weakesses of the paradigm. I this sectio, we focus o the defiig characteristics of olie, greedy, recursive back-trackig, ad dyamic programmig. The formal defiitio of the pfbt model appears i Sectio 2.2. To make the discussio cocrete, cosider the Kapsack Problem specified as follows. A data item D i = p i, w i specifies the price ad the weight of the i th object. The decisio optio is σ i Σ = {0, 1}, where 1 ad 0 are the two optios, meaig to accept ad ot to accept the object, respectively. The goal is to maximize f(i, S) = i p iσ i as log as i w iσ i W. A determiistic olie algorithm for such a problem would receive the data items D i oe at a time ad must make a irrevocable decisio σ i about each as it arrives. For example, a algorithm for the Kapsack Problem may choose to accept the icomig data item as log as it still fits i the kapsack. We will assume that the algorithm has ubouded computatioal power based o what it kows from the data items it has see already. The algorithm s limitatio of power arises because it does ot kow the data items that it has ot yet see. After it commits to a partial solutio PS i = σ 1, σ 2,...,σ k by makig decisios about the partial istace PI i = D 1, D 2,...,D k see so far, a adversary ca make the future items PI future = D k+1, D k+2,...,d be such that there is o possible way to exted the solutio with decisios PS future = σ k+1, σ k+2,...,σ so that the fial solutio S = PS i, PS future is a optimal solutio for the actual istace I = PI i, PI future. This lower boud strategy is at the core of all the lower bouds i this paper. A greedy algorithm is give the additioal power to specify a priority orderig fuctio (greedy criteria) ad is esured that it will see the data items of the istace i this order. For example, a algorithm for the Kapsack Problem may choose to sort the data items by the ratio p i w i. I order to prove lower bouds as doe with the olie algorithm, [BNR03] defied the priority model. It allows the algorithm to specify the priority order without allowig it to see the future data items by havig it specifyig a orderig π O(D) of all possible data items. The algorithm the receives the ext data item D i i the actual iput accordig this order. A strogly adaptive algorithm is allowed to reorder the data items every time it sees a ew data item (also kow as fully adaptive priority algorithm i [BNR03, DI07]). A added beefit of receivig data item D is that the algorithm iadvertetly lears that every data item D before D i the priority orderig is ot i the iput istace. The set of data items leared ot to belog to the istace is deoted as PI out. It is a restrictio o the power of the greedy algorithm to require it to make a sigle irrevocable decisios σ i about each data item before kowig the future data items. I cotrast, backtrackig algorithms search for a solutio by first explorig oe decisio optio about a data item ad the if the search fails, the the algorithm backs up ad searches for a solutio with a differet decisio optio. The computatio forms a tree. To model this [ABBO + 05] defies prioritized Brachig Trees (pbt). Each state of the computatio tree specifies oe priority orderig, receives oe data item D i, ad makes zero, oe, or more decisios σ i about it. If more tha oe decisio is made, the tree braches to accommodate these. If zero decisios are made the the computatio path termiates. 
The computatio returs the solutio S that is best from all the otermiatig paths. A computatio of the algorithm o a istace I dyamically builds a tree of states as follows. Alog the path from the root state to a give state u i the tree, the algorithm has see some partial istace PI u = D1 u, Du 2,...,Du k ad committed to some partial solutio PS u = σ1 u, σu 2,...,σu k about it. At this state (kowig oly this iformatio) the algorithm specifies a orderig π u O(D) of all possible data items. The algorithm the receives the ext usee data item Dk+1 u i the actual iput accordig this order. The algorithm the ca make oe decisio σk+1 u that is irrevocable for the duratio of this path, to fork makig a umber of such decisios, or to termiate this path of the search all together. The computatio returs the solutio S that is best from all the otermiatig paths. 5

6 A curious restrictio of this model is that it does ot allow the followig completely reasoable kapsack algorithm. Retur the best solutio after ruig three parallel greedy algorithms, oe with the priority fuctio beig largest p i, aother smallest w i, ad aother largest p i w i. Beig fully adaptive, each state alog each brach is allowed to choose a differet orderig π O(D) for its priority fuctio. However, the computatio is oly allowed to fork whe it is makig differet decisios about a ewly see data item ad this oly occurs after seeig a data item. We geeralize their model by allowig the algorithm to fork without seeig a data item. This is facilitated by, i additio to the iput readig states described above, havig free brachig states which do othig except brach to some umber of childre. This ability itroduces a surprisig additioal power to the algorithms. We call this ew model prioritized Free Brachig Tree (pfbt) algorithms. See Sectio 2.2 for the formal defiitio. Though it is ot iitially obvious how, pfbt (pbt) are also quite good at expressig dyamic programmig algorithms. Dyamic programmig algorithms derive a collectio of sub-istaces to the computatioal problem from the origial istace such that the solutio to each sub-istace is either trivially obtaied or ca be easily computed from the solutios to the subsubistaces. Geerally, oe thiks of these sub-istaces beig solved from the smallest to largest, but oe ca equivaletly cotiue i the recursive backtrackig model where braches are prued whe the same subistace has previously bee solved. Each state u i the pfbt computatio tree ca be thought of the root of the computatio o the subistace PI future = D k+1, D k+2,...,d cosistig of the yet usee data items, whose goal is to fid a solutio PS future = σ k+1, σ k+2,...,σ so that S = PS u, PS future is a optimal solutio for the actual istace I = PI i u, PI future. Here, PI i u = D u,1,...,d u,k is the partial istace see alog the path to this state ad PS u = σ u,1,...,σ u,k is the partial solutio committed to. It is up to the algorithm to decide which pairs of states it cosiders to be represetig the same computatio. For example, i the stadard dyamic programmig algorithm for the Kapsack Problem, the data items are always viewed i the same order. Hece, all states at level k have the same subistace PI future = D k+1, D k+2,...,d yet to be see. However, because differet partial solutios PS u = σ u,1,...,σ u,k have bee committed to alog the path to differet states u, their computatio tasks are differet. As far as the future computatio is cocered, the oly differece betwee the differet subproblems is the amout W = W i k w iσ u,i of the kapsack remaiig. Hece, for each such W, the algorithm should kill off all but oe of these computatio paths. This reduces the width from 2 k to beig the rage of W. For a give value W, which of the paths should be kept is clearly the oe that made the best iitial partial solutio PS u = σ u,k+1,...,σ u, accordig to maximizig i k p iσ u,i. A additioal challege i implemetig the stadard dyamic programmig algorithm for the Kapsack Problem i the pbt model is the followig. I the model, the algorithm does ot kow which path should live util it kows partial istace PI i u, however, after this partial istace is received, the pbt model does ot allow cross talk betwee these differet paths. This is where the ubouded computatioal power of the pbt model comes i hady. Idepedetly, each path, kowig PI i u, kows what every other paths i the computatio is doig. 
Hece, it kows whether or ot it should be termiated. If required, it quietly termiates itself. I this way, pbt is able to model may dyamic programmig algorithms. However, Bellma-Ford s dyamic programmig algorithm for the shortest paths problem caot be expressed i the model. Each state i the computatio tree at depth k will have traversed a sub-path of legth k i the iput graph from the source ode s, ad the future goal is to fid to a shortest path from the curret ode to the destiatio ode t. States i the computatio tree whose sub-path ed i the same iput graph ode v are cosidered to be computig the same subproblem. However, because the differet states u have read differet paths PI i u = D u,1,...,d u,k of the iput graph, they caot kow what the other computatio states are doig. Without cross talk, they caot kow whether they are to termiate or ot. [BODI11] formally proves that pbt requires expoetial width to solve shortest paths ad defies aother model called prioritized Brachig Programs (pbp) which captures this otio of memoizatio by mergig computatio states that are computig the same subproblem. The computatio the forms a DAG istead 6

7 of a tree. 2.2 The Formal Defiitio of pfbt We proceed with a formal defiitio of a pfbt algorithm A. O a istace I, a pfbt algorithm builds a computatio tree T A (I) as follows. Cosider a state u p of the tree alog a path p. It is labeled with PI p, PS p, f p, costitutig the partial istace, the partial solutio, ad the free brachig see alog this path. The partial iformatio about the istace is split ito two types PI p = PIp i, PIp out. Here PIp i = D p,1,...,d p,k ad PSp = σ p,1,...,σ p,k, where D p,i D is the data item see ad σ p,i Σ is the decisio made about it i the i th iput readig state alog path to p. PIp out D deotes the set of data items iadvertetly leared by the algorithm to be ot i the iput istace. Fially, f p = f p,1,...,f p,k, where fi N idicates which brach is followed i the i th free brachig state. Each such state u p is either a iput readig or free brachig state. A iput readig state u p first specifies the priority fuctio with the orderig π A (u p ) 2 D of the possible data items. Suppose D p,k+1 is the first data item from I \ PIp i accordig to this total order. The iput readig state u p must the specify the decisios c A (u p, D p,k+1 ) Σ to be made about D p,k+1. 1 For each σ p,k+1 c A (u p, D p,k+1 ), the state u p has a child u p with D p,k+1 added to PIp i, each data item D appearig before D p,k+1 i the orderig π A (u p ) added to PIp out, ad σ p,k+1 added to PS p. A free brachig state u p must specify oly the umber F A (u p ) of braches it will have. For each f p,k +1 F A (u p ), the state u p has a child u p with f p,k +1 added to f p. The width W A () of the algorithm A is the maximum over istaces I with data items, of the maximum over levels k, of the umber of states u p i the computatio tree at level k. Though this completes the formal defiitio of the model pfbt, i order to give a better uderstadig of it, we will ow prove what such a algorithm kows about the iput istace I whe i a give state u p. This requires uderstadig how the computatio tree T A (I) chages for differet istaces I. If these T A (I) were allowed to be completely differet for differet istaces I, the for each I, T A (I) could simply kow the aswer for I. Lemma 1 proves that the partial istace PI p = PIp i, PIp out costitutes the sum kowledge that algorithm A kows about the istace. Namely, if we switched algorithm A s iput istace from beig I to beig aother istace I cosistet with this iformatio, the A would remai i the same state u p. Defie I PI p to mea that istace I is cosistet with p, i.e. cotais the data items i PI i p ad ot those i PI out p. Lemma 1. Suppose that two iputs I ad I are both cosistet with the partial istace PI, i.e. I PI ad I PI. If there exists a path p T A (I) idetified with PI p, PS p, f p, with PI p = PI, the there exists a path p T A (I ) idetified with the same PI p, PS p, f p = PIp, PS p, f p. Proof. The proof uses the fact that pfbt cosiders determiistic algorithms that explore a ukow istace ad give the same kowledge they always make idetical decisios. This could be prove by lookig at the iductive structure of the computatio tree T A (I). Recall that T A (I) is the tree of the thigs algorithm A tries o iput I. Istead, we will cosider every actio A will do over all possible istaces. We will refer to this as the super decisio tree view the algorithm A. Just as with T A (I), its states u p are idetified with PI p, PS p, f p. 
As with the computatio tree T A (I), the super decisio tree braches whe the algorithm tries differet irrevocable decisios PS p ad whe free brachig i order to try differet priority orderigs. I additio, the super decisio tree also braches depedig o which ext data item D the algorithm receives. The set of paths i this super decisio tree is the the uio {p = PI p, PS p, f p I such that p is a path i T A (I)} of the paths from the differet computatio trees T A (I). 1 If oe wated to measure the time util a solutio is foud, the oe would wat to specify the order i which these decisios σ p,k+1 c A(u p, D p,k+1 ) were tried. 7

8 2.3 The Formal Defiitio of pfbt We ow formally defie the restricted model pfbt. Recall that free data items are defied to be additioal data items that have o bearig o the solutio of the problem. The reaso addig these free data items to the pbt model icreases its power is that whe the algorithm reads data item D, it iadvertetly lears that all data items appearig before D i the priority orderig are ot i the istace. This allows a pbt algorithm to solve ay computatioal problem with width W = ( log D ) 3. Though this makes the provig of lower bouds i the model eve more impressive, this aspect of the model is clearly ot practical. The pfbt model restricts the algorithm i a way that excludes the possibility of it implemetig this impractical algorithm while still allowig ay practical algorithm that is i pfbt to still be i pfbt. A data item is said to be kow at some poit i a computatio path if the algorithm kows that it is i the actual iput istace eve though it has ot yet received it or made a irrevocable decisio about it. Such items ca arise i a umber of ways: 1) It might be a free data item defied i the computatioal problem. 2) It might be a dummy data item ad the algorithm kows that the iput istace is comig from the reductio. 3) A lower boud adversary, worried that the data item may be used as free data item, may iform the algorithm that this data item is i the iput istace. I cotrast, a data item is said to be ukow if at this poit i a computatio path, the algorithm still does ot kow if it is i the iput istace. Classify iput readig states u p of the computatio tree ito three types. It is said to be a ukow readig state if it puts all of the possible ukow data items before the kow oes. It is said to be a kow readig state if it puts all of the kow oes before the possible ukow oes. Fially, it is said to be a mixed readig state if it itertwies the two types of data items together i the orderigs. The model pfbt is defied to be the same as pfbt except that mixed readig states are ot allowed. We argue that ay practical algorithm that is i pfbt is still i pfbt by arguig that there is o (direct) reaso that a algorithm should wat to receive a kow data item. Already kowig that it is i the istace, it gais o ew iformatio, but it is still required to make a irrevocable decisio about it, which it may or may ot be prepared to do. After the adversary makes a data item kow, the algorithm should readjust its strategy so that it either uses kow readig states to get these data items out of the way, or uses ukow readig states to avoid receivig them util the ed whe kowig the etire iput istace is easy to make ad irrevocable decisio about them. The flaw i this argumet is that mixed readig states are useful as described above. The surprisig algorithm from Theorem 2 uses mixed readig states to lear every data item that is ot i the istace without receivig a sigle real ukow data item. No real poly-time algorithm, however, is able to do this. First, it requires the algorithm to keep track of a expoetial amout of iformatio. Secod, it requires the algorithm to have ubouded computatioal power with which to istatly compute the solutio for the ow kow istace. 3 Upper Bouds I this sectio, we provide a surprisig algorithm for ay computatioal problem where the algorithm has access either to free brachig ad/or free data items, ad the computatioal problem has a small set of possible data items D. Next we will exted the geeric algorithm to a algorithm for the k-sat problem with oly m clauses. 
Theorem 2 (Algorithm with Free Brachig ad/or Free Data Items). Cosider ay computatioal problem with kow iput size ad D = d potetial data items. There is a pfbt algorithm for this problem with width W A () = 2 O( log d). Whe m free data items are added to the problem, the required width decreases to 2 O( log d m+ log d ). Whe m = O( log d), it decreases to O( log d) 3 eve without free brachig, i.e. i pbt. 8

9 Proof. We first cosider the case without free data items. We costruct the determiistic algorithm by provig that for every fixed iput istace, the probability of our radomized algorithm fidig a correct solutio is less tha oe over the umber ( d) d of istaces. Hece, by the uio boud we kow that the probability that a radomly chose settig of all to coi flips fails to work for all iput istaces is less tha oe. Hece, there must exist a algorithm that solves every iput istace correctly. I its first stage, the algorithm lears what the iput istace is, ot by learig each of the data items i the istace but by learig the complete set of D data items that are ot i the iput istace. It does this without havig to commit to values for more tha l = O( log d) variables. The algorithm braches whe makig a decisio about each of these variables. Hece, the width is Σ l = 2 O( log d). I its secod stage, the algorithm uses its ubouded computatioal power to istatly compute the solutio for the ow kow istace. This solutio will be cosistet with oe of its Σ l braches. We ow focus o the how the algorithm ca use free braches to lear lots of data items that are ot i the istace. The algorithm radomly orders the potetial data items ad lears the first oe that is i the iput istace. The items earlier i the orderig are leared to be ot i the istace ad are added to PI out. This is expected to cosist of a 1 fractio of the remaiig potetial items. However, if istead, the algorithm free braches to repeat this experimet F = 2 l = 2 O( log d) times, the we will see below that with high probability at least oe of these braches will result a i Θ( l(f) ) = Θ( log d ) fractio of the remaiig potetial data items beig added to PI out. The algorithm might wat to keep the oe brach that performs the best, but this would require cross talk betwee the braches. Istead, it keeps ay brach that performs sufficietly well. The threshold q i is carefully set so that the expected umber of braches alive stays the costat r from oe iteratio to the ext. algorithm pf BT(, d) pre cod : is the umber of data items i the iput istace ad d is the umber of potetial data items post cod : Forms a pfbt tree such that whp oe of the leaves kows the solutio begi l = O( log d) F = 2 l r 0 = r = O(l 2 log d) = O( log d) 2 Width (2r) F Σ l = 2 O( log d) Free brach to form r 0 braches Loop: i = 1, 2,...,l loop ivariat : The curret states i the pfbt tree form a r i 1 Σ i 1 rectagle. The r i 1 rows were formed usig free braches. For each such row, the states kows a set PIi 1 i of i 1 of the data items kow to be i the iput istace. The colum of Σ i 1 states withi this row were formed from decisio braches i order to make all Σ i 1 possible decisios o these i 1 variables. Except for makig differet decisio, these states are idetical. Each of the r i 1 rows of states also kows a set PIi 1 out of the data items kow to be ot i the istace. PIi 1 out has grow large eough so that there are oly d i 1 = d PIi 1 i PIout i 1 remaiig potetial data items. Each state free braches with degree F formig a total of r i 1 F rows For each of these r i 1 F rows of states 9

10 Radomly order the d i 1 remaiig potetial data items D = the first data item that is i the iput q = # of data items before D i the order q i = Θ( d i l(f) ), set so that Pr(q q i ) = 1 F if(q q i ) the add D to PI i i 1 ad these q earlier data items to PIout i 1 d i = d i 1 q i 1 remaiig potetial data items Make a decisio brach to decide each possibility i Σ about D else Abado this brach ed if ed for r i = the total # of these r i 1 F rows that survive if( r i [1, 2r] ) abort ed loop We kow what the iput istace is by kowig all the data items that are ot i it. We use our ubouded power to compute the solutio. This solutio will be cosistet with oe of our Σ l states. ed algorithm The first thig to esure is that d l = d PIl i PIout l becomes l so the remaiig possible data items must all be i the istace. At the begiig of the i th iteratio, there are d i 1 remaiig potetial data items. Let q deote the umber of these data items, chose before the first appears that is i the istace. The algorithm radomly chooses oe of these to be first i the orderig. It is i our fixed iput istace with probability at most d i 1. Coditioal o this oe ot beig i the istace, the ext has probability at most d i 1 1 of beig i the istace, while the last of cocer has probability at most d i 1 q 1 = d i. Exp[q] d i. Pr[q q i] (1 d i ) q i e q i/d i. (This approximatio is good util the ed game. See below for how the time for that is bouded.) Because we have F chaces ad we wat the expected umber of these to survive to be oe, we set q i = d i l(f) so that this probability is 1 F. For the states that achieve q q i, the umber of remaiig potetial data items is d i = d i 1 q i 1 d i 1 d i l(f) = d i 1 d il = d i 1. 1 l Hece, i the ed d l = d d (1 l )l e l2 d = 2 l d e < 1. Before this occurs, the iput istace is kow. As is ofte the case, the last 20% of the work takes 80% the time. The approximatio (1 i d i ) q i e iq i /d i fails i the ed game whe the umber of remaiig possible data items d i gets close to the umber of usee iput items i, say d i 2 i. This meas the umber of data items that must still be elimiated is at most i. Not beig doe meas that d i i +1 givig that Pr[q q i ] (1 i d i ) q i 1 q i d i. Settig this to F 1 ad solvig gives that q i log 2 (F). If this may data items are elimiated log 2 (d i ) = O( log d) log 2 (d i ) each roud the the umber of required rouds to elimiate the remaiig i data elemets is at most i q i O( log d). This at most doubles the umber of rouds already beig cosidered. We must also boud the probability that the algorithm aborts to be less tha oe over the umber d of istaces. It does so if r l [1, 2r] because it eeds at least oe brach to survive ad we do t wat the computatio tree to grow too wide. Lemma 3 proves that this failure probability is at most 2 Θ(r/(m+l)2), which is d as log as r = Θ((m + l) 2 log d). This is r = Θ( log d) 2 whe m = 0 ad grows to Θ( log d) 3 whe m = O( log d). Free data items provide eve more power tha free brachig does. Just as doe above the set PIi out of data items kow ot to be i the istace grows every time oe of these data items is received, however, because they have o bearig o the solutio of the problem, the algorithm eed ot brach makig all 10

11 possible decisios Σ m about them. The algorithm is idetical to the oe above, except that m = mi(m, ) of the free data items are radomly ordered ito the remaiig real data items. The ext data item received will either be oe of the real data items or oe of these m free data items. As before, let q deote the umber of data items before this received data item i the priority orderig, i.e. the umber added to PIi 1 out. Pr[q q i ] (1 +m d i ) q i e 2q i/d i. Durig this first stage, the algorithm wats to receive all m of the free data items ad oly receive l of the real data items. If a real data item is read after already havig received l of them, the the brach dies. Pr[q q i ad the data item received is a free oe] e 2q i/d i m +m. Still havig F = 2 l chaces ad watig the expected umber of these to survive to be +m )) 2 oe, we set q i = d i(l(f) l( m d il 3 so that this probability is 1 log d F. (l(f) = l will be set to O( m ) l( m +m ).) Hece, d i = d i 1 q i 1 d i 1 d, givig i the ed d 1 l m+l = d < 1, whe (1 l 3 3 )m+l e l(m+l) 3 log d l = Ω( m+ log d ). This gives a total width of W A() (2r) F Σ l = 2 O( log d m+ log d ) as required. The remaiig poit is that this ca be achieved without free brachig, i.e. i pbt, whe m = O( log d) ad W A () 2r = O( log d) 3. I this case, a brach termiates every time a real data item is received, i.e. l = 0. Oly F = 2 braches are eeded each time a free data item is received. Though we do ot have free braches, this ca be achieved by havig the algorithm brach o the Σ differet possible decisios for this received free data item. We ow boud the probability that the umber of rows startig at r goes to zero or expads past 2r. Lemma 3. Start with r rabbits. Each iteratio i [l], all the rabbits curretly alive have F babies (ad dies). Each baby lives idepedetly with probability 1 F. Hece, the expected umber of rabbits remaiig is agai r. The game succeeds if the populatio does ot die out ad there are ever more tha 2r rabbits. The probability of failure is at most 2 Θ(r/l2). r/2 Proof. Set h = l. Exp[r i ] = r i 1 ad usig Cheroff bouds the probability that it deviates by more tha h r i 1 is at most 2 Θ(h2). The probability that this does ot occur i ay of the l iteratios is at most l2 Θ(h2) = 2 Θ(r/l2). I such cases, assumig r i is always below 2r, each of the l iteratios chages r i by at most h 2r = r l. I reality, these chages perform a radom walk, because each of these chages is i a radom directio. But eve if they are all i the same directio, the total chage is at most r, which either zeros, or doubles r i. The width 2 O( log D ) of the algorithm described i the last sectio is 2 O( log ) for k-sat whe restricted to havig oly O(1) clauses per variable, because the the domai D of data items is o bigger tha O(1). However, for geeral k-sat problem, this result does ot directly improve the width because there are D = 2 2(2)k 1 potetial data items. Despite this, this algorithm ca surprisigly be exteded to 2 O(1 3 m 1 3 log ) whe the istace is restricted to havig oly m clauses. Theorem 4 (k-sat m-clauses). Whe there are oly m clauses, the k-sat ca be doe with 2 O(k m 1 3 log ) width. For k = 3, this is 2 O(2 3 log ) whe there are oly m = Θ() clauses ad 2 o() whe there are oly m = o( 2 ) clauses. The problem is that there may be m = Θ( 3 ) clauses. Proof. The first step is to ask for all variables that appear i at least l = k m 2 3 clauses. There are at most km l = k 1 23m of these. Brachig o each of these requires width 2 O(k m 1 3 ) width. 
For each of the remaiig variables, oly kllog(2) bits are eeded to specify the k variables i each of the at most l clauses that they are i. Hece, the umber of possible data items for each of these is at most d = (2) kl. 11

12 Theorem 2 the gives a pfbt algorithm for this problem with width w = 2 O( log d) = 2 O( kllog ) = 2 O(k m 1 3 log ). 4 Reductios from Hard to Easy Problems This sectio does reductios to show that if pbt or pfbt caot do well o the simple matrix problems the they caot do well o the hard problems either. Theorem 5 (Lower Bouds for Hard Problems). 7-SAT requires width 2 Ω() i pbt ad 2 Ω( ) i pfbt. Subset Sum requires width 2 Ω(/log ) i pfbt ad 2 Ω() i pfbt. Fially, Subset Sum without carries requires width 2 Ω() i pfbt. Proof. Lemma 6 gives a reductio to 7-SAT from the Matrix Iversio Problem i both the pbt ad the pfbt models. Lemma 7 gives a reductio i pfbt to Subset Sum (with ad without carries) from the Matrix Parity Problem with free data items. Lemma 8 gives a reductio from Subset Sum i the pfbt model to the Matrix Parity Problem with o free data items i the pfbt model. The results the follow from the lower bouds for the matrix problems give i Theorem 9. Lemma 6. [Reductio to 7-SAT] If 7-SAT ca be solved with width W i the pbt (pfbt) model, the the Matrix Iversio Problem (with O(log) free data items) ca be solved with the same width. Recall that the Matrix Iversio Problem is defied by a fixed ad kow matrix M {0, 1} with at most seve oes i each row ad K O(1) i each colum. The iput is a vector b {0, 1}. The output is x {0, 1} such that Mx = b. Each data item cotais the ame x j of a variable ad the value of the at most K bits b i from b ivolvig this variable, i.e. those for which M i,j = 1. Proof. Give a pbt (pfbt) algorithm for 7-SAT, we desig a pbt (pfbt) algorithm for the Matrix Iversio Problem as follows. Cosider a matrix iversio data item. Let x j be the variable specified ad b i be oe of the at most K bits from b ivolvig this variable, i.e. M i,j = 1. Because the i th row of M has at most seve oes, its cotributio to Mx = b is that the parity of these correspodig seve x variables must be equal to b i. This parity ca be expressed as the AND of 64 clauses ivolvig these seve variables. These clauses for each of the K bits b i are icluded i the 7-SAT data item correspodig to this matrix iversio data item. Note that as required for the 7-SAT problem, this costructed data item cotais a variable x j ad the list of clauses cotaiig this variable. Each clause cotais at most seve variables. As with a matrix iversio data item, the oly ew iformatio that a algorithm who is aware of the reductio lears from a ew 7-SAT data item is the value of the at most K bits b i from b ivolvig this variable. Fially, ote that the output of both the matrix iversio problem ad of 7-SAT is x {0, 1} such that Mx = b. Lemma 7. [Reductio to Subset Sum with free data items] If Subset Sum ca be solved with width W i pfbt, the the Matrix Parity Problem with the itroductio of m = O( log ) data items ca be solved with the same width. If the versio of SS without carries ca be solved the oly m = O() data items eed to be added to the Matrix Parity Problem. Recall that the Matrix Parity Problem is give as iput a matrix M {0, 1}. For each j [], there is a data item cotaiig the ame x j of a variable ad the j th colum of the matrix. The output x i, for each i [], is the parity of the i th row, amely x i = j [] M i,j. Proof. Give a pfbt algorithm for SS (Subset Sum), we desig a pfbt algorithm for the MP (Matrix Parity Problem) as follows. We map the matrix M istace for MP to a 2 + O( log ) iteger istace for SS. If you write these itegers i biary i separate rows j liig up the 2 i bit places ito colums tha 12

13 at least part of this will be the traspose of M. Let D j deote the MP data item cotaiig the variable ame x j ad the j th colum of the matrix M. This data item will get mapped to two SS data items D j, ad D j,+. Each such iteger is 4 log 2 bits log. Of these 2 are special, bit idex c j for each colum j ad r i for each row i of the matrix M. Cosecutive special bits are separated by at least 2 log 2 zeros so that the sum over all the umbers of the oe special bit does ot carry ito the ext special bit. More formally, Let c j = j 2 log ad r i = 2 log + i 2 log. To idetify the colum-iteger relatio, the c j th bit of D j, will be oe iff j = j. To store the j th colum of the matrix, for i [], the r th i bit is M i,j. The secod iteger D j,+ associated with data item D j is exactly the same as D j, except that the r th j bit of D j, storig the diagoal etry M j,j is complemeted. The SS istace will have aother O( log ) itegers i dummy data items, i order to deal with carries from oe bit to the ext. For each bit idex k [1, log ] ad each row i [], the iteger E i,k will be oe oly i the r i +k th bit. The target T will be oe i the c th j bit ad the r i + log th bit for each i, j []. Havig mapped a istace of the MP problem to a istace of the SS problem, we must ow prove that the solutios to these two istaces correspod to each other. Let S sum be a solutio to the SS istace described above. We first ote that for each j, S SS must accept exactly oe of D j,+ ad D j, because these are the oly itegers with a oe i the c th j bit ad the target T requires a oe i this bit. Let S MP be the solutio to the MP istace i which x j = 1 if ad oly if D j,+ is accepted by S SS. We ow prove that this is a valid solutio, i.e. i [], x i = j [] M i,j. Cosider some idex i. Because S SS is a valid solutio, we kow that its itegers sum up to T ad hece the r th i bit of the sum is zero. Because there is o carry to the r th i bit, we kow that the parity of the r th i bit of the itegers i S SS is zero. The dummy itegers E i,k are zero i this bit. For j i, both D j,+ ad D j, have M i,j i this bit ad as see exactly oe of them is i S SS ad hece these itegers cotribute M i,j to the parity. For j = i, it is slightly trickier. D i,, if it is i S SS, also cotributes M i,i to the parity, while D i,+ cotributes the complimet M i,i 1. I this first case, x i = 0 ad i the secod x i = 1. Hece, we ca simplify this by sayig that together D i, ad D i,+ cotribute M i,i x i. Combiig these gives that the parity of the r th i bit of the [ itegers i S SS is 0 = j i i,j ] M [ ] M i,i x i. This gives as required that xi = j [] M i,j. Ad hece, S MP is a valid solutio. Now we must go i the opposite directio. Let S MP be a solutio for our MP problem. Put D j,+ i S SS if x j = 1, otherwise put D j, i it. For each row i, let N = 2 log ad N i = N (x i + j [] M i,j ) ad let the biary expasio of N i be [N i ] 2 = N i, log,...,n i,0, so that Ni = k [0, log ] N i,k 2 k. For k 0, iclude the iteger E i,k i S SS if ad oly if N i,k = 1. We must ow prove that the itegers i S SS add up to T. The same argumet as above proves that the c th j ad the r th i bits of the sum are as eeded for T. What remais is to deal with the carries. The c th j bits will ot carry. The r th i bits i the D j,+ or D j, itegers of S SS add up to x i + j [] M i,j. For k [1, log ], the iteger i E i,k is oe oly i the r i +k th bit ad hece ca be thought of cotributig 2 k to the r th i bit. Together, they cotribute k [1, log ] N i,k 2 k, which by costructio is equal to N i N i,0. 
Because S MP is a solutio, N i = N (x i + j [] M i,j ) is eve, makig N i,0 = 0. The total cotributio to the r th i bit is the (x i + j [] M i,j )+N i = N = 2 log, which carries to be zeros everywhere except i the r i + log th bit. This agrees with the target T. What remais to prove is that the MP computatio tree ca state-by-state mirror the SS computatio tree. It is the m = O( log ) free data items that the MP problem has that allows this reductio to be straightforward. For each i [], the MP problem will have a free data item labeled D i,2 d ad for each i [] ad k [1, log ], it will have oe labeled E i,k. I each state of the computatio tree, the SS algorithm specifies a priority orderig of its data items D i,+, D i,, ad E i,k. The simulatig MP algorithm costructs as follows its orderig of its data items D i, D i,2 d, ad E i,k. If either D i,+ or 13

14 D i, have bee see yet, the the first occurrece of them i this orderig is replaced by D i. The secod occurrece of them ca be igored because it will ever come ito play. If at least oe of D i,+ or D i, has bee see already, the the occurrece of the other oe i this SS orderig is replaced by the free data item D i,2 d. Ay dummy data item E i,k i the SS orderig are replaced by the correspodig free data item E i,k i the MP orderig. If accordig to this SS priority orderig, SS receives the first of D i,±, the MP accordig to its mirrored orderig will receive D i. Note that MP lears the same iformatio about its istace that SS does. If o the other had, SS receives the secod of D i,± or receives a dummy data item E i,k, the MP will receive the correspodig free data item. The MP algorithm (ad we ca assume the SS algorithm) kows that its istace is comig from this reductio ad hece they kew before receivig it that this dummy/free data item is i the istace ad hece either gais ay ew iformatio from this fact whe it is received. The iformatio of iterest that they both iadvertetly lear is that all data items D i the orderig before this received data item are leared to ot be i the istace ad as such are added to PI out. Because the MP algorithm always lears the same iformatio that SS does, MP is able to cotiue simulatig SS s algorithm. If we are reducig from the carry free versio of Subset Sum the the dummy data items E i,k are ot eeded. Lemma 8. [Reductio to Subset Sum i pfbt ] If Subset Sum ca be solved with width W i the pfbt model, the the Matrix Parity Problem without free data items ca be solved i pfbt with the same width. Proof. Give a pfbt algorithm for SS, our goal is to costruct a pfbt algorithm for MP. Despite this, we will eed to talk about a adversary. Recall that a data item is said to be kow at some poit i a computatio path if the algorithm kows that it is i the actual iput istace eve though it has ot yet received it. Oe way that such a item ca arise is that a adversary, worried that the data item may be used as free data item, may iform the algorithm that this data item is i the iput istace. We map the matrix M istace for MP to a istace for SS as before with itegers D j,, D j,+, ad E i,k. We must assume that the Subset Sum algorithm, SSA, kows that it is receivig a iput from the reductio. Hece, whe it receives the first of D i,±, it does i fact kow that the secod of the pair is i the iput. Hece, at this poit i the curret computatio, the adversary desigates that this secod data item is a kow data item. The E i,k data items are cosidered to be kow from the begiig. The pfbt model the does ot allow SSA to have mixed readig states that itertwie the ukow ad kow data item together i its priority orderigs. Cosider a ukow readig state of SSA, i.e. oe that puts all of the possible ukow data items before the kow oes. The simulatig MP algorithm costructs its orderig of its data items D i by replacig the first occurrece of D i,± with D i. If SSA receives oe of D i,±, the MP receives D i. If SSA receives a kow data item, the this meas that the curret computatio path is doe because the iput istace is completely kow. Now cosider a kow readig state of SSA, i.e. oe that puts all of the kow oes before the possible ukow oes. Both SSA ad MP kow that these kow data items are i SSA s istace ad hece kow that SSA will receive the first oe i this order. I fact, there was o poit i this read state at all. 
If SSA forks i order to make more tha oe irrevocable decisio about this kow data item the MP makes this same braches usig a free brach. This free brach was oe of the iitial motivators of havig free braches. 5 Lower Bouds for the Matrix Problems This sectio will prove the followig lower bouds for the matrix problems. 14

15 Theorem 9 (Lower Bouds for the Matrix Problems). The Matrix Iversio Problem requires width 2 Ω() i the pbt model ad width 2 Ω( ) i the pfbt model. The Matrix Parity Problem requires width 2 Ω() i the pfbt model. If icluded i these problems are m free data items, the widths 2 Ω m+ ad 2 Ω 2 m+ are still eeded. This Sectio will be orgaized as follow. We will begi by developig i Sectio 5.1 a geeral techique for provig lower bouds o the width of the computatio tree of ay algorithm solvig a give problem i either the pbt or the pfbt models. Theorem 10 formally states ad proves that this techique works. We will the show how it is applied to get all of the results i Theorem 9. This proof still depeds o four lemmas specific to the problems at had. Sectio 5.2 proves Lemmas 12 ad 13 for the Matrix Parity Problem ad Sectio 5.2 proves Lemmas 16 ad 17 for the Matrix Iversio Problem. 5.1 The Lower Boud Techique As said, we begi developig a geeral techique for provig lower bouds for pbt ad pfbt models usig it to prove Theorem 9 subject to four lemmas proved later. For each iput istace I, the computatio tree T A (I) has height = I measured i terms of the umber of data items received. Let l be a parameter, Θ() or Θ( ) depedig o the result. We will focus our attetio o partial paths p from the root to a state at level l i T A (I). Recall that such states are uiquely idetified with PIp i, PIp out, PS p, f p. The partial istace PIp = PIp i, PIp out cosists of the l data items kow to be i the istace ad those iadvertetly leared to be ot i the istace. Lemma 1 proves that PI p costitutes the sum kowledge that algorithm A kows about the istace I i this state. With oly this limited kowledge, it is ulikely that the partial solutio PS p that the algorithm has irrevocably decided about the data items i PIp i are correct. Formally, we prove that Pr I [S(I) PS p I PI p ] is small, where I PI p is defied to mea that istace I is cosistet with p, i.e. cotais the data items i PIp i ad ot those i PIp out ad S(I) PS p is defied to mea that the decisios PS S are cosistet with the/a solutio for I. With free brachig, i depth oly l upper = O( log d), the ureasoable upper boud i Theorem 2 maages to lear the etire iput I ot by havig PIp i cotai all of its data items but by havig PIp out cotai all of the data items ot i it. This demostrates how havig PIp out extra ordiarily large, ca cause I PI p to give the algorithm sufficiet iformatio about the istace I that it ca deduce a correct partial solutio PS p causig Pr I [S(I) PS p I PI p ] to be far too big. Hece, we defie a path p to be bad if this is the case. A pfbt algorithm A is allowed to fork both i a free brachig state i order to try differet priority orderigs ad i a iput readig state i order to try differet irrevocable decisios. Each of the computatio paths p for a give tree T A (I) ca be uiquely idetified by a tuple of forkig idexes f = σ 1,...,σ l, f 1,...,f l, where σ i Σ is idicates the decisio made i the i th iput readig state ad f i N idicates which brach is followed i the i th free brachig state. Note that if the width of the computatio tree T A (I) is bouded by W A (), the each f i is at most W A (). This allows us to defie Forks pfbt = [Σ [W A ()]] l be the set of possible tuples of forkig idexes. I cotrast, a pbt algorithm is ot allowed free brachig, ad hece Forks pbt = Σ l. The oly differece betwee these two models that we will use is the sizes of these sets. 
Oce we fix such a tuple f (idepedet of a iput I), we ca the defie the algorithmic strategy A f used by the brachig program A to be the mappig betwee the iput I ad the PI i p, PI out p, PS p, f p idetifyig the state at the ed of the path idetified by tuple f. This ext theorem outlies the steps sufficiet to prove that the width of the algorithm must be high. Theorem 10 (The Lower Boud Techique). 1. Defie a probability distributio P o a fiite family of hard istaces. 15

16 For both the Matrix Parity ad Matrix Iversio Problems the distributio is uiform o the legal istaces with data items. 2. Oly cosider partial paths p from the root to a state at level l i the tree T A (I). l = Θ() or Θ( ) depedig o the result. 3. A path p is defied to be good if its set PI out p is good. For the Matrix Iversio Problems, PIp out is defied to be good if it is sufficietly big. For the Matrix Parity Problems, it oly must be sufficietly big with respect to at least oe variable x j. 4. Prove that if p is good path at level key level l, the Pr I [S(I) PS p I PI p ] pr good, amely that the iformatio about the istace I gaied from PI p = PIp i, PIp out is ot sufficiet to effectively make the irrevocable decisios PS p about the data items i i If the algorithm simply guesses the decisio PS p for each of the l data items i PI i p, the it is correct with probability Σ l = 2 l. For both the Matrix Parity ad the Matrix Iversio problems, we are able to boud pr good 2 Ω(l). See Lemmas 13 ad Prove that with probability at least 1 2, every set PIout p i T A (I) is sufficietly small so that it is cosidered good. Cosider a tuple of forkig idexes [ f Forks. Cosider some algorithmic strategy A f alog these forkig idexes. Prove that Pr I Af produces a PI out that is bad ] 1 pr bad 1 Note that 2 Forks gives us the probability 1 2 after doig the uio boud. p. 2 Forks. For the Matrix Iversio Problem, we are able to boud pr bad 2 Ω(). For the Matrix Parity Problem, we are able to do much better boudig it by 2 Ω(2). See Lemmas 16 ad 12. These three step gives that ay pfbt (pbt) algorithm A requires width W A () 1 2pr good. We ow prove that the techique works. Proof. of Theorem 10: Step 5 i the techique proves that for a radom istace I, the probability that T A (I) cotais a bad partial path is at most 1 2. Hece, ay workig algorithm A must solve the problem with probability at least 1 2 usig a full path whose first l levels is a good partial path. This requires that there exists a partial path p T A (I) such that the irrevocable decisios PS p made alog it are cosistet with the/a solutio S(I) of the istace, amely 1 2 Pr I [ p T A (I) l, such that p is good ad S(I) PS p ] This probability deals oly with paths i T A (I), which i tur depeds o the radomly selected istace I. I cotrasts, the Step 4 probability, Pr I [S(I) PS p I PI p ] pr good, talks about fixed sets PI p ad PS p that ca deped o the algorithm A as a whole but ot o our curret choice of I. To uderstad the algorithm A idepedet of a particular istace I, defie Paths = {p = PI p, PS p, f p I such that p is a partial path of legth l i T A (I ) } ad Paths g those that are good. Note that p T A (I) [p Paths ad I PI p ]. It follows that the previously metioed probability is bouded as follows. Pr I [ p Paths g, such that I PI p ad S(I) PS p ] The uio boud, coditioal probabilities, ad pluggig i the probability from Step 4 traslates this probability ito the followig. 16

  ≤ Σ_{p ∈ Paths_g} Pr_I[I ∼ PI_p and S(I) ∼ PS_p]
  = Σ_{p ∈ Paths_g} Pr_I[S(I) ∼ PS_p | I ∼ PI_p] · Pr_I[I ∼ PI_p]
  ≤ pr_good · Σ_{p ∈ Paths_g} Pr_I[I ∼ PI_p]

Translating from the algorithm A as a whole back to just those paths in T_A(I) requires understanding how the computation tree T_A(I) changes for different instances I. Lemma 11 proves that if p = ⟨PI_p, PS_p, f_p⟩ is a path that algorithm A follows for some instance I' and I is consistent with what is learned in this path, then this same p will be followed when A is given instance I; namely, if p ∈ Paths and I ∼ PI_p, then p ∈ T_A(I). This bounds the previous probability with the following.

  ≤ pr_good · Σ_{p ∈ Paths} Pr_I[p ∈ T_A(I)]

The width of T_A(I) at level l is equal to the number of paths of length l in T_A(I). This width is bounded by W_A(n). Hence, the above sum of probabilities can be interpreted as follows.

  = pr_good · Exp_I[width of T_A(I) at level l] ≤ pr_good · W_A(n)

Together this gives us W_A(n) ≥ 1/(2·pr_good) as required.

We now state and prove the lemma needed in the above proof that considers how the computation tree T_A(I) changes for different instances I.

Lemma 11. If p = ⟨PI_p, PS_p, f_p⟩ is a path that algorithm A follows for some instance I' and I is consistent with what is learned in this path, then this same p will be followed when A is given instance I; namely, if p ∈ Paths and I ∼ PI_p, then p ∈ T_A(I).

Proof. Suppose p ∈ Paths and I ∼ PI_p. Recall that p ∈ Paths means that there exists an instance I' such that p = ⟨PI_p, PS_p, f_p⟩ is the label of a partial path of length l in T_A(I'). It follows that I' ∼ PI_p. This gives us everything needed for Lemma 1, namely we have two inputs I and I' that are both consistent with the partial instance PI_p, i.e. I ∼ PI_p and I' ∼ PI_p, and there exists a path p ∈ T_A(I') identified with ⟨PI_p, PS_p, f_p⟩. This lemma then gives that p ∈ T_A(I).

Our lower bound results follow from this technique and the four lemmas specific to the problems at hand.

Proof of Theorem 9: In all cases, the technique gives that W_A(n) ≥ 1/(2·pr_good) ≥ 2^{Ω(l)}. See Lemmas 13 and 17. What remains is to ensure that the probability pr_bad of a bad path is at most 1/(2|Forks|). Lemma 16 states that for the Matrix Inversion Problem, pr_bad ≤ 2^{-Ω(n)}. In the pbt model, |Forks_pbt| = 2^l, giving |Forks_pbt| · pr_bad ≤ 1/2 even when l ∈ Θ(n). In the pfbt model, however, |Forks_pfbt| = [2W_A(n)]^l = [2^{Θ(l)}]^l = 2^{Θ(l²)}, giving |Forks_pfbt| · pr_bad ≤ 1/2 only when l ∈ Θ(√n). Lemma 12 states that for the Matrix Parity Problem pr_bad ≤ 2^{-Ω(n²)}. Hence we have no problem setting l ∈ Θ(n) either way.

If m free data items are included in these problems, then what changes is that instead of just considering l levels of the computation, we consider the computation up to the point at which l real data items have been received and any number of free data items have been received. This involves up to l + m levels in total. This increases |Forks_pfbt| from [2W_A(n)]^l = 2^{Θ(l²)} to [2W_A(n)]^{l+m} = 2^{Θ(l(l+m))}. Hence, when pr_bad ≤ 2^{-Ω(n)}, l ∈ Θ(n/(m+√n)) is needed, and when pr_bad ≤ 2^{-Ω(n²)}, l ∈ Θ(n²/(m+n)) is needed.
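For concreteness, here is the arithmetic behind these choices of l (a sketch only; it uses the simplification W_A(n) = 2^{Θ(l)} coming from the width bound above). For the Matrix Inversion Problem with m free data items, the union bound requires

    |Forks_pfbt| · pr_bad = 2^{Θ(l(l+m))} · 2^{-Ω(n)} ≤ 1/2,    i.e.    l(l+m) ≤ O(n).

Setting l = Θ(n/(m+√n)) gives l² ≤ n²/(m+√n)² ≤ n and lm ≤ nm/(m+√n) ≤ n, so l(l+m) ≤ O(n), while the resulting width lower bound is 2^{Ω(l)} = 2^{Ω(n/(m+√n))}. For the Matrix Parity Problem, pr_bad ≤ 2^{-Ω(n²)}, so the constraint relaxes to l(l+m) ≤ O(n²), and l = Θ(n²/(m+n)) satisfies it (since l ≤ n and l(l+m) ≤ l(n+m) ≤ n²) while giving width 2^{Ω(n²/(m+n))}.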

5.2 The Matrix Parity Problem

This section proves the required good and bad path lemmas needed for the Matrix Parity Problem. Here is a reminder of the relevant definitions.

MP with distribution P: The distribution P uniformly chooses a matrix M ∈ {0,1}^{n×n} for the input instance. The j-th data item contains the name x_j of a variable and the j-th column of the matrix. The output x_i, for each i ∈ [n], is the parity of the i-th row, namely x_i = ⊕_{j∈[n]} M_{i,j}.

Height l: Define l = εn to be the level to which the partial paths p are considered.

Partitioning D: For each j ∈ [n], let D_j ⊆ D denote the set of possible data items labeled by x_j and hence containing the j-th column of the matrix. Similarly, for a given path p, PI^out can be partitioned into sets PI_j^out ⊆ D_j. Define PI_j^? ⊆ D_j to be the set of data items for which it remains unknown whether or not they are in the instance I. If j ∉ PI^in (i.e. PI^in does not contain a data item from D_j), then PI_j^? = D_j − PI_j^out. If j ∈ PI^in, then PI_j^? = ∅, because having received one data item from D_j, the algorithm knows that no more from this domain are possible.

Good Path: Such a computation path p and the set PI^out arising from it are considered to be good if there is at least one j ∉ PI^in with |PI_j^?| ≥ q|D_j|, where q is set to be 2^{-(1-ε)l} = 2^{-(1-ε)εn}. This is because keeping a 2^{-(1-ε)εn} fraction of D_j effectively means that at most (1-ε)εn bits about the j-th column have been revealed.

The next lemma then bounds the probability that a computation path is bad. Consider a tuple of forking indices f ∈ Forks and the algorithmic strategy A_f along these forking indexes, halting after l levels.

Lemma 12. pr_bad = Pr_I[the path followed by A_f is q-bad] ≤ (3q)^l. This gives pr_bad ≤ 2^{-Ω(ε²n²)} = 2^{-Ω(n²)} for the Matrix Parity Problem as claimed.

Proof. The algorithmic strategy A_f follows one path p until l data items have been read. During each of these l steps, it chooses a permutation of the remaining data items and then learns which of the items in the randomly chosen input appears first in this ordering. The item revealed to be in the input is put into PI^in and all those appearing before it in the specified ordering are put into PI^out. We will, however, break this process into sub-steps, both in terms of defining the orders and in terms of delaying fully choosing the input until the information is needed. In each sub-step, the computation A_f specifies one data item D to be next in the current ordering. Then the conditioned distribution P randomly decides whether to put D into the input. Above we partitioned the data items into the sets D_j ⊆ D based on which variable x_j they are about. Though the algorithm deals with all such D_j in parallel, they really are independent games. When taking a step in the j-th game, A_f selects one data item D from PI_j^? = D_j − PI_j^out to be next in the current ordering. Given that A_f knows that exactly one of the data items in this set is in the input and that each is equally likely to be, P decides that D is in the input with probability 1/|PI_j^?|. If it is, then this j-th game is done. Otherwise, D is removed from PI_j^?.

Given that the statement of the lemma being proved bounds the probability that the path p followed is bad, we need to assume that the algorithm A_f is doing everything in its power to make this the case. Hence, we say that A_f loses if this path is good, i.e. if after PI^in contains l data items there is at least one j ∉ PI^in with |PI_j^?| ≥ q|D_j|. In order to make the game less confusing and to give A_f more chance of winning, we will not stop the game when PI^in contains l data items, but instead we will let him play all n games to completion. He wins the j-th game if he can manage to shrink PI_j^? to be smaller than q|D_j| without adding j to PI^in. If he accidentally does add j to PI^in, then he loses this j-th game.
It is ok for him to lose l of these games, because, during the part of his computation we consider, we allow PI^in to grow to be of size l. However, if he loses l+1 of the games, then we claim that this computation path is good. At least one of these lost games had its j added to PI^in only after we stopped considering his computation, so at level l we still have j ∉ PI^in and |PI_j^?| ≥ q|D_j|.

Now let us bound the probability that he is able to win the j-th game. Effectively, all he is able to do in this game is to specify the one order in which he will ask about the data items in D_j. The algorithm wins this game if and only if the data item from D_j that is in the input lies in the last q fraction of this ordering. Because this data item is selected uniformly at random, the probability of the algorithm winning this game is q.

For j ∈ [n], let X_j be the random variable indicating whether A_f wins the j-th game and let X = Σ_{j∈[n]} X_j denote the number of games won. We proved that Pr[X_j = 1] = q. We also proved that Pr[path is bad] ≤ Pr[X ≥ n−l]. Define µ = Exp[X] = qn and 1 + δ = (1/q)(1 − l/n) so that (1+δ)µ = n−l. Chernoff bounds give that Pr[X ≥ (1+δ)µ] < (e^δ/(1+δ)^{1+δ})^µ ≤ (3q)^l.

We now prove that the partial information about the input I gained from a good PI = ⟨PI^in, PI^out⟩ is not sufficient to effectively make irrevocable decisions PS about the data items in PI^in. This is the only result in the lower bounds section that depends on anything specific about the computational problem, and it does not depend on the specifics of the model of computation.

Lemma 13. If ⟨PI, PS, f⟩ is the label of a q-good path p of length l, then pr_good = Pr_I[S(I) ∼ PS | I ∼ PI] ≤ 1/(q·2^l). Here q = 2^{-(1-ε)l}, giving pr_good ≤ 2^{-εl} as needed.

Proof. Assume the data items in PI^in have told the algorithm the first l columns of M, but required him to make irrevocable guesses PS of the parities x_1,...,x_l of the first l rows of M. PI^out being q-good means that there is a j ∉ PI^in with |PI_j^?| ≥ q|D_j|, where each element of PI_j^? ⊆ D_j = {0,1}^n is a vector that remains possible for the j-th column of M. In order to be nice to the algorithm, we will give the algorithm all of M except for this column j. Then for each row i ∈ [1,l], we now know the parity ⊕_{j'≠j} M_{i,j'} of these columns and know the parity x_i = ⊕_{j'∈[n]} M_{i,j'} that the algorithm has promised for the entire row. Hence, we can compute the required value for M_{i,j} in order for the algorithm to be correct. Let A be the set of vectors α for the j-th column such that the algorithm is correct, namely A = {α ∈ {0,1}^n : ∀ i ∈ [1,l], α_i = the required M_{i,j}}. Note that |A| = 2^{n−l} because there is one such α for each way of setting the remaining n−l bits of this column. The adversary now randomly selects α ∈ PI_j^? to be the j-th column in the input M. If the selected α is in A, then the algorithm wins. The probability of this, however, is |A|/|PI_j^?| ≤ [2^{n−l}]/[q·2^n] = 1/(q·2^l).

This completes the lower bound for the Matrix Parity Problem.

5.3 The Matrix Inversion Problem

This section proves the required good and bad path lemmas needed for the Matrix Inversion Problem. Here is a reminder of the relevant definitions.

Fixed Matrix M: The problem MI is defined by a fixed matrix M ∈ {0,1}^{n×n} that is non-singular, has exactly seven ones in each row, at most K ones in each column, and is an (r,7,3)-boundary expander, with K ∈ Θ(1) and r ∈ Θ(n).

Boundary Expander: M is an (r,7,c)-boundary expander if it has exactly seven ones in each row and if for every set R ⊆ [n] of at most r rows, the boundary ∂_M(R) has size at least c|R|. The boundary is defined to contain the column j if there is a single one in the rows R of this column. Section 5.4 sets K = Θ(1), c = 3, and r = Θ(n) and proves that such a matrix exists.
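To make the boundary-expander definition concrete, the following small sketch (a toy illustration only; the 4×4 matrix and the parameters used here are placeholders, not the degree-7 matrix constructed in Section 5.4) computes the boundary ∂_M(R) and checks the expansion condition by brute force over all row sets R of size at most r.

from itertools import combinations

def boundary(M, R):
    # Columns j that contain exactly one 1 within the rows of R.
    n = len(M[0])
    return [j for j in range(n) if sum(M[i][j] for i in R) == 1]

def is_boundary_expander(M, r, c):
    # Every set R of at most r rows must have |boundary(M, R)| >= c * |R|.
    for size in range(1, r + 1):
        for R in combinations(range(len(M)), size):
            if len(boundary(M, R)) < c * len(R):
                return False
    return True

# Toy 4x4 matrix with three ones per row (the paper's M has seven per row).
M = [[1, 1, 1, 0],
     [0, 1, 1, 1],
     [1, 0, 1, 1],
     [1, 1, 0, 1]]
print(is_boundary_expander(M, r=1, c=3))  # True: each single row has 3 boundary columns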

MI with distribution P: The distribution P uniformly chooses a vector b ∈ {0,1}^n. The j-th data item contains the name x_j of a variable and the values of the at most K bits b_i from b involving this variable, i.e. those for which M_{i,j} = 1. The output is the x ∈ {0,1}^n such that Mx = b over GF(2).

Height l: Define l to be the level to which the partial paths p are considered. If we are proving the 2^{Ω(n)} width lower bound for the pbt model, then set l = n/2^{K+3} = Θ(n). If we are proving 2^{Ω(√n)} for the pfbt model, then set l = Θ(√n).

Good Path: Such a computation path p and the set PI^out arising from it are considered to be good if |PI^out| ≤ Q. Here Q = r − Kl = Θ(n), which is reasonable given that the number of possible data items is |D| = 2^K · n = Θ(n).

At each point along the computational path p, the algorithmic strategy A_f knows that the input is consistent with the current partial instance PI. Knowing exactly what the algorithm learns from this is complex. Hence, we will change the game by having the adversary reveal extra information to the algorithm so that at each point in his computation, what he knows is only the values of some r' bits of the input b. If the path is good, then this number does not exceed r, i.e. the first parameter of expansion.

Lemma 14. Let PI = ⟨PI^in, PI^out⟩ be arbitrary sets of data items. The condition I ∼ PI reveals at most r' = K|PI^in| + |PI^out| bits of b. More formally, let B = {b ∈ {0,1}^n : I ∼ PI} be the set of instances b consistent with the partial instance PI. The adversary can reveal more information to the algorithm, restricting this set to B' ⊆ B, so that the vectors in B' have fixed values in r' of the bits and take all 2^{n−r'} possibilities in the remaining n−r' bits. Note that when PI^out is good, the algorithm learns at most K|PI^in| + |PI^out| ≤ Kl + (r − Kl) = r bits of b.

Proof. Recall that the proof of Lemma 12 gives a sub-step interpretation of the algorithmic strategy A_f following one path p. In each of these sub-steps, the algorithm specifies one data item D from those that are still possible and is told whether or not D is in the input. If it is, then D is added to PI^in, and if not, it is added to PI^out. When the algorithm learns that data item D is in PI^in, he learns the value of at most K bits of b ∈ {0,1}^n, those corresponding to the ones in the j-th column of M. When he learns that D is in PI^out, he learns a setting in {0,1}^k that some k ≤ K of the bits of b do not have. If k is large, then the immediate usefulness of this information is not clear. The adversary chooses one of the k bits of b that the data item is about and that the algorithm does not yet know the value of, and reveals that this bit of b has the opposite value from that given in the data item. The algorithm, seeing this as the reason the data item is not in the input, learns nothing more about the input. In the end, what the algorithm learns from I ∼ PI is at most K|PI^in| + |PI^out| bits of b.

Lemma 15. Let PI = ⟨PI^in, PI^out⟩ denote the current state of the computation. Let D be a data item that is still possibly in the input I. It follows that Pr_I[D ∈ I | I ∼ PI] ≥ q = 1/2^K.

Proof. The proof of Lemma 14 changes the game so that what the algorithm knows at each step of the computation is the values of some r' of the bits of the vector b ∈ {0,1}^n. The conditional distribution P uniformly at random chooses the remaining n−r' bits. Each data item D specifies a variable x_j and up to K bits of the vector b. If this D disagrees with some of the r' revealed bits of b, then D is not still possible. Otherwise, the probability that D is in the input is 1/2^{K'} ≥ 1/2^K, where K' ≤ K is the number of the specified bits that have not been fixed.

The next lemma then bounds the probability that the computation path given by A_f is bad.

Lemma 16. pr_bad = Pr_I[the path followed by A_f is Q-bad] ≤ 2^{-Ω(n)}.
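As a small illustration of the MI instance format (a sketch only: the 4×4 upper-triangular matrix below is an arbitrary invertible stand-in for the fixed degree-7 boundary expander, and the brute-force solver is for exposition rather than efficiency), the following shows how the data items and the required output are formed from M and a random b.

from itertools import product
import random

# Stand-in for the fixed non-singular matrix M (a toy; the paper's M has seven
# ones per row and is an (r,7,3)-boundary expander).
M = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1],
     [0, 0, 0, 1]]
n = len(M)

b = [random.randint(0, 1) for _ in range(n)]

# The j-th data item: the variable name x_j together with the bits b_i for the
# rows i in which x_j appears, i.e. those with M[i][j] == 1.
data_items = [("x%d" % j, {i: b[i] for i in range(n) if M[i][j] == 1})
              for j in range(n)]

# The required output is the x in {0,1}^n with Mx = b over GF(2); found here by
# brute force over all 2^n candidates.
def solves(x):
    return all(sum(M[i][j] * x[j] for j in range(n)) % 2 == b[i] for i in range(n))

x = next(x for x in product([0, 1], repeat=n) if solves(x))
print(data_items)
print(x)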
