Moving Target D* Lite
Xiaoxun Sun, William Yeoh, Sven Koenig
Computer Science Department, University of Southern California
Los Angeles, CA, USA
{xiaoxuns, wyeoh, skoenig}@usc.edu

ABSTRACT

Incremental search algorithms, such as Generalized Fringe-Retrieving A* and D* Lite, reuse search trees from previous searches to speed up the current search and thus often find cost-minimal paths for a series of similar search problems faster than by solving each search problem from scratch. However, existing incremental search algorithms have limitations. For example, D* Lite is slow on moving target search problems, where both the start and goal states can change over time. In this paper, we therefore introduce Moving Target D* Lite, an extension of D* Lite that uses the principle behind Generalized Fringe-Retrieving A* to repeatedly calculate a cost-minimal path from the hunter to the target in environments whose blockages can change over time. We demonstrate experimentally that Moving Target D* Lite can be three to seven times faster than Generalized Adaptive A*, which so far was believed to be the fastest incremental search algorithm for solving moving target search problems in dynamic environments.

Categories and Subject Descriptors: I.2.8 [Problem Solving, Control Methods, and Search]: Graph and tree search strategies

General Terms: Algorithms, Experimentation, Performance, Theory

Keywords: D* Lite, Dynamic Environment, Generalized Fringe-Retrieving A*, Hunter, Incremental Search, Moving Target Search, Path Planning, Video Games

We thank Maxim Likhachev for sharing his insights on the properties of D* Lite with us. This material is based upon work supported by, or in part by, NSF, ARL/ARO under contract/grant number W911NF and ONR in the form of a MURI under contract/grant number N. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the sponsoring organizations, agencies or the U.S. government.

Cite as: Moving Target D* Lite, X. Sun, W. Yeoh and S. Koenig, Proc. of 9th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2010), van der Hoek, Kaminka, Lespérance, Luck and Sen (eds.), May 10-14, 2010, Toronto, Canada, pp. XXX-XXX. Copyright © 2010, International Foundation for Autonomous Agents and Multiagent Systems. All rights reserved.

1. INTRODUCTION

A moving target search problem is a planning problem where a hunter has to catch a moving target [5]. One application of this area of research is computer games, where computer characters (= hunters) have to chase or catch up with other characters (= targets), typically in gridworlds whose blockages can change over time [5, 13, 9]. The moving target search problem is solved once the hunter and target are in the same state (= cell). The hunter has to determine quickly how to move. The computer game company BioWare, for example, recently imposed a limit of 1-3 milliseconds on the search time [1].

There are generally two classes of approaches for solving moving target search problems with search algorithms: Offline approaches take into account all possible contingencies (namely, movements of the target and changes in the environment) to find the best plan for the hunter, for example, using minimax or alpha-beta search [9]. Unfortunately, offline approaches do not scale to large environments due to the large number of contingencies.
Online approaches sacrifice optimality for smaller computation times by interleaving planning and movement, for example, by finding the best plan for the hunter with the information currently available and allowing the hunter to gather additional information while moving. For example, they can determine a cost-minimal path (or a prefix of it) from the current state of the hunter to the current state of the target in the current environment. This allows them to find plans for the hunter efficiently while being reactive to movements of the target and changes in the environment. Examples include real-time search algorithms [5] and incremental search algorithms [13].

Most real-time search algorithms limit the lookahead of each search and thus find only a prefix of a path from the hunter to the target before the hunter starts to move. Their advantage is that the hunter starts to move in constant time, independent of the number of states. Their disadvantage is that the trajectory of the hunter can be highly suboptimal and that it is difficult to determine whether there exists a path from the hunter to the target, although there are recent exceptions [16]. Furthermore, they do not scale to large environments due to their memory requirements, which typically grow quadratically in the number of states [2]. Incremental search algorithms, on the other hand, find a path from the
hunter to the target before the hunter starts to move. If they run sufficiently fast, they provide alternatives to real-time search algorithms that avoid their disadvantages. We therefore investigate incremental search algorithms in this paper. There are generally two classes of incremental search algorithms: Incremental search algorithms of the first kind use information from previous searches to update the h-values to make them more informed over time and thus focus the (complete) searches better. Examples include MT-Adaptive A* [8] and its generalization Generalized Adaptive A* (GAA*) [12], which build on an idea in [4]. Incremental search algorithms of the second kind transform the previous search tree into the current search tree. The current search then starts with the current search tree instead of from scratch. Examples include Differential A* [15], D* [11], D* Lite [7], Fringe-Retrieving A* (FRA*) [13] and its generalization Generalized FRA* (G-FRA*) [14].

Most incremental search algorithms of the second kind were designed for search problems with stationary start states or in static environments (whose blockages do not change over time). Thus, they are not efficient for moving target search problems in dynamic environments (whose blockages can change over time), since both the start and goal states can change over time for moving target search problems. For example, D* Lite can shift the map to keep the start state stationary but then cannot reuse much of the information from previous searches and can be slower than running A* from scratch [8]. In this paper, we therefore introduce Moving Target D* Lite (MT-D* Lite), an extension of D* Lite that uses the principle behind G-FRA* to repeatedly calculate a cost-minimal path from the hunter to the target in dynamic environments. MT-D* Lite does not shift the map to keep the start state stationary and can be three to seven times faster than GAA*, which so far was believed to be the fastest incremental search algorithm for solving moving target search problems in dynamic environments.

2. PROBLEM DEFINITION

Although all proposed search algorithms can operate on arbitrary directed graphs, for ease of illustration, we describe their operation on known four-neighbor gridworlds with blocked and unblocked states. The hunter can move from its current unblocked state to any unblocked neighboring state with cost one and to any blocked neighboring state with cost infinity. It always follows a cost-minimal path from its current state to the current state of the target until it reaches the current state of the target, which is often a reasonable strategy for the hunter and can be computed efficiently. We make no assumptions about how the target moves. We use the following notation: S denotes the finite set of all (blocked and unblocked) states, s_start ∈ S denotes the current state of the hunter and the start state of the search, and s_goal ∈ S denotes the current state of the target and the goal state of the search. c(s, s') denotes the cost of a cost-minimal path from state s to state s'. Succ(s) denotes the set of successors of state s, and Pred(s) denotes the set of predecessors of state s. The cost of moving from state s to its successors or from its predecessors to state s can be infinity. In gridworlds, the predecessors and successors of a state are thus its (blocked and unblocked) neighbors.
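As a concrete illustration of this setup, the following is a minimal sketch of the gridworld model in Python; the class and method names (GridWorld, succ, pred, cost) are our own illustrative choices and not taken from the paper.

INF = float('inf')

class GridWorld:
    """Known four-neighbor gridworld with blocked and unblocked cells."""

    def __init__(self, width, height, blocked):
        self.width, self.height = width, height
        self.blocked = set(blocked)              # set of (x, y) cells

    def neighbors(self, s):
        # Predecessors and successors of a state are its (blocked and
        # unblocked) neighbors, so Succ(s) = Pred(s) in this model.
        x, y = s
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < self.width and 0 <= ny < self.height:
                yield (nx, ny)

    succ = pred = neighbors

    def cost(self, s, t):
        # Moving into an unblocked neighboring state costs one,
        # moving into a blocked neighboring state costs infinity.
        return INF if t in self.blocked else 1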
3. BACKGROUND

We now provide the background on A*, Generalized Fringe-Retrieving A* and D* Lite that is necessary to understand the proposed search algorithms, closely following the descriptions in [13, 14].

3.1 A*

A* [3] is probably the most popular search algorithm in artificial intelligence and the basis of all search algorithms described in this paper. A* maintains four values for every state s ∈ S: (1) The h-value h(s, s_goal) is a user-provided approximation of c(s, s_goal). The h-values do not only need to be consistent [10] but also need to satisfy the additional triangle inequality h(s, s'') ≤ h(s, s') + h(s', s'') for all states s, s', s'' [7]. (2) The g-value g(s) is an approximation of c(s_start, s). Initially, it is infinity. (3) The f-value f(s) := g(s) + h(s, s_goal) is an approximation of the smallest cost of moving from the start state via state s to the goal state. (4) The parent pointer par(s) ∈ Pred(s) points to the parent of state s in the search tree. Initially, it is NULL. The parent pointers are used to extract a cost-minimal path from the start state to the goal state after the search terminates.

A* also maintains two data structures: (1) The OPEN list contains all states to be considered for expansion. Initially, it contains only the start state with g-value zero. (2) The CLOSED list contains all states that have been expanded. Initially, it is empty. Actually, none of the search algorithms described in this paper require a CLOSED list but, for ease of description, we pretend they do. A* repeatedly deletes a state s with the smallest f-value from the OPEN list, inserts it into the CLOSED list and expands it by performing the following operations for each state s' ∈ Succ(s): If g(s) + c(s, s') < g(s'), then A* generates s' by setting g(s') := g(s) + c(s, s') and par(s') := s and, if s' is not in the OPEN list, inserting it into the OPEN list. A* terminates when the OPEN list is empty or when it expands the goal state. The former condition indicates that no path exists from the start state to the goal state, and the latter condition indicates that A* found a cost-minimal path. Therefore, one can solve moving target search problems in dynamic environments by finding a cost-minimal path with A* from the current state of the hunter to the current state of the target whenever the environment changes or the target moves off the known path.
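For concreteness, here is a minimal A* sketch in Python under the assumptions above; it is an illustration only, not the authors' implementation. The argument graph is assumed to provide succ() and cost() as in the gridworld sketch from Section 2, and h is assumed to be a consistent heuristic.

import heapq

INF = float('inf')

def a_star(graph, h, s_start, s_goal):
    g = {s_start: 0}
    par = {s_start: None}
    open_list = [(h(s_start, s_goal), s_start)]       # OPEN: (f-value, state)
    closed = set()                                    # CLOSED
    while open_list:
        f, s = heapq.heappop(open_list)               # state with smallest f-value
        if s in closed:
            continue                                  # stale OPEN entry
        closed.add(s)
        if s == s_goal:                               # goal expanded: extract path
            path = []
            while s is not None:
                path.append(s)
                s = par[s]
            return list(reversed(path))
        for t in graph.succ(s):                       # expand s
            new_g = g[s] + graph.cost(s, t)
            if new_g < g.get(t, INF):
                g[t], par[t] = new_g, s
                heapq.heappush(open_list, (new_g + h(t, s_goal), t))
    return None                                       # OPEN list empty: no path exists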
3.2 G-FRA*

Generalized Fringe-Retrieving A* (G-FRA*) [14] is an incremental search algorithm that generalizes Fringe-Retrieving A* (FRA*) from gridworlds to arbitrary directed graphs. We use the principle behind G-FRA* in one of the proposed search algorithms. G-FRA* cannot solve moving target search problems in dynamic environments, but it solves moving target search problems in static environments by finding a cost-minimal path with A* from the current state of the hunter to the current state of the target whenever the target moves off the known path. Each A* run is called a search iteration. In each search iteration, G-FRA* first transforms the previous search tree into the initial search tree, which consists of the initial OPEN and CLOSED lists. It then starts A* with the initial OPEN and CLOSED lists instead of from scratch. Thus, G-FRA* inherits the properties of A*; for example, it finds cost-minimal paths from the current state of the hunter to the current state of the target. The initial search tree is the subtree of the previous search tree rooted at the current start state. Thus, the initial and previous search trees are different if the hunter moved. To determine the initial OPEN and CLOSED lists, G-FRA* maintains a DELETED list, which contains all states that are in the previous search tree but not the initial search tree. Then, the initial CLOSED list contains those states that are in the previous CLOSED list but not the DELETED list, and the initial OPEN list contains both (O1) those states that are in the previous OPEN list but not the DELETED list and (O2) those states that are in the DELETED list and are successors of states in the initial CLOSED list.

Figure 1: Trace of G-FRA*, with panels (a) Search Iteration 1, (b) Target Moves off Path, (c) After Step 2 and (d) After Step 4.

Figure 1 illustrates the steps of G-FRA*. Blocked states are black, and unblocked states are white. For ease of illustration, we use h-values that are zero for all states. The first search iteration of G-FRA* runs A* from the current state 2 of the hunter to the current state 6 of the target (see Figure 1(a)). The g-value of a state is shown in its upper left corner. The arrow leaving a state points to its parent. The CLOSED list contains 2, 3, 4, 5, 6, 2, 3, 4, 6, 4, 4 and 4, and the OPEN list contains 6. A state is shaded iff it is in the OPEN list. The cost-minimal path is 2, 3, 4, 4, 5, 6 and 6. The hunter moves along this path to 4, at which point in time the target moves off the path to 6 (see Figure 1(b)). Since the target moved off the path, G-FRA* finds a cost-minimal path from the current state 4 of the hunter to the current state 6 of the target using the following steps, where the previous start state is 2 and the previous goal state is 6.

Step 1 (Starting A* Immediately): G-FRA* performs the following operations in this step if it did not terminate in Step 3 (Terminating Early) in the previous search iteration. If the current start state is the same as the previous start state and the current goal state is not in the CLOSED list, G-FRA* runs A* with the OPEN and CLOSED lists, uses the parent pointers to extract a cost-minimal path from the current start state to the current goal state and terminates the current search iteration. If the current start state is the same as the previous start state and the current goal state is in the CLOSED list, G-FRA* uses the parent pointers to extract a cost-minimal path from the current start state to the current goal state and terminates the current search iteration. In our example, G-FRA* executes the next steps since the current start state is different from the previous start state.

Step 2 (Deleting States): G-FRA* sets the parent pointer of the current start state to NULL, determines the DELETED list and deletes all states in the DELETED list from the CLOSED and OPEN lists. It then sets the parent pointers of all states in the DELETED list to NULL and their g-values to infinity. In our example, G-FRA* sets the parent pointer of 4 to NULL, determines the DELETED list to contain 2, 3, 2 and 3, and deletes these states from the OPEN and CLOSED lists. The CLOSED list now contains 4, 5, 6, 4, 6, 4, 4 and 4, and the OPEN list contains 6. The OPEN list is still incomplete since it contains only the states satisfying (O1) so far.
G-FRA* then sets the g-values of 2, 3, 2 and 3 to infinity and their parent pointers to NULL (see Figure 1(c)).

Step 3 (Terminating Early): If the current goal state is in the CLOSED list, G-FRA* uses the parent pointers to extract a cost-minimal path from the current start state to the current goal state and terminates the current search iteration. In our example, G-FRA* executes the next steps since the current goal state is not in the CLOSED list.

Step 4 (Inserting States): G-FRA* adds those states to the OPEN list that satisfy (O2), which makes the OPEN list complete. G-FRA* then sets the parent pointer of each state s added to the OPEN list to the state s' in the CLOSED list that minimizes g(s') + c(s', s) and sets its g-value to g(s') + c(s', s) for this state s'. In our example, G-FRA* adds 3 and 3 to the OPEN list, which now contains 3, 3 and 6. It sets the parent pointers of 3 and 3 to 4 and 4, respectively, and their g-values to 4 and 3, respectively (see Figure 1(d)).

Step 5 (Starting A*): G-FRA* runs A* with the OPEN and CLOSED lists, uses the parent pointers to extract a cost-minimal path from the current start state to the current goal state and terminates the current search iteration.
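The tree reuse behind Steps 2 and 4 can be sketched as follows. This is a simplified illustration under our own naming assumptions, not the authors' code: par, g, open_list and closed are taken to be the parent pointers, g-values, OPEN list and CLOSED list of the previous search, with OPEN and CLOSED represented as plain sets for brevity, and graph provides pred() and cost() as in the earlier sketches.

INF = float('inf')

def delete_and_reconnect(graph, par, g, open_list, closed, new_start):
    # Step 2 (Deleting States): the DELETED list contains every state of the
    # previous search tree that is not in the subtree rooted at new_start.
    def in_subtree(s):
        while s is not None:
            if s == new_start:
                return True
            s = par.get(s)
        return False

    deleted = {s for s in list(par) if not in_subtree(s)}
    par[new_start] = None
    for s in deleted:
        par[s] = None
        g[s] = INF
        closed.discard(s)
        open_list.discard(s)

    # Step 4 (Inserting States): DELETED states with a predecessor in the
    # initial CLOSED list enter the OPEN list, with parent pointer and
    # g-value taken from the best such predecessor.
    for s in deleted:
        best_g, best_parent = INF, None
        for p in graph.pred(s):
            if p in closed and g[p] + graph.cost(p, s) < best_g:
                best_g, best_parent = g[p] + graph.cost(p, s), p
        if best_g < INF:
            g[s], par[s] = best_g, best_parent
            open_list.add(s)

A full implementation would, of course, keep the OPEN list as a priority queue ordered by f-values, as in the A* sketch above.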
3.3 D* Lite

01 function CalculateKey(s)
02   return [min(g(s), rhs(s)) + h(s, s_goal) + k_m; min(g(s), rhs(s))];
03 procedure Initialize()
04   OPEN := ∅;
05   k_m := 0;
06   for all s ∈ S
07     rhs(s) := g(s) := ∞;
08     par(s) := NULL;
09   s_start := the current state of the hunter;
10   s_goal := the current state of the target;
11   rhs(s_start) := 0;
12   OPEN.Insert(s_start, CalculateKey(s_start));
13 procedure UpdateState(u)
14   if (g(u) ≠ rhs(u) AND u ∈ OPEN)
15     OPEN.Update(u, CalculateKey(u));
16   else if (g(u) ≠ rhs(u) AND u ∉ OPEN)
17     OPEN.Insert(u, CalculateKey(u));
18   else if (g(u) = rhs(u) AND u ∈ OPEN)
19     OPEN.Delete(u);
20 procedure ComputeCostMinimalPath()
21   while (OPEN.TopKey() < CalculateKey(s_goal) OR rhs(s_goal) > g(s_goal))
22     u := OPEN.Top();
23     k_old := OPEN.TopKey();
24     k_new := CalculateKey(u);
25     if (k_old < k_new)
26       OPEN.Update(u, k_new);
27     else if (g(u) > rhs(u))
28       g(u) := rhs(u);
29       OPEN.Delete(u);
30       for all s ∈ Succ(u)
31         if (s ≠ s_start AND (rhs(s) > g(u) + c(u, s)))
32           par(s) := u;
33           rhs(s) := g(u) + c(u, s);
34           UpdateState(s);
35     else
36       g(u) := ∞;
37       for all s ∈ Succ(u) ∪ {u}
38         if (s ≠ s_start AND par(s) = u)
39           rhs(s) := min_{s' ∈ Pred(s)} (g(s') + c(s', s));
40           if (rhs(s) = ∞)
41             par(s) := NULL;
42           else
43             par(s) := arg min_{s' ∈ Pred(s)} (g(s') + c(s', s));
44           UpdateState(s);
45 function Main()
46   Initialize();
47   while (s_start ≠ s_goal)
48     s_oldstart := s_start;
49     s_oldgoal := s_goal;
50     ComputeCostMinimalPath();
51     if (rhs(s_goal) = ∞) /* no path exists */
52       return false;
53     identify a path from s_start to s_goal using the parent pointers;
54     while (target not caught AND target on path from s_start to s_goal AND no edge costs changed)
55       hunter follows path from s_start to s_goal;
56     if hunter caught target
57       return true;
58     s_start := the current state of the hunter;
59     s_goal := the current state of the target;
60     k_m := k_m + h(s_goal, s_oldgoal);
61     if (s_oldstart ≠ s_start)
62       shift the map appropriately (which changes s_start and s_goal);
63     for all directed edges (u, v) with changed edge costs
64       c_old := c(u, v);
65       update the edge cost c(u, v);
66       if (c_old > c(u, v))
67         if (v ≠ s_start AND rhs(v) > g(u) + c(u, v))
68           par(v) := u;
69           rhs(v) := g(u) + c(u, v);
70           UpdateState(v);
71       else
72         if (v ≠ s_start AND par(v) = u)
73           rhs(v) := min_{s' ∈ Pred(v)} (g(s') + c(s', v));
74           if (rhs(v) = ∞)
75             par(v) := NULL;
76           else
77             par(v) := arg min_{s' ∈ Pred(v)} (g(s') + c(s', v));
78           UpdateState(v);
79   return true;

Figure 2: D* Lite for Moving Target Search

80 procedure BasicDeletion()
81   par(s_start) := NULL;
82   rhs(s_oldstart) := min_{s' ∈ Pred(s_oldstart)} (g(s') + c(s', s_oldstart));
83   if (rhs(s_oldstart) = ∞)
84     par(s_oldstart) := NULL;
85   else
86     par(s_oldstart) := arg min_{s' ∈ Pred(s_oldstart)} (g(s') + c(s', s_oldstart));
87   UpdateState(s_oldstart);
88 procedure OptimizedDeletion()
89   DELETED := ∅;
90   par(s_start) := NULL;
91   for all s in the search tree but not in the subtree rooted at s_start
92     par(s) := NULL;
93     rhs(s) := g(s) := ∞;
94     if (s ∈ OPEN)
95       OPEN.Delete(s);
96     DELETED := DELETED ∪ {s};
97   for all s ∈ DELETED
98     for all s' ∈ Pred(s)
99       if (rhs(s) > g(s') + c(s', s))
100        rhs(s) := g(s') + c(s', s);
101        par(s) := s';
102      if (rhs(s) < ∞)
103        OPEN.Insert(s, CalculateKey(s));

Figure 3: (Basic) MT-D* Lite

D* Lite [6] is an incremental search algorithm that forms the basis of the proposed search algorithms. D* Lite solves sequences of search problems in dynamic environments where the start state does not change over time by repeatedly transforming the previous search tree into the current search tree.
Researchers have extended it in a straightforward way to solve moving target search problems in dynamic environments by finding a cost-minimal path from the current state of the hunter to the current state of the target whenever the environment changes or the target moves off the known path. In each search iteration, D* Lite first shifts the map to keep the start state stationary (for example, it shifts the map one unit south if the hunter moved one unit north) and then updates the g-values and parent pointers of states as necessary [8] until it has found a cost-minimal path from the current state of the hunter to the current state of the target. Figure 2 shows the pseudocode of such a version of D* Lite (see Footnote 1). We explain only those parts of D* Lite that are sufficient for understanding how to extend it to Moving Target D* Lite.

D* Lite maintains an h-value, g-value, f-value and parent pointer for every state s ∈ S with similar meanings as used by A*, but it also maintains an rhs-value. The rhs-value is defined to be

rhs(s) = c, if s = s_start (Eq. 1)
rhs(s) = min_{s' ∈ Pred(s)} (g(s') + c(s', s)), otherwise (Eq. 2)

where c = 0. Thus, the rhs-value is basically a one-step lookahead g-value. State s is called locally inconsistent iff g(s) ≠ rhs(s).

Footnote 1: The pseudocode uses the following functions to manage the OPEN list: OPEN.Top() returns a state with the smallest priority of all states in the OPEN list. OPEN.TopKey() returns the smallest priority of all states in the OPEN list. (If the OPEN list is empty, then OPEN.TopKey() returns [∞; ∞].) OPEN.Insert(s, k) inserts state s into the OPEN list with priority k. OPEN.Update(s, k) changes the priority of state s in the OPEN list to k. OPEN.Delete(s) deletes state s from the OPEN list. Priorities are compared lexicographically. The minimum of an empty set is infinity.
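One possible Python realization of this OPEN-list interface is sketched below, using a binary heap with lexicographically compared tuple keys and lazy deletion; the class and its names are our own illustrative assumptions, not code from the paper.

import heapq
import itertools

INF = float('inf')

class OpenList:
    """Priority queue with lexicographically compared keys and lazy deletion."""

    def __init__(self):
        self._heap = []                    # entries: [key, tie_breaker, state, valid]
        self._entries = {}                 # state -> entry currently in force
        self._counter = itertools.count()  # tie breaker for equal keys

    def insert(self, s, key):
        entry = [key, next(self._counter), s, True]
        self._entries[s] = entry
        heapq.heappush(self._heap, entry)

    def update(self, s, key):
        self.delete(s)
        self.insert(s, key)

    def delete(self, s):
        entry = self._entries.pop(s, None)
        if entry is not None:
            entry[3] = False               # mark stale; popped lazily in _prune()

    def _prune(self):
        while self._heap and not self._heap[0][3]:
            heapq.heappop(self._heap)

    def top(self):
        self._prune()
        return self._heap[0][2] if self._heap else None

    def top_key(self):
        # Returns [infinity; infinity] if the OPEN list is empty.
        self._prune()
        return self._heap[0][0] if self._heap else (INF, INF)

    def __contains__(self, s):
        return s in self._entries

Keys would be two-element tuples as computed by CalculateKey(), so the lexicographic comparison of Python tuples matches the priority comparison assumed by the pseudocode.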
The f-value of state s is defined to be min(g(s), rhs(s)) + h(s, s_goal). Finally, the parent pointer of state s is defined to be

par(s) = NULL, if s = s_start OR rhs(s) = ∞ (Eq. 3)
par(s) = arg min_{s' ∈ Pred(s)} (g(s') + c(s', s)), otherwise (Eq. 4)

which is exactly the definition used by A* (see Footnote 2). D* Lite also maintains an OPEN list with a similar meaning as used by A*. ComputeCostMinimalPath() determines a cost-minimal path from the start state to the goal state. Before it is called, states can have arbitrary g-values, but their rhs-values and parent pointers have to satisfy Eqs. 1-4 and the OPEN list has to contain all locally inconsistent states. The runtime of ComputeCostMinimalPath() typically grows with the number of g-values it updates.

Footnote 2: D* Lite typically defines the parent pointers of both the start state and states with rhs-values that are infinity differently because they are not really needed. We set them to NULL in preparation for Moving Target D* Lite, which requires them to have this value.

4. CONTRIBUTION

The blockage statuses of many states typically change when D* Lite shifts the map to keep the start state stationary, which in turn changes the g-values of many states. For example, when D* Lite shifts the map one unit south, each cell inherits the blockage status of the cell directly north of it. D* Lite then updates the g-values of many states, which often makes it slower than running A* from scratch [8]. We therefore introduce Basic Moving Target D* Lite (Basic MT-D* Lite) and Moving Target D* Lite (MT-D* Lite), two extensions of D* Lite that solve moving target search problems in dynamic environments without having to shift the map. We first describe Basic MT-D* Lite and then MT-D* Lite, an optimization of Basic MT-D* Lite that uses the principle behind G-FRA* to speed it up.

4.1 Basic MT-D* Lite

Basic MT-D* Lite is an incremental search algorithm that speeds up the version of D* Lite that shifts the map by changing it slightly to avoid having to shift the map. Basic MT-D* Lite solves moving target search problems in dynamic environments by finding a cost-minimal path with ComputeCostMinimalPath() from the current state of the hunter to the current state of the target whenever the environment changes or the target moves off the known path. Figure 3 shows the necessary changes to the pseudocode from Figure 2. Basic MT-D* Lite calls BasicDeletion() on Line 62 instead of shifting the map. At this point in time, the hunter has moved from the previous start state s_oldstart to the current start state s_start. Before ComputeCostMinimalPath() is called again, the rhs-values and parent pointers of all states have to satisfy Eqs. 1-4 and the OPEN list has to contain all locally inconsistent states. Fortunately, the rhs-values and parent pointers of all states already satisfy these invariants, with the possible exception of the previous and current start states. Basic MT-D* Lite therefore calculates the rhs-value of the previous start state (according to Eq. 2, since it is no longer the current start state), its parent pointer (according to Eqs. 3-4) and its membership in the OPEN list on Lines 82-87. The correctness proofs of D* Lite continue to hold if c is an arbitrary finite value in Eq. 1 instead of zero. Thus, the rhs-value of the current start state can be an arbitrary finite value, including its current rhs-value, since its current rhs-value is finite (see Footnote 3). Basic MT-D* Lite therefore does not calculate the rhs-value of the current start state and its membership in the OPEN list and only sets its parent pointer to NULL (according to Eq. 3) on Line 81.
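The invariants that BasicDeletion() has to restore can be written down as a compact check. The helper below is our own illustrative sketch, not part of the published pseudocode: graph is assumed to provide pred() and cost(), open_list supports membership tests, and, as discussed above, the rhs-value of s_start is only required to be finite.

INF = float('inf')

def invariants_hold(graph, g, rhs, par, open_list, s_start, states):
    for s in states:
        if s == s_start:
            # Eq. 1 with an arbitrary finite value c, and Eq. 3.
            if rhs[s] == INF or par[s] is not None:
                return False
        else:
            best = min((g[p] + graph.cost(p, s) for p in graph.pred(s)),
                       default=INF)
            if rhs[s] != best:                                    # Eq. 2
                return False
            if rhs[s] == INF:
                if par[s] is not None:                            # Eq. 3
                    return False
            elif g[par[s]] + graph.cost(par[s], s) != rhs[s]:     # Eq. 4
                return False
        # The OPEN list must contain exactly the locally inconsistent states.
        if (s in open_list) != (g[s] != rhs[s]):
            return False
    return True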
Figure 4 illustrates the steps of Basic MT-D* Lite using the setup from Figure 1. The first search iteration of Basic MT-D* Lite runs ComputeCostMinimalPath() from the current state 2 of the hunter to the current state 6 of the target (see Figure 4(a)). The rhs-value of a state is shown in its upper right corner. The cost-minimal path is 2, 3, 4, 4, 5, 6 and 6. The hunter then moves along the path to 4, the target moves off the path to 6, and 5 becomes unblocked, which changes the costs c(4,5), c(5,4), c(5,6), c(6,5), c(5,5) and c(5,5) from infinity to one (see Figure 4(b)). Since the target moved off the path and the environment changed, Basic MT-D* Lite finds a cost-minimal path from the current state 4 of the hunter to the current state 6 of the target using the following steps. Basic MT-D* Lite sets the parent pointer of the current start state 4 to NULL, updates the rhs-value and parent pointer of the previous start state 2 to 2 and 3, respectively, and inserts 2 into the OPEN list (see Figure 4(c)). It then processes the edges with changed edge costs, like D* Lite, and runs ComputeCostMinimalPath(), which expands 2, 2, 3, 3, 3, 5, 3, 2 and 6 (see Figures 4(d-l)). The cost-minimal path is 4, 5, 6 and 6.

Basic MT-D* Lite can be optimized. When it runs ComputeCostMinimalPath(), the g-values of all states in the subtree of the previous search tree rooted at the current start state are based on the g-value of the current start state and are thus correct. The g-values of all other states in the previous search tree could be incorrect, in which case they are too small. Basic MT-D* Lite updates the g-values of these states by expanding them. When it expands one of them for the first time, it sets its g-value to infinity. When it expands the state a second time, it sets its g-value to the correct value. In our example, the g-values of 2, 3, 3 and 4 could be incorrect. Basic MT-D* Lite expands all of them to set their g-values to infinity and then expands all of them again (except for 2) to set their g-values to the correct value. Each state expansion is slow since Basic MT-D* Lite iterates over all successors of the expanded state to update their rhs-values (according to Eqs. 1-2), parent pointers (according to Eqs. 3-4) and memberships in the OPEN list. Basic MT-D* Lite also iterates over all predecessors of each successor. Thus, each state expansion can require n² operations on n-neighbor gridworlds. Furthermore, each state expansion manipulates the OPEN list, which, if it is implemented as a binary heap, requires the execution of heap operations to keep it sorted each time Basic MT-D* Lite expands a state, since it needs to determine the next state to expand, which is a state with the smallest f-value in the OPEN list. We therefore optimize Basic MT-D* Lite in the following.

Footnote 3: The rhs-value of the current start state is finite because it is on the cost-minimal path from the previous start state to the previous goal state. The rhs-values of all states on this path are no larger than the rhs-value of the previous goal state, which is finite due to Line 51.
Figure 4: Trace of (Basic) MT-D* Lite, with panels (a) Search Iteration 1, (b) Target Moves off Path, (c) After BasicDeletion() and (d)-(l) After Expanding each successive state.

4.2 MT-D* Lite

MT-D* Lite is an incremental search algorithm that is an optimized version of Basic MT-D* Lite. MT-D* Lite operates in the same way as Basic MT-D* Lite except that it replaces BasicDeletion() with a more sophisticated procedure. Figure 3 shows the necessary changes to the pseudocode from Figure 2. MT-D* Lite calls OptimizedDeletion() on Line 62 instead of shifting the map.

Basic MT-D* Lite lazily expands the states in the previous search tree that are not in the subtree rooted at the current start state one at a time to set their g-values to infinity when needed. Whenever it sets the g-value of a state to infinity, it needs to update the rhs-values, parent pointers and memberships in the OPEN list for all successors of the state. MT-D* Lite, on the other hand, eagerly uses a specialized procedure that operates in two phases. MT-D* Lite maintains a DELETED list, which contains all states in the previous search tree that are not in the subtree rooted at the current start state, which is exactly the definition that G-FRA* uses. First, MT-D* Lite sets the parent pointer of the current start state to NULL on Line 90. Then, Phase 1 on Lines 91-96 eagerly sets the g-values of all states in the DELETED list to infinity in one pass. Finally, Phase 2 on Lines 97-103 updates the rhs-values, parent pointers and memberships in the OPEN list of all potentially affected states in one pass, which are the states in the DELETED list. Their rhs-values, parent pointers and memberships in the OPEN list can be updated in one pass since they depend only on the g-values of their predecessors, which do not change in Phase 2. If the g-values of all predecessors of such a state are infinity, which is likely the case for many of these states due to Phase 1, its rhs-value has to be updated to infinity, its parent pointer has to be updated to NULL and it has to be deleted from the OPEN list since it is not locally inconsistent. Phase 1 therefore sets the rhs-values of all states in the DELETED list to infinity, their parent pointers to NULL and removes them from the OPEN list. Phase 2 then only updates the rhs-values (according to Eqs. 1-2) and parent pointers (according to Eqs. 3-4) of the few exceptions and inserts them into the OPEN list if they are locally inconsistent. Overall, Phase 1 is thus similar to Step 2 (Deleting States) of G-FRA*, and Phase 2 is similar to Step 4 (Inserting States) of G-FRA*.

The runtimes of both phases are small. Phases 1 and 2 iterate over all states in the DELETED list. Phase 2 also iterates over all predecessors of the states in the DELETED list. Thus, each state in the DELETED list can require n operations on n-neighbor gridworlds. Both phases manipulate the OPEN list, which, if it is implemented as a binary heap, requires the execution of heap operations to keep it sorted only once, namely at the end of Phase 2, since it can remain unsorted until then.
Figure 4 illustrates the steps of MT-D* Lite using the setup from Figure 1. The only difference from the steps of Basic MT-D* Lite is that MT-D* Lite does not perform the state expansions from Figures 4(c-g). Instead, OptimizedDeletion() updates the rhs-values, parent pointers and memberships in the OPEN list of all states in the DELETED list, which contains 2, 3, 2 and 3 (see Figure 4(g)). The remaining state expansions are the same as the ones of Basic MT-D* Lite (see Figures 4(h-l)).

5. EXPERIMENTAL RESULTS

We now compare Basic Moving Target D* Lite (Basic MT-D* Lite) and Moving Target D* Lite (MT-D* Lite) against A*, Differential A*, Generalized Adaptive A* (GAA*), Fringe-Retrieving A* (FRA*) and Generalized FRA* (G-FRA*) for solving moving target search problems. For fairness, we use comparable implementations. For example, all search algorithms implement the OPEN list as a binary heap and find a cost-minimal path from the current state of the hunter to the current state of the target whenever the environment changes or the target moves off the known path.

We perform our experiments in four-neighbor gridworlds with 25 percent randomly blocked states and randomly chosen start and goal states. We average our experimental results over the same 1000 test cases for each search algorithm. The hunter always knows the blockage statuses of all states. The target always follows a cost-minimal path from its current state to a randomly selected unblocked state and repeats the process whenever it reaches that state or cannot move due to its path being blocked. The target skips every tenth move to ensure that the hunter catches it. We perform the experiments in two different environments. In static environments, the blockage statuses of states do not change. In dynamic environments, we randomly block k unblocked states and unblock k blocked states after every move of the hunter in a way that ensures that there always exists a path from the current state of the hunter to the current state of the target. We vary k from 1 to 1000. We use the Manhattan distances as consistent h-values.
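The Manhattan-distance h-values and the target's movement policy described above can be sketched as follows; this is an illustrative Python fragment under our own naming assumptions (a_star and the graph interface refer to the earlier sketches, and random_unblocked_state is an assumed helper), not the experimental code used in the paper.

INF = float('inf')

def manhattan(s, goal):
    # Consistent h-values for four-neighbor gridworlds.
    return abs(s[0] - goal[0]) + abs(s[1] - goal[1])

def target_policy(graph, target, random_unblocked_state):
    """Generator that yields the target's state after each of its turns."""
    move_count, path = 0, []
    while True:
        move_count += 1
        if move_count % 10 == 0:
            yield target                      # the target skips every tenth move
            continue
        if not path or graph.cost(target, path[0]) == INF:
            # Reached the selected state or the path is blocked: pick a new
            # random unblocked state and plan a cost-minimal path to it.
            goal = random_unblocked_state(graph)
            path = (a_star(graph, manhattan, target, goal) or [target])[1:]
        if path:
            target = path.pop(0)
        yield target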
We report two measures for the difficulty of the moving target search problems, namely the average number of searches and the average number of moves of the hunter until it catches the target. These values vary slightly among the search algorithms due to their different ways of breaking ties among several cost-minimal paths. We report two measures for the efficiency of the search algorithms, namely the average number of expanded states per search and the average runtime per search in microseconds on a Pentium 3.0 GHz PC with 2 GByte of RAM. We also report the average number of deleted states (from the search tree) per search for FRA*, G-FRA* and MT-D* Lite. Finally, we report the standard deviation of the mean for the number of expanded and deleted states per search (in parentheses) to demonstrate the statistical significance of our results.

Table 1: Static Environments. Average number of searches per test case, moves per test case, expanded states per search, deleted states per search and runtime per search for A*, Differential A*, GAA*, FRA*, G-FRA*, Basic MT-D* Lite and MT-D* Lite (standard deviations in parentheses).

Table 2: Dynamic Environments. The same measures for A*, Differential A*, GAA*, Basic MT-D* Lite and MT-D* Lite, for k = 1, 10, 100 and 1000.

Tables 1 and 2 show our experimental results in static and dynamic environments, respectively. The version of D* Lite that shifts the map is not included in the tables because it has been shown to be an order of magnitude slower than A* and is thus not competitive [13]. D* is not included in the tables because it has been shown to be about as efficient as D* Lite [7]. FRA* and G-FRA* are not included in Table 2 because they cannot solve moving target search problems in dynamic environments. We make the following observations:

In dynamic environments, GAA* has a runtime that increases with k because it uses a consistency procedure to update the h-values of those states whose h-values become inconsistent due to states becoming unblocked, and the number of such states increases with k [12].

In dynamic environments, Basic MT-D* Lite and MT-D* Lite have runtimes that increase with k because they update more g-values as k increases and hence expand more states per search.

In both static and dynamic environments, A* has a smaller runtime than Differential A* because Differential A* constructs its current search tree from scratch and thus expands the same number of states per search as A* but also deletes all states from the previous search tree.

In both static and dynamic environments, GAA* has a smaller runtime than A* because GAA* updates the h-values to make them more informed over time and hence expands fewer states per search. (In dynamic environments with a large value of k, the runtime incurred due to the consistency procedure is larger than the runtime saved due to the fewer state expansions.)

In static environments, G-FRA* has a smaller runtime than GAA* because G-FRA* does not expand the states
in the subtree of the previous search tree rooted at the current start state. GAA*, on the other hand, expands some of these states.

In both static and dynamic environments, Basic MT-D* Lite and MT-D* Lite have smaller runtimes than GAA* because GAA* constructs its current search tree from scratch. Basic MT-D* Lite and MT-D* Lite, on the other hand, reuse the previous search tree and hence expand fewer states per search.

In both static and dynamic environments, MT-D* Lite has a smaller runtime than Basic MT-D* Lite because Basic MT-D* Lite expands the states in the previous search tree that are not in the subtree rooted at the current start state to set their g-values to infinity. MT-D* Lite, on the other hand, uses OptimizedDeletion() instead, which runs faster and results in only a slightly larger number of deleted and expanded states.

In static environments, FRA* and G-FRA* have smaller runtimes than Basic MT-D* Lite and MT-D* Lite for two reasons: (1) FRA* and G-FRA* have a smaller runtime per state expansion than Basic MT-D* Lite and MT-D* Lite. For example, the approximate overhead per state expansion (calculated by dividing the runtime per search by the number of expanded states per search) is 0.88 and 0.79 microseconds for FRA* and G-FRA*, respectively, while it is 1.22 and 1.19 microseconds for Basic MT-D* Lite and MT-D* Lite, respectively. (2) FRA* and G-FRA* expand fewer states than Basic MT-D* Lite and MT-D* Lite in the following case: If the target moves to a state in the subtree of the previous search tree that is rooted at the current start state, FRA* and G-FRA* terminate without expanding states due to Step 3 (Terminating Early). Basic MT-D* Lite and MT-D* Lite, on the other hand, expand all locally inconsistent states whose f-values are smaller than the f-value of the goal state.

In static environments, FRA* has a smaller runtime than G-FRA* because G-FRA* reuses only the subtree of the previous search tree rooted at the current start state. FRA*, on the other hand, uses an optimization step for gridworlds, described in [13], that allows it to reuse more of the previous search tree. Thus, FRA* deletes and expands fewer states per search.

Overall, FRA* has the smallest runtime in static environments, and MT-D* Lite has the smallest runtime in dynamic environments, where it is three to seven times faster than GAA*.

6. CONCLUSIONS

Existing incremental search algorithms are slow on moving target search problems in dynamic environments. In this paper, we therefore introduced MT-D* Lite, an extension of D* Lite that uses the principle behind Generalized Fringe-Retrieving A* to solve moving target search problems in dynamic environments fast. We demonstrated experimentally that MT-D* Lite can be three to seven times faster than Generalized Adaptive A*, which so far was believed to be the fastest incremental search algorithm for solving moving target search problems in dynamic environments. It is future work to investigate how to speed up the computation of more sophisticated strategies for the hunter [17].

7. REFERENCES

[1] V. Bulitko, Y. Björnsson, M. Luštrek, J. Schaeffer, and S. Sigmundarson. Dynamic control in path-planning with real-time heuristic search. In Proceedings of ICAPS, pages 49-56, 2007.
[2] V. Bulitko, N. Sturtevant, J. Lu, and T. Yau. Graph abstraction in real-time heuristic search. Journal of Artificial Intelligence Research, 30(1):51-100, 2007.
[3] P. Hart, N. Nilsson, and B. Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4(2):100-107, 1968.
[4] R. Holte, T. Mkadmi, R.
Zimmer, and A. MacDonald. Speeding up problem solving by abstraction: A graph oriented approach. Artificial Intelligence, 85(1-2), 1996.
[5] T. Ishida and R. Korf. Moving target search. In Proceedings of IJCAI, 1991.
[6] S. Koenig and M. Likhachev. D* Lite. In Proceedings of AAAI, 2002.
[7] S. Koenig and M. Likhachev. Fast replanning for navigation in unknown terrain. IEEE Transactions on Robotics, 21(3), 2005.
[8] S. Koenig, M. Likhachev, and X. Sun. Speeding up moving-target search. In Proceedings of AAMAS, 2007.
[9] C. Moldenhauer and N. Sturtevant. Optimal solutions for moving target search (Extended Abstract). In Proceedings of AAMAS, 2009.
[10] J. Pearl. Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley, 1984.
[11] A. Stentz. The focussed D* algorithm for real-time replanning. In Proceedings of IJCAI, 1995.
[12] X. Sun, S. Koenig, and W. Yeoh. Generalized Adaptive A*. In Proceedings of AAMAS, 2008.
[13] X. Sun, W. Yeoh, and S. Koenig. Efficient incremental search for moving target search. In Proceedings of IJCAI, 2009.
[14] X. Sun, W. Yeoh, and S. Koenig. Generalized Fringe-Retrieving A*: Faster moving-target search on state lattices. In Proceedings of AAMAS, 2010.
[15] K. Trovato and L. Dorst. Differential A*. IEEE Transactions on Knowledge and Data Engineering, 14(6), 2002.
[16] C. Undeger and F. Polat. Real-time edge follow: A real-time path search approach. IEEE Transactions on Systems, Man, and Cybernetics, Part C, 37(5), 2007.
[17] C. Undeger and F. Polat. Multi-agent real-time pursuit. Autonomous Agents and Multi-Agent Systems, pages 1-39, 2009.
Static Analysis. Find the Bug! 15-654: Analysis of Software Artifacts. Jonathan Aldrich. disable interrupts. ERROR: returning with interrupts disabled
Static Analysis 15-654: Analysis of Software Artifacts Jonathan Aldrich 1 Find the Bug! Source: Engler et al., Checking System Rules Using System-Specific, Programmer-Written Compiler Extensions, OSDI
Vehicle Tracking System Robust to Changes in Environmental Conditions
INORMATION & COMMUNICATIONS Vehicle Tracking System Robust to Changes in Environmental Conditions Yasuo OGIUCHI*, Masakatsu HIGASHIKUBO, Kenji NISHIDA and Takio KURITA Driving Safety Support Systems (DSSS)
Algorithms and Data Structures
Algorithms and Data Structures CMPSC 465 LECTURES 20-21 Priority Queues and Binary Heaps Adam Smith S. Raskhodnikova and A. Smith. Based on slides by C. Leiserson and E. Demaine. 1 Trees Rooted Tree: collection
Software Engineering Techniques
Software Engineering Techniques Low level design issues for programming-in-the-large. Software Quality Design by contract Pre- and post conditions Class invariants Ten do Ten do nots Another type of summary
Measuring the Performance of an Agent
25 Measuring the Performance of an Agent The rational agent that we are aiming at should be successful in the task it is performing To assess the success we need to have a performance measure What is rational
Y. Xiang, Constraint Satisfaction Problems
Constraint Satisfaction Problems Objectives Constraint satisfaction problems Backtracking Iterative improvement Constraint propagation Reference Russell & Norvig: Chapter 5. 1 Constraints Constraints are
6 March 2007 1. Array Implementation of Binary Trees
Heaps CSE 0 Winter 00 March 00 1 Array Implementation of Binary Trees Each node v is stored at index i defined as follows: If v is the root, i = 1 The left child of v is in position i The right child of
OPRE 6201 : 2. Simplex Method
OPRE 6201 : 2. Simplex Method 1 The Graphical Method: An Example Consider the following linear program: Max 4x 1 +3x 2 Subject to: 2x 1 +3x 2 6 (1) 3x 1 +2x 2 3 (2) 2x 2 5 (3) 2x 1 +x 2 4 (4) x 1, x 2
Binary Trees and Huffman Encoding Binary Search Trees
Binary Trees and Huffman Encoding Binary Search Trees Computer Science E119 Harvard Extension School Fall 2012 David G. Sullivan, Ph.D. Motivation: Maintaining a Sorted Collection of Data A data dictionary
Branch-and-Price Approach to the Vehicle Routing Problem with Time Windows
TECHNISCHE UNIVERSITEIT EINDHOVEN Branch-and-Price Approach to the Vehicle Routing Problem with Time Windows Lloyd A. Fasting May 2014 Supervisors: dr. M. Firat dr.ir. M.A.A. Boon J. van Twist MSc. Contents
How To Check For Differences In The One Way Anova
MINITAB ASSISTANT WHITE PAPER This paper explains the research conducted by Minitab statisticians to develop the methods and data checks used in the Assistant in Minitab 17 Statistical Software. One-Way
Cost Model: Work, Span and Parallelism. 1 The RAM model for sequential computation:
CSE341T 08/31/2015 Lecture 3 Cost Model: Work, Span and Parallelism In this lecture, we will look at how one analyze a parallel program written using Cilk Plus. When we analyze the cost of an algorithm
GRAPH THEORY LECTURE 4: TREES
GRAPH THEORY LECTURE 4: TREES Abstract. 3.1 presents some standard characterizations and properties of trees. 3.2 presents several different types of trees. 3.7 develops a counting method based on a bijection
Data Structures Fibonacci Heaps, Amortized Analysis
Chapter 4 Data Structures Fibonacci Heaps, Amortized Analysis Algorithm Theory WS 2012/13 Fabian Kuhn Fibonacci Heaps Lacy merge variant of binomial heaps: Do not merge trees as long as possible Structure:
Algorithms and Data Structures
Algorithms and Data Structures Part 2: Data Structures PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering (CiE) Summer Term 2016 Overview general linked lists stacks queues trees 2 2
Data Structures. Jaehyun Park. CS 97SI Stanford University. June 29, 2015
Data Structures Jaehyun Park CS 97SI Stanford University June 29, 2015 Typical Quarter at Stanford void quarter() { while(true) { // no break :( task x = GetNextTask(tasks); process(x); // new tasks may
Introduction to Scheduling Theory
Introduction to Scheduling Theory Arnaud Legrand Laboratoire Informatique et Distribution IMAG CNRS, France [email protected] November 8, 2004 1/ 26 Outline 1 Task graphs from outer space 2 Scheduling
Lecture Notes on Binary Search Trees
Lecture Notes on Binary Search Trees 15-122: Principles of Imperative Computation Frank Pfenning André Platzer Lecture 17 October 23, 2014 1 Introduction In this lecture, we will continue considering associative
Binary Coded Web Access Pattern Tree in Education Domain
Binary Coded Web Access Pattern Tree in Education Domain C. Gomathi P.G. Department of Computer Science Kongu Arts and Science College Erode-638-107, Tamil Nadu, India E-mail: [email protected] M. Moorthi
OPTIMAL BINARY SEARCH TREES
OPTIMAL BINARY SEARCH TREES 1. PREPARATION BEFORE LAB DATA STRUCTURES An optimal binary search tree is a binary search tree for which the nodes are arranged on levels such that the tree cost is minimum.
