Optimal Planning with ACO


M. Baioletti, A. Milani, V. Poggioni, F. Rossi
Dipartimento di Matematica e Informatica, Università di Perugia, ITALY
email: {baioletti, milani, poggioni, rossi}@dipmat.unipg.it

Abstract. In this paper a planning framework based on Ant Colony Optimization techniques is presented. Optimal planning is a very hard computational problem which has been tackled with different methodologies. Approximate methods guarantee neither optimality nor completeness, but in many applications they have proved able to find very good, often optimal, solutions. We propose several approaches, based on both backward and forward search over the state space, using different pheromone models and heuristic functions in order to solve sequential optimization planning problems.

Introduction

Optimal sequential planning is a very hard task which consists of searching for solution plans with minimal length or minimal total cost. The cost of a plan is an important feature in many domains, because high-cost plans can be nearly as useless as non-executable plans. Several approaches to optimization planning have recently been proposed [9]. The main contribution of our work consists in investigating the application of the well-known Ant Colony Optimization (ACO) metaheuristic [6, 5] to optimal propositional planning. ACO is a metaheuristic inspired by the behaviour of natural ant colonies that has been successfully applied to many combinatorial optimization problems. The first application of ACO to planning has been proposed very recently [1]. The approach, although still preliminary, seems promising, since it has been shown that stochastic approaches to planning can perform very well [7]. ACO seems suitable for planning because there is a strong analogy between the solution construction process of ACO and planning as progressive/regressive search in the state space.
The basic idea is to use a probabilistic model of ants, in which each ant incrementally builds a solution to the problem by randomly selecting actions according to the pheromone values deposited by the previous generations of ants and to a heuristic function. A first framework was presented in [1], where the foundations of our approach were laid. In that work we proposed to solve optimal planning problems using an ACO approach to perform a forward heuristic search in the state space. From this first work we have developed the research along two lines [3, 2]. First we deepened this model, providing and testing several pheromone models, pheromone updating strategies and parameter tunings [1, 3]; then we proposed a new approach based on the idea of regression planning, where the search starts from the goal state and proceeds towards the initial state [2]. The forward ACO model employs the Fast Forward (FF) heuristic [10], while the backward version uses the heuristic h^2 of the h^m family [8]. In this paper we summarize some new ideas regarding pheromone models and new experimental results. In particular we present the best results obtained so far, which are reached using the Action-Action pheromone model and an improved implementation.

Ant Colony Optimization

Ant Colony Optimization (ACO) is a metaheuristic for solving combinatorial optimization problems, introduced in the early 90s by Dorigo et al. [6], whose inspiration comes from the foraging behavior of natural ant colonies. ACO uses a colony of artificial ants, which move on the search space and build solutions by composing discrete components. The construction of a solution is incremental: each ant randomly chooses a component to add to the partial solution built so far, according to the problem constraints. The random choice is biased by the pheromone value τ associated with each component and by a heuristic function η; both terms evaluate the appropriateness of the component. The probability that an ant chooses the component c is

p(c) = [τ(c)]^α [η(c)]^β / Σ_x [τ(x)]^α [η(x)]^β    (1)

where the sum over x ranges over all the components which can be chosen, and α and β are tuning parameters for the pheromone and heuristic contributions. The pheromone values represent a kind of memory shared by the ant colony and are subject to updating and evaporation. In particular, the pheromone can be updated at each construction step or for complete solutions (either all of them or only the best ones) in the current iteration, possibly also considering the best solution of the previous iterations.
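Equation (1) amounts to a fitness-proportionate (roulette-wheel) choice over the candidate components. A minimal illustrative sketch in Python, with invented pheromone and heuristic values (all names here are hypothetical, not from the paper's implementation):

```python
import random

def choose_component(components, tau, eta, alpha=1.0, beta=2.0):
    """Pick a component with probability proportional to tau^alpha * eta^beta (Eq. 1)."""
    weights = [tau[c] ** alpha * eta[c] ** beta for c in components]
    total = sum(weights)
    r = random.random() * total
    acc = 0.0
    for c, w in zip(components, weights):
        acc += w
        if acc >= r:
            return c
    return components[-1]  # guard against floating-point round-off

# toy example: three candidate components with made-up pheromone/heuristic values
tau = {"a": 1.0, "b": 2.0, "c": 0.5}
eta = {"a": 0.8, "b": 0.1, "c": 0.9}
print(choose_component(list(tau), tau, eta))
```

With these toy values, component "a" has the largest product τ·η^2 and is therefore drawn most often, but every component keeps a nonzero probability, which is what sustains exploration.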
The update phase is usually performed by adding to the pheromone values associated with the components of a solution s an increment Δ(s) which depends on a solution quality function F(s)¹. Pheromone evaporation has the purpose of avoiding premature convergence towards non-optimal solutions, and is simulated by multiplying each pheromone value by a factor 1 − ρ, where 0 < ρ < 1 is the evaporation rate. The simulation of the ant colony is iterated until a satisfactory solution is found, an unsuccessful condition is met, or a given number of iterations has elapsed. ACO has been used to solve several combinatorial optimization problems, in many cases reaching state-of-the-art performance, as shown in [6].

¹ For a minimization problem whose objective function is f, F(s) is a decreasing function of f.

Optimal Propositional Planning

Automated planning is a well-known AI problem deeply studied in recent years. Many different kinds of problems, defined over several planning models, have been proposed.

Our aim is to solve Optimal Propositional Planning problems, defined in terms of finding the shortest solution plan of a classical propositional planning problem. The standard reference model for planning is the Propositional STRIPS model [11], also called Classical Planning. In this model the world states are described in terms of a finite set F of propositional variables: a state s is represented by the subset of F containing all the variables which are true in s. A Planning Problem is a triple (I, G, A), in which I is the initial state, G denotes the goal states, and A is a finite set of actions. An action a ∈ A is described by a triple (pre(a), add(a), del(a)). pre(a) ⊆ F is the set of preconditions: a is executable in a state s if and only if pre(a) ⊆ s. add(a), del(a) ⊆ F are, respectively, the sets of positive and negative effects: if an action a is executable in a state s, the state resulting after its execution is Res(s, a) = (s ∪ add(a)) \ del(a); otherwise Res(s, a) is undefined. A linear sequence of actions (a_1, a_2, ..., a_n) is a plan for a problem (I, G, A) if a_1 is executable in I, a_2 is executable in s_1 = Res(I, a_1), a_3 is executable in s_2 = Res(s_1, a_2), and so on. A plan (a_1, a_2, ..., a_n) is a solution for (I, G, A) if G ⊆ s_n.

Planning with Forward Ants

In this first approach, ants are used to perform a forward search through the state space. The ants build a plan starting from the initial state s_0, executing actions step by step. Each ant draws the next action to execute in the current state from the set of executable actions. The choice is made by taking into account the pheromone value τ(a) and the heuristic value η(a) of each candidate action a, with a formula similar to (1). Once an action a is selected, the current state is updated by means of the effects of a. Choosing only executable actions ensures that the entire plan is executable in the initial state.
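The STRIPS semantics above can be made concrete with a minimal sketch using Python sets (the toy actions and proposition names are invented for the example):

```python
# An action is a triple (pre, add, del) of proposition sets, as in the STRIPS model.
def executable(state, action):
    pre, add_, del_ = action
    return pre <= state  # pre(a) ⊆ s

def res(state, action):
    """Res(s, a) = (s ∪ add(a)) \ del(a), defined only when a is executable in s."""
    pre, add_, del_ = action
    assert pre <= state, "action not executable in this state"
    return (state | add_) - del_

# toy problem: load a package at A, then drive it to B
load  = (frozenset({"at_A"}), frozenset({"loaded"}), frozenset())
drive = (frozenset({"loaded"}), frozenset({"at_B"}), frozenset({"at_A"}))

state = frozenset({"at_A"})
for a in (load, drive):
    state = res(state, a)
print("at_B" in state)  # the goal atom holds after executing the two-step plan
```

A forward ant repeats exactly this loop, except that at each step the next action is drawn stochastically from the set of currently executable actions.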
The construction phase stops when a solution is found (the goals are true in the current state), a dead end is encountered (no action is executable), or an upper bound L_max on the length of the action sequence is reached.

Pheromone models

The first choice is what constitutes a solution component in terms of pheromone values. While it is clear that actions are surely part of the components, it is worthwhile to add further information in order to perform a more informed choice. Therefore the function τ(a) can also depend on other data locally available to the ant. However, we have also studied a simple pheromone model, called Action, in which τ depends only on the action and nothing else. We have designed and implemented four different pheromone models.

In the State-Action model τ also depends on the current state s. This is by far the most expensive pheromone model, because the number of possible components is exponential with respect to the problem size. On the other hand, the pheromone values have a straightforward interpretation as a policy (although in terms of a preference policy): τ(a, s) represents how desirable it is (or better, how useful it has been) to execute a in the state s.

In the Level-Action model τ also depends on the time step at which the action will be executed. This model is clearly more parsimonious than the State-Action model, but its interpretation is less clear. A drawback is that the desirability of an action at a time step t is unrelated to its values at close time steps, while if an action is desirable at time 2, i.e. at an early planning stage, it could also be desirable at times 1 and 3. To solve this problem we introduce the Fuzzy-Level-Action model, which is the fuzzified version of the previous model: the pheromone used for an action a to be executed at time t can be seen as the weighted average of the pheromone values τ(a, t′) for a and for t′ = t − W, ..., t + W computed with the Level-Action model. The weights and the time window W can be tuned by experimental results.

In the Action-Action model the pheromone also depends on the last executed action. This is the simplest way in which the action choice can be directly influenced by the previous choices: using just the last choice is a sort of first-order Markov property. As we will discuss in the conclusions, this list of pheromone models is by no means exhaustive, and many other pheromone models could be invented and tested.

Heuristic function

The function η(a) is computed as follows. First, we compute the state s′ resulting from the execution of a in the current state s. Then the heuristic function h_FF(s′), defined in the FF planner [10], is computed as a distance between the state s′ and the goals. Therefore we set

η(a) = 1 / h_FF(s′)

The computation of h_FF(s′) also yields a further piece of information: a set H of helpful actions, actions which can be considered more promising. For the actions in H we use a larger value of η, defining

η(a) = 1 / ((1 − k) h_FF(s′))

where k ∈ [0, 1].

Plan evaluation

Plans are evaluated with respect to two different parameters.
Let π be a plan of length L and S_π = ⟨s_0, s_1, ..., s_L⟩ the sequence of all the states reached by π; then

h_min(π) = min { h_FF(s_i) : s_i ∈ S_π }    and    t_min(π) = min { i : h_FF(s_i) = h_min(π) }

h_min(π) is the distance (estimated by the heuristic function) between the goals and the state closest to the goals. t_min(π) is the time step at which the minimum distance is first obtained. Note that for a solution plan h_min(π) = 0 and t_min(π) corresponds to the length of π. A possible synthesis of these two parameters is given by

P(π) = t_min(π) + W h_min(π)

where W > 1 is a weight which gives a large penalty to non-valid plans and thus drives the search process towards valid plans.
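The evaluation P(π) can be sketched in a few lines of illustrative Python; here a stand-in heuristic counting unsatisfied goal atoms replaces h_FF, and all names are hypothetical:

```python
def evaluate_plan(states, h, W=100.0):
    """P(pi) = t_min + W * h_min over the state sequence reached by the plan."""
    dists = [h(s) for s in states]
    h_min = min(dists)
    t_min = dists.index(h_min)  # first time step achieving the minimum distance
    return t_min + W * h_min

# stand-in heuristic: number of goal atoms not yet satisfied
goals = {"g1", "g2"}
h = lambda s: len(goals - s)

valid   = [set(), {"g1"}, {"g1", "g2"}]  # reaches the goals at step 2
invalid = [set(), {"g1"}, {"g1"}]        # stays at heuristic distance 1
print(evaluate_plan(valid, h), evaluate_plan(invalid, h))
```

For the valid sequence h_min = 0, so P reduces to the plan length (2); the invalid one pays the W penalty, which is exactly what steers the colony towards valid plans.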

Pheromone updating

We have implemented a Rank Ant System pheromone update method [4], which operates as follows. The updating process is performed only at the end of each iteration. First, the pheromone values are evaporated by a given rate ρ. Then the solutions are sorted in decreasing order with respect to P. Let π_2, ..., π_σ be the σ − 1 best solutions found in the current iteration and let π_1 be the best solution found so far; then the pheromone value τ(c) for the component c is increased by the value

Δ(c) = Σ_{i=1..σ} (σ + 1 − i) Δ_i(c)

where

Δ_i(c) = 1 / s(π_i)  if c takes part in π_i,  and 0 otherwise.

The algorithm

The optimization process continues for a given number N of iterations; in each of them n_a ants build plans with the number of actions bounded by L_max. c is the initial pheromone value. The pseudocode of the resulting algorithm is shown as Algorithm 1.

Algorithm 1 The algorithm ACOPlan-F
 1: π_best ← ⊥
 2: InitPheromone(c)
 3: for g ← 1 to N do
 4:   S_iter ← ∅
 5:   for m ← 1 to n_a do
 6:     π_m ← ⟨⟩
 7:     state ← initial state of the problem
 8:     for i ← 1 to L_max while state is not final do
 9:       A_i ← feasible actions in state
10:       a_k ← ChooseAction(A_i)
11:       extend π_m with a_k
12:       update state
13:     end for
14:   end for
15:   Update(π_best)
16:   Sort(π_1, ..., π_{n_a})
17:   UpdatePheromone(π_best, π_1, ..., π_{σ−1}, ρ)
18: end for

Planning with Backward Ants

The other possibility is to use backward ants. The construction phase operates in the backward direction: the ants begin with the goals and at each time step they choose an action among those which achieve at least one current subgoal without deleting any other subgoal. After an action a has been chosen, the current subgoal set g is updated by regression with respect to a, yielding (g \ add(a)) ∪ pre(a). In this way, the created sequences are always compatible with all the subgoals. The construction phase stops when a solution is found (all the current subgoals are true in the initial state), a dead end is encountered (no action is compatible with the subgoals), or the upper bound L_max is reached.

Pheromone models

The same considerations made for the pheromone models of forward ants can be reformulated for backward ants. We thus propose five different pheromone models. The models Action, Level-Action, Fuzzy-Level-Action and Action-Action are similar to those defined for forward ants; the only difference is that the level number is counted from the goal instead of from the initial state. While the State-Action model cannot be defined in this framework, since subgoals are not complete states, it is possible to define a new model, called Goal-Action, in which τ also depends on the goal which is achieved by a. This model has a nice interpretation, in that τ(a, g) quantifies how promising it is to achieve the goal g by executing a.

Heuristic function

The heuristic value of an action a is computed by means of the function h^2_HSP, described in [8], which estimates the distance between the initial state and the subgoals obtained by regression of the current subgoals with respect to a. Besides the fact that h^2_HSP is suited to a backward search, it is important to note that most of the operations needed to compute it are performed only once, at the beginning of the search process. Moreover, it is worth noticing that h^2_HSP can easily be adapted to planning problems in which actions have non-fixed costs.

Plan evaluation and pheromone update

The plans are evaluated in a similar way as for forward ants, except that h_min is defined in terms of the sequence G_π = ⟨g_0, g_1, ..., g_L⟩ of subgoal sets generated by π. The pheromone update process is exactly the same.
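The rank-based update shared by both variants can be sketched as follows; this is an illustrative reading of the scheme, assuming s(π) is a score that grows with plan cost, with the deposit 1/s(π) and rank weights σ + 1 − i as in the formula (all names hypothetical):

```python
def rank_update(tau, ranked_plans, rho=0.15):
    """Rank-based pheromone update (Rank Ant System sketch).

    ranked_plans: list of (plan, score) pairs sorted best-first, where the
    first entry plays the role of the best-so-far plan pi_1; a plan is a
    list of components and score is s(pi), lower being better.
    """
    sigma = len(ranked_plans)
    for c in tau:                    # evaporation: tau <- (1 - rho) * tau
        tau[c] *= 1.0 - rho
    for i, (plan, score) in enumerate(ranked_plans, start=1):
        weight = sigma + 1 - i       # rank weight (sigma + 1 - i)
        for c in set(plan):          # each component of pi_i gets 1/s(pi_i)
            tau[c] = tau.get(c, 0.0) + weight * (1.0 / score)

# toy run: two ranked plans over components "a" and "b"
tau = {"a": 1.0, "b": 1.0}
rank_update(tau, [(["a"], 2.0), (["a", "b"], 4.0)])
```

After this call, "a" accumulates deposits from both plans (with the larger rank weight from the better one), while "b" is rewarded only by the weaker plan, so their pheromone values diverge accordingly.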
Note that, also in this case, h_min(π) = 0 if and only if π is a valid plan; thus, for valid plans, P(π) reduces to the length of π.

The algorithm

The algorithm, called ACOPlan-B, is very similar to ACOPlan-F and will not be reported. The only relevant difference is that the state is replaced by the subgoal set.

Experimental results

We chose to compare ACOPlan with the LPG system [7]. The choice has two motivations: LPG is a stochastic planner, and it is very efficient from both the computational and the solution-quality points of view. Moreover, when run with the -n option, it produces solution plans whose number of actions is, in general, very close to the optimum (often it finds solutions with the optimal number of actions). The test phase is running along two parallel tracks. The first one tests ACOPlan-F and evaluates its performance with respect to the different pheromone models. In this phase ACOPlan-F has already been tested on some domains taken from the International Planning Competitions (IPC). So far we have run a set of systematic tests on the domains Rovers, Depots, Blocksworld and Driverlog. These domains have been chosen among the set of benchmarks because they offer a good variety, and the corresponding results allow us to make interesting comments. In this test phase the best results have been obtained with the Action-Action pheromone model; they are shown in the tables below. For lack of space only the results for Driverlog and Rovers are reported; the results for the other domains are similar. ACOPlan-F has many parameters that have to be chosen. After systematic tests we decided to use the following setting: 10 ants, 5000 iterations, α = 2, β = 5, ρ = 0.15, c = 1, k = 0.5. Since both systems are non-deterministic planners, the results collected here are the mean values over 100 runs. Figures 1 and 2 show the results of the tests on the DriverLog and Rovers domains, respectively. The two tables have the same structure: the first column lists the problem names; the next columns report, for each planner, the length of the best solution plans, the average solution length and the average execution times. The results we collected show that the system is comparable in terms of solution quality, but is still less efficient in terms of computation time. The second track of tests runs ACOPlan-B, after a tuning process for the parameters α, β, ρ, c, as done for forward ants.

                   ACOPlan                       LPG
Problem   Min Length  Avg. Length  Time      Min Length  Avg. Length  CPU Time
drv1          7          7           0.01        7          7            0.03
drv2         19         19.25        6.49       19         19            0.53
drv3         12         12           0.03       12         12            0.08
drv4         16         16.15        9.90       16         16            0.08
drv5         18         18.2        39.81       18         18            2.28
drv6         11         11.7        21.37       11         11            0.85
drv7         13         13           0.27       13         13            0.08
drv8         22         23          39.52       22         22            9.25
drv9         22         22.1        24.03       22         22            0.54
drv10        17         17           8.22       17         17            0.54
drv11        19         19.25       14.16       19         19           23.06
drv12        37         42.1       615.58       35         35.25       237.28
drv13        26         26          39.81       26         26            4.55
drv14        30         32.4      1250.42       28         28           12.54
drv15        38         43.15     2504.10       32         32.80       499.78

Fig. 1. Results for the Driverlog domain: solution lengths and CPU times (in seconds).

                   ACOPlan                       LPG
Problem   Min Length  Avg. Length  Time      Min Length  Avg. Length  CPU Time
rvr1         10         10           0.01       10         10            0.04
rvr2          8          8           0.01        8          8            0.01
rvr3         11         11           0.30       11         11            0.71
rvr4          8          8           0.01        8          8            0.02
rvr5         22         22           0.11       22         22            0.04
rvr6         36         36           3.33       36         36            2.32
rvr7         18         18           0.07       18         18           10.86
rvr8         26         26           0.28       26         26.1        158.48
rvr9         31         31           7.07       31         31            0.09
rvr10        35         35           9.13       35         35          133.03
rvr11        30         30.65      222.39       30         30           13.41
rvr12        19         19           0.16       19         19            0.17
rvr13        43         43.95       52.95       43         43.65       190.79
rvr14        28         28          22.07       28         28            0.94
rvr15        41         41.15      622.45       42         42.25       474.17
rvr16        41         41          65.37       41         41          116.97
rvr17        47         47         204.03       47         47           10.33
rvr18        41         41         333.82       40         41.85       434.88
rvr19        64         66.85     1445.33       66         68.7        519.62
rvr20        89         96.2      1409.46       88         92          480.21

Fig. 2. Results for the Rovers domain: solution lengths and CPU times (in seconds).

Conclusions and Future Works

In this paper we have described a planning framework based on the Ant Colony Optimization metaheuristic to solve optimal sequential planning problems. The preliminary empirical tests have shown encouraging results, indicating that this approach is a viable method for optimization in classical planning. For these reasons we intend to improve and extend this work in several directions. We are planning to perform extensive sets of experiments in order to compare the different pheromone models and to contrast forward and backward ants. Another possibility is to define other pheromone models and to use different ACO techniques. The heuristic function used for forward ants should be replaced with one which can handle action costs, following the approach proposed in the FF(h_a) system [9]. Finally, we are considering applying ACO techniques to other types of planning as well. The extensions of classical planning which appear appealing for ACO are planning with numerical fluents and planning with preferences, both being optimization problems. These extensions are quite simple, although a major problem seems to be the choice of a suitable heuristic function, mainly for backward ants.

References

1. Marco Baioletti, Alfredo Milani, Valentina Poggioni, and Fabio Rossi. An ACO approach to planning. In Proc. of the 9th European Conference on Evolutionary Computation in Combinatorial Optimisation (EvoCOP 2009), 2009.
2. Marco Baioletti, Alfredo Milani, Valentina Poggioni, and Fabio Rossi. Ant search strategies for planning optimization. Submitted to the International Conference on Planning and Scheduling (ICAPS 2009), 2009.
3. Marco Baioletti, Alfredo Milani, Valentina Poggioni, and Fabio Rossi. Placo: Planning with ants. In Proc. of the 22nd International FLAIRS Conference. AAAI Press, 2009.
4. Bernd Bullnheimer, Richard F. Hartl, and Christine Strauss. A new rank based version of the ant system: a computational study. Central European Journal for Operations Research and Economics, 7:25–38, 1999.
5. Marco Dorigo and Luca Maria Gambardella. Ant Colony System: a cooperative learning approach to the traveling salesman problem. IEEE Transactions on Evolutionary Computation, 1(1):53–66, April 1997.
6. Marco Dorigo and Thomas Stuetzle. Ant Colony Optimization. MIT Press, Cambridge, MA, USA, 2004.
7. A. Gerevini and I. Serina. LPG: a planner based on local search for planning graphs. In Proceedings of the Sixth International Conference on Artificial Intelligence Planning and Scheduling (AIPS 02). AAAI Press, Toulouse, France, 2002.
8. Patrik Haslum, Blai Bonet, and Hector Geffner. New admissible heuristics for domain-independent planning. In Proc. of AAAI 2005, pages 1163–1168, 2005.
9. M. Helmert, M. Do, and I. Refanidis. International Planning Competition IPC-2008, the deterministic part. http://ipc.icaps-conference.org/, 2008.
10. J. Hoffmann and B. Nebel. The FF planning system: fast plan generation through heuristic search. Journal of Artificial Intelligence Research, 14:253–302, 2001.
11. D. Nau, M. Ghallab, and P. Traverso. Automated Planning: Theory and Practice. Morgan Kaufmann, 2004.