Chaff: Engineering an Efficient SAT Solver
Matthew W. Moskewicz, Department of EECS, UC Berkeley
Conor F. Madigan, Department of EECS, MIT
Ying Zhao, Lintao Zhang, Sharad Malik, Department of Electrical Engineering, Princeton University

ABSTRACT

Boolean Satisfiability is probably the most studied of combinatorial optimization/search problems. Significant effort has been devoted to trying to provide practical solutions to this problem for problem instances encountered in a range of applications in Electronic Design Automation (EDA), as well as in Artificial Intelligence (AI). This study has culminated in the development of several SAT packages, both proprietary and in the public domain (e.g. GRASP, SATO) which find significant use in both research and industry. Most existing complete solvers are variants of the Davis-Putnam (DP) search algorithm. In this paper we describe the development of a new complete solver, Chaff, which achieves significant performance gains through careful engineering of all aspects of the search, especially a particularly efficient implementation of Boolean constraint propagation (BCP) and a novel low-overhead decision strategy. Chaff has been able to obtain one to two orders of magnitude performance improvement on difficult SAT benchmarks in comparison with other solvers (DP or otherwise), including GRASP and SATO.

Categories and Subject Descriptors: J.6 [Computer-Aided Engineering]: Computer-Aided Design.

General Terms: Algorithms, Verification.

Keywords: Boolean satisfiability, design verification.

1. Introduction

The Boolean Satisfiability (SAT) problem consists of determining a satisfying variable assignment, V, for a Boolean function, f, or determining that no such V exists. SAT is one of the central NP-complete problems. In addition, SAT lies at the core of many practical application domains including EDA (e.g. automatic test generation [10] and logic synthesis [6]) and AI (e.g. automatic theorem proving). As a result, the subject of practical SAT solvers has received considerable research attention, and numerous solver algorithms have been proposed and implemented. Many publicly available SAT solvers (e.g. GRASP [8], POSIT [5], SATO [13], rel_sat [2], WalkSAT [9]) have been developed, most employing some combination of two main strategies: the Davis-Putnam (DP) backtrack search and heuristic local search. Heuristic local search techniques are not guaranteed to be complete (i.e. they are not guaranteed to find a satisfying assignment if one exists or prove unsatisfiability); as a result, complete SAT solvers (including ours) are based almost exclusively on the DP search algorithm.

1.1 Problem Specification

Most solvers operate on problems for which f is specified in conjunctive normal form (CNF). This form consists of the logical AND of one or more clauses, which consist of the logical OR of one or more literals. The literal comprises the fundamental logical unit in the problem, being merely an instance of a variable or its complement. (In this paper, complement is written with a minus sign, e.g. -V1.) All Boolean functions can be described in the CNF format. The advantage of CNF is that in this form, for f to be satisfied (sat), each individual clause must be sat.
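As a concrete point of reference for the pseudo-code in the next subsection, the following C++ fragment sketches one possible representation of a CNF database. It is only an illustration under assumed names (Literal, Clause, Formula), not Chaff's actual data structures.

    // Illustrative CNF representation (assumed names, not Chaff's own code).
    // A literal is a variable index plus a sign; a clause is the OR of its
    // literals; the formula is the AND of its clauses.
    #include <vector>

    struct Literal {
        int  var;      // variable index, e.g. 1 for V1
        bool negated;  // true for the complemented literal -V1
    };

    using Clause  = std::vector<Literal>;  // logical OR of one or more literals
    using Formula = std::vector<Clause>;   // logical AND of one or more clauses

    // Example formula f = (V1 + -V2)(-V1 + V3): f is sat exactly when every
    // clause contains at least one literal with value 1.
    Formula exampleFormula() {
        return {
            { {1, false}, {2, true} },
            { {1, true},  {3, false} },
        };
    }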
1.2 Basic Davis-Putnam Backtrack Search

We start with a quick review of the basic Davis-Putnam backtrack search, described in the following pseudo-code fragment:

    while (true) {
        if (!decide())          // if no unassigned vars
            return (satisfiable);
        while (!bcp()) {
            if (!resolveConflict())
                return (not satisfiable);
        }
    }

    bool resolveConflict() {
        d = most recent decision not "tried both ways";
        if (d == NULL)          // no such d was found
            return false;
        flip the value of d;
        mark d as tried both ways;
        undo any invalidated implications;
        return true;
    }

The operation of decide() is to select a variable that is not currently assigned, and give it a value. This variable assignment is referred to as a decision. As each new decision is made, a record of that decision is pushed onto the decision stack. This function will return false if no unassigned variables remain and true otherwise.
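A minimal C++ sketch of the decision stack and a naive decide() follows; the names (Decision, isAssigned) are hypothetical, and Chaff's actual decision strategy is described in Section 3.

    // Illustrative decision stack (assumed names, not Chaff's code).
    #include <vector>

    struct Decision {
        int  var;            // the decided variable
        bool value;          // the value given to it
        bool triedBothWays;  // whether the opposite value has been explored
    };

    std::vector<Decision> decisionStack;  // its height is the decision level

    bool isAssigned(int var);             // hypothetical assignment lookup

    // decide(): pick any currently unassigned variable, give it a value, and
    // push a record of the decision; return false if none remain.
    bool decide(int numVars) {
        for (int v = 1; v <= numVars; ++v) {
            if (!isAssigned(v)) {
                decisionStack.push_back({v, true, false});
                return true;
            }
        }
        return false;                     // no unassigned vars remain
    }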
The operation of bcp(), which carries out Boolean Constraint Propagation (BCP), is to identify any variable assignments required by the current variable state to satisfy f. Recall that every clause must be sat for f to be sat. Therefore, if a clause consists of only literals with value 0 and one unassigned literal, then that unassigned literal must take on a value of 1 to make f sat. Clauses in this state are said to be unit, and this rule is referred to as the unit clause rule. The necessary variable assignment associated with giving the unassigned literal a value of 1 is referred to as an implication. In general, BCP therefore consists of the identification of unit clauses and the creation of the associated implications. In the pseudo-code above, bcp() carries out BCP transitively until either there are no more implications (in which case it returns true) or a conflict is produced (in which case it returns false). A conflict occurs when implications for setting the same variable to both 1 and 0 are produced.

At the time a decision is made, some variable state exists and is represented by the decision stack. Any implication generated following a new decision is directly triggered by that decision, but predicated on the entire prior variable state. By associating each implication with the triggering decision, this dependency can be compactly recorded in the form of an integer tag, referred to as the decision level (DL). For the basic DP search, the DL is equivalent to the height of the decision stack at the time the implication is generated.

To explain what resolveConflict() does, we note that we can invalidate all the implications generated on the most recent decision level simply by flipping the value of the most recent decision assignment. Therefore, to deal with a conflict, we can just undo all those implications, flip the value of the decision assignment, and allow BCP to then proceed as normal. If both values have already been tried for this decision, then we backtrack through the decision stack until we encounter a decision that has not been tried both ways, and proceed from there in the manner described above. Clearly, in backtracking through the decision stack, we invalidate any implications with decision levels equal to or greater than the decision level to which we backtracked. If no decision can be found which has not been tried both ways, that indicates that f is not satisfiable.

Thus far we have focused on the overall structure of the basic DP search algorithm. The following sections describe features specific to Chaff.

2. Optimized BCP

In practice, for most SAT problems, a major portion (greater than 90% in most cases) of the solver's run time is spent in the BCP process. Therefore, an efficient BCP engine is key to any SAT solver. To restate the semantics of the BCP operation: given a formula and a set of assignments with DLs, deduce any necessary assignments and their DLs, and continue this process transitively by adding the necessary assignments to the initial set. Necessary assignments are determined exclusively by repeated applications of the unit clause rule. Stop when no more necessary assignments can be deduced, or when a conflict is identified. For the purposes of this discussion, we say that a clause is implied iff all but one of its literals is assigned to zero.
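Restated as code under the same assumed representation (value() and LitValue are hypothetical declarations used only for these sketches, not Chaff routines), the definition reads:

    // Direct restatement of the definition above (illustrative only): a clause
    // is implied iff all but one of its literals is assigned to zero.
    enum LitValue { ZERO, ONE, UNASSIGNED };
    LitValue value(const Literal &lit);   // hypothetical assignment lookup

    bool isImplied(const Clause &c) {
        int zeros = 0;
        for (const Literal &l : c)
            if (value(l) == ZERO) ++zeros;
        return zeros == (int)c.size() - 1;
    }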
So, to implement BCP efficiently, we wish to find a way to quickly visit all clauses that become newly implied by a single addition to the set of assignments. The most intuitive way to do this is to simply visit every clause in the database that contains a literal set to 0 by the new assignment. In effect, we would keep a counter for each clause of how many value-0 literals it contains, and modify the counter every time a literal in the clause is set to 0. However, if the clause has N literals, there is really no reason that we need to visit it each time 1, 2, 3, ..., N-2 of its literals are set to zero. We would like to visit it only when the number of zero literals goes from N-2 to N-1.

As an approximation to this goal, we can pick any two literals not assigned to 0 in each clause to watch at any given time. Thus, we can guarantee that until one of those two literals is assigned to 0, there cannot be more than N-2 literals in the clause assigned to zero, that is, the clause is not implied. Now, we need only visit each clause when one of its two watched literals is assigned to zero. When we visit a clause, one of two conditions must hold:

(1) The clause is not implied, and thus at least 2 literals are not assigned to zero, including the other currently watched literal. This means at least one non-watched literal is not assigned to zero. We choose this literal to replace the one just assigned to zero. Thus, we maintain the property that the two watched literals are not assigned to 0.

(2) The clause is implied. Follow the procedure for visiting an implied clause (usually, this will generate a new implication, unless the clause is already sat). One should take note that the implied variable must always be the other watched literal, since, by definition, the clause only has one literal not assigned to zero, and one of the two watched literals is now assigned to zero.

It is invariant that in any state where a clause can become newly implied, both watched literals are not assigned to 0; a sketch of this clause-visit procedure is given at the end of this section.

A key benefit of the two-literal watching scheme is that at the time of backtracking, there is no need to modify the watched literals in the clause database. Therefore, unassigning a variable can be done in constant time. Further, reassigning a variable that has been recently assigned and unassigned will tend to be faster than the first time it was assigned. This is true because the variable may be watched in only a small subset of the clauses in which it was previously watched. This significantly reduces the total number of memory accesses, which, exacerbated by the high data cache miss rate, are the main bottleneck for most SAT implementations.

Figure 1 illustrates this technique. It shows how the watched literals for a single clause change under a series of assignments and unassignments. Note that the initial choice of watched literals is arbitrary, and that for the purposes of this example, the exact details of how the sequence of assignments and unassignments is generated are irrelevant.

One of the SATO [13] BCP schemes has some similarities to this one in the sense that it also watches two literals (called the head and tail literals by its authors) to detect unit clauses and conflicts. However, our algorithm differs from SATO's in that we do not require a fixed direction of motion for the watched literals, while in SATO the head literal can only move towards the tail literal and vice versa. Therefore, in SATO, unassignment has the same complexity as assignment.
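The following C++ fragment is the promised sketch of the clause visit for conditions (1) and (2). It reuses the Literal/Clause types and the hypothetical value()/LitValue declarations from the earlier sketches; enqueueImplication() is likewise an assumed helper, not a Chaff routine.

    // Illustrative two-watched-literal clause visit (not Chaff's actual code).
    void enqueueImplication(const Literal &lit);  // hypothetical: imply literal = 1

    struct WatchedClause {
        Clause lits;
        int    watch[2];   // indices into lits of the two watched literals
    };

    // Called when lits[watch[w]] has just been assigned 0.
    // Returns false only if a conflict is detected.
    bool onWatchAssignedZero(WatchedClause &c, int w) {
        const int other = c.watch[1 - w];
        // Condition (1): find a replacement literal not assigned to 0.
        for (int i = 0; i < (int)c.lits.size(); ++i) {
            if (i == c.watch[0] || i == c.watch[1]) continue;
            if (value(c.lits[i]) != ZERO) {
                c.watch[w] = i;            // move the watch; clause not implied
                return true;
            }
        }
        // Condition (2): the clause is implied; the other watched literal is
        // the only one that may be non-zero.
        const LitValue ov = value(c.lits[other]);
        if (ov == ONE) return true;        // clause already sat, nothing to do
        if (ov == UNASSIGNED) { enqueueImplication(c.lits[other]); return true; }
        return false;                      // both watches assigned 0: conflict
    }

Note that nothing done in this routine needs to be undone on backtracking, which is the constant-time unassignment property described above.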
3. Variable State Independent Decaying Sum (VSIDS) Decision Heuristic

Decision assignment consists of the determination of which new variable and state should be selected each time decide() is called. A lack of clear statistical evidence supporting one decision strategy over others has made it difficult to determine what makes a good decision strategy and what makes a bad one. To explain this further, we briefly review some common strategies. For a more comprehensive review of the effect of decision strategies on SAT solver performance, see [7] by Marques-Silva.

The simplest possible strategy is to select the next decision randomly from among the unassigned variables, an approach commonly denoted as RAND. At the other extreme, one can employ a heuristic involving the maximization of some moderately complex function of the current variable state and the clause database (e.g. the BOHM and MOMs heuristics). One of the most popular strategies, which falls somewhere in the middle of this spectrum, is the dynamic largest individual sum (DLIS) heuristic, in which one selects the literal that appears most frequently in unresolved clauses. Variations on this strategy (e.g. RDLIS and DLCS) are also possible. Other slightly more sophisticated heuristics (e.g. JW-OS and JW-TS) have been developed as well, and the reader is referred again to [7] for a full description of these other methods.

Clearly, with so many strategies available, it is important to understand how best to evaluate them. One can consider, for instance, the number of decisions performed by the solver when processing a given problem. This statistic has the feel of a good metric for analyzing decision strategies (fewer decisions ought to mean smarter decisions were made, the reasoning goes), and it appears occasionally in the scant literature on the subject. However, not all decisions yield an equal number of BCP operations, and as a result, a shorter sequence of decisions may actually lead to more BCP operations than a longer sequence of decisions, begging the question: what does the number of decisions really tell us? The same argument applies to statistics involving conflicts. Furthermore, it is also important to recognize that not all decision strategies have the same computational overhead, and as a result, the "best" decision strategy according to any combination of the available computation statistics may actually be the slowest if the overhead is significant enough. All we really want to know is which strategy is fastest, regardless of the computation statistics. No clear answer exists in the literature, though based on [7] DLIS would appear to be a solid all-around strategy. However, even RAND performs well on the problems described in that paper.

While developing our solver, we implemented and tested all of the strategies outlined above, and found that we could design a considerably better strategy for the range of problems on which we tested our solver. This strategy, termed Variable State Independent Decaying Sum (VSIDS), is described as follows (a sketch follows the list):

(1) Each variable in each polarity has a counter, initialized to 0.
(2) When a clause is added to the database, the counter associated with each literal in the clause is incremented.
(3) The (unassigned) variable and polarity with the highest counter is chosen at each decision.
(4) Ties are broken randomly by default, although this is configurable.
(5) Periodically, all the counters are divided by a constant.
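Here is the promised sketch of this counter bookkeeping in C++; the class name, decay divisor, and per-polarity array layout are assumptions for illustration, not Chaff's implementation.

    // Illustrative VSIDS counters for steps (1)-(5) above (assumed names).
    #include <vector>

    struct VsidsCounters {
        // score[0][v] is the counter for literal Vv, score[1][v] for -Vv.
        std::vector<double> score[2];

        explicit VsidsCounters(int numVars) {
            score[0].assign(numVars + 1, 0.0);  // step (1): counters start at 0
            score[1].assign(numVars + 1, 0.0);
        }

        // Step (2): called once per literal of every clause added to the database.
        void onLiteralInAddedClause(int var, bool negated) {
            score[negated ? 1 : 0][var] += 1.0;
        }

        // Step (5): periodic division favours recently added conflict clauses.
        void decayAll(double divisor = 2.0) {
            for (auto &polarity : score)
                for (double &s : polarity) s /= divisor;
        }

        // Steps (3)-(4): at decision time, pick the unassigned variable/polarity
        // with the highest counter, breaking ties randomly (not shown here).
    };

The decay in step (5) is what biases the statistics toward recent conflict clauses, as discussed next.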
Also, in order to choose the variable with the highest counter value even more quickly at decision time, a list of the unassigned variables sorted by counter value is maintained during BCP and conflict analysis (using an STL set in the current implementation).

Overall, this strategy can be viewed as attempting to satisfy the conflict clauses, and particularly as attempting to satisfy recent conflict clauses. Since difficult problems generate many conflicts (and therefore many conflict clauses), the conflict clauses dominate the problem in terms of literal count, so this approach distinguishes itself primarily in how the low-pass filtering of the statistics (indicated by step (5)) favors the information generated by recent conflict clauses. We believe this is valuable because it is the conflict clauses that primarily drive the search process on difficult problems, and so this decision strategy can be viewed as directly coupling that driving force to the decision process. Of course, another key property of this strategy is that since it is independent of the variable state (except insofar as we must choose an unassigned variable) it has very low overhead; the statistics are only updated when there is a conflict and, correspondingly, a new conflict clause. Even so, decision-related computation still accounts for ~10% of the run time on some difficult instances. (Conflict analysis is also ~10% of the run time, with the remaining ~80% of the time spent in BCP.) Ultimately, employing this strategy dramatically (i.e. by an order of magnitude) improved performance on all the most difficult problems without hurting performance on any of the simpler problems, which we viewed as the true metric of its success.

4. Other Features

Chaff employs a conflict resolution scheme that is philosophically very similar to that of GRASP, employing the same type of conflict analysis, conflict clause addition, and UIP identification. There are some differences that the authors believe have dramatically enhanced the simplicity and elegance of the implementation, but due to space limitations, we will not delve into that subject here.

4.1 Clause Deletion

Like many other solvers, Chaff supports the deletion of added conflict clauses to avoid a memory explosion. However, since the method for doing so in Chaff differs somewhat from the standard method, we briefly describe it here. Essentially, Chaff uses scheduled lazy clause deletion. When each clause is added, it is examined to determine at what point in the future, if any, the clause should be deleted. The metric used is relevance: when more than N literals in the clause will become unassigned for the first time, the clause will be marked as deleted. The actual memory associated with deleted clauses is recovered with an infrequent monolithic database compaction step. A sketch of this relevance check follows.
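This sketch restates the relevance test above in C++. It reuses the earlier assumed types and the hypothetical value() lookup, and the way the check is triggered is a simplification of Chaff's scheduled, lazy scheme.

    // Illustrative relevance-based deletion check (not Chaff's actual code).
    struct ConflictClause {
        Clause lits;
        bool   deleted = false;   // lazily marked; memory is reclaimed later by
                                  // a monolithic database compaction pass
    };

    // Mark the clause as deleted once more than N of its literals are
    // unassigned; N is the (configurable) relevance bound.
    void checkRelevance(ConflictClause &c, int N) {
        int unassigned = 0;
        for (const Literal &l : c.lits)
            if (value(l) == UNASSIGNED) ++unassigned;
        if (unassigned > N) c.deleted = true;
    }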
4.2 Restarts

Chaff also employs a feature referred to as restarts. Restarts in general consist of a halt in the solution process and a restart of the analysis, with some of the information gained from the previous analysis included in the new one. As implemented in Chaff, a restart consists of clearing the state of all the variables (including all the decisions) and then proceeding as normal. As a result, any still-relevant clauses added to the clause database at some time prior to the restart are still present after the restart. It is for this reason that the solver will not simply repeat the previous analysis following a restart. In addition, one can add a certain amount of transient randomness to the decision procedure to aid in the selection of a new search path. Such randomness is typically small, and lasts only a few decisions. Of course, the frequency of restarts and the characteristics of the transient randomness are configurable in the final implementation.

It should be noted that restarts impact the completeness of the algorithm. If all clauses were kept, however, the algorithm would still be complete, so completeness could be maintained by increasing the relevance parameter N slowly with time. GRASP uses a similar strategy to maintain completeness by extending the restart period with each restart (Chaff also does this by default, since it generally improves performance). Note that Chaff's restarts differ from those employed by, for instance, GRASP in that they do not affect the current decision statistics. They are mainly intended to provide a chance to change early decisions in view of the current problem state, including all added clauses and the current search path. With default settings, Chaff may restart in this sense thousands of times on a hard instance (sat or unsat), although similar results can often (or at least sometimes) be achieved with restarts completely disabled.
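A small sketch of such a restart schedule follows; the initial period and growth factor are assumptions, since the text above does not give Chaff's actual values.

    // Illustrative restart schedule (assumed constants, not Chaff's defaults).
    struct RestartPolicy {
        long   period = 1000;   // conflicts between restarts (assumed)
        double growth = 1.5;    // period grows after each restart, which also
                                // helps preserve completeness

        bool due(long conflictsSinceRestart) const {
            return conflictsSinceRestart >= period;
        }
        void onRestart() { period = (long)(period * growth); }
    };

    // On a restart: unassign all variables and clear the decision stack, but
    // keep the clause database (including added conflict clauses) and, in
    // Chaff's case, the current decision statistics.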
5. Experimental Results

On smaller examples with relatively inconsequential run times, Chaff is comparable to any other solver. However, on larger examples where other solvers struggle or give up, Chaff dominates, completing in up to one to two orders of magnitude less time than the best public domain solvers. Chaff has been run on and compared with other solvers on almost a thousand benchmark formulas. Obviously, it is impossible to provide complete results for each individual benchmark. Instead, we will present summary results for each class of benchmarks.

Comparisons were done with GRASP, as well as SATO. GRASP provides for a range of parameters that can be individually tuned. Two different recommended sets of parameters were used (GRASP(A) and GRASP(B)). For SATO, the default settings as well as -g100 (which restricts the size of added clauses to 100 literals, as opposed to the default of 20) were used. Chaff was used with the default cherry.smj configuration in all cases, except for the DIMACS pret* instances, which required a single parameter change to the decision strategy. All experiments were done on a 4-CPU 336 MHz UltraSparc II Solaris machine with 4GB of main memory. Memory usage depended on the run time of each instance.

Table 1 provides the summary results for the DIMACS [4] benchmark suite. Each row is a set of individual benchmarks grouped by category. For GRASP, both options resulted in several benchmarks aborting after 100 seconds, which was sufficient for both SATO and Chaff to complete all instances. On examples that the others also complete, Chaff is comparable to the others, with some superiority on the hole and par16 classes, which seem to be among the more difficult ones. Overall, most of the DIMACS benchmarks are now considered easy, as there are a variety of solvers that excel on various subsets of them. Note that some of the DIMACS benchmarks, such as the large 3-sat instance sets f and g, as well as the par32 set, were not used, since none of the solvers considered here performs well on these benchmark classes.

The next set of experiments was done using the CMU Benchmark Suite [11]. This consists of hard problems, satisfiable and unsatisfiable, arising from verification of microprocessors (for a detailed description of these benchmarks and Chaff's performance on them, see [12]). It is here that Chaff's prowess begins to show more clearly. For SSS.1.0, Chaff is about an order of magnitude faster than the others and can complete all the examples within 100 seconds. Both GRASP and SATO abort the 5 hard unsat instances in this set, which are known to take both GRASP and SATO significantly longer to complete than the sat instances. Results on using randomized restart techniques with the newest version of GRASP have been reported on a subset of these examples in [1]. We have been unable to reproduce all of those results, due to the unavailability of the necessary configuration profiles for GRASP (again, see [12]). However, comparing our experiments with the reported results shows the superiority of Chaff, even given a generous margin for the differences in the testing environments. For SSS.1.0a, Chaff completed all 9 of the benchmarks; SATO and GRASP could do only two. For SSS-SAT.1.0, SATO aborted 32 of the first 41 instances, at which point we decided to stop running any further instances for lack of hope and limited compute cycles. GRASP was not competitive at all on this set. Chaff again completed all 100 in less than 1000 seconds, within a 100-second limit for each instance. In FVP-UNSAT.1.0, both GRASP and SATO could only complete one easy example and aborted the next two; Chaff completed all 4. Finally, for VLIW-SAT.1.0, both SATO and GRASP aborted the first 19 of the twenty instances tried. Chaff finished all 100.

For many of these benchmarks, only incomplete solvers (not considered here) can find solutions in time comparable to Chaff, and for the harder unsatisfiable instances in these benchmarks, no solver the authors were able to run was within 10x of Chaff's performance, which prohibited running them on the harder problems. When enough information is released to run GRASP and locally reproduce results as in [1], these results will be revisited, although the results given would indicate that Chaff is still a full 2 orders of magnitude faster on the hard unsat instances, and at least 1 order of magnitude faster on the satisfiable instances.

7. Conclusions

This paper describes a new SAT solver, Chaff, which has been shown to be at least an order of magnitude (and in several cases, two orders of magnitude) faster than existing public domain SAT solvers on difficult problems from the EDA domain. This speedup is not the result of sophisticated learning strategies for pruning the search space, but rather of efficient engineering of the key steps involved in the basic search algorithm.
Specifically, this speedup is derived from a highly optimized BCP algorithm, and from a decision strategy that is highly optimized for speed as well as focused on recently added clauses.

8. References

[1] Baptista, L., and Marques-Silva, J.P., Using Randomization and Learning to Solve Hard Real-World Instances of Satisfiability, Proceedings of the 6th International Conference on Principles and Practice of Constraint Programming (CP), September 2000.
[2] Bayardo, R., and Schrag, R., Using CSP look-back techniques to solve real-world SAT instances, in Proceedings of the 14th National (US) Conference on Artificial Intelligence (AAAI-97), AAAI Press/The MIT Press, 1997.
[3] Biere, A., Cimatti, A., Clarke, E.M., and Zhu, Y., Symbolic Model Checking without BDDs, Tools and Algorithms for the Analysis and Construction of Systems (TACAS'99), number 1579 in LNCS, Springer-Verlag, 1999.
[4] DIMACS benchmarks, available at ftp://dimacs.rutgers.edu/pub/challenge/sat/benchmarks
[5] Freeman, J.W., Improvements to Propositional Satisfiability Search Algorithms, Ph.D. Dissertation, Department of Computer and Information Science, University of Pennsylvania, May 1995.
[6] Kunz, W., and Stoffel, D., Reasoning in Boolean Networks, Kluwer Academic Publishers, 1997.
[7] Marques-Silva, J.P., The Impact of Branching Heuristics in Propositional Satisfiability Algorithms, Proceedings of the 9th Portuguese Conference on Artificial Intelligence (EPIA), September 1999.
[8] Marques-Silva, J.P., and Sakallah, K.A., GRASP: A Search Algorithm for Propositional Satisfiability, IEEE Transactions on Computers, vol. 48, 1999.
[9] McAllester, D., Selman, B., and Kautz, H., Evidence for invariants in local search, in Proceedings of AAAI-97, MIT Press, 1997.
[10] Stephan, P., Brayton, R., and Sangiovanni-Vincentelli, A., Combinational Test Generation Using Satisfiability, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 15, 1996.
[11] Velev, M., FVP-UNSAT.1.0, FVP-UNSAT.2.0, VLIW-SAT.1.0, SSS-SAT.1.0, Superscalar Suite 1.0, Superscalar Suite 1.0a.
[12] Velev, M., and Bryant, R., Effective Use of Boolean Satisfiability Procedures in the Formal Verification of Superscalar and VLIW Microprocessors, in Proceedings of the Design Automation Conference, 2001.
[13] Zhang, H., SATO: An efficient propositional prover, Proceedings of the International Conference on Automated Deduction, July 1997.

Figure 1: BCP using two watched literals. The figure traces the clause (-V1 V4 -V7 V11 V12 V15) through a series of assignments and unassignments (e.g. V1=1, the implication V12=1, V4=0, V15=0, V7=1, V11=0 producing a conflict and backtrack), showing how the two watched literals move; literals are marked as unassigned (X), value 0, or value 1.
Notes for Tables 1 and 2: All times are in seconds. I = total number of instances in the set. A = number of instances aborted; if a number n in parentheses follows, then only n instances in the set were attempted due to the frequency of aborts. Time = total user time for search, including aborted instances. * = SATO was run with options (B) for this set. # = GRASP was run with options (B) for this set. ^ = Chaff was run with options (B) for this set. All solvers were run with options (A) unless marked; the result shown is for whichever set of options was better for each benchmark set.

GRASP options (A): +T100 +B C S V0 +g40 +rt4 +dmsmm +dr5
GRASP options (B): +T100 +B C S g20 +rt4 +ddlis
SATO options (A): -g100
SATO options (B): [default]
Chaff options (A): cherry.smj config
Chaff options (B): cherry.smj config plus maxlitsforconfdriven = 10

Table 1 (DIMACS benchmark suite): one row per benchmark class (ii, aim, pret, ssa, jnh, dubois, hole, par), with columns I and, for each of GRASP, SATO, and Chaff, Time and A. Abort timeout was 100s for these sets.

Table 2 (CMU benchmark suite): one row per set (SSS.1.0, SSS.1.0a, SSS-SAT.1.0, FVP-UNSAT.1.0, VLIW-SAT.1.0), with columns I and, for each of GRASP, SATO, and Chaff, Time and A. Abort timeout was 1000s for these sets, except for the &-marked sets, where it was 100s.