
Clean Prover System
A tool for interactively and automatically proving properties of functional programs

Maarten de Mol
maartenm@sci.kun.nl

Master Thesis no. 442
University of Nijmegen (KUN)
August 12, 1998

Preface

Concluding my study of Theoretical Computer Science at the University of Nijmegen, I carried out my graduate assignment in the beginning of 1998. From February to August I worked on the subject of automated proving in relation to functional programming languages. This work was carried out in the framework of the research group on `Functional Programming' at the university. The results of the graduate project can be found in this master thesis.

Focusing on one subject for a longer period of time was a new experience for me, which I enjoyed very much. I hope this thesis will be interesting to read as well. I would like to thank everyone who has helped me, both morally and with the content of this work, and especially my supervisors dr. Marko van Eekelen and prof. Rinus Plasmeijer, who have guided me throughout the entire project.

Nijmegen, August 1, 1998


Contents

Introduction

1 Specification
  1.1 Specification of the program
    1.1.1 Specification of algebraic types
    1.1.2 Specification of functions
  1.2 Specification of desired behavior
  1.3 Specification of axioms
  1.4 Specification of predicates
  1.5 Entering specifications in the system

2 Accumulative functions
  2.1 What is an accumulative function?
  2.2 Elt-list-accumulative functions
    2.2.1 Syntactical check
    2.2.2 Semantical check
    2.2.3 Constructed lemma
  2.3 List-list-accumulative functions
    2.3.1 Syntactical check
    2.3.2 Semantical check
    2.3.3 Constructed lemma

3 Proof power
  3.1 Information stored with each goal
  3.2 Available tactics
    3.2.1 Basic tactics
    3.2.2 Basic multi-tactics
    3.2.3 Composition mechanisms
    3.2.4 Composed tactics

4 Rewrite System
  4.1 Available rewrite-rules
  4.2 Desirable properties of rewrite-rules
  4.3 Application of rewrite-rules

5 Typing
  5.1 The typing algorithm
  5.2 The infix algorithm

6 Induction
  6.1 The induction algorithm

7 Proof correctness
  7.1 Translation of the specification
    7.1.1 Translation of algebraic types
    7.1.2 Translation of functions
    7.1.3 Translation of predicates
    7.1.4 Translation of lemma's
  7.2 Translation of the proof
    7.2.1 Translation of Curry
    7.2.2 Translation of Move On
    7.2.3 Translation of Split
    7.2.4 Translation of Unequal Constructors
    7.2.5 Translation of Introduction
    7.2.6 Translation of Induction
    7.2.7 Translation of Hypo Step
    7.2.8 Translation of Simplify Step
    7.2.9 Translation of Simplify Equality
    7.2.10 Translation of Hypo Introduce
    7.2.11 Translation of Generalize
    7.2.12 Translation of Use Equality
    7.2.13 Translation of Generalize Variable

8 Examples
  8.1 Reflexivity of =:=
  8.2 Applying Reverse twice
  8.3 LengthIsOdd and LengthIsEven
  8.4 Correctness of the accumulative rule
  8.5 Proven theorems

9 Conclusions

A Standard Library
  A.1 Algebraic Types
  A.2 Functions
  A.3 Predicates
  A.4 Lemma's

B Basic rewrite-rules
  B.1 Defining rules
  B.2 Comparing rules
  B.3 Lift rules
  B.4 Rules for equality-relation

C Input grammar
  C.1 Top-level
  C.2 Type-defs
  C.3 Function-defs
  C.4 Predicate-defs
  C.5 Lemma-defs
  C.6 Goal-defs
  C.7 Low-level
  C.8 Lowest-level

D Internal data structures
  D.1 Environment
  D.2 Algebraic type-definitions
  D.3 Function- and predicate-definitions
  D.4 Rewrite-rules
  D.5 Ongoing proof-sessions
  D.6 Propositions, expressions and types

Bibliography


Introduction

It is a well known fact that it is practically impossible to develop a computer program that is completely free of bugs. Customers don't like bugs in computer programs: if a program is likely to contain many bugs, they will not buy it. It is therefore important to convince customers that a program is correct.

Usually this is done by extensively testing the program. A number of test-cases are prepared and the program is tested in each of these cases. The results obtained in this way are checked: in case a result is incorrect, the program is modified. At the end of the testing process the program behaves correctly in all test-cases. The problem with testing is that it is impossible to test the program in all actual situations that it is going to be used in; this would simply take too much time. Therefore testing can only guarantee correctness to a limited extent.

An approach which can guarantee correctness to a better extent is formal proving. In this approach the program is not treated as a black box of which the contents are not known. Instead, the actual code of the program is needed, as well as a description of the desired behavior of the program. Then a proof is constructed which shows that the program indeed exhibits the desired behavior. This approach is much more difficult to execute than testing. A lot of expertise and time is needed, which makes it expensive. Therefore in practice usually the somewhat less reliable but cheaper method of testing is chosen.

Lately, tools have been developed to facilitate the process of constructing a formal proof. These proof tools offer a formal framework in which programs and desired behavior of programs can be expressed. Then, by applying pre-programmed proving actions, a formal proof can be constructed in this framework. Using a proof tool offers many advantages:

1. Administrative tasks (like saving proofs, modifying proofs, etc.) are taken care of.

2. Because only pre-programmed actions can be used, correctness of the resulting proof is guaranteed automatically.

3. High-level proof steps (like simplifying) can be modeled as a single pre-programmed action, limiting the number of steps needed to construct a proof.

Because of the availability of increasingly more powerful proof tools, the possibility of constructing formal proofs has become more and more interesting. Using proof tools for proving behavior of programs written in the functional

programming language clean turned out to be disappointing, however [1]. Experiments were conducted using the proof tools PVS [2] and COQ [3]. In both cases too much expertise of the underlying system was needed, both in translating the program to the corresponding specification language and in constructing the proof itself. The main problem is that PVS and COQ are generic provers equipped with a generic formal framework. In order to use the tool, program and behavior have to be modeled in this generic framework. This may require a difficult translation step and extensive knowledge of the framework itself. Also, a set of pre-programmed actions which is more tailored to functional programming languages may make proving easier.

In order to effectively experiment with (or actually use) a proof tool, the amount of expertise needed to use it should be minimal. Due to their generic character both PVS and COQ have drawbacks when proving behavior of clean-programs. To test whether these drawbacks can be removed, and to fully explore the possibilities of formal proving in relation to clean, a proof tool especially dedicated to clean was developed. This tool is called Clean Prover System, or in short CPS.

As guidance for the development a list of statements was used, describing desired behavior of functions defined in the standard library of clean. The developed prover CPS should at least be able to prove all these statements. Examples of statements on the list include:

  for all lists x and y: x = y -> y = x
  for all lists x, y and z: x ++ (y ++ z) = (x ++ y) ++ z
  for all lists x: Reverse (Reverse x) = x
  for all lists (of numbers) x: Sum (Map Succ x) = (Sum x) + (Length x)

It should be possible to express the functions and types in question in CPS without having to translate. It should be possible to express the statements themselves in CPS. Finally, the generation of proofs for these statements should be easier than in COQ and PVS, where proving these statements proved to be tedious. Preferably one would even like CPS to generate proofs for these statements automatically.

In this thesis a description of CPS will be given. Specifying the program in question and its behavior is described in chapter 1. In chapter 2 an extra functionality of CPS specifically for functional programming languages is described: the recognition and handling of functions in accumulative form. The pre-programmed proving actions are described in chapter 3. The most important component of the logical framework is the rewrite system, which is described in chapter 4. In chapter 5 the algorithm which is used to type expressions is described. The algorithm to do induction, which uses the typing algorithm, is described in chapter 6. In chapter 7 an argument is made on why proofs generated with CPS are correct. In chapter 8 examples are given of proofs constructed with CPS. Finally, in chapter 9 the conclusion is drawn and possibilities for further research are mentioned.
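The statements on this list can be sanity-checked on concrete inputs before any proving is attempted. A minimal Haskell sketch, using Haskell stand-ins for the clean standard-library functions (the names `reverse'` and `prop...` are hypothetical; `reverse'` is one plausible explicit definition, and `Sum`, `Map` and `Length` are replaced by Haskell's own `sum`, `map` and `length`):

```haskell
-- Naive reverse, as a stand-in for clean's Reverse (an assumed definition).
reverse' :: [a] -> [a]
reverse' []     = []
reverse' (x:xs) = reverse' xs ++ [x]

-- for all lists x: Reverse (Reverse x) = x
propReverseTwice :: [Int] -> Bool
propReverseTwice xs = reverse' (reverse' xs) == xs

-- for all lists x, y and z: x ++ (y ++ z) = (x ++ y) ++ z
propAppendAssoc :: [Int] -> [Int] -> [Int] -> Bool
propAppendAssoc x y z = x ++ (y ++ z) == (x ++ y) ++ z

-- for all lists (of numbers) x: Sum (Map Succ x) = (Sum x) + (Length x)
propSumMapSucc :: [Int] -> Bool
propSumMapSucc xs = sum (map succ xs) == sum xs + length xs
```

Checking such properties on a handful of inputs is exactly the kind of testing the introduction argues is insufficient; CPS aims to prove them for all lists.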

Chapter 1

Specification

A proof is constructed for a program and its desired behavior. This desired behavior can be any statement in logic in which the symbols defined in the program may be used. Before a formal proof can be constructed with CPS, the program in question and the statement that is supposed to hold for it must be given as input. In computer science the program is often called the realization and the property its specification. From the prover's point of view, however, both the developed program and the statement have to be provided as input and are thus both part of the specification (using the word specification in its normal meaning: `a detailed presentation of something'). Therefore the terminology will be used that both are part of the specification.

The specification can also contain two other components: definitions of predicates which may be used to construct the statement to prove, and definitions of axioms which may be used in the proof to be constructed. The four components of the specification are called the source objects of the prover. Given the source objects, CPS can be used to construct a formal proof. A proof consists of a sequence of applications of pre-programmed proving actions. The proof is called the target object of the prover. The relations between the source objects and the target object are depicted in figure 1.1. In this chapter each component of the specification, as well as the mechanism to enter a specification in CPS, is described.

1.1 Specification of the program

The target programming language for CPS is the lazy functional programming language clean [4]. Not all valid clean-programs are allowed as input; only a subclass of valid programs can be specified. In this subclass just some basic constructs of clean are included: algebraic type-definitions, function-definitions by pattern-matching and recursion, higher-order and partial applications. Excluded are, for example, where-clauses, list-comprehensions, class-definitions, dot-dot-expressions and arrays. This subclass can also be found in many other (functional) programming languages, for which CPS can be used as well.

When specifying recursive functions, no check is made whether the function

[Figure 1.1: Relation between target and source objects. The figure shows an example program (And :: Bool Bool -> Bool, And True x = x, And False x = False), the property Symmetric And, the predicate Symmetric :: (a -> (a -> Bool)) -> Prop with Symmetric f := [x:a] [y:a] f x y = f y x, an axiom And x True --> True, and a proof built from steps such as Simplify, Induction, Move On and Use Axiom.]

always terminates. In clean this is not mandatory, since lazy evaluation can (in some cases) deal with non-terminating recursive definitions. But CPS is not able to handle non-terminating definitions at all. This is due to the internal rewriting system, which is eager. Therefore the use of CPS is restricted to terminating function-definitions. The specification of algebraic types and functions is described in the following subsections.

1.1.1 Specification of algebraic types

An algebraic type-definition defines a type-constructor. A type-constructor constructs a new type out of other types. Examples are List (which has arity 1), but also Bool (which has arity 0). Along with each type-constructor a number of data-constructors are defined. A data-constructor constructs an element of the type created by the type-constructor out of elements of other types. This is the only way to construct elements of the type created by the type-constructor. Examples of data-constructors are Cons (which has arity 2), but also True (which has arity 0).

A data-constructor which has arity 2 can be defined as an infix operator. It is allowed (but not obliged) to write an infix operator in between its arguments. Internally, applications are always stored in prefix form. If the user presents an application which is in infix form to the system, it is automatically transformed to prefix form. Applications in infix form are detected by using the typing algorithm. By default, all internally stored applications are shown in infix form to the user when possible.

In CPS it is always possible to use an infix operator in a prefix manner. This

is not the case in clean, where extra brackets around an infix operator are needed to lift it to a prefix one. Another difference is that priorities for infix operators are not supported in CPS. This means that it may be necessary to supply extra brackets when writing down infix applications.

Algebraic type-definitions are denoted almost the same as in clean. An algebraic type-definition must start with a `::'-symbol, followed by the name of the type-constructor. Names of functions, data-constructors and type-constructors must be specified as identifiers in CPS; which identifiers are valid is described in appendix C. The arguments of the type-constructor (which in a concrete situation will be other types) are denoted by type-variables in the definition. Notice the similarity with function-definitions, where arguments are denoted by (expression-)variables. The system only recognizes the letters `a', `b' and `c' as type-variables. After the arguments a `='-symbol must be written, followed by the definitions of the data-constructors. These definitions must be separated by `|'-symbols and a `.'-symbol must be written at the end of the last definition. The definition of a data-constructor consists of its name (again an identifier) followed by the types of its arguments. How types can be specified is described in appendix C. Defining the data-constructor as an infix operator is accomplished by adding the `INFIX'-keyword in front of its name.

Almost all pure algebraic type-definitions of clean can be specified without modification in CPS, with the exception that an extra `.' must be added. Uniqueness, records, tuples, arrays and priorities of infix operators can not be expressed in CPS. Examples:

  :: Boolean = True | False.
  :: List a = Nil | Cons a (List a).
  :: Expression a = Value a | INFIX Op (Expression a) (Expression a).

1.1.2 Specification of functions

In CPS the user must supply a type for each function. In this type, type-variables may be used in order to specify polymorphic functions. However, CPS does not support higher-order type-variables. No type-variables may occur in the result type of the function which did not occur in any of the argument types.

Functions are defined by pattern-matching. Each pattern consists of a number of matching expressions (matching on the arguments) and a result expression. A result expression is an ordinary expression, which can be either an expression-variable (denoted by one of the letters `x', `y', `z', `p', `q', `r', `s' or `t') or an application. An application can be the application of a data-constructor, function or arbitrary other expression on a list of argument expressions. In a matching expression, however, only applications of data-constructors

are allowed. Thus only expressions consisting entirely of applications are allowed in CPS. Expression-variables can be introduced in the matching expressions of a pattern. These variables then correspond to (part of) an argument of the function. In the result expression of the pattern these variables may be used, but no other expression-variables are allowed.

When specifying a function-definition, little typing is done. The type given by the user is assumed to be correct and is stored in the environment in order to enable further typing. For each matching expression the types of the occurring expression-variables are inferred. These types are then used for removing the infix applications from the result expression, for which some typing is necessary. See chapter 5 for more information.

The notation for function-definitions in CPS resembles clean. A function-definition is started with its name (as identifier), followed by a `::'-symbol and the type of the function. The type of the function consists of a list of argument types, a `->'-symbol and the result-type. If the function is defined as an infix-operator, the `INFIX'-keyword must be added in front of the type of the function. Then the patterns of the function must be specified. Each pattern starts with the name of the function followed by the matching expressions. Then a `='-symbol must be given, followed by the result expression of the pattern. When the result expression of the last pattern of a function is an application, it must end with a bracket or a square bracket. It may sometimes be necessary to put the result expression in between extra brackets to achieve this. Examples:

  ++ :: INFIX (List a) (List a) -> (List a)
  ++ [] x = x
  ++ [x:y] z = [x:y ++ z]

  Map :: (a->b) (List a) -> (List b)
  Map x [] = []
  Map x [y:z] = [x y:Map x z]

In these definitions two special facilities of CPS are used:

1. Writing an infix application in between its arguments. The application of `++' on `y' and `z' is normally denoted by `++ y z'. Since `++' was defined as an infix-operator, the notation `y ++ z' is also allowed.

2. Special notations for lists and numbers. Instead of writing down `Cons x (y ++ z)' the notation `[x:y ++ z]' was used. This is a special notation for Cons. Special notations for Succ, Zero and Nil also exist. By default the system uses the special notation when producing output for the user. Input is allowed in both special and normal notation.
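The example definitions above, together with the special notations for lists and numbers, can be rendered in Haskell for comparison. This is a sketch with hypothetical names (`append`, `mapL`, `fromInt`), since Haskell spells these constructs slightly differently than CPS:

```haskell
data List a = Nil | Cons a (List a) deriving (Eq, Show)
data Peano  = Zero | Succ Peano    deriving (Eq, Show)

-- the thesis's ++ on List
append :: List a -> List a -> List a
append Nil        z = z
append (Cons x y) z = Cons x (append y z)

-- the thesis's Map
mapL :: (a -> b) -> List a -> List b
mapL _ Nil        = Nil
mapL f (Cons y z) = Cons (f y) (mapL f z)

-- the numeral shorthand: `n' stands for n applications of Succ to Zero
fromInt :: Int -> Peano
fromInt 0 = Zero
fromInt n = Succ (fromInt (n - 1))
```

For example, `append (Cons 1 Nil) (Cons 2 Nil)` evaluates to `Cons 1 (Cons 2 Nil)`, the prefix form of what CPS would display as `[1] ++ [2]` rewriting to `[1,2]`, and `fromInt 2` is `Succ (Succ Zero)`, the expansion of the numeral `2`.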

The special notations possible are:

  Writing `0' instead of `Zero'.
  Writing the number `n' instead of `Succ (... (Succ Zero)...)', where in this application the constructor Succ appears n times.
  Writing `[]' instead of `Nil'.
  Writing `[expr]' instead of `(Cons expr Nil)'.
  Writing `[expr:exprs]' instead of `(Cons expr exprs)'.

1.2 Specification of desired behavior

Desired behavior of defined functions is expressed in an extension of first-order predicate logic in CPS. The extension consists of the possibility to state the equality of two expressions. A statement in this logic will be called a proposition. A proposition can be one of the following:

  A proposition-variable. This alternative is only needed to allow unification of propositions in the rewrite system.
  A proposition-constant. The two defined constants are TRUE and FALSE.
  A conjunction. The conjunction of P and Q is written as P AND Q.
  A disjunction. The disjunction of P and Q is written as P OR Q.
  An implication. The proposition stating that P implies Q is written as P -> Q.
  A negation. The negation of P is written as ~P.
  An equality of two expressions. The equality of E and F is written as E = F.
  A quantification (types). If P is a proposition in which the type-variable a occurs, then the proposition stating `for all types a, P holds' is written as [a:set] P.
  A quantification (expressions). If P is a proposition in which the expression-variable x of type A occurs, then the proposition stating `for all expressions x of type A, P holds' is written as [x:A] P.

Propositions for which one wants to construct a proof are called goals in CPS. Goals are part of the specification and are denoted by the `GOAL'-keyword, followed by a name for the goal and the proposition making up the goal itself. The name of a goal may be any string enclosed in `"'. Examples:

  GOAL "Listeq is reflexive"
  [a:set] [x:list a] Listeq x x = True

  GOAL "++ is associative"
  [a:set] [x:list a] [y:list a] [z:list a] x ++ (y ++ z) = (x ++ y) ++ z
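The proposition alternatives listed above map naturally onto an algebraic datatype. A hypothetical Haskell rendering (expressions are abbreviated to strings here; the actual internal representation of CPS is described in appendix D, and the helper `quantifiers` is invented for illustration):

```haskell
type Expr = String  -- placeholder: the real expression type is richer

data Prop
  = PVar String                    -- proposition-variable (for unification)
  | PTrue | PFalse                 -- the constants TRUE and FALSE
  | PAnd Prop Prop                 -- P AND Q
  | POr  Prop Prop                 -- P OR Q
  | PImp Prop Prop                 -- P -> Q
  | PNot Prop                      -- ~P
  | PEq  Expr Expr                 -- E = F
  | ForallType String Prop         -- [a:set] P
  | ForallExpr String String Prop  -- [x:A] P  (variable, type, body)
  deriving (Eq, Show)

-- hypothetical helper: number of leading quantifiers of a goal
quantifiers :: Prop -> Int
quantifiers (ForallType _ p)   = 1 + quantifiers p
quantifiers (ForallExpr _ _ p) = 1 + quantifiers p
quantifiers _                  = 0
```

In this encoding the goal "++ is associative" becomes one ForallType node followed by three ForallExpr nodes around a PEq leaf.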

1.3 Specification of axioms

An axiom is a logical proposition which is assumed to be true without proof. In CPS it is possible to make use of axioms in constructing proofs. The application of an axiom in a proof is modeled by a rewrite step, and axioms are represented by rewrite-rules. Such rewrite-rules are actually called lemma's in CPS, since from a prover's point of view these rules are just an expedient for constructing a proof. More information on the rewrite system can be found in chapter 4.

Because lemma's are represented by rewrite-rules, two different kinds of lemma's exist: one for rewriting expressions and one for rewriting propositions. When the right-hand side of a lemma is omitted, it is assumed to be the proposition TRUE.

A lemma is specified by the `LEMMA'-keyword, followed by a name for the lemma, the free variables occurring in it, a `='-symbol, the left-hand side of the rewrite-rule, a `--->'-symbol and the right-hand side of the rewrite-rule. The name of a lemma may again be any string enclosed in `"'. The types of occurring expression-variables must be supplied for each lemma, since these may occur unbound in rewrite-rules to specify unification positions. This information is specified by a list of typing units separated by `,'-symbols. The entire list must be enclosed in square brackets. A typing unit consists of an expression-variable, followed by a `:'-symbol and its type. However, the type-variables (used in the types of the expression-variables) must also be specified. Thus a typing unit can also be a type-variable, followed by a `:'-symbol and the `SET'-keyword. Examples:

  LEMMA "Listeq is reflexive" [a:set, x:list a]
  = Listeq x x ---> True

  LEMMA "demorgan 1"
  = ~(P OR Q) ---> ~P AND ~Q

  LEMMA "ex falso..."
  = FALSE -> P

  LEMMA "+ is associative" [x:peano, y:peano, z:peano]
  = (x + y) + z ---> x + (y + z)

  LEMMA "is + associative?"
  = Associative +

Warning: the last lemma rewrites the whole proposition (x + y) + z = x + (y + z) to TRUE. To actually state the associativity of +, the lemma above it is more suitable.
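Applying a lemma such as "+ is associative" amounts to matching its left-hand side (which contains free variables) against a term and instantiating its right-hand side with the resulting substitution. A minimal first-order matching sketch, with a hypothetical term representation (the actual rewrite system of CPS is described in chapter 4):

```haskell
data Term = Var String | App String [Term] deriving (Eq, Show)

-- Match a pattern containing free variables against a ground term.
match :: Term -> Term -> Maybe [(String, Term)]
match (Var v) t = Just [(v, t)]
match (App f ps) (App g ts)
  | f == g && length ps == length ts = do
      s <- concat <$> sequence (zipWith match ps ts)
      -- a variable occurring twice must be bound consistently
      if and [t1 == t2 | (v1, t1) <- s, (v2, t2) <- s, v1 == v2]
        then Just s
        else Nothing
match _ _ = Nothing

-- Instantiate a term with a substitution.
subst :: [(String, Term)] -> Term -> Term
subst s (Var v)    = maybe (Var v) id (lookup v s)
subst s (App f ts) = App f (map (subst s) ts)

-- Apply a rewrite-rule (lhs ---> rhs) at the root of a term.
rewrite :: (Term, Term) -> Term -> Maybe Term
rewrite (lhs, rhs) t = (\s -> subst s rhs) <$> match lhs t
```

With the associativity lemma as the rule, rewriting `(1 + 0) + 0` at the root yields `1 + (0 + 0)`, binding x to 1, y to 0 and z to 0 on the way.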

1.4 Specification of predicates

To allow easier specification of goals, predicates were introduced in CPS. A predicate is a function which delivers a proposition instead of an expression as its result. To distinguish between functions and predicates, the `='-symbols must be replaced by `:=' in predicate-definitions.

Arguments of functions must always be expressions. This is also the case for predicates. Thus it is not possible to use propositions (or other predicates) as arguments of a function or predicate. As with functions, a type must be supplied for each predicate. The result type of a predicate must always be `Prop'. If a type-variable is introduced in one of the argument types, this type-variable may be used in the result propositions of all patterns. However, quantification over this type-variable is not allowed. Thus it can only appear in a result proposition as part of the type of an expression-variable in a quantification. Example:

  Symmetric :: (a -> (a -> Bool)) -> Prop
  Symmetric x := [y:a] [z:a] x y z = True -> x z y = True

Notice that a currying-mechanism is used to specify a function-type as parameter. When Symmetric is applied to, say, Listeq, the rewriting delivers the following result:

  Symmetric Listeq
  [y:a] [z:a] Listeq y z = True -> Listeq z y = True

For two reasons this is not the desired result. The first reason is that it is possible that the expression-variables y and z are already used elsewhere. They must be replaced with the first two free expression-variables, which in this case are x and y:

  [y:a] [z:a] Listeq y z = True -> Listeq z y = True
  [x:a] [y:a] Listeq x y = True -> Listeq y x = True

The second reason is that the type-variable a is not the desired type of an argument of Listeq. Therefore a fresh type for Listeq is generated, which is List b -> (List b -> Bool). This type is unified with the given parameter type a -> (a -> Bool), resulting in the substitution:

  a |-> List b

Then this substitution is used to obtain the correct argument types:

  [x:a] [y:a] Listeq x y = True -> Listeq y x = True
  [x:list b] [y:list b] Listeq x y = True -> Listeq y x = True

Still, this is not the desired result. The introduced type-variables have to be substituted by the first free type-variables and they must be made bound by quantification. In the example, a is the first available type-variable:

  [x:list b] [y:list b] Listeq x y = True -> Listeq y x = True
  [a:set] [x:list a] [y:list a] Listeq x y = True -> Listeq y x = True

This is the desired result. In the original goal, which was Symmetric Listeq, the use of a predicate is demonstrated. However, syntactically Symmetric Listeq is an expression which has type Prop and is thus not a proposition. Therefore another alternative is added to propositions: a proposition may be any expression of type Prop. Example:

  Equi :: (a -> (a -> Bool)) -> Prop
  Equi x := Reflexive x AND Symmetric x AND Transitive x

1.5 Entering specifications in the system

In order to enter a specification into the prover it needs to be stored in a file. A file containing (parts of) a specification is called a module. In a module all different components of specifications may occur: algebraic types, functions, axioms, predicates and goals. When running the prover the option to read a module is available. When the user attempts to load a module the following actions are taken:

1. The module is parsed. The actual grammar used is described in appendix C.

2. A duplicate check is made. It is forbidden to use the same identifier (for type-constructors, data-constructors and functions) twice. It is also forbidden to have multiple lemmas with the same name or multiple goals with the same name.

3. Infix applications are removed. In the module, applications in infix form may occur. With this check these are removed; all applications are stored in prefix form internally. The algorithm to remove infix applications is described in chapter 5.

4. Extra rules for accumulative functions are generated. When a function-definition syntactically satisfies certain conditions, an extra lemma can be inferred for it. The algorithm that takes care of the detection of accumulative functions and the construction of the extra lemma is described in chapter 2.

At start-up CPS by default reads the modules Standard.CPS and Goals.CPS. The module Standard contains definitions for functions and types defined in the standard environment of Clean. It also contains some basic predicate-definitions. The Goals module can be used to store the goals (and proofs) one is currently working on.
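Action 3 above, the removal of infix applications, amounts to a simple recursive rewriting of the syntax tree. The following Haskell sketch illustrates the idea on a hypothetical expression type; CPS's actual internal representation and its algorithm (described in chapter 5) may well differ.

```haskell
-- Hypothetical expression type: all applications are stored in prefix
-- form internally, so Infix nodes must be rewritten away after parsing.
data Expr
  = Var String               -- variables and function names
  | App Expr Expr            -- prefix application
  | Infix String Expr Expr   -- infix application, e.g. x + y
  deriving (Eq, Show)

-- Rewrite every infix application `l op r` to the prefix form ((op l) r).
toPrefix :: Expr -> Expr
toPrefix (Infix op l r) = App (App (Var op) (toPrefix l)) (toPrefix r)
toPrefix (App f x)      = App (toPrefix f) (toPrefix x)
toPrefix v@(Var _)      = v
```

For example, toPrefix (Infix "+" (Var "x") (Var "y")) yields App (App (Var "+") (Var "x")) (Var "y").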

Chapter 2  Accumulative functions

The possibilities for constructing formal proofs concerning the behavior of a developed program can depend heavily on the syntactical form of the function-definitions used. Even when considering simple function-definitions only, the difference in ease of proving between two obviously equivalent function-definitions can be big. In general, function-definitions which allow the successful use of induction greatly increase the chances of finding a proof. For induction to be successful, it must be possible to effectively use the induction hypothesis. This implies that the function performs a recursive call on a part of a composed argument. A function is called tail-recursive if this is the case. In practice many other forms of function-definitions exist which are equivalent to a tail-recursive definition. One of these forms is the accumulative form.

CPS supports a still very primitive algorithm to automatically infer an extra lemma for each function in accumulative form. This extra lemma increases the chance of success when trying induction. At present the algorithm is only able to deal with a very small subset of functions in accumulative form. Two different types of accumulative functions are recognized: list-list-accumulative ones and elt-list-accumulative ones. List-list-accumulative functions traverse a list of elements of a certain type and accumulate as result a list of elements of this same type. Elt-list-accumulative functions also traverse a list of elements of a certain type, but accumulate as result one element of this same type instead of a list. The algorithms to construct an extra lemma for both kinds of definitions are described in this chapter. Both algorithms are based on a program transformation scheme, which can be found in [5].

2.1 What is an accumulative function?

A function is called accumulative if it operates on a list of elements and delivers some (simple) result as output.
The elements of the list are examined one by one and after each examined element an intermediate result is updated. When the list has been traversed completely, the intermediate result becomes the final result. The expression with which the intermediate result is initialized is called the neutral element. The function which, given an intermediate result and an element of the list, constructs a new intermediate result is called the update function. To traverse the list and keep up the intermediate result at the same time a second function is needed (since no argument in the original definition can contain the intermediate result). This function, which must be called with the neutral element and the original list as arguments, is called the accumulative function. Example:

  And :: (List Bool) -> Bool
  And x = (AccAnd True x)

  AccAnd :: Bool (List Bool) -> Bool
  AccAnd x [] = x
  AccAnd x [y:z] = (AccAnd (x && y) z)

In this example the function And is in accumulative form, True is its neutral element, AccAnd its accumulative function and && its update function.

2.2 Elt-list-accumulative functions

A function is in elt-list-accumulative form if it traverses an argument of type List A and its intermediate and final result are of type A. Each time a function is added to the specification, the system checks if the function is in elt-list-accumulative form. This is a syntactical check, in which the neutral element, update function and accumulative function are determined. An extra lemma can not be inferred for all functions syntactically in elt-list-accumulative form, however. It is necessary to perform an additional semantical check. If both checks succeed, the lemma belonging to the accumulative function is constructed.

2.2.1 Syntactical check

In order to determine whether a function is in elt-list-accumulative form, the following syntactical conditions must be satisfied:

1. The function must have type List A -> A for some type A. Type-variables are allowed in this position.

2. The function may only have one pattern.

3. The matching expression of the pattern must be x.

4. The result expression of the pattern must be F expr x, where F is the name of a function and expr is an arbitrary expression.
Now the accumulative function, which is F, and the neutral element, which is expr, are also known. The function F must satisfy the following syntactical conditions:

1. It must have precisely two patterns.

2. The matching expressions of the first pattern must be x and [].

3. The result expression of the first pattern must be x.

4. The matching expressions of the second pattern must be x and [y:z].

5. The result expression of the second pattern must be F expr z, where expr is an arbitrary expression in which the expression-variables x and y may occur.

Note that these conditions are very restrictive. It is not possible for the accumulative function to have other patterns or arguments, or to change the order of the parameters or patterns. Allowing more accumulative functions has not been implemented yet.

Finally the update function has to be constructed out of expr. For this purpose, expr is regarded as a function with arguments x and y. Whenever the result of this function on some concrete arguments e1 and e2 is needed, the substitution x ↦ e1, y ↦ e2 is applied to expr. The type of the update function is A -> (A -> A), where A is the type of the neutral element. Example:

  Sum :: (List Peano) -> Peano
  Sum x = AccSum 0 x

  AccSum :: Peano (List Peano) -> Peano
  AccSum x [] = x
  AccSum x [y:z] = (AccSum (x + y) z)

The function Sum is in elt-list-accumulative form. Earlier the neutral element (0) and accumulative function (AccSum) were determined. Now the update function is also determined:

  update function (x, y) := x + y

2.2.2 Semantical check

In order to show the correctness of the constructed lemma (see the next paragraph), the neutral element and the update function must satisfy two additional semantical conditions:

1. The neutral element must be a neutral element for the update function. This means that whenever the update function is applied to some argument x and the neutral element, the outcome must again be x. This must hold both when the neutral element is the first argument and when it is the second.

2. The update function must be associative.

These two conditions are checked by specifying them as goals and trying to prove them automatically. If this succeeds, the lemma belonging to the accumulative function is constructed. This process leads to the following goals in the given example, which can indeed be proven automatically:

  [x:peano] x + 0 = x                                        Auto  TRUE
  [x:peano] 0 + x = x                                        Auto  TRUE
  [x:peano] [y:peano] [z:peano] x + (y + z) = (x + y) + z    Auto  TRUE

2.2.3 Constructed lemma

In order to understand why an extra lemma is needed in the first place, take a look at the following goal:

  [x:list Peano] Sum (x ++ [0]) = Sum x

This seems like a very easy goal which one wants to prove without difficulties using CPS. However, using the given definition of Sum this will fail. Examining a proof attempt reveals why. The first step in the proof attempt is to simplify the goal using the definition of Sum. This leads to the following new goal:

  [x:list Peano] AccSum 0 (x ++ [0]) = AccSum 0 x

This goal can not be simplified any further. Therefore the next step in the attempt is to apply induction. The resulting induction base is left out, since it is proven easily by simplification. The induction step is:

  AccSum 0 (z ++ [0]) = AccSum 0 z ->
  AccSum 0 ([y:z] ++ [0]) = AccSum 0 [y:z]

Now three rewrite-rules can be applied to simplify this proposition. The first is a pattern-match rule for the function ++, the second for AccSum and the third for +:

  [x:y] ++ z      → [x:y ++ z]
  AccSum x [y:z]  → AccSum (x + y) z
  0 + x           → x

Simplifying the goal using these three rewrite-rules results in the following new goal:

  AccSum 0 (z ++ [0]) = AccSum 0 z ->
  AccSum y (z ++ [0]) = AccSum y z

At this point the induction hypothesis can be introduced. It can, however, not be used on the remaining goal, since this goal does not contain a call of AccSum with first parameter 0. Therefore the attempt is stuck at this point. The availability of the following lemma would provide a possibility to continue the proof attempt:

  AccSum x y → x + (AccSum 0 y)

By applying this lemma to the stuck goal (where the induction hypothesis has not been introduced) the calls of AccSum that do not have 0 as first argument are replaced by those that have:

  AccSum 0 (z ++ [0]) = AccSum 0 z ->
  y + (AccSum 0 (z ++ [0])) = y + (AccSum 0 z)

This trivial goal is then proven easily, completing the proof. Note that an extra condition is necessary for the given lemma: it may not be applied when x already is 0.

In general the following lemma is inferred for an accumulative function Acc of type A → [A] → A with update function ⊕ and neutral element e:

  Acc x y → x ⊕ Acc e y

Under the assumptions that ⊕ is associative and e is a neutral element of ⊕, this lemma is correct by the following theorem:

Theorem:
  ∀y∈[A] ∀x∈A [ Acc x y = x ⊕ Acc e y ]

Proof: By induction on y.

Induction Base:
  ∀x∈A [ Acc x [] = x ⊕ Acc e [] ]    {def. Acc}
  ∀x∈A [ Acc x [] = x ⊕ e ]           {x ⊕ e = x}
  ∀x∈A [ Acc x [] = x ]               {def. Acc}
  ∀x∈A [ x = x ]                      {trivial}

Induction Hypothesis:
  ∀x∈A [ Acc x ys = x ⊕ Acc e ys ]

Induction Step:
  ∀x∈A [ Acc x [y:ys] = x ⊕ Acc e [y:ys] ]            {def. Acc}
  ∀x∈A [ Acc x [y:ys] = x ⊕ Acc (e ⊕ y) ys ]          {e ⊕ y = y}
  ∀x∈A [ Acc x [y:ys] = x ⊕ Acc y ys ]                {def. Acc}
  ∀x∈A [ Acc (x ⊕ y) ys = x ⊕ Acc y ys ]              {use IH}
  ∀x∈A [ Acc (x ⊕ y) ys = x ⊕ (y ⊕ Acc e ys) ]        {use IH}
  ∀x∈A [ (x ⊕ y) ⊕ Acc e ys = x ⊕ (y ⊕ Acc e ys) ]    {assoc. ⊕}
  ∀x∈A [ x ⊕ (y ⊕ Acc e ys) = x ⊕ (y ⊕ Acc e ys) ]    {trivial}

□

This proof can be modeled in CPS as well. See chapter 8 for a description of this proof.

2.3 List-list-accumulative functions

A slight variation on the elt-list-accumulative form is the list-list-accumulative form. In this case the intermediate result has type List A and the update function type List A -> A -> List A. However, it is necessary for the update function to have some type B -> B -> B in order to be associative. Therefore in this case some extra work is necessary. To detect list-list-accumulative functions again two checks are carried out: a syntactical one and a semantical one. Then the lemma for the function is constructed.

2.3.1 Syntactical check

The syntactical conditions for list-list-accumulative functions are exactly the same as for elt-list-accumulative functions, with the exception that the type of the main function has to be List A -> List A for some type A. The first difference arises in the construction of the update function. Up to this point, the neutral element, the accumulative function and an update expression out of which the update function must be inferred are known. Example:

  Reverse :: (List a) -> (List a)
  Reverse x = (AccReverse [] x)

  AccReverse :: (List a) (List a) -> (List a)
  AccReverse x [] = x
  AccReverse x [y:z] = (AccReverse [y:x] z)

  neutral element       := []
  accumulative function := AccReverse
  update expression     := [y:x]

Using (x, y) ↦ [y:x] as update function is not possible, since this function has type List A -> A -> List A. In order to obtain a correct type, y has to be `lifted' to a list. Then the type would be List A -> List A -> List A, which has the correct form. In order to lift y to a list, the following two substitutions are carried out on the update expression:

1. Each occurrence of [y] is substituted by y.

2. Each occurrence of [y:expr] is substituted by y ++ expr.

If some occurrence of y is not covered by one of these two cases, the algorithm fails to recognize the list-list-accumulative form. In the example the substitutions lead to the following update function, which indeed has type List A -> List A -> List A:

  update function (x, y) := y ++ x

In elt-list-accumulative form the result expression of the second pattern of the accumulative function could be expressed by Acc (Update x y) z. For a function in list-list-accumulative form this becomes Acc (Update x [y]) z.

2.3.2 Semantical check

The semantical conditions for list-list-accumulative functions are exactly the same as for elt-list-accumulative functions. In the example, the following goals have to be proven and are indeed proven by the system:

  [a:set] [x:list a] [] ++ x = x                                    Auto  TRUE
  [a:set] [x:list a] x ++ [] = x                                    Auto  TRUE
  [a:set] [x:list a] [y:list a] [z:list a] (x++y)++z = x++(y++z)    Auto  TRUE

2.3.3 Constructed lemma

Take again a function F, with neutral element e, accumulative function Acc and update function ⊕. Then the lemma constructed by the algorithm is:

  Acc x y → x ⊕ Acc e y

The lemma is constructed in the same way as with elt-list-accumulative functions. The proof of its correctness can be copied, with the exception that in unfolding the second pattern of Acc now [y] has to be written instead of y. This has no impact whatsoever on the proof. In the example, the following lemma is created by the algorithm:

  AccReverse x y → (AccReverse [] y) ++ x
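The two running examples of this chapter can be transliterated into Haskell (a sketch in Haskell notation rather than CPS syntax, with Int standing in for Peano), together with the inferred lemmas phrased as Boolean checks:

```haskell
-- Elt-list-accumulative form: Sum with accumulative function accSum,
-- neutral element 0 and update function (+).
sum' :: [Int] -> Int
sum' xs = accSum 0 xs

accSum :: Int -> [Int] -> Int
accSum x []     = x
accSum x (y:ys) = accSum (x + y) ys

-- List-list-accumulative form: Reverse with accumulative function
-- accReverse, neutral element [] and update function (x, y) := y ++ x.
reverse' :: [a] -> [a]
reverse' xs = accReverse [] xs

accReverse :: [a] -> [a] -> [a]
accReverse x []     = x
accReverse x (y:ys) = accReverse (y:x) ys

-- The lemmas constructed by the algorithm:
--   AccSum x y     -> x + AccSum 0 y
--   AccReverse x y -> (AccReverse [] y) ++ x
sumLemma :: Int -> [Int] -> Bool
sumLemma x ys = accSum x ys == x + accSum 0 ys

reverseLemma :: [Int] -> [Int] -> Bool
reverseLemma x ys = accReverse x ys == accReverse [] ys ++ x
```

Both lemmas hold for arbitrary test data, as guaranteed by the semantical conditions: + and ++ are associative, with neutral elements 0 and [] respectively.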


Chapter 3  Proof power

Once the specification has been entered, the proving process can begin. In CPS proofs are built top-down. The starting point of a proof is the initial proposition stating a desired behavior. This proposition is called the initial goal and is in the beginning the current goal. A proof is constructed by repeatedly applying preprogrammed proving actions, which will be called tactics. A tactic transforms the current goal into one or more new goals. The proving continues with the first of these goals (which then becomes the current goal) while the other goals (which are called the latent goals) have to be dealt with at a later point. By the repeated application of tactics the goal is gradually simplified to easier goals. A goal is dismissed when it is equal to TRUE. When all goals have been dismissed the proof of the initial goal is complete.

Additional information is stored for each goal. This information includes, for example, a list of currently known hypotheses. Besides transforming the goal itself (thus the proposition), tactics can also modify the additional information belonging to the goal. In the first section of this chapter a description of the additional information stored is given. All the tactics supplied by CPS are described in the rest of the chapter. It is these tactics that determine the proof power of the system as a whole.

3.1 Information stored with each goal

The additional information which is stored with each goal includes a list of known hypotheses and typing information. For reasons of proof management, the following are also stored: the starting point of the proof (the initial proposition to prove), a list of tactics applied so far, a list of used lemmas, a list of the other goals which still have to be proven after this goal (including the additional information belonging to these goals), the first proposition-, expression- and type-variables which have not yet been used, the name of the goal and the name of the module the goal was read from.
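The administration just listed can be pictured as a record. The following Haskell sketch is purely illustrative: the field names and the placeholder Prop type are assumptions, not CPS's actual data structures.

```haskell
-- Placeholder proposition type; CPS's real propositions are richer.
data Prop = PTrue | PAtom String
  deriving (Eq, Show)

-- Illustrative per-goal administration, mirroring the items listed in
-- the text: hypotheses, typing information, proof-management data,
-- fresh variables, and the goal's name and module of origin.
data Goal = Goal
  { proposition    :: Prop                -- the current goal itself
  , hypotheses     :: [Prop]              -- propositions known to be TRUE
  , typing         :: [(String, String)]  -- free expression-variables and their types
  , initialProp    :: Prop                -- starting point of the proof
  , appliedTactics :: [String]            -- tactics applied so far
  , usedLemmas     :: [String]            -- lemmas used so far
  , latentGoals    :: [Goal]              -- goals still to be proven afterwards
  , freshVars      :: (String, String, String)  -- first unused prop-, expr- and type-variable
  , goalName       :: String              -- name of the goal
  , goalModule     :: String              -- module the goal was read from
  }
```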
A list of hypotheses is maintained to store propositions which during the proving process were determined to be TRUE. These hypotheses can then be used at a later point in the proof. Hypotheses can only be introduced by use of the Introduction(s)-tactic.

Typing information is used to store which expression- and type-variables were introduced at an earlier stage and (in the case of expression-variables) of which types these variables are. The initial goal was a closed proposition, but by the use of the tactics this can change. The typing information is used to keep a record of all variables which are not bound by a quantifier. Using this information the current goal can always be regarded as a closed proposition.

The proof so far of each goal is also stored by CPS. This proof can be inspected by the user, but it can also be written to a module. It is also possible to read a complete proof from a module. In this manner proofs don't have to be constructed more than once. The representation of the proof is simply formed by the initial proposition to prove and the tactics that have been applied so far.

Sometimes more than one proof of a goal can be found, depending on which lemmas are used. To distinguish between these proofs, the names of the lemmas that have been used are stored. In principle, a proof which requires fewer lemmas is to be preferred.

In CPS more than one goal can be specified at the same time. Thus more than one proof session can be active. In the environment therefore a list of goals is maintained. Each goal also has a list of goals itself: the latent goals which will become active later.

In the proving process sometimes fresh variables are needed, for example in the case of applying induction. Therefore it is also necessary to keep track of which variables (proposition-, expression- and type-) have been used already and which are still available.

The name of the goal is stored for later use. It is possible to use completed proofs as lemmas. In order to identify the completed proof in question its name is used. Finally the name of the module the goal was read from is stored. A completed proof can only be written to the same module it was read from.
3.2 Available tactics

A tactic is in principle nothing more than a piece of code which transforms the goal and the information stored with it. Each transformation can be implemented as a tactic. However, one would like the tactics to be correct. To be more precise, the tactics have to be correctness-preserving. This means that the truth of all derived goals must logically imply the truth of the original goal.

The tactics implemented in CPS can be divided into two categories: the basic ones and the composed ones. By use of three composition techniques the composed tactics are built out of the basic ones. All the proving power of CPS has been combined in one single tactic: the Auto-tactic. This tactic is a watered-down version of the composition of all available tactics. It is watered-down for reasons of efficiency, but in such a way that not much proving power is lost. Applying the Auto-tactic boils down to finding a combination of all available tactics which, applied in sequence, make up a proof of the current goal. Checking all combinations of all available (and applicable) tactics, which can be implemented by a backtrack-algorithm, would take too much time. Therefore another approach must be taken.

The solution is that not all combinations of tactics are considered. The tactics are again divided into two categories: ordinary tactics and multi-tactics. While the application of a multi-tactic can be undone, this is not the case for ordinary tactics. Once the decision has been taken to apply an ordinary tactic, it can not be corrected. This decreases the number of combinations of tactics considered. Ordinary tactics are always tried before multi-tactics. Only when it is not possible to apply any of the ordinary tactics is the application of a multi-tactic considered. When the application of a multi-tactic does not lead to a completed proof, the decision to use that multi-tactic is undone and the next multi-tactic is tried. Trying a multi-tactic can thus be seen as a form of backtracking. This is also the only point where backtracking can take place. Encoding a proving action as a multi-tactic is safer than encoding it as an ordinary tactic, but also increases the time needed to automatically construct a proof. Therefore all actions that are considered `safe' (meaning unlikely to spoil the proving process) are encoded as ordinary tactics. Note that an order exists within the ordinary tactics, which means that some ordinary tactics are always tried before others.

When using the prover in an interactive way, it is possible to undo each decision made at an earlier point. Thus the limitation of backtracking to the points where multi-tactics are applied does not hold for the interactive use of the system. It is possible to try more combinations of tactics interactively than can be generated automatically. Therefore more proofs may be found interactively.

Besides the ability to undo there is another difference between ordinary tactics and multi-tactics. Multi-tactics can generate multiple solutions, where ordinary tactics can only generate one. When trying a multi-tactic, each generated solution is checked separately.
A multi-tactic is only rejected when all of its generated solutions have been tried. Thus multi-tactics are used both for the introduction of backtrack-points and for the encoding of brute-force searching.

In the following sections the basic tactics, the basic multi-tactics, the mechanisms to compose tactics and the composed tactics will be described in detail.

3.2.1 Basic tactics

Tasks performed by the basic tactics are collecting function arguments, dismissing finished goals, introducing a variable or hypothesis, splitting the goal or a hypothesis, detecting an equality of applications of two different data-constructors, induction and rewriting. The most complex of these tasks is rewriting. Actually, the rewrite system forms the backbone of the prover. All functions, lemmas and earlier proven goals are modeled as rewrite-rules. Also the management of the logical symbols (AND, ∀, =, ...) and some other small tasks are modeled by the rewrite system. The rewrite system will be described in a separate chapter, chapter 4.

Ordinary tactics have a precondition. Tactics can only be applied if their precondition is satisfied. Once this is the case and the tactic is tried, it is always applied, without the possibility to undo. Therefore the actions performed by these tactics must be safe. This means that they must not prevent the application of other (perhaps crucial) tactics.
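The interplay of ordinary tactics and multi-tactics described above — apply ordinary tactics greedily and never undo them, backtrack only over the solutions generated by multi-tactics — can be modeled in a few lines of Haskell. This is a toy sketch over string goals, not the actual CPS implementation:

```haskell
-- Toy model of the Auto-tactic's search strategy.
type Goal'       = String
type Tactic      = Goal' -> Maybe [Goal']  -- at most one result; committed once applied
type MultiTactic = Goal' -> [[Goal']]      -- several alternative results; backtrack point

auto :: [Tactic] -> [MultiTactic] -> [Goal'] -> Bool
auto _  _   []     = True                         -- all goals dismissed: proof complete
auto ts mts (g:gs)
  | g == "TRUE" = auto ts mts gs                  -- dismiss a finished goal
  | otherwise   = case firstJust [t g | t <- ts] of
      Just gs' -> auto ts mts (gs' ++ gs)         -- ordinary tactic: commit, no undo
      Nothing  -> any (\gs' -> auto ts mts (gs' ++ gs))
                      (concat [mt g | mt <- mts]) -- multi-tactic: try each solution
  where
    firstJust (Just x  : _) = Just x
    firstJust (Nothing : r) = firstJust r
    firstJust []            = Nothing
```

Because ordinary tactics commit, a badly chosen ordinary tactic can make the search fail — which is exactly why only `safe' actions are encoded as ordinary tactics.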