# SJÄLVSTÄNDIGA ARBETEN I MATEMATIK


SJÄLVSTÄNDIGA ARBETEN I MATEMATIK
MATEMATISKA INSTITUTIONEN, STOCKHOLMS UNIVERSITET

Automated Theorem Proving

by Tom Everitt

No 8

MATEMATISKA INSTITUTIONEN, STOCKHOLMS UNIVERSITET, STOCKHOLM


Automated Theorem Proving

Tom Everitt

Independent work in mathematics, 15 higher education credits, first cycle

Supervisor: Rikard Bøgvad

2010


## Abstract

The calculator was a great invention for the mathematician. No longer was it necessary to spend the main part of the time doing tedious but trivial arithmetic computations. A machine could do it both faster and more accurately. A similar revolution might be just around the corner for proof searching, perhaps the most time-consuming part of the modern mathematician's work. In this essay we present the Resolution procedure, an algorithm that finds proofs for statements in propositional and first-order logic. This means that any true statement (expressible in either of these logics) can in principle be proven by a computer. In fact, there are already practically usable implementations available; here we will illustrate the usage of one such program, Prover9, by some examples. Like many other theorem provers, Prover9 is able to prove many non-trivial true statements surprisingly fast.

## Contents

- 1 Introduction
  - 1.1 Theoretical Preliminaries
  - 1.2 About This Essay
- 2 Propositional Logic
  - 2.1 Definitions
  - 2.2 Normal Forms
    - 2.2.1 Negation Normal Form
    - 2.2.2 Dual Clause Normal Form
    - 2.2.3 Clause Normal Form
    - 2.2.4 Normal Form
    - 2.2.5 A Note on Time Complexity
    - 2.2.6 Proof Procedural Properties
  - 2.3 Propositional Resolution
    - 2.3.1 The R Operation
    - 2.3.2 The Procedure
- 3 Predicate Logic
  - 3.1 Definitions
    - 3.1.1 Syntax
    - 3.1.2 Semantics
  - 3.2 Normal Forms
    - 3.2.1 Negation Normal Form
    - 3.2.2 Prenex Form
    - 3.2.3 Skolem Form
    - 3.2.4 Clause Form
    - 3.2.5 Normal Form
  - 3.3 Herbrand's Theorem
    - Herbrand Models
    - Properties of Herbrand Models
    - Herbrand's Theorem
    - The Davis-Putnam Procedure
  - 3.4 Substitutions
    - Combination
  - 3.5 Unification
    - Disagreement Set
    - Unification Algorithm
  - 3.6 Resolution
    - The Resolution Rule
    - The R Operation
    - The Procedure
- 4 Applications
  - Prover9
  - Syllogism
  - Group Theory
  - Achievements

## 1 Introduction

An automated theorem prover is an algorithm that uses a set of logical deduction rules to prove or verify the validity of a given proposition. So if logical deductions are perceived as a formalization of correct, sound steps of reasoning, then an automated theorem prover becomes an algorithm of reasoning. As such, the study of automated theorem proving forms a quite natural extension to the mathematical-philosophical programme of formalizing thought (i.e., logic) initiated by Aristotle around 300 BC.

### 1.1 Theoretical Preliminaries

Sentences of almost any logic can be divided into three groups: the valid ones, the contradictory ones, and the ones in between. The difference is their semantical status. The valid ones are tautologies: they are true no matter how they are interpreted. The contradictory ones are unsatisfiable: they can never be true. The rest are sentences which are sometimes true, but not always.

The set of valid sentences L_valid is almost the same as the set of contradictory sentences L_unsat, the slight difference being the addition of a negation sign: if P is a valid sentence, ¬P will be contradictory, and vice versa.

In propositional logic there are never more than a finite number of interpretations (valuations), making the partitioning of a propositional language L into L_valid, L_unsat and L_rest = L \ (L_valid ∪ L_unsat) theoretically easy. To decide which set a given sentence belongs to, we can simply test all possible valuations. Doing this in practice is less easy, and will be discussed later.

In predicate logic the situation is more complex. The number of interpretations is uncountably infinite, making any "brute force" procedure impossible. We depend entirely on more profound techniques to determine semantical status. Sometimes convincing informal arguments are used, e.g. to motivate axioms. But most often we turn to proofs.
Proofs are sequences of sentences S₁, …, Sₙ, where each sentence Sᵢ is either an axiom (whose validity is originally established by an informal argument), or follows from sentences occurring earlier in the sequence by some deduction rule (i.e. a relation P, Q ⊢ R between formulas P, Q, R, such that whenever P and Q are true under an interpretation I, then so is R). Since the deduction rules are chosen to preserve validity, the validity of Sₙ is proven.

Proofs can be used in two ways. Either, as above, to prove the validity of Sₙ from axioms; these are sometimes called forward proofs. But they can also be used to derive the unsatisfiability of a formula Q, by proving the negation of an axiom from Q (a backward proof, a.k.a. proof by contradiction). The latter technique turns out in practice to be better suited for automatic proof procedures, because no matter what the original formula is, the goal is always the same (the negation of some axiom). In the resolution method, for instance, the goal is the empty clause (which is contradictory). Consequently, no matter the input, it always tries to derive a clause with all literals eliminated.

Completeness theorems for the respective logics ensure that a sentence is valid if and only if a proof is available (given some suitable set of deduction rules). Furthermore, the proofs are only countably many: they are strings of symbols, and it is easy to determine whether a string of symbols is a proper proof or not. Hence proofs provide us with a way to verify (in finite time) that a given sentence P is valid (or contradictory): systematically stepping through an enumeration of all proofs until a proof of P is found is a distinct theoretical possibility (although hardly a practical or efficient one).

Having thus in principle found a way to verify that a sentence belongs to L_valid or L_unsat, we would expect to find a way to verify that a sentence R is in L_rest. After all, this problem seems much easier: we do not need to show anything about all interpretations, we only need to find one interpretation under which R is true, and one under which R is false. Given also a method to verify satisfiability, we would always be able to determine where a given sentence S belongs. We could run the validity and satisfiability procedures simultaneously; one of them would always stop in finite time.

Although it is possible in many situations to find two interpretations that give different truth values to a formula R, it is, surprisingly, not possible to find an algorithm that always succeeds. This is a consequence of a result proved independently and almost simultaneously by Alonzo Church and Alan Turing in the 1930s.

**Theorem 1.1** (Turing's Theorem). *There exists no algorithm A such that A always finishes with the correct answer (in finite time) to the question: is the formula P a valid sentence in first-order predicate logic?*

A somewhat analogous result for propositional logic essentially states that there is no efficient procedure to determine the semantical status of a propositional formula: the satisfiability problem for propositional logic is NP-complete [1].
This makes most scientists believe that no algorithm A exists such that (i) A always answers correctly the question: is the propositional formula P satisfiable? and (ii) A always terminates in polynomial time with respect to the size of the input. (NP-completeness has not yet been proven to imply that no polynomial-time algorithm exists, however.)

**Theorem 1.2** (Cook's Theorem). *The satisfiability problem of propositional logic is NP-complete.*

### 1.2 About This Essay

In this essay, automated theorem proving in propositional and first-order logic will be covered. The main part of the material comes from essentially two sources. The first is Alan Robinson's paper "A Machine-Oriented Logic Based on the Resolution Principle" [5] from 1965. This is the paper where the resolution proof procedure was first presented. The resolution procedure has ever since dominated the area of automated theorem proving in first-order logic (cf. [3, p. ix]). Accordingly, it dominates this essay as well.

The second source is a textbook by Melvin Fitting, First-Order Logic and Automated Theorem Proving [3], which has provided me with the more general picture of automated theorem proving.

I claim no originality for the ideas presented in the subsequent parts of this essay. Although I have always written from my own understanding, the original proof ideas etc. come almost exclusively from the two sources I have mentioned.

On the reader's part, I have assumed a basic knowledge of formal logic, approximately corresponding to a first course in logic. So familiarity with syntax and semantics, proof and validity, and completeness, soundness and compactness is presumed; but the reader should do fine without any prior acquaintance with automated theorem proving. Some important logical concepts will be reviewed, however.

## 2 Propositional Logic

We have already sketched a proof procedure for sentences of propositional logic in the introduction. There we used the simplest method at hand: we tried all possible valuations of the occurring propositional letters. In this section we will develop an alternative method called Propositional Resolution. It is based on the (propositional) resolution rule, a simple variant of the first-order rule that we will use later. Its main virtue is that it forms a natural introduction to the Predicate Resolution procedure, but it also shares some similarities with one of the most efficient proof procedures of propositional logic: the Davis-Putnam procedure for propositional logic [2]. The Davis-Putnam procedure will not, however, be discussed in this essay.

**The Resolution Rule.** In logic we generally motivate our choice of deduction rules by their simplicity and intuitiveness. A good example of this is the Modus Ponens rule:

A, (A → B) ⊢ B

which could hardly be any simpler or more intuitive. In the case of automated theorem proving we are more interested in the speed of the deductions, or, rather, in deduction rules upon which we can build fast proof procedures. It turns out that the resolution rule is suitable:

(A₁ ∨ … ∨ Aₖ ∨ C), (B₁ ∨ … ∨ Bₗ ∨ ¬C) ⊢ (A₁ ∨ … ∨ Aₖ ∨ B₁ ∨ … ∨ Bₗ)

The key point is that C occurs in one disjunction and ¬C in the other; hence we may eliminate C. For any model satisfying the disjunctions on the left must fail to satisfy either C or ¬C. In case it fails to satisfy C, it must satisfy at least one of A₁, …, Aₖ; and in case it fails to satisfy ¬C, it must satisfy at least one of B₁, …, Bₗ. In either case the model satisfies at least one of A₁, …, Aₖ, B₁, …, Bₗ, and thereby the right-hand formula, the resolvent, (A₁ ∨ … ∨ Aₖ ∨ B₁ ∨ … ∨ Bₗ).

**Refutation Procedures.** Based on the resolution rule we can build a procedure that tells whether a given sentence is unsatisfiable or not.
If we can deduce a contradiction, the sentence must be unsatisfiable; if we cannot, it must be satisfiable. This can also be used to determine validity. Say for example that we want to know whether a formula P is valid or not. The trick is then to let the procedure determine the satisfiability of ¬P. If ¬P is satisfiable, then P is not valid; and if ¬P is not satisfiable, then P is valid. A procedure determining the validity of P by refuting ¬P (i.e. proving that ¬P is contradictory) is called a refutation procedure. Most automated theorem provers (including Resolution) are refutation procedures.

Before further investigating the resolution procedure, we need to settle some terminology, as well as have a look at some normal forms.
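To make the resolution rule concrete, here is a small sketch in Python (my own illustration, not part of the essay): a clause is a frozenset of literals, a literal a string such as "A" or "-A".

```python
def complement(lit):
    """Complement of a literal: A <-> -A (double negations never arise)."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def resolvents(p, q):
    """All resolvents of clauses p and q (frozensets of literals)."""
    result = set()
    for lit in p:
        if complement(lit) in q:
            # eliminate the complementary pair, join what remains
            result.add((p - {lit}) | (q - {complement(lit)}))
    return result

p = frozenset({"A", "C"})    # (A v C)
q = frozenset({"B", "-C"})   # (B v -C)
print(resolvents(p, q))      # the single resolvent (A v B)
```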

### 2.1 Definitions

**Language.** Our language consists of propositional letters A, B, C, …, indexed with a natural number if necessary; together with the primary connectives ¬, ∧, ∨ and the parentheses (, ). The syntactic rules for their combination are the standard ones. We call a well-formed string of these symbols a formula or sentence. Other connectives such as →, ↔ are not considered primary; they are defined by (A → B) = ((¬A) ∨ B) and (A ↔ B) = (A → B) ∧ (B → A).

**Literal.** By a literal we mean a formula that is a propositional letter or the negation of a propositional letter. A, ¬C and B₁ are all examples of literals, in contrast to (A ∧ B) and ¬¬B, which are non-literals.

**Complement.** For a propositional letter A, the complement Ā of A is ¬A, and the complement of ¬A is A. So the complement of a literal is also a literal. Two literals form a complementary pair if they are each other's complements. The reason for introducing complements is that they are a convenient way of avoiding double negations: the complement of the complement of A, for example, is simply A, whereas the negation of the negation of A is ¬¬A.

**Valuation.** It is well known that a valuation V of a formula P is determined by a valuation of the propositional letters in P. We will use this by identifying a valuation with a set V of literals, such that for every propositional letter A in P, either A or ¬A is in V, and V contains no complementary pairs. For a propositional letter A in P, we gather that V evaluates A to true if A is in V, and V evaluates A to false if ¬A is in V. It is clear from the definition of valuations that exactly one of these cases must arise. The evaluation of the formula P as a whole is then recursively determined from the valuation of its subformulas.

**Satisfaction.** A valuation satisfies a formula P if it evaluates P to true. And it satisfies a set S of formulas if it evaluates each member of S to true.
A formula P is said to be satisfiable if there exists some valuation under which P is true, and P is said to be valid if all valuations render P true. Two formulas P and Q are equivalent if they are satisfied by exactly the same valuations. And P and Q are considered equisatisfiable if either both are satisfiable, or both are unsatisfiable.

**Generalized conjunctions and disjunctions.** Normal forms based on generalized conjunctions and generalized disjunctions will be an important component of much of the subsequent theory. Therefore we will take the time to define some extra operations and notation for these.

A formula of the form (P₁ ∧ … ∧ Pₙ) is a generalized conjunction, and a formula of the form (P₁ ∨ … ∨ Pₙ) a generalized disjunction, given that they do not contain the same formula more than once (the Pᵢ's must be pairwise distinct). To emphasize that a formula is a generalized conjunction (disjunction), rather than just any conjunction (disjunction), we will use different parentheses: ⟨P₁, …, Pₙ⟩ for generalized conjunctions and [P₁, …, Pₙ] for generalized disjunctions.

We shall also apply the following set conventions to generalized conjunctions and disjunctions:

- The order of the subformulas will be immaterial; e.g. ⟨A, B⟩ will be considered the same as ⟨B, A⟩.
- The set operations ∪, −, ∈ receive the following meaning: for P = ⟨P₁, …, Pₘ⟩ and Q = ⟨Q₁, …, Qₙ⟩:
  - P ∪ Q = ⟨P₁, …, Pₘ, Q₁, …, Qₙ⟩ (duplicates removed).
  - If A = Pᵢ for some i, we have that A ∈ P; otherwise A ∉ P.
  - P − Pᵢ = ⟨P₁, …, Pᵢ₋₁, Pᵢ₊₁, …, Pₘ⟩. If A ∉ P, then P − A = P.

The same applies to generalized disjunctions. The union of a generalized conjunction and a generalized disjunction is undefined. The empty generalized conjunction, denoted ⟨⟩, is always true; and the empty generalized disjunction, denoted [], is always false.

**Clause and Dual Clause.** A clause is simply a generalized disjunction of literals, and a dual clause the conjunctive counterpart. So if A₁, A₂, A₃ are distinct propositional letters, then [A₁, A₂, A₃] is a clause and ⟨A₁, A₂⟩ a dual clause. Clauses are a central component of the normal form we will work with the most: the clause normal form (see below).

### 2.2 Normal Forms

Normal forms fixate the structure of a sentence in some way. This is a very useful feature.
In the case of an automated theorem prover, knowing that a sentence is in some normal form drastically reduces the number of cases one needs to take into account. Generally it is possible to convert any formula P to an equivalent formula Q in some normal form N, by successively rewriting P according to some tautologies. Sometimes, however, the conversion can be performed faster if we only require Q to be equisatisfiable. This is, for example, the case with clause normal form, as we shall see below.

In many cases equisatisfiability is sufficient. To show, for instance, that a formula P is unsatisfiable, it is enough to show that a formula Q that is equisatisfiable with P is contradictory.

Three normal forms for propositional logic will be discussed: negation normal form, clause normal form, and dual clause normal form.

#### 2.2.1 Negation Normal Form

A formula is said to be in negation normal form if all negation signs prefix propositional letters.

**Conversion.** To convert a formula to negation normal form we make use of De Morgan's laws to make negations travel inwards, and the law of double negation to make double negations disappear.

**Example.** The formula ¬((P ∧ Q) ∨ (¬P)) will be converted the following way:

1. ¬((P ∧ Q) ∨ (¬P)) (original formula)
2. (¬(P ∧ Q) ∧ (¬(¬P))) (De Morgan),
3. (¬(P ∧ Q) ∧ P) (double negation),
4. ((¬P ∨ ¬Q) ∧ P) (De Morgan).

More formally, to convert a formula P to negation normal form, do the following. While P is not a literal, depending on the following cases, do:

- If P = ¬¬Q, then replace P with Q, and continue converting Q to negation normal form.
- If P = ¬(Q₁ ∧ Q₂), then replace P with ((¬Q₁) ∨ (¬Q₂)) and convert ¬Q₁ and ¬Q₂ to negation normal form.
- If P = ¬(Q₁ ∨ Q₂), then replace P with ((¬Q₁) ∧ (¬Q₂)) and convert ¬Q₁ and ¬Q₂ to negation normal form.
- If P = (Q₁ ∧ Q₂), then convert Q₁ and Q₂ to negation normal form.
- If P = (Q₁ ∨ Q₂), then convert Q₁ and Q₂ to negation normal form.

When this algorithm has been carried out to the end, our formula will consist of literals combined only by ∧ and ∨ (¬ will only prefix propositional letters). We verify this with structural induction: it is clear that the algorithm applied to an atomic formula will yield an equivalent formula where ¬ only prefixes propositional letters (it will simply do nothing at all). Assume now that it works on the formulas Q₁ and Q₂ and their negations. Then it is clear that it will also work on the formulas ¬¬Q₁, (Q₁ ∧ Q₂), (Q₁ ∨ Q₂), ¬(Q₁ ∧ Q₂) and ¬(Q₁ ∨ Q₂). Hence it will work on any formula.
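The conversion algorithm above can be sketched in a few lines of Python (my own illustration), with formulas represented as nested tuples such as ('not', ('and', 'P', 'Q')):

```python
def nnf(p):
    """Convert a formula to negation normal form.

    Formulas are strings (propositional letters) or tuples:
    ('not', q), ('and', q1, q2), ('or', q1, q2).
    """
    if isinstance(p, str):                      # a letter: already in NNF
        return p
    if p[0] == 'not':
        q = p[1]
        if isinstance(q, str):                  # a literal: done
            return p
        if q[0] == 'not':                       # double negation
            return nnf(q[1])
        if q[0] == 'and':                       # De Morgan
            return ('or', nnf(('not', q[1])), nnf(('not', q[2])))
        if q[0] == 'or':                        # De Morgan
            return ('and', nnf(('not', q[1])), nnf(('not', q[2])))
    return (p[0], nnf(p[1]), nnf(p[2]))         # and/or: recurse into both sides

# The example from the text: not((P and Q) or (not P))
print(nnf(('not', ('or', ('and', 'P', 'Q'), ('not', 'P')))))
# → ('and', ('or', ('not', 'P'), ('not', 'Q')), 'P')
```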

#### 2.2.2 Dual Clause Normal Form

Dual clause normal form (sometimes disjunctive normal form) is quite easy to characterize: a formula in dual clause form is simply a generalized disjunction of dual clauses. So any formula of the form

[⟨L₁₁, …, L₁ₖ₁⟩, …, ⟨Lₙ₁, …, Lₙₖₙ⟩]

is in dual clause form, given that each Lᵢⱼ is a literal.

**Conversion.** A given sentence S has a finite number of satisfying valuations V₁, …, Vₙ. These can easily be determined by trying all possible valuations for S. And from each valuation Vᵢ we can form the generalized conjunction Pᵢ of all members of Vᵢ. Since all members of Vᵢ are literals, Pᵢ will be a dual clause. Now, to get an equivalent sentence S′ in dual clause form, simply let S′ be the generalized disjunction of all the Pᵢ's, i.e. let

S′ = [P₁, …, Pₙ]

S and S′ will then be satisfied by exactly the same valuations, and S′ is in normal form. As an example, consider the sentence R = (A ∨ (B ∧ A)). Two valuations satisfy R: {A, B} and {A, ¬B}. So an equivalent sentence in dual clause form is

R′ = [⟨A, B⟩, ⟨A, ¬B⟩]

#### 2.2.3 Clause Normal Form

There is also a normal form based on the clause: the clause normal form (also known as conjunctive normal form or just clause form). It is just like the dual clause form, except that the generalized disjunctions have switched place with the generalized conjunctions. Hence, any formula of the form

⟨[L₁₁, …, L₁ₖ₁], …, [Lₙ₁, …, Lₙₖₙ]⟩

where each Lᵢⱼ is a literal, is in clause normal form.

**Conversion.** To convert a formula S to an equivalent formula S′ in clause normal form, first let T be ¬S converted to dual clause normal form. Then let S′ be ¬T converted to negation normal form. Then S′ will be equivalent to S, and, due to De Morgan's laws, S′ will be in clause normal form.

**Alternative (more efficient) conversion.** In many situations it is sufficient to find an equisatisfiable formula in clause form. This can be done much faster. To convert, for example, the formula P = ¬(A ∧ (B ∨ C)), we assign new propositional letters α₁, α₂, …
to every subformula of P. To (B ∨ C) we assign α₁, to (A ∧ (B ∨ C)) we assign α₂, and to ¬(A ∧ (B ∨ C)) we assign α₃. Now, we can express the fact that α₁ should be true if and only if at least one of B and C is true, by the clauses

[¬α₁, B, C], [¬B, α₁], [¬C, α₁]

And to express that α₂ should be true if and only if both α₁ and A are true, we use the clauses

[¬A, ¬α₁, α₂], [¬α₂, A], [¬α₂, α₁]

Finally, the clauses

[¬α₃, ¬α₂], [α₂, α₃]

express α₃'s relation to α₂. Now, let Q be the conjunction of all these clauses and [α₃]. If Q is satisfied by some valuation V, then V also satisfies P. And, conversely, if V satisfies P, then there is an extension of V that also satisfies Q. Hence we have found an equisatisfiable formula

Q = ⟨[¬α₁, B, C], [¬B, α₁], [¬C, α₁], [¬A, ¬α₁, α₂], [¬α₂, A], [¬α₂, α₁], [¬α₃, ¬α₂], [α₂, α₃], [α₃]⟩

The example covers virtually every case that can arise. For a general formula we proceed in much the same way. First we assign new propositional letters to each subformula. Then, for each new letter, we express its relation to its subformula by a few clauses. Finally we create a clause stating that the full formula is true, and form the conjunction of them all.

#### 2.2.4 Normal Form

"Normal form" will just be short for clause normal form in the case of propositional logic.

#### 2.2.5 A Note on Time Complexity

**Definition.** An algorithm is considered efficient if the time it requires only grows polynomially with the size of the input. It is considered inefficient if the time grows faster than polynomially, e.g. exponentially. A problem is easy if there exists an efficient algorithm solving it.

Conversion to negation normal form is efficient. Essentially only one operation is applied to each subformula, and the number of subformulas in a formula P only grows linearly with the size of P. The conversions to dual clause and clause form are inefficient, for the number of valuations of P grows exponentially with the number of distinct propositional letters in P (and in the worst case, all propositional letters are distinct). In fact, there could not be an efficient algorithm for conversion to dual clause form.
For if there were an efficient procedure, we could use the fact that the satisfiability problem for dual clause formulas is easy (see section 2.2.6) to get an efficient procedure solving the general satisfiability problem of propositional logic. But due to Cook's theorem 1.2, no such procedure can exist.¹

The alternative conversion procedure for clause form is efficient, however: it grows linearly with the number of subformulas (just as the negation normal form procedure). The price is, of course, equivalence; we only obtain an equisatisfiable formula.

¹ Or rather: no such procedure is likely to exist, if one should be precise; see the remark following Cook's theorem in section 1.1.
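The alternative conversion is essentially what is nowadays called a Tseitin-style encoding. A minimal Python sketch (my own illustration; the formula representation and fresh-letter naming are assumptions, and the fresh letters are numbered in a different order than the α's in the text):

```python
import itertools

counter = itertools.count(1)

def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def encode(p, clauses):
    """Return a literal standing for subformula p, appending its defining clauses.

    p is a string (letter) or a tuple ('not', q), ('and', q1, q2), ('or', q1, q2).
    """
    if isinstance(p, str):
        return p
    a = "a%d" % next(counter)                  # fresh letter for this subformula
    if p[0] == 'not':
        q = encode(p[1], clauses)
        clauses += [[neg(a), neg(q)], [q, a]]                            # a <-> not q
    elif p[0] == 'and':
        q1, q2 = encode(p[1], clauses), encode(p[2], clauses)
        clauses += [[neg(a), q1], [neg(a), q2], [neg(q1), neg(q2), a]]   # a <-> q1 and q2
    else:  # 'or'
        q1, q2 = encode(p[1], clauses), encode(p[2], clauses)
        clauses += [[neg(a), q1, q2], [neg(q1), a], [neg(q2), a]]        # a <-> q1 or q2
    return a

def to_cnf(p):
    """Equisatisfiable clause set for p (the text's alternative conversion)."""
    clauses = []
    top = encode(p, clauses)
    clauses.append([top])                      # assert the full formula
    return clauses

# The text's example P = not(A and (B or C)) yields nine clauses, as in the text.
print(to_cnf(('not', ('and', 'A', ('or', 'B', 'C')))))
```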

#### 2.2.6 Proof Procedural Properties

When one wants to determine the satisfiability of a formula, it is generally advantageous to know that the formula is in some normal form, rather than having no structural information at all. But whether it is in clause or dual clause normal form also has a significant impact on the time it will take to determine satisfiability.

**The Dual Clause Form.** The dual clause form is especially well suited for determining satisfiability. Take a sentence

P = [C₁, …, Cₙ] = [⟨L₁₁, …, L₁ₖ₁⟩, …, ⟨Lₙ₁, …, Lₙₖₙ⟩]

(the Lᵢⱼ's literals and the Cᵢ's dual clauses). P is then satisfiable if and only if at least one dual clause Cᵢ is satisfiable. But it is easy to see that a conjunction Cᵢ of literals is satisfiable if and only if it contains no complementary pair (i.e. B and ¬B for some propositional letter B). That means that P is unsatisfiable if and only if each Cᵢ contains a complementary pair. This can be verified in polynomial time in the number of propositional letters and the size of the formula.

**The Clause Form.** There is no similarly straightforward procedure for formulas in clause form. The problem of determining satisfiability for a formula in clause form is NP-complete; an immediate consequence of Cook's theorem 1.2 and the fact that we can efficiently convert any formula to clause form with maintained satisfiability.

Unfortunately, the clause form is more common than the dual clause form. Not only is it much faster to convert a formula to clause normal form, but clause form also arises naturally from the predicate logic proof procedures we shall discuss later. One can of course convert a clause form formula to a dual clause form formula, but this is not practically useful: the time required for the conversion is as great as the time required to check the satisfiability of a clause form formula directly. This makes it much more interesting to develop proof procedures for clause form formulas than for dual clause ones.
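The polynomial-time check for dual clause form described above amounts to scanning each dual clause for a complementary pair. A sketch in Python (my own illustration, using the same string literals as before):

```python
def neg(lit):
    """Complement of a literal."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def dnf_satisfiable(dual_clauses):
    """A dual clause form formula is satisfiable iff some dual clause
    (here a list of literals) contains no complementary pair."""
    return any(all(neg(lit) not in clause for lit in clause)
               for clause in dual_clauses)

# R' = [<A, B>, <A, -B>] from section 2.2.2: satisfiable
print(dnf_satisfiable([["A", "B"], ["A", "-B"]]))   # → True
# Every dual clause contains a complementary pair: unsatisfiable
print(dnf_satisfiable([["A", "-A"], ["B", "-B"]]))  # → False
```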
Consequently, both the resolution procedure and the Davis-Putnam procedure (mentioned in the introduction of this section) are developed for clause form formulas. The satisfiability problem for formulas in clause form is generally referred to as the clause normal form satisfiability problem (or simply CNF-SAT) in the literature.

### 2.3 Propositional Resolution

We are now ready for the proof procedure of this section. It is essentially a satisfiability checker for normal form formulas, but as we saw in the introduction, we can easily reduce the proof problem of any formula to the satisfiability problem of a normal form formula, by negating the formula and converting it to normal form. Also, the satisfiability problem of a normal form formula will arise naturally by itself in the predicate logic we will come to later.

**Example.** We introduce a main example that will stay with us throughout the section, hopefully making the formal definitions more intelligible.

Assume we want a procedure to tell us whether the normal form formula

⟨[¬A, B], [A, B], [¬B]⟩

is satisfiable or not (we can easily see that it is not, but we want a procedure to tell us that). A natural way to proceed would be to try to prove a contradiction from the formula. In case it implies a contradiction, it must be unsatisfiable; and in case not, we should be able to find a model for the formula.

Our satisfiability checker will be based on the resolution rule (stated at the beginning of this section) as its only rule of deduction. It will be convenient to restate the resolution rule specifically for clauses, using the extra terminology introduced with generalized conjunctions.

**The Propositional Resolution Rule.** Assume P = [A₁, …, Aₘ, C] and Q = [B₁, …, Bₙ, ¬C] are both clauses. Then

R = (P − C) ∪ (Q − ¬C) = [A₁, …, Aₘ, B₁, …, Bₙ]

is the resolvent of (the ordered pair) P and Q with respect to C. Often we will simply talk about a resolvent of P and Q, without specifying the propositional letter C. The resolvent will then not necessarily be P and Q's only resolvent, as they may have resolvents with respect to other letters as well.

Two properties of this rule will be important: (i) any model satisfying P and Q will also satisfy R (this is just to say that the rule is valid, which we argued for when the rule was introduced); (ii) if P and Q are clauses, then any resolvent R of P and Q must also be a clause. This must be so, since R will inevitably be a generalized disjunction of only literals if P and Q contain only literals.

**Definition.** Two sentences S and T are equisatisfiable if each model satisfying S satisfies T and vice versa.

**Proposition 2.1.** *Assume that S is a normal form formula containing clauses P and Q, and that R is a resolvent of P and Q. Then S ∪ ⟨R⟩ is equisatisfiable with S, and S ∪ ⟨R⟩ is in clause normal form.*

*Proof.* First note that any model satisfying all clauses of S ∪ ⟨R⟩ must also satisfy the clauses of S (they form a subset of the clauses of S ∪ ⟨R⟩). Now, assume that a model M satisfies S.
Then M satisfies P and Q, and thereby also their resolvent R. Hence M satisfies all clauses of S ∪ ⟨R⟩. Finally we note that since R is a clause, S ∪ ⟨R⟩ will be in normal form. ∎

**Example.** Using proposition 2.1 we could then go ahead and append every resolvent we can find to ⟨[¬A, B], [A, B], [¬B]⟩. Writing the generalized conjunction as a list, we would get:

1. [¬A, B]
2. [A, B]

3. [¬B]
4. [B]
5. []

where 4 is a resolvent of 1 and 2, and 5 is a resolvent of 3 and 4. Since 5 is the empty clause, and hence unsatisfiable, we have reached the desired contradiction.

The success of the above example might tempt us to formulate a procedure for an arbitrary formula S along the lines of: take a pair of clauses C and D of S; if they have a resolvent B, update S by substituting B for C and D; then apply the same procedure to S again. This would have worked fine in the above example. It would also have had the nice feature of successively reducing the number of clauses, leading to a very efficient proof procedure, had it always worked. Unfortunately we can only append new resolvents, not substitute them for old clauses. Otherwise this might have happened:

1. [¬A, B]
2. [A, B]
3. [¬B]
4. [A]

(4 is a resolvent of 2 and 3). If we had replaced 2 and 3 with 4, we would have been stuck. Hence, we can merely append clauses, not replace them. We express this formally with the R operation, which extends a formula with all available resolvents.

#### 2.3.1 The R Operation

**Definition.** For S in normal form and R a clause, we call the formation of S ∪ ⟨R⟩ from S appending (the clause) R to S. The result of appending to a sentence S each resolvent of every pair of clauses of S will be denoted R(S). In our main example, where

S = ⟨[¬A, B], [A, B], [¬B]⟩

this means appending the clauses [B] (from clauses 1 and 2), [¬A] (from clauses 1 and 3) and [A] (from clauses 2 and 3), which gives

R(S) = ⟨[¬A, B], [A, B], [¬B], [B], [¬A], [A]⟩

To get the empty clause we have to apply the R operation once more: it is a resolvent of clauses 3 and 4 (and of clauses 5 and 6!) of R(S). Applying R n times to a formula S will be denoted Rⁿ(S).

**Proposition 2.2.** *For any normal form formula S and any natural number n, Rⁿ(S) is equisatisfiable with S.*

*Proof.* It follows directly from proposition 2.1 that R(S) is equisatisfiable with S for any normal form formula S. And since R keeps the sentence in normal form (also by proposition 2.1), applying R any number of times will still yield an equisatisfiable sentence. ∎

**Definition.** A formula S to which no new clauses are added by the R operation (i.e. R(S) = S) is said to be R-satisfied.

#### 2.3.2 The Procedure

The propositional resolution procedure can now be defined by the pseudocode of Algorithm 1. For a given sentence S (which should be in normal form), it will keep applying the R operation until either no new clauses are added (in which case S is satisfiable) or the empty clause is added (S is unsatisfiable).

**Algorithm 1** The-Propositional-Resolution-Procedure(S)

    i ← 0
    loop
        i ← i + 1
        if Rⁱ(S) = Rⁱ⁻¹(S) then
            return satisfiable
        else if Rⁱ(S) contains the empty clause then
            return unsatisfiable
        end if
    end loop

If n is the number of laps we go through the loop, one of these cases has to occur before n = 2^(2l) (where l is the number of distinct propositional letters of S). For in each application of R at least one new clause has to be appended (otherwise the first break criterion is met). But only a finite number of clauses can be constructed from the finite number of propositional letters of S. In fact, exactly 2^(2l) different clauses can be constructed: 2l is the number of literals constructable from l letters, and each literal can either be part of, or not be part of, a clause (a clause is completely determined by its literals).

Now, it follows immediately from proposition 2.2 that if the second break criterion is met at some i, i.e. [] ∈ Rⁱ(S), then S cannot be satisfiable. But are all R-satisfied formulas not containing [] satisfiable? The following lemma answers that question in the positive.

**Definition.** A set of literals not containing any complementary pair is called a partial valuation.
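Algorithm 1 can be sketched directly in Python (my own illustration; clauses as frozensets of string literals, a formula as a set of clauses):

```python
def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def R(S):
    """One application of the R operation: S plus all resolvents
    of every pair of clauses of S."""
    new = set(S)
    for p in S:
        for q in S:
            for lit in p:
                if neg(lit) in q:
                    new.add((p - {lit}) | (q - {neg(lit)}))
    return new

def resolution_procedure(S):
    """The propositional resolution procedure (Algorithm 1)."""
    while True:
        next_S = R(S)
        if next_S == S:
            return "satisfiable"
        if frozenset() in next_S:          # the empty clause []
            return "unsatisfiable"
        S = next_S

# The main example: <[-A, B], [A, B], [-B]>
S = {frozenset({"-A", "B"}), frozenset({"A", "B"}), frozenset({"-B"})}
print(resolution_procedure(S))  # → unsatisfiable
```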
A partial valuation V contradicts a clause C if, for each literal L in C, the complement of L is in V. A partial valuation V is a full valuation (or simply a valuation) of a formula S if, for each literal L in S, either L or its complement is in V. The partial valuation is used to successively build up a full valuation.

Lemma 2.3. Any R-satisfied formula not containing [] is satisfiable.

Proof. Assume that S is an R-satisfied formula, [] ∉ S, containing propositional letters A_1, ..., A_n. Then we construct a model for S by successively extending

R-satisfied and that we should be able to find a model for R^2(S). Start with M_0 = ∅, and the order A, B, C of the propositional letters. Since {A} does not contradict any clause, we let M_1 = {A}. Moving on, we find that neither does {A, B} contradict any clause, giving M_2 = {A, B}. But {A, B, C} contradicts for example [¬C] in the R^2(S) column, hence we are compelled to choose M_3 = {A, B, ¬C}. Verifying M_3 against S shows that M_3 indeed satisfies every clause of S, and hence is a model of S.

To sum up, we now know that our satisfiability checker will determine satisfiability of a given sentence in finite time. In fact, we even have an upper bound on the time consumption: 2^{2l} applications of R. Also, if we wanted, we could use the algorithm in the proof of lemma 2.3 to return a satisfying valuation, instead of just "satisfiable", in case the given sentence was satisfiable. The fact that it always solves the problem is the most important aspect, however, and as a grand finale we express this as a theorem:

Theorem 2.4 (Completeness). For any sentence S on normal form, the propositional resolution procedure will, in finite time, return the correct answer to the question whether S is satisfiable or not.
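To make the completed propositional procedure concrete, here is a minimal executable sketch (the encoding is my own choice, not the essay's): a literal is a string with a leading "-" for negation, a clause is a frozenset of literals, and a formula is a set of clauses. R appends every resolvent of every pair of clauses, exactly as in the definition, and the main loop implements Algorithm 1.

```python
def complement(lit):
    """The complement of a literal: A <-> -A."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def R(clauses):
    """One application of the R operation: the input clause set
    extended with every resolvent of every pair of its clauses."""
    result = set(clauses)
    pairs = list(clauses)
    for i in range(len(pairs)):
        for j in range(i + 1, len(pairs)):
            c, d = pairs[i], pairs[j]
            for lit in c:
                if complement(lit) in d:
                    result.add((c - {lit}) | (d - {complement(lit)}))
    return result

def resolution_procedure(s):
    """Algorithm 1: saturate with R until a fixpoint is reached
    (satisfiable) or the empty clause appears (unsatisfiable)."""
    current = set(s)
    while True:
        if frozenset() in current:
            return "unsatisfiable"
        extended = R(current)
        if extended == current:
            return "satisfiable"
        current = extended
```

On the main example S = [¬A ∨ B] ∧ [A ∨ B] ∧ [¬B], encoded as {frozenset({"-A", "B"}), frozenset({"A", "B"}), frozenset({"-B"})}, the procedure reports unsatisfiable after two applications of R, just as in the worked example.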

3 Predicate Logic

In this section we will mainly be focusing on extending the resolution procedure to sentences of predicate logic. The most striking difference between sentences of propositional and predicate logic is that the latter contain variables. In the normal form we will be using, all variables will be universally quantified. This enables us to search for resolvents not only from the formulas as they stand, but also from any instantiation of the universally quantified variables. Consider, for instance, the clauses

[A(y)], [¬A(f(a)) ∨ B(a)]

A(y) (in the first clause) and ¬A(f(a)) (in the second clause) are not complements. Therefore we cannot apply resolution immediately. But the variable y is universally quantified, and can thus be instantiated with any term. Substituting f(a) for y, for example, would yield a complementary pair, and thereby the resolvent [B(a)]. A significant part of the theory developed will address the question of finding the right substitutions, i.e. substitutions yielding new and useful resolvents. Before that, a number of definitions will be postulated; the relevant normal forms will be discussed; and Herbrand's theorem, which is central to the theory of ATP for predicate logic, will be proven.

3.1 Definitions

Syntax

Language. The predicate language extends the propositional language with the following components:

Variables x, y, z, ..., and constants a, b, c, ....

Functions f, g, h, ..., each, more formally, with an integer k determining its arity (i.e. the number of arguments it takes). The function f^k takes k arguments. A function with arity 0 is effectively a constant.

Predicate letters A, B, C, ..., also together with an integer determining their arity when such precision is advantageous. A^k is a predicate letter taking k arguments. A predicate letter of arity 0 is essentially a propositional letter.

All of which may be indexed with a natural number when a greater number of symbols is required.
Finally, the quantifiers ∀, ∃ are also added to the language.

Term. A term is either a variable or a constant, or a function of arity k with terms t_1, ..., t_k as arguments, for example f^k(t_1, ..., t_k), although we will often omit the k and just write f(t_1, ..., t_k). We will often denote terms with t or s.
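Substitution into terms, which the motivating example above (y ↦ f(a)) relies on and which reappears throughout this section, is easy to state mechanically. A small sketch with an encoding of my own choosing (not the essay's): a variable is a string, and a compound term f(t_1, ..., t_k) is a tuple ("f", t_1, ..., t_k), with a constant as a 0-ary tuple like ("a",).

```python
def substitute(term, sub):
    """Apply a substitution (a dict mapping variables to terms)
    to a term, replacing variables everywhere they occur."""
    if isinstance(term, str):            # a variable
        return sub.get(term, term)
    head, *args = term                   # a function symbol applied to arguments
    return (head, *(substitute(a, sub) for a in args))
```

For instance, substituting f(a) for y in the term g(y, x): substitute(("g", "y", "x"), {"y": ("f", ("a",))}) gives ("g", ("f", ("a",)), "x"), i.e. g(f(a), x).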

Atomic Formula. An atomic formula is a predicate letter of arity k together with k terms, for instance A^k(s_1, ..., s_k), or simply A(s_1, ..., s_k).

Formula. A formula is either (i) an atomic formula, or (ii) one or two formulas combined with a propositional connective, or (iii) a quantifier followed by a variable followed by a formula. For example A(t), (P ∨ Q), or ∀xP (where t is a term, A a unary predicate letter, and P and Q formulas).

Literal. A literal is either an atomic formula, or the negation of an atomic formula. Example: A(t) and ¬B(s_1, s_2, s_3) are literals, but not ¬(A(t) ∨ B(s_1, s_2, s_3)). Tightly connected to this is the concept of the complement of a literal L, written L̄. For an atomic formula A, the complement of A is ¬A, and the complement of ¬A is A. L and L̄ are complements; they form a complementary pair.

Closed and Grounded. An expression E (i.e. either a term or a formula) is considered grounded if it contains no variables. And E is closed if all variables in E are bound by some quantifier. For instance, the term f(g(x, y)) is not grounded since it contains the variables x and y. The formula ∀xA(f(x, a)) is closed (since x is bound), but it is not grounded since it contains the variable x. For terms there is no difference between closed and grounded. And for formulas, groundedness implies closedness.

Generalized Conjunction and Disjunction, Clause and Dual Clause. The definitions and notations of these are identical to the propositional case; see the corresponding section for propositional logic.

Semantics

Model. A model M is an ordered pair ⟨D, I⟩ where D is a nonempty set called the domain, and I an interpretation, i.e. a function that maps:

each constant c to an element c^I in D,

each k-ary function symbol f^k to a function (f^k)^I : D^k → D, and

each k-ary predicate letter A^k to a k-ary relation on D (which can be identified with its extension, i.e. a subset (A^k)^I ⊆ D^k).

We shall also let f(t_1, ..., t_n)^I denote f^I(t_1^I, ..., t_n^I), i.e.
the object f^I takes on input (t_1^I, ..., t_n^I), and let A(t_1, ..., t_n)^I be true if and only if (t_1^I, ..., t_n^I) satisfies the relation A^I (is an element in the extension A^I).

Assignment. When free variables are involved, we need to extend the interpretation with an assignment. An assignment in a model M = ⟨D, I⟩ is a function A that maps each free variable x to an element x^A ∈ D.
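A model ⟨D, I⟩ over a finite domain can be queried mechanically along the lines of these definitions. A small sketch (the encodings are my own, not the essay's): terms as tuples ("f", t_1, ..., t_k) with constants as 0-ary tuples, and I as a dict mapping function symbols to Python functions and predicate letters to their extensions.

```python
# A toy model with domain D = {0, 1}.
D = {0, 1}
I = {
    "a": lambda: 0,               # the constant a^I = 0 (a 0-ary function)
    "f": lambda d: (d + 1) % 2,   # a unary function f^I : D -> D
    "A": {(0, 1), (1, 0)},        # the extension A^I as a subset of D x D
}

def eval_term(term, I):
    """t^I: interpret the head symbol and apply it to the
    interpretations of the argument terms."""
    head, *args = term
    return I[head](*(eval_term(a, I) for a in args))

def eval_atom(pred, terms, I):
    """A(t_1, ..., t_k)^I is true iff (t_1^I, ..., t_k^I) is in A^I."""
    return tuple(eval_term(t, I) for t in terms) in I[pred]
```

For example, eval_atom("A", [("a",), ("f", ("a",))], I) checks whether (a^I, f^I(a^I)) = (0, 1) lies in the extension A^I, which it does.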

Satisfiability. A formula P is satisfiable if it is satisfied by some model. For example, ∃xA(x) is satisfiable, since it is satisfied by the model M = ⟨{1, 2, ...}, I⟩, where I assigns the universal relation to A (i.e. A^I = D). A formula is valid if it is satisfied by every model. Two formulas are equivalent if they are satisfied by exactly the same models. Two formulas P and Q are equisatisfiable if: P satisfiable ⇔ Q satisfiable.

3.2 Normal Forms

Ultimately we want to work with formulas of the form

∀x_1 ... ∀x_n [L_11 ∨ ... ∨ L_1k_1] ∧ ... ∧ [L_m1 ∨ ... ∨ L_mk_m]

where each L_ij is a literal that contains no other variables than x_1, ..., x_n. Of great importance is that the clauses contain no quantifiers, and that only universal quantifiers remain. The latter means that we cannot find an equivalent formula on normal form for all formulas, only an equisatisfiable one. That will suffice though, since our method only determines satisfiability anyway. (To have it determine validity, negate the input. See page 6.)

The conversion of a formula P_1 to normal form can be divided into steps. First we convert it to an equivalent formula P_2 on negation normal form. Then we move all quantifiers in P_2 to the front, making it prenex. The result P_3 is then skolemized, in order to get rid of the existential quantifiers. Finally, the result of that, P_4, is converted to a clause form formula P_5. P_5 will then be on the desired form, as we shall see.

3.2.1 Negation Normal Form

A first order formula is on negation normal form when all negation signs prefix atomic formulas. To achieve that we can simply extend the algorithm we used in the propositional case (cf. section 2.2.1) with rules for the quantifiers. The rules, building on tautologies of predicate logic, are: ¬∀xP converts to ∃x¬P, and ¬∃xP converts to ∀x¬P. Apart from these additions, the method is identical to the propositional one.

Example. The formula ¬∃x∀y(A(x) ∧ B(y)) becomes:

1. ¬∃x∀y(A(x) ∧ B(y))
2. ∀x¬∀y(A(x) ∧ B(y))
3. ∀x∃y¬(A(x) ∧ B(y))
4. ∀x∃y(¬A(x) ∨ ¬B(y))
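The two quantifier rules slot straight into the propositional NNF algorithm. A compact sketch (the formula encoding is mine): atoms as ("atom", name), compound formulas as tagged tuples, with implications assumed to be rewritten away beforehand.

```python
def nnf(f):
    """Push negations inward until they prefix atoms only.
    Formulas: ('atom', s), ('not', P), ('and', P, Q), ('or', P, Q),
    ('forall', x, P), ('exists', x, P)."""
    op = f[0]
    if op == "atom":
        return f
    if op == "not":
        g = f[1]
        if g[0] == "atom":
            return f                                        # a literal: done
        if g[0] == "not":
            return nnf(g[1])                                # double negation
        if g[0] == "and":                                   # de Morgan
            return ("or", nnf(("not", g[1])), nnf(("not", g[2])))
        if g[0] == "or":                                    # de Morgan
            return ("and", nnf(("not", g[1])), nnf(("not", g[2])))
        if g[0] == "forall":                                # not-forall x P => exists x not-P
            return ("exists", g[1], nnf(("not", g[2])))
        if g[0] == "exists":                                # not-exists x P => forall x not-P
            return ("forall", g[1], nnf(("not", g[2])))
    if op in ("and", "or"):
        return (op, nnf(f[1]), nnf(f[2]))
    return (op, f[1], nnf(f[2]))                            # quantifier: recurse inside
```

Running it on the example ¬∃x∀y(A(x) ∧ B(y)) yields ∀x∃y(¬A(x) ∨ ¬B(y)), matching the final step of the conversion above.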

3.2.2 Prenex Form

The next step of the conversion is to move all quantifiers to the front. This is rather easy to do on a formula already on negation normal form, since we can simply move out quantifiers due to the equivalences

(∀xP ∘ Q) ≡ ∀x(P ∘ Q), (Q ∘ ∀xP) ≡ ∀x(Q ∘ P),
(∃xP ∘ Q) ≡ ∃x(P ∘ Q), (Q ∘ ∃xP) ≡ ∃x(Q ∘ P),

where ∘ stands for ∧ or ∨; these cover all the types of subformulas we may encounter in a negation normal form formula. The equivalences are however only valid if Q does not contain x. In the event that Q would contain x, we have to do a variable substitution first, as explained in the following example.

Example. Suppose we want to convert ∃x_1(A(x_1) ∧ ∀x_1 B(x_1)) to prenex form. Directly moving out the quantifier would yield the non-equivalent formula ∃x_1∀x_1(A(x_1) ∧ B(x_1)). If we instead change the variable x_1 to another variable x_2 in the subformula ∀x_1 B(x_1), we get the equivalent formula ∃x_1(A(x_1) ∧ ∀x_2 B(x_2)), where we can safely move out the quantifier, yielding ∃x_1∀x_2(A(x_1) ∧ B(x_2)), which is equivalent to the original formula ∃x_1(A(x_1) ∧ ∀x_1 B(x_1)) and on prenex form.

The idea is that whenever we have a quantifier binding x, we have to check that x is not bound by another quantifier within the scope of the first one. If x is bound by an inner quantifier, we change the variable of the inner quantifier (together with all occurrences bound by the inner quantifier) to a new variable y. Substituting a variable this way has no semantical impact at all, so the formula with a substituted variable is always equivalent to the original one.

3.2.3 Skolem Form

The goal is to get rid of all existential quantifiers. By successively exchanging existential quantifiers for new function symbols we get an equisatisfiable formula on skolem form (it will not always be equivalent, see the example below). A formula without existential quantifiers is on skolem form. The process of successively exchanging quantifiers for function symbols is called skolemization.

Lemma 3.1. The formula ∀x_1 ...
x n yp (x 1,..., x n, y) is equisatisfiable with x 1... x n P (x 1,..., x n, f(x 1,..., x n )) (as long as P does not contain the function symbol f). 21

Proof. We will show that if we have a model satisfying ∀x_1 ... ∀x_n ∃y P(x_1, ..., x_n, y), we can transform it into a model satisfying ∀x_1 ... ∀x_n P(x_1, ..., x_n, f(x_1, ..., x_n)), and vice versa.

Assume that ∀x_1 ... ∀x_n ∃y P(x_1, ..., x_n, y) is satisfied by some model M_1 = ⟨D, I_1⟩. Then ∀x_1 ... ∀x_n P(x_1, ..., x_n, f(x_1, ..., x_n)) is satisfied by some model M_2 = ⟨D, I_2⟩, where I_2 differs from I_1 only in the interpretation of f. By assumption, we can for any n-tuple d_1, ..., d_n ∈ D^n find an e ∈ D such that P^{I_1}(d_1, ..., d_n, e) is satisfied. This means that we can express the choice of e as a function of d_1, ..., d_n: for any choice of d_1, ..., d_n, let e_{d_1,...,d_n} be an element e ∈ D such that P^{I_1}(d_1, ..., d_n, e_{d_1,...,d_n}) is satisfied. Now, let I_2 be as I_1 except for the interpretation of f; in fact, let f^{I_2}(d_1, ..., d_n) = e_{d_1,...,d_n} for every d_1, ..., d_n ∈ D^n. Then P(x_1, ..., x_n, f(x_1, ..., x_n)) must be satisfied by M_2.

The converse is more straightforward. Assume ∀x_1 ... ∀x_n P(x_1, ..., x_n, f(x_1, ..., x_n)) is satisfied by M_2 = ⟨D, I_2⟩. Then P^{I_2}(d_1, ..., d_n, f^{I_2}(d_1, ..., d_n)) holds for all choices of d_1, ..., d_n. Hence M_2 also satisfies ∀x_1 ... ∀x_n ∃y P(x_1, ..., x_n, y): namely, choose an n-tuple d_1, ..., d_n ∈ D^n; then we can find an e such that P^{I_2}(d_1, ..., d_n, e), namely e = f^{I_2}(d_1, ..., d_n). But the choice of d_1, ..., d_n was arbitrary, hence we can find such an e for every choice of d_1, ..., d_n.

Having proved the lemma, it is now easy to see that any formula P can be converted into an equisatisfiable formula on skolem form: first convert P to negation normal form and prenex form; successively applying the lemma until all existential quantifiers are gone will then render a formula on skolem form.

Example. Skolemizing the formula ∀x(¬∀yA(x, y) ∨ ∃zA(x, z)).
The first step is to rewrite it on negation normal form and prenex form:

∀x∃y∃z(¬A(x, y) ∨ A(x, z))

This steers us clear of the potential difficulty of the second universal quantifier essentially being an existential quantifier (due to the preceding negation sign). The skolemization is then straightforward:

1. ∀x∃y∃z(¬A(x, y) ∨ A(x, z))
2. ∀x∃z(¬A(x, f_1(x)) ∨ A(x, z))
3. ∀x(¬A(x, f_1(x)) ∨ A(x, f_2(x)))

It is easy to see that the last formula is not equivalent to the first one. The formulas are only equisatisfiable.
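Skolemization of a prenex formula is mechanical enough to sketch in a few lines (the encodings are mine, not the essay's): the quantifier prefix as a list of (quantifier, variable) pairs, the quantifier-free matrix as a list of literals (sign, predicate, argument terms), with variables as strings and compound terms as tuples.

```python
def substitute(term, sub):
    """Apply a substitution (a dict: variable -> term) to a term."""
    if isinstance(term, str):                    # a variable
        return sub.get(term, term)
    head, *args = term
    return (head, *(substitute(a, sub) for a in args))

def skolemize(prefix, matrix):
    """Replace each existentially quantified variable by a fresh
    Skolem function of the universal variables preceding it, as in
    lemma 3.1, and drop the existential quantifiers."""
    universals, sub, k = [], {}, 0
    for quant, var in prefix:
        if quant == "forall":
            universals.append(var)
        else:                                    # "exists"
            k += 1
            sub[var] = ("f%d" % k, *universals)  # fresh symbols f1, f2, ...
    new_matrix = [(sign, pred, [substitute(t, sub) for t in args])
                  for sign, pred, args in matrix]
    return universals, new_matrix
```

On the example ∀x∃y∃z(¬A(x, y) ∨ A(x, z)) this returns the prefix ['x'] and the literals ¬A(x, f1(x)) and A(x, f2(x)), i.e. the skolemized formula ∀x(¬A(x, f_1(x)) ∨ A(x, f_2(x))) derived above.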

3.2.4 Clause Form

The clause form (or clause normal form) for predicate logic is just like the clause form for propositional logic, with the addition that quantifiers should be outside the generalized conjunction. The argument that any prenex formula can be converted to clause form is essentially identical to the argument for propositional formulas. Neither do the conversion procedures require any substantial modifications (cf. the corresponding propositional conversion).

Example. ∀x∀y [A(x, y) ∨ B(x)] ∧ [A(x, x)] is a formula on clause normal form.

3.2.5 Normal Form

Since sentences on negation, prenex, clause and skolem normal form will be so frequently used, we will simply say that any such formula is on normal form. It should be clear from the discussion above that we can rewrite any formula P on normal form with maintained satisfiability. We state that in a theorem for future reference.

Theorem 3.2. For any formula P there is an equisatisfiable formula P' on normal form.

Normally we will only consider formulas with all variables bound (i.e. closed formulas), and since the only type of quantifier in a normal form formula is the universal one, we will sometimes adopt the convention of dropping the quantifiers. This is no loss of information; we know that all variables are universally quantified. We will also make frequent use of the equivalence

∀x_1 ... ∀x_m (P_1 ∧ ... ∧ P_n) ≡ (∀x_1 ... ∀x_m P_1) ∧ ... ∧ (∀x_1 ... ∀x_m P_n)

The right form is especially useful when we wish to instantiate one of the clauses of a formula separately, which is a rather common situation when dealing with resolution.

Example. The normal form formula [A ∨ B(x, y)] ∧ [B(z, z)] is understood to mean

∀x∀y∀z [A ∨ B(x, y)] ∧ [B(z, z)]

Or, if more useful,

(∀x∀y∀z [A ∨ B(x, y)]) ∧ (∀x∀y∀z [B(z, z)])

3.3 Herbrand's Theorem

Normally we make a clear distinction between the syntactical and the semantical aspects of our logic. The syntax is about the language as a system of symbols.
Alphabets and grammars, the criteria for a well-formed formula, free variables and substitutions are all syntactical features. To the semantics belong concepts such as interpretations, instantiations and valuations, domains, validity and satisfiability: essentially anything that is related to the meaning of the formulas.


### Fixed-Point Logics and Computation

1 Fixed-Point Logics and Computation Symposium on the Unusual Effectiveness of Logic in Computer Science University of Cambridge 2 Mathematical Logic Mathematical logic seeks to formalise the process of

### it is easy to see that α = a

21. Polynomial rings Let us now turn out attention to determining the prime elements of a polynomial ring, where the coefficient ring is a field. We already know that such a polynomial ring is a UF. Therefore

### The Student-Project Allocation Problem

The Student-Project Allocation Problem David J. Abraham, Robert W. Irving, and David F. Manlove Department of Computing Science, University of Glasgow, Glasgow G12 8QQ, UK Email: {dabraham,rwi,davidm}@dcs.gla.ac.uk.

### SQL Database queries and their equivalence to predicate calculus

SQL Database queries and their equivalence to predicate calculus Russell Impagliazzo, with assistence from Cameron Helm November 3, 2013 1 Warning This lecture goes somewhat beyond Russell s expertise,

### The Union-Find Problem Kruskal s algorithm for finding an MST presented us with a problem in data-structure design. As we looked at each edge,

The Union-Find Problem Kruskal s algorithm for finding an MST presented us with a problem in data-structure design. As we looked at each edge, cheapest first, we had to determine whether its two endpoints

### An Innocent Investigation

An Innocent Investigation D. Joyce, Clark University January 2006 The beginning. Have you ever wondered why every number is either even or odd? I don t mean to ask if you ever wondered whether every number

### MA651 Topology. Lecture 6. Separation Axioms.

MA651 Topology. Lecture 6. Separation Axioms. This text is based on the following books: Fundamental concepts of topology by Peter O Neil Elements of Mathematics: General Topology by Nicolas Bourbaki Counterexamples

### Static Program Transformations for Efficient Software Model Checking

Static Program Transformations for Efficient Software Model Checking Shobha Vasudevan Jacob Abraham The University of Texas at Austin Dependable Systems Large and complex systems Software faults are major

### arxiv:1112.0829v1 [math.pr] 5 Dec 2011

How Not to Win a Million Dollars: A Counterexample to a Conjecture of L. Breiman Thomas P. Hayes arxiv:1112.0829v1 [math.pr] 5 Dec 2011 Abstract Consider a gambling game in which we are allowed to repeatedly

### 5. Factoring by the QF method

5. Factoring by the QF method 5.0 Preliminaries 5.1 The QF view of factorability 5.2 Illustration of the QF view of factorability 5.3 The QF approach to factorization 5.4 Alternative factorization by the

### ! " # The Logic of Descriptions. Logics for Data and Knowledge Representation. Terminology. Overview. Three Basic Features. Some History on DLs

,!0((,.+#\$),%\$(-&.& *,2(-\$)%&2.'3&%!&, Logics for Data and Knowledge Representation Alessandro Agostini agostini@dit.unitn.it University of Trento Fausto Giunchiglia fausto@dit.unitn.it The Logic of Descriptions!\$%&'()*\$#)

### CHECKLIST FOR THE DEGREE PROJECT REPORT

Kerstin Frenckner, kfrenck@csc.kth.se Copyright CSC 25 mars 2009 CHECKLIST FOR THE DEGREE PROJECT REPORT This checklist has been written to help you check that your report matches the demands that are

### One More Decidable Class of Finitely Ground Programs

One More Decidable Class of Finitely Ground Programs Yuliya Lierler and Vladimir Lifschitz Department of Computer Sciences, University of Texas at Austin {yuliya,vl}@cs.utexas.edu Abstract. When a logic

### FIRST YEAR CALCULUS. Chapter 7 CONTINUITY. It is a parabola, and we can draw this parabola without lifting our pencil from the paper.

FIRST YEAR CALCULUS WWLCHENW L c WWWL W L Chen, 1982, 2008. 2006. This chapter originates from material used by the author at Imperial College, University of London, between 1981 and 1990. It It is is

### Certamen 1 de Representación del Conocimiento

Certamen 1 de Representación del Conocimiento Segundo Semestre 2012 Question: 1 2 3 4 5 6 7 8 9 Total Points: 2 2 1 1 / 2 1 / 2 3 1 1 / 2 1 1 / 2 12 Here we show one way to solve each question, but there

### ALMOST COMMON PRIORS 1. INTRODUCTION

ALMOST COMMON PRIORS ZIV HELLMAN ABSTRACT. What happens when priors are not common? We introduce a measure for how far a type space is from having a common prior, which we term prior distance. If a type

### Composing Schema Mappings: Second-Order Dependencies to the Rescue

Composing Schema Mappings: Second-Order Dependencies to the Rescue Ronald Fagin IBM Almaden Research Center fagin@almaden.ibm.com Phokion G. Kolaitis UC Santa Cruz kolaitis@cs.ucsc.edu Wang-Chiew Tan UC

### CODING TRUE ARITHMETIC IN THE MEDVEDEV AND MUCHNIK DEGREES

CODING TRUE ARITHMETIC IN THE MEDVEDEV AND MUCHNIK DEGREES PAUL SHAFER Abstract. We prove that the first-order theory of the Medvedev degrees, the first-order theory of the Muchnik degrees, and the third-order

### Copyright 2012 MECS I.J.Information Technology and Computer Science, 2012, 1, 50-63

I.J. Information Technology and Computer Science, 2012, 1, 50-63 Published Online February 2012 in MECS (http://www.mecs-press.org/) DOI: 10.5815/ijitcs.2012.01.07 Using Logic Programming to Represent

### Research Note. Bi-intuitionistic Boolean Bunched Logic

UCL DEPARTMENT OF COMPUTER SCIENCE Research Note RN/14/06 Bi-intuitionistic Boolean Bunched Logic June, 2014 James Brotherston Dept. of Computer Science University College London Jules Villard Dept. of

### 4. CLASSES OF RINGS 4.1. Classes of Rings class operator A-closed Example 1: product Example 2:

4. CLASSES OF RINGS 4.1. Classes of Rings Normally we associate, with any property, a set of objects that satisfy that property. But problems can arise when we allow sets to be elements of larger sets

### SUBGROUPS OF CYCLIC GROUPS. 1. Introduction In a group G, we denote the (cyclic) group of powers of some g G by

SUBGROUPS OF CYCLIC GROUPS KEITH CONRAD 1. Introduction In a group G, we denote the (cyclic) group of powers of some g G by g = {g k : k Z}. If G = g, then G itself is cyclic, with g as a generator. Examples

### E3: PROBABILITY AND STATISTICS lecture notes

E3: PROBABILITY AND STATISTICS lecture notes 2 Contents 1 PROBABILITY THEORY 7 1.1 Experiments and random events............................ 7 1.2 Certain event. Impossible event............................

### Tiers, Preference Similarity, and the Limits on Stable Partners

Tiers, Preference Similarity, and the Limits on Stable Partners KANDORI, Michihiro, KOJIMA, Fuhito, and YASUDA, Yosuke February 7, 2010 Preliminary and incomplete. Do not circulate. Abstract We consider

### Coverability for Parallel Programs

2015 http://excel.fit.vutbr.cz Coverability for Parallel Programs Lenka Turoňová* Abstract We improve existing method for the automatic verification of systems with parallel running processes. The technique

Lecture Notes in Discrete Mathematics Marcel B. Finan Arkansas Tech University c All Rights Reserved 2 Preface This book is designed for a one semester course in discrete mathematics for sophomore or junior

### Removing Partial Inconsistency in Valuation- Based Systems*

Removing Partial Inconsistency in Valuation- Based Systems* Luis M. de Campos and Serafín Moral Departamento de Ciencias de la Computación e I.A., Universidad de Granada, 18071 Granada, Spain This paper