
SOLVING SYMBOLIC ORDERING CONSTRAINTS

HUBERT COMON
CNRS and Laboratoire de Recherche en Informatique, Bat. 490, Universite de Paris Sud, 91405 ORSAY cedex, France. comon@sun8.lri.fr

ABSTRACT. We show how to solve boolean combinations of inequations s > t in the Herbrand Universe, assuming that > is interpreted as a lexicographic path ordering extending a total precedence. In other words, we prove that the existential fragment of the theory of a lexicographic path ordering which extends a total precedence is decidable.

Keywords: simplification orderings, ordered strategies, term algebras, constraint solving.

1. Introduction

The first order theory of term algebras over a language (or alphabet) with no relational symbol (other than equality) has been shown to be decidable (Refs. 1, 2). See also Refs. 3 and 4. Introducing into the language a binary relational symbol interpreted as the subterm ordering makes the theory undecidable (Ref. 5). Venkataraman also shows in the latter paper that the purely existential fragment of the theory, i.e. the subset of sentences whose prenex form does not contain ∀, is decidable. Venkataraman was concerned with applications in functional programming which fulfill this interpretation of >. We are interested in the purely existential fragment of the theory when > is interpreted as a lexicographic path ordering. Let us briefly consider the motivations for such an interpretation.

An abstract of this paper appeared in Proc. IEEE Logic in Computer Science, Philadelphia, 1990, under the title "Solving inequations in term algebras". This research was partly supported by the Greco de Programmation and partly by the ESPRIT Basic Research Action COMPASS.

Ordered rewriting and unfailing completion techniques have been introduced in Ref. 6 and successfully applied for deciding (or, in general, semi-deciding) word problems in equational theories. The idea is to replace the relation <->_E (the replacement of equals by equals) with the relation ->_E (ordered rewriting), an ordered strategy for the replacement of equals by equals defined as follows: u ->_E v if u <->_E v and u > v, for a given simplification ordering > that is total on ground terms (see e.g. Ref. 7 for the missing definitions). In other words, an equation s = t ∈ E is split into two constrained equations s > t : s = t and t > s : t = s. An equation u = v is then viewed as (the denotation of) an (infinite) term rewriting system {uσ -> vσ | uσ > vσ} ∪ {vσ -> uσ | vσ > uσ}. The word problem is then solved by completion. Adding new equational consequences to E leads to simpler equational proofs than the brute-force replacement of equals by equals (Refs. 8, 9). The equational consequences are computed by superposition of two equations (the so-called critical pairs): if u = v and s = t are two equations in E and if some non-variable subterm of u (at position p) is unifiable with s (with m.g.u. σ), the pair (vσ, u[t]_p σ) is a critical pair provided that there exists a ground substitution θ satisfying uσθ > vσθ ∧ sσθ > tσθ. The existence of the substitution θ corresponds exactly to the problem of deciding whether sσ > tσ ∧ uσ > vσ has a solution. Up to now, this problem was unsolved when > is interpreted as a recursive path ordering.

Other applications in the same vein (ordered strategies) are investigated in Ref. 10. For example, the resolution rule is restricted to ordered resolution in the following way:

  P ∨ Q    ¬P' ∨ R
  ----------------    if σ = mgu(P, P') and ∃θ, Pσθ > Qσθ ∧ Pσθ > Rσθ.
      (Q ∨ R)σ

This is a complete strategy (together with ordered factorization, it is a complete set of inference rules), provided that > is total on ground terms. It requires however that one decides whether Pσ > Qσ ∧ Pσ > Rσ is satisfiable in the Herbrand Universe.
For all such applications, > needs to be interpreted as an ordering on ground terms which enjoys the following properties:

1. It is a total ordering on ground terms. (This is required for the completeness of the strategy.)
2. It is monotonic (in the sense of Ref. 7). This is required for handling equality.
3. It is well founded. This is required if we want to incorporate simplification rules; it is also required for completeness.

There is one well-known ordering on terms which fulfills these three requirements: the lexicographic path ordering, whose definition is recalled below (see also Ref. 7). This is why we consider such an interpretation of > in this paper. The main result of the paper is to show how to decide whether there is indeed a ground substitution θ satisfying uθ >_lpo vθ ∧ sθ >_lpo tθ (where >_lpo is the lexicographic path ordering), therefore solving the above questions. Our result also shows that the ordering ≥ on terms with variables, defined as t ≥ u iff, for all ground substitutions σ, tσ ≥_lpo uσ, is a decidable simplification ordering when

>_lpo is a lexicographic path ordering on ground terms. ≥ generalizes the usual extension of the recursive path ordering to terms with variables. This means that more (sometimes strictly more) terms are comparable w.r.t. ≥ than w.r.t. the recursive path ordering.

Let T be the theory of the term algebra over the relational symbols = and >, where > is interpreted as a lexicographic path ordering. We show the decidability of the purely existential fragment of T. The proof is carried out in three steps. The first step (Sec. 2) consists of the transformation of any quantifier-free formula φ (i.e. all variables are free) into a solved form φ' that has the same set of solutions as φ. In Sec. 3 we reduce the satisfiability of an arbitrary solved form to the satisfiability of some particular problems called simple systems. Roughly, a simple system is a formula which defines a total ordering on the terms occurring in it and which is closed under deduction. This last property means that, if φ is a solved form of a simple system ψ, then φ must be a subformula of ψ. In Sec. 4 we show how to reduce the satisfiability of simple systems to the satisfiability of some particular simple systems called natural simple systems. Finally, we complete the proof showing that the satisfiability in the Herbrand Universe of a natural simple system is equivalent to the satisfiability of a system of linear inequalities over the integers.

2. Inequation Simplification

In the following, F is assumed to be a finite set of function symbols. Each function symbol f is associated with a non-negative integer a(f) called the arity of f. When a(f) = 0, f is called a constant. We assume throughout this paper that F contains at least one constant. X is a set of variable symbols, disjoint from F. Terms (or finite trees) over F and X are defined in the usual way (see e.g. Ref. 9 for missing definitions). The set of all terms is denoted by T(F, X). T(F, ∅) is simply denoted T(F). Its elements are called ground terms. As in Ref.
9, t|_p denotes the subterm of t at position p and t[u]_p denotes the term t in which t|_p has been replaced with u. As usual, t[u]_p is also used to indicate that u is the subterm of t at position p. Moreover, when p is not specified, t[u] means that u is a subterm of t at some position. Substitutions are mappings from X to T(F, X). A substitution σ is confused with the (unique) endomorphism of T(F, X) which extends σ. A ground substitution is a substitution σ such that σ(X) ⊆ T(F).

Definition 1 (Syntax). An equation is an expression s = t where s and t are terms in T(F, X). = is assumed to be commutative (s = t is the same equation as t = s). An inequation is an expression s > t where s, t ∈ T(F, X). An inequational problem is a boolean combination of equations and inequations. We also use the following syntactic abbreviations: we sometimes write s < t in place of t > s; ⊥ is the empty disjunction and ⊤ is the empty conjunction;

s ≥ t (resp. t ≤ s) stands for s > t ∨ s = t; ≡ denotes the syntactic equality on terms (resp. the syntactic equality of inequational problems).

Let >_F be a total ordering on F. The lexicographic path ordering ≥_lpo is defined on T(F, X) by s ≥_lpo t iff s >_lpo t or s ≡ t, and >_lpo is defined in the following way (see also Ref. 7): s ≡ f(s_1, ..., s_n) >_lpo g(t_1, ..., t_m) ≡ t iff one of the following holds:

  • ∃i, s_i ≥_lpo t
  • f >_F g and ∀j, s >_lpo t_j
  • f = g, (s_1, ..., s_n) >_lpo^lex (t_1, ..., t_n) and ∀j, s >_lpo t_j

where (s_1, ..., s_n) >_lpo^lex (t_1, ..., t_n) iff ∃j ≤ n such that ∀i < j, s_i ≡ t_i and s_j >_lpo t_j.

Definition 2 (Semantics). A solution of an equation s = t (resp. an inequation s > t) is a ground substitution σ (i.e. an assignment from X to T(F)) such that sσ ≡ tσ (resp. sσ >_lpo tσ). This definition of a solution is extended to inequational problems in the standard way. If I is an inequational problem, then S(I) denotes the set of its solutions. The decidability problem we address in this paper is the emptiness of S(I).

>_lpo is a total ordering on ground terms (see e.g. Ref. 7). Therefore, ¬(s > t) can be replaced with s < t ∨ s = t, and ¬(s = t) with s > t ∨ t > s, without changing the set of solutions of the formula. This is the reason why we assume in the following (without loss of generality) that inequational problems have the following form:

  ∨_{j ∈ J} (s_1 = t_1 ∧ ... ∧ s_n = t_n ∧ u_1 > v_1 ∧ ... ∧ u_m > v_m).

By convention, if J is empty, then the inequational problem is ⊥ and, if n = m = 0 and J is not empty, then the inequational problem is ⊤. More precisely, let →_N1 (resp. →_N2) be the reduction relation defined on inequational problems by the rules of Fig. 1 (resp. Fig. 2). Both sets of rules define a canonical term rewriting system modulo the commutativity of = and the associativity and commutativity of ∧ and ∨ (see Ref. 9 for definitions). Moreover, the normal form I↓_N1↓_N2 (abbreviated I↓_N) of an inequational formula I has the same set of solutions as I.
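For concreteness, the definition of >_lpo on ground terms can be transcribed directly into code. The following sketch is ours, not the paper's: terms are represented as nested tuples (symbol, arg_1, ..., arg_n) and the total precedence >_F is encoded as an integer ranking.

```python
def lpo_gt(s, t, prec):
    """Ground-term check of s >_lpo t.

    Terms are tuples (symbol, *args); prec maps each symbol of F to an
    integer, a larger integer meaning greater in the precedence >_F.
    """
    f, s_args = s[0], s[1:]
    g, t_args = t[0], t[1:]
    # Case 1: some argument s_i satisfies s_i >=_lpo t.
    if any(si == t or lpo_gt(si, t, prec) for si in s_args):
        return True
    # Case 2: f >_F g and s >_lpo t_j for every j.
    if prec[f] > prec[g]:
        return all(lpo_gt(s, tj, prec) for tj in t_args)
    # Case 3: f = g, s >_lpo t_j for every j, and the argument tuples
    # compare lexicographically.
    if f == g and all(lpo_gt(s, tj, prec) for tj in t_args):
        for si, ti in zip(s_args, t_args):
            if si != ti:
                return lpo_gt(si, ti, prec)
    return False
```

With a precedence such as g >_F f >_F 1 >_F 0 (f binary), the sketch confirms e.g. f(0, 1) >_lpo f(0, 0), and on any finite set of ground terms exactly one of s >_lpo t, t >_lpo s, s ≡ t holds, illustrating totality on ground terms.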
We may therefore restrict our attention to problems irreducible w.r.t. →_N1 ∪ →_N2 and call them disjunctive normal forms.

Definition 3 (Solved Forms). A solved form is either ⊤, ⊥ or a formula^a

  x_1 = t_1 ∧ ... ∧ x_n = t_n ∧ u_1 > v_1 ∧ ... ∧ u_m > v_m

where:

^a Let us emphasize that, with our definition, there is no identity s = s in a disjunctive normal form: s = s →_N ⊤.

¬(s = t) → s > t ∨ t > s
¬(s > t) → t > s ∨ s = t
¬⊤ → ⊥
¬⊥ → ⊤
¬(a ∨ b) → ¬a ∧ ¬b
¬(a ∧ b) → ¬a ∨ ¬b

Fig. 1. Elimination of negation from inequational problems.

s = s → ⊤
s > s → ⊥
s < t → t > s
(a ∨ b) ∧ c → (a ∧ c) ∨ (b ∧ c)
a ∧ a → a
a ∨ a → a
a ∨ ⊤ → ⊤
a ∧ ⊤ → a
a ∨ ⊥ → a
a ∧ ⊥ → ⊥

Fig. 2. Normalization of inequational problems.
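These normalization rules are easy to run mechanically. The following sketch (ours, not the paper's) represents formulas as nested tuples, with ("true",) and ("false",) standing for ⊤ and ⊥, and applies Fig. 1 together with the rules of Fig. 2 other than distributivity:

```python
TRUE, FALSE = ("true",), ("false",)

def negate(phi):
    """Fig. 1: eliminate negation by pushing it onto atoms."""
    op = phi[0]
    if op == "eq":                     # not(s = t) -> s > t or t > s
        return ("or", ("gt", phi[1], phi[2]), ("gt", phi[2], phi[1]))
    if op == "gt":                     # not(s > t) -> t > s or s = t
        return ("or", ("gt", phi[2], phi[1]), ("eq", phi[1], phi[2]))
    if op == "true":                   # not TRUE -> FALSE
        return FALSE
    if op == "false":                  # not FALSE -> TRUE
        return TRUE
    if op == "and":                    # not(a and b) -> not a or not b
        return ("or", negate(phi[1]), negate(phi[2]))
    if op == "or":                     # not(a or b) -> not a and not b
        return ("and", negate(phi[1]), negate(phi[2]))
    raise ValueError(op)

def normalize(phi):
    """Fig. 2 (without the distributivity rule): simplify bottom-up."""
    op = phi[0]
    if op == "not":
        return normalize(negate(normalize(phi[1])))
    if op == "eq":                     # s = s -> TRUE
        return TRUE if phi[1] == phi[2] else phi
    if op == "gt":                     # s > s -> FALSE
        return FALSE if phi[1] == phi[2] else phi
    if op == "lt":                     # s < t -> t > s
        return normalize(("gt", phi[2], phi[1]))
    if op in ("and", "or"):
        a, b = normalize(phi[1]), normalize(phi[2])
        unit = TRUE if op == "and" else FALSE   # a and TRUE -> a, a or FALSE -> a
        zero = FALSE if op == "and" else TRUE   # a and FALSE -> FALSE, a or TRUE -> TRUE
        if a == unit:
            return b
        if b == unit:
            return a
        if zero in (a, b):
            return zero
        if a == b:                      # a and a -> a, a or a -> a
            return a
        return (op, a, b)
    return phi
```

Adding the distributivity rule of Fig. 2 would then produce the disjunctive normal forms used in the sequel.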

• x_1, ..., x_n are variables occurring only once in the formula;
• for each i ∈ {1, ..., m}, u_i or v_i is a variable;
• for each index i ∈ {1, ..., m}, u_i is not a subterm of v_i, nor v_i of u_i.

We now describe a set of rules R for the transformation of inequational formulas. The corresponding reduction relation is written ⇒_R. R is called correct if φ ⇒_R φ' implies that φ and φ' have the same solutions. Given a set of solved forms (as above), R is called complete if any normal form for ⇒_R is a solved form. Each rule in Fig. 3 is followed by a (possibly empty) condition. This defines a class of algorithms by choosing any sequence of reductions that fulfills the conditions. The rules apply to disjunctive normal forms of problems. Therefore, we assume that a normalization (w.r.t. →_N) is performed after each reduction with ⇒_R. (This must be kept in mind when proving termination.)

Proposition 1. The rules given in Fig. 3 are correct, complete and terminating.

Proof. Correctness is a direct consequence of the definition of >_lpo: assume that s ≡ f(s_1, ..., s_n) and t ≡ g(t_1, ..., t_m) are two ground terms. Then s >_lpo t iff one of the following holds:

1. f >_F g and either ∃i, s_i ≥_lpo t or ∀i, s >_lpo t_i. However, if s_i ≥_lpo t for some i, then s >_lpo t_i for every i; therefore, the former case is subsumed. This corresponds to the rule (D_2).
2. g >_F f: then one of the s_i's must be greater than or equal to t. This corresponds to rule (D_3).
3. f = g: then either one of the s_i's is greater than or equal to t, or (s_1, ..., s_n) >_lpo^lex (t_1, ..., t_n). This corresponds to rule (D_4).

The correctness of the other rules is obvious. Completeness is easy to check when the system terminates. Let us prove termination.
Consider the following interpretation functions:

• Φ_1(s_1 = t_1 ∧ ... ∧ s_n = t_n ∧ u_1 > v_1 ∧ ... ∧ u_m > v_m) is the multiset of multisets of natural numbers {{|s_1|, |t_1|}, ..., {|s_n|, |t_n|}, {|u_1|, |v_1|}, ..., {|u_m|, |v_m|}}, where |s| is the number of function symbols and variables occurring in s (also called the size of s). Such multisets are ordered by the usual multiset extensions of orderings (see e.g. Ref. 11).
• Φ_2(s_1 = t_1 ∧ ... ∧ s_n = t_n ∧ u_1 > v_1 ∧ ... ∧ u_m > v_m) is the number of unsolved variables in the system. A variable x is solved in such a system if x is a member of an equation and x occurs only once in the system.
• Φ(∨_{j ∈ J} c_j), where each c_j is a conjunction of equations and inequations, is the multiset of pairs (Φ_2(c_j), Φ_1(c_j)).

Such interpretations are ordered using the multiset extension of the lexicographic ordering on pairs.

Equality Rules

(D_1) f(v_1, ..., v_n) = f(u_1, ..., u_n) ⇒ v_1 = u_1 ∧ ... ∧ v_n = u_n
(C_1) f(v_1, ..., v_n) = g(u_1, ..., u_m) ⇒ ⊥
      If f ≠ g
(R)   x = t ∧ P ⇒ x = t ∧ P{x ↦ t}
      If x is a variable, x ∉ Var(t), P is a conjunction of equations and inequations, x ∈ Var(P) and, if t is a variable, then t ∈ Var(P)
(O_1) s = t[s]_p ⇒ ⊥
      If p ≠ Λ

Inequality Rules

(D_2) f(v_1, ..., v_n) > g(u_1, ..., u_m) ⇒ f(v_1, ..., v_n) > u_1 ∧ ... ∧ f(v_1, ..., v_n) > u_m
      If f >_F g
(D_3) f(v_1, ..., v_n) > g(u_1, ..., u_m) ⇒ v_1 ≥ g(u_1, ..., u_m) ∨ ... ∨ v_n ≥ g(u_1, ..., u_m)
      If g >_F f
(D_4) f(v_1, ..., v_n) > f(u_1, ..., u_n) ⇒
        (v_1 > u_1 ∧ f(v_1, ..., v_n) > u_2 ∧ ... ∧ f(v_1, ..., v_n) > u_n)
      ∨ (v_1 = u_1 ∧ v_2 > u_2 ∧ ... ∧ f(v_1, ..., v_n) > u_n)
      ∨ ...
      ∨ (v_1 = u_1 ∧ v_2 = u_2 ∧ ... ∧ v_n > u_n)
      ∨ v_1 ≥ f(u_1, ..., u_n) ∨ ... ∨ v_n ≥ f(u_1, ..., u_n)
(O_2) t[s]_p > s ⇒ ⊤
      If p ≠ Λ
(O_3) s > t[s] ⇒ ⊥
(T_1) s > t ∧ t > s ⇒ ⊥
(T_2) s = t ∧ s > t ⇒ ⊥

Fig. 3. Transformation rules.

We now prove that Φ is strictly decreasing under the application of any rule to an inequational problem. Assume that I ≡ c ∨ ∨_{j∈J} c_j and c ⇒ ∨_{j∈J'} c'_j. Then we have to prove that, for every j ∈ J', either Φ_2(c'_j) < Φ_2(c), or Φ_2(c'_j) = Φ_2(c) and Φ_1(c'_j) < Φ_1(c). Actually, for every rule and every j, Φ_2(c'_j) ≤ Φ_2(c), because no rule can turn a solved variable into an unsolved one, except if the resulting formula is ⊤ or ⊥. Assume now that Φ_2(c'_j) = Φ_2(c) for some j. Note that this excludes the replacement rule (R), for which the number of unsolved variables is strictly decreasing. It is easy to check the strict decreasingness of Φ_1. Let us show it for e.g. (D_3):

  c ≡ c_0 ∧ f(v_1, ..., v_n) > g(u_1, ..., u_m)
    ⇒ (c_0 ∧ v_1 = g(u_1, ..., u_m)) ∨ ... ∨ (c_0 ∧ v_n = g(u_1, ..., u_m))
      ∨ (c_0 ∧ v_1 > g(u_1, ..., u_m)) ∨ ... ∨ (c_0 ∧ v_n > g(u_1, ..., u_m)).

For each j, c'_j is either some c_0 ∧ v_i = g(u_1, ..., u_m) or some c_0 ∧ v_i > g(u_1, ..., u_m). In both cases,

  Φ_1(c) = {a_1, ..., a_k, {1 + b_1 + ... + b_n, d}},

where, for each i, b_i = |v_i| and d = |g(u_1, ..., u_m)|, and

  Φ_1(c'_j) = {a_1, ..., a_k, {b_i, d}}

for some i. By definition of the multiset ordering, Φ_1(c'_j) < Φ_1(c). This proves that Φ is strictly decreasing under the application of any rule. Since Φ interprets the inequational problems in a well-founded domain, this proves termination. □

Example 1. Assume that F' = F ∪ {h, g} (where h, g ∉ F) with h >_F g >_F f. Then

  h(u_1, u_2) > g(h(u_1, v_2), h(v_1, u_2)) ⇒*_R u_1 > v_1 ∧ u_2 > v_2.

This shows that solving a conjunction of inequations is equivalent to solving a single inequation w.r.t. another set of function symbols.

The reduction relation ⇒_R is not sufficient for deciding the existence of a solution. Indeed, there are irreducible inequational problems that are different from ⊥ but do not have any solution.

Example 2. Let F = {..., s, 0} with ... >_F s >_F 0.
The following problem: s(x) > y ∧ y > x has no solution, since, for every ground term xσ, there is no term between xσ and s(xσ). The above example shows that our rule system is not sufficient. It suggests the use of the following rules:

  v > u                              succ(u) > v
  -------------------------          -------------
  v = succ(u) ∨ v > succ(u)          v = u ∨ u > v

if succ(u) is a term such that, for all ground substitutions σ, there is no term between succ(u)σ and uσ. (We say that succ(u) is the successor of u.) Unfortunately, there are two problems with these rules. First, they do not terminate, because we can derive an infinite sequence v > succ^n(u) (there may be a "gap" between v and u). Therefore, we have to find out in which situations they should be used. Secondly, they are not complete. Indeed, a term u may have some instances that are successor terms and some instances that are not successor terms.

Example 3. Assume F = {g >_F f >_F 1 >_F 0}, where f is a binary function symbol. Then f(x, y) has some instances which are successor terms (f(0, g(0)) is the successor of g(0)) and some instances which are not successor terms: f(1, 0) is a "limit ordinal":

  0 < 1 < f(0, 0) < f(0, 1) < f(0, f(0, 0)) < f(0, f(0, 1)) < ... < f(1, 0) < ...

Even worse, it may happen that all instances of a term v are successor terms but they cannot be written as successors of instances of a single term u.

Example 4. Using the same ordering >_F as in Example 3, f(0, x) is a successor term for every ground term x. But f(0, 0) = succ(1), whereas f(0, t) = succ(t) if t ≥_lpo f(1, 0) (see Sec. 4 for more details). Then, for example, the problem f(0, x) > y ∧ x > f(1, 0) ∧ y > x has no solution.

This shows that we have to study the successor function more deeply in order to derive the right (terminating) rules. This will be achieved in Sec. 4. Before that, let us simplify the problem by reducing the satisfiability of solved forms to the satisfiability of some particular formulas.

3. From Solved Forms to Simple Systems

In this second step, we show how to reduce the satisfiability of inequational problems to the satisfiability of simple systems. The basic idea is to perform at once all possible identifications between terms occurring in I.
Then one need no longer consider equalities: we may assume that all terms occurring in I are distinct and may therefore be totally ordered.

Example 5. Let I ≡ f(y, 0) > x ∧ y > f(0, x). We sketch our method on this simple example. The possible identifications are x = 0, x = y, x = f(y, 0), y = 0, y = f(0, x) and all combinations of them. Consider, for example, the identification x = 0. We get the system

  f(y, 0) > 0 ∧ y > f(0, 0).

Now the terms occurring in the system are assumed to be distinct and hence can be totally ordered. This leads to the systems

  f(y, 0) > y > 0 > f(0, 0),   f(y, 0) > y > f(0, 0) > 0,   ...

The first system can be removed since 0 > f(0, 0) ⇒_R ⊥. The second problem is a simple system.
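The step "consider all total orderings on the distinct terms" in this example can be sketched as a brute-force filter over permutations (our own illustration; terms are plain strings here, and the later pruning of systems reducible to ⊥ by R is not modeled):

```python
from itertools import permutations

def compatible_total_orders(terms, inequations):
    """Enumerate the total orders t_1 > t_2 > ... > t_n on `terms`
    (as sequences, greatest first) compatible with the given strict
    inequations, each a pair (s, t) meaning s > t."""
    orders = []
    for perm in permutations(terms):
        rank = {t: i for i, t in enumerate(perm)}  # smaller rank = greater
        if all(rank[s] < rank[t] for s, t in inequations):
            orders.append(perm)
    return orders

# The system of Example 5 after the identification x = 0:
#   f(y,0) > 0  and  y > f(0,0), over Sub = {f(y,0), y, 0, f(0,0)}.
subterms = ["f(y,0)", "y", "0", "f(0,0)"]
constraints = [("f(y,0)", "0"), ("y", "f(0,0)")]
orders = compatible_total_orders(subterms, constraints)
```

For this instance the filter leaves 6 of the 24 permutations, among them f(y,0) > y > 0 > f(0,0) and f(y,0) > y > f(0,0) > 0, matching the two systems displayed above; the first of these is subsequently discarded because it reduces to ⊥ by R.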

The section is organized as follows: we first define simple systems (3.1), then perform all possible identifications (3.2) and finally consider all total orderings on the subterms which do not lead to a contradiction, using R (3.3).

3.1. Definition of Simple Systems

A system is a conjunction of equations and inequations. It can be considered either as a formula or as a set of equations and inequations. If I is a system, then Sub(I) denotes the set of all (sub)terms of all terms t that occur as a member of an equation or an inequation in I. A simple system is a system I satisfying the following properties:

• There exists a finite set of terms {t_1, ..., t_n} such that

  I ≡ ∧_{1 ≤ i < j ≤ n} t_i > t_j.

• If I ⇒_R ∨_{j∈J} c_j, then some c_j is a subformula of I (i.e. there exists a formula c such that I ≡ c ∧ c_j). This means in particular that J cannot be empty.
• Every subterm of a term t_i is some term t_j.

Such systems will be written in the following way: t_1 > ... > t_n. Note that a simple system, by definition, cannot be reduced to ⊥ using R, because of the second condition above. In particular, a simple system cannot contain any inequation s > t[s]. Moreover, note that the second condition above is equivalent to the following (stronger) property: if I' is a subsystem of I and I' ⇒_R ∨_{j∈J} c_j, then some c_{j_0} is a subsystem of I. In the following we will use this stronger version of the second condition as well.

3.2. Identifications

A variable x that occurs in a system I is solved in I if x occurs only once in I and there is an equation x = t in I. We consider the following rule (identification):

  (Id) I →_Id x = t ∧ I{x ↦ t},

if x ∈ Var(I) \ Var(t), x is not solved in I, t ∈ Sub(I) and t is not a solved variable.

The reduction relation →_Id is terminating because each application of the rule decreases the set of unsolved variables in the system. Therefore, the set G(I) of systems I' (in disjunctive normal form) such that I →*_Id I' is finite. The equational part of a system I is the conjunction of all equations occurring in I. The inequational part IP(I) is the conjunction of all inequations occurring in I. Let

  H(I) = {I' | ∃I'' ∈ G(I), I' = IP(I'')}.

Example 6. Let f >_F 0 and I ≡ f(y, 0) > x ∧ y > f(0, x). Then

  H(I) = { f(y, 0) > x ∧ y > f(0, x),
           f(y, 0) > 0 ∧ y > f(0, 0),
           f(y, 0) > y ∧ y > f(0, y),
           ⊥,
           f(0, 0) > x ∧ 0 > f(0, x),
           f(0, 0) > 0 ∧ 0 > f(0, 0) }

Lemma 1. H(I) is finite.

Proof. This follows from the termination property of →_Id. □

Now, because all possible identifications have been considered, we may assume that all terms occurring in a formula I' ∈ H(I) are distinct:

Lemma 2. I has a solution iff there is an inequational problem I' ∈ H(I) which has a solution σ such that, for all distinct s, t ∈ Sub(I'), sσ ≢ tσ.

Proof. If σ is a solution of some I' ∈ H(I), then, by construction, there is an I'' ≡ x_1 = t_1 ∧ ... ∧ x_n = t_n ∧ I', where x_1, ..., x_n are solved in I'' and I →*_Id I''. Then σ ∪ {x_1 ↦ t_1σ, ..., x_n ↦ t_nσ} is a solution of I.

Conversely, if σ is a solution of I, then we construct G_σ(I) as follows: I_0 ≡ I. If there is a variable x ∈ Var(I_n) and a term t ∈ Sub(I_n) such that xσ ≡ tσ and I_n →_Id x = t ∧ I_n{x ↦ t}, then I_{n+1} ≡ x = t ∧ I_n{x ↦ t}. Otherwise, G_σ(I) = I_n. The above construction terminates because →_Id terminates, and, by definition, IP(G_σ(I)) ∈ H(I). Moreover, by construction, we have the following properties:

• σ is a solution of G_σ(I) (σ is a solution of each I_n).
• If x ∈ Var(IP(G_σ(I))) and t ∈ Sub(IP(G_σ(I))), then xσ ≢ tσ. □

3.3. Considering all Compatible Total Orderings

For every system I, let K(I) be the set of all total orderings ≻ on Sub(I) compatible with the inequations in I, i.e. such that s > t ∈ I ⇒ s ≻ t. Then, if I is a system, let

  S(I) = { ∧_{s ≻ t} s > t | ≻ ∈ K(J), J ∈ H(I) }.

Finally, let D(I) be the set of systems I' in S(I) that cannot be reduced to ⊥ using ⇒_R. The following lemmas show that the satisfiability of an inequational problem I reduces to the satisfiability of some system in D(I), which is a finite set of simple systems.

Lemma 3. A conjunction of inequations I has a solution iff some system in D(I) has a solution.

Proof. If some system in D(I) has a solution σ, then σ is also a solution of some I' ∈ H(I) and, by Lemma 2, I has a solution. Conversely, assume that σ is a solution of I. Then, by Lemma 2, σ is a solution of some I'' ∈ H(I) and σ is injective on Sub(I''). Therefore, σ is a solution of some system I' ∈ S(I): it is sufficient to choose ≻ as follows:

  s ≻ t ⟺ sσ >_lpo tσ.

This indeed defines a total ordering on Sub(I'') because, if sσ ≡ tσ and s ≢ t, there is a variable x ∈ Var(s, t) and a subterm u of s or t such that xσ ≡ uσ, which contradicts the injectivity of σ. Now, I' cannot be reduced to ⊥ (since it has a solution): I' ∈ D(I). □

Lemma 4. D(I) is a finite computable set of simple systems.

Proof. We only have to prove that, if I' ∈ D(I) and I' ⇒_R ∨_{j∈J} c_j, then some c_j is a subformula of I'. Since I' is a conjunction of inequations, only (D_2), (D_3), (D_4), (O_2), (O_3), (T_1), (T_2) may apply. Since I' does not reduce to ⊥ by ⇒_R, (O_3), (T_1), (T_2) cannot apply. Moreover, if the rule replaces an inequation of I' with ⊤,^b then J contains a single element (assume it is 1); c_1 is obviously a subformula of I'. We exclude now this case. This means in particular that only the rules (D_2), (D_3), (D_4) still have to be considered. Assume now that I' ⇒_{D_2,D_3,D_4} ∨_{i∈J} c_i. Let i_0 be an index i_0 ∈ J such that c_{i_0} does not reduce to ⊥ by ⇒_R. Such an index does exist since I' does not reduce to ⊥.
Sub(c_{i_0}) ⊆ Sub(I') because each term occurring in the right-hand side of a rule (D_i) is a subterm of some term occurring in the left-hand side. On the other hand, if s, t ∈ Sub(I') are two distinct terms, then either s > t or t > s occurs in I', by construction. Then, for every equation s = t and every inequation s > t occurring in c_{i_0}, either s > t or t > s occurs in I'. This means that

^b This occurs if the rule (O_2) is used, but also in other situations, for example with the rule (D_3) when t_i ≡ g(u_1, ..., u_m). Indeed, in such a case, the rule produces an identity s = s which is immediately reduced to ⊤ by the normalization of formulas.

1. c_{i_0} contains no equation. (Otherwise, it could be reduced to ⊥ by (T_2).)
2. If s > t ∈ c_{i_0}, then s > t ∈ I'. (Otherwise, c_{i_0} contains both inequations s > t and t > s; then it could be reduced to ⊥ by (T_1).)

This shows that c_{i_0} is a subformula of I'. □

From the previous lemmas, it is sufficient to study simple systems. This is what we are going to do in the next section.

4. Satisfiability of Simple Systems

We show here how to decide the satisfiability of simple systems. Once again, we split the satisfiability problem in three steps. First, we establish some technical results about >_lpo. Then we show how to solve a particular kind of simple system: the natural simple systems. Finally, we reduce the satisfiability of a simple system to the satisfiability of finitely many natural simple systems.

4.1. Successor of a Ground Term

The ordering >_F is assumed to be total. We assume moreover that F contains at least one non-constant function symbol. (If F is a set of constants, then T(F) is finite and it is easy to find all solutions of an inequational problem, trying all possible ground instances of the variables.) Let 0 be the least constant symbol and f be the least non-constant function symbol. C is the set of constant function symbols that are either 0 or smaller (w.r.t. >_F) than any non-constant function symbol. Let C = {c_1 >_F ... >_F c_n = 0}. Some terms play a special role w.r.t. the successor function in T(F) (there is a "discontinuity", as shown in Sec. 2). Let N be the least set of terms solution of the equation N = C ∪ f(0, ..., 0, N). We also denote by N(X) the least solution of

  N(X) = C ∪ X ∪ f(0, ..., 0, N(X)).

Note that any term in N(X) contains at most one occurrence of a variable. succ(t) denotes the successor of t ∈ T(F), defined as follows:

• succ(c_{i+1}) = c_i;
• succ(c_1) = f(0, ..., 0);
• if t ≡ f(0, ..., 0, t') ∈ N, then succ(t) = f(0, ..., 0, succ(t'));
• in all other cases, succ(t) = f(0, ..., 0, t).

Lemma 5.
Let s, t be two ground terms. Then s >_lpo t iff s ≥_lpo succ(t). In particular, there is no term between t and succ(t).
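The definition of succ above is algorithmic, so it may help to see it executed. The following is a minimal sketch for the signature of Example 3 (g >_F f >_F 1 >_F 0, with f binary), so that C = {1, 0}, c_1 = 1 and c_2 = 0. The tuple representation of terms and all function names are ours, not from the paper.

```python
# Sketch of succ (Sec. 4.1) for the signature g >_F f >_F 1 >_F 0.
# Terms are nested tuples: ('0',), ('1',), ('f', t1, t2), ('g', t).

ZERO, ONE = ('0',), ('1',)

def in_N(t):
    """N is the least set with N = C ∪ f(0, ..., 0, N); here C = {1, 0}."""
    if t in (ZERO, ONE):
        return True
    return t[0] == 'f' and t[1] == ZERO and in_N(t[2])

def succ(t):
    if t == ZERO:                          # succ(c_{i+1}) = c_i
        return ONE
    if t == ONE:                           # succ(c_1) = f(0, ..., 0)
        return ('f', ZERO, ZERO)
    if t[0] == 'f' and t[1] == ZERO and in_N(t):
        return ('f', ZERO, succ(t[2]))     # the "continuous" case inside N
    return ('f', ZERO, t)                  # all other cases: jump to t + 1

# The first elements of N, in increasing order:
t, seq = ZERO, [ZERO]
for _ in range(4):
    t = succ(t)
    seq.append(t)
# seq is 0, 1, f(0,0), f(0,1), f(0,f(0,0)), matching Example 7 below.
```

Note that for a term outside N, such as f(1, 0), the last case applies: succ(f(1, 0)) = f(0, f(1, 0)).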

Proof. succ(t) >_lpo t is straightforward. We only have to prove that s >_lpo t implies s ≥_lpo succ(t). We proceed by induction on the depth of t.

If t ∈ C, this is straightforward. If t is a constant and t ∉ C, then s >_lpo t implies that there is a subterm u ≡ g(u_1, ..., u_m) of s such that g ≥_F t. On the other hand, t >_F f since t ∉ C. Therefore, g(u_1, ..., u_m) >_lpo t implies g(u_1, ..., u_m) >_lpo f(0, ..., 0, t) ≡ succ(t). Finally, if g = t, then u is a strict subterm of s and the top symbol of s is greater than f, which shows again that s ≥_lpo f(0, ..., 0, t) ≡ succ(t).

Assume now that the lemma holds up to a certain depth. As above, s >_lpo t ≡ g(t_1, ..., t_n) implies that either there is some subterm u of s such that u >_lpo t, u ≡ h(u_1, ..., u_m), h ≥_F g and no proper subterm of u has this property, or t is a proper subterm of s and we are not in the above case. We now investigate a number of cases.

Case 1. h >_F g and t ∉ N. Then u >_lpo t and u >_lpo 0 imply u >_lpo f(0, ..., 0, t) since h >_F f, by definition of >_lpo. Therefore, u >_lpo succ(t).

Case 2. h >_F g = f and t ∈ N. Either t is a constant and the result is straightforward, or t ≡ f(0, ..., 0, t'). Then u ≥_lpo succ(t') by induction hypothesis. But u cannot equal succ(t'), since the top symbol of u is h and the top symbol of succ(t') is f, by definition of succ. Therefore, u >_lpo succ(t'). Now, h >_F f implies u >_lpo f(0, ..., 0, succ(t')) ≡ succ(t).

Case 3. h = g >_F f. In such a case, t ∉ N, and h >_F f implies u >_lpo f(0, ..., 0, t) = succ(t) by definition of >_lpo.

Case 4. h = g = f, u_i ≢ 0 for some i < n and t ∉ N. By definition of >_lpo, u > t and u_i > 0 for some i < n imply f(u_1, ..., u_n) >_lpo f(0, ..., 0, t).

Case 5. h = g = f, u_i ≢ 0 for some i and t ∈ N. By induction hypothesis, u >_lpo t_n implies u ≥ succ(t_n) and, since u_i ≢ 0, u cannot equal succ(t_n).
Therefore, by definition of >_lpo, u > f(0, ..., 0, succ(t_n)) ≡ succ(t).

Case 6. u ≡ f(0, ..., 0, u_n). Since u_n is not >_lpo t and u >_lpo t, t must equal f(0, ..., 0, t_n) and u_n >_lpo t_n. By induction hypothesis, u_n ≥ succ(t_n). Now, either t ∈ N and obviously u ≥_lpo succ(t), or t ∉ N. In the latter case, t_n ∉ N and succ(t_n) ≡ t. Thus u_n ≥_lpo t, and therefore u ≥_lpo succ(t).

Case 7. t is a proper subterm of s and s ≡ h(s_1, ..., s_n) satisfies g >_F h. Then g ≠ f and h ∉ C (and therefore t ∉ N). Now, either h = f and s ≥_lpo succ(t) follows from s_1 ≥_lpo 0, ..., s_n ≥_lpo 0 and s_i ≥_lpo t for some i, or else h >_F f and the conclusion follows from s >_lpo 0 and s >_lpo t. □
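Lemma 5 is easy to check experimentally on small terms. Below is a minimal sketch, not part of the paper's proof: an implementation of >_lpo for the total precedence of Example 3 (g >_F f >_F 1 >_F 0, g unary and f binary), together with succ, verifying "s >_lpo t iff s ≥_lpo succ(t)" on all ground terms of depth at most 2. The tuple representation and all names are ours.

```python
# Experimental check of Lemma 5 on small ground terms.
from itertools import product

PREC = {'g': 3, 'f': 2, '1': 1, '0': 0}     # g >_F f >_F 1 >_F 0
ZERO, ONE = ('0',), ('1',)

def gt(s, t):
    """s >_lpo t for ground terms (total precedence, lexicographic status)."""
    if any(si == t or gt(si, t) for si in s[1:]):       # subterm case
        return True
    if PREC[s[0]] > PREC[t[0]]:                         # greater head symbol
        return all(gt(s, tj) for tj in t[1:])
    if s[0] == t[0]:                                    # equal heads: lex case
        for si, ti in zip(s[1:], t[1:]):
            if si != ti:
                return gt(si, ti) and all(gt(s, tj) for tj in t[1:])
    return False

def in_N(t):
    return t in (ZERO, ONE) or (t[0] == 'f' and t[1] == ZERO and in_N(t[2]))

def succ(t):
    if t == ZERO: return ONE
    if t == ONE: return ('f', ZERO, ZERO)
    if t[0] == 'f' and t[1] == ZERO and in_N(t):
        return ('f', ZERO, succ(t[2]))
    return ('f', ZERO, t)

# All terms of depth <= 2 over {0, 1, g, f}, then test the equivalence.
terms = [ZERO, ONE]
terms += [('g', t) for t in terms] + [('f', a, b) for a, b in product(terms, repeat=2)]
ok = all(gt(s, t) == (s == succ(t) or gt(s, succ(t)))
         for s in terms for t in terms)
# ok should be True: no term lies strictly between t and succ(t).
```

The check is of course no substitute for the induction above; it only illustrates the statement on a finite fragment of T(F).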

Lemma 5 actually states precisely the relationship between ground terms and ordinal numbers: succ is the successor function.

Example 7. Let us come back to Example 3: g >_F f >_F 1 >_F 0. Listing the terms in increasing order, we find (using classical ordinal notations):

0, 1, 2, 3, 4, ...           : 0, 1, f(0,0), f(0,1), f(0,f(0,0)), ...
ω, ω+1, ω+2, ...             : f(1,0), f(0,f(1,0)), f(0,f(0,f(1,0))), ...
ω·2, ω·2+1, ...              : f(1,1), f(0,f(1,1)), ...
ω^2, ω^2+1, ...              : f(1,f(1,0)), f(0,f(1,f(1,0))), ...
ω^ω, ..., ω^ω^ω, ..., ε_0, ...: f(f(0,0),0), ..., f(f(1,0),0), ..., f(f(f(0,0),0),0), ...

(See Ref. 12 for a description of the order type of recursive path orderings.)

The set of terms that do have a predecessor will be useful. Let ST be the subset of T(F) defined by

ST = {t ∈ T(F) | ∃u ∈ T(F), t ≡ succ(u)}.

Lemma 6. We have the following properties of ST, N:
- ST ∪ {0} = N ∪ {f(0, ..., 0, u) | u ∈ T(F)}.
- ∀t ∈ T(F), succ(t) ≠ f(0, ..., 0, t) implies t ∈ N.
- N is order-isomorphic to the natural numbers.
- Every term in T(F) − N is greater (w.r.t. >_lpo) than every term in N.

Proof. The first property follows from the definition of succ. The second property follows from the definition of succ, by induction on the depth of t. The third property follows from Lemma 5, since N is the set of succ^n(0) for every natural number n. The fourth property is a consequence of the third one. □

Extending the previous notation, ST∃(X) will be the set of terms that have an instance in ST, and ST∀(X) the set of terms all of whose ground instances are in ST:

Lemma 7. ST∃(X) ∪ {0} = N(X) ∪ {f(y_1, ..., y_k, u) | ∀i, y_i ∈ X ∪ {0}, u ∈ T(F, X)} and ST∀(X) ∪ {0} = N ∪ {f(0, ..., 0, u) | u ∈ T(F, X)}.

Proof. Let A be the right-hand side of the first equality in the lemma. If t ∈ A, then replacing all the variables of t by 0 leads to a term in ST ∪ {0}, by Lemma 6.

Therefore, A ⊆ ST∃(X) ∪ {0}. Conversely, replacing any subterm of t ∈ ST ∪ {0} with a variable gives a term in A. If u ∈ T(F, X), then, for every substitution σ, f(0, ..., 0, u)σ ∈ ST by Lemma 6. Conversely, if t ∈ ST∀(X), then every ground instance of t is either in C or has the form f(0, ..., 0, v), by Lemma 5. This means that t is either in N or has the form f(0, ..., 0, u) where u ∈ T(F, X). □

Example 8. In the previous example, we find:

T(F) − ST ⊇ {f(1,0), f(1,1), f(1,f(1,0)), f(f(0,0),0), f(f(1,0),0), g(0)}
T(F, X) − ST∃(X) = {0} ∪ {f(1,t) | t ∈ T(F,X)} ∪ {g(t) | t ∈ T(F,X)} ∪ {f(f(t_1,t_2),t_3) | t_1,t_2,t_3 ∈ T(F,X)} ∪ {f(g(t_1),t_2) | t_1,t_2 ∈ T(F,X)}

4.2. Natural Simple Systems

A natural simple system is a simple system t_1 > ... > t_n such that, for every i, t_i ∈ N(X). The following lemma shows that natural simple systems can easily be solved. It will also be used for reducing simple systems to natural simple systems.

Lemma 8. Let I ≡ t_1 > ... > t_n be a natural simple system. Then I has a solution iff it has a solution σ such that ∀x ∈ Var(I), xσ ∈ N.

Proof. Assume that σ is a minimal solution of I, i.e. there is no solution θ other than σ such that, for every variable x, xθ ≤_lpo xσ (such a σ exists because >_lpo is well-founded). Suppose now that some variable x satisfies xσ ∉ N. We are going to derive a contradiction.

Let I ≡ ... t_k > t_{k+1} ..., where t_kσ ∉ N and t_{k+1}σ ∈ N. (It is always possible to assume such a situation, possibly by adding t_n > 0 when k = n.) Each x_iσ can be viewed as an ordinal; in order to get concise information, we will use these ordinal notations until the end of the proof. In such a framework, each element of N corresponds to a natural number and, by Lemma 6, f(0, ..., 0, t) corresponds to t + 1 if t ∉ N. Each t_iσ may be written α_i·x_iσ + N_i, where α_i is 0 or 1 and N_i is a natural number. This is so because t_i is supposed to belong to N(X).
t_k must be a variable (otherwise, there would be a variable y occurring in t_k such that yσ ∉ N, which contradicts the fact that all t_jσ, j > k, are in N). Let α be the ordinal t_kσ. Now let X_0 be the set of variables y such that yσ = α + N_y with N_y < ω. We are going to show that the substitution θ defined by xθ = t_{k+1}σ + N_x + 1 if x ∈ X_0, and xθ = xσ otherwise, is again a solution of I such that θ < σ. We investigate the possible cases, showing that θ is a solution of I:

- If t_m > t_j is in I and m > k, then obviously θ is a solution of t_m > t_j.

- If t_m > t_j is in I, m ≤ k and j > k, then t_mθ >_lpo t_{k+1}σ by definition of θ. Therefore, θ is a solution of t_m > t_j.
- If t_m > t_j is in I and t_jσ ≥ α + ω, then x_j ∉ X_0. Thus t_jθ = t_jσ. In the same way, t_mθ = t_mσ. θ is again a solution.
- If t_m > t_j is in I, t_mσ ≥ α + ω and t_jσ < α + ω, then t_mθ = t_mσ > t_jσ ≥ t_jθ, and θ is a solution of t_m > t_j.
- Suppose now that t_m > t_j is in I and that α + ω > t_mσ > t_jσ. This means that x_m, x_j ∈ X_0: x_mσ = α + N_m > x_jσ = α + N_j. Then x_mθ = t_{k+1}σ + 1 + N_m > x_jθ = t_{k+1}σ + 1 + N_j: θ is again a solution of t_m > t_j.

This proves that θ is a solution of I: all possible cases for t_m > t_j have been investigated. On the other hand, θ < σ, which contradicts the minimality hypothesis on σ. □

Corollary 1. The satisfiability of natural simple systems is decidable.

This is an easy consequence of Lemma 8, because we get linear inequations over the integers.

4.3. From Simple Systems to Natural Simple Systems

The rules given in Figs. 4 and 5 transform any simple system into either ⊤, ⊥, or a finite disjunction of simple systems. They preserve the existence of a solution. More precisely, we shall say that a rule (S_i) is correct if, for every simple system I, I ⇝_{S_i} ∨_{j∈J} c_j implies that each c_j is a simple system and I has a solution iff one of the c_j's has a solution. In particular, when J = ∅, i.e. I ⇝ ⊥, I has no solution.

In the definition of the rules of Figs. 4 and 5 we adopt the following conventions:

- I ≡ t_1 > ... > t_n is the simple system on which the rule is applied.
- If I ≡ t_1 > ... > t_n where t_1, ..., t_n are ground terms (we call it a ground system), then I ⇒_R ⊤. In such a case, I is obviously satisfiable. Therefore, we always assume that at least one term t_i is not in T(F).
- x_1 is the greatest variable of I (i.e. the one occurring with the smallest index in t_1, ..., t_n; such a variable exists since, by definition of simple systems, any subterm of a term t_i is itself some t_j).
- The symbols Π_i stand for any (possibly empty) sequence t_k > ... > t_l.
A simple system consisting of an empty sequence of terms or of a single term must be understood as ⊤.
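Corollary 1 above reduces a natural simple system to linear inequations over the naturals: in N(X), every term denotes either a natural number N_i or x_i + N_i. The following is a minimal sketch of that reduction; the pair encoding, the bounded brute-force search (a demo simplification, not the paper's method) and all names are ours.

```python
# Hedged sketch of Corollary 1: a natural simple system as linear
# inequations over the naturals. A term of N(X) is encoded as a pair
# (var, offset): ('x', 2) means x + 2, (None, 3) means the constant 3.
from itertools import product

def satisfiable(system, bound=10):
    """Search for a solution with all variables below `bound`.
    By Lemma 8 solutions may be sought among the naturals; the fixed
    bound is only an illustrative shortcut for small examples."""
    vars_ = sorted({v for v, _ in system if v is not None})
    for vals in product(range(bound), repeat=len(vars_)):
        env = dict(zip(vars_, vals))
        value = lambda t: (env[t[0]] if t[0] else 0) + t[1]
        if all(value(a) > value(b) for a, b in zip(system, system[1:])):
            return True
    return False

# y > x > 0 is satisfiable (e.g. x = 1, y = 2) ...
sat = satisfiable([('y', 0), ('x', 0), (None, 0)])
# ... but x + 1 > y > x > 0 is not: no natural lies strictly
# between x and x + 1 (compare Lemma 5).
unsat = satisfiable([('x', 1), ('y', 0), ('x', 0), (None, 0)])
```

A real implementation would of course decide the linear system directly rather than enumerate; the point is only that the ordinal structure of N(X) disappears once Lemma 8 applies.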

(S_1) Π_1 > 0 > Π_2 ⇝ ⊥
      if Π_2 is not empty (0 must be the rightmost term in the system).

(S_2) Π > y ⇝ Π{y ↦ 0} > 0 ∨ Π > y > 0
      if y is a variable and 0 does not occur in Π.

Elimination of the leftmost variable:

(S_3) x_1 > Π_1 ⇝ Π_1

(S_4) Π_1 > t[x_1] > Π_2 > x_1 > Π_3 ⇝ Π_1 > Π_2 > x_1 > Π_3
      if x_1 does not occur in any term of the sequence Π_1.

Elimination of the leftmost variable: the case t ∉ N(X):

(S_5) Π_1 > t > x_1 > Π_2 > 0 ⇝ Π_1 > t > Π_2 > 0
      if t ∉ ST∀(X) and x_1 occurs only once in the problem.

(S_6) Π_1 > t > x_1 > Π_2 ⇝ ⊥
      if t ∈ ST∀(X) − N(X) and x_1 occurs only once in the problem.

Reduction to natural simple systems when t ∈ N(X):

(S_7) Π_1 > c > x_1 > Π_2 ⇝ c > x_1 > Π_2
      if c ∈ C and Π_1 is not empty.

(S_8) Π_1 > t > x_1 > Π_2 > t' > Π_3 ⇝ ⊥
      if t ∈ N(X), t' ∉ N(X) and x_1 ∉ Var(Π_1, t).

Fig. 4. Transformation rules for simple systems

(S_9) Π_1 > f(0, ..., 0, t) > x_1 > Π_2 > 0 ⇝ f(0, ..., 0, t) > x_1 > Π_2 > 0
      if every term in the sequence Π_2 belongs to N(X), Π_1 is not empty, x_1 occurs only once in the problem, and t ∈ N(X).

Fig. 5. Transformation rules for simple systems (continued)

The rules S_1, S_2 force the last term t_n to be 0. The rule S_3 removes the largest variable x_1 if there is no upper-bound constraint on it. S_4 removes all occurrences of x_1 but one. S_5 and S_6 investigate the cases where the term t just before x_1 is not in N(X): either the problem has no solution, because x_1 would have to lie between a term and its predecessor (S_6), or x_1 can be removed because t cannot be a successor term (S_5). Finally, the rules S_7, S_8, S_9 assume t ∈ N(X) and remove from the system all terms that do not belong to N(X).

The following lemmas show the correctness of all rules (S_i).

Lemma 9. (S_1) is correct.

Proof. This is a consequence of the definition of 0. □

Lemma 10. (S_2) is correct.

Proof. Π > y > 0 is a simple system: every subterm of some term t_i is some term t_j, because this property holds for Π > y. On the other hand, if Π > y > 0 ⇒_R ∨_{j∈J} c_j, either the rule which is applied does not involve 0, in which case, for some j, c_j ≡ c'_j > 0 where c'_j is a subformula of Π > y, or else the rule involves 0. In the latter case, J contains one element (call it 1) and c_1 ≡ Π > y, because 0 is the least term.

Π{y ↦ 0} > 0 is a simple system: if s[t] is a term occurring in Π{y ↦ 0}, then, replacing all occurrences of 0 with y, the corresponding term s'[t'] occurs in Π. Π > y is a simple system; therefore, t' is one of the t_i's. Then t ≡ t'{y ↦ 0} is a member of the sequence Π{y ↦ 0} > 0. In the same way, if Π{y ↦ 0} > 0 ⇒_R ∨_{j∈J} c_j, then either the transformation involves 0, or it can be lifted to a transformation Π > y ⇒_R ∨_{j∈J} c'_j where c_j ≡ c'_j{y ↦ 0} for every j. In both cases, there is a c_j which is a subformula of Π{y ↦ 0} > 0.

Finally, σ ∪ {y ↦ t} is a solution of Π > y iff either t ≡ 0 and σ is a solution of Π{y ↦ 0} > 0, or else t > 0 and σ ∪ {y ↦ t} is a solution of Π > y > 0. □

Lemma 11. (S_3) is correct.

Proof. Π_1 is obviously a simple system, because x_1 does not occur in it. On the other hand, if σ is a solution of the left-hand side, it is obviously a solution of the right-hand side. Now, if σ is a solution of the right-hand side, let t_2 be the leftmost term of Π_1 and let u ≡ succ(t_2σ). Then σ ∪ {x_1 ↦ u} is a solution of the left-hand side. □

Lemma 12. (S_4) is correct.

Proof. First, the right-hand side is a simple system because it is a subformula of the left-hand side: since x_1 does not occur in Π_1, there is no term in Π_1 having t[x_1] as a subterm. Now, we have to prove that any solution of the right-hand side is also a solution of the left-hand side. Assume that I ⇝_{S_4} I'. Let ∨_{j∈J} c_j be a solved form of t_i > t[x_1] (resp. t[x_1] > t_i) and c_j0 a subsystem of I. Every inequation in c_j0 has one of the following forms:

- x_i > u: in this case x_i > u is a subformula of I', since x_1 is the greatest variable;
- u > x_i, with u different from t[x_1]: in this case u > x_i is again a subformula of I';
- t[x_1] > x_i with i ≠ 1: in this case, any solution of I' is also a solution of x_1 > x_i and therefore a solution of t[x_1] > x_i.

In all cases, every solution of I' is also a solution of c_j0. Therefore, any solution of I' is a solution of I. The converse inclusion is obvious, hence the rule (S_4) is correct. □

Lemma 13. (S_5) is correct.

Proof. As in the previous lemma, the right-hand side is a simple system, because it is a subformula of the left-hand side and the term which is erased (x_1) does not occur elsewhere in the right-hand side. Let I ⇝_{S_5} I'. It is sufficient to prove that, if I' has a solution σ, then I has a solution. First, if x ∈ Var(t), then xσ >_lpo 0 because x > 0 is a subformula of Π_2 > 0. Then, from Lemmas 6 and 7, tσ ∉ ST. (Actually, t ∈ ST∃(X) implies that either t ∈ C or t ≡ f(y_1, ..., y_k, u) with at least one index i such that y_i ∈ X. t ∉ N and y_iσ >_lpo 0 then imply that tσ ∉ ST.)

Now let Π_2 > 0 ≡ u > Π_3 and θ = σ ∪ {x_1 ↦ succ(uσ)}. θ is a solution of I: x_1θ >_lpo uσ by construction, tθ ≡ tσ >_lpo x_1θ because tσ ∉ ST, and all other inequations are satisfied since they do not contain x_1. □

Lemma 14. (S_6) is correct.

Proof. We only have to prove that the left-hand side has no solution. By Lemma 7 and because of the conditions imposed on (S_6), t ≡ f(0, ..., 0, u) with u ∉ N(X). Then, by definition of succ, for every ground substitution σ, tσ ≡ succ(uσ). On the other hand, u is not a variable and it must occur in Π_2. A solution σ of the left-hand side should therefore satisfy succ(uσ) >_lpo x_1σ >_lpo uσ, which is impossible by Lemma 5. □

Lemma 15. (S_7) is correct.

Proof. c > x_1 > Π_2 is a simple system, because it is a subformula of the left-hand side and Sub(c > x_1 > Π_2) ∩ Π_1 = ∅. (Indeed, a term cannot occur twice in a sequence, and a term cannot occur after one of its subterms.) We only have to prove that every solution σ of the right-hand side is a solution of the left-hand side. For this, let s > t be a subformula of Π_1 > c. Let ∨_{j∈J} c_j be a solved form of s > t and c_j0 be a subformula of Π_1 > c > x_1 > Π_2. Every inequation in c_j0 has one of the following forms:

- x_i > u. In this case, the inequation occurs in c > x_1 > Π_2, because x_1 is the leftmost variable. Therefore, x_iσ >_lpo uσ.
- u > x_i, where u is not a variable. Let u ≡ g(u_1, ..., u_m). Either u occurs in c > x_1 > Π_2 and σ is a solution of u > x_i, or u occurs in Π_1. In this latter case, g >_F c (otherwise, u > c ⇒_R ⊥). Thus, for every σ, uσ >_lpo c. In particular, since σ is a solution of c > x_i, uσ >_lpo x_iσ.

We have proved that, in any case, σ is a solution of s > t. Therefore, σ is a solution of Π_1 > c > x_1 > Π_2. □

Lemma 16. (S_8) is correct.

Proof. First, if t ∈ C, the correctness is obvious, since no constant in C is greater than any instance of t' ∉ N(X). Assume now that t ≡ f(0, ..., 0, v). Suppose that σ is a solution of the left-hand side I. Then x_1σ >_lpo t'σ ∉ N. By Lemma 6, a term in N cannot be larger than a term in T(F) − N. Therefore x_1σ ∉ N. In the same way, f(0, ..., 0, v)σ ∉ N and therefore vσ ∉ N. From Lemma 6, f(0, ..., 0, v)σ = succ(vσ). But, as v ≢ x_1, there is an inequation x_1 > v in I.
This means that succ(vσ) > x_1σ > vσ, which contradicts Lemma 5. □

The crux of the proof is the correctness of (S_9). We will use Lemma 8 and the following result:

Lemma 17. In any simple system t_1 > ... > t_n, if t_i ≡ g(u_1, ..., u_p) and t_{i+1} ≡ h(v_1, ..., v_q), then g ≥_F h or t_{i+1} is a subterm of t_i.

Proof. Suppose h >_F g. Then

g(u_1, ..., u_p) > h(v_1, ..., v_q) ⇒_R ∨_{i=1,...,p} u_i ≥ h(v_1, ..., v_q)

and, by definition of simple systems, there is an index i such that either u_i > h(v_1, ..., v_q) occurs in I or u_i ≡ h(v_1, ..., v_q). On the other hand, u_i has to occur somewhere in I, and this must be after g(u_1, ..., u_p): either h(v_1, ..., v_q) > u_i occurs in I or h(v_1, ..., v_q) ≡ u_i. There is only one consistent combination of these two observations: u_i ≡ h(v_1, ..., v_q). □

Lemma 18. (S_9) is correct.

Proof. By the control, every term in Π_2 belongs to N(X). (In particular, t belongs to this set.) Assume I ⇝_{S_9} I'. By Lemma 8, I' has a solution iff it has a solution σ such that every variable x satisfies xσ ∈ N. We are going to show that σ is a solution of I. Let Π_1 > f(0, ..., 0, t) ≡ t_1 > ... > t_k. (For convenience, we include here the case where Π_1 is empty.) We prove this result by induction on k. If k = 1, this is straightforward. Assume now k ≥ 2. Let t_i > t_j be such that i, j ≤ k and let ∨_{j∈J} c_j be one of its solved forms. Some problem c_j0 is a subformula of I. Let u > v be an inequation in c_j0. If u ≢ t_i, then σ is a solution of u > v, because this inequation must either occur in I' or be an inequation t_{i'} > t_{j'} with i' > i. In the latter case, we just use the induction hypothesis.

Assume now that u ≡ t_i and v is a variable. If u ∉ N(X), then, from Lemma 6, uσ >_lpo vσ. Actually, u ∉ N(X) is the only possible case. For, assume u ∈ N(X). From Lemma 17, t_j is either a subterm of t_i (in which case the solved form of t_i > t_j would be ⊤) or t_j ≡ f(u_1, ..., u_n). In the latter case,

f(0, ..., 0, v) > f(u_1, ..., u_n) ⇒ v > f(u_1, ..., u_n) ∨ (u_1 = 0 ∧ ... ∧ u_{n−1} = 0 ∧ v > u_n) ∨ c_1 ∨ ... ∨ c_m.

But in each formula on the right, t_i has been decomposed: it is not possible to find it again in any solved form. This contradicts the assumption u ≡ t_i. This shows that σ is a solution of t_i > t_j when i, j ≤ k. Assume now that i ≤ k − 1 and j > k.
Then t_iσ >_lpo t_kσ and t_kσ >_lpo t_jσ, which proves again t_iσ >_lpo t_jσ. Thus, in any case, σ is a solution of I. □

Lemma 19. The system of rules given in Figs. 4 and 5 terminates. Moreover, any irreducible simple system is either a natural simple system or a ground system.

Proof. The termination is straightforward: the number of terms in a simple system strictly decreases by application of any rule but S_2. On the other hand, S_2 may be applied at most once.

We only have to show that the rules cover all possible cases. Assume that I ≡ t_1 > ... > t_n contains a variable and a term t ∉ N(X). For each rule S_i, assuming I irreducible w.r.t. S_j, j < i, we show below which additional properties I has if it is moreover assumed to be irreducible w.r.t. S_i.

S_1, S_2: I ≡ Π_1 > 0.
S_3, S_4: I ≡ Π_1 > t > x_1 > Π_2 > 0 and x_1 occurs only once.

S_5: t ∈ ST∀(X).
S_6: t ∈ N(X).
S_7: t ∈ N(X) − C, i.e. t ≡ f(0, ..., 0, u) with u ∈ N(X), or else t ∈ C and Π_1 is empty.
S_8: every term in Π_2 belongs to N(X).
S_9: now all conditions for applying S_9 are fulfilled: there is a contradiction. □

Bringing all results of this section together, it is now possible to state:

Theorem 1. The existence of a solution to a simple system is decidable.

Together with the results of Sections 2 and 3 we get:

Theorem 2. The existence of a solution to an inequational problem is decidable.

5. Further Remarks

Our technique for deciding the existence of a solution is actually constructive: it is possible to extract an actual solution from the satisfiability proof. However, our method does not provide a "compact" representation of the set of all solutions. More precisely, assuming that some variables of an inequational problem are existentially quantified, our method does not provide an equivalent quantifier-free formula. That is the reason why the technique we have presented here cannot be lifted in an obvious way to deciding the first-order theory of a lexicographic path ordering.

Our algorithm has a high complexity. This is not surprising, since the problem can actually be shown to be NP-hard.

Theorem 2 and the technique we presented in this paper can be extended to solving inequations over arbitrary ordinal notations (J.-P. Jouannaud and M. Okada, private communication).

The decidability of the first-order theory of the lexicographic path ordering with a total precedence is still an open question. However, it has been shown by R. Treinen that, as soon as two function symbols are uncomparable w.r.t. >_F, the first-order theory of lpo is undecidable.^13 In this latter case, the decidability of the existential fragment of the theory is still open. (Our technique cannot be generalized in a straightforward way.)
As mentioned in the introduction, Theorem 2 can be used for defining an extension >~_lpo of >_lpo to non-ground terms which is more powerful than the usual extension^7 where variables are considered as new (unrelated) function symbols. Let s >~_lpo t iff, for every ground substitution σ, sσ >_lpo tσ. >~_lpo is

decidable, because the unsatisfiability of s ≤ t is decidable (Theorem 2). >~_lpo is a simplification ordering. Of course, it contains the usual extension >_lpo. This inclusion may be strict, as shown by the following two examples.

Example 9. f >_F 1 >_F 0. f(x, 1) and f(0, 0) are uncomparable w.r.t. >_lpo. However, f(x, 1) >~_lpo f(0, 0), because any ground term is greater than or equal to 0.

Example 10. h >_F g >_F s >_F 0. u ≡ g(h(s(x), x), h(y, y)) and t ≡ h(s(x), y) are uncomparable w.r.t. >_lpo. On the other hand, u >~_lpo t. Indeed, u ≤ t ⇒_R y > x ∧ s(x) > y, which has no solution.

Acknowledgments

I thank J.-P. Jouannaud, M. Rusinowitch and R. Treinen for comments and discussions on an early version of this paper. I also thank an anonymous referee for his careful reading of the paper and a number of relevant comments.

References

1. H. Comon and P. Lescanne, "Equational problems and disunification", J. Symbolic Comput. 7 (1989) 371-425.
2. M. J. Maher, "Complete axiomatizations of the algebras of finite, rational and infinite trees", in Proc. 3rd IEEE Symp. Logic in Computer Science, Edinburgh, July 1988, pp. 348-357.
3. A. I. Mal'cev, "Axiomatizable classes of locally free algebras of various types", in The Metamathematics of Algebraic Systems, Collected Papers, 1936-1967 (North-Holland, 1971) pp. 262-289.
4. H. Comon, "Disunification: a survey", in Computational Logic: Essays in Honor of Alan Robinson, eds. J.-L. Lassez and G. Plotkin (MIT Press, 1991) to appear.
5. K. N. Venkataraman, "Decidability of the purely existential fragment of the theory of term algebras", J. ACM 34, 2 (1987) 492-510.
6. J. Hsiang and M. Rusinowitch, "On word problems in equational theories", in Proc. 14th ICALP, Karlsruhe, LNCS 267 (Springer-Verlag, July 1987).
7. N. Dershowitz, "Termination of rewriting", J. Symbolic Comput. 3, 1 (1987) 69-115.
8. L. Bachmair, N. Dershowitz and J. Hsiang, "Orderings for equational proofs", in Proc. 1st IEEE Symp. Logic in Computer Science, Cambridge, Mass., June 1986, pp. 346-357.
9. N. Dershowitz and J.-P. Jouannaud, "Rewrite Systems", in Handbook of Theoretical Computer Science, vol. B, ed. J. van Leeuwen (North-Holland, 1990).