TERMINATION OF REWRITING SYSTEMS BY POLYNOMIAL INTERPRETATIONS AND ITS IMPLEMENTATION Ahlem Ben Cherifa Pierre Lescanne Centre de Recherche en Informatique de Nancy Campus Scientifique BP 239 54506 Vandoeuvre, FRANCE ABSTRACT This paper describes the actual implementation, in the rewrite rule laboratory REVE, of an elementary procedure that checks inequalities between polynomials and is used for proving termination of rewriting systems, especially in the more difficult case of associative-commutative rewriting systems, for which a complete characterization is given. This work was supported by the Greco de Programmation. 1. The origin of the problem. Termination is central in programming and in particular in term rewriting systems, the latter being both a theoretical and a practical basis for functional and logic languages. Indeed the problem is not only a key to ensuring that a program and its procedures eventually produce the expected result; it is also important in concurrent programming, where liveness results rely on the termination of the components. Term rewriting systems are also used for proving equational theorems and are a basic tool for checking specifications of abstract data types. Again, the termination problem is crucial in the implementation of the Knuth-Bendix algorithm, which tests local confluence and needs termination in order to infer confluence. Termination is also necessary to direct equations properly. Until now, methods based on the recursive path ordering were satisfactory [Dershowitz 82, Jouannaud et al. 82], but when we recently ran experiments on the transformation of FP programs [Bellegarde 84], we were faced with a problem that the recursive path ordering could not handle. The problem, motivated by a simple example of code optimization, is just Associativity + Endomorphism. (x1 * x2) * x3 = x1 * (x2 * x3) f(x1 * x2) = f(x1) * f(x2) The variables are functions, * is composition and f is a mapcar-like operator. 
In order to optimize the program, the user wants to decrease the number of uses of f and to orient the equation f(x1 * x2) = f(x1) * f(x2) into the rule f(x1 * x2) --> f(x1) * f(x2), which the recursive path ordering accepts by setting * > f. It orients the associativity equation into (x1 * x2) * x3 --> x1 * (x2 * x3) by setting the status of * to be left-to-right, and then the Knuth-Bendix procedure diverges, generating the rules: f^n(x1 * x2) * x3 --> f^n(x1) * (f^n(x2) * x3) This is not what the user expects, since he or she rather wants to get the terminating system f(x1) * f(x2) --> f(x1 * x2) (x1 * x2) * x3 --> x1 * (x2 * x3) f(x1) * (f(x2) * x3) --> f(x1 * x2) * x3 where the third rule also decreases the number of occurrences of f. Fortunately, thanks to Lankford [Lankford 79], this system can easily be proved terminating by using the polynomial interpretation [f](X) = 2X and [*](X1,X2) = X1 X2 + X1, and this motivates our attempt at implementing a nice and efficient method for mechanically proving termination based on polynomial interpretations. Our purpose is not to extend the method in any way, but strictly to implement what is presented by Huet and Oppen in their survey [Huet and Oppen 80]. The authors say p. 367: ``the proof of inequalities of the form [i(x*y)](X,Y) > [i(y)*i(x)](X,Y) (see Example 1) is not straightforward, since it involves showing for instance (forall x,y in N) x^2 (1+2y)^2 > y^2 (1+2x^2)''. We also characterize the polynomial interpretations for associative-commutative operators, restricting drastically the space of interpretations for such operators. We get, that way, the only safe method for associative-commutative rewriting that is implemented. Finally we extend the method to Cartesian products, providing a technique for proving termination of associative-commutative systems, like the one defining the natural numbers. 
This example is especially interesting since it is important and no other method for proving its termination works [Bachmair and Plaisted 85b, Gnaedig 86]. This paper rather emphasizes the actual implementation in REVE, and the examples that were mechanically proved by REVE. It describes procedures for checking polynomial inequalities, which are both efficient and general. It was said such a method was already proposed by Lankford, but we do not know whether it was actually implemented, and we did not have access to published matter on the subject. The reader will find no new results on the theoretical aspects of the polynomial interpretation method, but rather results dealing with its actual implementation. We feel indeed it is important to propose algorithms, and people who have used our software REVE are usually grateful to notice that the system performs in their place the tedious computations required by the polynomial interpretation method. 2. Interpretation by Functions over the Naturals and Termination Let T(F,{x1,...,xm}) be the set of terms on {x1,...,xm} and N^m --> N the set of m-ary functions on natural numbers. Suppose that for each k-ary function f \in F_k \subseteq F we define an interpretation of f as a polynomial with k variables. The interpretation of f will be written lambda X1...Xk.[f](X1,...,Xk) and often we simply write [f](X1,...,Xk). This interpretation allows us to define on N^m --> N an F-algebra N_m, in the following way: if p1,...,pk \in N^m --> N then f_{N_m}(p1,...,pk)(X1,...,Xm) = [f](p1(X1,...,Xm),...,pk(X1,...,Xm)) If we define [xi](X1,...,Xm) = Xi, then there exists a unique extension to a morphism from T(F,{x1,...,xm}) to the F-algebra N_m imposed on N^m --> N, which we will also write [.]. Example 1: Suppose F_0 = {e}, F_1 = {i} and F_2 = {*} and define [e] = 2, [i](X1) = X1^2, [*](X1,X2) = 2X1 X2 + X1. 
Then [x1 * x2](X1,X2) = 2 X1 X2 + X1 [(x1 * i(x2)) * x2](X1,X2) = 4X1 X2^3 + 2X1 X2^2 + 2X1 X2 + X1 There exists on N^m --> N a natural partial strict ordering which is defined as h < k if and only if (forall a1 in N)...(forall am in N) h(a1,...,am) < k(a1,...,am). This ordering is obviously well-founded, since otherwise an infinite sequence h1 > ... > hn > ... would exist, which by instantiation would produce an infinite sequence h1(a1,...,am) > ... > hn(a1,...,am) > ... On T(F,{x1,...,xm}) we define an ordering by s < t if and only if [s] < [t]. This ordering is well-founded by definition. It is also stable by instantiation, which means that for all substitutions sigma, sigma(s) < sigma(t). Indeed, since [.] is a morphism, [sigma(s)](X1,...,Xm) < [sigma(t)](X1,...,Xm) is equivalent to the following inequality between polynomials: [s]([sigma(x1)],...,[sigma(xm)]) < [t]([sigma(x1)],...,[sigma(xm)]) and this inequality holds if the inequality [s] < [t] holds. The ordering will be said to be compatible if s <_[.] t implies f(...,s,...) <_[.] f(...,t,...). Let us recall now a main termination criterion: Proposition 1: [Manna and Ness 70, Dershowitz 85] A term rewriting system R terminates on a set T(F,{x1,...,xm}) of terms if there exists a well-founded and compatible quasi-ordering such that, for all rules g --> d in R and for all substitutions sigma, sigma(g) > sigma(d). So, any interpretation associated with a compatible ordering can be used to prove termination. 3. Polynomial Interpretation for proving Termination Given a term rewriting system {g1 --> d1,...,gn --> dn}, the problem consists of first guessing a compatible interpretation [.] and then proving that g1 >_[.] d1,...,gn >_[.] dn. In this paper we give no good solution to the first problem since currently nobody, to our knowledge, has good heuristics. Instead we make suggestions in Section 6 and we expect that computer experiments will lead to progress in this direction. So, in this paper, we will essentially focus on proving the inequalities between the interpretations of terms. 
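The morphism [.] is easy to realize mechanically. Below is a minimal sketch (our own encoding: terms are nested tuples, variable values are read from an environment dictionary) that also checks, on sample values, that the term (x1 * i(x2)) * x2, which appears to be the intended term of Example 1, has the interpretation stated above:

```python
# A minimal sketch of the interpretation morphism [.] for Example 1.
# Terms are nested tuples ("op", arg1, ...) or variable names; each
# operator is mapped to its polynomial.  The encoding is ours.
INTERP = {
    "e": lambda: 2,
    "i": lambda x: x ** 2,
    "*": lambda x, y: 2 * x * y + x,
}

def interpret(term, env):
    """Homomorphic extension of INTERP to terms; variables read from env."""
    if isinstance(term, str):
        return env[term]
    op, *args = term
    return INTERP[op](*(interpret(a, env) for a in args))

# Check on sample values >= 2 that [(x1 * i(x2)) * x2](X1,X2)
# equals 4*X1*X2**3 + 2*X1*X2**2 + 2*X1*X2 + X1.
t = ("*", ("*", "x1", ("i", "x2")), "x2")
for X1 in range(2, 6):
    for X2 in range(2, 6):
        lhs = interpret(t, {"x1": X1, "x2": X2})
        rhs = 4*X1*X2**3 + 2*X1*X2**2 + 2*X1*X2 + X1
        assert lhs == rhs
```

The check is a finite numeric confirmation of the algebra, not a symbolic proof.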
As many authors do and as many classical examples demonstrate, we will restrict ourselves to polynomial interpretations, although it is known that there are term rewriting systems that cannot be proved to terminate using this method. However, we feel that our method could be extended to other classical recursive functions, like the exponentials. 3.1. An overview of the problem Notice first that there is no algorithm to decide inequalities between polynomials over N [Lankford 79]. Otherwise this algorithm could be used to solve the tenth Hilbert problem. Indeed, deciding inequalities between polynomials over N is equivalent to deciding inequalities between polynomials over Z (an inequality between polynomials over Z is transformed into several inequalities between polynomials over N, according to the positiveness or negativeness of the variables). So, if we could decide arbitrary inequalities, we could in particular decide whether polynomials are positive. This in turn could be used for deciding whether polynomial equations P1 = 0 & ... & Pn = 0 have solutions, since they have no solution if and only if P1^2 + ... + Pn^2 > 0 everywhere. However we know after Tarski [Tarski 51] that if the Pi and Qi are considered as polynomials over R, there exists an algorithm to decide first-order formulas like P1 > Q1 & ... & Pn > Qn. The most elaborate version of such an algorithm was proposed by Collins [Collins 75] and actually used to prove termination of simple term rewriting systems. Anyway, this algorithm is a very large piece of code, it has an exponential complexity, and it is not really efficient (at least for our purpose, where we want to be able to handle thirty, up to one hundred, comparisons in a reasonable time). Thus we have chosen a much simpler view of the problem. 1. We do not want to decide inequalities, and we leave open the problem of guessing an adequate [.]. 
Our aim is a procedure that simply checks properties that ensure the wanted inequalities between polynomials and thereby directs equations into rules. Experience has shown that proving these inequalities is rather tedious, even when one has a good intuition on how to select the polynomials. 2. We want to base these computations on really elementary and basic principles such that a simple and efficient implementation can easily be devised. Our first idea is to restrict the domain of the polynomials to N - {0,1}, in other words to consider the set (N-{0,1})^m --> N. Now the idea behind this becomes easy, and generalizes the fact that XY > Y if X > 1. To guarantee stability by instantiation, we also have to be sure that the values [sigma(xi)](X1,...,Xm) are in N - {0,1}. Thus we have to check that the interpretation of each term is a function in (N-{0,1})^m --> N-{0,1}. This will easily be satisfied if, each time aj > 1 for all j in [1...m], we have [f](a1,...,am) > 1, and this will be true if [f] satisfies the inequalities [f](a1,...,am) >= ai on N-{0,1}. This last monotonicity condition is quite similar to the subterm property of the simplification orderings, and both conditions can obviously be checked with our algorithm. Usually it is enough to ensure that the coefficients of the interpretations are natural numbers and that each variable actually occurs (has degree at least 1); this implies compatibility. It is surprising to notice that, in general, the interpretations suggested by the authors in the literature satisfy these conditions (restriction on the domain and subterm property), which shows they are not so restrictive. 3.2. An example Before explaining our method, let us look at the termination of Associativity + Endomorphism. Let us take [f](X1) = 2X1 [*](X1,X2) = X1 X2 + X1 It is easy to check that this interpretation satisfies the subterm property. Now we may compute the values of the left-hand and right-hand sides of the rules. 
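Before deriving the inequalities by hand, both the subterm condition and the strict decrease of the three rules of Section 1 under this interpretation can be spot-checked mechanically. The following sketch (our own encoding; a finite numeric check over values greater than or equal to 2, not a proof):

```python
# Spot-check, over a finite grid of values >= 2, that the interpretation
# [f](X1) = 2*X1, [*](X1,X2) = X1*X2 + X1 of Section 3.2
# (1) satisfies the subterm-like condition [op](a1,...,ak) >= ai, and
# (2) strictly decreases on the three rules of Section 1.
from itertools import product

F = lambda x: 2 * x            # interpretation of f
M = lambda x, y: x * y + x     # interpretation of *

for a, b in product(range(2, 12), repeat=2):
    assert F(a) >= a                       # subterm condition for f
    assert M(a, b) >= a and M(a, b) >= b   # subterm condition for *

for x1, x2, x3 in product(range(2, 8), repeat=3):
    assert M(F(x1), F(x2)) > F(M(x1, x2))            # f(x1)*f(x2) --> f(x1*x2)
    assert M(M(x1, x2), x3) > M(x1, M(x2, x3))       # (x1*x2)*x3 --> x1*(x2*x3)
    assert M(F(x1), M(F(x2), x3)) > M(F(M(x1, x2)), x3)  # third rule
```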
For the associativity rule, we obtain [(x1*x2)*x3](X1,X2,X3) = X1 X2 X3 + X1 X2 + X1 X3 + X1 [x1*(x2*x3)](X1,X2,X3) = X1 X2 X3 + X1 X2 + X1 so that we have to prove the inequality X1 X3 > 0. It is true because X1 > 0 and X3 > 0. Similarly, as [f(x1)*(f(x2)*x3)](X1,X2,X3) = 4X1 X2 X3 + 4X1 X2 + 2X1 [f(x1*x2)*x3](X1,X2,X3) = 2X1 X2 X3 + 2X1 X2 + 2X1 X3 + 2X1 this yields the inequality 2X1 X2 X3 + 2X1 X2 - 2X1 X3 > 0 which is true since 2X1 X2 X3 > 2X1 X3 and 2X1 X2 > 0. The latter follows from the fact that all its coefficients are positive. The former comes from the fact that X2 > 1. The test for the rule f(x1)*f(x2) --> f(x1*x2) is left to the reader. 3.3. The Principles of the procedure used for proving positiveness Here is the key of our method. We prove inequalities one at a time, and starting from a polynomial P0 we build a sequence of inequalities such that ``P0 >= ... >= Pn-1 >= Pn > 0''. The positiveness of Pn is supposed to be checked by a basic principle like ``all coefficients are positive''. At each step we transform some coefficients of Pi (actually two) such that Pi = Pi+1 + Qi where Qi is a positive polynomial. More precisely, we propose the algorithm shown in Figure 1. --------------------------------------------------------------- Positive = proc(P: polynomial) returns(string) while there exists a negative coefficient do if there exist a_{p1,...,pm} > 0 and a_{q1,...,qm} < 0, with pj >= qj for all j in [1...m] then change(a_{p1,...,pm}, a_{q1,...,qm}) else return(``no-answer'') end return(``positive'') --------------------------------------------------------------- Figure 1: A procedure for checking positiveness of a polynomial --------------------------------------------------------------- We suppose that Pi = Sigma_{(p1,...,pm) in N^m} a^i_{p1,...,pm} X1^{p1}...Xm^{pm}. The main idea of this procedure is to consider each monomial with a negative coefficient, say nu, and to try to find a monomial with a positive coefficient, say pi, which bounds it. 
This means that for each value greater than or equal to 2, the value of pi will be greater than the value of nu. This comes from a consideration of the degrees of the monomials, i.e., the degree of pi has to be greater than or equal to the degree of nu in each variable. When such monomials are found, one removes from pi as small a part as we can, in order to bound as large a part of nu as possible. The procedure now relies on how we choose the function change. We propose two solutions for the body of change(a_{p1,...,pm}, a_{q1,...,qm}), where a_{p1,...,pm} is positive and a_{q1,...,qm} is negative. The first one is the more straightforward and uses the fact that the values of the variables are greater than 1. Therefore a monomial like X is greater than 1 and, if n is greater than m, a monomial like X^n Y is greater than X^m Y. We now indicate precisely the transformation to perform on the coefficients of pi and nu, according to the previous conventions. Solution 1: if a^i_{p1...pm} > |a^i_{q1...qm}| then a^{i+1}_{p1...pm} := a^i_{p1...pm} + a^i_{q1...qm} a^{i+1}_{q1...qm} := 0 else a^{i+1}_{p1...pm} := 0 a^{i+1}_{q1...qm} := a^i_{p1...pm} + a^i_{q1...qm} The example of Associativity + Endomorphism illustrates an application of this technique. In order to understand the second solution, let us look at a realistic example that cannot be solved by the previous transformation. Example 2: The rewriting system x + (y + z) --> (x + y) + z x * (y + z) --> (x * y) + (x * z) can be completed into itself plus the rule (u + (x * y)) + (x * z) --> u + (x * (y + z)) by using the interpretations [+](X, Y) = XY + Y and [*](X, Y) = XY. A completion algorithm which orients the third rule has to prove that U X^2 Y Z + X^2 Y Z + XZ > U X Y Z + U X Z + X Y Z + XZ. Since all the coefficients are equal to 1, in order to apply Solution 1 we should have at least as many monomials with positive coefficients as monomials with negative coefficients. However we can take advantage of the fact that X is greater than or equal to 2; thus U X^2 Y Z >= 2 U X Y Z > U X Y Z + U X Z. 
This remark can be generalized using powers of 2. Solution 2: if a^i_{p1...pm} > |a^i_{q1...qm}| 2^{q1-p1}...2^{qm-pm} then a^{i+1}_{p1...pm} := a^i_{p1...pm} + a^i_{q1...qm} 2^{q1-p1}...2^{qm-pm} a^{i+1}_{q1...qm} := 0 else a^{i+1}_{p1...pm} := 0 a^{i+1}_{q1...qm} := a^i_{p1...pm} 2^{p1-q1}...2^{pm-qm} + a^i_{q1...qm} We may now prove the two following propositions. Proposition 2: Using Solution 1, change transforms a polynomial into a smaller one. Proof: When one takes the first change function, if a^i_{p1...pm} > |a^i_{q1...qm}| then Pi = Pi+1 - a^i_{q1...qm} X1^{q1}...Xm^{qm} (X1^{p1-q1}...Xm^{pm-qm} - 1) else Pi = Pi+1 + a^i_{p1...pm} X1^{q1}...Xm^{qm} (X1^{p1-q1}...Xm^{pm-qm} - 1) Proposition 3: Using Solution 2, change transforms a polynomial into a smaller one. Proof: Remember that Xi >= 2. Thus, when one takes the second change function, if a^i_{p1...pm} > |a^i_{q1...qm}| 2^{q1-p1}...2^{qm-pm} then Pi = Pi+1 - a^i_{q1...qm} X1^{q1}...Xm^{qm} ((X1/2)^{p1-q1}...(Xm/2)^{pm-qm} - 1) else Pi = Pi+1 + a^i_{p1...pm} 2^{p1-q1}...2^{pm-qm} X1^{q1}...Xm^{qm} ((X1/2)^{p1-q1}...(Xm/2)^{pm-qm} - 1) We may now state the following theorem about the function Positive. Theorem: Whichever change function is taken, the procedure Positive always terminates, and it returns ``positive'' only if the input polynomial is positive for all natural values greater than or equal to 2. Notice that the second method is always more effective than the first one. For this reason we have implemented the second method in REVE. 4. Polynomial Interpretations of Associative-Commutative Operators Associative-commutative operators often occur in rewriting systems and it is really important to have methods to prove termination of associative-commutative rewriting systems. In the presence of associative-commutative equations (written AC), one interprets the quotient algebra T(F,{x1,...,xm})/AC. Therefore any interpretation has to be consistent with these laws. 
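Returning to Section 3.3, the procedure Positive of Figure 1 with the change function of Solution 2 (the variant implemented in REVE) can be sketched as follows. The dictionary encoding, names, and candidate-selection order are ours, not REVE's; exact rational arithmetic avoids rounding in the 2^(q-p) scalings:

```python
# Sketch of procedure Positive (Figure 1) with the change function of
# Solution 2.  A polynomial in m variables is encoded as a dict mapping
# exponent tuples to integer coefficients.  Returns True when the
# polynomial is certified positive for all variable values >= 2, and
# None ("no-answer") when no bounding monomial can be found.
from fractions import Fraction

def positive(poly):
    p = {e: Fraction(c) for e, c in poly.items() if c != 0}
    while any(c < 0 for c in p.values()):
        q = next(e for e, c in p.items() if c < 0)     # a monomial nu
        # a positive monomial pi can bound nu only if its exponents
        # dominate nu's componentwise (the condition of Figure 1)
        cands = [e for e, c in p.items()
                 if c > 0 and all(ei >= qi for ei, qi in zip(e, q))]
        if not cands:
            return None                                # "no-answer"
        e = cands[0]
        # Solution 2: since each Xi >= 2, X^e >= 2^(sum(e - q)) * X^q
        scale = Fraction(2) ** sum(ei - qi for ei, qi in zip(e, q))
        if p[e] * scale > -p[q]:
            p[e] += p[q] / scale       # part of pi absorbs all of nu
            del p[q]
        else:
            p[q] += p[e] * scale       # all of pi absorbs part of nu
            del p[e]
            if p[q] == 0:
                del p[q]
    return True                                        # "positive"

# Example 2's inequality over (U, X, Y, Z), after cancelling XZ:
# U*X^2*Y*Z + X^2*Y*Z - U*X*Y*Z - U*X*Z - X*Y*Z > 0
P = {(1, 2, 1, 1): 1, (0, 2, 1, 1): 1,
     (1, 1, 1, 1): -1, (1, 1, 0, 1): -1, (0, 1, 1, 1): -1}
assert positive(P) is True
```

Solution 1 is recovered as the special case in which the scaling factor is 1.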
The purpose of this section is to give a characterization of when polynomial interpretations are consistent in this sense. It turns out that this criterion is very simple and can be tested simply. Surprisingly enough, we are not aware of any mention of it in the literature. Thus if a polynomial Q interprets an associative-commutative operator, it satisfies the two conditions: Q(X, Y) = Q(Y, X) Q(Q(X, Y), Z) = Q(X, Q(Y, Z)). The first equation says that Q is symmetric; the second one gives a bound on the degree. Indeed, if the highest degree of X in Q is m, then m has to satisfy the identity m^2 = m, since m^2 is the highest degree of X in Q(Q(X,Y),Z) and m is its highest degree in Q(X,Q(Y,Z)). Therefore m = 0 or 1, and the general form of Q is Q(X,Y) = aXY + b(X+Y) + c. Then Q(Q(X,Y),Z) = a^2 XYZ + ab(XY+YZ+ZX) + b^2 X + b^2 Y + (ac+b)Z + c(b+1) Q(X,Q(Y,Z)) = a^2 XYZ + ab(XY+YZ+ZX) + (ac+b)X + b^2 Y + b^2 Z + c(b+1), so Q(Q(X,Y),Z) - Q(X,Q(Y,Z)) = (ac + b - b^2)(Z - X) and we have this criterion: Proposition 4: The polynomials that satisfy the associative-commutative equations, i.e., the polynomials that interpret associative-commutative operators, are the polynomials of the form aXY + b(X+Y) + c with ac + b - b^2 = 0. 5. Interpretations by a Cartesian product of polynomials, or why Lankford's example 3 works When dealing with a really classical example, namely the naturals with addition and product defined in terms of the function ``successor'', we arrived at the rather difficult problem that neither the classical polynomial interpretations nor the other approaches could handle it [Bachmair and Plaisted 85]. Indeed, such a specification uses a rewriting system with ``+'' and ``.'' associative and commutative. 0 + x --> x s(x) + y --> s(x + y) 0 . x --> 0 s(x) . y --> (x . y) + y x . (y + z) --> (x . y) + (x . z) Because of the associativity and commutativity and the restrictions on the polynomial interpretations of ``+'' and ``.'', we have to choose their interpretations of degree one. 
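The criterion of Proposition 4 is mechanical to test. A small sketch (names are ours; the grid check is a finite illustration, not a proof):

```python
# Q(X,Y) = a*X*Y + b*(X+Y) + c interprets an AC operator
# iff a*c + b - b*b == 0 (Proposition 4).
def is_ac_interpretation(a, b, c):
    return a * c + b - b * b == 0

def Q(a, b, c):
    return lambda x, y: a * x * y + b * (x + y) + c

# XY, X+Y+1 and 2XY all satisfy the criterion ...
for a, b, c in [(1, 0, 0), (0, 1, 1), (2, 0, 0)]:
    assert is_ac_interpretation(a, b, c)
    q = Q(a, b, c)
    for x in range(2, 6):
        for y in range(2, 6):
            for z in range(2, 6):
                assert q(x, y) == q(y, x)              # commutativity
                assert q(q(x, y), z) == q(x, q(y, z))  # associativity

# ... while XY + X + Y + 1 does not: a*c + b - b^2 = 1
assert not is_ac_interpretation(1, 1, 1)
```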
A simple computation on the degree of X in the interpretation of the distributivity rule shows that the interpretation of ``+'' has to be of the form (X + Y) + c, but this cannot work with any interpretation proving the termination of the rule s(x) + y --> s(x + y). Thus the classical interpretations do not work in this case, and we propose to use a p-tuple of polynomials for interpreting the operators instead of a unique one. We will use the notation [t](X1,...,Xn) = ([t]_1(X1,...,Xn),...,[t]_p(X1,...,Xn)) where the [t]_i(X1,...,Xn) are the same kind of polynomials as defined in the previous sections. A lexicographical comparison will allow us to handle scales of ordering that polynomials alone cannot. Let us first define this on the previous example. The interpretation of an operator of arity n is a pair of polynomials of the same arity n. The interpretation of a term is made component-wise. The comparison of two terms is made lexicographically, by first comparing the first components and, if they are equal, the second components. Since the lexicographical product of well-founded orderings is well-founded, we obtain that way a well-founded ordering. For instance, let us take: [0] = (2, 2) [s](X) = (X + 2, X + 1) [+](X, Y) = (X + Y + 1, XY) [.](X, Y) = (XY, XY). Since the components of the interpretations of ``+'' and ``.'' satisfy the conditions for the polynomial interpretations of associative and commutative operators, the whole interpretations are constant on each equivalence class modulo associativity and commutativity and can be used for proving the termination of the previous associative-commutative rewriting system. We have indeed for the first rule [0+x](X) = (X + 3, 2X) [x](X) = (X, X) and (X + 3, 2X) > (X, X) for the second rule [s(x) + y](X, Y) = (X + Y + 3, XY + Y) [s(x + y)](X, Y) = (X + Y + 3, XY + 1) and (X + Y + 3, XY + Y) > (X + Y + 3, XY + 1) for the third rule [0 . 
x](X) = (2X, 2X) [0](X) = (2, 2) and (2X, 2X) > (2, 2) for the fourth rule [s(x).y](X, Y) = (XY + 2Y, XY + Y) [(x.y) + y](X, Y) = (XY + Y + 1, XY^2) and (XY + 2Y, XY + Y) > (XY + Y + 1, XY^2) for the fifth rule [x . (y + z)](X, Y, Z) = (XY + XZ + X, XYZ) [(x.y) + (x.z)](X, Y, Z) = (XY + XZ + 1, X^2 Y Z) and (XY + XZ + X, XYZ) > (XY + XZ + 1, X^2 Y Z) Therefore the rewriting system is terminating. The extension to Cartesian products with more than two components is straightforward and will not be presented here. However, we feel this provides us with the only implemented method for proving such associative-commutative systems terminating. As we mention in the title, this method is already present in [Lankford 79], in rule (18) and rule (19). However we feel our presentation gives the conceptual framework behind rules that are only given through an example, and it allows an extension to more than two levels in the Cartesian product. In addition the method is actually implemented in REVE. 6. Finding the right polynomial interpretation: several suggestions In this section we would like to provide hints for the difficult problem of finding an interpretation which proves the termination of a given rewriting system. During our experiments the main method we used was ``trial and error'', and for this the help of a computer was essential; for instance, on the Associativity + Distributivity system mentioned in Example 2, the computer behaved better than ourselves. This can be improved by good messages when the software fails to orient a specific rule. However, Dave Detlefs noticed that in many examples of associative-commutative rewriting systems a trivial interpretation like [+](X, Y) = 2XY often works, and he proposed it as the default. As suggestions for a user faced with the problem of orienting a rewriting system by a polynomial interpretation, we propose the following experimental facts. Suggestion 1. Set the interpretation of your constants to 2. Suggestion 2. 
Look for a hierarchy on your operators and use it in a Cartesian product interpretation. Such a hierarchy could be provided by a precedence on the operators given, for instance, by an incremental ordering like the recursive path ordering or the recursive decomposition ordering [Forgaard and Detlefs 85] run on the rewriting system. This precedence is topologically sorted to obtain a total hierarchy. To use it in a Cartesian product of polynomial interpretations, the operators with the highest precedence are given a highest degree polynomial interpretation on the first component of the Cartesian product (see appendix). Suggestion 3. For your associative operators, if you want to orient a rule like x + (y + z) --> (x + y) + z, try one of these interpretations: [+](X, Y) = 2XY + Y or [+](X, Y) = XY + Y. The idea is always to give more ``weight'' to the second argument, and the choice between these interpretations will depend on the other rules. Obviously, interpretations like [+](X, Y) = 2XY + X or [+](X, Y) = XY + X or [+](X, Y) = X^2 + Y may work if you want to orient the rule in the opposite direction. Suggestion 4. If the rule is a definition of an operator, i.e., a rule of the form f(x1,...,xn) --> t(x1,...,xn), an interpretation like [f] = [t] + 1 works. The appendix gives an image of a computer session run on the term rewriting system laboratory REVE. The example is an axiomatization of groups due to Taussky, given by Knuth and Bendix in their paper [Knuth and Bendix 70]. The reader will notice that the interpretation of *, namely [*](X,Y) = 2XY + Y, is an application of Suggestion 3 and that the interpretation of g, namely [f(x, x*i(y))] + 1, is an application of Suggestion 4. Neither can be guessed at the beginning of the completion algorithm, but only when the associativity of * and the actual definition of g in terms of f are known. This shows the difficulties in the use of this suggestion. 7. 
Review of systems whose termination was proved by our algorithm and actually run on REVE. The first example we proved was obviously Associativity + Endomorphism (see Section 1). A similar example is Associativity + Antimorphism. It can be seen as the optimization of inversions in an algebraic system with an associative law, as in matrix manipulation. x * (y * z) --> (x * y) * z f(x) * f(y) --> f(y * x) (x * f(y)) * f(z) --> x * f(z * y) The system is convergent, but its termination cannot be proved by a recursive path ordering. The interpretation [*](X, Y) = XY + Y and [f](X) = X + 1 works. Then we studied the equations for groups e * x = x i(x) * x = e (x * y) * z = x * (y * z) x / y = x * i(y) using an interpretation proposed by Huet [Huet 80]: [e] = 2 [i](X) = X^2 [*](X, Y) = 2XY + X [/](X, Y) = 1 + X + 2X Y^2 The interpretation of / is just [x * i(y)] + 1 (Suggestion 4). This interpretation was used to complete these equations into the ten classical Knuth-Bendix rules plus the definition of /. On the first three equations our Suggestion 3 would give the following Cartesian product interpretation to get the ten classical rules: [e] = (2, 2) [i](X) = (X^2, X^2) [*](X, Y) = (X + Y, X + Y^2) On the four equations for groups another interpretation can be taken: [e] = 2 [i](X) = X^2 [*](X, Y) = 1 + X + Y^4 [/](X, Y) = X + Y^2 It completes them into the non-classical set of ten rules already discovered using the recursive path ordering [Lescanne 83]: (e / x) --> i(x) i(e) --> e (x / e) --> x (x / x) --> e i(i(x)) --> x i((y / x)) --> x / y (x / y) / i(y) --> x (x / i(y)) / y --> x (x / (y / z)) --> ((x / i(z)) / y) (x * y) --> (x / i(y)). 
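Huet's interpretation above can again be spot-checked on the four group equations, each oriented left to right (a finite numeric check with our own names, not a proof):

```python
# Spot-check on values >= 2 that Huet's interpretation
# [e] = 2, [i](X) = X^2, [*](X,Y) = 2XY + X, [/](X,Y) = 1 + X + 2XY^2
# strictly decreases on the four group equations, oriented left to right.
E = 2
I = lambda x: x * x
M = lambda x, y: 2 * x * y + x
D = lambda x, y: 1 + x + 2 * x * y * y

for x in range(2, 9):
    for y in range(2, 9):
        for z in range(2, 9):
            assert M(E, x) > x                    # e * x --> x
            assert M(I(x), x) > E                 # i(x) * x --> e
            assert M(M(x, y), z) > M(x, M(y, z))  # (x*y)*z --> x*(y*z)
            assert D(x, y) > M(x, I(y))           # x / y --> x * i(y)
```

Note that the last inequality is strict by exactly 1, in accordance with the [x * i(y)] + 1 construction of Suggestion 4.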
We were also interested in the termination proof of the following system, given by Hsiang [Hsiang 81]: x + 0 = x x + (-x) = 0 x * 1 = x x * x = x (x + y) * z = (x * z) + (y * z) x + x = 0 using the polynomial interpretations: [0] = 2 [1] = 2 [-](X) = X + 1 [+](X, Y) = X + Y + 2 [*](X, Y) = 2XY + 1 The last system presented in this review deals with symbolic differentiation with respect to x, given by Dershowitz [Dershowitz 85]: D_x(x) = 1 D_x(a) = 0 D_x(a + b) = D_x(a) + D_x(b) D_x(a - b) = D_x(a) - D_x(b) D_x(-a) = - D_x(a) D_x(a * b) = b * D_x(a) + a * D_x(b) D_x(a / b) = D_x(a)/b - a * D_x(b)/(b^2) D_x(ln a) = D_x(a)/a D_x(a^b) = b * a^(b-1) * D_x(a) + a^b * (ln a) * D_x(b) The following polynomial interpretations can be used to prove termination and to complete the above system: [+](a, b) = a + b [-](a) = a + 1 [c1] = 4 [c2] = 4 [*](a, b) = a + b [/](a, b) = a + b [D_x](a) = a^2 [ln](a) = a + 1 8. Conclusion The method based on polynomial interpretations is now proposed in the rewrite rule laboratory REVE. Though this termination check procedure is definitely necessary in order to run some of the examples we know, like Associativity + Endomorphism, we do not think it will replace on every occasion the current methods based on the recursive path ordering [Dershowitz 82] or the recursive decomposition ordering [Jouannaud et al. 82], since these have been shown to have a large scope and to be really easy to use in many practical cases [Forgaard and Detlefs 85], and to be usable when polynomial interpretation fails, as pointed out by Dershowitz [Dershowitz 83]. So, we think we will keep both methods in REVE and let the user choose the method he or she wants. However, in the case of associative-commutative operators, the extensions of the recursive path ordering either fail or are not ready to be incorporated in a rewrite rule laboratory like REVE. 
So, it is the only method currently available, and we are incorporating it into REVE-3 (the general equational rewrite rule laboratory) as the mechanism for proving termination in the associative-commutative case. In conclusion, we would like to talk about the limitations of the polynomial interpretation method. The first one deals with our criterion. It cannot prove the positiveness of the polynomial X1^2 + X2^2 - 2 X1 X2 + 1 = (X1 - X2)^2 + 1. Since we never encountered such a polynomial in termination proofs, we feel this is not an obstacle to using the criterion. On the other hand, we do not feel it is a restriction that our method checks positiveness of polynomials for values greater than or equal to 2 instead of any natural value, since a change of variables gives a polynomial that satisfies the condition. Thus if in the polynomial 3X^2 - 7, which is positive for X >= 2, we change X into Y + 2, we get the polynomial 3Y^2 + 12Y + 5, which is positive for every natural Y and fulfills our requirements. The second kind of restriction is mentioned by [Huet and Oppen 80, Dershowitz 85]. They say that the general polynomial interpretations impose a polynomial upper bound on the complexity of the computations; we claim this is not true, as demonstrated by the one-rule term rewriting system f(s(s(x)), y) --> f(s(x), f(x, y)) which has an exponential computation complexity, since it computes a term related to the Fibonacci numbers, and which can be proved to terminate using the interpretation [f](X, Y) = X + Y and [s](X) = X^2. Acknowledgment We are pleased to thank all the people who gave us helpful suggestions, especially Françoise Bellegarde, Dave Detlefs, Harald Ganzinger, Jean-Pierre Jouannaud, Dallas Lankford, Laurence Puel and Jean-Luc Rémy. ----- The paper is still incomplete and this will be done soon ---- ----- The bibliography and a REVE script remain to be added ----