You may also be interested in the page where I collected not-easily-accessible publications by several authors in the domain of term rewriting and type theory.
See also the slides of the talks I gave.
The following list is not exhaustive. For more complete lists, see the list of my publications (2007) or DBLP, which is more up-to-date but less complete for old publications. See also my arXiv publications.
Since the early work of Church on the λI-calculus and of Gentzen on structural rules, the control of variable use has gained an important role in computation and logic, giving rise to various resource-aware calculi and substructural logics with applications to programming language and compiler design. This paper presents the first formalization of variable control in calculi with implicit names based on de Bruijn indices. We design and implement three calculi: first, a restricted calculus with implicit names; then, a restricted calculus with implicit names and explicit substitution; and finally, an extended calculus with implicit names and resource control. We propose a novel concept of L-types, which are used (a) to define terms and (b) to characterize certain classes of terms in each of the presented calculi. We have chosen to work simultaneously on the design and on implementations in Haskell and Agda. The benefit of this strategy is twofold: dependent types make it possible to express and check properties of programs in the type system, and constructive proofs of preservation yield a constructive evaluator for terms (programs).
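To give a concrete, if simplified, picture of the kind of syntax involved here and in the next abstract, here is a minimal Haskell sketch of a term datatype with de Bruijn indices in which duplication and erasure are explicit constructors. The constructor names and the index conventions are mine, chosen for illustration; they are not the datatypes of the papers.

    -- A minimal sketch (not the papers' implementation): lambda-terms with
    -- de Bruijn indices where erasure (weakening) and duplication
    -- (contraction) are recorded by explicit constructors.
    module ResourceSketch where

    data Term
      = Idx Int          -- de Bruijn index
      | Lam Term         -- abstraction, binding index 0
      | App Term Term    -- application
      | Era Term         -- explicit erasure: index 0 is declared unused in the body
      | Dup Term         -- explicit duplication: index 0 is declared used twice,
                         --   its two copies being referred to as indices 0 and 1
      deriving (Show, Eq)

    -- Size of a term, counting every constructor.
    size :: Term -> Int
    size (Idx _)   = 1
    size (Lam t)   = 1 + size t
    size (App t u) = 1 + size t + size u
    size (Era t)   = 1 + size t
    size (Dup t)   = 1 + size t

    -- \x. x x, with the duplication of x made explicit (under the assumed convention).
    selfApp :: Term
    selfApp = Lam (Dup (App (Idx 0) (Idx 1)))

    -- \x. \y. x, with the erasure of the unused y made explicit.
    constTerm :: Term
    constTerm = Lam (Lam (Era (Idx 1)))

The point of such explicit constructors is that a type discipline can then insist that every bound index is used exactly once unless a Dup or an Era says otherwise, which is the essence of the resource control discussed above.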
A λ-calculus with resources is a λ-calculus which takes into account basic operations like explicit duplication (when a variable is used twice, this is said explicitly by a specific operator) and explicit erasure (when a variable is not used, this fact is also said explicitly). In this paper, we present a λ-calculus with explicit duplication and explicit erasure. The originality is that we use ®-indices, which are indices à la de Bruijn designed to make the names of the variables implicit. This produces a simple framework, thanks to a nice control of duplication, and yields an easy implementation in HASKELL. In addition, we study simple types and types à la System F, both with their respective HASKELL implementations. Those systems are interesting because they are in a Curry-Howard correspondence with a logic with Contraction and Thinning. We also explore a bestiary of terms with duplication and erasure.
With Simona Kasterovic.
A new presentation of Conversion/Preference Games, updating JAIST/IS-RR-2006-006.
In this tutorial we introduce the concept of CP-games, which are a generalization of strategic games. First we present examples which are relevant to a CP-game approach. We then give a somewhat naive introduction to CP-games. Finally, we present the connection between CP-games and gene regulation networks as presented by René Thomas.
This paper introduces a framework based on game theory that models gene regulation activities. The games used here are called Conversion/Preference games, or CP games for short. CP game theory is a discrete presentation of game theory that nicely extends classical strategic game theory. A group of genes is a module if a connection between the regulation activity of that group of genes and some biological functions can be established. One of the main issues when a gene regulation system is analyzed through a model is to decompose the model into modules. Assuming that a gene regulation system can be decomposed into modules and that each module "implements" a function, the goal of dynamical analysis is to exhibit the module structure. More specifically, we focus on regulation equilibria that give a stable answer to stress. One of the key issues in identifying modules is the ability to decompose the system into games; the analysis of the dynamics of games enables us to understand how new equilibria may emerge in the composed game. In this paper we give partial answers to these questions.
We present a game-theoretic foundation for gene regulatory analysis based on the recent formalism of rewriting game theory. Rewriting game theory is discrete and comes with a graph-based framework for understanding compromises and interactions between players and for computing Nash equilibria. The formalism explicitly captures the dynamics of its Nash equilibria and, therefore, is a suitable foundation for the study of steady states in discrete modelling. We apply the formalism to the discrete analysis of gene regulatory networks introduced by R. Thomas and S. Kaufman. Specifically, we show that their models are specific instances of a C/P game deduced from the K parameter.
Motzkin trees are also called unary-binary trees. This paper proposes a linear algorithm for the uniform random generation of Motzkin trees. The algorithm uses the same paradigm as Rémy's linear algorithm for the random generation of binary trees and is based on a preliminary computation.
Holonomic equations are recursive equations which allow efficiently computing the numbers of combinatorial objects. Rémy showed that the holonomic equation associated with binary trees yields an efficient linear random generator of binary trees. I extend this paradigm to Motzkin trees and Schröder trees and show that, despite slight differences, my algorithm that generates random Schröder trees has linear expected complexity, while my algorithm that generates Motzkin trees has O(n) expected complexity only if a specific oracle can be implemented with O(1) complexity. For Motzkin trees, I propose a solution which works well for realistic values (up to size ten million) and yields an efficient algorithm.
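As a concrete instance of a holonomic equation at work, here is a short Haskell sketch that computes Motzkin numbers from their standard holonomic recurrence (n + 2) M(n) = (2n + 1) M(n-1) + 3(n - 1) M(n-2), with M(0) = M(1) = 1. It covers only the counting step, not the random generator of the paper.

    -- Motzkin numbers from their holonomic recurrence, with exact
    -- Integer arithmetic (the division is always exact).
    module MotzkinCount where

    motzkin :: Int -> Integer
    motzkin n = ms !! n
      where
        ms :: [Integer]
        ms = 1 : 1 : zipWith3 step [2 ..] (tail ms) ms
        step :: Integer -> Integer -> Integer -> Integer
        step k mk1 mk2 = ((2 * k + 1) * mk1 + 3 * (k - 1) * mk2) `div` (k + 2)

    -- ghci> map motzkin [0 .. 7]
    -- [1,1,2,4,9,21,51,127]

Each number is obtained from the two previous ones in a constant number of arithmetic operations (on big integers), which is what makes such recurrences attractive for preliminary counting computations.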
Canonical expressions represent the implicative propositions (i.e., the propositions with only implications) up-to renaming of variables. Using a Monte-Carlo approach, we explore the model of canonical expressions in order to confirm the paradox that says that asymptotically almost all classical theorems are intuitionistic. Actually we found that more than 96.6% of classical theorems are intuitionistic among propositions of size 100.
Escalation in games is when agents keep playing forever. Based on formal proofs, we claim that if agents assume that resources are infinite, escalation is rational.
Extensive games are tools largely used in economics to describe decision processes of a community of agents. In this paper we propose a formal presentation based on the proof assistant COQ which focuses mostly on infinite extensive games and their characteristics. COQ proposes a feature called "dependent types'', which means that the type of an object may depend on the type of its components. For instance, the set of choices or the set of utilities of an agent may depend on the agent herself. Using dependent types, we describe formally a very general class of games and strategy profiles, which corresponds somewhat to what game theorists are used to. We also discuss the notions of infiniteness in game theory and how this can be precisely described.
We present infinite extensive strategy profiles with perfect information and we show that replacing finite by infinite changes the notions and the reasoning tools. The presentation uses a formalism recently developed by logicians and theoretical computer scientists, called coinduction. This builds a bridge between economic game theory and the most recent advances in theoretical computer science and logic. The key result is that rational agents may have strategies leading to divergence.
Escalation is the fact that in a game (for instance an auction), the agents play forever. It is not necessary to consider complex examples to establish its rationality. In particular, the
This is an extended and modified version of the previous paper.
As we show using the notion of equilibrium in the theory of infinite sequential games, bubbles and escalations are rational for economic and environmental agents who believe in an infinite world. This goes against the vision of a self-regulating, wise and peaceful economy in equilibrium. In other words, in this context, equilibrium is not synonymous with stability. We attempt to draw from this statement methodological consequences and a new approach to economics. To the mindware of economic agents (a concept due to cognitive psychology) we propose to add coinduction, in order to reason properly about infinite games. In this way we refine the notion of rationality.
As we show using the notion of equilibrium in the theory of infinite sequential games, crashes and financial escalations are rational for economic or environmental agents who have a vision of an infinite world. This contradicts the picture of a self-regulating, wise and peaceful economic world in equilibrium; in other words, in this context, equilibrium is no longer synonymous with stability. We attempt to draw from this observation some methodological consequences and some new modes of thinking that should be adopted, notably in microeconomics. Among these modes is coinduction, which is at the basis of our reasoning on infinite games.
Escalation is the fact that in a game (for instance an auction), the agents play forever. It is not necessary to consider complex examples to establish its rationality. In particular, the
Finite objects, and more specifically finite games, are formalized using induction, whereas infinite objects are formalized using coinduction. In this article, after an introduction to the concept of coinduction, we revisit the basic notions of game theory on infinite (discrete) extensive games. Among others, we introduce a definition of Nash equilibrium and a notion of subgame perfect equilibrium for infinite games. We use those concepts to analyze well-known infinite games, like the dollar auction game and the centipede game, and we show that human behaviors that are often considered as illogical are perfectly rational, if one admits that human agents reason coinductively.
This paper has been rejected by the International Journal of Game Theory and the report is worth reading.
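Since the entries above rely on coinduction to handle infinite games, here is an illustrative Haskell sketch (laziness playing the role of coinduction) of an infinite, dollar-auction-like extensive game and of the escalation strategy that always raises. The names and payoffs are made up for the illustration; the formal development of the papers is carried out in a proof assistant, not in Haskell.

    -- An infinite extensive game as a lazy tree, and the escalation
    -- profile that keeps bidding forever.  Purely illustrative.
    module InfiniteGame where

    data Player = Alice | Bob   deriving (Show, Eq)
    data Move   = Stop  | Raise deriving (Show, Eq)

    -- A game either ends with a utility for each player, or lets a
    -- player choose between a stop-subgame and a raise-subgame.
    data Game
      = End Int Int
      | Choice Player Game Game

    -- A dollar-auction-like game: at stake v, the player to move either
    -- gives up (losing v) or raises; if both always raise, the game
    -- never ends.
    auction :: Int -> Player -> Game
    auction v Alice = Choice Alice (End (-v) 0) (auction (v + 1) Bob)
    auction v Bob   = Choice Bob   (End 0 (-v)) (auction (v + 1) Alice)

    -- Follow a strategy profile for at most n steps, to observe a
    -- finite prefix of a possibly infinite play.
    play :: (Player -> Move) -> Int -> Game -> [(Player, Move)]
    play _     0 _                       = []
    play _     _ (End _ _)               = []
    play strat n (Choice p _stop raiseG) =
      case strat p of
        Stop  -> [(p, Stop)]
        Raise -> (p, Raise) : play strat (n - 1) raiseG

    -- ghci> play (const Raise) 4 (auction 0 Alice)
    -- [(Alice,Raise),(Bob,Raise),(Alice,Raise),(Bob,Raise)]

Whether such an escalation profile can be an equilibrium is the kind of question the coinductive notions of Nash and subgame perfect equilibrium mentioned above are designed to answer.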
In this paper we study carefully and formally the dollar auction game using coinduction, and we show that, contrary to what is commonly accepted, the behavior which consists in bidding forever, called escalation, is rational. Escalation is typical of an infinite game; tools conceived for studying infiniteness are mandatory, and they are what coinduction provides.
This paper has been rejected by the journal Games and Economic Behavior and the report is a piece of anthology of what can be written by someone who knows nothing about coinduction and computability. Here is my answer.
This paper presents experiments on common knowledge logic, conducted with the help of the proof assistant Coq. The main feature of common knowledge logic is the eponymous modality, which says that a group of agents shares knowledge about a certain proposition in an inductive way. This modality is specified by using a fixpoint approach. Furthermore, from these experiments, we discuss and compare the structure of theorems that can be proved in specific theories that use common knowledge logic. Those structures manifest the interplay between the theory (as implemented in the proof assistant Coq) and the metatheory.
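For readers unfamiliar with the modality: writing E φ for "every agent in the group knows φ", common knowledge C φ is standardly characterized by the fixpoint equation

    C φ  <->  E (φ ∧ C φ)

i.e., C φ is a solution of X = E(φ ∧ X); how this fixpoint is rendered, and which fixpoint is taken, is specific to the Coq formalization described in the paper.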
Common Knowledge Logic is meant to describe situations of the real world where a group of agents is involved. These agents share knowledge and make strong statements on the knowledge of the other agents (the so-called common knowledge). But as we know, the real world changes and, above all, information on what is known about the world changes as well. These changes are described by dynamic logic. To describe knowledge changes, dynamic logic should be combined with the logic of common knowledge. In this paper we describe experiments we have made on the integration, in a unique framework, of common knowledge logic and dynamic logic in the proof assistant Coq. This results in a set of fully checked proofs of readable statements. We describe the framework and how a proof can be conducted.
This paper proposes a formalization in Coq of common knowledge logic and checks its adequacy on case studies. This exercise allows exploring experimentally the proof-theoretic side of common knowledge logic. This work is original in that nobody has considered higher order common knowledge logic from the point of view of proofs performed on a proof assistant. This work is experimental by nature as it tries to draw conclusions from experiments with a proof assistant.
We develop an intersection type system for the lambda-mu-mu~ calculus of Curien and Herbelin. This calculus provides a symmetric computational interpretation of classical sequent style logic and gives a simple account of call-by-name and call-by-value. The present system improves on earlier type disciplines for lambda-mu-mu~: in addition to characterizing the lambda-mu-mu~ expressions that are strongly normalizing under free (unrestricted) reduction, the system enjoys the Subject Reduction and the Subject Expansion properties.
*X is a diagrammatic calculus. This means that it describes programs by 2-dimensional diagrams and that computations are reductions of those diagrams. In addition, it has a 1-dimensional syntax. The type system of *X gives a simple Curry-Howard interpretation of classical logic. Since the lambda-calculus can easily be implemented in *X, its untyped version is Turing complete.
We investigate some syntactic properties of Wadler's dual calculus, a term calculus which corresponds to classical sequent logic in the same way that Parigot's lambda-mu calculus corresponds to classical natural deduction. Our main result is a strong normalization theorem for reduction in the dual calculus; we also prove some confluence results for the typed and untyped versions of the system.
X is an untyped language for describing circuits by composition of basic components. This language is well suited to describe structures which we call "circuits" and which are made of parts that are connected by wires. Moreover, X gives an expressive platform on which algebraic objects and many different (applicative) programming paradigms can be mapped. In this paper we will present the syntax and reduction rules for X and some of its potential uses. To demonstrate the expressive power of X, we will show how, even in an untyped setting, elaborate calculi can be embedded, like the naturals, the lambda-calculus, Bloo and Rose's calculus of explicit substitutions lambda-x, Parigot's lambda-mu and Curien and Herbelin's lambda-mu-mu~.
We investigate some fundamental properties of the reduction relation in the untyped term calculus derived from Curien and Herbelin's lambda-mu-mu~. The original lambda-mu-mu~ has a system of simple types, based on sequent calculus, embodying a Curry-Howard correspondence with classical logic; the significance of the untyped calculus of raw terms is that it is a Turing-complete language for computation with explicit representation of control as well as code. We define a type assignment system for the raw terms satisfying: a term is typable if and only if it is strongly normalizing. The intrinsic symmetry in the lambda-mu-mu~ calculus leads to an essential use of both intersection and union types; in contrast to other union-types systems in the literature, our system enjoys the Subject Reduction property.
We present a new system of intersection types for a composition-free calculus of explicit substitutions with a rule for garbage collection, and show that it characterizes those terms which are strongly normalizing. This system extends previous work on the natural generalization of the classical intersection types system, which characterized head normalization and weak normalization, but was not complete for strong normalization. An important role is played by the notion of available variable in a term, which is a generalization of the classical notion of free variable.
We characterize those terms which are strongly normalizing in a composition-free calculus of explicit substitutions by defining a suitable type system using intersection types. The key idea is the notion of available variable in a term, which is a generalization of the classical notion of free variable.
This paper is part of a general program of treating explicit substitutions as the primary lambda-calculi from the point of view of foundations as well as applications. We work in the composition-free calculus lambda-x of explicit substitutions and the augmented calculus obtained by adding explicit garbage-collection, and explore the relationship between intersection-types and reduction. We show that the strongly normalizable terms, the terms which normalize by leftmost reduction, and the terms which normalize by head reduction can each be characterized as the terms typable in a certain system. As a consequence we characterize a class of deterministic reduction strategies which are normalizing strategies and so can serve as models for evaluation in functional programming. The type systems we study are the natural generalizations of the classical systems, but the proofs require some new techniques in the presence of explicit reductions involving substitutions. Indeed, in our proofs we do not rely on results from classical lambda-calculus, which in our view comes after the calculus of explicit substitution. The proof technique we employ yields readily the result that a term is strongly normalizing iff it is strongly normalizing in the calculus extended by garbage-collection.
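For reference, and modulo notation, the composition-free calculus lambda-x extends the lambda-calculus with a closure construct M<x := N> and is usually presented with rules of the following shape; the last rule is the explicit garbage collection mentioned above.

    (B)    (\x.M) N        ->  M<x := N>
    (App)  (M N)<x := P>   ->  (M<x := P>) (N<x := P>)
    (Abs)  (\y.M)<x := P>  ->  \y.(M<x := P>)      (y distinct from x and not free in P)
    (Var)  x<x := P>       ->  P
    (gc)   M<x := P>       ->  M                   (x not free in M)

"Composition-free" refers to the absence of a rule composing two closures into one.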
This short note presents a calculus of explicit substitutions proposed by N. G. de Bruijn in 1978.
Explicit substitutions were proposed by Abadi, Cardelli, Curien, Hardin and Lévy to internalise substitutions into the lambda-calculus and to propose a mechanism for computing on substitutions. Lambda-upsilon is another view of the same concept which aims to explain the process of substitution and to decompose it into small steps. Lambda-upsilon is simple and preserves strong normalisation. Apparently this important property cannot coexist with another important one, namely confluence on open terms. The spirit of lambda-upsilon is closely related to another calculus of explicit substitutions proposed by de Bruijn and called C-lambda-xi-phi. In this paper, we introduce lambda-upsilon, we present C-lambda-xi-phi in the same framework as lambda-upsilon and we compare both calculi. Moreover, we prove properties of lambda-upsilon; namely, lambda-upsilon correctly implements beta-reduction, lambda-upsilon is confluent on closed terms, i.e., on terms of the classical lambda-calculus and on all terms derived from them, and finally lambda-upsilon preserves strong normalization of beta-reduction.
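To make this concrete, here is a small Haskell sketch of lambda-upsilon-style reduction on de Bruijn terms (indices starting at 1), following the standard presentation of the rules; it is a sketch for illustration, not the machinery of the paper.

    -- Terms and substitutions of a lambda-upsilon-style calculus, and
    -- its rewrite rules applied at the root.
    module LambdaUpsilon where

    data Term
      = Var Int              -- de Bruijn index (>= 1)
      | Lam Term             -- abstraction
      | App Term Term        -- application
      | Clo Term Subst       -- closure  t[s]
      deriving (Show, Eq)

    data Subst
      = Slash Term           -- t/   : substitute t for index 1
      | Lift  Subst          -- ⇑(s) : lift s under a binder
      | Shift                -- ↑    : shift all indices by one
      deriving (Show, Eq)

    -- One reduction step at the root, when a rule applies.
    step :: Term -> Maybe Term
    step (App (Lam a) b)          = Just (Clo a (Slash b))                 -- (Beta)
    step (Clo (App a b) s)        = Just (App (Clo a s) (Clo b s))         -- (App)
    step (Clo (Lam a) s)          = Just (Lam (Clo a (Lift s)))            -- (Lambda)
    step (Clo (Var 1) (Slash a))  = Just a                                 -- (FVar)
    step (Clo (Var n) (Slash _))  = Just (Var (n - 1))                     -- (RVar)
    step (Clo (Var 1) (Lift _))   = Just (Var 1)                           -- (FVarLift)
    step (Clo (Var n) (Lift s))   = Just (Clo (Clo (Var (n - 1)) s) Shift) -- (RVarLift)
    step (Clo (Var n) Shift)      = Just (Var (n + 1))                     -- (VarShift)
    step _                        = Nothing

    -- A naive normalizer (may diverge on terms without a normal form).
    normalize :: Term -> Term
    normalize t = case step t of
      Just t' -> normalize t'
      Nothing -> case t of
        Var n   -> Var n
        Lam a   -> Lam (normalize a)
        App a b -> case normalize a of
                     Lam a' -> normalize (Clo a' (Slash b))   -- a new (Beta) redex appeared
                     a'     -> App a' (normalize b)
        Clo a s -> normalize (Clo (normalize a) s)

    -- (\x. x) y, where y is the free index 1, reduces to y:
    -- ghci> normalize (App (Lam (Var 1)) (Var 1))
    -- Var 1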
Lambda-upsilon calculus is a new simple calculus of explicit substitutions. In this paper we explore its properties, namely we prove that it correctly implements beta-reduction, it is confluent, its simply typed version is strongly normalizing. We associate with lambda-upsilon an abstract machine called the U-machine and we prove that the U-machine correctly implements lambda-upsilon.
This paper published in POPL'94 gives a systematic description of several calculi of explicit substitutions. These systems are orthogonal and have easy proofs of termination of their substitution calculus. The last system, called lambda-upsilon, entails a very simple environment machine for strong normalization of lambda-terms.
Lambda calculus is the basis of functional programming and higher order proof assistants. However, little is known about combinatorial properties of lambda terms, in particular, about their asymptotic distribution and random generation. This paper tries to answer questions like: How many terms of a given size are there? What is a "typical" structure of a simply typable term? Despite their ostensible simplicity, these questions still remain unanswered, whereas solutions to such problems are essential for testing compilers and optimizing programs whose expected efficiency depends on the size of terms. Our approach toward the afore-mentioned problems may be later extended to any language with bound variables, i.e., with scopes and declarations. This paper presents two complementary approaches: one, theoretical, uses complex analysis and generating functions, the other, experimental, is based on a generator of lambda-terms. Thanks to de Bruijn indices, we provide three families of formulas for the number of closed lambda terms of a given size and we give four relations between these numbers which have interesting combinatorial interpretations. As a by-product of the counting formulas, we design an algorithm for generating lambda terms. Performed tests provide us with experimental data, like the average depth of bound variables and the average number of head lambdas. We also create random generators for various sorts of terms. Thereafter, we conduct experiments that answer questions like: What is the ratio of simply typable terms among all terms? (Very small!) How are simply typable lambda terms distributed among all lambda terms? (A typable term almost always starts with an abstraction.) In this paper, abstractions and applications have size 1 and variables have size 0.
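To give one concrete instance of such counting formulas under the size convention stated at the end of the abstract (abstractions and applications of size 1, variables of size 0), here is a naive Haskell sketch of a recurrence consistent with that convention: t n m counts the terms of size n whose free de Bruijn indices are all smaller than m, so closed terms of size n are counted by t n 0. It is a sketch, not the paper's formulas verbatim, and it is exponential as written (memoization is needed beyond small sizes).

    -- Counting lambda-terms with de Bruijn indices, where a variable
    -- has size 0 and an abstraction or an application has size 1.
    module CountLambda where

    t :: Int -> Int -> Integer
    t 0 m = fromIntegral m                    -- a variable: one of the m available indices
    t n m = t (n - 1) (m + 1)                 -- an abstraction over a body of size n-1
          + sum [ t k m * t (n - 1 - k) m     -- an application splitting the remaining size
                | k <- [0 .. n - 1] ]

    -- Closed terms of size 0, 1, 2, ...
    -- ghci> [ t n 0 | n <- [0 .. 5] ]
    -- [0,1,3,14,82,579]

For instance, the three closed terms of size 2 are \x.\y.x, \x.\y.y and \x.(x x).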
A simple way of counting lambda terms.
Affine lambda-terms are lambda-terms in which each bound variable occurs at most once, and linear lambda-terms are lambda-terms in which each bound variable occurs exactly once. In this paper we count the number of affine closed lambda-terms of size n, linear closed lambda-terms of size n, affine closed beta-normal forms of size n and linear closed beta-normal forms of size n, for several measures of the size of lambda-terms. From these formulas, we show how to derive programs for generating all the terms of size n for each class. The foundation of all of this is a specific data structure, made of contexts in which one counts all the holes at each level of abstraction by lambdas.
We describe the foundations of Obj-a, a framework, or generic calculus, for modeling object-oriented programming languages. This framework provides a setting for a formal operational semantics of object based languages, in the style of the Lambda Calculus of Objects of Fisher, Honsell, and Mitchell. As a formalism for specification, Obj-a is arranged in modules, permitting a natural classification of many object-based calculi according to their features. In particular, there are modules for calculi of non-mutable objects (i.e., functional object-calculi) and for calculi of mutable objects (i.e., imperative object-calculi). As a computational formalism, Obj-a is based on rewriting rules. However, classical first-order term rewriting systems are not appropriate to reflect aspects of implementation practice such as sharing, cycles in data structures and mutation. Therefore, the notion of addressed terms and the corresponding notion of addressed term rewriting are developed.
A slightly revised and shortened version has been published at the World Congress on Formal Methods (FM'99), 20-24 September 1999, Toulouse, France, Springer LNCS vol. 1709, pp. 963-982.
A proof of well-foundedness of the recursive path ordering based on simple principles.
This paper studies three orderings, useful in theorem proving, especially for proving termination of term rewriting systems: the recursive decomposition ordering with status, the recursive path ordering with status and the closure ordering. It proves the transitivity of the recursive path ordering, the strict inclusion of the recursive path ordering in the recursive decomposition ordering, the totality of the recursive path ordering -- therefore of the recursive decomposition ordering --, the strict inclusion of the recursive decomposition ordering in the closure ordering and the stability of the closure ordering by instantiation.
The ability to use a polynomial interpretation to prove termination of a rewrite system naturally prompts the question as to what restriction on complexity this imposes. The main result of this paper is that a polynomial interpretation termination proof of a rewrite system which computes a number-theoretic function implies a polynomial bound on that function's rate of growth.
This paper presents the first version of the recursive decomposition ordering with status.
Note: The version presented at the conference, which is the one given here, had a title slightly different from the one published in the final proceedings, namely Uniform Termination of Term Rewriting Systems: a new Recursive Decomposition Ordering and its Implementation.
In this paper, which appeared in Theoretical Computer Science, vol. 132 (1994), pp. 395-401, we reduce the problem of termination of one-rule rewrite systems to problems somewhat more related to rewrite systems, namely to Post correspondence problems and to termination of semi-Thue systems. The proofs we obtain this way are shorter, and we expect other interesting applications of these codings. In particular, the second part proposes a simulation of semi-Thue systems by one-rule systems.
Inductive theorems are properties valid in the initial algebra. A now popular tool for proving them in equational theories or abstract data types is based on proof by consistency. This method uses a completion procedure and requires two essential properties of the specification, namely relative completeness and omega-completeness. This paper investigates ways of proving them. For the first one, the complement algorithm is presented. It is based on unification and computation of coverings and complements. For the second one, a technique based on discrimination of pairs of normal forms is explained and illustrated through examples.
The identities which are stable for power extensions of heterogeneous algebras are characterized by linearity and regularity properties. These results are used to study nondeterministic models of abstract data types and identities they satisfy. They constitute an adequate framework for the study of algebraic sets in heterogeneous algebras.
This is the French version (the introduction and the first two chapters) of my thèse d'État defended on the 11th September 1979. For an English translation of the first chapters, see Algebraic and Relational Approach to Abstract Data Types and their Representations.
It is interesting because it prefigures Evolving Algebras, aka Abstract State Machines. See the Annotated papers of Yuri Gurevich, especially Sequential Abstract State Machines capture Sequential Algorithms.
The origin of this work is Jean-Luc Rémy's PhD (1974): the French version is entitled Structures d'information, formalisation des notions d'accès et de modification d'une structure de donnée and an English translation is Information Structure: Formalisation of the Notions of Access and of Data
These are notes I wrote in 1972 for my colleagues in Nancy, when I was fascinated by the lambda-calculus and Scott's model. In fact, I did not work on the lambda-calculus again until 1993, when I became fascinated by explicit substitutions.
This is a book published in 1978 of which I was a co-author.
This is a text on the history of Computer Science in Lorraine, which I wrote for the Encyclopédie Illustrée de la Lorraine, Histoire des Sciences et des Techniques, Volume Sciences exactes, Éditions Serpenoises, pp. 105-116.
A text written in memory of Alain Colmerauer (June 2020).
A story of my first steps in Computer Science.
In this paper we show that a limited vocabulary can lead to strange and irregular situations. A complete analysis is given using five simple canonical case studies.
This talk was given as the banquet speech at the 3rd UNIF workshop, in 1989.
This text in French teases the kind of research done in France at that time (1, 2, 3).