Languages, Compilation, and Semantics LIP Seminar in 2018
9th Edition, 2018, December 18th
Information
The seminar will be held at Amphi E (ground floor), Monod Site, ENS Lyon. The meeting is open to all members of FIL, i.e., the labs LIP, LIRIS, and CITI.
Program
9h00-9h30 | Welcome | |
9h30-10h30 | Claire Maiza (Verimag) | In this talk, we present a survey of multi-core timing analyses that include delays due to interference (shared memory, shared bus). We show that interference analysis is feasible in terms of both scalability and precision. We illustrate this point with our interference analysis, in which the interference delay is integrated into the schedulability analysis. We show that applying this analysis to a real platform (the Kalray MPPA2) requires a carefully designed software implementation and hardware configuration. |
10h30-11h00 | Coffee break | |
11h00-11h45 | Hélène Coullon (LS2N) | With the emergence of large-scale virtualized distributed infrastructures such as Cloud, Fog and Edge computing, automating the deployment of distributed software is of great importance to IT administrators and developers. While many production tools already help developers automate their deployment tasks, academic research is interested in safe and verified models that ensure the correct behavior of a software configuration or reconfiguration. In this talk I will present Madeus, a formal component model for the deployment of distributed software that improves deployment efficiency. Madeus allows more concurrency than other deployment models, which implies greater complexity and a greater propensity for error. Hence, I will also introduce ongoing work on verifying properties of Madeus deployments by model checking. Finally, I will briefly present ongoing work on extending Madeus to dynamic software reconfiguration. |
11h45-12h30 | Colin Riba (LIP) | This talk surveys a Curry-Howard perspective on Rabin's Tree Theorem, the decidability of Monadic Second-Order Logic (MSO) on infinite trees.
The proof of Rabin's Tree Theorem proceeds by effective translations of MSO-formulae to tree automata. We show that the operations on automata used in these translations can be organized in a deduction system based on intuitionistic linear logic (ILL). We propose a computational interpretation of this deduction system along the lines of the Curry-Howard proofs-as-programs correspondence. This interpretation, relying on the usual technology of game semantics, maps proofs to strategies in two-player games generalizing the usual acceptance games of tree automata. |
8th Edition, 2018, October 4th
Information
The seminar will be held at Room 003, Site Buisson, ENS Lyon. The meeting is open to all members of the labs LIP, LIRIS, and CITI.
Program
9h00-9h30 | Welcome | |
9h30-10h30 | Simone Martini (U. Bologne/Collegium de Lyon 2018-2019) | A Conceptual History of Programming Languages aims to be an organic tale of how some of the notions present in programming languages entered the field, how their semantics has been modified over the years, and which linguistic mechanisms were proposed to “name” those concepts in specific languages. This evolution happened in dialogue with, and under the influence of, important areas of mathematics, which, at some point across the sixties and the seventies, were identified with mathematical logic (and proof theory in particular). Contrary to most folklore, however, the explicit recognition of the influence of mathematical logic on programming language design was not there from the beginning. Moreover, concepts of the same name (e.g. types) have slightly different meanings in the two fields. I will describe this phenomenon for the notion of (data) type, starting with the fifties (Fortran and Algol) and arriving at the early eighties (abstract data types and object-oriented languages). |
10h30-11h00 | Coffee break | |
11h00-11h45 | Emmanuel Coquery (U. LyonI/LIRIS) | Views are named queries in databases. They can be used to store the results of costly queries in tables; such views are called materialized views. They can also be used to restrict access to data, by allowing users to query only the views and forbidding direct access to the tables; we call such views “access control views”. We consider the following problem: given a set of access control views on some tables and a set of materialized views on the same tables, devise a set of access control views that protects the materialized views. This problem boils down to identifying the information accessible from both sets of views. We give an overview of a sound but incomplete algorithm for the case where views are restricted to conjunctive queries. |
11h45-12h30 | Damiano Mazza (CNRS/LIPN) | The Cook-Levin theorem (the statement that SAT is NP-complete) is a
fundamental result of complexity theory. We will give a proof of this
theorem entirely based on the properties of a certain type system for a
simple Turing-complete programming language. Such a type system arises
from very general considerations, formalized in the language of category
theory, relating types and program approximations. |
7th Edition, 2018, July 5th
Information
The seminar will be held at the “Salle des Thèses”, Building MGN1, Monod site, ENS Lyon. The meeting is open to all members of the labs LIP, LIRIS, and CITI.
Program
9h00-9h30 | Welcome | ||
9h30-10h30 | Nicolas Halbwachs (Verimag) | This is joint work with Rémy Boutonnet.
Program analysis by abstract interpretation using relational abstract domains — like polyhedra or octagons — easily extends from state analysis (construction of reachable states) to relational analysis (construction of input-output relations). In this talk, we exploit this extension to enable interprocedural program analysis, by constructing relational summaries of procedures. In order to improve the accuracy of procedure summaries, we propose a method to refine them into disjunctions of relations, these disjunctions being directed by preconditions on input parameters. (A small illustrative sketch of such a disjunctive summary is given after this program.) | |
10h30-11h00 | Coffee break | ||
11h00-11h45 | Matthieu Moy (LIP) |
Simple, single-core processors have been the main execution platform for critical real-time software for years. Such software can be handwritten in imperative languages, but code generation from higher-level languages such as the synchronous data-flow language SCADE (an industrial version of the Lustre language) is also used in production today.
Single-core processors are reaching their limits. Performance or energy-efficiency constraints are leading the industry towards multi-core or many-core architectures, including for critical systems. These architectures raise major new challenges for timing analysis. In this talk, we present a code generation flow from SCADE that produces parallel and yet predictable code for the Kalray MPPA architecture. The code generation scheme is designed together with a timing analysis that takes into account interference between cores when they access the local memory. This work was performed as part of the CAPACITES project: http://capacites.minalogic.net/ | |
11h45-12h30 | Pierre Clairambault (LIP) |
Higher-Order Model-Checking (HOMC) has recently emerged as an approach
to automated verification of higher-order programs, prompted by Ong's
2006 result that Monadic Second Order Logic (MSO) is decidable on
infinite trees generated by Higher-Order Recursion Schemes (HORS). It is
under active development, with implementations reporting encouraging
results despite an awful theoretical worst case complexity.
In principle, HOMC algorithms work by reduction to Ong's theorem. In practice, this often involves CPS translations, which are costly in terms of complexity; so, to obtain tight complexity bounds, people have instead reproved variations of Ong's theorem for extensions of HORS (with data, or call-by-value) and for drastically restricted fragments of MSO. In this talk, I will introduce Linear HORS (LHORS), an extension of HORS with additional type constructors imported from Linear Logic. Data and call-by-value evaluation admit a finer translation to LHORS exploiting the idea of linearly-used continuations. Moreover, LHORS enjoy a decidable model-checking problem, whose complexity essentially ignores linear types. In this way we recover and extend several developments that were originally proved independently of Ong's theorem. This is joint work with Andrzej Murawski and Charles Grellois, presented at POPL'18. |
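The following is a small, hedged illustration of the disjunctive relational summaries mentioned in Nicolas Halbwachs's talk above. The toy procedure and the way the summary is written are our own illustrative choices, not material from the talk.

```ocaml
(* A toy procedure whose input-output behaviour is naturally summarised
   by a disjunction of relations, each directed by a precondition on the
   input parameter x (illustrative example only). *)
let abs (x : int) : int =
  if x >= 0 then x else -x

(* A disjunctive relational summary in the spirit of the talk:
     x >= 0  ==>  result = x
     x <  0  ==>  result = -x
   A single convex relation (one polyhedron or octagon) cannot capture
   this input-output behaviour precisely; the disjunction, directed by
   the precondition on x, can. *)
```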
6th Edition, 2018, April 26th
Information
The seminar will be held in the room at 1 place de l'école, ENS Lyon, Monod site. The meeting is open to all members of the labs LIP, LIRIS, and CITI.
Program
9h00-9h30 | Welcome | ||
9h30-10h30 | Warwick Tucker (Uppsala University & LIP) | In this talk we will discuss some powerful techniques from the field of computer-aided proofs. The methods we will present are all based on set-valued mathematics (interval analysis), which is well suited to computational proofs where the underlying problem is continuous rather than discrete. We will also talk about some concrete mathematical problems that we are currently trying to solve using the above-mentioned methods. (A small interval-arithmetic sketch is given after this program.) | |
10h30-11h00 | Coffee break | ||
11h00-11h45 | Ludovic Henrio (I3S) |
Active objects are a programming paradigm strongly inspired by actors. In this talk, I will first present the active object programming model and the different languages that implement it.
In the Scale team, we develop one particular active object language, the ASP model, and its reference implementation, ProActive.
This will allow me to show how active objects can be used to implement software components, and how we extended the model to support local multi-threading. I will also illustrate the efficiency and expressiveness of this model.
Finally, I will present different efforts in applying formal methods to this programming model and to software components in order to ensure the correctness of programming languages and of distributed applications.
| PDF PPTX |
11h45-12h30 | Laure Daviaud (Warwick University) |
We define two classes of functions, called regular (respectively,
first-order) list functions, which manipulate objects such as lists,
lists of lists, pairs of lists, lists of pairs of lists, etc. The
definition is in the style of regular expressions: the functions are
constructed by starting with some basic functions (e.g. projections from
pairs, or head and tail operations on lists) and putting them together
using four combinators (most importantly, composition of functions).
Our main results are that first-order list functions are exactly the
same as first-order transductions, under a suitable encoding of the
inputs; and the regular list functions are exactly the same as
MSO-transductions.
This is joint work with Mikolaj Bojanczyk and Krishna Shankara Narayanan. (A rough sketch of this style of definition is given after this program.) |
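The following OCaml fragment is a rough, hedged illustration of the style of definition described in Laure Daviaud's abstract above: a few basic functions on pairs and lists combined by function composition. The combinator names and the derived function are our own illustrative choices, not the paper's actual definitions.

```ocaml
(* Basic functions in the style described in the abstract: projections
   from pairs and head/tail operations on lists (for simplicity, these
   fail on empty lists). *)
let proj1 (x, _) = x
let proj2 (_, y) = y
let head = function x :: _ -> x | [] -> failwith "head of empty list"
let tail = function _ :: xs -> xs | [] -> failwith "tail of empty list"

(* The most important combinator: composition of functions. *)
let ( >> ) f g x = g (f x)

(* A derived list function built from the pieces above, using List.map as
   a stand-in for a "map" combinator: given a list of pairs whose first
   components are lists, return the head of each first component. *)
let heads_of_firsts pairs = List.map (proj1 >> head) pairs

(* Example: heads_of_firsts [([1; 2], "a"); ([3], "b")] = [1; 3] *)
```

In the paper, a small fixed set of combinators suffices to capture exactly the first-order (respectively MSO) transductions; the snippet above only conveys the flavour of such definitions.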
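As a concrete, hedged illustration of the set-valued (interval) arithmetic underlying the computer-aided proofs discussed in Warwick Tucker's talk above, here is a minimal OCaml sketch of our own. It ignores directed rounding, which any rigorous implementation would have to handle.

```ocaml
(* Minimal interval arithmetic: an interval [lo, hi] encloses every real
   value a quantity may take.  Illustrative only: a rigorous version must
   round lo downward and hi upward at each operation. *)
type interval = { lo : float; hi : float }

let add a b = { lo = a.lo +. b.lo; hi = a.hi +. b.hi }

let mul a b =
  let products = [ a.lo *. b.lo; a.lo *. b.hi; a.hi *. b.lo; a.hi *. b.hi ] in
  { lo = List.fold_left min infinity products;
    hi = List.fold_left max neg_infinity products }

(* Example: if x lies in [1, 2] and y in [-1, 3], then x * y is
   guaranteed to lie in [-2, 6], whatever the true values are. *)
let () =
  let x = { lo = 1.; hi = 2. } and y = { lo = -1.; hi = 3. } in
  let s = add x y and p = mul x y in
  Printf.printf "x + y in [%g, %g], x * y in [%g, %g]\n" s.lo s.hi p.lo p.hi
```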
5th Edition, 2018, March 8th
Information
The seminar will be held at Amphi J, ENS Lyon, Monod site. The meeting is open to all members of the labs LIP, LIRIS, and CITI.
Program
9h00-9h30 | Welcome | ||
9h30-10h30 | Albert Cohen (Inria) |
Deep learning models with convolutional and recurrent networks are
ubiquitous and analyze massive amounts of audio, image, video, text
and graph data, with applications in automatic translation,
speech-to-text, scene understanding, ranking user preferences, ad
placement, etc. Competing frameworks for building these networks, such
as TensorFlow, Chainer, CNTK, Torch/PyTorch, Caffe1/2, MXNet, and
Theano, explore different tradeoffs between usability and
expressiveness, research or production orientation, and supported
hardware. They operate on a DAG of computational operators, wrapping
high-performance libraries such as CUDNN (for NVIDIA GPUs) or NNPACK
(for various CPUs), and automate memory allocation, synchronization,
and distribution. Custom operators are needed where the computation does
not fit existing high-performance library calls, usually at a high
engineering cost. This is frequently required when new operators are
invented by researchers. Such operators suffer a severe performance
penalty, which limits the pace of innovation. Furthermore, even if
there is an existing runtime call these frameworks can use, it often
does not offer optimal performance for a user's particular network
architecture and dataset, missing optimizations between operators as
well as specialization to the size and shape of data.
We will survey the work-in-progress design of (1) a language close to the mathematics of deep learning called Tensor Comprehensions, featuring interesting developments in the areas of automatic range inference, declarative array programming, and data-flow modeling of recurrent networks; (2) a polyhedral Just-In-Time compiler to convert a mathematical description of a deep learning DAG into a CUDA kernel with delegated memory management and synchronization, also providing optimizations such as operator fusion and specialization for specific sizes; (3) a high level metaprogramming environment and compilation cache populated by an autotuner, acting as a built-to-order library. In a first paper, we demonstrate the suitability of the polyhedral framework to construct a domain-specific optimizer effective on state-of-the-art deep learning models on GPUs. Our flow reaches up to 4x speedup over NVIDIA libraries on kernels relevant to the Machine Learning Community, and on an actual model used in production at Facebook. TC also facilitates algorithmic exploration, exposing up to 2 orders of magnitude speedup on research layers. It is open source, integrated with mainstream frameworks Caffe2 (production-oriented) and PyTorch (research-oriented). TC is still at an early stage, and looking for contributions and collaboration. https://research.fb.com/announcing-tensor-comprehensions | |
10h30-11h00 | Coffee break | ||
11h00-11h45 | Abdallah Arioua (LIRIS) | Repairing techniques for relational databases have leveraged integrity constraints to detect and then resolve errors in the data. User guidance has started to be employed in this setting to avoid a prohibitive exploration of the search space of solutions. In this talk, we present a user-guided repairing technique for Knowledge Bases (KBs) enabling updates suggested by the users to resolve errors. KBs exhibit more expressive constraints than relational tables, such as tuple-generating dependencies (TGDs) and negative rules (a form of denial constraints). We consider TGDs and a notable subset of denial constraints, named contradiction detecting dependencies (CDDs). We propose user-guided polynomial-delay algorithms that ensure the repairing of the KB in the extreme cases of interaction among these two classes of constraints. To the best of our knowledge, such interaction is so far unexplored even in repairing methods for relational data. We prove the correctness of our algorithms and study their feasibility in practical settings. We conduct an extensive experimental study on synthetically generated KBs and a real-world inconsistent KB equipped with TGDs and CDDs. We show the practicality of our proposed interactive strategies by measuring the actual delay time and the number of questions required in our interactive framework. |
11h45-12h30 | Guilhem Jaber (LIP) | This talk will present SyTeCi, the first general automated tool to check contextual equivalence for programs written in a higher-order language with references (i.e., local mutable state), corresponding to a fragment of OCaml.
After introducing the notion of contextual equivalence, we will see through some examples why such equivalences are hard to prove (reentrant calls, private state). As we will see, such examples can be found in many different programming languages. Then, we will introduce SyTeCi, a tool to automatically check such equivalences. This tool is based on a reduction of the problem of contextual equivalence of two programs to the problem of reachability of “error states” in transition systems of memory configurations. Since contextual equivalence is undecidable (even in a finitary setting), the non-reachability problem for such transition systems is undecidable too. However, one can apply model-checking techniques (predicate abstraction, summarization of pushdown systems) to check non-reachability via some approximations. This allows us to automatically prove many non-trivial examples from the literature that could previously only be proved by hand. We will end this talk with a presentation of a prototype implementing this work.
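To give a flavour of the equivalences targeted by SyTeCi, here is a classic kind of example with private state, written by us as a hedged illustration (it is not claimed to be one of the tool's actual benchmarks).

```ocaml
(* Two "counter factories".  Each call creates a private reference and
   returns an increment function.  The first counts up; the second counts
   down and negates the result.  Because the reference is local and never
   leaked, no OCaml context can tell the two apart: the programs are
   contextually equivalent, even though their internal states differ. *)
let counter_up () =
  let r = ref 0 in
  fun () -> r := !r + 1; !r

let counter_down () =
  let r = ref 0 in
  fun () -> r := !r - 1; -(!r)
```

Proving such an equivalence by hand typically requires reasoning about all possible contexts; as described in the abstract, SyTeCi aims to discharge this kind of obligation automatically.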