Monthly lattice and crypto meetings
!!! The lattice and crypto meetings have now stopped !!!
The meetings are organized by Fabien Laguillaumie, Benoît Libert
and Damien Stehlé, and financially supported by
ERC Starting Grant ERC-2013-StG-335086-LATTAC.
If you want to attend, please contact the organizers: a relatively precise
headcount is required to book appropriate rooms.
07.06.2018-08.06.2018
Location: ENS de Lyon, Monod campus.
Rooms: Amphi B on Thursday afternoon and Friday morning, Amphi H on Friday afternoon.
Times: Thursday afternoon and Friday.
Introduction to isogeny-based cryptography.
Isogeny-based cryptography is a novel direction in cryptography
based on hard problems in graphs of isomorphism classes of elliptic
curves. It is well known that classical discrete-logarithm-based
cryptography on elliptic curves can be broken in polynomial time on a
quantum computer using Shor's algorithm. Isogeny-based cryptography
relies on entirely different hard mathematical problems, which have so
far resisted attacks by quantum computers.
The goal of these lectures is to review the necessary mathematical
background required to understand isogeny-based cryptography and to
explain the two different approaches: one based on ordinary curves and
one based on supersingular curves. The most important cryptographic
protocols will be explained as well as the most efficient attacks on
both systems.
Th. 14:00-16:00, Ben Smith (INRIA Saclay, École Polytechnique): Elliptic curves and isogeny graphs.
Fr. 9:30-11:30, Frederik Vercauteren (KU Leuven): Isogeny-based cryptography using ordinary elliptic curves.
Fr. 13:30-15:30, Joost Renes (U Nijmegen): Isogeny-based cryptography using supersingular elliptic curves.
Past meetings
19.04.2018-20.04.2018
Location: ENS de Lyon, Monod campus.
Room: 115 (2nd floor, South aisle).
Times: Starting at 2pm on 19.04.2018, finishing at 3pm on 20.04.2018
Michael Walter (IST): On the Bit Security of Cryptographic Primitives
We introduce a formal quantitative notion of "bit security" for a
general type of cryptographic games (capturing both decision and
search problems), aimed at capturing the intuition that a
cryptographic primitive with k-bit security is as hard to break as an
ideal cryptographic function requiring a brute force attack on a k-bit
key space. Our new definition matches the notion of bit security
commonly used by cryptographers and cryptanalysts when studying search
(e.g., key recovery) problems, where the use of the traditional
definition is well established. However, it produces a quantitatively
different metric in the case of decision (indistinguishability)
problems, where the use of (a straightforward generalization of) the
traditional definition is more problematic and leads to a number of
paradoxical situations or mismatches between theoretical/provable
security and practical/common sense intuition. Key to our new
definition is to consider adversaries that may explicitly declare
failure of the attack. We support and justify the new definition by
proving a number of technical results, including tight reductions
between several standard cryptographic problems, a new hybrid theorem
that preserves bit security, and an application to the security
analysis of indistinguishability primitives making use of
(approximate) floating point numbers. This is the first result showing
that (standard precision) 53-bit floating point numbers can be used to
achieve 100-bit security in the context of cryptographic primitives
with general indistinguishability-based security definitions. Previous
results of this type applied only to search problems, or special types
of decision problems.
Based on joint work with Daniele Micciancio:
2018/077.
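For orientation (my shorthand, not a formula from the paper): the folklore bit-security metric for search games, which the new definition is designed to match, is

  bit security = min_A log_2( T_A / eps_A ),

where T_A is the running time of adversary A and eps_A its success probability. A key point of the talk is that plugging the plain distinguishing advantage into this formula for decision games is what produces the paradoxes mentioned above.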
Kathrin Hövelmanns (Ruhr-Universität Bochum): A Modular Analysis of the Fujisaki-Okamoto Transformation
The Fujisaki-Okamoto (FO) transformation (CRYPTO 1999 and Journal of
Cryptology 2013) turns any weakly secure public-key encryption scheme
into a strongly (i.e., IND-CCA) secure one in the random oracle
model. Unfortunately, the FO analysis suffers from several drawbacks,
such as a non-tight security reduction, and the need for a perfectly
correct scheme. While several alternatives to the FO transformation
have been proposed, they have stronger requirements, or do not obtain
all desired properties. In this work, we provide a fine-grained and
modular toolkit of transformations for turning weakly secure into
strongly secure public-key encryption schemes. All of our
transformations are robust against schemes with correctness errors,
and their combination leads to several tradeoffs among tightness of
the reduction, efficiency, and the required security level of the used
encryption scheme. For instance, one variant of the FO transformation
constructs an IND-CCA secure scheme from an IND-CPA secure one with a
tight reduction and very small efficiency overhead. Another variant
assumes only an OW-CPA secure scheme, but leads to an IND-CCA secure
scheme with larger ciphertexts. We note that we also analyze our
transformations in the quantum random oracle model, which yields
security guarantees in a post-quantum setting.
Based on joint work with Dennis Hofheinz and Eike Kiltz.
2017/604.
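To make the shape of such transformations concrete, here is a minimal Python sketch of the derandomise-and-re-encrypt step followed by hashing to a session key, which is the general pattern of FO-style KEMs (the toy_keygen/toy_enc/toy_dec functions are an insecure placeholder PKE of my own, used only to make the sketch run; this is not the paper's exact transformation):

# Minimal sketch of an FO-style KEM: derandomize encryption with G(m),
# re-encrypt to check the ciphertext, and hash (m, c) to a session key.
# The "PKE" below is a toy, insecure placeholder used only to run the sketch.
import hashlib, os

def G(m):    return hashlib.sha256(b"G" + m).digest()       # coins for encryption
def H(m, c): return hashlib.sha256(b"H" + m + c).digest()   # session-key hash

def toy_keygen():
    sk = os.urandom(16)
    pk = hashlib.sha256(b"pk" + sk).digest()
    return pk, sk

def toy_enc(pk, m, coins):          # deterministic once the coins are fixed
    pad = hashlib.sha256(pk + coins).digest()[:len(m)]
    return coins + bytes(x ^ y for x, y in zip(m, pad))

def toy_dec(sk, c):
    pk = hashlib.sha256(b"pk" + sk).digest()
    coins, body = c[:32], c[32:]
    pad = hashlib.sha256(pk + coins).digest()[:len(body)]
    return bytes(x ^ y for x, y in zip(body, pad))

def encaps(pk):
    m = os.urandom(16)
    c = toy_enc(pk, m, G(m))        # derandomize with G(m)
    return c, H(m, c)               # session key from (m, c)

def decaps(sk, pk, c):
    m = toy_dec(sk, c)
    if toy_enc(pk, m, G(m)) != c:   # re-encryption check
        return None                 # (a real scheme would reject, possibly implicitly)
    return H(m, c)

pk, sk = toy_keygen()
c, k = encaps(pk)
assert decaps(sk, pk, c) == k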
Weiqiang Wen (ENS de Lyon): Toward a more accurate BKZ simulator.
The Blockwise-Korkine-Zolotarev (BKZ) lattice reduction algorithm is
central in cryptanalysis, in particular for lattice-based
cryptography. In this talk, I will present a more precise BKZ
simulator compared to that of Chen and Nguyen [ASIACRYPT’11]. Next,
I will propose a new variant of the BKZ algorithm, which can further
improve the quality of the basis produced by the original BKZ.
Based on an ongoing joint work with Shi Bai and Damien Stehlé.
01.03.2018-02.03.2018
Location: ENS de Lyon, Monod campus.
Room: 115 (2nd floor, South aisle).
Times: Starting at 3pm on 01.03.2018, finishing at 3pm on 02.03.2018
Th. 15:00-16:30, Fernando Virdia (RHUL): Estimate all the {LWE, NTRU} schemes!
Out of the 69 proposals made to NIST as part of the PQC process, 23
base their security on LWE- and NTRU-related assumptions. Each of
these also presents its own security analysis. Using and extending
the APS15 lwe-estimator, the Estimate-all-the-{LWE,NTRU}-schemes! team
proposes a cross-comparison of each scheme under every security
analysis proposed, in order to better understand both the candidates
for standardisation and the status of current lattice cryptanalysis.
In this talk we'll look at the proposed schemes and how they differ
from each other, at the proposed cost models for lattice attacks, and
at the initial results of the cross-comparison. Spoiler alert: huge
differences between models and a non-trivial interaction with hybrid
attacks.
Based on joint work with Martin R. Albrecht, Benjamin R. Curtis,
Amit Deo, Alex Davidson, Rachel Player, Eamonn Postlethwaite,
and Thomas Wunderer:
paper.
Fr. 09:30-11:00, Gottfried Herold (ENS de Lyon): New Techniques for Structural Batch Verification in Bilinear Groups with Applications to Groth-Sahai Proofs
Bilinear groups form the algebraic setting for a multitude of
important cryptographic protocols including anonymous credentials,
e-cash, e-voting, e-coupon, and loyalty systems. It is typical of such
crypto protocols that participating parties need to repeatedly verify
that certain equations over bilinear groups are satisfied, e.g., to
check that computed signatures are valid, commitments can be opened,
or non-interactive zero-knowledge proofs verify correctly. Depending
on the form and number of equations this part can quickly become a
performance bottleneck due to the costly evaluation of the bilinear
map.
To ease this burden on the verifier, batch verification techniques
have been proposed that allow one to combine and check multiple equations
probabilistically using fewer operations than checking each equation
individually. In this work, we revisit the batch verification problem
and existing standard techniques. We introduce a new technique which,
in contrast to previous work, enables us to fully exploit the
structure of certain systems of equations. Equations of the
appropriate form naturally appear in many protocols, e.g., due to the
use of Groth-Sahai proofs.
The beauty of our technique is that the underlying idea is pretty
simple: we observe that many systems of equations can alternatively be
viewed as a single equation of products of polynomials for which
probabilistic polynomial identity testing following Schwartz-Zippel
can be applied. Comparisons show that our approach can lead to
significant improvements in terms of the number of pairing
evaluations. For example, for the BeleniosRF voting system presented
at CCS 2016, we can reduce the number of pairings (required for ballot
verification) from 4k+140, as originally reported by Chaidos et al.,
to k+7.
(k is a parameter depending on the number of alternatives in the
ballot and is typically small, possibly k=1)
Based on joint work with Max Hoffmann, Michael Klooß, Carla
Ràfols and Andy Rupp.
2017/802.
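The Schwartz-Zippel idea behind the batching can be illustrated without any pairings: to test whether two products of polynomials agree as polynomials, evaluate both sides at a random point of a large field and compare. A small self-contained Python sketch (toy parameters and my own function names, not the protocol itself):

# Probabilistic identity test for products of polynomials (Schwartz-Zippel):
# prod f_i = prod g_j as polynomials is checked by evaluating at a random point
# modulo a large prime; a disagreement is detected, per trial, except with
# probability at most (total degree)/p.
import random

p = (1 << 127) - 1          # a large prime (2^127 - 1); toy choice

def ev(poly, x):            # evaluate a coefficient list [c0, c1, ...] at x mod p
    acc = 0
    for c in reversed(poly):
        acc = (acc * x + c) % p
    return acc

def products_equal(fs, gs, trials=1):
    for _ in range(trials):
        x = random.randrange(p)
        lhs = rhs = 1
        for f in fs:
            lhs = lhs * ev(f, x) % p
        for g in gs:
            rhs = rhs * ev(g, x) % p
        if lhs != rhs:
            return False
    return True

# (x+1)(x+2) equals x^2+3x+2 as a product of polynomials; (x+1)(x+3) does not.
assert products_equal([[1, 1], [2, 1]], [[2, 3, 1]])
assert not products_equal([[1, 1], [3, 1]], [[2, 3, 1]])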
Fr. 14:00-15:30, Radu Titiu (ENS de Lyon and Bitdefender): Tighter Security Proofs for Lattice-Based PRFs
We provide tight security proofs for several lattice-based
pseudorandom functions for which we eliminate the concrete security
loss proportional to the number Q of adversarial queries. We achieve
this for a lattice-based instantiation of the
Goldreich-Goldwasser-Micali (GGM) construction, the key-homomorphic
PRF of Banerjee and Peikert (Crypto 2014) and the synthesizer-based
realization of Banerjee et al. (Eurocrypt 2012). In the original
security proofs, the reductions rely on a Q-secret instance of the
Learning-With-Errors assumption (LWE), which eventually degrades the
concrete security bound by a factor Θ(Q). We first show that
this gap can be avoided by relying on the lossy mode of LWE, which
allows tightly reducing LWE to its Q-secret variant in larger
dimension. Then, we provide dimension-preserving tight reductions for
the aforementioned PRF families by carefully randomizing LWE secrets
using the Leftover Hash Lemma. Our GGM-based construction turns out to
be the first lattice-based PRF featuring a tight security reduction
from a standard LWE assumption with polynomial approximation factors.
In addition, our GGM-based construction provably retains tight
security when the adversary is given quantum access to the PRF. While
GGM was already known to be secure in this setting, Zhandry's proof
(FOCS 2012) loses a factor Θ(Q^3). To eliminate this cubic gap,
we tightly reduce LWE to a problem, called "Tons-of-Secrets
Learning-With-Errors", which directly implies a pseudorandom
generator with oracle indistinguishability.
Based on joint work with Benoît Libert and Damien Stehlé.
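As a reminder of the generic GGM template mentioned above (standard background, not the lattice-based instantiation of the talk): from a length-doubling PRG G(s) = (G_0(s), G_1(s)), the PRF on an n-bit input is

  F_K(x_1 x_2 ... x_n) = G_{x_n}( ... G_{x_2}( G_{x_1}(K) ) ... ),

and the classical security proof is a hybrid argument over the n tree levels, which is where a query-dependent loss typically enters.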
18.01.2018-19.01.2018
Location: ENS de Lyon, Monod campus.
Room: 115 (2nd floor, South aisle).
Work room, for around the talks: M7 meeting room (level 3, end of the South-East corridor).
Times: Starting at 2pm on 18.01.2018, finishing at 3pm on 19.01.2018
Th. 14:00-15:30, Rafaël del Pino (ENS Paris and IBM Zürich): Practical Quantum-Safe Voting from Lattices
In this talk I will present an efficient lattice-based e-voting scheme.
Taking inspiration from a work by Cramer, we construct a voting protocol
that relies on homomorphic commitments and zero-knowledge OR-proofs.
However, adapting this scheme to lattices comes with some hurdles:
zero-knowledge proofs on lattices are large and inefficient, and homomorphic
commitments are limited in their additive property, as summing commitments
increases their noise. I will show how to overcome these issues and obtain
a practical voting scheme.
Based on joint work with Vadim Lyubashevsky, Gregory Neven and Gregor Seiler:
2017/1235.
Fr. 09:30-11:00, Guillaume Bonnoron (IMT Atlantique): Large FHE gates from tensored homomorphic accumulator
Homomorphic encryption is now in its fourth generation, that of the
bootstrapped schemes where the noise is kept constant throughout the
circuit evaluation. I will present in this talk an improvement of FHEW
[DucasMicciancio14]. The idea is to use a tensored accumulator to
express more complex gates, beyond NAND. With this trick, we can
evaluate a gate on 6 input bits in less than 6 seconds. This work is
orthogonal to that of TFHE from Chillotti et al.
Based on joint work with Léo Ducas and Max Fillinger:
2017/996.
Fr. 14:00-15:30, Thomas Prest (Thales): Solving Bézout Equations using the Field Norm and Applications to NTRU
I will present a cute algorithm for solving Bézout-style equations in the
ring Z[x]/(phi), where phi is a cyclotomic polynomial. This algorithm uses
the field norm to map such equations into smaller rings, before lifting
the solutions back to the original ring. For schemes based on *complete*
NTRU lattices, it makes the key generation more efficient by a factor 10
in speed and 100 in memory. The technique is similar to the one used in
"overstretched NTRU", but is here used for constructive purposes.
Based on joint work with Thomas Pornin.
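To make the descent concrete in the power-of-two case phi = x^n + 1 (standard presentation of the technique; the notation is mine): the field norm down to Z[y]/(y^{n/2}+1), with y = x^2, satisfies N(f)(x^2) = f(x) f(-x), and if F', G' solve the subring equation

  N(f) G' - N(g) F' = q   in Z[y]/(y^{n/2}+1),

then F(x) = g(-x) F'(x^2) and G(x) = f(-x) G'(x^2) satisfy f G - g F = q in Z[x]/(x^n+1), since f(x) f(-x) = N(f)(x^2) and g(x) g(-x) = N(g)(x^2). Recursing halves the degree at every step, which is where the gains in speed and memory come from.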
21.12.2017-22.12.2017
Location: ENS de Lyon, Monod campus.
Room: 115 on Thursday, 116 on Friday (both on 2nd floor, South aisle).
Work room, for around the talks: 'Salle du Conseil du LIP' (3rd floor, next to Amphi B) on Thursday, 316C on Friday.
Times: Starting at 2pm on 21.12.2017, finishing at 3pm on 22.12.2017
Th. 14:00-15:30, Yongsoo Song (SNU): Homomorphic Encryption for Approximate Arithmetic
We suggest a method to construct a homomorphic encryption scheme for
approximate arithmetic. It supports approximate addition and
multiplication of encrypted messages, together with the rescaling
procedure for managing the magnitude of plaintexts. This procedure
truncates a ciphertext into a smaller modulus, which results in
a rounding of the plaintext after homomorphic operations. We also propose a
new batching method for the RLWE-based construction so that a plaintext
polynomial of characteristic zero is mapped to a message vector of
complex numbers via the complex canonical embedding map.
In our construction, the bit size of the ciphertext modulus grows linearly
in the circuit depth thanks to the rescaling procedure, while all previous works
either require an exponentially large modulus or expensive
computations such as bootstrapping or bit extraction. One important
feature of our method is that the precision loss during evaluation is
bounded by the circuit depth, and is at most one bit more than for
unencrypted approximate arithmetic such as floating-point operations.
We show by implementation that our scheme can be applied to
efficiently evaluate functions such as the multiplicative inverse,
exponential function, logistic function, and discrete Fourier
transform.
Based on joint work with Miran Kim, Andrey Kim and Jung Hee Cheon:
2016/421.
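The effect of rescaling can already be seen on unencrypted scaled integers; the following toy Python sketch (my own illustration, not the scheme itself: DELTA, encode, decode and mul_and_rescale are illustrative names) shows why one scaling factor is consumed per multiplicative level and why only about one extra bit of precision is lost.

# Fixed-point view of the rescaling step: a real number m is encoded as
# round(m * DELTA); multiplying two encodings yields scale DELTA^2, and
# rescaling divides by DELTA (with rounding), returning to scale DELTA.
# In the actual scheme this division also shrinks the ciphertext modulus
# from q to q/DELTA, which is why the modulus grows only linearly in depth.
DELTA = 2**20                              # toy scaling factor

def encode(m):   return round(m * DELTA)
def decode(ct):  return ct / DELTA

def mul_and_rescale(ct1, ct2):
    prod = ct1 * ct2                       # scale is now DELTA^2
    return (prod + DELTA // 2) // DELTA    # rescale: back to scale DELTA

x, y = 3.141592, 2.718281
ct = mul_and_rescale(encode(x), encode(y))
print(decode(ct), x * y)                   # agree up to roughly 2^-20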
Fr. 9:30-11:00, Alain Passelègue (UCLA): Non-Trivially Efficient Constructions in Obfustopia
In this talk, I will discuss recent results on non-trivial
constructions of obfuscation-related primitives. The talk will consist
of two parts. First, I will explain how to construct
indistinguishability obfuscation from non-trivially efficient
indistinguishability obfuscation (XiO), where the obfuscation of a
circuit C has size poly(k,|C|)·2^{an} for some a < 1. I will then
detail an XiO construction assuming only secret-key functional
encryption (SKFE). This leads to the first construction of IO based on
secret-key functional encryption and LWE. We can further relax the LWE
assumption to only assuming the existence of plain PKE. The second
part of the talk will focus on witness encryption and null-iO (iO for
0-circuits). I will explain how we can extend our techniques to build
non-trivially efficient witness encryption for all NP assuming only
LWE (or DBDH for relations in NC1) and why the latter result naturally
extends to get non-trivially efficient null-iO from LWE.
Based on joint works with Nir Bitansky, Zvika Brakerski, Aayush Jain,
Ilan Komargodski, Ryo Nishimaki and Daniel
Wichs: 2016/558
and 2017/874.
Fr. 14:00-15:30, Junqing Gong (ENS de Lyon): A Concise Framework for Tag-based ABE and More Instantiations
A tag-based ABE is an ABE scheme whose secret key and/or ciphertext
contain integers (i.e., tags) besides group elements in bilinear
groups. In general, a tag-based ABE will be more efficient than its
counterpart without tags. In this talk, I will review some recent
progress of building tag-based ABE for useful functionalities and show
a generic framework for tag-based ABE, which is based on Jutla-Roy IBE
[Asiacrypt, 2013]. Since the proposed framework is compatible with
Chen et al.'s (attribute-hiding) predicate encoding [EuroCrypt, 2015],
it is expressive enough to derive several new tag-based ABE
schemes. As an example, I will describe two of them: a tag-based
KP-ABE for boolean span programs and its ciphertext-policy variant.
Based on joint work with Jie
Chen: 2017/859.
16.11.2017 - 17.11.2017
Location: ENS de Lyon, Monod campus.
Room: 115 on Thursday, 116 on Friday (both on 2nd floor, South aisle).
Work room, for around the talks: 'Salle du Conseil du LIP' (3rd floor, next to Amphi B) on Thursday, 316C on Friday.
Times: Starting at 2pm on 16.11.2017, finishing at 3pm on 17.11.2017
Th. 14:00 - 15:30, Pierrick Méaux (ENS Paris): Symmetric Encryption Scheme adapted to Fully Homomorphic Encryption Scheme
Fully Homomorphic Encryption (FHE) is a recent powerful cryptographic
construction, allowing one to securely compute all functions on encrypted
data, and to get an encrypted version of the result. This construction
gives the possibility to securely outsource computations, which is a
very important property given the increasing development of limited
devices on one side, and cloud computing on the other side.
Nevertheless, in current client-server frameworks, the client devices
are too restricted to support the cost in memory and in time of FHE.
This issue can be circumvented for outsourcing computation by using FHE in
a larger framework, combining it with primitives that incur small
computation and communication costs: symmetric encryption schemes.
In this talk, we will investigate the compatibility between families of
symmetric encryption schemes and homomorphic evaluation, we will present
the FLIP family of stream ciphers designed for FHE, and we will analyze
its homomorphic behavior.
The presentation will be mostly based on joint work with Anthony
Journault, François-Xavier Standaert and Claude
Carlet: 2016/254.
Fr. 9:30 - 11:00, Romain Gay (ENS Paris): Practical Functional Encryption for Quadratic Functions
with Applications to Predicate Encryption
We present two practically efficient functional encryption schemes for
a large class of quadratic functionalities. Specifically, our
constructions enable the computation of so-called bilinear maps on
encrypted vectors. This represents a practically relevant class of
functions that includes, for instance, multivariate quadratic
polynomials (over the integers). Our realizations work over asymmetric
bilinear groups and are surprisingly efficient and easy to
implement. For instance, in our most efficient scheme the public key
and each ciphertext consist of 2n+1 and 4n+2 group elements
respectively, where n is the dimension of the encrypted vectors, while
secret keys are only two group elements. Our two schemes build on
similar ideas, but develop them in a different way in order to achieve
distinct goals. Our first scheme is proved (selectively) secure under
standard assumptions, while our second construction is concretely more
efficient and is proved (adaptively) secure in the generic group
model. As a byproduct of our functional encryption schemes, we show
new predicate encryption schemes for degree-two polynomial evaluation,
where ciphertexts consist of only O(n) group elements. This
significantly improves the O(n^2) bound one would get from
inner-product encryption-based constructions.
Based on joint work with Carmen E. Z. Baltico, Dario Catalano and Dario Fiore:
2017/151.
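In terms of functionality, and roughly paraphrasing the abstract (the notation is mine): a ciphertext encrypts a pair of vectors (x, y), a secret key is associated with a matrix F, and decryption reveals only the bilinear evaluation

  x^T F y = sum_{i,j} F_{i,j} x_i y_j,

and nothing more about x and y; multivariate quadratic polynomials correspond to the special case y = x.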
Fr. 13:30 - 15:00, Gottfried Herold (ENS de Lyon): Improved Time-Memory Tradeoffs for Lattice Sieving
One of the main approaches to solve the Shortest Vector Problem (SVP)
on lattices is Lattice Sieving, which heuristically solves SVP in
exponential (in the lattice rank) running time and memory. In the
algorithm originally proposed by M. Ajtai, R. Kumar and D. Sivakumar,
we combine pairs of lattice points to form ever shorter lattice
points. This achieves an asymptotic complexity of 2^{0.415n} time and
2^{0.208n} memory to solve SVP, ignoring subexponential factors.
There are currently two known techniques to asymptotically improve the
complexity or to provide more flexible time-memory trade-offs: The
first one is to use techniques from Locality Sensitive Hashing, due to
A. Becker, L. Ducas, N. Gama and T. Laarhoven. The second technique
is to combine more than two lattice points at a time, due to S. Bai,
T. Laarhoven and D. Stehlé, which was later improved by G. Herold and
E. Kirshanova.
In this talk, I will first present an overview of those
techniques. Then I will explain a recent result, obtained jointly with
E. Kirshanova and T. Laarhoven, on how to improve further on the results
of Herold and Kirshanova and how to combine the two techniques to
get even better time-memory trade-offs. If time permits, I will talk
about implementation challenges and open problems.
Based on joint work with Elena Kirshanova and Thijs Laarhoven.
19.10.2017 - 20.10.2017
Location: ENS de Lyon, Monod campus.
Room: 115 on Thursday, 116 on Friday (both on 2nd floor, South aisle).
Work room, for around the talks: 316C.
Times: Starting at 2pm on 19.10.2017, finishing at 3pm on 20.10.2017
Th. 14:00 - 15:30, Ilaria Chillotti (UVSQ): Faster packed homomorphic operations and efficient circuit bootstrapping for TFHE
We present several methods to improve the evaluation of homomorphic
functions in TFHE, both for fully and for leveled homomorphic
encryption. We propose two methods to manipulate packed data, in order
to decrease the ciphertext expansion and optimize the evaluation of
look-up tables and arbitrary functions in Ring-GSW based homomorphic
schemes. We also extend the automata logic, introduced in [GINX14] and
[CGGI16b], to the efficient leveled evaluation of weighted automata,
and present a new homomorphic counter called TBSR, that supports all
the elementary operations that occur in a multiplication. These
improvements speed up the evaluation of most arithmetic functions in a
packed leveled mode, with a noise overhead that remains additive. We
finally present a new circuit bootstrapping that converts LWE ciphertexts into
low-noise Ring-GSW ciphertexts in just 137ms, which makes the leveled
mode of TFHE composable, and which is fast enough to speed up
arithmetic functions, compared to the gate-by-gate bootstrapping given
in [CGGI16b]. Finally, we propose concrete parameter sets and timing
comparison for all our constructions.
Based on joint work with Nicolas Gama, Mariya Georgieva and Malika
Izabachène: 2017/430.
Fr. 9:30 - 11:00, Rafaël del Pino (ENS Paris and IBM Zürich): Efficient Amortized Zero-Knowledge
All known Zero-Knowledge protocols on lattices suffer from either high
communication complexity or high slack. One can do better if one considers
simultaneously proving the knowledge of many secrets: I will present a
protocol that achieves optimal communication complexity as well as
(quasi-)optimal slack in the setting of amortized proofs. I will also show
how to improve upon this protocol to reduce the number of secrets required
for amortization from 2^16 as in Cramer et al. (Eurocrypt '17) to less than
500.
Based on joint work with Vadim
Lyubashevsky: 2017/280.
Fr. 13:30 - 15:00, Alexandre Wallet (ENS de Lyon): On the Ring-LWE and Polynomial-LWE
Problems
The Ring Learning With Errors problem (RLWE) comes in various
forms. Vanilla RLWE is the decision dual-RLWE variant, consisting in
distinguishing from uniform a distribution depending on a secret
belonging to the dual O_K^∨ of the ring of integers O_K of a specified
number field K. In primal-RLWE, the secret instead belongs to O_K.
Both decision dual-RLWE and primal-RLWE enjoy search counterparts.
Also widely used is (search/decision) Polynomial Learning With Errors
(PLWE), which is not defined using the ring of integers O_K of a number
field K but a polynomial ring Z[x]/f for a monic irreducible f in
Z[x]. We show that there exist reductions between all of these six
problems that incur limited parameter losses. More precisely: we prove
that the (decision/search) dual to primal reduction from Lyubashevsky
et al. [EUROCRYPT 2010] and Peikert [SCN 2016] can be implemented with
a small error rate growth for all rings (the resulting reduction is
non-uniform polynomial time); we extend it to polynomial-time
reductions between (decision/search) primal RLWE and PLWE that work
for a family of polynomials f that is exponentially large as a
function of deg f (the resulting reduction is also non-uniform
polynomial time); and we exploit the recent technique from Peikert et
al. [STOC 2017] to obtain a search to decision reduction for RLWE. The
reductions incur error rate increases that depend on intrinsic
quantities related to K and f.
Based on joint work with Miruna Roșca and Damien Stehlé.
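For reference, the three distributions can be summarised as follows (standard definitions, slightly simplified; psi denotes a small error distribution, q a modulus, and the notation is mine):

  dual-RLWE:   samples (a_i, a_i·s + e_i), with a_i uniform in O_K/qO_K, s in O_K^∨/qO_K^∨, e_i drawn from psi;
  primal-RLWE: the same, with s in O_K/qO_K;
  PLWE:        the same over Z_q[x]/f, i.e., a_i uniform in Z_q[x]/f, s in Z_q[x]/f, e_i drawn from psi.

In each case the decision variant asks to distinguish such samples from uniform pairs, and the search variant asks to recover s, giving the six problems related in the talk.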
21.09.2017 - 22.09.2017
Location: ENS de Lyon, Monod campus.
Room: 115 (2nd floor, South aisle).
Work room, for around the talks: 316C.
Times: Starting at 2pm on 21.09.2017, finishing at 3pm on 22.09.2017
Th. 14:00 - 15:30, Fernando Virdia (RHUL): Revisiting the Expected Cost of Solving uSVP and Applications to LWE
Reducing the Learning with Errors problem (LWE) to the Unique-SVP
problem and then applying lattice reduction is a commonly relied-upon
strategy for estimating the cost of solving LWE-based
constructions. In the literature, two different conditions are
formulated under which this strategy is successful. One, widely used,
goes back to Gama & Nguyen's work on predicting lattice reduction
(Eurocrypt 2008); the other was recently outlined by Alkim et
al. (USENIX 2016). Since these two estimates predict significantly
different costs for solving LWE parameter sets from the literature, we
revisit the Unique-SVP strategy. We present empirical evidence from
lattice-reduction experiments exhibiting a behaviour in line with the
latter estimate. However, we also observe that in some situations
lattice reduction behaves somewhat better than expected from Alkim et
al.'s work and explain this behaviour under standard
assumptions. Finally, we show that the security estimates of some
LWE-based constructions from the literature need to be revised and
give refined expected solving costs.
Based on joint work with Martin Albrecht, Florian Göpfert and Thomas Wunderer: eprint 2017/815.
Fr. 9:30-11:00, Léo Ducas (CWI): Boosting the practical performances of Lattice Sieving
This talk is about Work In Progress on lattice sieving. We shall
present a few tricks that, while asymptotically negligible, provide
significant speed-ups in practice. Our current prototype outperforms
previous implementations by about two orders of magnitude in dimensions
60 to 80, and is less than an order of magnitude away from pruned
enumeration.
Fr. 13:30-15:00, Alice Pellet--Mary (ENS de Lyon): Quantum Attacks against
Branching Program Indistinguishability Obfuscators over GGH13
I will present a quantum attack against some candidate
obfuscators. First, I will describe an abstract obfuscator, which
captures the structure of six candidate obfuscators that have been
proposed since 2014 (in particular, the GMMSSZ obfuscator of Garg et
al. at TCC 2016). Then, I will explain how we can use recent results
of Biasse and Song (SODA 2016) and Cramer et al. (Eurocrypt 2016), to
mount an attack against this abstract obfuscator.
05.07.2017
Location: ENS de Lyon, Monod campus.
Room: B1 (4th floor, North aisle).
Work room, for around the talks: 316C.
Times: 9am to 4:30pm
09:30-11:30, Shweta Agrawal (IIT Madras): Functional Encryption for Bounded Collusions, Revisited
We provide a new construction of functional encryption (FE)
for circuits in the bounded collusion model. In this model, security of the
scheme is guaranteed as long as the number of colluding adversaries can
be a priori bounded by some polynomial q. Our construction supports
arithmetic circuits, in contrast to Boolean circuits, which have been the focus
of all prior work. The ciphertext of our scheme is succinct for circuits of
depth log n when based on Ring LWE and any constant depth when
based on standard LWE. This gives the first constructions of arithmetic
reusable garbled circuits.
Additionally, our construction achieves several desirable features:
– Our construction for reusable garbled circuits achieves the optimal
“full” simulation based security.
– When generalised to handle Q queries for any fixed polynomial Q,
our ciphertext size grows additively with Q^2. Such query dependence
on ciphertext size has only been achieved in a weaker security game
before [Agr17].
– The ciphertext of our scheme can be divided into a succinct
data-dependent component and a non-succinct data-independent
component. This makes it well suited for optimization in an
online-offline model that allows a majority of the computation to be
performed in an offline phase, before the data becomes available.
Security of our scheme may be based on the Learning With Errors
assumption (LWE) or its ring variant (Ring-LWE). To achieve our result,
we provide new public key and ciphertext evaluation algorithms in the
context of functional encryption. These algorithms are general, and may
find application elsewhere.
Based on joint work with Alon Rosen: eprint 2016/361.
12:30-14:30, Sam Kim (Stanford U.): Private Puncturable PRFs and Applications to Cryptographic Watermarking
A puncturable PRF has a master key k that can be used to evaluate the PRF at all points of the domain, and a punctured key k' that can be used to evaluate the PRF at all points but one. Standard puncturable PRFs have found numerous applications in cryptography, most notably in applications of indistinguishability obfuscation. In this talk, I will talk about some natural extensions that we can define on puncturable PRFs that gives rise to additional applications. Specifically, I will focus on the properties of privacy and programmability for puncturable PRFs and show how puncturable PRFs with these properties can be constructed from standard lattice assumptions. Additionally, I will discuss how puncturable PRFs that satisfy these properties can be used to construct a form of cryptographic watermarking.
Based on joint work with Dan Boneh, Hart Montgomery, and David J. Wu:
eprint 2017/100 and eprint 2017/380.
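For background on the standard puncturable PRFs mentioned above: the GGM tree construction is already puncturable, the punctured key being the sibling seeds along the path to the punctured input. A minimal Python sketch (SHA-256 stands in for a length-doubling PRG; this is the generic construction, not the lattice-based private or programmable variants of the talk, and all names are mine):

# GGM-based puncturable PRF: F(k, x) walks a binary tree; the punctured key
# for x* is the list of sibling seeds along the path to x*, which allows
# evaluation everywhere except at x*.
import hashlib, os

def prg(seed):                        # length-doubling PRG from SHA-256 (stand-in)
    return (hashlib.sha256(seed + b"0").digest(),
            hashlib.sha256(seed + b"1").digest())

def F(key, x_bits):
    s = key
    for b in x_bits:
        s = prg(s)[b]
    return s

def puncture(key, xstar_bits):
    siblings, s = [], key
    for b in xstar_bits:
        left, right = prg(s)
        siblings.append(right if b == 0 else left)   # keep the sibling seed
        s = left if b == 0 else right
    return siblings

def eval_punctured(pkey, xstar_bits, x_bits):
    assert x_bits != xstar_bits, "cannot evaluate at the punctured point"
    i = next(j for j in range(len(x_bits)) if x_bits[j] != xstar_bits[j])
    s = pkey[i]                        # sibling seed where the paths diverge
    for b in x_bits[i + 1:]:
        s = prg(s)[b]
    return s

k = os.urandom(32)
xstar = [1, 0, 1, 1]
pk = puncture(k, xstar)
x = [1, 0, 0, 1]
assert eval_punctured(pk, xstar, x) == F(k, x)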
14:30-16:30, Yilei Chen (Boston U.): Cryptanalyses of Candidate Branching Program Obfuscators
I will describe the new cryptanalytic attacks on the candidate
branching program obfuscator proposed by Garg, Gentry, Halevi,
Raykova, Sahai and Waters (GGHRSW) using the GGH13 graded encoding,
and its variant using the GGH15 graded encoding as specified by
Gentry, Gorbunov and Halevi.
The talk will focus on GGH15. It will be composed of 3 parts: (1) The
description of the GGHRSW obfuscator; (2) The description of the GGH15
maps; (3) The analysis of the GGH15 based obfuscator.
Based on joint work with Craig Gentry and Shai Halevi: eprint 2016/998.
10.05.2017 - 11.05.2017
Topic: Implementation.
Location: ENS Lyon, Monod campus.
Room: 116.
Work room, for around the talks: 316C.
We 10/05, 14:30-16:30, Claire Delaplace (U. Rennes): Fast Lattice-Based Encryption: Stretching SPRING
The SPRING pseudo-random function (PRF) has been described by
Banerjee, Brenner, Leurent, Peikert and Rosen at FSE 2014. It is quite
fast, only 4.5 times slower than the AES (without hardware
acceleration) when used in counter mode. SPRING is similar to the PRF
of Banerjee, Peikert and Rosen from EUROCRYPT 2012, whose security
relies on the hardness of the Learning With Rounding (LWR) problem,
which can itself be reduced to hard lattice problems. However, there
is no such chain of reductions relating SPRING to lattice problems,
because it uses small parameters for efficiency reasons.
Consequently, the heuristic security of SPRING is evaluated using
known attacks and the complexity of the best known algorithms for
breaking the underlying hard problem. We revisit the efficiency and
security of SPRING when used as a pseudo-random generator. We propose
a new variant which is competitive with the AES in counter mode
without hardware AES acceleration, and about four times slower than
AES with hardware acceleration. In terms of security, we improve some
previous analysis of SPRING and we estimate the security of our
variant against classical algorithms and attacks. Finally, we
implement our variant using AVX2 instructions, resulting in high
performance on high-end desktop computers.
This is joint work with Charles Bouillaguet, Pierre-Alain Fouque and Paul Kirchner.
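For reference, a Learning With Rounding sample replaces the random error of LWE by a deterministic rounding from modulus q down to p < q (standard definition):

  ( a , b ) with a uniform in Z_q^n and b = round( (p/q) · (<a, s> mod q) ) mod p.

SPRING's small parameters fall outside the range where the LWR-to-LWE reduction applies, which is why, as explained above, its security is assessed heuristically against known attacks.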
Th 11/05, 9:45-10:45, Thomas Prest (Thales): Easier floating-point on a
Rényi day
The Rényi divergence has already improved numerous proofs in
lattice-based cryptography (see e.g. the work of Bai et
al. [BLLSS16]). In this talk, I show that in the context of the NIST
competition (at most 2^64 queries), it provably eliminates in many
cases the need for floating-point precision higher than 53 bits. This has
many positive implications for precomputation tables (given a little
tweak), FPA-heavy computations (such as Gaussian sampling) and
rejection sampling. I emphasize that though these applications seem
feasible in software, it remains to be seen whether this is the case in
hardware.
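For reference, the Rényi divergence of order a in (1, ∞) between distributions P and Q with Supp(P) ⊆ Supp(Q) is

  R_a(P || Q) = ( sum_{x in Supp(P)} P(x)^a / Q(x)^{a-1} )^{1/(a-1)},

and the property used in such arguments is probability preservation: for any event E, Q(E) ≥ P(E)^{a/(a-1)} / R_a(P || Q), so a moderate divergence between the implemented distribution and the ideal one only moderately degrades security over a bounded number of queries.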
Th 11/05, 11:00-12:00, Thomas Prest (Thales): The Falcon signature scheme
In this talk I will present the signature scheme Falcon. It stands for
'FAst Fourier Lattice-based COmpact signatures over NTRU' and is an
instantiation of the hash-and-sign framework described in [GPV08]. I
will review some tools and rationales used in the design of this
scheme: a new Gaussian sampler which is a randomized version of the
fast Fourier nearest plane [DP16], the use of the Rényi divergence
(see previous talk) to have better parameters and the choice of optimal (in
some sense) NTRU distributions. These allow us to get signatures which
are among the most compact obtained with lattices.
This is joint work with Carlos Aguilar-Melchor, Pierre-Alain Fouque,
Vadim Lyubashevsky, Thomas Pornin and Thomas Ricosset.
Th 11/05, 13:30-15:30, Thomas Espitau (U. Paris 6): Generalized Howgrave-Graham–Szydlo
and Side-Channel Attacks Against BLISS
In this talk, we investigate the security of the BLISS lattice-based
signature scheme against side-channel attacks. BLISS is one of the
most promising candidate schemes for postquantum-secure signatures,
owing to its strong performance in both hardware and software, and its
relatively compact signature size. Several works have been devoted to
its efficient implementation on various platforms, from desktop CPUs
to microcontrollers and FPGAs, and more recent papers have also
considered its security against certain types of physical attacks,
notably fault injection and cache attacks.
We now turn to more traditional power analysis and EMA, and propose
several attacks that can yield a full key recovery in that setting,
the most striking of which exploits an easy SPA leakage in the
rejection sampling algorithm used during signature
generation. Existing implementations of that rejection sampling step,
which is essential for security, actually leak the relative norm of
the secret key in the totally real subfield of the cyclotomic field
used in BLISS. We show how an extension to power-of-two cyclotomic
fields of an algorithm due to Howgrave-Graham and Szydlo (ANTS 2004)
can be used to recover the secret key from that relative norm, at
least when the absolute norm is easy to factor (which happens for a
significant fraction of secret keys). The entire attack is validated
in practice using EMA experiments against an 8-bit AVR
microcontroller, and an implementation of our generalized
Howgrave-Graham–Szydlo algorithm in PARI/GP.
The talk will be based on this article.
This is joint work with Pierre-Alain Fouque, Benoît Gérard and Mehdi Tibouchi.
05.04.2017 - 06.04.2017
Topic: Cryptographic constructions with and without lattices.
Location: ENS Lyon, Monod campus.
Room: 116 on Thursday morning, 115 the rest of the time.
Work room, for around the talks: 316C.
We 05/04, 1:30pm-3:30pm: Florian Bourse (ENS Paris): CCA-Secure Inner-Product Functional Encryption from Projective Hash Functions
In an inner-product functional encryption scheme, the plaintexts are
vectors and the owner of the secret key can delegate the ability to
compute weighted sums of the coefficients of the plaintext of any
ciphertext. Recently, many inner-product functional encryption schemes
were proposed. However, none of the known schemes are secure against
chosen ciphertext attacks (IND-FE-CCA).
We present a generic construction of IND-FE-CCA inner-product
functional encryption from projective hash functions with homomorphic
properties. We show concrete instantiations based on the DCR
assumption, the DDH assumption, and more generally, any Matrix DDH
assumption.
The talk will be based on this article.
This is joint work with Fabrice Benhamouda and Helger Lipmaa.
4pm-6pm: Fabrice Mouhartem (ENS de Lyon): Adaptive oblivious transfer from LWE
Adaptive oblivious transfer (OT) is a protocol where a sender
initially commits to a database M_1, …, M_N. Then, a receiver can
query the sender up to k times with private indexes ρ_1, …, ρ_k so as
to obtain M_{ρ_1}, …, M_{ρ_k} and nothing else. Moreover, for each i ∈
[k], the receiver’s choice ρ_i may depend on previously obtained
messages {M_{ρ_j}}_{j< i} . Oblivious transfer with access control
(OT-AC) is a flavor of adaptive OT where database records are
protected by distinct access control policies that specify which
credentials a receiver should obtain in order to access each M_i. So
far, all known OT-AC protocols only support access policies made of
conjunctions or rely on ad hoc assumptions in pairing-friendly groups
(or both). In this paper, we provide an OT-AC protocol where access
policies may consist of any branching program of polynomial length,
which is sufficient to realize any access policy in NC^1. The security
of our protocol is proved under the Learning-with-Errors (LWE) and
Short-Integer-Solution (SIS) assumptions. As a result of independent
interest, we provide protocols for proving the correct evaluation of a
committed branching program on a committed input.
This is joint work with Benoît Libert,
San Ling, Khoa Nguyen and Huaxiong Wang.
Th 06/04, 9:30am-11:30am: Damien Stehlé (ENS de Lyon): From Inner-Product Functional Encryption to
Trace-and-Revoke with Public Traceability
We describe a generic construction of a trace-and-revoke
broadcast encryption scheme from a functional encryption scheme for
inner products modulo a prime integer. The resulting scheme achieves
the strongest notion of adaptive security for the tracing procedure, as
well as public traceability in the black-box confirmation model. Thus,
tracing traitors can be delegated to untrusted parties in our constructions.
By combining the construction with inner product functional encryption
schemes from Agrawal et al. [CRYPTO'16], we obtain concrete
trace-and-revoke schemes with security based on the Decision Composite
Residuosity (DCR) and Learning With Errors (LWE) hardness assumptions.
Our LWE based construction resolves a question explicitly left
open by Ling et al. [CRYPTO'14]. We also adapt our technique to obtain
an efficient, publicly traceable traitor tracing scheme with security
based on the DDH hardness assumption. Our constructions are efficient,
and enjoy security based on standard assumptions.
Such a combination of efficiency, public traceability and support for both
trace and revoke was not known before to the best of our knowledge.
This is joint work with Shweta Agrawal, Sanjay Bhattacherjee, Duong Hieu Phan
and Shota Yamada.
1:30pm-3:30pm: Olivier Blazy (U. Limoges): Hash Proof Systems over Lattices Revisited
Hash Proof Systems or Smooth Projective Hash Functions (SPHFs) are a
form of implicit arguments introduced by Cramer and Shoup at
Eurocrypt’02. They have found many applications since then, in
particular for authenticated key exchange or honest-verifier
zero-knowledge proofs. While they are relatively well understood in
group settings, they seem painful to construct directly in the lattice
setting.
Only one construction of an SPHF over lattices has been proposed, by
Katz and Vaikuntanathan at Asiacrypt’09. But this construction has an
important drawback: it only works for an ad-hoc language of ciphertexts.
Concretely, the corresponding decryption procedure needs to be tweaked,
now requiring q trapdoor inversion attempts, where q is the modulus
of the underlying Learning With Errors (LWE) problem.
Using harmonic analysis, we explain the source of this limitation, and
propose a way around it. We show how to construct SPHFs for standard
languages of LWE ciphertexts, and explicit our construction over a
tag-CCA2 encryption scheme à la Micciancio-Peikert (Eurocrypt’12).
If there is enough time, we will conclude with applications of these
SPHFs to password-authenticated key exchange, honest-verifier
zero-knowledge and a variant of witness encryption.
This is joint work with Fabrice Benhamouda, Léo Ducas and Willy Quach.
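As a point of reference for the group setting that the abstract contrasts with, here is the classical DDH-based SPHF of Cramer and Shoup in a few lines of Python (toy parameters and my own function names, purely illustrative): the hashing key and the projection key compute the same value on words of the language, while on words outside it the hash value is statistically hidden given only the projection key.

# The classical DDH-based SPHF, in the group setting. Toy parameters only.
import secrets

p, q = 2039, 1019            # toy safe-prime group, p = 2q + 1 (far too small for real use)
g, h = 4, 9                  # two generators of the order-q subgroup of Z_p^*

# Language: DDH tuples (u, v) = (g^r, h^r) with witness r.
def keygen():
    alpha, beta = secrets.randbelow(q), secrets.randbelow(q)
    hk = (alpha, beta)                                   # hashing key
    hp = (pow(g, alpha, p) * pow(h, beta, p)) % p        # projection key g^alpha * h^beta
    return hk, hp

def hash_hk(hk, word):       # computed from the hashing key, no witness needed
    alpha, beta = hk
    u, v = word
    return (pow(u, alpha, p) * pow(v, beta, p)) % p

def hash_hp(hp, r):          # computed from the projection key and the witness r
    return pow(hp, r, p)

# Correctness on a word of the language; on words outside the language the
# value hash_hk(hk, word) is statistically unpredictable given only hp (smoothness).
hk, hp = keygen()
r = secrets.randbelow(q)
word = (pow(g, r, p), pow(h, r, p))
assert hash_hk(hk, word) == hash_hp(hp, r)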
08.03.2017 - 09.03.2017
Topic: Algebraic number theory and Ring-LWE.
Location: ENS Lyon, Monod campus.
Room: 116.
Work room, for around the talks: 316C.
We. 08/03, 13:30-15:30. Guillaume Hanrot (ENS Lyon): Crash course in algebraic number theory
We. 08/03, 16:00-18:00. Wouter Castryck (U. Lille), Remarks on the error distributions in ring-based LWE
The existing literature contains several ring-based variants of the
Learning With Errors problem, all of which are often referred to as
Ring-LWE, a habit which has led to some confusion in the recent
past. The main difference lies in the choice of the probability
distribution from which the errors are to be sampled. We will briefly
compare the main versions, such as Poly-LWE and “proper” Ring-LWE as
introduced by Lyubashevsky et al., and discuss some pitfalls that
arise when mixing things up.
The talk will be based on this article.
This is joint work with Ilia Iliashenko and Frederik Vercauteren.
Th. 09/03, 10:00-12:00. Thomas Espitau (U. Paris 6): Computing generator in cyclotomic integer rings
The Principal Ideal Problem (resp. Short Principal Ideal Problem),
shorten as PIP (resp. SPIP), consists in finding a generator
(resp. short generator) of a principal ideal in the ring of integers
of a number field. Several lattice-based cryptosystems rely on the
presumed hardness of these two problems. Yet, in practice, most of
them do not use an arbitrary number field but a power-of-two
cyclotomic field. The Smart and Vercauteren fully homomorphic
encryption scheme and the multilinear map of Garg, Gentry and Halevi
epitomize this common restriction. Recently, Cramer, Ducas, Peikert
and Regev show that solving the SPIP in such cyclotomic rings boils
down to solving the PIP. We complete their result by an algorithm that
solves PIP in cyclotomic fields in subexponential time L|∆K| (1/2,
0.717) = e^((0.717+o(1))n^(1/2) log n), where ∆K denotes the
discriminant of the number field and n its degree. This asymptotic
complexity could be compared with the one obtained by Biasse and
Fieker method, that aims at finding a generator as we do, but runs in
L|∆K| (2/3). Besides this theoretical improvement, our algorithm
allows to recover in practice the secret key of the Smart and
Vercauteren scheme, for the smallest proposed parameters. In this talk
we propose to detail the main and novel aspects of this algorithm,
namely:
* The Gentry-Szydlo algorithm to perform a halving of
dimension by working in the totally real subfield of the cyclotomic
field.
* A q-descent procedure using lattice reduction and ideal
arithmetic to reduce the problem to PIP for L(1/2)-smooth ideals.
* A linear algebra phase to solve the L(1/2)-smooth instances.
All along the presentation, we will emphasize the relations between
the commutative algebra side (ideal arithmetic, manipulation of
number field elements, ...) and the geometric side (norm-related issues,
lattice reduction, ...).
The talk will be based on this article.
Joint work with Pierre-Alain Fouque, Alexandre Gélin and Paul
Kirchner.
Th. 09/03, 14:00-16:00. Léo Ducas (CWI), Short Stickelberger Class Relations and application to Ideal-SVP
The hardness of finding short vectors in ideals of cyclotomic number
fields (Ideal-SVP) serves as the worst-case hypothesis underlying the
Ring-LWE assumption, and therefore the security of numerous
cryptographic schemes --- including key-exchange, public-key
encryption and fully-homomorphic encryption. A series of recent works
has shown that *Principal* Ideal-SVP is not always as hard as finding
short vectors in general lattices, and some schemes were broken using
quantum algorithms --- the *Soliloquy* encryption scheme,
Smart-Vercauteren fully homomorphic encryption scheme from *PKC 2010*,
and Gentry-Garg-Halevi cryptographic multilinear-maps from *Eurocrypt
2013*.
Those broken schemes were using a special class of principal ideals,
but this approach can also solve SVP for principal ideals in the
*worst-case* in quantum polynomial time for an approximation factor of
$\exp(\tilde O(\sqrt n))$. This exposed an unexpected hardness gap
between general lattices and some structured ones, and called into
question the hardness of various problems over structured lattices,
such as Ideal-SVP and Ring-LWE.
In this work, we generalize the previous result to general ideals.
Precisely, we show how to solve the close principal multiple problem
(CPM) by exploiting the classical theorem that the class-group is
annihilated by (the Galois-module action of) the so-called
Stickelberger ideal. Under some plausible number-theoretical
hypothesis, our approach provides a close enough principal multiple in
quantum polynomial time. Combined with the previous results, this
solves Ideal-SVP in the worst case in quantum polynomial time for an
approximation factor of $\exp(\tilde O(\sqrt n))$.
Although it does not seem that the security of Ring-LWE based
cryptosystems is directly affected, we contribute novel ideas to the
cryptanalysis of schemes based on structured lattices. Moreover, our
result shows a deepening of the gap between general lattices and
structured ones.
The talk will be based on this article.
Joint work with Ronald Cramer and Benjamin Wesolowski.
24.01.2017 (organized by Hoeteck Wee)
Topic: lattice-based cryptographic constructions.
Location: ENS Paris.
Room: Amphi Rataud.
Tu 24/01, 10am-noon: Vinod Vaikuntanathan (MIT), Three Tricks to
Rule them All: Advanced Cryptographic Constructions from Lattices
Many recent constructions of advanced lattice-based primitives such as
attribute-based encryption, fully homomorphic encryption, fully
homomorphic signatures, predicate encryption and constrained PRFs rely
on a handful of techniques that play with LWE and lattice trapdoors.
We will present them and show as many of these constructions as time
permits.
Tu 24/01, 2pm-4pm: Zvika Brakerski (Weizmann Institute), Nipped in
the Bud: Graded Encoding Schemes that Did Not Make It
I will present a few suggested constructions of (polynomial ring
based) graded encoding schemes that turned out to be insecure at an
early stage, and therefore were not widely published and might not be
known to many.
15.12.2016 - 16.12.2016
Topic: Algorithms and assumptions.
Location: ENS Lyon, Monod campus.
Rooms: 115 on Thursday the 15th (main building, level 1, South aisle), Amphi G on Friday the 16th (ground floor, North aisle).
Work room, for around the talks: 316C.
Dates: Note that this meeting takes place on a Thursday and a Friday, not on the usual Wednesday and Thursday!
Th 15/12, 9:30am-11:30am: Gottfried Herold (Ruhr-Universität Bochum), Improved Algorithms for
the Approximate k-List Problem in Euclidean norm
We present an algorithm for the approximate k-List problem for the Euclidean distance that
improves upon the Bai-Laarhoven-Stehlé (BLS) algorithm from ANTS'16. The improvement
stems from the observation that almost all the solutions to the approximate k-List problem form
a particular configuration in n-dimensional space. Due to special properties of configurations,
it is much easier to verify whether a k-tuple forms a configuration rather than checking whether
it gives a solution to the k-List problem. Thus, phrasing the k-List problem as a problem of
finding such configurations immediately gives a better algorithm. Furthermore, the search for
configurations can be sped up using techniques from Locality-Sensitive Hashing (LSH). Stated
in terms of configuration-search, our LSH-like algorithm offers a broader picture on previous
LSH algorithms.
Joint work with Elena Kirshanova.
Th 15/12, 2:30pm-4:30pm: Léo Ducas (CWI), The closest vector problem in tensored
root lattices of type A and in their duals
In this work we consider the closest vector problem (CVP) ---a problem
also known as maximum-likelihood decoding--- in the tensor of two root
lattices of type A (A_m ⊗ A_n), as well as in their duals (A*_m ⊗ A*_n). This
problem is mainly motivated by lattice-based cryptography, where
the cyclotomic rings Z[ζ_c] (resp. their co-differents) play a central
role, and turn out to be isomorphic as lattices to tensors of A* lattices
(resp. A root lattices). In particular, our results lead to solving CVP
in Z[ζ_c] and in its co-different for conductors of the form
c = 2^α p^β q^γ for any two odd
primes p, q.
For the primal case A_m ⊗ A_n, we provide a full characterization of the
Voronoi region in terms of simple cycles in the complete directed
bipartite graph K_{m+1,n+1}. This leads ---relying on the Bellman-Ford
algorithm for negative cycle detection--- to a CVP algorithm running in
*polynomial time*. Precisely, our algorithm performs
O(l m^2 n^2 min{m,n}) operations on reals, where l is the number of bits per
coordinate of the input target. For the dual case, we use a
gluing-construction to solve CVP in sub-exponential time.
The talk will be based on this article.
Joint work with Wessel P.J. van Woerden.
Th 15/12, 4:30pm-5:30pm and Fr 16/12 9:30am-10:30am:
Weiqiang Wen (ENS de Lyon), Learning With Errors and the Generalized
Hidden Shift Problem
We give reductions between the Learning With Errors problem
and a variant, introduced
by Childs and van Dam at SODA'07, of the Hidden Shift Problem modulo an integer.
The reductions are essentially sharp
as the parameter degradations incurred by applying them back and forth
are limited. They allow us to better capture the (quantum) hardness
of Learning With Errors in terms of the Hidden Shift Problem.
From a technical viewpoint, the reductions are inspired by Regev's
reductions from worst-case lattice problems to the Dihedral Coset Problem
and to Learning With Errors.
Fr 16/12, 10:30am-12:30pm: Zvika Brakerski (Weizmann Institute), Obfuscating Conjunctions under Entropic Ring LWE
We show how to securely obfuscate conjunctions, which are functions f(x1, ..., xn) =
AND_{i in I} y_i, where I \subseteq [n] and each literal y_i is either just xi or NOT(xi).
Whereas prior work of Brakerski and Rothblum (CRYPTO 2013) showed how to achieve this using a
non-standard object called cryptographic multilinear maps, our scheme is based on an 'entropic' variant
of the Ring Learning with Errors (Ring LWE) assumption. As our core tool, we prove that hardness
assumptions on the recent multilinear map construction of Gentry, Gorbunov and Halevi (TCC 2015) can
be established based on entropic Ring LWE. We view this as a first step towards proving the security of
additional multilinear map based constructions, and in particular program obfuscators, under standard
assumptions.
Our scheme satisfies virtual black box (VBB) security, meaning that the obfuscated program reveals
nothing more than black-box access to f as an oracle, at least as long as (essentially) the conjunction is
chosen from a distribution having sufficient entropy.
The talk will be based on this article.
Joint work with Vinod Vaikuntanathan, Hoeteck Wee and Daniel Wichs.
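Concretely, the conjunctions being obfuscated can be viewed as wildcard patterns; the following short Python evaluator (my own illustration) computes the function in the clear, the obfuscation task being to provide the same input-output behaviour while hiding the pattern:

# A conjunction AND_{i in I} y_i is equivalently a pattern in {0, 1, '*'}^n:
# positions outside I are wildcards, and f(x) = 1 iff x agrees with the pattern
# on every non-wildcard position.
def conjunction(pattern):
    return lambda x: all(p == '*' or int(b) == p for p, b in zip(pattern, x))

f = conjunction([1, '*', 0, '*'])     # f(x1..x4) = x1 AND NOT(x3)
print(f("1101"), f("1011"))           # True False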
23.11.2016 - 24.11.2016
Topic: Fully homomorphic encryption.
Location: ENS Lyon, Monod campus.
Rooms: 116 and then 115 (main building, level 1, South aisle).
Wed 23/11, 2:30pm-4:30pm: Ilaria Chillotti (UVSQ), Faster Fully Homomorphic Encryption: Bootstrapping in less than 0.1 Seconds
We revisit fully homomorphic encryption (FHE) based on GSW and its ring variants. We notice that the internal product of GSW can be replaced by a simpler external product between a GSW and an LWE ciphertext.
We show that the bootstrapping scheme FHEW of Ducas and Micciancio (Eurocrypt 2015) can be expressed only in terms of this external product. As a result, we obtain a speed up from less than 1 second to less than 0.1 seconds. We also reduce the 1GB bootstrapping key size to 24MB, preserving the same security levels, and we improve the noise propagation overhead by replacing exact decomposition algorithms with approximate ones.
Moreover, our external product allows us to explain the unique asymmetry in the noise propagation of GSW samples and makes it possible to evaluate deterministic automata homomorphically as in (ePrint 2014/283) in an efficient way, with a noise overhead only linear in the length of the tested word.
Finally, we provide an alternative practical analysis of LWE-based schemes, which directly relates the security parameter to the error rate of LWE and the entropy of the LWE secret key.
The talk will be based on this article.
Joint work with Nicolas Gama and Mariya Georgieva and Malika Izabachène.
Th 24/11, 10am - 12pm: Florian Bourse (ENS Paris), FHE Circuit Privacy Almost for Free
Circuit privacy is an important property for many applications of
fully homomorphic encryption. Prior approaches for achieving circuit
privacy rely on superpolynomial noise flooding or on bootstrapping. In
this work, we present a conceptually different approach to circuit
privacy based on a novel characterization of the noise growth amidst
homomorphic evaluation. In particular, we show that a variant of the
GSW FHE for branching programs already achieves circuit privacy; this
immediately yields a circuit-private FHE for NC1 circuits under the
standard LWE assumption with polynomial modulus-to-noise ratio. Our
analysis relies on a variant of the discrete Gaussian leftover hash
lemma which states that e^T G^{−1}(v)+small noise does not
depend on v. We believe that this result is of independent
interest.
The talk will be based on this article.
Joint work with Rafael Del Pino, Michele Minelli and Hoeteck Wee.
Th 24/11, 2pm-4pm: Eduardo Soria-Vazquez (U. of Bristol),
More Efficient Constant-Round Multi-Party Computation from BMR and SHE
We present a multi-party computation protocol for the dishonest-majority
case which has very low round complexity. Our protocol sits
philosophically between Gentry's Fully Homomorphic Encryption based
protocol and the SPDZ-BMR protocol of Lindell et al (CRYPTO 2015). Our
protocol avoids various inefficiencies of the previous two
protocols. Compared to Gentry's protocol we only require Somewhat
Homomorphic Encryption (SHE). Whilst in comparison to the SPDZ-BMR
protocol we require only a quadratic complexity in the number of
players (as opposed to cubic), we have fewer rounds, and we require
fewer proofs of correctness of ciphertexts. Additionally, we present a
variant of our protocol which trades the depth of the garbling circuit
(computed using SHE) for some more multiplications in the offline and
online phases.
The talk will be based on this article.
Joint work with Yehuda Lindell and Nigel P. Smart.
24.05.2016 - 25.05.2016
Topic: Cryptographic constructions.
Location: ENS Lyon, Monod campus.
Room: A2 (main building, level 4, North aisle).
Warning: the meeting will be on Tuesday + Wednesday (instead of the usual Wednesday + Thursday).
Tu 24/05, 2:30pm - 5pm: Pooya Farshim (ENS Paris)
Multilinear Maps from Obfuscation
We provide constructions of multilinear groups equipped with natural
hard problems from indistinguishability obfuscation, homomorphic
encryption, and NIZKs. This complements known results on the
constructions of indistinguishability obfuscators from multilinear
maps in the reverse direction.
We provide two distinct, but closely related constructions and show
that multilinear analogues of the DDH assumption hold for them. Our
first construction is \emph{symmetric} and comes with a k-linear map e
: G^k --> G_T for prime-order groups G and G_T. To establish the
hardness of the k-linear DDH problem, we rely on the existence of a
base group for which the (k - 1)-strong DDH assumption holds. Our
second construction is for the \emph{asymmetric} setting, where e :
G_1 x ... x G_k --> G_T for a collection of k + 1 prime-order groups
G_i and G_T, and relies only on the standard DDH assumption in its
base group. In both constructions the linearity k can be set to any
arbitrary but a priori fixed polynomial value in the security
parameter.
We rely on a number of powerful tools in our constructions:
(probabilistic) indistinguishability obfuscation, dual-mode NIZK proof
systems (with perfect soundness, witness indistinguishability and zero
knowledge), and additively homomorphic encryption for the group
Z_N^{+}. At a high level, we enable "bootstrapping" multilinear
assumptions from their simpler counterparts in standard cryptographic
groups, and show the equivalence of IO and multilinear maps under the
existence of the aforementioned primitives.
The talk will be based on this article.
Joint work with Martin R. Albrecht, Dennis Hofheinz, Enrique Larraia and Kenneth G. Paterson.
We 25/05, 9:00am - 11:30am: Shota Yamada (AIST, Japan), Adaptively
Secure Identity-Based Encryption from Lattices with Asymptotically
Shorter Public Parameters
In this paper, we present two new adaptively secure identity-based
encryption (IBE) schemes from lattices. The sizes of the public
parameters, ciphertexts, and private keys are $\tilde{O}(n^2 \kappa^{1/d})$,
$\tilde{O}(n)$, and $\tilde{O}(n)$, respectively. Here, n is the security
parameter, κ is the length of the identity, and d is a flexible
constant that can be set arbitrarily (but will affect the reduction
cost). Ignoring the poly-logarithmic factors hidden in the asymptotic
notation, our schemes achieve the best efficiency among existing
adaptively secure IBE schemes from lattices. In more detail, our first
scheme is anonymous, but proven secure under the LWE assumption with
approximation factor n^{ω(1)}. Our second scheme is not anonymous, but
proven adaptively secure assuming the LWE assumption for all
polynomial approximation factors.
As a side result, based on a similar idea, we construct an
attribute-based encryption scheme for branching programs that
simultaneously satisfies the following properties for the first time:
Our scheme achieves compact secret keys, the security is proven under
the LWE assumption with polynomial approximation factors, and the
scheme can deal with unbounded length branching programs.
The talk will be based on this article.
We 25/05, 2pm-4pm, Daniel Masny (Ruhr-Universität Bochum, Germany),
Chosen-Ciphertext Security from Subset Sum
We construct a public-key encryption (PKE) scheme whose security is
polynomial-time equivalent to the hardness of the Subset Sum
problem. Our scheme achieves the standard notion of
indistinguishability against chosen-ciphertext attacks (IND-CCA) and
can be used to encrypt messages of arbitrary polynomial length,
improving upon a previous construction by Lyubashevsky, Palacio, and
Segev (TCC 2010) which achieved only the weaker notion of semantic
security (IND-CPA) and whose concrete security decreases with the
length of the message being encrypted.
At the core of our construction is a trapdoor technique which
originates in the work of Micciancio and Peikert (Eurocrypt 2012).
The talk will be based on this article.
Joint work with Sebastian Faust and Daniele Venturi.
13.04.2016 - 14.04.2016
Topic: Towards making lattice-based crypto practical.
Location: ENS Lyon, Monod campus.
Room: Amphi K on the 13th, Amphi I on the 14th
(main building, Level 0, North side).
We 13/04, 2pm - 4pm: Carlos Aguilar-Melchor (INP ENSEEIHT),
NFLlib: NTT-based Fast Lattice Library
Recent years have witnessed an increased interest in lattice
cryptography. Besides its strong security guarantees, its simplicity
and versatility make this powerful theoretical tool a promising
competitive alternative to classical cryptographic schemes. In this
paper, we introduce NFLlib, an efficient and open-source C++ library
dedicated to ideal lattice cryptography in the widely-spread
polynomial ring Z_p[x]/(x^n + 1) for n a power of 2. The library
combines algorithmic optimizations (Chinese Remainder Theorem,
optimized Number Theoretic Transform) together with programming
optimization techniques (SSE and AVX2 specializations, C++ expression
templates, etc.), and will be fully available under the GPL license.
The library compares very favorably to other libraries used in ideal
lattice cryptography implementations (namely the generic number theory
libraries NTL and flint implementing polynomial arithmetic, and the
optimized library for lattice homomorphic encryption HElib):
restricting the library to the aforementioned polynomial ring yields
gains of several orders of magnitude in efficiency.
The talk will be based on this article.
Joint work with Joris Barrier, Serge Guelton, Adrien Guinet, Marc-Olivier Killijian and
Tancrède Lepoint.
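As a concrete illustration of the ring arithmetic the library targets, here is a small Python sketch (unrelated to NFLlib's actual C++ API) of multiplication in Z_p[x]/(x^n + 1), once schoolbook and once through a number-theoretic transform with the usual psi-twist; p = 257, n = 8 and psi = 2 are toy values chosen so that psi is a primitive 2n-th root of unity mod p.

def negacyclic_schoolbook(a, b, p):
    # Multiply two length-n coefficient vectors in Z_p[x]/(x^n + 1).
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            s = a[i] * b[j]
            if i + j < n:
                c[i + j] = (c[i + j] + s) % p
            else:
                c[i + j - n] = (c[i + j - n] - s) % p   # x^n = -1
    return c

def ntt(a, omega, p):
    # Naive O(n^2) transform; NFLlib uses an optimized O(n log n) NTT.
    n = len(a)
    return [sum(a[j] * pow(omega, i * j, p) for j in range(n)) % p
            for i in range(n)]

def negacyclic_ntt(a, b, p, psi):
    # psi has order 2n mod p, so omega = psi^2 has order n; twisting by psi^j
    # turns negacyclic convolution into plain cyclic convolution.
    n = len(a)
    omega = pow(psi, 2, p)
    at = [a[j] * pow(psi, j, p) % p for j in range(n)]
    bt = [b[j] * pow(psi, j, p) % p for j in range(n)]
    ct = [x * y % p for x, y in zip(ntt(at, omega, p), ntt(bt, omega, p))]
    c = ntt(ct, pow(omega, p - 2, p), p)                # inverse transform (unscaled)
    n_inv, psi_inv = pow(n, p - 2, p), pow(psi, p - 2, p)
    return [c[j] * n_inv % p * pow(psi_inv, j, p) % p for j in range(n)]

p, n, psi = 257, 8, 2         # 2^8 = 256 = -1 mod 257, so 2 has order 16 = 2n
a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [8, 7, 6, 5, 4, 3, 2, 1]
assert negacyclic_schoolbook(a, b, p) == negacyclic_ntt(a, b, p, psi)

Roughly speaking, the CRT optimization mentioned in the abstract additionally splits a large modulus into several machine-word-sized primes and runs one such NTT per prime.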
We 13/04, 4:30pm - 5:30pm: Léo Ducas (CWI), Decoding algorithms for special lattice codes
The final rounding stage of decryption of many lattice-based schemes can
be interpreted as analog decoding of the trivial lattice Z^n. It is well
known that a noise/signal gain can be achieved with better lattice codes: this
would translate into more compact or more secure cryptosystems.
This idea has already been explored implicitly with the lattice D_2 and was
made explicit in the protocol NewHope with the lattice D_4. The gain should
increase as we switch to larger lattice codes.
In this talk, we propose a brief introduction to decoding algorithms
for special lattice codes. While the literature tends to be quite ad hoc,
we wish to propose a few general lemmas to translate the decomposition
(gluing, intersection with parity codes) of a lattice code into an algorithm.
As an example, we revisit the Maximum-Likelihood decoding (CVP) algorithm
for the 24-dimensional Leech lattice, avoiding the typical details on the
Miracle Octad Generator (MOG).
If time permits, we may also show connections between rings of integers of
cyclotomic number fields and the dual root lattices A_n^* and deduce CVP
algorithms for those rings.
This is work in progress with Alex van Poppelen and Wessel van Woerden.
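For readers unfamiliar with these lattice codes, the following minimal Python sketch (not from the talk) contrasts decoding the trivial lattice Z^n, i.e. coordinate-wise rounding, with the textbook decoder for D_n = {x in Z^n : sum(x) even}, of which the D_2 and D_4 above are instances.

def decode_Zn(y):
    # Closest point of Z^n: round every coordinate independently.
    return [round(t) for t in y]

def decode_Dn(y):
    # Textbook D_n decoder: round every coordinate; if the parity constraint
    # (even coordinate sum) is violated, re-round the coordinate with the
    # largest rounding error in the other direction.
    x = [round(t) for t in y]
    if sum(x) % 2 != 0:
        i = max(range(len(y)), key=lambda j: abs(y[j] - x[j]))
        x[i] += 1 if y[i] > x[i] else -1
    return x

y = [0.4, 1.6, -0.2, 1.3]
assert decode_Zn(y) == [0, 2, 0, 1]      # not in D_4: coordinate sum is odd
assert sum(decode_Dn(y)) % 2 == 0        # the D_4 decoder repairs the parity

Using denser lattices such as D_n as codes, rather than Z^n, is what yields the noise/signal gain mentioned above.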
Th 14/04, 10:15am-11:15am, Peter Schwabe (Radboud University),
Post-quantum key exchange -- a new hope
At IEEE Security & Privacy 2015, Bos, Costello, Naehrig, and Stebila
proposed an instantiation of Peikert's ring-learning-with-errors-based
(Ring-LWE) key-exchange protocol (PQCrypto 2014), together with an
implementation integrated into OpenSSL, with the affirmed goal of
providing post-quantum security for TLS. In this talk I will present
joint work with Alkim, Ducas, and Pöppelmann, in which we revisit their
instantiation and stand-alone implementation. Specifically, we propose
new parameters and a better suited error distribution, analyze the
scheme's hardness against attacks by quantum computers in a conservative
way, introduce a new and more efficient error-reconciliation mechanism,
and propose a defense against backdoors and all-for-the-price-of-one
attacks. By these measures and for the same lattice dimension, we more
than double the security parameter, halve the communication overhead,
and speed up computation by more than a factor of 9 in a portable C
implementation and by more than a factor of 24 in an optimized
implementation targeting current Intel CPUs. These speedups are achieved
with comprehensive protection against timing attacks.
The talk will be based on this article.
Joint work with Erdem Alkim, Léo Ducas and Thomas Pöppelmann.
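To see why an error-reconciliation mechanism is needed at all, here is a toy, insecure Python sketch of the approximate agreement underlying such key exchanges, written for plain LWE rather than the Ring-LWE scheme of the talk; all parameters and the error distribution are illustrative only.

import random

q, n = 12289, 64                          # toy parameters
chi = lambda: random.randint(-4, 4)       # narrow error distribution (illustrative)

A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]   # public matrix

s  = [chi() for _ in range(n)]            # Alice's secret and error
e  = [chi() for _ in range(n)]
b  = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(n)]   # b = A s + e

s2 = [chi() for _ in range(n)]            # Bob's secret and error
e2 = [chi() for _ in range(n)]
u  = [(sum(A[i][j] * s2[i] for i in range(n)) + e2[j]) % q for j in range(n)] # u = A^T s' + e'

w_bob   = sum(bi * si for bi, si in zip(b, s2)) % q    # s'^T (A s + e)
w_alice = sum(uj * sj for uj, sj in zip(u, s)) % q     # (s'^T A + e'^T) s

# Both values equal s'^T A s up to the small term s'^T e - e'^T s:
diff = min((w_alice - w_bob) % q, (w_bob - w_alice) % q)
assert diff < q // 4                      # comfortably true for these toy parameters

Turning two such nearby values into identical shared bits is the job of the reconciliation mechanism, which is one of the components revisited in the talk.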
Th 14/04, 1:30pm-3:30pm, Martin Albrecht (Royal Holloway), Implementing Operations in Power-of-2 Cyclotomic Rings
The talk will be based on this article.
Joint work with Catalin Cocis, Fabien Laguillaumie and Adeline Langlois.
02.03.2016 - 03.03.2016
Topic: Fast lattice reduction algorithms and applications
Location: ENS Lyon, Monod campus.
Room: 116 (main building, level 1, South aisle).
We 02/03, 2:45pm - 4:45pm: Damien Stehlé (ENS de Lyon),
Faster LLL-type reduction of lattice bases
We describe a fast variant of the LLL lattice reduction algorithm.
Our algorithm takes as input a basis $\vec{B} \in \mathbb{Z}^{n \times n}$
of a Euclidean lattice~$L$ and returns a (reduced) basis~$\vec{C}$ of~$L$, whose
first vector satisfies~$\|\vec{c}_1\| \leq (1+c) (4/3)^{(n-1)/4} (\det L)^{1/n}$ for any fixed~$c>0$.
It terminates within~$O(n^{4+\epsilon} \beta^{1+\epsilon})$ bit operations
for any~$\epsilon >0$, with~$\beta = \max_i \|\vec{b}_i\|$. It does rely on fast integer arithmetic
but does not make use of fast matrix multiplication.
Joint work with Arnold Neumaier.
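For reference, here is a minimal textbook LLL sketch in Python (delta = 3/4, exact rational arithmetic, Gram-Schmidt recomputed from scratch after every swap); it is meant only to recall what "LLL-type reduction" computes and makes no attempt at the efficiency of the variant presented in the talk.

from fractions import Fraction

def gram_schmidt(B):
    # Returns Bstar and mu with B[i] = Bstar[i] + sum_{j<i} mu[i][j] * Bstar[j].
    n = len(B)
    Bstar = [[Fraction(x) for x in b] for b in B]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            mu[i][j] = (sum(Fraction(B[i][k]) * Bstar[j][k] for k in range(n))
                        / sum(x * x for x in Bstar[j]))
            for k in range(n):
                Bstar[i][k] -= mu[i][j] * Bstar[j][k]
    return Bstar, mu

def lll(B, delta=Fraction(3, 4)):
    B = [list(b) for b in B]
    n = len(B)
    norm2 = lambda v: sum(x * x for x in v)
    Bstar, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):               # size-reduce b_k
            q = round(mu[k][j])
            if q != 0:
                B[k] = [B[k][i] - q * B[j][i] for i in range(n)]
                for l in range(j):
                    mu[k][l] -= q * mu[j][l]
                mu[k][j] -= q
        if norm2(Bstar[k]) >= (delta - mu[k][k - 1] ** 2) * norm2(Bstar[k - 1]):
            k += 1                                    # Lovasz condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]           # swap and back-track
            Bstar, mu = gram_schmidt(B)               # wasteful but simple
            k = max(k - 1, 1)
    return B

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))        # a small reduced basis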
We 02/03, 5:00pm - 6:00pm: Jinming Wen (ENS de Lyon), Three
Polynomial Time Algorithms for Optimally Solving a Shortest Vector
Problem in Compute-and-Forward Design
In relay networks, compute-and-forward (CF) is a promising relaying
strategy that can offer higher rates than traditional ones (e.g.,
amplify-and-forward, decode-and-forward), especially in the moderate
signal-to-noise regime. To find the optimal coefficient vector that
maximizes the computation rate at a relay in the CF scheme, a Shortest
Vector Problem (SVP) needs to be solved. In this talk, I will present
three polynomial-time algorithms (including two linearithmic-time
algorithms) for optimally solving this SVP.
The talk will be based on this
article and this article.
Joint work with Baojian Zhou, Wai Ho Mow and Xiao-Wen Chang.
Th 03/03, 9:00am-10:00am: Jinming Wen (ENS de Lyon), follow-up of previous talk
Th 03/03, 10:15am-11:15am, Fabrice Benhamouda (ENS Paris), Easing
Coppersmith Methods using Analytic Combinatorics: Applications to
Public-Key Cryptography with Weak Pseudorandomness
The \emph{Coppersmith methods} are a family of lattice-based techniques to find small integer roots of polynomial equations. They have found numerous applications in cryptanalysis and, in recent developments, we have seen applications where the number of unknowns and the number of equations are non-constant. In these cases, the combinatorial analysis required to settle the complexity and the success condition of the method becomes very intricate.
We provide a toolbox based on \emph{analytic combinatorics} for these
studies. It uses the structure of the considered polynomials to derive
their generating functions and applies complex analysis techniques to
get asymptotics. The toolbox is versatile and can be used for many
different applications, including multivariate polynomial systems with
arbitrarily many unknowns (of possibly different sizes) and
simultaneous modular equations over different moduli. To demonstrate
the power of this approach, we apply it to recent cryptanalytic
results on number-theoretic pseudorandom generators for which we
easily derive precise and formal analysis. We also present new
theoretical applications to two problems on RSA key generation and
randomness generation used in padding functions for encryption.
The talk will be based on this article.
Joint work with Céline Chevalier, Adrian Thillard, and Damien Vergnaud.
Th 03/03, 1:15pm-2:15pm, Fabrice Benhamouda (ENS Paris), follow-up of previous talk
Th 03/03, 2:45pm-4:45pm, Vincent Neiger (ENS de Lyon), Fast
Coppersmith method over the polynomials: finding a reduced basis via
approximation
After giving an overview of the analogue of the Coppersmith method
over the polynomials and of some of its applications, we focus on the
algorithmic aspects of the first step of the method. In the integer
Coppersmith method, this step is commonly solved using lattice basis
reduction: in the polynomial version of the method, we sketch the
analogous solution which uses row reduction of a polynomial
matrix. Then we present another, currently faster, approach for this
first step, which reduces it to the problem of finding a (basis of)
sufficiently small degree solution(s) to a system of modular
polynomial equations. We will observe that the fastest known
algorithms for both row reduction and modular system solving
fundamentally rely on fast Hermite-Padé approximation.
21.01.2016 (organized by Hoeteck Wee)
Topic: functional encryption
Location: ENS Paris.
Room: W
10:30am - 12:00pm: Hoeteck Wee (ENS Paris),
Predicate Encryption for Circuits from LWE.
In predicate encryption, a ciphertext is associated with descriptive
attribute values $x$ in addition to a plaintext $\mu$, and a secret key is
associated with a predicate $f$. Decryption returns plaintext $\mu$ if and only
if $f(x) = 1$. Moreover, security of predicate encryption guarantees that an
adversary learns nothing about the attribute $x$ or the plaintext $\mu$ from a
ciphertext, given arbitrarily many secret keys that are not authorized to decrypt
the ciphertext individually.
We construct a leveled predicate encryption scheme for all circuits,
assuming the
hardness of the subexponential learning with errors (LWE) problem. That is, for
any polynomial function $d = d(\lambda)$, we construct a predicate encryption
scheme for the class of all circuits with depth bounded by $d(\lambda)$,
where $\lambda$ is the security parameter.
The talk will be based on
this article.
Joint work with Sergey Gorbunov and Vinod Vaikuntanathan.
2:00pm - 3:30pm: Benoît Libert and Damien Stehlé
(ENS de Lyon), Fully Secure Functional Encryption for Inner Products, from Standard
Assumptions
Functional encryption is a modern public-key paradigm where a master
secret key can be used to derive sub-keys $SK_F$ associated with
certain functions $F$ in such a way that the decryption operation
reveals $F(M)$, if $M$ is the encrypted message, and nothing
else. Recently, Abdalla {\it et al.} gave simple and efficient
realizations of the primitive for the computation of linear functions
on encrypted data: given an encryption of a vector $\vec{y}$ over some
specified base ring, a secret key $SK_{\vec{x}}$ for the vector $\vec
{x}$ allows computing $\langle \vec{x} ,\vec{y} \rangle$. Their
technique surprisingly allows for instantiations under standard
assumptions, like the hardness of the Decision Diffie-Hellman (DDH)
and Learning-with-Errors (LWE) problems. Their constructions, however,
are only proved secure against selective adversaries, which have to
declare the challenge messages $M_0$ and $M_1$ at the outset of the
game. In this paper, we provide constructions that provably achieve
security against more realistic {\it adaptive} attacks (where the
messages $M_0$ and $M_1$ may be chosen in the challenge phase, based
on the previously collected information) for the same inner product
functionality. Our constructions are obtained from hash proof systems
endowed with homomorphic properties over the key space. They are
(almost) as efficient as those of Abdalla {\it et al.} and rely on the
same hardness assumptions.
In addition, we obtain a solution based on Paillier's composite
residuosity assumption, which was an open problem even in the case of
selective adversaries. We also propose LWE-based schemes that
allow evaluation of inner products modulo a prime~$p$, as opposed to
the schemes of Abdalla {\it et al.} that are restricted to evaluations
of integer inner products of short integer vectors. We finally propose
a solution based on Paillier's composite residuosity assumption that
enables evaluation of inner products modulo an RSA integer $N =
pq$. We demonstrate that the functionality of inner products over a
prime field is very powerful and can be used to construct bounded
collusion FE for all circuits.
The talk will be based on this
article.
Joint work with Shweta Agrawal.
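As background for the functionality discussed above, here is a toy Python sketch of the basic DDH-based inner-product scheme of Abdalla et al. (the selectively secure one, not the adaptively secure constructions of the talk), over a small prime-order subgroup; the group parameters are illustrative only and far too small for security.

import random

p, q, g = 2039, 1019, 4          # p = 2q + 1, and 4 generates the order-q subgroup

def keygen(n):
    s = [random.randrange(q) for _ in range(n)]      # master secret key
    mpk = [pow(g, si, p) for si in s]                # h_i = g^{s_i}
    return mpk, s

def encrypt(mpk, y):
    r = random.randrange(q)
    ct0 = pow(g, r, p)
    cts = [pow(h, r, p) * pow(g, yi, p) % p for h, yi in zip(mpk, y)]
    return ct0, cts

def derive_key(s, x):
    return sum(si * xi for si, xi in zip(s, x)) % q  # sk_x = <s, x> mod q

def decrypt(ct, x, sk_x, bound=200):
    ct0, cts = ct
    # prod_i ct_i^{x_i} / ct0^{sk_x} = g^{<x, y>}
    num = 1
    for c, xi in zip(cts, x):
        num = num * pow(c, xi, p) % p
    val = num * pow(pow(ct0, sk_x, p), p - 2, p) % p
    for z in range(bound):                           # brute-force small discrete log
        if pow(g, z, p) == val:
            return z
    raise ValueError("inner product out of range")

mpk, msk = keygen(3)
x, y = [1, 2, 3], [4, 5, 6]
ct = encrypt(mpk, y)
assert decrypt(ct, x, derive_key(msk, x)) == 32      # <x, y> = 32

Note that decryption only recovers g^{<x,y>}, so this toy scheme requires the inner product to lie in a small, enumerable range; the schemes described in the abstract that work modulo a prime p or an RSA modulus N remove restrictions of this kind.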
3:50pm - 5:20pm: Thomas Prest (Thales), Fast Fourier
Orthogonalization
The classical Fast Fourier Transform (FFT) makes it possible to compute in
quasi-linear time the product of two polynomials in the circular
convolution ring R[x]/(x^d−1) --- a task that naively requires
quadratic time. Equivalently, it accelerates matrix-vector
products when the matrix is circulant.
In this work, we discover that the ideas of the FFT can be applied to
speed up the orthogonalization process of a circulant matrix. We show
that, when d is composite, it is possible to proceed to the
orthogonalization in an inductive way, leading to a structured
Gram-Schmidt decomposition. In turn, this structured Gram-Schmidt
decomposition accelerates a cornerstone lattice algorithm: the Nearest
Plane algorithm. The results easily extend to cyclotomic rings, and
can be adapted to Gaussian samplers. This finds applications in
lattice-based cryptography, improving the performance of trapdoor
functions.
The talk will be based on this
article.
Joint work with Léo Ducas.
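The starting observation can be checked numerically in a few lines of Python (illustrative values; numpy assumed): multiplying by a circulant matrix is a product in R[x]/(x^d - 1), i.e. a circular convolution, which the FFT computes in quasi-linear time.

import numpy as np

def circulant(c):
    # Circulant matrix whose first column is c.
    d = len(c)
    return np.array([[c[(i - j) % d] for j in range(d)] for i in range(d)])

def circulant_matvec_fft(c, v):
    # C(c) @ v is the circular convolution of c and v: a pointwise product in the Fourier domain.
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(v)))

c = np.array([1.0, 2.0, 3.0, 4.0])
v = np.array([0.5, -1.0, 2.0, 0.0])
assert np.allclose(circulant(c) @ v, circulant_matvec_fft(c, v))

The contribution of the paper is to carry this structure through the Gram-Schmidt orthogonalization itself, not only through matrix-vector products.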
25.11.2015 - 26.11.2015
Topic: algorithms for hard problems on lattices and codes
Location: ENS Lyon, Monod campus.
Room: 116 (main building, level 1, South aisle).
We 25/11, 1:00pm - 3:00pm: Nicolas Gama (Inpher Switzerland and UVSQ France),
Solving shortest and closest vector problems: The
decomposition approach.
The talk will be based on
this article.
Joint work with Anja Becker and Antoine Joux.
We 25/11, 3:30pm - 5:30pm: Ilya Ozerov (RUB),
On Computing Nearest Neighbors with Applications to Decoding of Binary
Linear Codes
We propose a new decoding algorithm for random binary linear codes. The
so-called information set decoding algorithm of Prange (1962) achieves
worst-case complexity $2^{0.121n}$. In the late 80s, Stern proposed a
sort-and-match version of Prange's algorithm, on which all variants of
the currently best known decoding algorithms are built. The fastest
algorithm of Becker, Joux, May and Meurer (2012) achieves running time
$2^{0.102n}$ in the full distance decoding setting and $2^{0.0494n}$
with half (bounded) distance decoding.
In this work we point out that the sort-and-match routine in Stern's
algorithm is carried out in a non-optimal way, since the matching is
done in a two step manner to realize an {\em approximate matching} up to
a small number of error coordinates. Our observation is that such an
approximate matching can be done by a variant of the so-called High
Dimensional Nearest Neighbor Problem. Namely, out of two lists with
entries from $\mathbb{F}_2^m$ we have to find a pair with closest Hamming
distance. We develop a new algorithm for this problem with sub-quadratic
complexity which might be of independent interest in other contexts.
Using our algorithm for full distance decoding improves Stern's
complexity from $2^{0.117n}$ to $2^{0.114n}$. Since the techniques of
Becker et al apply for our algorithm as well, we eventually obtain the
fastest decoding algorithm for binary linear codes with complexity
$2^{0.097n}$. In the half distance decoding scenario, we obtain a
complexity of $2^{0.0473n}$.
The talk will be based on this
article.
Joint work with Alexander May.
Th 26/11, 10:15am - 11:15am: Ludovic Perret (UPMC), Algebraic
Algorithms for LWE
The Learning with Errors (LWE) problem, proposed by Regev in 2005, has
become an ever-popular cryptographic primitive, due mainly to its
simplicity, flexibility and convincing theoretical arguments regarding
its hardness. Among the main proposed approaches to solving LWE
instances --- namely, lattice algorithms, combinatorial algorithms,
and algebraic algorithms --- the last is the one that has received the
least attention in the literature, and is the focus of this paper. We
present a detailed and refined complexity analysis of the original
Arora-Ge algorithm, which reduced LWE to solving a system of
high-degree, error-free polynomials. Moreover, we generalise their
method and establish the complexity of applying Gr\"obner basis
techniques from computational commutative algebra to solving LWE. As a
result, we show that the use of Gr\"obner basis algorithms yields an
exponential speed-up over the basic Arora-Ge algorithm. On the other
hand, our results show that such techniques do not yield a
subexponential algorithm for the LWE problem. We also apply our
algebraic algorithm to the BinaryError-LWE problem, which was recently
introduced by Micciancio and Peikert. We show that BinaryError-LWE in
dimension n can be solved in subexponential time given access to a
quasi-linear number of samples m under a regularity assumption. We
also give precise complexity bounds for BinaryError-LWE as a function of
the number of samples. The results in this work depend crucially on
the assumption that the encountered systems have no special
structure. We give experimental evidence that this assumption holds
and also prove the assumptions in some special cases. Therewith, we
also make progress towards proving Fröberg's long-standing conjecture
from algebraic geometry.
The talk will be based on this
article.
Joint work with Martin R. Albrecht, Carlos Cid and Jean-Charles
Faugère.
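The Arora-Ge reduction that this analysis starts from can be illustrated with a tiny Python toy (parameters are illustrative; real instances are solved with Gröbner basis engines, not brute force): each LWE sample with error in {-1, 0, 1} yields a noise-free degree-3 polynomial equation in the secret.

import itertools, random

q, n, m = 17, 2, 12                       # tiny toy parameters
secret = [3, 11]

def lwe_sample():
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1])
    return a, (sum(ai * si for ai, si in zip(a, secret)) + e) % q

samples = [lwe_sample() for _ in range(m)]

def arora_ge_poly(a, b, s):
    # The noise-free Arora-Ge equation prod_{e in {-1,0,1}} (b - <a,s> - e) = 0 mod q,
    # evaluated here at a candidate secret s.
    t = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return t * (t - 1) * (t + 1) % q

candidates = [s for s in itertools.product(range(q), repeat=n)
              if all(arora_ge_poly(a, b, s) == 0 for a, b in samples)]
print(candidates)    # with enough samples, typically only the true secret (3, 11) survives

Linearizing these error-free polynomials or running a Gröbner basis algorithm on them is exactly what the complexity analysis of the talk quantifies.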
Th 26/11, 1:00pm - 2:00pm: Ludovic Perret (UPMC), Algebraic
Algorithms for LWE
(follow-up of previous talk)
Th 26/11, 2:30pm - 4:30pm: Léo Ducas (CWI), New directions in nearest neighbor searching with applications to lattice sieving
30.09.2015 - 01.10.2015
Topic: Lattice-based and code-based group signatures.
Location: ENS Lyon, Monod campus.
Room: 116 (main building, level 1, South aisle).
We 30/09, 2:30pm - 3:30pm: Benoît Libert (ENS Lyon),
Tutorial on group signatures
The tutorial will give a self-contained description of the primitive
and its security definitions for static groups. Examples of (both
theoretical and practical) constructions will be given. The case of
dynamic groups will be briefly outlined.
Suggested readings:
We 30/09, 4:00pm - 5:30pm: Khoa Nguyen (NTU),
Group signatures from lattices: simpler, tighter, shorter, ring-based
We introduce a lattice-based group signature scheme that
provides several noticeable improvements over the contemporary ones:
simpler construction, weaker hardness assumptions, and shorter sizes
of keys and signatures. Moreover, our scheme can be transformed into
the ring setting, resulting in a scheme based on ideal lattices, in
which the public key and signature both have bit-size soft-O(n log N),
for security parameter n and a group of N users. Towards our goal,
we construct a new lattice-based cryptographic tool: a statistical
zero-knowledge argument of knowledge of a valid message-signature pair
for Boyen's signature scheme (Boyen, PKC'10), which potentially can be
used as the building block to design various privacy-enhancing
cryptographic constructions.
The talk will be based on
this article.
Joint work with San Ling and Huaxiong Wang.
Th 01/10, 10:15am - 11:15am: Khoa Nguyen (NTU),
A provably secure group signature scheme from code-based assumptions
We solve an open question in code-based cryptography by introducing
the first provably secure group signature scheme from code-based
assumptions. Specifically, the scheme satisfies the CPA-anonymity and
traceability requirements in the random oracle model, assuming the
hardness of the McEliece problem, the Learning Parity with Noise
problem, and a variant of the Syndrome Decoding problem. Our
construction produces smaller key and signature sizes than the
existing post-quantum group signature schemes from lattices, as long
as the cardinality of the underlying group does not exceed the
population of the Netherlands (approx 2^24 users). The
feasibility of the scheme is supported by implementation
results. Additionally, the techniques introduced in this work might be
of independent interest: a new verifiable encryption protocol for the
randomized McEliece encryption and a new approach to design formal
security reductions from the Syndrome Decoding problem.
The talk will be based on
this article.
Joint work with Martianus Frederic Ezerman, Hyung Tae Lee, San Ling
and Huaxiong Wang.
Th 01/10, 1:30pm - 3:00pm: Philippe Gaborit (U. Limoges),
Dynamic traceable signature on lattices with non-frameability property
Traceable signature schemes were introduced in 2004 as a
generalization of group signatures. In this talk we introduce the
first traceable signature scheme based on lattices. Our scheme is
conceptually simple and provides two other very nice features:
dynamicity and non-frameability, which ensure, first, that the group manager
can add new participants to the group and, second, that he cannot
frame members of the group.
Joint work with Carlos Aguilar Melchor, Olivier Blazy and
Jean-Christophe Deneuville.
Th 01/10, 3:30pm - 5:00pm: Fabrice Mouhartem (ENS Lyon),
Lattice-based group signatures for dynamic groups
TBA
Joint work with Benoît Libert.
Before 09/2015.