Invited talk, MAP5, Paris, May 17th 2024

Frugality in machine learning: Sparsity, a value for the future?

Sparse vectors and sparse matrices play a transversal role in signal and image processing: they have led to successful approaches efficiently addressing tasks as diverse as data compression, fast transforms, signal denoising and source separation, or more generally inverse problems. To what extent can the potential of sparsity also be leveraged to achieve more frugal (deep) learning techniques? Through an overview of recent explorations around this theme, I will compare and contrast classical sparse regularization for inverse problems with its natural extensions that aim at learning neural networks with sparse connections. During our journey, I will notably highlight the role of rescaling-invariances of modern deep parameterizations, which come with their curses and blessings.

Invited talk, Math Machine Learning seminar MPI MIS + UCLA, online, May 16, 2024

‘Conservation Laws for Gradient Flows’

Understanding the geometric properties of gradient descent dynamics is
a key ingredient in deciphering the recent success of very large
machine learning models. A striking observation is that trained
over-parameterized models retain some properties of the optimization
initialization. This “implicit bias” is believed to be responsible for
some favorable properties of the trained models and could explain
their good generalization properties. In this work, we expose the
definitions and properties of “conservation laws”, that define
quantities conserved during gradient flows of a given machine learning
model, such as a ReLU network, with any training data and any loss.
After explaining how to find the maximal number of independent
conservation laws via Lie algebra computations, we provide algorithms
to compute a family of polynomial laws, as well as to compute the
number of (not necessarily polynomial) conservation laws. We show
that for a number of architectures there are no laws beyond the known
ones, and we identify new laws for certain flows with momentum and/or
non-Euclidean geometries.
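A simple instance of such a conservation law (a standard illustration, not taken from the paper): for a two-layer linear model f(x) = v * u * x trained by gradient flow on any smooth loss, the balancedness quantity u^2 - v^2 is conserved, since du/dt is proportional to v and dv/dt to u with the same factor. The sketch below checks numerically that small-step gradient descent, which approximates the flow, keeps this quantity nearly constant; all values are illustrative.

```python
import numpy as np

# One data point, squared loss 0.5 * (v*u*x - y)^2.
x, y = 1.3, 0.7
u, v = 0.5, 2.0
q0 = u**2 - v**2                     # candidate conserved quantity

eta = 1e-4                           # small step => close to the gradient flow
for _ in range(100_000):
    r = v * u * x - y                # residual
    gu = r * v * x                   # d loss / d u
    gv = r * u * x                   # d loss / d v
    u, v = u - eta * gu, v - eta * gv

drift = abs((u**2 - v**2) - q0)      # stays tiny, of order eta
```

Under the exact flow the drift is zero; discretization only perturbs it at second order in the step size, which is why the trained model "remembers" its initialization through such quantities.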
Joint work with Sibylle Marcotte and Gabriel Peyré.

Invited talk @RWTH Sparsity and Singular Structures, Aachen, 26-29 February 2024

https://sfb1481.rwth-aachen.de/symposium24/programme

Journées SMAI-MODE 2024, Lyon, 25-29 March 2024

Details and registration https://indico.math.cnrs.fr/event/9418/

Journée SIGMA – MODE 2024, 30 January 2024, Inria Paris.

More information and (free but mandatory) registration at:

http://angkor.univ-mlv.fr/~vialard/conferences/sigmamode/

Invited talk at JRAF, Grenoble, 13-14 December 2023

https://jraf-2023.sciencesconf.org/

Invited Talk @DIPOpt workshop, Nov 29th 2023

http://perso.ens-lyon.fr/nelly.pustelnik/DIPOpt/

Talk at the AILyS project study day “Quelle IA pour 2030?” (“Which AI for 2030?”)

https://www.ens-lyon.fr/evenement/recherche/quelle-ia-pour-2030-le-projet-ailys

New preprint “Abide by the Law and Follow the Flow”

New preprint “Abide by the Law and Follow the Flow: Conservation Laws for Gradient Flows”, with @SibylleMarcotte and @GabrielPeyre: we define and study “conservation laws” for the optimization of over-parameterized models. https://arxiv.org/abs/2307.00144

Invited talk @ Stéphane Mallat’s 60th birthday workshop

Invited talk “Rapture of the deep: Highs and lows of Sparsity in a world of depth” at the workshop “A Multiscale tour of Harmonic Analysis and Machine Learning”, celebrating Stéphane Mallat’s 60th birthday, @Institut_IHES, April 21st 2023