Talk @ PEPR IA Days, Saclay, Mar 18th 2025

Highs and lows of sparsity in a world of depths

Journée AILYS at ENS Lyon, Feb 14th 2025

Optimization for Artificial Intelligence
Robustness, Overfitting, Transfer, Frugality

Talk on Conservation laws during neural network training (joint work with S. Marcotte and G. Peyré)


Séminaire “mathématiques de l’IA”, IMB, Bordeaux, Jan 30th 2025

Rescaling Symmetries in Neural Networks: a Path-lifting Perspective

joint work with A. Gonon, N. Brisebarre, E. Riccietti (https://hal.science/hal-04225201v5, https://hal.science/hal-04584311v3) 
and with A. Gagneux, M. Massias, E. Soubies (https://hal.science/hal-04877619v1) 

Séminaire MMCS de l’ICJ, Lyon 1, Jan 7th 2025 – Conservation Laws for Gradient Flows

https://math.univ-lyon1.fr/wikis/seminaire-mmcs/doku.php

Prix L’Oréal-UNESCO 2024 for Sibylle Marcotte

Sibylle Marcotte, a PhD student at ENS Ulm whom I co-supervise with Gabriel Peyré, is among the laureates of the 18th Prix Jeunes Talents France L’Oréal-UNESCO Pour Les Femmes et la Science.

Invited talk @Workshop Frugalité en IA et en statistiques, Sorbonne Université, Paris, Oct 4th 2024

https://frugalias.sciencesconf.org/

PolSys Seminar, LIP6, Paris, Sep 27th 2024

https://www-polsys.lip6.fr/Seminar/seminar.html

Conservation Laws for Gradient Flows

Conservation laws for neural network training: two papers at @NeurIPS23 & @ICML24

We study conservation laws during the gradient flow or momentum flow (Euclidean or not) of neural network training.

Keep the Momentum: Conservation Laws beyond Euclidean Gradient Flows, accepted at ICML24

1/ We define the concept of conservation laws for momentum flows and show how to extend the framework of our previous paper (Abide by the Law and Follow the Flow: Conservation Laws for Gradient Flows, oral @NeurIPS23) to non-Euclidean gradient flow (GF) and momentum flow (MF) settings. In stark contrast to the GF case, conservation laws for MF exhibit temporal dependence.

2/ We discover new conservation laws for linear networks in the Euclidean momentum case, and show that these new laws are complete. In contrast, there are no conservation laws for ReLU networks in the Euclidean momentum case.

3/ In non-Euclidean settings, such as NMF or ICNNs implemented with two-layer ReLU networks, we discover new conservation laws for gradient flows and find none in the momentum case. We also obtain new conservation laws in the Natural Gradient Flow case.

4/ We shed light on a quasi-systematic loss of conservation when transitioning from the GF to the MF setting.
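The loss of conservation from GF to MF can be observed numerically on a toy case. The sketch below (hypothetical illustration, not the papers' code) trains a scalar two-layer linear model f(x) = w2·w1·x and tracks the balancedness quantity w1² − w2², a known conserved quantity of the Euclidean gradient flow; with heavy-ball momentum the same quantity drifts.

```python
import numpy as np

# Toy model f(x) = w2 * w1 * x with loss L = 0.5 * (f(x) - y)^2.
# For the Euclidean gradient flow, w1^2 - w2^2 is conserved exactly
# (d/dt (w1^2 - w2^2) = -2 w1 dL/dw1 + 2 w2 dL/dw2 = 0); with small
# explicit steps it should only drift by discretization error.

def grads(w1, w2, x=1.0, y=2.0):
    r = w2 * w1 * x - y              # residual
    return r * w2 * x, r * w1 * x    # dL/dw1, dL/dw2

eta = 1e-4

# Gradient flow, integrated with tiny explicit Euler steps.
w1, w2 = 1.5, 0.5
c0 = w1**2 - w2**2                   # conserved quantity at init
for _ in range(20000):
    g1, g2 = grads(w1, w2)
    w1 -= eta * g1
    w2 -= eta * g2
drift_gf = abs((w1**2 - w2**2) - c0)

# Same loss with heavy-ball momentum: conservation is generally lost.
w1, w2, m1, m2 = 1.5, 0.5, 0.0, 0.0
beta = 0.9
for _ in range(20000):
    g1, g2 = grads(w1, w2)
    m1 = beta * m1 + g1
    m2 = beta * m2 + g2
    w1 -= eta * m1
    w2 -= eta * m2
drift_mf = abs((w1**2 - w2**2) - c0)

# drift stays tiny for GF, noticeably larger with momentum
print(drift_gf, drift_mf)
```

The step size is kept small so that the discrete iterates approximate the continuous-time flows the papers analyze.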

Invited talk @IEM @EPFL, May 24th 2024

Frugality in machine learning: Sparsity, a value for the future?

Sparse vectors and sparse matrices play a transverse role in signal and image processing: they have led to successful approaches that efficiently address tasks as diverse as data compression, fast transforms, signal denoising and source separation, or more generally inverse problems. To what extent can the potential of sparsity also be leveraged to achieve more frugal (deep) learning techniques? Through an overview of recent explorations around this theme, I will compare and contrast classical sparse regularization for inverse problems with its natural extensions that aim at learning neural networks with sparse connections. During our journey, I will notably highlight the role of rescaling invariances of modern deep parameterizations, which come with their curses and blessings.
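To make "classical sparse regularization for inverse problems" concrete, here is a minimal sketch (hypothetical illustration, not the speaker's code) of ISTA solving a LASSO problem, i.e. least squares with an l1 penalty, where soft-thresholding produces exactly sparse estimates.

```python
import numpy as np

# ISTA sketch for the LASSO: min_x 0.5*||A x - y||^2 + lam*||x||_1.
# All dimensions and parameter values below are illustrative choices.

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1: shrinks toward 0, zeroing small entries."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(0)
n, d, k = 50, 100, 5                       # measurements, dimension, sparsity
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[rng.choice(d, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                             # noiseless measurements

lam = 0.01
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
x = np.zeros(d)
for _ in range(2000):
    x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)

nnz = np.count_nonzero(np.abs(x) > 1e-3)
print(nnz)                                 # a sparse estimate
```

The soft-thresholding step is what sets exact zeros, in contrast to plain gradient descent on the smooth part alone.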

Invited talk, MAP5, Paris, May 17th 2024

Frugality in machine learning: Sparsity, a value for the future?

Sparse vectors and sparse matrices play a transverse role in signal and image processing: they have led to successful approaches that efficiently address tasks as diverse as data compression, fast transforms, signal denoising and source separation, or more generally inverse problems. To what extent can the potential of sparsity also be leveraged to achieve more frugal (deep) learning techniques? Through an overview of recent explorations around this theme, I will compare and contrast classical sparse regularization for inverse problems with its natural extensions that aim at learning neural networks with sparse connections. During our journey, I will notably highlight the role of rescaling invariances of modern deep parameterizations, which come with their curses and blessings.