Category: Talk

Talk @ PEPR IA Days, Saclay, Mar 18th 2025

Talk on Highs and lows of sparsity in a world of depths

Journée AILYS (AILYS Day) at ENS Lyon, Feb 14th 2025

Optimization for Artificial Intelligence: Robustness, Overfitting, Transfer, Frugality. Talk on Conservation laws during neural network training (joint work with S. Marcotte and G. Peyré).

Séminaire “mathématiques de l’IA” (Mathematics of AI seminar), IMB, Bordeaux, Jan 30th 2025

Rescaling Symmetries in Neural Networks: a Path-lifting Perspective. Joint work with A. Gonon, N. Brisebarre, and E. Riccietti (https://hal.science/hal-04225201v5, https://hal.science/hal-04584311v3), and with A. Gagneux, M. Massias, and E. Soubies (https://hal.science/hal-04877619v1).
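For context, the rescaling symmetry at stake (a standard ReLU identity, given here as an illustration rather than a statement from these papers): since \(\mathrm{ReLU}(\lambda t) = \lambda\,\mathrm{ReLU}(t)\) for any \(\lambda > 0\), a one-hidden-neuron network \(x \mapsto v\,\mathrm{ReLU}(u x)\) computes the same function after \((u, v) \mapsto (\lambda u, v/\lambda)\), while quantities lifted to paths, such as the product \(uv\), are invariant under this symmetry.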

Séminaire MMCS, ICJ, Lyon 1, Jan 7th 2025 – Conservation Laws for Gradient Flows

https://math.univ-lyon1.fr/wikis/seminaire-mmcs/doku.php

Invited talk @ Workshop Frugalité en IA et en statistiques (Frugality in AI and Statistics), Sorbonne Université, Paris, Oct 4th 2024

https://frugalias.sciencesconf.org/

PolSys Seminar, LIP6, Paris, Sep 27th 2024

Conservation Laws for Gradient Flows

https://www-polsys.lip6.fr/Seminar/seminar.html

Invited talk @ IEM, EPFL, May 24th 2024

Frugality in machine learning: Sparsity, a value for the future? Sparse vectors and sparse matrices play a transverse role in signal and image processing: they have led to successful approaches efficiently addressing tasks as diverse as data compression, fast transforms, signal denoising and source separation, or more generally inverse problems. To what extent can the …

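To make the sparse approach to inverse problems concrete, here is the standard \(\ell^1\)-regularized formulation (a generic textbook instance, not taken from the talk; \(A\), \(y\) and \(\lambda\) are placeholder notation):

\[
\min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\,\| y - A x \|_2^2 + \lambda \,\| x \|_1 ,
\]

where \(y\) is the observed signal, \(A\) the measurement (or dictionary) matrix, and \(\lambda > 0\) trades data fidelity against sparsity. In the pure denoising case \(A = \mathrm{Id}\), the solution is obtained coordinate-wise by soft-thresholding: \(x_i = \mathrm{sign}(y_i)\,\max(|y_i| - \lambda,\, 0)\).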

Invited talk, MAP5, Paris, May 17th 2024

Frugality in machine learning: Sparsity, a value for the future? Sparse vectors and sparse matrices play a transverse role in signal and image processing: they have led to successful approaches efficiently addressing tasks as diverse as data compression, fast transforms, signal denoising and source separation, or more generally inverse problems. To what extent can the …


Invited talk, Math Machine Learning seminar MPI MIS + UCLA, online, May 16th 2024

‘Conservation Laws for Gradient Flows’ Understanding the geometric properties of gradient descent dynamics is a key ingredient in deciphering the recent success of very large machine learning models. A striking observation is that trained over-parameterized models retain some properties of the optimization initialization. This “implicit bias” is believed to be responsible for some favorable properties of the trained models and …

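For readers unfamiliar with the term, a minimal worked definition (standard material, stated here for context rather than quoted from the talk): along a gradient flow

\[
\dot{\theta}(t) = -\nabla L(\theta(t)),
\]

a function \(h\) is a conservation law if it is constant on every trajectory, i.e.

\[
\frac{d}{dt}\, h(\theta(t)) = \big\langle \nabla h(\theta(t)),\, \dot{\theta}(t) \big\rangle = -\big\langle \nabla h(\theta(t)),\, \nabla L(\theta(t)) \big\rangle = 0 .
\]

For instance, if a two-neuron model has loss \(L(u,v) = \ell(uv)\) depending on the parameters only through the product \(uv\), then \(\langle u, \nabla_u L\rangle = \ell'(uv)\,uv = \langle v, \nabla_v L\rangle\), so \(h(u,v) = u^2 - v^2\) is conserved: whatever balance between the two weights holds at initialization is retained throughout training.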

Invited talk @ RWTH Sparsity and Singular Structures, Aachen, Feb 26th-29th 2024

https://sfb1481.rwth-aachen.de/symposium24/programme