# Workshop DIPOpt, Lyon

## Deep learning, image analysis, inverse problems, and optimization

**November 27th - 30th, 2023**

**Location**

Salle condorcet, site Monod, ENS Lyon,

1 place de l'école, 69007 Lyon

**Organizing committee**

Jérémy Cohen, CNRS CREATIS

Marion Foare, CPE and Laboratoire de l'Informatique du Parallélisme, ENS Lyon

Adrien Meynard, Laboratoire de Physique, ENS Lyon

Nelly Pustelnik, CNRS, Laboratoire de Physique, ENS Lyon

Elisa Riccietti, Laboratoire de l'Informatique du Parallélisme, ENS Lyon

Julian Tachella, CNRS, Laboratoire de Physique, ENS Lyon

**Registration**

Registration is free but mandatory: register HERE.

**Scholarship applications to cover travel expenses for PhD students and post-doctoral researchers**

Each application must include a curriculum vitae, a reasoned opinion from the research supervisor, the amount requested, and a letter of commitment to take part in the workshop from 27 to 30 November 2023 and to present their work in one of the poster sessions.

Applications should be sent by email to Nelly Pustelnik (nelly.pustelnik@ens-lyon.fr) before Nov. 1st.

**Monday 27th November**

#### 10h30-11h40: **Martin STORATH** -- Professor, Technische Hochschule Würzburg-Schweinfurt, Germany

**Title** -- Fast algorithms for discontinuity-preserving smoothing. [PDF]

**Abstract** -- Discontinuities in functions frequently encode significant information: for instance, they represent the boundaries of cellular structures in microscopic images, they correspond to change points in microarray data, and they define tissue layers in tomographic images. Since classical linear smoothing methods destroy this important information, discontinuity-preserving models such as the Potts model, the Mumford-Shah model and the Blake-Zisserman model have been developed. Such free-discontinuity problems are algorithmically challenging as they lead to nonsmooth and nonconvex problems.

In the talk, we start by discussing the one-dimensional case, and we look at algorithms for solving these problems efficiently, exactly, and in a numerically stable way. The methods involve dynamic programming and recurrence schemes for least squares or least absolute deviations. In particular, we discuss a recent advance regarding the efficient computation of smoothing splines with discontinuities. Then we turn to the higher-dimensional case, where only approximate solutions are possible. We study splitting approaches based on the alternating direction method of multipliers or on iterative minimization. For the higher-order models, where smoothing is based on higher-order derivatives, we discuss a recent splitting approach based on Taylor jets.

The talk is based on joint work with Lukas Kiefer and Andreas Weinmann.
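For readers unfamiliar with exact 1D algorithms for free-discontinuity problems, here is a minimal sketch of the classical dynamic program for the 1D Potts model (an illustration of the general technique, not the speaker's implementation; `gamma` denotes the jump penalty):

```python
import numpy as np

def potts_1d(y, gamma):
    """Exact 1D Potts segmentation via dynamic programming.

    Minimizes  gamma * (#jumps) + sum of squared deviations from segment
    means, in O(n^2) time using prefix sums for segment errors.
    """
    y = np.asarray(y, float)
    n = len(y)
    s = np.concatenate(([0.0], np.cumsum(y)))        # prefix sums of y
    s2 = np.concatenate(([0.0], np.cumsum(y ** 2)))  # prefix sums of y^2

    def seg_err(l, r):
        # least-squares error of approximating y[l..r] (inclusive) by its mean
        m = r - l + 1
        total = s[r + 1] - s[l]
        return s2[r + 1] - s2[l] - total * total / m

    B = np.full(n + 1, np.inf)   # B[r] = optimal energy for the prefix y[0..r-1]
    B[0] = -gamma                # cancels the jump penalty of the first segment
    jump = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        for l in range(1, r + 1):
            cand = B[l - 1] + gamma + seg_err(l - 1, r - 1)
            if cand < B[r]:
                B[r] = cand
                jump[r] = l - 1
    # backtrack the segment boundaries and build the piecewise-constant fit
    x = np.empty(n)
    r = n
    while r > 0:
        l = jump[r]
        x[l:r] = np.mean(y[l:r])
        r = l
    return x
```

A small `gamma` keeps every jump of a clean piecewise-constant signal, while a very large `gamma` collapses the result to the global mean.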

#### 11h40-12h50: **Odyssée MERVEILLE** -- Assistant Professor, CREATIS, INSA Lyon, France

**Title** -- Vascular segmentation based on variational approaches. [PDF]

**Abstract** -- The segmentation of blood vessels in medical images is challenging as they are thin, connected and tortuous. Despite more than two decades of research, it is still difficult to detect a complete connected vascular network, in particular from 3D images. Yet, a geometrically accurate segmentation is critical for clinical applications (e.g. blood flow simulations, vascular network modeling and analysis). End-to-end deep learning approaches have been developed to tackle this issue. However, they demand large annotated datasets tailored to each new application. In practice, this is impossible to obtain, as the manual segmentation of a single 3D vascular network takes a trained expert several hours. By contrast, variational methods do not rely on annotations, but classic approaches are not suited to the detection of thin structures that are essentially composed of contours.

In this talk, we will first discuss some of the regularization terms that have been proposed to preserve thin structures in variational frameworks. Then, we will present recent work towards learning a dedicated regularizer aimed at preserving the connectivity of vascular networks. This connectivity-preserving regularizer focuses on geometric properties and can be trained on synthetic data, circumventing the need for annotated data. It can then be plugged into a variational segmentation framework, enabling vascular segmentation across various types of vascular images. The interest and limits of this approach will be illustrated on several applications such as retinal, brain and liver vascular networks.

#### 13h50-15h50: **Poster session**

**L. Amador** -- MRI super-resolution using variational approaches.

**L. Davy** -- Combining dual-tree wavelet analysis and proximal optimization for anisotropic scale-free texture segmentation.

**J. El Haouari** -- Estimating instrument spectral response functions using sparse approximation.

**L. Leon** -- Spectral parameter estimation for quantitative acoustic microscopy.

**R. Merabet** -- On the local geometry of the PET reconstruction problem.

**A. Moshtaghpour** -- Exploring low-dose and fast electron ptychography using l0 regularisation of the extended ptychographical iterative engine.

**N. Pustelnik** -- On the primal and dual formulations of the discrete Mumford-Shah functional.

**G. Schramm** -- Listmode stochastic primal-dual hybrid gradient for sparse Poisson (e.g. PET) data.

**R. Vo** -- Neural fields and regularization by denoising for sparse-view X-ray CT.

#### 15h50-17h00: **Xiaohao CAI** -- Assistant Professor, University of Southampton, UK

**Title** -- Segmentation and Classification using Deep Learning Technologies. [PDF]

**Abstract** -- Deep learning technologies have revolutionised many fields including computer vision and image processing. Their success generally relies on big data. However, for the data scarcity scenarios like in medical imaging, their performance could drop significantly. Moreover, in many cases, they also lack generalisation (e.g. the cross-domain adaptation problem) and explanation (e.g. explainable AI). In this presentation, I will introduce some of our recent work on segmentation and classification targeting those challenges, such as subspace feature representations, cross-domain adaptation in point clouds, multilevel explainable AI, etc.

**Tuesday 28th November**

#### 10h30-11h40: **Dirk LORENZ** -- Professor, TU Braunschweig, Germany

**Title** -- Learning regularizers - bilevel optimization or unrolling? [PDF]

**Abstract** -- In this talk we consider the problem of learning a convex regularizer from a theoretical perspective. In general, learning of variational methods can be done by bilevel optimization, where the variational problem is the lower-level problem and the upper-level problem minimizes over some parameter of the lower-level problem. However, this is usually too difficult in practice, and one practically feasible alternative is the approach of so-called unrolling (or unfolding). There, one replaces the lower-level problem by an algorithm that converges to a solution of that problem and uses the N-th iterate instead of the true solution. While this approach is often successful in practice, few theoretical results are available. In this talk we consider a situation in which one can make a thorough comparison of the bilevel and unrolling approaches on a quite simple toy example. Even though the example is simple, the situation is already quite complex and reveals several phenomena that have been observed in practice.
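To make the bilevel-vs-unrolling distinction concrete, here is a hedged toy sketch (not the example from the talk; the ridge-regularized least-squares lower level and all names are illustrative assumptions): the upper level tunes a regularization weight `lam`, and unrolling replaces the exact lower-level solution by N gradient iterates.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = rng.standard_normal(10)
y = A @ x_true + 0.1 * rng.standard_normal(20)

def lower_iterates(lam, N, step=0.005):
    """Unrolling: N gradient steps on  min_x ||Ax - y||^2 + lam ||x||^2."""
    x = np.zeros(10)
    for _ in range(N):
        x = x - step * (2 * A.T @ (A @ x - y) + 2 * lam * x)
    return x

def lower_exact(lam):
    """Bilevel: the exact lower-level solution (closed form here)."""
    return np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ y)

# upper-level objective: distance of the reconstruction to the ground truth,
# evaluated on a grid of regularization weights
lams = np.linspace(0.0, 5.0, 101)
loss_unrolled = [np.sum((lower_iterates(lam, N=20) - x_true) ** 2) for lam in lams]
loss_bilevel = [np.sum((lower_exact(lam) - x_true) ** 2) for lam in lams]

best_unrolled = lams[np.argmin(loss_unrolled)]
best_bilevel = lams[np.argmin(loss_bilevel)]
print(best_unrolled, best_bilevel)
```

Because the unrolled map depends on N and the step size, the parameter that is optimal for the unrolled scheme need not coincide with the one that is optimal for the exact lower-level solution, which is the kind of discrepancy the talk analyzes.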

#### 11h40-12h50: **Fernando ROLDAN** -- Postdoctoral researcher, CentraleSupélec, Inria Saclay, France

**Title** -- Solution of Mismatched Monotone+Lipschitz Inclusion Problems. [PDF]

**Abstract** -- Adjoint mismatch problems arise when the adjoint of a linear operator is replaced by an approximation, due to computational or physical issues. This occurs in inverse problems, particularly in computed tomography. In this talk, we address the convergence of algorithms for solving monotone inclusions in real Hilbert spaces in the presence of adjoint mismatch. In particular, we investigate the case of a mismatched Lipschitzian operator. We propose variants of the *Forward-Backward-Half-Forward* and *Forward-Half-Reflected-Backward* algorithms allowing one to cope with the mismatch. We establish conditions under which the weak convergence of these variants to a solution is guaranteed. Moreover, the proposed algorithms allow each iteration to be implemented with a possibly iteration-dependent approximation of the mismatched operator, thus allowing this operator to be modified at each iteration.

Joint work with Emilie Chouzenoux and Jean-Christophe Pesquet.
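A small numerical illustration of why adjoint mismatch matters (a generic least-squares sketch, not the monotone-inclusion algorithms of the talk; the perturbation model is an assumption): replacing the true adjoint by an approximation shifts the fixed point of the iteration.

```python
import numpy as np

# "Gradient" descent on 0.5||Ax - y||^2, but with A^T replaced by an
# approximation B != A^T (the adjoint mismatch).  The iteration
#     x <- x - h * B (A x - y)
# still converges for a suitable step h, but to a zero of the
# *mismatched* map B(Ax - y), not of the true gradient A^T(Ax - y).
rng = np.random.default_rng(0)
A = rng.standard_normal((15, 5))
B = A.T + 0.02 * rng.standard_normal((5, 15))   # perturbed adjoint
y = rng.standard_normal(15)

x = np.zeros(5)
h = 0.01
for _ in range(20000):
    x = x - h * B @ (A @ x - y)

residual_mismatched = np.linalg.norm(B @ (A @ x - y))  # ~0: mismatched fixed point
residual_true = np.linalg.norm(A.T @ (A @ x - y))      # generally nonzero
print(residual_mismatched, residual_true)
```

The gap between the two residuals quantifies the bias introduced by the mismatch, which motivates convergence analyses that account for it explicitly.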

#### 13h50-15h50: **Poster session**

**D. Chen** -- Deepinverse: a PyTorch library for solving inverse problems with deep learning.

**F. Coeurdoux** -- Plug-and-Play split Gibbs sampler: embedding deep generative priors in Bayesian inference.

**J. Hertrich** -- Posterior sampling via sliced MMD flows with the negative distance kernel.

**O. Leblanc** -- CoLSI: Continuous Lippmann-Schwinger Intensity diffraction tomography.

**A. Paliwal** -- Reconstruction of microparticles from holographic images using neural networks.

**J. Rue Queralt** -- Pyxu: modular & scalable computational imaging.

**M. Mohamed** -- Guaranteed sparse support recovery using the Straight-Through Estimator.

**M. Savanier** -- Convergent Plug-and-Play reconstruction for positron emission tomography.

**M. Terris** -- Meta-learning for adaptive inverse problem solvers.

#### 15h50-17h: **Michael DAVIES** -- Professor, The University of Edinburgh, UK

**Title** -- Unsupervised Machine Imaging: when is data-driven knowledge discovery really possible? [PDF]

**Abstract** -- Modern machine learning today offers state-of-the-art solutions for compressed and computational imaging, exploiting the sophisticated statistical dependencies within images. However, such solutions require unrealistic access to a large quantity of ground truth images for training, suggesting that we need to have "solved" the imaging problem before it is amenable to machine learning techniques. This has led to a big drive to try to train such systems without ground truth image data. In the talk, I will discuss a new theoretical and algorithmic framework for learning the image reconstruction mapping in ill-posed inverse problems using only measurement data for training. We will begin with the relatively trivial result that such learning is, in general, impossible. However, if we assume that the underlying signal model is low dimensional (c.f. compressed sensing), and that we have measurement data from a sufficiently diverse set of experiments (different measurement operators), or if we are happy to make weak symmetry assumptions on the signal model, e.g. a rotated image is still a valid image, then I will show that the signal model can be identified from the incomplete measurement data alone. I will also discuss some initial results using a new class of self-supervised learning, equivariant imaging (EI), that enables learning from measurement data alone. EI has already demonstrated unsupervised imaging performance for undersampled MRI and CT tasks on a par with solutions trained in a fully supervised manner. Under appropriate assumptions this framework appears to offer the real possibility of genuine data-driven knowledge discovery through machine learning.

The talk is based on joint work with Julian Tachella and Dongdong Chen.
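The identifiability argument can be illustrated with a toy linear-algebra sketch (an illustration of the underlying idea, not the EI training method; the inpainting operator and shift group are assumptions for the example): a single incomplete operator has a nontrivial nullspace, but combining it with a symmetry group can make the model visible in every direction.

```python
import numpy as np

n = 8
# inpainting operator: keeps only the first half of the signal
A = np.eye(n)[: n // 2]

def shift_matrix(n, s):
    """Matrix of the cyclic shift by s positions."""
    return np.roll(np.eye(n), s, axis=0)

# A alone cannot distinguish signals that differ in its nullspace...
rank_single = np.linalg.matrix_rank(A)
# ...but if the signal model is invariant to cyclic shifts, the virtual
# operators A T_s stack into a full-rank map, so the model becomes
# identifiable from measurement data alone -- the core EI observation.
stacked = np.vstack([A @ shift_matrix(n, s) for s in range(n)])
rank_stacked = np.linalg.matrix_rank(stacked)
print(rank_single, rank_stacked)  # 4, 8
```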

#### 17h-18h10: **Rémi GRIBONVAL** -- DR Inria, LIP, ENS Lyon, France

**Title** -- Rapture of the deep: highs and lows of sparsity in a world of depths [PDF]

**Abstract** -- Promoting sparse connections in neural networks is natural to control their complexity. Besides, given its thoroughly documented role in inverse problems and variable selection, sparsity also has the potential to give rise to learning mechanisms endowed with certain interpretability guarantees. Through an overview of recent explorations around this theme, I will compare and contrast classical sparse regularization for inverse problems with multilayer sparse regularization. During our journey, I will notably highlight the role of rescaling invariances in deep parameterizations. In the process we will also be reminded that there is life beyond gradient descent, as illustrated by an algorithm that brings speedups of up to two orders of magnitude when learning certain fast transforms via multilayer sparse factorization.
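The rescaling invariance mentioned above can be checked in a few lines (an illustrative sketch with arbitrary dimensions, not material from the talk): in a one-hidden-layer ReLU network, positively rescaling a neuron's input weights and inversely rescaling its output weights leaves the network function unchanged, even though parameter norms change.

```python
import numpy as np

relu = lambda t: np.maximum(t, 0)

def net(x, W1, W2):
    """One-hidden-layer ReLU network."""
    return W2 @ relu(W1 @ x)

rng = np.random.default_rng(0)
W1 = rng.standard_normal((6, 4))
W2 = rng.standard_normal((3, 6))
x = rng.standard_normal(4)

# per-neuron positive rescaling: since relu(d * t) = d * relu(t) for d > 0,
# the function is unchanged, although the parameters (and hence any
# sparsity-promoting norm of them) are not
d = rng.uniform(0.5, 2.0, size=6)
out1 = net(x, W1, W2)
out2 = net(x, np.diag(d) @ W1, W2 @ np.diag(1 / d))
print(np.max(np.abs(out1 - out2)))  # ~0
```

This is exactly why naive parameter-norm regularization of deep networks behaves differently from classical sparse regularization of inverse problems.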

**Wednesday 29th November**

#### 10h30-11h40: **Aude RONDEPIERRE** -- Professor, INSA de Toulouse, IMT, France

**Title** -- Strong convergence of the iterates of FISTA. [PDF]

**Abstract** -- In this talk, we are interested in the famous FISTA algorithm. In a first part, we show that FISTA is an automatically geometrically optimized algorithm for functions satisfying some quadratic growth assumption and having a unique minimizer. This explains why FISTA works better than the standard Forward-Backward algorithm (FB) in such a case, although FISTA is known to have a polynomial asymptotic convergence rate while FB is exponential. We provide a simple rule to tune the α parameter within the FISTA algorithm to reach an ε-solution with an optimal number of iterations. These new results highlight the efficiency of FISTA algorithms, and they rely on new non-asymptotic bounds for FISTA. In a second part, we will extend these results and prove that the iterates of FISTA strongly converge to a minimizer of F as soon as F satisfies some growth condition weaker than strong convexity, and without the minimizer's uniqueness assumption.
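For reference, a minimal sketch of the FISTA iteration applied to a LASSO toy problem (this uses the classical inertial sequence, not the α-parameterized variant analyzed in the talk; all problem data are illustrative):

```python
import numpy as np

def fista(grad_f, prox_g, L, x0, iters=500):
    """FISTA for min_x f(x) + g(x), with f L-smooth and g proximable.

    Classical inertial sequence t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2.
    """
    x = x0.copy()
    z = x0.copy()
    t = 1.0
    for _ in range(iters):
        x_new = prox_g(z - grad_f(z) / L, 1.0 / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)   # extrapolation step
        x, t = x_new, t_new
    return x

# Toy LASSO:  min_x 0.5||Ax - y||^2 + mu ||x||_1
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 50))
y = A @ (np.arange(50) == 3) + 0.01 * rng.standard_normal(30)
mu = 0.5
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of grad f
grad = lambda x: A.T @ (A @ x - y)
soft = lambda x, s: np.sign(x) * np.maximum(np.abs(x) - mu * s, 0)  # prox of mu||.||_1
x_hat = fista(grad, soft, L, np.zeros(50))
```

The same loop with `z = x_new` (no extrapolation) is the Forward-Backward baseline against which FISTA is compared in the talk.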

#### 11h40-12h50: **Adrien TAYLOR** -- CR Inria, ENS, Paris, France

**Title** -- Constructive approaches to the analysis and construction of optimization algorithms. [PDF]

**Abstract** -- In this talk, I will provide a high-level overview of recent principled approaches for constructively analyzing and designing numerical optimization algorithms.
The presentation will be example-based, as the main ingredients necessary for understanding the methodologies are already present in the analysis of base optimization schemes, such as gradient descent.
Based on those examples, I will discuss how those techniques can be leveraged for constructing Lyapunov-based analyses and optimal convex optimization algorithms.
The methodology is accessible through easy-to-use open-source packages (including PEPit: https://github.com/PerformanceEstimation/PEPit), allowing one to use the framework without the modelling pain.
This talk is based on joint works with great colleagues that I will introduce during the presentation.
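The flavor of a worst-case analysis can be conveyed on the simplest example: for gradient descent on L-smooth, μ-strongly convex quadratics, the tight per-step contraction of the distance to the minimizer is known in closed form, and the worst case is attained at the extreme curvatures. A hand-made numerical check of this (a sketch of the idea only, not PEPit and not the constructive machinery of the talk):

```python
import numpy as np

# One gradient step x <- x - h * grad on a quadratic with curvature lam
# contracts the distance to the minimizer by exactly |1 - h * lam|.
# Over the class of curvatures in [mu, L], the worst-case factor is
#     rho = max(|1 - h * mu|, |1 - h * L|),
# attained at an extreme eigenvalue -- i.e. the bound is tight.
mu, L, h = 1.0, 10.0, 0.15
rho = max(abs(1 - h * mu), abs(1 - h * L))

worst = 0.0
for lam in np.linspace(mu, L, 1000):       # sweep curvatures in [mu, L]
    worst = max(worst, abs(1 - h * lam))   # contraction for that curvature
print(worst, rho)
```

Performance-estimation approaches automate exactly this kind of reasoning: they compute the tight worst-case rate of an algorithm over a function class, instead of sweeping examples by hand.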

#### 13h50-15h50: **Poster session**

**N. Bousselmi** -- Worst-case analysis of first-order optimization methods involving linear operators.

**J. Du** -- Compared performance of Covid19 reproduction number estimators based on realistic synthetic data.

**T. Gelvez-Barrera** -- Model-based beamforming for acoustic applications.

**S. Hurault** -- Proximal denoiser for convergent plug-and-play proximal gradient descent.

**G. Lauga** -- Multilevel proximal methods for image restoration.

**H.T.V. Le** -- PNN: from proximal algorithms to robust unfolded image denoising networks and Plug-and-Play methods.

**S. Neumayer** -- Learning filter-based regularizers.

**B. Pascal** -- Credibility interval design for Covid19 reproduction number from nonsmooth Langevin-type Monte Carlo sampling.

**S. Salehi** -- An adaptively inexact first-order method for bilevel learning.

**V. Sechaud** -- Learning from saturated measurements.

**J. Tachella** -- Learning to reconstruct signals from binary measurements alone.

**Thursday 30th November**

#### 10h-11h15: **Anne-Cécile ORGERIE** -- CNRS Researcher, Myriads team, IRISA, EcoInfo

**Title** -- Measuring and modeling the energy consumption of servers. [PDF]

**Abstract** -- This talk will deal with measuring the energy consumption of servers, deriving models from these measurements, and implementing these models in simulation tools that can be used to experiment with energy-efficient strategies. In particular, we will focus on pitfalls to avoid when both measuring and modeling the energy consumption of servers belonging to computing clusters.

**Location** -- Amphi F, 46 allée d'Italie, 69007 Lyon.

#### 11h15-16h: **DeepInverse: a PyTorch library for imaging with deep learning** -- Tutorial day led by Julian TACHELLA, Samuel HURAULT and Matthieu TERRIS.

**Location** -- Amphi F, 46 allée d'Italie, 69007 Lyon.

**For those interested in creating a forward operator:**
[Link]

**For those interested in creating a plug-and-play medical image reconstruction algorithm:**
[Link]

**For those interested in training a network (in a self-supervised way):**
[Link]

**For the general tutorial presented by Samuel:**
[Link]
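As a library-agnostic warm-up for the forward-operator session, here is a minimal sketch of what a forward operator bundles (illustrative only; the class and method names are hypothetical, not the DeepInverse API): the measurement map, its adjoint, and the dot-product test that certifies the pair.

```python
import numpy as np

class Blur1D:
    """Toy linear forward operator: circular convolution blur.

    A forward operator bundles the measurement map A and its adjoint A^T,
    which most reconstruction algorithms (gradient-based, plug-and-play,
    self-supervised training, ...) need.
    """
    def __init__(self, kernel, n):
        self.n = n
        self.K = np.fft.fft(np.asarray(kernel, float), n)  # kernel spectrum

    def forward(self, x):
        """A x: blur the signal (multiplication in Fourier domain)."""
        return np.real(np.fft.ifft(np.fft.fft(x) * self.K))

    def adjoint(self, y):
        """A^T y: correlate with the kernel (conjugate multiplier)."""
        return np.real(np.fft.ifft(np.fft.fft(y) * np.conj(self.K)))

# dot-product test: <A x, y> == <x, A^T y> certifies the adjoint is correct
rng = np.random.default_rng(0)
op = Blur1D([0.25, 0.5, 0.25], 16)
x, y = rng.standard_normal(16), rng.standard_normal(16)
lhs = np.dot(op.forward(x), y)
rhs = np.dot(x, op.adjoint(y))
print(abs(lhs - rhs))  # difference at machine precision
```

Running this check on any new operator before plugging it into a reconstruction algorithm catches most adjoint bugs early.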