Harnessing Inexactness in Scientific Computing
Summary
Inexactness is ubiquitous in scientific computing: modeling, discretization, and rounding errors all contribute to making scientific simulation inherently inexact. As a result, many algorithms oversolve their problems: they are far more accurate than they need to be. Approximate computing harnesses this inexactness to dramatically reduce the resource consumption of computational science and engineering applications. However, using inexactness blindly would also dramatically reduce the accuracy!
This course covers both the fundamental theoretical questions related to the use of approximations in scientific computing (where can inexactness be tolerated? To what extent?) and the practical challenges in implementing these methods at scale for real-life applications. To achieve these goals, it is indispensable to consider the entire application chain: from nonlinear optimization processes to their underlying linear algebra building blocks.
This course will therefore review a wide range of tools (low-rank approximations, low and mixed precision algorithms, sparsity, quantization, randomization, stochastic algorithms, dimensionality reduction, etc.) and illustrate how they can help scientific simulation solve urgent societal and technological problems: climate and weather prediction, nuclear safety, autonomous driving, medical imaging, the design of safe and energy-efficient vehicles, etc.
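As a small taste of the course's theme (a minimal sketch, not course material), the example below contrasts naive summation in float32 with Kahan compensated summation, also in float32: the same low-precision arithmetic, used carefully, recovers most of the accuracy lost to rounding. Summation algorithms are one of the lecture topics.

```python
import numpy as np

# Sum n random values; use a float64 sum as the reference.
n = 100_000
rng = np.random.default_rng(0)
x = rng.random(n).astype(np.float32)
exact = float(np.sum(x.astype(np.float64)))

# Naive recursive summation in float32: every addition rounds,
# and the errors accumulate as n grows.
naive = np.float32(0.0)
for v in x:
    naive += v

# Kahan (compensated) summation: same float32 arithmetic, but a
# running correction term c captures the rounding error of each step.
s = np.float32(0.0)
c = np.float32(0.0)
for v in x:
    y = v - c        # apply the correction to the next term
    t = s + y        # s is large, y is small: low-order bits of y are lost...
    c = (t - s) - y  # ...recover them algebraically as the new correction
    s = t

print("naive relative error:", abs(naive - exact) / exact)
print("kahan relative error:", abs(s - exact) / exact)
```

The compensated sum's relative error stays near the float32 unit roundoff regardless of n, while the naive sum's error grows with n: a concrete case of getting high accuracy out of cheap, inexact arithmetic.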
At the end of the course, you will be able to:
- Understand the impact of inexactness on scientific simulation
- Identify opportunities to harness inexactness to accelerate computations
- Rigorously analyze inexact algorithms to guarantee their accuracy and convergence
- Develop efficient implementations of these algorithms
- Analyze the performance of these algorithms at scale
- Apply these algorithms to the solution of various important real-life problems
Organization
- Lecturers: Elisa Riccietti (ENS Lyon, LIP) and Theo Mary (Sorbonne Université, CNRS, LIP6).
- Duration: 16 two-hour lectures.
- Each lecture will present a case study illustrating the potential of the methods in a real-life setting.
- At the end of each lecture, a short optional practical exercise will be proposed and corrected at the beginning of the next lecture.
- Evaluation: oral presentation on a selected research article + bonus points for the optional practical exercises done at home between lectures.
Outline
- Nov 18 (10:00): Introduction (TM+ER)
- Nov 22 (13:30): Sparse direct linear solvers (TM)
- Nov 25 (10:00): Stochastic optimization methods (ER)
- Nov 29 (13:30): Sparse iterative linear solvers (TM)
- Dec 2 (10:00): Multigrid and multilevel methods (ER)
- Dec 6 (13:30): Low-rank approximation (TM)
- Dec 9 (10:00): no lecture
- Dec 13 (13:30): Image restoration (ER)
- Dec 16 (10:00): Block low-rank matrices (TM)
- Dec 20 (13:30): PINNs: physics-informed neural networks (ER)
- Jan 6 (10:00): Summation algorithms (TM)
- Jan 10 (13:30): Neural networks and low precision arithmetic (ER)
- Jan 13 (10:00): no lecture
- Jan 17 (13:30): Probabilistic error analysis (TM)
- Jan 20 (10:00): Sparse approximation (ER)
- Jan 24 (13:30): GPU numerical computing (TM)
- Jan 27 (10:00): bonus session (to be determined)
- Jan 31 (13:30): oral presentations
Resources