Harnessing Inexactness
in Scientific Computing

Summary

Inexactness is ubiquitous in scientific computing: modeling, discretization, and rounding errors all make scientific simulation inherently inexact. As a result, many algorithms oversolve their problems: they compute results far more accurately than required. Approximate computing harnesses inexactness to dramatically reduce the resource consumption of computational science and engineering applications. However, using inexactness blindly would also dramatically reduce the accuracy!
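The rounding-error part of this story is easy to see directly. A minimal NumPy sketch (the values are chosen purely for illustration): in single precision, a small term can be absorbed entirely by a much larger one.

```python
import numpy as np

# Illustration of rounding error: float32 carries about 7 significant
# digits, so adding 1.0 to 1.0e8 changes nothing at that precision.
big = np.float32(1.0e8)
small = np.float32(1.0)

print(big + small - big)    # 0.0 in float32: the 1.0 is absorbed
print(1.0e8 + 1.0 - 1.0e8)  # 1.0 in float64: double precision still resolves it
```

Whether this loss matters depends entirely on the application: if the discretization error already dwarfs 1 part in 10^8, single precision may be perfectly adequate.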

This course covers both the fundamental theoretical questions related to the use of approximations in scientific computing (where can inexactness be tolerated? to what extent?) and the practical challenges in implementing these methods at scale for real-life applications. To achieve these goals, it is indispensable to consider the entire application chain: from nonlinear optimization processes to their underlying linear algebra building blocks. This course will therefore review a wide range of tools (low-rank approximations, low and mixed precision algorithms, sparsity, quantization, randomization, stochastic algorithms, dimensionality reduction, etc.) and illustrate how they can help scientific simulation tackle urgent societal and technological problems: climate and weather prediction, nuclear safety, autonomous driving, medical imaging, the design of safe and energy-efficient vehicles, etc.
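As one concrete taste of the low- and mixed-precision tools listed above, here is a minimal sketch of mixed-precision iterative refinement (the matrix, sizes, and iteration count are invented for illustration): the expensive solve is done in single precision, and a few cheap residual corrections computed in double precision recover double-precision accuracy. A production code would factorize the matrix once and reuse the factors; `np.linalg.solve` is used here only to keep the sketch short.

```python
import numpy as np

# Hypothetical demo problem: a well-conditioned random system.
rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)
A32 = A.astype(np.float32)

# Low-precision solve: cheap, but only ~7 digits of accuracy.
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

# Iterative refinement: residual in double, correction in single.
for _ in range(3):
    r = b - A @ x
    x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)

x_ref = np.linalg.solve(A, b)
print(np.linalg.norm(x - x_ref) / np.linalg.norm(x_ref))  # tiny: double-precision accuracy recovered
```

On well-conditioned problems like this one, most of the arithmetic runs at the cheap precision while the final answer is as accurate as a full double-precision solve — exactly the kind of trade the course explores.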

At the end of the course, you will be able to:

Organization

Outline (order may change)

  1. Nov 17 (10:15): Introduction (ER)
  2. Nov 21 (13:30): Summation methods (TM) (exercise)
  3. Nov 24 (10:15): Probabilistic error analysis (TM) (no exercise)
  4. Dec 1 (10:15): Stochastic optimization methods (ER) (exercise)
  5. Dec 5 (13:30): Multigrid and multilevel methods (ER) (no exercise)
  6. Image restoration (ER)
  7. Direct and iterative methods for linear systems (TM)
  8. Neural networks and low precision arithmetic (ER)
  9. Exploiting data sparsity (TM)
  10. Iterative refinement (TM)
  11. Orthonormalization and least squares (TM+ER)
  12. Communication-avoiding and randomized algorithms (TM)
  13. Low rank and block low rank approximation (TM)
  14. PINNs: physics informed neural networks (ER)
  15. Sparse approximation (ER)
  16. GPU computing (TM)
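As a small preview of the summation-methods session, the sketch below (single precision is chosen to make the effect visible; the data are illustrative) contrasts naive recursive summation with compensated (Kahan) summation, which carries a correction term for the low-order bits lost in each addition.

```python
import numpy as np

def kahan_sum(xs):
    """Compensated (Kahan) summation in float32."""
    s = np.float32(0.0)
    c = np.float32(0.0)            # running compensation for lost low-order bits
    for x in xs:
        y = np.float32(x) - c
        t = s + y
        c = (t - s) - y            # (t - s) is what was actually added; c captures the error
        s = t
    return s

xs = [0.1] * 100_000               # exact sum would be 10000

naive = np.float32(0.0)
for x in xs:
    naive = naive + np.float32(x)  # errors accumulate with each rounded addition

print(float(naive), float(kahan_sum(xs)))  # compensated result is far closer to 10000
```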

Internship proposals

You can find all the internship proposals here.

Resources