Harnessing Inexactness in Scientific Computing

Summary

Inexactness is ubiquitous in scientific computing: modeling, discretization, and rounding errors all contribute to making scientific simulation inherently inexact. As a result, many algorithms oversolve their problems: they are much more accurate than they need to be. Approximate computing harnesses inexactness to dramatically reduce the resource consumption of computational science and engineering applications. However, using inexactness blindly would also dramatically reduce the accuracy!
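
As a small illustration of oversolving, here is a NumPy sketch (not taken from the course material; the function, step size, and precisions are arbitrary choices). It approximates a derivative by a forward difference: because the discretization error dwarfs the rounding error, single precision delivers essentially the same overall accuracy as double precision.

    # Minimal sketch (not from the course): when discretization error dominates,
    # double precision "oversolves" the problem and single precision gives
    # essentially the same total accuracy.
    import numpy as np

    def forward_difference(f, x, h, dtype):
        """Approximate f'(x) with a forward difference evaluated in `dtype`."""
        x, h = dtype(x), dtype(h)
        return (f(x + h) - f(x)) / h

    x, h = 1.0, 1e-2
    exact = np.cos(x)  # f(x) = sin(x), so f'(x) = cos(x)
    for dtype in (np.float32, np.float64):
        approx = forward_difference(np.sin, x, h, dtype)
        print(dtype.__name__, abs(approx - exact))
    # Both errors are about 4e-3: they are dominated by the O(h) discretization
    # error, not by rounding, so the extra digits of double precision are wasted.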

This course covers both the fundamental theoretical questions related to the use of approximations in scientific computing (where can inexactness be tolerated? to what extent?) and the practical challenges in implementing these methods at scale for real-life applications. To achieve these goals, it is indispensable to consider the entire application chain, from nonlinear optimization processes down to their underlying linear algebra building blocks. This course will therefore review a wide range of tools (low-rank approximations, low and mixed precision algorithms, sparsity, quantization, randomization, stochastic algorithms, dimensionality reduction, etc.) and illustrate how they can help scientific simulation solve urgent societal and technological problems: climate and weather prediction, nuclear safety, autonomous driving, medical imaging, the design of safe and energy-efficient vehicles, etc.
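
As a taste of one of these tools, here is another sketch (again not taken from the course material; the matrix, its singular value decay, and the truncation rank are arbitrary choices) of a truncated SVD, the most basic form of low-rank approximation: a small fraction of the storage reproduces the matrix far more accurately than discretization errors typically warrant.

    # Minimal sketch (not from the course): low-rank approximation of a matrix
    # with rapidly decaying singular values via a truncated SVD.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 300
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    s = 10.0 ** -np.arange(n, dtype=float)   # singular values 1, 1e-1, 1e-2, ...
    A = U @ np.diag(s) @ V.T

    # Keep only the k dominant singular triplets.
    k = 12
    Uk, sk, Vtk = np.linalg.svd(A)
    Ak = Uk[:, :k] @ np.diag(sk[:k]) @ Vtk[:k, :]

    storage_full = A.size
    storage_lowrank = Uk[:, :k].size + k + Vtk[:k, :].size
    print("relative error:", np.linalg.norm(A - Ak) / np.linalg.norm(A))
    print("storage ratio :", storage_lowrank / storage_full)
    # The rank-12 factors take under a tenth of the storage of A yet match it
    # to roughly 12 digits, i.e. far beyond typical discretization accuracy.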

At the end of the course, you will be able to:

  - identify where, and to what extent, inexactness can be tolerated in a scientific computing application;
  - use approximation tools such as low-rank approximations, low and mixed precision arithmetic, sparsity, and randomization to reduce the resource consumption of large-scale simulations.

Organization

Outline

  1. Nov 18 (10:00): Introduction (TM+ER)
  2. Nov 22 (13:30): Sparse direct linear solvers (TM)
  3. Nov 25 (10:00): Stochastic optimization methods (ER)
  4. Nov 29 (13:30): Sparse iterative linear solvers (TM)
  5. Dec 2 (10:00): Multigrid and multilevel methods (ER)
  6. Dec 6 (13:30): Low-rank approximation (TM)
  7. Dec 9 (10:00): No lecture
  8. Dec 13 (13:30): Image restoration (ER)
  9. Dec 16 (10:00): Block low-rank matrices (TM)
  10. Dec 20 (13:30): PINNs: physics-informed neural networks (ER)
  11. Jan 6 (10:00): Summation algorithms (TM)
  12. Jan 10 (13:30): Neural networks and low precision arithmetic (ER)
  13. Jan 13 (10:00): No lecture
  14. Jan 17 (13:30): Probabilistic error analysis (TM)
  15. Jan 20 (10:00): Sparse approximation (ER)
  16. Jan 24 (13:30): GPU numerical computing (TM)
  17. Jan 27 (10:00): Bonus session (to be determined)
  18. Jan 31 (13:30): Oral presentations

Resources