# Data-Aware Algorithms

In many of today's computing systems, the cost of moving data, in both time and energy, has come to dominate the cost of computation. Part of the solution to this problem lies in algorithm design: over the years, several computation models and algorithmic strategies have been proposed to optimize data movement instead of (or in addition to) the classical objective of computational complexity. The goal of this course is to present a variety of these solutions.

NB: this is a short version of a Master 2 course given at ENS-Lyon in Fall 2020.

## Part 1: Introduction, pebble games models, I/O lower bounds for matrix product

The first part consists of an introduction to the course, a presentation of the early models based on pebble games, and a study of the I/O complexity of matrix multiplication, in both sequential and parallel contexts.
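To make the I/O-avoidance idea concrete, here is a minimal sketch of blocked (tiled) matrix multiplication, the classical strategy that matches the Omega(n^3 / sqrt(M)) I/O lower bound studied in this part. The function name and the tiling choice are illustrative, not taken from the papers above; in the ideal setting the block size `b` is chosen so that three b x b blocks fit in fast memory (roughly 3*b^2 <= M).

```python
def blocked_matmul(A, B, n, b):
    """Multiply two n x n matrices (lists of lists) block by block.

    Each (i0, j0, k0) iteration only touches three b x b blocks, so a
    machine with a fast memory of size M >= 3*b^2 can keep them resident
    and perform Theta(b^3) work per Theta(b^2) words moved.
    """
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, b):           # block row of C
        for j0 in range(0, n, b):       # block column of C
            for k0 in range(0, n, b):   # inner block dimension
                for i in range(i0, min(i0 + b, n)):
                    for k in range(k0, min(k0 + b, n)):
                        a = A[i][k]
                        for j in range(j0, min(j0 + b, n)):
                            C[i][j] += a * B[k][j]
    return C
```

With this loop order, the number of block transfers is about (n/b)^3 * 3b^2 = 3n^3/b, which is minimized by taking b as large as the fast memory allows, i.e. b on the order of sqrt(M).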

#### Available material:

- Slides of the first lecture (October 16)

#### Some articles where results are taken from:

- Hong & Kung (1981), seminal paper on the red/blue pebble game
- Toledo (1991) matrix product lower bound
- Langou et al. (2019) refined lower bound and optimal algorithms for matrix product
- Ballard et al. (2011) analysis of generalized matrix computations and algorithm design
- Solomonik et al. (2011) 2.5D matrix multiplication algorithm

## Part 2: Cache-oblivious algorithms

We present here the basics of external-memory algorithms and cache-oblivious algorithms.
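As a small illustration of the cache-oblivious style, here is a sketch of a recursive matrix transpose in the spirit of Frigo et al.: the code never mentions a cache or block size, yet the divide-and-conquer recursion eventually produces subproblems that fit in every level of the memory hierarchy. The base-case threshold and function name are illustrative choices, not from the papers.

```python
def co_transpose(A, T, r0, r1, c0, c1):
    """Write the transpose of A[r0:r1][c0:c1] into T, recursively.

    No cache parameter appears (cache-oblivious): halving the longer
    dimension yields roughly square subproblems that, at some recursion
    depth, fit in any cache of any size.
    """
    if (r1 - r0) * (c1 - c0) <= 4:     # small base case (threshold is illustrative)
        for i in range(r0, r1):
            for j in range(c0, c1):
                T[j][i] = A[i][j]
    elif r1 - r0 >= c1 - c0:           # split the taller dimension
        mid = (r0 + r1) // 2
        co_transpose(A, T, r0, mid, c0, c1)
        co_transpose(A, T, mid, r1, c0, c1)
    else:                              # split the wider dimension
        mid = (c0 + c1) // 2
        co_transpose(A, T, r0, r1, c0, mid)
        co_transpose(A, T, r0, r1, mid, c1)
```

In the ideal-cache model this recursion incurs O(1 + mn/B) cache misses for an m x n matrix, without ever knowing the line size B.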

#### Available material:

- Slides for this part (October 17, part 1)

#### Other courses/resources on cache-oblivious algorithms:

- Lecture notes by Erik Demaine on Cache-Oblivious Algorithms and Data Structures at the 2002 EEF Summer School on Massive Data Sets
- Series of lectures at MIT by Erik Demaine, available on YouTube
- Cormen et al., Introduction to Algorithms, Chap. 18: B-Trees

#### Some articles where results on cache-oblivious algorithms are taken from:

- Sleator & Tarjan (1985) for the proof of LRU competitive ratio
- Frigo et al. (1999) for the justification of the ideal cache model (Section 6), first cache-oblivious algorithms (matrix mult., etc.)
- Aggarwal and Vitter (1988) for the proof of I/O lower bounds (sorting and permuting)
- Brodal and Fagerberg (2003) for the proof of searching lower bound (Lemma 1)
- Survey on External Memory Algorithms and Data Structures by Jeffrey Scott Vitter (1998)
- Brodal and Fagerberg (2002): Funnel Sort and Distribution Sweeping

## Part 3: Memory-aware DAG scheduling

We describe how to model scientific computations as DAGs of tasks, and how the way the DAG is scheduled (traversed) influences the amount of memory needed for the computation.
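To illustrate how the traversal order drives memory usage, here is a sketch of a peak-memory simulation for a postorder traversal of a task tree, in the common model where each task reads the output files of all its children and produces one output file for its parent. The tree encoding, field names, and the exact accounting (child outputs and the task's own output simultaneously resident) are illustrative assumptions, not taken verbatim from the papers below.

```python
def postorder_peak(children, out, root):
    """Peak memory of the postorder that processes children in the given order.

    children: dict mapping a node to the ordered list of its children
    out:      dict mapping a node to the size of its output file
    While a subtree is being processed, the outputs of its already-
    processed siblings stay resident, so the sibling order matters.
    """
    def visit(v):
        held = 0   # memory held by outputs of already-processed children of v
        peak = 0
        for c in children.get(v, []):
            peak = max(peak, held + visit(c))  # process subtree of c
            held += out[c]                     # keep c's output for v
        # processing v itself: all children outputs plus v's output resident
        return max(peak, held + out[v])
    return visit(root)
```

Reordering the children (e.g. largest subtree-peak first) can lower the peak, which is exactly the kind of question studied in the tree-scheduling papers below.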

#### Available material:

- Slides for this part (October 17, part 2)

#### Some articles where results on DAG scheduling are taken from:

- Jacquelin et al. (2011) analysis of postorder and general traversals for trees
- Kayaaslan et al. (2018) optimal algorithms for minimizing memory of SP-graphs
- Eyraud-Dubois et al. (2015) complexity of parallel bi-criteria (makespan/memory) task tree scheduling
- Marchal et al. (2018) computing with bounded maximum parallel memory