When parallelizing loop nests for distributed-memory parallel computers, we have to specify when the different computations are carried out (computation scheduling), where they are carried out (computation mapping), and where the data are stored (data mapping). The scheduling functions specify a computation ordering that makes some parallelism explicit while respecting the dependences in the loops. The objective when defining mapping functions is usually to minimize the communication overhead due to non-local data references. Of course, a mapping that induces no communication but, when combined with the scheduling function, leads to a completely sequential execution is not desirable.
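As an illustration (the notation $\theta$, $\pi$ is chosen here for exposition and is not taken from the body of the paper), consider a two-dimensional loop nest over iterations $(i,j)$ with an affine scheduling function $\theta$ and an affine computation mapping $\pi$:
\[
  \theta(i,j) = i, \qquad \pi(i,j) = j .
\]
At logical time $t$, all iterations $\{(t,j)\}$ are executed simultaneously, iteration $(t,j)$ on virtual processor $j$: the schedule respects any dependence carried by the outer $i$ loop, while the mapping spreads the inner parallelism across the processors.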
We show that even the ``best'' scheduling and mapping functions can lead to a sequential execution when combined, if they are chosen independently. We characterize the compatibility between mapping and scheduling functions, i.e. we characterize when combined scheduling and mapping functions actually lead to a parallel execution. We give an algorithm to compute a scheduling compatible with a given computation mapping, when such a scheduling exists. This algorithm can process either an exact representation of the dependences or an approximation.
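Continuing the illustrative notation above, incompatibility arises when all the iterations active at a given time reside on the same processor. With the schedule $\theta(i,j) = i$, choosing the mapping $\pi(i,j) = i$ instead of $\pi(i,j) = j$ may induce no communication, yet at every time step $t$ the whole active front $\{(t,j)\}$ is mapped to the single processor $t$:
\[
  \{\, \pi(i,j) : \theta(i,j) = t \,\} = \{ t \},
\]
so the combined execution is sequential even though each function may look optimal in isolation. Informally, compatibility requires that the set of processors active at each time step be large.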