Abstract

In this paper, we derive bounds on the speedup and efficiency of applications that schedule tasks on a set of parallel processors. We assume that the application runs an algorithm that consists of N iterations and, before starting its (i + 1)st iteration, a processor must wait for data (i.e., synchronize) evaluated in the ith iteration by a subset of the other processors of the system. Processing times and interconnections between iterations are modeled by random variables with possibly deterministic distributions. Scientific applications consisting of iterations of recursive equations are examples of applications that can be modeled within this formulation. We consider the efficiency of such applications and show that, although efficiency decreases with an increase in the number of processors, it has a nonzero limit when the number of processors increases to infinity. We obtain a lower bound for the efficiency by solving an equation that depends on the distribution of task service times and the expected number of tasks needed to be synchronized. We also show that the lower bound is approached if the topology of the processor graph is "spread-out," a notion we define in the paper. Copyright 1995 by ACM, Inc.
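The model described above can be illustrated with a small Monte-Carlo sketch. This is not the paper's analysis; it is an illustrative simulation under assumed specifics: a line topology in which each processor synchronizes with itself and its two nearest neighbors (a particular choice of the "subset of other processors"), and i.i.d. exponential task service times (one possible service-time distribution). The function name and parameters are invented for the example.

```python
import random

def simulate_efficiency(num_procs, num_iters, mean_service=1.0, seed=0):
    """Estimate efficiency for an iterative computation in which, before
    iteration i + 1, each processor waits for the results of iteration i
    from itself and its two nearest neighbors on a line (an assumed
    topology). Task service times are i.i.d. exponential (an assumed
    distribution). Efficiency = speedup / P = total work / (P * makespan).
    """
    rng = random.Random(seed)
    finish = [0.0] * num_procs  # finish time of each processor's last iteration
    total_work = 0.0
    for _ in range(num_iters):
        new_finish = []
        for i in range(num_procs):
            # Synchronize: wait for left neighbor, self, and right neighbor
            # to have finished the previous iteration.
            deps = [finish[j] for j in (i - 1, i, i + 1) if 0 <= j < num_procs]
            service = rng.expovariate(1.0 / mean_service)
            total_work += service
            new_finish.append(max(deps) + service)
        finish = new_finish
    makespan = max(finish)
    return total_work / (num_procs * makespan)
```

With one processor there is no waiting, so the estimate is exactly 1; as the processor count grows, the estimate decreases but, consistent with the paper's result for such local synchronization structures, does not collapse toward zero the way a full-barrier model would.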
Categories and Subject Descriptors: C.1.2 [Processor Architectures]: Multiple Data Stream Architectures (Multiprocessors) -- parallel processors; C.4 [Performance of Systems] -- performance attributes
General Terms: Measurement, Performance
Additional Key Words and Phrases: Large deviations theory, synchronization