MADNESS 0.10.1
Load and memory balancing is a critical issue on the current generation of shared and distributed memory computers. Many terascale and petascale computers have no virtual memory capabilities on the compute nodes, so memory management is very important.
Poor distribution of work (load imbalance) is the largest reason for inefficient parallel execution within MADNESS. Poor data distribution (data imbalance) contributes to load imbalance and also leads to out-of-memory problems due to one or more processes having too much data. Thus, we are interested in a uniform distribution of both work and data.
Many operations in MADNESS are entirely data driven (i.e., computation occurs in the processor that "owns" the data) since there is insufficient work to justify moving data between processes (e.g., computing the inner product between functions). However, a few expensive operations can have work shipped to other processors.
There are presently three load-balancing mechanisms within MADNESS: random assignment of work to processes, static redistribution of data between operations, and work stealing. Until the work stealing becomes production quality, we must exploit the first two forms. The random work assignment is controlled by options in the FunctionDefaults class:
FunctionDefaults::set_apply_randomize(bool) controls the use of randomization when applying integral (convolution) operators; it is typically beneficial when computing to medium or high precision.

FunctionDefaults::set_project_randomize(bool) controls the use of randomization when projecting from an analytic form (i.e., a C++ function) into the discontinuous spectral element basis; it is typically beneficial unless a good static data distribution already exists.

Since these options are straightforward to enable, this example focuses on static data redistribution. The process map (an instance of WorldDCPmapInterface) controls the mapping of data to processes, and it is quite easy to write your own (e.g., see WorldDCDefaultPmap or LevelPmap) that ensures a uniform data distribution. However, one may also wish to incorporate estimates of the computational cost into the distribution. The class LBDeuxPmap (deux because it is the second such class) in trunk/src/madness/mra/lbdeux.h does this by examining the functions you request and using user-provided weights to estimate the computational cost.
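The idea behind a level-based process map such as LevelPmap can be illustrated without MADNESS itself: hash each key via an ancestor at a fixed coarse level, so that entire subtrees land on one process (keeping tree links intact) while the hash spreads data roughly uniformly. The following is a self-contained sketch; the `Key` struct and hashing scheme are simplified stand-ins for MADNESS's `Key<NDIM>` and `WorldDCPmapInterface`, not the library's actual implementation.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>

// Simplified stand-in for MADNESS's Key<NDIM>: a node in a 1-D adaptive
// refinement tree, identified by its level and translation index.
struct Key {
    int level;
    long translation;
    Key parent() const { return Key{level - 1, translation / 2}; }
};

// Sketch of a level-based process map: keys at or above a chosen coarse
// level are hashed directly; deeper keys are mapped via their ancestor at
// that level, so whole subtrees stay on one process (fewer broken links).
class LevelPmapSketch {
    int nproc_;
    int hash_level_;
public:
    explicit LevelPmapSketch(int nproc, int hash_level = 3)
        : nproc_(nproc), hash_level_(hash_level) {}

    int owner(Key key) const {
        // Walk up to the hashing level so descendants share an owner.
        while (key.level > hash_level_) key = key.parent();
        std::size_t h = std::hash<long>()(key.translation * 31 + key.level);
        return static_cast<int>(h % nproc_);
    }
};
```

Because a deep key and its parent map through the same ancestor, refining a node never moves its children to another process, which is the property that keeps communication (broken links) low.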
Communication costs are proportional to the number of broken links in the tree. Since some operations work in the scaling-function basis, some in the multiwavelet basis, and some in the non-standard form, there is an element of empiricism in obtaining the best performance from most algorithms.
The example code in src/examples/dataloadbal.cc
illustrates how the discussions in this section can be applied.
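For concreteness, here is a condensed sketch of how these pieces are typically combined in a driver. It is a fragment, not a complete program: the `NDIM = 3` instantiation and the leaf/interior weights `1.0` and `8.0` are illustrative assumptions, and the exact `LoadBalanceDeux`/`lbcost` signatures should be checked against `lbdeux.h` and `dataloadbal.cc`.

```cpp
#include <madness/mra/mra.h>
#include <madness/mra/lbdeux.h>

using namespace madness;

void balance_sketch(World& world, Function<double,3>& f) {
    // Randomized work assignment (the FunctionDefaults options above).
    FunctionDefaults<3>::set_apply_randomize(true);
    FunctionDefaults<3>::set_project_randomize(true);

    // Static data redistribution: estimate the cost of f's tree using
    // per-node weights (leaf vs. interior node; values are illustrative),
    // then install the resulting process map.
    LoadBalanceDeux<3> lb(world);
    lb.add_tree(f, lbcost<double,3>(1.0, 8.0));
    FunctionDefaults<3>::redistribute(world, lb.load_balance());
}
```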