MADNESS 0.10.1
MADNESS is based on multiresolution analysis (MRA) and low-separation rank (LSR) approximations of functions and operators. It can be considered an adaptive spectral element method using a discontinuous and singular multiresolution basis. The representations of differential operators in the wavelet bases are provably similar to adaptive finite difference discretizations. Thus, the process of solving the resulting linear systems behaves much as it does in other adaptive methods. For example, the derivative operator and the Laplacian operator are unbounded operators, so the condition number, which often constrains how accurately the linear system can be solved, goes to infinity as the basis or mesh is refined. To solve these equations in practice, one has to precondition the system. Effective preconditioners are problem dependent, and the theory of their construction is an area of ongoing research.
The integral operator, which is the formal inverse associated with the differential operator, is usually bounded. MRA and LSR have proven to be suitable techniques for applying some of the physically important integral operators and their kernels quickly and accurately.
Two of the most important operators that we illustrate in this manual are the Poisson operator and the Helmholtz operator (note also that the heat/diffusion kernel is just a single Gaussian).
Herein, we discuss techniques for "thinking in MADNESS", which in most cases will allow the best use of the numerical tools underlying MADNESS.
In many situations the integral operator associated with the differential operator has an analytic kernel. The simplest examples are convolution operators.
$$u(r) = (G \ast f)(r) = \int G(r - s)\, f(s)\, ds .$$

For the Poisson equation in 3-D, $\nabla^2 u = -4\pi\rho$, the solution is $u = G \ast \rho$ where $G(r) = \frac{1}{|r|}$. For the bound-state Helmholtz equation, $\left(-\nabla^2 + \mu^2\right) u = f$, the solution is $u = G \ast f$ where $G(r) = \frac{e^{-\mu |r|}}{4\pi |r|}$.
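To make this concrete, the following sketch shows how these two kernels are typically constructed and applied in MADNESS. It assumes a `World` and source functions that have already been projected into the multiwavelet basis; the parameter values (`lo`, `thresh`, `mu`) are purely illustrative.

```cpp
#include <madness/mra/mra.h>
#include <madness/mra/operator.h>

using namespace madness;

// Sketch: apply the Poisson and bound-state Helmholtz kernels to existing
// MADNESS functions. rho and f are assumed to be already projected.
void apply_kernels(World& world, const real_function_3d& rho,
                   const real_function_3d& f) {
    const double lo = 1e-4;     // smallest length scale to resolve (illustrative)
    const double thresh = 1e-6; // requested operator accuracy (illustrative)

    // Poisson kernel G(r) = 1/|r|: u solves nabla^2 u = -4 pi rho.
    real_convolution_3d coulomb = CoulombOperator(world, lo, thresh);
    real_function_3d u = apply(coulomb, rho);

    // Bound-state Helmholtz kernel G(r) = exp(-mu |r|)/(4 pi |r|):
    // v solves (-nabla^2 + mu^2) v = f.
    const double mu = 1.0;      // illustrative decay parameter
    real_convolution_3d bsh = BSHOperator3D(world, mu, lo, thresh);
    real_function_3d v = apply(bsh, f);

    (void)u; (void)v; // results would be consumed by the surrounding code
}
```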
Most codes, including MADNESS, are bad at solving differential equations to high accuracy – this is why there is so much emphasis placed on finding a good preconditioner. The problem arises from the spectrum of the differential operator. Consider the Laplacian in 1-D acting on a plane wave,

$$\frac{d^2}{dx^2}\, e^{i k x} = -k^2\, e^{i k x} .$$
The Laplacian greatly amplifies high frequencies (where most of the numerical error lies), whereas physical applications are primarily interested in lower frequencies. The eigenvalues of the corresponding inverse or integral operator have the opposite behavior – high frequencies are suppressed and low frequencies are emphasized.
The integral form is potentially better in many ways – accuracy, speed, robustness, asymptotic behavior, etc. If you really, really want to solve the differential form, then instead of using the phrase "integral form" say "perfectly preconditioned differential form" so that you can still do the right thing.
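Continuing the sketch above, this is roughly what the "perfectly preconditioned" form looks like in practice for a bound-state problem: the eigenequation $\left(-\tfrac{1}{2}\nabla^2 + V\right)\psi = E\psi$ is rewritten as the fixed-point iteration $\psi \leftarrow -2\, G_\mu (V\psi)$ with $\mu = \sqrt{-2E}$, where $G_\mu$ is the bound-state Helmholtz kernel above. The loop below is a bare-bones illustration (fixed iteration count, no energy update), not MADNESS's actual solver.

```cpp
#include <cmath>

// Bare-bones integral-form iteration:
//   psi <- -2 G_mu (V psi),  mu = sqrt(-2 E).
// Assumes V, an initial guess psi, and a reasonable energy estimate E < 0.
real_function_3d iterate(World& world, const real_function_3d& V,
                         real_function_3d psi, double E) {
    for (int iter = 0; iter < 10; ++iter) {
        real_convolution_3d G =
            BSHOperator3D(world, std::sqrt(-2.0 * E), 1e-4, 1e-6);
        real_function_3d Vpsi = V * psi;   // point-wise product
        psi = -2.0 * apply(G, Vpsi);       // apply the Green's function
        psi.truncate();                    // discard negligible coefficients
        psi.scale(1.0 / psi.norm2());      // renormalize
        // A real solver would also update E from the new psi.
    }
    return psi;
}
```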
Your function needs to be evaluated to close to machine precision. The higher the order of the basis ($k$), the greater the necessary accuracy, regardless of what threshold you are trying to compute to. The accuracy and convergence of the Gauss-Legendre quadrature rest on the function being smooth (well approximated by a polynomial) at some level of refinement. Discontinuities in the function value or its derivatives, singularities, and/or numerical noise can all cause excessive refinement as MADNESS tries to deliver the requested precision – it's the Gibbs effect in action. The usual symptoms of this problem are unexpectedly slow execution and/or excessive memory use. Here are some tips for working around these effects.
Discontinuities and singularities need to be consciously managed. Integrable point singularities might sometimes work unmodified (e.g., $1/|r|$ in 3-D) but can unpredictably fail, e.g., if a quadrature point lands very near to the singularity by accident. If possible, arrange for such points/surfaces to coincide with dyadic points (i.e., an integer multiple of some power-of-two division of the domain) – this gives the most accurate representation and exploits the discontinuous spectral basis. If you cannot ensure such placement, you must manually or analytically regularize the function. One would usually employ a parameter to control the length scale of any problem modification and to enable systematic demonstration of convergence. E.g., eliminate the cusp in an exponential with

$$e^{-|x|} \rightarrow e^{-\sqrt{x^2 + \sigma^2}} ,$$

or replace a step function with

$$\theta(x) \rightarrow \frac{1}{2}\left(1 + \operatorname{erf}\frac{x}{\lambda}\right) ,$$

or the Coulomb potential in 3-D with

$$\frac{1}{r} \rightarrow \frac{u(r/c)}{c} , \qquad u(r) = \frac{\operatorname{erf}(r)}{r} + \frac{1}{3\sqrt{\pi}}\left(e^{-r^2} + 16\, e^{-4 r^2}\right) ,$$

subject to

$$\int_0^\infty \left( u(r) - \frac{1}{r} \right) r^2 \, dr = 0 .$$

The integral indicates that the mean error is zero, independent of the smoothing parameter $c$.
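As an illustration, a functor for the smoothed Coulomb potential above might look as follows; the guard value for small $r$ and the functor interface are assumptions made for the sketch (the limit of $\operatorname{erf}(r)/r$ at $r = 0$ is $2/\sqrt{\pi}$).

```cpp
#include <cmath>

// Sketch of the smoothed Coulomb potential: 1/r is replaced by u(r/c)/c,
// where c sets the smoothing length scale.
struct SmoothedCoulomb {
    double c; // smoothing length scale

    static double u(double r) {
        // erf(r)/r is finite at r = 0 (limit 2/sqrt(pi)); guard tiny r.
        const double erf_term =
            (r < 1e-8) ? 2.0 / std::sqrt(M_PI) : std::erf(r) / r;
        return erf_term + (std::exp(-r * r) + 16.0 * std::exp(-4.0 * r * r))
                              / (3.0 * std::sqrt(M_PI));
    }

    double operator()(const madness::coord_3d& x) const {
        const double r = std::sqrt(x[0]*x[0] + x[1]*x[1] + x[2]*x[2]);
        return u(r / c) / c;
    }
};
```

Projected with a small $c$, this reproduces $1/r$ away from the origin while remaining smooth at the nucleus, and convergence can be demonstrated systematically by reducing $c$.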
Numerical noise can be a problem if your function is evaluated using interpolation or some other approximation scheme, or when switching between representations (e.g., between forms suitable for small or large arguments). If you are observing inefficient projection into the basis, ensure that your approximation is everywhere smooth to circa 1 part in $10^{12}$ or better.
MADNESS itself computes to finite precision, so when computing a point-wise function of a function (i.e., $f(u(x))$, where $u$ is a MADNESS function and $f$ is a user-provided function), the user-provided function must tolerate small errors (noise) in its argument. A classic example is computing a function of a density such as $\rho(x)^{1/3}$, where in the original problem one knows that $\rho(x) \ge 0$ for all $x$, but numerically this positivity is not guaranteed. In this case an effective smoothing is to replace $\rho$ with $S(\rho)$, where $S$ coincides with its argument for $\rho \ge \rho_1$ and, for smaller values, smoothly switches to a minimum value of $\rho_{\min}$ with a continuous value and derivative at both end points.
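MADNESS does not prescribe a particular form for $S$; one possible realization (a hypothetical sketch, with $\rho_0$, $\rho_1$, and $\rho_{\min}$ as user-chosen parameters) is a piecewise cubic Hermite blend:

```cpp
// Hypothetical smoothed floor S(rho): S(rho) = rho for rho >= rho1,
// S(rho) = rho_min for rho <= rho0, and a cubic Hermite blend in between
// matching value and first derivative at both end points.
double smoothed_floor(double rho, double rho_min, double rho0, double rho1) {
    if (rho >= rho1) return rho;      // coincides with its argument
    if (rho <= rho0) return rho_min;  // flat floor
    const double t  = (rho - rho0) / (rho1 - rho0);
    const double dx = rho1 - rho0;
    const double h00 = (1 + 2*t) * (1 - t) * (1 - t); // value at t = 0
    const double h10 = t * (1 - t) * (1 - t);         // slope at t = 0
    const double h01 = t * t * (3 - 2*t);             // value at t = 1
    const double h11 = t * t * (t - 1);               // slope at t = 1
    // End conditions: (rho0, rho_min, slope 0) and (rho1, rho1, slope 1).
    return h00 * rho_min + h10 * dx * 0.0 + h01 * rho1 + h11 * dx * 1.0;
}
```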
Some computations are intrinsically expensive. For instance, the function $e^{i \omega x}$ is oscillatory everywhere, so the number of required coefficients increases linearly with the solution volume. In a 3-D box of width $L$, with roughly one adaptive box per wavelength, the number of coefficients is of order $\left(\frac{\omega L}{2\pi} k\right)^3$ (where $k$ is the multiwavelet or polynomial order). For a large box and a high frequency, a few hundred TB of data (i.e., too much!) is easily generated. Thus, it is worth making a back-of-the-envelope estimate of the expected cost of a computation before getting started.
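For instance (with made-up numbers), a box spanning $2 \times 10^3$ wavelengths in each direction at order $k = 16$ gives

$$N \approx \left(\frac{\omega L}{2\pi}\, k\right)^3 = \left(2 \times 10^3 \times 16\right)^3 \approx 3 \times 10^{13} \text{ coefficients} \approx 260\ \mathrm{TB}$$

at 8 bytes per coefficient.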
Choice of polynomial order ($k$) depends upon the problem and algorithm. Smooth functions can be very accurately and efficiently represented using high-order polynomials, so it can be advantageous to use a high order (for some time-dependent problems we have used even higher orders). However, functions with fine-scale structure, cusps, or discontinuities require adaptive refinement, so lower-order polynomials are more efficient. If you are using integral operators, increasing the polynomial order as you increase the accuracy maintains sparsity in the operator, which is why in moldft we use the heuristic that to get an accuracy of $10^{-n}$ we use $k = n + 2$.
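Applied in code, the heuristic might look like the following (a sketch using the standard `FunctionDefaults` interface; the target accuracy is illustrative):

```cpp
#include <cmath>
#include <madness/mra/mra.h>

using namespace madness;

// Sketch: configure defaults for a target accuracy of 10^-n
// using the k = n + 2 heuristic described above.
void set_defaults_for_accuracy(int n) {
    FunctionDefaults<3>::set_thresh(std::pow(10.0, -n)); // e.g., 1e-6 for n = 6
    FunctionDefaults<3>::set_k(n + 2);                   // e.g., k = 8 for n = 6
}
```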