MADNESS 0.10.1
Thinking in MADNESS

MADNESS is based on multiresolution analysis (MRA) and low separation rank (LSR) approximations of functions and operators. It can be considered an adaptive spectral element method using a discontinuous and singular multiresolution basis. The representations of differential operators in the wavelet basis are provably similar to adaptive finite difference discretizations, so solving the resulting linear systems behaves much as it does in other adaptive methods. For example, the derivative and Laplacian operators are unbounded, so the condition number, which often constrains how accurately the linear system can be solved, grows without bound as the basis or mesh is refined. To solve these equations in practice, one has to precondition the system. Effective preconditioners are problem dependent, and the theory of their construction is an area of ongoing research.

The integral operator that is the formal inverse of a differential operator is usually bounded. MRA and LSR have proven to be effective techniques for applying many of the physically important integral operators and their kernels quickly and accurately.

Two of the most important operators that we illustrate in this manual are the Poisson operator and the Helmholtz operator (note also that the heat/diffusion kernel is just a single Gaussian).
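As a concrete illustration, here is a minimal sketch of constructing and applying these kernels with the MADNESS C++ operator factories. The names CoulombOperator and BSHOperator3D and the (lo, eps) arguments follow the pattern of the MADNESS example codes; verify the exact signatures against your release's madness/mra/operator.h, and the value of $\mu$ here is purely illustrative.

\begin{verbatim}
#include <madness/mra/mra.h>
#include <madness/mra/operator.h>
using namespace madness;

// Apply the Poisson kernel 1/|r-r'| and the bound-state Helmholtz (BSH)
// kernel exp(-mu|r-r'|)/(4 pi |r-r'|) to a density rho.
void apply_kernels(World& world, const real_function_3d& rho) {
    const double lo  = 1e-4;   // finest length scale to resolve
    const double eps = 1e-6;   // requested operator accuracy
    real_convolution_3d poisson = CoulombOperator(world, lo, eps);
    real_function_3d u = apply(poisson, rho);  // electrostatic potential

    const double mu = 1.0;     // illustrative decay parameter
    real_convolution_3d bsh = BSHOperator3D(world, mu, lo, eps);
    real_function_3d w = apply(bsh, rho);
    print("norms:", u.norm2(), w.norm2());
}
\end{verbatim}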

Herein we discuss techniques for "thinking in MADNESS" that, in most cases, allow the best use of the numerical tools underlying MADNESS.

Solve the integral equation

In many situations the integral operator associated with the differential operator has an analytic kernel. The simplest examples are convolution operators.

Most codes, including MADNESS, are bad at solving differential equations to high accuracy – this is why so much emphasis is placed on finding a good preconditioner. The problem arises from the spectrum of the differential operator. Consider the Laplacian in 1-D acting on a plane wave,

\[
\frac{d^{2}}{dx^{2}}e^{i\omega x}=-\omega ^{2}e^{i\omega x}.
\]

The Laplacian greatly amplifies high frequencies $\omega$ (where most of the numerical error lies), whereas physical applications are primarily interested in lower frequencies. The eigenvalues of the corresponding inverse, or integral, operator have the opposite effect – high frequencies are suppressed and low frequencies are emphasized.

The integral form is potentially better in many ways – accuracy, speed, robustness, asymptotic behavior, etc. If you really, really want to solve the differential form, then instead of the phrase "integral form" say "perfectly preconditioned differential form" – that way you can still do the right thing.
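To make this concrete: a bound state of the Schrödinger equation, $(-\tfrac{1}{2}\nabla^{2}+V)\psi = E\psi$ with $E<0$, can be rewritten in integral form as $\psi = -2\,G_{\mu}(V\psi)$ with $\mu = \sqrt{-2E}$ and iterated to self-consistency. The sketch below follows the pattern used in the MADNESS example codes (e.g., moldft); the function name, tolerances, and the omitted energy update are illustrative, not a drop-in implementation.

\begin{verbatim}
#include <madness/mra/mra.h>
#include <madness/mra/operator.h>
using namespace madness;

// Fixed-point iteration for (-1/2 nabla^2 + V) psi = E psi rewritten as
// psi = -2 G_mu (V psi), mu = sqrt(-2E), requiring E < 0 throughout.
void solve_bound_state(World& world, const real_function_3d& V,
                       real_function_3d& psi, double E) {
    for (int iter = 0; iter < 20; ++iter) {
        const double mu = sqrt(-2.0 * E);
        real_convolution_3d G = BSHOperator3D(world, mu, 1e-4, 1e-6);
        real_function_3d Vpsi = V * psi;           // pointwise product
        real_function_3d psi_new = -2.0 * apply(G, Vpsi);
        psi_new.scale(1.0 / psi_new.norm2());      // renormalize
        const double err = (psi_new - psi).norm2();
        psi = psi_new;
        // ... update E, e.g., from <psi|H|psi>, omitted here ...
        if (err < 1e-6) break;
    }
}
\end{verbatim}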

Carefully analyze discontinuities, noise, singularities, and asymptotic forms

Your function needs to be evaluated to close to machine precision. The higher the order of the basis ($k$), the greater the necessary accuracy, regardless of the threshold you are trying to compute to. The accuracy and convergence of the Gauss-Legendre quadrature rest on the function being smooth (well approximated by a polynomial) at some level of refinement. Discontinuities in the function value or its derivatives, singularities, and/or numerical noise can all cause excessive refinement as MADNESS tries to deliver the requested precision – the Gibbs phenomenon in action. The usual symptoms of this problem are unexpectedly slow execution and/or excessive memory use. Here are some tips for coping with these effects.

Discontinuities and singularities need to be consciously managed. Integrable point singularities might sometimes work unmodified (e.g., $1/r$ in 3-D) but can fail unpredictably, e.g., if a quadrature point happens to land very near the singularity. If possible, arrange for such points/surfaces to coincide with dyadic points (i.e., an integer multiple of some power-of-two division of the domain) – this gives the most accurate representation and exploits the discontinuous spectral basis. If you cannot ensure such placement, you must regularize the function manually or analytically. One would usually employ a parameter to control the length scale of the modification and to enable a systematic demonstration of convergence (all three forms below are collected in a code sketch after the equations). E.g., eliminate the cusp in an exponential with

\[
\exp(-r) \to \exp (-\sqrt{r^{2}+\sigma ^{2}}),
\]

or replace a step function with

\[
\theta(x) \to \theta(x, \lambda) = \frac{1}{2} (1 + \tanh\frac{x}{\lambda}),
\]

or the Coulomb potential in 3-D with

\[
\frac{1}{r} \to u(r,c) = \frac{1}{r} \mathrm{erf} \frac{r}{c} + \frac{1}{c\sqrt{\pi}} e^{-\left( \frac{r}{c} \right)^{2}}
\]

subject to

\[
\int_{0}^{\infty} \left(u(r, c) - r^{-1}\right) r^{2} \, dr = 0.
\]

The integral indicates that the mean error is zero, independent of $c$.
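These regularized forms translate directly into code. Below is a minimal plain-C++ sketch (the function names and the small-$r$ cutoff are illustrative choices), with bodies suitable for use inside a functor projected into a MADNESS function:

\begin{verbatim}
#include <cmath>

// exp(-r) -> exp(-sqrt(r^2 + sigma^2)): removes the cusp at r = 0.
double smoothed_exp(double r, double sigma) {
    return std::exp(-std::sqrt(r*r + sigma*sigma));
}

// theta(x) -> (1 + tanh(x/lambda))/2: smooth step of width ~lambda.
double smoothed_step(double x, double lambda) {
    return 0.5 * (1.0 + std::tanh(x / lambda));
}

// 1/r -> u(r,c) = erf(r/c)/r + exp(-(r/c)^2)/(c sqrt(pi)).
double smoothed_coulomb(double r, double c) {
    const double rsqrtpi = 0.5641895835477563;  // 1/sqrt(pi)
    if (r < 1e-12) return 3.0 * rsqrtpi / c;    // r -> 0 limit
    return std::erf(r/c)/r + (rsqrtpi/c)*std::exp(-(r/c)*(r/c));
}
\end{verbatim}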

Numerical noise can be a problem if your function is evaluated using interpolation or some other approximation scheme, or when switching between representations (e.g., between forms suitable for small or large arguments). If you are observing inefficient projection into the basis, ensure that your approximation is everywhere smooth to circa 1 part in $10^{12}$ or better.

MADNESS itself computes to finite precision, so when computing a point-wise function of a function (i.e., $g(f(x))$, where $f(x)$ is a MADNESS function and $g(s)$ is a user-provided function), the user-provided function must tolerate the resulting approximation error, or noise, in its argument. A classic example is computing the function

\[
V(\rho(x)) = \frac{C}{\rho^{1/3}(x)},
\]

where in the original problem one knows that $\rho(x) > 0$ for all $x$, but numerically this positivity is not guaranteed. In this case an effective smoothing is

\begin{eqnarray*}
V(\rho) & \to & V(S(\rho)) \\
S(s) & = & \left\{ \begin{array}{ll} s_{0}, & s\le 0 \\ q(s, s_{0}, s_{1}), & 0 < s \le s_{1} \\ s, & s > s_{1} \end{array} \right. \\
q(s, s_{0}, s_{1}) & = & s_{0} - (-2s_{1} + 3s_{0}) \left( \frac{s}{s_{1}} \right)^{2} + (2s_{0} - s_{1}) \left(\frac{s}{s_{1}}\right)^{3}.
\end{eqnarray*}

The function $S(s)$ coincides with its argument for $s>s_{1}$ and, for smaller values, smoothly switches to a minimum value of $s_{0}$ with a continuous value and derivative at both end points.
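A direct transcription of $S(s)$ follows (a sketch; suitable floor values $s_{0} < s_{1}$ are problem dependent):

\begin{verbatim}
// S(s) from the text: s0 for s <= 0, the cubic q for 0 < s <= s1, and s
// itself above s1; value and first derivative are continuous at both
// switch points (q(0)=s0, q'(0)=0, q(s1)=s1, q'(s1)=1).
double S(double s, double s0, double s1) {
    if (s <= 0.0) return s0;
    if (s >  s1)  return s;
    const double t = s / s1;
    return s0 - (-2.0*s1 + 3.0*s0)*t*t + (2.0*s0 - s1)*t*t*t;
}
\end{verbatim}

One would then evaluate $V(S(\rho))$, e.g., C / std::cbrt(S(rho, s0, s1)) for the density functional above.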

Some computations are intrinsically expensive. For instance, the function $\exp(i\omega r)$ is oscillatory everywhere, and the number of required coefficients will increase linearly with the solution volume. In a 3-D box of width $L$, the number of coefficients will be $\mathcal{O}\left(\left(Lk\omega \right)^{3}\right)$ (where $k$ is the multiwavelet or polynomial order). For $L=1000$, $k=12$ and $\omega=3$, a few hundred TB of data (i.e., too much!) would be generated. Thus, it is worth making a back-of-the-envelope estimate of the expected cost before starting a computation.
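For the quoted example, and assuming 8-byte (double precision) coefficients, the estimate works out to

\[
(1000 \cdot 12 \cdot 3)^{3} \approx 4.7 \times 10^{13} \text{ coefficients} \times 8\,\mathrm{B} \approx 3.7 \times 10^{14}\,\mathrm{B} \approx 370\,\mathrm{TB}.
\]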

The choice of polynomial order ($k$) depends upon the problem and algorithm. Smooth functions can be represented very accurately and efficiently using high-order polynomials, so it can be advantageous to use $k=10$ or higher (for some time-dependent problems we have even used $k=30$). However, functions with fine-scale structure, cusps, or discontinuities require adaptive refinement, so lower-order polynomials (e.g., $k=6$) are more efficient. If you are using integral operators, increasing the polynomial order as you increase the accuracy maintains sparsity in the operator, which is why in moldft we use the heuristic that to achieve an accuracy of $10^{-n}$ we use $k=n+2$.
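In code, this heuristic amounts to setting the global defaults before constructing functions. FunctionDefaults is the MADNESS global-defaults class; the sketch below assumes its set_k/set_thresh setters as used in the MADNESS examples, with $n=6$ as an illustrative target.

\begin{verbatim}
#include <madness/mra/mra.h>
using namespace madness;

// moldft heuristic: for a target accuracy of 10^-n, use k = n + 2.
void set_accuracy_defaults() {
    const int n = 6;
    FunctionDefaults<3>::set_thresh(1e-6); // truncation threshold 10^-n
    FunctionDefaults<3>::set_k(n + 2);     // polynomial order k = 8
}
\end{verbatim}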
