Timings and parallelization

This section summarizes the options DFTK offers to monitor and influence the performance of the code.

Timing measurements

By default DFTK uses TimerOutputs.jl to record timings, memory allocations and the number of calls for selected routines inside the code. These numbers are accessible in the object DFTK.timer. Since the timings are automatically accumulated inside this datastructure, any timing measurement should first reset this timer before running the calculation of interest.

For example, to measure the timing of an SCF, first reset the timer, then run the calculation and display DFTK.timer:

DFTK.reset_timer!(DFTK.timer)
scfres = self_consistent_field(basis, tol=1e-8)
DFTK.timer

                                       Time                   Allocations      
                               ──────────────────────   ───────────────────────
       Tot / % measured:            1.00s / 21.2%           85.5MiB / 42.0%    

 Section               ncalls     time   %tot     avg     alloc   %tot      avg
 self_consistent_field      1    211ms   100%   211ms   35.6MiB  99.0%  35.6MiB
   compute_density          6   94.9ms  44.8%  15.8ms   6.12MiB  17.0%  1.02MiB
   LOBPCG                  12   88.5ms  41.7%  7.37ms   11.8MiB  32.9%  0.99MiB
     Hamiltonian mu...     40   52.6ms  24.8%  1.31ms   3.34MiB  9.29%  85.4KiB
       kinetic+local       40   49.8ms  23.5%  1.24ms    725KiB  1.97%  18.1KiB
       nonlocal            40   1.85ms  0.87%  46.2μs    797KiB  2.17%  19.9KiB
     ortho                103   15.6ms  7.35%   151μs    868KiB  2.36%  8.43KiB
     rayleigh_ritz         28   10.2ms  4.82%   365μs   0.97MiB  2.69%  35.4KiB
     block multipli...    119   1.46ms  0.69%  12.3μs   1.74MiB  4.84%  15.0KiB
   energy_hamiltonian      13   24.4ms  11.5%  1.88ms   14.5MiB  40.4%  1.12MiB
     ene_ops               13   21.9ms  10.3%  1.69ms   10.9MiB  30.2%   855KiB
       ene_ops: xc         13   15.8ms  7.47%  1.22ms   3.07MiB  8.54%   242KiB
       ene_ops: har...     13   3.21ms  1.51%   247μs   5.84MiB  16.3%   460KiB
       ene_ops: kin...     13    818μs  0.39%  62.9μs    530KiB  1.44%  40.8KiB
       ene_ops: non...     13    771μs  0.36%  59.3μs    152KiB  0.41%  11.7KiB
       ene_ops: local      13    742μs  0.35%  57.1μs   1.22MiB  3.41%  96.5KiB
   QR orthonormaliz...     12    261μs  0.12%  21.7μs    160KiB  0.44%  13.3KiB
 guess_density              1    609μs  0.29%   609μs    370KiB  1.00%   370KiB

The output produced when printing or displaying DFTK.timer is a table summarising total time and allocations as well as a breakdown over individual routines.

Timing measurements and stack traces

Timing measurements have the unfortunate disadvantage that they alter the way stack traces look, which sometimes makes it harder to find errors when debugging. For this reason timing measurements can be disabled completely (i.e. not even compiled into the code) by setting the environment variable DFTK_TIMING to "0" or "false". For this to take effect all of DFTK (including the precompilation cache) needs to be recompiled.
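On a Unix-like shell this could look as follows; this is a sketch of an assumed workflow, and depending on your Julia version you may need a different way to force the precompilation cache to be rebuilt:

```shell
# Disable timing instrumentation before DFTK is (re)compiled
export DFTK_TIMING=0

# Rebuild the precompilation cache, then run the calculation
julia -e 'using Pkg; Pkg.precompile()'
julia myscript.jl
```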

Timing measurements and threading

Unfortunately measuring timings in TimerOutputs is not yet thread-safe. Therefore timings of threaded parts of the code are disabled unless you set DFTK_TIMING to "all". In this case you must not use Julia threading (see the section below), otherwise undefined behaviour results.

Options for parallelization

At the moment DFTK offers two ways to parallelize a calculation: firstly shared-memory parallelism using threading and secondly multiprocessing using MPI (via the MPI.jl Julia interface). MPI-based parallelism is currently only over k-points, so it cannot be used for calculations with a single k-point. Otherwise both forms of parallelism can be combined.

The scaling of both forms of parallelism for a number of test cases is demonstrated in the following figure. These values were obtained using DFTK version 0.1.17 and Julia 1.6; the precise scalings will likely differ depending on architecture and the DFTK or Julia version. The rough trends should, however, be similar.

The MPI-based parallelization strategy clearly shows a superior scaling and should be preferred if available.

MPI-based parallelism

Currently DFTK uses MPI only to distribute over k-points. This implies that calculations with a single k-point cannot make use of this form of parallelism. For details on setting up and configuring MPI with Julia see the MPI.jl documentation.

  1. First disable all threading inside DFTK by adding the following to the script running the DFTK calculation:

    using DFTK
    disable_threading()
  2. Run Julia in parallel using the mpiexecjl wrapper script from MPI.jl:

    mpiexecjl -np 16 julia -t 1 myscript.jl

    Here -np 16 tells MPI to use 16 processes and -t 1 tells Julia to use only a single thread. Notice that we use mpiexecjl to automatically select the mpiexec compatible with the MPI version used by MPI.jl.

As usual with MPI printing will be garbled. You can use

DFTK.mpi_master() || (redirect_stdout(); redirect_stderr())

at the top of your script to disable printing on all processes but one.
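Putting these pieces together, a minimal MPI-ready script might be sketched as follows; the model and basis setup is elided, and the SCF call mirrors the timing example above:

```julia
using DFTK
disable_threading()   # one thread per MPI process, as recommended above

# Silence all processes except the MPI master
DFTK.mpi_master() || (redirect_stdout(); redirect_stderr())

# ... set up model and plane-wave basis here ...

# The SCF is automatically distributed over k-points across MPI processes
scfres = self_consistent_field(basis, tol=1e-8)
```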

MPI-based parallelism is experimental

Even though MPI-based parallelism shows the better scaling, it is still experimental and some routines (e.g. band structure computation and direct minimization) are not yet compatible with it.

Thread-based parallelism

Threading in DFTK currently happens on multiple layers, distributing the workload over different k-points, bands or within an FFT or BLAS call between threads. At its current stage the scaling of thread-based parallelism is worse than that of MPI-based parallelism, so the approach described here should only be used if no other option exists. To use thread-based parallelism proceed as follows:

  1. Ensure that threading is properly set up inside DFTK by adding the following to the script running the DFTK calculation:

    using DFTK
    setup_threading()

    This disables FFT threading and sets the number of BLAS threads to the number of Julia threads.

  2. Run Julia passing the desired number of threads using the flag -t:

    julia -t 8 myscript.jl

For some cases (e.g. a single k-point, few bands and a large FFT grid) it can be advantageous to additionally enable threading inside the FFTs. One example is the Caffeine calculation in the above scaling plot. To do so just call setup_threading(n_fft=2), which selects two FFT threads. More than two FFT threads is rarely useful.

Advanced threading tweaks

The default threading setup chosen by setup_threading is one FFT thread and the same number of BLAS and Julia threads. This section provides some information in case you want to change these defaults.
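Assuming setup_threading accepts keyword arguments analogous to the n_fft shown above (the keyword names here are an assumption, not confirmed API), the defaults just described could be made explicit like this:

```julia
using DFTK

# Assumed keyword interface: explicitly request the defaults described above,
# i.e. one FFT thread and as many BLAS threads as Julia threads.
setup_threading(; n_fft=1, n_blas=Threads.nthreads())
```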

BLAS threads

All BLAS calls in Julia go through a parallelized OpenBLAS or MKL (with MKL.jl). Generally threading in BLAS calls is far from optimal and the default settings can be pretty bad. For example for CPUs with hyperthreading enabled, the default number of threads seems to equal the number of virtual cores. Still, BLAS calls typically take second place in terms of the share of runtime they make up (between 10% and 20%). Note that many of these calls do not operate on matrices of the size of the full FFT grid, but only in a subspace (e.g. orthogonalization, Rayleigh-Ritz, ...), such that parallelization is either disabled by the BLAS library anyway or not very effective. To set the number of BLAS threads use

using LinearAlgebra
BLAS.set_num_threads(N)

where N is the number of threads you desire. To check the number of BLAS threads currently used, you can use

Int(ccall((BLAS.@blasfunc(openblas_get_num_threads), BLAS.libblas), Cint, ()))

or (from Julia 1.6) simply BLAS.get_num_threads().

Julia threads

On top of BLAS threading, DFTK uses Julia threads (Threads.@threads) in a couple of places to parallelize over k-points (density computation) or bands (Hamiltonian application). The number of threads used for these aspects is controlled by the flag -t passed to Julia or the environment variable JULIA_NUM_THREADS. To check the number of Julia threads use Threads.nthreads().
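As an equivalent to the -t flag shown earlier, the thread count can also be set through the environment variable, e.g.:

```shell
# Run the script with 8 Julia threads (equivalent to: julia -t 8 myscript.jl)
JULIA_NUM_THREADS=8 julia myscript.jl
```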

FFT threads

Since FFT threading is only used in DFTK inside the regions already parallelized by Julia threads, setting FFT threads to something larger than 1 is rarely useful if a sensible number of Julia threads has been chosen. Still, to explicitly set the FFT threads use

using FFTW
FFTW.set_num_threads(N)

where N is the number of threads you desire. By default only a single FFT thread is used (i.e. FFT threading is disabled), which is almost always the best choice.