This section summarizes the options DFTK offers to monitor and influence performance of the code.
By default DFTK uses TimerOutputs.jl to record timings, memory allocations and the number of calls for selected routines inside the code. These numbers are accessible in the object DFTK.timer. Since the timings are automatically accumulated inside this data structure, any timing measurement should first reset this timer before running the calculation of interest.
For example, to measure the timing of an SCF:
DFTK.reset_timer!(DFTK.timer)
scfres = self_consistent_field(basis, tol=1e-8)
DFTK.timer
 ──────────────────────────────────────────────────────────────────────────────
                                          Time                   Allocations
                                  ──────────────────────  ───────────────────────
        Tot / % measured:             1.06s / 19.5%           89.8MiB / 39.1%

 Section                   ncalls    time   %tot     avg     alloc   %tot      avg
 ──────────────────────────────────────────────────────────────────────────────
 self_consistent_field          1   205ms    100%   205ms   34.8MiB  99.0%  34.8MiB
   compute_density              6  89.6ms   43.5%  14.9ms   6.15MiB  17.5%  1.02MiB
   LOBPCG                      12  83.6ms   40.6%  6.97ms   10.7MiB  30.4%   911KiB
     Hamiltonian mu...         42  59.2ms   28.7%  1.41ms   3.50MiB  9.95%  85.3KiB
       kinetic+local           42  56.0ms   27.2%  1.33ms    758KiB  2.10%  18.0KiB
       nonlocal                42  1.73ms   0.84%  41.3μs    834KiB  2.32%  19.8KiB
     rayleigh_ritz             30  6.30ms   3.06%   210μs   1.04MiB  2.97%  35.6KiB
     ortho!                   114  3.66ms   1.77%  32.1μs    958KiB  2.66%  8.40KiB
   energy_hamiltonian          13  26.9ms   13.1%  2.07ms   14.1MiB  40.1%  1.08MiB
     ene_ops                   13  24.0ms   11.7%  1.85ms   10.4MiB  29.7%   821KiB
       ene_ops: xc             13  16.8ms   8.17%  1.30ms   3.07MiB  8.73%   242KiB
       ene_ops: har...         13  3.78ms   1.83%   290μs   5.84MiB  16.6%   460KiB
       ene_ops: non...         13  1.04ms   0.50%  79.9μs    152KiB  0.42%  11.7KiB
       ene_ops: local          13   851μs   0.41%  65.5μs   1.23MiB  3.49%  96.5KiB
       ene_ops: kin...         13   629μs   0.31%  48.4μs   95.0KiB  0.26%  7.31KiB
   QR orthonormaliz...         12   271μs   0.13%  22.6μs    160KiB  0.44%  13.3KiB
   χ0Mixing                     6   131μs   0.06%  21.8μs   26.9KiB  0.07%  4.48KiB
 guess_density                  1   657μs   0.32%   657μs    370KiB  1.03%   370KiB
 ──────────────────────────────────────────────────────────────────────────────
The output produced when printing or displaying the DFTK.timer now shows a nice table summarising total time and allocations as well as a breakdown over individual routines.
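If you prefer a different presentation, the timer can also be printed through the TimerOutputs interface directly. The following is only a sketch; the keyword arguments are those of TimerOutputs' print_timer function and may differ between versions:

using TimerOutputs    # the package behind DFTK.timer
# Print the accumulated measurements sorted by runtime, including allocation columns.
print_timer(DFTK.timer; sortby=:time, allocations=true)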
Timing measurements have the unfortunate disadvantage that they alter the way stack traces look, sometimes making it harder to find errors when debugging. For this reason timing measurements can be disabled completely (i.e. not even compiled into the code) by setting the environment variable DFTK_TIMING to "false". For this to take effect, all of DFTK (including the precompile cache) needs to be recompiled.
Unfortunately measuring timings in TimerOutputs is not yet thread-safe. Therefore timings of threaded parts of the code will be disabled unless you set DFTK_TIMING to "all". In this case you must not use Julia threading (see the section below), otherwise undefined behaviour results.
At the moment DFTK offers two ways to parallelize a calculation: firstly shared-memory parallelism using threading and secondly multiprocessing using MPI (via the MPI.jl Julia interface). MPI-based parallelism is currently only over $k$-points, such that it cannot be used for calculations with only a single $k$-point. Apart from this restriction, combining both forms of parallelism is possible as well.
The scaling of both forms of parallelism for a number of test cases is demonstrated in the following figure. These values were obtained using DFTK version 0.1.17 and Julia 1.6; the precise scalings will likely differ depending on the architecture and the DFTK or Julia version. The rough trends should, however, be similar.
The MPI-based parallelization strategy clearly shows a superior scaling and should be preferred if available.
Currently DFTK uses MPI to distribute over $k$-points only. This implies that calculations with only a single $k$-point cannot make use of this parallelization. For details on setting up and configuring MPI with Julia see the MPI.jl documentation.
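For instance, one way to obtain the mpiexecjl wrapper used in the steps below is MPI.jl's install helper; this sketch follows the MPI.jl documentation and details such as the default install location may differ between versions:

using Pkg
Pkg.add("MPI")              # the MPI.jl Julia interface
using MPI
MPI.install_mpiexecjl()     # installs the mpiexecjl wrapper (by default into ~/.julia/bin)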
First disable all threading inside DFTK, by adding the following to your script running the DFTK calculation:
using DFTK
disable_threading()
Run Julia in parallel using the mpiexecjl wrapper script from MPI.jl:

mpiexecjl -np 16 julia -t 1 myscript.jl

Here -np 16 tells MPI to use 16 processes and -t 1 tells Julia to use one thread only. Notice that we use mpiexecjl to automatically select the mpiexec compatible with the MPI version used by MPI.jl.
As usual with MPI, printing will be garbled. You can use
DFTK.mpi_master() || (redirect_stdout(); redirect_stderr())
at the top of your script to disable printing on all processes but one.
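Putting these pieces together, a minimal sketch of what myscript.jl could look like for such an MPI-parallel run (the model and basis setup is omitted here and is whatever your calculation requires):

using DFTK
disable_threading()          # parallelism comes from MPI, so switch off threading
# Print only on the MPI master process to avoid garbled output.
DFTK.mpi_master() || (redirect_stdout(); redirect_stderr())

# ... set up model and basis for your system (omitted) ...

scfres = self_consistent_field(basis, tol=1e-8)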
While standard procedures (such as the SCF or band structure calculations) fully support MPI, not all routines of DFTK are compatible with MPI yet and will throw an error when called in an MPI-parallel run. In most cases there is no intrinsic limitation; support just has not yet been implemented. If you require MPI in one of our routines where it is not yet supported, feel free to open an issue on GitHub or otherwise get in touch.
Threading in DFTK currently happens on multiple layers, distributing the workload over different $k$-points, bands, or within an FFT or BLAS call between threads. At its current stage our scaling for thread-based parallelism is worse compared to the MPI-based approach, and therefore the parallelism described here should only be used if no other option exists. To use thread-based parallelism proceed as follows:
Ensure that threading is properly set up inside DFTK by adding the following to the script running the DFTK calculation:
using DFTK
setup_threading()
This disables FFT threading and sets the number of BLAS threads to the number of Julia threads.
Run Julia passing the desired number of threads using the -t flag:

julia -t 8 myscript.jl
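A minimal sketch of what such a myscript.jl could contain (again, the model and basis setup is omitted and is whatever your calculation requires):

using DFTK
setup_threading()            # BLAS threads = Julia threads, FFT threading disabled
@info "Running with $(Threads.nthreads()) Julia threads"

# ... set up model and basis for your system (omitted) ...

scfres = self_consistent_field(basis, tol=1e-8)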
For some cases (e.g. a single $k$-point, few bands and a large FFT grid) it can be advantageous to add threading inside the FFTs as well. One example is the Caffeine calculation in the above scaling plot. In order to do so just call setup_threading(n_fft=2), which will select two FFT threads, as shown in the snippet below. More than two FFT threads is rarely useful.
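That is, in the script sketched above one would simply replace the plain setup_threading() call (the remaining defaults are assumed to stay as described above):

using DFTK
setup_threading(n_fft=2)     # select two FFT threads instead of the default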
The default threading setup done by setup_threading is to select one FFT thread and the same number of BLAS and Julia threads. This section provides some information in case you want to change these defaults.
All BLAS calls in Julia go through a parallelized OpenBLAS or MKL (with MKL.jl). Generally threading in BLAS calls is far from optimal and the default settings can be pretty bad. For example, for CPUs with hyperthreading enabled, the default number of threads seems to equal the number of virtual cores. Still, BLAS calls typically take second place in terms of the share of runtime they make up (between 10% and 20%). Notably, many of these calls do not take place on matrices of the size of the full FFT grid, but only in a subspace (e.g. orthogonalization, Rayleigh-Ritz, ...), such that parallelization is either disabled anyway by the BLAS library or not very effective. To set the number of BLAS threads use
using LinearAlgebra
BLAS.set_num_threads(N)
where N is the number of threads you desire. To check the number of BLAS threads currently used, you can use
Int(ccall((BLAS.@blasfunc(openblas_get_num_threads), BLAS.libblas), Cint, ()))
or (from Julia 1.6) simply

BLAS.get_num_threads()
On top of BLAS threading DFTK uses Julia threads (Threads.@threads) in a couple of places to parallelize over $k$-points (density computation) or bands (Hamiltonian application). The number of threads used for these aspects is controlled by the flag -t passed to Julia or the environment variable JULIA_NUM_THREADS. To check the number of Julia threads use

Threads.nthreads()
Since FFT threading is only used in DFTK inside the regions already parallelized by Julia threads, setting FFT threads to something larger than 1 is rarely useful if a sensible number of Julia threads has been chosen. Still, to explicitly set the FFT threads use
using FFTW
FFTW.set_num_threads(N)
where N is the number of threads you desire. By default no FFT threads are used, which is almost always the best choice.