# HG changeset patch
# User Ashwin Vishnu <avmo@kth.se>
# Date 1526549669 -7200
#      Thu May 17 11:34:29 2018 +0200
# Node ID d302ac59a270fbeeac977fbe2d299a10194f878b
# Parent  1e67d14c3f2e8c92b676c31d8b10f2f64d25ccf3
Corrections on fluidfft paper

diff --git a/fluiddyn/fluiddyn_paper.tex b/fluiddyn/fluiddyn_paper.tex
--- a/fluiddyn/fluiddyn_paper.tex
+++ b/fluiddyn/fluiddyn_paper.tex
@@ -1746,7 +1746,8 @@
 \href{https://bitbucket.org/fluiddyn/fluiddyn/issues}{Issues page on
 Bitbucket}. Discussions and questions can be aired on instant messaging
 channels in Riot (or equivalent with Matrix protocol) at
-\href{https://riot.im/app/#/room/#fluiddyn-users:matrix.org}{\codeinline{\#fluiddyn-users:matrix.org}}
+\url{%
+  https://matrix.to/#/#fluiddyn-users:matrix.org}
 or via IRC protocol on Freenode at \codeinline{\#fluiddyn-users}.

 \subsection*{Conclusions}

diff --git a/fluidfft/Makefile b/fluidfft/Makefile
--- a/fluidfft/Makefile
+++ b/fluidfft/Makefile
@@ -24,7 +24,7 @@
 vimtex:
 	# gvim $(name).tex --servername GVIM &
 	# xterm -class GVIM -e vim $(name).tex --servername GVIM &
-	NVIM_LISTEN_ADDRESS=GVIM nvim-gtk $(name).tex &
+	NVIM_LISTEN_ADDRESS=GVIM nvim-qt $(name).tex &

 doit: vimtex $(name).pdf
 	zathura $(name).pdf &

diff --git a/fluidfft/fluidfft_paper.tex b/fluidfft/fluidfft_paper.tex
--- a/fluidfft/fluidfft_paper.tex
+++ b/fluidfft/fluidfft_paper.tex
@@ -108,13 +108,13 @@
 There are two strategies to distribute the memory, the 1D (or \emph{slab})
 decomposition and the 2D (or \emph{pencil}) decomposition. The 1D
 decomposition is more efficient when only a few processes are used but suffers from an
-important limitation in terms of number of MPI processes that can be used. In
-contrast, this limitation is overcome by the 2D decomposition.
+important limitation in terms of the number of MPI processes that can be used.
+Utilizing 2D decomposition overcomes this limitation.
 Some of the well-known libraries are written in C, C++ and Fortran.
 \libpack{FFTW} supports MPI using 1D decomposition and hybrid parallelism
 using MPI and OpenMP.
-Other libraries, now implement the 2D decomposition: \libpack{pfft}
-\citep{pippig_pfft2013}, \libpack{p3dfft} \citep{pekurovsky2012p3dfft},
+Other libraries now implement the 2D decomposition: \libpack{PFFT}
+\citep{pippig_pfft2013}, \libpack{P3DFFT} \citep{pekurovsky2012p3dfft},
 \libpack{2decomp\&FFT} and so on. These libraries rely on MPI for the
 communications between processes, are optimized for supercomputers and scale
 well to hundreds of thousands of cores. However, since there is no common API, it is
@@ -145,7 +145,7 @@
     \2 \pack{mpiFFT4py} and \pack{mpi4py-fft} built on top of \pack{pyFFTW}
        and \pack{numpy.fft}.
     \2 \pack{pfft-python} which provides extensions for
-       pfft library.
+       the PFFT library.
 \1 FFT with GPGPU, using:
     \2 \pack{Reikna}, a pure python package which depends on \pack{PyCUDA}
        and \pack{PyOpenCL}
@@ -161,14 +161,14 @@
 sequential, MPI and GPGPU based under a single package with similar syntax.

 \item Quite complicated even for the simplest use case scenarios. To
-  understand how to use them, a novice user has to read at least the FFTW
+  understand how to use them, a novice user has to at least read the FFTW
   documentation.

 \item No benchmarks between libraries and between the Python solutions and
   solutions based only on a compiled language (as C, C++ or Fortran).

 \item Provides just the FFT and inverse FFT functions, no associated
-  linear algebra operators.
+  mathematical operators.
 \end{itemize}
@@ -190,12 +192,12 @@
 \end{itemize}

 In the present article, we shall start by describing the implementation of
-\fluidpack{fft} including its design aspects, the code organization. Thereafter,
-we shall compare the performance of different classes in \fluidpack{fft} in two
-high performance computing clusters, and also describe using microbenchmarks how a
-Python function can be optimized to be as fast as a Fortran
-implementation. Finally, we show how we test and maintain the quality of the code
-base through continuous integration and mention some possible applications of
+\fluidpack{fft} including its design aspects and the code organization. Thereafter,
+we shall compare the performance of different classes in \fluidpack{fft} on
+three computing clusters, and also describe, using microbenchmarks, how a Python
+function can be optimized to be as fast as a Fortran implementation. Finally,
+we show how we test and maintain the quality of the code base through
+continuous integration and mention some possible applications of
 \fluidpack{fft}.

 \section*{Implementation and architecture}

@@ -213,7 +213,7 @@
 Both C++ and Python APIs provided by \fluidpack{fft} currently support linking
 with \libpack{FFTW} (with and without MPI and OpenMP support enabled),
-\libpack{MKL}, \libpack{pfft}, \libpack{p3dfft}, \libpack{cuFFT} libraries. The
+\libpack{MKL}, \libpack{PFFT}, \libpack{P3DFFT} and \libpack{cuFFT} libraries. The
 classes in \fluidpack{fft} offer an API for performing
 double-precision\footnote{Most C++ classes also support single-precision.}
 computation with real-to-complex FFT, complex-to-real inverse FFT, and additional
@@ -226,7 +226,8 @@
 \includegraphics[width=\linewidth]{Pyfig/fig_classes}
 \caption{Class hierarchy demonstrating object-oriented approach. The
   sequential classes are shown in red, the CUDA-based classes in magenta and
-  the MPI-based classes in green.
+  the MPI-based classes in green. The arrows represent inheritance from
+  parent to child class.
 }\label{fig:classes}
 \end{figure}

@@ -268,8 +269,8 @@
 \end{itemize}

 Let us illustrate with a trivial example, wherein we initialize the FFT with a
-random physical array, and perform a \codeinline{fft} and \codeinline{ifft}
-operation.
+random physical array, and perform a set of \codeinline{fft} and \codeinline{ifft}
+operations.
 \begin{minted}[fontsize=\footnotesize]{cpp}
 #include <iostream>
 using namespace std;
@@ -299,28 +300,28 @@
 }
 \end{minted}

-As suggested in the comments above, to switch the FFT library and the user only
-needs to change the header file and the class name. Another added advantage is
-that the user does not need to worry about the domain decomposition to declare
-and allocate the arrays. A few more helper functions are available with the FFT
-classes, such as functions to compute the mean value and energies in the array.
-These are illustrated with examples in the documentation.\footnote{%
+As suggested by the comments above, in order to switch the FFT library, the
+user only needs to change the header file and the class name. An added
+advantage is that the user does not need to bother about the domain
+decomposition while declaring and allocating the arrays. A few more helper
+functions are available with the FFT classes, such as functions to compute the
+mean value and energies in the array. These are illustrated with examples in
+the documentation.\footnote{%
 \url{https://fluidfft.readthedocs.io/en/latest/examples/cpp.html}.}
 %
-Detailed information related to the C++ classes and its member functions can be
-found in the online documentation\footnote{%
+Detailed information regarding the C++ classes and their member functions is
+also included in the online documentation\footnote{%
 \url{https://fluidfft.readthedocs.io/en/latest/doxygen/index.html}.}.

-\subsection*{Python API}
-Similar to other packages in the FluidDyn project, \fluidpack{fft} also uses an
-object-oriented approach to wrap the FFT libraries.
+\subsection*{Python API}
+Similar to other packages in the FluidDyn project, \fluidpack{fft} also uses
+an object-oriented approach, providing FFT classes.
 %
-This is in contrast with the approach taken by \pack{numpy.fft} and \pack{%
-scipy.fftpack}, wherein the user has to figure out from the documentation the
-procedure to design the input values and to use the return values while using
-the FFT and inverse FFT functions. The Python API wraps all the functionalities
-of its C++ counterpart and offers a more richer experience through an
-accompanying operator class.
+This is in contrast with the approach adopted by \pack{numpy.fft} and \pack{%
+scipy.fftpack}, which provide FFT and inverse FFT functions, with which the
+user has to figure out from the documentation the procedure to design the
+input values and to use the return values. In \fluidpack{fft}, the Python API
+wraps all the functionalities of its C++ counterpart and offers a richer
+experience through an accompanying operator class.

 As a short example, let us try to calculate the gradient of a plane sine-wave
 using spectral methods, mathematically described as follows:
@@ -344,13 +345,14 @@
 The equivalent pseudo-spectral implementation in \fluidpack{fft} is as follows:
 \begin{minted}[fontsize=\footnotesize]{python}
 from fluidfft.fft2d.operators import OperatorsPseudoSpectral2D, pi
+from numpy import sin

 nx = ny = 100
 lx = ly = 2 * pi

 oper = OperatorsPseudoSpectral2D(nx, ny, lx, ly, fft='fft2d.with_fftw2d')
-u = np.sin(oper.XX + oper.YY)
+u = sin(oper.XX + oper.YY)
 u_fft = oper.fft(u)
 px_u_fft, py_u_fft = oper.gradfft_from_fft(u_fft)
 px_u = oper.ifft(px_u_fft)
@@ -358,8 +360,8 @@
 grad_u = (px_u, py_u)
 \end{minted}

-A parallelized version of the above code will work out of the box, simply by
-replacing the FFT class with an MPI-based FFT class, for example
+A parallelized version of the code above will work out of the box, simply by
+replacing the FFT class with an MPI-based FFT class, for instance
 \codeinline{fft2d.with\_fftwmpi2d}. Even if one finds a method in the operator
 class to be lacking, one can inherit the class and easily create the new
 method, for instance using the wavenumber arrays, \codeinline{oper.KX} and
@@ -398,22 +400,23 @@
 https://fluidfft.readthedocs.io/en/latest/examples/cpp.html}{%
 in the documentation}.

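+To give the flavour of this inheritance pattern, a minimal sketch (with a
+hypothetical method name and formula of our own, not an API shipped with
+\fluidpack{fft}) could read as follows:
+\begin{minted}[fontsize=\footnotesize]{python}
+# Hypothetical subclass adding a spectral Laplacian method, following
+# the inheritance pattern described above.
+from fluidfft.fft2d.operators import OperatorsPseudoSpectral2D
+
+class MyOperators(OperatorsPseudoSpectral2D):
+    def laplacianfft_from_fft(self, u_fft):
+        # Multiply u_hat by -(kx**2 + ky**2) in spectral space
+        return -(self.KX**2 + self.KY**2) * u_fft
+\end{minted}
+An instance of such a subclass is used exactly like the parent operator class
+in the previous example.
+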
-The Python API is built automatically when \fluidpack{fft} is installed\footnote{%
-\href{https://fluidfft.readthedocs.io/en/latest/install.html}{Detailed steps for
-installation} are provided in the documentation.}.
+The Python API is built automatically when \fluidpack{fft} is
+installed\footnote{%
+\href{https://fluidfft.readthedocs.io/en/latest/install.html}{Detailed steps
+for installation} are provided in the documentation.}.
 %
 It first generates the Cython source code as a pair of \codeinline{.pyx} and
 \codeinline{.pxd} files containing a class wrapping its C++
 counterpart\footnote{Uses an approach similar to guidelines \href{%
-https://cython.readthedocs.io/en/latest/src/userguide/wrapping_CPlusPlus.html}{%
+  https://cython.readthedocs.io/en/latest/src/userguide/wrapping_CPlusPlus.html}{%
 ``Using C++ in Cython''} in the Cython documentation.}.
 %
 The Cython files are produced from template files (specialized for the 2D and
 3D cases) using the template library \mako.
 %
-Thereafter, \pack{cython} generates C++ code with necessary Python bindings, which
-are then built in the form of extensions or dynamic libraries importable in Python
-code. All the built extensions are then installed as a Python package
+Thereafter, \pack{Cython} generates C++ code with the necessary Python bindings,
+which are then built in the form of extensions or dynamic libraries importable
+in Python code. All the built extensions are then installed as a Python package
 \fluidpack{fft}.

 A helper function \codeinline{fluidfft.import\_fft\_class} is provided with the
@@ -431,7 +434,7 @@
 \item One C++ class per method derived from a hierarchy of C++ classes as shown
   in Fig.~\ref{fig:classes}.

-\item Cython wrappers of the C++ classes with their unit test cases.
+\item \pack{Cython} wrappers of the C++ classes with their unit test cases.

 \item Python operators classes (2D and 3D) to write code independently of the
   library used for the computation of the FFT and with some mathematical helper
@@ -465,9 +468,10 @@
   processes}]_\alpha}
 \label{eq:speedup}
 \end{equation*}
+
 where $n_{p,\min}$ is the minimum number of processes employed for a specific
-array size and hardware, $\alpha$ denotes the FFT class used and ``fastest''
-corresponds to the fastest result among various FFT classes.
+array size and hardware. The subscript $\alpha$ denotes the FFT class used
+and ``fastest'' corresponds to the fastest result among various FFT classes.

 To compute strong scaling the utility \codeinline{fluidfft-bench} is launched
 as scheduled jobs on HPC clusters, ensuring no interference from background
@@ -500,10 +504,10 @@
 On big HPC clusters, we have only focussed on 3D array transforms as benchmark
 problems, since these are notoriously expensive to compute and require massive
-parallelization. The physical arrays used in all four MPI based FFT classes are
-identical in structure. However, there are subtle differences, in terms of how
-the domain decomposition and the allocation of the transformed array in the memory
-are handled\footnote{Detailed discussion on \href{%
+parallelization. The physical arrays used in all four 3D MPI-based FFT classes
+are identical in structure. However, there are subtle differences in terms of
+how the domain decomposition and the allocation of the transformed array in the
+memory are handled\footnote{Detailed discussion on \href{%
 https://fluidfft.readthedocs.io/en/latest/ipynb/executed/tuto_fft3d_mpi_domain_decomp.html}{%
 ``FFT 3D parallel (MPI): Domain decomposition''} tutorial}.

@@ -518,7 +522,7 @@
 \href{https://www.top500.org/system/178465}{Occigen} is a GENCI-CINES HPC
 cluster which uses Intel Xeon CPU E5--2690 v3 (2.6 GHz) processors with 24 cores
-per node. The installation was done using Intel C++ 17.2 compiler, Python
+per node. The installation was performed using Intel C++ 17.2 compiler, Python
 3.6.5, and OpenMPI 2.0.2.

 \begin{figure}[htp!]
@@ -530,12 +534,13 @@
 \end{figure}

 Fig.~\ref{fig:occigen384x1152x1152} demonstrates the strong scaling performance
-of a cuboidal array sized $384\times1152\times1152$. This case is particularly
+of a cuboid array sized $384\times1152\times1152$. This case is particularly
 interesting since for FFT classes implementing 1D domain decomposition
 (\codeinline{fftw1d} and \codeinline{fftwmpi3d}), the processes are spread on
 the first index for the physical input array. This restriction is a result of
 some \libpack{FFTW} library internals and design choices adopted in
 \fluidpack{fft}. This limits \codeinline{fftw1d} to 192 cores and
+% av: The data for 192 cores is still missing from Occigen.
 \codeinline{fftwmpi3d} to 384 cores. The latter can utilize more cores since it
 is capable of working with empty arrays, while sharing some of the
 computational load.
@@ -564,10 +569,10 @@
 \end{figure}

 Fig.~\ref{fig:occigen1152x1152x1152} demonstrates the strong scaling
-performance of an cubical array sized $1152\times1152\times1152$. For this
+performance of a cubical array sized $1152\times1152\times1152$. For this
 resolution as well, \codeinline{fftw1d} is the fastest method when using only
 a few cores and it cannot be used for more than 192 cores. The fastest library
-when using more cores is also \codeinline{p3dfft}. This also shows that
+when using more cores is also \codeinline{P3DFFT}. This also shows that
 \fluidpack{fft} can effectively scale for over 10,000 cores with a significant
 increase in speedup.

@@ -577,7 +582,7 @@
 \href{
 https://www.pdc.kth.se/hpc-services/computing-systems}{Beskow} is a Cray
 machine maintained by SNIC at PDC, Stockholm. It runs on Intel(R) Xeon(R) CPU
 E5-2695 v4 (2.1 GHz) processors with 36 cores per node. The installation was
-done using Intel C++ 18 compiler, Python 3.6.5 and cray-mpich 7.0.4.
+done using Intel C++ 18 compiler, Python 3.6.5 and Cray MPICH 7.0.4.

 \begin{figure}[htp!]
 \centering
@@ -587,12 +592,12 @@
 \label{fig:beskow384x1152x1152}
 \end{figure}

-In Fig.~\ref{fig:beskow384x1152x1152}, the strong scaling results of the cubical
+In Fig.~\ref{fig:beskow384x1152x1152}, the strong scaling results of the cuboid
 array can be observed. In this set of results we have also included intra-node
-scaling, in which there is no latency introduced due to typically slower
-node-to-node communication. The fastest library for very low (below 16) and very
-high (above 384) number of processes in this configuration is
-\codeinline{p3dfft}. For moderately high number of processes (16 and above) the
+scaling, wherein there is no latency introduced due to typically slower
+node-to-node communication. The fastest library for very low (below 16) and
+very high (above 384) numbers of processes in this configuration is
+\codeinline{P3DFFT}. For a moderately high number of processes (16 and above) the
 fastest library is \codeinline{fftwmpi3d}. Here too, we notice that
 \codeinline{fftw1d} is limited to 192 cores and \codeinline{fftwmpi3d} to 384
 cores, for reasons mentioned earlier.
@@ -617,8 +622,8 @@
 Fig.~\ref{fig:beskow1152x1152x1152}, wherein we restrict to inter-node
 computation. We observe that the fastest method for a low number of processes
 is again \codeinline{fftwmpi3d}. When a high number of processes (above 1000)
-are utilized, initially \codeinline{p3dfft} is the faster methods as before,
-but with 3000 and above processes \codeinline{pfft} is comparable in speed and
+is utilized, initially \codeinline{P3DFFT} is the fastest method as before,
+but with 3000 and above processes, \codeinline{PFFT} is comparable in speed and
 sometimes faster.

 \paragraph{Benchmarks on a LEGI cluster}

@@ -627,8 +632,8 @@
 maintained at an institutional level, named Cluster8 at \href{%
 http://www.legi.grenoble-inp.fr}{LEGI}, Grenoble. This cluster functions using
 Intel Xeon CPU E5-2650 v3 (2.3 GHz) with 20 cores per node and \fluidpack{fft}
-was installed a toolchain which includes gcc 4.9.2, Python 3.6.4 and OpenMPI
-1.6.5 as key software components.
+was installed using a toolchain which comprises gcc 4.9.2, Python 3.6.4 and
+OpenMPI 1.6.5 as key software components.

 \begin{figure}[htp!]
 \centering
@@ -661,15 +666,15 @@
 \codeinline{fftwmpi2d}. Both libraries display near-linear scaling, except when
 more than one node is used and the performance tapers off.

-As a conclusive remark on scalability, a general rule of thumb should be to use 1D
-domain decomposition when only very few processors are employed. For massive
+As a conclusive remark on scalability, a general rule of thumb should be to use
+1D domain decomposition when only a few processors are employed. For massive
 parallelization, 2D decomposition is required to achieve good speedup without
 being limited by the number of processors at one's disposal. We have thus shown
 that the overall performance of the libraries implemented in \fluidpack{fft} is
 quite good, and there is no noticeable drop in speedup when the Python API is used.
 %
-This benchmark analysis also shows that the fastest FFT implementation depends on
-the size of the arrays and on the hardware.
+This benchmark analysis also shows that the fastest FFT implementation depends
+on the size of the arrays and on the hardware.
 %
 Therefore, an application built upon \fluidpack{fft} can be efficient for
 different sizes and machines.
@@ -691,24 +696,30 @@
 tmp = (kx * vx + ky * vy + kz * vz) * inv_k_square_nozero
 return vx - kx * tmp, vy - ky * tmp, vz - kz * tmp
 \end{minted}
-Note that this first version is ``outplace'', meaning that the result is returned
-by the function and that the input velocity field is not modified.
+Note that this implementation is ``outplace'', meaning that the result is
+returned by the function and that the input velocity field (\codeinline{vx, vy,
+vz}) is unmodified.
 %
-Here, we have already included the \pack{Pythran} annotation in a comment. This
-annotation gives us information about the types used in the code -- all
-arguments are \pack{Numpy} arrays -- but has of course no effect on the
-execution when using only Python.
+The comment above the function definition is a \pack{Pythran} annotation, which
+serves as a type-hint for the variables used within the function --- all
+arguments being \pack{Numpy} arrays in this case.
+%
+\pack{Pythran} needs such an annotation to be able to compile this code into
+efficient machine instructions \emph{via} C++ code.
 %
-\pack{Pythran} needs such annotation to be able to compile this code to
-efficient machine instructions {\it via} a C++ code.
-%
-We see that the array notation is well adapted to express this simple calculus.
+Without \pack{Pythran}, the annotation has no effect and the function, of
+course, executes as ordinary Python-\pack{Numpy} code.
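+%
+To make the role of the annotation concrete, here is a minimal sketch of a
+\pack{Pythran}-annotated module (a hypothetical file \codeinline{mymod.py},
+not part of \fluidpack{fft}):
+\begin{minted}[fontsize=\footnotesize]{python}
+# mymod.py -- hypothetical example of a Pythran-annotated module.
+# pythran export scale(float64[][], float)
+def scale(a, alpha):
+    """Return a scaled copy of a 2D array.
+
+    The export comment above tells Pythran the argument types;
+    plain Python simply ignores it.
+    """
+    return alpha * a
+\end{minted}
+Compiling this file with the command \codeinline{pythran mymod.py} produces a
+native extension module importable in place of the pure-Python file.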
+
+The array notation is well adapted to express this simple vector calculus and
+is less verbose than explicit loops.
 %
-Since we do not need explicit loops with indexing, the computation with Python
-and \pack{Numpy} is not extremely slow. However, even in this quite favorable
-case for \pack{Numpy}, the computation with \pack{Numpy} is not efficient
-because it involves many loops (one per operator) and creations of temporary
-arrays.
+Since explicit loops with indexing are not required, the computation with Python
+and \pack{Numpy} is not extremely slow. Despite this being quite a favourable
+case for \pack{Numpy}, the computation with \pack{Numpy} is not optimized
+because, internally, it involves many loops (one per arithmetic operator) and
+the creation of temporary arrays.
+%av: Have I understood correctly here with the clarifications: "internally" &
+% "arithmetic operator"?

 \begin{figure}[htp]
 \centering
@@ -771,7 +782,7 @@
 (blue bar).
 %
 We do not show the result for \pack{Numba} for the code without explicit loops
-because it is slower than with \pack{Numpy}. We do not show the result for
+because it is slower than \pack{Numpy}. We have also omitted the result for
 \pack{Numpy} for the code with explicit loops because it is very inefficient.
 %
 The timing is performed with the package
@@ -780,15 +791,15 @@
 We see that \pack{Numpy} is approximately three times slower than the Fortran
 implementation (which as already mentioned contains the memory allocation).
 %
-Just using \pack{Pythran} without changing the code (first cyan bar), we nearly
-divide the execution time per two but we are still much slower than the Fortran
-implementation.
+Just using \pack{Pythran} without changing the code (first cyan bar), we save
+nearly 50\% of the execution time, but we are still significantly slower than
+the Fortran implementation.
 %
 We reach the Fortran performance (even slightly faster) only by using
 \pack{Pythran} with the code with explicit loops.
 %
-With this code, \pack{Numba} is nearly as fast (but still slower) without needed
-any type annotation.
+With this code, \pack{Numba} is nearly as fast (but still slower) without
+requiring any type annotation.

 Note that the exact performance differences depend on the hardware, the
 software versions\footnote{Here, we use Python~3.6.4 (packaged by conda-forge),
@@ -804,7 +815,7 @@
 Since allocating memory is expensive and we do not need the non-projected
 velocity field after the call of the function, an evident optimization is to
-put the output in the input arrays. Such ``in-place'' version can be written
+put the output in the input arrays. Such an ``in-place'' version can be written
 with \pack{Numpy} as:
 \begin{minted}[fontsize=\footnotesize]{python}
 # pythran export proj_inplace(
@@ -818,7 +829,7 @@
 vz -= kz * tmp
 \end{minted}

-As for the first version, we have included the \pack{Pythran} annotation.
+As in the first version, we have included the \pack{Pythran} annotation.
 %
 We also consider an ``in-place'' version with explicit loops:
 \begin{minted}[fontsize=\footnotesize]{python}
@@ -856,18 +867,18 @@
 However, \pack{Numpy} is even slower (7.8 times slower than \pack{Pythran}
 with the explicit loops) than for the outplace versions.

-From this short and simple microbenchmark, we can retain four main points:
+From this short and simple microbenchmark, we can infer four main points:

 \begin{itemize}

 \item Memory allocation takes time! In Python, memory management is automatic
 so that we tend to forget it. An important rule to write efficient code is to
-reuse as much as possible the buffers already allocated.
+reuse the buffers already allocated as much as possible.

 \item Even for this very simple case quite favorable for \pack{Numpy} (no indexing
-or slicing), \pack{Numpy} is three to height time slower than the Fortran
-implementations. As long as the execution time is not a problem or that the
-function represents a small part of the total execution time, this is not a
-problem. However, in the other cases, the Python-\pack{Numpy} users need other
-solutions.
+or slicing), \pack{Numpy} is three to eight times slower than the Fortran
+implementations. As long as the execution time is small or the
+function represents a small part of the total execution time, this is not an
+issue. However, in other cases, Python-\pack{Numpy} users need to consider
+other solutions.

 \item \pack{Pythran} is able to speed up the \pack{Numpy} code without explicit
 loops and is as fast as Fortran (even slightly faster in our case) for the
@@ -902,10 +913,10 @@
 code coverage report, ready for upload. It is also possible to run similar
 isolated tests using \pack{tox} or coverage analysis using \pack{coverage} in a
 local machine. Up-to-date build status and coverage status are displayed on the
-landing page of the Bitbucket repository. Instructions on how to run unittests,
+landing page of the Bitbucket repository. Instructions on how to run unit tests,
 coverage and lint tests are included in the documentation.

-We also try to follow a consistent code style as recomended by PEP (Python
+We also try to follow a consistent code style as recommended by PEP (Python
 enhancement proposals) 8 and 257. This is also inspected using lint checkers
 such as \codeinline{flake8} and \codeinline{pylint} among the developers. The
 Python code is regularly cleaned up using the code formatter \codeinline{black}.
@@ -926,7 +937,10 @@
 Python 2.7, 3.5 or above.
 %
 Note that while Cython and Pythran both use the C API of CPython, \fluidpack{fft}
-has been successfully tested on Pypy 6.0.
+has been successfully tested on PyPy 6.0.
+%
+A C++11-supporting compiler, while not mandatory for the C++ API or Cython
+extensions of \fluidpack{fft}, is recommended to be able to use Pythran
+extensions.

 \section*{Dependencies}

@@ -934,9 +948,10 @@
 % compatibility.}

 \begin{itemize}
-\item {\bf Minimum:} \fluidpack{dyn}, \pack{Numpy}.
-\item {\bf Optional:} \pack{Scipy}, \pack{mpi4py}, \pack{Cython} and
-\pack{Pythran}.
+\item {\bf Minimum:} \fluidpack{dyn}, \pack{Numpy}, \pack{Cython}, and
+  \pack{mako} or \pack{Jinja2}; the \libpack{FFTW} library.
+\item {\bf Optional:} \pack{mpi4py} and \pack{Pythran}; the \libpack{P3DFFT},
+  \libpack{PFFT} and \libpack{cuFFT} libraries.
 \end{itemize}

@@ -949,7 +964,7 @@
 \item Pierre Augier (LEGI): creator of the FluidDyn project and of
   \fluidpack{fft}.
 \item Cyrille Bonamy (LEGI): C++ code and some methods in the operator classes.
-\item Ashwin Vishnu Mohanan (KTH): command lines utilities, benchmarks, unittests
+\item Ashwin Vishnu Mohanan (KTH): command-line utilities, benchmarks, unit tests
   and continuous integration, bug fixes, etc.
 \end{itemize}

@@ -1012,7 +1027,8 @@
 \href{https://bitbucket.org/fluiddyn/fluidfft/issues}{Issues page on
 Bitbucket}. Discussions and questions can be aired on instant messaging
 channels in Riot (or equivalent with Matrix protocol) at
-\href{https://riot.im/app/#/room/#fluiddyn-users:matrix.org}{\codeinline{\#fluiddyn-users:matrix.org}}
+\url{%
+  https://matrix.to/#/#fluiddyn-users:matrix.org}
 or via IRC protocol on Freenode at \codeinline{\#fluiddyn-users}.

 \section*{Acknowledgements}

@@ -1036,7 +1052,7 @@
 Cino Del Duca de l'Institut de France, the European Research Council (ERC)
 under the European Union's Horizon 2020 research and innovation program (grant
 agreement No 647018-WATU and Euhit consortium) and the Swedish Research Council
-(Vetenskapsr{\aa}det): 2013-5191.
+(Vetenskapsr{\aa}det): 2013--5191.
 %
 We have also been able to use supercomputers of CIMENT/GRICAD, CINES/GENCI and
 the Swedish National Infrastructure for Computing (SNIC).

diff --git a/fluidsim/Makefile b/fluidsim/Makefile
--- a/fluidsim/Makefile
+++ b/fluidsim/Makefile
@@ -22,8 +22,9 @@
 	evince $(name).pdf &

 vimtex:
-	gvim $(name).tex --servername GVIM &
-	# gnome-terminal -- vim $(name).tex --servername GVIM
+	# gvim $(name).tex --servername GVIM &
+	# xterm -class GVIM -e vim $(name).tex --servername GVIM &
+	NVIM_LISTEN_ADDRESS=GVIM nvim-qt $(name).tex &

 doit: vimtex $(name).pdf
 	zathura $(name).pdf &

diff --git a/fluidsim/fluidsim_paper.tex b/fluidsim/fluidsim_paper.tex
--- a/fluidsim/fluidsim_paper.tex
+++ b/fluidsim/fluidsim_paper.tex
@@ -1200,8 +1200,8 @@
 the \href{https://bitbucket.org/fluiddyn/fluidsim/issues}{Issues page on
 Bitbucket}. Discussions and questions can be aired on instant messaging
 channels in Riot (or equivalent with Matrix protocol) at
-\href{https://riot.im/app/#/room/#fluiddyn-users:matrix.org}{\codeinline{%
-\#fluiddyn-users:matrix.org}}
+\url{%
+  https://matrix.to/#/#fluiddyn-users:matrix.org}
 or via IRC protocol on Freenode at \codeinline{\#fluiddyn-users}.

 \section*{Acknowledgements}

diff --git a/jors.cls b/jors.cls
--- a/jors.cls
+++ b/jors.cls
@@ -120,9 +120,9 @@
 \href{http://fftw.org}{#2}}{%
 \ifstrequal{#2}{MKL}{%
 \href{https://software.intel.com/en-us/mkl}{#2}}{%
-\ifstrequal{#2}{pfft}{%
+\ifstrequal{#2}{PFFT}{%
 \href{https://www-user.tu-chemnitz.de/~potts/workgroup/pippig/software.php.en}{#2}}{%
-\ifstrequal{#2}{p3dfft}{%
+\ifstrequal{#2}{P3DFFT}{%
 \href{http://p3dfft.net}{#2}}{%
 \ifstrequal{#2}{2decomp\&FFT}{%
 \href{http://www.2decomp.org}{#2}}{%