# HG changeset patch
# User Ashwin Vishnu <avmo@kth.se>
# Date 1524242730 -7200
# Fri Apr 20 18:45:30 2018 +0200
# Node ID d5a42a331888b85f07585ecb059bd25f3c93014d
# Parent 42e09421a6e75af15dd4b2a1aa589b41fbb35bba
Examples for API

diff --git a/fluidfft/fluidfft_paper.tex b/fluidfft/fluidfft_paper.tex
--- a/fluidfft/fluidfft_paper.tex
+++ b/fluidfft/fluidfft_paper.tex
@@ -4,6 +4,8 @@
 \documentclass{../jors}
 
+
+
 \begin{document}
 
 {\bf Software paper for submission to the Journal of Open Research Software} \\
@@ -82,17 +84,17 @@
 % articles. A short comparison with software which implements similar
 % functionality should be included in this section.
 }
-Fast Fourier transforms (FFT) are useful for many applications, such as signal
-processing, numerical simulations and scientific computing in general. There
-are many good libraries to perform FFT, in particular the \emph{de-facto}
-standard FFTW. A new challenge is to efficiently scale FFTon clusters with the
-memory distributed over a large number of cores. This is imperative to solve
-big problems faster and when the arrays do not fit in the memory of single
-computational node. A problem is that for one-dimensional FFT, all the data has
-to be located in the memory of the on clusters with the memory distributed over
-a large number of cores. A problem is that for one-dimensional FFT, all the
-data has to be located in the memory of the process that perform the FFT, so a
-lot of communication between processes are needed for 2D and 3D FFT.
+Fast Fourier transforms (FFT) are useful for many applications, such as signal
+processing, numerical simulations and scientific computing in general. There are
+many good libraries to perform FFT, in particular the \emph{de-facto} standard
+FFTW. A new challenge is to efficiently scale FFT on clusters with the memory
+distributed over a large number of cores. This is imperative both to solve big
+problems faster and to handle arrays that do not fit in the memory of a single
+computational node. A problem is that for a one-dimensional FFT, all the data has
+to be located in the memory of the process that performs the FFT, so a lot of
+communication between processes is needed for 2D and 3D FFT.
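+
+For instance, denoting by $a_{n_0 n_1}$ a two-dimensional array of shape
+$N_0 \times N_1$, the 2D transform factorizes into successive one-dimensional
+transforms along each axis,
+\[
+\hat{a}_{k_0 k_1} = \sum_{n_0=0}^{N_0-1} e^{-2\pi i\, k_0 n_0 / N_0}
+\Big( \sum_{n_1=0}^{N_1-1} e^{-2\pi i\, k_1 n_1 / N_1}\, a_{n_0 n_1} \Big),
+\]
+so the transforms along an axis that is distributed across processes can only be
+performed after a global redistribution (transpose) of the data.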
 
 There are two strategies to distribute the memory, the 1D (or \emph{slab})
 decomposition and the 2D (or \emph{pencil}) decomposition. The 1D decomposition
@@ -100,29 +102,30 @@
 important limitation in terms of number of MPI processes that can be used. In
 contrast, this limitation is overcome by the 2D decomposition.
 
-Some of the well-known libraries are written in C, C++ and Fortran. FFTW
-supports MPI using 1D decomposition and hybrid parallelism using OpenMP. Other
-libraries, now implement the 2D decomposition: pfft, p3dfft, 2decomp\&FFT and
-so on. These libraries rely on MPI for the communications between processes,
-are optimized for supercomputers and scales well to hundreds of thousands of
-cores. However, since there is no common API, it is not simple to write
-applications that are able to use these libraries and to compare their
-performances. As a result, developers are met with the hard decision to choose
-a library before the code is implemented.
+Some of the well-known libraries are written in C, C++ and Fortran. \libpack{FFTW}
+supports MPI using 1D decomposition and hybrid parallelism using OpenMP. Other
+libraries now implement the 2D decomposition: \libpack{pfft}, \libpack{p3dfft},
+\libpack{2decomp\&FFT} and so on. These libraries rely on MPI for the
+communication between processes, are optimized for supercomputers and scale well
+to hundreds of thousands of cores. However, since there is no common API, it is
+not simple to write applications that are able to use these libraries and to
+compare their performance. As a result, developers face the hard decision of
+choosing a library before the code is implemented.
 
 Apart from CPU-based parallelism, general purpose computing on graphical
 processing units (GPGPU) is also gaining traction in scientific computing.
 Scalable libraries written for GPGPU such as OpenCL and CUDA have emerged, with
-their own FFT implementations, namely clFFT and cuFFT respectively.
+their own FFT implementations, namely \libpack{clFFT} and \libpack{cuFFT}
+respectively.
 
-As explained in \citet{fluiddyn}, Python can easily leverage these libraries
+As explained in \citet{fluiddyn}, Python can easily link these libraries
 through compiled extensions. For a Python developer, the following packages
-follow this approach to perform FFT:
+leverage this approach to perform FFT:
 
 \begin{outline}
 \1 sequential FFT, using:
 \2 \pack{numpy.fft} and \pack{scipy.fftpack} which are essentially
-    C and Fortran extensions for FFTPACK library.
+    C and Fortran extensions for the \libpack{FFTPACK} library.
 \2 \pack{pyFFTW} which wraps FFTW library and provides interfaces similar to
    the \pack{numpy.fft} and \pack{scipy.fftpack} implementations.
 \2 \pack{mkl\_fft}, which wraps Intel's MKL library and exposes python
@@ -191,15 +194,82 @@
 % \textcolor{blue}{How the software was implemented, with details of the
 % architecture where relevant. Use of relevant diagrams is appropriate. Please
 % also describe any variants and associated implementation differences.}
+The two major design goals of \fluidpack{fft} are:
+\begin{itemize}
+  \item to support multiple FFT libraries under the same umbrella and to expose a
+    common interface for both C++ and Python code development.
+  \item to keep the design of the interfaces as human-centric and easy to use as
+    possible, without sacrificing performance.
+\end{itemize}
 
-Similar to other packages in the FluidDyn project, \fluidpack{fft} also is
-designed with an object-oriented approach. Thus to access the FFT and inverse
-FFT functions calls are made through an object of class. The advantage is the
-improvement in ease of use, by making use of methods attached to the class.
+Both the C++ and Python APIs provided by \fluidpack{fft} currently support linking
+with the \libpack{FFTW} (with and without MPI and OpenMP support enabled),
+\libpack{MKL}, \libpack{pfft}, \libpack{p3dfft} and \libpack{cuFFT} libraries. The
+classes in \fluidpack{fft} offer an API for performing double-precision
+real-to-complex FFT and complex-to-real inverse FFT, along with additional helper
+functions.
+
+\subsection*{C++ API}
+The C++ API is implemented as a hierarchy of classes as shown in Fig. 1.
+% todo:
+Through inheritance, the classes share the same function names and syntax.
+
+Let us illustrate with a trivial example, in which we initialize the FFT with a
+random physical array and perform \codeinline{fft} and \codeinline{ifft}
+operations.
+\begin{minted}[fontsize=\footnotesize]{cpp}
+#include <iostream>
+using namespace std;
+
+#include <fft3dmpi_with_fftwmpi3d.h>
+// #include <fft3dmpi_with_p3dfft.h>
+#include <mpi.h>
+
+int main(int argc, char **argv)
+{
+  int N0 = 32, N1 = 32, N2 = 32;
+  // MPI-related
+  int nb_procs;
+  MPI_Init(&argc, &argv);
+  MPI_Comm_size(MPI_COMM_WORLD, &nb_procs);
+
+  myreal* array_X;
+  mycomplex* array_K;
+
+  FFT3DMPIWithFFTWMPI3D o(N0, N1, N2);
+  // FFT3DMPIWithP3DFFT o(N0, N1, N2);
+
+  o.init_array_X_random(array_X);  // Initialize the physical array with random values
+  o.alloc_array_K(array_K);        // Allocate the spectral array in memory
+  o.fft(array_X, array_K);         // Forward FFT
+  o.ifft(array_K, array_X);        // Inverse FFT
+  MPI_Finalize();
+  return 0;
+}
+\end{minted}
+
+As suggested in the comments, to switch the FFT library, the user only needs to
+change the header file and the class name. Another advantage is that the user
+does not need to worry about the domain decomposition to declare and allocate the
+arrays. A few more helper functions are available with the FFT classes, such as
+functions to compute the mean value and energies in the array. These are
+illustrated with examples in the documentation.\footnote{%
+\url{https://fluidfft.readthedocs.io/en/latest/examples/cpp.html}}
+
+Detailed information related to the C++ classes and their member functions can be
+found in the online documentation.\footnote{%
+  \url{https://fluidfft.readthedocs.io/en/latest/doxygen/index.html}}
+
+\subsection*{Python API}
+Similar to other packages in the FluidDyn project, \fluidpack{fft} also uses an
+object-oriented approach to implement the FFT classes.
+%
 This is in contrast with the approach taken by \pack{numpy.fft} and \pack
-{scipy.fftpack}, wherein the user has to figure out from the documentation
-how to design the input values and use the return values provided to the
-class.
+{scipy.fftpack}, wherein the user has to figure out from the documentation how to
+provide the input values and how to use the return values of the FFT and inverse
+FFT functions. The Python API wraps all the functionality of its C++ counterpart
+and offers a richer experience through an accompanying operator class.
 
 As a short example, let us try to calculate the gradient of a plane sine-wave
 using spectral methods, mathematically described as follows:
@@ -230,7 +300,6 @@
 nx = ny = 100
 lx = ly = 2 * np.pi
-
 FFTClass = import_fft_class('fft2d.with_fftw2d')
 # Create an FFT object
 o = FFTClass(nx, ny)
 # And an operator object
@@ -256,7 +325,8 @@
 found in the
 tutorials\footnote{\url{https://fluidfft.readthedocs.io/en/latest/tutorials.html}}.
 
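+To complement the 2D example above, a minimal sketch of the basic 3D usage in
+Python could read as follows (the method string, the \codeinline{get\_shapeX\_loc}
+helper and the exact call signatures are assumptions based on the 2D example and
+on the C++ API; the tutorials contain working examples):
+\begin{minted}[fontsize=\footnotesize]{python}
+import numpy as np
+from fluidfft import import_fft_class  # import path assumed
+
+# Select a 3D MPI-based class (assumed method string); run with, e.g.,
+# mpirun -np 2 python this_script.py
+FFTClass = import_fft_class('fft3d.mpi_with_fftwmpi3d')
+o = FFTClass(32, 32, 32)
+
+# Work only with the local (per-process) part of the physical array
+arr_X = np.ones(o.get_shapeX_loc())  # assumed helper returning the local shape
+arr_K = o.fft(arr_X)   # forward FFT, assumed to return the local spectral array
+arr_X = o.ifft(arr_K)  # inverse FFT
+\end{minted}
+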
-Let us
+Let us now turn our attention to how the code is organized. We shall also describe
+how the source code is built and linked with the supported FFT libraries.
 
 \subsection*{Code organization}
 These classes unify the supported libraries by sharing method
diff --git a/jors.cls b/jors.cls
--- a/jors.cls
+++ b/jors.cls
@@ -87,6 +87,8 @@
 
 \usepackage{xspace}
 
+\usepackage{etoolbox}
+
 %% Set source code listings style
 \lstset{basicstyle=\ttfamily, language=Python}
 
@@ -109,8 +111,32 @@
 \newcommand{\numpy}{\codeinline{numpy}\xspace}
 \newcommand{\scipy}{\codeinline{scipy}\xspace}
 
-\newcommand{\pack}[1]{\codeinline{#1}}
+\newcommand{\pack}[1]{\codeinline{#1}\xspace}
 
+% Link known library names to their homepages; the optional first argument
+% overrides or provides the url
+\newcommand{\libpack}[2][]{%
+  \ifstrequal{#2}{FFTW}{%
+    \href{http://fftw.org}{#2}}{%
+  \ifstrequal{#2}{MKL}{%
+    \href{https://software.intel.com/en-us/mkl}{#2}}{%
+  \ifstrequal{#2}{pfft}{%
+    \href{https://www-user.tu-chemnitz.de/~potts/workgroup/pippig/software.php.en}{#2}}{%
+  \ifstrequal{#2}{p3dfft}{%
+    \href{http://p3dfft.net}{#2}}{%
+  \ifstrequal{#2}{2decomp\&FFT}{%
+    \href{http://www.2decomp.org}{#2}}{%
+  \ifstrequal{#2}{cuFFT}{%
+    \href{https://docs.nvidia.com/cuda/cufft/index.html}{#2}}{%
+  \ifstrequal{#2}{clFFT}{%
+    \href{https://clmathlibraries.github.io/clFFT/}{#2}}{%
+  \ifstrequal{#2}{FFTPACK}{%
+    \href{http://www.netlib.org/fftpack}{#2}}{%
+  \ifstrempty{#1}{%
+    #2%
+  }{%
+    \href{#1}{#2}}%
+  }}}}}}}}\xspace % Close the if-else-if tree above!
+}
 
 % \newcommand{\annotate}[1]{\marginpar{\textcolor{red}{#1}}}
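+
+% Example usage of \libpack (illustrative; the name "somefft" and the url below
+% are placeholders): \libpack{FFTW} links to the FFTW homepage, an unknown name
+% such as \libpack{somefft} is printed without a link, and
+% \libpack[https://example.org]{somefft} links to the url given in the optional
+% argument.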