Fast Fourier transforms (FFT) are useful for many applications, such as signal
processing, numerical simulations and scientific computing in general. There
are many good libraries to perform FFT, in particular the \emph{de-facto}
standard FFTW. A new challenge is to efficiently scale FFT on clusters with the
memory distributed over a large number of cores. This is imperative to solve
big problems faster and when the arrays do not fit in the memory of a single
computational node. A problem is that, for one-dimensional FFT, all the data
has to be located in the memory of the process that performs the FFT, so a lot
of communication between processes is needed for 2D and 3D FFT.
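To see why, note that a multidimensional FFT factorizes into successive
one-dimensional FFTs along each axis, and each one-dimensional transform needs
a full line of data in local memory; in a distributed setting, the array must
therefore be transposed between the passes. A minimal NumPy sketch of this
factorization (ours, for illustration only; not code from any of the libraries
discussed here):
\begin{minted}[fontsize=\footnotesize]{python}
import numpy as np

a = np.random.rand(4, 8)
# 1D FFTs along the rows, then along the columns...
step1 = np.fft.fft(a, axis=1)
out = np.fft.fft(step1, axis=0)
# ... give the same result as a full 2D FFT.
assert np.allclose(out, np.fft.fft2(a))
\end{minted}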
There are two strategies to distribute the memory, the 1D (or \emph{slab})
decomposition and the 2D (or \emph{pencil}) decomposition. The 1D decomposition
is simpler, but it has an important limitation in terms of the number of MPI
processes that can be used. In contrast, this limitation is overcome by the 2D
decomposition.
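As a toy illustration (ours, not code from any FFT library): for a
three-dimensional array of shape $(N, N, N)$, the slab decomposition splits one
axis and thus cannot use more than $N$ processes, whereas the pencil
decomposition splits two axes and can use up to $N^2$ processes:
\begin{minted}[fontsize=\footnotesize]{python}
N = 512  # global array of shape (N, N, N)

# Slab decomposition: split one axis over P processes (P <= N)
P = 128
slab_shape = (N // P, N, N)

# Pencil decomposition: split two axes over Pr * Pc processes
# (Pr * Pc <= N * N)
Pr = Pc = 64
pencil_shape = (N // Pr, N // Pc, N)

print(slab_shape, pencil_shape)
\end{minted}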
Some of the well-known libraries are written in C, C++ and Fortran. FFTW
supports MPI using 1D decomposition and hybrid parallelism using OpenMP. Other
libraries now implement the 2D decomposition: pfft, p3dfft, 2decomp\&FFT and
so on. These libraries rely on MPI for the communications between processes,
are optimized for supercomputers and scale well to hundreds of thousands of
cores. However, since there is no common API, it is not simple to write
applications that are able to use these libraries and to compare their
performances. As a result, developers are met with the hard decision of
choosing a library before the code is implemented.
Apart from CPU-based parallelism, general purpose computing on graphics
processing units (GPGPU) is also gaining traction in scientific computing.
Scalable GPGPU frameworks such as OpenCL and CUDA have emerged, with their own
FFT implementations, namely clFFT and cuFFT respectively.

As explained in \citet{fluiddyn}, Python can easily leverage these libraries
through compiled extensions. For a Python developer, the following packages
follow this approach to perform FFT:
\begin{outline}
\1 sequential FFT, using:
\2 \pack{numpy.fft} and \pack{scipy.fftpack}, which are essentially
C and Fortran extensions of the FFTPACK library.
\2 \pack{pyFFTW}, which wraps the FFTW library and provides interfaces similar
to the \pack{numpy.fft} and \pack{scipy.fftpack} implementations.
\2 \pack{mkl\_fft}, which wraps Intel's MKL library and exposes Python
interfaces to act as drop-in replacements for \pack{numpy.fft} and
\pack{scipy.fftpack}.
\1 FFT with MPI, using:
\2 \pack{mpiFFT4py} and \pack{mpi4py-fft}, built on top of \pack{pyFFTW} and
\pack{numpy.fft}.
\2 \pack{pfft-python}, which provides extensions for the pfft library.
\1 FFT with GPGPU, using:
\2 \pack{Reikna}, a pure Python package which depends on \pack{PyCUDA}
and \pack{PyOpenCL}.
\2 \pack{pytorch-fft}, which provides C extensions for cuFFT, meant to work
with \pack{PyTorch} arrays.
\end{outline}
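As an illustration of the drop-in style mentioned above, a minimal sketch of
ours (assuming \pack{pyFFTW} is installed; not code taken from these packages'
documentation):
\begin{minted}[fontsize=\footnotesize]{python}
import numpy as np
# pyFFTW exposes an interface compatible with numpy.fft
import pyfftw.interfaces.numpy_fft as fftw_fft

a = np.random.rand(64, 64)
# Same call signature and result as numpy.fft.fft2, computed by FFTW
assert np.allclose(fftw_fft.fft2(a), np.fft.fft2(a))
\end{minted}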
These tools are very useful, but they suffer from some drawbacks:
\begin{itemize}
% \item Nearly nothing for parallel FFT with distributed memory (using mpi),
% \item No GPU-based FFT,
\item No effort so far to consolidate all possible FFT libraries, whether
sequential, MPI-based or GPGPU-based, under a single package with similar
syntax.
\item Quite complicated even for the simplest use case scenarios. To
understand how to use them, a novice user has to read at least the FFTW
documentation.
\item No benchmarks between libraries and between the Python solutions and
solutions based only on a compiled language (such as C, C++ or Fortran).
\item Provides just the FFT and inverse FFT functions, with no associated
linear algebra operators.
\end{itemize}
The Python package \fluidpack{fft} fills this gap by providing C++ classes and
their Python wrapper classes for performing simple and common tasks with different
FFT libraries. It has been written to make things easy while being as efficient as
possible. It provides:
\begin{itemize}
\item tests,
\end{itemize}
In the present article, we shall start by describing the implementation of
\fluidpack{fft}, including its design aspects and the code organization.
Thereafter, we shall compare the performance of the different classes in
\fluidpack{fft} on two high performance computing clusters, and describe
microbenchmarks of the critical functions. Finally, we show how we test and
maintain the quality of the code base through continuous integration, and
mention some possible applications of \fluidpack{fft}.
\section*{Implementation and architecture}
% \textcolor{blue}{How the software was implemented, with details of the
% architecture where relevant. Use of relevant diagrams is appropriate. Please
% also describe any variants and associated implementation differences.}
Similar to other packages in the FluidDyn project, \fluidpack{fft} is also
designed with an object-oriented approach. Thus, the FFT and inverse FFT
functions are accessed as method calls on an object of a class. The advantage
is the improvement in ease of use, by making use of methods attached to the
class. This is in contrast with the approach taken by \pack{numpy.fft} and
\pack{scipy.fftpack}, wherein the user has to figure out from the
documentation how to prepare the input arrays and how to use the returned
values.
As a short example, let us try to calculate the gradient of a plane sine wave
using spectral methods, mathematically described as follows:
\begin{align*}
u(x,y) &=
\sin(x + y) &\forall x,y \in \left[0, 2\pi \right] \\
\hat u(k_x,k_y) &=
\frac{1}{(2\pi)^2}
\int_0^{2\pi}\int_0^{2\pi}
u(x,y) \exp(-ik_x x - ik_y y) \,dx \,dy \\
\nabla u(x,y) &=
\sum_{k_x} \sum_{k_y}
i\mathbf{k} \,
\hat u(k_x,k_y) \exp(ik_x x + ik_y y)
\end{align*}
where $k_x$ and $k_y$ represent the wavenumbers corresponding to the x- and
y-directions, and $\mathbf{k} = (k_x, k_y)$ is the wavenumber vector.
The equivalent pseudo-spectral implementation in \fluidpack{fft} is as follows:
\begin{minted}[fontsize=\footnotesize]{python}
import numpy as np

from fluidfft import import_fft_class
from fluidfft.fft2d.operators import OperatorsPseudoSpectral2D

nx = ny = 100
lx = ly = 2 * np.pi

FFTClass = import_fft_class('fft2d.with_fftw2d')
# Create an FFT object
o = FFTClass(nx, ny)
# And an operator object
oper = OperatorsPseudoSpectral2D(nx, ny, lx, ly, fft='fft2d.with_fftw2d')

# Initialize the physical field u(x, y) = sin(x + y)
u = np.sin(oper.XX + oper.YY)

# Forward transform, gradient in spectral space, inverse transforms
u_fft = o.fft(u)
px_u_fft, py_u_fft = oper.gradfft_from_fft(u_fft)
px_u = o.ifft(px_u_fft)
py_u = o.ifft(py_u_fft)

grad_u = (px_u, py_u)
\end{minted}
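As a quick sanity check of the example above (ours, not part of the original
listing): the analytical gradient of $\sin(x+y)$ is $(\cos(x+y), \cos(x+y))$,
and spectral differentiation of a fully resolved mode is exact to machine
precision:
\begin{minted}[fontsize=\footnotesize]{python}
# Both components of the gradient should equal cos(x + y)
assert np.allclose(px_u, np.cos(oper.XX + oper.YY))
assert np.allclose(py_u, np.cos(oper.XX + oper.YY))
\end{minted}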
A parallelized version of the above code will work out of the box, simply by
replacing the FFT class with an MPI-based FFT class, e.g.
\codeinline{fft2d.with\_fftwmpi2d}. Even if one finds the available methods in
the operator class lacking, one can use, for instance, the wavenumber arrays
\codeinline{oper.KX} and \codeinline{oper.KY} to create a new function, as
sketched below. Arguably, a similar implementation with other available
packages would require know-how on how the FFT arrays are allocated in the
memory, normalized, decomposed in parallel and so on. A more detailed
introduction on how to use \fluidpack{fft} and the available functions can be
found in the
tutorials\footnote{\url{https://fluidfft.readthedocs.io/en/latest/tutorials.html}}.
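For example, a function computing the Laplacian in spectral space, a minimal
sketch of ours built on the objects defined in the example above (this is not
a method provided by \fluidpack{fft}):
\begin{minted}[fontsize=\footnotesize]{python}
def laplacian_fft_from_fft(u_fft):
    """Compute the FFT of the Laplacian of u from u_fft.

    In spectral space, each derivative brings down a factor i*k,
    so the Laplacian is a multiplication by -(k_x**2 + k_y**2).
    """
    return -(oper.KX**2 + oper.KY**2) * u_fft

lap_u = o.ifft(laplacian_fft_from_fft(u_fft))
\end{minted}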
\subsection*{Code organization}
These classes unify the supported libraries by sharing method names.
\rule{\textwidth}{1pt}
{\bf Copyright Notice} \\
Authors who publish with this journal agree to the following terms: \\
Authors retain copyright and grant the journal right of first publication with