Commit e0a695db authored by Ashwin Vishnu

More scalability

parent 0dcaf56a
...
for installation} are provided in the documentation}
%
which executes the script \codeinline{setup.py}. It first generates the Cython
source code as a pair of \codeinline{.pyx} and \codeinline{.pxd} files
containing a class wrapping its C++ counterpart\footnote{Uses an approach
similar to guidelines titled \href{%
https://cython.readthedocs.io/en/latest/src/userguide/wrapping_CPlusPlus.html}{%
``Using C++ in Cython''}}.
%
...
% Simple!! Few cases. Few clusters. Figures obtained with
% fluidfft-bench-analysis
The scalability of \fluidpack{fft} is measured in terms of the strong scaling
speedup, defined in the present context as:
\begin{equation*}
  S(n_p) = \frac
  {\mathrm{Time\ elapsed\ for\ } N \mathrm{\ iterations\ with\ }
    n_{p,\min} \mathrm{\ processes} \times S(n_{p,\min})}
  {\mathrm{Time\ elapsed\ for\ } N \mathrm{\ iterations\ with\ }
    n_p \mathrm{\ processes}}
  \label{eq:speedup}
\end{equation*}
where $n_{p,\min}$ is the minimum number of processes employed for a specific
array size and hardware. For the fastest of the FFT classes, $S(n_{p,\min})$
is assigned the value $n_{p,\min}$; for the slower classes, $S(n_{p,\min})$ is
set proportionally smaller, so that all classes are compared on a common
scale.
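%
For illustration, this normalization can be sketched in Python (a minimal
sketch with made-up timing values; the names below are ours and not part of
\fluidpack{fft}):
\begin{verbatim}
# Made-up elapsed times (s) for N iterations, keyed by
# (FFT class name, number of processes n_p).
times = {("fftw1d", 2): 14.0, ("fftw1d", 8): 3.9,
         ("p3dfft", 2): 16.5, ("p3dfft", 8): 4.1}
n_p_min = 2

# The fastest class at n_p_min anchors the scale: it is assigned
# S(n_p_min) = n_p_min, slower classes proportionally less.
t_fastest = min(t for (_, n_p), t in times.items() if n_p == n_p_min)

def speedup(cls, n_p):
    """S(n_p) as defined in the equation above."""
    s_min = n_p_min * t_fastest / times[(cls, n_p_min)]
    return times[(cls, n_p_min)] * s_min / times[(cls, n_p)]

print(speedup("fftw1d", 8))  # ~7.2
print(speedup("p3dfft", 8))  # ~6.8
\end{verbatim}
Note that the elapsed time at $n_{p,\min}$ cancels in the product, so every
class is effectively measured on the same absolute scale: $n_{p,\min}$ times
the fastest elapsed time at $n_{p,\min}$, divided by its own elapsed time
with $n_p$ processes.
%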
To measure the strong scaling speedup, the utility
\codeinline{fluidfft-bench} is launched as scheduled jobs on HPC clusters,
thus ensuring that no background processes interfere with the timings. No
hyperthreading was used.
%
We have used $N = 20$ iterations for each run, which yields sufficiently
repeatable results.
%
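As an illustration of what one run measures, the elapsed time for one FFT
class could be recorded along these lines (a sketch only, assuming the
\codeinline{import\_fft\_class} helper and the method and array-creation
names from the \fluidpack{fft} documentation; \codeinline{fluidfft-bench}
itself may differ in its details):
\begin{verbatim}
import time

import numpy as np
from fluidfft import import_fft_class

cls = import_fft_class("fft3d.with_fftw3d")  # one sequential class
o = cls(128, 128, 128)
fieldX = o.create_arrayX(value=1.0)  # physical-space array
fieldK = o.create_arrayK(value=0.0)  # spectral-space array

elapsed = []
for _ in range(20):  # N = 20 iterations per run
    t0 = time.perf_counter()
    o.fft_as_arg(fieldX, fieldK)
    elapsed.append(time.perf_counter() - t0)

print(np.median(elapsed))  # median elapsed time, as in the figures
\end{verbatim}
%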
For a particular choice of array size, every available FFT class is
benchmarked for the two tasks, forward and inverse FFT. Three different
function variants are compared (see the legend in subsequent figures and the
usage sketch after this list):
\begin{itemize}
\item \codeinline{fft\_cpp}, \codeinline{ifft\_cpp} (continuous lines):
benchmark of the C++ function from the C++ code. An array is passed as an
argument to store the result. No memory allocation is performed inside the
functions.
\item \codeinline{fft\_as\_arg}, \codeinline{ifft\_as\_arg} (dashed lines):
benchmark of a Python method from Python. Similar to the C++ code, the second
...
\end{itemize}
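The practical difference between the \codeinline{as\_arg} and return-style
variants can be illustrated from Python (a minimal sketch, assuming the
import helper and method names from the \fluidpack{fft} documentation; the
\codeinline{fft\_cpp} variant is the same operation invoked directly from
C++):
\begin{verbatim}
from fluidfft import import_fft_class

cls = import_fft_class("fft3d.with_fftw3d")
o = cls(128, 128, 128)
fieldX = o.create_arrayX(value=1.0)
fieldK = o.create_arrayK(value=0.0)

# "as_arg" variants: the caller preallocates the output array and
# passes it as the second argument; no allocation inside the call.
o.fft_as_arg(fieldX, fieldK)
o.ifft_as_arg(fieldK, fieldX)

# Return-style variants: a new output array is allocated and
# returned at every call.
fieldK_new = o.fft(fieldX)
fieldX_new = o.ifft(fieldK_new)
\end{verbatim}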
The results\footnote{Saved at \url{%
https://bitbucket.org/fluiddyn/fluidfft-bench-results}} are then plotted using
\codeinline{fluidfft-bench-analysis}.
\paragraph{Benchmarks on Occigen}
\href{https://www.top500.org/system/178465}{Occigen} is a GENCI-CINES HPC
cluster that uses Intel Xeon CPU E5--2690 v3 (2.6 GHz) processors with 24
cores per node. The installation was done using the Intel C++ 17.2 compiler,
Python 3.6.5, and OpenMPI 2.0.2.
\begin{figure}[htp]
\centering
\includegraphics[width=\linewidth]{tmp/fig_occigen_384x1152x1152}
\caption{Speedup computed from the median of the elapsed times for 3D fft
(384$\times$1152$\times$1152, left: fft and right: ifft) on Occigen.}%
\label{fig:occigen384x1152x1152}
\end{figure}
Fig.~\ref{fig:occigen384x1152x1152} demonstrates the strong scaling
performance for an array of size $384\times1152\times1152$.
The fastest methods are \codeinline{fftw1d} (which is limited to 96 cores) and
\codeinline{p3dfft}.
The benchmark is not sufficiently accurate to measure the cost of calling the
functions from Python (difference between continuous and dashed lines,
...
\label{fig:occigen1152x1152x1152}
\end{figure}
For this resolution, \codeinline{fftw1d} is again the fastest method when
using only a few cores, but it cannot be used for more than 192 cores. When
more cores are used, the fastest library is again \codeinline{p3dfft}.
\paragraph{Benchmarks on Beskow}
...