Commit de4af64c authored by Ashwin Vishnu

Bench legi

parent 77e05f2e
@@ -571,6 +571,14 @@
E5-2695 v4 (2.1 GHz) processors with 36 cores per node. The installation was
done using the Intel C++ 18 compiler, Python 3.6.5 and cray-mpich 7.0.4.
\begin{figure}[htp!]
\centering
\includegraphics[width=\linewidth]{tmp/fig_beskow_384x1152x1152}
\caption{Speedup computed from the median of the elapsed times for 3D fft
(384$\times$1152$\times$1152, left: fft and right: ifft) on Beskow.}
\label{fig:beskow384x1152x1152}
\end{figure}
In Fig.~\ref{fig:beskow384x1152x1152}, the strong scaling results for the
cuboidal array can be observed. In this set of results we have also included
intra-node scaling, for which there is no latency introduced due to typically
@@ -583,9 +591,9 @@
A striking difference when compared with Fig.~\ref{fig:occigen384x1152x1152} is
that \codeinline{fftw1d} is not the fastest of the 4 classes on this machine.
One can only speculate that, this could be a consequence of the use of a
different MPI library and hardware has been employed. This also emphasises the
One can only speculate that this could be a consequence of the differences in
the MPI library and hardware employed. This also emphasises the
need to perform benchmarks when using an entirely new configuration.
\begin{figure}[htp!]
\centering
@@ -588,15 +596,7 @@
need to perform benchmarks when using an entirely new configuration.
\begin{figure}[htp!]
\centering
\includegraphics[width=\linewidth]{tmp/fig_beskow_384x1152x1152}
\caption{Speedup computed from the median of the elapsed times for 3D fft
(384$\times$1152$\times$1152, left: fft and right: ifft) on Beskow.}
\label{fig:beskow384x1152x1152}
\end{figure}
\begin{figure}[htp!]
\centering
\includegraphics[width=\linewidth]{tmp/fig_beskow_1152x1152x1152}
\caption{Speedup computed from the median of the elapsed times for 3D fft
(1152$\times$1152$\times$1152, left: fft and right: ifft) on Beskow.}
@@ -613,5 +613,10 @@
\paragraph{Benchmarks on a LEGI cluster}
Let us also analyse how \fluidpack{fft} scales on a computing cluster
maintained at an institutional level, named Cluster8 at \href{%
http://www.legi.grenoble-inp.fr}{LEGI}, Grenoble. This cluster is equipped with
Intel Xeon CPU E5-2650 v3 (2.3 GHz) processors with 20 cores per node, and
\fluidpack{fft} was installed with a toolchain comprising gcc 4.9.2, Python
3.6.4 and OpenMPI 1.6.5 as key software components.
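For reference, the elapsed times reported in the following figures correspond
to forward and inverse transforms performed through the Python API. A minimal
sketch of such a call sequence is given below; the module path
\codeinline{fft3d.mpi_with_fftwmpi3d} and the methods
\codeinline{get_shapeX_loc}, \codeinline{fft} and \codeinline{ifft} are assumed
here as described in the \fluidpack{fft} documentation, so this is an
illustration rather than the exact benchmark script.
\begin{verbatim}
# Sketch only: one forward and one inverse 3D transform, to be launched
# under mpirun/srun. API names (import_fft_class, get_shapeX_loc, fft,
# ifft) are assumed from the fluidfft documentation.
from time import perf_counter

import numpy as np
from fluidfft import import_fft_class

cls = import_fft_class("fft3d.mpi_with_fftwmpi3d")
o = cls(320, 640, 640)             # global shape of the benchmark array

u = np.ones(o.get_shapeX_loc())    # local physical-space block

t0 = perf_counter()
u_fft = o.fft(u)                   # forward transform
u_back = o.ifft(u_fft)             # inverse transform
print("elapsed (s):", perf_counter() - t0)
\end{verbatim}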
@@ -617,5 +622,5 @@
\begin{figure}[htp]
\begin{figure}[htp!]
\centering
\includegraphics[width=\linewidth]{tmp/fig_legi_cluster8_320x640x640}
\caption{Speedup computed from the median of the elapsed times for 3D fft
@@ -623,4 +628,12 @@
\label{fig:cluster8:320x640x640}
\end{figure}
In Fig.~\ref{fig:cluster8:320x640x640} we observe that the strong scaling for
an array shape of $320\times640\times640$ is not far from the ideal linear
trend. The fastest library is \codeinline{fftwmpi3d} for this case. As expected
for FFT algorithms, there is a slight drop in speedup when the array size is
not exactly divisible by the number of processes, i.e.~with 12 processes. The
speedup declines rapidly when more than one node is employed (above 20
processes). This effect can be attributed to the latency introduced by
inter-node communications, a hardware limitation.
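To make the divisibility argument concrete, the following back-of-the-envelope
sketch (illustrative only, not \fluidpack{fft} code) distributes the 320 planes
of the first axis over different numbers of processes using a simple block
distribution, similar in spirit to a slab decomposition, and reports the bound
on the speedup set by the most loaded process.
\begin{verbatim}
# Illustrative sketch: load imbalance of a block (slab-like) distribution
# of the first axis (320 planes) over nproc processes.
import math

def slab_sizes(n0, nproc):
    """Planes owned by each process for a simple block distribution."""
    block = math.ceil(n0 / nproc)
    sizes = []
    remaining = n0
    for _ in range(nproc):
        sizes.append(max(min(block, remaining), 0))
        remaining -= block
    return sizes

for nproc in (10, 12, 20):
    sizes = slab_sizes(320, nproc)
    # The largest slab bounds the achievable parallel speedup.
    print(nproc, "processes -> max speedup ~", round(320 / max(sizes), 2))
\end{verbatim}
With 12 processes the most loaded process holds 27 planes instead of the ideal
$320/12\approx26.7$, which caps the speedup slightly below 12 and is consistent
with the slight drop observed in the figure.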
@@ -626,5 +639,5 @@
\begin{figure}[htp]
\begin{figure}[htp!]
\centering
\includegraphics[width=\linewidth]{tmp/fig_legi_cluster8_2160x2160}
\caption{Speedup computed from the median of the elapsed times for 2D fft
@@ -632,6 +645,18 @@
\label{fig:cluster8:2160x2160}
\end{figure}
We have also analysed the performance of the 2D MPI-enabled FFT classes on the
same machine using an array of shape $2160\times2160$ in
Fig.~\ref{fig:cluster8:2160x2160}. The fastest library is
\codeinline{fftwmpi2d}. Both libraries display near-linear scaling, except when
more than one node is used, where the performance tapers off.
As a final remark on scalability, a general rule of thumb should be to use 1D
domain decomposition when only a few processors are employed. For massive
parallelization, 2D decomposition is required to achieve good speedup without
being limited by the number of processors at one's disposal. We have thus shown
that the overall performance of the libraries implemented in \fluidpack{fft} is
quite good, and that there is no noticeable drop in speedup when the Python API
is used.
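As an illustration of this rule of thumb (a back-of-the-envelope sketch, not a
description of \fluidpack{fft} internals): with a 1D (slab) decomposition the
number of processes that can receive work is bounded by the first axis of the
array, whereas a 2D (pencil) decomposition is bounded by the product of the
first two axes.
\begin{verbatim}
# Sketch: upper bound on the number of MPI processes that can be kept
# busy under 1D (slab) and 2D (pencil) decompositions, for the array
# shapes benchmarked in this section.
shapes = [(320, 640, 640), (384, 1152, 1152), (1152, 1152, 1152)]

for n0, n1, n2 in shapes:
    max_procs_slab = n0          # at least one whole plane per process
    max_procs_pencil = n0 * n1   # at least one pencil per process
    print(f"{n0}x{n1}x{n2}: slab <= {max_procs_slab}, "
          f"pencil <= {max_procs_pencil} processes")
\end{verbatim}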
\subsubsection*{Microbenchmark of critical ``operator'' functions}