Commit 6aaab2e1 authored by Ashwin Vishnu

Finish rewriting scalability section

parent 18295d50
# -*- coding: future_fstrings -*-
"""Load and plot benchmarks (:mod:`fluidsim.util.console.bench_analysis`)
=========================================================================
@@ -19,7 +19,7 @@
figy = 2 # 6 / 2.54
root = Path("/tmp") / getpass.getuser() / "fluidsim-bench-results" / "profiles"
if not os.path.exists(str(root)):  # convert the Path to str before the os.path check
raise FileNotFoundError("Run sync.py")
patterns2d = [
@@ -319,7 +319,7 @@
Once initialized, the ``public'' (not hidden) API does not allow adding new
parameters to this object; only modifications are permitted.\footnote{Example
on modifying the parameters for a simple simulation:
\href{https://fluidsim.readthedocs.io/en/latest/examples/running-simul-onlineplot.html}{%
fluidsim.readthedocs.io/en/latest/examples/running-simul-onlineplot.html}}
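For instance, a parameters object can be modified but not extended through the
public API; the following sketch illustrates this behaviour (the attribute
\codeinline{nu\_2} and the import path are typical examples and may differ for
other solvers):

\begin{verbatim}
from fluidsim.solvers.ns2d.solver import Simul

# Create the default parameters container of the solver
params = Simul.create_default_params()

# Modifying an existing parameter is permitted
params.nu_2 = 1e-3

# Adding an unknown parameter is rejected by the container
try:
    params.my_new_parameter = 0
except AttributeError as error:
    print("cannot add new parameters:", error)
\end{verbatim}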
@@ -427,7 +427,7 @@
\2 \codeinline{sim.output.spatial\_means}: mean quantities such as energy,
enstrophy, forcing power, dissipation.
%
\2 \codeinline{sim.output.spectra}: energy spectra as line plots (i.e.\ as
functions of the modulus or a component of the wavenumber).
%
\2 \codeinline{sim.output.spect\_energy\_budg}: spectral energy budget by
@@ -631,7 +631,7 @@
Navier-Stokes solver shows that the majority of the time is spent in inverse and
forward FFT calls (\codeinline{ifft\_as\_arg} and \codeinline{fft\_as\_arg}). For
the sequential case, approximately 0.14\% of the time is spent in pure Python
functions, i.e.\ functions not built using \pack{Cython} and \pack{Pythran}.
%
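A profile of this kind can be reproduced with the standard library profiler
wrapped around the time-stepping call (a sketch, assuming a \codeinline{sim}
object created as \codeinline{sim = Simul(params)}):

\begin{verbatim}
import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()
sim.time_stepping.start()  # run the time loop under the profiler
profiler.disable()

# Print the ten most expensive calls, sorted by cumulative time
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
\end{verbatim}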
\pack{Cython} extensions are responsible for interfacing with FFT operators and
also for the time-step algorithm. \pack{Pythran} extensions are used to translate
@@ -685,10 +685,10 @@
used for the sequential mode differ from the parallel mode, especially the FFT
class. Speedup is formally defined here as:
\begin{equation}
S_\alpha(n_p) = \frac
{[\mathrm{Time\ elapsed\ for\ } N \mathrm{\ iterations\ with\ } n_{p,\min} \mathrm{\ processes}]_{\mathrm{fastest}}
\times n_{p,\min}}
{[\mathrm{Time\ elapsed\ for\ } N \mathrm{\ iterations\ with\ } n_p \mathrm{\ processes}]_\alpha}
\label{eq:speedup}
\end{equation}
where $n_{p,\min}$ is the minimum number of processes employed for a specific
array size and hardware, $\alpha$ denotes the FFT class used and ``fastest''
corresponds to the fastest result among various FFT classes.
%
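In terms of code, the definition above reduces to a small helper function of
the following kind (a sketch; the dictionaries mapping the number of processes
to the elapsed time are hypothetical containers of the measured data):

\begin{verbatim}
def speedup(times_alpha, times_fastest):
    """Compute S_alpha(n_p) as defined in the text.

    times_alpha: dict {n_p: elapsed time for N iterations} for one FFT class.
    times_fastest: dict {n_p: fastest elapsed time over all FFT classes}.
    """
    np_min = min(times_fastest)
    reference = times_fastest[np_min] * np_min
    return {n_p: reference / time for n_p, time in times_alpha.items()}
\end{verbatim}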
In addition to the number of processes, there is another important parameter,
which is the size of the problem; in other words, the number of grid points
used to discretize the problem at hand.
%
In \emph{strong scaling} analysis, we keep the global grid size fixed and
increase the number of processes.
Ideally, this should yield a speedup which increases linearly with the number
of processes. Realistically, as the number of processes increases, so does the
number of MPI communications, contributing to some latency in the overall time
spent and thus resulting in less than ideal performance.
%
Also, as shown by profiling in the previous section, the majority of the time
is consumed in making forward and inverse FFT calls, an inherent bottleneck of
the pseudo-spectral approach. The FFT function calls are the source of most of
the MPI calls during runtime, limiting the parallelism.
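In practice, a strong scaling study amounts to repeating an identical run while
varying only the number of MPI processes, for instance with a small driver of
this kind (\codeinline{bench\_ns2d.py} is a hypothetical benchmark script which
sets up the run and prints the elapsed time):

\begin{verbatim}
import subprocess

elapsed = {}
for n_p in (2, 4, 8, 16, 32, 64):
    # The global grid stays fixed; only the number of processes changes.
    result = subprocess.run(
        ["mpirun", "-np", str(n_p), "python", "bench_ns2d.py"],
        capture_output=True, text=True, check=True)
    # The hypothetical script prints only the elapsed time in seconds.
    elapsed[n_p] = float(result.stdout.strip())
\end{verbatim}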
\subsubsection*{2D benchmarks}\label{sec:bench2d}
The Navier-Stokes 2D solver (\codeinline{fluidsim.solvers.ns2d}) solving an
initial value problem over a box size of $8\times8$ was chosen as the test case
for strong scaling analysis. The physical grid was discretized
%
with $1024\times1024$ and $2048\times2048$ points.
%
A fourth-order Runge-Kutta (RK4) method with a constant time step, $\Delta t =
1\times10^{-6}$, was used for time integration.
%
File input-output and the forcing term have been disabled so as to measure the
performance accurately. The test case is then executed for 20 iterations.
%for one or more passes
The time elapsed was measured just before and after the
\codeinline{sim.time\_stepping.start()} function call, which was then used to
calculate the average walltime per iteration and speedup.
%
This process is repeated for two different FFT classes provided by
\fluidpack{fft}, viz. \codeinline{fft2d.mpi\_with\_fftw1d} and
\codeinline{fft2d.mpi\_with\_fftwmpi2d}.
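A single run of this kind can be sketched as follows (the parameter names
follow the usual \fluidpack{sim} conventions but are given only to make the
setup concrete; the exact attributes, in particular the hyper-viscosity and
time-stepping options, may differ slightly in the released API):

\begin{verbatim}
from time import perf_counter

from fluiddyn.util import mpi
from fluidsim.solvers.ns2d.solver import Simul

params = Simul.create_default_params()
params.oper.nx = params.oper.ny = 1024               # global grid
params.oper.Lx = params.oper.Ly = 8.                 # box size 8 x 8
params.oper.type_fft = "fft2d.mpi_with_fftwmpi2d"    # FFT class under test
params.nu_8 = 1.                                     # hyper-viscosity term
params.time_stepping.type_time_scheme = "RK4"
params.time_stepping.USE_CFL = False
params.time_stepping.deltat0 = 1e-6                  # constant time step
params.time_stepping.USE_T_END = False
params.time_stepping.it_end = 20                     # 20 iterations
params.forcing.enable = False                        # no forcing
params.output.HAS_TO_SAVE = False                    # no file output

sim = Simul(params)
t_start = perf_counter()
sim.time_stepping.start()
if mpi.rank == 0:
    print((perf_counter() - t_start) / 20)           # walltime per iteration
\end{verbatim}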
\begin{figure}[htp]
\centering
\includegraphics[width=\linewidth]{tmp/fig_bench_strong2d}
@@ -735,5 +743,5 @@
(\codeinline{fluidsim.solvers.ns2d}) solver.}\label{fig:strong2d}
\end{figure}
In Fig.~\ref{fig:strong2d} we analyze the strong scaling speedup $S$ and the
walltime per iteration. The fastest result for a particular case is assigned
the value $S=n_p$ as mentioned earlier in Eq.~\ref{eq:speedup}. The ideal
speedup is indicated with a dotted black line and varies linearly with the
number of processes. We notice that for the $1024\times1024$ case there is a
clear increasing trend in speedup for intra-node computation.
%
Nevertheless, when this test case is solved using more than one node ($n_p >
32$), the speedup drops abruptly. While it may be argued that the speedup is
impacted by the cost of inter-node MPI communications via network interfaces,
that is not the case here. This is shown by the speedup for the
$2048\times2048$ case, where the speedup increases from $n_p = 32$ to $64$,
after which it drops again. It is thus important to remember that a decisive
factor in pseudo-spectral simulations is the choice of the grid size, both
global and local (per-process), and for certain shapes the FFT calls can be
exceptionally fast or slow.
%MPI operations are especially slower in inter-node computation, since nodes
%communicate over network interfaces.
%
From the above results, it may also be inferred that superior performance is
achieved through the use of \codeinline{fft2d.mpi\_with\_fftwmpi2d} as the FFT
method. The \codeinline{fft2d.mpi\_with\_fftw1d} method serves as a fallback
option, used either when the FFTW library is not compiled with MPI bindings or
when the domain decomposition results in zero-shaped arrays, which is a known
issue with the current version of \fluidpack{sim} and requires further
development.
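The ``zero-shaped arrays'' issue mentioned above stems from the 1D domain
decomposition itself: when the global array is split along a single index over
\codeinline{n\_p} processes, some processes can be left with no rows at all. A
simplified block-distribution model makes this concrete (the actual
distribution used by the FFT libraries may differ):

\begin{verbatim}
def local_slab_shape(nk0, nk1, n_p, rank):
    """Shape of the (nk0, nk1) array block owned by one process."""
    block = -(-nk0 // n_p)            # ceil(nk0 / n_p) rows per process
    start = min(rank * block, nk0)
    stop = min(start + block, nk0)
    return (stop - start, nk1)

# Toy example: 12 rows distributed over 8 processes leave the last
# ranks with zero-shaped (0, nk1) arrays.
print(local_slab_shape(12, 1024, 8, 7))   # -> (0, 1024)
\end{verbatim}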
To the right of Fig.~\ref{fig:strong2d}, the walltime required to perform a
single iteration is found to vary inversely with the number of processes,
$n_p$. The walltime per iteration ranges from $0.195$ to $0.023$ seconds for
the $1024\times1024$ case, and from $0.128$ to $0.051$ seconds for the
$2048\times2048$ case. Thus this particular solver is indeed feasible and
scalable to use.
\subsubsection*{3D benchmarks}
Using a procedure similar to that described in the previous section,
%~\ref{sec:bench2d},
the Navier-Stokes 3D solver (\codeinline{fluidsim.solvers.ns3d}) is chosen to
perform the 3D benchmarks.
%
A box size of $2\pi\times2\pi\times2\pi$ is chosen as the reference test case.
As demonstrated in Fig.~\ref{fig:strong3d_beskow}, two physical global grids
with $128\times128\times128$ and $1024\times1024\times1024$ points are used to
discretize the domain. A constant time step, $\Delta t = 1\times10^{-4}$, is
used with RK4 time integration.
%
Other parameters are identical to those described for the 2D benchmarks.
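With respect to the sketch given for the 2D benchmarks, only a handful of
lines change (again, the exact attribute names are indicative and may differ
in the released API):

\begin{verbatim}
from math import pi

from fluidsim.solvers.ns3d.solver import Simul

params = Simul.create_default_params()
params.oper.nx = params.oper.ny = params.oper.nz = 128       # or 1024
params.oper.Lx = params.oper.Ly = params.oper.Lz = 2 * pi    # box (2 pi)^3
params.oper.type_fft = "fft3d.mpi_with_p3dfft"  # or "fft3d.mpi_with_fftwmpi3d"
params.time_stepping.deltat0 = 1e-4                          # constant time step
# forcing, file output and time-stepping options as in the 2D benchmarks
\end{verbatim}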
Through \fluidpack{fft}, this solver has four FFT methods at its disposal:
\begin{itemize}
\item \codeinline{fft3d.mpi\_with\_fftw1d}
\item \codeinline{fft3d.mpi\_with\_fftwmpi3d}
\item \codeinline{fft3d.mpi\_with\_p3dfft}
\item \codeinline{fft3d.mpi\_with\_pfft}
\end{itemize}
The first two methods implement a 1D or \emph{slab} decomposition, i.e.\ the
processes are distributed over one index of a 3D array, while the last two
implement a 2D or \emph{pencil} decomposition. For the sake of clarity, we have
restricted this analysis to the fastest FFT method of each type in this
configuration, viz. \codeinline{fft3d.mpi\_with\_fftwmpi3d} and
\codeinline{fft3d.mpi\_with\_p3dfft}. A more comprehensive study of the
performance of these FFT methods can be found in \citet{fluidfft}.
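The practical difference between the two decompositions can be summarized by
the shape of the block owned by each process (an idealized, evenly divisible
model; the actual shapes depend on the FFT library):

\begin{verbatim}
def local_shape_slab(n, n_p):
    """Slab (1D) decomposition: one index is split over all processes."""
    return (n // n_p, n, n)

def local_shape_pencil(n, p0, p1):
    """Pencil (2D) decomposition: two indices are split over a p0 x p1 grid."""
    return (n // p0, n // p1, n)

# For a 1024^3 grid, the slab approach cannot use more than 1024 processes,
# whereas pencils allow, e.g., a 32 x 64 process grid (2048 processes).
print(local_shape_slab(1024, 512))        # -> (2, 1024, 1024)
print(local_shape_pencil(1024, 32, 64))   # -> (32, 16, 1024)
\end{verbatim}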
\begin{figure}[htp]
\centering
@@ -821,13 +817,26 @@
Beskow}\label{fig:strong3d_beskow}
\end{figure}
In Fig.~\ref{fig:strong3d_beskow} the strong scaling speedup and walltime per
iteration are plotted from the 3D benchmarks in Beskow.
%
The analysis here is limited to single-node and inter-node performance.
%
For both grid sizes analyzed here, the \codeinline{fft3d.mpi\_with\_fftwmpi3d}
method is the fastest of all methods but limited in scalability because of the
1D domain decomposition strategy. To utilize a large number of processes, one
requires the 2D decomposition approach. Also, note that for the
$1024\times1024\times1024$ case, a single-node measurement was not possible as
the size of the arrays required to run the solvers exceeds the available
memory. For the same case, a speedup reasonably close to linear variation is
observed with \codeinline{fft3d.mpi\_with\_p3dfft}.
%
It is also shown that the walltime per iteration ranges from
$0.083$ to $0.027$ seconds for the $128\times128\times128$ case, and from
$31.078$ to $2.175$ seconds for the $1024\times1024\times1024$ case.
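Normalizing the quoted walltimes by the number of grid points gives a rough
measure of the cost per grid point and per iteration (plain arithmetic on the
figures above):

\begin{verbatim}
cases = {
    # grid side: (largest, smallest) walltime per iteration in seconds
    128: (0.083, 0.027),
    1024: (31.078, 2.175),
}
for n, times in cases.items():
    n_points = n ** 3
    print(n, [t / n_points for t in times])
    # 128^3:  ~4.0e-08 and ~1.3e-08 s per grid point
    # 1024^3: ~2.9e-08 and ~2.0e-09 s per grid point
\end{verbatim}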
\subsection*{CFD pseudo-spectral code comparisons}
@@ -854,7 +863,7 @@
%
This approach is very different from that of \fluidpack{sim}, where the
equations are described with simple \Numpy code. There is no equivalent of the
\fluidpack{sim} concept of ``solver'', i.e.\ a class corresponding to a set of
equations with specialized outputs (with the corresponding plotting methods). To
run a simulation with Dedalus, one has to describe the problem using mathematical
equations. This can be very convenient because it is very versatile and it is not