Commit 6aaab2e1
Authored 6 years ago by Ashwin Vishnu
Finish rewriting scalability section
Parent: 18295d50
Changes: 3 changed files, with 112 additions and 102 deletions
fluidsim/Python/bench_analysis.py: 1 addition, 0 deletions
fluidsim/Python/make_fig_profile.py: 1 addition, 1 deletion
fluidsim/fluidsim_paper.tex: 110 additions, 101 deletions
fluidsim/Python/bench_analysis.py (+1 −0)
+# -*- coding: future_fstrings -*-
 """
 Load and plot benchmarks (:mod:`fluidsim.util.console.bench_analysis`)
 =======================================================================
 ...
fluidsim/Python/make_fig_profile.py (+1 −1)
@@ -19,7 +19,7 @@
 figy = 2  # 6 / 2.54
 root = Path("/tmp") / getpass.getuser() / "fluidsim-bench-results" / "profiles"
-if not os.path.exists(root):
+if not os.path.exists(str(root)):
     raise FileNotFoundError("Run sync.py")
 patterns2d = [
 ...
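The one-line change above presumably works around Python versions in which the os.path functions do not accept pathlib.Path objects, hence the str() conversion. A minimal, pathlib-only sketch of an equivalent check (not taken from the repository) would be:

# Sketch, not from the repository: the same check written with pathlib only,
# which avoids converting the Path to str for os.path.
import getpass
from pathlib import Path

root = Path("/tmp") / getpass.getuser() / "fluidsim-bench-results" / "profiles"

if not root.exists():  # Path.exists() accepts the Path object directly
    raise FileNotFoundError("Run sync.py")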
fluidsim/fluidsim_paper.tex (+110 −101)
@@ -319,7 +319,7 @@
 Once initialized, the ``public'' (not hidden) API does not allow to add new
 parameters to this object and only modifications are permitted.
 \footnote{Example on modifying the parameters for a simple simulation:
-\href{https://fluidsim.readthedocs.io/en/latest/examples/running_simul.html}{%
+\href{https://fluidsim.readthedocs.io/en/latest/examples/running-simul-onlineplot.html}{%
 fluidsim.readthedocs.io/en/latest/examples/running\_simul.html}}
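For context, the example linked in that footnote modifies parameters roughly as in the sketch below. The solver and attribute names (create_default_params, params.oper.nx, params.nu_2, params.time_stepping.t_end) follow the fluidsim documentation and are not quoted from this diff.

# Hedged sketch of modifying the parameters object before a run; attribute
# names follow the fluidsim documentation, not this paper.
from fluidsim.solvers.ns2d.solver import Simul

params = Simul.create_default_params()  # the container is created once...
params.oper.nx = params.oper.ny = 256   # ...then existing entries can be modified
params.nu_2 = 1e-3
params.time_stepping.t_end = 10.0
# params.new_entry = 0  # would fail: the public API forbids adding new parameters

sim = Simul(params)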
@@ -427,7 +427,7 @@
 \2 \codeinline{sim.output.spatial\_means}: mean quantities such as energy,
 enstrophy, forcing power, dissipation.
 %
-\2 \codeinline{sim.output.spectra}: energy spectra as line plots (i.e. as
+\2 \codeinline{sim.output.spectra}: energy spectra as line plots (i.e.\ as
 functions of the module or a component of the wavenumber).
 %
 \2 \codeinline{sim.output.spect\_energy\_budg}: spectral energy budget by
@@ -631,7 +631,7 @@
 Navier-Stokes solver shows that majority of time is spent in inverse and forward
 FFT calls (\codeinline{ifft\_as\_arg} and \codeinline{fft\_as\_arg}). For the
 sequential case, approximately 0.14\% of the time is spent in pure Python
-functions, i.e. functions not built using \pack{Cython} and \pack{Pythran}.
+functions, i.e.\ functions not built using \pack{Cython} and \pack{Pythran}.
 %
 \pack{Cython} extensions are responsible for interfacing with FFT operators and
 also for the time-step algorithm. \pack{Pythran} extensions are used to translate
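A profile of this kind can in principle be reproduced with the standard library alone. The following is a generic sketch, not the console tooling actually used for the paper:

# Generic profiling sketch using only the standard library; the placeholder
# workload stands in for something like sim.time_stepping.start().
import cProfile
import pstats

def run_simulation():
    # placeholder workload for the time-stepping loop
    sum(i * i for i in range(10**6))

profiler = cProfile.Profile()
profiler.enable()
run_simulation()
profiler.disable()

stats = pstats.Stats(profiler).sort_stats("cumulative")
stats.print_stats(10)  # the ten most expensive calls, e.g. fft/ifft wrappers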
@@ -685,10 +685,10 @@
 used for the sequential mode differ from the parallel mode, especially the FFT
 class. Speedup is formally defined here as:
-\begin{equation*}
+\begin{equation}
   S_\alpha(n_p) = \frac
   {[\mathrm{Time\ elapsed\ for\ } N \mathrm{\ iterations\ with\ } n_{p,\min}
   \mathrm{\ processes}]_{\mathrm{fastest}} \times n_{p,\min}}
   {[\mathrm{Time\ elapsed\ for\ } N \mathrm{\ iterations\ with\ } n_p
   \mathrm{\ processes}]_\alpha}
   \label{eq:speedup}
@@ -689,10 +689,10 @@
   S_\alpha(n_p) = \frac
   {[\mathrm{Time\ elapsed\ for\ } N \mathrm{\ iterations\ with\ } n_{p,\min}
   \mathrm{\ processes}]_{\mathrm{fastest}} \times n_{p,\min}}
   {[\mathrm{Time\ elapsed\ for\ } N \mathrm{\ iterations\ with\ } n_p
   \mathrm{\ processes}]_\alpha}
   \label{eq:speedup}
-\end{equation*}
+\end{equation}
 where $n_{p,\min}$ is the minimum number of processes employed for a specific
 array size and hardware, $\alpha$ denotes the FFT class used and ``fastest''
 corresponds to the fastest result among various FFT classes.
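The definition in Eq. (eq:speedup) translates directly into a few lines of Python. The sketch below assumes timings stored per FFT class and per process count; the numbers are purely illustrative and not benchmark results.

# Sketch of Eq. (eq:speedup): times[alpha][n_p] holds the time elapsed for N
# iterations with n_p processes using FFT class alpha (illustrative numbers).
times = {
    "fft2d.mpi_with_fftw1d":    {2: 10.4, 4: 5.6, 8: 3.1},
    "fft2d.mpi_with_fftwmpi2d": {2:  9.8, 4: 5.1, 8: 2.7},
}

def speedup(alpha, n_p, times):
    n_p_min = min(min(t) for t in times.values())
    # fastest result among all FFT classes at n_p_min processes
    t_fastest_min = min(t[n_p_min] for t in times.values())
    return t_fastest_min * n_p_min / times[alpha][n_p]

print(speedup("fft2d.mpi_with_fftwmpi2d", 8, times))  # S_alpha(8)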
@@ -696,6 +696,6 @@
 where $n_{p,\min}$ is the minimum number of processes employed for a specific
 array size and hardware, $\alpha$ denotes the FFT class used and ``fastest''
 corresponds to the fastest result among various FFT classes.
 %
 In addition to number of processes, there is another important parameter, which
 is the size of the problem; in other words, the number of grid points used to
@@ -700,10 +700,18 @@
 In addition to number of processes, there is another important parameter, which
 is the size of the problem; in other words, the number of grid points used to
-discretize the problem at hand. In \emph{strong scaling} analysis, we keep the
-global discretization fixed and increase the number of processes. Ideally,
-this should yield a speedup which increases linearly with number of processes.
-Realistically, as the number of processes increase, so does the number of MPI
-communications, contributing to some latency in the overall time spent and thus
-resulting in less than ideal performance.
+discretize the problem at hand.
+%
+In \emph{strong scaling} analysis, we keep the global grid-size fixed and
+increase the number of processes.
+Ideally, this should yield a speedup which increases linearly with number of
+processes. Realistically, as the number of processes increase, so does the
+number of MPI communications, contributing to some latency in the overall time
+spent and thus resulting in less than ideal performance.
+%
+Also, as shown by profiling in the previous section, majority of the time is
+consumed in making forward- and inverse-FFT calls, an inherent bottleneck of
+the pseudo-spectral approach. The FFT function calls are the source of most of
+the MPI calls during runtime, limiting the parallelism.
 \subsubsection*{2D benchmarks}
 \label{sec:bench2d}
@@ -708,3 +716,4 @@
 \subsubsection*{2D benchmarks}
 \label{sec:bench2d}
 The Navier-Stokes 2D solver (\codeinline{fluidsim.solvers.ns2d}) solving an
@@ -710,4 +719,4 @@
 The Navier-Stokes 2D solver (\codeinline{fluidsim.solvers.ns2d}) solving an
-initial value problem over a box size of $8\times 8$ is chosen as
-reference test case for strong scaling analysis, discretized using
+initial value problem over a box size of $8\times 8$ was chosen as
+the test case for strong scaling analysis here. The physical grid was discretized
 %
@@ -713,3 +722,3 @@
 %
-physical global grids with $1024\times 1024$ and $2048\times 2048$ grid points.
+with $1024\times 1024$ and $2048\times 2048$ points.
 %
@@ -715,11 +724,4 @@
 %
-The test case uses a hyper-viscosity term set with $\nu_8 = 1$ and a constant
-time-step, $\Delta t = 1\times 10^{-6}$. No file input-output is enabled so as
-to measure the performance accurately. The test case is then executed for 20
-iterations for one or more passes.
-The time elapsed is measured just before and after the
-\codeinline{sim.time\_stepping.start()} function call.
-% ``The median time elapsed'' -> too confusing?
-The time elapsed recorded is used to calculate the mean walltime per
-iteration and speedup.
+Fourth-order Runge-Kutta (RK4) method with a constant time-step,
+$\Delta t = 1\times 10^{-6}$ was used for time-integration.
 %
@@ -725,5 +727,12 @@
 %
-This process is repeated for two different FFTW implementations provided by
+File input-output and the forcing term has been disabled so as to measure the
+performance accurately. The test case is then executed for 20 iterations.
+%for one or more passes
+The time elapsed was measured just before and after the
+\codeinline{sim.time\_stepping.start()} function call, which was then utilized
+to calculate the average walltime per iteration and speedup.
+%
+This process is repeated for two different FFT classes provided by
 \fluidpack{fft}, viz. \codeinline{fft2d.mpi\_with\_fftw1d} and
 \codeinline{fft2d.mpi\_with\_fftwmpi2d}.
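A hedged sketch of the measurement just described, with wall-clock time taken around sim.time_stepping.start() and averaged over the 20 iterations. The parameter names (USE_T_END, it_end, HAS_TO_SAVE) follow the fluidsim documentation and are assumptions relative to this diff; other settings are left at their defaults.

# Hedged sketch of timing the time-stepping loop, not the benchmark script
# used for the paper.
from time import perf_counter
from fluidsim.solvers.ns2d.solver import Simul

params = Simul.create_default_params()
params.oper.nx = params.oper.ny = 128
params.time_stepping.USE_T_END = False
params.time_stepping.it_end = 20      # run exactly 20 iterations
params.output.HAS_TO_SAVE = False     # no file output, as in the benchmarks
sim = Simul(params)

t0 = perf_counter()
sim.time_stepping.start()
elapsed = perf_counter() - t0

print(f"mean walltime per iteration: {elapsed / 20:.4f} s")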
@@ -727,7 +736,6 @@
 \fluidpack{fft}, viz. \codeinline{fft2d.mpi\_with\_fftw1d} and
 \codeinline{fft2d.mpi\_with\_fftwmpi2d}.
 \begin{figure}[htp]
 \centering
 \includegraphics[width=\linewidth]{tmp/fig_bench_strong2d}
@@ -735,5 +743,5 @@
 (\codeinline{fluidsim.solvers.ns2d}) solver.}
 \label{fig:strong2d}
 \end{figure}
-In Fig.~\ref{fig:strong2d} we analyze the strong scaling speedup $S$ and
+In Fig.~\ref{fig:strong2d} we have analyzed the strong scaling speedup $S$ and
 walltime per iteration. The fastest result for a particular case is assigned
@@ -739,11 +747,6 @@
 walltime per iteration. The fastest result for a particular case is assigned
-the value $S = n_p$ as mentioned earlier in the definition. Ideal speedup is
-indicated with a dotted black line and it varies linearly with number of
-processes. We notice that for all cases there is an assured increasing trend
-in speedup for intra-nodes computation.
-%
-However, when this test case is solved in Beskow, with over a node ($n_p > 32$);
-the speedup drops abruptly. The speedup is impacted by the cost of MPI
-communications. MPI operations are especially slower in inter-node
-computation, since nodes communicate over network interfaces.
+the value $S = n_p$ as mentioned earlier in Eq.~\ref{eq:speedup}. Ideal speedup
+is indicated with a dotted black line and it varies linearly with number of
+processes. We notice that for the $1024\times 1024$ case there is an assured
+increasing trend in speedup for intra-nodes computation.
 %
@@ -749,9 +752,10 @@
 %
-We turn our attention to the sub-plot on the right, indicating strong scaling
-efficiency $E_{strong}$. The drop in efficiency implies that while there is an
-increase in speedup, it is not close to the ideal linear speedup desired. This
-could due to fact that while some of the time spent is to do fully parallelized
-linear algebra operations, more that $50\%$ of the computation time would be
-attributed to make forward- and inverse- fast Fourier transforms (FFTs). We
-will analyze this aspect through profiling in the next section.
+Nevertheless, when this test case is solved with over a node ($n_p > 32$); the
+speedup drops abruptly. While it may be argued that the speedup is impacted by
+the cost of inter-node MPI communications via network interfaces, that is not
+the case here. This is shown by speedup for the $2048\times 2048$ case, where
+speedup increases from $n_p = 32$ to $64$, after which it drops again. It is thus
+important to remember that a decisive factor in pseudo-spectral simulations is
+the choice of the grid size, both global and local (per-process), and for certain
+shapes the FFT calls can be exceptionally fast or vice-versa.
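The remark about global and local (per-process) grid sizes can be made concrete with a little arithmetic. The sketch below shows how a 1D (slab) decomposition splits a 2D grid over processes; the resulting shapes are illustrative, not those actually produced by the FFT libraries.

# Illustrative arithmetic only: local slab shapes when a 2D global grid is
# split along its first index over n_p processes.
def local_shapes(n0, n1, n_p):
    base, rest = divmod(n0, n_p)
    return [((base + 1 if rank < rest else base), n1) for rank in range(n_p)]

print(local_shapes(1024, 1024, 32))  # even slabs of 32 x 1024
print(local_shapes(1024, 1024, 48))  # uneven slabs: some 22, some 21 rows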
@@ -757,2 +761,5 @@
 %MPI operations are especially slower in inter-node computation, since nodes
 %communicate over network interfaces.
 %
@@ -758,11 +765,8 @@
 % In Fig.~\ref{fig:weak2d}, for weak scaling analysis, the ideal speedup is a
 % constant line as the size of problem per process remains unique. It can be a
 % argued that this form of analysis is more of a measure of scalability than
 % performance, i.e.\ to show that a solver can handle a larger domain. For
-\fluidpack{sim}, we observe that the speedup $S_{weak}$ drops, but remains
-within the same of order of magnitude. In Trolith beyond three nodes
-(i.e. $n_p > 48$) $S_{weak}$ drops sharply. The weak scaling efficiency
-$E_{weak}$ is qualitatively similar to its strong scaling counterpart $E_{strong}$.
-From the above results, it may also be inferred that superior performance is
-achieved through the use of \codeinline{fft2d.mpi\_with\_fftwmpi2d} as the FFT
-method. The \codeinline{fft2d.mpi\_with\_fftw1d} method serves as a fallback
-option when either FFTW library is not compiled using MPI bindings or the domain
-decomposition results in zero-shaped arrays, which is a known issue with the
-current version of \fluidpack{sim} and requires further development.
@@ -768,8 +772,9 @@
+From all the above results, it may be inferred that superior performance is
+achieved through the use of \codeinline{fft2d.mpi\_with\_fftwmpi2d} as the
+backend rather than \codeinline{fft2d.mpi\_with\_fftw1d}. The
+\codeinline{fft2d.mpi\_with\_fftw1d} method serves as a fallback option when
+FFTW cannot be or is not compiled using MPI bindings.
+To the right of Fig.~\ref{fig:strong2d}, the real-time or walltime required to
+perform a single iteration in seconds is found to vary inversely proportional
+to the number of processes, $n_p$. The walltime per iteration ranges from
+$0.195$ to $0.023$ seconds for the $1024\times 1024$ case, and from $0.128$ to
+$0.051$ seconds for the $2048\times 2048$ case. Thus it is indeed
+feasible and scalable to use this particular solver.
 \subsubsection*{3D benchmarks}
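For reference, switching between the two FFT classes is done through the parameters object. In the sketch below the attribute params.oper.type_fft follows the fluidsim documentation and is an assumption relative to this diff.

# Hedged sketch of selecting the FFT backend for a run.
from fluidsim.solvers.ns2d.solver import Simul

params = Simul.create_default_params()
params.oper.nx = params.oper.ny = 1024
params.oper.type_fft = "fft2d.mpi_with_fftwmpi2d"  # or "fft2d.mpi_with_fftw1d"

sim = Simul(params)  # run under MPI, e.g. mpirun -np 8 python this_script.py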
@@ -774,8 +779,8 @@
 \subsubsection*{3D benchmarks}
-For 3D benchmarks the analysis here is limited to strong scaling benchmarks.
-The reason for not investigating weak scaling benchmarks are described as
-follows. Broadly speaking \codeinline{fft3d.mpi\_with\_fftw1d} and
-\codeinline{fft3d.mpi\_with\_fftwmpi3d} use a slab decomposition, i.e.\ the
-processes are distributed over one index of a 3D array.
+Using a similar process as described in the previous section,
+%~\ref{sec:bench2d},
+the Navier-Stokes 3D solver (\codeinline{fluidsim.solvers.ns3d}) is chosen to
+perform 3D benchmarks.
 %
@@ -781,12 +786,9 @@
 %
-On the other hand, \codeinline{fft3d.mpi\_with\_p3dfft} and
-\codeinline{fft3d.mpi\_with\_pfft} use a pencil decomposition approach, wherein
-processes are spread over two indices of the 3D array. There are subtle
-differences in how the domain is divided thus resulting in different shapes of
-local array allocated to each process. Moreover, it is impractical to solve
-large problems of the order of hundreds of millions of grid points using very
-few processes which always exceed memory limitations of a compute node. Due to
-these added complexities, instead of weak scaling benchmarks, a series of
-strong scaling benchmarks is performed with progressively larger global grid
-sizes as number of processes increase.
+A box size of $2\pi\times 2\pi\times 2\pi$ is chosen as the reference test case.
+As demonstrated in Fig.~\ref{fig:strong3d_beskow} two physical global grids
+with $128\times 128\times 128$ and $1024\times 1024\times 1024$ are used to
+discretize the domain. A constant time-step, $\Delta t = 1\times 10^{-4}$ with
+RK4 time integration was used.
+%
+Other parameters are identical to what was described for the 2D benchmarks.
@@ -792,12 +794,3 @@
-Using a similar process as described in the previous Section~\ref{sec:bench2d},
-here the Navier-Stokes 3D solver (\codeinline{fluidsim.solvers.ns3d}) is chosen
-to perform 3D benchmarks.
-%
-A box size of $2\pi\times 2\pi\times 2\pi$ is chosen as the reference test case. As
-demonstrated in Fig.~\ref{fig:strong3d_beskow} a physical global grid with
-$128\times 128\times 128$ grid points is used when up to two compute nodes are
-allocated; a grid size of $512\times 512\times 512$ is used when between two and
-sixteen nodes are allocated; and a grid size of $1024\times 1024\times 1024$ is used
-when sixteen or more nodes are allocated.
 Through \fluidpack{fft}, this solver has four FFT methods at disposal:
@@ -803,15 +796,18 @@
-The forcing term in the solver and file input output have been disabled, so as
-to measure the performance of the solver accurately. A constant time-step,
-$\Delta t = 1\times 10^{-4}$ is used. The test case is then executed for 10
-iterations for three or more passes. The median time elapsed recorded is used
-to analyze speedup and efficiency. The time elapsed is measured just before
-and after the \codeinline{sim.time\_stepping.start()} function call.
-%
-This process is repeated for four different FFT implementations provided by the
-FluidFFT package, viz. \codeinline{fft3d.mpi\_with\_fftw1d},
-\codeinline{fft3d.mpi\_with\_fftwmpi3d}, \codeinline{fft3d.mpi\_with\_p3dfft},
-and \codeinline{fft3d.mpi\_with\_pfft}.
+\begin{itemize}
+\item \codeinline{fft3d.mpi\_with\_fftw1d}
+\item \codeinline{fft3d.mpi\_with\_fftwmpi3d}
+\item \codeinline{fft3d.mpi\_with\_p3dfft}
+\item \codeinline{fft3d.mpi\_with\_pfft}
+\end{itemize}
+The first two methods implements a 1D or \emph{slab} decomposition, i.e.\ the
+processes are distributed over one index of a 3D array. And the last two
+methods implement a 2D or \emph{pencil} decomposition. For the sake of clarity,
+we have restricted this analysis to the fastest FFT method of the two types in
+this configuration, viz. \codeinline{fft3d.mpi\_with\_fftwmpi3d} and
+\codeinline{fft3d.mpi\_with\_p3dfft}. A more comprehensive study of the
+performance of these FFT methods can be found in \citet{fluidfft}.
 \begin{figure}[htp]
 \centering
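The four methods listed are provided by fluidfft and can also be exercised on their own. The sketch below uses import_fft_class as described in the fluidfft documentation; this usage is an assumption here, since the paper itself does not show this code, and the MPI classes must be run under mpirun.

# Hedged sketch of driving one of the listed 3D FFT classes directly via fluidfft.
import numpy as np
from fluidfft import import_fft_class

cls = import_fft_class("fft3d.mpi_with_fftwmpi3d")
o = cls(128, 128, 128)                # global grid, distributed over MPI ranks

field = np.ones(o.get_shapeX_loc())   # local (per-process) physical array
field_fft = o.fft(field)              # forward transform
field_back = o.ifft(field_fft)        # inverse transform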
@@ -821,13 +817,26 @@
 Beskow}
 \label{fig:strong3d_beskow}
 \end{figure}
-In Fig.~\ref{fig:strong3d_beskow} the strong scaling speedup and efficiency are
-plotted from 3D benchmarks in Beskow. We observe a consistent increase in the
-speedup as number of processes increase. The fastest FFT algorithm turns out to
-be \codeinline{fft3d.mpi\_with\_fftwmpi3d} for this particular set of test
-cases with cubical discretization. Pencil decomposition based FFT
-implementations (P3DFFT and PFFT) demonstrate similar, but subpar speedup in
-comparison with FFTW methods.
+In Fig.~\ref{fig:strong3d_beskow} the strong scaling speedup and walltime per
+iteration are plotted from 3D benchmarks in Beskow.
+%
+The analysis here is limited to single-node and inter-node performance.
+%
+For both grid-sizes analyzed here, the \codeinline{fft3d.mpi\_with\_fftwmpi3d}
+method is the fastest of all methods but limited in scalability because of the
+1D domain decomposition strategy. To utilize a large number of processors, one
+requires the 2D decomposition approach. Also, note that for the
+$1024\times 1024\times 1024$ case, a single-node measurement was not possible as
+the size of the arrays required to run the solvers exceeds the available
+memory. For the same case, a speedup reasonably close to linear variation is
+observed with \codeinline{fft3d.mpi\_with\_p3dfft}.
+%
+It is also shown that the walltime per iteration ranges from
+%
+$0.083$ to $0.027$ seconds for the $128\times 128\times 128$ case, and from
+$31.078$ to $2.175$ seconds for the $1024\times 1024\times 1024$ case.
 \subsection*{CFD pseudo-spectral code comparisons}
@@ -854,7 +863,7 @@
 %
 This approach is very different than the one of \fluidpack{sim}, where the
 equation are described with simple \Numpy code. There is no equivalent of the
-\fluidpack{sim} concept of ``solver'', i.e. a class corresponding to a set of
+\fluidpack{sim} concept of ``solver'', i.e.\ a class corresponding to a set of
 equations with specialized outputs (with the corresponding plotting methods). To
 run a simulation with Dedalus, one has to describe the problem using mathematical
 equations. This can be very convenient because it is very versatile and it is not