fluiddyn / fluiddyn_papers · Commits

Commit 044e2935, authored 6 years ago by Pierre Augier

    Take into account Cyrille's remarks.
Parent: 932091f8
Showing 3 changed files with 98 additions and 94 deletions:

  fluidfft/fluidfft_paper.tex          +76 −78
  fluidfft/microbench/make_fig_bar.py  +10 −2
  fluidsim/fluidsim_paper.tex          +12 −14
fluidfft/fluidfft_paper.tex (+76 −78)
@@ -73,9 +73,9 @@
 The package supplies utilities to easily test itself and benchmark the different
 FFT solutions for a particular case and on a particular machine.
 %
-We present a performance scaling analysis and a microbenchmark showing that
-\fluidpack{fft} is an interesting solution to write efficient Python
-applications using FFT.
+We present a performance scaling analysis on three different clusters and a
+microbenchmark showing that \fluidpack{fft} is an interesting solution to write
+efficient Python applications using FFT.

 \section*{Keywords}
@@ -96,8 +96,8 @@
 Fast Fourier transforms (FFT) are useful for many applications, such as signal
 processing, numerical simulations and scientific computing in general. There are
 many good libraries to perform FFT, in particular the \emph{de-facto} standard
-FFTW \citep{frigo2005design}.\@ A new challenge is to efficiently scale FFT on
+FFTW \citep{frigo2005design}.\@ A challenge is to efficiently scale FFT on
 clusters with the memory distributed over a large number of cores using Message
 Passing Interface (MPI). This is imperative to solve big problems faster and when
 the arrays do not fit in the memory of single computational node.
 %
@@ -100,8 +100,8 @@
 clusters with the memory distributed over a large number of cores using Message
 Passing Interface (MPI). This is imperative to solve big problems faster and when
 the arrays do not fit in the memory of single computational node.
 %
-A problem is that for one-dimensional FFT, all the data has to be located in the
+A problem is that for one-dimensional FFT, all the data have to be located in the
 memory of the process that perform the FFT, so a lot of communication between
 processes are needed for 2D and 3D FFT.
@@ -111,17 +111,16 @@
 important limitation in terms of number of MPI processes that can be used. In
 contrast, this limitation is overcome by the 2D decomposition.

-Some of the well-known libraries are written in C, C++ and Fortran.
-\libpack{FFTW} supports MPI using 1D decomposition and hybrid parallelism using
-OpenMP.\@ Other libraries, now implement the 2D decomposition:
-\libpack{pfft} \citep{pippig_pfft2013}, \libpack{p3dfft}
-\citep{pekurovsky2012p3dfft}, \libpack{2decomp\&FFT} and so on. These libraries
-rely on MPI for the communications between processes, are optimized for
-supercomputers and scales well to hundreds of thousands of cores. However,
-since there is no common API, it is not simple to write applications that are
-able to use these libraries and to compare their performances. As a result,
-developers are met with the hard decision to choose a library before the code
-is implemented.
+Some of the well-known libraries are written in C, C++ and Fortran.
+\libpack{FFTW} supports MPI using 1D decomposition and hybrid parallelism using
+MPI and OpenMP. Other libraries, now implement the 2D decomposition:
+\libpack{pfft} \citep{pippig_pfft2013}, \libpack{p3dfft}
+\citep{pekurovsky2012p3dfft}, \libpack{2decomp\&FFT} and so on. These libraries
+rely on MPI for the communications between processes, are optimized for
+supercomputers and scales well to hundreds of thousands of cores. However, since
+there is no common API, it is not simple to write applications that are able to
+use these libraries and to compare their performances. As a result, developers
+are met with the hard decision to choose a library before the code is
+implemented.

 Apart from CPU-based parallelism, General Purpose computing on Graphical
 Processing Units (GPGPU) is also gaining traction in scientific computing.
@@ -215,9 +214,10 @@
 Both C++ and Python APIs provided by \fluidpack{fft} currently support linking
 with \libpack{FFTW} (with and without MPI and OpenMP support enabled),
 \libpack{MKL}, \libpack{pfft}, \libpack{p3dfft}, \libpack{cuFFT} libraries. The
-classes in \fluidpack{fft} offers API for performing double-precision
-computation with real-to-complex FFT, complex-to-real inverse FFT, and
-additional helper functions.
+classes in \fluidpack{fft} offers API for performing
+double-precision\footnote{Most C++ classes also support single-precision.}
+computation with real-to-complex FFT, complex-to-real inverse FFT, and additional
+helper functions.

 \subsection*{C++ API}
@@ -400,5 +400,5 @@
 The Python API is built automatically when \fluidpack{fft} is
 installed\footnote{%
 \href{https://fluidfft.readthedocs.io/en/latest/install.html}{Detailed steps for
-installation} are provided in the documentation.}
+installation} are provided in the documentation.}.
 %
@@ -404,8 +404,7 @@
 %
-which executes the script \codeinline{setup.py}. It first generates the Cython
-source code as a pair of \codeinline{.pyx} and \codeinline{.pxd} files containing
-a class wrapping its C++ counterpart\footnote{Uses an approach similar to
-guidelines \href{%
+which executes the script \codeinline{setup.py}.
+It first generates the Cython source code as a pair of \codeinline{.pyx} and
+\codeinline{.pxd} files containing a class wrapping its C++
+counterpart\footnote{Uses an approach similar to guidelines \href{%
 https://cython.readthedocs.io/en/latest/src/userguide/wrapping_CPlusPlus.html}{%
 ``Using C++ in Cython''} in the Cython documentation.}.
 %
@@ -444,10 +443,10 @@
 \end{itemize}

 Command-line utilities (\codeinline{fluidfft-bench} and
-\codeinline{fluidfft-bench-analysis}) are also provided with the
-\fluidpack{fft} installation to run benchmarks and plot the results. In the
-next subsection, we shall look at some results by making use of these
-utilities on two computing clusters.
+\codeinline{fluidfft-bench-analysis}) are also provided with the
+\fluidpack{fft} installation to run benchmarks and plot the results. In the
+next subsection, we shall look at some results by making use of these
+utilities on three computing clusters.

 \subsection*{Performance}
@@ -456,7 +455,6 @@
 % Simple!! Few cases. Few clusters. Figures obtained with
 % fluidfft-bench-analysis
-Scalability of \fluidpack{fft} is measured in the form of strong scaling
-speedup, defined in the present context as:
+Scalability of \fluidpack{fft} is measured in the form of strong scaling speedup,
+defined in the present context as:
 \begin{equation*}
@@ -462,7 +460,8 @@
 \begin{equation*}
-S(n_p) =
-\frac{\mathrm{Time\ elapsed\ for\ } N \mathrm{\ iterations\ with\ }
-n_{p,\min} \mathrm{\ processes} \times S(n_{p,\min})}
-{\mathrm{Time\ elapsed\ for\ } N \mathrm{\ iterations\ with\ } n_p
-\mathrm{\ processes}}
+S_\alpha(n_p) =
+\frac{[\mathrm{Time\ elapsed\ for\ } N \mathrm{\ iterations\ with\ }
+n_{p,\min} \mathrm{\ processes}]_{\mathrm{fastest}} \times n_{p,\min}}
+{[\mathrm{Time\ elapsed\ for\ } N \mathrm{\ iterations\ with\ } n_p
+\mathrm{\ processes}]_\alpha}
 \label{eq:speedup}
 \end{equation*}
@@ -467,4 +466,3 @@
 \label{eq:speedup}
 \end{equation*}
 where $n_{p,\min}$ is the minimum number of processes employed for a specific

@@ -470,8 +468,6 @@
 where $n_{p,\min}$ is the minimum number of processes employed for a specific
-array size and hardware, and $S(n_{p,\min})$ is assigned the value $n_{p,\min}$
-for the fastest result among various FFT classes.
-% pa: I don't understand this sentence. Is it necessary?
-% For slower FFT classes, $S(n_{p,\min})$ is set proportionally.
+array size and hardware, $\alpha$ denotes the FFT class used and ``fastest''
+corresponds to the fastest result among various FFT classes.

 To compute strong scaling the utility \codeinline{fluidfft-bench} is launched
 as scheduled jobs on HPC clusters, ensuring no interference from background
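
A minimal Python sketch of the new speedup definition above, using made-up
timings of the kind fluidfft-bench produces; the `times` dictionary and the
`speedup` helper are illustrative only, not part of fluidfft:

    # times[alpha][n_p] = time elapsed for N iterations with n_p processes
    # using FFT class alpha (made-up numbers).
    times = {
        "fftw1d": {2: 105.0, 4: 56.0, 8: 30.0},
        "fftwmpi3d": {2: 90.0, 4: 47.0, 8: 24.0},
    }
    n_p_min = 2

    # Numerator of eq:speedup: the fastest elapsed time over all classes
    # at n_p_min, multiplied by n_p_min.
    t_fastest = min(t[n_p_min] for t in times.values())

    def speedup(alpha, n_p):
        """S_alpha(n_p): the fastest class at n_p_min starts at S = n_p_min."""
        return t_fastest * n_p_min / times[alpha][n_p]

    print(speedup("fftwmpi3d", 2))  # 2.0: fastest class at the minimum count
    print(speedup("fftwmpi3d", 8))  # 7.5: close to the ideal linear value 8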
@@ -486,9 +482,8 @@
 \begin{itemize}
-\item \codeinline{fft\_cpp}, \codeinline{ifft\_cpp} (continuous lines):
-benchmark of the C++ function from the C++ code. An array is passed as an
-argument to store the result. No memory allocation is performed inside the
-functions.
+\item \codeinline{fft\_cpp}, \codeinline{ifft\_cpp} (continuous lines): benchmark
+of the C++ function from the C++ code. An array is passed as an argument to store
+the result. No memory allocation is performed inside these functions.
 \item \codeinline{fft\_as\_arg}, \codeinline{ifft\_as\_arg} (dashed lines):
@@ -493,7 +488,7 @@
 \item \codeinline{fft\_as\_arg}, \codeinline{ifft\_as\_arg} (dashed lines):
-benchmark of a Python method from Python. Similar to the C++ code, the second
-argument of this method is an array to contain the result of the transform,
-so no memory allocation is needed.
+benchmark of a Python method from Python. Similar to the C++ code, the second
+argument of this method is an array to contain the result of the transform, so no
+memory allocation is needed.
 \item \codeinline{fft\_return}, \codeinline{ifft\_return} (dotted lines):
@@ -498,8 +493,8 @@
 \item \codeinline{fft\_return}, \codeinline{ifft\_return} (dotted lines):
-benchmark of a Python method from Python. No array is provided to the
-function to contain the result, and therefore a numpy array is created and
-then returned by the function.
+benchmark of a Python method from Python. No array is provided to the function to
+contain the result, and therefore a numpy array is created and then returned by
+the function.
 \end{itemize}
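
To make the benchmarked call styles above concrete, a short Python sketch of
the two pure-Python styles; the module path, constructor signature and the
get_shape*_loc helpers are assumptions following fluidfft's naming pattern
(the list above only guarantees the fft / fft_as_arg semantics):

    import numpy as np

    # Assumed import; the paper names the MPI class FFT3DMPIWithFFTW1D, and
    # this sequential class name follows the same pattern.
    from fluidfft.fft3d.with_fftw3d import FFT3DWithFFTW3D

    o = FFT3DWithFFTW3D(32, 32, 32)  # global shape of the physical array
    fieldX = np.ones(o.get_shapeX_loc())  # physical space, local array
    fieldK = np.empty(o.get_shapeK_loc(), dtype=np.complex128)  # Fourier space

    # "as_arg" style: the output array is passed as the second argument,
    # so no memory allocation is performed inside the call.
    o.fft_as_arg(fieldX, fieldK)

    # "return" style: a new numpy array is allocated and returned.
    fieldK2 = o.fft(fieldX)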
@@ -513,7 +508,9 @@
 ``FFT 3D parallel (MPI): Domain decomposition''} tutorial}.

 Hereafter, for the sake of brevity, the FFT classes will be named in terms of the
-associated library. Let us go through the results\footnote{Saved at \url{%
+associated library (For example, the class \codeinline{FFT3DMPIWithFFTW1D} is
+named \codeinline{fftw1d}). Let us go through the results\footnote{Saved at \url{%
 https://bitbucket.org/fluiddyn/fluidfft-bench-results}}
 plotted using \codeinline{fluidfft-bench-analysis}.
@@ -641,14 +638,14 @@
 \label{fig:cluster8:320x640x640}
 \end{figure}

 In Fig.~\ref{fig:cluster8:320x640x640} we observe that the strong scaling for an
 array shape of $320\times640\times640$ is not far from the ideal linear trend.
 The fastest library is \codeinline{fftwmpi3d} for this case. As expected from FFT
 algorithms, there is a slight drop in speedup when the array size is not exactly
 divisible by the number of processes, i.e.\ with 12 processes. The speedup
 declines rapidly when more than one node is employed (above 20 processes). This
-effect can be attributed to the latency introduced by inter-node communications,
-a hardware limitation.
+effect can be attributed to the latency introduced by inter-node communications,
+a hardware limitation of this cluster (10 Gb/s).

 \begin{figure}[htp!]
 \centering
@@ -664,7 +661,7 @@
 \codeinline{fftwmpi2d}. Both libraries display near-linear scaling, except when
 more than one node is used and the performance tapers off.

-As a final remark on scalability, a general rule of thumb should be to use 1D
+As a conclusive remark on scalability, a general rule of thumb should be to use 1D
 domain decomposition when only very few processors are employed. For massive
 parallelization, 2D decomposition is required to achieve good speedup without
 being limited by the number of processors at disposal. We have thus shown that
@@ -716,5 +713,5 @@
 \begin{figure}[htp]
 \centering
 \includegraphics[width=\linewidth]{tmp/fig_microbench}
-\caption{Time elapsed (smaller is better) for the projection function for
+\caption{Elapsed time (smaller is better) for the projection function for
 different implementations and tools. The shape of the arrays is
@@ -720,5 +717,6 @@
 different implementations and tools. The shape of the arrays is
-$(128,\ 128,\ 65)$.}
+$(128,\ 128,\ 65)$. The dotted lines indicate the times for Fortran for better
+comparison.}
 \label{fig:microbench}
 \end{figure}
@@ -727,8 +725,8 @@
 %
 For this outplace version, we used three different codes:
 \begin{enumerate}
-\item a Fortran code (not shown\footnote{The codes used for this benchmark
-study are available in \href{%
+\item a Fortran code (not shown\footnote{The codes and a MakeFile used for this
+benchmark study are available in \href{%
 https://bitbucket.org/fluiddyn/fluiddyn_paper/src/default/fluidfft/microbench/}{%
 the repository of the article}.}) written with three nested explicit loops (one
 per dimension). Note that as in the Python version we also allocate the memory
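
For readers skimming the diff, a tiny numpy illustration of the
outplace/inplace distinction used in this microbenchmark; the operation is a
generic placeholder, not the actual projection function (whose code lives in
the repository linked in the footnote):

    import numpy as np

    def op_outplace(a):
        # Outplace: a result array is allocated at every call.
        return 2.0 * a

    def op_inplace(a):
        # Inplace: the input array is overwritten; no allocation.
        a *= 2.0

    a = np.ones((128, 128, 65))  # the array shape used in the microbenchmark
    b = op_outplace(a)  # allocates b
    op_inplace(a)       # modifies a in place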
@@ -891,12 +889,12 @@
 % running the software with sample input and output data). }

 The package \fluidpack{fft} currently supplies unit tests covering 93\% of its
 code. These unit tests are run regularly through continuous integration on Travis
 CI with the most recent releases of \fluidpack{fft}'s dependencies and on
 Bitbucket Pipelines inside a static
 \href{https://hub.docker.com/u/fluiddyn}{Docker container}. The tests are run
 using standard Python interpreter with all supported versions.

 For \fluidpack{fft}, the code coverage results are displayed at
 \href{https://codecov.io/gh/fluiddyn/fluidfft}{Codecov}. Using third-party
 packages \pack{coverage} and \pack{tox}, it is straightforward to bootstrap the
@@ -897,9 +895,9 @@
 \href{https://hub.docker.com/u/fluiddyn}{Docker container}. The tests are run
 using standard Python interpreter with all supported versions.

 For \fluidpack{fft}, the code coverage results are displayed at
 \href{https://codecov.io/gh/fluiddyn/fluidfft}{Codecov}. Using third-party
 packages \pack{coverage} and \pack{tox}, it is straightforward to bootstrap the
 installation with dependencies, test with multiple Python versions and combine
 the code coverage report, ready for upload. It is also possible to run similar
 isolated tests using \pack{tox} or coverage analysis using \pack{coverage} in a
@@ -905,6 +903,6 @@
 isolated tests using \pack{tox} or coverage analysis using \pack{coverage} in a
 local machine. Up-to-date build status and coverage status are displayed on the
 landing page of the Bitbucket repository. Instructions on how to run unittests,
 coverage and lint tests are included in the documentation.
@@ -909,9 +907,8 @@
-We also try to follow a consistent code style as recommended by PEP (Python
-enhancement proposals) --- 8 and 257. This is also inspected using lint
-checkers such as \codeinline{flake8} and \codeinline{pylint} among the
-developers. The Python code is regularly cleaned up using the code formatter
-\codeinline{black}.
+We also try to follow a consistent code style as recommended by PEP (Python
+enhancement proposals) 8 and 257. This is also inspected using lint checkers such
+as \codeinline{flake8} and \codeinline{pylint} among the developers. The Python
+code is regularly cleaned up using the code formatter \codeinline{black}.

 \section*{(2) Availability}
@@ -951,8 +948,9 @@
 \begin{itemize}
 \item Pierre Augier (LEGI): creator of the FluidDyn project and of
 \fluidpack{fft}.
-\item Cyrille Bonamy (LEGI): C++ code.
-\item Ashwin Vishnu Mohanan (KTH): benchmark, unittests, ...
+\item Cyrille Bonamy (LEGI): C++ code and some methods in the operator classes.
+\item Ashwin Vishnu Mohanan (KTH): command lines utilities, benchmarks, unittests
+and continuous integration, bug fixes, etc.
 \end{itemize}

 \section*{Software location:}
fluidfft/microbench/make_fig_bar.py (+10 −2)
@@ -49,6 +49,6 @@
 ax.set_xticks([])
 ax.set_xticklabels([])
-ax.set_ylabel('time (ms)')
+ax.set_ylabel('elapsed time (ms)')
 ax.set_title('outplace (with memory allocation)')
@@ -53,5 +53,9 @@
 ax.set_title('outplace (with memory allocation)')

+xlim = ax.get_xlim()
+ax.plot(xlim, (times_outplace[0],)*2, 'k:')
+ax.set_xlim(xlim)
+
 y = 55
 for x, s in zip(left, keys_outplace):
     ax.text(x, y, s, rotation=20,
@@ -75,6 +79,6 @@
 ax.set_xticks([])
 ax.set_xticklabels([])
-ax.set_ylabel('time (ms)')
+ax.set_ylabel('elapsed time (ms)')
 ax.set_title('inplace')
@@ -79,5 +83,9 @@
 ax.set_title('inplace')

+xlim = ax.get_xlim()
+ax.plot(xlim, (times_inplace[0],)*2, 'k:')
+ax.set_xlim(xlim)
+
 y = 45
 for x, s in zip(left, keys_inplace):
     ax.text(x, y, s, rotation=20,
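
The lines added in this file draw a dotted horizontal reference line at the
height of the first bar (the Fortran timing mentioned in the updated figure
caption). A self-contained sketch of the same matplotlib technique, with
made-up data:

    import matplotlib.pyplot as plt

    times = [55.0, 80.0, 62.0]  # made-up timings; times[0] is the reference
    fig, ax = plt.subplots()
    ax.bar(range(len(times)), times)

    # Freeze the x-limits, draw a dotted line at the reference height across
    # them, then restore the limits so the axis does not expand.
    xlim = ax.get_xlim()
    ax.plot(xlim, (times[0],) * 2, 'k:')
    ax.set_xlim(xlim)
    plt.show()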
fluidsim/fluidsim_paper.tex (+12 −14)
@@ -649,5 +649,4 @@
 methods used for the sequential mode of the solvers in \fluidpack{sim} differ
 from the parallel mode, the smallest number of processes we will use in this
 analysis. Speedup is formally defined here as:
 \begin{equation*}
@@ -653,7 +652,8 @@
 \begin{equation*}
-S(n_p) =
-\frac{\mathrm{Time\ elapsed\ for\ } N \mathrm{\ iterations\ with\ }
-n_{p,\min} \mathrm{\ processes} \times S(n_{p,\min})}
-{\mathrm{Time\ elapsed\ for\ } N \mathrm{\ iterations\ with\ } n_p
-\mathrm{\ processes}}
+S_\alpha(n_p) =
+\frac{[\mathrm{Time\ elapsed\ for\ } N \mathrm{\ iterations\ with\ }
+n_{p,\min} \mathrm{\ processes}]_{\mathrm{fastest}} \times n_{p,\min}}
+{[\mathrm{Time\ elapsed\ for\ } N \mathrm{\ iterations\ with\ } n_p
+\mathrm{\ processes}]_\alpha}
 \label{eq:speedup}
 \end{equation*}
@@ -658,9 +658,8 @@
 \label{eq:speedup}
 \end{equation*}
-where $n_{p,\min}$ is the minimum number of processes employed for a unique
-test case in a particular hardware and $S(n_{p,\min}) = n_{p,\min}$ for the
-fastest result among various FFT methods. For slower methods $S(n_{p,\min})$
-is set proportionally.
+where $n_{p,\min}$ is the minimum number of processes employed for a specific
+array size and hardware, $\alpha$ denotes the FFT class used and ``fastest''
+corresponds to the fastest result among various FFT classes.

 In addition to number of processes, there is another important parameter, which
 is the size of the problem; in other words, the number of grid points used to
@@ -777,8 +776,7 @@
 here the Navier-Stokes 3D solver (\codeinline{fluidsim.solvers.ns3d}) is chosen
 to perform 3D benchmarks.
 %
-A box size of $2\pi\times2\pi\times2\pi$ is chosen as the reference test case.
-As demonstrated in Fig.~\ref{fig:strong3d_beskow} and
-Fig.~\ref{fig:strong3d_triolith} a physical global grid with
+A box size of $2\pi\times2\pi\times2\pi$ is chosen as the reference test case. As
+demonstrated in Fig.~\ref{fig:strong3d_beskow} a physical global grid with
 $128\times128\times128$ grid points is used when up to two compute nodes are
 allocated; a grid size of $512\times512\times512$ is used when between two and
@@ -783,7 +781,7 @@
 $128\times128\times128$ grid points is used when up to two compute nodes are
 allocated; a grid size of $512\times512\times512$ is used when between two and
 sixteen nodes are allocated; and a grid size of $1024\times1024\times1024$ is
 used when sixteen or more nodes are allocated.

 The forcing term in the solver and file input output have been disabled, so as
 to measure the performance of the solver accurately. A constant time-step,