Commit de4af64c ("Bench legi")
Authored 6 years ago by Ashwin Vishnu
Parent: 77e05f2e

Showing 1 changed file: fluidfft/fluidfft_paper.tex (+38 additions, −13 deletions)
@@ -571,6 +571,14 @@
 E5-2695 v4 (2.1 GHz) processors with 36 cores per node. The installation was
 done using Intel C++ 18 compiler, Python 3.6.5 and cray-mpich 7.0.4.
+\begin{figure}[htp!]
+\centering
+\includegraphics[width=\linewidth]{tmp/fig_beskow_384x1152x1152}
+\caption{Speedup computed from the median of the elapsed times for 3D fft
+(384 $\times$ 1152 $\times$ 1152, left: fft and right: ifft) on Beskow.}
+\label{fig:beskow384x1152x1152}
+\end{figure}
 In Fig.~\ref{fig:beskow384x1152x1152}, the strong scaling results of the
 cuboidal array can be observed. In this set of results we have also included
 intra-node scaling, in which there is no latency introduced due to typically
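The captions above and below refer to the speedup computed from the median of the elapsed times. The exact normalization is not visible in this excerpt; a usual convention for such strong-scaling plots, and presumably the one used here, is to normalize by the smallest process count measured:

\[
S(n_p) = n_{p,\min}\,
\frac{\operatorname{median}\, t_{\mathrm{elapsed}}(n_{p,\min})}
     {\operatorname{median}\, t_{\mathrm{elapsed}}(n_p)},
\]

where $n_p$ is the number of MPI processes and $n_{p,\min}$ is the smallest number of processes for which the case was run; ideal strong scaling then corresponds to $S(n_p) = n_p$, the linear trend mentioned in the discussion.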
@@ -583,9 +591,9 @@
 A striking difference when compared with Fig.~\ref{fig:occigen384x1152x1152}
 is that \codeinline{fftw1d} is not the fastest of the 4 classes in this machine.
-One can only speculate that, this could be a consequence of the use of a
-different MPI library and hardware has been employed. This also emphasises the
+One can only speculate that, this could be a consequence of the differences in
+MPI library and hardware which has been employed. This also emphasises the
 need to perform benchmarks when using an entirely new configuration.
 \begin{figure}[htp!]
 \centering
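The remark about benchmarking every new configuration reflects the measurement protocol behind these figures: each transform is repeated many times and the median elapsed time is retained. The paper's own benchmark code is not part of this diff, so the following is only a minimal sketch of that protocol with mpi4py; the transform argument stands for any distributed transform, for instance the fft or ifft method of one of the \fluidpack{fft} classes.

import time

import numpy as np
from mpi4py import MPI


def median_elapsed(transform, field, nb_repeats=20):
    """Run transform(field) repeatedly and return the median wall time.

    A barrier before each repetition makes all ranks start together;
    for each repetition the slowest rank defines the elapsed time.
    """
    comm = MPI.COMM_WORLD
    times = np.empty(nb_repeats)
    for i in range(nb_repeats):
        comm.Barrier()  # synchronize ranks before timing
        t_start = time.perf_counter()
        transform(field)
        times[i] = time.perf_counter() - t_start
    times_max = np.empty_like(times)
    comm.Allreduce(times, times_max, op=MPI.MAX)
    return float(np.median(times_max))

With the forward and inverse transforms passed in turn as the transform argument, the two panels (fft and ifft) of the figures above correspond to the two measured medians.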
@@ -588,15 +596,7 @@
 need to perform benchmarks when using an entirely new configuration.
-\begin{figure}[htp!]
-\centering
-\includegraphics[width=\linewidth]{tmp/fig_beskow_384x1152x1152}
-\caption{Speedup computed from the median of the elapsed times for 3D fft
-(384 $\times$ 1152 $\times$ 1152, left: fft and right: ifft) on Beskow.}
-\label{fig:beskow384x1152x1152}
-\end{figure}
 \begin{figure}[htp!]
 \centering
 \includegraphics[width=\linewidth]{tmp/fig_beskow_1152x1152x1152}
 \caption{Speedup computed from the median of the elapsed times for 3D fft
 (1152 $\times$ 1152 $\times$ 1152, left: fft and right: ifft) on Beskow.}
@@ -613,5 +613,10 @@
 \paragraph{Benchmarks on a LEGI cluster}
-http://www.legi.grenoble-inp.fr
+Let us also analyse how \fluidpack{fft} scales on a computing cluster
+maintained at an institutional level, named Cluster8 at
+\href{%
+http://www.legi.grenoble-inp.fr}{LEGI}, Grenoble. This cluster functions using
+Intel Xeon CPU E5-2650 v3 (2.3 GHz) with 20 cores per node and \fluidpack{fft}
+was installed with a toolchain which includes gcc 4.9.2, Python 3.6.4 and
+OpenMPI 1.6.5 as key software components.
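The diff does not show how the runs on Cluster8 were launched, so the snippet below is only a hypothetical illustration of a strong-scaling sweep: the same case is repeated while only the number of MPI processes changes, here by calling mpirun through Python's subprocess module. The script name bench_fluidfft.py is a placeholder, not a file from this repository.

import subprocess

# Hypothetical strong-scaling sweep on a 20-core-per-node cluster:
# the same 320 x 640 x 640 case is rerun while only the number of
# MPI processes changes (two full nodes at most here).
for nb_procs in (2, 4, 8, 10, 16, 20, 40):
    subprocess.run(
        ["mpirun", "-np", str(nb_procs),
         "python", "bench_fluidfft.py", "320", "640", "640"],
        check=True,
    )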
@@ -617,5 +622,5 @@
-\begin{figure}[htp]
+\begin{figure}[htp!]
 \centering
 \includegraphics[width=\linewidth]{tmp/fig_legi_cluster8_320x640x640}
 \caption{Speedup computed from the median of the elapsed times for 3D fft
@@ -623,4 +628,12 @@
 \label{fig:cluster8:320x640x640}
 \end{figure}
+In Fig.~\ref{fig:cluster8:320x640x640} we observe that the strong scaling for
+an array shape of $320 \times 640 \times 640$ is not far from the ideal linear
+trend. The fastest library is \codeinline{fftwmpi3d} for this case. As expected
+from FFT algorithms, there is a slight drop in speedup when the array size is
+not exactly divisible by the number of processes, i.e.~with 12 processes. The
+speedup declines rapidly when more than one node is employed (above 20
+processes). This effect can be attributed to the latency introduced by
+inter-node communications, a hardware limitation.
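The dip at 12 processes follows directly from the decomposition arithmetic: with a slab (1D) decomposition each process should receive an equal number of planes, and 320 planes cannot be split evenly over 12 processes. A quick check of which process counts divide this case evenly (the axis that is split first is not shown in this excerpt, so the first axis is assumed here):

# Which process counts split the first axis of a 320 x 640 x 640 array evenly?
# (Assumes a slab decomposition along the first axis.)
n0 = 320
for nb_procs in (2, 4, 8, 10, 12, 16, 20, 40):
    planes, remainder = divmod(n0, nb_procs)
    note = "even split" if remainder == 0 else f"uneven ({remainder} planes left over)"
    print(f"{nb_procs:3d} processes: {planes} planes each, {note}")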
@@ -626,5 +639,5 @@
-\begin{figure}[htp]
+\begin{figure}[htp!]
 \centering
 \includegraphics[width=\linewidth]{tmp/fig_legi_cluster8_2160x2160}
 \caption{Speedup computed from the median of the elapsed times for 2D fft
@@ -632,6 +645,18 @@
 \label{fig:cluster8:2160x2160}
 \end{figure}
+We have also analysed the performance of 2D MPI-enabled FFT classes on the same
+machine using an array of shape $2160 \times 2160$ in
+Fig.~\ref{fig:cluster8:2160x2160}. The fastest library is
+\codeinline{fftwmpi2d}. Both libraries display near-linear scaling, except when
+more than one node is used and the performance tapers off.
+
+As a final remark on scalability, a general rule of thumb should be to use 1D
+domain decomposition when only very few processors are employed. For massive
+parallelization, 2D decomposition is required to achieve good speedup without
+being limited by the number of processors at one's disposal. We have thus shown
+that the overall performance of the libraries implemented in \fluidpack{fft} is
+quite good, and there is no noticeable drop in speedup when the Python API is used.
 \subsubsection*{Microbenchmark of critical ``operator'' functions}
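The rule of thumb in the closing paragraph comes from a hard limit of the two decompositions: a 1D (slab) decomposition of an $n_0 \times n_1 \times n_2$ array cannot use more processes than $n_0$, while a 2D (pencil) decomposition can use up to $n_0 \times n_1$. A small sketch of that bound for the cases benchmarked above (this helper is illustrative only, not part of \fluidpack{fft}):

def max_procs(shape, decomposition="slab"):
    """Upper bound on usable MPI processes for a distributed 3D FFT.

    A slab (1D) decomposition splits only the first axis, so at most
    shape[0] processes can hold data; a pencil (2D) decomposition splits
    the first two axes, allowing up to shape[0] * shape[1] processes.
    """
    if decomposition == "slab":
        return shape[0]
    if decomposition == "pencil":
        return shape[0] * shape[1]
    raise ValueError("decomposition must be 'slab' or 'pencil'")


for shape in [(384, 1152, 1152), (1152, 1152, 1152), (320, 640, 640)]:
    print(shape, "slab:", max_procs(shape), "pencil:", max_procs(shape, "pencil"))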