Commit 7393fc200f12, authored 6 years ago by Ashwin Vishnu

    Fluidfft: elaborate rebuttal

Parent: 059086e7b6e7
Showing 2 changed files, with 30 additions and 22 deletions:

  fluidfft/fluidfft_paper.tex  (+7 −6)
  fluidfft/rebuttal.md         (+23 −16)
fluidfft/fluidfft_paper.tex (+7 −6)
@@ -121,8 +121,8 @@
 Some of the well-known libraries are written in C, C++ and Fortran. The classical
 \libpack{FFTW} library supports MPI using 1D decomposition and hybrid parallelism
-using MPI and OpenMP. Other libraries, now implement the 2D decomposition
-: \libpack{PFFT} \citep{pippig_pfft2013}, \libpack{P3DFFT}
+using MPI and OpenMP. Other libraries, now implement the 2D decomposition for
+FFT over 3D arrays: \libpack{PFFT} \citep{pippig_pfft2013}, \libpack{P3DFFT}
 \citep{pekurovsky2012p3dfft}, \libpack{2decomp\&FFT} and so on. These libraries
 rely on MPI for the communications between processes, are optimized for
 supercomputers and scales well to hundreds of thousands of cores. However, since
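An editorial aside on the 1D versus 2D decomposition this hunk distinguishes: a slab (1D) decomposition of an N³ array can keep at most N processes busy, whereas a pencil (2D) decomposition scales to N² processes. A minimal sketch of the local array shapes (hypothetical helper functions, not code from any of the cited libraries; assumes the process counts divide n evenly):

```python
# Local array shapes under slab (1D) and pencil (2D) decompositions of a
# global (n, n, n) array.

def local_shape_slab(n, nprocs):
    # Split along axis 0 only: at most n processes get non-empty slabs.
    return (n // nprocs, n, n)

def local_shape_pencil(n, p0, p1):
    # Split axes 0 and 1 over a p0 x p1 process grid: up to n * n processes.
    return (n // p0, n // p1, n)

print(local_shape_slab(512, 128))       # (4, 512, 512)
print(local_shape_pencil(512, 32, 32))  # (16, 16, 512)
```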
@@ -460,7 +460,7 @@
 methods. These classes are accompanied by unit test cases.
 \item \pack{Pythran} functions to speedup critical methods in the Python
-operators classes.
+operator classes.
 \end{itemize}
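For readers unfamiliar with Pythran: the "Pythran functions" mentioned in this hunk are ordinary Python functions compiled ahead of time from a comment annotation. A minimal sketch in that style (a hypothetical example, not FluidFFT's actual operator code; assumes kx varies along axis 0 and ky along axis 1):

```python
# pythran export gradient_fft(complex128[][], float64[], float64[])
import numpy as np

def gradient_fft(f_fft, kx, ky):
    """Spectral gradient of a 2D field: (i*kx*f_fft, i*ky*f_fft)."""
    n0, n1 = f_fft.shape
    px_f_fft = np.empty_like(f_fft)
    py_f_fft = np.empty_like(f_fft)
    for i0 in range(n0):
        for i1 in range(n1):
            px_f_fft[i0, i1] = 1j * kx[i0] * f_fft[i0, i1]
            py_f_fft[i0, i1] = 1j * ky[i1] * f_fft[i0, i1]
    return px_f_fft, py_f_fft
```

Since the export annotation is just a comment, the same file also runs unmodified under plain CPython, which keeps such functions easy to test.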
@@ -559,9 +559,10 @@
 the first index for the physical input array. This restriction is as a result
 of some \libpack{FFTW} library internals and design choices adopted in
 \fluidpack{fft}. This limits \codeinline{fftw1d} (our own MPI implementation
-using MPI types and sequential 1d fft) to 192 cores and \codeinline{fftwmpi3d}
-to 384 cores. The latter can utilize more cores since it is capable of working
-with empty arrays, while sharing some of the computational load.
+using MPI types and 1D transforms from FFTW) to 192 cores and
+\codeinline{fftwmpi3d} to 384 cores. The latter can utilize more cores since it
+is capable of working with empty arrays, while sharing some of the
+computational load.
 %
 The fastest methods for relatively
 low and high number of processes are \codeinline{fftw1d} and
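A note on the "empty arrays" point in this hunk: once a slab decomposition has more processes than slabs along the split axis, the extra ranks receive empty local arrays, which the library must tolerate to keep running. A sketch of that constraint (illustrative only; the 192³ size is a hypothetical example, the text does not state the benchmark array size):

```python
# Rows of axis 0 owned by each rank under a slab decomposition
# (remainder rows handed to the lowest ranks first).

def slab_rows(n0, rank, nprocs):
    base, rest = divmod(n0, nprocs)
    return base + (1 if rank < rest else 0)

print(slab_rows(192, rank=0, nprocs=192))    # 1 -> every rank has work
print(slab_rows(192, rank=200, nprocs=384))  # 0 -> an empty local array
```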
fluidfft/rebuttal.md (+23 −16)
@@ -41,18 +41,18 @@
 We have created an issue and added some lines in the manuscript:

-"For the aforementioned reasons, we have preferred Pythran to compile optimized
-`operator` functions that complement the FFT classes. Although with this we
-obtain remarkable performance, there is still room for some improvement, in
-terms of logical implementation and allocation of arrays. For example,
-applications such as CFD simulations often deals with non-linear terms which
-require dealiasing. The FFT classes of FluidFFT, currently allocates the same
-number of modes in the spectral array so as to transform the physical array.
-Thereafter, we apply dealiasing by setting zeros to wavenumbers which are
-larger than, say, two-thirds of the maximum wavenumber. Instead, we could take
-into account dealiasing in the FFT classes to save some memory and computation
-time (See [FluidFFT issue
-21](https://bitbucket.org/fluiddyn/fluidfft/issues/21/))."
+> "For the aforementioned reasons, we have preferred Pythran to compile optimized
+> `operator` functions that complement the FFT classes. Although with this we
+> obtain remarkable performance, there is still room for some improvement, in
+> terms of logical implementation and allocation of arrays. For example,
+> applications such as CFD simulations often deals with non-linear terms which
+> require dealiasing. The FFT classes of FluidFFT, currently allocates the same
+> number of modes in the spectral array so as to transform the physical array.
+> Thereafter, we apply dealiasing by setting zeros to wavenumbers which are
+> larger than, say, two-thirds of the maximum wavenumber. Instead, we could take
+> into account dealiasing in the FFT classes to save some memory and computation
+> time (See [FluidFFT issue
+> 21](https://bitbucket.org/fluiddyn/fluidfft/issues/21/))."

 ## address typos and clarifications suggested by Reviewer B
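An aside on the two-thirds rule described in the quoted passage: dealiasing simply zeroes the spectral coefficients whose wavenumbers exceed 2/3 of the maximum. A minimal NumPy sketch for a 2D spectral array (hypothetical helper, not the FluidFFT API):

```python
import numpy as np

def dealias_two_thirds(f_fft, kx, ky):
    """Zero the modes whose wavenumber exceeds 2/3 of the maximum."""
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    mask = (np.abs(KX) > (2 / 3) * np.abs(kx).max()) | (
        np.abs(KY) > (2 / 3) * np.abs(ky).max()
    )
    f_fft[mask] = 0.0
    return f_fft
```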
@@ -56,9 +56,13 @@
 ## address typos and clarifications suggested by Reviewer B

 Done.
+We have fixed all the typos pointed out by the reviewer. We have clarified
+that choosing between slab and pencil decompositions are only possible for FFT
+over 3D arrays. The usage 'method' has been replaced with 'FFT library'. Other
+clarifications were made to the statements which were pointed out to be vague by
+the reviewer.

 ## respond to Reviewer C's query about FFTW1D algorithm use

 We now write:
@@ -60,10 +64,12 @@
 ## respond to Reviewer C's query about FFTW1D algorithm use

 We now write:

-"This limits `fftw1d` (our own MPI implementation using MPI types and
-sequential 1d fft) to 192 cores and `fftwmpi3d` to 384 cores."
+> "This limits `fftw1d` (our own MPI implementation using MPI types and 1D
+> transforms from FFTW) to 192 cores and `fftwmpi3d` to 384 cores".

 which should shed light on the underlying algorithm.

 ## clarify scaling limitations of the slab-parallelized algorithms
@@ -79,7 +85,7 @@
 ## respond to Reviewer B's query about dependency on FluidDyn

 It is now clear that the Python package fluiddyn is not a dependency for the
-C++ API.
+C++ API.
 The dependencies for the C++ and Python API are distinctly mentioned.

 ## respond to Reviewer B's query about cuFFT comparison
@@ -87,3 +93,4 @@
 We did not add the cuFFT comparison because the hardware used for the
 benchmarks is not compatible with this library.