Commit 7393fc200f12 authored by Ashwin Vishnu

Fluidfft: elaborate rebuttal

parent 059086e7b6e7
@@ -121,8 +121,8 @@
 Some of the well-known libraries are written in C, C++ and Fortran. The classical
 \libpack{FFTW} library supports MPI using 1D decomposition and hybrid parallelism
-using MPI and OpenMP. Other libraries, now implement the 2D decomposition:
-\libpack{PFFT} \citep{pippig_pfft2013}, \libpack{P3DFFT}
+using MPI and OpenMP. Other libraries now implement the 2D decomposition for
+FFT over 3D arrays: \libpack{PFFT} \citep{pippig_pfft2013}, \libpack{P3DFFT}
 \citep{pekurovsky2012p3dfft}, \libpack{2decomp\&FFT} and so on. These libraries
 rely on MPI for the communications between processes, are optimized for
 supercomputers and scale well to hundreds of thousands of cores. However, since
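
To make the slab/pencil terminology concrete, here is a minimal Python sketch (our illustration, not code from the manuscript or from the cited libraries) of how a pencil (2D) decomposition assigns blocks of a 3D array to a 2D grid of MPI processes:

```python
def pencil_slices(rank, shape, p0, p1):
    """Slices of the global (n0, n1, n2) array owned by process `rank`
    on a p0 x p1 process grid; assumes p0 divides n0 and p1 divides n1.
    Axis 2 is kept whole so that local 1D FFTs can be applied along it."""
    n0, n1, n2 = shape
    r0, r1 = divmod(rank, p1)  # position of `rank` in the process grid
    b0, b1 = n0 // p0, n1 // p1
    return (slice(r0 * b0, (r0 + 1) * b0),
            slice(r1 * b1, (r1 + 1) * b1),
            slice(0, n2))

# With p1 = 1 this degenerates to a slab (1D) decomposition, which caps
# the number of processes at n0; a pencil decomposition scales to n0 * n1.
s0, s1, s2 = pencil_slices(rank=3, shape=(8, 8, 8), p0=2, p1=2)
```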
@@ -460,7 +460,7 @@
 methods. These classes are accompanied by unit test cases.
 \item \pack{Pythran} functions to speed up critical methods in the Python
-operators classes.
+operator classes.
 \end{itemize}
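
As an illustration of what such a function can look like, here is a hypothetical Pythran-accelerated operator function (the name, signature and axis convention are ours, not fluidfft's actual code). The `# pythran export` comment gives Pythran the signature to compile; uncompiled, the module still runs as plain NumPy:

```python
import numpy as np

# pythran export grad_fft(complex128[][][], float64[], float64[], float64[])
def grad_fft(a_fft, k0, k1, k2):
    """Spectral gradient: multiply the transformed field by i*k along
    each axis (hypothetical example, axis convention assumed)."""
    grad0 = 1j * k0[:, np.newaxis, np.newaxis] * a_fft
    grad1 = 1j * k1[np.newaxis, :, np.newaxis] * a_fft
    grad2 = 1j * k2[np.newaxis, np.newaxis, :] * a_fft
    return grad0, grad1, grad2
```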
@@ -559,9 +559,10 @@
 the first index for the physical input array. This restriction is a result
 of some \libpack{FFTW} library internals and design choices adopted in
 \fluidpack{fft}. This limits \codeinline{fftw1d} (our own MPI implementation
-using MPI types and sequential 1d fft) to 192 cores and \codeinline{fftwmpi3d}
-to 384 cores. The latter can utilize more cores since it is capable of working
-with empty arrays, while sharing some of the computational load.
+using MPI types and 1D transforms from FFTW) to 192 cores and
+\codeinline{fftwmpi3d} to 384 cores. The latter can utilize more cores since it
+is capable of working with empty arrays, while sharing some of the
+computational load.
 %
 The fastest methods for relatively
 low and high number of processes are \codeinline{fftw1d} and
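The restriction described in this hunk can be pictured with a small helper (our sketch; the semantics are assumed from the passage above, and the actual core counts depend on the benchmark array sizes):

```python
def slab_nproc_ok(nproc, n0, accepts_empty_arrays=False):
    """Can `nproc` MPI processes transform an array whose first axis has
    length n0 under a slab (1D) decomposition?  (Assumed semantics.)"""
    if accepts_empty_arrays:
        # fftwmpi3d-style: processes beyond those holding data receive
        # empty arrays, so larger counts remain usable while still
        # sharing some of the computational load.
        return True
    # fftw1d-style: every process must own an equal, non-empty slab,
    # i.e. the process count must divide the first axis exactly.
    return nproc <= n0 and n0 % nproc == 0
```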
@@ -41,18 +41,18 @@
 We have created an issue and added some lines in the manuscript:
 
-"For the aforementioned reasons, we have preferred Pythran to compile optimized
-`operator` functions that complement the FFT classes. Although with this we
-obtain remarkable performance, there is still room for some improvement, in
-terms of logical implementation and allocation of arrays. For example,
-applications such as CFD simulations often deals with non-linear terms which
-require dealiasing. The FFT classes of FluidFFT, currently allocates the same
-number of modes in the spectral array so as to transform the physical array.
-Thereafter, we apply dealiasing by setting zeros to wavenumbers which are
-larger than, say, two-thirds of the maximum wavenumber. Instead, we could take
-into account dealiasing in the FFT classes to save some memory and computation
-time (See [FluidFFT issue
-21](https://bitbucket.org/fluiddyn/fluidfft/issues/21/))."
+> "For the aforementioned reasons, we have preferred Pythran to compile optimized
+> `operator` functions that complement the FFT classes. Although with this we
+> obtain remarkable performance, there is still room for some improvement, in
+> terms of logical implementation and allocation of arrays. For example,
+> applications such as CFD simulations often deal with non-linear terms which
+> require dealiasing. The FFT classes of FluidFFT currently allocate the same
+> number of modes in the spectral array as needed to transform the physical array.
+> Thereafter, we apply dealiasing by setting to zero the wavenumbers which are
+> larger than, say, two-thirds of the maximum wavenumber. Instead, we could take
+> dealiasing into account in the FFT classes to save some memory and computation
+> time (see [FluidFFT issue
+> 21](https://bitbucket.org/fluiddyn/fluidfft/issues/21/))."
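
For readers unfamiliar with the two-thirds rule mentioned in the quote, here is a minimal NumPy sketch (ours, not fluidfft's implementation) of the allocation issue being discussed:

```python
import numpy as np

n = 64
u = np.random.rand(n)                  # physical field (1D for brevity)
u_fft = np.fft.rfft(u)                 # spectral array: n//2 + 1 modes
k = np.fft.rfftfreq(n, d=1.0 / n)      # integer wavenumbers 0 .. n/2

# Two-thirds rule: zero every mode above 2/3 of the maximum wavenumber.
u_fft[k > (2.0 / 3.0) * k.max()] = 0.0

# All n//2 + 1 modes were allocated and transformed even though a third
# of them are discarded immediately; the saving discussed in issue 21
# would come from never allocating or computing those modes at all.
u_dealiased = np.fft.irfft(u_fft, n)
```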
@@ -56,9 +56,13 @@
 ## address typos and clarifications suggested by Reviewer B
 
 Done.
 
+We have fixed all the typos pointed out by the reviewer. We have clarified
+that choosing between slab and pencil decompositions is only possible for FFTs
+over 3D arrays. The term 'method' has been replaced with 'FFT library'. Other
+clarifications were made to the statements which the reviewer pointed out as
+vague.
@@ -60,10 +64,12 @@
 ## respond to Reviewer C's query about FFTW1D algorithm use
 
 We now write:
 
-"This limits `fftw1d` (our own MPI implementation using MPI types and
-sequential 1d fft) to 192 cores and `fftwmpi3d` to 384 cores."
+> "This limits `fftw1d` (our own MPI implementation using MPI types and 1D
+> transforms from FFTW) to 192 cores and `fftwmpi3d` to 384 cores."
 
 which should shed light on the underlying algorithm.
 
 ## clarify scaling limitations of the slab-parallelized algorithms
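
The algorithm behind the sentence quoted above can be illustrated with a serial stand-in (our sketch): a 3D FFT decomposes into 1D transforms along each axis, so a slab-distributed array can be transformed with local 1D FFTs plus one global transpose, which `fftw1d` performs with MPI derived types.

```python
import numpy as np

a = np.random.rand(4, 4, 4)

# 1D r2c transform along the contiguous last axis, then 1D c2c transforms
# along the remaining axes.  In the MPI version, axes 1 and 2 are local to
# each process; bringing axis 0 into local memory requires an all-to-all
# "transpose" before the final set of 1D transforms.
a_fft = np.fft.rfft(a, axis=2)
a_fft = np.fft.fft(a_fft, axis=1)
a_fft = np.fft.fft(a_fft, axis=0)

assert np.allclose(a_fft, np.fft.rfftn(a))  # same result as a direct 3D FFT
```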
@@ -79,7 +85,7 @@
 ## respond to Reviewer B's query about dependency on FluidDyn
 
 It is now clear that the Python package fluiddyn is not a dependency for the
-C++ API.
+C++ API. The dependencies of the C++ and Python APIs are now listed separately.
 
 ## respond to Reviewer B's query about cuFFT comparison
......@@ -87,3 +93,4 @@
We did not add the cuFFT comparison because the hardware used for the
benchmarks is not compatible with this library.