Commit 60569823219b authored by Ashwin Vishnu

`make revision` and address typos and clarifications suggested by Reviewer B

parent 3cb153249e22
@@ -26,7 +26,7 @@
 */*.pstats
 */.ipynb_checkpoints
 **/.mypy_cache
-auto/*
+**/_minted-*
......
......@@ -7,5 +7,16 @@
all: $(name).pdf
revision.tex:
hg cat -r 0.2.0 $(name).tex > $(name)_submitted.tex
latexdiff --flatten -c PICTUREENV=minted $(name)_submitted.tex $(name).tex > revision.tex
rm -f $(name)_submitted.tex
revision.pdf: revision.tex
latexmk -pdf -pdflatex="$(LATEX)" revision.tex
rm -f revision.tex
revision: revision.tex revision.pdf clean
clean:
rm -f *.log *.aux *.out *.bbl *.blg *.tmp
@@ -10,6 +21,6 @@
 clean:
 	rm -f *.log *.aux *.out *.bbl *.blg *.tmp
-	rm -rf _minted-$(name)
+	rm -rf _minted-*
 
 cleanall: clean
 	rm -f $(name).pdf
......
@@ -111,7 +111,9 @@
 memory of the processes that perform the FFT, so a lot of communication between
 processes is needed for 2D and 3D FFTs.
-There are two strategies to distribute an array in the memory, the 1D (or
+To elaborate, there is only one way to apply domain decomposition for a 2D FFT,
+which is to split the array into narrow strips across one dimension. However,
+for a 3D FFT, there are two strategies to distribute an array in memory: the 1D (or
 \emph{slab}) decomposition and the 2D (or \emph{pencil}) decomposition. The 1D
 decomposition is more efficient when only a few processes are used but suffers
 from an important limitation in terms of the number of MPI processes that can be
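To make the two decompositions concrete, here is a minimal sketch that prints the shape of the local subarray stored by each MPI process: a slab backend splits only the first dimension, while a pencil backend splits two. The backend strings and the availability of \codeinline{p3dfft} are assumptions based on the \fluidpack{fft} documentation, not part of this commit.

\begin{minted}[fontsize=\footnotesize]{python}
# Sketch comparing slab (1D) and pencil (2D) decompositions.
# Run with e.g. `mpirun -np 4 python decomp_sketch.py`; the backend
# names are assumptions and p3dfft may not be installed in every build.
from fluidfft import import_fft_class

for method in ("fft3d.mpi_with_fftwmpi3d",  # slab decomposition
               "fft3d.mpi_with_p3dfft"):    # pencil decomposition
    cls = import_fft_class(method)
    o = cls(128, 128, 128)
    # shape of the subarray held by this process in physical space
    print(method, o.get_shapeX_loc())
\end{minted}

With 4 processes one would expect local shapes of roughly $(32, 128, 128)$ in the slab case and $(64, 64, 128)$ in the pencil case.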
@@ -164,8 +166,8 @@
 certain drawbacks:
 \begin{itemize}
-\item No effort so far to consolidate all possible FFT libraries, both
-sequential, MPI and GPGPU based under a single package with similar syntax.
+\item No effort so far to consolidate sequential, MPI and GPGPU based FFT
+libraries under a single package with similar syntax.
 \item Quite complicated even for the simplest use case scenarios. To
 understand how to use them, a novice user has to, at least, read the
@@ -330,7 +332,7 @@
 values, from the documentation.
 %
 In \fluidpack{fft}, the Python API wraps all the functionalities of its C++
-counterpart and offers a more richer experience through an accompanying
+counterpart and offers a richer experience through an accompanying
 operator class.
 As a short example, let us try to calculate the gradient of a plane sine-wave
@@ -349,7 +351,7 @@
 \hat u(k_x,k_y) \exp(-ik_x x - ik_y y)
 \end{align*}
 %
-where $k_x$, $k_y$ represent the wavenumber corresponding to x- and y-directions,
+where $k_x$, $k_y$ represent the wavenumbers corresponding to the $x$ and $y$ directions,
 and $\mathbf{k}$ is the wavenumber vector.
 The equivalent pseudo-spectral implementation in \fluidpack{fft} is as follows:
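The implementation itself lies outside this hunk. As a hedged sketch, the computation with the 2D operator class looks approximately as follows; the class and method names (\codeinline{OperatorsPseudoSpectral2D}, \codeinline{gradfft\_from\_fft}) are taken from the \fluidpack{fft} documentation and should be treated as assumptions here:

\begin{minted}[fontsize=\footnotesize]{python}
# Sketch: gradient of a plane sine-wave computed pseudo-spectrally.
import numpy as np
from fluidfft.fft2d.operators import OperatorsPseudoSpectral2D

nx = ny = 100
lx = ly = 2 * np.pi
oper = OperatorsPseudoSpectral2D(nx, ny, lx, ly, fft="fft2d.with_fftw2d")

u = np.sin(oper.XX + oper.YY)   # the plane wave in physical space
u_fft = oper.fft(u)             # forward transform
# differentiation reduces to multiplication by i*kx and i*ky in spectral space
px_u_fft, py_u_fft = oper.gradfft_from_fft(u_fft)
px_u = oper.ifft(px_u_fft)      # gradient components back in physical space
py_u = oper.ifft(py_u_fft)
\end{minted}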
@@ -434,7 +436,7 @@
 Thereafter, \pack{Cython} \citep{behnel_cython2011} generates C++ code with
 necessary Python bindings, which are then built in the form of extensions or
 dynamic libraries importable in Python code. All the built extensions are then
-installed as a Python package \fluidpack{fft}.
+installed as a Python package named \fluidpack{fft}.
 A helper function \codeinline{fluidfft.import\_fft\_class} is provided with the
 package to simply import the FFT class. However, it is more convenient and
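As an illustration of this helper, a minimal usage sketch; the backend string and array shape are arbitrary examples, and the round-trip property assumes the normalized transforms described in the documentation:

\begin{minted}[fontsize=\footnotesize]{python}
# Sketch: select an FFT backend by name and perform a round trip.
import numpy as np
from fluidfft import import_fft_class

cls = import_fft_class("fft2d.with_fftw2d")  # one backend among several
o = cls(16, 48)                              # global array shape
u = np.ones(o.get_shapeX_loc())
u_fft = o.fft(u)
u_back = o.ifft(u_fft)
assert np.allclose(u, u_back)
\end{minted}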
@@ -448,8 +450,8 @@
 To summarize, \fluidpack{fft} consists of the following layers:
 \begin{itemize}
-\item One C++ class per method derived from a hierarchy of C++ classes as shown
-in Fig.~\ref{fig:classes}.
+\item One C++ class per FFT library derived from a hierarchy of C++ classes
+as shown in Fig.~\ref{fig:classes}.
 \item \pack{Cython} wrappers of the C++ classes with their unit test cases.
@@ -453,7 +455,7 @@
 \item \pack{Cython} wrappers of the C++ classes with their unit test cases.
-\item Python operators classes (2D and 3D) to write code independently of the
+\item Python operator classes (2D and 3D) to write code independently of the
 library used for the computation of the FFT and with some mathematical helper
 methods. These classes are accompanied by unit test cases.
@@ -680,10 +682,11 @@
 We have also analysed the performance of 2D MPI enabled FFT classes on the same
 machine using an array shaped $2160\times2160$ in
 Fig.~\ref{fig:cluster8:2160x2160}. The fastest library is
-\codeinline{fftwmpi2d}. Both libraries display near-linear scaling, except when
-more than one node is used and the performance tapers off.
+\codeinline{fftwmpi2d}. Both the \codeinline{fftw1d} and \codeinline{fftwmpi2d}
+libraries display near-linear scaling, except when more than one node is used
+and the performance tapers off.
 
 As a conclusive remark on scalability, a general rule of thumb should be to use
 1D domain decomposition when only very few processors are employed. For massive
 parallelization, 2D decomposition is required to achieve good speedup without
 being limited by the number of processors at one's disposal. We have thus shown that
@@ -685,9 +688,9 @@
 As a conclusive remark on scalability, a general rule of thumb should be to use
 1D domain decomposition when only very few processors are employed. For massive
 parallelization, 2D decomposition is required to achieve good speedup without
 being limited by the number of processors at one's disposal. We have thus shown that
-overall performance of the libraries implemented in \fluidpack{fft} are quite
+overall performance of the libraries interfaced by \fluidpack{fft} is quite
 good, and there is no noticeable drop in speedup when the Python API is used.
 %
 This benchmark analysis also shows that the fastest FFT implementation depends
@@ -705,7 +708,7 @@
 velocity field on a non-divergent velocity field. It is performed in spectral
 space, where it can simply be written as
 \begin{minted}[fontsize=\footnotesize]{python}
-# pythran export proj_outplace(
+# pythran export proj_out_of_place(
 #     complex128[][][], complex128[][][], complex128[][][],
 #     float64[][][], float64[][][], float64[][][], float64[][][])
@@ -709,7 +712,7 @@
 #     complex128[][][], complex128[][][], complex128[][][],
 #     float64[][][], float64[][][], float64[][][], float64[][][])
-def proj_outplace(vx, vy, vz, kx, ky, kz, inv_k_square_nozero):
+def proj_out_of_place(vx, vy, vz, kx, ky, kz, inv_k_square_nozero):
     tmp = (kx * vx + ky * vy + kz * vz) * inv_k_square_nozero
     return vx - kx * tmp, vy - ky * tmp, vz - kz * tmp
 \end{minted}
@@ -713,7 +716,7 @@
     tmp = (kx * vx + ky * vy + kz * vz) * inv_k_square_nozero
     return vx - kx * tmp, vy - ky * tmp, vz - kz * tmp
 \end{minted}
-Note that, this implementation is ``outplace'', meaning that the result is
+Note that this implementation is ``out-of-place'', meaning that the result is
 returned by the function and that the input velocity field (\codeinline{vx, vy,
 vz}) is unmodified.
 %
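To see the projection property concretely, a small self-contained check (arbitrary random arrays stand in for a real spectral velocity field) that the returned field is orthogonal to $\mathbf{k}$, i.e. divergence-free:

\begin{minted}[fontsize=\footnotesize]{python}
# Sketch verifying that the projection returns a divergence-free field:
# k . (v - k (k.v)/|k|^2) = 0 for every wavenumber vector k.
import numpy as np

def proj_out_of_place(vx, vy, vz, kx, ky, kz, inv_k_square_nozero):
    tmp = (kx * vx + ky * vy + kz * vz) * inv_k_square_nozero
    return vx - kx * tmp, vy - ky * tmp, vz - kz * tmp

rng = np.random.default_rng(0)
shape = (8, 8, 8)  # arbitrary small size for the check
vx, vy, vz = (rng.normal(size=shape) + 1j * rng.normal(size=shape)
              for _ in range(3))
kx, ky, kz = (rng.normal(size=shape) for _ in range(3))
k_square = kx**2 + ky**2 + kz**2  # nonzero with probability one here
inv_k_square_nozero = 1.0 / k_square

rx, ry, rz = proj_out_of_place(vx, vy, vz, kx, ky, kz, inv_k_square_nozero)
assert np.allclose(kx * rx + ky * ry + kz * rz, 0)
\end{minted}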
@@ -751,7 +754,7 @@
 In the top axis of Fig.~\ref{fig:microbench}, we compare the elapsed times for
 different implementations of this function.
 %
-For this outplace version, we used three different codes:
+For this out-of-place version, we used three different codes:
 \begin{enumerate}
 \item a Fortran code (not shown\footnote{The codes and a Makefile used for this
 benchmark study are available in \href{%
@@ -763,7 +766,7 @@
 \item a Python version with three nested explicit loops:
 % (code not shown).
 \begin{minted}[fontsize=\footnotesize]{python}
-# pythran export proj_outplace_loop(
+# pythran export proj_out_of_place_loop(
 #     complex128[][][], complex128[][][], complex128[][][],
 #     float64[][][], float64[][][], float64[][][], float64[][][])
@@ -767,7 +770,7 @@
 #     complex128[][][], complex128[][][], complex128[][][],
 #     float64[][][], float64[][][], float64[][][], float64[][][])
-def proj_outplace_loop(vx, vy, vz, kx, ky, kz, inv_k_square_nozero):
+def proj_out_of_place_loop(vx, vy, vz, kx, ky, kz, inv_k_square_nozero):
     rx = np.empty_like(vx)
     ry = np.empty_like(vx)
@@ -835,7 +838,7 @@
 put the output in the input arrays. Such an ``in-place'' version can be written
 with \pack{Numpy} as:
 \begin{minted}[fontsize=\footnotesize]{python}
-# pythran export proj_inplace(
+# pythran export proj_in_place(
 #     complex128[][][], complex128[][][], complex128[][][],
 #     float64[][][], float64[][][], float64[][][], float64[][][])
@@ -839,7 +842,7 @@
 #     complex128[][][], complex128[][][], complex128[][][],
 #     float64[][][], float64[][][], float64[][][], float64[][][])
-def proj_inplace(vx, vy, vz, kx, ky, kz, inv_k_square_nozero):
+def proj_in_place(vx, vy, vz, kx, ky, kz, inv_k_square_nozero):
     tmp = (kx * vx + ky * vy + kz * vz) * inv_k_square_nozero
     vx -= kx * tmp
     vy -= ky * tmp
@@ -850,7 +853,7 @@
 %
 We also consider an ``in-place'' version with explicit loops:
 \begin{minted}[fontsize=\footnotesize]{python}
-# pythran export proj_inplace_loop(
+# pythran export proj_in_place_loop(
 #     complex128[][][], complex128[][][], complex128[][][],
 #     float64[][][], float64[][][], float64[][][], float64[][][])
@@ -854,7 +857,7 @@
 #     complex128[][][], complex128[][][], complex128[][][],
 #     float64[][][], float64[][][], float64[][][], float64[][][])
-def proj_inplace_loop(vx, vy, vz, kx, ky, kz, inv_k_square_nozero):
+def proj_in_place_loop(vx, vy, vz, kx, ky, kz, inv_k_square_nozero):
     n0, n1, n2 = kx.shape
@@ -875,6 +878,6 @@
 without explicit loops. This is however the version which is used in
 \pack{fluidfft} since it leads to faster execution.
-The elapsed time for these inplace versions and for an equivalent Fortran
+The elapsed times for these in-place versions and for an equivalent Fortran
 implementation are displayed in the bottom axis of Fig.~\ref{fig:microbench}.
 %
@@ -879,6 +882,6 @@
 implementation are displayed in the bottom axis of Fig.~\ref{fig:microbench}.
 %
-The ranking is the same as for the outplace versions and \pack{Pythran} is also
+The ranking is the same as for the out-of-place versions and \pack{Pythran} is also
 the fastest solution.
 %
 However, \pack{Numpy} is even slower (7.8 times slower than \pack{Pythran}
@@ -882,7 +885,7 @@
 the fastest solution.
 %
 However, \pack{Numpy} is even slower (7.8 times slower than \pack{Pythran}
-with the explicit loops) than for the outplace versions.
+with the explicit loops) than for the out-of-place versions.
 
 From this short and simple microbenchmark, we can infer four main points:
 \begin{itemize}
@@ -957,7 +960,7 @@
 has been successfully tested on PyPy 6.0.
 %
 A C++11 supporting compiler, while not mandatory for the C++ API or Cython
-extensions of \fluidpack{fft}, is recommended to able to use Pythran extensions.
+extensions of \fluidpack{fft}, is recommended to be able to use Pythran extensions.
 
 \section*{Dependencies}
......
@@ -50,7 +50,7 @@
 ax.set_xticks([])
 ax.set_xticklabels([])
 ax.set_ylabel('elapsed time (ms)')
-ax.set_title('outplace (with memory allocation)')
+ax.set_title('out-of-place (with memory allocation)')
 xlim = ax.get_xlim()
 ax.plot(xlim, (times_outplace[0],)*2, 'k:')
@@ -80,7 +80,7 @@
 ax.set_xticks([])
 ax.set_xticklabels([])
 ax.set_ylabel('elapsed time (ms)')
-ax.set_title('inplace')
+ax.set_title('in-place')
 xlim = ax.get_xlim()
 ax.plot(xlim, (times_inplace[0],)*2, 'k:')
......