Commit 1953e40e authored by Pierre Augier

Text compare codes fluidsim.

parent 1c65e952
......@@ -13,8 +13,8 @@
@article{fluidfft,
author = "Ashwin Vishnu Mohanan and Cyrille Bonamy and Pierre Augier",
year = "2018",
title = "Fluidfft: common API (C++ and Python) for Fast Fourier Transform
libraries",
title = "{F}luidfft: common {API} ({C}++ and {P}ython) for {F}ast {F}ourier
{T}ransform libraries",
journal = "J. Open Research Software",
volume = "(to be submitted)",
pages = ""
......@@ -24,7 +24,7 @@
author = "Ashwin Vishnu Mohanan and Cyrille Bonamy and Pierre Augier",
year = "2018",
title = "FluidSim: modular, object-oriented Python package for
high-performance CFD simulations",
high-performance {CFD} simulations",
journal = "J. Open Research Software",
volume = "(to be submitted)",
pages = ""
......@@ -296,4 +296,14 @@
author={Meyers, Scott},
year={2012},
publisher={Addison-Wesley}
}
\ No newline at end of file
}
@article{DeloncleBillantChomaz2008,
title={Nonlinear evolution of the zigzag instability in stratified fluids: a shortcut on the route to dissipation},
author={Deloncle, Axel and Billant, Paul and Chomaz, Jean-Marc},
journal={Journal of Fluid Mechanics},
volume={599},
pages={229--239},
year={2008},
publisher={Cambridge University Press}
}
......@@ -31,7 +31,7 @@
figures: tmp/fig_microbench.pdf
cd python && python makefile_figures.py
$(name).pdf: figures $(name).log
$(name).pdf: figures $(name).log
@# $(LATEX) $(name).tex
@if [ `grep "Package rerunfilecheck Warning: File" $(name).log | wc -l` != 0 ]; then $(LATEX) $(name).tex; fi
@if [ `grep "Rerun to get cross-references right." $(name).log | wc -l` != 0 ]; then $(LATEX) $(name).tex; fi
......@@ -46,5 +46,5 @@
$(name).aux: $(name).tex
$(LATEX) $(name).tex
tmp/fig_microbench.pdf:
tmp/fig_microbench.pdf: microbench/make_fig_bar.py
python microbench/make_fig_bar.py save
......@@ -13,6 +13,7 @@
cleanall: clean
rm -f $(name).pdf
rm -rf tmp
edittex:
emacs $(name).tex &
......@@ -27,7 +28,7 @@
doit: vimtex $(name).pdf
zathura $(name).pdf &
$(name).pdf: $(name).log
$(name).pdf: tmp/fig_compare_with_ns3d.pdf $(name).log
@# $(LATEX) $(name).tex
@if [ `grep "Package rerunfilecheck Warning: File" $(name).log | wc -l` != 0 ]; then $(LATEX) $(name).tex; fi
@if [ `grep "Rerun to get cross-references right." $(name).log | wc -l` != 0 ]; then $(LATEX) $(name).tex; fi
......@@ -41,3 +42,6 @@
$(name).aux: $(name).tex
$(LATEX) $(name).tex
tmp/fig_compare_with_ns3d.pdf: compare_codes/make_fig_bar.py
python compare_codes/make_fig_bar.py save
......@@ -9,6 +9,8 @@
* spectralDNS
* dedalus
For these codes, the benchmark scripts are in the directory `fluidsim/bench`.
For ns3d, see https://bitbucket.org/paugier/ns3d (directory `ns3d/jobs`).
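A minimal sketch of a sequential fluidsim run timed over 10 time steps (this is not one of the scripts in `fluidsim/bench`; the parameter names follow the public fluidsim API but should be checked against the installed version):

```python
from time import perf_counter

from fluidsim.solvers.ns2d.solver import Simul

params = Simul.create_default_params()
params.oper.nx = params.oper.ny = 512

# stop after a fixed number of time steps instead of a physical end time
params.time_stepping.USE_T_END = False
params.time_stepping.it_end = 10

# disable outputs so that only the solver itself is timed
params.output.HAS_TO_SAVE = False

sim = Simul(params)

t_start = perf_counter()
sim.time_stepping.start()
print(f"10 time steps in {perf_counter() - t_start:.2f} s")
```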
Results 2d cases
......
import os
import sys
import numpy as np
import matplotlib.pyplot as plt
from fluiddyn.output.rcparams import set_rcparams
set_rcparams(fontsize=14, for_article=True, for_beamer=False)
here = os.path.abspath(os.path.dirname(__file__))
here_tmp = os.path.join(here, '../tmp')
if not os.path.exists(here_tmp):
os.mkdir(here_tmp)
keys = (
'total',
'FFT',
'RK4',
'curl',
'vector\nproduct',
'projection',
)
# measured elapsed times (s) for the 3d case (128^3, 10 time steps):
# Fortran ns3d vs fluidsim.solvers.ns3d, for the tasks listed in `keys`
times_ns3d = np.array([9.52, 6.18, 1.91, 0.49, 0.35, 0.46])
times_fluidsim = np.array([9.45, 6.72, 1.57, 0.44, 0.34, 0.30])
left = 2.2*np.arange(len(times_ns3d))
fig, ax = plt.subplots(1, figsize=(8, 4))
shift = 0.9
ax.bar(left, height=times_ns3d, color='b', edgecolor='k', yerr=0.07)
ax.bar(left+shift, height=times_fluidsim, color='y', edgecolor='k', yerr=0.07)
ax.set_xticks([])
ax.set_xticklabels([])
ax.set_ylabel('time (s)')
# ax.set_title('outplace (with memory allocation)')
y = -0.8
for x, s in zip(left, keys):
ax.text(x+shift/2, y, s, rotation=0,
horizontalalignment='center',
verticalalignment='center')
fig.tight_layout(rect=(0, 0.05, 1, 1))
path_fig = os.path.join(here_tmp, 'fig_compare_with_ns3d.pdf')
if 'save' in sys.argv:
fig.savefig(path_fig, dpi=800)
else:
plt.show()
"""
pip install tabulate
"""
from tabulate import tabulate
table = [
["512$^2$", 0.54, 17.1, 0.92, 0.82],
["1024$^2$", 2.69, 75.3, 3.48, 3.96]
]
headers = ["", r"\fluidpack{sim}", 'Dedalus', 'SpectralDNS', 'NS3D']
print(tabulate(table, headers, tablefmt="latex_raw"))
......@@ -16,7 +16,8 @@
\section*{Title}
FluidSim: modular, object-oriented Python package for high-performance CFD simulations
FluidSim: modular, object-oriented Python package for high-performance CFD
simulations
\section*{Paper Authors}
......@@ -801,5 +802,33 @@
% First broad description of the three codes
From high versatility (Dedalus) to high performance (NS3D)...
In this subsection, we compare \fluidpack{sim} with three other open-source CFD
pseudo-spectral codes:
\begin{itemize}
\item \href{http://dedalus-project.org/}{Dedalus} \citep{burns_dedalus} is ``a
flexible framework for spectrally solving differential equations''. It is very
versatile and the user describes the problem to be solved symbolically.
\item \href{https://github.com/spectralDNS/spectralDNS}{SpectralDNS}
\citep{mortensen_spectraldns2016} is a ``high-performance pseudo-spectral
Navier-Stokes DNS solver for triply periodic domains. The most notable feature of
this solver is that it is written entirely in Python using NumPy, MPI for Python
(mpi4py) and pyFFTW.''
SpectralDNS is therefore technically very similar to \fluidpack{sim}.
\item \href{https://bitbucket.org/paugier/ns3d}{NS3D} \cite[see for
example][]{DeloncleBillantChomaz2008} is a highly efficient code (parallelized
with MPI and OpenMP) written in Fortran. It has been highly optimized by
generations of PhD students at \href{https://www.ladhyx.polytechnique.fr}{LadHyX}
and is indeed very fast. However, it is limited to 2d decomposition for the 3d FFT.
\end{itemize}
For these comparisons and for the sake of simplicity, we limit ourselves to
sequential runs. We have already discussed in detail the scalability of
pseudo-spectral codes based on Fourier transforms in the previous section and in
the companion paper \citep{fluidfft}.
\paragraph{Bi-dimensional simulations}
......@@ -805,4 +834,33 @@
\paragraph{Comparison with Dedalus}
\begin{table}
\centering
\begin{tabular}{lrrrr}
\hline
& \fluidpack{sim} & Dedalus & SpectralDNS & NS3D \\
\hline
512$^2$ & 0.54 & 17.1 & 0.92 & 0.82 \\
1024$^2$ & 2.69 & 75.3 & 3.48 & 3.96 \\
\hline
\end{tabular}
\caption{Elapsed times (in seconds) for 10 time steps for two bidimensional cases
and the four CFD codes.}
\label{table:compare}
\end{table}
We first compare elapsed times for two resolutions (512$^2$ and 1024$^2$) over a
bi-dimensional space. The results are summarized in Table~\ref{table:compare}.
%
The results are consistent for the two resolutions. \fluidpack{sim} is the fastest
code for these cases. Dedalus is much slower. The two other codes have similar
performance, slightly slower than \fluidpack{sim} and much faster than Dedalus.
%
% todo: interpret and comment the results...
The Fortran code NS3D is surprisingly slow (47\% slower than \fluidpack{sim}):
since NS3D has no numerical scheme specialized for the 2d case, more FFTs have to
be performed than in SpectralDNS and \fluidpack{sim}.
%
This shows the importance of implementing the algorithm adapted to each problem,
which is much easier with a highly modular code such as \fluidpack{sim} than with a
specialized code such as NS3D.
......@@ -807,4 +865,26 @@
\paragraph{Comparison with SpectralDNS}
\paragraph{Tri-dimensional simulations}
We now turn our attention to a tri-dimensional case and measure the elapsed times
for 10 time steps at a resolution of 128$^3$.
%
Dedalus is extremely slow and does not seem to be adapted to this case, so we do
not report an exact elapsed time for this code.
%
SpectralDNS is slightly slower (11.55 s) than the two other codes (9.45 s for
\fluidpack{sim} and 9.52 s for NS3D). This difference is mainly explained by the
slower FFTs in SpectralDNS.
\begin{figure}[htp]
\centering
\includegraphics[width=\linewidth]{./tmp/fig_compare_with_ns3d}
\caption{Comparison of the execution times for a 3d case (128$^3$, 10 time steps)
between ns3d (blue bars) and \codeinline{fluidsim.solvers.ns3d} (yellow bars).
%
The first two bars correspond to the total time and the others to the main tasks
in terms of time consumption, namely FFT, Runge-Kutta 4, curl, vector product and
``projection''. }
\label{fig:compare:with:ns3d}
\end{figure}
......@@ -810,2 +890,8 @@
Figure~\ref{fig:compare:with:ns3d} presents a more detailed comparison between
ns3d (blue bars) and \codeinline{fluidsim.solvers.ns3d} (yellow bars).
%
The total elapsed time is mainly spent on five tasks: FFTs, Runge-Kutta 4, curl,
vector product and ``projection''. The times spent on these tasks are compared for
the two codes.
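One way to obtain such a per-task breakdown on the Python side is sketched below with the standard cProfile module (this is not necessarily the instrumentation used for this paper; the resolution, number of time steps and name filters are only indicative):

```python
import cProfile
import pstats

from fluidsim.solvers.ns3d.solver import Simul

# minimal 3d setup, timed over 10 time steps with outputs disabled
params = Simul.create_default_params()
params.oper.nx = params.oper.ny = params.oper.nz = 128
params.time_stepping.USE_T_END = False
params.time_stepping.it_end = 10
params.output.HAS_TO_SAVE = False
sim = Simul(params)

profiler = cProfile.Profile()
profiler.enable()
sim.time_stepping.start()
profiler.disable()

# print the most expensive calls, filtered on a few indicative names
stats = pstats.Stats(profiler).sort_stats("cumulative")
stats.print_stats("fft|time_stepping|curl|proj", 20)
```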
......@@ -811,5 +897,31 @@
\paragraph{Comparison with NS3D}
We see that NS3D's FFTs are very fast: the FFT execution is 0.55 s longer for
\fluidpack{sim} (nearly 9\% longer). This difference is especially important for
sequential runs, for which there is no communication cost in the FFT computation.
It can partially be explained by the fact that in NS3D, all FFTs are in-place (so
the input can be erased during the transform). Another factor is that NS3D uses
the flag FFTW\_PATIENT, which leads to a very long initialization and sometimes to
faster FFTs. Since we did not observe a significant speed-up with this flag in
\fluidpack{sim} and since we also care about initialization time, we instead use
the flag FFTW\_MEASURE, which usually leads to similar performance.
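As an illustration of this planning trade-off (this is not the code of \fluidpack{sim} or NS3D; it only uses the public pyFFTW API, and sizes are indicative), a short sketch comparing the two planner efforts:

```python
# Compare FFTW planner efforts with pyFFTW: FFTW_PATIENT plans much longer
# than FFTW_MEASURE and only sometimes yields faster transforms.
from time import perf_counter

import numpy as np
import pyfftw

a = pyfftw.empty_aligned((128, 128, 128), dtype="complex128")
a[:] = np.random.rand(*a.shape)

for effort in ("FFTW_MEASURE", "FFTW_PATIENT"):
    t0 = perf_counter()
    fft = pyfftw.builders.fftn(a, planner_effort=effort)  # planning happens here
    t_plan = perf_counter() - t0

    t0 = perf_counter()
    for _ in range(10):
        fft()
    t_exec = (perf_counter() - t0) / 10

    print(f"{effort}: planning {t_plan:.2f} s, execution {t_exec * 1e3:.1f} ms")
```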
NS3D's time stepping is significantly slower than \fluidpack{sim}'s (0.34 s,
i.e.\ $\simeq$ 20\% slower). We have not identified the origin of this performance
issue in NS3D.
The linear operators are slightly faster in \fluidpack{sim} than in the Fortran
code NS3D. In \fluidpack{sim}, these operators correspond to Pythran functions
written with explicit loops. There are also a few unnecessary projections in NS3D
(5 per time step in NS3D compared to 4 per time step in \fluidpack{sim}).
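As an illustration of the kind of kernel meant here, a minimal sketch of an incompressible projection written with explicit loops in a Pythran-friendly style (this is not the exact \fluidpack{sim} implementation; the function and argument names are illustrative):

```python
# Sketch of a projection kernel with explicit loops (Pythran-friendly style).
# It removes the compressible part of a velocity field in spectral space:
#   u_i <- u_i - k_i (k . u) / |k|^2

# pythran export project_perp(complex128[][][], complex128[][][], complex128[][][], float64[][][], float64[][][], float64[][][], float64[][][])
def project_perp(vx_fft, vy_fft, vz_fft, kx, ky, kz, inv_k_square_nozero):
    n0, n1, n2 = vx_fft.shape
    for i0 in range(n0):
        for i1 in range(n1):
            for i2 in range(n2):
                tmp = (
                    kx[i0, i1, i2] * vx_fft[i0, i1, i2]
                    + ky[i0, i1, i2] * vy_fft[i0, i1, i2]
                    + kz[i0, i1, i2] * vz_fft[i0, i1, i2]
                ) * inv_k_square_nozero[i0, i1, i2]
                vx_fft[i0, i1, i2] -= kx[i0, i1, i2] * tmp
                vy_fft[i0, i1, i2] -= ky[i0, i1, i2] * tmp
                vz_fft[i0, i1, i2] -= kz[i0, i1, i2] * tmp
```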
Although the FFTs are a little faster in NS3D, the total time is slightly smaller
for \fluidpack{sim} for this case (by less than 1\% of the total time).
These examples do not show that \fluidpack{sim} is always faster than NS3D or as
fast as any very well optimized Fortran code. However, they show that our very
high-level and modular Python code is efficient and not much slower than a
well-optimized Fortran code.
......
# Notes on FluidSim
First, let's say that there have been many changes in fluiddyn, fluidsim and
fluidfft to improve the speed of the code, especially for the 3d solvers. We still
have to work on this (for example, really implement the ifft_destroy methods in
C++ and also make this feature available for the 2d case).
Please check that the new versions work as expected for your use cases!
# Comparison with the Fortran 90 code ns3d (https://bitbucket.org/paugier/ns3d)
......@@ -59,11 +51,3 @@
- I'd like to test using FFTs with CUDA. Cyrille, does it work? Can you show me
how to try this at LEGI?
# Comparison with SpectralDNS
??? Can you please complete this section?
# Comparison with Dedalus
??? Can you please complete this section?