Commit c3a105b7ff3e authored by Pierre Augier

Improve correspondence Zwart

parent c820a2f68858

---

Zwart recently published in Nature Astronomy a comment on **The ecological
impact of high-performance computing in astrophysics** \cite{Zwart2020}. The
main claim is that the Python programming language represents an issue for the
climate and should be avoided. We will show that scientific programs written in
Python can be very efficient and energy friendly. We argue that human factors
and education are much more important than the choice of languages.

To support his idea, Zwart presents a benchmark on the N-Body problem with a
very inefficient implementation in Python, running 50 times slower than a C++
implementation! As Python users concerned about our ecological impact, we
worked on similar benchmarks on the same problem[^1]. In contrast to Zwart, we
(i) also consider efficient implementations in Python and Julia and (ii)
properly measure the energy consumption with dedicated hardware equipped with
wattmeters[^2].

[^2]: The measurements were carried out on Grid'5000 clusters
(<https://www.grid5000.fr>).

Before focusing on the N-Body problem, let us put it in perspective and recall
what "Python" is and why it is so successful. Indeed, all indicators show that
Python is one of the most used and loved languages for science and data
analysis[^3]. Python is a dynamic programming language oriented towards
communication between humans and fast prototyping. Reading and writing Python
is very accessible and does not require long training. It is generalist (quite
good for very different tasks) and was designed to increase developer
productivity. There are strong open-source communities using Python and a rich
scientific ecosystem of several efficient libraries.

[^3]: TODO. References.

It is worth understanding that in 2020, it is no longer meaningful to separate
languages as being "compiled" or "interpreted". More precisely, many
"interpreted" dynamic languages (for example Julia or Matlab) are actually
partly compiled. Let us recall that compiling code to machine instructions can
be done ahead-of-time (AOT, before the execution) or just-in-time (JIT, during
the execution). Python also has AOT and JIT compilers. However, the most
standard way to execute Python code is to interpret it with a program called
CPython. It is the reference implementation of the language and in 2020, it
still does not have a built-in JIT compiler. Therefore, CPython is relatively
slow, which explains Zwart's results. However, it is important to realize that
this inefficiency of the interpreter has a weak effect on the overall
performance of most programs. The total elapsed time and the energy consumption
are often dominated by hard work done in optimized libraries. This is the basic
principle of the whole scientific Python ecosystem using Numpy
\cite{harris2020array}.
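
As a toy illustration of this point (not code from our benchmark), summing a
large array in pure Python executes about ten million interpreted bytecode-level
iterations, whereas the same reduction delegated to Numpy runs in optimized
compiled code:

```python
import numpy as np

values = np.random.default_rng().random(10_000_000)

# Interpreted by CPython: ~10^7 iterations of the bytecode evaluation loop
total_interpreted = 0.0
for value in values:
    total_interpreted += value

# Delegated to Numpy: a single call, the loop runs in optimized compiled code
total_numpy = values.sum()
```

Both versions compute the same sum, but the elapsed time and energy of the
second one are dominated by the compiled Numpy kernel, not by the interpreter.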

In many cases, very few lines of code dominate the total computation. This is
usually known as the 80/20 rule and is associated with two principles: (i)
"premature optimization is the root of all evil" \cite{knuth1974structured} and
(ii) "measure, don't guess". These principles also apply to energy efficiency.
For most Python programs, it would be counterproductive and expensive to
rewrite them manually in C++, for a small gain/cost ratio.
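
In practice, "measure, don't guess" can be as simple as profiling one full run
with the standard library before touching any code (a minimal sketch;
`run_simulation` stands for a hypothetical entry point):

```python
import cProfile
import pstats

# Profile one full run to find the very few functions that dominate
cProfile.run("run_simulation()", "profile.out")

# Inspect cumulative times: only the top entries are worth optimizing
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(10)
```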

However, some algorithms require low-level code and explicit loops. For
example, for the N-Body problem, the computation of the acceleration of each
particle involves a loop over all other particles. A few lines of code are
repeated $N^2/2$ times per timestep. Zwart (2020) considered 10000 timesteps and
$N=16384$, so the program is dominated by 1,342,177,280,000 executions of a
simple and inexpensive computation. Using CPython for this very hot loop makes
the whole program very inefficient. Good news for Python: it is straightforward
to use efficient alternatives. For this benchmark, we use three tools: (i)
Pythran \cite{guelton2015pythran}, a Python-Numpy AOT compiler transpiling to
C++, (ii) Numba \cite{lam2015numba}, a Python-Numpy JIT compiler based on LLVM
(the same compilation target as Julia) and (iii) PyPy \cite{bolz2009tracing}, an
alternative Python interpreter with a JIT.
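
To give an idea of what the hot loop looks like when accelerated from Python,
here is a minimal sketch in the spirit of our implementations (not the exact
benchmark code), with the pair loop compiled by Numba. `positions` and
`accelerations` are assumed to be `(N, 3)` Numpy arrays and `masses` a `(N,)`
array, in units where $G = 1$. With $N = 16384$ and 10000 timesteps, the body of
the inner loop indeed runs $N^2/2 \times 10^4 \approx 1.34 \times 10^{12}$ times.

```python
import numpy as np
from numba import njit

@njit(cache=True)
def compute_accelerations(accelerations, masses, positions):
    # The very hot loop: each pair (i, j) is visited once,
    # so the body runs N^2/2 times per timestep.
    nb_particles = masses.size
    accelerations[:] = 0.0
    for i in range(nb_particles - 1):
        for j in range(i + 1, nb_particles):
            delta = positions[i] - positions[j]
            distance_cube = (delta[0]**2 + delta[1]**2 + delta[2]**2) ** 1.5
            accelerations[i] -= masses[j] / distance_cube * delta
            accelerations[j] += masses[i] / distance_cube * delta
```

Essentially the same function can also be compiled ahead-of-time with Pythran,
while PyPy can execute the undecorated pure-Python version.
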
\begin{figure}[ht]
\centerline{\includegraphics[width=0.65\textwidth]{figs/fig_bench_nbabel_parallel}}
\cprotect\caption{Efficiency in terms of CO$_2$ production and elapsed time for
implementations in Python, Julia, C++ and Fortran. Energy consumption
measurements were carried out on Grid'5000 clusters with 2.30 GHz Intel Xeon
E5-2630 processors and converted from kWh to CO$_2$ using 283 g CO$_2$ / kWh.
Optimizations were activated for all implementations with flags like
\verb!-Ofast!, \verb!-march=native! and \verb!--check-bounds=no!. We use gcc
8.3.0, Julia 1.5.3, Python 3.8.5, Pythran 0.9.8, Numba 0.52 and an unreleased
version of PyPy including optimizations described in \cite{cheng2020type}.}
\end{figure}
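
For reference, the conversion used in the caption, from measured energy to
grams of CO$_2$ with a carbon intensity of 283 g CO$_2$/kWh, is a one-liner
(the 2 MJ value below is only a made-up example, not a measurement):

```python
# Conversion used for Figure 1: energy in joules -> kWh -> grams of CO2
JOULES_PER_KWH = 3.6e6
CARBON_INTENSITY = 283.0  # g CO2 per kWh

def joules_to_g_co2(energy_joules):
    return energy_joules / JOULES_PER_KWH * CARBON_INTENSITY

print(joules_to_g_co2(2.0e6))  # made-up 2 MJ job -> ~157 g CO2
```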

Figure 1 is equivalent to Figure 3 in Zwart (2020). The CO$_2$ production is
plotted as a function of the elapsed time for ten implementations. The C++ and
Fortran implementations (green stars) are taken from the website
<http://www.nbabel.org/> and were used by Zwart (2020). Note that these
implementations could have been further optimized. However, we think they are
representative of C++ or Fortran codes written by many scientists. We consider
five implementations in Python (red markers). We would like to emphasize a few
points: (1) These implementations are fully written in Python. The
implementations using Pythran and Numba are written in Python-Numpy, but Numpy
is only used for its arrays as a data structure and not for advanced high-level
functions. (2) Four implementations in Python are faster than the C++
implementation. The simple implementation labelled "Pythran naive" (simple
Numpy code accelerated only by decorating one function with `@transonic.jit`
\cite{transonic}) is only 3 times slower than the Fortran implementation. (3)
All Python implementations are simpler to reason about, read and write than the
C++ and Fortran implementations.
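
To illustrate what "simple Numpy code accelerated only by decorating one
function" means, here is a sketch in the style of the "Pythran naive"
implementation (not its exact code): the inner pair loop is replaced by
vectorized Numpy operations over all particles, and the only change needed for
acceleration is the decorator.

```python
import numpy as np
import transonic

@transonic.jit
def compute_accelerations_numpy(accelerations, masses, positions):
    # High-level Numpy version: for each particle i, one vectorized pass
    # over all particles j; the decorator removes the interpreter overhead.
    nb_particles = masses.size
    for i in range(nb_particles):
        delta = positions[i] - positions
        distance_cube = np.sum(delta**2, axis=1) ** 1.5
        distance_cube[i] = 1.0  # avoid 0/0 for the j == i term
        coef = masses / distance_cube
        coef[i] = 0.0  # a particle exerts no force on itself
        accelerations[i] = -np.dot(coef, delta)
```

The $O(N)$ work for each particle is delegated to Numpy, which is what keeps
this version simple; the decorator then removes the remaining interpreter
overhead.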

For comparison, we also consider three implementations in Julia (blue circles):
the implementation labelled "Julia" is comparable to the "Pythran" and
"Numba" implementations and could have been written by scientists with similar
skills. We did not include a Julia implementation similar to "Pythran naive"
because it is very inefficient. "Julia optimized" and "Julia parallel" have
been proposed by Julia users after a long discussion on the Julia forum[^4].

[^4]: https://discourse.julialang.org/t/nbabel-nbody-integrator-speed-up/

The four points close to the bottom-left corner correspond to two parallel
implementations using Pythran+OpenMP and Julia, executed on 6 and 12 CPU cores.
We consider in Figure 1 the energy consumption of the cores used (6 or 12 for
these runs and 1 for the sequential jobs), which makes sense on shared clusters
in which one can reserve only the needed cores. We see that parallelism with
threads decreases the elapsed time but has a weak impact on energy consumption.
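
Parallelizing the hot loop does not require leaving Python either. As a minimal
sketch (not the parallel implementation actually benchmarked), Pythran accepts
OpenMP directives written as comments; here the outer loop is parallelized by
giving up the pair symmetry, so that iterations are independent and no
reduction on the shared array is needed, at the price of evaluating each pair
twice ($N^2$ instead of $N^2/2$):

```python
import numpy as np

# pythran export compute_accelerations_omp(float64[:,:], float64[:], float64[:,:])
def compute_accelerations_omp(accelerations, masses, positions):
    nb_particles = masses.size
    # omp parallel for
    for i in range(nb_particles):
        acceleration = np.zeros(3)
        for j in range(nb_particles):
            if j == i:
                continue
            delta = positions[i] - positions[j]
            distance_cube = (delta[0]**2 + delta[1]**2 + delta[2]**2) ** 1.5
            acceleration -= masses[j] / distance_cube * delta
        accelerations[i] = acceleration
```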

Our work shows that the performance of implementations depends less on
languages than on developer skills and time spent on optimization. Moreover,
one can obtain very good results with dynamic languages. We think that
minimizing the ecological impact of scientific computing is limited by human
factors: time, work, knowledge and skills. For example, scientists have to be
able to run heavy computations on shared clusters optimized in terms of energy
consumption. They should also know how to profile their codes to discover which
parts can potentially be optimized. Therefore, money and time should be
invested in educating students and scientists. This benchmark demonstrates that
Python is actually a good solution to easily obtain good performance with
simple and readable codes. Hence, teaching efficient Python to scientists and
engineers can be a profitable way to minimize the overall ecological impact of
computing. Of course, other languages have their own strengths and are better
suited than Python to specific tasks.

\bibliographystyle{naturemag}
\bibliography{./pubs}

reply_Zwart2020/figs/fig_bench_nbabel_parallel.png (binary image updated, 46.9 KiB → 47.7 KiB)

    title = {{FluidDyn}: A Python Open-Source Framework for Research and Teaching
             in Fluid Dynamics by Simulations, Experiments and Data Processing},
    journal = {Journal of Open Research Software}
}

@article{knuth1974structured,
    title = {Structured programming with go to statements},
    author = {Knuth, Donald E.},
    journal = {ACM Computing Surveys (CSUR)},
    volume = {6},
    number = {4},
    pages = {261--301},
    year = {1974},
    publisher = {ACM New York, NY, USA}
}