Commit fe740f102e65 authored by Pierre Augier

First version correspondence.md

parent 43776bbcaf5b
@@ -48,3 +48,4 @@
 fluidfft_final/Pyfig/fig_classes.svg
 reply_Zwart2020/paper.tex
+reply_Zwart2020/correspondence.tex
\ No newline at end of file
@@ -16,4 +16,10 @@
 	rm -f $(name).pdf
 
 letter_contact_NatureAstronomy.pdf: letter_contact_NatureAstronomy.md figs/fig_bench_nbabel_parallel.png header.tex
-	pandoc -V fontsize=12pt -s letter_contact_NatureAstronomy.md -o letter_contact_NatureAstronomy.pdf
\ No newline at end of file
+	pandoc -V fontsize=12pt -s letter_contact_NatureAstronomy.md -o letter_contact_NatureAstronomy.pdf
+
+correspondence.tex: correspondence.md figs/fig_bench_nbabel_parallel.png header.tex
+	pandoc -V fontsize=12pt --natbib -s correspondence.md -o correspondence.tex
+
+correspondence.pdf: correspondence.tex
+	$(LATEXMK) correspondence.tex
---
title: "Ecological impact of computing with Python:
education is more important than languages"
bibliography: ./pubs.bib
link-citations: true
lang: en
documentclass: article
numbersections: true
header-includes:
- \include{header}
---
\cite{Zwart2020} recently published in Nature Astronomy a comment on **The
ecological impact of high-performance computing in astrophysics**. His main
claim is that the Python programming language is a problem for the climate and
should be avoided. We show that scientific programs written in Python can be
very efficient and energy friendly, and we argue that human factors and
education matter much more than the choice of language.
To support his idea, Zwart presents a benchmark of the N-Body problem based on
a very inefficient Python implementation, running about 50 times slower than a
C++ implementation. As Python users concerned about our ecological impact, we
worked on similar benchmarks for the same problem[^1]. In contrast to Zwart, we
(i) also consider efficient implementations in Python and Julia and (ii)
properly measure the energy consumption with dedicated hardware equipped with
wattmeters[^2].
[^1]: Our code is available here: <https://github.com/paugier/nbabel>.
[^2]: The measurements were carried out on Grid'5000 clusters
(<https://www.grid5000.fr>).
Before focusing on the N-Body problem, let's put it in perspective and recall
what "Python" is and why it is so successful. All indicators show that Python
is one of the most used and loved languages for science and data
analysis[^3]. Python is a generalist programming language (quite good for most
tasks) oriented towards communication between humans and fast prototyping.
Reading and writing Python is very accessible and does not require long
training. There are strong open-source communities using Python and a rich
scientific ecosystem of efficient libraries.
[^3]: TODO. References.
It is worth understanding that in 2020, it is no longer meaningful to separate
languages as being "compiled" or "interpreted". More precisely, many
"interpreted" dynamic languages (for example Julia or Matlab) are actually
partly compiled. Let's recall that compiling code to machine instructions can
be done ahead-of-time (AOT, before the execution) or just-in-time (JIT, during
the execution). Python also has AOT and JIT compilers. However, the most
standard way to execute Python code is to interpret it with a program called
CPython. It is the reference implementation of the language and, in 2021, it
still does not have a builtin JIT compiler. Therefore, CPython is relatively
slow, which explains Zwart's results. However, it is important to realize that
this inefficiency of the interpreter has a weak effect on the overall
performance of most programs. The total elapsed time and the energy consumption
are dominated by the hard work done in optimized and efficient libraries. This
is the basic principle of the whole scientific Python ecosystem built on Numpy.
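As an illustration, consider this minimal sketch (not part of our benchmark):
the script below is interpreted by CPython, but nearly all of its elapsed time
is spent inside Numpy's compiled routines, so the interpreter overhead is
negligible.

```python
import numpy as np

# 10 million double-precision numbers (~80 MB)
a = np.random.default_rng().random(10_000_000)

# CPython only interprets a handful of statements here; the loops over the
# 10 million elements run in Numpy's optimized compiled code.
result = np.sqrt(a**2 + 1.0).sum()
print(result)
```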
This is a very common situation, in which only a few lines of code dominate the
total computation time. It is usually known as the 80/20 rule and associated
with two principles: (i) "premature optimization is the root of all evil" and
(ii) "measure, don't guess". These principles also apply to energy efficiency.
For most Python programs, it would be very inefficient and expensive to rewrite
them entirely in C++, for a small gain/cost ratio.
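In practice, "measure, don't guess" means profiling before optimizing. As a
minimal sketch (with a toy function standing in for a real simulation), the
standard library profiler directly reveals the few functions worth optimizing.

```python
import cProfile
import pstats

import numpy as np

def simulation():
    # Toy stand-in for a real scientific program: one hot spot dominates.
    a = np.random.default_rng().random(2_000_000)
    for _ in range(50):
        a = np.sqrt(a**2 + 1.0)
    return a

# Profile and print the functions sorted by cumulative time; typically a
# very small fraction of them accounts for nearly all of it.
cProfile.run("simulation()", "stats.prof")
pstats.Stats("stats.prof").sort_stats("cumulative").print_stats(10)
```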
However, there are also algorithms requiring low-level code. For example, for
the N-Body problem, the computation of the accelerations involves, for each
particle, a loop over all other particles. A few lines of code are repeated
$N^2/2$ times per timestep. For Zwart's benchmark, there are 10,000 timesteps
and $N=16384$, so the program is dominated by 1,342,177,280,000 executions of a
simple and inexpensive computation. Using CPython for this very hot loop makes
the whole program very inefficient. The good news for Python is that it is very
easy to use efficient alternatives. For this benchmark, we use 3 tools: (i)
Pythran, a Python-Numpy AOT compiler transpiling to C++, (ii) Numba, a
Python-Numpy JIT compiler based on LLVM and (iii) PyPy, an alternative Python
interpreter with a JIT compiler.
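As a concrete illustration, here is a minimal Numba sketch of such a hot loop,
in the spirit of (but simplified from) the implementations in our repository;
a similar function can be compiled ahead-of-time with Pythran or run as-is
with PyPy.

```python
import numpy as np
from numba import njit

@njit
def compute_accelerations(accelerations, masses, positions):
    """The hot loop: N^2/2 pairwise gravitational interactions."""
    nb_particles = masses.size
    for i in range(nb_particles - 1):
        for j in range(i + 1, nb_particles):
            delta = positions[i] - positions[j]
            distance_cube = (delta[0]**2 + delta[1]**2 + delta[2]**2) ** 1.5
            accelerations[i] -= masses[j] / distance_cube * delta
            accelerations[j] += masses[i] / distance_cube * delta

rng = np.random.default_rng(0)
masses = rng.random(128)
positions = rng.random((128, 3))
accelerations = np.zeros((128, 3))
compute_accelerations(accelerations, masses, positions)  # compiled on first call
```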
\begin{figure}[ht]
\centerline{\includegraphics[width=0.65\textwidth]{figs/fig_bench_nbabel_parallel}}
\cprotect\caption{Efficiency in terms of CO$_2$ production and elapsed time for
10 implementations in Python, Julia, C++ and Fortran. Energy consumption
measurements were carried out on Grid'5000 clusters with 2.30 GHz Intel Xeon
E5-2630 processors and converted from kWh to CO$_2$ using 283 g CO$_2$ / kWh.
Optimizations were activated for all implementations with flags like
\verb!-Ofast!, \verb!-march=native! and \verb!--check-bounds=no!.}
\end{figure}
Figure 1 is equivalent to Figure 3 in Zwart (2020). The CO$_2$ production is
plotted as a function of the elapsed time for 10 implementations. The C++ and
Fortran implementations (green stars) are taken from the website
<http://www.nbabel.org/> and were used by Zwart (2020). Note that these
implementations could have been further optimized. However, we think they are
representative of what many scientists using C++ or Fortran would have
obtained. We consider 3 implementations in Julia (blue circles): "Julia
nbabel.org" is taken from the NBabel website. "Julia optimized" and "Julia
parallel" have been proposed by Julia users after a long discussion[^4] on
Julia's forum. Finally, we consider 5 implementations in Python (red markers).
We would like to emphasize a few points: (1) These implementations are fully
written in Python, with the core of the algorithm written in "low-level" style
code. The implementations using Pythran and Numba are written in Python-Numpy,
but Numpy is only used for its arrays as a data structure and not for advanced
high-level functions. (2) Four implementations in Python are faster than the
C++ implementation. Even the simple implementation labelled "Pythran naive"
(simple Numpy code accelerated only by decorating one function with
`@transonic.jit`) is only 5.7 times slower than the optimized version in
Julia. (3) All Python implementations are simpler to reason about, read and
write than the C++ and Fortran implementations.
[^4]: <https://discourse.julialang.org/t/nbabel-nbody-integrator-speed-up/>
The 4 points close to the bottom-left corner correspond to 2 parallel
implementations using Pythran+OpenMP and Julia, run on 6 and 12 cores. In
Figure 1 we consider the energy consumption of the cores actually used (6 or
12 for these runs and 1 for the sequential jobs), which makes sense on shared
clusters where one can reserve only the needed cores. We see that parallelism
with threads speeds up the computation but does not decrease the energy
consumption much.
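For reference, here is a hedged sketch of how such thread parallelism can be
expressed with Pythran, where OpenMP directives are written as comments. This
is a toy loop, not our actual parallel implementation; the module name and
function are hypothetical, and the module would be compiled with something
like `pythran -fopenmp mymod.py`.

```python
# Hypothetical module `mymod.py`.
# pythran export kinetic_energies(float64[:], float64[:, :])

import numpy as np

def kinetic_energies(masses, velocities):
    """Toy parallel loop: per-particle kinetic energy."""
    n = masses.size
    result = np.empty(n)
    # omp parallel for
    for i in range(n):
        v2 = (velocities[i, 0] ** 2 + velocities[i, 1] ** 2
              + velocities[i, 2] ** 2)
        result[i] = 0.5 * masses[i] * v2
    return result
```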
We think that minimizing the ecological impact of scientific computing is
mainly limited by human factors: time, work, knowledge and skills. For example,
scientists have to be able to run heavy computations on shared clusters
optimized in terms of energy consumption. They should also know how to profile
their codes to discover which parts can potentially be optimized. Therefore,
money and time should be invested in educating students and scientists. Our
measurements demonstrate that Python is actually a good solution to easily
obtain good performance with simple and readable code. Therefore, teaching
efficient Python to scientists and engineers can help minimize the overall
ecological impact of computing.
reply_Zwart2020/figs/fig_bench_nbabel_parallel.png (binary image updated: 44.2 KiB → 46.9 KiB)
@@ -46,3 +46,5 @@
 \usepackage{titling}
 \setlength{\droptitle}{-10mm}
+
+\usepackage{cprotect}
\ No newline at end of file
@@ -70,7 +70,7 @@
 - There is no drastically inefficient implementation. Even the simple
 implementation labelled "Pythran naive" (simple Numpy code accelerated only by
 decorating one function with `@jit`) is only 5.7 times slower than the
-optimized version in Julia, and not by several orders of magnitude.
+optimized version in Julia, and not by orders of magnitude.
 - All Python implementations considered in this benchmark are simpler to
 reason, read and write than the C++ and Fortran implementations. These
@@ -77,7 +77,7 @@
 First, one can note that the article is based on only one particular problem
 (N-Body). Most of the time is spent in one function of a few lines (two
-loops). It is the case in most real codes but one need to keep this in mind
+loops). It is the case in most real codes but one needs to keep this in mind
 when interpreting the results.
 
 The benchmark seems tailored to give a bad result for Python.
@@ -99,6 +99,6 @@
 because of 2 differences in the codes. There is an algorithm problem in the C++
 code and big floats are used in the Fortran code.
 
-We now turn to the presentation of 4 other implementations written only in
+We now turn to the presentation of 5 other implementations written only in
 Python.
@@ -103,5 +103,16 @@
 Python.
 
+- **nbabel/py/bench_pypy_Point.py** contains a pure Python implementation
+running the 1024 particles case with PyPy in 151 s, i.e. only 3 times slower
+than the C++ implementation (compared to ~50 times slower as shown in the
+figure taken from Zwart, 2020).
+
+PyPy is an alternative Python interpreter with a JIT compiler and is very
+promising. However, PyPy is slow for codes using Python extensions based on
+the CPython C-API (in particular Numpy). And pure Python as a language is not
+well adapted to very high performance computing: classes and objects defined
+in pure Python are too dynamic and there are no homogeneous containers.
+
+- **nbabel/py/bench_numba.py** is a quite simple Numpy-Numba implementation. We
+measure it to be 20% faster than the C++ implementation. Therefore, we think
+there is a problem with the point in figure 3 of \cite{Zwart2020} which shows
@@ -139,16 +150,7 @@
 Just with these small modifications, we obtained very high performance (see
 figure). ...
 
-- **nbabel/py/bench_pypy_Point.py** contains a pure Python implementation
-running the 1024 particles case with PyPy in 151 s, i.e. only 3 times slower
-than the C++ implementation (compared to ~50 times slower as shown in the
-figure taken from Zwart, 2020).
-
-What is PyPy. PyPy is very promissing. However, PyPy is slow for codes using
-Python extensions using the CPython C-API (in particular Numpy). And pure
-Python as a language is not well adapted for very high performance computing.
-Classes and objects defined in pure Python are too dynamical. There is no
-homogeneous container.
 - **nbabel/py/bench_omp.py**
 
 ## What is "Python used for science and data"?