---
title: "Ecological impact of computing with Python:
  education is more important than languages"
bibliography: ./pubs.bib
link-citations: true
lang: en
documentclass: article
numbersections: true
header-includes:
    - \include{header}
---

Zwart recently published in Nature Astronomy a comment on **The ecological
impact of high-performance computing in astrophysics** \cite{Zwart2020}. The
main claim is that the Python programming language is an issue for the climate
and should be avoided. We argue that scientific programs written in Python can
be very efficient and energy-friendly, and that human factors and education are
much more important than the choice of language.

To support his idea, Zwart presents a benchmark on the N-Body problem with a
very inefficient implementation in Python, running 50 times slower than a C++
implementation. As Python users concerned about our ecological impact, we
worked on similar benchmarks on the same problem[^1]. In contrast to Zwart, we
(i) also consider efficient implementations in Python and Julia and (ii)
properly measure the energy consumption with dedicated hardware equipped with
wattmeters[^2].

[^1]: Our code is available here: <https://github.com/paugier/nbabel>.

[^2]: The measurements were carried out on Grid'5000 clusters
(<https://www.grid5000.fr>).

Before focusing on the N-Body problem, let us put it in perspective and recall
what "Python" is and why it is so successful. Indeed, all indicators show that
Python is one of the most used and loved languages for science and data
analysis[^3]. Python is a dynamic programming language oriented towards
communication between humans and fast prototyping. Reading and writing Python
is very accessible and does not require long training. It is a generalist
language (suited to many different tasks) and was designed to increase
developers' productivity. There are strong open-source communities using Python
and a rich scientific ecosystem of efficient libraries.

[^3]: See for example the [TIOBE Index](https://www.tiobe.com/tiobe-index/),
the [IEEE Spectrum
ranking](https://spectrum.ieee.org/computing/software/the-top-programming-languages-2019),
[GitHub reports](https://octoverse.github.com/) or the [Stack Overflow Annual
Developer Survey](https://insights.stackoverflow.com/survey).

It is worth understanding that characterizing a language as "compiled" or
"interpreted" is a language abuse: these categories make sense only for
specific implementations of languages. Moreover, some implementations of
dynamic languages (for example Julia or Matlab) actually compile parts of the
code on
the fly. Let us recall that compiling code to machine instructions can be done
ahead-of-time (AOT, before the execution) or just-in-time (JIT, during the
execution) \cite{aycock2003brief}. Python also has AOT and JIT compilers.
However, the most standard way to execute Python code is to interpret it with a
program called CPython, the reference implementation of the language, which in
2020 still does not have a built-in JIT compiler. CPython is therefore
relatively slow, which explains Zwart's results. Yet it is important to realize
that this inefficiency of the interpreter has a weak effect on the overall
performance of most programs. The total elapsed time and the energy consumption
are often dominated by the hard work done in optimized libraries. This is the
basic principle of the whole scientific Python ecosystem built on Numpy
\cite{harris2020array}.
In many cases, very few lines of code dominate the total computation. This is
usually known as the 80/20 rule and provides support for two software
development principles: (i) "premature optimization is the root of all evil"
\cite{knuth1974structured} and (ii) "measure, don't guess". These principles
also apply to energy efficiency. For most Python programs, it would be
counterproductive and expensive to rewrite them manually in C++, for a small
gain/cost ratio.
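
As an illustration of "measure, don't guess", here is a minimal sketch showing
how one can profile a program with the standard `cProfile` module before
deciding what to optimize. The two functions are purely illustrative
placeholders, not code from our benchmark.

```python
# Minimal profiling sketch ("measure, don't guess").
# The two functions are illustrative placeholders standing in for a real
# scientific program; they are not part of the N-Body benchmark.
import cProfile
import pstats


def hot_kernel(n):
    # stands in for the few lines that dominate the computation
    return sum(i * i for i in range(n))


def run_simulation():
    # stands in for the whole program: setup, I/O and the hot loop
    total = 0
    for _ in range(200):
        total += hot_kernel(20_000)
    return total


if __name__ == "__main__":
    cProfile.run("run_simulation()", "profile.out")
    # print the 5 functions with the largest cumulative time
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)
```
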
However, some algorithms require low-level code and explicit loops. For
example, for the N-Body problem, the computation of the acceleration of each
particle involves a loop over all the other particles. A few lines of code are
repeated $N^2/2$ times per timestep. Zwart (2020) considered 10,000 timesteps
and $N=16384$, so the program is dominated by 1,342,177,280,000 (i.e.,
$10^4 \times 16384^2 / 2$) executions of a simple and inexpensive computation.
Using CPython for this very hot loop makes the whole program very inefficient.
Good news for Python: it is straightforward to use efficient alternatives. For
this benchmark, we use three tools: (i) Pythran \cite{guelton2015pythran}, a
Python-Numpy AOT compiler transpiling to C++, (ii) Numba \cite{lam2015numba}, a
Python-Numpy JIT compiler based on LLVM (the same compilation target as Julia)
and (iii) PyPy \cite{bolz2009tracing}, an alternative Python interpreter with a
JIT.
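
To make this concrete, here is a simplified sketch (not one of the benchmarked
implementations, which are available in our repository[^1]) of the acceleration
kernel written in plain Python and compiled with Numba's `@njit` decorator:

```python
# Simplified sketch of the O(N^2/2) gravitational acceleration kernel
# (units such that G = 1), compiled just-in-time with Numba. Illustrative
# code only, not the implementation benchmarked in Figure 1.
import numpy as np
from numba import njit


@njit
def compute_accelerations(masses, positions):
    nb_particles = masses.size
    accelerations = np.zeros_like(positions)
    for i in range(nb_particles - 1):
        for j in range(i + 1, nb_particles):
            dx = positions[i, 0] - positions[j, 0]
            dy = positions[i, 1] - positions[j, 1]
            dz = positions[i, 2] - positions[j, 2]
            distance_cube = (dx * dx + dy * dy + dz * dz) ** 1.5
            accelerations[i, 0] -= masses[j] * dx / distance_cube
            accelerations[i, 1] -= masses[j] * dy / distance_cube
            accelerations[i, 2] -= masses[j] * dz / distance_cube
            accelerations[j, 0] += masses[i] * dx / distance_cube
            accelerations[j, 1] += masses[i] * dy / distance_cube
            accelerations[j, 2] += masses[i] * dz / distance_cube
    return accelerations


# small usage example with random particles; the first call triggers the JIT
# compilation and subsequent calls run the compiled machine code
rng = np.random.default_rng(0)
masses = rng.random(128)
positions = rng.random((128, 3))
accelerations = compute_accelerations(masses, positions)
```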

\begin{figure}[ht]
\centerline{\includegraphics[width=0.65\textwidth]{figs/fig_bench_nbabel_parallel}}

\cprotect\caption{Efficiency in terms of CO$_2$ production and elapsed time for
implementations in Python, Julia, C++ and Fortran. Energy consumption
measurements were carried out on Grid'5000 clusters with 2.30 GHz Intel Xeon
E5-2630 processors and converted from kWh to CO$_2$ using 283 g CO$_2$ / kWh.
Optimizations were activated for all implementations with flags such as
\verb!-Ofast!, \verb!-march=native! and \verb!--check-bounds=no!. We used gcc
8.3.0, Julia 1.5.3, Python 3.8.5, Pythran 0.9.8, Numba 0.52 and an unreleased
version of PyPy including the optimizations described in \cite{cheng2020type}.}

\end{figure}

Figure 1 is equivalent to Figure 3 in Zwart (2020). The CO$_2$ production is
plotted as a function of the elapsed time for ten implementations. The C++ and
Fortran implementations (green stars) are taken from the website
<http://www.nbabel.org/> and were used by Zwart (2020). Note that these
implementations could have been further optimized. However, we think they are
representative of C++ or Fortran codes written by many scientists. We consider
five implementations in Python (red markers). We would like to emphasize a few
points: (1) These implementations are fully written in Python. The
implementations using Pythran and Numba are written in Python-Numpy, but Numpy
is only used for its arrays as a data structure and not for advanced high-level
functions. (2) Four implementations in Python are faster than the C++
implementation. The implementation labelled "Pythran naive" (simple Numpy code
accelerated only by decorating one function with `@transonic.jit`
\cite{transonic}) is only 3 times slower than the Fortran implementation. (3)
All Python implementations are simpler to reason about, read and write than the
C++ and Fortran implementations.
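
For readers unfamiliar with this approach, the following short sketch (an
illustrative function, not the benchmarked "Pythran naive" code) shows the
pattern: a plain Python-Numpy function accelerated only by adding the
`@transonic.jit` decorator.

```python
# Illustrative example of the decorator pattern mentioned above; this is
# not the "Pythran naive" implementation benchmarked in Figure 1.
import numpy as np
import transonic


@transonic.jit
def compute_kinetic_energy(masses, velocities):
    # plain Numpy code; with the decorator, transonic can compile it
    # (by default through its Pythran backend) without any other change
    return 0.5 * np.sum(masses * np.sum(velocities**2, axis=1))
```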

For comparison, we also consider three implementations in Julia (blue circles):
the implementation labelled "Julia" is comparable with the "Pythran" and
"Numba" implementations and could have been written by scientists with similar
skills. We did not include a Julia implementation similar to "Pythran naive"
because such an implementation is very inefficient. "Julia optimized" and
"Julia parallel" were proposed by Julia users after a long discussion on the
Julia forum[^4].

[^4]: <https://discourse.julialang.org/t/nbabel-nbody-integrator-speed-up/>

The four points close to the bottom-left corner correspond to two parallel
implementations using Pythran+OpenMP and Julia, executed on 6 and 12 CPU cores.
In Figure 1, we consider the energy consumption of only the cores used (6 or 12
for these runs and 1 for the sequential jobs), which makes sense on shared
clusters where one can reserve only the needed cores. We see that parallelism
with threads decreases the elapsed time but has a weak impact on the energy
consumption.
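
As an illustration of this kind of thread parallelism (our benchmarked parallel
Python implementation uses Pythran+OpenMP; the sketch below uses Numba's
`prange` instead and is not the code of Figure 1), the acceleration kernel can
be parallelized as follows:

```python
# Thread-parallel sketch of the acceleration kernel with Numba. Each row i is
# computed independently, so the inner loop runs over all j (O(N^2) instead of
# O(N^2/2)) to avoid concurrent writes to row j. Illustrative code only.
import numpy as np
from numba import njit, prange


@njit(parallel=True)
def compute_accelerations_parallel(masses, positions):
    nb_particles = masses.size
    accelerations = np.zeros_like(positions)
    for i in prange(nb_particles):
        ax = 0.0
        ay = 0.0
        az = 0.0
        for j in range(nb_particles):
            if j == i:
                continue
            dx = positions[i, 0] - positions[j, 0]
            dy = positions[i, 1] - positions[j, 1]
            dz = positions[i, 2] - positions[j, 2]
            distance_cube = (dx * dx + dy * dy + dz * dz) ** 1.5
            ax -= masses[j] * dx / distance_cube
            ay -= masses[j] * dy / distance_cube
            az -= masses[j] * dz / distance_cube
        accelerations[i, 0] = ax
        accelerations[i, 1] = ay
        accelerations[i, 2] = az
    return accelerations
```

The number of threads used by such a kernel can be limited, for example with
the `NUMBA_NUM_THREADS` environment variable, to match the number of reserved
cores.
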
Our work shows that the performance of an implementation depends less on the
language than on developer skills and the time spent on optimization. Moreover,
one can obtain very good results with dynamic languages. We think that
minimizing the ecological impact of scientific computing is mainly limited by
human factors: time, work, knowledge and skills. For example, scientists have
to be able to run heavy computations on shared clusters optimized in terms of
energy consumption. They should also know how to profile their codes to
discover which parts can potentially be optimized. Therefore, money and time
should be invested in educating students and scientists. This benchmark
demonstrates that
Python can actually be a good solution to easily obtain good performance with
simple and readable code. Investing in education and tooling is therefore a
profitable way to minimize the overall ecological impact of computing, whatever
the underlying language.

\bibliographystyle{naturemag}
\bibliography{./pubs}