Commit ced4a590ceb7 authored by Pierre Augier

Correspondence Zwart ++

parent c3a105b7ff3e
@@ -18,6 +18,6 @@
letter_contact_NatureAstronomy.pdf: letter_contact_NatureAstronomy.md figs/fig_bench_nbabel_parallel.png header.tex
pandoc -V fontsize=12pt -s letter_contact_NatureAstronomy.md -o letter_contact_NatureAstronomy.pdf
-correspondence.tex: correspondence.md figs/fig_bench_nbabel_parallel.png header.tex pubs.bib
+correspondence.tex: correspondence.md header.tex pubs.bib
pandoc -V fontsize=12pt -s correspondence.md -o correspondence.tex
@@ -22,4 +22,4 @@
pandoc -V fontsize=12pt -s correspondence.md -o correspondence.tex
-correspondence.pdf: correspondence.tex
+correspondence.pdf: correspondence.tex figs/fig_bench_nbabel_parallel.png
$(LATEXMK) correspondence.tex
\ No newline at end of file
@@ -40,5 +40,9 @@
productivity. There are strong open-source communities using Python and a rich
scientific ecosystem of several efficient libraries.
-[^3]: TODO. References.
+[^3]: See for example the [TIOBE Index](https://www.tiobe.com/tiobe-index/),
+the [IEEE Spectrum
+ranking](https://spectrum.ieee.org/computing/software/the-top-programming-languages-2019),
+[GitHub reports](https://octoverse.github.com/) or the [Stack Overflow Annual
+Developer Survey](https://insights.stackoverflow.com/survey).
@@ -44,18 +48,19 @@
-It is worth to understand that in 2020, it is no longer meaningful to separate
-languages as being "compiled" or "interpreted". More precisely, many
-"interpreted" dynamic languages (for example Julia or Matlab) are actually
-partly compiled. Let us recall that compiling code to machine instructions can
-be done ahead-of-time (AOT, before the execution) or just-in-time (JIT, during
-the execution). Python also has AOT and JIT compilers. However, the most
-standard way to execute Python code is to interpret it with a program called
-CPython. It is the reference implementation of the language and in 2020, it
-still does not have a builtin JIT compiler. Therefore, CPython is relatively
-slow which explains Zwart results. However, it is important to realize that
-this inefficiency of the interpreter has a weak effect on the overall
-performance of most programs. The total elapsed time and the energy consumption
-are often dominated by hard work done in optimized libraries. This is the basic
-principle of all the scientific Python ecosystem using Numpy
+It is worth understanding that characterizing a language as "compiled" or
+"interpreted" is an abuse of language: these categories make sense only for
+specific implementations of languages. Moreover, some interpreters of dynamic
+languages (for example Julia or Matlab) actually compile parts of the code on
+the fly. Let us recall that compiling code to machine instructions can be done
+ahead-of-time (AOT, before the execution) or just-in-time (JIT, during the
+execution) \cite{aycock2003brief}. Python also has AOT and JIT compilers.
+However, the most standard way to execute Python code is to interpret it with a
+program called CPython. It is the reference implementation of the language and
+in 2020, it still does not have a built-in JIT compiler. Therefore, CPython is
+relatively slow, which explains Zwart's results. However, it is important to
+realize that this inefficiency of the interpreter has only a weak effect on the
+overall performance of most programs. The total elapsed time and the energy
+consumption are often dominated by hard work done in optimized libraries. This
+is the basic principle of the whole scientific Python ecosystem using Numpy
\cite{harris2020array}.
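As an illustration of this point (a toy sketch added here for clarity, not code from the correspondence or the NBabel benchmark; the array size and the number of repetitions are arbitrary), summing a large array with a pure-Python loop is interpreted element by element by CPython, whereas the equivalent Numpy call spends nearly all its time in optimized compiled code:

```python
# Toy comparison: the Numpy call delegates the work to optimized compiled
# code, so the relative slowness of the CPython interpreter barely matters.
import timeit

import numpy as np

a = np.random.rand(10**6)

def python_sum(values):
    """Sum computed by the interpreter, one element at a time."""
    total = 0.0
    for value in values:
        total += value
    return total

t_loop = timeit.timeit(lambda: python_sum(a), number=10)
t_numpy = timeit.timeit(lambda: np.sum(a), number=10)
print(f"pure-Python loop: {t_loop:.3f} s, np.sum: {t_numpy:.3f} s")
```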
In many cases, very few lines of code dominate the total computation. It is
@@ -59,11 +64,12 @@
\cite{harris2020array}.
In many cases, very few lines of code dominate the total computation. It is
-usually known as the 80/20 rule and associated with two principles: (i)
-"premature optimization is the root of all evil" \cite{knuth1974structured} and
-(ii) "measure, don't guess". These principles also apply for energy efficiency.
-For most Python programs, it would be counter productive and expensive to
-manually rewrite them in C++, with a small gain/cost ratio.
+usually known as the 80/20 rule and provides support for two software
+development principles: (i) "premature optimization is the root of all evil"
+\cite{knuth1974structured} and (ii) "measure, don't guess". These principles
+also apply to energy efficiency. For most Python programs, it would be
+counterproductive and expensive to manually rewrite them in C++, given the
+small gain/cost ratio.
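To make "measure, don't guess" concrete, here is a minimal profiling sketch (an editorial illustration; `advance_positions` and the array shapes are hypothetical placeholders, not code from the benchmark). The idea is to profile a run first and only consider rewriting the few functions that dominate the report:

```python
# "Measure, don't guess": profile before optimizing anything.
# `advance_positions` is a hypothetical placeholder for a real kernel.
# The context-manager form of cProfile.Profile requires Python >= 3.8.
import cProfile
import pstats

import numpy as np

def advance_positions(positions, velocities, dt=1e-3):
    # Placeholder for the real computation of one time step.
    return positions + dt * velocities

positions = np.random.rand(1024, 3)
velocities = np.random.rand(1024, 3)

with cProfile.Profile() as profiler:
    for _ in range(1000):
        positions = advance_positions(positions, velocities)

# The few entries with the largest cumulative time are the only
# candidates worth accelerating or rewriting.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```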
However, some algorithms require low-level code and explicit loops. For
example, for the N-Body problem, the computation of the acceleration of each
@@ -75,7 +81,7 @@
to use efficient alternatives. For this benchmark, we use three tools: (i)
Pythran \cite{guelton2015pythran}, a Python-Numpy AOT compiler transpiling to
C++, (ii) Numba \cite{lam2015numba}, a Python-Numpy JIT compiler based on LLVM
-(same compilation target than Julia) and (iii) PyPy \cite{bolz2009tracing}, an
+(same compilation target as Julia) and (iii) PyPy \cite{bolz2009tracing}, an
alternative Python interpreter with a JIT.
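For instance, an explicit-loop kernel of this kind can be compiled to machine code simply by decorating the function. The sketch below is an editorial toy example using Numba's `@numba.njit` (it is not one of the benchmarked NBabel implementations, and the particle numbers are arbitrary):

```python
# Toy O(N^2) acceleration kernel for the N-Body problem, compiled to
# machine code by Numba the first time it is called.
import numba
import numpy as np

@numba.njit
def compute_accelerations(positions, masses):
    n = positions.shape[0]
    accelerations = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j, 0] - positions[i, 0]
            dy = positions[j, 1] - positions[i, 1]
            dz = positions[j, 2] - positions[i, 2]
            d3 = (dx * dx + dy * dy + dz * dz) ** 1.5
            accelerations[i, 0] += masses[j] * dx / d3
            accelerations[i, 1] += masses[j] * dy / d3
            accelerations[i, 2] += masses[j] * dz / d3
    return accelerations

positions = np.random.rand(128, 3)
masses = np.ones(128)
accelerations = compute_accelerations(positions, masses)  # triggers compilation
```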
\begin{figure}[ht]
@@ -103,8 +109,8 @@
implementations using Pythran and Numba are written in Python-Numpy but Numpy
is only used for its arrays as a data structure and not for advanced high-level
functions. (2) Four implementations in Python are faster than the C++
-implementation. The simple implementation labelled "Pythran naive" (simple
-Numpy code accelerated only by decorating one function with `@transonic.jit`
+implementation. The implementation labelled "Pythran naive" (simple Numpy code
+accelerated only by decorating one function with `@transonic.jit`
\cite{transonic}) is only 3 times slower than the Fortran implementation. (3)
All Python implementations are simpler to reason about, read and write than the C++
and Fortran implementations.
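To show what "decorating one function" looks like, here is an editorial sketch (not the actual "Pythran naive" source; the function body is a simplified Numpy version of the O(N^2) acceleration computation, and it has not been checked against the subset of Numpy supported by the Pythran backend). The only change to the plain Python-Numpy code is the decorator; transonic then compiles the function, by default with Pythran:

```python
# Editorial sketch of the `@transonic.jit` workflow (not the benchmarked
# "Pythran naive" implementation): plain Python-Numpy code plus a decorator.
import numpy as np
from transonic import jit

@jit
def compute_accelerations(positions, masses):
    n = positions.shape[0]
    accelerations = np.zeros_like(positions)
    for i in range(n):
        deltas = positions - positions[i]
        distances = np.sqrt(np.sum(deltas**2, axis=1))
        distances[i] = 1.0  # avoid 0/0 for the i == j term
        coefficients = masses / distances**3
        coefficients[i] = 0.0  # a particle does not accelerate itself
        accelerations[i] = np.sum(coefficients.reshape(n, 1) * deltas, axis=0)
    return accelerations

positions = np.random.rand(128, 3)
masses = np.ones(128)
accelerations = compute_accelerations(positions, masses)
```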
@@ -118,7 +124,7 @@
[^4]: https://discourse.julialang.org/t/nbabel-nbody-integrator-speed-up/
-The 4 points close to the bottom-left corner correspond to 2 parallel
+The four points close to the bottom-left corner correspond to two parallel
implementations using Pythran+OpenMP and Julia executed using 6 and 12 CPU
cores. We consider in Figure 1 the energy consumption of the cores used (6 or
12 for these runs and 1 for the sequential jobs), which makes sense on shared
reply_Zwart2020/figs/fig_bench_nbabel_parallel.png: image updated (47.7 KiB → 54.5 KiB)
@@ -103,3 +103,14 @@
year={1974},
publisher={ACM New York, NY, USA}
}
+@article{aycock2003brief,
+title={A brief history of just-in-time},
+author={Aycock, John},
+journal={ACM Computing Surveys (CSUR)},
+volume={35},
+number={2},
+pages={97--113},
+year={2003},
+publisher={ACM New York, NY, USA}
+}