Commit 0342212544c7 authored by Pierre Augier
reply_Zwart2020, nearly ready to submit (?)

parent 51c30dea7622
@@ -75,10 +75,11 @@
 in 2020, it still does not have a built-in JIT compiler. Therefore, CPython is
 relatively slow, which explains Zwart's results. However, it is important to
 realize that this inefficiency of the interpreter has a weak effect on the
-overall performance of most programs. Total elapsed time and energy
-consumption are often dominated by computations done in optimized libraries. This
-is the case for the scientific Python ecosystem, which uses NumPy for fast
-numerics \cite{harris2020array}.
+overall performance of most programs. Total elapsed time and energy consumption
+are often dominated by computations done in optimized libraries. The NumPy
+language and implementation \cite{harris2020array} were designed to describe
+algorithms with high-level code to avoid too frequent interactions with the
+interpreter.
 
 In many cases, very few lines of code dominate the total computation. It is
 usually known as the 80/20 rule and provides support for two software
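
As a rough illustration of the point made in the added lines above (a sketch, not part of the commit), the two snippets below compute the same reduction; the loop form dispatches every element through the CPython interpreter, while the vectorized form involves the interpreter only once per array operation.

```python
import numpy as np

x = np.random.rand(1_000_000)

# Interpreted loop: CPython dispatches each addition individually.
total_loop = 0.0
for value in x:
    total_loop += value

# Vectorized call: a single interpreter interaction; the loop itself
# runs inside NumPy's compiled code.
total_vec = x.sum()
```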
@@ -102,7 +103,7 @@
 alternative Python interpreter with a JIT.
 
 \begin{figure}[ht]
-\centerline{\includegraphics[width=0.65\textwidth]{figs/fig_bench_nbabel_parallel}}
+\centerline{\includegraphics[width=0.7\textwidth]{figs/fig_bench_nbabel_parallel}}
 \cprotect\caption{Efficiency in terms of CO$_2$ production and elapsed time for
 implementations in Python, Julia, C++ and Fortran. Energy consumption
@@ -125,9 +126,9 @@
 points: (1) These implementations are fully written in Python. The
 implementations using Pythran and Numba are written in Python-NumPy but NumPy
 is only used for its arrays as a data-structure and not for advanced high-level
-functions. (2) Four implementations in Python are faster than the C++
-implementation. The implementation labelled "Pythran naive" (simple NumPy code
-accelerated only by decorating one function with `@transonic.jit`
+functions. (2) Four implementations in Python are faster than the C++ and
+Fortran implementations. The implementation labelled "Pythran naive" (simple
+NumPy code accelerated only by decorating one function with `@transonic.jit`
 \cite{transonic}) is only 3 times slower than the Fortran implementation. (3)
 All Python implementations are simpler to reason about, read and write than the
 C++ and Fortran implementations.
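
For readers unfamiliar with Transonic, the kind of decoration mentioned above looks roughly like the sketch below. The kernel body, names and array shapes are illustrative assumptions and not the benchmark code; only the `jit` decorator imported from `transonic` corresponds to the `@transonic.jit` usage cited in the text, and it is meant to compile the function with a backend such as Pythran while the plain Python/NumPy code is used until the extension is ready.

```python
import numpy as np
from transonic import jit


@jit
def compute_accelerations(positions, masses):
    """Toy O(N^2) gravitational kernel (illustrative only)."""
    n = positions.shape[0]
    accelerations = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            delta = positions[j] - positions[i]
            distance = np.sqrt(np.sum(delta**2))
            accelerations[i] += masses[j] * delta / distance**3
    return accelerations


# The first calls run the plain Python/NumPy code; once the compiled
# extension is available, later calls use it transparently.
positions = np.random.rand(64, 3)
masses = np.ones(64)
acc = compute_accelerations(positions, masses)
```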
@@ -146,8 +147,12 @@
 cores. We consider in Figure 1 the energy consumption of the cores used (6 or
 12 for these runs and 1 for the sequential jobs), which makes sense on shared
 clusters in which one can reserve only the needed cores. We see that
-parallelism with threads decreases the elapsed time but has a weak impact on
-energy consumption.
+parallelism with threads has only a moderate impact on energy consumption since
+the increase in power consumption partly counterbalances the decrease in
+elapsed time[^5].
+
+[^5]: For example, the 12-thread Pythran version is 10 times faster than the
+single-threaded one but produces only 2 times less CO$_2$.
 
 Our work shows that the performance of implementations depends less on
 languages than on developer skills and time spent on optimization. Moreover,
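
The ratios quoted in footnote 5 can be checked with the elementary relation $E = P\,t$ between energy, mean power and elapsed time; taking the speed-up of 10 and the factor 2 in CO$_2$ from the footnote, the 12-thread run must draw roughly 5 times more power on average than the single-threaded one (a back-of-the-envelope reading of the footnote, not a measurement from the paper).

```latex
% Back-of-the-envelope check of footnote 5 (ratios taken from the text above).
\[
  \frac{E_{12}}{E_{1}}
  = \frac{P_{12}\, t_{12}}{P_{1}\, t_{1}}
  \quad\Longrightarrow\quad
  \frac{P_{12}}{P_{1}}
  = \frac{E_{12}/E_{1}}{t_{12}/t_{1}}
  = \frac{1/2}{1/10}
  = 5 .
\]
```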
reply_Zwart2020/figs/fig_bench_nbabel_parallel.png: binary image updated (54.5 KiB → 56.2 KiB).