fluiddyn / fluiddyn_papers / Commits / 0342212544c7

Commit 0342212544c7 authored 4 years ago by Pierre Augier

Commit message: reply_Zwart2020, nearly ready to submit (?)

Parent: 51c30dea7622
Showing 2 changed files with 15 additions and 10 deletions:

- reply_Zwart2020/correspondence.md: +15 −10
- reply_Zwart2020/figs/fig_bench_nbabel_parallel.png: +0 −0
reply_Zwart2020/correspondence.md (+15 −10), view file @ 03422125
@@ -75,10 +75,11 @@
 in 2020, it still does not have a built-in JIT compiler. Therefore, CPython is
 relatively slow which explains Zwart's results. However, it is important to
 realize that this inefficiency of the interpreter has a weak effect on the
-overall performance of most programs. Total elapsed time and energy
-consumption are often dominated by computations done in optimized libraries. This
-is the case for the scientific Python ecosystem, which uses NumPy for fast
-numerics \cite{harris2020array}.
+overall performance of most programs. Total elapsed time and energy consumption
+are often dominated by computations done in optimized libraries. The NumPy
+language and implementation \cite{harris2020array} were designed to describe
+algorithms with high-level code and avoid too frequent interactions with the
+interpreter.
 In many cases, very few lines of code dominate the total computation. It is
 usually known as the 80/20 rule and provides support for two software
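As an aside on the added sentence above, here is a minimal sketch (ours, not taken from the correspondence or its benchmarks) of what "describing an algorithm with high-level code" means in practice: the explicit loop re-enters the interpreter at every iteration, while the vectorized version spends nearly all its time inside NumPy's compiled routines. The function names and data are illustrative only.

```python
import numpy as np

def sum_of_squares_loop(values):
    """Interpreted element by element: the interpreter runs at every iteration."""
    total = 0.0
    for x in values:
        total += x * x
    return total

def sum_of_squares_numpy(values):
    """Same result expressed as one high-level array operation in compiled code."""
    return float(np.sum(values * values))

values = np.random.default_rng(0).standard_normal(1_000_000)
assert np.isclose(sum_of_squares_loop(values), sum_of_squares_numpy(values))
```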
@@ -102,7 +103,7 @@
 alternative Python interpreter with a JIT.
 \begin{figure}[ht]
-\centerline{\includegraphics[width=0.65\textwidth]{figs/fig_bench_nbabel_parallel}}
+\centerline{\includegraphics[width=0.7\textwidth]{figs/fig_bench_nbabel_parallel}}
 \cprotect\caption{Efficiency in terms of CO$_2$ production and elapsed time for
 implementations in Python, Julia, C++ and Fortran. Energy consumption
@@ -125,9 +126,9 @@
 points: (1) These implementations are fully written in Python. The
 implementations using Pythran and Numba are written in Python-NumPy but NumPy
 is only used for its arrays as a data-structure and not for advanced high-level
-functions. (2) Four implementations in Python are faster than the C++
-implementation. The implementation labelled "Pythran naive" (simple NumPy code
-accelerated only by decorating one function with `@transonic.jit`
+functions. (2) Four implementations in Python are faster than the C++ and
+Fortran implementations. The implementation labelled "Pythran naive" (simple
+NumPy code accelerated only by decorating one function with `@transonic.jit`
 \cite{transonic}) is only 3 times slower than the Fortran implementation. (3)
 All Python implementations are simpler to reason about, read and write than the
 C++ and Fortran implementations.
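To make the "Pythran naive" idea concrete, here is a small sketch of the Transonic just-in-time decorator mentioned in the text. The kernel below is a made-up O(N²) gravitational example in the spirit of the NBabel benchmark, not the actual code of that implementation; only the use of `@transonic.jit` on a single hot function is taken from the text.

```python
import numpy as np
import transonic

@transonic.jit
def compute_accelerations(positions, masses):
    """Gravitational accelerations (G = 1), written as simple NumPy code."""
    nb_particles = positions.shape[0]
    accelerations = np.zeros_like(positions)
    for index in range(nb_particles):
        vectors = positions - positions[index]
        distances = np.sqrt(np.sum(vectors**2, axis=1))
        distances[index] = 1.0  # placeholder to avoid dividing by zero below
        coefficients = masses / distances**3
        coefficients[index] = 0.0  # a particle exerts no force on itself
        accelerations[index] = np.sum(coefficients[:, np.newaxis] * vectors, axis=0)
    return accelerations

positions = np.random.default_rng(0).standard_normal((128, 3))
masses = np.ones(128)
accelerations = compute_accelerations(positions, masses)  # compilation is triggered by the first calls
```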
@@ -146,8 +147,12 @@
 cores. We consider in Figure 1 the energy consumption of the cores used (6 or
 12 for these runs and 1 for the sequential jobs), which make sense on shared
 clusters in which one can reserve only the needed cores. We see that
-parallelism with threads decreases the elapsed time but has a weak impact on
-energy consumption.
+parallelism with threads has only a moderate impact on energy consumption since
+the increase in power consumption partly counterbalances the decrease in
+elapsed time[^5].
+
+[^5]: For example, the 12-thread Pythran version is 10 times faster than the
+single-threaded one but produces only 2 times less CO$_2$.
 Our work shows that the performance of implementations depends less on
 languages than on developer skills and time spent on optimization. Moreover,
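The added footnote can be unpacked with the elementary relation between energy, average power and elapsed time; the factor of roughly 5 for power below is deduced from the two numbers quoted in the footnote rather than being a separately reported measurement:

\[
E = \bar{P}\, t ,
\qquad
\frac{E_{12}}{E_{1}}
= \frac{\bar{P}_{12}}{\bar{P}_{1}} \cdot \frac{t_{12}}{t_{1}}
\approx 5 \times \frac{1}{10} = \frac{1}{2},
\]

so dividing the elapsed time by 10 while the average power draw of the reserved cores grows about 5-fold only halves the energy consumed, and hence the CO$_2$ produced.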
reply_Zwart2020/figs/fig_bench_nbabel_parallel.png (+0 −0)

Binary image replaced: 54.5 KiB (@ 51c30dea) → 56.2 KiB (@ 03422125)