diff --git a/reply_Zwart2020/correspondence.md b/reply_Zwart2020/correspondence.md
index 221c5714391760b85d322a34d0d5f0cf44b7f4e7..3967202b753d491149526f64e1b33c6030b99f42 100644
--- a/reply_Zwart2020/correspondence.md
+++ b/reply_Zwart2020/correspondence.md
@@ -15,7 +15,7 @@
 main claim is that the Python programming language represents an issue for the
 climate and should be avoided. We advocate that scientific programs written in
 Python can be very efficient and energy friendly. We argue that human factors
-and education are much more important than the choice of languages.
+and education are much more important than the choice of language.
 
 To support his idea, Zwart presents a benchmark on the N-Body problem with a
 very inefficient implementation in Python, running 50 times slower than a C++
@@ -31,7 +31,7 @@
 (<https://www.grid5000.fr>).
 
 Before focusing on the N-Body problem, let us put it in perspective and recall
-what is "Python" and why it is so successful. Indeed, all indicators show that
+what "Python" is and why it is so successful. Indeed, all indicators show that
 Python is one of the most used and loved languages for science and data
 analysis[^3]. Python is a dynamic programming language oriented towards
 communication between humans and fast prototyping. Reading and writing Python
@@ -35,6 +35,6 @@
 Python is one of the most used and loved languages for science and data
 analysis[^3]. Python is a dynamic programming language oriented towards
 communication between humans and fast prototyping. Reading and writing Python
-is very accessible and do not require a long training. It is generalist
-(seemingly suited to different tasks) and was designed to increase developers
+is very accessible and does not require a lot of training. It is generalist
+(suited to many different tasks) and was designed to increase developers'
 productivity. There are strong open-source communities using Python and a rich
@@ -40,5 +40,5 @@
 productivity. There are strong open-source communities using Python and a rich
-scientific ecosystem of several efficient libraries.
+scientific ecosystem of efficient libraries.
 
-[^3]: See for exemple the [TIOBE Index](https://www.tiobe.com/tiobe-index/),
+[^3]: See for example the [TIOBE Index](https://www.tiobe.com/tiobe-index/),
 the [IEEE Spectrum
@@ -47,7 +47,7 @@
 Developer Survey](https://insights.stackoverflow.com/survey).
 
 It is worth understanding that characterizing a language as being "compiled" or
-"interpreted" is a language abuse: these categories make sense only for
+"interpreted" is an oversimplification: these categories make sense only for
 specific implementations of languages. Moreover, some interpreters of dynamic
 languages (for example Julia or Matlab) actually compile parts of the code on
 the fly. Let us recall that compiling code to machine instructions can be done
@@ -56,5 +56,5 @@
 However, the most standard way to execute Python code is to interpret it with a
 program called CPython. It is the reference implementation of the language and
 in 2020, it still does not have a builtin JIT compiler. Therefore, CPython is
-relatively slow which explains Zwart results. However, it is important to
+relatively slow, which explains Zwart's results. However, it is important to
 realize that this inefficiency of the interpreter has a weak effect on the
@@ -60,10 +60,10 @@
 realize that this inefficiency of the interpreter has a weak effect on the
-overall performance of most programs. The total elapsed time and the energy
-consumption are often dominated by hard work done in optimized libraries. This
-is the basic principle of all the scientific Python ecosystem using Numpy
-\cite{harris2020array}.
+overall performance of most programs. Total elapsed time and energy
+consumption are often dominated by computations done in optimized libraries. This
+is the case for the scientific Python ecosystem, which uses NumPy for fast
+numerics \cite{harris2020array}.
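+
+To make this concrete, here is a minimal sketch (not taken from this
+correspondence nor from Zwart's benchmark): nearly all of the elapsed time
+of the snippet below is spent inside the compiled linear-algebra routine
+called by NumPy, while the CPython interpreter only executes a handful of
+bytecode instructions.
+
+```python
+import numpy as np
+from time import perf_counter
+
+# Two random 2000 x 2000 matrices (about 32 MB each in double precision)
+a = np.random.rand(2000, 2000)
+b = np.random.rand(2000, 2000)
+
+t0 = perf_counter()
+c = a @ b  # dispatched to an optimized BLAS routine, not to interpreted loops
+elapsed = perf_counter() - t0
+print(f"matmul: {elapsed:.2f} s for about {2 * 2000**3:.1e} floating-point operations")
+```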
 
 In many cases, very few lines of code dominate the total computation. It is
 usually known as the 80/20 rule and provides support for two software
 development principles: (i) "premature optimization is the root of all evil"
 \cite{knuth1974structured} and (ii) "measure, don't guess". These principles
@@ -65,11 +65,11 @@
 
 In many cases, very few lines of code dominate the total computation. It is
 usually known as the 80/20 rule and provides support for two software
 development principles: (i) "premature optimization is the root of all evil"
 \cite{knuth1974structured} and (ii) "measure, don't guess". These principles
-also apply for energy efficiency. For most Python programs, it would be counter
-productive and expensive to manually rewrite them in C++, with a small
+also apply to energy efficiency. For most Python programs, it would be
+counterproductive and expensive to manually rewrite them in C++, with a small
 gain/cost ratio.
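+
+As a minimal, hypothetical illustration of "measure, don't guess" (not part
+of the benchmark discussed here), the standard library module `cProfile` is
+enough to find the few functions that dominate a run:
+
+```python
+# Hypothetical toy example: profile first, then optimize only what dominates.
+import cProfile
+import pstats
+
+def toy_step(n=200_000):
+    # stand-in for one step of a real simulation
+    return sum(i * i for i in range(n))
+
+def run(nb_steps=50):
+    return [toy_step() for _ in range(nb_steps)]
+
+cProfile.run("run()", "profile.out")
+pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)
+```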
 
 However, some algorithms require low-level code and explicit loops. For
 example, for the N-Body problem, the computation of the acceleration of each
@@ -72,10 +72,10 @@
 gain/cost ratio.
 
 However, some algorithms require low-level code and explicit loops. For
 example, for the N-Body problem, the computation of the acceleration of each
-particle involves a loop on all other particles. Few lines of code are repeated
+particle involves a loop over all other particles. A few lines of code are repeated
 $N^2/2$ times per timestep. Zwart (2020) considered 10000 timesteps and
 $N=16384$, so the program is dominated by 1,342,177,280,000 executions of a
 simple and inexpensive computation. Using CPython for this very hot loop makes
 the whole program very inefficient. Good news for Python: it is straightforward
 to use efficient alternatives. For this benchmark, we use three tools: (i)
@@ -77,10 +77,10 @@
 $N^2/2$ times per timestep. Zwart (2020) considered 10000 timesteps and
 $N=16384$, so the program is dominated by 1,342,177,280,000 executions of a
 simple and inexpensive computation. Using CPython for this very hot loop makes
 the whole program very inefficient. Good news for Python: it is straightforward
 to use efficient alternatives. For this benchmark, we use three tools: (i)
-Pythran \cite{guelton2015pythran}, a Python-Numpy AOT compiler transpiling to
-C++, (ii) Numba \cite{lam2015numba}, a Python-Numpy JIT compiler based on LLVM
+Pythran \cite{guelton2015pythran}, a Python-NumPy AOT compiler transpiling to
+C++, (ii) Numba \cite{lam2015numba}, a Python-NumPy JIT compiler based on LLVM
 (same compilation target as Julia) and (iii) PyPy \cite{bolz2009tracing}, an
 alternative Python interpreter with a JIT.
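+
+For illustration, here is a minimal sketch of such a hot loop (a simplified
+stand-in, not the exact code benchmarked for Figure 1), accelerated here with
+Numba; the same pure-Python function could instead be compiled with Pythran,
+or decorated with `@transonic.jit` as in the "Pythran naive" implementation
+discussed below.
+
+```python
+import numpy as np
+from numba import njit  # Numba JIT: compiles the explicit loops to machine code
+
+@njit
+def compute_accelerations(masses, positions):
+    """Pairwise gravitational accelerations: the O(N^2) hot loop."""
+    nb_particles = masses.size
+    accelerations = np.zeros_like(positions)
+    for i in range(nb_particles):
+        for j in range(i + 1, nb_particles):
+            delta = positions[i] - positions[j]
+            distance_cubed = (delta[0] ** 2 + delta[1] ** 2 + delta[2] ** 2) ** 1.5
+            accelerations[i] -= masses[j] / distance_cubed * delta
+            accelerations[j] += masses[i] / distance_cubed * delta
+    return accelerations
+
+# Example call with random data (gravitational constant taken as 1)
+masses = np.random.rand(128)
+positions = np.random.rand(128, 3)
+print(compute_accelerations(masses, positions)[0])
+```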
 
@@ -99,8 +99,8 @@
 \end{figure}
 
 Figure 1 is equivalent to Figure 3 in Zwart (2020). The CO$_2$ production is
-ploted as a function of the elapsed time for ten implementations. The C++ and
+plotted as a function of the elapsed time for ten implementations. The C++ and
 Fortran implementations (green stars) are taken from the website
 <http://www.nbabel.org/> and were used by Zwart (2020). Note that these
 implementations could have been further optimized. However, we think they are
 representative of C++ or Fortran codes written by many scientists. We consider
@@ -103,6 +103,6 @@
 Fortran implementations (green stars) are taken from the website
 <http://www.nbabel.org/> and were used by Zwart (2020). Note that these
 implementations could have been further optimized. However, we think they are
 representative of C++ or Fortran codes written by many scientists. We consider
-five implementations in Python (red markers). We would like to emphasize few
+five implementations in Python (red markers). We would like to emphasize a few
 points: (1) These implementations are fully written in Python. The
@@ -108,3 +108,3 @@
 points: (1) These implementations are fully written in Python. The
-implementations using Pythran and Numba are written in Python-Numpy but Numpy
+implementations using Pythran and Numba are written in Python-NumPy, but NumPy
 is only used for its arrays as a data-structure and not for advanced high-level
@@ -110,6 +110,6 @@
 is only used for its arrays as a data-structure and not for advanced high-level
-functions. (2) Four implementations in Python are fastest than the C++
-implementation. The implementation labelled "Pythran naive" (simple Numpy code
+functions. (2) Four implementations in Python are faster than the C++
+implementation. The implementation labelled "Pythran naive" (simple NumPy code
 accelerated only by decorating one function with `@transonic.jit`
 \cite{transonic}) is only 3 times slower than the Fortran implementation. (3)
 All Python implementations are simpler to reason about, read and write than the
@@ -138,7 +138,7 @@
 minimizing the ecological impact of scientific computing is limited by human
 factors: time, work, knowledge and skills. For example, scientists have to be
 able to run heavy computations on shared clusters optimized in terms of energy
-consumption. They should also know how to profile their codes to discover which
-parts can potentially be optimized. Therefore, money and time should be
-invested on educating students and scientists. This benchmark demonstrates that
+consumption. They should also know how to profile their code to discover which
+parts can potentially be optimized. Therefore, time and money should be
+invested in educating students and scientists. This benchmark demonstrates that
 Python can actually be a good solution to easily obtain good performance with
@@ -144,6 +144,6 @@
 Python can actually be a good solution to easily obtain good performance with
-simple and readable codes. Therefore, education and tooling can be profitable
+simple and readable code. Therefore, investing in education and tooling is an effective way
 to minimize the overall ecological impact of computing, whatever the underlying
 language.
 
 \bibliographystyle{naturemag}
@@ -146,5 +146,5 @@
 to minimize the overall ecological impact of computing, whatever the underlying
 language.
 
 \bibliographystyle{naturemag}
-\bibliography{./pubs}
\ No newline at end of file
+\bibliography{./pubs}