------------------------------------------------------------

maybe statically optimize away some stm_become_inevitable() calls: there
are some loops that call it repeatedly (maybe not relevant
for performance)
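
For illustration, a minimal sketch of the hoisting (assuming the
stmgc-c7 signature stm_become_inevitable(tl, msg); the loop body and
names are hypothetical)::

    #include "stmgc.h"                     /* assumed stmgc-c7 header */

    extern stm_thread_local_t my_tl;       /* hypothetical thread-local */
    extern void do_io(long i);             /* hypothetical: needs inevitability */

    void loop_now(long n) {
        for (long i = 0; i < n; i++) {
            /* currently: called again on every iteration */
            stm_become_inevitable(&my_tl, "io");
            do_io(i);
        }
    }

    void loop_optimized(long n) {
        /* once inevitable, the transaction stays inevitable, so a
           static optimization could hoist the call out of the loop */
        stm_become_inevitable(&my_tl, "io");
        for (long i = 0; i < n; i++)
            do_io(i);
    }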

------------------------------------------------------------

1b664888133d (March 15, 2014): the overhead of a non-JIT STM, when
compared with a non-JIT plain PyPy, is measured to be 54% in a
single-threaded trivial benchmark.  A tentative break-down of this figure:

* 15% are from stm_write() on writing non-GC pointers into GC objects

* 7% are from stm_write() on writing a constant GC pointer (likely null)
  (in a regular pypy, this doesn't emit a write_barrier)

* 14% are from stm_read()

* 3% (in this benchmark) is just a slower startup time

* 6% were removed soon afterwards by ddbc16971682

* the rest: ~9% from unknown other places (may include accessing
  prebuilt GC objects, which requires an indirection)

UPDATE: with ddbc16971682 the figure seems to be only 38% slower.
Assuming that all the other points stayed at the same overhead, this
would perfectly explain the slow-down.

------------------------------------------------------------

clang doesn't optimize away multiple stm_write() calls in a row on the
same object (unlike GCC).  Optimize them manually...
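
A plain-C sketch of the manual merge (assuming the stmgc-c7 stm_write()
barrier and glossing over its segment-prefixed pointer types; the
object layout is hypothetical)::

    #include "stmgc.h"                     /* assumed stmgc-c7 header */

    typedef struct {                       /* hypothetical layout */
        struct object_s hdr;
        long a, b;
    } node_t;

    void fill(node_t *p) {
        /* what clang currently emits: one barrier per store */
        stm_write((object_t *)p);  p->a = 1;
        stm_write((object_t *)p);  p->b = 2;    /* redundant */
    }

    void fill_merged(node_t *p) {
        /* no transaction break can happen between the two stores,
           so a single barrier covers both */
        stm_write((object_t *)p);
        p->a = 1;
        p->b = 2;
    }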

------------------------------------------------------------

stm_read() should also be optimized (moved out of loops). E.g.
quick_sort.py suffers a 30% slowdown because it uses deque.locate()
extensively. This method happens to have a *very* tight loop 
where the read barrier has a big negative effect and could 
actually be moved out.
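
A plain-C sketch of the hoisting (assuming the stmgc-c7 stm_read()
barrier and glossing over its segment-prefixed pointer types; the list
layout is hypothetical)::

    #include "stmgc.h"                     /* assumed stmgc-c7 header */

    typedef struct {                       /* hypothetical layout */
        struct object_s hdr;
        long length;
        long items[1];
    } list_t;

    long locate(list_t *lst, long value) {
        /* hoisted: one read barrier instead of one per iteration;
           valid only because the loop body contains no transaction
           break */
        stm_read((object_t *)lst);
        for (long i = 0; i < lst->length; i++)
            if (lst->items[i] == value)
                return i;
        return -1;
    }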

------------------------------------------------------------

__pypy__.thread.getsegmentlimit():

XXX This limit is so far a compile time option (STM_NB_SEGMENTS in
rpython/translator/stm/src_stm/stmgc.h), but this should instead be
based on the machine found at run-time.  We should also be able to
change the limit (or at least lower it) with setsegmentlimit().
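
A sketch of deriving the default at run-time instead (standard POSIX
call; the function name is hypothetical)::

    #include <unistd.h>

    /* instead of the compile-time STM_NB_SEGMENTS, base the default
       segment limit on the number of CPUs of the machine we run on */
    long default_segment_limit(void) {
        long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
        return ncpus > 0 ? ncpus : 1;
    }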

------------------------------------------------------------

JIT: add an artificial malloc if the loop is so small that it doesn't
contain any!

------------------------------------------------------------

weakrefs stay alive longer than expected::
    
    y = some object that was already alive for a while
    x = ref(y)
    del y
    gc.collect()        # y doesn't die: it's needed if we abort
    assert x() is None  # so this assert fails

A dying weakref might be a cross-transaction way to exchange
information when there should be none::

    thread 1:
        if <condition>:
            x = some_weakref()
        ...

    thread 2:
        gc.collect()   # will kill some_weakref() but only if
                       # thread 1 did not, so far, read it
        if some_weakref() is None:
            ...

It might be enough to apply these rules: (1) an explicit gc.collect()
turns the transaction inevitable first; (2) if any non-inevitable
transaction has already *read* the weakref, then its target remains alive.

This might require a tweak to consider an object as dead (for the
purposes of weakrefs) if it's only reachable via the old version of an
old_modified_object in the inevitable transaction: in this case, other
transactions may still reach the objects in question, so they shouldn't
be deallocated just now, but by reaching them the other transactions put
themselves in a situation where they necessarily abort.

------------------------------------------------------------

missing recursion detection (both in interpreted and JITted mode)

------------------------------------------------------------

change the limit of 1 GB

------------------------------------------------------------

Re-add the disabled optimizations (only outside the JIT):
(1) withmethodcache
(2) LOAD_ATTR_caching, LOOKUP_METHOD_mapdict

------------------------------------------------------------

pypy_g_BlackholeInterpBuilder_acquire_interp creates conflicts
by caching the BlackholeInterps.  It has been disabled; check
if we can re-enable it...

------------------------------------------------------------

allocating a dummy 16 bytes if a loop doesn't allocate anything else:
this could be replaced by lowering the nursery's limit, to avoid creating
holes (or even mostly-empty, already-zeroed nurseries that must still be
entirely memset)
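
A sketch of the alternative (all names hypothetical; the real nursery
fields live inside stmgc)::

    struct nursery_s {                 /* hypothetical bump allocator */
        char *current;                 /* next free byte */
        char *limit;                   /* end of the usable nursery */
    };

    /* instead of allocating a dummy 16-byte object, make the next
       bump allocation hit the slow path, which reaches the same
       transaction-break machinery without creating a hole */
    void request_break_soon(struct nursery_s *n) {
        n->limit = n->current;
    }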

------------------------------------------------------------

hg diff -r b0339cb53372^2 -r dd8e2f69fe96^2 : these trunk changes are
missing in the stmgc-c7 branch of PyPy.  In particular, reduce the diff
between stmgc-c7 and default (kill rstm.ThreadLocalRef, etc)
    
------------------------------------------------------------

there are crashes in PyPy-STM related to markers (with the JIT?).
Also, some marker logic is missing with the JIT.

------------------------------------------------------------

GC: call __del__()

------------------------------------------------------------

look at jit.elidables as a way to specify "this function can run
earlier, in a separate transaction".  Useful to avoid pointless
conflicts in cases where the jit.elidable updates some cache, like with
MapDictStrategy.

------------------------------------------------------------

dicts: have an implementation that follows the principles in
stmgc/hashtable/design.txt

------------------------------------------------------------

replace "atomic transactions" with better management of thread.locks.

------------------------------------------------------------

in JITted traces we see::

    stm_read(p125)
    cond_call_gc_wb_array(p125...)    # maybe we don't need the stm_read?

------------------------------------------------------------

we should fake the stm_location inside jit/metainterp, so that it
is reported correctly even if we're (1) tracing, (2) blackholing,
or (3) in ResumeDataDirectReader.change_stm_location



===============================================================================
===========    the rest is from stmgc-c4    ===================================
===============================================================================



------------------------------------------------------------

POSSIBLE BUG:
part of this is done: still investigate where transaction
breaks are really allowed to happen in the JIT (JUMP,
FINISH, call_footer(), ...)

investigate whether another thread can force a jitframe.  If it can,
making a transaction break *after* a guard_not_forced would be wrong,
as the force would only become visible after the break.  (The GIL
doesn't get released in between the GUARD and the next call that is
allowed to release it, so there is no problem there.)
A possible solution: move transaction breaks to right before
guard_not_forced.

------------------------------------------------------------

should stm_thread_local_obj always be read & writeable? would
a write-barrier in begin_transaction be too much for small
transactions? should we handle it specially (undolog?)

------------------------------------------------------------

looking at the trace of fibo.tlc (targettlc.py), there are a lot
of read barriers followed by write barriers.  Merging them
and making the first one a write barrier is important!
(attention: changing A2R->A2W after having placed the barrier
must still invalidate all A2R barriers at the point of the new A2W)
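
A plain-C sketch of the merge (assuming stmgc-c4-style barriers that
return the possibly-relocated object; signatures and layout are
hypothetical simplifications)::

    extern void *stm_read_barrier(void *);    /* assumed c4-style */
    extern void *stm_write_barrier(void *);

    typedef struct { long value; } node_t;    /* hypothetical */

    void incr(node_t *p) {
        /* current pattern: an A2R immediately followed by an A2W
           on the same object */
        node_t *q = stm_read_barrier(p);
        long x = q->value;
        q = stm_write_barrier(q);
        q->value = x + 1;
    }

    void incr_merged(node_t *p) {
        /* merged: emit a single A2W barrier up front */
        node_t *q = stm_write_barrier(p);
        q->value = q->value + 1;
    }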

------------------------------------------------------------

we have crashes when setting trace_eagerness very low
-> many guards generated.
It may be because patching the assembler code is not atomic.
(unlikely, after having tried to (badly) synchronize the code around
the places that patch assembler)
Maybe solve this with a new stmgc library function that synchronizes
all threads so that only the caller is running.

------------------------------------------------------------

stmgc library: since we copy back over h_originals, it may
make sense to treat them differently during allocation and
collection, e.g. have a separate space to allocate h_originals from.
That way we get a more compact layout in memory (less fragmentation).

------------------------------------------------------------

make stm_transaction_break use cond_call (or other ways to not
spill all registers)

------------------------------------------------------------

constptrs always require the slow path of the read barrier if they
point to a stub; they also always require the slow path of the write
barrier, because there is always one indirection to the current version

------------------------------------------------------------

we may have too many transaction breaks in jitted code.

------------------------------------------------------------

unregister constptrs in stmgc when freeing traces

------------------------------------------------------------

stm-jitdriver with autoreds

------------------------------------------------------------

try to let non-atomic inevitable transactions run for longer, until
another thread starts waiting for the mutex

------------------------------------------------------------

RPyAssert(i < len(lst)): if lst is global this turns into tons of code

------------------------------------------------------------

JIT: finish (missing: the call in execute_token(), reorganize pypy source, ?)

------------------------------------------------------------

optimize the static placement of the STM_XxxBARRIERs and use them in the JIT

------------------------------------------------------------



Current optimization opportunities (outside the JIT)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

tweak translator/stm/ to improve the placement of barriers, at least at
the whole-function level, but maybe cross-function; and reintroduce the
tweaks to the PyFrame object (make sure it's always written, and don't
emit more barriers on it)

in parallel, tweak the API of stmgc: support "tentative" write_barrier calls
that are not actually followed by a write (checked by comparing the
object contents)

in the interpreter, e.g. BINARY_ADD calls space.add(), which possibly
(but rarely) can cause a transaction break, thus requiring that the
frame be write_barrier()-ed again.  I'm thinking about alternatives for
this case: e.g. have a separate stack of objects, where the top-most
object on this stack is always in write mode; just after a transaction
break, we force a write barrier on the top object of the stack.  This
would avoid the usually-pointless write barriers on the PyFrame
everywhere in the interpreter.
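
A sketch of that idea (all names hypothetical, using a c4-style write
barrier that returns the object)::

    extern void *stm_write_barrier(void *);   /* assumed c4-style */

    static void *write_mode_stack[64];        /* hypothetical stack */
    static int write_mode_top = 0;

    /* called once just after every transaction break: re-acquire only
       the top-most object (typically the current PyFrame) in write
       mode, instead of write-barrier-ing the frame at every bytecode */
    void after_transaction_break(void) {
        if (write_mode_top > 0)
            write_mode_stack[write_mode_top - 1] =
                stm_write_barrier(write_mode_stack[write_mode_top - 1]);
    }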

running valgrind we can see X% of the time spent in the read or write
barriers, but it would be interesting to also know the time spent in
the fast path, as well as splitting it based e.g. on the RPython type of
object.  See also vtune.

JIT
~~~

* use specialized barriers in the JIT
* optimize the produced assembler code
* avoid calling aroundstate.after() for call_release_gil and instead
  start a normal transaction after the call
* maybe emit GUARD_NOT_INEVITABLE after call_may_force and
  call_assembler: a small check whether we are inevitable, doing a
  transaction_break if we are
* look at the XXXs for STM everywhere