In which we document cpyext compatibility progress, this time with cython and pandas.
[TOC]
Jan 1, 2018
Coming back to this, I see I never updated for "success": Cython and Pandas now just work with PyPy.
Aug 28, 2017
After a few fixes to PyPy, Cython, and Pandas, with some pending tweaks to pandas, there are 15 tests that still fail. https://gist.github.com/mattip/f587cabc83d24236dff54d4831048941 Some of the failures are benign: different exception messages or locked resources; but there are still deeper issues with buffers and with exceptions that are not raised.
There is also an incompatibility with the use of gc.collect() to break weakref cycles: NDFrame._check_setitem_copy uses this method to check that it is safe to modify data, and the check fails to break the cycles on PyPy.
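For illustration, here is a minimal sketch of the pattern (my own reconstruction, not pandas' actual code): hold a weakref to an object that participates in a reference cycle, collect, and expect the weakref to be dead.
#!python
# A minimal sketch, not pandas' code: _check_setitem_copy keeps a weakref
# to the original data and relies on gc.collect() clearing dead cycles.
import gc
import weakref

class Node(object):
    pass

a = Node()
b = Node()
a.other, b.other = b, a   # a <-> b form a reference cycle
ref = weakref.ref(a)      # weakref into the cycle
del a, b
gc.collect()
# CPython clears the weakref here, and the pandas check assumes that;
# when cpyext objects participate in the cycle, PyPy's collect may not.
print(ref() is None)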
Aug 9, 2017
It turns out that the MemoryError had nothing to do with memory; it was due to a missing check when calling unbound descriptor methods with no arguments, specifically np.int32.__array__(). This now properly raises a TypeError("descriptor '%s' of '%s' object needs an argument" % ...). After fixing another annoying incompatibility in PyObject_RichCompareBool(obj, obj, Py_EQ) where obj is nan, we are now down to 81 failed, 9565 passed, 1980 skipped, 11 xfailed, 2 error. Here is the latest sorted list of failures.
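For context, here is a small pure-Python illustration (my own, not from the test suite) of the two CPython behaviors being matched: the unbound-descriptor check, and the identity shortcut PyObject_RichCompareBool takes before calling __eq__, which list.count relies on.
#!python
# Calling an unbound descriptor method with no arguments must raise:
try:
    int.__add__()      # no self supplied
except TypeError as e:
    print(e)           # descriptor '__add__' of 'int' object needs an argument

# PyObject_RichCompareBool(a, b, Py_EQ) short-circuits when a is b,
# so list.count finds nan even though nan != nan:
nan = float('nan')
print(nan == nan)        # False
print([nan].count(nan))  # 1, because of the identity shortcut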
July 12, 2017
Pandas is now down to ~150 errors since I found the cause of the first two major failures in the sorted list https://gist.github.com/mattip/9b39ded038d3c3f4a344bcb03d675b3b. I also found the cause of the uncaught exception: np.sort() fails to check PyErr_Occurred() properly, and PyPy is more pedantic about not letting that slip through. Here is the issue. Pandas is testing refcount semantics, which accounts for another 25 or so failures, so there are about 100 true failures left. About half of those are MemoryError, where we seem to be leaking or the gc is not aware of allocations. A major contributor seems to be things like a = np.array(range(5000)), whereas a = np.arange(5000) does not seem to "leak".
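A rough sketch of how the two spellings can be compared, assuming psutil is available: np.array(range(n)) pushes every boxed integer through cpyext, while np.arange(n) stays inside numpy.
#!python
# Compare resident-set growth for the two ways of building the array.
import gc
import numpy as np
import psutil

def rss_growth(make, repeats=200):
    proc = psutil.Process()
    gc.collect()
    before = proc.memory_info().rss
    for _ in range(repeats):
        make()
    gc.collect()
    return proc.memory_info().rss - before

print('np.array(range(5000)):', rss_growth(lambda: np.array(range(5000))))
print('np.arange(5000):      ', rss_growth(lambda: np.arange(5000)))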
July 7, 2017
Using a PyPy2 from the cpyext-add_newdocs branch, and NumPy from the refactor-updateifcopy branch, I now have only a single NumPy failure, which for me also fails on CPython. The Pandas tests (using the xdist run as in the May 12 report below) run to completion, and I now get
267 failed, 9087 passed, 1631 skipped, 6 xfailed in 702.07 seconds
For a sorted listing of 258 of the failures, see https://gist.github.com/mattip/9b39ded038d3c3f4a344bcb03d675b3b
I did make this test change to prevent creating very many PyIntObject objects when building the ndarray from a list:
#!diff
--- a/pandas/tests/test_sorting.py
+++ b/pandas/tests/test_sorting.py
@@ -59,7 +59,7 @@ class TestSorting(object):
     def test_int64_overflow_moar(self):
         # GH9096
-        values = range(55109)
+        values = np.arange(55109)
         data = pd.DataFrame.from_dict({'a': values,
                                        'b': values,
                                        'c': values,
For posterity, here are the results of the NumPy test run:
#!sh
$ ../pypy-test/bin/pypy runtests.py
Building, see build.log...
Build OK
Running unit tests for numpy
NumPy version 1.14.0.dev0+6dc0c65
NumPy relaxed strides checking option: True
NumPy is installed in /home/matti/pypy_stuff/numpy/build/testenv/site-packages/numpy
Python version 2.7.13 (b2f03ffd8457, Jul 07 2017, 07:54:14)[PyPy 5.9.0-alpha0 with GCC 5.4.0 20160609]
nose version 1.3.7
.......F
======================================================================
FAIL: test_scripts.test_f2py
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/matti/pypy_stuff/pypy-test/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/matti/pypy_stuff/numpy/build/testenv/site-packages/numpy/testing/decorators.py", line 147, in skipper_func
return f(*args, **kwargs)
File "/home/matti/pypy_stuff/numpy/build/testenv/site-packages/numpy/tests/test_scripts.py", line 93, in test_f2py
assert_(success, msg)
File "/home/matti/pypy_stuff/numpy/build/testenv/site-packages/numpy/testing/utils.py", line 92, in assert_
raise AssertionError(smsg)
AssertionError: Warning: neither f2py nor f2py2 nor f2py2.7 found in path
----------------------------------------------------------------------
Ran 6588 tests in 157.434s
FAILED (KNOWNFAIL=6, SKIP=30, failures=1)
I made sure all the add_newdoc calls were passing by modifying the NumPy source code:
#!diff
diff --git a/numpy/lib/function_base.py b/numpy/lib/function_base.py
index 8ee6a54..b1da22f 100644
--- a/numpy/lib/function_base.py
+++ b/numpy/lib/function_base.py
@@ -4536,11 +4536,17 @@ def add_newdoc(place, obj, doc):
         new = getattr(__import__(place, globals(), {}, [obj]), obj)
         if isinstance(doc, str):
             add_docstring(new, doc.strip())
+            if new.__doc__ != doc.strip():
+                print('add_docstring failed for', new)
         elif isinstance(doc, tuple):
             add_docstring(getattr(new, doc[0]), doc[1].strip())
+            if getattr(new, doc[0]).__doc__ != doc[1].strip():
+                print('add_docstring failed for', new, doc[0], getattr(new, doc[0]))
         elif isinstance(doc, list):
             for val in doc:
                 add_docstring(getattr(new, val[0]), val[1].strip())
+                if getattr(new, val[0]).__doc__ != val[1].strip():
+                    print('add_docstring failed for', new, val[0], getattr(new, val[0]))
     except:
         pass
June 20, 2017
Using the latest PyPy2 nightly and NumPy including the updateifcopy fix from here (which is a numpy pull request but will need work before being accepted), the latest Pandas head crashes for me; I needed to use the --boxed argument to pytest-xdist. In more detail, I tried the following command
#!shell
$ (ulimit -v 5000000; p=../pypy-test/bin/pypy; \
export PYTHONHASHSEED=$($p -c 'import random; \
print(random.randint(1, 4294967295))'); \
$p -mpytest pandas --skip-slow --skip-network --memory-usage \
--boxed -n4 -v 2>&1 | tee ../test_pandas.txt)
which utilizes the pytest-memory-usage plugin with a patch for pypy, and the pytest-xdist package for running each test in a boxed process.
The tests now ran to completion, and gave me:
#!python
============================= test session starts ==============================
platform linux2 -- Python 2.7.13[pypy-5.9.0-alpha], pytest-3.1.2, py-1.4.34, pluggy-0.4.0 -- /home/matti/pypy_stuff/pypy-test/bin/pypy
cachedir: .cache
rootdir: /home/matti/pypy_stuff/pandas, inifile: setup.cfg
plugins: memory-usage-0.1.0, xdist-1.17.1
gw0 I / gw1 I / gw2 I / gw3 I
[gw0] linux2 Python 2.7.13 cwd: /home/matti/pypy_stuff/pandas
[gw1] linux2 Python 2.7.13 cwd: /home/matti/pypy_stuff/pandas
[gw2] linux2 Python 2.7.13 cwd: /home/matti/pypy_stuff/pandas
[gw3] linux2 Python 2.7.13 cwd: /home/matti/pypy_stuff/pandas
[gw0] Python 2.7.13 (6d81023ea3fe, Jun 20 2017, 14:06:44) -- [PyPy 5.9.0-alpha0 with GCC 5.4.0 20160609]
[gw2] Python 2.7.13 (6d81023ea3fe, Jun 20 2017, 14:06:44) -- [PyPy 5.9.0-alpha0 with GCC 5.4.0 20160609]
[gw1] Python 2.7.13 (6d81023ea3fe, Jun 20 2017, 14:06:44) -- [PyPy 5.9.0-alpha0 with GCC 5.4.0 20160609]
[gw3] Python 2.7.13 (6d81023ea3fe, Jun 20 2017, 14:06:44) -- [PyPy 5.9.0-alpha0 with GCC 5.4.0 20160609]
gw0 [11199] / gw1 [11199] / gw2 [11199] / gw3 [11199]
scheduling tests via LoadScheduling
...
261 failed, 9065 passed, 1867 skipped, 6 xfailed, 1 error in 1560.98 seconds
June 19, 2017
Issued a pull request to handle updateifcopy, which used to rely on refcount semantics. A run of the NumPy tests on the nightly after today's commit 6d81023ea3fe now yields the following output:
#!python
$ ../pypy-test/bin/python runtests.py
Building, see build.log...
Build OK
Running unit tests for numpy
NumPy version 1.14.0.dev0+f012769
NumPy relaxed strides checking option: True
NumPy is installed in /home/matti/pypy_stuff/numpy/build/testenv/site-packages/numpy
Python version 2.7.13 (6d81023ea3fe, Jun 19 2017, 17:02:48)[PyPy 5.9.0-alpha0 with GCC 5.4.0 20160609]
nose version 1.3.7
................F
======================================================================
ERROR: test_add_doc (test_function_base.TestAdd_newdoc)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/matti/pypy_stuff/numpy/build/testenv/site-packages/numpy/testing/decorators.py", line 147, in skipper_func
return f(*args, **kwargs)
File "/home/matti/pypy_stuff/numpy/build/testenv/site-packages/numpy/lib/tests/test_function_base.py", line 3393, in test_add_doc
self.assertEqual(np.core.flatiter.index.__doc__[:len(tgt)], tgt)
TypeError: 'NoneType' object is not subscriptable
======================================================================
FAIL: test_collections_hashable (test_multiarray.TestHashing)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/matti/pypy_stuff/numpy/build/testenv/site-packages/numpy/core/tests/test_multiarray.py", line 6693, in test_collections_hashable
self.assertFalse(isinstance(x, collections.Hashable))
AssertionError: True is not false
======================================================================
FAIL: test_scripts.test_f2py
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/matti/pypy_stuff/pypy-test/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/matti/pypy_stuff/numpy/build/testenv/site-packages/numpy/testing/decorators.py", line 147, in skipper_func
return f(*args, **kwargs)
File "/home/matti/pypy_stuff/numpy/build/testenv/site-packages/numpy/tests/test_scripts.py", line 93, in test_f2py
assert_(success, msg)
File "/home/matti/pypy_stuff/numpy/build/testenv/site-packages/numpy/testing/utils.py", line 92, in assert_
raise AssertionError(smsg)
AssertionError: Warning: neither f2py nor f2py2 nor f2py2.7 found in path
----------------------------------------------------------------------
Ran 6588 tests in 143.643s
FAILED (KNOWNFAIL=6, SKIP=32, errors=1, failures=2)
I do not know if we have a solution for the test_add_doc failure, and the hashable one should probably just be skipped.
June 3, 2017
Both pandas and numpy accepted some patches, so now using HEAD of pypy, numpy, and pandas results in
268 failed, 9064 passed, 1837 skipped, 6 xfailed, 2 error in 1175.75 seconds
The breakdown of test failures should be available here https://gist.github.com/mattip/9b39ded038d3c3f4a344bcb03d675b3b
May 12, 2017
After this patch to cython for issue 1704 https://github.com/cython/cython/pull/1704/files, and using pandas HEAD, I use the following (taken from pandas' test_fast script):
#!shell
$ ../pypy-latest/bin/pip install pytest-xdist
$ (ulimit -v 5000000; p=../pypy-latest/bin/pypy; \
export PYTHONHASHSEED=$($p -c 'import random; print(random.randint(1, 4294967295))'); \
$p -mpytest pandas --skip-slow --skip-network -m "not single" -n 4)
and get
268 failed, 8822 passed, 1386 skipped, 3 xfailed, 2 error in 1359.61 seconds
So still more work to do, but progress. I also discovered the --lf option to pytest and the file .cache/v/cache/lastfailed, which should make rerunning the tests much easier.
May 2, 2017
Found a memory checker for pytest, but it had an issue with PyPy. Not sure if this will help, but at least now I have a report via psutil of the memory delta after each test, measured after running gc.collect(). I needed to run the test suite one directory at a time in order to complete the tests. A major offender seems to be pandas/tests/indexing/test_indexing.py::TestMisc::test_indexer_caching, which passes but increases memory use by 944MB and runs for ~11 seconds, so maybe a good candidate for finding a leak?
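The idea behind the report, as a minimal sketch (this is not the actual pytest-memory-usage code; the fixture name is made up): an autouse fixture that collects garbage and records the RSS around each test.
#!python
# conftest.py -- report the RSS delta around each test, after gc.collect().
import gc

import psutil
import pytest

@pytest.fixture(autouse=True)
def report_memory_delta(request):
    proc = psutil.Process()
    gc.collect()
    before = proc.memory_info().rss
    yield
    gc.collect()
    delta = proc.memory_info().rss - before
    print('%s: RSS delta %.1f MB' % (request.node.nodeid, delta / 1e6))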
April 27, 2017
PyPy 5.7 was released, which changed the default GC rootfinder strategy to shadowstack, which introduced a subtle bug that took a few weeks to find. I (@mattip) also found a few more places where refcheck=False should be exposed, and pandas itself updated to a v0.20.0 release candidate. Tests still do not run to completion using the pypy cpyext-obj-stealing branch; the process eats up too much memory. Time to find where the leaks are.
Mar 19, 2017
Progress: using the pypy-refcheck branch of pandas from https://github.com/mattip/pandas, which exposes a refcheck=True argument on some of pandas' functions and sets it to False where safe, we can now run the pandas test suite to the end. It does "leak" memory; I needed to have 5GB available, but that was with the matplotlib tests as well. Many of the matplotlib tests fail since we do not support their tkagg backend.
Mar 2, 2017
(antocuni's notes after the PyPy sprint) I tried to run the pandas tests on PyPy. The first thing I noticed is that a lot of tests fail because pandas calls numpy's ndarray.resize() with its default argument refcheck=True, which is not supported on PyPy. We need to think about this, because it makes almost all numpy-based programs out there broken on PyPy; the error message suggests passing refcheck=False, but maybe emitting a warning would be better? To go further, I patched numpy to allow refcheck=True even on PyPy.
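A minimal illustration of the incompatibility (my own example, not from the notes):
#!python
# On PyPy, resize() with the default refcheck=True cannot verify the
# reference count, so it raises; the error suggests refcheck=False.
import numpy as np

a = np.arange(10)
try:
    a.resize(20)              # fine on CPython here, raises on PyPy
except ValueError as e:
    print(e)
a.resize(20, refcheck=False)  # works, but the caller must know it is safe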
Then I tried to run the pandas tests and ran out of memory. After a bit of investigation, it turned out that PyPy leaks memory when you call memoryview() on a numpy array (this was called only very indirectly by cython code in pandas, but the actual leak was pypy's fault). I fixed the memory leak in the fix-cpyext-releasebuffer branch, which is not merged yet at the time of writing.
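The trigger boils down to something like this minimal sketch (the real call came indirectly from cython code inside pandas):
#!python
# Each memoryview goes through the cpyext buffer interface; before the
# fix, PyPy never released the buffer, so memory grew without bound.
import numpy as np

for _ in range(100000):
    m = memoryview(np.zeros(1000))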
Then, I tried to run tests again but found another leak. After a lot of trial/error/bisection, I managed to produce a small example which shows the leak, which is available from this gist: https://gist.github.com/antocuni/9cbdaf80c7f53a80635a9e1f58b393ec
I tried to run the example with valgrind --tool=massif: the resulting graph shows a leak, but it is not clear what the cause is. It seems that we are leaking both in the "red" and "orange" zones:
The red zone is probably normal pypy GC objects; the orange zone is raw_malloced objects, which seem to be created by pypy_g_BaseCPyTypeDescr_allocate.
It is possible that this is not an actual leak but just a limitation of the current PyPy GC: as Armin pointed out, our GC is currently unable to free the memory if there is a cycle of references which involves both GC objects and PyObject*. That might be the case here, but I could not manage to find an obvious spot by looking at the pandas source code.
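For a concrete picture of such a cycle (my own illustration, not taken from the gist): an object-dtype array can hold a PyObject* reference back to itself, so the cycle spans both worlds.
#!python
# The array's C-level data holds a PyObject* back to the array itself,
# so the cycle involves both a PyPy GC object and refcounted PyObject*s.
import numpy as np

a = np.empty(1, dtype=object)
a[0] = a   # cycle through the C-level data pointer
del a      # a cycle like this could not be freed by PyPy's GC at the time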
Jan 17, 2017
Pandas' HEAD can now be imported on a PyPy nightly from today; all the needed merging has been completed. However, the test suite crashes, and some objects are never collected by the GC. This is due to the extensive use of Cython in Pandas: Cython allocates directly with malloc in C, so the PyPy garbage collector has no idea how large an object really is. The collector would correctly run the tp_dealloc function when collecting the objects, but unfortunately it does not even try, since it thinks the allocated memory is much smaller than it actually is.
Jan 8, 2017
Pandas now loads and runs using the pypy-fixes branch (hopefully the pandas devs will merge it soon) and the missing-tp_new branch of PyPy. Testing pandas quickly eats up all available memory. We used to have a LeakTest class that would check references at method teardown to try to discover reference leaks in the cpyext layer; I guess I need to recreate a similar mechanism in order to find where I create long-lived objects.
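A rough sketch of what such a mechanism could look like (the original LeakTest is gone; the class name and threshold here are made up):
#!python
# A teardown check that flags tests which leave many new objects alive;
# meant to be mixed into a unittest.TestCase.
import gc

class LeakCheckMixin(object):
    def setUp(self):
        gc.collect()
        self._objects_before = len(gc.get_objects())

    def tearDown(self):
        gc.collect()
        growth = len(gc.get_objects()) - self._objects_before
        if growth > 1000:  # arbitrary threshold, tune per test suite
            print('%s grew by %d objects' % (self.id(), growth))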
Dec 26, 2016
Wow, that took a while. I dived deep into a rabbit hole examining PyPy's handling of PyBuffers, in the set of functions around PyObject_GetBuffer and PyMemoryView_GET_BUFFER. I learned how to create a finalizer, which is called earlier in object destruction than a full-blown __del__ method, so it can do more (but not too much). I also found a few leaked references and some leaked memory, all thanks to the urging of the pybind11 devs, who filed PyPy issues and pushed us to fix them. All the work took place on the better-Py_Dict and issue2444 branches, and will be merged soon, I hope. There is still at least one more buffer interface problem, issue 2453; I will be taking a look at that next. Hopefully soon I will return to cython, with the ultimate goal of being able to run Pandas.
Nov 16, 2016
Running the cython test suite takes about half an hour; by default it runs both the c and cpp tests:
Ran 8868 tests in 1640.512s FAILED (failures=152, errors=39, skipped=2)
After fixing most of the places where cython forces a newly allocated PyTuple before calling SetItem, the tests can run to completion. It seems the next most serious problem is the lack of a PyBaseExceptionObject, but when I reran the tests with --xml-output, I started to get more segfaults. It turns out it is really hard to avoid the refnanny code in annotation_typing, since the GETREF is emitted as part of the SingleAssignmentNode for a tuple. But then I discovered the --no-refnanny flag, so now I am running tests with:
#!shell
pypy runtests.py annotation_typing --xml-output TEST_XML --no-cleanup --no-refnanny -vv 2>&1 |tee ../test_cython.txt
Nov 24, 2016
Backing up a step: how to duplicate this work. Download a nightly tarball from http://buildbot.pypy.org/nightly/trunk and create a virtualenv with it. Then call
path/to/pypy/bin/pip install git+https://github.com/numpy/numpy.git
and run tests with
pypy runtests.py --no-refnanny -vv --no-cpp 2>&1 | tee ../test_cython.txt
After fixing a few simple issues and discovering the --no-cpp flag, I now get
Ran 4632 tests in 931.486s FAILED (failures=80, errors=18, skipped=1)
Issued a first pull request against the list of tests skipped when run with PyPy.