cffi · Issue #295 (Closed)
Created Dec 05, 2016 by Bitbucket Importer (@bitbucket_importer, Maintainer)

cffi.new is way slower than it should be; it should use calloc

Created originally on Bitbucket by njs (Nathaniel Smith)

Requests and PyOpenSSL recently ran into an issue where cffi's default allocator was causing pathological slowdowns due to memory zeroing, specifically in cases where they were allocating a large buffer but then only using a small portion of it:

  • https://github.com/kennethreitz/requests/issues/3729
  • https://github.com/pyca/pyopenssl/issues/577
  • https://github.com/pyca/pyopenssl/pull/578

Switching to a non-zeroing allocator produced dramatic real-world speedups (see the last link in particular for benchmarks).

I thought this was very odd, because on any even slightly modern system, allocating a large zeroed buffer should be just as cheap as allocating a large non-zeroed buffer, because of how calloc is implemented. E.g. on glibc, any allocation larger than 128 KiB (by default) is satisfied by asking the kernel directly for more memory via mmap, and the kernel always returns zeroed memory (for security reasons). But since it's the kernel, it does this in a clever way: it maps in a bunch of CoW views of the system zero page, so allocating N pages of zeroed memory is an O(1) operation, and the actual zeroing only happens lazily as the memory is accessed. And calloc knows when it's satisfying an allocation via mmap, so in this case it just returns the memory directly, and is also O(1).

This means calloc is way faster than malloc+memset: memset eagerly faults in all those pages, the kernel zeroes them, and then memset zeroes them again -- and then in the case requests/pyopenssl ran into, most of those pages just get thrown away without ever being touched again. It's a huge waste of time.
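As a quick illustration (a minimal standalone sketch, not from the original report), the following compares malloc+memset against calloc for one large allocation. It assumes a glibc-style allocator where allocations above the mmap threshold are mmap-backed; the 64 MiB size is an arbitrary choice well above that threshold:

```c
/* Minimal timing sketch: malloc+memset vs. calloc for one large buffer.
 * Assumes a glibc-style allocator; sizes/iterations are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define SIZE ((size_t)64 * 1024 * 1024)  /* 64 MiB, well above the mmap threshold */

static double elapsed(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec t0, t1;

    /* malloc + memset: eagerly faults in every page, so the kernel zeroes
     * each page and then memset zeroes it a second time. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    char *a = malloc(SIZE);
    if (!a) return 1;
    memset(a, 0, SIZE);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("malloc+memset: %f s\n", elapsed(t0, t1));
    free(a);

    /* calloc: for an mmap-backed allocation this is O(1) -- the kernel
     * hands back CoW views of the zero page and zeroing happens lazily. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    char *b = calloc(1, SIZE);
    if (!b) return 1;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("calloc:        %f s\n", elapsed(t0, t1));
    free(b);

    return 0;
}
```

On a typical Linux box the calloc line comes back orders of magnitude faster, since no page is ever actually touched.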

Unfortunately, cffi's default allocator emulates calloc using malloc+memset. It should use calloc instead.

There is one thing that makes this slightly tricky: right now the default allocator uses PyObject_Malloc instead of calling malloc directly. CPython 3.5 provides PyObject_Calloc, but earlier versions do not. So on earlier versions, the only way to get the benefits of calloc is to switch to using calloc/free directly instead of the PyObject_* wrappers. This seems like a plausibly good idea (I'm dubious that the PyObject_* wrappers are providing much value), but I haven't benchmarked it or anything. On 3.5+, though, it's a no-brainer.
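To make the shape of the fix concrete, here is a hedged sketch of that version guard. The function name default_alloc is hypothetical, not cffi's actual allocator hook, and a real patch would also have to pair each branch with its matching release function (PyObject_Free vs. plain free):

```c
/* Hypothetical sketch -- default_alloc is an invented name, not cffi's
 * real allocator hook.  Shows only the version guard described above. */
#include <Python.h>
#include <stdlib.h>

static void *default_alloc(size_t size)
{
#if PY_VERSION_HEX >= 0x03050000
    /* CPython 3.5+ exposes PyObject_Calloc, which can take the calloc
     * fast path internally; release this with PyObject_Free. */
    return PyObject_Calloc(1, size);
#else
    /* Earlier CPythons have no PyObject_Calloc, so getting the calloc
     * fast path means bypassing the PyObject_* wrappers entirely;
     * memory from this branch must be released with plain free(). */
    return calloc(1, size);
#endif
}
```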
