  1. Nov 18, 2016
    • commands: print chunk type in debugrevlog · 932b18c9
      Gregory Szorc authored
      Each data entry ("chunk") in a revlog has a type based on the first
      byte of the data. This type indicates how to interpret the data.
      
      This seems like a useful thing to be able to query through a debug
      command. So let's add that to `hg debugrevlog`.
      
      This does make `hg debugrevlog` slightly slower, as it has to read
      more than just the index. However, even on the mozilla-unified
      manifest (which is ~200MB spread over ~350K revisions), this takes
      <400ms.
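      For reference, a minimal sketch of the classification this is based
      on (the marker bytes below are revlog's conventional ones; treat the
      function as illustrative, not the command's exact code):
      
        def chunktype(data):
            # The first byte of a chunk determines how to interpret it.
            if not data:
                return 'empty'
            t = data[0]
            if t == '\0':
                return 'raw'           # stored as-is
            if t == 'u':
                return 'uncompressed'  # marker byte, then literal data
            if t == 'x':
                return 'zlib'          # zlib-compressed payload
            return 'unknown (%r)' % t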
    • perf: add command for measuring revlog chunk operations · 94ca0e13
      Gregory Szorc authored
      Upcoming commits will teach revlogs to leverage the new compression
      engine API so that new compression formats can more easily be
      leveraged in revlogs. We want to be sure this refactoring doesn't
      regress performance. So this commit introduces "perfrevlogchunks" to
      explicitly test the performance of reading, decompressing, and
      recompressing revlog chunks.
      
      Here is output when run on the mozilla-unified repo:
      
      $ hg perfrevlogchunks -c
      ! read
      ! wall 0.346603 comb 0.350000 user 0.340000 sys 0.010000 (best of 28)
      ! read w/ reused fd
      ! wall 0.337707 comb 0.340000 user 0.320000 sys 0.020000 (best of 30)
      ! read batch
      ! wall 0.013206 comb 0.020000 user 0.000000 sys 0.020000 (best of 221)
      ! read batch w/ reused fd
      ! wall 0.013259 comb 0.030000 user 0.010000 sys 0.020000 (best of 222)
      ! chunk
      ! wall 1.909939 comb 1.910000 user 1.900000 sys 0.010000 (best of 6)
      ! chunk batch
      ! wall 1.750677 comb 1.760000 user 1.740000 sys 0.020000 (best of 6)
      ! compress
      ! wall 5.668004 comb 5.670000 user 5.670000 sys 0.000000 (best of 3)
      
      $ hg perfrevlogchunks -m
      ! read
      ! wall 0.365834 comb 0.370000 user 0.350000 sys 0.020000 (best of 26)
      ! read w/ reused fd
      ! wall 0.350160 comb 0.350000 user 0.320000 sys 0.030000 (best of 28)
      ! read batch
      ! wall 0.024777 comb 0.020000 user 0.000000 sys 0.020000 (best of 119)
      ! read batch w/ reused fd
      ! wall 0.024895 comb 0.030000 user 0.000000 sys 0.030000 (best of 118)
      ! chunk
      ! wall 2.514061 comb 2.520000 user 2.480000 sys 0.040000 (best of 4)
      ! chunk batch
      ! wall 2.380788 comb 2.380000 user 2.360000 sys 0.020000 (best of 5)
      ! compress
      ! wall 9.815297 comb 9.820000 user 9.820000 sys 0.000000 (best of 3)
      
      We already see some interesting data, such as how much slower
      non-batched chunk reading is and that zlib compression appears to be
      >2x slower than decompression.
      
      I didn't have the data when I wrote this commit message, but I ran this
      on Mozilla's NFS-based Mercurial server and the time for reading with a
      reused file descriptor was faster. So I think it is worth testing both
      with and without file descriptor reuse so we can make informed
      decisions about recycling file descriptors.
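      As a self-contained illustration of what the compress/decompress legs
      measure, here is a rough equivalent using zlib directly (the chunk
      contents and the best-of loop are made up for the example):
      
        import time
        import zlib
        
        def bestof(n, fn):
            # Run fn() n times and keep the fastest wall-clock time.
            best = float('inf')
            for _ in range(n):
                start = time.time()
                fn()
                best = min(best, time.time() - start)
            return best
        
        chunks = [zlib.compress(('chunk %d' % i).encode() * 50)
                  for i in range(10000)]
        raw = [zlib.decompress(c) for c in chunks]
        print('decompress: %.6f' % bestof(3, lambda: [zlib.decompress(c) for c in chunks]))
        print('compress:   %.6f' % bestof(3, lambda: [zlib.compress(r) for r in raw]))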
    • setup: add flag to build_ext to control building zstd · 0acf3fd7
      Gregory Szorc authored
      Downstream packagers will inevitably want to disable building the
      vendored python-zstandard Python package. Rather than force them
      to patch setup.py, let's give them a knob to use.
      
      distutils Command classes support defining custom options. It requires
      setting certain class attributes (yes, class attributes: instance
      attributes don't work because the class type is consulted before it
      is instantiated).
      
      We already have a custom child class of build_ext, so we set these
      class attributes, implement some scaffolding, and override
      build_extensions to filter out the Extension instance for the zstd
      extension when the `--no-zstd` argument is specified.
      
      Example usage:
      
        $ python setup.py build_ext --no-zstd
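      A minimal sketch of that distutils pattern (the option name matches
      this patch; the class name and surrounding setup.py wiring are assumed
      for the example):
      
        from distutils.command.build_ext import build_ext
        
        class hgbuildext(build_ext):
            # '--no-zstd' surfaces as the instance attribute 'no_zstd'.
            user_options = build_ext.user_options + [
                ('no-zstd', None, 'do not build the zstd extension'),
            ]
            boolean_options = build_ext.boolean_options + ['no-zstd']
        
            def initialize_options(self):
                build_ext.initialize_options(self)
                self.no_zstd = False
        
            def build_extensions(self):
                if self.no_zstd:
                    # Drop the zstd Extension before anything compiles.
                    self.extensions = [e for e in self.extensions
                                       if 'zstd' not in e.name]
                build_ext.build_extensions(self)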
  2. Nov 09, 2016
    • drawdag: update test repos by drawing the changelog DAG in ASCII · a3163433
      Jun Wu authored
      Currently we have "debugbuilddag", which is a powerful tool for building
      test cases but is not intuitive. We may end up running "hg log" in the
      test just to make the test more readable.
      
      This patch adds a "drawdag" extension with a "debugdrawdag" command for
      similar testing purposes. Unlike the cryptic "debugbuilddag" command, it
      reads an ASCII graph that is intuitive to humans, so the test case can
      be more readable.
      
      Unlike "debugbuilddag", "drawdag" does not require an empty repo. So it can
      be used to add new changesets to an existing repo.
      
      Since the "drawdag" logic is not that trivial and only makes sense for
      testing purpose, the extension is added to the "tests" directory, to make
      the core logic clean. If we find it useful (for example, to demonstrate
      cases and help user understand some cases) and want to ship it by default in
      the future, we can move it to a ship-by-default "debugdrawdag" at that time.
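      To give a flavor, a test could define a small DAG like this
      (hypothetical .t-style example; parents sit below their children):
      
        $ hg debugdrawdag <<'EOS'
        >   c d
        >   |/
        >   b
        >   |
        >   a
        > EOS
      
      Here a is the root, b is its child, and c and d are both children of
      b - readable at a glance in a way a debugbuilddag spec is not.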
  3. Jan 14, 2015
    • posix: give checklink a fast path that caches the check file and is read only · 8836f13e
      Mads Kiilerich authored
      util.checklink would create a symlink and remove it again. That would
      sometimes happen multiple times. Write operations are relatively
      expensive and cause disk wear and noise for applications monitoring file
      system activity.
      
      Instead of creating a symlink and deleting it again, just create it once
      and leave it in .hg/cache/check-link. If the file exists, just verify
      that os.path.islink reports true. We will assume that this check is as
      good as symlink creation not failing.
      
      Note: The symlink left in .hg/cache has to resolve to a file - otherwise 'make
      dist' will fail ...
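      A minimal sketch of the fast path described above (the cache file name
      is from this patch; the symlink target and error handling are assumed):
      
        import os
        
        def checklink(cachedir):
            # Fast path: a symlink left behind by a previous check.
            checkfile = os.path.join(cachedir, 'check-link')
            if os.path.islink(checkfile):
                return True
            # Slow path: create the symlink once and leave it behind,
            # pointing at a real file (see the note above).
            try:
                os.symlink('checkisexec', checkfile)  # target name assumed
                return os.path.islink(checkfile)
            except (OSError, AttributeError):
                return False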
      
      test-symlink-os-yes-fs-no.py does some monkey patching to simulate a platform
      without symlink support. The slightly different testing method requires
      additional monkeying.
  4. Nov 17, 2016
    • posix: move checklink test file to .hg/cache · 0d87b1ca
      Mads Kiilerich authored
      This avoids unnecessary churn in the working directory.
      
      It is not necessarily a fully valid assumption that .hg/cache is on the
      same filesystem as the working directory, but I think it is an
      acceptable approximation. It could also be the case that different parts
      of the working directory are on different mount points, so checking in
      the root folder could also be wrong.
  5. Jan 14, 2015
    • posix: give checkexec a fast path; keep the check files and test read only · b324b4e4
      Mads Kiilerich authored
      Before, Mercurial would create a new temporary file every time, stat it, change
      its exec mode, stat it again, and delete it. Most of this dance was done to
      handle the rare and not-so-essential case of VFAT mounts on unix. The cost of
      that was paid by the much more common and important case of using normal file
      systems.
      
      Instead, try to create and preserve .hg/cache/checkisexec and
      .hg/cache/checknoexec with and without the exec flag set. If the files
      exist and have correct exec flags set, we can conclude that the file
      system supports the exec flag. Best case, the whole exec check can thus
      be done with two stat calls. Worst case, we delete the wrong files and
      check as usual. That will happen after a temporary loss of the exec bit,
      or on file systems without support for the exec bit. In that case we
      check as we did before, with the additional overhead of one extra stat
      call.
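      A rough sketch of that fast path (file names are from this patch; the
      fallback and cleanup are elided):
      
        import os
        import stat
        
        def checkexec(cachedir):
            try:
                isexec = os.stat(os.path.join(cachedir, 'checkisexec')).st_mode
                noexec = os.stat(os.path.join(cachedir, 'checknoexec')).st_mode
                if isexec & stat.S_IXUSR and not noexec & stat.S_IXUSR:
                    return True    # best case: two stat calls, no writes
            except OSError:
                pass               # files missing or unreadable: fall through
            # Worst case: remove any stale check files and probe the file
            # system the old way (create a temp file and flip its exec bit).
            ...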
      
      It is possible that this different test algorithm will give different
      behaviour in some cases on odd file systems. Again, I think those will
      be rare and special cases, and I think it is worth the risk.
      
      test-clone.t happens to show the situation where checkisexec is left
      behind from the old style check, while checknoexec will only be created
      the next time an exec check is performed.
    • posix: simplify checkexec check · 1ce4c206
      Mads Kiilerich authored
      Use slightly simpler logic that in some cases can avoid an unnecessary
      chmod and stat.
      
      Instead of flipping the X bits, make it clearer that we rely on no X
      bits being set on initial file creation, and that at least some of them
      stick after they all have been set.
  6. Nov 17, 2016
    • posix: move checkexec test file to .hg/cache · b1ce25a4
      Mads Kiilerich authored
      This avoids unnecessary churn in the working directory.
      
      It is not necessarily a fully valid assumption that .hg/cache is on the
      same filesystem as the working directory, but I think it is an
      acceptable approximation. It could also be the case that different parts
      of the working directory are on different mount points, so checking in
      the root folder could also be wrong.
    • manifest: move manifestctx creation into manifestlog.get() · 4e1eab73
      Durham Goode authored
      Most manifestctx creation already happened in manifestlog.get(), but there was
      one spot in the manifestctx class itself that created an instance manually. This
      patch makes that one instance go through the manifestlog. This means extensions
      can just wrap manifestlog.get() and it will cover all manifestctx creations. It
      also means this code path now hits the manifestlog cache.
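      For instance, an extension wanting to observe every manifestctx
      creation could now do something like this (a sketch using
      mercurial.extensions.wrapfunction; the wrapper body is hypothetical):
      
        from mercurial import extensions, manifest
        
        def wrappedget(orig, self, *args, **kwargs):
            mctx = orig(self, *args, **kwargs)
            # ... inspect or decorate mctx here ...
            return mctx
        
        def extsetup(ui):
            extensions.wrapfunction(manifest.manifestlog, 'get', wrappedget)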
  7. Nov 11, 2016
    • util: implement zstd compression engine · 41a81067
      Gregory Szorc authored
      Now that zstd is vendored and being built (in some configurations), we
      can implement a compression engine for zstd!
      
      The zstd engine is a little different from existing engines. Because
      it may not always be present, we have to defer loading the module in case
      importing it fails. We facilitate this via a cached property that holds
      a reference to the module or None. The "available" method is
      implemented to reflect reality.
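      A sketch of that deferred-load shape (util.propertycache is Mercurial's
      cached property; the import path and class name are assumed):
      
        class zstdengine(compressionengine):
            @util.propertycache
            def _module(self):
                # Defer the import; the C extension may be missing.
                try:
                    from . import zstd
                    return zstd
                except ImportError:
                    return None
        
            def available(self):
                return bool(self._module)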
      
      The zstd engine declares its ability to handle bundles using the
      "zstd" human name and the "ZS" internal name. The latter was chosen
      because internal names are 2 characters (by convention only, I think)
      and "ZS" seems reasonable.
      
      The engine, like others, supports specifying the compression level.
      However, there are no consumers of this API that yet pass in that
      argument. I have plans to change that, so stay tuned.
      
      Since all we need to do to support bundle generation with a new
      compression engine is implement and register the compression engine,
      bundle generation with zstd "just works!" Tests demonstrating this
      have been added.
      
      How does performance of zstd for bundle generation compare? On the
      mozilla-unified repo, `hg bundle --all -t <engine>-v2` yields the
      following on my i7-6700K on Linux:
      
      engine        CPU time     bundle size   vs orig size   throughput
      none            97.0s     4,054,405,584     100.0%       41.8 MB/s
      bzip2 (l=9)    393.6s       975,343,098      24.0%       10.3 MB/s
      gzip (l=6)     184.0s     1,140,533,074      28.1%       22.0 MB/s
      zstd (l=1)     108.2s     1,119,434,718      27.6%       37.5 MB/s
      zstd (l=2)     111.3s     1,078,328,002      26.6%       36.4 MB/s
      zstd (l=3)     113.7s     1,011,823,727      25.0%       35.7 MB/s
      zstd (l=4)     116.0s     1,008,965,888      24.9%       35.0 MB/s
      zstd (l=5)     121.0s       977,203,148      24.1%       33.5 MB/s
      zstd (l=6)     131.7s       927,360,198      22.9%       30.8 MB/s
      zstd (l=7)     139.0s       912,808,505      22.5%       29.2 MB/s
      zstd (l=12)    198.1s       854,527,714      21.1%       20.5 MB/s
      zstd (l=18)    681.6s       789,750,690      19.5%        5.9 MB/s
      
      On compression, zstd for bundle generation delivers:
      
      * better compression than gzip with significantly less CPU utilization
      * better than bzip2 compression ratios while still being significantly
        faster than gzip
      * ability to aggressively tune compression level to achieve
        significantly smaller bundles
      
      That last point is important. With clone bundles, a server can
      pre-generate a bundle file, upload it to a static file server, and
      redirect clients to transparently download it during clone. The server
      could choose to produce a zstd bundle with the highest compression
      settings possible. This would take a very long time - an order of magnitude
      longer than a typical zstd bundle generation - but the result would
      be hundreds of megabytes smaller! For the clone volume we do at
      Mozilla, this could translate to petabytes of bandwidth savings
      per year and faster clones (due to smaller transfer size).
      
      I don't have detailed numbers to report on decompression. However,
      zstd decompression is fast: >1 GB/s output throughput on this machine,
      even through the Python bindings. And it can do that regardless of the
      compression level of the input. By the time you have enough data to
      worry about the overhead of decompression, you have plenty of other
      things to worry about performance-wise.
      
      zstd is a win all around. I can't wait to implement support for it
      on the wire protocol and in revlogs.
    • hghave: add check for zstd support · de48d3a0
      Gregory Szorc authored
      Not all configurations will support zstd. Add a check so we can
      conditionalize tests.
    • exchange: obtain compression engines from the registrar · c3944ab1
      Gregory Szorc authored
      util.compengines has knowledge of all registered compression engines
      and the metadata that associates them with various bundle types.
      
      This patch removes the now redundant declaration of this metadata from
      exchange.py and obtains it from the new source.
      
      The effect of this patch is that once a new compression engine is
      registered with util.compengines, `hg bundle -t <engine>` will just
      work.
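      A sketch of what registering a new engine looks like (method names per
      the compression engine API introduced in this series; the snappy engine
      is purely illustrative):
      
        class snappyengine(compressionengine):
            def name(self):
                return 'snappy'
        
            def bundletype(self):
                # (human-readable name, internal 2-letter name)
                return 'snappy', 'SN'
        
        compengines.register(snappyengine())
      
      After that, `hg bundle -t snappy-v2` would work without touching
      exchange.py.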
    • bundle2: equate 'UN' with no compression · 71b368e3
      Gregory Szorc authored
      An upcoming patch will change the "alg" argument passed to this
      function from None to "UN" when no compression is wanted.
      
      The existing implementation of bundle2 does not set a "Compression"
      parameter if no compression is used. In theory, setting
      "Compression=UN" should work. But I haven't audited the code to see if
      all client versions supporting bundle2 will accept this.
      
      Rather than take the risk, avoid the BC breakage and treat "UN"
      the same as None.
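      A sketch of the equivalence (not the literal bundle2 code):
      
        if alg == 'UN':
            alg = None        # treat 'UN' exactly like "no compression"
        if alg is not None:
            params['Compression'] = alg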
    • util: check for compression engine availability before returning · 90933e4e
      Gregory Szorc authored
      If a requested compression engine is registered but not available,
      requesting it will now abort.
      
      To be honest, I'm not sure if this is the appropriate mechanism
      for handling optional compression engines. I won't know until
      all uses of compression (bundles, wire protocol, revlogs, etc)
      are using the new API and zstd (our planned optional engine)
      is implemented. So this API could change.
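      The shape of the check, in the registrar's lookup (a sketch; the abort
      message wording is assumed):
      
        def __getitem__(self, name):
            engine = self._engines[name]
            if not engine.available():
                raise error.Abort(_('compression engine %s could not be '
                                    'loaded') % name)
            return engine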
    • util: expose an "available" API on compression engines · 64d72754
      Gregory Szorc authored
      When the zstd compression engine is introduced, it won't work in all
      installations, namely pure Python installs. So, we need a mechanism to
      declare whether a compression engine is available. We don't want to
      conditionally register the compression engine because it is sometimes
      useful to know when a compression engine name or encountered data is
      valid but just not available versus unknown.
    • setup: compile zstd C extension · 788ea4ac
      Gregory Szorc authored
      Now that zstd and python-zstandard are vendored, we can start compiling
      them as part of the install.
      
      python-zstandard provides a self-contained Python function that returns
      a distutils.extension.Extension, so it is really easy to add zstd
      to our setup.py without having to worry about defining source files,
      include paths, etc. The function even allows specifying the module
      name the extension should be compiled as. This conveniently allows us
      to compile the module into the "mercurial" package so "our" version
      won't collide with a version installed under the canonical "zstd"
      module name.
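      The hookup in setup.py is then roughly (the function lives in
      python-zstandard's setup_zstd helper; the exact signature is assumed):
      
        import setup_zstd
        
        extmodules.append(
            setup_zstd.get_c_extension(name='mercurial.zstd'))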
    • zstd: vendor python-zstandard 0.5.0 · b86a448a
      Gregory Szorc authored
      As the commit message for the previous changeset says, we wish
      for zstd to be a 1st class citizen in Mercurial. To make that
      happen, we need to enable Python to talk to the zstd C API. And
      that requires bindings.
      
      This commit vendors a copy of existing Python bindings. Why do we
      need to vendor? As the commit message of the previous commit says,
      relying on systems in the wild to have the bindings or zstd present
      is a losing proposition. By distributing the zstd and bindings with
      Mercurial, we significantly increase our chances that zstd will
      work. Since zstd will deliver a better end-user experience by
      achieving better performance, this benefits our users. Another
      reason is that the Python bindings still aren't stable and the
      API is somewhat fluid. While Mercurial could be coded to target
      multiple versions of the Python bindings, it is safer to bundle
      an explicit, known working version.
      
      The added Python bindings are mostly a fully-featured interface
      to the zstd C API. They allow one-shot operations, streaming,
      reading from and writing to objects implementing the file object
      protocol, dictionary compression, control over low-level compression
      parameters, and more. The Python bindings work on Python 2.6,
      2.7, and 3.3+ and have been tested on Linux and Windows. There are
      CFFI bindings, but they are lacking compared to the C extension.
      Upstream work will be needed before we can support zstd with PyPy.
      But it will be possible.
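      For a taste of the one-shot API (as documented for 0.5.0; treat the
      details as illustrative):
      
        import zstd
        
        data = b'data to store' * 1024
        cctx = zstd.ZstdCompressor(level=3)
        blob = cctx.compress(data)
        
        dctx = zstd.ZstdDecompressor()
        assert dctx.decompress(blob, max_output_size=len(data)) == data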
      
      The files added in this commit come from Git commit
      e637c1b214d5f869cf8116c550dcae23ec13b677 from
      https://github.com/indygreg/python-zstandard and are added without
      modifications. Some files from the upstream repository have been
      omitted, namely files related to continuous integration.
      
      In the spirit of full disclosure, I'm the maintainer of the
      "python-zstandard" project and have authored 100% of the code
      added in this commit. Unfortunately, the Python bindings have
      not been formally code reviewed by anyone. While I've tested
      much of the code thoroughly (I even have tests that fuzz APIs),
      there's a good chance there are bugs, memory leaks, not well
      thought out APIs, etc. If someone wants to review the code and
      send feedback to the GitHub project, it would be greatly
      appreciated.
      
      Despite my involvement with both projects, my opinions of code
      style differ from Mercurial's. The code in this commit introduces
      numerous code style violations in Mercurial's linters. So, the code
      is excluded from most lints. However, some violations I agree with.
      These have been added to the known violations ignore list for now.
    • zstd: vendor zstd 1.1.1 · 2e484bde
      Gregory Szorc authored
      zstd is a new compression format and it is awesome, yielding
      higher compression ratios and significantly faster compression
      and decompression operations compared to zlib (our current
      compression engine of choice) across the board.
      
      We want zstd to be a 1st class citizen in Mercurial and to eventually
      be the preferred compression format for various operations.
      
      This patch starts the formal process of supporting zstd by vendoring
      a copy of zstd. Why do we need to vendor zstd? Good question.
      
      First, zstd is relatively new and not widely available yet. If we
      didn't vendor zstd or distribute it with Mercurial, most users likely
      wouldn't have zstd installed or even available to install. What good
      is a feature if you can't use it? Vendoring and distributing the zstd
      sources gives us the highest likelihood that zstd will be available to
      Mercurial installs.
      
      Second, the Python bindings to zstd (which will be vendored in a
      separate changeset) make use of zstd APIs that are only available
      via static linking. One reason they are only available via static
      linking is that they are unstable and could change at any time.
      While it might be possible for the Python bindings to attempt to
      talk to different versions of the zstd C library, the safest thing to
      do is link against a specific, known-working version of zstd. This
      is why the Python zstd bindings themselves vendor zstd and why we
      must as well. This also explains why the added files are in a
      "python-zstandard" directory.
      
      The added files are from the 1.1.1 release of zstd (Git commit
      4c0b44f8ced84c4c8edfa07b564d31e4fa3e8885 from
      https://github.com/facebook/zstd) and are added without modifications.
      Not all files from the zstd "distribution" have been added. Notably
      missing are files to support interacting with "legacy," pre-1.0
      versions of zstd. The decision of which files to include is made by
      the upstream python-zstandard project (which I'm the author of). The
      files in this commit are a snapshot of the files from the 0.5.0
      release of that project, Git commit
      e637c1b214d5f869cf8116c550dcae23ec13b677 from
      https://github.com/indygreg/python-zstandard.
  8. Nov 15, 2016
    • bdiff: give slight preference to removing trailing lines · 96f2f50d
      Mads Kiilerich authored
      [This change could be folded into the previous changeset to minimize the repo
      churn ...]
      
      Similar to the previous change, introduce an exception to the general
      preference for matches in the middle of bdiff ranges: If the best match on the
      B side starts at the beginning of the bdiff range, don't aim for the
      middle-most A side match but for the earliest.
      
      New (later) matches on the A side will only be considered better if the
      corresponding match on the B side is *not* at the beginning of the range.
      Thus, if the best (middle-most) match on the B side turns out to be at the
      beginning of the range, the earliest match on the A side will be used.
      
      The bundle size for 4.0 (hg bundle --base null -r 4.0 x.hg) happens to go from
      22807275 to 22808120 bytes - a 0.004% increase.
    • bdiff: give slight preference to appending lines · 36334038
      Mads Kiilerich authored
      [This change could be folded into the previous changeset to minimize the repo
      churn ...]
      
      The general preference for matches in the middle of bdiff ranges helps
      getting balanced recursion and efficient computation. But, as previous
      changes have shown, it might also give diffs that seem "obviously wrong".
      
      To mitigate that: If the best match on the A side starts at the beginning of
      the bdiff range, don't aim for the middle-most B side match but for the
      earliest.
      
      This will make the matches balanced (by both sides being "early") even though
      the bisection will be less balanced. Still, this case only applies if the *best*
      and middle-most match was fully unbalanced on the A side. Each recursion will
      thus even in this worst case reduce the problem significantly and we are not
      re-introducing the problem that was fixed in f1ca249696ed.
      
      The bundle size for 4.0 (hg bundle --base null -r 4.0 x.hg) happens to go from
      22806817 to 22807275 bytes - a 0.002% increase.
      
      This makes the recent test-bdiff.py changes give prettier output ... but
      they no longer show that the recursion is around middle matches (because
      in these cases it isn't).
  9. Nov 08, 2016
    • bdiff: give slight preference to longest matches in the middle of the B side · 8c0c75aa
      Mads Kiilerich authored
      We already have a slight preference for matches close to the middle on the A
      side. Now, do the same on the B side.
      
      j is iterating the b range backwards and we thus accept a new j if the previous
      match was in the upper half.
      
      This makes the test-bhalf diff "correct". It obviously also gives more
      preference to balanced recursion than to appending to sequences. That is kind
      of correct, but will also unfortunately make some bundles bigger. No doubt, we
      can also create examples where it will make them smaller ...
      
      The bundle size for 4.0 (hg bundle --base null -r 4.0 x.hg) happens to go from
      22803824 to 22806817 bytes - a 0.01% increase.
    • bdiff: rearrange the "better longest match" code · 5c4e2636
      Mads Kiilerich authored
      This is primarily to make the code more manageable and to prepare for
      later changes.
      
      More specific assignments might also be slightly faster, even though
      they also might generate a bit more code.
    • bdiff: adjust criteria for getting optimal longest match in the A side middle · 38ed5488
      Mads Kiilerich authored
      We prefer matches closer to the middle to balance recursion, as introduced in
      f1ca249696ed.
      
      For ranges with uneven length, matches starting exactly in the middle should
      have preference. That will be optimal for matches of length 1. We will thus
      accept equality in the half check.
      
      For ranges with even length, half was ceil'ed when calculated but we got the
      preference for low matches from the 'less than half' check. To get the same
      result as before when we also accept equality, floor it. Without that,
      test-annotate.t would show some different (still correct but less optimal)
      results.
      
      This will change the heuristics. Tests show slightly different output -
      and sometimes slightly smaller bundles.
      
      The bundle size for 4.0 (hg bundle --base null -r 4.0 x.hg) happens to go from
      22804885 to 22803824 bytes - a 0.005% reduction.
    • tests: explore some bdiff cases · 3743e5db
      Mads Kiilerich authored
  10. Nov 15, 2016
    • util: improve iterfile so it chooses code path wisely · 1156ec81
      Jun Wu authored
      We have performance concerns about "iterfile" as it is 4X slower on
      normal files. Meanwhile, modern systems have the nice property that
      reading a "fast" (on-disk) file cannot be interrupted, and we should
      take advantage of that.
      
      This patch records the related knowledge in comments and makes
      "iterfile" choose code paths wisely:
      
        1. If it's CPython 3, or PyPy, use the fast path.
        2. If fp is a normal file, use the fast path.
        3. If fp is not a normal file and the CPython version is >= 2.7.4,
           use the same workaround (4x slower) as before.
        4. If fp is not a normal file and the CPython version is < 2.7.4, use
           another workaround (2x slower but may block longer than necessary)
           which basically re-invents the buffer + readline logic in Python.
      
      This gives us good confidence in both correctness and performance when
      dealing with EINTR in iterfile(fp), for all known supported Python
      versions.
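      A condensed sketch of the dispatch (helper names assumed; the real
      change also encodes the CPython version checks above):
      
        import os
        import stat
        import sys
        
        def iterfile(fp):
            fastpath = True
            if sys.version_info[0] == 2:
                try:
                    # Regular on-disk files cannot return EINTR mid-read.
                    fastpath = stat.S_ISREG(os.fstat(fp.fileno()).st_mode)
                except (AttributeError, OSError):
                    fastpath = False
            if fastpath:
                return fp                # the file object iterates itself
            return _safeiterfile(fp)     # EINTR-tolerant, slower wrapper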
  11. Nov 12, 2016
    • worker: stop using a separate thread waiting for children · c27614f2
      Jun Wu authored
      Now that we have a SIGCHLD handler that can get executed when waiting
      for I/O, it's no longer necessary to have a separate waitpid thread. So
      just remove it.
    • worker: add a SIGCHLD handler to collect workers immediately · e8fb03cf
      Jun Wu authored
      As planned by previous patches, add a SIGCHLD handler to get
      notifications about worker exits, and deal with worker failures
      immediately.
      
      Note that the SIGCHLD handler gets unregistered before killworkers(), so
      SIGCHLD won't interrupt "killworkers" and make it harder to send kill
      signals to processes being waited on.
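      A sketch of the shape of the change (the signal module usage is
      standard; the surrounding worker code is assumed):
      
        import signal
        
        def sigchldhandler(signum, frame):
            # Reap exited workers without blocking; react to failures now.
            waitforworkers(blocking=False)
        
        oldhandler = signal.signal(signal.SIGCHLD, sigchldhandler)
        try:
            ...  # fork and feed the workers
        finally:
            # Unregister before killworkers() so SIGCHLD cannot interrupt it.
            signal.signal(signal.SIGCHLD, oldhandler)
            killworkers()  # failure path in the real code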
  12. Nov 15, 2016
    • worker: make waitforworkers reentrant · 5069a8a4
      Jun Wu authored
      We are going to use it in the SIGCHLD handler. The handler will be executed
      in the main thread with the non-blocking version of waitpid, while the
      waitforworkers thread runs the blocking version. It's possible that one of
      them collects a worker and makes the other error out (no child to wait).
      This patch handles these errors: ECHILD is ignored. EINTR needs a retry.
      
      The "pids" set is designed to be only modifiable by "waitforworkers". And we
      only remove items after a successful waitpid. Since a child process can only
      be "waitpid"-ed once. It's guaranteed that "pids.remove(p)" won't be called
      with duplicated "p"s. And once a "p" is removed from "pids", that "p" does
      not need to be killed or waited any more.
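      A sketch of the error handling described (the real code lives in
      worker.py; details abridged):
      
        import errno
        import os
        
        def waitforworkers(blocking=True):
            for pid in pids.copy():      # "pids" is the shared set of workers
                p = 0
                while True:
                    try:
                        p, st = os.waitpid(pid, 0 if blocking else os.WNOHANG)
                        break
                    except OSError as e:
                        if e.errno == errno.EINTR:
                            continue     # EINTR: retry the waitpid
                        if e.errno == errno.ECHILD:
                            break        # the other caller already reaped it
                        raise
                if p:
                    pids.discard(p)      # remove only after a successful waitpid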
    • worker: change "pids" to a set · 9c25a1a8
      Jun Wu authored
      There is no need to keep any order of the "pids" list. A set is more
      efficient for the "remove" operation, and the following patch will use
      that.
  13. Jul 28, 2016
    • worker: allow waitforworkers to be non-blocking · 7bc25549
      Jun Wu authored
      This patch adds a boolean flag to waitforworkers and makes it
      non-blocking if set to True.
      
      This is to make it possible to reap our workers while keeping other,
      unrelated children untouched, after receiving SIGCHLD.