  1. Sep 10, 2018
    • snapshot: fix line order when skipping over empty deltas · bdb41eaa
      Boris Feld authored
      The code movement in 37957e07138c introduced an error.
      
      Since 8f83a953dddf, we discarded some revisions because they are identical to
      their delta base (and use that delta base instead). That logic is good,
      however, in 37957e07138c we mixed up the order of two lines, adding the "new"
      revision to the set of already tested ones instead of the discarded one. So in
      practice, we never investigated any revisions in a chain starting with an
      empty delta, creating significantly worse delta chains (e.g. Mercurial's
      manifest goes from about 60MB up to about 80MB).
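
      A rough sketch of the mix-up (illustrative names only, not the actual
      revlog code): when a candidate revision turns out to be identical to its
      delta base, the discarded revision must be recorded as already tested
      before its delta base is queued for investigation, not the replacement.

      # Illustrative sketch only -- not the real mercurial.revlog code.
      def refine(candidates, tested, isemptydelta, deltabase):
          for rev in candidates:
              if rev in tested:
                  continue
              if isemptydelta(rev):
                  # Correct order: remember the discarded revision, then
                  # investigate its delta base instead.
                  tested.add(rev)
                  yield deltabase(rev)
                  # The buggy order added deltabase(rev) to 'tested' instead,
                  # so chains starting with an empty delta were never explored.
              else:
                  tested.add(rev)
                  yield rev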
  2. Sep 01, 2018
    • formatter: fill missing resources by formatter, not by resource mapper · ee1e74ee
      Yuya Nishihara authored
      While working on demand loading of ctx/fctx objects, I found it's weird
      to support lookup in both directions. For instance, fctx can be loaded
      from (ctx, path) pair, but ctx may also be derived from fctx.changectx()
      in the original mapping. If the original mapping had fctx but no ctx,
      and if the new mapping provides {path}, we can't be sure if fctx should be
      updated by fctx'.changectx()[path] or not.
      
      This patch simply drops the support for the resolution in fctx -> ctx -> repo
      direction.
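
      As a rough illustration (hypothetical helper, not the actual formatter
      API): missing resources are now derived only in the repo -> ctx -> fctx
      direction, never backwards from an fctx already present in the mapping.

      # Hypothetical sketch of one-directional resource resolution.
      def _fillmissing(mapping):
          if ('fctx' not in mapping and 'ctx' in mapping
                  and 'path' in mapping):
              mapping['fctx'] = mapping['ctx'][mapping['path']]
          # Dropped: deriving ctx from mapping['fctx'].changectx(), which made
          # it ambiguous whether an existing fctx should be re-resolved when a
          # new {path} is supplied.
          return mapping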
  3. Sep 07, 2018
    • util: update lrucachedict order during get() · 8f2c0d1b
      Gregory Szorc authored
      get() should have the same semantics as __getitem__ for item
      retrieval.
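
      A minimal sketch of what that means (simplified, not the actual
      mercurial.util code): get() goes through the same lookup path as
      __getitem__, so a cache hit is promoted to most recently used.

      # Sketch: get() reuses __getitem__ so hits update the LRU order.
      def get(self, k, default=None):
          try:
              return self[k]      # __getitem__ moves the node to the head
          except KeyError:
              return default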
      
      Differential Revision: https://phab.mercurial-scm.org/D4506
    • util: lower water mark when removing nodes after cost limit reached · f296c0b3
      Gregory Szorc authored
      See the inline comment for the reasoning here. This is a pretty
      common strategy for garbage collectors and other cache-like primitives.
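
      Roughly, the strategy looks like this (a sketch, not the actual
      mercurial.util code; the 2:1 water mark ratio is an assumption used only
      for illustration):

      # Sketch: evict down to a lower water mark, not just below the limit.
      def _enforcecostlimit(self):
          if self.totalcost <= self.maxcost:
              return
          # Stopping right at the limit would make the very next insert pay
          # for another eviction walk; aim lower so bursts of inserts stay cheap.
          watermark = self.maxcost // 2
          while self.totalcost > watermark:
              self.popoldest()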
      
      The performance impact is substantial:
      
      $ hg perflrucachedict --size 4 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 100
      ! inserts w/ cost limit
      ! wall 1.659181 comb 1.650000 user 1.650000 sys 0.000000 (best of 7)
      ! wall 1.722122 comb 1.720000 user 1.720000 sys 0.000000 (best of 6)
      ! mixed w/ cost limit
      ! wall 1.139955 comb 1.140000 user 1.140000 sys 0.000000 (best of 9)
      ! wall 1.182513 comb 1.180000 user 1.180000 sys 0.000000 (best of 9)
      
      $ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 10000
      ! inserts
      ! wall 0.679546 comb 0.680000 user 0.680000 sys 0.000000 (best of 15)
      ! sets
      ! wall 0.825147 comb 0.830000 user 0.830000 sys 0.000000 (best of 13)
      ! inserts w/ cost limit
      ! wall 25.105273 comb 25.080000 user 25.080000 sys 0.000000 (best of 3)
      ! wall  1.724397 comb  1.720000 user  1.720000 sys 0.000000 (best of 6)
      ! mixed
      ! wall 0.807096 comb 0.810000 user 0.810000 sys 0.000000 (best of 13)
      ! mixed w/ cost limit
      ! wall 12.104470 comb 12.070000 user 12.070000 sys 0.000000 (best of 3)
      ! wall  1.190563 comb  1.190000 user  1.190000 sys 0.000000 (best of 9)
      
      $ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 10000 --mixedgetfreq 90
      ! inserts
      ! wall 0.711177 comb 0.710000 user 0.710000 sys 0.000000 (best of 14)
      ! sets
      ! wall 0.846992 comb 0.850000 user 0.850000 sys 0.000000 (best of 12)
      ! inserts w/ cost limit
      ! wall 25.963028 comb 25.960000 user 25.960000 sys 0.000000 (best of 3)
      ! wall  2.184311 comb  2.180000 user  2.180000 sys 0.000000 (best of 5)
      ! mixed
      ! wall 0.728256 comb 0.730000 user 0.730000 sys 0.000000 (best of 14)
      ! mixed w/ cost limit
      ! wall 3.174256 comb 3.170000 user 3.170000 sys 0.000000 (best of 4)
      ! wall 0.773186 comb 0.770000 user 0.770000 sys 0.000000 (best of 13)
      
      $ hg perflrucachedict --size 100000 --gets 1000000 --sets 1000000 --mixed 1000000 --mixedgetfreq 90 --costlimit 5000000
      ! gets
      ! wall 1.191368 comb 1.190000 user 1.190000 sys 0.000000 (best of 9)
      ! wall 1.195304 comb 1.190000 user 1.190000 sys 0.000000 (best of 9)
      ! inserts
      ! wall 0.950995 comb 0.950000 user 0.950000 sys 0.000000 (best of 11)
      ! inserts w/ cost limit
      ! wall 1.589732 comb 1.590000 user 1.590000 sys 0.000000 (best of 7)
      ! sets
      ! wall 1.094941 comb 1.100000 user 1.090000 sys 0.010000 (best of 9)
      ! mixed
      ! wall 0.936420 comb 0.940000 user 0.930000 sys 0.010000 (best of 10)
      ! mixed w/ cost limit
      ! wall 0.882780 comb 0.870000 user 0.870000 sys 0.000000 (best of 11)
      
      This puts us ~2x slower than caches without cost accounting. And for
      read-heavy workloads (the prime use cases for caches), performance is
      nearly identical.
      
      In the worst case (pure write workloads with cost accounting enabled),
      we're looking at ~1.5us per insert on large caches. That seems "fast
      enough."
      
      Differential Revision: https://phab.mercurial-scm.org/D4505
  4. Sep 06, 2018
    • util: optimize cost auditing on insert · cc23c09b
      Gregory Szorc authored
      Calling popoldest() on insert with cost auditing enabled introduces
      significant overhead.
      
      The primary reason for this overhead is that popoldest() needs to
      walk the linked list to find the first non-empty node. When we
      call popoldest() within a loop, this can become quadratic. The
      performance impact is more pronounced on caches with large capacities.
      
      This commit effectively inlines the popoldest() call into
      _enforcecostlimit(). By doing so, we only do the backwards walk
      to find the first non-empty node once. However, we may still
      perform this work on insert when the cache is near cost capacity.
      So this is only a partial performance win.
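
      Conceptually (a simplified sketch assuming the doubly linked list layout
      described above; markempty() and the node attributes are assumptions
      based on that description, not the actual code):

      # Sketch: one backwards walk past empty nodes per enforcement call.
      def _enforcecostlimit(self):
          if self.totalcost <= self.maxcost:
              return
          # Walk backwards from the head once to reach the oldest real entry...
          n = self._head.prev
          while n.key is _notset:
              n = n.prev
          # ...then evict consecutive oldest entries without re-walking.
          while n is not self._head and self.totalcost > self.maxcost:
              prev = n.prev
              self.totalcost -= n.cost
              del self._cache[n.key]
              n.markempty()
              n = prev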
      
      $ hg perflrucachedict --size 4 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 100
      ! gets w/ cost limit
      ! wall 0.598737 comb 0.590000 user 0.590000 sys 0.000000 (best of 17)
      ! inserts w/ cost limit
      ! wall 1.694282 comb 1.700000 user 1.700000 sys 0.000000 (best of 6)
      ! wall 1.659181 comb 1.650000 user 1.650000 sys 0.000000 (best of 7)
      ! mixed w/ cost limit
      ! wall 1.157655 comb 1.150000 user 1.150000 sys 0.000000 (best of 9)
      ! wall 1.139955 comb 1.140000 user 1.140000 sys 0.000000 (best of 9)
      
      $ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 10000
      ! gets w/ cost limit
      ! wall 0.598526 comb 0.600000 user 0.600000 sys 0.000000 (best of 17)
      ! wall 0.601993 comb 0.600000 user 0.600000 sys 0.000000 (best of 17)
      ! inserts w/ cost limit
      ! wall 37.838315 comb 37.840000 user 37.840000 sys 0.000000 (best of 3)
      ! wall 25.105273 comb 25.080000 user 25.080000 sys 0.000000 (best of 3)
      ! mixed w/ cost limit
      ! wall 18.060198 comb 18.060000 user 18.060000 sys 0.000000 (best of 3)
      ! wall 12.104470 comb 12.070000 user 12.070000 sys 0.000000 (best of 3)
      
      $ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 10000 --mixedgetfreq 90
      ! gets w/ cost limit
      ! wall 0.600024 comb 0.600000 user 0.600000 sys 0.000000 (best of 17)
      ! wall 0.614439 comb 0.620000 user 0.620000 sys 0.000000 (best of 17)
      ! inserts w/ cost limit
      ! wall 37.154547 comb 37.120000 user 37.120000 sys 0.000000 (best of 3)
      ! wall 25.963028 comb 25.960000 user 25.960000 sys 0.000000 (best of 3)
      ! mixed w/ cost limit
      ! wall 4.381602 comb 4.380000 user 4.370000 sys 0.010000 (best of 3)
      ! wall 3.174256 comb 3.170000 user 3.170000 sys 0.000000 (best of 4)
      
      Differential Revision: https://phab.mercurial-scm.org/D4504
    • util: teach lrucachedict to enforce a max total cost · 842cd0bd
      Gregory Szorc authored
      Now that lrucachedict entries can have a numeric cost associated
      with them and we can easily pop the oldest item in the cache, it
      now becomes relatively trivial to implement support for enforcing
      a high water mark on the total cost of items in the cache.
      
      This commit teaches lrucachedict instances to have a max cost
      associated with them. When items are inserted, we pop old items
      until enough "cost" frees up to make room for the new item.
      
      This feature is close to zero cost when not used (modulo the insertion
      regression introduced by the previous commit):
      
      $ ./hg perflrucachedict --size 4 --gets 1000000 --sets 1000000 --mixed 1000000
      ! gets
      ! wall 0.607444 comb 0.610000 user 0.610000 sys 0.000000 (best of 17)
      ! wall 0.601653 comb 0.600000 user 0.600000 sys 0.000000 (best of 17)
      ! inserts
      ! wall 0.678261 comb 0.680000 user 0.680000 sys 0.000000 (best of 14)
      ! wall 0.685042 comb 0.680000 user 0.680000 sys 0.000000 (best of 15)
      ! sets
      ! wall 0.808770 comb 0.800000 user 0.800000 sys 0.000000 (best of 13)
      ! wall 0.834241 comb 0.830000 user 0.830000 sys 0.000000 (best of 12)
      ! mixed
      ! wall 0.782441 comb 0.780000 user 0.780000 sys 0.000000 (best of 13)
      ! wall 0.803804 comb 0.800000 user 0.800000 sys 0.000000 (best of 13)
      
      $ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000
      ! init
      ! wall 0.006952 comb 0.010000 user 0.010000 sys 0.000000 (best of 418)
      ! gets
      ! wall 0.613350 comb 0.610000 user 0.610000 sys 0.000000 (best of 17)
      ! wall 0.617415 comb 0.620000 user 0.620000 sys 0.000000 (best of 17)
      ! inserts
      ! wall 0.701270 comb 0.700000 user 0.700000 sys 0.000000 (best of 15)
      ! wall 0.700516 comb 0.700000 user 0.700000 sys 0.000000 (best of 15)
      ! sets
      ! wall 0.825720 comb 0.830000 user 0.830000 sys 0.000000 (best of 13)
      ! wall 0.837946 comb 0.840000 user 0.830000 sys 0.010000 (best of 12)
      ! mixed
      ! wall 0.821644 comb 0.820000 user 0.820000 sys 0.000000 (best of 13)
      ! wall 0.850559 comb 0.850000 user 0.850000 sys 0.000000 (best of 12)
      
      I reckon the slight slowdown on insert is due to added if checks.
      
      For caches with total cost limiting enabled:
      
      $ hg perflrucachedict --size 4 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 100
      ! gets w/ cost limit
      ! wall 0.598737 comb 0.590000 user 0.590000 sys 0.000000 (best of 17)
      ! inserts w/ cost limit
      ! wall 1.694282 comb 1.700000 user 1.700000 sys 0.000000 (best of 6)
      ! mixed w/ cost limit
      ! wall 1.157655 comb 1.150000 user 1.150000 sys 0.000000 (best of 9)
      
      $ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 10000
      ! gets w/ cost limit
      ! wall 0.598526 comb 0.600000 user 0.600000 sys 0.000000 (best of 17)
      ! inserts w/ cost limit
      ! wall 37.838315 comb 37.840000 user 37.840000 sys 0.000000 (best of 3)
      ! mixed w/ cost limit
      ! wall 18.060198 comb 18.060000 user 18.060000 sys 0.000000 (best of 3)
      
      $ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 10000 --mixedgetfreq 90
      ! gets w/ cost limit
      ! wall 0.600024 comb 0.600000 user 0.600000 sys 0.000000 (best of 17)
      ! inserts w/ cost limit
      ! wall 37.154547 comb 37.120000 user 37.120000 sys 0.000000 (best of 3)
      ! mixed w/ cost limit
      ! wall 4.381602 comb 4.380000 user 4.370000 sys 0.010000 (best of 3)
      
      The functions we're benchmarking are slightly different, which could
      move numbers by a few milliseconds. But the slowdown on insert is too
      great to be explained by that. The slowness is due to insert heavy
      operations needing to call popoldest() repeatedly when the cache is
      at capacity. The next commit will address this.
      
      Differential Revision: https://phab.mercurial-scm.org/D4503
  5. Sep 07, 2018
    • util: allow lrucachedict to track cost of entries · ee087f0d
      Gregory Szorc authored
      Currently, lrucachedict allows tracking of arbitrary items with the
      only limit being the total number of items in the cache.
      
      Caches can be a lot more useful when they are bound by the size
      of the items in them rather than the number of elements in the
      cache.
      
      In preparation for teaching lrucachedict to enforce a max size of
      cached items, we teach lrucachedict to optionally associate a numeric
      cost value with each node.
      
      We purposefully let the caller define their own cost for nodes.
      
      This does introduce some overhead. Most of it comes from __setitem__,
      since that function now calls into insert(), thus introducing Python
      function call overhead.
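
      Sketched out (not the real code; _findorcreatenode and _movetohead are
      hypothetical helpers standing in for the existing node bookkeeping), the
      shape of the change is:

      # Sketch: __setitem__ becomes a thin wrapper around insert().
      def insert(self, k, v, cost=0):
          node = self._findorcreatenode(k)     # new or existing node for k
          self.totalcost += cost - node.cost   # eager total-cost bookkeeping
          node.value = v
          node.cost = cost
          self._movetohead(node)

      def __setitem__(self, k, v):
          # The extra Python-level call per set is the overhead measured below.
          self.insert(k, v)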
      
      $ hg perflrucachedict --size 4 --gets 1000000 --sets 1000000 --mixed 1000000
      ! gets
      ! wall 0.599552 comb 0.600000 user 0.600000 sys 0.000000 (best of 17)
      ! wall 0.614643 comb 0.610000 user 0.610000 sys 0.000000 (best of 17)
      ! inserts
      ! <not available>
      ! wall 0.655817 comb 0.650000 user 0.650000 sys 0.000000 (best of 16)
      ! sets
      ! wall 0.540448 comb 0.540000 user 0.540000 sys 0.000000 (best of 18)
      ! wall 0.805644 comb 0.810000 user 0.810000 sys 0.000000 (best of 13)
      ! mixed
      ! wall 0.651556 comb 0.660000 user 0.660000 sys 0.000000 (best of 15)
      ! wall 0.781357 comb 0.780000 user 0.780000 sys 0.000000 (best of 13)
      
      $ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000
      ! gets
      ! wall 0.621014 comb 0.620000 user 0.620000 sys 0.000000 (best of 16)
      ! wall 0.615146 comb 0.620000 user 0.620000 sys 0.000000 (best of 17)
      ! inserts
      ! <not available>
      ! wall 0.698115 comb 0.700000 user 0.700000 sys 0.000000 (best of 15)
      ! sets
      ! wall 0.560247 comb 0.560000 user 0.560000 sys 0.000000 (best of 18)
      ! wall 0.832495 comb 0.830000 user 0.830000 sys 0.000000 (best of 12)
      ! mixed
      ! wall 0.686172 comb 0.680000 user 0.680000 sys 0.000000 (best of 15)
      ! wall 0.841359 comb 0.840000 user 0.840000 sys 0.000000 (best of 12)
      
      We're still under 1us per insert, which seems like reasonable
      performance for a cache.
      
      If we comment out updating of self.totalcost during insert(),
      performance of insert() is identical to __setitem__ before. However,
      I don't want to make total cost evaluation lazy because it has
      significant performance implications for when we need to evaluate the
      total cost at mutation time (it requires a cache traversal, which could
      be expensive for large caches).
      
      Differential Revision: https://phab.mercurial-scm.org/D4502
  6. Aug 29, 2018
    • wireprotov2peer: stream decoded responses · d06834e0
      Gregory Szorc authored
      Previously, wire protocol version 2 would buffer all response data.
      Only once all data was received did we CBOR decode it and resolve
      the future associated with the command. This was obviously not
      desirable. With the large response payloads introduced in future
      commits, this caused significant memory bloat and slowed down client
      operations due to waiting on the server.
      
      This commit refactors the response handling code so that response
      data can be streamed.
      
      Command response objects now contain a buffered CBOR decoder. As
      new data arrives, it is fed into the decoder. Decoded objects are
      made available to the generator as they are decoded.
      
      Because there is a separate thread processing incoming frames and
      feeding data into the response object, there is the potential for
      race conditions when mutating response objects. So a lock has been
      added to guard access to critical state variables.
      
      Because the generator emitting decoded objects needs to wait on
      those objects to become available, we've added an Event for the
      generator to wait on so it doesn't busy loop. This does mean
      there is the potential for deadlocks. And I'm pretty sure they can
      occur in some scenarios. We already have a handful of TODOs around
      this. But I've added some more. Fixing this will likely require
      moving the background thread receiving frames into clienthandler.
      We likely would have done this anyway when implementing the client
      bits for the SSH transport.
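
      The shape of the response object, roughly (a condensed sketch; the
      decoder, attribute and method names are illustrative, not the actual
      wireprotov2peer code):

      # Condensed sketch of a streamed command response (illustrative names).
      import threading

      class commandresponse(object):
          def __init__(self):
              self._lock = threading.Lock()       # guards _pending and _done
              self._decoder = bufferingdecoder()  # assumed incremental CBOR decoder
              self._pending = []
              self._done = False
              self._event = threading.Event()     # lets objects() avoid busy looping

          def _onframedata(self, data):
              # Runs on the thread receiving frames off the wire.
              with self._lock:
                  self._decoder.decode(data)
                  self._pending.extend(self._decoder.getavailable())
              self._event.set()

          def _oncommanddone(self):
              with self._lock:
                  self._done = True
              self._event.set()

          def objects(self):
              # Generator handed to the caller; yields objects as they decode.
              while True:
                  self._event.wait()
                  with self._lock:
                      objs, self._pending = self._pending, []
                      done = self._done
                      if not done:
                          self._event.clear()
                  for o in objs:
                      yield o
                  if done:
                      return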
      
      Test output changes because the initial CBOR map holding the overall
      response state is now always handled internally by the response
      object.
      
      Differential Revision: https://phab.mercurial-scm.org/D4474
    • wireprotoframing: buffer emitted data to reduce frame count · 84bf6ded
      Gregory Szorc authored
      An upcoming commit introduces a wire protocol command that can emit
      hundreds of thousands of small objects. Without a buffering layer,
      we would emit a single, small frame for every object. Performance
      profiling revealed this to be a source of significant overhead for
      both client and server.
      
      This commit introduces a very crude buffering layer so that we emit
      fewer, bigger frames in such a scenario. This code will likely get
      rewritten in the future to be part of the streams API, as we'll
      need a similar strategy for compressing data. I don't want to think
      about it too much at the moment though.
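
      A crude sketch of the idea (illustrative; the real frame emission code
      differs): accumulate small payloads and flush them as one frame once a
      size threshold is crossed, plus a final flush at the end of the response.

      # Sketch: coalesce many small payloads into fewer, larger frames.
      class bufferingemitter(object):
          def __init__(self, sendframe, maxsize=64 * 1024):
              self._sendframe = sendframe   # callable emitting a single frame
              self._chunks = []
              self._chunkssize = 0
              self._maxsize = maxsize

          def send(self, data):
              self._chunks.append(data)
              self._chunkssize += len(data)
              if self._chunkssize >= self._maxsize:
                  self.flush()

          def flush(self):
              if self._chunks:
                  # One frame for the whole buffered run, not one per object.
                  self._sendframe(b''.join(self._chunks))
                  self._chunks = []
                  self._chunkssize = 0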
      
      server
      before: user 32.500+0.000 sys 1.160+0.000
      after:  user 20.230+0.010 sys 0.180+0.000
      
      client
      before: user 133.400+0.000 sys 93.120+0.000
      after:  user  68.370+0.000 sys 32.950+0.000
      
      This appears to indicate we have significant overhead in the frame
      processing code on both client and server. It might be worth profiling
      that at some point...
      
      Differential Revision: https://phab.mercurial-scm.org/D4473
  7. Sep 05, 2018
    • wireprotov2: implement commands as a generator of objects · 07b58266
      Gregory Szorc authored
      Previously, wire protocol version 2 inherited version 1's model of
      having separate types to represent the results of different wire
      protocol commands.
      
      As I implemented more powerful commands in future commits, I found
      I was using a common pattern of returning a special type to hold a
      generator. This meant the command function required a closure to
      do most of the work. That made logic flow more difficult to follow.
      I also noticed that many commands were effectively a sequence of
      objects to be CBOR encoded.
      
      I think it makes sense to define version 2 commands as generators.
      This way, commands can simply emit the data structures they wish to
      send to the client. This eliminates the need for a closure in
      command functions and removes encoding from the bodies of commands.
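
      In this model a command handler is just a generator of plain objects;
      the server reactor CBOR-encodes each emitted object into frames and
      appends the end-of-stream frame once the generator is exhausted. A
      hypothetical handler might look like this (the function body is
      illustrative, not the actual wireprotov2server code):

      # Hypothetical sketch of a version 2 command written as a generator.
      def headscommand(repo, proto):
          heads = repo.heads()
          # No closure and no manual CBOR encoding: just emit objects to send.
          yield {b'totalitems': len(heads)}
          for node in heads:
              yield {b'node': node}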
      
      As part of this commit, the handling of response objects has been
      moved into the serverreactor class. This puts the reactor in the
      driver's seat with regards to CBOR encoding and error handling.
      Having error handling in the function that emits frames is
      particularly important because exceptions in that function can lead
      to things getting in a bad state: I'm fairly certain that uncaught
      exceptions in the frame generator were causing deadlocks.
      
      I also introduced a dedicated error type for explicit error reporting
      in command handlers. This will be used in subsequent commits.
      
      There's still a bit of work to be done here, especially around
      formalizing the error handling "protocol." I've added yet another
      TODO to track this so we don't forget.
      
      Test output changed because we're using generators and no longer know
      we are at the end of the data until we hit the end of the generator.
      This means we can't emit the end-of-stream flag until we've exhausted
      the generator. Hence the introduction of 0-sized end-of-stream frames.
      
      Differential Revision: https://phab.mercurial-scm.org/D4472
  8. Aug 27, 2018
    • internals: extract frame-based protocol docs to own document · b0e0db15
      Gregory Szorc authored
      wireprotocol.txt is quite long and difficult to digest. The
      frame-based protocol is effectively a standalone concept (and could
      even be used outside of Mercurial). So this commit extracts its
      docs to a standalone file.
      
      The first few paragraphs were rewritten as part of the extraction.
      Section headers were adjusted accordingly.
      
      Existing references in wireprotocol.txt were updated to refer to the
      new doc / concept, which I've started referring to as `hgrpc`.
      
      I'm on the fence as to whether to move the HTTP and SSH transport
      details to the new doc as well. For now, I'm leaving them in
      wireprotocol.txt.
      
      Differential Revision: https://phab.mercurial-scm.org/D4443