  1. Jun 05, 2018
    • sparse-revlog: implement algorithm to write sparse delta chains (issue5480) · f8762ea7
      Paul Morelle authored
      The classic behavior of revlog._isgooddeltainfo is to consider the span
      size of the whole delta chain and to limit it to 4 * textlen.
      Once sparse-revlog writing is allowed (and enforced with a requirement),
      revlog._isgooddeltainfo uses the span of the largest chunk as the
      distance in this check, instead of the span of the whole delta chain.
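      A minimal sketch of the changed check; the function, its arguments, and
      the chunk representation below are hypothetical, not the actual revlog
      code:

          def isgooddelta(chunks, wholespan, textlen, sparse):
              # chunks: list of chunks, each a non-empty list of
              # (offset, size) pairs sorted by offset
              # wholespan: span of the whole delta chain in bytes
              if sparse:
                  # sparse-revlog: use the span of the largest chunk
                  distance = max(c[-1][0] + c[-1][1] - c[0][0]
                                 for c in chunks)
              else:
                  # classic behavior: use the span of the whole chain
                  distance = wholespan
              return distance < textlen * 4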
      
      In order to compute the span of the largest chunk, we need to slice into
      chunks a chain with the new revision at the top of the revlog, and take
      the maximal span of these chunks. The sparse read density is a parameter
      to the slicing: it stops when the global read density reaches this
      threshold. For instance, a density of 50% means that 2 of every 4 bytes
      read are actually used to reconstruct the revision (the others belong to
      other chains).
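      As a rough illustration, here is a greedy, per-chunk approximation of
      such density-driven slicing (hypothetical names; the real algorithm
      works on the global density and differs in detail):

          def slicechain(revs, targetdensity=0.5):
              # revs: (offset, size) pairs, sorted by offset, for the
              # revisions needed to rebuild the target revision
              chunks = [[revs[0]]]
              useful = revs[0][1]          # useful bytes in current chunk
              for off, size in revs[1:]:
                  chunk = chunks[-1]
                  newspan = off + size - chunk[0][0]
                  if (useful + size) / float(newspan) < targetdensity:
                      # reading this far ahead would waste too many
                      # bytes: start a new chunk at this revision
                      chunks.append([(off, size)])
                      useful = size
                  else:
                      chunk.append((off, size))
                      useful += size
              return chunks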
      
      This allows a new revision to potentially be stored as a diff against
      another revision anywhere in the history, instead of being restricted to
      the last 4 * textlen bytes. The result is much better compression on
      repositories that have many concurrent branches. Here is a comparison
      between using deltas from current upstream (aggressive-merge-deltas on
      by default) and deltas from a sparse-revlog.
      
      Comparison of `.hg/store/` size:
      
          mercurial (6.74% merges):
              before:     46,831,873 bytes
              after:      46,795,992 bytes (no relevant change)
          pypy (8.30% merges):
              before:    333,524,651 bytes
              after:     308,417,511 bytes -8%
          netbeans (34.21% merges):
              before:  1,141,847,554 bytes
              after:   1,131,093,161 bytes -1%
          mozilla-central (4.84% merges):
              before:  2,344,248,850 bytes
              after:   2,328,459,258 bytes -1%
          large-private-repo-A (19.73% merges):
              before: 41,510,550,163 bytes
              after:   8,121,763,428 bytes -80%
          large-private-repo-B (23.77% merges):
              before: 58,702,221,709 bytes
              after:   8,351,588,828 bytes -76%
      
      Comparison of `00manifest.d` size:
      
          mercurial (6.74% merges):
              before:      6,143,044 bytes
              after:       6,107,163 bytes (no relevant change)
          pypy (8.30% merges):
              before:     52,941,780 bytes
              after:      27,834,082 bytes -48%
          netbeans (34.21% merges):
              before:    130,088,982 bytes
              after:     119,337,636 bytes -10%
          mozilla-central (4.84% merges):
              before:    215,096,339 bytes
              after:     199,496,863 bytes -8%
          large-private-repo-A (19.73% merges):
              before: 33,725,285,081 bytes
              after:     390,302,545 bytes -99%
          large-private-repo-B (23.77% merges):
              before: 49,457,701,645 bytes
              after:   1,366,752,187 bytes -97%
      
      
      The better delta chains provide a performance boost in relevant repositories:
      
          pypy, bundling 1000 revisions:
              before: 1.670s
              after:  1.149s -31%
      
      Unbundling got a bit slower, probably because the sparse algorithm is
      still pure Python.
      
          pypy, unbundling 1000 revisions:
              before: 4.062s
              after:  4.507s +10%
      
      Performance of bundle/unbundle in repositories with few concurrent
      branches (e.g. mercurial) is unaffected.
      
      No significant differences have been noticed when timing `hg push` and
      `hg pull` locally. More stable timings are being gathered.
      
      As with aggressive-merge-deltas, better deltas come with longer delta
      chains, and longer chains have a performance impact. For example, the
      length of the chain needed to restore the manifest of pypy's tip grows
      from 82 items to 1929 items, moving the restore time from 3.88ms to
      11.3ms.
      
      Delta chain length is an independent issue that also affects
      repositories without this change. It will be dealt with separately.
      
      No significant differences have been observed on repositories where
      `sparse-revlog` has little effect (mercurial, unity, netbeans). On pypy,
      small differences have been observed on some operations affected by
      delta chain building and retrieval.
      
      
          pypy, perfmanifest
              before: 0.006162s
              after:  0.017899s +190%
      
          pypy, commit:
              before: 0.382
              after:  0.376 -1%
      
          pypy, status:
              before: 0.157
              after:  0.168 +7%
      
      More comprehensive and stable timing comparisons are in progress.
  2. Jun 04, 2018
    • sparse-revlog: new requirement enabled with format.sparse-revlog · aa21a9ad
      Paul Morelle authored
      The meaning of the new 'sparse-revlog' requirement is that the revlogs are
      allowed to contain wider delta chains with larger holes between the interesting
      chunks. These sparse delta chains should be read in several chunks to avoid a
      potential explosion of memory usage.
      
      Older versions do not know how to read a delta chain in several chunks.
      They would keep reading it in a single read, and would therefore be
      subject to the potential memory explosion. Hence this new requirement:
      only versions that support sparse-revlog reading should be allowed to
      read such a revlog.
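      The following toy contrast shows why several reads matter; `fp` and the
      (offset, size) chunk pairs are hypothetical, not the revlog API:

          def readchunks(fp, chunks):
              # one read per chunk: only the useful bytes are buffered
              for offset, size in chunks:
                  fp.seek(offset)
                  yield fp.read(size)

          def readwholespan(fp, chunks):
              # single read: buffers every hole between the first and
              # last chunk as well, which is what can explode memory
              start = chunks[0][0]
              end = chunks[-1][0] + chunks[-1][1]
              fp.seek(start)
              return fp.read(end - start)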
      
      Implementation of this new algorithm and tools to enable or disable the
      requirement will follow in the next changesets.
    • revlog: extract `deltainfo.distance` for future conditional redefinition · c67093e8
      Paul Morelle authored
      This commit exists to make the next one clearer.
  3. Jul 12, 2018
    • ssh: avoid reading beyond the end of stream when using compression · 27391d74
      Jörg Sonnenberger authored
      Compressed streams can be used as part of getbundle. The normal read()
      operation of bufferedinputpipe will try to fulfill the request exactly
      and can deadlock if the server sends less because it is done. At the
      same time, the bundle2 logic will stop reading when it believes it has
      gotten all parts of the bundle, which can leave behind end-of-stream
      markers as used by bzip2 and zstd.
      
      To solve this, introduce a new optional unbufferedread interface and
      provide it in bufferedinputpipe and doublepipe. If there is buffered
      data left, it is returned; otherwise a single read request is issued
      and whatever it obtains is returned.
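      A minimal sketch of that contract (approximating bufferedinputpipe, not
      copied from it):

          class bufferedpipe(object):
              def __init__(self, input):
                  self._input = input
                  self._buffer = b''

              def unbufferedread(self, size):
                  # drain pending buffered data first, if any
                  if self._buffer:
                      data = self._buffer[:size]
                      self._buffer = self._buffer[size:]
                      return data
                  # otherwise issue exactly one read and return
                  # whatever arrives, possibly fewer than size bytes
                  return self._input.read(size)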
      
      Reorganize the decompression handlers to try harder to read until the
      end of stream, especially if the requested read can already be
      fulfilled. Checking for end of stream is messy with Python 2; none of
      the standard compression modules properly exposes it. At least with
      zstd and bzip2, decompressing will remember EOS and fail on empty input
      after the EOS has been seen. For zlib, the only way to detect it with
      Python 2 is to duplicate the decompressobj and force some additional
      data into it. The common handler can be further optimized, but works as
      a PoC.
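      A hedged sketch of that zlib probe (illustrative only, not the code in
      this patch): duplicate the decompressor and push a dummy byte into the
      copy; only a finished stream reports it back via unused_data.

          import zlib

          def zlibateof(decompressor):
              probe = decompressor.copy()
              try:
                  probe.decompress(b'\x00')
              except zlib.error:
                  return False  # mid-stream, byte was (invalid) input
              # after end of stream, extra bytes are not consumed but
              # surface verbatim in unused_data
              return bool(probe.unused_data)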
      
      Differential Revision: https://phab.mercurial-scm.org/D3937
  4. Jul 15, 2018
    • obsolete: explode if metadata contains invalid UTF-8 sequence (API) · ff1182d1
      Yuya Nishihara authored
      The current metadata API can be a source of bugs since it forces
      callers to handle encoding conversion by themselves. So let's make it
      reject bad data as a last-ditch defense. I assume there's no metadata
      field that is supposed to store an arbitrary BLOB like
      transplant_source.
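      A minimal sketch of such a last-ditch check (hypothetical helper, not
      the actual obsstore code):

          def checkutf8metadata(metadata):
              for key, value in metadata.items():
                  try:
                      value.decode('utf-8')
                  except UnicodeDecodeError:
                      raise ValueError(
                          'metadata value for %r must be valid UTF-8' % key)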
    • obsolete: store user name and note in UTF-8 (issue5754) (BC) · 6b5ca1d0
      Yuya Nishihara authored
      Before, user names were stored in the local encoding and transferred
      across repositories, which made it impossible to restore non-ASCII user
      names on different platforms. This patch makes new markers be encoded
      in UTF-8 and decoded back to the local encoding for display. Existing
      markers are unfixable, so they may result in mojibake.
      
      I don't like the API that requires the metadata dict to be UTF-8
      encoded, which is a source of bugs, but there's no abstraction layer to
      handle the encoding efficiently. So we apply the same rule as the
      extras dict to obsstore metadata.
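      For illustration, the rule leans on Mercurial's existing encoding
      helpers (the variable and dict below are made up):

          from mercurial import encoding

          username = 'J\xc3\xb6rg'  # hypothetical, in local encoding
          # convert local -> UTF-8 when writing the marker ...
          metadata = {'user': encoding.fromlocal(username)}
          # ... and UTF-8 -> local when displaying it
          shown = encoding.tolocal(metadata['user'])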
  5. Jul 12, 2018
    • revset: special case commonancestors(none()) to be empty set · e4b270a3
      Yuya Nishihara authored
      This matches the behavior of ancestor(none()).
      
      From an implementation perspective, ancestor() and commonancestors()
      are intersections, and ancestors() is a union, so it would make some
      sense for commonancestors(none()) to return all revisions. However,
      ancestor(none()) isn't implemented that way, so doing so would break
      ancestor(x) == max(commonancestors(x)).
      
      From a user perspective, the ancestors of nothing are nothing,
      whichever kind of operation the ancestor predicate performs.
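      The set-theoretic wrinkle in one toy snippet (illustrative revision
      numbers, not revset code):

          from functools import reduce

          universe = set(range(8))        # all revisions
          ancestorsets = []               # commonancestors(none())
          # folding intersection over zero sets formally yields the
          # universe ...
          formal = reduce(set.intersection, ancestorsets, universe)
          # ... while the revset special-cases empty input to the
          # empty set, matching ancestor(none())
          result = set() if not ancestorsets else formal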
  6. Jun 10, 2018
    • fileset: parse argument of size() by predicate function · 1500cbe2
      Yuya Nishihara authored
      This change is necessary to pass a size expression to predicatematcher.
      See the next patch.
    • fileset: add "tracked()" to explicitly select files in the revision · 131aae58
      Yuya Nishihara authored
      I'm going to rewrite filesets to be match predicates, which means basic
      patterns such as '*' will no longer be "closed" to the subset
      constructed from the ctx.

      The good thing is that 'hg status "set:not binary()"' can include
      unknown files out of the box, and fileset computation will likely be
      faster as we won't have to walk the dirstate twice, for example. The
      bad thing is that we can't select files at a certain revision with
      'set:revs(REV, **)' since '**' is "open" to any paths. So this patch
      introduces "tracked()" as a replacement for the '**' in the example
      above.
  7. Jun 09, 2018
    • fileset: rewrite andset() to not use mctx.narrow() · 80466fd8
      Yuya Nishihara authored
      The new code is less efficient than the original, but it helps port
      andset() to matcher composition. This will be cleaned up later.

      This effectively disables the fullmatchctx magic, since mctx will never
      be demoted to matchctx. The fullmatchctx class will be removed later.