  1. May 04, 2019
    • lfs: add a TODO file · 1756859a
      Matt Harbison authored
      This is a cleaned up and reorganized list of items I sent out about a year ago.
      But tracking this in the repo (like the narrow extension) gives more visibility
      in case anyone wants to help out.
  2. Mar 11, 2019
    • updatecaches: also warm hgtagsfnodescache · 32338e27
      Pierre-Yves David authored
      Now that a full update of this cache runs in a reasonable amount of time, we can
      warm it as part of a full cache update.
    • hgtagsfnodescache: inherit fnode from parent when possible · 9f45d3d5
      Pierre-Yves David authored
      If a changeset does not update the content of `.hgtags`, it means it will use
      the same file-node (for `.hgtags`) as its parents. In this case we can
      directly reuse the parent's file-node.
      
      We use this property when updating the `hgtagsfnodescache`, taking a faster path
      if we already have a cached value for the parents of the node we are looking
      at.
      
      Doing so provides a large performance boost when looking at a lot of fnodes,
      especially on repositories with very large manifests:
      
      timing for `tagsmod.fnoderevs(ui, repo, repo.changelog.revs())`
      
      mercurial: (41907 revisions, 1923 files)
      
          before: 6.9 seconds
          after:  2.7 seconds (-54%)
      
      pypy: (96266 revisions, 5198 files)
      
          before: 80 seconds
          after:  20 seconds (-75%)
      
      mozilla-central: (463411 revisions, 272080 files)
      
          before: 7166.4 seconds
          after:    47.8 seconds (-99%, x150 speedup)
      
      On a copy of mozilla-try with about 35K heads and 1.7M changesets, this moves
      the computation from many hours to a couple of minutes, making it more
      worthwhile to do a full warm-up of this cache before computing tags (from a
      cold cache).
      
      There seem to be other low-hanging performance fruit, like avoiding the use of
      changectx or using more revision-centric logic. However, the new code is fast
      enough for my needs right now.
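
      A minimal sketch of the parent-inheritance shortcut described above; the names
      (`cache`, `parentrevs`, `hgtags_touched`, `compute_fnode`) are illustrative and
      not the real hgtagsfnodescache API:

          def fnode_for(rev, cache, parentrevs, hgtags_touched, compute_fnode):
              """Return the `.hgtags` file-node for `rev`, reusing a parent's
              cached value when the changeset does not touch `.hgtags`."""
              if rev in cache:
                  return cache[rev]
              if not hgtags_touched(rev):
                  for parent in parentrevs(rev):
                      if parent in cache:
                          # fast path: inherit the file-node from a cached parent
                          cache[rev] = cache[parent]
                          return cache[rev]
              # slow path: read the manifest to find the `.hgtags` file-node
              cache[rev] = compute_fnode(rev)
              return cache[rev]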
      9f45d3d5
    • Pierre-Yves David's avatar
      hgtagsfnodescache: handle nullid lookup · 2930b313
      Pierre-Yves David authored
      The null revision is empty, so as far as the `hgtagsfnodescache` is concerned
      its `.hgtags` file-node is `nullid`. Handling `nullid` will help with the next
      changeset. Before this change, feeding `nullid` to `hgtagsfnodescache.getfnode`
      would return a wrong result (the fnode for tip).
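
      A tiny illustration of the special case described here (a hypothetical wrapper,
      not the actual getfnode() code):

          from mercurial.node import nullid

          def getfnode_with_null(cache, node):
              # the null revision has no `.hgtags`, so its file-node is nullid
              if node == nullid:
                  return nullid
              return cache.getfnode(node)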
  3. Apr 25, 2019
    • histedit: Speed up scrolling in patch view mode · c4a50e86
      feyu authored
      Store patchcontents in the mode state, avoiding the expensive
      call to ui for computing the patch contents.

      Before this change, histedit's patch view mode could be very
      unresponsive in large repos.
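
      A rough sketch of the caching idea (illustrative only; `render_patch` and the
      attribute name are hypothetical, not the real histedit code):

          def patchcontents(state):
              # compute the patch text once and keep it on the state object so that
              # scrolling does not trigger the expensive ui call again
              if getattr(state, '_patchcontents', None) is None:
                  state._patchcontents = render_patch(state)
              return state._patchcontents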
  4. Apr 06, 2019
    • repoview: introduce an `experimental.extra-filter-revs` config · d345627d
      Pierre-Yves David authored
      The option defines revisions to additionally filter out of all repository "views".
      The end goal is to provide an easy way to serve multiple subsets of the same
      repository using multiple "shares".

      The simplest use case of this feature is to have one view serving the public
      changesets and another also serving the drafts. This is currently achievable
      using the new `server.view` option introduced recently by Joerg Sonnenberger.
      However, more advanced use cases need more advanced definitions. For example,
      some need a view dedicated to certain release branches, or a view that hides
      security fixes that have not yet been released. Joerg Sonnenberger and I
      discussed this topic at the recent mini-sprint, and both of us have seen
      real-life use cases for this. (This series was written during that mini-sprint.)

      The feature is fully functional and uses a similar cache-fallback mechanism to
      ensure decent performance. However, there is remaining work to make the caches
      and hooks of each share collaborate with each other. This will come at a later
      time, once users start to actually test this feature on real use cases.
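
      As a hedged illustration of how such a filter might be configured (the message
      does not show the value syntax; a revset value and the branch names below are
      assumptions):

          [experimental]
          # hide not-yet-released security branches from the view served by this share
          extra-filter-revs = branch('relsec/*')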
  5. Apr 18, 2019
    • copies: filter out copies from non-existent source later in _chain() · fdbeacb9
      Martin von Zweigbergk authored
      _changesetforwardcopies() repeatedly calls _chain(). That is very
      expensive because _chain() does lookups in the manifest. I hope to
      split up the function in two parts: 1) simple chaining, not
      considering end points, and 2) filter out files that don't exist in
      the end points (and ping-pong copies/renames).
      
      This patch gets us closer to that by moving the check for
      non-existent source later in the function. Now there are no more
      checks for "src" and "dst" in the first loop; all the filtering of
      invalid copies is done in the second loop. The code also looks much
      more consistent now.
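
      A loose sketch of the chaining-plus-filtering shape described above (assumed
      dict layout `{destination: source}`; this is not the real copies._chain()):

          def chain_sketch(src_files, dst_files, a, b):
              # first loop: chain b's copies through a's, no validity checks yet
              t = dict(a)
              for dst, mid in b.items():
                  t[dst] = a.get(mid, mid)
              # second loop: all filtering of invalid copies happens here
              for dst, src in list(t.items()):
                  if (dst == src                   # ping-pong rename back to itself
                          or src not in src_files  # source missing from the source commit
                          or dst not in dst_files):
                      del t[dst]
              return t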
      
      No measurable impact on `hg debugpathcopies 4.0 4.8`. That shouldn't
      be surprising, since the only case where we now do more checks is
      chained copies/renames, which are quite rare in practice.
      
      Differential Revision: https://phab.mercurial-scm.org/D6277
    • copies: clarify mutually exclusive cases in _chain() with a s/if/elif/ · 5a397952
      Martin von Zweigbergk authored
      If the 'b' dict has a rename from 'x' to 'y', it shouldn't be possible
      for 'x' to be both (a key) in 'a' and in 'src'. That would mean that
      'x' is a file in the source commit and also a rename destination in
      the intermediate commit. But we currently don't allow renaming files
      onto existing files, so that shouldn't happen. So let's clarify that
      by using an "elif" instead of an "if". And if we did allow renaming
      files onto existing files, we should prefer to use the rename
      destination in the intermediate commit as source anyway.
      
      Differential Revision: https://phab.mercurial-scm.org/D6276
    • copies: delete a redundant cleanup step in _chain() · df7ad90e
      Martin von Zweigbergk authored
      The check is redundant since d5edb5d3a337 (copies: filter out copies
      when target is not in destination manifest, 2019-02-14). To test that
      hypothesis, I made this change in that commit, but all tests
      still passed. I think the check was necessary before then; we
      just didn't have tests for it.
      
      Differential Revision: https://phab.mercurial-scm.org/D6275
  6. Apr 12, 2019
    • copies: inline _computenonoverlap() in mergecopies() · d69bc8ff
      Martin von Zweigbergk authored
      We now call pathcopies() from the base to each of the commits, and
      that calls _computeforwardmissing(), which does file prefetching (in
      the remotefilelog override). So the call to _computenonoverlap() is
      now pointless (the sets of files from _computenonoverlap() are subsets
      of the sets of files from _computeforwardmissing()).
      
      This somehow also fixes a broken remotefilelog test.
      
      Differential Revision: https://phab.mercurial-scm.org/D6256
    • copies: calculate mergecopies() based on pathcopies() · 57203e02
      Martin von Zweigbergk authored
      When copies are stored in changesets, we need a changeset-centric
      version of mergecopies() just like we have a changeset-centric version
      of pathcopies(). I think the natural way of thinking about
      mergecopies() is in terms of pathcopies() from the base to each of the
      commits. So if we can rewrite mergecopies() based on two such
      pathcopies() calls, we'll get the changeset-centric version for
      free. That's what this patch does.
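
      A minimal sketch of that idea, assuming `pathcopies(x, y)` returns a
      `{destination: source}` dict as described here; the combination step is
      simplified and hypothetical, not the real mergecopies():

          def merge_copies_sketch(pathcopies, base, c1, c2, files1, files2):
              copies1 = pathcopies(base, c1)   # copies/renames going from base to c1
              copies2 = pathcopies(base, c2)   # copies/renames going from base to c2
              # a rename on one side whose source still exists on the other side is
              # what the merge machinery needs to know about
              relevant1 = {d: s for d, s in copies1.items() if s in files2}
              relevant2 = {d: s for d, s in copies2.items() if s in files1}
              return relevant1, relevant2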
      
      A nice bonus is that it ends up being a lot simpler. mergecopies() has
      accumulated a lot of technical debt over time. One good example is the
      code for dealing with grafts (the "partial/incomplete/dirty"
      stuff). Since pathcopies() already deals with backwards renames and
      ping-pong renames, we get that for free.
      
      I've run tests with hard-coded debug logging for "fullcopy" and while
      I haven't looked at every difference it produces, all the ones I have
      looked at seemed reasonable to me. I'm a little surprised that no more
      tests fail when run with '--extra-config-opt
      experimental.copies.read-from=compatibility' compared to before this
      patch. This patch also fixes the broken cases in test-annotate.t and
      test-fastannotate.t. It also enables the part of test-copies.t that
      was previously disabled exactly because mergecopies() needed to get a
      changeset-centric version.
      
      One drawback of the rewritten code is that we may now make
      remotefilelog prefetch more files. We used to prefetch files that were
      unique to either side of the merge compared to the other. We now
      prefetch files that are unique to either side of the merge compared to
      the base. This means that if you added the same file to each side, we
      would not prefetch it before, but we would now. Such cases are
      probably quite rare, but one likely scenario where they happen is when
      moving from a commit to its successor (or the other way around). The
      user will probably already have the files in the cache in such cases,
      so it's probably not a big deal.
      
      Some timings for calculating mergecopies between two revisions
      (revisions shown on each line, all using the common ancestor as base):
      
      In the hg repo:
      4.8 4.9: 0.21s -> 0.21s
      4.0 4.8: 0.35s -> 0.63s
      
      In an old copy of the mozilla-unified repo:
      FIREFOX_BETA_60_BASE^ FIREFOX_BETA_60_BASE: 0.82s -> 0.82s
      FIREFOX_NIGHTLY_59_END FIREFOX_BETA_60_BASE: 2.5s -> 2.6s
      FIREFOX_BETA_59_END FIREFOX_BETA_60_BASE: 3.9s -> 4.1s
      FIREFOX_AURORA_50_BASE FIREFOX_BETA_60_BASE: 31s -> 33s
      
      So it's measurably slower in most cases. The most significant
      difference is in the hg repo between revisions 4.0 and 4.8. In that
      case it seems to come from the fact that pathcopies() uses
      fctx.isintroducedafter() (in _tracefile), while the old mergecopies()
      used fctx.linkrev() (in _checkcopies()). That results in a single call
      to filectx._adjustlinkrev(), which is responsible for the entire
      difference in time (in my repo). So we pay a performance penalty but
      we get more correct code (see change in
      test-mv-cp-st-diff.t). Deleting the "== f.filenode()" in _tracefile()
      recovers the lost performance in the hg repo.
      
      There were a few other optimizations in _checkcopies() that I could
      not measure any impact from. One was from the "seen" set. Another was
      from a "continue" when the file was not in the destination manifest
      (corresponding to "am" in _tracefile).
      
      Also note that merge copies are not calculated when updating with a
      clean working copy, which is probably the most common case. I
      therefore think the much simpler code is worth the slowdown.
      
      Differential Revision: https://phab.mercurial-scm.org/D6255
  7. Apr 17, 2019
    • narrow: send specs as bundle2 data instead of param (issue5952) (issue6019) · 280f7a09
      Pulkit Goyal authored
      Before this patch, when ACL is involved, narrowspecs are sent as a bundle2
      parameter of the narrow:spec bundle2 part. The limitation of bundle2 part
      parameters is that they cannot carry data larger than 255 bytes. Includes and
      excludes in narrow are not limited in size and can grow beyond 255 bytes.
      
      This patch introduces a new mandatory bundle2 part and sends the narrowspecs
      as its data. A new bundle2 part is introduced to keep things cleaner and
      easier to distinguish with respect to backward compatibility.
      The part is mandatory because without the server's narrowspec, the local ACL
      narrow repo won't work.
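
      A hedged illustration of the param-vs-data difference described above
      (`bundler.newpart()` is the bundle2 API for adding a part; the payload
      encoding shown is a made-up placeholder, not the real wire format):

          def add_narrowspec_part(bundler, includes, excludes):
              # hypothetical payload; the actual encoding is not shown in the message
              data = b'\n'.join(includes) + b'\x00' + b'\n'.join(excludes)
              # the specs travel as part *data* (no 255-byte limit) instead of a part
              # *parameter*; the capital 'N' marks the part as mandatory
              return bundler.newpart(b'Narrow:responsespec', data=data)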
      
      This patch keeps clients compatible with servers running older versions.
      However, I left a comment that we should drop the other bundle2 part soon,
      as it is broken and people should not rely on it.
      
      I named the new bundle2 part 'Narrow:responsespec' because:

      1) the capital 'N' makes it mandatory
      2) 'Narrow:spec' cannot be used because bundle2 enforces that no two different
      parts may resolve to the same name when lowercased
      3) 'responsespec' makes it clear that these are specs sent as a response by
      the server
      
      While I was here, I renamed the `narrowhgacl` section to `narrowacl` as suggested by
      idlsoft@ and martinvonz@.
      
      Differential Revision: https://phab.mercurial-scm.org/D6310