  1. Nov 15, 2018
    • lfs: improve the hints for common errors in the Batch API · 9f78d10742af
      Matt Harbison authored
      The previous message was too debug-ish and less action-oriented than a hint
      should be.  The remaining errors that aren't handled are more along the lines
      of programming errors (not using POST, bad accept type, etc.), so I'm not
      bothering with those.
      
      The friendly errors purposely use `self.baseurl` instead of the full Batch API
      endpoint, because I'd expect some copy/paste/modify on the part of the user
      here, and it would be more confusing if '/objects/batch' magically appeared
      even though it shouldn't be used in the config setting.  The full endpoint
      still seems like the right thing to show for debugging in the catchall case.
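
      For context, the config knob these hints point at is `lfs.url`; a minimal
      sketch (the URL is hypothetical), with the Batch API endpoint being this
      value plus '/objects/batch':

          [lfs]
          url = https://lfs.example.com/myrepo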
    • lfs: provide more Batch API error info via a hint in the raised exception · 8863f08c1630
      Matt Harbison authored
      A coworker had a typo in `lfs.url`, forgot it was even set (the blob server
      is usually inferred), and then got a 404.  It would have been easier to debug
      if the failing URL had been printed.
  2. Oct 22, 2018
    • lfs: consult the narrow matcher when extracting pointers from ctx (issue5794) · 4a81d82474e9
      Matt Harbison authored
      I added a testcase for lfs to all narrow tests, and the following failed:
      
          test-narrow-acl.t
          test-narrow-exchange.t
          test-narrow-patterns.t
          test-narrow-strip.t
          test-narrow-trackedcmd.t
          test-narrow-widen.t
          test-narrow.t
      
      The first two still have errors in the pretxnchangegroup hook on clone and
      when receiving a push, which I'm still looking into (4d63f3bc1e1a already
      fixed something in this area).  The two tests modified here seem to cover the
      things that failed in the remaining narrow tests, i.e. `hg tracked` and
      `hg strip`, so I didn't bother enabling the testcases elsewhere.  Maybe we
      should, but it's 68 tests total.
  3. Sep 21, 2018
    • lfs: autoload the extension when cloning from repo with lfs enabled · 6637b079ae45
      Matt Harbison authored
      This is based on a patch by Gregory Szorc.  I made small adjustments to
      clean up the messaging when the server has the extension enabled, but the
      client has it disabled (to prevent autoloading).  Additionally, I added
      a second server capability to distinguish between the server having the
      extension enabled, and the server having LFS commits.  This helps prevent
      unnecessary requirement propagation: the client shouldn't add a requirement
      that the server doesn't have, just because the server had the extension
      loaded.  The TODO I had about advertising a capability when the server can
      natively serve up blobs isn't relevant anymore (we've had 2 releases that
      support this), so I dropped it.
      
      Currently, we lazily add the "lfs" requirement to a repo when we first
      encounter LFS data. Due to a pretxnchangegroup hook that looks for LFS
      data, this can happen at the end of clone.
      
      Now that we have more control over how repositories are created, we can
      do better.
      
      This commit adds a repo creation option to add the "lfs" requirement.
      hg.clone() sets this creation option if the remote peer is advertising
      lfs usage (as opposed to just support needed to push).
      
      So, what this change effectively does is have cloned repos
      automatically inherit the "lfs" requirement.
      
      Differential Revision: https://phab.mercurial-scm.org/D5130
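
      As an illustration of the effect (a hedged sketch; the exact set of other
      requirements varies by setup), a clone from a server advertising lfs usage
      ends up with the requirement recorded in the new repo's `.hg/requires`:

          $ cat .hg/requires
          dotencode
          fncache
          generaldelta
          lfs
          revlogv1
          store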
  4. Oct 04, 2018
    • lfs: register the flag processors per repository · 9c4cbbb0fc51
      Matt Harbison authored
      Previously, enabling the extension for any repo in commandserver or hgweb would
      enable the flags on all repos.  Since localrepo.resolverevlogstorevfsoptions()
      is called so early, the check to see if the extension is enabled on the repo
      (which hasn't been instantiated yet) is a bit awkward.  But I don't see a better
      way.
  5. Oct 10, 2018
    • lfs: avoid a potential variable reference before assignment error in cmdserver · 535fc8a22365
      Matt Harbison authored
      A coworker hit this once yesterday when pulling in thg (a retry worked), and
      then I hit it with strip after a pull.  I had a difficult time recreating a
      test for this (at least one of the tricks was to not use '-R', which seems to
      cause reposetup() to be called for each command), so I'm not sure how large
      the window actually is.  Calling reposetup() *after* the requirement is added
      will skip the hook entirely.

      The other issue was with adding a couple of `ui.status()` lines around the
      check that installs the hook.  On Windows, the cmdserver process ballooned to
      1.6GB and hung.  Changing them to `ui.warn()` avoided the hang.  It also hung
      on macOS, but without the large memory usage.
  6. Sep 21, 2018
    • filelog: store filename directly on revlog instance · 96838b620b9c
      Gregory Szorc authored
      This attribute is only used by LFS.  It is used by one of the revlog flag
      processor functions, which gets an instance of the revlog, not the file
      storage type.  So it makes sense to store this attribute on the revlog
      instead of the filelog.

      With this change, I'm pretty confident that LFS is no longer directly
      accessing file storage interface members that are revlog-centric, i.e. it
      gets us one step closer to eliminating revlog-centric APIs from the file
      storage interface!
      
      Differential Revision: https://phab.mercurial-scm.org/D4715
    • lfs: access revlog directly · 62a532045e71
      Gregory Szorc authored
      LFS monkeypatches filelog.filelog and then accesses various filelog
      attributes in the monkeypatched functions.  This is all fine.
      
      But some of the attributes being accessed by LFS are revlog-centric and
      shouldn't be exposed on the file storage interface.
      
      This commit changes the monkeypatched functions to access proxied
      attributes on self._revlog instead of self.
      
      This should be safe to do because non-revlog repositories should not
      be using filelog instances: instead they should have a separate class
      to represent file storage. So it is reasonable for LFS to assume the
      _revlog attribute exists and points to a revlog.
      
      Differential Revision: https://phab.mercurial-scm.org/D4714
  7. Sep 20, 2018
    • lfs: don't add extension to hgrc after clone or share (BC) · bcf72d7b1524
      Gregory Szorc authored
      Now that repository loading in core supports automatically loading
      the lfs extension when the "lfs" requirement is present, we no
      longer need to update the .hg/hgrc of newly-created repos to load
      the lfs extension!
      
      I'm marking this as BC because it is a change in behavior.  But users should
      not notice unless they create an LFS repo with new Mercurial and then attempt
      to use it with an old version that doesn't support automatic extension
      loading.
      
      Differential Revision: https://phab.mercurial-scm.org/D4712
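
      For reference, the stanza that previously got appended to the new repo's
      `.hg/hgrc` (and is no longer written) looked roughly like this:

          [extensions]
          lfs =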
  8. Sep 20, 2018
    • localrepo: support writing shared file (API) · d3d4b4b5f725
      Gregory Szorc authored
      Now that we can create a shared repository via creation options, we
      can handle other special actions related to share at repo creation time
      as well.
      
      One of the things we do after creating a shared repository is write out a
      .hg/shared file containing the list of additional things to share, of which
      only "bookmarks" is currently supported.
      
      We add a creation option to hold the set of additional items to
      share. If items are defined, we write out the .hg/shared file at
      repo creation time.
      
      As part of this, we no longer hold the repo lock when writing the
      file. I'm pretty sure we don't care about the tiny race condition
      window. I'm also pretty sure the reason we used the lock was because
      the vfs auditor on the repo instance complained otherwise. Since the
      repo creation code doesn't have an audited vfs, we don't need to
      appease it.
      
      Because we no longer need to tell the post share hook what items
      are shared, the "bookmarks" argument to that function has been
      dropped, incurring an API change.
      
      Differential Revision: https://phab.mercurial-scm.org/D4708
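
      A hedged usage sketch: sharing with bookmarks is what ends up recording the
      extra item in the new repository's `.hg/shared` file:

          $ hg share -B source dest
          $ cat dest/.hg/shared
          bookmarks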
  9. Sep 06, 2018
    • lfs: ensure the blob is linked to the remote store on skipped uploads · a913d2892e17
      Matt Harbison authored
      I noticed a "missing" blob when pushing two repositories with common blobs to a
      fresh server, and then running `hg verify` as a user different from the one
      running the web server.  When pushing the second repo, several of the blobs
      already existed in the user cache, so the server indicated to the client that it
      doesn't need to upload the blobs.  That's good enough for the web server process
      to serve up in the future.  But a different user has a different cache by
      default, so verify complains that `lfs.url` needs to be set, because it wants to
      fetch the missing blobs.
      
      Aside from that corner case, it's better to keep all of the blobs in the repo
      whenever possible.  Especially since the largefiles wiki says the user cache can
      be deleted at any time to reclaim disk space- users switching over may have the
      same expectations.
  10. Aug 24, 2018
    • lfs: add a progress bar when searching for blobs to upload · 37e56607cbb9
      Matt Harbison authored
      The search itself can take an extreme amount of time if there are a lot of
      revisions involved.  I've got a local repo that took 6 minutes to push 1850
      commits, and 60% of that time was spent here (there are ~70K files):
      
           \ 58.1%  wrapper.py:     extractpointers      line 297:  pointers = extractpointers(...
             | 57.7%  wrapper.py:     pointersfromctx    line 352:  for p in pointersfromctx(ct...
             | 57.4%  wrapper.py:     pointerfromctx     line 397:  p = pointerfromctx(ctx, f, ...
               \ 38.7%  context.py:     __contains__     line 368:  if f not in ctx:
                 | 38.7%  util.py:        __get__        line 82:  return key in self._manifest
                 | 38.7%  context.py:     _manifest      line 1416:  result = self.func(obj)
                 | 38.7%  manifest.py:    read           line 472:  return self._manifestctx.re...
                   \ 25.6%  revlog.py:      revision     line 1562:  text = rl.revision(self._node)
                     \ 12.8%  revlog.py:      _chunks    line 2217:  bins = self._chunks(chain, ...
                        | 12.0%  revlog.py:      decompress  line 2112:  ladd(decomp(buffer(data, ch...
                     \  7.8%  revlog.py:      checkhash  line 2232:  self.checkhash(text, node, ...
                       |  7.8%  revlog.py:      hash     line 2315:  if node != self.hash(text, ...
                       |  7.8%  revlog.py:      hash     line 2242:  return hash(text, p1, p2)
                   \ 12.0%  manifest.py:    __init__     line 1565:  self._data = manifestdict(t...
               \ 16.8%  context.py:     filenode         line 378:  if not _islfs(fctx.filelog(...
                 | 15.7%  util.py:        __get__        line 706:  return self._filelog
                 | 14.8%  context.py:     _filelog       line 1416:  result = self.func(obj)
                 | 14.8%  localrepo.py:   file           line 629:  return self._repo.file(self...
                 | 14.8%  filelog.py:     __init__       line 1134:  return filelog.filelog(self...
                 | 14.5%  revlog.py:      __init__       line 24:  censorable=True)
  11. Jun 09, 2018
    • fileset: rewrite predicates to return matcher not closed to subset (API) (BC) · ff5b6fca1082
      Yuya Nishihara authored
      This makes fileset expressions open to any input, so that we can just say
      "hg status 'set: not binary()'" to select text files including unknowns.

      With this and the removal of subset computation, 'set:**' becomes as fast as
      'glob:**'.  Further optimization will probably be possible by narrowing the
      file tree used to compute status, for example.
      
      This also fixes 'subrepo()' to not ignore the current mctx.subset.
      
      .. bc::
      
         The fileset expression may include untracked files by default. Use
         ``tracked()`` to explicitly filter out files not existing at the context
         revision.
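
      A quick usage example of the new behavior and the escape hatch:

          $ hg status 'set:not binary()'                # may now include untracked files
          $ hg status 'set:not binary() and tracked()'  # restrict to tracked files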
  12. May 31, 2018
    • lfs: bypass wrapped functions when reposetup() hasn't been called (issue5902) · 3790efb388ca
      Matt Harbison authored
      There are only a handful of methods that access repo attributes that are applied
      in reposetup().  The `diff` test covers all of the commands that call
      scmutil.prefetchfiles().  Along the way, I saw that adding files and upgrading
      the repo format were also problems (also tested here).
      
      I don't think running `hg serve` through the commandserver is sane, but I
      conditionalized both the capabilities and the wsgirequest handler because it's
      trivially correct.  It doesn't look like there has ever been a caller of
      candownload(), so there's no test for that path.
      
      The upload case isn't testable, because uploadblobs() bails if there are no
      pointers.  The requirement should be added any time pointers are introduced, and
      that would force the extension to be loaded specifically for the repo.  This
      covers `debuglfsupload`, the pre-push hook (which isn't set until the repo is
      promoted to LFS), and uploadblobsfromrevs(), which can be called by other
      extensions.
      
      I think readfromstore() and writetostore() are only reachable as a flag
      processor for revlog.REVIDX_EXTSTORED, and a requirement is added as soon as
      that is seen, so I don't think those are a problem.
  13. Apr 15, 2018
    • lfs: enable the final download count status message · ab04972a33ef
      Matt Harbison authored
      At this point, I think all of the core commands are prefetching, except grep and
      verify.  Verify will need some special handling, in case the revlogs are
      corrupt.
      
      Grep has an issue that still needs to be debugged, but we probably need to
      give the behavior some thought too: it would be a shame to have to download
      everything in order to search.  I think the benefit of having this info for
      all commands outweighs the extra printing in a command that is arguably not
      well behaved in this context anyway.
  14. Apr 14, 2018
    • scmutil: teach the file prefetch hook to handle multiple commits · 7269b87f817c
      Matt Harbison authored
      The remainder of the commands that need prefetch deal with multiple revisions.
      I initially coded this as a separate hook, but then it needed a list of files
      to handle `diff` and `grep`, so it didn't seem worth keeping them separate.
      
      Not every matcher will emit bad file messages (some are built from a list of
      files that are known to exist).  But it seems better to filter this in one place
      than to push this on either each caller or each hook implementation.
  15. Apr 13, 2018
    • lfs: update the HTTP status codes in error cases · 31a0d47d69b3
      Matt Harbison authored
      I'm not bothering with validating PUT requests (for now), because the spec
      doesn't explicitly call out a Content-Type (though the example transcript does
      use the sensible 'application/octet-stream').
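
      For reference, a minimal sketch of what an upload request looks like in the
      spec's example transcript (the target comes from the Batch API response, so
      the path here is just a placeholder):

          PUT <upload href from the Batch API response> HTTP/1.1
          Content-Type: application/octet-stream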
  16. Feb 25, 2018
    • lfs: gracefully handle aborts on the server when corrupt blobs are detected · 10e5bb9678f4
      Matt Harbison authored
      The aborts weren't killing the server, but this seems cleaner.  I'm not sure
      whether it matters to handle the remaining IOError in the test the same way,
      for consistency.

      The error code still feels wrong (especially if the client is trying to
      download a corrupt blob), but I don't see anything better in the RFCs, and it
      is already used elsewhere because the Batch API spec specifically calls this
      out as a "Validation Error".
  17. Apr 08, 2018
    • lfs: infer the blob store URL from an explicit push dest or default-push · 31a4ea773369
      Matt Harbison authored
      Unlike pull, the blobs are uploaded within the exchange.push() window, so simply
      wrap it and swap in a properly configured remote store.  The '_subtoppath' field
      shouldn't be available during this window, but give the passed path priority for
      clarity.
      
      At one point, I hit an AttributeError in one of the convert tests while trying
      to save the original remote blobstore, back when the swap was run
      unconditionally.  I wrapped it in a util.safehasattr(), but then today I
      wasn't able to reproduce it.  In any case, the whole thing is now tucked under
      the requirement guard, because without the requirement there are no blobs in
      the repo, even if the extension is loaded.
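
      A hedged sketch of the destinations the blob store URL is inferred from
      (URLs hypothetical):

          $ hg push https://hg.example.com/repo   # blobs upload to this server's LFS endpoint
          $ hg push                               # destination taken from paths.default-push,
                                                  # falling back to paths.default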