  1. Sep 06, 2018
    • lfs: ensure the blob is linked to the remote store on skipped uploads · a913d2892e17
      Matt Harbison authored
      I noticed a "missing" blob when pushing two repositories with common blobs to a
      fresh server, and then running `hg verify` as a user different from the one
      running the web server.  When pushing the second repo, several of the blobs
      already existed in the user cache, so the server indicated to the client that it
      doesn't need to upload the blobs.  That's good enough for the web server process
      to serve up in the future.  But a different user has a different cache by
      default, so verify complains that `lfs.url` needs to be set, because it wants to
      fetch the missing blobs.
      
      Aside from that corner case, it's better to keep all of the blobs in the repo
      whenever possible, especially since the largefiles wiki says the user cache can
      be deleted at any time to reclaim disk space; users switching over may have the
      same expectations.
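
      A minimal sketch of the idea (the store attribute and method names are
      assumptions about the blob store internals): when the server's batch
      response marks an upload as unnecessary, link the blob into the repo store.

          def _linkskippedupload(localstore, oid):
              # The client was told not to upload this blob because the server
              # found it in the serving user's cache.  Hardlink it into the
              # repo-local store so that other users can see it too.
              if not localstore.vfs.exists(oid) and localstore.cachevfs.exists(oid):
                  localstore.linkfromusercache(oid)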
  2. Aug 24, 2018
    • lfs: add a progress bar when searching for blobs to upload · 37e56607cbb9
      Matt Harbison authored
      The search itself can take an extreme amount of time if there are a lot of
      revisions involved.  I've got a local repo that took 6 minutes to push 1850
      commits, and 60% of that time was spent here (there are ~70K files):
      
           \ 58.1%  wrapper.py:     extractpointers      line 297:  pointers = extractpointers(...
             | 57.7%  wrapper.py:     pointersfromctx    line 352:  for p in pointersfromctx(ct...
             | 57.4%  wrapper.py:     pointerfromctx     line 397:  p = pointerfromctx(ctx, f, ...
               \ 38.7%  context.py:     __contains__     line 368:  if f not in ctx:
                 | 38.7%  util.py:        __get__        line 82:  return key in self._manifest
                 | 38.7%  context.py:     _manifest      line 1416:  result = self.func(obj)
                 | 38.7%  manifest.py:    read           line 472:  return self._manifestctx.re...
                   \ 25.6%  revlog.py:      revision     line 1562:  text = rl.revision(self._node)
                     \ 12.8%  revlog.py:      _chunks    line 2217:  bins = self._chunks(chain, ...
                        | 12.0%  revlog.py:      decompress line 2112:  ladd(decomp(buffer(data, ch...
                     \  7.8%  revlog.py:      checkhash  line 2232:  self.checkhash(text, node, ...
                       |  7.8%  revlog.py:      hash     line 2315:  if node != self.hash(text, ...
                       |  7.8%  revlog.py:      hash     line 2242:  return hash(text, p1, p2)
                   \ 12.0%  manifest.py:    __init__     line 1565:  self._data = manifestdict(t...
               \ 16.8%  context.py:     filenode         line 378:  if not _islfs(fctx.filelog(...
                 | 15.7%  util.py:        __get__        line 706:  return self._filelog
                 | 14.8%  context.py:     _filelog       line 1416:  result = self.func(obj)
                 | 14.8%  localrepo.py:   file           line 629:  return self._repo.file(self...
                 | 14.8%  filelog.py:     __init__       line 1134:  return filelog.filelog(self...
                 | 14.5%  revlog.py:      __init__       line 24:  censorable=True)
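
      A sketch of the change (assuming a ui.makeprogress()-style progress API;
      the helpers are simplified from the profile above):

          def extractpointers(repo, revs):
              """return the lfs pointers introduced by the given revs"""
              pointers = {}
              progress = repo.ui.makeprogress(b'lfs search', b'changesets', len(revs))
              for r in revs:
                  # pointersfromctx() is the per-changeset helper from the profile
                  for p in pointersfromctx(repo[r]).values():
                      pointers[p.oid()] = p
                  progress.increment()
              progress.complete()
              return pointers.values()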
  3. Jun 09, 2018
    • fileset: rewrite predicates to return matcher not closed to subset (API) (BC) · ff5b6fca1082
      Yuya Nishihara authored
      This makes fileset expressions open to any input, so that we can just say
      "hg status 'set: not binary()'" to select text files, including unknown ones.
      
      With this and the removal of the subset computation, 'set:**' becomes as fast
      as 'glob:**'. Further optimization will probably be possible, for example by
      narrowing the file tree on which status is computed.
      
      This also fixes 'subrepo()' to not ignore the current mctx.subset.
      
      .. bc::
      
         The fileset expression may include untracked files by default. Use
         ``tracked()`` to explicitly filter out files not existing at the context
         revision.
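
      For example (commands are illustrative):

          $ hg status 'set: not binary()'                # may include unknown files
          $ hg status 'set: not binary() and tracked()'  # tracked files only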
  4. May 31, 2018
    • lfs: bypass wrapped functions when reposetup() hasn't been called (issue5902) · 3790efb388ca
      Matt Harbison authored
      There are only a handful of methods that access repo attributes that are applied
      in reposetup().  The `diff` test covers all of the commands that call
      scmutil.prefetchfiles().  Along the way, I saw that adding files and upgrading
      the repo format were also problems (also tested here).
      
      I don't think running `hg serve` through the commandserver is sane, but I
      conditionalized both the capabilities and the wsgirequest handler because it's
      trivially correct.  It doesn't look like there has ever been a caller of
      candownload(), so there's no test for that path.
      
      The upload case isn't testable, because uploadblobs() bails if there are no
      pointers.  The requirement should be added any time pointers are introduced, and
      that would force the extension to be loaded specifically for the repo.  This
      covers `debuglfsupload`, the pre-push hook (which isn't set until the repo is
      promoted to LFS), and uploadblobsfromrevs(), which can be called by other
      extensions.
      
      I think readfromstore() and writetostore() are only reachable as a flag
      processor for revlog.REVIDX_EXTSTORED, and a requirement is added as soon as
      that is seen, so I don't think those are a problem.
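
      The guard pattern looks roughly like this (the svfs attribute name is an
      assumption about the state that reposetup() installs):

          from mercurial import util

          def wrappedprefetchfiles(orig, repo, revs, match):
              # reposetup() never ran for this repo (e.g. `hg serve` started
              # through the commandserver), so there are no blob stores to use
              if not util.safehasattr(repo.svfs, 'lfslocalblobstore'):
                  return orig(repo, revs, match)
              ...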
  5. Apr 15, 2018
    • lfs: enable the final download count status message · ab04972a33ef
      Matt Harbison authored
      At this point, I think all of the core commands are prefetching, except grep and
      verify.  Verify will need some special handling, in case the revlogs are
      corrupt.
      
      Grep has an issue that still needs to be debugged, but we probably need to give
      the behavior some thought too; it would be a shame to have to download
      everything in order to search.  I think the benefit of having this info for all
      commands outweighs extra printing in a command that is arguably not well
      behaved in this context anyway.
  6. Apr 14, 2018
    • scmutil: teach the file prefetch hook to handle multiple commits · 7269b87f817c
      Matt Harbison authored
      The remainder of the commands that need prefetch deal with multiple revisions.
      I initially coded this as a separate hook, but then it needed a list of files
      to handle `diff` and `grep`, so it didn't seem worth keeping them separate.
      
      Not every matcher will emit bad file messages (some are built from a list of
      files that are known to exist).  But it seems better to filter this in one place
      than to push this on either each caller or each hook implementation.
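
      A sketch of the resulting shape (assuming the registry is
      scmutil.fileprefetchhooks; the hook body is illustrative):

          from mercurial import scmutil

          def _prefetchfiles(repo, revs, match):
              # collect the matched files across all revisions, then fetch
              # any blobs missing from the local store in a single batch
              ...

          scmutil.fileprefetchhooks.add('lfs', _prefetchfiles)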
  7. Apr 13, 2018
    • lfs: update the HTTP status codes in error cases · 31a0d47d69b3
      Matt Harbison authored
      I'm not bothering with validating PUT requests (for now), because the spec
      doesn't explicitly call out a Content-Type (though the example transcript does
      use the sensible 'application/octet-stream').
  8. Feb 25, 2018
    • lfs: gracefully handle aborts on the server when corrupt blobs are detected · 10e5bb9678f4
      Matt Harbison authored
      The aborts weren't killing the server, but this seems cleaner.  I'm not sure if
      it matters to handle the remaining IOError in the test like this, for
      consistency.
      
      The error code still feels wrong (especially if the client is trying to download
      a corrupt blob) but I don't see anything better in the RFCs, and this is already
      used elsewhere because the Batch API spec specifically mentioned this as a
      "Validation Error".
  9. Apr 08, 2018
    • lfs: infer the blob store URL from an explicit push dest or default-push · 31a4ea773369
      Matt Harbison authored
      Unlike pull, the blobs are uploaded within the exchange.push() window, so simply
      wrap it and swap in a properly configured remote store.  The '_subtoppath' field
      shouldn't be available during this window, but give the passed path priority for
      clarity.
      
      At one point I hit an AttributeError in one of the convert tests when trying to
      save the original remote blobstore when the swap was run unconditionally.  I
      wrapped it in a util.safehasattr(), but then today I wasn't able to reproduce
      it.  But now the whole thing is tucked under the requirement guard because
      without the requirement, there are no blobs in the repo, even if the extension
      is loaded.
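
      A sketch of the wrapping described above (the store constructor and the
      attribute it replaces are assumptions):

          def push(orig, repo, remote, *args, **kwargs):
              """wrap exchange.push() so uploads target the real push dest"""
              if 'lfs' in repo.requirements:
                  # swap in a remote store configured from the push destination;
                  # an explicit lfs.url would still take precedence when set
                  repo.svfs.lfsremoteblobstore = remoteblobstore(repo, remote.url())
              return orig(repo, remote, *args, **kwargs)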
    • lfs: infer the blob store URL from an explicit pull source · be1cc65bdb1c
      Matt Harbison authored
      I don't see any easier way to do this because the update part of `hg pull -u`
      happens outside exchange.pull(), and commands.postincoming() doesn't take a
      path.  So (ab)use the mechanism used by subrepos to redirect where subrepos are
      pulled from when an explicit path is given.  As a bonus, this should allow lfs
      blobs to be pulled into a subrepo when it is checked out.
      
      An explicit push path can be handled within exchange.push().  That can be done
      next, outside of this dirty hack.
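
      Roughly (reusing the '_subtoppath' mechanism; the command wrapper shape
      is illustrative):

          def pull(orig, ui, repo, source=None, **opts):
              # stash the explicit pull source where the blob store lookup
              # (and subrepos) will find it
              repo._subtoppath = ui.expandpath(source or 'default')
              try:
                  return orig(ui, repo, source, **opts)
              finally:
                  del repo._subtoppath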
  10. Apr 11, 2018
    • lfs: special case the null:// usercache instead of treating it as a url · e5cd8d1a094d
      Matt Harbison authored
      The previous code worked on Windows, but not on Unix, and a pending patch's test
      failed.  The url being used was something like "/tmp/.../client1/null://",
      courtesy of ui.configpath().  Looking at the doc comment, this seems like it's
      maybe not the right function to call (why should a relative cache path be
      expanded relative to the repo root or config file?), but largefiles has been
      using it since 8b8dd13295db (Oct 2011).  It was introduced in 1b591f9b7fd2 (Jan
      2011) without comment or callers.  A grep over the whole history shows that only
      largefiles used it until lfs and infinitepush came along recently.
      
      It looks like if the `if not os.path.isabs(v) or "://" not in v` in configpath()
      is changed to an 'and', both Linux and Windows are happy.  I'm guessing that
      "://" is to pick off URLs, so that seems reasonable.  But I'm not sure why it
      isn't explicitly "file://", and I thought that "file://foo" is relative anyway.
      (At least, there are doctests for file:///tmp in util.url.)  There is no mention
      of this setting in the help, but it is referenced on the wiki page for
      largefiles.  (There's no mention that this is intended to be a URL, and the
      example uses an absolute path.)
      
      I don't want this blocking the rest of the lfs server discovery stuff.  It was
      also wrong to allow a file:// URL here, but not in largefiles.
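
      That is, in ui.configpath() (the expansion body is paraphrased):

          # current: "null://" is not an absolute path, so it gets expanded
          # relative to the repo root or config file
          if not os.path.isabs(v) or "://" not in v:
              v = os.path.join(base, v)

          # proposed: only expand values that are neither absolute nor URL-like
          if not os.path.isabs(v) and "://" not in v:
              v = os.path.join(base, v)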
  11. Apr 08, 2018
    • lfs: infer the blob store URL from paths.default · 092eff6833a7
      Matt Harbison authored
      If `lfs.url` is specified, it takes precedence.  However, now that we support
      serving blobs via hgweb, we shouldn't *require* this setting.  Less
      configuration is better (things will work out of the box once this is sorted
      out), and git has similar functionality.
      
      This is not a complete solution: it isn't able to infer the blob store from an
      explicitly supplied path, and it should consider `paths.default-push` for push.
      The pull solution for that is a bit hacky, and this alone is an improvement for
      the vast majority of cases.
      
      Even though there are only a handful of references to the saved remote store,
      the location of them makes things complicated.
      
        1) downloading files on demand in the revlog flag processor
        2) copying to readonlyvfs with bundlerepo
        3) downloading in the file prefetch hook
        4) the canupload()/skipdownload() checks
        5) uploading blobs
      
      Since revlog doesn't have a repo or ui reference, we can't avoid creating a
      remote store when the extension is loaded.  While the long term goal is to make
      sure the prefetch hook is invoked early for every command for efficiency, this
      handling in the flag processor is needed as a last ditch fetch.
      
      In order to support the clone command, the remote store needs to be created
      later than when the extension loads, since `paths.default` isn't set until just
      before the files are checked out.  Therefore, this patch changes the prefetch
      hook to ignore the saved reference, and build a new one.
      
      The canupload()/skipdownload() checks simply check if the stored instance is a
      `_nullremote`.  Since this can only be set via `lfs.url` (which is reflected in
      the saved reference), checking only the instance created when the extension
      loaded is fine.
      
      The blob uploading function is called from several places:
      
        1) a prepush hook
        2) when writing a new bundle
        3) from infinitepush
      
      The prepush hook gets an exchange.pushop, so it has a path to where the push is
      going.  The bundle writer and infinitepush don't.  Further, bundle creation for
      things like strip and amend is causing blobs to be uploaded.  This seems wrong,
      but I don't want to sidetrack this series by sorting that out, so punt on trying to
      handle explicit push paths or `paths.default-push`.
      
      I also think that sending blobs to a remote store when pushing to a local repo
      is wrong.  This functionality predates the usercache, so perhaps that's the
      reason for it.  I've got some patches floating around to stop sending blobs
      remotely in this case, and instead write directly to the other repo's blob
      store.  But the tests for corruption handling weren't happy with this change,
      and I don't have time to rewrite them.  So exclude filesystem based paths from
      this for now.
      
      I don't think there's much of a chance to implement `paths.remote:lfsurl` style
      configs, given how early these are resolved vs how late the remote store is
      created.  But git has it, so I threw a TODO in there, in case anyone has ideas.
      
      I have no idea why this is now doing http auth twice when it wasn't before.  I
      don't think the original blobstore's url is ever being used in these cases.
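
      The fallback order amounts to this (helper name is hypothetical):

          def _urlfromrepo(repo):
              # lfs.url always wins; otherwise fall back to paths.default
              url = repo.ui.config('lfs', 'url')
              if url is None:
                  url = repo.ui.config('paths', 'default')  # may still be None
              # TODO: a `paths.remote:lfsurl` style suboption, like git has
              return url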
    • lfs: add the ability to disable the usercache · 491edf2435a0
      Matt Harbison authored
      While the usercache is important for real world uses, I've been tripped up more
      than a couple of times by it in tests: thinking a file was being downloaded, but
      it was simply linked from the local cache.  The syntax for setting it is the
      same as for setting a null remote endpoint, and like that endpoint, is left
      undocumented.
      
      This may or may not be a useful feature in the real world (I'd expect any sane
      filesystem to support hardlinks at this point).
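
      That is, mirroring the null remote endpoint syntax:

          [lfs]
          usercache = null://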
  12. Apr 06, 2018
    • revlog: move parsemeta() and packmeta() from filelog (API) · 0596d27457c6
      Gregory Szorc authored
      filelog.parsemeta() and filelog.packmeta() are used to decode
      and encode metadata for file copies and censor.
      
      An upcoming commit will move the core logic for censoring revlogs
      into revlog.py. This would create a cycle between revlog.py and
      filelog.py. So we move these metadata functions to revlog.py.
      
      .. api::
      
         filelog.parsemeta() and filelog.packmeta() have been moved to
         the revlog module.
      
      Differential Revision: https://phab.mercurial-scm.org/D3150
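
      For callers, the move is just an import change (sketch; the signatures
      are unchanged):

          # before
          from mercurial import filelog
          meta, size = filelog.parsemeta(text)

          # after
          from mercurial import revlog
          meta, size = revlog.parsemeta(text)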
  13. Mar 31, 2018
    • lfs: add an experimental knob to disable blob serving · dfb38c4850a9
      Matt Harbison authored
      The use case here is the server admin may want to store the blobs elsewhere.  As
      it stands now, the `lfs.url` config on the client side is all that enforces this
      (the web.allow-* permissions aren't able to block LFS blobs without also
      blocking normal hg traffic).  The real solution to this is to implement the
      'verify' action on the client and server, but that's not a near term goal.
      Whether this is useful in its own right, and should be promoted out of
      experimental at some point, is TBD.
      
      Since the other two tests that deal with LFS and `hg serve` are already complex
      and have #testcases, this seems like a good time to start a new test dedicated
      to access checks against the server.  Instead of conditionally wrapping the
      wire protocol handler, I put this in the handler because I'd still like to bring
      the annotations in from the evolve extension in order to set up the wrapping.
      The 400 status probably isn't great, but that's what it would be for existing
      `hg serve` instances without support for serving blobs.
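
      Assuming the knob is named experimental.lfs.serve (defaulting to on), a
      server admin would disable blob serving with:

          [experimental]
          lfs.serve = False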
  14. Feb 26, 2018
    • lfs: improve the client message when the server signals an object error · 67db84842356
      Matt Harbison authored
      Two things here.  First, the previous message included a snippet of JSON, which
      tends to be long (and in the case of lfs-test-server, has no error message).
      Instead, give a concise message where possible, and leave the JSON to a debug
      output.  Second, the server can signal issues other than a missing individual
      file.  This change shows a corrupt file, but I'm debating letting the corrupt
      file get downloaded, because 1) the error code doesn't really fit, and 2) having
      it locally makes forensics easier.  Maybe need a config knob for that.
  15. Mar 17, 2018
    • lfs: add support for serving blob files · cc0a6ea95d98
      Matt Harbison authored
    • lfs: add server side support for the Batch API · ea6fc58524d7
      Matt Harbison authored
    • lfs: add basic routing for the server side wire protocol processing · a2566597acb5
      Matt Harbison authored
      The recent hgweb refactoring yielded a clean point to wrap a function that could
      handle this, so I moved the routing for this out of the core.  While not an hg
      wire protocol, this seems logically close enough.  For now, these handlers do
      nothing other than check permissions.
      
      The protocol requires support for PUT requests, so that has been added to the
      core, and funnels into the same handler as GET and POST.  The permission
      checking code was assuming that anything not checking 'pull' or None ops should
      be using POST.  But that breaks the upload check if it checks 'push'.  So I
      invented a new 'upload' permission, and used it to avoid the mandate to POST.  A
      function wrap point could be added, but security code should probably stay
      grouped together.  Given that anything not 'pull' or None was requiring POST,
      the comment on hgweb.common.permhooks is probably wrong; there is no 'read'.
      
      The rationale for the URIs is that the spec for the Batch API[1] defines the URL
      as the LFS server url + '/objects/batch'.  The default git URLs are:
      
          Git remote: https://git-server.com/foo/bar
          LFS server: https://git-server.com/foo/bar.git/info/lfs
          Batch API: https://git-server.com/foo/bar.git/info/lfs/objects/batch
      
      '.git/' seems like it's not something a user would normally track.  If we adhere
      to how git defines the URLs, then the hg-git extension should be able to talk to
      a git based server without any additional work.
      
      The URI for the transfer requests starts with '.hg/' to ensure that there are no
      conflicts with tracked files.  Since these are handed out by the Batch API, we
      can change this at any point in the future.  (Specifically, it might be a good
      idea to use something under the proposed /api/ namespace.)  In any case, no
      files are stored at these locations in the repository directory.
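
      Mapped onto an hg server, that yields URLs along these lines (the
      transfer path shown is illustrative):

          Hg repo: https://hg-server.com/foo/bar
          Batch API: https://hg-server.com/foo/bar/.git/info/lfs/objects/batch
          Transfer: https://hg-server.com/foo/bar/.hg/lfs/objects/{oid}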
      
      I started a new module for this because it seems like a good idea to keep all of
      the security sensitive server side code together.  There's also an issue with
      `hg verify` in that it will want to download *all* blobs in order to run.
      Sadly, there's no way in the protocol to ask the server to verify the content of
      a blob it may have.  (The verify action is for storing files on a 3rd party
      server, and then informing the LFS server when that completes.)  So we may end
      up implementing a custom transfer adapter that simply indicates if the blobs are
      valid, and fall back to basic transfers for non-hg servers.  In other words,
      this code is likely to get bigger before this is made non-experimental.
      
      [1] https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md
  16. Mar 15, 2018
    • test-lfs: drop trailing ', ' item separators from debug JSON output · c37c47e47a95
      Matt Harbison authored
      The trailing space looks weird when conditionalizing the line.  The commas
      shouldn't be necessary because of the indenting.  The `lfs-test-server` isn't
      sending all of the same items (notably, the "transfer" attribute is missing), so
      having the commas means more lines need to be conditionalized.