  1. Feb 04, 2018
      cat: call the storage prefetch hook · 264b90a060b7
      Matt Harbison authored
      It's not important to call it in the case of a single file, but maybe it's
      better to do so for consistency.
      archive: call the storage prefetch hook · 533f04d4cb6d
      Matt Harbison authored
      lfs: prefetch lfs blobs during revert · d857cad588e4
      Matt Harbison authored
      The revert command oddly prints out what it will do before requesting the files
      to be prefetched, but the 'need to transfer' line indicates that the blobs are
      being grouped into a single request.
      lfs: prefetch lfs blobs when applying merge updates · 0b79f99fd7b0
      Matt Harbison authored
      In addition to merge, this method ultimately gets called by many commands:
      
        - backout
        - bisect
        - clone
        - fetch
        - graft
        - import (without --bypass)
        - pull -u
        - rebase
        - strip
        - share
        - transplant
        - unbundle
        - update
      
      It's also called by histedit, shelve, unshelve, and split, but for these, the
      related blobs should always be available locally.
      
      For `hg update`, it happens after the normal argument checking and pre-update
      hook processing, and remote corruption is detected prior to manipulating the
      working directory.  Other commands could use this treatment (archive, cat,
      revert, etc.), but this covers so many of the frequently used bulk commands
      that it seems like a good starting point.
      
      Losing the verbose message that prints the file name before a corrupt blob
      aborts the command is a little sad, because there's no easy way to go from oid
      to file name.  I'd like to change that message to list the file name so it looks
      cleaner and less cryptic, but the pointer object is nowhere near where it needs
      to be to do this.  So punt on that for now.
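
      As a rough sketch of the mechanism (this follows Mercurial's file
      prefetch hook pattern; treat the names and signatures below as
      assumptions, not the verbatim patch):

        from mercurial import scmutil

        def _prefetchfiles(repo, revs, match):
            # gather the lfs pointers for the matched files in revs and
            # download any blobs missing from the local store in one
            # batched request
            pass

        # invoked before the working directory is touched, so corruption
        # is detected up front and blobs arrive in a single transfer
        scmutil.fileprefetchhooks.add('lfs', _prefetchfiles)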
  2. Jan 30, 2018
      lfs: emit a status message to indicate how many blobs were uploaded · fa993c3c8462
      Matt Harbison authored
      Previously, there was a progress bar indicating the byte count, but then it
      disappeared once the transfer was done.  Having that value stay on the screen
      seems useful.  Downloads are done one at a time, so hold off on that until they
      can be coalesced, to avoid a series of lines being printed.  (I don't have any
      great ideas on how to do that.  It would be a shame to have to wrap a bunch of
      read commands to be able to do this.)
      
      I'm not sure if the 'lfs:' prefix is the right thing to do here.  The others in
      the test are verbose/debug messages, so in the normal case, this is the only
      line that's prefixed.
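
      For illustration, this is the kind of summary message meant (a hedged
      sketch; the real patch's exact wording may differ):

        from mercurial.i18n import _
        from mercurial import util

        # inside the upload routine: printed once after the batch finishes,
        # and unlike the progress bar it stays on the screen
        ui.status(_('lfs: uploaded %d files (%s)\n')
                  % (len(blobs), util.bytecount(total)))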
  3. Jan 11, 2018
      lfs: remove internal url in test · 2c6ebd0c850e
      Jun Wu authored
      `test-lfs-test-server.t` refers to an FB-internal domain and requires a
      particular implementation at that endpoint (e.g. setting the error code to
      404).  Without any workaround, it should in theory error out with something
      like "Domain cannot be resolved".  I don't know how Matt Harbison ran the
      test.
      
      This patch changes the test to only depend on `lfs-test-server`.
      Unfortunately, the logic has to be changed, since `lfs-test-server` does not
      set the error code to 404 but instead just removes "download" from "actions".
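
      For context, a git-lfs Batch API response can mark an unavailable object
      either way; shown here as a Python literal (values are illustrative):

        response = {
            'objects': [{
                'oid': '31cf...',  # sha256 of the blob, truncated here
                'size': 12,
                # a git-lfs server may report a per-object error code:
                #   'error': {'code': 404, 'message': 'Object does not exist'}
                # lfs-test-server instead just omits the 'download' action:
                'actions': {},
            }],
        }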
      
      Differential Revision: https://phab.mercurial-scm.org/D1849
  4. Jan 07, 2018
      lfs: improve the error message for a missing remote blob · ebf14075a5c1
      Matt Harbison authored
      It seems better to print the name known to the user, not the internal file.
      The previous code unconditionally set 'p.filename', which could leave the
      attribute None; it would then be printed as "None" in
      _gitlfsremote._checkforservererror() instead of "unknown".  Normally, file
      names are printed relative to CWD, but I don't see a way to get the repo path
      to make that adjustment.
      
      The test modified here apparently only runs within Facebook, but a print
      statement confirmed the name change.  I tried uploading the blob to a
      different remote store (so the git server never saw it), killing the git
      server and removing the blob directory, and removing the 'lfs.db' file.  All
      of these resulted in the message:
      
        abort: LFS server claims required objects do not exist:
        bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a!
      
      So I have no idea how to make this test generally runnable.
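
      A minimal sketch of the fallback described above (the error type and
      message text are illustrative, not the verbatim patch):

        from mercurial.i18n import _

        # prefer the user-visible name recorded on the pointer; only fall
        # back to 'unknown' when no name is available
        filename = p.filename or 'unknown'
        raise LfsRemoteError(_('LFS server error for "%s": %r')
                             % (filename, response))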
  5. Dec 22, 2017
      lfs: use the localstore download method to transfer from remote stores · fd610befc37f
      Matt Harbison authored
      Both gitlfsremote and file-based remotes benefit from not requiring the whole
      file in memory (though the whole file is still loaded when passing through the
      revlog interface).  With a method specific to downloading from a remote store,
      the misleading 'use hg verify' hint is removed.  The behavior is otherwise
      unchanged, in that a download from either remote store type yields a copy of
      the blob via util.atomictempfile.
      
      There's no response payload defined for the non-'download' actions, but the
      previous code attempted to read the payload in this case anyway.  The
      refactored code makes that more obvious, so any payload is printed as a debug
      message, just in case.
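
      A minimal sketch of the download path, assuming Mercurial's
      util.atomictempfile and util.filechunkiter helpers (the method name and
      chunk size here are illustrative):

        from mercurial import util

        def download(self, oid, src):
            # stream the blob to a temp file in 1MB chunks; the file is
            # renamed into place only on close, so a failed transfer never
            # leaves a partial blob in the store
            fp = util.atomictempfile(self.vfs.join(oid))
            try:
                for chunk in util.filechunkiter(src, size=1048576):
                    fp.write(chunk)
                fp.close()
            except Exception:
                fp.discard()
                raise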
  6. Jan 03, 2018
      lfs: use the local store method for opening a blob · e8f80529abeb
      Matt Harbison authored
      I noticed that when I cloned without updating and then pushed that clone to
      an lfs server, it only tried to find the blob in the local store.
      
      Writes to the dummyremote (file-based store) use local.read(), which looks at
      both the usercache and the local store.
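
      The lookup order meant here, as a short sketch (the attribute names are
      illustrative):

        def read(self, oid):
            # prefer the shared usercache; fall back to the repo-local store
            if self.cachevfs.exists(oid):
                return self.cachevfs.read(oid)
            return self.vfs.read(oid)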
  7. Dec 24, 2017
      lfs: add the 'lfs' requirement in the changegroup transaction introducing lfs · 6bb940de4c4c
      Matt Harbison authored
      A hook like this is how largefiles manages to do the same.  Largefiles uses a
      changegroup hook, but this uses pretxnchangegroup, because that actually
      causes the transaction to roll back in the unlikely event that writing the
      requirements out fails.  Sadly, the requires file itself isn't rolled back if
      a subsequent hook fails, but that seems like a trivial risk.
      
      Now that commit, changegroup and convert are covered, I don't think there's any
      way to get an lfs repo without the requirement.
      
      The grep exit code is blotted out of the test-lfs-serve.t tests that now show
      the requirement, because run-tests.py doesn't support conditionalizing the
      exit code.
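
      A hedged sketch of the hook wiring described above (the helper that
      detects lfs usage in the incoming changesets is assumed):

        def checkrequireslfs(ui, repo, **kwargs):
            if 'lfs' in repo.requirements:
                return 0
            # stand-in for scanning the incoming changesets for
            # lfs-flagged file revisions
            if changegroupintroduceslfs(repo, kwargs):
                repo.requirements.add('lfs')
                repo._writerequirements()
            return 0

        # registered from reposetup(); pretxnchangegroup (rather than
        # changegroup) means a failure here rolls back the transaction
        ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs)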
  8. Nov 17, 2017
      lfs: verify lfs object content when transferring to and from the remote store · 417e8e040102
      Matt Harbison authored
      This avoids inserting corrupt files into the usercache and the local and
      remote stores.  One downside is that the bad file won't be available locally
      for forensic purposes after a remote download.  I'm thinking about adding an
      'incoming' directory to the local lfs store to handle the download, and then
      moving it to the 'objects' directory after it passes verification.  That would
      have the additional benefit of not concatenating each transfer chunk in memory
      until the full file is transferred.
      
      Verification isn't needed when the data is passed back through the revlog
      interface or when the oid was just calculated, but otherwise it is on by
      default.  The additional overhead should be well worth avoiding problems with
      file based remote stores, or buggy lfs servers.
      
      Having two different verify functions is a little sad, but the full data of the
      blob is mostly passed around in memory, because that's what the revlog interface
      wants.  The upload function, however, chunks up the data.  It would be ideal if
      that was how the content is always handled, but that's probably a huge project.
      
      I don't really like printing the long hash, but `hg debugdata` isn't a public
      interface, and is the only way to get it.  The filelog and revision info is
      nowhere near this area, so recommending `hg verify` is the easiest thing to do.
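
      The check itself is small: an lfs oid is the sha256 digest of the blob's
      content, so verification is a digest comparison (a sketch, with an
      illustrative error type):

        import hashlib

        from mercurial.i18n import _

        def _verify(oid, content):
            # an lfs oid is the sha256 digest of the blob's content
            realoid = hashlib.sha256(content).hexdigest()
            if realoid != oid:
                raise LfsCorruptionError(_('detected corrupt lfs object: %s')
                                         % oid)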
  9. Nov 17, 2017
      test-lfs: add tests around corrupted lfs objects · 16660fd4428d
      Matt Harbison authored
      These are mostly tests against file:// based remote stores, because that's what
      we have the most control over.
      
      The test uploading a corrupt blob to lfs-test-server demonstrates an overly
      broad exception handler in the retry loop.  A corrupt blob is actually
      transferred in a download, but eventually caught when it is accessed (only
      after it leaves the corrupt file in a couple of places locally).  I don't
      think we want
      to trust random 3rd party implementations, and this would be a problem if there
      were a `debuglfsdownload` command that simply cached the files.  And given the
      cryptic errors, we should probably validate the file hash locally before
      uploading, and also after downloading.
  10. Dec 19, 2017
      lfs: add note messages indicating what store holds the lfs blob · 02f54a1ec9eb
      Matt Harbison authored
      The following corruption-related patches were written prior to adding the
      user-level cache, and it took a while to track down why the tests changed.
      (It generally made things more resilient.)  But I think this will be useful to
      the end user as well.  I didn't make it --debug level, because there can be a
      ton of info coming out of clone/push/pull --debug.  The pointers are sorted
      for test stability.
      
      I opted for ui.note() instead of checking ui.verbose and then using ui.write()
      for convenience, but I see most of this extension does the latter.  I have no
      idea what the preferred form is.
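
      The two forms in question, side by side (the message text here is
      illustrative):

        from mercurial.i18n import _

        # ui.note() only writes when --verbose is in effect ...
        ui.note(_('lfs: found %s in the local lfs store\n') % oid)

        # ... which is what the explicit check spells out
        if ui.verbose:
            ui.write(_('lfs: found %s in the local lfs store\n') % oid)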
  11. Dec 12, 2017
      lfs: using workers in lfs prefetch · f98fac24b757
      Wojciech Lis authored
      This significantly speeds up lfs prefetch.  With a fast network, we are
      seeing a ~50% improvement in overall prefetch times.  Because of the worker
      API on posix, we lose fine-grained progress updates and only see progress
      when a file finishes downloading.
      
      Test Plan:
      Run the tests:
      
        ./run-tests.py -l test-lfs*
        ....
        # Ran 4 tests, 0 skipped, 0 failed.
      
      Run commands resulting in lfs prefetch, e.g. `hg sparse --enable-profile`.
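
      A hedged sketch of the worker fan-out (Mercurial's worker.worker API,
      with an assumed per-item _transfer function):

        from mercurial import worker

        # fork workers when the job is big enough; each worker runs
        # _transfer() over its slice of the oids and yields results back,
        # which is why per-byte progress granularity is lost
        results = worker.worker(ui, 0.1, _transfer, (self,), sorted(oids))
        for _count, oid in results:
            ui.debug('lfs: transferred %s\n' % oid)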
      
      Differential Revision: https://phab.mercurial-scm.org/D1568
  12. Dec 07, 2017
      lfs: introduce a user level cache for lfs files · 8e72f9152c4d
      Matt Harbison authored
      This is the same mechanism in place for largefiles, and solves several problems
      working with multiple local repositories.  The existing largefiles method is
      reused in place, because I suspect that there are other functions that can be
      shared.  If we wait a bit to identify more before `hg cp lfutil.py ...`, the
      history will be easier to trace.
      
      The push between repo14 and repo15 in test-lfs.t arguably shouldn't be uploading
      any files with a local push.  Maybe we can revisit that when `hg push` without
      'lfs.url' can upload files to the push destination.  Then it would be consistent
      for blobs in a local push to be linked to the local destination's cache.
      
      The cache property is added to run-tests.py, the same as the largefiles
      property, so that test-generated files don't pollute the real location.
      Having files available locally broke a couple of existing lfs-test-server
      tests, so the cache is cleared in a few places to force file download.
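
      For illustration, the kind of per-user cache location involved (real
      Mercurial derives this per platform, reusing the largefiles helper; the
      path logic below is an assumption):

        import os

        def usercachedir(name='lfs'):
            # e.g. ~/.cache/lfs on Linux; Windows and macOS use their own
            # conventional cache roots
            base = os.environ.get('XDG_CACHE_HOME',
                                  os.path.expanduser('~/.cache'))
            return os.path.join(base, name)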
  13. Nov 21, 2017
      test-lfs: allow the test server to be killed on Windows · 32bb27dd5282
      Matt Harbison authored
      Apparently '$!' doesn't return a Win32 PID, so the process was never killed, and
      the next run was screwed up.  Oddly, without the explicit killdaemons.py at the
      end, the test seems to hang.  This spawning is just sad, so I limited it to
      Windows.
  14. Nov 14, 2017
      lfs: import the Facebook git-lfs client extension · 66c5a8cf2868
      Matt Harbison authored
      The purpose of this is the same as the built-in largefiles extension: to
      handle huge files outside of the normal storage system, generally to reduce
      the amount of data that has to be cloned.  There are several benefits to
      implementing the git-lfs protocol instead of using the largefiles extension:
      
        - Bitbucket and Github support (and probably wider support in 3rd party
          hosting sites in general). [1][2]
      
        - The number of hg internals monkey patched is several orders of magnitude
          lower, so it will be easier to reason about and maintain.  Future commands
          will likely just work, without requiring various wrappers.
      
        - The "standin" files are only written to the filelog, not the disk.  That
          should avoid weird edge cases where the largefile and standin files get out
          of sync. [3]  It also avoids the occasional printing of the "hidden" standin
          file in various messages.
      
        - Filesets like size() will work, even if the file isn't present.  (It always
          says 41 bytes for largefiles, whether present or not.)
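
      For reference on the "standin" point above, the pointer stored in the
      filelog follows the git-lfs pointer spec, e.g. (values illustrative):

        version https://git-lfs.github.com/spec/v1
        oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
        size 12345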
      
      The only place I see where largefiles comes out on top is that it works with
      `hg serve` for simple sharing, without external infrastructure.  Getting
      lfs-test-server working was a hassle, and took a while to figure out.  Maybe
      we can do something to make it work in the future.
      
      Long term, I expect that this will be highly preferred over largefiles.  But
      if we are to recommend this to largefiles users, there are some UI issues to
      bikeshed.  Until they are resolved, I've marked this experimental, and am not
      putting a pointer to this in the largefiles help.  The (non-exhaustive) list
      of issues I've seen so far:
      
        - It isn't sufficient to just enable the largefiles extension; you have to
          explicitly add a file with --large before it will pay attention to the
          configured sizes and patterns on future adds.  The justification being
          that once you use it, you're stuck with it.  I've seen people confused by
          this, and haven't liked it myself.  But it's also saved me a few times.
          Should we do something like have a specific enabling config setting that
          must be set in the local repo config, so that enabling this extension in
          the user or system hgrc doesn't silently start storing lfs files?
      
        - The largefiles extension adds a repo requirement when the first largefile is
          committed, so that the extension must always be enabled in the future.  This
          extension is not doing that, and since I only enabled it locally to avoid
          infecting other repos, I got a cryptic error about missing flag processors
          when I cloned.  Is there no repo requirement due to shallow/narrow clone
          considerations (or other future advanced things)?
      
        - In the (small amount of) reading I've done about the git implementation, it
          seems that the files and sizes are stored in a tracked .gitattributes file.
          I think a tracked file for this would be extremely useful for consistency
          across developers, but this kind of touches on the tracked hgrc file
          proposal a few months back.
      
        - The git client can specify file patterns, not just sizes.
      
        - The largefiles extension has a cache directory in the local repo, but also a
          system wide one.  We should probably implement a system wide cache too, so
          that multiple clones don't have to refetch the files from the server.
      
        - Jun mentioned other missing features, like SSH authentication, gc, etc.
      
      The code corresponds to c0492b73c7ef in hg-experimental. [4]  The only tweaks
      are to load the extension in the tests with 'lfs=' instead of
      'lfs=$TESTDIR/../hgext3rd/lfs', change the import in the *.py test to hgext
      (from hgext3rd), add the 'testedwith' declaration, and mark it experimental for
      now.  The infinite-push, p4fastimport, and remotefilelog tests were left behind.
      
      The devel-warnings for unregistered config options are not corrected yet, nor
      are the import check warnings.
      
      [1] https://www.mercurial-scm.org/pipermail/mercurial/2017-November/050699.html
      [2] https://bitbucket.org/site/master/issues/3843/largefiles-support-bb-3903
      [3] https://bz.mercurial-scm.org/show_bug.cgi?id=5738
      [4] https://bitbucket.org/facebook/hg-experimental