  1. Jan 07, 2018
    • lfs: improve the error message for a missing remote blob · ebf14075
      Matt Harbison authored
      It seems better to print the name known to the user, not the internal file.  The
      previous code unconditionally set 'p.filename'.  That potentially made the
      attribute None, and would be printed as such in
      _gitlfsremote._checkforservererror() instead of "unknown".  Normally, files are
      printed relative to CWD, but I don't see a way to get the repo path to make that
      adjustment.
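
      Roughly, the fix boils down to falling back to a placeholder name instead of
      printing None.  A self-contained sketch (the class and helper below are made
      up for illustration, not the extension's actual code):

        class fakepointer(object):
            def __init__(self, oid, filename=None):
                self.oid = oid
                self.filename = filename

        def blobdisplayname(p):
            # Fall back to "unknown" rather than printing None when the
            # pointer carries no user-visible file name.
            return p.filename if p.filename is not None else 'unknown'

        print(blobdisplayname(fakepointer('bdc26931...', 'big/data.bin')))
        print(blobdisplayname(fakepointer('bdc26931...')))  # prints "unknown"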
      
      The test modified here apparently only runs within Facebook, but a print
      statement confirmed the name change.  I tried uploading the blob to a different
      remote store (so the git server never saw it), killing the git server and
      removing the blob directory, and removing the 'lfs.db' file.  All resulted in
      this message:
      
        abort: LFS server claims required objects do not exist:
        bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a!
      
      So I have no idea how to make this test generally runnable.
    • lfs: remove the verification option when writing to the local store · a7741809
      Matt Harbison authored
      This partially reverts 417e8e040102 and bb6a80fc969a.  But since there's now a
      dedicated download function, there's no functional change.  The last sentence in
      the commit message of the latter is wrong: write() didn't need the one-time hash
      check if verification wasn't requested.  I suspect I meant 'read()' there
      ("... but _read()_ also needs to do a one time check..."), because that did fail
      without the hash check before linking to the usercache.  The write() method
      simply performed the same check for consistency.
      
      While here, clarify that the write() method is *only* for storing content
      directly from filelog, which has already checked the hash.
      
      If someone can come up with a way to bridge the differences between writing to a
      file and sending a urlreq.request across the wire, we can create an upload()
      function and clean up read() in a similar way.  About the only common thread I
      see is an open() that verifies the content before returning a file descriptor.
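
      Until then, the split described above looks roughly like the following sketch
      (hypothetical class and method names, not the extension's blobstore API):

        import hashlib
        import os

        class localstore(object):
            def __init__(self, root):
                self.root = root

            def _path(self, oid):
                return os.path.join(self.root, oid)

            def write(self, oid, data):
                # Content comes straight from the filelog, which has already
                # verified the hash, so no extra check is done here.
                with open(self._path(oid), 'wb') as fp:
                    fp.write(data)

            def download(self, oid, chunks):
                # Content came over the wire: hash it as it streams in, and
                # refuse to keep a blob whose digest doesn't match the oid.
                sha = hashlib.sha256()
                with open(self._path(oid), 'wb') as fp:
                    for chunk in chunks:
                        sha.update(chunk)
                        fp.write(chunk)
                if sha.hexdigest() != oid:
                    os.unlink(self._path(oid))
                    raise ValueError('corrupt remote blob %s' % oid)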
  2. Dec 23, 2017
    • lfs: show a friendly message when pushing lfs to a server without lfs enabled · fa865878
      Matt Harbison authored
      Upfront disclaimer: I don't know anything about the wire protocol, and this was
      pretty much cargo-culted from largefiles, and then from clonebundles, since the
      latter seems more modern.  I was surprised that exchange.push() will ensure all
      of the proper requirements when exchanging between two local repos, but doesn't
      care when one is remote.
      
      All this new capability marker does is inform the client that the extension is
      enabled remotely.  The remote repository may or may not contain commits with
      external blobs.
      
      Open issues:
      
        - largefiles uses 'largefiles=serve' for its capability.  Someday I hope to
          be able to push lfs blobs to an `hg serve` instance.  That will probably
          require a distinct capability.  Should it change to '=serve' then?  Or just
          add an 'lfs-serve' capability then?
      
        - The flip side of this is more complicated.  It looks like largefiles adds an
          'lheads' command for the client to signal to the server that the extension
          is loaded.  That is then converted to 'heads' and sent through the normal
          wire protocol plumbing.  A client using the 'heads' command directly is
          kicked out with a message indicating that the largefiles extension must be
          loaded.  We could do something similar with 'lfsheads', but then a repo with
          both
          largefiles and lfs blobs can't be pushed over the wire.  Hopefully somebody
          with more wire protocol experience can think of something else.  I see
          'x-hgarg-1' on some commands in the tests, but not on heads, and didn't dig
          any further.
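
      For reference, a capability of this sort is typically advertised by wrapping
      the server's capability list; a hedged sketch (the exact module and hook names
      here are assumptions, and may differ from what this change actually touches):

        from mercurial import (
            extensions,
            wireproto,
        )

        def _capabilities(orig, repo, proto):
            # Advertise that the lfs extension is enabled on the server side.
            caps = orig(repo, proto)
            caps.append('lfs')
            return caps

        def extsetup(ui):
            extensions.wrapfunction(wireproto, '_capabilities', _capabilities)

      A client can then check something like remote.capable('lfs') before pushing,
      and print the friendly message instead of a more cryptic failure.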
  3. Nov 17, 2017
    • lfs: verify lfs object content when transferring to and from the remote store · 417e8e04
      Matt Harbison authored
      This avoids inserting corrupt files into the usercache and the local and remote
      stores.  One downside is that the bad file won't be available locally for
      forensic purposes after a remote download.  I'm thinking about adding an
      'incoming' directory to the local lfs store to handle the download, and then
      move it to the 'objects' directory after it passes verification.  That would
      have the additional benefit of not concatenating each transfer chunk in memory
      until the full file is transferred.
      
      Verification isn't needed when the data is passed back through the revlog
      interface or when the oid was just calculated, but otherwise it is on by
      default.  The additional overhead should be well worth avoiding problems with
      file based remote stores, or buggy lfs servers.
      
      Having two different verify functions is a little sad, but the full data of the
      blob is mostly passed around in memory, because that's what the revlog interface
      wants.  The upload function, however, chunks up the data.  It would be ideal if
      that was how the content is always handled, but that's probably a huge project.
      
      I don't really like printing the long hash, but `hg debugdata` isn't a public
      interface, and is the only way to get it.  The filelog and revision info is
      nowhere near this area, so recommending `hg verify` is the easiest thing to do.
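
      The two verification shapes mentioned above boil down to roughly this
      (illustrative helpers, not the extension's actual functions):

        import hashlib

        def verifydata(oid, data):
            # Whole blob already in memory (the revlog-facing path).
            if hashlib.sha256(data).hexdigest() != oid:
                raise ValueError('detected corrupt lfs object: %s' % oid)

        def verifychunks(oid, chunks):
            # Streaming path (upload): fold the hash in chunk by chunk and
            # only complain once the full content has passed through.
            sha = hashlib.sha256()
            for chunk in chunks:
                sha.update(chunk)
                yield chunk
            if sha.hexdigest() != oid:
                raise ValueError('detected corrupt lfs object: %s' % oid)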
  4. Dec 19, 2017
    • lfs: add note messages indicating what store holds the lfs blob · 02f54a1e
      Matt Harbison authored
      The following corruption-related patches were written prior to adding the
      user-level cache, and it took a while to track down why the tests changed.
      (The cache generally made things more resilient.)  But I think these messages
      will be useful to the end user as well.  I didn't make them --debug level,
      because there can be a ton of
      info coming out of clone/push/pull --debug.  The pointers are sorted for test
      stability.
      
      For convenience, I opted for ui.note() instead of checking ui.verbose and then
      using ui.write(), but I see most of this extension does the latter.  I don't
      know which form is preferred.
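
      For comparison, the two styles look like this (the message text is just a
      placeholder):

        def reportstore(ui, oid, store):
            # Style used here: ui.note() only emits under --verbose.
            ui.note('lfs: found %s in the %s store\n' % (oid, store))

        def reportstore_explicit(ui, oid, store):
            # Style used in much of the rest of the extension.
            if ui.verbose:
                ui.write('lfs: found %s in the %s store\n' % (oid, store))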
  5. Nov 23, 2017
    • lfs: add a repo requirement for this extension when converting to lfs · f8f939a2
      Matt Harbison authored
      This covers both the vanilla repo -> lfs repo and largefiles -> lfs conversions.
      The largefiles extension adds the requirement directly, because it has a
      dedicated command to convert.  Using the convert extension is better, because it
      supports more features.
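
      The requirement bit itself is small; a hedged sketch, assuming the repo
      object's requirements set and private _writerequirements() helper of this era:

        def addlfsrequirement(repo):
            # Record that the lfs extension is needed to read this repo.
            if 'lfs' not in repo.requirements:
                repo.requirements.add('lfs')
                repo._writerequirements()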
      
      I'd like ideas about how to ensure that converting away from lfs works on all
      files.  (See comments in test-lfs.t)
  6. Nov 14, 2017
    • lfs: quiesce check-module-import warnings · b8e5fb8d
      Matt Harbison authored
      Specifically, 'symbol import follows non-symbol import: mercurial.i18n'
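
      In other words, the checker wants the symbol import of the translation helper
      ahead of the plain module imports.  An illustrative ordering (assuming the
      convention of this era; 'util' is only a placeholder module here):

        from __future__ import absolute_import

        from mercurial.i18n import _
        from mercurial import (
            util,
        )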
    • lfs: import the Facebook git-lfs client extension · 66c5a8cf
      Matt Harbison authored
      The purpose of this is the same as that of the built-in largefiles extension:
      to handle huge files outside of the normal storage system, generally to reduce
      the amount of data that has to be cloned.  There are several benefits to
      implementing the
      git-lfs protocol, instead of using the largefiles extension:
      
        - Bitbucket and Github support (and probably wider support in 3rd party
          hosting sites in general). [1][2]
      
        - The number of hg internals monkey-patched is several orders of magnitude
          lower, so it will be easier to reason about and maintain.  Future commands
          will likely just work, without requiring various wrappers.
      
        - The "standin" files are only written to the filelog, not the disk.  That
          should avoid weird edge cases where the largefile and standin files get out
          of sync. [3]  It also avoids the occasional printing of the "hidden" standin
          file in various messages.
      
        - Filesets like size() will work, even if the file isn't present.  (It always
          says 41 bytes for largefiles, whether present or not.)
      
      The only place that I see where largefiles comes out on top is that it works
      with `hg serve` for simple sharing, without external infrastructure.  Getting
      lfs-test-server working was a hassle, and took a while to figure out.  Maybe we
      can do something to make it work in the future.
      
      Long term, I expect that this will be highly preferred over largefiles.  But if
      we are to recommend this to largefile users, there are some UI issues to
      bikeshed.  Until they are resolved, I've marked this experimental, and am not
      putting a pointer to this in the largefiles help.  The (non-exhaustive) list of
      issues I've seen so far is:
      
        - It isn't sufficient to just enable the largefiles extension: you have to
          explicitly add a file with --large before it will pay attention to the
          configured sizes and patterns on future adds.  The justification is that
          once you use it, you're stuck with it.  I've seen people confused by this,
          and haven't liked it myself.  But it's also saved me a few times.  Should we
          do something like have a specific enabling config setting that must be set
          in the local repo config, so that enabling this extension in the user or
          system hgrc doesn't silently start storing lfs files?  (A sketch of such a
          config follows this list.)
      
        - The largefiles extension adds a repo requirement when the first largefile is
          committed, so that the extension must always be enabled in the future.  This
          extension is not doing that, and since I only enabled it locally to avoid
          infecting other repos, I got a cryptic error about missing flag processors
          when I cloned.  Is there no repo requirement due to shallow/narrow clone
          considerations (or other future advanced things)?
      
        - In the (small amount of) reading I've done about the git implementation, it
          seems that the files and sizes are stored in a tracked .gitattributes file.
          I think a tracked file for this would be extremely useful for consistency
          across developers, but this kind of touches on the tracked hgrc file
          proposal a few months back.
      
        - The git client can specify file patterns, not just sizes.
      
        - The largefiles extension has a cache directory in the local repo, but also a
          system wide one.  We should probably implement a system wide cache too, so
          that multiple clones don't have to refetch the files from the server.
      
        - Jun mentioned other missing features, like SSH authentication, gc, etc.
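
      As a point of reference for the enabling-config question above, a minimal hgrc
      for trying the extension looks roughly like this (the option names are those
      of this experimental version and may well change):

        [extensions]
        lfs =

        [lfs]
        # remote git-lfs endpoint holding the blobs
        url = https://lfs.example.com/repo
        # files over this size are stored as lfs blobs on commit
        threshold = 10M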
      
      The code corresponds to c0492b73c7ef in hg-experimental. [4]  The only tweaks
      are to load the extension in the tests with 'lfs=' instead of
      'lfs=$TESTDIR/../hgext3rd/lfs', change the import in the *.py test to hgext
      (from hgext3rd), add the 'testedwith' declaration, and mark it experimental for
      now.  The infinite-push, p4fastimport, and remotefilelog tests were left behind.
      
      The devel-warnings for unregistered config options are not corrected yet, nor
      are the import check warnings.
      
      [1] https://www.mercurial-scm.org/pipermail/mercurial/2017-November/050699.html
      [2] https://bitbucket.org/site/master/issues/3843/largefiles-support-bb-3903
      [3] https://bz.mercurial-scm.org/show_bug.cgi?id=5738
      [4] https://bitbucket.org/facebook/hg-experimental