  1. Sep 20, 2018
    • lfs: don't add extension to hgrc after clone or share (BC) · bcf72d7b
      Gregory Szorc authored
      Now that repository loading in core supports automatically loading
      the lfs extension when the "lfs" requirement is present, we no
      longer need to update the .hg/hgrc of newly-created repos to load
      the lfs extension!
      
      I'm marking this as BC because it is a change in behavior. But users
      should not notice unless they create an LFS repo with new Mercurial
      and then attempt to use it with an older version that doesn't support
      automatic extension loading.
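
      For reference, this removes the step where clone/share wrote an entry along
      these lines into the new repo's .hg/hgrc (a sketch of the old behavior, not
      an exact reproduction of the generated file):

        [extensions]
        lfs =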
      
      Differential Revision: https://phab.mercurial-scm.org/D4712
    • localrepo: automatically load lfs extension when required (BC) · 2c2fadbc
      Gregory Szorc authored
      If an unrecognized requirement is present (possibly due to an unloaded
      extension), the user will get an error message telling them to go to
      https://mercurial-scm.org/wiki/MissingRequirement for more info.
      
      And some requirements clearly map to known extensions shipped by
      Mercurial.
      
      This commit teaches repository loading to automatically map
      requirements to extensions. We implement support for loading the
      lfs extension when the "lfs" requirement is present.
      
      This behavior feels more user-friendly to me and I'm having trouble
      coming up with a compelling reason not to do it. The strongest
      argument against it is that - strictly speaking - requirements
      are general repository features and there could be N providers of a
      given feature. In the case of LFS, for example, there could be another
      extension implementing LFS support, and the user would want to use this
      non-official extension rather than the built-in one. The way this
      patch implements things, the non-official extension could be
      missing and Mercurial would load the official lfs extension, leading
      to unexpected behavior. But this feels like a highly marginal use
      case to me and doesn't outweigh the user benefit of "it just works."
      If someone really wanted to use a custom LFS extension, for example, they
      could prevent the built-in one from being loaded by either defining
      "extensions.lfs=/path/to/custom/extension" or "extensions.lfs=!",
      as the automatic extension loading only occurs if there is no config
      entry for that extension.
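
      Concretely, an hgrc entry along either of these lines suppresses the
      automatic loading (the custom path is illustrative):

        [extensions]
        # point at a custom implementation...
        lfs = /path/to/custom/extension
        # ...or disable lfs entirely
        # lfs = !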
      
      Differential Revision: https://phab.mercurial-scm.org/D4711
  2. Sep 19, 2018
  3. Sep 04, 2018
  4. Jun 09, 2018
    • fileset: rewrite predicates to return matcher not closed to subset (API) (BC) · ff5b6fca
      Yuya Nishihara authored
      This makes fileset expressions open to any input, so that we can just say
      "hg status 'set: not binary()'" to select text files including unknowns.
      
      With this and removal of subset computation, 'set:**' becomes as fast as
      'glob:**'. Further optimization will probably be possible, for example by
      narrowing the file tree used to compute status.
      
      This also fixes 'subrepo()' to not ignore the current mctx.subset.
      
      .. bc::
      
         The fileset expression may include untracked files by default. Use
         ``tracked()`` to explicitly filter out files not existing at the context
         revision.
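
      For example (a sketch based on the commands and predicates mentioned above):

        $ hg status 'set:not binary()'                # may include untracked files
        $ hg status 'set:tracked() and not binary()'  # only files tracked at the revision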
  5. Jun 21, 2018
  6. May 15, 2018
  7. Apr 06, 2018
  8. Apr 03, 2018
  9. Mar 06, 2018
    • merge with stable · 7bf80d9d
      Gregory Szorc authored
      There were a handful of merge conflicts in the wire protocol code due
      to significant refactoring in default. When resolving the conflicts,
      I tried to produce the minimal number of changes to make the incoming
      security patches work with the new code.
      
      I will send some follow-up commits to get the security patches better
      integrated into default.
  10. Feb 07, 2018
    • changegroup: do not delta lfs revisions · d031609b
      Jun Wu authored
      There is no way to distinguish whether a delta base is LFS or non-LFS.
      
      If the delta is against LFS rawtext, and the client trying to apply it has
      the base revision stored as fulltext, the delta (i.e. the bundle) will fail to
      apply.
      
      This patch forbids using deltas for LFS revisions in changegroups so bad
      deltas won't be transmitted.
      
      Note: this does not solve the problem entirely. It handles an LFS delta
      being applied to a non-LFS base, but the other direction - a non-LFS delta
      applied to an LFS base - is not solved yet.
      
      Differential Revision: https://phab.mercurial-scm.org/D2067
  11. Mar 03, 2018
  12. Jan 28, 2018
  13. Jan 27, 2018
    • lfs: add a fileset for detecting lfs files · eefb5d60
      Matt Harbison authored
      This currently has the same limitation as {lfs_files}, namely it doesn't report
      removed files.
      
      We may want a dedicated 'lfs()' revset for efficiency, but combining this with
      the 'contains()' revset should be equivalent for now.  Combining with
      'set:added()' or 'set:modified()' inside 'files()' should be equivalent to a
      hypothetical lfs_adds() and lfs_modifies().  I wonder if there's a way to tweak
      the filesets to evaluate lazily, to close the efficiency gap.
      
      It would also be interesting to come up with a template filter for '{files}'
      that looked at the pattern to 'files()', and filtered appropriately.  While
      passing a fileset as the pattern to `hg log` does filter '{files}', the set is
      evaluated against the working directory, so there's no way to list all non-lfs
      files above a certain size in all revisions, for example.
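
      A sketch of the combinations described above (the exact command lines are
      illustrative):

        $ hg log -r "contains('set:lfs()')"            # revisions containing lfs files
        $ hg log -r "files('set:added() and lfs()')"   # revisions adding lfs files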
  14. Feb 04, 2018
    • lfs: prefetch lfs blobs when applying merge updates · 0b79f99f
      Matt Harbison authored
      In addition to merge, this method ultimately gets called by many commands:
      
        - backout
        - bisect
        - clone
        - fetch
        - graft
        - import (without --bypass)
        - pull -u
        - rebase
        - strip
        - share
        - transplant
        - unbundle
        - update
      
      It's also called by histedit, shelve, unshelve, and split, but it
      seems that the related blobs should always be available locally for these.
      
      For `hg update`, it happens after the normal argument checking and pre-update
      hook processing, and remote corruption is detected prior to manipulating the
      working directory.  Other commands could use this treatment (archive, cat,
      revert, etc), but this covers so many of the frequently used bulk commands, it
      seems like a good starting point.
      
      Losing the verbose message that prints the file name before a corrupt blob
      aborts the command is a little sad, because there's no easy way to go from oid
      to file name.  I'd like to change that message to list the file name so it looks
      cleaner and less cryptic, but the pointer object is nowhere near where it needs
      to be to do this.  So punt on that for now.
  15. Jan 30, 2018
    • lfs: don't require the .hglfs file to be tracked to control the policy · 4425790f
      Matt Harbison authored
      The .hgignore file doesn't need to be tracked, nor does its git equivalent.
      I'm still a little concerned about the effects of forgetting to commit this
      file.  But the fact that conversions preserve the hashes when only the choice
      between normal and external storage changes should make this less risky.
  16. Jan 25, 2018
  17. Jan 24, 2018
  18. Jan 22, 2018
  19. Jan 20, 2018
  20. Jan 14, 2018
    • lfs: add the '{lfsattrs}' template keyword to '{lfs_files}' · f58245b9
      Matt Harbison authored
      This provides access to the metadata dictionary contained within the tracked
      pointer file.  The OID is probably the most important attribute, and has its own
      keyword.  But we might as well have this for completeness.
      
      I liked {pointer} better, but couldn't make it work with the singular/plural
      forms.
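
      A hedged usage sketch (the {file} and {lfsoid} member names are assumptions
      based on the description above):

        $ hg log -r . -T '{lfs_files % "{file}: {lfsoid}\n"}'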
    • lfs: control tracked file selection via a tracked file · 1ad1e59b
      Matt Harbison authored
      Since the lfs tracking policy can dramatically affect the repository, it makes
      more sense to have the policy file checked in than to rely on all developers
      configuring their .hgrc properly.  The inspiration for this is the .hgeol file.
      The configuration lives under '[track]', so that other things can be added in
      the future.  Eventually, the config option should be limited to `convert` only.
      
      If the file can't be parsed for any reason (including unrecognized elements of
      the minifileset language), the commit will abort until the problem is corrected.
      This seems more useful than the warning that hgeol emits, and has no effect on
      reading the data, so there are no compatibility concerns.
      
      My initial thought was to read the file and change each "key = value" line into
      "((key) & (value))", so that the lines could be ORed together and compiled in a
      single pass.  Unfortunately, that prevents exclusions if there's a
      catchall rule.  Consider what happens to a large *.c file here:
      
        [track]
        **.c = none()
        ** = size('>1MB')
        # ((**.c) & (none())) | ((**) & (size('>1MB'))) => anything > 1MB
      
      I also thought about having separate [include] and [exclude] sections.  But that
      just seems to open things up to user mistakes.  Consider:
      
        [include]
        **.zip = all()
        **.php = size('>10MB')
      
        [exclude]
        **.zip = all()  # Who wins?
        **.php = none() # Effectively 'all()' (i.e. nothing excluded), or >10MB ?
      
      Therefore, it just compiles each key and value separately, and walks the
      entries until a key matches the file.  I'm not sure how to enforce just
      file patterns on LHS
      without leaking knowledge about the minifileset here.  That means this will
      allow odd looking lines like this:
      
        [track]
        **.c | **.txt = none()
      
      But that's also fewer lines to compile, so slightly more efficient?  Some things
      like 'none()' won't work as expected on LHS though, because that won't match, so
      that line is skipped.  For now, these quirks are not mentioned in the
      documentation.
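
      Under that first-match-wins evaluation, the earlier example behaves as
      intended:

        [track]
        # matched first, so even large C files stay out of lfs
        **.c = none()
        # everything else larger than 1MB goes to lfs
        ** = size('>1MB')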
      
      Jun previously expressed concern about efficiency when scaling to large repos,
      so I tried avoiding 'repo[None]'.  (localrepo.commit() gets repo[None] already,
      but doesn't tie it to the workingcommitctx used here.)  Therefore, I looked at
      the passed context for 'AMR' status.  But that doesn't help with the normal case
      where the policy file is tracked, but clean.  That requires looking up p1() to
      read the file.  I don't see any way to get the content of one file without first
      creating the full parent context.
  21. Jan 17, 2018
    • lfs: allow the pointer file to be viewed with `hg cat -T '{rawdata}'` · a9858349
      Matt Harbison authored
      The only other interface to this data is `hg debugdata`, which requires
      knowledge of the filelog revision that corresponds to the changeset.  Since the
      data is uninterpreted, this is an important debugging capability, and needs to
      be simpler to use than that.
      
      For non-LFS files, this displays the regular data.
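
      A sketch of the expected output for an lfs-tracked file (file name, oid, and
      size are made up):

        $ hg cat -T '{rawdata}' -r . large.bin
        version https://git-lfs.github.com/spec/v1
        oid sha256:deadbeef...
        size 1048576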
      
      Alternatively, we could forgo the messy function extraction in the last patch
      if this template keyword can just be added unconditionally.
  22. Jan 14, 2018
  23. Dec 31, 2017
  24. Dec 22, 2017
    • lfs: use the localstore download method to transfer from remote stores · fd610bef
      Matt Harbison authored
      Both gitlfsremote and file-based remotes benefit from not requiring the whole
      file in memory (though the whole file is still loaded when passing through the
      revlog interface).  With a method specific to downloading from a remote store,
      the misleading 'use hg verify' hint is removed.  The behavior is otherwise
      unchanged, in that a download from either remote store type will yield a copy
      of the blob via util.atomictempfile.
      
      There's no response payload defined for the non-'download' actions, but the
      previous code attempted to read the payload in this case anyway.  The
      refactored code makes that more obvious, so any payload is printed as a debug
      message, just in case.
  25. Dec 24, 2017
    • lfs: add the 'lfs' requirement in the changegroup transaction introducing lfs · 6bb940de
      Matt Harbison authored
      A hook like this is how largefiles manages to do the same.  Largefiles uses a
      changegroup hook, but this uses pretxnchangegroup because that actually causes
      the transaction to roll back in the unlikely event that writing the requirements
      out fails.  Sadly, the requires file itself isn't rolled back if a subsequent
      hook fails, but that seems trivial.
      
      Now that commit, changegroup and convert are covered, I don't think there's any
      way to get an lfs repo without the requirement.
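
      A quick way to check for the requirement after such a pull (sketch):

        $ grep lfs .hg/requires
        lfs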
      
      The grep exit code is blotted out of some test-lfs-serve.t tests now showing the
      requirement, because run-tests.py doesn't support conditionalizing the exit
      code.
  26. Dec 22, 2017
    • test-lfs: add tests covering local exchanges · 67611e06
      Matt Harbison authored
      The root issue here is that requirements are not exchanged and preserved on
      push/pull.  This can be handled with a changegroup hook.  Testing for remote
      exchanges is much more extensive (it's possible for one process or the other to
      not have the extension loaded at all), so it is added separately.
  27. Dec 21, 2017
    • lfs: only hardlink between the usercache and local store if the blob verifies · bb6a80fc
      Matt Harbison authored
      This fixes the issue where verify (and other read commands) would propagate
      corrupt blobs.  I originally coded this to only hardlink if 'verify=True' for
      store.read(), but then good blobs weren't being linked, and this broke a bunch
      of tests.  (The blob in repo5 that is being corrupted seems to be linked into
      repo5 in the loop running dumpflog.py prior to it being corrupted, but only if
      verify=False is handled too.)  It's probably better to do a one-time extra
      verification in order to create these files, so that the repo can be copied to a
      removable drive.
      
      Adding the same check to store.write() was only for completeness, but it also
      needs to do a one-time extra verification to avoid breaking tests.
  28. Nov 17, 2017
    • lfs: verify lfs object content when transferring to and from the remote store · 417e8e04
      Matt Harbison authored
      This avoids inserting corrupt files into the usercache and the local and remote
      stores.  One downside is that the bad file won't be available locally for
      forensic purposes after a remote download.  I'm thinking about adding an
      'incoming' directory to the local lfs store to handle the download, and then
      moving it to the 'objects' directory after it passes verification.  That would
      have the additional benefit of not concatenating each transfer chunk in memory
      until the full file is transferred.
      
      Verification isn't needed when the data is passed back through the revlog
      interface or when the oid was just calculated, but otherwise it is on by
      default.  The additional overhead should be well worth it to avoid problems
      with file-based remote stores or buggy lfs servers.
      
      Having two different verify functions is a little sad, but the full data of the
      blob is mostly passed around in memory, because that's what the revlog interface
      wants.  The upload function, however, chunks up the data.  It would be ideal if
      that were how the content was always handled, but that's probably a huge project.
      
      I don't really like printing the long hash, but `hg debugdata` isn't a public
      interface, and is the only way to get it.  The filelog and revision info is
      nowhere near this area, so recommending `hg verify` is the easiest thing to do.
    • test-lfs: add tests around corrupted lfs objects · 16660fd4
      Matt Harbison authored
      These are mostly tests against file:// based remote stores, because that's what
      we have the most control over.
      
      The test uploading a corrupt blob to lfs-test-server demonstrates an overly
      broad exception handler in the retry loop.  A corrupt blob is actually
      transferred in a download, but eventually caught when it is accessed (only after
      it leaves the corrupt file in a couple of places locally).  I don't think we
      want to trust random third-party implementations, and this would be a problem
      if there
      were a `debuglfsdownload` command that simply cached the files.  And given the
      cryptic errors, we should probably validate the file hash locally before
      uploading, and also after downloading.
  29. Dec 19, 2017
    • lfs: add note messages indicating what store holds the lfs blob · 02f54a1e
      Matt Harbison authored
      The following corruption-related patches were written prior to adding the
      user-level cache, and it took a while to track down why the tests changed.  (It
      generally made things more resilient.)  But I think this will be useful to the
      end user as well.  I didn't make it --debug level, because there can be a ton of
      info coming out of clone/push/pull --debug.  The pointers are sorted for test
      stability.
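
      The notes look roughly like this (output sketch; exact wording may differ):

        $ hg pull -v
        ...
        lfs: found deadbeef... in the usercache
        lfs: found deadbeef... in the local lfs store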
      
      For convenience, I opted for ui.note() instead of checking ui.verbose and then
      using ui.write(), but I see most of this extension does the latter.  I have no
      idea what the preferred form is.