- Sep 20, 2018
Gregory Szorc authored
Now that repository loading in core supports automatically loading the lfs extension when the "lfs" requirement is present, we no longer need to update the .hg/hgrc of newly-created repos to load the lfs extension!

I'm marking this as BC because it is a change in behavior. But users should not notice unless they create an LFS repo with new Mercurial and then attempt to use it with an older version that doesn't support automatic extension loading.

Differential Revision: https://phab.mercurial-scm.org/D4712
-
Gregory Szorc authored
If an unrecognized requirement is present (possibly due to an unloaded extension), the user gets an error message telling them to go to https://mercurial-scm.org/wiki/MissingRequirement for more info. Yet some requirements clearly map to known extensions shipped by Mercurial. This commit teaches repository loading to automatically map requirements to extensions. We implement support for loading the lfs extension when the "lfs" requirement is present.

This behavior feels more user-friendly to me, and I'm having trouble coming up with a compelling reason not to do it. The strongest argument I have against it is that, strictly speaking, requirements are general repository features and there could be N providers of that feature. e.g. in the case of LFS, there could be another extension implementing LFS support, and the user would want to use that non-official extension rather than the built-in one. The way this patch implements things, the non-official extension could be missing and Mercurial would load the official lfs extension, leading to unexpected behavior. But this feels like a highly marginal use case to me and doesn't outweigh the user benefit of "it just works." If someone really wanted to e.g. use a custom LFS extension, they could prevent the built-in one from being loaded by defining either "extensions.lfs=/path/to/custom/extension" or "extensions.lfs=!", as the automatic extension loading only occurs if there is no config entry for that extension.

Differential Revision: https://phab.mercurial-scm.org/D4711
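For illustration, here is a minimal, self-contained sketch of the autoload rule described above (the table and function names are mine, not hg's actual internals): a requirement maps to an extension only when the user has no explicit [extensions] entry for it, so "extensions.lfs=!" still wins.

    # Illustrative mapping of requirement name -> extension to autoload;
    # this models the behavior described above, not hg's actual code.
    AUTO_EXTENSIONS = {'lfs': 'lfs'}

    def extensions_to_autoload(requirements, extensions_config):
        """Return extensions to load for the given requirements, skipping
        any extension the user configured explicitly (even to disable it)."""
        loads = []
        for requirement, extname in AUTO_EXTENSIONS.items():
            if requirement in requirements and extname not in extensions_config:
                loads.append(extname)
        return loads

    # Autoloaded when there is no [extensions] entry for lfs:
    assert extensions_to_autoload({'lfs', 'revlogv1'}, {}) == ['lfs']
    # Explicitly disabled with "extensions.lfs=!", so nothing is loaded:
    assert extensions_to_autoload({'lfs'}, {'lfs': '!'}) == []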
-
- Sep 19, 2018
Matt Harbison authored
Python 3 defaults to installing under "Program Files".
-
- Sep 04, 2018
Meirambek Omyrzak authored
Output before: "500 files, 2035 changesets, 2622 total revisions"
Output after: "checked 2035 changesets with 2622 changes to 500 files"

The new wording was suggested in the comments inside the issue.

Differential Revision: https://phab.mercurial-scm.org/D4476
-
- Jun 09, 2018
Yuya Nishihara authored
This makes fileset expressions open to any input, so that we can just say "hg status 'set: not binary()'" to select text files, including unknown ones. With this and the removal of subset computation, 'set:**' becomes as fast as 'glob:**'. Further optimization will probably be possible, for example by narrowing the file tree that status is computed on. This also fixes 'subrepo()' to not ignore the current mctx.subset.

.. bc::

   The fileset expression may include untracked files by default. Use ``tracked()`` to explicitly filter out files not existing at the context revision.
-
- Jun 21, 2018
Matt Harbison authored
This ensures that the blobs don't need to be present to be filtered properly.
-
Matt Harbison authored
Since LFS stores the binary attribute in the pointer file, the file doesn't need to be downloaded in order to be skipped. This function also catches an IOError if the data can't be loaded in the non-LFS case.

I wonder if it's worth storing the unix/dos attributes in the pointer file as well, though I'd expect LFS files to be binary most of the time.
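For reference, a pointer file is a small "key value" text stub (per the git-lfs spec), so attributes like this can be read without fetching the blob. A sketch, assuming 'x-is-binary' is the extra key the extension writes:

    # Decide binary-ness from the pointer alone, without the blob.
    # Treating 'x-is-binary' as the key name is an assumption here.
    def parse_pointer(text):
        pairs = (line.split(' ', 1) for line in text.splitlines() if line)
        return {key: value for key, value in pairs}

    def pointer_is_binary(text):
        return parse_pointer(text).get('x-is-binary') == '1'

    sample = (
        'version https://git-lfs.github.com/spec/v1\n'
        'oid sha256:31f7ac1450d9f4e5d4d43e269c4f81ba662d553e6e1bbddb7c0e0f77ba6a0276\n'
        'size 12345\n'
        'x-is-binary 1\n'
    )
    assert pointer_is_binary(sample)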
-
- May 15, 2018
Kyle Lippincott authored
As far as I can tell, most of these failures are due to using $HGPORT, which it seems chg might be using itself. I don't know enough to debug these failures and fix them properly.

Differential Revision: https://phab.mercurial-scm.org/D3562
-
- Apr 06, 2018
Gregory Szorc authored
rawsize() is not reimplemented outside of revlog. I'm not sure why this code was insisting it call a specific implementation. Changing it to call rawsize() on the repo.file(f) result seems to work just fine.

Differential Revision: https://phab.mercurial-scm.org/D3153
-
- Apr 03, 2018
Gregory Szorc authored
Getting these tests to pass is more work than it is worth right now. Let's punt on it.

Differential Revision: https://phab.mercurial-scm.org/D3063
-
- Mar 06, 2018
Gregory Szorc authored
There were a handful of merge conflicts in the wire protocol code due to significant refactoring in default. When resolving the conflicts, I tried to produce the minimal number of changes to make the incoming security patches work with the new code. I will send some follow-up commits to get the security patches better integrated into default.
-
- Feb 07, 2018
Jun Wu authored
There is no way to distinguish whether a delta base is LFS or non-LFS. If the delta is against an LFS rawtext, and the client trying to apply it has the base revision stored as fulltext, the delta (aka bundle) will fail to apply. This patch forbids using deltas for LFS revisions in changegroups, so bad deltas won't be transmitted.

Note: this does not solve the problem entirely. It solves an LFS delta applying to a non-LFS base, but the other direction, a non-LFS delta applying to an LFS base, is not solved yet.

Differential Revision: https://phab.mercurial-scm.org/D2067
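A toy model of the rule being added (REVIDX_EXTSTORED is the revlog flag the lfs extension uses for pointer rawtexts; the value and function below are illustrative, not the changegroup code):

    # Revlog flag marking a rawtext stored externally (LFS pointer);
    # the numeric value here is illustrative.
    REVIDX_EXTSTORED = 1 << 13

    def should_send_fulltext(rev_flags):
        # If this revision's rawtext is an LFS pointer, emit fulltext:
        # a delta against it could fail to apply on a client that has
        # the base stored as ordinary (non-LFS) fulltext.
        return bool(rev_flags & REVIDX_EXTSTORED)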
-
- Mar 03, 2018
Augie Fackler authored
Somehow these are only caught when running the test under Python 3.

Differential Revision: https://phab.mercurial-scm.org/D2580
-
- Jan 28, 2018
Matt Harbison authored
The callstatus setting is required to notice the removal of 'lfs.test' in rev 6 in the tests, even though this isn't directly calling mctx.status(). However, it's not needed to get the results in the tests for `hg status`, so I'm probably missing something.
-
Matt Harbison authored
-
- Jan 27, 2018
Matt Harbison authored
This currently has the same limitation as {lfs_files}, namely it doesn't report removed files. We may want a dedicated 'lfs()' revset for efficiency, but combining this with the 'contains()' revset should be equivalent for now. Combining with 'set:added()' or 'set:modified()' inside 'files()' should be equivalent to a hypothetical lfs_adds() and lfs_modifies(). I wonder if there's a way to tweak the filesets to evaluate lazily, to close the efficiency gap.

It would also be interesting to come up with a template filter for '{files}' that looked at the pattern to 'files()', and filtered appropriately. While passing a fileset as the pattern to `hg log` does filter '{files}', the set is evaluated against the working directory, so there's no way to list all non-lfs files above a certain size in all revisions, for example.
-
- Feb 04, 2018
Matt Harbison authored
In addition to merge, this method ultimately gets called by many commands:

- backout
- bisect
- clone
- fetch
- graft
- import (without --bypass)
- pull -u
- rebase
- strip
- share
- transplant
- unbundle
- update

Additionally, it's also called by histedit, shelve, unshelve, and split, but it seems that the related blobs should always be available locally for these.

For `hg update`, it happens after the normal argument checking and pre-update hook processing, and remote corruption is detected prior to manipulating the working directory. Other commands could use this treatment (archive, cat, revert, etc), but this covers so many of the frequently used bulk commands that it seems like a good starting point.

Losing the verbose message that prints the file name before a corrupt blob aborts the command is a little sad, because there's no easy way to go from oid to file name. I'd like to change that message to list the file name so it looks cleaner and less cryptic, but the pointer object is nowhere near where it needs to be to do this. So punt on that for now.
-
- Jan 30, 2018
Matt Harbison authored
The .hgignore file doesn't need to be tracked, nor does the git equivalent of this file. I'm still a little concerned about the effects of forgetting to commit this file. But the fact that conversions maintain the hashes if only the normal vs external storage changes should make this less risky.
-
- Jan 25, 2018
Matt Harbison authored
This was useful in debugging, because I stupidly quoted it out of habit from the command line. This isn't a great example that clearly shows the problem, but I don't know how to improve it. The problem *is* obvious once a complex statement or a clearly bogus string is used.
-
- Jan 24, 2018
Matt Harbison authored
The only reasons I did this in the first place were that tracking externally seems like it would always be a mistake, and that the eol extension does the same thing. Yuya and Jun thought it might be better to not do this[1], so I'll defer to them on this. If a problem with, say, .hgtags or .hgeol does arise, it can be added back without breaking existing repos.

[1] https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-January/110371.html
-
Matt Harbison authored
Per Yuya, for consistency with {lfspointer}. It might be slightly confusing for there to be {lfsoid} and {lfspointer.oid}. But preventing ambiguity seems more important, and the latter is controlled by the git-lfs spec.
-
- Jan 22, 2018
Matt Harbison authored
Per Martin von Zweigbergk's suggestion to keep this unambiguous, for when it is migrated to {files} and friends.
-
- Jan 20, 2018
Matt Harbison authored
This seems more descriptive.
-
- Jan 14, 2018
Matt Harbison authored
This provides access to the metadata dictionary contained within the tracked pointer file. The OID is probably the most important attribute, and has its own keyword. But we might as well have this for completeness. I liked {pointer} better, but couldn't make it work with the singular/plural forms.
-
Matt Harbison authored
Since the lfs tracking policy can dramatically affect the repository, it makes more sense to have the policy file checked in than to rely on all developers configuring their .hgrc properly. The inspiration for this is the .hgeol file. The configuration lives under '[track]', so that other things can be added in the future.

Eventually, the config option should be limited to `convert` only. If the file can't be parsed for any reason (including unrecognized elements of the minifileset language), the commit will abort until the problem is corrected. This seems more useful than the warning that hgeol emits, and has no effect on reading the data, so there's no compatibility concern.

My initial thought was to read the file and change each "key = value" line into "((key) & (value))", so that each line could be ORed together, and make a single pass at compiling. Unfortunately, that prevents exclusions if there's a catchall rule. Consider what happens to a large *.c file here:

    [track]
    **.c = none()
    ** = size('>1MB')

    # ((**.c) & (none())) | ((**) & (size('>1MB')))  => anything > 1MB

I also thought about having separate [include] and [exclude] sections. But that just seems to open things up to user mistakes. Consider:

    [include]
    **.zip = all()
    **.php = size('>10MB')

    [exclude]
    **.zip = all()   # Who wins?
    **.php = none()  # Effectively 'all()' (i.e. nothing excluded), or >10MB?

Therefore, it just compiles each key and value separately, and walks until the key matches something. I'm not sure how to enforce just file patterns on the LHS without leaking knowledge about the minifileset here. That means this will allow odd looking lines like this:

    [track]
    **.c | **.txt = none()

But that's also fewer lines to compile, so slightly more efficient? Some things like 'none()' won't work as expected on the LHS though, because that won't match, so that line is skipped. For now, these quirks are not mentioned in the documentation.

Jun previously expressed concern about efficiency when scaling to large repos, so I tried avoiding 'repo[None]'. (localrepo.commit() gets repo[None] already, but doesn't tie it to the workingcommitctx used here.) Therefore, I looked at the passed context for 'AMR' status. But that doesn't help with the normal case where the policy file is tracked, but clean. That requires looking up p1() to read the file. I don't see any way to get the content of one file without first creating the full parent context.
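A self-contained sketch of that "compile each side separately and walk until a key matches" evaluation (Python callables stand in for compiled minifileset expressions; names are mine):

    # Each rule is a (pattern, policy) predicate pair, compiled from one
    # "key = value" line of the tracking file.
    def lfs_tracked(rules, filename, size):
        for matches_pattern, wants_lfs in rules:
            if matches_pattern(filename, size):
                # First matching LHS wins; its RHS decides the policy, so
                # "**.c = none()" can exclude files from a later catchall.
                return wants_lfs(filename, size)
        return False

    rules = [
        (lambda f, s: f.endswith('.c'), lambda f, s: False),     # **.c = none()
        (lambda f, s: True, lambda f, s: s > 10**6),             # ** = size('>1MB')
    ]
    assert not lfs_tracked(rules, 'big.c', 5 * 10**6)    # excluded despite size
    assert lfs_tracked(rules, 'movie.bin', 5 * 10**6)    # caught by the catchall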
-
- Jan 17, 2018
Matt Harbison authored
The only other interface to this data is `hg debugdata`, which requires knowledge of the filelog revision that corresponds to the changeset. Since the data is uninterpreted, this is an important debugging capability, and it needs to be simpler to use than that. For non-LFS files, this displays the regular data. Alternatively, we could forego the messy function extraction in the last patch if this template keyword can just be added unconditionally.
-
- Jan 14, 2018
Matt Harbison authored
The 'sha256:' prefix is skipped because this seems like the most convenient way to consume it. Maybe we should also add a '{oid_type}' keyword? Then again, that can be added in the future if a different algorithm is supported.
-
Matt Harbison authored
This will allow more attributes about the file to be queried.
-
Matt Harbison authored
This is a very cool feature that we should document, but I'll punt that to the freeze. From what I can tell, git doesn't have this capability.
-
Matt Harbison authored
-
Matt Harbison authored
I can't think of any problematic scenarios (though things might get interesting with .hgtags, since every head is consulted). The eol extension explicitly disables handling these files, and that seems reasonable here too.
-
- Dec 31, 2017
Matt Harbison authored
Migrate `lfs.threshold` to the more powerful `lfs.filter` added by D4990618, so people can specify which files to store in LFS with more flexibility.

This patch was authored by Jun Wu for the fb-experimental repo, to avoid using a matcher for efficiency[1]. All I've changed here is to register the new 'lfs.track' default so that the tests run cleanly, and to adapt the subsequent language changes. Migrating the remaining uses of 'lfs.threshold' can be done separately, since there's a fallback in place.

[1] https://www.mercurial-scm.org/pipermail/mercurial-devel/2017-December/109388.html
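For illustration, a tracking policy in this style might look like the following hgrc snippet (sketched from the minifileset language; treat the exact syntax as an assumption, not canonical):

    [lfs]
    # Store anything over 10MB, plus all zip files, in LFS.
    track = size(">10MB") | **.zip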
-
Matt Harbison authored
This will allow a Facebook patch that creates 'repo11' to be imported without breaking a bunch of tests, or requiring edits on the fly.
-
- Dec 22, 2017
Matt Harbison authored
Both gitlfsremote and file based remotes benefit from not requiring the whole file in memory (though the whole file is still loaded when passing through the revlog interface). With a method specific to downloading from a remote store, the misleading 'use hg verify' hint is removed. The behavior is otherwise unchanged, in that a download from both remote store types will yield a copy of the blob via util.atomictempfile.

There's no response payload defined for the non-'download' actions, but the previous code attempted to read the payload in this case anyway. This refactored code makes that more obvious, so any payload is printed as a debug message, just in case.
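A minimal sketch of the chunked download idea, under the assumption of a plain HTTP remote (the function and its names are illustrative, not the extension's API):

    import shutil
    import tempfile
    import urllib.request

    def download_blob(url, dest_path, chunk_size=1024 * 1024):
        # Stream the response into a temporary file in fixed-size chunks
        # so the whole blob never has to fit in memory, then move it into
        # place, loosely mirroring util.atomictempfile.
        with urllib.request.urlopen(url) as resp, \
                tempfile.NamedTemporaryFile(delete=False) as tmp:
            while True:
                chunk = resp.read(chunk_size)
                if not chunk:
                    break
                tmp.write(chunk)
        shutil.move(tmp.name, dest_path)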
-
- Dec 24, 2017
Matt Harbison authored
A hook like this is how largefiles manages to do the same. Largefiles uses a changegroup hook, but this uses pretxnchangegroup, because that actually causes the transaction to roll back in the unlikely event that writing the requirements out fails. Sadly, the requires file itself isn't rolled back if a subsequent hook fails, but that seems trivial.

Now that commit, changegroup and convert are covered, I don't think there's any way to get an lfs repo without the requirement. The grep exit code is blotted out of some test-lfs-serve.t tests now showing the requirement, because run-tests.py doesn't support conditionalizing the exit code.
-
- Dec 22, 2017
Matt Harbison authored
The root issue here is that requirements are not exchanged and preserved on push/pull. This can be handled with a changegroup hook. Testing for remote exchanges is much more extensive (it's possible for one process or the other to not have the extension loaded at all), so it is added separately.
-
- Dec 21, 2017
Matt Harbison authored
This fixes the issue where verify (and other read commands) would propagate corrupt blobs. I originally coded this to only hardlink if 'verify=True' for store.read(), but then good blobs weren't being linked, and this broke a bunch of tests. (The blob in repo5 that is being corrupted seems to be linked into repo5 in the loop running dumpflog.py prior to it being corrupted, but only if verify=False is handled too.) It's probably better to do a one-time extra verification in order to create these files, so that the repo can be copied to a removable drive. Adding the same check to store.write() was only for completeness, but it also needs to do a one-time extra verification to avoid breaking tests.
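A sketch of the ordering this settles on, assuming blobs are stored in files named after their oid (a sha256 hex digest); the function is illustrative, not the store's API:

    import hashlib
    import os

    def read_blob(storepath, cachepath, oid):
        # Verify the blob against its oid *before* hardlinking it into
        # the usercache, so corruption is never propagated from one
        # repository to another.
        with open(storepath, 'rb') as f:
            data = f.read()
        if hashlib.sha256(data).hexdigest() != oid:
            raise IOError('detected corrupt blob %s' % oid)
        if not os.path.exists(cachepath):
            os.link(storepath, cachepath)
        return data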
-
- Nov 17, 2017
Matt Harbison authored
This avoids inserting corrupt files into the usercache, and the local and remote stores. One downside is that the bad file won't be available locally for forensic purposes after a remote download. I'm thinking about adding an 'incoming' directory to the local lfs store to handle the download, and then moving it to the 'objects' directory after it passes verification. That would have the additional benefit of not concatenating each transfer chunk in memory until the full file is transferred.

Verification isn't needed when the data is passed back through the revlog interface or when the oid was just calculated, but otherwise it is on by default. The additional overhead should be well worth avoiding problems with file based remote stores, or buggy lfs servers.

Having two different verify functions is a little sad, but the full data of the blob is mostly passed around in memory, because that's what the revlog interface wants. The upload function, however, chunks up the data. It would be ideal if that were how the content was always handled, but that's probably a huge project.

I don't really like printing the long hash, but `hg debugdata` isn't a public interface, and is the only way to get it. The filelog and revision info is nowhere near this area, so recommending `hg verify` is the easiest thing to do.
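Since an LFS oid is just the sha256 of the blob's content, the two verify functions reduce to something like this sketch (names are mine):

    import hashlib

    def verifyblob(oid, data):
        # Whole-blob verification, matching how rawtexts are passed
        # around in memory for the revlog interface.
        if hashlib.sha256(data).hexdigest() != oid:
            raise ValueError('detected corrupt blob (sha256 mismatch)')

    def verifychunks(oid, chunks):
        # Incremental verification, matching how the upload path chunks
        # the data instead of holding the entire blob at once.
        h = hashlib.sha256()
        for chunk in chunks:
            h.update(chunk)
        if h.hexdigest() != oid:
            raise ValueError('detected corrupt blob (sha256 mismatch)')

    oid = hashlib.sha256(b'hello').hexdigest()
    verifyblob(oid, b'hello')
    verifychunks(oid, [b'he', b'llo'])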
-
Matt Harbison authored
These are mostly tests against file:// based remote stores, because that's what we have the most control over. The test uploading a corrupt blob to lfs-test-server demonstrates an overly broad exception handler in the retry loop. A corrupt blob is actually transferred in a download, but eventually caught when it is accessed (only after it leaves the corrupt file in a couple of places locally).

I don't think we want to trust random 3rd party implementations, and this would be a problem if there were a `debuglfsdownload` command that simply cached the files. And given the cryptic errors, we should probably validate the file hash locally before uploading, and also after downloading.
-
- Dec 19, 2017
Matt Harbison authored
The following corruption related patches were written prior to adding the user level cache, and it took a while to track down why the tests changed. (It generally made things more resilient.) But I think this will be useful to the end user as well. I didn't make it --debug level, because there can be a ton of info coming out of clone/push/pull --debug. The pointers are sorted for test stability.

I opted for ui.note() instead of checking ui.verbose and then using ui.write() for convenience, but I see most of this extension does the latter. I have no idea what the preferred form is.
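For context, the two styles are equivalent in effect, since ui.note() only writes when --verbose is active. A fragment against hg's ui API (the message text is made up):

    # Two equivalent ways to emit a verbose-only message, shown side by
    # side for comparison:
    def note_style(ui, oid):
        ui.note(b'lfs: found %s in the usercache\n' % oid)

    def explicit_style(ui, oid):
        if ui.verbose:
            ui.write(b'lfs: found %s in the usercache\n' % oid)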
-