- Dec 22, 2017
Matt Harbison authored
Both gitlfsremote and file-based remotes benefit from not requiring the whole file in memory (though the whole file is still loaded when passing through the revlog interface). With a method specific to downloading from a remote store, the misleading 'use hg verify' hint is removed. The behavior is otherwise unchanged, in that a download from both remote store types yields a copy of the blob via util.atomictempfile. There's no response payload defined for the non-'download' actions, but the previous code attempted to read the payload in that case anyway. The refactored code makes that more obvious, so any payload is printed as a debug message, just in case.
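As a rough illustration of the streaming idea, a minimal sketch (not the actual hgext/lfs code; the flat `storedir` layout and the file-like `response` object are assumptions):

```python
import os
import tempfile

CHUNKSIZE = 1 << 20  # read 1 MiB at a time, so the blob never sits fully in memory

def download_blob(storedir, oid, response):
    """Stream a blob from a file-like `response` into `storedir`.

    Mimics util.atomictempfile: write to a temporary file and rename it
    into place only on success, so an interrupted transfer never leaves
    a partial blob in the store.
    """
    fd, tmp = tempfile.mkstemp(dir=storedir)
    try:
        with os.fdopen(fd, 'wb') as fp:
            while True:
                chunk = response.read(CHUNKSIZE)
                if not chunk:
                    break
                fp.write(chunk)
        os.rename(tmp, os.path.join(storedir, oid))
    except Exception:
        os.unlink(tmp)
        raise
```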
- Jan 03, 2018
Matt Harbison authored
I noticed that when I cloned without updating and then turned around and pushed that clone to an lfs server, the push only looked for the blob in the local store. Writes to the dummyremote (file-based store) use local.read(), which checks both the usercache and the local store.
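A minimal sketch of the two-level lookup described above (paths and names are illustrative, not the real blobstore API):

```python
import os

def read_blob(usercache, localstore, oid):
    """Return blob content, preferring the per-user cache over the
    repo-local store; checking both is what makes push work from a
    clone that was never updated."""
    for root in (usercache, localstore):
        path = os.path.join(root, oid)
        if os.path.exists(path):
            with open(path, 'rb') as fp:
                return fp.read()
    raise LookupError('lfs blob %s not found locally' % oid)
```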
- Dec 24, 2017
Matt Harbison authored
A hook like this is how largefiles manages the same thing. Largefiles uses a changegroup hook, but this uses pretxnchangegroup, because that actually causes the transaction to roll back in the unlikely event that writing out the requirements fails. Sadly, the requires file itself isn't rolled back if a subsequent hook fails, but that seems like a trivial concern. Now that commit, changegroup, and convert are covered, I don't think there's any way to get an lfs repo without the requirement. The grep exit code is blotted out of some test-lfs-serve.t tests that now show the requirement, because run-tests.py doesn't support conditionalizing the exit code.
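Schematically, the hook looks something like this (a sketch only: `changegroupuseslfs()` is a hypothetical detection helper, and `repo._writerequirements()` follows the internal repo API of that era; the real wiring lives in the extension's reposetup()):

```python
def checkrequireslfs(ui, repo, **kwargs):
    """pretxnchangegroup hook: add the 'lfs' requirement when incoming
    changesets contain lfs blobs.

    Raising (or returning failure) from a pretxn* hook aborts the
    transaction, which is why pretxnchangegroup is used rather than
    changegroup.
    """
    if 'lfs' in repo.requirements:
        return 0
    if changegroupuseslfs(repo, kwargs):  # hypothetical helper
        repo.requirements.add('lfs')
        repo._writerequirements()  # if this raises, the txn rolls back
    return 0
```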
- Nov 17, 2017
Matt Harbison authored
This avoids inserting corrupt files into the usercache and the local and remote stores. One downside is that the bad file won't be available locally for forensic purposes after a remote download. I'm thinking about adding an 'incoming' directory to the local lfs store to handle the download, and then moving the file to the 'objects' directory after it passes verification. That would have the additional benefit of not concatenating each transfer chunk in memory until the full file is transferred.

Verification isn't needed when the data is passed back through the revlog interface or when the oid was just calculated, but otherwise it is on by default. The additional overhead should be well worth avoiding problems with file-based remote stores or buggy lfs servers.

Having two different verify functions is a little sad, but the full data of the blob is mostly passed around in memory, because that's what the revlog interface wants. The upload function, however, chunks up the data. It would be ideal if the content were always handled in chunks, but that's probably a huge project.

I don't really like printing the long hash, but `hg debugdata` isn't a public interface, and it's the only way to get it. The filelog and revision info is nowhere near this area, so recommending `hg verify` is the easiest thing to do.
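The two verify flavors mentioned above amount to something like this (a sketch; the function names are illustrative):

```python
import hashlib

def _verify(oid, data):
    """Check a whole in-memory blob against its sha256 oid; this is the
    shape the revlog interface hands us."""
    if hashlib.sha256(data).hexdigest() != oid:
        raise IOError('detected corrupt lfs object: %s' % oid)

def _verify_chunked(oid, chunks):
    """The same check for the chunked upload path, without
    concatenating the chunks in memory."""
    sha = hashlib.sha256()
    for chunk in chunks:
        sha.update(chunk)
    if sha.hexdigest() != oid:
        raise IOError('detected corrupt lfs object: %s' % oid)
```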
- Dec 05, 2017
Matt Harbison authored
The retries were added to work around TCP RESETs in fb-experimental fc8c131314a9. I have no idea if that's been debugged yet, but this wide net caught local I/O errors, bad hostnames, and other things that shouldn't be retried. The next patch will validate objects as they are uploaded, and there's no need to retry those errors. The spec [1] does mention that certain http errors, including 500, can be retried. But let's work through the corruption detection issues first.

[1] https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md
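The narrowed retry amounts to catching only the transport-level failure the retries were added for, rather than every exception (a sketch; the retry count and delay are illustrative):

```python
import errno
import socket
import time

def with_retries(fn, retries=5, delay=1):
    """Retry only on TCP RESET; local I/O errors, bad hostnames, and
    other failures propagate immediately instead of being retried."""
    for attempt in range(retries):
        try:
            return fn()
        except socket.error as e:
            if e.errno != errno.ECONNRESET or attempt == retries - 1:
                raise
            time.sleep(delay)
```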
- Nov 17, 2017
Matt Harbison authored
These are mostly tests against file:// based remote stores, because that's what we have the most control over. The test uploading a corrupt blob to lfs-test-server demonstrates an overly broad exception handler in the retry loop. A corrupt blob is actually transferred in a download, but is only caught when it is accessed, after it has already left the corrupt file in a couple of places locally. I don't think we want to trust random third-party implementations, and this would be a problem if there were a `debuglfsdownload` command that simply cached the files. And given the cryptic errors, we should probably validate the file hash locally before uploading, and also after downloading.
- Dec 19, 2017
Matt Harbison authored
The following corruption-related patches were written prior to adding the user-level cache, and it took a while to track down why the tests changed. (It generally made things more resilient.) But I think this will be useful to the end user as well. I didn't make it --debug level, because there can be a ton of info coming out of clone/push/pull --debug. The pointers are sorted for test stability. I opted for ui.note() instead of checking ui.verbose and then using ui.write(), for convenience, but I see most of this extension does the latter. I have no idea which form is preferred.
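For reference, the two equivalent forms discussed above look like this inside the extension, with a ui object in scope (the message text is illustrative):

```python
# ui.note() only emits when --verbose is in effect:
for oid in sorted(pointers):
    ui.note('lfs: found %s in the usercache\n' % oid)

# the explicit form used in most of the extension:
if ui.verbose:
    for oid in sorted(pointers):
        ui.write('lfs: found %s in the usercache\n' % oid)
```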
- Dec 12, 2017
Wojciech Lis authored
This significantly speeds up lfs prefetch. With a fast network we are seeing a ~50% improvement in overall prefetch times. Because of the worker's API on posix, we lose fine-grained progress updates and only see progress when a file finishes downloading.

Test Plan: Run tests: ./run-tests.py -l test-lfs* .... # Ran 4 tests, 0 skipped, 0 failed. Run commands resulting in lfs prefetch, e.g. hg sparse --enable-profile

Differential Revision: https://phab.mercurial-scm.org/D1568
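The patch uses Mercurial's fork-based worker module; as a language-level illustration of the same idea (not the actual mechanism), a thread-pool version with the same per-file progress granularity might look like this, where `remote.download` stands in for a hypothetical single-blob fetch:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def prefetch(remote, store, oids, jobs=4):
    """Download blobs in parallel, reporting progress per finished file."""
    done = 0
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        futures = [pool.submit(remote.download, oid, store) for oid in oids]
        for fut in as_completed(futures):
            fut.result()  # re-raise any download error
            done += 1
            # progress can only tick per completed file, as noted above
            print('lfs: downloaded %d of %d blobs' % (done, len(oids)))
```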
- Dec 07, 2017
Matt Harbison authored
This is the same mechanism in place for largefiles, and it solves several problems with working in multiple local repositories. The existing largefiles method is reused in place, because I suspect that there are other functions that can be shared. If we wait a bit to identify more before `hg cp lfutil.py ...`, the history will be easier to trace.

The push between repo14 and repo15 in test-lfs.t arguably shouldn't be uploading any files with a local push. Maybe we can revisit that when `hg push` without 'lfs.url' can upload files to the push destination. Then it would be consistent for blobs in a local push to be linked to the local destination's cache.

The cache property is added to run-tests.py, the same as the largefiles property, so that test-generated files don't pollute the real location. Having files available locally broke a couple of existing lfs-test-server tests, so the cache is cleared in a few places to force file download.
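The usercache mechanism borrowed from largefiles boils down to something like this sketch (the hardlink-then-copy fallback follows lfutil's approach; the paths and the `download` callable are illustrative assumptions):

```python
import os
import shutil

def link_or_copy(src, dst):
    """Hardlink into place when possible (cheap, like largefiles'
    lfutil.link), falling back to a copy across filesystems."""
    try:
        os.link(src, dst)
    except OSError:
        shutil.copyfile(src, dst)

def fetch_via_usercache(usercache, localstore, oid, download):
    """Satisfy a blob from the per-user cache when possible; populate
    the cache on first download so other clones can reuse it."""
    cached = os.path.join(usercache, oid)
    local = os.path.join(localstore, oid)
    if not os.path.exists(cached):
        download(oid, cached)  # hypothetical single-blob downloader
    if not os.path.exists(local):
        link_or_copy(cached, local)
```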
- Nov 21, 2017
Matt Harbison authored
Apparently '$!' doesn't return a Win32 PID, so the process was never killed, and the next run was screwed up. Oddly, without the explicit killdaemons.py at the end, the test seems to hang. This spawning is just sad, so I limited it to Windows.
- Nov 15, 2017
Matt Harbison authored
This is consistent with how the other tests require a feature.
- Nov 14, 2017
Matt Harbison authored
The purpose of this is the same as the built-in largefiles extension: to handle huge files outside of the normal storage system, generally to keep the amount of data cloned to a lower amount. There are several benefits of implementing the git-lfs protocol instead of using the largefiles extension:

- Bitbucket and Github support (and probably wider support in 3rd party hosting sites in general). [1][2]
- The number of hg internals monkey patched is several orders of magnitude lower, so it will be easier to reason about and maintain. Future commands will likely just work, without requiring various wrappers.
- The "standin" files are only written to the filelog, not the disk. That should avoid weird edge cases where the largefile and standin files get out of sync. [3] It also avoids the occasional printing of the "hidden" standin file in various messages.
- Filesets like size() will work, even if the file isn't present. (It always says 41 bytes for largefiles, whether present or not.)

The only place where largefiles comes out on top is that it works with `hg serve` for simple sharing, without external infrastructure. Getting lfs-test-server working was a hassle and took a while to figure out. Maybe we can do something to make it work in the future.

Long term, I expect that this will be highly preferred over largefiles. But if we are to recommend this to largefiles users, there are some UI issues to bikeshed. Until they are resolved, I've marked this experimental, and am not putting a pointer to this in the largefiles help. The (non-exhaustive) list of issues I've seen so far:

- It isn't sufficient to just enable the largefiles extension; you have to explicitly add a file with --large before it will pay attention to the configured sizes and patterns on future adds. The justification is that once you use it, you're stuck with it. I've seen people confused by this, and haven't liked it myself. But it's also saved me a few times. Should we do something like have a specific enabling config setting that must be set in the local repo config, so that enabling this extension in the user or system hgrc doesn't silently start storing lfs files?
- The largefiles extension adds a repo requirement when the first largefile is committed, so that the extension must always be enabled in the future. This extension is not doing that, and since I only enabled it locally to avoid infecting other repos, I got a cryptic error about missing flag processors when I cloned. Is the repo requirement omitted because of shallow/narrow clone considerations (or other future advanced things)?
- In the (small amount of) reading I've done about the git implementation, it seems that the files and sizes are stored in a tracked .gitattributes file. I think a tracked file for this would be extremely useful for consistency across developers, but this kind of touches on the tracked hgrc file proposal from a few months back.
- The git client can specify file patterns, not just sizes.
- The largefiles extension has a cache directory in the local repo, but also a system-wide one. We should probably implement a system-wide cache too, so that multiple clones don't have to refetch the files from the server.
- Jun mentioned other missing features, like SSH authentication, gc, etc.

The code corresponds to c0492b73c7ef in hg-experimental. [4] The only tweaks are to load the extension in the tests with 'lfs=' instead of 'lfs=$TESTDIR/../hgext3rd/lfs', change the import in the *.py test to hgext (from hgext3rd), add the 'testedwith' declaration, and mark it experimental for now. The infinite-push, p4fastimport, and remotefilelog tests were left behind. The devel-warnings for unregistered config options are not corrected yet, nor are the import check warnings.

[1] https://www.mercurial-scm.org/pipermail/mercurial/2017-November/050699.html
[2] https://bitbucket.org/site/master/issues/3843/largefiles-support-bb-3903
[3] https://bz.mercurial-scm.org/show_bug.cgi?id=5738
[4] https://bitbucket.org/facebook/hg-experimental