- Jan 07, 2018
-
-
Matt Harbison authored
It seems better to print the name known to the user, not the internal file. The previous code unconditionally set 'p.filename', which could leave the attribute set to None, and that None was then printed as-is in _gitlfsremote._checkforservererror() instead of "unknown". Normally, files are printed relative to CWD, but I don't see a way to get the repo path to make that adjustment.

The test modified here apparently only runs within Facebook, but a print statement confirmed the name change. I tried uploading the blob to a different remote store (so the git server never saw it), killing the git server and removing the blob directory, and removing the 'lfs.db' file. All resulted in the same message:

  abort: LFS server claims required objects do not exist: bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a!

So I have no idea how to make this test generally runnable.
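A minimal sketch of the intended fallback, assuming pointer objects with an oid() method and an optional 'filename' attribute; the error class and message shape are illustrative, not the extension's actual API:

    class LfsRemoteError(Exception):
        pass

    def checkforservererror(pointers, errorsbyoid):
        """Report a server-side error using the user-visible file name."""
        for p in pointers:
            err = errorsbyoid.get(p.oid())
            if err is not None:
                # Prefer the name the user knows; fall back to a readable
                # placeholder rather than printing None.
                name = getattr(p, 'filename', None) or 'unknown'
                raise LfsRemoteError('LFS server error for "%s": %s'
                                     % (name, err))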
-
Matt Harbison authored
This partially reverts 417e8e040102 and bb6a80fc969a. But since there's now a dedicated download function, there's no functional change. The last sentence in the commit message of the latter is wrong: write() didn't need the one-time hash check if verification wasn't requested. I suspect that sentence was missing 'read()' ("... but _read()_ also needs to do a one time check ..."), because read() did fail without the hash check before linking to the usercache. The write() method simply got the same check for consistency.

While here, clarify that the write() method is *only* for storing content directly from filelog, which has already checked the hash. If someone can come up with a way to bridge the differences between writing to a file and sending a urlreq.request across the wire, we can create an upload() function and clean up read() in a similar way. About the only common thread I see is an open() that verifies the content before returning a file descriptor.
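A rough sketch of the division of labor described above; the store layout, method names, and flat on-disk path are assumptions for illustration, not the real blob store:

    import hashlib
    import os

    class BlobStoreSketch(object):
        def __init__(self, root):
            self.root = root

        def _path(self, oid):
            return os.path.join(self.root, oid)

        def write(self, oid, data):
            # Only called with content coming straight from filelog,
            # which has already verified the hash, so no check here.
            with open(self._path(oid), 'wb') as fp:
                fp.write(data)

        def read(self, oid, verify=True):
            # Content may come from an untrusted source (usercache,
            # remote store), so verify it once before handing it back.
            with open(self._path(oid), 'rb') as fp:
                data = fp.read()
            if verify and hashlib.sha256(data).hexdigest() != oid:
                raise IOError('detected corrupt lfs object: %s' % oid)
            return data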
-
- Dec 23, 2017
-
-
Matt Harbison authored
Upfront disclaimer: I don't know anything about the wire protocol, and this was pretty much cargo-culted from largefiles, and then clonebundles, since it seems more modern. I was surprised that exchange.push() will ensure all of the proper requirements when exchanging between two local repos, but doesn't care when one is remote.

All this new capability marker does is inform the client that the extension is enabled remotely. It may or may not contain commits with external blobs. (A sketch of the server side follows this list.) Open issues:

- largefiles uses 'largefiles=serve' for its capability. Someday I hope to be able to push lfs blobs to an `hg serve` instance. That will probably require a distinct capability. Should it change to '=serve' then? Or just add an 'lfs-serve' capability then?

- The flip side of this is more complicated. It looks like largefiles adds an 'lheads' command for the client to signal to the server that the extension is loaded. That is then converted to 'heads' and sent through the normal wire protocol plumbing. A client using the 'heads' command directly is kicked out with a message indicating that the largefiles extension must be loaded. We could do similar with 'lfsheads', but then a repo with both largefiles and lfs blobs couldn't be pushed over the wire. Hopefully somebody with more wire protocol experience can think of something else. I see 'x-hgarg-1' on some commands in the tests, but not on heads, and didn't dig any further.
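A minimal sketch of advertising such a capability, assuming the 2017-era wireproto module; whether _capabilities is the right hook, and the condition for "lfs is enabled here", are assumptions:

    from mercurial import (
        extensions,
        wireproto,
    )

    def _capabilities(orig, repo, proto):
        '''Announce that the lfs extension is loaded on the server.'''
        caps = orig(repo, proto)
        # This only says the extension is enabled remotely; the repo
        # may or may not contain commits with external blobs.
        caps.append('lfs')
        return caps

    def extsetup(ui):
        extensions.wrapfunction(wireproto, '_capabilities', _capabilities)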
-
- Dec 24, 2017
-
-
Matt Harbison authored
Once the 'lfs' requirement is added, the extension must be loaded on both sides, and changegroup3 must be used. But there's no reason I can see to bail with cryptic errors when lfs is not required, merely enabled somewhere.
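A sketch of the kind of gating this implies; the helper name and abort wording are made up here, not the extension's real code:

    from mercurial import error

    def checklfsrequirement(repo, remotecaps):
        # Only insist on lfs support when the requirement actually
        # exists; a randomly-enabled extension on one side shouldn't
        # cause cryptic failures for repos that never used lfs.
        if 'lfs' in repo.requirements and 'lfs' not in remotecaps:
            raise error.Abort('lfs is required by this repository, but '
                              'the remote does not advertise lfs support')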
-
- Dec 22, 2017
-
-
Matt Harbison authored
Even though the upload/download message is still inside a ui.verbose check, I switched it to ui.note() too, so that the 'ui.note' label is applied. The debug message is no longer marked for translation, because check-code complained.
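For illustration (the message text and variables are placeholders): ui.note() only writes when --verbose is in effect, and it tags its output with the 'ui.note' label, unlike a bare ui.write() inside a verbosity check.

    from mercurial.i18n import _

    def reportupload(ui, oid, nbytes):
        # Labeled verbose output; color and filtering see 'ui.note'.
        ui.note(_('lfs: uploading %s (%d bytes)\n') % (oid, nbytes))
        # Debug strings stay untranslated; check-code complains about
        # _() around them.
        ui.debug('lfs: upload of %s queued\n' % oid)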
-
- Nov 17, 2017
-
-
Matt Harbison authored
This avoids inserting corrupt files into the usercache, and the local and remote stores. One downside is that the bad file won't be available locally for forensic purposes after a remote download. I'm thinking about adding an 'incoming' directory to the local lfs store to handle the download, and then moving the file to the 'objects' directory after it passes verification. That would have the additional benefit of not concatenating each transfer chunk in memory until the full file is transferred.

Verification isn't needed when the data is passed back through the revlog interface or when the oid was just calculated, but otherwise it is on by default. The additional overhead should be well worth avoiding problems with file-based remote stores, or buggy lfs servers.

Having two different verify functions is a little sad, but the full data of the blob is mostly passed around in memory, because that's what the revlog interface wants. The upload function, however, chunks up the data. It would be ideal if that was how the content is always handled, but that's probably a huge project.

I don't really like printing the long hash, but `hg debugdata` isn't a public interface, and is the only way to get it. The filelog and revision info is nowhere near this area, so recommending `hg verify` is the easiest thing to do.
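A sketch of the chunked variant of the two verify styles described above; the names and the exception are illustrative:

    import hashlib

    class CorruptBlobError(Exception):
        pass

    def verifychunks(oid, chunks):
        # Hash each chunk as it streams past instead of concatenating
        # the whole blob in memory, then compare against the oid.
        sha = hashlib.sha256()
        for chunk in chunks:
            sha.update(chunk)
            yield chunk
        if sha.hexdigest() != oid:
            raise CorruptBlobError('detected corrupt lfs object: %s\n'
                                   '(run hg verify)' % oid)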
-
- Dec 19, 2017
-
-
Matt Harbison authored
The following corruption-related patches were written prior to adding the user-level cache, and it took a while to track down why the tests changed. (The cache generally made things more resilient.) But I think this will be useful to the end user as well. I didn't make it --debug level, because there can be a ton of info coming out of clone/push/pull --debug. The pointers are sorted for test stability.

I opted for ui.note() instead of checking ui.verbose and then using ui.write() for convenience, but I see most of this extension does the latter. I have no idea what the preferred form is.
-
- Dec 13, 2017
-
-
Matt Harbison authored
Also spotted by Yuya.
-
- Dec 08, 2017
-
-
Matt Harbison authored
This also ends up testing the local extension wrapping for dstrepo during upgrade, which was fixed in 06987c6971be.
-
- Dec 07, 2017
-
-
Boris Feld authored
The extension wraps the necessary functions to ensure the 'lfs' requirement won't be dropped. It is now possible to run `hg debugupgraderepo` on a repository with lfs.
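A sketch of what preserving the requirement could look like, assuming the upgrade module exposes a preserved-requirements hook (the wrapped function name is an assumption):

    from mercurial import (
        extensions,
        upgrade,
    )

    def upgraderequirements(orig, repo):
        # Carry the 'lfs' requirement over to the upgraded repository
        # so the extension keeps being mandatory afterwards.
        reqs = orig(repo)
        if 'lfs' in repo.requirements:
            reqs.add('lfs')
        return reqs

    def extsetup(ui):
        extensions.wrapfunction(upgrade, 'preservedrequirements',
                                upgraderequirements)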
-
- Nov 27, 2017
-
-
Matt Harbison authored
This is consistent with clone and share in the previous commits.
-
- Nov 17, 2017
-
-
Matt Harbison authored
This is consistent with clone in the previous commit.
-
Matt Harbison authored
We do the same thing on clone for the largefiles extension, as a convenience. As with largefiles, it's probably safer to enable this extension only on a per-repo basis, because it is trivial to add an lfs file, and doing so gives the repository some centralized-VCS characteristics.
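A sketch of the convenience, modeled on what largefiles does; the hook and the check for when to write are simplified assumptions:

    from mercurial import (
        extensions,
        hg,
    )

    def hgclone(orig, ui, opts, *args, **kwargs):
        result = orig(ui, opts, *args, **kwargs)
        if result is not None:
            sourcerepo, destrepo = result
            repo = destrepo.local()
            # A clone to a remote destination (e.g. over ssh) yields
            # no local repo object, so there is no hgrc to update.
            if repo is not None and 'lfs' in repo.requirements:
                # Permanently enable the extension for this repo only.
                with repo.vfs('hgrc', 'a', text=True) as fp:
                    fp.write('\n[extensions]\nlfs=\n')
        return result

    def extsetup(ui):
        extensions.wrapfunction(hg, 'clone', hgclone)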
-
- Nov 23, 2017
-
-
Matt Harbison authored
This covers both the vanilla repo -> lfs repo and largefiles -> lfs conversions. The largefiles extension adds the requirement directly, because it has a dedicated command to convert. Using the convert extension is better, because it supports more features. I'd like ideas about how to ensure that converting away from lfs works on all files. (See comments in test-lfs.t)
-
- Nov 14, 2017
-
-
Matt Harbison authored
Specifically, 'symbol import follows non-symbol import: mercurial.i18n'
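The checker's rule, illustrated (the module names are just examples): within the Mercurial imports, symbol imports like `from mercurial.i18n import _` must come before plain module imports.

    from __future__ import absolute_import

    # Symbol import first...
    from mercurial.i18n import _

    # ...then module imports; the reverse order trips the import
    # checker with "symbol import follows non-symbol import".
    from mercurial import (
        error,
        util,
    )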
-
Matt Harbison authored
The purpose of this is the same as the built-in largefiles extension: to handle huge files outside of the normal storage system, generally to keep the amount of data cloned lower. There are several benefits to implementing the git-lfs protocol instead of using the largefiles extension:

- Bitbucket and GitHub support (and probably wider support in 3rd party hosting sites in general). [1][2]

- The number of hg internals monkey-patched is several orders of magnitude lower, so it will be easier to reason about and maintain. Future commands will likely just work, without requiring various wrappers.

- The "standin" files are only written to the filelog, not the disk. That should avoid weird edge cases where the largefile and standin files get out of sync. [3] It also avoids the occasional printing of the "hidden" standin file in various messages. (See the pointer sketch after this message.)

- Filesets like size() will work, even if the file isn't present. (It always says 41 bytes for largefiles, whether present or not.)

The only place that I see where largefiles comes out on top is that it works with `hg serve` for simple sharing, without external infrastructure. Getting lfs-test-server working was a hassle, and took a while to figure out. Maybe we can do something to make it work in the future.

Long term, I expect that this will be highly preferred over largefiles. But if we are to recommend this to largefiles users, there are some UI issues to bikeshed. Until they are resolved, I've marked this experimental, and am not putting a pointer to this in the largefiles help. The (non-exhaustive) list of issues I've seen so far:

- It isn't sufficient to just enable the largefiles extension; you have to explicitly add a file with --large before it will pay attention to the configured sizes and patterns on future adds. The justification is that once you use it, you're stuck with it. I've seen people confused by this, and haven't liked it myself. But it's also saved me a few times. Should we do something like have a specific enabling config setting that must be set in the local repo config, so that enabling this extension in the user or system hgrc doesn't silently start storing lfs files?

- The largefiles extension adds a repo requirement when the first largefile is committed, so that the extension must always be enabled in the future. This extension is not doing that, and since I only enabled it locally to avoid infecting other repos, I got a cryptic error about missing flag processors when I cloned. Is there no repo requirement due to shallow/narrow clone considerations (or other future advanced things)?

- In the (small amount of) reading I've done about the git implementation, it seems that the files and sizes are stored in a tracked .gitattributes file. I think a tracked file for this would be extremely useful for consistency across developers, but this kind of touches on the tracked hgrc file proposal from a few months back.

- The git client can specify file patterns, not just sizes.

- The largefiles extension has a cache directory in the local repo, but also a system-wide one. We should probably implement a system-wide cache too, so that multiple clones don't have to refetch the files from the server.

- Jun mentioned other missing features, like SSH authentication, gc, etc.

The code corresponds to c0492b73c7ef in hg-experimental. [4] The only tweaks are to load the extension in the tests with 'lfs=' instead of 'lfs=$TESTDIR/../hgext3rd/lfs', change the import in the *.py test to hgext (from hgext3rd), add the 'testedwith' declaration, and mark it experimental for now. The infinite-push, p4fastimport, and remotefilelog tests were left behind. The devel-warnings for unregistered config options are not corrected yet, nor are the import check warnings.

[1] https://www.mercurial-scm.org/pipermail/mercurial/2017-November/050699.html
[2] https://bitbucket.org/site/master/issues/3843/largefiles-support-bb-3903
[3] https://bz.mercurial-scm.org/show_bug.cgi?id=5738
[4] https://bitbucket.org/facebook/hg-experimental
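For reference, the "standin" recorded in the filelog is a standard git-lfs pointer. A minimal sketch of the payload, with a made-up oid and size:

    # Standard git-lfs pointer format (git-lfs spec v1); the oid and
    # size values below are illustrative only.
    POINTER = (
        b'version https://git-lfs.github.com/spec/v1\n'
        b'oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393\n'
        b'size 12345\n'
    )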
-