- Sep 18, 2018
Gregory Szorc authored
When catching errors in storage, we should be catching StorageError instead of RevlogError. When throwing errors related to storage, we shouldn't be using RevlogError unless we know the error stemmed from revlogs. And we only reliably know that if we're in revlog.py or are inheriting from a type defined in revlog.py.

Differential Revision: https://phab.mercurial-scm.org/D4655
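A minimal sketch of the convention, assuming a hypothetical storage object and `readblob()` helper (only `error.StorageError`, `error.RevlogError`, and `error.Abort` are real Mercurial names here):

```python
from mercurial import error

def readblob(store, path, node):
    """Read one file revision from an arbitrary storage backend."""
    try:
        # 'store.read' is a stand-in for whatever the storage API provides.
        return store.read(path, node)
    except error.StorageError:
        # Catch the general storage error here; raising or catching
        # RevlogError would only be appropriate if we knew the backend
        # was revlog-based.
        raise error.Abort(b'local storage is damaged')
```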
- Sep 06, 2018
Matt Harbison authored
I noticed a "missing" blob when pushing two repositories with common blobs to a fresh server, and then running `hg verify` as a user different from the one running the web server. When pushing the second repo, several of the blobs already existed in the user cache, so the server indicated to the client that it didn't need to upload them. That's good enough for the web server process to serve up in the future. But a different user has a different cache by default, so verify complains that `lfs.url` needs to be set, because it wants to fetch the missing blobs.

Aside from that corner case, it's better to keep all of the blobs in the repo whenever possible. Especially since the largefiles wiki says the user cache can be deleted at any time to reclaim disk space - users switching over may have the same expectations.
- Aug 25, 2018
Matt Harbison authored
- Aug 24, 2018
Matt Harbison authored
The search itself can take an extreme amount of time if there are a lot of revisions involved. I've got a local repo that took 6 minutes to push 1850 commits, and 60% of that time was spent here (there are ~70K files):

  \ 58.1% wrapper.py: extractpointers line 297: pointers = extractpointers(...
  | 57.7% wrapper.py: pointersfromctx line 352: for p in pointersfromctx(ct...
  | 57.4% wrapper.py: pointerfromctx line 397: p = pointerfromctx(ctx, f, ...
    \ 38.7% context.py: __contains__ line 368: if f not in ctx:
    | 38.7% util.py: __get__ line 82: return key in self._manifest
    | 38.7% context.py: _manifest line 1416: result = self.func(obj)
    | 38.7% manifest.py: read line 472: return self._manifestctx.re...
      \ 25.6% revlog.py: revision line 1562: text = rl.revision(self._node)
        \ 12.8% revlog.py: _chunks line 2217: bins = self._chunks(chain, ...
        | 12.0% revlog.py: decompress line 2112: ladd(decomp(buffer(data, ch...
        \ 7.8% revlog.py: checkhash line 2232: self.checkhash(text, node, ...
        | 7.8% revlog.py: hash line 2315: if node != self.hash(text, ...
        | 7.8% revlog.py: hash line 2242: return hash(text, p1, p2)
      \ 12.0% manifest.py: __init__ line 1565: self._data = manifestdict(t...
    \ 16.8% context.py: filenode line 378: if not _islfs(fctx.filelog(...
    | 15.7% util.py: __get__ line 706: return self._filelog
    | 14.8% context.py: _filelog line 1416: result = self.func(obj)
    | 14.8% localrepo.py: file line 629: return self._repo.file(self...
    | 14.8% filelog.py: __init__ line 1134: return filelog.filelog(self...
    | 14.5% revlog.py: __init__ line 24: censorable=True)
- Jul 22, 2018
Yuya Nishihara authored
I'll add a couple more functions that work on a parsed tree.

  % wc -l mercurial/fileset*.py
    559 mercurial/fileset.py
    135 mercurial/filesetlang.py
    694 total
Yuya Nishihara authored
It was added at 91aac8e6604d, but is no longer needed since a fileset expression is now compiled into an "open" matcher. See ff5b6fca1082 for details.
- Jun 09, 2018
Yuya Nishihara authored
This makes the fileset expression open to any input, so that we can just say "hg status 'set: not binary()'" to select text files including unknowns. With this and the removal of subset computation, 'set:**' becomes as fast as 'glob:**'. Further optimization will probably be possible, for example by narrowing the file tree that status is computed on. This also fixes 'subrepo()' to not ignore the current mctx.subset.

.. bc::

   The fileset expression may include untracked files by default. Use ``tracked()`` to explicitly filter out files not existing at the context revision.
- Jun 26, 2018
Augie Fackler authored
This has consistent behavior on Python 2.7, 3.6, and 3.7, and has the benefit of probably being a little faster. Test output changes are largely because `/` used to be pointlessly escaped.

Differential Revision: https://phab.mercurial-scm.org/D3842
- Jun 18, 2018
Martin von Zweigbergk authored
Differential Revision: https://phab.mercurial-scm.org/D3806
- May 31, 2018
Matt Harbison authored
There are only a handful of methods that access repo attributes that are applied in reposetup(). The `diff` test covers all of the commands that call scmutil.prefetchfiles(). Along the way, I saw that adding files and upgrading the repo format were also problems (also tested here). I don't think running `hg serve` through the commandserver is sane, but I conditionalized both the capabilities and the wsgirequest handler because it's trivially correct.

It doesn't look like there has ever been a caller of candownload(), so there's no test for that path. The upload case isn't testable, because uploadblobs() bails if there are no pointers. The requirement should be added any time pointers are introduced, and that would force the extension to be loaded specifically for the repo. This covers `debuglfsupload`, the pre-push hook (which isn't set until the repo is promoted to LFS), and uploadblobsfromrevs(), which can be called by other extensions.

I think readfromstore() and writetostore() are only reachable as a flag processor for revlog.REVIDX_EXTSTORED, and a requirement is added as soon as that is seen, so I don't think those are a problem.
- Apr 27, 2018
Matt Harbison authored
It wasn't obvious that LFS was involved from the error messages when `hg verify` fails.
- May 10, 2018
Yuya Nishihara authored
If we use pprint() as a drop-in replacement for repr(), bprefix=False is more appropriate. Let's make it the default to remove bprefix=False noise.
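For illustration, `stringutil.pprint()` takes the `bprefix` flag discussed here; the rendered output shown in the comments is approximate:

```python
from mercurial.utils import stringutil

# With bprefix=False (the new default), byte strings render the way repr()
# shows them on Python 2, which keeps test/debug output quiet:
stringutil.pprint(b'foo')                 # roughly "'foo'"

# Passing bprefix=True restores the explicit prefix when it matters:
stringutil.pprint(b'foo', bprefix=True)   # roughly "b'foo'"
```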
- Apr 27, 2018
Augie Fackler authored
Differential Revision: https://phab.mercurial-scm.org/D3513
- Apr 17, 2018
Gregory Szorc authored
We have wireprotov2server, wireprotov1peer, and wireprotov2peer. wireproto only contains server functionality. So it makes sense to rename it to wireprotov1server so the naming aligns with everything else.

Differential Revision: https://phab.mercurial-scm.org/D3400
- Apr 06, 2018
Matt Harbison authored
The client copies all of these properties under 'header' to the HTTP Headers of the subsequent GET or PUT request that it performs. That allows the Basic HTTP authentication used to authorize the Batch API request to also authorize the upload/download action.

There's likely further work to do here. There's an 'authenticated' boolean key in the Batch API response that can be set, and there is an 'LFS-Authenticate' header that is used instead of 'WWW-Authenticate'[1]. (We likely need to support both, since some hosting solutions are likely to only respond with the latter.) In any event, this works with SCM Manager, so there is real world benefit.

I'm limiting the headers returned to 'Basic', because that's all the lfs spec calls out. In practice, I've seen gitbucket emit custom header content[2].

[1] https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md#response-errors
[2] https://github.com/gitbucket/gitbucket/blob/35655f33c7713f08515ed640ece0948acd6d6168/src/main/scala/gitbucket/core/servlet/GitRepositoryServlet.scala#L119
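Roughly what a Batch API object with a per-action header looks like (all values here are made up; the field names follow the git-lfs batch specification linked above):

```python
# Illustrative Batch API response object.  Whatever appears under 'header'
# is replayed verbatim as HTTP headers on the follow-up GET (download) or
# PUT (upload) request, which is how the Basic credentials carry over to
# the transfer action.
batchobject = {
    'oid': '31cf...',
    'size': 12,
    'actions': {
        'download': {
            'href': 'https://example.com/repo/.hg/lfs/objects/31cf...',
            'header': {
                'Authorization': 'Basic dXNlcjpwYXNz',
            },
        },
    },
}
```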
- Apr 15, 2018
Matt Harbison authored
At this point, I think all of the core commands are prefetching, except grep and verify. Verify will need some special handling, in case the revlogs are corrupt. Grep has an issue that still needs to be debugged, but we probably need to give the behavior some thought too- it would be a shame to have to download everything in order to search. I think the benefit of having this info for all commands outweighs extra printing in a command that is arguably not well behaved in this context anyway.
- Apr 14, 2018
Matt Harbison authored
The remainder of the commands that need prefetch deal with multiple revisions. I initially coded this as a separate hook, but then it needed a list of files to handle `diff` and `grep`, so it didn't seem worth keeping them separate. Not every matcher will emit bad file messages (some are built from a list of files that are known to exist). But it seems better to filter this in one place than to push this on either each caller or each hook implementation.
- Apr 13, 2018
Matt Harbison authored
I'm not bothering with validating PUT requests (for now), because the spec doesn't explicitly call out a Content-Type (though the example transcript does use the sensible 'application/octet-stream').
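A sketch of the kind of validation this implies for the Batch API endpoint (function name and error handling are illustrative; the media type comes from the git-lfs spec):

```python
def checkbatchcontenttype(headers):
    # The Batch API is called with the git-lfs JSON media type; blob PUTs
    # are deliberately left unvalidated for now, since the spec only shows
    # 'application/octet-stream' in an example transcript.
    ctype = headers.get('Content-Type', '')
    if not ctype.startswith('application/vnd.git-lfs+json'):
        raise ValueError('unexpected Content-Type: %s' % ctype)
```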
- Feb 25, 2018
Matt Harbison authored
The aborts weren't killing the server, but this seems cleaner. I'm not sure if it matters to handle the remaining IOError in the test like this, for consistency. The error code still feels wrong (especially if the client is trying to download a corrupt blob) but I don't see anything better in the RFCs, and this is already used elsewhere because the Batch API spec specifically mentioned this as a "Validation Error".
- Apr 13, 2018
Matt Harbison authored
This wasn't appending the '.git/info/lfs' path in this case.
Matt Harbison authored
Reporting a 500 and then not leaving any traces on the server seems like a recipe for frustration. There's similar log writing in hgweb.server.do_POST(). That doesn't write directly to the wsgi.errors object, so it doesn't seem worth trying to refactor. It does seem like earlier stack frames are missing for some reason.
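A minimal sketch of logging to the WSGI error stream (assuming direct access to the environ; the real handler's formatting differs):

```python
import traceback

def logserverexception(environ):
    # wsgi.errors is the server's error stream (e.g. the httpd error log);
    # writing the traceback there means a 500 leaves evidence on the server
    # instead of only surfacing on the client.
    errorlog = environ['wsgi.errors']
    errorlog.write('exception while handling an LFS request:\n')
    errorlog.write(traceback.format_exc())
```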
Matt Harbison authored
- Apr 11, 2018
Matt Harbison authored
While here, I also checked the lfs.url config directly instead of testing the scheme, as requested by Yuya.
- Apr 08, 2018
Matt Harbison authored
Unlike pull, the blobs are uploaded within the exchange.push() window, so simply wrap it and swap in a properly configured remote store. The '_subtoppath' field shouldn't be available during this window, but give the passed path priority for clarity. At one point I hit an AttributeError in one of the convert tests when trying to save the original remote blobstore when the swap was run unconditionally. I wrapped it in a util.safehasattr(), but then today I wasn't able to reproduce it. But now the whole thing is tucked under the requirement guard because without the requirement, there are no blobs in the repo, even if the extension is loaded.
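The wrapping described here looks roughly like the following; `extensions.wrapfunction` and `exchange.push` are real APIs, while the attribute and helper used to hold and build the remote store are assumptions made for the sketch:

```python
from mercurial import exchange, extensions

def _remotestoreforurl(repo, url):
    """Hypothetical helper: build a blob store pointed at the push target."""

def pushwrapper(orig, repo, remote, *args, **kwargs):
    # Swap in a remote blob store configured for the destination for the
    # duration of the push, then restore whatever was there before.
    saved = getattr(repo.svfs, 'lfsremoteblobstore', None)
    try:
        repo.svfs.lfsremoteblobstore = _remotestoreforurl(repo, remote.url())
        return orig(repo, remote, *args, **kwargs)
    finally:
        repo.svfs.lfsremoteblobstore = saved

def extsetup(ui):
    extensions.wrapfunction(exchange, 'push', pushwrapper)
```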
Matt Harbison authored
I don't see any easier way to do this because the update part of `hg pull -u` happens outside exchange.pull(), and commands.postincoming() doesn't take a path. So (ab)use the mechanism used by subrepos to redirect where subrepos are pulled from when an explicit path is given. As a bonus, this should allow lfs blobs to be pulled into a subrepo when it is checked out. An explicit push path can be handled within exchange.push(). That can be done next, outside of this dirty hack.
- Apr 11, 2018
Matt Harbison authored
The previous code worked on Windows, but not on Unix, and a pending patch's test failed. The url being used was something like "/tmp/.../client1/null://", courtesy of ui.configpath().

Looking at the doc comment, this seems like it's maybe not the right function to call (why should a relative cache path be expanded relative to the repo root or config file?), but largefiles has been using it since 8b8dd13295db (Oct 2011). It was introduced in 1b591f9b7fd2 (Jan 2011) without comment or callers. A grep over the whole history shows that only largefiles used it until lfs and infinitepush came along recently.

It looks like if the `if not os.path.isabs(v) or "://" not in v` in configpath() is changed to an 'and', both Linux and Windows are happy. I'm guessing that "://" is to pick off URLs, so that seems reasonable. But I'm not sure why it isn't explicitly "file://", and I thought that "file://foo" is relative anyway. (At least, there are doctests for file:///tmp in util.url.)

There is no mention of this setting in the help, but it is referenced on the wiki page for largefiles. (There's no mention that this is intended to be a URL, and the example uses an absolute path.) I don't want this blocking the rest of the lfs server discovery stuff. It was also wrong to allow a file:// URL here, but not in largefiles.
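The change to the condition, in isolation (a sketch of the relevant check inside ui.configpath(), not the full method):

```python
import os.path

def _needsexpansion(v):
    # Old check: `not os.path.isabs(v) or "://" not in v`, which expanded a
    # value like "null://" relative to the config file on Unix.  With 'and',
    # a value is only expanded when it is neither absolute nor a URL.
    return not os.path.isabs(v) and "://" not in v
```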
- Apr 08, 2018
Matt Harbison authored
If `lfs.url` is specified, it takes precedence. However, now that we support serving blobs via hgweb, we shouldn't *require* this setting. Less configuration is better (things will work out of the box once this is sorted out), and git has similar functionality.

This is not a complete solution: it isn't able to infer the blob store from an explicitly supplied path, and it should consider `paths.default-push` for push. The pull solution for that is a bit hacky, and this alone is an improvement for the vast majority of cases.

Even though there are only a handful of references to the saved remote store, the location of them makes things complicated:

  1) downloading files on demand in the revlog flag processor
  2) copying to readonlyvfs with bundlerepo
  3) downloading in the file prefetch hook
  4) the canupload()/skipdownload() checks
  5) uploading blobs

Since revlog doesn't have a repo or ui reference, we can't avoid creating a remote store when the extension is loaded. While the long term goal is to make sure the prefetch hook is invoked early for every command for efficiency, this handling in the flag processor is needed as a last ditch fetch. In order to support the clone command, the remote store needs to be created later than when the extension loads, since `paths.default` isn't set until just before the files are checked out. Therefore, this patch changes the prefetch hook to ignore the saved reference, and build a new one.

The canupload()/skipdownload() checks simply check if the stored instance is a `_nullremote`. Since this can only be set via `lfs.url` (which is reflected in the saved reference), checking only the instance created when the extension loaded is fine.

The blob uploading function is called from several places:

  1) a prepush hook
  2) when writing a new bundle
  3) from infinitepush

The prepush hook gets an exchange.pushop, so it has a path to where the push is going. The bundle writer and infinitepush don't. Further, bundle creation for things like strip and amend is causing blobs to be uploaded. This seems wrong, but I don't want to sidetrack this patch by sorting that out, so punt on trying to handle explicit push paths or `paths.default-push`. I also think that sending blobs to a remote store when pushing to a local repo is wrong. This functionality predates the usercache, so perhaps that's the reason for it. I've got some patches floating around to stop sending blobs remotely in this case, and instead write directly to the other repo's blob store. But the tests for corruption handling weren't happy with this change, and I don't have time to rewrite them. So exclude filesystem based paths from this for now.

I don't think there's much of a chance to implement `paths.remote:lfsurl` style configs, given how early these are resolved vs how late the remote store is created. But git has it, so I threw a TODO in there, in case anyone has ideas.

I have no idea why this is now doing http auth twice when it wasn't before. I don't think the original blobstore's url is ever being used in these cases.
Matt Harbison authored
While the usercache is important for real world uses, I've been tripped up more than a couple of times by it in tests- thinking a file was being downloaded, but it was simply linked from the local cache. The syntax for setting it is the same as for setting a null remote endpoint, and like that endpoint, is left undocumented. This may or may not be a useful feature in the real world (I'd expect any sane filesystem to support hardlinks at this point).
- Apr 06, 2018
Gregory Szorc authored
filelog.parsemeta() and filelog.packmeta() are used to decode and encode metadata for file copies and censor. An upcoming commit will move the core logic for censoring revlogs into revlog.py. This would create a cycle between revlog.py and filelog.py. So we move these metadata functions to revlog.py.

.. api::

   filelog.parsemeta() and filelog.packmeta() have been moved to the revlog module.

Differential Revision: https://phab.mercurial-scm.org/D3150
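For context, the metadata these helpers handle is a small header embedded in the file revision text and delimited by '\x01\n' markers; roughly (contents are illustrative):

```python
# A file revision carrying copy metadata looks roughly like this;
# parsemeta() splits the header into a dict (plus the header's length),
# and packmeta() does the reverse.
revisiontext = (
    b'\x01\n'
    b'copy: oldname.txt\n'
    b'copyrev: 0123456789abcdef0123456789abcdef01234567\n'
    b'\x01\n'
    b'actual file contents follow here\n'
)
```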
- Apr 01, 2018
Matt Harbison authored
Since the dispatching code only checks the beginning of the string, this enforces that there's only one more path component.
Matt Harbison authored
- Mar 31, 2018
Matt Harbison authored
The use case here is the server admin may want to store the blobs elsewhere. As it stands now, the `lfs.url` config on the client side is all that enforces this (the web.allow-* permissions aren't able to block LFS blobs without also blocking normal hg traffic). The real solution to this is to implement the 'verify' action on the client and server, but that's not a near term goal. Whether this is useful in its own right, and should be promoted out of experimental at some point is TBD. Since the other two tests that deal with LFS and `hg serve` are already complex and have #testcases, this seems like a good time to start a new test dedicated to access checks against the server. Instead of conditionally wrapping the wire protocol handler, I put this in the handler because I'd still like to bring the annotations in from the evolve extension in order to set up the wrapping. The 400 status probably isn't great, but that's what it would be for existing `hg serve` instances without support for serving blobs.
- Feb 25, 2018
- Feb 26, 2018
Matt Harbison authored
Two things here. First, the previous message included a snippet of JSON, which tends to be long (and in the case of lfs-test-server, has no error message). Instead, give a concise message where possible, and leave the JSON to a debug output. Second, the server can signal issues other than a missing individual file. This change shows a corrupt file, but I'm debating letting the corrupt file get downloaded, because 1) the error code doesn't really fit, and 2) having it locally makes forensics easier. Maybe need a config knob for that.
- Mar 31, 2018
Matt Harbison authored
- Mar 30, 2018
Boris Feld authored
We will introduce a new bundlespec for stream bundles, which will influence the contentopts.

Differential Revision: https://phab.mercurial-scm.org/D1952
- Mar 17, 2018
Matt Harbison authored
Matt Harbison authored
Matt Harbison authored
The recent hgweb refactoring yielded a clean point to wrap a function that could handle this, so I moved the routing for this out of the core. While not an hg wire protocol, this seems logically close enough. For now, these handlers do nothing other than check permissions.

The protocol requires support for PUT requests, so that has been added to the core, and funnels into the same handler as GET and POST. The permission checking code was assuming that anything not checking 'pull' or None ops should be using POST. But that breaks the upload check if it checks 'push'. So I invented a new 'upload' permission, and used it to avoid the mandate to POST. A function wrap point could be added, but security code should probably stay grouped together. Given that anything not 'pull' or None was requiring POST, the comment on hgweb.common.permhooks is probably wrong - there is no 'read'.

The rationale for the URIs is that the spec for the Batch API[1] defines the URL as the LFS server url + '/objects/batch'. The default git URLs are:

  Git remote: https://git-server.com/foo/bar
  LFS server: https://git-server.com/foo/bar.git/info/lfs
  Batch API:  https://git-server.com/foo/bar.git/info/lfs/objects/batch

'.git/' seems like it's not something a user would normally track. If we adhere to how git defines the URLs, then the hg-git extension should be able to talk to a git based server without any additional work. The URI for the transfer requests starts with '.hg/' to ensure that there are no conflicts with tracked files. Since these are handed out by the Batch API, we can change this at any point in the future. (Specifically, it might be a good idea to use something under the proposed /api/ namespace.) In any case, no files are stored at these locations in the repository directory.

I started a new module for this because it seems like a good idea to keep all of the security sensitive server side code together.

There's also an issue with `hg verify` in that it will want to download *all* blobs in order to run. Sadly, there's no way in the protocol to ask the server to verify the content of a blob it may have. (The verify action is for storing files on a 3rd party server, and then informing the LFS server when that completes.) So we may end up implementing a custom transfer adapter that simply indicates if the blobs are valid, and fall back to basic transfers for non-hg servers. In other words, this code is likely to get bigger before this is made non-experimental.

[1] https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md
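A rough sketch of the URL dispatch those choices imply (the function and its surroundings are illustrative, not the extension's actual code; the path constants come from the description above):

```python
def islfsrequest(urlpath):
    # The Batch API sits at the git-lfs location, so git-oriented clients
    # (e.g. via hg-git) can find it without extra configuration.
    if urlpath == b'.git/info/lfs/objects/batch':
        return True
    # Transfer URLs are handed out by the Batch API itself and start with
    # '.hg/', so they can never collide with a tracked file.
    return urlpath.startswith(b'.hg/')
```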
- Mar 15, 2018
Matt Harbison authored
The trailing space looks weird when conditionalizing the line. The commas shouldn't be necessary because of the indenting. The `lfs-test-server` isn't sending all of the same items (notably, the "transfer" attribute is missing), so having the commas means more lines need to be conditionalized.